When reviewing my own old code, I noticed several monstrosities like
this:
```go
batchResult := tx.SendBatch(ctx, batch)
defer batchResult.Close()
for range batch.Len() {
	if _, err := batchResult.Exec(); err != nil {
		return err
	}
}
return nil
```
All of them can be replaced with just this:

```go
return tx.SendBatch(ctx, batch).Close()
```
So I thought it might be a good idea to add an explicit hint to the docs.
This trick is not so obvious, after all.
For a simple query like `SELECT * FROM t WHERE id = $1`,
QueryContext() used to make 6 allocations on average.
Reduce that to 4 by avoiding two allocations that
are immediately discarded:
```go
// The variadic arguments of c.conn.Query() escape,
// so args is heap-allocated.
args := []any{databaseSQLResultFormats}
// append() reallocates args to make room, and the slice
// returned by namedValueToInterface() is immediately discarded.
args = append(args, namedValueToInterface(argsV)...)
```
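The fix, roughly, is to size the slice up front and convert each element in place, so append never reallocates and no throwaway intermediate slice is built. A sketch of the idea (buildQueryArgs and the int64 element type are illustrative, not the actual pgx code):

```go
package main

import "fmt"

// buildQueryArgs allocates args once with exact capacity and
// appends each converted element directly, avoiding both the
// append() reallocation and the temporary slice.
func buildQueryArgs(format any, argsV []int64) []any {
	args := make([]any, 0, 1+len(argsV))
	args = append(args, format)
	for _, v := range argsV {
		args = append(args, v) // convert in place instead of via a temp slice
	}
	return args
}

func main() {
	args := buildQueryArgs("binary", []int64{1, 2, 3})
	fmt.Println(len(args)) // 4
}
```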
- BeforeAcquire is now marked as deprecated and re-implemented in terms of PrepareConn
- PrepareConn now takes precedence over BeforeAcquire if both are provided
- New tests added, so both old and new behavior are tested
- One niggle: AcquireAllIdle does not return an error, so the only
  reasonable recourse when PrepareConn returns an error in that context
  is to destroy the connection. This more or less retains the spirit of
  the existing functionality without changing that method's public API,
  although an error-returning variant might be a useful addition as well.
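The precedence rule can be sketched as a small resolver. The names and simplified signatures below are hypothetical (the real hooks take a context and a *pgx.Conn); the point is only that PrepareConn wins when both are set, and the deprecated boolean hook is adapted to the error-returning shape:

```go
package main

import (
	"errors"
	"fmt"
)

// conn is a placeholder for the pool's connection type.
type conn struct{}

// resolvePrepare picks the hook the pool runs on acquire:
// PrepareConn takes precedence; otherwise the deprecated
// BeforeAcquire is wrapped so a false return becomes an error.
func resolvePrepare(prepareConn func(*conn) error, beforeAcquire func(*conn) bool) func(*conn) error {
	if prepareConn != nil {
		return prepareConn
	}
	if beforeAcquire != nil {
		return func(c *conn) error {
			if !beforeAcquire(c) {
				return errors.New("connection rejected by BeforeAcquire")
			}
			return nil
		}
	}
	return nil
}

func main() {
	newHook := func(*conn) error { return nil }
	oldHook := func(*conn) bool { return false }

	hook := resolvePrepare(newHook, oldHook)
	fmt.Println(hook(&conn{}) == nil) // true: PrepareConn took precedence
}
```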
Previously, PlanScan used a cache to improve performance. However, the
cache could get confused in certain cases. For example, the following
would fail:
```go
m := pgtype.NewMap()
var err error
var tags any
err = m.Scan(pgtype.TextArrayOID, pgx.TextFormatCode, []byte("{foo,bar,baz}"), &tags)
require.NoError(t, err)
var cells [][]string
err = m.Scan(pgtype.TextArrayOID, pgx.TextFormatCode, []byte("{{foo,bar},{baz,quz}}"), &cells)
require.NoError(t, err)
```
This commit removes the memoization and adds a test to ensure that this
case works.
The benchmarks were also updated to include an array of strings to
ensure this path is benchmarked. As it turned out, there was next to no
performance difference between the cached and non-cached versions.
There may be a performance impact in certain complicated cases, but I
have not encountered any. If performance issues do surface, we can
optimize the narrower case rather than adding memoization everywhere.