This allows connection settings to be updated without having to create
a new pool. The callback is passed a copy of the pgx.ConnConfig and will
not impact existing live connections.
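For illustration, a minimal sketch of such a callback, assuming the
pgxpool BeforeConnect hook; the connection string and the
fetchCurrentPassword helper are placeholders standing in for whatever
mechanism rotates credentials:

package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v4"
	"github.com/jackc/pgx/v4/pgxpool"
)

// fetchCurrentPassword is a hypothetical helper standing in for a
// secrets manager, token service, or similar credential source.
func fetchCurrentPassword() string {
	return "s3cr3t"
}

func main() {
	ctx := context.Background()

	config, err := pgxpool.ParseConfig("postgres://app@localhost:5432/example")
	if err != nil {
		log.Fatal(err)
	}

	// The callback receives a copy of the pgx.ConnConfig for each new
	// connection, so updating it here does not affect live connections.
	config.BeforeConnect = func(ctx context.Context, cc *pgx.ConnConfig) error {
		cc.Password = fetchCurrentPassword()
		return nil
	}

	pool, err := pgxpool.ConnectConfig(ctx, config)
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()
}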
CopyFromRows can be inconvenient to use, because you would need to
convert a typed slice to an [][]interface{}. Similarly, implementing a
custom CopyFromSource is too verbose for one-off uses.
Add CopyFromSlice, which makes it easier to convert a slice to a
CopyFromSource. Example:
copyCount, err := conn.CopyFrom(
	context.Background(),
	pgx.Identifier{"people"},
	[]string{"first_name", "last_name", "age"},
	pgx.CopyFromSlice(len(rows), func(i int) ([]interface{}, error) {
		return []interface{}{rows[i].FirstName, rows[i].LastName, rows[i].Age}, nil
	}),
)
This patch fixes jackc/pgx#841. The meat of the fix lives
in [a PR to the pgconn repo][1]. This change just checks
for errors after executing a prepared statement and informs
the underlying stmtcache about them so that it can properly
clean up. We don't try to get fancy with retries or anything like that;
we just return the error and allow the application to handle it.
I had to make [some][1] [changes][2] to the jackc/pgconn package as well
as to this package.
Fixes #841
[1]: https://github.com/jackc/pgconn/pull/56
[2]: https://github.com/jackc/pgconn/pull/55
Previously, the Scan documentation stated that scanning into a []byte
will skip the decoding process and directly copy the raw bytes received
from PostgreSQL.
This has not been true for at least 2 months. It is also undesirable
behavior in some cases such as a binary formatted jsonb. In that case
the '1' prefix needs to be stripped to have valid JSON. If the raw
bytes are desired, this can easily be accomplished by scanning into
pgtype.GenericBinary or using Rows.RawValues.
Given that the new behavior is superior and has been in place for a
significant amount of time, I have decided to document it rather than
revert to the old behavior.
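For reference, a minimal sketch contrasting a decoded []byte scan with
Rows.RawValues; the connection string and the jsonb literal are
placeholders:

package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v4"
)

func main() {
	ctx := context.Background()

	conn, err := pgx.Connect(ctx, "postgres://localhost:5432/example")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// Scanning into []byte goes through the normal decoding path, so a
	// jsonb column comes back as valid JSON without any format prefix.
	var doc []byte
	if err := conn.QueryRow(ctx, `select '{"a": 1}'::jsonb`).Scan(&doc); err != nil {
		log.Fatal(err)
	}
	log.Printf("decoded jsonb: %s", doc)

	// To get the raw bytes as sent by PostgreSQL, use Rows.RawValues.
	rows, err := conn.Query(ctx, `select '{"a": 1}'::jsonb`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		raw := rows.RawValues()[0]
		log.Printf("raw value (format-dependent): % x", raw)
	}
	if rows.Err() != nil {
		log.Fatal(rows.Err())
	}
}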