Previously, Get implicitly allowed returning a reference to an internal
value (e.g. a []byte) but AssignTo was documented as requiring a deep
copy.
This inconsistency meant that either Get was unsafe or the deep copy in
AssignTo was superfluous. In addition, Scan into a []byte skips going
through Bytea and returns a []byte of the unparsed bytes directly, i.e.
a reference, not a copy.
Standardize on allowing Get and AssignTo to return internal references,
but require that a Value never mutate internal values - only replace
them.
This will be useful for array and composite types that may have to
support elements that may not support binary encoding.
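As a rough illustration of that contract (the type below is made up for
illustration and is not part of pgtype):

    package sketch

    // ByteaLike is an illustrative value, not a real pgtype type.
    type ByteaLike struct {
        buf []byte
    }

    // Get may return a reference to the internal value.
    func (v *ByteaLike) Get() interface{} {
        return v.buf
    }

    // AssignTo may also hand out the internal reference rather than a
    // deep copy.
    func (v *ByteaLike) AssignTo(dst *[]byte) {
        *dst = v.buf
    }

    // Set must never write into the old buffer in place; it replaces it,
    // so references previously returned by Get or AssignTo stay valid.
    func (v *ByteaLike) Set(src []byte) {
        buf := make([]byte, len(src))
        copy(buf, src)
        v.buf = buf
    }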
It also is slightly more convenient for text-ish types to have a default
format of text.
PlanScan used to require the exact same value be used every time. While
this was great for performance, on further consideration I think it is
too much of a potential foot-gun.
This moves back in the other direction. A plan tolerates a change in
destination. It even detects a change in destination type and falls
back to a new plan.
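Roughly, the fallback behaves like the sketch below. The wrapper type
and the simplified ScanPlan signature are illustrative only, not the
actual pgtype API:

    package sketch

    import "reflect"

    // ScanPlan mirrors the shape of a compiled scan plan (signature
    // simplified for illustration).
    type ScanPlan interface {
        Scan(src []byte, dst interface{}) error
    }

    // PlanScanFunc stands in for whatever builds a fresh plan for a
    // destination.
    type PlanScanFunc func(dst interface{}) ScanPlan

    // cachedPlan tolerates a change in destination: if the destination
    // type differs from the one the plan was built for, it falls back to
    // a new plan instead of failing.
    type cachedPlan struct {
        dstType  reflect.Type
        plan     ScanPlan
        planScan PlanScanFunc
    }

    func (cp *cachedPlan) Scan(src []byte, dst interface{}) error {
        if reflect.TypeOf(dst) != cp.dstType {
            // Destination type changed: rebuild the plan.
            cp.dstType = reflect.TypeOf(dst)
            cp.plan = cp.planScan(dst)
        }
        return cp.plan.Scan(src, dst)
    }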
Perfectly matched hot scan paths (e.g. PG int4 to Go int32) are still
much faster than they were before this set of optimizations. The first
scan of a destination that uses a decoder is faster due to not
allocating. It's a little bit slower on subsequent runs than before
this set of optimizations. But it is preferable to optimize for the
most common scan targets (e.g. *int32, *int64, *string) over generic
decoder destinations.
In addition, this frees pgx.connRows.Scan from having to check that
the destination is unchanged.
This allows registering a mapping of a Go type to a PostgreSQL type
name. If the OID of a value to be encoded or decoded is unknown, this
additional mapping will be used to determine a suitable data type.
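Assuming this refers to a ConnInfo method along the lines of
RegisterDefaultPgType, usage might look like this (the Point type is an
arbitrary application type used for illustration):

    package sketch

    import "github.com/jackc/pgtype"

    // Point is an application-defined Go type mapped to the PostgreSQL
    // "point" type by name, so it can still be encoded or decoded when
    // its OID is not known up front.
    type Point struct {
        X, Y float64
    }

    func registerPointMapping() *pgtype.ConnInfo {
        ci := pgtype.NewConnInfo()
        ci.RegisterDefaultPgType(Point{}, "point")
        ci.RegisterDefaultPgType(&Point{}, "point")
        return ci
    }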
Instead of hardcoding specific types and skipping type assertions based
on that, only check whether a destination is a sql.Scanner after a
failed AssignTo.
This is slightly slower in the non-decoder case and *very* slightly
faster in the decoder case. However, this approach is cleaner and has the
potential for further optimizations.
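In sketch form, the scan path becomes something like the following. The
helper and interface below are simplified stand-ins, not the real
implementation:

    package sketch

    import (
        "database/sql"
        "fmt"
    )

    // assigner stands in for a decoded pgtype value that supports
    // AssignTo.
    type assigner interface {
        AssignTo(dst interface{}) error
    }

    // scanIntoDst tries AssignTo first and only checks for sql.Scanner
    // if that fails, instead of special-casing particular destination
    // types up front.
    func scanIntoDst(v assigner, dst interface{}) error {
        err := v.AssignTo(dst)
        if err == nil {
            return nil
        }

        // Fallback: only now check whether the destination is a
        // sql.Scanner.
        if scanner, ok := dst.(sql.Scanner); ok {
            return scanner.Scan(v)
        }

        return fmt.Errorf("cannot assign to %T: %w", dst, err)
    }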
Specifying behavior for Status Null and Undefined is incorrect because
a Value is not required to have a Status. In addition, standard
behavior is to return nil, not pgtype.Null, when the Status is
pgtype.Null.
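For example, a Get that follows this behavior would look roughly like
the sketch below (the type is illustrative, not an actual pgtype value):

    package sketch

    import "github.com/jackc/pgtype"

    // Int4Like is illustrative only.
    type Int4Like struct {
        Int    int32
        Status pgtype.Status
    }

    // Get reports a SQL NULL as nil rather than as pgtype.Null itself.
    func (v Int4Like) Get() interface{} {
        switch v.Status {
        case pgtype.Present:
            return v.Int
        case pgtype.Null:
            return nil
        default:
            return v.Status
        }
    }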
This comes at a small expense to scanning into a type that implements
TextDecoder or BinaryDecoder but I think it is a good trade.
Before:
BenchmarkConnInfoScanInt4IntoBinaryDecoder-16 88181061 12.4 ns/op 0 B/op 0 allocs/op
BenchmarkConnInfoScanInt4IntoGoInt32-16 30402768 36.8 ns/op 0 B/op 0 allocs/op
After:
BenchmarkConnInfoScanInt4IntoBinaryDecoder-16 79859755 14.6 ns/op 0 B/op 0 allocs/op
BenchmarkConnInfoScanInt4IntoGoInt32-16 38969991 30.0 ns/op 0 B/op 0 allocs/op
The Composite() function returns a private type, which should
be registered with ConnInfo.RegisterDataType for the composite
type's OID.
All subsequent interaction with composite types is done via the
Row(...) function. Its return value can either be passed as a query
argument to build a SQL composite value out of individual fields, or
passed to Scan to read a SQL composite value back.
When passed to Scan, Row() should have a first argument of type
*bool to flag NULL values returned from the query.
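A sketch of the intended usage follows. The exact signatures of
Composite and Row are assumptions based on the description above, not
the verified API:

    package sketch

    import (
        "context"

        "github.com/jackc/pgtype"
        "github.com/jackc/pgx/v4"
    )

    // compositeExample shows the described flow end to end. The calls to
    // pgtype.Composite and pgtype.Row use assumed signatures; only
    // RegisterDataType, Exec, QueryRow, and Scan are known pgx/pgtype
    // APIs.
    func compositeExample(ctx context.Context, conn *pgx.Conn, myTypeOID uint32) error {
        // Register the private type returned by Composite() under the
        // composite type's OID.
        conn.ConnInfo().RegisterDataType(pgtype.DataType{
            Value: pgtype.Composite(&pgtype.Int4{}, &pgtype.Text{}), // assumed signature
            Name:  "mytype",
            OID:   myTypeOID,
        })

        // As a query argument, Row(...) builds a SQL composite value out
        // of individual fields.
        if _, err := conn.Exec(ctx, "insert into t (c) values ($1)", pgtype.Row(1, "a")); err != nil {
            return err
        }

        // When scanning, the first argument to Row is a *bool that flags
        // a NULL composite returned by the query.
        var isNull bool
        var a int32
        var b string
        return conn.QueryRow(ctx, "select c from t").Scan(pgtype.Row(&isNull, &a, &b))
    }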
It was a mistake to use it in other contexts. This made interop
difficult between packages that depended on pgtype, such as pgx, and
packages that did not, like pgconn and pgproto3. In particular this was
awkward for prepared statements.
Because pgx depends on pgtype and the tests for pgtype depend on pgx
this change will require a couple of back-and-forth commits to get the
go.mod dependencies correct.
Instead of needing to introspect the database on connection, preload the
standard OID / type map. Types from extensions (like hstore) and custom
types can be registered by the application developer. Otherwise, they
will be treated as strings.
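For example, registering the hstore extension type might look something
like the sketch below. The regtype-based OID lookup is just one way to
find the extension type's OID:

    package sketch

    import (
        "context"

        "github.com/jackc/pgtype"
        "github.com/jackc/pgx/v4"
    )

    // registerHstore looks up the hstore OID once at connection time (it
    // is not in the preloaded standard map because extension OIDs vary
    // per database) and registers pgtype.Hstore for it. Without this,
    // hstore values would be treated as strings.
    func registerHstore(ctx context.Context, conn *pgx.Conn) error {
        var hstoreOID uint32
        err := conn.QueryRow(ctx, "select 'hstore'::regtype::oid").Scan(&hstoreOID)
        if err != nil {
            return err
        }

        conn.ConnInfo().RegisterDataType(pgtype.DataType{
            Value: &pgtype.Hstore{},
            Name:  "hstore",
            OID:   hstoreOID,
        })
        return nil
    }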