CopyFrom requires that all values are encoded in the binary format. It
already tried to parse strings into values that could then be encoded
into the binary format, but it did not handle types that can be encoded
as text and then parsed and converted to binary. It now does.
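For illustration, a rough sketch of the user-visible effect (the widgets table, its columns, and the established conn are placeholders, not part of the original change): values that arrive in a non-binary representation, such as Go strings, are parsed and converted so they can be sent in the binary COPY format.

rows := [][]interface{}{
    {"sprocket", "42"},   // "42" is a string; it is parsed before being encoded as binary
    {"flange", int32(7)}, // already a binary-encodable value
}
copyCount, err := conn.CopyFrom(
    context.Background(),
    pgx.Identifier{"widgets"},
    []string{"name", "quantity"},
    pgx.CopyFromRows(rows),
)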
The context timeouts for tests are designed to give a better error
message when something hangs rather than the test just timing out.
Unfortunately, the potato CI frequently has some test or another
randomly take a long time. While the increased times are somewhat less
than optimal on a real computer, hopefully this will solve the
flickering CI.
Tests should time out in a reasonable time if something is stuck. In
particular this is important when testing deadlock conditions such as
can occur with the copy protocol if both the client and the server are
blocked writing until the other side does a read.
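A minimal sketch of the pattern, with illustrative names rather than the library's actual test code: the deadline comes from the test's own context, so a hang surfaces as a clear context error.

package example

import (
    "context"
    "testing"
    "time"
)

// TestCopyFromDoesNotDeadlock bounds the whole operation with a context
// deadline so a stuck copy fails with a context error instead of the test
// binary being killed by the test runner's overall timeout.
func TestCopyFromDoesNotDeadlock(t *testing.T) {
    ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
    defer cancel()

    if err := runCopy(ctx); err != nil {
        t.Fatalf("copy failed (or hung until the deadline): %v", err)
    }
}

// runCopy is a placeholder for code that performs a COPY and honors ctx.
func runCopy(ctx context.Context) error {
    select {
    case <-time.After(10 * time.Millisecond): // simulated work
        return nil
    case <-ctx.Done():
        return ctx.Err()
    }
}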
CopyFrom had to create a prepared statement to get the OIDs of the data
types that were going to be copied into the table. Every COPY operation
required an extra round trip to retrieve the type information. There
was no way to customize this behavior.
By leveraging the QueryExecMode feature, as in `Conn.Query`, users can
specify whether they want to cache the prepared statements, prepare them
on every request (the old behavior), or bypass prepared statements
entirely and rely on the pgtype.Map to get the type information.
The `QueryExecMode` behaves exactly as it does in `Conn.Query` with
respect to how the data type OIDs are fetched, meaning that:
- `QueryExecModeCacheStatement`: caches the statement.
- `QueryExecModeCacheDescribe`: caches the statement description and
assumes it does not change.
- `QueryExecModeDescribeExec`: gets the statement description on every
execution. This matches the old behavior of `CopyFrom`.
- `QueryExecModeExec` and `QueryExecModeSimpleProtocol`: maintain the
same behavior as before, which matches `QueryExecModeDescribeExec`; the
statement description is still fetched on every execution.
The `QueryExecMode` can only be set via
`ConnConfig.DefaultQueryExecMode`; unlike `Conn.Query`, there is no
support for specifying the `QueryExecMode` via optional arguments
in the function signature.
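A minimal sketch of configuring this (pgx v5 API; the DATABASE_URL, people table, and columns are placeholders):

package main

import (
    "context"
    "log"
    "os"

    "github.com/jackc/pgx/v5"
)

func main() {
    cfg, err := pgx.ParseConfig(os.Getenv("DATABASE_URL"))
    if err != nil {
        log.Fatal(err)
    }
    // Cache the statement description so repeated CopyFrom calls skip the
    // extra round trip that fetches the destination column OIDs.
    cfg.DefaultQueryExecMode = pgx.QueryExecModeCacheDescribe

    ctx := context.Background()
    conn, err := pgx.ConnectConfig(ctx, cfg)
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close(ctx)

    rows := [][]any{{"John", "Smith", int32(36)}}
    if _, err := conn.CopyFrom(ctx,
        pgx.Identifier{"people"},
        []string{"first_name", "last_name", "age"},
        pgx.CopyFromRows(rows),
    ); err != nil {
        log.Fatal(err)
    }
}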
CopyFromRows can often be inconvenient to use, because you would need to
convert a typed slice to an [][]interface{}. Similarly, implementing a
custom CopyFromSource is too verbose for one-off uses.
Add CopyFromSlice, which makes it easier to adapt a slice to a
CopyFromSource. Example:
copyCount, err := conn.CopyFrom(
    context.Background(),
    pgx.Identifier{"people"},
    []string{"first_name", "last_name", "age"},
    pgx.CopyFromSlice(len(rows), func(i int) ([]interface{}, error) {
        // rows is the typed slice being copied; each call supplies one row's values.
        user := rows[i]
        return []interface{}{user.FirstName, user.LastName, user.Age}, nil
    }),
)
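For contrast, a sketch of the same copy written against CopyFromRows (same hypothetical rows slice, people table, and conn as above); the entire [][]interface{} has to be built up front:

data := make([][]interface{}, len(rows))
for i, user := range rows {
    data[i] = []interface{}{user.FirstName, user.LastName, user.Age}
}
copyCount, err := conn.CopyFrom(
    context.Background(),
    pgx.Identifier{"people"},
    []string{"first_name", "last_name", "age"},
    pgx.CopyFromRows(data),
)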
In case of an error it was possible for the goroutine that builds the
copy stream to still be running after CopyFrom returned. Since that
goroutine uses the connection's ConnInfo data types to encode the copy
data it was possible for those types to be concurrently used in an
unsafe fashion.
CopyFrom will no longer return until that goroutine has completed.
Because reading a record type requires the decoder to be able to look up the
OID-to-type mapping, and because types such as hstore have OIDs that are not
fixed between different PostgreSQL servers, it was necessary to restructure
the pgtype system so that all encoders and decoders take a *ConnInfo that
includes OID/name/type information.
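As an illustration of the resulting shape, a toy type whose encode/decode methods receive the *ConnInfo (the type is invented for this example, and the import path shown is the standalone github.com/jackc/pgtype module, which differs between pgx releases):

package example

import "github.com/jackc/pgtype"

// rawText is a toy value type used only to show the method signatures.
type rawText struct{ s string }

// EncodeBinary appends the binary wire format of the value. The *ConnInfo
// parameter carries the per-connection OID/name/type registry, which types
// such as record or hstore need because their OIDs vary between servers.
func (t rawText) EncodeBinary(ci *pgtype.ConnInfo, buf []byte) ([]byte, error) {
    return append(buf, t.s...), nil
}

// DecodeBinary parses the binary wire format, again with *ConnInfo available
// for OID lookups.
func (t *rawText) DecodeBinary(ci *pgtype.ConnInfo, src []byte) error {
    t.s = string(src)
    return nil
}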
This replaces *Conn.CopyTo. CopyTo was named incorrectly: in PostgreSQL,
COPY FROM is the command that copies from the client to the server. In
addition, CopyTo does not accept a schema-qualified table name. This
commit introduces the Identifier type, which handles multi-part names and
correctly quotes/sanitizes them. The new CopyFrom method uses this
Identifier type.
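A short illustration of the quoting Identifier performs (the names are made up; the import path shown is pgx v5's, while the original change predates versioned module paths):

package main

import (
    "fmt"

    "github.com/jackc/pgx/v5"
)

func main() {
    // A schema-qualified name containing a space, to show quoting/sanitizing.
    table := pgx.Identifier{"public", "user views"}
    fmt.Println(table.Sanitize()) // prints: "public"."user views"
}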
Conn.CopyTo is deprecated.
refs #243 and #190