If ScanLocation is set, the timestamps will be assumed to be in the
given location when scanning from the database.
The Codec interface is now implemented by *pgtype.TimestampCodec instead
of pgtype.TimestampCodec. This is technically a breaking change, but it
is extremely unlikely that anyone is depending on this, and if there is
downstream breakage it is trivial to fix.
https://github.com/jackc/pgx/issues/1195
https://github.com/jackc/pgx/issues/1945
If ScanLocation is set, it will be used to convert the time to the given
location when scanning from the database.
The Codec interface is now implemented by *pgtype.TimestamptzCodec
instead of pgtype.TimestamptzCodec. This is technically a breaking
change, but it is extremely unlikely that anyone is depending on this,
and if there is downstream breakage it is trivial to fix.
https://github.com/jackc/pgx/issues/1195
https://github.com/jackc/pgx/issues/1945
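A minimal sketch of registering both codecs with a ScanLocation (assuming a typical pgx v5 setup; the surrounding wiring is illustrative):

```go
package example

import (
	"time"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgtype"
)

// registerScanLocations overrides the default timestamp codecs so that
// timestamp values scanned from the database are interpreted in loc and
// timestamptz values are converted to loc. Note the pointers: the Codec
// interface is implemented by *pgtype.TimestampCodec and
// *pgtype.TimestamptzCodec.
func registerScanLocations(conn *pgx.Conn, loc *time.Location) {
	conn.TypeMap().RegisterType(&pgtype.Type{
		Name: "timestamp", OID: pgtype.TimestampOID,
		Codec: &pgtype.TimestampCodec{ScanLocation: loc},
	})
	conn.TypeMap().RegisterType(&pgtype.Type{
		Name: "timestamptz", OID: pgtype.TimestamptzOID,
		Codec: &pgtype.TimestamptzCodec{ScanLocation: loc},
	})
}
```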
Modify the RowToStructByPos/Name functions to store the computed mapping
of columns to struct field locations in a cache to reuse between calls.
Because this computation can be expensive and the same few results will
frequently be reused, caching these results provides a significant
speedup.
For positional mappings, the cache can be keyed by the struct type alone.
However, for named mappings, the key must also include a representation
of the columns, in order, since different column sets produce different
mappings.
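An illustrative sketch of the keying scheme (hypothetical names, not pgx's actual internals):

```go
package rowcache

import (
	"reflect"
	"strings"
	"sync"
)

// fieldMapping is a hypothetical stand-in for the computed
// column-to-struct-field mapping.
type fieldMapping []int

// namedKey keys the cache for RowToStructByName-style mappings: the same
// struct type scanned from different column sets needs different mappings.
type namedKey struct {
	structType reflect.Type
	columns    string // ordered column names joined with a NUL separator
}

var (
	positionalCache sync.Map // reflect.Type -> fieldMapping
	namedCache      sync.Map // namedKey -> fieldMapping
)

// cachedNamedMapping returns the mapping for (t, cols), computing and
// storing it on first use so later calls reuse the result.
func cachedNamedMapping(t reflect.Type, cols []string, compute func() fieldMapping) fieldMapping {
	key := namedKey{structType: t, columns: strings.Join(cols, "\x00")}
	if v, ok := namedCache.Load(key); ok {
		return v.(fieldMapping)
	}
	m := compute()
	namedCache.Store(key, m)
	return m
}
```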
The verbose output more often buries interesting information than
provides anything useful. It can be re-enabled later if necessary.
Tests were failing with:
Error: Process completed with exit code 143.
This appears to mean that GitHub Actions killed the runner.
See https://github.com/jackc/pgx/actions/runs/8216337993/job/22470808811
for an example.
It appears GitHub Actions kills runners based on resource usage. Running
tests one at a time reduces the resource usage and avoids the problem.
Or at least that's what I presume is happening. It sure is fun debugging
issues on cloud systems where you have limited visibility... :(
fixes https://github.com/jackc/pgx/issues/1934
This still solves the problem of negative numbers creating a line
comment (for example, naively substituting -1 into `select 1-$1` would
yield `select 1--1`, where `--` starts a comment), but it avoids
breaking edge cases such as `set foo to $1` where the substitution takes
place in a location where an arbitrary expression is not allowed.
https://github.com/jackc/pgx/issues/1928
pgx v5 was not vulnerable to CVE-2024-27289 due to how the sanitizer was
being called. But the sanitizer itself still had the underlying issue.
This commit ports the fix from pgx v4 to v5 to ensure that the issue
does not emerge if pgx uses the sanitizer differently in the future.
The PostgreSQL server will reject messages greater than ~1 GB anyway.
Worse, a message larger than 4 GB could wrap the 32-bit integer message
size and be interpreted by the server as multiple messages. This could
allow a malicious client to inject arbitrary protocol messages.
https://github.com/jackc/pgx/security/advisories/GHSA-mrww-27vc-gghv
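A hedged sketch of this kind of guard (the constant and function are illustrative, not pgconn's actual API):

```go
package example

import (
	"errors"
	"math"
)

// maxMessageBodyLen is illustrative: the protocol's length field is a
// signed 32-bit integer that counts itself (4 bytes) but not the 1-byte
// message type.
const maxMessageBodyLen = math.MaxInt32 - 4

var errMessageTooLarge = errors.New("protocol message too large")

// checkMessageLen rejects a message body that would overflow the 32-bit
// length field before it is ever written to the wire.
func checkMessageLen(body []byte) error {
	if len(body) > maxMessageBodyLen {
		return errMessageTooLarge
	}
	return nil
}
```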
The underlying type of json.RawMessage is []byte, so to avoid it being
treated as binary data we need to handle it specifically. This is done
by registerDefaultPgTypeVariants. In addition, handle json.RawMessage in
the JSONCodec PlanEncode to avoid it being mutated by json.Marshal.
https://github.com/jackc/pgx/issues/1763
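A usage sketch (the table, column, and connection string are illustrative):

```go
package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, "postgres://localhost/example")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// With the fix, the json.RawMessage reaches the server byte-for-byte:
	// it is neither treated as bytea (despite its underlying []byte type)
	// nor re-marshaled by json.Marshal, which would mutate the raw bytes.
	raw := json.RawMessage(`{"answer": 42}`)
	if _, err := conn.Exec(ctx, "insert into docs (body) values ($1)", raw); err != nil {
		log.Fatal(err)
	}
}
```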
Otherwise, closing the pipeline might panic if it tries to read from a
connection that should be closed but still has a fatal error on the
wire.
https://github.com/jackc/pgx/issues/1920
When a conn is going to execute a query, the first thing it does is to
deallocate any invalidated prepared statements from the statement cache.
However, the statements were removed from the cache regardless of
whether the deallocation succeeded. This would cause subsequent calls of
the same SQL to fail with a "prepared statement already exists" error.
This problem is easy to trigger by running a query with a context that
is already canceled.
This commit changes the logic for deallocating invalidated cached
statements so that a statement is only removed from the cache if the
deallocation succeeded on the server.
https://github.com/jackc/pgx/issues/1847
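An illustrative sketch of the corrected eviction order (the cache type and helper are hypothetical, not pgx's internal API):

```go
package example

import (
	"context"

	"github.com/jackc/pgx/v5"
)

// stmtCache is a hypothetical stand-in for pgx's internal statement cache.
type stmtCache struct {
	invalidated []string
	cached      map[string]struct{}
}

func (c *stmtCache) Invalidated() []string { return c.invalidated }
func (c *stmtCache) Remove(name string)    { delete(c.cached, name) }

// deallocateInvalidated mirrors the fix described above: a statement is
// evicted from the cache only after the server acknowledges DEALLOCATE.
func deallocateInvalidated(ctx context.Context, conn *pgx.Conn, cache *stmtCache) error {
	for _, name := range cache.Invalidated() {
		if _, err := conn.Exec(ctx, "deallocate "+name); err != nil {
			// e.g. an already-canceled context: keep the statement in
			// the cache so a later call can retry the deallocation.
			return err
		}
		cache.Remove(name)
	}
	return nil
}
```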
CopyFrom requires that all values are encoded in the binary format. It
already tried to parse strings into values that could then be encoded
into the binary format, but it didn't handle types that can be encoded
as text and then parsed and converted to binary. It now does.
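A hedged usage sketch of the general idea (the table, columns, connection string, and the text-only price type are illustrative assumptions, not taken from the commit):

```go
package main

import (
	"context"
	"database/sql/driver"
	"fmt"
	"log"

	"github.com/jackc/pgx/v5"
)

// price implements driver.Valuer and only produces a textual
// representation. Illustrative scenario: CopyFrom can now take such a
// text encoding, parse it, and convert it to the binary format it needs.
type price struct{ cents int64 }

func (p price) Value() (driver.Value, error) {
	return fmt.Sprintf("%d.%02d", p.cents/100, p.cents%100), nil
}

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, "postgres://localhost/example")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	_, err = conn.CopyFrom(
		ctx,
		pgx.Identifier{"orders"},
		[]string{"id", "total"},
		pgx.CopyFromRows([][]any{{1, price{cents: 4299}}}),
	)
	if err != nil {
		log.Fatal(err)
	}
}
```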
TLS setup and tests were rather finicky. It seems that openssl 3
encrypts certificates differently than older openssl versions, and in a
way that Go and/or pgx's SSL handling code can't handle. This appears to
be related to the use of a deprecated client certificate encryption
system.
This caused CI to be stuck on Ubuntu 20.04 and recently caused the
contributing guide to fail to work on macOS.
Remove openssl from the test setup and replace it with a Go program
that generates the certificates.
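A minimal sketch of what such a Go certificate generator can look like (the actual program in the repo may differ):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}

	// Self-signed certificate valid for local test connections only.
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "localhost"},
		DNSNames:              []string{"localhost"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		KeyUsage:              x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
		ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		BasicConstraintsValid: true,
		IsCA:                  true,
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	writePEM("server.crt", &pem.Block{Type: "CERTIFICATE", Bytes: der})

	keyDER, err := x509.MarshalECPrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	writePEM("server.key", &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyDER})
}

// writePEM writes a single PEM block to path.
func writePEM(path string, block *pem.Block) {
	f, err := os.Create(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := pem.Encode(f, block); err != nil {
		log.Fatal(err)
	}
}
```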