Previously it checked TotalConns, but that includes ConstructingConns.
Instead it should check IdleConns directly, so the next Acquire takes one
of those and doesn't make a third connection. The check against the
context was also wrong, which prevented this from timing out after 2 minutes.
This also fixes a bug where NewConnsCount was not correctly counting
connections created by Acquire directly.
Fixes #1690
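The corrected check can be sketched like this (poolStat and acquireWouldReuse are illustrative names, not the actual pgxpool API):

```go
package main

import "fmt"

// poolStat is a toy stand-in for pgxpool's stats (illustrative only).
type poolStat struct{ idleConns, constructingConns int32 }

func (s poolStat) IdleConns() int32 { return s.idleConns }

// TotalConns includes connections still being constructed, which the
// next Acquire cannot take; that is why checking it was wrong.
func (s poolStat) TotalConns() int32 { return s.idleConns + s.constructingConns }

// acquireWouldReuse reports whether the next Acquire can take an
// existing idle connection instead of constructing another one.
func acquireWouldReuse(s poolStat) bool {
	return s.IdleConns() > 0 // not s.TotalConns() > 0
}

func main() {
	fmt.Println(acquireWouldReuse(poolStat{idleConns: 0, constructingConns: 1})) // false
	fmt.Println(acquireWouldReuse(poolStat{idleConns: 1, constructingConns: 0})) // true
}
```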
It was previously possible for a connection to be created while the
background health check was running. The health check could create
connection(s) in excess of the maximum pool size in this case.
https://github.com/jackc/pgx/issues/1660
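The guard can be sketched as follows (connsToCreate and its parameters are hypothetical names; the real fix lives in pgxpool's background health check):

```go
package main

import "fmt"

// connsToCreate returns how many new connections the background health
// check may start without pushing the pool past its maximum size.
// totalConns must already include connections being constructed, so a
// connection created concurrently with the health check is counted.
func connsToCreate(minConns, maxConns, totalConns int32) int32 {
	n := minConns - totalConns // how many we'd like, to reach MinConns
	if room := maxConns - totalConns; n > room {
		n = room // never exceed MaxConns
	}
	if n < 0 {
		return 0
	}
	return n
}

func main() {
	fmt.Println(connsToCreate(2, 4, 1)) // 1: top up to MinConns
	fmt.Println(connsToCreate(2, 4, 5)) // 0: already at/over MaxConns
}
```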
The connect_timeout given in the conn string was not obeyed when
sslmode was not specified (the default is prefer) or was set to
sslmode=allow|prefer: connecting took up to twice the amount of time
specified by connect_timeout. While this behavior is correct when
multiple hosts are provided in the conn string, it is not correct for
a single host, and it also did not match libpq.
The root cause is that, to implement sslmode=allow|prefer, a
connection is tried twice: first with TLSConfig, and if that doesn't
work, without TLSConfig. The fix now uses the same context when the
same host is being retried, so one shared deadline bounds both
attempts. This change doesn't affect the existing multi-host behavior.
The goal of this PR is to close issue [jackc/pgx/issues/1672](https://github.com/jackc/pgx/issues/1672).
The context timeouts for tests are designed to give a better error
message when something hangs rather than the test just timing out.
Unfortunately, the potato CI frequently has some test or another
randomly take a long time. While the increased times are somewhat less
than optimal on a real computer, hopefully this will solve the
flickering CI.
Most of this is in conversion, and I assume it became unused with some
of the v5 changes and refactors to a codec-based approach.
There are likely a few more cleanups to be made, but these ones seemed
easy and safe to start with.
As recommended by staticcheck, but it also might be a bit more
efficient for the compiler to implement, since we don't care which
slice of bytes is greater than the other.
This ensures the output of Encode can pass through Scan and produce
the same input. This found two minor problems with the text codec.
These are not bugs: these situations do not happen when using
pgx with Postgres. However, I think it is worth fixing to ensure the
code is internally consistent.
The problems with the text codec are:
* It did not correctly distinguish between nil and empty. This is not
a problem with Postgres, since NULL values are marked separately,
but the binary codec distinguishes between them, so it seems like
the text codec should as well.
* It did not output a space after the comma between pairs. Postgres
produces output in this format, and the parser now strictly accepts
only the Postgres format. This is not a bug, but it seems like a
good idea.
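Both fixes can be illustrated with a minimal encoder sketch (encodeHstore is a hypothetical helper, not the pgtype codec, and Go's %q quoting stands in for hstore quoting of simple strings); note the unquoted NULL for a nil value and the ", " between pairs:

```go
package main

import (
	"fmt"
	"strings"
)

type pair struct {
	k string
	v *string // nil means SQL NULL, distinct from the empty string
}

// encodeHstore sketches the corrected text format: nil becomes an
// unquoted NULL, and pairs are joined with ", " as hstore_out does.
func encodeHstore(pairs []pair) string {
	parts := make([]string, len(pairs))
	for i, p := range pairs {
		if p.v == nil {
			parts[i] = fmt.Sprintf("%q=>NULL", p.k)
		} else {
			parts[i] = fmt.Sprintf("%q=>%q", p.k, *p.v)
		}
	}
	return strings.Join(parts, ", ")
}

func main() {
	empty := ""
	fmt.Println(encodeHstore([]pair{{"a", nil}, {"b", &empty}}))
	// "a"=>NULL, "b"=>""
}
```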
Arrays with values that start or end with vtab ("\v") must be quoted.
Postgres's array parser skips leading and trailing whitespace with
the array_isspace() function, which is slightly different from the
scanner_isspace() function that was previously linked. Add a test
that reproduces this failure, and fix the definition of isSpace.
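The fixed definition follows array_isspace() directly; the six characters below are what Postgres's array parser skips, and '\v' is the one scanner_isspace() lacks:

```go
package main

import "fmt"

// isSpace mirrors Postgres's array_isspace(): space, tab, newline,
// carriage return, vertical tab, and form feed. scanner_isspace()
// omits '\v', which is why the old definition mis-handled values
// that start or end with a vertical tab.
func isSpace(ch byte) bool {
	return ch == ' ' || ch == '\t' || ch == '\n' || ch == '\r' || ch == '\v' || ch == '\f'
}

func main() {
	fmt.Println(isSpace('\v')) // true: such values must be quoted
	fmt.Println(isSpace('x'))  // false
}
```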
This also includes a change to use strings.EqualFold, which should
not change behavior but does not require copying the string.
* use []string for value string pointers: one allocation instead of
one per value.
* use one string for all key/value pairs, instead of one for each.
After this change, one Hstore will share two allocations: one string
and one []string. The disadvantage is that neither can be deallocated
until all key/value pairs are unused. This means that if an
application takes a single key or value from the Hstore and holds on
to it, the entire backing allocation stays live, increasing the
application's memory footprint. I would guess this is an unlikely
problem, but it is possible.
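The layout can be sketched like this (splitValues and the offsets are illustrative; the real change is inside the hstore scan path):

```go
package main

import "fmt"

// splitValues shows the two-allocation layout: every value is a
// substring of one shared backing string, and all the string headers
// live in a single []string. Holding any one element keeps the whole
// backing string alive, which is the trade-off described above.
func splitValues(backing string, offsets [][2]int) []string {
	values := make([]string, len(offsets)) // allocation 1: one []string
	for i, o := range offsets {
		values[i] = backing[o[0]:o[1]] // no copy: shares backing's bytes
	}
	return values
}

func main() {
	backing := "v1v2v3" // allocation 2: one string for all values
	fmt.Println(splitValues(backing, [][2]int{{0, 2}, {2, 4}, {4, 6}}))
	// [v1 v2 v3]
}
```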
The benchstat results from my M1 Max are below.
goos: darwin
goarch: arm64
pkg: github.com/jackc/pgx/v5/pgtype
│ orig.txt │ new.txt │
│ sec/op │ sec/op vs base │
HstoreScan/databasesql.Scan-10 82.11µ ± 1% 82.66µ ± 2% ~ (p=0.436 n=10)
HstoreScan/text-10 83.30µ ± 1% 84.24µ ± 3% ~ (p=0.165 n=10)
HstoreScan/binary-10 15.987µ ± 2% 7.459µ ± 6% -53.35% (p=0.000 n=10)
geomean 47.82µ 37.31µ -21.98%
│ orig.txt │ new.txt │
│ B/op │ B/op vs base │
HstoreScan/databasesql.Scan-10 56.23Ki ± 0% 56.23Ki ± 0% ~ (p=0.324 n=10)
HstoreScan/text-10 65.12Ki ± 0% 65.12Ki ± 0% ~ (p=0.675 n=10)
HstoreScan/binary-10 21.09Ki ± 0% 20.73Ki ± 0% -1.70% (p=0.000 n=10)
geomean 42.58Ki 42.34Ki -0.57%
│ orig.txt │ new.txt │
│ allocs/op │ allocs/op vs base │
HstoreScan/databasesql.Scan-10 744.0 ± 0% 744.0 ± 0% ~ (p=1.000 n=10) ¹
HstoreScan/text-10 743.0 ± 0% 743.0 ± 0% ~ (p=1.000 n=10) ¹
HstoreScan/binary-10 464.00 ± 0% 41.00 ± 0% -91.16% (p=0.000 n=10)
geomean 635.4 283.0 -55.46%
¹ all samples are equal
I am working on an application that uses hstore types, and we found
that returning the values is slow, particularly when using the text
protocol, such as when using database/sql. This improves parsing to
be about 6X faster (currently faster than binary). The changes are:
* referencing the original string instead of copying into new strings
(very large win)
* using strings.IndexByte to scan double-quoted strings: it has
architecture-specific assembly implementations, and most of the
time is spent in key/value strings.
* estimating the number of key/value pairs to allocate the correct
size of the slice and map up front. This reduces the number of
allocations and bytes allocated by a factor of 2, and was a small
CPU win.
* parsing directly into the Hstore, rather than copying into it.
This parser is stricter than the old one: it only accepts hstore
strings serialized by Postgres. The old one was already stricter
than Postgres's own parser, but it accepted any whitespace character
after a comma, e.g. "k1"=>"v1",\t"k2"=>"v2". This one only accepts
a single space.
Postgres only ever uses ", " as the separator. See hstore_out:
https://github.com/postgres/postgres/blob/master/contrib/hstore/hstore_io.c
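The stricter rule can be illustrated with a tiny check (validSeparator is a hypothetical helper, not the actual parser): after each pair, the only accepted continuation is exactly ", ".

```go
package main

import (
	"fmt"
	"strings"
)

// validSeparator reports whether s continues at offset i with the
// exact ", " that hstore_out emits; other whitespace after the comma
// (accepted by the old parser) is now rejected.
func validSeparator(s string, i int) bool {
	return strings.HasPrefix(s[i:], ", ")
}

func main() {
	// `"k1"=>"v1"` occupies indices 0-9, so the separator starts at 10.
	fmt.Println(validSeparator(`"k1"=>"v1", "k2"=>"v2"`, 10))           // true
	fmt.Println(validSeparator("\"k1\"=>\"v1\",\t\"k2\"=>\"v2\"", 10)) // false: tab after comma
}
```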
The benchstat comparison of the benchmark runs on my M1 Pro, produced
with the command line below, follows. The new text parser is now
faster than the binary parser. I will improve the binary parser in a
separate change.
for i in $(seq 10); do go test ./pgtype -run=none -bench=BenchmarkHstoreScan -benchtime=1s >> new.txt; done
goos: darwin
goarch: arm64
pkg: github.com/jackc/pgx/v5/pgtype
│ orig.txt │ new.txt │
│ sec/op │ sec/op vs base │
HstoreScan/databasesql.Scan-10 82.11µ ± 1% 10.51µ ± 0% -87.20% (p=0.000 n=10)
HstoreScan/text-10 83.30µ ± 1% 11.49µ ± 1% -86.20% (p=0.000 n=10)
HstoreScan/binary-10 15.99µ ± 2% 15.77µ ± 1% -1.35% (p=0.007 n=10)
geomean 47.82µ 12.40µ -74.08%
│ orig.txt │ new.txt │
│ B/op │ B/op vs base │
HstoreScan/databasesql.Scan-10 56.23Ki ± 0% 11.68Ki ± 0% -79.23% (p=0.000 n=10)
HstoreScan/text-10 65.12Ki ± 0% 20.58Ki ± 0% -68.40% (p=0.000 n=10)
HstoreScan/binary-10 21.09Ki ± 0% 21.09Ki ± 0% ~ (p=0.378 n=10)
geomean 42.58Ki 17.18Ki -59.66%
│ orig.txt │ new.txt │
│ allocs/op │ allocs/op vs base │
HstoreScan/databasesql.Scan-10 744.00 ± 0% 44.00 ± 0% -94.09% (p=0.000 n=10)
HstoreScan/text-10 743.00 ± 0% 44.00 ± 0% -94.08% (p=0.000 n=10)
HstoreScan/binary-10 464.0 ± 0% 464.0 ± 0% ~ (p=1.000 n=10) ¹
geomean 635.4 96.49 -84.81%
¹ all samples are equal