Previously, items were never removed from the preparedStatements map,
so workloads that send a large number of unique queries could
eventually run out of memory. Delete entries from the map when sending
the deallocate command to Postgres. Add a test to verify this works.
Fixes https://github.com/jackc/pgx/issues/1456
Before this change, the Hstore text protocol did not quote keys or
values containing non-space whitespace ("\r\n\v\t"). This caused
inserts with those values to fail with errors like:
ERROR: Syntax error near "r" at position 17 (SQLSTATE XX000)
The previous version also quoted curly braces ("{}"), but they don't
seem to require quoting.
It is possible that it would be easier to just always quote the
values, which is what Postgres does when encoding its text protocol,
but this is a smaller change.
The parseHstore function did not check the return value from
p.Consume() after a ', ' sequence. It expects the doublequote '"' that
starts the next key, but would accept any character, so it accepted
invalid input such as:
"key1"=>"b", ,key2"=>"value"
Add a unit test that covers this case, and fix a couple of nearby
error strings while looking at this.
Found by looking at staticcheck warnings:
pgtype/hstore.go:434:6: this value of end is never used (SA4006)
pgtype/hstore.go:434:6: this value of r is never used (SA4006)
The existing test registers pgtype.Hstore in the text map, then uses
query modes that use the binary protocol, so it never exercised the
text parsing code. Add a version of the test that uses pgtype.Hstore
as the input and output argument in all query modes, and test it
without registering the codec.
When running simple protocol queries with the hstore type registered,
the scan implementation did not correctly distinguish between NULL and
empty. Fix the implementation and add a test to verify this.
...on a select that returns an error after some rows.
This was initially found via a failure with CockroachDB, which seems
to send a RowDescription before an error even when no rows are
returned. PostgreSQL doesn't.
Table types have system / hidden columns like tableoid, cmax, xmax, etc.
These are not included when sending or receiving composite types.
https://github.com/jackc/pgx/issues/1576
When using `scany` I encountered the following case. This seems to fix it.
Looks like null `jsonb` columns cause the problem. If you create a table like below you can see that the following code fails. Is this expected?
```sql
CREATE TABLE test (
    a int4 NULL,
    b int4 NULL,
    c jsonb NULL
);
INSERT INTO test (a, b, c) VALUES (1, null, null);
```
```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/georgysavva/scany/v2/pgxscan"
	"github.com/jackc/pgx/v5"
)

func main() {
	var rows []map[string]interface{}
	// connection string substituted here; the original report used a
	// test server's URL
	conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL"))
	if err != nil {
		panic(err)
	}
	// this will fail with: can't scan into dest[0]: cannot scan NULL into *interface {}
	err = pgxscan.Select(context.Background(), conn, &rows, `SELECT c from test`)
	// this works
	// err = pgxscan.Select(context.Background(), conn, &rows, `SELECT a,b from test`)
	if err != nil {
		panic(err)
	}
	log.Printf("%+v", rows)
}
```
Sleeping for a microsecond on Windows actually takes 10ms, which
caused the test to never finish. Instead, use a channel to ensure the
two goroutines start working at the same time, and remove the sleeps.
The test was relying on sending so big a message that the write
blocked. However, it appears that on Windows, TCP connections over
localhost have a very large or effectively infinite buffer. Change the
test to simply set the deadline to the current time before triggering
the write.