Tests should time out in a reasonable time if something is stuck. In
particular, this is important when testing deadlock conditions such as
those that can occur with the copy protocol if both the client and the
server are blocked writing until the other side does a read.
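A minimal sketch of one way to bound such a test, assuming a small
helper like this rather than the repository's actual test harness:

```go
import (
	"context"
	"testing"
	"time"
)

// testContext is a hypothetical helper: it returns a context that expires
// after d, so an operation under test that deadlocks (for example a stuck
// CopyFrom) returns a deadline error instead of hanging the whole test run.
func testContext(t *testing.T, d time.Duration) context.Context {
	t.Helper()
	ctx, cancel := context.WithTimeout(context.Background(), d)
	t.Cleanup(cancel)
	return ctx
}
```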
Before this change, the Hstore text protocol did not quote keys or
values containing non-space whitespace ("\r\n\v\t"). This caused
inserts with these values to fail with errors like:
ERROR: Syntax error near "r" at position 17 (SQLSTATE XX000)
The previous version also quoted curly braces ("{}"), but they don't
seem to require quoting.
It might be simpler to always quote the values, which is what Postgres
does when encoding its text protocol, but this is a smaller change.
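A rough sketch of the quoting rule described here, as an illustration
only (this is not the pgtype implementation):

```go
import "strings"

// quoteHstoreElement quotes an hstore key or value when it is empty or
// contains characters that are significant in the hstore text format,
// including all whitespace, and escapes '"' and '\' inside the quotes.
// A real implementation would likely also need to quote the literal word NULL.
func quoteHstoreElement(s string) string {
	if s != "" && !strings.ContainsAny(s, " \t\r\n\v\f\"\\,=>") {
		return s
	}
	var b strings.Builder
	b.WriteByte('"')
	for _, r := range s {
		if r == '"' || r == '\\' {
			b.WriteByte('\\')
		}
		b.WriteRune(r)
	}
	b.WriteByte('"')
	return b.String()
}
```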
The parseHstore function did not check the return value from
p.Consume() after a ', ' sequence. It expects a double quote '"' that
starts the next key, but it would accept any character, so it accepted
invalid input such as:
"key1"=>"b", ,key2"=>"value"
Add a unit test that covers this case.
Fix a couple of the nearby error strings while looking at this.
Found by looking at staticcheck warnings:
pgtype/hstore.go:434:6: this value of end is never used (SA4006)
pgtype/hstore.go:434:6: this value of r is never used (SA4006)
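A hedged sketch of such a test, assuming the text parser is reachable
through pgtype.Hstore's database/sql Scan method:

```go
import (
	"testing"

	"github.com/jackc/pgx/v5/pgtype"
)

func TestHstoreScanRejectsMalformedPair(t *testing.T) {
	var h pgtype.Hstore
	// The second pair is missing the double quote that starts its key and
	// should be rejected rather than silently accepted.
	err := h.Scan(`"key1"=>"b", ,key2"=>"value"`)
	if err == nil {
		t.Fatalf("expected an error for malformed hstore input, got %v", h)
	}
}
```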
The existing test registers pgtype.Hstore in the text map, then uses
the query modes that use the binary protocol, so it did not exercise
the text parsing code. Add a version of the test that uses
pgtype.Hstore as the input and output argument in all query modes,
and tests it without registering the codec.
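A hedged sketch of what that round-trip test could look like (the
environment variable and the abbreviated mode list are assumptions, not
the actual test code):

```go
import (
	"context"
	"os"
	"testing"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgtype"
)

func TestHstoreUnregisteredAllQueryModes(t *testing.T) {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, os.Getenv("PGX_TEST_DATABASE"))
	if err != nil {
		t.Fatal(err)
	}
	defer conn.Close(ctx)

	value := "x"
	input := pgtype.Hstore{"a": &value, "b": nil}

	modes := []pgx.QueryExecMode{
		pgx.QueryExecModeSimpleProtocol,
		pgx.QueryExecModeExec,
		pgx.QueryExecModeDescribeExec,
	}
	for _, mode := range modes {
		var output pgtype.Hstore
		err := conn.QueryRow(ctx, `select $1::hstore`, mode, input).Scan(&output)
		if err != nil {
			t.Fatalf("mode %v: %v", mode, err)
		}
		if len(output) != len(input) {
			t.Errorf("mode %v: got %v, want %v", mode, output, input)
		}
	}
}
```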
When running simple protocol queries with the hstore type registered,
the scan implementation did not correctly distinguish between NULL and
an empty hstore. Fix the implementation and add a test to verify this.
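A hedged sketch of the distinction being tested (an illustrative helper,
not the actual test; the connection is assumed to have the hstore codec
registered):

```go
import (
	"context"
	"testing"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgtype"
)

// checkNullVsEmptyHstore expects NULL to scan to a nil map and ''::hstore to
// scan to a non-nil, zero-length map when using the simple protocol.
func checkNullVsEmptyHstore(ctx context.Context, t *testing.T, conn *pgx.Conn) {
	var null pgtype.Hstore
	if err := conn.QueryRow(ctx, `select null::hstore`, pgx.QueryExecModeSimpleProtocol).Scan(&null); err != nil {
		t.Fatal(err)
	}
	if null != nil {
		t.Errorf("NULL hstore: got %v, want nil map", null)
	}

	var empty pgtype.Hstore
	if err := conn.QueryRow(ctx, `select ''::hstore`, pgx.QueryExecModeSimpleProtocol).Scan(&empty); err != nil {
		t.Fatal(err)
	}
	if empty == nil || len(empty) != 0 {
		t.Errorf("empty hstore: got %v, want non-nil empty map", empty)
	}
}
```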
Table types have system / hidden columns like tableoid, cmax, xmax, etc.
These are not included when sending or receiving composite types.
https://github.com/jackc/pgx/issues/1576
When using `scany` I encountered the following case. This seems to fix it.
Null `jsonb` columns appear to cause the problem. If you create a table like the one below, you can see that the following code fails. Is this expected?
```sql
CREATE TABLE test (
    a int4 NULL,
    b int4 NULL,
    c jsonb NULL
);

INSERT INTO test (a, b, c) VALUES (1, null, null);
```
```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/georgysavva/scany/v2/pgxscan"
	"github.com/jackc/pgx/v5"
)

func main() {
	var rows []map[string]interface{}

	// Connection string taken from the environment here; the original report
	// used a test-server helper to build it.
	conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL"))
	if err != nil {
		panic(err)
	}
	defer conn.Close(context.Background())

	// this will fail with: can't scan into dest[0]: cannot scan NULL into *interface {}
	err = pgxscan.Select(context.Background(), conn, &rows, `SELECT c FROM test`)
	// this works
	// err = pgxscan.Select(context.Background(), conn, &rows, `SELECT a, b FROM test`)
	if err != nil {
		panic(err)
	}
	log.Printf("%+v", rows)
}
```
It should pass a FlatArray[T] to the next step instead of an
anySliceArrayReflect. By using an anySliceArrayReflect, an encode of
[]github.com/google/uuid.UUID followed by []string into a PostgreSQL
uuid[] would crash. This was caused by an EncodePlan cache collision
where the second encoding used part of the cached plan of the first.
In proper usage a cache collision shouldn't be able to occur. If this
assertion proves incorrect, it will be necessary to add an optional
interface to ScanPlan and EncodePlan that marks a plan as ineligible
for caching. But I have been unable to construct a failing case, and
given that ScanPlans have been cached for quite some time now without
incident, I do not think it is possible. This issue only occurred due to
the bug in *wrapSliceEncodePlan[T].Encode.
https://github.com/jackc/pgx/issues/1502
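A hedged reproduction sketch of that collision (the connection string,
table, and column are illustrative):

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/google/uuid"
	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// Assumes a table: create table items (ids uuid[]).
	// Encoding a []uuid.UUID into the uuid[] parameter populates the
	// connection's EncodePlan cache...
	if _, err := conn.Exec(ctx, `insert into items (ids) values ($1)`, []uuid.UUID{uuid.New()}); err != nil {
		log.Fatal(err)
	}

	// ...and encoding a []string into the same uuid[] parameter then reused
	// part of that cached plan and crashed before the fix.
	if _, err := conn.Exec(ctx, `insert into items (ids) values ($1)`, []string{uuid.NewString()}); err != nil {
		log.Fatal(err)
	}
}
```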
This significantly reduces memory allocations in paths that repeatedly
encode the same type of values, such as CopyFrom.
https://github.com/jackc/pgx/issues/1481
This seems a bit of a hack. It fixes the problems demonstrated in my previous commit.
Maybe there's a cleaner way?
Associated: https://github.com/jackc/pgx/issues/1426
Demonstrate the problem with the tests: the string conversion is wrong
for negative decimal values, e.g. -0.01.
This causes errors when encoding to JSON:
"json: error calling MarshalJSON for type pgtype.Numeric"
It also causes scan failures of sql.NullFloat64:
"converting driver.Value type string ("0.-1") to a float64"
As reported here: https://github.com/jackc/pgx/issues/1426
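A hedged illustration of the failure mode (the field values show how
-0.01 is represented by pgtype.Numeric; the function name is
illustrative):

```go
import (
	"encoding/json"
	"fmt"
	"math/big"

	"github.com/jackc/pgx/v5/pgtype"
)

func demoNegativeNumericJSON() {
	// -0.01 is Int = -1 scaled by 10^Exp with Exp = -2.
	n := pgtype.Numeric{Int: big.NewInt(-1), Exp: -2, Valid: true}
	b, err := json.Marshal(n)
	// Before the fix, the intermediate string was "0.-1", so MarshalJSON (and
	// scans into sql.NullFloat64) failed; the expected value is -0.01.
	fmt.Println(string(b), err)
}
```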
This improves handling of unregistered types. In general, they should
"just work", but registering types yields performance benefits and
avoids some edge cases. Updated the documentation to mention this.
https://github.com/jackc/pgx/issues/1445
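A hedged sketch of registering an extension type such as hstore at
connection setup (the helper name and the OID lookup query are
illustrative):

```go
import (
	"context"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgtype"
)

// registerHstore looks up the hstore OID on the server and registers the
// HstoreCodec with the connection's type map, so the codec's faster paths are
// used instead of the unregistered-type fallbacks.
func registerHstore(ctx context.Context, conn *pgx.Conn) error {
	var oid uint32
	if err := conn.QueryRow(ctx, `select 'hstore'::regtype::oid`).Scan(&oid); err != nil {
		return err
	}
	conn.TypeMap().RegisterType(&pgtype.Type{Name: "hstore", OID: oid, Codec: pgtype.HstoreCodec{}})
	return nil
}
```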
Previously, when the minimum condition was violated, the error message
incorrectly read:
"is greater than maximum"
Also add the encoding/json import to the .erb template, as it was
missing after running rake generate.
Previously, an infinite value was returned as a string. Other types
that can be infinite, such as Timestamptz, return a
pgtype.InfinityModifier. This change brings them into alignment.
A `driver.Valuer` can produce a `string` that the `Codec` for the
PostgreSQL type doesn't know how to handle. That string is scanned into
whatever the default type for that `Codec` is, and that new value is
encoded. If the new value has the same type as the original value, then
an infinite loop occurred. Check that the types are different to break
the loop.
https://github.com/jackc/pgx/issues/1331