Table types have system / hidden columns like tableoid, cmax, xmax, etc.
These are not included when sending or receiving composite types.
https://github.com/jackc/pgx/issues/1576
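For illustration, a hedged sketch assuming a hypothetical table t(a int, b int),
an open *pgx.Conn, and the context and github.com/jackc/pgx/v5 imports: the
whole-row composite t carries only the declared columns, so system columns have
to be selected explicitly.
```go
// (t).a and (t).b are fields of the whole-row composite; t.xmax is a
// system column and is never part of the composite value itself.
func systemColumns(ctx context.Context, conn *pgx.Conn) error {
	var a, b int32
	var xmax uint32
	return conn.QueryRow(ctx,
		`SELECT (t).a, (t).b, t.xmax FROM t LIMIT 1`,
	).Scan(&a, &b, &xmax)
}
```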
When using `scany` I encountered the following case. This seems to fix it.
It looks like NULL `jsonb` columns cause the problem. If you create the table below, you can see that the following code fails. Is this expected?
```sql
CREATE TABLE test (
    a int4 NULL,
    b int4 NULL,
    c jsonb NULL
);

INSERT INTO test (a, b, c) VALUES (1, null, null);
```
```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/georgysavva/scany/v2/pgxscan"
	"github.com/jackc/pgx/v5"
)

func main() {
	var rows []map[string]interface{}

	// connection string for the test database (stand-in for the test
	// harness's ts.PGURL().String())
	conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL"))
	if err != nil {
		panic(err)
	}
	defer conn.Close(context.Background())

	// this fails with: can't scan into dest[0]: cannot scan NULL into *interface {}
	err = pgxscan.Select(context.Background(), conn, &rows, `SELECT c FROM test`)
	// this works
	// err = pgxscan.Select(context.Background(), conn, &rows, `SELECT a, b FROM test`)
	if err != nil {
		panic(err)
	}
	log.Printf("%+v", rows)
}
```
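The same failure can be reproduced without scany; a minimal sketch against the
test table above, assuming an open *pgx.Conn:
```go
// Before the fix this returned: can't scan into dest[0]: cannot scan
// NULL into *interface {}. Afterwards v is simply nil.
func scanNullJSONB(ctx context.Context, conn *pgx.Conn) error {
	var v interface{}
	return conn.QueryRow(ctx, `SELECT c FROM test`).Scan(&v)
}
```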
It should pass a FlatArray[T] to the next step instead of an
anySliceArrayReflect. By using an anySliceArrayReflect, an encode of
[]github.com/google/uuid.UUID followed by []string into a PostgreSQL
uuid[] would crash. This was caused by an EncodePlan cache collision
where the second encoding used part of the cached plan of the first.
In proper usage a cache collision shouldn't be able to occur. If this
assertion proves incorrect, it will be necessary to add an optional
interface to ScanPlan and EncodePlan that marks the plan as ineligible
for caching. But I have been unable to construct a failing case, and
given that ScanPlans have been cached for quite some time now without
incident, I do not think it is possible. This issue only occurred due to
the bug in *wrapSliceEncodePlan[T].Encode.
https://github.com/jackc/pgx/issues/1502
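A hedged sketch of that scenario, assuming a hypothetical table u(ids uuid[])
and the github.com/google/uuid import: both Exec calls target the same
parameter, so the second encode hit the cached plan from the first.
```go
func encodeBothSliceTypes(ctx context.Context, conn *pgx.Conn) error {
	// the first encode populates the EncodePlan cache for the uuid[] param
	if _, err := conn.Exec(ctx,
		`INSERT INTO u (ids) VALUES ($1)`,
		[]uuid.UUID{uuid.New()},
	); err != nil {
		return err
	}
	// encoding a different slice type into the same parameter previously
	// reused part of that cached plan and crashed
	_, err := conn.Exec(ctx,
		`INSERT INTO u (ids) VALUES ($1)`,
		[]string{"6ba7b810-9dad-11d1-80b4-00c04fd430c8"},
	)
	return err
}
```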
This significantly reduces memory allocations in paths that repeatedly
encode the same type of values, such as CopyFrom.
https://github.com/jackc/pgx/issues/1481
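CopyFrom is the typical beneficiary since every row encodes the same column
types; a sketch assuming a hypothetical table rows(a int, b text):
```go
func bulkLoad(ctx context.Context, conn *pgx.Conn) error {
	// each row encodes the same types, so cached EncodePlans are reused
	_, err := conn.CopyFrom(
		ctx,
		pgx.Identifier{"rows"},
		[]string{"a", "b"},
		pgx.CopyFromRows([][]any{{1, "x"}, {2, "y"}}),
	)
	return err
}
```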
This seems a bit of a hack. It fixes the problems demonstrated in my previous commit.
Maybe there's a cleaner way?
Associated: https://github.com/jackc/pgx/issues/1426
Demonstrate the problem with the tests:
...for negative decimal values, e.g. -0.01
This causes errors when encoding to JSON:
"json: error calling MarshalJSON for type pgtype.Numeric"
It also causes failures when scanning into sql.NullFloat64:
"converting driver.Value type string ("0.-1") to a float64"
As reported here: https://github.com/jackc/pgx/issues/1426
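A sketch of the sql.NullFloat64 failure mode, assuming database/sql with the
pgx stdlib driver (`_ "github.com/jackc/pgx/v5/stdlib"`):
```go
func negativeNumeric(db *sql.DB) error {
	// before the fix the numeric's text form was "0.-1" and Scan failed
	// with: converting driver.Value type string ("0.-1") to a float64
	var f sql.NullFloat64
	return db.QueryRow(`SELECT -0.01::numeric`).Scan(&f)
}
```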
This improves handling of unregistered types. In general, they should
"just work". But registering types still yields performance benefits and
avoids some edge cases. Updated the documentation to mention this.
https://github.com/jackc/pgx/issues/1445
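A sketch of registering a type up front with the pgx v5 API (the type name
my_enum is hypothetical):
```go
func registerType(ctx context.Context, conn *pgx.Conn) error {
	// LoadType reads the type's OID and codec info from the server;
	// RegisterType adds it to the connection's TypeMap so later encodes
	// and scans skip the unregistered-type fallback path
	dt, err := conn.LoadType(ctx, "my_enum")
	if err != nil {
		return err
	}
	conn.TypeMap().RegisterType(dt)
	return nil
}
```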
Previously, on the minimum condition the error would incorrectly read:
"is greater than maximum"
Also add the encoding/json import to the .erb template, as the import
was missing after running rake generate.
Previously, an infinite value was returned as a string. Other types
that can be infinite, such as Timestamptz, return a
pgtype.InfinityModifier. This change brings them into alignment.
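For reference, a sketch of the Timestamptz behavior being aligned with:
```go
func scanInfinity(ctx context.Context, conn *pgx.Conn) error {
	var ts pgtype.Timestamptz
	err := conn.QueryRow(ctx, `SELECT 'infinity'::timestamptz`).Scan(&ts)
	// on success, ts.InfinityModifier == pgtype.Infinity rather than the
	// value arriving as a raw string
	return err
}
```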
A `driver.Valuer` can return a `string` that the `Codec` for the
PostgreSQL type doesn't know how to handle. That string is scanned into
whatever the default type for that `Codec` is. That new value is then
encoded. If the new value is the same type as the original value, an
infinite loop occurred. Check that the types are different.
https://github.com/jackc/pgx/issues/1331
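A hedged sketch of the shape of the loop with a hypothetical type (not pgx
internals): the Valuer yields a string, the string is scanned back into the
Codec's default type, and that value is encoded again.
```go
// Hypothetical Valuer whose string output the target Codec can't use
// directly. Before the fix, if the fallback value came back as the same
// type as the original, encode -> scan -> encode could recurse forever.
type point struct{ X, Y float64 }

func (p point) Value() (driver.Value, error) {
	return fmt.Sprintf("(%v,%v)", p.X, p.Y), nil
}
```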
This is tricky because driver.Valuer returns any. (We can plan for
fmt.Stringer, by contrast, because it always returns a string.)
Because of this, driver.Valuer was always handled as the last option. But
with pgx v5 now able to find underlying types such as string, and to
support fmt.Stringer, driver.Valuer was often not getting called because
something else was found first.
This change tries driver.Valuer immediately after the initial PlanScan
for the Codec. So a type that directly implements a pgx interface will
still be used, but driver.Valuer will be preferred over all the attempts
to handle renamed types, pointer dereferencing, etc.
fixes https://github.com/jackc/pgx/issues/1319
fixes https://github.com/jackc/pgx/issues/1311
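A sketch of the kind of type affected (hypothetical): a renamed string that
also implements driver.Valuer. Previously the underlying-string path could
win; with this change the Valuer is preferred.
```go
// Before this change pgx could encode the underlying string directly,
// skipping Value(); now Value() runs and its result is what gets encoded.
type status string

func (s status) Value() (driver.Value, error) {
	return strings.ToUpper(string(s)), nil
}
```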
e.g. 2000-01-01 instead of 2000-1-1. PostgreSQL accepted the form
without leading zeroes, but our text decoder didn't. This caused a
problem when we needed to take a value and encode it to text so that
something else could parse it as if it had come from the PostgreSQL
server in text format, e.g. for database/sql compatibility.
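A sketch of the text encoding in question, using the pgx v5 pgtype API (the
exact call site inside pgx is an assumption):
```go
func encodeDateText() ([]byte, error) {
	d := pgtype.Date{
		Time:  time.Date(2000, 1, 1, 0, 0, 0, 0, time.UTC),
		Valid: true,
	}
	m := pgtype.NewMap()
	// now yields "2000-01-01" (zero-padded) rather than "2000-1-1", so
	// downstream text parsers such as database/sql scanners accept it
	return m.Encode(pgtype.DateOID, pgtype.TextFormatCode, d, nil)
}
```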