...on a select that returns an error after some rows.
This was initially found by a failure with CockroachDB, because it
seems to send a RowDescription before an error even when no rows are
returned. PostgreSQL doesn't.
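For context, a minimal sketch of the affected shape of query against a live server (the DATABASE_URL environment variable is a placeholder, not part of the original change): an error that arrives after some rows have already been delivered only surfaces via rows.Err() after iteration finishes.
```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/jackc/pgx/v5"
)

func main() {
	// DATABASE_URL is a placeholder connection string for this sketch.
	conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(context.Background())

	// Division by zero at n=100 makes the server report an error after it
	// has already sent 99 rows.
	rows, err := conn.Query(context.Background(), `select 100/(100-n) from generate_series(1, 200) n`)
	if err != nil {
		log.Fatal(err)
	}
	for rows.Next() {
		var v int
		if err := rows.Scan(&v); err != nil {
			log.Fatal(err)
		}
	}
	// The error that arrived mid-result only surfaces here.
	if rows.Err() != nil {
		log.Printf("query failed after returning rows: %v", rows.Err())
	}
}
```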
Table types have system / hidden columns like tableoid, cmax, xmax, etc.
These are not included when sending or receiving composite types.
https://github.com/jackc/pgx/issues/1576
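As a quick illustration (the table and connection setup here are assumptions for this sketch, not part of the change): selecting a whole row yields the table's composite type, and its text form contains only the declared columns.
```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/jackc/pgx/v5"
)

func main() {
	// DATABASE_URL is a placeholder connection string for this sketch.
	conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(context.Background())

	// Assumes: CREATE TABLE test (a int4, b int4); INSERT INTO test VALUES (1, 2);
	// Selecting the whole row yields the table's composite type. Its text form
	// is "(1,2)" -- only the declared columns, with no tableoid, cmax, xmax,
	// or other system columns.
	var rowText string
	if err := conn.QueryRow(context.Background(), `select t from test t`).Scan(&rowText); err != nil {
		log.Fatal(err)
	}
	log.Println(rowText)
}
```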
When using `scany` I encountered the following case. This seems to fix it.
It looks like null `jsonb` columns cause the problem. If you create a table like the one below, you can see that the following code fails. Is this expected?
```sql
CREATE TABLE test (
    a int4 NULL,
    b int4 NULL,
    c jsonb NULL
);
INSERT INTO test (a, b, c) VALUES (1, null, null);
```
```go
package main

import (
	"context"
	"log"

	"github.com/georgysavva/scany/v2/pgxscan"
	"github.com/jackc/pgx/v5"
)

func main() {
	var rows []map[string]interface{}

	// ts is the reporter's test server; substitute any reachable connection string.
	conn, err := pgx.Connect(context.Background(), ts.PGURL().String())
	if err != nil {
		panic(err)
	}
	defer conn.Close(context.Background())

	// this will fail with: can't scan into dest[0]: cannot scan NULL into *interface {}
	err = pgxscan.Select(context.Background(), conn, &rows, `SELECT c from test`)
	// this works
	// err = pgxscan.Select(context.Background(), conn, &rows, `SELECT a,b from test`)
	if err != nil {
		panic(err)
	}
	log.Printf("%+v", rows)
}
```
Sleeping for a microsecond on Windows actually takes 10ms, which caused
the test to never finish. Instead, use a channel to ensure the two
goroutines start working at the same time, and remove the sleeps.
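A minimal sketch of the synchronization pattern (illustrative names, not the actual test code): both goroutines block on a shared channel and are released together by closing it, so no sleeps are needed.
```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	start := make(chan struct{})
	var wg sync.WaitGroup

	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			<-start // block until both goroutines are released together
			fmt.Println("worker", id, "started")
		}(i)
	}

	close(start) // release both goroutines at once; no time.Sleep needed
	wg.Wait()
}
```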
The test relied on sending a message so large that the write blocked.
However, it appears that on Windows the TCP connections over localhost
have a very large or effectively unbounded buffer. Change the test to simply
set the deadline to the current time before triggering the write.
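A sketch of the technique, with an in-memory net.Pipe standing in for the test's TCP pair (an assumption for illustration): a deadline set to the current time makes the next write fail with a timeout immediately, regardless of how much the OS is willing to buffer.
```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	client, server := net.Pipe() // in-memory connection, stands in for the test's TCP pair
	defer client.Close()
	defer server.Close()

	// A deadline at (or before) the current time makes the next Write return
	// a timeout error immediately, without depending on the send buffer filling.
	client.SetWriteDeadline(time.Now())

	_, err := client.Write(make([]byte, 1024*1024))
	if nerr, ok := err.(net.Error); ok && nerr.Timeout() {
		fmt.Println("write timed out as expected:", err)
	}
}
```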
It should pass a FlatArray[T] to the next step instead of an
anySliceArrayReflect. By using an anySliceArrayReflect, an encode of
[]github.com/google/uuid.UUID followed by []string into a PostgreSQL
uuid[] would crash. This was caused by an EncodePlan cache collision
where the second encoding used part of the cached plan of the first.

In proper usage a cache collision shouldn't be able to occur. If this
assertion proves incorrect, it will be necessary to add an optional
interface to ScanPlan and EncodePlan that marks the plan as ineligible
for caching. But I have been unable to construct a failing case, and
given that ScanPlans have been cached for quite some time now without
incident, I do not think it is possible. This issue only occurred due to
the bug in *wrapSliceEncodePlan[T].Encode.
https://github.com/jackc/pgx/issues/1502
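For reference, a sketch of the sequence that crashed before the fix (the connection setup is a placeholder): the same uuid[] parameter type is encoded first from a []uuid.UUID and then from a []string, which exercised the colliding cached EncodePlan.
```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/google/uuid"
	"github.com/jackc/pgx/v5"
)

func main() {
	// DATABASE_URL is a placeholder connection string for this sketch.
	conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(context.Background())

	id := uuid.New()

	// First encode: []uuid.UUID into a PostgreSQL uuid[] parameter.
	if _, err := conn.Exec(context.Background(), `select $1::uuid[]`, []uuid.UUID{id}); err != nil {
		log.Fatal(err)
	}

	// Second encode: []string into the same uuid[] parameter type. Before the
	// fix, this reused part of the cached plan from the first encode and
	// crashed; afterwards both succeed.
	if _, err := conn.Exec(context.Background(), `select $1::uuid[]`, []string{id.String()}); err != nil {
		log.Fatal(err)
	}
}
```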