Previously, a failed connection could be put back in the pool, and the next
query attempted on it would fail immediately while trying to prepare the
query or reset the deadline. It wasn't clear whether the Query or Exec call
could safely be retried, since there was no way to know where it failed.
You can now call LastQuerySent: if it returns false, you are guaranteed that
the last call to Query(Ex)/Exec(Ex) did not get far enough to attempt to send
the query, so the call can safely be retried on a new connection.
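A minimal sketch of the retry pattern this enables, assuming a v3-style
*ConnPool with Acquire/Release; the helper itself is illustrative, not part
of pgx:

```go
package sketch

import "github.com/jackc/pgx"

// queryWithRetry shows one way an application might use LastQuerySent: if
// the first attempt fails and LastQuerySent reports false, the query never
// reached the point of being sent, so it is safe to retry on another
// connection. Pool handling here is deliberately simplistic.
func queryWithRetry(pool *pgx.ConnPool, sql string, args ...interface{}) (*pgx.Rows, error) {
	conn, err := pool.Acquire()
	if err != nil {
		return nil, err
	}
	defer pool.Release(conn)

	rows, err := conn.Query(sql, args...)
	if err == nil || conn.LastQuerySent() {
		// Either the query succeeded, or it may have reached the server and
		// cannot be blindly retried.
		return rows, err
	}

	// The query was never sent; retry once on a fresh connection.
	retryConn, acquireErr := pool.Acquire()
	if acquireErr != nil {
		return nil, acquireErr
	}
	defer pool.Release(retryConn)
	return retryConn.Query(sql, args...)
}
```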
This is used in the stdlib to return an ErrBadConn if a network error
occurred and the statement was not attempted.
Fixes #427
An incompletely read select followed by an insert would fail. This was
caused by the query methods in the non-batch path always calling
ensureConnectionReadyForQuery, which ensures that connections interrupted
by context cancellation remain usable. In the batch case, however, those
query methods are not called while reading the results. An incompletely
read select followed by another select would not manifest the error,
because reading the next result starts by reading until the row
description. But when an incomplete select (which even a successful
QueryRow is considered) is followed by an Exec, the CommandComplete message
from the select would be taken as the response to the subsequent Exec.
The fix is for the batch to track whether a CommandComplete is pending and
to read it before advancing to the next result. This is similar in
principle to ensureConnectionReadyForQuery, just specific to Batch.
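A sketch of the sequence that used to fail, assuming the v3 Batch API
(BeginBatch / Queue / Send / QueryResults / ExecResults); the queries, table
name, and nil Queue arguments are illustrative only:

```go
package sketch

import (
	"context"
	"log"

	"github.com/jackc/pgx"
)

// incompleteSelectThenExec reproduces the problematic pattern: a select that
// is not read to completion, followed by an Exec in the same batch.
func incompleteSelectThenExec(conn *pgx.Conn) {
	batch := conn.BeginBatch()
	batch.Queue("select n from generate_series(1, 10) n", nil, nil, nil)
	batch.Queue("insert into t(n) values(42)", nil, nil, nil)

	if err := batch.Send(context.Background(), nil); err != nil {
		log.Fatal(err)
	}

	rows, err := batch.QueryResults()
	if err != nil {
		log.Fatal(err)
	}
	rows.Next() // read only the first row, so the select is incompletely read
	rows.Close()

	// Before the fix, the select's still-pending CommandComplete was taken
	// as the response to this ExecResults call.
	if _, err := batch.ExecResults(); err != nil {
		log.Fatal(err)
	}

	if err := batch.Close(); err != nil {
		log.Fatal(err)
	}
}
```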
Since QueryRow delegates to Query, Query must always return a non-nil
*Rows to prevent a nil pointer dereference when the QueryRow caller calls
Scan(). This commit fixes the few returns in QueryEx that returned nil on
error rather than a *Rows with its err field set.
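A simplified model of the delegation (types reduced to what matters here;
not the library's verbatim source), showing where the panic would occur:

```go
package sketch

// Rows stands in for pgx's *Rows; only the deferred error matters here.
type Rows struct{ err error }

// Row wraps Rows the way a QueryRow-style result does.
type Row Rows

// queryRow models the delegation: the error from query is not inspected, it
// is deferred until Scan. That only works if query never returns a nil *Rows.
func queryRow(query func() (*Rows, error)) *Row {
	rows, _ := query()
	return (*Row)(rows)
}

func (r *Row) Scan(dest ...interface{}) error {
	rows := (*Rows)(r)
	if rows.err != nil { // nil pointer dereference if query returned nil instead of a *Rows with err set
		return rows.err
	}
	// ... decode the row's columns into dest ...
	return nil
}
```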
QueryEx was calling termContext and rows.fatal on an error from
sendPreparedQuery. rows.fatal calls rows.Close, which already calls
termContext. This sequence of calls caused underlying i/o timeout errors to
be returned instead of context errors.
In addition, a fatalWriteErr helper method was added to allow recovery from
write timeout errors where no bytes were written.
This should solve flickering errors on Travis.
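A sketch of the idea behind fatalWriteErr; the name comes from the text
above, but the exact logic here is an assumption:

```go
package sketch

import "net"

// fatalWriteErr reports whether a write error must be treated as fatal. A
// deadline (timeout) error that wrote zero bytes leaves the connection in a
// known state, so it can be recovered rather than killing the connection.
func fatalWriteErr(bytesWritten int, err error) bool {
	netErr, ok := err.(net.Error)
	if ok && netErr.Timeout() && bytesWritten == 0 {
		return false // nothing was sent; the connection remains usable
	}
	return true // partial write or non-timeout error: connection state is unknown
}
```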
Because reading a record type requires the decoder to be able to look up
oid to type mappings, and because types such as hstore have oids that are
not fixed between different PostgreSQL servers, it was necessary to
restructure the pgtype system so that all encoders and decoders take a
*ConnInfo that includes oid/name/type information.
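A sketch of the resulting encoder/decoder shape; the signatures mirror
pgtype's binary interfaces as described, but treat the details as an
assumption rather than the authoritative API:

```go
package sketch

import "github.com/jackc/pgx/pgtype"

// Passing *ConnInfo lets a decoder resolve OIDs that vary between servers
// (such as hstore's, or the element types inside a record) at decode time.
type BinaryDecoder interface {
	DecodeBinary(ci *pgtype.ConnInfo, src []byte) error
}

type BinaryEncoder interface {
	EncodeBinary(ci *pgtype.ConnInfo, buf []byte) ([]byte, error)
}
```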
Though this doesn't follow Go naming conventions exactly, it makes the names
more consistent with PostgreSQL and easier to read. For example, TIDOID
becomes TidOid. In addition, this is one less breaking change in the move to
V3.