Added a lot of documentation

scan-io
Jack Christensen 2014-07-12 21:17:38 -05:00
parent e33fb9d5d8
commit aff5043df9
7 changed files with 126 additions and 67 deletions

README.md

@@ -4,52 +4,74 @@ PostgreSQL client library for Go
## Description

pgx is a database connection library designed specifically for PostgreSQL. pgx offers an interface similar to database/sql that provides more performance and features than are available with the database/sql interface. It also can run as a database/sql compatible driver by importing github.com/jackc/pgx/stdlib.
## Features

Below are some of the standout features of pgx.
### Familiar Query Interface

pgx implements Query, QueryRow, and Scan in the familiar database/sql style.
```go
var name string
var weight int64
err := conn.QueryRow("select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
if err != nil {
	return err
}
```
pgx adds convenience to Query in that it is only necessary to call Close if you
want to ignore the rest of the rows. When Next has read all rows or an error
occurs, the rows are closed automatically.
```go
var sum int32
rows, err := conn.Query("select generate_series(1,$1)", 10)
if err != nil {
	t.Fatalf("conn.Query failed: %v", err)
}
for rows.Next() {
	var n int32
	rows.Scan(&n)
	sum += n
}
// rows.Close implicitly called when rows.Next is finished
if rows.Err() != nil {
	t.Fatalf("conn.Query failed: %v", rows.Err())
}
// ...
```
### Prepared Statements

Prepared statements are easy to use in pgx. Just call Prepare with the name of the statement and the SQL. To execute a prepared statement just pass the name of the statement into Query, QueryRow, or Exec as the SQL text. It will automatically detect that it is the name of a prepared statement and execute it.
```go
if _, err := conn.Prepare("getTime", "select now()"); err != nil {
	// handle err
}

var t time.Time
err := conn.QueryRow("getTime").Scan(&t)
if err != nil {
	return err
}
```
Prepared statements will use the binary transmission format when possible. This can substantially increase performance.
### Explicit Connection Pool
@@ -61,27 +83,39 @@ being made available in the connection pool. This is especially useful to
ensure all connections have the same prepared statements available or to
change any other connection settings.

It delegates Query, QueryRow, Exec, and Begin functions to an automatically checked out and released connection so you can avoid manually acquiring and releasing connections when you do not need that level of control.
```go
var name string
var weight int64
err := pool.QueryRow("select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
if err != nil {
	return err
}
```
### Transactions

Transactions are started by calling Begin or BeginIso. The BeginIso variant creates a transaction with a specified isolation level.
```go
tx, err := conn.Begin()
if err != nil {
	t.Fatalf("conn.Begin failed: %v", err)
}
_, err = tx.Exec("insert into foo(id) values (1)")
if err != nil {
	t.Fatalf("tx.Exec failed: %v", err)
}
err = tx.Commit()
if err != nil {
	t.Fatalf("tx.Commit failed: %v", err)
}
```
@@ -103,36 +137,25 @@ The pgx ConnConfig struct has a TLSConfig field. If this field is
nil, then TLS will be disabled. If it is present, then it will be used to
configure the TLS connection.
### Custom Type Support

pgx includes support for the common data types like integers, floats, strings, dates, and times that have direct mappings between Go and SQL. Support can be added for additional types like point, hstore, numeric, etc. that do not have direct mappings in Go by having those types implement Scanner, TextEncoder, and optionally BinaryEncoder. To enable the binary format for custom types, a prepared statement must be used and the field description of the returned field must have FormatCode set to BinaryFormatCode. See example_value_transcoder_test.go for an example of a custom type for the PostgreSQL point type.
### Null Mapping

pgx includes Null* types in a similar fashion to database/sql that implement the necessary interfaces to be encoded and scanned.
### Logging

pgx connections optionally accept a logger from the [log15 package](http://gopkg.in/inconshreveable/log15.v2).
## Testing
@@ -1,6 +1,9 @@
// Package pgx is a PostgreSQL database driver.
//
// pgx provides lower level access to PostgreSQL than the standard database/sql.
// It remains as similar to the database/sql interface as possible while
// providing better speed and access to PostgreSQL specific features. Import
// github.com/jackc/pgx/stdlib to use pgx as a database/sql compatible driver.
package pgx

import (
@@ -9,7 +9,7 @@ import (

type ConnPoolConfig struct {
	ConnConfig
	MaxConnections int               // max simultaneous connections to use, default 5, must be at least 2
	AfterConnect   func(*Conn) error // function to call on every new connection
}
type ConnPool struct {

@@ -124,7 +124,7 @@ func (p *ConnPool) Release(conn *Conn) {
	p.cond.Signal()
}

// Close ends the use of a connection pool by closing all underlying connections.
func (p *ConnPool) Close() {
	for i := 0; i < p.maxConnections; i++ {
		if c, err := p.Acquire(); err != nil {
@@ -133,6 +133,7 @@ func (p *ConnPool) Close() {
	}
}

// Stat returns connection pool statistics
func (p *ConnPool) Stat() (s ConnPoolStat) {
	p.cond.L.Lock()
	defer p.cond.L.Unlock()
@@ -168,6 +169,8 @@ func (p *ConnPool) Exec(sql string, arguments ...interface{}) (commandTag Comman
	return c.Exec(sql, arguments...)
}

// Query acquires a connection and delegates the call to that connection. When
// the *Rows is closed, the connection is released automatically.
func (p *ConnPool) Query(sql string, args ...interface{}) (*Rows, error) {
	c, err := p.Acquire()
	if err != nil {
@@ -185,6 +188,9 @@ func (p *ConnPool) Query(sql string, args ...interface{}) (*Rows, error) {
	return rows, nil
}

// QueryRow acquires a connection and delegates the call to that connection. The
// connection is released automatically after Scan is called on the returned
// *Row.
func (p *ConnPool) QueryRow(sql string, args ...interface{}) *Row {
	rows, _ := p.Query(sql, args...)
	return (*Row)(rows)
@@ -73,6 +73,8 @@ func newWriteBuf(buf []byte, t byte) *WriteBuf {
	return &WriteBuf{buf: buf, sizeIdx: 1}
}

// WriteBuf is used to build messages to send to the PostgreSQL server. It is
// used by the BinaryEncoder interface when implementing custom encoders.
type WriteBuf struct {
	buf     []byte
	sizeIdx int
@@ -6,8 +6,13 @@ import (
	"time"
)

// Row is a convenience wrapper over Rows that is returned by QueryRow.
type Row Rows

// Scan reads the values from the row into dest values positionally. dest can
// include pointers to core types and the Scanner interface. If no rows were
// found it returns ErrNoRows. If multiple rows are returned it ignores all but
// the first.
func (r *Row) Scan(dest ...interface{}) (err error) {
	rows := (*Rows)(r)
@@ -28,6 +33,9 @@ func (r *Row) Scan(dest ...interface{}) (err error) {
	return rows.Err()
}

// Rows is the result set returned from *Conn.Query. Rows must be closed before
// the *Conn can be used again. Rows are closed by explicitly calling Close(),
// calling Next() until it returns false, or when a fatal error occurs.
type Rows struct {
	pool *ConnPool
	conn *Conn
@@ -80,6 +88,10 @@ func (rows *Rows) readUntilReadyForQuery() {
	}
}

// Close closes the rows, making the connection ready for use again. It is not
// usually necessary to call Close explicitly because reading all returned rows
// with Next automatically closes Rows. It is safe to call Close after rows is
// already closed.
func (rows *Rows) Close() {
	if rows.closed {
		return
@@ -103,7 +115,8 @@ func (rows *Rows) abort(err error) {
	rows.close()
}

// Fatal signals an error occurred after the query was sent to the server. It
// closes the rows automatically.
func (rows *Rows) Fatal(err error) {
	if rows.err != nil {
		return
@@ -113,6 +126,9 @@ func (rows *Rows) Fatal(err error) {
	rows.Close()
}

// Next prepares the next row for reading. It returns true if there is another
// row and false if no more rows are available. It automatically closes rows
// when all rows are read.
func (rows *Rows) Next() bool {
	if rows.closed {
		return false
@@ -170,6 +186,8 @@ func (rows *Rows) nextColumn() (*ValueReader, bool) {
	return &rows.vr, true
}

// Scan reads the values from the current row into dest values positionally.
// dest can include pointers to core types and the Scanner interface.
func (rows *Rows) Scan(dest ...interface{}) (err error) {
	if len(rows.fields) != len(dest) {
		err = errors.New("Scan received wrong number of arguments")
@@ -177,7 +195,6 @@ func (rows *Rows) Scan(dest ...interface{}) (err error) {
		return err
	}

	for _, d := range dest {
		vr, _ := rows.nextColumn()
		switch d := d.(type) {
@@ -288,7 +305,9 @@ func (rows *Rows) Values() ([]interface{}, error) {
	return values, rows.Err()
}

// Query executes sql with args. If there is an error, the returned *Rows will
// be in an error state, so it is allowed to ignore the error returned from
// Query and handle it in *Rows instead.
func (c *Conn) Query(sql string, args ...interface{}) (*Rows, error) {
	c.rows = Rows{conn: c}
	rows := &c.rows
@@ -331,6 +350,9 @@ func (c *Conn) Query(sql string, args ...interface{}) (*Rows, error) {
	}
}

// QueryRow is a convenience wrapper over Query. Any error that occurs while
// querying is deferred until calling Scan on the returned *Row. That *Row will
// error with ErrNoRows if no rows are returned.
func (c *Conn) QueryRow(sql string, args ...interface{}) *Row {
	rows, _ := c.Query(sql, args...)
	return (*Row)(rows)
@@ -4,7 +4,7 @@ import (
	"errors"
)

// ValueReader is used by the Scanner interface to decode values.
type ValueReader struct {
	mr *msgReader
	fd *FieldDescription
@@ -11,6 +11,7 @@ import (
	"unsafe"
)

// PostgreSQL OIDs for common types
const (
	BoolOid  = 16
	ByteaOid = 17
@@ -26,11 +27,13 @@ const (
	TimestampTzOid = 1184
)

// PostgreSQL format codes
const (
	TextFormatCode   = 0
	BinaryFormatCode = 1
)

// EncodeText statuses
const (
	NullText = iota
	SafeText = iota
@@ -45,8 +48,8 @@ func (e SerializationError) Error() string {

// Scanner is an interface used to decode values from the PostgreSQL server.
type Scanner interface {
	// Scan MUST check r.Type().DataType and r.Type().FormatCode before decoding.
	// It should not assume that it was called with a value of the type it expects.
	Scan(r *ValueReader) error
}