Added a lot of documentation

scan-io
Jack Christensen 2014-07-12 21:17:38 -05:00
parent e33fb9d5d8
commit aff5043df9
7 changed files with 126 additions and 67 deletions

README.md

@ -4,52 +4,74 @@ PostgreSQL client library for Go
## Description
pgx is a database connection library designed specifically for PostgreSQL. pgx offers an interface similar to database/sql that provides more performance and features than are available through the database/sql interface. It can also run as a database/sql compatible driver by importing github.com/jackc/pgx/stdlib.
## Features
Below are some of the standout features of pgx.
### Familiar Query Interface
pgx implements Query, QueryRow, and Scan in the familiar database/sql style.
```go
var name string
var weight int64
err := conn.QueryRow("select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
if err != nil {
    return err
}
```
pgx adds convenience to Query in that it is only necessary to call Close if you
want to ignore the rest of the rows. When Next has read all rows or an error
occurs, the rows are closed automatically.
```go
var sum int32
rows, err := conn.Query("select generate_series(1,$1)", 10)
if err != nil {
    t.Fatalf("conn.Query failed: %v", err)
}
for rows.Next() {
var n int32
rows.Scan(&n)
sum += n
}
// rows.Close implicitly called when rows.Next is finished
if rows.Err() != nil {
    t.Fatalf("conn.Query failed: %v", rows.Err())
}
// ...
```
### Prepared Statements
Prepared statements are easy to use in pgx. Just call Prepare with the name of
the statement and the SQL. To execute a prepared statement just pass the name
of the statement into Query, QueryRow, or Exec as the SQL text. It will
automatically detect that it is the name of a prepared statement and execute
it.
```go
if _, err := conn.Prepare("getTime", "select now()"); err != nil {
    // handle err
}

var t time.Time
err := conn.QueryRow("getTime").Scan(&t)
if err != nil {
    return err
}
```
Prepared statements will use the binary transmission format when possible. This
can substantially increase performance.
### Explicit Connection Pool
@ -61,27 +83,39 @@ being made available in the connection pool. This is especially useful to
ensure all connections have the same prepared statements available or to
change any other connection settings.
It delegates Query, QueryRow, Exec, and Begin functions to an automatically
checked out and released connection so you can avoid manually acquiring and
releasing connections when you do not need that level of control.
```go
var name string
var weight int64
err := pool.QueryRow("select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
if err != nil {
    return err
}
```
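The AfterConnect hook mentioned above (to give every pooled connection the same prepared statements) can be sketched as a config fragment. The connConfig value, the statement name, and the NewConnPool constructor name are assumptions for illustration, not confirmed by this document.

```go
// A sketch of preparing the same statement on every new pooled connection.
// connConfig, the statement name, and NewConnPool are hypothetical.
afterConnect := func(conn *pgx.Conn) error {
    _, err := conn.Prepare("getWidget", "select name, weight from widgets where id=$1")
    return err
}

pool, err := pgx.NewConnPool(pgx.ConnPoolConfig{
    ConnConfig:     connConfig, // a previously built pgx.ConnConfig
    MaxConnections: 5,
    AfterConnect:   afterConnect,
})
if err != nil {
    return err
}
defer pool.Close()
```

With this in place, "getWidget" can be passed as the SQL text to pool.Query, pool.QueryRow, or pool.Exec on any connection the pool hands out.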
### Transactions
Transactions are started by calling Begin or BeginIso. The BeginIso variant
creates a transaction with a specified isolation level.
```go
tx, err := conn.Begin()
if err != nil {
    t.Fatalf("conn.Begin failed: %v", err)
}
_, err = tx.Exec("insert into foo(id) values (1)")
if err != nil {
    t.Fatalf("tx.Exec failed: %v", err)
}
err = tx.Commit()
if err != nil {
    t.Fatalf("tx.Commit failed: %v", err)
}
```
@ -103,36 +137,25 @@ The pgx ConnConfig struct has a TLSConfig field. If this field is
nil, then TLS will be disabled. If it is present, then it will be used to
configure the TLS connection.
### Custom Type Support
pgx includes support for the common data types like integers, floats, strings,
dates, and times that have direct mappings between Go and SQL. Support can be
added for additional types like point, hstore, numeric, etc. that do not have
direct mappings in Go by having the types implement Scanner, TextEncoder, and
optionally BinaryEncoder. To enable the binary format for custom types, a
prepared statement must be used and the field description of the returned field
must have FormatCode set to BinaryFormatCode. See
example_value_transcoder_test.go for an example of a custom type for the
PostgreSQL point type.
### Null Mapping
pgx includes Null* types in a similar fashion to database/sql that implement the
necessary interfaces to be encoded and scanned.
### Logging
pgx connections optionally accept a logger from the [log15 package](http://gopkg.in/inconshreveable/log15.v2).
## Testing

View File

@ -1,6 +1,9 @@
// Package pgx is a PostgreSQL database driver.
//
// It remains as similar to the database/sql interface as possible while
// providing better speed and access to PostgreSQL specific features. Import
// github.com/jackc/pgx/stdlib to use pgx as a database/sql compatible driver.
package pgx
import (

View File

@ -8,8 +8,8 @@ import (
type ConnPoolConfig struct {
ConnConfig
MaxConnections int // max simultaneous connections to use, default 5, must be at least 2
AfterConnect func(*Conn) error // function to call on every new connection
}
type ConnPool struct {
@ -124,7 +124,7 @@ func (p *ConnPool) Release(conn *Conn) {
p.cond.Signal()
}
// Close ends the use of a connection pool by closing all underlying connections.
func (p *ConnPool) Close() {
for i := 0; i < p.maxConnections; i++ {
if c, err := p.Acquire(); err != nil {
@ -133,6 +133,7 @@ func (p *ConnPool) Close() {
}
}
// Stat returns connection pool statistics
func (p *ConnPool) Stat() (s ConnPoolStat) {
p.cond.L.Lock()
defer p.cond.L.Unlock()
@ -168,6 +169,8 @@ func (p *ConnPool) Exec(sql string, arguments ...interface{}) (commandTag Comman
return c.Exec(sql, arguments...)
}
// Query acquires a connection and delegates the call to that connection. When
// *Rows are closed, the connection is released automatically.
func (p *ConnPool) Query(sql string, args ...interface{}) (*Rows, error) {
c, err := p.Acquire()
if err != nil {
@ -185,6 +188,9 @@ func (p *ConnPool) Query(sql string, args ...interface{}) (*Rows, error) {
return rows, nil
}
// QueryRow acquires a connection and delegates the call to that connection. The
// connection is released automatically after Scan is called on the returned
// *Row.
func (p *ConnPool) QueryRow(sql string, args ...interface{}) *Row {
rows, _ := p.Query(sql, args...)
return (*Row)(rows)

View File

@ -73,6 +73,8 @@ func newWriteBuf(buf []byte, t byte) *WriteBuf {
return &WriteBuf{buf: buf, sizeIdx: 1}
}
// WriteBuf is used to build messages to send to the PostgreSQL server. It is
// used by the BinaryEncoder interface when implementing custom encoders.
type WriteBuf struct {
buf []byte
sizeIdx int

View File

@ -6,8 +6,13 @@ import (
"time"
)
// Row is a convenience wrapper over Rows that is returned by QueryRow.
type Row Rows
// Scan reads the values from the row into dest values positionally. dest can
// include pointers to core types and the Scanner interface. If no rows were
// found it returns ErrNoRows. If multiple rows are returned it ignores all but
// the first.
func (r *Row) Scan(dest ...interface{}) (err error) {
rows := (*Rows)(r)
@ -28,6 +33,9 @@ func (r *Row) Scan(dest ...interface{}) (err error) {
return rows.Err()
}
// Rows is the result set returned from *Conn.Query. Rows must be closed before
// the *Conn can be used again. Rows are closed by explicitly calling Close(),
// calling Next() until it returns false, or when a fatal error occurs.
type Rows struct {
pool *ConnPool
conn *Conn
@ -80,6 +88,10 @@ func (rows *Rows) readUntilReadyForQuery() {
}
}
// Close closes the rows, making the connection ready for use again. It is not
// usually necessary to call Close explicitly because reading all returned rows
// with Next automatically closes Rows. It is safe to call Close after rows is
// already closed.
func (rows *Rows) Close() {
if rows.closed {
return
@ -103,7 +115,8 @@ func (rows *Rows) abort(err error) {
rows.close()
}
// Fatal signals that an error occurred after the query was sent to the server.
// It closes the rows automatically.
func (rows *Rows) Fatal(err error) {
if rows.err != nil {
return
@ -113,6 +126,9 @@ func (rows *Rows) Fatal(err error) {
rows.Close()
}
// Next prepares the next row for reading. It returns true if there is another
// row and false if no more rows are available. It automatically closes rows
// when all rows are read.
func (rows *Rows) Next() bool {
if rows.closed {
return false
@ -170,6 +186,8 @@ func (rows *Rows) nextColumn() (*ValueReader, bool) {
return &rows.vr, true
}
// Scan reads the values from the current row into dest values positionally.
// dest can include pointers to core types and the Scanner interface.
func (rows *Rows) Scan(dest ...interface{}) (err error) {
if len(rows.fields) != len(dest) {
err = errors.New("Scan received wrong number of arguments")
@ -177,7 +195,6 @@ func (rows *Rows) Scan(dest ...interface{}) (err error) {
return err
}
for _, d := range dest {
vr, _ := rows.nextColumn()
switch d := d.(type) {
@ -288,7 +305,9 @@ func (rows *Rows) Values() ([]interface{}, error) {
return values, rows.Err()
}
// Query executes sql with args. If there is an error, the returned *Rows will
// be in an error state, so it is allowed to ignore the error returned from
// Query and handle it in *Rows.
func (c *Conn) Query(sql string, args ...interface{}) (*Rows, error) {
c.rows = Rows{conn: c}
rows := &c.rows
@ -331,6 +350,9 @@ func (c *Conn) Query(sql string, args ...interface{}) (*Rows, error) {
}
}
// QueryRow is a convenience wrapper over Query. Any error that occurs while
// querying is deferred until calling Scan on the returned *Row. That *Row will
// error with ErrNoRows if no rows are returned.
func (c *Conn) QueryRow(sql string, args ...interface{}) *Row {
rows, _ := c.Query(sql, args...)
return (*Row)(rows)

View File

@ -4,7 +4,7 @@ import (
"errors"
)
// ValueReader is used by the Scanner interface to decode values.
type ValueReader struct {
mr *msgReader
fd *FieldDescription

View File

@ -11,6 +11,7 @@ import (
"unsafe"
)
// PostgreSQL oids for common types
const (
BoolOid = 16
ByteaOid = 17
@ -26,11 +27,13 @@ const (
TimestampTzOid = 1184
)
// PostgreSQL format codes
const (
TextFormatCode = 0
BinaryFormatCode = 1
)
// EncodeText statuses
const (
NullText = iota
SafeText = iota
@ -45,8 +48,8 @@ func (e SerializationError) Error() string {
// Scanner is an interface used to decode values from the PostgreSQL server.
type Scanner interface {
// Scan MUST check r.Type().DataType and r.Type().FormatCode before decoding.
// It should not assume what type of value it was called on.
Scan(r *ValueReader) error
}