## The Query Chunks Feature

It's very unusual to need to load a set of records from the database
that is too big to fit in memory, e.g. loading all the users and
sending them somewhere. But it might happen.

For these cases, it's best to load the data in chunks, so that we can
process a substantial amount of data at a time without ever overloading
our memory capacity. For this use-case we have a specific function
called `QueryChunks`:

```golang
err = db.QueryChunks(ctx, ksql.ChunkParser{
	Query:     "SELECT * FROM users WHERE type = ?",
	Params:    []interface{}{usersType},
	ChunkSize: 100, // number of records loaded per call to ForEachChunk
	ForEachChunk: func(users []User) error {
		err := sendUsersSomewhere(users)
		if err != nil {
			// This will abort the QueryChunks loop and return this error
			return err
		}
		return nil
	},
})
if err != nil {
	panic(err.Error())
}
```
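
The example above assumes a few declarations that are not shown on this
page. Here is a minimal sketch of what they might look like; the `User`
fields, the `usersType` value and the `sendUsersSomewhere` helper are
hypothetical, only the `ksql` struct tags come from the library:

```golang
// Hypothetical struct mapped with the library's `ksql` tags
// (the column names below are just an example):
type User struct {
	ID   int    `ksql:"id"`
	Name string `ksql:"name"`
	Type string `ksql:"type"`
}

// Hypothetical filter value and helper used by the example:
var usersType = "admin"

func sendUsersSomewhere(users []User) error {
	// placeholder: process or ship this chunk of users here
	return nil
}
```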

Its signature is more complicated than those of the other two Query\*
methods (`Query` and `QueryOne`), so it is advisable to prefer those
two whenever possible, reserving this one for the rare use-cases where
you actually need to load big sections of the database into memory.
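
For comparison, when the result set comfortably fits in memory the
plain `Query` method is simpler: it loads everything into a slice in a
single call. A minimal sketch, reusing the hypothetical `User` struct
and helper from the previous example:

```golang
// Load everything at once; only appropriate when the result
// set is small enough to fit in memory:
var users []User
err := db.Query(ctx, &users, "SELECT * FROM users WHERE type = ?", usersType)
if err != nil {
	panic(err.Error())
}
if err := sendUsersSomewhere(users); err != nil {
	panic(err.Error())
}
```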