pub struct ConcurrentBlockCursor<C, B> { /* private fields */ }
A wrapper around block cursors which fetches data in a dedicated system thread. Intended to fetch data batch by batch while the application processes the previously fetched batch. Works best with a double-buffering strategy using two fetch buffers.
§Example
use odbc_api::{
Environment, buffers::{ColumnarAnyBuffer, BufferDesc}, Cursor, ConcurrentBlockCursor
};
use std::sync::OnceLock;
// We want to use the ODBC environment from another system thread without a scope, so it
// needs to be static.
static ENV: OnceLock<Environment> = OnceLock::new();
let env = Environment::new()?;
let conn = ENV.get_or_init(|| env).connect_with_connection_string(
"Driver={ODBC Driver 18 for SQL Server};Server=localhost;UID=SA;PWD=My@Test@Password1;",
Default::default())?;
// We must use into_cursor to create a statement handle with static lifetime, which also owns
// the connection. This way we can send it to another thread safely.
let cursor = conn.into_cursor("SELECT * FROM very_big_table", ())?.unwrap();
// Batch size and buffer description. Here we assume there is only one integer column
let buffer_a = ColumnarAnyBuffer::from_descs(1000, [BufferDesc::I32 { nullable: false }]);
let mut buffer_b = ColumnarAnyBuffer::from_descs(1000, [BufferDesc::I32 { nullable: false }]);
// And now we have a sendable block cursor with static lifetime
let block_cursor = cursor.bind_buffer(buffer_a)?;
let mut cbc = ConcurrentBlockCursor::from_block_cursor(block_cursor);
while cbc.fetch_into(&mut buffer_b)? {
// Process the batch in buffer_b concurrently with the fetching of the next batch
}
§Implementations
impl<C, B> ConcurrentBlockCursor<C, B>
pub fn from_block_cursor(block_cursor: BlockCursor<C, B>) -> Self
Construct a new concurrent block cursor.
§Parameters
block_cursor: Taking a BlockCursor instead of a Cursor allows the already bound resources to be reused when starting from a sequential Cursor, as we do not need to unbind and rebind the cursor.
pub fn into_cursor(self) -> Result<C, Error>
Join the fetch thread and yield the cursor back.
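For illustration, a minimal sketch continuing the example above once the result set is drained; into_cursor consumes the concurrent cursor and blocks until the fetch thread has joined. The buffer name another_buffer is hypothetical.
// After the last batch has been processed, recover the plain cursor.
let cursor = cbc.into_cursor()?;
// Assuming `C` implements `Cursor`, it could now be reused, e.g. to bind another buffer:
// let block_cursor = cursor.bind_buffer(another_buffer)?;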
impl<C, B> ConcurrentBlockCursor<C, B>
pub fn fetch(&mut self) -> Result<Option<B>, Error>
Receive the current batch and take ownership of its buffer. Returns None if the cursor is already consumed, or if a previous call resulted in an error. This method blocks until a new batch is available. For new batches to become available, new buffers must be sent to the fetch thread so it can fill them. Calling fetch repeatedly without calling Self::fill in between may therefore deadlock (see the sketch after Self::fill below).
pub fn fill(&mut self, buffer: B)
Send a buffer to the fetch thread so it can be filled and later retrieved using either fetch or fetch_into.
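For illustration, a minimal sketch of a manual fetch/fill loop, assuming cbc was constructed as in the example above and a spare buffer with the same layout is created here; the processing step is a placeholder.
// Hand the fetch thread a second buffer so it can keep fetching while we process.
let spare = ColumnarAnyBuffer::from_descs(1000, [BufferDesc::I32 { nullable: false }]);
cbc.fill(spare);
// `fetch` blocks until a filled buffer is available and hands over its ownership.
while let Some(batch) = cbc.fetch()? {
    // ... process `batch` here ...
    // Return the buffer so the fetch thread can fill it with the next batch.
    cbc.fill(batch);
}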
pub fn fetch_into(&mut self, buffer: &mut B) -> Result<bool, Error>
Fetches values from the ODBC data source into buffer. Values are streamed batch by batch in order to avoid reallocation of the buffers used in transit. This call blocks until a new batch is ready. This method combines Self::fetch and Self::fill.
§Parameters
buffer: A columnar any buffer which can bind to the cursor wrapped by this instance. After the call the reference no longer points to the instance passed in, but to the buffer which was bound to the cursor in order to fetch the last batch. The buffer passed into this method is then used to fetch the next batch. This makes the method ideal for implementing concurrent fetching with two buffers: one being written to and one being read, which flip their roles between batches (also called double buffering).
§Return
true: Fetched a batch from the data source. The contents of that batch are now in buffer.
false: No batch could be fetched. The result set is consumed completely.
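For illustration, a sketch of reading the filled buffer after each successful fetch_into, under the assumption that the buffer layout is the single non-nullable I32 column from the example above; accessing the column view via column and matching on AnySlice follows the ColumnarAnyBuffer API.
use odbc_api::buffers::AnySlice;

while cbc.fetch_into(&mut buffer_b)? {
    // The buffer layout declared a single non-nullable i32 column at index 0.
    match buffer_b.column(0) {
        AnySlice::I32(values) => {
            for value in values {
                println!("{value}");
            }
        }
        _ => unreachable!("layout declares exactly one non-nullable i32 column"),
    }
}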