odbc_api::buffers

Type Alias TextRowSet

pub type TextRowSet = ColumnarBuffer<TextColumn<u8>>;

This row set binds a string buffer to each column, large enough to hold the maximum-length string representation of each element in the row set at once.

§Example

//! A program executing a query and printing the result as csv to standard out. Requires
//! the `anyhow` and `csv` crates.

use anyhow::Error;
use odbc_api::{buffers::TextRowSet, Cursor, Environment, ConnectionOptions, ResultSetMetadata};
use std::io::stdout;

/// Maximum number of rows fetched with one row set. Fetching batches of rows is usually much
/// faster than fetching individual rows.
const BATCH_SIZE: usize = 5000;

fn main() -> Result<(), Error> {
    // Write csv to standard out
    let out = stdout();
    let mut writer = csv::Writer::from_writer(out);

    // We know this is going to be the only ODBC environment in the entire process, so this is
    // safe.
    let environment = unsafe { Environment::new() }?;

    // Connect using a DSN. Alternatively we could have used a connection string
    let mut connection = environment.connect(
        "DataSourceName",
        "Username",
        "Password",
        ConnectionOptions::default(),
    )?;

    // Execute a one-off query without any parameters.
    match connection.execute("SELECT * FROM TableName", ())? {
        Some(mut cursor) => {
            // Write the column names to stdout
            let headline: Vec<String> = cursor.column_names()?.collect::<Result<_, _>>()?;
            writer.write_record(headline)?;

            // Use schema in cursor to initialize a text buffer large enough to hold the largest
            // possible strings for each column up to an upper limit of 4KiB
            let mut buffers = TextRowSet::for_cursor(BATCH_SIZE, &mut cursor, Some(4096))?;
            // Bind the buffer to the cursor. It is now being filled with every call to fetch.
            let mut row_set_cursor = cursor.bind_buffer(&mut buffers)?;

            // Iterate over batches
            while let Some(batch) = row_set_cursor.fetch()? {
                // Within a batch, iterate over every row
                for row_index in 0..batch.num_rows() {
                    // Within a row iterate over every column
                    let record = (0..batch.num_cols()).map(|col_index| {
                        batch
                            .at(col_index, row_index)
                            .unwrap_or(&[])
                    });
                    // Writes row as csv
                    writer.write_record(record)?;
                }
            }
        }
        None => {
            eprintln!(
                "Query came back empty. No output has been created."
            );
        }
    }

    Ok(())
}

§Aliased Type

struct TextRowSet { /* private fields */ }

§Implementations

impl TextRowSet

pub fn for_cursor(
    batch_size: usize,
    cursor: &mut impl ResultSetMetadata,
    max_str_limit: Option<usize>,
) -> Result<TextRowSet, Error>

The resulting text buffer is not in any way tied to the cursor, other than that its buffer sizes are tailor-fitted to the result set the cursor is iterating over.

This method performs fallible buffer allocations if no upper bound is set, so you may see a speedup by setting an upper bound using max_str_limit.

§Parameters
  • batch_size: The maximum number of rows the buffer is able to hold.
  • cursor: Used to query the display size for each column of the row set. For character data the length in characters is multiplied by 4 in order to have enough space for 4 byte utf-8 characters. This is a pessimization for some data sources (e.g. SQLite 3) which interpret the size of a VARCHAR(5) column as 5 bytes rather than 5 characters.
  • max_str_limit: Some queries make it hard to estimate a sensible upper bound and sometimes drivers are just not that good at it. This argument allows you to specify an upper bound for the length of character data. Any size reported by the driver is capped to this value. In case the upper bound cannot be inferred from the metadata reported by the driver, the element size is set to this upper bound, too. See the sketch after this list.
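
A minimal usage sketch; the helper name buffer_for and the concrete numbers are illustrative, not part of the API:

use odbc_api::{buffers::TextRowSet, Error, ResultSetMetadata};

/// Allocates a text buffer for up to 1000 rows. Element sizes reported by the
/// driver are capped at 4 KiB; columns without an inferable upper bound also
/// get 4 KiB elements.
fn buffer_for(cursor: &mut impl ResultSetMetadata) -> Result<TextRowSet, Error> {
    TextRowSet::for_cursor(1000, cursor, Some(4096))
}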

pub fn from_max_str_lens(
    row_capacity: usize,
    max_str_lengths: impl IntoIterator<Item = usize>,
) -> Result<Self, Error>

Creates a text buffer large enough to hold row_capacity rows, with one column for each item in max_str_lengths, each of the respective maximum string length.
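
A minimal sketch for a schema known up front; the function name and sizes are illustrative:

use odbc_api::{buffers::TextRowSet, Error};

/// Room for 1000 rows with two text columns: the first holding elements of up
/// to 255 bytes, the second of up to 10 bytes.
fn name_and_zip_buffer() -> Result<TextRowSet, Error> {
    TextRowSet::from_max_str_lens(1000, [255, 10])
}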

pub fn at(&self, buffer_index: usize, row_index: usize) -> Option<&[u8]>

Access the element at the specified position in the row set.
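
NULL values are reported as None, which distinguishes them from empty strings. A small sketch; the helper name and placeholder are illustrative:

use odbc_api::buffers::TextRowSet;

/// Returns the raw bytes of a cell, or a placeholder if the value is NULL.
fn cell_or_placeholder<'a>(buffer: &'a TextRowSet, col: usize, row: usize) -> &'a [u8] {
    buffer.at(col, row).unwrap_or(b"<NULL>".as_slice())
}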

pub fn at_as_str(
    &self,
    col_index: usize,
    row_index: usize,
) -> Result<Option<&str>, Utf8Error>

Access the element at the specified position in the row set as a &str. Returns a Utf8Error if the element is not valid UTF-8.
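
A sketch handling all three outcomes; the function name is illustrative:

use odbc_api::buffers::TextRowSet;

/// Prints a cell, distinguishing NULL and non-UTF-8 data from regular text.
fn print_cell(buffer: &TextRowSet, col_index: usize, row_index: usize) {
    match buffer.at_as_str(col_index, row_index) {
        Ok(Some(text)) => println!("{text}"),
        Ok(None) => println!("NULL"),
        Err(_utf8_error) => println!("<not valid utf-8>"),
    }
}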

pub fn indicator_at(&self, buf_index: usize, row_index: usize) -> Indicator

Indicator value at the specified position. Useful to detect truncation of data.

§Example
use odbc_api::buffers::{Indicator, TextRowSet};

fn is_truncated(buffer: &TextRowSet, col_index: usize, row_index: usize) -> bool {
    match buffer.indicator_at(col_index, row_index) {
        // There is no value, therefore no value failed to fit into the column buffer.
        Indicator::Null => false,
        // The value did not fit into the column buffer, we do not even know, by how much.
        Indicator::NoTotal => true,
        Indicator::Length(total_length) => {
            // If the maximum string length is shorter than the value's total length, the
            // value has been truncated to fit into the buffer.
            buffer.max_len(col_index) < total_length
        }
    }
}

pub fn max_len(&self, buf_index: usize) -> usize

Maximum length in bytes of elements in a column.
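
For buffers created with from_max_str_lens, this should report the respective length passed at construction. A sketch, assuming the constructor stores the lengths unchanged:

use odbc_api::{buffers::TextRowSet, Error};

fn main() -> Result<(), Error> {
    // Two columns holding elements of at most 50 and 20 bytes.
    let buffer = TextRowSet::from_max_str_lens(10, [50, 20])?;
    assert_eq!(50, buffer.max_len(0));
    assert_eq!(20, buffer.max_len(1));
    Ok(())
}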