Struct datafusion_common::config::ParquetOptions

pub struct ParquetOptions {
    pub enable_page_index: bool,
    pub pruning: bool,
    pub skip_metadata: bool,
    pub metadata_size_hint: Option<usize>,
    pub pushdown_filters: bool,
    pub reorder_filters: bool,
    pub data_pagesize_limit: usize,
    pub write_batch_size: usize,
    pub writer_version: String,
    pub compression: Option<String>,
    pub dictionary_enabled: Option<bool>,
    pub dictionary_page_size_limit: usize,
    pub statistics_enabled: Option<String>,
    pub max_statistics_size: Option<usize>,
    pub max_row_group_size: usize,
    pub created_by: String,
    pub column_index_truncate_length: Option<usize>,
    pub data_page_row_count_limit: usize,
    pub encoding: Option<String>,
    pub bloom_filter_on_read: bool,
    pub bloom_filter_on_write: bool,
    pub bloom_filter_fpp: Option<f64>,
    pub bloom_filter_ndv: Option<u64>,
    pub allow_single_file_parallelism: bool,
    pub maximum_parallel_row_group_writers: usize,
    pub maximum_buffered_record_batches_per_stream: usize,
    pub schema_force_string_view: bool,
}

Options for reading and writing parquet files

See also: SessionConfig
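For illustration, a minimal sketch of constructing these options in Rust, overriding a couple of the reading knobs described below and letting Default supply the rest (the override values are illustrative, not recommendations):

use datafusion_common::config::ParquetOptions;

// Start from the defaults and override selected fields
// using struct update syntax.
let options = ParquetOptions {
    enable_page_index: true,
    pruning: true,
    ..Default::default()
};
assert!(options.enable_page_index);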

Fields

enable_page_index: bool

(reading) If true, reads the Parquet data page level metadata (the Page Index), if present, to reduce the I/O and number of rows decoded.

pruning: bool

(reading) If true, the parquet reader attempts to skip entire row groups based on the predicate in the query and the metadata (min/max values) stored in the parquet file.

skip_metadata: bool

(reading) If true, the parquet reader skips the optional embedded metadata that may be in the file Schema. This setting can help avoid schema conflicts when querying multiple parquet files with schemas containing compatible types but different metadata.

metadata_size_hint: Option<usize>

(reading) If specified, the parquet reader will try to fetch the last size_hint bytes of the parquet file optimistically. If not specified, two reads are required: one read to fetch the 8-byte parquet footer and another to fetch the metadata length encoded in the footer.
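For example, a hint large enough to cover the footer and metadata lets the reader fetch both in a single request (the 1 MiB figure below is an illustrative guess, not a recommendation):

use datafusion_common::config::ParquetOptions;

let options = ParquetOptions {
    // Optimistically fetch the last 1 MiB of the file; if the
    // metadata fits, the second read described above is avoided.
    metadata_size_hint: Some(1024 * 1024),
    ..Default::default()
};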

pushdown_filters: bool

(reading) If true, filter expressions are applied during the parquet decoding operation to reduce the number of rows decoded. This optimization is sometimes called “late materialization”.

reorder_filters: bool

(reading) If true, filter expressions evaluated during the parquet decoding operation will be reordered heuristically to minimize the cost of evaluation. If false, the filters are applied in the same order as written in the query.

data_pagesize_limit: usize

(writing) Sets best effort maximum size of a data page in bytes.

write_batch_size: usize

(writing) Sets write_batch_size in bytes.

writer_version: String

(writing) Sets the parquet writer version. Valid values are “1.0” and “2.0”.

compression: Option<String>

(writing) Sets the default parquet compression codec. Valid values are: uncompressed, snappy, gzip(level), lzo, brotli(level), lz4, zstd(level), and lz4_raw. These values are not case sensitive. If NULL, uses the default parquet writer setting.

Note that this default setting is not the same as the default parquet writer setting.
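Since the codec is passed as a string, a leveled codec is written in the codec(level) form listed above; a short sketch (zstd level 3 here is illustrative):

use datafusion_common::config::ParquetOptions;

let options = ParquetOptions {
    // Codec name plus level, matching the "zstd(level)" syntax.
    compression: Some("zstd(3)".to_string()),
    ..Default::default()
};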

dictionary_enabled: Option<bool>

(writing) Sets if dictionary encoding is enabled. If NULL, uses the default parquet writer setting.

dictionary_page_size_limit: usize

(writing) Sets best effort maximum dictionary page size, in bytes.

statistics_enabled: Option<String>

(writing) Sets if statistics are enabled for any column. Valid values are: “none”, “chunk”, and “page”. These values are not case sensitive. If NULL, uses the default parquet writer setting.

max_statistics_size: Option<usize>

(writing) Sets the maximum statistics size for any column. If NULL, uses the default parquet writer setting.

max_row_group_size: usize

(writing) Target maximum number of rows in each row group (defaults to 1M rows). Writing larger row groups requires more memory to write, but can get better compression and be faster to read.
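A sketch combining the writer sizing knobs described above; all values are illustrative, and the page limits are best effort, as noted:

use datafusion_common::config::ParquetOptions;

let options = ParquetOptions {
    max_row_group_size: 512 * 1024,          // rows per row group
    data_pagesize_limit: 1024 * 1024,        // bytes per data page (best effort)
    dictionary_page_size_limit: 1024 * 1024, // bytes per dictionary page (best effort)
    ..Default::default()
};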

created_by: String

(writing) Sets the “created by” property.

column_index_truncate_length: Option<usize>

(writing) Sets the column index truncate length.

data_page_row_count_limit: usize

(writing) Sets best effort maximum number of rows in a data page.

encoding: Option<String>

(writing) Sets the default encoding for any column. Valid values are: plain, plain_dictionary, rle, bit_packed, delta_binary_packed, delta_length_byte_array, delta_byte_array, rle_dictionary, and byte_stream_split. These values are not case sensitive. If NULL, uses the default parquet writer setting.

bloom_filter_on_read: bool

(reading) Use any available bloom filters when reading parquet files.

bloom_filter_on_write: bool

(writing) Write bloom filters for all columns when creating parquet files.

bloom_filter_fpp: Option<f64>

(writing) Sets the bloom filter false positive probability. If NULL, uses the default parquet writer setting.

bloom_filter_ndv: Option<u64>

(writing) Sets the bloom filter number of distinct values. If NULL, uses the default parquet writer setting.
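A sketch enabling bloom filters on write and tuning the two parameters above (the fpp and ndv values are illustrative):

use datafusion_common::config::ParquetOptions;

let options = ParquetOptions {
    bloom_filter_on_write: true,
    bloom_filter_fpp: Some(0.01),    // ~1% false positive rate
    bloom_filter_ndv: Some(100_000), // expected distinct values per column
    ..Default::default()
};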

allow_single_file_parallelism: bool

(writing) Controls whether DataFusion will attempt to speed up writing parquet files by serializing them in parallel. Each column in each row group in each output file is serialized in parallel, leveraging a maximum possible core count of n_files * n_row_groups * n_columns.

maximum_parallel_row_group_writers: usize

(writing) By default the parallel parquet writer is tuned for minimum memory usage in a streaming execution plan. You may see a performance benefit when writing large parquet files by increasing maximum_parallel_row_group_writers and maximum_buffered_record_batches_per_stream if your system has idle cores and can tolerate additional memory usage. Boosting these values is likely worthwhile when writing out already in-memory data, such as from a cached data frame.

maximum_buffered_record_batches_per_stream: usize

(writing) By default the parallel parquet writer is tuned for minimum memory usage in a streaming execution plan. You may see a performance benefit when writing large parquet files by increasing maximum_parallel_row_group_writers and maximum_buffered_record_batches_per_stream if your system has idle cores and can tolerate additional memory usage. Boosting these values is likely worthwhile when writing out already in-memory data, such as from a cached data frame.
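Following that advice, a sketch that boosts both values when writing already in-memory data (the numbers are illustrative; higher values trade memory for parallelism):

use datafusion_common::config::ParquetOptions;

let options = ParquetOptions {
    allow_single_file_parallelism: true,
    maximum_parallel_row_group_writers: 4,
    maximum_buffered_record_batches_per_stream: 32,
    ..Default::default()
};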

schema_force_string_view: bool

(reading) If true, the parquet reader will read columns of Utf8/Utf8Large with Utf8View, and Binary/BinaryLarge with BinaryView.

Trait Implementations

impl Clone for ParquetOptions

fn clone(&self) -> ParquetOptions

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.
impl ConfigField for ParquetOptions

fn set(&mut self, key: &str, value: &str) -> Result<()>

fn visit<V: Visit>(&self, v: &mut V, key_prefix: &str, _description: &'static str)
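A sketch of setting a field through the string-based ConfigField interface rather than direct field access, assuming here that the key is the bare field name (as the generated config code uses):

use datafusion_common::config::{ConfigField, ParquetOptions};

let mut options = ParquetOptions::default();
// Values arrive as strings (e.g. from SQL SET statements)
// and are parsed into the field's native type.
options.set("pushdown_filters", "true").unwrap();
assert!(options.pushdown_filters);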

impl Debug for ParquetOptions

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl Default for ParquetOptions

fn default() -> Self

Returns the “default value” for a type.

impl PartialEq for ParquetOptions

fn eq(&self, other: &ParquetOptions) -> bool

Tests for self and other values to be equal, and is used by ==.

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl StructuralPartialEq for ParquetOptions

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where T: 'static + ?Sized

impl<T> Borrow<T> for T where T: ?Sized

impl<T> BorrowMut<T> for T where T: ?Sized

impl<T> CloneToUninit for T where T: Clone

impl<T> From<T> for T

impl<T, U> Into<U> for T where U: From<T>

impl<T> ToOwned for T where T: Clone

impl<T, U> TryFrom<U> for T where U: Into<T>

impl<T, U> TryInto<U> for T where U: TryFrom<T>

impl<T> Allocation for T where T: RefUnwindSafe + Send + Sync