Struct datafusion::common::config::ParquetOptions
pub struct ParquetOptions {
pub enable_page_index: bool,
pub pruning: bool,
pub skip_metadata: bool,
pub metadata_size_hint: Option<usize>,
pub pushdown_filters: bool,
pub reorder_filters: bool,
pub data_pagesize_limit: usize,
pub write_batch_size: usize,
pub writer_version: String,
pub compression: Option<String>,
pub dictionary_enabled: Option<bool>,
pub dictionary_page_size_limit: usize,
pub statistics_enabled: Option<String>,
pub max_statistics_size: Option<usize>,
pub max_row_group_size: usize,
pub created_by: String,
pub column_index_truncate_length: Option<usize>,
pub data_page_row_count_limit: usize,
pub encoding: Option<String>,
pub bloom_filter_on_read: bool,
pub bloom_filter_on_write: bool,
pub bloom_filter_fpp: Option<f64>,
pub bloom_filter_ndv: Option<u64>,
pub allow_single_file_parallelism: bool,
pub maximum_parallel_row_group_writers: usize,
pub maximum_buffered_record_batches_per_stream: usize,
pub schema_force_string_view: bool,
}
Options for reading and writing parquet files
See also: SessionConfig
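Since ParquetOptions implements Default (see the trait implementations below) and all fields are public, individual options can be overridden with struct update syntax. A minimal sketch; the values shown are illustrative, not recommendations:

use datafusion::common::config::ParquetOptions;

// Start from the defaults and override selected fields.
let opts = ParquetOptions {
    enable_page_index: true,
    pushdown_filters: true,
    ..Default::default()
};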
Fields
enable_page_index: bool
(reading) If true, reads the Parquet data page level metadata (the Page Index), if present, to reduce the I/O and number of rows decoded.
pruning: bool
(reading) If true, the parquet reader attempts to skip entire row groups based on the predicate in the query and the metadata (min/max values) stored in the parquet file
skip_metadata: bool
(reading) If true, the parquet reader skips the optional embedded metadata that may be in the file Schema. This setting can help avoid schema conflicts when querying multiple parquet files with schemas containing compatible types but different metadata
metadata_size_hint: Option<usize>
(reading) If specified, the parquet reader will try to fetch the last size_hint bytes of the parquet file optimistically. If not specified, two reads are required: one to fetch the 8-byte parquet footer and another to fetch the metadata length encoded in the footer
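A sketch of setting the hint large enough that the footer and metadata are usually fetched in a single request; the 512 KiB figure is an illustrative guess, not a tuned recommendation:

use datafusion::common::config::ParquetOptions;

let mut opts = ParquetOptions::default();
// One optimistic read of the file tail instead of two separate footer reads.
opts.metadata_size_hint = Some(512 * 1024);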
pushdown_filters: bool
(reading) If true, filter expressions are applied during the parquet decoding operation to reduce the number of rows decoded. This optimization is sometimes called “late materialization”.
reorder_filters: bool
(reading) If true, filter expressions evaluated during the parquet decoding operation will be reordered heuristically to minimize the cost of evaluation. If false, the filters are applied in the same order as written in the query
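Both settings are typically toggled through the session configuration rather than by building ParquetOptions directly. The sketch below assumes the standard datafusion.execution.parquet.* key names and the SessionConfig/SessionContext API; exact method names can vary between DataFusion versions:

use datafusion::prelude::*;

// Enable late materialization and heuristic filter reordering.
let config = SessionConfig::new()
    .set_bool("datafusion.execution.parquet.pushdown_filters", true)
    .set_bool("datafusion.execution.parquet.reorder_filters", true);
let ctx = SessionContext::new_with_config(config);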
data_pagesize_limit: usize
(writing) Sets best effort maximum size of a data page in bytes
write_batch_size: usize
(writing) Sets write_batch_size in bytes
writer_version: String
(writing) Sets the parquet writer version. Valid values are “1.0” and “2.0”
compression: Option<String>
(writing) Sets default parquet compression codec. Valid values are: uncompressed, snappy, gzip(level), lzo, brotli(level), lz4, zstd(level), and lz4_raw. These values are not case sensitive. If NULL, uses default parquet writer setting
Note that this default setting is not the same as the default parquet writer setting.
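For instance, to request zstd with an explicit level using the codec(level) syntax described above (level 3 here is just an example):

use datafusion::common::config::ParquetOptions;

let opts = ParquetOptions {
    compression: Some("zstd(3)".to_string()),
    ..Default::default()
};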
dictionary_enabled: Option<bool>
(writing) Sets if dictionary encoding is enabled. If NULL, uses default parquet writer setting
dictionary_page_size_limit: usize
(writing) Sets best effort maximum dictionary page size, in bytes
statistics_enabled: Option<String>
(writing) Sets if statistics are enabled for any column. Valid values are: “none”, “chunk”, and “page”. These values are not case sensitive. If NULL, uses default parquet writer setting
max_statistics_size: Option<usize>
(writing) Sets max statistics size for any column. If NULL, uses default parquet writer setting
max_row_group_size: usize
(writing) Target maximum number of rows in each row group (defaults to 1M rows). Writing larger row groups requires more memory to write, but can get better compression and be faster to read.
created_by: String
(writing) Sets the “created by” property
column_index_truncate_length: Option<usize>
(writing) Sets column index truncate length
data_page_row_count_limit: usize
(writing) Sets best effort maximum number of rows in a data page
encoding: Option<String>
(writing) Sets default encoding for any column. Valid values are: plain, plain_dictionary, rle, bit_packed, delta_binary_packed, delta_length_byte_array, delta_byte_array, rle_dictionary, and byte_stream_split. These values are not case sensitive. If NULL, uses default parquet writer setting
bloom_filter_on_read: bool
(reading) Use any available bloom filters when reading parquet files
bloom_filter_on_write: bool
(writing) Write bloom filters for all columns when creating parquet files
bloom_filter_fpp: Option<f64>
(writing) Sets bloom filter false positive probability. If NULL, uses default parquet writer setting
bloom_filter_ndv: Option<u64>
(writing) Sets bloom filter number of distinct values. If NULL, uses default parquet writer setting
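A sketch combining the three write-side bloom filter knobs; the false positive probability and distinct-value count are illustrative placeholders:

use datafusion::common::config::ParquetOptions;

let opts = ParquetOptions {
    bloom_filter_on_write: true,
    bloom_filter_fpp: Some(0.01),    // ~1% false positives
    bloom_filter_ndv: Some(100_000), // expected distinct values per column
    ..Default::default()
};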
allow_single_file_parallelism: bool
(writing) Controls whether DataFusion will attempt to speed up writing parquet files by serializing them in parallel. Each column in each row group in each output file is serialized in parallel, leveraging a maximum possible core count of n_files * n_row_groups * n_columns.
maximum_parallel_row_group_writers: usize
(writing) By default, the parallel parquet writer is tuned for minimum memory usage in a streaming execution plan. You may see a performance benefit when writing large parquet files by increasing maximum_parallel_row_group_writers and maximum_buffered_record_batches_per_stream if your system has idle cores and can tolerate additional memory usage. Boosting these values is likely worthwhile when writing out already in-memory data, such as from a cached data frame.
maximum_buffered_record_batches_per_stream: usize
(writing) By default, the parallel parquet writer is tuned for minimum memory usage in a streaming execution plan. You may see a performance benefit when writing large parquet files by increasing maximum_parallel_row_group_writers and maximum_buffered_record_batches_per_stream if your system has idle cores and can tolerate additional memory usage. Boosting these values is likely worthwhile when writing out already in-memory data, such as from a cached data frame.
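A sketch of trading memory for throughput when the input already sits in memory; the counts here are illustrative and should be sized to the machine:

use datafusion::common::config::ParquetOptions;

let opts = ParquetOptions {
    allow_single_file_parallelism: true,
    maximum_parallel_row_group_writers: 4,
    maximum_buffered_record_batches_per_stream: 32,
    ..Default::default()
};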
schema_force_string_view: bool
(reading) If true, the parquet reader will read columns of Utf8/Utf8Large with Utf8View, and Binary/BinaryLarge with BinaryView.
Implementations
impl ParquetOptions
pub fn into_writer_properties_builder(&self) -> Result<WriterPropertiesBuilder, DataFusionError>
Convert the global session options, ParquetOptions, into a single write action’s WriterPropertiesBuilder. The returned WriterPropertiesBuilder can then be further modified with additional per-column options, a customization which is not possible through ParquetOptions itself.
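A minimal sketch of the conversion; it assumes the WriterPropertiesBuilder::build() API from the parquet crate re-exported by DataFusion:

use datafusion::common::config::ParquetOptions;
use datafusion::common::DataFusionError;

fn build_props(opts: &ParquetOptions) -> Result<(), DataFusionError> {
    // Per-column tweaks could be applied to `builder` here before build().
    let builder = opts.into_writer_properties_builder()?;
    let _props = builder.build();
    Ok(())
}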
Trait Implementations
impl Clone for ParquetOptions
fn clone(&self) -> ParquetOptions
fn clone_from(&mut self, source: &Self)
impl ConfigField for ParquetOptions
impl Debug for ParquetOptions
impl Default for ParquetOptions
fn default() -> ParquetOptions
impl PartialEq for ParquetOptions
impl StructuralPartialEq for ParquetOptions
Auto Trait Implementations
impl Freeze for ParquetOptions
impl RefUnwindSafe for ParquetOptions
impl Send for ParquetOptions
impl Sync for ParquetOptions
impl Unpin for ParquetOptions
impl UnwindSafe for ParquetOptions
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T where T: Clone
default unsafe fn clone_to_uninit(&self, dst: *mut T)
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.