aws_sdk_databasemigration::types

Struct S3Settings

#[non_exhaustive]
pub struct S3Settings {
    pub service_access_role_arn: Option<String>,
    pub external_table_definition: Option<String>,
    pub csv_row_delimiter: Option<String>,
    pub csv_delimiter: Option<String>,
    pub bucket_folder: Option<String>,
    pub bucket_name: Option<String>,
    pub compression_type: Option<CompressionTypeValue>,
    pub encryption_mode: Option<EncryptionModeValue>,
    pub server_side_encryption_kms_key_id: Option<String>,
    pub data_format: Option<DataFormatValue>,
    pub encoding_type: Option<EncodingTypeValue>,
    pub dict_page_size_limit: Option<i32>,
    pub row_group_length: Option<i32>,
    pub data_page_size: Option<i32>,
    pub parquet_version: Option<ParquetVersionValue>,
    pub enable_statistics: Option<bool>,
    pub include_op_for_full_load: Option<bool>,
    pub cdc_inserts_only: Option<bool>,
    pub timestamp_column_name: Option<String>,
    pub parquet_timestamp_in_millisecond: Option<bool>,
    pub cdc_inserts_and_updates: Option<bool>,
    pub date_partition_enabled: Option<bool>,
    pub date_partition_sequence: Option<DatePartitionSequenceValue>,
    pub date_partition_delimiter: Option<DatePartitionDelimiterValue>,
    pub use_csv_no_sup_value: Option<bool>,
    pub csv_no_sup_value: Option<String>,
    pub preserve_transactions: Option<bool>,
    pub cdc_path: Option<String>,
    pub use_task_start_time_for_full_load_timestamp: Option<bool>,
    pub canned_acl_for_objects: Option<CannedAclForObjectsValue>,
    pub add_column_name: Option<bool>,
    pub cdc_max_batch_interval: Option<i32>,
    pub cdc_min_file_size: Option<i32>,
    pub csv_null_value: Option<String>,
    pub ignore_header_rows: Option<i32>,
    pub max_file_size: Option<i32>,
    pub rfc4180: Option<bool>,
    pub date_partition_timezone: Option<String>,
    pub add_trailing_padding_character: Option<bool>,
    pub expected_bucket_owner: Option<String>,
    pub glue_catalog_generation: Option<bool>,
}

Settings for exporting data to Amazon S3.

Fields (Non-exhaustive)

This struct is marked as non-exhaustive, so additional fields may be added in the future. Therefore, non-exhaustive structs cannot be constructed in external crates using the traditional Struct { .. } syntax; they cannot be matched against without a wildcard ..; and struct update syntax will not work.
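
Because of this, construct S3Settings through its builder. A minimal sketch, assuming build() returns the value directly (as it does for types whose fields are all optional); the role ARN and bucket names are hypothetical placeholders:

use aws_sdk_databasemigration::types::S3Settings;

fn main() {
    // Placeholder values; substitute resources from your own account.
    let settings = S3Settings::builder()
        .service_access_role_arn("arn:aws:iam::123456789012:role/dms-s3-role")
        .bucket_name("my-target-bucket")
        .bucket_folder("migrated-data")
        .build();

    assert_eq!(settings.bucket_name(), Some("my-target-bucket"));
}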
service_access_role_arn: Option<String>

The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the iam:PassRole action. It is a required parameter that enables DMS to write and read objects from an S3 bucket.

external_table_definition: Option<String>

Specifies how tables are defined in the S3 source files only.

csv_row_delimiter: Option<String>

The delimiter used to separate rows in the .csv file for both source and target. The default is a newline (\n).

csv_delimiter: Option<String>

The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.

bucket_folder: Option<String>

An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.

bucket_name: Option<String>

The name of the S3 bucket.

compression_type: Option<CompressionTypeValue>

An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.

encryption_mode: Option<EncryptionModeValue>

The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS.

For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can’t change the existing value from SSE_S3 to SSE_KMS.

To use SSE_S3, you need an Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:

  • s3:CreateBucket

  • s3:ListBucket

  • s3:DeleteBucket

  • s3:GetBucketLocation

  • s3:GetObject

  • s3:PutObject

  • s3:DeleteObject

  • s3:GetObjectVersion

  • s3:GetBucketPolicy

  • s3:PutBucketPolicy

  • s3:DeleteBucketPolicy

server_side_encryption_kms_key_id: Option<String>

If you are using SSE_KMS for the EncryptionMode, provide the KMS key ID. The key that you use needs an attached policy that enables Identity and Access Management (IAM) user permissions and allows use of the key.

Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
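
A rough Rust counterpart of the --s3-settings portion of that CLI call, as a hedged sketch: the EncryptionModeValue::SseKms variant name is assumed from the SDK's usual PascalCase mapping of SSE_KMS, and every value is a placeholder.

use aws_sdk_databasemigration::types::{EncryptionModeValue, S3Settings};

fn main() {
    // Placeholder ARNs and names; replace with your own resources.
    let settings = S3Settings::builder()
        .service_access_role_arn("arn:aws:iam::123456789012:role/dms-s3-role")
        .bucket_folder("migrated-data")
        .bucket_name("my-target-bucket")
        .encryption_mode(EncryptionModeValue::SseKms)
        .server_side_encryption_kms_key_id("1234abcd-12ab-34cd-56ef-1234567890ab")
        .build();
    let _ = settings;
}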

data_format: Option<DataFormatValue>

The format of the data that you want to use for output. You can choose one of the following:

  • csv : This is a row-based file format with comma-separated values (.csv).

  • parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.

encoding_type: Option<EncodingTypeValue>

The type of encoding you are using:

  • RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.

  • PLAIN doesn't use encoding at all. Values are stored as they are.

  • PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.

dict_page_size_limit: Option<i32>

The maximum size of an encoded dictionary page of a column. If a dictionary page exceeds this limit, the column is stored using the PLAIN encoding type instead. This parameter defaults to 1024 * 1024 bytes (1 MiB) and is used for the .parquet file format only.

row_group_length: Option<i32>

The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for the .parquet file format only.

If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).

data_page_size: Option<i32>

The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.

parquet_version: Option<ParquetVersionValue>

The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.

enable_statistics: Option<bool>

A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.
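
Taken together, the Parquet-related settings can be set in one builder chain. A hedged sketch; the enum variant names (Parquet, RleDictionary, Parquet20) are assumed from the SDK's PascalCase convention and should be checked against the generated enums:

use aws_sdk_databasemigration::types::{
    DataFormatValue, EncodingTypeValue, ParquetVersionValue, S3Settings,
};

fn main() {
    // Most values below restate the documented defaults explicitly;
    // parquet_version selects the newer format instead of the default.
    let settings = S3Settings::builder()
        .data_format(DataFormatValue::Parquet)
        .encoding_type(EncodingTypeValue::RleDictionary) // the default
        .parquet_version(ParquetVersionValue::Parquet20)
        .dict_page_size_limit(1024 * 1024) // 1 MiB, the default
        .data_page_size(1024 * 1024) // 1 MiB, the default
        .row_group_length(10_000) // rows, the default
        .enable_statistics(true) // the default
        .build();
    let _ = settings;
}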

include_op_for_full_load: Option<bool>

A value that enables a full load to write INSERT operations to the comma-separated value (.csv) or .parquet output files only to indicate how the rows were added to the source database.

DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.

DMS supports the use of the .parquet files with the IncludeOpForFullLoad parameter in versions 3.4.7 and later.

For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.

This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.

cdc_inserts_only: Option<bool>

A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.

If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.

DMS supports this interaction between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.

CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint; set one or the other, but not both, as the sketch below shows.
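
A minimal sketch of a CDC-inserts-only configuration consistent with the rules above (illustration only):

use aws_sdk_databasemigration::types::S3Settings;

fn main() {
    // Migrate only INSERTs during CDC and annotate each record with I.
    // Do not also set cdc_inserts_and_updates(true); the service rejects
    // endpoints with both flags enabled.
    let settings = S3Settings::builder()
        .include_op_for_full_load(true)
        .cdc_inserts_only(true)
        .build();
    let _ = settings;
}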

timestamp_column_name: Option<String>

A value that, when nonblank, causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.

DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.

DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.

For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.

For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.

The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.

When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.

parquet_timestamp_in_millisecond: Option<bool>

A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.

DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.

When ParquetTimestampInMillisecond is set to true or y, DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.

Currently, Amazon Athena and Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue.

DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.

Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.

cdc_inserts_and_updates: Option<bool>

A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.

DMS supports the use of the .parquet files in versions 3.4.7 and later.

How these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.

DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.

CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint; set one or the other, but not both.

date_partition_enabled: Option<bool>

When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false. For more information about date-based folder partitioning, see Using date-based folder partitioning.

date_partition_sequence: Option<DatePartitionSequenceValue>

Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.

date_partition_delimiter: Option<DatePartitionDelimiterValue>

Specifies a date-separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true.

use_csv_no_sup_value: Option<bool>

This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to true, then for columns not included in the supplemental log, DMS uses the value specified by CsvNoSupValue. If not set or set to false, DMS uses the null value for these columns.

This setting is supported in DMS versions 3.4.1 and later.

csv_no_sup_value: Option<String>

This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If UseCsvNoSupValue is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of the UseCsvNoSupValue setting.

This setting is supported in DMS versions 3.4.1 and later.

preserve_transactions: Option<bool>

If set to true, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by CdcPath. For more information, see Capturing data changes (CDC) including transaction order on the S3 target.

This setting is supported in DMS versions 3.4.2 and later.

cdc_path: Option<String>

Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If CdcPath is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target, if you set PreserveTransactions to true, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load. DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified by BucketFolder and BucketName.

For example, if you specify CdcPath as MyChangedData, and you specify BucketName as MyTargetBucket but do not specify BucketFolder, DMS creates the following CDC folder path: MyTargetBucket/MyChangedData.

If you specify the same CdcPath, and you specify BucketName as MyTargetBucket and BucketFolder as MyTargetData, DMS creates the following CDC folder path: MyTargetBucket/MyTargetData/MyChangedData.

For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target.

This setting is supported in DMS versions 3.4.2 and later.
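
To make the folder-path rule concrete, here is an illustrative helper that is not part of the SDK; it restates the two examples above in code:

fn cdc_folder_path(bucket_name: &str, bucket_folder: Option<&str>, cdc_path: &str) -> String {
    // BucketFolder, when present, sits between the bucket and CdcPath.
    match bucket_folder {
        Some(folder) => format!("{bucket_name}/{folder}/{cdc_path}"),
        None => format!("{bucket_name}/{cdc_path}"),
    }
}

fn main() {
    assert_eq!(
        cdc_folder_path("MyTargetBucket", None, "MyChangedData"),
        "MyTargetBucket/MyChangedData"
    );
    assert_eq!(
        cdc_folder_path("MyTargetBucket", Some("MyTargetData"), "MyChangedData"),
        "MyTargetBucket/MyTargetData/MyChangedData"
    );
}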

use_task_start_time_for_full_load_timestamp: Option<bool>

When set to true, this parameter uses the task start time as the timestamp column value instead of the time the data is written to the target. For a full load, when useTaskStartTimeForFullLoadTimestamp is set to true, each row of the timestamp column contains the task start time. For CDC loads, each row of the timestamp column contains the transaction commit time.

When useTaskStartTimeForFullLoadTimestamp is set to false, the full load timestamp in the timestamp column increments with the time the data arrives at the target.

canned_acl_for_objects: Option<CannedAclForObjectsValue>

A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide.

The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.

add_column_name: Option<bool>

An optional parameter that, when set to true or y, adds column name information to the .csv output file.

The default value is false. Valid values are true, false, y, and n.

cdc_max_batch_interval: Option<i32>

Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3.

When CdcMaxBatchInterval and CdcMinFileSize are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template.

The default value is 60 seconds.

cdc_min_file_size: Option<i32>

Minimum file size, defined in kilobytes, to reach for a file output to Amazon S3.

When CdcMinFileSize and CdcMaxBatchInterval are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template.

The default value is 32 MB.

csv_null_value: Option<String>

An optional parameter that specifies how DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), DMS treats the empty string as the null value instead of NULL.

The default value is NULL. Valid values include any valid string.

ignore_header_rows: Option<i32>

When this value is set to 1, DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature.

The default is 0.

max_file_size: Option<i32>

A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load.

The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576.

rfc4180: Option<bool>

For an S3 source, when this value is set to true or y, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set to false or n, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value.

For an S3 target, an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using .csv file format only. When this value is set to true or y using Amazon S3 as a target, if the data has quotation marks or newline characters in it, DMS encloses the entire column with an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice.

The default value is true. Valid values include true, false, y, and n.
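
The target-side quoting rule can be illustrated with a small function that is not part of the SDK; it mimics the described behavior of wrapping a field and doubling embedded quotation marks:

fn quote_rfc4180(field: &str) -> String {
    // Quote the field if it contains a quote, newline, or comma delimiter,
    // doubling every embedded quotation mark.
    if field.contains('"') || field.contains('\n') || field.contains(',') {
        format!("\"{}\"", field.replace('"', "\"\""))
    } else {
        field.to_string()
    }
}

fn main() {
    assert_eq!(quote_rfc4180("say \"hi\""), "\"say \"\"hi\"\"\"");
}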

date_partition_timezone: Option<String>

When creating an S3 target endpoint, set DatePartitionTimezone to convert the current UTC time into a specified time zone. The conversion occurs when a date partition folder is created and a CDC filename is generated. The time zone format is Area/Location. Use this parameter when DatePartitionEnabled is set to true, as shown in the following example.

s3-settings='{"DatePartitionEnabled": true, "DatePartitionSequence": "YYYYMMDDHH", "DatePartitionDelimiter": "SLASH", "DatePartitionTimezone":"Asia/Seoul", "BucketName": "dms-nattarat-test"}'
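
A Rust counterpart of the JSON settings above, as a hedged sketch; the enum variant names (Yyyymmddhh, Slash) are assumed from the SDK's PascalCase mapping:

use aws_sdk_databasemigration::types::{
    DatePartitionDelimiterValue, DatePartitionSequenceValue, S3Settings,
};

fn main() {
    let settings = S3Settings::builder()
        .bucket_name("dms-nattarat-test")
        .date_partition_enabled(true)
        .date_partition_sequence(DatePartitionSequenceValue::Yyyymmddhh)
        .date_partition_delimiter(DatePartitionDelimiterValue::Slash)
        .date_partition_timezone("Asia/Seoul")
        .build();
    let _ = settings;
}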

add_trailing_padding_character: Option<bool>

Use the S3 target endpoint setting AddTrailingPaddingCharacter to add padding on string data. The default value is false.

expected_bucket_owner: Option<String>

To specify a bucket owner and prevent sniping, you can use the ExpectedBucketOwner endpoint setting.

Example: --s3-settings='{"ExpectedBucketOwner": "AWS_Account_ID"}'

When you make a request to test a connection or perform a migration, S3 checks the account ID of the bucket owner against the specified parameter.

glue_catalog_generation: Option<bool>

When true, allows Glue to catalog your S3 bucket. Creating a Glue catalog lets you use Athena to query your data.

Implementations

impl S3Settings

Each field has a getter method of the same name that returns the field's value. The getters carry the same documentation as the fields above, so only their signatures follow:

pub fn service_access_role_arn(&self) -> Option<&str>
pub fn external_table_definition(&self) -> Option<&str>
pub fn csv_row_delimiter(&self) -> Option<&str>
pub fn csv_delimiter(&self) -> Option<&str>
pub fn bucket_folder(&self) -> Option<&str>
pub fn bucket_name(&self) -> Option<&str>
pub fn compression_type(&self) -> Option<&CompressionTypeValue>
pub fn encryption_mode(&self) -> Option<&EncryptionModeValue>
pub fn server_side_encryption_kms_key_id(&self) -> Option<&str>
pub fn data_format(&self) -> Option<&DataFormatValue>
pub fn encoding_type(&self) -> Option<&EncodingTypeValue>
pub fn dict_page_size_limit(&self) -> Option<i32>
pub fn row_group_length(&self) -> Option<i32>
pub fn data_page_size(&self) -> Option<i32>
pub fn parquet_version(&self) -> Option<&ParquetVersionValue>
pub fn enable_statistics(&self) -> Option<bool>
pub fn include_op_for_full_load(&self) -> Option<bool>
pub fn cdc_inserts_only(&self) -> Option<bool>
pub fn timestamp_column_name(&self) -> Option<&str>
pub fn parquet_timestamp_in_millisecond(&self) -> Option<bool>
pub fn cdc_inserts_and_updates(&self) -> Option<bool>
pub fn date_partition_enabled(&self) -> Option<bool>
pub fn date_partition_sequence(&self) -> Option<&DatePartitionSequenceValue>
pub fn date_partition_delimiter(&self) -> Option<&DatePartitionDelimiterValue>
pub fn use_csv_no_sup_value(&self) -> Option<bool>
pub fn csv_no_sup_value(&self) -> Option<&str>
pub fn preserve_transactions(&self) -> Option<bool>
pub fn cdc_path(&self) -> Option<&str>
pub fn use_task_start_time_for_full_load_timestamp(&self) -> Option<bool>
pub fn canned_acl_for_objects(&self) -> Option<&CannedAclForObjectsValue>
pub fn add_column_name(&self) -> Option<bool>
pub fn cdc_max_batch_interval(&self) -> Option<i32>
pub fn cdc_min_file_size(&self) -> Option<i32>
pub fn csv_null_value(&self) -> Option<&str>
pub fn ignore_header_rows(&self) -> Option<i32>
pub fn max_file_size(&self) -> Option<i32>
pub fn rfc4180(&self) -> Option<bool>
pub fn date_partition_timezone(&self) -> Option<&str>
pub fn add_trailing_padding_character(&self) -> Option<bool>
pub fn expected_bucket_owner(&self) -> Option<&str>
pub fn glue_catalog_generation(&self) -> Option<bool>

impl S3Settings

pub fn builder() -> S3SettingsBuilder

Creates a new builder-style object to manufacture S3Settings.

Trait Implementations

impl Clone for S3Settings

fn clone(&self) -> S3Settings

Returns a copy of the value. Read more

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more

impl Debug for S3Settings

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more

impl PartialEq for S3Settings

fn eq(&self, other: &S3Settings) -> bool

Tests for self and other values to be equal, and is used by ==.

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl StructuralPartialEq for S3Settings

Auto Trait Implementations

Blanket Implementations

The following blanket implementations apply to S3Settings:

impl<T> Any for T where T: 'static + ?Sized
impl<T> Borrow<T> for T where T: ?Sized
impl<T> BorrowMut<T> for T where T: ?Sized
impl<T> CloneToUninit for T where T: Clone
impl<T> From<T> for T
impl<T> Instrument for T
impl<T, U> Into<U> for T where U: From<T>
impl<T> IntoEither for T
impl<Unshared, Shared> IntoShared<Shared> for Unshared where Shared: FromUnshared<Unshared>
impl<T> Paint for T where T: ?Sized
impl<T> Same for T
impl<T> ToOwned for T where T: Clone
impl<T, U> TryFrom<U> for T where U: Into<T>
impl<T, U> TryInto<U> for T where U: TryFrom<T>
impl<T> WithSubscriber for T
impl<T> ErasedDestructor for T where T: 'static
impl<T> MaybeSendSync for T