Struct aws_sdk_s3::client::Client

pub struct Client { /* private fields */ }

Client for Amazon Simple Storage Service

Client for invoking operations on Amazon Simple Storage Service. Each operation on Amazon Simple Storage Service is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.

§Constructing a Client

A Config is required to construct a client. For most use cases, the aws-config crate should be used to automatically resolve this config using aws_config::load_from_env(), since this will resolve an SdkConfig which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling aws_config::from_env() instead, which returns a ConfigLoader that uses the builder pattern to customize the default config.

In the simplest case, creating a client looks as follows:

let config = aws_config::load_from_env().await;
let client = aws_sdk_s3::Client::new(&config);

Occasionally, an SDK may have additional service-specific values that can be set on the Config that are absent from SdkConfig, or slightly different settings for a specific client may be desired. The Builder struct implements From<&SdkConfig>, so setting these specific settings can be done as follows:

let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_s3::config::Builder::from(&sdk_config)
    .some_service_specific_setting("value")
    .build();

See the aws-config docs and Config for more information on customizing configuration.

Note: Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.

§Using the Client

A client has a function for every operation that can be performed by the service. For example, the AbortMultipartUpload operation has a Client::abort_multipart_upload function, which returns a builder for that operation. The fluent builder ultimately has a send() function that returns an async future that returns a result, as illustrated below:

let result = client.abort_multipart_upload()
    .bucket("example")
    .send()
    .await;

The underlying HTTP requests made by these operations can be modified with the customize_operation function on the fluent builder. See the customize module for more information.

§Waiters

This client provides wait_until methods behind the Waiters trait. To use them, simply import the trait, and then call one of the wait_until methods. This will return a waiter fluent builder that takes various parameters, which are documented on the builder type. Once parameters have been provided, the wait method can be called to initiate waiting.

For example, if there was a wait_until_thing method, it could look like:

let result = client.wait_until_thing()
    .thing_id("someId")
    .wait(Duration::from_secs(120))
    .await;

Implementations§

impl Client

pub fn abort_multipart_upload(&self) -> AbortMultipartUploadFluentBuilder

Constructs a fluent builder for the AbortMultipartUpload operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The bucket name to which the upload was taking place.

      Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket_base_name--az-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      Access points and Object Lambda access points are not supported by directory buckets.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • key(impl Into<String>) / set_key(Option<String>):
      required: true

      Key of the object for which the multipart upload was initiated.


    • upload_id(impl Into<String>) / set_upload_id(Option<String>):
      required: true

      Upload ID that identifies the multipart upload.


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


    • if_match_initiated_time(DateTime) / set_if_match_initiated_time(Option<DateTime>):
      required: false

      If present, this header aborts an in progress multipart upload only if it was initiated on the provided timestamp. If the initiated timestamp of the multipart upload does not match the provided value, the operation returns a 412 Precondition Failed error. If the initiated timestamp matches or if the multipart upload doesn’t exist, the operation returns a 204 Success (No Content) response.

      This functionality is only supported for directory buckets.


  • On success, responds with AbortMultipartUploadOutput with field(s):
  • On failure, responds with SdkError<AbortMultipartUploadError>
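
For example, a minimal AbortMultipartUpload call using the required parameters above might look like the following sketch (the bucket, key, and upload ID are placeholder values):

let result = client.abort_multipart_upload()
    .bucket("amzn-s3-demo-bucket")
    .key("example-object")
    .upload_id("example-upload-id")
    .send()
    .await;
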
impl Client

pub fn complete_multipart_upload(&self) -> CompleteMultipartUploadFluentBuilder

Constructs a fluent builder for the CompleteMultipartUpload operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      Name of the bucket to which the multipart upload was initiated.

      Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket_base_name--az-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      Access points and Object Lambda access points are not supported by directory buckets.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • key(impl Into<String>) / set_key(Option<String>):
      required: true

      Object key for which the multipart upload was initiated.


    • multipart_upload(CompletedMultipartUpload) / set_multipart_upload(Option<CompletedMultipartUpload>):
      required: false

      The container for the multipart upload request information.


    • upload_id(impl Into<String>) / set_upload_id(Option<String>):
      required: true

      ID for the initiated multipart upload.


    • checksum_crc32(impl Into<String>) / set_checksum_crc32(Option<String>):
      required: false

      This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC-32 checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.


    • checksum_crc32_c(impl Into<String>) / set_checksum_crc32_c(Option<String>):
      required: false

      This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC-32C checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.


    • checksum_sha1(impl Into<String>) / set_checksum_sha1(Option<String>):
      required: false

      This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 160-bit SHA-1 digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.


    • checksum_sha256(impl Into<String>) / set_checksum_sha256(Option<String>):
      required: false

      This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 256-bit SHA-256 digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


    • if_match(impl Into<String>) / set_if_match(Option<String>):
      required: false

      Uploads the object only if the ETag (entity tag) value provided during the WRITE operation matches the ETag of the object in S3. If the ETag values do not match, the operation returns a 412 Precondition Failed error.

      If a conflicting operation occurs during the upload S3 returns a 409 ConditionalRequestConflict response. On a 409 failure you should fetch the object’s ETag, re-initiate the multipart upload with CreateMultipartUpload, and re-upload each part.

      Expects the ETag value as a string.

      For more information about conditional requests, see RFC 7232, or Conditional requests in the Amazon S3 User Guide.


    • if_none_match(impl Into<String>) / set_if_none_match(Option<String>):
      required: false

      Uploads the object only if the object key name does not already exist in the bucket specified. Otherwise, Amazon S3 returns a 412 Precondition Failed error.

      If a conflicting operation occurs during the upload S3 returns a 409 ConditionalRequestConflict response. On a 409 failure you should re-initiate the multipart upload with CreateMultipartUpload and re-upload each part.

      Expects the ‘*’ (asterisk) character.

      For more information about conditional requests, see RFC 7232, or Conditional requests in the Amazon S3 User Guide.


    • sse_customer_algorithm(impl Into<String>) / set_sse_customer_algorithm(Option<String>):
      required: false

      The server-side encryption (SSE) algorithm used to encrypt the object. This parameter is required only when the object was created using a checksum algorithm or if your bucket policy requires the use of SSE-C. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • sse_customer_key(impl Into<String>) / set_sse_customer_key(Option<String>):
      required: false

      The server-side encryption (SSE) customer managed key. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • sse_customer_key_md5(impl Into<String>) / set_sse_customer_key_md5(Option<String>):
      required: false

      The MD5 server-side encryption (SSE) customer managed key. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


  • On success, responds with CompleteMultipartUploadOutput with field(s):
    • location(Option<String>):

      The URI that identifies the newly created object.

    • bucket(Option<String>):

      The name of the bucket that contains the newly created object. Does not return the access point ARN or access point alias if used.

      Access points are not supported by directory buckets.

    • key(Option<String>):

      The object key of the newly created object.

    • expiration(Option<String>):

      If the object expiration is configured, this will contain the expiration date (expiry-date) and rule ID (rule-id). The value of rule-id is URL-encoded.

      This functionality is not supported for directory buckets.

    • e_tag(Option<String>):

      Entity tag that identifies the newly created object’s data. Objects with different object data will have different entity tags. The entity tag is an opaque string. The entity tag may or may not be an MD5 digest of the object data. If the entity tag is not an MD5 digest of the object data, it will contain one or more nonhexadecimal characters and/or will consist of less than 32 or more than 32 hexadecimal digits. For more information about how the entity tag is calculated, see Checking object integrity in the Amazon S3 User Guide.

    • checksum_crc32(Option<String>):

      The base64-encoded, 32-bit CRC-32 checksum of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.

    • checksum_crc32_c(Option<String>):

      The base64-encoded, 32-bit CRC-32C checksum of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.

    • checksum_sha1(Option<String>):

      The base64-encoded, 160-bit SHA-1 digest of the object. This will only be present if it was uploaded with the object. When you use the API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.

    • checksum_sha256(Option<String>):

      The base64-encoded, 256-bit SHA-256 digest of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.

    • server_side_encryption(Option<ServerSideEncryption>):

      The server-side encryption algorithm used when storing this object in Amazon S3 (for example, AES256, aws:kms).

    • version_id(Option<String>):

      Version ID of the newly created object, in case the bucket has versioning turned on.

      This functionality is not supported for directory buckets.

    • ssekms_key_id(Option<String>):

      If present, indicates the ID of the KMS key that was used for object encryption.

    • bucket_key_enabled(Option<bool>):

      Indicates whether the multipart upload uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).

    • request_charged(Option<RequestCharged>):

      If present, indicates that the requester was successfully charged for the request.

      This functionality is not supported for directory buckets.

  • On failure, responds with SdkError<CompleteMultipartUploadError>
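
As a sketch of how these parameters fit together, the following completes an upload from previously uploaded parts. The part number, ETag, bucket, key, and upload ID are placeholders, and the CompletedPart / CompletedMultipartUpload builders are assumed to come from aws_sdk_s3::types:

use aws_sdk_s3::types::{CompletedMultipartUpload, CompletedPart};

// Describe each uploaded part (the part number and the ETag returned by UploadPart).
let part = CompletedPart::builder()
    .part_number(1)
    .e_tag("example-etag")
    .build();

// Collect the parts into the multipart_upload container expected by this operation.
let completed = CompletedMultipartUpload::builder()
    .parts(part)
    .build();

let result = client.complete_multipart_upload()
    .bucket("amzn-s3-demo-bucket")
    .key("example-object")
    .upload_id("example-upload-id")
    .multipart_upload(completed)
    .send()
    .await;
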
impl Client

pub fn copy_object(&self) -> CopyObjectFluentBuilder

Constructs a fluent builder for the CopyObject operation.

  • The fluent builder is configurable:
    • acl(ObjectCannedAcl) / set_acl(Option<ObjectCannedAcl>):
      required: false

      The canned access control list (ACL) to apply to the object.

      When you copy an object, the ACL metadata is not preserved and is set to private by default. Only the owner has full access control. To override the default ACL setting, specify a new ACL when you generate a copy request. For more information, see Using ACLs.

      If the destination bucket that you’re copying objects to uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept PUT requests that don’t specify an ACL or PUT requests that specify bucket owner full control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL expressed in the XML format. For more information, see Controlling ownership of objects and disabling ACLs in the Amazon S3 User Guide.

      • If your destination bucket uses the bucket owner enforced setting for Object Ownership, all objects written to the bucket by any account will be owned by the bucket owner.

      • This functionality is not supported for directory buckets.

      • This functionality is not supported for Amazon S3 on Outposts.


    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The name of the destination bucket.

      Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket_base_name--az-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      Access points and Object Lambda access points are not supported by directory buckets.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • cache_control(impl Into<String>) / set_cache_control(Option<String>):
      required: false

      Specifies the caching behavior along the request/reply chain.


    • checksum_algorithm(ChecksumAlgorithm) / set_checksum_algorithm(Option<ChecksumAlgorithm>):
      required: false

      Indicates the algorithm that you want Amazon S3 to use to create the checksum for the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

      When you copy an object, if the source object has a checksum, that checksum value will be copied to the new object by default. If the CopyObject request does not include this x-amz-checksum-algorithm header, the checksum algorithm will be copied from the source object to the destination object (if it’s present on the source object). You can optionally specify a different checksum algorithm to use with the x-amz-checksum-algorithm header. Unrecognized or unsupported values will respond with the HTTP status code 400 Bad Request.

      For directory buckets, when you use Amazon Web Services SDKs, CRC32 is the default checksum algorithm that’s used for performance.


    • content_disposition(impl Into<String>) / set_content_disposition(Option<String>):
      required: false

      Specifies presentational information for the object. Indicates whether an object should be displayed in a web browser or downloaded as a file. It allows specifying the desired filename for the downloaded file.


    • content_encoding(impl Into<String>) / set_content_encoding(Option<String>):
      required: false

      Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.

      For directory buckets, only the aws-chunked value is supported in this header field.


    • content_language(impl Into<String>) / set_content_language(Option<String>):
      required: false

      The language the content is in.


    • content_type(impl Into<String>) / set_content_type(Option<String>):
      required: false

      A standard MIME type that describes the format of the object data.


    • copy_source(impl Into<String>) / set_copy_source(Option<String>):
      required: true

      Specifies the source object for the copy operation. The source object can be up to 5 GB. If the source object is an object that was uploaded by using a multipart upload, the object copy will be a single part object after the source object is copied to the destination bucket.

      You specify the value of the copy source in one of two formats, depending on whether you want to access the source object through an access point:

      • For objects not accessed through an access point, specify the name of the source bucket and the key of the source object, separated by a slash (/). For example, to copy the object reports/january.pdf from the general purpose bucket awsexamplebucket, use awsexamplebucket/reports/january.pdf. The value must be URL-encoded. To copy the object reports/january.pdf from the directory bucket awsexamplebucket--use1-az5--x-s3, use awsexamplebucket--use1-az5--x-s3/reports/january.pdf. The value must be URL-encoded.

      • For objects accessed through access points, specify the Amazon Resource Name (ARN) of the object as accessed through the access point, in the format arn:aws:s3:<Region>:<account-id>:accesspoint/<access-point-name>/object/<key>. For example, to copy the object reports/january.pdf through access point my-access-point owned by account 123456789012 in Region us-west-2, use the URL encoding of arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point/object/reports/january.pdf. The value must be URL encoded.

        • Amazon S3 supports copy operations using Access points only when the source and destination buckets are in the same Amazon Web Services Region.

        • Access points are not supported by directory buckets.

        Alternatively, for objects accessed through Amazon S3 on Outposts, specify the ARN of the object as accessed in the format arn:aws:s3-outposts:<Region>:<account-id>:outpost/<outpost-id>/object/<key>. For example, to copy the object reports/january.pdf through outpost my-outpost owned by account 123456789012 in Region us-west-2, use the URL encoding of arn:aws:s3-outposts:us-west-2:123456789012:outpost/my-outpost/object/reports/january.pdf. The value must be URL-encoded.

      If your source bucket versioning is enabled, the x-amz-copy-source header by default identifies the current version of an object to copy. If the current version is a delete marker, Amazon S3 behaves as if the object was deleted. To copy a different version, use the versionId query parameter. Specifically, append ?versionId=<version-id> to the value (for example, awsexamplebucket/reports/january.pdf?versionId=QUpfdndhfd8438MNFDN93jdnJFkdmqnh893). If you don’t specify a version ID, Amazon S3 copies the latest version of the source object.

      If you enable versioning on the destination bucket, Amazon S3 generates a unique version ID for the copied object. This version ID is different from the version ID of the source object. Amazon S3 returns the version ID of the copied object in the x-amz-version-id response header in the response.

      If you do not enable versioning or suspend it on the destination bucket, the version ID that Amazon S3 generates in the x-amz-version-id response header is always null.

      Directory buckets - S3 Versioning isn’t enabled and supported for directory buckets.


    • copy_source_if_match(impl Into<String>) / set_copy_source_if_match(Option<String>):
      required: false

      Copies the object if its entity tag (ETag) matches the specified tag.

      If both the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK and copies the data:

      • x-amz-copy-source-if-match condition evaluates to true

      • x-amz-copy-source-if-unmodified-since condition evaluates to false


    • copy_source_if_modified_since(DateTime) / set_copy_source_if_modified_since(Option<DateTime>):
      required: false

      Copies the object if it has been modified since the specified time.

      If both the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request and evaluate as follows, Amazon S3 returns the 412 Precondition Failed response code:

      • x-amz-copy-source-if-none-match condition evaluates to false

      • x-amz-copy-source-if-modified-since condition evaluates to true


    • copy_source_if_none_match(impl Into<String>) / set_copy_source_if_none_match(Option<String>):
      required: false

      Copies the object if its entity tag (ETag) is different than the specified ETag.

      If both the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request and evaluate as follows, Amazon S3 returns the 412 Precondition Failed response code:

      • x-amz-copy-source-if-none-match condition evaluates to false

      • x-amz-copy-source-if-modified-since condition evaluates to true


    • copy_source_if_unmodified_since(DateTime) / set_copy_source_if_unmodified_since(Option<DateTime>):
      required: false

      Copies the object if it hasn’t been modified since the specified time.

      If both the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK and copies the data:

      • x-amz-copy-source-if-match condition evaluates to true

      • x-amz-copy-source-if-unmodified-since condition evaluates to false


    • expires(DateTime) / set_expires(Option<DateTime>):
      required: false

      The date and time at which the object is no longer cacheable.


    • grant_full_control(impl Into<String>) / set_grant_full_control(Option<String>):
      required: false

      Gives the grantee READ, READ_ACP, and WRITE_ACP permissions on the object.

      • This functionality is not supported for directory buckets.

      • This functionality is not supported for Amazon S3 on Outposts.


    • grant_read(impl Into<String>) / set_grant_read(Option<String>):
      required: false

      Allows grantee to read the object data and its metadata.

      • This functionality is not supported for directory buckets.

      • This functionality is not supported for Amazon S3 on Outposts.


    • grant_read_acp(impl Into<String>) / set_grant_read_acp(Option<String>):
      required: false

      Allows grantee to read the object ACL.

      • This functionality is not supported for directory buckets.

      • This functionality is not supported for Amazon S3 on Outposts.


    • grant_write_acp(impl Into<String>) / set_grant_write_acp(Option<String>):
      required: false

      Allows grantee to write the ACL for the applicable object.

      • This functionality is not supported for directory buckets.

      • This functionality is not supported for Amazon S3 on Outposts.


    • key(impl Into<String>) / set_key(Option<String>):
      required: true

      The key of the destination object.


    • metadata(impl Into<String>, impl Into<String>) / set_metadata(Option<HashMap::<String, String>>):
      required: false

      A map of metadata to store with the object in S3.


    • metadata_directive(MetadataDirective) / set_metadata_directive(Option<MetadataDirective>):
      required: false

      Specifies whether the metadata is copied from the source object or replaced with metadata that’s provided in the request. When copying an object, you can preserve all metadata (the default) or specify new metadata. If this header isn’t specified, COPY is the default behavior.

      General purpose bucket - For general purpose buckets, when you grant permissions, you can use the s3:x-amz-metadata-directive condition key to enforce certain metadata behavior when objects are uploaded. For more information, see Amazon S3 condition key examples in the Amazon S3 User Guide.

      x-amz-website-redirect-location is unique to each object and is not copied when using the x-amz-metadata-directive header. To copy the value, you must specify x-amz-website-redirect-location in the request header.


    • tagging_directive(TaggingDirective) / set_tagging_directive(Option<TaggingDirective>):
      required: false

      Specifies whether the object tag-set is copied from the source object or replaced with the tag-set that’s provided in the request.

      The default value is COPY.

      Directory buckets - For directory buckets in a CopyObject operation, only the empty tag-set is supported. Any requests that attempt to write non-empty tags into directory buckets will receive a 501 Not Implemented status code. When the destination bucket is a directory bucket, you will receive a 501 Not Implemented response in any of the following situations:

      • When you attempt to COPY the tag-set from an S3 source object that has non-empty tags.

      • When you attempt to REPLACE the tag-set of a source object and set a non-empty value to x-amz-tagging.

      • When you don’t set the x-amz-tagging-directive header and the source object has non-empty tags. This is because the default value of x-amz-tagging-directive is COPY.

      Because only the empty tag-set is supported for directory buckets in a CopyObject operation, the following situations are allowed:

      • When you attempt to COPY the tag-set from a directory bucket source object that has no tags to a general purpose bucket. It copies an empty tag-set to the destination object.

      • When you attempt to REPLACE the tag-set of a directory bucket source object and set the x-amz-tagging value of the directory bucket destination object to empty.

      • When you attempt to REPLACE the tag-set of a general purpose bucket source object that has non-empty tags and set the x-amz-tagging value of the directory bucket destination object to empty.

      • When you attempt to REPLACE the tag-set of a directory bucket source object and don’t set the x-amz-tagging value of the directory bucket destination object. This is because the default value of x-amz-tagging is the empty value.


    • server_side_encryption(ServerSideEncryption) / set_server_side_encryption(Option<ServerSideEncryption>):
      required: false

      The server-side encryption algorithm used when storing this object in Amazon S3. Unrecognized or unsupported values won’t write a destination object and will receive a 400 Bad Request response.

      Amazon S3 automatically encrypts all new objects that are copied to an S3 bucket. When copying an object, if you don’t specify encryption information in your copy request, the encryption setting of the target object is set to the default encryption configuration of the destination bucket. By default, all buckets have a base level of encryption configuration that uses server-side encryption with Amazon S3 managed keys (SSE-S3). If the destination bucket has a different default encryption configuration, Amazon S3 uses the corresponding encryption key to encrypt the target object copy.

      With server-side encryption, Amazon S3 encrypts your data as it writes your data to disks in its data centers and decrypts the data when you access it. For more information about server-side encryption, see Using Server-Side Encryption in the Amazon S3 User Guide.

      General purpose buckets

      • For general purpose buckets, there are the following supported options for server-side encryption: server-side encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), and server-side encryption with customer-provided encryption keys (SSE-C). Amazon S3 uses the corresponding KMS key, or a customer-provided key to encrypt the target object copy.

      • When you perform a CopyObject operation, if you want to use a different type of encryption setting for the target object, you can specify appropriate encryption-related headers to encrypt the target object with an Amazon S3 managed key, a KMS key, or a customer-provided key. If the encryption setting in your request is different from the default encryption configuration of the destination bucket, the encryption setting in your request takes precedence.

      Directory buckets

      • For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256) and server-side encryption with KMS keys (SSE-KMS) (aws:kms). We recommend that the bucket’s default encryption uses the desired encryption configuration and you don’t override the bucket default encryption in your CreateSession requests or PUT object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide. For more information about the encryption overriding behaviors in directory buckets, see Specifying server-side encryption with KMS for new object uploads.

      • To encrypt new object copies to a directory bucket with SSE-KMS, we recommend you specify SSE-KMS as the directory bucket’s default encryption configuration with a KMS key (specifically, a customer managed key). The Amazon Web Services managed key (aws/s3) isn’t supported. Your SSE-KMS configuration can only support 1 customer managed key per directory bucket for the lifetime of the bucket. After you specify a customer managed key for SSE-KMS, you can’t override the customer managed key for the bucket’s SSE-KMS configuration. Then, when you perform a CopyObject operation and want to specify server-side encryption settings for new object copies with SSE-KMS in the encryption-related request headers, you must ensure the encryption key is the same customer managed key that you specified for the directory bucket’s default encryption configuration.


    • storage_class(StorageClass) / set_storage_class(Option<StorageClass>):
      required: false

      If the x-amz-storage-class header is not used, the copied object will be stored in the STANDARD Storage Class by default. The STANDARD storage class provides high durability and high availability. Depending on performance needs, you can specify a different Storage Class.

      • Directory buckets - For directory buckets, only the S3 Express One Zone storage class is supported to store newly created objects. Unsupported storage class values won’t write a destination object and will respond with the HTTP status code 400 Bad Request.

      • Amazon S3 on Outposts - S3 on Outposts only uses the OUTPOSTS Storage Class.

      You can use the CopyObject action to change the storage class of an object that is already stored in Amazon S3 by using the x-amz-storage-class header. For more information, see Storage Classes in the Amazon S3 User Guide.

      Before using an object as a source object for the copy operation, you must restore a copy of it if it meets any of the following conditions:

      • The storage class of the source object is GLACIER or DEEP_ARCHIVE.

      • The storage class of the source object is INTELLIGENT_TIERING and its S3 Intelligent-Tiering access tier is Archive Access or Deep Archive Access.

      For more information, see RestoreObject and Copying Objects in the Amazon S3 User Guide.


    • website_redirect_location(impl Into<String>) / set_website_redirect_location(Option<String>):
      required: false

      If the destination bucket is configured as a website, redirects requests for this object copy to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata. This value is unique to each object and is not copied when using the x-amz-metadata-directive header. Instead, you may opt to provide this header in combination with the x-amz-metadata-directive header.

      This functionality is not supported for directory buckets.


    • sse_customer_algorithm(impl Into<String>) / set_sse_customer_algorithm(Option<String>):
      required: false

      Specifies the algorithm to use when encrypting the object (for example, AES256).

      When you perform a CopyObject operation, if you want to use a different type of encryption setting for the target object, you can specify appropriate encryption-related headers to encrypt the target object with an Amazon S3 managed key, a KMS key, or a customer-provided key. If the encryption setting in your request is different from the default encryption configuration of the destination bucket, the encryption setting in your request takes precedence.

      This functionality is not supported when the destination bucket is a directory bucket.


    • sse_customer_key(impl Into<String>) / set_sse_customer_key(Option<String>):
      required: false

      Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded. Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header.

      This functionality is not supported when the destination bucket is a directory bucket.


    • sse_customer_key_md5(impl Into<String>) / set_sse_customer_key_md5(Option<String>):
      required: false

      Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

      This functionality is not supported when the destination bucket is a directory bucket.


    • ssekms_key_id(impl Into<String>) / set_ssekms_key_id(Option<String>):
      required: false

      Specifies the KMS key ID (Key ID, Key ARN, or Key Alias) to use for object encryption. All GET and PUT requests for an object protected by KMS will fail if they’re not made via SSL or using SigV4. For information about configuring any of the officially supported Amazon Web Services SDKs and Amazon Web Services CLI, see Specifying the Signature Version in Request Authentication in the Amazon S3 User Guide.

      Directory buckets - If you specify x-amz-server-side-encryption with aws:kms, the x-amz-server-side-encryption-aws-kms-key-id header is implicitly assigned the ID of the KMS symmetric encryption customer managed key that’s configured for your directory bucket’s default encryption setting. If you want to specify the x-amz-server-side-encryption-aws-kms-key-id header explicitly, you can only specify it with the ID (Key ID or Key ARN) of the KMS customer managed key that’s configured for your directory bucket’s default encryption setting. Otherwise, you get an HTTP 400 Bad Request error. Only use the key ID or key ARN. The key alias format of the KMS key isn’t supported. Your SSE-KMS configuration can only support 1 customer managed key per directory bucket for the lifetime of the bucket. The Amazon Web Services managed key (aws/s3) isn’t supported.


    • ssekms_encryption_context(impl Into<String>) / set_ssekms_encryption_context(Option<String>):
      required: false

      Specifies the Amazon Web Services KMS Encryption Context as an additional encryption context to use for the destination object encryption. The value of this header is a base64-encoded UTF-8 string holding JSON with the encryption context key-value pairs.

      General purpose buckets - This value must be explicitly added to specify encryption context for CopyObject requests if you want an additional encryption context for your destination object. The additional encryption context of the source object won’t be copied to the destination object. For more information, see Encryption context in the Amazon S3 User Guide.

      Directory buckets - You can optionally provide an explicit encryption context value. The value must match the default encryption context - the bucket Amazon Resource Name (ARN). An additional encryption context value is not supported.


    • bucket_key_enabled(bool) / set_bucket_key_enabled(Option<bool>):
      required: false

      Specifies whether Amazon S3 should use an S3 Bucket Key for object encryption with server-side encryption using Key Management Service (KMS) keys (SSE-KMS). If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the object.

      Setting this header to true causes Amazon S3 to use an S3 Bucket Key for object encryption with SSE-KMS. Specifying this header with a COPY action doesn’t affect bucket-level settings for S3 Bucket Key.

      For more information, see Amazon S3 Bucket Keys in the Amazon S3 User Guide.

      Directory buckets - S3 Bucket Keys aren’t supported, when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through CopyObject. In this case, Amazon S3 makes a call to KMS every time a copy request is made for a KMS-encrypted object.


    • copy_source_sse_customer_algorithm(impl Into<String>) / set_copy_source_sse_customer_algorithm(Option<String>):
      required: false

      Specifies the algorithm to use when decrypting the source object (for example, AES256).

      If the source object for the copy is stored in Amazon S3 using SSE-C, you must provide the necessary encryption information in your request so that Amazon S3 can decrypt the object for copying.

      This functionality is not supported when the source object is in a directory bucket.


    • copy_source_sse_customer_key(impl Into<String>) / set_copy_source_sse_customer_key(Option<String>):
      required: false

      Specifies the customer-provided encryption key for Amazon S3 to use to decrypt the source object. The encryption key provided in this header must be the same one that was used when the source object was created.

      If the source object for the copy is stored in Amazon S3 using SSE-C, you must provide the necessary encryption information in your request so that Amazon S3 can decrypt the object for copying.

      This functionality is not supported when the source object is in a directory bucket.


    • copy_source_sse_customer_key_md5(impl Into<String>) / set_copy_source_sse_customer_key_md5(Option<String>):
      required: false

      Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

      If the source object for the copy is stored in Amazon S3 using SSE-C, you must provide the necessary encryption information in your request so that Amazon S3 can decrypt the object for copying.

      This functionality is not supported when the source object is in a directory bucket.


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • tagging(impl Into<String>) / set_tagging(Option<String>):
      required: false

      The tag-set for the object copy in the destination bucket. This value must be used in conjunction with the x-amz-tagging-directive if you choose REPLACE for the x-amz-tagging-directive. If you choose COPY for the x-amz-tagging-directive, you don’t need to set the x-amz-tagging header, because the tag-set will be copied from the source object directly. The tag-set must be encoded as URL Query parameters.

      The default value is the empty value.

      Directory buckets - For directory buckets in a CopyObject operation, only the empty tag-set is supported. Any requests that attempt to write non-empty tags into directory buckets will receive a 501 Not Implemented status code. When the destination bucket is a directory bucket, you will receive a 501 Not Implemented response in any of the following situations:

      • When you attempt to COPY the tag-set from an S3 source object that has non-empty tags.

      • When you attempt to REPLACE the tag-set of a source object and set a non-empty value to x-amz-tagging.

      • When you don’t set the x-amz-tagging-directive header and the source object has non-empty tags. This is because the default value of x-amz-tagging-directive is COPY.

      Because only the empty tag-set is supported for directory buckets in a CopyObject operation, the following situations are allowed:

      • When you attempt to COPY the tag-set from a directory bucket source object that has no tags to a general purpose bucket. It copies an empty tag-set to the destination object.

      • When you attempt to REPLACE the tag-set of a directory bucket source object and set the x-amz-tagging value of the directory bucket destination object to empty.

      • When you attempt to REPLACE the tag-set of a general purpose bucket source object that has non-empty tags and set the x-amz-tagging value of the directory bucket destination object to empty.

      • When you attempt to REPLACE the tag-set of a directory bucket source object and don’t set the x-amz-tagging value of the directory bucket destination object. This is because the default value of x-amz-tagging is the empty value.


    • object_lock_mode(ObjectLockMode) / set_object_lock_mode(Option<ObjectLockMode>):
      required: false

      The Object Lock mode that you want to apply to the object copy.

      This functionality is not supported for directory buckets.


    • object_lock_retain_until_date(DateTime) / set_object_lock_retain_until_date(Option<DateTime>):
      required: false

      The date and time when you want the Object Lock of the object copy to expire.

      This functionality is not supported for directory buckets.


    • object_lock_legal_hold_status(ObjectLockLegalHoldStatus) / set_object_lock_legal_hold_status(Option<ObjectLockLegalHoldStatus>):
      required: false

      Specifies whether you want to apply a legal hold to the object copy.

      This functionality is not supported for directory buckets.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected destination bucket owner. If the account ID that you provide does not match the actual owner of the destination bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


    • expected_source_bucket_owner(impl Into<String>) / set_expected_source_bucket_owner(Option<String>):
      required: false

      The account ID of the expected source bucket owner. If the account ID that you provide does not match the actual owner of the source bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


  • On success, responds with CopyObjectOutput with field(s):
    • copy_object_result(Option<CopyObjectResult>):

      Container for all response elements.

    • expiration(Option<String>):

      If the object expiration is configured, the response includes this header.

      This functionality is not supported for directory buckets.

    • copy_source_version_id(Option<String>):

      Version ID of the source object that was copied.

      This functionality is not supported when the source object is in a directory bucket.

    • version_id(Option<String>):

      Version ID of the newly created copy.

      This functionality is not supported for directory buckets.

    • server_side_encryption(Option<ServerSideEncryption>):

      The server-side encryption algorithm used when you store this object in Amazon S3 (for example, AES256, aws:kms, aws:kms:dsse).

    • sse_customer_algorithm(Option<String>):

      If server-side encryption with a customer-provided encryption key was requested, the response will include this header to confirm the encryption algorithm that’s used.

      This functionality is not supported for directory buckets.

    • sse_customer_key_md5(Option<String>):

      If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide the round-trip message integrity verification of the customer-provided encryption key.

      This functionality is not supported for directory buckets.

    • ssekms_key_id(Option<String>):

      If present, indicates the ID of the KMS key that was used for object encryption.

    • ssekms_encryption_context(Option<String>):

      If present, indicates the Amazon Web Services KMS Encryption Context to use for object encryption. The value of this header is a base64-encoded UTF-8 string holding JSON with the encryption context key-value pairs.

    • bucket_key_enabled(Option<bool>):

      Indicates whether the copied object uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).

    • request_charged(Option<RequestCharged>):

      If present, indicates that the requester was successfully charged for the request.

      This functionality is not supported for directory buckets.

  • On failure, responds with SdkError<CopyObjectError>
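
Tying the required parameters together, a minimal CopyObject sketch (bucket and object names are placeholders; copy_source uses the source-bucket/key format described above and must be URL-encoded):

let result = client.copy_object()
    .copy_source("amzn-s3-demo-source-bucket/reports/january.pdf")
    .bucket("amzn-s3-demo-destination-bucket")
    .key("reports/january-copy.pdf")
    .send()
    .await;
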
impl Client

pub fn create_bucket(&self) -> CreateBucketFluentBuilder

Constructs a fluent builder for the CreateBucket operation.
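
A minimal sketch with a placeholder bucket name follows; note that Regions other than us-east-1 generally also require a CreateBucketConfiguration with a location constraint, which is omitted here:

let result = client.create_bucket()
    .bucket("amzn-s3-demo-bucket")
    .send()
    .await;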

impl Client

pub fn create_multipart_upload(&self) -> CreateMultipartUploadFluentBuilder

Constructs a fluent builder for the CreateMultipartUpload operation.

  • The fluent builder is configurable:
    • acl(ObjectCannedAcl) / set_acl(Option<ObjectCannedAcl>):
      required: false

      The canned ACL to apply to the object. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL in the Amazon S3 User Guide.

      By default, all objects are private. Only the owner has full access control. When uploading an object, you can grant access permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the access control list (ACL) on the new object. For more information, see Using ACLs. One way to grant the permissions using the request headers is to specify a canned ACL with the x-amz-acl request header.

      • This functionality is not supported for directory buckets.

      • This functionality is not supported for Amazon S3 on Outposts.


    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The name of the bucket where the multipart upload is initiated and where the object is uploaded.

      Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket_base_name--az-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      Access points and Object Lambda access points are not supported by directory buckets.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • cache_control(impl Into<String>) / set_cache_control(Option<String>):
      required: false

      Specifies caching behavior along the request/reply chain.


    • content_disposition(impl Into<String>) / set_content_disposition(Option<String>):
      required: false

      Specifies presentational information for the object.


    • content_encoding(impl Into<String>) / set_content_encoding(Option<String>):
      required: false

      Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.

      For directory buckets, only the aws-chunked value is supported in this header field.


    • content_language(impl Into<String>) / set_content_language(Option<String>):
      required: false

      The language that the content is in.


    • content_type(impl Into<String>) / set_content_type(Option<String>):
      required: false

      A standard MIME type describing the format of the object data.


    • expires(DateTime) / set_expires(Option<DateTime>):
      required: false

      The date and time at which the object is no longer cacheable.


    • grant_full_control(impl Into<String>) / set_grant_full_control(Option<String>):
      required: false

      Specify access permissions explicitly to give the grantee READ, READ_ACP, and WRITE_ACP permissions on the object.

      By default, all objects are private. Only the owner has full access control. When uploading an object, you can use this header to explicitly grant access permissions to specific Amazon Web Services accounts or groups. This header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview in the Amazon S3 User Guide.

      You specify each grantee as a type=value pair, where the type is one of the following:

      • id – if the value specified is the canonical user ID of an Amazon Web Services account

      • uri – if you are granting permissions to a predefined group

      • emailAddress – if the value specified is the email address of an Amazon Web Services account

        Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

        • US East (N. Virginia)

        • US West (N. California)

        • US West (Oregon)

        • Asia Pacific (Singapore)

        • Asia Pacific (Sydney)

        • Asia Pacific (Tokyo)

        • Europe (Ireland)

        • South America (São Paulo)

        For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

      For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

      x-amz-grant-read: id="11112222333", id="444455556666"

      • This functionality is not supported for directory buckets.

      • This functionality is not supported for Amazon S3 on Outposts.


    • grant_read(impl Into<String>) / set_grant_read(Option<String>):
      required: false

      Specify access permissions explicitly to allow grantee to read the object data and its metadata.

      By default, all objects are private. Only the owner has full access control. When uploading an object, you can use this header to explicitly grant access permissions to specific Amazon Web Services accounts or groups. This header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview in the Amazon S3 User Guide.

      You specify each grantee as a type=value pair, where the type is one of the following:

      • id – if the value specified is the canonical user ID of an Amazon Web Services account

      • uri – if you are granting permissions to a predefined group

      • emailAddress – if the value specified is the email address of an Amazon Web Services account

        Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

        • US East (N. Virginia)

        • US West (N. California)

        • US West (Oregon)

        • Asia Pacific (Singapore)

        • Asia Pacific (Sydney)

        • Asia Pacific (Tokyo)

        • Europe (Ireland)

        • South America (São Paulo)

        For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

      For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

      x-amz-grant-read: id="11112222333", id="444455556666"

      • This functionality is not supported for directory buckets.

      • This functionality is not supported for Amazon S3 on Outposts.


    • grant_read_acp(impl Into<String>) / set_grant_read_acp(Option<String>):
      required: false

      Specify access permissions explicitly to allow the grantee to read the object ACL.

      By default, all objects are private. Only the owner has full access control. When uploading an object, you can use this header to explicitly grant access permissions to specific Amazon Web Services accounts or groups. This header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview in the Amazon S3 User Guide.

      You specify each grantee as a type=value pair, where the type is one of the following:

      • id – if the value specified is the canonical user ID of an Amazon Web Services account

      • uri – if you are granting permissions to a predefined group

      • emailAddress – if the value specified is the email address of an Amazon Web Services account

        Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

        • US East (N. Virginia)

        • US West (N. California)

        • US West (Oregon)

        • Asia Pacific (Singapore)

        • Asia Pacific (Sydney)

        • Asia Pacific (Tokyo)

        • Europe (Ireland)

        • South America (São Paulo)

        For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

      For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

      x-amz-grant-read: id="11112222333", id="444455556666"

      • This functionality is not supported for directory buckets.

      • This functionality is not supported for Amazon S3 on Outposts.


    • grant_write_acp(impl Into<String>) / set_grant_write_acp(Option<String>):
      required: false

      Specify access permissions explicitly to allow the grantee to write the ACL for the applicable object.

      By default, all objects are private. Only the owner has full access control. When uploading an object, you can use this header to explicitly grant access permissions to specific Amazon Web Services accounts or groups. This header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview in the Amazon S3 User Guide.

      You specify each grantee as a type=value pair, where the type is one of the following:

      • id – if the value specified is the canonical user ID of an Amazon Web Services account

      • uri – if you are granting permissions to a predefined group

      • emailAddress – if the value specified is the email address of an Amazon Web Services account

        Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

        • US East (N. Virginia)

        • US West (N. California)

        • US West (Oregon)

        • Asia Pacific (Singapore)

        • Asia Pacific (Sydney)

        • Asia Pacific (Tokyo)

        • Europe (Ireland)

        • South America (São Paulo)

        For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

      For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

      x-amz-grant-read: id="11112222333", id="444455556666"

      • This functionality is not supported for directory buckets.

      • This functionality is not supported for Amazon S3 on Outposts.


    • key(impl Into<String>) / set_key(Option<String>):
      required: true

      Object key for which the multipart upload is to be initiated.


    • metadata(impl Into<String>, impl Into<String>) / set_metadata(Option<HashMap::<String, String>>):
      required: false

      A map of metadata to store with the object in S3.


    • server_side_encryption(ServerSideEncryption) / set_server_side_encryption(Option<ServerSideEncryption>):
      required: false

      The server-side encryption algorithm used when you store this object in Amazon S3 (for example, AES256, aws:kms).

      • Directory buckets - For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256) and server-side encryption with KMS keys (SSE-KMS) (aws:kms). We recommend that the bucket’s default encryption uses the desired encryption configuration and you don’t override the bucket default encryption in your CreateSession requests or PUT object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide. For more information about the encryption overriding behaviors in directory buckets, see Specifying server-side encryption with KMS for new object uploads.

        In the Zonal endpoint API calls (except CopyObject and UploadPartCopy) using the REST API, the encryption request headers must match the encryption settings that are specified in the CreateSession request. You can’t override the values of the encryption settings (x-amz-server-side-encryption, x-amz-server-side-encryption-aws-kms-key-id, x-amz-server-side-encryption-context, and x-amz-server-side-encryption-bucket-key-enabled) that are specified in the CreateSession request. You don’t need to explicitly specify these encryption settings values in Zonal endpoint API calls, and Amazon S3 will use the encryption settings values from the CreateSession request to protect new objects in the directory bucket.

        When you use the CLI or the Amazon Web Services SDKs, for CreateSession, the session token refreshes automatically to avoid service interruptions when a session expires. The CLI or the Amazon Web Services SDKs use the bucket’s default encryption configuration for the CreateSession request. It’s not supported to override the encryption settings values in the CreateSession request. So in the Zonal endpoint API calls (except CopyObject and UploadPartCopy), the encryption request headers must match the default encryption configuration of the directory bucket.


    • storage_class(StorageClass) / set_storage_class(Option<StorageClass>):
      required: false

      By default, Amazon S3 uses the STANDARD Storage Class to store newly created objects. The STANDARD storage class provides high durability and high availability. Depending on performance needs, you can specify a different Storage Class. For more information, see Storage Classes in the Amazon S3 User Guide.

      • For directory buckets, only the S3 Express One Zone storage class is supported to store newly created objects.

      • Amazon S3 on Outposts only uses the OUTPOSTS Storage Class.


    • website_redirect_location(impl Into<String>) / set_website_redirect_location(Option<String>):
      required: false

      If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata.

      This functionality is not supported for directory buckets.


    • sse_customer_algorithm(impl Into<String>) / set_sse_customer_algorithm(Option<String>):
      required: false

      Specifies the algorithm to use when encrypting the object (for example, AES256).

      This functionality is not supported for directory buckets.


    • sse_customer_key(impl Into<String>) / set_sse_customer_key(Option<String>):
      required: false

      Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header.

      This functionality is not supported for directory buckets.


    • sse_customer_key_md5(impl Into<String>) / set_sse_customer_key_md5(Option<String>):
      required: false

      Specifies the 128-bit MD5 digest of the customer-provided encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

      This functionality is not supported for directory buckets.


    • ssekms_key_id(impl Into<String>) / set_ssekms_key_id(Option<String>):
      required: false

      Specifies the KMS key ID (Key ID, Key ARN, or Key Alias) to use for object encryption. If the KMS key doesn’t exist in the same account that’s issuing the command, you must use the full Key ARN not the Key ID.

      General purpose buckets - If you specify x-amz-server-side-encryption with aws:kms or aws:kms:dsse, this header specifies the ID (Key ID, Key ARN, or Key Alias) of the KMS key to use. If you specify x-amz-server-side-encryption:aws:kms or x-amz-server-side-encryption:aws:kms:dsse, but do not provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the Amazon Web Services managed key (aws/s3) to protect the data.

      Directory buckets - If you specify x-amz-server-side-encryption with aws:kms, the x-amz-server-side-encryption-aws-kms-key-id header is implicitly assigned the ID of the KMS symmetric encryption customer managed key that’s configured for your directory bucket’s default encryption setting. If you want to specify the x-amz-server-side-encryption-aws-kms-key-id header explicitly, you can only specify it with the ID (Key ID or Key ARN) of the KMS customer managed key that’s configured for your directory bucket’s default encryption setting. Otherwise, you get an HTTP 400 Bad Request error. Only use the key ID or key ARN. The key alias format of the KMS key isn’t supported. Your SSE-KMS configuration can only support 1 customer managed key per directory bucket for the lifetime of the bucket. The Amazon Web Services managed key (aws/s3) isn’t supported.


    • ssekms_encryption_context(impl Into<String>) / set_ssekms_encryption_context(Option<String>):
      required: false

      Specifies the Amazon Web Services KMS Encryption Context to use for object encryption. The value of this header is a Base64-encoded string of a UTF-8 encoded JSON, which contains the encryption context as key-value pairs.

      Directory buckets - You can optionally provide an explicit encryption context value. The value must match the default encryption context - the bucket Amazon Resource Name (ARN). An additional encryption context value is not supported.


    • bucket_key_enabled(bool) / set_bucket_key_enabled(Option<bool>):
      required: false

      Specifies whether Amazon S3 should use an S3 Bucket Key for object encryption with server-side encryption using Key Management Service (KMS) keys (SSE-KMS).

      General purpose buckets - Setting this header to true causes Amazon S3 to use an S3 Bucket Key for object encryption with SSE-KMS. Also, specifying this header with a PUT action doesn’t affect bucket-level settings for S3 Bucket Key.

      Directory buckets - S3 Bucket Keys are always enabled for GET and PUT operations in a directory bucket and can’t be disabled. S3 Bucket Keys aren’t supported, when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through CopyObject, UploadPartCopy, the Copy operation in Batch Operations, or the import jobs. In this case, Amazon S3 makes a call to KMS every time a copy request is made for a KMS-encrypted object.


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • tagging(impl Into<String>) / set_tagging(Option<String>):
      required: false

      The tag-set for the object. The tag-set must be encoded as URL Query parameters.

      This functionality is not supported for directory buckets.


    • object_lock_mode(ObjectLockMode) / set_object_lock_mode(Option<ObjectLockMode>):
      required: false

      Specifies the Object Lock mode that you want to apply to the uploaded object.

      This functionality is not supported for directory buckets.


    • object_lock_retain_until_date(DateTime) / set_object_lock_retain_until_date(Option<DateTime>):
      required: false

      Specifies the date and time when you want the Object Lock to expire.

      This functionality is not supported for directory buckets.


    • object_lock_legal_hold_status(ObjectLockLegalHoldStatus) / set_object_lock_legal_hold_status(Option<ObjectLockLegalHoldStatus>):
      required: false

      Specifies whether you want to apply a legal hold to the uploaded object.

      This functionality is not supported for directory buckets.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


    • checksum_algorithm(ChecksumAlgorithm) / set_checksum_algorithm(Option<ChecksumAlgorithm>):
      required: false

      Indicates the algorithm that you want Amazon S3 to use to create the checksum for the object. For more information, see Checking object integrity in the Amazon S3 User Guide.


  • On success, responds with CreateMultipartUploadOutput with field(s):
    • abort_date(Option<DateTime>):

      If the bucket has a lifecycle rule configured with an action to abort incomplete multipart uploads and the prefix in the lifecycle rule matches the object name in the request, the response includes this header. The header indicates when the initiated multipart upload becomes eligible for an abort operation. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Configuration in the Amazon S3 User Guide.

      The response also includes the x-amz-abort-rule-id header that provides the ID of the lifecycle configuration rule that defines the abort action.

      This functionality is not supported for directory buckets.

    • abort_rule_id(Option<String>):

      This header is returned along with the x-amz-abort-date header. It identifies the applicable lifecycle configuration rule that defines the action to abort incomplete multipart uploads.

      This functionality is not supported for directory buckets.

    • bucket(Option<String>):

      The name of the bucket to which the multipart upload was initiated. Does not return the access point ARN or access point alias if used.

      Access points are not supported by directory buckets.

    • key(Option<String>):

      Object key for which the multipart upload was initiated.

    • upload_id(Option<String>):

      ID for the initiated multipart upload.

    • server_side_encryption(Option<ServerSideEncryption>):

      The server-side encryption algorithm used when you store this object in Amazon S3 (for example, AES256, aws:kms).

    • sse_customer_algorithm(Option<String>):

      If server-side encryption with a customer-provided encryption key was requested, the response will include this header to confirm the encryption algorithm that’s used.

      This functionality is not supported for directory buckets.

    • sse_customer_key_md5(Option<String>):

      If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide the round-trip message integrity verification of the customer-provided encryption key.

      This functionality is not supported for directory buckets.

    • ssekms_key_id(Option<String>):

      If present, indicates the ID of the KMS key that was used for object encryption.

    • ssekms_encryption_context(Option<String>):

      If present, indicates the Amazon Web Services KMS Encryption Context to use for object encryption. The value of this header is a Base64-encoded string of a UTF-8 encoded JSON, which contains the encryption context as key-value pairs.

    • bucket_key_enabled(Option<bool>):

      Indicates whether the multipart upload uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).

    • request_charged(Option<RequestCharged>):

      If present, indicates that the requester was successfully charged for the request.

      This functionality is not supported for directory buckets.

    • checksum_algorithm(Option<ChecksumAlgorithm>):

      The algorithm that was used to create a checksum of the object.

  • On failure, responds with SdkError<CreateMultipartUploadError>
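
As a sketch, initiating a multipart upload and capturing the upload ID for subsequent UploadPart and CompleteMultipartUpload calls might look like this (bucket and key names are placeholders):

let result = client.create_multipart_upload()
    .bucket("example-bucket")
    .key("large-object.bin")
    .content_type("application/octet-stream")
    .send()
    .await;

if let Ok(output) = result {
    // The upload ID must be passed to later UploadPart and CompleteMultipartUpload requests.
    println!("upload id: {:?}", output.upload_id());
}
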
Source§

impl Client

Source

pub fn create_session(&self) -> CreateSessionFluentBuilder

Constructs a fluent builder for the CreateSession operation.

  • The fluent builder is configurable:
    • session_mode(SessionMode) / set_session_mode(Option<SessionMode>):
      required: false

      Specifies the mode of the session that will be created, either ReadWrite or ReadOnly. By default, a ReadWrite session is created. A ReadWrite session is capable of executing all the Zonal endpoint API operations on a directory bucket. A ReadOnly session is constrained to execute the following Zonal endpoint API operations: GetObject, HeadObject, ListObjectsV2, GetObjectAttributes, ListParts, and ListMultipartUploads.


    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The name of the bucket that you create a session for.


    • server_side_encryption(ServerSideEncryption) / set_server_side_encryption(Option<ServerSideEncryption>):
      required: false

      The server-side encryption algorithm to use when you store objects in the directory bucket.

      For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256) and server-side encryption with KMS keys (SSE-KMS) (aws:kms). By default, Amazon S3 encrypts data with SSE-S3. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide.


    • ssekms_key_id(impl Into<String>) / set_ssekms_key_id(Option<String>):
      required: false

      If you specify x-amz-server-side-encryption with aws:kms, you must specify the x-amz-server-side-encryption-aws-kms-key-id header with the ID (Key ID or Key ARN) of the KMS symmetric encryption customer managed key to use. Otherwise, you get an HTTP 400 Bad Request error. Only use the key ID or key ARN. The key alias format of the KMS key isn’t supported. Also, if the KMS key doesn’t exist in the same account that’s issuing the command, you must use the full Key ARN, not the Key ID.

      Your SSE-KMS configuration can only support 1 customer managed key per directory bucket for the lifetime of the bucket. The Amazon Web Services managed key (aws/s3) isn’t supported.


    • ssekms_encryption_context(impl Into<String>) / set_ssekms_encryption_context(Option<String>):
      required: false

      Specifies the Amazon Web Services KMS Encryption Context as an additional encryption context to use for object encryption. The value of this header is a Base64-encoded string of a UTF-8 encoded JSON, which contains the encryption context as key-value pairs. This value is stored as object metadata and automatically gets passed on to Amazon Web Services KMS for future GetObject operations on this object.

      General purpose buckets - This value must be explicitly added during CopyObject operations if you want an additional encryption context for your object. For more information, see Encryption context in the Amazon S3 User Guide.

      Directory buckets - You can optionally provide an explicit encryption context value. The value must match the default encryption context - the bucket Amazon Resource Name (ARN). An additional encryption context value is not supported.


    • bucket_key_enabled(bool) / set_bucket_key_enabled(Option<bool>):
      required: false

      Specifies whether Amazon S3 should use an S3 Bucket Key for object encryption with server-side encryption using KMS keys (SSE-KMS).

      S3 Bucket Keys are always enabled for GET and PUT operations in a directory bucket and can’t be disabled. S3 Bucket Keys aren’t supported, when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through CopyObject, UploadPartCopy, the Copy operation in Batch Operations, or the import jobs. In this case, Amazon S3 makes a call to KMS every time a copy request is made for a KMS-encrypted object.


  • On success, responds with CreateSessionOutput with field(s):
    • server_side_encryption(Option<ServerSideEncryption>):

      The server-side encryption algorithm used when you store objects in the directory bucket.

    • ssekms_key_id(Option<String>):

      If you specify x-amz-server-side-encryption with aws:kms, this header indicates the ID of the KMS symmetric encryption customer managed key that was used for object encryption.

    • ssekms_encryption_context(Option<String>):

      If present, indicates the Amazon Web Services KMS Encryption Context to use for object encryption. The value of this header is a Base64-encoded string of a UTF-8 encoded JSON, which contains the encryption context as key-value pairs. This value is stored as object metadata and automatically gets passed on to Amazon Web Services KMS for future GetObject operations on this object.

    • bucket_key_enabled(Option<bool>):

      Indicates whether to use an S3 Bucket Key for server-side encryption with KMS keys (SSE-KMS).

    • credentials(Option<SessionCredentials>):

      The established temporary security credentials for the created session.

  • On failure, responds with SdkError<CreateSessionError>
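
The SDK normally creates and refreshes directory-bucket sessions for you, so calling CreateSession directly is rarely necessary. A sketch with a placeholder directory bucket name could look like:

use aws_sdk_s3::types::SessionMode;

let result = client.create_session()
    .bucket("example-bucket--usw2-az1--x-s3")
    .session_mode(SessionMode::ReadOnly)
    .send()
    .await;

if let Ok(output) = result {
    // Temporary credentials scoped to the directory bucket, if the call succeeded.
    println!("credentials returned: {}", output.credentials().is_some());
}
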
Source§

impl Client

Source

pub fn delete_bucket(&self) -> DeleteBucketFluentBuilder

Constructs a fluent builder for the DeleteBucket operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      Specifies the bucket being deleted.

      Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region_code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren’t supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must also follow the format bucket_base_name--az_id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

      For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code 501 Not Implemented.


  • On success, responds with DeleteBucketOutput
  • On failure, responds with SdkError<DeleteBucketError>
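
A minimal DeleteBucket call, with a placeholder bucket name and account ID, might look like:

let result = client.delete_bucket()
    .bucket("example-bucket")
    .expected_bucket_owner("111122223333")
    .send()
    .await;

if let Err(err) = result {
    eprintln!("DeleteBucket failed: {err}");
}
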
Source§

impl Client

Source

pub fn delete_bucket_analytics_configuration(&self) -> DeleteBucketAnalyticsConfigurationFluentBuilder

Constructs a fluent builder for the DeleteBucketAnalyticsConfiguration operation.

Source§

impl Client

Source

pub fn delete_bucket_cors(&self) -> DeleteBucketCorsFluentBuilder

Constructs a fluent builder for the DeleteBucketCors operation.

Source§

impl Client

Source

pub fn delete_bucket_encryption(&self) -> DeleteBucketEncryptionFluentBuilder

Constructs a fluent builder for the DeleteBucketEncryption operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The name of the bucket containing the server-side encryption configuration to delete.

      Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region_code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren’t supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must also follow the format bucket_base_name--az_id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

      For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code 501 Not Implemented.


  • On success, responds with DeleteBucketEncryptionOutput
  • On failure, responds with SdkError<DeleteBucketEncryptionError>
Source§

impl Client

Source

pub fn delete_bucket_intelligent_tiering_configuration(&self) -> DeleteBucketIntelligentTieringConfigurationFluentBuilder

Constructs a fluent builder for the DeleteBucketIntelligentTieringConfiguration operation.

Source§

impl Client

Source

pub fn delete_bucket_inventory_configuration(&self) -> DeleteBucketInventoryConfigurationFluentBuilder

Constructs a fluent builder for the DeleteBucketInventoryConfiguration operation.

Source§

impl Client

Source

pub fn delete_bucket_lifecycle(&self) -> DeleteBucketLifecycleFluentBuilder

Constructs a fluent builder for the DeleteBucketLifecycle operation.

Source§

impl Client

Source

pub fn delete_bucket_metrics_configuration(&self) -> DeleteBucketMetricsConfigurationFluentBuilder

Constructs a fluent builder for the DeleteBucketMetricsConfiguration operation.

Source§

impl Client

Source

pub fn delete_bucket_ownership_controls(&self) -> DeleteBucketOwnershipControlsFluentBuilder

Constructs a fluent builder for the DeleteBucketOwnershipControls operation.

Source§

impl Client

Source

pub fn delete_bucket_policy(&self) -> DeleteBucketPolicyFluentBuilder

Constructs a fluent builder for the DeleteBucketPolicy operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The bucket name.

      Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region_code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren’t supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must also follow the format bucket_base_name--az_id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

      For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code 501 Not Implemented.


  • On success, responds with DeleteBucketPolicyOutput
  • On failure, responds with SdkError<DeleteBucketPolicyError>
Source§

impl Client

Source

pub fn delete_bucket_replication(&self) -> DeleteBucketReplicationFluentBuilder

Constructs a fluent builder for the DeleteBucketReplication operation.

Source§

impl Client

Source

pub fn delete_bucket_tagging(&self) -> DeleteBucketTaggingFluentBuilder

Constructs a fluent builder for the DeleteBucketTagging operation.

Source§

impl Client

Source

pub fn delete_bucket_website(&self) -> DeleteBucketWebsiteFluentBuilder

Constructs a fluent builder for the DeleteBucketWebsite operation.

Source§

impl Client

Source

pub fn delete_object(&self) -> DeleteObjectFluentBuilder

Constructs a fluent builder for the DeleteObject operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The bucket name of the bucket containing the object.

      Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket_base_name--az-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      Access points and Object Lambda access points are not supported by directory buckets.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • key(impl Into<String>) / set_key(Option<String>):
      required: true

      Key name of the object to delete.


    • mfa(impl Into<String>) / set_mfa(Option<String>):
      required: false

      The concatenation of the authentication device’s serial number, a space, and the value that is displayed on your authentication device. Required to permanently delete a versioned object if versioning is configured with MFA delete enabled.

      This functionality is not supported for directory buckets.


    • version_id(impl Into<String>) / set_version_id(Option<String>):
      required: false

      Version ID used to reference a specific version of the object.

      For directory buckets in this API operation, only the null value of the version ID is supported.


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • bypass_governance_retention(bool) / set_bypass_governance_retention(Option<bool>):
      required: false

      Indicates whether S3 Object Lock should bypass Governance-mode restrictions to process this operation. To use this header, you must have the s3:BypassGovernanceRetention permission.

      This functionality is not supported for directory buckets.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


    • if_match(impl Into<String>) / set_if_match(Option<String>):
      required: false

      The If-Match header field makes the request method conditional on ETags. If the ETag value does not match, the operation returns a 412 Precondition Failed error. If the ETag matches or if the object doesn’t exist, the operation will return a 204 Success (No Content) response.

      For more information about conditional requests, see RFC 7232.

      This functionality is only supported for directory buckets.


    • if_match_last_modified_time(DateTime) / set_if_match_last_modified_time(Option<DateTime>):
      required: false

      If present, the object is deleted only if its last modification time matches the provided Timestamp. If the Timestamp values do not match, the operation returns a 412 Precondition Failed error. If the Timestamp matches or if the object doesn’t exist, the operation returns a 204 Success (No Content) response.

      This functionality is only supported for directory buckets.


    • if_match_size(i64) / set_if_match_size(Option<i64>):
      required: false

      If present, the object is deleted only if its size matches the provided size in bytes. If the Size value does not match, the operation returns a 412 Precondition Failed error. If the Size matches or if the object doesn’t exist, the operation returns a 204 Success (No Content) response.

      This functionality is only supported for directory buckets.

      You can use the If-Match, x-amz-if-match-last-modified-time, and x-amz-if-match-size conditional headers in conjunction with each other or individually.


  • On success, responds with DeleteObjectOutput with field(s):
    • delete_marker(Option<bool>):

      Indicates whether the specified object version that was permanently deleted was (true) or was not (false) a delete marker before deletion. In a simple DELETE, this header indicates whether (true) or not (false) the current version of the object is a delete marker.

      This functionality is not supported for directory buckets.

    • version_id(Option<String>):

      Returns the version ID of the delete marker created as a result of the DELETE operation.

      This functionality is not supported for directory buckets.

    • request_charged(Option<RequestCharged>):

      If present, indicates that the requester was successfully charged for the request.

      This functionality is not supported for directory buckets.

  • On failure, responds with SdkError<DeleteObjectError>
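
For example, deleting a specific object version and inspecting the response fields above might look like this sketch (the bucket, key, and version ID are placeholders):

let result = client.delete_object()
    .bucket("example-bucket")
    .key("photos/2024/photo.jpg")
    .version_id("exampleVersionId")
    .send()
    .await;

if let Ok(output) = result {
    // delete_marker() and version_id() correspond to the output fields documented above.
    println!("delete marker: {:?}", output.delete_marker());
    println!("version id: {:?}", output.version_id());
}
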
Source§

impl Client

Source

pub fn delete_object_tagging(&self) -> DeleteObjectTaggingFluentBuilder

Constructs a fluent builder for the DeleteObjectTagging operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The bucket name containing the objects from which to remove the tags.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • key(impl Into<String>) / set_key(Option<String>):
      required: true

      The key that identifies the object in the bucket from which to remove all tags.


    • version_id(impl Into<String>) / set_version_id(Option<String>):
      required: false

      The versionId of the object that the tag-set will be removed from.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


  • On success, responds with DeleteObjectTaggingOutput with field(s):
  • On failure, responds with SdkError<DeleteObjectTaggingError>
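
A minimal sketch of removing the tag set from a specific object version, using placeholder names:

let result = client.delete_object_tagging()
    .bucket("example-bucket")
    .key("reports/annual.pdf")
    .version_id("exampleVersionId")
    .send()
    .await;
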
Source§

impl Client

Source

pub fn delete_objects(&self) -> DeleteObjectsFluentBuilder

Constructs a fluent builder for the DeleteObjects operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The bucket name containing the objects to delete.

      Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket_base_name--az-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      Access points and Object Lambda access points are not supported by directory buckets.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • delete(Delete) / set_delete(Option<Delete>):
      required: true

      Container for the request.


    • mfa(impl Into<String>) / set_mfa(Option<String>):
      required: false

      The concatenation of the authentication device’s serial number, a space, and the value that is displayed on your authentication device. Required to permanently delete a versioned object if versioning is configured with MFA delete enabled.

      When performing the DeleteObjects operation on an MFA delete enabled bucket, which attempts to delete the specified versioned objects, you must include an MFA token. If you don’t provide an MFA token, the entire request will fail, even if there are non-versioned objects that you are trying to delete. If you provide an invalid token, whether there are versioned object keys in the request or not, the entire Multi-Object Delete request will fail. For information about MFA Delete, see MFA Delete in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • bypass_governance_retention(bool) / set_bypass_governance_retention(Option<bool>):
      required: false

      Specifies whether you want to delete this object even if it has a Governance-type Object Lock in place. To use this header, you must have the s3:BypassGovernanceRetention permission.

      This functionality is not supported for directory buckets.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


    • checksum_algorithm(ChecksumAlgorithm) / set_checksum_algorithm(Option<ChecksumAlgorithm>):
      required: false

      Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request.

      For the x-amz-checksum-algorithm header, replace algorithm with the supported algorithm from the following list:

      • CRC32

      • CRC32C

      • SHA1

      • SHA256

      For more information, see Checking object integrity in the Amazon S3 User Guide.

      If the individual checksum value you provide through x-amz-checksum-algorithm doesn’t match the checksum algorithm you set through x-amz-sdk-checksum-algorithm, Amazon S3 ignores any provided ChecksumAlgorithm parameter and uses the checksum algorithm that matches the provided value in x-amz-checksum-algorithm.

      If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.


  • On success, responds with DeleteObjectsOutput with field(s):
  • On failure, responds with SdkError<DeleteObjectsError>
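
As a sketch, building the required Delete container and deleting two placeholder keys might look like the following; note that in recent SDK versions the ObjectIdentifier and Delete builders return a Result because their key/objects members are required:

use aws_sdk_s3::types::{Delete, ObjectIdentifier};

// Build one ObjectIdentifier per key to delete.
let objects: Vec<ObjectIdentifier> = ["a.txt", "b.txt"]
    .iter()
    .map(|key| ObjectIdentifier::builder().key(*key).build())
    .collect::<Result<_, _>>()
    .expect("valid object identifiers");

let delete = Delete::builder()
    .set_objects(Some(objects))
    .build()
    .expect("at least one object was provided");

let result = client.delete_objects()
    .bucket("example-bucket")
    .delete(delete)
    .send()
    .await;
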
Source§

impl Client

Source

pub fn delete_public_access_block(&self) -> DeletePublicAccessBlockFluentBuilder

Constructs a fluent builder for the DeletePublicAccessBlock operation.

Source§

impl Client

Source

pub fn get_bucket_accelerate_configuration(&self) -> GetBucketAccelerateConfigurationFluentBuilder

Constructs a fluent builder for the GetBucketAccelerateConfiguration operation.

Source§

impl Client

Source

pub fn get_bucket_acl(&self) -> GetBucketAclFluentBuilder

Constructs a fluent builder for the GetBucketAcl operation.

Source§

impl Client

Source

pub fn get_bucket_analytics_configuration(&self) -> GetBucketAnalyticsConfigurationFluentBuilder

Constructs a fluent builder for the GetBucketAnalyticsConfiguration operation.

Source§

impl Client

Source

pub fn get_bucket_cors(&self) -> GetBucketCorsFluentBuilder

Constructs a fluent builder for the GetBucketCors operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The bucket name for which to get the cors configuration.

      When you use this API operation with an access point, provide the alias of the access point in place of the bucket name.

      When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


  • On success, responds with GetBucketCorsOutput with field(s):
  • On failure, responds with SdkError<GetBucketCorsError>
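
A minimal sketch of fetching and printing a bucket’s CORS configuration (placeholder bucket name):

let result = client.get_bucket_cors()
    .bucket("example-bucket")
    .send()
    .await;

match result {
    Ok(output) => println!("CORS rules: {:?}", output.cors_rules()),
    Err(err) => eprintln!("GetBucketCors failed: {err}"),
}
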
Source§

impl Client

Source

pub fn get_bucket_encryption(&self) -> GetBucketEncryptionFluentBuilder

Constructs a fluent builder for the GetBucketEncryption operation.

Source§

impl Client

Source

pub fn get_bucket_intelligent_tiering_configuration(&self) -> GetBucketIntelligentTieringConfigurationFluentBuilder

Constructs a fluent builder for the GetBucketIntelligentTieringConfiguration operation.

Source§

impl Client

Source

pub fn get_bucket_inventory_configuration(&self) -> GetBucketInventoryConfigurationFluentBuilder

Constructs a fluent builder for the GetBucketInventoryConfiguration operation.

Source§

impl Client

Source

pub fn get_bucket_lifecycle_configuration(&self) -> GetBucketLifecycleConfigurationFluentBuilder

Constructs a fluent builder for the GetBucketLifecycleConfiguration operation.

  • The fluent builder is configurable:
  • On success, responds with GetBucketLifecycleConfigurationOutput with field(s):
    • rules(Option<Vec::<LifecycleRule>>):

      Container for a lifecycle rule.

    • transition_default_minimum_object_size(Option<TransitionDefaultMinimumObjectSize>):

      Indicates which default minimum object size behavior is applied to the lifecycle configuration.

      This parameter applies to general purpose buckets only. It is not supported for directory bucket lifecycle configurations.

      • all_storage_classes_128K - Objects smaller than 128 KB will not transition to any storage class by default.

      • varies_by_storage_class - Objects smaller than 128 KB will transition to Glacier Flexible Retrieval or Glacier Deep Archive storage classes. By default, all other storage classes will prevent transitions smaller than 128 KB.

      To customize the minimum object size for any transition you can add a filter that specifies a custom ObjectSizeGreaterThan or ObjectSizeLessThan in the body of your transition rule. Custom filters always take precedence over the default transition behavior.

  • On failure, responds with SdkError<GetBucketLifecycleConfigurationError>
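
A sketch of listing the lifecycle rules on a bucket (placeholder name) might look like:

let result = client.get_bucket_lifecycle_configuration()
    .bucket("example-bucket")
    .send()
    .await;

if let Ok(output) = result {
    for rule in output.rules() {
        // Each LifecycleRule carries its ID, filter, and transition/expiration actions.
        println!("lifecycle rule: {:?}", rule.id());
    }
}
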
Source§

impl Client

Source

pub fn get_bucket_location(&self) -> GetBucketLocationFluentBuilder

Constructs a fluent builder for the GetBucketLocation operation.

Source§

impl Client

Source

pub fn get_bucket_logging(&self) -> GetBucketLoggingFluentBuilder

Constructs a fluent builder for the GetBucketLogging operation.

Source§

impl Client

Source

pub fn get_bucket_metrics_configuration(&self) -> GetBucketMetricsConfigurationFluentBuilder

Constructs a fluent builder for the GetBucketMetricsConfiguration operation.

Source§

impl Client

Source

pub fn get_bucket_notification_configuration(&self) -> GetBucketNotificationConfigurationFluentBuilder

Constructs a fluent builder for the GetBucketNotificationConfiguration operation.

Source§

impl Client

Source

pub fn get_bucket_ownership_controls(&self) -> GetBucketOwnershipControlsFluentBuilder

Constructs a fluent builder for the GetBucketOwnershipControls operation.

Source§

impl Client

Source

pub fn get_bucket_policy(&self) -> GetBucketPolicyFluentBuilder

Constructs a fluent builder for the GetBucketPolicy operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The bucket name to get the bucket policy for.

      Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region_code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren’t supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must also follow the format bucket_base_name--az_id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

      Access points - When you use this API operation with an access point, provide the alias of the access point in place of the bucket name.

      Object Lambda access points - When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.

      Access points and Object Lambda access points are not supported by directory buckets.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

      For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code 501 Not Implemented.


  • On success, responds with GetBucketPolicyOutput with field(s):
  • On failure, responds with SdkError<GetBucketPolicyError>
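
A minimal sketch of retrieving the bucket policy document as a JSON string (placeholder bucket name):

let result = client.get_bucket_policy()
    .bucket("example-bucket")
    .send()
    .await;

if let Ok(output) = result {
    // policy() returns the policy text, if one is attached to the bucket.
    println!("{}", output.policy().unwrap_or_default());
}
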
Source§

impl Client

Source

pub fn get_bucket_policy_status(&self) -> GetBucketPolicyStatusFluentBuilder

Constructs a fluent builder for the GetBucketPolicyStatus operation.

Source§

impl Client

Source

pub fn get_bucket_replication(&self) -> GetBucketReplicationFluentBuilder

Constructs a fluent builder for the GetBucketReplication operation.

Source§

impl Client

Source

pub fn get_bucket_request_payment(&self) -> GetBucketRequestPaymentFluentBuilder

Constructs a fluent builder for the GetBucketRequestPayment operation.

Source§

impl Client

Source

pub fn get_bucket_tagging(&self) -> GetBucketTaggingFluentBuilder

Constructs a fluent builder for the GetBucketTagging operation.

Source§

impl Client

Source

pub fn get_bucket_versioning(&self) -> GetBucketVersioningFluentBuilder

Constructs a fluent builder for the GetBucketVersioning operation.

Source§

impl Client

Source

pub fn get_bucket_website(&self) -> GetBucketWebsiteFluentBuilder

Constructs a fluent builder for the GetBucketWebsite operation.

Source§

impl Client

Source

pub fn get_object(&self) -> GetObjectFluentBuilder

Constructs a fluent builder for the GetObject operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The bucket name containing the object.

      Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket_base_name--az-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      Object Lambda access points - When you use this action with an Object Lambda access point, you must direct requests to the Object Lambda access point hostname. The Object Lambda access point hostname takes the form AccessPointName-AccountId.s3-object-lambda.Region.amazonaws.com.

      Access points and Object Lambda access points are not supported by directory buckets.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • if_match(impl Into<String>) / set_if_match(Option<String>):
      required: false

      Return the object only if its entity tag (ETag) is the same as the one specified in this header; otherwise, return a 412 Precondition Failed error.

      If both the If-Match and If-Unmodified-Since headers are present in the request, and the If-Match condition evaluates to true while the If-Unmodified-Since condition evaluates to false, then S3 returns 200 OK and the requested data.

      For more information about conditional requests, see RFC 7232.


    • if_modified_since(DateTime) / set_if_modified_since(Option<DateTime>):
      required: false

      Return the object only if it has been modified since the specified time; otherwise, return a 304 Not Modified error.

      If both the If-None-Match and If-Modified-Since headers are present in the request, and the If-None-Match condition evaluates to false while the If-Modified-Since condition evaluates to true, then S3 returns a 304 Not Modified status code.

      For more information about conditional requests, see RFC 7232.


    • if_none_match(impl Into<String>) / set_if_none_match(Option<String>):
      required: false

      Return the object only if its entity tag (ETag) is different from the one specified in this header; otherwise, return a 304 Not Modified error.

      If both the If-None-Match and If-Modified-Since headers are present in the request, and the If-None-Match condition evaluates to false while the If-Modified-Since condition evaluates to true, then S3 returns a 304 Not Modified HTTP status code.

      For more information about conditional requests, see RFC 7232.


    • if_unmodified_since(DateTime) / set_if_unmodified_since(Option<DateTime>):
      required: false

      Return the object only if it has not been modified since the specified time; otherwise, return a 412 Precondition Failed error.

      If both the If-Match and If-Unmodified-Since headers are present in the request, and the If-Match condition evaluates to true while the If-Unmodified-Since condition evaluates to false, then S3 returns 200 OK and the requested data.

      For more information about conditional requests, see RFC 7232.


    • key(impl Into<String>) / set_key(Option<String>):
      required: true

      Key of the object to get.


    • range(impl Into<String>) / set_range(Option<String>):
      required: false

      Downloads the specified byte range of an object. For more information about the HTTP Range header, see https://www.rfc-editor.org/rfc/rfc9110.html#name-range.

      Amazon S3 doesn’t support retrieving multiple ranges of data per GET request.


    • response_cache_control(impl Into<String>) / set_response_cache_control(Option<String>):
      required: false

      Sets the Cache-Control header of the response.


    • response_content_disposition(impl Into<String>) / set_response_content_disposition(Option<String>):
      required: false

      Sets the Content-Disposition header of the response.


    • response_content_encoding(impl Into<String>) / set_response_content_encoding(Option<String>):
      required: false

      Sets the Content-Encoding header of the response.


    • response_content_language(impl Into<String>) / set_response_content_language(Option<String>):
      required: false

      Sets the Content-Language header of the response.


    • response_content_type(impl Into<String>) / set_response_content_type(Option<String>):
      required: false

      Sets the Content-Type header of the response.


    • response_expires(DateTime) / set_response_expires(Option<DateTime>):
      required: false

      Sets the Expires header of the response.


    • version_id(impl Into<String>) / set_version_id(Option<String>):
      required: false

      Version ID used to reference a specific version of the object.

      By default, the GetObject operation returns the current version of an object. To return a different version, use the versionId subresource.

      • If you include a versionId in your request header, you must have the s3:GetObjectVersion permission to access a specific version of an object. The s3:GetObject permission is not required in this scenario.

      • If you request the current version of an object without a specific versionId in the request header, only the s3:GetObject permission is required. The s3:GetObjectVersion permission is not required in this scenario.

      • Directory buckets - S3 Versioning isn’t enabled and supported for directory buckets. For this API operation, only the null value of the version ID is supported by directory buckets. You can only specify null to the versionId query parameter in the request.

      For more information about versioning, see PutBucketVersioning.


    • sse_customer_algorithm(impl Into<String>) / set_sse_customer_algorithm(Option<String>):
      required: false

      Specifies the algorithm to use when decrypting the object (for example, AES256).

      If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object, you must use the following headers:

      • x-amz-server-side-encryption-customer-algorithm

      • x-amz-server-side-encryption-customer-key

      • x-amz-server-side-encryption-customer-key-MD5

      For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • sse_customer_key(impl Into<String>) / set_sse_customer_key(Option<String>):
      required: false

      Specifies the customer-provided encryption key that you originally provided for Amazon S3 to encrypt the data before storing it. This value is used to decrypt the object when recovering it and must match the one used when storing the data. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header.

      If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object, you must use the following headers:

      • x-amz-server-side-encryption-customer-algorithm

      • x-amz-server-side-encryption-customer-key

      • x-amz-server-side-encryption-customer-key-MD5

      For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • sse_customer_key_md5(impl Into<String>) / set_sse_customer_key_md5(Option<String>):
      required: false

      Specifies the 128-bit MD5 digest of the customer-provided encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

      If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object, you must use the following headers:

      • x-amz-server-side-encryption-customer-algorithm

      • x-amz-server-side-encryption-customer-key

      • x-amz-server-side-encryption-customer-key-MD5

      For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • part_number(i32) / set_part_number(Option<i32>):
      required: false

      Part number of the object being read. This is a positive integer between 1 and 10,000. Effectively performs a ‘ranged’ GET request for the part specified. Useful for downloading just a part of an object.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


    • checksum_mode(ChecksumMode) / set_checksum_mode(Option<ChecksumMode>):
      required: false

      To retrieve the checksum, this mode must be enabled.

      General purpose buckets - In addition, if you enable checksum mode and the object is uploaded with a checksum and encrypted with a Key Management Service (KMS) key, you must have permission to use the kms:Decrypt action to retrieve the checksum.


  • On success, responds with GetObjectOutput with field(s):
    • body(ByteStream):

      Object data.

    • delete_marker(Option<bool>):

      Indicates whether the object retrieved was (true) or was not (false) a Delete Marker. If false, this response header does not appear in the response.

      • If the current version of the object is a delete marker, Amazon S3 behaves as if the object was deleted and includes x-amz-delete-marker: true in the response.

      • If the specified version in the request is a delete marker, the response returns a 405 Method Not Allowed error and the Last-Modified: timestamp response header.

    • accept_ranges(Option<String>):

      Indicates that a range of bytes was specified in the request.

    • expiration(Option<String>):

      If the object expiration is configured (see PutBucketLifecycleConfiguration), the response includes this header. It includes the expiry-date and rule-id key-value pairs providing object expiration information. The value of the rule-id is URL-encoded.

      This functionality is not supported for directory buckets.

    • restore(Option<String>):

      Provides information about object restoration action and expiration time of the restored object copy.

      This functionality is not supported for directory buckets. Only the S3 Express One Zone storage class is supported by directory buckets to store objects.

    • last_modified(Option<DateTime>):

      Date and time when the object was last modified.

      General purpose buckets - When you specify a versionId of the object in your request, if the specified version in the request is a delete marker, the response returns a 405 Method Not Allowed error and the Last-Modified: timestamp response header.

    • content_length(Option<i64>):

      Size of the body in bytes.

    • e_tag(Option<String>):

      An entity tag (ETag) is an opaque identifier assigned by a web server to a specific version of a resource found at a URL.

    • checksum_crc32(Option<String>):

      The base64-encoded, 32-bit CRC-32 checksum of the object. This will only be present if it was uploaded with the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

    • checksum_crc32_c(Option<String>):

      The base64-encoded, 32-bit CRC-32C checksum of the object. This will only be present if it was uploaded with the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

    • checksum_sha1(Option<String>):

      The base64-encoded, 160-bit SHA-1 digest of the object. This will only be present if it was uploaded with the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

    • checksum_sha256(Option<String>):

      The base64-encoded, 256-bit SHA-256 digest of the object. This will only be present if it was uploaded with the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

    • missing_meta(Option<i32>):

      This is set to the number of metadata entries not returned in the headers that are prefixed with x-amz-meta-. This can happen if you create metadata using an API like SOAP that supports more flexible metadata than the REST API. For example, using SOAP, you can create metadata whose values are not legal HTTP headers.

      This functionality is not supported for directory buckets.

    • version_id(Option<String>):

      Version ID of the object.

      This functionality is not supported for directory buckets.

    • cache_control(Option<String>):

      Specifies caching behavior along the request/reply chain.

    • content_disposition(Option<String>):

      Specifies presentational information for the object.

    • content_encoding(Option<String>):

      Indicates what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.

    • content_language(Option<String>):

      The language the content is in.

    • content_range(Option<String>):

      The portion of the object returned in the response.

    • content_type(Option<String>):

      A standard MIME type describing the format of the object data.

    • website_redirect_location(Option<String>):

      If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata.

      This functionality is not supported for directory buckets.

    • server_side_encryption(Option<ServerSideEncryption>):

      The server-side encryption algorithm used when you store this object in Amazon S3.

    • metadata(Option<HashMap::<String, String>>):

      A map of metadata to store with the object in S3.

    • sse_customer_algorithm(Option<String>):

      If server-side encryption with a customer-provided encryption key was requested, the response will include this header to confirm the encryption algorithm that’s used.

      This functionality is not supported for directory buckets.

    • sse_customer_key_md5(Option<String>):

      If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide the round-trip message integrity verification of the customer-provided encryption key.

      This functionality is not supported for directory buckets.

    • ssekms_key_id(Option<String>):

      If present, indicates the ID of the KMS key that was used for object encryption.

    • bucket_key_enabled(Option<bool>):

      Indicates whether the object uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).

    • storage_class(Option<StorageClass>):

      Provides storage class information of the object. Amazon S3 returns this header for all objects except for S3 Standard storage class objects.

      Directory buckets - Only the S3 Express One Zone storage class is supported by directory buckets to store objects.

    • request_charged(Option<RequestCharged>):

      If present, indicates that the requester was successfully charged for the request.

      This functionality is not supported for directory buckets.

    • replication_status(Option<ReplicationStatus>):

      Amazon S3 can return this if your request involves a bucket that is either a source or destination in a replication rule.

      This functionality is not supported for directory buckets.

    • parts_count(Option<i32>):

      The count of parts this object has. This value is only returned if you specify partNumber in your request and the object was uploaded as a multipart upload.

    • tag_count(Option<i32>):

      The number of tags, if any, on the object, when you have the relevant permission to read object tags.

      You can use GetObjectTagging to retrieve the tag set associated with an object.

      This functionality is not supported for directory buckets.

    • object_lock_mode(Option<ObjectLockMode>):

      The Object Lock mode that’s currently in place for this object.

      This functionality is not supported for directory buckets.

    • object_lock_retain_until_date(Option<DateTime>):

      The date and time when this object’s Object Lock will expire.

      This functionality is not supported for directory buckets.

    • object_lock_legal_hold_status(Option<ObjectLockLegalHoldStatus>):

      Indicates whether this object has an active legal hold. This field is only returned if you have permission to view an object’s legal hold status.

      This functionality is not supported for directory buckets.

    • expires(Option<DateTime>):

      The date and time at which the object is no longer cacheable.

    • expires_string(Option<String>):

      The date and time at which the object is no longer cacheable.

  • On failure, responds with SdkError<GetObjectError>
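
As a minimal sketch, downloading an object and buffering its body in memory might look like this; the bucket and key are placeholders, and collect() on the returned ByteStream reads the entire body into memory, so it is only suitable for objects that fit in RAM:

let resp = client
    .get_object()
    .bucket("amzn-s3-demo-bucket")
    .key("photos/2024/cat.jpg")
    .send()
    .await?;
// Capture metadata before consuming the body, which is a streaming ByteStream.
let etag = resp.e_tag().map(String::from);
let data = resp.body.collect().await?.into_bytes();
println!("downloaded {} bytes (etag: {:?})", data.len(), etag);
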
Source§

impl Client

Source

pub fn get_object_acl(&self) -> GetObjectAclFluentBuilder

Constructs a fluent builder for the GetObjectAcl operation.

Source§

impl Client

Source

pub fn get_object_attributes(&self) -> GetObjectAttributesFluentBuilder

Constructs a fluent builder for the GetObjectAttributes operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The name of the bucket that contains the object.

      Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket_base_name--az-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      Access points and Object Lambda access points are not supported by directory buckets.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • key(impl Into<String>) / set_key(Option<String>):
      required: true

      The object key.


    • version_id(impl Into<String>) / set_version_id(Option<String>):
      required: false

      The version ID used to reference a specific version of the object.

      S3 Versioning isn’t enabled and supported for directory buckets. For this API operation, only the null value of the version ID is supported by directory buckets. You can only specify null to the versionId query parameter in the request.


    • max_parts(i32) / set_max_parts(Option<i32>):
      required: false

      Sets the maximum number of parts to return.


    • part_number_marker(impl Into<String>) / set_part_number_marker(Option<String>):
      required: false

      Specifies the part after which listing should begin. Only parts with higher part numbers will be listed.


    • sse_customer_algorithm(impl Into<String>) / set_sse_customer_algorithm(Option<String>):
      required: false

      Specifies the algorithm to use when encrypting the object (for example, AES256).

      This functionality is not supported for directory buckets.


    • sse_customer_key(impl Into<String>) / set_sse_customer_key(Option<String>):
      required: false

      Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header.

      This functionality is not supported for directory buckets.


    • sse_customer_key_md5(impl Into<String>) / set_sse_customer_key_md5(Option<String>):
      required: false

      Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

      This functionality is not supported for directory buckets.


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


    • object_attributes(ObjectAttributes) / set_object_attributes(Option<Vec::<ObjectAttributes>>):
      required: true

      Specifies the fields at the root level that you want returned in the response. Fields that you do not specify are not returned.


  • On success, responds with GetObjectAttributesOutput with field(s):
  • On failure, responds with SdkError<GetObjectAttributesError>
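
As an illustrative sketch, requesting only the size and ETag of an object could look as follows; the ObjectAttributes enum variants and the object_size()/e_tag() output accessors are assumed from the SDK’s generated types:

use aws_sdk_s3::types::ObjectAttributes;

// Each call to object_attributes() appends one attribute to the requested set.
let attrs = client
    .get_object_attributes()
    .bucket("amzn-s3-demo-bucket")
    .key("photos/2024/cat.jpg")
    .object_attributes(ObjectAttributes::ObjectSize)
    .object_attributes(ObjectAttributes::Etag)
    .send()
    .await?;
println!("size: {:?}, etag: {:?}", attrs.object_size(), attrs.e_tag());
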
Source§

impl Client

Source

pub fn get_object_legal_hold(&self) -> GetObjectLegalHoldFluentBuilder

Constructs a fluent builder for the GetObjectLegalHold operation.

Source§

impl Client

Source

pub fn get_object_lock_configuration(&self) -> GetObjectLockConfigurationFluentBuilder

Constructs a fluent builder for the GetObjectLockConfiguration operation.

Source§

impl Client

Source

pub fn get_object_retention(&self) -> GetObjectRetentionFluentBuilder

Constructs a fluent builder for the GetObjectRetention operation.

Source§

impl Client

Source

pub fn get_object_tagging(&self) -> GetObjectTaggingFluentBuilder

Constructs a fluent builder for the GetObjectTagging operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The bucket name containing the object for which to get the tagging information.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • key(impl Into<String>) / set_key(Option<String>):
      required: true

      Object key for which to get the tagging information.


    • version_id(impl Into<String>) / set_version_id(Option<String>):
      required: false

      The versionId of the object for which to get the tagging information.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


  • On success, responds with GetObjectTaggingOutput with field(s):
  • On failure, responds with SdkError<GetObjectTaggingError>
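
A minimal sketch of listing the tags on an object might look like this; the tag_set() accessor on the output and the key()/value() getters on Tag are assumed from the SDK’s generated types:

let tagging = client
    .get_object_tagging()
    .bucket("amzn-s3-demo-bucket")
    .key("photos/2024/cat.jpg")
    .send()
    .await?;
// Iterate over the object's tag set and print each key-value pair.
for tag in tagging.tag_set() {
    println!("{} = {}", tag.key(), tag.value());
}
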
Source§

impl Client

Source

pub fn get_object_torrent(&self) -> GetObjectTorrentFluentBuilder

Constructs a fluent builder for the GetObjectTorrent operation.

Source§

impl Client

Source

pub fn get_public_access_block(&self) -> GetPublicAccessBlockFluentBuilder

Constructs a fluent builder for the GetPublicAccessBlock operation.

Source§

impl Client

Source

pub fn head_bucket(&self) -> HeadBucketFluentBuilder

Constructs a fluent builder for the HeadBucket operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The bucket name.

      Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket_base_name--az-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      Object Lambda access points - When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.

      Access points and Object Lambda access points are not supported by directory buckets.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


  • On success, responds with HeadBucketOutput with field(s):
  • On failure, responds with SdkError<HeadBucketError>
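
Because HeadBucket returns no body, it is commonly used as an existence and permission probe. A hedged sketch, with a placeholder bucket name:

// Ok(_) means the bucket exists and the caller can access it;
// an error (for example, a 404 or 403) means it does not exist or is not accessible.
match client
    .head_bucket()
    .bucket("amzn-s3-demo-bucket")
    .send()
    .await
{
    Ok(_) => println!("bucket exists and is accessible"),
    Err(err) => eprintln!("head_bucket failed: {err}"),
}
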
Source§

impl Client

Source

pub fn head_object(&self) -> HeadObjectFluentBuilder

Constructs a fluent builder for the HeadObject operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The name of the bucket that contains the object.

      Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket_base_name--az-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      Access points and Object Lambda access points are not supported by directory buckets.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • if_match(impl Into<String>) / set_if_match(Option<String>):
      required: false

      Return the object only if its entity tag (ETag) is the same as the one specified; otherwise, return a 412 (precondition failed) error.

      If both of the If-Match and If-Unmodified-Since headers are present in the request as follows:

      • If-Match condition evaluates to true, and;

      • If-Unmodified-Since condition evaluates to false;

      Then Amazon S3 returns 200 OK and the data requested.

      For more information about conditional requests, see RFC 7232.


    • if_modified_since(DateTime) / set_if_modified_since(Option<DateTime>):
      required: false

      Return the object only if it has been modified since the specified time; otherwise, return a 304 (not modified) error.

      If both of the If-None-Match and If-Modified-Since headers are present in the request as follows:

      • If-None-Match condition evaluates to false, and;

      • If-Modified-Since condition evaluates to true;

      Then Amazon S3 returns the 304 Not Modified response code.

      For more information about conditional requests, see RFC 7232.


    • if_none_match(impl Into<String>) / set_if_none_match(Option<String>):
      required: false

      Return the object only if its entity tag (ETag) is different from the one specified; otherwise, return a 304 (not modified) error.

      If both of the If-None-Match and If-Modified-Since headers are present in the request as follows:

      • If-None-Match condition evaluates to false, and;

      • If-Modified-Since condition evaluates to true;

      Then Amazon S3 returns the 304 Not Modified response code.

      For more information about conditional requests, see RFC 7232.


    • if_unmodified_since(DateTime) / set_if_unmodified_since(Option<DateTime>):
      required: false

      Return the object only if it has not been modified since the specified time; otherwise, return a 412 (precondition failed) error.

      If both of the If-Match and If-Unmodified-Since headers are present in the request as follows:

      • If-Match condition evaluates to true, and;

      • If-Unmodified-Since condition evaluates to false;

      Then Amazon S3 returns 200 OK and the data requested.

      For more information about conditional requests, see RFC 7232.


    • key(impl Into<String>) / set_key(Option<String>):
      required: true

      The object key.


    • range(impl Into<String>) / set_range(Option<String>):
      required: false

      HeadObject returns only the metadata for an object. If the Range is satisfiable, only the ContentLength is affected in the response. If the Range is not satisfiable, S3 returns a 416 - Requested Range Not Satisfiable error.


    • response_cache_control(impl Into<String>) / set_response_cache_control(Option<String>):
      required: false

      Sets the Cache-Control header of the response.


    • response_content_disposition(impl Into<String>) / set_response_content_disposition(Option<String>):
      required: false

      Sets the Content-Disposition header of the response.


    • response_content_encoding(impl Into<String>) / set_response_content_encoding(Option<String>):
      required: false

      Sets the Content-Encoding header of the response.


    • response_content_language(impl Into<String>) / set_response_content_language(Option<String>):
      required: false

      Sets the Content-Language header of the response.


    • response_content_type(impl Into<String>) / set_response_content_type(Option<String>):
      required: false

      Sets the Content-Type header of the response.


    • response_expires(DateTime) / set_response_expires(Option<DateTime>):
      required: false

      Sets the Expires header of the response.


    • version_id(impl Into<String>) / set_version_id(Option<String>):
      required: false

      Version ID used to reference a specific version of the object.

      For directory buckets in this API operation, only the null value of the version ID is supported.


    • sse_customer_algorithm(impl Into<String>) / set_sse_customer_algorithm(Option<String>):
      required: false

      Specifies the algorithm to use when encrypting the object (for example, AES256).

      This functionality is not supported for directory buckets.


    • sse_customer_key(impl Into<String>) / set_sse_customer_key(Option<String>):
      required: false

      Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header.

      This functionality is not supported for directory buckets.


    • sse_customer_key_md5(impl Into<String>) / set_sse_customer_key_md5(Option<String>):
      required: false

      Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

      This functionality is not supported for directory buckets.


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • part_number(i32) / set_part_number(Option<i32>):
      required: false

      Part number of the object being read. This is a positive integer between 1 and 10,000. Effectively performs a ‘ranged’ HEAD request for the part specified. Useful for querying the size of the part and the number of parts in this object.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


    • checksum_mode(ChecksumMode) / set_checksum_mode(Option<ChecksumMode>):
      required: false

      To retrieve the checksum, this parameter must be enabled.

      General purpose buckets - If you enable checksum mode and the object is uploaded with a checksum and encrypted with a Key Management Service (KMS) key, you must have permission to use the kms:Decrypt action to retrieve the checksum.

      Directory buckets - If you enable ChecksumMode and the object is encrypted with Amazon Web Services Key Management Service (Amazon Web Services KMS), you must also have the kms:GenerateDataKey and kms:Decrypt permissions in IAM identity-based policies and KMS key policies for the KMS key to retrieve the checksum of the object.


  • On success, responds with HeadObjectOutput with field(s):
    • delete_marker(Option<bool>):

      Specifies whether the object retrieved was (true) or was not (false) a Delete Marker. If false, this response header does not appear in the response.

      This functionality is not supported for directory buckets.

    • accept_ranges(Option<String>):

      Indicates that a range of bytes was specified.

    • expiration(Option<String>):

      If the object expiration is configured (see PutBucketLifecycleConfiguration), the response includes this header. It includes the expiry-date and rule-id key-value pairs providing object expiration information. The value of the rule-id is URL-encoded.

      This functionality is not supported for directory buckets.

    • restore(Option<String>):

      If the object is an archived object (an object whose storage class is GLACIER), the response includes this header if either the archive restoration is in progress (see RestoreObject) or an archive copy is already restored.

      If an archive copy is already restored, the header value indicates when Amazon S3 is scheduled to delete the object copy. For example:

      x-amz-restore: ongoing-request=“false”, expiry-date=“Fri, 21 Dec 2012 00:00:00 GMT”

      If the object restoration is in progress, the header returns the value ongoing-request=“true”.

      For more information about archiving objects, see Transitioning Objects: General Considerations.

      This functionality is not supported for directory buckets. Only the S3 Express One Zone storage class is supported by directory buckets to store objects.

    • archive_status(Option<ArchiveStatus>):

      The archive state of the head object.

      This functionality is not supported for directory buckets.

    • last_modified(Option<DateTime>):

      Date and time when the object was last modified.

    • content_length(Option<i64>):

      Size of the body in bytes.

    • checksum_crc32(Option<String>):

      The base64-encoded, 32-bit CRC-32 checksum of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.

    • checksum_crc32_c(Option<String>):

      The base64-encoded, 32-bit CRC-32C checksum of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.

    • checksum_sha1(Option<String>):

      The base64-encoded, 160-bit SHA-1 digest of the object. This will only be present if it was uploaded with the object. When you use the API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.

    • checksum_sha256(Option<String>):

      The base64-encoded, 256-bit SHA-256 digest of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.

    • e_tag(Option<String>):

      An entity tag (ETag) is an opaque identifier assigned by a web server to a specific version of a resource found at a URL.

    • missing_meta(Option<i32>):

      This is set to the number of metadata entries not returned in x-amz-meta headers. This can happen if you create metadata using an API like SOAP that supports more flexible metadata than the REST API. For example, using SOAP, you can create metadata whose values are not legal HTTP headers.

      This functionality is not supported for directory buckets.

    • version_id(Option<String>):

      Version ID of the object.

      This functionality is not supported for directory buckets.

    • cache_control(Option<String>):

      Specifies caching behavior along the request/reply chain.

    • content_disposition(Option<String>):

      Specifies presentational information for the object.

    • content_encoding(Option<String>):

      Indicates what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.

    • content_language(Option<String>):

      The language the content is in.

    • content_type(Option<String>):

      A standard MIME type describing the format of the object data.

    • website_redirect_location(Option<String>):

      If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata.

      This functionality is not supported for directory buckets.

    • server_side_encryption(Option<ServerSideEncryption>):

      The server-side encryption algorithm used when you store this object in Amazon S3 (for example, AES256, aws:kms, aws:kms:dsse).

    • metadata(Option<HashMap::<String, String>>):

      A map of metadata to store with the object in S3.

    • sse_customer_algorithm(Option<String>):

      If server-side encryption with a customer-provided encryption key was requested, the response will include this header to confirm the encryption algorithm that’s used.

      This functionality is not supported for directory buckets.

    • sse_customer_key_md5(Option<String>):

      If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide the round-trip message integrity verification of the customer-provided encryption key.

      This functionality is not supported for directory buckets.

    • ssekms_key_id(Option<String>):

      If present, indicates the ID of the KMS key that was used for object encryption.

    • bucket_key_enabled(Option<bool>):

      Indicates whether the object uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).

    • storage_class(Option<StorageClass>):

      Provides storage class information of the object. Amazon S3 returns this header for all objects except for S3 Standard storage class objects.

      For more information, see Storage Classes.

      Directory buckets - Only the S3 Express One Zone storage class is supported by directory buckets to store objects.

    • request_charged(Option<RequestCharged>):

      If present, indicates that the requester was successfully charged for the request.

      This functionality is not supported for directory buckets.

    • replication_status(Option<ReplicationStatus>):

      Amazon S3 can return this header if your request involves a bucket that is either a source or a destination in a replication rule.

      In replication, you have a source bucket on which you configure replication and destination bucket or buckets where Amazon S3 stores object replicas. When you request an object (GetObject) or object metadata (HeadObject) from these buckets, Amazon S3 will return the x-amz-replication-status header in the response as follows:

      • If requesting an object from the source bucket, Amazon S3 will return the x-amz-replication-status header if the object in your request is eligible for replication.

        For example, suppose that in your replication configuration, you specify object prefix TaxDocs requesting Amazon S3 to replicate objects with key prefix TaxDocs. Any objects you upload with this key name prefix, for example TaxDocs/document1.pdf, are eligible for replication. For any object request with this key name prefix, Amazon S3 will return the x-amz-replication-status header with value PENDING, COMPLETED or FAILED indicating object replication status.

      • If requesting an object from a destination bucket, Amazon S3 will return the x-amz-replication-status header with value REPLICA if the object in your request is a replica that Amazon S3 created and there is no replica modification replication in progress.

      • When replicating objects to multiple destination buckets, the x-amz-replication-status header acts differently. The header of the source object will only return a value of COMPLETED when replication is successful to all destinations. The header will remain at value PENDING until replication has completed for all destinations. If one or more destinations fail replication, the header will return FAILED.

      For more information, see Replication.

      This functionality is not supported for directory buckets.

    • parts_count(Option<i32>):

      The count of parts this object has. This value is only returned if you specify partNumber in your request and the object was uploaded as a multipart upload.

    • object_lock_mode(Option<ObjectLockMode>):

      The Object Lock mode, if any, that’s in effect for this object. This header is only returned if the requester has the s3:GetObjectRetention permission. For more information about S3 Object Lock, see Object Lock.

      This functionality is not supported for directory buckets.

    • object_lock_retain_until_date(Option<DateTime>):

      The date and time when the Object Lock retention period expires. This header is only returned if the requester has the s3:GetObjectRetention permission.

      This functionality is not supported for directory buckets.

    • object_lock_legal_hold_status(Option<ObjectLockLegalHoldStatus>):

      Specifies whether a legal hold is in effect for this object. This header is only returned if the requester has the s3:GetObjectLegalHold permission. This header is not returned if the specified version of this object has never had a legal hold applied. For more information about S3 Object Lock, see Object Lock.

      This functionality is not supported for directory buckets.

    • expires(Option<DateTime>):

      The date and time at which the object is no longer cacheable.

    • expires_string(Option<String>):

      The date and time at which the object is no longer cacheable.

  • On failure, responds with SdkError<HeadObjectError>
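
As a minimal sketch, reading an object’s metadata without downloading its body could look as follows; the bucket and key are placeholders, and the accessors correspond to the output fields listed above:

let head = client
    .head_object()
    .bucket("amzn-s3-demo-bucket")
    .key("photos/2024/cat.jpg")
    .send()
    .await?;
// HeadObject returns headers only, so this is a cheap way to inspect size and type.
println!(
    "content-length: {:?}, content-type: {:?}, etag: {:?}",
    head.content_length(),
    head.content_type(),
    head.e_tag(),
);
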
Source§

impl Client

Source

pub fn list_bucket_analytics_configurations(&self) -> ListBucketAnalyticsConfigurationsFluentBuilder

Constructs a fluent builder for the ListBucketAnalyticsConfigurations operation.

Source§

impl Client

Source

pub fn list_bucket_intelligent_tiering_configurations(&self) -> ListBucketIntelligentTieringConfigurationsFluentBuilder

Constructs a fluent builder for the ListBucketIntelligentTieringConfigurations operation.

Source§

impl Client

Source

pub fn list_bucket_inventory_configurations(&self) -> ListBucketInventoryConfigurationsFluentBuilder

Constructs a fluent builder for the ListBucketInventoryConfigurations operation.

Source§

impl Client

Source

pub fn list_bucket_metrics_configurations(&self) -> ListBucketMetricsConfigurationsFluentBuilder

Constructs a fluent builder for the ListBucketMetricsConfigurations operation.

Source§

impl Client

Source

pub fn list_buckets(&self) -> ListBucketsFluentBuilder

Constructs a fluent builder for the ListBuckets operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
    • max_buckets(i32) / set_max_buckets(Option<i32>):
      required: false

      The maximum number of buckets to return in the response. When this number is greater than the count of buckets owned by the Amazon Web Services account, all of the account’s buckets are returned in the response.


    • continuation_token(impl Into<String>) / set_continuation_token(Option<String>):
      required: false

      ContinuationToken indicates to Amazon S3 that the list is being continued on this bucket with a token. ContinuationToken is obfuscated and is not a real key. You can use this ContinuationToken for pagination of the list results.

      Length Constraints: Minimum length of 0. Maximum length of 1024.

      Required: No.

      If you specify the bucket-region, prefix, or continuation-token query parameters without using max-buckets to set the maximum number of buckets returned in the response, Amazon S3 applies a default page size of 10,000 and provides a continuation token if there are more buckets.


    • prefix(impl Into<String>) / set_prefix(Option<String>):
      required: false

      Limits the response to bucket names that begin with the specified bucket name prefix.


    • bucket_region(impl Into<String>) / set_bucket_region(Option<String>):
      required: false

      Limits the response to buckets that are located in the specified Amazon Web Services Region. The Amazon Web Services Region must be expressed according to the Amazon Web Services Region code, such as us-west-2 for the US West (Oregon) Region. For a list of the valid values for all of the Amazon Web Services Regions, see Regions and Endpoints.

      Requests made to a Regional endpoint that is different from the bucket-region parameter are not supported. For example, if you want to limit the response to your buckets in Region us-west-2, the request must be made to an endpoint in Region us-west-2.


  • On success, responds with ListBucketsOutput with field(s):
    • buckets(Option<Vec::<Bucket>>):

      The list of buckets owned by the requester.

    • owner(Option<Owner>):

      The owner of the buckets listed.

    • continuation_token(Option<String>):

      ContinuationToken is included in the response when there are more buckets that can be listed with pagination. The next ListBuckets request to Amazon S3 can be continued with this ContinuationToken. ContinuationToken is obfuscated and is not a real bucket.

    • prefix(Option<String>):

      If Prefix was sent with the request, it is included in the response.

      All bucket names in the response begin with the specified bucket name prefix.

  • On failure, responds with SdkError<ListBucketsError>
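
Since this operation supports pagination, a hedged sketch of walking every page with into_paginator() might look like this; the buckets() and name() accessors are assumed from the SDK’s generated types, and the ? operator assumes an enclosing function whose error type can absorb the SdkError:

// Each page is a Result<ListBucketsOutput, SdkError<ListBucketsError>>.
let mut pages = client.list_buckets().into_paginator().send();
while let Some(page) = pages.next().await {
    for bucket in page?.buckets() {
        println!("{}", bucket.name().unwrap_or_default());
    }
}
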
Source§

impl Client

Source

pub fn list_directory_buckets(&self) -> ListDirectoryBucketsFluentBuilder

Constructs a fluent builder for the ListDirectoryBuckets operation. This operation supports pagination; see into_paginator().

Source§

impl Client

Source

pub fn list_multipart_uploads(&self) -> ListMultipartUploadsFluentBuilder

Constructs a fluent builder for the ListMultipartUploads operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The name of the bucket to which the multipart upload was initiated.

      Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket_base_name--az-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      Access points and Object Lambda access points are not supported by directory buckets.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • delimiter(impl Into<String>) / set_delimiter(Option<String>):
      required: false

      Character you use to group keys.

      All keys that contain the same string between the prefix, if specified, and the first occurrence of the delimiter after the prefix are grouped under a single result element, CommonPrefixes. If you don’t specify the prefix parameter, then the substring starts at the beginning of the key. The keys that are grouped under CommonPrefixes result element are not returned elsewhere in the response.

      Directory buckets - For directory buckets, / is the only supported delimiter.


    • encoding_type(EncodingType) / set_encoding_type(Option<EncodingType>):
      required: false

      Encoding type used by Amazon S3 to encode the object keys in the response. Responses are encoded only in UTF-8. An object key can contain any Unicode character. However, the XML 1.0 parser can’t parse certain characters, such as characters with an ASCII value from 0 to 10. For characters that aren’t supported in XML 1.0, you can add this parameter to request that Amazon S3 encode the keys in the response. For more information about characters to avoid in object key names, see Object key naming guidelines.

      When using the URL encoding type, non-ASCII characters that are used in an object’s key name will be percent-encoded according to UTF-8 code values. For example, the object test_file(3).png will appear as test_file%283%29.png.


    • key_marker(impl Into<String>) / set_key_marker(Option<String>):
      required: false

      Specifies the multipart upload after which listing should begin.

      • General purpose buckets - For general purpose buckets, key-marker is an object key. Together with upload-id-marker, this parameter specifies the multipart upload after which listing should begin.

        If upload-id-marker is not specified, only the keys lexicographically greater than the specified key-marker will be included in the list.

        If upload-id-marker is specified, any multipart uploads for a key equal to the key-marker might also be included, provided those multipart uploads have upload IDs lexicographically greater than the specified upload-id-marker.

      • Directory buckets - For directory buckets, key-marker is obfuscated and isn’t a real object key. The upload-id-marker parameter isn’t supported by directory buckets. To list the additional multipart uploads, you only need to set the value of key-marker to the NextKeyMarker value from the previous response.

        In the ListMultipartUploads response, the multipart uploads aren’t sorted lexicographically based on the object keys.


    • max_uploads(i32) / set_max_uploads(Option<i32>):
      required: false

      Sets the maximum number of multipart uploads, from 1 to 1,000, to return in the response body. 1,000 is the maximum number of uploads that can be returned in a response.


    • prefix(impl Into<String>) / set_prefix(Option<String>):
      required: false

      Lists in-progress uploads only for those keys that begin with the specified prefix. You can use prefixes to separate a bucket into different groupings of keys. (You can think of using prefix to make groups in the same way that you’d use a folder in a file system.)

      Directory buckets - For directory buckets, only prefixes that end in a delimiter (/) are supported.


    • upload_id_marker(impl Into<String>) / set_upload_id_marker(Option<String>):
      required: false

      Together with key-marker, specifies the multipart upload after which listing should begin. If key-marker is not specified, the upload-id-marker parameter is ignored. Otherwise, any multipart uploads for a key equal to the key-marker might be included in the list only if they have an upload ID lexicographically greater than the specified upload-id-marker.

      This functionality is not supported for directory buckets.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


  • On success, responds with ListMultipartUploadsOutput with field(s):
    • bucket(Option<String>):

      The name of the bucket to which the multipart upload was initiated. Does not return the access point ARN or access point alias if used.

    • key_marker(Option<String>):

      The key at or after which the listing began.

    • upload_id_marker(Option<String>):

      Together with key-marker, specifies the multipart upload after which listing should begin. If key-marker is not specified, the upload-id-marker parameter is ignored. Otherwise, any multipart uploads for a key equal to the key-marker might be included in the list only if they have an upload ID lexicographically greater than the specified upload-id-marker.

      This functionality is not supported for directory buckets.

    • next_key_marker(Option<String>):

      When a list is truncated, this element specifies the value that should be used for the key-marker request parameter in a subsequent request.

    • prefix(Option<String>):

      When a prefix is provided in the request, this field contains the specified prefix. The result contains only keys starting with the specified prefix.

      Directory buckets - For directory buckets, only prefixes that end in a delimiter (/) are supported.

    • delimiter(Option<String>):

      Contains the delimiter you specified in the request. If you don’t specify a delimiter in your request, this element is absent from the response.

      Directory buckets - For directory buckets, / is the only supported delimiter.

    • next_upload_id_marker(Option<String>):

      When a list is truncated, this element specifies the value that should be used for the upload-id-marker request parameter in a subsequent request.

      This functionality is not supported for directory buckets.

    • max_uploads(Option<i32>):

      Maximum number of multipart uploads that could have been included in the response.

    • is_truncated(Option<bool>):

      Indicates whether the returned list of multipart uploads is truncated. A value of true indicates that the list was truncated. The list can be truncated if the number of multipart uploads exceeds the limit allowed or specified by max uploads.

    • uploads(Option<Vec::<MultipartUpload>>):

      Container for elements related to a particular multipart upload. A response can contain zero or more Upload elements.

    • common_prefixes(Option<Vec::<CommonPrefix>>):

      If you specify a delimiter in the request, then the result returns each distinct key prefix containing the delimiter in a CommonPrefixes element. The distinct key prefixes are returned in the Prefix child element.

      Directory buckets - For directory buckets, only prefixes that end in a delimiter (/) are supported.

    • encoding_type(Option<EncodingType>):

      Encoding type used by Amazon S3 to encode object keys in the response.

      If you specify the encoding-type request parameter, Amazon S3 includes this element in the response, and returns encoded key name values in the following response elements:

      Delimiter, KeyMarker, Prefix, NextKeyMarker, Key.

    • request_charged(Option<RequestCharged>):

      If present, indicates that the requester was successfully charged for the request.

      This functionality is not supported for directory buckets.

  • On failure, responds with SdkError<ListMultipartUploadsError>
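
For example, a minimal sketch that lists in-progress multipart uploads under a prefix (the bucket name and prefix are placeholders; assumes an async context where the SDK error can be propagated with ?):

let resp = client.list_multipart_uploads()
    .bucket("example-bucket")
    .prefix("videos/")
    .send()
    .await?;
for upload in resp.uploads() {
    println!("key: {:?}, upload id: {:?}", upload.key(), upload.upload_id());
}
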
Source§

impl Client

Source

pub fn list_object_versions(&self) -> ListObjectVersionsFluentBuilder

Constructs a fluent builder for the ListObjectVersions operation.

  • The fluent builder is configurable:
  • On success, responds with ListObjectVersionsOutput with field(s):
    • is_truncated(Option<bool>):

      A flag that indicates whether Amazon S3 returned all of the results that satisfied the search criteria. If your results were truncated, you can make a follow-up paginated request by using the NextKeyMarker and NextVersionIdMarker response parameters as a starting place in another request to return the rest of the results.

    • key_marker(Option<String>):

      Marks the last key returned in a truncated response.

    • version_id_marker(Option<String>):

      Marks the last version of the key returned in a truncated response.

    • next_key_marker(Option<String>):

      When the number of responses exceeds the value of MaxKeys, NextKeyMarker specifies the first key not returned that satisfies the search criteria. Use this value for the key-marker request parameter in a subsequent request.

    • next_version_id_marker(Option<String>):

      When the number of responses exceeds the value of MaxKeys, NextVersionIdMarker specifies the first object version not returned that satisfies the search criteria. Use this value for the version-id-marker request parameter in a subsequent request.

    • versions(Option<Vec::<ObjectVersion>>):

      Container for version information.

    • delete_markers(Option<Vec::<DeleteMarkerEntry>>):

      Container for an object that is a delete marker.

    • name(Option<String>):

      The bucket name.

    • prefix(Option<String>):

      Selects objects that start with the value supplied by this parameter.

    • delimiter(Option<String>):

      The delimiter grouping the included keys. A delimiter is a character that you specify to group keys. All keys that contain the same string between the prefix and the first occurrence of the delimiter are grouped under a single result element in CommonPrefixes. These groups are counted as one result against the max-keys limitation. These keys are not returned elsewhere in the response.

    • max_keys(Option<i32>):

      Specifies the maximum number of objects to return.

    • common_prefixes(Option<Vec::<CommonPrefix>>):

      All of the keys rolled up into a common prefix count as a single return when calculating the number of returns.

    • encoding_type(Option<EncodingType>):

      Encoding type used by Amazon S3 to encode object key names in the XML response.

      If you specify the encoding-type request parameter, Amazon S3 includes this element in the response, and returns encoded key name values in the following response elements:

      KeyMarker, NextKeyMarker, Prefix, Key, and Delimiter.

    • request_charged(Option<RequestCharged>):

      If present, indicates that the requester was successfully charged for the request.

      This functionality is not supported for directory buckets.

  • On failure, responds with SdkError<ListObjectVersionsError>
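
For example, a minimal sketch that inspects versions and delete markers under a prefix (the bucket and prefix setters are assumed to follow the same pattern as the other list operations; names are placeholders):

let resp = client.list_object_versions()
    .bucket("example-bucket")
    .prefix("reports/")
    .send()
    .await?;
for version in resp.versions() {
    println!("version {:?} of {:?}", version.version_id(), version.key());
}
for marker in resp.delete_markers() {
    println!("delete marker for {:?}", marker.key());
}
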
Source§

impl Client

Source

pub fn list_objects(&self) -> ListObjectsFluentBuilder

Constructs a fluent builder for the ListObjects operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The name of the bucket containing the objects.

      Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket_base_name--az-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      Access points and Object Lambda access points are not supported by directory buckets.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • delimiter(impl Into<String>) / set_delimiter(Option<String>):
      required: false

      A delimiter is a character that you use to group keys.


    • encoding_type(EncodingType) / set_encoding_type(Option<EncodingType>):
      required: false

      Encoding type used by Amazon S3 to encode the object keys in the response. Responses are encoded only in UTF-8. An object key can contain any Unicode character. However, the XML 1.0 parser can’t parse certain characters, such as characters with an ASCII value from 0 to 10. For characters that aren’t supported in XML 1.0, you can add this parameter to request that Amazon S3 encode the keys in the response. For more information about characters to avoid in object key names, see Object key naming guidelines.

      When using the URL encoding type, non-ASCII characters that are used in an object’s key name will be percent-encoded according to UTF-8 code values. For example, the object test_file(3).png will appear as test_file%283%29.png.


    • marker(impl Into<String>) / set_marker(Option<String>):
      required: false

      Marker is where you want Amazon S3 to start listing from. Amazon S3 starts listing after this specified key. Marker can be any key in the bucket.


    • max_keys(i32) / set_max_keys(Option<i32>):
      required: false

      Sets the maximum number of keys returned in the response. By default, the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.


    • prefix(impl Into<String>) / set_prefix(Option<String>):
      required: false

      Limits the response to keys that begin with the specified prefix.


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the list objects request. Bucket owners need not specify this parameter in their requests.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


    • optional_object_attributes(OptionalObjectAttributes) / set_optional_object_attributes(Option<Vec::<OptionalObjectAttributes>>):
      required: false

      Specifies the optional fields that you want returned in the response. Fields that you do not specify are not returned.


  • On success, responds with ListObjectsOutput with field(s):
    • is_truncated(Option<bool>):

      A flag that indicates whether Amazon S3 returned all of the results that satisfied the search criteria.

    • marker(Option<String>):

      Indicates where in the bucket listing begins. Marker is included in the response if it was sent with the request.

    • next_marker(Option<String>):

      When the response is truncated (the IsTruncated element value in the response is true), you can use the key name in this field as the marker parameter in the subsequent request to get the next set of objects. Amazon S3 lists objects in alphabetical order.

      This element is returned only if you have the delimiter request parameter specified. If the response does not include the NextMarker element and it is truncated, you can use the value of the last Key element in the response as the marker parameter in the subsequent request to get the next set of object keys.

    • contents(Option<Vec::<Object>>):

      Metadata about each object returned.

    • name(Option<String>):

      The bucket name.

    • prefix(Option<String>):

      Keys that begin with the indicated prefix.

    • delimiter(Option<String>):

      Causes keys that contain the same string between the prefix and the first occurrence of the delimiter to be rolled up into a single result element in the CommonPrefixes collection. These rolled-up keys are not returned elsewhere in the response. Each rolled-up result counts as only one return against the MaxKeys value.

    • max_keys(Option<i32>):

      The maximum number of keys returned in the response body.

    • common_prefixes(Option<Vec::<CommonPrefix>>):

      All of the keys (up to 1,000) rolled up in a common prefix count as a single return when calculating the number of returns.

      A response can contain CommonPrefixes only if you specify a delimiter.

      CommonPrefixes contains all (if there are any) keys between Prefix and the next occurrence of the string specified by the delimiter.

      CommonPrefixes lists keys that act like subdirectories in the directory specified by Prefix.

      For example, if the prefix is notes/ and the delimiter is a slash (/), as in notes/summer/july, the common prefix is notes/summer/. All of the keys that roll up into a common prefix count as a single return when calculating the number of returns.

    • encoding_type(Option<EncodingType>):

      Encoding type used by Amazon S3 to encode the object keys in the response. Responses are encoded only in UTF-8. An object key can contain any Unicode character. However, the XML 1.0 parser can’t parse certain characters, such as characters with an ASCII value from 0 to 10. For characters that aren’t supported in XML 1.0, you can add this parameter to request that Amazon S3 encode the keys in the response. For more information about characters to avoid in object key names, see Object key naming guidelines.

      When using the URL encoding type, non-ASCII characters that are used in an object’s key name will be percent-encoded according to UTF-8 code values. For example, the object test_file(3).png will appear as test_file%283%29.png.

    • request_charged(Option<RequestCharged>):

      If present, indicates that the requester was successfully charged for the request.

      This functionality is not supported for directory buckets.

  • On failure, responds with SdkError<ListObjectsError>
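
For example, a minimal sketch of the legacy ListObjects call with a prefix and delimiter (placeholder names; for new code, ListObjectsV2 below is generally preferred):

let resp = client.list_objects()
    .bucket("example-bucket")
    .prefix("photos/2024/")
    .delimiter("/")
    .max_keys(100)
    .send()
    .await?;
for object in resp.contents() {
    println!("{:?} ({} bytes)", object.key(), object.size().unwrap_or_default());
}
for common_prefix in resp.common_prefixes() {
    println!("prefix: {:?}", common_prefix.prefix());
}
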
Source§

impl Client

Source

pub fn list_objects_v2(&self) -> ListObjectsV2FluentBuilder

Constructs a fluent builder for the ListObjectsV2 operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket_base_name--az-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      Access points and Object Lambda access points are not supported by directory buckets.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • delimiter(impl Into<String>) / set_delimiter(Option<String>):
      required: false

      A delimiter is a character that you use to group keys.

      • Directory buckets - For directory buckets, / is the only supported delimiter.

      • Directory buckets - When you query ListObjectsV2 with a delimiter during in-progress multipart uploads, the CommonPrefixes response parameter contains the prefixes that are associated with the in-progress multipart uploads. For more information about multipart uploads, see Multipart Upload Overview in the Amazon S3 User Guide.


    • encoding_type(EncodingType) / set_encoding_type(Option<EncodingType>):
      required: false

      Encoding type used by Amazon S3 to encode the object keys in the response. Responses are encoded only in UTF-8. An object key can contain any Unicode character. However, the XML 1.0 parser can’t parse certain characters, such as characters with an ASCII value from 0 to 10. For characters that aren’t supported in XML 1.0, you can add this parameter to request that Amazon S3 encode the keys in the response. For more information about characters to avoid in object key names, see Object key naming guidelines.

      When using the URL encoding type, non-ASCII characters that are used in an object’s key name will be percent-encoded according to UTF-8 code values. For example, the object test_file(3).png will appear as test_file%283%29.png.


    • max_keys(i32) / set_max_keys(Option<i32>):
      required: false

      Sets the maximum number of keys returned in the response. By default, the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.


    • prefix(impl Into<String>) / set_prefix(Option<String>):
      required: false

      Limits the response to keys that begin with the specified prefix.

      Directory buckets - For directory buckets, only prefixes that end in a delimiter (/) are supported.


    • continuation_token(impl Into<String>) / set_continuation_token(Option<String>):
      required: false

      ContinuationToken indicates to Amazon S3 that the list is being continued on this bucket with a token. ContinuationToken is obfuscated and is not a real key. You can use this ContinuationToken for pagination of the list results.


    • fetch_owner(bool) / set_fetch_owner(Option<bool>):
      required: false

      The owner field is not present in ListObjectsV2 by default. If you want to return the owner field with each key in the result, then set the FetchOwner field to true.

      Directory buckets - For directory buckets, the bucket owner is returned as the object owner for all objects.


    • start_after(impl Into<String>) / set_start_after(Option<String>):
      required: false

      StartAfter is where you want Amazon S3 to start listing from. Amazon S3 starts listing after this specified key. StartAfter can be any key in the bucket.

      This functionality is not supported for directory buckets.


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the list objects request in V2 style. Bucket owners need not specify this parameter in their requests.

      This functionality is not supported for directory buckets.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


    • optional_object_attributes(OptionalObjectAttributes) / set_optional_object_attributes(Option<Vec::<OptionalObjectAttributes>>):
      required: false

      Specifies the optional fields that you want returned in the response. Fields that you do not specify are not returned.

      This functionality is not supported for directory buckets.


  • On success, responds with ListObjectsV2Output with field(s):
    • is_truncated(Option<bool>):

      Set to false if all of the results were returned. Set to true if more keys are available to return. If the number of results exceeds that specified by MaxKeys, all of the results might not be returned.

    • contents(Option<Vec::<Object>>):

      Metadata about each object returned.

    • name(Option<String>):

      The bucket name.

    • prefix(Option<String>):

      Keys that begin with the indicated prefix.

      Directory buckets - For directory buckets, only prefixes that end in a delimiter (/) are supported.

    • delimiter(Option<String>):

      Causes keys that contain the same string between the prefix and the first occurrence of the delimiter to be rolled up into a single result element in the CommonPrefixes collection. These rolled-up keys are not returned elsewhere in the response. Each rolled-up result counts as only one return against the MaxKeys value.

      Directory buckets - For directory buckets, / is the only supported delimiter.

    • max_keys(Option<i32>):

      Sets the maximum number of keys returned in the response. By default, the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.

    • common_prefixes(Option<Vec::<CommonPrefix>>):

      All of the keys (up to 1,000) that share the same prefix are grouped together. When counting the total numbers of returns by this API operation, this group of keys is considered as one item.

      A response can contain CommonPrefixes only if you specify a delimiter.

      CommonPrefixes contains all (if there are any) keys between Prefix and the next occurrence of the string specified by a delimiter.

      CommonPrefixes lists keys that act like subdirectories in the directory specified by Prefix.

      For example, if the prefix is notes/ and the delimiter is a slash (/) as in notes/summer/july, the common prefix is notes/summer/. All of the keys that roll up into a common prefix count as a single return when calculating the number of returns.

      • Directory buckets - For directory buckets, only prefixes that end in a delimiter (/) are supported.

      • Directory buckets - When you query ListObjectsV2 with a delimiter during in-progress multipart uploads, the CommonPrefixes response parameter contains the prefixes that are associated with the in-progress multipart uploads. For more information about multipart uploads, see Multipart Upload Overview in the Amazon S3 User Guide.

    • encoding_type(Option<EncodingType>):

      Encoding type used by Amazon S3 to encode object key names in the XML response.

      If you specify the encoding-type request parameter, Amazon S3 includes this element in the response, and returns encoded key name values in the following response elements:

      Delimiter, Prefix, Key, and StartAfter.

    • key_count(Option<i32>):

      KeyCount is the number of keys returned with this request. KeyCount will always be less than or equal to the MaxKeys field. For example, if you ask for 50 keys, your result will include 50 keys or fewer.

    • continuation_token(Option<String>):

      If ContinuationToken was sent with the request, it is included in the response. You can use the returned ContinuationToken for pagination of the list results.

    • next_continuation_token(Option<String>):

      NextContinuationToken is sent when isTruncated is true, which means there are more keys in the bucket that can be listed. The next list request to Amazon S3 can be continued with this NextContinuationToken. NextContinuationToken is obfuscated and is not a real key.

    • start_after(Option<String>):

      If StartAfter was sent with the request, it is included in the response.

      This functionality is not supported for directory buckets.

    • request_charged(Option<RequestCharged>):

      If present, indicates that the requester was successfully charged for the request.

      This functionality is not supported for directory buckets.

  • On failure, responds with SdkError<ListObjectsV2Error>
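
Because this operation supports pagination, into_paginator() on the fluent builder can drive the follow-up requests. A minimal sketch (bucket and prefix are placeholders):

let mut pages = client
    .list_objects_v2()
    .bucket("example-bucket")
    .prefix("logs/")
    .into_paginator()
    .send();
while let Some(page) = pages.next().await {
    // Each page is a Result<ListObjectsV2Output, SdkError<ListObjectsV2Error>>.
    for object in page?.contents() {
        println!("{:?}", object.key());
    }
}
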
Source§

impl Client

Source

pub fn list_parts(&self) -> ListPartsFluentBuilder

Constructs a fluent builder for the ListParts operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The name of the bucket to which the parts are being uploaded.

      Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket_base_name--az-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      Access points and Object Lambda access points are not supported by directory buckets.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • key(impl Into<String>) / set_key(Option<String>):
      required: true

      Object key for which the multipart upload was initiated.


    • max_parts(i32) / set_max_parts(Option<i32>):
      required: false

      Sets the maximum number of parts to return.


    • part_number_marker(impl Into<String>) / set_part_number_marker(Option<String>):
      required: false

      Specifies the part after which listing should begin. Only parts with higher part numbers will be listed.


    • upload_id(impl Into<String>) / set_upload_id(Option<String>):
      required: true

      Upload ID identifying the multipart upload whose parts are being listed.


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


    • sse_customer_algorithm(impl Into<String>) / set_sse_customer_algorithm(Option<String>):
      required: false

      The server-side encryption (SSE) algorithm used to encrypt the object. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • sse_customer_key(impl Into<String>) / set_sse_customer_key(Option<String>):
      required: false

      The server-side encryption (SSE) customer managed key. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • sse_customer_key_md5(impl Into<String>) / set_sse_customer_key_md5(Option<String>):
      required: false

      The MD5 server-side encryption (SSE) customer managed key. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


  • On success, responds with ListPartsOutput with field(s):
    • abort_date(Option<DateTime>):

      If the bucket has a lifecycle rule configured with an action to abort incomplete multipart uploads and the prefix in the lifecycle rule matches the object name in the request, then the response includes this header indicating when the initiated multipart upload will become eligible for an abort operation. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Configuration.

      The response will also include the x-amz-abort-rule-id header that will provide the ID of the lifecycle configuration rule that defines this action.

      This functionality is not supported for directory buckets.

    • abort_rule_id(Option<String>):

      This header is returned along with the x-amz-abort-date header. It identifies the applicable lifecycle configuration rule that defines the action to abort incomplete multipart uploads.

      This functionality is not supported for directory buckets.

    • bucket(Option<String>):

      The name of the bucket to which the multipart upload was initiated. Does not return the access point ARN or access point alias if used.

    • key(Option<String>):

      Object key for which the multipart upload was initiated.

    • upload_id(Option<String>):

      Upload ID identifying the multipart upload whose parts are being listed.

    • part_number_marker(Option<String>):

      Specifies the part after which listing should begin. Only parts with higher part numbers will be listed.

    • next_part_number_marker(Option<String>):

      When a list is truncated, this element specifies the last part in the list, as well as the value to use for the part-number-marker request parameter in a subsequent request.

    • max_parts(Option<i32>):

      Maximum number of parts that were allowed in the response.

    • is_truncated(Option<bool>):

      Indicates whether the returned list of parts is truncated. A true value indicates that the list was truncated. A list can be truncated if the number of parts exceeds the limit returned in the MaxParts element.

    • parts(Option<Vec::<Part>>):

      Container for elements related to a particular part. A response can contain zero or more Part elements.

    • initiator(Option<Initiator>):

      Container element that identifies who initiated the multipart upload. If the initiator is an Amazon Web Services account, this element provides the same information as the Owner element. If the initiator is an IAM User, this element provides the user ARN and display name.

    • owner(Option<Owner>):

      Container element that identifies the object owner, after the object is created. If multipart upload is initiated by an IAM user, this element provides the parent account ID and display name.

      Directory buckets - The bucket owner is returned as the object owner for all the parts.

    • storage_class(Option<StorageClass>):

      The class of storage used to store the uploaded object.

      Directory buckets - Only the S3 Express One Zone storage class is supported by directory buckets to store objects.

    • request_charged(Option<RequestCharged>):

      If present, indicates that the requester was successfully charged for the request.

      This functionality is not supported for directory buckets.

    • checksum_algorithm(Option<ChecksumAlgorithm>):

      The algorithm that was used to create a checksum of the object.

  • On failure, responds with SdkError<ListPartsError>
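
For example, a minimal sketch that lists the parts of an in-progress multipart upload (assumes upload_id came from an earlier create_multipart_upload call; bucket and key are placeholders):

let resp = client.list_parts()
    .bucket("example-bucket")
    .key("videos/tutorial.mp4")
    .upload_id(upload_id)
    .send()
    .await?;
for part in resp.parts() {
    println!("part {:?}: {:?} bytes (etag {:?})", part.part_number(), part.size(), part.e_tag());
}
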
Source§

impl Client

Source

pub fn put_bucket_accelerate_configuration(&self) -> PutBucketAccelerateConfigurationFluentBuilder

Constructs a fluent builder for the PutBucketAccelerateConfiguration operation.

Source§

impl Client

Source

pub fn put_bucket_acl(&self) -> PutBucketAclFluentBuilder

Constructs a fluent builder for the PutBucketAcl operation.

Source§

impl Client

Source

pub fn put_bucket_analytics_configuration(&self) -> PutBucketAnalyticsConfigurationFluentBuilder

Constructs a fluent builder for the PutBucketAnalyticsConfiguration operation.

Source§

impl Client

Source

pub fn put_bucket_cors(&self) -> PutBucketCorsFluentBuilder

Constructs a fluent builder for the PutBucketCors operation.

Source§

impl Client

Source

pub fn put_bucket_encryption(&self) -> PutBucketEncryptionFluentBuilder

Constructs a fluent builder for the PutBucketEncryption operation.

Source§

impl Client

Source

pub fn put_bucket_intelligent_tiering_configuration(&self) -> PutBucketIntelligentTieringConfigurationFluentBuilder

Constructs a fluent builder for the PutBucketIntelligentTieringConfiguration operation.

Source§

impl Client

Source

pub fn put_bucket_inventory_configuration(&self) -> PutBucketInventoryConfigurationFluentBuilder

Constructs a fluent builder for the PutBucketInventoryConfiguration operation.

Source§

impl Client

Source

pub fn put_bucket_lifecycle_configuration(&self) -> PutBucketLifecycleConfigurationFluentBuilder

Constructs a fluent builder for the PutBucketLifecycleConfiguration operation.

  • The fluent builder is configurable:
  • On success, responds with PutBucketLifecycleConfigurationOutput with field(s):
    • transition_default_minimum_object_size(Option<TransitionDefaultMinimumObjectSize>):

      Indicates which default minimum object size behavior is applied to the lifecycle configuration.

      This parameter applies to general purpose buckets only. It is not supported for directory bucket lifecycle configurations.

      • all_storage_classes_128K - Objects smaller than 128 KB will not transition to any storage class by default.

      • varies_by_storage_class - Objects smaller than 128 KB will transition to Glacier Flexible Retrieval or Glacier Deep Archive storage classes. By default, all other storage classes will prevent transitions smaller than 128 KB.

      To customize the minimum object size for any transition, you can add a filter that specifies a custom ObjectSizeGreaterThan or ObjectSizeLessThan in the body of your transition rule. Custom filters always take precedence over the default transition behavior.

  • On failure, responds with SdkError<PutBucketLifecycleConfigurationError>
Source§

impl Client

Source

pub fn put_bucket_logging(&self) -> PutBucketLoggingFluentBuilder

Constructs a fluent builder for the PutBucketLogging operation.

Source§

impl Client

Source

pub fn put_bucket_metrics_configuration(&self) -> PutBucketMetricsConfigurationFluentBuilder

Constructs a fluent builder for the PutBucketMetricsConfiguration operation.

Source§

impl Client

Source

pub fn put_bucket_notification_configuration(&self) -> PutBucketNotificationConfigurationFluentBuilder

Constructs a fluent builder for the PutBucketNotificationConfiguration operation.

Source§

impl Client

Source

pub fn put_bucket_ownership_controls(&self) -> PutBucketOwnershipControlsFluentBuilder

Constructs a fluent builder for the PutBucketOwnershipControls operation.

Source§

impl Client

Source

pub fn put_bucket_policy(&self) -> PutBucketPolicyFluentBuilder

Constructs a fluent builder for the PutBucketPolicy operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The name of the bucket.

      Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region_code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren’t supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must also follow the format bucket_base_name--az_id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.


    • content_md5(impl Into<String>) / set_content_md5(Option<String>):
      required: false

      The MD5 hash of the request body.

      For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

      This functionality is not supported for directory buckets.


    • checksum_algorithm(ChecksumAlgorithm) / set_checksum_algorithm(Option<ChecksumAlgorithm>):
      required: false

      Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request.

      For the x-amz-checksum-algorithm header, replace algorithm with the supported algorithm from the following list:

      • CRC32

      • CRC32C

      • SHA1

      • SHA256

      For more information, see Checking object integrity in the Amazon S3 User Guide.

      If the individual checksum value you provide through x-amz-checksum-algorithm doesn’t match the checksum algorithm you set through x-amz-sdk-checksum-algorithm, Amazon S3 ignores any provided ChecksumAlgorithm parameter and uses the checksum algorithm that matches the provided value in x-amz-checksum-algorithm.

      For directory buckets, when you use Amazon Web Services SDKs, CRC32 is the default checksum algorithm that’s used for performance.


    • confirm_remove_self_bucket_access(bool) / set_confirm_remove_self_bucket_access(Option<bool>):
      required: false

      Set this parameter to true to confirm that you want to remove your permissions to change this bucket policy in the future.

      This functionality is not supported for directory buckets.


    • policy(impl Into<String>) / set_policy(Option<String>):
      required: true

      The bucket policy as a JSON document.

      For directory buckets, the only IAM action supported in the bucket policy is s3express:CreateSession.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

      For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code 501 Not Implemented.


  • On success, responds with PutBucketPolicyOutput
  • On failure, responds with SdkError<PutBucketPolicyError>
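
For example, a minimal sketch that attaches a simple policy to a general purpose bucket (the bucket name, account ID, and statement are placeholders; the policy is passed as a JSON string):

let policy = r#"{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowGetObject",
        "Effect": "Allow",
        "Principal": { "AWS": "arn:aws:iam::123456789012:root" },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*"
    }]
}"#;

client.put_bucket_policy()
    .bucket("example-bucket")
    .policy(policy)
    .send()
    .await?;
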
Source§

impl Client

Source

pub fn put_bucket_replication(&self) -> PutBucketReplicationFluentBuilder

Constructs a fluent builder for the PutBucketReplication operation.

Source§

impl Client

Source

pub fn put_bucket_request_payment(&self) -> PutBucketRequestPaymentFluentBuilder

Constructs a fluent builder for the PutBucketRequestPayment operation.

Source§

impl Client

Source

pub fn put_bucket_tagging(&self) -> PutBucketTaggingFluentBuilder

Constructs a fluent builder for the PutBucketTagging operation.

Source§

impl Client

Source

pub fn put_bucket_versioning(&self) -> PutBucketVersioningFluentBuilder

Constructs a fluent builder for the PutBucketVersioning operation.

Source§

impl Client

Source

pub fn put_bucket_website(&self) -> PutBucketWebsiteFluentBuilder

Constructs a fluent builder for the PutBucketWebsite operation.

Source§

impl Client

Source

pub fn put_object(&self) -> PutObjectFluentBuilder

Constructs a fluent builder for the PutObject operation.

  • The fluent builder is configurable:
    • acl(ObjectCannedAcl) / set_acl(Option<ObjectCannedAcl>):
      required: false

      The canned ACL to apply to the object. For more information, see Canned ACL in the Amazon S3 User Guide.

      When adding a new object, you can use headers to grant ACL-based permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the ACL on the object. By default, all objects are private. Only the owner has full access control. For more information, see Access Control List (ACL) Overview and Managing ACLs Using the REST API in the Amazon S3 User Guide.

      If the bucket that you’re uploading objects to uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept PUT requests that don’t specify an ACL or PUT requests that specify bucket owner full control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL expressed in the XML format. PUT requests that contain other ACLs (for example, custom grants to certain Amazon Web Services accounts) fail and return a 400 error with the error code AccessControlListNotSupported. For more information, see Controlling ownership of objects and disabling ACLs in the Amazon S3 User Guide.

      • This functionality is not supported for directory buckets.

      • This functionality is not supported for Amazon S3 on Outposts.


    • body(ByteStream) / set_body(ByteStream):
      required: false

      Object data.


    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The bucket name to which the PUT action was initiated.

      Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket_base_name--az-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      Access points and Object Lambda access points are not supported by directory buckets.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • cache_control(impl Into<String>) / set_cache_control(Option<String>):
      required: false

      Can be used to specify caching behavior along the request/reply chain. For more information, see http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.


    • content_disposition(impl Into<String>) / set_content_disposition(Option<String>):
      required: false

      Specifies presentational information for the object. For more information, see https://www.rfc-editor.org/rfc/rfc6266#section-4.


    • content_encoding(impl Into<String>) / set_content_encoding(Option<String>):
      required: false

      Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. For more information, see https://www.rfc-editor.org/rfc/rfc9110.html#field.content-encoding.


    • content_language(impl Into<String>) / set_content_language(Option<String>):
      required: false

      The language the content is in.


    • content_length(i64) / set_content_length(Option<i64>):
      required: false

      Size of the body in bytes. This parameter is useful when the size of the body cannot be determined automatically. For more information, see https://www.rfc-editor.org/rfc/rfc9110.html#name-content-length.


    • content_md5(impl Into<String>) / set_content_md5(Option<String>):
      required: false

      The base64-encoded 128-bit MD5 digest of the message (without the headers) according to RFC 1864. This header can be used as a message integrity check to verify that the data is the same data that was originally sent. Although it is optional, we recommend using the Content-MD5 mechanism as an end-to-end integrity check. For more information about REST request authentication, see REST Authentication.

      The Content-MD5 or x-amz-sdk-checksum-algorithm header is required for any request to upload an object with a retention period configured using Amazon S3 Object Lock. For more information, see Uploading objects to an Object Lock enabled bucket in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • content_type(impl Into<String>) / set_content_type(Option<String>):
      required: false

      A standard MIME type describing the format of the contents. For more information, see https://www.rfc-editor.org/rfc/rfc9110.html#name-content-type.


    • checksum_algorithm(ChecksumAlgorithm) / set_checksum_algorithm(Option<ChecksumAlgorithm>):
      required: false

      Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request.

      For the x-amz-checksum-algorithm header, replace algorithm with the supported algorithm from the following list:

      • CRC32

      • CRC32C

      • SHA1

      • SHA256

      For more information, see Checking object integrity in the Amazon S3 User Guide.

      If the individual checksum value you provide through x-amz-checksum-algorithm doesn’t match the checksum algorithm you set through x-amz-sdk-checksum-algorithm, Amazon S3 ignores any provided ChecksumAlgorithm parameter and uses the checksum algorithm that matches the provided value in x-amz-checksum-algorithm.

      The Content-MD5 or x-amz-sdk-checksum-algorithm header is required for any request to upload an object with a retention period configured using Amazon S3 Object Lock. For more information, see Uploading objects to an Object Lock enabled bucket in the Amazon S3 User Guide.

      For directory buckets, when you use Amazon Web Services SDKs, CRC32 is the default checksum algorithm that’s used for performance.


    • checksum_crc32(impl Into<String>) / set_checksum_crc32(Option<String>):
      required: false

      This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC-32 checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.


    • checksum_crc32_c(impl Into<String>) / set_checksum_crc32_c(Option<String>):
      required: false

      This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC-32C checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.


    • checksum_sha1(impl Into<String>) / set_checksum_sha1(Option<String>):
      required: false

      This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 160-bit SHA-1 digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.


    • checksum_sha256(impl Into<String>) / set_checksum_sha256(Option<String>):
      required: false

      This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 256-bit SHA-256 digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.


    • expires(DateTime) / set_expires(Option<DateTime>):
      required: false

      The date and time at which the object is no longer cacheable. For more information, see https://www.rfc-editor.org/rfc/rfc7234#section-5.3.


    • if_match(impl Into<String>) / set_if_match(Option<String>):
      required: false

      Uploads the object only if the ETag (entity tag) value provided during the WRITE operation matches the ETag of the object in S3. If the ETag values do not match, the operation returns a 412 Precondition Failed error.

      If a conflicting operation occurs during the upload, S3 returns a 409 ConditionalRequestConflict response. On a 409 failure, you should fetch the object’s ETag and retry the upload.

      Expects the ETag value as a string.

      For more information about conditional requests, see RFC 7232, or Conditional requests in the Amazon S3 User Guide.


    • if_none_match(impl Into<String>) / set_if_none_match(Option<String>):
      required: false

      Uploads the object only if the object key name does not already exist in the bucket specified. Otherwise, Amazon S3 returns a 412 Precondition Failed error.

      If a conflicting operation occurs during the upload, S3 returns a 409 ConditionalRequestConflict response. On a 409 failure, you should retry the upload.

      Expects the ‘*’ (asterisk) character.

      For more information about conditional requests, see RFC 7232, or Conditional requests in the Amazon S3 User Guide.


    • grant_full_control(impl Into<String>) / set_grant_full_control(Option<String>):
      required: false

      Gives the grantee READ, READ_ACP, and WRITE_ACP permissions on the object.

      • This functionality is not supported for directory buckets.

      • This functionality is not supported for Amazon S3 on Outposts.


    • grant_read(impl Into<String>) / set_grant_read(Option<String>):
      required: false

      Allows grantee to read the object data and its metadata.

      • This functionality is not supported for directory buckets.

      • This functionality is not supported for Amazon S3 on Outposts.


    • grant_read_acp(impl Into<String>) / set_grant_read_acp(Option<String>):
      required: false

      Allows grantee to read the object ACL.

      • This functionality is not supported for directory buckets.

      • This functionality is not supported for Amazon S3 on Outposts.


    • grant_write_acp(impl Into<String>) / set_grant_write_acp(Option<String>):
      required: false

      Allows grantee to write the ACL for the applicable object.

      • This functionality is not supported for directory buckets.

      • This functionality is not supported for Amazon S3 on Outposts.


    • key(impl Into<String>) / set_key(Option<String>):
      required: true

      Object key for which the PUT action was initiated.


    • write_offset_bytes(i64) / set_write_offset_bytes(Option<i64>):
      required: false

      Specifies the offset for appending data to existing objects in bytes. The offset must be equal to the size of the existing object being appended to. If no object exists, setting this header to 0 will create a new object.

      This functionality is only supported for objects in the Amazon S3 Express One Zone storage class in directory buckets.


    • metadata(impl Into<String>, impl Into<String>) / set_metadata(Option<HashMap::<String, String>>):
      required: false

      A map of metadata to store with the object in S3.


    • server_side_encryption(ServerSideEncryption) / set_server_side_encryption(Option<ServerSideEncryption>):
      required: false

      The server-side encryption algorithm that was used when you store this object in Amazon S3 (for example, AES256, aws:kms, aws:kms:dsse).

      • General purpose buckets - You have four mutually exclusive options to protect data using server-side encryption in Amazon S3, depending on how you choose to manage the encryption keys. Specifically, the encryption key options are Amazon S3 managed keys (SSE-S3), Amazon Web Services KMS keys (SSE-KMS or DSSE-KMS), and customer-provided keys (SSE-C). Amazon S3 encrypts data with server-side encryption by using Amazon S3 managed keys (SSE-S3) by default. You can optionally tell Amazon S3 to encrypt data at rest by using server-side encryption with other key options. For more information, see Using Server-Side Encryption in the Amazon S3 User Guide.

      • Directory buckets - For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256) and server-side encryption with KMS keys (SSE-KMS) (aws:kms). We recommend that the bucket’s default encryption uses the desired encryption configuration and you don’t override the bucket default encryption in your CreateSession requests or PUT object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide. For more information about the encryption overriding behaviors in directory buckets, see Specifying server-side encryption with KMS for new object uploads.

        In the Zonal endpoint API calls (except CopyObject and UploadPartCopy) using the REST API, the encryption request headers must match the encryption settings that are specified in the CreateSession request. You can’t override the values of the encryption settings (x-amz-server-side-encryption, x-amz-server-side-encryption-aws-kms-key-id, x-amz-server-side-encryption-context, and x-amz-server-side-encryption-bucket-key-enabled) that are specified in the CreateSession request. You don’t need to explicitly specify these encryption settings values in Zonal endpoint API calls, and Amazon S3 will use the encryption settings values from the CreateSession request to protect new objects in the directory bucket.

        When you use the CLI or the Amazon Web Services SDKs, for CreateSession, the session token refreshes automatically to avoid service interruptions when a session expires. The CLI or the Amazon Web Services SDKs use the bucket’s default encryption configuration for the CreateSession request. It’s not supported to override the encryption settings values in the CreateSession request. So in the Zonal endpoint API calls (except CopyObject and UploadPartCopy), the encryption request headers must match the default encryption configuration of the directory bucket.


    • storage_class(StorageClass) / set_storage_class(Option<StorageClass>):
      required: false

      By default, Amazon S3 uses the STANDARD Storage Class to store newly created objects. The STANDARD storage class provides high durability and high availability. Depending on performance needs, you can specify a different Storage Class. For more information, see Storage Classes in the Amazon S3 User Guide.

      • For directory buckets, only the S3 Express One Zone storage class is supported to store newly created objects.

      • Amazon S3 on Outposts only uses the OUTPOSTS Storage Class.


    • website_redirect_location(impl Into<String>) / set_website_redirect_location(Option<String>):
      required: false

      If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata. For information about object metadata, see Object Key and Metadata in the Amazon S3 User Guide.

      In the following example, the request header sets the redirect to an object (anotherPage.html) in the same bucket:

      x-amz-website-redirect-location: /anotherPage.html

      In the following example, the request header sets the object redirect to another website:

      x-amz-website-redirect-location: http://www.example.com/

      For more information about website hosting in Amazon S3, see Hosting Websites on Amazon S3 and How to Configure Website Page Redirects in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • sse_customer_algorithm(impl Into<String>) / set_sse_customer_algorithm(Option<String>):
      required: false

      Specifies the algorithm to use when encrypting the object (for example, AES256).

      This functionality is not supported for directory buckets.


    • sse_customer_key(impl Into<String>) / set_sse_customer_key(Option<String>):
      required: false

      Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header.

      This functionality is not supported for directory buckets.


    • sse_customer_key_md5(impl Into<String>) / set_sse_customer_key_md5(Option<String>):
      required: false

      Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

      This functionality is not supported for directory buckets.


    • ssekms_key_id(impl Into<String>) / set_ssekms_key_id(Option<String>):
      required: false

      Specifies the KMS key ID (Key ID, Key ARN, or Key Alias) to use for object encryption. If the KMS key doesn’t exist in the same account that’s issuing the command, you must use the full Key ARN, not the Key ID.

      General purpose buckets - If you specify x-amz-server-side-encryption with aws:kms or aws:kms:dsse, this header specifies the ID (Key ID, Key ARN, or Key Alias) of the KMS key to use. If you specify x-amz-server-side-encryption:aws:kms or x-amz-server-side-encryption:aws:kms:dsse, but do not provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the Amazon Web Services managed key (aws/s3) to protect the data.

      Directory buckets - If you specify x-amz-server-side-encryption with aws:kms, the x-amz-server-side-encryption-aws-kms-key-id header is implicitly assigned the ID of the KMS symmetric encryption customer managed key that’s configured for your directory bucket’s default encryption setting. If you want to specify the x-amz-server-side-encryption-aws-kms-key-id header explicitly, you can only specify it with the ID (Key ID or Key ARN) of the KMS customer managed key that’s configured for your directory bucket’s default encryption setting. Otherwise, you get an HTTP 400 Bad Request error. Only use the key ID or key ARN. The key alias format of the KMS key isn’t supported. Your SSE-KMS configuration can only support 1 customer managed key per directory bucket for the lifetime of the bucket. The Amazon Web Services managed key (aws/s3) isn’t supported.


    • ssekms_encryption_context(impl Into<String>) / set_ssekms_encryption_context(Option<String>):
      required: false

      Specifies the Amazon Web Services KMS Encryption Context as an additional encryption context to use for object encryption. The value of this header is a Base64-encoded string of a UTF-8 encoded JSON, which contains the encryption context as key-value pairs. This value is stored as object metadata and automatically gets passed on to Amazon Web Services KMS for future GetObject operations on this object.

      General purpose buckets - This value must be explicitly added during CopyObject operations if you want an additional encryption context for your object. For more information, see Encryption context in the Amazon S3 User Guide.

      Directory buckets - You can optionally provide an explicit encryption context value. The value must match the default encryption context - the bucket Amazon Resource Name (ARN). An additional encryption context value is not supported.


    • bucket_key_enabled(bool) / set_bucket_key_enabled(Option<bool>):
      required: false

      Specifies whether Amazon S3 should use an S3 Bucket Key for object encryption with server-side encryption using Key Management Service (KMS) keys (SSE-KMS).

      General purpose buckets - Setting this header to true causes Amazon S3 to use an S3 Bucket Key for object encryption with SSE-KMS. Also, specifying this header with a PUT action doesn’t affect bucket-level settings for S3 Bucket Key.

      Directory buckets - S3 Bucket Keys are always enabled for GET and PUT operations in a directory bucket and can’t be disabled. S3 Bucket Keys aren’t supported, when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through CopyObject, UploadPartCopy, the Copy operation in Batch Operations, or the import jobs. In this case, Amazon S3 makes a call to KMS every time a copy request is made for a KMS-encrypted object.


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • tagging(impl Into<String>) / set_tagging(Option<String>):
      required: false

      The tag-set for the object. The tag-set must be encoded as URL Query parameters. (For example, “Key1=Value1”)

      This functionality is not supported for directory buckets.


    • object_lock_mode(ObjectLockMode) / set_object_lock_mode(Option<ObjectLockMode>):
      required: false

      The Object Lock mode that you want to apply to this object.

      This functionality is not supported for directory buckets.


    • object_lock_retain_until_date(DateTime) / set_object_lock_retain_until_date(Option<DateTime>):
      required: false

      The date and time when you want this object’s Object Lock to expire. Must be formatted as a timestamp parameter.

      This functionality is not supported for directory buckets.


    • object_lock_legal_hold_status(ObjectLockLegalHoldStatus) / set_object_lock_legal_hold_status(Option<ObjectLockLegalHoldStatus>):
      required: false

      Specifies whether a legal hold will be applied to this object. For more information about S3 Object Lock, see Object Lock in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


  • On success, responds with PutObjectOutput with field(s):
    • expiration(Option<String>):

      If the expiration is configured for the object (see PutBucketLifecycleConfiguration) in the Amazon S3 User Guide, the response includes this header. It includes the expiry-date and rule-id key-value pairs that provide information about object expiration. The value of the rule-id is URL-encoded.

      This functionality is not supported for directory buckets.

    • e_tag(Option<String>):

      Entity tag for the uploaded object.

      General purpose buckets - To ensure that data is not corrupted traversing the network, for objects where the ETag is the MD5 digest of the object, you can calculate the MD5 while putting an object to Amazon S3 and compare the returned ETag to the calculated MD5 value.

      Directory buckets - The ETag for the object in a directory bucket isn’t the MD5 digest of the object.

    • checksum_crc32(Option<String>):

      The base64-encoded, 32-bit CRC-32 checksum of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.

    • checksum_crc32_c(Option<String>):

      The base64-encoded, 32-bit CRC-32C checksum of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.

    • checksum_sha1(Option<String>):

      The base64-encoded, 160-bit SHA-1 digest of the object. This will only be present if it was uploaded with the object. When you use the API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.

    • checksum_sha256(Option<String>):

      The base64-encoded, 256-bit SHA-256 digest of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.

    • server_side_encryption(Option<ServerSideEncryption>):

      The server-side encryption algorithm used when you store this object in Amazon S3.

    • version_id(Option<String>):

      Version ID of the object.

      If you enable versioning for a bucket, Amazon S3 automatically generates a unique version ID for the object being stored. Amazon S3 returns this ID in the response. When you enable versioning for a bucket, if Amazon S3 receives multiple write requests for the same object simultaneously, it stores all of the objects. For more information about versioning, see Adding Objects to Versioning-Enabled Buckets in the Amazon S3 User Guide. For information about returning the versioning state of a bucket, see GetBucketVersioning.

      This functionality is not supported for directory buckets.

    • sse_customer_algorithm(Option<String>):

      If server-side encryption with a customer-provided encryption key was requested, the response will include this header to confirm the encryption algorithm that’s used.

      This functionality is not supported for directory buckets.

    • sse_customer_key_md5(Option<String>):

      If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide the round-trip message integrity verification of the customer-provided encryption key.

      This functionality is not supported for directory buckets.

    • ssekms_key_id(Option<String>):

      If present, indicates the ID of the KMS key that was used for object encryption.

    • ssekms_encryption_context(Option<String>):

      If present, indicates the Amazon Web Services KMS Encryption Context to use for object encryption. The value of this header is a Base64-encoded string of a UTF-8 encoded JSON, which contains the encryption context as key-value pairs. This value is stored as object metadata and automatically gets passed on to Amazon Web Services KMS for future GetObject operations on this object.

    • bucket_key_enabled(Option<bool>):

      Indicates whether the uploaded object uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).

    • size(Option<i64>):

      The size of the object in bytes. This will only be present if you append to an object.

      This functionality is only supported for objects in the Amazon S3 Express One Zone storage class in directory buckets.

    • request_charged(Option<RequestCharged>):

      If present, indicates that the requester was successfully charged for the request.

      This functionality is not supported for directory buckets.

  • On failure, responds with SdkError<PutObjectError>
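
For example, uploading a small object with an explicit content type and a requested SHA-256 checksum might look like the following sketch. The bucket name, key, and file path are placeholders, and the body is loaded with ByteStream::from_path:

use aws_sdk_s3::primitives::ByteStream;
use aws_sdk_s3::types::ChecksumAlgorithm;

// Placeholder bucket, key, and file path.
let resp = client.put_object()
    .bucket("amzn-s3-demo-bucket")
    .key("reports/january.pdf")
    .content_type("application/pdf")
    .checksum_algorithm(ChecksumAlgorithm::Sha256)
    .body(ByteStream::from_path("reports/january.pdf").await?)
    .send()
    .await?;

// The ETag for the new object comes back on the output.
println!("uploaded, etag = {:?}", resp.e_tag());
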
Source§

impl Client

Source

pub fn put_object_acl(&self) -> PutObjectAclFluentBuilder

Constructs a fluent builder for the PutObjectAcl operation.
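
As a sketch (the acl setter and the ObjectCannedAcl type are assumptions based on the PutObjectAcl API; the bucket and key are placeholders), applying a canned ACL could look like:

use aws_sdk_s3::types::ObjectCannedAcl;

let result = client.put_object_acl()
    .bucket("amzn-s3-demo-bucket")
    .key("reports/january.pdf")
    .acl(ObjectCannedAcl::BucketOwnerFullControl)
    .send()
    .await;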

Source§

impl Client

Source

pub fn put_object_legal_hold(&self) -> PutObjectLegalHoldFluentBuilder

Constructs a fluent builder for the PutObjectLegalHold operation.

Source§

impl Client

Source

pub fn put_object_lock_configuration( &self, ) -> PutObjectLockConfigurationFluentBuilder

Constructs a fluent builder for the PutObjectLockConfiguration operation.

Source§

impl Client

Source

pub fn put_object_retention(&self) -> PutObjectRetentionFluentBuilder

Constructs a fluent builder for the PutObjectRetention operation.

Source§

impl Client

Source

pub fn put_object_tagging(&self) -> PutObjectTaggingFluentBuilder

Constructs a fluent builder for the PutObjectTagging operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The bucket name containing the object.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • key(impl Into<String>) / set_key(Option<String>):
      required: true

      Name of the object key.


    • version_id(impl Into<String>) / set_version_id(Option<String>):
      required: false

      The versionId of the object that the tag-set will be added to.


    • content_md5(impl Into<String>) / set_content_md5(Option<String>):
      required: false

      The MD5 hash for the request body.

      For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.


    • checksum_algorithm(ChecksumAlgorithm) / set_checksum_algorithm(Option<ChecksumAlgorithm>):
      required: false

      Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

      If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.


    • tagging(Tagging) / set_tagging(Option<Tagging>):
      required: true

      Container for the TagSet and Tag elements


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


  • On success, responds with PutObjectTaggingOutput with field(s):
  • On failure, responds with SdkError<PutObjectTaggingError>
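
For example, replacing an object’s tag set might look like the following sketch. The bucket and key are placeholders, and the Tag and Tagging builders are assumed to return a Result (their key, value, and tag_set members are required), hence the ? operators:

use aws_sdk_s3::types::{Tag, Tagging};

// Build the tag set; the builders are assumed fallible because their
// members are required.
let tagging = Tagging::builder()
    .tag_set(Tag::builder().key("project").value("blue").build()?)
    .build()?;

client.put_object_tagging()
    .bucket("amzn-s3-demo-bucket")
    .key("reports/january.pdf")
    .tagging(tagging)
    .send()
    .await?;
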
Source§

impl Client

Source

pub fn put_public_access_block(&self) -> PutPublicAccessBlockFluentBuilder

Constructs a fluent builder for the PutPublicAccessBlock operation.

Source§

impl Client

Source

pub fn restore_object(&self) -> RestoreObjectFluentBuilder

Constructs a fluent builder for the RestoreObject operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The bucket name containing the object to restore.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • key(impl Into<String>) / set_key(Option<String>):
      required: true

      Object key for which the action was initiated.


    • version_id(impl Into<String>) / set_version_id(Option<String>):
      required: false

      VersionId used to reference a specific version of the object.


    • restore_request(RestoreRequest) / set_restore_request(Option<RestoreRequest>):
      required: false

      Container for restore job parameters.


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • checksum_algorithm(ChecksumAlgorithm) / set_checksum_algorithm(Option<ChecksumAlgorithm>):
      required: false

      Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

      If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


  • On success, responds with RestoreObjectOutput with field(s):
  • On failure, responds with SdkError<RestoreObjectError>
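
A sketch of restoring an archived object for seven days with the Standard retrieval tier follows. The bucket and key are placeholders; GlacierJobParameters::builder().build() is assumed to return a Result because tier is required, while RestoreRequest::builder().build() is assumed to be infallible:

use aws_sdk_s3::types::{GlacierJobParameters, RestoreRequest, Tier};

// Restore job parameters: keep the temporary copy for 7 days, Standard tier.
let restore_request = RestoreRequest::builder()
    .days(7)
    .glacier_job_parameters(GlacierJobParameters::builder().tier(Tier::Standard).build()?)
    .build();

client.restore_object()
    .bucket("amzn-s3-demo-bucket")
    .key("archive/2019-01.tar")
    .restore_request(restore_request)
    .send()
    .await?;
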
Source§

impl Client

Source

pub fn select_object_content(&self) -> SelectObjectContentFluentBuilder

Constructs a fluent builder for the SelectObjectContent operation.

Source§

impl Client

Source

pub fn upload_part(&self) -> UploadPartFluentBuilder

Constructs a fluent builder for the UploadPart operation.

  • The fluent builder is configurable:
    • body(ByteStream) / set_body(ByteStream):
      required: false

      Object data.


    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The name of the bucket to which the multipart upload was initiated.

      Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket_base_name--az-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      Access points and Object Lambda access points are not supported by directory buckets.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • content_length(i64) / set_content_length(Option<i64>):
      required: false

      Size of the body in bytes. This parameter is useful when the size of the body cannot be determined automatically.


    • content_md5(impl Into<String>) / set_content_md5(Option<String>):
      required: false

      The base64-encoded 128-bit MD5 digest of the part data. This parameter is auto-populated when using the command from the CLI. This parameter is required if object lock parameters are specified.

      This functionality is not supported for directory buckets.


    • checksum_algorithm(ChecksumAlgorithm) / set_checksum_algorithm(Option<ChecksumAlgorithm>):
      required: false

      Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

      If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

      This checksum algorithm must be the same for all parts, and it must match the checksum value supplied in the CreateMultipartUpload request.


    • checksum_crc32(impl Into<String>) / set_checksum_crc32(Option<String>):
      required: false

      This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC-32 checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.


    • checksum_crc32_c(impl Into<String>) / set_checksum_crc32_c(Option<String>):
      required: false

      This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC-32C checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.


    • checksum_sha1(impl Into<String>) / set_checksum_sha1(Option<String>):
      required: false

      This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 160-bit SHA-1 digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.


    • checksum_sha256(impl Into<String>) / set_checksum_sha256(Option<String>):
      required: false

      This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 256-bit SHA-256 digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.


    • key(impl Into<String>) / set_key(Option<String>):
      required: true

      Object key for which the multipart upload was initiated.


    • part_number(i32) / set_part_number(Option<i32>):
      required: true

      Part number of part being uploaded. This is a positive integer between 1 and 10,000.


    • upload_id(impl Into<String>) / set_upload_id(Option<String>):
      required: true

      Upload ID identifying the multipart upload whose part is being uploaded.


    • sse_customer_algorithm(impl Into<String>) / set_sse_customer_algorithm(Option<String>):
      required: false

      Specifies the algorithm to use when encrypting the object (for example, AES256).

      This functionality is not supported for directory buckets.


    • sse_customer_key(impl Into<String>) / set_sse_customer_key(Option<String>):
      required: false

      Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header. This must be the same encryption key specified in the initiate multipart upload request.

      This functionality is not supported for directory buckets.


    • sse_customer_key_md5(impl Into<String>) / set_sse_customer_key_md5(Option<String>):
      required: false

      Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

      This functionality is not supported for directory buckets.


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


  • On success, responds with UploadPartOutput with field(s):
    • server_side_encryption(Option<ServerSideEncryption>):

      The server-side encryption algorithm used when you store this object in Amazon S3 (for example, AES256, aws:kms).

    • e_tag(Option<String>):

      Entity tag for the uploaded object.

    • checksum_crc32(Option<String>):

      The base64-encoded, 32-bit CRC-32 checksum of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.

    • checksum_crc32_c(Option<String>):

      The base64-encoded, 32-bit CRC-32C checksum of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.

    • checksum_sha1(Option<String>):

      The base64-encoded, 160-bit SHA-1 digest of the object. This will only be present if it was uploaded with the object. When you use the API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.

    • checksum_sha256(Option<String>):

      The base64-encoded, 256-bit SHA-256 digest of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.

    • sse_customer_algorithm(Option<String>):

      If server-side encryption with a customer-provided encryption key was requested, the response will include this header to confirm the encryption algorithm that’s used.

      This functionality is not supported for directory buckets.

    • sse_customer_key_md5(Option<String>):

      If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide the round-trip message integrity verification of the customer-provided encryption key.

      This functionality is not supported for directory buckets.

    • ssekms_key_id(Option<String>):

      If present, indicates the ID of the KMS key that was used for object encryption.

    • bucket_key_enabled(Option<bool>):

      Indicates whether the multipart upload uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).

    • request_charged(Option<RequestCharged>):

      If present, indicates that the requester was successfully charged for the request.

      This functionality is not supported for directory buckets.

  • On failure, responds with SdkError<UploadPartError>
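
For example, uploading one part of an in-progress multipart upload might look like the following sketch. The bucket and key are placeholders, upload_id is the upload ID string from an earlier CreateMultipartUpload response, and part_bytes is a Vec<u8> holding this part’s data (at least 5 MiB for every part except the last):

use aws_sdk_s3::primitives::ByteStream;

// upload_id and part_bytes are assumed to exist in the surrounding code.
let part = client.upload_part()
    .bucket("amzn-s3-demo-bucket")
    .key("large-object.bin")
    .upload_id(upload_id)
    .part_number(1)
    .body(ByteStream::from(part_bytes))
    .send()
    .await?;

// Keep the returned ETag; CompleteMultipartUpload needs it for each part.
let part_etag = part.e_tag().map(ToString::to_string);
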
Source§

impl Client

Source

pub fn upload_part_copy(&self) -> UploadPartCopyFluentBuilder

Constructs a fluent builder for the UploadPartCopy operation.

  • The fluent builder is configurable:
    • bucket(impl Into<String>) / set_bucket(Option<String>):
      required: true

      The bucket name.

      Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket_base_name--az-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

      Access points - When you use this action with an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

      Access points and Object Lambda access points are not supported by directory buckets.

      S3 on Outposts - When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? in the Amazon S3 User Guide.


    • copy_source(impl Into<String>) / set_copy_source(Option<String>):
      required: true

      Specifies the source object for the copy operation. You specify the value in one of two formats, depending on whether you want to access the source object through an access point:

      • For objects not accessed through an access point, specify the name of the source bucket and key of the source object, separated by a slash (/). For example, to copy the object reports/january.pdf from the bucket awsexamplebucket, use awsexamplebucket/reports/january.pdf. The value must be URL-encoded.

      • For objects accessed through access points, specify the Amazon Resource Name (ARN) of the object as accessed through the access point, in the format arn:aws:s3:<Region>:<account-id>:accesspoint/<access-point-name>/object/<key>. For example, to copy the object reports/january.pdf through access point my-access-point owned by account 123456789012 in Region us-west-2, use the URL encoding of arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point/object/reports/january.pdf. The value must be URL-encoded.

        • Amazon S3 supports copy operations using Access points only when the source and destination buckets are in the same Amazon Web Services Region.

        • Access points are not supported by directory buckets.

        Alternatively, for objects accessed through Amazon S3 on Outposts, specify the ARN of the object as accessed in the format arn:aws:s3-outposts:<Region>:<account-id>:outpost/<outpost-id>/object/<key>. For example, to copy the object reports/january.pdf through outpost my-outpost owned by account 123456789012 in Region us-west-2, use the URL encoding of arn:aws:s3-outposts:us-west-2:123456789012:outpost/my-outpost/object/reports/january.pdf. The value must be URL-encoded.

      If your bucket has versioning enabled, you could have multiple versions of the same object. By default, x-amz-copy-source identifies the current version of the source object to copy. To copy a specific version of the source object, append ?versionId=<version-id> to the x-amz-copy-source request header (for example, x-amz-copy-source: /awsexamplebucket/reports/january.pdf?versionId=QUpfdndhfd8438MNFDN93jdnJFkdmqnh893).

      If the current version is a delete marker and you don’t specify a versionId in the x-amz-copy-source request header, Amazon S3 returns a 404 Not Found error, because the object does not exist. If you specify versionId in the x-amz-copy-source and the versionId is a delete marker, Amazon S3 returns an HTTP 400 Bad Request error, because you are not allowed to specify a delete marker as a version for the x-amz-copy-source.

      Directory buckets - S3 Versioning isn’t enabled and supported for directory buckets.


    • copy_source_if_match(impl Into<String>) / set_copy_source_if_match(Option<String>):
      required: false

      Copies the object if its entity tag (ETag) matches the specified tag.

      If both of the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request as follows:

      • x-amz-copy-source-if-match condition evaluates to true, and

      • x-amz-copy-source-if-unmodified-since condition evaluates to false

      Amazon S3 returns 200 OK and copies the data.


    • copy_source_if_modified_since(DateTime) / set_copy_source_if_modified_since(Option<DateTime>):
      required: false

      Copies the object if it has been modified since the specified time.

      If both of the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request as follows:

      • x-amz-copy-source-if-none-match condition evaluates to false, and

      • x-amz-copy-source-if-modified-since condition evaluates to true

      Amazon S3 returns a 412 Precondition Failed response code.


    • copy_source_if_none_match(impl Into<String>) / set_copy_source_if_none_match(Option<String>):
      required: false

      Copies the object if its entity tag (ETag) is different than the specified ETag.

      If both of the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request as follows:

      • x-amz-copy-source-if-none-match condition evaluates to false, and

      • x-amz-copy-source-if-modified-since condition evaluates to true

      Amazon S3 returns a 412 Precondition Failed response code.


    • copy_source_if_unmodified_since(DateTime) / set_copy_source_if_unmodified_since(Option<DateTime>):
      required: false

      Copies the object if it hasn’t been modified since the specified time.

      If both of the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request as follows:

      • x-amz-copy-source-if-match condition evaluates to true, and

      • x-amz-copy-source-if-unmodified-since condition evaluates to false

      Amazon S3 returns 200 OK and copies the data.


    • copy_source_range(impl Into<String>) / set_copy_source_range(Option<String>):
      required: false

      The range of bytes to copy from the source object. The range value must use the form bytes=first-last, where the first and last are the zero-based byte offsets to copy. For example, bytes=0-9 indicates that you want to copy the first 10 bytes of the source. You can copy a range only if the source object is greater than 5 MB.


    • key(impl Into<String>) / set_key(Option<String>):
      required: true

      Object key for which the multipart upload was initiated.


    • part_number(i32) / set_part_number(Option<i32>):
      required: true

      Part number of part being copied. This is a positive integer between 1 and 10,000.


    • upload_id(impl Into<String>) / set_upload_id(Option<String>):
      required: true

      Upload ID identifying the multipart upload whose part is being copied.


    • sse_customer_algorithm(impl Into<String>) / set_sse_customer_algorithm(Option<String>):
      required: false

      Specifies the algorithm to use when encrypting the object (for example, AES256).

      This functionality is not supported when the destination bucket is a directory bucket.


    • sse_customer_key(impl Into<String>) / set_sse_customer_key(Option<String>):
      required: false

      Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header. This must be the same encryption key specified in the initiate multipart upload request.

      This functionality is not supported when the destination bucket is a directory bucket.


    • sse_customer_key_md5(impl Into<String>) / set_sse_customer_key_md5(Option<String>):
      required: false

      Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

      This functionality is not supported when the destination bucket is a directory bucket.


    • copy_source_sse_customer_algorithm(impl Into<String>) / set_copy_source_sse_customer_algorithm(Option<String>):
      required: false

      Specifies the algorithm to use when decrypting the source object (for example, AES256).

      This functionality is not supported when the source object is in a directory bucket.


    • copy_source_sse_customer_key(impl Into<String>) / set_copy_source_sse_customer_key(Option<String>):
      required: false

      Specifies the customer-provided encryption key for Amazon S3 to use to decrypt the source object. The encryption key provided in this header must be one that was used when the source object was created.

      This functionality is not supported when the source object is in a directory bucket.


    • copy_source_sse_customer_key_md5(impl Into<String>) / set_copy_source_sse_customer_key_md5(Option<String>):
      required: false

      Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

      This functionality is not supported when the source object is in a directory bucket.


    • request_payer(RequestPayer) / set_request_payer(Option<RequestPayer>):
      required: false

      Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

      This functionality is not supported for directory buckets.


    • expected_bucket_owner(impl Into<String>) / set_expected_bucket_owner(Option<String>):
      required: false

      The account ID of the expected destination bucket owner. If the account ID that you provide does not match the actual owner of the destination bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


    • expected_source_bucket_owner(impl Into<String>) / set_expected_source_bucket_owner(Option<String>):
      required: false

      The account ID of the expected source bucket owner. If the account ID that you provide does not match the actual owner of the source bucket, the request fails with the HTTP status code 403 Forbidden (access denied).


  • On success, responds with UploadPartCopyOutput with field(s):
    • copy_source_version_id(Option<String>):

      The version of the source object that was copied, if you have enabled versioning on the source bucket.

      This functionality is not supported when the source object is in a directory bucket.

    • copy_part_result(Option<CopyPartResult>):

      Container for all response elements.

    • server_side_encryption(Option<ServerSideEncryption>):

      The server-side encryption algorithm used when you store this object in Amazon S3 (for example, AES256, aws:kms).

    • sse_customer_algorithm(Option<String>):

      If server-side encryption with a customer-provided encryption key was requested, the response will include this header to confirm the encryption algorithm that’s used.

      This functionality is not supported for directory buckets.

    • sse_customer_key_md5(Option<String>):

      If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide the round-trip message integrity verification of the customer-provided encryption key.

      This functionality is not supported for directory buckets.

    • ssekms_key_id(Option<String>):

      If present, indicates the ID of the KMS key that was used for object encryption.

    • bucket_key_enabled(Option<bool>):

      Indicates whether the multipart upload uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).

    • request_charged(Option<RequestCharged>):

      If present, indicates that the requester was successfully charged for the request.

      This functionality is not supported for directory buckets.

  • On failure, responds with SdkError<UploadPartCopyError>
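
A sketch of copying the first 5 MiB of an existing object into part 1 of a multipart upload follows. The bucket, key, and source object are placeholders, and upload_id is the upload ID string from an earlier CreateMultipartUpload response:

// upload_id is assumed to exist in the surrounding code; the copy source is
// "source-bucket/source-key", URL-encoded as described above.
let copied = client.upload_part_copy()
    .bucket("amzn-s3-demo-bucket")
    .key("large-object.bin")
    .upload_id(upload_id)
    .part_number(1)
    .copy_source("amzn-s3-demo-bucket/existing-object.bin")
    .copy_source_range("bytes=0-5242879")
    .send()
    .await?;

// The copied part's ETag is nested in the CopyPartResult element.
let part_etag = copied.copy_part_result().and_then(|r| r.e_tag()).map(ToString::to_string);
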
Source§

impl Client

Source

pub fn write_get_object_response(&self) -> WriteGetObjectResponseFluentBuilder

Constructs a fluent builder for the WriteGetObjectResponse operation.

Source§

impl Client

Source

pub fn from_conf(conf: Config) -> Self

Creates a new client from the service Config.

§Panics

This method will panic in the following cases:

  • Retries or timeouts are enabled without a sleep_impl configured.
  • Identity caching is enabled without a sleep_impl and time_source configured.
  • No behavior_version is provided.

The panic message for each of these will have instructions on how to resolve them.
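
For example, a service Config can be built directly and handed to from_conf. The region and behavior version below are illustrative, and a credentials provider would normally be configured as well:

use aws_sdk_s3::config::{BehaviorVersion, Config, Region};

let config = Config::builder()
    .behavior_version(BehaviorVersion::latest())
    .region(Region::new("us-east-1"))
    // A credentials provider would normally be set here as well.
    .build();
let client = aws_sdk_s3::Client::from_conf(config);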

Source

pub fn config(&self) -> &Config

Returns the client’s configuration.

Source§

impl Client

Source

pub fn new(sdk_config: &SdkConfig) -> Self

Creates a new client from an SDK Config.

§Panics
  • This method will panic if the sdk_config is missing an async sleep implementation. If you experience this panic, set the sleep_impl on the Config passed into this function to fix it.
  • This method will panic if the sdk_config is missing an HTTP connector. If you experience this panic, set the http_connector on the Config passed into this function to fix it.
  • This method will panic if no BehaviorVersion is provided. If you experience this panic, set behavior_version on the Config or enable the behavior-version-latest Cargo feature.

Trait Implementations§

Source§

impl Clone for Client

Source§

fn clone(&self) -> Client

Returns a copy of the value. Read more
1.0.0 · Source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
Source§

impl Debug for Client

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
Source§

impl Waiters for Client

Auto Trait Implementations§

§

impl Freeze for Client

§

impl !RefUnwindSafe for Client

§

impl Send for Client

§

impl Sync for Client

§

impl Unpin for Client

§

impl !UnwindSafe for Client

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> CloneToUninit for T
where T: Clone,

Source§

unsafe fn clone_to_uninit(&self, dst: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dst. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T> Instrument for T

Source§

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more
Source§

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper. Read more
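
As one concrete use with this client, the future returned by a fluent builder's send() can be wrapped in a tracing span (a sketch assuming the tracing crate is available):

use tracing::Instrument;

let result = client
    .list_buckets()
    .send()
    .instrument(tracing::info_span!("s3_list_buckets"))
    .await;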
Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T> IntoEither for T

Source§

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

impl<Unshared, Shared> IntoShared<Shared> for Unshared
where Shared: FromUnshared<Unshared>,

Source§

fn into_shared(self) -> Shared

Creates a shared type from an unshared type.
Source§

impl<T> Paint for T
where T: ?Sized,

Source§

fn fg(&self, value: Color) -> Painted<&T>

Returns a styled value derived from self with the foreground set to value.

This method should be used rarely. Instead, prefer to use color-specific builder methods like red() and green(), which have the same functionality but are pithier.

§Example

Set foreground color to white using fg():

use yansi::{Paint, Color};

painted.fg(Color::White);

Set foreground color to white using white().

use yansi::Paint;

painted.white();
Source§

fn primary(&self) -> Painted<&T>

Returns self with the fg() set to Color::Primary.

§Example
println!("{}", value.primary());
Source§

fn fixed(&self, color: u8) -> Painted<&T>

Returns self with the fg() set to Color::Fixed.

§Example
println!("{}", value.fixed(color));
Source§

fn rgb(&self, r: u8, g: u8, b: u8) -> Painted<&T>

Returns self with the fg() set to Color::Rgb.

§Example
println!("{}", value.rgb(r, g, b));
Source§

fn black(&self) -> Painted<&T>

Returns self with the fg() set to Color::Black.

§Example
println!("{}", value.black());
Source§

fn red(&self) -> Painted<&T>

Returns self with the fg() set to Color::Red.

§Example
println!("{}", value.red());
Source§

fn green(&self) -> Painted<&T>

Returns self with the fg() set to Color::Green.

§Example
println!("{}", value.green());
Source§

fn yellow(&self) -> Painted<&T>

Returns self with the fg() set to Color::Yellow.

§Example
println!("{}", value.yellow());
Source§

fn blue(&self) -> Painted<&T>

Returns self with the fg() set to Color::Blue.

§Example
println!("{}", value.blue());
Source§

fn magenta(&self) -> Painted<&T>

Returns self with the fg() set to Color::Magenta.

§Example
println!("{}", value.magenta());
Source§

fn cyan(&self) -> Painted<&T>

Returns self with the fg() set to Color::Cyan.

§Example
println!("{}", value.cyan());
Source§

fn white(&self) -> Painted<&T>

Returns self with the fg() set to Color::White.

§Example
println!("{}", value.white());
Source§

fn bright_black(&self) -> Painted<&T>

Returns self with the fg() set to Color::BrightBlack.

§Example
println!("{}", value.bright_black());
Source§

fn bright_red(&self) -> Painted<&T>

Returns self with the fg() set to Color::BrightRed.

§Example
println!("{}", value.bright_red());
Source§

fn bright_green(&self) -> Painted<&T>

Returns self with the fg() set to Color::BrightGreen.

§Example
println!("{}", value.bright_green());
Source§

fn bright_yellow(&self) -> Painted<&T>

Returns self with the fg() set to Color::BrightYellow.

§Example
println!("{}", value.bright_yellow());
Source§

fn bright_blue(&self) -> Painted<&T>

Returns self with the fg() set to Color::BrightBlue.

§Example
println!("{}", value.bright_blue());
Source§

fn bright_magenta(&self) -> Painted<&T>

Returns self with the fg() set to Color::BrightMagenta.

§Example
println!("{}", value.bright_magenta());
Source§

fn bright_cyan(&self) -> Painted<&T>

Returns self with the fg() set to Color::BrightCyan.

§Example
println!("{}", value.bright_cyan());
Source§

fn bright_white(&self) -> Painted<&T>

Returns self with the fg() set to Color::BrightWhite.

§Example
println!("{}", value.bright_white());
Source§

fn bg(&self, value: Color) -> Painted<&T>

Returns a styled value derived from self with the background set to value.

This method should be used rarely. Instead, prefer to use color-specific builder methods like on_red() and on_green(), which have the same functionality but are pithier.

§Example

Set background color to red using bg():

use yansi::{Paint, Color};

painted.bg(Color::Red);

Set background color to red using on_red().

use yansi::Paint;

painted.on_red();
Source§

fn on_primary(&self) -> Painted<&T>

Returns self with the bg() set to Color::Primary.

§Example
println!("{}", value.on_primary());
Source§

fn on_fixed(&self, color: u8) -> Painted<&T>

Returns self with the bg() set to Color::Fixed.

§Example
println!("{}", value.on_fixed(color));
Source§

fn on_rgb(&self, r: u8, g: u8, b: u8) -> Painted<&T>

Returns self with the bg() set to Color::Rgb.

§Example
println!("{}", value.on_rgb(r, g, b));
Source§

fn on_black(&self) -> Painted<&T>

Returns self with the bg() set to Color::Black.

§Example
println!("{}", value.on_black());
Source§

fn on_red(&self) -> Painted<&T>

Returns self with the bg() set to Color::Red.

§Example
println!("{}", value.on_red());
Source§

fn on_green(&self) -> Painted<&T>

Returns self with the bg() set to Color::Green.

§Example
println!("{}", value.on_green());
Source§

fn on_yellow(&self) -> Painted<&T>

Returns self with the bg() set to Color::Yellow.

§Example
println!("{}", value.on_yellow());
Source§

fn on_blue(&self) -> Painted<&T>

Returns self with the bg() set to Color::Blue.

§Example
println!("{}", value.on_blue());
Source§

fn on_magenta(&self) -> Painted<&T>

Returns self with the bg() set to Color::Magenta.

§Example
println!("{}", value.on_magenta());
Source§

fn on_cyan(&self) -> Painted<&T>

Returns self with the bg() set to Color::Cyan.

§Example
println!("{}", value.on_cyan());
Source§

fn on_white(&self) -> Painted<&T>

Returns self with the bg() set to Color::White.

§Example
println!("{}", value.on_white());
Source§

fn on_bright_black(&self) -> Painted<&T>

Returns self with the bg() set to Color::BrightBlack.

§Example
println!("{}", value.on_bright_black());
Source§

fn on_bright_red(&self) -> Painted<&T>

Returns self with the bg() set to Color::BrightRed.

§Example
println!("{}", value.on_bright_red());
Source§

fn on_bright_green(&self) -> Painted<&T>

Returns self with the bg() set to Color::BrightGreen.

§Example
println!("{}", value.on_bright_green());
Source§

fn on_bright_yellow(&self) -> Painted<&T>

Returns self with the bg() set to Color::BrightYellow.

§Example
println!("{}", value.on_bright_yellow());
Source§

fn on_bright_blue(&self) -> Painted<&T>

Returns self with the bg() set to Color::BrightBlue.

§Example
println!("{}", value.on_bright_blue());
Source§

fn on_bright_magenta(&self) -> Painted<&T>

Returns self with the bg() set to Color::BrightMagenta.

§Example
println!("{}", value.on_bright_magenta());
Source§

fn on_bright_cyan(&self) -> Painted<&T>

Returns self with the bg() set to Color::BrightCyan.

§Example
println!("{}", value.on_bright_cyan());
Source§

fn on_bright_white(&self) -> Painted<&T>

Returns self with the bg() set to Color::BrightWhite.

§Example
println!("{}", value.on_bright_white());
Source§

fn attr(&self, value: Attribute) -> Painted<&T>

Enables the styling Attribute value.

This method should be used rarely. Instead, prefer to use attribute-specific builder methods like bold() and underline(), which have the same functionality but are pithier.

§Example

Make text bold using attr():

use yansi::{Paint, Attribute};

painted.attr(Attribute::Bold);

Make text bold using bold().

use yansi::Paint;

painted.bold();
Source§

fn bold(&self) -> Painted<&T>

Returns self with the attr() set to Attribute::Bold.

§Example
println!("{}", value.bold());
Source§

fn dim(&self) -> Painted<&T>

Returns self with the attr() set to Attribute::Dim.

§Example
println!("{}", value.dim());
Source§

fn italic(&self) -> Painted<&T>

Returns self with the attr() set to Attribute::Italic.

§Example
println!("{}", value.italic());
Source§

fn underline(&self) -> Painted<&T>

Returns self with the attr() set to Attribute::Underline.

§Example
println!("{}", value.underline());

Source§

fn blink(&self) -> Painted<&T>

Returns self with the attr() set to Attribute::Blink.

§Example
println!("{}", value.blink());
Source§

fn rapid_blink(&self) -> Painted<&T>

Returns self with the attr() set to Attribute::RapidBlink.

§Example
println!("{}", value.rapid_blink());
Source§

fn invert(&self) -> Painted<&T>

Returns self with the attr() set to Attribute::Invert.

§Example
println!("{}", value.invert());
Source§

fn conceal(&self) -> Painted<&T>

Returns self with the attr() set to Attribute::Conceal.

§Example
println!("{}", value.conceal());
Source§

fn strike(&self) -> Painted<&T>

Returns self with the attr() set to Attribute::Strike.

§Example
println!("{}", value.strike());
Source§

fn quirk(&self, value: Quirk) -> Painted<&T>

Enables the yansi Quirk value.

This method should be used rarely. Instead, prefer to use quirk-specific builder methods like mask() and wrap(), which have the same functionality but are pithier.

§Example

Enable wrapping using quirk():

use yansi::{Paint, Quirk};

painted.quirk(Quirk::Wrap);

Enable wrapping using wrap().

use yansi::Paint;

painted.wrap();
Source§

fn mask(&self) -> Painted<&T>

Returns self with the quirk() set to Quirk::Mask.

§Example
println!("{}", value.mask());
Source§

fn wrap(&self) -> Painted<&T>

Returns self with the quirk() set to Quirk::Wrap.

§Example
println!("{}", value.wrap());
Source§

fn linger(&self) -> Painted<&T>

Returns self with the quirk() set to Quirk::Linger.

§Example
println!("{}", value.linger());
Source§

fn clear(&self) -> Painted<&T>

👎Deprecated since 1.0.1: renamed to resetting() due to conflicts with Vec::clear(). The clear() method will be removed in a future release.

Returns self with the quirk() set to Quirk::Clear.

§Example
println!("{}", value.clear());
Source§

fn resetting(&self) -> Painted<&T>

Returns self with the quirk() set to Quirk::Resetting.

§Example
println!("{}", value.resetting());
Source§

fn bright(&self) -> Painted<&T>

Returns self with the quirk() set to Quirk::Bright.

§Example
println!("{}", value.bright());
Source§

fn on_bright(&self) -> Painted<&T>

Returns self with the quirk() set to Quirk::OnBright.

§Example
println!("{}", value.on_bright());
Source§

fn whenever(&self, value: Condition) -> Painted<&T>

Conditionally enable styling based on whether the Condition value applies. Replaces any previous condition.

See the crate level docs for more details.

§Example

Enable styling painted only when both stdout and stderr are TTYs:

use yansi::{Paint, Condition};

painted.red().on_yellow().whenever(Condition::STDOUTERR_ARE_TTY);
Source§

fn new(self) -> Painted<Self>
where Self: Sized,

Create a new Painted with a default Style. Read more
Source§

fn paint<S>(&self, style: S) -> Painted<&Self>
where S: Into<Style>,

Apply a style wholesale to self. Any previous style is replaced. Read more
Source§

impl<T> Same for T

Source§

type Output = T

Should always be Self
Source§

impl<T> ToOwned for T
where T: Clone,

Source§

type Owned = T

The resulting type after obtaining ownership.
Source§

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
Source§

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
Source§

impl<T> WithSubscriber for T

Source§

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
Source§

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more
Source§

impl<T> ErasedDestructor for T
where T: 'static,

Source§

impl<T> MaybeSendSync for T