Data structures used by operation inputs/outputs.
Modules
- builders: Builders.
- error: Error types that Amazon CloudWatch Logs can respond with.
Structs
A structure that contains information about one CloudWatch Logs account policy.
This object defines one key that will be added with the addKeys processor.
This processor adds new key-value pairs to the log event.
For more information about this processor including examples, see addKeys in the CloudWatch Logs User Guide.
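A minimal sketch of assembling this configuration with the crate's builders, assuming the entry fields mirror the AddKeyEntry API shape (key, value, overwriteIfExists); the key and value are illustrative:

```rust
use aws_sdk_cloudwatchlogs::error::BuildError;
use aws_sdk_cloudwatchlogs::types::{AddKeyEntry, AddKeys};

// Assemble an addKeys processor that stamps a static key-value pair onto
// every log event.
fn build_add_keys() -> Result<AddKeys, BuildError> {
    let entry = AddKeyEntry::builder()
        .key("environment")         // key to add to each event (illustrative)
        .value("production")        // value for the new key (illustrative)
        .overwrite_if_exists(false) // keep any existing value for this key
        .build()?;
    AddKeys::builder().entries(entry).build()
}
```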
This structure represents one anomaly that has been found by a logs anomaly detector.
For more information about patterns and anomalies, see CreateLogAnomalyDetector.
Contains information about one anomaly detector in the account.
A structure containing information about the default settings and available settings that you can use to configure a delivery or a delivery destination.
This structure contains the default values that are used for each configuration parameter when you use CreateDelivery to create a delivery under the current service type, resource type, and log type.
This processor copies values within a log event. You can also use this processor to add metadata to log events by copying the values of the following metadata keys into the log events: @logGroupName, @logGroupStream, @accountId, @regionName.
For more information about this processor including examples, see copyValue in the CloudWatch Logs User Guide.
This object defines one value to be copied with the copyValue processor.
The CSV processor parses comma-separated values (CSV) from the log events into columns.
For more information about this processor including examples, see csv in the CloudWatch Logs User Guide.
This processor converts a datetime string into a format that you specify.
For more information about this processor including examples, see datetimeConverter in the CloudWatch Logs User Guide.
This processor deletes entries from a log event. These entries are key-value pairs.
For more information about this processor including examples, see deleteKeys in the CloudWatch Logs User Guide.
This structure contains information about one delivery in your account.
A delivery is a connection between a logical delivery source and a logical delivery destination.
For more information, see CreateDelivery.
To update an existing delivery configuration, use UpdateDeliveryConfiguration.
This structure contains information about one delivery destination in your account. A delivery destination is an Amazon Web Services resource that represents an Amazon Web Services service that logs can be sent to. CloudWatch Logs, Amazon S3, and Firehose are supported as delivery destinations.
To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:
- Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see PutDeliverySource.
- Create a delivery destination, which is a logical object that represents the actual delivery destination. For more information, see PutDeliveryDestination.
- If you are delivering logs cross-account, you must use PutDeliveryDestinationPolicy in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.
- Create a delivery by pairing exactly one delivery source and one delivery destination. For more information, see CreateDelivery. You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination. A sketch of this flow follows the list.
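A minimal sketch of that flow with this crate's fluent client; all names, ARNs, and the log type below are placeholders:

```rust
use aws_sdk_cloudwatchlogs::types::DeliveryDestinationConfiguration;
use aws_sdk_cloudwatchlogs::Client;

// Wire one delivery source to one delivery destination.
async fn set_up_delivery(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    // 1. Register the resource that emits the logs as a delivery source.
    client
        .put_delivery_source()
        .name("my-source")
        .resource_arn("arn:aws:...")  // ARN of the resource that sends the logs
        .log_type("APPLICATION_LOGS") // a log type that resource supports
        .send()
        .await?;

    // 2. Register where the logs should land (an S3 bucket here).
    let dest = client
        .put_delivery_destination()
        .name("my-destination")
        .delivery_destination_configuration(
            DeliveryDestinationConfiguration::builder()
                .destination_resource_arn("arn:aws:s3:::my-log-bucket")
                .build()?,
        )
        .send()
        .await?;

    // 3. Pair exactly one source with one destination to create the delivery.
    let dest_arn = dest
        .delivery_destination()
        .and_then(|d| d.arn())
        .unwrap_or_default();
    client
        .create_delivery()
        .delivery_source_name("my-source")
        .delivery_destination_arn(dest_arn)
        .send()
        .await?;
    Ok(())
}
```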
A structure that contains information about one logs delivery destination.
This structure contains information about one delivery source in your account. A delivery source is an Amazon Web Services resource that sends logs to an Amazon Web Services destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose.
Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services.
To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:
- Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see PutDeliverySource.
- Create a delivery destination, which is a logical object that represents the actual delivery destination. For more information, see PutDeliveryDestination.
- If you are delivering logs cross-account, you must use PutDeliveryDestinationPolicy in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.
- Create a delivery by pairing exactly one delivery source and one delivery destination. For more information, see CreateDelivery. You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.
Represents a cross-account destination that receives subscription log events.
The entity associated with the log events in a PutLogEvents call.
Represents an export task.
Represents the status of an export task.
Represents the status of an export task.
This structure describes one log event field that is used as an index in at least one index policy in this account.
Represents a matched event.
This processor uses pattern matching to parse and structure unstructured data. This processor can also extract fields from log messages.
For more information about this processor including examples, see grok in the CloudWatch Logs User Guide.
This structure contains information about one field index policy in this account.
Represents a log event, which is a record of activity that was recorded by the application or resource being monitored.
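A minimal sketch of building one such input event and sending it with PutLogEvents; the log group and stream names are placeholders and must already exist:

```rust
use aws_sdk_cloudwatchlogs::types::InputLogEvent;
use aws_sdk_cloudwatchlogs::Client;
use std::time::{SystemTime, UNIX_EPOCH};

// Send a single log event to an existing log stream.
async fn put_one_event(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    let now_ms = SystemTime::now().duration_since(UNIX_EPOCH)?.as_millis() as i64;
    let event = InputLogEvent::builder()
        .timestamp(now_ms) // milliseconds since the Unix epoch
        .message("hello from the Rust SDK")
        .build()?;
    client
        .put_log_events()
        .log_group_name("my-log-group")
        .log_stream_name("my-log-stream")
        .log_events(event) // appends one event to the request's batch
        .send()
        .await?;
    Ok(())
}
```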
This processor takes a list of objects that contain key fields, and converts them into a map of target keys.
For more information about this processor including examples, see listToMap in the CloudWatch Logs User Guide.
This structure contains the information for one sample log event that is associated with an anomaly found by a log anomaly detector.
Represents a log group.
The fields contained in log events found by a GetLogGroupFields operation, along with the percentage of queried log events in which each field appears.
Represents a log stream, which is a sequence of log events from a single emitter of logs.
This processor converts a string to lowercase.
For more information about this processor including examples, see lowerCaseString in the CloudWatch Logs User Guide.
Metric filters express how CloudWatch Logs would extract metric observations from ingested log events and transform them into metric data in a CloudWatch metric.
Represents a matched event.
Indicates how to transform ingested log events to metric data in a CloudWatch metric.
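A sketch of one such transformation built with this crate's types; the metric name, namespace, and value are illustrative, and the result would be supplied to a PutMetricFilter call alongside a filter pattern:

```rust
use aws_sdk_cloudwatchlogs::error::BuildError;
use aws_sdk_cloudwatchlogs::types::MetricTransformation;

// One transformation: emit a value of 1 to MyApp/ErrorCount for every log
// event that the owning metric filter's pattern matches.
fn error_count_transformation() -> Result<MetricTransformation, BuildError> {
    MetricTransformation::builder()
        .metric_name("ErrorCount")
        .metric_namespace("MyApp")
        .metric_value("1")
        .build()
}
```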
This object defines one key that will be moved with the moveKey processor.
This processor moves a key from one field to another. The original key is deleted.
For more information about this processor including examples, see moveKeys in the CloudWatch Logs User Guide.
Represents a log event.
This processor parses CloudFront vended logs, extracts fields, and converts them into JSON format. Encoded field values are decoded. Values that are integers and doubles are treated as such. For more information about this processor including examples, see parseCloudfront.
For more information about CloudFront log format, see Configure and use standard logs (access logs).
If you use this processor, it must be the first processor in your transformer.
This processor parses log events that are in JSON format. It can extract JSON key-value pairs and place them under a destination that you specify.
Additionally, because you must have at least one parse-type processor in a transformer, you can use ParseJSON as that processor for JSON-format logs, so that you can also apply other processors, such as mutate processors, to these logs.
For more information about this processor including examples, see parseJSON in the CloudWatch Logs User Guide.
This processor parses a specified field in the original log event into key-value pairs.
For more information about this processor including examples, see parseKeyValue in the CloudWatch Logs User Guide.
Use this processor to parse RDS for PostgreSQL vended logs, extract fields, and convert them into JSON format. This processor always processes the entire log event message. For more information about this processor including examples, see parsePostGres.
For more information about RDS for PostgreSQL log format, see RDS for PostgreSQL database log files.
If you use this processor, it must be the first processor in your transformer.
Use this processor to parse Route 53 vended logs, extract fields, and convert them into JSON format. This processor always processes the entire log event message. For more information about this processor including examples, see parseRoute53.
If you use this processor, it must be the first processor in your transformer.
Use this processor to parse Amazon VPC vended logs, extract fields, and convert them into JSON format. This processor always processes the entire log event message.
This processor doesn't support custom log formats, such as NAT gateway logs. For more information about custom log formats in Amazon VPC, see parseVPC. For more information about this processor including examples, see parseVPC.
If you use this processor, it must be the first processor in your transformer.
Use this processor to parse WAF vended logs, extract fields, and convert them into JSON format. This processor always processes the entire log event message. For more information about this processor including examples, see parseWAF.
For more information about WAF log format, see Log examples for web ACL traffic.
If you use this processor, it must be the first processor in your transformer.
A structure that contains information about one pattern token related to an anomaly.
For more information about patterns and tokens, see CreateLogAnomalyDetector.
A structure that contains information about one delivery destination policy.
This structure contains the information about one processor in a log transformer.
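A sketch of wrapping one processor configuration in a Processor value, assuming neither builder has required fields so that build() returns the value directly; check the builder docs in your SDK version:

```rust
use aws_sdk_cloudwatchlogs::types::{ParseJson, Processor};

// Each Processor value wraps exactly one processor configuration; a
// transformer is an ordered list of them, and a parse-type processor such
// as parseJSON must come first in that list.
fn json_parse_processor() -> Processor {
    // With no source set, parseJSON is documented to parse the whole @message.
    let parse_json = ParseJson::builder().build();
    Processor::builder().parse_json(parse_json).build()
}
```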
Reserved.
Reserved.
This structure contains details about a saved CloudWatch Logs Insights query definition.
Information about one CloudWatch Logs Insights query that matches the request in a DescribeQueries operation.
Contains the number of log events scanned by the query, the number of log events that matched the query criteria, and the total number of bytes in the log events that were scanned.
If the query involved log groups that have field index policies, the estimated number of skipped log events and the total bytes of those skipped log events are included. Using field indexes to skip log events in queries reduces scan volume and improves performance. For more information, see Create field indexes to improve query performance and reduce scan volume.
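A sketch of running a query and reading those statistics with the fluent client; the group name, time window, and query string are placeholders, and real code would poll GetQueryResults until the status is Complete:

```rust
use aws_sdk_cloudwatchlogs::Client;

// Start a Logs Insights query and read its scan statistics.
async fn query_scan_stats(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    let started = client
        .start_query()
        .log_group_name("my-log-group")
        .start_time(1_700_000_000) // epoch seconds (placeholder window)
        .end_time(1_700_003_600)
        .query_string("fields @timestamp, @message | limit 10")
        .send()
        .await?;

    let results = client
        .get_query_results()
        .query_id(started.query_id().unwrap_or_default())
        .send()
        .await?;

    // The statistics field carries the scanned/matched counts described above.
    if let Some(stats) = results.statistics() {
        println!(
            "scanned {} events, matched {}",
            stats.records_scanned(),
            stats.records_matched()
        );
    }
    Ok(())
}
```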
A structure that represents a valid record field header and whether it is mandatory.
If an entity is rejected when a PutLogEvents request was made, this includes details about the reason for the rejection.
Represents the rejected events.
This object defines one key that will be renamed with the renameKey processor.
Use this processor to rename keys in a log event.
For more information about this processor including examples, see renameKeys in the CloudWatch Logs User Guide.
A policy enabling one or more entities to put logs to a log group in this account.
Contains one field from one log event returned by a CloudWatch Logs Insights query, along with the value of that field.
For more information about the fields that are generated by CloudWatch Logs, see Supported Logs and Discovered Fields.
This structure contains delivery configurations that apply only when the delivery destination resource is an S3 bucket.
Represents the search status of a log stream.
Use this processor to split a field into an array of strings using a delimiting character.
For more information about this processor including examples, see splitString in the CloudWatch Logs User Guide.
This object defines one log field that will be split with the splitString processor.
Represents a subscription filter.
This processor matches a key’s value against a regular expression and replaces all matches with a replacement string.
For more information about this processor including examples, see substituteString in the CloudWatch Logs User Guide.
This object defines one log field key that will be replaced using the substituteString processor.
If you are suppressing an anomaly temporarily, this structure defines how long the suppression period lasts.
This structure contains information for one log event that has been processed by a log transformer.
Use this processor to remove leading and trailing whitespace.
For more information about this processor including examples, see trimString in the CloudWatch Logs User Guide.
Use this processor to convert a value type associated with the specified key to the specified type. It's a casting processor that changes the types of the specified fields. Values can be converted into one of the following data types: integer, double, string, and boolean.
For more information about this processor including examples, see typeConverter in the CloudWatch Logs User Guide.
This object defines one value type that will be converted using the typeConverter processor.
This processor converts a string field to uppercase.
For more information about this processor including examples, see upperCaseString in the CloudWatch Logs User Guide.
Enums
- When writing a match expression against AnomalyDetectorStatus, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against DataProtectionStatus, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against DeliveryDestinationType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against Distribution, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against EntityRejectionErrorType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against EvaluationFrequency, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against ExportTaskStatusCode, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against FlattenedElement, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against IndexSource, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against InheritedProperty, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against LogGroupClass, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against OrderBy, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against OutputFormat, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against PolicyType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against QueryStatus, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against Scope, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against StandardUnit, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against State, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against SuppressionState, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against SuppressionType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- When writing a match expression against Type, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
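The pattern these notes call for is a match with a catch-all arm. A minimal sketch against QueryStatus, with variant names assumed from the query status values in the Logs API:

```rust
use aws_sdk_cloudwatchlogs::types::QueryStatus;

// Handle the variants this code knows about and fall through for anything
// the service adds later; the catch-all arm is what keeps the match
// forward-compatible across SDK upgrades.
fn describe(status: &QueryStatus) -> &'static str {
    match status {
        QueryStatus::Complete => "finished",
        QueryStatus::Running | QueryStatus::Scheduled => "in progress",
        QueryStatus::Failed | QueryStatus::Cancelled => "did not finish",
        other => {
            // `other` may be a variant introduced after this code was written.
            eprintln!("unhandled query status: {:?}", other);
            "unrecognized"
        }
    }
}
```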