From 00213cac1d62b9bcd8b8acd5632351f86db7024c Mon Sep 17 00:00:00 2001
From: Elysa Hall A comprehensive test suite generated by the build workflow, providing validation capabilities for automated reasoning policies. An entity encompassing all the policy scenarios generated by the build workflow, which can be used to validate an Automated Reasoning policy. Contains the various assets generated during a policy build workflow, including logs, quality reports, test cases, and the final policy definition. An alternative way to express the same test scenario, used for validation and comparison purposes. The list of rule identifiers that are expected to be triggered or evaluated by this test scenario. The expected outcome when this scenario is evaluated against the policy (e.g., PASS, FAIL, VIOLATION). Represents a test scenario used to validate an Automated Reasoning policy, including the test conditions and expected outcomes. Represents a collection of generated policy scenarios. Contains a comprehensive entity encompassing all the scenarios generated by the build workflow, which can be used to validate an Automated Reasoning policy. The key-value pair that represents the attribute by which the The display settings of the custom line item The values of the line item filter. This specifies the values to filter on. Currently, you can only exclude Savings Plans discounts. A representation of the line item filter for your custom line item. You can use line item filters to include or exclude specific resource values from the billing group's total cost. For example, if you create a custom line item and you want to filter out a value, such as Savings Plans discounts, you can update This operation attempted to create a resource that already exists. The Amazon Resource Name (ARN) of the IAM service role to associate with the resource. 
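The line item filter described above excludes Savings Plans discounts from a custom line item's billed total. A minimal sketch of what such a filter object might look like; the attribute name, match option, and the `SAVINGS_PLAN_NEGATION` value are assumptions based on the described behavior, not confirmed by the text:

```python
# Hypothetical sketch of a line item filter that excludes Savings Plans
# discounts from a custom line item. Field names and the filter value
# are assumptions inferred from the documented behavior.
def savings_plans_exclusion_filter():
    return {
        "Attribute": "LINE_ITEM_TYPE",        # assumed attribute name
        "MatchOption": "NOT_EQUAL",           # exclude line items that match
        "Values": ["SAVINGS_PLAN_NEGATION"],  # assumed value for SP discounts
    }

# The filter list would be attached to the custom line item's charge details.
filters = [savings_plans_exclusion_filter()]
```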
The Amazon Resource Name (ARN) of the Amazon Web Services Identity and Access Management (IAM) service role to associate with the resource. The Amazon Web Services integration configuration settings for the IAM service role association. The Amazon Web Services integration configuration settings for the Amazon Web Services Identity and Access Management (IAM) service role association. The Amazon Resource Name (ARN) of the target resource to associate with the IAM service role. The Amazon Resource Name (ARN) of the target resource to associate with the Amazon Web Services Identity and Access Management (IAM) service role. The unique identifier of the ODB network associated with this Autonomous VM cluster. The Amazon Resource Name (ARN) of the ODB network associated with this Autonomous VM cluster. The name of the OCI resource anchor associated with this Autonomous VM cluster. The unique identifier of the Cloud Exadata Infrastructure containing this Autonomous VM cluster. The Amazon Resource Name (ARN) of the Cloud Exadata Infrastructure containing this Autonomous VM cluster. The percentage of data storage currently in use for Autonomous Databases in the Autonomous VM cluster. The unique identifier of the ODB network associated with this Autonomous VM cluster. The Amazon Resource Name (ARN) of the ODB network associated with this Autonomous VM cluster. The name of the OCI resource anchor associated with this Autonomous VM cluster. The unique identifier of the Exadata infrastructure containing this Autonomous VM cluster. The Amazon Resource Name (ARN) of the Exadata infrastructure containing this Autonomous VM cluster. The percentage of data storage currently in use for Autonomous Databases in the Autonomous VM cluster. The unique identifier of the Exadata infrastructure that this VM cluster belongs to. The Amazon Resource Name (ARN) of the Exadata infrastructure that this VM cluster belongs to. The name of the Grid Infrastructure (GI) cluster. 
The unique identifier of the ODB network for the VM cluster. The Amazon Resource Name (ARN) of the ODB network associated with this VM cluster. The amount of progress made on the current operation on the VM cluster, expressed as a percentage. The STS policy document that defines permissions for token service usage within the ODB network. The Amazon Web Services Security Token Service (STS) policy document that defines permissions for token service usage within the ODB network. The KMS policy document that defines permissions for key usage within the ODB network. The Amazon Web Services Key Management Service (KMS) policy document that defines permissions for key usage within the ODB network. The Amazon Web Services Region for cross-Region S3 restore access. The Amazon Web Services Region for cross-Region Amazon S3 restore access. The IPv4 addresses allowed for cross-Region S3 restore access. The IPv4 addresses allowed for cross-Region Amazon S3 restore access. The current status of the cross-Region S3 restore access configuration. The current status of the cross-Region Amazon S3 restore access configuration. The configuration access for the cross-Region Amazon S3 database restore source for the ODB network. The Amazon Resource Name (ARN) of the IAM service role to disassociate from the resource. The Amazon Resource Name (ARN) of the Amazon Web Services Identity and Access Management (IAM) service role to disassociate from the resource. 
The Amazon Web Services integration configuration settings for the IAM service role disassociation. The Amazon Web Services integration configuration settings for the Amazon Web Services Identity and Access Management (IAM) service role disassociation. The Amazon Resource Name (ARN) of the target resource to disassociate from the IAM service role. The Amazon Resource Name (ARN) of the target resource to disassociate from the Amazon Web Services Identity and Access Management (IAM) service role. The Amazon Resource Name (ARN) of the IAM service role. The Amazon Resource Name (ARN) of the Amazon Web Services Identity and Access Management (IAM) service role. The current status of the IAM service role. The current status of the Amazon Web Services Identity and Access Management (IAM) service role. Additional information about the current status of the IAM service role, if applicable. Additional information about the current status of the Amazon Web Services Identity and Access Management (IAM) service role, if applicable. The Amazon Web Services integration configuration settings for the IAM service role. The Amazon Web Services integration configuration settings for the Amazon Web Services Identity and Access Management (IAM) service role. Information about an Amazon Web Services Identity and Access Management (IAM) service role associated with a resource. The current status of the KMS access configuration. The current status of the Amazon Web Services Key Management Service (KMS) access configuration. The IPv4 addresses allowed for KMS access. The IPv4 addresses allowed for Amazon Web Services Key Management Service (KMS) access. The domain name for KMS access configuration. The domain name for Amazon Web Services Key Management Service (KMS) access configuration. The KMS policy document that defines permissions for key usage. The Amazon Web Services Key Management Service (KMS) policy document that defines permissions for key usage. 
Configuration for Amazon Web Services Key Management Service (KMS) access from the ODB network. The Amazon Web Services Security Token Service (STS) access configuration for managed services. The Amazon Web Services Security Token Service (STS) access configuration. The Amazon Web Services Key Management Service (KMS) access configuration for managed services. The Amazon Web Services Key Management Service (KMS) access configuration. The current status of the STS access configuration. The current status of the Amazon Web Services Security Token Service (STS) access configuration. The IPv4 addresses allowed for STS access. The IPv4 addresses allowed for Amazon Web Services Security Token Service (STS) access. The domain name for STS access configuration. The domain name for Amazon Web Services Security Token Service (STS) access configuration. The STS policy document that defines permissions for token service usage. The Amazon Web Services Security Token Service (STS) policy document that defines permissions for token service usage. Configuration for Amazon Web Services Security Token Service (STS) access from the ODB network. Configuration settings for the OpenSearch application, including administrative options. The Amazon Resource Name (ARN) of the KMS key used to encrypt the application's data at rest. If provided, the application uses your customer-managed key for encryption. If omitted, the application uses an AWS-managed key. The KMS key must be in the same Region as the application. 
The timestamp indicating when the OpenSearch application was created. The Amazon Resource Name (ARN) of the KMS key used to encrypt the application's data at rest. The timestamp of the last update to the OpenSearch application. The Amazon Resource Name (ARN) of the KMS key used to encrypt the application's data at rest. The configuration parameters to enable access to the key store required by the package. Indicates the expected spending by the customer over the course of the project. This value helps partners and AWS estimate the financial impact of the opportunity. Use the AWS Pricing Calculator to create an estimate of the customer’s total spend. If only annual recurring revenue (ARR) is available, distribute it across 12 months to provide an average monthly value. AWS partition where the opportunity will be deployed. Possible values: 'aws-eusc' for AWS European Sovereign Cloud, Captures details about the project associated with the opportunity, including objectives, scope, and customer requirements. Captures additional comments or information for the AWS partition where the opportunity will be deployed. Possible values: 'aws-eusc' for AWS European Sovereign Cloud, An object that contains the Changes the state of an Retrieves the revocation status of one or more of the signing profile, signing job, and signing certificate. Changes the state of a signing job to REVOKED. This indicates that the signature is no longer valid. Changes the state of a signing job to Changes the state of a signing profile to REVOKED. This indicates that signatures generated using the signing profile after an effective start date are no longer valid. Changes the state of a signing profile to Initiates a signing job to be performed on the code provided. Signing jobs are viewable by the You must create an Amazon S3 source bucket. For more information, see Creating a Bucket in the Amazon S3 Getting Started Guide. Your S3 source bucket must be version enabled. 
You must create an S3 destination bucket. AWS Signer uses your S3 destination bucket to write your signed code. You specify the name of the source and destination buckets when calling the You must ensure the S3 buckets are from the same Region as the signing profile. Cross-Region signing isn't supported. You must also specify a request token that identifies your request to Signer. You can call the DescribeSigningJob and the ListSigningJobs actions after you call For a Java example that shows how to use this action, see StartSigningJob. For cross-account signing. Grant a designated account permission to perform one or more of the following actions. Each action is associated with a specific API's operations. For more information about cross-account signing, see Using cross-account signing with signing profiles in the AWS Signer Developer Guide. You can designate the following actions to an account. AWS Signer is a fully managed code-signing service to help you ensure the trust and integrity of your code. Signer supports the following applications: With code signing for AWS Lambda, you can sign AWS Lambda deployment packages. Integrated support is provided for Amazon S3, Amazon CloudWatch, and AWS CloudTrail. In order to sign code, you create a signing profile and then use Signer to sign Lambda zip files in S3. With code signing for IoT, you can sign code for any IoT device that is supported by AWS. IoT code signing is available for Amazon FreeRTOS and AWS IoT Device Management, and is integrated with AWS Certificate Manager (ACM). In order to sign code, you import a third-party code-signing certificate using ACM, and use that to sign updates in Amazon FreeRTOS and AWS IoT Device Management. With Signer and the Notation CLI from the Notary Project, you can sign container images stored in a container registry such as Amazon Elastic Container Registry (ECR). The signatures are stored in the registry alongside the images, where they are available for verifying image authenticity and integrity. For more information about Signer, see the AWS Signer Developer Guide. Attaches a policy to a root, an organizational unit (OU), or an individual account. How the policy affects accounts depends on the type of policy. Refer to the Organizations User Guide for information about each policy type: You can only call this operation from the management account or a member account that is a delegated administrator. The type of policy to create. You can specify one of the following values: The type of policy that you want information about. You can specify one of the following values: The policy type that you want to disable in this root. You can specify one of the following values: The policy type that you want to enable. You can specify one of the following values: The type of policy that you want information about. You can specify one of the following values: The specified policy type. One of the following values: The type of policy that you want information about. You can specify one of the following values: The specified policy type. One of the following values: The type of policy that you want to include in the returned list. You must specify one of the following values: Specifies the type of policy that you want to include in the response. You must specify one of the following values: Describes an existing snapshot job. Poll job descriptions after a job starts to know the status of the job. For information on available status codes, see Registered user support This API can be called as before to get status of a job started by the same Quick Sight user. Possible error scenarios Request will fail with an Access Denied error in the following scenarios: The credentials have expired. Job has been started by a different user. Impersonated Quick Sight user doesn't have access to the specified dashboard in the job. Describes the result of an existing snapshot job that has finished running. 
A finished snapshot job will return a If the job has not finished running, this operation returns a message that says Registered user support This API can be called as before to get the result of a job started by the same Quick Sight user. The result for the user will be returned in Possible error scenarios The request fails with an Access Denied error in the following scenarios: The credentials have expired. The job was started by a different user. The registered user doesn't have access to the specified dashboard. The request succeeds but the job fails in the following scenarios: The request succeeds but the response contains an error code in the following scenarios: Get permissions for a flow. Retrieves the identity context for a Quick Sight user in a specified namespace, allowing you to obtain identity tokens that can be used with identity-enhanced IAM role sessions to call identity-aware APIs. Currently, you can call the following APIs with identity-enhanced Credentials Supported Authentication Methods This API supports Quick Sight native users, IAM federated users, and Active Directory users. For Quick Sight users authenticated by Amazon Web Services Identity Center, see Identity Center documentation on identity-enhanced IAM role sessions. Getting Identity-Enhanced Credentials To obtain identity-enhanced credentials, follow these steps: Call the GetIdentityContext API to retrieve an identity token for the specified user. Use the identity token with the STS AssumeRole API to obtain identity-enhanced IAM role session credentials. Usage with STS AssumeRole The identity token returned by this API should be used with the STS AssumeRole API to obtain credentials for an identity-enhanced IAM role session. When calling AssumeRole, include the identity token in the The assumed role must allow the Starts an asynchronous job that generates a snapshot of a dashboard's output. You can request one or several of the following format configurations in each API call. 
1 Paginated PDF 1 Excel workbook that includes up to 5 table or pivot table visuals 5 CSVs from table or pivot table visuals The status of a submitted job can be polled with the StartDashboardSnapshotJob API throttling Quick Sight uses API throttling to create a more consistent user experience within a time span for customers when they call the Common throttling scenarios The following list provides information about the most common throttling scenarios that can occur. A large number of A large number of API requests are submitted on an Amazon Web Services account. When a user makes more than 10 API calls to the Quick Sight API in one second, a If your use case requires a higher throttling limit, contact your account admin or Amazon Web Services Support to explore options to tailor a more optimal experience for your account. Best practices to handle throttling If your use case projects high levels of API traffic, try to reduce the degree of frequency and parallelism of API calls as much as you can to avoid throttling. You can also perform a timing test to calculate an estimate for the total processing time of your projected load that stays within the throttling limits of the Quick Sight APIs. For example, if your projected traffic is 100 snapshot jobs before 12:00 PM per day, start 12 jobs in parallel and measure the amount of time it takes to process all 12 jobs. Once you obtain the result, multiply the duration by 9, for example The time that it takes to process a job can be impacted by the following factors: The dataset type (Direct Query or SPICE). The size of the dataset. The complexity of the calculated fields that are used in the dashboard. The number of visuals that are on a sheet. The types of visuals that are on the sheet. The number of formats and snapshots that are requested in the job configuration. The size of the generated snapshots. 
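The batch-sizing arithmetic behind the "multiply the duration by 9" guidance above can be made explicit: 100 projected jobs at 12 in parallel means 9 batches. The 600-second per-batch duration below is a hypothetical measured value, not a figure from the text:

```python
import math

def estimate_total_time(total_jobs: int, parallel_jobs: int, batch_duration_s: float) -> float:
    """Estimate total processing time by measuring one batch of parallel
    jobs and scaling by the number of batches needed for the full load."""
    batches = math.ceil(total_jobs / parallel_jobs)  # 100 jobs / 12 in parallel -> 9 batches
    return batches * batch_duration_s

# Figures from the example: 100 snapshot jobs, 12 run in parallel.
# A 600-second measured batch duration is a hypothetical input.
print(estimate_total_time(100, 12, 600.0))  # 5400.0
```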
Registered user support You can generate snapshots for registered Quick Sight users by using the Snapshot Job APIs with identity-enhanced IAM role session credentials. This approach allows you to create snapshots on behalf of specific Quick Sight users while respecting their row-level security (RLS), column-level security (CLS), dynamic default parameters, and dashboard parameter/filter settings. To generate snapshots for registered Quick Sight users, you need to: Obtain identity-enhanced IAM role session credentials from AWS Security Token Service (STS). Use these credentials to call the Snapshot Job APIs. Identity-enhanced credentials are credentials that contain information about the end user (e.g., a registered Quick Sight user). If your Quick Sight users are backed by AWS Identity Center, then you need to set up a trusted token issuer. Then, getting identity-enhanced IAM credentials for a Quick Sight user looks like the following: Authenticate the user with your OIDC-compliant identity provider. You should get auth tokens back. Use the OIDC API, CreateTokenWithIAM, to exchange the auth tokens for IAM tokens. One of the resulting tokens will be an identity token. Call the STS AssumeRole API as you normally would, but provide an extra For more details, see IdC documentation on Identity-enhanced IAM role sessions. To obtain identity-enhanced credentials for Quick Sight native users, IAM federated users, or Active Directory users, follow the steps below: Call the Quick Sight GetIdentityContext API to get an identity token. Call the STS AssumeRole API as you normally would, but provide extra After obtaining the identity-enhanced IAM role session credentials, you can use them to start a job, describe the job, and describe the job result. You can use the same credentials as long as they haven't expired. All API requests made with these credentials are considered to be made by the impersonated Quick Sight user. 
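The AssumeRole step above passes the identity token as a context assertion. A minimal sketch of building the request parameters, assuming boto3; the role ARN is hypothetical, and the context-provider ARN shown is the Identity Center example (the exact provider ARN to use with a Quick Sight identity token is not given in the text):

```python
def build_assume_role_params(role_arn, session_name, identity_token, provider_arn):
    """Build kwargs for sts.assume_role() with an identity-enhanced context.

    The identity token is supplied as the ContextAssertion inside
    ProvidedContexts, as described for identity-enhanced IAM role sessions.
    """
    return {
        "RoleArn": role_arn,
        "RoleSessionName": session_name,
        "ProvidedContexts": [
            {
                "ProviderArn": provider_arn,  # assumption: provider ARN depends on the token issuer
                "ContextAssertion": identity_token,
            }
        ],
    }

params = build_assume_role_params(
    "arn:aws:iam::111122223333:role/SnapshotJobRole",   # hypothetical role
    "snapshot-session",
    "<identity-token-from-GetIdentityContext>",
    "arn:aws:iam::aws:contextProvider/IdentityCenter",  # Identity Center example
)
# boto3 usage (not executed here):
# import boto3
# creds = boto3.client("sts").assume_role(**params)["Credentials"]
```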
When using identity-enhanced session credentials, set the UserConfiguration request attribute to null. Otherwise, the request will be invalid. Possible error scenarios The request fails with an Access Denied error in the following scenarios: The credentials have expired. The impersonated Quick Sight user doesn't have access to the specified dashboard. The impersonated Quick Sight user is restricted from exporting data in the selected formats. For more information about export restrictions, see Customizing access to Amazon Quick Sight capabilities. The label options (label text, label visibility and sort icon visibility) for a color that is used in a bar chart. The options that determine the default presentation of all bar series in The series item configuration of a The legend display setup of the visual. The configuration of a Decal settings for all bar series in the visual. Border settings for all bar series in the visual. The options that determine the default presentation of all bar series in Decal settings for the bar series. Border settings for the bar series. Options that determine the presentation of a bar series in the visual. A bar chart. The Horizontal bar chart Vertical bar chart Horizontal stacked bar chart Vertical stacked bar chart Horizontal stacked 100% bar chart Vertical stacked 100% bar chart For more information, see Using bar charts in the Amazon Quick Suite User Guide. The field series item configuration of a The data field series item configuration of a The series item configuration of a This is a union type structure. For this structure to be valid, only one of the attributes can be defined. Visibility setting for the border. Width of the border. Valid range is from 1px to 8px. Color of the border. Border settings configuration for visual elements, including visibility, width, and color properties. The color configurations of the column. Decal configuration of the column. The general configuration of a column. 
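The border-width constraint stated above (valid range 1px to 8px) can be enforced with a small check before submitting a definition; the function name is illustrative:

```python
def validate_border_width(width_px: int) -> int:
    """Validate a border width against the documented 1px-8px range."""
    if not 1 <= width_px <= 8:
        raise ValueError(f"border width {width_px}px is outside the valid range 1-8px")
    return width_px

# 4px is accepted; 9px would raise ValueError.
print(validate_border_width(4))  # 4
```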
The label options (label text, label visibility, and sort icon visibility) of a combo chart's color field well. The options that determine the default presentation of all series in The series item configuration of a The legend display setup of the visual. The configuration of a Line styles options for all line series in the visual. Marker styles options for all line series in the visual. Decal settings for all series in the visual. Border settings for all bar series in the visual. The options that determine the default presentation of all series in The field wells of the visual. This is a union type structure. For this structure to be valid, only one of the attributes can be defined. Line styles options for the line series in the visual. Marker styles options for the line series in the visual. Decal settings for the series in the visual. Border settings for the bar series in the visual. Options that determine the presentation of a series in the visual. A combo chart. The For more information, see Using combo charts in the Amazon Quick Suite User Guide. The field series item configuration of a The data field series item configuration of a The series item configuration of a This is a union type structure. For this structure to be valid, only one of the attributes can be defined. The latitude coordinate value for the geocode preference. The longitude coordinate value for the geocode preference. The preference coordinate for the geocode preference. The configuration that controls field customization options available to dashboard readers for a visual. The options that define customizations available to dashboard readers for a specific visual The theme colors that are used for data colors in charts. The colors description is a hexadecimal color code that consists of six alphanumerical characters, prefixed with Field ID of the field that you are setting the series configuration for. Field value of the field that you are setting the series configuration for. 
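The theme data-color format described above is a six-character hexadecimal code; the prefix character is truncated in the source text, so the leading `#` here is an assumption. A quick validity check might look like:

```python
import re

# Matches a '#' followed by exactly six hex digits, e.g. "#1A2B3C".
# The '#' prefix is assumed; the source truncates before naming it.
HEX_COLOR = re.compile(r"^#[0-9A-Fa-f]{6}$")

def is_theme_color(value: str) -> bool:
    """Return True if value is a six-digit hex color code with a '#' prefix."""
    return HEX_COLOR.fullmatch(value) is not None

print(is_theme_color("#1A2B3C"))  # True
print(is_theme_color("1A2B3C"))   # False: prefix missing
```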
Options that determine the presentation of bar series associated with the field. The data field series item configuration of a Field ID of the field that you are setting the series configuration for. Field value of the field that you are setting the series configuration for. Options that determine the presentation of series associated with the field. The data field series item configuration of a The Amazon Resource Name (ARN) of the secret associated with the data source in AWS Secrets Manager. The credentials for connecting using key-pair. The credentials for connecting through a web proxy server. Field value of the field that you are setting the decal pattern to. Applicable only for field-level settings. Visibility setting for the decal pattern. Color configuration for the decal pattern. Type of pattern used for the decal, such as solid, diagonal, or circular patterns in various sizes. Style type for the decal, which can be either manual or automatic. This field is only applicable for line series. Decal settings for accessibility features that define visual patterns and styling for data elements. A list of up to 50 decal settings. Decal settings configuration for a column An entry that appears when a Field ID of the field for which you are setting the series configuration. Options that determine the presentation of bar series associated with the field. The field series item configuration of a The setup for the detailed tooltip. Field ID of the field for which you are setting the series configuration. Options that determine the presentation of series associated with the field. The field series item configuration of a The alt text for the visual. The geocoding preferences for the filled map visual. A filled map. For more information, see Creating filled maps in the Amazon Quick Suite User Guide. The unique request key for the geocode preference. The preference definition for the geocode preference. The geocode preference. The preference hierarchy for the geocode preference. 
The preference coordinate for the geocode preference. The preference value for the geocode preference. The country value for the preference hierarchy. The state/region value for the preference hierarchy. The county/district value for the preference hierarchy. The city value for the preference hierarchy. The postcode value for the preference hierarchy. The preference hierarchy for the geocode preference. The alt text for the visual. The geocoding preferences for the geospatial map. A geospatial map or a points on map visual. For more information, see Creating point maps in the Amazon Quick Suite User Guide. The ID for the Amazon Web Services account that the user whose identity context you want to retrieve is in. Currently, you use the ID for the Amazon Web Services account that contains your Quick Sight account. The identifier for the user whose identity context you want to retrieve. The namespace of the user that you want to get identity context for. This parameter is required when the UserIdentifier is specified using Email or UserName. The timestamp at which the session will expire. The HTTP status of the request. The Amazon Web Services request ID for this operation. The identity context information for the user. This is an identity token that should be used as the ContextAssertion parameter in the STS AssumeRole API call to obtain identity-enhanced AWS credentials. Username PrivateKey PrivateKeyPassphrase The combination of username, private key, and passphrase that are used as credentials. Marker styles options for all line series in the visual. Decal settings options for all line series in the visual. The options that determine the default presentation of all line series in Marker styles options for a line series in Decal settings for a line series in The options that determine the presentation of a line series in the visual The paginated report options for a pivot table visual. 
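The identity-context token described above is consumed by STS rather than used directly. As a rough sketch (the helper below, its name, and the shape of the provider ARN are illustrative assumptions, not the service's documented API), the AssumeRole arguments can be assembled like this:

```python
def build_assume_role_args(role_arn, session_name, identity_context, provider_arn):
    """Assemble keyword arguments for an STS AssumeRole call that carries
    an identity context token.

    The token obtained from the identity-context API is passed as the
    ContextAssertion of a ProvidedContexts entry, which is how STS
    issues identity-enhanced credentials. All ARNs here are placeholders.
    """
    return {
        "RoleArn": role_arn,
        "RoleSessionName": session_name,
        "ProvidedContexts": [
            {"ProviderArn": provider_arn, "ContextAssertion": identity_context}
        ],
    }
```

A caller would pass the resulting dict to `sts_client.assume_role(**args)`; the structure, not the call itself, is the point of the sketch.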
The options that define customizations available to dashboard readers for a specific visual The general visual interactions setup for a visual. Information about the Amazon Quick Sight console that you want to embed. A list of A structure that contains information about files that are requested for registered user during a The error type. An object that contains information on the error that caused the snapshot job to fail. An object that contains information on the error that caused the snapshot job to fail. For more information, see DescribeDashboardSnapshotJobResult API. A list of A list of An object that provides information on the result of a snapshot job. This object provides information about the job, the job status, and the location of the generated file. An array of records that describe the anonymous users that the dashboard snapshot is generated for. A structure that contains information about the users that the dashboard snapshot is generated for. A structure that contains information about the users that the dashboard snapshot is generated for. When using identity-enhanced session credentials, set the UserConfiguration request attribute to null. Otherwise, the request will be invalid. A structure that contains information about the anonymous users that the generated snapshot is for. This API will not return information about registered Amazon Quick Sight. A structure that contains information about the users that the dashboard snapshot is generated for. The users can be either anonymous users or registered users. Anonymous users cannot be used together with registered users. When using identity-enhanced session credentials, set the UserConfiguration request attribute to null. Otherwise, the request will be invalid. A collection of inline visualizations to display within a chart. The options that define customizations available to dashboard readers for a specific visual The general visual interactions setup for a visual. 
A registered user of Quick Sight. The name of the user that you want to get identity context for. The email address of the user that you want to get identity context for. The Amazon Resource Name (ARN) of the user that you want to get identity context for. A structure that contains information to identify a user. Specifies whether dashboard readers can customize fields for this visual. This option is The additional dataset fields available for dashboard readers to customize the visual with, beyond the fields already configured on the visual. The configuration that controls field customization options available to dashboard readers for a visual. The options that are available for a single Y axis in a chart. Amazon Quick Sight is a fully managed, serverless business intelligence service for the Amazon Web Services Cloud that makes it easy to extend data and insights to every user in your organization. This API reference contains documentation for a programming interface that you can use to manage Amazon Quick Sight. Secrets are listed by If not specified, secrets are listed by If you used Easy DKIM to configure DKIM authentication for the domain, then this object contains a set of unique strings that you use to create a set of CNAME records that you add to the DNS configuration for your domain. When Amazon SES detects these records in the DNS configuration for your domain, the DKIM authentication process is complete. If you configured DKIM authentication for the domain by providing your own public-private key pair, then this object contains the selector for the public key. Regardless of the DKIM authentication method you use, Amazon SES searches for the appropriate records in the DNS configuration of the domain for up to 72 hours. The hosted zone where Amazon SES publishes the DKIM public key TXT records for this email identity. This value indicates the DNS zone that customers must reference when configuring their CNAME records for DKIM authentication. 
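The Easy DKIM flow described above can be made concrete: for each token SES returns, you create one CNAME record under `_domainkey` that points into the SES-hosted zone. A minimal sketch, assuming the standard Easy DKIM record shape:

```python
def easy_dkim_cnames(domain, tokens):
    """Build the (name, target) CNAME pairs Easy DKIM expects in DNS.

    For each token SES returns, the record
    <token>._domainkey.<domain> should point at
    <token>.dkim.amazonses.com. Domain and tokens here are examples.
    """
    return [
        (f"{t}._domainkey.{domain}", f"{t}.dkim.amazonses.com")
        for t in tokens
    ]
```

Once all three records are published and SES detects them, the DKIM status moves to verified.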
When configuring DKIM for your domain, create CNAME records in your DNS that point to the selectors in this hosted zone. For example: A string that indicates how DKIM was configured for the identity. These are the possible values: An object containing additional settings for your VDM configuration as applicable to the Guardian. The HTTPS policy to use for tracking open and click events. If the value is OPTIONAL or HttpsPolicy is not specified, the open tracker uses HTTP and the click tracker uses the original protocol of the link. If the value is REQUIRE, both the open and click trackers use HTTPS. If the value is REQUIRE_OPEN_ONLY, the open tracker uses HTTPS and the click tracker uses the original protocol of the link. If you used Easy DKIM to configure DKIM authentication for the domain, then this object contains a set of unique strings that you use to create a set of CNAME records that you add to the DNS configuration for your domain. When Amazon SES detects these records in the DNS configuration for your domain, the DKIM authentication process is complete. If you configured DKIM authentication for the domain by providing your own public-private key pair, then this object contains the selector that's associated with your public key. Regardless of the DKIM authentication method you use, Amazon SES searches for the appropriate records in the DNS configuration of the domain for up to 72 hours. The hosted zone where Amazon SES publishes the DKIM public key TXT records for this email identity. This value indicates the DNS zone that customers must reference when configuring their CNAME records for DKIM authentication. When configuring DKIM for your domain, create CNAME records in your DNS that point to the selectors in this hosted zone. For example: If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. Tag-based conditions for contact flow filtering. 
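The three HttpsPolicy values described above reduce to a small decision table. A sketch of that behavior (the function name and return convention are illustrative):

```python
def tracker_protocols(https_policy, link_protocol):
    """Return (open_tracker_protocol, click_tracker_protocol) for a link.

    Mirrors the HttpsPolicy behavior: OPTIONAL (or unset) keeps the open
    tracker on HTTP and the click tracker on the link's own protocol;
    REQUIRE forces HTTPS for both; REQUIRE_OPEN_ONLY forces HTTPS for
    the open tracker only.
    """
    if https_policy == "REQUIRE":
        return ("https", "https")
    if https_policy == "REQUIRE_OPEN_ONLY":
        return ("https", link_protocol)
    # OPTIONAL or unspecified
    return ("http", link_protocol)
```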
Contact flow type condition. A list of conditions which would be applied together with an AND condition. A list of conditions which would be applied together with an OR condition. A list of conditions which would be applied together with an AND condition. Contact flow type condition within attribute filter. Filter for contact flow attributes with multiple condition types. Contact flow type of the contact flow type condition. The contact flow type condition. An object that can be used to specify Tag conditions inside the The top level list specifies conditions that need to be applied with The inner list specifies conditions that need to be applied with A list of conditions which would be applied together with an A leaf node condition which can be used to specify a tag condition. 
An object that can be used to specify Tag conditions inside the Top level list specifies conditions that need to be applied with Inner list specifies conditions that need to be applied with An object that can be used to specify Tag conditions or Hierarchy Group conditions inside the This accepts an The top level list specifies conditions that need to be applied with The inner list specifies conditions that need to be applied with Only one field can be populated. Maximum number of allowed Tag conditions is 25. Maximum number of allowed Hierarchy Group conditions is 20. A minimum value of the property. A maximum value of the property. Datetime property comparison type. A datetime search condition for Search APIs. The type of comparison to be made when evaluating the decimal condition. A decimal search condition for Search APIs. Information about the call disconnect experience. The customer's identification number. For example, the A list of participant types to automatically disconnect when the end customer ends the chat session, allowing them to continue through disconnect flows such as surveys or feedback forms. 
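The OR-of-AND nesting and the 25-condition limit described above can be sketched as a small validator (the filter shape used here, a list of AND groups of `TagKey`/`TagValue` leaves, is an assumption based on the description, not the service's exact wire format):

```python
def validate_tag_filter(or_conditions, max_tag_conditions=25):
    """Validate an OR-of-AND tag filter against the documented limit.

    Each inner list is combined with AND; the outer list is combined
    with OR. Returns the total number of leaf tag conditions, raising
    ValueError when the documented maximum is exceeded.
    """
    total = sum(len(and_group) for and_group in or_conditions)
    if total > max_tag_conditions:
        raise ValueError(
            f"too many tag conditions: {total} > {max_tag_conditions}"
        )
    return total
```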
Valid value: With the A list of conditions which would be applied together with an Specifies the ARN for the customer-managed KMS key that DataSync uses to encrypt the DataSync-managed secret stored for Specifies configuration information for a DataSync-managed secret, such as an authentication token or secret key that DataSync uses to access a specific storage location, with a customer-managed KMS key. You can use either Specifies configuration information for a DataSync-managed secret, such as an authentication token, secret key, password, or Kerberos keytab that DataSync uses to access a specific storage location, with a customer-managed KMS key. You can use either Specifies configuration information for a DataSync-managed secret, which includes the authentication token that DataSync uses to access a specific AzureBlob storage location, with a customer-managed KMS key. When you include this parameter as part of a Make sure that DataSync has permission to access the KMS key that you specify. You can use either Specifies configuration information for a customer-managed Secrets Manager secret where the authentication token for an AzureBlob storage location is stored in plain text. This configuration includes the secret ARN and the ARN for an IAM role that provides access to the secret. You can use either Specifies configuration information for a customer-managed Secrets Manager secret where the authentication token for an AzureBlob storage location is stored in plain text, in Secrets Manager. 
This configuration includes the secret ARN and the ARN for an IAM role that provides access to the secret. You can use either Specifies configuration information for a DataSync-managed secret, which includes the When you include this parameter as part of a Make sure that DataSync has permission to access the KMS key that you specify. You can use either Specifies configuration information for a customer-managed Secrets Manager secret where the secret key for a specific object storage location is stored in plain text. This configuration includes the secret ARN and the ARN for an IAM role that provides access to the secret. You can use either Specifies configuration information for a customer-managed Secrets Manager secret where the secret key for a specific object storage location is stored in plain text, in Secrets Manager. This configuration includes the secret ARN and the ARN for an IAM role that provides access to the secret. You can use either CreateLocationObjectStorageRequest Specifies the password of the user who can mount your SMB file server and has permission to access the files and folders involved in your transfer. This parameter applies only if Specifies configuration information for a DataSync-managed secret, either a When you include this parameter as part of a Make sure that DataSync has permission to access the KMS key that you specify. You can use either Specifies configuration information for a customer-managed Secrets Manager secret where the SMB storage location credentials are stored in Secrets Manager as plain text (for You can use either Specifies the DataSync agent (or agents) that can connect to your SMB file server. You specify an agent by using its Amazon Resource Name (ARN). 
Specifies the ARN for the Identity and Access Management role that DataSync uses to access the secret specified for Specifies configuration information for a customer-managed Secrets Manager secret where a storage location authentication token or secret key is stored in plain text. This configuration includes the secret ARN and the ARN for an IAM role that provides access to the secret. You can use either Specifies configuration information for a customer-managed Secrets Manager secret where storage location credentials are stored in Secrets Manager as plain text (for authentication token, secret key, or password) or as binary (for Kerberos keytab). This configuration includes the secret ARN and the ARN for an IAM role that provides access to the secret. You can use either The authentication protocol that DataSync uses to connect to your SMB file server. Describes configuration information for a DataSync-managed secret, such as a Describes configuration information for a customer-managed secret, such as a DescribeLocationSmbResponse The number of files, objects, and directories that DataSync expects to transfer over the network. This value is calculated while DataSync prepares the transfer. How this gets calculated depends primarily on your task’s transfer mode configuration: If Anything that's added or modified at the source location. Anything that's in both locations and modified at the destination after an initial transfer (unless OverwriteMode is set to (Basic task mode only) The number of items that DataSync expects to delete (if PreserveDeletedFiles is set to If The number of files, objects, and directories that DataSync expects to transfer over the network. This value is calculated while DataSync prepares the transfer. How this gets calculated depends primarily on your task’s transfer mode configuration: If Anything that's added or modified at the source location. 
Anything that's in both locations and modified at the destination after an initial transfer (unless OverwriteMode is set to (Basic task mode only) The number of items that DataSync expects to delete (if PreserveDeletedFiles is set to If For Enhanced mode tasks, this counter only includes files or objects. Directories are counted in EstimatedFoldersToTransfer. The number of files, objects, and directories that DataSync actually transfers over the network. This value is updated periodically during your task execution when something is read from the source and sent over the network. If DataSync fails to transfer something, this value can be less than The number of files, objects, and directories that DataSync actually transfers over the network. This value is updated periodically during your task execution when something is read from the source and sent over the network. If DataSync fails to transfer something, this value can be less than For Enhanced mode tasks, this counter only includes files or objects. Directories are counted in FoldersTransferred. The number of files, objects, and directories that DataSync actually deletes in your destination location. If you don't configure your task to delete data in the destination that isn't in the source, the value is always The number of files, objects, and directories that DataSync actually deletes in your destination location. If you don't configure your task to delete data in the destination that isn't in the source, the value is always For Enhanced mode tasks, this counter only includes files or objects. Directories are counted in FoldersDeleted. The number of files, objects, and directories that DataSync skips during your transfer. The number of files, objects, and directories that DataSync skips during your transfer. For Enhanced mode tasks, this counter only includes files or objects. Directories are counted in FoldersSkipped. The number of files, objects, and directories that DataSync verifies during your transfer. 
When you configure your task to verify only the data that's transferred, DataSync doesn't verify directories in some situations or files that fail to transfer. The number of files, objects, and directories that DataSync verifies during your transfer. When you configure your task to verify only the data that's transferred, DataSync doesn't verify directories in some situations or files that fail to transfer. For Enhanced mode tasks, this counter only includes files or objects. Directories are counted in FoldersVerified. The number of files, objects, and directories that DataSync expects to delete in your destination location. If you don't configure your task to delete data in the destination that isn't in the source, the value is always The number of files, objects, and directories that DataSync expects to delete in your destination location. If you don't configure your task to delete data in the destination that isn't in the source, the value is always For Enhanced mode tasks, this counter only includes files or objects. Directories are counted in EstimatedFoldersToDelete. The number of objects that DataSync will attempt to transfer after comparing your source and destination locations. Applies only to Enhanced mode tasks. This counter isn't applicable if you configure your task to transfer all data. In that scenario, DataSync copies everything from the source to the destination without comparing differences between the locations. The number of files or objects that DataSync will attempt to transfer after comparing your source and destination locations. Applies only to Enhanced mode tasks. This counter isn't applicable if you configure your task to transfer all data. In that scenario, DataSync copies everything from the source to the destination without comparing differences between the locations. The number of objects that DataSync finds at your locations. Applies only to Enhanced mode tasks. The number of files or objects that DataSync finds at your locations. 
Applies only to Enhanced mode tasks. The number of objects that DataSync fails to prepare, transfer, verify, and delete during your task execution. Applies only to Enhanced mode tasks. The number of files or objects that DataSync fails to prepare, transfer, verify, and delete during your task execution. Applies only to Enhanced mode tasks. The number of directories that DataSync expects to delete in your destination location. If you don't configure your task to delete data in the destination that isn't in the source, the value is always Applies only to Enhanced mode tasks. The number of directories that DataSync expects to transfer over the network. This value is calculated as DataSync prepares directories to transfer. How this gets calculated depends primarily on your task’s transfer mode configuration: If Anything that's added or modified at the source location. Anything that's in both locations and modified at the destination after an initial transfer (unless OverwriteMode is set to If Applies only to Enhanced mode tasks. The number of directories that DataSync skips during your transfer. Applies only to Enhanced mode tasks. The number of directories that DataSync will attempt to transfer after comparing your source and destination locations. Applies only to Enhanced mode tasks. This counter isn't applicable if you configure your task to transfer all data. In that scenario, DataSync copies everything from the source to the destination without comparing differences between the locations. The number of directories that DataSync actually transfers over the network. This value is updated periodically during your task execution when something is read from the source and sent over the network. If DataSync fails to transfer something, this value can be less than Applies only to Enhanced mode tasks. The number of directories that DataSync verifies during your transfer. Applies only to Enhanced mode tasks. 
The number of directories that DataSync actually deletes in your destination location. If you don't configure your task to delete data in the destination that isn't in the source, the value is always Applies only to Enhanced mode tasks. The number of directories that DataSync finds at your locations. Applies only to Enhanced mode tasks. The number of directories that DataSync fails to list, prepare, transfer, verify, and delete during your task execution. Applies only to Enhanced mode tasks. This exception is thrown when the client submits a malformed request. Limits the bandwidth used by a DataSync task. For example, if you want DataSync to use a maximum of 1 MB, set this value to Not applicable to Enhanced mode tasks. Limits the bandwidth used by a DataSync task. For example, if you want DataSync to use a maximum of 1 MB, set this value to The number of objects that DataSync fails to prepare during your task execution. The number of files or objects that DataSync fails to prepare during your task execution. The number of objects that DataSync fails to transfer during your task execution. The number of files or objects that DataSync fails to transfer during your task execution. The number of objects that DataSync fails to verify during your task execution. The number of files or objects that DataSync fails to verify during your task execution. The number of objects that DataSync fails to delete during your task execution. The number of files or objects that DataSync fails to delete during your task execution. The number of objects that DataSync fails to prepare, transfer, verify, and delete during your task execution. Applies only to Enhanced mode tasks. The number of files or objects that DataSync fails to prepare, transfer, verify, and delete during your task execution. Applies only to Enhanced mode tasks. The number of objects that DataSync finds at your source location. 
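The per-phase failure counters above (prepare, transfer, verify, delete) relate to the combined failures counter in an obvious way. A sketch, assuming the combined counter is simply their sum, which the descriptions imply but do not state outright:

```python
def objects_failed(prepare_failed, transfer_failed, verify_failed, delete_failed):
    """Aggregate per-phase DataSync failure counters.

    Combines the counts of objects that failed to prepare, transfer,
    verify, and delete into one overall failed-objects figure
    (assumption: the combined counter is the plain sum of the phases).
    """
    return prepare_failed + transfer_failed + verify_failed + delete_failed
```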
With a manifest, DataSync lists only what's in your manifest (and not everything at your source location). With an include filter, DataSync lists only what matches the filter at your source location. With an exclude filter, DataSync lists everything at your source location before applying the filter. The number of files or objects that DataSync finds at your source location. With a manifest, DataSync lists only what's in your manifest (and not everything at your source location). With an include filter, DataSync lists only what matches the filter at your source location. With an exclude filter, DataSync lists everything at your source location before applying the filter. The number of files or objects that DataSync finds at your destination location. This counter is only applicable if you configure your task to delete data in the destination that isn't in the source. The number of files or objects that DataSync finds at your locations. Applies only to Enhanced mode tasks. The number of directories that DataSync fails to list during your task execution. The number of directories that DataSync fails to prepare during your task execution. The number of directories that DataSync fails to transfer during your task execution. The number of directories that DataSync fails to verify during your task execution. The number of directories that DataSync fails to delete during your task execution. The number of directories that DataSync fails to list, prepare, transfer, verify, and delete during your task execution. Applies only to Enhanced mode tasks. The number of directories that DataSync finds at your source location. With a manifest, DataSync lists only what's in your manifest (and not everything at your source location). With an include filter, DataSync lists only what matches the filter at your source location. With an exclude filter, DataSync lists everything at your source location before applying the filter. 
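The listing semantics above differ between include and exclude filters: an include filter shrinks what gets listed at the source, while an exclude filter lists everything first and applies the filter afterward. A sketch of that difference using glob-style patterns (the pattern syntax and return shape are illustrative assumptions):

```python
from fnmatch import fnmatch

def objects_listed(source, include=None, exclude=None):
    """Sketch of source-listing behavior with include/exclude filters.

    With an include filter, only matching paths are listed at all.
    With an exclude filter, everything is listed first and the filter
    is applied afterward, so the listed count equals the full source
    size. Returns (listed_count, candidates_after_filters).
    """
    if include is not None:
        listed = [p for p in source if fnmatch(p, include)]
        return len(listed), listed
    listed = list(source)  # exclude filters list everything first
    if exclude is not None:
        remaining = [p for p in listed if not fnmatch(p, exclude)]
    else:
        remaining = listed
    return len(listed), remaining
```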
The number of objects that DataSync finds at your destination location. This counter is only applicable if you configure your task to delete data in the destination that isn't in the source. The number of directories that DataSync finds at your destination location. This counter is only applicable if you configure your task to delete data in the destination that isn't in the source. The number of objects that DataSync finds at your locations. Applies only to Enhanced mode tasks. The number of directories that DataSync finds at your locations. Applies only to Enhanced mode tasks. Specifies the password of the user who can mount your SMB file server and has permission to access the files and folders involved in your transfer. This parameter applies only if Specifies configuration information for a DataSync-managed secret, such as a Specifies configuration information for a customer-managed secret, such as a Specifies the DataSync agent (or agents) that can connect to your SMB file server. You specify an agent by using its Amazon Resource Name (ARN). Gets browser settings. Gets the data protection settings. Gets the identity provider. Gets the IP access settings. Gets the network settings. Gets the web portal. Gets the service provider metadata. Gets information for a secure browser session. Gets details about a specific session logger resource. Gets the trust store. Gets the trust store certificate. Gets user access logging settings. Gets user settings. Retrieves a list of browser settings. 
Retrieves a list of data protection settings. Retrieves a list of identity providers for a specific web portal. Retrieves a list of IP access settings. Retrieves a list of network settings. Retrieves a list of web portals. Lists all available session logger resources. Lists information for multiple secure browser sessions from a specific portal. Retrieves a list of tags for a resource. Retrieves a list of trust store certificates. Retrieves a list of trust stores. Retrieves a list of user access logging settings. Retrieves a list of user settings. Metadata for the logo image file, including the MIME type, file extension, and upload timestamp. Metadata for the wallpaper image file, including the MIME type, file extension, and upload timestamp. Metadata for the favicon image file, including the MIME type, file extension, and upload timestamp. A map of localized text strings for different languages, allowing the portal to display content in the user's preferred language. The color theme for components on the web portal. The terms of service text in Markdown format that users must accept before accessing the portal. The branding configuration output including custom images metadata, localized strings, color theme, and terms of service. The logo image for the portal. Provide either a binary image file or an S3 URI pointing to the image file. Maximum 100 KB in JPEG, PNG, or ICO format. The wallpaper image for the portal. 
Provide either a binary image file or an S3 URI pointing to the image file. Maximum 5 MB in JPEG or PNG format. The favicon image for the portal. Provide either a binary image file or an S3 URI pointing to the image file. Maximum 100 KB in JPEG, PNG, or ICO format. A map of localized text strings for different supported languages. Each locale must provide the required fields The color theme for components on the web portal. Choose The terms of service text in Markdown format. Users will be presented with the terms of service after successfully signing in. The input configuration for creating branding settings. The logo image for the portal. Provide either a binary image file or an S3 URI pointing to the image file. Maximum 100 KB in JPEG, PNG, or ICO format. The wallpaper image for the portal. Provide either a binary image file or an S3 URI pointing to the image file. Maximum 5 MB in JPEG or PNG format. The favicon image for the portal. Provide either a binary image file or an S3 URI pointing to the image file. Maximum 100 KB in JPEG, PNG, or ICO format. A map of localized text strings for different supported languages. Each locale must provide the required fields The color theme for components on the web portal. Choose The terms of service text in Markdown format. To remove existing terms of service, provide an empty string. The input configuration for updating branding settings. All fields are optional when updating existing branding. The configuration of the toolbar. This allows administrators to select the toolbar type and visual mode, set maximum display resolution for sessions, and choose which items are visible to end users during their sessions. If administrators do not modify these settings, end users retain control over their toolbar preferences. The branding configuration input that customizes the appearance of the web portal for end users. This includes a custom logo, favicon, wallpaper, localized strings, color theme, and an optional terms of service. 
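The image limits quoted above (100 KB for logos and favicons in JPEG, PNG, or ICO; 5 MB for wallpapers in JPEG or PNG) can be pre-checked client-side before upload. A minimal sketch, assuming the limits are binary KB/MB; the service may count sizes differently:

```python
def validate_branding_image(kind, data, extension):
    """Check an uploaded branding image against the documented limits.

    kind is "logo", "favicon", or "wallpaper"; data is the raw bytes;
    extension is the file extension without the dot. Raises ValueError
    on violation, returns True otherwise.
    """
    limits = {
        "logo": (100 * 1024, {"jpeg", "jpg", "png", "ico"}),
        "favicon": (100 * 1024, {"jpeg", "jpg", "png", "ico"}),
        "wallpaper": (5 * 1024 * 1024, {"jpeg", "jpg", "png"}),
    }
    max_bytes, formats = limits[kind]
    if len(data) > max_bytes:
        raise ValueError(f"{kind} exceeds {max_bytes} bytes")
    if extension.lower() not in formats:
        raise ValueError(f"{kind} format .{extension} not allowed")
    return True
```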
The image provided as a binary image file. The S3 URI pointing to the image file. The URI must use the format The input for an icon image (logo or favicon). Provide either a binary image file or an S3 URI pointing to the image file. Maximum 100 KB in JPEG, PNG, or ICO format. The MIME type of the image. The file extension of the image. The timestamp when the image was last uploaded. Metadata information about an uploaded image file. The text displayed in the browser tab title. The welcome text displayed on the sign-in page. The title text for the login section. This field is optional and defaults to "Sign In". The description text for the login section. This field is optional and defaults to "Sign in to your session". The text displayed on the login button. This field is optional and defaults to "Sign In". A contact link URL. The URL must start with The text displayed on the contact button. This field is optional and defaults to "Contact us". The text displayed during session loading. This field is optional and defaults to "Loading your session". Localized text strings for a specific language that customize the web portal. The S3 log configuration. The configuration of the toolbar. This allows administrators to select the toolbar type and visual mode, set maximum display resolution for sessions, and choose which items are visible to end users during their sessions. If administrators do not modify these settings, end users retain control over their toolbar preferences. The branding configuration that customizes the appearance of the web portal for end users. When updating user settings without an existing branding configuration, all fields (logo, favicon, wallpaper, localized strings, and color theme) are required except for terms of service. When updating user settings with an existing branding configuration, all fields are optional. The configuration of the toolbar. 

This allows administrators to select the toolbar type and visual mode, set maximum display resolution for sessions, and choose which items are visible to end users during their sessions. If administrators do not modify these settings, end users retain control over their toolbar preferences. The branding configuration output that customizes the appearance of the web portal for end users. A user settings resource that can be associated with a web portal. Once associated with a web portal, user settings control how users can transfer data between a streaming session and their local devices. The configuration of the toolbar. This allows administrators to select the toolbar type and visual mode, set maximum display resolution for sessions, and choose which items are visible to end users during their sessions. If administrators do not modify these settings, end users retain control over their toolbar preferences. The branding configuration output that customizes the appearance of the web portal for end users. The summary of user settings. The image provided as a binary image file. The S3 URI pointing to the image file. The URI must use the format The input for a wallpaper image. Provide the image as either a binary image file or an S3 URI. Maximum 5 MB in JPEG or PNG format. Creates a policy within the AgentCore Policy system. Policies provide real-time, deterministic control over agentic interactions with AgentCore Gateway. Using the Cedar policy language, you can define fine-grained policies that specify which interactions with Gateway tools are permitted based on input parameters and OAuth claims, ensuring agents operate within defined boundaries and business rules. The policy is validated during creation against the Cedar schema generated from the Gateway's tools' input schemas, which defines the available tools, their parameters, and expected data types. This is an asynchronous operation. 
Use the GetPolicy operation to poll the Creates a policy within the AgentCore Policy system. Policies provide real-time, deterministic control over agentic interactions with AgentCore Gateway. Using the Cedar policy language, you can define fine-grained policies that specify which interactions with Gateway tools are permitted based on input parameters and OAuth claims, ensuring agents operate within defined boundaries and business rules. The policy is validated during creation against the Cedar schema generated from the Gateway's tools' input schemas, which defines the available tools, their parameters, and expected data types. This is an asynchronous operation. Use the GetPolicy operation to poll the Creates a new policy engine within the AgentCore Policy system. A policy engine is a collection of policies that evaluates and authorizes agent tool calls. When associated with Gateways (each Gateway can be associated with at most one policy engine, but multiple Gateways can be associated with the same engine), the policy engine intercepts all agent requests and determines whether to allow or deny each action based on the defined policies. This is an asynchronous operation. Use the GetPolicyEngine operation to poll the Creates a new policy engine within the AgentCore Policy system. A policy engine is a collection of policies that evaluates and authorizes agent tool calls. When associated with Gateways (each Gateway can be associated with at most one policy engine, but multiple Gateways can be associated with the same engine), the policy engine intercepts all agent requests and determines whether to allow or deny each action based on the defined policies. This is an asynchronous operation. Use the GetPolicyEngine operation to poll the The unique identifier of the policy generation request to be retrieved. This must be a valid generation ID from a previous StartPolicyGeneration call. The unique identifier of the policy generation request to be retrieved. 
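The Gateway-to-engine relationship described above (each Gateway associated with at most one policy engine, while many Gateways may share one engine) is easy to get wrong in bookkeeping code. The sketch below models that cardinality with a plain mapping; the class and identifiers are hypothetical illustrations, not an SDK API.

```python
# Illustrative model (not an SDK API) of the association rule: a dict keyed by
# gateway naturally enforces "at most one engine per Gateway", while any number
# of gateways may map to the same engine.
class AssociationError(Exception):
    pass

class PolicyEngineAssociations:
    def __init__(self):
        self._engine_for_gateway: dict[str, str] = {}

    def associate(self, gateway_id: str, engine_id: str) -> None:
        current = self._engine_for_gateway.get(gateway_id)
        if current is not None and current != engine_id:
            raise AssociationError(f"{gateway_id} already associated with {current}")
        self._engine_for_gateway[gateway_id] = engine_id

    def gateways_for(self, engine_id: str) -> list[str]:
        return [g for g, e in self._engine_for_gateway.items() if e == engine_id]
```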
This must be a valid generation ID from a previous StartPolicyGeneration call. A pagination token returned from a previous ListPolicies call. Use this token to retrieve the next page of results when the response is paginated. A pagination token returned from a previous ListPolicies call. Use this token to retrieve the next page of results when the response is paginated. A pagination token returned from a previous ListPolicyEngines call. Use this token to retrieve the next page of results when the response is paginated. A pagination token returned from a previous ListPolicyEngines call. Use this token to retrieve the next page of results when the response is paginated. A pagination token that can be used in subsequent ListPolicyEngines calls to retrieve additional results. This token is only present when there are more results available. A pagination token that can be used in subsequent ListPolicyEngines calls to retrieve additional results. This token is only present when there are more results available. The unique identifier of the policy generation request whose assets are to be retrieved. This must be a valid generation ID from a previous StartPolicyGeneration call that has completed processing. The unique identifier of the policy generation request whose assets are to be retrieved. This must be a valid generation ID from a previous StartPolicyGeneration call that has completed processing. A pagination token returned from a previous ListPolicyGenerationAssets call. Use this token to retrieve the next page of assets when the response is paginated due to large numbers of generated policy options. A pagination token returned from a previous ListPolicyGenerationAssets call. Use this token to retrieve the next page of assets when the response is paginated due to large numbers of generated policy options. A pagination token that can be used in subsequent ListPolicyGenerationAssets calls to retrieve additional assets. 
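All of the List* operations above share the same token-based contract: pass the token from the previous response back in, and stop when no token is returned. A minimal sketch of that loop follows; the client object, operation name, and the `items`/`nextToken` response fields are stand-ins, and the exact field names may differ by service.

```python
# Generic pagination loop for token-based List* APIs. The client here is any
# object exposing the operation as a method; field names are assumptions.
def list_all(client, operation: str, **params):
    """Follow nextToken until the service stops returning one."""
    items, token = [], None
    while True:
        request = dict(params)
        if token:
            request["nextToken"] = token
        page = getattr(client, operation)(**request)
        items.extend(page.get("items", []))
        token = page.get("nextToken")
        if not token:
            return items
```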
This token is only present when there are more generated policy assets available beyond the current response. A pagination token that can be used in subsequent ListPolicyGenerationAssets calls to retrieve additional assets. This token is only present when there are more generated policy assets available beyond the current response. The unique identifier for an Amazon Connect contact. This identifier is related to the contact starting. Returns the status, metrics, and errors (if there are any) that are associated with a job. Returns the status, metrics, and errors (if there are any) that are associated with a job. Returns the Returns the Returns the Returns the Returns the corresponding Match ID of a customer record if the record has been processed in a rule-based matching workflow. You can call this API as a dry run of an incremental load on the rule-based matching workflow. Returns the corresponding Match ID of a customer record if the record has been processed in a rule-based matching workflow. You can call this API as a dry run of an incremental load on the rule-based matching workflow. Returns the status, metrics, and errors (if there are any) that are associated with a job. Returns the status, metrics, and errors (if there are any) that are associated with a job. Returns the Returns the Returns the resource-based policy. Returns the resource-based policy. Returns the Returns the Returns the SchemaMapping of a given name. Returns the SchemaMapping of a given name. Lists all ID mapping jobs for a given workflow. Lists all ID mapping jobs for a given workflow. Returns a list of all the Returns a list of all the Returns a list of all ID namespaces. Returns a list of all ID namespaces. Lists all jobs for a given workflow. Lists all jobs for a given workflow. 
Returns a list of all the Returns a list of all the Returns a list of all the Returns a list of all the Returns a list of all the Returns a list of all the Displays the tags associated with an Entity Resolution resource. In Entity Resolution, Displays the tags associated with an Entity Resolution resource. In Entity Resolution, The Amazon Resource Name (ARN) of the Customer Profiles domain where the matched output will be sent. The Amazon Resource Name (ARN) of the Customer Profiles object type that defines the structure for the matched customer data. Specifies the configuration for integrating with Customer Profiles. This configuration enables Entity Resolution to send matched output directly to Customer Profiles instead of Amazon S3, creating a unified customer view by automatically updating customer profiles based on match clusters. The S3 path to which Entity Resolution will write the output table. Customer KMS ARN for encryption at rest. If not provided, the system will use an Entity Resolution managed KMS key. The S3 path to which Entity Resolution will write the output table. The output source for the ID mapping workflow. The S3 path to which Entity Resolution will write the output table. Customer KMS ARN for encryption at rest. If not provided, the system will use an Entity Resolution managed KMS key. The S3 path to which Entity Resolution will write the output table. A list of Normalizes the attributes defined in the schema in the input data. For example, if an attribute has an Specifies the Customer Profiles integration configuration for sending matched output directly to Customer Profiles. When configured, Entity Resolution automatically creates and updates customer profiles based on match clusters, eliminating the need for manual Amazon S3 integration setup. A list of This operation aborts a multipart upload identified by the upload ID. 
After the Abort Multipart Upload request succeeds, you cannot upload any more parts to the multipart upload or complete the multipart upload. Aborting a completed upload fails. However, aborting an already-aborted upload will succeed, for a short time. For more information about uploading a part and completing a multipart upload, see UploadMultipartPart and CompleteMultipartUpload. This operation is idempotent. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and underlying REST API, see Working with Archives in Amazon S3 Glacier and Abort Multipart Upload in the Amazon Glacier Developer Guide. This operation aborts a multipart upload identified by the upload ID. After the Abort Multipart Upload request succeeds, you cannot upload any more parts to the multipart upload or complete the multipart upload. Aborting a completed upload fails. However, aborting an already-aborted upload will succeed, for a short time. For more information about uploading a part and completing a multipart upload, see UploadMultipartPart and CompleteMultipartUpload. This operation is idempotent. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and underlying REST API, see Working with Archives in Amazon Glacier and Abort Multipart Upload in the Amazon Glacier Developer Guide. 
This operation aborts the vault locking process if the vault lock is not in the A vault lock is put into the This operation is idempotent. You can successfully invoke this operation multiple times, if the vault lock is in the This operation adds the specified tags to a vault. Each tag is composed of a key and a value. Each vault can have up to 10 tags. If your request would cause the tag limit for the vault to be exceeded, the operation throws the This operation adds the specified tags to a vault. Each tag is composed of a key and a value. Each vault can have up to 10 tags. If your request would cause the tag limit for the vault to be exceeded, the operation throws the You call this operation to inform Amazon S3 Glacier (Glacier) that all the archive parts have been uploaded and that Glacier can now assemble the archive from the uploaded parts. After assembling and saving the archive to the vault, Glacier returns the URI path of the newly created archive resource. Using the URI path, you can then access the archive. After you upload an archive, you should save the archive ID returned to retrieve the archive at a later point. You can also get the vault inventory to obtain a list of archive IDs in a vault. For more information, see InitiateJob. In the request, you must include the computed SHA256 tree hash of the entire archive you have uploaded. For information about computing a SHA256 tree hash, see Computing Checksums. On the server side, Glacier also constructs the SHA256 tree hash of the assembled archive. If the values match, Glacier saves the archive to the vault; otherwise, it returns an error, and the operation fails. The ListParts operation returns a list of parts uploaded for a specific multipart upload. It includes checksum information for each uploaded part that can be used to debug a bad checksum issue. 
Additionally, Glacier checks for any missing content ranges when assembling the archive; if missing content ranges are found, Glacier returns an error and the operation fails. Complete Multipart Upload is an idempotent operation. After your first successful complete multipart upload, if you call the operation again within a short period, the operation will succeed and return the same archive ID. This is useful in the event you experience a network issue that causes an aborted connection or receive a 500 server error, in which case you can repeat your Complete Multipart Upload request and get the same archive ID without creating duplicate archives. Note, however, that after the multipart upload completes, you cannot call the List Parts operation and the multipart upload will not appear in the List Multipart Uploads response, even if idempotent complete is possible. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and underlying REST API, see Uploading Large Archives in Parts (Multipart Upload) and Complete Multipart Upload in the Amazon Glacier Developer Guide. You call this operation to inform Amazon Glacier (Glacier) that all the archive parts have been uploaded and that Glacier can now assemble the archive from the uploaded parts. After assembling and saving the archive to the vault, Glacier returns the URI path of the newly created archive resource. Using the URI path, you can then access the archive. After you upload an archive, you should save the archive ID returned to retrieve the archive at a later point. You can also get the vault inventory to obtain a list of archive IDs in a vault. For more information, see InitiateJob. 
In the request, you must include the computed SHA256 tree hash of the entire archive you have uploaded. For information about computing a SHA256 tree hash, see Computing Checksums. On the server side, Glacier also constructs the SHA256 tree hash of the assembled archive. If the values match, Glacier saves the archive to the vault; otherwise, it returns an error, and the operation fails. The ListParts operation returns a list of parts uploaded for a specific multipart upload. It includes checksum information for each uploaded part that can be used to debug a bad checksum issue. Additionally, Glacier checks for any missing content ranges when assembling the archive; if missing content ranges are found, Glacier returns an error and the operation fails. Complete Multipart Upload is an idempotent operation. After your first successful complete multipart upload, if you call the operation again within a short period, the operation will succeed and return the same archive ID. This is useful in the event you experience a network issue that causes an aborted connection or receive a 500 server error, in which case you can repeat your Complete Multipart Upload request and get the same archive ID without creating duplicate archives. Note, however, that after the multipart upload completes, you cannot call the List Parts operation and the multipart upload will not appear in the List Multipart Uploads response, even if idempotent complete is possible. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and underlying REST API, see Uploading Large Archives in Parts (Multipart Upload) and Complete Multipart Upload in the Amazon Glacier Developer Guide. 
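The tree hash required above is the scheme Glacier documents under Computing Checksums: SHA-256 over each 1 MiB chunk of the payload, then pairwise SHA-256 of concatenated digests level by level until a single root remains, with an odd trailing digest promoted to the next level unchanged. A minimal sketch:

```python
import hashlib

MEBIBYTE = 1024 * 1024

def tree_hash(data: bytes) -> str:
    """Compute the SHA-256 tree hash of an archive payload, as hex."""
    # Leaf level: SHA-256 of each 1 MiB chunk (an empty payload hashes once).
    chunks = [data[i:i + MEBIBYTE] for i in range(0, len(data), MEBIBYTE)] or [b""]
    level = [hashlib.sha256(c).digest() for c in chunks]
    # Combine pairwise until one root digest remains; an odd trailing digest
    # carries up to the next level unchanged.
    while len(level) > 1:
        nxt = [hashlib.sha256(level[i] + level[i + 1]).digest()
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0].hex()
```

For payloads of 1 MiB or less, the tree hash is simply the SHA-256 of the payload.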
This operation completes the vault locking process by transitioning the vault lock from the This operation is idempotent. This request is always successful if the vault lock is in the If an invalid lock ID is passed in the request when the vault lock is in the This operation creates a new vault with the specified name. The name of the vault must be unique within a region for an AWS account. You can create up to 1,000 vaults per account. If you need to create more vaults, contact Amazon S3 Glacier. You must use the following guidelines when naming a vault. Names can be between 1 and 255 characters long. Allowed characters are a-z, A-Z, 0-9, '_' (underscore), '-' (hyphen), and '.' (period). This operation is idempotent. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and underlying REST API, see Creating a Vault in Amazon Glacier and Create Vault in the Amazon Glacier Developer Guide. This operation creates a new vault with the specified name. The name of the vault must be unique within a region for an AWS account. You can create up to 1,000 vaults per account. If you need to create more vaults, contact Amazon Glacier. You must use the following guidelines when naming a vault. Names can be between 1 and 255 characters long. Allowed characters are a-z, A-Z, 0-9, '_' (underscore), '-' (hyphen), and '.' (period). This operation is idempotent. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). 
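The vault naming rules stated above (1 to 255 characters drawn from a-z, A-Z, 0-9, underscore, hyphen, and period) reduce to a single pattern check. An illustrative validator:

```python
import re

# One regex capturing the documented rules: length 1-255, characters limited
# to letters, digits, underscore, hyphen, and period.
VAULT_NAME = re.compile(r"[A-Za-z0-9_.\-]{1,255}")

def is_valid_vault_name(name: str) -> bool:
    """Return True if the name satisfies the documented vault naming rules."""
    return VAULT_NAME.fullmatch(name) is not None
```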
For conceptual information and underlying REST API, see Creating a Vault in Amazon Glacier and Create Vault in the Amazon Glacier Developer Guide. This operation deletes an archive from a vault. Subsequent requests to initiate a retrieval of this archive will fail. Archive retrievals that are in progress for this archive ID may or may not succeed according to the following scenarios: If the archive retrieval job is actively preparing the data for download when Amazon S3 Glacier receives the delete archive request, the archival retrieval operation might fail. If the archive retrieval job has successfully prepared the archive for download when Amazon S3 Glacier receives the delete archive request, you will be able to download the output. This operation is idempotent. Attempting to delete an already-deleted archive does not result in an error. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and underlying REST API, see Deleting an Archive in Amazon Glacier and Delete Archive in the Amazon Glacier Developer Guide. This operation deletes an archive from a vault. Subsequent requests to initiate a retrieval of this archive will fail. Archive retrievals that are in progress for this archive ID may or may not succeed according to the following scenarios: If the archive retrieval job is actively preparing the data for download when Amazon Glacier receives the delete archive request, the archival retrieval operation might fail. If the archive retrieval job has successfully prepared the archive for download when Amazon Glacier receives the delete archive request, you will be able to download the output. This operation is idempotent. 
Attempting to delete an already-deleted archive does not result in an error. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and underlying REST API, see Deleting an Archive in Amazon Glacier and Delete Archive in the Amazon Glacier Developer Guide. This operation deletes a vault. Amazon S3 Glacier will delete a vault only if there are no archives in the vault as of the last inventory and there have been no writes to the vault since the last inventory. If either of these conditions is not satisfied, the vault deletion fails (that is, the vault is not removed) and Amazon S3 Glacier returns an error. You can use DescribeVault to return the number of archives in a vault, and you can use Initiate a Job (POST jobs) to initiate a new inventory retrieval for a vault. The inventory contains the archive IDs you use to delete archives using Delete Archive (DELETE archive). This operation is idempotent. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and underlying REST API, see Deleting a Vault in Amazon Glacier and Delete Vault in the Amazon S3 Glacier Developer Guide. This operation deletes a vault. Amazon Glacier will delete a vault only if there are no archives in the vault as of the last inventory and there have been no writes to the vault since the last inventory. 
If either of these conditions is not satisfied, the vault deletion fails (that is, the vault is not removed) and Amazon Glacier returns an error. You can use DescribeVault to return the number of archives in a vault, and you can use Initiate a Job (POST jobs) to initiate a new inventory retrieval for a vault. The inventory contains the archive IDs you use to delete archives using Delete Archive (DELETE archive). This operation is idempotent. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and underlying REST API, see Deleting a Vault in Amazon Glacier and Delete Vault in the Amazon Glacier Developer Guide. This operation deletes the access policy associated with the specified vault. The operation is eventually consistent; that is, it might take some time for Amazon S3 Glacier to completely remove the access policy, and you might still see the effect of the policy for a short time after you send the delete request. This operation is idempotent. You can invoke delete multiple times, even if there is no policy associated with the vault. For more information about vault access policies, see Amazon Glacier Access Control with Vault Access Policies. This operation deletes the access policy associated with the specified vault. The operation is eventually consistent; that is, it might take some time for Amazon Glacier to completely remove the access policy, and you might still see the effect of the policy for a short time after you send the delete request. This operation is idempotent. You can invoke delete multiple times, even if there is no policy associated with the vault. 
For more information about vault access policies, see Amazon Glacier Access Control with Vault Access Policies. This operation deletes the notification configuration set for a vault. The operation is eventually consistent; that is, it might take some time for Amazon S3 Glacier to completely disable the notifications and you might still receive some notifications for a short time after you send the delete request. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and underlying REST API, see Configuring Vault Notifications in Amazon S3 Glacier and Delete Vault Notification Configuration in the Amazon S3 Glacier Developer Guide. This operation deletes the notification configuration set for a vault. The operation is eventually consistent; that is, it might take some time for Amazon Glacier to completely disable the notifications and you might still receive some notifications for a short time after you send the delete request. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and underlying REST API, see Configuring Vault Notifications in Amazon Glacier and Delete Vault Notification Configuration in the Amazon Glacier Developer Guide. 
This operation returns information about a job you previously initiated, including the job initiation date, the user who initiated the job, the job status code/message and the Amazon SNS topic to notify after Amazon S3 Glacier (Glacier) completes the job. For more information about initiating a job, see InitiateJob. This operation enables you to check the status of your job. However, it is strongly recommended that you set up an Amazon SNS topic and specify it in your initiate job request so that Glacier can notify the topic after it completes the job. A job ID will not expire for at least 24 hours after Glacier completes the job. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For more information about using this operation, see the documentation for the underlying REST API Describe Job in the Amazon Glacier Developer Guide. This operation returns information about a job you previously initiated, including the job initiation date, the user who initiated the job, the job status code/message and the Amazon SNS topic to notify after Amazon Glacier (Glacier) completes the job. For more information about initiating a job, see InitiateJob. This operation enables you to check the status of your job. However, it is strongly recommended that you set up an Amazon SNS topic and specify it in your initiate job request so that Glacier can notify the topic after it completes the job. A job ID will not expire for at least 24 hours after Glacier completes the job. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. 
For more information, see Access Control Using AWS Identity and Access Management (IAM). For more information about using this operation, see the documentation for the underlying REST API Describe Job in the Amazon Glacier Developer Guide. This operation returns information about a vault, including the vault's Amazon Resource Name (ARN), the date the vault was created, the number of archives it contains, and the total size of all the archives in the vault. The number of archives and their total size are as of the last inventory generation. This means that if you add or remove an archive from a vault, and then immediately use Describe Vault, the change in contents will not be immediately reflected. If you want to retrieve the latest inventory of the vault, use InitiateJob. Amazon S3 Glacier generates vault inventories approximately daily. For more information, see Downloading a Vault Inventory in Amazon S3 Glacier. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and underlying REST API, see Retrieving Vault Metadata in Amazon S3 Glacier and Describe Vault in the Amazon Glacier Developer Guide. This operation returns information about a vault, including the vault's Amazon Resource Name (ARN), the date the vault was created, the number of archives it contains, and the total size of all the archives in the vault. The number of archives and their total size are as of the last inventory generation. This means that if you add or remove an archive from a vault, and then immediately use Describe Vault, the change in contents will not be immediately reflected. If you want to retrieve the latest inventory of the vault, use InitiateJob. 
This operation returns the current data retrieval policy for the account and region specified in the GET request. For more information about data retrieval policies, see Amazon Glacier Data Retrieval Policies.

This operation downloads the output of the job you initiated using InitiateJob. Depending on the job type you specified when you initiated the job, the output will be either the content of an archive or a vault inventory. You can download all the job output or download a portion of the output by specifying a byte range. In the case of an archive retrieval job, depending on the byte range you specify, Amazon S3 Glacier (Glacier) returns the checksum for the portion of the data. To ensure the portion you downloaded is the correct data, compute the checksum on the client, verify that the values match, and verify that the size is what you expected. For both archive and inventory retrieval jobs, you should also verify the downloaded size against the size returned in the headers from the Get Job Output response. If you download a portion of the output, the expected size is based on the range of bytes you specified. A job ID does not expire for at least 24 hours after Glacier completes the job. That is, you can download the job output within the 24-hour period after Glacier completes the job. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and the underlying REST API, see Downloading a Vault Inventory, Downloading an Archive, and Get Job Output in the Amazon Glacier Developer Guide.
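The size check described above can be sketched as follows. This is an illustrative helper, not part of any SDK; it assumes the standard HTTP-style inclusive range format `bytes=start-end`:

```python
# Hypothetical helper: given an inclusive HTTP-style byte range such as
# "bytes=0-1048575", compute how many bytes a ranged Get Job Output
# response should contain, so the downloaded size can be verified.
def expected_size(byte_range: str) -> int:
    """Return the expected payload size for an inclusive byte range."""
    start, end = byte_range.split("=")[1].split("-")
    return int(end) - int(start) + 1
```

For instance, a range covering the first 1,048,576 bytes yields an expected size of 1,048,576, which you would compare against the length of the downloaded payload.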
This operation retrieves the access policy set on the vault.

This operation retrieves the following attributes from the lock policy subresource set on the specified vault: the vault lock policy set on the vault; the state of the vault lock, which is either in progress or locked; when the lock ID expires (the lock ID is used to complete the vault locking process); and when the vault lock was initiated and put into the in-progress state. A vault lock is put into the in-progress state by calling InitiateVaultLock. If there is no vault lock policy set on the vault, the operation returns an error.

This operation retrieves the notification configuration set on the vault. For information about setting a notification configuration on a vault, see SetVaultNotifications. If a notification configuration for a vault is not set, the operation returns an error. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and underlying REST API, see Configuring Vault Notifications in Amazon S3 Glacier and Get Vault Notification Configuration in the Amazon Glacier Developer Guide.

This operation initiates a job of the specified type, which can be a select, an archival retrieval, or a vault retrieval. For more information about using this operation, see the documentation for the underlying REST API Initiate a Job.

This operation initiates a multipart upload. Amazon S3 Glacier creates a multipart upload resource and returns its ID in the response. The multipart upload ID is used in subsequent requests to upload parts of an archive (see UploadMultipartPart). When you initiate a multipart upload, you specify the part size in number of bytes. The part size must be a megabyte (1024 KB) multiplied by a power of 2; for example, 1048576 (1 MB), 2097152 (2 MB), 4194304 (4 MB), 8388608 (8 MB), and so on. The minimum allowable part size is 1 MB, and the maximum is 4 GB. Every part you upload to this resource (see UploadMultipartPart), except the last one, must have the same size. The last one can be the same size or smaller. For example, suppose you want to upload a 16.2 MB file. If you initiate the multipart upload with a part size of 4 MB, you will upload four parts of 4 MB each and one part of 0.2 MB. You don't need to know the size of the archive when you start a multipart upload because Amazon S3 Glacier does not require you to specify the overall archive size. After you complete the multipart upload, Amazon S3 Glacier (Glacier) removes the multipart upload resource referenced by the ID. Glacier also removes the multipart upload resource if you cancel the multipart upload, or it may be removed if there is no activity for a period of 24 hours. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and underlying REST API, see Uploading Large Archives in Parts (Multipart Upload) and Initiate Multipart Upload in the Amazon Glacier Developer Guide.

This operation initiates the vault locking process by doing the following: installing a vault lock policy on the specified vault; setting the lock state of the vault lock to in progress; and returning a lock ID, which is used to complete the vault locking process. You can set one vault lock policy for each vault, and this policy can be up to 20 KB in size. For more information about vault lock policies, see Amazon Glacier Access Control with Vault Lock Policies. You must complete the vault locking process within 24 hours after the vault lock enters the in-progress state. After a vault lock is in the locked state, you cannot initiate a new vault lock for the vault. You can abort the vault locking process by calling AbortVaultLock. You can get the state of the vault lock by calling GetVaultLock. For more information about the vault locking process, see Amazon Glacier Vault Lock.
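The part-size constraint described above (a megabyte multiplied by a power of 2, between 1 MB and 4 GB) can be checked locally. This is a minimal sketch; the function name is illustrative, not an SDK API:

```python
# Hypothetical validator for the multipart part-size rule: a valid size is
# 1 MiB (1048576 bytes) times a power of 2, between 1 MiB and 4 GiB inclusive.
MIB = 1024 * 1024
GIB = 1024 * MIB

def is_valid_part_size(size: int) -> bool:
    """True if `size` is a megabyte multiplied by a power of two, 1 MiB-4 GiB."""
    if size < MIB or size > 4 * GIB or size % MIB != 0:
        return False
    multiplier = size // MIB
    # A power of two has exactly one bit set.
    return multiplier & (multiplier - 1) == 0
```

For example, 4194304 (4 MB) is valid, while 3145728 (3 MB) is rejected because 3 is not a power of two.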
If this operation is called when the vault lock is already in the in-progress state, the operation returns an error.

This operation lists jobs for a vault, including jobs that are in progress and jobs that have recently finished. The List Jobs operation returns a list of these jobs sorted by job initiation time. Amazon Glacier retains recently completed jobs for a period before deleting them; however, it eventually removes completed jobs. The output of completed jobs can be retrieved. Retaining completed jobs for a period of time after they have completed enables you to get a job output in the event you miss the job completion notification or your first attempt to download it fails. For example, suppose you start an archive retrieval job to download an archive. After the job completes, you start to download the archive but encounter a network error. In this scenario, you can retry and download the archive while the job exists. The List Jobs operation supports pagination. You should always check the response for a marker at which to continue the list; if there are no more items, the marker is null. You can set a maximum limit for the number of jobs returned in the response by specifying a limit in the request. Additionally, you can filter the jobs list returned by specifying optional status and completion filters. For more information about using this operation, see the documentation for the underlying REST API List Jobs.

This operation lists in-progress multipart uploads for the specified vault. An in-progress multipart upload is a multipart upload that has been initiated by an InitiateMultipartUpload request, but has not yet been completed or aborted. The list returned in the List Multipart Upload response has no guaranteed order. The List Multipart Uploads operation supports pagination. By default, this operation returns up to 50 multipart uploads in the response. You should always check the response for a marker at which to continue the list; if there are no more items, the marker is null. Note the difference between this operation and listing parts (ListParts). The List Multipart Uploads operation lists all multipart uploads for a vault and does not require a multipart upload ID. The List Parts operation requires a multipart upload ID since parts are associated with a single upload. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and the underlying REST API, see Working with Archives in Amazon S3 Glacier and List Multipart Uploads in the Amazon Glacier Developer Guide.

This operation lists the parts of an archive that have been uploaded in a specific multipart upload.
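The marker-based pagination pattern used by these list operations can be sketched generically. Here `fetch_page` stands in for a List Jobs, List Multipart Uploads, or List Parts call; it is a stub, not a real SDK method, and the page shape (`items`/`marker` keys) is illustrative:

```python
# Hypothetical sketch of marker-based pagination: each page carries items
# plus a marker; a missing/None marker means the listing is complete.
def list_all(fetch_page):
    """Collect every item by following the pagination marker."""
    items, marker = [], None
    while True:
        page = fetch_page(marker)
        items.extend(page["items"])
        marker = page.get("marker")
        if marker is None:
            break
    return items

# Usage with a fake two-page source standing in for the service.
pages = {None: {"items": [1, 2], "marker": "m1"}, "m1": {"items": [3]}}
result = list_all(lambda m: pages[m])
```

The important habit the docs stress is checking the marker on every response rather than assuming one request returned everything.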
You can make this request at any time during an in-progress multipart upload before you complete the upload (see CompleteMultipartUpload). List Parts returns an error for completed uploads. The list returned in the List Parts response is sorted by part range. The List Parts operation supports pagination. By default, this operation returns up to 50 uploaded parts in the response. You should always check the response for a marker at which to continue the list; if there are no more items, the marker is null. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and the underlying REST API, see Working with Archives in Amazon S3 Glacier and List Parts in the Amazon Glacier Developer Guide.

This operation lists the provisioned capacity units for the specified AWS account.
This operation lists all the tags attached to a vault. The operation returns an empty map if there are no tags. For more information about tags, see Tagging Amazon S3 Glacier Resources.

This operation lists all vaults owned by the calling user's account. The list returned in the response is ASCII-sorted by vault name. By default, this operation returns up to 10 items. If there are more vaults to list, the response includes a marker at which to continue the list. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and underlying REST API, see Retrieving Vault Metadata in Amazon S3 Glacier and List Vaults in the Amazon Glacier Developer Guide.

This operation purchases a provisioned capacity unit for an AWS account.

This operation removes one or more tags from the set of tags attached to a vault. For more information about tags, see Tagging Amazon S3 Glacier Resources. This operation is idempotent. The operation will be successful even if there are no tags attached to the vault.

This operation sets and then enacts a data retrieval policy in the region specified in the PUT request. You can set one policy per region for an AWS account. The policy is enacted within a few minutes of a successful PUT operation. The set policy operation does not affect retrieval jobs that were in progress before the policy was enacted. For more information about data retrieval policies, see Amazon Glacier Data Retrieval Policies.

This operation configures an access policy for a vault and will overwrite an existing policy. To configure a vault access policy, send a PUT request to the access policy subresource of the vault.

This operation configures notifications that will be sent when specific events happen to a vault. By default, you don't get any notifications. To configure vault notifications, send a PUT request to the notification configuration subresource of the vault. Amazon SNS topics must grant permission to the vault to be allowed to publish notifications to the topic. You can configure a vault to publish a notification for the following vault events: ArchiveRetrievalCompleted - This event occurs when a job that was initiated for an archive retrieval is completed (InitiateJob). The status of the completed job can be "Succeeded" or "Failed". The notification sent to the SNS topic is the same output as returned from DescribeJob. InventoryRetrievalCompleted - This event occurs when a job that was initiated for an inventory retrieval is completed (InitiateJob). The status of the completed job can be "Succeeded" or "Failed". The notification sent to the SNS topic is the same output as returned from DescribeJob. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and underlying REST API, see Configuring Vault Notifications in Amazon S3 Glacier and Set Vault Notification Configuration in the Amazon Glacier Developer Guide.
This operation adds an archive to a vault. This is a synchronous operation, and for a successful upload, your data is durably persisted. Amazon S3 Glacier returns the archive ID in the response. You must use the archive ID to access your data in Amazon S3 Glacier. After you upload an archive, you should save the archive ID returned so that you can retrieve or delete the archive later. Besides saving the archive ID, you can also index it and give it a friendly name to allow for better searching. You can also use the optional archive description field to specify how the archive is referred to in an external index of archives, such as you might create in Amazon DynamoDB. You can also get the vault inventory to obtain a list of archive IDs in a vault. For more information, see InitiateJob. You must provide a SHA256 tree hash of the data you are uploading. For information about computing a SHA256 tree hash, see Computing Checksums. You can optionally specify an archive description of up to 1,024 printable ASCII characters. You can get the archive description when you either retrieve the archive or get the vault inventory. For more information, see InitiateJob. Amazon S3 Glacier does not interpret the description in any way. An archive description does not need to be unique. You cannot use the description to retrieve or sort the archive list. Archives are immutable. After you upload an archive, you cannot edit the archive or its description. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and underlying REST API, see Uploading an Archive in Amazon Glacier and Upload Archive in the Amazon Glacier Developer Guide.
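The SHA256 tree hash that uploads must provide is documented in Computing Checksums as hashing the data in 1 MiB chunks and then repeatedly hashing concatenated pairs of digests up to a single root. A minimal sketch of that computation, assuming that chunking scheme:

```python
import hashlib

MIB = 1024 * 1024

def tree_hash(data: bytes) -> str:
    """SHA256 tree hash sketch: hash each 1 MiB chunk, then repeatedly
    hash concatenated pairs of digests; an odd digest carries up unchanged."""
    chunks = [data[i:i + MIB] for i in range(0, len(data), MIB)] or [b""]
    level = [hashlib.sha256(c).digest() for c in chunks]
    while len(level) > 1:
        pairs = [level[i:i + 2] for i in range(0, len(level), 2)]
        level = [hashlib.sha256(b"".join(p)).digest() if len(p) == 2 else p[0]
                 for p in pairs]
    return level[0].hex()
```

For data smaller than one chunk, the tree hash reduces to a plain SHA256 digest, which makes the sketch easy to sanity-check locally.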
This operation uploads a part of an archive. You can upload archive parts in any order. You can also upload them in parallel. You can upload up to 10,000 parts for a multipart upload. Amazon S3 Glacier rejects your upload part request if any of the following conditions is true: SHA256 tree hash does not match - To ensure that part data is not corrupted in transmission, you compute a SHA256 tree hash of the part and include it in your request. Upon receiving the part data, Amazon S3 Glacier also computes a SHA256 tree hash. If these hash values don't match, the operation fails. For information about computing a SHA256 tree hash, see Computing Checksums. Part size does not match - The size of each part except the last must match the size specified in the corresponding InitiateMultipartUpload request. The size of the last part must be the same size as, or smaller than, the specified size. If you upload a part whose size is smaller than the part size you specified in your initiate multipart upload request and that part is not the last part, then the upload part request will succeed. However, the subsequent Complete Multipart Upload request will fail. Range does not align - The byte range value in the request does not align with the part size specified in the corresponding initiate request. For example, if you specify a part size of 4194304 bytes (4 MB), then 0 to 4194303 bytes (4 MB - 1) and 4194304 (4 MB) to 8388607 (8 MB - 1) are valid part ranges. However, if you set a range value of 2 MB to 6 MB, the range does not align with the part size and the upload will fail. This operation is idempotent. If you upload the same part multiple times, the data included in the most recent request overwrites the previously uploaded data. An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM). For conceptual information and underlying REST API, see Uploading Large Archives in Parts (Multipart Upload) and Upload Part in the Amazon Glacier Developer Guide.
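The range-alignment rule above can be checked before sending a part. This is an illustrative helper, not an SDK API: for a part size P, a range is aligned when its start is a multiple of P and its length is at most P (shorter is only valid for the final part):

```python
# Hypothetical alignment check for multipart upload byte ranges, using
# inclusive [start, end] offsets as in the 4 MB examples above.
def range_aligns(start: int, end: int, part_size: int) -> bool:
    """True if the inclusive byte range [start, end] aligns with part_size."""
    length = end - start + 1
    return start % part_size == 0 and 0 < length <= part_size
```

With a 4 MB part size, 0-4194303 and 4194304-8388607 align, while the 2 MB-6 MB range is rejected because its start is not a multiple of the part size.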
Provides options to abort a multipart upload identified by the upload ID. For information about the underlying REST API, see Abort Multipart Upload. For conceptual information, see Working with Archives in Amazon S3 Glacier.

The checksum of the archive computed by Amazon S3 Glacier.

Contains the Amazon S3 Glacier response to your request. For information about the underlying REST API, see Upload Archive. For conceptual information, see Working with Archives in Amazon S3 Glacier.

The SHA256 tree hash of the entire archive. It is the tree hash of the SHA256 tree hashes of the individual parts. If the value you specify in the request does not match the SHA256 tree hash of the final assembled archive as computed by Amazon S3 Glacier (Glacier), Glacier returns an error and the request fails.

Provides options to complete a multipart upload operation. This informs Amazon S3 Glacier that all the archive parts have been uploaded and Amazon S3 Glacier (Glacier) can now assemble the archive from the uploaded parts. After assembling and saving the archive to the vault, Glacier returns the URI path of the newly created archive resource.

Contains the Amazon S3 Glacier response to your request.

Provides options for deleting an archive from an Amazon S3 Glacier vault.

Provides options for deleting a vault from Amazon S3 Glacier.

The Universal Coordinated Time (UTC) date when Amazon S3 Glacier completed the last vault inventory. This value should be a string in the ISO 8601 date format.

Total size, in bytes, of the archives in the vault as of the last inventory date. This field will return null if an inventory has not yet run on the vault, for example if you just created the vault.

Contains the Amazon S3 Glacier response to your request.
Contains the returned data retrieval policy in JSON format. Contains the Amazon S3 Glacier response to the The range of bytes to retrieve from the output. For example, if you want to download the first 1,048,576 bytes, specify the range as If the job output is large, then you can use a range to retrieve a portion of the output. This allows you to download the entire output in smaller chunks of bytes. For example, suppose you have 1 GB of job output you want to download and you decide to download 128 MB chunks of data at a time, which is a total of eight Get Job Output requests. You use the following process to download the job output: 1. Download a 128 MB chunk of output by specifying the appropriate byte range. 2. Verify that all 128 MB of data was received. Along with the data, the response includes a SHA256 tree hash of the payload. You compute the checksum of the payload on the client and compare it with the checksum you received in the response to ensure you received all the expected data. 3. Repeat steps 1 and 2 for all the eight 128 MB chunks of output data, each time specifying the appropriate byte range. 4. After downloading all the parts of the job output, you have a list of eight checksum values. Compute the tree hash of these values to find the checksum of the entire output. 5. Using the DescribeJob API, obtain job information of the job that provided you the output. The response includes the checksum of the entire archive stored in Amazon S3 Glacier. You compare this value with the checksum you computed to ensure you have downloaded the entire archive content with no errors. Provides options for downloading output of an Amazon S3 Glacier job. The range of bytes returned by Amazon S3 Glacier. If only partial output is downloaded, the response provides the range of bytes Amazon S3 Glacier returned. For example, bytes 0-1048575/8388608 returns the first 1 MB from 8 MB. Contains the Amazon S3 Glacier response to your request.
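The eight-request download process above comes down to computing aligned byte ranges. A small illustrative helper (the name `chunk_ranges` is my own, not an SDK function) that produces the offsets and a `bytes=first-last` Range header value for each Get Job Output request:

```python
def chunk_ranges(total_bytes: int, chunk_bytes: int):
    """Yield (first, last, range_header) for each chunked download request
    needed to fetch total_bytes in pieces of at most chunk_bytes."""
    for start in range(0, total_bytes, chunk_bytes):
        end = min(start + chunk_bytes, total_bytes) - 1
        yield start, end, f"bytes={start}-{end}"
```

With 1 GB of job output and 128 MB chunks, `list(chunk_ranges(1024**3, 128 * 1024 * 1024))` yields exactly eight ranges, matching the example in the text.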
The UTC date and time at which the vault lock was put into the Contains the Amazon S3 Glacier response to your request. Returns the notification configuration set on the vault. Contains the Amazon S3 Glacier response to your request. Provides options for specifying job information. Provides options for initiating an Amazon S3 Glacier job. Contains the Amazon S3 Glacier response to your request. Provides options for initiating a multipart upload to an Amazon S3 Glacier vault. The relative URI path of the multipart upload ID Amazon S3 Glacier created. The Amazon S3 Glacier response to your request. Contains the Amazon S3 Glacier response to your request. The Amazon SNS topic ARN to which Amazon S3 Glacier sends a notification when the job is completed and the output is ready for you to download. The specified topic publishes the notification to its subscribers. The SNS topic must exist. Provides options for retrieving a job list for an Amazon S3 Glacier vault. An opaque string used for pagination that specifies the job at which the listing of jobs should begin. You get the Contains the Amazon S3 Glacier response to your request.
An opaque string that represents where to continue pagination of the results. You use the marker in a new List Multipart Uploads request to obtain more uploads in the list. If there are no more uploads, this value is Contains the Amazon S3 Glacier response to your request. An opaque string that represents where to continue pagination of the results. You use the marker in a new List Parts request to obtain more parts in the list. If there are no more parts, this value is Contains the Amazon S3 Glacier response to your request. The AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, don't include any hyphens ('-') in the ID. The tags attached to the vault. Each tag is composed of a key and a value. Contains the Amazon S3 Glacier response to your request. The vault ARN at which to continue pagination of the results. You use the marker in another List Vaults request to obtain more vaults in the list. Contains the Amazon S3 Glacier response to your request. The SHA256 tree hash value that Amazon S3 Glacier calculated for the part. This field is never
A list of the part sizes of the multipart upload. The AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, don't include any hyphens ('-') in the ID. Returned if, when uploading an archive, Amazon S3 Glacier times out while receiving the upload. Identifies the range of bytes in the assembled archive that will be uploaded in this part. Amazon S3 Glacier uses this information to assemble the archive in the proper sequence. The format of this header follows RFC 2616. An example header is Content-Range:bytes 0-4194303/*. The SHA256 tree hash that Amazon S3 Glacier computed for the uploaded part. Contains the Amazon S3 Glacier response to your request.
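The Content-Range header format described above (RFC 2616 style, with `*` for a total archive size that is not yet known) can be built with a tiny helper. `content_range` is a hypothetical name for illustration, not an SDK function:

```python
def content_range(first_byte: int, last_byte: int, total=None) -> str:
    """Build a Content-Range header value such as 'bytes 0-4194303/*'.
    total is None when the final assembled archive size is not yet known."""
    total_part = "*" if total is None else str(total)
    return f"bytes {first_byte}-{last_byte}/{total_part}"
```

For example, `content_range(0, 4194303)` returns the header value shown in the text, `bytes 0-4194303/*`.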
A list of one or more events for which Amazon S3 Glacier will send a notification to the specified Amazon SNS topic. Represents a vault's notification configuration. Amazon S3 Glacier (Glacier) is a storage solution for "cold data." Glacier is an extremely low-cost storage service that provides secure, durable, and easy-to-use storage for data backup and archival. With Glacier, customers can store their data cost effectively for months, years, or decades. Glacier also enables customers to offload the administrative burdens of operating and scaling storage to AWS, so they don't have to worry about capacity planning, hardware provisioning, data replication, hardware failure and recovery, or time-consuming hardware migrations. Glacier is a great storage choice when low storage cost is paramount and your data is rarely retrieved. If your application requires fast or frequent access to your data, consider using Amazon S3. For more information, see Amazon Simple Storage Service (Amazon S3). You can store any kind of data in any format. There is no maximum limit on the total amount of data you can store in Glacier. If you are a first-time user of Glacier, we recommend that you begin by reading the following sections in the Amazon S3 Glacier Developer Guide: What is Amazon S3 Glacier - This section of the Developer Guide describes the underlying data model, the operations it supports, and the AWS SDKs that you can use to interact with the service. Getting Started with Amazon S3 Glacier - The Getting Started section walks you through the process of creating a vault, uploading archives, creating jobs to download archives, retrieving the job output, and deleting archives.
Cancels the specified export task. The task must be in the Cancels an active import task and stops importing data from the CloudTrail Lake Event Data Store. Creates an export task so that you can efficiently export data from a log group to an Amazon S3 bucket. When you perform a Exporting log data to S3 buckets that are encrypted by KMS is supported. Exporting log data to Amazon S3 buckets that have S3 Object Lock enabled with a retention period is also supported.
Exporting to S3 buckets that are encrypted with AES-256 is supported. This is an asynchronous call. If all the required information is provided, this operation initiates an export task and responds with the ID of the task. After the task has started, you can use DescribeExportTasks to get the status of the export task. Each account can only have one active ( You can export logs from multiple log groups or multiple time ranges to the same S3 bucket. To separate log data for each export task, specify a prefix to be used as the Amazon S3 key prefix for all exported objects. We recommend that you don't regularly export to Amazon S3 as a way to continuously archive your logs. For that use case, we instead recommend that you use subscriptions. For more information about subscriptions, see Real-time processing of log data with subscriptions. Time-based sorting on chunks of log data inside an exported file is not guaranteed. You can sort the exported log field data by using Linux utilities. Starts an import from a data source to CloudWatch Logs and creates a managed log group as the destination for the imported data. Currently, CloudTrail Event Data Store is the only supported data source. The import task must satisfy the following constraints: The specified source must be in an ACTIVE state. The API caller must have permissions to access the data in the provided source and to perform iam:PassRole on the provided import role which has the same permissions, as described below.
The provided IAM role must trust the "cloudtrail.amazonaws.com" principal and have the following permissions: cloudtrail:GetEventDataStoreData logs:CreateLogGroup logs:CreateLogStream logs:PutResourcePolicy (If source has an associated AWS KMS Key) kms:Decrypt (If source has an associated AWS KMS Key) kms:GenerateDataKey Example IAM policy for provided import role: If the import source has a customer managed key, the "cloudtrail.amazonaws.com" principal needs permissions to perform kms:Decrypt and kms:GenerateDataKey. There can be no more than 3 active imports per account at a given time. The startEventTime must be less than or equal to endEventTime. The data being imported must be within the specified source's retention period. Deletes a log-group level field index policy that was applied to a single log group. The indexing of the log events that happened before you delete the policy will still be used for as many as 30 days to improve CloudWatch Logs Insights queries. If the deleted policy included facet configurations, those facets will no longer be available for interactive exploration in the CloudWatch Logs Insights console for this log group. However, facet data is retained for up to 30 days. You can't use this operation to delete an account-level index policy. Instead, use DeleteAccountPolicy. If you delete a log-group level field index policy and there is an account-level field index policy, in a few minutes the log group begins using that account-wide policy to index new incoming log events. This operation only affects log group-level policies, including any facet configurations, and preserves any data source-based account policies that may apply to the log group. Returns a list of custom and default field indexes which are discovered in log data. For more information about field index policies, see PutIndexPolicy. Gets detailed information about the individual batches within an import task, including their status and any error messages. For CloudTrail Event Data Store sources, a batch refers to a subset of stored events grouped by their eventTime. Lists and describes import tasks, with optional filtering by import status and source ARN. Creates an account-level data protection policy, subscription filter policy, field index policy, transformer policy, or metric extraction policy that applies to all log groups or a subset of log groups in the account. For field index policies, you can configure indexed fields as facets to enable interactive exploration of your logs. Facets provide value distributions and counts for indexed fields in the CloudWatch Logs Insights console without requiring query execution. For more information, see Use facets to group and explore logs. To use this operation, you must be signed on with the correct permissions depending on the type of policy that you are creating.
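Returning to the import-role requirements listed earlier: a minimal sketch of the trust and permissions policy documents might look like the following. This is illustrative only; the `Resource` value is a placeholder, and in practice you would scope it to your actual CloudTrail Event Data Store and log group ARNs.

```python
import json

# Illustrative trust policy: the import role must trust the
# "cloudtrail.amazonaws.com" principal (per the requirements above).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "cloudtrail.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Illustrative permissions policy covering the actions listed above.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "cloudtrail:GetEventDataStoreData",
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutResourcePolicy",
            # The KMS actions are only needed if the source has an
            # associated AWS KMS key:
            "kms:Decrypt",
            "kms:GenerateDataKey",
        ],
        "Resource": "*",  # placeholder; scope this down in practice
    }],
}

print(json.dumps(permissions_policy, indent=2))
```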
To create a data protection policy, you must have the To create a subscription filter policy, you must have the To create a transformer policy, you must have the To create a field index policy, you must have the To configure facets for field index policies, you must have the To create a metric extraction policy, you must have the Data protection policy A data protection policy can help safeguard sensitive data that's ingested by your log groups by auditing and masking the sensitive log data. Each account can have only one account-level data protection policy. Sensitive data is detected and masked when it is ingested into a log group. When you set a data protection policy, log events ingested into the log groups before that time are not masked. If you use By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking. To use the The Subscription filter policy A subscription filter policy sets up a real-time feed of log events from CloudWatch Logs to other Amazon Web Services services. Account-level subscription filter policies apply to both existing log groups and log groups that are created later in this account. Supported destinations are Kinesis Data Streams, Firehose, and Lambda. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format. The following destinations are supported for subscription filters: A Kinesis Data Streams data stream in the same account as the subscription policy, for same-account delivery. A Firehose data stream in the same account as the subscription policy, for same-account delivery. A Lambda function in the same account as the subscription policy, for same-account delivery. A logical destination in a different account created with PutDestination, for cross-account delivery.
Kinesis Data Streams and Firehose are supported as logical destinations. Each account can have one account-level subscription filter policy per Region. If you are updating an existing filter, you must specify the correct name in Transformer policy Creates or updates a log transformer policy for your account. You use log transformers to transform log events into a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that contain relevant, source-specific information. After you have created a transformer, CloudWatch Logs performs this transformation at the time of log ingestion. You can then refer to the transformed versions of the logs during operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters. You can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, log stream name, account ID and Region. A transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events ingested into this log group. For more information about the available processors to use in a transformer, see Processors that you can use. Having log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies. You can create transformers only for the log groups in the Standard log class. You can have one account-level transformer policy that applies to all log groups in the account. 
Or you can create as many as 20 account-level transformer policies that are each scoped to a subset of log groups with the CloudWatch Logs provides default field indexes for all log groups in the Standard log class. Default field indexes are automatically available for the following fields: Default field indexes are in addition to any custom field indexes you define within your policy. Default field indexes are not counted towards your field index quota. You can also set up a transformer at the log-group level. For more information, see PutTransformer. If there is both a log-group level transformer created with Field index policy You can use field index policies to create indexes on fields found in log events in the log group. Creating field indexes can help lower the scan volume for CloudWatch Logs Insights queries that reference those fields, because these queries attempt to skip the processing of log events that are known to not match the indexed field. Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events. Common examples of indexes include request ID, session ID, user IDs, or instance IDs. For more information, see Create field indexes to improve query performance and reduce costs. To find the fields that are in your log group events, use the GetLogGroupFields operation. For example, suppose you have created a field index for Matches of log events to the names of indexed fields are case-sensitive. For example, an indexed field of You can have one account-level field index policy that applies to all log groups in the account.
Or you can create as many as 40 account-level field index policies (20 for log group prefix selection, 20 for data source selection) that are each scoped to a subset of log groups or data sources with the If you create an account-level field index policy in a monitoring account in cross-account observability, the policy is applied only to the monitoring account and not to any source accounts. If you want to create a field index policy for a single log group, you can use PutIndexPolicy instead of Metric extraction policy A metric extraction policy controls whether CloudWatch Metrics can be created through the Embedded Metrics Format (EMF) for log groups in your account. By default, EMF metric creation is enabled for all log groups. You can use metric extraction policies to disable EMF metric creation for your entire account or specific log groups. When a policy disables EMF metric creation for a log group, log events in the EMF format are still ingested, but no CloudWatch Metrics are created from them. Creating a policy disables metrics for AWS features that use EMF to create metrics, such as CloudWatch Container Insights and CloudWatch Application Signals. To prevent turning off those features by accident, we recommend that you exclude the underlying log-groups through a selection-criteria such as Each account can have either one account-level metric extraction policy that applies to all log groups, or up to 5 policies that are each scoped to a subset of log groups with the The selection criteria can be specified in these formats: If you have multiple account-level metric extraction policies with selection criteria, no two of them can have overlapping criteria. 
For example, if you have one policy with selection criteria When using When combining policies with If you have a If you have a Creates an account-level data protection policy, subscription filter policy, field index policy, transformer policy, or metric extraction policy that applies to all log groups, a subset of log groups, or a data source name and type combination in the account. For field index policies, you can configure indexed fields as facets to enable interactive exploration of your logs. Facets provide value distributions and counts for indexed fields in the CloudWatch Logs Insights console without requiring query execution. For more information, see Use facets to group and explore logs. To use this operation, you must be signed on with the correct permissions depending on the type of policy that you are creating. To create a data protection policy, you must have the To create a subscription filter policy, you must have the To create a transformer policy, you must have the To create a field index policy, you must have the To configure facets for field index policies, you must have the To create a metric extraction policy, you must have the Data protection policy A data protection policy can help safeguard sensitive data that's ingested by your log groups by auditing and masking the sensitive log data. Each account can have only one account-level data protection policy. Sensitive data is detected and masked when it is ingested into a log group. When you set a data protection policy, log events ingested into the log groups before that time are not masked. If you use By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking. 
To use the The Subscription filter policy A subscription filter policy sets up a real-time feed of log events from CloudWatch Logs to other Amazon Web Services services. Account-level subscription filter policies apply to both existing log groups and log groups that are created later in this account. Supported destinations are Kinesis Data Streams, Firehose, and Lambda. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format. The following destinations are supported for subscription filters: A Kinesis Data Streams data stream in the same account as the subscription policy, for same-account delivery. A Firehose data stream in the same account as the subscription policy, for same-account delivery. A Lambda function in the same account as the subscription policy, for same-account delivery. A logical destination in a different account created with PutDestination, for cross-account delivery. Kinesis Data Streams and Firehose are supported as logical destinations. Each account can have one account-level subscription filter policy per Region. If you are updating an existing filter, you must specify the correct name in Transformer policy Creates or updates a log transformer policy for your account. You use log transformers to transform log events into a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that contain relevant, source-specific information. After you have created a transformer, CloudWatch Logs performs this transformation at the time of log ingestion. You can then refer to the transformed versions of the logs during operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters. You can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, log stream name, account ID and Region.
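As noted above, log events delivered to subscription-filter destinations are Base64 encoded and GZIP compressed. A minimal decoding sketch using only the standard library (the function name is my own, and the envelope layout is assumed to be the JSON payload the receiving code sees):

```python
import base64
import gzip
import json


def decode_subscription_payload(b64_data: str):
    """Decode a subscription-filter delivery: Base64 decode, GZIP
    decompress, then parse the JSON envelope."""
    return json.loads(gzip.decompress(base64.b64decode(b64_data)))
```

A round trip (`base64.b64encode(gzip.compress(json.dumps(payload).encode()))`) reproduces the wire format for testing the decoder locally.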
A transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events ingested into this log group. For more information about the available processors to use in a transformer, see Processors that you can use. Having log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies. You can create transformers only for the log groups in the Standard log class. You can have one account-level transformer policy that applies to all log groups in the account. Or you can create as many as 20 account-level transformer policies that are each scoped to a subset of log groups with the You can also set up a transformer at the log-group level. For more information, see PutTransformer. If there is both a log-group level transformer created with Field index policy You can use field index policies to create indexes on fields found in log events for a log group or data source name and type combination. Creating field indexes can help lower the scan volume for CloudWatch Logs Insights queries that reference those fields, because these queries attempt to skip the processing of log events that are known to not match the indexed field. Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events. Common examples of indexes include request ID, session ID, user IDs, or instance IDs. For more information, see Create field indexes to improve query performance and reduce costs. To find the fields that are in your log group events, use the GetLogGroupFields operation.
To find the fields for a data source use the GetLogFields operation. For example, suppose you have created a field index for Matches of log events to the names of indexed fields are case-sensitive. For example, an indexed field of You can have one account-level field index policy that applies to all log groups in the account. Or you can create as many as 20 account-level field index policies that are each scoped to a subset of log groups using If you create an account-level field index policy in a monitoring account in cross-account observability, the policy is applied only to the monitoring account and not to any source accounts. CloudWatch Logs provides default field indexes for all log groups in the Standard log class. Default field indexes are automatically available for the following fields: CloudWatch Logs provides default field indexes for certain data source name and type combinations as well. Default field indexes are automatically available for the following data source name and type combinations as identified in the following list: Default field indexes are in addition to any custom field indexes you define within your policy. Default field indexes are not counted towards your field index quota. If you want to create a field index policy for a single log group, you can use PutIndexPolicy instead of Metric extraction policy A metric extraction policy controls whether CloudWatch Metrics can be created through the Embedded Metrics Format (EMF) for log groups in your account. By default, EMF metric creation is enabled for all log groups. You can use metric extraction policies to disable EMF metric creation for your entire account or specific log groups. When a policy disables EMF metric creation for a log group, log events in the EMF format are still ingested, but no CloudWatch Metrics are created from them. 
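A field index policy document like the one described above carries a Fields array naming the fields to index. The field names below are illustrative (the text suggests request IDs and session IDs as good candidates), so treat this as a sketch rather than a verified policy.

```python
import json

# Illustrative field names; good index candidates are fields you query
# often that match only a small fraction of log events.
field_index_policy = {"Fields": ["requestId", "sessionId"]}

# The policy document must contain at least one field index.
policy_document = json.dumps(field_index_policy)
```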
Creating a policy disables metrics for AWS features that use EMF to create metrics, such as CloudWatch Container Insights and CloudWatch Application Signals. To prevent turning off those features by accident, we recommend that you exclude the underlying log groups through selection criteria such as Each account can have either one account-level metric extraction policy that applies to all log groups, or up to 5 policies that are each scoped to a subset of log groups with the The selection criteria can be specified in these formats: If you have multiple account-level metric extraction policies with selection criteria, no two of them can have overlapping criteria. For example, if you have one policy with selection criteria When using When combining policies with If you have a If you have a The ID of the import task to cancel. The ID of the cancelled import task. Statistics about the import progress at the time of cancellation. The final status of the import task. This will be set to CANCELLED. The timestamp when the import task was created, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. The timestamp when the import task was cancelled, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. The ARN of the source to import from. The ARN of the IAM role that grants CloudWatch Logs permission to import from the CloudTrail Lake Event Data Store. Optional filters to constrain the import by CloudTrail event time. Times are specified in Unix timestamp milliseconds. The range of data being imported must be within the specified source's retention period. A unique identifier for the import task. The ARN of the CloudWatch Logs log group created as the destination for the imported events. The timestamp when the import task was created, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. The ID of the import task to get batch information for. Optional filter to list import batches by their status. 
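The account-level policies discussed earlier (subscription filter, transformer, field index, metric extraction) are all configured through a single account-policy API. This is a hedged sketch of the request parameters only: the policy name, the scoped-policy selection-criteria string, and the exact parameter casing are assumptions, and the real call (e.g. `boto3.client("logs").put_account_policy(**request)`) is left out so the snippet stays self-contained.

```python
import json

# Hypothetical request for creating a scoped account-level field index
# policy; parameter names follow the boto3 lowerCamelCase convention but
# should be checked against the current SDK.
request = {
    "policyName": "prod-field-indexes",
    "policyType": "FIELD_INDEX_POLICY",
    "policyDocument": json.dumps({"Fields": ["requestId"]}),
    "scope": "ALL",
    # Assumed selection-criteria syntax, scoping the policy to a subset
    # of log groups (the text allows up to 20 such scoped policies).
    "selectionCriteria": 'LogGroupNamePrefix IN ["prod-"]',
}
```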
Accepts multiple status values: IN_PROGRESS, CANCELLED, COMPLETED and FAILED. The maximum number of import batches to return in the response. Default: 10 The pagination token for the next set of results. The ARN of the source being imported from. The ID of the import task. The list of import batches that match the request filters. The token to use when requesting the next set of results. Not present if there are no additional results to retrieve. Optional filter to describe a specific import task by its ID. Optional filter to list imports by their status. Valid values are IN_PROGRESS, CANCELLED, COMPLETED and FAILED. Optional filter to list imports from a specific source The maximum number of import tasks to return in the response. Default: 50 The pagination token for the next set of results. The list of import tasks that match the request filters. The token to use when requesting the next set of results. Not present if there are no additional results to retrieve. The unique identifier of the import task. The ARN of the CloudTrail Lake Event Data Store being imported from. The current status of the import task. Valid values are IN_PROGRESS, CANCELLED, COMPLETED and FAILED. The ARN of the managed CloudWatch Logs log group where the events are being imported to. Statistics about the import progress The filter criteria used for this import task. The timestamp when the import task was created, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. The timestamp when the import task was last updated, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Error message related to any failed imports An import job to move data from CloudTrail Event Data Store to CloudWatch. The unique identifier of the import batch. The current status of the import batch. Valid values are IN_PROGRESS, CANCELLED, COMPLETED and FAILED. The error message if the batch failed to import. Only present when status is FAILED. 
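The listing operations above return one page of results plus a pagination token, with the token absent once all results are retrieved. A generic loop over such an API might look like this; `describe_imports` is a stand-in for the real client call, and the `"imports"` response key is an assumption based on the field descriptions above.

```python
def paginate(describe_imports, page_size=50):
    """Yield every import task across pages (default page size 50)."""
    token = None
    while True:
        kwargs = {"maxResults": page_size}
        if token is not None:
            kwargs["nextToken"] = token
        page = describe_imports(**kwargs)
        yield from page.get("imports", [])
        token = page.get("nextToken")
        if token is None:  # no token means no more results to retrieve
            break
```

The same shape works for the batch-listing operation; only the response key and page-size default differ.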
A collection of events being imported to CloudWatch The start of the time range for events to import, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. The end of the time range for events to import, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. The filter criteria used for import tasks The total number of bytes that have been imported to the managed log group. Statistics about the import progress A name for the policy. This must be unique within the account. A name for the policy. This must be unique within the account and cannot start with Specify the policy, in JSON. Data protection policy A data protection policy must include two JSON blocks: The first block must include both a The The second block must include both a The For an example data protection policy, see the Examples section on this page. The contents of the two In addition to the two JSON blocks, the The JSON specified in Subscription filter policy A subscription filter policy can include the following attributes in a JSON block: DestinationArn The ARN of the destination to deliver log events to. Supported destinations are: A Kinesis Data Streams data stream in the same account as the subscription policy, for same-account delivery. A Firehose data stream in the same account as the subscription policy, for same-account delivery. A Lambda function in the same account as the subscription policy, for same-account delivery. A logical destination in a different account created with PutDestination, for cross-account delivery. Kinesis Data Streams and Firehose are supported as logical destinations. RoleArn The ARN of an IAM role that grants CloudWatch Logs permissions to deliver ingested log events to the destination stream. You don't need to provide the ARN when you are working with a logical destination for cross-account delivery. FilterPattern A filter pattern for subscribing to a filtered stream of log events. 
Distribution The method used to distribute log data to the destination. By default, log data is grouped by log stream, but the grouping can be set to Transformer policy A transformer policy must include one JSON block with the array of processors and their configurations. For more information about available processors, see Processors that you can use. Field index policy A field index filter policy can include the following attribute in a JSON block: Fields The array of field indexes to create. It must contain at least one field index. The following is an example of an index policy document that creates two indexes, Specify the policy, in JSON. Data protection policy A data protection policy must include two JSON blocks: The first block must include both a The The second block must include both a The For an example data protection policy, see the Examples section on this page. The contents of the two In addition to the two JSON blocks, the The JSON specified in Subscription filter policy A subscription filter policy can include the following attributes in a JSON block: DestinationArn The ARN of the destination to deliver log events to. Supported destinations are: A Kinesis Data Streams data stream in the same account as the subscription policy, for same-account delivery. A Firehose data stream in the same account as the subscription policy, for same-account delivery. A Lambda function in the same account as the subscription policy, for same-account delivery. A logical destination in a different account created with PutDestination, for cross-account delivery. Kinesis Data Streams and Firehose are supported as logical destinations. RoleArn The ARN of an IAM role that grants CloudWatch Logs permissions to deliver ingested log events to the destination stream. You don't need to provide the ARN when you are working with a logical destination for cross-account delivery. FilterPattern A filter pattern for subscribing to a filtered stream of log events. 
Distribution The method used to distribute log data to the destination. By default, log data is grouped by log stream, but the grouping can be set to Transformer policy A transformer policy must include one JSON block with the array of processors and their configurations. For more information about available processors, see Processors that you can use. Field index policy A field index filter policy can include the following attribute in a JSON block: Fields The array of field indexes to create. FieldsV2 The object of field indexes to create along with its type. It must contain at least one field index. The following is an example of an index policy document that creates indexes with different types. You can use Use this parameter to apply the new policy to a subset of log groups in the account. Specifying If If The Using the Use this parameter to apply the new policy to a subset of log groups in the account or a data source name and type combination. Specifying If If If When you specify The Using the Defines the type of log that the source is sending. 
For Amazon Bedrock Agents, the valid values are For Amazon Bedrock Knowledge Bases, the valid value is For Amazon Bedrock AgentCore Runtime, the valid values are For Amazon Bedrock AgentCore Tools, the valid values are For Amazon Bedrock AgentCore Identity, the valid values are For Amazon Bedrock AgentCore Gateway, the valid values are For CloudFront, the valid value is For Amazon CodeWhisperer, the valid value is For Elemental MediaPackage, the valid values are For Elemental MediaTailor, the valid values are For Entity Resolution, the valid value is For IAM Identity Center, the valid value is For Network Load Balancer, the valid value is For PCS, the valid values are For Amazon Web Services RTB Fabric, the valid value is For Amazon Q, the valid values are For Amazon SES mail manager, the valid values are For Amazon WorkMail, the valid values are For Amazon VPC Route Server, the valid value is Defines the type of log that the source is sending. For Amazon Bedrock Agents, the valid values are For Amazon Bedrock Knowledge Bases, the valid value is For Amazon Bedrock AgentCore Runtime, the valid values are For Amazon Bedrock AgentCore Tools, the valid values are For Amazon Bedrock AgentCore Identity, the valid values are For Amazon Bedrock AgentCore Gateway, the valid values are For CloudFront, the valid value is For Amazon CodeWhisperer, the valid value is For Elemental MediaPackage, the valid values are For Elemental MediaTailor, the valid values are For Entity Resolution, the valid value is For IAM Identity Center, the valid value is For Network Firewall Proxy, the valid values are For Network Load Balancer, the valid value is For PCS, the valid values are For Quick Suite, the valid values are For Amazon Web Services RTB Fabric, the valid value is For Amazon Q, the valid values are For Amazon SES mail manager, the valid values are For Amazon WorkMail, the valid values are For Amazon VPC Route Server, the valid value is The index policy document, in JSON format. 
The following is an example of an index policy document that creates two indexes, The policy document must include at least one field index. For more information about the fields that can be included and other restrictions, see Field index syntax and quotas. The index policy document, in JSON format. The following is an example of an index policy document that creates indexes with different types. You can use The policy document must include at least one field index. For more information about the fields that can be included and other restrictions, see Field index syntax and quotas. The setting that indicates what conditioning MediaTailor will perform on ads that the ad decision server (ADS) returns. The HTTP request configuration parameters for the ad decision server. Configuration parameters for customizing HTTP requests sent to the ad decision server (ADS). This allows you to specify the HTTP method, headers, request body, and compression settings for ADS requests. Clip range configuration for the VOD source associated with the program. The setting that indicates what conditioning MediaTailor will perform on ads that the ad decision server (ADS) returns, and what priority MediaTailor uses when inserting ads. The configuration for customizing HTTP requests to the ad decision server (ADS). This includes settings for request method, headers, body content, and compression options. The VOD source's HTTP package configuration settings. The HTTP method to use when making requests to the ad decision server. Supported values are The request body content to send with HTTP requests to the ad decision server. This value is only eligible for Custom HTTP headers to include in requests to the ad decision server. Specify headers as key-value pairs. This value is only eligible for The compression method to apply to requests sent to the ad decision server. 
Supported values are HTTP request configuration parameters that define how MediaTailor communicates with the ad decision server. Insertion Mode controls whether players can use stitched or guided ad insertion. The setting that indicates what conditioning MediaTailor will perform on ads that the ad decision server (ADS) returns, and what priority MediaTailor uses when inserting ads. A playback configuration. For information about MediaTailor configurations, see Working with configurations in AWS Elemental MediaTailor. Indicates the type of traffic shaping used for prefetch traffic shaping and limiting the number of requests to the ADS at one time. Indicates the type of traffic shaping used to limit the number of requests to the ADS at one time. Configuration for spreading ADS traffic across a set window instead of sending ADS requests for all sessions at the same time. The configuration that tells Elemental MediaTailor how many seconds to spread out requests to the ad decision server (ADS). Instead of sending ADS requests for all sessions at the same time, MediaTailor spreads the requests across the amount of time specified in the retrieval window. The configuration for TPS-based traffic shaping that limits the number of requests to the ad decision server (ADS) based on transactions per second instead of time windows. The configuration for TPS-based traffic shaping. This approach limits requests to the ad decision server (ADS) based on transactions per second and concurrent users. A complex type that contains settings governing when MediaTailor prefetches ads, and which dynamic variables that MediaTailor includes in the request to the ad decision server. The setting that indicates what conditioning MediaTailor will perform on ads that the ad decision server (ADS) returns, and what priority MediaTailor uses when inserting ads. The configuration for customizing HTTP requests to the ad decision server (ADS). 
This includes settings for request method, headers, body content, and compression options. The setting that indicates what conditioning MediaTailor will perform on ads that the ad decision server (ADS) returns, and what priority MediaTailor uses when inserting ads. The configuration for customizing HTTP requests to the ad decision server (ADS). This includes settings for request method, headers, body content, and compression options. Indicates the type of traffic shaping used for traffic shaping and limiting the number of requests to the ADS at one time. Indicates the type of traffic shaping used to limit the number of requests to the ADS at one time. Configuration for spreading ADS traffic across a set window instead of sending ADS requests for all sessions at the same time. The configuration that tells Elemental MediaTailor how many seconds to spread out requests to the ad decision server (ADS). Instead of sending ADS requests for all sessions at the same time, MediaTailor spreads the requests across the amount of time specified in the retrieval window. The configuration for TPS-based traffic shaping that limits the number of requests to the ad decision server (ADS) based on transactions per second instead of time windows. The configuration for TPS-based traffic shaping. This approach limits requests to the ad decision server (ADS) based on transactions per second and concurrent users. With recurring prefetch, MediaTailor automatically prefetches ads for every avail that occurs during the retrieval window. The following configurations describe the MediaTailor behavior when prefetching ads for a live event. The amount of time, in seconds, that MediaTailor spreads prefetch requests to the ADS. The configuration that tells Elemental MediaTailor how to spread out requests to the ad decision server (ADS). Instead of sending ADS requests for all sessions at the same time, MediaTailor spreads the requests across the amount of time specified in the retrieval window. 
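The retrieval-window traffic shaping described above spreads ADS requests across a window of time instead of bursting them all at once. A toy model of the effect, not MediaTailor's actual algorithm: with N sessions and a W-second retrieval window, requests go out at roughly N / W per second.

```python
def spread_rate(sessions: int, window_seconds: int) -> float:
    """Approximate ADS requests per second under window-based shaping.

    Toy model only: assumes requests are spread uniformly across the
    retrieval window rather than sent simultaneously.
    """
    return sessions / window_seconds

# 6000 sessions spread over a 300-second window is about 20 requests per
# second, instead of a single burst of 6000 simultaneous ADS requests.
rate = spread_rate(6000, 300)
```

TPS-based shaping, by contrast, caps the request rate directly (transactions per second plus expected concurrent viewers) rather than deriving it from a time window.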
The configuration that tells Elemental MediaTailor how many seconds to spread out requests to the ad decision server (ADS). Instead of sending ADS requests for all sessions at the same time, MediaTailor spreads the requests across the amount of time specified in the retrieval window. The expected peak number of concurrent viewers for your content. MediaTailor uses this value along with peak TPS to determine how to distribute prefetch requests across the available capacity without exceeding your ADS limits. The configuration for TPS-based traffic shaping. This approach limits requests to the ad decision server (ADS) based on transactions per second and concurrent users, providing more intuitive capacity management compared to time-window based traffic shaping. The configuration for TPS-based traffic shaping. This approach limits requests to the ad decision server (ADS) based on transactions per second and concurrent users. A name for the association that you're creating between a Resolver rule and a VPC. A name for the association that you're creating between a Resolver rule and a VPC. The name can be up to 64 characters long and can contain letters (a-z, A-Z), numbers (0-9), hyphens (-), underscores (_), and spaces. The name cannot consist of only numbers. The protocols you want to use for the endpoint. DoH-FIPS is applicable for default inbound endpoints only. For a default inbound endpoint you can apply the protocols as follows: Do53 and DoH in combination. Do53 and DoH-FIPS in combination. Do53 alone. DoH alone. DoH-FIPS alone. None, which is treated as Do53. For a delegation inbound endpoint you can use Do53 only. For an outbound endpoint you can apply the protocols as follows: Do53 and DoH in combination. Do53 alone. DoH alone. None, which is treated as Do53. Specifies whether RNI enhanced metrics are enabled for the Resolver endpoints. When set to true, one-minute granular metrics are published in CloudWatch for each RNI associated with this endpoint. 
When set to false, metrics are not published. Default is false. Standard CloudWatch pricing and charges are applied for using the Route 53 Resolver endpoint RNI enhanced metrics. For more information, see Detailed metrics. Specifies whether target name server metrics are enabled for the outbound Resolver endpoints. When set to true, one-minute granular metrics are published in CloudWatch for each target name server associated with this endpoint. When set to false, metrics are not published. Default is false. This is not supported for inbound Resolver endpoints. Standard CloudWatch pricing and charges are applied for using the Route 53 Resolver endpoint target name server metrics. For more information, see Detailed metrics. A friendly name that lets you easily find a rule in the Resolver dashboard in the Route 53 console. A friendly name that lets you easily find a rule in the Resolver dashboard in the Route 53 console. The name can be up to 64 characters long and can contain letters (a-z, A-Z), numbers (0-9), hyphens (-), underscores (_), and spaces. The name cannot consist of only numbers. The IPs that you want Resolver to forward DNS queries to. You can specify either IPv4 or IPv6 addresses but not both in the same rule. Separate IP addresses with a space. The IPs that you want Resolver to forward DNS queries to. You can specify either IPv4 or IPv6 addresses but not both in the same rule. Separate IP addresses with a space. When creating a DELEGATE rule, you must not provide the Protocols used for the endpoint. DoH-FIPS is applicable for default inbound endpoints only. For an inbound endpoint you can apply the protocols as follows: Do53 and DoH in combination. Do53 and DoH-FIPS in combination. Do53 alone. DoH alone. DoH-FIPS alone. None, which is treated as Do53. For a delegation inbound endpoint you can use Do53 only. For an outbound endpoint you can apply the protocols as follows: Do53 and DoH in combination. Do53 alone. DoH alone. 
None, which is treated as Do53. Indicates whether RNI enhanced metrics are enabled for the Resolver endpoint. When enabled, one-minute granular metrics are published in CloudWatch for each RNI associated with this endpoint. When disabled, these metrics are not published. Indicates whether target name server metrics are enabled for the outbound Resolver endpoint. When enabled, one-minute granular metrics are published in CloudWatch for each target name server associated with this endpoint. When disabled, these metrics are not published. This feature is not supported for inbound Resolver endpoints. In the response to a CreateResolverEndpoint, DeleteResolverEndpoint, GetResolverEndpoint, or UpdateResolverEndpoint request, a complex type that contains settings for an existing inbound or outbound Resolver endpoint. The name for the Resolver rule, which you specified when you created the Resolver rule. The name for the Resolver rule, which you specified when you created the Resolver rule. The name can be up to 64 characters long and can contain letters (a-z, A-Z), numbers (0-9), hyphens (-), underscores (_), and spaces. The name cannot consist of only numbers. The name of an association between a Resolver rule and a VPC. The name of an association between a Resolver rule and a VPC. The name can be up to 64 characters long and can contain letters (a-z, A-Z), numbers (0-9), hyphens (-), underscores (_), and spaces. The name cannot consist of only numbers. The new name for the Resolver rule. The name that you specify appears in the Resolver dashboard in the Route 53 console. The new name for the Resolver rule. The name that you specify appears in the Resolver dashboard in the Route 53 console. The name can be up to 64 characters long and can contain letters (a-z, A-Z), numbers (0-9), hyphens (-), underscores (_), and spaces. The name cannot consist of only numbers. The protocols you want to use for the endpoint. 
DoH-FIPS is applicable for default inbound endpoints only. For a default inbound endpoint you can apply the protocols as follows: Do53 and DoH in combination. Do53 and DoH-FIPS in combination. Do53 alone. DoH alone. DoH-FIPS alone. None, which is treated as Do53. For a delegation inbound endpoint you can use Do53 only. For an outbound endpoint you can apply the protocols as follows: Do53 and DoH in combination. Do53 alone. DoH alone. None, which is treated as Do53. You can't change the protocol of an inbound endpoint directly from only Do53 to only DoH, or DoH-FIPS. This is to prevent a sudden disruption to incoming traffic that relies on Do53. To change the protocol from Do53 to DoH, or DoH-FIPS, you must first enable both Do53 and DoH, or Do53 and DoH-FIPS, to make sure that all incoming traffic has transferred to using the DoH protocol, or DoH-FIPS, and then remove the Do53. Updates whether RNI enhanced metrics are enabled for the Resolver endpoints. When set to true, one-minute granular metrics are published in CloudWatch for each RNI associated with this endpoint. When set to false, metrics are not published. Standard CloudWatch pricing and charges are applied for using the Route 53 Resolver endpoint RNI enhanced metrics. For more information, see Detailed metrics. Updates whether target name server metrics are enabled for the outbound Resolver endpoints. When set to true, one-minute granular metrics are published in CloudWatch for each target name server associated with this endpoint. When set to false, metrics are not published. This setting is not supported for inbound Resolver endpoints. Standard CloudWatch pricing and charges are applied for using the Route 53 Resolver endpoint target name server metrics. For more information, see Detailed metrics. Retrieves information about your Service Quotas Automatic Management configuration. Automatic Management monitors your Service Quotas utilization and notifies you before you run out of your allocated quotas. 
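The Resolver name rule stated repeatedly above (up to 64 characters; letters, numbers, hyphens, underscores, and spaces; not numbers only) is simple enough to express as a local validation sketch. This is a client-side convenience check under those stated rules, not the service's own validation.

```python
import re

# Allowed characters per the rule: a-z, A-Z, 0-9, hyphen, underscore,
# and space; length 1 to 64.
_NAME_RE = re.compile(r"^[A-Za-z0-9_\- ]{1,64}$")

def is_valid_resolver_name(name: str) -> bool:
    """Check a Resolver rule/association name against the stated rules."""
    if not _NAME_RE.match(name):
        return False
    # The name cannot consist of only numbers.
    return not name.isdigit()
```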
Retrieves the quota utilization report for your Amazon Web Services account. This operation returns paginated results showing your quota usage across all Amazon Web Services services, sorted by utilization percentage in descending order (highest utilization first). You must first initiate a report using the Each report contains up to 1,000 quota records per page. Use the Starts Service Quotas Automatic Management for an Amazon Web Services account, including notification preferences and excluded quotas configurations. Automatic Management monitors your Service Quotas utilization and notifies you before you run out of your allocated quotas. Initiates the generation of a quota utilization report for your Amazon Web Services account. This asynchronous operation analyzes your quota usage across all Amazon Web Services services and returns a unique report identifier that you can use to retrieve the results. The report generation process may take several seconds to complete, depending on the number of quotas in your account. Use the The unique identifier for the quota utilization report. This identifier is returned by the A token that indicates the next page of results to retrieve. This token is returned in the response when there are more results available. Omit this parameter for the first request. The maximum number of results to return per page. The default value is 1,000 and the maximum allowed value is 1,000. The unique identifier for the quota utilization report. The current status of the report generation. Possible values are: The timestamp when the report was generated, in ISO 8601 format. The total number of quotas included in the report across all pages. A list of quota utilization records, sorted by utilization percentage in descending order. Each record includes the quota code, service code, service name, quota name, namespace, utilization percentage, default value, applied value, and whether the quota is adjustable. 
Up to 1,000 records are returned per page. A token that indicates more results are available. Include this token in the next request to retrieve the next page of results. If this field is not present, you have retrieved all available results. An error code indicating the reason for failure when the report status is A detailed error message describing the failure when the report status is Information about the quota period. The quota identifier. The service identifier. The quota name. The namespace of the metric used to track quota usage. The utilization percentage of the quota, calculated as (current usage / applied value) × 100. Values range from 0.0 to 100.0 or higher if usage exceeds the quota limit. The default value of the quota. The applied value of the quota, which may be higher than the default value if a quota increase has been requested and approved. The service name. Indicates whether the quota value can be increased. Information about a quota's utilization, including the quota code, service information, current usage, and applied limits. The unique identifier. The type of quota increase request. Possible values include: If this field is not present, the request was manually created by a user. The case ID. A unique identifier for the quota utilization report. Use this identifier with the The current status of the report generation. The status will be An optional message providing additional information about the report generation status. This field may contain details about the report initiation or indicate if an existing recent report is being reused. You've exceeded the number of tags allowed for a resource. For more information, see Tag restrictions in the Service Quotas User Guide. With Service Quotas, you can view and manage your quotas easily as your Amazon Web Services workloads grow. Quotas, also referred to as limits, are the maximum number of resources that you can create in your Amazon Web Services account. 
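The utilization report above defines utilization as (current usage / applied value) x 100 and sorts records by that percentage in descending order. That calculation and ordering can be sketched directly; the record field names here are illustrative, not the API's exact response shape.

```python
def utilization_pct(current_usage: float, applied_value: float) -> float:
    """Utilization as (current usage / applied value) * 100."""
    return (current_usage / applied_value) * 100

# Illustrative records: usage against the applied (possibly increased)
# quota value, not the default value.
records = [
    {"quotaName": "A", "usage": 40, "applied": 100},
    {"quotaName": "B", "usage": 9, "applied": 10},
]
for r in records:
    r["utilizationPct"] = utilization_pct(r["usage"], r["applied"])

# Highest utilization first, matching the report's ordering.
records.sort(key=lambda r: r["utilizationPct"], reverse=True)
```

Note that values can exceed 100.0 when usage is above the quota limit, which the division handles naturally.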
For more information, see the Service Quotas User Guide. You need Amazon Web Services CLI version 2.13.20 or higher to view and manage resource-level quotas such as Retrieves the encryption configuration for resources and data of your Amazon Web Services account in Amazon Web Services IoT Core. For more information, see Key management in IoT from the Amazon Web Services IoT Core Developer Guide. Retrieves the encryption configuration for resources and data of your Amazon Web Services account in Amazon Web Services IoT Core. For more information, see Data encryption at rest in the Amazon Web Services IoT Core Developer Guide. Transfers the specified certificate to the specified Amazon Web Services account. Requires permission to access the TransferCertificate action. You can cancel the transfer until it is acknowledged by the recipient. No notification is sent to the transfer destination's account. It's up to the caller to notify the transfer target. The certificate being transferred must not be in the The certificate must not have any policies attached to it. You can use the DetachPolicy action to detach them. Customer managed key behavior: When you use a customer managed key to secure your data and then transfer the key to a customer in a different account using the TransferCertificate operation, the certificates will no longer be protected by their customer managed key configuration. During the transfer process, certificates are encrypted using IoT owned keys. While a certificate is in the PENDING_TRANSFER state, it's always protected by IoT owned keys, regardless of the customer managed key configuration of either the source or destination account. 
Once the transfer is completed through AcceptCertificateTransfer, RejectCertificateTransfer, or CancelCertificateTransfer, the certificate will be protected by the customer managed key configuration of the account that owns the certificate after the transfer operation: If the transfer is accepted: The certificate is protected by the destination account's customer managed key configuration. If the transfer is rejected or cancelled: The certificate is protected by the source account's customer managed key configuration. Updates the encryption configuration. 
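The key-protection rules for a certificate transfer described above reduce to a small lookup; the outcome labels and function name below are illustrative, not IoT API values:

```python
def protecting_key_configuration(outcome: str) -> str:
    """Which key configuration protects the certificate for a given
    transfer outcome, per the rules above: while the transfer is pending,
    IoT owned keys apply regardless of either account's configuration;
    an accepted transfer hands protection to the destination account's
    customer managed key configuration; a rejected or cancelled transfer
    leaves it with the source account's configuration.

    The outcome strings here are illustrative labels only.
    """
    if outcome == "pending":
        return "iot-owned-keys"
    if outcome == "accepted":
        return "destination-account-cmk"
    if outcome in ("rejected", "cancelled"):
        return "source-account-cmk"
    raise ValueError(f"unknown transfer outcome: {outcome}")
```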
By default, Amazon Web Services IoT Core encrypts your data at rest using Amazon Web Services owned keys. Amazon Web Services IoT Core also supports symmetric customer managed keys from Key Management Service (KMS). With customer managed keys, you create, own, and manage the KMS keys in your Amazon Web Services account. Before using this API, you must set up permissions for Amazon Web Services IoT Core to access KMS. For more information, see Data encryption at rest in the Amazon Web Services IoT Core Developer Guide. Specifies the amount of time each device has to finish its execution of the job. A timer is started when the job execution status is set to Converts the command preprocessor result to the format defined by this parameter, before sending it to the device. Configures the command to treat the The name of a specific parameter used in a command and command execution. The type of the command parameter. The value used to describe the command. When you assign a value to a parameter, it will override any default value that you had already specified. Parameter value that overrides the default value, if set. The default value used to describe the command. This is the value assumed by the parameter if no other value is assigned to it. The list of conditions that a command parameter value must satisfy to create a command execution. The description of the command parameter. An attribute of type unsigned long. The range of possible values that's used to describe a specific command parameter. 
The The value of a command parameter used to create a command execution. The An operand of number value type, defined as a string. A list of operands of numerical value type, defined as strings. An operand of string value type. A list of operands of string value type. An operand of numerical range value type. The comparison operand used to compare the defined value against the value supplied in the request. The comparison operator for the command parameter. The IN_RANGE and NOT_IN_RANGE operators include boundary values. The comparison operand for the command parameter. A condition for the command parameter that must evaluate to true for successful creation of a command execution. The minimum value of a numerical range of a command parameter value. The maximum value of a numerical range of a command parameter value. The numerical range value type to compare a command parameter value against. The command payload object that contains the instructions for the device to process. Configuration for the JSON substitution preprocessor. Configuration that determines how the The health status of the KMS key and KMS access role. If either the KMS key or the KMS access role is The detailed error message that corresponds to the The encryption configuration details that include the status information of the Amazon Web Services Key Management Service (KMS) key and the KMS access role. The payload object for the command. You must specify this information when using the You can upload a static payload file from your local storage that contains the instructions for the device to process. The payload file can use any format. To make sure that the device correctly interprets the payload, we recommend that you specify the payload content type. 
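The text above notes that the IN_RANGE and NOT_IN_RANGE operators include boundary values; a minimal sketch of that comparison, assuming NOT_IN_RANGE is simply the complement of IN_RANGE (the evaluation function itself is illustrative, not the service's evaluator):

```python
def evaluate_range_condition(value: float, minimum: float,
                             maximum: float, operator: str) -> bool:
    """Evaluate a numerical-range condition for a command parameter value.

    Boundary values count as in range, per the note above. Modeling
    NOT_IN_RANGE as the complement of IN_RANGE is an assumption here.
    """
    in_range = minimum <= value <= maximum  # inclusive boundaries
    if operator == "IN_RANGE":
        return in_range
    if operator == "NOT_IN_RANGE":
        return not in_range
    raise ValueError(f"unsupported operator: {operator}")
```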
The payload object for the static command. You can upload a static payload file from your local storage that contains the instructions for the device to process. The payload file can use any format. To make sure that the device correctly interprets the payload, we recommend that you specify the payload content type. The payload template for the dynamic command. This parameter is required for dynamic commands where the command execution placeholders are supplied either from Configuration that determines how This parameter is required for dynamic commands, along with A list of parameters that are required by the A list of parameters that are used by The IAM role that you must provide when using the The type of the Amazon Web Services Key Management Service (KMS) key. The Amazon Resource Name (ARN) of the IAM role assumed by Amazon Web Services IoT Core to call KMS on behalf of the customer. The ARN of the customer managed KMS key. The Amazon Resource Name (ARN) of the IAM role assumed by Amazon Web Services IoT Core to call KMS on behalf of the customer. The payload object that you provided for the command. The payload template for the dynamic command. Configuration that determines how The IAM role that you provided when creating the command with The principal. Valid principals are CertificateArn (arn:aws:iot:region:accountId:cert/certificateId), thingGroupArn (arn:aws:iot:region:accountId:thinggroup/groupName) and CognitoId (region:id). The principal. Valid principals are CertificateArn (arn:aws:iot:region:accountId:cert/certificateId) and CognitoId (region:id). The type of the Amazon Web Services Key Management Service (KMS) key. The ARN of the customer managed KMS key. A list of tags applied to the resource. Reboots a Timestream for InfluxDB cluster. 
Reboots a Timestream for InfluxDB instance. Service-generated unique identifier of the DB cluster to reboot. A list of service-generated unique DB Instance Ids belonging to the DB Cluster to reboot. The status of the DB Cluster. The id of the DB instance to reboot. A service-generated unique identifier. The customer-supplied name that uniquely identifies the DB instance when interacting with the Amazon Timestream for InfluxDB API and CLI commands. The Amazon Resource Name (ARN) of the DB instance. The status of the DB instance. The endpoint used to connect to InfluxDB. The default InfluxDB port is 8086. The port number on which InfluxDB accepts connections. Specifies whether the networkType of the Timestream for InfluxDB instance is IPV4, which can communicate over IPv4 protocol only, or DUAL, which can communicate over both IPv4 and IPv6 protocols. The Timestream for InfluxDB instance type that InfluxDB runs on. The Timestream for InfluxDB DB storage type that InfluxDB stores data on. The amount of storage allocated for your DB storage type (in gibibytes). Specifies whether the Timestream for InfluxDB is deployed as Single-AZ or with a MultiAZ Standby for high availability. A list of VPC subnet IDs associated with the DB instance. Indicates if the DB instance has a public IP to facilitate access. A list of VPC security group IDs associated with the DB instance. The id of the DB parameter group assigned to your DB instance. The Availability Zone in which the DB instance resides. The Availability Zone in which the standby instance is located when deploying with a MultiAZ standby instance. Configuration for sending InfluxDB engine logs to a specified S3 bucket. The Amazon Resource Name (ARN) of the Secrets Manager secret containing the initial InfluxDB authorization parameters. The secret value is a JSON formatted key-value pair holding InfluxDB authorization values: organization, bucket, username, and password. 
Specifies the DbCluster to which this DbInstance belongs. Specifies the DbInstance's role in the cluster. Specifies the DbInstance's roles in the cluster. Stream groups manage how Amazon GameLift Streams allocates resources and handles concurrent streams, allowing you to effectively manage capacity and costs. Within a stream group, you specify an application to stream, streaming locations and their capacity, and the stream class you want to use when streaming applications to your end-users. A stream class defines the hardware configuration of the compute resources that Amazon GameLift Streams will use when streaming, such as the CPU, GPU, and memory. Stream capacity represents the number of concurrent streams that can be active at a time. You set stream capacity per location, per stream group. There are two types of capacity, always-on and on-demand: Always-on: The streaming capacity that is allocated and ready to handle stream requests without delay. You pay for this capacity whether it's in use or not. Best for quickest time from streaming request to streaming session. Default is 1 (2 for high stream classes) when creating a stream group or adding a location. On-demand: The streaming capacity that Amazon GameLift Streams can allocate in response to stream requests, and then de-allocate when the session has terminated. This offers a cost control measure at the expense of a greater startup time (typically under 5 minutes). Default is 0 when creating a stream group or adding a location. Values for capacity must be whole number multiples of the tenancy value of the stream group's stream class. To adjust the capacity of any If the Stream groups should be recreated every 3-4 weeks to pick up important service updates and fixes. Stream groups that are older than 180 days can no longer be updated with new application associations. Stream groups expire when they are 365 days old, at which point they can no longer stream sessions. 
The exact expiration date is indicated by the date value in the Stream groups manage how Amazon GameLift Streams allocates resources and handles concurrent streams, allowing you to effectively manage capacity and costs. Within a stream group, you specify an application to stream, streaming locations and their capacity, and the stream class you want to use when streaming applications to your end-users. A stream class defines the hardware configuration of the compute resources that Amazon GameLift Streams will use when streaming, such as the CPU, GPU, and memory. Stream capacity represents the number of concurrent streams that can be active at a time. You set stream capacity per location, per stream group. The following capacity settings are available: Always-on capacity: This setting, if non-zero, indicates minimum streaming capacity which is allocated to you and is never released back to the service. You pay for this base level of capacity at all times, whether used or idle. Maximum capacity: This indicates the maximum capacity that the service can allocate for you. Newly created streams may take a few minutes to start. Capacity is released back to the service when idle. You pay for capacity that is allocated to you until it is released. Target-idle capacity: This indicates idle capacity which the service pre-allocates and holds for you in anticipation of future activity. This helps to insulate your users from capacity-allocation delays. You pay for capacity which is held in this intentional idle state. Values for capacity must be whole number multiples of the tenancy value of the stream group's stream class. To adjust the capacity of any If the Stream groups should be recreated every 3-4 weeks to pick up important service updates and fixes. Stream groups that are older than 180 days can no longer be updated with new application associations. Stream groups expire when they are 365 days old, at which point they can no longer stream sessions. 
The exact expiration date is indicated by the date value in the Updates the configuration settings for an Amazon GameLift Streams stream group resource. To update a stream group, it must be in Stream capacity represents the number of concurrent streams that can be active at a time. You set stream capacity per location, per stream group. There are two types of capacity, always-on and on-demand: Always-on: The streaming capacity that is allocated and ready to handle stream requests without delay. You pay for this capacity whether it's in use or not. Best for quickest time from streaming request to streaming session. Default is 1 (2 for high stream classes) when creating a stream group or adding a location. On-demand: The streaming capacity that Amazon GameLift Streams can allocate in response to stream requests, and then de-allocate when the session has terminated. This offers a cost control measure at the expense of a greater startup time (typically under 5 minutes). Default is 0 when creating a stream group or adding a location. Values for capacity must be whole number multiples of the tenancy value of the stream group's stream class. To update a stream group, specify the stream group's Amazon Resource Name (ARN) and provide the new values. If the request is successful, Amazon GameLift Streams returns the complete updated metadata for the stream group. Expired stream groups cannot be updated. Updates the configuration settings for an Amazon GameLift Streams stream group resource. To update a stream group, it must be in Stream capacity represents the number of concurrent streams that can be active at a time. You set stream capacity per location, per stream group. The following capacity settings are available: Always-on capacity: This setting, if non-zero, indicates minimum streaming capacity which is allocated to you and is never released back to the service. You pay for this base level of capacity at all times, whether used or idle. 
Maximum capacity: This indicates the maximum capacity that the service can allocate for you. Newly created streams may take a few minutes to start. Capacity is released back to the service when idle. You pay for capacity that is allocated to you until it is released. Target-idle capacity: This indicates idle capacity which the service pre-allocates and holds for you in anticipation of future activity. This helps to insulate your users from capacity-allocation delays. You pay for capacity which is held in this intentional idle state. Values for capacity must be whole number multiples of the tenancy value of the stream group's stream class. To update a stream group, specify the stream group's Amazon Resource Name (ARN) and provide the new values. If the request is successful, Amazon GameLift Streams returns the complete updated metadata for the stream group. Expired stream groups cannot be updated. The target stream quality for sessions that are hosted in this stream group. Set a stream class that is appropriate to the type of content that you're streaming. Stream class determines the type of computing resources Amazon GameLift Streams uses and impacts the cost of streaming. 
The following options are available: A stream class can be one of the following: Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 8 vCPUs, 32 GB RAM, 24 GB VRAM Tenancy: Supports 1 concurrent stream session Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 4 vCPUs, 16 GB RAM, 12 GB VRAM Tenancy: Supports up to 2 concurrent stream sessions Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 8 vCPUs, 32 GB RAM, 24 GB VRAM Tenancy: Supports 1 concurrent stream session Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 8 vCPUs, 32 GB RAM, 16 GB VRAM Tenancy: Supports 1 concurrent stream session Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 4 vCPUs, 16 GB RAM, 8 GB VRAM Tenancy: Supports up to 2 concurrent stream sessions Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 8 vCPUs, 32 GB RAM, 16 GB VRAM Tenancy: Supports 1 concurrent stream session The target stream quality for sessions that are hosted in this stream group. Set a stream class that is appropriate to the type of content that you're streaming. Stream class determines the type of computing resources Amazon GameLift Streams uses and impacts the cost of streaming. 
The following options are available: A stream class can be one of the following: Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 16 vCPUs, 64 GB RAM, 24 GB VRAM Tenancy: Supports 1 concurrent stream session Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 16 vCPUs, 64 GB RAM, 24 GB VRAM Tenancy: Supports 1 concurrent stream session Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 8 vCPUs, 32 GB RAM, 24 GB VRAM Tenancy: Supports 1 concurrent stream session Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 8 vCPUs, 32 GB RAM, 24 GB VRAM Tenancy: Supports 1 concurrent stream session Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 4 vCPUs, 16 GB RAM, 12 GB VRAM Tenancy: Supports up to 2 concurrent stream sessions Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 2 vCPUs, 8 GB RAM, 6 GB VRAM Tenancy: Supports up to 4 concurrent stream sessions Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 1 vCPU, 4 GB RAM, 2 GB VRAM Tenancy: Supports up to 12 concurrent stream sessions Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 8 vCPUs, 32 GB RAM, 24 GB VRAM Tenancy: Supports 1 concurrent stream session Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 4 vCPUs, 16 GB RAM, 12 GB VRAM Tenancy: Supports up to 2 concurrent stream sessions Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 8 vCPUs, 32 GB RAM, 24 GB VRAM Tenancy: Supports 1 concurrent stream session Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 8 vCPUs, 32 GB RAM, 16 GB VRAM Tenancy: Supports 1 concurrent stream session Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 4 vCPUs, 16 GB RAM, 8 GB VRAM Tenancy: Supports up to 2 
concurrent stream sessions Reference resolution: 1080p Reference frame rate: 60 fps Workload specifications: 8 vCPUs, 32 GB RAM, 16 GB VRAM Tenancy: Supports 1 concurrent stream session The target stream quality for the stream group. A stream class can be one of the classes listed above. A short description of the reason that the stream group is in A set of options that you can use to control the stream session runtime environment, expressed as a set of key-value pairs. You can use this to configure the application or stream session details. You can also provide custom environment variables that Amazon GameLift Streams passes to your game client. If you want to debug your application with environment variables, we recommend that you do so in a local environment outside of Amazon GameLift Streams. For more information, refer to the Compatibility Guidance in the troubleshooting section of the Developer Guide. The performance stats configuration for the stream session. Access location for log files that your content generates during a stream session. These log files are uploaded to a cloud storage location at the end of a stream session. The Amazon GameLift Streams application resource defines which log files to upload. The streaming capacity that is allocated and ready to handle stream requests without delay. You pay for this capacity whether it's in use or not. Best for quickest time from streaming request to streaming session. Default is 1 (2 for high stream classes) when creating a stream group or adding a location. This setting, if non-zero, indicates minimum streaming capacity which is allocated to you and is never released back to the service. You pay for this base level of capacity at all times, whether used or idle. The streaming capacity that Amazon GameLift Streams can allocate in response to stream requests, and then de-allocate when the session has terminated. This offers a cost control measure at the expense of a greater startup time (typically under 5 minutes). Default is 0 when creating a stream group or adding a location. 
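Capacity values must be whole-number multiples of the stream class's tenancy, as noted above; a minimal client-side validation sketch (the helper is hypothetical, not part of the Amazon GameLift Streams API):

```python
def is_valid_capacity(capacity: int, tenancy: int) -> bool:
    """Check that a requested capacity value is a whole-number multiple
    of the stream class's tenancy, per the rule above. For example, a
    stream class with tenancy 2 allows capacities 0, 2, 4, and so on.
    """
    if capacity < 0 or tenancy <= 0:
        return False
    return capacity % tenancy == 0

# e.g. tenancy 2 (up to 2 concurrent sessions per resource)
print(is_valid_capacity(4, 2))  # True
print(is_valid_capacity(3, 2))  # False
```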
This field is deprecated. Use

The streaming capacity that Amazon GameLift Streams can allocate in response to stream requests, and then de-allocate when the session has terminated. This offers a cost control measure at the expense of a greater startup time (typically under 5 minutes). Default is 0 when creating a stream group or adding a location.

This indicates idle capacity which the service pre-allocates and holds for you in anticipation of future activity. This helps to insulate your users from capacity-allocation delays. You pay for capacity which is held in this intentional idle state.

This indicates the maximum capacity that the service can allocate for you. Newly created streams may take a few minutes to start. Capacity is released back to the service when idle. You pay for capacity that is allocated to you until it is released.

Configuration settings that define a stream group's stream capacity for a location. When configuring a location for the first time, you must specify a numeric value for at least one of the two capacity types. To update the capacity for an existing stream group, call UpdateStreamGroup. To add a new location and specify its capacity, call AddStreamGroupLocations.

The streaming capacity that is allocated and ready to handle stream requests without delay. You pay for this capacity whether it's in use or not. Best for quickest time from streaming request to streaming session. Default is 1 (2 for high stream classes) when creating a stream group or adding a location.

This setting, if non-zero, indicates minimum streaming capacity which is allocated to you and is never released back to the service. You pay for this base level of capacity at all times, whether used or idle.

This value is the always-on capacity that you most recently requested for a stream group. You request capacity separately for each location in a stream group. In response to an increase in requested capacity, Amazon GameLift Streams attempts to provision compute resources to make the stream group's allocated capacity meet requested capacity. When always-on capacity is decreased, it can take a few minutes to deprovision allocated capacity to match the requested capacity.

This value is the stream capacity that Amazon GameLift Streams has provisioned in a stream group that can respond immediately to stream requests. It includes resources that are currently streaming and resources that are idle and ready to respond to stream requests. When target-idle capacity is configured, the idle resources include the capacity buffer maintained beyond ongoing sessions. You pay for this capacity whether it's in use or not. After making changes to capacity, it can take a few minutes for the allocated capacity count to reflect the change while compute resources are allocated or deallocated. Similarly, when allocated on-demand capacity is no longer needed, it can take a few minutes for Amazon GameLift Streams to spin down the allocated capacity.

Performance stats for the session are streamed to the client when set to

Configuration settings for sharing the stream session's performance stats with the client.

A set of options that you can use to control the stream session runtime environment, expressed as a set of key-value pairs. You can use this to configure the application or stream session details. You can also provide custom environment variables that Amazon GameLift Streams passes to your game client. If you want to debug your application with environment variables, we recommend that you do so in a local environment outside of Amazon GameLift Streams. For more information, refer to the Compatibility Guidance in the troubleshooting section of the Developer Guide.
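The first-time location rule above (you must specify a numeric value for at least one of the two capacity types) can be enforced client-side before calling UpdateStreamGroup or AddStreamGroupLocations. A minimal sketch, assuming illustrative field names `AlwaysOnCapacity` and `OnDemandCapacity` (not taken from this extract):

```python
def validate_location_capacity(config: dict) -> None:
    """Pre-flight check for a stream group location capacity config.

    Mirrors the documented rule: when configuring a location for the
    first time, a numeric value must be given for at least one of the
    two capacity types. Field names here are illustrative.
    """
    always_on = config.get("AlwaysOnCapacity")
    on_demand = config.get("OnDemandCapacity")
    if not any(isinstance(v, int) and v >= 0 for v in (always_on, on_demand)):
        raise ValueError(
            "Specify a numeric value for at least one capacity type "
            "(always-on or on-demand) when configuring a new location."
        )
```

Running the check locally avoids a round trip that the service would reject anyway.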
The performance stats configuration for the stream session.

Access location for log files that your content generates during a stream session. These log files are uploaded to a cloud storage location at the end of a stream session. The Amazon GameLift Streams application resource defines which log files to upload.

The target stream quality for the stream group. A stream class can be one of the following; each runs at a reference resolution of 1080p and a reference frame rate of 60 fps:

16 vCPUs, 64 GB RAM, 24 GB VRAM: supports 1 concurrent stream session
8 vCPUs, 32 GB RAM, 24 GB VRAM: supports 1 concurrent stream session
8 vCPUs, 32 GB RAM, 16 GB VRAM: supports 1 concurrent stream session
4 vCPUs, 16 GB RAM, 12 GB VRAM: supports up to 2 concurrent stream sessions
4 vCPUs, 16 GB RAM, 8 GB VRAM: supports up to 2 concurrent stream sessions
2 vCPUs, 8 GB RAM, 6 GB VRAM: supports up to 4 concurrent stream sessions
1 vCPU, 4 GB RAM, 2 GB VRAM: supports up to 12 concurrent stream sessions

The current status of the stream session resource.

A short description of the reason the stream session is in

The data transfer protocol in use with the stream session.

A short description of the reason that the stream group is in

The Amazon Resource Name (ARN) that identifies the database instance involved in the finding.

The unique ID of the database resource involved in the activity that prompted GuardDuty to generate the finding.

Information about the tag key-value pairs.

Scans a provided CycloneDX 1.5 SBOM and reports on any vulnerabilities discovered in that SBOM. You can generate compatible SBOMs for your resources using the Amazon Inspector SBOM generator. The output of this action reports NVD and CVSS scores when NVD and CVSS scores are available. Because the output reports both scores, you might notice a discrepancy between them. However, you can triage the severity of either score depending on the vendor of your choosing.

The JSON file for the SBOM you want to scan. The SBOM must be in CycloneDX 1.5 format. This format limits you to passing 2000 components before throwing a

Returns summary information about the connector.

Returns information about the specified connector's operations.

A summary description of the custom plugin.

Returns information about a worker configuration.

Lists information about a connector's operations.
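An SBOM that is not CycloneDX 1.5, or that exceeds the stated 2000-component limit, fails server-side; a cheap pre-flight check can catch both locally before the scan request is submitted. A sketch assuming the standard CycloneDX JSON keys `bomFormat`, `specVersion`, and `components`:

```python
import json

MAX_COMPONENTS = 2000  # component limit stated for the SBOM scan input format


def check_sbom(sbom_json: str) -> dict:
    """Lightweight pre-flight check before submitting an SBOM for scanning.

    Verifies the document looks like CycloneDX 1.5 JSON and stays under
    the documented 2000-component limit; returns the parsed SBOM.
    """
    sbom = json.loads(sbom_json)
    if sbom.get("bomFormat") != "CycloneDX" or sbom.get("specVersion") != "1.5":
        raise ValueError("SBOM must be in CycloneDX 1.5 JSON format")
    count = len(sbom.get("components", []))
    if count > MAX_COMPONENTS:
        raise ValueError(f"{count} components exceeds the {MAX_COMPONENTS} limit")
    return sbom
```

This only validates shape and size; the vulnerability reporting itself happens in the service.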
Returns a list of all the connectors in this account and Region. The list is limited to connectors whose name starts with the specified prefix. The response also includes a description of each of the listed connectors.

Returns a list of all of the custom plugins in this account and Region.

Lists all the tags attached to the specified resource.

Returns a list of all of the worker configurations in this account and Region.

Updates the specified connector. For request body, specify only one parameter: either

The settings for delivering connector logs to Amazon CloudWatch Logs.

The network type of the connector. It gives connectors connectivity to either IPv4 (IPV4) or IPv4 and IPv6 (DUAL) destinations. Defaults to IPV4.

Specifies which plugins were used for this connector.

Details about log delivery.

Amazon MSK Connect does not currently support specifying multiple plugins as a list. To use more than one plugin for your connector, you can create a single custom plugin using a ZIP file that bundles multiple plugins together.
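The prefix-filtered connector listing above is paginated; a consumer typically follows the pagination token until it is absent. A sketch with the client stubbed out so the paging loop runs standalone; with boto3 the `kafkaconnect` client's `list_connectors` call has this general shape, but the exact parameter and response key names here are assumptions:

```python
def list_all_connectors(client, prefix: str) -> list:
    """Collect every connector summary whose name starts with `prefix`,
    following nextToken pagination until the service stops returning one.
    `client` is any object exposing a list_connectors(**kwargs) method."""
    connectors, token = [], None
    while True:
        kwargs = {"connectorNamePrefix": prefix}
        if token:
            kwargs["nextToken"] = token
        page = client.list_connectors(**kwargs)
        connectors.extend(page.get("connectors", []))
        token = page.get("nextToken")
        if not token:
            return connectors
```

The same loop shape applies to the custom plugin and worker configuration listings.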
Specifies which plugin to use for the connector. You must specify a single-element list containing one BillingGroupCostReportResults are grouped. For example, if you want a service-level breakdown for Amazon Simple Storage Service (Amazon S3) of the billing group, the attribute will be a key-value pair of \"PRODUCT_NAME\" and \"S3\".LineItemFilter to exclude it.null for all other partitionsOpportunity that weren't captured in other fields.null for all other partitionsOpportunity's project details.ACTIVE signing profile to CANCELED. A canceled profile is still viewable with the ListSigningProfiles operation, but it cannot perform new signing jobs, and is deleted two years after cancelation.ACTIVE signing profile to CANCELED. A canceled profile is still viewable with the ListSigningProfiles operation, but it cannot perform new signing jobs. See Data Retention for more information on scheduled deletion of a canceled signing profile.REVOKED. This indicates that the signature is no longer valid.REVOKED. This indicates that signatures generated using the signing profile after an effective start date are no longer valid. A revoked profile is still viewable with the ListSigningProfiles operation, but it cannot perform new signing jobs. See Data Retention for more information on scheduled deletion of a revoked signing profile. ListSigningJobs operation for two years after they are performed. Note the following requirements:
StartSigningJob operation.StartSigningJob.ListSigningJobs operation. Note the following requirements:
StartSigningJob operation.StartSigningJob.
"
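The revocation rule described above (signatures generated using the signing profile after an effective start date are no longer valid, while earlier signatures remain valid) reduces to a timestamp comparison. A hypothetical helper, not part of the signer API:

```python
from datetime import datetime
from typing import Optional


def signature_still_valid(signed_at: datetime,
                          revocation_effective_start: Optional[datetime]) -> bool:
    """Apply the documented revocation rule: once a signing profile is
    revoked, signatures produced at or after the effective start date
    are no longer valid; signatures produced before it remain valid."""
    if revocation_effective_start is None:
        return True  # profile was never revoked
    return signed_at < revocation_effective_start
```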
+ "documentation":"signer:StartSigningJob. This action isn't supported for container image workflows. For details, see StartSigningJob.signer:SignPayload. This action isn't supported for AWS Lambda workflows. For details, see SignPayload signer:GetSigningProfile. For details, see GetSigningProfile.signer:RevokeSignature. For details, see RevokeSignature.
"
},
"principal":{
"shape":"String",
@@ -1875,5 +1875,5 @@
"bool":{"type":"boolean"},
"string":{"type":"string"}
},
- "documentation":"signer:StartSigningJob. This action isn't supported for container image workflows. For details, see StartSigningJob.signer:SignPayload. This action isn't supported for AWS Lambda workflows. For details, see SignPayload signer:GetSigningProfile. For details, see GetSigningProfile.signer:RevokeSignature. For details, see RevokeSignature.
"
+ "documentation":"
"
},
"Tags":{
"shape":"Tags",
@@ -2035,7 +2035,7 @@
"members":{
"PolicyType":{
"shape":"EffectivePolicyType",
- "documentation":"
"
+ "documentation":"
"
},
"TargetId":{
"shape":"PolicyTargetId",
@@ -2194,7 +2194,7 @@
},
"PolicyType":{
"shape":"PolicyType",
- "documentation":"
"
+ "documentation":"
"
}
}
},
@@ -2289,7 +2289,8 @@
"INSPECTOR_POLICY",
"UPGRADE_ROLLOUT_POLICY",
"BEDROCK_POLICY",
- "S3_POLICY"
+ "S3_POLICY",
+ "NETWORK_SECURITY_DIRECTOR_POLICY"
]
},
"EffectivePolicyValidationError":{
@@ -2361,7 +2362,7 @@
},
"PolicyType":{
"shape":"PolicyType",
- "documentation":"
"
+ "documentation":"
"
}
}
},
@@ -2850,7 +2851,7 @@
"members":{
"PolicyType":{
"shape":"EffectivePolicyType",
- "documentation":"
"
+ "documentation":"
"
},
"NextToken":{
"shape":"NextToken",
@@ -2871,7 +2872,7 @@
},
"PolicyType":{
"shape":"EffectivePolicyType",
- "documentation":"
"
+ "documentation":"
"
},
"NextToken":{
"shape":"NextToken",
@@ -3021,7 +3022,7 @@
},
"PolicyType":{
"shape":"EffectivePolicyType",
- "documentation":"
"
+ "documentation":"
"
},
"NextToken":{
"shape":"NextToken",
@@ -3042,7 +3043,7 @@
},
"PolicyType":{
"shape":"EffectivePolicyType",
- "documentation":"
"
+ "documentation":"
"
},
"Path":{
"shape":"Path",
@@ -3263,7 +3264,7 @@
},
"Filter":{
"shape":"PolicyType",
- "documentation":"
"
+ "documentation":"
"
},
"NextToken":{
"shape":"NextToken",
@@ -3294,7 +3295,7 @@
"members":{
"Filter":{
"shape":"PolicyType",
- "documentation":"
"
+ "documentation":"
"
},
"NextToken":{
"shape":"NextToken",
@@ -3759,7 +3760,8 @@
"INSPECTOR_POLICY",
"UPGRADE_ROLLOUT_POLICY",
"BEDROCK_POLICY",
- "S3_POLICY"
+ "S3_POLICY",
+ "NETWORK_SECURITY_DIRECTOR_POLICY"
]
},
"PolicyTypeAlreadyEnabledException":{
diff --git a/awscli/botocore/data/quicksight/2018-04-01/service-2.json b/awscli/botocore/data/quicksight/2018-04-01/service-2.json
index 9a53f1526c62..a663f8479d75 100644
--- a/awscli/botocore/data/quicksight/2018-04-01/service-2.json
+++ b/awscli/botocore/data/quicksight/2018-04-01/service-2.json
@@ -1532,7 +1532,7 @@
{"shape":"UnsupportedUserEditionException"},
{"shape":"InternalFailureException"}
],
- "documentation":"JobStatus.JobStatus.
"
},
"DescribeDashboardSnapshotJobResult":{
"name":"DescribeDashboardSnapshotJobResult",
@@ -1551,7 +1551,7 @@
{"shape":"PreconditionNotMetException"},
{"shape":"InternalFailureException"}
],
- "documentation":"COMPLETED or FAILED status when you poll the job with a DescribeDashboardSnapshotJob API call.Dashboard Snapshot Job with id <SnapshotjobId> has not reached a terminal state..COMPLETED or FAILED status when you poll the job with a DescribeDashboardSnapshotJob API call.Dashboard Snapshot Job with id <SnapshotjobId> has not reached a terminal state..RegisteredUsers response attribute. The attribute will contain a list with at most one object in it.
DASHBOARD_ACCESS_DENIED - The registered user lost access to the dashboard.CAPABILITY_RESTRICTED - The registered user is restricted from exporting data in all selected formats.
"
},
"DescribeDashboardsQAConfiguration":{
"name":"DescribeDashboardsQAConfiguration",
@@ -2288,6 +2288,25 @@
"documentation":"CAPABILITY_RESTRICTED - The registered user is restricted from exporting data in some selected formats.RLS_CHANGED - Row-level security settings have changed. Re-run the job with current settings.CLS_CHANGED - Column-level security settings have changed. Re-run the job with current settings.DATASET_DELETED - The dataset has been deleted. Verify the dataset exists before re-running the job.
ProvidedContexts parameter with ProviderArn set to arn:aws:iam::aws:contextProvider/QuickSight and ContextAssertion set to the identity token received from this API.sts:SetContext action in addition to sts:AssumeRole in its trust relationship policy. The trust policy should include both actions for the principal that will be assuming the role.
DescribeDashboardSnapshotJob API. When you call the DescribeDashboardSnapshotJob API, check the JobStatus field in the response. Once the job reaches a COMPLETED or FAILED status, use the DescribeDashboardSnapshotJobResult API to obtain the URLs for the generated files. If the job fails, the DescribeDashboardSnapshotJobResult API returns detailed information about the error that occurred.StartDashboardSnapshotJob. By default, 12 jobs can run simultaneously in one Amazon Web Services account and users can submit up to 10 API requests per second before an account is throttled. If an overwhelming number of API requests are made by the same user in a short period of time, Quick Sight throttles the API calls to maintain an optimal experience and reliability for all Quick Sight users.
SnapshotExport API jobs are running simultaneously on an Amazon Web Services account. When a new StartDashboardSnapshotJob is created and there are already 12 jobs with the RUNNING status, the new job request fails and returns a LimitExceededException error. Wait for a current job to complete before you resubmit the new job.ThrottlingException is returned.(12 minutes * 9 = 108 minutes). Use the new result to determine the latest time at which the jobs need to be started to meet your target deadline.
"
+ "documentation":"
DescribeDashboardSnapshotJob API. When you call the DescribeDashboardSnapshotJob API, check the JobStatus field in the response. Once the job reaches a COMPLETED or FAILED status, use the DescribeDashboardSnapshotJobResult API to obtain the URLs for the generated files. If the job fails, the DescribeDashboardSnapshotJobResult API returns detailed information about the error that occurred.StartDashboardSnapshotJob. By default, 12 jobs can run simultaneously in one Amazon Web Services account and users can submit up to 10 API requests per second before an account is throttled. If an overwhelming number of API requests are made by the same user in a short period of time, Quick Sight throttles the API calls to maintain an optimal experience and reliability for all Quick Sight users.
SnapshotExport API jobs are running simultaneously on an Amazon Web Services account. When a new StartDashboardSnapshotJob is created and there are already 12 jobs with the RUNNING status, the new job request fails and returns a LimitExceededException error. Wait for a current job to complete before you resubmit the new job.ThrottlingException is returned.(12 minutes * 9 = 108 minutes). Use the new result to determine the latest time at which the jobs need to be started to meet your target deadline.
ProvidedContexts parameter in the API request. The list of contexts must have a single trusted context assertion. The ProviderArn should be arn:aws:iam::aws:contextProvider/IdentityCenter while ContextAssertion will be the identity token you received in response from CreateTokenWithIAM
ProvidedContexts parameter in the API request. The list of contexts must have a single trusted context assertion. The ProviderArn should be arn:aws:iam::aws:contextProvider/QuickSight while ContextAssertion will be the identity token you received in response from GetIdentityContext
"
},
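The polling flow described above (call DescribeDashboardSnapshotJob, check the JobStatus field, and only call DescribeDashboardSnapshotJobResult once the job is COMPLETED or FAILED) can be sketched as follows; `describe` is a stand-in callable so the loop runs without an AWS client, and the "JobStatus" key mirrors the field named in the documentation:

```python
import time

TERMINAL_STATUSES = {"COMPLETED", "FAILED"}


def wait_for_snapshot_job(describe, poll_seconds: float = 5.0,
                          max_polls: int = 120) -> str:
    """Poll a snapshot job until JobStatus reaches a terminal state.

    `describe` stands in for a DescribeDashboardSnapshotJob call and must
    return a dict containing a "JobStatus" key. Only after COMPLETED or
    FAILED should the result API be called for file URLs or error details.
    """
    for _ in range(max_polls):
        status = describe()["JobStatus"]
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("snapshot job did not reach a terminal state")
```

Calling the result API before a terminal status is reached returns the "has not reached a terminal state" error quoted above, so gating on this loop avoids that failure mode.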
"StartDashboardSnapshotJobSchedule":{
"name":"StartDashboardSnapshotJobSchedule",
@@ -7328,6 +7347,14 @@
"shape":"ChartAxisLabelOptions",
"documentation":"BarChartVisual.BarChartVisual.BarChartVisual.BarChartVisual.BarChartVisual structure describes a visual that is a member of the bar chart family. The following charts can be described using this structure:
BarChartVisual.BarChartVisual.BarChartVisual.ComboChartVisual.ComboChartVisual.ComboChartVisual.ComboChartVisual.ComboChartVisual includes stacked bar combo charts and clustered bar combo chartsComboChartVisual.ComboChartVisual.ComboChartVisual.#, for example #37BFF5. BarChartVisual.ComboChartVisual.
"
+ },
+ "DecalStyleType":{
+ "shape":"DecalStyleType",
+ "documentation":"SOLID: Solid fill pattern.DIAGONAL_SMALL: Small diagonal stripes pattern.DIAGONAL_MEDIUM: Medium diagonal stripes pattern.DIAGONAL_LARGE: Large diagonal stripes pattern.DIAGONAL_OPPOSITE_SMALL: Small cross-diagonal stripes pattern.DIAGONAL_OPPOSITE_MEDIUM: Medium cross-diagonal stripes pattern.DIAGONAL_OPPOSITE_LARGE: Large cross-diagonal stripes pattern.CIRCLE_SMALL: Small circle pattern.CIRCLE_MEDIUM: Medium circle pattern.CIRCLE_LARGE: Large circle pattern.DIAMOND_SMALL: Small diamonds pattern.DIAMOND_MEDIUM: Medium diamonds pattern.DIAMOND_LARGE: Large diamonds pattern.DIAMOND_GRID_SMALL: Small diamond grid pattern.DIAMOND_GRID_MEDIUM: Medium diamond grid pattern.DIAMOND_GRID_LARGE: Large diamond grid pattern.CHECKERBOARD_SMALL: Small checkerboard pattern.CHECKERBOARD_MEDIUM: Medium checkerboard pattern.CHECKERBOARD_LARGE: Large checkerboard pattern.TRIANGLE_SMALL: Small triangles pattern.TRIANGLE_MEDIUM: Medium triangles pattern.TRIANGLE_LARGE: Large triangles pattern.
"
+ }
+ },
+ "documentation":"Manual: Apply manual line and marker configuration for line series.Auto: Apply automatic line and marker configuration for line series.KeyRegistration update to Quick Sight fails.BarChartVisual.ComboChartVisual.LineChartVisual.LineChartVisual.LineChartVisual.SnapshotJobResultFileGroup objects that contain information on the files that are requested for registered user during a StartDashboardSnapshotJob API call. If the job succeeds, these objects contain the location where the snapshot artifacts are stored. If the job fails, the objects contain information about the error that caused the job to fail.StartDashboardSnapshotJob API call.AnonymousUserSnapshotJobResult objects that contain information on anonymous users and their user configurations. This data provided by you when you make a StartDashboardSnapshotJob API call.RegisteredUserSnapshotJobResult objects that contain information about files that are requested for registered user during a StartDashboardSnapshotJob API call.ENABLED by default.CreatedDate. CreatedDate. selector1._domainkey.yourdomain.com CNAME selector1.<SigningHostedZone> selector2._domainkey.yourdomain.com CNAME selector2.<SigningHostedZone> selector3._domainkey.yourdomain.com CNAME selector3.<SigningHostedZone>
"
@@ -4882,6 +4886,7 @@
},
"documentation":"AWS_SES – Indicates that DKIM was configured for the identity by using Easy DKIM.EXTERNAL – Indicates that DKIM was configured for the identity by using Bring Your Own DKIM (BYODKIM).AWS_SES_AF_SOUTH_1 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Africa (Cape Town) region using Deterministic Easy-DKIM (DEED). AWS_SES_EU_NORTH_1 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Europe (Stockholm) region using Deterministic Easy-DKIM (DEED). AWS_SES_AP_SOUTH_1 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Asia Pacific (Mumbai) region using Deterministic Easy-DKIM (DEED). AWS_SES_AP_SOUTH_2 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Asia Pacific (Hyderabad) region using Deterministic Easy-DKIM (DEED). AWS_SES_EU_WEST_3 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Europe (Paris) region using Deterministic Easy-DKIM (DEED). AWS_SES_EU_WEST_2 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Europe (London) region using Deterministic Easy-DKIM (DEED). AWS_SES_EU_SOUTH_1 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Europe (Milan) region using Deterministic Easy-DKIM (DEED). AWS_SES_EU_WEST_1 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Europe (Ireland) region using Deterministic Easy-DKIM (DEED). AWS_SES_AP_NORTHEAST_3 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Asia Pacific (Osaka) region using Deterministic Easy-DKIM (DEED). 
AWS_SES_AP_NORTHEAST_2 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Asia Pacific (Seoul) region using Deterministic Easy-DKIM (DEED). AWS_SES_ME_CENTRAL_1 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Middle East (UAE) region using Deterministic Easy-DKIM (DEED). AWS_SES_ME_SOUTH_1 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Middle East (Bahrain) region using Deterministic Easy-DKIM (DEED). AWS_SES_AP_NORTHEAST_1 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Asia Pacific (Tokyo) region using Deterministic Easy-DKIM (DEED). AWS_SES_IL_CENTRAL_1 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Israel (Tel Aviv) region using Deterministic Easy-DKIM (DEED). AWS_SES_SA_EAST_1 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in South America (São Paulo) region using Deterministic Easy-DKIM (DEED). AWS_SES_CA_CENTRAL_1 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Canada (Central) region using Deterministic Easy-DKIM (DEED). AWS_SES_CA_WEST_1 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Canada (Calgary) region using Deterministic Easy-DKIM (DEED). AWS_SES_AP_SOUTHEAST_1 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Asia Pacific (Singapore) region using Deterministic Easy-DKIM (DEED). 
AWS_SES_AP_SOUTHEAST_2 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Asia Pacific (Sydney) region using Deterministic Easy-DKIM (DEED). AWS_SES_AP_SOUTHEAST_3 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Asia Pacific (Jakarta) region using Deterministic Easy-DKIM (DEED). AWS_SES_AP_SOUTHEAST_5 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Asia Pacific (Malaysia) region using Deterministic Easy-DKIM (DEED). AWS_SES_EU_CENTRAL_1 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Europe (Frankfurt) region using Deterministic Easy-DKIM (DEED). AWS_SES_EU_CENTRAL_2 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in Europe (Zurich) region using Deterministic Easy-DKIM (DEED). AWS_SES_US_EAST_1 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in US East (N. Virginia) region using Deterministic Easy-DKIM (DEED). AWS_SES_US_EAST_2 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in US East (Ohio) region using Deterministic Easy-DKIM (DEED). AWS_SES_US_WEST_1 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in US West (N. California) region using Deterministic Easy-DKIM (DEED). AWS_SES_US_WEST_2 – Indicates that DKIM was configured for the identity by replicating signing attributes from a parent identity in US West (Oregon) region using Deterministic Easy-DKIM (DEED). 
selector1._domainkey.yourdomain.com CNAME selector1.<SigningHostedZone> selector2._domainkey.yourdomain.com CNAME selector2.<SigningHostedZone> selector3._domainkey.yourdomain.com CNAME selector3.<SigningHostedZone> SearchFilter. This accepts an OR or AND (List of List) input where:
"
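The three selector CNAME records shown above follow a single pattern, so they can be generated mechanically for any domain. A small sketch; the domain and the signing hosted zone value are placeholders supplied by the caller, matching the `yourdomain.com` and `<SigningHostedZone>` slots in the records:

```python
def deed_cname_records(domain: str, signing_hosted_zone: str) -> list:
    """Render the three selector CNAME records in the pattern shown above
    for a domain replicating DKIM signing attributes from a parent identity."""
    return [
        f"selector{i}._domainkey.{domain} CNAME selector{i}.{signing_hosted_zone}"
        for i in (1, 2, 3)
    ]
```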
+ "documentation":"OR operator.AND operator.SearchFilter. This accepts an OR or AND (List of List) input where:
"
},
"ControlPlaneTagFilter":{
"type":"structure",
"members":{
"OrConditions":{
"shape":"TagOrConditionList",
- "documentation":"OR operator.AND operator.OR condition. OR condition.SearchFilter. This accepts an OR of AND (List of List) input where:
"
+ "documentation":"OR operatorAND operator.SearchFilter. This accepts an OR of AND (List of List) input where:
"
},
"ControlPlaneUserAttributeFilter":{
"type":"structure",
@@ -10080,7 +10080,7 @@
"TagCondition":{"shape":"TagCondition"},
"HierarchyGroupCondition":{"shape":"HierarchyGroupCondition"}
},
- "documentation":"OR operatorAND operator.SearchFilter.OR of AND (List of List) input where:
OR operatorAND operator.SearchFilter.OR of AND (List of List) input where:
OR operator. AND operator.

CustomerId may be a customer number from your CRM.

AGENT.

DisconnectOnCustomerExit parameter, you can configure automatic agent disconnection when end customers end the chat, ensuring that disconnect flows are triggered consistently regardless of which participant disconnects first.

AND condition.

SecretArn. DataSync provides this key to Secrets Manager.

CmkSecretConfig or CustomSecretConfig to provide credentials for a CreateLocation request. Do not provide both parameters for the same request.

CreateLocationAzureBlob request, you provide only the KMS key ARN. DataSync uses this KMS key together with the authentication token you specify for SasConfiguration to create a DataSync-managed secret to store the location access credentials.

CmkSecretConfig (with SasConfiguration) or CustomSecretConfig (without SasConfiguration) to provide credentials for a CreateLocationAzureBlob request. Do not provide both parameters for the same request.

SecretKey that DataSync uses to access a specific object storage location, with a customer-managed KMS key.

CreateLocationObjectStorage request, you provide only the KMS key ARN. DataSync uses this KMS key together with the value you specify for the SecretKey parameter to create a DataSync-managed secret to store the location access credentials.

CmkSecretConfig (with SecretKey) or CustomSecretConfig (without SecretKey) to provide credentials for a CreateLocationObjectStorage request. Do not provide both parameters for the same request.

AuthenticationType is set to NTLM.

Password or KerberosKeytab (for NTLM (default) and KERBEROS authentication types, respectively) that DataSync uses to access a specific SMB storage location, with a customer-managed KMS key.

CreateLocationSmbRequest request, you provide only the KMS key ARN. DataSync uses this KMS key together with either the Password or KerberosKeytab you specify to create a DataSync-managed secret to store the location access credentials.

CmkSecretConfig (with either Password or KerberosKeytab) or CustomSecretConfig (without any Password and KerberosKeytab) to provide credentials for a CreateLocationSmbRequest request. Do not provide both CmkSecretConfig and CustomSecretConfig parameters for the same request.

Password) or binary (for KerberosKeytab). This configuration includes the secret ARN, and the ARN for an IAM role that provides access to the secret.

CmkSecretConfig (with SasConfiguration) or CustomSecretConfig (without SasConfiguration) to provide credentials for a CreateLocationSmbRequest request. Do not provide both parameters for the same request.

SecretArn.

Password or KerberosKeytab that DataSync uses to access a specific storage location. DataSync uses the default Amazon Web Services-managed KMS key to encrypt this secret in Secrets Manager.

Password or KerberosKeytab that DataSync uses to access a specific storage location, with a customer-managed KMS key.
"
+ "documentation":"TranserMode is set to CHANGED - The calculation is based on comparing the content of the source and destination locations and determining the difference that needs to be transferred. The difference can include:
NEVER).REMOVE).TranserMode is set to ALL - The calculation is based only on the items that DataSync finds at the source location.
TranserMode is set to CHANGED - The calculation is based on comparing the content of the source and destination locations and determining the difference that needs to be transferred. The difference can include:
NEVER).REMOVE).TranserMode is set to ALL - The calculation is based only on the items that DataSync finds at the source location.EstimatedFilesToTransfer. In some cases, this value can also be greater than EstimatedFilesToTransfer. This element is implementation-specific for some location types, so don't use it as an exact indication of what's transferring or to monitor your task execution.EstimatedFilesToTransfer. In some cases, this value can also be greater than EstimatedFilesToTransfer. This element is implementation-specific for some location types, so don't use it as an exact indication of what's transferring or to monitor your task execution.0.0.0.0.0.
TranserMode is set to CHANGED - The calculation is based on comparing the content of the source and destination locations and determining the difference that needs to be transferred. The difference can include:
NEVER).TranserMode is set to ALL - The calculation is based only on the items that DataSync finds at the source location.EstimatedFoldersToTransfer. In some cases, this value can also be greater than EstimatedFoldersToTransfer. 0.1048576 (=1024*1024).1048576 (=1024*1024).
"
+ "documentation":"
"
+ },
+ "AtDestinationForDelete":{
+ "shape":"long",
+ "documentation":"
"
},
"AtDestinationForDelete":{
"shape":"long",
- "documentation":"AuthenticationType is set to NTLM.Password or KerberosKeytab or set of credentials that DataSync uses to access a specific transfer location, and a customer-managed KMS key.Password or KerberosKeytab or set of credentials that DataSync uses to access a specific transfer location, and a customer-managed KMS key.browserTabTitle and welcomeText.Light if you upload a dark wallpaper, or Dark for a light wallpaper.browserTabTitle and welcomeText.Light if you upload a dark wallpaper, or Dark for a light wallpaper.s3://bucket-name/key-name. You must have read access to the S3 object.https:// or mailto:. If not provided, the contact button will be hidden from the web portal screen.s3://bucket-name/key-name. You must have read access to the S3 object.status field to track completion.status field to track completion.status field to track completion.status field to track completion.IdMappingWorkflow with a given name, if it exists.IdMappingWorkflow with a given name, if it exists.IdNamespace with a given name, if it exists.IdNamespace with a given name, if it exists.MatchingWorkflow with a given name, if it exists.MatchingWorkflow with a given name, if it exists.ProviderService of a given name.ProviderService of a given name.IdMappingWorkflows that have been created for an Amazon Web Services account.IdMappingWorkflows that have been created for an Amazon Web Services account.MatchingWorkflows that have been created for an Amazon Web Services account.MatchingWorkflows that have been created for an Amazon Web Services account.ProviderServices that are available in this Amazon Web Services Region.ProviderServices that are available in this Amazon Web Services Region.SchemaMappings that have been created for an Amazon Web Services account.SchemaMappings that have been created for an Amazon Web Services account.SchemaMapping, and MatchingWorkflow can be tagged.SchemaMapping, and MatchingWorkflow can be tagged.OutputAttribute objects, each of which have the 
fields Name and Hashed. Each of these objects selects a column to be included in the output table, and whether the values of the column should be hashed.AttributeType of PHONE_NUMBER, and the data in the input table is in a format of 1234567890, Entity Resolution will normalize this field in the output to (123)-456-7890.OutputAttribute objects, each of which have the fields Name and Hashed. Each of these objects selects a column to be included in the output table, and whether the values of the column should be hashed.Locked state. If the vault lock is in the Locked state when this operation is requested, the operation returns an AccessDeniedException error. Aborting the vault locking process removes the vault lock policy from the specified vault. InProgress state by calling InitiateVaultLock. A vault lock is put into the Locked state by calling CompleteVaultLock. You can get the state of a vault lock by calling GetVaultLock. For more information about the vault locking process, see Amazon Glacier Vault Lock. For more information about vault lock policies, see Amazon Glacier Access Control with Vault Lock Policies. InProgress state or if there is no policy associated with the vault.LimitExceededException error. If a tag already exists on the vault under a specified key, the existing key value will be overwritten. For more information about tags, see Tagging Amazon S3 Glacier Resources. LimitExceededException error. If a tag already exists on the vault under a specified key, the existing key value will be overwritten. For more information about tags, see Tagging Amazon Glacier Resources. InProgress state to the Locked state, which causes the vault lock policy to become unchangeable. A vault lock is put into the InProgress state by calling InitiateVaultLock. You can obtain the state of the vault lock by calling GetVaultLock. For more information about the vault locking process, Amazon Glacier Vault Lock. 
Locked state and the provided lock ID matches the lock ID originally used to lock the vault.Locked state, the operation returns an AccessDeniedException error. If an invalid lock ID is passed in the request when the vault lock is in the InProgress state, the operation throws an InvalidParameter error.
bytes=0-1048575, you should verify your download size is 1,048,576 bytes. If you download an entire archive, the expected size is the size of the archive when you uploaded it to Amazon S3 Glacier The expected size is also returned in the headers from the Get Job Output response.bytes=0-1048575, you should verify your download size is 1,048,576 bytes. If you download an entire archive, the expected size is the size of the archive when you uploaded it to Amazon Glacier The expected size is also returned in the headers from the Get Job Output response.access-policy subresource set on the vault; for more information on setting this subresource, see Set Vault Access Policy (PUT access-policy). If there is no access policy set on the vault, the operation returns a 404 Not found error. For more information about vault access policies, see Amazon Glacier Access Control with Vault Access Policies.lock-policy subresource set on the specified vault:
InProgess or Locked.InProgress state.InProgress state by calling InitiateVaultLock. A vault lock is put into the Locked state by calling CompleteVaultLock. You can abort the vault locking process by calling AbortVaultLock. For more information about the vault locking process, Amazon Glacier Vault Lock. 404 Not found error. For more information about vault lock policies, Amazon Glacier Access Control with Vault Lock Policies. notification-configuration subresource of the specified vault.404 Not Found error. For more information about vault notifications, see Configuring Vault Notifications in Amazon S3 Glacier. notification-configuration subresource of the specified vault.404 Not Found error. For more information about vault notifications, see Configuring Vault Notifications in Amazon Glacier.
InProgress.InProgress state. After the 24 hour window ends, the lock ID expires, the vault automatically exits the InProgress state, and the vault lock policy is removed from the vault. You call CompleteVaultLock to complete the vault locking process by setting the state of the vault lock to Locked. Locked state, you cannot initiate a new vault lock for the vault.InProgress state, the operation returns an AccessDeniedException error. When the vault lock is in the InProgress state you must call AbortVaultLock before you can initiate a new vault lock policy. Marker field. If there are no more jobs to list, the Marker field is set to null. If there are more jobs to list, the Marker field is set to a non-null value, which you can use to continue the pagination of the list. To return a list of jobs that begins at a specific job, set the marker request parameter to the Marker value for that job that you obtained from a previous List Jobs request.limit parameter in the request. The default limit is 50. The number of jobs returned might be fewer than the limit, but the number of returned jobs never exceeds the limit.statuscode parameter or completed parameter, or both. Using the statuscode parameter, you can specify to return only jobs that match either the InProgress, Succeeded, or Failed status. Using the completed parameter, you can specify to return only jobs that were completed (true) or jobs that were not completed (false).marker at which to continue the list; if there are no more items the marker is null. To return a list of multipart uploads that begins at a specific upload, set the marker request parameter to the value you obtained from a previous List Multipart Upload request. You can also limit the number of uploads returned in the response by specifying the limit parameter in the request.marker at which to continue the list; if there are no more items the marker is null. 
To return a list of multipart uploads that begins at a specific upload, set the marker request parameter to the value you obtained from a previous List Multipart Upload request. You can also limit the number of uploads returned in the response by specifying the limit parameter in the request.marker at which to continue the list; if there are no more items the marker is null. To return a list of parts that begins at a specific part, set the marker request parameter to the value you obtained from a previous List Parts request. You can also limit the number of parts returned in the response by specifying the limit parameter in the request. marker at which to continue the list; if there are no more items the marker is null. To return a list of parts that begins at a specific part, set the marker request parameter to the value you obtained from a previous List Parts request. You can also limit the number of parts returned in the response by specifying the limit parameter in the request. marker field contains the vault Amazon Resource Name (ARN) at which to continue the list with a new List Vaults request; otherwise, the marker field is null. To return a list of vaults that begins at a specific vault, set the marker request parameter to the vault ARN you obtained from a previous List Vaults request. You can also limit the number of vaults returned in the response by specifying the limit parameter in the request. marker field contains the vault Amazon Resource Name (ARN) at which to continue the list with a new List Vaults request; otherwise, the marker field is null. To return a list of vaults that begins at a specific vault, set the marker request parameter to the vault ARN you obtained from a previous List Vaults request. You can also limit the number of vaults returned in the response by specifying the limit parameter in the request. access-policy subresource of the vault. An access policy is specific to a vault and is also called a vault subresource. 
You can set one access policy per vault and the policy can be up to 20 KB in size. For more information about vault access policies, see Amazon Glacier Access Control with Vault Access Policies. notification-configuration subresource of the vault. The request should include a JSON document that provides an Amazon SNS topic and specific events for which you want Amazon S3 Glacier to send notifications to the topic.
notification-configuration subresource of the vault. The request should include a JSON document that provides an Amazon SNS topic and specific events for which you want Amazon Glacier to send notifications to the topic.
x-amz-archive-id header of the response. x-amz-archive-id header of the response.
AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. 
If you use an account ID, do not include any hyphens ('-') in the ID.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID. AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID. AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.AccountId value is the AWS account ID of the account that owns the vault. 
You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID. AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID. AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID. AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID. AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID. 
AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID. 2012-03-20T17:03:43.221Z.2012-03-20T17:03:43.221Z.GetDataRetrievalPolicy request.GetDataRetrievalPolicy request.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.bytes=0-1048575. By default, this operation downloads the entire output.
",
+ "documentation":"bytes=0-1048575. By default, this operation downloads the entire output.
",
"location":"header",
"locationName":"Range"
}
},
- "documentation":"AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.InProgress state.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. 
If you use an account ID, do not include any hyphens ('-') in the ID.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID. AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID. AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID. AccountId value is the AWS account ID of the account that owns the vault. 
You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID. marker value from a previous List Jobs response. You only need to include the marker if you are continuing the pagination of the results started in a previous List Jobs request. AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID. AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID. null.AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID. AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID. null.AccountId value is the AWS account ID of the account that owns the vault. 
You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon S3 Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID. The AccountId value is the AWS account ID of the account that owns the vault.
"
+ "documentation":"
"
}
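The AccountId convention above (a literal '-' means "use the account that signed the request"; explicit IDs must contain no hyphens) can be sketched as a small helper. This is a hypothetical illustration, not part of botocore; the function name and validation error are assumptions.

```python
def resolve_glacier_account_id(account_id: str, caller_account_id: str) -> str:
    """Resolve the AccountId parameter for an Amazon S3 Glacier request."""
    if account_id == "-":
        # A single hyphen tells Glacier to use the credentials' own account.
        return caller_account_id
    if "-" in account_id:
        # Explicit account IDs must not include hyphens.
        raise ValueError("Do not include hyphens ('-') in an explicit account ID")
    return account_id

print(resolve_glacier_account_id("-", "123456789012"))             # caller's account
print(resolve_glacier_account_id("123456789012", "999999999999"))  # explicit ID wins
```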
diff --git a/awscli/botocore/data/health/2016-08-04/endpoint-rule-set-1.json b/awscli/botocore/data/health/2016-08-04/endpoint-rule-set-1.json
index d818b36f2521..0dc9da01e68d 100644
--- a/awscli/botocore/data/health/2016-08-04/endpoint-rule-set-1.json
+++ b/awscli/botocore/data/health/2016-08-04/endpoint-rule-set-1.json
@@ -29,6 +29,220 @@
}
},
"rules": [
+ {
+ "conditions": [
+ {
+ "fn": "not",
+ "argv": [
+ {
+ "fn": "isSet",
+ "argv": [
+ {
+ "ref": "Endpoint"
+ }
+ ]
+ }
+ ]
+ },
+ {
+ "fn": "booleanEquals",
+ "argv": [
+ {
+ "ref": "UseDualStack"
+ },
+ false
+ ]
+ },
+ {
+ "fn": "isSet",
+ "argv": [
+ {
+ "ref": "Region"
+ }
+ ]
+ },
+ {
+ "fn": "aws.partition",
+ "argv": [
+ {
+ "ref": "Region"
+ }
+ ],
+ "assign": "PartitionResult"
+ },
+ {
+ "fn": "not",
+ "argv": [
+ {
+ "fn": "stringEquals",
+ "argv": [
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "name"
+ ]
+ },
+ "aws"
+ ]
+ }
+ ]
+ },
+ {
+ "fn": "not",
+ "argv": [
+ {
+ "fn": "stringEquals",
+ "argv": [
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "name"
+ ]
+ },
+ "aws-cn"
+ ]
+ }
+ ]
+ },
+ {
+ "fn": "not",
+ "argv": [
+ {
+ "fn": "stringEquals",
+ "argv": [
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "name"
+ ]
+ },
+ "aws-us-gov"
+ ]
+ }
+ ]
+ },
+ {
+ "fn": "not",
+ "argv": [
+ {
+ "fn": "stringEquals",
+ "argv": [
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "name"
+ ]
+ },
+ "aws-iso"
+ ]
+ }
+ ]
+ },
+ {
+ "fn": "not",
+ "argv": [
+ {
+ "fn": "stringEquals",
+ "argv": [
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "name"
+ ]
+ },
+ "aws-iso-b"
+ ]
+ }
+ ]
+ },
+ {
+ "fn": "not",
+ "argv": [
+ {
+ "fn": "stringEquals",
+ "argv": [
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "name"
+ ]
+ },
+ "aws-iso-e"
+ ]
+ }
+ ]
+ },
+ {
+ "fn": "not",
+ "argv": [
+ {
+ "fn": "stringEquals",
+ "argv": [
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "name"
+ ]
+ },
+ "aws-iso-f"
+ ]
+ }
+ ]
+ }
+ ],
+ "rules": [
+ {
+ "conditions": [
+ {
+ "fn": "booleanEquals",
+ "argv": [
+ {
+ "ref": "UseFIPS"
+ },
+ true
+ ]
+ }
+ ],
+ "endpoint": {
+ "url": "https://health-fips.{Region}.{PartitionResult#dualStackDnsSuffix}",
+ "properties": {},
+ "headers": {}
+ },
+ "type": "endpoint"
+ },
+ {
+ "conditions": [],
+ "endpoint": {
+ "url": "https://health.{Region}.{PartitionResult#dualStackDnsSuffix}",
+ "properties": {},
+ "headers": {}
+ },
+ "type": "endpoint"
+ }
+ ],
+ "type": "tree"
+ },
{
"conditions": [
{
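The rule added to the health endpoint rule set above chains several conditions (no explicit Endpoint, UseDualStack false, a Region set, partition name not in a list of excluded partitions) and then picks a FIPS or non-FIPS URL template. A simplified sketch of that evaluation, with an assumed partition record and evaluator (not botocore's real rules engine):

```python
# Assumed partition result; real values come from aws.partition(Region).
PARTITION = {"name": "aws-example", "dualStackDnsSuffix": "api.example"}

EXCLUDED_PARTITIONS = {"aws", "aws-cn", "aws-us-gov", "aws-iso",
                       "aws-iso-b", "aws-iso-e", "aws-iso-f"}

def resolve_health_endpoint(region, use_fips, use_dual_stack, endpoint=None):
    """Mirror the rule's conditions; return None when they don't all hold."""
    if endpoint is not None or use_dual_stack or region is None:
        return None
    if PARTITION["name"] in EXCLUDED_PARTITIONS:
        return None
    suffix = PARTITION["dualStackDnsSuffix"]
    # First sub-rule: UseFIPS == true selects the health-fips host.
    if use_fips:
        return f"https://health-fips.{region}.{suffix}"
    # Fallback sub-rule with no conditions.
    return f"https://health.{region}.{suffix}"

print(resolve_health_endpoint("eu-west-1", use_fips=True, use_dual_stack=False))
# https://health-fips.eu-west-1.api.example
```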
diff --git a/awscli/botocore/data/logs/2014-03-28/paginators-1.json b/awscli/botocore/data/logs/2014-03-28/paginators-1.json
index 4df16d61b70a..ac2062d9f3bc 100644
--- a/awscli/botocore/data/logs/2014-03-28/paginators-1.json
+++ b/awscli/botocore/data/logs/2014-03-28/paginators-1.json
@@ -116,6 +116,12 @@
"limit_key": "maxResults",
"output_token": "nextToken",
"result_key": "sources"
+ },
+ "ListAggregateLogGroupSummaries": {
+ "input_token": "nextToken",
+ "limit_key": "limit",
+ "output_token": "nextToken",
+ "result_key": "aggregateLogGroupSummaries"
}
}
}
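The paginator entry added above follows botocore's standard pattern: `input_token`/`output_token` thread the cursor between calls, `limit_key` caps the page size, and `result_key` names the list to accumulate. A minimal sketch of those semantics against a fake API (the page data and operation below are stand-ins, not the real ListAggregateLogGroupSummaries):

```python
# Fake pages keyed by the incoming token; a nextToken of None ends pagination.
PAGES = {
    None: {"aggregateLogGroupSummaries": ["a", "b"], "nextToken": "t1"},
    "t1": {"aggregateLogGroupSummaries": ["c"], "nextToken": None},
}

def fake_api(nextToken=None, limit=50):
    return PAGES[nextToken]

def paginate(op, input_token, output_token, result_key, limit_key, page_size):
    """Accumulate result_key across pages, threading the token until exhausted."""
    token, results = None, []
    while True:
        page = op(**{input_token: token, limit_key: page_size})
        results.extend(page[result_key])
        token = page.get(output_token)
        if not token:
            return results

print(paginate(fake_api, "nextToken", "nextToken",
               "aggregateLogGroupSummaries", "limit", 50))
# ['a', 'b', 'c']
```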
diff --git a/awscli/botocore/data/logs/2014-03-28/service-2.json b/awscli/botocore/data/logs/2014-03-28/service-2.json
index abf133c801df..7359ad7e2578 100644
--- a/awscli/botocore/data/logs/2014-03-28/service-2.json
+++ b/awscli/botocore/data/logs/2014-03-28/service-2.json
@@ -61,6 +61,23 @@
],
"documentation":"Creates an export task so that you can efficiently export data from a log group to an Amazon S3 bucket. An export task can be in a PENDING or RUNNING state. When you use the CreateExportTask operation, you must use credentials that have permission to write to the S3 bucket that you specify as the destination. Each account can only have one active (RUNNING or PENDING) export task at a time. To cancel an export task, use CancelExportTask.
"
+ },
"CreateLogAnomalyDetector":{
"name":"CreateLogAnomalyDetector",
"http":{
@@ -291,7 +327,7 @@
{"shape":"OperationAbortedException"},
{"shape":"ServiceUnavailableException"}
],
- "documentation":"
[ { \"Effect\": \"Allow\", \"Action\": \"iam:PassRole\", \"Resource\": \"arn:aws:iam::123456789012:role/apiCallerCredentials\", \"Condition\": { \"StringLike\": { \"iam:AssociatedResourceARN\": \"arn:aws:logs:us-east-1:123456789012:log-group:aws/cloudtrail/f1d45bff-d0e3-4868-b5d9-2eb678aa32fb:*\" } } }, { \"Effect\": \"Allow\", \"Action\": [ \"cloudtrail:GetEventDataStoreData\" ], \"Resource\": [ \"arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/f1d45bff-d0e3-4868-b5d9-2eb678aa32fb\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"logs:CreateImportTask\", \"logs:CreateLogGroup\", \"logs:CreateLogStream\", \"logs:PutResourcePolicy\" ], \"Resource\": [ \"arn:aws:logs:us-east-1:123456789012:log-group:/aws/cloudtrail/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"kms:Decrypt\", \"kms:GenerateDataKey\" ], \"Resource\": [ \"arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012\" ] } ]
logs:PutDataProtectionPolicy and logs:PutAccountPolicy permissions.logs:PutSubscriptionFilter and logs:PutAccountPolicy permissions.logs:PutTransformer and logs:PutAccountPolicy permissions.logs:PutIndexPolicy and logs:PutAccountPolicy permissions.logs:PutIndexPolicy and logs:PutAccountPolicy permissions.logs:PutMetricExtractionPolicy and logs:PutAccountPolicy permissions.PutAccountPolicy to create a data protection policy for your whole account, it applies to both existing log groups and all log groups that are created later in this account. The account-level policy is applied to existing log groups with eventual consistency. It might take up to 5 minutes before sensitive data in existing log groups begins to be masked.logs:Unmask permission can use a GetLogEvents or FilterLogEvents operation with the unmask parameter set to true to view the unmasked log events. Users with the logs:Unmask can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs Insights query with the unmask query command.PutAccountPolicy operation for a data protection policy, you must be signed on with the logs:PutDataProtectionPolicy and logs:PutAccountPolicy permissions.PutAccountPolicy operation applies to all log groups in the account. You can use PutDataProtectionPolicy to create a data protection policy that applies to just one log group. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked.
PolicyName. To perform a PutAccountPolicy subscription filter operation for any destination except a Lambda function, you must also have the iam:PassRole permission.selectionCriteria parameter. If you have multiple account-level transformer policies with selection criteria, no two of them can use the same or overlapping log group name prefixes. For example, if you have one policy filtered to log groups that start with my-log, you can't have another field index policy filtered to my-logpprod or my-logging.
@logStream @aws.region @aws.account @source.log traceId PutTransformer and an account-level transformer that could apply to the same log group, the log group uses only the log-group level transformer. It ignores the account-level transformer.requestId. Then, any CloudWatch Logs Insights query on that log group that includes requestId = value or requestId in [value, value, ...] will attempt to process only the log events where the indexed field matches the specified value.RequestId won't match a log event containing requestId.selectionCriteria parameter. Field index policies can now be created for specific data source name and type combinations using DataSourceName and DataSourceType selection criteria. If you have multiple account-level index policies with selection criteria, no two of them can use the same or overlapping log group name prefixes. For example, if you have one policy filtered to log groups that start with my-log, you can't have another field index policy filtered to my-logpprod or my-logging.PutAccountPolicy. If you do so, that log group will use only that log-group level policy, and will ignore the account-level policy that you create with PutAccountPolicy.LogGroupNamePrefix NOT IN [\"/aws/containerinsights\", \"/aws/ecs/containerinsights\", \"/aws/application-signals/data\"].selectionCriteria parameter. The selection criteria supports filtering by LogGroupName and LogGroupNamePrefix using the operators IN and NOT IN. 
You can specify up to 50 values in each IN or NOT IN list.LogGroupName IN [\"log-group-1\", \"log-group-2\"] LogGroupNamePrefix NOT IN [\"/aws/prefix1\", \"/aws/prefix2\"] LogGroupNamePrefix IN [\"my-log\"], you can't have another metric extraction policy with selection criteria LogGroupNamePrefix IN [\"/my-log-prod\"] or LogGroupNamePrefix IN [\"/my-logging\"], as the set of log groups matching these prefixes would be a subset of the log groups matching the first policy's prefix, creating an overlap.NOT IN, only one policy with this operator is allowed per account.IN and NOT IN operators, the overlap check ensures that policies don't have conflicting effects. Two policies with IN and NOT IN operators do not overlap if and only if every value in the IN policy is completely contained within some value in the NOT IN policy. For example:
"
+ "documentation":"NOT IN policy for prefix \"/aws/lambda\", you can create an IN policy for the exact log group name \"/aws/lambda/function1\" because the set of log groups matching \"/aws/lambda/function1\" is a subset of the log groups matching \"/aws/lambda\".NOT IN policy for prefix \"/aws/lambda\", you cannot create an IN policy for prefix \"/aws\" because the set of log groups matching \"/aws\" is not a subset of the log groups matching \"/aws/lambda\".
logs:PutDataProtectionPolicy and logs:PutAccountPolicy permissions.logs:PutSubscriptionFilter and logs:PutAccountPolicy permissions.logs:PutTransformer and logs:PutAccountPolicy permissions.logs:PutIndexPolicy and logs:PutAccountPolicy permissions.logs:PutIndexPolicy and logs:PutAccountPolicy permissions.logs:PutMetricExtractionPolicy and logs:PutAccountPolicy permissions.PutAccountPolicy to create a data protection policy for your whole account, it applies to both existing log groups and all log groups that are created later in this account. The account-level policy is applied to existing log groups with eventual consistency. It might take up to 5 minutes before sensitive data in existing log groups begins to be masked.logs:Unmask permission can use a GetLogEvents or FilterLogEvents operation with the unmask parameter set to true to view the unmasked log events. Users with the logs:Unmask can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs Insights query with the unmask query command.PutAccountPolicy operation for a data protection policy, you must be signed on with the logs:PutDataProtectionPolicy and logs:PutAccountPolicy permissions.PutAccountPolicy operation applies to all log groups in the account. You can use PutDataProtectionPolicy to create a data protection policy that applies to just one log group. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked.
PolicyName. To perform a PutAccountPolicy subscription filter operation for any destination except a Lambda function, you must also have the iam:PassRole permission.selectionCriteria parameter. If you have multiple account-level transformer policies with selection criteria, no two of them can use the same or overlapping log group name prefixes. For example, if you have one policy filtered to log groups that start with my-log, you can't have another transformer policy filtered to my-logpprod or my-logging.PutTransformer and an account-level transformer that could apply to the same log group, the log group uses only the log-group level transformer. It ignores the account-level transformer.requestId. Then, any CloudWatch Logs Insights query on that log group that includes requestId = value or requestId in [value, value, ...] will attempt to process only the log events where the indexed field matches the specified value.RequestId won't match a log event containing requestId.LogGroupNamePrefix with the selectionCriteria parameter. You can have another 20 account-level field index policies using DataSourceName and DataSourceType for the selectionCriteria parameter. If you have multiple account-level index policies with LogGroupNamePrefix selection criteria, no two of them can use the same or overlapping log group name prefixes. For example, if you have one policy filtered to log groups that start with my-log, you can't have another field index policy filtered to my-logpprod or my-logging. Similarly, if you have multiple account-level index policies with DataSourceName and DataSourceType selection criteria, no two of them can use the same data source name and type combination. For example, if you have one policy filtered to the data source name amazon_vpc and data source type flow you cannot create another policy with this combination.
@logStream @aws.region @aws.account @source.log @data_source_name @data_source_type @data_format traceId severityText attributes.session.id amazon_vpc.flow
action logStatus region flowDirection type amazon_route53.resolver_query
transport rcode aws_waf.access
action httpRequest.country aws_cloudtrail.data, aws_cloudtrail.management
eventSource eventName awsRegion userAgent errorCode eventType managementEvent readOnly eventCategory requestId PutAccountPolicy. If you do so, that log group will use that log-group level policy and any account-level policies that match at the data source level; any account-level policy that matches at the log group level (for example, no selection criteria or log group name prefix selection criteria) will be ignored.LogGroupNamePrefix NOT IN [\"/aws/containerinsights\", \"/aws/ecs/containerinsights\", \"/aws/application-signals/data\"].selectionCriteria parameter. The selection criteria supports filtering by LogGroupName and LogGroupNamePrefix using the operators IN and NOT IN. You can specify up to 50 values in each IN or NOT IN list.LogGroupName IN [\"log-group-1\", \"log-group-2\"] LogGroupNamePrefix NOT IN [\"/aws/prefix1\", \"/aws/prefix2\"] LogGroupNamePrefix IN [\"my-log\"], you can't have another metric extraction policy with selection criteria LogGroupNamePrefix IN [\"/my-log-prod\"] or LogGroupNamePrefix IN [\"/my-logging\"], as the set of log groups matching these prefixes would be a subset of the log groups matching the first policy's prefix, creating an overlap.NOT IN, only one policy with this operator is allowed per account.IN and NOT IN operators, the overlap check ensures that policies don't have conflicting effects. Two policies with IN and NOT IN operators do not overlap if and only if every value in the IN policy is completely contained within some value in the NOT IN policy. For example:
"
},
"PutDataProtectionPolicy":{
"name":"PutDataProtectionPolicy",
@@ -2022,6 +2092,11 @@
}
},
"Baseline":{"type":"boolean"},
+ "BatchId":{
+ "type":"string",
+ "max":256,
+ "min":1
+ },
"Boolean":{"type":"boolean"},
"CSV":{
"type":"structure",
@@ -2055,6 +2130,41 @@
}
}
},
+ "CancelImportTaskRequest":{
+ "type":"structure",
+ "required":["importId"],
+ "members":{
+ "importId":{
+ "shape":"ImportId",
+ "documentation":"NOT IN policy for prefix \"/aws/lambda\", you can create an IN policy for the exact log group name \"/aws/lambda/function1\" because the set of log groups matching \"/aws/lambda/function1\" is a subset of the log groups matching \"/aws/lambda\".NOT IN policy for prefix \"/aws/lambda\", you cannot create an IN policy for prefix \"/aws\" because the set of log groups matching \"/aws\" is not a subset of the log groups matching \"/aws/lambda\".aws/.
DataIdentifer array and an Operation property with an Audit action. The DataIdentifer array lists the types of sensitive data that you want to mask. For more information about the available options, see Types of data that you can mask.Operation property with an Audit action is required to find the sensitive data terms. This Audit action must contain a FindingsDestination object. You can optionally use that FindingsDestination object to list one or more destinations to send audit findings to. If you specify destinations such as log groups, Firehose streams, and S3 buckets, they must already exist.DataIdentifer array and an Operation property with an Deidentify action. The DataIdentifer array must exactly match the DataIdentifer array in the first block of the policy.Operation property with the Deidentify action is what actually masks the data, and it must contain the \"MaskConfig\": {} object. The \"MaskConfig\": {} object must be empty.DataIdentifer arrays must match exactly.policyDocument can also include Name, Description, and Version fields. The Name is different than the operation's policyName parameter, and is used as a dimension when CloudWatch Logs reports audit findings metrics to CloudWatch.policyDocument can be up to 30,720 characters long.
Random for a more even distribution. This property is only applicable when the destination is a Kinesis Data Streams data stream.
RequestId and TransactionId.\"policyDocument\": \"{ \\\"Fields\\\": [ \\\"RequestId\\\", \\\"TransactionId\\\" ] }\"
DataIdentifer array and an Operation property with an Audit action. The DataIdentifer array lists the types of sensitive data that you want to mask. For more information about the available options, see Types of data that you can mask.Operation property with an Audit action is required to find the sensitive data terms. This Audit action must contain a FindingsDestination object. You can optionally use that FindingsDestination object to list one or more destinations to send audit findings to. If you specify destinations such as log groups, Firehose streams, and S3 buckets, they must already exist.DataIdentifer array and an Operation property with an Deidentify action. The DataIdentifer array must exactly match the DataIdentifer array in the first block of the policy.Operation property with the Deidentify action is what actually masks the data, and it must contain the \"MaskConfig\": {} object. The \"MaskConfig\": {} object must be empty.DataIdentifer arrays must match exactly.policyDocument can also include Name, Description, and Version fields. The Name is different than the operation's policyName parameter, and is used as a dimension when CloudWatch Logs reports audit findings metrics to CloudWatch.policyDocument can be up to 30,720 characters long.
Random for a more even distribution. This property is only applicable when the destination is a Kinesis Data Streams data stream.
\"policyDocument\": \"{ \\\"Fields\\\": [ \\\"TransactionId\\\" ], \\\"FieldsV2\\\": {\\\"RequestId\\\": {\\\"type\\\": \\\"FIELD_INDEX\\\"}, \\\"APIName\\\": {\\\"type\\\": \\\"FACET\\\"}, \\\"StatusCode\\\": {\\\"type\\\": \\\"FACET\\\"}}}\" FieldsV2 to specify the type for each field. Supported types are FIELD_INDEX and FACET. Field names within Fields and FieldsV2 must be mutually exclusive.selectionCriteria is valid only when you specify SUBSCRIPTION_FILTER_POLICY, FIELD_INDEX_POLICY or TRANSFORMER_POLICYfor policyType.policyType is SUBSCRIPTION_FILTER_POLICY, the only supported selectionCriteria filter is LogGroupName NOT IN [] policyType is FIELD_INDEX_POLICY or TRANSFORMER_POLICY, the only supported selectionCriteria filter is LogGroupNamePrefix selectionCriteria string can be up to 25KB in length. The length is determined by using its UTF-8 bytes.selectionCriteria parameter with SUBSCRIPTION_FILTER_POLICY is useful to help prevent infinite loops. For more information, see Log recursion prevention.selectionCriteria is valid only when you specify SUBSCRIPTION_FILTER_POLICY, FIELD_INDEX_POLICY or TRANSFORMER_POLICYfor policyType.
policyType is SUBSCRIPTION_FILTER_POLICY, the only supported selectionCriteria filter is LogGroupName NOT IN [] policyType is TRANSFORMER_POLICY, the only supported selectionCriteria filter is LogGroupNamePrefix policyType is FIELD_INDEX_POLICY, the supported selectionCriteria filters are:
LogGroupNamePrefix DataSourceName AND DataSourceType selectionCriteria for a field index policy you can use either LogGroupNamePrefix by itself or DataSourceName and DataSourceType together.selectionCriteria string can be up to 25KB in length. The length is determined by using its UTF-8 bytes.selectionCriteria parameter with SUBSCRIPTION_FILTER_POLICY is useful to help prevent infinite loops. For more information, see Log recursion prevention.
"
+ "documentation":"APPLICATION_LOGS and EVENT_LOGS.APPLICATION_LOGS.APPLICATION_LOGS, USAGE_LOGS and TRACES.APPLICATION_LOGS, USAGE_LOGS and TRACES.APPLICATION_LOGS and TRACES.APPLICATION_LOGS and TRACES.ACCESS_LOGS.EVENT_LOGS.EGRESS_ACCESS_LOGS and INGRESS_ACCESS_LOGS.AD_DECISION_SERVER_LOGS, MANIFEST_SERVICE_LOGS, and TRANSCODE_LOGS.WORKFLOW_LOGS.ERROR_LOGS.NLB_ACCESS_LOGS.PCS_SCHEDULER_LOGS and PCS_JOBCOMP_LOGS.APPLICATION_LOGS.EVENT_LOGS and SYNC_JOB_LOGS.APPLICATION_LOGS and TRAFFIC_POLICY_DEBUG_LOGS.ACCESS_CONTROL_LOGS, AUTHENTICATION_LOGS, WORKMAIL_AVAILABILITY_PROVIDER_LOGS, WORKMAIL_MAILBOX_ACCESS_LOGS, and WORKMAIL_PERSONAL_ACCESS_TOKEN_LOGS.EVENT_LOGS.
"
},
"tags":{
"shape":"Tags",
@@ -6681,7 +7026,7 @@
},
"policyDocument":{
"shape":"PolicyDocument",
- "documentation":"APPLICATION_LOGS and EVENT_LOGS.APPLICATION_LOGS.APPLICATION_LOGS, USAGE_LOGS and TRACES.APPLICATION_LOGS, USAGE_LOGS and TRACES.APPLICATION_LOGS and TRACES.APPLICATION_LOGS and TRACES.ACCESS_LOGS.EVENT_LOGS.EGRESS_ACCESS_LOGS and INGRESS_ACCESS_LOGS.AD_DECISION_SERVER_LOGS, MANIFEST_SERVICE_LOGS, and TRANSCODE_LOGS.WORKFLOW_LOGS.ERROR_LOGS.ALERT_LOGS, ALLOW_LOGS, and DENY_LOGS.NLB_ACCESS_LOGS.PCS_SCHEDULER_LOGS and PCS_JOBCOMP_LOGS.CHAT_LOGS and FEEDBACK_LOGS.APPLICATION_LOGS.EVENT_LOGS and SYNC_JOB_LOGS.APPLICATION_LOGS and TRAFFIC_POLICY_DEBUG_LOGS.ACCESS_CONTROL_LOGS, AUTHENTICATION_LOGS, WORKMAIL_AVAILABILITY_PROVIDER_LOGS, WORKMAIL_MAILBOX_ACCESS_LOGS, and WORKMAIL_PERSONAL_ACCESS_TOKEN_LOGS.EVENT_LOGS.RequestId and TransactionId.\"policyDocument\": \"{ \"Fields\": [ \"RequestId\", \"TransactionId\" ] }\" \"policyDocument\": \"{\"Fields\": [ \"TransactionId\" ], \"FieldsV2\": {\"RequestId\": {\"type\": \"FIELD_INDEX\"}, \"APIName\": {\"type\": \"FACET\"}, \"StatusCode\": {\"type\": \"FACET\"}}}\" FieldsV2 to specify the type for each field. Supported types are FIELD_INDEX and FACET. Field names within Fields and FieldsV2 must be mutually exclusive.GET and POST.POST requests.POST requests.NONE and GZIP. This value is only eligible for POST requests.
",
"box":true
+ },
+ "RniEnhancedMetricsEnabled":{
+ "shape":"RniEnhancedMetricsEnabled",
+ "documentation":"TargetIps is available only when the value of Rule type is FORWARD.TargetIps is available only when the value of Rule type is FORWARD. You should not provide TargetIps when the Rule type is DELEGATE.TargetIps parameter. If you provide the TargetIps, you may receive an ERROR message similar to \"Delegate resolver rules need to specify a nameserver name\". This error means you should not provide TargetIps.
"
+ },
+ "RniEnhancedMetricsEnabled":{
+ "shape":"RniEnhancedMetricsEnabled",
+ "documentation":"
StartQuotaUtilizationReport operation. The report generation process is asynchronous and may take several seconds to complete. Poll this operation periodically to check the status and retrieve results when the report is ready.NextToken parameter to retrieve additional pages of results. Reports are automatically deleted after 15 minutes.GetQuotaUtilizationReport operation to check the status and retrieve the results when the report is ready.StartQuotaUtilizationReport operation.
"
+ },
+ "GeneratedAt":{
+ "shape":"DateTime",
+ "documentation":"PENDING - The report generation is in progress. Retry this operation after a few seconds.IN_PROGRESS - The report is being processed. Continue polling until the status changes to COMPLETED.COMPLETED - The report is ready and quota utilization data is available in the response.FAILED - The report generation failed. Check the ErrorCode and ErrorMessage fields for details.FAILED. This field is only present when the status is FAILED.FAILED. This field is only present when the status is FAILED.
AutomaticManagement - The request was automatically created by Service Quotas Automatic Management when quota utilization approached the limit.GetQuotaUtilizationReport operation to retrieve the report results.PENDING when the report is first initiated.Instances per domain for Amazon OpenSearch Service.ACTIVE state. You can use the UpdateCertificate action to deactivate it.
"
+ "documentation":"ACTIVE state. You can use the UpdateCertificate action to deactivate it.TransferCertificate operation, the certificates will no longer be encrypted by their customer managed key configuration. During the transfer process, certificates are encrypted using Amazon Web Services IoT Core owned keys.
"
},
"UntagResource":{
"name":"UntagResource",
@@ -4336,7 +4336,7 @@
{"shape":"ServiceUnavailableException"},
{"shape":"InternalFailureException"}
],
- "documentation":"IN_PROGRESS. If the job execution status is not set to another terminal state before the timer expires, it will be automatically set to TIMED_OUT.payloadTemplate as a JSON document for preprocessing. This preprocessor substitutes placeholders with parameter values to generate the command execution request payload. commandParameterValue can only have one of the below fields listed.commandParameterValue can only have one of the below fields listed.payloadTemplate is processed by the service to generate the final payload sent to devices at StartCommandExecution API invocation.UNHEALTHY, the return value will be UNHEALTHY. To use a customer-managed KMS key, the value of configurationStatus must be HEALTHY. UNHEALTHY, the return value will be UNHEALTHY. To use a customer managed KMS key, the value of configurationStatus must be HEALTHY. errorCode.AWS-IoT namespace.mandatoryParameters or when StartCommandExecution is invoked.payloadTemplate is processed to generate command execution payload.payloadTemplate, and mandatoryParameters.StartCommandExecution API. These parameters need to be specified only when using the AWS-IoT-FleetWise namespace. You can either specify them here or when running the command using the StartCommandExecution API.StartCommandExecution API for execution payload generation.AWS-IoT-FleetWise namespace. The role grants IoT Device Management the permission to access IoT FleetWise resources for generating the payload for the command. This field is not required when you use the AWS-IoT namespace.AWS-IoT-FleetWise namespace. The role grants IoT Device Management the permission to access IoT FleetWise resources for generating the payload for the command. This field is not supported when you use the AWS-IoT namespace.payloadTemplate is processed to generate command execution payload.AWS-IoT-FleetWise as the namespace.
ACTIVE stream group, call UpdateStreamGroup. CreateStreamGroup request is successful, Amazon GameLift Streams assigns a unique ID to the stream group resource and sets the status to ACTIVATING. It can take a few minutes for Amazon GameLift Streams to finish creating the stream group while it searches for unallocated compute resources and provisions them. When complete, the stream group status will be ACTIVE and you can start stream sessions by using StartStreamSession. To check the stream group's status, call GetStreamGroup. ExpiresAt field.
ACTIVE stream group, call UpdateStreamGroup. CreateStreamGroup request is successful, Amazon GameLift Streams assigns a unique ID to the stream group resource and sets the status to ACTIVATING. It can take a few minutes for Amazon GameLift Streams to finish creating the stream group while it searches for unallocated compute resources and provisions them. When complete, the stream group status will be ACTIVE and you can start stream sessions by using StartStreamSession. To check the stream group's status, call GetStreamGroup. ExpiresAt field.ACTIVE status. You can change the description, the set of locations, and the requested capacity of a stream group per location. If you want to change the stream class, create a new stream group.
ACTIVE status. You can change the description, the set of locations, and the requested capacity of a stream group per location. If you want to change the stream class, create a new stream group.
"
+ "documentation":"gen5n_win2022 (NVIDIA, ultra) Supports applications with extremely high 3D scene complexity. Runs applications on Microsoft Windows Server 2022 Base and supports DirectX 12. Compatible with Unreal Engine versions up through 5.4, 32 and 64-bit applications, and anti-cheat technology. Uses NVIDIA A10G Tensor GPU.
gen5n_high (NVIDIA, high) Supports applications with moderate to high 3D scene complexity. Uses NVIDIA A10G Tensor GPU.
gen5n_ultra (NVIDIA, ultra) Supports applications with extremely high 3D scene complexity. Uses dedicated NVIDIA A10G Tensor GPU.
gen4n_win2022 (NVIDIA, ultra) Supports applications with extremely high 3D scene complexity. Runs applications on Microsoft Windows Server 2022 Base and supports DirectX 12. Compatible with Unreal Engine versions up through 5.4, 32 and 64-bit applications, and anti-cheat technology. Uses NVIDIA T4 Tensor GPU.
gen4n_high (NVIDIA, high) Supports applications with moderate to high 3D scene complexity. Uses NVIDIA T4 Tensor GPU.
gen4n_ultra (NVIDIA, ultra) Supports applications with high 3D scene complexity. Uses dedicated NVIDIA T4 Tensor GPU.
"
},
"DefaultApplicationIdentifier":{
"shape":"Identifier",
@@ -827,7 +831,7 @@
},
"StreamClass":{
"shape":"StreamClass",
-        "documentation":"gen5n_win2022 (NVIDIA, ultra) Supports applications with extremely high 3D scene complexity. Runs applications on Microsoft Windows Server 2022 Base and supports DirectX 12. Compatible with Unreal Engine versions up through 5.4, 32 and 64-bit applications, and anti-cheat technology. Uses NVIDIA A10G Tensor GPU.
gen5n_high (NVIDIA, high) Supports applications with moderate to high 3D scene complexity. Uses NVIDIA A10G Tensor GPU.
gen5n_ultra (NVIDIA, ultra) Supports applications with extremely high 3D scene complexity. Uses dedicated NVIDIA A10G Tensor GPU.
gen4n_win2022 (NVIDIA, ultra) Supports applications with extremely high 3D scene complexity. Runs applications on Microsoft Windows Server 2022 Base and supports DirectX 12. Compatible with Unreal Engine versions up through 5.4, 32 and 64-bit applications, and anti-cheat technology. Uses NVIDIA T4 Tensor GPU.
gen4n_high (NVIDIA, high) Supports applications with moderate to high 3D scene complexity. Uses NVIDIA T4 Tensor GPU.
gen4n_ultra (NVIDIA, ultra) Supports applications with high 3D scene complexity. Uses dedicated NVIDIA T4 Tensor GPU.
"
+        "documentation":"gen6n_pro_win2022 (NVIDIA, pro) Supports applications with extremely high 3D scene complexity which require maximum resources. Runs applications on Microsoft Windows Server 2022 Base and supports DirectX 12. Compatible with Unreal Engine versions up through 5.6, 32 and 64-bit applications, and anti-cheat technology. Uses NVIDIA L4 Tensor Core GPU.
gen6n_pro (NVIDIA, pro) Supports applications with extremely high 3D scene complexity which require maximum resources. Uses dedicated NVIDIA L4 Tensor Core GPU.
gen6n_ultra_win2022 (NVIDIA, ultra) Supports applications with high 3D scene complexity. Runs applications on Microsoft Windows Server 2022 Base and supports DirectX 12. Compatible with Unreal Engine versions up through 5.6, 32 and 64-bit applications, and anti-cheat technology. Uses NVIDIA L4 Tensor Core GPU.
gen6n_ultra (NVIDIA, ultra) Supports applications with high 3D scene complexity. Uses dedicated NVIDIA L4 Tensor Core GPU.
gen6n_high (NVIDIA, high) Supports applications with moderate to high 3D scene complexity. Uses NVIDIA L4 Tensor Core GPU.
gen6n_medium (NVIDIA, medium) Supports applications with moderate 3D scene complexity. Uses NVIDIA L4 Tensor Core GPU.
gen6n_small (NVIDIA, small) Supports applications with lightweight 3D scene complexity and low CPU usage. Uses NVIDIA L4 Tensor Core GPU.
gen5n_win2022 (NVIDIA, ultra) Supports applications with extremely high 3D scene complexity. Runs applications on Microsoft Windows Server 2022 Base and supports DirectX 12. Compatible with Unreal Engine versions up through 5.6, 32 and 64-bit applications, and anti-cheat technology. Uses NVIDIA A10G Tensor Core GPU.
gen5n_high (NVIDIA, high) Supports applications with moderate to high 3D scene complexity. Uses NVIDIA A10G Tensor Core GPU.
gen5n_ultra (NVIDIA, ultra) Supports applications with extremely high 3D scene complexity. Uses dedicated NVIDIA A10G Tensor Core GPU.
gen4n_win2022 (NVIDIA, ultra) Supports applications with extremely high 3D scene complexity. Runs applications on Microsoft Windows Server 2022 Base and supports DirectX 12. Compatible with Unreal Engine versions up through 5.6, 32 and 64-bit applications, and anti-cheat technology. Uses NVIDIA T4 Tensor Core GPU.
gen4n_high (NVIDIA, high) Supports applications with moderate to high 3D scene complexity. Uses NVIDIA T4 Tensor Core GPU.
gen4n_ultra (NVIDIA, ultra) Supports applications with high 3D scene complexity. Uses dedicated NVIDIA T4 Tensor Core GPU.
"
},
"Id":{
"shape":"Id",
@@ -839,7 +843,7 @@
},
"StatusReason":{
"shape":"StreamGroupStatusReason",
-        "documentation":"A short description of the reason that the stream group is in ERROR status. The possible reasons can be one of the following:
internalError: The request can't process right now because of an issue with the server. Try again later.
noAvailableInstances: Amazon GameLift Streams does not currently have enough available capacity to fulfill your request. Wait a few minutes and retry the request as capacity can shift frequently. You can also try to make the request using a different stream class or in another region.
"
+        "documentation":"A short description of the reason that the stream group is in ERROR status. The possible reasons can be one of the following:
internalError: The request can't process right now because of an issue with the server. Try again later.
noAvailableInstances: Amazon GameLift Streams does not currently have enough available on-demand capacity to fulfill your request. Wait a few minutes and retry the request as capacity can shift frequently. You can also try to make the request using a different stream class or in another region.
"
},
"LastUpdatedAt":{
"shape":"Timestamp",
@@ -1185,7 +1189,7 @@
},
"StreamClass":{
"shape":"StreamClass",
-        "documentation":"gen5n_win2022 (NVIDIA, ultra) Supports applications with extremely high 3D scene complexity. Runs applications on Microsoft Windows Server 2022 Base and supports DirectX 12. Compatible with Unreal Engine versions up through 5.4, 32 and 64-bit applications, and anti-cheat technology. Uses NVIDIA A10G Tensor GPU.
gen5n_high (NVIDIA, high) Supports applications with moderate to high 3D scene complexity. Uses NVIDIA A10G Tensor GPU.
gen5n_ultra (NVIDIA, ultra) Supports applications with extremely high 3D scene complexity. Uses dedicated NVIDIA A10G Tensor GPU.
gen4n_win2022 (NVIDIA, ultra) Supports applications with extremely high 3D scene complexity. Runs applications on Microsoft Windows Server 2022 Base and supports DirectX 12. Compatible with Unreal Engine versions up through 5.4, 32 and 64-bit applications, and anti-cheat technology. Uses NVIDIA T4 Tensor GPU.
gen4n_high (NVIDIA, high) Supports applications with moderate to high 3D scene complexity. Uses NVIDIA T4 Tensor GPU.
gen4n_ultra (NVIDIA, ultra) Supports applications with high 3D scene complexity. Uses dedicated NVIDIA T4 Tensor GPU.
"
+        "documentation":"gen6n_pro_win2022 (NVIDIA, pro) Supports applications with extremely high 3D scene complexity which require maximum resources. Runs applications on Microsoft Windows Server 2022 Base and supports DirectX 12. Compatible with Unreal Engine versions up through 5.6, 32 and 64-bit applications, and anti-cheat technology. Uses NVIDIA L4 Tensor Core GPU.
gen6n_pro (NVIDIA, pro) Supports applications with extremely high 3D scene complexity which require maximum resources. Uses dedicated NVIDIA L4 Tensor Core GPU.
gen6n_ultra_win2022 (NVIDIA, ultra) Supports applications with high 3D scene complexity. Runs applications on Microsoft Windows Server 2022 Base and supports DirectX 12. Compatible with Unreal Engine versions up through 5.6, 32 and 64-bit applications, and anti-cheat technology. Uses NVIDIA L4 Tensor Core GPU.
gen6n_ultra (NVIDIA, ultra) Supports applications with high 3D scene complexity. Uses dedicated NVIDIA L4 Tensor Core GPU.
gen6n_high (NVIDIA, high) Supports applications with moderate to high 3D scene complexity. Uses NVIDIA L4 Tensor Core GPU.
gen6n_medium (NVIDIA, medium) Supports applications with moderate 3D scene complexity. Uses NVIDIA L4 Tensor Core GPU.
gen6n_small (NVIDIA, small) Supports applications with lightweight 3D scene complexity and low CPU usage. Uses NVIDIA L4 Tensor Core GPU.
gen5n_win2022 (NVIDIA, ultra) Supports applications with extremely high 3D scene complexity. Runs applications on Microsoft Windows Server 2022 Base and supports DirectX 12. Compatible with Unreal Engine versions up through 5.6, 32 and 64-bit applications, and anti-cheat technology. Uses NVIDIA A10G Tensor Core GPU.
gen5n_high (NVIDIA, high) Supports applications with moderate to high 3D scene complexity. Uses NVIDIA A10G Tensor Core GPU.
gen5n_ultra (NVIDIA, ultra) Supports applications with extremely high 3D scene complexity. Uses dedicated NVIDIA A10G Tensor Core GPU.
gen4n_win2022 (NVIDIA, ultra) Supports applications with extremely high 3D scene complexity. Runs applications on Microsoft Windows Server 2022 Base and supports DirectX 12. Compatible with Unreal Engine versions up through 5.6, 32 and 64-bit applications, and anti-cheat technology. Uses NVIDIA T4 Tensor Core GPU.
gen4n_high (NVIDIA, high) Supports applications with moderate to high 3D scene complexity. Uses NVIDIA T4 Tensor Core GPU.
gen4n_ultra (NVIDIA, ultra) Supports applications with high 3D scene complexity. Uses dedicated NVIDIA T4 Tensor Core GPU.
"
},
"Id":{
"shape":"Id",
@@ -1197,7 +1201,7 @@
},
"StatusReason":{
"shape":"StreamGroupStatusReason",
-        "documentation":"A short description of the reason that the stream group is in ERROR status. The possible reasons can be one of the following:
internalError: The request can't process right now because of an issue with the server. Try again later.
noAvailableInstances: Amazon GameLift Streams does not currently have enough available capacity to fulfill your request. Wait a few minutes and retry the request as capacity can shift frequently. You can also try to make the request using a different stream class or in another region.
"
+        "documentation":"A short description of the reason that the stream group is in ERROR status. The possible reasons can be one of the following:
internalError: The request can't process right now because of an issue with the server. Try again later.
noAvailableInstances: Amazon GameLift Streams does not currently have enough available on-demand capacity to fulfill your request. Wait a few minutes and retry the request as capacity can shift frequently. You can also try to make the request using a different stream class or in another region.
"
},
"LastUpdatedAt":{
"shape":"Timestamp",
@@ -1297,6 +1301,10 @@
"shape":"EnvironmentVariables",
        "documentation":"AdditionalEnvironmentVariables and AdditionalLaunchArgs have similar purposes. AdditionalEnvironmentVariables passes data using environment variables; while AdditionalLaunchArgs passes data using command-line arguments.
"
},
"Status":{
"shape":"StreamGroupStatus",
@@ -2030,6 +2087,10 @@
"shape":"StreamSessionStatus",
        "documentation":"The current status of the stream session. A stream session can be in one of the following statuses:
ACTIVATING: The stream session is starting and preparing to stream.
ACTIVE: The stream session is ready and waiting for a client connection. A client has ConnectionTimeoutSeconds (specified in StartStreamSession) from when the session reaches ACTIVE state to establish a connection. If no client connects within this timeframe, the session automatically terminates.
CONNECTED: The stream session has a connected client. A session will automatically terminate if there is no user input for 60 minutes, or if the maximum length of a session specified by SessionLengthSeconds in StartStreamSession is exceeded.
ERROR: The stream session failed to activate. See StatusReason (returned by GetStreamSession and StartStreamSession) for more information.
PENDING_CLIENT_RECONNECTION: A client has recently disconnected and the stream session is waiting for the client to reconnect. A client has ConnectionTimeoutSeconds (specified in StartStreamSession) from when the session reaches PENDING_CLIENT_RECONNECTION state to re-establish a connection. If no client connects within this timeframe, the session automatically terminates.
RECONNECTING: A client has initiated a reconnect to a session that was in PENDING_CLIENT_RECONNECTION state.
TERMINATING: The stream session is ending.
TERMINATED: The stream session has ended.
"
      },
+      "StatusReason":{
+        "shape":"StreamSessionStatusReason",
+        "documentation":"A short description of the reason that the stream session is in ERROR status or TERMINATED status.
ERROR status reasons:
applicationLogS3DestinationError: Could not write the application log to the Amazon S3 bucket that is configured for the streaming application. Make sure the bucket still exists.
internalError: An internal service error occurred. Start a new stream session to continue streaming.
invalidSignalRequest: The WebRTC signal request that was sent is not valid. When starting or reconnecting to a stream session, use generateSignalRequest in the Amazon GameLift Streams Web SDK to generate a new signal request.
placementTimeout: Amazon GameLift Streams could not find available stream capacity to start a stream session. Increase the stream capacity in the stream group or wait until capacity becomes available.
TERMINATED status reasons:
apiTerminated: The stream session was terminated by an API call to TerminateStreamSession.
applicationExit: The streaming application exited or crashed. The stream session was terminated because the application is no longer running.
connectionTimeout: The stream session was terminated because the client failed to connect within the connection timeout period specified by ConnectionTimeoutSeconds.
idleTimeout: The stream session was terminated because it exceeded the idle timeout period of 60 minutes with no user input activity.
maxSessionLengthTimeout: The stream session was terminated because it exceeded the maximum session length timeout period specified by SessionLengthSeconds.
reconnectionTimeout: The stream session was terminated because the client failed to reconnect within the reconnection timeout period specified by ConnectionTimeoutSeconds after losing connection.
"
+      },
      "Protocol":{
        "shape":"Protocol",
        "documentation":"The data transfer protocol in use with the stream session.
"
},
"Id":{
"shape":"Id",
@@ -2316,7 +2382,7 @@
},
"StatusReason":{
"shape":"StreamGroupStatusReason",
-        "documentation":"A short description of the reason that the stream group is in ERROR status. The possible reasons can be one of the following:
internalError: The request can't process right now because of an issue with the server. Try again later.
noAvailableInstances: Amazon GameLift Streams does not currently have enough available capacity to fulfill your request. Wait a few minutes and retry the request as capacity can shift frequently. You can also try to make the request using a different stream class or in another region.
"
+        "documentation":"A short description of the reason that the stream group is in ERROR status. The possible reasons can be one of the following:
internalError: The request can't process right now because of an issue with the server. Try again later.
noAvailableInstances: Amazon GameLift Streams does not currently have enough available on-demand capacity to fulfill your request. Wait a few minutes and retry the request as capacity can shift frequently. You can also try to make the request using a different stream class or in another region.
"
},
"LastUpdatedAt":{
"shape":"Timestamp",
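The GameLift Streams status reasons above fall into two practical groups: transient conditions worth retrying (capacity and server-side issues) and conditions that require a configuration or client-side fix. A minimal sketch of that split; the reason strings come from the documentation in this diff, but the helper itself is hypothetical and not part of botocore or any AWS SDK:

```python
# Hypothetical classifier for Amazon GameLift Streams status reasons.
# Reason strings are taken from the service model documentation above;
# the grouping and function are illustrative only.

# Transient: starting a new session (possibly after a short wait) may succeed,
# since capacity can shift frequently and server issues can clear.
RETRIABLE_REASONS = {
    "internalError",
    "noAvailableInstances",
    "placementTimeout",
}

# Not transient: retrying the same request unchanged will not help;
# fix the signal request, S3 bucket configuration, or the application itself.
NON_RETRIABLE_REASONS = {
    "invalidSignalRequest",
    "applicationLogS3DestinationError",
    "applicationExit",
}

def should_retry(status_reason: str) -> bool:
    """Return True if retrying (e.g. StartStreamSession) is reasonable."""
    return status_reason in RETRIABLE_REASONS
```

A caller polling GetStreamSession could use this to decide between backing off and retrying versus surfacing the error to the user.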
diff --git a/awscli/botocore/data/guardduty/2017-11-28/service-2.json b/awscli/botocore/data/guardduty/2017-11-28/service-2.json
index 6c93955e9ba8..fcd1daa79bfc 100644
--- a/awscli/botocore/data/guardduty/2017-11-28/service-2.json
+++ b/awscli/botocore/data/guardduty/2017-11-28/service-2.json
@@ -9221,6 +9221,11 @@
"documentation":"internalError: The request can't process right now because of an issue with the server. Try again later.noAvailableInstances: Amazon GameLift Streams does not currently have enough available capacity to fulfill your request. Wait a few minutes and retry the request as capacity can shift frequently. You can also try to make the request using a different stream class or in another region.ValidException error.capacity or connectorConfiguration.customPlugin object.customPlugin object.
Details about delivering logs to Amazon CloudWatch Logs.
"
      },
+      "networkType":{
+        "shape":"NetworkType",
+        "documentation":"The network type of the connector. It gives connectors connectivity to either IPv4 (IPV4) or IPv4 and IPv6 (DUAL) destinations. Defaults to IPV4.
"
+      },
      "plugins":{
        "shape":"__listOfPluginDescription",
        "documentation":"Specifies which plugins were used for this connector.
"
@@ -1738,6 +1759,14 @@
      "max":100,
      "min":1
    },
+    "NetworkType":{
+      "type":"string",
+      "documentation":"The network type of a connector.
",
+      "enum":[
+        "IPV4",
+        "DUAL"
+      ]
+    },
    "NotFoundException":{
      "type":"structure",
      "members":{
@@ -2018,8 +2047,7 @@
    },
    "TagResourceResponse":{
      "type":"structure",
-      "members":{
-      }
+      "members":{}
    },
    "TagValue":{
      "type":"string",
@@ -2080,8 +2108,7 @@
    },
    "UntagResourceResponse":{
      "type":"structure",
-      "members":{
-      }
+      "members":{}
    },
    "UpdateConnectorRequest":{
      "type":"structure",
diff --git a/awscli/botocore/data/mediaconvert/2017-08-29/service-2.json b/awscli/botocore/data/mediaconvert/2017-08-29/service-2.json
index 692571747709..a598ecceb06e 100644
--- a/awscli/botocore/data/mediaconvert/2017-08-29/service-2.json
+++ b/awscli/botocore/data/mediaconvert/2017-08-29/service-2.json
@@ -2146,7 +2146,7 @@
    },
    "AudioDefaultSelection": {
      "type": "string",
-      "documentation": "Enable this setting on one audio selector to set it as the default for the job. The service uses this default for outputs where it can't find the specified input audio. If you don't set a default, those outputs have no audio.",
+      "documentation": "Specify a fallback audio selector for this input. Use to ensure outputs have audio even when the audio selector you specify in your output is missing from the source. DEFAULT (Checked in the MediaConvert console): If your output settings specify an audio selector that does not exist in this input, MediaConvert uses this audio selector instead. This is useful when you have multiple inputs with a different number of audio tracks. NOT_DEFAULT (Unchecked in the MediaConvert console): MediaConvert will not fallback from any missing audio selector. Any output specifying a missing audio selector will be silent.",
      "enum": [
        "DEFAULT",
        "NOT_DEFAULT"
@@ -2375,7 +2375,7 @@
      "DefaultSelection": {
        "shape": "AudioDefaultSelection",
        "locationName": "defaultSelection",
-        "documentation": "Enable this setting on one audio selector to set it as the default for the job. The service uses this default for outputs where it can't find the specified input audio. If you don't set a default, those outputs have no audio."
+        "documentation": "Specify a fallback audio selector for this input. Use to ensure outputs have audio even when the audio selector you specify in your output is missing from the source. DEFAULT (Checked in the MediaConvert console): If your output settings specify an audio selector that does not exist in this input, MediaConvert uses this audio selector instead. This is useful when you have multiple inputs with a different number of audio tracks. NOT_DEFAULT (Unchecked in the MediaConvert console): MediaConvert will not fallback from any missing audio selector. Any output specifying a missing audio selector will be silent."
      },
      "ExternalAudioFileInput": {
        "shape": "__stringPatternS3Https",
@@ -3400,7 +3400,8 @@
        "TELETEXT",
        "NULL_SOURCE",
        "IMSC",
-        "WEBVTT"
+        "WEBVTT",
+        "TT_3GPP"
      ]
    },
    "CaptionSourceUpconvertSTLToTeletext": {
@@ -7284,6 +7285,22 @@
        "FOLLOW_BOTTOM_FIELD"
      ]
    },
+    "H265MvOverPictureBoundaries": {
+      "type": "string",
+      "documentation": "If you are setting up the picture as a tile, you must set this to \"disabled\". In all other configurations, you typically enter \"enabled\".",
+      "enum": [
+        "ENABLED",
+        "DISABLED"
+      ]
+    },
+    "H265MvTemporalPredictor": {
+      "type": "string",
+      "documentation": "If you are setting up the picture as a tile, you must set this to \"disabled\". In other configurations, you typically enter \"enabled\".",
+      "enum": [
+        "ENABLED",
+        "DISABLED"
+      ]
+    },
    "H265ParControl": {
      "type": "string",
      "documentation": "Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source, uses the PAR from your input video for your output. To specify a different PAR, choose any value other than Follow source. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.",
@@ -7480,6 +7497,16 @@
        "locationName": "minIInterval",
        "documentation": "Specify the minimum number of frames allowed between two IDR-frames in your output. This includes frames created at the start of a GOP or a scene change. Use Min I-Interval to improve video compression by varying GOP size when two IDR-frames would be created near each other. For example, if a regular cadence-driven IDR-frame would fall within 5 frames of a scene-change IDR-frame, and you set Min I-interval to 5, then the encoder would only write an IDR-frame for the scene-change. In this way, one GOP is shortened or extended. If a cadence-driven IDR-frame would be further than 5 frames from a scene-change IDR-frame, then the encoder leaves all IDR-frames in place. To use an automatically determined interval: We recommend that you keep this value blank. This allows for MediaConvert to use an optimal setting according to the characteristics of your input video, and results in better video compression. To manually specify an interval: Enter a value from 1 to 30. Use when your downstream systems have specific GOP size requirements. To disable GOP size variance: Enter 0. MediaConvert will only create IDR-frames at the start of your output's cadence-driven GOP. Use when your downstream systems require a regular GOP size."
      },
+      "MvOverPictureBoundaries": {
+        "shape": "H265MvOverPictureBoundaries",
+        "locationName": "mvOverPictureBoundaries",
+        "documentation": "If you are setting up the picture as a tile, you must set this to \"disabled\". In all other configurations, you typically enter \"enabled\"."
+      },
+      "MvTemporalPredictor": {
+        "shape": "H265MvTemporalPredictor",
+        "locationName": "mvTemporalPredictor",
+        "documentation": "If you are setting up the picture as a tile, you must set this to \"disabled\". In other configurations, you typically enter \"enabled\"."
+      },
      "NumberBFramesBetweenReferenceFrames": {
        "shape": "__integerMin0Max7",
        "locationName": "numberBFramesBetweenReferenceFrames",
@@ -7570,11 +7597,31 @@
        "locationName": "temporalIds",
        "documentation": "Enables temporal layer identifiers in the encoded bitstream. Up to 3 layers are supported depending on GOP structure: I- and P-frames form one layer, reference B-frames can form a second layer and non-reference b-frames can form a third layer. Decoders can optionally decode only the lower temporal layers to generate a lower frame rate output. For example, given a bitstream with temporal IDs and with b-frames = 1 (i.e. IbPbPb display order), a decoder could decode all the frames for full frame rate output or only the I and P frames (lowest temporal layer) for a half frame rate output."
      },
+      "TileHeight": {
+        "shape": "__integerMin64Max2160",
+        "locationName": "tileHeight",
+        "documentation": "Set this field to set up the picture as a tile. You must also set TileWidth. The tile height must result in 22 or fewer rows in the frame. The tile width must result in 20 or fewer columns in the frame. And finally, the product of the column count and row count must be 64 or less. If the tile width and height are specified, MediaConvert will override the video codec slices field with a value that MediaConvert calculates."
+      },
+      "TilePadding": {
+        "shape": "H265TilePadding",
+        "locationName": "tilePadding",
+        "documentation": "Set to \"padded\" to force MediaConvert to add padding to the frame, to obtain a frame that is a whole multiple of the tile size. If you are setting up the picture as a tile, you must enter \"padded\". In all other configurations, you typically enter \"none\"."
+      },
+      "TileWidth": {
+        "shape": "__integerMin256Max3840",
+        "locationName": "tileWidth",
+        "documentation": "Set this field to set up the picture as a tile. See TileHeight for more information."
+      },
      "Tiles": {
        "shape": "H265Tiles",
        "locationName": "tiles",
        "documentation": "Enable use of tiles, allowing horizontal as well as vertical subdivision of the encoded pictures."
      },
+      "TreeBlockSize": {
+        "shape": "H265TreeBlockSize",
+        "locationName": "treeBlockSize",
+        "documentation": "Select the tree block size used for encoding. If you enter \"auto\", the encoder will pick the best size. If you are setting up the picture as a tile, you must set this to 32x32. In all other configurations, you typically enter \"auto\"."
+      },
      "UnregisteredSeiTimecode": {
        "shape": "H265UnregisteredSeiTimecode",
        "locationName": "unregisteredSeiTimecode",
@@ -7629,6 +7676,14 @@
        "ENABLED"
      ]
    },
+    "H265TilePadding": {
+      "type": "string",
+      "documentation": "Set to \"padded\" to force MediaConvert to add padding to the frame, to obtain a frame that is a whole multiple of the tile size. If you are setting up the picture as a tile, you must enter \"padded\". In all other configurations, you typically enter \"none\".",
+      "enum": [
+        "NONE",
+        "PADDED"
+      ]
+    },
    "H265Tiles": {
      "type": "string",
      "documentation": "Enable use of tiles, allowing horizontal as well as vertical subdivision of the encoded pictures.",
@@ -7637,6 +7692,14 @@
+    "H265TreeBlockSize": {
+      "type": "string",
+      "documentation": "Select the tree block size used for encoding. If you enter \"auto\", the encoder will pick the best size. If you are setting up the picture as a tile, you must set this to 32x32. In all other configurations, you typically enter \"auto\".",
+      "enum": [
+        "AUTO",
+        "TREE_SIZE_32X32"
+      ]
+    },
    "H265UnregisteredSeiTimecode": {
      "type": "string",
      "documentation": "Inserts timecode for each frame as 4 bytes of an unregistered SEI message.",
@@ -8830,6 +8893,11 @@
        "locationName": "height",
        "documentation": "Specify the height, in pixels, for your video generator input. This is useful for positioning when you include one or more video overlays for this input. To use the default resolution 540x360: Leave both width and height blank. To specify a height: Enter an even integer from 32 to 8192. When you do, you must also specify a value for width."
      },
+      "ImageInput": {
+        "shape": "__stringMin14PatternS3BmpBMPPngPNGTgaTGAHttpsBmpBMPPngPNGTgaTGA",
+        "locationName": "imageInput",
+        "documentation": "Specify the HTTP, HTTPS, or Amazon S3 location of the image that you want to overlay on the video. Use a PNG or TGA file."
+      },
      "SampleRate": {
        "shape": "__integerMin32000Max48000",
        "locationName": "sampleRate",
@@ -14145,6 +14213,11 @@
    "VideoOverlayInput": {
      "type": "structure",
      "members": {
+        "AudioSelectors": {
+          "shape": "__mapOfAudioSelector",
+          "locationName": "audioSelectors",
+          "documentation": "Use Audio selectors to specify audio to use during your Video overlay. You can use multiple Audio selectors per Video overlay. When you include an Audio selector within a Video overlay, MediaConvert mutes any Audio selectors with the same name from the underlying input. For example, if your underlying input has Audio selector 1 and Audio selector 2, and your Video overlay only has Audio selector 1, then MediaConvert replaces all audio for Audio selector 1 during the Video overlay. To replace all audio for all Audio selectors from the underlying input by using a single Audio selector in your overlay, set DefaultSelection to DEFAULT (Check \\"Use as default\\" in the MediaConvert console)."
+        },
        "FileInput": {
          "shape": "__stringPatternS3Https",
          "locationName": "fileInput",
@@ -15601,6 +15674,11 @@
      "min": 24,
      "max": 60000
    },
+    "__integerMin256Max3840": {
+      "type": "integer",
+      "min": 256,
+      "max": 3840
+    },
    "__integerMin25Max10000": {
      "type": "integer",
      "min": 25,
@@ -15681,6 +15759,11 @@
      "min": 64000,
      "max": 640000
    },
+    "__integerMin64Max2160": {
+      "type": "integer",
+      "min": 64,
+      "max": 2160
+    },
    "__integerMin6Max16": {
      "type": "integer",
      "min": 6,
diff --git a/awscli/botocore/data/mediapackagev2/2022-12-25/service-2.json b/awscli/botocore/data/mediapackagev2/2022-12-25/service-2.json
index 7c56910e5085..94c42e7b5496 100644
--- a/awscli/botocore/data/mediapackagev2/2022-12-25/service-2.json
+++ b/awscli/botocore/data/mediapackagev2/2022-12-25/service-2.json
@@ -3818,10 +3818,20 @@
      "Url":{
        "shape":"SpekeKeyProviderUrlString",
        "documentation":"The URL of the API Gateway proxy that you set up to talk to your key server. The API Gateway proxy must reside in the same AWS Region as MediaPackage and must start with https://.
The following example shows a URL: https://1wm2dx1f33.execute-api.us-west-2.amazonaws.com/SpekeSample/copyProtection
The ARN for the certificate that you imported to AWS Certificate Manager to add content key encryption to this endpoint. For this feature to work, your DRM key provider must support content key encryption.
" } }, "documentation":"The parameters for the SPEKE key provider.
" }, + "SpekeKeyProviderCertificateArnString":{ + "type":"string", + "max":2048, + "min":20, + "pattern":"arn:([^:\\n]+):acm:([^:\\n]+):([0-9]+):certificate/[a-zA-Z0-9-_]+" + }, "SpekeKeyProviderDrmSystemsList":{ "type":"list", "member":{"shape":"DrmSystem"}, @@ -4361,7 +4371,16 @@ "MALFORMED_SECRET_ARN", "SECRET_FROM_DIFFERENT_ACCOUNT", "SECRET_FROM_DIFFERENT_REGION", - "INVALID_SECRET" + "INVALID_SECRET", + "RESOURCE_NOT_IN_SAME_REGION", + "CERTIFICATE_RESOURCE_NOT_FOUND", + "CERTIFICATE_ACCESS_DENIED", + "DESCRIBE_CERTIFICATE_FAILED", + "INVALID_CERTIFICATE_STATUS", + "INVALID_CERTIFICATE_KEY_ALGORITHM", + "INVALID_CERTIFICATE_SIGNATURE_ALGORITHM", + "MISSING_CERTIFICATE_DOMAIN_NAME", + "INVALID_ARN" ] } }, diff --git a/awscli/botocore/data/payment-cryptography-data/2022-02-03/service-2.json b/awscli/botocore/data/payment-cryptography-data/2022-02-03/service-2.json index 20e6d1cdf162..17dadde2096e 100644 --- a/awscli/botocore/data/payment-cryptography-data/2022-02-03/service-2.json +++ b/awscli/botocore/data/payment-cryptography-data/2022-02-03/service-2.json @@ -49,6 +49,24 @@ ], "documentation":"Encrypts plaintext data to ciphertext using a symmetric (TDES, AES), asymmetric (RSA), or derived (DUKPT or EMV) encryption key scheme. For more information, see Encrypt data in the Amazon Web Services Payment Cryptography User Guide.
You can generate an encryption key within Amazon Web Services Payment Cryptography by calling CreateKey. You can import your own encryption key by calling ImportKey.
For this operation, the key must have KeyModesOfUse set to Encrypt. In asymmetric encryption, plaintext is encrypted using the public component. You can import the public component of an asymmetric key pair created outside Amazon Web Services Payment Cryptography by calling ImportKey.
This operation also supports dynamic keys, allowing you to pass a dynamic encryption key as a TR-31 WrappedKeyBlock. This can be used when key material is frequently rotated, such as during every card transaction, and there is a need to avoid importing short-lived keys into Amazon Web Services Payment Cryptography. To encrypt using dynamic keys, the keyARN is the Key Encryption Key (KEK) of the TR-31 wrapped encryption key material. The incoming wrapped key must have a key purpose of D0 with a mode of use of B or D. For more information, see Using Dynamic Keys in the Amazon Web Services Payment Cryptography User Guide.
For symmetric and DUKPT encryption, Amazon Web Services Payment Cryptography supports TDES and AES algorithms. For EMV encryption, Amazon Web Services Payment Cryptography supports TDES algorithms. For asymmetric encryption, Amazon Web Services Payment Cryptography supports RSA.
When you use TDES or TDES DUKPT, the plaintext data length must be a multiple of 8 bytes. For AES or AES DUKPT, the plaintext data length must be a multiple of 16 bytes. For RSA, it should be equal to the key size unless padding is enabled.
To encrypt using DUKPT, you must already have a BDK (Base Derivation Key) key in your account with KeyModesOfUse set to DeriveKey, or you can generate a new DUKPT key by calling CreateKey. To encrypt using EMV, you must already have an IMK (Issuer Master Key) key in your account with KeyModesOfUse set to DeriveKey.
For information about valid keys for this operation, see Understanding key attributes and Key types for specific data operations in the Amazon Web Services Payment Cryptography User Guide.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" }, + "GenerateAs2805KekValidation":{ + "name":"GenerateAs2805KekValidation", + "http":{ + "method":"POST", + "requestUri":"/as2805kekvalidation/generate", + "responseCode":200 + }, + "input":{"shape":"GenerateAs2805KekValidationInput"}, + "output":{"shape":"GenerateAs2805KekValidationOutput"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Establishes node-to-node initialization between payment processing nodes such as an acquirer, issuer or payment network using Australian Standard 2805 (AS2805).
During node-to-node initialization, both communicating nodes must validate that they possess the correct Key Encrypting Keys (KEKs) before proceeding with session key exchange. In AS2805, the sending KEK (KEKs) of one node corresponds to the receiving KEK (KEKr) of its partner node. Each node uses its KEK to encrypt and decrypt session keys exchanged between the nodes. A KEK can be created or imported into Amazon Web Services Payment Cryptography using either the CreateKey or ImportKey operations.
The node initiating communication can use GenerateAs2805KekValidation to generate a combined KEK validation request and KEK validation response to send to the partnering node for validation. When invoked, the API internally generates a random sending key encrypted under KEKs and provides a receiving key encrypted under KEKr as the response. The initiating node sends the response returned by this API to its partner for validation.
For information about valid keys for this operation, see Understanding key attributes and Key types for specific data operations in the Amazon Web Services Payment Cryptography User Guide.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
" + }, "GenerateCardValidationData":{ "name":"GenerateCardValidationData", "http":{ @@ -83,7 +101,7 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServerException"} ], - "documentation":"Generates a Message Authentication Code (MAC) cryptogram within Amazon Web Services Payment Cryptography.
You can use this operation to authenticate card-related data by using known data values to generate MAC for data validation between the sending and receiving parties. This operation uses message data, a secret encryption key and MAC algorithm to generate a unique MAC value for transmission. The receiving party of the MAC must use the same message data, secret encryption key and MAC algorithm to reproduce another MAC value for comparision.
You can use this operation to generate a DUPKT, CMAC, HMAC or EMV MAC by setting generation attributes and algorithm to the associated values. The MAC generation encryption key must have valid values for KeyUsage such as TR31_M7_HMAC_KEY for HMAC generation, and the key must have KeyModesOfUse set to Generate and Verify.
For information about valid keys for this operation, see Understanding key attributes and Key types for specific data operations in the Amazon Web Services Payment Cryptography User Guide.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + "documentation":"Generates a Message Authentication Code (MAC) cryptogram within Amazon Web Services Payment Cryptography.
You can use this operation to authenticate card-related data by using known data values to generate a MAC for data validation between the sending and receiving parties. This operation uses message data, a secret encryption key, and a MAC algorithm to generate a unique MAC value for transmission. The receiving party of the MAC must use the same message data, secret encryption key, and MAC algorithm to reproduce another MAC value for comparison.
You can use this operation to generate a DUKPT, CMAC, HMAC or EMV MAC by setting generation attributes and algorithm to the associated values. The MAC generation encryption key must have valid values for KeyUsage such as TR31_M7_HMAC_KEY for HMAC generation, and the key must have KeyModesOfUse set to Generate.
For information about valid keys for this operation, see Understanding key attributes and Key types for specific data operations in the Amazon Web Services Payment Cryptography User Guide.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" }, "GenerateMacEmvPinChange":{ "name":"GenerateMacEmvPinChange", @@ -155,7 +173,7 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServerException"} ], - "documentation":"Translates an encryption key between different wrapping keys without importing the key into Amazon Web Services Payment Cryptography.
This operation can be used when key material is frequently rotated, such as during every card transaction, and there is a need to avoid importing short-lived keys into Amazon Web Services Payment Cryptography. It translates short-lived transaction keys such as Pin Encryption Key (PEK) generated for each transaction and wrapped with an ECDH (Elliptic Curve Diffie-Hellman) derived wrapping key to another KEK (Key Encryption Key) wrapping key.
Before using this operation, you must first request the public key certificate of the ECC key pair generated within Amazon Web Services Payment Cryptography to establish an ECDH key agreement. In TranslateKeyData, the service uses its own ECC key pair, public certificate of receiving ECC key pair, and the key derivation parameters to generate a derived key. The service uses this derived key to unwrap the incoming transaction key received as a TR31WrappedKeyBlock and re-wrap using a user provided KEK to generate an outgoing Tr31WrappedKeyBlock. For more information on establishing ECDH derived keys, see the Creating keys in the Amazon Web Services Payment Cryptography User Guide.
For information about valid keys for this operation, see Understanding key attributes and Key types for specific data operations in the Amazon Web Services Payment Cryptography User Guide.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + "documentation":"Translates an cryptographic key between different wrapping keys without importing the key into Amazon Web Services Payment Cryptography.
This operation can be used when key material is frequently rotated, such as during every card transaction, and there is a need to avoid importing short-lived keys into Amazon Web Services Payment Cryptography. It translates short-lived transaction keys such as PEK generated for each transaction and wrapped with an ECDH derived wrapping key to another KEK wrapping key.
Before using this operation, you must first request the public key certificate of the ECC key pair generated within Amazon Web Services Payment Cryptography to establish an ECDH key agreement. In TranslateKeyData, the service uses its own ECC key pair, the public certificate of the receiving ECC key pair, and the key derivation parameters to generate a derived key. The service uses this derived key to unwrap the incoming transaction key received as a TR31WrappedKeyBlock and re-wrap it using a user-provided KEK to generate an outgoing Tr31WrappedKeyBlock.
For information about valid keys for this operation, see Understanding key attributes and Key types for specific data operations in the Amazon Web Services Payment Cryptography User Guide.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" }, "TranslatePinData":{ "name":"TranslatePinData", @@ -338,6 +356,46 @@ "pattern":"[0-9a-fA-F]+", "sensitive":true }, + "As2805KekValidationType":{ + "type":"structure", + "members":{ + "KekValidationRequest":{ + "shape":"KekValidationRequest", + "documentation":"Parameter information for generating a KEK validation request during node-to-node initialization.
" + }, + "KekValidationResponse":{ + "shape":"KekValidationResponse", + "documentation":"Parameter information for generating a KEK validation response during node-to-node initialization.
" + } + }, + "documentation":"Parameter information for generating a random key for KEK validation to perform node-to-node initialization.
", + "union":true + }, + "As2805PekDerivationAttributes":{ + "type":"structure", + "required":[ + "SystemTraceAuditNumber", + "TransactionAmount" + ], + "members":{ + "SystemTraceAuditNumber":{ + "shape":"SystemTraceAuditNumberType", + "documentation":"The system trace audit number for the transaction.
" + }, + "TransactionAmount":{ + "shape":"TransactionAmountType", + "documentation":"The transaction amount for the transaction.
" + } + }, + "documentation":"Parameter information to use a PEK derived using AS2805.
" + }, + "As2805RandomKeyMaterial":{ + "type":"string", + "max":48, + "min":32, + "pattern":"(?:[0-9a-fA-F]{32}|[0-9a-fA-F]{48})", + "sensitive":true + }, "AsymmetricEncryptionAttributes":{ "type":"structure", "members":{ @@ -1076,6 +1134,55 @@ "OFB" ] }, + "GenerateAs2805KekValidationInput":{ + "type":"structure", + "required":[ + "KeyIdentifier", + "KekValidationType", + "RandomKeySendVariantMask" + ], + "members":{ + "KeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The keyARN of sending KEK that Amazon Web Services Payment Cryptography uses for node-to-node initialization
Parameter information for generating a random key for KEK validation to perform node-to-node initialization.
" + }, + "RandomKeySendVariantMask":{ + "shape":"RandomKeySendVariantMask", + "documentation":"The key variant to use for generating a random key for KEK validation during node-to-node initialization.
" + } + } + }, + "GenerateAs2805KekValidationOutput":{ + "type":"structure", + "required":[ + "KeyArn", + "KeyCheckValue", + "RandomKeySend", + "RandomKeyReceive" + ], + "members":{ + "KeyArn":{ + "shape":"KeyArn", + "documentation":"The keyARN of sending KEK that Amazon Web Services Payment Cryptography validates for node-to-node initialization
The key check value (KCV) of the sending KEK that Amazon Web Services Payment Cryptography validates for node-to-node initialization.
" + }, + "RandomKeySend":{ + "shape":"As2805RandomKeyMaterial", + "documentation":"The random key generated for sending KEK validation.
" + }, + "RandomKeyReceive":{ + "shape":"As2805RandomKeyMaterial", + "documentation":"The random key generated for receiving KEK validation. The initiating node sends this key to its partner node for validation.
" + } + } + }, "GenerateCardValidationDataInput":{ "type":"structure", "required":[ @@ -1296,7 +1403,7 @@ }, "PinBlockFormat":{ "shape":"PinBlockFormatForPinData", - "documentation":"The PIN encoding format for pin data generation as specified in ISO 9564. Amazon Web Services Payment Cryptography supports ISO_Format_0, ISO_Format_3 and ISO_Format_4.
The ISO_Format_0 PIN block format is equivalent to the ANSI X9.8, VISA-1, and ECI-1 PIN block formats. It is similar to a VISA-4 PIN block format. It supports a PIN from 4 to 12 digits in length.
The ISO_Format_3 PIN block format is the same as ISO_Format_0 except that the fill digits are random values from 10 to 15.
The ISO_Format_4 PIN block format is the only one supporting AES encryption. It is similar to ISO_Format_3 but doubles the pin block length by padding with fill digit A and random values from 10 to 15.
The PIN encoding format for pin data generation as specified in ISO 9564. Amazon Web Services Payment Cryptography supports ISO_Format_0, ISO_Format_3 and ISO_Format_4.
The ISO_Format_0 PIN block format is equivalent to the ANSI X9.8, VISA-1, and ECI-1 PIN block formats. It is similar to a VISA-4 PIN block format. It supports a PIN from 4 to 12 digits in length.
The ISO_Format_3 PIN block format is the same as ISO_Format_0 except that the fill digits are random values from 10 to 15.
The ISO_Format_4 PIN block format is the only one supporting AES encryption.
The key derivation algorithm to use for generating a KEK validation request.
" + } + }, + "documentation":"Parameter information for generating a KEK validation request during node-to-node initialization.
" + }, + "KekValidationResponse":{ + "type":"structure", + "required":["RandomKeySend"], + "members":{ + "RandomKeySend":{ + "shape":"As2805RandomKeyMaterial", + "documentation":"The random key for generating a KEK validation response.
" + } + }, + "documentation":"Parameter information for generating a KEK validation response during node-to-node initialization.
" + }, "KeyArn":{ "type":"string", "max":150, @@ -1653,7 +1782,7 @@ "KeyMaterial":{ "type":"string", "max":16384, - "min":48, + "min":32, "sensitive":true }, "MacAlgorithm":{ @@ -1666,7 +1795,8 @@ "HMAC_SHA224", "HMAC_SHA256", "HMAC_SHA384", - "HMAC_SHA512" + "HMAC_SHA512", + "AS2805_4_1" ] }, "MacAlgorithmDukpt":{ @@ -1992,6 +2122,13 @@ "pattern":"[0-9a-fA-F]+", "sensitive":true }, + "RandomKeySendVariantMask":{ + "type":"string", + "enum":[ + "VARIANT_MASK_82C0", + "VARIANT_MASK_82" + ] + }, "ReEncryptDataInput":{ "type":"structure", "required":[ @@ -2299,6 +2436,12 @@ "HMAC_SHA224" ] }, + "SystemTraceAuditNumberType":{ + "type":"string", + "max":6, + "min":6, + "pattern":"[0-9]+" + }, "ThrottlingException":{ "type":"structure", "members":{ @@ -2325,6 +2468,12 @@ "pattern":"[0-9a-fA-F]+", "sensitive":true }, + "TransactionAmountType":{ + "type":"string", + "max":12, + "min":12, + "pattern":"[0-9]+" + }, "TransactionDataType":{ "type":"string", "max":1024, @@ -2349,7 +2498,7 @@ }, "KeyCheckValueAlgorithm":{ "shape":"KeyCheckValueAlgorithm", - "documentation":"The key check value (KCV) algorithm used for calculating the KCV.
" + "documentation":"The key check value (KCV) algorithm used for calculating the KCV of the derived key.
" } } }, @@ -2408,6 +2557,10 @@ "OutgoingWrappedKey":{ "shape":"WrappedKey", "documentation":"The WrappedKeyBlock containing the encryption key for encrypting outgoing PIN block data.
" + }, + "IncomingAs2805Attributes":{ + "shape":"As2805PekDerivationAttributes", + "documentation":"The attributes and values to use for incoming AS2805 encryption key for PIN block translation.
" } } }, @@ -2438,24 +2591,39 @@ "members":{ "IsoFormat0":{ "shape":"TranslationPinDataIsoFormat034", - "documentation":"Parameters that are required for ISO9564 PIN format 0 tranlation.
" + "documentation":"Parameters that are required for ISO9564 PIN format 0 translation.
" }, "IsoFormat1":{ "shape":"TranslationPinDataIsoFormat1", - "documentation":"Parameters that are required for ISO9564 PIN format 1 tranlation.
" + "documentation":"Parameters that are required for ISO9564 PIN format 1 translation.
" }, "IsoFormat3":{ "shape":"TranslationPinDataIsoFormat034", - "documentation":"Parameters that are required for ISO9564 PIN format 3 tranlation.
" + "documentation":"Parameters that are required for ISO9564 PIN format 3 translation.
" }, "IsoFormat4":{ "shape":"TranslationPinDataIsoFormat034", - "documentation":"Parameters that are required for ISO9564 PIN format 4 tranlation.
" + "documentation":"Parameters that are required for ISO9564 PIN format 4 translation.
" + }, + "As2805Format0":{ + "shape":"TranslationPinDataAs2805Format0", + "documentation":"Parameters that are required for AS2805 PIN format 0 translation.
" } }, "documentation":"Parameters that are required for translation between ISO9564 PIN block formats 0,1,3,4.
", "union":true }, + "TranslationPinDataAs2805Format0":{ + "type":"structure", + "required":["PrimaryAccountNumber"], + "members":{ + "PrimaryAccountNumber":{ + "shape":"PrimaryAccountNumberType", + "documentation":"The Primary Account Number (PAN) of the cardholder. A PAN is a unique identifier for a payment credit or debit card and associates the card to a specific account holder.
" + } + }, + "documentation":"Parameters that are required for translation between AS2805 PIN format 0 translation.
" + }, "TranslationPinDataIsoFormat034":{ "type":"structure", "required":["PrimaryAccountNumber"], @@ -2465,12 +2633,12 @@ "documentation":"The Primary Account Number (PAN) of the cardholder. A PAN is a unique identifier for a payment credit or debit card and associates the card to a specific account holder.
" } }, - "documentation":"Parameters that are required for tranlation between ISO9564 PIN format 0,3,4 tranlation.
" + "documentation":"Parameters that are required for translation between ISO9564 PIN format 0,3,4 translation.
" }, "TranslationPinDataIsoFormat1":{ "type":"structure", "members":{}, - "documentation":"Parameters that are required for ISO9564 PIN format 1 tranlation.
" + "documentation":"Parameters that are required for ISO9564 PIN format 1 translation.
" }, "ValidationDataType":{ "type":"string", diff --git a/awscli/botocore/data/payment-cryptography/2021-09-14/service-2.json b/awscli/botocore/data/payment-cryptography/2021-09-14/service-2.json index 7e52376227a0..d7fbcafc5c20 100644 --- a/awscli/botocore/data/payment-cryptography/2021-09-14/service-2.json +++ b/awscli/botocore/data/payment-cryptography/2021-09-14/service-2.json @@ -584,6 +584,15 @@ "type":"list", "member":{"shape":"Alias"} }, + "As2805KeyVariant":{ + "type":"string", + "enum":[ + "TERMINAL_MAJOR_KEY_VARIANT_00", + "PIN_ENCRYPTION_KEY_VARIANT_28", + "MESSAGE_AUTHENTICATION_KEY_VARIANT_24", + "DATA_ENCRYPTION_KEY_VARIANT_22" + ] + }, "Boolean":{ "type":"boolean", "box":true @@ -617,7 +626,7 @@ }, "Country":{ "shape":"CertificateSubjectTypeCountryString", - "documentation":"The city you provide to create the certificate signing request.
" + "documentation":"The country you provide to create the certificate signing request.
" }, "StateOrProvince":{ "shape":"CertificateSubjectTypeStateOrProvinceString", @@ -883,6 +892,21 @@ "min":16, "pattern":"(?:[0-9a-fA-F][0-9a-fA-F])+" }, + "ExportAs2805KeyCryptogram":{ + "type":"structure", + "required":[ + "WrappingKeyIdentifier", + "As2805KeyVariant" + ], + "members":{ + "WrappingKeyIdentifier":{"shape":"KeyArnOrKeyAliasType"}, + "As2805KeyVariant":{ + "shape":"As2805KeyVariant", + "documentation":"The cryptographic usage of the key under export.
" + } + }, + "documentation":"Parameter information for key material export using AS2805 key cryptogram format.
" + }, "ExportAttributes":{ "type":"structure", "members":{ @@ -1013,6 +1037,10 @@ "DiffieHellmanTr31KeyBlock":{ "shape":"ExportDiffieHellmanTr31KeyBlock", "documentation":"Key derivation parameter information for key material export using asymmetric ECDH key exchange method.
" + }, + "As2805KeyCryptogram":{ + "shape":"ExportAs2805KeyCryptogram", + "documentation":"Parameter information for key material export using AS2805 key cryptogram format.
" } }, "documentation":"Parameter information for key material export from Amazon Web Services Payment Cryptography using TR-31 or TR-34 or RSA wrap and unwrap key exchange method.
", @@ -1308,6 +1336,38 @@ "min":20, "pattern":"[0-9A-F]{20}$|^[0-9A-F]{24}" }, + "ImportAs2805KeyCryptogram":{ + "type":"structure", + "required":[ + "As2805KeyVariant", + "KeyModesOfUse", + "KeyAlgorithm", + "Exportable", + "WrappingKeyIdentifier", + "WrappedKeyCryptogram" + ], + "members":{ + "As2805KeyVariant":{ + "shape":"As2805KeyVariant", + "documentation":"The cryptographic usage of the key under import.
" + }, + "KeyModesOfUse":{"shape":"KeyModesOfUse"}, + "KeyAlgorithm":{ + "shape":"KeyAlgorithm", + "documentation":"The key algorithm of the key under import.
" + }, + "Exportable":{ + "shape":"Boolean", + "documentation":"Specified whether the key is exportable. This data is immutable after the key is imported.
" + }, + "WrappingKeyIdentifier":{"shape":"KeyArnOrKeyAliasType"}, + "WrappedKeyCryptogram":{ + "shape":"WrappedKeyCryptogram", + "documentation":"The wrapped key cryptogram under import.
" + } + }, + "documentation":"Parameter information for key material import using AS2805 key cryptogram format.
" + }, "ImportDiffieHellmanTr31KeyBlock":{ "type":"structure", "required":[ @@ -1434,6 +1494,10 @@ "DiffieHellmanTr31KeyBlock":{ "shape":"ImportDiffieHellmanTr31KeyBlock", "documentation":"Key derivation parameter information for key material import using asymmetric ECDH key exchange method.
" + }, + "As2805KeyCryptogram":{ + "shape":"ImportAs2805KeyCryptogram", + "documentation":"Parameter information for key material import using AS2805 key cryptogram format.
" } }, "documentation":"Parameter information for key material import into Amazon Web Services Payment Cryptography using TR-31 or TR-34 or RSA wrap and unwrap key exchange method.
", @@ -1886,6 +1950,7 @@ "TR31_K0_KEY_ENCRYPTION_KEY", "TR31_K1_KEY_BLOCK_PROTECTION_KEY", "TR31_K3_ASYMMETRIC_KEY_FOR_KEY_AGREEMENT", + "TR31_M0_ISO_16609_MAC_KEY", "TR31_M3_ISO_9797_3_MAC_KEY", "TR31_M1_ISO_9797_1_MAC_KEY", "TR31_M6_ISO_9797_5_CMAC_KEY", diff --git a/awscli/botocore/data/sagemaker/2017-07-24/service-2.json b/awscli/botocore/data/sagemaker/2017-07-24/service-2.json index 95be304bc839..26c2fb315a23 100644 --- a/awscli/botocore/data/sagemaker/2017-07-24/service-2.json +++ b/awscli/botocore/data/sagemaker/2017-07-24/service-2.json @@ -9447,7 +9447,8 @@ "ml.r7i.12xlarge", "ml.r7i.16xlarge", "ml.r7i.24xlarge", - "ml.r7i.48xlarge" + "ml.r7i.48xlarge", + "ml.p6-b300.48xlarge" ] }, "ClusterKubernetesConfig":{ @@ -9815,7 +9816,6 @@ }, "ClusterOrchestrator":{ "type":"structure", - "required":["Eks"], "members":{ "Eks":{ "shape":"ClusterOrchestratorEksConfig", @@ -27014,7 +27014,7 @@ "type":"integer", "documentation":"Optional. Customer requested period in seconds for which the Training cluster is kept alive after the job is finished.
", "box":true, - "max":3600, + "max":21600, "min":0 }, "KendraSettings":{ @@ -39781,7 +39781,8 @@ "ml.p6-b200.48xlarge", "ml.p4de.24xlarge", "ml.p6e-gb200.36xlarge", - "ml.p5.4xlarge" + "ml.p5.4xlarge", + "ml.p6-b300.48xlarge" ] }, "ReservedCapacityOffering":{ @@ -43475,7 +43476,8 @@ "ml.r7i.24xlarge", "ml.r7i.48xlarge", "ml.p6e-gb200.36xlarge", - "ml.p5.4xlarge" + "ml.p5.4xlarge", + "ml.p6-b300.48xlarge" ] }, "TrainingInstanceTypes":{ From dd8b2fc2b124e78cd5c2a4e2f4ea4ecafec131dd Mon Sep 17 00:00:00 2001 From: aws-sdk-python-automationThe platforms the application supports. WINDOWS_SERVER_2019 and AMAZON_LINUX2 are supported for Elastic fleets.
" + "documentation":"The platforms the application supports. WINDOWS_SERVER_2019, AMAZON_LINUX2 and UBUNTU_PRO_2404 are supported for Elastic fleets.
" }, "InstanceFamilies":{ "shape":"StringList", @@ -2552,7 +2552,7 @@ }, "InstanceType":{ "shape":"String", - "documentation":"The instance type to use when launching fleet instances. The following instance types are available:
stream.standard.small
stream.standard.medium
stream.standard.large
stream.standard.xlarge
stream.standard.2xlarge
stream.compute.large
stream.compute.xlarge
stream.compute.2xlarge
stream.compute.4xlarge
stream.compute.8xlarge
stream.memory.large
stream.memory.xlarge
stream.memory.2xlarge
stream.memory.4xlarge
stream.memory.8xlarge
stream.memory.z1d.large
stream.memory.z1d.xlarge
stream.memory.z1d.2xlarge
stream.memory.z1d.3xlarge
stream.memory.z1d.6xlarge
stream.memory.z1d.12xlarge
stream.graphics-design.large
stream.graphics-design.xlarge
stream.graphics-design.2xlarge
stream.graphics-design.4xlarge
stream.graphics.g4dn.xlarge
stream.graphics.g4dn.2xlarge
stream.graphics.g4dn.4xlarge
stream.graphics.g4dn.8xlarge
stream.graphics.g4dn.12xlarge
stream.graphics.g4dn.16xlarge
stream.graphics.g5.xlarge
stream.graphics.g5.2xlarge
stream.graphics.g5.4xlarge
stream.graphics.g5.8xlarge
stream.graphics.g5.12xlarge
stream.graphics.g5.16xlarge
stream.graphics.g5.24xlarge
stream.graphics.g6.xlarge
stream.graphics.g6.2xlarge
stream.graphics.g6.4xlarge
stream.graphics.g6.8xlarge
stream.graphics.g6.16xlarge
stream.graphics.g6.12xlarge
stream.graphics.g6.24xlarge
stream.graphics.gr6.4xlarge
stream.graphics.gr6.8xlarge
stream.graphics.g6f.large
stream.graphics.g6f.xlarge
stream.graphics.g6f.2xlarge
stream.graphics.g6f.4xlarge
stream.graphics.gr6f.4xlarge
The following instance types are available for Elastic fleets:
stream.standard.small
stream.standard.medium
stream.standard.large
stream.standard.xlarge
stream.standard.2xlarge
The instance type to use when launching fleet instances. The following instance types are available:
stream.standard.small
stream.standard.medium
stream.standard.large
stream.standard.xlarge
stream.standard.2xlarge
stream.compute.large
stream.compute.xlarge
stream.compute.2xlarge
stream.compute.4xlarge
stream.compute.8xlarge
stream.memory.large
stream.memory.xlarge
stream.memory.2xlarge
stream.memory.4xlarge
stream.memory.8xlarge
stream.memory.z1d.large
stream.memory.z1d.xlarge
stream.memory.z1d.2xlarge
stream.memory.z1d.3xlarge
stream.memory.z1d.6xlarge
stream.memory.z1d.12xlarge
stream.graphics.g4dn.xlarge
stream.graphics.g4dn.2xlarge
stream.graphics.g4dn.4xlarge
stream.graphics.g4dn.8xlarge
stream.graphics.g4dn.12xlarge
stream.graphics.g4dn.16xlarge
stream.graphics.g5.xlarge
stream.graphics.g5.2xlarge
stream.graphics.g5.4xlarge
stream.graphics.g5.8xlarge
stream.graphics.g5.12xlarge
stream.graphics.g5.16xlarge
stream.graphics.g5.24xlarge
stream.graphics.g6.xlarge
stream.graphics.g6.2xlarge
stream.graphics.g6.4xlarge
stream.graphics.g6.8xlarge
stream.graphics.g6.16xlarge
stream.graphics.g6.12xlarge
stream.graphics.g6.24xlarge
stream.graphics.gr6.4xlarge
stream.graphics.gr6.8xlarge
stream.graphics.g6f.large
stream.graphics.g6f.xlarge
stream.graphics.g6f.2xlarge
stream.graphics.g6f.4xlarge
stream.graphics.gr6f.4xlarge
The following instance types are available for Elastic fleets:
stream.standard.small
stream.standard.medium
stream.standard.large
stream.standard.xlarge
stream.standard.2xlarge
The fleet platform. WINDOWS_SERVER_2019 and AMAZON_LINUX2 are supported for Elastic fleets.
" + "documentation":"The fleet platform. WINDOWS_SERVER_2019, AMAZON_LINUX2 and UBUNTU_PRO_2404 are supported for Elastic fleets.
" }, "MaxConcurrentSessions":{ "shape":"Integer", @@ -2662,7 +2662,7 @@ }, "InstanceType":{ "shape":"String", - "documentation":"The instance type to use when launching the image builder. The following instance types are available:
stream.standard.small
stream.standard.medium
stream.standard.large
stream.compute.large
stream.compute.xlarge
stream.compute.2xlarge
stream.compute.4xlarge
stream.compute.8xlarge
stream.memory.large
stream.memory.xlarge
stream.memory.2xlarge
stream.memory.4xlarge
stream.memory.8xlarge
stream.memory.z1d.large
stream.memory.z1d.xlarge
stream.memory.z1d.2xlarge
stream.memory.z1d.3xlarge
stream.memory.z1d.6xlarge
stream.memory.z1d.12xlarge
stream.graphics-design.large
stream.graphics-design.xlarge
stream.graphics-design.2xlarge
stream.graphics-design.4xlarge
stream.graphics.g4dn.xlarge
stream.graphics.g4dn.2xlarge
stream.graphics.g4dn.4xlarge
stream.graphics.g4dn.8xlarge
stream.graphics.g4dn.12xlarge
stream.graphics.g4dn.16xlarge
stream.graphics.g5.xlarge
stream.graphics.g5.2xlarge
stream.graphics.g5.4xlarge
stream.graphics.g5.8xlarge
stream.graphics.g5.16xlarge
stream.graphics.g5.12xlarge
stream.graphics.g5.24xlarge
stream.graphics.g6.xlarge
stream.graphics.g6.2xlarge
stream.graphics.g6.4xlarge
stream.graphics.g6.8xlarge
stream.graphics.g6.16xlarge
stream.graphics.g6.12xlarge
stream.graphics.g6.24xlarge
stream.graphics.gr6.4xlarge
stream.graphics.gr6.8xlarge
stream.graphics.g6f.large
stream.graphics.g6f.xlarge
stream.graphics.g6f.2xlarge
stream.graphics.g6f.4xlarge
stream.graphics.gr6f.4xlarge
The instance type to use when launching the image builder. The following instance types are available:
stream.standard.small
stream.standard.medium
stream.standard.large
stream.compute.large
stream.compute.xlarge
stream.compute.2xlarge
stream.compute.4xlarge
stream.compute.8xlarge
stream.memory.large
stream.memory.xlarge
stream.memory.2xlarge
stream.memory.4xlarge
stream.memory.8xlarge
stream.memory.z1d.large
stream.memory.z1d.xlarge
stream.memory.z1d.2xlarge
stream.memory.z1d.3xlarge
stream.memory.z1d.6xlarge
stream.memory.z1d.12xlarge
stream.graphics.g4dn.xlarge
stream.graphics.g4dn.2xlarge
stream.graphics.g4dn.4xlarge
stream.graphics.g4dn.8xlarge
stream.graphics.g4dn.12xlarge
stream.graphics.g4dn.16xlarge
stream.graphics.g5.xlarge
stream.graphics.g5.2xlarge
stream.graphics.g5.4xlarge
stream.graphics.g5.8xlarge
stream.graphics.g5.16xlarge
stream.graphics.g5.12xlarge
stream.graphics.g5.24xlarge
stream.graphics.g6.xlarge
stream.graphics.g6.2xlarge
stream.graphics.g6.4xlarge
stream.graphics.g6.8xlarge
stream.graphics.g6.16xlarge
stream.graphics.g6.12xlarge
stream.graphics.g6.24xlarge
stream.graphics.gr6.4xlarge
stream.graphics.gr6.8xlarge
stream.graphics.g6f.large
stream.graphics.g6f.xlarge
stream.graphics.g6f.2xlarge
stream.graphics.g6f.4xlarge
stream.graphics.gr6f.4xlarge
The instance type to use when launching fleet instances. The following instance types are available:
stream.standard.small
stream.standard.medium
stream.standard.large
stream.compute.large
stream.compute.xlarge
stream.compute.2xlarge
stream.compute.4xlarge
stream.compute.8xlarge
stream.memory.large
stream.memory.xlarge
stream.memory.2xlarge
stream.memory.4xlarge
stream.memory.8xlarge
stream.memory.z1d.large
stream.memory.z1d.xlarge
stream.memory.z1d.2xlarge
stream.memory.z1d.3xlarge
stream.memory.z1d.6xlarge
stream.memory.z1d.12xlarge
stream.graphics-design.large
stream.graphics-design.xlarge
stream.graphics-design.2xlarge
stream.graphics-design.4xlarge
stream.graphics.g4dn.xlarge
stream.graphics.g4dn.2xlarge
stream.graphics.g4dn.4xlarge
stream.graphics.g4dn.8xlarge
stream.graphics.g4dn.12xlarge
stream.graphics.g4dn.16xlarge
stream.graphics.g5.xlarge
stream.graphics.g5.2xlarge
stream.graphics.g5.4xlarge
stream.graphics.g5.8xlarge
stream.graphics.g5.16xlarge
stream.graphics.g5.12xlarge
stream.graphics.g5.24xlarge
stream.graphics.g6.xlarge
stream.graphics.g6.2xlarge
stream.graphics.g6.4xlarge
stream.graphics.g6.8xlarge
stream.graphics.g6.16xlarge
stream.graphics.g6.12xlarge
stream.graphics.g6.24xlarge
stream.graphics.gr6.4xlarge
stream.graphics.gr6.8xlarge
stream.graphics.g6f.large
stream.graphics.g6f.xlarge
stream.graphics.g6f.2xlarge
stream.graphics.g6f.4xlarge
stream.graphics.gr6f.4xlarge
The instance type to use when launching fleet instances. The following instance types are available:
stream.standard.small
stream.standard.medium
stream.standard.large
stream.compute.large
stream.compute.xlarge
stream.compute.2xlarge
stream.compute.4xlarge
stream.compute.8xlarge
stream.memory.large
stream.memory.xlarge
stream.memory.2xlarge
stream.memory.4xlarge
stream.memory.8xlarge
stream.memory.z1d.large
stream.memory.z1d.xlarge
stream.memory.z1d.2xlarge
stream.memory.z1d.3xlarge
stream.memory.z1d.6xlarge
stream.memory.z1d.12xlarge
stream.graphics.g4dn.xlarge
stream.graphics.g4dn.2xlarge
stream.graphics.g4dn.4xlarge
stream.graphics.g4dn.8xlarge
stream.graphics.g4dn.12xlarge
stream.graphics.g4dn.16xlarge
stream.graphics.g5.xlarge
stream.graphics.g5.2xlarge
stream.graphics.g5.4xlarge
stream.graphics.g5.8xlarge
stream.graphics.g5.16xlarge
stream.graphics.g5.12xlarge
stream.graphics.g5.24xlarge
stream.graphics.g6.xlarge
stream.graphics.g6.2xlarge
stream.graphics.g6.4xlarge
stream.graphics.g6.8xlarge
stream.graphics.g6.16xlarge
stream.graphics.g6.12xlarge
stream.graphics.g6.24xlarge
stream.graphics.gr6.4xlarge
stream.graphics.gr6.8xlarge
stream.graphics.g6f.large
stream.graphics.g6f.xlarge
stream.graphics.g6f.2xlarge
stream.graphics.g6f.4xlarge
stream.graphics.gr6f.4xlarge
The supported instance families that determine which image a customer can use when the customer launches a fleet or image builder. The following instance families are supported:
General Purpose
Compute Optimized
Memory Optimized
Graphics
Graphics Design
Graphics Pro
Graphics G4
Graphics G5
The supported instance families that determine which image a customer can use when the customer launches a fleet or image builder. The following instance families are supported:
General Purpose
Compute Optimized
Memory Optimized
Graphics G4
Graphics G5
Graphics G6
The instance type for the image builder. The following instance types are available:
stream.standard.small
stream.standard.medium
stream.standard.large
stream.compute.large
stream.compute.xlarge
stream.compute.2xlarge
stream.compute.4xlarge
stream.compute.8xlarge
stream.memory.large
stream.memory.xlarge
stream.memory.2xlarge
stream.memory.4xlarge
stream.memory.8xlarge
stream.memory.z1d.large
stream.memory.z1d.xlarge
stream.memory.z1d.2xlarge
stream.memory.z1d.3xlarge
stream.memory.z1d.6xlarge
stream.memory.z1d.12xlarge
stream.graphics-design.large
stream.graphics-design.xlarge
stream.graphics-design.2xlarge
stream.graphics-design.4xlarge
stream.graphics.g4dn.xlarge
stream.graphics.g4dn.2xlarge
stream.graphics.g4dn.4xlarge
stream.graphics.g4dn.8xlarge
stream.graphics.g4dn.12xlarge
stream.graphics.g4dn.16xlarge
stream.graphics.g5.xlarge
stream.graphics.g5.2xlarge
stream.graphics.g5.4xlarge
stream.graphics.g5.8xlarge
stream.graphics.g5.16xlarge
stream.graphics.g5.12xlarge
stream.graphics.g5.24xlarge
stream.graphics.g6.xlarge
stream.graphics.g6.2xlarge
stream.graphics.g6.4xlarge
stream.graphics.g6.8xlarge
stream.graphics.g6.16xlarge
stream.graphics.g6.12xlarge
stream.graphics.g6.24xlarge
stream.graphics.gr6.4xlarge
stream.graphics.gr6.8xlarge
stream.graphics.g6f.large
stream.graphics.g6f.xlarge
stream.graphics.g6f.2xlarge
stream.graphics.g6f.4xlarge
stream.graphics.gr6f.4xlarge
The instance type for the image builder. The following instance types are available:
stream.standard.small
stream.standard.medium
stream.standard.large
stream.compute.large
stream.compute.xlarge
stream.compute.2xlarge
stream.compute.4xlarge
stream.compute.8xlarge
stream.memory.large
stream.memory.xlarge
stream.memory.2xlarge
stream.memory.4xlarge
stream.memory.8xlarge
stream.memory.z1d.large
stream.memory.z1d.xlarge
stream.memory.z1d.2xlarge
stream.memory.z1d.3xlarge
stream.memory.z1d.6xlarge
stream.memory.z1d.12xlarge
stream.graphics.g4dn.xlarge
stream.graphics.g4dn.2xlarge
stream.graphics.g4dn.4xlarge
stream.graphics.g4dn.8xlarge
stream.graphics.g4dn.12xlarge
stream.graphics.g4dn.16xlarge
stream.graphics.g5.xlarge
stream.graphics.g5.2xlarge
stream.graphics.g5.4xlarge
stream.graphics.g5.8xlarge
stream.graphics.g5.16xlarge
stream.graphics.g5.12xlarge
stream.graphics.g5.24xlarge
stream.graphics.g6.xlarge
stream.graphics.g6.2xlarge
stream.graphics.g6.4xlarge
stream.graphics.g6.8xlarge
stream.graphics.g6.16xlarge
stream.graphics.g6.12xlarge
stream.graphics.g6.24xlarge
stream.graphics.gr6.4xlarge
stream.graphics.gr6.8xlarge
stream.graphics.g6f.large
stream.graphics.g6f.xlarge
stream.graphics.g6f.2xlarge
stream.graphics.g6f.4xlarge
stream.graphics.gr6f.4xlarge
The instance type to use when launching fleet instances. The following instance types are available:
stream.standard.small
stream.standard.medium
stream.standard.large
stream.standard.xlarge
stream.standard.2xlarge
stream.compute.large
stream.compute.xlarge
stream.compute.2xlarge
stream.compute.4xlarge
stream.compute.8xlarge
stream.memory.large
stream.memory.xlarge
stream.memory.2xlarge
stream.memory.4xlarge
stream.memory.8xlarge
stream.memory.z1d.large
stream.memory.z1d.xlarge
stream.memory.z1d.2xlarge
stream.memory.z1d.3xlarge
stream.memory.z1d.6xlarge
stream.memory.z1d.12xlarge
stream.graphics-design.large
stream.graphics-design.xlarge
stream.graphics-design.2xlarge
stream.graphics-design.4xlarge
stream.graphics.g4dn.xlarge
stream.graphics.g4dn.2xlarge
stream.graphics.g4dn.4xlarge
stream.graphics.g4dn.8xlarge
stream.graphics.g4dn.12xlarge
stream.graphics.g4dn.16xlarge
stream.graphics.g5.xlarge
stream.graphics.g5.2xlarge
stream.graphics.g5.4xlarge
stream.graphics.g5.8xlarge
stream.graphics.g5.16xlarge
stream.graphics.g5.12xlarge
stream.graphics.g5.24xlarge
stream.graphics.g6.xlarge
stream.graphics.g6.2xlarge
stream.graphics.g6.4xlarge
stream.graphics.g6.8xlarge
stream.graphics.g6.16xlarge
stream.graphics.g6.12xlarge
stream.graphics.g6.24xlarge
stream.graphics.gr6.4xlarge
stream.graphics.gr6.8xlarge
stream.graphics.g6f.large
stream.graphics.g6f.xlarge
stream.graphics.g6f.2xlarge
stream.graphics.g6f.4xlarge
stream.graphics.gr6f.4xlarge
The following instance types are available for Elastic fleets:
stream.standard.small
stream.standard.medium
stream.standard.large
stream.standard.xlarge
stream.standard.2xlarge
The instance type to use when launching fleet instances. The following instance types are available:
stream.standard.small
stream.standard.medium
stream.standard.large
stream.standard.xlarge
stream.standard.2xlarge
stream.compute.large
stream.compute.xlarge
stream.compute.2xlarge
stream.compute.4xlarge
stream.compute.8xlarge
stream.memory.large
stream.memory.xlarge
stream.memory.2xlarge
stream.memory.4xlarge
stream.memory.8xlarge
stream.memory.z1d.large
stream.memory.z1d.xlarge
stream.memory.z1d.2xlarge
stream.memory.z1d.3xlarge
stream.memory.z1d.6xlarge
stream.memory.z1d.12xlarge
stream.graphics.g4dn.xlarge
stream.graphics.g4dn.2xlarge
stream.graphics.g4dn.4xlarge
stream.graphics.g4dn.8xlarge
stream.graphics.g4dn.12xlarge
stream.graphics.g4dn.16xlarge
stream.graphics.g5.xlarge
stream.graphics.g5.2xlarge
stream.graphics.g5.4xlarge
stream.graphics.g5.8xlarge
stream.graphics.g5.16xlarge
stream.graphics.g5.12xlarge
stream.graphics.g5.24xlarge
stream.graphics.g6.xlarge
stream.graphics.g6.2xlarge
stream.graphics.g6.4xlarge
stream.graphics.g6.8xlarge
stream.graphics.g6.16xlarge
stream.graphics.g6.12xlarge
stream.graphics.g6.24xlarge
stream.graphics.gr6.4xlarge
stream.graphics.gr6.8xlarge
stream.graphics.g6f.large
stream.graphics.g6f.xlarge
stream.graphics.g6f.2xlarge
stream.graphics.g6f.4xlarge
stream.graphics.gr6f.4xlarge
The following instance types are available for Elastic fleets:
stream.standard.small
stream.standard.medium
stream.standard.large
stream.standard.xlarge
stream.standard.2xlarge
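The lists above spell out the literal `InstanceType` strings the service accepts, and the "available for Elastic fleets" lists restrict Elastic fleets to the `stream.standard.*` sizes. A minimal client-side pre-check against that subset can be sketched as follows; the helper and its name are illustrative, not part of the AppStream API:

```python
# Hypothetical pre-validation of an AppStream 2.0 InstanceType string.
# ELASTIC_FLEET_TYPES mirrors the "available for Elastic fleets" list above.
ELASTIC_FLEET_TYPES = {
    "stream.standard.small",
    "stream.standard.medium",
    "stream.standard.large",
    "stream.standard.xlarge",
    "stream.standard.2xlarge",
}

def validate_instance_type(instance_type: str, elastic: bool = False) -> str:
    """Return instance_type unchanged if it is plausible for the fleet kind."""
    if not instance_type.startswith("stream."):
        raise ValueError(f"{instance_type!r} is not an AppStream instance type")
    if elastic and instance_type not in ELASTIC_FLEET_TYPES:
        raise ValueError(f"{instance_type!r} is not available for Elastic fleets")
    return instance_type
```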
The platform of the fleet. WINDOWS_SERVER_2019 and AMAZON_LINUX2 are supported for Elastic fleets.
" + "documentation":"The platform of the fleet. WINDOWS_SERVER_2019, AMAZON_LINUX2 and UBUNTU_PRO_2404 are supported for Elastic fleets.
" }, "MaxConcurrentSessions":{ "shape":"Integer", diff --git a/awscli/botocore/data/arc-region-switch/2022-07-26/endpoint-rule-set-1.json b/awscli/botocore/data/arc-region-switch/2022-07-26/endpoint-rule-set-1.json index 0a42812d7b62..abd72333dde9 100644 --- a/awscli/botocore/data/arc-region-switch/2022-07-26/endpoint-rule-set-1.json +++ b/awscli/botocore/data/arc-region-switch/2022-07-26/endpoint-rule-set-1.json @@ -6,24 +6,24 @@ "required": true, "default": false, "documentation": "When true, send this request to the FIPS-compliant regional endpoint. If the configured endpoint does not have a FIPS compliant endpoint, dispatching the request will return an error.", - "type": "Boolean" + "type": "boolean" }, "Endpoint": { "builtIn": "SDK::Endpoint", "required": false, "documentation": "Override the endpoint used to send this request", - "type": "String" + "type": "string" }, "Region": { "builtIn": "AWS::Region", "required": false, "documentation": "The AWS region used to dispatch the request.", - "type": "String" + "type": "string" }, "UseControlPlaneEndpoint": { "required": false, "documentation": "Whether the operation is a control plane operation. 
Control plane operations are routed to a centralized endpoint in the partition leader.", - "type": "Boolean" + "type": "boolean" } }, "rules": [ diff --git a/awscli/botocore/data/arc-region-switch/2022-07-26/paginators-1.json b/awscli/botocore/data/arc-region-switch/2022-07-26/paginators-1.json index 0bac1960ce59..2e521c97a9dd 100644 --- a/awscli/botocore/data/arc-region-switch/2022-07-26/paginators-1.json +++ b/awscli/botocore/data/arc-region-switch/2022-07-26/paginators-1.json @@ -41,6 +41,12 @@ "output_token": "nextToken", "limit_key": "maxResults", "result_key": "healthChecks" + }, + "ListRoute53HealthChecksInRegion": { + "input_token": "nextToken", + "output_token": "nextToken", + "limit_key": "maxResults", + "result_key": "healthChecks" } } } diff --git a/awscli/botocore/data/arc-region-switch/2022-07-26/service-2.json b/awscli/botocore/data/arc-region-switch/2022-07-26/service-2.json index 016289076324..c807aad9ded5 100644 --- a/awscli/botocore/data/arc-region-switch/2022-07-26/service-2.json +++ b/awscli/botocore/data/arc-region-switch/2022-07-26/service-2.json @@ -89,6 +89,7 @@ {"shape":"ResourceNotFoundException"} ], "documentation":"Retrieves detailed information about a Region switch plan. You must specify the ARN of the plan.
", + "readonly":true, "staticContextParams":{ "UseControlPlaneEndpoint":{"value":true} } @@ -105,7 +106,8 @@ {"shape":"ResourceNotFoundException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Retrieves the evaluation status of a Region switch plan. The evaluation status provides information about the last time the plan was evaluated and any warnings or issues detected.
" + "documentation":"Retrieves the evaluation status of a Region switch plan. The evaluation status provides information about the last time the plan was evaluated and any warnings or issues detected.
", + "readonly":true }, "GetPlanExecution":{ "name":"GetPlanExecution", @@ -119,7 +121,8 @@ {"shape":"ResourceNotFoundException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Retrieves detailed information about a specific plan execution. You must specify the plan ARN and execution ID.
" + "documentation":"Retrieves detailed information about a specific plan execution. You must specify the plan ARN and execution ID.
", + "readonly":true }, "GetPlanInRegion":{ "name":"GetPlanInRegion", @@ -133,7 +136,8 @@ {"shape":"ResourceNotFoundException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Retrieves information about a Region switch plan in a specific Amazon Web Services Region. This operation is useful for getting Region-specific information about a plan.
" + "documentation":"Retrieves information about a Region switch plan in a specific Amazon Web Services Region. This operation is useful for getting Region-specific information about a plan.
", + "readonly":true }, "ListPlanExecutionEvents":{ "name":"ListPlanExecutionEvents", @@ -147,7 +151,8 @@ {"shape":"ResourceNotFoundException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Lists the events that occurred during a plan execution. These events provide a detailed timeline of the execution process.
" + "documentation":"Lists the events that occurred during a plan execution. These events provide a detailed timeline of the execution process.
", + "readonly":true }, "ListPlanExecutions":{ "name":"ListPlanExecutions", @@ -161,7 +166,8 @@ {"shape":"ResourceNotFoundException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Lists the executions of a Region switch plan. This operation returns information about both current and historical executions.
" + "documentation":"Lists the executions of a Region switch plan. This operation returns information about both current and historical executions.
", + "readonly":true }, "ListPlans":{ "name":"ListPlans", @@ -172,6 +178,7 @@ "input":{"shape":"ListPlansRequest"}, "output":{"shape":"ListPlansResponse"}, "documentation":"Lists all Region switch plans in your Amazon Web Services account.
", + "readonly":true, "staticContextParams":{ "UseControlPlaneEndpoint":{"value":true} } @@ -187,7 +194,8 @@ "errors":[ {"shape":"AccessDeniedException"} ], - "documentation":"Lists all Region switch plans in your Amazon Web Services account that are available in the current Amazon Web Services Region.
" + "documentation":"Lists all Region switch plans in your Amazon Web Services account that are available in the current Amazon Web Services Region.
", + "readonly":true }, "ListRoute53HealthChecks":{ "name":"ListRoute53HealthChecks", @@ -203,10 +211,28 @@ {"shape":"InternalServerException"} ], "documentation":"List the Amazon Route 53 health checks.
", + "readonly":true, "staticContextParams":{ "UseControlPlaneEndpoint":{"value":true} } }, + "ListRoute53HealthChecksInRegion":{ + "name":"ListRoute53HealthChecksInRegion", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListRoute53HealthChecksInRegionRequest"}, + "output":{"shape":"ListRoute53HealthChecksInRegionResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"IllegalArgumentException"}, + {"shape":"InternalServerException"} + ], + "documentation":"List the Amazon Route 53 health checks in a specific Amazon Web Services Region.
", + "readonly":true + }, "ListTagsForResource":{ "name":"ListTagsForResource", "http":{ @@ -220,6 +246,7 @@ {"shape":"InternalServerException"} ], "documentation":"Lists the tags attached to a Region switch resource.
", + "readonly":true, "staticContextParams":{ "UseControlPlaneEndpoint":{"value":true} } @@ -598,7 +625,7 @@ }, "AsgArn":{ "type":"string", - "pattern":"arn:aws:autoscaling:[a-z0-9-]+:\\d{12}:autoScalingGroup:[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}:autoScalingGroupName/[\\S\\s]{1,255}" + "pattern":"arn:aws[a-zA-Z-]*:autoscaling:[a-z0-9-]+:\\d{12}:autoScalingGroup:[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}:autoScalingGroupName/[\\S\\s]{1,255}" }, "AsgList":{ "type":"list", @@ -889,11 +916,11 @@ }, "EcsClusterArn":{ "type":"string", - "pattern":"arn:aws:ecs:[a-z0-9-]+:\\d{12}:cluster/[a-zA-Z0-9_-]{1,255}" + "pattern":"arn:aws[a-zA-Z-]*:ecs:[a-z0-9-]+:\\d{12}:cluster/[a-zA-Z0-9_-]{1,255}" }, "EcsServiceArn":{ "type":"string", - "pattern":"arn:aws:ecs:[a-z0-9-]+:\\d{12}:service/[a-zA-Z0-9_-]+/[a-zA-Z0-9_-]{1,255}" + "pattern":"arn:aws[a-zA-Z-]*:ecs:[a-z0-9-]+:\\d{12}:service/[a-zA-Z0-9_-]+/[a-zA-Z0-9_-]{1,255}" }, "EcsUngraceful":{ "type":"structure", @@ -1754,6 +1781,51 @@ } } }, + "ListRoute53HealthChecksInRegionRequest":{ + "type":"structure", + "required":["arn"], + "members":{ + "arn":{ + "shape":"PlanArn", + "documentation":"The Amazon Resource Name (ARN) of the Arc Region Switch Plan.
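The `AsgArn`, `EcsClusterArn`, and `EcsServiceArn` pattern updates in this hunk widen the partition segment from the literal `aws` to `aws[a-zA-Z-]*`, so ARNs from other partitions (for example `aws-cn` or `aws-us-gov`) now pass validation. A small sketch of the effect, using the updated `EcsClusterArn` pattern:

```python
import re

# EcsClusterArn pattern as updated above; "aws[a-zA-Z-]*" matches the "aws",
# "aws-cn", and "aws-us-gov" partitions, where the old pattern matched only "aws".
ECS_CLUSTER_ARN = re.compile(
    r"arn:aws[a-zA-Z-]*:ecs:[a-z0-9-]+:\d{12}:cluster/[a-zA-Z0-9_-]{1,255}"
)

def is_valid_ecs_cluster_arn(arn: str) -> bool:
    """True when the whole string matches the broadened pattern."""
    return ECS_CLUSTER_ARN.fullmatch(arn) is not None
```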
" + }, + "hostedZoneId":{ + "shape":"Route53HostedZoneId", + "documentation":"The hosted zone ID for the health checks.
" + }, + "recordName":{ + "shape":"Route53RecordName", + "documentation":"The record name for the health checks.
" + }, + "maxResults":{ + "shape":"ListRoute53HealthChecksInRegionRequestMaxResultsInteger", + "documentation":"The number of objects that you want to return with this call.
" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"Specifies that you want to receive the next page of results. Valid only if you received a nextToken response in the previous request. If you did, it indicates that more output is available. Set this parameter to the value provided by the previous call's nextToken response to request the next page of results.
List of the health checks requested.
" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"Specifies that you want to receive the next page of results. Valid only if you received a nextToken response in the previous request. If you did, it indicates that more output is available. Set this parameter to the value provided by the previous call's nextToken response to request the next page of results.
The Amazon Route 53 health check ID.
" }, + "status":{ + "shape":"Route53HealthCheckStatus", + "documentation":"The Amazon Route 53 health check status.
" + }, "region":{ "shape":"Region", "documentation":"The Amazon Route 53 Region.
" @@ -2143,6 +2219,14 @@ "type":"list", "member":{"shape":"Route53HealthCheck"} }, + "Route53HealthCheckStatus":{ + "type":"string", + "enum":[ + "healthy", + "unhealthy", + "unknown" + ] + }, "Route53HostedZoneId":{ "type":"string", "max":32, diff --git a/awscli/botocore/data/artifact/2018-05-10/paginators-1.json b/awscli/botocore/data/artifact/2018-05-10/paginators-1.json index ba4271a9e5c0..ae2de6dee1d8 100644 --- a/awscli/botocore/data/artifact/2018-05-10/paginators-1.json +++ b/awscli/botocore/data/artifact/2018-05-10/paginators-1.json @@ -11,6 +11,12 @@ "output_token": "nextToken", "limit_key": "maxResults", "result_key": "customerAgreements" + }, + "ListReportVersions": { + "input_token": "nextToken", + "output_token": "nextToken", + "limit_key": "maxResults", + "result_key": "reports" } } } diff --git a/awscli/botocore/data/artifact/2018-05-10/service-2.json b/awscli/botocore/data/artifact/2018-05-10/service-2.json index f7b10da97124..7327dba275f9 100644 --- a/awscli/botocore/data/artifact/2018-05-10/service-2.json +++ b/awscli/botocore/data/artifact/2018-05-10/service-2.json @@ -31,7 +31,8 @@ {"shape":"ValidationException"}, {"shape":"ServiceQuotaExceededException"} ], - "documentation":"Get the account settings for Artifact.
" + "documentation":"Get the account settings for Artifact.
", + "readonly":true }, "GetReport":{ "name":"GetReport", @@ -51,7 +52,8 @@ {"shape":"ValidationException"}, {"shape":"ServiceQuotaExceededException"} ], - "documentation":"Get the content for a single report.
" + "documentation":"Get the content for a single report.
", + "readonly":true }, "GetReportMetadata":{ "name":"GetReportMetadata", @@ -70,7 +72,8 @@ {"shape":"ValidationException"}, {"shape":"ServiceQuotaExceededException"} ], - "documentation":"Get the metadata for a single report.
" + "documentation":"Get the metadata for a single report.
", + "readonly":true }, "GetTermForReport":{ "name":"GetTermForReport", @@ -90,7 +93,8 @@ {"shape":"ValidationException"}, {"shape":"ServiceQuotaExceededException"} ], - "documentation":"Get the Term content associated with a single report.
" + "documentation":"Get the Term content associated with a single report.
", + "readonly":true }, "ListCustomerAgreements":{ "name":"ListCustomerAgreements", @@ -107,7 +111,28 @@ {"shape":"InternalServerException"}, {"shape":"ValidationException"} ], - "documentation":"List active customer-agreements applicable to calling identity.
" + "documentation":"List active customer-agreements applicable to calling identity.
", + "readonly":true + }, + "ListReportVersions":{ + "name":"ListReportVersions", + "http":{ + "method":"GET", + "requestUri":"/v1/report/listVersions", + "responseCode":200 + }, + "input":{"shape":"ListReportVersionsRequest"}, + "output":{"shape":"ListReportVersionsResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ServiceQuotaExceededException"} + ], + "documentation":"List available report versions for a given report.
", + "readonly":true }, "ListReports":{ "name":"ListReports", @@ -126,7 +151,8 @@ {"shape":"ValidationException"}, {"shape":"ServiceQuotaExceededException"} ], - "documentation":"List available reports.
" + "documentation":"List available reports.
", + "readonly":true }, "PutAccountSettings":{ "name":"PutAccountSettings", @@ -296,8 +322,7 @@ }, "GetAccountSettingsRequest":{ "type":"structure", - "members":{ - } + "members":{} }, "GetAccountSettingsResponse":{ "type":"structure", @@ -462,6 +487,44 @@ } } }, + "ListReportVersionsRequest":{ + "type":"structure", + "required":["reportId"], + "members":{ + "reportId":{ + "shape":"ReportId", + "documentation":"Unique resource ID for the report resource.
", + "location":"querystring", + "locationName":"reportId" + }, + "maxResults":{ + "shape":"MaxResultsAttribute", + "documentation":"Maximum number of resources to return in the paginated response.
", + "location":"querystring", + "locationName":"maxResults" + }, + "nextToken":{ + "shape":"NextTokenAttribute", + "documentation":"Pagination token to request the next page of resources.
", + "location":"querystring", + "locationName":"nextToken" + } + } + }, + "ListReportVersionsResponse":{ + "type":"structure", + "required":["reports"], + "members":{ + "reports":{ + "shape":"ReportsList", + "documentation":"List of report resources.
" + }, + "nextToken":{ + "shape":"NextTokenAttribute", + "documentation":"Pagination token to request the next page of resources.
" + } + } + }, "ListReportsRequest":{ "type":"structure", "members":{ diff --git a/awscli/botocore/data/bedrock-agentcore-control/2023-06-05/service-2.json b/awscli/botocore/data/bedrock-agentcore-control/2023-06-05/service-2.json index b121951826ff..b5bf2129d4c1 100644 --- a/awscli/botocore/data/bedrock-agentcore-control/2023-06-05/service-2.json +++ b/awscli/botocore/data/bedrock-agentcore-control/2023-06-05/service-2.json @@ -1853,6 +1853,24 @@ "member":{"shape":"AllowedClient"}, "min":1 }, + "AllowedQueryParameters":{ + "type":"list", + "member":{"shape":"HttpQueryParameterName"}, + "max":10, + "min":1 + }, + "AllowedRequestHeaders":{ + "type":"list", + "member":{"shape":"HttpHeaderName"}, + "max":10, + "min":1 + }, + "AllowedResponseHeaders":{ + "type":"list", + "member":{"shape":"HttpHeaderName"}, + "max":10, + "min":1 + }, "AllowedScopeType":{ "type":"string", "max":255, @@ -3150,6 +3168,10 @@ "credentialProviderConfigurations":{ "shape":"CredentialProviderConfigurations", "documentation":"The credential provider configurations for the target. These configurations specify how the gateway authenticates with the target endpoint.
" + }, + "metadataConfiguration":{ + "shape":"MetadataConfiguration", + "documentation":"Optional configuration for HTTP header and query parameter propagation to and from the gateway target.
" } } }, @@ -3209,6 +3231,10 @@ "lastSynchronizedAt":{ "shape":"DateTimestamp", "documentation":"The last synchronization of the target.
" + }, + "metadataConfiguration":{ + "shape":"MetadataConfiguration", + "documentation":"The metadata configuration that was applied to the created gateway target.
" } } }, @@ -5272,6 +5298,10 @@ "lastSynchronizedAt":{ "shape":"DateTimestamp", "documentation":"The last synchronization time.
" + }, + "metadataConfiguration":{ + "shape":"MetadataConfiguration", + "documentation":"The metadata configuration for HTTP header and query parameter propagation to and from this gateway target.
" } }, "documentation":"The gateway target.
" @@ -5882,6 +5912,10 @@ "lastSynchronizedAt":{ "shape":"DateTimestamp", "documentation":"The last synchronization of the target.
" + }, + "metadataConfiguration":{ + "shape":"MetadataConfiguration", + "documentation":"The metadata configuration for HTTP header and query parameter propagation for the retrieved gateway target.
" } } }, @@ -6425,6 +6459,16 @@ "min":1, "pattern":"(Authorization|X-Amzn-Bedrock-AgentCore-Runtime-Custom-[a-zA-Z0-9-]+)" }, + "HttpHeaderName":{ + "type":"string", + "max":100, + "min":1 + }, + "HttpQueryParameterName":{ + "type":"string", + "max":40, + "min":1 + }, "InboundTokenClaimNameType":{ "type":"string", "max":255, @@ -7725,6 +7769,24 @@ "max":50, "min":1 }, + "MetadataConfiguration":{ + "type":"structure", + "members":{ + "allowedRequestHeaders":{ + "shape":"AllowedRequestHeaders", + "documentation":"A list of HTTP headers that are allowed to be propagated from incoming client requests to the target.
" + }, + "allowedQueryParameters":{ + "shape":"AllowedQueryParameters", + "documentation":"A list of URL query parameters that are allowed to be propagated from incoming gateway URL to the target.
" + }, + "allowedResponseHeaders":{ + "shape":"AllowedResponseHeaders", + "documentation":"A list of HTTP headers that are allowed to be propagated from the target response back to the client.
" + } + }, + "documentation":"Configuration for HTTP header and query parameter propagation between the gateway and target servers.
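Taken together with the shapes defined earlier in this diff (`AllowedRequestHeaders`, `AllowedResponseHeaders`, and `AllowedQueryParameters` each hold 1-10 entries; `HttpHeaderName` is 1-100 characters and `HttpQueryParameterName` is 1-40), the `MetadataConfiguration` structure above implies concrete client-side limits. A hypothetical validator sketching those limits; it is not part of the SDK:

```python
# Limits per member: (max list entries, max name length), from the shapes above.
LIMITS = {
    "allowedRequestHeaders": (10, 100),
    "allowedResponseHeaders": (10, 100),
    "allowedQueryParameters": (10, 40),
}

def validate_metadata_configuration(config: dict) -> dict:
    """Check a MetadataConfiguration payload against the declared limits."""
    for key, names in config.items():
        if key not in LIMITS:
            raise ValueError(f"unknown member {key!r}")
        max_items, max_len = LIMITS[key]
        if not 1 <= len(names) <= max_items:
            raise ValueError(f"{key}: expected 1-{max_items} entries")
        for name in names:
            if not 1 <= len(name) <= max_len:
                raise ValueError(f"{key}: name length must be 1-{max_len}")
    return config
```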
" + }, "MicrosoftOauth2ProviderConfigInput":{ "type":"structure", "required":[ @@ -10403,6 +10465,10 @@ "credentialProviderConfigurations":{ "shape":"CredentialProviderConfigurations", "documentation":"The updated credential provider configurations for the gateway target.
" + }, + "metadataConfiguration":{ + "shape":"MetadataConfiguration", + "documentation":"Configuration for HTTP header and query parameter propagation to the gateway target.
" } } }, @@ -10459,6 +10525,10 @@ "lastSynchronizedAt":{ "shape":"DateTimestamp", "documentation":"The date and time at which the targets were last synchronized.
" + }, + "metadataConfiguration":{ + "shape":"MetadataConfiguration", + "documentation":"The metadata configuration that was applied to the gateway target.
" } } }, diff --git a/awscli/botocore/data/bedrock-data-automation/2023-07-26/service-2.json b/awscli/botocore/data/bedrock-data-automation/2023-07-26/service-2.json index a3619c37cda9..3de22bc2da39 100644 --- a/awscli/botocore/data/bedrock-data-automation/2023-07-26/service-2.json +++ b/awscli/botocore/data/bedrock-data-automation/2023-07-26/service-2.json @@ -13,6 +13,25 @@ "uid":"bedrock-data-automation-2023-07-26" }, "operations":{ + "CopyBlueprintStage":{ + "name":"CopyBlueprintStage", + "http":{ + "method":"PUT", + "requestUri":"/blueprints/{blueprintArn}/copy-stage", + "responseCode":200 + }, + "input":{"shape":"CopyBlueprintStageRequest"}, + "output":{"shape":"CopyBlueprintStageResponse"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"InternalServerException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"Copies a Blueprint from one stage to another
", + "idempotent":true + }, "CreateBlueprint":{ "name":"CreateBlueprint", "http":{ @@ -130,6 +149,25 @@ "documentation":"Gets an existing Amazon Bedrock Data Automation Blueprint
", "readonly":true }, + "GetBlueprintOptimizationStatus":{ + "name":"GetBlueprintOptimizationStatus", + "http":{ + "method":"POST", + "requestUri":"/getBlueprintOptimizationStatus/{invocationArn}", + "responseCode":200 + }, + "input":{"shape":"GetBlueprintOptimizationStatusRequest"}, + "output":{"shape":"GetBlueprintOptimizationStatusResponse"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"InternalServerException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"API used to get blueprint optimization status.
", + "readonly":true + }, "GetDataAutomationProject":{ "name":"GetDataAutomationProject", "http":{ @@ -149,6 +187,26 @@ "documentation":"Gets an existing Amazon Bedrock Data Automation Project
", "readonly":true }, + "InvokeBlueprintOptimizationAsync":{ + "name":"InvokeBlueprintOptimizationAsync", + "http":{ + "method":"POST", + "requestUri":"/invokeBlueprintOptimizationAsync", + "responseCode":200 + }, + "input":{"shape":"InvokeBlueprintOptimizationAsyncRequest"}, + "output":{"shape":"InvokeBlueprintOptimizationAsyncResponse"}, + "errors":[ + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ValidationException"}, + {"shape":"InternalServerException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"Invoke an async job to perform Blueprint Optimization
", + "idempotent":true + }, "ListBlueprints":{ "name":"ListBlueprints", "http":{ @@ -419,7 +477,9 @@ "blueprintVersion":{"shape":"BlueprintVersion"}, "blueprintStage":{"shape":"BlueprintStage"}, "kmsKeyId":{"shape":"KmsKeyId"}, - "kmsEncryptionContext":{"shape":"KmsEncryptionContext"} + "kmsEncryptionContext":{"shape":"KmsEncryptionContext"}, + "optimizationSamples":{"shape":"BlueprintOptimizationSamples"}, + "optimizationTime":{"shape":"DateTimestamp"} }, "documentation":"Contains the information of a Blueprint.
" }, @@ -463,6 +523,73 @@ "pattern":"[a-zA-Z0-9-_]+", "sensitive":true }, + "BlueprintOptimizationInvocationArn":{ + "type":"string", + "documentation":"Invocation arn.
", + "max":128, + "min":1, + "pattern":"arn:aws(|-cn|-iso|-iso-[a-z]|-us-gov):bedrock:[a-zA-Z0-9-]*:[0-9]{12}:blueprint-optimization-invocation/[a-zA-Z0-9-_]+" + }, + "BlueprintOptimizationJobStatus":{ + "type":"string", + "documentation":"List of status supported by optimization jobs
", + "enum":[ + "Created", + "InProgress", + "Success", + "ServiceError", + "ClientError" + ] + }, + "BlueprintOptimizationObject":{ + "type":"structure", + "required":["blueprintArn"], + "members":{ + "blueprintArn":{ + "shape":"BlueprintArn", + "documentation":"Arn of blueprint.
" + }, + "stage":{ + "shape":"BlueprintStage", + "documentation":"Stage of blueprint.
" + } + }, + "documentation":"Structure for single blueprint entity.
" + }, + "BlueprintOptimizationOutputConfiguration":{ + "type":"structure", + "required":["s3Object"], + "members":{ + "s3Object":{ + "shape":"S3Object", + "documentation":"S3 object.
" + } + }, + "documentation":"Blueprint Optimization Output configuration.
" + }, + "BlueprintOptimizationSample":{ + "type":"structure", + "required":[ + "assetS3Object", + "groundTruthS3Object" + ], + "members":{ + "assetS3Object":{ + "shape":"S3Object", + "documentation":"S3 Object of the asset
" + }, + "groundTruthS3Object":{ + "shape":"S3Object", + "documentation":"Ground truth for the Blueprint and Asset combination
" + } + }, + "documentation":"Blueprint Recommendation Sample
" + }, + "BlueprintOptimizationSamples":{ + "type":"list", + "member":{"shape":"BlueprintOptimizationSample"}, + "documentation":"List of Blueprint Optimization Samples
" + }, "BlueprintSchema":{ "type":"string", "documentation":"Schema of the blueprint
", @@ -546,6 +673,41 @@ }, "exception":true }, + "CopyBlueprintStageRequest":{ + "type":"structure", + "required":[ + "blueprintArn", + "sourceStage", + "targetStage" + ], + "members":{ + "blueprintArn":{ + "shape":"BlueprintArn", + "documentation":"Blueprint to be copied
", + "location":"uri", + "locationName":"blueprintArn" + }, + "sourceStage":{ + "shape":"BlueprintStage", + "documentation":"Source stage to copy from
" + }, + "targetStage":{ + "shape":"BlueprintStage", + "documentation":"Target stage to copy to
" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"Client token for idempotency
", + "idempotencyToken":true + } + }, + "documentation":"CopyBlueprintStage Request
" + }, + "CopyBlueprintStageResponse":{ + "type":"structure", + "members":{}, + "documentation":"CopyBlueprintStage Response
" + }, "CreateBlueprintRequest":{ "type":"structure", "required":[ @@ -640,6 +802,13 @@ }, "documentation":"Custom output configuration
" }, + "DataAutomationProfileArn":{ + "type":"string", + "documentation":"Data automation profile arn.
", + "max":128, + "min":1, + "pattern":"arn:aws(|-cn|-us-gov):bedrock:[a-zA-Z0-9-]*:(aws|[0-9]{12}):data-automation-profile/[a-zA-Z0-9-_.]+" + }, "DataAutomationProject":{ "type":"structure", "required":[ @@ -944,6 +1113,41 @@ "min":1, "pattern":".*\\S.*" }, + "GetBlueprintOptimizationStatusRequest":{ + "type":"structure", + "required":["invocationArn"], + "members":{ + "invocationArn":{ + "shape":"BlueprintOptimizationInvocationArn", + "documentation":"Invocation arn.
", + "location":"uri", + "locationName":"invocationArn" + } + }, + "documentation":"Structure for request of GetBlueprintOptimizationStatus API.
" + }, + "GetBlueprintOptimizationStatusResponse":{ + "type":"structure", + "members":{ + "status":{ + "shape":"BlueprintOptimizationJobStatus", + "documentation":"Job Status.
" + }, + "errorType":{ + "shape":"String", + "documentation":"Error Type.
" + }, + "errorMessage":{ + "shape":"String", + "documentation":"Error Message.
" + }, + "outputConfiguration":{ + "shape":"BlueprintOptimizationOutputConfiguration", + "documentation":"Output configuration.
" + } + }, + "documentation":"Response of GetBlueprintOptimizationStatus API.
" + }, "GetBlueprintRequest":{ "type":"structure", "required":["blueprintArn"], @@ -1087,6 +1291,53 @@ "exception":true, "fault":true }, + "InvokeBlueprintOptimizationAsyncRequest":{ + "type":"structure", + "required":[ + "blueprint", + "samples", + "outputConfiguration", + "dataAutomationProfileArn" + ], + "members":{ + "blueprint":{ + "shape":"BlueprintOptimizationObject", + "documentation":"Blueprint to be optimized
" + }, + "samples":{ + "shape":"BlueprintOptimizationSamples", + "documentation":"List of Blueprint Optimization Samples
" + }, + "outputConfiguration":{ + "shape":"BlueprintOptimizationOutputConfiguration", + "documentation":"Output configuration where the results should be placed
" + }, + "dataAutomationProfileArn":{ + "shape":"DataAutomationProfileArn", + "documentation":"Data automation profile ARN
" + }, + "encryptionConfiguration":{ + "shape":"EncryptionConfiguration", + "documentation":"Encryption configuration.
" + }, + "tags":{ + "shape":"TagList", + "documentation":"List of tags.
" + } + }, + "documentation":"Invoke Blueprint Optimization Async Request
" + }, + "InvokeBlueprintOptimizationAsyncResponse":{ + "type":"structure", + "required":["invocationArn"], + "members":{ + "invocationArn":{ + "shape":"BlueprintOptimizationInvocationArn", + "documentation":"ARN of the blueprint optimization job
" + } + }, + "documentation":"Invoke Blueprint Optimization Async Response
" + }, "KmsEncryptionContext":{ "type":"map", "key":{"shape":"EncryptionContextKey"}, @@ -1306,6 +1557,34 @@ "ACCOUNT" ] }, + "S3Object":{ + "type":"structure", + "required":["s3Uri"], + "members":{ + "s3Uri":{ + "shape":"S3Uri", + "documentation":"S3 uri.
" + }, + "version":{ + "shape":"S3ObjectVersion", + "documentation":"S3 object version.
" + } + }, + "documentation":"S3 object
" + }, + "S3ObjectVersion":{ + "type":"string", + "documentation":"S3 object version.
", + "max":1024, + "min":1 + }, + "S3Uri":{ + "type":"string", + "documentation":"A path in S3
", + "max":1024, + "min":1, + "pattern":"s3://[a-z0-9][\\.\\-a-z0-9]{1,61}[a-z0-9](/.*)?" + }, "SensitiveDataConfiguration":{ "type":"structure", "required":["detectionMode"], @@ -1393,6 +1672,7 @@ "DISABLED" ] }, + "String":{"type":"string"}, "Tag":{ "type":"structure", "required":[ @@ -1451,7 +1731,7 @@ "documentation":"ARN of a taggable resource
", "max":1011, "min":20, - "pattern":"arn:aws(|-cn|-us-gov):bedrock:[a-z0-9-]*:[0-9]{12}:(blueprint|data-automation-project)/[a-zA-Z0-9-]{12,36}" + "pattern":"arn:aws(|-cn|-iso|-iso-[a-z]|-us-gov):bedrock:[a-z0-9-]*:[0-9]{12}:(blueprint|data-automation-project|blueprint-optimization-invocation)/[a-zA-Z0-9-]{12,36}" }, "ThrottlingException":{ "type":"structure", diff --git a/awscli/botocore/data/cleanrooms/2022-02-17/service-2.json b/awscli/botocore/data/cleanrooms/2022-02-17/service-2.json index d6b95a05460f..e51a72a9bc03 100644 --- a/awscli/botocore/data/cleanrooms/2022-02-17/service-2.json +++ b/awscli/botocore/data/cleanrooms/2022-02-17/service-2.json @@ -1456,6 +1456,25 @@ ], "documentation":"Updates collaboration metadata and can only be called by the collaboration owner.
" }, + "UpdateCollaborationChangeRequest":{ + "name":"UpdateCollaborationChangeRequest", + "http":{ + "method":"PATCH", + "requestUri":"/collaborations/{collaborationIdentifier}/changeRequests/{changeRequestIdentifier}", + "responseCode":200 + }, + "input":{"shape":"UpdateCollaborationChangeRequestInput"}, + "output":{"shape":"UpdateCollaborationChangeRequestOutput"}, + "errors":[ + {"shape":"ConflictException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"} + ], + "documentation":"Updates an existing collaboration change request. This operation allows approval actions for pending change requests in collaborations (APPROVE, DENY, CANCEL, COMMIT).
For change requests without automatic approval, a member in the collaboration can manually APPROVE or DENY a change request. The collaboration owner can manually CANCEL or COMMIT a change request.
" + }, "UpdateConfiguredAudienceModelAssociation":{ "name":"UpdateConfiguredAudienceModelAssociation", "http":{ @@ -2552,6 +2571,32 @@ "CLEAN_ROOMS_SQL" ] }, + "ApprovalStatus":{ + "type":"string", + "enum":[ + "APPROVED", + "DENIED", + "PENDING" + ] + }, + "ApprovalStatusDetails":{ + "type":"structure", + "required":["status"], + "members":{ + "status":{ + "shape":"ApprovalStatus", + "documentation":"The approval status of a member's vote on the change request. Valid values are PENDING (if they haven't voted), APPROVED, or DENIED.
" + } + }, + "documentation":"Contains detailed information about the approval state of a given member in the collaboration for a given collaboration change request.
" + }, + "ApprovalStatuses":{ + "type":"map", + "key":{"shape":"AccountId"}, + "value":{"shape":"ApprovalStatusDetails"}, + "max":50, + "min":1 + }, "AthenaDatabaseName":{ "type":"string", "max":128, @@ -2609,7 +2654,11 @@ }, "AutoApprovedChangeType":{ "type":"string", - "enum":["ADD_MEMBER"] + "enum":[ + "ADD_MEMBER", + "GRANT_RECEIVE_RESULTS_ABILITY", + "REVOKE_RECEIVE_RESULTS_ABILITY" + ] }, "AutoApprovedChangeTypeList":{ "type":"list", @@ -2941,6 +2990,15 @@ "max":10, "min":1 }, + "ChangeRequestAction":{ + "type":"string", + "enum":[ + "APPROVE", + "DENY", + "CANCEL", + "COMMIT" + ] + }, "ChangeRequestStatus":{ "type":"string", "enum":[ @@ -2957,6 +3015,10 @@ "member":{ "shape":"MemberChangeSpecification", "documentation":"The member change specification when the change type is MEMBER.
" + }, + "collaboration":{ + "shape":"CollaborationChangeSpecification", + "documentation":"The collaboration configuration changes being requested. Currently, this only supports modifying which change types are auto-approved for the collaboration.
" } }, "documentation":"A union that contains the specification details for different types of changes.
", @@ -2964,11 +3026,19 @@ }, "ChangeSpecificationType":{ "type":"string", - "enum":["MEMBER"] + "enum":[ + "MEMBER", + "COLLABORATION" + ] }, "ChangeType":{ "type":"string", - "enum":["ADD_MEMBER"] + "enum":[ + "ADD_MEMBER", + "GRANT_RECEIVE_RESULTS_ABILITY", + "REVOKE_RECEIVE_RESULTS_ABILITY", + "EDIT_AUTO_APPROVED_CHANGE_TYPES" + ] }, "ChangeTypeList":{ "type":"list", @@ -3263,6 +3333,10 @@ "changes":{ "shape":"ChangeList", "documentation":"The list of changes specified in this change request.
" + }, + "approvals":{ + "shape":"ApprovalStatuses", + "documentation":"A list of approval details from collaboration members, including approval status and multi-party approval workflow information.
" } }, "documentation":"Represents a request to modify a collaboration. Change requests enable structured modifications to collaborations after they have been created.
" @@ -3312,6 +3386,10 @@ "changes":{ "shape":"ChangeList", "documentation":"Summary of the changes in this change request.
" + }, + "approvals":{ + "shape":"ApprovalStatuses", + "documentation":"Summary of approval statuses from all collaboration members for this change request.
" } }, "documentation":"Summary information about a collaboration change request.
" @@ -3320,6 +3398,16 @@ "type":"list", "member":{"shape":"CollaborationChangeRequestSummary"} }, + "CollaborationChangeSpecification":{ + "type":"structure", + "members":{ + "autoApprovedChangeTypes":{ + "shape":"AutoApprovedChangeTypeList", + "documentation":"Defines requested updates to properties of the collaboration. Currently, this only supports modifying which change types are auto-approved for the collaboration.
" + } + }, + "documentation":"Defines the specific changes being requested for a collaboration, including configuration modifications and approval requirements.
" + }, "CollaborationConfiguredAudienceModelAssociation":{ "type":"structure", "required":[ @@ -8577,7 +8665,7 @@ }, "ParameterValue":{ "type":"string", - "max":250, + "max":1000, "min":0 }, "PaymentConfiguration":{ @@ -10714,6 +10802,39 @@ } } }, + "UpdateCollaborationChangeRequestInput":{ + "type":"structure", + "required":[ + "collaborationIdentifier", + "changeRequestIdentifier", + "action" + ], + "members":{ + "collaborationIdentifier":{ + "shape":"CollaborationIdentifier", + "documentation":"The unique identifier of the collaboration that contains the change request to be updated.
", + "location":"uri", + "locationName":"collaborationIdentifier" + }, + "changeRequestIdentifier":{ + "shape":"CollaborationChangeRequestIdentifier", + "documentation":"The unique identifier of the specific change request to be updated within the collaboration.
", + "location":"uri", + "locationName":"changeRequestIdentifier" + }, + "action":{ + "shape":"ChangeRequestAction", + "documentation":"The action to perform on the change request. Valid values include APPROVE (approve the change), DENY (reject the change), CANCEL (cancel the request), and COMMIT (commit after the request is approved).
For change requests without automatic approval, a member in the collaboration can manually APPROVE or DENY a change request. The collaboration owner can manually CANCEL or COMMIT a change request.
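The documentation above splits the four ChangeRequestAction values by role: members APPROVE or DENY, while the collaboration owner CANCELs or COMMITs. That rule can be captured as a small lookup for a client-side pre-check (illustrative only; the service enforces this server-side):

```python
# Who may perform each ChangeRequestAction, per the documentation above.
ALLOWED_ACTIONS = {
    "member": {"APPROVE", "DENY"},
    "owner": {"CANCEL", "COMMIT"},
}

def may_perform(role: str, action: str) -> bool:
    """Return True when the role is documented as allowed to take the action."""
    return action in ALLOWED_ACTIONS.get(role, set())
```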
" + } + } + }, + "UpdateCollaborationChangeRequestOutput":{ + "type":"structure", + "required":["collaborationChangeRequest"], + "members":{ + "collaborationChangeRequest":{"shape":"CollaborationChangeRequest"} + } + }, "UpdateCollaborationInput":{ "type":"structure", "required":["collaborationIdentifier"], diff --git a/awscli/botocore/data/ec2/2016-11-15/service-2.json b/awscli/botocore/data/ec2/2016-11-15/service-2.json index 0c06064518b9..463c27ea7cc4 100644 --- a/awscli/botocore/data/ec2/2016-11-15/service-2.json +++ b/awscli/botocore/data/ec2/2016-11-15/service-2.json @@ -26391,7 +26391,7 @@ }, "Filters":{ "shape":"FilterList", - "documentation":"The filters.
affinity - The affinity setting for an instance running on a Dedicated Host (default | host).
architecture - The instance architecture (i386 | x86_64 | arm64).
availability-zone - The Availability Zone of the instance.
availability-zone-id - The ID of the Availability Zone of the instance.
block-device-mapping.attach-time - The attach time for an EBS volume mapped to the instance, for example, 2022-09-15T17:15:20.000Z.
block-device-mapping.delete-on-termination - A Boolean that indicates whether the EBS volume is deleted on instance termination.
block-device-mapping.device-name - The device name specified in the block device mapping (for example, /dev/sdh or xvdh).
block-device-mapping.status - The status for the EBS volume (attaching | attached | detaching | detached).
block-device-mapping.volume-id - The volume ID of the EBS volume.
boot-mode - The boot mode that was specified by the AMI (legacy-bios | uefi | uefi-preferred).
capacity-reservation-id - The ID of the Capacity Reservation into which the instance was launched.
capacity-reservation-specification.capacity-reservation-preference - The instance's Capacity Reservation preference (open | none).
capacity-reservation-specification.capacity-reservation-target.capacity-reservation-id - The ID of the targeted Capacity Reservation.
capacity-reservation-specification.capacity-reservation-target.capacity-reservation-resource-group-arn - The ARN of the targeted Capacity Reservation group.
client-token - The idempotency token you provided when you launched the instance.
current-instance-boot-mode - The boot mode that is used to launch the instance at launch or start (legacy-bios | uefi).
dns-name - The public DNS name of the instance.
ebs-optimized - A Boolean that indicates whether the instance is optimized for Amazon EBS I/O.
ena-support - A Boolean that indicates whether the instance is enabled for enhanced networking with ENA.
enclave-options.enabled - A Boolean that indicates whether the instance is enabled for Amazon Web Services Nitro Enclaves.
hibernation-options.configured - A Boolean that indicates whether the instance is enabled for hibernation. A value of true means that the instance is enabled for hibernation.
host-id - The ID of the Dedicated Host on which the instance is running, if applicable.
hypervisor - The hypervisor type of the instance (ovm | xen). The value xen is used for both Xen and Nitro hypervisors.
iam-instance-profile.arn - The instance profile associated with the instance. Specified as an ARN.
iam-instance-profile.id - The instance profile associated with the instance. Specified as an ID.
image-id - The ID of the image used to launch the instance.
instance-id - The ID of the instance.
instance-lifecycle - Indicates whether this is a Spot Instance, a Scheduled Instance, or a Capacity Block (spot | scheduled | capacity-block).
instance-state-code - The state of the instance, as a 16-bit unsigned integer. The high byte is used for internal purposes and should be ignored. The low byte is set based on the state represented. The valid values are: 0 (pending), 16 (running), 32 (shutting-down), 48 (terminated), 64 (stopping), and 80 (stopped).
instance-state-name - The state of the instance (pending | running | shutting-down | terminated | stopping | stopped).
instance-type - The type of instance (for example, t2.micro).
instance.group-id - The ID of the security group for the instance.
instance.group-name - The name of the security group for the instance.
ip-address - The public IPv4 address of the instance.
ipv6-address - The IPv6 address of the instance.
kernel-id - The kernel ID.
key-name - The name of the key pair used when the instance was launched.
launch-index - When launching multiple instances, this is the index for the instance in the launch group (for example, 0, 1, 2, and so on).
launch-time - The time when the instance was launched, in the ISO 8601 format in the UTC time zone (YYYY-MM-DDThh:mm:ss.sssZ), for example, 2021-09-29T11:04:43.305Z. You can use a wildcard (*), for example, 2021-09-29T*, which matches an entire day.
maintenance-options.auto-recovery - The current automatic recovery behavior of the instance (disabled | default).
metadata-options.http-endpoint - The status of access to the HTTP metadata endpoint on your instance (enabled | disabled)
metadata-options.http-protocol-ipv4 - Indicates whether the IPv4 endpoint is enabled (disabled | enabled).
metadata-options.http-protocol-ipv6 - Indicates whether the IPv6 endpoint is enabled (disabled | enabled).
metadata-options.http-put-response-hop-limit - The HTTP metadata request put response hop limit (integer, possible values 1 to 64)
metadata-options.http-tokens - The metadata request authorization state (optional | required)
metadata-options.instance-metadata-tags - The status of access to instance tags from the instance metadata (enabled | disabled)
metadata-options.state - The state of the metadata option changes (pending | applied).
monitoring-state - Indicates whether detailed monitoring is enabled (disabled | enabled).
network-interface.addresses.association.allocation-id - The allocation ID.
network-interface.addresses.association.association-id - The association ID.
network-interface.addresses.association.carrier-ip - The carrier IP address.
network-interface.addresses.association.customer-owned-ip - The customer-owned IP address.
network-interface.addresses.association.ip-owner-id - The owner ID of the private IPv4 address associated with the network interface.
network-interface.addresses.association.public-dns-name - The public DNS name.
network-interface.addresses.association.public-ip - The ID of the association of an Elastic IP address (IPv4) with a network interface.
network-interface.addresses.primary - Specifies whether the IPv4 address of the network interface is the primary private IPv4 address.
network-interface.addresses.private-dns-name - The private DNS name.
network-interface.addresses.private-ip-address - The private IPv4 address associated with the network interface.
network-interface.association.allocation-id - The allocation ID returned when you allocated the Elastic IP address (IPv4) for your network interface.
network-interface.association.association-id - The association ID returned when the network interface was associated with an IPv4 address.
network-interface.association.carrier-ip - The customer-owned IP address.
network-interface.association.customer-owned-ip - The customer-owned IP address.
network-interface.association.ip-owner-id - The owner of the Elastic IP address (IPv4) associated with the network interface.
network-interface.association.public-dns-name - The public DNS name.
network-interface.association.public-ip - The address of the Elastic IP address (IPv4) bound to the network interface.
network-interface.attachment.attach-time - The time that the network interface was attached to an instance.
network-interface.attachment.attachment-id - The ID of the interface attachment.
network-interface.attachment.delete-on-termination - Specifies whether the attachment is deleted when an instance is terminated.
network-interface.attachment.device-index - The device index to which the network interface is attached.
network-interface.attachment.instance-id - The ID of the instance to which the network interface is attached.
network-interface.attachment.instance-owner-id - The owner ID of the instance to which the network interface is attached.
network-interface.attachment.network-card-index - The index of the network card.
network-interface.attachment.status - The status of the attachment (attaching | attached | detaching | detached).
network-interface.availability-zone - The Availability Zone for the network interface.
network-interface.deny-all-igw-traffic - A Boolean that indicates whether a network interface with an IPv6 address is unreachable from the public internet.
network-interface.description - The description of the network interface.
network-interface.group-id - The ID of a security group associated with the network interface.
network-interface.group-name - The name of a security group associated with the network interface.
network-interface.ipv4-prefixes.ipv4-prefix - The IPv4 prefixes that are assigned to the network interface.
network-interface.ipv6-address - The IPv6 address associated with the network interface.
network-interface.ipv6-addresses.ipv6-address - The IPv6 address associated with the network interface.
network-interface.ipv6-addresses.is-primary-ipv6 - A Boolean that indicates whether this is the primary IPv6 address.
network-interface.ipv6-native - A Boolean that indicates whether this is an IPv6 only network interface.
network-interface.ipv6-prefixes.ipv6-prefix - The IPv6 prefix assigned to the network interface.
network-interface.mac-address - The MAC address of the network interface.
network-interface.network-interface-id - The ID of the network interface.
network-interface.operator.managed - A Boolean that indicates whether the instance has a managed network interface.
network-interface.operator.principal - The principal that manages the network interface. Only valid for instances with managed network interfaces, where managed is true.
network-interface.outpost-arn - The ARN of the Outpost.
network-interface.owner-id - The ID of the owner of the network interface.
network-interface.private-dns-name - The private DNS name of the network interface.
network-interface.private-ip-address - The private IPv4 address.
network-interface.public-dns-name - The public DNS name.
network-interface.requester-id - The requester ID for the network interface.
network-interface.requester-managed - Indicates whether the network interface is being managed by Amazon Web Services.
network-interface.status - The status of the network interface (available) | in-use).
network-interface.source-dest-check - Whether the network interface performs source/destination checking. A value of true means that checking is enabled, and false means that checking is disabled. The value must be false for the network interface to perform network address translation (NAT) in your VPC.
network-interface.subnet-id - The ID of the subnet for the network interface.
network-interface.tag-key - The key of a tag assigned to the network interface.
network-interface.tag-value - The value of a tag assigned to the network interface.
network-interface.vpc-id - The ID of the VPC for the network interface.
network-performance-options.bandwidth-weighting - Where the performance boost is applied, if applicable. Valid values: default, vpc-1, ebs-1.
operator.managed - A Boolean that indicates whether this is a managed instance.
operator.principal - The principal that manages the instance. Only valid for managed instances, where managed is true.
outpost-arn - The Amazon Resource Name (ARN) of the Outpost.
owner-id - The Amazon Web Services account ID of the instance owner.
placement-group-name - The name of the placement group for the instance.
placement-partition-number - The partition in which the instance is located.
platform - The platform. To list only Windows instances, use windows.
platform-details - The platform (Linux/UNIX | Red Hat BYOL Linux | Red Hat Enterprise Linux | Red Hat Enterprise Linux with HA | Red Hat Enterprise Linux with High Availability | Red Hat Enterprise Linux with SQL Server Standard and HA | Red Hat Enterprise Linux with SQL Server Enterprise and HA | Red Hat Enterprise Linux with SQL Server Standard | Red Hat Enterprise Linux with SQL Server Web | Red Hat Enterprise Linux with SQL Server Enterprise | SQL Server Enterprise | SQL Server Standard | SQL Server Web | SUSE Linux | Ubuntu Pro | Windows | Windows BYOL | Windows with SQL Server Enterprise | Windows with SQL Server Standard | Windows with SQL Server Web).
private-dns-name - The private IPv4 DNS name of the instance.
private-dns-name-options.enable-resource-name-dns-a-record - A Boolean that indicates whether to respond to DNS queries for instance hostnames with DNS A records.
private-dns-name-options.enable-resource-name-dns-aaaa-record - A Boolean that indicates whether to respond to DNS queries for instance hostnames with DNS AAAA records.
private-dns-name-options.hostname-type - The type of hostname (ip-name | resource-name).
private-ip-address - The private IPv4 address of the instance. This can only be used to filter by the primary IP address of the network interface attached to the instance. To filter by additional IP addresses assigned to the network interface, use the filter network-interface.addresses.private-ip-address.
product-code - The product code associated with the AMI used to launch the instance.
product-code.type - The type of product code (devpay | marketplace).
ramdisk-id - The RAM disk ID.
reason - The reason for the current state of the instance (for example, shows \"User Initiated [date]\" when you stop or terminate the instance). Similar to the state-reason-code filter.
requester-id - The ID of the entity that launched the instance on your behalf (for example, Amazon Web Services Management Console, Amazon EC2 Auto Scaling, and so on).
reservation-id - The ID of the instance's reservation. A reservation ID is created any time you launch an instance. A reservation ID has a one-to-one relationship with an instance launch request, but can be associated with more than one instance if you launch multiple instances using the same launch request. For example, if you launch one instance, you get one reservation ID. If you launch ten instances using the same launch request, you also get one reservation ID.
root-device-name - The device name of the root device volume (for example, /dev/sda1).
root-device-type - The type of the root device volume (ebs | instance-store).
source-dest-check - Indicates whether the instance performs source/destination checking. A value of true means that checking is enabled, and false means that checking is disabled. The value must be false for the instance to perform network address translation (NAT) in your VPC.
spot-instance-request-id - The ID of the Spot Instance request.
state-reason-code - The reason code for the state change.
state-reason-message - A message that describes the state change.
subnet-id - The ID of the subnet for the instance.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources that have a tag with a specific key, regardless of the tag value.
tenancy - The tenancy of an instance (dedicated | default | host).
tpm-support - Indicates if the instance is configured for NitroTPM support (v2.0).
usage-operation - The usage operation value for the instance (RunInstances | RunInstances:00g0 | RunInstances:0010 | RunInstances:1010 | RunInstances:1014 | RunInstances:1110 | RunInstances:0014 | RunInstances:0210 | RunInstances:0110 | RunInstances:0100 | RunInstances:0004 | RunInstances:0200 | RunInstances:000g | RunInstances:0g00 | RunInstances:0002 | RunInstances:0800 | RunInstances:0102 | RunInstances:0006 | RunInstances:0202).
usage-operation-update-time - The time that the usage operation was last updated, for example, 2022-09-15T17:15:20.000Z.
virtualization-type - The virtualization type of the instance (paravirtual | hvm).
vpc-id - The ID of the VPC that the instance is running in.
The filters.
affinity - The affinity setting for an instance running on a Dedicated Host (default | host).
architecture - The instance architecture (i386 | x86_64 | arm64).
availability-zone - The Availability Zone of the instance.
availability-zone-id - The ID of the Availability Zone of the instance.
block-device-mapping.attach-time - The attach time for an EBS volume mapped to the instance, for example, 2022-09-15T17:15:20.000Z.
block-device-mapping.delete-on-termination - A Boolean that indicates whether the EBS volume is deleted on instance termination.
block-device-mapping.device-name - The device name specified in the block device mapping (for example, /dev/sdh or xvdh).
block-device-mapping.status - The status for the EBS volume (attaching | attached | detaching | detached).
block-device-mapping.volume-id - The volume ID of the EBS volume.
boot-mode - The boot mode that was specified by the AMI (legacy-bios | uefi | uefi-preferred).
capacity-reservation-id - The ID of the Capacity Reservation into which the instance was launched.
capacity-reservation-specification.capacity-reservation-preference - The instance's Capacity Reservation preference (open | none).
capacity-reservation-specification.capacity-reservation-target.capacity-reservation-id - The ID of the targeted Capacity Reservation.
capacity-reservation-specification.capacity-reservation-target.capacity-reservation-resource-group-arn - The ARN of the targeted Capacity Reservation group.
client-token - The idempotency token you provided when you launched the instance.
current-instance-boot-mode - The boot mode that is used to boot the instance at launch or start (legacy-bios | uefi).
dns-name - The public DNS name of the instance.
ebs-optimized - A Boolean that indicates whether the instance is optimized for Amazon EBS I/O.
ena-support - A Boolean that indicates whether the instance is enabled for enhanced networking with ENA.
enclave-options.enabled - A Boolean that indicates whether the instance is enabled for Amazon Web Services Nitro Enclaves.
hibernation-options.configured - A Boolean that indicates whether the instance is enabled for hibernation. A value of true means that the instance is enabled for hibernation.
host-id - The ID of the Dedicated Host on which the instance is running, if applicable.
hypervisor - The hypervisor type of the instance (ovm | xen). The value xen is used for both Xen and Nitro hypervisors.
iam-instance-profile.arn - The instance profile associated with the instance. Specified as an ARN.
iam-instance-profile.id - The instance profile associated with the instance. Specified as an ID.
image-id - The ID of the image used to launch the instance.
instance-id - The ID of the instance.
instance-lifecycle - Indicates whether this is a Spot Instance, a Scheduled Instance, or a Capacity Block (spot | scheduled | capacity-block).
instance-state-code - The state of the instance, as a 16-bit unsigned integer. The high byte is used for internal purposes and should be ignored. The low byte is set based on the state represented. The valid values are: 0 (pending), 16 (running), 32 (shutting-down), 48 (terminated), 64 (stopping), and 80 (stopped).
instance-state-name - The state of the instance (pending | running | shutting-down | terminated | stopping | stopped).
instance-type - The type of instance (for example, t2.micro).
instance.group-id - The ID of the security group for the instance.
instance.group-name - The name of the security group for the instance.
ip-address - The public IPv4 address of the instance.
ipv6-address - The IPv6 address of the instance.
kernel-id - The kernel ID.
key-name - The name of the key pair used when the instance was launched.
launch-index - When launching multiple instances, this is the index for the instance in the launch group (for example, 0, 1, 2, and so on).
launch-time - The time when the instance was launched, in the ISO 8601 format in the UTC time zone (YYYY-MM-DDThh:mm:ss.sssZ), for example, 2021-09-29T11:04:43.305Z. You can use a wildcard (*), for example, 2021-09-29T*, which matches an entire day.
maintenance-options.auto-recovery - The current automatic recovery behavior of the instance (disabled | default).
metadata-options.http-endpoint - The status of access to the HTTP metadata endpoint on your instance (enabled | disabled)
metadata-options.http-protocol-ipv4 - Indicates whether the IPv4 endpoint is enabled (disabled | enabled).
metadata-options.http-protocol-ipv6 - Indicates whether the IPv6 endpoint is enabled (disabled | enabled).
metadata-options.http-put-response-hop-limit - The HTTP metadata request put response hop limit (integer, possible values 1 to 64)
metadata-options.http-tokens - The metadata request authorization state (optional | required)
metadata-options.instance-metadata-tags - The status of access to instance tags from the instance metadata (enabled | disabled)
metadata-options.state - The state of the metadata option changes (pending | applied).
monitoring-state - Indicates whether detailed monitoring is enabled (disabled | enabled).
network-interface.addresses.association.allocation-id - The allocation ID.
network-interface.addresses.association.association-id - The association ID.
network-interface.addresses.association.carrier-ip - The carrier IP address.
network-interface.addresses.association.customer-owned-ip - The customer-owned IP address.
network-interface.addresses.association.ip-owner-id - The owner ID of the private IPv4 address associated with the network interface.
network-interface.addresses.association.public-dns-name - The public DNS name.
network-interface.addresses.association.public-ip - The ID of the association of an Elastic IP address (IPv4) with a network interface.
network-interface.addresses.primary - Specifies whether the IPv4 address of the network interface is the primary private IPv4 address.
network-interface.addresses.private-dns-name - The private DNS name.
network-interface.addresses.private-ip-address - The private IPv4 address associated with the network interface.
network-interface.association.allocation-id - The allocation ID returned when you allocated the Elastic IP address (IPv4) for your network interface.
network-interface.association.association-id - The association ID returned when the network interface was associated with an IPv4 address.
network-interface.association.carrier-ip - The carrier IP address.
network-interface.association.customer-owned-ip - The customer-owned IP address.
network-interface.association.ip-owner-id - The owner of the Elastic IP address (IPv4) associated with the network interface.
network-interface.association.public-dns-name - The public DNS name.
network-interface.association.public-ip - The address of the Elastic IP address (IPv4) bound to the network interface.
network-interface.attachment.attach-time - The time that the network interface was attached to an instance.
network-interface.attachment.attachment-id - The ID of the interface attachment.
network-interface.attachment.delete-on-termination - Specifies whether the attachment is deleted when an instance is terminated.
network-interface.attachment.device-index - The device index to which the network interface is attached.
network-interface.attachment.instance-id - The ID of the instance to which the network interface is attached.
network-interface.attachment.instance-owner-id - The owner ID of the instance to which the network interface is attached.
network-interface.attachment.network-card-index - The index of the network card.
network-interface.attachment.status - The status of the attachment (attaching | attached | detaching | detached).
network-interface.availability-zone - The Availability Zone for the network interface.
network-interface.deny-all-igw-traffic - A Boolean that indicates whether a network interface with an IPv6 address is unreachable from the public internet.
network-interface.description - The description of the network interface.
network-interface.group-id - The ID of a security group associated with the network interface.
network-interface.group-name - The name of a security group associated with the network interface.
network-interface.ipv4-prefixes.ipv4-prefix - The IPv4 prefixes that are assigned to the network interface.
network-interface.ipv6-address - The IPv6 address associated with the network interface.
network-interface.ipv6-addresses.ipv6-address - The IPv6 address associated with the network interface.
network-interface.ipv6-addresses.is-primary-ipv6 - A Boolean that indicates whether this is the primary IPv6 address.
network-interface.ipv6-native - A Boolean that indicates whether this is an IPv6 only network interface.
network-interface.ipv6-prefixes.ipv6-prefix - The IPv6 prefix assigned to the network interface.
network-interface.mac-address - The MAC address of the network interface.
network-interface.network-interface-id - The ID of the network interface.
network-interface.operator.managed - A Boolean that indicates whether the instance has a managed network interface.
network-interface.operator.principal - The principal that manages the network interface. Only valid for instances with managed network interfaces, where managed is true.
network-interface.outpost-arn - The ARN of the Outpost.
network-interface.owner-id - The ID of the owner of the network interface.
network-interface.private-dns-name - The private DNS name of the network interface.
network-interface.private-ip-address - The private IPv4 address.
network-interface.public-dns-name - The public DNS name.
network-interface.requester-id - The requester ID for the network interface.
network-interface.requester-managed - Indicates whether the network interface is being managed by Amazon Web Services.
network-interface.status - The status of the network interface (available | in-use).
network-interface.source-dest-check - Whether the network interface performs source/destination checking. A value of true means that checking is enabled, and false means that checking is disabled. The value must be false for the network interface to perform network address translation (NAT) in your VPC.
network-interface.subnet-id - The ID of the subnet for the network interface.
network-interface.tag-key - The key of a tag assigned to the network interface.
network-interface.tag-value - The value of a tag assigned to the network interface.
network-interface.vpc-id - The ID of the VPC for the network interface.
network-performance-options.bandwidth-weighting - Where the performance boost is applied, if applicable. Valid values: default, vpc-1, ebs-1.
operator.managed - A Boolean that indicates whether this is a managed instance.
operator.principal - The principal that manages the instance. Only valid for managed instances, where managed is true.
outpost-arn - The Amazon Resource Name (ARN) of the Outpost.
owner-id - The Amazon Web Services account ID of the instance owner.
placement-group-name - The name of the placement group for the instance.
placement-partition-number - The partition in which the instance is located.
platform - The platform. To list only Windows instances, use windows.
platform-details - The platform (Linux/UNIX | Red Hat BYOL Linux | Red Hat Enterprise Linux | Red Hat Enterprise Linux with HA | Red Hat Enterprise Linux with High Availability | Red Hat Enterprise Linux with SQL Server Standard and HA | Red Hat Enterprise Linux with SQL Server Enterprise and HA | Red Hat Enterprise Linux with SQL Server Standard | Red Hat Enterprise Linux with SQL Server Web | Red Hat Enterprise Linux with SQL Server Enterprise | SQL Server Enterprise | SQL Server Standard | SQL Server Web | SUSE Linux | Ubuntu Pro | Windows | Windows BYOL | Windows with SQL Server Enterprise | Windows with SQL Server Standard | Windows with SQL Server Web).
private-dns-name - The private IPv4 DNS name of the instance.
private-dns-name-options.enable-resource-name-dns-a-record - A Boolean that indicates whether to respond to DNS queries for instance hostnames with DNS A records.
private-dns-name-options.enable-resource-name-dns-aaaa-record - A Boolean that indicates whether to respond to DNS queries for instance hostnames with DNS AAAA records.
private-dns-name-options.hostname-type - The type of hostname (ip-name | resource-name).
private-ip-address - The private IPv4 address of the instance. This can only be used to filter by the primary IP address of the network interface attached to the instance. To filter by additional IP addresses assigned to the network interface, use the filter network-interface.addresses.private-ip-address.
product-code - The product code associated with the AMI used to launch the instance.
product-code.type - The type of product code (devpay | marketplace).
ramdisk-id - The RAM disk ID.
reason - The reason for the current state of the instance (for example, shows \"User Initiated [date]\" when you stop or terminate the instance). Similar to the state-reason-code filter.
requester-id - The ID of the entity that launched the instance on your behalf (for example, Amazon Web Services Management Console, Auto Scaling, and so on).
reservation-id - The ID of the instance's reservation. A reservation ID is created any time you launch an instance. A reservation ID has a one-to-one relationship with an instance launch request, but can be associated with more than one instance if you launch multiple instances using the same launch request. For example, if you launch one instance, you get one reservation ID. If you launch ten instances using the same launch request, you also get one reservation ID.
root-device-name - The device name of the root device volume (for example, /dev/sda1).
root-device-type - The type of the root device volume (ebs | instance-store).
source-dest-check - Indicates whether the instance performs source/destination checking. A value of true means that checking is enabled, and false means that checking is disabled. The value must be false for the instance to perform network address translation (NAT) in your VPC.
spot-instance-request-id - The ID of the Spot Instance request.
state-reason-code - The reason code for the state change.
state-reason-message - A message that describes the state change.
subnet-id - The ID of the subnet for the instance.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources that have a tag with a specific key, regardless of the tag value.
tenancy - The tenancy of an instance (dedicated | default | host).
tpm-support - Indicates if the instance is configured for NitroTPM support (v2.0).
usage-operation - The usage operation value for the instance (RunInstances | RunInstances:00g0 | RunInstances:0010 | RunInstances:1010 | RunInstances:1014 | RunInstances:1110 | RunInstances:0014 | RunInstances:0210 | RunInstances:0110 | RunInstances:0100 | RunInstances:0004 | RunInstances:0200 | RunInstances:000g | RunInstances:0g00 | RunInstances:0002 | RunInstances:0800 | RunInstances:0102 | RunInstances:0006 | RunInstances:0202).
usage-operation-update-time - The time that the usage operation was last updated, for example, 2022-09-15T17:15:20.000Z.
virtualization-type - The virtualization type of the instance (paravirtual | hvm).
vpc-id - The ID of the VPC that the instance is running in.
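When calling DescribeInstances, the filter names above are passed as a list of {Name, Values} objects. A minimal sketch of shaping that parameter in Python (the boto3 call at the bottom is commented out and is an assumption; only the data shaping runs here):

```python
# Build the Filters parameter shape used by DescribeInstances from the
# filter names documented above. Only the data shaping runs here; the
# boto3 call at the bottom is commented out and assumed, not executed.
def build_filters(pairs):
    """Turn (name, values) pairs into the Filters wire shape."""
    return [{"Name": name, "Values": list(values)} for name, values in pairs]

filters = build_filters([
    ("instance-state-name", ["running"]),
    ("instance-type", ["t2.micro", "t3.micro"]),
    # tag:<key> filters put the tag key in the name and the tag value in Values.
    ("tag:Owner", ["TeamA"]),
])

# import boto3
# ec2 = boto3.client("ec2")
# reservations = ec2.describe_instances(Filters=filters)["Reservations"]
```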
The Availability Zone in which to launch the instances.
", + "documentation":"The Availability Zone in which to launch the instances. For example, us-east-2a.
Either AvailabilityZone or AvailabilityZoneId must be specified in the request, but not both.
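This either-or constraint can be checked client-side before the request is built; a minimal sketch (the helper is hypothetical, not part of any SDK):

```python
def require_one_az(availability_zone=None, availability_zone_id=None):
    """Enforce the constraint above: exactly one of AvailabilityZone or
    AvailabilityZoneId must be supplied, never both and never neither."""
    if (availability_zone is None) == (availability_zone_id is None):
        raise ValueError(
            "Specify either AvailabilityZone or AvailabilityZoneId, but not both")
    return availability_zone if availability_zone is not None else availability_zone_id
```

For example, `require_one_az(availability_zone="us-east-2a")` returns the zone name, while passing both arguments (or neither) raises a `ValueError`.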
The block device mappings, which define the EBS volumes and instance store volumes to attach to the instance at launch.
Supported only for fleets of type instant.
For more information, see Block device mappings for volumes on Amazon EC2 instances in the Amazon EC2 User Guide.
", "locationName":"blockDeviceMappingSet" + }, + "AvailabilityZoneId":{ + "shape":"AvailabilityZoneId", + "documentation":"The ID of the Availability Zone in which to launch the instances. For example, use2-az1.
Either AvailabilityZone or AvailabilityZoneId must be specified in the request, but not both.
Describes overrides for a launch template.
" @@ -36529,7 +36534,7 @@ }, "AvailabilityZone":{ "shape":"AvailabilityZoneName", - "documentation":"The Availability Zone in which to launch the instances.
" + "documentation":"The Availability Zone in which to launch the instances. For example, us-east-2a.
Either AvailabilityZone or AvailabilityZoneId must be specified in the request, but not both.
The ID of the AMI in the format ami-17characters00000.
Alternatively, you can specify a Systems Manager parameter, using one of the following formats. The Systems Manager parameter will resolve to an AMI ID on launch.
To reference a public parameter:
resolve:ssm:public-parameter
To reference a parameter stored in the same account:
resolve:ssm:parameter-name
resolve:ssm:parameter-name:version-number
resolve:ssm:parameter-name:label
To reference a parameter shared from another Amazon Web Services account:
resolve:ssm:parameter-ARN
resolve:ssm:parameter-ARN:version-number
resolve:ssm:parameter-ARN:label
For more information, see Use a Systems Manager parameter instead of an AMI ID in the Amazon EC2 User Guide.
This parameter is only available for fleets of type instant. For fleets of type maintain and request, you must specify the AMI ID in the launch template.
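The ImageId formats above are distinguishable by prefix; a minimal sketch (hypothetical helper, not an SDK function) that tells a literal AMI ID from a Systems Manager reference:

```python
def classify_image_id(image_id):
    """Classify an ImageId per the formats documented above: a literal
    ami-... ID, or a resolve:ssm: reference that resolves at launch."""
    if image_id.startswith("resolve:ssm:"):
        return "ssm-reference"
    if image_id.startswith("ami-"):
        return "ami-id"
    raise ValueError(f"unrecognized ImageId format: {image_id!r}")
```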
The ID of the Availability Zone in which to launch the instances. For example, use2-az1.
Either AvailabilityZone or AvailabilityZoneId must be specified in the request, but not both.
Describes overrides for a launch template.
" @@ -49897,7 +49906,7 @@ }, "AvailabilityZone":{ "shape":"String", - "documentation":"The Availability Zone in which to launch the instances.
", + "documentation":"The Availability Zone in which to launch the instances. For example, us-east-2a.
Either AvailabilityZone or AvailabilityZoneId must be specified in the request, but not both.
The instance requirements. When you specify instance requirements, Amazon EC2 will identify instance types with the provided requirements, and then use your On-Demand and Spot allocation strategies to launch instances from these instance types, in the same way as when you specify a list of instance types.
If you specify InstanceRequirements, you can't specify InstanceType.
The ID of the Availability Zone in which to launch the instances. For example, use2-az1.
Either AvailabilityZone or AvailabilityZoneId must be specified in the request, but not both.
Describes overrides for a launch template.
" @@ -66855,7 +66869,7 @@ "members":{ "AvailabilityZone":{ "shape":"String", - "documentation":"The Availability Zone.
[Spot Fleet only] To specify multiple Availability Zones, separate them using commas; for example, \"us-west-2a, us-west-2b\".
", + "documentation":"The Availability Zone. For example, us-east-2a.
[Spot Fleet only] To specify multiple Availability Zones, separate them using commas; for example, \"us-east-2a, us-east-2b\".
Either AvailabilityZone or AvailabilityZoneId must be specified in the request, but not both.
The tenancy of the instance (if the instance is running in a VPC). An instance with a tenancy of dedicated runs on single-tenant hardware. The host tenancy is not supported for Spot Instances.
The ID of the Availability Zone. For example, use2-az1.
[Spot Fleet only] To specify multiple Availability Zones, separate them using commas; for example, \"use2-az1, use2-bz1\".
Either AvailabilityZone or AvailabilityZoneId must be specified in the request, but not both.
Describes Spot Instance placement.
" diff --git a/awscli/botocore/data/ecr/2015-09-21/service-2.json b/awscli/botocore/data/ecr/2015-09-21/service-2.json index 40ea39bc36bf..9913cbd3f5f6 100644 --- a/awscli/botocore/data/ecr/2015-09-21/service-2.json +++ b/awscli/botocore/data/ecr/2015-09-21/service-2.json @@ -1435,7 +1435,7 @@ }, "appliedFor":{ "shape":"RCTAppliedForList", - "documentation":"A list of enumerable strings representing the Amazon ECR repository creation scenarios that this template will apply towards. The two supported scenarios are PULL_THROUGH_CACHE and REPLICATION
A list of enumerable strings representing the Amazon ECR repository creation scenarios that this template will apply towards. The supported scenarios are PULL_THROUGH_CACHE, REPLICATION, and CREATE_ON_PUSH
A list of enumerable Strings representing the repository creation scenarios that this template will apply towards. The two supported scenarios are PULL_THROUGH_CACHE and REPLICATION
" + "documentation":"A list of enumerable Strings representing the repository creation scenarios that this template will apply towards. The supported scenarios are PULL_THROUGH_CACHE, REPLICATION, and CREATE_ON_PUSH
" }, "customRoleArn":{ "shape":"CustomRoleArn", @@ -5165,7 +5166,7 @@ }, "appliedFor":{ "shape":"RCTAppliedForList", - "documentation":"Updates the list of enumerable strings representing the Amazon ECR repository creation scenarios that this template will apply towards. The two supported scenarios are PULL_THROUGH_CACHE and REPLICATION
Updates the list of enumerable strings representing the Amazon ECR repository creation scenarios that this template will apply towards. The supported scenarios are PULL_THROUGH_CACHE, REPLICATION, and CREATE_ON_PUSH
Configuration for a canary deployment strategy that shifts a fixed percentage of traffic to the new service revision, waits for a specified bake time, then shifts the remaining traffic.
This is only valid when you run CreateService or UpdateService with deploymentController set to ECS and a deploymentConfiguration with a strategy set to CANARY.
CloudWatch provides two categories of monitoring: basic monitoring and detailed monitoring. By default, your managed instance is configured for basic monitoring. You can optionally enable detailed monitoring to help you more quickly identify and act on operational issues. You can enable or turn off detailed monitoring at launch or when the managed instance is running or stopped. For more information, see Detailed monitoring for Amazon ECS Managed Instances in the Amazon ECS Developer Guide.
" }, + "capacityOptionType":{ + "shape":"CapacityOptionType", + "documentation":"The capacity option type. This determines whether Amazon ECS launches On-Demand or Spot Instances for your managed instance capacity provider.
Valid values are:
ON_DEMAND - Launches standard On-Demand Instances. On-Demand Instances provide predictable pricing and availability.
SPOT - Launches Spot Instances that use spare Amazon EC2 capacity at reduced cost. Spot Instances can be interrupted by Amazon EC2 with a two-minute notification when the capacity is needed back.
The default is On-Demand
For more information about Amazon EC2 capacity options, see Instance purchasing options in the Amazon EC2 User Guide.
" + }, "instanceRequirements":{ "shape":"InstanceRequirementsRequest", "documentation":"The instance requirements. You can specify:
The instance types
Instance requirements such as vCPU count, memory, network performance, and accelerator specifications
Amazon ECS automatically selects the instances that match the specified criteria.
" @@ -6251,7 +6262,7 @@ "members":{ "name":{ "shape":"SettingName", - "documentation":"The resource name for which to modify the account setting.
The following are the valid values for the account setting name.
serviceLongArnFormat - When modified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt-in status of the user or role that created the resource. You must turn on this setting to use Amazon ECS features such as resource tagging.
taskLongArnFormat - When modified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt-in status of the user or role that created the resource. You must turn on this setting to use Amazon ECS features such as resource tagging.
containerInstanceLongArnFormat - When modified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt-in status of the user or role that created the resource. You must turn on this setting to use Amazon ECS features such as resource tagging.
awsvpcTrunking - When modified, the elastic network interface (ENI) limit for any new container instances that support the feature is changed. If awsvpcTrunking is turned on, any new container instances that support the feature are launched have the increased ENI limits available to them. For more information, see Elastic Network Interface Trunking in the Amazon Elastic Container Service Developer Guide.
containerInsights - Container Insights with enhanced observability provides all the Container Insights metrics, plus additional task and container metrics. This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays these critical performance data in curated dashboards removing the heavy lifting in observability set-up.
To use Container Insights with enhanced observability, set the containerInsights account setting to enhanced.
To use Container Insights, set the containerInsights account setting to enabled.
For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability in the Amazon Elastic Container Service Developer Guide.
dualStackIPv6 - When turned on, when using a VPC in dual stack mode, your tasks using the awsvpc network mode can have an IPv6 address assigned. For more information on using IPv6 with tasks launched on Amazon EC2 instances, see Using a VPC in dual-stack mode. For more information on using IPv6 with tasks launched on Fargate, see Using a VPC in dual-stack mode.
fargateFIPSMode - If you specify fargateFIPSMode, Fargate FIPS 140 compliance is affected.
fargateTaskRetirementWaitPeriod - When Amazon Web Services determines that a security or infrastructure update is needed for an Amazon ECS task hosted on Fargate, the tasks need to be stopped and new tasks launched to replace them. Use fargateTaskRetirementWaitPeriod to configure the wait time to retire a Fargate task. For information about the Fargate tasks maintenance, see Amazon Web Services Fargate task maintenance in the Amazon ECS Developer Guide.
tagResourceAuthorization - Amazon ECS is introducing tagging authorization for resource creation. Users must have permissions for actions that create the resource, such as ecsCreateCluster. If tags are specified when you create a resource, Amazon Web Services performs additional authorization to verify if users or roles have permissions to create tags. Therefore, you must grant explicit permissions to use the ecs:TagResource action. For more information, see Grant permission to tag resources on creation in the Amazon ECS Developer Guide.
defaultLogDriverMode -Amazon ECS supports setting a default delivery mode of log messages from a container to the logDriver that you specify in the container's logConfiguration. The delivery mode affects application stability when the flow of logs from the container to the log driver is interrupted. The defaultLogDriverMode setting supports two values: blocking and non-blocking. If you don't specify a delivery mode in your container definition's logConfiguration, the mode you specify using this account setting will be used as the default. For more information about log delivery modes, see LogConfiguration.
On June 25, 2025, Amazon ECS changed the default log driver mode from blocking to non-blocking to prioritize task availability over logging. To continue using the blocking mode after this change, do one of the following:
Set the mode option in your container definition's logConfiguration as blocking.
Set the defaultLogDriverMode account setting to blocking.
guardDutyActivate - The guardDutyActivate parameter is read-only in Amazon ECS and indicates whether Amazon ECS Runtime Monitoring is enabled or disabled by your security administrator in your Amazon ECS account. Amazon GuardDuty controls this account setting on your behalf. For more information, see Protecting Amazon ECS workloads with Amazon ECS Runtime Monitoring.
The resource name for which to modify the account setting.
The following are the valid values for the account setting name.
serviceLongArnFormat - When modified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt-in status of the user or role that created the resource. You must turn on this setting to use Amazon ECS features such as resource tagging.
taskLongArnFormat - When modified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt-in status of the user or role that created the resource. You must turn on this setting to use Amazon ECS features such as resource tagging.
containerInstanceLongArnFormat - When modified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt-in status of the user or role that created the resource. You must turn on this setting to use Amazon ECS features such as resource tagging.
awsvpcTrunking - When modified, the elastic network interface (ENI) limit for any new container instances that support the feature is changed. If awsvpcTrunking is turned on, any new container instances that support the feature are launched have the increased ENI limits available to them. For more information, see Elastic Network Interface Trunking in the Amazon Elastic Container Service Developer Guide.
containerInsights - Container Insights with enhanced observability provides all the Container Insights metrics, plus additional task and container metrics. This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays these critical performance data in curated dashboards removing the heavy lifting in observability set-up.
To use Container Insights with enhanced observability, set the containerInsights account setting to enhanced.
To use Container Insights, set the containerInsights account setting to enabled.
For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability in the Amazon Elastic Container Service Developer Guide.
dualStackIPv6 - When turned on, when using a VPC in dual stack mode, your tasks using the awsvpc network mode can have an IPv6 address assigned. For more information on using IPv6 with tasks launched on Amazon EC2 instances, see Using a VPC in dual-stack mode. For more information on using IPv6 with tasks launched on Fargate, see Using a VPC in dual-stack mode.
fargateFIPSMode - If you specify fargateFIPSMode, Fargate FIPS 140 compliance is affected.
fargateTaskRetirementWaitPeriod - When Amazon Web Services determines that a security or infrastructure update is needed for an Amazon ECS task hosted on Fargate, the tasks need to be stopped and new tasks launched to replace them. Use fargateTaskRetirementWaitPeriod to configure the wait time to retire a Fargate task. For information about the Fargate tasks maintenance, see Amazon Web Services Fargate task maintenance in the Amazon ECS Developer Guide.
fargateEventWindows - When Amazon Web Services determines that a security or infrastructure update is needed for an Amazon ECS task hosted on Fargate, the tasks need to be stopped and new tasks launched to replace them. Use fargateEventWindows to use EC2 Event Windows associated with Fargate tasks to configure time windows for task retirement.
tagResourceAuthorization - Amazon ECS is introducing tagging authorization for resource creation. Users must have permissions for actions that create the resource, such as ecs:CreateCluster. If tags are specified when you create a resource, Amazon Web Services performs additional authorization to verify if users or roles have permissions to create tags. Therefore, you must grant explicit permissions to use the ecs:TagResource action. For more information, see Grant permission to tag resources on creation in the Amazon ECS Developer Guide.
defaultLogDriverMode - Amazon ECS supports setting a default delivery mode of log messages from a container to the logDriver that you specify in the container's logConfiguration. The delivery mode affects application stability when the flow of logs from the container to the log driver is interrupted. The defaultLogDriverMode setting supports two values: blocking and non-blocking. If you don't specify a delivery mode in your container definition's logConfiguration, the mode you specify using this account setting will be used as the default. For more information about log delivery modes, see LogConfiguration.
On June 25, 2025, Amazon ECS changed the default log driver mode from blocking to non-blocking to prioritize task availability over logging. To continue using the blocking mode after this change, do one of the following:
Set the mode option in your container definition's logConfiguration as blocking.
Set the defaultLogDriverMode account setting to blocking.
guardDutyActivate - The guardDutyActivate parameter is read-only in Amazon ECS and indicates whether Amazon ECS Runtime Monitoring is enabled or disabled by your security administrator in your Amazon ECS account. Amazon GuardDuty controls this account setting on your behalf. For more information, see Protecting Amazon ECS workloads with Amazon ECS Runtime Monitoring.
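The account settings above are applied as simple name/value pairs. The sketch below is illustrative only (it is not part of the SDK): it performs a client-side sanity check and produces the payload shape that a PutAccountSetting call expects. The table of allowed values is assembled here from this documentation and may lag behind the service.

```python
# Illustrative client-side check for a few of the account settings
# described above. Allowed values are copied from this documentation
# and are an assumption, not an authoritative list.

VALID_ACCOUNT_SETTINGS = {
    "containerInsights": {"enhanced", "enabled", "disabled"},
    "dualStackIPv6": {"enabled", "disabled"},
    "defaultLogDriverMode": {"blocking", "non-blocking"},
}

def account_setting_payload(name: str, value: str) -> dict:
    """Validate a setting locally and return a PutAccountSetting-style payload."""
    allowed = VALID_ACCOUNT_SETTINGS.get(name)
    if allowed is None:
        raise ValueError(f"unknown account setting: {name!r}")
    if value not in allowed:
        raise ValueError(f"{value!r} is not valid for {name!r}")
    return {"name": name, "value": value}

# Opt in to Container Insights with enhanced observability:
payload = account_setting_payload("containerInsights", "enhanced")
```

The same payload could then be passed to the service (for example, through a boto3 ECS client), keeping invalid values from ever reaching the API.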
The Amazon ECS account setting name to modify.
The following are the valid values for the account setting name.
serviceLongArnFormat - When modified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt-in status of the user or role that created the resource. You must turn on this setting to use Amazon ECS features such as resource tagging.
taskLongArnFormat - When modified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt-in status of the user or role that created the resource. You must turn on this setting to use Amazon ECS features such as resource tagging.
containerInstanceLongArnFormat - When modified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt-in status of the user or role that created the resource. You must turn on this setting to use Amazon ECS features such as resource tagging.
awsvpcTrunking - When modified, the elastic network interface (ENI) limit for any new container instances that support the feature is changed. If awsvpcTrunking is turned on, any new container instances that support the feature are launched with the increased ENI limits available to them. For more information, see Elastic Network Interface Trunking in the Amazon Elastic Container Service Developer Guide.
containerInsights - Container Insights with enhanced observability provides all the Container Insights metrics, plus additional task and container metrics. This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays this critical performance data in curated dashboards, removing the heavy lifting from observability setup.
To use Container Insights with enhanced observability, set the containerInsights account setting to enhanced.
To use Container Insights, set the containerInsights account setting to enabled.
For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability in the Amazon Elastic Container Service Developer Guide.
dualStackIPv6 - When turned on, tasks that use the awsvpc network mode can have an IPv6 address assigned when they run in a VPC in dual-stack mode. For more information on using IPv6 with tasks launched on Amazon EC2 instances, see Using a VPC in dual-stack mode. For more information on using IPv6 with tasks launched on Fargate, see Using a VPC in dual-stack mode.
fargateTaskRetirementWaitPeriod - When Amazon Web Services determines that a security or infrastructure update is needed for an Amazon ECS task hosted on Fargate, the tasks need to be stopped and new tasks launched to replace them. Use fargateTaskRetirementWaitPeriod to configure the wait time to retire a Fargate task. For information about Fargate task maintenance, see Amazon Web Services Fargate task maintenance in the Amazon ECS Developer Guide.
tagResourceAuthorization - Amazon ECS is introducing tagging authorization for resource creation. Users must have permissions for actions that create the resource, such as ecs:CreateCluster. If tags are specified when you create a resource, Amazon Web Services performs additional authorization to verify if users or roles have permissions to create tags. Therefore, you must grant explicit permissions to use the ecs:TagResource action. For more information, see Grant permission to tag resources on creation in the Amazon ECS Developer Guide.
defaultLogDriverMode - Amazon ECS supports setting a default delivery mode of log messages from a container to the logDriver that you specify in the container's logConfiguration. The delivery mode affects application stability when the flow of logs from the container to the log driver is interrupted. The defaultLogDriverMode setting supports two values: blocking and non-blocking. If you don't specify a delivery mode in your container definition's logConfiguration, the mode you specify using this account setting will be used as the default. For more information about log delivery modes, see LogConfiguration.
On June 25, 2025, Amazon ECS changed the default log driver mode from blocking to non-blocking to prioritize task availability over logging. To continue using the blocking mode after this change, do one of the following:
Set the mode option in your container definition's logConfiguration as blocking.
Set the defaultLogDriverMode account setting to blocking.
guardDutyActivate - The guardDutyActivate parameter is read-only in Amazon ECS and indicates whether Amazon ECS Runtime Monitoring is enabled or disabled by your security administrator in your Amazon ECS account. Amazon GuardDuty controls this account setting on your behalf. For more information, see Protecting Amazon ECS workloads with Amazon ECS Runtime Monitoring.
The Amazon ECS account setting name to modify.
The following are the valid values for the account setting name.
serviceLongArnFormat - When modified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt-in status of the user or role that created the resource. You must turn on this setting to use Amazon ECS features such as resource tagging.
taskLongArnFormat - When modified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt-in status of the user or role that created the resource. You must turn on this setting to use Amazon ECS features such as resource tagging.
containerInstanceLongArnFormat - When modified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt-in status of the user or role that created the resource. You must turn on this setting to use Amazon ECS features such as resource tagging.
awsvpcTrunking - When modified, the elastic network interface (ENI) limit for any new container instances that support the feature is changed. If awsvpcTrunking is turned on, any new container instances that support the feature are launched with the increased ENI limits available to them. For more information, see Elastic Network Interface Trunking in the Amazon Elastic Container Service Developer Guide.
containerInsights - Container Insights with enhanced observability provides all the Container Insights metrics, plus additional task and container metrics. This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays this critical performance data in curated dashboards, removing the heavy lifting from observability setup.
To use Container Insights with enhanced observability, set the containerInsights account setting to enhanced.
To use Container Insights, set the containerInsights account setting to enabled.
For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability in the Amazon Elastic Container Service Developer Guide.
dualStackIPv6 - When turned on, tasks that use the awsvpc network mode can have an IPv6 address assigned when they run in a VPC in dual-stack mode. For more information on using IPv6 with tasks launched on Amazon EC2 instances, see Using a VPC in dual-stack mode. For more information on using IPv6 with tasks launched on Fargate, see Using a VPC in dual-stack mode.
fargateTaskRetirementWaitPeriod - When Amazon Web Services determines that a security or infrastructure update is needed for an Amazon ECS task hosted on Fargate, the tasks need to be stopped and new tasks launched to replace them. Use fargateTaskRetirementWaitPeriod to configure the wait time to retire a Fargate task. For information about Fargate task maintenance, see Amazon Web Services Fargate task maintenance in the Amazon ECS Developer Guide.
fargateEventWindows - When Amazon Web Services determines that a security or infrastructure update is needed for an Amazon ECS task hosted on Fargate, the tasks need to be stopped and new tasks launched to replace them. Use fargateEventWindows to associate EC2 Event Windows with Fargate tasks and configure the time windows for task retirement.
tagResourceAuthorization - Amazon ECS is introducing tagging authorization for resource creation. Users must have permissions for actions that create the resource, such as ecs:CreateCluster. If tags are specified when you create a resource, Amazon Web Services performs additional authorization to verify if users or roles have permissions to create tags. Therefore, you must grant explicit permissions to use the ecs:TagResource action. For more information, see Grant permission to tag resources on creation in the Amazon ECS Developer Guide.
defaultLogDriverMode - Amazon ECS supports setting a default delivery mode of log messages from a container to the logDriver that you specify in the container's logConfiguration. The delivery mode affects application stability when the flow of logs from the container to the log driver is interrupted. The defaultLogDriverMode setting supports two values: blocking and non-blocking. If you don't specify a delivery mode in your container definition's logConfiguration, the mode you specify using this account setting will be used as the default. For more information about log delivery modes, see LogConfiguration.
On June 25, 2025, Amazon ECS changed the default log driver mode from blocking to non-blocking to prioritize task availability over logging. To continue using the blocking mode after this change, do one of the following:
Set the mode option in your container definition's logConfiguration as blocking.
Set the defaultLogDriverMode account setting to blocking.
guardDutyActivate - The guardDutyActivate parameter is read-only in Amazon ECS and indicates whether Amazon ECS Runtime Monitoring is enabled or disabled by your security administrator in your Amazon ECS account. Amazon GuardDuty controls this account setting on your behalf. For more information, see Protecting Amazon ECS workloads with Amazon ECS Runtime Monitoring.
Configures the command to treat the payloadTemplate as a JSON document for preprocessing. This preprocessor substitutes placeholders with parameter values to generate the command execution request payload.
The maximum amount of time (in milliseconds) that an outgoing call waits for other calls with which it batches messages of the same type. The higher the setting, the longer the latency of the batched HTTP Action will be.
" + }, + "maxBatchSize":{ + "shape":"MaxBatchSize", + "documentation":"The maximum number of messages that are batched together in a single action execution.
" + }, + "maxBatchSizeBytes":{ + "shape":"MaxBatchSizeBytes", + "documentation":"Maximum size of a message batch, in bytes.
" + } + }, + "documentation":"Configuration settings for batching.
" + }, "BatchMode":{"type":"boolean"}, "BeforeSubstitutionFlag":{"type":"boolean"}, "Behavior":{ @@ -11922,6 +11940,7 @@ "ElasticsearchId":{"type":"string"}, "ElasticsearchIndex":{"type":"string"}, "ElasticsearchType":{"type":"string"}, + "EnableBatching":{"type":"boolean"}, "EnableCachingForHttp":{"type":"boolean"}, "EnableIoTLoggingParams":{ "type":"structure", @@ -13111,6 +13130,14 @@ "auth":{ "shape":"HttpAuthorization", "documentation":"The authentication method to use when sending data to an HTTPS endpoint.
" + }, + "enableBatching":{ + "shape":"EnableBatching", + "documentation":"Whether to process the HTTP action messages into a single request. Value can be true or false.
" + }, + "batchConfig":{ + "shape":"BatchConfig", + "documentation":"The configuration settings for batching. For more information, see Batching HTTP action messages.
" } }, "documentation":"Send data to an HTTPS endpoint.
" @@ -17031,6 +17058,21 @@ "max":1024, "pattern":"[A-Za-z0-9+/]+={0,2}" }, + "MaxBatchOpenMs":{ + "type":"integer", + "max":200, + "min":5 + }, + "MaxBatchSize":{ + "type":"integer", + "max":10, + "min":2 + }, + "MaxBatchSizeBytes":{ + "type":"integer", + "max":131072, + "min":100 + }, "MaxBuckets":{ "type":"integer", "max":10000, diff --git a/awscli/botocore/data/opensearch/2021-01-01/service-2.json b/awscli/botocore/data/opensearch/2021-01-01/service-2.json index 6a044fe69ca0..472b2cb30c88 100644 --- a/awscli/botocore/data/opensearch/2021-01-01/service-2.json +++ b/awscli/botocore/data/opensearch/2021-01-01/service-2.json @@ -6351,7 +6351,8 @@ "enum":[ "Data", "Ultrawarm", - "Master" + "Master", + "Warm" ] }, "NonEmptyString":{ diff --git a/awscli/botocore/data/sesv2/2019-09-27/service-2.json b/awscli/botocore/data/sesv2/2019-09-27/service-2.json index 90af9ebd1800..0db96a52e29f 100644 --- a/awscli/botocore/data/sesv2/2019-09-27/service-2.json +++ b/awscli/botocore/data/sesv2/2019-09-27/service-2.json @@ -707,6 +707,20 @@ ], "documentation":"Retrieve inbox placement and engagement rates for the domains that you use to send email.
" }, + "GetEmailAddressInsights":{ + "name":"GetEmailAddressInsights", + "http":{ + "method":"POST", + "requestUri":"/v2/email/email-address-insights/" + }, + "input":{"shape":"GetEmailAddressInsightsRequest"}, + "output":{"shape":"GetEmailAddressInsightsResponse"}, + "errors":[ + {"shape":"TooManyRequestsException"}, + {"shape":"BadRequestException"} + ], + "documentation":"Provides validation insights about a specific email address, including syntax validation, DNS record checks, mailbox existence, and other deliverability factors.
" + }, "GetEmailIdentity":{ "name":"GetEmailIdentity", "http":{ @@ -2438,6 +2452,10 @@ "shape":"TemplateContent", "documentation":"The content of the custom verification email. The total size of the email must be less than 10 MB. The message body may contain HTML, with some limitations. For more information, see Custom verification email frequently asked questions in the Amazon SES Developer Guide.
" }, + "Tags":{ + "shape":"TagList", + "documentation":"An array of objects that define the tags (keys and values) to associate with the custom verification email template.
" + }, "SuccessRedirectionURL":{ "shape":"SuccessRedirectionURL", "documentation":"The URL that the recipient of the verification email is sent to if his or her address is successfully verified.
" @@ -2609,6 +2627,10 @@ "TemplateContent":{ "shape":"EmailTemplateContent", "documentation":"The content of the email template, composed of a subject line, an HTML part, and a text-only part.
" + }, + "Tags":{ + "shape":"TagList", + "documentation":"An array of objects that define the tags (keys and values) to associate with the email template.
" } }, "documentation":"Represents a request to create an email template. For more information, see the Amazon SES Developer Guide.
" @@ -3563,6 +3585,55 @@ "member":{"shape":"InsightsEmailAddress"}, "max":5 }, + "EmailAddressInsightsConfidenceVerdict":{ + "type":"string", + "documentation":"The confidence level of SES that the email address meets the validation criteria:
LOW - Weak or no indication of the specific check (e.g., LOW for IsRoleAddress means the email is less likely to be a role-based address).
MEDIUM - Moderate indication of the specific check (e.g., MEDIUM for IsDisposable means the email might be a disposable address).
HIGH - Strong indication of the specific check (e.g., HIGH for IsRandomInput means the email is very likely randomly generated).
Checks that the email address follows proper RFC standards and contains valid characters in the correct format.
" + }, + "HasValidDnsRecords":{ + "shape":"EmailAddressInsightsVerdict", + "documentation":"Checks that the domain exists, has valid DNS records, and is configured to receive email.
" + }, + "MailboxExists":{ + "shape":"EmailAddressInsightsVerdict", + "documentation":"Checks that the mailbox exists and can receive messages without actually sending an email.
" + }, + "IsRoleAddress":{ + "shape":"EmailAddressInsightsVerdict", + "documentation":"Identifies role-based addresses (such as admin@, support@, or info@) that may have lower engagement rates.
" + }, + "IsDisposable":{ + "shape":"EmailAddressInsightsVerdict", + "documentation":"Checks disposable or temporary email addresses that could negatively impact your sender reputation.
" + }, + "IsRandomInput":{ + "shape":"EmailAddressInsightsVerdict", + "documentation":"Checks if the input appears to be random text.
" + } + }, + "documentation":"Contains individual validation checks performed on an email address.
" + }, + "EmailAddressInsightsVerdict":{ + "type":"structure", + "members":{ + "ConfidenceVerdict":{ + "shape":"EmailAddressInsightsConfidenceVerdict", + "documentation":"The confidence level of the validation verdict.
" + } + }, + "documentation":"Contains the overall validation verdict for an email address.
" + }, "EmailAddressList":{ "type":"list", "member":{"shape":"EmailAddress"} @@ -4242,6 +4313,10 @@ "shape":"TemplateContent", "documentation":"The content of the custom verification email.
" }, + "Tags":{ + "shape":"TagList", + "documentation":"An array of objects that define the tags (keys and values) that are associated with the custom verification email template.
" + }, "SuccessRedirectionURL":{ "shape":"SuccessRedirectionURL", "documentation":"The URL that the recipient of the verification email is sent to if his or her address is successfully verified.
" @@ -4484,6 +4559,27 @@ }, "documentation":"An object that includes statistics that are related to the domain that you specified.
" }, + "GetEmailAddressInsightsRequest":{ + "type":"structure", + "required":["EmailAddress"], + "members":{ + "EmailAddress":{ + "shape":"EmailAddress", + "documentation":"The email address to analyze for validation insights.
" + } + }, + "documentation":"A request to return validation insights about an email address.
" + }, + "GetEmailAddressInsightsResponse":{ + "type":"structure", + "members":{ + "MailboxValidation":{ + "shape":"MailboxValidation", + "documentation":"Detailed validation results for the email address.
" + } + }, + "documentation":"Validation insights about an email address.
" + }, "GetEmailIdentityPoliciesRequest":{ "type":"structure", "required":["EmailIdentity"], @@ -4593,6 +4689,10 @@ "TemplateContent":{ "shape":"EmailTemplateContent", "documentation":"The content of the email template, composed of a subject line, an HTML part, and a text-only part.
" + }, + "Tags":{ + "shape":"TagList", + "documentation":"An array of objects that define the tags (keys and values) that are associated with the email template.
" } }, "documentation":"The following element is returned by the service.
" @@ -5936,6 +6036,20 @@ "TRANSACTIONAL" ] }, + "MailboxValidation":{ + "type":"structure", + "members":{ + "IsValid":{ + "shape":"EmailAddressInsightsVerdict", + "documentation":"Overall validity assessment with a confidence verdict.
" + }, + "Evaluations":{ + "shape":"EmailAddressInsightsMailboxEvaluations", + "documentation":"Specific validation checks performed on the email address.
" + } + }, + "documentation":"Contains detailed validation information about an email address.
" + }, "Max24HourSend":{"type":"double"}, "MaxDeliverySeconds":{ "type":"long", @@ -6457,6 +6571,10 @@ "SuppressedReasons":{ "shape":"SuppressionListReasons", "documentation":"A list that contains the reasons that email addresses will be automatically added to the suppression list for your account. This list can contain any or all of the following:
COMPLAINT – Amazon SES adds an email address to the suppression list for your account when a message sent to that address results in a complaint.
BOUNCE – Amazon SES adds an email address to the suppression list for your account when a message sent to that address results in a hard bounce.
An object that contains additional suppression attributes for your account.
" } }, "documentation":"A request to change your account's suppression preferences.
" @@ -6590,6 +6708,10 @@ "SuppressedReasons":{ "shape":"SuppressionListReasons", "documentation":"A list that contains the reasons that email addresses are automatically added to the suppression list for your account. This list can contain any or all of the following:
COMPLAINT – Amazon SES adds an email address to the suppression list for your account when a message sent to that address results in a complaint.
BOUNCE – Amazon SES adds an email address to the suppression list for your account when a message sent to that address results in a hard bounce.
An object that contains information about the email address suppression preferences for the configuration set in the current Amazon Web Services Region.
" } }, "documentation":"A request to change the account suppression list preferences for a specific configuration set.
" @@ -7592,10 +7714,46 @@ "SuppressedReasons":{ "shape":"SuppressionListReasons", "documentation":"A list that contains the reasons that email addresses will be automatically added to the suppression list for your account. This list can contain any or all of the following:
COMPLAINT – Amazon SES adds an email address to the suppression list for your account when a message sent to that address results in a complaint.
BOUNCE – Amazon SES adds an email address to the suppression list for your account when a message sent to that address results in a hard bounce.
An object that contains information about the email address suppression preferences for your account in the current Amazon Web Services Region.
" }, + "SuppressionConditionThreshold":{ + "type":"structure", + "required":["ConditionThresholdEnabled"], + "members":{ + "ConditionThresholdEnabled":{ + "shape":"FeatureStatus", + "documentation":"Indicates whether Auto Validation is enabled for suppression. Set to ENABLED to enable the Auto Validation feature, or set to DISABLED to disable it.
The overall confidence threshold used to determine suppression decisions.
" + } + }, + "documentation":"Contains Auto Validation settings, allowing you to suppress sending to specific destination(s) if they do not meet required threshold. For details on Auto Validation, see Auto Validation.
" + }, + "SuppressionConfidenceThreshold":{ + "type":"structure", + "required":["ConfidenceVerdictThreshold"], + "members":{ + "ConfidenceVerdictThreshold":{ + "shape":"SuppressionConfidenceVerdictThreshold", + "documentation":"The confidence level threshold for suppression decisions.
" + } + }, + "documentation":"Contains the confidence threshold settings for Auto Validation.
" + }, + "SuppressionConfidenceVerdictThreshold":{ + "type":"string", + "documentation":"The confidence level threshold for suppression validation:
MEDIUM – Allows emails to be sent to addresses with medium or high delivery likelihood.
HIGH – Allows emails to be sent only to addresses with high delivery likelihood.
MANAGED – Managed confidence threshold where Amazon SES automatically determines the appropriate level.
A list that contains the reasons that email addresses are automatically added to the suppression list for your account. This list can contain any or all of the following:
COMPLAINT – Amazon SES adds an email address to the suppression list for your account when a message sent to that address results in a complaint.
BOUNCE – Amazon SES adds an email address to the suppression list for your account when a message sent to that address results in a hard bounce.
An object that contains information about the suppression list preferences for your account.
" }, + "SuppressionValidationAttributes":{ + "type":"structure", + "required":["ConditionThreshold"], + "members":{ + "ConditionThreshold":{ + "shape":"SuppressionConditionThreshold", + "documentation":"Specifies the condition threshold settings for account-level suppression.
" + } + }, + "documentation":"Structure containing validation attributes used for suppressing sending to specific destination on account level.
" + }, + "SuppressionValidationOptions":{ + "type":"structure", + "required":["ConditionThreshold"], + "members":{ + "ConditionThreshold":{ + "shape":"SuppressionConditionThreshold", + "documentation":"Specifies the condition threshold settings for suppression validation.
" + } + }, + "documentation":"Contains validation options for email address suppression.
" + }, "Tag":{ "type":"structure", "required":[ diff --git a/awscli/botocore/data/ssm-sap/2018-05-10/service-2.json b/awscli/botocore/data/ssm-sap/2018-05-10/service-2.json index ef62e27eec5d..ffbd8bfafe60 100644 --- a/awscli/botocore/data/ssm-sap/2018-05-10/service-2.json +++ b/awscli/botocore/data/ssm-sap/2018-05-10/service-2.json @@ -59,7 +59,8 @@ {"shape":"ValidationException"}, {"shape":"InternalServerException"} ], - "documentation":"Gets an application registered with AWS Systems Manager for SAP. It also returns the components of the application.
" + "documentation":"Gets an application registered with AWS Systems Manager for SAP. It also returns the components of the application.
", + "readonly":true }, "GetComponent":{ "name":"GetComponent", @@ -75,7 +76,8 @@ {"shape":"ValidationException"}, {"shape":"InternalServerException"} ], - "documentation":"Gets the component of an application registered with AWS Systems Manager for SAP.
" + "documentation":"Gets the component of an application registered with AWS Systems Manager for SAP.
", + "readonly":true }, "GetConfigurationCheckOperation":{ "name":"GetConfigurationCheckOperation", @@ -90,7 +92,8 @@ {"shape":"ValidationException"}, {"shape":"InternalServerException"} ], - "documentation":"Gets the details of a configuration check operation by specifying the operation ID.
" + "documentation":"Gets the details of a configuration check operation by specifying the operation ID.
", + "readonly":true }, "GetDatabase":{ "name":"GetDatabase", @@ -105,7 +108,8 @@ {"shape":"ValidationException"}, {"shape":"InternalServerException"} ], - "documentation":"Gets the SAP HANA database of an application registered with AWS Systems Manager for SAP.
" + "documentation":"Gets the SAP HANA database of an application registered with AWS Systems Manager for SAP.
", + "readonly":true }, "GetOperation":{ "name":"GetOperation", @@ -120,7 +124,8 @@ {"shape":"ValidationException"}, {"shape":"InternalServerException"} ], - "documentation":"Gets the details of an operation by specifying the operation ID.
" + "documentation":"Gets the details of an operation by specifying the operation ID.
", + "readonly":true }, "GetResourcePermission":{ "name":"GetResourcePermission", @@ -152,7 +157,8 @@ {"shape":"ValidationException"}, {"shape":"InternalServerException"} ], - "documentation":"Lists all the applications registered with AWS Systems Manager for SAP.
" + "documentation":"Lists all the applications registered with AWS Systems Manager for SAP.
", + "readonly":true }, "ListComponents":{ "name":"ListComponents", @@ -169,7 +175,8 @@ {"shape":"ValidationException"}, {"shape":"InternalServerException"} ], - "documentation":"Lists all the components registered with AWS Systems Manager for SAP.
" + "documentation":"Lists all the components registered with AWS Systems Manager for SAP.
", + "readonly":true }, "ListConfigurationCheckDefinitions":{ "name":"ListConfigurationCheckDefinitions", @@ -184,7 +191,8 @@ {"shape":"ValidationException"}, {"shape":"InternalServerException"} ], - "documentation":"Lists all configuration check types supported by AWS Systems Manager for SAP.
" + "documentation":"Lists all configuration check types supported by AWS Systems Manager for SAP.
", + "readonly":true }, "ListConfigurationCheckOperations":{ "name":"ListConfigurationCheckOperations", @@ -200,7 +208,8 @@ {"shape":"ValidationException"}, {"shape":"InternalServerException"} ], - "documentation":"Lists the configuration check operations performed by AWS Systems Manager for SAP.
" + "documentation":"Lists the configuration check operations performed by AWS Systems Manager for SAP.
", + "readonly":true }, "ListDatabases":{ "name":"ListDatabases", @@ -216,7 +225,8 @@ {"shape":"ValidationException"}, {"shape":"InternalServerException"} ], - "documentation":"Lists the SAP HANA databases of an application registered with AWS Systems Manager for SAP.
" + "documentation":"Lists the SAP HANA databases of an application registered with AWS Systems Manager for SAP.
", + "readonly":true }, "ListOperationEvents":{ "name":"ListOperationEvents", @@ -231,7 +241,8 @@ {"shape":"ValidationException"}, {"shape":"InternalServerException"} ], - "documentation":"Returns a list of operations events.
Available parameters include OperationID, as well as optional parameters MaxResults, NextToken, and Filters.
Returns a list of operations events.
Available parameters include OperationID, as well as optional parameters MaxResults, NextToken, and Filters.
Lists the operations performed by AWS Systems Manager for SAP.
" + "documentation":"Lists the operations performed by AWS Systems Manager for SAP.
", + "readonly":true }, "ListSubCheckResults":{ "name":"ListSubCheckResults", @@ -261,7 +273,8 @@ {"shape":"ValidationException"}, {"shape":"InternalServerException"} ], - "documentation":"Lists the sub-check results of a specified configuration check operation.
" + "documentation":"Lists the sub-check results of a specified configuration check operation.
", + "readonly":true }, "ListSubCheckRuleResults":{ "name":"ListSubCheckRuleResults", @@ -276,7 +289,8 @@ {"shape":"ValidationException"}, {"shape":"InternalServerException"} ], - "documentation":"Lists the rules of a specified sub-check belonging to a configuration check operation.
" + "documentation":"Lists the rules of a specified sub-check belonging to a configuration check operation.
", + "readonly":true }, "ListTagsForResource":{ "name":"ListTagsForResource", @@ -292,7 +306,8 @@ {"shape":"ValidationException"}, {"shape":"ConflictException"} ], - "documentation":"Lists all tags on an SAP HANA application and/or database registered with AWS Systems Manager for SAP.
" + "documentation":"Lists all tags on an SAP HANA application and/or database registered with AWS Systems Manager for SAP.
", + "readonly":true }, "PutResourcePermission":{ "name":"PutResourcePermission", @@ -1078,7 +1093,8 @@ "STOPPED", "WARNING", "UNKNOWN", - "ERROR" + "ERROR", + "STOPPING" ] }, "DatabaseSummary":{ diff --git a/tests/functional/botocore/endpoint-rules/artifact/endpoint-tests-1.json b/tests/functional/botocore/endpoint-rules/artifact/endpoint-tests-1.json index 37819cee8835..69d7ecd43bd8 100644 --- a/tests/functional/botocore/endpoint-rules/artifact/endpoint-tests-1.json +++ b/tests/functional/botocore/endpoint-rules/artifact/endpoint-tests-1.json @@ -202,85 +202,43 @@ } }, { - "documentation": "For region us-gov-west-1 with FIPS enabled and DualStack enabled", - "expect": { - "endpoint": { - "properties": { - "authSchemes": [ - { - "name": "sigv4", - "signingRegion": "us-gov-west-1" - } - ] - }, - "url": "https://artifact-fips.us-gov-west-1.api.aws" - } - }, - "params": { - "Region": "us-gov-west-1", - "UseFIPS": true, - "UseDualStack": true - } - }, - { - "documentation": "For region us-gov-west-1 with FIPS enabled and DualStack disabled", + "documentation": "For region eusc-de-east-1 with FIPS enabled and DualStack disabled", "expect": { "endpoint": { "properties": { "authSchemes": [ { "name": "sigv4", - "signingRegion": "us-gov-west-1" + "signingRegion": "eusc-de-east-1" } ] }, - "url": "https://artifact-fips.us-gov-west-1.amazonaws.com" + "url": "https://artifact-fips.eusc-de-east-1.amazonaws.eu" } }, "params": { - "Region": "us-gov-west-1", + "Region": "eusc-de-east-1", "UseFIPS": true, "UseDualStack": false } }, { - "documentation": "For region us-gov-west-1 with FIPS disabled and DualStack enabled", - "expect": { - "endpoint": { - "properties": { - "authSchemes": [ - { - "name": "sigv4", - "signingRegion": "us-gov-west-1" - } - ] - }, - "url": "https://artifact.us-gov-west-1.api.aws" - } - }, - "params": { - "Region": "us-gov-west-1", - "UseFIPS": false, - "UseDualStack": true - } - }, - { - "documentation": "For region us-gov-west-1 with FIPS disabled and 
DualStack disabled", + "documentation": "For region eusc-de-east-1 with FIPS disabled and DualStack disabled", "expect": { "endpoint": { "properties": { "authSchemes": [ { "name": "sigv4", - "signingRegion": "us-gov-west-1" + "signingRegion": "eusc-de-east-1" } ] }, - "url": "https://artifact.us-gov-west-1.amazonaws.com" + "url": "https://artifact.eusc-de-east-1.amazonaws.eu" } }, "params": { - "Region": "us-gov-west-1", + "Region": "eusc-de-east-1", "UseFIPS": false, "UseDualStack": false } @@ -454,43 +412,85 @@ } }, { - "documentation": "For region eusc-de-east-1 with FIPS enabled and DualStack disabled", + "documentation": "For region us-gov-west-1 with FIPS enabled and DualStack enabled", "expect": { "endpoint": { "properties": { "authSchemes": [ { "name": "sigv4", - "signingRegion": "eusc-de-east-1" + "signingRegion": "us-gov-west-1" } ] }, - "url": "https://artifact-fips.eusc-de-east-1.amazonaws.eu" + "url": "https://artifact-fips.us-gov-west-1.api.aws" } }, "params": { - "Region": "eusc-de-east-1", + "Region": "us-gov-west-1", + "UseFIPS": true, + "UseDualStack": true + } + }, + { + "documentation": "For region us-gov-west-1 with FIPS enabled and DualStack disabled", + "expect": { + "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-gov-west-1" + } + ] + }, + "url": "https://artifact-fips.us-gov-west-1.amazonaws.com" + } + }, + "params": { + "Region": "us-gov-west-1", "UseFIPS": true, "UseDualStack": false } }, { - "documentation": "For region eusc-de-east-1 with FIPS disabled and DualStack disabled", + "documentation": "For region us-gov-west-1 with FIPS disabled and DualStack enabled", "expect": { "endpoint": { "properties": { "authSchemes": [ { "name": "sigv4", - "signingRegion": "eusc-de-east-1" + "signingRegion": "us-gov-west-1" } ] }, - "url": "https://artifact.eusc-de-east-1.amazonaws.eu" + "url": "https://artifact.us-gov-west-1.api.aws" } }, "params": { - "Region": "eusc-de-east-1", + "Region": 
"us-gov-west-1", + "UseFIPS": false, + "UseDualStack": true + } + }, + { + "documentation": "For region us-gov-west-1 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-gov-west-1" + } + ] + }, + "url": "https://artifact.us-gov-west-1.amazonaws.com" + } + }, + "params": { + "Region": "us-gov-west-1", "UseFIPS": false, "UseDualStack": false } From 3943d63b6d658c40ce0d0841fa10f664ea84baa8 Mon Sep 17 00:00:00 2001 From: aws-sdk-python-automationThe triggers associated with a Region switch plan.
" }, + "reportConfiguration":{"shape":"ReportConfiguration"}, "name":{ "shape":"PlanName", "documentation":"The name of a Region switch plan.
" @@ -815,6 +816,83 @@ "type":"structure", "members":{} }, + "DocumentDbClusterArn":{ + "type":"string", + "pattern":"arn:aws[a-zA-Z-]*:rds:[a-z0-9-]+:\\d{12}:cluster:[a-zA-Z0-9][a-zA-Z0-9-_]{0,99}" + }, + "DocumentDbClusterArns":{ + "type":"list", + "member":{"shape":"DocumentDbClusterArn"} + }, + "DocumentDbConfiguration":{ + "type":"structure", + "required":[ + "behavior", + "globalClusterIdentifier", + "databaseClusterArns" + ], + "members":{ + "timeoutMinutes":{ + "shape":"DocumentDbConfigurationTimeoutMinutesInteger", + "documentation":"The timeout value specified for the configuration.
" + }, + "crossAccountRole":{ + "shape":"IamRoleArn", + "documentation":"The cross account role for the configuration.
" + }, + "externalId":{ + "shape":"String", + "documentation":"The external ID (secret key) for the configuration.
" + }, + "behavior":{ + "shape":"DocumentDbDefaultBehavior", + "documentation":"The behavior for a global cluster, that is, only allow switchover or also allow failover.
" + }, + "ungraceful":{ + "shape":"DocumentDbUngraceful", + "documentation":"The settings for ungraceful execution.
" + }, + "globalClusterIdentifier":{ + "shape":"DocumentDbGlobalClusterIdentifier", + "documentation":"The global cluster identifier for a DocumentDB global cluster.
" + }, + "databaseClusterArns":{ + "shape":"DocumentDbClusterArns", + "documentation":"The database cluster Amazon Resource Names (ARNs) for a DocumentDB global cluster.
" + } + }, + "documentation":"Configuration for Amazon DocumentDB global clusters used in a Region switch plan.
" + }, + "DocumentDbConfigurationTimeoutMinutesInteger":{ + "type":"integer", + "box":true, + "min":1 + }, + "DocumentDbDefaultBehavior":{ + "type":"string", + "enum":[ + "switchoverOnly", + "failover" + ] + }, + "DocumentDbGlobalClusterIdentifier":{ + "type":"string", + "pattern":"[A-Za-z][0-9A-Za-z-:._]*" + }, + "DocumentDbUngraceful":{ + "type":"structure", + "members":{ + "ungraceful":{ + "shape":"DocumentDbUngracefulBehavior", + "documentation":"The settings for ungraceful execution.
" + } + }, + "documentation":"Configuration for handling failures when performing operations on DocumentDB global clusters.
" + }, + "DocumentDbUngracefulBehavior":{ + "type":"string", + "enum":["failover"] + }, "Duration":{ "type":"string", "pattern":"P(?!$)(\\d+Y)?(\\d+M)?(\\d+D)?(T(?=\\d)(\\d+H)?(\\d+M)?(\\d+S)?)?" @@ -1111,7 +1189,8 @@ "route53HealthCheckConfig":{ "shape":"Route53HealthCheckConfiguration", "documentation":"The Amazon Route 53 health check configuration.
" - } + }, + "documentDbConfig":{"shape":"DocumentDbConfiguration"} }, "documentation":"Execution block configurations for a workflow in a Region switch plan. An execution block represents a specific type of action to perform during a Region switch.
", "union":true @@ -1128,7 +1207,8 @@ "Parallel", "ECSServiceScaling", "EKSResourceScaling", - "Route53HealthCheck" + "Route53HealthCheck", + "DocumentDb" ] }, "ExecutionComment":{ @@ -1210,7 +1290,8 @@ "stepCanceled", "stepPendingApproval", "stepExecutionBehaviorChangedToUngraceful", - "stepPendingApplicationHealthMonitor" + "stepPendingApplicationHealthMonitor", + "planEvaluationWarning" ] }, "ExecutionId":{"type":"string"}, @@ -1237,10 +1318,52 @@ "completedMonitoringApplicationHealth" ] }, + "FailedReportErrorCode":{ + "type":"string", + "enum":[ + "insufficientPermissions", + "invalidResource", + "configurationError" + ] + }, + "FailedReportOutput":{ + "type":"structure", + "members":{ + "errorCode":{ + "shape":"FailedReportErrorCode", + "documentation":"The error code for the failed report generation.
" + }, + "errorMessage":{ + "shape":"String", + "documentation":"The error message for the failed report generation.
" + } + }, + "documentation":"Information about a report generation that failed.
" + }, "Float":{ "type":"float", "box":true }, + "GeneratedReport":{ + "type":"structure", + "members":{ + "reportGenerationTime":{ + "shape":"Timestamp", + "documentation":"The timestamp when the report was generated.
" + }, + "reportOutput":{ + "shape":"ReportOutput", + "documentation":"The output location or cause of a failure in report generation.
" + } + }, + "documentation":"Information about a generated execution report.
" + }, + "GeneratedReportDetails":{ + "type":"list", + "member":{"shape":"GeneratedReport"}, + "max":1, + "min":0 + }, "GetPlanEvaluationStatusRequest":{ "type":"structure", "required":["planArn"], @@ -1386,6 +1509,10 @@ "shape":"Duration", "documentation":"The actual recovery time that Region switch calculates for a plan execution. Actual recovery time includes the time for the plan to run added to the time elapsed until the application health alarms that you've specified are healthy again.
" }, + "generatedReportDetails":{ + "shape":"GeneratedReportDetails", + "documentation":"Information about the location of a generated report, or the cause of its failure.
" + }, "nextToken":{ "shape":"String", "documentation":"Specifies that you want to receive the next page of results. Valid only if you received a nextToken response in the previous request. If you did, it indicates that more output is available. Set this parameter to the value provided by the previous call's nextToken response to request the next page of results.
The triggers for a plan.
" }, + "reportConfiguration":{ + "shape":"ReportConfiguration", + "documentation":"The report configuration for a plan.
" + }, "name":{ "shape":"PlanName", "documentation":"The name for a plan.
" @@ -2074,6 +2205,48 @@ "key":{"shape":"Region"}, "value":{"shape":"KubernetesScalingResource"} }, + "ReportConfiguration":{ + "type":"structure", + "members":{ + "reportOutput":{ + "shape":"ReportOutputList", + "documentation":"The output configuration for the report.
" + } + }, + "documentation":"Configuration for automatic report generation for plan executions. When configured, Region switch automatically generates a report after each plan execution that includes execution events, plan configuration, and CloudWatch alarm states.
" + }, + "ReportOutput":{ + "type":"structure", + "members":{ + "s3ReportOutput":{ + "shape":"S3ReportOutput", + "documentation":"Information about a report delivered to Amazon S3.
" + }, + "failedReportOutput":{ + "shape":"FailedReportOutput", + "documentation":"The details about a failed report generation.
" + } + }, + "documentation":"The output location or cause of a failure in report generation.
", + "union":true + }, + "ReportOutputConfiguration":{ + "type":"structure", + "members":{ + "s3Configuration":{ + "shape":"S3ReportOutputConfiguration", + "documentation":"Configuration for delivering reports to an Amazon S3 bucket.
" + } + }, + "documentation":"Configuration for report output destinations used in a Region switch plan.
", + "union":true + }, + "ReportOutputList":{ + "type":"list", + "member":{"shape":"ReportOutputConfiguration"}, + "max":1, + "min":1 + }, "ResourceArn":{"type":"string"}, "ResourceNotFoundException":{ "type":"structure", @@ -2268,6 +2441,36 @@ "Off" ] }, + "S3ReportOutput":{ + "type":"structure", + "members":{ + "s3ObjectKey":{ + "shape":"String", + "documentation":"The S3 object key where the generated report is stored.
" + } + }, + "documentation":"Information about a report delivered to Amazon S3.
" + }, + "S3ReportOutputConfiguration":{ + "type":"structure", + "members":{ + "bucketPath":{ + "shape":"S3ReportOutputConfigurationBucketPathString", + "documentation":"The S3 bucket name and optional prefix where reports are stored. Format: bucket-name or bucket-name/prefix.
" + }, + "bucketOwner":{ + "shape":"AccountId", + "documentation":"The Amazon Web Services account ID that owns the S3 bucket. Required to ensure the bucket is still owned by the same expected owner at generation time.
" + } + }, + "documentation":"Configuration for delivering generated reports to an Amazon S3 bucket.
" + }, + "S3ReportOutputConfigurationBucketPathString":{ + "type":"string", + "max":512, + "min":3, + "pattern":"(?:s3://)?[a-z0-9][a-z0-9-]{1,61}[a-z0-9](?:/[^/ ][^/]*)*/?" + }, "Service":{ "type":"structure", "members":{ @@ -2673,6 +2876,10 @@ "triggers":{ "shape":"TriggerList", "documentation":"The updated conditions that can automatically trigger the execution of the plan.
" + }, + "reportConfiguration":{ + "shape":"ReportConfiguration", + "documentation":"The updated report configuration for the plan.
" } } }, diff --git a/awscli/botocore/data/connect/2017-08-08/service-2.json b/awscli/botocore/data/connect/2017-08-08/service-2.json index 6493a09e23ba..0a3ddc3da5be 100644 --- a/awscli/botocore/data/connect/2017-08-08/service-2.json +++ b/awscli/botocore/data/connect/2017-08-08/service-2.json @@ -711,7 +711,7 @@ {"shape":"InvalidParameterException"}, {"shape":"ServiceQuotaExceededException"} ], - "documentation":"Creates a new data table with the specified properties. Supports the creation of all table properties except for attributes and values. A table with no attributes and values is a valid state for a table. The number of tables per instance is limited to 100 per instance. Customers can request an increase by using AWS Service Quotas.
" + "documentation":"Creates a new data table with the specified properties. Supports the creation of all table properties except for attributes and values. A table with no attributes and values is a valid state for a table. The number of tables per instance is limited to 100 per instance. Customers can request an increase by using Amazon Web Services Service Quotas.
" }, "CreateDataTableAttribute":{ "name":"CreateDataTableAttribute", @@ -2665,7 +2665,7 @@ {"shape":"AccessDeniedException"}, {"shape":"InvalidParameterException"} ], - "documentation":"Evaluates values at the time of the request and returns them. It considers the request's timezone or the table's timezone, in that order, when accessing time based tables. When a value is accessed, the accessor's identity and the time of access are saved alongside the value to help identify values that are actively in use. The term \"Batch\" is not included in the operation name since it does not meet all the criteria for a batch operation as specified in Batch Operations: AWS API Standards.
" + "documentation":"Evaluates values at the time of the request and returns them. It considers the request's timezone or the table's timezone, in that order, when accessing time based tables. When a value is accessed, the accessor's identity and the time of access are saved alongside the value to help identify values that are actively in use. The term \"Batch\" is not included in the operation name since it does not meet all the criteria for a batch operation as specified in Batch Operations: Amazon Web Services API Standards.
" }, "GetAttachedFile":{ "name":"GetAttachedFile", @@ -3180,7 +3180,7 @@ {"shape":"AccessDeniedException"}, {"shape":"InvalidParameterException"} ], - "documentation":"Returns all attributes for a specified data table. A maximum of 100 attributes per data table is allowed. Customers can request an increase by using AWS Service Quotas. The response can be filtered by specific attribute IDs for CloudFormation integration.
" + "documentation":"Returns all attributes for a specified data table. A maximum of 100 attributes per data table is allowed. Customers can request an increase by using Amazon Web Services Service Quotas. The response can be filtered by specific attribute IDs for CloudFormation integration.
" }, "ListDataTablePrimaryValues":{ "name":"ListDataTablePrimaryValues", @@ -4683,7 +4683,7 @@ {"shape":"ThrottlingException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Initiates a new outbound SMS contact to a customer. Response of this API provides the ContactId of the outbound SMS contact created.
SourceEndpoint only supports Endpoints with CONNECT_PHONENUMBER_ARN as Type and DestinationEndpoint only supports Endpoints with TELEPHONE_NUMBER as Type. ContactFlowId initiates the flow to manage the new SMS contact created.
This API can be used to initiate outbound SMS contacts for an agent, or it can also deflect an ongoing contact to an outbound SMS contact by using the StartOutboundChatContact Flow Action.
For more information about using SMS in Amazon Connect, see the following topics in the Amazon Connect Administrator Guide:
" + "documentation":"Initiates a new outbound SMS or WhatsApp contact to a customer. Response of this API provides the ContactId of the outbound SMS or WhatsApp contact created.
SourceEndpoint only supports Endpoints with CONNECT_PHONENUMBER_ARN as Type and DestinationEndpoint only supports Endpoints with TELEPHONE_NUMBER as Type. ContactFlowId initiates the flow to manage the new contact created.
This API can be used to initiate outbound SMS or WhatsApp contacts for an agent, or it can also deflect an ongoing contact to an outbound SMS or WhatsApp contact by using the StartOutboundChatContact Flow Action.
For more information about using SMS or WhatsApp in Amazon Connect, see the following topics in the Amazon Connect Administrator Guide:
" }, "StartOutboundEmailContact":{ "name":"StartOutboundEmailContact", @@ -12030,12 +12030,16 @@ "shape":"CurrentMetricName", "documentation":"The name of the metric.
" }, + "MetricId":{ + "shape":"CurrentMetricId", + "documentation":"Out of the box current metrics or custom metrics can be referenced via this field. This field is a valid AWS Connect Arn or a UUID.
" + }, "Unit":{ "shape":"Unit", - "documentation":"The unit for the metric.
" + "documentation":"The Unit parameter is not supported for custom metrics.
The unit for the metric.
" } }, - "documentation":"Contains information about a real-time metric. For a description of each metric, see Metrics definitions in the Amazon Connect Administrator Guide.
" + "documentation":"Contains information about a real-time metric. For a description of each metric, see Metrics definitions in the Amazon Connect Administrator Guide.
Exactly one of Name or MetricId is required.
The current metric names.
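Per the note above, each `CurrentMetric` entry should reference a metric either by `Name` or by `MetricId` (an ARN or UUID), not both. A hedged sketch of that constraint as a pre-flight check (`validate_current_metric` is a hypothetical helper, not part of the SDK):

```python
def validate_current_metric(metric: dict) -> None:
    """Illustrative check: each entry uses either Name or MetricId, never both or neither."""
    has_name = "Name" in metric
    has_id = "MetricId" in metric
    if has_name == has_id:  # True when both are present or both are absent
        raise ValueError("Specify exactly one of Name or MetricId")
```

For example, `{"Name": "AGENTS_ONLINE", "Unit": "COUNT"}` passes, while an entry carrying both keys is rejected before the request is sent.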
", @@ -12230,7 +12238,7 @@ }, "LastModifiedRegion":{ "shape":"RegionName", - "documentation":"The AWS region where the data table was last modified, used for region replication.
" + "documentation":"The Amazon Web Services Region where the data table was last modified, used for region replication.
" }, "Tags":{ "shape":"TagMap", @@ -12298,7 +12306,7 @@ }, "LastModifiedRegion":{ "shape":"RegionName", - "documentation":"The AWS region where this attribute was last modified, used for region replication.
" + "documentation":"The Amazon Web Services Region where this attribute was last modified, used for region replication.
" }, "Validation":{ "shape":"Validation", @@ -14783,6 +14791,14 @@ "AgentStatus":{ "shape":"AgentStatusIdentifier", "documentation":"Information about the agent status assigned to the user.
" + }, + "Subtype":{ + "shape":"Subtype", + "documentation":"The subtype of the channel used for the contact.
" + }, + "ValidationTestType":{ + "shape":"ValidationTestType", + "documentation":"The testing and simulation type
" } }, "documentation":"Contains information about the dimensions for a set of metrics.
" @@ -17885,6 +17901,14 @@ "AgentStatuses":{ "shape":"AgentStatuses", "documentation":"A list of up to 50 agent status IDs or ARNs.
" + }, + "Subtypes":{ + "shape":"Subtypes", + "documentation":"A list of up to 10 subtypes can be provided.
" + }, + "ValidationTestTypes":{ + "shape":"ValidationTestTypes", + "documentation":"A list of up to 10 validationTestTypes can be provided.
" } }, "documentation":"Contains the filter to apply when retrieving metrics.
" @@ -18171,15 +18195,15 @@ }, "Filters":{ "shape":"Filters", - "documentation":"The filters to apply to returned metrics. You can filter up to the following limits:
Queues: 100
Routing profiles: 100
Channels: 3 (VOICE, CHAT, and TASK channels are supported.)
RoutingStepExpressions: 50
AgentStatuses: 50
Metric data is retrieved only for the resources associated with the queues or routing profiles, and by any channels included in the filter. (You cannot filter by both queue AND routing profile.) You can include both resource IDs and resource ARNs in the same request.
When using AgentStatuses as filter make sure Queues is added as primary filter.
When using the RoutingStepExpression filter, you need to pass exactly one QueueId. The filter is also case sensitive so when using the RoutingStepExpression filter, grouping by ROUTING_STEP_EXPRESSION is required.
Currently tagging is only supported on the resources that are passed in the filter.
" + "documentation":"The filters to apply to returned metrics. You can filter up to the following limits:
Queues: 100
Routing profiles: 100
Channels: 3 (VOICE, CHAT, and TASK channels are supported.)
RoutingStepExpressions: 50
AgentStatuses: 50
Subtypes: 10
ValidationTestTypes: 10
Metric data is retrieved only for the resources associated with the queues or routing profiles, and by any channels included in the filter. (You cannot filter by both queue AND routing profile.) You can include both resource IDs and resource ARNs in the same request.
When using AgentStatuses as a filter, make sure Queues is added as the primary filter.
When using Subtypes as a filter, make sure Queues is added as the primary filter.
When using ValidationTestTypes as a filter, make sure Queues is added as the primary filter.
When using the RoutingStepExpression filter, you need to pass exactly one QueueId. The filter is also case sensitive so when using the RoutingStepExpression filter, grouping by ROUTING_STEP_EXPRESSION is required.
Currently tagging is only supported on the resources that are passed in the filter.
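The filter limits listed above (Queues: 100, Routing profiles: 100, Channels: 3, RoutingStepExpressions: 50, AgentStatuses: 50, Subtypes: 10, ValidationTestTypes: 10), plus the rule that some filters must be paired with a primary `Queues` filter, can be checked client-side before calling the API. An illustrative sketch (the helper and its messages are assumptions, not the service's actual validation):

```python
# Documented per-filter limits from the text above.
FILTER_LIMITS = {
    "Queues": 100,
    "RoutingProfiles": 100,
    "Channels": 3,
    "RoutingStepExpressions": 50,
    "AgentStatuses": 50,
    "Subtypes": 10,
    "ValidationTestTypes": 10,
}

# Filters the documentation says require Queues as the primary filter.
REQUIRES_QUEUES = {"AgentStatuses", "Subtypes", "ValidationTestTypes"}


def check_filters(filters: dict) -> list:
    """Return a list of documented limit/usage violations found in a Filters dict."""
    problems = []
    for key, values in filters.items():
        limit = FILTER_LIMITS.get(key)
        if limit is not None and len(values) > limit:
            problems.append(f"{key}: {len(values)} exceeds limit {limit}")
    if REQUIRES_QUEUES & filters.keys() and not filters.get("Queues"):
        problems.append("AgentStatuses/Subtypes/ValidationTestTypes require a Queues filter")
    return problems
```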
" }, "Groupings":{ "shape":"Groupings", - "documentation":"Defines the level of aggregation for metrics data by a dimension(s). Its similar to sorting items into buckets based on a common characteristic, then counting or calculating something for each bucket. For example, when grouped by QUEUE, the metrics returned apply to each queue rather than aggregated for all queues.
The grouping list is an ordered list, with the first item in the list defined as the primary grouping. If no grouping is included in the request, the aggregation happens at the instance-level.
If you group by CHANNEL, you should include a Channels filter. VOICE, CHAT, and TASK channels are supported.
If you group by AGENT_STATUS, you must include the QUEUE as the primary grouping and use queue filter. When you group by AGENT_STATUS, the only metric available is the AGENTS_ONLINE metric.
If you group by ROUTING_PROFILE, you must include either a queue or routing profile filter. In addition, a routing profile filter is required for metrics CONTACTS_SCHEDULED, CONTACTS_IN_QUEUE, and OLDEST_CONTACT_AGE.
When using the RoutingStepExpression filter, group by ROUTING_STEP_EXPRESSION is required.
Defines the level of aggregation for metrics data by a dimension(s). Its similar to sorting items into buckets based on a common characteristic, then counting or calculating something for each bucket. For example, when grouped by QUEUE, the metrics returned apply to each queue rather than aggregated for all queues.
The grouping list is an ordered list, with the first item in the list defined as the primary grouping. If no grouping is included in the request, the aggregation happens at the instance-level.
If you group by CHANNEL, you should include a Channels filter. VOICE, CHAT, and TASK channels are supported.
If you group by AGENT_STATUS, you must include the QUEUE as the primary grouping and use queue filter. When you group by AGENT_STATUS, the only metric available is the AGENTS_ONLINE metric.
If you group by SUBTYPE or VALIDATION_TEST_TYPE as secondary grouping then you must include QUEUE as primary grouping and use Queue as filter
If you group by ROUTING_PROFILE, you must include either a queue or routing profile filter. In addition, a routing profile filter is required for metrics CONTACTS_SCHEDULED, CONTACTS_IN_QUEUE, and OLDEST_CONTACT_AGE.
When using the RoutingStepExpression filter, group by ROUTING_STEP_EXPRESSION is required.
The metrics to retrieve. Specify the name and unit for each metric. The following metrics are available. For a description of all the metrics, see Metrics definitions in the Amazon Connect Administrator Guide.
Unit: COUNT
Name in real-time metrics report: ACW
Unit: COUNT
Name in real-time metrics report: Available
Unit: COUNT
Name in real-time metrics report: Error
Unit: COUNT
Name in real-time metrics report: NPT (Non-Productive Time)
Unit: COUNT
Name in real-time metrics report: On contact
Unit: COUNT
Name in real-time metrics report: On contact
Unit: COUNT
Name in real-time metrics report: Online
Unit: COUNT
Name in real-time metrics report: Staffed
Unit: COUNT
Name in real-time metrics report: In queue
Unit: COUNT
Name in real-time metrics report: Scheduled
Unit: SECONDS
When you use groupings, Unit says SECONDS and the Value is returned in SECONDS.
When you do not use groupings, Unit says SECONDS but the Value is returned in MILLISECONDS. For example, if you get a response like this:
{ \"Metric\": { \"Name\": \"OLDEST_CONTACT_AGE\", \"Unit\": \"SECONDS\" }, \"Value\": 24113.0 }
The actual OLDEST_CONTACT_AGE is 24 seconds.
When the filter RoutingStepExpression is used, this metric is still calculated from enqueue time. For example, if a contact that has been queued under <Expression 1> for 10 seconds has expired and <Expression 2> becomes active, then OLDEST_CONTACT_AGE for this queue will be counted starting from 10, not 0.
Name in real-time metrics report: Oldest
Unit: COUNT
Name in real-time metrics report: Active
Unit: COUNT
Name in real-time metrics report: Availability
The metrics to retrieve. Specify the name or metricId, and unit for each metric. The following metrics are available. For a description of all the metrics, see Metrics definitions in the Amazon Connect Administrator Guide.
MetricId should be used to reference custom metrics or out of the box metrics as Arn. If using MetricId, the limit is 10 MetricId per request.
Unit: COUNT
Name in real-time metrics report: ACW
Unit: COUNT
Name in real-time metrics report: Available
Unit: COUNT
Name in real-time metrics report: Error
Unit: COUNT
Name in real-time metrics report: NPT (Non-Productive Time)
Unit: COUNT
Name in real-time metrics report: On contact
Unit: COUNT
Name in real-time metrics report: On contact
Unit: COUNT
Name in real-time metrics report: Online
Unit: COUNT
Name in real-time metrics report: Staffed
Unit: COUNT
Name in real-time metrics report: In queue
Unit: COUNT
Name in real-time metrics report: Scheduled
Unit: SECONDS
When you use groupings, Unit says SECONDS and the Value is returned in SECONDS.
When you do not use groupings, Unit says SECONDS but the Value is returned in MILLISECONDS. For example, if you get a response like this:
{ \"Metric\": { \"Name\": \"OLDEST_CONTACT_AGE\", \"Unit\": \"SECONDS\" }, \"Value\": 24113.0 }
The actual OLDEST_CONTACT_AGE is 24 seconds.
When the filter RoutingStepExpression is used, this metric is still calculated from enqueue time. For example, if a contact that has been queued under <Expression 1> for 10 seconds has expired and <Expression 2> becomes active, then OLDEST_CONTACT_AGE for this queue will be counted starting from 10, not 0.
Name in real-time metrics report: Oldest
Unit: COUNT
Name in real-time metrics report: Active
Unit: COUNT
Name in real-time metrics report: Availability
The identifier of the Amazon Connect instance that phone numbers are claimed to. You can find the instance ID in the Amazon Resource Name (ARN) of the instance. If both TargetArn and InstanceId are not provided, this API lists numbers claimed to all the Amazon Connect instances belonging to your account in the same AWS Region as the request.
The identifier of the Amazon Connect instance that phone numbers are claimed to. You can find the instance ID in the Amazon Resource Name (ARN) of the instance. If both TargetArn and InstanceId are not provided, this API lists numbers claimed to all the Amazon Connect instances belonging to your account in the same Amazon Web Services Region as the request.
A list of participant types to automatically disconnect when the end customer ends the chat session, allowing them to continue through disconnect flows such as surveys or feedback forms.
Valid value: AGENT.
With the DisconnectOnCustomerExit parameter, you can configure automatic agent disconnection when end customers end the chat, ensuring that disconnect flows are triggered consistently regardless of which participant disconnects first.
A list of participant types to automatically disconnect when the end customer ends the chat session, allowing them to continue through disconnect flows such as surveys or feedback forms.
" } } }, @@ -29466,7 +29492,7 @@ }, "SegmentAttributes":{ "shape":"SegmentAttributes", - "documentation":"A set of system defined key-value pairs stored on individual contact segments using an attribute map. The attributes are standard Amazon Connect attributes. They can be accessed in flows.
Attribute keys can include only alphanumeric, -, and _.
This field can be used to show channel subtype, such as connect:Guide and connect:SMS.
A set of system defined key-value pairs stored on individual contact segments using an attribute map. The attributes are standard Amazon Connect attributes. They can be accessed in flows.
Attribute keys can include only alphanumeric, -, and _.
This field can be used to show channel subtype, such as connect:SMS and connect:WhatsApp.
A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. If not provided, the AWS SDK populates this field. For more information about idempotency, see Making retries safe with idempotent APIs. The token is valid for 7 days after creation. If a contact is already started, the contact ID is returned.
", + "documentation":"A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. If not provided, the Amazon Web Services SDK populates this field. For more information about idempotency, see Making retries safe with idempotent APIs. The token is valid for 7 days after creation. If a contact is already started, the contact ID is returned.
", "idempotencyToken":true } } @@ -30086,6 +30112,11 @@ "max":100, "min":1 }, + "Subtypes":{ + "type":"list", + "member":{"shape":"Subtype"}, + "max":10 + }, "SuccessfulBatchAssociationSummary":{ "type":"structure", "members":{ @@ -33788,6 +33819,12 @@ "type":"list", "member":{"shape":"String"} }, + "ValidationTestType":{"type":"string"}, + "ValidationTestTypes":{ + "type":"list", + "member":{"shape":"ValidationTestType"}, + "max":10 + }, "Value":{"type":"double"}, "ValueBoundary":{ "type":"integer", @@ -34330,7 +34367,7 @@ }, "LastModifiedRegion":{ "shape":"RegionName", - "documentation":"The AWS Region where the workspace was last modified.
" + "documentation":"The Amazon Web Services Region where the workspace was last modified.
" }, "Tags":{ "shape":"TagMap", @@ -34573,7 +34610,7 @@ }, "LastModifiedRegion":{ "shape":"RegionName", - "documentation":"The AWS Region where the workspace was last modified.
" + "documentation":"The Amazon Web Services Region where the workspace was last modified.
" } }, "documentation":"Contains summary information about a workspace.
" diff --git a/awscli/botocore/data/emr-serverless/2021-07-13/service-2.json b/awscli/botocore/data/emr-serverless/2021-07-13/service-2.json index d6a472ed4cbc..a617bb90ad05 100644 --- a/awscli/botocore/data/emr-serverless/2021-07-13/service-2.json +++ b/awscli/botocore/data/emr-serverless/2021-07-13/service-2.json @@ -385,6 +385,10 @@ "identityCenterConfiguration":{ "shape":"IdentityCenterConfiguration", "documentation":"The IAM Identity Center configuration applied to enable trusted identity propagation.
" + }, + "jobLevelCostAllocationConfiguration":{ + "shape":"JobLevelCostAllocationConfiguration", + "documentation":"The configuration object that enables job level cost allocation.
" } }, "documentation":"Information about an application. Amazon EMR Serverless uses applications to run jobs.
" @@ -763,6 +767,10 @@ "identityCenterConfiguration":{ "shape":"IdentityCenterConfigurationInput", "documentation":"The IAM Identity Center Configuration accepts the Identity Center instance parameter required to enable trusted identity propagation. This configuration allows identity propagation between integrated services and the Identity Center instance.
" + }, + "jobLevelCostAllocationConfiguration":{ + "shape":"JobLevelCostAllocationConfiguration", + "documentation":"The configuration object that enables job level cost allocation.
" } } }, @@ -1142,6 +1150,16 @@ "documentation":"The driver that the job runs on.
", "union":true }, + "JobLevelCostAllocationConfiguration":{ + "type":"structure", + "members":{ + "enabled":{ + "shape":"Boolean", + "documentation":"Enables job level cost allocation for the application.
" + } + }, + "documentation":"The configuration object that enables job level cost allocation.
" + }, "JobRun":{ "type":"structure", "required":[ @@ -2281,6 +2299,10 @@ "identityCenterConfiguration":{ "shape":"IdentityCenterConfigurationInput", "documentation":"Specifies the IAM Identity Center configuration used to enable or disable trusted identity propagation. When provided, this configuration determines how the application interacts with IAM Identity Center for user authentication and access control.
" + }, + "jobLevelCostAllocationConfiguration":{ + "shape":"JobLevelCostAllocationConfiguration", + "documentation":"The configuration object that enables job level cost allocation.
" } } }, diff --git a/awscli/botocore/data/iot/2015-05-28/service-2.json b/awscli/botocore/data/iot/2015-05-28/service-2.json index fc0c64c567f6..aa85d09f3407 100644 --- a/awscli/botocore/data/iot/2015-05-28/service-2.json +++ b/awscli/botocore/data/iot/2015-05-28/service-2.json @@ -13063,7 +13063,14 @@ }, "GetV2LoggingOptionsRequest":{ "type":"structure", - "members":{} + "members":{ + "verbose":{ + "shape":"VerboseFlag", + "documentation":"The flag is used to get all the event types and their respective configuration that event-based logging supports.
", + "location":"querystring", + "locationName":"verbose" + } + } }, "GetV2LoggingOptionsResponse":{ "type":"structure", @@ -13079,6 +13086,10 @@ "disableAllLogs":{ "shape":"DisableAllLogs", "documentation":"Disables all logs.
" + }, + "eventConfigurations":{ + "shape":"LogEventConfigurations", + "documentation":"The list of event configurations that override account-level logging.
" } } }, @@ -13137,7 +13148,7 @@ }, "batchConfig":{ "shape":"BatchConfig", - "documentation":"The configuration settings for batching. For more information, see Batching HTTP action messages.
" + "documentation":"The configuration settings for batching. For more information, see Batching HTTP action messages.
" } }, "documentation":"Send data to an HTTPS endpoint.
" @@ -16898,6 +16909,40 @@ }, "documentation":"Describes how to interpret an application-defined timestamp value from an MQTT message payload and the precision of that value.
" }, + "LogDestination":{ + "type":"string", + "max":512, + "min":1, + "pattern":"^[.\\-_/#A-Za-z0-9]+$" + }, + "LogEventConfiguration":{ + "type":"structure", + "required":["eventType"], + "members":{ + "eventType":{ + "shape":"LogEventType", + "documentation":"The type of event to log. These include event types like Connect, Publish, and Disconnect.
" + }, + "logLevel":{ + "shape":"LogLevel", + "documentation":"The logging level for the specified event type. Determines the verbosity of log messages generated for this event type.
" + }, + "logDestination":{ + "shape":"LogDestination", + "documentation":"CloudWatch Log Group for event-based logging. Specifies where log events should be sent. The log destination for event-based logging overrides default Log Group for the specified event type and applies to all resources associated with that event.
" + } + }, + "documentation":"Configuration for event-based logging that specifies which event types to log and their logging settings. Used for account-level logging overrides.
" + }, + "LogEventConfigurations":{ + "type":"list", + "member":{"shape":"LogEventConfiguration"} + }, + "LogEventType":{ + "type":"string", + "max":512, + "min":1 + }, "LogGroupName":{"type":"string"}, "LogLevel":{ "type":"string", @@ -19474,6 +19519,10 @@ "disableAllLogs":{ "shape":"DisableAllLogs", "documentation":"If true all logs are disabled. The default is false.
" + }, + "eventConfigurations":{ + "shape":"LogEventConfigurations", + "documentation":"The list of event configurations that override account-level logging.
" } } }, @@ -22503,6 +22552,7 @@ "pattern":"[\\s\\S]*" }, "Variance":{"type":"double"}, + "VerboseFlag":{"type":"boolean"}, "VerificationState":{ "type":"string", "enum":[ diff --git a/awscli/botocore/data/qbusiness/2023-11-27/service-2.json b/awscli/botocore/data/qbusiness/2023-11-27/service-2.json index a0e6756c3ac8..ec6f506080e3 100644 --- a/awscli/botocore/data/qbusiness/2023-11-27/service-2.json +++ b/awscli/botocore/data/qbusiness/2023-11-27/service-2.json @@ -150,7 +150,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Verifies if a user has access permissions for a specified document and returns the actual ACL attached to the document. Resolves user access on the document via user aliases and groups when verifying user access.
" + "documentation":"Verifies if a user has access permissions for a specified document and returns the actual ACL attached to the document. Resolves user access on the document via user aliases and groups when verifying user access.
", + "readonly":true }, "CreateAnonymousWebExperienceUrl":{ "name":"CreateAnonymousWebExperienceUrl", @@ -673,7 +674,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Gets information about an existing Amazon Q Business application.
" + "documentation":"Gets information about an existing Amazon Q Business application.
", + "readonly":true }, "GetChatControlsConfiguration":{ "name":"GetChatControlsConfiguration", @@ -691,7 +693,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Gets information about chat controls configured for an existing Amazon Q Business application.
" + "documentation":"Gets information about chat controls configured for an existing Amazon Q Business application.
", + "readonly":true }, "GetChatResponseConfiguration":{ "name":"GetChatResponseConfiguration", @@ -709,7 +712,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Retrieves detailed information about a specific chat response configuration from an Amazon Q Business application. This operation returns the complete configuration settings and metadata.
" + "documentation":"Retrieves detailed information about a specific chat response configuration from an Amazon Q Business application. This operation returns the complete configuration settings and metadata.
", + "readonly":true }, "GetDataAccessor":{ "name":"GetDataAccessor", @@ -727,7 +731,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Retrieves information about a specified data accessor. This operation returns details about the data accessor, including its display name, unique identifier, Amazon Resource Name (ARN), the associated Amazon Q Business application and IAM Identity Center application, the IAM role for the ISV, the action configurations, and the timestamps for when the data accessor was created and last updated.
" + "documentation":"Retrieves information about a specified data accessor. This operation returns details about the data accessor, including its display name, unique identifier, Amazon Resource Name (ARN), the associated Amazon Q Business application and IAM Identity Center application, the IAM role for the ISV, the action configurations, and the timestamps for when the data accessor was created and last updated.
", + "readonly":true }, "GetDataSource":{ "name":"GetDataSource", @@ -745,7 +750,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Gets information about an existing Amazon Q Business data source connector.
" + "documentation":"Gets information about an existing Amazon Q Business data source connector.
", + "readonly":true }, "GetDocumentContent":{ "name":"GetDocumentContent", @@ -763,7 +769,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Retrieves the content of a document that was ingested into Amazon Q Business. This API validates user authorization against document ACLs before returning a pre-signed URL for secure document access. You can download or view source documents referenced in chat responses through the URL.
" + "documentation":"Retrieves the content of a document that was ingested into Amazon Q Business. This API validates user authorization against document ACLs before returning a pre-signed URL for secure document access. You can download or view source documents referenced in chat responses through the URL.
", + "readonly":true }, "GetGroup":{ "name":"GetGroup", @@ -782,7 +789,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Describes a group by group name.
" + "documentation":"Describes a group by group name.
", + "readonly":true }, "GetIndex":{ "name":"GetIndex", @@ -800,7 +808,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Gets information about an existing Amazon Q Business index.
" + "documentation":"Gets information about an existing Amazon Q Business index.
", + "readonly":true }, "GetMedia":{ "name":"GetMedia", @@ -820,7 +829,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Returns the image bytes corresponding to a media object. If you have implemented your own application with the Chat and ChatSync APIs, and have enabled content extraction from visual data in Amazon Q Business, you use the GetMedia API operation to download the images so you can show them in your UI with responses.
For more information, see Extracting semantic meaning from images and visuals.
" + "documentation":"Returns the image bytes corresponding to a media object. If you have implemented your own application with the Chat and ChatSync APIs, and have enabled content extraction from visual data in Amazon Q Business, you use the GetMedia API operation to download the images so you can show them in your UI with responses.
For more information, see Extracting semantic meaning from images and visuals.
", + "readonly":true }, "GetPlugin":{ "name":"GetPlugin", @@ -838,7 +848,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Gets information about an existing Amazon Q Business plugin.
" + "documentation":"Gets information about an existing Amazon Q Business plugin.
", + "readonly":true }, "GetPolicy":{ "name":"GetPolicy", @@ -856,7 +867,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Retrieves the current permission policy for a Amazon Q Business application. The policy is returned as a JSON-formatted string and defines the IAM actions that are allowed or denied for the application's resources.
" + "documentation":"Retrieves the current permission policy for a Amazon Q Business application. The policy is returned as a JSON-formatted string and defines the IAM actions that are allowed or denied for the application's resources.
", + "readonly":true }, "GetRetriever":{ "name":"GetRetriever", @@ -874,7 +886,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Gets information about an existing retriever used by an Amazon Q Business application.
" + "documentation":"Gets information about an existing retriever used by an Amazon Q Business application.
", + "readonly":true }, "GetUser":{ "name":"GetUser", @@ -893,7 +906,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Describes the universally unique identifier (UUID) associated with a local user in a data source.
" + "documentation":"Describes the universally unique identifier (UUID) associated with a local user in a data source.
", + "readonly":true }, "GetWebExperience":{ "name":"GetWebExperience", @@ -911,7 +925,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Gets information about an existing Amazon Q Business web experience.
" + "documentation":"Gets information about an existing Amazon Q Business web experience.
", + "readonly":true }, "ListApplications":{ "name":"ListApplications", @@ -928,7 +943,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Lists Amazon Q Business applications.
Amazon Q Business applications may securely transmit data for processing across Amazon Web Services Regions within your geography. For more information, see Cross region inference in Amazon Q Business.
Lists Amazon Q Business applications.
Amazon Q Business applications may securely transmit data for processing across Amazon Web Services Regions within your geography. For more information, see Cross region inference in Amazon Q Business.
Gets a list of attachments associated with an Amazon Q Business web experience or a list of attachements associated with a specific Amazon Q Business conversation.
" + "documentation":"Gets a list of attachments associated with an Amazon Q Business web experience or a list of attachements associated with a specific Amazon Q Business conversation.
", + "readonly":true }, "ListChatResponseConfigurations":{ "name":"ListChatResponseConfigurations", @@ -965,7 +982,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Retrieves a list of all chat response configurations available in a specified Amazon Q Business application. This operation returns summary information about each configuration to help administrators manage and select appropriate response settings.
" + "documentation":"Retrieves a list of all chat response configurations available in a specified Amazon Q Business application. This operation returns summary information about each configuration to help administrators manage and select appropriate response settings.
", + "readonly":true }, "ListConversations":{ "name":"ListConversations", @@ -984,7 +1002,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Lists one or more Amazon Q Business conversations.
" + "documentation":"Lists one or more Amazon Q Business conversations.
", + "readonly":true }, "ListDataAccessors":{ "name":"ListDataAccessors", @@ -1002,7 +1021,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Lists the data accessors for a Amazon Q Business application. This operation returns a paginated list of data accessor summaries, including the friendly name, unique identifier, ARN, associated IAM role, and creation/update timestamps for each data accessor.
" + "documentation":"Lists the data accessors for a Amazon Q Business application. This operation returns a paginated list of data accessor summaries, including the friendly name, unique identifier, ARN, associated IAM role, and creation/update timestamps for each data accessor.
", + "readonly":true }, "ListDataSourceSyncJobs":{ "name":"ListDataSourceSyncJobs", @@ -1021,7 +1041,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Get information about an Amazon Q Business data source connector synchronization.
" + "documentation":"Get information about an Amazon Q Business data source connector synchronization.
", + "readonly":true }, "ListDataSources":{ "name":"ListDataSources", @@ -1039,7 +1060,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Lists the Amazon Q Business data source connectors that you have created.
" + "documentation":"Lists the Amazon Q Business data source connectors that you have created.
", + "readonly":true }, "ListDocuments":{ "name":"ListDocuments", @@ -1057,7 +1079,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"A list of documents attached to an index.
" + "documentation":"A list of documents attached to an index.
", + "readonly":true }, "ListGroups":{ "name":"ListGroups", @@ -1076,7 +1099,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Provides a list of groups that are mapped to users.
" + "documentation":"Provides a list of groups that are mapped to users.
", + "readonly":true }, "ListIndices":{ "name":"ListIndices", @@ -1094,7 +1118,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Lists the Amazon Q Business indices you have created.
" + "documentation":"Lists the Amazon Q Business indices you have created.
", + "readonly":true }, "ListMessages":{ "name":"ListMessages", @@ -1113,7 +1138,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Gets a list of messages associated with an Amazon Q Business web experience.
" + "documentation":"Gets a list of messages associated with an Amazon Q Business web experience.
", + "readonly":true }, "ListPluginActions":{ "name":"ListPluginActions", @@ -1131,7 +1157,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Lists configured Amazon Q Business actions for a specific plugin in an Amazon Q Business application.
" + "documentation":"Lists configured Amazon Q Business actions for a specific plugin in an Amazon Q Business application.
", + "readonly":true }, "ListPluginTypeActions":{ "name":"ListPluginTypeActions", @@ -1148,7 +1175,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Lists configured Amazon Q Business actions for any plugin type—both built-in and custom.
" + "documentation":"Lists configured Amazon Q Business actions for any plugin type—both built-in and custom.
", + "readonly":true }, "ListPluginTypeMetadata":{ "name":"ListPluginTypeMetadata", @@ -1165,7 +1193,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Lists metadata for all Amazon Q Business plugin types.
" + "documentation":"Lists metadata for all Amazon Q Business plugin types.
", + "readonly":true }, "ListPlugins":{ "name":"ListPlugins", @@ -1183,7 +1212,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Lists configured Amazon Q Business plugins.
" + "documentation":"Lists configured Amazon Q Business plugins.
", + "readonly":true }, "ListRetrievers":{ "name":"ListRetrievers", @@ -1201,7 +1231,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Lists the retriever used by an Amazon Q Business application.
" + "documentation":"Lists the retriever used by an Amazon Q Business application.
", + "readonly":true }, "ListSubscriptions":{ "name":"ListSubscriptions", @@ -1220,7 +1251,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Lists all subscriptions created in an Amazon Q Business application.
" + "documentation":"Lists all subscriptions created in an Amazon Q Business application.
", + "readonly":true }, "ListTagsForResource":{ "name":"ListTagsForResource", @@ -1238,7 +1270,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Gets a list of tags associated with a specified resource. Amazon Q Business applications and data sources can have tags associated with them.
" + "documentation":"Gets a list of tags associated with a specified resource. Amazon Q Business applications and data sources can have tags associated with them.
", + "readonly":true }, "ListWebExperiences":{ "name":"ListWebExperiences", @@ -1256,7 +1289,8 @@ {"shape":"ValidationException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Lists one or more Amazon Q Business Web Experiences.
" + "documentation":"Lists one or more Amazon Q Business Web Experiences.
", + "readonly":true }, "PutFeedback":{ "name":"PutFeedback", @@ -1798,8 +1832,7 @@ }, "ActionPayloadFieldValue":{ "type":"structure", - "members":{ - }, + "members":{}, "document":true }, "ActionReview":{ @@ -1926,8 +1959,7 @@ }, "ActionReviewPayloadFieldArrayItemJsonSchema":{ "type":"structure", - "members":{ - }, + "members":{}, "document":true }, "ActionSummary":{ @@ -1990,7 +2022,7 @@ }, "quickSightConfiguration":{ "shape":"QuickSightConfiguration", - "documentation":"The Amazon QuickSight configuration for an Amazon Q Business application that uses QuickSight as the identity provider.
" + "documentation":"The Amazon Quick Suite configuration for an Amazon Q Business application that uses Quick Suite as the identity provider.
" } }, "documentation":"Summary information for an Amazon Q Business application.
" @@ -3343,7 +3375,7 @@ }, "quickSightConfiguration":{ "shape":"QuickSightConfiguration", - "documentation":"The Amazon QuickSight configuration for an Amazon Q Business application that uses QuickSight for authentication. This configuration is required if your application uses QuickSight as the identity provider. For more information, see Creating an Amazon QuickSight integrated application.
" + "documentation":"The Amazon Quick Suite configuration for an Amazon Q Business application that uses Quick Suite for authentication. This configuration is required if your application uses Quick Suite as the identity provider. For more information, see Creating an Amazon Quick Suite integrated application.
" } } }, @@ -3799,8 +3831,7 @@ }, "CreateUserResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "CreateWebExperienceRequest":{ "type":"structure", @@ -4100,8 +4131,7 @@ }, "DataSourceConfiguration":{ "type":"structure", - "members":{ - }, + "members":{}, "documentation":"Provides the configuration information for an Amazon Q Business data source.
", "document":true }, @@ -4267,8 +4297,7 @@ }, "DeleteApplicationResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "DeleteAttachmentRequest":{ "type":"structure", @@ -4306,8 +4335,7 @@ }, "DeleteAttachmentResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "DeleteChatControlsConfigurationRequest":{ "type":"structure", @@ -4323,8 +4351,7 @@ }, "DeleteChatControlsConfigurationResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "DeleteChatResponseConfigurationRequest":{ "type":"structure", @@ -4349,8 +4376,7 @@ }, "DeleteChatResponseConfigurationResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "DeleteConversationRequest":{ "type":"structure", @@ -4381,8 +4407,7 @@ }, "DeleteConversationResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "DeleteDataAccessorRequest":{ "type":"structure", @@ -4407,8 +4432,7 @@ }, "DeleteDataAccessorResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "DeleteDataSourceRequest":{ "type":"structure", @@ -4440,8 +4464,7 @@ }, "DeleteDataSourceResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "DeleteDocument":{ "type":"structure", @@ -4494,8 +4517,7 @@ }, "DeleteGroupResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "DeleteIndexRequest":{ "type":"structure", @@ -4520,8 +4542,7 @@ }, "DeleteIndexResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "DeletePluginRequest":{ "type":"structure", @@ -4546,8 +4567,7 @@ }, "DeletePluginResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "DeleteRetrieverRequest":{ "type":"structure", @@ -4572,8 +4592,7 @@ }, "DeleteRetrieverResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "DeleteUserRequest":{ "type":"structure", @@ -4598,8 +4617,7 @@ }, "DeleteUserResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "DeleteWebExperienceRequest":{ "type":"structure", @@ -4624,8 +4642,7 @@ }, 
"DeleteWebExperienceResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "Description":{ "type":"string", @@ -4656,8 +4673,7 @@ }, "DisassociatePermissionResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "DisplayName":{ "type":"string", @@ -5091,8 +5107,7 @@ }, "EndOfInputEvent":{ "type":"structure", - "members":{ - }, + "members":{}, "documentation":"The end of the streaming input for the Chat API.
The Amazon QuickSight authentication configuration for the Amazon Q Business application.
" + "documentation":"The Amazon Quick Suite authentication configuration for the Amazon Q Business application.
" } } }, @@ -6205,13 +6220,13 @@ "type":"string", "max":2048, "min":20, - "pattern":"arn:aws:iam::\\d{12}:(oidc-provider|saml-provider)/[a-zA-Z0-9_\\.\\/@\\-]+" + "pattern":"arn:[a-z0-9-\\.]{1,63}:iam::\\d{12}:(oidc-provider|saml-provider)/[a-zA-Z0-9_\\.\\/@\\-]+" }, "IdcApplicationArn":{ "type":"string", "max":1224, "min":10, - "pattern":"arn:(aws|aws-us-gov|aws-cn|aws-iso|aws-iso-b):sso::\\d{12}:application/(sso)?ins-[a-zA-Z0-9-.]{16}/apl-[a-zA-Z0-9]{16}" + "pattern":"arn:[a-z0-9-\\.]{1,63}:sso::\\d{12}:application/(sso)?ins-[a-zA-Z0-9-.]{16}/apl-[a-zA-Z0-9]{16}" }, "IdcAuthConfiguration":{ "type":"structure", @@ -6235,7 +6250,7 @@ "type":"string", "max":1284, "min":0, - "pattern":"arn:aws:sso::[0-9]{12}:trustedTokenIssuer/(sso)?ins-[a-zA-Z0-9-.]{16}/tti-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}" + "pattern":"arn:[a-z0-9-\\.]{1,63}:sso::[0-9]{12}:trustedTokenIssuer/(sso)?ins-[a-zA-Z0-9-.]{16}/tti-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}" }, "IdentityProviderConfiguration":{ "type":"structure", @@ -6410,7 +6425,7 @@ "type":"string", "max":1224, "min":10, - "pattern":"arn:(aws|aws-us-gov|aws-cn|aws-iso|aws-iso-b):sso:::instance/(sso)?ins-[a-zA-Z0-9-.]{16}" + "pattern":"arn:[a-z0-9-\\.]{1,63}:sso:::instance/(sso)?ins-[a-zA-Z0-9-.]{16}" }, "Instruction":{ "type":"string", @@ -6498,7 +6513,7 @@ "type":"string", "max":2048, "min":1, - "pattern":"arn:aws[a-zA-Z-]*:lambda:[a-z-]*-[0-9]:[0-9]{12}:function:[a-zA-Z0-9-_]+(/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})?(:[a-zA-Z0-9-_]+)?" + "pattern":"arn:[a-z0-9-\\.]{1,63}:lambda:[a-z-]*-[0-9]:[0-9]{12}:function:[a-zA-Z0-9-_]+(/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})?(:[a-zA-Z0-9-_]+)?" 
}, "LicenseNotFoundException":{ "type":"structure", @@ -7698,8 +7713,7 @@ }, "NoAuthConfiguration":{ "type":"structure", - "members":{ - }, + "members":{}, "documentation":"Information about invoking a custom plugin without any authentication or authorization requirement.
" }, "NumberAttributeBoostingConfiguration":{ @@ -7828,7 +7842,7 @@ }, "PermissionConditionKey":{ "type":"string", - "pattern":"aws:PrincipalTag/qbusiness-dataaccessor:[a-zA-Z]+.*" + "pattern":"aws:[a-zA-Z][a-zA-Z0-9-/:]*" }, "PermissionConditionOperator":{ "type":"string", @@ -7838,7 +7852,7 @@ "type":"string", "max":1000, "min":1, - "pattern":"[a-zA-Z0-9][a-zA-Z0-9_-]*" + "pattern":"[a-zA-Z0-9][a-zA-Z0-9._-]*" }, "PermissionConditionValues":{ "type":"list", @@ -8076,7 +8090,7 @@ "type":"string", "max":1284, "min":1, - "pattern":"arn:aws:iam::[0-9]{12}:role/[a-zA-Z0-9_/+=,.@-]+" + "pattern":"arn:[a-z0-9-\\.]{1,63}:iam::[0-9]{12}:role/[a-zA-Z0-9_/+=,.@-]+" }, "PrincipalUser":{ "type":"structure", @@ -8186,8 +8200,7 @@ }, "PutGroupResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "QAppsConfiguration":{ "type":"structure", @@ -8224,10 +8237,10 @@ "members":{ "clientNamespace":{ "shape":"ClientNamespace", - "documentation":"The Amazon QuickSight namespace that is used as the identity provider. For more information about QuickSight namespaces, see Namespace operations.
" + "documentation":"The Amazon Quick Suite namespace that is used as the identity provider. For more information about Quick Suite namespaces, see Namespace operations.
" } }, - "documentation":"The Amazon QuickSight configuration for an Amazon Q Business application that uses QuickSight as the identity provider. For more information, see Creating an Amazon QuickSight integrated application.
" + "documentation":"The Amazon Quick Suite configuration for an Amazon Q Business application that uses Quick Suite as the identity provider. For more information, see Creating an Amazon Quick Suite integrated application.
" }, "ReadAccessType":{ "type":"string", @@ -8839,8 +8852,7 @@ }, "StopDataSourceSyncJobResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "String":{ "type":"string", @@ -9052,8 +9064,7 @@ }, "TagResourceResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "TagValue":{ "type":"string", @@ -9245,8 +9256,7 @@ }, "UntagResourceResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "UpdateApplicationRequest":{ "type":"structure", @@ -9294,8 +9304,7 @@ }, "UpdateApplicationResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "UpdateChatControlsConfigurationRequest":{ "type":"structure", @@ -9344,8 +9353,7 @@ }, "UpdateChatControlsConfigurationResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "UpdateChatResponseConfigurationRequest":{ "type":"structure", @@ -9384,8 +9392,7 @@ }, "UpdateChatResponseConfigurationResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "UpdateDataAccessorRequest":{ "type":"structure", @@ -9423,8 +9430,7 @@ }, "UpdateDataAccessorResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "UpdateDataSourceRequest":{ "type":"structure", @@ -9479,8 +9485,7 @@ }, "UpdateDataSourceResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "UpdateIndexRequest":{ "type":"structure", @@ -9521,8 +9526,7 @@ }, "UpdateIndexResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "UpdatePluginRequest":{ "type":"structure", @@ -9567,8 +9571,7 @@ }, "UpdatePluginResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "UpdateRetrieverRequest":{ "type":"structure", @@ -9602,8 +9605,7 @@ }, "UpdateRetrieverResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "UpdateSubscriptionRequest":{ "type":"structure", @@ -9759,8 +9761,7 @@ }, "UpdateWebExperienceResponse":{ "type":"structure", - "members":{ - } + "members":{} }, "Url":{ "type":"string", diff --git a/awscli/botocore/data/wickr/2024-02-01/endpoint-rule-set-1.json 
b/awscli/botocore/data/wickr/2024-02-01/endpoint-rule-set-1.json new file mode 100644 index 000000000000..1f95801d1f4b --- /dev/null +++ b/awscli/botocore/data/wickr/2024-02-01/endpoint-rule-set-1.json @@ -0,0 +1,350 @@ +{ + "version": "1.0", + "parameters": { + "Region": { + "builtIn": "AWS::Region", + "required": false, + "documentation": "The AWS region used to dispatch the request.", + "type": "string" + }, + "UseDualStack": { + "builtIn": "AWS::UseDualStack", + "required": true, + "default": false, + "documentation": "When true, use the dual-stack endpoint. If the configured endpoint does not support dual-stack, dispatching the request MAY return an error.", + "type": "boolean" + }, + "UseFIPS": { + "builtIn": "AWS::UseFIPS", + "required": true, + "default": false, + "documentation": "When true, send this request to the FIPS-compliant regional endpoint. If the configured endpoint does not have a FIPS compliant endpoint, dispatching the request will return an error.", + "type": "boolean" + }, + "Endpoint": { + "builtIn": "SDK::Endpoint", + "required": false, + "documentation": "Override the endpoint used to send this request", + "type": "string" + } + }, + "rules": [ + { + "conditions": [ + { + "fn": "isSet", + "argv": [ + { + "ref": "Endpoint" + } + ] + } + ], + "rules": [ + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseFIPS" + }, + true + ] + } + ], + "error": "Invalid Configuration: FIPS and custom endpoint are not supported", + "type": "error" + }, + { + "conditions": [], + "rules": [ + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseDualStack" + }, + true + ] + } + ], + "error": "Invalid Configuration: Dualstack and custom endpoint are not supported", + "type": "error" + }, + { + "conditions": [], + "endpoint": { + "url": { + "ref": "Endpoint" + }, + "properties": {}, + "headers": {} + }, + "type": "endpoint" + } + ], + "type": "tree" + } + ], + "type": "tree" + }, + { + "conditions": [], + 
"rules": [ + { + "conditions": [ + { + "fn": "isSet", + "argv": [ + { + "ref": "Region" + } + ] + } + ], + "rules": [ + { + "conditions": [ + { + "fn": "aws.partition", + "argv": [ + { + "ref": "Region" + } + ], + "assign": "PartitionResult" + } + ], + "rules": [ + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseFIPS" + }, + true + ] + }, + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseDualStack" + }, + true + ] + } + ], + "rules": [ + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + true, + { + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsFIPS" + ] + } + ] + }, + { + "fn": "booleanEquals", + "argv": [ + true, + { + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsDualStack" + ] + } + ] + } + ], + "rules": [ + { + "conditions": [], + "rules": [ + { + "conditions": [], + "endpoint": { + "url": "https://admin.wickr-fips.{Region}.{PartitionResult#dualStackDnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" + } + ], + "type": "tree" + } + ], + "type": "tree" + }, + { + "conditions": [], + "error": "FIPS and DualStack are enabled, but this partition does not support one or both", + "type": "error" + } + ], + "type": "tree" + }, + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseFIPS" + }, + true + ] + } + ], + "rules": [ + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsFIPS" + ] + }, + true + ] + } + ], + "rules": [ + { + "conditions": [], + "rules": [ + { + "conditions": [], + "endpoint": { + "url": "https://admin.wickr-fips.{Region}.{PartitionResult#dnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" + } + ], + "type": "tree" + } + ], + "type": "tree" + }, + { + "conditions": [], + "error": "FIPS is enabled but this partition does not support FIPS", + "type": "error" + } + ], + "type": 
"tree" + }, + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseDualStack" + }, + true + ] + } + ], + "rules": [ + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + true, + { + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsDualStack" + ] + } + ] + } + ], + "rules": [ + { + "conditions": [], + "rules": [ + { + "conditions": [], + "endpoint": { + "url": "https://admin.wickr.{Region}.{PartitionResult#dualStackDnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" + } + ], + "type": "tree" + } + ], + "type": "tree" + }, + { + "conditions": [], + "error": "DualStack is enabled but this partition does not support DualStack", + "type": "error" + } + ], + "type": "tree" + }, + { + "conditions": [], + "rules": [ + { + "conditions": [], + "endpoint": { + "url": "https://admin.wickr.{Region}.{PartitionResult#dnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" + } + ], + "type": "tree" + } + ], + "type": "tree" + } + ], + "type": "tree" + }, + { + "conditions": [], + "error": "Invalid Configuration: Missing Region", + "type": "error" + } + ], + "type": "tree" + } + ] +} \ No newline at end of file diff --git a/awscli/botocore/data/wickr/2024-02-01/paginators-1.json b/awscli/botocore/data/wickr/2024-02-01/paginators-1.json new file mode 100644 index 000000000000..f51f71e26c6b --- /dev/null +++ b/awscli/botocore/data/wickr/2024-02-01/paginators-1.json @@ -0,0 +1,52 @@ +{ + "pagination": { + "ListBlockedGuestUsers": { + "input_token": "nextToken", + "output_token": "nextToken", + "limit_key": "maxResults", + "result_key": "blocklist" + }, + "ListBots": { + "input_token": "nextToken", + "output_token": "nextToken", + "limit_key": "maxResults", + "result_key": "bots" + }, + "ListDevicesForUser": { + "input_token": "nextToken", + "output_token": "nextToken", + "limit_key": "maxResults", + "result_key": "devices" + }, + "ListGuestUsers": { + "input_token": "nextToken", + 
"output_token": "nextToken", + "limit_key": "maxResults", + "result_key": "guestlist" + }, + "ListNetworks": { + "input_token": "nextToken", + "output_token": "nextToken", + "limit_key": "maxResults", + "result_key": "networks" + }, + "ListSecurityGroupUsers": { + "input_token": "nextToken", + "output_token": "nextToken", + "limit_key": "maxResults", + "result_key": "users" + }, + "ListSecurityGroups": { + "input_token": "nextToken", + "output_token": "nextToken", + "limit_key": "maxResults", + "result_key": "securityGroups" + }, + "ListUsers": { + "input_token": "nextToken", + "output_token": "nextToken", + "limit_key": "maxResults", + "result_key": "users" + } + } +} diff --git a/awscli/botocore/data/wickr/2024-02-01/service-2.json b/awscli/botocore/data/wickr/2024-02-01/service-2.json new file mode 100644 index 000000000000..c87cd4fa02d2 --- /dev/null +++ b/awscli/botocore/data/wickr/2024-02-01/service-2.json @@ -0,0 +1,4174 @@ +{ + "version":"2.0", + "metadata":{ + "apiVersion":"2024-02-01", + "auth":["aws.auth#sigv4"], + "endpointPrefix":"admin.wickr", + "protocol":"rest-json", + "protocols":["rest-json"], + "serviceFullName":"AWS Wickr Admin API", + "serviceId":"Wickr", + "signatureVersion":"v4", + "signingName":"wickr", + "uid":"wickr-2024-02-01" + }, + "operations":{ + "BatchCreateUser":{ + "name":"BatchCreateUser", + "http":{ + "method":"POST", + "requestUri":"/networks/{networkId}/users", + "responseCode":200 + }, + "input":{"shape":"BatchCreateUserRequest"}, + "output":{"shape":"BatchCreateUserResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Creates multiple users in a specified Wickr network. 
This operation allows you to provision multiple user accounts simultaneously, optionally specifying security groups and validation requirements for each user.
codeValidation, inviteCode, and inviteCodeTtl are restricted to networks under preview only.
", + "idempotent":true + }, + "BatchLookupUserUname":{ + "name":"BatchLookupUserUname", + "http":{ + "method":"POST", + "requestUri":"/networks/{networkId}/users/uname-lookup", + "responseCode":200 + }, + "input":{"shape":"BatchLookupUserUnameRequest"}, + "output":{"shape":"BatchLookupUserUnameResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Looks up multiple user usernames from their unique username hashes (unames). This operation allows you to retrieve the email addresses associated with a list of username hashes.
" + }, + "BatchReinviteUser":{ + "name":"BatchReinviteUser", + "http":{ + "method":"PATCH", + "requestUri":"/networks/{networkId}/users/re-invite", + "responseCode":200 + }, + "input":{"shape":"BatchReinviteUserRequest"}, + "output":{"shape":"BatchReinviteUserResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Resends invitation codes to multiple users who have pending invitations in a Wickr network. This operation is useful when users haven't accepted their initial invitations or when invitations have expired.
" + }, + "BatchResetDevicesForUser":{ + "name":"BatchResetDevicesForUser", + "http":{ + "method":"PATCH", + "requestUri":"/networks/{networkId}/users/{userId}/devices", + "responseCode":200 + }, + "input":{"shape":"BatchResetDevicesForUserRequest"}, + "output":{"shape":"BatchResetDevicesForUserResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Resets multiple devices for a specific user in a Wickr network. This operation forces the selected devices to log out and requires users to re-authenticate, which is useful for security purposes or when devices need to be revoked.
", + "idempotent":true + }, + "BatchToggleUserSuspendStatus":{ + "name":"BatchToggleUserSuspendStatus", + "http":{ + "method":"PATCH", + "requestUri":"/networks/{networkId}/users/toggleSuspend", + "responseCode":200 + }, + "input":{"shape":"BatchToggleUserSuspendStatusRequest"}, + "output":{"shape":"BatchToggleUserSuspendStatusResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Suspends or unsuspends multiple users in a Wickr network. Suspended users cannot access the network until they are unsuspended. This operation is useful for temporarily restricting access without deleting user accounts.
", + "idempotent":true + }, + "CreateBot":{ + "name":"CreateBot", + "http":{ + "method":"POST", + "requestUri":"/networks/{networkId}/bots", + "responseCode":200 + }, + "input":{"shape":"CreateBotRequest"}, + "output":{"shape":"CreateBotResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Creates a new bot in a specified Wickr network. Bots are automated accounts that can send and receive messages, enabling integration with external systems and automation of tasks.
" + }, + "CreateDataRetentionBot":{ + "name":"CreateDataRetentionBot", + "http":{ + "method":"POST", + "requestUri":"/networks/{networkId}/data-retention-bots", + "responseCode":200 + }, + "input":{"shape":"CreateDataRetentionBotRequest"}, + "output":{"shape":"CreateDataRetentionBotResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Creates a data retention bot in a Wickr network. Data retention bots are specialized bots that handle message archiving and compliance by capturing and storing messages for regulatory or organizational requirements.
" + }, + "CreateDataRetentionBotChallenge":{ + "name":"CreateDataRetentionBotChallenge", + "http":{ + "method":"POST", + "requestUri":"/networks/{networkId}/data-retention-bots/challenge", + "responseCode":200 + }, + "input":{"shape":"CreateDataRetentionBotChallengeRequest"}, + "output":{"shape":"CreateDataRetentionBotChallengeResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Creates a new challenge password for the data retention bot. This password is used for authentication when the bot connects to the network.
" + }, + "CreateNetwork":{ + "name":"CreateNetwork", + "http":{ + "method":"POST", + "requestUri":"/networks", + "responseCode":200 + }, + "input":{"shape":"CreateNetworkRequest"}, + "output":{"shape":"CreateNetworkResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Creates a new Wickr network with specified access level and configuration. This operation provisions a new communication network for your organization.
", + "idempotent":true + }, + "CreateSecurityGroup":{ + "name":"CreateSecurityGroup", + "http":{ + "method":"POST", + "requestUri":"/networks/{networkId}/security-groups", + "responseCode":200 + }, + "input":{"shape":"CreateSecurityGroupRequest"}, + "output":{"shape":"CreateSecurityGroupResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Creates a new security group in a Wickr network. Security groups allow you to organize users and control their permissions, features, and security settings.
", + "idempotent":true + }, + "DeleteBot":{ + "name":"DeleteBot", + "http":{ + "method":"DELETE", + "requestUri":"/networks/{networkId}/bots/{botId}", + "responseCode":200 + }, + "input":{"shape":"DeleteBotRequest"}, + "output":{"shape":"DeleteBotResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Deletes a bot from a specified Wickr network. This operation permanently removes the bot account and its associated data from the network.
", + "idempotent":true + }, + "DeleteDataRetentionBot":{ + "name":"DeleteDataRetentionBot", + "http":{ + "method":"DELETE", + "requestUri":"/networks/{networkId}/data-retention-bots", + "responseCode":200 + }, + "input":{"shape":"DeleteDataRetentionBotRequest"}, + "output":{"shape":"DeleteDataRetentionBotResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Deletes the data retention bot from a Wickr network. This operation permanently removes the bot and all its associated data from the database.
", + "idempotent":true + }, + "DeleteNetwork":{ + "name":"DeleteNetwork", + "http":{ + "method":"DELETE", + "requestUri":"/networks/{networkId}", + "responseCode":200 + }, + "input":{"shape":"DeleteNetworkRequest"}, + "output":{"shape":"DeleteNetworkResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Deletes a Wickr network and all its associated resources, including users, bots, security groups, and settings. This operation is permanent and cannot be undone.
", + "idempotent":true + }, + "DeleteSecurityGroup":{ + "name":"DeleteSecurityGroup", + "http":{ + "method":"DELETE", + "requestUri":"/networks/{networkId}/security-groups/{groupId}", + "responseCode":200 + }, + "input":{"shape":"DeleteSecurityGroupRequest"}, + "output":{"shape":"DeleteSecurityGroupResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Deletes a security group from a Wickr network. This operation cannot be performed on the default security group.
", + "idempotent":true + }, + "GetBot":{ + "name":"GetBot", + "http":{ + "method":"GET", + "requestUri":"/networks/{networkId}/bots/{botId}", + "responseCode":200 + }, + "input":{"shape":"GetBotRequest"}, + "output":{"shape":"GetBotResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Retrieves detailed information about a specific bot in a Wickr network, including its status, group membership, and authentication details.
", + "readonly":true + }, + "GetBotsCount":{ + "name":"GetBotsCount", + "http":{ + "method":"GET", + "requestUri":"/networks/{networkId}/bots/count", + "responseCode":200 + }, + "input":{"shape":"GetBotsCountRequest"}, + "output":{"shape":"GetBotsCountResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Retrieves the count of bots in a Wickr network, categorized by their status (pending, active, and total).
", + "readonly":true + }, + "GetDataRetentionBot":{ + "name":"GetDataRetentionBot", + "http":{ + "method":"GET", + "requestUri":"/networks/{networkId}/data-retention-bots", + "responseCode":200 + }, + "input":{"shape":"GetDataRetentionBotRequest"}, + "output":{"shape":"GetDataRetentionBotResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Retrieves information about the data retention bot in a Wickr network, including its status and whether the data retention service is enabled.
", + "readonly":true + }, + "GetGuestUserHistoryCount":{ + "name":"GetGuestUserHistoryCount", + "http":{ + "method":"GET", + "requestUri":"/networks/{networkId}/guest-users/count", + "responseCode":200 + }, + "input":{"shape":"GetGuestUserHistoryCountRequest"}, + "output":{"shape":"GetGuestUserHistoryCountResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Retrieves historical guest user count data for a Wickr network, showing the number of guest users per billing period over the past 90 days.
", + "readonly":true + }, + "GetNetwork":{ + "name":"GetNetwork", + "http":{ + "method":"GET", + "requestUri":"/networks/{networkId}", + "responseCode":200 + }, + "input":{"shape":"GetNetworkRequest"}, + "output":{"shape":"GetNetworkResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Retrieves detailed information about a specific Wickr network, including its configuration, access level, and status.
", + "readonly":true + }, + "GetNetworkSettings":{ + "name":"GetNetworkSettings", + "http":{ + "method":"GET", + "requestUri":"/networks/{networkId}/settings", + "responseCode":200 + }, + "input":{"shape":"GetNetworkSettingsRequest"}, + "output":{"shape":"GetNetworkSettingsResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Retrieves all network-level settings for a Wickr network, including client metrics, data retention, and other configuration options.
", + "readonly":true + }, + "GetOidcInfo":{ + "name":"GetOidcInfo", + "http":{ + "method":"GET", + "requestUri":"/networks/{networkId}/oidc", + "responseCode":200 + }, + "input":{"shape":"GetOidcInfoRequest"}, + "output":{"shape":"GetOidcInfoResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Retrieves the OpenID Connect (OIDC) configuration for a Wickr network, including SSO settings and optional token information if access token parameters are provided.
", + "readonly":true + }, + "GetSecurityGroup":{ + "name":"GetSecurityGroup", + "http":{ + "method":"GET", + "requestUri":"/networks/{networkId}/security-groups/{groupId}", + "responseCode":200 + }, + "input":{"shape":"GetSecurityGroupRequest"}, + "output":{"shape":"GetSecurityGroupResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Retrieves detailed information about a specific security group in a Wickr network, including its settings, member counts, and configuration.
", + "readonly":true + }, + "GetUser":{ + "name":"GetUser", + "http":{ + "method":"GET", + "requestUri":"/networks/{networkId}/users/{userId}", + "responseCode":200 + }, + "input":{"shape":"GetUserRequest"}, + "output":{"shape":"GetUserResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Retrieves detailed information about a specific user in a Wickr network, including their profile, status, and activity history.
", + "readonly":true + }, + "GetUsersCount":{ + "name":"GetUsersCount", + "http":{ + "method":"GET", + "requestUri":"/networks/{networkId}/users/count", + "responseCode":200 + }, + "input":{"shape":"GetUsersCountRequest"}, + "output":{"shape":"GetUsersCountResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Retrieves the count of users in a Wickr network, categorized by their status (pending, active, rejected) and showing how many users can still be added.
", + "readonly":true + }, + "ListBlockedGuestUsers":{ + "name":"ListBlockedGuestUsers", + "http":{ + "method":"GET", + "requestUri":"/networks/{networkId}/guest-users/blocklist", + "responseCode":200 + }, + "input":{"shape":"ListBlockedGuestUsersRequest"}, + "output":{"shape":"ListBlockedGuestUsersResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Retrieves a paginated list of guest users who have been blocked from a Wickr network. You can filter and sort the results.
", + "readonly":true + }, + "ListBots":{ + "name":"ListBots", + "http":{ + "method":"GET", + "requestUri":"/networks/{networkId}/bots", + "responseCode":200 + }, + "input":{"shape":"ListBotsRequest"}, + "output":{"shape":"ListBotsResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Retrieves a paginated list of bots in a specified Wickr network. You can filter and sort the results based on various criteria.
", + "readonly":true + }, + "ListDevicesForUser":{ + "name":"ListDevicesForUser", + "http":{ + "method":"GET", + "requestUri":"/networks/{networkId}/users/{userId}/devices", + "responseCode":200 + }, + "input":{"shape":"ListDevicesForUserRequest"}, + "output":{"shape":"ListDevicesForUserResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Retrieves a paginated list of devices associated with a specific user in a Wickr network. This operation returns information about all devices where the user has logged into Wickr.
", + "readonly":true + }, + "ListGuestUsers":{ + "name":"ListGuestUsers", + "http":{ + "method":"GET", + "requestUri":"/networks/{networkId}/guest-users", + "responseCode":200 + }, + "input":{"shape":"ListGuestUsersRequest"}, + "output":{"shape":"ListGuestUsersResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Retrieves a paginated list of guest users who have communicated with your Wickr network. Guest users are external users from federated networks who can communicate with network members.
", + "readonly":true + }, + "ListNetworks":{ + "name":"ListNetworks", + "http":{ + "method":"GET", + "requestUri":"/networks", + "responseCode":200 + }, + "input":{"shape":"ListNetworksRequest"}, + "output":{"shape":"ListNetworksResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Retrieves a paginated list of all Wickr networks associated with your Amazon Web Services account. You can sort the results by network ID or name.
", + "readonly":true + }, + "ListSecurityGroupUsers":{ + "name":"ListSecurityGroupUsers", + "http":{ + "method":"GET", + "requestUri":"/networks/{networkId}/security-groups/{groupId}/users", + "responseCode":200 + }, + "input":{"shape":"ListSecurityGroupUsersRequest"}, + "output":{"shape":"ListSecurityGroupUsersResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Retrieves a paginated list of users who belong to a specific security group in a Wickr network.
", + "readonly":true + }, + "ListSecurityGroups":{ + "name":"ListSecurityGroups", + "http":{ + "method":"GET", + "requestUri":"/networks/{networkId}/security-groups", + "responseCode":200 + }, + "input":{"shape":"ListSecurityGroupsRequest"}, + "output":{"shape":"ListSecurityGroupsResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Retrieves a paginated list of security groups in a specified Wickr network. You can sort the results by various criteria.
", + "readonly":true + }, + "ListUsers":{ + "name":"ListUsers", + "http":{ + "method":"GET", + "requestUri":"/networks/{networkId}/users", + "responseCode":200 + }, + "input":{"shape":"ListUsersRequest"}, + "output":{"shape":"ListUsersResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Retrieves a paginated list of users in a specified Wickr network. You can filter and sort the results based on various criteria such as name, status, or security group membership.
", + "readonly":true + }, + "RegisterOidcConfig":{ + "name":"RegisterOidcConfig", + "http":{ + "method":"POST", + "requestUri":"/networks/{networkId}/oidc/save", + "responseCode":200 + }, + "input":{"shape":"RegisterOidcConfigRequest"}, + "output":{"shape":"RegisterOidcConfigResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Registers and saves an OpenID Connect (OIDC) configuration for a Wickr network, enabling Single Sign-On (SSO) authentication through an identity provider.
" + }, + "RegisterOidcConfigTest":{ + "name":"RegisterOidcConfigTest", + "http":{ + "method":"POST", + "requestUri":"/networks/{networkId}/oidc/test", + "responseCode":200 + }, + "input":{"shape":"RegisterOidcConfigTestRequest"}, + "output":{"shape":"RegisterOidcConfigTestResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Tests an OpenID Connect (OIDC) configuration for a Wickr network by validating the connection to the identity provider and retrieving its supported capabilities.
" + }, + "UpdateBot":{ + "name":"UpdateBot", + "http":{ + "method":"PATCH", + "requestUri":"/networks/{networkId}/bots/{botId}", + "responseCode":200 + }, + "input":{"shape":"UpdateBotRequest"}, + "output":{"shape":"UpdateBotResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Updates the properties of an existing bot in a Wickr network. This operation allows you to modify the bot's display name, security group, password, or suspension status.
", + "idempotent":true + }, + "UpdateDataRetention":{ + "name":"UpdateDataRetention", + "http":{ + "method":"PATCH", + "requestUri":"/networks/{networkId}/data-retention-bots", + "responseCode":200 + }, + "input":{"shape":"UpdateDataRetentionRequest"}, + "output":{"shape":"UpdateDataRetentionResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Updates the data retention bot settings, allowing you to enable or disable the data retention service, or acknowledge the public key message.
", + "idempotent":true + }, + "UpdateGuestUser":{ + "name":"UpdateGuestUser", + "http":{ + "method":"PATCH", + "requestUri":"/networks/{networkId}/guest-users/{usernameHash}", + "responseCode":200 + }, + "input":{"shape":"UpdateGuestUserRequest"}, + "output":{"shape":"UpdateGuestUserResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Updates the block status of a guest user in a Wickr network. This operation allows you to block or unblock a guest user from accessing the network.
" + }, + "UpdateNetwork":{ + "name":"UpdateNetwork", + "http":{ + "method":"PATCH", + "requestUri":"/networks/{networkId}", + "responseCode":200 + }, + "input":{"shape":"UpdateNetworkRequest"}, + "output":{"shape":"UpdateNetworkResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Updates the properties of an existing Wickr network, such as its name or encryption key configuration.
", + "idempotent":true + }, + "UpdateNetworkSettings":{ + "name":"UpdateNetworkSettings", + "http":{ + "method":"PATCH", + "requestUri":"/networks/{networkId}/settings", + "responseCode":200 + }, + "input":{"shape":"UpdateNetworkSettingsRequest"}, + "output":{"shape":"UpdateNetworkSettingsResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Updates network-level settings for a Wickr network. You can modify settings such as client metrics, data retention, and other network-wide options.
", + "idempotent":true + }, + "UpdateSecurityGroup":{ + "name":"UpdateSecurityGroup", + "http":{ + "method":"PATCH", + "requestUri":"/networks/{networkId}/security-groups/{groupId}", + "responseCode":200 + }, + "input":{"shape":"UpdateSecurityGroupRequest"}, + "output":{"shape":"UpdateSecurityGroupResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Updates the properties of an existing security group in a Wickr network, such as its name or settings.
", + "idempotent":true + }, + "UpdateUser":{ + "name":"UpdateUser", + "http":{ + "method":"PATCH", + "requestUri":"/networks/{networkId}/users", + "responseCode":200 + }, + "input":{"shape":"UpdateUserRequest"}, + "output":{"shape":"UpdateUserResponse"}, + "errors":[ + {"shape":"ValidationError"}, + {"shape":"BadRequestError"}, + {"shape":"ResourceNotFoundError"}, + {"shape":"ForbiddenError"}, + {"shape":"UnauthorizedError"}, + {"shape":"InternalServerError"}, + {"shape":"RateLimitError"} + ], + "documentation":"Updates the properties of an existing user in a Wickr network. This operation allows you to modify the user's name, password, security group membership, and invite code settings.
The codeValidation, inviteCode, and inviteCodeTtl parameters are restricted to networks in preview.
A detailed message explaining what was wrong with the request and how to correct it.
" + } + }, + "documentation":"The request was invalid or malformed. This error occurs when the request parameters do not meet the API requirements, such as invalid field values, missing required parameters, or improperly formatted data.
", + "error":{ + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, + "BasicDeviceObject":{ + "type":"structure", + "members":{ + "appId":{ + "shape":"GenericString", + "documentation":"The unique application ID for the Wickr app on this device.
" + }, + "created":{ + "shape":"GenericString", + "documentation":"The timestamp when the device first appeared in the Wickr database.
" + }, + "lastLogin":{ + "shape":"GenericString", + "documentation":"The timestamp when the device last successfully logged into Wickr. This is also used to determine SSO idle time.
" + }, + "statusText":{ + "shape":"GenericString", + "documentation":"The current status of the device, either 'Active' or 'Reset' depending on whether the device is currently active or has been marked for reset.
" + }, + "suspend":{ + "shape":"Boolean", + "documentation":"Indicates whether the device is suspended.
" + }, + "type":{ + "shape":"GenericString", + "documentation":"The operating system of the device (e.g., 'MacOSX', 'Windows', 'iOS', 'Android').
" + } + }, + "documentation":"Represents a device where a user has logged into Wickr, containing information about the device's type, status, and login history.
" + }, + "BatchCreateUserRequest":{ + "type":"structure", + "required":[ + "networkId", + "users" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network where users will be created.
", + "location":"uri", + "locationName":"networkId" + }, + "users":{ + "shape":"BatchCreateUserRequestItems", + "documentation":"A list of user objects containing the details for each user to be created, including username, name, security groups, and optional invite codes. Maximum 50 users per batch request.
" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A unique identifier for this request to ensure idempotency. If you retry a request with the same client token, the service will return the same response without creating duplicate users.
", + "idempotencyToken":true, + "location":"header", + "locationName":"X-Client-Token" + } + } + }, + "BatchCreateUserRequestItem":{ + "type":"structure", + "required":[ + "securityGroupIds", + "username" + ], + "members":{ + "firstName":{ + "shape":"SensitiveString", + "documentation":"The first name of the user.
" + }, + "lastName":{ + "shape":"SensitiveString", + "documentation":"The last name of the user.
" + }, + "securityGroupIds":{ + "shape":"SecurityGroupIdList", + "documentation":"A list of security group IDs to which the user should be assigned.
" + }, + "username":{ + "shape":"GenericString", + "documentation":"The email address or username for the user. Must be unique within the network.
" + }, + "inviteCode":{ + "shape":"GenericString", + "documentation":"A custom invite code for the user. If not provided, one will be generated automatically.
" + }, + "inviteCodeTtl":{ + "shape":"Integer", + "documentation":"The time-to-live for the invite code in days. After this period, the invite code will expire.
" + }, + "codeValidation":{ + "shape":"Boolean", + "documentation":"Indicates whether the user can be verified through a custom invite code.
" + } + }, + "documentation":"Contains the details for a single user to be created in a batch user creation request.
A user can only be assigned to a single security group. Attempting to add a user to multiple security groups is not supported and will result in an error.
The codeValidation, inviteCode, and inviteCodeTtl parameters are restricted to networks in preview.
A message indicating the overall result of the batch operation.
" + }, + "successful":{ + "shape":"Users", + "documentation":"A list of user objects that were successfully created, including their assigned user IDs and invite codes.
" + }, + "failed":{ + "shape":"BatchUserErrorResponseItems", + "documentation":"A list of user creation attempts that failed, including error details explaining why each user could not be created.
" + } + } + }, + "BatchDeleteUserRequest":{ + "type":"structure", + "required":[ + "networkId", + "userIds" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network from which users will be deleted.
", + "location":"uri", + "locationName":"networkId" + }, + "userIds":{ + "shape":"UserIds", + "documentation":"A list of user IDs identifying the users to be deleted from the network. Maximum 50 users per batch request.
" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A unique identifier for this request to ensure idempotency. If you retry a request with the same client token, the service will return the same response without attempting to delete users again.
", + "idempotencyToken":true, + "location":"header", + "locationName":"X-Client-Token" + } + } + }, + "BatchDeleteUserResponse":{ + "type":"structure", + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message indicating the overall result of the batch deletion operation.
" + }, + "successful":{ + "shape":"BatchUserSuccessResponseItems", + "documentation":"A list of user IDs that were successfully deleted from the network.
" + }, + "failed":{ + "shape":"BatchUserErrorResponseItems", + "documentation":"A list of user deletion attempts that failed, including error details explaining why each user could not be deleted.
" + } + } + }, + "BatchDeviceErrorResponseItem":{ + "type":"structure", + "required":["appId"], + "members":{ + "field":{ + "shape":"GenericString", + "documentation":"The field that caused the error.
" + }, + "reason":{ + "shape":"GenericString", + "documentation":"A description of why the device operation failed.
" + }, + "appId":{ + "shape":"GenericString", + "documentation":"The application ID of the device that failed to be processed.
" + } + }, + "documentation":"Contains error information for a device operation that failed in a batch device request.
" + }, + "BatchDeviceErrorResponseItems":{ + "type":"list", + "member":{"shape":"BatchDeviceErrorResponseItem"} + }, + "BatchDeviceSuccessResponseItem":{ + "type":"structure", + "required":["appId"], + "members":{ + "appId":{ + "shape":"GenericString", + "documentation":"The application ID of the device that was successfully processed.
" + } + }, + "documentation":"Contains information about a device that was successfully processed in a batch device operation.
" + }, + "BatchDeviceSuccessResponseItems":{ + "type":"list", + "member":{"shape":"BatchDeviceSuccessResponseItem"} + }, + "BatchLookupUserUnameRequest":{ + "type":"structure", + "required":[ + "networkId", + "unames" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network where the users will be looked up.
", + "location":"uri", + "locationName":"networkId" + }, + "unames":{ + "shape":"Unames", + "documentation":"A list of username hashes (unames) to look up. Each uname is a unique identifier for a user's username. Maximum 50 unames per batch request.
" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A unique identifier for this request to ensure idempotency.
", + "idempotencyToken":true, + "location":"header", + "locationName":"X-Client-Token" + } + } + }, + "BatchLookupUserUnameResponse":{ + "type":"structure", + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message indicating the overall result of the batch lookup operation.
" + }, + "successful":{ + "shape":"BatchUnameSuccessResponseItems", + "documentation":"A list of successfully resolved username hashes with their corresponding email addresses.
" + }, + "failed":{ + "shape":"BatchUnameErrorResponseItems", + "documentation":"A list of username hash lookup attempts that failed, including error details explaining why each lookup failed.
" + } + } + }, + "BatchReinviteUserRequest":{ + "type":"structure", + "required":[ + "networkId", + "userIds" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network where users will be reinvited.
", + "location":"uri", + "locationName":"networkId" + }, + "userIds":{ + "shape":"UserIds", + "documentation":"A list of user IDs identifying the users to be reinvited to the network. Maximum 50 users per batch request.
" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A unique identifier for this request to ensure idempotency.
", + "idempotencyToken":true, + "location":"header", + "locationName":"X-Client-Token" + } + } + }, + "BatchReinviteUserResponse":{ + "type":"structure", + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message indicating the overall result of the batch reinvitation operation.
" + }, + "successful":{ + "shape":"BatchUserSuccessResponseItems", + "documentation":"A list of user IDs that were successfully reinvited.
" + }, + "failed":{ + "shape":"BatchUserErrorResponseItems", + "documentation":"A list of reinvitation attempts that failed, including error details explaining why each user could not be reinvited.
" + } + } + }, + "BatchResetDevicesForUserRequest":{ + "type":"structure", + "required":[ + "networkId", + "userId", + "appIds" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network containing the user whose devices will be reset.
", + "location":"uri", + "locationName":"networkId" + }, + "userId":{ + "shape":"UserId", + "documentation":"The ID of the user whose devices will be reset.
", + "location":"uri", + "locationName":"userId" + }, + "appIds":{ + "shape":"AppIds", + "documentation":"A list of application IDs identifying the specific devices to be reset for the user. Maximum 50 devices per batch request.
" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A unique identifier for this request to ensure idempotency.
", + "idempotencyToken":true, + "location":"header", + "locationName":"X-Client-Token" + } + } + }, + "BatchResetDevicesForUserResponse":{ + "type":"structure", + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message indicating the overall result of the batch device reset operation.
" + }, + "successful":{ + "shape":"BatchDeviceSuccessResponseItems", + "documentation":"A list of application IDs that were successfully reset.
" + }, + "failed":{ + "shape":"BatchDeviceErrorResponseItems", + "documentation":"A list of device reset attempts that failed, including error details explaining why each device could not be reset.
" + } + } + }, + "BatchToggleUserSuspendStatusRequest":{ + "type":"structure", + "required":[ + "networkId", + "suspend", + "userIds" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network where users will be suspended or unsuspended.
", + "location":"uri", + "locationName":"networkId" + }, + "suspend":{ + "shape":"Boolean", + "documentation":"A boolean value indicating whether to suspend (true) or unsuspend (false) the specified users.
", + "location":"querystring", + "locationName":"suspend" + }, + "userIds":{ + "shape":"UserIds", + "documentation":"A list of user IDs identifying the users whose suspend status will be toggled. Maximum 50 users per batch request.
" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A unique identifier for this request to ensure idempotency.
", + "idempotencyToken":true, + "location":"header", + "locationName":"X-Client-Token" + } + } + }, + "BatchToggleUserSuspendStatusResponse":{ + "type":"structure", + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message indicating the overall result of the batch suspend status toggle operation.
" + }, + "successful":{ + "shape":"BatchUserSuccessResponseItems", + "documentation":"A list of user IDs whose suspend status was successfully toggled.
" + }, + "failed":{ + "shape":"BatchUserErrorResponseItems", + "documentation":"A list of suspend status toggle attempts that failed, including error details explaining why each user's status could not be changed.
" + } + } + }, + "BatchUnameErrorResponseItem":{ + "type":"structure", + "required":["uname"], + "members":{ + "field":{ + "shape":"GenericString", + "documentation":"The field that caused the error.
" + }, + "reason":{ + "shape":"GenericString", + "documentation":"A description of why the username hash lookup failed.
" + }, + "uname":{ + "shape":"Uname", + "documentation":"The username hash that failed to be looked up.
" + } + }, + "documentation":"Contains error information for a username hash lookup that failed in a batch uname lookup request.
" + }, + "BatchUnameErrorResponseItems":{ + "type":"list", + "member":{"shape":"BatchUnameErrorResponseItem"} + }, + "BatchUnameSuccessResponseItem":{ + "type":"structure", + "required":[ + "uname", + "username" + ], + "members":{ + "uname":{ + "shape":"Uname", + "documentation":"The username hash that was successfully resolved.
" + }, + "username":{ + "shape":"GenericString", + "documentation":"The email address or username corresponding to the username hash.
" + } + }, + "documentation":"Contains information about a username hash that was successfully resolved in a batch uname lookup operation.
" + }, + "BatchUnameSuccessResponseItems":{ + "type":"list", + "member":{"shape":"BatchUnameSuccessResponseItem"} + }, + "BatchUserErrorResponseItem":{ + "type":"structure", + "required":["userId"], + "members":{ + "field":{ + "shape":"GenericString", + "documentation":"The field that caused the error.
" + }, + "reason":{ + "shape":"GenericString", + "documentation":"A description of why the user operation failed.
" + }, + "userId":{ + "shape":"UserId", + "documentation":"The user ID associated with the failed operation.
" + } + }, + "documentation":"Contains error information for a user operation that failed in a batch user request.
" + }, + "BatchUserErrorResponseItems":{ + "type":"list", + "member":{"shape":"BatchUserErrorResponseItem"} + }, + "BatchUserSuccessResponseItem":{ + "type":"structure", + "required":["userId"], + "members":{ + "userId":{ + "shape":"UserId", + "documentation":"The user ID that was successfully processed.
" + } + }, + "documentation":"Contains information about a user that was successfully processed in a batch user operation.
" + }, + "BatchUserSuccessResponseItems":{ + "type":"list", + "member":{"shape":"BatchUserSuccessResponseItem"} + }, + "BlockedGuestUser":{ + "type":"structure", + "required":[ + "username", + "admin", + "modified", + "usernameHash" + ], + "members":{ + "username":{ + "shape":"GenericString", + "documentation":"The username of the blocked guest user.
" + }, + "admin":{ + "shape":"GenericString", + "documentation":"The username of the administrator who blocked this guest user.
" + }, + "modified":{ + "shape":"GenericString", + "documentation":"The timestamp when the guest user was blocked or last modified.
" + }, + "usernameHash":{ + "shape":"GenericString", + "documentation":"The unique username hash identifier for the blocked guest user.
" + } + }, + "documentation":"Represents a guest user who has been blocked from accessing a Wickr network.
" + }, + "BlockedGuestUserList":{ + "type":"list", + "member":{"shape":"BlockedGuestUser"} + }, + "Boolean":{ + "type":"boolean", + "box":true + }, + "Bot":{ + "type":"structure", + "members":{ + "botId":{ + "shape":"GenericString", + "documentation":"The unique identifier of the bot.
" + }, + "displayName":{ + "shape":"GenericString", + "documentation":"The display name of the bot that is visible to users.
" + }, + "username":{ + "shape":"GenericString", + "documentation":"The username of the bot.
" + }, + "uname":{ + "shape":"GenericString", + "documentation":"The unique username hash identifier for the bot.
" + }, + "pubkey":{ + "shape":"GenericString", + "documentation":"The public key of the bot used for encryption.
" + }, + "status":{ + "shape":"BotStatus", + "documentation":"The current status of the bot (1 for pending, 2 for active).
" + }, + "groupId":{ + "shape":"GenericString", + "documentation":"The ID of the security group to which the bot belongs.
" + }, + "hasChallenge":{ + "shape":"Boolean", + "documentation":"Indicates whether the bot has a password set.
" + }, + "suspended":{ + "shape":"Boolean", + "documentation":"Indicates whether the bot is currently suspended.
" + }, + "lastLogin":{ + "shape":"GenericString", + "documentation":"The timestamp of the bot's last login.
" + } + }, + "documentation":"Represents a bot account in a Wickr network with all its informational fields.
" + }, + "BotId":{ + "type":"string", + "max":10, + "min":1, + "pattern":"[0-9]+" + }, + "BotStatus":{ + "type":"integer", + "box":true + }, + "Bots":{ + "type":"list", + "member":{"shape":"Bot"} + }, + "CallingSettings":{ + "type":"structure", + "members":{ + "canStart11Call":{ + "shape":"Boolean", + "documentation":"Specifies whether users can start one-to-one calls.
" + }, + "canVideoCall":{ + "shape":"Boolean", + "documentation":"Specifies whether users can make video calls (as opposed to audio-only calls). Valid only when audio call(canStart11Call) is enabled.
" + }, + "forceTcpCall":{ + "shape":"Boolean", + "documentation":"When enabled, forces all calls to use TCP protocol instead of UDP for network traversal.
" + } + }, + "documentation":"Defines the calling feature permissions and settings for users in a security group, controlling what types of calls users can initiate and participate in.
" + }, + "ClientToken":{ + "type":"string", + "max":64, + "min":1, + "pattern":"[a-zA-Z0-9-_:]+" + }, + "CreateBotRequest":{ + "type":"structure", + "required":[ + "networkId", + "username", + "groupId", + "challenge" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network where the bot will be created.
", + "location":"uri", + "locationName":"networkId" + }, + "username":{ + "shape":"GenericString", + "documentation":"The username for the bot. This must be unique within the network and follow the network's naming conventions.
" + }, + "displayName":{ + "shape":"GenericString", + "documentation":"The display name for the bot that will be visible to users in the network.
" + }, + "groupId":{ + "shape":"GenericString", + "documentation":"The ID of the security group to which the bot will be assigned.
" + }, + "challenge":{ + "shape":"SensitiveString", + "documentation":"The password for the bot account.
" + } + } + }, + "CreateBotResponse":{ + "type":"structure", + "required":["botId"], + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message indicating the result of the bot creation operation.
" + }, + "botId":{ + "shape":"BotId", + "documentation":"The unique identifier assigned to the newly created bot.
" + }, + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the network where the bot was created.
" + }, + "username":{ + "shape":"GenericString", + "documentation":"The username of the newly created bot.
" + }, + "displayName":{ + "shape":"GenericString", + "documentation":"The display name of the newly created bot.
" + }, + "groupId":{ + "shape":"GenericString", + "documentation":"The ID of the security group to which the bot was assigned.
" + } + } + }, + "CreateDataRetentionBotChallengeRequest":{ + "type":"structure", + "required":["networkId"], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network containing the data retention bot.
", + "location":"uri", + "locationName":"networkId" + } + } + }, + "CreateDataRetentionBotChallengeResponse":{ + "type":"structure", + "required":["challenge"], + "members":{ + "challenge":{ + "shape":"SensitiveString", + "documentation":"The newly generated challenge password for the data retention bot.
" + } + } + }, + "CreateDataRetentionBotRequest":{ + "type":"structure", + "required":["networkId"], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network where the data retention bot will be created.
", + "location":"uri", + "locationName":"networkId" + } + } + }, + "CreateDataRetentionBotResponse":{ + "type":"structure", + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message indicating that the data retention bot was successfully provisioned.
" + } + } + }, + "CreateNetworkRequest":{ + "type":"structure", + "required":[ + "networkName", + "accessLevel" + ], + "members":{ + "networkName":{ + "shape":"GenericString", + "documentation":"The name for the new network. Must be between 1 and 20 characters.
" + }, + "accessLevel":{ + "shape":"AccessLevel", + "documentation":"The access level for the network. Valid values are STANDARD or PREMIUM, which determine the features and capabilities available to network members.
" + }, + "enablePremiumFreeTrial":{ + "shape":"Boolean", + "documentation":"Specifies whether to enable a premium free trial for the network. It is optional and has a default value as false. When set to true, the network starts with premium features for a limited trial period.
" + }, + "encryptionKeyArn":{ + "shape":"GenericString", + "documentation":"The ARN of the Amazon Web Services KMS customer managed key to use for encrypting sensitive data in the network.
" + } + } + }, + "CreateNetworkResponse":{ + "type":"structure", + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The unique identifier assigned to the newly created network.
" + }, + "networkName":{ + "shape":"GenericString", + "documentation":"The name of the newly created network.
" + }, + "encryptionKeyArn":{ + "shape":"GenericString", + "documentation":"The ARN of the KMS key being used to encrypt sensitive data in the network.
" + } + } + }, + "CreateSecurityGroupRequest":{ + "type":"structure", + "required":[ + "networkId", + "name", + "securityGroupSettings" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network where the security group will be created.
", + "location":"uri", + "locationName":"networkId" + }, + "name":{ + "shape":"GenericString", + "documentation":"The name for the new security group.
" + }, + "securityGroupSettings":{ + "shape":"SecurityGroupSettingsRequest", + "documentation":"The configuration settings for the security group, including permissions, federation settings, and feature controls.
" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A unique identifier for this request to ensure idempotency.
", + "idempotencyToken":true, + "location":"header", + "locationName":"X-Client-Token" + } + } + }, + "CreateSecurityGroupResponse":{ + "type":"structure", + "required":["securityGroup"], + "members":{ + "securityGroup":{ + "shape":"SecurityGroup", + "documentation":"The details of the newly created security group, including its ID, name, and settings.
" + } + } + }, + "DataRetentionActionType":{ + "type":"string", + "enum":[ + "ENABLE", + "DISABLE", + "PUBKEY_MSG_ACK" + ] + }, + "DeleteBotRequest":{ + "type":"structure", + "required":[ + "networkId", + "botId" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network from which the bot will be deleted.
", + "location":"uri", + "locationName":"networkId" + }, + "botId":{ + "shape":"BotId", + "documentation":"The unique identifier of the bot to be deleted.
", + "location":"uri", + "locationName":"botId" + } + } + }, + "DeleteBotResponse":{ + "type":"structure", + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message indicating the result of the bot deletion operation.
" + } + } + }, + "DeleteDataRetentionBotRequest":{ + "type":"structure", + "required":["networkId"], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network from which the data retention bot will be deleted.
", + "location":"uri", + "locationName":"networkId" + } + } + }, + "DeleteDataRetentionBotResponse":{ + "type":"structure", + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message indicating that the data retention bot and all associated data were successfully deleted.
" + } + } + }, + "DeleteNetworkRequest":{ + "type":"structure", + "required":["networkId"], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network to delete.
", + "location":"uri", + "locationName":"networkId" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A unique identifier for this request to ensure idempotency. If you retry a request with the same client token, the service will return the same response without attempting to delete the network again.
", + "idempotencyToken":true, + "location":"header", + "locationName":"X-Client-Token" + } + } + }, + "DeleteNetworkResponse":{ + "type":"structure", + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message indicating that the network deletion has been initiated successfully.
" + } + } + }, + "DeleteSecurityGroupRequest":{ + "type":"structure", + "required":[ + "networkId", + "groupId" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network from which the security group will be deleted.
", + "location":"uri", + "locationName":"networkId" + }, + "groupId":{ + "shape":"GenericString", + "documentation":"The unique identifier of the security group to delete.
", + "location":"uri", + "locationName":"groupId" + } + } + }, + "DeleteSecurityGroupResponse":{ + "type":"structure", + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message indicating the result of the security group deletion operation.
" + }, + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the network from which the security group was deleted.
" + }, + "groupId":{ + "shape":"GenericString", + "documentation":"The ID of the security group that was deleted.
" + } + } + }, + "Devices":{ + "type":"list", + "member":{"shape":"BasicDeviceObject"} + }, + "ErrorDetail":{ + "type":"structure", + "members":{ + "field":{ + "shape":"GenericString", + "documentation":"The name of the field that contains an error or warning.
" + }, + "reason":{ + "shape":"GenericString", + "documentation":"A detailed description of the error or warning.
" + } + }, + "documentation":"Contains detailed error information explaining why an operation failed, including which field caused the error and the reason for the failure.
" + }, + "ErrorDetailList":{ + "type":"list", + "member":{"shape":"ErrorDetail"} + }, + "ForbiddenError":{ + "type":"structure", + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message explaining why access was denied and what permissions are required.
" + } + }, + "documentation":"Access to the requested resource is forbidden. This error occurs when the authenticated user does not have the necessary permissions to perform the requested operation, even though they are authenticated.
", + "error":{ + "httpStatusCode":403, + "senderFault":true + }, + "exception":true + }, + "GenericString":{ + "type":"string", + "pattern":"[\\S\\s]*" + }, + "GetBotRequest":{ + "type":"structure", + "required":[ + "networkId", + "botId" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network containing the bot.
", + "location":"uri", + "locationName":"networkId" + }, + "botId":{ + "shape":"BotId", + "documentation":"The unique identifier of the bot to retrieve.
", + "location":"uri", + "locationName":"botId" + } + } + }, + "GetBotResponse":{ + "type":"structure", + "members":{ + "botId":{ + "shape":"GenericString", + "documentation":"The unique identifier of the bot.
" + }, + "displayName":{ + "shape":"GenericString", + "documentation":"The display name of the bot that is visible to users.
" + }, + "username":{ + "shape":"GenericString", + "documentation":"The username of the bot.
" + }, + "uname":{ + "shape":"GenericString", + "documentation":"The unique username hash identifier for the bot.
" + }, + "pubkey":{ + "shape":"GenericString", + "documentation":"The public key of the bot used for encryption.
" + }, + "status":{ + "shape":"BotStatus", + "documentation":"The current status of the bot (1 for pending, 2 for active).
" + }, + "groupId":{ + "shape":"GenericString", + "documentation":"The ID of the security group to which the bot belongs.
" + }, + "hasChallenge":{ + "shape":"Boolean", + "documentation":"Indicates whether the bot has a password set.
" + }, + "suspended":{ + "shape":"Boolean", + "documentation":"Indicates whether the bot is currently suspended.
" + }, + "lastLogin":{ + "shape":"GenericString", + "documentation":"The timestamp of the bot's last login.
" + } + } + }, + "GetBotsCountRequest":{ + "type":"structure", + "required":["networkId"], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network for which to retrieve bot counts.
", + "location":"uri", + "locationName":"networkId" + } + } + }, + "GetBotsCountResponse":{ + "type":"structure", + "required":[ + "pending", + "active", + "total" + ], + "members":{ + "pending":{ + "shape":"Integer", + "documentation":"The number of bots with pending status (invited but not yet activated).
" + }, + "active":{ + "shape":"Integer", + "documentation":"The number of bots with active status.
" + }, + "total":{ + "shape":"Integer", + "documentation":"The total number of bots in the network (active and pending).
" + } + } + }, + "GetDataRetentionBotRequest":{ + "type":"structure", + "required":["networkId"], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network containing the data retention bot.
", + "location":"uri", + "locationName":"networkId" + } + } + }, + "GetDataRetentionBotResponse":{ + "type":"structure", + "members":{ + "botName":{ + "shape":"GenericString", + "documentation":"The name of the data retention bot.
" + }, + "botExists":{ + "shape":"Boolean", + "documentation":"Indicates whether a data retention bot exists in the network.
" + }, + "isBotActive":{ + "shape":"Boolean", + "documentation":"Indicates whether the data retention bot is active and operational.
" + }, + "isDataRetentionBotRegistered":{ + "shape":"Boolean", + "documentation":"Indicates whether the data retention bot has been registered with the network.
" + }, + "isDataRetentionServiceEnabled":{ + "shape":"Boolean", + "documentation":"Indicates whether the data retention service is enabled for the network.
" + }, + "isPubkeyMsgAcked":{ + "shape":"Boolean", + "documentation":"Indicates whether the public key message has been acknowledged by the bot.
" + } + } + }, + "GetGuestUserHistoryCountRequest":{ + "type":"structure", + "required":["networkId"], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network for which to retrieve guest user history.
", + "location":"uri", + "locationName":"networkId" + } + } + }, + "GetGuestUserHistoryCountResponse":{ + "type":"structure", + "required":["history"], + "members":{ + "history":{ + "shape":"GuestUserHistoryCountList", + "documentation":"A list of historical guest user counts, organized by month and billing period.
" + } + } + }, + "GetNetworkRequest":{ + "type":"structure", + "required":["networkId"], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network to retrieve.
", + "location":"uri", + "locationName":"networkId" + } + } + }, + "GetNetworkResponse":{ + "type":"structure", + "required":[ + "networkId", + "networkName", + "accessLevel", + "awsAccountId", + "networkArn" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The unique identifier of the network.
" + }, + "networkName":{ + "shape":"GenericString", + "documentation":"The name of the network.
" + }, + "accessLevel":{ + "shape":"AccessLevel", + "documentation":"The access level of the network (STANDARD or PREMIUM), which determines available features and capabilities.
" + }, + "awsAccountId":{ + "shape":"GenericString", + "documentation":"The Amazon Web Services account ID that owns the network.
" + }, + "networkArn":{ + "shape":"GenericString", + "documentation":"The Amazon Resource Name (ARN) of the network.
" + }, + "standing":{ + "shape":"Integer", + "documentation":"The current standing or status of the network.
" + }, + "freeTrialExpiration":{ + "shape":"GenericString", + "documentation":"The expiration date and time for the network's free trial period, if applicable.
" + }, + "migrationState":{ + "shape":"Integer", + "documentation":"The SSO redirect URI migration state, managed by the SSO redirect migration wizard. Values: 0 (not started), 1 (in progress), or 2 (completed).
" + }, + "encryptionKeyArn":{ + "shape":"GenericString", + "documentation":"The ARN of the Amazon Web Services KMS customer managed key used for encrypting sensitive data in the network.
" + } + } + }, + "GetNetworkSettingsRequest":{ + "type":"structure", + "required":["networkId"], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network whose settings will be retrieved.
", + "location":"uri", + "locationName":"networkId" + } + } + }, + "GetNetworkSettingsResponse":{ + "type":"structure", + "required":["settings"], + "members":{ + "settings":{ + "shape":"SettingsList", + "documentation":"A list of network settings, where each setting includes a name, value, and type.
" + } + } + }, + "GetOidcInfoRequest":{ + "type":"structure", + "required":["networkId"], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network whose OIDC configuration will be retrieved.
", + "location":"uri", + "locationName":"networkId" + }, + "clientId":{ + "shape":"GenericString", + "documentation":"The OAuth client ID for retrieving access tokens (optional).
", + "location":"querystring", + "locationName":"clientId" + }, + "code":{ + "shape":"GenericString", + "documentation":"The authorization code for retrieving access tokens (optional).
", + "location":"querystring", + "locationName":"code" + }, + "grantType":{ + "shape":"GenericString", + "documentation":"The OAuth grant type for retrieving access tokens (optional).
", + "location":"querystring", + "locationName":"grantType" + }, + "redirectUri":{ + "shape":"GenericString", + "documentation":"The redirect URI for the OAuth flow (optional).
", + "location":"querystring", + "locationName":"redirectUri" + }, + "url":{ + "shape":"GenericString", + "documentation":"The URL for the OIDC provider (optional).
", + "location":"querystring", + "locationName":"url" + }, + "clientSecret":{ + "shape":"SensitiveString", + "documentation":"The OAuth client secret for retrieving access tokens (optional).
", + "location":"querystring", + "locationName":"clientSecret" + }, + "codeVerifier":{ + "shape":"GenericString", + "documentation":"The PKCE code verifier for enhanced security in the OAuth flow (optional).
", + "location":"querystring", + "locationName":"codeVerifier" + }, + "certificate":{ + "shape":"GenericString", + "documentation":"The CA certificate for secure communication with the OIDC provider (optional).
", + "location":"querystring", + "locationName":"certificate" + } + } + }, + "GetOidcInfoResponse":{ + "type":"structure", + "members":{ + "openidConnectInfo":{ + "shape":"OidcConfigInfo", + "documentation":"The OpenID Connect configuration information for the network, including issuer, client ID, scopes, and other SSO settings.
" + }, + "tokenInfo":{ + "shape":"OidcTokenInfo", + "documentation":"OAuth token information including access token, refresh token, and expiration details (only present if token parameters were provided in the request).
" + } + } + }, + "GetSecurityGroupRequest":{ + "type":"structure", + "required":[ + "networkId", + "groupId" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network containing the security group.
", + "location":"uri", + "locationName":"networkId" + }, + "groupId":{ + "shape":"GenericString", + "documentation":"The unique identifier of the security group to retrieve.
", + "location":"uri", + "locationName":"groupId" + } + } + }, + "GetSecurityGroupResponse":{ + "type":"structure", + "required":["securityGroup"], + "members":{ + "securityGroup":{ + "shape":"SecurityGroup", + "documentation":"The detailed information about the security group, including all its settings and member counts.
" + } + } + }, + "GetUserRequest":{ + "type":"structure", + "required":[ + "networkId", + "userId" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network containing the user.
", + "location":"uri", + "locationName":"networkId" + }, + "userId":{ + "shape":"UserId", + "documentation":"The unique identifier of the user to retrieve.
", + "location":"uri", + "locationName":"userId" + }, + "startTime":{ + "shape":"SyntheticTimestamp_epoch_seconds", + "documentation":"The start time for filtering the user's last activity. Only activity after this timestamp will be considered. Time is specified in epoch seconds.
", + "location":"querystring", + "locationName":"startTime" + }, + "endTime":{ + "shape":"SyntheticTimestamp_epoch_seconds", + "documentation":"The end time for filtering the user's last activity. Only activity before this timestamp will be considered. Time is specified in epoch seconds.
", + "location":"querystring", + "locationName":"endTime" + } + } + }, + "GetUserResponse":{ + "type":"structure", + "required":["userId"], + "members":{ + "userId":{ + "shape":"UserId", + "documentation":"The unique identifier of the user.
" + }, + "firstName":{ + "shape":"SensitiveString", + "documentation":"The first name of the user.
" + }, + "lastName":{ + "shape":"SensitiveString", + "documentation":"The last name of the user.
" + }, + "username":{ + "shape":"GenericString", + "documentation":"The email address or username of the user.
" + }, + "isAdmin":{ + "shape":"Boolean", + "documentation":"Indicates whether the user has administrator privileges in the network.
" + }, + "suspended":{ + "shape":"Boolean", + "documentation":"Indicates whether the user is currently suspended.
" + }, + "status":{ + "shape":"Integer", + "documentation":"The current status of the user (1 for pending, 2 for active).
" + }, + "lastActivity":{ + "shape":"Integer", + "documentation":"The timestamp of the user's last activity in the network, specified in epoch seconds.
" + }, + "lastLogin":{ + "shape":"Integer", + "documentation":"The timestamp of the user's last login to the network, specified in epoch seconds.
" + }, + "securityGroupIds":{ + "shape":"SecurityGroupIdList", + "documentation":"A list of security group IDs to which the user belongs.
" + } + } + }, + "GetUsersCountRequest":{ + "type":"structure", + "required":["networkId"], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network for which to retrieve user counts.
", + "location":"uri", + "locationName":"networkId" + } + } + }, + "GetUsersCountResponse":{ + "type":"structure", + "required":[ + "pending", + "active", + "rejected", + "remaining", + "total" + ], + "members":{ + "pending":{ + "shape":"Integer", + "documentation":"The number of users with pending status (invited but not yet accepted).
" + }, + "active":{ + "shape":"Integer", + "documentation":"The number of users with active status in the network.
" + }, + "rejected":{ + "shape":"Integer", + "documentation":"The number of users who have rejected network invitations.
" + }, + "remaining":{ + "shape":"Integer", + "documentation":"The number of additional users that can be added to the network while maintaining premium free trial eligibility.
" + }, + "total":{ + "shape":"Integer", + "documentation":"The total number of users in the network (active and pending combined).
" + } + } + }, + "GuestUser":{ + "type":"structure", + "required":[ + "billingPeriod", + "username", + "usernameHash" + ], + "members":{ + "billingPeriod":{ + "shape":"GenericString", + "documentation":"The billing period when this guest user accessed the network (e.g., '2024-01').
" + }, + "username":{ + "shape":"GenericString", + "documentation":"The username of the guest user.
" + }, + "usernameHash":{ + "shape":"GenericString", + "documentation":"The unique username hash identifier for the guest user.
" + } + }, + "documentation":"Represents a guest user who has accessed the network from a federated Wickr network.
" + }, + "GuestUserHistoryCount":{ + "type":"structure", + "required":[ + "month", + "count" + ], + "members":{ + "month":{ + "shape":"GenericString", + "documentation":"The month and billing period in YYYY_MM format (e.g., '2024_01').
" + }, + "count":{ + "shape":"GenericString", + "documentation":"The number of guest users who have communicated with your Wickr network during this billing period.
" + } + }, + "documentation":"Contains the count of guest users for a specific billing period, used for tracking historical guest user activity.
" + }, + "GuestUserHistoryCountList":{ + "type":"list", + "member":{"shape":"GuestUserHistoryCount"} + }, + "GuestUserList":{ + "type":"list", + "member":{"shape":"GuestUser"} + }, + "Integer":{ + "type":"integer", + "box":true + }, + "InternalServerError":{ + "type":"structure", + "required":["message"], + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message describing the internal server error that occurred.
" + } + }, + "documentation":"An unexpected error occurred on the server while processing the request. This indicates a problem with the Wickr service itself rather than with the request. If this error persists, contact Amazon Web Services Support.
", + "error":{"httpStatusCode":500}, + "exception":true, + "fault":true + }, + "ListBlockedGuestUsersRequest":{ + "type":"structure", + "required":["networkId"], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network from which to list blocked guest users.
", + "location":"uri", + "locationName":"networkId" + }, + "maxResults":{ + "shape":"Integer", + "documentation":"The maximum number of blocked guest users to return in a single page. Valid range is 1-100. Default is 10.
", + "location":"querystring", + "locationName":"maxResults" + }, + "sortDirection":{ + "shape":"SortDirection", + "documentation":"The direction to sort results. Valid values are 'ASC' (ascending) or 'DESC' (descending). Default is 'DESC'.
", + "location":"querystring", + "locationName":"sortDirection" + }, + "sortFields":{ + "shape":"GenericString", + "documentation":"The field to sort blocked guest users by. Accepted values include 'username', 'admin', and 'modified'.
", + "location":"querystring", + "locationName":"sortFields" + }, + "username":{ + "shape":"GenericString", + "documentation":"Filter results to only include blocked guest users with usernames matching this value.
", + "location":"querystring", + "locationName":"username" + }, + "admin":{ + "shape":"GenericString", + "documentation":"Filter results to only include blocked guest users that were blocked by this administrator.
", + "location":"querystring", + "locationName":"admin" + }, + "nextToken":{ + "shape":"GenericString", + "documentation":"The token for retrieving the next page of results. This is returned from a previous request when there are more results available.
", + "location":"querystring", + "locationName":"nextToken" + } + } + }, + "ListBlockedGuestUsersResponse":{ + "type":"structure", + "required":["blocklist"], + "members":{ + "nextToken":{ + "shape":"GenericString", + "documentation":"The token to use for retrieving the next page of results. If this is not present, there are no more results.
" + }, + "blocklist":{ + "shape":"BlockedGuestUserList", + "documentation":"A list of blocked guest user objects within the current page.
" + } + } + }, + "ListBotsRequest":{ + "type":"structure", + "required":["networkId"], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network from which to list bots.
", + "location":"uri", + "locationName":"networkId" + }, + "nextToken":{ + "shape":"GenericString", + "documentation":"The token for retrieving the next page of results. This is returned from a previous request when there are more results available.
", + "location":"querystring", + "locationName":"nextToken" + }, + "maxResults":{ + "shape":"Integer", + "documentation":"The maximum number of bots to return in a single page. Valid range is 1-100. Default is 10.
", + "location":"querystring", + "locationName":"maxResults" + }, + "sortFields":{ + "shape":"GenericString", + "documentation":"The fields to sort bots by. Multiple fields can be specified by separating them with '+'. Accepted values include 'username', 'firstName', 'displayName', 'status', and 'groupId'.
", + "location":"querystring", + "locationName":"sortFields" + }, + "sortDirection":{ + "shape":"SortDirection", + "documentation":"The direction to sort results. Valid values are 'ASC' (ascending) or 'DESC' (descending). Default is 'DESC'.
", + "location":"querystring", + "locationName":"sortDirection" + }, + "displayName":{ + "shape":"GenericString", + "documentation":"Filter results to only include bots with display names matching this value.
", + "location":"querystring", + "locationName":"displayName" + }, + "username":{ + "shape":"GenericString", + "documentation":"Filter results to only include bots with usernames matching this value.
", + "location":"querystring", + "locationName":"username" + }, + "status":{ + "shape":"BotStatus", + "documentation":"Filter results to only include bots with this status (1 for pending, 2 for active).
", + "location":"querystring", + "locationName":"status" + }, + "groupId":{ + "shape":"GenericString", + "documentation":"Filter results to only include bots belonging to this security group.
", + "location":"querystring", + "locationName":"groupId" + } + } + }, + "ListBotsResponse":{ + "type":"structure", + "required":["bots"], + "members":{ + "bots":{ + "shape":"Bots", + "documentation":"A list of bot objects matching the specified filters and within the current page.
" + }, + "nextToken":{ + "shape":"GenericString", + "documentation":"The token to use for retrieving the next page of results. If this is not present, there are no more results.
" + } + } + }, + "ListDevicesForUserRequest":{ + "type":"structure", + "required":[ + "networkId", + "userId" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network containing the user.
", + "location":"uri", + "locationName":"networkId" + }, + "userId":{ + "shape":"UserId", + "documentation":"The unique identifier of the user whose devices will be listed.
", + "location":"uri", + "locationName":"userId" + }, + "nextToken":{ + "shape":"GenericString", + "documentation":"The token for retrieving the next page of results. This is returned from a previous request when there are more results available.
", + "location":"querystring", + "locationName":"nextToken" + }, + "maxResults":{ + "shape":"Integer", + "documentation":"The maximum number of devices to return in a single page. Valid range is 1-100. Default is 10.
", + "location":"querystring", + "locationName":"maxResults" + }, + "sortFields":{ + "shape":"GenericString", + "documentation":"The fields to sort devices by. Multiple fields can be specified by separating them with '+'. Accepted values include 'lastlogin', 'type', 'suspend', and 'created'.
", + "location":"querystring", + "locationName":"sortFields" + }, + "sortDirection":{ + "shape":"SortDirection", + "documentation":"The direction to sort results. Valid values are 'ASC' (ascending) or 'DESC' (descending). Default is 'DESC'.
", + "location":"querystring", + "locationName":"sortDirection" + } + } + }, + "ListDevicesForUserResponse":{ + "type":"structure", + "required":["devices"], + "members":{ + "nextToken":{ + "shape":"GenericString", + "documentation":"The token to use for retrieving the next page of results. If this is not present, there are no more results.
" + }, + "devices":{ + "shape":"Devices", + "documentation":"A list of device objects associated with the user within the current page.
" + } + } + }, + "ListGuestUsersRequest":{ + "type":"structure", + "required":["networkId"], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network from which to list guest users.
", + "location":"uri", + "locationName":"networkId" + }, + "maxResults":{ + "shape":"Integer", + "documentation":"The maximum number of guest users to return in a single page. Valid range is 1-100. Default is 10.
", + "location":"querystring", + "locationName":"maxResults" + }, + "sortDirection":{ + "shape":"SortDirection", + "documentation":"The direction to sort results. Valid values are 'ASC' (ascending) or 'DESC' (descending). Default is 'DESC'.
", + "location":"querystring", + "locationName":"sortDirection" + }, + "sortFields":{ + "shape":"GenericString", + "documentation":"The field to sort guest users by. Accepted values include 'username' and 'billingPeriod'.
", + "location":"querystring", + "locationName":"sortFields" + }, + "username":{ + "shape":"GenericString", + "documentation":"Filter results to only include guest users with usernames matching this value.
", + "location":"querystring", + "locationName":"username" + }, + "billingPeriod":{ + "shape":"GenericString", + "documentation":"Filter results to only include guest users from this billing period (e.g., '2024-01').
", + "location":"querystring", + "locationName":"billingPeriod" + }, + "nextToken":{ + "shape":"GenericString", + "documentation":"The token for retrieving the next page of results. This is returned from a previous request when there are more results available.
", + "location":"querystring", + "locationName":"nextToken" + } + } + }, + "ListGuestUsersResponse":{ + "type":"structure", + "required":["guestlist"], + "members":{ + "nextToken":{ + "shape":"GenericString", + "documentation":"The token to use for retrieving the next page of results. If this is not present, there are no more results.
" + }, + "guestlist":{ + "shape":"GuestUserList", + "documentation":"A list of guest user objects within the current page.
" + } + } + }, + "ListNetworksRequest":{ + "type":"structure", + "members":{ + "maxResults":{ + "shape":"Integer", + "documentation":"The maximum number of networks to return in a single page. Valid range is 1-100. Default is 10.
", + "location":"querystring", + "locationName":"maxResults" + }, + "sortFields":{ + "shape":"GenericString", + "documentation":"The field to sort networks by. Accepted values are 'networkId' and 'networkName'. Default is 'networkId'.
", + "location":"querystring", + "locationName":"sortFields" + }, + "sortDirection":{ + "shape":"SortDirection", + "documentation":"The direction to sort results. Valid values are 'ASC' (ascending) or 'DESC' (descending). Default is 'DESC'.
", + "location":"querystring", + "locationName":"sortDirection" + }, + "nextToken":{ + "shape":"GenericString", + "documentation":"The token for retrieving the next page of results. This is returned from a previous request when there are more results available.
", + "location":"querystring", + "locationName":"nextToken" + } + } + }, + "ListNetworksResponse":{ + "type":"structure", + "required":["networks"], + "members":{ + "networks":{ + "shape":"NetworkList", + "documentation":"A list of network objects for the Amazon Web Services account.
" + }, + "nextToken":{ + "shape":"GenericString", + "documentation":"The token to use for retrieving the next page of results. If this is not present, there are no more results.
" + } + } + }, + "ListSecurityGroupUsersRequest":{ + "type":"structure", + "required":[ + "networkId", + "groupId" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network containing the security group.
", + "location":"uri", + "locationName":"networkId" + }, + "groupId":{ + "shape":"GenericString", + "documentation":"The unique identifier of the security group whose users will be listed.
", + "location":"uri", + "locationName":"groupId" + }, + "nextToken":{ + "shape":"GenericString", + "documentation":"The token for retrieving the next page of results. This is returned from a previous request when there are more results available.
", + "location":"querystring", + "locationName":"nextToken" + }, + "maxResults":{ + "shape":"Integer", + "documentation":"The maximum number of users to return in a single page. Valid range is 1-100. Default is 10.
", + "location":"querystring", + "locationName":"maxResults" + }, + "sortFields":{ + "shape":"GenericString", + "documentation":"The field to sort users by. Multiple fields can be specified by separating them with '+'. Accepted values include 'username', 'firstName', and 'lastName'.
", + "location":"querystring", + "locationName":"sortFields" + }, + "sortDirection":{ + "shape":"SortDirection", + "documentation":"The direction to sort results. Valid values are 'ASC' (ascending) or 'DESC' (descending). Default is 'DESC'.
", + "location":"querystring", + "locationName":"sortDirection" + } + } + }, + "ListSecurityGroupUsersResponse":{ + "type":"structure", + "required":["users"], + "members":{ + "users":{ + "shape":"Users", + "documentation":"A list of user objects belonging to the security group within the current page.
" + }, + "nextToken":{ + "shape":"GenericString", + "documentation":"The token to use for retrieving the next page of results. If this is not present, there are no more results.
" + } + } + }, + "ListSecurityGroupsRequest":{ + "type":"structure", + "required":["networkId"], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network from which to list security groups.
", + "location":"uri", + "locationName":"networkId" + }, + "nextToken":{ + "shape":"GenericString", + "documentation":"The token for retrieving the next page of results. This is returned from a previous request when there are more results available.
", + "location":"querystring", + "locationName":"nextToken" + }, + "maxResults":{ + "shape":"Integer", + "documentation":"The maximum number of security groups to return in a single page. Valid range is 1-100. Default is 10.
", + "location":"querystring", + "locationName":"maxResults" + }, + "sortFields":{ + "shape":"GenericString", + "documentation":"The field to sort security groups by. Accepted values include 'id' and 'name'.
", + "location":"querystring", + "locationName":"sortFields" + }, + "sortDirection":{ + "shape":"SortDirection", + "documentation":"The direction to sort results. Valid values are 'ASC' (ascending) or 'DESC' (descending). Default is 'DESC'.
", + "location":"querystring", + "locationName":"sortDirection" + } + } + }, + "ListSecurityGroupsResponse":{ + "type":"structure", + "members":{ + "securityGroups":{ + "shape":"SecurityGroupList", + "documentation":"A list of security group objects in the current page.
" + }, + "nextToken":{ + "shape":"GenericString", + "documentation":"The token to use for retrieving the next page of results. If this is not present, there are no more results.
" + } + } + }, + "ListUsersRequest":{ + "type":"structure", + "required":["networkId"], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network from which to list users.
", + "location":"uri", + "locationName":"networkId" + }, + "nextToken":{ + "shape":"GenericString", + "documentation":"The token for retrieving the next page of results. This is returned from a previous request when there are more results available.
", + "location":"querystring", + "locationName":"nextToken" + }, + "maxResults":{ + "shape":"Integer", + "documentation":"The maximum number of users to return in a single page. Valid range is 1-100. Default is 10.
", + "location":"querystring", + "locationName":"maxResults" + }, + "sortFields":{ + "shape":"GenericString", + "documentation":"The fields to sort users by. Multiple fields can be specified by separating them with '+'. Accepted values include 'username', 'firstName', 'lastName', 'status', and 'groupId'.
", + "location":"querystring", + "locationName":"sortFields" + }, + "sortDirection":{ + "shape":"SortDirection", + "documentation":"The direction to sort results. Valid values are 'ASC' (ascending) or 'DESC' (descending). Default is 'DESC'.
", + "location":"querystring", + "locationName":"sortDirection" + }, + "firstName":{ + "shape":"SensitiveString", + "documentation":"Filter results to only include users with first names matching this value.
", + "location":"querystring", + "locationName":"firstName" + }, + "lastName":{ + "shape":"SensitiveString", + "documentation":"Filter results to only include users with last names matching this value.
", + "location":"querystring", + "locationName":"lastName" + }, + "username":{ + "shape":"GenericString", + "documentation":"Filter results to only include users with usernames matching this value.
", + "location":"querystring", + "locationName":"username" + }, + "status":{ + "shape":"UserStatus", + "documentation":"Filter results to only include users with this status (1 for pending, 2 for active).
", + "location":"querystring", + "locationName":"status" + }, + "groupId":{ + "shape":"GenericString", + "documentation":"Filter results to only include users belonging to this security group.
", + "location":"querystring", + "locationName":"groupId" + } + } + }, + "ListUsersResponse":{ + "type":"structure", + "members":{ + "nextToken":{ + "shape":"GenericString", + "documentation":"The token to use for retrieving the next page of results. If this is not present, there are no more results.
" + }, + "users":{ + "shape":"Users", + "documentation":"A list of user objects matching the specified filters and within the current page.
" + } + } + }, + "Long":{ + "type":"long", + "box":true + }, + "Network":{ + "type":"structure", + "required":[ + "networkId", + "networkName", + "accessLevel", + "awsAccountId", + "networkArn" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The unique identifier of the network.
" + }, + "networkName":{ + "shape":"GenericString", + "documentation":"The name of the network.
" + }, + "accessLevel":{ + "shape":"AccessLevel", + "documentation":"The access level of the network (STANDARD or PREMIUM), which determines available features and capabilities.
" + }, + "awsAccountId":{ + "shape":"GenericString", + "documentation":"The Amazon Web Services account ID that owns the network.
" + }, + "networkArn":{ + "shape":"GenericString", + "documentation":"The Amazon Resource Name (ARN) of the network.
" + }, + "standing":{ + "shape":"Integer", + "documentation":"The current standing or status of the network.
" + }, + "freeTrialExpiration":{ + "shape":"GenericString", + "documentation":"The expiration date and time for the network's free trial period, if applicable.
" + }, + "migrationState":{ + "shape":"Integer", + "documentation":"The SSO redirect URI migration state, managed by the SSO redirect migration wizard. Values: 0 (not started), 1 (in progress), or 2 (completed).
" + }, + "encryptionKeyArn":{ + "shape":"GenericString", + "documentation":"The ARN of the Amazon Web Services KMS customer managed key used for encrypting sensitive data in the network.
" + } + }, + "documentation":"Represents a Wickr network with all its configuration and status information.
" + }, + "NetworkId":{ + "type":"string", + "max":8, + "min":8, + "pattern":"[0-9]{8}" + }, + "NetworkList":{ + "type":"list", + "member":{"shape":"Network"} + }, + "NetworkSettings":{ + "type":"structure", + "members":{ + "enableClientMetrics":{ + "shape":"Boolean", + "documentation":"Allows Wickr clients to send anonymized performance and usage metrics to the Wickr backend server for service improvement and troubleshooting.
" + }, + "readReceiptConfig":{ + "shape":"ReadReceiptConfig", + "documentation":"Configuration for read receipts at the network level, controlling the default behavior for whether senders can see when their messages have been read.
" + }, + "dataRetention":{ + "shape":"Boolean", + "documentation":"Indicates whether the data retention feature is enabled for the network. When true, messages are captured by the data retention bot for compliance and archiving purposes.
" + } + }, + "documentation":"Contains network-level configuration settings that apply to all users and security groups within a Wickr network.
" + }, + "OidcConfigInfo":{ + "type":"structure", + "required":[ + "companyId", + "scopes", + "issuer" + ], + "members":{ + "applicationName":{ + "shape":"GenericString", + "documentation":"The name of the OIDC application as registered with the identity provider.
" + }, + "clientId":{ + "shape":"GenericString", + "documentation":"The OAuth client ID assigned by the identity provider for authentication requests.
" + }, + "companyId":{ + "shape":"GenericString", + "documentation":"Custom identifier your end users will use to sign in with SSO.
" + }, + "scopes":{ + "shape":"GenericString", + "documentation":"The OAuth scopes requested from the identity provider, which determine what user information is accessible (e.g., 'openid profile email').
" + }, + "issuer":{ + "shape":"GenericString", + "documentation":"The issuer URL of the identity provider, which serves as the base URL for OIDC endpoints and configuration discovery.
" + }, + "clientSecret":{ + "shape":"SensitiveString", + "documentation":"The OAuth client secret used to authenticate the application with the identity provider.
" + }, + "secret":{ + "shape":"SensitiveString", + "documentation":"An additional secret credential used by the identity provider for authentication.
" + }, + "redirectUrl":{ + "shape":"GenericString", + "documentation":"The callback URL where the identity provider redirects users after successful authentication. This URL must be registered with the identity provider.
" + }, + "userId":{ + "shape":"GenericString", + "documentation":"The claim field from the OIDC token to use as the unique user identifier (e.g., 'email', 'sub', or a custom claim).
" + }, + "customUsername":{ + "shape":"GenericString", + "documentation":"A custom field mapping to extract the username from the OIDC token when the standard username claim is insufficient.
" + }, + "caCertificate":{ + "shape":"GenericString", + "documentation":"The X.509 CA certificate for validating SSL/TLS connections to the identity provider when using self-signed or enterprise certificates.
" + }, + "applicationId":{ + "shape":"OidcConfigInfoApplicationIdInteger", + "documentation":"The unique identifier for the registered OIDC application. Valid range is 1-10.
" + }, + "ssoTokenBufferMinutes":{ + "shape":"Integer", + "documentation":"The grace period in minutes before the SSO token expires when the system should proactively refresh the token to maintain seamless user access.
" + }, + "extraAuthParams":{ + "shape":"GenericString", + "documentation":"Additional authentication parameters to include in the OIDC authorization request as a query string. Useful for provider-specific extensions.
" + } + }, + "documentation":"Contains the OpenID Connect (OIDC) configuration information for Single Sign-On (SSO) authentication, including identity provider settings and client credentials.
" + }, + "OidcConfigInfoApplicationIdInteger":{ + "type":"integer", + "box":true, + "max":10, + "min":1 + }, + "OidcTokenInfo":{ + "type":"structure", + "members":{ + "codeVerifier":{ + "shape":"GenericString", + "documentation":"The PKCE (Proof Key for Code Exchange) code verifier, a cryptographically random string used to enhance security in the OAuth flow.
" + }, + "codeChallenge":{ + "shape":"GenericString", + "documentation":"The PKCE code challenge, a transformed version of the code verifier sent during the authorization request for verification.
" + }, + "accessToken":{ + "shape":"GenericString", + "documentation":"The OAuth access token that can be used to access protected resources on behalf of the authenticated user.
" + }, + "idToken":{ + "shape":"GenericString", + "documentation":"The OpenID Connect ID token containing user identity information and authentication context as a signed JWT.
" + }, + "refreshToken":{ + "shape":"GenericString", + "documentation":"The OAuth refresh token that can be used to obtain new access tokens without requiring the user to re-authenticate.
" + }, + "tokenType":{ + "shape":"GenericString", + "documentation":"The type of access token issued, typically 'Bearer', which indicates how the token should be used in API requests.
" + }, + "expiresIn":{ + "shape":"Long", + "documentation":"The lifetime of the access token in seconds, indicating when the token will expire and need to be refreshed.
" + } + }, + "documentation":"Contains OAuth token information returned from the identity provider, including access tokens, ID tokens, and PKCE parameters used for secure authentication.
" + }, + "PasswordRequirements":{ + "type":"structure", + "members":{ + "lowercase":{ + "shape":"Integer", + "documentation":"The minimum number of lowercase letters required in passwords.
" + }, + "minLength":{ + "shape":"Integer", + "documentation":"The minimum password length in characters.
" + }, + "numbers":{ + "shape":"Integer", + "documentation":"The minimum number of numeric characters required in passwords.
" + }, + "symbols":{ + "shape":"Integer", + "documentation":"The minimum number of special symbol characters required in passwords.
" + }, + "uppercase":{ + "shape":"Integer", + "documentation":"The minimum number of uppercase letters required in passwords.
" + } + }, + "documentation":"Defines password complexity requirements for users in a security group, including minimum length and character type requirements.
" + }, + "PermittedNetworksList":{ + "type":"list", + "member":{"shape":"NetworkId"} + }, + "PermittedWickrEnterpriseNetwork":{ + "type":"structure", + "required":[ + "domain", + "networkId" + ], + "members":{ + "domain":{ + "shape":"GenericString", + "documentation":"The domain identifier for the permitted Wickr enterprise network.
" + }, + "networkId":{ + "shape":"NetworkId", + "documentation":"The network ID of the permitted Wickr enterprise network.
" + } + }, + "documentation":"Identifies a Wickr enterprise network that is permitted for global federation, allowing users to communicate with members of the specified network.
" + }, + "PermittedWickrEnterpriseNetworksList":{ + "type":"list", + "member":{"shape":"PermittedWickrEnterpriseNetwork"} + }, + "RateLimitError":{ + "type":"structure", + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message indicating that the rate limit was exceeded and suggesting when to retry.
" + } + }, + "documentation":"The request was throttled because too many requests were sent in a short period of time. Wait a moment and retry the request. Consider implementing exponential backoff in your application.
", + "error":{ + "httpStatusCode":429, + "senderFault":true + }, + "exception":true + }, + "ReadReceiptConfig":{ + "type":"structure", + "members":{ + "status":{ + "shape":"Status", + "documentation":"The read receipt status mode for the network.
" + } + }, + "documentation":"Configuration for read receipts at the network level, controlling whether senders can see when their messages have been read.
" + }, + "RegisterOidcConfigRequest":{ + "type":"structure", + "required":[ + "networkId", + "companyId", + "issuer", + "scopes" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network for which OIDC will be configured.
", + "location":"uri", + "locationName":"networkId" + }, + "companyId":{ + "shape":"GenericString", + "documentation":"Custom identifier your end users will use to sign in with SSO.
" + }, + "customUsername":{ + "shape":"GenericString", + "documentation":"A custom field mapping to extract the username from the OIDC token (optional).
The customUsername is only required if you use something other than email as the username field.
" + }, + "extraAuthParams":{ + "shape":"GenericString", + "documentation":"Additional authentication parameters to include in the OIDC flow (optional).
" + }, + "issuer":{ + "shape":"GenericString", + "documentation":"The issuer URL of the OIDC provider (e.g., 'https://login.example.com').
" + }, + "scopes":{ + "shape":"GenericString", + "documentation":"The OAuth scopes to request from the OIDC provider (e.g., 'openid profile email').
" + }, + "secret":{ + "shape":"SensitiveString", + "documentation":"The client secret for authenticating with the OIDC provider (optional).
" + }, + "ssoTokenBufferMinutes":{ + "shape":"Integer", + "documentation":"The buffer time in minutes before the SSO token expires to refresh it (optional).
" + }, + "userId":{ + "shape":"GenericString", + "documentation":"Unique identifier provided by your identity provider to authenticate the access request. Also referred to as clientID.
" + } + } + }, + "RegisterOidcConfigResponse":{ + "type":"structure", + "required":[ + "companyId", + "scopes", + "issuer" + ], + "members":{ + "applicationName":{ + "shape":"GenericString", + "documentation":"The name of the registered OIDC application.
" + }, + "clientId":{ + "shape":"GenericString", + "documentation":"The OAuth client ID assigned to the application.
" + }, + "companyId":{ + "shape":"GenericString", + "documentation":"Custom identifier your end users will use to sign in with SSO.
" + }, + "scopes":{ + "shape":"GenericString", + "documentation":"The OAuth scopes configured for the application.
" + }, + "issuer":{ + "shape":"GenericString", + "documentation":"The issuer URL of the OIDC provider.
" + }, + "clientSecret":{ + "shape":"SensitiveString", + "documentation":"The OAuth client secret for the application.
" + }, + "secret":{ + "shape":"SensitiveString", + "documentation":"The client secret for authenticating with the OIDC provider.
" + }, + "redirectUrl":{ + "shape":"GenericString", + "documentation":"The redirect URL configured for the OAuth flow.
" + }, + "userId":{ + "shape":"GenericString", + "documentation":"The claim field being used as the user identifier.
" + }, + "customUsername":{ + "shape":"GenericString", + "documentation":"The custom field mapping used for extracting the username.
" + }, + "caCertificate":{ + "shape":"GenericString", + "documentation":"The CA certificate used for secure communication with the OIDC provider.
" + }, + "applicationId":{ + "shape":"RegisterOidcConfigResponseApplicationIdInteger", + "documentation":"The unique identifier for the registered OIDC application.
" + }, + "ssoTokenBufferMinutes":{ + "shape":"Integer", + "documentation":"The buffer time in minutes before the SSO token expires.
" + }, + "extraAuthParams":{ + "shape":"GenericString", + "documentation":"The additional authentication parameters configured for the OIDC flow.
" + } + } + }, + "RegisterOidcConfigResponseApplicationIdInteger":{ + "type":"integer", + "box":true, + "max":10, + "min":1 + }, + "RegisterOidcConfigTestRequest":{ + "type":"structure", + "required":[ + "networkId", + "issuer", + "scopes" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network for which the OIDC configuration will be tested.
", + "location":"uri", + "locationName":"networkId" + }, + "extraAuthParams":{ + "shape":"GenericString", + "documentation":"Additional authentication parameters to include in the test (optional).
" + }, + "issuer":{ + "shape":"GenericString", + "documentation":"The issuer URL of the OIDC provider to test.
" + }, + "scopes":{ + "shape":"GenericString", + "documentation":"The OAuth scopes to test with the OIDC provider.
" + }, + "certificate":{ + "shape":"GenericString", + "documentation":"The CA certificate for secure communication with the OIDC provider (optional).
" + } + } + }, + "RegisterOidcConfigTestResponse":{ + "type":"structure", + "members":{ + "tokenEndpoint":{ + "shape":"GenericString", + "documentation":"The token endpoint URL discovered from the OIDC provider.
" + }, + "userinfoEndpoint":{ + "shape":"GenericString", + "documentation":"The user info endpoint URL discovered from the OIDC provider.
" + }, + "responseTypesSupported":{ + "shape":"StringList", + "documentation":"The OAuth response types supported by the OIDC provider.
" + }, + "scopesSupported":{ + "shape":"StringList", + "documentation":"The OAuth scopes supported by the OIDC provider.
" + }, + "issuer":{ + "shape":"GenericString", + "documentation":"The issuer URL confirmed by the OIDC provider.
" + }, + "authorizationEndpoint":{ + "shape":"GenericString", + "documentation":"The authorization endpoint URL discovered from the OIDC provider.
" + }, + "endSessionEndpoint":{ + "shape":"GenericString", + "documentation":"The end session endpoint URL for logging out users from the OIDC provider.
" + }, + "logoutEndpoint":{ + "shape":"GenericString", + "documentation":"The logout endpoint URL for terminating user sessions.
" + }, + "grantTypesSupported":{ + "shape":"StringList", + "documentation":"The OAuth grant types supported by the OIDC provider.
" + }, + "revocationEndpoint":{ + "shape":"GenericString", + "documentation":"The token revocation endpoint URL for invalidating tokens.
" + }, + "tokenEndpointAuthMethodsSupported":{ + "shape":"StringList", + "documentation":"The authentication methods supported by the token endpoint.
" + }, + "microsoftMultiRefreshToken":{ + "shape":"Boolean", + "documentation":"Indicates whether the provider supports Microsoft multi-refresh tokens.
" + } + } + }, + "ResourceNotFoundError":{ + "type":"structure", + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message identifying which resource was not found.
" + } + }, + "documentation":"The requested resource could not be found. This error occurs when you try to access or modify a network, user, bot, security group, or other resource that doesn't exist or has been deleted.
", + "error":{ + "httpStatusCode":404, + "senderFault":true + }, + "exception":true + }, + "SecurityGroup":{ + "type":"structure", + "required":[ + "activeMembers", + "botMembers", + "id", + "isDefault", + "name", + "modified", + "securityGroupSettings" + ], + "members":{ + "activeMembers":{ + "shape":"Integer", + "documentation":"The number of active user members currently in the security group.
" + }, + "botMembers":{ + "shape":"Integer", + "documentation":"The number of bot members currently in the security group.
" + }, + "activeDirectoryGuid":{ + "shape":"GenericString", + "documentation":"The GUID of the Active Directory group associated with this security group, if synchronized with LDAP.
" + }, + "id":{ + "shape":"GenericString", + "documentation":"The unique identifier of the security group.
" + }, + "isDefault":{ + "shape":"Boolean", + "documentation":"Indicates whether this is the default security group for the network. Each network has only one default group.
" + }, + "name":{ + "shape":"GenericString", + "documentation":"The human-readable name of the security group.
" + }, + "modified":{ + "shape":"Integer", + "documentation":"The timestamp when the security group was last modified, specified in epoch seconds.
" + }, + "securityGroupSettings":{ + "shape":"SecurityGroupSettings", + "documentation":"The comprehensive configuration settings that define capabilities and restrictions for members of this security group.
" + } + }, + "documentation":"Represents a security group in a Wickr network, containing membership statistics, configuration, and all permission settings that apply to its members.
" + }, + "SecurityGroupId":{ + "type":"string", + "pattern":"[\\S]+" + }, + "SecurityGroupIdList":{ + "type":"list", + "member":{"shape":"SecurityGroupId"} + }, + "SecurityGroupList":{ + "type":"list", + "member":{"shape":"SecurityGroup"} + }, + "SecurityGroupSettings":{ + "type":"structure", + "members":{ + "alwaysReauthenticate":{ + "shape":"Boolean", + "documentation":"Requires users to reauthenticate every time they return to the application, providing an additional layer of security.
" + }, + "atakPackageValues":{ + "shape":"SecurityGroupStringList", + "documentation":"Configuration values for ATAK (Android Team Awareness Kit) package integration, when ATAK is enabled.
" + }, + "calling":{ + "shape":"CallingSettings", + "documentation":"The calling feature permissions and settings that control what types of calls users can initiate and participate in.
" + }, + "checkForUpdates":{ + "shape":"Boolean", + "documentation":"Enables automatic checking for Wickr client updates to ensure users stay current with the latest version.
" + }, + "enableAtak":{ + "shape":"Boolean", + "documentation":"Enables ATAK (Android Team Awareness Kit) integration for tactical communication and situational awareness.
" + }, + "enableCrashReports":{ + "shape":"Boolean", + "documentation":"Allow users to report crashes.
" + }, + "enableFileDownload":{ + "shape":"Boolean", + "documentation":"Specifies whether users can download files from messages to their devices.
" + }, + "enableGuestFederation":{ + "shape":"Boolean", + "documentation":"Allows users to communicate with guest users from other Wickr networks and federated external networks.
" + }, + "enableNotificationPreview":{ + "shape":"Boolean", + "documentation":"Enables message preview text in push notifications, allowing users to see message content before opening the app.
" + }, + "enableOpenAccessOption":{ + "shape":"Boolean", + "documentation":"Allow users to avoid censorship when they are geo-blocked or have network limitations.
" + }, + "enableRestrictedGlobalFederation":{ + "shape":"Boolean", + "documentation":"Enables restricted global federation, limiting external communication to only specified permitted networks.
" + }, + "filesEnabled":{ + "shape":"Boolean", + "documentation":"Enables file sharing capabilities, allowing users to send and receive files in conversations.
" + }, + "forceDeviceLockout":{ + "shape":"Integer", + "documentation":"Defines the number of failed login attempts before data stored on the device is reset. Should be less than lockoutThreshold.
" + }, + "forceOpenAccess":{ + "shape":"Boolean", + "documentation":"Automatically enable and enforce Wickr open access on all devices. Valid only if enableOpenAccessOption settings is enabled.
" + }, + "forceReadReceipts":{ + "shape":"Boolean", + "documentation":"Allow user approved bots to read messages in rooms without using a slash command.
" + }, + "globalFederation":{ + "shape":"Boolean", + "documentation":"Allows users to communicate with users on other Wickr instances (Wickr Enterprise) outside the current network.
" + }, + "isAtoEnabled":{ + "shape":"Boolean", + "documentation":"Enforces a two-factor authentication when a user adds a new device to their account.
" + }, + "isLinkPreviewEnabled":{ + "shape":"Boolean", + "documentation":"Enables automatic preview of links shared in messages, showing webpage thumbnails and descriptions.
" + }, + "locationAllowMaps":{ + "shape":"Boolean", + "documentation":"Allows map integration in location sharing, enabling users to view shared locations on interactive maps. Only allowed when location setting is enabled.
" + }, + "locationEnabled":{ + "shape":"Boolean", + "documentation":"Enables location sharing features, allowing users to share their current location with others.
" + }, + "maxAutoDownloadSize":{ + "shape":"Long", + "documentation":"The maximum file size in bytes that will be automatically downloaded without user confirmation. Only allowed if fileDownload is enabled. Valid Values [512000 (low_quality), 7340032 (high_quality) ]
" + }, + "maxBor":{ + "shape":"Integer", + "documentation":"The maximum burn-on-read (BOR) time in seconds, which determines how long messages remain visible before auto-deletion after being read.
" + }, + "maxTtl":{ + "shape":"Long", + "documentation":"The maximum time-to-live (TTL) in seconds for messages, after which they will be automatically deleted from all devices.
" + }, + "messageForwardingEnabled":{ + "shape":"Boolean", + "documentation":"Enables message forwarding, allowing users to forward messages from one conversation to another.
" + }, + "passwordRequirements":{ + "shape":"PasswordRequirements", + "documentation":"The password complexity requirements that users must follow when creating or changing passwords.
" + }, + "presenceEnabled":{ + "shape":"Boolean", + "documentation":"Enables presence indicators that show whether users are online, away, or offline.
" + }, + "quickResponses":{ + "shape":"SecurityGroupStringList", + "documentation":"A list of pre-defined quick response message templates that users can send with a single tap.
" + }, + "showMasterRecoveryKey":{ + "shape":"Boolean", + "documentation":"Users will get a master recovery key that can be used to securely sign in to their Wickr account without having access to their primary device for authentication. Available in SSO enabled network.
" + }, + "shredder":{ + "shape":"ShredderSettings", + "documentation":"The message shredder configuration that controls secure deletion of messages and files from devices.
" + }, + "ssoMaxIdleMinutes":{ + "shape":"Integer", + "documentation":"The duration for which users SSO session remains inactive before automatically logging them out for security. Available in SSO enabled network.
" + }, + "federationMode":{ + "shape":"Integer", + "documentation":"The local federation mode controlling how users can communicate with other networks. Values: 0 (none), 1 (federated), 2 (restricted).
" + }, + "lockoutThreshold":{ + "shape":"Integer", + "documentation":"The number of failed password attempts before a user account is locked out.
" + }, + "permittedNetworks":{ + "shape":"PermittedNetworksList", + "documentation":"A list of network IDs that are permitted for local federation when federation mode is set to restricted.
" + }, + "permittedWickrAwsNetworks":{ + "shape":"WickrAwsNetworksList", + "documentation":"A list of permitted Wickr networks for global federation, restricting communication to specific approved networks.
" + }, + "permittedWickrEnterpriseNetworks":{ + "shape":"PermittedWickrEnterpriseNetworksList", + "documentation":"A list of permitted Wickr Enterprise networks for global federation, restricting communication to specific approved networks.
" + } + }, + "documentation":"Comprehensive configuration settings that define all user capabilities, restrictions, and features for members of a security group. These settings control everything from calling permissions to federation settings to security policies.
" + }, + "SecurityGroupSettingsRequest":{ + "type":"structure", + "members":{ + "lockoutThreshold":{ + "shape":"Integer", + "documentation":"The number of failed password attempts before a user account is locked out.
" + }, + "permittedNetworks":{ + "shape":"PermittedNetworksList", + "documentation":"A list of network IDs that are permitted for local federation when federation mode is set to restricted.
" + }, + "enableGuestFederation":{ + "shape":"Boolean", + "documentation":"Guest users let you work with people outside your organization that only have limited access to Wickr. Only valid when federationMode is set to Global.
" + }, + "globalFederation":{ + "shape":"Boolean", + "documentation":"Allow users to securely federate with all Amazon Web Services Wickr networks and Amazon Web Services Enterprise networks.
" + }, + "federationMode":{ + "shape":"Integer", + "documentation":"The local federation mode. Values: 0 (none), 1 (federated - all networks), 2 (restricted - only permitted networks).
" + }, + "enableRestrictedGlobalFederation":{ + "shape":"Boolean", + "documentation":"Enables restricted global federation to limit communication to specific permitted networks only. Requires globalFederation to be enabled.
" + }, + "permittedWickrAwsNetworks":{ + "shape":"WickrAwsNetworksList", + "documentation":"A list of permitted Amazon Web Services Wickr networks for restricted global federation.
" + }, + "permittedWickrEnterpriseNetworks":{ + "shape":"PermittedWickrEnterpriseNetworksList", + "documentation":"A list of permitted Wickr Enterprise networks for restricted global federation.
" + } + }, + "documentation":"Contains the security group configuration settings that can be specified when creating or updating a security group. This is a subset of SecurityGroupSettings containing only the modifiable federation and security settings.
" + }, + "SecurityGroupStringList":{ + "type":"list", + "member":{"shape":"GenericString"} + }, + "SensitiveString":{ + "type":"string", + "pattern":"[\\S\\s]*", + "sensitive":true + }, + "Setting":{ + "type":"structure", + "required":[ + "optionName", + "value", + "type" + ], + "members":{ + "optionName":{ + "shape":"GenericString", + "documentation":"The name of the network setting (e.g., 'enableClientMetrics', 'dataRetention').
" + }, + "value":{ + "shape":"GenericString", + "documentation":"The current value of the setting as a string. Boolean values are represented as 'true' or 'false'.
" + }, + "type":{ + "shape":"GenericString", + "documentation":"The data type of the setting value (e.g., 'boolean', 'string', 'number').
" + } + }, + "documentation":"Represents a single network-level configuration setting with its name, value, and data type. Settings control network-wide behaviors and features.
" + }, + "SettingsList":{ + "type":"list", + "member":{"shape":"Setting"} + }, + "ShredderSettings":{ + "type":"structure", + "members":{ + "canProcessManually":{ + "shape":"Boolean", + "documentation":"Specifies whether users can manually trigger the shredder to delete content.
" + }, + "intensity":{ + "shape":"Integer", + "documentation":"Prevents Wickr data from being recovered by overwriting deleted Wickr data. Valid Values: Must be one of [0, 20, 60, 100]
" + } + }, + "documentation":"Configuration for the message shredder feature, which securely deletes messages and files from devices to prevent data recovery.
" + }, + "SortDirection":{ + "type":"string", + "enum":[ + "ASC", + "DESC" + ] + }, + "Status":{ + "type":"string", + "enum":[ + "DISABLED", + "ENABLED", + "FORCE_ENABLED" + ] + }, + "StringList":{ + "type":"list", + "member":{"shape":"GenericString"} + }, + "SyntheticTimestamp_epoch_seconds":{ + "type":"timestamp", + "timestampFormat":"unixTimestamp" + }, + "Uname":{"type":"string"}, + "Unames":{ + "type":"list", + "member":{"shape":"GenericString"} + }, + "UnauthorizedError":{ + "type":"structure", + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message explaining why the authentication failed.
" + } + }, + "documentation":"The request was not authenticated or the authentication credentials were invalid. This error occurs when the request lacks valid authentication credentials or the credentials have expired.
", + "error":{ + "httpStatusCode":401, + "senderFault":true + }, + "exception":true + }, + "UpdateBotRequest":{ + "type":"structure", + "required":[ + "networkId", + "botId" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network containing the bot to update.
", + "location":"uri", + "locationName":"networkId" + }, + "botId":{ + "shape":"BotId", + "documentation":"The unique identifier of the bot to update.
", + "location":"uri", + "locationName":"botId" + }, + "displayName":{ + "shape":"GenericString", + "documentation":"The new display name for the bot.
" + }, + "groupId":{ + "shape":"GenericString", + "documentation":"The ID of the new security group to assign the bot to.
" + }, + "challenge":{ + "shape":"SensitiveString", + "documentation":"The new password for the bot account.
" + }, + "suspend":{ + "shape":"Boolean", + "documentation":"Set to true to suspend the bot or false to unsuspend it. Omit this field for standard updates that don't affect suspension status.
" + } + } + }, + "UpdateBotResponse":{ + "type":"structure", + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message indicating the result of the bot update operation.
" + } + } + }, + "UpdateDataRetentionRequest":{ + "type":"structure", + "required":[ + "networkId", + "actionType" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network containing the data retention bot.
", + "location":"uri", + "locationName":"networkId" + }, + "actionType":{ + "shape":"DataRetentionActionType", + "documentation":"The action to perform. Valid values are 'ENABLE' (to enable the data retention service), 'DISABLE' (to disable the service), or 'PUBKEY_MSG_ACK' (to acknowledge the public key message).
" + } + } + }, + "UpdateDataRetentionResponse":{ + "type":"structure", + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message indicating the result of the update operation.
" + } + } + }, + "UpdateGuestUserRequest":{ + "type":"structure", + "required":[ + "networkId", + "usernameHash", + "block" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network where the guest user's status will be updated.
", + "location":"uri", + "locationName":"networkId" + }, + "usernameHash":{ + "shape":"GenericString", + "documentation":"The username hash (unique identifier) of the guest user to update.
", + "location":"uri", + "locationName":"usernameHash" + }, + "block":{ + "shape":"Boolean", + "documentation":"Set to true to block the guest user or false to unblock them.
" + } + } + }, + "UpdateGuestUserResponse":{ + "type":"structure", + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message indicating the result of the update operation.
" + } + } + }, + "UpdateNetworkRequest":{ + "type":"structure", + "required":[ + "networkId", + "networkName" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network to update.
", + "location":"uri", + "locationName":"networkId" + }, + "networkName":{ + "shape":"GenericString", + "documentation":"The new name for the network. Must be between 1 and 20 characters.
" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A unique identifier for this request to ensure idempotency.
", + "idempotencyToken":true, + "location":"header", + "locationName":"X-Client-Token" + }, + "encryptionKeyArn":{ + "shape":"GenericString", + "documentation":"The ARN of the Amazon Web Services KMS customer managed key to use for encrypting sensitive data in the network.
" + } + } + }, + "UpdateNetworkResponse":{ + "type":"structure", + "members":{ + "message":{ + "shape":"GenericString", + "documentation":"A message indicating that the network was updated successfully.
" + } + } + }, + "UpdateNetworkSettingsRequest":{ + "type":"structure", + "required":[ + "networkId", + "settings" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network whose settings will be updated.
", + "location":"uri", + "locationName":"networkId" + }, + "settings":{ + "shape":"NetworkSettings", + "documentation":"A map of setting names to their new values. Each setting should be provided with its appropriate type (boolean, string, number, etc.).
" + } + } + }, + "UpdateNetworkSettingsResponse":{ + "type":"structure", + "required":["settings"], + "members":{ + "settings":{ + "shape":"SettingsList", + "documentation":"A list of the updated network settings, showing the new values for each modified setting.
" + } + } + }, + "UpdateSecurityGroupRequest":{ + "type":"structure", + "required":[ + "networkId", + "groupId" + ], + "members":{ + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the Wickr network containing the security group to update.
", + "location":"uri", + "locationName":"networkId" + }, + "groupId":{ + "shape":"GenericString", + "documentation":"The unique identifier of the security group to update.
", + "location":"uri", + "locationName":"groupId" + }, + "name":{ + "shape":"GenericString", + "documentation":"The new name for the security group.
" + }, + "securityGroupSettings":{ + "shape":"SecurityGroupSettings", + "documentation":"The updated configuration settings for the security group.
Federation mode values: 0 (none), 1 (federated), 2 (restricted).
" + } + } + }, + "UpdateSecurityGroupResponse":{ + "type":"structure", + "required":["securityGroup"], + "members":{ + "securityGroup":{ + "shape":"SecurityGroup", + "documentation":"The updated security group details, including the new settings.
" + } + } + }, + "UpdateUserDetails":{ + "type":"structure", + "members":{ + "firstName":{ + "shape":"SensitiveString", + "documentation":"The new first name for the user.
" + }, + "lastName":{ + "shape":"SensitiveString", + "documentation":"The new last name for the user.
" + }, + "username":{ + "shape":"GenericString", + "documentation":"The new username or email address for the user.
" + }, + "securityGroupIds":{ + "shape":"SecurityGroupIdList", + "documentation":"The updated list of security group IDs to which the user should belong.
" + }, + "inviteCode":{ + "shape":"GenericString", + "documentation":"A new custom invite code for the user.
" + }, + "inviteCodeTtl":{ + "shape":"Integer", + "documentation":"The new time-to-live for the invite code in days.
" + }, + "codeValidation":{ + "shape":"Boolean", + "documentation":"Indicates whether the user can be verified through a custom invite code.
" + } + }, + "documentation":"Contains the modifiable details for updating an existing user, including name, password, security group membership, and invitation settings.
A user can only be assigned to a single security group. Attempting to add a user to multiple security groups is not supported and will result in an error.
The ID of the Wickr network containing the user to update.
", + "location":"uri", + "locationName":"networkId" + }, + "userId":{ + "shape":"UserId", + "documentation":"The unique identifier of the user to update.
" + }, + "userDetails":{ + "shape":"UpdateUserDetails", + "documentation":"An object containing the user details to be updated, such as name, password, security groups, and invite code settings.
" + } + } + }, + "UpdateUserResponse":{ + "type":"structure", + "required":[ + "userId", + "networkId", + "suspended" + ], + "members":{ + "userId":{ + "shape":"UserId", + "documentation":"The unique identifier of the updated user.
" + }, + "networkId":{ + "shape":"NetworkId", + "documentation":"The ID of the network where the user was updated.
" + }, + "securityGroupIds":{ + "shape":"SecurityGroupIdList", + "documentation":"The list of security group IDs to which the user now belongs after the update.
" + }, + "firstName":{ + "shape":"SensitiveString", + "documentation":"The updated first name of the user.
" + }, + "lastName":{ + "shape":"SensitiveString", + "documentation":"The updated last name of the user.
" + }, + "middleName":{ + "shape":"GenericString", + "documentation":"The middle name of the user (currently not used).
" + }, + "suspended":{ + "shape":"Boolean", + "documentation":"Indicates whether the user is suspended after the update.
" + }, + "modified":{ + "shape":"Integer", + "documentation":"The timestamp when the user was last modified, specified in epoch seconds.
" + }, + "status":{ + "shape":"Integer", + "documentation":"The user's status after the update.
" + }, + "inviteCode":{ + "shape":"GenericString", + "documentation":"The updated invite code for the user, if applicable.
" + }, + "inviteExpiration":{ + "shape":"Integer", + "documentation":"The expiration time of the user's invite code, specified in epoch seconds.
" + }, + "codeValidation":{ + "shape":"Boolean", + "documentation":"Indicates whether the user can be verified through a custom invite code.
" + } + } + }, + "User":{ + "type":"structure", + "members":{ + "userId":{ + "shape":"UserId", + "documentation":"The unique identifier for the user within the network.
" + }, + "firstName":{ + "shape":"SensitiveString", + "documentation":"The first name of the user.
" + }, + "lastName":{ + "shape":"SensitiveString", + "documentation":"The last name of the user.
" + }, + "username":{ + "shape":"GenericString", + "documentation":"The email address or username of the user. For bots, this must end in 'bot'.
" + }, + "securityGroups":{ + "shape":"SecurityGroupIdList", + "documentation":"A list of security group IDs to which the user is assigned, determining their permissions and feature access.
" + }, + "isAdmin":{ + "shape":"Boolean", + "documentation":"Indicates whether the user has administrator privileges in the network.
" + }, + "suspended":{ + "shape":"Boolean", + "documentation":"Indicates whether the user is currently suspended and unable to access the network.
" + }, + "status":{ + "shape":"Integer", + "documentation":"The current status of the user (1 for pending invitation, 2 for active).
" + }, + "otpEnabled":{ + "shape":"Boolean", + "documentation":"Indicates whether one-time password (OTP) authentication is enabled for the user.
" + }, + "scimId":{ + "shape":"GenericString", + "documentation":"The SCIM (System for Cross-domain Identity Management) identifier for the user, used for identity synchronization. Currently not used.
" + }, + "type":{ + "shape":"GenericString", + "documentation":"The descriptive type of the user account (e.g., 'user').
" + }, + "cell":{ + "shape":"GenericString", + "documentation":"The phone number minus country code, used for cloud deployments.
" + }, + "countryCode":{ + "shape":"GenericString", + "documentation":"The country code for the user's phone number, used for cloud deployments.
" + }, + "challengeFailures":{ + "shape":"Integer", + "documentation":"The number of failed password attempts for enterprise deployments, used for account lockout policies.
" + }, + "isInviteExpired":{ + "shape":"Boolean", + "documentation":"Indicates whether the user's email invitation code has expired, applicable to cloud deployments.
" + }, + "isUser":{ + "shape":"Boolean", + "documentation":"Indicates whether this account is a user (as opposed to a bot or other account type).
" + }, + "inviteCode":{ + "shape":"GenericString", + "documentation":"The invitation code for this user, used during registration to join the network.
" + }, + "codeValidation":{ + "shape":"Boolean", + "documentation":"Indicates whether the user can be verified through a custom invite code.
" + }, + "uname":{ + "shape":"GenericString", + "documentation":"The unique identifier for the user.
" + } + }, + "documentation":"Represents a user account in a Wickr network with detailed profile information, status, security settings, and authentication details.
The codeValidation, inviteCode, and inviteCodeTtl fields are restricted to networks in preview only.
A list of validation error details, where each item identifies a specific field that failed validation and explains the reason for the failure.
" + } + }, + "documentation":"One or more fields in the request failed validation. This error provides detailed information about which fields were invalid and why, allowing you to correct the request and retry.
", + "error":{ + "httpStatusCode":422, + "senderFault":true + }, + "exception":true + }, + "WickrAwsNetworks":{ + "type":"structure", + "required":[ + "region", + "networkId" + ], + "members":{ + "region":{ + "shape":"GenericString", + "documentation":"The Amazon Web Services region identifier where the network is hosted (e.g., 'us-east-1').
" + }, + "networkId":{ + "shape":"NetworkId", + "documentation":"The network ID of the Wickr Amazon Web Services network.
" + } + }, + "documentation":"Identifies a Amazon Web Services Wickr network by region and network ID, used for configuring permitted networks for global federation.
" + }, + "WickrAwsNetworksList":{ + "type":"list", + "member":{"shape":"WickrAwsNetworks"} + } + }, + "documentation":"Welcome to the Amazon Web Services Wickr API Reference.
The Amazon Web Services Wickr application programming interface (API) is designed for administrators to perform key tasks, such as creating and managing Amazon Web Services Wickr networks, users, security groups, bots, and more. This guide provides detailed information about the Amazon Web Services Wickr API, including operations, types, inputs and outputs, and error codes. You can use an Amazon Web Services SDK, the Amazon Web Services Command Line Interface (Amazon Web Services CLI), or the REST API to make API calls for Amazon Web Services Wickr.
Using Amazon Web Services SDKs
The SDK clients authenticate your requests by using access keys that you provide. For more information, see Authentication and access using Amazon Web Services SDKs and tools in the Amazon Web Services SDKs and Tools Reference Guide.
Using Amazon Web Services CLI
Use your access keys with the Amazon Web Services CLI to make API calls. For more information about setting up the Amazon Web Services CLI, see Getting started with the Amazon Web Services CLI in the Amazon Web Services Command Line Interface User Guide for Version 2.
Using REST APIs
If you use REST to make API calls, you must authenticate your request by providing a signature. Amazon Web Services Wickr supports Signature Version 4. For more information, see Amazon Web Services Signature Version 4 for API requests in the Amazon Web Services Identity and Access Management User Guide.
Access and permissions to the APIs can be controlled by Amazon Web Services Identity and Access Management. The managed policy AWSWickrFullAccess grants full administrative permission to the Amazon Web Services Wickr service APIs. For more information on restricting access to specific operations, see Identity and access management for Amazon Web Services Wickr in the Amazon Web Services Wickr Administration Guide.
Types of Errors:
The Amazon Web Services Wickr APIs provide an HTTP interface. HTTP defines ranges of HTTP Status Codes for different types of error responses.
Client errors are indicated by an HTTP status code in the 4xx class
Service errors are indicated by an HTTP status code in the 5xx class
In this reference guide, the documentation for each API has an Errors section that includes a brief discussion about HTTP status codes. We recommend looking there as part of your investigation when you get an error.
" +} diff --git a/awscli/botocore/data/wickr/2024-02-01/waiters-2.json b/awscli/botocore/data/wickr/2024-02-01/waiters-2.json new file mode 100644 index 000000000000..13f60ee66be6 --- /dev/null +++ b/awscli/botocore/data/wickr/2024-02-01/waiters-2.json @@ -0,0 +1,5 @@ +{ + "version": 2, + "waiters": { + } +} diff --git a/awscli/botocore/data/workspaces-web/2020-07-08/service-2.json b/awscli/botocore/data/workspaces-web/2020-07-08/service-2.json index ba7d033d9a7f..6be9fc27749d 100644 --- a/awscli/botocore/data/workspaces-web/2020-07-08/service-2.json +++ b/awscli/botocore/data/workspaces-web/2020-07-08/service-2.json @@ -2618,6 +2618,10 @@ "brandingConfigurationInput":{ "shape":"BrandingConfigurationCreateInput", "documentation":"The branding configuration input that customizes the appearance of the web portal for end users. This includes a custom logo, favicon, wallpaper, localized strings, color theme, and an optional terms of service.
" + }, + "webAuthnAllowed":{ + "shape":"EnabledType", + "documentation":"Specifies whether the user can use WebAuthn redirection for passwordless login to websites within the streaming session.
" } } }, @@ -5563,6 +5567,10 @@ "brandingConfigurationInput":{ "shape":"BrandingConfigurationUpdateInput", "documentation":"The branding configuration that customizes the appearance of the web portal for end users. When updating user settings without an existing branding configuration, all fields (logo, favicon, wallpaper, localized strings, and color theme) are required except for terms of service. When updating user settings with an existing branding configuration, all fields are optional.
" + }, + "webAuthnAllowed":{ + "shape":"EnabledType", + "documentation":"Specifies whether the user can use WebAuthn redirection for passwordless login to websites within the streaming session.
" } } }, @@ -5688,6 +5696,10 @@ "brandingConfiguration":{ "shape":"BrandingConfiguration", "documentation":"The branding configuration output that customizes the appearance of the web portal for end users.
" + }, + "webAuthnAllowed":{ + "shape":"EnabledType", + "documentation":"Specifies whether the user can use WebAuthn redirection for passwordless login to websites within the streaming session.
" } }, "documentation":"A user settings resource that can be associated with a web portal. Once associated with a web portal, user settings control how users can transfer data between a streaming session and the their local devices.
" @@ -5747,6 +5759,10 @@ "brandingConfiguration":{ "shape":"BrandingConfiguration", "documentation":"The branding configuration output that customizes the appearance of the web portal for end users.
" + }, + "webAuthnAllowed":{ + "shape":"EnabledType", + "documentation":"Specifies whether the user can use WebAuthn redirection for passwordless login to websites within the streaming session.
" } }, "documentation":"The summary of user settings.
" diff --git a/tests/functional/botocore/endpoint-rules/wickr/endpoint-tests-1.json b/tests/functional/botocore/endpoint-rules/wickr/endpoint-tests-1.json new file mode 100644 index 000000000000..29d6df91fcd7 --- /dev/null +++ b/tests/functional/botocore/endpoint-rules/wickr/endpoint-tests-1.json @@ -0,0 +1,270 @@ +{ + "testCases": [ + { + "documentation": "For region us-east-1 with FIPS enabled and DualStack enabled", + "expect": { + "endpoint": { + "url": "https://admin.wickr-fips.us-east-1.api.aws" + } + }, + "params": { + "Region": "us-east-1", + "UseFIPS": true, + "UseDualStack": true + } + }, + { + "documentation": "For region us-east-1 with FIPS enabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://admin.wickr-fips.us-east-1.amazonaws.com" + } + }, + "params": { + "Region": "us-east-1", + "UseFIPS": true, + "UseDualStack": false + } + }, + { + "documentation": "For region us-east-1 with FIPS disabled and DualStack enabled", + "expect": { + "endpoint": { + "url": "https://admin.wickr.us-east-1.api.aws" + } + }, + "params": { + "Region": "us-east-1", + "UseFIPS": false, + "UseDualStack": true + } + }, + { + "documentation": "For region us-east-1 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://admin.wickr.us-east-1.amazonaws.com" + } + }, + "params": { + "Region": "us-east-1", + "UseFIPS": false, + "UseDualStack": false + } + }, + { + "documentation": "For region cn-north-1 with FIPS enabled and DualStack enabled", + "expect": { + "endpoint": { + "url": "https://admin.wickr-fips.cn-north-1.api.amazonwebservices.com.cn" + } + }, + "params": { + "Region": "cn-north-1", + "UseFIPS": true, + "UseDualStack": true + } + }, + { + "documentation": "For region cn-north-1 with FIPS enabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://admin.wickr-fips.cn-north-1.amazonaws.com.cn" + } + }, + "params": { + "Region": "cn-north-1", + "UseFIPS": true, + "UseDualStack": false + } 
+ }, + { + "documentation": "For region cn-north-1 with FIPS disabled and DualStack enabled", + "expect": { + "endpoint": { + "url": "https://admin.wickr.cn-north-1.api.amazonwebservices.com.cn" + } + }, + "params": { + "Region": "cn-north-1", + "UseFIPS": false, + "UseDualStack": true + } + }, + { + "documentation": "For region cn-north-1 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://admin.wickr.cn-north-1.amazonaws.com.cn" + } + }, + "params": { + "Region": "cn-north-1", + "UseFIPS": false, + "UseDualStack": false + } + }, + { + "documentation": "For region us-gov-east-1 with FIPS enabled and DualStack enabled", + "expect": { + "endpoint": { + "url": "https://admin.wickr-fips.us-gov-east-1.api.aws" + } + }, + "params": { + "Region": "us-gov-east-1", + "UseFIPS": true, + "UseDualStack": true + } + }, + { + "documentation": "For region us-gov-east-1 with FIPS enabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://admin.wickr-fips.us-gov-east-1.amazonaws.com" + } + }, + "params": { + "Region": "us-gov-east-1", + "UseFIPS": true, + "UseDualStack": false + } + }, + { + "documentation": "For region us-gov-east-1 with FIPS disabled and DualStack enabled", + "expect": { + "endpoint": { + "url": "https://admin.wickr.us-gov-east-1.api.aws" + } + }, + "params": { + "Region": "us-gov-east-1", + "UseFIPS": false, + "UseDualStack": true + } + }, + { + "documentation": "For region us-gov-east-1 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://admin.wickr.us-gov-east-1.amazonaws.com" + } + }, + "params": { + "Region": "us-gov-east-1", + "UseFIPS": false, + "UseDualStack": false + } + }, + { + "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://admin.wickr-fips.us-iso-east-1.c2s.ic.gov" + } + }, + "params": { + "Region": "us-iso-east-1", + "UseFIPS": true, + "UseDualStack": false + } + 
}, + { + "documentation": "For region us-iso-east-1 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://admin.wickr.us-iso-east-1.c2s.ic.gov" + } + }, + "params": { + "Region": "us-iso-east-1", + "UseFIPS": false, + "UseDualStack": false + } + }, + { + "documentation": "For region us-isob-east-1 with FIPS enabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://admin.wickr-fips.us-isob-east-1.sc2s.sgov.gov" + } + }, + "params": { + "Region": "us-isob-east-1", + "UseFIPS": true, + "UseDualStack": false + } + }, + { + "documentation": "For region us-isob-east-1 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://admin.wickr.us-isob-east-1.sc2s.sgov.gov" + } + }, + "params": { + "Region": "us-isob-east-1", + "UseFIPS": false, + "UseDualStack": false + } + }, + { + "documentation": "For custom endpoint with region set and fips disabled and dualstack disabled", + "expect": { + "endpoint": { + "url": "https://example.com" + } + }, + "params": { + "Region": "us-east-1", + "UseFIPS": false, + "UseDualStack": false, + "Endpoint": "https://example.com" + } + }, + { + "documentation": "For custom endpoint with region not set and fips disabled and dualstack disabled", + "expect": { + "endpoint": { + "url": "https://example.com" + } + }, + "params": { + "UseFIPS": false, + "UseDualStack": false, + "Endpoint": "https://example.com" + } + }, + { + "documentation": "For custom endpoint with fips enabled and dualstack disabled", + "expect": { + "error": "Invalid Configuration: FIPS and custom endpoint are not supported" + }, + "params": { + "Region": "us-east-1", + "UseFIPS": true, + "UseDualStack": false, + "Endpoint": "https://example.com" + } + }, + { + "documentation": "For custom endpoint with fips disabled and dualstack enabled", + "expect": { + "error": "Invalid Configuration: Dualstack and custom endpoint are not supported" + }, + "params": { + "Region": "us-east-1", + 
"UseFIPS": false, + "UseDualStack": true, + "Endpoint": "https://example.com" + } + }, + { + "documentation": "Missing region", + "expect": { + "error": "Invalid Configuration: Missing Region" + } + } + ], + "version": "1.0" +} \ No newline at end of file From dd993394f0f790f2afbfb254648041921e4e4e4d Mon Sep 17 00:00:00 2001 From: aws-sdk-python-automationThe Amazon Resource Name (ARN) of the IAM role assumed by Config and used by the specified configuration recorder.
The server will reject a request without a defined roleARN for the configuration recorder
While the API model does not require this field, the server will reject a request without a defined roleARN for the configuration recorder.
Policies and compliance results
IAM policies and other policies managed in Organizations can impact whether Config has permissions to record configuration changes for your resources. Additionally, rules directly evaluate the configuration of a resource; they don't take these policies into account when running evaluations. Make sure that the policies in effect align with how you intend to use Config.
Keep Minimum Permissions When Reusing an IAM role
If you use an Amazon Web Services service that uses Config, such as Security Hub or Control Tower, and an IAM role has already been created, make sure that the IAM role that you use when setting up Config keeps the same minimum permissions as the pre-existing IAM role. You must do this to ensure that the other Amazon Web Services service continues to run as expected.
For example, if Control Tower has an IAM role that allows Config to read S3 objects, make sure that the same permissions are granted to the IAM role you use when setting up Config. Otherwise, it may interfere with how Control Tower operates.
The service-linked IAM role for Config must be used for service-linked configuration recorders
For service-linked configuration recorders, you must use the service-linked IAM role for Config: AWSServiceRoleForConfig.
The Amazon Resource Name (ARN) of the IAM role assumed by Config and used by the specified configuration recorder.
The server will reject a request without a defined roleARN for the configuration recorder
While the API model does not require this field, the server will reject a request without a defined roleARN for the configuration recorder.
Policies and compliance results
IAM policies and other policies managed in Organizations can impact whether Config has permissions to record configuration changes for your resources. Additionally, rules directly evaluate the configuration of a resource; they don't take these policies into account when running evaluations. Make sure that the policies in effect align with how you intend to use Config.
Keep Minimum Permissions When Reusing an IAM role
If you use an Amazon Web Services service that uses Config, such as Security Hub CSPM or Control Tower, and an IAM role has already been created, make sure that the IAM role that you use when setting up Config keeps the same minimum permissions as the pre-existing IAM role. You must do this to ensure that the other Amazon Web Services service continues to run as expected.
For example, if Control Tower has an IAM role that allows Config to read S3 objects, make sure that the same permissions are granted to the IAM role you use when setting up Config. Otherwise, it may interfere with how Control Tower operates.
The service-linked IAM role for Config must be used for service-linked configuration recorders
For service-linked configuration recorders, you must use the service-linked IAM role for Config: AWSServiceRoleForConfig.
Determines how placement groups spread instances.
Host – You can use host only with Outpost placement groups.
Rack – No usage restrictions.
Reserved for future use.
" + }, "DryRun":{ "shape":"Boolean", "documentation":"Checks whether you have the required permissions for the operation, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.
The spread level for the placement group. Only Outpost placement groups can be spread across hosts.
", "locationName":"spreadLevel" + }, + "LinkedGroupId":{ + "shape":"PlacementGroupId", + "documentation":"Reserved for future use.
", + "locationName":"linkedGroupId" } }, "documentation":"Describes a placement group.
" diff --git a/awscli/botocore/data/guardduty/2017-11-28/service-2.json b/awscli/botocore/data/guardduty/2017-11-28/service-2.json index fcd1daa79bfc..fe31e8f22d3e 100644 --- a/awscli/botocore/data/guardduty/2017-11-28/service-2.json +++ b/awscli/botocore/data/guardduty/2017-11-28/service-2.json @@ -5839,7 +5839,10 @@ }, "GetRemainingFreeTrialDaysRequest":{ "type":"structure", - "required":["DetectorId"], + "required":[ + "AccountIds", + "DetectorId" + ], "members":{ "DetectorId":{ "shape":"DetectorId", diff --git a/awscli/botocore/data/pcs/2023-02-10/service-2.json b/awscli/botocore/data/pcs/2023-02-10/service-2.json index 4d8976cb1fe7..30ab02146bc8 100644 --- a/awscli/botocore/data/pcs/2023-02-10/service-2.json +++ b/awscli/botocore/data/pcs/2023-02-10/service-2.json @@ -382,7 +382,7 @@ }, "mode":{ "shape":"AccountingMode", - "documentation":"The default value for mode is STANDARD. A value of STANDARD means Slurm accounting is enabled.
The default value for mode is NONE. A value of STANDARD means Slurm accounting is enabled.
The accounting configuration includes configurable settings for Slurm accounting. It's a property of the ClusterSlurmConfiguration object.
" @@ -410,7 +410,7 @@ }, "mode":{ "shape":"AccountingMode", - "documentation":"The default value for mode is STANDARD. A value of STANDARD means Slurm accounting is enabled.
The default value for mode is NONE. A value of STANDARD means Slurm accounting is enabled.
The accounting configuration includes configurable settings for Slurm accounting. It's a property of the ClusterSlurmConfiguration object.
" @@ -1252,14 +1252,14 @@ "members":{ "secretArn":{ "shape":"String", - "documentation":"The Amazon Resource Name (ARN) of the AWS Secrets Manager secret containing the JWT key.
" + "documentation":"The Amazon Resource Name (ARN) of the Amazon Web Services Secrets Manager secret containing the JWT key.
" }, "secretVersion":{ "shape":"String", - "documentation":"The version of the AWS Secrets Manager secret containing the JWT key.
" + "documentation":"The version of the Amazon Web Services Secrets Manager secret containing the JWT key.
" } }, - "documentation":"The JWT key stored in AWS Secrets Manager for Slurm REST API authentication.
" + "documentation":"The JWT key stored in Amazon Web Services Secrets Manager for Slurm REST API authentication.
" }, "ListClustersRequest":{ "type":"structure", @@ -1842,7 +1842,7 @@ "members":{ "mode":{ "shape":"SlurmRestMode", - "documentation":"The default value for mode is STANDARD. A value of STANDARD means the Slurm REST API is enabled.
The default value for mode is NONE. A value of STANDARD means the Slurm REST API is enabled.
The Slurm REST API configuration includes settings for enabling and configuring the Slurm REST API. It's a property of the ClusterSlurmConfiguration object.
" @@ -1860,7 +1860,7 @@ "members":{ "mode":{ "shape":"SlurmRestMode", - "documentation":"The default value for mode is STANDARD. A value of STANDARD means the Slurm REST API is enabled.
The default value for mode is NONE. A value of STANDARD means the Slurm REST API is enabled.
The Slurm REST API configuration includes settings for enabling and configuring the Slurm REST API. It's a property of the ClusterSlurmConfiguration object.
" @@ -1982,7 +1982,7 @@ }, "mode":{ "shape":"AccountingMode", - "documentation":"The default value for mode is STANDARD. A value of STANDARD means Slurm accounting is enabled.
The default value for mode is NONE. A value of STANDARD means Slurm accounting is enabled.
The accounting configuration includes configurable settings for Slurm accounting.
" @@ -2161,7 +2161,7 @@ "members":{ "mode":{ "shape":"SlurmRestMode", - "documentation":"The default value for mode is STANDARD. A value of STANDARD means the Slurm REST API is enabled.
The default value for mode is NONE. A value of STANDARD means the Slurm REST API is enabled.
The Slurm REST API configuration includes settings for enabling and configuring the Slurm REST API.
" From b7ccb2355ed1a2453adfec3e5ef3a526cf151a58 Mon Sep 17 00:00:00 2001 From: aws-sdk-python-automation