Practice Free SAA-C03 Exam Online Questions
A company wants to migrate its 1 PB on-premises image repository to AWS. The images will be used by a serverless web application. Images stored in the repository are rarely accessed, but they must be immediately available. Additionally, the images must be encrypted at rest and protected from accidental deletion.
Which solution meets these requirements?
- A . Implement client-side encryption and store the images in an Amazon S3 Glacier vault. Set a vault lock to prevent accidental deletion.
- B . Store the images in an Amazon S3 bucket in the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Enable versioning, default encryption, and MFA Delete on the S3 bucket.
- C . Store the images in an Amazon FSx for Windows File Server file share. Configure the Amazon FSx file share to use an AWS Key Management Service (AWS KMS) customer master key (CMK) to encrypt the images in the file share. Use NTFS permission sets on the images to prevent accidental deletion.
- D . Store the images in an Amazon Elastic File System (Amazon EFS) file share in the Infrequent Access storage class. Configure the EFS file share to use an AWS Key Management Service (AWS KMS) customer master key (CMK) to encrypt the images in the file share. Use NFS permission sets on the images to prevent accidental deletion.
B
Explanation:
This answer is correct because it provides a durable, immediately accessible replacement for the on-premises image repository that is compatible with a serverless web application. Amazon S3 is a fully managed object storage service that can store any amount of data and serve it over the internet. It supports the following features:
Resilience and durability: Amazon S3 stores data across multiple Availability Zones within a Region and offers 99.999999999% (11 9's) of durability. It also supports cross-Region replication, which enables automatic and asynchronous copying of objects across buckets in different AWS Regions.
Security and data protection: Amazon S3 encrypts data at rest using server-side encryption with either Amazon S3-managed keys (SSE-S3), AWS KMS keys (SSE-KMS), or customer-provided keys (SSE-C). It also supports encryption in transit using SSL/TLS. Amazon S3 provides additional data protection features such as versioning, which keeps multiple versions of an object in the same bucket, and MFA Delete, which requires additional authentication for deleting an object version or changing the versioning state of a bucket.
Performance: Amazon S3 delivers high performance and scalability for serving static and dynamic web content. It also supports features such as S3 Transfer Acceleration, which speeds up data transfers by routing requests to AWS edge locations, and S3 Select, which enables retrieving only a subset of data from an object by using simple SQL expressions.
The S3 Standard-Infrequent Access (S3 Standard-IA) storage class is suitable for storing images that are rarely accessed, but must be immediately available when needed. It offers the same high durability, throughput, and low latency as S3 Standard, but with a lower storage cost per GB and a higher per-request cost.
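As a rough illustration of how answer B could be wired up, the boto3 (Python) sketch below enables versioning, MFA Delete, and default encryption on a bucket and uploads an image in the S3 Standard-IA storage class. The bucket name, MFA device serial, and object key are placeholders, not values from the question.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-image-repository"  # hypothetical bucket name

# Keep every object version; MFA Delete can only be enabled with the
# bucket owner's root credentials, passing the MFA serial and current code.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::111122223333:mfa/root-device 123456",  # placeholder serial + code
)

# Default server-side encryption so every new object is encrypted at rest.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Store rarely accessed images in S3 Standard-IA for lower storage cost.
s3.upload_file(
    "scan-0001.jpg",
    BUCKET,
    "images/scan-0001.jpg",
    ExtraArgs={"StorageClass": "STANDARD_IA"},
)
```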
Reference: Amazon Simple Storage Service
Storage classes – Amazon Simple Storage Service
A company recently launched a new product that is highly available in one AWS Region. The product consists of an application that runs on Amazon Elastic Container Service (Amazon ECS), a public Application Load Balancer (ALB), and an Amazon DynamoDB table. The company wants a solution that will make the application highly available across Regions.
Which combination of steps will meet these requirements? (Select THREE.)
- A . In a different Region, deploy the application to a new ECS cluster that is accessible through a new ALB.
- B . Create an Amazon Route 53 failover record.
- C . Modify the DynamoDB table to create a DynamoDB global table.
- D . In the same Region, deploy the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that is accessible through a new ALB.
- E . Modify the DynamoDB table to create global secondary indexes (GSIs).
- F . Create an AWS PrivateLink endpoint for the application.
A, B, C
Explanation:
To make the application highly available across regions:
Deploy the application in a different region using a new ECS cluster and ALB to ensure regional redundancy.
Use Route 53 failover routing to automatically direct traffic to the healthy region in case of failure.
Use DynamoDB Global Tables to ensure the database is replicated and available across multiple regions, supporting read and write operations in each region.
Option D (EKS cluster in the same region): This does not provide regional redundancy.
Option E (Global Secondary Indexes): GSIs improve query performance but do not provide multi-region availability.
Option F (PrivateLink): PrivateLink is for secure communication, not for cross-region high availability.
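To make steps B and C concrete, here is a minimal boto3 sketch that adds a replica Region to the DynamoDB table and creates Route 53 failover alias records pointing at the two ALBs. The table name, hosted zone IDs, domain name, and ALB DNS names are placeholders.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")
route53 = boto3.client("route53")

# Step C: turn the existing table into a global table by adding a replica Region.
dynamodb.update_table(
    TableName="ProductTable",  # placeholder table name
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)

# Step B: failover routing -- the primary record points at the existing ALB,
# the secondary record points at the ALB in the new Region.
def failover_alias(identifier, failover, alb_zone_id, alb_dns):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",          # placeholder domain
            "Type": "A",
            "SetIdentifier": identifier,
            "Failover": failover,               # "PRIMARY" or "SECONDARY"
            "AliasTarget": {
                "HostedZoneId": alb_zone_id,    # canonical hosted zone ID of the ALB
                "DNSName": alb_dns,
                "EvaluateTargetHealth": True,
            },
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z123456EXAMPLE",  # placeholder hosted zone
    ChangeBatch={
        "Changes": [
            failover_alias("primary", "PRIMARY", "ZEXAMPLEEAST", "alb-east.example.elb.amazonaws.com"),
            failover_alias("secondary", "SECONDARY", "ZEXAMPLEWEST", "alb-west.example.elb.amazonaws.com"),
        ]
    },
)
```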
Reference: DynamoDB Global Tables
Amazon ECS with ALB
A company hosts its application on AWS. The company uses Amazon Cognito to manage users. When users log in to the application, the application fetches required data from Amazon DynamoDB by using a REST API that is hosted in Amazon API Gateway. The company wants an AWS managed solution that will control access to the REST API to reduce development efforts.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Configure an AWS Lambda function to be an authorizer in API Gateway to validate which user made the request.
- B . For each user, create and assign an API key that must be sent with each request. Validate the key by using an AWS Lambda function.
- C . Send the user's email address in the header with every request. Invoke an AWS Lambda function to validate that the user with that email address has proper access.
- D . Configure an Amazon Cognito user pool authorizer in API Gateway to allow Amazon Cognito to validate each request.
D
Explanation:
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html
To control access to the REST API and reduce development efforts, the company can use an Amazon Cognito user pool authorizer in API Gateway. This will allow Amazon Cognito to validate each request and ensure that only authenticated users can access the API. This solution has the LEAST operational overhead, as it does not require the company to develop and maintain any additional infrastructure or code.
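For illustration only, a Cognito user pool authorizer can be attached to a REST API with a couple of boto3 calls; the API ID, resource ID, and user pool ARN below are placeholders.

```python
import boto3

apigw = boto3.client("apigateway")

# Create an authorizer that delegates token validation to the Cognito user pool.
authorizer = apigw.create_authorizer(
    restApiId="a1b2c3d4e5",                      # placeholder REST API ID
    name="cognito-user-pool-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=["arn:aws:cognito-idp:us-east-1:111122223333:userpool/us-east-1_EXAMPLE"],
    identitySource="method.request.header.Authorization",
)

# Require the authorizer on a method so API Gateway rejects unauthenticated calls.
apigw.put_method(
    restApiId="a1b2c3d4e5",
    resourceId="abc123",                         # placeholder resource ID
    httpMethod="GET",
    authorizationType="COGNITO_USER_POOLS",
    authorizerId=authorizer["id"],
)
```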
A company has an employee web portal. Employees log in to the portal to view payroll details. The company is developing a new system to give employees the ability to upload scanned documents for reimbursement. The company runs a program to extract text-based data from the documents and attach the extracted information to each employee’s reimbursement IDs for processing.
The employee web portal requires 100% uptime. The document extract program runs infrequently throughout the day on an on-demand basis. The company wants to build a scalable and cost-effective new system that will require minimal changes to the existing web portal. The company does not want to make any code changes.
Which solution will meet these requirements with the LEAST implementation effort?
- A . Run Amazon EC2 On-Demand Instances in an Auto Scaling group for the web portal. Use an AWS Lambda function to run the document extract program. Invoke the Lambda function when an employee uploads a new reimbursement document.
- B . Run Amazon EC2 Spot Instances in an Auto Scaling group for the web portal. Run the document extract program on EC2 Spot Instances. Start document extract program instances when an employee uploads a new reimbursement document.
- C . Purchase a Savings Plan to run the web portal and the document extract program. Run the web portal and the document extract program in an Auto Scaling group.
- D . Create an Amazon S3 bucket to host the web portal. Use Amazon API Gateway and an AWS Lambda function for the existing functionalities. Use the Lambda function to run the document extract program. Invoke the Lambda function when the API that is associated with a new document upload is called.
A
Explanation:
This solution offers the most scalable and cost-effective approach with minimal changes to the existing web portal and no code modifications.
Amazon EC2 On-Demand Instances in an Auto Scaling Group: Running the web portal on EC2 On-Demand Instances ensures 100% uptime and scalability. The Auto Scaling group will maintain the desired number of instances, automatically scaling up or down as needed, ensuring high availability for the employee web portal.
AWS Lambda for Document Extraction: Lambda is a serverless compute service that allows you to run code in response to events without provisioning or managing servers. By using Lambda to run the document extraction program, you can trigger the function whenever an employee uploads a document. This approach is cost-effective since you only pay for the compute time used by the Lambda function.
No Code Changes Required: This solution integrates with the existing infrastructure with minimal implementation effort and does not require any modifications to the web portal’s code.
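As a rough sketch of the upload trigger in answer A (assuming the scanned documents land in an S3 bucket), the boto3 calls below grant S3 permission to invoke the extraction Lambda function and configure an object-created notification. The bucket, function, and prefix names are placeholders.

```python
import boto3

s3 = boto3.client("s3")
lam = boto3.client("lambda")

BUCKET = "reimbursement-uploads"                 # placeholder bucket
FUNCTION_ARN = "arn:aws:lambda:us-east-1:111122223333:function:extract-document-text"

# Allow S3 to invoke the document-extract function.
lam.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="AllowS3Invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn=f"arn:aws:s3:::{BUCKET}",
)

# Invoke the function whenever a new scanned document is uploaded.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": FUNCTION_ARN,
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "prefix", "Value": "scans/"}]}
                },
            }
        ]
    },
)
```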
Why Not Other Options?
Option B (Spot Instances): Spot Instances are not suitable for workloads requiring 100% uptime, as they can be terminated by AWS with short notice.
Option C (Savings Plan): A Savings Plan could reduce costs but does not address the requirement for running the document extraction program efficiently or without code changes.
Option D (S3 with API Gateway and Lambda): This would require significant changes to the existing web portal setup, including moving the portal to S3 and reconfiguring its architecture, which contradicts the requirement of minimal implementation effort and no code changes.
Reference: Amazon EC2 Auto Scaling – Information on how to use Auto Scaling for EC2 instances.
AWS Lambda – Overview of AWS Lambda and its use cases.
A company is designing a new web application that will run on Amazon EC2 instances. The application will use Amazon DynamoDB for backend data storage. The application traffic will be unpredictable. The company expects that the application read and write throughput to the database will be moderate to high. The company needs to scale in response to application traffic.
Which DynamoDB table configuration will meet these requirements MOST cost-effectively?
- A . Configure DynamoDB with provisioned read and write by using the DynamoDB Standard table class. Set DynamoDB auto scaling to a maximum defined capacity.
- B . Configure DynamoDB in on-demand mode by using the DynamoDB Standard table class.
- C . Configure DynamoDB with provisioned read and write by using the DynamoDB Standard Infrequent Access (DynamoDB Standard-IA) table class. Set DynamoDB auto scaling to a maximum defined capacity.
- D . Configure DynamoDB in on-demand mode by using the DynamoDB Standard Infrequent Access (DynamoDB Standard-IA) table class.
B
Explanation:
The most cost-effective DynamoDB table configuration for the web application is to configure DynamoDB in on-demand mode by using the DynamoDB Standard table class. This configuration will allow the company to scale in response to application traffic and pay only for the read and write requests that the application performs on the table.
On-demand mode is a flexible billing option that can handle thousands of requests per second without capacity planning. On-demand mode automatically adjusts the table's capacity based on the incoming traffic and charges only for the read and write requests that are actually performed. On-demand mode is suitable for applications with unpredictable or variable workloads, or applications that prefer the ease of paying for only what they use.
The DynamoDB Standard table class is the default and recommended table class for most workloads. It offers lower throughput costs than the DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class and is more cost-effective for tables where throughput is the dominant cost. The DynamoDB Standard table class also offers the same performance, durability, and availability as the DynamoDB Standard-IA table class.
The other options are not correct because they are either not cost-effective or not suitable for the use case.
Configuring DynamoDB with provisioned read and write capacity by using the DynamoDB Standard table class, with auto scaling set to a maximum defined capacity, requires manual estimation and management of the table's capacity, which adds complexity and cost to the solution. Provisioned mode requires users to specify the amount of read and write capacity units for their tables and charges for the reserved capacity regardless of usage. Provisioned mode is suitable for applications with predictable or stable workloads, or applications that require finer-grained control over their capacity settings.
Configuring DynamoDB with provisioned read and write capacity by using the DynamoDB Standard-IA table class, with auto scaling set to a maximum defined capacity, is not cost-effective for tables with moderate to high throughput. The DynamoDB Standard-IA table class offers lower storage costs than the DynamoDB Standard table class but higher throughput costs; it is optimized for tables where storage is the dominant cost, such as tables that store infrequently accessed data.
Configuring DynamoDB in on-demand mode by using the DynamoDB Standard-IA table class is likewise not cost-effective for tables with moderate to high throughput. As mentioned above, the DynamoDB Standard-IA table class has higher throughput costs than the DynamoDB Standard table class, which can offset the savings from lower storage costs.
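As a small illustration of answer B, the boto3 call below creates a table in on-demand (pay-per-request) mode with the Standard table class; the table name and key schema are placeholders.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# On-demand billing scales with actual request traffic; no capacity planning needed.
dynamodb.create_table(
    TableName="WebAppData",                      # placeholder table name
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",               # on-demand mode
    TableClass="STANDARD",                       # lower per-request cost than Standard-IA
)
```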
Reference: Table classes – Amazon DynamoDB
Read/write capacity mode – Amazon DynamoDB
A company stores sensitive data in Amazon S3. A solutions architect needs to create an encryption solution. The company needs to fully control the ability of users to create, rotate, and disable encryption keys with minimal effort for any data that must be encrypted.
Which solution will meet these requirements?
- A . Use default server-side encryption with Amazon S3 managed encryption keys (SSE-S3) to store the sensitive data.
- B . Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
- C . Create an AWS managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
- D . Download S3 objects to an Amazon EC2 instance. Encrypt the objects by using customer managed keys. Upload the encrypted objects back into Amazon S3.
B
Explanation:
Understanding the Requirement: The company needs to control the creation, rotation, and disabling of encryption keys for data stored in S3 with minimal effort.
Analysis of Options:
SSE-S3: Provides server-side encryption using S3 managed keys but does not offer full control over key management.
Customer managed key with AWS KMS (SSE-KMS): Allows the company to fully control key creation, rotation, and disabling, providing a high level of security and compliance.
AWS managed key with AWS KMS (SSE-KMS): While it provides some control, it does not offer the same level of granularity as customer-managed keys.
EC2 instance encryption and re-upload: This approach is operationally intensive and does not leverage AWS managed services for efficient key management.
Best Solution:
Customer managed key with AWS KMS (SSE-KMS): This solution meets the requirement for full control over encryption keys with minimal operational overhead, leveraging AWS managed services for secure key management.
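A minimal sketch of answer B with boto3: create a customer managed KMS key, turn on automatic rotation, and set it as the bucket's default encryption key. The bucket name is a placeholder.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Customer managed key: the company controls its policy, rotation, and disabling.
key = kms.create_key(Description="S3 sensitive-data key")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)            # yearly automatic rotation
# The key can later be disabled with kms.disable_key(KeyId=key_id) if needed.

# Make SSE-KMS with this key the bucket default so new objects are encrypted at rest.
s3.put_bucket_encryption(
    Bucket="example-sensitive-data",             # placeholder bucket
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                },
                "BucketKeyEnabled": True,         # reduces KMS request costs
            }
        ]
    },
)
```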
Reference: AWS Key Management Service (KMS)
Amazon S3 Encryption
A company’s web application is running on Amazon EC2 instances behind an Application Load Balancer. The company recently changed its policy, which now requires the application to be accessed from one specific country only.
Which configuration will meet this requirement?
- A . Configure the security group for the EC2 instances.
- B . Configure the security group on the Application Load Balancer.
- C . Configure AWS WAF on the Application Load Balancer in a VPC.
- D . Configure the network ACL for the subnet that contains the EC2 instances.
C
Explanation:
https://aws.amazon.com/about-aws/whats-new/2017/10/aws-waf-now-supports-geographic-match/
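For illustration, a geo-restriction for answer C could look roughly like the boto3 (wafv2) sketch below: allow requests from one country and block everything else, then associate the web ACL with the ALB. The country code, names, and ALB ARN are placeholders.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Block everything by default; allow only requests that match the permitted country.
acl = wafv2.create_web_acl(
    Name="single-country-acl",                   # placeholder name
    Scope="REGIONAL",                            # REGIONAL scope covers ALBs
    DefaultAction={"Block": {}},
    Rules=[
        {
            "Name": "allow-permitted-country",
            "Priority": 0,
            "Statement": {"GeoMatchStatement": {"CountryCodes": ["DE"]}},  # placeholder country code
            "Action": {"Allow": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "allowPermittedCountry",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "singleCountryAcl",
    },
)

# Associate the web ACL with the Application Load Balancer.
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123",  # placeholder
)
```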
A company is deploying a new application on Amazon EC2 instances. The application writes data to Amazon Elastic Block Store (Amazon EBS) volumes. The company needs to ensure that all data that is written to the EBS volumes is encrypted at rest.
Which solution will meet this requirement?
- A . Create an IAM role that specifies EBS encryption. Attach the role to the EC2 instances.
- B . Create the EBS volumes as encrypted volumes. Attach the EBS volumes to the EC2 instances.
- C . Create an EC2 instance tag that has a key of Encrypt and a value of True. Tag all instances that require encryption at the EBS level.
- D . Create an AWS Key Management Service (AWS KMS) key policy that enforces EBS encryption in the account. Ensure that the key policy is active.
B
Explanation:
The solution that will meet the requirement of ensuring that all data that is written to the EBS volumes is encrypted at rest is B. Create the EBS volumes as encrypted volumes and attach the encrypted EBS volumes to the EC2 instances. When you create an EBS volume, you can specify whether to encrypt the volume. If you choose to encrypt the volume, all data written to the volume is automatically encrypted at rest using AWS-managed keys. You can also use customer-managed keys (CMKs) stored in AWS KMS to encrypt and protect your EBS volumes. You can create encrypted EBS volumes and attach them to EC2 instances to ensure that all data written to the volumes is encrypted at rest.
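A brief boto3 sketch of answer B: create an encrypted volume (optionally also turning on account-level EBS encryption by default) and attach it to an instance. The Availability Zone, instance ID, and device name are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Optional: make every new EBS volume in this Region encrypted by default.
ec2.enable_ebs_encryption_by_default()

# Create an encrypted volume; omitting KmsKeyId uses the AWS managed EBS key.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",               # placeholder AZ
    Size=100,                                    # GiB
    VolumeType="gp3",
    Encrypted=True,
)

ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach the encrypted volume to the application instance.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",            # placeholder instance ID
    Device="/dev/sdf",
)
```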
A company recently created a disaster recovery site in a different AWS Region. The company needs to transfer large amounts of data back and forth between NFS file systems in the two Regions on a periodic basis.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use AWS DataSync.
- B . Use AWS Snowball devices
- C . Set up an SFTP server on Amazon EC2
- D . Use AWS Database Migration Service (AWS DMS)
A
Explanation:
This option is the most efficient because it uses AWS DataSync, a secure online service that automates and accelerates moving data between on-premises storage and AWS storage services, and between storage locations in different AWS Regions. A scheduled DataSync task can transfer large amounts of data back and forth between the NFS file systems in the two Regions on a periodic basis, which meets the requirement with the least operational overhead.
Option B is less efficient because it uses AWS Snowball devices, which are physical devices that let you transfer large amounts of data into and out of AWS. This does not provide a periodic data transfer solution, as it requires manual handling and shipping of the devices.
Option C is less efficient because it sets up an SFTP server on Amazon EC2 to provide secure file transfer protocol (SFTP) access to files. This does not provide a periodic data transfer solution, as it requires manual initiation and monitoring of the file transfers.
Option D is less efficient because it uses AWS Database Migration Service (AWS DMS), which is a service that helps you migrate databases to AWS quickly and securely. AWS DMS does not provide a data transfer solution for NFS file systems, as it only supports relational databases and non-relational data stores.
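As a hedged sketch of answer A, the boto3 calls below register the two NFS locations (each reachable through a DataSync agent) and create a task with a schedule so the transfer runs periodically. Hostnames, agent ARNs, and the cron expression are placeholders.

```python
import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

# Each NFS file system is registered as a DataSync location via an agent.
source = datasync.create_location_nfs(
    ServerHostname="nfs.primary.example.internal",     # placeholder hostname
    Subdirectory="/export/data",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-primary"]},
)

destination = datasync.create_location_nfs(
    ServerHostname="nfs.dr.example.internal",          # placeholder hostname
    Subdirectory="/export/data",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-dr"]},
)

# A scheduled task handles the periodic transfer without manual intervention.
datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="primary-to-dr-sync",
    Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},   # placeholder schedule
)
```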
A company is developing a public web application that needs to access multiple AWS services. The application will have hundreds of users who must log in to the application first before using the services.
The company needs to implement a secure and scalable method to grant the web application temporary access to the AWS resources.
Which solution will meet these requirements?
- A . Create an IAM role for each AWS service that the application needs to access. Assign the roles directly to the instances that the web application runs on.
- B . Create an IAM role that has the access permissions the web application requires. Configure the web application to use AWS Security Token Service (AWS STS) to assume the IAM role. Use STS tokens to access the required AWS services.
- C . Use AWS IAM Identity Center to create a user pool that includes the application users. Assign access credentials to the web application users. Use the credentials to access the required AWS services.
- D . Create an IAM user that has programmatic access keys for the AWS services. Store the access keys in AWS Systems Manager Parameter Store. Retrieve the access keys from Parameter Store. Use the keys in the web application.
B
Explanation:
Option B is the correct solution because:
AWS Security Token Service (STS) allows the web application to request temporary security credentials that grant access to AWS resources. These temporary credentials are secure and short-lived, reducing the risk of misuse.
Using STS and IAM roles ensures scalability by enabling the application to dynamically assume roles with the required permissions for each AWS service.
Option A: Assigning IAM roles directly to instances is less flexible and would grant the same permissions to all applications on the instance, which is not ideal for a multi-service web application.
Option C: AWS IAM Identity Center is used for managing single sign-on (SSO) for workforce users and is not designed for granting programmatic access to web applications.
Option D: Storing long-term access keys, even in AWS Systems Manager Parameter Store, is less secure and does not scale well compared to temporary credentials from STS.
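To illustrate answer B, the sketch below assumes a pre-created IAM role (the ARN is a placeholder) and shows the application exchanging it for temporary credentials with AWS STS, then using those credentials for a DynamoDB client.

```python
import boto3

sts = boto3.client("sts")

# Exchange the role for short-lived credentials (one hour in this example).
response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/web-app-access",  # placeholder role
    RoleSessionName="web-app-session",
    DurationSeconds=3600,
)
creds = response["Credentials"]

# Use the temporary credentials for downstream AWS service calls.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(dynamodb.list_tables()["TableNames"])
```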
Reference: AWS Security Token Service (STS)
IAM Roles for Temporary Credentials