Practice Free SAA-C03 Exam Online Questions
A company’s application is running on Amazon EC2 instances within an Auto Scaling group behind an Elastic Load Balancing (ELB) load balancer. Based on the application’s history, the company anticipates a spike in traffic during a holiday each year. A solutions architect must design a strategy to ensure that the Auto Scaling group proactively increases capacity to minimize any performance impact on application users.
Which solution will meet these requirements?
- A . Create an Amazon CloudWatch alarm to scale up the EC2 instances when CPU utilization exceeds 90%.
- B . Create a recurring scheduled action to scale up the Auto Scaling group before the expected period of peak demand.
- C . Increase the minimum and maximum number of EC2 instances in the Auto Scaling group during the peak demand period.
- D . Configure an Amazon Simple Notification Service (Amazon SNS) notification to send alerts when there are autoscaling:EC2_INSTANCE_LAUNCH events.
B
Explanation:
Understanding the Requirement: The company anticipates a spike in traffic during a holiday and wants to ensure the Auto Scaling group can handle the increased load without impacting performance.
Analysis of Options:
CloudWatch Alarm: This reacts to spikes based on metrics like CPU utilization but does not proactively scale before the anticipated demand.
Recurring Scheduled Action: This allows the Auto Scaling group to scale up based on a known schedule, ensuring additional capacity is available before the expected spike.
Increase Min/Max Instances: This could result in unnecessary costs by maintaining higher capacity even when not needed.
SNS Notification: Alerts on scaling events but does not proactively manage scaling to prevent performance issues.
Best Solution for Proactive Scaling:
Create a recurring scheduled action: This approach ensures that the Auto Scaling group scales up before the peak demand, providing the necessary capacity proactively without manual intervention.
Reference: Scheduled Scaling for Auto Scaling
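The scheduled action described above can be sketched as the parameter set you would pass to the Auto Scaling API. This is a minimal illustration; the group name, cron schedule, and sizes are hypothetical placeholders.

```python
import json

# Parameters for a recurring scheduled scaling action (all values are
# illustrative). The cron expression is evaluated in UTC by default:
# here, 08:00 on December 20 every year, ahead of the holiday peak.
scheduled_action = {
    "AutoScalingGroupName": "holiday-app-asg",
    "ScheduledActionName": "holiday-peak-scale-up",
    "Recurrence": "0 8 20 12 *",
    "MinSize": 6,
    "MaxSize": 12,
    "DesiredCapacity": 8,
}

# With boto3 this dict would be applied as:
# boto3.client("autoscaling").put_scheduled_update_group_action(**scheduled_action)
print(json.dumps(scheduled_action, indent=2))
```

A second scheduled action with a lower desired capacity would typically be created to scale back down after the holiday.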
A company wants to implement a data lake in the AWS Cloud. The company must ensure that only specific teams have access to sensitive data in the data lake. The company must have row-level access control for the data lake.
- A . Use Amazon RDS to store the data. Use IAM roles and permissions for data governance and access control.
- B . Use Amazon Redshift to store the data. Use IAM roles and permissions for data governance and access control.
- C . Use Amazon S3 to store the data. Use AWS Lake Formation for data governance and access control.
- D . Use AWS Glue Catalog to store the data. Use AWS Glue DataBrew for data governance and access control.
C
Explanation:
AWS Lake Formation is the purpose-built governance service for S3-based data lakes. It provides fine-grained permissions, including row-level and cell-level access control through data filters, so only authorized teams can see sensitive rows. IAM roles alone (options A and B) do not provide row-level control, and AWS Glue DataBrew is a data-preparation tool, not an access-control service.
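Row-level control in Lake Formation is expressed as a data cells filter attached to a catalog table. A minimal sketch, with a hypothetical database, table, and filter predicate:

```python
import json

# Definition of a Lake Formation data cells filter (all names and the
# account ID are placeholders). Only rows matching the FilterExpression
# are visible to principals granted access through this filter.
data_cells_filter = {
    "TableCatalogId": "123456789012",
    "DatabaseName": "sales_db",
    "TableName": "transactions",
    "Name": "emea-team-rows",
    "RowFilter": {"FilterExpression": "region = 'EMEA'"},
    "ColumnNames": ["order_id", "amount", "region"],
}

# With boto3 this would be applied as:
# boto3.client("lakeformation").create_data_cells_filter(TableData=data_cells_filter)
print(json.dumps(data_cells_filter, indent=2))
```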
A company is building a new application that uses multiple serverless architecture components. The application architecture includes an Amazon API Gateway REST API and AWS Lambda functions to manage incoming requests.
The company needs a service to send messages that the REST API receives to multiple target Lambda functions for processing. The service must filter messages so each target Lambda function receives only the messages the function needs.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Send the requests from the REST API to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe multiple Amazon Simple Queue Service (Amazon SQS) queues to the SNS topic. Configure the target Lambda functions to poll the SQS queues.
- B . Send the requests from the REST API to a set of Amazon EC2 instances that are configured to process messages. Configure the instances to filter messages and to invoke the target Lambda functions.
- C . Send the requests from the REST API to Amazon Managed Streaming for Apache Kafka (Amazon MSK). Configure Amazon MSK to publish the messages to the target Lambda functions.
- D . Send the requests from the REST API to multiple Amazon Simple Queue Service (Amazon SQS) queues. Configure the target Lambda functions to poll the SQS queues.
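The filtering requirement in this question maps to SNS subscription filter policies in the fan-out pattern: one topic, one SQS queue per consumer, and a filter policy so each queue receives only the messages its Lambda function needs. A sketch with hypothetical ARNs and attribute names:

```python
import json

# Filter policy for one subscription: this queue only receives messages
# whose "order_type" message attribute equals "refund" (names illustrative).
filter_policy = {"order_type": ["refund"]}

subscribe_params = {
    "TopicArn": "arn:aws:sns:us-east-1:123456789012:orders-topic",
    "Protocol": "sqs",
    "Endpoint": "arn:aws:sqs:us-east-1:123456789012:refund-queue",
    "Attributes": {"FilterPolicy": json.dumps(filter_policy)},
}

# With boto3: boto3.client("sns").subscribe(**subscribe_params)
# Publishers attach matching message attributes, e.g.:
publish_attributes = {"order_type": {"DataType": "String", "StringValue": "refund"}}
print(json.dumps(subscribe_params, indent=2))
```

Each additional consumer gets its own queue and filter policy, and SNS performs the filtering, so no custom routing code is needed.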
A company wants to manage Amazon Machine Images (AMIs). The company currently copies AMIs to the same AWS Region where the AMIs were created. The company needs to design an application that captures AWS API calls and sends alerts whenever the Amazon EC2 CreateImage API operation is called within the company’s account.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an AWS Lambda function to query AWS CloudTrail logs and to send an alert when a CreateImage API call is detected.
- B . Configure AWS CloudTrail with an Amazon Simple Notification Service (Amazon SNS) notification that occurs when updated logs are sent to Amazon S3. Use Amazon Athena to create a new table and to query on CreateImage when an API call is detected.
- C . Create an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API call. Configure the target as an Amazon Simple Notification Service (Amazon SNS) topic to send an alert when a CreateImage API call is detected.
- D . Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue as a target for AWS CloudTrail logs. Create an AWS Lambda function to send an alert to an Amazon Simple Notification Service (Amazon SNS) topic when a CreateImage API call is detected.
C
Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/monitor-ami-events.html#:~:text=For%20example%2C%20you%20can%20create%20an%20EventBridge%20rule%20that%20detects%20when%20the%20AMI%20creation%20process%20has%20completed%20and%20then%20invokes%20an%20Amazon%20SNS%20topic%20to%20send%20an%20email%20notification%20to%20you.
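The rule from option C amounts to an event pattern matching the CloudTrail record for CreateImage, with an SNS topic as the target. A sketch, with the rule and topic names as hypothetical placeholders:

```python
import json

# EventBridge event pattern matching the EC2 CreateImage API call as
# recorded by CloudTrail.
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": ["CreateImage"],
    },
}

# With boto3 (names are placeholders):
# events = boto3.client("events")
# events.put_rule(Name="detect-create-image", EventPattern=json.dumps(event_pattern))
# events.put_targets(
#     Rule="detect-create-image",
#     Targets=[{"Id": "alert", "Arn": "arn:aws:sns:us-east-1:123456789012:ami-alerts"}],
# )
print(json.dumps(event_pattern))
```

Note that API-call events require CloudTrail to be enabled in the account; EventBridge then delivers the alert with no polling or log parsing.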
A company runs its workloads on Amazon Elastic Container Service (Amazon ECS). The container images that the ECS task definition uses need to be scanned for Common Vulnerabilities and Exposures (CVEs). New container images that are created also need to be scanned.
Which solution will meet these requirements with the FEWEST changes to the workloads?
- A . Use Amazon Elastic Container Registry (Amazon ECR) as a private image repository to store the container images. Specify scan on push filters for the ECR basic scan.
- B . Store the container images in an Amazon S3 bucket. Use Amazon Macie to scan the images. Use an S3 Event Notification to initiate a Macie scan for every event with an s3:ObjectCreated:Put event type.
- C . Deploy the workloads to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Container Registry (Amazon ECR) as a private image repository. Specify scan on push filters for the ECR enhanced scan.
- D . Store the container images in an Amazon S3 bucket that has versioning enabled. Configure an S3 Event Notification for s3:ObjectCreated:* events to invoke an AWS Lambda function. Configure the Lambda function to initiate an Amazon Inspector scan.
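The scan-on-push behavior from option A can be configured once at the registry level so that every new image is scanned automatically. A sketch of the configuration payload (the wildcard filter is illustrative):

```python
# Registry-level ECR scanning configuration: basic scanning, triggered on
# every push, applied to all repositories via a wildcard filter.
scanning_configuration = {
    "scanType": "BASIC",
    "rules": [
        {
            "scanFrequency": "SCAN_ON_PUSH",
            "repositoryFilters": [{"filter": "*", "filterType": "WILDCARD"}],
        }
    ],
}

# With boto3:
# boto3.client("ecr").put_registry_scanning_configuration(**scanning_configuration)
print(scanning_configuration["rules"][0]["scanFrequency"])
```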
A company is running a legacy system on an Amazon EC2 instance. The application code cannot be modified, and the system cannot run on more than one instance. A solutions architect must design a resilient solution that can improve the recovery time for the system.
What should the solutions architect recommend to meet these requirements?
- A . Enable termination protection for the EC2 instance.
- B . Configure the EC2 instance for Multi-AZ deployment.
- C . Create an Amazon CloudWatch alarm to recover the EC2 instance in case of failure.
- D . Launch the EC2 instance with two Amazon Elastic Block Store (Amazon EBS) volumes that use RAID configurations for storage redundancy.
C
Explanation:
To design a resilient solution that can improve the recovery time for the system, a solutions architect should recommend creating an Amazon CloudWatch alarm to recover the EC2 instance in case of failure.
This solution has the following benefits:
It allows the EC2 instance to be automatically recovered when a system status check failure occurs, such as loss of network connectivity, loss of system power, software issues on the physical host, or hardware issues on the physical host that impact network reachability1.
It preserves the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata of the original instance. A recovered instance is identical to the original instance, except for any data that is in-memory, which is lost during the recovery process1.
It does not require any modification of the application code or the EC2 instance configuration. The solutions architect can create a CloudWatch alarm using the AWS Management Console, the AWS CLI, or the CloudWatch API2.
Reference: 1: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html
2: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html#ec2-instance-recover-create-alarm
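The recovery alarm described above can be sketched as the parameter set for a CloudWatch alarm whose action is the EC2 recover automation ARN. The instance ID, Region, and alarm name are placeholders:

```python
# CloudWatch alarm that recovers the instance when the system status
# check fails for two consecutive minutes (all identifiers illustrative).
recover_alarm = {
    "AlarmName": "recover-legacy-instance",
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed_System",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Maximum",
    "Period": 60,
    "EvaluationPeriods": 2,
    "Threshold": 1,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    # The recover action migrates the instance to healthy hardware while
    # keeping its instance ID, private IPs, and Elastic IPs.
    "AlarmActions": ["arn:aws:automate:us-east-1:ec2:recover"],
}

# With boto3: boto3.client("cloudwatch").put_metric_alarm(**recover_alarm)
print(recover_alarm["AlarmActions"][0])
```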
A company runs a three-tier web application in the AWS Cloud that operates across three Availability Zones. The application architecture has an Application Load Balancer, an Amazon EC2 web server that hosts user session states, and a MySQL database that runs on an EC2 instance. The company expects sudden increases in application traffic. The company wants to be able to scale to meet future application capacity demands and to ensure high availability across all three Availability Zones.
Which solution will meet these requirements?
- A . Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment. Use Amazon ElastiCache for Redis with high availability to store session data and to cache reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
- B . Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment. Use Amazon ElastiCache for Memcached with high availability to store session data and to cache reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
- C . Migrate the MySQL database to Amazon DynamoDB. Use DynamoDB Accelerator (DAX) to cache reads. Store the session data in DynamoDB. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
- D . Migrate the MySQL database to Amazon RDS for MySQL in a single Availability Zone. Use Amazon ElastiCache for Redis with high availability to store session data and to cache reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
A
Explanation:
This answer is correct because it meets the requirements of scaling to meet future application capacity demands and ensuring high availability across all three Availability Zones. By migrating the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment, the company can benefit from automatic failover, backup, and patching of the database across multiple Availability Zones. By using Amazon ElastiCache for Redis with high availability, the company can store session data and cache reads in a fast, in-memory data store that can also fail over across Availability Zones. By migrating the web server to an Auto Scaling group that is in three Availability Zones, the company can automatically scale the web server capacity based on the demand and traffic patterns.
Reference: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
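Moving session state off the web server into Redis is what makes the Auto Scaling group stateless. A minimal sketch of the pattern; a tiny in-memory stand-in plays the role of a Redis client (redis-py against an ElastiCache endpoint in production) so the sketch runs without a server, and the key names and TTL are illustrative:

```python
import json
import time

class FakeRedis:
    """Stand-in implementing the subset of the Redis API used here: SETEX and GET."""
    def __init__(self):
        self._store = {}

    def setex(self, key, ttl_seconds, value):
        # Store the value with an absolute expiry time, like Redis SETEX.
        self._store[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        item = self._store.get(key)
        if item is None or time.time() > item[1]:
            return None  # missing or expired, like a real Redis GET
        return item[0]

# In production (endpoint is a placeholder):
# r = redis.Redis(host="my-sessions.xxxxxx.use1.cache.amazonaws.com", port=6379)
r = FakeRedis()

# Any web server in any AZ can now write and read the same session.
r.setex("session:user-123", 1800, json.dumps({"cart_items": 3, "locale": "en-US"}))
session = json.loads(r.get("session:user-123"))
print(session["cart_items"])
```

Because Redis (unlike Memcached) supports replication and automatic failover across Availability Zones, it is the option that satisfies the high-availability requirement for session data.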
A rapidly growing ecommerce company is running its workloads in a single AWS Region. A solutions architect must create a disaster recovery (DR) strategy that includes a different AWS Region. The company wants its database to be up to date in the DR Region with the least possible latency. The remaining infrastructure in the DR Region needs to run at reduced capacity and must be able to scale up if necessary.
Which solution will meet these requirements with the LOWEST recovery time objective (RTO)?
- A . Use an Amazon Aurora global database with a pilot light deployment
- B . Use an Amazon Aurora global database with a warm standby deployment
- C . Use an Amazon RDS Multi-AZ DB instance with a pilot light deployment
- D . Use an Amazon RDS Multi-AZ DB instance with a warm standby deployment
B
Explanation:
https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html
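An Aurora global database is built by promoting an existing cluster to a global cluster and then attaching a headless secondary cluster in the DR Region. A sketch of the two request payloads, with all identifiers and Regions as placeholders:

```python
# Step 1 (primary Region): create the global database from the existing
# primary cluster (ARN and identifiers are placeholders).
global_cluster = {
    "GlobalClusterIdentifier": "ecommerce-global",
    "SourceDBClusterIdentifier": "arn:aws:rds:us-east-1:123456789012:cluster:ecommerce-primary",
}

# Step 2 (DR Region): add a secondary cluster that receives storage-level
# replication, typically with sub-second lag.
secondary_cluster = {
    "DBClusterIdentifier": "ecommerce-dr",
    "Engine": "aurora-mysql",
    "GlobalClusterIdentifier": "ecommerce-global",
}

# With boto3:
# boto3.client("rds").create_global_cluster(**global_cluster)
# boto3.client("rds", region_name="us-west-2").create_db_cluster(**secondary_cluster)
print(secondary_cluster["GlobalClusterIdentifier"])
```

The warm standby portion (reduced-capacity application tier that can scale up) is handled separately, e.g. with a small Auto Scaling group in the DR Region.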
A company wants to migrate its on-premises Microsoft SQL Server Enterprise edition database to AWS. The company’s online application uses the database to process transactions. The data analysis team uses the same production database to run reports for analytical processing. The company wants to reduce operational overhead by moving to managed services wherever possible.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Migrate to Amazon RDS for Microsoft SQL Server. Use read replicas for reporting purposes.
- B . Migrate to Microsoft SQL Server on Amazon EC2. Use Always On read replicas for reporting purposes.
- C . Migrate to Amazon DynamoDB. Use DynamoDB on-demand replicas for reporting purposes.
- D . Migrate to Amazon Aurora MySQL. Use Aurora read replicas for reporting purposes.
A
Explanation:
Amazon RDS for Microsoft SQL Server is a fully managed service that offers SQL Server 2014, 2016, 2017, and 2019 editions while offloading database administration tasks such as backups, patching, and scaling. Amazon RDS supports read replicas, which are read-only copies of the primary database that can be used for reporting purposes without affecting the performance of the online application. This solution will meet the requirements with the least operational overhead, as it does not require any code changes or manual intervention.
Reference: 1 provides an overview of Amazon RDS for Microsoft SQL Server and its benefits.
2 explains how to create and use read replicas with Amazon RDS.
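Creating the reporting replica amounts to one API call against the primary instance. A sketch with hypothetical instance identifiers and an illustrative instance class:

```python
# Read replica for the analytics team's reporting queries; identifiers
# and instance class are placeholders. Note that RDS for SQL Server read
# replicas require the Enterprise edition, which matches this scenario.
replica_params = {
    "DBInstanceIdentifier": "sqlserver-reporting-replica",
    "SourceDBInstanceIdentifier": "sqlserver-primary",
    "DBInstanceClass": "db.m5.xlarge",
}

# With boto3:
# boto3.client("rds").create_db_instance_read_replica(**replica_params)
print(replica_params["DBInstanceIdentifier"])
```

The analytics team then points its reporting tools at the replica's endpoint, isolating analytical load from the transactional workload.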
A company runs an application that stores and shares photos. Users upload the photos to an Amazon S3 bucket. Every day, users upload approximately 150 photos. The company wants to design a solution that creates a thumbnail of each new photo and stores the thumbnail in a second S3 bucket.
Which solution will meet these requirements MOST cost-effectively?
- A . Configure an Amazon EventBridge scheduled rule to invoke a script every minute on a long-running Amazon EMR cluster. Configure the script to generate thumbnails for the photos that do not have thumbnails. Configure the script to upload the thumbnails to the second S3 bucket.
- B . Configure an Amazon EventBridge scheduled rule to invoke a script every minute on a memory-optimized Amazon EC2 instance that is always on. Configure the script to generate thumbnails for the photos that do not have thumbnails. Configure the script to upload the thumbnails to the second S3 bucket.
- C . Configure an S3 event notification to invoke an AWS Lambda function each time a user uploads a new photo to the application. Configure the Lambda function to generate a thumbnail and to upload the thumbnail to the second S3 bucket.
- D . Configure S3 Storage Lens to invoke an AWS Lambda function each time a user uploads a new photo to the application. Configure the Lambda function to generate a thumbnail and to upload the thumbnail to a second S3 bucket.
C
Explanation:
The most cost-effective and scalable solution for generating thumbnails when photos are uploaded to an S3 bucket is to use S3 event notifications to trigger an AWS Lambda function. This approach avoids the need for a long-running EC2 instance or EMR cluster, making it highly cost-effective because Lambda only charges for the time it takes to process each event.
S3 Event Notifications: Automatically triggers the Lambda function when a new photo is uploaded to the S3 bucket.
AWS Lambda: A serverless compute service that scales automatically and only charges for execution time, which makes it the most economical choice when dealing with periodic events like photo uploads.
The Lambda function can generate the thumbnail and upload it to a second S3 bucket, fulfilling the requirement efficiently.
Option A and Option B (EMR or EC2 with scheduled scripts): These are less cost-effective as they involve continuously running infrastructure, which incurs unnecessary costs.
Option D (S3 Storage Lens): S3 Storage Lens is a tool for storage analytics and is not designed for event-based photo processing.
Reference: Amazon S3 Event Notifications
AWS Lambda Pricing
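The event-driven flow can be sketched as a Lambda handler that parses the S3 event record. The resize step (e.g. with Pillow) and the S3 get/put calls are shown as comments so the sketch runs without AWS access; the bucket names are hypothetical:

```python
# Lambda handler invoked by the S3 event notification for each uploaded
# photo (bucket names are placeholders).
THUMBNAIL_BUCKET = "photo-thumbnails-example"

def handler(event, context):
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # s3 = boto3.client("s3")
        # image_bytes = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # thumb_bytes = make_thumbnail(image_bytes)  # e.g. Pillow Image.thumbnail
        # s3.put_object(Bucket=THUMBNAIL_BUCKET, Key=f"thumb-{key}", Body=thumb_bytes)
        results.append({
            "source": f"{bucket}/{key}",
            "thumbnail": f"{THUMBNAIL_BUCKET}/thumb-{key}",
        })
    return results

# Minimal sample of the S3 event shape Lambda receives:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "photo-uploads"}, "object": {"key": "cat.jpg"}}}
    ]
}
print(handler(sample_event, None))
```

At roughly 150 invocations per day, this workload sits comfortably within Lambda's pay-per-use model, which is why option C is the most cost-effective choice.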