Practice Free SAA-C03 Exam Online Questions
A company is building a serverless application to process orders from an ecommerce site. The application needs to handle bursts of traffic during peak usage hours and to maintain high availability. The orders must be processed asynchronously in the order the application receives them.
Which solution will meet these requirements?
- A . Use an Amazon Simple Notification Service (Amazon SNS) topic to receive orders. Use an AWS Lambda function to process the orders.
- B . Use an Amazon Simple Queue Service (Amazon SQS) FIFO queue to receive orders. Use an AWS Lambda function to process the orders.
- C . Use an Amazon Simple Queue Service (Amazon SQS) standard queue to receive orders. Use AWS Batch jobs to process the orders.
- D . Use an Amazon Simple Notification Service (Amazon SNS) topic to receive orders. Use AWS Batch jobs to process the orders.
B
Explanation:
Amazon SQS FIFO queues ensure that orders are processed in the exact order received and maintain message deduplication.
AWS Lambda scales automatically, handling bursts and maintaining high availability in a cost-effective manner.
Options A and D: Amazon SNS does not guarantee ordered processing.
Option C: Standard SQS queues do not guarantee order.
Reference (AWS Documentation): Amazon SQS FIFO Queues
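As an illustration of the FIFO pattern, the sketch below (a minimal boto3 example; the queue URL, message group, and order fields are hypothetical) shows how an order producer might publish messages so that ordering and deduplication are preserved. A Lambda event source mapping on the queue then processes each message group in order.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical FIFO queue; FIFO queue names must end in ".fifo".
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo"


def publish_order(order: dict) -> None:
    """Send one order to the FIFO queue.

    A single MessageGroupId keeps all orders in the same group, so they are
    delivered in the order they were sent. MessageDeduplicationId suppresses
    accidental duplicates within the five-minute deduplication window.
    """
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(order),
        MessageGroupId="orders",
        MessageDeduplicationId=str(order["order_id"]),
    )


publish_order({"order_id": 1001, "items": ["sku-1", "sku-2"]})
```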
An ecommerce company runs applications in AWS accounts that are part of an organization in AWS Organizations. The applications run on Amazon Aurora PostgreSQL databases across all the accounts. The company needs to prevent malicious activity and must identify abnormal failed and incomplete login attempts to the databases.
Which solution will meet these requirements in the MOST operationally efficient way?
- A . Attach service control policies (SCPs) to the root of the organization to identify the failed login attempts.
- B . Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the organization.
- C . Publish the Aurora general logs to a log group in Amazon CloudWatch Logs. Export the log data to a central Amazon S3 bucket.
- D . Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central Amazon S3 bucket.
C
Explanation:
This option is the most operationally efficient way to meet the requirements because it allows the company to monitor and analyze the database login activity across all the accounts in the organization. By publishing the Aurora general logs to a log group in Amazon CloudWatch Logs, the company can enable the logging of the database connections, disconnections, and failed authentication attempts. By exporting the log data to a central Amazon S3 bucket, the company can store the log data in a durable and cost-effective way and use other AWS services or tools to perform further analysis or alerting on the log data. For example, the company can use Amazon Athena to query the log data in Amazon S3, or use Amazon SNS to send notifications based on the log data.
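A minimal sketch of the export step, assuming boto3 and that the Aurora general log is already published to CloudWatch Logs; the log group name, bucket, and time window below are hypothetical placeholders. The destination bucket must have a policy that allows the CloudWatch Logs service to write objects.

```python
import time
import boto3

logs = boto3.client("logs")

# Hypothetical names.
LOG_GROUP = "/aws/rds/cluster/orders-aurora/postgresql"
DEST_BUCKET = "central-db-audit-logs"

now_ms = int(time.time() * 1000)
one_day_ms = 24 * 60 * 60 * 1000

# Export the last 24 hours of Aurora PostgreSQL logs to the central bucket.
task = logs.create_export_task(
    taskName="aurora-login-audit-export",
    logGroupName=LOG_GROUP,
    fromTime=now_ms - one_day_ms,
    to=now_ms,
    destination=DEST_BUCKET,
    destinationPrefix="aurora-logs",
)
print("Started export task:", task["taskId"])
```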
A company maintains its accounting records in a custom application that runs on Amazon EC2 instances. The company needs to migrate the data to an AWS managed service for development and maintenance of the application data. The solution must require minimal operational support and provide immutable, cryptographically verifiable logs of data changes.
Which solution will meet these requirements MOST cost-effectively?
- A . Copy the records from the application into an Amazon Redshift cluster.
- B . Copy the records from the application into an Amazon Neptune cluster.
- C . Copy the records from the application into an Amazon Timestream database.
- D . Copy the records from the application into an Amazon Quantum Ledger Database (Amazon QLDB) ledger.
D
Explanation:
Amazon QLDB is the most cost-effective and suitable service for maintaining immutable, cryptographically verifiable logs of data changes. QLDB provides a fully managed ledger database with a built-in cryptographic hash chain, making it ideal for recording changes to accounting records, ensuring data integrity and security.
QLDB reduces operational overhead by offering fully managed services, so there’s no need for server management, and it’s built specifically to ensure immutability and verifiability, making it the best fit for the given requirements.
Option A (Redshift): Redshift is designed for analytics and not for immutable, cryptographically verifiable logs.
Option B (Neptune): Neptune is a graph database, which is not suitable for this use case.
Option C (Timestream): Timestream is a time series database optimized for time-stamped data, but it does not provide immutable or cryptographically verifiable logs.
Reference (AWS Documentation): Amazon QLDB; How QLDB Works
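A brief sketch of provisioning the ledger and retrieving its cryptographic digest with boto3; the ledger name is hypothetical, and the digest call simply illustrates the verification capability described above (the ledger must be ACTIVE before a digest can be requested).

```python
import boto3

qldb = boto3.client("qldb")

# Create a ledger for the accounting records (name is hypothetical).
qldb.create_ledger(
    Name="accounting-records",
    PermissionsMode="STANDARD",   # fine-grained IAM permissions per table
    DeletionProtection=True,      # guard against accidental deletion
)

# Once the ledger is ACTIVE, retrieve its digest: a SHA-256 hash that covers
# the entire journal and can be used to verify that no revision was altered.
digest = qldb.get_digest(Name="accounting-records")
print(digest["Digest"], digest["DigestTipAddress"])
```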
A media company hosts its video processing workload on AWS. The workload uses Amazon EC2 instances in an Auto Scaling group to handle varying levels of demand. The workload stores the original videos and the processed videos in an Amazon S3 bucket.
The company wants to ensure that the video processing workload is scalable. The company wants to prevent failed processing attempts because of resource constraints. The architecture must be able to handle sudden spikes in video uploads without impacting the processing capability.
Which solution will meet these requirements with the LEAST overhead?
- A . Migrate the workload from Amazon EC2 instances to AWS Lambda functions. Configure an Amazon S3 event notification to invoke the Lambda functions when a new video is uploaded. Configure the Lambda functions to process videos directly and to save processed videos back to the S3 bucket.
- B . Migrate the workload from Amazon EC2 instances to AWS Lambda functions. Use Amazon S3 to invoke an Amazon Simple Notification Service (Amazon SNS) topic when a new video is uploaded. Subscribe the Lambda functions to the SNS topic. Configure the Lambda functions to process the videos asynchronously and to save processed videos back to the S3 bucket.
- C . Configure an Amazon S3 event notification to send a message to an Amazon Simple Queue Service (Amazon SQS) queue when a new video is uploaded. Configure the existing Auto Scaling group to poll the SQS queue, process the videos, and save processed videos back to the S3 bucket.
- D . Configure an Amazon S3 upload trigger to invoke an AWS Step Functions state machine when a new video is uploaded. Configure the state machine to orchestrate the video processing workflow by placing a job message in the Amazon SQS queue. Configure the job message to invoke the EC2 instances to process the videos. Save processed videos back to the S3 bucket.
C
Explanation:
This solution addresses the scalability needs of the workload while preventing failed processing attempts due to resource constraints.
Amazon S3 event notifications can be used to trigger a message to an SQS queue whenever a new video is uploaded.
The existing Auto Scaling group of EC2 instances can poll the SQS queue, ensuring that the EC2 instances only process videos when there is a job in the queue.
SQS decouples the video upload and processing steps, allowing the system to handle sudden spikes in video uploads without overloading EC2 instances.
The use of Auto Scaling ensures that the EC2 instances can scale in or out based on the demand, maintaining cost efficiency while avoiding processing failures due to insufficient resources.
Reference (AWS Documentation): S3 Event Notifications details how to configure notifications for S3 events.
Amazon SQS is a fully managed message queuing service that decouples components of the system.
Auto Scaling EC2 explains how to manage automatic scaling of EC2 instances based on demand.
Why the other options are incorrect:
Options A and B: Migrating the workload to AWS Lambda requires rearchitecting the processing code, and Lambda's execution time and resource limits make it a poor fit for long-running video processing jobs, which increases the risk of failed processing attempts.
Option D: Adding AWS Step Functions introduces extra orchestration components to build and maintain, which is more overhead than simply decoupling uploads from processing with an SQS queue.
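A minimal sketch of wiring the S3 event notification to the queue with boto3 (the bucket name, queue ARN, and key filters are hypothetical); the SQS queue policy must allow s3.amazonaws.com to send messages for this bucket.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and queue.
BUCKET = "video-uploads-bucket"
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:video-processing-queue"

# Send a message to the queue whenever a new original video is uploaded.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": QUEUE_ARN,
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": "uploads/"},
                            {"Name": "suffix", "Value": ".mp4"},
                        ]
                    }
                },
            }
        ]
    },
)
```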
A company is migrating its multi-tier on-premises application to AWS. The application consists of a single-node MySQL database and a multi-node web tier. The company must minimize changes to the application during the migration. The company wants to improve application resiliency after the migration.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Migrate the web tier to Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer.
- B . Migrate the database to Amazon EC2 instances in an Auto Scaling group behind a Network Load Balancer.
- C . Migrate the database to an Amazon RDS Multi-AZ deployment.
- D . Migrate the web tier to an AWS Lambda function.
- E . Migrate the database to an Amazon DynamoDB table.
AC
Explanation:
An Auto Scaling group is a collection of EC2 instances that share similar characteristics and can be scaled in or out automatically based on demand. An Auto Scaling group can be placed behind an Application Load Balancer, which is a type of Elastic Load Balancing load balancer that distributes incoming traffic across multiple targets in multiple Availability Zones. This solution will improve the resiliency of the web tier by providing high availability, scalability, and fault tolerance.
An Amazon RDS Multi-AZ deployment is a configuration that automatically creates a primary database instance and synchronously replicates the data to a standby instance in a different Availability Zone. When a failure occurs, Amazon RDS automatically fails over to the standby instance without manual intervention. This solution will improve the resiliency of the database tier by providing data redundancy, backup support, and availability.
This combination of steps will meet the requirements with minimal changes to the application during the migration.
Reference:
1 describes the concept and benefits of an Auto Scaling group.
2 provides an overview of Application Load Balancers and their benefits.
3 explains how Amazon RDS Multi-AZ deployments work and their benefits.
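As a hedged sketch of the database step (identifiers, instance class, and credentials handling are illustrative; in practice the password would come from AWS Secrets Manager), a Multi-AZ MySQL instance can be created with boto3 as shown below. The application keeps connecting to a standard MySQL endpoint, so minimal code changes are required.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical identifiers and settings.
rds.create_db_instance(
    DBInstanceIdentifier="app-mysql",
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",  # use Secrets Manager in practice
    MultiAZ=True,                # synchronous standby in a second Availability Zone
    BackupRetentionPeriod=7,     # enables automated backups
)
```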
A gaming company wants to launch a new internet-facing application in multiple AWS Regions. The application will use the TCP and UDP protocols for communication. The company needs to provide high availability and minimum latency for global users.
Which combination of actions should a solutions architect take to meet these requirements? (Select TWO.)
- A . Create internal Network Load Balancers in front of the application in each Region.
- B . Create external Application Load Balancers in front of the application in each Region.
- C . Create an AWS Global Accelerator accelerator to route traffic to the load balancers in each Region.
- D . Configure Amazon Route 53 to use a geolocation routing policy to distribute the traffic.
- E . Configure Amazon CloudFront to handle the traffic and route requests to the application in each Region.
BC
Explanation:
This combination of actions will provide high availability and minimum latency for global users by using AWS Global Accelerator and Application Load Balancers.
AWS Global Accelerator is a networking service that helps you improve the availability, performance, and security of your internet-facing applications by using the AWS global network. It provides two global static public IPs that act as a fixed entry point to your application endpoints, such as Application Load Balancers, in multiple Regions [1]. Global Accelerator uses the AWS backbone network to route traffic to the optimal regional endpoint based on health, client location, and policies that you configure. It also offers TCP and UDP support, traffic encryption, and DDoS protection [2].
Application Load Balancers are external load balancers that distribute incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. They support both HTTP and HTTPS (SSL/TLS) protocols and offer advanced features such as content-based routing, health checks, and integration with other AWS services [3].
By creating external Application Load Balancers in front of the application in each Region, you can ensure that the application can handle varying load patterns and scale on demand. By creating an AWS Global Accelerator accelerator to route traffic to the load balancers in each Region, you can leverage the performance, security, and availability of the AWS global network to deliver the best possible user experience.
Reference:
[1] What is AWS Global Accelerator? – AWS Global Accelerator, Overview section.
[2] Network Acceleration Service – AWS Global Accelerator – AWS, Why AWS Global Accelerator? section.
[3] What is an Application Load Balancer? – Elastic Load Balancing, Overview section.
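A rough sketch of provisioning the accelerator with boto3 (names, ports, and the load balancer ARN are placeholders; the Global Accelerator control-plane API is served from the us-west-2 endpoint). Only a TCP listener is shown for brevity; a second listener would be created for UDP traffic.

```python
import boto3

# The Global Accelerator control-plane API must be called in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="game-accelerator",
    IpAddressType="IPV4",
    Enabled=True,
)
accelerator_arn = accelerator["Accelerator"]["AcceleratorArn"]

# Listen for TCP game traffic on a range of ports.
listener = ga.create_listener(
    AcceleratorArn=accelerator_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 7000, "ToPort": 7100}],
)

# Attach the Regional load balancer as an endpoint (ARN is hypothetical).
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[
        {
            "EndpointId": (
                "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                "loadbalancer/app/game-alb/abc123"
            ),
            "Weight": 128,
        }
    ],
)
```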
A company is migrating its on-premises Oracle database to an Amazon RDS for Oracle database. The company needs to retain data for 90 days to meet regulatory requirements. The company must also be able to restore the database to a specific point in time for up to 14 days.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create Amazon RDS automated backups. Set the retention period to 90 days.
- B . Create an Amazon RDS manual snapshot every day. Delete manual snapshots that are older than 90 days.
- C . Use the Amazon Aurora Clone feature for Oracle to create a point-in-time restore. Delete clones that are older than 90 days.
- D . Create a backup plan that has a retention period of 90 days by using AWS Backup for Amazon RDS.
D
Explanation:
AWS Backup is the most appropriate solution for managing backups with minimal operational overhead while meeting the regulatory requirement to retain data for 90 days and enabling point-in-time restore for up to 14 days.
AWS Backup: AWS Backup provides a centralized backup management solution that supports automated backup scheduling, retention management, and compliance reporting across AWS services, including Amazon RDS. By creating a backup plan, you can define a retention period (in this case, 90 days) and automate the backup process.
Point-in-Time Restore (PITR): Amazon RDS supports point-in-time restore for up to 35 days with automated backups. By using AWS Backup in conjunction with RDS, you ensure that your backup strategy meets the requirement for restoring data to a specific point in time within the last 14 days.
Why Not Other Options?
Option A (RDS Automated Backups): While RDS automated backups support PITR, they do not directly support retention beyond 35 days without manual intervention.
Option B (Manual Snapshots): Manually creating and managing snapshots is operationally intensive and less automated compared to AWS Backup.
Option C (Aurora Clones): Aurora Clone is a feature specific to Amazon Aurora and is not applicable to Amazon RDS for Oracle.
Reference (AWS Documentation): AWS Backup – Overview of AWS Backup and its capabilities.
Amazon RDS Automated Backups – Information on how RDS automated backups work and their limitations.
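A hedged sketch of such a plan with boto3 (the vault name, schedule, IAM role, and resource ARN are placeholders): a daily snapshot rule keeps recovery points for 90 days, while a continuous-backup rule provides point-in-time restore, whose retention is capped at 35 days and is set here to the required 14.

```python
import boto3

backup = boto3.client("backup")

# Hypothetical vault name; the vault must already exist.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "rds-oracle-90-day-plan",
        "Rules": [
            {
                # Daily snapshots kept for 90 days to satisfy the retention rule.
                "RuleName": "daily-snapshots-90d",
                "TargetBackupVaultName": "central-backup-vault",
                "ScheduleExpression": "cron(0 3 * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 90},
            },
            {
                # Continuous backup enables point-in-time restore; its retention
                # is limited to at most 35 days, so 14 days is used here.
                "RuleName": "continuous-pitr-14d",
                "TargetBackupVaultName": "central-backup-vault",
                "ScheduleExpression": "cron(0 3 * * ? *)",
                "EnableContinuousBackup": True,
                "Lifecycle": {"DeleteAfterDays": 14},
            },
        ],
    }
)

# Assign the RDS for Oracle instance to the plan (ARN and role are hypothetical).
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "oracle-db",
        "IamRoleArn": "arn:aws:iam::123456789012:role/aws-backup-service-role",
        "Resources": ["arn:aws:rds:us-east-1:123456789012:db:accounting-oracle"],
    },
)
```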
A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway REST API to process. The company wants to ensure that orders are processed in the order that they are received.
Which solution will meet these requirements?
- A . Use an API Gateway integration to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic when the application receives an order. Subscribe an AWS Lambda function to the topic to perform processing.
- B . Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing.
- C . Use an API Gateway authorizer to block any requests while the application processes an order.
- D . Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) standard queue when the application receives an order. Configure the SQS standard queue to invoke an AWS Lambda function for processing.
B
Explanation:
To ensure that orders are processed in the order that they are received, the best solution is to use an Amazon SQS FIFO (First-In-First-Out) queue. This type of queue maintains the exact order in which messages are sent and received. In this case, the application can send information about new orders to an Amazon API Gateway REST API, which can then use an API Gateway integration to send a message to an Amazon SQS FIFO queue for processing. The queue can then be configured to invoke an AWS Lambda function to perform the necessary processing on each order. This ensures that orders are processed in the exact order in which they are received.
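A short sketch of the final step, wiring the FIFO queue to the Lambda function with boto3 (the queue ARN and function name are hypothetical); the function's execution role needs sqs:ReceiveMessage, sqs:DeleteMessage, and sqs:GetQueueAttributes permissions.

```python
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical queue ARN and function name.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:orders.fifo",
    FunctionName="process-order",
    BatchSize=1,   # one order at a time; ordering is preserved per message group
    Enabled=True,
)
```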
A company's order system sends requests from clients to Amazon EC2 instances. The EC2 instances process the orders and then store the orders in a database on Amazon RDS. Users report that they must reprocess orders when the system fails. The company wants a resilient solution that can process orders automatically if a system outage occurs.
What should a solutions architect do to meet these requirements?
- A . Move the EC2 instances into an Auto Scaling group. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to target an Amazon Elastic Container Service (Amazon ECS) task.
- B . Move the EC2 instances into an Auto Scaling group behind an Application Load Balancer (ALB). Update the order system to send messages to the ALB endpoint.
- C . Move the EC2 instances into an Auto Scaling group. Configure the order system to send messages to an Amazon Simple Queue Service (Amazon SQS) queue. Configure the EC2 instances to consume messages from the queue.
- D . Create an Amazon Simple Notification Service (Amazon SNS) topic. Create an AWS Lambda function, and subscribe the function to the SNS topic. Configure the order system to send messages to the SNS topic. Send a command to the EC2 instances to process the messages by using AWS Systems Manager Run Command.
C
Explanation:
To meet the company’s requirements of having a resilient solution that can process orders automatically in case of a system outage, the solutions architect needs to implement a fault-tolerant architecture. Based on the given scenario, a potential solution is to move the EC2 instances into an Auto Scaling group and configure the order system to send messages to an Amazon Simple Queue Service (Amazon SQS) queue. The EC2 instances can then consume messages from the queue.
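A minimal worker sketch that the Auto Scaling group instances could run (the queue URL and the processing logic are placeholders). Because a message is deleted only after it has been processed, an instance or system failure simply leaves the order in the queue to be retried.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/order-requests"


def handle_order(order: dict) -> None:
    # Placeholder for the existing order-processing and RDS write logic.
    print("processing", order)


while True:
    # Long polling reduces empty responses and API cost.
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,
    )
    for msg in resp.get("Messages", []):
        handle_order(json.loads(msg["Body"]))
        # Delete only after successful processing; otherwise the message
        # becomes visible again after the visibility timeout and is retried.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```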
A law firm needs to share information with the public. The information includes hundreds of files that must be publicly readable. Modifications or deletions of the files by anyone before a designated future date are prohibited.
Which solution will meet these requirements in the MOST secure way?
- A . Upload all files to an Amazon S3 bucket that is configured for static website hosting. Grant read-only IAM permissions to any AWS principals that access the S3 bucket until the designated date.
- B . Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a retention period in accordance with the designated date. Configure the S3 bucket for static website hosting. Set an S3 bucket policy to allow read-only access to the objects.
- C . Create a new Amazon S3 bucket with S3 Versioning enabled. Configure an event trigger to run an AWS Lambda function in case of object modification or deletion. Configure the Lambda function to replace the objects with the original versions from a private S3 bucket.
- D . Upload all files to an Amazon S3 bucket that is configured for static website hosting. Select the folder that contains the files. Use S3 Object Lock with a retention period in accordance with the designated date. Grant read-only IAM permissions to any AWS principals that access the S3 bucket.
B
Explanation:
Amazon S3 is a service that provides object storage in the cloud. It can be used to store and serve static web content, such as HTML, CSS, JavaScript, images, and videos [1]. By creating a new Amazon S3 bucket and configuring it for static website hosting, the solution can share information with the public.
Amazon S3 Versioning is a feature that keeps multiple versions of an object in the same bucket. It helps protect objects from accidental deletion or overwriting by preserving, retrieving, and restoring every version of every object stored in an S3 bucket [2]. By enabling S3 Versioning on the new bucket, the solution can prevent modifications or deletions of the files by anyone.
Amazon S3 Object Lock is a feature that allows users to store objects using a write-once-read-many (WORM) model. It can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely. It requires S3 Versioning to be enabled on the bucket [3]. By using S3 Object Lock with a retention period in accordance with the designated date, the solution can prohibit modifications or deletions of the files by anyone before that date.
Amazon S3 bucket policies are JSON documents that define access permissions for a bucket and its objects. They can be used to grant or deny access to specific users or groups based on conditions such as IP address, time of day, or source bucket. By setting an S3 bucket policy to allow read-only access to the objects, the solution can ensure that the files are publicly readable.
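A hedged sketch of the bucket setup with boto3 (the bucket name, retention period, and policy are placeholders, and Block Public Access is assumed to be configured to permit the public-read policy). Object Lock must be enabled when the bucket is created, and compliance-mode retention prevents anyone, including the bucket owner, from deleting or overwriting objects before the retain-until date.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "law-firm-public-records"  # hypothetical name

# Object Lock can only be enabled at bucket creation (this also enables versioning).
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Apply a default compliance-mode retention so every new object stays locked
# until the designated date has passed (expressed here as a day count).
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 180}},
    },
)

# Allow anonymous read-only access to the objects for the public website.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```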