Practice Free SAA-C03 Exam Online Questions
A company is designing a website that displays stock market prices to users. The company wants to use Amazon ElastiCache (Redis OSS) for the data caching layer. The company needs to ensure that the website’s data caching layer can automatically fail over to another node if necessary.
- A . Enable read replicas in ElastiCache (Redis OSS). Promote the read replica when necessary.
- B . Enable Multi-AZ in ElastiCache (Redis OSS). Fail over to a second node when necessary.
- C . Export a backup of the ElastiCache (Redis OSS) cache to an Amazon S3 bucket. Restore the cache to a second cluster when necessary.
- D . Export a backup of the ElastiCache (Redis OSS) cache by using AWS Backup. Restore the cache to a second cluster when necessary.
B
Explanation:
For high availability, Amazon ElastiCache for Redis supports Multi-AZ with automatic failover, which provides primary and replica nodes in different Availability Zones. If the primary node fails, Redis automatically promotes a replica to primary.
From AWS Documentation:
“When you enable Multi-AZ with automatic failover, Amazon ElastiCache automatically detects failures and promotes a read replica to primary with minimal downtime.”
(Source: Amazon ElastiCache for Redis User Guide)
Why B is correct:
Multi-AZ provides automatic failover and data replication.
Ensures continuous availability and protects against node or AZ failures.
Fully managed with no manual intervention needed.
Why others are incorrect:
A: Manual promotion is not automatic.
C & D: Restoring from backup is slow and meant for disaster recovery, not failover.
Reference: Amazon ElastiCache for Redis User Guide, “High Availability with Multi-AZ”; AWS Well-Architected Framework, Reliability Pillar
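As a minimal sketch, these are the parameters you would pass to the ElastiCache `create_replication_group` API to enable Multi-AZ with automatic failover. The replication group ID, node type, and node count are hypothetical placeholders, and no API call is made here:

```python
# Sketch of create_replication_group parameters for Multi-AZ with
# automatic failover. Names and sizes are placeholders, not real resources.
multi_az_params = {
    "ReplicationGroupId": "stock-cache",                   # hypothetical ID
    "ReplicationGroupDescription": "Stock price caching layer",
    "Engine": "redis",
    "CacheNodeType": "cache.t4g.medium",                   # example node type
    "NumCacheClusters": 2,                                 # 1 primary + 1 replica
    "AutomaticFailoverEnabled": True,                      # promote a replica on failure
    "MultiAZEnabled": True,                                # place nodes in different AZs
}

# Multi-AZ requires automatic failover and at least one replica node.
assert multi_az_params["NumCacheClusters"] >= 2
```

Note that `MultiAZEnabled` cannot be turned on without `AutomaticFailoverEnabled`; the two settings together are what let ElastiCache promote a replica in another Availability Zone without manual intervention.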
A company has resources across multiple AWS Regions and accounts. A newly hired solutions architect discovers that a previous employee did not provide details about the resources inventory. The solutions architect needs to build and map the relationship details of the various workloads across all accounts.
Which solution will meet these requirements in the MOST operationally efficient way?
- A . Use AWS Systems Manager Inventory to generate a map view from the detailed view report.
- B . Use AWS Step Functions to collect workload details. Build architecture diagrams of the workloads manually.
- C . Use Workload Discovery on AWS to generate architecture diagrams of the workloads.
- D . Use AWS X-Ray to view the workload details. Build architecture diagrams with relationships.
C
Explanation:
Workload Discovery on AWS is a tool that automatically discovers AWS resources across multiple accounts and Regions and builds visual architecture diagrams that show resource relationships. It is specifically designed to help teams understand and document existing workloads with minimal manual effort.
This service provides multi-account visibility, relationship mapping, and exportable diagrams, which
directly satisfies the need to “build and map the relationship details of the various workloads” with the least operational overhead.
Systems Manager Inventory (Option A) focuses on collecting configuration and inventory data (primarily for instances and some resources) and does not generate architecture diagrams or full relationship views.
Step Functions (Option B) and X-Ray (Option D) would require custom collection and manual diagramming and are not purpose-built for environment discovery and mapping.
A company uses an Amazon DynamoDB table to store data that the company receives from devices. The DynamoDB table supports a customer-facing website to display recent activity on customer devices. The company configured the table with provisioned throughput for writes and reads.
The company wants to calculate performance metrics for customer device data on a daily basis. The solution must have minimal effect on the table’s provisioned read and write capacity.
Which solution will meet these requirements?
- A . Use an Amazon Athena SQL query with the Amazon Athena DynamoDB connector to calculate performance metrics on a recurring schedule.
- B . Use an AWS Glue job with the AWS Glue DynamoDB export connector to calculate performance metrics on a recurring schedule.
- C . Use an Amazon Redshift COPY command to calculate performance metrics on a recurring schedule.
- D . Use an Amazon EMR job with an Apache Hive external table to calculate performance metrics on a recurring schedule.
A
Explanation:
Amazon Athena provides a cost-effective, serverless way to query data without affecting the performance of DynamoDB. By using the Athena DynamoDB connector, the company can perform the necessary SQL queries without consuming read capacity on the DynamoDB table, which is essential for minimizing impact on provisioned throughput.
Key benefits:
Minimal Impact on Provisioned Capacity: Athena queries do not directly impact DynamoDB’s read capacity, making it ideal for running analytics without affecting the customer-facing workloads.
Cost-Effective: Athena is a serverless solution, meaning you pay only for the queries you run, making it highly cost-effective compared to running a dedicated cluster like Amazon EMR or Redshift.
AWS Documentation: The use of Athena to query DynamoDB through its connector aligns with AWS’s best practices for performance efficiency and cost optimization.
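A daily metrics query through the Athena DynamoDB connector might look like the sketch below. The catalog, database, and table names are hypothetical (the connector registers the DynamoDB table as a Lambda-backed data source in Athena), and the parameters are only assembled, not sent:

```python
# Hypothetical SQL for a daily performance-metrics query. The catalog name
# "dynamo_catalog" and table "device_activity" are placeholders for the
# data source that the Athena DynamoDB connector registers.
sql = (
    "SELECT device_id, AVG(latency_ms) AS avg_latency "
    'FROM "dynamo_catalog"."default"."device_activity" '
    "WHERE event_date = current_date - interval '1' day "
    "GROUP BY device_id"
)

# Parameters for the Athena StartQueryExecution API; the results bucket
# is a placeholder.
query_params = {
    "QueryString": sql,
    "WorkGroup": "primary",
    "ResultConfiguration": {"OutputLocation": "s3://example-results-bucket/"},
}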
A company has an application that processes information from documents that users upload. When a user uploads a new document to an Amazon S3 bucket, an AWS Lambda function is invoked. The Lambda function processes information from the documents.
The company discovers that the application did not process many recently uploaded documents. The company wants to ensure that the application processes each document with retries if there is an error during the first attempt to process the document.
Which solution will meet these requirements?
- A . Create an Amazon API Gateway REST API that has a proxy integration to the Lambda function.
Update the application to send requests to the REST API.
- B . Configure a replication policy on the S3 bucket to stage the documents in another S3 bucket that an AWS Batch job processes on a daily schedule.
- C . Deploy an Application Load Balancer in front of the Lambda function that processes the documents.
- D . Configure an Amazon Simple Queue Service (Amazon SQS) queue as an event source for the Lambda function. Configure an S3 event notification on the S3 bucket to send new document upload events to the SQS queue.
D
Explanation:
Using SQS as a buffer between S3 and the Lambda function ensures durability and allows for retries in case of processing failures. Messages in the queue can be retried by Lambda, and failed processing can be directed to a dead-letter queue for further inspection. This guarantees reliable and scalable message-driven processing.
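The retry behavior described above can be sketched as a Lambda handler that consumes the SQS-wrapped S3 events and uses a partial batch response, so only the failed messages return to the queue for retry. The `process_document` function is a hypothetical stand-in for the real processing logic:

```python
import json

def process_document(bucket, key):
    # Placeholder for the real document-processing logic.
    print(f"processing s3://{bucket}/{key}")

def handler(event, context):
    """Process S3 upload events delivered through SQS. Failed message IDs
    are reported back so Lambda returns only those messages to the queue."""
    failures = []
    for record in event["Records"]:
        try:
            body = json.loads(record["body"])          # S3 event notification
            for s3_record in body.get("Records", []):
                bucket = s3_record["s3"]["bucket"]["name"]
                key = s3_record["s3"]["object"]["key"]
                process_document(bucket, key)
        except Exception:
            # Report this message as failed; SQS will redeliver it, and after
            # repeated failures it can move to a dead-letter queue.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

For the partial batch response to take effect, the event source mapping must have `ReportBatchItemFailures` enabled; otherwise any exception causes the whole batch to be retried.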
Reference: AWS Documentation C Using Amazon SQS as Lambda Event Source with S3 Trigger
A company is running a business-critical web application on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances are in an Auto Scaling group. The application uses an Amazon Aurora PostgreSQL database that is deployed in a single Availability Zone. The company wants the application to be highly available with minimum downtime and minimum loss of data.
Which solution will meet these requirements with the LEAST operational effort?
- A . Place the EC2 instances in different AWS Regions. Use Amazon Route 53 health checks to redirect traffic. Use Aurora PostgreSQL Cross-Region Replication.
- B . Configure the Auto Scaling group to use multiple Availability Zones. Configure the database as Multi-AZ. Configure an Amazon RDS Proxy instance for the database.
- C . Configure the Auto Scaling group to use one Availability Zone. Generate hourly snapshots of the database. Recover the database from the snapshots in the event of a failure.
- D . Configure the Auto Scaling group to use multiple AWS Regions. Write the data from the application to Amazon S3. Use S3 Event Notifications to launch an AWS Lambda function to write the data to the database.
B
Explanation:
The correct answer is B because the company wants high availability, minimum downtime, minimum data loss, and the least operational effort. For the application tier, configuring the Auto Scaling group to span multiple Availability Zones increases resilience by ensuring that the loss of one Availability Zone does not make the application unavailable. Since the application is already behind an Application Load Balancer, traffic can continue to be routed to healthy instances in other Availability Zones.
For the database tier, the single-AZ Aurora PostgreSQL deployment represents a single point of failure. Configuring the database for Multi-AZ improves availability and durability by maintaining a synchronized standby in another Availability Zone and supporting failover with minimal disruption. This approach is aligned with AWS best practices for relational database high availability.
Using Amazon RDS Proxy further improves availability at the application layer by managing database connections efficiently and helping applications reconnect more quickly during failover events. This reduces the operational complexity of handling transient database interruptions and can improve application resilience.
Option A introduces multi-Region complexity, which is not required to meet the stated need and creates more operational overhead.
Option C does not provide high availability because snapshots are a backup and restore mechanism, not an automatic failover solution, and hourly snapshots can result in significant data loss.
Option D is unnecessarily complex and changes the application data flow in a way that is not needed.
The simplest and most AWS-aligned solution is to make both the compute and database layers highly available across multiple Availability Zones. That provides resilience with the least operational burden.
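As a rough sketch under assumed names, making the Aurora cluster Multi-AZ means adding a reader instance in a second Availability Zone, and RDS Proxy is created separately to pool connections across failovers. All identifiers, ARNs, and subnet IDs below are placeholders:

```python
# Sketch of RDS create_db_instance parameters that add an Aurora reader in a
# second AZ (cluster and instance names are hypothetical).
reader_params = {
    "DBInstanceIdentifier": "app-db-reader-1",
    "DBClusterIdentifier": "app-db-cluster",
    "DBInstanceClass": "db.r6g.large",
    "Engine": "aurora-postgresql",
    "AvailabilityZone": "us-east-1b",   # a different AZ from the writer
}

# Sketch of RDS create_db_proxy parameters; ARNs and subnets are placeholders.
proxy_params = {
    "DBProxyName": "app-db-proxy",
    "EngineFamily": "POSTGRESQL",
    "Auth": [{"AuthScheme": "SECRETS",
              "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:app-db-creds"}],
    "RoleArn": "arn:aws:iam::123456789012:role/app-db-proxy-role",
    "VpcSubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
}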
A healthcare company is developing an AWS Lambda function that publishes notifications to an encrypted Amazon Simple Notification Service (Amazon SNS) topic. The notifications contain protected health information (PHI).
The SNS topic uses AWS Key Management Service (AWS KMS) customer-managed keys for encryption. The company must ensure that the application has the necessary permissions to publish messages securely to the SNS topic.
Which combination of steps will meet these requirements? (Select THREE.)
- A . Create a resource policy for the SNS topic that allows the Lambda function to publish messages to the topic.
- B . Use server-side encryption with AWS KMS keys (SSE-KMS) for the SNS topic instead of customer-managed keys.
- C . Create a resource policy for the encryption key that the SNS topic uses that has the necessary AWS KMS permissions.
- D . Specify the Lambda function’s Amazon Resource Name (ARN) in the SNS topic’s resource policy.
- E . Associate an Amazon API Gateway HTTP API with the SNS topic to control access to the topic by using API Gateway resource policies.
- F . Configure a Lambda execution role that has the necessary IAM permissions to use a customer-managed key in AWS KMS.
A, C, F
Explanation:
To securely publish messages to an encrypted Amazon SNS topic, the following steps are required:
A: Create a resource policy on the SNS topic that allows the Lambda function to publish messages to it.
C: Update the resource policy of the customer-managed KMS key so that the publisher can use the key for encryption operations (for example, kms:GenerateDataKey and kms:Decrypt).
F: Configure the Lambda execution role with IAM permissions to publish to the topic and to use the customer-managed key.
Option B changes the encryption approach rather than granting permissions, Option D alone is insufficient because the function’s permissions come from its execution role, and Option E (API Gateway) is unrelated to SNS access control.
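The three policies involved can be sketched as the JSON documents below, expressed as Python dictionaries. All ARNs are placeholders; the key point is that SNS uses the CMK to encrypt each message, so the publisher needs kms:GenerateDataKey* and kms:Decrypt in addition to sns:Publish:

```python
# Placeholder ARNs for illustration only.
lambda_role_arn = "arn:aws:iam::123456789012:role/phi-publisher-role"
topic_arn = "arn:aws:sns:us-east-1:123456789012:phi-notifications"
key_arn = "arn:aws:kms:us-east-1:123456789012:key/example-key-id"

# (A) SNS topic resource policy: allow the function's role to publish.
topic_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": lambda_role_arn},
        "Action": "sns:Publish",
        "Resource": topic_arn,
    }],
}

# (C) Statement for the KMS key policy granting the role use of the CMK.
key_policy_statement = {
    "Effect": "Allow",
    "Principal": {"AWS": lambda_role_arn},
    "Action": ["kms:GenerateDataKey*", "kms:Decrypt"],
    "Resource": "*",   # in a key policy, "*" means the key itself
}

# (F) IAM policy attached to the Lambda execution role.
execution_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "sns:Publish", "Resource": topic_arn},
        {"Effect": "Allow",
         "Action": ["kms:GenerateDataKey*", "kms:Decrypt"],
         "Resource": key_arn},
    ],
}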
A company has a web application that has thousands of users. The application uses 8-10 user-uploaded images to generate AI images. Users can download the generated AI images once every 6 hours. The company also has a premium user option that gives users the ability to download the generated AI images anytime.
The company uses the user-uploaded images to run Al model training twice a year. The company needs a storage solution to store the images.
Which storage solution meets these requirements MOST cost-effectively?
- A . Move uploaded images to Amazon S3 Glacier Deep Archive. Move premium user-generated AI images to S3 Standard. Move non-premium user-generated AI images to S3 Standard-Infrequent Access (S3 Standard-IA).
- B . Move uploaded images to Amazon S3 Glacier Deep Archive. Move all generated AI images to S3 Glacier Flexible Retrieval.
- C . Move uploaded images to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA). Move premium user-generated AI images to S3 Standard. Move non-premium user-generated AI images to S3 Standard-Infrequent Access (S3 Standard-IA).
- D . Move uploaded images to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA). Move all generated AI images to S3 Glacier Flexible Retrieval.
C
Explanation:
S3 One Zone-IA:
Suitable for infrequently accessed data that doesn’t require multiple Availability Zone resilience.
Cost-effective for storing user-uploaded images that are only used for AI model training twice a year.
S3 Standard:
Ideal for frequently accessed data with high durability and availability.
Store premium user-generated AI images here to ensure they are readily available for download at any time.
S3 Standard-IA:
Cost-effective storage for data that is accessed less frequently but still requires rapid retrieval.
Store non-premium user-generated AI images here, as these images are only downloaded once every 6 hours, making it a good balance between cost and accessibility.
Cost-Effectiveness: This solution optimizes storage costs by categorizing data based on access patterns and durability requirements, ensuring that each type of data is stored in the most cost-effective manner.
Reference: Amazon S3 Storage Classes
S3 One Zone-IA
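The tiering above can be sketched as a small helper that picks the S3 `StorageClass` per image category when building PutObject parameters. The category names are hypothetical; the storage-class constants are the values the S3 API accepts:

```python
# Map each image category from the scenario to its S3 storage class.
STORAGE_CLASS_BY_CATEGORY = {
    "uploaded": "ONEZONE_IA",             # training data, read twice a year
    "generated_premium": "STANDARD",      # downloadable at any time
    "generated_standard": "STANDARD_IA",  # downloaded at most every 6 hours
}

def put_object_params(bucket, key, category):
    """Build S3 PutObject parameters with the appropriate StorageClass."""
    return {
        "Bucket": bucket,
        "Key": key,
        "StorageClass": STORAGE_CLASS_BY_CATEGORY[category],
    }
```

In practice the same mapping could also be applied with S3 Lifecycle transition rules keyed on object prefixes instead of setting the class at upload time.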
A company needs to grant a team of developers access to the company’s AWS resources. The company must maintain a high level of security for the resources.
The company requires an access control solution that will prevent unauthorized access to the sensitive data.
Which solution will meet these requirements?
- A . Share the IAM user credentials for each development team member with the rest of the team to simplify access management and to streamline development workflows.
- B . Define IAM roles that have fine-grained permissions based on the principle of least privilege. Assign an IAM role to each developer.
- C . Create IAM access keys to grant programmatic access to AWS resources. Allow only developers to interact with AWS resources through API calls by using the access keys.
- D . Create an Amazon Cognito user pool. Grant developers access to AWS resources by using the user pool.
B
Explanation:
The best practice for secure access control in AWS is to use IAM roles with least-privilege policies, granting only the permissions necessary to perform required tasks. Assigning roles individually ensures that developers cannot overstep their intended access boundaries.
Sharing credentials or using permanent access keys increases the risk of security breaches. Cognito is primarily intended for managing user access to applications, not AWS infrastructure. Thus, Option B best meets security and access control requirements.
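A least-privilege developer role comes down to two policy documents: a trust policy naming who may assume the role, and a permissions policy scoping what the role can do. The sketch below uses placeholder ARNs and a hypothetical read-only DynamoDB scope:

```python
# Trust policy: only the named developer identity may assume the role.
# The user and table ARNs below are placeholders for illustration.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/dev-alice"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: least privilege, read-only access to a single table.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/dev-table",
    }],
}
```

Because the role issues temporary credentials on each assumption, there are no long-lived secrets to share or rotate, unlike the access keys in option C.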
A company runs an ecommerce application on premises on Microsoft SQL Server. The company is planning to migrate the application to the AWS Cloud. The application code contains complex T-SQL queries and stored procedures. The company wants to minimize database server maintenance and operating costs after the migration is completed. The company also wants to minimize the need to rewrite code as part of the migration effort.
Which solution will meet these requirements?
- A . Migrate the database to Amazon Aurora PostgreSQL. Turn on Babelfish.
- B . Migrate the database to Amazon S3. Use Amazon Redshift Spectrum for query processing.
- C . Migrate the database to Amazon RDS for SQL Server. Turn on Kerberos authentication.
- D . Migrate the database to an Amazon EMR cluster that includes multiple primary nodes.
A
Explanation:
The requirement combines two key goals: reduce post-migration database administration/maintenance cost and minimize application rewrites for an existing Microsoft SQL Server application that relies heavily on T-SQL and stored procedures. Amazon Aurora PostgreSQL-Compatible with Babelfish is designed for this exact migration pattern: it helps applications written for SQL Server to run with reduced code changes by providing compatibility for commonly used SQL Server semantics, including T-SQL constructs, procedural logic, and SQL Server-style connectivity patterns (depending on feature usage). Aurora itself is a managed database service that reduces operational overhead compared to self-managed databases by offloading tasks such as provisioning, patching (within service capabilities), backups, and high availability patterns.
Option C (RDS for SQL Server) would indeed minimize rewrites because it keeps the engine the same, but it typically does not optimize operating costs as effectively as migrating off commercial SQL Server licensing/edition costs; it also keeps you on the same engine family, which often carries higher license implications and may not meet the “minimize operating costs” intent as strongly as moving to Aurora PostgreSQL with Babelfish.
Option B is not suitable because Redshift Spectrum is for analytics over data in S3, not for running an OLTP ecommerce database with stored procedures.
Option D is also a mismatch: EMR is for big data processing frameworks, not a relational OLTP database replacement for SQL Server.
Therefore, A best balances lower ongoing operational cost with reduced refactoring effort by using Aurora’s managed capabilities while leveraging Babelfish to ease SQL Server-to-PostgreSQL migration friction.
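Babelfish is enabled through the Aurora PostgreSQL cluster parameter group rather than a dedicated API flag. A rough sketch of the `modify_db_cluster_parameter_group` parameters, with a placeholder parameter-group name, looks like this:

```python
# Sketch: turn Babelfish on via the cluster parameter group. The group name
# is a placeholder; "rds.babelfish_status" is the controlling parameter.
babelfish_params = {
    "DBClusterParameterGroupName": "aurora-babelfish-pg",
    "Parameters": [{
        "ParameterName": "rds.babelfish_status",
        "ParameterValue": "on",
        "ApplyMethod": "pending-reboot",
    }],
}
```

With Babelfish on, the cluster listens on a separate TDS port so existing SQL Server clients can connect with reduced code changes.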
A company runs multiple workloads in separate AWS environments. The company wants to optimize its AWS costs but must maintain the same level of performance for the environments.
The company’s production environment requires resources to be highly available. The other environments do not require highly available resources.
Each environment has the same set of networking components, including the following:
• 1 VPC
• 1 Application Load Balancer
• 4 subnets distributed across 2 Availability Zones (2 public subnets and 2 private subnets)
• 2 NAT gateways (1 in each public subnet)
• 1 internet gateway
Which solution will meet these requirements?
- A . Do not change the production environment workload. For each non-production workload, remove one NAT gateway and update the route tables for private subnets to target the remaining NAT gateway for the destination 0.0.0.0/0.
- B . Reduce the number of Availability Zones that all workloads in all environments use.
- C . Replace every NAT gateway with a t4g.large NAT instance. Update the route tables for each private subnet to target the NAT instance that is in the same Availability Zone for the destination 0.0.0.0/0.
- D . In each environment, create one transit gateway and remove one NAT gateway. Configure routing on the transit gateway to forward traffic for the destination 0.0.0.0/0 to the remaining NAT gateway. Update private subnet route tables to target the transit gateway for the destination 0.0.0.0/0.
A
Explanation:
Maintaining two NAT gateways for production ensures high availability. Reducing to one NAT gateway in non-production environments lowers cost while maintaining necessary connectivity. This approach is recommended by AWS for cost optimization in non-critical environments.
Reference Extract:
"For environments that do not require high availability, you can reduce costs by using a single NAT gateway and updating route tables accordingly."
Source: AWS Certified Solutions Architect Official Study Guide, Cost Optimization and NAT Gateway section.
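The route-table change in option A amounts to a single `replace_route` call per affected private subnet, repointing the default route at the surviving NAT gateway. The IDs below are placeholders for a non-production environment:

```python
# Sketch of EC2 replace_route parameters for the private subnet whose NAT
# gateway was removed. Route table and NAT gateway IDs are placeholders.
route_update = {
    "RouteTableId": "rtb-0examplebbbb",      # private subnet in the second AZ
    "DestinationCidrBlock": "0.0.0.0/0",     # default route to the internet
    "NatGatewayId": "nat-0exampleaaaa",      # the NAT gateway that is kept
}
```

The trade-off is that outbound traffic from the second AZ now crosses an AZ boundary and depends on a single NAT gateway, which is acceptable only because these environments do not require high availability.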
