Practice Free SAA-C03 Exam Online Questions
A solutions architect is creating a new Amazon CloudFront distribution for an application. Some of the information submitted by users is sensitive. The application uses HTTPS but needs another layer of security. The sensitive information should be protected throughout the entire application stack, and access to the information should be restricted to certain applications.
Which action should the solutions architect take?
- A . Configure a CloudFront signed URL.
- B . Configure a CloudFront signed cookie.
- C . Configure a CloudFront field-level encryption profile.
- D . Configure CloudFront and set the Origin Protocol Policy setting to HTTPS Only for the Viewer Protocol Policy.
C
Explanation:
Field-level encryption in Amazon CloudFront provides end-to-end encryption for specific data fields (e.g., credit card numbers, social security numbers). It ensures that sensitive fields are encrypted at the edge before being forwarded to the origin, and only the authorized application with the private key can decrypt them.
This adds a layer of protection beyond HTTPS, which encrypts the whole payload but not individual fields. Signed URLs and cookies are for access control, not encryption. Setting HTTPS Only is a good practice but does not satisfy the field-specific encryption requirement.
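For illustration, a minimal boto3 sketch of creating such a profile (the key ID, provider ID, profile name, and field name below are hypothetical placeholders, not from the question):

```python
import boto3

# Minimal sketch: create a field-level encryption profile that encrypts a
# hypothetical "credit-card" form field at the edge with a previously
# uploaded public key. All identifiers are placeholders.
cloudfront = boto3.client("cloudfront")

profile = cloudfront.create_field_level_encryption_profile(
    FieldLevelEncryptionProfileConfig={
        "Name": "sensitive-fields-profile",        # placeholder name
        "CallerReference": "fle-profile-001",      # must be unique per call
        "EncryptionEntities": {
            "Quantity": 1,
            "Items": [
                {
                    "PublicKeyId": "K2EXAMPLEKEYID",  # hypothetical key ID
                    "ProviderId": "payments-app",     # hypothetical provider
                    "FieldPatterns": {"Quantity": 1, "Items": ["credit-card"]},
                }
            ],
        },
    }
)
print(profile["FieldLevelEncryptionProfile"]["Id"])
```

The profile is then attached to a field-level encryption configuration on the distribution; only the application holding the matching private key can decrypt those fields.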
A company recently migrated its application to AWS. The application runs on Amazon EC2 Linux instances in an Auto Scaling group across multiple Availability Zones. The application stores data in an Amazon Elastic File System (Amazon EFS) file system that uses EFS Standard-Infrequent Access storage. The application indexes the company’s files, and the index is stored in an Amazon RDS database.
The company needs to optimize storage costs with some application and services changes.
Which solution will meet these requirements MOST cost-effectively?
- A . Create an Amazon S3 bucket that uses an Intelligent-Tiering lifecycle policy. Copy all files to the S3 bucket. Update the application to use Amazon S3 API to store and retrieve files.
- B . Deploy Amazon FSx for Windows File Server file shares. Update the application to use CIFS protocol to store and retrieve files.
- C . Deploy Amazon FSx for OpenZFS file system shares. Update the application to use the new mount point to store and retrieve files.
- D . Create an Amazon S3 bucket that uses S3 Glacier Flexible Retrieval. Copy all files to the S3 bucket. Update the application to use Amazon S3 API to store and retrieve files as standard retrievals.
A
Explanation:
To optimize storage costs, migrating data from Amazon EFS to Amazon S3 with the Intelligent-Tiering storage class is a cost-effective solution.
Amazon S3 Intelligent-Tiering: This storage class automatically moves data between frequent, infrequent, and archive access tiers based on access patterns, reducing storage costs without impacting performance.
Cost Savings: By leveraging Intelligent-Tiering, the company can achieve significant cost savings, especially for data with unpredictable access patterns.
Application Update: The application must be updated to interact with Amazon S3 APIs for storing and retrieving files, ensuring seamless integration with the new storage solution.
This approach provides a scalable, durable, and cost-effective storage solution, aligning with the company’s optimization goals.
Reference: Amazon S3 Intelligent-Tiering Storage Class
Managing storage costs with Amazon S3 Intelligent-Tiering
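As a minimal illustration of the application change (bucket and object names are placeholders), files can be written directly to the Intelligent-Tiering storage class through the S3 API:

```python
import boto3

# Minimal sketch: store a file in the Intelligent-Tiering storage class so
# S3 moves it between access tiers automatically. Bucket and key names are
# placeholders.
s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-indexed-files",
    Key="documents/report-2024.pdf",
    Body=b"example file contents",
    StorageClass="INTELLIGENT_TIERING",
)

# Retrieval is a plain GET; unlike Glacier, no restore step is needed.
obj = s3.get_object(Bucket="example-indexed-files", Key="documents/report-2024.pdf")
```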
A telemarketing company is designing its customer call center functionality on AWS. The company needs a solution that provides multiple speaker recognition and generates transcript files. The company wants to query the transcript files to analyze the business patterns.
Which solution will meet these requirements?
- A . Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use machine learning (ML) models to analyze the transcript files.
- B . Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena to analyze the transcript files.
- C . Use Amazon Translate for multiple speaker recognition. Store the transcript files in Amazon Redshift. Use SQL queries to analyze the transcript files.
- D . Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use Amazon Textract to analyze the transcript files.
B
Explanation:
Amazon Transcribe supports automatic speech recognition (ASR) with speaker diarization (i.e., multiple speaker identification). The transcripts can be stored in Amazon S3 and queried using Amazon Athena, which provides a serverless, pay-as-you-go interactive querying model.
Reference: AWS Documentation ― Amazon Transcribe + Amazon Athena for Speech-to-Insights Pipelines
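A minimal boto3 sketch of starting such a job with speaker diarization enabled (job name, buckets, and speaker count are illustrative placeholders):

```python
import boto3

# Minimal sketch: start a transcription job with speaker diarization
# (ShowSpeakerLabels). All names are placeholders.
transcribe = boto3.client("transcribe")

transcribe.start_transcription_job(
    TranscriptionJobName="call-center-job-001",
    Media={"MediaFileUri": "s3://example-call-audio/call-001.wav"},
    MediaFormat="wav",
    LanguageCode="en-US",
    OutputBucketName="example-transcripts",  # Athena can query this bucket
    Settings={
        "ShowSpeakerLabels": True,  # label each speaker in the transcript
        "MaxSpeakerLabels": 4,      # required when ShowSpeakerLabels is set
    },
)
```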
A company is building a gaming application that needs to send unique events to multiple leaderboards, player matchmaking systems, and authentication services concurrently. The company requires an AWS-based event-driven system that delivers events in order and supports a publish-subscribe model. The gaming application must be the publisher, and the leaderboards, matchmaking systems, and authentication services must be the subscribers.
Which solution will meet these requirements?
- A . Amazon EventBridge event buses
- B . Amazon Simple Notification Service (Amazon SNS) FIFO topics
- C . Amazon Simple Notification Service (Amazon SNS) standard topics
- D . Amazon Simple Queue Service (Amazon SQS) FIFO queues
B
Explanation:
The requirement is an event-driven pub/sub system that guarantees ordered delivery of events.
Amazon SNS FIFO topics provide the publish-subscribe model along with FIFO (First-In-First-Out) delivery and exactly-once message processing, ensuring ordered delivery to multiple subscribers.
Option A, EventBridge, provides event buses but does not guarantee event ordering across multiple subscribers.
Option C (SNS standard topics) provides pub/sub but without ordering guarantees.
Option D (SQS FIFO queues) guarantees ordering, but SQS queues are point-to-point, not pub/sub.
Thus, Amazon SNS FIFO topics meet the requirements for ordered pub/sub messaging.
Reference: Amazon SNS FIFO Topics (https://docs.aws.amazon.com/sns/latest/dg/fifo-topics.html)
Amazon EventBridge (https://docs.aws.amazon.com/eventbridge/latest/userguide/what-is-amazon-eventbridge.html)
AWS Well-Architected Framework ― Performance Efficiency Pillar (https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf)
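A minimal boto3 sketch of the publisher side (topic name and event payload are placeholders; a FIFO topic name must end in ".fifo"):

```python
import boto3

# Minimal sketch: create a FIFO topic and publish a game event.
sns = boto3.client("sns")

topic = sns.create_topic(
    Name="game-events.fifo",
    Attributes={
        "FifoTopic": "true",
        "ContentBasedDeduplication": "true",  # dedupe on message content
    },
)

sns.publish(
    TopicArn=topic["TopicArn"],
    Message='{"event": "match_won", "playerId": "p-123"}',
    MessageGroupId="p-123",  # events in one group are delivered in order
)
```

Each subscriber (leaderboard, matchmaking, authentication) would typically be an SQS FIFO queue subscribed to this topic, preserving ordering end to end.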
A company wants a flexible compute solution that includes Amazon EC2 instances and AWS Fargate.
The company does not want to commit to multi-year contracts.
Which purchasing option will meet these requirements MOST cost-effectively?
- A . Purchase a 1-year EC2 Instance Savings Plan with the All Upfront option.
- B . Purchase a 1-year Compute Savings Plan with the No Upfront option.
- C . Purchase a 1-year Compute Savings Plan with the Partial Upfront option.
- D . Purchase a 1-year Compute Savings Plan with the All Upfront option.
B
Explanation:
To optimize costs for both Amazon EC2 and AWS Fargate, the best option is a Compute Savings Plan because it offers flexibility across instance families, Regions, and compute options including EC2, AWS Fargate, and AWS Lambda.
Unlike EC2 Instance Savings Plans, which apply only to specific instance families, Compute Savings Plans apply across multiple services.
Since the company does not want to commit to multi-year contracts or large upfront payments, the 1-year No Upfront Compute Savings Plan provides the greatest flexibility with no upfront capital commitment, while still offering cost savings over On-Demand pricing.
This option also aligns with cost-optimization best practices by allowing for scalability and service mix flexibility.
Reference: AWS Compute Savings Plans
AWS Pricing Models
A solutions architect needs to optimize a large data analytics job that runs on an Amazon EMR cluster. The job takes 13 hours to finish. The cluster has multiple core nodes and worker nodes deployed on large, compute-optimized instances.
After reviewing EMR logs, the solutions architect discovers that several nodes are idle for more than 5 hours while the job is running. The solutions architect needs to optimize cluster performance.
Which solution will meet this requirement MOST cost-effectively?
- A . Increase the number of core nodes to ensure there is enough processing power to handle the analytics job without any idle time.
- B . Use the EMR managed scaling feature to automatically resize the cluster based on workload.
- C . Migrate the analytics job to a set of AWS Lambda functions. Configure reserved concurrency for the functions.
- D . Migrate the analytics job core nodes to a memory-optimized instance type to reduce the total job runtime.
B
Explanation:
EMR managed scaling dynamically resizes the cluster by adding or removing nodes based on the workload. This feature helps minimize idle time and reduces costs by scaling the cluster to meet processing demands efficiently.
Option A: Increasing the number of core nodes might increase idle time further, as it does not address the root cause of underutilization.
Option C: Migrating the job to Lambda is infeasible for large analytics jobs due to resource and runtime constraints.
Option D: Changing to memory-optimized instances may not necessarily reduce idle time or optimize costs.
Reference: AWS Documentation ― EMR Managed Scaling
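A minimal boto3 sketch of attaching a managed scaling policy to an existing cluster (the cluster ID and capacity limits are placeholders):

```python
import boto3

# Minimal sketch: let EMR add and remove instances with the workload,
# within the stated capacity limits.
emr = boto3.client("emr")

emr.put_managed_scaling_policy(
    ClusterId="j-EXAMPLECLUSTERID",
    ManagedScalingPolicy={
        "ComputeLimits": {
            "UnitType": "Instances",
            "MinimumCapacityUnits": 2,   # never scale below 2 instances
            "MaximumCapacityUnits": 20,  # cap cost at 20 instances
        }
    },
)
```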
A company is building a serverless application to process orders from an ecommerce site. The application needs to handle bursts of traffic during peak usage hours and to maintain high availability. The orders must be processed asynchronously in the order the application receives them.
Which solution will meet these requirements?
- A . Use an Amazon Simple Notification Service (Amazon SNS) topic to receive orders. Use an AWS Lambda function to process the orders.
- B . Use an Amazon Simple Queue Service (Amazon SQS) FIFO queue to receive orders. Use an AWS Lambda function to process the orders.
- C . Use an Amazon Simple Queue Service (Amazon SQS) standard queue to receive orders. Use AWS Batch jobs to process the orders.
- D . Use an Amazon Simple Notification Service (Amazon SNS) topic to receive orders. Use AWS Batch jobs to process the orders.
B
Explanation:
Amazon SQS FIFO queues ensure that orders are processed in the exact order received and support message deduplication.
AWS Lambda scales automatically, handling bursts and maintaining high availability in a cost-effective manner.
Options A and D: Amazon SNS standard topics do not guarantee ordered processing.
Option C: Standard SQS queues do not guarantee order.
Reference: AWS Documentation ― Amazon SQS FIFO Queues
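A minimal boto3 sketch of enqueuing an order (queue URL and payload are placeholders; a FIFO queue name must end in ".fifo"):

```python
import boto3

# Minimal sketch: send an order to a FIFO queue for ordered, deduplicated
# processing.
sqs = boto3.client("sqs")

sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo",
    MessageBody='{"orderId": "1001", "status": "received"}',
    MessageGroupId="orders",        # messages in a group keep their order
    MessageDeduplicationId="1001",  # prevents duplicate order processing
)
```

A Lambda event source mapping on the queue then processes messages in order within each message group.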
A company deploys its applications on Amazon Elastic Kubernetes Service (Amazon EKS) behind an Application Load Balancer in an AWS Region. The application needs to store data in a PostgreSQL database engine. The company wants the data in the database to be highly available. The company also needs increased capacity for read workloads.
Which solution will meet these requirements with the MOST operational efficiency?
- A . Create an Amazon DynamoDB database table configured with global tables.
- B . Create an Amazon RDS database with Multi-AZ deployments.
- C . Create an Amazon RDS database with Multi-AZ DB cluster deployment.
- D . Create an Amazon RDS database configured with cross-Region read replicas.
C
Explanation:
Amazon RDS Multi-AZ DB cluster deployment ensures high availability by synchronously replicating data to two readable standby DB instances in separate Availability Zones (AZs), with automatic failover if the writer fails. Because the standbys are readable, the cluster's reader endpoint also provides increased capacity for read workloads. This solution offers the most operational efficiency with minimal manual intervention.
Option A (DynamoDB): DynamoDB is not suitable for a relational database workload, which requires a PostgreSQL engine.
Option B (RDS with Multi-AZ): While this provides high availability, the standby in a Multi-AZ DB instance deployment is not readable, so it doesn't offer read scaling capabilities.
Option D (Cross-Region Read Replicas): This adds complexity and is not necessary if the requirement is high availability within a single region.
Reference: AWS Documentation ― Amazon RDS Multi-AZ DB Cluster
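A minimal boto3 sketch of creating such a cluster (identifiers, sizes, and credentials are placeholders; real credentials belong in AWS Secrets Manager):

```python
import boto3

# Minimal sketch: create a Multi-AZ DB cluster (one writer plus two
# readable standbys in separate AZs). All values are placeholders.
rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="app-postgres-cluster",
    Engine="postgres",
    DBClusterInstanceClass="db.m6gd.large",
    AllocatedStorage=100,
    StorageType="io1",  # Multi-AZ DB clusters need provisioned storage (io1 shown here)
    Iops=3000,
    MasterUsername="appadmin",
    MasterUserPassword="REPLACE_ME",  # use Secrets Manager in practice
)
```

The application sends writes to the cluster endpoint and reads to the reader endpoint, which load balances across the standby instances.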
A startup company is hosting a website for its customers on an Amazon EC2 instance. The website consists of a stateless Python application and a MySQL database. The website serves only a small amount of traffic. The company is concerned about the reliability of the instance and needs to migrate to a highly available architecture. The company cannot modify the application code.
Which combination of actions should a solutions architect take to achieve high availability for the website? (Select TWO.)
- A . Provision an internet gateway in each Availability Zone in use.
- B . Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.
- C . Migrate the database to Amazon DynamoDB, and enable DynamoDB auto scaling.
- D . Use AWS DataSync to synchronize the database data across multiple EC2 instances.
- E . Create an Application Load Balancer to distribute traffic to an Auto Scaling group of EC2 instances that are distributed across two Availability Zones.
B, E
Explanation:
To achieve high availability for the website, two key actions should be taken:
Amazon RDS for MySQL Multi-AZ: By migrating the database to an RDS for MySQL Multi-AZ deployment, the database becomes highly available. Multi-AZ provides automatic failover from the primary database to a standby replica in another Availability Zone, ensuring database availability even in the case of an AZ failure.
Application Load Balancer and Auto Scaling: Deploying an Application Load Balancer (ALB) in front of the EC2 instances ensures that traffic is evenly distributed across the instances. Configuring an Auto Scaling group to run EC2 instances across multiple Availability Zones ensures that the application remains available even if one instance or one AZ becomes unavailable. This setup enhances fault tolerance and improves reliability.
Why Not Other Options?:
Option A (Internet Gateway per AZ): An internet gateway is attached to the VPC as a whole, not to individual Availability Zones, and is horizontally scaled and highly available by design. This option does not contribute to high availability.
Option C (DynamoDB + Auto Scaling): DynamoDB would require changes to the application code to switch from MySQL, which is not possible per the question’s constraints.
Option D (DataSync): AWS DataSync is used for data transfer and synchronization, not for achieving high availability for a database.
Reference: Amazon RDS Multi-AZ Deployments ― explanation of how Multi-AZ deployments work in Amazon RDS.
Application Load Balancing ― details on how to configure and use an ALB for distributing traffic across multiple instances.
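A minimal boto3 sketch of the database migration target (identifiers, sizes, and credentials are placeholders):

```python
import boto3

# Minimal sketch: create a Multi-AZ RDS for MySQL instance so the database
# tier survives an AZ failure. All values are placeholders.
rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="website-mysql",
    Engine="mysql",
    DBInstanceClass="db.t3.small",
    AllocatedStorage=20,
    MasterUsername="appadmin",
    MasterUserPassword="REPLACE_ME",  # use Secrets Manager in practice
    MultiAZ=True,  # synchronous standby in a second AZ with automatic failover
)
```

Because the application is stateless and speaks standard MySQL, no code changes are needed; only the database endpoint changes.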
A company has an application that runs on Amazon EC2 instances within a private subnet in a VPC. The instances access data in an Amazon S3 bucket in the same AWS Region. The VPC contains a NAT gateway in a public subnet to access the S3 bucket. The company wants to reduce costs by replacing the NAT gateway without compromising security or redundancy.
Which solution meets these requirements?
- A . Replace the NAT gateway with a NAT instance.
- B . Replace the NAT gateway with an internet gateway.
- C . Replace the NAT gateway with a gateway VPC endpoint.
- D . Replace the NAT gateway with an AWS Direct Connect connection.
C
Explanation:
A VPC gateway endpoint for Amazon S3 enables private connectivity to S3 without routing traffic through a NAT gateway or over the internet, eliminating NAT gateway costs. This solution is secure and redundant, as S3 endpoints are highly available by design.
AWS Documentation Extract:
"A gateway VPC endpoint enables you to privately connect your VPC to supported AWS services without requiring a NAT gateway or internet gateway."
(Source: Amazon VPC documentation, Gateway Endpoints)
A: NAT instances still incur operational overhead and costs.
B: Internet gateway exposes resources and does not provide private access.
D: Direct Connect is for hybrid networking, not for cost-efficient S3 access.
Reference: AWS Certified Solutions Architect ― Official Study Guide, VPC Networking and Endpoints.
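A minimal boto3 sketch of the replacement (the Region, VPC ID, and route table ID are placeholders):

```python
import boto3

# Minimal sketch: create a gateway endpoint for S3 and associate it with
# the private subnet's route table, so S3 traffic stays on the AWS network.
ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],  # adds an S3 prefix-list route
)
```

Gateway endpoints for S3 incur no hourly or data processing charges, which is what removes the NAT gateway cost.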
