Practice Free SAA-C03 Exam Online Questions
A company recently migrated a data warehouse to AWS. The company has an AWS Direct Connect connection to AWS. Company users query the data warehouse by using a visualization tool. The average size of the queries that the data warehouse returns is 50 MB. The average visualization that the visualization tool produces is 500 KB in size. The result sets that the data warehouse returns are not cached.
The company wants to optimize costs for data transfers between the data warehouse and the
company.
Which solution will meet this requirement?
- A . Host the visualization tool on premises. Connect to the data warehouse directly through the internet.
- B . Host the visualization tool in the same AWS Region as the data warehouse. Access the visualization tool through the internet.
- C . Host the visualization tool on premises. Connect to the data warehouse through the Direct Connect connection.
- D . Host the visualization tool in the same AWS Region as the data warehouse. Access the visualization tool through the Direct Connect connection.
D
Explanation:
Hosting the visualization tool in the same AWS Region as the data warehouse keeps the 50 MB query result sets inside AWS, so they never cross the Direct Connect connection. Only the much smaller 500 KB visualizations are transferred back to the company, and delivering them over Direct Connect uses lower data transfer out rates than the public internet. This minimizes data transfer costs.
A company is launching a new application that will be hosted on Amazon EC2 instances. A solutions architect needs to design a solution that does not allow public IPv4 access that originates from the internet. However, the solution must allow the EC2 instances to make outbound IPv4 internet requests.
Which solution will meet these requirements?
- A . Deploy a NAT gateway in public subnets in both Availability Zones. Create and configure one route table for each private subnet.
- B . Deploy an internet gateway in public subnets in both Availability Zones. Create and configure a shared route table for the private subnets.
- C . Deploy a NAT gateway in public subnets in both Availability Zones. Create and configure a shared route table for the private subnets.
- D . Deploy an egress-only internet gateway in public subnets in both Availability Zones. Create and configure one route table for each private subnet.
C
Explanation:
Why Option C is Correct:
NAT Gateway: Allows private subnets to access the internet for outbound requests while preventing inbound connections.
High Availability: Deploying NAT gateways in both AZs ensures fault tolerance.
Shared Route Table: Simplifies routing configuration for private subnets.
Why Other Options Are Not Ideal:
Option A: Creating separate route tables for each subnet adds unnecessary complexity.
Option B: Internet gateways allow inbound access, violating the requirement to block public IPv4 access.
Option D: Egress-only internet gateways are designed for IPv6, not IPv4.
Reference: AWS Documentation – Amazon VPC NAT Gateways
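As a rough illustration, here is a minimal boto3 sketch of this pattern. The subnet and route table IDs are placeholders, and a production setup would repeat the NAT gateway step in each Availability Zone:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs for illustration only.
PUBLIC_SUBNET_ID = "subnet-0aaa1111bbbb2222c"
PRIVATE_ROUTE_TABLE_ID = "rtb-0ddd3333eeee4444f"

# Allocate an Elastic IP and create a NAT gateway in a public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId=PUBLIC_SUBNET_ID,
    AllocationId=eip["AllocationId"],
)
nat_gateway_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before routing through it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gateway_id])

# Point the private subnets' shared route table at the NAT gateway for all
# outbound IPv4 traffic. The instances remain unreachable from the internet
# because they have no public IPs and no inbound route from the internet gateway.
ec2.create_route(
    RouteTableId=PRIVATE_ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gateway_id,
)
```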
A company has an Amazon S3 bucket that contains critical data. The company must protect the data from accidental deletion.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
- A . Enable versioning on the S3 bucket.
- B . Enable MFA Delete on the S3 bucket.
- C . Create a bucket policy on the S3 bucket.
- D . Enable default encryption on the S3 bucket.
- E . Create a lifecycle policy for the objects in the S3 bucket.
A, B
Explanation:
To protect data in an S3 bucket from accidental deletion, versioning should be enabled, which enables you to preserve, retrieve, and restore every version of every object in an S3 bucket. Additionally, enabling MFA (multi-factor authentication) Delete on the S3 bucket adds an extra layer of protection by requiring an authentication token in addition to the user’s access keys to delete objects in the bucket.
Reference: AWS S3 Versioning documentation:
https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
AWS S3 MFA Delete documentation:
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html
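A minimal boto3 sketch of both steps follows. The bucket name, account ID, and MFA serial/token are hypothetical, and note that MFA Delete can only be enabled by the bucket owner using root credentials:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "critical-data-bucket"  # hypothetical bucket name

# Step 1: enable versioning so every object revision is preserved and
# a delete only adds a delete marker instead of destroying data.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Step 2: enable MFA Delete. This call must be made with root credentials
# and requires the MFA device serial number plus a current token code.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
)
```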
A company serves a dynamic website from a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). The website needs to support multiple languages to serve customers around the world. The website's architecture is running in the us-west-1 Region and is exhibiting high request latency for users that are located in other parts of the world.
The website needs to serve requests quickly and efficiently regardless of a user's location. However, the company does not want to recreate the existing architecture across multiple Regions.
What should a solutions architect do to meet these requirements?
- A . Replace the existing architecture with a website that is served from an Amazon S3 bucket. Configure an Amazon CloudFront distribution with the S3 bucket as the origin. Set the cache behavior settings to cache based on the Accept-Language request header.
- B . Configure an Amazon CloudFront distribution with the ALB as the origin. Set the cache behavior settings to cache based on the Accept-Language request header.
- C . Create an Amazon API Gateway API that is integrated with the ALB. Configure the API to use the HTTP integration type. Set up an API Gateway stage to enable the API cache based on the Accept-Language request header.
- D . Launch an EC2 instance in each additional Region and configure NGINX to act as a cache server for that Region. Put all the EC2 instances and the ALB behind an Amazon Route 53 record set with a geolocation routing policy.
B
Explanation:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/header-caching.html
Configuring caching based on the language of the viewer: if you want CloudFront to cache different versions of your objects based on the language specified in the request, configure CloudFront to forward the Accept-Language header to your origin.
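One way to express this with boto3 is a cache policy that whitelists the Accept-Language header in the cache key; this is a sketch, and the policy name is hypothetical:

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Create a cache policy that includes the Accept-Language header in the
# cache key, so CloudFront stores one cached copy per language variant.
response = cloudfront.create_cache_policy(
    CachePolicyConfig={
        "Name": "CacheByAcceptLanguage",  # hypothetical policy name
        "MinTTL": 0,
        "DefaultTTL": 86400,
        "MaxTTL": 31536000,
        "ParametersInCacheKeyAndForwardedToOrigin": {
            "EnableAcceptEncodingGzip": True,
            "HeadersConfig": {
                "HeaderBehavior": "whitelist",
                "Headers": {"Quantity": 1, "Items": ["Accept-Language"]},
            },
            "CookiesConfig": {"CookieBehavior": "none"},
            "QueryStringsConfig": {"QueryStringBehavior": "none"},
        },
    }
)
cache_policy_id = response["CachePolicy"]["Id"]
# Attach cache_policy_id to the distribution's default cache behavior
# (with the ALB as the origin) via update_distribution.
```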
A company is migrating its on-premises PostgreSQL database to Amazon Aurora PostgreSQL. The on-premises database must remain online and accessible during the migration. The Aurora database must remain synchronized with the on-premises database.
Which combination of actions must a solutions architect take to meet these requirements? (Choose two.)
- A . Create an ongoing replication task.
- B . Create a database backup of the on-premises database.
- C . Create an AWS Database Migration Service (AWS DMS) replication server.
- D . Convert the database schema by using the AWS Schema Conversion Tool (AWS SCT).
- E . Create an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor the database synchronization.
AC
Explanation:
AWS Database Migration Service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle or Microsoft SQL Server to Amazon Aurora. With AWS DMS, you can also continuously replicate data with low latency from any supported source to any supported target. Here, a DMS replication server (replication instance) performs the migration, and an ongoing replication task with change data capture (CDC) keeps the Aurora target synchronized with the on-premises source while it remains online. Because both the source and the target are PostgreSQL, the migration is homogeneous and the AWS Schema Conversion Tool is not needed. https://aws.amazon.com/dms/
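A hedged boto3 sketch of the ongoing replication task follows; all ARNs are placeholders, and the source/target endpoints would point at the on-premises PostgreSQL database and the Aurora PostgreSQL cluster:

```python
import json

import boto3

dms = boto3.client("dms")

# Placeholder ARNs for illustration only.
SOURCE_ARN = "arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE"
TARGET_ARN = "arn:aws:dms:us-east-1:111122223333:endpoint:TARGET"
INSTANCE_ARN = "arn:aws:dms:us-east-1:111122223333:rep:INSTANCE"

# "full-load-and-cdc" performs the initial copy and then keeps applying
# ongoing changes, so the source database stays online and in sync.
dms.create_replication_task(
    ReplicationTaskIdentifier="pg-to-aurora-ongoing",
    SourceEndpointArn=SOURCE_ARN,
    TargetEndpointArn=TARGET_ARN,
    ReplicationInstanceArn=INSTANCE_ARN,
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```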
A developer is creating a serverless application that performs video encoding. The encoding process runs as background jobs and takes several minutes to encode each video. The process must not send an immediate result to users.
The developer is using Amazon API Gateway to manage an API for the application. The developer needs to run test invocations and request validations. The developer must distribute API keys to control access to the API.
Which solution will meet these requirements?
- A . Create an HTTP API. Create an AWS Lambda function to handle the encoding jobs. Integrate the function with the HTTP API. Use the Event invocation type to call the Lambda function.
- B . Create a REST API with the default endpoint type. Create an AWS Lambda function to handle the encoding jobs. Integrate the function with the REST API. Use the Event invocation type to call the Lambda function.
- C . Create an HTTP API. Create an AWS Lambda function to handle the encoding jobs. Integrate the function with the HTTP API. Use the RequestResponse invocation type to call the Lambda function.
- D . Create a REST API with the default endpoint type. Create an AWS Lambda function to handle the encoding jobs. Integrate the function with the REST API. Use the RequestResponse invocation type to call the Lambda function.
B
Explanation:
Background Jobs with Event Invocation Type:
The Event invocation type is asynchronous: the Lambda function processes the request in the background and does not return an immediate result to the caller. This is ideal for video encoding tasks that take several minutes.
REST API vs. HTTP API:
REST APIs support advanced features like API keys, request validation, and throttling that HTTP APIs do not support fully.
Since the developer needs API keys and request validations, a REST API is the correct choice.
Integration with Lambda:
AWS Lambda integration is seamless with REST APIs, and using the Event invocation ensures asynchronous processing.
Incorrect Options Analysis:
Option A: HTTP APIs lack full support for API keys and request validation.
Options C and D: The RequestResponse invocation type waits for an immediate response, which is unsuitable for background jobs.
Reference: AWS Lambda Invocation Types
Amazon API Gateway REST APIs
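For illustration, this is how the function could be invoked asynchronously with boto3 (API Gateway's AWS integration achieves the same by setting the X-Amz-Invocation-Type header to Event); the function name and payload fields are hypothetical:

```python
import json

import boto3

lambda_client = boto3.client("lambda")

# InvocationType="Event" queues the request for asynchronous execution and
# returns HTTP 202 immediately, without waiting for the encoding to finish.
response = lambda_client.invoke(
    FunctionName="video-encoder",  # hypothetical function name
    InvocationType="Event",
    Payload=json.dumps({"video_id": "abc123", "format": "h264"}),
)
print(response["StatusCode"])  # 202 for asynchronous invocations
```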
A company runs an application that receives data from thousands of geographically dispersed remote devices that use UDP. The application processes the data immediately and sends a message back to the device if necessary. No data is stored.
The company needs a solution that minimizes latency for the data transmission from the devices. The solution also must provide rapid failover to another AWS Region.
Which solution will meet these requirements?
- A . Configure an Amazon Route 53 failover routing policy. Create a Network Load Balancer (NLB) in each of the two Regions. Configure the NLB to invoke an AWS Lambda function to process the data.
- B . Use AWS Global Accelerator. Create a Network Load Balancer (NLB) in each of the two Regions as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the NLB. Process the data in Amazon ECS.
- C . Use AWS Global Accelerator. Create an Application Load Balancer (ALB) in each of the two Regions as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the ALB. Process the data in Amazon ECS.
- D . Configure an Amazon Route 53 failover routing policy. Create an Application Load Balancer (ALB) in each of the two Regions. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the ALB. Process the data in Amazon ECS.
B
Explanation:
To meet the requirements of minimizing latency for data transmission from the devices and providing rapid failover to another AWS Region, the best solution is AWS Global Accelerator in combination with a Network Load Balancer (NLB) and Amazon Elastic Container Service (Amazon ECS). AWS Global Accelerator improves the availability and performance of applications by using static anycast IP addresses to route traffic over the AWS global network to optimal endpoints. With Global Accelerator, you can direct traffic to endpoints in multiple Regions and get automatic failover to another AWS Region. An NLB is required rather than an ALB because NLBs support UDP listeners, while ALBs handle only HTTP and HTTPS traffic.
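A hedged boto3 sketch of the accelerator wiring follows; the accelerator name, UDP port, Regions, and NLB ARNs are all assumptions for illustration:

```python
import boto3

# The Global Accelerator control-plane API is served from us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="udp-ingest",  # hypothetical name
    IpAddressType="IPV4",
    Enabled=True,
)["Accelerator"]

# A UDP listener on the port the devices send to (assumed here: 5000).
listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 5000, "ToPort": 5000}],
)["Listener"]

# One endpoint group per Region, each fronted by that Region's NLB,
# so traffic fails over rapidly if a Region becomes unhealthy.
for region, nlb_arn in [
    ("us-west-1", "arn:aws:elasticloadbalancing:us-west-1:111122223333:loadbalancer/net/a/1"),
    ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/b/2"),
]:
    ga.create_endpoint_group(
        ListenerArn=listener["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
    )
```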
A company maintains an Amazon RDS database that maps users to cost centers. The company has accounts in an organization in AWS Organizations. The company needs a solution that will tag all resources that are created in a specific AWS account in the organization. The solution must tag each resource with the cost center ID of the user who created the resource.
Which solution will meet these requirements?
- A . Move the specific AWS account to a new organizational unit (OU) in Organizations from the management account. Create a service control policy (SCP) that requires all existing resources to have the correct cost center tag before the resources are created. Apply the SCP to the new OU.
- B . Create an AWS Lambda function to tag the resources after the Lambda function looks up the appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function.
- C . Create an AWS CloudFormation stack to deploy an AWS Lambda function. Configure the Lambda function to look up the appropriate cost center from the RDS database and to tag resources. Create an Amazon EventBridge scheduled rule to invoke the CloudFormation stack.
- D . Create an AWS Lambda function to tag the resources with a default value. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function when a resource is missing the cost center tag.
B
Explanation:
AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. Lambda can be used to tag resources with the cost center ID of the user who created the resource, by querying the RDS database that maps users to cost centers. Amazon EventBridge is a serverless event bus service that enables event-driven architectures. EventBridge can be configured to react to AWS CloudTrail events, which are recorded API calls made by or on behalf of the AWS account. EventBridge can invoke the Lambda function when a resource is created in the specific AWS account, passing the user identity and resource information as parameters. This solution will meet the requirements, as it enables automatic tagging of resources based on the user and cost center mapping.
Reference: the AWS Lambda, Amazon EventBridge, and AWS CloudTrail documentation, which cover the benefits of each service and the concept of CloudTrail events.
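A hedged boto3 sketch of the EventBridge wiring follows; the rule name, matched event names, and function ARN are illustrative only:

```python
import json

import boto3

events = boto3.client("events")

# Match CloudTrail-recorded API calls (e.g., EC2 RunInstances) so the
# tagging function runs whenever a resource is created in the account.
events.put_rule(
    Name="tag-new-resources",  # hypothetical rule name
    EventPattern=json.dumps({
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {"eventName": ["RunInstances", "CreateBucket"]},
    }),
)

# Route matching events to the tagging Lambda function. The function also
# needs a resource-based permission allowing events.amazonaws.com to invoke it.
events.put_targets(
    Rule="tag-new-resources",
    Targets=[{
        "Id": "cost-center-tagger",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:cost-center-tagger",
    }],
)
# The Lambda function reads detail.userIdentity from the event, looks up
# that user's cost center in the RDS database, and applies the tag.
```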
A company is developing a mobile game that streams score updates to a backend processor and then posts results on a leaderboard. A solutions architect needs to design a solution that can handle large traffic spikes, process the mobile game updates in order of receipt, and store the processed updates in a highly available database. The company also wants to minimize the management overhead required to maintain the solution.
What should the solutions architect do to meet these requirements?
- A . Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams with AWS Lambda. Store the processed updates in Amazon DynamoDB.
- B . Push score updates to Amazon Kinesis Data Streams. Process the updates with a fleet of Amazon EC2 instances set up for Auto Scaling. Store the processed updates in Amazon Redshift.
- C . Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS Lambda function to the SNS topic to process the updates. Store the processed updates in a SQL database running on Amazon EC2.
- D . Push score updates to an Amazon Simple Queue Service (Amazon SQS) queue. Use a fleet of Amazon EC2 instances with Auto Scaling to process the updates in the SQS queue. Store the processed updates in an Amazon RDS Multi-AZ DB instance.
A
Explanation:
Amazon Kinesis Data Streams is a scalable and reliable service that can ingest, buffer, and process streaming data in real-time. It can handle large traffic spikes and preserve the order of the incoming data records. AWS Lambda is a serverless compute service that can process the data streams from Kinesis Data Streams without requiring any infrastructure management. It can also scale automatically to match the throughput of the data stream. Amazon DynamoDB is a fully managed, highly available, and fast NoSQL database that can store the processed updates from Lambda. It can also handle high write throughput and provide consistent performance. By using these services, the solutions architect can design a solution that meets the requirements of the company with the least operational overhead.
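As a sketch of the processing step, a Lambda handler consuming the Kinesis stream might look like the following; the table name and payload fields are assumptions:

```python
import base64
import json
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("leaderboard")  # hypothetical table name

def handler(event, context):
    # Lambda receives Kinesis records in order per shard; each payload
    # arrives base64-encoded inside the event.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(
            Item={
                "player_id": payload["player_id"],
                "score": int(payload["score"]),
                # boto3's DynamoDB resource requires Decimal, not float.
                "received_at": Decimal(
                    str(record["kinesis"]["approximateArrivalTimestamp"])
                ),
            }
        )
```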
A company’s security team requests that network traffic be captured in VPC Flow Logs. The logs will be frequently accessed for 90 days and then accessed intermittently.
What should a solutions architect do to meet these requirements when configuring the logs?
- A . Use Amazon CloudWatch as the target. Set the CloudWatch log group with an expiration of 90 days.
- B . Use Amazon Kinesis as the target. Configure the Kinesis stream to always retain the logs for 90 days.
- C . Use AWS CloudTrail as the target. Configure CloudTrail to save to an Amazon S3 bucket, and enable S3 Intelligent-Tiering.
- D . Use Amazon S3 as the target. Enable an S3 Lifecycle policy to transition the logs to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
D
Explanation:
The AWS documentation below contains a table confirming that VPC Flow Logs can be published directly to Amazon S3; the logs do not need to be routed through CloudTrail or CloudWatch first. An S3 Lifecycle policy can then transition the logs to S3 Standard-IA after the 90-day period of frequent access.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-S3
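A minimal boto3 sketch of this configuration follows, with hypothetical VPC and bucket names; the AWSLogs/ prefix is where S3 flow log delivery writes by default:

```python
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

VPC_ID = "vpc-0aaa1111bbbb2222c"        # hypothetical VPC ID
LOG_BUCKET = "flow-log-archive-bucket"  # hypothetical bucket name

# Publish VPC Flow Logs directly to S3 -- no CloudWatch or CloudTrail hop.
ec2.create_flow_logs(
    ResourceIds=[VPC_ID],
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination=f"arn:aws:s3:::{LOG_BUCKET}",
)

# After 90 days of frequent access, shift the logs to S3 Standard-IA.
s3.put_bucket_lifecycle_configuration(
    Bucket=LOG_BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "flow-logs-to-ia",
            "Status": "Enabled",
            "Filter": {"Prefix": "AWSLogs/"},
            "Transitions": [{"Days": 90, "StorageClass": "STANDARD_IA"}],
        }]
    },
)
```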