Practice Free SAA-C03 Exam Online Questions
A company runs applications on AWS that connect to the company’s Amazon RDS database. The applications scale on weekends and at peak times of the year. The company wants to scale the database more effectively for its applications that connect to the database.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use Amazon DynamoDB with connection pooling with a target group configuration for the database. Change the applications to use the DynamoDB endpoint.
- B . Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint.
- C . Use a custom proxy that runs on Amazon EC2 as an intermediary to the database. Change the applications to use the custom proxy endpoint.
- D . Use an AWS Lambda function to provide connection pooling with a target group configuration for the database. Change the applications to use the Lambda function.
B
Explanation:
Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) that makes applications more scalable, more resilient to database failures, and more secure. RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency and application scalability. RDS Proxy also reduces failover times for Aurora and RDS databases by up to 66% and enables IAM authentication and AWS Secrets Manager integration for database access. RDS Proxy can be enabled for most applications with no code changes.
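As a concrete sketch, creating an RDS Proxy with boto3 takes roughly the parameters below. All names and ARNs here are illustrative placeholders, not values from the question:

```python
def rds_proxy_request(proxy_name, secret_arn, role_arn, subnet_ids):
    """Build the parameter dict for rds.create_db_proxy (boto3)."""
    return {
        "DBProxyName": proxy_name,
        "EngineFamily": "MYSQL",  # or "POSTGRESQL", matching the RDS engine
        "Auth": [{"AuthScheme": "SECRETS", "SecretArn": secret_arn}],
        "RoleArn": role_arn,      # IAM role that lets the proxy read the secret
        "VpcSubnetIds": subnet_ids,
        "RequireTLS": True,
    }

# Illustrative ARNs; substitute real ones.
params = rds_proxy_request(
    "app-proxy",
    "arn:aws:secretsmanager:us-west-2:123456789012:secret:db-creds",
    "arn:aws:iam::123456789012:role/rds-proxy-role",
    ["subnet-0aaa", "subnet-0bbb"],
)
# boto3.client("rds").create_db_proxy(**params)
# Applications then connect to the proxy endpoint instead of the DB endpoint.
```

The only application-side change is swapping the database hostname for the proxy endpoint, which is why this option carries the least operational overhead.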
A solutions architect needs to copy files from an Amazon S3 bucket to an Amazon Elastic File System (Amazon EFS) file system and another S3 bucket. The files must be copied continuously. New files are added to the original S3 bucket consistently. The copied files should be overwritten only if the source file changes.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer only data that has changed.
- B . Create an AWS Lambda function. Mount the file system to the function. Set up an S3 event notification to invoke the function when files are created and changed in Amazon S3. Configure the function to copy files to the file system and the destination S3 bucket.
- C . Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer all data.
- D . Launch an Amazon EC2 instance in the same VPC as the file system. Mount the file system. Create a script to routinely synchronize all objects that changed in the origin S3 bucket to the destination S3 bucket and the mounted file system.
A
Explanation:
AWS DataSync is a service that makes it easy to move large amounts of data between AWS storage services and on-premises storage systems. AWS DataSync can copy files from an S3 bucket to an EFS file system and another S3 bucket continuously, as well as overwrite only the files that have changed in the source. This solution will meet the requirements with the least operational overhead, as it does not require any code development or manual intervention.
Reference: the AWS DataSync documentation explains how to create DataSync locations for different storage services, how to create and configure DataSync tasks for data transfer, and the different transfer modes that DataSync supports.
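A sketch of the two DataSync tasks (one per destination) follows; the location ARNs are placeholders. The key detail is `TransferMode: "CHANGED"`, which copies only data that differs from the destination, so unchanged files are never re-copied:

```python
def datasync_task_request(source_loc_arn, dest_loc_arn):
    """Parameter dict for datasync.create_task."""
    return {
        "SourceLocationArn": source_loc_arn,
        "DestinationLocationArn": dest_loc_arn,
        "Options": {
            "TransferMode": "CHANGED",   # copy only data that differs at the destination
            "OverwriteMode": "ALWAYS",   # overwrite destination files when the source changes
        },
    }

SRC = "arn:aws:datasync:us-west-2:123456789012:location/loc-source-s3"
s3_to_efs = datasync_task_request(
    SRC, "arn:aws:datasync:us-west-2:123456789012:location/loc-efs")
s3_to_s3 = datasync_task_request(
    SRC, "arn:aws:datasync:us-west-2:123456789012:location/loc-dest-s3")
# datasync.create_task(**s3_to_efs); scheduled task executions keep the copies current.
```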
A gaming company hosts a browser-based application on AWS. The users of the application consume a large number of videos and images that are stored in Amazon S3. This content is the same for all users.
The application has increased in popularity, and millions of users worldwide are accessing these media files. The company wants to provide the files to the users while reducing the load on the origin.
Which solution meets these requirements MOST cost-effectively?
- A . Deploy an AWS Global Accelerator accelerator in front of the web servers.
- B . Deploy an Amazon CloudFront web distribution in front of the S3 bucket.
- C . Deploy an Amazon ElastiCache for Redis instance in front of the web servers.
- D . Deploy an Amazon ElastiCache for Memcached instance in front of the web servers.
B
Explanation:
Amazon CloudFront is a content delivery network (CDN) that caches content at edge locations worldwide, reducing the load on the origin and lowering latency for users everywhere. CloudFront supports dynamic content over HTTP and WebSocket (TCP-based) protocols; common use cases include dynamic API calls, web pages and web applications, and static files such as audio and images. It also supports on-demand media streaming over HTTP. Because the videos and images are identical for all users, a CloudFront distribution in front of the S3 bucket is the most cost-effective way to serve them to millions of users. ElastiCache (Redis or Memcached) accelerates reads from disk-based databases with in-memory caching, which does not address serving static media from S3. AWS Global Accelerator supports both UDP and TCP and is commonly used for non-HTTP use cases such as gaming, IoT, and voice over IP, or for HTTP use cases that need static IP addresses or fast regional failover; it does not cache content, so it would not reduce origin load.
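A minimal distribution configuration for this pattern might look like the sketch below (bucket domain and origin access control ID are hypothetical; the cache policy ID is AWS's managed "CachingOptimized" policy):

```python
def cloudfront_s3_distribution(bucket_domain, oac_id):
    """Minimal DistributionConfig for cloudfront.create_distribution."""
    return {
        "CallerReference": "media-dist-001",   # any unique string
        "Comment": "Serve shared media from edge caches",
        "Enabled": True,
        "Origins": {"Quantity": 1, "Items": [{
            "Id": "s3-media",
            "DomainName": bucket_domain,
            "OriginAccessControlId": oac_id,   # restrict bucket access to CloudFront
            "S3OriginConfig": {"OriginAccessIdentity": ""},
        }]},
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-media",
            "ViewerProtocolPolicy": "redirect-to-https",
            # AWS managed "CachingOptimized" cache policy
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }

config = cloudfront_s3_distribution(
    "media-bucket.s3.us-west-2.amazonaws.com", "E2EXAMPLEOAC")
# cloudfront.create_distribution(DistributionConfig=config)
```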
A company has two VPCs that are located in the us-west-2 Region within the same AWS account. The company needs to allow network traffic between these VPCs. Approximately 500 GB of data transfer will occur between the VPCs each month.
What is the MOST cost-effective solution to connect these VPCs?
- A . Implement AWS Transit Gateway to connect the VPCs. Update the route tables of each VPC to use the transit gateway for inter-VPC communication.
- B . Implement an AWS Site-to-Site VPN tunnel between the VPCs. Update the route tables of each VPC to use the VPN tunnel for inter-VPC communication.
- C . Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the VPC peering connection for inter-VPC communication.
- D . Set up a 1 GB AWS Direct Connect connection between the VPCs. Update the route tables of each VPC to use the Direct Connect connection for inter-VPC communication.
C
Explanation:
To connect two VPCs in the same Region within the same AWS account, VPC peering is the most cost-effective solution. VPC peering allows direct network traffic between the VPCs without requiring a gateway, VPN connection, or AWS Transit Gateway, and it has no hourly charge: data transfer within the same Availability Zone is free, and cross-AZ traffic is billed at standard intra-Region rates, well below Transit Gateway attachment and data-processing fees or VPN tunnel charges. Direct Connect links on-premises networks to AWS and cannot connect two VPCs to each other.
Reference:
What Is VPC Peering?
VPC Peering Pricing
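The setup amounts to one peering request plus a route in each VPC's route table; the IDs and CIDR blocks below are illustrative:

```python
def vpc_peering_setup(requester_vpc, accepter_vpc,
                      requester_cidr, accepter_cidr, pcx_id):
    """Peering request plus the route each VPC's route table needs."""
    peering = {"VpcId": requester_vpc, "PeerVpcId": accepter_vpc}  # same account/Region
    routes = [
        # In the requester's route table: send accepter traffic over the peering.
        {"DestinationCidrBlock": accepter_cidr, "VpcPeeringConnectionId": pcx_id},
        # In the accepter's route table: the reverse route.
        {"DestinationCidrBlock": requester_cidr, "VpcPeeringConnectionId": pcx_id},
    ]
    return peering, routes

peering, routes = vpc_peering_setup(
    "vpc-0aaa", "vpc-0bbb", "10.0.0.0/16", "10.1.0.0/16", "pcx-0ccc")
# ec2.create_vpc_peering_connection(**peering); then accept it and
# ec2.create_route(RouteTableId=..., **route) for each route.
```

Note the CIDR blocks of peered VPCs must not overlap, which is why each route targets the other VPC's block.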
A company has an AWS Glue extract, transform, and load (ETL) job that runs every day at the same time. The job processes XML data that is in an Amazon S3 bucket.
New data is added to the S3 bucket every day. A solutions architect notices that AWS Glue is processing all the data during each run.
What should the solutions architect do to prevent AWS Glue from reprocessing old data?
- A . Edit the job to use job bookmarks.
- B . Edit the job to delete data after the data is processed
- C . Edit the job by setting the NumberOfWorkers field to 1.
- D . Use a FindMatches machine learning (ML) transform.
A
Explanation:
This is the purpose of bookmarks: "AWS Glue tracks data that has already been processed during a previous run of an ETL job by persisting state information from the job run. This persisted state information is called a job bookmark. Job bookmarks help AWS Glue maintain state information and prevent the reprocessing of old data." https://docs.aws.amazon.com/glue/latest/dg/monitor-continuations.html
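Enabling bookmarks is a job argument rather than a change to the ETL script itself; a sketch of the relevant argument follows:

```python
# Job bookmarks are controlled by the --job-bookmark-option job argument.
def glue_job_bookmark_args(enabled=True):
    """DefaultArguments for glue.create_job or glue.start_job_run."""
    option = "job-bookmark-enable" if enabled else "job-bookmark-disable"
    return {"--job-bookmark-option": option}

args = glue_job_bookmark_args()
# glue.start_job_run(JobName="daily-xml-etl", Arguments=args)  # name is illustrative
# With bookmarks enabled, each run processes only S3 objects added since the last run.
```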
A company has an on-premises business application that generates hundreds of files each day. These files are stored on an SMB file share and require a low-latency connection to the application servers. A new company policy states all application-generated files must be copied to AWS. There is already a VPN connection to AWS.
The application development team does not have time to make the necessary code modifications to move the application to AWS.
Which service should a solutions architect recommend to allow the application to copy files to AWS?
- A . Amazon Elastic File System (Amazon EFS)
- B . Amazon FSx for Windows File Server
- C . AWS Snowball
- D . AWS Storage Gateway
D
Explanation:
Understanding the Requirement: The company needs to copy files generated by an on-premises application to AWS without modifying the application code. The files are stored on an SMB file share and require a low-latency connection to the application servers.
Analysis of Options:
Amazon Elastic File System (EFS): EFS is designed for Linux-based workloads and does not natively support SMB file shares.
Amazon FSx for Windows File Server: FSx supports SMB file shares but would require changes to the application or additional infrastructure to connect on-premises systems.
AWS Snowball: Suitable for large data transfers but not for continuous, low-latency file copying.
AWS Storage Gateway: Provides a hybrid cloud storage solution, supporting SMB file shares and enabling seamless copying of files to AWS without requiring changes to the application.
Best Solution:
AWS Storage Gateway: This service meets the requirement for a low-latency, seamless file transfer solution from on-premises to AWS without modifying the application code.
Reference: AWS Storage Gateway
Amazon FSx for Windows File Server
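Concretely, this is an S3 File Gateway exposing an SMB share; the request below is a sketch with placeholder ARNs:

```python
def smb_file_share_request(gateway_arn, role_arn, bucket_arn):
    """Parameter dict for storagegateway.create_smb_file_share (S3 File Gateway)."""
    return {
        "ClientToken": "share-request-001",  # idempotency token, any unique string
        "GatewayARN": gateway_arn,
        "Role": role_arn,           # IAM role the gateway assumes to write to S3
        "LocationARN": bucket_arn,  # destination S3 bucket
    }

share = smb_file_share_request(
    "arn:aws:storagegateway:us-west-2:123456789012:gateway/sgw-0aaa",
    "arn:aws:iam::123456789012:role/file-gateway-role",
    "arn:aws:s3:::app-file-archive",
)
# storagegateway.create_smb_file_share(**share)
# The application keeps writing to the local SMB share with low latency; the
# gateway uploads the files to S3 asynchronously over the existing VPN.
```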
A company is developing an application that provides order shipping statistics for retrieval by a REST API. The company wants to extract the shipping statistics, organize the data into an easy-to-read HTML format, and send the report to several email addresses at the same time every morning.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
- A . Configure the application to send the data to Amazon Kinesis Data Firehose.
- B . Use Amazon Simple Email Service (Amazon SES) to format the data and to send the report by email.
- C . Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Glue job to query the application’s API for the data.
- D . Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the application’s API for the data.
- E . Store the application data in Amazon S3. Create an Amazon Simple Notification Service (Amazon SNS) topic as an S3 event destination to send the report by email.
BD
Explanation:
https://docs.aws.amazon.com/ses/latest/dg/send-email-formatted.html
D. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the application’s API for the data. This step can be done using AWS Lambda to extract the shipping statistics and organize the data into an HTML format.
B. Use Amazon Simple Email Service (Amazon SES) to format the data and send the report by email. This step can be done by using Amazon SES to send the report to multiple email addresses at the same time every morning.
Therefore, options D and B are the correct choices for this question.
Option A is incorrect because Kinesis Data Firehose is not necessary for this use case.
Option C is incorrect because AWS Glue is not required to query the application’s API.
Option E is incorrect because S3 event notifications cannot be used to send the report by email.
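The two chosen pieces fit together as a schedule that triggers the Lambda function, which then sends formatted HTML through SES. The sketch below uses hypothetical names and addresses:

```python
# EventBridge fires the Lambda function every morning; inside the handler,
# the function queries the REST API, renders HTML, and calls ses.send_email.
rule = {
    "Name": "daily-shipping-report",
    "ScheduleExpression": "cron(0 7 * * ? *)",  # 07:00 UTC every day
}

ses_params = {
    "Source": "reports@example.com",
    "Destination": {"ToAddresses": ["ops@example.com", "mgmt@example.com"]},
    "Message": {
        "Subject": {"Data": "Daily shipping statistics"},
        "Body": {"Html": {"Data": "<h1>Shipping report</h1>"}},
    },
}
# events.put_rule(**rule), then events.put_targets(Rule=rule["Name"], ...)
# pointing at the function; ses.send_email(**ses_params) runs in the handler.
```

One `send_email` call delivers to every address in `ToAddresses`, which satisfies the "several email addresses at the same time" requirement.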
A company is deploying an application that processes large quantities of data in parallel. The company plans to use Amazon EC2 instances for the workload. The network architecture must be configurable to prevent groups of nodes from sharing the same underlying hardware.
Which networking solution meets these requirements?
- A . Run the EC2 instances in a spread placement group.
- B . Group the EC2 instances in separate accounts.
- C . Configure the EC2 instances with dedicated tenancy.
- D . Configure the EC2 instances with shared tenancy.
A
Explanation:
it allows the company to deploy an application that processes large quantities of data in parallel and prevent groups of nodes from sharing the same underlying hardware. By running the EC2 instances in a spread placement group, the company can launch a small number of instances across distinct underlying hardware to reduce correlated failures. A spread placement group ensures that each instance is isolated from each other at the rack level.
Reference: Placement Groups
Spread Placement Groups
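The configuration is small: create the group with the `spread` strategy, then launch instances into it. The AMI ID below is a placeholder:

```python
placement_group = {"GroupName": "data-workers", "Strategy": "spread"}
# ec2.create_placement_group(**placement_group)

run_params = {
    "ImageId": "ami-0example",   # illustrative AMI ID
    "InstanceType": "c5.large",
    "MinCount": 1,
    "MaxCount": 7,               # a spread group allows at most 7 running instances per AZ
    "Placement": {"GroupName": placement_group["GroupName"]},
}
# ec2.run_instances(**run_params)
# EC2 places each instance on distinct underlying hardware (separate racks).
```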
A company has a serverless web application that is comprised of AWS Lambda functions. The application experiences spikes in traffic that cause increased latency because of cold starts. The
company wants to improve the application’s ability to handle traffic spikes and to minimize latency.
The solution must optimize costs during periods when traffic is low.
Which solution will meet these requirements?
- A . Configure provisioned concurrency for the Lambda functions. Use AWS Application Auto Scaling to adjust the provisioned concurrency.
- B . Launch Amazon EC2 instances in an Auto Scaling group. Add a scheduled scaling policy to launch additional EC2 instances during peak traffic periods.
- C . Configure provisioned concurrency for the Lambda functions. Set a fixed concurrency level to handle the maximum expected traffic.
- D . Create a recurring schedule in Amazon EventBridge Scheduler. Use the schedule to invoke the Lambda functions periodically to warm the functions.
A
Explanation:
Key Requirements:
Handle traffic spikes efficiently and reduce latency caused by cold starts.
Optimize costs during low traffic periods.
Analysis of Options:
Option A:
Provisioned Concurrency: Reduces cold start latency by pre-warming Lambda environments for the required number of concurrent executions.
AWS Application Auto Scaling: Automatically adjusts provisioned concurrency based on demand, ensuring cost optimization by scaling down during low traffic.
Correct Approach: Provides a balance between performance during traffic spikes and cost optimization during idle periods.
Option B:
Using EC2 instances with Auto Scaling introduces unnecessary complexity for a serverless
architecture. It requires additional management and does not address the issue of cold starts for Lambda.
Incorrect Approach: Contradicts the serverless design philosophy and increases operational overhead.
Option C:
Setting a fixed concurrency level ensures performance during spikes but does not optimize costs during low traffic. This approach would maintain provisioned instances unnecessarily.
Incorrect Approach: Lacks cost optimization.
Option D:
Using EventBridge Scheduler for periodic invocations may reduce cold starts but does not dynamically scale based on traffic demand. It also leads to unnecessary invocations during idle times.
Incorrect Approach: Suboptimal for high traffic fluctuations and cost control.
AWS Solution Architect
Reference: AWS Lambda Provisioned Concurrency
AWS Application Auto Scaling with Lambda
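The two halves of option A map to an Application Auto Scaling scalable target plus a target-tracking policy on provisioned concurrency; function and alias names below are illustrative:

```python
def lambda_pc_scaling(function_name, alias, min_pc, max_pc):
    """Scalable target and target-tracking policy for Lambda provisioned concurrency."""
    resource = f"function:{function_name}:{alias}"  # an alias or version, not $LATEST
    target = {
        "ServiceNamespace": "lambda",
        "ResourceId": resource,
        "ScalableDimension": "lambda:function:ProvisionedConcurrency",
        "MinCapacity": min_pc,   # scales down to this floor when traffic is low
        "MaxCapacity": max_pc,
    }
    policy = {
        "PolicyName": "pc-utilization-tracking",
        "ServiceNamespace": "lambda",
        "ResourceId": resource,
        "ScalableDimension": "lambda:function:ProvisionedConcurrency",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": 0.7,  # keep provisioned concurrency ~70% utilized
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"},
        },
    }
    return target, policy

target, policy = lambda_pc_scaling("order-api", "prod", min_pc=2, max_pc=100)
# application-autoscaling: register_scalable_target(**target), then
# put_scaling_policy(**policy). Capacity rises for spikes, falls when idle.
```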
A company hosts an application on multiple Amazon EC2 instances. The application processes messages from an Amazon SQS queue, writes to an Amazon RDS table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not contain any duplicate messages.
What should a solutions architect do to ensure messages are being processed once only?
- A . Use the CreateQueue API call to create a new queue
- B . Use the Add Permission API call to add appropriate permissions
- C . Use the ReceiveMessage API call to set an appropriate wait time
- D . Use the ChangeMessageVisibility API call to increase the visibility timeout
D
Explanation:
The visibility timeout begins when Amazon SQS returns a message. During this time, the consumer processes and deletes the message. However, if the consumer fails before deleting the message and your system doesn’t call the DeleteMessage action for that message before the visibility timeout expires, the message becomes visible to other consumers and the message is received again. If a message must be received only once, your consumer should delete it within the duration of the visibility timeout.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
Keyword: the application reads from an SQS queue and writes to Amazon RDS. Option D best fits this scenario, and the others are ruled out: Option A creates a new queue, which does not address duplicate processing; Option B only manages permissions; Option C only affects how messages are retrieved (long polling). FIFO queues are designed to never introduce duplicate messages, although a message producer can still introduce duplicates in certain scenarios: for example, if the producer sends a message, does not receive a response, and then resends the same message. Amazon SQS provides deduplication functionality that prevents the producer from sending duplicates; any duplicates introduced by the producer are removed within a 5-minute deduplication interval. For standard queues, you might occasionally receive a duplicate copy of a message (at-least-once delivery), so applications that use a standard queue must be idempotent (that is, not affected adversely when processing the same message more than once).
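A consumer that sees its RDS write will outlast the current visibility timeout can extend it with `ChangeMessageVisibility`; the queue URL and receipt handle below are placeholders:

```python
def extend_visibility_params(queue_url, receipt_handle, seconds):
    """Parameters for sqs.change_message_visibility: give the consumer enough
    time to write to RDS and delete the message before it becomes visible
    to other consumers again (which is what produces duplicate rows)."""
    assert 0 <= seconds <= 43200  # visibility timeout is capped at 12 hours
    return {
        "QueueUrl": queue_url,
        "ReceiptHandle": receipt_handle,
        "VisibilityTimeout": seconds,
    }

params = extend_visibility_params(
    "https://sqs.us-west-2.amazonaws.com/123456789012/orders",
    "AQEBexample-receipt-handle",
    300,  # 5 minutes: longer than the slowest observed RDS write
)
# sqs.change_message_visibility(**params), then process, then delete_message.
```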