Practice Free SAA-C03 Exam Online Questions
A company is developing a highly available natural language processing (NLP) application. The application handles large volumes of concurrent requests. The application performs NLP tasks such as entity recognition, sentiment analysis, and key phrase extraction on text data.
The company needs to store data that the application processes in a highly available and scalable database.
Options:
- A . Create an Amazon API Gateway REST API endpoint to handle incoming requests. Configure the REST API to invoke an AWS Lambda function for each request. Configure the Lambda function to call Amazon Comprehend to perform NLP tasks on the text data. Store the processed data in Amazon DynamoDB.
- B . Create an Amazon API Gateway HTTP API endpoint to handle incoming requests. Configure the HTTP API to invoke an AWS Lambda function for each request. Configure the Lambda function to call Amazon Translate to perform NLP tasks on the text data. Store the processed data in Amazon ElastiCache.
- C . Create an Amazon SQS queue to buffer incoming requests. Deploy the NLP application on Amazon EC2 instances in an Auto Scaling group. Use Amazon Comprehend to perform NLP tasks. Store the processed data in an Amazon RDS database.
- D . Create an Amazon API Gateway WebSocket API endpoint to handle incoming requests. Configure the WebSocket API to invoke an AWS Lambda function for each request. Configure the Lambda function to call Amazon Textract to perform NLP tasks on the text data. Store the processed data in Amazon ElastiCache.
A
Explanation:
Amazon Comprehend is the AWS service that performs the required NLP tasks (entity recognition, sentiment analysis, and key phrase extraction), and Amazon DynamoDB provides a highly available, scalable database for the processed results. API Gateway with Lambda scales automatically to handle large volumes of concurrent requests. Option B uses Amazon Translate (language translation, not NLP analysis), and option D uses Amazon Textract (text extraction from documents); ElastiCache is a cache, not a durable primary data store.
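As a sketch of option A's flow (the table name `nlp-results`, the event shape, and the item schema are assumptions, not part of the question), a Lambda handler behind API Gateway might call Comprehend and write to DynamoDB like this:

```python
import json
import time

def build_item(doc_id, text, comprehend_result):
    """Shape a DynamoDB item from a Comprehend sentiment response (hypothetical schema)."""
    return {
        "doc_id": {"S": doc_id},
        "text": {"S": text},
        "sentiment": {"S": comprehend_result["Sentiment"]},
        "processed_at": {"N": str(int(time.time()))},
    }

def lambda_handler(event, context):
    # boto3 is available in the Lambda runtime; imported here so the
    # pure helper above stays importable without the SDK installed.
    import boto3
    comprehend = boto3.client("comprehend")
    dynamodb = boto3.client("dynamodb")

    body = json.loads(event["body"])
    result = comprehend.detect_sentiment(Text=body["text"], LanguageCode="en")
    dynamodb.put_item(TableName="nlp-results",  # table name is an assumption
                      Item=build_item(body["id"], body["text"], result))
    return {"statusCode": 200,
            "body": json.dumps({"sentiment": result["Sentiment"]})}
```

Entity recognition and key phrase extraction would use the analogous `detect_entities` and `detect_key_phrases` calls.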
A company has an application that uses an Amazon RDS for PostgreSQL database. The company is developing an application feature that will store sensitive information for an individual in the database.
During a security review of the environment, the company discovers that the RDS DB instance is not encrypting data at rest. The company needs a solution that will provide encryption at rest for all the existing data and for any new data that is entered for an individual.
Which combination of steps should the company take to meet these requirements? (Select TWO.)
- A . Create a snapshot of the DB instance. Enable encryption on the snapshot. Use the encrypted snapshot to create a new DB instance. Adjust the application configuration to use the new DB instance.
- B . Create a snapshot of the DB instance. Create an encrypted copy of the snapshot. Use the encrypted snapshot to create a new DB instance. Adjust the application configuration to use the new DB instance.
- C . Modify the configuration of the DB instance by enabling encryption. Create a snapshot of the DB instance. Use the snapshot to create a new DB instance. Adjust the application configuration to use the new DB instance.
- D . Use AWS Key Management Service (AWS KMS) to create a new default AWS managed aws/rds key. Select this key as the encryption key for operations with Amazon RDS.
- E . Use AWS Key Management Service (AWS KMS) to create a new customer managed key. Select this key as the encryption key for operations with Amazon RDS.
B, E
Explanation:
Amazon RDS does not support enabling encryption at rest on an existing unencrypted DB instance. To encrypt an existing RDS instance’s data at rest, the recommended method is to:
Take a snapshot of the unencrypted DB instance.
Create an encrypted copy of the snapshot using AWS KMS. This encrypted snapshot contains the existing data encrypted at rest.
Restore a new DB instance from the encrypted snapshot. This new instance will have encryption at rest enabled.
Additionally, to manage encryption keys securely, companies can use customer managed keys (CMKs) in AWS Key Management Service (KMS). CMKs provide greater control over key management policies, rotation, and usage permissions compared to default AWS managed keys. Using a CMK allows customization of access control and auditability.
Option A is incorrect because you cannot enable encryption directly on a snapshot; you must create an encrypted copy.
Option C is invalid because encryption cannot be enabled by modifying an existing instance’s configuration.
Option D refers to the default AWS managed key, which is less flexible than customer managed keys.
Reference: Encrypting Amazon RDS Resources
(https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html)
Copying an Encrypted Snapshot
(https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html)
AWS KMS Customer Master Keys
(https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html)
AWS Well-Architected Framework ― Security Pillar (https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf)
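The snapshot, encrypted-copy, restore sequence can be sketched with boto3 (identifiers are illustrative; in practice you pass a real `boto3.client("rds")`, and you must wait for each snapshot to become available before the next step):

```python
def encrypt_existing_instance(rds, source_id, kms_key_id):
    """Encrypt an unencrypted RDS instance: snapshot -> encrypted copy -> restore."""
    snap_id = f"{source_id}-snap"
    encrypted_snap_id = f"{source_id}-snap-encrypted"

    # 1. Snapshot the unencrypted DB instance.
    rds.create_db_snapshot(DBSnapshotIdentifier=snap_id,
                           DBInstanceIdentifier=source_id)
    # 2. Copy the snapshot, specifying a customer managed KMS key to encrypt it.
    rds.copy_db_snapshot(SourceDBSnapshotIdentifier=snap_id,
                         TargetDBSnapshotIdentifier=encrypted_snap_id,
                         KmsKeyId=kms_key_id)
    # 3. Restore a new DB instance from the encrypted copy; it inherits encryption.
    new_id = f"{source_id}-encrypted"
    rds.restore_db_instance_from_db_snapshot(DBInstanceIdentifier=new_id,
                                             DBSnapshotIdentifier=encrypted_snap_id)
    return new_id
```

The application configuration is then pointed at the new instance's endpoint, and the old unencrypted instance can be retired.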
A company has developed an API by using an Amazon API Gateway REST API and AWS Lambda functions. The API serves static content and dynamic content to users worldwide. The company wants to decrease the latency of transferring the content for API requests.
Which solution will meet these requirements?
- A . Deploy the REST API as an edge-optimized API endpoint. Enable caching. Enable content encoding in the API definition to compress the application data in transit.
- B . Deploy the REST API as a Regional API endpoint. Enable caching. Enable content encoding in the API definition to compress the application data in transit.
- C . Deploy the REST API as an edge-optimized API endpoint. Enable caching. Configure reserved concurrency for the Lambda functions.
- D . Deploy the REST API as a Regional API endpoint. Enable caching. Configure reserved concurrency for the Lambda functions.
A
Explanation:
Edge-Optimized API: Designed for global users by routing requests through CloudFront’s edge locations, reducing latency.
Content Encoding: Enabling content encoding compresses data, further optimizing performance by decreasing payload size.
Caching: Adding API Gateway caching reduces the number of calls to Lambda and database backends, improving latency.
Reserved Concurrency: Although useful, this does not directly affect latency for transferring static and dynamic content.
AWS API Gateway Edge-Optimized APIs Documentation
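A minimal boto3 sketch of option A (the API name, compression threshold, and stage name are assumptions):

```python
def edge_api_params(name, min_compression_bytes=1024):
    """Parameters for an edge-optimized REST API with payload compression enabled."""
    return {
        "name": name,
        "endpointConfiguration": {"types": ["EDGE"]},   # serve via CloudFront edge locations
        "minimumCompressionSize": min_compression_bytes,  # enables content encoding
    }

def cached_stage_params(rest_api_id, stage_name="prod"):
    """Deployment parameters that turn on the API Gateway stage cache."""
    return {
        "restApiId": rest_api_id,
        "stageName": stage_name,
        "cacheClusterEnabled": True,
        "cacheClusterSize": "0.5",  # smallest cache size, in GB
    }

# In practice:
#   apigateway = boto3.client("apigateway")
#   api = apigateway.create_rest_api(**edge_api_params("content-api"))
#   apigateway.create_deployment(**cached_stage_params(api["id"]))
```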
A solutions architect needs to secure an Amazon API Gateway REST API. Users need to be able to log in to the API by using common external social identity providers (IdPs). The social IdPs must use standard authentication protocols such as SAML or OpenID Connect (OIDC). The solutions architect needs to protect the API against attempts to exploit application vulnerabilities.
Which combination of steps will meet these security requirements? (Select TWO.)
- A . Create an AWS WAF web ACL that is associated with the REST API. Add the appropriate managed rules to the ACL.
- B . Subscribe to AWS Shield Advanced. Enable DDoS protection. Associate Shield Advanced with the REST API.
- C . Create an Amazon Cognito user pool with a federation to the social IdPs. Integrate the user pool with the REST API.
- D . Create an API key in API Gateway. Associate the API key with the REST API.
- E . Create an IP address filter in AWS WAF that allows only the social IdPs. Associate the filter with the web ACL and the API.
A, C
Explanation:
Step A: AWS WAF with managed rules protects the API against application-layer attacks, such as SQL injection and cross-site scripting (XSS).
Step C: Amazon Cognito provides secure authentication and supports federation with social IdPs using OIDC or SAML. It integrates seamlessly with API Gateway.
Option B: AWS Shield Advanced provides DDoS protection, which is not explicitly required in this scenario.
Option D: API keys provide identification, not authentication, and are insufficient for this use case.
Option E: IP filters in WAF are overly restrictive for federated authentication scenarios.
AWS Documentation
Reference: Amazon Cognito Federation
AWS WAF Managed Rules
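The two steps can be sketched with boto3 (all ARNs and names here are placeholders, not from the question):

```python
def cognito_authorizer_params(rest_api_id, user_pool_arn, name="social-login"):
    """API Gateway authorizer backed by a Cognito user pool federated to social IdPs."""
    return {
        "restApiId": rest_api_id,
        "name": name,
        "type": "COGNITO_USER_POOLS",
        "providerARNs": [user_pool_arn],
        "identitySource": "method.request.header.Authorization",
    }

# Used as: apigateway.create_authorizer(**cognito_authorizer_params(...))
#
# Associating a WAF web ACL (with managed rules) to the API stage:
#   wafv2 = boto3.client("wafv2")
#   wafv2.associate_web_acl(
#       WebACLArn="arn:aws:wafv2:us-east-1:111122223333:regional/webacl/api-acl/...",
#       ResourceArn="arn:aws:apigateway:us-east-1::/restapis/abc123/stages/prod",
#   )
```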
A company runs its critical storage application in the AWS Cloud. The application uses Amazon S3 in two AWS Regions. The company wants the application to send remote user data to the nearest S3 bucket with no public network congestion. The company also wants the application to fail over with the least amount of management of Amazon S3.
Which solution will meet these requirements?
- A . Implement an active-active design between the two Regions. Configure the application to use the regional S3 endpoints closest to the user.
- B . Use an active-passive configuration with S3 Multi-Region Access Points. Create a global endpoint for each of the Regions.
- C . Send user data to the regional S3 endpoints closest to the user. Configure an S3 cross-account replication rule to keep the S3 buckets synchronized.
- D . Set up Amazon S3 to use Multi-Region Access Points in an active-active configuration with a single global endpoint. Configure S3 Cross-Region Replication.
D
Explanation:
To meet the requirement of low-latency global access and failover with minimal management, the best solution is to use Amazon S3 Multi-Region Access Points (MRAP) with Cross-Region Replication (CRR).
Multi-Region Access Points provide a global endpoint that automatically routes requests to the nearest AWS Region using the AWS Global Accelerator infrastructure. This avoids public internet congestion and ensures low-latency access.
When combined with S3 Cross-Region Replication, data is automatically synchronized between buckets in different Regions, enabling active-active setup.
In case of a Regional failure, S3 Multi-Region Access Points handle failover automatically, requiring no manual intervention.
Options A and C require manual management and configuration of endpoints per Region.
Option B misrepresents MRAP―it is used for active-active, not active-passive.
Reference: S3 Multi-Region Access Points
S3 Cross-Region Replication
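As a sketch, a Multi-Region Access Point is created over one bucket per Region (the account ID, access point name, and bucket names below are placeholders):

```python
def mrap_details(name, buckets):
    """Details payload for s3control.create_multi_region_access_point.

    `buckets` is a list of bucket names, one per Region; the Regions are
    inferred from the buckets themselves."""
    return {
        "Name": name,
        "Regions": [{"Bucket": bucket} for bucket in buckets],
    }

# In practice:
#   s3control = boto3.client("s3control")
#   s3control.create_multi_region_access_point(
#       AccountId="111122223333",
#       Details=mrap_details("global-app-data",
#                            ["app-data-us-east-1", "app-data-eu-west-1"]),
#   )
# Clients then address the single global endpoint via the access point's
# ARN/alias, while Cross-Region Replication keeps the buckets synchronized.
```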
A company has an application that processes information from documents that users upload. When a user uploads a new document to an Amazon S3 bucket, an AWS Lambda function is invoked. The Lambda function processes information from the documents.
The company discovers that the application did not process many recently uploaded documents. The company wants to ensure that the application processes each document with retries if there is an error during the first attempt to process the document.
Which solution will meet these requirements?
- A . Create an Amazon API Gateway REST API that has a proxy integration to the Lambda function. Update the application to send requests to the REST API.
- B . Configure a replication policy on the S3 bucket to stage the documents in another S3 bucket that an AWS Batch job processes on a daily schedule.
- C . Deploy an Application Load Balancer in front of the Lambda function that processes the documents.
- D . Configure an Amazon Simple Queue Service (Amazon SQS) queue as an event source for the Lambda function. Configure an S3 event notification on the S3 bucket to send new document upload events to the SQS queue.
D
Explanation:
Using SQS as a buffer between S3 and the Lambda function ensures durability and allows for retries in case of processing failures. Messages in the queue can be retried by Lambda, and failed processing can be directed to a dead-letter queue for further inspection. This guarantees reliable and scalable message-driven processing.
Reference: AWS Documentation ― Using Amazon SQS as a Lambda Event Source with an S3 Trigger
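The SQS-driven handler with per-message retries can be sketched as follows (the `process_document` body is a placeholder; returning `batchItemFailures` assumes the event source mapping has `ReportBatchItemFailures` enabled, so only failed messages are retried):

```python
import json

def process_document(doc):
    """Placeholder for the real document-processing logic."""
    if not doc.get("text"):
        raise ValueError("empty document")

def lambda_handler(event, context):
    # Report only the failed messages back to SQS; they become visible
    # again for retry and, after maxReceiveCount, move to a dead-letter queue.
    failures = []
    for record in event["Records"]:
        try:
            process_document(json.loads(record["body"]))
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```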
A company runs an enterprise resource planning (ERP) system on Amazon EC2 instances in a single AWS Region. Users connect to the ERP system by using a public API that is hosted on the EC2 instances. International users report slow API response times from their data centers.
A solutions architect needs to improve API response times for the international users.
Which solution will meet these requirements MOST cost-effectively?
- A . Set up an AWS Direct Connect connection that has a public virtual interface (VIF) to connect each user’s data center to the EC2 instances. Create a Direct Connect gateway for the ERP system API to route user API requests.
- B . Deploy Amazon API Gateway endpoints in multiple Regions. Use Amazon Route 53 latency-based routing to route requests to the nearest endpoint. Configure a VPC peering connection between the Regions to connect to the ERP system.
- C . Set up AWS Global Accelerator. Configure listeners for the necessary ports. Configure endpoint groups for the appropriate Regions to distribute traffic. Create an endpoint in each group for the API.
- D . Use AWS Site-to-Site VPN to establish dedicated VPN tunnels between multiple Regions and user networks. Route traffic to the API through the VPN connections.
C
Explanation:
AWS Global Accelerator improves the performance and availability of applications by directing user traffic through the AWS global network of edge locations using anycast IP addresses. It reduces latency and jitter for global users accessing applications in a single Region.
Why this works:
Global Accelerator routes user requests to the nearest AWS edge location using AWS’s high-performance backbone network.
It then forwards traffic to the optimal endpoint ― in this case, the public API hosted on EC2.
This is much more cost-effective and requires less operational complexity than deploying and maintaining multiple API Gateway endpoints across regions (Option B), or setting up Direct Connect links for every international location (Option A).
Option C requires no application change and is designed specifically for latency improvement and high availability.
Reference: AWS Global Accelerator Documentation
Use Cases for Global Accelerator
Performance Improvements for Global Users
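Option C's accelerator, listener, and endpoint-group setup can be sketched as follows (port, names, and endpoint IDs are assumptions; the Global Accelerator API itself is only callable in us-west-2):

```python
def listener_params(accelerator_arn, port=443):
    """TCP listener for the API port (443 is an assumption)."""
    return {
        "AcceleratorArn": accelerator_arn,
        "Protocol": "TCP",
        "PortRanges": [{"FromPort": port, "ToPort": port}],
    }

def endpoint_group_params(listener_arn, region, endpoint_ids):
    """One endpoint group per Region; endpoints reference the API's resources."""
    return {
        "ListenerArn": listener_arn,
        "EndpointGroupRegion": region,
        "EndpointConfigurations": [{"EndpointId": eid} for eid in endpoint_ids],
    }

# In practice:
#   ga = boto3.client("globalaccelerator", region_name="us-west-2")
#   acc = ga.create_accelerator(Name="erp-api", Enabled=True)
#   lst = ga.create_listener(**listener_params(acc["Accelerator"]["AcceleratorArn"]))
#   ga.create_endpoint_group(**endpoint_group_params(
#       lst["Listener"]["ListenerArn"], "us-east-1", ["<eip-allocation-or-elb-arn>"]))
```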
An online gaming company hosts its platform on Amazon EC2 instances behind Network Load Balancers (NLBs) across multiple AWS Regions. The NLBs can route requests to targets over the internet. The company wants to improve the customer playing experience by reducing end-to-end load time for its global customer base.
Which solution will meet these requirements?
- A . Create Application Load Balancers (ALBs) in each Region to replace the existing NLBs. Register the existing EC2 instances as targets for the ALBs in each Region.
- B . Configure Amazon Route 53 to route equally weighted traffic to the NLBs in each Region.
- C . Create additional NLBs and EC2 instances in other Regions where the company has large customer bases.
- D . Create a standard accelerator in AWS Global Accelerator. Configure the existing NLBs as target endpoints.
D
Explanation:
The company wants to reduce end-to-end load time for its global customer base. AWS Global Accelerator provides a network optimization service that reduces latency by routing traffic to the nearest AWS edge locations, improving the user experience for globally distributed customers.
AWS Global Accelerator:
Global Accelerator improves the performance of your applications by routing traffic through AWS’s global network infrastructure. This reduces the number of hops and latency compared to using the public internet.
By creating a standard accelerator and configuring the existing NLBs as target endpoints, Global Accelerator ensures that traffic from users around the world is routed to the nearest AWS edge location and then through optimized paths to the NLBs in each region. This significantly improves end-to-end load time for global customers.
Why Not the Other Options?:
Option A (ALBs instead of NLBs): ALBs are designed for HTTP/HTTPS traffic and provide layer 7 features, but they wouldn’t solve the latency issue for a global customer base. The key problem here is latency, and Global Accelerator is specifically designed to address that.
Option B (Route 53 weighted routing): Route 53 can route traffic to different regions, but it doesn’t optimize network performance. It simply balances traffic between endpoints without improving latency.
Option C (Additional NLBs in more regions): This could potentially improve latency but would require setting up infrastructure in multiple regions. Global Accelerator is a simpler and more efficient solution that leverages AWS’s existing global network.
AWS
Reference: AWS Global Accelerator
By using AWS Global Accelerator with the existing NLBs, the company can optimize global traffic routing and improve the customer experience by minimizing latency. Therefore, Option D is the correct answer.
A company tracks customer satisfaction by using surveys that the company hosts on its website. The surveys sometimes reach thousands of customers every hour. Survey results are currently sent in email messages to the company so company employees can manually review results and assess customer sentiment.
The company wants to automate the customer survey process. Survey results must be available for the previous 12 months.
Which solution will meet these requirements in the MOST scalable way?
- A . Send the survey results data to an Amazon API Gateway endpoint that is connected to an Amazon Simple Queue Service (Amazon SQS) queue. Create an AWS Lambda function to poll the SQS queue, call Amazon Comprehend for sentiment analysis, and save the results to an Amazon DynamoDB table. Set the TTL for all records to 365 days in the future.
- B . Send the survey results data to an API that is running on an Amazon EC2 instance. Configure the API to store the survey results as a new record in an Amazon DynamoDB table, call Amazon Comprehend for sentiment analysis, and save the results in a second DynamoDB table. Set the TTL for all records to 365 days in the future.
- C . Write the survey results data to an Amazon S3 bucket. Use S3 Event Notifications to invoke an AWS Lambda function to read the data and call Amazon Rekognition for sentiment analysis. Store the sentiment analysis results in a second S3 bucket. Use S3 Lifecycle policies on each bucket to expire objects after 365 days.
- D . Send the survey results data to an Amazon API Gateway endpoint that is connected to an Amazon Simple Queue Service (Amazon SQS) queue. Configure the SQS queue to invoke an AWS Lambda function that calls Amazon Lex for sentiment analysis and saves the results to an Amazon DynamoDB table. Set the TTL for all records to 365 days in the future.
A
Explanation:
This solution is the most scalable and efficient way to handle large volumes of survey data while automating sentiment analysis:
API Gateway and SQS: The survey results are sent to API Gateway, which forwards the data to an SQS queue. SQS can handle large volumes of messages and ensures that messages are not lost.
AWS Lambda: Lambda is triggered by polling the SQS queue, where it processes the survey data.
Amazon Comprehend: Comprehend is used for sentiment analysis, providing insights into customer satisfaction.
DynamoDB with TTL: Results are stored in DynamoDB with a Time to Live (TTL) attribute set to expire after 365 days, automatically removing old data and reducing storage costs.
Option B (EC2 API): Running an API on EC2 requires more maintenance and scalability management compared to API Gateway.
Option C (S3 and Rekognition): Amazon Rekognition is for image and video analysis, not sentiment analysis.
Option D (Amazon Lex): Amazon Lex is used for building conversational interfaces, not sentiment analysis.
AWS
Reference: Amazon Comprehend for Sentiment Analysis
Amazon SQS
DynamoDB TTL
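The 365-day TTL is simply an epoch-seconds attribute on each item; DynamoDB deletes items once the timestamp passes. A sketch (attribute names are assumptions):

```python
import time

SECONDS_PER_DAY = 24 * 60 * 60

def ttl_epoch(days_from_now=365, now=None):
    """Epoch-seconds value for a DynamoDB TTL attribute `days_from_now` days ahead."""
    if now is None:
        now = time.time()
    return int(now) + days_from_now * SECONDS_PER_DAY

def survey_item(survey_id, sentiment, now=None):
    """DynamoDB item carrying an `expires_at` TTL attribute (hypothetical schema)."""
    return {
        "survey_id": {"S": survey_id},
        "sentiment": {"S": sentiment},
        "expires_at": {"N": str(ttl_epoch(365, now))},
    }
```

The table's TTL feature must be enabled and pointed at the `expires_at` attribute for expiry to take effect.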
A company is using an AWS Lambda function in a VPC. The Lambda function needs to access dependencies that exceed the size of the Lambda layer quota. The data that the Lambda function retrieves must be encrypted in transit.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Store the dependencies in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system to the Lambda function. Retrieve the dependencies from the file system.
- B . Store the dependencies on an Amazon EC2 instance that has an instance store volume and web server software. Use HTTPS API calls to retrieve the dependencies each time the Lambda function runs.
- C . Store the dependencies on an Amazon EC2 instance that hosts an NFS file server. Read the files from the EC2 instance each time the Lambda function runs.
- D . Store the dependencies in two separate Lambda layers. Redesign the application to have two Lambda functions that use different Lambda layers.
A
Explanation:
Lambda supports mounting an Amazon EFS file system inside your function to store larger dependencies beyond the 250 MB layer quota.
When Lambda connects to an EFS file system, the connection encrypts data in transit using TLS.
“You can configure your Lambda function to mount an Amazon EFS file system, enabling your function to access large amounts of data or large dependencies.”
Encryption of data in transit is used automatically for connections between Lambda and Amazon EFS.
― Lambda with Amazon EFS
This is the least operational overhead approach.
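Configuring the mount and loading the dependencies can be sketched as follows (the access point ARN and mount path are placeholders; Lambda requires the mount path to begin with `/mnt/`):

```python
import sys

def efs_mount_config(access_point_arn, local_mount_path="/mnt/deps"):
    """FileSystemConfigs entry for create_function / update_function_configuration."""
    return {"Arn": access_point_arn, "LocalMountPath": local_mount_path}

# Inside the function's code, the mounted dependencies are added to the import path:
def load_dependencies(mount_path="/mnt/deps"):
    if mount_path not in sys.path:
        sys.path.append(mount_path)

# In practice:
#   lambda_client.update_function_configuration(
#       FunctionName="nlp-worker",
#       FileSystemConfigs=[efs_mount_config("arn:aws:elasticfilesystem:...:access-point/...")],
#   )
```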
