Practice Free SAA-C03 Exam Online Questions
A company hosts a three-tier web application that includes a PostgreSQL database. The database stores the metadata from documents. The company searches the metadata for key terms to retrieve documents that the company reviews in a report each month. The documents are stored in Amazon S3. The documents are usually written only once, but they are updated frequently. The reporting process takes a few hours with the use of relational queries. The reporting process must not affect any document modifications or the addition of new documents.
What are the MOST operationally efficient solutions that meet these requirements? (Select TWO)
- A . Set up a new Amazon DocumentDB (with MongoDB compatibility) cluster that includes a read replica. Scale the read replica to generate the reports.
- B . Set up a new Amazon RDS for PostgreSQL Reserved Instance and an On-Demand read replica. Scale the read replica to generate the reports.
- C . Set up a new Amazon Aurora PostgreSQL DB cluster that includes a Reserved Instance and an Aurora Replica. Issue queries to the Aurora Replica to generate the reports.
- D . Set up a new Amazon RDS for PostgreSQL Multi-AZ Reserved Instance. Configure the reporting module to query the secondary RDS node so that the reporting module does not affect the primary node.
- E . Set up a new Amazon DynamoDB table to store the documents. Use a fixed write capacity to support new document entries. Automatically scale the read capacity to support the reports.
B, C
Explanation:
These options are operationally efficient because they use read replicas to offload the reporting workload from the primary DB instance, so the long-running relational queries do not affect document modifications or the addition of new documents. They also use Reserved Instances for the primary DB instance to reduce costs, with an On-Demand read replica (option B) or an Aurora Replica (option C) that can be scaled as needed for the monthly reports.
Option A is less efficient because Amazon DocumentDB (with MongoDB compatibility) is a document database, not a relational one. The existing PostgreSQL schema and relational reporting queries would have to be rewritten, which adds significant migration and operational effort.
Option D is less efficient because the standby instance in an RDS Multi-AZ DB instance deployment is not accessible for read traffic; it exists only for failover, so the reporting module cannot query the secondary node.
Option E is less efficient because Amazon DynamoDB is a NoSQL database that does not support the relational queries needed to generate the reports. A fixed write capacity could also cause throttling or underutilization depending on the rate of new document entries.
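As a rough illustration of option B, the following sketch (Python/boto3, with placeholder instance identifiers and class) creates an On-Demand read replica of an existing RDS for PostgreSQL primary and waits for its endpoint, which the reporting module would query instead of the primary:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create an On-Demand read replica of the existing RDS for PostgreSQL primary.
# The identifiers and instance class below are placeholders.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="documents-metadata-replica",
    SourceDBInstanceIdentifier="documents-metadata-primary",
    DBInstanceClass="db.r6g.large",
)

# Wait until the replica is available, then read its endpoint. The reporting
# module connects here, so long-running queries never touch the primary.
rds.get_waiter("db_instance_available").wait(
    DBInstanceIdentifier="documents-metadata-replica"
)
replica = rds.describe_db_instances(
    DBInstanceIdentifier="documents-metadata-replica"
)["DBInstances"][0]
print(replica["Endpoint"]["Address"])
```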
A company hosts its core network services, including directory services and DNS, in its on-premises data center. The data center is connected to the AWS Cloud using AWS Direct Connect (DX). Additional AWS accounts are planned that will require quick, cost-effective, and consistent access to these network services.
What should a solutions architect implement to meet these requirements with the LEAST amount of operational overhead?
- A . Create a DX connection in each new account. Route the network traffic to the on-premises servers.
- B . Configure VPC endpoints in the DX VPC for all required services. Route the network traffic to the on-premises servers.
- C . Create a VPN connection between each new account and the DX VPC. Route the network traffic to the on-premises servers.
- D . Configure AWS Transit Gateway between the accounts. Assign DX to the transit gateway and route network traffic to the on-premises servers.
D
Explanation:
Requirement Analysis: Need quick, cost-effective, and consistent access to on-premises network services from multiple AWS accounts.
AWS Transit Gateway: Centralizes and simplifies network management by connecting VPCs and on-premises networks.
Direct Connect Integration: Assigning DX to the transit gateway ensures consistent and high-performance connectivity.
Operational Overhead: Minimal because Transit Gateway simplifies routing and management.
Implementation:
Set up AWS Transit Gateway.
Connect new AWS accounts to the Transit Gateway.
Route traffic through Transit Gateway to on-premises servers via Direct Connect.
Conclusion: This solution provides a scalable, cost-effective, and low-overhead method to meet connectivity requirements.
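For illustration only, a minimal boto3 sketch of these steps is shown below. The VPC, subnet, Direct Connect gateway IDs, and CIDR range are placeholders, and sharing the transit gateway with the additional accounts (for example through AWS RAM) is not shown.

```python
import boto3

ec2 = boto3.client("ec2")
dx = boto3.client("directconnect")

# 1. Create the transit gateway that acts as the central routing hub.
tgw = ec2.create_transit_gateway(
    Description="Hub for on-premises DNS and directory services"
)["TransitGateway"]

# 2. Attach a workload VPC (placeholder IDs; repeat for each account/VPC).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw["TransitGatewayId"],
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)

# 3. Associate the existing Direct Connect gateway with the transit gateway
#    so on-premises routes are reachable from every attached VPC.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId="11460968-4ac1-4fd3-bdb2-00599EXAMPLE",
    gatewayId=tgw["TransitGatewayId"],
    addAllowedPrefixesToDirectConnectGateway=[{"cidr": "10.0.0.0/8"}],
)
```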
Reference
AWS Transit Gateway: AWS Transit Gateway Documentation
A company is migrating a legacy application from an on-premises data center to AWS. The application relies on hundreds of cron jobs that run between 1 and 20 minutes on different recurring schedules throughout the day.
The company wants a solution to schedule and run the cron jobs on AWS with minimal refactoring.
The solution must support running the cron jobs in response to an event in the future.
Which solution will meet these requirements?
- A . Create a container image for the cron jobs. Use Amazon EventBridge Scheduler to create a recurring schedule. Run the cron job tasks as AWS Lambda functions.
- B . Create a container image for the cron jobs. Use AWS Batch on Amazon Elastic Container Service (Amazon ECS) with a scheduling policy to run the cron jobs.
- C . Create a container image for the cron jobs. Use Amazon EventBridge Scheduler to create a recurring schedule. Run the cron job tasks on AWS Fargate.
- D . Create a container image for the cron jobs. Create a workflow in AWS Step Functions that uses a Wait state to run the cron jobs at a specified time. Use the RunTask action to run the cron job tasks on AWS Fargate.
C
Explanation:
This solution is the most suitable for running cron jobs on AWS with minimal refactoring, while also supporting the possibility of running jobs in response to future events.
Container Image for Cron Jobs: By containerizing the cron jobs, you can package the environment and dependencies required to run the jobs, ensuring consistency and ease of deployment across different environments.
Amazon EventBridge Scheduler: EventBridge Scheduler allows you to create a recurring schedule that can trigger tasks (like running your cron jobs) at specific times or intervals. It provides fine-grained control over scheduling and integrates seamlessly with AWS services.
AWS Fargate: Fargate is a serverless compute engine for containers that removes the need to manage EC2 instances. It allows you to run containers without worrying about the underlying infrastructure. Fargate is ideal for running jobs that can vary in duration, like cron jobs, as it scales automatically based on the task’s requirements.
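As a rough sketch of option C (Python/boto3; the ARNs, subnet ID, and cron expression are placeholders), each containerized cron job gets an EventBridge Scheduler schedule whose target runs the corresponding task definition on Fargate:

```python
import boto3

scheduler = boto3.client("scheduler")

# One schedule per cron job; the cron expression mirrors the existing
# on-premises crontab entry. All ARNs and IDs below are placeholders.
scheduler.create_schedule(
    Name="nightly-cleanup-job",
    ScheduleExpression="cron(30 2 * * ? *)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/cron-jobs",
        "RoleArn": "arn:aws:iam::123456789012:role/scheduler-ecs-run-task",
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/nightly-cleanup:1",
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-0123456789abcdef0"],
                    "AssignPublicIp": "DISABLED",
                }
            },
        },
    },
)
```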
Why Not Other Options?
Option A (Lambda): While AWS Lambda could handle short-running cron jobs, it has limitations in terms of execution duration (maximum of 15 minutes) and might not be suitable for jobs that run up to 20 minutes.
Option B (AWS Batch on ECS): AWS Batch is more suitable for batch processing and workloads that require complex job dependencies or orchestration, which might be more than what is needed for simple cron jobs.
Option D (Step Functions with Wait State): While Step Functions provide orchestration capabilities, this approach would introduce unnecessary complexity and overhead compared to the straightforward scheduling with EventBridge and running on Fargate.
Reference: Amazon EventBridge Scheduler – Details on how to schedule tasks using Amazon EventBridge Scheduler.
AWS Fargate – Information on how to run containers in a serverless manner using AWS Fargate.
A company is using AWS Key Management Service (AWS KMS) keys to encrypt AWS Lambda environment variables. A solutions architect needs to ensure that the required permissions are in place to decrypt and use the environment variables.
Which steps must the solutions architect take to implement the correct permissions? (Choose two.)
- A . Add AWS KMS permissions in the Lambda resource policy.
- B . Add AWS KMS permissions in the Lambda execution role.
- C . Add AWS KMS permissions in the Lambda function policy.
- D . Allow the Lambda execution role in the AWS KMS key policy.
- E . Allow the Lambda resource policy in the AWS KMS key policy.
B, D
Explanation:
B and D are the correct answers because they ensure that the Lambda execution role has the permissions to decrypt and use the environment variables, and that the AWS KMS key policy allows the Lambda execution role to use the key. The Lambda execution role is an IAM role that grants the Lambda function permission to access AWS resources, such as AWS KMS. The AWS KMS key policy is a resource-based policy that controls access to the key. By adding AWS KMS permissions in the Lambda execution role and allowing the Lambda execution role in the AWS KMS key policy, the solutions architect can implement the correct permissions for encrypting and decrypting environment variables.
Reference: AWS Lambda Execution Role
Using AWS KMS keys in AWS Lambda
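A minimal sketch of the execution-role side, using Python/boto3 with placeholder role name and key ARN, might look like the following. The key-policy side is summarized in a comment because kms.put_key_policy replaces the entire policy document.

```python
import json
import boto3

iam = boto3.client("iam")

# Illustrative only: grant the Lambda execution role permission to decrypt
# with the customer managed key (role name and key ARN are placeholders).
iam.put_role_policy(
    RoleName="my-function-execution-role",
    PolicyName="allow-kms-decrypt-env-vars",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "kms:Decrypt",
            "Resource": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
        }],
    }),
)

# The KMS key policy must also allow the same role: a statement whose
# Principal is the execution role's ARN and whose Action includes
# kms:Decrypt, applied with kms.put_key_policy on the full policy document.
```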
An online retail company has more than 50 million active customers and receives more than 25,000 orders each day. The company collects purchase data for customers and stores this data in Amazon S3. Additional customer data is stored in Amazon RDS.
The company wants to make all the data available to various teams so that the teams can perform analytics. The solution must provide the ability to manage fine-grained permissions for the data and must minimize operational overhead.
Which solution will meet these requirements?
- A . Migrate the purchase data to write directly to Amazon RDS. Use RDS access controls to limit access.
- B . Schedule an AWS Lambda function to periodically copy data from Amazon RDS to Amazon S3.
Create an AWS Glue crawler. Use Amazon Athena to query the data. Use S3 policies to limit access.
- C . Create a data lake by using AWS Lake Formation. Create an AWS Glue JDBC connection to Amazon RDS. Register the S3 bucket in Lake Formation. Use Lake Formation access controls to limit access.
- D . Create an Amazon Redshift cluster. Schedule an AWS Lambda function to periodically copy data from Amazon S3 and Amazon RDS to Amazon Redshift. Use Amazon Redshift access controls to limit access.
C
Explanation:
To make all the data available to various teams and minimize operational overhead, the company can create a data lake by using AWS Lake Formation. This will allow the company to centralize all the data in one place and use fine-grained access controls to manage access to the data. To meet the requirements of the company, the solutions architect can create a data lake by using AWS Lake Formation, create an AWS Glue JDBC connection to Amazon RDS, and register the S3 bucket in Lake Formation. The solutions architect can then use Lake Formation access controls to limit access to the data. This solution will provide the ability to manage fine-grained permissions for the data and minimize operational overhead.
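To illustrate the Lake Formation part of option C, a small boto3 sketch is shown below. The bucket, database, table, and role names are placeholders, and the Glue JDBC connection and crawler are assumed to already exist.

```python
import boto3

lf = boto3.client("lakeformation")

# Register the existing S3 bucket as a data lake location (placeholder ARN).
lf.register_resource(
    ResourceArn="arn:aws:s3:::purchase-data-bucket",
    UseServiceLinkedRole=True,
)

# Grant an analytics team fine-grained access to one Data Catalog table that
# a Glue crawler has already created over the S3 data (names are placeholders).
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/analytics-team"},
    Resource={"Table": {"DatabaseName": "sales", "Name": "purchases"}},
    Permissions=["SELECT"],
)
```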
A company is building a mobile app on AWS. The company wants to expand its reach to millions of users. The company needs to build a platform so that authorized users can watch the company’s content on their mobile devices.
What should a solutions architect recommend to meet these requirements?
- A . Publish content to a public Amazon S3 bucket. Use AWS Key Management Service (AWS KMS) keys to stream content.
- B . Set up IPsec VPN between the mobile app and the AWS environment to stream content.
- C . Use Amazon CloudFront. Provide signed URLs to stream content.
- D . Set up AWS Client VPN between the mobile app and the AWS environment to stream content.
C
Explanation:
Amazon CloudFront is a content delivery network (CDN) that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. CloudFront supports signed URLs that provide authorized access to your content. This feature allows the company to control who can access their content and for how long, providing a secure and scalable solution for millions of users.
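As an illustrative sketch only (Python, using botocore's CloudFrontSigner and the third-party rsa package; the key pair ID, distribution domain, and key file path are placeholders), the backend could mint a time-limited signed URL for an authenticated user like this:

```python
from datetime import datetime, timedelta

import rsa
from botocore.signers import CloudFrontSigner

# Placeholder key pair ID and domain; the private key corresponds to the
# public key registered with the CloudFront key group / trusted signer.
KEY_PAIR_ID = "K2JCJMDEHXQW5F"
DISTRIBUTION_DOMAIN = "https://d111111abcdef8.cloudfront.net"


def rsa_signer(message: bytes) -> bytes:
    with open("cloudfront_private_key.pem", "rb") as key_file:
        private_key = rsa.PrivateKey.load_pkcs1(key_file.read())
    return rsa.sign(message, private_key, "SHA-1")


signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# The URL is valid for one hour and is handed to the mobile app only after
# the user has authenticated with the backend.
signed_url = signer.generate_presigned_url(
    f"{DISTRIBUTION_DOMAIN}/videos/episode-01.m3u8",
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
print(signed_url)
```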
A company is conducting an internal audit. The company wants to ensure that the data in an Amazon S3 bucket that is associated with the company’s AWS Lake Formation data lake does not contain sensitive customer or employee data. The company wants to discover personally identifiable information (PII) or financial information, including passport numbers and credit card numbers.
Which solution will meet these requirements?
- A . Configure AWS Audit Manager on the account. Select the Payment Card Industry Data Security Standards (PCI DSS) for auditing.
- B . Configure Amazon S3 Inventory on the S3 bucket. Configure Amazon Athena to query the inventory.
- C . Configure Amazon Macie to run a data discovery job that uses managed identifiers for the required data types.
- D . Use Amazon S3 Select to run a report across the S3 bucket.
C
Explanation:
Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Macie can run data discovery jobs that use managed identifiers for various types of PII or financial information, such as passport numbers and credit card numbers. Macie can also generate findings that alert you to potential issues or risks with your data.
Reference: https://docs.aws.amazon.com/macie/latest/userguide/macie-identifiers.html
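A minimal boto3 sketch of option C might look like the following. The account ID, bucket name, and the specific managed data identifier IDs are illustrative placeholders; Macie's ListManagedDataIdentifiers API returns the exact IDs available.

```python
import boto3

macie = boto3.client("macie2")

# One-time discovery job scoped to the data lake bucket. The account ID,
# bucket name, and identifier IDs below are placeholders.
macie.create_classification_job(
    jobType="ONE_TIME",
    name="lake-pii-audit",
    managedDataIdentifierSelector="INCLUDE",
    managedDataIdentifierIds=["CREDIT_CARD_NUMBER", "USA_PASSPORT_NUMBER"],
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "123456789012", "buckets": ["lake-formation-data-bucket"]}
        ]
    },
)
```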
A company provides an online service for posting video content and transcoding it for use by any mobile platform. The application architecture uses Amazon Elastic File System (Amazon EFS) Standard to collect and store the videos so that multiple Amazon EC2 Linux instances can access the video content for processing. As the popularity of the service has grown over time, the storage costs have become too expensive.
Which storage solution is MOST cost-effective?
- A . Use AWS Storage Gateway for files to store and process the video content.
- B . Use AWS Storage Gateway for volumes to store and process the video content.
- C . Use Amazon EFS for storing the video content. Once processing is complete, transfer the files to Amazon Elastic Block Store (Amazon EBS).
- D . Use Amazon S3 for storing the video content. Move the files temporarily over to an Amazon Elastic Block Store (Amazon EBS) volume attached to the server for processing.
D
Explanation:
- Amazon S3 provides large-scale, durable, and inexpensive storage for the video content. S3 storage costs are significantly lower than EFS.
- Amazon EBS is used only temporarily during processing. By attaching an EBS volume while a video is being processed and releasing it afterward, the time the content spends on the higher-cost EBS storage is minimized. The volume can be sized for the active processing workload and does not need to hold the entire video library long-term.
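A rough sketch of that workflow (Python/boto3; the bucket name, EBS mount point, and run_transcoder helper are hypothetical) is shown below:

```python
import boto3

s3 = boto3.client("s3")

BUCKET = "video-content-bucket"       # placeholder bucket name
SCRATCH_DIR = "/mnt/ebs-scratch"      # EBS volume mounted on the EC2 instance


def transcode_video(key: str) -> None:
    """Pull one video from S3 to the EBS scratch volume, process it locally,
    and push the result back to S3. Long-term storage stays in S3."""
    filename = key.rsplit("/", 1)[-1]
    local_source = f"{SCRATCH_DIR}/{filename}"
    local_output = f"{local_source}.transcoded.mp4"

    s3.download_file(BUCKET, key, local_source)
    run_transcoder(local_source, local_output)  # hypothetical transcoding step
    s3.upload_file(local_output, BUCKET, f"transcoded/{filename}")
```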
A company hosts a marketing website in an on-premises data center. The website consists of static documents and runs on a single server. An administrator updates the website content infrequently and uses an SFTP client to upload new documents.
The company decides to host its website on AWS and to use Amazon CloudFront. The company’s solutions architect creates a CloudFront distribution. The solutions architect must design the most cost-effective and resilient architecture for website hosting to serve as the CloudFront origin.
Which solution will meet these requirements?
- A . Create a virtual server by using Amazon Lightsail. Configure the web server in the Lightsail instance. Upload website content by using an SFTP client.
- B . Create an AWS Auto Scaling group for Amazon EC2 instances. Use an Application Load Balancer.
Upload website content by using an SFTP client.
- C . Create a private Amazon S3 bucket. Use an S3 bucket policy to allow access from a CloudFront origin access identity (OAI). Upload website content by using the AWS CLI.
- D . Create a public Amazon S3 bucket. Configure AWS Transfer for SFTP. Configure the S3 bucket for website hosting. Upload website content by using the SFTP client.
C
Explanation:
A private S3 bucket behind a CloudFront origin access identity is the most cost-effective and resilient origin: S3 provides highly durable, inexpensive storage for static content with no servers to manage, and the bucket policy restricts access so that objects can be reached only through CloudFront. Uploading the infrequently changed documents with the AWS CLI removes the need to run an SFTP endpoint or web server at all.
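For illustration, a boto3 sketch of the bucket-policy side of option C is below; the bucket name and OAI ID are placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")

# Placeholder bucket name and OAI ID. The policy lets only the CloudFront
# origin access identity read objects, keeping the bucket itself private.
bucket = "marketing-site-origin"
oai_id = "E2QWRUHAPOMQZL"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```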
An ecommerce application uses a PostgreSQL database that runs on an Amazon EC2 instance. During a monthly sales event, database usage increases and causes database connection issues for the application. The traffic is unpredictable for subsequent monthly sales events, which impacts the sales forecast. The company needs to maintain performance when there is an unpredictable increase in traffic.
Which solution resolves this issue in the MOST cost-effective way?
- A . Migrate the PostgreSQL database to Amazon Aurora Serverless v2.
- B . Enable auto scaling for the PostgreSQL database on the EC2 instance to accommodate increased usage.
- C . Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a larger instance type.
- D . Migrate the PostgreSQL database to Amazon Redshift to accommodate increased usage.
A
Explanation:
Amazon Aurora Serverless v2 is a cost-effective solution that can automatically scale the database capacity up and down based on the application’s needs. It can handle unpredictable traffic spikes without requiring any provisioning or management of database instances. It is compatible with PostgreSQL and offers high performance, availability, and durability.
Reference: AWS Ramp-Up Guide: Architect, page 31; AWS Certified Solutions Architect – Associate exam guide, page 9.
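A rough boto3 sketch of option A is shown below; the identifiers, credential handling, and capacity range are illustrative placeholders rather than recommendations.

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora PostgreSQL cluster with Serverless v2 scaling limits.
# Identifiers, credential handling, and the capacity range are placeholders.
rds.create_db_cluster(
    DBClusterIdentifier="ecommerce-aurora",
    Engine="aurora-postgresql",
    MasterUsername="appadmin",
    ManageMasterUserPassword=True,  # lets RDS store the password in Secrets Manager
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 64},
)

# Serverless v2 instances use the special db.serverless instance class and
# scale within the cluster's configured capacity range.
rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-aurora-writer",
    DBClusterIdentifier="ecommerce-aurora",
    DBInstanceClass="db.serverless",
    Engine="aurora-postgresql",
)
```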