Practice Free SAA-C03 Exam Online Questions
A company is launching a new gaming application. The company will use Amazon EC2 Auto Scaling groups to deploy the application. The application stores user data in a relational database.
The company has office locations around the world that need to run analytics on the user data in the database. The company needs a cost-effective database solution that provides cross-Region disaster recovery with low-latency read performance across AWS Regions.
Which solution will meet these requirements?
- A . Create an Amazon ElastiCache for Redis cluster in the Region where the application is deployed. Create read replicas in Regions where the company offices are located. Ensure the company offices read from the read replica instances.
- B . Create Amazon DynamoDB global tables. Deploy the tables to the Regions where the company offices are located and to the Region where the application is deployed. Ensure that each company office reads from the tables that are in the same Region as the office.
- C . Create an Amazon Aurora global database. Configure the primary cluster to be in the Region where the application is deployed. Configure the secondary Aurora replicas to be in the Regions where the company offices are located. Ensure the company offices read from the Aurora replicas.
- D . Create an Amazon RDS Multi-AZ DB cluster deployment in the Region where the application is deployed. Ensure the company offices read from read replica instances.
C

Explanation:
An Amazon Aurora global database consists of a primary cluster in one Region and read-only secondary clusters in up to five other Regions, which provides both cross-Region disaster recovery (a secondary Region can be promoted quickly) and low-latency reads close to each office. DynamoDB global tables (option B) are not a relational database, ElastiCache (option A) is a cache rather than a database of record, and an RDS Multi-AZ DB cluster (option D) provides high availability within a single Region only.
A solutions architect needs to design a highly available application consisting of web, application, and database tiers. HTTPS content delivery should be as close to the edge as possible, with the least delivery time.
Which solution meets these requirements and is MOST secure?
- A . Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in public subnets. Configure Amazon CloudFront to deliver HTTPS content using the public ALB as the origin.
- B . Configure a public Application Load Balancer with multiple redundant Amazon EC2 instances in private subnets. Configure Amazon CloudFront to deliver HTTPS content using the EC2 instances as the origin.
- C . Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in private subnets. Configure Amazon CloudFront to deliver HTTPS content using the public ALB as the origin.
- D . Configure a public Application Load Balancer with multiple redundant Amazon EC2 instances in public subnets. Configure Amazon CloudFront to deliver HTTPS content using the EC2 instances as the origin.
C
Explanation:
This solution meets the requirements for a highly available application with web, application, and database tiers, as well as edge-based content delivery through CloudFront. It is the most secure option because the EC2 instances are placed in private subnets, which prevents direct access to the web servers from the Internet, while the public ALB still serves traffic and acts as the CloudFront origin. Keeping the web servers off the public Internet reduces the attack surface without sacrificing availability.
An image-hosting company stores its objects in Amazon S3 buckets. The company wants to avoid accidental exposure of the objects in the S3 buckets to the public. All S3 objects in the entire AWS account need to remain private
Which solution will meet these requirements?
- A . Use Amazon GuardDuty to monitor S3 bucket policies. Create an automatic remediation action rule that uses an AWS Lambda function to remediate any change that makes the objects public.
- B . Use AWS Trusted Advisor to find publicly accessible S3 buckets. Configure email notifications in Trusted Advisor when a change is detected. Manually change the S3 bucket policy if it allows public access.
- C . Use AWS Resource Access Manager to find publicly accessible S3 buckets. Use Amazon Simple Notification Service (Amazon SNS) to invoke an AWS Lambda function when a change is detected. Deploy a Lambda function that programmatically remediates the change.
- D . Use the S3 Block Public Access feature on the account level. Use AWS Organizations to create a service control policy (SCP) that prevents IAM users from changing the setting. Apply the SCP to the account.
D
Explanation:
The S3 Block Public Access feature allows you to restrict public access to S3 buckets and objects within the account. You can enable this feature at the account level to prevent any S3 bucket from being made public, regardless of the bucket policy settings. AWS Organizations can be used to apply a Service Control Policy (SCP) to the account to prevent IAM users from changing this setting, ensuring that all S3 objects remain private. This is a straightforward and effective solution that requires minimal operational overhead.
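As an illustration, an SCP along the following lines could deny changes to the account-level Block Public Access setting. This is a sketch: the statement `Sid` is invented, and the exact scope (for example, adding bucket-level `s3:PutBucketPublicAccessBlock`) should be adjusted to your needs.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyChangesToS3BlockPublicAccess",
      "Effect": "Deny",
      "Action": "s3:PutAccountPublicAccessBlock",
      "Resource": "*"
    }
  ]
}
```

Because SCPs apply to all IAM users and roles in the member account (other than the management account), no principal can disable the account-level setting once this policy is attached.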
A company hosts a data lake on Amazon S3. The data lake ingests data in Apache Parquet format from various data sources. The company uses multiple transformation steps to prepare the ingested data. The steps include filtering of anomalies, normalizing of data to standard date and time values, and generation of aggregates for analyses.
The company must store the transformed data in S3 buckets that data analysts access. The company needs a prebuilt solution for data transformation that does not require code. The solution must provide data lineage and data profiling. The company needs to share the data transformation steps with employees throughout the company.
Which solution will meet these requirements?
- A . Configure an AWS Glue Studio visual canvas to transform the data. Share the transformation steps with employees by using AWS Glue jobs.
- B . Configure Amazon EMR Serverless to transform the data. Share the transformation steps with employees by using EMR Serverless jobs.
- C . Configure AWS Glue DataBrew to transform the data. Share the transformation steps with employees by using DataBrew recipes.
- D . Create Amazon Athena tables for the data. Write Athena SQL queries to transform the data. Share the Athena SQL queries with employees.
C
Explanation:
The most suitable solution for the company’s requirements is to configure AWS Glue DataBrew to transform the data and share the transformation steps with employees by using DataBrew recipes. This solution will provide a prebuilt solution for data transformation that does not require code, and will also provide data lineage and data profiling. The company can easily share the data transformation steps with employees throughout the company by using DataBrew recipes.
AWS Glue DataBrew is a visual data preparation tool that makes it easy for data analysts and data scientists to clean and normalize data for analytics or machine learning, up to 80% faster than writing custom code. Users can load their data from various sources, such as Amazon S3, Amazon RDS, Amazon Redshift, Amazon Aurora, or the Glue Data Catalog, and use a point-and-click interface to apply over 250 built-in transformations. Users can also preview the results of each transformation step and see how it affects the quality and distribution of the data.
A DataBrew recipe is a reusable set of transformation steps that can be applied to one or more datasets. Users can create recipes from scratch or use existing ones from the DataBrew recipe library. Users can also export, import, or share recipes with other users or groups within their AWS account or organization.
DataBrew also provides data lineage and data profiling features that help users understand and improve their data quality. Data lineage shows the source and destination of each dataset and how it is transformed by each recipe step. Data profiling shows various statistics and metrics about each dataset, such as column-level value distributions, data types, and counts of missing or duplicate values.
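To make the transformation steps in the scenario concrete (filtering anomalies, normalizing mixed date formats, generating aggregates), the plain-Python sketch below shows what such a no-code recipe performs. This is only an illustration of the logic; the sample records, field names, and anomaly threshold are invented, and DataBrew itself performs these steps visually without code.

```python
from datetime import datetime
from collections import defaultdict

# Stand-in rows for ingested records; timestamps arrive in mixed formats.
rows = [
    {"user": "a", "ts": "2023-07-01T10:00:00", "score": 42},
    {"user": "a", "ts": "07/01/2023 11:30:00", "score": 40},
    {"user": "b", "ts": "2023-07-01T09:15:00", "score": 9999},  # anomaly
    {"user": "b", "ts": "2023-07-02T08:00:00", "score": 55},
]

def parse_ts(ts: str) -> datetime:
    """Normalize mixed date formats to a single datetime representation."""
    for fmt in ("%Y-%m-%dT%H:%M:%S", "%m/%d/%Y %H:%M:%S"):
        try:
            return datetime.strptime(ts, fmt)
        except ValueError:
            pass
    raise ValueError(f"unrecognized timestamp: {ts}")

# Step 1: filter anomalies (scores outside a plausible range).
clean = [r for r in rows if 0 <= r["score"] <= 100]

# Step 2: normalize timestamps to a standard ISO 8601 string.
for r in clean:
    r["ts"] = parse_ts(r["ts"]).isoformat()

# Step 3: generate per-user aggregates for analysis.
totals = defaultdict(int)
for r in clean:
    totals[r["user"]] += r["score"]

print(dict(totals))  # {'a': 82, 'b': 55}
```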
The following IAM policy is attached to an IAM group. This is the only policy applied to the group.
- A . Group members are permitted any Amazon EC2 action within the us-east-1 Region. Statements after the Allow permission are not applied.
- B . Group members are denied any Amazon EC2 permissions in the us-east-1 Region unless they are logged in with multi-factor authentication (MFA).
- C . Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for all Regions when logged in with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action.
- D . Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for the us-east-1 Region only when logged in with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action within the us-east-1 Region.
D
Explanation:
This answer is correct because it reflects the effect of the IAM policy on the group members. The policy has two statements: one with an Allow effect and one with a Deny effect. The Allow statement grants permission to perform any EC2 action on any resource within the us-east-1 Region. The Deny statement overrides the Allow statement and denies permission to perform the ec2:StopInstances and ec2:TerminateInstances actions on any resource within the us-east-1 Region, unless the group member is logged in with MFA. Therefore, the group members can perform any EC2 action except stopping or terminating instances in the us-east-1 Region, unless they use MFA.
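The policy exhibit itself is not reproduced above; a policy consistent with this explanation would look roughly like the following reconstruction (not the original exhibit):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "ec2:Region": "us-east-1" }
      }
    },
    {
      "Effect": "Deny",
      "Action": ["ec2:StopInstances", "ec2:TerminateInstances"],
      "Resource": "*",
      "Condition": {
        "StringEquals": { "ec2:Region": "us-east-1" },
        "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
      }
    }
  ]
}
```

The `BoolIfExists` check on `aws:MultiFactorAuthPresent` makes the Deny apply both when the key is `false` and when it is absent entirely (for example, with long-term access keys), which is the standard pattern for requiring MFA.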
A company’s web application consists of an Amazon API Gateway API in front of an AWS Lambda function and an Amazon DynamoDB database. The Lambda function
handles the business logic, and the DynamoDB table hosts the data. The application uses Amazon Cognito user pools to identify the individual users of the application. A solutions architect needs to update the application so that only users who have a subscription can access premium content.
Which solution will meet these requirements?
- A . Enable API caching and throttling on the API Gateway API
- B . Set up AWS WAF on the API Gateway API Create a rule to filter users who have a subscription
- C . Apply fine-grained IAM permissions to the premium content in the DynamoDB table
- D . Implement API usage plans and API keys to limit the access of users who do not have a subscription.
D
Explanation:
This option is the most effective because it uses API usage plans and API keys, which are features of Amazon API Gateway that let you control who can access your API and how much and how fast they can access it. By issuing API keys only to subscribed users and associating those keys with a usage plan, you can create different tiers of access for the API and restrict premium content to subscribers. This meets the requirement that only users who have a subscription can access premium content.
Option A is less efficient because it uses API caching and throttling on the API Gateway API, which are features of Amazon API Gateway that allow you to improve the performance and availability of your API and protect your backend systems from traffic spikes2. However, this does not provide a way to limit the access of users who do not have a subscription.
Option B is less efficient because it uses AWS WAF on the API Gateway API, which is a web application firewall service that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources3. However, this does not provide a way to limit the access of users who do not have a subscription.
Option C is less efficient because it uses fine-grained IAM permissions to the premium content in the DynamoDB table, which are permissions that allow you to control access to specific items or attributes within a table4. However, this does not provide a way to limit the access of users who do not have a subscription at the API level.
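As a rough illustration of what a usage plan's rate limiting does conceptually, the token-bucket check below is a simplified sketch. API Gateway implements this for you; the class name and the rate/burst numbers here are invented for the example.

```python
import time

class UsagePlan:
    """Simplified token-bucket rate limiter, loosely modeling an
    API Gateway usage plan's rate (tokens/second) and burst settings."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate            # tokens refilled per second
        self.burst = burst          # maximum bucket size
        self.tokens = float(burst)  # bucket starts full
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # request throttled (API Gateway would return HTTP 429)

plan = UsagePlan(rate=1.0, burst=2)
results = [plan.allow() for _ in range(4)]  # a burst of 4 immediate requests
print(results)  # first two pass, the rest are throttled
```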
A company is redesigning a static website. The company needs a solution to host the new website in the company’s AWS account. The solution must be secure and scalable.
Which combination of solutions will meet these requirements? (Select THREE.)
- A . Configure an Amazon CloudFront distribution. Set the Amazon S3 bucket as the origin.
- B . Associate an AWS Certificate Manager (ACM) TLS certificate to the Amazon CloudFront distribution.
- C . Enable static website hosting for the Amazon S3 bucket.
- D . Create an Amazon S3 bucket to store the static website content.
- E . Export the website’s SSL/TLS certificate from AWS Certificate Manager (ACM) to the root of the Amazon S3 bucket.
- F . Turn off Block Public Access for the Amazon S3 bucket.
A, B, D

Explanation:
Create an S3 bucket to store the static content (D), serve it through an Amazon CloudFront distribution with the bucket as the origin (A), and associate an ACM TLS certificate with the distribution for HTTPS (B). With CloudFront in front of the bucket (using origin access control), S3 static website hosting (C) is unnecessary, the bucket can keep Block Public Access enabled, so turning it off (F) would weaken security, and ACM certificates cannot be exported to an S3 bucket (E).
A company has customers located across the world. The company wants to use automation to secure its systems and network infrastructure. The company’s security team must be able to track and audit all incremental changes to the infrastructure.
Which solution will meet these requirements?
- A . Use AWS Organizations to set up the infrastructure. Use AWS Config to track changes
- B . Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.
- C . Use AWS Organizations to set up the infrastructure. Use AWS Service Catalog to track changes.
- D . Use AWS CloudFormation to set up the infrastructure. Use AWS Service Catalog to track changes.
B
Explanation:
AWS CloudFormation allows for the automated, repeatable setup of infrastructure, reducing human error and ensuring consistency. AWS Config provides the ability to track changes in the infrastructure, ensuring that all changes are logged and auditable, which satisfies the requirement for tracking incremental changes.
Option A and C (AWS Organizations): AWS Organizations manage multiple accounts, but they are not designed for infrastructure setup or change tracking.
Option D (Service Catalog): Service Catalog is used for deploying products, not for setting up infrastructure or tracking changes.
Reference: AWS Config
AWS CloudFormation
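As a minimal illustration of infrastructure as code, a CloudFormation template describes resources declaratively, so every change goes through a reviewable template rather than manual console edits. The sketch below defines a single versioned S3 bucket; the logical resource name is arbitrary, and the bucket name is omitted so CloudFormation generates one.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example template - a single versioned S3 bucket.
Resources:
  AuditBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
```

With the stack deployed this way, AWS Config (and CloudFormation drift detection) can record and audit any incremental change to the resource over time.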
A company uses high concurrency AWS Lambda functions to process a constantly increasing number of messages in a message queue during marketing events. The Lambda functions use CPU intensive code to process the messages. The company wants to reduce the compute costs and to maintain service latency for its customers.
Which solution will meet these requirements?
- A . Configure reserved concurrency for the Lambda functions. Decrease the memory allocated to the Lambda functions.
- B . Configure reserved concurrency for the Lambda functions. Increase the memory according to AWS Compute Optimizer recommendations.
- C . Configure provisioned concurrency for the Lambda functions. Decrease the memory allocated to the Lambda functions.
- D . Configure provisioned concurrency for the Lambda functions. Increase the memory according to AWS Compute Optimizer recommendations.
D
Explanation:
The company wants to reduce the compute costs and maintain service latency for its Lambda functions that process a constantly increasing number of messages in a message queue. The Lambda functions use CPU intensive code to process the messages. To meet these requirements, a solutions architect should recommend the following solution:
Configure provisioned concurrency for the Lambda functions. Provisioned concurrency is the number of pre-initialized execution environments that are allocated to the Lambda functions. These execution environments are prepared to respond immediately to incoming function requests, reducing the cold start latency. Configuring provisioned concurrency also helps to avoid throttling errors due to reaching the concurrency limit of the Lambda service.
Increase the memory according to AWS Compute Optimizer recommendations. AWS Compute Optimizer is a service that provides recommendations for optimal AWS resource configurations based on your utilization data. By increasing the memory allocated to the Lambda functions, you can also increase the CPU power and improve the performance of your CPU intensive code. AWS Compute Optimizer can help you find the optimal memory size for your Lambda functions based on your workload characteristics and performance goals.
This solution will reduce the compute costs by avoiding unnecessary over-provisioning of memory and CPU resources, and maintain service latency by using provisioned concurrency and optimal memory size for the Lambda functions.
Reference: Provisioned Concurrency
AWS Compute Optimizer
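To see why raising memory can cut cost for CPU-bound code, recall that Lambda bills per GB-second and allocates CPU in proportion to memory. If doubling memory roughly halves the duration of CPU-bound work, the cost stays about flat while latency drops. The arithmetic sketch below uses an illustrative per-GB-second price; check current Lambda pricing for your Region before relying on the figure.

```python
# Illustrative Lambda pricing arithmetic. The per-GB-second price is an
# example figure; actual pricing varies by Region and architecture.
PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(memory_mb: int, duration_ms: float) -> float:
    """Cost of one invocation in USD: billed GB-seconds times unit price."""
    gb = memory_mb / 1024
    seconds = duration_ms / 1000
    return gb * seconds * PRICE_PER_GB_SECOND

# CPU-bound workload: doubling memory doubles the vCPU share,
# so assume the duration roughly halves.
low = invocation_cost(memory_mb=1024, duration_ms=2000)   # 1 GB for 2 s
high = invocation_cost(memory_mb=2048, duration_ms=1000)  # 2 GB for 1 s

print(f"low-memory cost:  ${low:.8f}")
print(f"high-memory cost: ${high:.8f}")  # same cost, half the latency
```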
A company has a nightly batch processing routine that analyzes report files that an on-premises file system receives daily through SFTP. The company wants to move the solution to the AWS Cloud. The solution must be highly available and resilient. The solution also must minimize operational effort.
Which solution meets these requirements?
- A . Deploy AWS Transfer for SFTP and an Amazon Elastic File System (Amazon EFS) file system for storage. Use an Amazon EC2 instance in an Auto Scaling group with a scheduled scaling policy to run the batch operation.
- B . Deploy an Amazon EC2 instance that runs Linux and an SFTP service. Use an Amazon Elastic Block Store (Amazon EBS) volume for storage. Use an Auto Scaling group with the minimum number of instances and desired number of instances set to 1.
- C . Deploy an Amazon EC2 instance that runs Linux and an SFTP service. Use an Amazon Elastic File System (Amazon EFS) file system for storage. Use an Auto Scaling group with the minimum number of instances and desired number of instances set to 1.
- D . Deploy AWS Transfer for SFTP and an Amazon S3 bucket for storage. Modify the application to pull the batch files from Amazon S3 to an Amazon EC2 instance for processing. Use an EC2 instance in an Auto Scaling group with a scheduled scaling policy to run the batch operation.
D
Explanation:
AWS Transfer for SFTP (part of AWS Transfer Family) is a fully managed, highly available SFTP service, so the company no longer has to operate and patch its own SFTP servers. Delivering the files directly into an Amazon S3 bucket provides durable, highly available storage, and running the nightly batch on an EC2 instance in an Auto Scaling group with a scheduled scaling policy means compute runs only when the batch job needs it, minimizing operational effort and cost. Options B and C require the company to manage its own EC2-based SFTP server, which adds operational overhead and a single point of failure, and option A couples the batch job to an EFS file system, which adds cost and complexity compared with S3 for this workload.
Reference: AWS Transfer Family
Amazon S3