Practice Free SAA-C03 Exam Online Questions
A company wants to improve its ability to clone large amounts of production data into a test environment in the same AWS Region. The data is stored in Amazon EC2 instances on Amazon Elastic Block Store (Amazon EBS) volumes. Modifications to the cloned data must not affect the production environment. The software that accesses this data requires consistently high I/O performance.
A solutions architect needs to minimize the time that is required to clone the production data into the test environment.
Which solution will meet these requirements?
- A . Take EBS snapshots of the production EBS volumes. Restore the snapshots onto EC2 instance store volumes in the test environment.
- B . Configure the production EBS volumes to use the EBS Multi-Attach feature. Take EBS snapshots of the production EBS volumes. Attach the production EBS volumes to the EC2 instances in the test environment.
- C . Take EBS snapshots of the production EBS volumes. Create and initialize new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment before restoring the volumes from the production EBS snapshots.
- D . Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature on the EBS snapshots. Restore the snapshots into new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment.
C
Explanation:
To clone the production data into the test environment without affecting production, take EBS snapshots of the production volumes, restore them onto new EBS volumes, and attach the new volumes to the EC2 instances in the test environment. Initializing (pre-warming) the new volumes reads every block from Amazon S3 before the application needs it, so the software sees consistently high I/O performance from the start, and any modifications are made only to the cloned volumes, never to production. Therefore, option C is the correct answer.
Reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-restoring-volume.html
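A minimal boto3 sketch of this clone workflow (the volume ID, instance ID, Region, and device name are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # same Region as production

# 1. Snapshot the production volume.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0prod1234567890abc",
    Description="Clone of production data for the test environment",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# 2. Create a new EBS volume from the snapshot in the test instance's AZ.
volume = ec2.create_volume(
    SnapshotId=snapshot["SnapshotId"],
    AvailabilityZone="us-east-1a",
    VolumeType="gp3",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# 3. Attach the cloned volume to a test EC2 instance; writes to it never
#    touch the production volume.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0test1234567890abc",
    Device="/dev/sdf",
)
# On the test instance, read every block once (for example with dd or fio)
# to initialize the volume so first-access I/O performance is not degraded.
```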
A company has a financial application that produces reports. The reports average 50 KB in size and are stored in Amazon S3. The reports are frequently accessed during the first week after production and must be stored for several years. The reports must be retrievable within 6 hours.
Which solution meets these requirements MOST cost-effectively?
- A . Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier after 7 days.
- B . Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days.
- C . Use S3 Intelligent-Tiering. Configure S3 Intelligent-Tiering to transition the reports to S3 Standard-Infrequent Access (S3 Standard-IA) and S3 Glacier.
- D . Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier Deep Archive after 7 days.
A
Explanation:
To store and retrieve reports that are frequently accessed during the first week and must be stored for several years, S3 Standard and S3 Glacier are suitable. S3 Standard offers high durability, availability, and performance for frequently accessed data. S3 Glacier offers secure and durable storage for long-term data archiving at a low cost. An S3 Lifecycle rule can transition the reports from S3 Standard to S3 Glacier after 7 days, which reduces storage costs. S3 Glacier Flexible Retrieval standard retrievals typically complete in 3-5 hours, which meets the 6-hour requirement; S3 Glacier Deep Archive (option D) does not, because its standard retrievals take up to 12 hours.
Reference: Storage Classes
Object Lifecycle Management
Retrieving Archived Objects from Amazon S3 Glacier
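A minimal boto3 sketch of the lifecycle rule in option A (the bucket name and prefix are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle rule: keep reports in S3 Standard for 7 days, then transition them
# to S3 Glacier Flexible Retrieval ("GLACIER").
s3.put_bucket_lifecycle_configuration(
    Bucket="example-financial-reports",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "reports-to-glacier-after-7-days",
                "Filter": {"Prefix": "reports/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```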
A company wants to isolate its workloads by creating an AWS account for each workload. The company needs a solution that centrally manages networking components for the workloads. The solution also must create accounts with automatic security controls (guardrails).
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use AWS Control Tower to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts.
- B . Use AWS Organizations to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts.
- C . Use AWS Control Tower to deploy accounts. Deploy a VPC in each workload account. Configure each VPC to route through an inspection VPC by using a transit gateway attachment.
- D . Use AWS Organizations to deploy accounts. Deploy a VPC in each workload account. Configure each VPC to route through an inspection VPC by using a transit gateway attachment.
A
Explanation:
AWS Control Tower: Provides a managed service to set up and govern a secure, multi-account AWS environment based on AWS best practices. It automates the setup of AWS Organizations and applies security controls (guardrails).
Networking Account: Create a centralized networking account that includes a VPC with both private and public subnets. This centralized VPC will manage and control the networking resources.
AWS Resource Access Manager (AWS RAM): Use AWS RAM to share the subnets from the networking account with the workload accounts. This allows the workload accounts to use the shared networking resources without the need to manage their own VPCs.
Operational Efficiency: Using AWS Control Tower simplifies the setup and governance of multiple AWS accounts, while AWS RAM facilitates centralized management of networking resources, reducing operational overhead and ensuring consistent security and compliance.
Reference: AWS Control Tower
AWS Resource Access Manager
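A minimal boto3 sketch of the AWS RAM share from the networking account (the subnet ARNs and workload account ID are placeholders):

```python
import boto3

ram = boto3.client("ram", region_name="us-east-1")

# Share two subnets from the central networking account with a workload account.
share = ram.create_resource_share(
    name="shared-network-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0aaa1111bbb22222c",
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0ddd3333eee44444f",
    ],
    principals=["222222222222"],      # workload account ID
    allowExternalPrincipals=False,    # restrict sharing to the organization
)
print(share["resourceShare"]["resourceShareArn"])
```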
A company is deploying a new public web application to AWS. The application will run behind an Application Load Balancer (ALB). The application needs to be encrypted at the edge with an SSL/TLS certificate that is issued by an external certificate authority (CA). The certificate must be rotated each year before the certificate expires.
What should a solutions architect do to meet these requirements?
- A . Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Apply the certificate to the ALB. Use the managed renewal feature to automatically rotate the certificate.
- B . Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Import the key material from the certificate. Apply the certificate to the ALB. Use the managed renewal feature to automatically rotate the certificate.
- C . Use AWS Private Certificate Authority to issue an SSL/TLS certificate from the root CA. Apply the certificate to the ALB. Use the managed renewal feature to automatically rotate the certificate.
- D . Use AWS Certificate Manager (ACM) to import an SSL/TLS certificate. Apply the certificate to the ALB. Use Amazon EventBridge to send a notification when the certificate is nearing expiration. Rotate the certificate manually.
D
Explanation:
To use an SSL/TLS certificate that is issued by an external CA, the certificate must be imported into AWS Certificate Manager (ACM). ACM can send a notification when the certificate is nearing expiration, but it cannot automatically renew imported certificates. Therefore, the certificate must be rotated manually by importing a new certificate and applying it to the ALB.
Reference: Importing Certificates into AWS Certificate Manager
Renewing and Rotating Imported Certificates
Using an ACM Certificate with an Application Load Balancer
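A minimal boto3 sketch of the import-and-apply step in option D (the PEM file names and listener ARN are placeholders):

```python
import boto3

acm = boto3.client("acm", region_name="us-east-1")
elbv2 = boto3.client("elbv2", region_name="us-east-1")

# 1. Import the externally issued certificate into ACM.
with open("cert.pem", "rb") as cert, open("key.pem", "rb") as key, \
        open("chain.pem", "rb") as chain:
    imported = acm.import_certificate(
        Certificate=cert.read(),
        PrivateKey=key.read(),
        CertificateChain=chain.read(),
    )

# 2. Apply the imported certificate to the ALB's HTTPS listener. Repeat both
#    steps each year to rotate the certificate before it expires.
elbv2.modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111111111111:"
                "listener/app/example-alb/50dc6c495c0c9188/f2f7dc8efc522ab2",
    Certificates=[{"CertificateArn": imported["CertificateArn"]}],
)
```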
A company needs to export its database once a day to Amazon S3 for other teams to access. The exported object size varies between 2 GB and 5 GB. The S3 access pattern for the data is variable and changes rapidly. The data must be immediately available and must remain accessible for up to 3 months. The company needs the most cost-effective solution that will not increase retrieval time.
Which S3 storage class should the company use to meet these requirements?
- A . S3 Intelligent-Tiering
- B . S3 Glacier Instant Retrieval
- C . S3 Standard
- D . S3 Standard-Infrequent Access (S3 Standard-IA)
D
Explanation:
S3 Standard-Infrequent Access (S3 Standard-IA) offers the same low latency and high throughput as S3 Standard, so it does not increase retrieval time, while charging a lower per-GB storage price (with a per-GB retrieval fee and a 30-day minimum storage duration). The 2 GB to 5 GB daily exports remain immediately available for the full 3 months at a lower storage cost than S3 Standard. S3 Glacier Instant Retrieval is intended for rarely accessed archive data, and S3 Intelligent-Tiering adds a per-object monitoring and automation charge and only moves objects to its Infrequent Access tier after 30 consecutive days without access. S3 Standard-IA is therefore the cost-effective choice that keeps the data immediately available for up to 3 months.
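A minimal boto3 sketch of writing the daily export directly to S3 Standard-IA, with an assumed 90-day expiration rule (the bucket name, key, and local file path are placeholders):

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-db-exports"

# Upload the daily export directly to S3 Standard-IA so it is immediately
# retrievable with the same latency as S3 Standard.
s3.upload_file(
    Filename="/tmp/export-2024-01-01.dump",
    Bucket=BUCKET,
    Key="exports/export-2024-01-01.dump",
    ExtraArgs={"StorageClass": "STANDARD_IA"},
)

# Optional: expire the exports after roughly 3 months so storage stops accruing.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-exports-after-90-days",
                "Filter": {"Prefix": "exports/"},
                "Status": "Enabled",
                "Expiration": {"Days": 90},
            }
        ]
    },
)
```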
A company receives 10 TB of instrumentation data each day from several machines located at a single factory. The data consists of JSON files stored on a storage area network (SAN) in an on-premises data center located within the factory. The company wants to send this data to Amazon S3 where it can be accessed by several additional systems that provide critical near-real-time analytics. A secure transfer is important because the data is considered sensitive.
Which solution offers the MOST reliable data transfer?
- A . AWS DataSync over public internet
- B . AWS DataSync over AWS Direct Connect
- C . AWS Database Migration Service (AWS DMS) over public internet
- D . AWS Database Migration Service (AWS DMS) over AWS Direct Connect
B
Explanation:
These are some of the main use cases for AWS DataSync:
• Data migration: Move active datasets rapidly over the network into Amazon S3, Amazon EFS, or FSx for Windows File Server. DataSync includes automatic encryption and data integrity validation to help make sure that your data arrives securely, intact, and ready to use.
"DataSync includes encryption and integrity validation to help make sure your data arrives securely, intact, and ready to use." https://aws.amazon.com/datasync/faqs/
A company is performing a security review of its Amazon EMR API usage. The company’s developers use an integrated development environment (IDE) that is hosted on Amazon EC2 instances. The IDE is configured to authenticate users to AWS by using access keys. Traffic between the company’s EC2 instances and EMR cluster uses public IP addresses.
A solutions architect needs to improve the company’s overall security posture. The solutions architect needs to reduce the company’s use of long-term credentials and to limit the amount of communication that uses public IP addresses.
Which combination of steps will MOST improve the security of the company’s architecture? (Select TWO.)
- A . Set up a gateway endpoint to the EMR cluster.
- B . Set up interface VPC endpoints to connect to the EMR cluster.
- C . Set up a private NAT gateway to connect to the EMR cluster.
- D . Set up IAM roles for the developers to use to connect to the Amazon EMR API.
- E . Set up AWS Systems Manager Parameter Store to store access keys for each developer.
A company runs a shopping application that uses Amazon DynamoDB to store customer information. In case of data corruption, a solutions architect needs to design a solution that meets a recovery point objective (RPO) of 15 minutes and a recovery time objective (RTO) of 1 hour.
What should the solutions architect recommend to meet these requirements?
- A . Configure DynamoDB global tables. For RPO recovery, point the application to a different AWS Region.
- B . Configure DynamoDB point-in-time recovery. For RPO recovery, restore to the desired point in time.
- C . Export the DynamoDB data to Amazon S3 Glacier on a daily basis. For RPO recovery, import the data from S3 Glacier to DynamoDB.
- D . Schedule Amazon Elastic Block Store (Amazon EBS) snapshots for the DynamoDB table every 15 minutes. For RPO recovery, restore the DynamoDB table by using the EBS snapshot.
B
Explanation:
DynamoDB point-in-time recovery (PITR) provides continuous backups and lets you restore the table to any second within the preceding 35 days, which satisfies the 15-minute RPO.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery.html
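A minimal boto3 sketch of enabling PITR and restoring to a point in time (the table names and restore offset are placeholders):

```python
import boto3
from datetime import datetime, timedelta, timezone

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Enable point-in-time recovery on the customer table.
dynamodb.update_continuous_backups(
    TableName="CustomerInfo",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# After data corruption, restore to a moment just before the corruption
# (here, 15 minutes ago) into a new table, then repoint the application.
dynamodb.restore_table_to_point_in_time(
    SourceTableName="CustomerInfo",
    TargetTableName="CustomerInfo-restored",
    RestoreDateTime=datetime.now(timezone.utc) - timedelta(minutes=15),
)
```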
A video game company is deploying a new gaming application to its global users. The company requires a solution that will provide near real-time reviews and rankings of the players.
A solutions architect must design a solution to provide fast access to the data. The solution must also ensure the data persists on disks in the event that the company restarts the application.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Configure an Amazon CloudFront distribution with an Amazon S3 bucket as the origin. Store the player data in the S3 bucket.
- B . Create Amazon EC2 instances in multiple AWS Regions. Store the player data on the EC2 instances. Configure Amazon Route 53 with geolocation records to direct users to the closest EC2 instance.
- C . Deploy an Amazon ElastiCache for Redis cluster. Store the player data in the ElastiCache cluster.
- D . Deploy an Amazon ElastiCache for Memcached cluster. Store the player data in the ElastiCache cluster.
C
Explanation:
Requirement Analysis: The application needs near real-time access to data, persistence, and minimal operational overhead.
ElastiCache for Redis: Provides in-memory data storage with persistence, supporting fast access and durability.
Operational Overhead: Managed service reduces the burden of setup, maintenance, and scaling.
Implementation:
Deploy an ElastiCache for Redis cluster.
Configure Redis to persist data to disk using AOF (Append-Only File) or RDB (Redis Database Backup) snapshots.
Conclusion: ElastiCache for Redis meets the requirements for fast access, data persistence, and low operational overhead.
Reference:
Amazon ElastiCache: ElastiCache for Redis Documentation
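A minimal boto3 sketch of creating the Redis replication group with snapshot-based (RDB) persistence; enabling AOF instead would go through a custom parameter group. The IDs, node type, and windows are placeholders:

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Redis replication group with daily RDB snapshots retained for 7 days so the
# leaderboard data survives a restart of the application.
elasticache.create_replication_group(
    ReplicationGroupId="game-leaderboard",
    ReplicationGroupDescription="Player reviews and rankings",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumCacheClusters=2,            # primary plus one replica
    AutomaticFailoverEnabled=True,
    SnapshotRetentionLimit=7,      # keep RDB backups for 7 days
    SnapshotWindow="03:00-04:00",
)
```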
A company runs an application on three Amazon EC2 instances behind a Network Load Balancer (NLB) in each of two AWS Regions: us-west-2 and eu-west-1.
Which solution can the company use to route traffic to all the EC2 instances?
- A . Create an Amazon Route 53 geolocation routing policy to route requests to one of the two NLBs. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution’s origin.
- B . Create a standard accelerator in AWS Global Accelerator. Create endpoint groups in us-west-2 and eu-west-1. Add the two NLBs as endpoints for the endpoint groups.
- C . Attach Elastic IP addresses to the six EC2 instances. Create an Amazon Route 53 geolocation routing policy to route requests to one of the six EC2 instances. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution’s origin.
- D . Replace the two NLBs with two Application Load Balancers (ALBs). Create an Amazon Route 53 latency routing policy to route requests to one of the two ALBs. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution’s origin.
B
Explanation:
For standard accelerators, Global Accelerator uses the AWS global network to route traffic to the optimal regional endpoint based on health, client location, and policies that you configure, which increases the availability of your applications. Endpoints for standard accelerators can be Network Load Balancers, Application Load Balancers, Amazon EC2 instances, or Elastic IP addresses that are located in one AWS Region or multiple Regions.
https://docs.aws.amazon.com/global-accelerator/latest/dg/what-is-global-accelerator.html
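A minimal boto3 sketch of the Global Accelerator setup in option B (the NLB ARNs and port are placeholders; the Global Accelerator API is called through the us-west-2 endpoint):

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="example-app", IpAddressType="IPV4", Enabled=True)

listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region, each pointing at that Region's NLB. Global
# Accelerator then routes users to the closest healthy endpoint.
for region, nlb_arn in [
    ("us-west-2", "arn:aws:elasticloadbalancing:us-west-2:111111111111:loadbalancer/net/nlb-west/abc123"),
    ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:111111111111:loadbalancer/net/nlb-eu/def456"),
]:
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
    )
```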