Practice Free SAA-C03 Exam Online Questions
A solutions architect is designing the network architecture for an application that runs on Amazon EC2 instances in an Auto Scaling group. The application needs to access data that is in Amazon S3 buckets.
Traffic to the S3 buckets must not use public IP addresses. The solutions architect will deploy the application in a VPC that has public and private subnets.
Which solutions will meet these requirements? (Select TWO.)
- A . Deploy the EC2 instances in a private subnet. Configure a default route to an egress-only internet gateway.
- B . Deploy the EC2 instances in a public subnet. Create a gateway endpoint for Amazon S3. Associate the endpoint with the subnet’s route table.
- C . Deploy the EC2 instances in a public subnet. Create an interface endpoint for Amazon S3. Configure DNS hostnames and DNS resolution for the VPC.
- D . Deploy the EC2 instances in a private subnet. Configure a default route to a NAT gateway in a public subnet.
- E . Deploy the EC2 instances in a private subnet. Configure a default route to a customer gateway.
B, D
Explanation:
Option B: A gateway endpoint for S3 allows traffic to S3 without using public IPs and integrates with route tables.
Option D: Deploying EC2 instances in a private subnet with a NAT gateway enables outbound internet connectivity for other requirements without public IPs.
Option A: Egress-only internet gateways are for IPv6 traffic and do not work for IPv4 in this context.
Option C: Interface endpoints are not required for S3 as gateway endpoints are more suitable and cost-effective.
Option E: A customer gateway is for hybrid connectivity (e.g., on-premises), not suitable for this case.
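To make option B concrete, here is a minimal boto3 sketch of creating an S3 gateway endpoint and associating it with a subnet's route table. The Region, VPC ID, and route table ID are placeholder values, not details from the question.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed Region

# Create a gateway endpoint for S3 and attach it to the subnet's route
# table. Traffic to S3 then stays on the AWS network, so the instances
# never need public IP addresses to reach the buckets.
response = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",             # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",  # S3 service name for the Region
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],   # placeholder route table ID
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```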
AWS Documentation Reference: VPC Endpoints
Amazon S3 Gateway Endpoints
How can a law firm make files publicly readable while preventing modifications or deletions until a specific future date?
- A . Upload files to an Amazon S3 bucket configured for static website hosting. Grant read-only IAM permissions to any AWS principals.
- B . Create an S3 bucket. Enable S3 Versioning. Use S3 Object Lock with a retention period. Create a CloudFront distribution. Use a bucket policy to restrict access.
- C . Create an S3 bucket. Enable S3 Versioning. Configure an event trigger with AWS Lambda to restore modified objects from a private S3 bucket.
- D . Upload files to an S3 bucket for static website hosting. Use S3 Object Lock with a retention period. Grant read-only IAM permissions.
B
Explanation:
Option B ensures the use of S3 Object Lock and Versioning to meet compliance for immutability.
CloudFront enhances performance while a bucket policy ensures secure access.
Option A lacks immutability safeguards.
Option C introduces unnecessary complexity.
Option D misses out on additional security benefits offered by CloudFront.
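As a minimal sketch of the Object Lock piece of option B, the following boto3 call locks an already-uploaded object until a specific future date. It assumes the bucket was created with Object Lock enabled; the bucket name, key, and date are placeholders.

```python
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

# Place a compliance-mode retention lock on the object. Until the
# RetainUntilDate passes, no user (including root) can modify or delete
# the protected object version.
s3.put_object_retention(
    Bucket="example-law-firm-files",  # placeholder bucket name
    Key="case-files/contract.pdf",    # placeholder object key
    Retention={
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime(2030, 1, 1, tzinfo=timezone.utc),
    },
)
```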
A company runs a highly available SFTP service. The SFTP service uses two Amazon EC2 Linux instances that run with Elastic IP addresses to accept traffic from trusted IP sources on the internet. The SFTP service is backed by shared storage that is attached to the instances. User accounts are created and managed as Linux users in the SFTP servers.
The company wants a serverless option that provides high IOPS performance and highly configurable security. The company also wants to maintain control over user permissions.
Which solution will meet these requirements?
- A . Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume. Create an AWS Transfer Family SFTP service with a public endpoint that allows only trusted IP addresses. Attach the EBS volume to the SFTP service endpoint. Grant users access to the SFTP service.
- B . Create an encrypted Amazon Elastic File System (Amazon EFS) volume. Create an AWS Transfer Family SFTP service with elastic IP addresses and a VPC endpoint that has internet-facing access. Attach a security group to the endpoint that allows only trusted IP addresses. Attach the EFS volume to the SFTP service endpoint. Grant users access to the SFTP service.
- C . Create an Amazon S3 bucket with default encryption enabled. Create an AWS Transfer Family SFTP service with a public endpoint that allows only trusted IP addresses. Attach the S3 bucket to the SFTP service endpoint. Grant users access to the SFTP service.
- D . Create an Amazon S3 bucket with default encryption enabled. Create an AWS Transfer Family SFTP service with a VPC endpoint that has internal access in a private subnet. Attach a security group that allows only trusted IP addresses. Attach the S3 bucket to the SFTP service endpoint. Grant users access to the SFTP service.
C
Explanation:
AWS Transfer Family is a secure transfer service that enables you to transfer files into and out of AWS storage services over the SFTP, FTPS, FTP, and AS2 protocols. You can create an SFTP-enabled server with a public endpoint that allows only trusted IP addresses, and attach an Amazon S3 bucket with default encryption enabled to the endpoint, which provides high IOPS performance and highly configurable security for data at rest. You maintain control over user permissions by granting users access to the SFTP service through IAM roles or service-managed identities.
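A minimal boto3 sketch of this setup, creating the server and one service-managed user. The role ARN, bucket path, and SSH key are placeholder values.

```python
import boto3

transfer = boto3.client("transfer")

# Create a service-managed SFTP server backed by Amazon S3.
server = transfer.create_server(
    Domain="S3",
    EndpointType="PUBLIC",
    IdentityProviderType="SERVICE_MANAGED",
    Protocols=["SFTP"],
)

# Create a user whose home directory maps into the S3 bucket. The IAM
# role must grant the needed S3 permissions for that prefix.
transfer.create_user(
    ServerId=server["ServerId"],
    UserName="alice",
    Role="arn:aws:iam::123456789012:role/sftp-access-role",  # placeholder
    HomeDirectory="/example-sftp-bucket/alice",              # placeholder
    SshPublicKeyBody="ssh-rsa AAAA...",                      # placeholder key
)
```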
Reference:
https://docs.aws.amazon.com/transfer/latest/userguide/what-is-aws-transfer-family.html
https://docs.aws.amazon.com/transfer/latest/userguide/create-server-s3.html
A company has several on-premises Internet Small Computer Systems Interface (iSCSI) network storage servers. The company wants to reduce the number of these servers by moving to the AWS Cloud. A solutions architect must provide low-latency access to frequently used data and reduce the dependency on on-premises servers with a minimal number of infrastructure changes.
Which solution will meet these requirements?
- A . Deploy an Amazon S3 File Gateway
- B . Deploy Amazon Elastic Block Store (Amazon EBS) storage with backups to Amazon S3
- C . Deploy an AWS Storage Gateway volume gateway that is configured with stored volumes
- D . Deploy an AWS Storage Gateway volume gateway that is configured with cached volumes.
D
Explanation:
Storage Gateway Volume Gateway (Cached Volumes): This configuration allows you to store your primary data in Amazon S3 while retaining frequently accessed data locally in a cache for low-latency access.
Low-Latency Access: Frequently accessed data is cached locally on-premises, providing low-latency access while the less frequently accessed data is stored cost-effectively in Amazon S3.
Implementation:
Deploy a Storage Gateway appliance on-premises or in a virtual environment.
Configure it as a volume gateway with cached volumes.
Create volumes and configure your applications to use these volumes.
Minimal Infrastructure Changes: This solution integrates seamlessly with existing on-premises infrastructure, requiring minimal changes and reducing dependency on on-premises storage servers.
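To make the implementation steps above concrete, here is a minimal boto3 sketch of creating a cached volume on an already-activated gateway. The gateway ARN, target name, and appliance IP are placeholders.

```python
import uuid

import boto3

sgw = boto3.client("storagegateway")

# Create a cached iSCSI volume on an activated volume gateway. Primary
# data lives in S3; frequently accessed data is cached on the appliance.
volume = sgw.create_cached_iscsi_volume(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",
    VolumeSizeInBytes=500 * 1024 * 1024 * 1024,  # 500 GiB volume
    TargetName="app-volume-1",       # becomes part of the iSCSI target IQN
    NetworkInterfaceId="10.0.1.25",  # gateway appliance's local IP address
    ClientToken=str(uuid.uuid4()),   # idempotency token
)
print(volume["VolumeARN"], volume["TargetARN"])
```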
Reference: AWS Storage Gateway Volume Gateway
Volume Gateway Cached Volumes
A company has deployed a serverless application that invokes an AWS Lambda function when new documents are uploaded to an Amazon S3 bucket. The application uses the Lambda function to process the documents. After a recent marketing campaign, the company noticed that the application did not process many of the documents.
What should a solutions architect do to improve the architecture of this application?
- A . Set the Lambda function’s runtime timeout value to 15 minutes
- B . Configure an S3 bucket replication policy. Stage the documents in the S3 bucket for later processing.
- C . Deploy an additional Lambda function. Load balance the processing of the documents across the two Lambda functions.
- D . Create an Amazon Simple Queue Service (Amazon SQS) queue. Send the requests to the queue. Configure the queue as an event source for Lambda.
D
Explanation:
To improve the architecture of this application, the best solution would be to use Amazon Simple Queue Service (Amazon SQS) to buffer the requests and decouple the S3 bucket from the Lambda function. This ensures that the documents are not lost and can be processed at a later time if the Lambda function is not available. By using Amazon SQS, the architecture is decoupled and the Lambda function can process the documents in a scalable and fault-tolerant manner.
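A minimal boto3 sketch of the queue-based decoupling. The function name is a placeholder, and the S3 event notification would also need to be repointed at the queue instead of at the function.

```python
import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")

# Create the buffer queue that will receive the S3 upload notifications.
queue = sqs.create_queue(QueueName="document-processing-queue")
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Configure the queue as an event source for the existing Lambda function.
# Lambda polls the queue and retries failed batches, so traffic bursts no
# longer cause documents to be dropped.
lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="process-documents",  # placeholder function name
    BatchSize=10,
)
```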
A company wants to create a mobile app that allows users to stream slow-motion video clips on their mobile devices. Currently, the app captures video clips and uploads the video clips in raw format into an Amazon S3 bucket. The app retrieves these video clips directly from the S3 bucket. However, the videos are large in their raw format.
Users are experiencing issues with buffering and playback on mobile devices. The company wants to implement solutions to maximize the performance and scalability of the app while minimizing operational overhead.
Which combination of solutions will meet these requirements? (Select TWO.)
- A . Deploy Amazon CloudFront for content delivery and caching
- B . Use AWS DataSync to replicate the video files across AWS Regions in other S3 buckets
- C . Use Amazon Elastic Transcoder to convert the video files to more appropriate formats.
- D . Deploy an Auto Scaling group of Amazon EC2 instances in Local Zones for content delivery and caching
- E . Deploy an Auto Scaling group of Amazon EC2 Instances to convert the video files to more appropriate formats.
AC
Explanation:
Understanding the Requirement: The mobile app captures and uploads raw video clips to S3, but users experience buffering and playback issues due to the large size of these videos.
Analysis of Options:
Amazon CloudFront: A content delivery network (CDN) that can cache and deliver content globally with low latency. It helps reduce buffering by delivering content from edge locations closer to the users.
AWS DataSync: Primarily used for data transfer and replication across AWS Regions, which does not directly address the video size and buffering issue.
Amazon Elastic Transcoder: A media transcoding service that can convert raw video files into formats and resolutions more suitable for streaming, reducing the size and improving playback performance.
EC2 Instances in Local Zones: While this could provide content delivery and caching, it involves more operational overhead compared to using CloudFront.
EC2 Instances for Transcoding: Involves setting up and maintaining infrastructure, leading to higher operational overhead compared to using Elastic Transcoder.
Best Combination of Solutions:
Deploy Amazon CloudFront: This optimizes the performance by caching content at edge locations, reducing latency and buffering for users.
Use Amazon Elastic Transcoder: This reduces the file size and converts videos into formats better suited for streaming on mobile devices.
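As a sketch of the transcoding half of the solution, the following boto3 call submits an Elastic Transcoder job. The pipeline ID and object keys are placeholders; the preset shown is the generic 720p system preset.

```python
import boto3

transcoder = boto3.client("elastictranscoder")

# Submit a transcoding job for a raw upload (typically triggered by an S3
# event). The output is a smaller, streaming-friendly rendition.
transcoder.create_job(
    PipelineId="1111111111111-abcde1",       # placeholder pipeline ID
    Input={"Key": "raw/clip-001.mp4"},       # placeholder input key
    Outputs=[{
        "Key": "web/clip-001-720p.mp4",
        "PresetId": "1351620000001-000010",  # system preset: Generic 720p
    }],
)
```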
Reference: Amazon CloudFront
Amazon Elastic Transcoder
A company stores data in PDF format in an Amazon S3 bucket. The company must follow a legal requirement to retain all new and existing data in Amazon S3 for 7 years.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Turn on the S3 Versioning feature for the S3 bucket. Configure S3 Lifecycle to delete the data after 7 years. Configure multi-factor authentication (MFA) delete for all S3 objects.
- B . Turn on S3 Object Lock with governance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Recopy all existing objects to bring the existing data into compliance.
- C . Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Recopy all existing objects to bring the existing data into compliance.
- D . Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Use S3 Batch Operations to bring the existing data into compliance.
C
Explanation:
S3 Object Lock enables a write-once-read-many (WORM) model for objects stored in Amazon S3. It can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely [1]. S3 Object Lock has two retention modes: governance mode and compliance mode. Compliance mode provides the highest level of protection and prevents any user, including the root user, from deleting or modifying an object version until the retention period expires. To use S3 Object Lock, a new bucket with Object Lock enabled must be created, and a default retention period can be optionally configured for objects placed in the bucket [2]. To bring existing objects into compliance, they must be recopied into the bucket with a retention period specified.
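As a minimal sketch of the recopy step in answer C, the following boto3 call copies an existing object into a bucket that was created with Object Lock enabled, applying a compliance-mode retention date. The bucket names, key, and date are placeholders.

```python
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

# Recopy an existing object into the Object Lock-enabled bucket with a
# compliance-mode retention period of roughly 7 years from the copy date.
s3.copy_object(
    Bucket="records-bucket-locked",  # new bucket created with Object Lock enabled
    Key="reports/2024/annual.pdf",
    CopySource={"Bucket": "records-bucket", "Key": "reports/2024/annual.pdf"},
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime(2032, 1, 1, tzinfo=timezone.utc),
)
```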
Option A is incorrect because S3 Versioning and S3 Lifecycle do not provide WORM protection for objects. Moreover, MFA delete only applies to deleting object versions, not modifying them.
Option B is incorrect because governance mode allows users with special permissions to override or remove the retention settings or delete the object if necessary. This does not meet the legal requirement of retaining all data for 7 years.
Option D is incorrect because S3 Batch Operations cannot be used to apply compliance mode retention periods to existing objects. S3 Batch Operations can only apply governance mode retention periods or legal holds.
Reference:
1: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html
2: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-console.html
3: https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html#sc-dynamic-data-access
4: https://docs.aws.amazon.com/AmazonS3/latest/userguide/transfer-acceleration.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-managing.html
https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/
A global ecommerce company runs its critical workloads on AWS. The workloads use an Amazon RDS for PostgreSQL DB instance that is configured for a Multi-AZ deployment.
Customers have reported application timeouts when the company undergoes database failovers. The company needs a resilient solution to reduce failover time.
Which solution will meet these requirements?
- A . Create an Amazon RDS Proxy. Assign the proxy to the DB instance.
- B . Create a read replica for the DB instance. Move the read traffic to the read replica.
- C . Enable Performance Insights. Monitor the CPU load to identify the timeouts.
- D . Take regular automatic snapshots. Copy the automatic snapshots to multiple AWS Regions.
A
Explanation:
Amazon RDS Proxy: RDS Proxy is a fully managed, highly available database proxy that makes applications more resilient to database failures by pooling and sharing connections, and it can automatically handle database failovers.
Reduced Failover Time: By using RDS Proxy, the connection management between the application and the database is improved, reducing failover times significantly. RDS Proxy maintains connections in a connection pool and reduces the time required to re-establish connections during a failover.
Configuration:
Create an RDS Proxy instance.
Configure the proxy to connect to the RDS for PostgreSQL DB instance.
Modify the application configuration to use the RDS Proxy endpoint instead of the direct database endpoint.
Operational Benefits: This solution provides high availability and reduces application timeouts during failovers with minimal changes to the application code.
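To make the configuration steps above concrete, here is a minimal boto3 sketch. The secret ARN, role ARN, subnet IDs, and instance identifier are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Create the proxy in front of the Multi-AZ PostgreSQL instance. The
# proxy authenticates to the database with credentials stored in
# Secrets Manager, read via the given IAM role.
proxy = rds.create_db_proxy(
    DBProxyName="app-pg-proxy",
    EngineFamily="POSTGRESQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-secrets-role",
    VpcSubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)

# Register the DB instance as the proxy target. The application then
# connects to the proxy endpoint instead of the instance endpoint.
rds.register_db_proxy_targets(
    DBProxyName="app-pg-proxy",
    DBInstanceIdentifiers=["app-postgres-db"],  # placeholder instance ID
)
print(proxy["DBProxy"]["Endpoint"])
```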
Reference: Amazon RDS Proxy
Setting Up RDS Proxy
A solutions architect needs to design the architecture for an application that a vendor provides as a Docker container image. The container needs 50 GB of storage available for temporary files. The infrastructure must be serverless.
Which solution meets these requirements with the LEAST operational overhead?
- A . Create an AWS Lambda function that uses the Docker container image with an Amazon S3 mounted volume that has more than 50 GB of space
- B . Create an AWS Lambda function that uses the Docker container image with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space
- C . Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type. Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume. Create a service with that task definition.
- D . Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the Amazon EC2 launch type with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space. Create a task definition for the container image. Create a service with that task definition.
C
Explanation:
The AWS Fargate launch type is a serverless way to run containers on Amazon ECS, without having to manage any underlying infrastructure. You only pay for the resources required to run your containers, and AWS handles the provisioning, scaling, and security of the cluster. Amazon EFS is a fully managed, elastic, and scalable file system that can be mounted to multiple containers, and provides high availability and durability. By using AWS Fargate and Amazon EFS, you can run your Docker container image with 50 GB of storage available for temporary files, with the least operational overhead. This solution meets the requirements of the question.
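A minimal boto3 sketch of the Fargate task definition with an EFS volume; the image URI, role ARN, and file system ID are placeholders. Because EFS grows elastically, no 50 GB volume needs to be provisioned up front.

```python
import boto3

ecs = boto3.client("ecs")

# Register a Fargate task definition that mounts an EFS file system for
# the vendor container's temporary files.
ecs.register_task_definition(
    family="vendor-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",     # 1 vCPU
    memory="2048",  # 2 GB
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "vendor-app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/vendor-app:latest",
        "essential": True,
        "mountPoints": [{
            "sourceVolume": "scratch",
            "containerPath": "/tmp/scratch",  # where the app writes temp files
        }],
    }],
    volumes=[{
        "name": "scratch",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-0123456789abcdef0",  # placeholder EFS ID
        },
    }],
)
```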
Reference: AWS Fargate
Amazon Elastic File System
Using Amazon EFS file systems with Amazon ECS
A media company has an ecommerce website to sell music. Each music file is stored as an MP3 file. Premium users of the website purchase music files and download the files. The company wants to store music files on AWS. The company wants to provide access only to the premium users. The company wants to use the same URL for all premium users.
Which solution will meet these requirements?
- A . Store the MP3 files on a set of Amazon EC2 instances that have Amazon Elastic Block Store (Amazon EBS) volumes attached. Manage access to the files by creating an IAM user and an IAM policy for each premium user.
- B . Store all the MP3 files in an Amazon S3 bucket. Create a presigned URL for each MP3 file. Share the presigned URLs with the premium users.
- C . Store all the MP3 files in an Amazon S3 bucket. Create an Amazon CloudFront distribution that uses the S3 bucket as the origin. Generate CloudFront signed cookies for the music files. Share the signed cookies with the premium users.
- D . Store all the MP3 files in an Amazon S3 bucket. Create an Amazon CloudFront distribution that uses the S3 bucket as the origin. Use a CloudFront signed URL for each music file. Share the signed URLs with the premium users.
C
Explanation:
CloudFront Signed Cookies:
CloudFront signed cookies allow the company to provide access to premium users while maintaining a single, consistent URL.
This approach is simpler and more scalable than managing presigned URLs for each file.
Incorrect Options Analysis:
Option A: Using EC2 and EBS increases complexity and cost.
Option B: Managing presigned URLs for each file is not scalable.
Option D: CloudFront signed URLs require unique URLs for each file, which does not meet the requirement for a single URL.
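As a sketch of how an application might mint the signed cookies, here is a Python example assuming the `cryptography` package and a CloudFront key pair whose ID is registered with the distribution; the resource URL, expiry, and key pair ID are placeholders.

```python
import base64
import json

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def _cloudfront_b64(data: bytes) -> str:
    # CloudFront's URL-safe base64 variant: '+' -> '-', '=' -> '_', '/' -> '~'
    return base64.b64encode(data).decode("ascii").translate(str.maketrans("+=/", "-_~"))


def make_signed_cookies(resource_url: str, expires_epoch: int,
                        key_pair_id: str, private_key_pem: bytes) -> dict:
    # Custom policy granting access to the resource until the expiry time.
    policy = json.dumps({
        "Statement": [{
            "Resource": resource_url,  # e.g. "https://dxxxx.cloudfront.net/music/*"
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
        }]
    }, separators=(",", ":")).encode("utf-8")

    # CloudFront expects an RSA SHA-1 signature over the policy document.
    key = serialization.load_pem_private_key(private_key_pem, password=None)
    signature = key.sign(policy, padding.PKCS1v15(), hashes.SHA1())

    return {
        "CloudFront-Policy": _cloudfront_b64(policy),
        "CloudFront-Signature": _cloudfront_b64(signature),
        "CloudFront-Key-Pair-Id": key_pair_id,
    }
```

The application sets these three cookies for a premium user's session; every music file URL under the distribution then works unchanged, which is what allows the same URL to be shared by all premium users.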
Reference: Serving Private Content with CloudFront