Practice Free SAA-C03 Exam Online Questions
An IoT company is releasing a mattress that has sensors to collect data about a user's sleep. The sensors will send data to an Amazon S3 bucket. The sensors collect approximately 2 MB of data every night for each mattress. The company must process and summarize the data for each mattress, and the results need to be available as soon as possible. Data processing will require 1 GB of memory and will finish within 30 seconds.
Which solution will meet these requirements MOST cost-effectively?
- A . Use AWS Glue with a Scala job.
- B . Use Amazon EMR with an Apache Spark script.
- C . Use AWS Lambda with a Python script.
- D . Use AWS Glue with a PySpark job.
C
Explanation:
AWS Lambda charges based on the number of invocations and the execution time of your function. The job's requirements (1 GB of memory, completion within 30 seconds) fit comfortably within Lambda's limits, and the small nightly payload (2 MB per mattress) makes Lambda the most cost-effective choice: you pay only for actual usage, with no infrastructure to provision or maintain. An S3 event notification can invoke the function as soon as each object arrives, which also meets the requirement for results to be available as soon as possible.
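As a sketch of this approach (assuming a hypothetical JSON-lines format for the sensor data; the field names are illustrative, not from the question), an S3-triggered Lambda handler might look like:

```python
import json

def summarize_sleep_data(raw: bytes) -> dict:
    """Summarize one night's readings. Assumes a hypothetical JSON-lines
    format with a numeric 'heart_rate' field per reading."""
    readings = [json.loads(line) for line in raw.splitlines() if line.strip()]
    rates = [r["heart_rate"] for r in readings]
    return {
        "readings": len(readings),
        "avg_heart_rate": sum(rates) / len(rates),
    }

def lambda_handler(event, context):
    # boto3 is bundled in the Lambda runtime; it is imported here so the
    # summarization logic above stays testable without AWS access.
    import boto3
    s3 = boto3.client("s3")
    results = []
    for record in event["Records"]:  # standard S3 event notification shape
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        results.append(summarize_sleep_data(body))
    return results
```

Configuring the function with 1 GB of memory and an S3 `ObjectCreated` trigger on the sensor bucket would complete the pipeline.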
A company wants to direct its users to a backup static error page if the company’s primary website is unavailable. The primary website’s DNS records are hosted in Amazon Route 53. The domain is pointing to an Application Load Balancer (ALB). The company needs a solution that minimizes changes and infrastructure overhead.
Which solution will meet these requirements?
- A . Update the Route 53 records to use a latency routing policy. Add a static error page that is hosted in an Amazon S3 bucket to the records so that the traffic is sent to the most responsive endpoints.
- B . Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page that is hosted in an Amazon S3 bucket when Route 53 health checks determine that the ALB endpoint is unhealthy.
- C . Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance that hosts a static error page as endpoints. Configure Route 53 to send requests to the instance only if the health checks fail for the ALB.
- D . Update the Route 53 records to use a multivalue answer routing policy. Create a health check. Direct traffic to the website if the health check passes. Direct traffic to a static error page that is hosted in Amazon S3 if the health check does not pass.
B
Explanation:
This solution meets the requirements of directing users to a backup static error page if the primary website is unavailable, minimizing changes and infrastructure overhead. Route 53 active-passive failover configuration can route traffic to a primary resource when it is healthy or to a secondary resource when the primary resource is unhealthy. Route 53 health checks can monitor the health of the ALB endpoint and trigger the failover when needed. The static error page can be hosted in an S3 bucket that is configured as a website, which is a simple and cost-effective way to serve static content.
Option A is incorrect because using a latency routing policy can route traffic based on the lowest network latency for users, but it does not provide failover functionality.
Option C is incorrect because using an active-active configuration with the ALB and an EC2 instance can increase the infrastructure overhead and complexity, and it does not guarantee that the EC2 instance will always be healthy.
Option D is incorrect because using a multivalue answer routing policy can return multiple values for a query, but it does not provide failover functionality.
Reference:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy-failover.html
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
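As an illustrative sketch (all names, hosted zone IDs, and the health check ID are hypothetical placeholders), the active-passive pair could be expressed as a change batch built like this, which would then be passed to the Route 53 `change_resource_record_sets` API:

```python
def failover_change_batch(domain, alb_dns, alb_zone_id,
                          s3_website_dns, s3_zone_id, health_check_id):
    """Build a Route 53 ChangeBatch for an active-passive failover pair:
    PRIMARY -> the ALB (guarded by a health check), SECONDARY -> the S3
    static-website endpoint. All identifiers here are placeholders."""
    def alias_record(failover, target_dns, target_zone, **extra):
        record = {
            "Name": domain,
            "Type": "A",
            "SetIdentifier": f"{failover.lower()}-record",
            "Failover": failover,
            "AliasTarget": {
                "DNSName": target_dns,
                "HostedZoneId": target_zone,
                "EvaluateTargetHealth": True,
            },
        }
        record.update(extra)
        return {"Action": "UPSERT", "ResourceRecordSet": record}

    return {
        "Changes": [
            alias_record("PRIMARY", alb_dns, alb_zone_id,
                         HealthCheckId=health_check_id),
            alias_record("SECONDARY", s3_website_dns, s3_zone_id),
        ]
    }
```

When the health check on the primary record fails, Route 53 answers queries with the secondary (S3 error page) record instead.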
A company has a mobile game that reads most of its metadata from an Amazon RDS DB instance. As the game increased in popularity, developers noticed slowdowns related to the game's metadata load times. Performance metrics indicate that simply scaling the database will not help. A solutions architect must explore options that provide snapshots, replication, and sub-millisecond response times.
What should the solutions architect recommend to solve these issues?
- A . Migrate the database to Amazon Aurora with Aurora Replicas
- B . Migrate the database to Amazon DynamoDB with global tables
- C . Add an Amazon ElastiCache for Redis layer in front of the database.
- D . Add an Amazon ElastiCache for Memcached layer in front of the database
C
Explanation:
This option is the most suitable way to improve the game’s metadata load times without migrating the database. Amazon ElastiCache for Redis is a fully managed, in-memory data store that provides sub-millisecond latency and high throughput for read-intensive workloads. You can use it as a caching layer in front of your RDS DB instance to store frequently accessed metadata and reduce the load on the database. You can also take advantage of Redis features such as snapshots, replication, and data persistence to ensure data durability and availability. ElastiCache for Redis scales automatically to meet your demand and integrates with other AWS services such as CloudFormation, CloudWatch, and IAM.
Option A is not optimal because migrating the database to Amazon Aurora with Aurora Replicas would incur additional costs and complexity. Amazon Aurora is a relational database service that provides high performance, availability, and compatibility with MySQL and PostgreSQL. Aurora Replicas are read-only copies of the primary database that can be used for scaling read capacity and enhancing availability. However, migrating the database to Aurora would require modifying the application code, testing the compatibility, and performing the data migration. Moreover, Aurora Replicas may not provide sub-millisecond response times as ElastiCache for Redis does.
Option B is not optimal because migrating the database to Amazon DynamoDB with global tables would incur additional costs and complexity. Amazon DynamoDB is a NoSQL database service that provides fast and flexible data access for any scale. Global tables are a feature of DynamoDB that enables you to replicate your data across multiple AWS Regions for high availability and performance. However, migrating the database to DynamoDB would require changing the data model, modifying the application code, and performing the data migration. Moreover, global tables may not be necessary for the game’s metadata, as they are mainly used for cross-region data access and disaster recovery.
Option D is not optimal because adding an Amazon ElastiCache for Memcached layer in front of the database would not provide the same capabilities as ElastiCache for Redis. Amazon ElastiCache for Memcached is another fully managed, in-memory data store that provides high performance and scalability for caching workloads. However, Memcached does not support snapshots, replication, or data persistence, which means that the cached data may be lost in case of a node failure or a cache eviction. Moreover, Memcached does not integrate with other AWS services as well as Redis does.
Therefore, ElastiCache for Redis is a better choice for this scenario.
Reference:
What Is Amazon ElastiCache for Redis?
What Is Amazon Aurora?
What Is Amazon DynamoDB?
What Is Amazon ElastiCache for Memcached?
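The cache-aside pattern described above can be sketched as follows. The Redis client and the database lookup are stubbed out as plain objects so the sketch is self-contained; in production these would be, for example, a `redis.Redis` connection to the ElastiCache endpoint and an RDS query:

```python
import json
import time

def get_metadata(item_id, cache, db_lookup, ttl_seconds=300):
    """Cache-aside read: try the cache first, fall back to the database,
    then populate the cache. `cache` needs get/set; `db_lookup` is any
    callable that fetches the row from the primary database."""
    key = f"metadata:{item_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)           # cache hit: sub-millisecond path
    value = db_lookup(item_id)              # cache miss: read through to RDS
    cache.set(key, json.dumps(value), ttl_seconds)
    return value

class InMemoryCache:
    """Stand-in for a Redis client, used only to keep this sketch runnable."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None or entry[1] < time.monotonic():
            return None
        return entry[0]

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)
```

Repeated reads for the same metadata are then served from memory, which is what relieves the pressure on the RDS instance.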
A solutions architect is designing the architecture for a software demonstration environment. The environment will run on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). The system will experience significant increases in traffic during working hours but is not required to operate on weekends.
Which combination of actions should the solutions architect take to ensure that the system can scale to meet demand? (Select TWO)
- A . Use AWS Auto Scaling to adjust the ALB capacity based on request rate
- B . Use AWS Auto Scaling to scale the capacity of the VPC internet gateway
- C . Launch the EC2 instances in multiple AWS Regions to distribute the load across Regions
- D . Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization
- E . Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. Revert to the default values at the start of the week.
D, E
Explanation:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html#target-tracking-choose-metrics
A target tracking scaling policy is a type of dynamic scaling policy that adjusts the capacity of an Auto Scaling group based on a specified metric and a target value. A target tracking scaling policy can automatically scale out or scale in the Auto Scaling group to keep the actual metric value at or near the target value. It is suitable for scenarios where the load on the application changes frequently and unpredictably, such as during working hours.
To meet the requirements of the scenario, the solutions architect should use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization. Instance CPU utilization is a common metric that reflects the demand on the application. The solutions architect should specify a target value that represents the ideal average CPU utilization level for the application, such as 50 percent. Then, the Auto Scaling group will scale out or scale in to maintain that level of CPU utilization.
Scheduled scaling is a type of scaling policy that performs scaling actions based on a date and time. It is suitable for scenarios where the load on the application changes periodically and predictably, such as on weekends.
To meet the requirements of the scenario, the solutions architect should also use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. This way, the Auto Scaling group will terminate all instances on weekends when they are not required to operate. The solutions architect should also revert to the default values at the start of the week, so that the Auto Scaling group can resume normal operation.
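As a sketch, the two policies might be configured with parameters like these. The group name, cron schedules, and default capacities are assumptions; each dict would be passed to the boto3 Auto Scaling call named in its docstring:

```python
def target_tracking_policy_params(group_name, target_cpu=50.0):
    """Parameters for autoscaling.put_scaling_policy: keep average
    CPU utilization near the target during working-hours traffic."""
    return {
        "AutoScalingGroupName": group_name,
        "PolicyName": "cpu-target-tracking",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": target_cpu,
        },
    }

def weekend_schedule_params(group_name):
    """Parameters for autoscaling.put_scheduled_update_group_action:
    scale to zero Friday evening, restore defaults Monday morning.
    Cron fields: minute hour day-of-month month day-of-week (UTC)."""
    scale_down = {
        "AutoScalingGroupName": group_name,
        "ScheduledActionName": "weekend-scale-in",
        "Recurrence": "0 20 * * 5",   # Fridays 20:00 UTC
        "MinSize": 0, "MaxSize": 0, "DesiredCapacity": 0,
    }
    scale_up = {
        "AutoScalingGroupName": group_name,
        "ScheduledActionName": "weekday-scale-out",
        "Recurrence": "0 6 * * 1",    # Mondays 06:00 UTC
        "MinSize": 2, "MaxSize": 10, "DesiredCapacity": 2,  # assumed defaults
    }
    return [scale_down, scale_up]
```

Together these cover both behaviors: the target tracking policy handles unpredictable weekday load, and the scheduled actions eliminate weekend cost.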
A gaming company is moving its public scoreboard from a data center to the AWS Cloud. The company uses Amazon EC2 Windows Server instances behind an Application Load Balancer to host its dynamic application. The company needs a highly available storage solution for the application. The application consists of static files and dynamic server-side code.
Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)
- A . Store the static files on Amazon S3. Use Amazon CloudFront to cache objects at the edge.
- B . Store the static files on Amazon S3. Use Amazon ElastiCache to cache objects at the edge.
- C . Store the server-side code on Amazon Elastic File System (Amazon EFS). Mount the EFS volume on each EC2 instance to share the files.
- D . Store the server-side code on Amazon FSx for Windows File Server. Mount the FSx for Windows File Server volume on each EC2 instance to share the files.
- E . Store the server-side code on a General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on each EC2 instance to share the files.
A, D
Explanation:
A is correct because ElastiCache, despite being well suited to leaderboards, does not cache objects at edge locations; CloudFront does. D is correct because FSx for Windows File Server provides native SMB support for Windows Server instances and high performance for low-latency needs. https://www.techtarget.com/searchaws/tip/Amazon-FSx-vs-EFS-Compare-the-AWS-file-services: "FSx is built for high performance and submillisecond latency using solid-state drive storage volumes. This design enables users to select storage capacity and latency independently. Thus, even a subterabyte file system can have 256 Mbps or higher throughput and support volumes up to 64 TB."
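For option A, static files uploaded to S3 should carry correct Content-Type and Cache-Control headers so that CloudFront can cache them at the edge. A minimal helper (the max-age value and the idea of per-file headers are illustrative assumptions, not details from the question) might look like:

```python
import mimetypes

def s3_upload_args(filename, max_age=86400):
    """Build ExtraArgs for a boto3 S3 upload_file call so CloudFront and
    browsers cache the static assets correctly. The max-age value is an
    assumption chosen for illustration."""
    content_type, _ = mimetypes.guess_type(filename)
    return {
        "ContentType": content_type or "application/octet-stream",
        "CacheControl": f"public, max-age={max_age}",
    }
```

The returned dict would be passed as `ExtraArgs` when uploading each static asset to the S3 bucket that backs the CloudFront distribution.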
Amazon S3 is an object storage service that can store static files such as images, videos, documents, etc. Amazon EFS is a file storage service that can store files in a hierarchical structure and supports NFS protocol. Amazon FSx for Windows File Server is a file storage service that can store files in a hierarchical structure and supports SMB protocol. Amazon EBS is a block storage service that can store data in fixed-size blocks and attach to EC2 instances.
Based on these definitions, the combination of steps that meets the requirements is to store the static files on Amazon S3 with CloudFront caching at the edge (option A) and to store the server-side code on Amazon FSx for Windows File Server mounted on each EC2 instance (option D).