Practice Free SAA-C03 Exam Online Questions
An IAM user made several configuration changes to AWS resources in their company's account during a production deployment last week. A solutions architect learned that a couple of security group rules are not configured as desired. The solutions architect wants to confirm which IAM user was responsible for making the changes.
Which service should the solutions architect use to find the desired information?
- A . Amazon GuardDuty
- B . Amazon Inspector
- C . AWS CloudTrail
- D . AWS Config
C
Explanation:
The best option is AWS CloudTrail. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of an AWS account. CloudTrail records API activity in the account, including changes that IAM users make through the AWS Management Console, the AWS CLI, and the SDKs. Each CloudTrail event includes the identity of the caller, so the solutions architect can identify the IAM user who changed the security group rules.
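The kind of lookup CloudTrail enables can be sketched in plain Python. This is a minimal illustration, not a real CloudTrail query: the event records below are hypothetical samples that follow the shape of CloudTrail's event format.

```python
# Sketch: filtering CloudTrail-style event records to find which IAM user
# changed a security group. The sample data is hypothetical; a real lookup
# would pull events from CloudTrail event history.
sample_events = [
    {"eventName": "RunInstances",
     "userIdentity": {"type": "IAMUser", "userName": "deploy-bot"}},
    {"eventName": "AuthorizeSecurityGroupIngress",
     "userIdentity": {"type": "IAMUser", "userName": "jdoe"}},
]

def who_changed_security_groups(events):
    """Return the IAM user names behind security group rule changes."""
    sg_actions = {"AuthorizeSecurityGroupIngress", "RevokeSecurityGroupIngress",
                  "AuthorizeSecurityGroupEgress", "RevokeSecurityGroupEgress"}
    return [e["userIdentity"]["userName"]
            for e in events
            if e["eventName"] in sg_actions
            and e["userIdentity"].get("type") == "IAMUser"]

print(who_changed_security_groups(sample_events))  # → ['jdoe']
```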
A company has a serverless web application that is comprised of AWS Lambda functions. The application experiences spikes in traffic that cause increased latency because of cold starts. The company wants to improve the application’s ability to handle traffic spikes and to minimize latency. The solution must optimize costs during periods when traffic is low.
Which solution will meet these requirements?
- A . Configure provisioned concurrency for the Lambda functions. Use AWS Application Auto Scaling to adjust the provisioned concurrency.
- B . Launch Amazon EC2 instances in an Auto Scaling group. Add a scheduled scaling policy to launch additional EC2 instances during peak traffic periods.
- C . Configure provisioned concurrency for the Lambda functions. Set a fixed concurrency level to handle the maximum expected traffic.
- D . Create a recurring schedule in Amazon EventBridge Scheduler. Use the schedule to invoke the Lambda functions periodically to warm the functions.
A
Explanation:
Provisioned Concurrency:
AWS Lambda’s provisioned concurrency ensures that a predefined number of execution environments are pre-warmed and ready to handle requests, reducing latency during traffic spikes.
This solution optimizes costs during low-traffic periods when combined with AWS Application Auto Scaling to dynamically adjust the provisioned concurrency based on demand.
Incorrect Options Analysis:
Option B: Switching to EC2 would increase complexity and cost for a serverless application.
Option C: A fixed concurrency level may result in over-provisioning during low-traffic periods, leading to higher costs.
Option D: Periodically warming functions does not effectively handle sudden spikes in traffic.
Reference: AWS Lambda Provisioned Concurrency
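The scale-out/scale-in behavior described above can be sketched with the arithmetic that target tracking conceptually applies: the desired capacity moves so that measured utilization converges on the target. The target value and metric readings here are illustrative assumptions, not values from a real account.

```python
# Sketch of the target-tracking arithmetic Application Auto Scaling
# (conceptually) applies to Lambda provisioned concurrency. Values are
# illustrative.
import math

def desired_provisioned_concurrency(current, utilization, target=0.70,
                                    min_cap=1, max_cap=100):
    """Scale provisioned concurrency so utilization moves toward target."""
    desired = math.ceil(current * utilization / target)
    return max(min_cap, min(max_cap, desired))

# Traffic spike: 10 environments at 95% utilization -> scale out.
print(desired_provisioned_concurrency(10, 0.95))  # 14
# Quiet period: 10 environments at 20% utilization -> scale in, saving cost.
print(desired_provisioned_concurrency(10, 0.20))  # 3
```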
A company runs its two-tier ecommerce website on AWS. The web tier consists of a load balancer that sends traffic to Amazon EC2 instances. The database tier uses an Amazon RDS DB instance. The EC2 instances and the RDS DB instance should not be exposed to the public internet. The EC2 instances require internet access to complete payment processing of orders through a third-party web service. The application must be highly available.
Which combination of configuration options will meet these requirements? (Choose two.)
- A . Use an Auto Scaling group to launch the EC2 instances in private subnets. Deploy an RDS Multi-AZ DB instance in private subnets.
- B . Configure a VPC with two private subnets and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the private subnets.
- C . Use an Auto Scaling group to launch the EC2 instances in public subnets across two Availability Zones. Deploy an RDS Multi-AZ DB instance in private subnets.
- D . Configure a VPC with one public subnet, one private subnet, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnet.
- E . Configure a VPC with two public subnets, two private subnets, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnets.
AE
Explanation:
Options A and E together keep the EC2 instances and the RDS DB instance off the public internet while remaining highly available. The Auto Scaling group launches the EC2 instances in private subnets, and the Multi-AZ RDS DB instance also runs in private subnets. The Application Load Balancer is deployed in public subnets across two Availability Zones to receive internet traffic, and a NAT gateway in each Availability Zone gives the EC2 instances the outbound internet access they need for payment processing without exposing them to inbound connections.
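The layout this architecture requires can be sketched as a quick check over a subnet map. The Availability Zone names and the map itself are hypothetical; the check just encodes the rule that each of at least two Availability Zones needs a public subnet, a private subnet, and its own NAT gateway.

```python
# Sketch: checking that a VPC layout matches options A + E -- a public and a
# private subnet plus a NAT gateway in each of two Availability Zones.
# The subnet map is hypothetical.
layout = {
    "us-east-1a": {"public": True, "private": True, "nat_gateway": True},
    "us-east-1b": {"public": True, "private": True, "nat_gateway": True},
}

def is_highly_available(layout):
    """Each AZ needs a public subnet (for the ALB), a private subnet
    (for EC2/RDS), and its own NAT gateway for outbound internet access."""
    return len(layout) >= 2 and all(
        az["public"] and az["private"] and az["nat_gateway"]
        for az in layout.values())

print(is_highly_available(layout))  # True
```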
A company is building a new furniture inventory application. The company has deployed the application on a fleet of Amazon EC2 instances across multiple Availability Zones. The EC2 instances run behind an Application Load Balancer (ALB) in their VPC.
A solutions architect has observed that incoming traffic seems to favor one EC2 instance, resulting in latency for some requests.
What should the solutions architect do to resolve this issue?
- A . Disable session affinity (sticky sessions) on the ALB.
- B . Replace the ALB with a Network Load Balancer.
- C . Increase the number of EC2 instances in each Availability Zone.
- D . Adjust the frequency of the health checks on the ALB’s target group.
A
Explanation:
The issue described in the question, where incoming traffic seems to favor one EC2 instance, is often caused by session affinity (also known as sticky sessions) being enabled on the Application Load Balancer (ALB). When session affinity is enabled, the ALB routes requests from the same client to the same EC2 instance. This can cause an imbalance in traffic distribution, leading to performance bottlenecks on certain instances while others remain underutilized.
To resolve this issue, disabling session affinity ensures that the ALB distributes incoming traffic evenly across all EC2 instances, allowing better load distribution and reducing latency. The ALB will rely on its round-robin or least outstanding requests algorithm (depending on the configuration) to distribute traffic more evenly across instances.
Option B (Network Load Balancer): The NLB is designed for Layer 4 (TCP) traffic and low latency use cases, but it is not needed here as the problem is with load balancing logic at the application layer (Layer 7). The ALB is more appropriate for HTTP/HTTPS traffic.
Option C (Increase EC2 Instances): Adding more EC2 instances does not solve the root issue of uneven traffic distribution.
Option D (Health Check Frequency): Adjusting health check frequency won’t address the imbalance caused by session affinity.
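The skew described above can be shown with a small simulation. The target IDs and request mix are synthetic; the point is that round-robin spreads load evenly while sticky sessions let one chatty client overload a single instance.

```python
# Sketch: why sticky sessions can skew load. A round-robin balancer spreads
# requests evenly; a sticky balancer pins each client to one target, so a
# few chatty clients can overload a single instance. Data is synthetic.
from collections import Counter
from itertools import cycle

targets = ["i-aaa", "i-bbb", "i-ccc"]
requests = ["client-1"] * 60 + ["client-2"] * 3 + ["client-3"] * 3

def round_robin(requests, targets):
    rr = cycle(targets)
    return Counter(next(rr) for _ in requests)

def sticky(requests, targets):
    pinned = {}
    counts = Counter()
    for client in requests:
        # First request from a client picks a target; later ones reuse it.
        pinned.setdefault(client, targets[len(pinned) % len(targets)])
        counts[pinned[client]] += 1
    return counts

print(round_robin(requests, targets))  # 22 requests per target
print(sticky(requests, targets))      # i-aaa gets 60, the others 3 each
```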
Reference: AWS Application Load Balancer Sticky Sessions
A company hosts a website analytics application on a single Amazon EC2 On-Demand Instance. The analytics software is written in PHP and uses a MySQL database. The analytics software, the web server that provides PHP, and the database server are all hosted on the EC2 instance. The application is showing signs of performance degradation during busy times and is presenting 5xx errors. The company needs to make the application scale seamlessly.
Which solution will meet these requirements MOST cost-effectively?
- A . Migrate the database to an Amazon RDS for MySQL DB instance. Create an AMI of the web application. Use the AMI to launch a second EC2 On-Demand Instance. Use an Application Load Balancer to distribute the load to each EC2 instance.
- B . Migrate the database to an Amazon RDS for MySQL DB instance. Create an AMI of the web application. Use the AMI to launch a second EC2 On-Demand Instance. Use Amazon Route 53 weighted routing to distribute the load across the two EC2 instances.
- C . Migrate the database to an Amazon Aurora MySQL DB instance. Create an AWS Lambda function to stop the EC2 instance and change the instance type. Create an Amazon CloudWatch alarm to invoke the Lambda function when CPU utilization surpasses 75%.
- D . Migrate the database to an Amazon Aurora MySQL DB instance. Create an AMI of the web application. Apply the AMI to a launch template. Create an Auto Scaling group with the launch template Configure the launch template to use a Spot Fleet. Attach an Application Load Balancer to the Auto Scaling group.
D
Explanation:
Migrating the database to Amazon Aurora MySQL moves the database tier to a managed service that scales without manual adjustment. Creating an AMI of the web application and referencing it from a launch template makes launching future instances seamless, and placing those instances in an Auto Scaling group behind an Application Load Balancer lets the web tier scale up and down with demand. Using Spot capacity addresses the "MOST cost-effective" requirement: Spot Instances come at a deep discount in exchange for the possibility of interruption at any time. That interruption risk is why this option draws some debate, but the Auto Scaling group mitigates it by replacing any terminated instances.
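The Auto Scaling piece of option D can be sketched with the AWS CLI. This is an illustrative configuration fragment, not a drop-in command: the group name, subnet IDs, target group ARN, and launch template name are placeholders.

```shell
# Illustrative sketch (names, IDs, and the ARN are placeholders): an Auto
# Scaling group built from a launch template, using a mixed instances policy
# to run mostly Spot capacity behind an ALB target group.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-app-asg \
  --min-size 2 --max-size 10 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaa,subnet-bbb" \
  --target-group-arns "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/123" \
  --mixed-instances-policy '{
    "LaunchTemplate": {
      "LaunchTemplateSpecification": {
        "LaunchTemplateName": "web-app-template",
        "Version": "$Latest"
      }
    },
    "InstancesDistribution": {
      "OnDemandBaseCapacity": 1,
      "OnDemandPercentageAboveBaseCapacity": 0,
      "SpotAllocationStrategy": "capacity-optimized"
    }
  }'
```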
A company has five organizational units (OUs) as part of its organization in AWS Organizations. Each OU correlates to the five businesses that the company owns. The company’s research and development (R&D) business is separating from the company and will need its own organization. A solutions architect creates a separate new management account for this purpose.
What should the solutions architect do next in the new management account?
- A . Have the R&D AWS account be part of both organizations during the transition.
- B . Invite the R&D AWS account to be part of the new organization after the R&D AWS account has left the prior organization.
- C . Create a new R&D AWS account in the new organization. Migrate resources from the prior R&D AWS account to the new R&D AWS account.
- D . Have the R&D AWS account join the new organization. Make the new management account a member of the prior organization.
B
Explanation:
Option B allows the solutions architect to create a separate organization for the research and development (R&D) business and move its AWS account into it. An AWS account can belong to only one organization at a time, so the R&D account must first leave the prior organization before it can accept an invitation to the new one. Inviting the account only after it has left avoids any overlap or conflict between the two organizations. Once the R&D account accepts the invitation, it becomes subject to any policies and controls applied by the new organization.
Reference: Inviting an AWS Account to Join Your Organization
Leaving an Organization as a Member Account
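The two-step move in option B can be sketched with the AWS CLI; the account ID is a placeholder, and each command must run under the account noted in its comment.

```shell
# Illustrative sketch (the account ID is a placeholder): the R&D account
# first leaves the old organization, then the new management account
# invites it.

# Run from the R&D member account:
aws organizations leave-organization

# Run from the new management account:
aws organizations invite-account-to-organization \
  --target '{"Id": "111122223333", "Type": "ACCOUNT"}'
```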
A company runs demonstration environments for its customers on Amazon EC2 instances. Each environment is isolated in its own VPC. The company's operations team needs to be notified when RDP or SSH access to an environment has been established.
Which solution will meet this requirement?
- A . Configure Amazon CloudWatch Application Insights to create AWS Systems Manager OpsItems when RDP or SSH access is detected.
- B . Configure the EC2 instances with an IAM instance profile that has an IAM role with the AmazonSSMManagedInstanceCore policy attached.
- C . Publish VPC flow logs to Amazon CloudWatch Logs. Create required metric filters. Create an Amazon CloudWatch metric alarm with a notification action for when the alarm is in the ALARM state.
- D . Configure an Amazon EventBridge rule to listen for events of type EC2 Instance State-change Notification. Configure an Amazon Simple Notification Service (Amazon SNS) topic as a target. Subscribe the operations team to the topic.
C
Explanation:
Publishing VPC flow logs to CloudWatch Logs makes it possible to create metric filters that match traffic to destination ports 22 (SSH) and 3389 (RDP). A CloudWatch alarm on the resulting metric can then notify the operations team whenever remote access is established.
https://aws.amazon.com/blogs/security/how-to-monitor-and-visualize-failed-ssh-access-attempts-to-amazon-ec2-linux-instances/
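The matching a metric filter would perform can be sketched in Python. The log lines are synthetic but follow the default VPC flow log field order (destination port is the seventh field, action the thirteenth).

```python
# Sketch: the matching a CloudWatch Logs metric filter would do on VPC flow
# log records -- count ACCEPTed connections to port 22 (SSH) or 3389 (RDP).
# Log lines are synthetic but follow the default flow log format.
flow_logs = [
    "2 123456789010 eni-abc123 203.0.113.12 10.0.1.5 49152 22 6 20 4249 1418530010 1418530070 ACCEPT OK",
    "2 123456789010 eni-abc123 203.0.113.12 10.0.1.5 49153 443 6 20 4249 1418530010 1418530070 ACCEPT OK",
    "2 123456789010 eni-abc123 198.51.100.7 10.0.1.9 50321 3389 6 10 840 1418530010 1418530070 REJECT OK",
]

def remote_access_events(lines):
    """Return lines for accepted SSH/RDP connections (dstport 22 or 3389)."""
    hits = []
    for line in lines:
        fields = line.split()
        dstport, action = fields[6], fields[12]
        if dstport in ("22", "3389") and action == "ACCEPT":
            hits.append(line)
    return hits

print(len(remote_access_events(flow_logs)))  # 1
```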
An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2 instance needs to access the S3 bucket without connectivity to the internet.
Which solution will provide private network connectivity to Amazon S3?
- A . Create a gateway VPC endpoint to the S3 bucket.
- B . Stream the logs to Amazon CloudWatch Logs. Export the logs to the S3 bucket.
- C . Create an instance profile on Amazon EC2 to allow S3 access.
- D . Create an Amazon API Gateway API with a private link to access the S3 endpoint.
A
Explanation:
A gateway VPC endpoint lets the EC2 instance reach Amazon S3 over the AWS private network instead of the public internet, with no internet gateway or NAT device required.
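Option A can be sketched as a single CLI call; the VPC and route table IDs are placeholders, and the route table should be the one associated with the instance's subnet.

```shell
# Illustrative sketch (IDs are placeholders): a gateway VPC endpoint for S3,
# attached to the route table used by the EC2 instance's subnet. Traffic to
# S3 then stays on the AWS network.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc1234 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0def5678
```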
A social media company runs its application on Amazon EC2 instances behind an Application Load Balancer (ALB). The ALB is the origin for an Amazon CloudFront distribution. The application has more than a billion images stored in an Amazon S3 bucket and processes thousands of images each second. The company wants to resize the images dynamically and serve appropriate formats to clients.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Install an external image management library on an EC2 instance. Use the image management library to process the images.
- B . Create a CloudFront origin request policy. Use the policy to automatically resize images and to serve the appropriate format based on the User-Agent HTTP header in the request.
- C . Use a Lambda@Edge function with an external image management library. Associate the Lambda@Edge function with the CloudFront behaviors that serve the images.
- D . Create a CloudFront response headers policy. Use the policy to automatically resize images and to serve the appropriate format based on the User-Agent HTTP header in the request.
C
Explanation:
Lambda@Edge is a service that allows you to run Lambda functions at CloudFront edge locations. It can be used to modify requests and responses that flow through CloudFront. CloudFront origin request policy is a policy that controls the values (URL query strings, HTTP headers, and cookies) that are included in requests that CloudFront sends to the origin. It can be used to collect additional information at the origin or to customize the origin response. CloudFront response headers policy is a policy that specifies the HTTP headers that CloudFront removes or adds in responses that it sends to viewers. It can be used to add security or custom headers to responses.
Based on these definitions, the solution that will meet the requirements with the least operational overhead is:
C. Use a Lambda@Edge function with an external image management library. Associate the Lambda@Edge function with the CloudFront behaviors that serve the images.
This solution would allow the application to use a Lambda@Edge function to resize the images dynamically and serve appropriate formats to clients based on the User-Agent HTTP header in the request. The Lambda@Edge function would run at the edge locations, reducing latency and load on the origin. The application code would only need to include an external image management library that can perform image manipulation tasks.
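A Lambda@Edge origin-request handler along these lines can be sketched as follows. The event shape follows CloudFront's Lambda@Edge request events; the actual resizing library call is omitted, and the Accept-header routing rule is an illustrative stand-in for whatever content-negotiation logic the application uses.

```python
# Sketch of a Lambda@Edge origin-request handler: rewrite the image URI so
# the origin (or a resizing backend) serves a format suited to the client.
# The routing rule here is illustrative; real resizing logic is omitted.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]  # CloudFront lowercases header keys
    accept = headers.get("accept", [{"value": ""}])[0]["value"]
    # Serve WebP to clients that advertise support for it.
    if "image/webp" in accept and request["uri"].endswith((".jpg", ".png")):
        base, _, _ = request["uri"].rpartition(".")
        request["uri"] = base + ".webp"
    return request

# Minimal synthetic event for local testing.
event = {"Records": [{"cf": {"request": {
    "uri": "/images/cat.jpg",
    "headers": {"accept": [{"key": "Accept", "value": "image/webp,*/*"}]},
}}}]}
print(handler(event, None)["uri"])  # /images/cat.webp
```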
An ecommerce company runs an application that uses an Amazon DynamoDB table in a single AWS Region. The company wants to deploy the application to a second Region. The company needs to support multi-active replication with low latency reads and writes to the existing DynamoDB table in both Regions.
Which solution will meet these requirements in the MOST operationally efficient way?
- A . Create a DynamoDB global secondary index (GSI) for the existing table. Create a new table in the second Region. Convert the existing DynamoDB table to a global table. Specify the new table as the secondary table.
- B . Enable Amazon DynamoDB Streams for the existing table. Create a new table in the second Region. Create a new application that uses the DynamoDB Streams Kinesis Adapter and the Amazon Kinesis Client Library (KCL). Configure the new application to read data from the DynamoDB table in the first Region and to write the data to the new table in the second Region.
- C . Convert the existing DynamoDB table to a global table. Choose the appropriate second Region to achieve active-active write capabilities in both Regions.
- D . Enable Amazon DynamoDB Streams for the existing table. Create a new table in the second Region. Create an AWS Lambda function in the first Region that reads data from the table in the first Region and writes the data to the new table in the second Region. Set a DynamoDB stream as the input trigger for the Lambda function.
C
Explanation:
Converting the existing DynamoDB table to a global table provides active-active replication and low-latency reads and writes in both Regions. DynamoDB global tables are specifically designed for multi-Region and multi-active use cases.
Option A: GSIs do not provide multi-Region replication or active-active capabilities.
Option B and D: Using DynamoDB Streams and custom replication is less operationally efficient than global tables and introduces additional complexity.
Reference: DynamoDB Global Tables (AWS Documentation)
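Option C can be sketched with the AWS CLI; the table and Region names are placeholders, and the table must meet the global tables prerequisites (for example, DynamoDB Streams enabled with new and old images).

```shell
# Illustrative sketch (table and Region names are placeholders): converting
# an existing table to a global table (version 2019.11.21) by adding a
# replica in a second Region.
aws dynamodb update-table \
  --table-name Orders \
  --replica-updates '[{"Create": {"RegionName": "eu-west-1"}}]'
```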