Practice Free SAA-C03 Exam Online Questions
A company has a popular gaming platform running on AWS. The application is sensitive to latency because latency can impact the user experience and introduce unfair advantages to some players. The application is deployed in every AWS Region. It runs on Amazon EC2 instances that are part of Auto Scaling groups configured behind Application Load Balancers (ALBs). A solutions architect needs to implement a mechanism to monitor the health of the application and redirect traffic to healthy endpoints.
Which solution meets these requirements?
- A . Configure an accelerator in AWS Global Accelerator. Add a listener for the port that the application listens on, and attach it to a Regional endpoint in each Region. Add the ALB as the endpoint.
- B . Create an Amazon CloudFront distribution and specify the ALB as the origin server. Configure the cache behavior to use origin cache headers. Use AWS Lambda functions to optimize the traffic.
- C . Create an Amazon CloudFront distribution and specify Amazon S3 as the origin server. Configure the cache behavior to use origin cache headers. Use AWS Lambda functions to optimize the traffic.
- D . Configure an Amazon DynamoDB database to serve as the data store for the application. Create a DynamoDB Accelerator (DAX) cluster to act as the in-memory cache for DynamoDB hosting the application data.
A
Explanation:
AWS Global Accelerator directs traffic to the optimal healthy endpoint based on health checks and routes clients to the closest healthy endpoint based on their geographic location. By configuring an accelerator, adding a listener for the application's port, attaching an endpoint group in each Region, and adding the ALB as the endpoint, the solution redirects traffic away from unhealthy endpoints, reduces latency, and improves the user experience.
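As a rough sketch of the setup in answer A, the per-Region endpoint-group request might be built like this (the ARNs, port, and health-check values are hypothetical illustrations, not values from the question):

```python
# Hypothetical listener and ALB ARNs for illustration only.
LISTENER_ARN = "arn:aws:globalaccelerator::123456789012:accelerator/abcd/listener/1234"
ALB_ARN = "arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/game-alb/abc123"

def build_endpoint_group_config(region, alb_arn, listener_arn):
    """Endpoint-group request attaching one Region's ALB behind the accelerator's listener."""
    return {
        "ListenerArn": listener_arn,
        "EndpointGroupRegion": region,
        "EndpointConfigurations": [
            {"EndpointId": alb_arn, "Weight": 128, "ClientIPPreservationEnabled": True}
        ],
        # Global Accelerator health-checks each endpoint and shifts traffic
        # away from Regions whose endpoints fail the checks.
        "HealthCheckProtocol": "TCP",
        "HealthCheckPort": 443,
        "ThresholdCount": 3,
    }

config = build_endpoint_group_config("us-west-2", ALB_ARN, LISTENER_ARN)
# A boto3 Global Accelerator client would then call, once per Region:
# ga.create_endpoint_group(**config)
```

Repeating the call for every Region gives the accelerator a healthy endpoint to fail over to wherever the client is.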
A solutions architect is creating an application that will handle batch processing of large amounts of data. The input data will be held in Amazon S3, and the output data will be stored in a different S3 bucket. For processing, the application will transfer the data over the network between multiple Amazon EC2 instances.
What should the solutions architect do to reduce the overall data transfer costs?
- A . Place all the EC2 instances in an Auto Scaling group.
- B . Place all the EC2 instances in the same AWS Region.
- C . Place all the EC2 instances in the same Availability Zone.
- D . Place all the EC2 instances in private subnets in multiple Availability Zones.
C
Explanation:
Requirement Analysis: The application involves batch processing of large data transfers between EC2 instances.
Data Transfer Costs: Data transfer within the same Availability Zone (AZ) is typically free, while cross-AZ transfers incur additional costs.
Implementation:
Launch all EC2 instances within the same Availability Zone.
Ensure the instances are part of the same subnet to facilitate seamless data transfer.
Conclusion: Placing all EC2 instances in the same AZ reduces data transfer costs significantly without affecting the application’s functionality.
Reference: AWS Data Transfer Pricing
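A back-of-the-envelope comparison shows why the same-AZ answer wins on cost. The $0.01/GB cross-AZ rate (billed in each direction) is the commonly published figure and is an assumption here; always check current AWS pricing:

```python
CROSS_AZ_RATE_PER_GB = 0.01  # assumed published rate, charged in each direction

def transfer_cost_usd(gb, same_az):
    """Approximate network transfer cost between EC2 instances."""
    if same_az:
        return 0.0  # transfer over private IPs within one AZ is free
    return gb * CROSS_AZ_RATE_PER_GB * 2  # billed on both sending and receiving side

# Shuffling 10 TB between instances during a batch run:
cost_cross_az = transfer_cost_usd(10_000, same_az=False)  # $200.00
cost_same_az = transfer_cost_usd(10_000, same_az=True)    # $0.00
```

At batch-processing volumes the cross-AZ charge compounds quickly, while intra-AZ traffic stays free.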
A company hosts its multi-tier, public web application in the AWS Cloud. The web application runs on Amazon EC2 instances, and its database runs on Amazon RDS. The company is anticipating a large increase in sales during an upcoming holiday weekend. A solutions architect needs to build a solution to analyze the performance of the web application with a granularity of no more than 2 minutes.
What should the solutions architect do to meet this requirement?
- A . Send Amazon CloudWatch logs to Amazon Redshift. Use Amazon QuickSight to perform further analysis.
- B . Enable detailed monitoring on all EC2 instances. Use Amazon CloudWatch metrics to perform further analysis.
- C . Create an AWS Lambda function to fetch EC2 logs from Amazon CloudWatch Logs. Use Amazon CloudWatch metrics to perform further analysis.
- D . Send EC2 logs to Amazon S3. Use Amazon Redshift to fetch logs from the S3 bucket to process raw data for further analysis with Amazon QuickSight.
B
Explanation:
To analyze the performance of the web application with granularity of no more than 2 minutes, enabling detailed monitoring on EC2 instances is the best solution. By default, CloudWatch provides metrics at a 5-minute interval. Enabling detailed monitoring allows you to collect metrics at 1-minute intervals, which will give you the level of granularity you need to analyze performance during peak traffic.
Amazon CloudWatch metrics can then be used to analyze CPU utilization, memory usage, disk I/O, and network throughput, among other performance-related metrics, at the desired granularity.
Option A: Sending CloudWatch logs to Redshift for analysis is unnecessary and overcomplicated for simple performance analysis, which can be done using CloudWatch metrics alone.
Option C: Fetching EC2 logs via Lambda adds complexity, and CloudWatch metrics already provide the required data for performance analysis.
Option D: Sending logs to S3 and using Redshift for analysis is also more complex than necessary for simple performance monitoring.
Reference: Monitoring Amazon EC2 with Amazon CloudWatch; Amazon CloudWatch Detailed Monitoring
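The granularity arithmetic behind answer B can be made explicit (the instance ID in the comment is a hypothetical placeholder):

```python
BASIC_PERIOD_S = 300          # basic monitoring: one datapoint every 5 minutes
DETAILED_PERIOD_S = 60        # detailed monitoring: one datapoint every minute
REQUIRED_GRANULARITY_S = 120  # requirement: "no more than 2 minutes"

def meets_granularity(period_s):
    """True if the metric period satisfies the 2-minute requirement."""
    return period_s <= REQUIRED_GRANULARITY_S

# Basic monitoring (300 s) fails the requirement; detailed monitoring (60 s) passes.
# Enabling detailed monitoring is a single API call per instance, e.g.:
# ec2.monitor_instances(InstanceIds=["i-0123456789abcdef0"])
```

Only the 1-minute detailed-monitoring period fits under the 2-minute ceiling, which is why options that merely move logs around do not address the granularity requirement.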
A company wants to implement new security compliance requirements for its development team to limit the use of approved Amazon Machine Images (AMIs).
The company wants to provide access to only the approved operating system and software for all its Amazon EC2 instances. The company wants the solution to have the least amount of lead time for launching EC2 instances.
Which solution will meet these requirements?
- A . Create a portfolio by using AWS Service Catalog that includes only EC2 instances launched with approved AMIs. Ensure that all required software is preinstalled on the AMIs. Create the necessary permissions for developers to use the portfolio.
- B . Create an AMI that contains the approved operating system and software by using EC2 Image Builder. Give developers access to that AMI to launch the EC2 instances.
- C . Create an AMI that contains the approved operating system. Tell the developers to use the approved AMI. Create an Amazon EventBridge rule to run an AWS Systems Manager script when a new EC2 instance is launched. Configure the script to install the required software from a repository.
- D . Create an AWS Config rule to detect the launch of EC2 instances with an AMI that is not approved. Associate a remediation rule to terminate those instances and launch the instances again with the approved AMI. Use AWS Systems Manager to automatically install the approved software on the launch of an EC2 instance.
A
Explanation:
AWS Service Catalog is designed to allow organizations to manage a catalog of approved products (including AMIs) that users can deploy. By creating a portfolio that contains only EC2 instances launched with preapproved AMIs, the company can enforce compliance with the approved operating system and software for all EC2 instances. Service Catalog also streamlines the process of launching EC2 instances, reducing the lead time while ensuring that developers use only the approved configurations.
Option B (EC2 Image Builder): While EC2 Image Builder helps in creating and managing AMIs, it doesn’t provide the enforcement mechanism that Service Catalog does.
Option C (EventBridge rule and Systems Manager script): This solution is reactive and involves more operational complexity compared to Service Catalog.
Option D (AWS Config rule): This option is reactive (it terminates non-compliant instances after launch) and introduces additional operational overhead.
Reference: AWS Service Catalog
A global company runs its applications in multiple AWS accounts in AWS Organizations. The company’s applications use multipart uploads to upload data to multiple Amazon S3 buckets across AWS Regions. The company wants to report on incomplete multipart uploads for cost compliance purposes.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Configure AWS Config with a rule to report the incomplete multipart upload object count.
- B . Create a service control policy (SCP) to report the incomplete multipart upload object count.
- C . Configure S3 Storage Lens to report the incomplete multipart upload object count.
- D . Create an S3 Multi-Region Access Point to report the incomplete multipart upload object count.
C
Explanation:
S3 Storage Lens is a cloud storage analytics feature that provides organization-wide visibility into object storage usage and activity across multiple AWS accounts in AWS Organizations. S3 Storage Lens can report the incomplete multipart upload object count as one of the metrics that it collects and displays on an interactive dashboard in the S3 console. S3 Storage Lens can also export metrics in CSV or Parquet format to an S3 bucket for further analysis. This solution will meet the requirements with the least operational overhead, as it does not require any code development or policy changes.
Reference: Amazon S3 Storage Lens documentation (gaining insights into S3 storage usage and activity); Amazon S3 multipart upload documentation (concept and benefits)
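Teams that report on incomplete multipart uploads with Storage Lens often also clean them up with a lifecycle rule. A minimal sketch of such a rule follows (the 7-day window and the bucket name in the comment are hypothetical choices):

```python
def abort_incomplete_mpu_rule(days=7):
    """Lifecycle rule that aborts multipart uploads left incomplete for `days` days."""
    return {
        "ID": "abort-incomplete-mpu",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # apply bucket-wide
        "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": days},
    }

rule = abort_incomplete_mpu_rule()
# Applied per bucket with a boto3 S3 client:
# s3.put_bucket_lifecycle_configuration(
#     Bucket="central-logs",  # hypothetical bucket name
#     LifecycleConfiguration={"Rules": [rule]},
# )
```

The lifecycle rule removes the cost that Storage Lens surfaces; the two features complement each other.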
A company has deployed its newest product on AWS. The product runs in an Auto Scaling group behind a Network Load Balancer. The company stores the product’s objects in an Amazon S3 bucket.
The company recently experienced malicious attacks against its systems. The company needs a solution that continuously monitors for malicious activity in the AWS account, workloads, and access patterns to the S3 bucket. The solution must also report suspicious activity and display the information on a dashboard.
Which solution will meet these requirements?
- A . Configure Amazon Macie to monitor and report findings to AWS Config.
- B . Configure Amazon Inspector to monitor and report findings to AWS CloudTrail.
- C . Configure Amazon GuardDuty to monitor and report findings to AWS Security Hub.
- D . Configure AWS Config to monitor and report findings to Amazon EventBridge.
C
Explanation:
Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior across the AWS account and workloads. GuardDuty analyzes data sources such as AWS CloudTrail event logs, Amazon VPC Flow Logs, and DNS logs to identify potential threats such as compromised instances, reconnaissance, port scanning, and data exfiltration. GuardDuty can report its findings to AWS Security Hub, which is a service that provides a comprehensive view of the security posture of the AWS account and workloads. Security Hub aggregates, organizes, and prioritizes security alerts from multiple AWS services and partner solutions, and displays them on a dashboard. This solution will meet the requirements, as it enables continuous monitoring, reporting, and visualization of malicious activity in the AWS account, workloads, and access patterns to the S3 bucket.
Reference: Amazon GuardDuty documentation (overview, benefits, and how findings are generated and reported); AWS Security Hub documentation (overview, benefits, and how findings from multiple sources are collected and displayed on a dashboard)
A solutions architect is implementing a document review application using an Amazon S3 bucket for storage. The solution must prevent accidental deletion of the documents and ensure that all versions of the documents are available. Users must be able to download, modify, and upload documents.
Which combination of actions should be taken to meet these requirements? (Choose two.)
- A . Enable a read-only bucket ACL.
- B . Enable versioning on the bucket.
- C . Attach an IAM policy to the bucket.
- D . Enable MFA Delete on the bucket.
- E . Encrypt the bucket using AWS KMS.
BD
Explanation:
Versioning is a feature of Amazon S3 that keeps multiple versions of the same object in a bucket. It helps prevent accidental deletion of the documents and ensures that all versions remain available. MFA Delete adds an extra layer of security by requiring two forms of authentication to delete a version or change the versioning state of a bucket, which helps prevent unauthorized or accidental deletion. By enabling both versioning and MFA Delete on the bucket, the solution meets the requirements.
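A sketch of the request that enables both features at once (the bucket name and MFA device string are hypothetical; note that MFA Delete can only be enabled by the bucket owner's root credentials via the CLI or API, not the console):

```python
def versioning_request(bucket, mfa=None):
    """Request for S3 put_bucket_versioning enabling versioning and MFA Delete."""
    req = {
        "Bucket": bucket,
        "VersioningConfiguration": {"Status": "Enabled", "MFADelete": "Enabled"},
    }
    if mfa:
        # "MFA" is the device serial plus the current code, e.g.
        # "arn:aws:iam::123456789012:mfa/root-device 123456"
        req["MFA"] = mfa
    return req

req = versioning_request("doc-review-bucket")
# s3.put_bucket_versioning(**req)  # with root credentials and the MFA value set
```

With this configuration a delete only adds a delete marker, and permanently removing a version requires the MFA code.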
A company is using a centralized AWS account to store log data in various Amazon S3 buckets. A solutions architect needs to ensure that the data is encrypted at rest before the data is uploaded to the S3 buckets. The data also must be encrypted in transit.
Which solution meets these requirements?
- A . Use client-side encryption to encrypt the data that is being uploaded to the S3 buckets.
- B . Use server-side encryption to encrypt the data that is being uploaded to the S3 buckets.
- C . Create bucket policies that require the use of server-side encryption with S3 managed encryption keys (SSE-S3) for S3 uploads.
- D . Enable the security option to encrypt the S3 buckets through the use of a default AWS Key Management Service (AWS KMS) key.
A
Explanation:
Client-side encryption is a method of encrypting data before uploading it to Amazon S3. It allows users to manage the encryption process, encryption keys, and related tools. Because the data is encrypted before it leaves the client, it is protected both in transit and at rest, and Amazon S3 never has access to the encryption keys or the unencrypted data.
A company uses AWS Organizations to create dedicated AWS accounts for each business unit to manage each business unit’s account independently upon request. The root email recipient missed a notification that was sent to the root user email address of one account. The company wants to ensure that all future notifications are not missed. Future notifications must be limited to account administrators.
Which solution will meet these requirements?
- A . Configure the company’s email server to forward notification email messages that are sent to the AWS account root user email address to all users in the organization.
- B . Configure all AWS account root user email addresses as distribution lists that go to a few administrators who can respond to alerts. Configure AWS account alternate contacts in the AWS Organizations console or programmatically.
- C . Configure all AWS account root user email messages to be sent to one administrator who is responsible for monitoring alerts and forwarding those alerts to the appropriate groups.
- D . Configure all existing AWS accounts and all newly created accounts to use the same root user email address. Configure AWS account alternate contacts in the AWS Organizations console or programmatically.
B
Explanation:
Configuring the root user email addresses as distribution lists ensures that several administrators receive every notification, so a single missed email does not go unnoticed, and configuring alternate contacts limits future notifications to account administrators. AWS best practice is to use a group email address for the management account's root user: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html#best-practices_mgmt-acct_email-address
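The alternate-contact half of answer B could be applied programmatically with the Account API's `put_alternate_contact` operation; a sketch, with every contact value below a hypothetical placeholder:

```python
def alternate_contact_params(account_id, contact_type="OPERATIONS"):
    """Parameters for the AWS Account API's put_alternate_contact call."""
    return {
        "AccountId": account_id,
        "AlternateContactType": contact_type,  # BILLING | OPERATIONS | SECURITY
        "EmailAddress": "aws-admins@example.com",  # a distribution list, not one person
        "Name": "Cloud Ops Team",
        "PhoneNumber": "+15550100",
        "Title": "Account Administrators",
    }

params = alternate_contact_params("123456789012")
# account.put_alternate_contact(**params)  # repeat for SECURITY and BILLING
```

Looping this over every account in the organization keeps the contacts consistent as new accounts are created.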
A company is migrating an application from on-premises servers to Amazon EC2 instances. As part of the migration design requirements, a solutions architect must implement infrastructure metric alarms. The company does not need to take action if CPU utilization increases to more than 50% for a short burst of time. However, if the CPU utilization increases to more than 50% and read IOPS on the disk are high at the same time, the company needs to act as soon as possible. The solutions architect also must reduce false alarms.
What should the solutions architect do to meet these requirements?
- A . Create Amazon CloudWatch composite alarms where possible.
- B . Create Amazon CloudWatch dashboards to visualize the metrics and react to issues quickly.
- C . Create Amazon CloudWatch Synthetics canaries to monitor the application and raise an alarm.
- D . Create single Amazon CloudWatch metric alarms with multiple metric thresholds where possible.
A
Explanation:
Composite alarms determine their states by monitoring the states of other alarms, and you can use them to reduce alarm noise. For this scenario, you can create one metric alarm for CPU utilization above 50% and another for high read IOPS, configure those underlying alarms to take no actions themselves, and set up the composite alarm to go into ALARM and send notifications only when both underlying alarms are in ALARM at the same time. This suppresses alerts for short CPU bursts while still acting quickly when CPU and read IOPS are high together.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_Composite_Alarm.html
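The AND condition described above is expressed in the composite alarm's `AlarmRule`; a sketch (the alarm names and the SNS topic ARN in the comment are hypothetical):

```python
def composite_alarm_rule(cpu_alarm, iops_alarm):
    """AlarmRule expression: fire only when BOTH child alarms are in ALARM."""
    return f"ALARM({cpu_alarm}) AND ALARM({iops_alarm})"

rule = composite_alarm_rule("cpu-above-50", "read-iops-high")
# cloudwatch.put_composite_alarm(
#     AlarmName="cpu-and-iops-breach",
#     AlarmRule=rule,
#     AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical
# )
```

Because a short CPU burst alone leaves the IOPS alarm in OK, the composite alarm stays silent, which is exactly the false-alarm reduction the question asks for.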