Practice Free SAA-C03 Exam Online Questions
A finance company has a web application that generates credit reports for customers. The company hosts the frontend of the web application on a fleet of Amazon EC2 instances that is associated with an Application Load Balancer (ALB). The application generates reports by running queries on an Amazon RDS for SQL Server database.
The company recently discovered that malicious traffic from around the world is abusing the application by submitting unnecessary requests. The malicious traffic is consuming significant compute resources. The company needs to address the malicious traffic.
Which solution will meet this requirement?
- A . Use AWS WAF to create a web ACL. Associate the web ACL with the ALB. Update the web ACL to block IP addresses that are associated with malicious traffic.
- B . Use AWS WAF to create a web ACL. Associate the web ACL with the ALB. Use the AWS WAF Bot Control managed rule feature.
- C . Set up AWS Shield to protect the ALB and the database.
- D . Use AWS WAF to create a web ACL. Associate the web ACL with the ALB. Configure the AWS WAF IP reputation rule.
B
Explanation:
The AWS WAF Bot Control managed rule is designed to automatically detect and mitigate bot traffic. This feature is particularly useful for addressing malicious traffic and conserving compute resources by filtering unnecessary requests at the ALB level.
Option A: Blocking IP addresses manually introduces significant operational overhead and is not scalable against dynamic, worldwide malicious traffic.
Option C: AWS Shield provides DDoS protection, but the scenario does not describe a DDoS attack.
WAF is better suited for managing application-layer threats like bot traffic.
Option D: The AWS WAF IP reputation rule helps block traffic from known bad IPs but may not address bot traffic effectively.
Reference: AWS WAF Bot Control, AWS WAF Managed Rules (AWS Documentation)
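For illustration, a minimal boto3 sketch of option B, assuming an existing ALB (the ARN below is a placeholder). The web ACL is created in REGIONAL scope with the AWSManagedRulesBotControlRuleSet managed rule group and then associated with the load balancer:

```python
# Sketch: web ACL with the Bot Control managed rule group, attached to an ALB.
# ALB_ARN is a placeholder for the real load balancer ARN.
import boto3

ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123"

# REGIONAL scope is required for ALB associations (CLOUDFRONT is for distributions).
wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="credit-report-app-acl",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},  # allow by default; Bot Control rules act on bot traffic
    Rules=[
        {
            "Name": "bot-control",
            "Priority": 0,
            # Managed rule group maintained by AWS; detects and labels bot traffic.
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesBotControlRuleSet",
                }
            },
            "OverrideAction": {"None": {}},  # keep the rule group's own actions
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "BotControl",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "CreditReportAppAcl",
    },
)

# Associate the web ACL with the ALB so requests are filtered before they
# reach the EC2 fleet.
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn=ALB_ARN,
)
```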
A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data must be encrypted at rest. Encryption key usage must be logged for auditing purposes. Keys must be rotated every year.
Which solution meets these requirements and is the MOST operationally efficient?
- A . Server-side encryption with customer-provided keys (SSE-C)
- B . Server-side encryption with Amazon S3 managed keys (SSE-S3)
- C . Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with manual rotation
- D . Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with automatic rotation
D
Explanation:
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
When you enable automatic key rotation for a customer managed key, AWS KMS generates new cryptographic material for the KMS key every year. AWS KMS also saves the KMS key’s older cryptographic material in perpetuity so it can be used to decrypt data that the KMS key encrypted.
Key rotation in AWS KMS is a cryptographic best practice that is designed to be transparent and easy to use. AWS KMS supports optional automatic key rotation only for customer managed CMKs. Enable and disable key rotation. Automatic key rotation is disabled by default on customer managed CMKs. When you enable (or re-enable) key rotation, AWS KMS automatically rotates the CMK 365 days after the enable date and every 365 days thereafter.
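A minimal boto3 sketch of what enabling automatic rotation amounts to in practice; the key description is illustrative:

```python
# Sketch: create a customer managed KMS key and turn on automatic rotation.
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Customer managed key; SSE-S3 and SSE-C keys cannot be rotated this way.
key = kms.create_key(Description="S3 confidential-data encryption key")
key_id = key["KeyMetadata"]["KeyId"]

# Rotation is off by default; enabling it rotates the key material every
# 365 days while older material is retained for decryption.
kms.enable_key_rotation(KeyId=key_id)

# Confirm rotation status (returns {'KeyRotationEnabled': True, ...}).
print(kms.get_key_rotation_status(KeyId=key_id))
```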
A company wants to improve the availability and performance of its hybrid application. The application consists of a stateful TCP-based workload hosted on Amazon EC2 instances in different AWS Regions and a stateless UDP-based workload hosted on premises.
Which combination of actions should a solutions architect take to improve availability and performance? (Select TWO.)
- A . Create an accelerator using AWS Global Accelerator. Add the load balancers as endpoints.
- B . Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to the load balancers.
- C . Configure two Application Load Balancers in each Region. The first will route to the EC2 endpoints, and the second will route to the on-premises endpoints.
- D . Configure a Network Load Balancer in each Region to address the EC2 endpoints. Configure a Network Load Balancer in each Region that routes to the on-premises endpoints.
- E . Configure a Network Load Balancer in each Region to address the EC2 endpoints. Configure an Application Load Balancer in each Region that routes to the on-premises endpoints.
A, D
Explanation:
For improving availability and performance of the hybrid application, the following solutions are optimal:
AWS Global Accelerator (Option A): Global Accelerator provides high availability and improves performance by using the AWS global network to route user traffic to the nearest healthy endpoint (across AWS Regions). By adding the Network Load Balancers as endpoints, Global Accelerator ensures that traffic is routed efficiently to the closest endpoint, improving both availability and performance.
Network Load Balancer (Option D): The stateful TCP-based workload hosted on Amazon EC2 instances and the stateless UDP-based workload hosted on-premises are best served by Network Load Balancers (NLBs). NLBs are designed to handle TCP and UDP traffic with ultra-low latency and can route traffic to both EC2 and on-premises endpoints.
Option B (CloudFront and Route 53): CloudFront is better suited for HTTP/HTTPS workloads, not for TCP/UDP-based applications.
Option C (ALB): Application Load Balancers do not support the stateless UDP-based workload, making NLBs the better choice for both TCP and UDP.
Reference: AWS Global Accelerator, Network Load Balancer (AWS Documentation)
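A boto3 sketch of option A under assumed NLB ARNs (placeholders below): one accelerator, a TCP listener, and an endpoint group per Region. A second UDP listener for the on-premises workload would be created the same way:

```python
# Sketch: accelerator + TCP listener + per-Region endpoint groups for NLBs.
import boto3

NLB_ARNS = {
    "us-east-1": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/app-nlb/111",
    "eu-west-1": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/app-nlb/222",
}

# The Global Accelerator API is served from us-west-2 regardless of where
# the endpoints themselves live.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="hybrid-app", Enabled=True)
acc_arn = accelerator["Accelerator"]["AcceleratorArn"]

# Listener for the stateful TCP workload.
listener = ga.create_listener(
    AcceleratorArn=acc_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region, pointing at that Region's NLB; Global
# Accelerator routes each client to the nearest healthy endpoint.
for region, nlb_arn in NLB_ARNS.items():
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 100}],
    )
```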
A company needs to optimize the cost of its Amazon EC2 instances. The company also needs to change the type and family of its EC2 instances every 2-3 months.
What should the company do to meet these requirements?
- A . Purchase Partial Upfront Reserved Instances for a 3-year term.
- B . Purchase a No Upfront Compute Savings Plan for a 1-year term.
- C . Purchase All Upfront Reserved Instances for a 1-year term.
- D . Purchase an All Upfront EC2 Instance Savings Plan for a 1-year term.
B
Explanation:
Understanding the Requirements: The company needs to optimize costs and has the flexibility to change EC2 instance types and families frequently (every 2-3 months).
Savings Plans Overview: Compute Savings Plans offer significant savings over On-Demand pricing, with the flexibility to use any instance type or family, in any Region.
No Upfront Compute Savings Plan: This plan allows for cost optimization without any upfront payment, offering flexibility to change instance types and families.
Term Selection: A 1-year term is appropriate for balancing cost savings and flexibility given the frequent changes in instance types.
Conclusion: A No Upfront Compute Savings Plan for a 1-year term provides the needed flexibility and cost savings without the commitment and inflexibility of Reserved Instances.
Reference
AWS Savings Plans: AWS Savings Plans
AWS Cost Management Documentation: AWS Cost Management
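For a feel of the economics, a back-of-envelope comparison in Python; the hourly rate and discount below are invented for illustration and are not published AWS pricing:

```python
# Hypothetical rates only; real rates depend on Region, instance family,
# and the current AWS price list.
ON_DEMAND_HOURLY = 0.10          # assumed On-Demand rate (USD/hour)
SAVINGS_PLAN_DISCOUNT = 0.30     # assumed ~30% Compute Savings Plan discount
HOURS_PER_YEAR = 24 * 365

on_demand_annual = ON_DEMAND_HOURLY * HOURS_PER_YEAR
savings_plan_annual = on_demand_annual * (1 - SAVINGS_PLAN_DISCOUNT)

print(f"On-Demand:    ${on_demand_annual:,.2f}/year")
print(f"Savings Plan: ${savings_plan_annual:,.2f}/year "
      f"(saves ${on_demand_annual - savings_plan_annual:,.2f})")
```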
A company is building a RESTful serverless web application on AWS by using Amazon API Gateway and AWS Lambda. The users of this web application will be geographically distributed, and the company wants to reduce the latency of API requests to these users.
Which type of endpoint should a solutions architect use to meet these requirements?
- A . Private endpoint
- B . Regional endpoint
- C . Interface VPC endpoint
- D . Edge-optimized endpoint
D
Explanation:
An edge-optimized API endpoint is best for geographically distributed clients, as it routes the API requests to the nearest CloudFront Point of Presence (POP). This reduces the latency and improves the performance of the API. Edge-optimized endpoints are the default type for API Gateway REST APIs1.
A regional API endpoint is intended for clients in the same region as the API, and it does not use CloudFront to route the requests. A private API endpoint is an API endpoint that can only be accessed from a VPC using an interface VPC endpoint. A regional or private endpoint would not meet the requirement of reducing the latency for geographically distributed users1.
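A minimal boto3 sketch of option D; the API name is a placeholder. EDGE is the default endpoint type for REST APIs, but it is shown explicitly here:

```python
# Sketch: REST API with an edge-optimized endpoint.
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

api = apigw.create_rest_api(
    name="reports-api",
    # EDGE routes requests through the nearest CloudFront point of presence;
    # REGIONAL would skip CloudFront, PRIVATE would require a VPC endpoint.
    endpointConfiguration={"types": ["EDGE"]},
)
print(api["id"], api["endpointConfiguration"])
```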
A company uses a Microsoft SQL Server database. The company's applications are connected to the database. The company wants to migrate to an Amazon Aurora PostgreSQL database with minimal changes to the application code.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Use the AWS Schema Conversion Tool (AWS SCT) to rewrite the SQL queries in the applications.
- B . Enable Babelfish on Aurora PostgreSQL to run the SQL queries from the applications.
- C . Migrate the database schema and data by using the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS).
- D . Use Amazon RDS Proxy to connect the applications to Aurora PostgreSQL.
- E . Use AWS Database Migration Service (AWS DMS) to rewrite the SQL queries in the applications.
B, C
Explanation:
Requirement Analysis: The goal is to migrate from Microsoft SQL Server to Amazon Aurora PostgreSQL with minimal application code changes.
Babelfish for Aurora PostgreSQL: Babelfish allows Aurora PostgreSQL to understand SQL Server queries natively, reducing the need for application code changes.
AWS Schema Conversion Tool (SCT): This tool helps in converting the database schema from SQL Server to PostgreSQL.
AWS Database Migration Service (DMS): DMS can be used to migrate data from SQL Server to Aurora PostgreSQL seamlessly.
Combined Approach: Enabling Babelfish addresses the SQL query compatibility, while SCT and DMS handle the schema and data migration.
Reference
Babelfish for Aurora PostgreSQL: Babelfish Documentation
AWS SCT and DMS: AWS Database Migration Service
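A boto3 sketch of how Babelfish might be enabled at cluster creation. The parameter group family (aurora-postgresql15) and the identifiers are assumptions; match the family to the engine version actually used:

```python
# Sketch: enable Babelfish via a cluster parameter group, then create the
# Aurora PostgreSQL cluster with that group attached.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_cluster_parameter_group(
    DBClusterParameterGroupName="babelfish-enabled",
    DBParameterGroupFamily="aurora-postgresql15",  # assumed family; adjust to version
    Description="Aurora PostgreSQL with Babelfish turned on",
)

# Babelfish is controlled by a single cluster-level parameter.
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="babelfish-enabled",
    Parameters=[
        {
            "ParameterName": "rds.babelfish_status",
            "ParameterValue": "on",
            "ApplyMethod": "pending-reboot",
        }
    ],
)

rds.create_db_cluster(
    DBClusterIdentifier="app-aurora-cluster",
    Engine="aurora-postgresql",
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,  # RDS stores the password in Secrets Manager
    DBClusterParameterGroupName="babelfish-enabled",
)
```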
A company wants to move from many standalone AWS accounts to a consolidated, multi-account architecture. The company plans to create many new AWS accounts for different business units. The company needs to authenticate access to these AWS accounts by using a centralized corporate directory service.
Which combination of actions should a solutions architect recommend to meet these requirements? (Select TWO.)
- A . Create a new organization in AWS Organizations with all features turned on. Create the new AWS accounts in the organization.
- B . Set up an Amazon Cognito identity pool. Configure AWS IAM Identity Center (AWS Single Sign-On) to accept Amazon Cognito authentication.
- C . Configure a service control policy (SCP) to manage the AWS accounts. Add AWS IAM Identity Center (AWS Single Sign-On) to AWS Directory Service.
- D . Create a new organization in AWS Organizations. Configure the organization’s authentication mechanism to use AWS Directory Service directly.
- E . Set up AWS IAM Identity Center (AWS Single Sign-On) in the organization. Configure IAM Identity Center, and integrate it with the company’s corporate directory service.
A, E
Explanation:
AWS Organizations is a service that helps users centrally manage and govern multiple AWS accounts. It allows users to create organizational units (OUs) to group accounts based on business needs or other criteria. It also allows users to define and attach service control policies (SCPs) to OUs or accounts to restrict the actions that can be performed by the accounts1. By creating a new organization in AWS Organizations with all features turned on, the solution can consolidate and manage the new AWS accounts for different business units.
AWS IAM Identity Center (formerly known as AWS Single Sign-On) is a service that provides single sign-on access for all of your AWS accounts and cloud applications. It connects with Microsoft Active Directory through AWS Directory Service to allow users in that directory to sign in to a personalized AWS access portal using their existing Active Directory user names and passwords. From the AWS access portal, users have access to all the AWS accounts and cloud applications that they have permissions for2. By setting up IAM Identity Center in the organization and integrating it with the company’s corporate directory service, the solution can authenticate access to these AWS accounts using a centralized corporate directory service.
B. Set up an Amazon Cognito identity pool. Configure AWS IAM Identity Center (AWS Single Sign-On) to accept Amazon Cognito authentication. This solution will not meet the requirement of authenticating access to these AWS accounts by using a centralized corporate directory service, as Amazon Cognito is a service that provides user sign-up, sign-in, and access control for web and mobile applications, not for corporate directory services3.
C. Configure a service control policy (SCP) to manage the AWS accounts. Add AWS IAM Identity Center (AWS Single Sign-On) to AWS Directory Service. This solution will not work, as SCPs are used to restrict the actions that can be performed by the accounts in an organization, not to manage the accounts themselves1. Also, IAM Identity Center cannot be added to AWS Directory Service, as it is a separate service that connects with Microsoft Active Directory through AWS Directory Service2.
D. Create a new organization in AWS Organizations. Configure the organization’s authentication mechanism to use AWS Directory Service directly. This solution will not work, as AWS Organizations does not have an authentication mechanism that can use AWS Directory Service directly. AWS Organizations relies on IAM Identity Center to provide single sign-on access for the accounts in an organization.
Reference URL:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_integrate_services.html
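A boto3 sketch of the Organizations half of the answer; the IAM Identity Center directory integration is configured separately. The account email and name are placeholders:

```python
# Sketch: create the organization with all features, then a member account.
import boto3

orgs = boto3.client("organizations")

# "ALL" (as opposed to CONSOLIDATED_BILLING) is required for SCPs and for
# IAM Identity Center integration.
orgs.create_organization(FeatureSet="ALL")

# create_account is asynchronous; poll describe_create_account_status with
# the returned Id to see when the account is ready.
response = orgs.create_account(
    Email="finance-team@example.com",
    AccountName="finance-business-unit",
)
print(response["CreateAccountStatus"]["Id"])
```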
A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data must be encrypted at rest. Encryption key usage must be logged for auditing purposes. Keys must be rotated every year.
Which solution meets these requirements and is the MOST operationally efficient?
- A . Server-side encryption with customer-provided keys (SSE-C)
- B . Server-side encryption with Amazon S3 managed keys (SSE-S3)
- C . Server-side encryption with AWS KMS keys (SSE-KMS) with manual rotation
- D . Server-side encryption with AWS KMS keys (SSE-KMS) with automatic rotation
D
Explanation:
SSE-KMS: Server-side encryption with AWS Key Management Service (SSE-KMS) provides robust encryption of data at rest, integrated with AWS KMS for key management and auditing.
Automatic Key Rotation: By enabling automatic rotation for the KMS keys, the system ensures that keys are rotated annually without manual intervention, meeting compliance requirements.
Logging and Auditing: AWS KMS automatically logs all key usage and management actions in AWS CloudTrail, providing the necessary audit logs.
Implementation:
Create a KMS key with automatic rotation enabled.
Configure the S3 bucket to use SSE-KMS with the created KMS key.
Ensure CloudTrail is enabled for logging KMS key usage.
Operational Efficiency: This solution provides encryption, automatic key management, and auditing in a seamless, fully managed way, reducing operational overhead.
Reference: AWS KMS Automatic Key Rotation
Amazon S3 Server-Side Encryption
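To complete the implementation steps above, a boto3 sketch of pointing the bucket's default encryption at the rotating KMS key; the bucket name and key ARN are placeholders:

```python
# Sketch: default SSE-KMS encryption on the bucket, using the rotated key.
import boto3

BUCKET = "confidential-data-bucket"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KMS_KEY_ARN,
                },
                # Bucket keys reduce KMS request costs for high-volume buckets.
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```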
An ecommerce company needs to run a scheduled daily job to aggregate and filter sales records for analytics. The company stores the sales records in an Amazon S3 bucket. Each object can be up to 10 GB in size. Based on the number of sales events, the job can take up to an hour to complete. The CPU and memory usage of the job are constant and are known in advance.
A solutions architect needs to minimize the amount of operational effort that is needed for the job to run.
Which solution meets these requirements?
- A . Create an AWS Lambda function that has an Amazon EventBridge notification. Schedule the EventBridge event to run once a day.
- B . Create an AWS Lambda function. Create an Amazon API Gateway HTTP API, and integrate the API with the function. Create an Amazon EventBridge scheduled event that calls the API and invokes the function.
- C . Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch type. Create an Amazon EventBridge scheduled event that launches an ECS task on the cluster to run the job.
- D . Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type and an Auto Scaling group with at least one EC2 instance. Create an Amazon EventBridge scheduled event that launches an ECS task on the cluster to run the job.
C
Explanation:
Option C is correct. The job can run for up to an hour, which rules out AWS Lambda (Options A and B) because Lambda functions have a maximum execution time of 15 minutes. Because the job's CPU and memory requirements are constant and known in advance, they can be declared in the Fargate task definition, and an Amazon EventBridge scheduled event can launch the task once a day. Fargate is serverless, so there are no instances to provision, patch, or scale. Option D would also run the job, but the EC2 launch type requires the company to manage the underlying instances and Auto Scaling group, which adds operational effort.
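A boto3 sketch of option C's scheduling piece, assuming the cluster, task definition, IAM role, and subnet already exist (all ARNs and IDs below are placeholders):

```python
# Sketch: EventBridge rule that launches a Fargate task once a day.
import boto3

CLUSTER_ARN = "arn:aws:ecs:us-east-1:123456789012:cluster/sales-jobs"
TASK_DEF_ARN = "arn:aws:ecs:us-east-1:123456789012:task-definition/aggregate-sales:1"
EVENTS_ROLE_ARN = "arn:aws:iam::123456789012:role/ecsEventsRole"

events = boto3.client("events", region_name="us-east-1")

# Run once a day at 03:00 UTC.
events.put_rule(
    Name="daily-sales-aggregation",
    ScheduleExpression="cron(0 3 * * ? *)",
    State="ENABLED",
)

events.put_targets(
    Rule="daily-sales-aggregation",
    Targets=[
        {
            "Id": "sales-aggregation-task",
            "Arn": CLUSTER_ARN,
            "RoleArn": EVENTS_ROLE_ARN,  # role that allows EventBridge to call ecs:RunTask
            "EcsParameters": {
                "TaskDefinitionArn": TASK_DEF_ARN,
                "LaunchType": "FARGATE",
                # Fargate tasks need awsvpc networking.
                "NetworkConfiguration": {
                    "awsvpcConfiguration": {
                        "Subnets": ["subnet-0123456789abcdef0"],
                        "AssignPublicIp": "ENABLED",
                    }
                },
            },
        }
    ],
)
```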
A company is storing backup files by using Amazon S3 Standard storage. The files are accessed frequently for 1 month. However, the files are not accessed after 1 month. The company must keep the files indefinitely.
Which storage solution will meet these requirements MOST cost-effectively?
- A . Configure S3 Intelligent-Tiering to automatically migrate objects.
- B . Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month.
- C . Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 1 month.
- D . Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 month.
B
Explanation:
The storage solution that will meet these requirements most cost-effectively is B: Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month. Amazon S3 Glacier Deep Archive is a secure, durable, and extremely low-cost Amazon S3 storage class for long-term retention of data that is rarely accessed and for which retrieval times of several hours are acceptable. It is the lowest-cost storage option in Amazon S3, making it a cost-effective choice for storing backup files that are not accessed after 1 month. You can use an S3 Lifecycle configuration to automatically transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month. This will minimize the storage costs for the backup files that are not accessed frequently.
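A boto3 sketch of option B; the bucket name is a placeholder:

```python
# Sketch: lifecycle rule that moves objects to Glacier Deep Archive
# 30 days after creation.
import boto3

BUCKET = "backup-files-bucket"

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-one-month",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Transitions": [
                    {"Days": 30, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    },
)
```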