Practice Free SAA-C03 Exam Online Questions
A company has users all around the world accessing its HTTP-based application deployed on Amazon EC2 instances in multiple AWS Regions. The company wants to improve the availability and performance of the application. The company also wants to protect the application against common web exploits that may affect availability, compromise security, or consume excessive resources. Static IP addresses are required.
What should a solutions architect recommend to accomplish this?
- A . Put the EC2 instances behind Network Load Balancers (NLBs) in each Region. Deploy AWS WAF on the NLBs. Create an accelerator using AWS Global Accelerator and register the NLBs as endpoints.
- B . Put the EC2 instances behind Application Load Balancers (ALBs) in each Region. Deploy AWS WAF on the ALBs. Create an accelerator using AWS Global Accelerator and register the ALBs as endpoints.
- C . Put the EC2 instances behind Network Load Balancers (NLBs) in each Region. Deploy AWS WAF on the NLBs. Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to the NLBs.
- D . Put the EC2 instances behind Application Load Balancers (ALBs) in each Region. Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to the ALBs. Deploy AWS WAF on the CloudFront distribution.
B
Explanation:
The company wants to improve the availability and performance of the application, protect it against common web exploits, and expose static IP addresses. AWS WAF can be associated with Application Load Balancers, Amazon CloudFront distributions, and Amazon API Gateway, but not with Network Load Balancers, which rules out options A and C. A solutions architect should therefore recommend the following solution:
Put the EC2 instances behind Application Load Balancers (ALBs) in each Region. ALBs operate at the request level (layer 7), distribute HTTP traffic across multiple Availability Zones, and provide the routing features an HTTP-based application needs.
Deploy AWS WAF on the ALBs. AWS WAF is a web application firewall that helps protect web applications from common web exploits that could affect availability, security, or performance. AWS WAF lets you define customizable web security rules that control which traffic to allow or block.
Create an accelerator using AWS Global Accelerator and register the ALBs as endpoints. AWS Global Accelerator improves the availability and performance of applications with local or global users. It provides two static anycast IP addresses that act as a fixed entry point to application endpoints in any AWS Region, and it routes traffic over the AWS global network to the nearest healthy endpoint.
This solution provides high availability across Availability Zones and Regions, improves performance by routing traffic over the AWS global network, protects the application from common web exploits, and satisfies the static IP address requirement. Option D uses CloudFront, which does not provide static IP addresses.
Reference: Application Load Balancer
AWS WAF
AWS Global Accelerator
A company has two VPCs named Management and Production. The Management VPC uses VPNs through a customer gateway to connect to a single device in the data center. The Production VPC uses a virtual private gateway with AWS Direct Connect connections. The Management and Production VPCs both use a single VPC peering connection to allow communication between the two VPCs.
What should a solutions architect do to mitigate any single point of failure in this architecture?
- A . Add a set of VPNs between the Management and Production VPCs.
- B . Add a second virtual private gateway and attach it to the Management VPC.
- C . Add a second set of VPNs to the Management VPC from a second customer gateway device.
- D . Add a second VPC peering connection between the Management VPC and the Production VPC.
C
Explanation:
This answer is correct because it provides redundancy for the VPN connection between the Management VPC and the data center. Each Site-to-Site VPN connection already provides two tunnels, but both tunnels terminate on the same customer gateway device, so that single on-premises device remains a single point of failure. Adding a second set of VPNs from a second customer gateway device gives the Management VPC a redundant path: if one device or VPN connection becomes unavailable, traffic can still flow through the other.
Reference:
https://docs.aws.amazon.com/vpn/latest/s2svpn/vpn-redundant-connection.html
https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/VPC/vpn-tunnel-redundancy.html
A company’s infrastructure consists of Amazon EC2 instances and an Amazon RDS DB instance in a single AWS Region. The company wants to back up its data in a separate Region.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use AWS Backup to copy EC2 backups and RDS backups to the separate Region.
- B . Use Amazon Data Lifecycle Manager (Amazon DLM) to copy EC2 backups and RDS backups to the separate Region.
- C . Create Amazon Machine Images (AMIs) of the EC2 instances. Copy the AMIs to the separate Region. Create a read replica for the RDS DB instance in the separate Region.
- D . Create Amazon Elastic Block Store (Amazon EBS) snapshots. Copy the EBS snapshots to the separate Region. Create RDS snapshots. Export the RDS snapshots to Amazon S3. Configure S3 Cross-Region Replication (CRR) to the separate Region.
A
Explanation:
To back up EC2 instances and RDS DB instances in a separate Region with the least operational overhead, AWS Backup is a simple and cost-effective solution. AWS Backup can copy EC2 backups and RDS backups to another Region automatically and securely. AWS Backup also supports backup policies, retention rules, and monitoring features.
Reference:
What Is AWS Backup?
Cross-Region Backup
A recent analysis of a company’s IT expenses highlights the need to reduce backup costs. The company’s chief information officer wants to simplify the on-premises backup infrastructure and reduce costs by eliminating the use of physical backup tapes. The company must preserve the existing investment in the on-premises backup applications and workflows.
What should a solutions architect recommend?
- A . Set up AWS Storage Gateway to connect with the backup applications using the NFS interface.
- B . Set up an Amazon EFS file system that connects with the backup applications using the NFS interface.
- C . Set up an Amazon EFS file system that connects with the backup applications using the iSCSI interface.
- D . Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual tape library (VTL) interface.
D
Explanation:
Setting up AWS Storage Gateway with the iSCSI-virtual tape library (VTL) interface (a Tape Gateway) allows the company to simplify its on-premises backup infrastructure and reduce costs by eliminating physical backup tapes. Backup data is written to virtual tapes that are stored in Amazon S3 and can be archived to S3 Glacier. Because the VTL presents itself to the backup applications exactly like a physical tape library, the existing investment in the on-premises backup applications and workflows is preserved.
Reference: AWS Storage Gateway
Tape Gateway
A media company hosts a web application on AWS. The application gives users the ability to upload and view videos. The application stores the videos in an Amazon S3 bucket. The company wants to ensure that only authenticated users can upload videos. Authenticated users must have the ability to upload videos only within a specified time frame after authentication.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Configure the application to generate IAM temporary security credentials for authenticated users.
- B . Create an AWS Lambda function that generates pre-signed URLs when a user authenticates.
- C . Develop a custom authentication service that integrates with Amazon Cognito to control and log direct S3 bucket access through the application.
- D . Use AWS Security Token Service (AWS STS) to assume a pre-defined IAM role that grants authenticated users temporary permissions to upload videos directly to the S3 bucket.
B
Explanation:
Pre-Signed URLs: Allow temporary access to S3 buckets, making it easy to manage time-limited upload permissions without complex operational overhead.
Lambda for Automation: Automatically generates and provides pre-signed URLs when users authenticate, minimizing manual steps and code complexity.
Least Operational Overhead: Requires no custom authentication service or deep integration with STS or Cognito.
Reference: Amazon S3 Pre-Signed URLs Documentation
A company hosts a data lake on AWS. The data lake consists of data in Amazon S3 and Amazon RDS for PostgreSQL. The company needs a reporting solution that provides data visualization and includes all the data sources within the data lake. Only the company’s management team should have full access to all the visualizations. The rest of the company should have only limited access.
Which solution will meet these requirements?
- A . Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate IAM roles.
- B . Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate users and groups.
- C . Create an AWS Glue table and crawler for the data in Amazon S3. Create an AWS Glue extract, transform, and load (ETL) job to produce reports. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
- D . Create an AWS Glue table and crawler for the data in Amazon S3. Use Amazon Athena Federated Query to access data within Amazon RDS for PostgreSQL. Generate reports by using Amazon Athena. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
B
Explanation:
Amazon QuickSight is a data visualization service that allows you to create interactive dashboards and reports from various data sources, including Amazon S3 and Amazon RDS for PostgreSQL. You can connect all the data sources, create new datasets in QuickSight, and publish dashboards to visualize the data. Dashboards are shared with QuickSight users and groups, which makes it straightforward to give the management team full access while granting the rest of the company read-only access. QuickSight dashboards are not shared directly with IAM roles, which rules out option A.
Reference: https://docs.aws.amazon.com/quicksight/latest/user/working-with-data-sources.html
A company has an ecommerce checkout workflow that writes an order to a database and calls a service to process the payment. Users are experiencing timeouts during the checkout process. When users resubmit the checkout form, multiple unique orders are created for the same desired transaction.
How should a solutions architect refactor this workflow to prevent the creation of multiple orders?
- A . Configure the web application to send an order message to Amazon Kinesis Data Firehose. Set the payment service to retrieve the message from Kinesis Data Firehose and process the order.
- B . Create a rule in AWS CloudTrail to invoke an AWS Lambda function based on the logged application path request. Use Lambda to query the database, call the payment service, and pass in the order information.
- C . Store the order in the database. Send a message that includes the order number to Amazon Simple Notification Service (Amazon SNS). Set the payment service to poll Amazon SNS, retrieve the message, and process the order.
- D . Store the order in the database. Send a message that includes the order number to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the payment service to retrieve the message and process the order. Delete the message from the queue.
D
Explanation:
Storing the order first and then sending a message with the order number to an SQS FIFO queue decouples order creation from payment processing. A FIFO queue provides exactly-once processing: messages with the same deduplication ID sent within the 5-minute deduplication interval are delivered only once, so a resubmitted checkout form does not create a second order for the same transaction. The payment service processes messages in the order they were received, and deleting each message from the queue after successful processing prevents it from being processed again.
A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes resiliency and scalability.
How should a solutions architect design the architecture to meet these requirements?
- A . Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.
- B . Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
- C . Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.
- D . Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure Amazon EventBridge (Amazon CloudWatch Events) as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes.
B
Explanation:
To maximize resiliency and scalability, the best solution is to use an Amazon SQS queue as a destination for the jobs. This decouples the primary server from the compute nodes, allowing them to scale independently. This also helps to prevent job loss in the event of a failure. Using an Auto Scaling group of Amazon EC2 instances for the compute nodes allows for automatic scaling based on the workload. In this case, it’s recommended to configure the Auto Scaling group based on the size of the Amazon SQS queue, which is a better indicator of the actual workload than the load on the primary server or compute nodes. This approach ensures that the application can handle variable workloads, while also minimizing costs by automatically scaling up or down the compute nodes as needed.
A company hosts an application on Amazon EC2 instances that are part of a target group behind an Application Load Balancer (ALB). The company has attached a security group to the ALB.
During a recent review of application logs, the company found many unauthorized login attempts from IP addresses that belong to countries outside the company’s normal user base. The company wants to allow traffic only from the United States and Australia.
Which solution will meet these requirements?
- A . Edit the default network ACL to block IP addresses from outside of the allowed countries.
- B . Create a geographic match rule in AWS WAF. Attach the rule to the ALB.
- C . Configure the ALB security group to allow the IP addresses of company employees. Edit the default network ACL to block IP addresses from outside of the allowed countries.
- D . Use a host-based firewall on the EC2 instances to block IP addresses from outside of the allowed countries. Configure the ALB security group to allow the IP addresses of company employees.
B
Explanation:
Why Option B is Correct:
AWS WAF: Provides a simple way to create geographic match rules to block or allow traffic based on country IP ranges.
Least Operational Overhead: Attaching the WAF rule to the ALB ensures centralized control without modifying ACLs or instance firewalls.
Why Other Options Are Not Ideal:
Option A: Network ACLs operate at the subnet level and can become complex to manage for dynamic or evolving IP ranges.
Option C: Managing IP-based rules in security groups and ACLs lacks scalability and does not provide country-based filtering.
Option D: Configuring host-based firewalls increases operational overhead and does not leverage AWS-managed solutions.
Reference: AWS WAF geographic match rule statement (AWS Documentation)
A company has created an image analysis application in which users can upload photos and add photo frames to their images. The users upload images and metadata to indicate which photo frames they want to add to their images. The application uses a single Amazon EC2 instance and Amazon DynamoDB to store the metadata.
The application is becoming more popular, and the number of users is increasing. The company expects the number of concurrent users to vary significantly depending on the time of day and day of week. The company must ensure that the application can scale to meet the needs of the growing user base.
Which solution meets these requirements?
- A . Use AWS Lambda to process the photos. Store the photos and metadata in DynamoDB.
- B . Use Amazon Kinesis Data Firehose to process the photos and to store the photos and metadata.
- C . Use AWS Lambda to process the photos. Store the photos in Amazon S3. Retain DynamoDB to store the metadata.
- D . Increase the number of EC2 instances to three. Use Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volumes to store the photos and metadata.
C
Explanation:
https://www.quora.com/How-can-I-use-DynamoDB-for-storing-metadata-for-Amazon-S3-objects
This solution meets the requirements of scalability, performance, and availability. AWS Lambda can process the photos in parallel and scale up or down automatically depending on the demand. Amazon S3 can store the photos and metadata reliably and durably, and provide high availability and low latency. DynamoDB can store the metadata efficiently and provide consistent performance. This solution also reduces the cost and complexity of managing EC2 instances and EBS volumes.
Option A is incorrect because storing the photos in DynamoDB is not a good practice, as it can increase the storage cost and limit the throughput.
Option B is incorrect because Kinesis Data Firehose is not designed for processing photos, but for streaming data to destinations such as S3 or Redshift.
Option D is incorrect because increasing the number of EC2 instances and using Provisioned IOPS SSD volumes does not guarantee scalability, as it depends on the load balancer and the application code. It also increases the cost and complexity of managing the infrastructure.