Practice Free SAA-C03 Exam Online Questions
A company has an on-premises server that uses an Oracle database to process and store customer information. The company wants to use an AWS database service to achieve higher availability and to improve application performance. The company also wants to offload reporting from its primary database system.
Which solution will meet these requirements in the MOST operationally efficient way?
- A . Use AWS Database Migration Service (AWS DMS) to create an Amazon RDS DB instance in multiple AWS Regions. Point the reporting functions toward a separate DB instance from the primary DB instance.
- B . Use Amazon RDS in a Single-AZ deployment to create an Oracle database. Create a read replica in the same zone as the primary DB instance. Direct the reporting functions to the read replica.
- C . Use Amazon RDS deployed in a Multi-AZ cluster deployment to create an Oracle database. Direct the reporting functions to use the reader instance in the cluster deployment.
- D . Use Amazon RDS deployed in a Multi-AZ instance deployment to create an Amazon Aurora database. Direct the reporting functions to the reader instances.
D
Explanation:
Amazon Aurora is a fully managed relational database that is compatible with MySQL and PostgreSQL. It provides up to five times better performance than MySQL and up to three times better performance than PostgreSQL. It also provides high availability and durability by replicating data across multiple Availability Zones and continuously backing up data to Amazon S3. By using Amazon RDS deployed in a Multi-AZ instance deployment to create an Amazon Aurora database, the solution can achieve higher availability and improve application performance.
Amazon Aurora supports read replicas, which are separate instances that share the same underlying storage as the primary instance. Read replicas can be used to offload read-only queries from the primary instance and improve performance. Read replicas can also be used for reporting functions. By directing the reporting functions to the reader instances, the solution can offload reporting from its primary database system.
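As a rough illustration of option D (cluster and instance identifiers below are hypothetical), a boto3 sketch that provisions an Aurora cluster with a writer and a reader instance and then points reporting at the cluster's reader endpoint might look like this:

```python
import boto3

rds = boto3.client("rds")

# Create the Aurora cluster (writer and reader instances are added separately).
rds.create_db_cluster(
    DBClusterIdentifier="customer-data-cluster",   # hypothetical name
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
)

# Add a writer and a reader instance to the cluster.
for name in ("customer-data-writer", "customer-data-reader"):
    rds.create_db_instance(
        DBInstanceIdentifier=name,
        DBClusterIdentifier="customer-data-cluster",
        DBInstanceClass="db.r6g.large",
        Engine="aurora-mysql",
    )

# The cluster exposes a reader endpoint; point reporting tools at this
# endpoint so read-only queries never touch the primary (writer) instance.
cluster = rds.describe_db_clusters(
    DBClusterIdentifier="customer-data-cluster"
)["DBClusters"][0]
print("Reporting endpoint:", cluster["ReaderEndpoint"])
```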
A company needs to provide its employees with secure access to confidential and sensitive files. The company wants to ensure that the files can be accessed only by authorized users. The files must be downloaded securely to the employees' devices.
The files are stored in an on-premises Windows file server. However, due to an increase in remote usage, the file server has run out of capacity.
Which solution will meet these requirements?
- A . Migrate the file server to an Amazon EC2 instance in a public subnet. Configure the security group to limit inbound traffic to the employees' IP addresses.
- B . Migrate the files to an Amazon FSx for Windows File Server file system. Integrate the Amazon FSx file system with the on-premises Active Directory. Configure AWS Client VPN.
- C . Migrate the files to Amazon S3, and create a private VPC endpoint. Create a signed URL to allow download.
- D . Migrate the files to Amazon S3, and create a public VPC endpoint. Allow employees to sign on with AWS IAM Identity Center (AWS Single Sign-On).
B
Explanation:
The Windows file server is on-premises, and we need something to replicate the data to the cloud; the only option we have is Amazon FSx for Windows File Server. Also, since the information is confidential and sensitive, we want to make sure that only the appropriate users have access to it in a secure manner. https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html
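A hedged boto3 sketch of option B (subnet, security group, domain, and credential values are hypothetical): it creates an FSx for Windows File Server file system joined to the existing on-premises Active Directory; AWS Client VPN would be configured separately for remote access.

```python
import boto3

fsx = boto3.client("fsx")

# Create an FSx for Windows File Server file system joined to the
# company's existing on-premises Active Directory.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,                  # GiB
    SubnetIds=["subnet-0abc1234def567890"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ThroughputCapacity": 32,          # MB/s
        "SelfManagedActiveDirectoryConfiguration": {
            "DomainName": "corp.example.com",
            "UserName": "FsxServiceAccount",
            "Password": "REPLACE_ME",
            "DnsIps": ["10.0.0.10", "10.0.0.11"],
        },
    },
)
```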
A company is building an application in the AWS Cloud. The application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses Amazon Route 53 for the DNS.
The company needs a managed solution with proactive engagement to protect against DDoS attacks.
Which solution will meet these requirements?
- A . Enable AWS Config. Configure an AWS Config managed rule that detects DDoS attacks.
- B . Enable AWS WAF on the ALB. Create an AWS WAF web ACL with rules to detect and prevent DDoS attacks. Associate the web ACL with the ALB.
- C . Store the ALB access logs in an Amazon S3 bucket. Configure Amazon GuardDuty to detect and take automated preventative actions for DDoS attacks.
- D . Subscribe to AWS Shield Advanced. Configure hosted zones in Route 53. Add ALB resources as protected resources.
D
Explanation:
AWS Shield Advanced is designed to provide enhanced protection against DDoS attacks with proactive engagement and response capabilities, making it the best solution for this scenario.
AWS Shield Advanced: This service provides advanced protection against DDoS attacks. It includes detailed attack diagnostics, 24/7 access to the AWS DDoS Response Team (DRT), and financial protection against DDoS-related scaling charges. Shield Advanced also integrates with Route 53 and the Application Load Balancer (ALB) to ensure comprehensive protection for your web applications.
Route 53 and ALB Protection: By adding your Route 53 hosted zones and ALB resources to AWS Shield Advanced, you ensure that these components are covered under the enhanced protection plan. Shield Advanced actively monitors traffic and provides real-time attack mitigation, minimizing the impact of DDoS attacks on your application.
Why Not Other Options?
Option A (AWS Config): AWS Config is a configuration management service and does not provide DDoS protection or detection capabilities.
Option B (AWS WAF): While AWS WAF can help mitigate some types of attacks, it does not provide the comprehensive DDoS protection and proactive engagement offered by Shield Advanced.
Option C (GuardDuty): GuardDuty is a threat detection service that identifies potentially malicious activity within your AWS environment, but it is not specifically designed to provide DDoS protection.
Reference: AWS Shield Advanced – Overview of AWS Shield Advanced and its DDoS protection capabilities.
Integrating AWS Shield Advanced with Route 53 and ALB – Detailed guidance on how to protect Route 53 and ALB with AWS Shield Advanced.
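A rough boto3 sketch of option D (the ARNs are hypothetical placeholders): Shield Advanced is subscribed to once per account, and the ALB and the Route 53 hosted zone are then registered as protected resources.

```python
import boto3

shield = boto3.client("shield")

# Shield Advanced must be subscribed to first (one-time, account-level).
# shield.create_subscription()

# Register each resource for Shield Advanced protection.
protected_resources = [
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
    "loadbalancer/app/web-alb/50dc6c495c0c9188",
    "arn:aws:route53:::hostedzone/Z0123456789ABCDEFGHIJ",
]

for arn in protected_resources:
    shield.create_protection(Name=arn.split("/")[-1], ResourceArn=arn)
```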
A company’s application runs on Amazon EC2 instances that are in multiple Availability Zones. The application needs to ingest real-time data from third-party applications.
The company needs a data ingestion solution that places the ingested raw data in an Amazon S3 bucket.
Which solution will meet these requirements?
- A . Create Amazon Kinesis data streams for data ingestion. Create Amazon Kinesis Data Firehose delivery streams to consume the Kinesis data streams. Specify the S3 bucket as the destination of the delivery streams.
- B . Create database migration tasks in AWS Database Migration Service (AWS DMS). Specify replication instances of the EC2 instances as the source endpoints. Specify the S3 bucket as the target endpoint. Set the migration type to migrate existing data and replicate ongoing changes.
- C . Create and configure AWS DataSync agents on the EC2 instances. Configure DataSync tasks to transfer data from the EC2 instances to the S3 bucket.
- D . Create an AWS Direct Connect connection to the application for data ingestion. Create Amazon Kinesis Data Firehose delivery streams to consume direct PUT operations from the application. Specify the S3 bucket as the destination of the delivery streams.
A
Explanation:
The solution that will meet the requirements is to create Amazon Kinesis data streams for data ingestion, create Amazon Kinesis Data Firehose delivery streams to consume the Kinesis data streams, and specify the S3 bucket as the destination of the delivery streams. This solution will allow the company’s application to ingest real-time data from third-party applications and place the ingested raw data in an S3 bucket. Amazon Kinesis data streams are scalable and durable streams that can capture and store data from hundreds of thousands of sources. Amazon Kinesis Data Firehose is a fully managed service that can deliver streaming data to destinations such as S3, Amazon Redshift, Amazon OpenSearch Service, and Splunk. Amazon Kinesis Data Firehose can also transform and compress the data before delivering it to S3.
The other solutions are not as effective as the first one because they either do not support real-time data ingestion, do not work with third-party applications, or do not use S3 as the destination. Creating database migration tasks in AWS Database Migration Service (AWS DMS) will not support real-time data ingestion, as AWS DMS is mainly designed for migrating relational databases, not streaming data. AWS DMS also requires replication instances, source endpoints, and target endpoints to be compatible with specific database engines and versions. Creating and configuring AWS DataSync agents on the EC2 instances will not work with third-party applications, as AWS DataSync is a service that transfers data between on-premises storage systems and AWS storage services, not between applications. AWS DataSync also requires installing agents on the source or destination servers. Creating an AWS Direct Connect connection to the application for data ingestion will not use S3 as the destination, as AWS Direct Connect is a service that establishes a dedicated network connection between on-premises and AWS, not between applications and storage services. AWS Direct Connect also requires a physical connection to an AWS Direct Connect location.
Reference: Amazon Kinesis
Amazon Kinesis Data Firehose
AWS Database Migration Service
AWS DataSync
AWS Direct Connect
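For illustration only (stream, role, and bucket names are hypothetical), a boto3 sketch of option A, a Kinesis data stream as the source and a Firehose delivery stream that delivers the raw records to S3:

```python
import boto3

kinesis = boto3.client("kinesis")
firehose = boto3.client("firehose")

# Stream that the third-party applications write real-time records to.
kinesis.create_stream(StreamName="ingest-stream", ShardCount=2)

# Firehose delivery stream that reads from the Kinesis stream and
# delivers the raw records to the S3 bucket.
firehose.create_delivery_stream(
    DeliveryStreamName="ingest-to-s3",
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:111122223333:stream/ingest-stream",
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-read-kinesis",
    },
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-write-s3",
        "BucketARN": "arn:aws:s3:::raw-ingest-bucket",
        "Prefix": "raw/",
    },
)
```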
A social media company wants to allow its users to upload images in an application that is hosted in the AWS Cloud. The company needs a solution that automatically resizes the images so that the images can be displayed on multiple device types. The application experiences unpredictable traffic patterns throughout the day. The company is seeking a highly available solution that maximizes scalability.
What should a solutions architect do to meet these requirements?
- A . Create a static website hosted in Amazon S3 that invokes AWS Lambda functions to resize the images and store the images in an Amazon S3 bucket.
- B . Create a static website hosted in Amazon CloudFront that invokes AWS Step Functions to resize the images and store the images in an Amazon RDS database.
- C . Create a dynamic website hosted on a web server that runs on an Amazon EC2 instance. Configure a process that runs on the EC2 instance to resize the images and store the images in an Amazon S3 bucket.
- D . Create a dynamic website hosted on an automatically scaling Amazon Elastic Container Service (Amazon ECS) cluster that creates a resize job in Amazon Simple Queue Service (Amazon SQS). Set up an image-resizing program that runs on an Amazon EC2 instance to process the resize jobs.
A
Explanation:
By using Amazon S3 and AWS Lambda together, you can create a serverless architecture that provides highly scalable and available image resizing capabilities. Here’s how the solution would work:
Set up an Amazon S3 bucket to store the original images uploaded by users.
Configure an event trigger on the S3 bucket to invoke an AWS Lambda function whenever a new image is uploaded.
Design the Lambda function to retrieve the uploaded image, perform the necessary resizing operations based on device requirements, and store the resized images back in the S3 bucket or in a different bucket designated for resized images.
Configure the Amazon S3 bucket to make the resized images publicly accessible for serving to users.
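A minimal sketch of such a Lambda handler, assuming the Pillow library is packaged with the function and using hypothetical bucket names and target sizes:

```python
import io
from urllib.parse import unquote_plus

import boto3
from PIL import Image  # assumes Pillow is bundled with the function

s3 = boto3.client("s3")
SIZES = {"thumbnail": (200, 200), "mobile": (640, 640)}  # hypothetical targets
DEST_BUCKET = "resized-images-bucket"                    # hypothetical bucket

def handler(event, context):
    # Invoked by an S3 ObjectCreated event on the upload bucket.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        for label, size in SIZES.items():
            image = Image.open(io.BytesIO(body))
            image.thumbnail(size)            # resize, preserving aspect ratio
            buffer = io.BytesIO()
            image.save(buffer, format=image.format or "JPEG")
            s3.put_object(
                Bucket=DEST_BUCKET,
                Key=f"{label}/{key}",
                Body=buffer.getvalue(),
            )
```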
A company uses Amazon API Gateway to manage its REST APIs that third-party service providers access. The company must protect the REST APIs from SQL injection and cross-site scripting attacks.
What is the MOST operationally efficient solution that meets these requirements?
- A . Configure AWS Shield.
- B . Configure AWS WAF.
- C . Set up API Gateway with an Amazon CloudFront distribution. Configure AWS Shield in CloudFront.
- D . Set up API Gateway with an Amazon CloudFront distribution. Configure AWS WAF in CloudFront.
D
Explanation:
Amazon API Gateway with CloudFront: API Gateway allows you to create, deploy, and manage APIs, while CloudFront provides a CDN to deliver content with low latency and high transfer speeds.
AWS WAF (Web Application Firewall):
AWS WAF can be configured in CloudFront to protect against common web exploits, including SQL injection and cross-site scripting (XSS).
WAF allows you to create custom rules to block specific attack patterns and can be managed centrally.
Configuration:
Deploy your APIs using Amazon API Gateway.
Set up an Amazon CloudFront distribution in front of the API Gateway.
Configure AWS WAF on the CloudFront distribution to apply security rules.
Operational Efficiency: This solution provides robust protection with minimal operational overhead by leveraging managed AWS services, ensuring that your APIs are secure without extensive custom implementation.
Reference: Using AWS WAF to Protect Your APIs
How CloudFront Works with AWS WAF
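As a hedged sketch (the web ACL name is hypothetical), creating a CLOUDFRONT-scoped web ACL with the AWS managed SQL injection and common (XSS) rule groups via boto3 could look like this; the returned ARN is then attached to the CloudFront distribution that fronts API Gateway:

```python
import boto3

# WAFv2 web ACLs used with CloudFront must be created with Scope=CLOUDFRONT
# in the us-east-1 Region.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

def managed_rule(name, priority):
    # Helper for AWS managed rule groups (SQLi and common/XSS protections).
    return {
        "Name": name,
        "Priority": priority,
        "Statement": {
            "ManagedRuleGroupStatement": {"VendorName": "AWS", "Name": name}
        },
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }

acl = wafv2.create_web_acl(
    Name="api-protection",                   # hypothetical name
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[
        managed_rule("AWSManagedRulesSQLiRuleSet", 1),
        managed_rule("AWSManagedRulesCommonRuleSet", 2),  # includes XSS rules
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "api-protection",
    },
)

# Reference this ARN as the web ACL of the CloudFront distribution
# that sits in front of the API Gateway origin.
print(acl["Summary"]["ARN"])
```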
A company has hired a solutions architect to design a reliable architecture for its application. The application consists of one Amazon RDS DB instance and two manually provisioned Amazon EC2 instances that run web servers. The EC2 instances are located in a single Availability Zone.
An employee recently deleted the DB instance, and the application was unavailable for 24 hours as a result. The company is concerned with the overall reliability of its environment.
What should the solutions architect do to maximize reliability of the application’s infrastructure?
- A . Delete one EC2 instance and enable termination protection on the other EC2 instance. Update the DB instance to be Multi-AZ, and enable deletion protection.
- B . Update the DB instance to be Multi-AZ, and enable deletion protection. Place the EC2 instances behind an Application Load Balancer, and run them in an EC2 Auto Scaling group across multiple Availability Zones.
- C . Create an additional DB instance along with an Amazon API Gateway and an AWS Lambda function. Configure the application to invoke the Lambda function through API Gateway. Have the Lambda function write the data to the two DB instances.
- D . Place the EC2 instances in an EC2 Auto Scaling group that has multiple subnets located in multiple Availability Zones. Use Spot Instances instead of On-Demand Instances. Set up Amazon CloudWatch alarms to monitor the health of the instances. Update the DB instance to be Multi-AZ, and enable deletion protection.
B
Explanation:
This answer is correct because it meets the requirements of maximizing the reliability of the application’s infrastructure. You can update the DB instance to be Multi-AZ, which means that Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance. It can also help protect your databases against DB instance failure and Availability Zone disruption. You can also enable deletion protection on the DB instance, which prevents the DB instance from being deleted by any user. You can place the EC2 instances behind an Application Load Balancer, which distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. This increases the availability and fault tolerance of your applications. You can run the EC2 instances in an EC2 Auto Scaling group across multiple Availability Zones, which ensures that you have the correct number of EC2 instances available to handle the load for your application. You can use scaling policies to adjust the number of instances in your Auto Scaling group in response to changing demand.
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_DeleteInstance.html#USER_DeleteInstance.DeletionProtection
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html
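A brief boto3 sketch of option B, assuming the ALB target group and a launch template already exist (all identifiers and ARNs are hypothetical):

```python
import boto3

rds = boto3.client("rds")
autoscaling = boto3.client("autoscaling")

# Convert the existing DB instance to Multi-AZ and protect it from
# accidental deletion.
rds.modify_db_instance(
    DBInstanceIdentifier="app-db",
    MultiAZ=True,
    DeletionProtection=True,
    ApplyImmediately=True,
)

# Run the web servers in an Auto Scaling group that spans two subnets in
# different Availability Zones and registers with the ALB's target group.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateId": "lt-0abc1234def567890", "Version": "$Latest"},
    MinSize=2,
    MaxSize=4,
    VPCZoneIdentifier="subnet-0aaa1111bbb22222c,subnet-0ddd3333eee44444f",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "targetgroup/web-tg/73e2d6bc24d8a067"
    ],
)
```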
A company is designing a new multi-tier web application that consists of the following components:
• Web and application servers that run on Amazon EC2 instances as part of Auto Scaling groups
• An Amazon RDS DB instance for data storage
A solutions architect needs to limit access to the application servers so that only the web servers can access them.
Which solution will meet these requirements?
- A . Deploy AWS PrivateLink in front of the application servers. Configure the network ACL to allow only the web servers to access the application servers.
- B . Deploy a VPC endpoint in front of the application servers. Configure the security group to allow only the web servers to access the application servers.
- C . Deploy a Network Load Balancer with a target group that contains the application servers’ Auto Scaling group. Configure the network ACL to allow only the web servers to access the application servers.
- D . Deploy an Application Load Balancer with a target group that contains the application servers’ Auto Scaling group. Configure the security group to allow only the web servers to access the application servers.
D
Explanation:
Application Load Balancer (ALB): ALB is suitable for routing HTTP/HTTPS traffic to the application servers. It provides advanced routing features and integrates well with Auto Scaling groups.
Target Group Configuration:
Create a target group for the application servers and register the Auto Scaling group with this target group.
Configure the ALB to forward requests from the web servers to the application servers.
Security Group Setup:
Configure the security group of the application servers to only allow traffic from the web servers’ security group.
This ensures that only the web servers can access the application servers, meeting the requirement to limit access.
Benefits:
Security: Using security groups to restrict access ensures a secure environment where only intended traffic is allowed.
Scalability: ALB works seamlessly with Auto Scaling groups, ensuring the application can handle varying loads efficiently.
Reference: Application Load Balancer
Security Groups for Your VPC
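A short boto3 sketch of the security group rule from option D (group IDs and the application port are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# The application-tier security group accepts traffic only from the
# web-tier security group, not from arbitrary IP ranges.
APP_TIER_SG = "sg-0aaaa1111bbbb2222c"   # attached to the application servers
WEB_TIER_SG = "sg-0cccc3333dddd4444e"   # attached to the web servers

ec2.authorize_security_group_ingress(
    GroupId=APP_TIER_SG,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 8080,
            "ToPort": 8080,
            "UserIdGroupPairs": [{"GroupId": WEB_TIER_SG}],
        }
    ],
)
```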
A meteorological startup company has a custom web application to sell weather data to its users online. The company uses Amazon DynamoDB to store its data and wants to build a new service that sends an alert to the managers of four internal teams every time a new weather event is recorded. The company does not want the new service to affect the performance of the current application.
What should a solutions architect do to meet these requirements with the LEAST amount of operational overhead?
- A . Use DynamoDB transactions to write new event data to the table. Configure the transactions to notify internal teams.
- B . Have the current application publish a message to four Amazon Simple Notification Service (Amazon SNS) topics. Have each team subscribe to one topic.
- C . Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which the teams can subscribe.
- D . Add a custom attribute to each record to flag new items. Write a cron job that scans the table every minute for items that are new and notifies an Amazon Simple Queue Service (Amazon SQS) queue to which the teams can subscribe.
C
Explanation:
The best solution to meet these requirements with the least amount of operational overhead is to enable Amazon DynamoDB Streams on the table and use triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which the teams can subscribe. This solution requires minimal configuration and infrastructure setup, and Amazon DynamoDB Streams provide a low-latency way to capture changes to the DynamoDB table. The triggers automatically capture the changes and publish them to the SNS topic, which notifies the internal teams.
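A minimal sketch of the Lambda function that a DynamoDB Streams trigger would invoke, publishing each newly inserted item to one SNS topic (the topic ARN is hypothetical; all four teams subscribe to that topic):

```python
import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:weather-event-alerts"

def handler(event, context):
    # Invoked by the DynamoDB stream attached to the weather-events table.
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue                     # only newly recorded weather events
        new_item = record["dynamodb"]["NewImage"]
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="New weather event recorded",
            Message=json.dumps(new_item),
        )
```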
A company uses a legacy application to produce data in CSV format. The legacy application stores the output data in Amazon S3. The company is deploying a new commercial off-the-shelf (COTS) application that can perform complex SQL queries to analyze data that is stored in Amazon Redshift and Amazon S3 only. However, the COTS application cannot process the .csv files that the legacy application produces. The company cannot update the legacy application to produce data in another format. The company needs to implement a solution so that the COTS application can use the data that the legacy application produces.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an AWS Glue extract, transform, and load (ETL) job that runs on a schedule. Configure the ETL job to process the .csv files and store the processed data in Amazon Redshift.
- B . Develop a Python script that runs on Amazon EC2 instances to convert the .csv files to .sql files. Invoke the Python script on a cron schedule to store the output files in Amazon S3.
- C . Create an AWS Lambda function and an Amazon DynamoDB table. Use an S3 event to invoke the Lambda function. Configure the Lambda function to perform an extract transform, and load (ETL) job to process the .csv files and store the processed data in the DynamoDB table.
- D . Use Amazon EventBridge (Amazon CloudWatch Events) to launch an Amazon EMR cluster on a weekly schedule. Configure the EMR cluster to perform an extract, transform, and load (ETL) job to process the .csv files and store the processed data in an Amazon Redshift table.
A
Explanation:
This solution meets the requirements of implementing a solution so that the COTS application can use the data that the legacy application produces with the least operational overhead. AWS Glue is a fully managed service that provides a serverless ETL platform to prepare and load data for analytics. AWS Glue can process data in various formats, including .csv files, and store the processed data in Amazon Redshift, which is a fully managed data warehouse service that supports complex SQL queries. AWS Glue can run ETL jobs on a schedule, which can automate the data processing and loading process.
Option B is incorrect because developing a Python script that runs on Amazon EC2 instances to convert the .csv files to sql files can increase the operational overhead and complexity, and it may not provide consistent data processing and loading for the COTS application.
Option C is incorrect because creating an AWS Lambda function and an Amazon DynamoDB table to process the .csv files and store the processed data in the DynamoDB table does not meet the requirement of using Amazon Redshift as the data source for the COTS application.
Option D is incorrect because using Amazon EventBridge (Amazon CloudWatch Events) to launch an Amazon EMR cluster on a weekly schedule to process the .csv files and store the processed data in an Amazon Redshift table can increase the operational overhead and complexity, and it may not provide timely data processing and loading for the COTS application.
Reference: https://aws.amazon.com/glue/
https://aws.amazon.com/redshift/
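A minimal boto3 sketch of registering and scheduling such a Glue ETL job, assuming the ETL script itself already lives in S3 (names, ARNs, and the schedule are hypothetical):

```python
import boto3

glue = boto3.client("glue")

# Register the ETL job; the script (stored in S3) reads the legacy .csv
# files and writes the transformed rows to Amazon Redshift.
glue.create_job(
    Name="csv-to-redshift",                      # hypothetical name
    Role="arn:aws:iam::111122223333:role/glue-etl-role",
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://etl-scripts-bucket/csv_to_redshift.py",
        "PythonVersion": "3",
    },
    GlueVersion="4.0",
)

# Run the job on a schedule, e.g. nightly at 02:00 UTC.
glue.create_trigger(
    Name="csv-to-redshift-nightly",
    Type="SCHEDULED",
    Schedule="cron(0 2 * * ? *)",
    Actions=[{"JobName": "csv-to-redshift"}],
    StartOnCreation=True,
)
```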