Practice Free SAA-C03 Exam Online Questions
A reporting team receives files each day in an Amazon S3 bucket. The reporting team manually reviews and copies the files from this initial S3 bucket to an analysis S3 bucket each day at the same time to use with Amazon QuickSight. Additional teams are starting to send more files in larger sizes to the initial S3 bucket.
The reporting team wants to move the files automatically to the analysis S3 bucket as the files enter the initial S3 bucket. The reporting team also wants to use AWS Lambda functions to run pattern-matching code on the copied data. In addition, the reporting team wants to send the data files to a pipeline in Amazon SageMaker Pipelines.
What should a solutions architect do to meet these requirements with the LEAST operational overhead?
- A . Create a Lambda function to copy the files to the analysis S3 bucket. Create an S3 event notification for the analysis S3 bucket. Configure Lambda and SageMaker Pipelines as destinations of the event notification. Configure s3:ObjectCreated:Put as the event type.
- B . Create a Lambda function to copy the files to the analysis S3 bucket. Configure the analysis S3 bucket to send event notifications to Amazon EventBridge (Amazon CloudWatch Events). Configure an ObjectCreated rule in EventBridge (CloudWatch Events). Configure Lambda and SageMaker Pipelines as targets for the rule.
- C . Configure S3 replication between the S3 buckets. Create an S3 event notification for the analysis S3 bucket. Configure Lambda and SageMaker Pipelines as destinations of the event notification. Configure s3:ObjectCreated:Put as the event type.
- D . Configure S3 replication between the S3 buckets. Configure the analysis S3 bucket to send event notifications to Amazon EventBridge (Amazon CloudWatch Events). Configure an ObjectCreated rule in EventBridge (CloudWatch Events). Configure Lambda and SageMaker Pipelines as targets for the rule.
D
Explanation:
This solution meets the requirements of moving the files automatically, running Lambda functions on the copied data, and sending the data files to SageMaker Pipelines with the least operational overhead. S3 replication can copy the files from the initial S3 bucket to the analysis S3 bucket as they arrive. The analysis S3 bucket can send event notifications to Amazon EventBridge (Amazon CloudWatch Events) when an object is created. EventBridge can trigger Lambda and SageMaker Pipelines as targets for the ObjectCreated rule. Lambda can run pattern-matching code on the copied data, and SageMaker Pipelines can execute a pipeline with the data files.
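For illustration, here is a minimal boto3 sketch of the event-routing side of option D, assuming S3 replication is already configured on the initial bucket; the bucket name, rule name, and ARNs are hypothetical.

```python
import json

import boto3

s3 = boto3.client("s3")
events = boto3.client("events")

# Hypothetical resource names used for illustration only.
ANALYSIS_BUCKET = "analysis-bucket"
LAMBDA_ARN = "arn:aws:lambda:us-east-1:111122223333:function:pattern-matcher"
PIPELINE_ARN = "arn:aws:sagemaker:us-east-1:111122223333:pipeline/reporting-pipeline"
ROLE_ARN = "arn:aws:iam::111122223333:role/eventbridge-invoke-role"

# 1. Turn on EventBridge delivery for the analysis bucket's S3 events.
s3.put_bucket_notification_configuration(
    Bucket=ANALYSIS_BUCKET,
    NotificationConfiguration={"EventBridgeConfiguration": {}},
)

# 2. Rule that matches "Object Created" events from that bucket.
events.put_rule(
    Name="analysis-object-created",
    EventPattern=json.dumps(
        {
            "source": ["aws.s3"],
            "detail-type": ["Object Created"],
            "detail": {"bucket": {"name": [ANALYSIS_BUCKET]}},
        }
    ),
)

# 3. Fan out to both the Lambda function and the SageMaker pipeline.
#    (The Lambda function also needs a resource-based permission that
#    allows events.amazonaws.com to invoke it.)
events.put_targets(
    Rule="analysis-object-created",
    Targets=[
        {"Id": "pattern-matching-lambda", "Arn": LAMBDA_ARN},
        {"Id": "sagemaker-pipeline", "Arn": PIPELINE_ARN, "RoleArn": ROLE_ARN},
    ],
)
```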
Options A and B are incorrect because creating a Lambda function to copy the files to the analysis S3 bucket is unnecessary when S3 replication can do that automatically. Managing the copy function also adds operational overhead.
Option C is incorrect because S3 event notifications can deliver only to Amazon SNS, Amazon SQS, and AWS Lambda; SageMaker Pipelines is not a supported destination, so EventBridge is needed to fan out to both targets.
Reference: https://aws.amazon.com/blogs/machine-learning/automate-feature-engineering-pipelines-with-amazon-sagemaker/
https://docs.aws.amazon.com/sagemaker/latest/dg/automating-sagemaker-with-eventbridge.html
https://aws.amazon.com/about-aws/whats-new/2021/04/new-options-trigger-amazon-sagemaker-pipeline-executions/
A company hosts a database that runs on an Amazon RDS instance that is deployed to multiple Availability Zones. The company periodically runs a script against the database to report new entries that are added to the database. The script that runs against the database negatively affects the performance of a critical application. The company needs to improve application performance with minimal costs.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Add functionality to the script to identify the instance that has the fewest active connections. Configure the script to read from that instance to report the total new entries.
- B . Create a read replica of the database. Configure the script to query only the read replica to report the total new entries.
- C . Instruct the development team to manually export the new entries for the day in the database at the end of each day.
- D . Use Amazon ElastiCache to cache the common queries that the script runs against the database.
B
Explanation:
Amazon RDS Read Replica: Creating a read replica offloads read traffic from the primary database, improving performance for critical applications without affecting their write operations.
Minimal Operational Overhead: The read replica automatically stays in sync with the primary database, requiring minimal management effort.
Cost Efficiency: A read replica avoids higher costs compared to building custom caching or exporting logic.
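A minimal boto3 sketch of option B follows; the instance identifiers are hypothetical, and the replica takes a few minutes to become available before its endpoint can be used.

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the primary instance (identifiers are hypothetical).
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="reporting-replica",
    SourceDBInstanceIdentifier="primary-database",
)

# Once the replica is available, point the reporting script at its endpoint
# instead of the primary's endpoint.
replica = rds.describe_db_instances(DBInstanceIdentifier="reporting-replica")["DBInstances"][0]
print(replica.get("Endpoint", {}).get("Address"))  # None until the replica is available
```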
Reference: AWS RDS Read Replica Documentation
How can a company detect and notify security teams about PII in S3 buckets?
- A . Use Amazon Macie. Create an EventBridge rule for SensitiveData findings and send an SNS notification.
- B . Use Amazon GuardDuty. Create an EventBridge rule for CRITICAL findings and send an SNS notification.
- C . Use Amazon Macie. Create an EventBridge rule for SensitiveData:S3Object/Personal findings and send an SQS notification.
- D . Use Amazon GuardDuty. Create an EventBridge rule for CRITICAL findings and send an SQS notification.
A
Explanation:
Amazon Macie is purpose-built for detecting PII in S3.
Option A uses EventBridge to filter SensitiveData findings and notify via SNS, meeting the requirements.
Options B and D involve GuardDuty, which is not designed for PII detection.
Option C uses SQS, which is less suitable for immediate notifications.
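As a hedged boto3 sketch of option A: the rule name and topic ARN are hypothetical, and the event pattern reflects the format Macie uses when publishing findings to EventBridge (source aws.macie, detail-type "Macie Finding").

```python
import json

import boto3

events = boto3.client("events")

SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:security-team"  # hypothetical

# Rule that matches Macie sensitive-data findings.
events.put_rule(
    Name="macie-pii-findings",
    EventPattern=json.dumps(
        {
            "source": ["aws.macie"],
            "detail-type": ["Macie Finding"],
            "detail": {"type": [{"prefix": "SensitiveData"}]},
        }
    ),
)

# Notify the security team's SNS topic whenever the rule matches.
events.put_targets(
    Rule="macie-pii-findings",
    Targets=[{"Id": "security-sns", "Arn": SNS_TOPIC_ARN}],
)
```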
A telemarketing company is designing its customer call center functionality on AWS. The company needs a solution that provides multiple-speaker recognition and generates transcript files. The company wants to query the transcript files to analyze the business patterns. The transcript files must be stored for 7 years for auditing purposes.
Which solution will meet these requirements?
- A . Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use machine learning models for transcript file analysis.
- B . Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena for transcript file analysis.
- C . Use Amazon Translate for multiple speaker recognition. Store the transcript files in Amazon Redshift. Use SQL queries for transcript file analysis.
- D . Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use Amazon Textract for transcript file analysis.
B
Explanation:
Amazon Transcribe now supports speaker labeling for streaming transcription. Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for you to convert speech to text. In live audio transcription, each stream of audio may contain multiple speakers. Now you can conveniently turn on the ability to label speakers, thus helping to identify who is saying what in the output transcript.
Reference: https://aws.amazon.com/about-aws/whats-new/2020/08/amazon-transcribe-supports-speaker-labeling-streaming-transcription/
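A minimal boto3 sketch of starting a Transcribe batch job with speaker labels enabled; the job name, S3 locations, and speaker count are hypothetical.

```python
import boto3

transcribe = boto3.client("transcribe")

# Start a batch transcription job with speaker labeling enabled.
transcribe.start_transcription_job(
    TranscriptionJobName="call-2024-01-15-0001",
    Media={"MediaFileUri": "s3://call-recordings/call-0001.wav"},
    MediaFormat="wav",
    LanguageCode="en-US",
    OutputBucketName="call-transcripts",  # transcripts land in S3, queryable with Athena
    Settings={
        "ShowSpeakerLabels": True,        # label who said what
        "MaxSpeakerLabels": 2,
    },
)
```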
A company has an application that runs on several Amazon EC2 instances. Each EC2 instance has multiple Amazon Elastic Block Store (Amazon EBS) data volumes attached to it. The application’s EC2 instance configuration and data need to be backed up nightly. The application also needs to be recoverable in a different AWS Region.
Which solution will meet these requirements in the MOST operationally efficient way?
- A . Write an AWS Lambda function that schedules nightly snapshots of the application’s EBS volumes and copies the snapshots to a different Region.
- B . Create a backup plan by using AWS Backup to perform nightly backups. Copy the backups to another Region. Add the application’s EC2 instances as resources.
- C . Create a backup plan by using AWS Backup to perform nightly backups. Copy the backups to another Region. Add the application’s EBS volumes as resources.
- D . Write an AWS Lambda function that schedules nightly snapshots of the application’s EBS volumes and copies the snapshots to a different Availability Zone.
B
Explanation:
The most operationally efficient solution to meet these requirements would be to create a backup plan by using AWS Backup to perform nightly backups and copying the backups to another Region. Adding the application’s EBS volumes as resources will ensure that the application’s EC2 instance configuration and data are backed up, and copying the backups to another Region will ensure that the application is recoverable in a different AWS Region.
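A minimal boto3 sketch of option B, with hypothetical vault, role, and instance ARNs:

```python
import boto3

backup = boto3.client("backup")

DEST_VAULT_ARN = "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault"  # hypothetical
BACKUP_ROLE_ARN = "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole"

# Nightly backup rule that also copies each recovery point to another Region.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "nightly-ec2-backups",
        "Rules": [
            {
                "RuleName": "nightly",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 3 * * ? *)",  # every night at 03:00 UTC
                "CopyActions": [{"DestinationBackupVaultArn": DEST_VAULT_ARN}],
            }
        ],
    }
)

# Assign the application's EC2 instances; instance backups include the attached EBS volumes.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "app-ec2-instances",
        "IamRoleArn": BACKUP_ROLE_ARN,
        "Resources": ["arn:aws:ec2:us-east-1:111122223333:instance/i-0abc123def4567890"],
    },
)
```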
A company is migrating a new application from an on-premises data center to a new VPC in the AWS Cloud. The company has multiple AWS accounts and VPCs that share many subnets and applications. The company wants to have fine-grained access control for the new application. The company wants to ensure that all network resources across accounts and VPCs that are granted permission to access the new application can access the application.
Which solution will meet these requirements?
- A . Set up a VPC peering connection for each VPC that needs access to the new application VPC. Update route tables in each VPC to enable connectivity.
- B . Deploy a transit gateway in the account that hosts the new application. Share the transit gateway with each account that needs to connect to the application. Update route tables in the VPC that hosts the new application and in the transit gateway to enable connectivity.
- C . Use an AWS PrivateLink endpoint service to make the new application accessible to other VPCs. Control access to the application by using an endpoint policy.
- D . Use an Application Load Balancer (ALB) to expose the new application to the internet. Configure authentication and authorization processes to ensure that only specified VPCs can access the application.
C
Explanation:
AWS PrivateLink is the most suitable solution for providing fine-grained access control while allowing multiple VPCs, potentially across multiple accounts, to access the new application. This approach offers the following advantages:
Fine-grained control: Endpoint policies can restrict access to specific services or principals.
No need for route table updates: Unlike VPC peering or transit gateways, AWS PrivateLink does not require complex route table management.
Scalable architecture: PrivateLink scales to support traffic from multiple VPCs.
Secure connectivity: Ensures private connectivity over the AWS network, without exposing resources to the internet.
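As a sketch of how option C could be provisioned with boto3: the NLB ARN, VPC ID, and subnet ID are hypothetical, and granting consumer accounts permission (modify_vpc_endpoint_service_permissions) and attaching an endpoint policy are omitted for brevity.

```python
import boto3

ec2 = boto3.client("ec2")

# Provider side: publish the application (fronted by an NLB) as an endpoint service.
NLB_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
    "loadbalancer/net/app-nlb/0123456789abcdef"  # hypothetical
)

service = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[NLB_ARN],
    AcceptanceRequired=True,  # each consumer connection must be approved
)
service_name = service["ServiceConfiguration"]["ServiceName"]

# Consumer side (run in each account/VPC that was granted permission):
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    ServiceName=service_name,
    VpcId="vpc-0123456789abcdef0",          # hypothetical consumer VPC
    SubnetIds=["subnet-0123456789abcdef0"],
)
```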
Why Other Options Are Not Ideal:
Option A: VPC peering is not scalable when connecting multiple VPCs or accounts, and route table management becomes complex as the number of VPCs increases.
Option B: While transit gateways provide scalable VPC connectivity, they allow connectivity but do not inherently restrict access to specific applications, so they are not ideal for fine-grained access control.
Option D: Exposing the application through an ALB over the internet is a security risk and does not align with the requirement to use private network resources.
Reference:
AWS PrivateLink: AWS Documentation – PrivateLink
AWS Networking Services Comparison: AWS Whitepaper – Networking Services
Organizers for a global event want to put daily reports online as static HTML pages. The pages are expected to generate millions of views from users around the world. The files are stored in an Amazon S3 bucket. A solutions architect has been asked to design an efficient and effective solution.
Which action should the solutions architect take to accomplish this?
- A . Generate presigned URLs for the files.
- B . Use cross-Region replication to all Regions.
- C . Use the geoproximity feature of Amazon Route 53.
- D . Use Amazon CloudFront with the S3 bucket as its origin.
D
Explanation:
Amazon CloudFront is a content delivery network (CDN) that speeds up the delivery of static and dynamic web content, such as HTML pages, images, and videos. By using CloudFront, the HTML pages will be served to users from the edge location that is closest to them, resulting in faster delivery and a better user experience. CloudFront can also handle the high traffic and large number of requests expected for the global event, ensuring that the HTML pages are available and accessible to users around the world.
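A minimal boto3 sketch of option D; the bucket name is hypothetical, and the CachePolicyId shown is the managed CachingOptimized policy.

```python
import time

import boto3

cloudfront = boto3.client("cloudfront")

# Hypothetical bucket used as the origin.
bucket_domain = "daily-reports-bucket.s3.amazonaws.com"

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # must be unique per request
        "Comment": "Static daily report pages",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "reports-s3-origin",
                    "DomainName": bucket_domain,
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "reports-s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Managed "CachingOptimized" cache policy.
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
)
print(response["Distribution"]["DomainName"])  # serve the pages from this CDN domain
```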
A company runs database workloads on AWS that are the backend for the company’s customer portals. The company runs a Multi-AZ database cluster on Amazon RDS for PostgreSQL.
The company needs to implement a 30-day backup retention policy. The company currently has both automated RDS backups and manual RDS backups. The company wants to maintain both types of existing RDS backups that are less than 30 days old.
Which solution will meet these requirements MOST cost-effectively?
- A . Configure the RDS backup retention policy to 30 days for automated backups by using AWS Backup. Manually delete manual backups that are older than 30 days.
- B . Disable RDS automated backups. Delete automated backups and manual backups that are older than 30 days. Configure the RDS backup retention policy to 30 days for automated backups.
- C . Configure the RDS backup retention policy to 30 days for automated backups. Manually delete manual backups that are older than 30 days.
- D . Disable RDS automated backups. Delete automated backups and manual backups that are older than 30 days automatically by using AWS CloudFormation. Configure the RDS backup retention policy to 30 days for automated backups.
A
Explanation:
Setting the RDS backup retention policy to 30 days for automated backups through AWS Backup allows the company to retain backups cost-effectively. Manual backups, however, are not automatically managed by RDS’s retention policy, so they need to be manually deleted if they are older than 30 days to avoid unnecessary storage costs.
Key AWS features:
Automated Backups: Can be configured with a retention policy of up to 35 days, ensuring that older automated backups are deleted automatically.
Manual Backups: These are not subject to the automated retention policy and must be manually managed to avoid extra costs.
AWS Documentation: AWS recommends using backup retention policies for automated backups while manually managing manual backups.
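A hedged boto3 sketch of the two steps the explanation describes, assuming a Multi-AZ DB cluster with a hypothetical identifier: set the automated retention to 30 days, then remove manual snapshots older than 30 days.

```python
from datetime import datetime, timedelta, timezone

import boto3

rds = boto3.client("rds")

CLUSTER_ID = "customer-portal-cluster"  # hypothetical identifier

# Keep automated backups for 30 days.
rds.modify_db_cluster(
    DBClusterIdentifier=CLUSTER_ID,
    BackupRetentionPeriod=30,
    ApplyImmediately=True,
)

# One-off cleanup of manual snapshots older than 30 days.
cutoff = datetime.now(timezone.utc) - timedelta(days=30)
snapshots = rds.describe_db_cluster_snapshots(
    DBClusterIdentifier=CLUSTER_ID, SnapshotType="manual"
)["DBClusterSnapshots"]
for snap in snapshots:
    if snap["SnapshotCreateTime"] < cutoff:
        rds.delete_db_cluster_snapshot(
            DBClusterSnapshotIdentifier=snap["DBClusterSnapshotIdentifier"]
        )
```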
A company uses an Amazon CloudFront distribution to serve content pages for its website. The company needs to ensure that clients use a TLS certificate when accessing the company’s website. The company wants to automate the creation and renewal of the TLS certificates.
Which solution will meet these requirements with the MOST operational efficiency?
- A . Use a CloudFront security policy to create a certificate.
- B . Use a CloudFront origin access control (OAC) to create a certificate.
- C . Use AWS Certificate Manager (ACM) to create a certificate. Use DNS validation for the domain.
- D . Use AWS Certificate Manager (ACM) to create a certificate. Use email validation for the domain.
C
Explanation:
Understanding the Requirement: The company needs to ensure clients use a TLS certificate when accessing the website and automate the creation and renewal of TLS certificates.
Analysis of Options:
CloudFront Security Policy: Not applicable for creating certificates.
CloudFront Origin Access Control (OAC): Controls access to origins, not relevant for TLS certificate creation.
AWS Certificate Manager (ACM) with DNS Validation: Provides automated certificate management, including creation and renewal, with minimal manual intervention. DNS validation is automated and does not require manual intervention as email validation does.
AWS Certificate Manager (ACM) with Email Validation: Requires manual intervention to approve validation emails, which increases operational effort.
Best Solution:
AWS Certificate Manager (ACM) with DNS Validation: Ensures automated and efficient certificate management with the least operational effort.
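A minimal boto3 sketch of option C with a hypothetical domain; once the returned CNAME record exists in DNS, ACM issues and renews the certificate automatically.

```python
import boto3

acm = boto3.client("acm")

# Request a public certificate with DNS validation (domain is hypothetical).
cert = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
)

# ACM returns the CNAME record that must exist in DNS
# (the record can take a few seconds to appear after the request).
details = acm.describe_certificate(CertificateArn=cert["CertificateArn"])
for option in details["Certificate"]["DomainValidationOptions"]:
    print(option.get("ResourceRecord"))
```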
Reference: AWS Certificate Manager (ACM)
DNS Validation in ACM
A company stores data in an on-premises Oracle relational database. The company needs to make the data available in Amazon Aurora PostgreSQL for analysis. The company uses an AWS Site-to-Site VPN connection to connect its on-premises network to AWS.
The company must capture the changes that occur to the source database during the migration to Aurora PostgreSQL.
Which solution will meet these requirements?
- A . Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to Aurora PostgreSQL schema. Use the AWS Database Migration Service (AWS DMS) full-load migration task to migrate the data.
- B . Use AWS DataSync to migrate the data to an Amazon S3 bucket. Import the S3 data to Aurora PostgreSQL by using the Aurora PostgreSQL aws_s3 extension.
- C . Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to Aurora PostgreSQL schema. Use AWS Database Migration Service (AWS DMS) to migrate the existing data and replicate the ongoing changes.
- D . Use an AWS Snowball device to migrate the data to an Amazon S3 bucket. Import the S3 data to Aurora PostgreSQL by using the Aurora PostgreSQL aws_s3 extension.
C
Explanation:
For the migration of data from an on-premises Oracle database to Amazon Aurora PostgreSQL, this solution effectively handles schema conversion, data migration, and ongoing data replication.
AWS Schema Conversion Tool (SCT): SCT is used to convert the Oracle database schema to a format compatible with Aurora PostgreSQL. This tool automatically converts the database schema and code
objects, like stored procedures, to the target database engine.
AWS Database Migration Service (DMS): DMS is employed to perform the data migration. It supports both full-load migrations (for initial data transfer) and continuous replication of ongoing changes (Change Data Capture, or CDC). This ensures that any updates to the Oracle database during the migration are captured and applied to the Aurora PostgreSQL database, minimizing downtime.
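A minimal boto3 sketch of the DMS task in option C; the endpoint and replication instance ARNs are hypothetical and are assumed to have been created already.

```python
import json

import boto3

dms = boto3.client("dms")

# Hypothetical ARNs for the Oracle source, Aurora PostgreSQL target, and the
# replication instance that reaches on premises over the Site-to-Site VPN.
SOURCE_ENDPOINT_ARN = "arn:aws:dms:us-east-1:111122223333:endpoint:oracle-source"
TARGET_ENDPOINT_ARN = "arn:aws:dms:us-east-1:111122223333:endpoint:aurora-postgresql-target"
REPLICATION_INSTANCE_ARN = "arn:aws:dms:us-east-1:111122223333:rep:migration-instance"

table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

# Full load of existing data plus ongoing change data capture (CDC).
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora",
    SourceEndpointArn=SOURCE_ENDPOINT_ARN,
    TargetEndpointArn=TARGET_ENDPOINT_ARN,
    ReplicationInstanceArn=REPLICATION_INSTANCE_ARN,
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
```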
Why Not Other Options?
Option A (SCT + DMS full-load only): This option does not capture ongoing changes, which is crucial for a live database migration to ensure data consistency.
Option B (DataSync + S3): AWS DataSync is more suited for file transfers rather than database migrations, and it doesn’t support ongoing change replication.
Option D (Snowball + S3): Snowball is typically used for large-scale data transfers that don’t require continuous synchronization, making it less suitable for this scenario where ongoing changes must be captured.
Reference:
AWS Schema Conversion Tool – Guidance on using SCT for database schema conversions.
AWS Database Migration Service – Detailed documentation on using DMS for data migrations and ongoing replication.