Practice Free SAA-C03 Exam Online Questions
A company has a batch processing application that runs every day. The process typically takes an average 3 hours to complete. The application can handle interruptions and can resume the process after a restart. Currently, the company runs the application on Amazon EC2 On-Demand Instances. The company wants to optimize costs while maintaining the same performance level.
Which solution will meet these requirements MOST cost-effectively?
- A . Purchase a 1-year EC2 Instance Savings Plan for the appropriate instance family and size to meet the requirements of the application.
- B . Use EC2 On-Demand Capacity Reservations based on the appropriate instance family and size to meet the requirements of the application. Run the EC2 instances in an Auto Scaling group.
- C . Determine the appropriate instance family and size to meet the requirements of the application. Convert the application to run on AWS Batch with EC2 On-Demand Instances. Purchase a 1-year Compute Savings Plan.
- D . Determine the appropriate instance family and size to meet the requirements of the application. Convert the application to run on AWS Batch with EC2 Spot Instances.
D
Explanation:
Since the workload is interruptible and can resume after restarts, Amazon EC2 Spot Instances are the most cost-effective option, offering savings of up to 90% compared to On-Demand. Running the batch workload on AWS Batch with Spot Instances allows automatic job queue management, interruption handling, and scheduling.
Options A and C reduce costs through committed pricing, but they still pay for On-Demand capacity and cannot match Spot pricing for an interruptible workload.
Option B (Capacity Reservations) adds cost instead of reducing it. Therefore, the most cost-optimized solution is D: AWS Batch with EC2 Spot Instances.
Reference:
• AWS Batch User Guide ― Using Spot Instances with AWS Batch
• AWS Well-Architected Framework ― Cost Optimization Pillar
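For illustration, the boto3 sketch below sets up the Spot-backed pattern that answer D describes: a managed AWS Batch compute environment using Spot capacity plus a job queue. The account ID, role ARNs, subnet, security group, and instance family are placeholder assumptions, not values from the question.

```python
import boto3

batch = boto3.client("batch")

# Managed compute environment that launches Spot capacity for the daily batch run.
# All ARNs and network IDs below are placeholders for illustration only.
batch.create_compute_environment(
    computeEnvironmentName="daily-batch-spot",
    type="MANAGED",
    state="ENABLED",
    computeResources={
        "type": "SPOT",
        "allocationStrategy": "SPOT_CAPACITY_OPTIMIZED",  # favors pools least likely to be interrupted
        "minvCpus": 0,
        "maxvCpus": 64,
        "instanceTypes": ["m5"],  # assumed family; replace with the sizing analysis result
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
        "instanceRole": "arn:aws:iam::111122223333:instance-profile/ecsInstanceRole",
    },
    serviceRole="arn:aws:iam::111122223333:role/AWSBatchServiceRole",
)

# A job queue then points at the compute environment; jobs resume after Spot interruptions.
batch.create_job_queue(
    jobQueueName="daily-batch-queue",
    state="ENABLED",
    priority=1,
    computeEnvironmentOrder=[{"order": 1, "computeEnvironment": "daily-batch-spot"}],
)
```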
A finance company uses backup software to back up its data to physical tape storage on-premises. To comply with regulations, the company needs to store the data for 7 years. The company must be able to restore archived data within one week when necessary.
The company wants to migrate the backup data to AWS to reduce costs. The company does not want to change the current backup software.
Which solution will meet these requirements MOST cost-effectively?
- A . Use AWS Storage Gateway Tape Gateway to copy the data to virtual tapes. Use AWS DataSync to migrate the virtual tapes to the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Change the target of the backup software to S3 Standard-IA.
- B . Convert the physical tapes to virtual tapes. Use AWS DataSync to migrate the virtual tapes to Amazon S3 Glacier Flexible Retrieval. Change the target of the backup software to S3 Glacier Flexible Retrieval.
- C . Use AWS Storage Gateway Tape Gateway to copy the data to virtual tapes. Migrate the virtual tapes to Amazon S3 Glacier Deep Archive. Change the target of the backup software to the virtual tapes.
- D . Convert the physical tapes to virtual tapes. Use AWS Snowball Edge storage-optimized devices to migrate the virtual tapes to Amazon S3 Glacier Flexible Retrieval. Change the target of the backup software to S3 Glacier Flexible Retrieval.
C
Explanation:
AWS Storage Gateway Tape Gateway provides a seamless way to migrate backup data to AWS without requiring changes to the backup software. Migrating to S3 Glacier Deep Archive ensures long-term, cost-effective storage for data that rarely needs retrieval.
Option A: S3 Standard-IA is more expensive than the Glacier storage classes for long-term archival storage.
Options B and D: S3 Glacier Flexible Retrieval costs more than S3 Glacier Deep Archive for data that is rarely retrieved, and converting tapes manually and pointing the backup software directly at an S3 storage class would change the existing backup workflow.
Reference:
• AWS Storage Gateway User Guide ― Tape Gateway
• Amazon S3 Glacier storage classes
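As a sketch of how the Deep Archive pool is selected, the boto3 call below provisions virtual tapes on an existing Tape Gateway; the gateway ARN, tape size, tape count, and barcode prefix are illustrative assumptions.

```python
import uuid
import boto3

sgw = boto3.client("storagegateway")

# Create virtual tapes whose archived copies land in S3 Glacier Deep Archive.
# The gateway ARN and barcode prefix are placeholders for illustration.
sgw.create_tapes(
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B",
    TapeSizeInBytes=2500 * 1024**3,   # 2.5 TiB virtual tapes (assumed size)
    ClientToken=str(uuid.uuid4()),    # idempotency token
    NumTapesToCreate=5,
    TapeBarcodePrefix="FIN",
    PoolId="DEEP_ARCHIVE",            # archived tapes go to S3 Glacier Deep Archive
)
```

The backup software keeps writing to the gateway's virtual tape library exactly as it wrote to physical tapes, which is why no software change is needed.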
A company has an application that runs on Amazon EC2 instances within a private subnet in a VPC. The instances access data in an Amazon S3 bucket in the same AWS Region. The VPC contains a NAT gateway in a public subnet to access the S3 bucket. The company wants to reduce costs by replacing the NAT gateway without compromising security or redundancy.
Which solution meets these requirements?
- A . Replace the NAT gateway with a NAT instance.
- B . Replace the NAT gateway with an internet gateway.
- C . Replace the NAT gateway with a gateway VPC endpoint.
- D . Replace the NAT gateway with an AWS Direct Connect connection.
C
Explanation:
A VPC gateway endpoint for Amazon S3 enables private connectivity to S3 without routing traffic through a NAT gateway or over the internet, eliminating NAT gateway costs. This solution is secure and redundant, as S3 endpoints are highly available by design.
AWS Documentation Extract:
"A gateway VPC endpoint enables you to privately connect your VPC to supported AWS services without requiring a NAT gateway or internet gateway." (Source: Amazon VPC documentation, Gateway Endpoints)
A: NAT instances still incur operational overhead and costs.
B: Internet gateway exposes resources and does not provide private access.
D: Direct Connect is for hybrid networking, not for cost-efficient S3 access.
Reference: AWS Certified Solutions Architect ― Official Study Guide, VPC Networking and Endpoints.
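A minimal boto3 sketch of the gateway endpoint, assuming a us-east-1 VPC; the VPC and route table IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3: traffic to S3 stays on the AWS network, so the
# NAT gateway route is no longer needed for S3 access.
# The VPC and route table IDs are placeholders for illustration.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # route table of the private subnet
)
```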
How can a company detect and notify security teams about PII in S3 buckets?
- A . Use Amazon Macie. Create an EventBridge rule for SensitiveData findings and send an SNS notification.
- B . Use Amazon GuardDuty. Create an EventBridge rule for CRITICAL findings and send an SNS notification.
- C . Use Amazon Macie. Create an EventBridge rule for SensitiveData:S3Object/Personal findings and send an SQS notification.
- D . Use Amazon GuardDuty. Create an EventBridge rule for CRITICAL findings and send an SQS notification.
A
Explanation:
Amazon Macie is purpose-built for detecting PII in S3.
Option A uses EventBridge to filter SensitiveData findings and notify the security team via SNS, meeting the requirements.
Options B and D involve GuardDuty, which detects threats and anomalous activity but is not designed for PII detection.
Option C sends findings to SQS, which queues messages for polling rather than pushing notifications, so it is less suitable for alerting a security team.
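A sketch of the EventBridge rule and SNS target with boto3, assuming Macie is already enabled; the topic ARN is a placeholder, and the topic's resource policy must separately allow events.amazonaws.com to publish.

```python
import json
import boto3

events = boto3.client("events")

TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:pii-alerts"  # placeholder topic

# Match Macie sensitive-data findings, e.g. SensitiveData:S3Object/Personal.
pattern = {
    "source": ["aws.macie"],
    "detail-type": ["Macie Finding"],
    "detail": {"type": [{"prefix": "SensitiveData"}]},
}

events.put_rule(
    Name="macie-pii-findings",
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)
events.put_targets(
    Rule="macie-pii-findings",
    Targets=[{"Id": "sns-security-team", "Arn": TOPIC_ARN}],
)
```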
A company is building a serverless application that processes large volumes of data from a mobile app. The application uses an AWS Lambda function to process the data and store the data in an Amazon DynamoDB table.
The company needs to ensure that the application can recover from failures and continue processing data without losing any records.
Which solution will meet these requirements?
- A . Configure the Lambda function to use a dead-letter queue with an Amazon Simple Queue Service (Amazon SQS) queue. Configure Lambda to retry failed records from the dead-letter queue. Use a retry mechanism by implementing an exponential backoff algorithm.
- B . Configure the Lambda function to read records from Amazon Data Firehose. Replay the Firehose records in case of any failures.
- C . Use Amazon OpenSearch Service to store failed records. Configure AWS Lambda to retry failed records from OpenSearch Service. Use Amazon EventBridge to orchestrate the retry logic.
- D . Use Amazon Simple Notification Service (Amazon SNS) to store the failed records. Configure Lambda to retry failed records from the SNS topic. Use Amazon API Gateway to orchestrate the retry calls.
A
Explanation:
Dead-letter queues (DLQs) with Amazon SQS let a Lambda function offload events that still fail after its built-in retries so they can be inspected or reprocessed later. Combining the DLQ with retry logic that uses exponential backoff follows best practices for fault-tolerant serverless architectures and helps ensure that no records are lost to transient errors.
Reference: AWS Documentation ― Lambda Error Handling and Dead-Letter Queues
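Two pieces could implement this, sketched below with boto3: attaching an SQS dead-letter queue to the function and retrying writes with exponential backoff. The function name, queue ARN, table name, and the process_record helper are all hypothetical placeholders.

```python
import time
import boto3

lambda_client = boto3.client("lambda")

# Route events that still fail after Lambda's automatic retries to an SQS DLQ.
# The function name and queue ARN are placeholders for illustration.
lambda_client.update_function_configuration(
    FunctionName="order-processor",
    DeadLetterConfig={"TargetArn": "arn:aws:sqs:us-east-1:111122223333:order-dlq"},
)

def process_record(record):
    """Hypothetical persistence step: write one record to DynamoDB (placeholder table)."""
    boto3.resource("dynamodb").Table("ProcessedData").put_item(Item=record)

def write_with_backoff(record, max_attempts=5):
    """Retry the write with exponential backoff before surfacing the failure."""
    for attempt in range(max_attempts):
        try:
            return process_record(record)
        except Exception:
            if attempt == max_attempts - 1:
                raise                 # final failure is retried by Lambda, then lands in the DLQ
            time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
```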
A company runs all its business applications in the AWS Cloud. The company uses AWS Organizations to manage multiple AWS accounts.
A solutions architect needs to review all permissions granted to IAM users to determine which users have more permissions than required.
Which solution will meet these requirements with the LEAST administrative overhead?
- A . Use Network Access Analyzer to review all access permissions in the company’s AWS accounts.
- B . Create an Amazon CloudWatch alarm that activates when an IAM user creates or modifies resources in an AWS account.
- C . Use AWS Identity and Access Management (IAM) Access Analyzer to review all the company’s resources and accounts.
- D . Use Amazon Inspector to find vulnerabilities in existing IAM policies.
C
Explanation:
IAM Access Analyzer analyzes permissions granted using policies to determine what resources are shared with an external entity, and helps identify excessive permissions or least privilege violations across all accounts in an AWS Organization. It is specifically designed for reviewing and refining IAM permissions with minimal administrative effort.
AWS Documentation Extract:
“IAM Access Analyzer helps you identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity. You can also use Access Analyzer policy checks to refine permissions and implement least privilege.” (Source: IAM Access Analyzer documentation)
A: Network Access Analyzer is for VPC network access analysis, not IAM permissions.
B: CloudWatch alarms are not suitable for detailed permission analysis.
D: Amazon Inspector is for security vulnerability assessment, not IAM policy review.
Reference: AWS Certified Solutions Architect ― Official Study Guide, IAM Security Analysis.
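A minimal boto3 sketch, assuming it runs from the Organizations management (or delegated administrator) account so an organization-wide analyzer can be created; the analyzer name is an arbitrary placeholder.

```python
import boto3

aa = boto3.client("accessanalyzer")

# Organization-scoped analyzer; must be created from the management or
# delegated administrator account. The name is a placeholder.
analyzer = aa.create_analyzer(analyzerName="org-access-review", type="ORGANIZATION")

# Findings then surface resources shared more broadly than intended.
findings = aa.list_findings(analyzerArn=analyzer["arn"])
for finding in findings["findings"]:
    print(finding["id"], finding["status"], finding.get("resource"))
```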
An ecommerce company has an application that collects order-related information from customers. The company uses one Amazon DynamoDB table to store customer home addresses, phone numbers, and email addresses. Customers can check out without creating an account. The application copies the customer information to a second DynamoDB table if a customer does create an account.
The company requires a solution to delete personally identifiable information (PII) for customers who did not create an account within 28 days.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an AWS Lambda function to delete items from the first DynamoDB table that have a delivery date more than 28 days in the past. Use a scheduled Amazon EventBridge rule to run the Lambda function every day.
- B . Update the application to store PII in an Amazon S3 bucket. Create an S3 Lifecycle rule to expire the objects after 28 days. Move the data to DynamoDB when a user creates an account.
- C . Launch an Amazon EC2 instance. Configure a daily cron job to run on the instance. Configure the cron job to use AWS CLI commands to delete items from DynamoDB.
- D . Use a createdAt timestamp to set TTL for data in the first DynamoDB table to 28 days.
D
Explanation:
DynamoDB has a built-in feature called Time to Live (TTL) which automatically deletes expired items without manual intervention. This requires adding a timestamp attribute and setting a TTL on the table. This is the lowest operational overhead approach.
“You can use DynamoDB TTL to automatically delete items after a specified time, reducing storage costs and administrative overhead.”
(Source: Amazon DynamoDB Developer Guide, Time to Live)
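A sketch of the TTL setup with boto3; the table name and the expires_at attribute are assumptions. The application stamps each guest-checkout item with an epoch timestamp 28 days in the future, and DynamoDB deletes the item after that time with no further operational work.

```python
import time
import boto3

ddb = boto3.client("dynamodb")
TABLE = "GuestCheckoutInfo"  # placeholder table name

# One-time setup: enable TTL on the chosen attribute.
ddb.update_time_to_live(
    TableName=TABLE,
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# When writing a guest item, set the expiry 28 days from now (epoch seconds).
ddb.put_item(
    TableName=TABLE,
    Item={
        "customer_id": {"S": "guest-1234"},
        "email": {"S": "guest@example.com"},
        "expires_at": {"N": str(int(time.time()) + 28 * 24 * 3600)},
    },
)
```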
A company is designing the architecture for a new mobile app that uses the AWS Cloud. The company uses organizational units (OUs) in AWS Organizations to manage its accounts. The company wants to tag Amazon EC2 instances with a data sensitivity tag that uses the values sensitive and nonsensitive. IAM identities must not be able to delete a tag or create instances without a tag.
Which combination of steps will meet these requirements? (Select TWO.)
- A . In Organizations, create a new tag policy that specifies the data sensitivity tag key and the required values. Enforce the tag values for the EC2 instances. Attach the tag policy to the appropriate OU.
- B . In Organizations, create a new service control policy (SCP) that specifies the data sensitivity tag key and the required tag values. Enforce the tag values for the EC2 instances. Attach the SCP to the appropriate OU.
- C . Create a tag policy to deny running instances when a tag key is not specified. Create another tag policy that prevents identities from deleting tags. Attach the tag policies to the appropriate OU.
- D . Create a service control policy (SCP) to deny creating instances when a tag key is not specified. Create another SCP that prevents identities from deleting tags. Attach the SCPs to the appropriate OU.
- E . Create an AWS Config rule to check if EC2 instances use the data sensitivity tag and the specified values. Configure an AWS Lambda function to delete the resource if a noncompliant resource is found.
A, D
Explanation:
To meet the requirements for tagging and preventing instance creation or deletion without proper tags, the company can use a combination of AWS Organizations tag policies and service control policies (SCPs).
Tag Policies: These enforce specific tag values across resources. Creating a tag policy with the required values (sensitive, nonsensitive) and attaching it to the appropriate organizational unit (OU) ensures consistency in tagging.
SCPs: SCPs enforce compliance by denying instance creation when the required tag is missing and by preventing tag deletion. These policies control actions at the account level across the organization.
Key AWS features: Tag policies help standardize tags across accounts, and SCPs enforce governance by restricting actions that violate the policies.
AWS Documentation: AWS best practices recommend using tag policies and SCPs to enforce compliance across multiple accounts within AWS Organizations.
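As one possible shape for the SCP half (answer D), the boto3 sketch below denies launching instances without the tag and denies tag deletion, then attaches the policy to an OU. The tag key DataSensitivity, the OU ID, and the policy name are assumptions for illustration.

```python
import json
import boto3

org = boto3.client("organizations")

# Deny RunInstances when the DataSensitivity tag is absent, and deny DeleteTags.
# The tag key and OU ID are placeholders for illustration.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyRunWithoutSensitivityTag",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"Null": {"aws:RequestTag/DataSensitivity": "true"}},
        },
        {
            "Sid": "DenyTagDeletion",
            "Effect": "Deny",
            "Action": "ec2:DeleteTags",
            "Resource": "*",
        },
    ],
}

policy = org.create_policy(
    Name="require-data-sensitivity-tag",
    Description="Block untagged EC2 launches and tag deletion",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",  # placeholder OU ID
)
```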
A company runs a three-tier web application in a VPC on AWS. The company deployed an Application Load Balancer (ALB) in a public subnet. The web tier and application tier Amazon EC2 instances are deployed in a private subnet. The company uses a self-managed MySQL database that runs on EC2 instances in an isolated private subnet for the database tier.
The company wants a mechanism that will give a DevOps team the ability to use SSH to access all the servers. The company also wants to have a centrally managed log of all connections made to the servers.
Which combination of solutions will meet these requirements with the MOST operational efficiency? (Select TWO.)
- A . Create a bastion host in the public subnet. Configure security groups in the public, private, and isolated subnets to allow SSH access.
- B . Create an interface VPC endpoint for AWS Systems Manager Session Manager. Attach the endpoint to the VPC.
- C . Create an IAM policy that grants access to AWS Systems Manager Session Manager. Attach the IAM policy to the EC2 instances.
- D . Create a gateway VPC endpoint for AWS Systems Manager Session Manager. Attach the endpoint to the VPC.
- E . Attach the AmazonSSMManagedInstanceCore AWS managed IAM policy to all the EC2 instance roles.
B, E
Explanation:
AWS Systems Manager Session Manager allows secure, auditable SSH-like access to EC2 instances without the need to open SSH ports or manage bastion hosts. For instances in private subnets with no internet path, this requires interface VPC endpoints for Systems Manager (not a gateway endpoint, which exists only for S3 and DynamoDB).
The EC2 instances must have the AmazonSSMManagedInstanceCore AWS managed policy attached to their IAM instance roles to allow Systems Manager operations.
With Session Manager, all session activity can be logged centrally to Amazon CloudWatch Logs or Amazon S3, satisfying the audit requirement and improving operational efficiency over manual SSH and bastion host configurations.
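A sketch of the endpoint and IAM pieces with boto3; the VPC, subnet, security group, and role name are placeholders, and the usual set of Systems Manager interface endpoints (ssm, ssmmessages, ec2messages) is created for completeness.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
iam = boto3.client("iam")

# Interface endpoints Session Manager needs to reach instances in private subnets.
# VPC, subnet, and security group IDs are placeholders for illustration.
for service in ("ssm", "ssmmessages", "ec2messages"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName=f"com.amazonaws.us-east-1.{service}",
        SubnetIds=["subnet-0123456789abcdef0"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        PrivateDnsEnabled=True,
    )

# Grant each instance role the AWS managed policy Session Manager requires.
iam.attach_role_policy(
    RoleName="app-tier-instance-role",  # placeholder role name
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)
```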
