Practice Free SAA-C03 Exam Online Questions
An ecommerce application uses a PostgreSQL database that runs on an Amazon EC2 instance. During a monthly sales event, database usage increases and causes database connection issues for the application. The traffic is unpredictable for subsequent monthly sales events, which impacts the sales forecast. The company needs to maintain performance when there is an unpredictable increase in traffic.
Which solution resolves this issue in the MOST cost-effective way?
- A . Migrate the PostgreSQL database to Amazon Aurora Serverless v2.
- B . Enable auto scaling for the PostgreSQL database on the EC2 instance to accommodate increased usage.
- C . Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a larger instance type
- D . Migrate the PostgreSQL database to Amazon Redshift to accommodate increased usage
A
Explanation:
Amazon Aurora Serverless v2 is a cost-effective solution that can automatically scale the database capacity up and down based on the application's needs. It can handle unpredictable traffic spikes without requiring any provisioning or management of database instances. It is compatible with PostgreSQL and offers high performance, availability, and durability.
Reference: AWS Ramp-Up Guide: Architect, page 31; AWS Certified Solutions Architect – Associate exam guide, page 9.
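For context, Serverless v2 capacity is configured on the cluster itself; below is a minimal boto3 sketch, where the identifiers, engine version, and capacity bounds are placeholder assumptions:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create an Aurora PostgreSQL cluster with Serverless v2 scaling.
# All identifiers and the capacity range are illustrative assumptions.
rds.create_db_cluster(
    DBClusterIdentifier="ecommerce-cluster",
    Engine="aurora-postgresql",
    EngineVersion="15.4",
    MasterUsername="postgres",
    ManageMasterUserPassword=True,  # password stored in Secrets Manager
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,   # ACUs during quiet periods
        "MaxCapacity": 64.0,  # headroom for the monthly sales spike
    },
)

# Serverless v2 writers and readers use the special db.serverless instance class.
rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-writer",
    DBClusterIdentifier="ecommerce-cluster",
    Engine="aurora-postgresql",
    DBInstanceClass="db.serverless",
)
```

Capacity is billed in Aurora capacity units (ACUs), so the cluster scales down toward MinCapacity between sales events instead of paying for a fixed instance size.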
A new employee has joined a company as a deployment engineer. The deployment engineer will be using AWS CloudFormation templates to create multiple AWS resources. A solutions architect wants the deployment engineer to perform job activities while following the principle of least privilege.
Which steps should the solutions architect do in conjunction to reach this goal? (Select two.)
- A . Have the deployment engineer use AWS account root user credentials for performing AWS CloudFormation stack operations.
- B . Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached.
- C . Create a new IAM user for the deployment engineer and add the IAM user to a group that has the AdministratorAccess IAM policy attached.
- D . Create a new IAM User for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only.
- E . Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM role.
DE
Explanation:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html
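As a hedged sketch of the least-privilege setup from options D and E, using boto3; the policy name and the broad resource scope below are placeholder assumptions, and a real policy would typically also constrain which stacks and underlying resources are allowed:

```python
import json
import boto3

iam = boto3.client("iam")

# Policy that permits CloudFormation actions only (option D's group policy).
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "cloudformation:*",
            "Resource": "*",
        }
    ],
}

iam.create_policy(
    PolicyName="DeploymentEngineerCloudFormationOnly",  # placeholder name
    PolicyDocument=json.dumps(policy_document),
)
```

For option E, the same policy document could instead be attached to an IAM role that the deployment engineer assumes when launching stacks.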
A company is concerned about the security of its public web application due to recent web attacks. The application uses an Application Load Balancer (ALB). A solutions architect must reduce the risk of DDoS attacks against the application.
What should the solutions architect do to meet this requirement?
- A . Add an Amazon Inspector agent to the ALB.
- B . Configure Amazon Macie to prevent attacks.
- C . Enable AWS Shield Advanced to prevent attacks.
- D . Configure Amazon GuardDuty to monitor the ALB.
C
Explanation:
AWS Shield Advanced provides expanded DDoS attack protection for your Amazon EC2 instances, Elastic Load Balancing load balancers, CloudFront distributions, Route 53 hosted zones, and AWS Global Accelerator standard accelerators.
https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html
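As an illustration, enabling Shield Advanced and attaching a protection to the ALB can be done through boto3; the ALB ARN below is a placeholder, and note that Shield Advanced is a paid, account-level subscription with a one-year commitment:

```python
import boto3

shield = boto3.client("shield")

# Subscribe the account to Shield Advanced (account-level, takes no arguments).
shield.create_subscription()

# Explicitly protect the Application Load Balancer (placeholder ARN).
shield.create_protection(
    Name="public-web-alb",
    ResourceArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/my-alb/50dc6c495c0c9188"
    ),
)
```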
What should a solutions architect do to ensure that all objects uploaded to an Amazon S3 bucket are encrypted?
- A . Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set.
- B . Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set to private.
- C . Update the bucket policy to deny if the PutObject does not have an aws:SecureTransport header set to true.
- D . Update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set.
D
Explanation:
https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/
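A minimal boto3 sketch of the deny policy described in that post, assuming a placeholder bucket name; the Null condition matches any PutObject request that omits the x-amz-server-side-encryption header:

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-uploads-bucket"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            # True when the header is absent, so unencrypted uploads are denied.
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```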
A company needs to store data in Amazon S3 and must prevent the data from being changed. The company wants new objects that are uploaded to Amazon S3 to remain unchangeable for a nonspecific amount of time until the company decides to modify the objects. Only specific users in the company’s AWS account can have the ability to delete the objects.
What should a solutions architect do to meet these requirements?
- A . Create an S3 Glacier vault. Apply a write-once, read-many (WORM) vault lock policy to the objects.
- B . Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Set a retention period of 100 years. Use governance mode as the S3 bucket's default retention mode for new objects.
- C . Create an S3 bucket. Use AWS CloudTrail to track any S3 API events that modify the objects. Upon notification, restore the modified objects from any backup versions that the company has.
- D . Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Add a legal hold to the objects. Add the s3:PutObjectLegalHold permission to the IAM policies of users who need to delete the objects.
D
Explanation:
"The Object Lock legal hold operation enables you to place a legal hold on an object version. Like setting a retention period, a legal hold prevents an object version from being overwritten or deleted. However, a legal hold doesn’t have an associated retention period and remains in effect until removed." https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-legal-hold.html
A media company has a multi-account AWS environment in the us-east-1 Region. The company has an Amazon Simple Notification Service (Amazon SNS) topic in a production account that publishes performance metrics. The company has an AWS Lambda function in an administrator account to process and analyze log data.
The Lambda function that is in the administrator account must be invoked by messages from the SNS topic that is in the production account when significant metrics are reported.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Create an IAM resource policy for the Lambda function that allows Amazon SNS to invoke the function. Implement an Amazon Simple Queue Service (Amazon SQS) queue in the administrator account to buffer messages from the SNS topic that is in the production account. Configure the SQS queue to invoke the Lambda function.
- B . Create an IAM policy for the SNS topic that allows the Lambda function to subscribe to the topic.
- C . Use an Amazon EventBridge rule in the production account to capture the SNS topic notifications. Configure the EventBridge rule to forward notifications to the Lambda function that is in the administrator account.
- D . Store performance metrics in an Amazon S3 bucket in the production account. Use Amazon Athena to analyze the metrics from the administrator account.
AB
Explanation:
Requirement Analysis: The Lambda function in the administrator account needs to process messages from an SNS topic in the production account.
IAM Policy for SNS Topic: Allows the Lambda function to subscribe and be invoked by the SNS topic.
SQS Queue for Buffering: Using an SQS queue provides reliable message delivery and buffering between SNS and Lambda, ensuring all messages are processed.
Implementation:
Create an SQS queue in the administrator account.
Set an IAM policy to allow the Lambda function to subscribe to and be invoked by the SNS topic.
Configure the SNS topic to send messages to the SQS queue.
Set up the SQS queue to trigger the Lambda function.
Conclusion: This solution ensures reliable message delivery and processing with appropriate permissions.
Reference
Amazon SNS: Amazon SNS Documentation
Amazon SQS: Amazon SQS Documentation
AWS Lambda: AWS Lambda Documentation
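A condensed boto3 sketch of the wiring from the administrator account; both ARNs and the function name are placeholders, and the production-account topic's resource policy must separately allow the cross-account subscription:

```python
import boto3

TOPIC_ARN = "arn:aws:sns:us-east-1:111111111111:performance-metrics"  # production acct
QUEUE_ARN = "arn:aws:sqs:us-east-1:222222222222:metrics-buffer"       # admin acct

sns = boto3.client("sns", region_name="us-east-1")
lambda_client = boto3.client("lambda", region_name="us-east-1")

# Subscribe the administrator-account queue to the production-account topic.
sns.subscribe(TopicArn=TOPIC_ARN, Protocol="sqs", Endpoint=QUEUE_ARN)

# Let the buffered messages drive the Lambda function.
lambda_client.create_event_source_mapping(
    EventSourceArn=QUEUE_ARN,
    FunctionName="process-metrics",  # placeholder function name
    BatchSize=10,
)
```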
A company containerized a Windows job that runs on .NET 6 Framework in a Windows container. The company wants to run this job in the AWS Cloud. The job runs every 10 minutes. The job's runtime varies between 1 minute and 3 minutes.
Which solution will meet these requirements MOST cost-effectively?
- A . Create an AWS Lambda function based on the container image of the job. Configure Amazon EventBridge to invoke the function every 10 minutes.
- B . Use AWS Batch to create a job that uses AWS Fargate resources. Configure the job scheduling to run every 10 minutes.
- C . Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate to run the job. Create a scheduled task based on the container image of the job to run every 10 minutes.
- D . Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate to run the job. Create a standalone task based on the container image of the job. Use Windows task scheduler to run the job every 10 minutes.
A
Explanation:
AWS Lambda supports container images as a packaging format for functions. You can use existing container development workflows to package and deploy Lambda functions as container images of up to 10 GB in size. You can also use familiar tools such as Docker CLI to build, test, and push your container images to Amazon Elastic Container Registry (Amazon ECR). You can then create an AWS Lambda function based on the container image of your job and configure Amazon EventBridge to invoke the function every 10 minutes using a cron expression. This solution will be cost-effective as you only pay for the compute time you consume when your function runs.
Reference:
https://docs.aws.amazon.com/lambda/latest/dg/images-create.html
https://docs.aws.amazon.com/eventbridge/latest/userguide/run-lambda-schedule.html
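As a sketch, the scheduling half of this answer looks roughly like the following in boto3 (the function name and ARN are placeholders):

```python
import boto3

events = boto3.client("events", region_name="us-east-1")
lambda_client = boto3.client("lambda", region_name="us-east-1")

FUNCTION_NAME = "windows-job"  # placeholder
FUNCTION_ARN = f"arn:aws:lambda:us-east-1:123456789012:function:{FUNCTION_NAME}"

# Rule that fires every 10 minutes.
events.put_rule(Name="run-windows-job", ScheduleExpression="rate(10 minutes)")
events.put_targets(
    Rule="run-windows-job",
    Targets=[{"Id": "windows-job-target", "Arn": FUNCTION_ARN}],
)

# Allow EventBridge to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId="allow-eventbridge-schedule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
)
```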
A company has applications that run in an organization in AWS Organizations. The company outsources operational support of the applications. The company needs to provide access for the external support engineers without compromising security.
The external support engineers need access to the AWS Management Console. The external support engineers also need operating system access to the company’s fleet of Amazon EC2 instances that run Amazon Linux in private subnets.
Which solution will meet these requirements MOST securely?
- A . Confirm that AWS Systems Manager Agent (SSM Agent) is installed on all instances. Assign an instance profile with the necessary policy to connect to Systems Manager. Use AWS IAM Identity Center to provide the external support engineers console access. Use Systems Manager Session Manager to assign the required permissions.
- B . Confirm that AWS Systems Manager Agent (SSM Agent) is installed on all instances. Assign an instance profile with the necessary policy to connect to Systems Manager. Use Systems Manager Session Manager to provide local IAM user credentials in each AWS account to the external support engineers for console access.
- C . Confirm that all instances have a security group that allows SSH access only from the external support engineers' source IP address ranges. Provide local IAM user credentials in each AWS account to the external support engineers for console access. Provide each external support engineer an SSH key pair to log in to the application instances.
- D . Create a bastion host in a public subnet. Set up the bastion host security group to allow access from only the external engineers' IP address ranges. Ensure that all instances have a security group that allows SSH access from the bastion host. Provide each external support engineer an SSH key pair to log in to the application instances. Provide local IAM user credentials to the engineers for console access.
A
Explanation:
This solution provides the most secure access for external support engineers with the least exposure to potential security risks.
AWS Systems Manager (SSM) and Session Manager: Systems Manager Session Manager allows secure and auditable access to EC2 instances without the need to open inbound SSH ports or manage SSH keys. This reduces the attack surface significantly. The SSM Agent must be installed and configured on all instances, and the instances must have an instance profile with the necessary IAM permissions to connect to Systems Manager.
IAM Identity Center: IAM Identity Center provides centralized management of access to the AWS Management Console for external support engineers. By using IAM Identity Center, you can control console access securely and ensure that external engineers have the appropriate permissions based on their roles.
Why Not Other Options?
Option B (Local IAM user credentials): This approach is less secure because it involves managing local IAM user credentials and does not leverage the centralized management and security benefits of IAM Identity Center.
Option C (Security group with SSH access): Allowing SSH access opens up the infrastructure to potential security risks, even when restricted by IP addresses. It also requires managing SSH keys, which can be cumbersome and less secure.
Option D (Bastion host): While a bastion host can secure SSH access, it still requires managing SSH keys and opening ports. This approach is less secure and more operationally intensive compared to using Session Manager.
Reference:
AWS Systems Manager Session Manager – Documentation on using Session Manager for secure instance access.
AWS IAM Identity Center – Overview of IAM Identity Center and its capabilities for managing user access.
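A brief boto3 sketch of the Session Manager plumbing, with a placeholder role name and instance ID; interactive shells are normally opened with the AWS CLI plus the Session Manager plugin, but the underlying API call is the same:

```python
import boto3

iam = boto3.client("iam")
ssm = boto3.client("ssm", region_name="us-east-1")

# The instance profile role gets the managed policy that lets SSM Agent
# register the instance with Systems Manager.
iam.attach_role_policy(
    RoleName="ec2-app-instance-role",  # placeholder
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

# An engineer with ssm:StartSession can then reach an instance in a
# private subnet with no SSH keys and no open inbound ports.
response = ssm.start_session(Target="i-0123456789abcdef0")  # placeholder ID
print(response["SessionId"])
```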
A company runs an Oracle database on premises. As part of the company’s migration to AWS, the company wants to upgrade the database to the most recent available version. The company also wants to set up disaster recovery (DR) for the database. The company needs to minimize the operational overhead for normal operations and DR setup. The company also needs to maintain access to the database’s underlying operating system.
Which solution will meet these requirements?
- A . Migrate the Oracle database to an Amazon EC2 instance. Set up database replication to a different AWS Region.
- B . Migrate the Oracle database to Amazon RDS for Oracle. Activate Cross-Region automated backups to replicate the snapshots to another AWS Region.
- C . Migrate the Oracle database to Amazon RDS Custom for Oracle. Create a read replica for the database in another AWS Region.
- D . Migrate the Oracle database to Amazon RDS for Oracle. Create a standby database in another Availability Zone.
C
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-custom.html and https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/working-with-custom-oracle.html
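For illustration, creating the cross-Region read replica is a single API call issued from the DR Region; the identifiers and source ARN below are placeholders:

```python
import boto3

# Client in the disaster recovery Region.
rds = boto3.client("rds", region_name="us-west-2")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="oracle-custom-dr-replica",  # placeholder
    SourceDBInstanceIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:db:oracle-custom-prod"  # placeholder
    ),
    SourceRegion="us-east-1",  # lets boto3 presign the cross-Region request
)
```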
A company has an application that places hundreds of .csv files into an Amazon S3 bucket every hour. The files are 1 GB in size. Each time a file is uploaded, the company needs to convert the file to Apache Parquet format and place the output file into an S3 bucket.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an AWS Lambda function to download the .csv files, convert the files to Parquet format, and place the output files in an S3 bucket. Invoke the Lambda function for each S3 PUT event.
- B . Create an Apache Spark job to read the .csv files, convert the files to Parquet format, and place the output files in an S3 bucket. Create an AWS Lambda function for each S3 PUT event to invoke the Spark job.
- C . Create an AWS Glue table and an AWS Glue crawler for the S3 bucket where the application places the .csv files. Schedule an AWS Lambda function to periodically use Amazon Athena to query the AWS Glue table, convert the query results into Parquet format, and place the output files into an S3 bucket.
- D . Create an AWS Glue extract, transform, and load (ETL) job to convert the .csv files to Parquet format and place the output files into an S3 bucket. Create an AWS Lambda function for each S3 PUT event to invoke the ETL job.
D
Explanation:
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/three-aws-glue-etl-job-types-for-converting-data-to-apache-parquet.html
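A minimal sketch of the Lambda handler that the S3 PUT event notification would invoke, assuming a placeholder Glue job name and argument keys (the .csv-to-Parquet transform itself lives in the Glue job script):

```python
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # One S3 PUT notification can carry several records.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName="csv-to-parquet",  # placeholder job name
            Arguments={
                "--input_path": f"s3://{bucket}/{key}",
                "--output_path": "s3://example-parquet-bucket/output/",  # placeholder
            },
        )
```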