Practice Free SAA-C03 Exam Online Questions
A company has a web application that has thousands of users. The application uses 8-10 user-uploaded images to generate AI images. Users can download the generated AI images once every 6 hours. The company also has a premium user option that gives users the ability to download the generated AI images at any time.
The company uses the user-uploaded images to run AI model training twice a year. The company needs a storage solution to store the images.
Which storage solution meets these requirements MOST cost-effectively?
- A . Move uploaded images to Amazon S3 Glacier Deep Archive. Move premium user-generated AI images to S3 Standard. Move non-premium user-generated AI images to S3 Standard-Infrequent Access (S3 Standard-IA).
- B . Move uploaded images to Amazon S3 Glacier Deep Archive. Move all generated AI images to S3 Glacier Flexible Retrieval.
- C . Move uploaded images to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA). Move premium user-generated AI images to S3 Standard. Move non-premium user-generated AI images to S3 Standard-Infrequent Access (S3 Standard-IA).
- D . Move uploaded images to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA). Move all generated AI images to S3 Glacier Flexible Retrieval.
C
Explanation:
S3 One Zone-IA:
Suitable for infrequently accessed data that doesn’t require multiple Availability Zone resilience.
Cost-effective for storing user-uploaded images that are only used for AI model training twice a year.
S3 Standard:
Ideal for frequently accessed data with high durability and availability.
Store premium user-generated AI images here to ensure they are readily available for download at any time.
S3 Standard-IA:
Cost-effective storage for data that is accessed less frequently but still requires rapid retrieval.
Store non-premium user-generated AI images here, as these images are only downloaded once every 6 hours, making it a good balance between cost and accessibility.
Cost-Effectiveness: This solution optimizes storage costs by categorizing data based on access patterns and durability requirements, ensuring that each type of data is stored in the most cost-effective manner.
Reference: Amazon S3 Storage Classes
S3 One Zone-IA
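As an illustration of option C, a minimal boto3 sketch (the bucket name, object keys, and payload are placeholders) that writes each image type to the storage class described above:

```python
import boto3

s3 = boto3.client("s3")

def store_generated_image(bucket: str, key: str, body: bytes, is_premium: bool) -> None:
    """Store a generated image in the storage class that matches its access pattern."""
    # Premium users can download at any time -> S3 Standard.
    # Non-premium users download at most once every 6 hours -> S3 Standard-IA.
    storage_class = "STANDARD" if is_premium else "STANDARD_IA"
    s3.put_object(Bucket=bucket, Key=key, Body=body, StorageClass=storage_class)

# Uploaded source images are only read for twice-yearly training -> S3 One Zone-IA.
s3.put_object(
    Bucket="example-image-bucket",          # placeholder bucket
    Key="uploads/user123/source.png",       # placeholder key
    Body=b"...",                            # placeholder image bytes
    StorageClass="ONEZONE_IA",
)
```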
A company is reviewing a recent migration of a three-tier application to a VPC. The security team discovers that the principle of least privilege is not being applied to Amazon EC2 security group ingress and egress rules between the application tiers.
What should a solutions architect do to correct this issue?
- A . Create security group rules using the instance ID as the source or destination.
- B . Create security group rules using the security group ID as the source or destination.
- C . Create security group rules using the VPC CIDR blocks as the source or destination.
- D . Create security group rules using the subnet CIDR blocks as the source or destination.
B
Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules.html
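A minimal boto3 sketch of option B (the security group IDs and the MySQL port are placeholders) that allows only the application tier's security group to reach the database tier:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound MySQL traffic to the DB tier only from the app tier's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0dbtier0example000",  # DB tier security group (placeholder)
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [
            {"GroupId": "sg-0apptier0example0"}  # app tier security group (placeholder)
        ],
    }],
)
```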
A company uses Amazon S3 to store customer data that contains personally identifiable information (PII) attributes. The company needs to make the customer information available to company resources through an AWS Glue Catalog. The company needs to have fine-grained access control for the data so that only specific IAM roles can access the PII data.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create one IAM policy that grants access to PII. Create a second IAM policy that grants access to non-PII data. Assign the PII policy to the specified IAM roles.
- B . Create one IAM role that grants access to PII. Create a second IAM role that grants access to non-PII data. Assign the PII policy to the specified IAM roles.
- C . Use AWS Lake Formation to provide the specified IAM roles access to the PII data.
- D . Use AWS Glue to create one view for PII data. Create a second view for non-PII data. Provide the specified IAM roles access to the PII view.
C
Explanation:
AWS Lake Formation is designed for managing fine-grained access control to data in an efficient manner:
Granular Permissions: Lake Formation allows column-level, row-level, and table-level access controls, which can precisely define access to PII data.
Integration with AWS Glue Catalog: Lake Formation natively integrates with AWS Glue for seamless data cataloging and access control.
Operational Efficiency: Centralized access control policies minimize the need for separate IAM roles or policies.
Why Other Options Are Not Ideal:
Option A:
Creating multiple IAM policies introduces complexity and lacks column-level access control.
Not efficient.
Option B:
Managing multiple IAM roles for granular access is operationally complex.
Not efficient.
Option D:
Creating views in Glue adds unnecessary complexity and may not provide the level of granularity that Lake Formation offers.
Not the best choice.
Reference:
AWS Documentation – AWS Lake Formation
AWS Documentation – Fine-Grained Permissions with Lake Formation
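A minimal boto3 sketch of the Lake Formation approach (the role ARN, database, table, and column names are hypothetical), granting a specific IAM role SELECT access to the PII columns only:

```python
import boto3

lakeformation = boto3.client("lakeformation")

lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/PiiAnalystRole"},
    Resource={
        "TableWithColumns": {
            "DatabaseName": "customer_db",    # Glue Data Catalog database (placeholder)
            "Name": "customers",              # Glue Data Catalog table (placeholder)
            "ColumnNames": ["email", "ssn"],  # PII columns (placeholder)
        }
    },
    Permissions=["SELECT"],
)
```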
An application runs on an Amazon EC2 instance that has an Elastic IP address in VPC A. The application requires access to a database in VPC B. Both VPCs are in the same AWS account.
Which solution will provide the required access MOST securely?
- A . Create a DB instance security group that allows all traffic from the public IP address of the application server in VPC A.
- B . Configure a VPC peering connection between VPC A and VPC B.
- C . Make the DB instance publicly accessible. Assign a public IP address to the DB instance.
- D . Launch an EC2 instance with an Elastic IP address into VPC B. Proxy all requests through the new EC2 instance.
B
Explanation:
A VPC peering connection is a networking connection between two VPCs that enables users to route traffic between them using private IP addresses. Instances in either VPC can communicate with each other as if they are within the same network. A VPC peering connection can be created between VPCs in the same or different AWS accounts and Regions1. By configuring a VPC peering connection between VPC A and VPC B, the solution can provide the required access most securely.
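A minimal boto3 sketch of option B (the VPC IDs, route table IDs, and CIDR blocks are placeholders) that peers the two VPCs and routes traffic privately between them:

```python
import boto3

ec2 = boto3.client("ec2")

# Request a peering connection from VPC A to VPC B (placeholder IDs).
peering = ec2.create_vpc_peering_connection(VpcId="vpc-aaaa1111", PeerVpcId="vpc-bbbb2222")
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Both VPCs are in the same account, so the same credentials can accept the request.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route each VPC's traffic for the peer's CIDR block through the peering connection.
ec2.create_route(RouteTableId="rtb-aaaa1111", DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-bbbb2222", DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
```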
A company is preparing a new data platform that will ingest real-time streaming data from multiple sources. The company needs to transform the data before writing the data to Amazon S3. The company needs the ability to use SQL to query the transformed data.
Which solutions will meet these requirements? (Choose two.)
- A . Use Amazon Kinesis Data Streams to stream the data. Use Amazon Kinesis Data Analytics to transform the data. Use Amazon Kinesis Data Firehose to write the data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3.
- B . Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to stream the data. Use AWS Glue to transform the data and to write the data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3.
- C . Use AWS Database Migration Service (AWS DMS) to ingest the data. Use Amazon EMR to transform the data and to write the data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3.
- D . Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to stream the data. Use Amazon Kinesis Data Analytics to transform the data and to write the data to Amazon S3. Use the Amazon RDS query editor to query the transformed data from Amazon S3.
- E . Use Amazon Kinesis Data Streams to stream the data. Use AWS Glue to transform the data. Use Amazon Kinesis Data Firehose to write the data to Amazon S3. Use the Amazon RDS query editor to query the transformed data from Amazon S3.
A, B
Explanation:
To ingest, transform, and query real-time streaming data from multiple sources, Amazon Kinesis and Amazon MSK are suitable solutions. Amazon Kinesis Data Streams can stream the data from various sources and integrate with other AWS services. Amazon Kinesis Data Analytics can transform the data using SQL or Apache Flink. Amazon Kinesis Data Firehose can write the data to Amazon S3 or other destinations. Amazon Athena can query the transformed data from Amazon S3 using standard SQL. Amazon MSK can stream the data using Apache Kafka, which is a popular open-source platform for streaming data. AWS Glue can transform the data using Apache Spark or Python scripts and write the data to Amazon S3 or other destinations. Amazon Athena can also query the transformed data from Amazon S3 using standard SQL.
Reference:
What Is Amazon Kinesis Data Streams?
What Is Amazon Kinesis Data Analytics?
What Is Amazon Kinesis Data Firehose?
What Is Amazon Athena?
What Is Amazon MSK?
What Is AWS Glue?
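For the query step shared by options A and B, a minimal boto3 sketch (the database, table, and results location are hypothetical) that runs standard SQL against the transformed data in Amazon S3 with Athena:

```python
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) AS events "
                "FROM transformed_stream GROUP BY event_type",   # placeholder table
    QueryExecutionContext={"Database": "streaming_analytics"},   # placeholder database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # placeholder bucket
)
print(response["QueryExecutionId"])
```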
A company needs to retain application logs files for a critical application for 10 years. The application team regularly accesses logs from the past month for troubleshooting, but logs older than 1 month are rarely accessed. The application generates more than 10 TB of logs per month.
Which storage option meets these requirements MOST cost-effectively?
- A . Store the logs in Amazon S3. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep Archive.
- B . Store the logs in Amazon S3. Use S3 Lifecycle policies to move logs more than 1 month old to S3 Glacier Deep Archive.
- C . Store the logs in Amazon CloudWatch Logs. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep Archive.
- D . Store the logs in Amazon CloudWatch Logs. Use Amazon S3 Lifecycle policies to move logs more than 1 month old to S3 Glacier Deep Archive.
B
Explanation:
The logs must be stored in Amazon S3 so that an S3 Lifecycle policy can transition objects older than 1 month to S3 Glacier Deep Archive. CloudWatch Logs cannot archive log data to Glacier storage classes, and AWS Backup is not designed for lifecycle transitions between S3 storage classes.
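A minimal boto3 sketch of option B (the bucket name and prefix are placeholders) that transitions logs to S3 Glacier Deep Archive after 1 month and expires them after 10 years:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-app-logs",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},  # placeholder prefix
            # Move logs older than 1 month to Glacier Deep Archive.
            "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
            # Delete logs after roughly 10 years of retention.
            "Expiration": {"Days": 3650},
        }]
    },
)
```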
A solutions architect needs to secure an Amazon API Gateway REST API. Users need to be able to log in to the API by using common external social identity providers (IdPs). The social IdPs must use standard authentication protocols such as SAML or OpenID Connect (OIDC). The solutions architect needs to protect the API against attempts to exploit application vulnerabilities.
Which combination of steps will meet these security requirements? (Select TWO.)
- A . Create an AWS WAF web ACL that is associated with the REST API. Add the appropriate managed rules to the ACL.
- B . Subscribe to AWS Shield Advanced. Enable DDoS protection. Associate Shield Advanced with the REST API.
- C . Create an Amazon Cognito user pool with a federation to the social IdPs. Integrate the user pool with the REST API.
- D . Create an API key in API Gateway. Associate the API key with the REST API.
- E . Create an IP address filter in AWS WAF that allows only the social IdPs. Associate the filter with the web ACL and the API.
A, C
Explanation:
Step A: AWS WAF with managed rules protects the API against application-layer attacks, such as SQL injection and cross-site scripting (XSS).
Step C: Amazon Cognito provides secure authentication and supports federation with social IdPs using OIDC or SAML. It integrates seamlessly with API Gateway.
Option B: AWS Shield Advanced provides DDoS protection, which is not explicitly required in this scenario.
Option D: API keys provide identification, not authentication, and are insufficient for this use case.
Option E: IP filters in WAF are overly restrictive for federated authentication scenarios.
Reference:
Amazon Cognito Federation
AWS WAF Managed Rules
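A hedged boto3 sketch of steps A and C (the ARNs, IDs, and names are placeholders) that attaches an existing WAF web ACL to the REST API stage and adds a Cognito user pool authorizer:

```python
import boto3

wafv2 = boto3.client("wafv2")
apigateway = boto3.client("apigateway")

# Step A: associate a regional web ACL (with managed rules) with the API stage.
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:111122223333:regional/webacl/api-protection/abcd1234",
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/a1b2c3d4e5/stages/prod",
)

# Step C: use a Cognito user pool (federated to the social IdPs) as the API authorizer.
apigateway.create_authorizer(
    restApiId="a1b2c3d4e5",  # placeholder REST API ID
    name="cognito-social-idp-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=["arn:aws:cognito-idp:us-east-1:111122223333:userpool/us-east-1_EXAMPLE"],
    identitySource="method.request.header.Authorization",
)
```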
A company runs an infrastructure monitoring service. The company is building a new feature that will enable the service to monitor data in customer AWS accounts. The new feature will call AWS APIs in customer accounts to describe Amazon EC2 instances and read Amazon CloudWatch metrics.
What should the company do to obtain access to customer accounts in the MOST secure way?
- A . Ensure that the customers create an IAM role in their account with read-only EC2 and CloudWatch permissions and a trust policy to the company’s account.
- B . Create a serverless API that implements a token vending machine to provide temporary AWS credentials for a role with read-only EC2 and CloudWatch permissions.
- C . Ensure that the customers create an IAM user in their account with read-only EC2 and CloudWatch permissions. Encrypt and store customer access and secret keys in a secrets management system.
- D . Ensure that the customers create an Amazon Cognito user in their account to use an IAM role with read-only EC2 and CloudWatch permissions. Encrypt and store the Amazon Cognito user and password in a secrets management system.
A
Explanation:
By having customers create an IAM role with the necessary permissions in their own accounts, the company can use AWS Identity and Access Management (IAM) to establish cross-account access. The trust policy allows the company’s AWS account to assume the customer’s IAM role temporarily, granting access to the specified resources (EC2 instances and CloudWatch metrics) within the customer’s account. This approach follows the principle of least privilege, as the company only requests the necessary permissions and does not require long-term access keys or user credentials from the customers.
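A minimal boto3 sketch of the customer-side setup in option A (the account ID, role name, and external ID are placeholders): the trust policy lets the monitoring company's account assume the role, and only read-only managed policies are attached.

```python
import json
import boto3

iam = boto3.client("iam")

# 999999999999 is the monitoring company's AWS account (placeholder).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999999999999:root"},
        "Action": "sts:AssumeRole",
        # External ID guards against the confused-deputy problem (placeholder value).
        "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}},
    }],
}

iam.create_role(RoleName="MonitoringReadOnlyRole",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.attach_role_policy(RoleName="MonitoringReadOnlyRole",
                       PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess")
iam.attach_role_policy(RoleName="MonitoringReadOnlyRole",
                       PolicyArn="arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess")
```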
A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an Availability Zone. Some files are accessed frequently, while other files are rarely accessed in an unpredictable pattern. The solutions architect must minimize the costs of storing and retrieving the media files.
Which storage option meets these requirements?
- A . S3 Standard
- B . S3 Intelligent-Tiering
- C . S3 Standard-Infrequent Access (S3 Standard-IA)
- D . S3 One Zone-Infrequent Access (S3 One Zone-IA)
B
Explanation:
S3 Intelligent-Tiering is the ideal storage class when the frequency of access is unknown or the usage pattern is irregular.
Amazon S3 offers a range of storage classes designed for different use cases. These include S3 Standard for general-purpose storage of frequently accessed data; S3 Intelligent-Tiering for data with unknown or changing access patterns; S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, but less frequently accessed data; and Amazon S3 Glacier (S3 Glacier) and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation. If you have data residency requirements that can’t be met by an existing AWS Region, you can use the S3 Outposts storage class to store your S3 data on-premises. Amazon S3 also offers capabilities to manage your data throughout its lifecycle. Once an S3 Lifecycle policy is set, your data will automatically transfer to a different storage class without any changes to your application.
https://aws.amazon.com/getting-started/hands-on/getting-started-using-amazon-s3-intelligent-tiering/?nc1=h_ls
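A minimal boto3 sketch (the bucket, key, and file name are placeholders) that uploads a media file directly into S3 Intelligent-Tiering so that S3 moves the object between access tiers automatically:

```python
import boto3

s3 = boto3.client("s3")

with open("video-thumbnail.png", "rb") as media_file:  # placeholder local file
    s3.put_object(
        Bucket="example-media-bucket",       # placeholder bucket
        Key="media/video-thumbnail.png",     # placeholder key
        Body=media_file,
        StorageClass="INTELLIGENT_TIERING",
    )
```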
A company hosts a multi-tier web application that uses an Amazon Aurora MySQL DB cluster for storage. The application tier is hosted on Amazon EC2 instances. The company’s IT security guidelines mandate that the database credentials be encrypted and rotated every 14 days.
What should a solutions architect do to meet this requirement with the LEAST operational effort?
- A . Create a new AWS Key Management Service (AWS KMS) encryption key. Use AWS Secrets Manager to create a new secret that uses the KMS key with the appropriate credentials. Associate the secret with the Aurora DB cluster. Configure a custom rotation period of 14 days.
- B . Create two parameters in AWS Systems Manager Parameter Store: one for the user name as a string parameter and one that uses the SecureString type for the password. Select AWS Key Management Service (AWS KMS) encryption for the password parameter, and load these parameters in the application tier. Implement an AWS Lambda function that rotates the password every 14 days.
- C . Store a file that contains the credentials in an AWS Key Management Service (AWS KMS) encrypted Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system in all EC2 instances of the application tier. Restrict access to the file on the file system so that the application can read the file and only super users can modify the file. Implement an AWS Lambda function that rotates the key in Aurora every 14 days and writes new credentials into the file.
- D . Store a file that contains the credentials in an AWS Key Management Service (AWS KMS) encrypted Amazon S3 bucket that the application uses to load the credentials. Download the file to the application regularly to ensure that the correct credentials are used. Implement an AWS Lambda function that rotates the Aurora credentials every 14 days and uploads these credentials to the file in the S3 bucket.
A
Explanation:
https://aws.amazon.com/blogs/security/how-to-use-aws-secrets-manager-rotate-credentials-amazon-rds-database-types-oracle/
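A minimal boto3 sketch of option A (the secret name and rotation function ARN are placeholders) that sets a 14-day rotation schedule on an existing Secrets Manager secret:

```python
import boto3

secretsmanager = boto3.client("secretsmanager")

secretsmanager.rotate_secret(
    SecretId="prod/aurora-mysql/app-credentials",  # placeholder secret name
    # Rotation function for Aurora MySQL credentials (placeholder ARN).
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:SecretsManagerRotation",
    # Rotate the credentials automatically every 14 days.
    RotationRules={"AutomaticallyAfterDays": 14},
)
```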