Practice Free SAA-C03 Exam Online Questions
A company wants to store a large amount of data as objects for analytics and long-term archiving. Resources from outside AWS need to access the data. The external resources need to access the data with unpredictable frequency. However, the external resources must have immediate access when necessary.
The company needs a cost-optimized solution that provides high durability and data security.
Which solution will meet these requirements?
- A . Store the data in Amazon S3 Standard. Apply S3 Lifecycle policies to transition older data to S3 Glacier Deep Archive.
- B . Store the data in Amazon S3 Intelligent-Tiering.
- C . Store the data in Amazon S3 Glacier Flexible Retrieval. Use expedited retrieval to provide immediate access when necessary.
- D . Store the data in Amazon Elastic File System (Amazon EFS) Infrequent Access (IA). Use lifecycle policies to archive older files.
B
Explanation:
Amazon S3 Intelligent-Tiering is designed for data with unknown or changing access patterns. It automatically moves data between frequent and infrequent access tiers based on usage. This tier offers immediate access to all objects, regardless of which tier they are stored in, while optimizing storage costs. S3 Intelligent-Tiering also provides the same high durability, availability, and security as other S3 storage classes and supports access from external resources using standard S3 APIs. Lifecycle policies and Glacier classes are more suitable for archival when infrequent access is predictable, but retrieval from Glacier classes is not immediate and incurs extra charges and delays.
Reference Extract from AWS Documentation / Study Guide:
"S3 Intelligent-Tiering is designed to optimize costs by automatically moving data between two access tiers when access patterns change. Data is always available and immediately accessible, making it ideal for unknown or unpredictable access patterns."
Source: AWS Certified Solutions Architect – Official Study Guide, S3 Storage Classes section.
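As a minimal sketch of this approach (the bucket name, object key, and local file are hypothetical placeholders), an object can be written directly into the Intelligent-Tiering storage class through the standard S3 API:

```python
import boto3

s3 = boto3.client("s3")

# Upload an object directly into the Intelligent-Tiering storage class.
# Bucket name, key, and local file are hypothetical placeholders.
with open("records.parquet", "rb") as body:
    s3.put_object(
        Bucket="example-analytics-bucket",
        Key="datasets/2024/records.parquet",
        Body=body,
        StorageClass="INTELLIGENT_TIERING",
        ServerSideEncryption="aws:kms",  # encrypt at rest for data security
    )
```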
A company wants to migrate its accounting system from an on-premises data center to the AWS Cloud in a single AWS Region. Data security and an immutable audit log are the top priorities. The company must monitor all AWS activities for compliance auditing. The company has enabled AWS CloudTrail but wants to make sure it meets these requirements.
Which actions should a solutions architect take to protect and secure CloudTrail? (Select TWO.)
- A . Enable CloudTrail log file validation.
- B . Install the CloudTrail Processing Library.
- C . Enable logging of Insights events in CloudTrail.
- D . Enable custom logging from the on-premises resources.
- E . Create an AWS Config rule to monitor whether CloudTrail is configured to use server-side encryption with AWS KMS managed encryption keys (SSE-KMS).
A, E
Explanation:
CloudTrail log file validation ensures that the log files have not been altered or deleted after delivery, providing an immutable audit log. Using KMS-managed encryption keys for CloudTrail log files adds another layer of data security, and AWS Config can monitor compliance to ensure this security is always enforced.
Reference Extract:
"CloudTrail log file validation provides assurance about the integrity of CloudTrail logs. Using SSE-KMS encryption and monitoring with AWS Config helps secure and audit logs for compliance." Source: AWS Certified Solutions Architect C Official Study Guide, CloudTrail Security section.
A company hosts an application that processes highly sensitive customer transactions on AWS. The application uses Amazon RDS as its database. The company manages its own encryption keys to secure the data in Amazon RDS.
The company needs to update the customer-managed encryption keys at least once each year.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Set up automatic key rotation in AWS Key Management Service (AWS KMS) for the encryption keys.
- B . Configure AWS Key Management Service (AWS KMS) to alert the company to rotate the encryption keys annually.
- C . Schedule an AWS Lambda function to rotate the encryption keys annually.
- D . Create an AWS CloudFormation stack to run an AWS Lambda function that deploys new encryption keys once each year.
A
Explanation:
AWS KMS automatic key rotation is the simplest and most operationally efficient solution. Enabling automatic key rotation ensures that KMS automatically generates new key material for the key every year without requiring manual intervention.
Option B: Configuring alerts to rotate keys introduces operational overhead as the actual rotation must still be managed manually.
Option C: Scheduling a Lambda function to rotate keys adds unnecessary complexity compared to enabling automatic key rotation.
Option D: Using a CloudFormation stack to run a Lambda function for key rotation increases operational overhead and complexity unnecessarily.
Reference: AWS Documentation – AWS KMS Key Rotation; Using Customer-Managed Keys with Amazon RDS.
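As a minimal sketch (the key ID is a hypothetical placeholder for the customer managed key), enabling automatic rotation is a single API call:

```python
import boto3

kms = boto3.client("kms")

# The key ID is a hypothetical placeholder for the customer managed key
# that encrypts the RDS data.
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"

# Enable annual automatic rotation; KMS generates new key material each year.
kms.enable_key_rotation(KeyId=key_id)

# Confirm that rotation is now enabled.
status = kms.get_key_rotation_status(KeyId=key_id)
print(status["KeyRotationEnabled"])  # True once rotation is enabled
```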
A company runs an application on EC2 instances that need access to RDS credentials stored in AWS Secrets Manager.
Which solution meets this requirement?
- A . Create an IAM role, and attach the role to each EC2 instance profile. Use an identity-based policy to grant the role access to the secret.
- B . Create an IAM user, and attach the user to each EC2 instance profile. Use a resource-based policy to grant the user access to the secret.
- C . Create a resource-based policy for the secret. Use EC2 Instance Connect to access the secret.
- D . Create an identity-based policy for the secret. Grant direct access to the EC2 instances.
A
Explanation:
Option A uses an IAM role attached to the EC2 instance profile, enabling secure and automated access to Secrets Manager. This is the recommended approach.
Option B is invalid because instance profiles hold IAM roles, not IAM users, and long-term user credentials are less secure and harder to manage.
Option C is not practical: EC2 Instance Connect provides SSH access to instances, not programmatic access to secrets.
Option D violates best practices because identity-based policies attach to IAM principals such as roles and users, not directly to EC2 instances.
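A minimal sketch of the instance-side code under this setup (the secret name and Region are hypothetical placeholders); because the role is delivered through the instance profile, boto3 obtains temporary credentials automatically and nothing is hard-coded:

```python
import boto3

# On an EC2 instance with a role attached through its instance profile,
# boto3 picks up temporary credentials automatically; no keys are hard-coded.
secrets = boto3.client("secretsmanager", region_name="us-east-1")

# The secret name is a hypothetical placeholder.
response = secrets.get_secret_value(SecretId="prod/rds/credentials")
connection_string = response["SecretString"]
```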
A company is building a web application that serves a content management system. The content management system runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances run in an Auto Scaling group across multiple Availability Zones. Users are constantly adding and updating files, blogs, and other website assets in the content management system.
A solutions architect must implement a solution in which all the EC2 instances share up-to-date website content with the least possible lag time.
Which solution will meet these requirements?
- A . Update the EC2 user data in the Auto Scaling group lifecycle policy to copy the website assets from the EC2 instance that was launched most recently. Configure the ALB to make changes to the website assets only in the newest EC2 instance.
- B . Copy the website assets to an Amazon Elastic File System (Amazon EFS) file system. Configure each EC2 instance to mount the EFS file system locally. Configure the website hosting application to reference the website assets that are stored in the EFS file system.
- C . Copy the website assets to an Amazon S3 bucket. Ensure that each EC2 instance downloads the website assets from the S3 bucket to the attached Amazon Elastic Block Store (Amazon EBS) volume. Run the S3 sync command once each hour to keep files up to date.
- D . Restore an Amazon Elastic Block Store (Amazon EBS) snapshot with the website assets. Attach the EBS snapshot as a secondary EBS volume when a new EC2 instance is launched. Configure the website hosting application to reference the website assets that are stored in the secondary EBS volume.
B
Explanation:
Amazon EFS provides a shared, elastic, low-latency file system that can be mounted concurrently by many EC2 instances across multiple Availability Zones, delivering strong read-after-write consistency so all instances see updates almost immediately. This is the standard pattern for CMS-style workloads that require shared, up-to-date assets with minimal lag. Syncing local copies from S3 (C) introduces polling windows and eventual consistency delays; hourly sync is not near-real time. Copying from a “newest instance” (A) is brittle and not scalable. EBS volumes/snapshots (D) are single-instance, single-AZ block devices and not designed for multi-writer sharing across instances/AZs. EFS’s multi-AZ design and POSIX semantics provide the simplest, most reliable solution with the least operational overhead.
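A minimal sketch of provisioning the shared file system with boto3 (the subnet IDs, security group, file system ID, and mount path are hypothetical placeholders); one mount target per Availability Zone lets instances in every AZ mount the same file system:

```python
import boto3

efs = boto3.client("efs")

# Create an encrypted shared file system (the creation token is arbitrary).
fs = efs.create_file_system(CreationToken="cms-assets", Encrypted=True)

# One mount target per Availability Zone; subnet and security group IDs
# are hypothetical placeholders. (In practice, wait until the file system
# state is 'available' before adding mount targets.)
for subnet_id in ["subnet-aaa111", "subnet-bbb222"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )

# Each instance then mounts the file system locally, e.g. in user data:
#   sudo mount -t efs fs-12345678:/ /var/www/assets
```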
Reference: Amazon EFS ― Use cases and benefits; Performance and consistency model; Mount targets across multiple AZs; Shared file storage for web content and CMS.
An application uses an Amazon SQS queue and two AWS Lambda functions. One of the Lambda functions pushes messages to the queue, and the other function polls the queue and receives queued messages.
A solutions architect needs to ensure that only the two Lambda functions can write to or read from the queue.
Which solution will meet these requirements?
- A . Attach an IAM policy to the SQS queue that grants the Lambda function principals read and write access. Attach an IAM policy to the execution role of each Lambda function that denies all access to the SQS queue except for the principal of each function.
- B . Attach a resource-based policy to the SQS queue to deny read and write access to the queue for any entity except the principal of each Lambda function. Attach an IAM policy to the execution role of each Lambda function that allows read and write access to the queue.
- C . Attach a resource-based policy to the SQS queue that grants the Lambda function principals read and write access to the queue. Attach an IAM policy to the execution role of each Lambda function that allows read and write access to the queue.
- D . Attach a resource-based policy to the SQS queue to deny all access to the queue. Attach an IAM policy to the execution role of each Lambda function that grants read and write access to the queue.
C
Explanation:
To ensure that only specific AWS Lambda functions can read from or write to an Amazon SQS queue, use resource-based policies attached directly to the SQS queue. These policies explicitly grant permissions to the IAM roles used by the Lambda functions. Additionally, the Lambda execution roles must also have IAM policies that permit SQS access. This dual-layer approach follows the AWS security best practice of granting least privilege access and ensures that no other service or entity can interact with the queue.
This is a common and supported pattern documented in the Amazon SQS Developer Guide, where resource-based policies restrict access at the queue level while IAM roles control permissions at the function level.
Reference: AWS Documentation – Amazon SQS Access Control, Lambda Permissions, and Resource-Based Policies
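A minimal sketch of attaching the resource-based policy with boto3 (the account ID, role names, and queue URL/ARN are hypothetical placeholders):

```python
import json
import boto3

sqs = boto3.client("sqs")

# Execution-role ARNs and queue identifiers are hypothetical placeholders.
producer_role = "arn:aws:iam::123456789012:role/producer-lambda-role"
consumer_role = "arn:aws:iam::123456789012:role/consumer-lambda-role"
queue_arn = "arn:aws:sqs:us-east-1:123456789012:app-queue"
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/app-queue"

# Resource-based policy granting only the two Lambda roles access to the queue.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": [producer_role, consumer_role]},
            "Action": ["sqs:SendMessage", "sqs:ReceiveMessage", "sqs:DeleteMessage"],
            "Resource": queue_arn,
        }
    ],
}

sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"Policy": json.dumps(policy)},
)
```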
A company wants to use a data lake that is hosted on Amazon S3 to provide analytics services for historical data. The data lake consists of 800 tables but is expected to grow to thousands of tables. More than 50 departments use the tables, and each department has hundreds of users. Different departments need access to specific tables and columns.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an IAM role for each department. Use AWS Lake Formation based access control to grant each IAM role access to specific tables and columns. Use Amazon Athena to analyze the data.
- B . Create an Amazon Redshift cluster for each department. Use AWS Glue to ingest into the Redshift cluster only the tables and columns that are relevant to that department. Create Redshift database users. Grant the users access to the relevant department’s Redshift cluster. Use Amazon Redshift to analyze the data.
- C . Create an IAM role for each department. Use AWS Lake Formation tag-based access control to grant each IAM role access to only the relevant resources. Create LF-tags that are attached to tables and columns. Use Amazon Athena to analyze the data.
- D . Create an Amazon EMR cluster for each department. Configure an IAM service role for each EMR cluster to access relevant S3 files. For each department’s users, create an IAM role that provides access to the relevant EMR cluster. Use Amazon EMR to analyze the data.
C
Explanation:
The requirement is to provide granular, scalable access to thousands of tables and columns in a data lake across many users and departments, with the least operational overhead.
AWS Lake Formation supports tag-based access control (TBAC) using LF-tags (Lake Formation tags), which allows you to assign tags to tables, columns, and databases. You can then define permissions on resources by specifying tags rather than managing permissions for individual resources. This approach is highly scalable and efficient when dealing with a growing number of tables and columns. By associating IAM roles to departments and granting access based on LF-tags, you dramatically reduce the operational burden as new tables or columns are added; you only need to assign the appropriate tags.
Amazon Athena can directly query data in S3 with Lake Formation providing fine-grained access control.
AWS Documentation Extract:
"With LF-tag-based access control, you can grant permissions to resources based on tags, making it easy to manage access at scale, especially in environments with large and dynamic numbers of resources."
"LF-tags provide a scalable way to manage permissions for large numbers of resources without having to define permissions individually for each table or column."
(Source: AWS Lake Formation documentation, Access Control, Tag-Based Access Control)
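A minimal sketch of the LF-tag workflow with boto3 (the tag key, values, database, table, and role ARN are hypothetical placeholders): create a tag, attach it to a table, then grant permissions by tag expression rather than by individual resource.

```python
import boto3

lf = boto3.client("lakeformation")

# Create an LF-tag (the key and values are hypothetical).
lf.create_lf_tag(TagKey="department", TagValues=["finance", "marketing"])

# Attach the tag to a table in the Glue Data Catalog.
lf.add_lf_tags_to_resource(
    Resource={"Table": {"DatabaseName": "datalake", "Name": "transactions"}},
    LFTags=[{"TagKey": "department", "TagValues": ["finance"]}],
)

# Grant a department role SELECT on every resource tagged department=finance,
# instead of granting access table by table.
lf.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/finance-analysts"
    },
    Resource={
        "LFTagPolicy": {
            "ResourceType": "TABLE",
            "Expression": [{"TagKey": "department", "TagValues": ["finance"]}],
        }
    },
    Permissions=["SELECT"],
)
```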
Other options:
A: Would require managing explicit permissions for each table and column as the environment grows, increasing operational overhead.
B & D: Involve significant duplication of resources (clusters) and do not scale as efficiently as a centralized data lake with tag-based access.
Reference: AWS Certified Solutions Architect – Official Study Guide, Chapter on Data Lakes and Access Control.
A company wants to run a hybrid workload for data processing. The data needs to be accessed by on-premises applications for local data processing using an NFS protocol, and must also be accessible from the AWS Cloud for further analytics and batch processing.
Which solution will meet these requirements?
- A . Use an AWS Storage Gateway file gateway to provide file storage to AWS, then perform analytics on this data in the AWS Cloud.
- B . Use an AWS Storage Gateway tape gateway to copy the backup of the local data to AWS, then perform analytics on this data in the AWS Cloud.
- C . Use an AWS Storage Gateway volume gateway in a stored volume configuration to regularly take snapshots of the local data, then copy the data to AWS.
- D . Use an AWS Storage Gateway volume gateway in a cached volume configuration to back up all the local storage in the AWS Cloud, then perform analytics on this data in the cloud.
A
Explanation:
AWS Storage Gateway file gateway presents a file interface backed by Amazon S3 and supports NFS. This allows local applications to access data via NFS while also enabling cloud applications to use the data stored in S3 for analytics and processing, fulfilling both hybrid and cloud-native requirements.
Reference Extract:
"AWS Storage Gateway file gateway offers NFS and SMB access to data stored in Amazon S3, supporting hybrid workloads for local and cloud access."
Source: AWS Certified Solutions Architect – Official Study Guide, Hybrid and Storage Gateway section.
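A minimal sketch of creating the NFS file share on an existing file gateway with boto3 (all ARNs, the client token, and the CIDR range are hypothetical placeholders):

```python
import boto3

sgw = boto3.client("storagegateway")

# All ARNs, the client token, and the CIDR range are hypothetical placeholders.
sgw.create_nfs_file_share(
    ClientToken="hybrid-share-001",
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",
    Role="arn:aws:iam::123456789012:role/storage-gateway-s3-access",
    LocationARN="arn:aws:s3:::example-hybrid-data-bucket",
    ClientList=["10.0.0.0/16"],  # on-premises clients allowed to mount the share
)
```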
A company wants to standardize its Amazon Elastic Block Store (Amazon EBS) volume encryption strategy. The company also wants to minimize the cost and configuration effort required to operate the volume encryption check.
Which solution will meet these requirements?
- A . Write API calls to describe the EBS volumes and to confirm the EBS volumes are encrypted. Use Amazon EventBridge to schedule an AWS Lambda function to run the API calls.
- B . Write API calls to describe the EBS volumes and to confirm the EBS volumes are encrypted. Run the API calls on an AWS Fargate task.
- C . Create an AWS Identity and Access Management (IAM) policy that requires the use of tags on EBS volumes. Use AWS Cost Explorer to display resources that are not properly tagged. Encrypt the untagged resources manually.
- D . Create an AWS Config rule for Amazon EBS to evaluate if a volume is encrypted and to flag the volume if it is not encrypted.
D
Explanation:
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. By creating a Config rule, you can automatically check whether your Amazon EBS volumes are encrypted and flag those that are not, with minimal cost and configuration effort.
AWS Config Rule: AWS Config provides managed rules that you can use to automatically check the compliance of your resources against predefined or custom criteria. In this case, you would create a rule to evaluate EBS volumes and determine if they are encrypted. If a volume is not encrypted, the rule will flag it, allowing you to take corrective action.
Operational Overhead: This approach significantly reduces operational overhead because once the rule is in place, it continuously monitors your EBS volumes for compliance, and there’s no need for manual checks or custom scripting.
Why Not Other Options?:
Option A (Lambda with API calls and EventBridge): While this can work, it involves writing and maintaining custom code, which increases operational overhead compared to using a managed AWS Config rule.
Option B (API calls on Fargate): Running API calls on Fargate is more complex and costly compared to using AWS Config, which provides a simpler, managed solution.
Option C (IAM policy with Cost Explorer): This option does not directly enforce encryption compliance and involves manual intervention, making it less efficient and more prone to errors.
Reference: AWS Config Rules – overview of AWS Config rules and how they can be used to evaluate resource configurations; Amazon EBS Encryption – information on how to manage and enforce encryption for EBS volumes.
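A minimal sketch of deploying the managed rule with boto3 (the rule name is arbitrary; ENCRYPTED_VOLUMES is the AWS managed rule identifier for this check):

```python
import boto3

config = boto3.client("config")

# ENCRYPTED_VOLUMES is the AWS managed rule identifier; the rule name is arbitrary.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-volumes-encrypted",
        "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }
)
```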
A company runs a Node.js function on a server in its on-premises data center. The data center stores data in a PostgreSQL database. The company stores the credentials in a connection string in an environment variable on the server. The company wants to migrate its application to AWS and to replace the Node.js application server with AWS Lambda. The company also wants to migrate to Amazon RDS for PostgreSQL and to ensure that the database credentials are securely managed.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Store the database credentials as a parameter in AWS Systems Manager Parameter Store. Configure Parameter Store to automatically rotate the secrets every 30 days. Update the Lambda function to retrieve the credentials from the parameter.
- B . Store the database credentials as a secret in AWS Secrets Manager. Configure Secrets Manager to automatically rotate the credentials every 30 days. Update the Lambda function to retrieve the credentials from the secret.
- C . Store the database credentials as an encrypted Lambda environment variable. Write a custom Lambda function to rotate the credentials. Schedule the Lambda function to run every 30 days.
- D . Store the database credentials as a key in AWS Key Management Service (AWS KMS). Configure automatic rotation for the key. Update the Lambda function to retrieve the credentials from the KMS key.
B
Explanation:
AWS Secrets Manager is designed specifically to securely store and manage sensitive information such as database credentials. It integrates seamlessly with AWS services like Lambda and RDS, and it provides automatic credential rotation with minimal operational overhead.
AWS Secrets Manager: By storing the database credentials in Secrets Manager, you ensure that the credentials are securely stored, encrypted, and managed. Secrets Manager provides a built-in mechanism to automatically rotate credentials at regular intervals (e.g., every 30 days), which helps in maintaining security best practices without requiring additional manual intervention.
Lambda Integration: The Lambda function can be easily configured to retrieve the credentials from Secrets Manager using the AWS SDK, ensuring that the credentials are accessed securely at runtime.
Why Not Other Options?:
Option A (Parameter Store with Rotation): While Parameter Store can store parameters securely, Secrets Manager is more tailored for secrets management and automatic rotation, offering more features and less operational overhead.
Option C (Encrypted Lambda environment variable): Storing credentials directly in Lambda environment variables, even when encrypted, requires custom code to manage rotation, which increases operational complexity.
Option D (KMS with automatic rotation): KMS is for managing encryption keys, not for storing and rotating secrets like database credentials. This option would require more custom implementation to manage credentials securely.
Reference: AWS Secrets Manager – detailed documentation on how to store, manage, and rotate secrets using AWS Secrets Manager; Using Secrets Manager with AWS Lambda – guidance on integrating Secrets Manager with Lambda for secure credential management.
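A minimal sketch of the Lambda side (the secret name and JSON field names are hypothetical placeholders; RDS-style secrets typically store the username, password, and host as a JSON document):

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

def handler(event, context):
    # The secret name is a hypothetical placeholder.
    response = secrets.get_secret_value(SecretId="prod/rds/postgres")
    creds = json.loads(response["SecretString"])
    # Use creds["username"], creds["password"], and creds["host"] to open
    # the PostgreSQL connection (for example, with psycopg2 or pg8000).
    return {"statusCode": 200}
```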
