Practice Free SAA-C03 Exam Online Questions
A company website hosted on Amazon EC2 instances processes classified data. The application writes data to Amazon Elastic Block Store (Amazon EBS) volumes. The company needs to ensure that all data that is written to the EBS volumes is encrypted at rest.
Which solution will meet this requirement?
- A . Create an IAM role that specifies EBS encryption. Attach the role to the EC2 instances.
- B . Create the EBS volumes as encrypted volumes. Attach the EBS volumes to the EC2 instances.
- C . Create an EC2 instance tag that has a key of Encrypt and a value of True. Tag all instances that require encryption at the EBS level.
- D . Create an AWS Key Management Service (AWS KMS) key policy that enforces EBS encryption in the account. Ensure that the key policy is active.
B
Explanation:
The simplest and most effective way to ensure that all data that is written to the EBS volumes is encrypted at rest is to create the EBS volumes as encrypted volumes. You can do this by selecting the encryption option when you create a new EBS volume, or by creating a snapshot of an existing unencrypted volume, copying the snapshot with encryption enabled, and creating a new volume from the encrypted copy. You can also specify the AWS KMS key that you want to use for encryption, or use the default AWS-managed key. When you attach the encrypted EBS volumes to the EC2 instances, the data will be automatically encrypted and decrypted by the EC2 host. This solution does not require any additional IAM roles, tags, or policies.
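As a sketch, encryption can be requested when the volume is created, or enforced Region-wide with EBS encryption by default; the Availability Zone, size, and KMS key alias below are placeholders:

```shell
# Create a new EBS volume with encryption enabled. The default AWS-managed
# key is used unless --kms-key-id is supplied; all values are placeholders.
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --size 100 \
    --encrypted \
    --kms-key-id alias/my-ebs-key

# Optionally force every new EBS volume in this Region to be encrypted.
aws ec2 enable-ebs-encryption-by-default
```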
Reference: Amazon EBS encryption
Creating an encrypted EBS volume
Encrypting an unencrypted EBS volume
A company needs to create an AWS Lambda function that will run in a VPC in the company’s primary AWS account. The Lambda function needs to access files that the company stores in an Amazon Elastic File System (Amazon EFS) file system. The EFS file system is located in a secondary AWS account. As the company adds files to the file system the solution must scale to meet the demand.
Which solution will meet these requirements MOST cost-effectively?
- A . Create a new EFS file system in the primary account. Use AWS DataSync to copy the contents of the original EFS file system to the new EFS file system.
- B . Create a VPC peering connection between the VPCs that are in the primary account and the secondary account
- C . Create a second Lambda function in the secondary account that has a mount configured for the file system. Use the primary account's Lambda function to invoke the secondary account's Lambda function.
- D . Move the contents of the file system to a Lambda layer. Configure the Lambda layer's permissions to allow the company's secondary account to use the Lambda layer.
B
Explanation:
This option is the most cost-effective and scalable way to allow the Lambda function in the primary account to access the EFS file system in the secondary account. VPC peering enables private connectivity between two VPCs without requiring gateways, VPN connections, or dedicated network connections. The Lambda function can use the VPC peering connection to mount the EFS file system as a local file system and access the files as needed. The solution does not incur additional data transfer or storage costs, and it leverages the existing EFS file system without duplicating or moving the data.
Option A is not cost-effective because it requires creating a new EFS file system and using AWS DataSync to copy the data from the original EFS file system. This would incur additional storage and data transfer costs, and it would not provide real-time access to the files.
Option C is not scalable because it requires creating a second Lambda function in the secondary account and configuring cross-account permissions to invoke it from the primary account. This would add complexity and latency to the solution, and it would increase the Lambda invocation costs.
Option D is not feasible because Lambda layers are not designed to store large amounts of data or provide file system access. Lambda layers are used to share common code or libraries across multiple Lambda functions. Moving the contents of the EFS file system to a Lambda layer would exceed the size limit of 250 MB for a layer, and it would not allow the Lambda function to read or write files to the layer.
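For illustration, the cross-account peering connection could be set up with the AWS CLI as follows (VPC IDs, the peering connection ID, and the account number are placeholders):

```shell
# From the primary account: request peering with the secondary account's VPC.
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-11111111 \
    --peer-vpc-id vpc-22222222 \
    --peer-owner-id 222222222222

# From the secondary account: accept the pending request.
aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-33333333
```

Route table entries in both VPCs must then point the peer VPC's CIDR range at the peering connection before the Lambda function can reach the EFS mount targets.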
Reference:
What Is VPC Peering?
Using Amazon EFS file systems with AWS Lambda
What Are Lambda Layers?
A 4-year-old media company is using the AWS Organizations all features feature set to organize its AWS accounts. According to the company's finance team, the billing information on the member accounts must not be accessible to anyone, including the root user of the member accounts.
Which solution will meet these requirements?
- A . Add all finance team users to an IAM group. Attach an AWS managed policy named Billing to the group.
- B . Attach an identity-based policy to deny access to the billing information to all users, including the root user.
- C . Create a service control policy (SCP) to deny access to the billing information. Attach the SCP to the root organizational unit (OU).
- D . Convert from the Organizations all features feature set to the Organizations consolidated billing feature set.
C
Explanation:
Service Control Policies (SCP): SCPs are an integral part of AWS Organizations and allow you to set fine-grained permissions on the organizational units (OUs) within your AWS Organization. SCPs provide central control over the maximum permissions that can be granted to member accounts, including the root user.
Denying Access to Billing Information: By creating an SCP and attaching it to the root OU, you can explicitly deny access to billing information for all accounts within the organization. SCPs can be used to restrict access to various AWS services and actions, including billing-related services.
Granular Control: SCPs enable you to define specific permissions and restrictions at the organizational unit level. By denying access to billing information at the root OU, you can ensure that no member accounts, including root users, have access to the billing information.
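A minimal SCP along these lines might deny the Billing console actions. The `aws-portal` actions shown apply to accounts on the older billing-permissions model (newer accounts use the `billing:` and `payments:` action namespaces), so treat this as an illustrative sketch rather than a complete policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyBillingAccess",
      "Effect": "Deny",
      "Action": [
        "aws-portal:ViewBilling",
        "aws-portal:ViewPaymentMethods",
        "aws-portal:ModifyBilling",
        "aws-portal:ModifyPaymentMethods"
      ],
      "Resource": "*"
    }
  ]
}
```

Attached to the root OU, this denial applies to every principal in every member account, including each member account's root user (SCPs never restrict the management account itself).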
A company hosts a web application on multiple Amazon EC2 instances. The EC2 instances are in an Auto Scaling group that scales in response to user demand. The company wants to optimize cost savings without making a long-term commitment
Which EC2 instance purchasing option should a solutions architect recommend to meet these requirements?
- A . Dedicated Instances only
- B . On-Demand Instances only
- C . A mix of On-Demand instances and Spot Instances
- D . A mix of On-Demand instances and Reserved instances
C
Explanation:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-mixed-instances-groups.html
A company hosts a containerized web application on a fleet of on-premises servers that process incoming requests. The number of requests is growing quickly. The on-premises servers cannot handle the increased number of requests. The company wants to move the application to AWS with minimum code changes and minimum development effort.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use AWS Fargate on Amazon Elastic Container Service (Amazon ECS) to run the containerized web application with Service Auto Scaling. Use an Application Load Balancer to distribute the incoming requests.
- B . Use two Amazon EC2 instances to host the containerized web application. Use an Application Load Balancer to distribute the incoming requests
- C . Use AWS Lambda with a new code that uses one of the supported languages. Create multiple Lambda functions to support the load. Use Amazon API Gateway as an entry point to the Lambda functions.
- D . Use a high performance computing (HPC) solution such as AWS ParallelCluster to establish an HPC cluster that can process the incoming requests at the appropriate scale.
A
Explanation:
AWS Fargate is a serverless compute engine that lets users run containers without having to manage servers or clusters of Amazon EC2 instances1. Users can use AWS Fargate on Amazon Elastic Container Service (Amazon ECS) to run the containerized web application with Service Auto Scaling. Amazon ECS is a fully managed container orchestration service for running Docker containers2. Service Auto Scaling is a feature that allows users to adjust the desired number of tasks in an ECS service based on CloudWatch metrics, such as CPU utilization or request count3. Users can use AWS Fargate on Amazon ECS to migrate the application to AWS with minimum code changes and minimum development effort, as they only need to package their application in containers and specify the CPU and memory requirements.
Users can also use an Application Load Balancer to distribute the incoming requests. An Application Load Balancer is a load balancer that operates at the application layer and routes traffic to targets based on the content of the request. Users can register their ECS tasks as targets for an Application Load Balancer and configure listener rules to route requests to different target groups based on path or host headers. Users can use an Application Load Balancer to improve the availability and performance of their web application.
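As a sketch of the Service Auto Scaling piece, the ECS service can be registered as a scalable target and given a target-tracking policy on average CPU utilization (the cluster and service names, capacity limits, and target value below are placeholders):

```shell
# Register the ECS service's desired task count as a scalable target.
aws application-autoscaling register-scalable-target \
    --service-namespace ecs \
    --resource-id service/my-cluster/my-web-service \
    --scalable-dimension ecs:service:DesiredCount \
    --min-capacity 2 \
    --max-capacity 20

# Scale the task count to hold average CPU utilization near 60%.
aws application-autoscaling put-scaling-policy \
    --service-namespace ecs \
    --resource-id service/my-cluster/my-web-service \
    --scalable-dimension ecs:service:DesiredCount \
    --policy-name cpu-target-tracking \
    --policy-type TargetTrackingScaling \
    --target-tracking-scaling-policy-configuration '{"TargetValue": 60.0, "PredefinedMetricSpecification": {"PredefinedMetricType": "ECSServiceAverageCPUUtilization"}}'
```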
An ecommerce company runs several internal applications in multiple AWS accounts. The company uses AWS Organizations to manage its AWS accounts.
A security appliance in the company’s networking account must inspect interactions between applications across AWS accounts.
Which solution will meet these requirements?
- A . Deploy a Network Load Balancer (NLB) in the networking account to send traffic to the security appliance. Configure the application accounts to send traffic to the NLB by using an interface VPC endpoint in the application accounts
- B . Deploy an Application Load Balancer (ALB) in the application accounts to send traffic directly to the security appliance.
- C . Deploy a Gateway Load Balancer (GWLB) in the networking account to send traffic to the security appliance. Configure the application accounts to send traffic to the GWLB by using a Gateway Load Balancer endpoint in the application accounts.
- D . Deploy an interface VPC endpoint in the application accounts to send traffic directly to the security appliance.
C
Explanation:
The Gateway Load Balancer (GWLB) is specifically designed to route traffic through a security appliance in a hub-and-spoke model, making it the ideal solution for inspecting traffic between multiple AWS accounts. GWLB enables you to simplify, scale, and deploy third-party virtual appliances transparently, and it can work across multiple VPCs or accounts using interface endpoints (Gateway Load Balancer Endpoints).
Key AWS features:
Traffic Inspection: The GWLB allows the centralized security appliance to inspect traffic between different VPCs, making it suitable for inspecting inter-account interactions.
Interface VPC Endpoints: By using interface endpoints in the application accounts, traffic can securely and efficiently be routed to the security appliance in the networking account.
AWS Documentation: The use of GWLB aligns with AWS’s best practices for centralized network security, simplifying architecture and reducing operational complexity.
A solutions architect needs to host a high performance computing (HPC) workload in the AWS Cloud. The workload will run on hundreds of Amazon EC2 instances and will require parallel access to a shared file system to enable distributed processing of large datasets. Datasets will be accessed across multiple instances simultaneously. The workload requires access latency within 1 ms. After processing has completed, engineers will need access to the dataset for manual postprocessing.
Which solution will meet these requirements?
- A . Use Amazon Elastic File System (Amazon EFS) as a shared file system. Access the dataset from Amazon EFS.
- B . Mount an Amazon S3 bucket to serve as the shared file system. Perform postprocessing directly from the S3 bucket.
- C . Use Amazon FSx for Lustre as a shared file system. Link the file system to an Amazon S3 bucket for postprocessing.
- D . Configure AWS Resource Access Manager to share an Amazon S3 bucket so that it can be mounted to all instances for processing and postprocessing.
C
Explanation:
Amazon FSx for Lustre is the ideal solution for high-performance computing (HPC) workloads that require parallel access to a shared file system with low latency. FSx for Lustre is designed specifically to meet the needs of such workloads, offering sub-millisecond latencies, which makes it well-suited for the 1 ms latency requirement mentioned in the question.
Here is why FSx for Lustre is the best fit:
Parallel File System: FSx for Lustre is a parallel file system that can scale across hundreds of Amazon EC2 instances, providing high throughput and low-latency access to data. It is optimized for processing large datasets in parallel, which is essential for HPC workloads.
Low Latency: FSx for Lustre is capable of providing access latencies well within 1 ms, making it ideal for performance-sensitive workloads like HPC.
Seamless Integration with Amazon S3: FSx for Lustre can be linked to an Amazon S3 bucket. This integration allows data to be imported from S3 into FSx for Lustre before the workload begins and exported back to S3 after processing. This feature is crucial for manual postprocessing because it enables engineers to access the dataset in S3 after processing.
Performance: FSx for Lustre is built for workloads that require high performance, such as machine learning, analytics, media processing, and financial simulations, which are typical for HPC environments.
In contrast:
Amazon EFS (Option A): While EFS provides shared file storage and scales across multiple EC2 instances, it does not offer the same level of performance or sub-millisecond latencies as FSx for Lustre. EFS is more suited for general-purpose workloads, not high-performance computing.
Mounting S3 as a file system (Option B and D): S3 is object storage, not a file system designed for low-latency access and parallel processing. Mounting S3 buckets directly or using AWS Resource Access Manager to share the bucket would not meet the low-latency (1 ms) or performance requirements needed for HPC workloads.
Therefore, Amazon FSx for Lustre (Option C) is the most appropriate and verified solution for this scenario.
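For illustration, the S3 linkage described above can be specified at file-system creation time; the bucket name, subnet ID, and capacity below are placeholders:

```shell
# Create a Lustre file system linked to an S3 bucket. Objects under
# ImportPath are lazily loaded into Lustre, and results can be exported
# back to ExportPath for manual postprocessing in S3.
aws fsx create-file-system \
    --file-system-type LUSTRE \
    --storage-capacity 1200 \
    --subnet-ids subnet-0123456789abcdef0 \
    --lustre-configuration ImportPath=s3://my-hpc-dataset,ExportPath=s3://my-hpc-dataset/results
```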
AWS
Reference: Amazon FSx for Lustre
Best Practices for High Performance Computing (HPC)
Amazon FSx and Amazon S3 Integration
00/24. The solutions architect needs to create a CIDR block for the new VPC. The CIDR block must be valid for a VPC peering connection to the development VPC.
What is the SMALLEST CIDR block that meets these requirements?
- A . 10.0.1.0/32
- B . 192.168.0.0/24
- C . 192.168.1.0/32
- D . 10.0.1.0/24
D
Explanation:
The allowed VPC block size is between a /16 netmask (largest) and a /28 netmask (smallest), which rules out the /32 options. The CIDR block also must not overlap with any existing CIDR block that is associated with the development VPC. https://docs.aws.amazon.com/vpc/latest/userguide/configure-your-vpc.html
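Both constraints can be checked mechanically with Python's standard `ipaddress` module. The development VPC's CIDR is assumed here to be 192.168.0.0/24 (only "00/24" survives in the truncated question text, and this value is consistent with answer D):

```python
import ipaddress

# Assumption: the development VPC uses 192.168.0.0/24 (inferred, not stated).
DEV_VPC = ipaddress.ip_network("192.168.0.0/24")

def valid_vpc_cidr_for_peering(cidr: str) -> bool:
    """A VPC CIDR must use a /16-/28 netmask and must not overlap the peer VPC."""
    block = ipaddress.ip_network(cidr)
    if not (16 <= block.prefixlen <= 28):
        return False  # rejects the /32 options
    return not block.overlaps(DEV_VPC)

for option in ("10.0.1.0/32", "192.168.0.0/24", "192.168.1.0/32", "10.0.1.0/24"):
    print(option, valid_vpc_cidr_for_peering(option))
```

Only 10.0.1.0/24 passes both checks, matching answer D.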
A company wants to build a map of its IT infrastructure to identify and enforce policies on resources that pose security risks. The company’s security team must be able to query data in the IT infrastructure map and quickly identify security risks.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use Amazon RDS to store the data. Use SQL to query the data to identify security risks.
- B . Use Amazon Neptune to store the data. Use SPARQL to query the data to identify security risks.
- C . Use Amazon Redshift to store the data. Use SQL to query the data to identify security risks.
- D . Use Amazon DynamoDB to store the data. Use PartiQL to query the data to identify security risks.
B
Explanation:
Understanding the Requirement: The company needs to map its IT infrastructure to identify and enforce security policies, with the ability to quickly query and identify security risks.
Analysis of Options:
Amazon RDS: While suitable for relational data, it is not optimized for handling complex relationships and querying those relationships, which is essential for an IT infrastructure map.
Amazon Neptune: A graph database service designed for handling highly connected data. It uses SPARQL to query graph data efficiently, making it ideal for mapping IT infrastructure and identifying relationships that pose security risks.
Amazon Redshift: A data warehouse solution optimized for complex queries on large datasets but not specifically for graph data.
Amazon DynamoDB: A NoSQL database that uses PartiQL for querying, but it is not optimized for complex relationships in graph data.
Best Option for Mapping and Querying IT Infrastructure:
Amazon Neptune provides the most suitable solution with the least operational overhead. It is purpose-built for graph data and enables efficient querying of complex relationships to identify security risks.
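As an illustrative sketch, assuming a hypothetical graph in which resources carry an `exposedTo` relationship and a `riskLevel` property (neither is a real AWS schema), a SPARQL query against Neptune might look like:

```sparql
# Find resources reachable from the public internet with a high risk rating.
# The :exposedTo and :riskLevel predicates are hypothetical.
PREFIX : <http://example.com/infra#>

SELECT ?resource ?risk
WHERE {
  ?resource :exposedTo :Internet ;
            :riskLevel ?risk .
  FILTER (?risk = "high")
}
```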
Reference: Amazon Neptune
Querying with SPARQL
A company’s website uses an Amazon EC2 instance store for its catalog of items. The company wants to make sure that the catalog is highly available and that the catalog is stored in a durable location.
What should a solutions architect do to meet these requirements?
- A . Move the catalog to Amazon ElastiCache for Redis.
- B . Deploy a larger EC2 instance with a larger instance store.
- C . Move the catalog from the instance store to Amazon S3 Glacier Deep Archive.
- D . Move the catalog to an Amazon Elastic File System (Amazon EFS) file system.
D
Explanation:
Moving the catalog to an Amazon Elastic File System (Amazon EFS) file system provides both high availability and durability. Amazon EFS is a fully-managed, highly-available, and durable file system that is built to scale on demand. With Amazon EFS, the catalog data can be stored and accessed from multiple EC2 instances in different availability zones, ensuring high availability. Also, Amazon EFS automatically stores files redundantly within and across multiple availability zones, making it a durable storage option.
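For illustration, each EC2 instance could mount the same EFS file system (the file-system ID and mount point are placeholders), giving every instance, in any Availability Zone, the same durable view of the catalog:

```shell
# Mount the shared file system with the EFS mount helper (requires the
# amazon-efs-utils package; fs-0123456789abcdef0 is a placeholder ID).
sudo mkdir -p /mnt/catalog
sudo mount -t efs -o tls fs-0123456789abcdef0:/ /mnt/catalog
```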