Practice Free SOA-C02 Exam Online Questions
A company is running Amazon RDS for PostgreSQL Multi-AZ DB clusters. The company uses an AWS CloudFormation template to create the databases individually with a default size of 100 GB. The company creates the databases every Monday and deletes the databases every Friday.
Occasionally, the databases run low on disk space and trigger an Amazon CloudWatch alarm. A SysOps administrator must prevent the databases from running low on disk space in the future.
Which solution will meet these requirements with the FEWEST changes to the application?
- A . Modify the CloudFormation template to use Amazon Aurora PostgreSQL as the DB engine.
- B . Modify the CloudFormation template to use Amazon DynamoDB as the database. Activate storage auto scaling during creation of the tables.
- C . Modify the CloudFormation template to activate storage auto scaling on the existing DB instances.
- D . Create a CloudWatch alarm to monitor DB instance storage space. Configure the alarm to invoke the VACUUM command.
C
Explanation:
To prevent Amazon RDS for PostgreSQL Multi-AZ DB instances from running low on disk space, enabling storage auto scaling is the most straightforward solution. This feature automatically increases the storage capacity of the DB instance when it approaches its limit, which prevents the database from running out of space and triggering CloudWatch alarms. Option C is the least intrusive and most effective solution because it only requires a modification to the existing CloudFormation template to enable storage auto scaling. For reference, see the AWS documentation: Managing RDS Storage Automatically.
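As an illustration only, a minimal sketch of a CloudFormation DB instance resource (the logical ID, instance class, credential reference, and sizes are placeholders, not values from the question) that enables storage auto scaling by setting MaxAllocatedStorage alongside AllocatedStorage:
```json
{
  "Resources": {
    "AppDatabase": {
      "Type": "AWS::RDS::DBInstance",
      "Properties": {
        "Engine": "postgres",
        "MultiAZ": true,
        "DBInstanceClass": "db.m6g.large",
        "AllocatedStorage": "100",
        "MaxAllocatedStorage": 500,
        "MasterUsername": "dbadmin",
        "MasterUserPassword": "{{resolve:secretsmanager:app-db-secret:SecretString:password}}"
      }
    }
  }
}
```
With MaxAllocatedStorage set, RDS grows the allocated storage automatically up to that ceiling, so the weekly databases no longer hit the 100 GB limit.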
A company uses AWS CloudFormation to deploy its application infrastructure. Recently, a user accidentally changed a property of a database in a CloudFormation template and performed a stack update that caused an interruption to the application. A SysOps administrator must determine how to modify the deployment process to allow the DevOps team to continue to deploy the infrastructure but protect against accidental modifications to specific resources.
Which solution will meet these requirements?
- A . Set up an AWS Config rule to alert based on changes to any CloudFormation stack. An AWS Lambda function can then describe the stack to determine if any protected resources were modified and cancel the operation.
- B . Set up an Amazon CloudWatch Events event with a rule to trigger based on any CloudFormation API call. An AWS Lambda function can then describe the stack to determine if any protected resources were modified and cancel the operation.
- C . Launch the CloudFormation templates using a stack policy with an explicit allow for all resources and an explicit deny of the protected resources with an action of Update.
- D . Attach an IAM policy to the DevOps team role that prevents a CloudFormation stack from updating, with a condition based on the specific Amazon Resource Names (ARNs) of the protected resources.
C
Explanation:
A stack policy is used to protect specific resources within a CloudFormation stack from being unintentionally updated or deleted. By using a stack policy, you can explicitly deny updates to critical resources while allowing updates to other parts of the stack.
Create a Stack Policy:
Define a JSON stack policy that includes an explicit allow for all resources and an explicit deny for the protected resources. For example:
```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "Update:*",
      "Principal": "*",
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": "Update:*",
      "Principal": "*",
      "Resource": "LogicalResourceId/ProtectedResource"
    }
  ]
}
```
Replace ProtectedResource with the logical ID of the resource to protect, as it appears in the template. Note that a stack policy references resources by logical ID, not by ARN.
Apply the Stack Policy:
Navigate to the CloudFormation console.
Select the stack you want to protect.
Choose "Stack actions" and then "Edit stack policy".
Paste the stack policy JSON and save the policy.
Perform Stack Updates:
When performing stack updates, the stack policy will enforce the rules specified, preventing accidental updates to the protected resources.
Review and Adjust:
Periodically review the stack policy to ensure it still meets the needs of the organization and update it as necessary.
Reference: AWS CloudFormation Stack Policies
Creating and Applying a Stack Policy
A SysOps administrator needs to configure the Amazon Route 53 hosted zone for example.com and www.example.com to point to an Application Load Balancer (ALB).
Which combination of actions should the SysOps administrator take to meet these requirements? (Select TWO.)
- A . Configure an A record for example.com to point to the IP address of the ALB.
- B . Configure an A record for www.example.com to point to the IP address of the ALB.
- C . Configure an alias record for example.com to point to the CNAME of the ALB.
- D . Configure an alias record for www.example.com to point to the Route 53 example.com record.
- E . Configure a CNAME record for example.com to point to the CNAME of the ALB.
C, D
Explanation:
An A record normally points to an IP address. However, you cannot use a plain A record for an Application Load Balancer (ALB) because the IP addresses of an ALB change over time. Instead, use an alias record that points to the DNS name of the ALB. An alias record is a Route 53 extension to DNS that lets you route traffic to selected AWS resources, such as an ALB, by using a friendly DNS name such as example.com instead of the resource's IP address or DNS name. The www.example.com record can then be an alias that points to the example.com record in the same hosted zone.
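For illustration, a hedged sketch of the Route 53 change batch that creates both alias records (the ALB DNS name and both hosted zone IDs are placeholders; the first AliasTarget HostedZoneId must be the ALB's canonical hosted zone ID, and the second is the example.com hosted zone's own ID):
```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "ZALBZONEID12345",
          "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "ZEXAMPLECOMZONE1",
          "DNSName": "example.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```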
A SysOps administrator needs to configure a solution that will deliver digital content to a set of authorized users through Amazon CloudFront. Unauthorized users must be restricted from access.
Which solution will meet these requirements?
- A . Store the digital content in an Amazon S3 bucket that does not have public access blocked. Use signed URLs to access the S3 bucket through CloudFront.
- B . Store the digital content in an Amazon S3 bucket that has public access blocked. Use an origin access identity (OAI) to deliver the content through CloudFront. Restrict S3 bucket access with signed URLs in CloudFront.
- C . Store the digital content in an Amazon S3 bucket that has public access blocked. Use an origin access identity (OAI) to deliver the content through CloudFront. Enable field-level encryption.
- D . Store the digital content in an Amazon S3 bucket that does not have public access blocked. Use signed cookies for restricted delivery of the content through CloudFront.
B
Explanation:
To deliver digital content to authorized users through CloudFront while restricting unauthorized access, you can use an origin access identity (OAI) with signed URLs.
Store Content in S3 with Public Access Blocked:
Ensure the S3 bucket has public access blocked.
Navigate to the S3 console, select the bucket, and configure the "Block Public Access" settings.
Reference: Blocking public access to your Amazon S3 storage
Create an OAI for CloudFront:
In the CloudFront console, create an OAI to securely access the S3 bucket.
Associate the OAI with the CloudFront distribution.
Reference: Using an OAI
Restrict S3 Bucket Access to the OAI:
Update the S3 bucket policy to grant access to the OAI.
Example bucket policy:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <OAI-ID>"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::bucket-name/*"
}
]
}
Use Signed URLs for Restricted Access:
Configure CloudFront to use signed URLs to control access to the content.
Reference: Serving private content with signed URLs and signed cookies
This setup ensures that only authorized users can access the content through CloudFront using signed URLs, while the S3 bucket remains private and secure.
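As a rough sketch only (placeholder values, showing just the relevant fields of the distribution's default cache behavior), requiring signed URLs is typically configured by associating a trusted key group with the cache behavior:
```json
{
  "DefaultCacheBehavior": {
    "TargetOriginId": "s3-private-content",
    "ViewerProtocolPolicy": "redirect-to-https",
    "TrustedKeyGroups": {
      "Enabled": true,
      "Quantity": 1,
      "Items": ["<key-group-id>"]
    }
  }
}
```
With a trusted key group enabled, CloudFront serves a request only if it carries a valid signed URL (or signed cookie) created with one of the key group's key pairs.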
A company uploaded its website files to an Amazon S3 bucket that has S3 Versioning enabled. The company uses an Amazon CloudFront distribution with the S3 bucket as the origin. The company recently modified the files, but the object names remained the same. Users report that old content is still appearing on the website.
How should a SysOps administrator remediate this issue?
- A . Create a CloudFront invalidation, and add the path of the updated files.
- B . Create a CloudFront signed URL to update each object immediately.
- C . Configure an S3 origin access identity (OAI) to display only the updated files to users.
- D . Disable S3 Versioning on the S3 bucket so that the updated files can replace the old files.
A
Explanation:
When users report that old content is still appearing on the website after modifying files in an S3 bucket used by CloudFront, creating a CloudFront invalidation is the best solution.
CloudFront Invalidation:
Invalidation is the process of removing objects from the CloudFront cache before they expire.
This ensures that the updated content is served to the users.
Creating an Invalidation:
Open the CloudFront console.
Select the distribution and go to the "Invalidations" tab.
Create a new invalidation and specify the paths of the updated files.
Reference: Invalidating Files in Amazon CloudFront
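For illustration, the invalidation batch submitted by the console or the CreateInvalidation API looks roughly like this (the paths and caller reference are placeholders):
```json
{
  "Paths": {
    "Quantity": 2,
    "Items": ["/index.html", "/assets/*"]
  },
  "CallerReference": "site-refresh-2024-06-01"
}
```
The caller reference just needs to be a string that is unique per invalidation request; wildcard paths such as /assets/* invalidate everything under that prefix.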
A company is managing a website with a global user base hosted on Amazon EC2 with an Application Load Balancer (ALB). To reduce the load on the web servers, a SysOps administrator configures an Amazon CloudFront distribution with the ALB as the origin. After a week of monitoring the solution, the administrator notices that requests are still being served by the ALB and there is no change in the web server load.
What are possible causes for this problem? (Choose two.)
- A . CloudFront does not have the ALB configured as the origin access identity.
- B . The DNS is still pointing to the ALB instead of the CloudFront distribution.
- C . The ALB security group is not permitting inbound traffic from CloudFront.
- D . The default, minimum, and maximum Time to Live (TTL) are set to 0 seconds on the CloudFront distribution.
- E . The target groups associated with the ALB are configured for sticky sessions.
B, D
Explanation:
To effectively use Amazon CloudFront as a content delivery network for an application using an Application Load Balancer as the origin, several configuration steps need to be correctly implemented:
DNS Configuration: Ensure that the DNS records for the domain serving the content point to the CloudFront distribution’s DNS name rather than directly to the ALB. If the DNS still points to the ALB, users’ requests will bypass CloudFront, leading directly to the ALB and maintaining the existing load on your web servers.
TTL Settings: The Time to Live (TTL) settings in the CloudFront distribution dictate how long the content is cached in CloudFront edge locations before CloudFront fetches a fresh copy from the origin. If the TTL values are set to 0, it means that CloudFront does not cache the content at all, resulting in each user request being forwarded to the ALB, which does not reduce the load.
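For instance, a fragment of a distribution configuration (placeholder origin ID, shown with the legacy per-behavior TTL fields, which a cache policy can replace) where caching is actually enabled:
```json
{
  "DefaultCacheBehavior": {
    "TargetOriginId": "alb-origin",
    "ViewerProtocolPolicy": "redirect-to-https",
    "MinTTL": 0,
    "DefaultTTL": 86400,
    "MaxTTL": 31536000
  }
}
```
If the default, minimum, and maximum TTL values are all 0, every request is forwarded to the ALB, which matches the behavior observed here.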
Reference: For more information on DNS and TTL configuration for CloudFront, see the following AWS documentation:
Configuring DNS
CloudFront TTL Settings.
A SysOps administrator needs to prevent the launch of EC2 instances that are missing a required CostCenter-Project tag in the accounts within an application OU in AWS Organizations.
Which solution will meet this requirement?
- A . Create an IAM group that has a policy allowing ec2:RunInstances when the CostCenter-Project tag is present. Place all IAM users in this group.
- B . Create a service control policy (SCP) that denies ec2:RunInstances when the CostCenter-Project tag is missing. Attach the SCP to the application OU.
- C . Create an IAM role with a policy that allows ec2:RunInstances when the CostCenter-Project tag is present. Attach the IAM role to users in the application OU accounts.
- D . Create a service control policy (SCP) that denies ec2:RunInstances when the CostCenter-Project tag is missing. Attach the SCP to the root OU.
B
Explanation:
An SCP applied to the application OU that denies ec2:RunInstances when the CostCenter-Project tag is missing ensures that all accounts in the OU adhere to the tagging policy. This approach is centralized and applies only to the intended OU.
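A minimal sketch of such an SCP (the tag key CostCenter-Project comes from the answer options; the Sid is arbitrary):
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyRunInstancesWithoutCostCenterProjectTag",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "Null": {
          "aws:RequestTag/CostCenter-Project": "true"
        }
      }
    }
  ]
}
```
The Null condition evaluates to true when the tag is absent from the request, so RunInstances is denied unless the instance is launched with the CostCenter-Project tag.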
A company has an application that is running on Amazon EC2 instances in a VPC. The application needs access to download software updates from the internet. The VPC has public subnets and private subnets. The company's security policy requires all EC2 instances to be deployed in private subnets.
What should a SysOps administrator do to meet these requirements?
- A . Add an internet gateway to the VPC. In the route table for the private subnets, add a route to the internet gateway.
- B . Add a NAT gateway to a private subnet. In the route table for the private subnets, add a route to the NAT gateway.
- C . Add a NAT gateway to a public subnet. In the route table for the private subnets, add a route to the NAT gateway.
- D . Add two internet gateways to the VPC. In the route tables for the private subnets and public subnets, add a route to each internet gateway.
C
Explanation:
To ensure that EC2 instances in private subnets can access the internet for software updates while complying with the security policy that requires instances to be in private subnets, you should use a NAT gateway. A NAT gateway allows instances in private subnets to initiate outbound traffic to the internet but prevents the internet from initiating connections to those instances.
Steps:
Create a NAT Gateway:
Open the Amazon VPC console.
In the navigation pane, choose "NAT Gateways".
Choose "Create NAT Gateway".
Select the public subnet where you want to create the NAT gateway.
Choose an Elastic IP address for the NAT gateway.
Choose "Create a NAT Gateway".
Update the Route Table for Private Subnets:
Open the Amazon VPC console.
In the navigation pane, choose "Route Tables".
Select the route table associated with your private subnets.
Choose the "Routes" tab and then "Edit routes".
Add a route with the destination 0.0.0.0/0 and the target as the NAT gateway ID.
Save the changes.
This setup ensures that instances in private subnets can access the internet via the NAT gateway in the public subnet.
Reference: AWS NAT Gateway Documentation
VPC Route Tables
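As a rough CloudFormation sketch of this setup (PublicSubnetId and PrivateRouteTableId are assumed parameters, not values from the question):
```json
{
  "Resources": {
    "NatEip": {
      "Type": "AWS::EC2::EIP",
      "Properties": { "Domain": "vpc" }
    },
    "NatGateway": {
      "Type": "AWS::EC2::NatGateway",
      "Properties": {
        "AllocationId": { "Fn::GetAtt": ["NatEip", "AllocationId"] },
        "SubnetId": { "Ref": "PublicSubnetId" }
      }
    },
    "PrivateDefaultRoute": {
      "Type": "AWS::EC2::Route",
      "Properties": {
        "RouteTableId": { "Ref": "PrivateRouteTableId" },
        "DestinationCidrBlock": "0.0.0.0/0",
        "NatGatewayId": { "Ref": "NatGateway" }
      }
    }
  }
}
```
The NAT gateway itself lives in a public subnet (option C); only the private subnets' route table points 0.0.0.0/0 at it.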
A company is running an application on a group of Amazon EC2 instances behind an Application Load Balancer. The EC2 instances run across three Availability Zones. The company needs to provide its customers with a maximum of two static IP addresses for their applications.
How should a SysOps administrator meet these requirements?
- A . Add AWS Global Accelerator in front of the Application Load Balancer
- B . Add an internal Network Load Balancer behind the Application Load Balancer
- C . Configure the Application Load Balancer in only two Availability Zones.
- D . Create two Elastic IP addresses and assign them to the Application Load Balancer.
A
Explanation:
AWS Global Accelerator:
AWS Global Accelerator is a service that improves the availability and performance of your applications with a global user base. It provides static IP addresses that act as a fixed entry point to your application endpoints (such as ALBs).
Steps:
Go to the AWS Management Console.
Navigate to Global Accelerator.
Click on "Create accelerator."
Configure the accelerator by providing a name and adding listeners.
Add your Application Load Balancer as an endpoint.
Allocate two static IP addresses.
This setup ensures that your application is accessible via two static IP addresses, fulfilling the requirement.
Reference: AWS Global Accelerator
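As a hedged CloudFormation sketch of this architecture (AlbArn is an assumed parameter holding the ALB's ARN; the Region and ports are placeholders):
```json
{
  "Resources": {
    "Accelerator": {
      "Type": "AWS::GlobalAccelerator::Accelerator",
      "Properties": { "Name": "app-accelerator", "Enabled": true }
    },
    "Listener": {
      "Type": "AWS::GlobalAccelerator::Listener",
      "Properties": {
        "AcceleratorArn": { "Ref": "Accelerator" },
        "Protocol": "TCP",
        "PortRanges": [ { "FromPort": 443, "ToPort": 443 } ]
      }
    },
    "EndpointGroup": {
      "Type": "AWS::GlobalAccelerator::EndpointGroup",
      "Properties": {
        "ListenerArn": { "Ref": "Listener" },
        "EndpointGroupRegion": "us-east-1",
        "EndpointConfigurations": [ { "EndpointId": { "Ref": "AlbArn" } } ]
      }
    }
  }
}
```
The accelerator provides two static anycast IP addresses by default, which matches the customers' requirement of at most two static IP addresses.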
A company manages a set of accounts on AWS by using AWS Organizations. The company’s security team wants to use a native AWS service to regularly scan all AWS accounts against the Center for Internet Security (CIS) AWS Foundations Benchmark.
What is the MOST operationally efficient way to meet these requirements?
- A . Designate a central security account as the AWS Security Hub administrator account. Create a script that sends an invitation from the Security Hub administrator account and accepts the invitation from the member account. Run the script every time a new account is created. Configure Security Hub to run the CIS AWS Foundations Benchmark scans.
- B . Run the CIS AWS Foundations Benchmark across all accounts by using Amazon Inspector.
- C . Designate a central security account as the Amazon GuardDuty administrator account. Create a script that sends an invitation from the GuardDuty administrator account and accepts the invitation from the member account. Run the script every time a new account is created. Configure GuardDuty to run the CIS AWS Foundations Benchmark scans.
- D . Designate an AWS Security Hub administrator account. Configure new accounts in the organization to automatically become member accounts. Enable CIS AWS Foundations Benchmark scans.
D
Explanation:
To ensure comprehensive and automated security scanning across multiple AWS accounts:
Security Hub Administrator Account: Designate one account within AWS Organizations as the Security Hub administrator account. This centralizes security findings management.
Automate Account Association: Configure Security Hub to automatically associate new accounts in the organization as member accounts. This ensures all new and existing accounts are continuously monitored under the same security policies.
Enable CIS Benchmark Scans: Within Security Hub, enable the CIS AWS Foundations Benchmark standard. This automatically scans all member accounts against this set of security best practices and compliance standards.
This configuration provides an operationally efficient and scalable way to manage security and compliance across an extensive AWS environment, leveraging the native integration of AWS services.
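As a hedged sketch of what each member account ends up running (the standard ARN varies by Region and benchmark version, and auto-enrollment of new accounts is configured from the delegated administrator account rather than in this template):
```json
{
  "Resources": {
    "SecurityHub": {
      "Type": "AWS::SecurityHub::Hub"
    },
    "CisStandard": {
      "Type": "AWS::SecurityHub::Standard",
      "Properties": {
        "StandardsArn": "arn:aws:securityhub:us-east-1::standards/cis-aws-foundations-benchmark/v/1.4.0"
      }
    }
  }
}
```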