Practice Free SOA-C02 Exam Online Questions
A company’s SysOps administrator manages a fleet of hundreds of Amazon EC2 instances that run Windows-based workloads and Linux-based workloads. Each EC2 instance has a tag that identifies its operating system. All the EC2 instances run AWS Systems Manager Session Manager.
A zero-day vulnerability is reported, and no patches are available. The company’s security team provides code for all the relevant operating systems to reduce the risk of the vulnerability. The SysOps administrator needs to implement the code on the EC2 instances and must provide a report that shows that the code has successfully run on all the instances.
What should the SysOps administrator do to meet these requirements as quickly as possible?
- A . Use Systems Manager Run Command. Choose either the AWS-RunShellScript document or the AWS-RunPowerShellScript document. Configure Run Command with the code from the security team. Specify the operating system tag in the Targets parameter. Run the command. Provide the command history as evidence to the security team.
- B . Create an AWS Lambda function that connects to the EC2 instances through Session Manager. Configure the Lambda function to identify the operating system, run the code from the security team, and return the results to an Amazon RDS DB instance. Query the DB instance for the results. Provide the results as evidence to the security team.
- C . Log on to each EC2 instance. Run the code from the security team on each EC2 instance. Copy and paste the results of each run into a single spreadsheet. Provide the spreadsheet as evidence to the security team.
- D . Update the launch templates of the EC2 instances to include the code from the security team in the user data. Relaunch the EC2 instances by using the updated launch templates. Retrieve the EC2 instance logs of each instance. Provide the EC2 instance logs as evidence to the security team.
A
Explanation:
AWS Systems Manager Run Command provides an efficient method to execute administrative tasks on EC2 instances. This solution will minimize the time and complexity involved:
Select Document: Choose AWS-RunShellScript for Linux-based instances or AWS-RunPowerShellScript for Windows-based instances.
Configure Command: Enter the mitigation script provided by the security team into the command document.
Target Instances: Use the tagging system to target only the instances that match the specific OS as identified by their tags.
Execute Command: Run the command across the targeted instances.
Verification and Reporting: The command history in Systems Manager will serve as evidence of execution and success, which can be reported back to the security team.
Reference: AWS Systems Manager Run Command
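As a rough illustration, the same operation can be scripted with boto3. This sketch assumes the operating system tag key is named OS; the tag key and the script body are placeholders, not values from the question:

```python
import boto3

ssm = boto3.client("ssm")

# Target Linux instances by their OS tag and run the security team's script.
# The tag key "OS" and the command body are placeholders for this sketch.
response = ssm.send_command(
    DocumentName="AWS-RunShellScript",
    Targets=[{"Key": "tag:OS", "Values": ["Linux"]}],
    Parameters={"commands": ["echo 'mitigation code from the security team'"]},
    Comment="Zero-day mitigation",
)
command_id = response["Command"]["CommandId"]

# The per-instance invocation records double as the execution report.
invocations = ssm.list_command_invocations(CommandId=command_id, Details=True)
for inv in invocations["CommandInvocations"]:
    print(inv["InstanceId"], inv["Status"])
```

Repeating the call with AWS-RunPowerShellScript and the Windows tag value covers the Windows fleet; the invocation list for both command IDs forms the evidence for the security team.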
A SysOps administrator needs to give users the ability to upload objects to an Amazon S3 bucket. The SysOps administrator creates a presigned URL and provides the URL to a user, but the user cannot upload an object to the S3 bucket. The presigned URL has not expired, and no bucket policy is applied to the S3 bucket.
Which of the following could be the cause of this problem?
- A . The user has not properly configured the AWS CLI with their access key and secret access key.
- B . The SysOps administrator does not have the necessary permissions to upload the object to the S3 bucket.
- C . The SysOps administrator must apply a bucket policy to the S3 bucket to allow the user to upload the object.
- D . The object already has been uploaded through the use of the presigned URL, so the presigned URL is no longer valid.
B
Explanation:
Step-by-Step
Understand the Problem:
A user cannot upload an object to an S3 bucket using a presigned URL, even though the URL is valid and the bucket has no policy applied.
Analyze the Requirements:
Determine the cause of the issue preventing the upload via the presigned URL.
Evaluate the Options:
Option A: The user has not properly configured the AWS CLI.
CLI configuration is not relevant to using a presigned URL.
Option B: The SysOps administrator does not have the necessary permissions.
A presigned URL is signed with the credentials of the IAM identity that creates it. If the SysOps administrator lacks s3:PutObject permission on the bucket, uploads through the URL fail.
Option C: A bucket policy is required.
A bucket policy is not necessary if the presigned URL has the correct permissions.
Option D: The object has already been uploaded.
A presigned URL can be used multiple times until it expires; a prior upload does not invalidate it.
Select the Best Solution:
Option B: Ensuring that the SysOps administrator has the necessary permissions to upload objects to the S3 bucket is crucial for generating valid presigned URLs.
Reference: Amazon S3 Presigned URLs
IAM Policies for Amazon S3
The SysOps administrator must have the necessary permissions to upload objects to the S3 bucket, ensuring that the presigned URL generated allows the user to upload successfully.
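A minimal boto3 sketch of how a presigned upload URL is generated; the bucket and key names are placeholders. Because the URL is signed with the creator's credentials, the creator's own permissions are what the upload ultimately exercises:

```python
import boto3

s3 = boto3.client("s3")

# The URL is signed with the caller's credentials, so the caller
# (here, the SysOps administrator) needs s3:PutObject on the bucket.
# Bucket and key names below are placeholders.
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "example-bucket", "Key": "uploads/report.csv"},
    ExpiresIn=3600,  # seconds
)
print(url)

# The user can then upload with a plain HTTP PUT, no AWS credentials needed:
#   curl -X PUT --upload-file report.csv "<url>"
```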
A company has a new requirement stating that all resources in AWS must be tagged according to a set policy.
Which AWS service should be used to enforce and continually identify all resources that are not in compliance with the policy?
- A . AWS CloudTrail
- B . Amazon Inspector
- C . AWS Config
- D . AWS Systems Manager
C
Explanation:
Step-by-Step
Understand the Problem:
Enforce a policy that requires all AWS resources to be tagged according to company policy.
Continuously identify non-compliant resources.
Analyze the Requirements:
Implement a solution to monitor and enforce resource tagging compliance.
Evaluate the Options:
Option A: AWS CloudTrail.
Provides logging and monitoring of API calls but does not enforce tagging policies.
Option B: Amazon Inspector.
Primarily used for security assessments, not resource tagging compliance.
Option C: AWS Config.
Monitors and evaluates the configurations of AWS resources.
Can enforce compliance by using AWS Config rules to check resource tags.
Option D: AWS Systems Manager.
Provides operational insights and management but is not specifically designed for compliance enforcement.
Select the Best Solution:
Option C: AWS Config is designed for compliance and configuration monitoring, making it the ideal service for enforcing and identifying non-compliant resource tags.
Reference: AWS Config
Managing Resource Tag Compliance with AWS Config
AWS Config provides the necessary tools to enforce and monitor resource tagging compliance, ensuring all resources adhere to the set policy.
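For illustration, the AWS managed Config rule REQUIRED_TAGS can be enabled with boto3; the rule name and the tag keys shown are placeholders, not part of the question:

```python
import json
import boto3

config = boto3.client("config")

# Enable the AWS managed rule REQUIRED_TAGS; the tag keys are placeholders.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "required-tags-check",
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        "InputParameters": json.dumps({"tag1Key": "CostCenter", "tag2Key": "Owner"}),
    }
)

# List resources the rule has flagged as noncompliant.
result = config.get_compliance_details_by_config_rule(
    ConfigRuleName="required-tags-check",
    ComplianceTypes=["NON_COMPLIANT"],
)
for ev in result["EvaluationResults"]:
    qualifier = ev["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
    print(qualifier["ResourceType"], qualifier["ResourceId"])
```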
A SysOps administrator is optimizing the cost of a workload. The workload is running in multiple AWS Regions and is using AWS Lambda with Amazon EC2 On-Demand Instances for the compute. The overall usage is predictable. The amount of compute that is consumed in each Region varies, depending on the users’ locations.
Which approach should the SysOps administrator use to optimize this workload?
- A . Purchase Compute Savings Plans based on the usage during the past 30 days.
- B . Purchase Convertible Reserved Instances by calculating the usage baseline.
- C . Purchase EC2 Instance Savings Plans based on the usage during the past 30 days.
- D . Purchase Standard Reserved Instances by calculating the usage baseline.
A
Explanation:
To optimize the cost of a workload running in multiple AWS Regions using AWS Lambda and EC2 On-Demand Instances, the SysOps administrator should purchase Compute Savings Plans based on the usage during the past 30 days.
Compute Savings Plans:
Compute Savings Plans provide the most flexibility and can be applied across various compute services such as EC2, AWS Lambda, and Fargate.
They offer significant savings compared to On-Demand pricing.
Analysis and Purchase:
Analyze the compute usage over the past 30 days to determine the baseline usage.
Purchase Compute Savings Plans based on this baseline to maximize savings while maintaining flexibility across different regions and compute services.
Reference: AWS Savings Plans
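As a sketch, Cost Explorer can generate a Compute Savings Plans recommendation directly from the past 30 days of usage. The one-year term and no-upfront payment option below are assumptions for illustration:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Ask for a Compute Savings Plans recommendation based on the past
# 30 days of usage. Term and payment option here are assumptions.
rec = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)
details = rec["SavingsPlansPurchaseRecommendation"][
    "SavingsPlansPurchaseRecommendationDetails"
]
for detail in details:
    print(detail["HourlyCommitmentToPurchase"], detail["EstimatedSavingsAmount"])
```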
A company has an AWS Config rule that identifies open SSH ports in security groups. The rule has an automatic remediation action to delete the SSH inbound rule for noncompliant security groups. However, business units require SSH access and can provide a list of trusted IPs to restrict access.
Which solution will meet these requirements?
- A . Create a new AWS Systems Manager Automation runbook that adds an IP set to the security group’s inbound rule. Update the AWS Config rule to change the automatic remediation action to use the new runbook.
- B . Create a new AWS Systems Manager Automation runbook that updates the security group’s inbound rule with the IP addresses from the business units. Update the AWS Config rule to change the automatic remediation action to use the new runbook.
- C . Create an AWS Lambda function that adds an IP set to the security group’s inbound rule. Update the AWS Config rule to change the automatic remediation action to use the Lambda function.
- D . Create an AWS Lambda function that updates the security group’s inbound rule with the IP addresses from the business units. Update the AWS Config rule to change the automatic remediation action to use the Lambda function.
B
Explanation:
The problem requires modifying the inbound SSH rule to restrict access to a list of trusted IPs instead of deleting it entirely. AWS Config rules can be configured with automatic remediation actions using either Systems Manager Automation runbooks or Lambda functions. However, AWS Systems Manager Automation runbooks are often more appropriate for managing infrastructure changes like security group modifications because they are reusable, parameterized, and easier to audit.
Create a Systems Manager Automation runbook: This runbook will contain steps to add or modify the existing security group rule, allowing SSH access only from the specified IP addresses.
Update the AWS Config rule: Modify the Config rule to call this new runbook for its automatic remediation. This will prevent deletion of the SSH rule and instead update it based on the IP list.
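A boto3 sketch of wiring the Config rule to the custom runbook as its automatic remediation; the rule name, runbook name, parameter names, and CIDR are placeholders:

```python
import boto3

config = boto3.client("config")

# Point the existing rule's automatic remediation at the custom runbook.
# Rule name, runbook name, parameter names, and CIDR are placeholders.
config.put_remediation_configurations(
    RemediationConfigurations=[
        {
            "ConfigRuleName": "restricted-ssh",
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "Custom-RestrictSSHToTrustedIPs",
            "Automatic": True,
            "MaximumAutomaticAttempts": 3,
            "RetryAttemptSeconds": 60,
            "Parameters": {
                # Pass the noncompliant security group ID to the runbook.
                "GroupId": {"ResourceValue": {"Value": "RESOURCE_ID"}},
                # The trusted IP list supplied by the business units.
                "TrustedCidrs": {"StaticValue": {"Values": ["203.0.113.0/24"]}},
            },
        }
    ]
)
```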
A company needs to monitor the disk utilization of Amazon Elastic Block Store (Amazon EBS) volumes. The EBS volumes are attached to Amazon EC2 Linux instances. A SysOps administrator must set up an Amazon CloudWatch alarm that provides an alert when disk utilization increases to more than 80%.
Which combination of steps must the SysOps administrator take to meet these requirements? (Select THREE.)
- A . Create an IAM role that includes the CloudWatchAgentServerPolicy AWS managed policy. Attach the role to the instances.
- B . Create an IAM role that includes the CloudWatchApplicationInsightsReadOnlyAccess AWS managed policy. Attach the role to the instances.
- C . Install and start the CloudWatch agent by using AWS Systems Manager or the command line.
- D . Install and start the CloudWatch agent by using an IAM role. Attach the CloudWatchAgentServerPolicy AWS managed policy to the role.
- E . Configure a CloudWatch alarm to enter ALARM state when the disk_used_percent CloudWatch metric is greater than 80%.
- F . Configure a CloudWatch alarm to enter ALARM state when the disk_used CloudWatch metric is greater than 80% or when the disk_free CloudWatch metric is less than 20%.
A, C, E
Explanation:
Create an IAM role with the CloudWatchAgentServerPolicy:
This policy grants the necessary permissions for the CloudWatch agent to collect and send metrics.
Steps:
Go to the AWS Management Console.
Navigate to IAM and create a new role.
Choose "EC2" as the trusted entity.
Attach the "CloudWatchAgentServerPolicy" managed policy to the role.
Attach this IAM role to your EC2 instances.
Reference: AWS IAM Roles
Install and start the CloudWatch agent:
The CloudWatch agent must be installed and configured to collect disk utilization metrics.
Steps:
Use AWS Systems Manager or SSH to connect to your instances.
Install the CloudWatch agent using the following commands:
sudo yum install amazon-cloudwatch-agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/path/to/your-config-file.json -s
Start the agent:
sudo systemctl start amazon-cloudwatch-agent
Configure a CloudWatch alarm:
Create an alarm based on the disk_used_percent metric.
Steps:
Go to the AWS Management Console.
Navigate to CloudWatch and select "Alarms" from the left-hand menu.
Click on "Create alarm."
Select the disk_used_percent metric.
Set the threshold to 80% and configure the alarm actions (e.g., sending a notification).
Reference: Creating a CloudWatch Alarm
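The alarm step can also be scripted. A boto3 sketch follows; the instance ID, SNS topic, and dimensions are placeholders and must match the dimensions the agent actually publishes under the CWAgent namespace:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the agent-reported disk_used_percent exceeds 80%.
# The dimensions are placeholders and must match what the agent publishes.
cloudwatch.put_metric_alarm(
    AlarmName="root-volume-disk-used-80",
    Namespace="CWAgent",
    MetricName="disk_used_percent",
    Dimensions=[
        {"Name": "InstanceId", "Value": "i-0123456789abcdef0"},
        {"Name": "path", "Value": "/"},
    ],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)
```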
A company is experiencing issues with legacy software running on Amazon EC2 instances. Errors occur when the total CPU utilization on the EC2 instances exceeds 80%. A short-term solution is required while the software is being rewritten. A SysOps administrator is tasked with creating a solution to restart the instances when the CPU utilization rises above 80%.
Which solution meets these requirements with the LEAST operational overhead?
- A . Write a script that monitors the CPU utilization of the EC2 instances and reboots the instances when utilization exceeds 80%. Run the script as a cron job.
- B . Add an Amazon CloudWatch alarm for CPU utilization and configure the alarm action to reboot the EC2 instances.
- C . Create an Amazon EventBridge rule using the predefined patterns for CPU utilization of the EC2 instances. When utilization exceeds 80%, invoke an AWS Lambda function to restart the instances.
- D . Add an Amazon CloudWatch alarm for CPU utilization and configure an AWS Systems Manager Automation runbook to reboot the EC2 instances when utilization exceeds 80%.
B
Explanation:
The simplest and most efficient solution to ensure that EC2 instances are restarted when CPU utilization exceeds 80% is to use Amazon CloudWatch alarms:
Create a CloudWatch Alarm: Navigate to the CloudWatch dashboard in the AWS Management Console and create a new alarm. Set the alarm to monitor the CPU utilization metric of the EC2 instances.
Set the Alarm Condition: Configure the alarm to trigger when the CPU utilization exceeds 80%. You can specify this threshold in the alarm settings.
Configure Alarm Actions: In the actions settings of the alarm, select the option to reboot the instance. This action ensures that the instance is automatically restarted whenever the alarm condition is met, without the need for manual intervention or additional scripts.
This method leverages AWS’s native capabilities, minimizing operational overhead and eliminating the need for external tools or custom scripts.
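A sketch of the same alarm with the built-in EC2 reboot action; the region, account, and instance ID are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Reboot the instance when average CPU exceeds 80%.
# Region and instance ID below are placeholders.
instance_id = "i-0123456789abcdef0"
cloudwatch.put_metric_alarm(
    AlarmName=f"reboot-on-high-cpu-{instance_id}",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    # Built-in EC2 action ARN; no Lambda function or custom script required.
    AlarmActions=["arn:aws:automate:us-east-1:ec2:reboot"],
)
```

The built-in action ARN is what keeps the operational overhead low: there is no code to maintain, only the alarm itself.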
A SysOps administrator recently configured Amazon S3 Cross-Region Replication on an S3 bucket.
Which of the following does this feature replicate to the destination S3 bucket by default?
- A . Objects in the source S3 bucket for which the bucket owner does not have permissions
- B . Objects that are stored in S3 Glacier
- C . Objects that existed before replication was configured
- D . Object metadata
D
Explanation:
Amazon S3 Cross-Region Replication (CRR) is a feature that automatically replicates objects across AWS regions. When CRR is configured, certain aspects are replicated by default, and some are not. Here are the details:
Objects in the source S3 bucket for which the bucket owner does not have permissions: CRR does not replicate objects for which the bucket owner does not have permissions.
Objects that are stored in S3 Glacier: Objects in the S3 Glacier storage class are not replicated by CRR.
Objects that existed before replication was configured: Only objects created or modified after the replication configuration will be replicated. Objects that existed before the configuration are not replicated by default.
Object metadata: CRR replicates the object metadata along with the object to ensure that the replica in the destination bucket is as accurate as possible.
Reference: Amazon S3 Cross-Region Replication
Replication Configuration Examples
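For context, a minimal replication configuration sketch with boto3; the bucket names and role ARN are placeholders, versioning must already be enabled on both buckets, and only objects written after this call takes effect are replicated:

```python
import boto3

s3 = boto3.client("s3")

# Minimal CRR setup; bucket names and the IAM role ARN are placeholders.
# Only objects written AFTER this configuration takes effect are replicated.
s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {},  # empty filter = all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
            }
        ],
    },
)
```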
A SysOps administrator is trying to set up an Amazon Route 53 domain name to route traffic to a website hosted on Amazon S3. The domain name of the website is www.anycompany.com and the S3 bucket name is anycompany-static. After the record set is set up in Route 53, the domain name www.anycompany.com does not seem to work, and the static website is not displayed in the browser.
Which of the following is a cause of this?
- A . The S3 bucket must be configured with Amazon CloudFront first.
- B . The Route 53 record set must have an IAM role that allows access to the S3 bucket.
- C . The Route 53 record set must be in the same region as the S3 bucket.
- D . The S3 bucket name must match the record set name in Route 53.
D
Explanation:
Step-by-Step
Understand the Problem:
The domain www.anycompany.com is configured in Route 53 to route to an S3 static website, but the website does not load in the browser.
The S3 bucket is named anycompany-static.
Analyze the Requirements:
Determine why the Route 53 record set does not resolve to the S3 static website.
Evaluate the Options:
Option A: The S3 bucket must be configured with Amazon CloudFront first.
CloudFront is optional; Route 53 can route directly to an S3 website endpoint.
Option B: The record set must have an IAM role.
Route 53 record sets do not use IAM roles; access to the website is controlled by the bucket's permissions.
Option C: The record set must be in the same region as the S3 bucket.
Route 53 is a global service, so record sets are not tied to a region.
Option D: The bucket name must match the record set name.
To route a domain to an S3 website endpoint with an alias record, the bucket name must exactly match the record set name. A bucket named anycompany-static cannot serve www.anycompany.com.
Select the Best Solution:
Option D: Hosting the website in a bucket named www.anycompany.com allows the Route 53 alias record to resolve to the S3 website endpoint.
Reference: Routing Traffic to a Website Hosted in an Amazon S3 Bucket
A bucket whose name matches the domain name is a prerequisite for routing Route 53 traffic to an S3 static website endpoint.
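An illustrative alias record for an S3 website endpoint. The hosted zone ID for the anycompany.com zone is a placeholder; Z3AQBSTGFYJSTF is the fixed, region-specific S3 website hosted zone ID, with us-east-1 assumed for this sketch:

```python
import boto3

route53 = boto3.client("route53")

# Alias record pointing www.anycompany.com at its S3 website endpoint.
# This only resolves if a bucket named exactly "www.anycompany.com" exists.
# Z3AQBSTGFYJSTF is the S3 website hosted zone ID for us-east-1 (region
# assumed for this sketch); "Z111111111111" is a placeholder.
route53.change_resource_record_sets(
    HostedZoneId="Z111111111111",  # placeholder: the anycompany.com hosted zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.anycompany.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z3AQBSTGFYJSTF",
                        "DNSName": "s3-website-us-east-1.amazonaws.com",
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)
```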
A company is managing multiple AWS accounts in AWS Organizations. The company is reviewing the internal security of its AWS environment. The company's security administrator has their own AWS account and wants to review the VPC configuration of developer AWS accounts.
Which solution will meet these requirements in the MOST secure manner?
- A . Create an IAM policy in each developer account that has read-only access related to VPC resources. Assign the policy to an IAM user. Share the user credentials with the security administrator.
- B . Create an IAM policy in each developer account that has administrator access to all Amazon EC2 actions, including VPC actions. Assign the policy to an IAM user. Share the user credentials with the security administrator.
- C . Create an IAM policy in each developer account that has administrator access related to VPC resources. Assign the policy to a cross-account IAM role. Ask the security administrator to assume the role from their account.
- D . Create an IAM policy in each developer account that has read-only access related to VPC resources. Assign the policy to a cross-account IAM role. Ask the security administrator to assume the role from their account.
D
Explanation:
To review the VPC configuration of developer AWS accounts securely, the best practice is to use cross-account IAM roles with read-only access.
Create an IAM Policy with Read-Only Access:
Navigate to the IAM console in each developer account.
Create a new policy with read-only access to VPC resources. For example:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeVpcs",
"ec2:DescribeSubnets",
"ec2:DescribeRouteTables",
"ec2:DescribeSecurityGroups",
"ec2:DescribeNetworkAcls"
],
"Resource": "*"
}
]
}
Save the policy.
Create a Cross-Account IAM Role:
In the IAM console, choose "Roles" and then "Create role".
Select "Another AWS account" and enter the AWS account ID of the security administrator’s account.
Attach the read-only policy created in step 1 to the role.
Save the role and note the role ARN.
Assume the Role from the Security Administrator’s Account:
In the security administrator’s account, navigate to the IAM console.
Use the "Switch Role" option to assume the cross-account role created in the developer account using the role ARN.
The security administrator can now access the VPC configuration of the developer accounts with read-only permissions.
Reference: Cross-Account Access
Creating and Managing IAM Policies
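A boto3 sketch of the final step from the security administrator's account; the role ARN and session name are placeholders:

```python
import boto3

sts = boto3.client("sts")

# Assume the read-only role in a developer account; the ARN is a placeholder.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/vpc-read-only-audit",
    RoleSessionName="vpc-security-review",
)["Credentials"]

# Use the temporary credentials to inspect the developer account's VPCs.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
for vpc in ec2.describe_vpcs()["Vpcs"]:
    print(vpc["VpcId"], vpc["CidrBlock"])
```

Because the credentials are temporary and scoped to read-only VPC actions, no long-lived secrets are shared between accounts, which is what makes this the most secure option.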