Practice Free SOA-C02 Exam Online Questions
A development team recently deployed a new version of a web application to production. After the release, penetration testing revealed a cross-site scripting (XSS) vulnerability that could expose user data.
Which AWS service will mitigate this issue?
- A . AWS Shield Standard
- B . AWS WAF
- C . Elastic Load Balancing
- D . Amazon Cognito
B
Explanation:
AWS WAF (Web Application Firewall) helps protect web applications from common web exploits that could affect availability, compromise security, or consume excessive resources. It can be used to mitigate cross-site scripting (XSS) vulnerabilities.
Set Up AWS WAF:
Open the AWS WAF console.
Create a new Web ACL.
Add Rules for Protection:
Add managed rules that include protection against common vulnerabilities, including XSS.
AWS provides managed rule groups, such as the Core rule set (AWSManagedRulesCommonRuleSet), which includes protections against XSS.
Associate WAF with the Application:
Associate the Web ACL with the resources you want to protect (e.g., CloudFront distribution, Application Load Balancer).
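The steps above can be sketched with boto3; the web ACL name, metric names, and ALB ARN below are hypothetical placeholders, and the calls require AWS credentials to actually run.

```python
# Sketch of the WAF setup: create a web ACL with the AWS managed Core
# rule set (which includes XSS protections) and attach it to an ALB.

ALB_ARN = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
           "loadbalancer/app/my-alb/1234567890abcdef")  # hypothetical ARN

# Rule entry that attaches the AWS managed Core rule set.
COMMON_RULE_SET = {
    "Name": "AWS-AWSManagedRulesCommonRuleSet",
    "Priority": 0,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesCommonRuleSet",
        }
    },
    "OverrideAction": {"None": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "CommonRuleSet",
    },
}

def create_and_associate_web_acl():
    import boto3  # AWS SDK for Python; needs credentials configured
    wafv2 = boto3.client("wafv2", region_name="us-east-1")
    acl = wafv2.create_web_acl(
        Name="xss-protection-acl",       # hypothetical name
        Scope="REGIONAL",                # use CLOUDFRONT for CloudFront distributions
        DefaultAction={"Allow": {}},
        Rules=[COMMON_RULE_SET],
        VisibilityConfig={
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "xss-protection-acl",
        },
    )
    # Attach the web ACL to the resource that fronts the application.
    wafv2.associate_web_acl(WebACLArn=acl["Summary"]["ARN"], ResourceArn=ALB_ARN)
```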
Reference: AWS WAF
AWS WAF Managed Rules
While setting up an AWS managed VPN connection, a SysOps administrator creates a customer gateway resource in AWS. The customer gateway device resides in a data center with a NAT device in front of it.
What address should be used to create the customer gateway resource?
- A . The private IP address of the customer gateway device
- B . The MAC address of the NAT device in front of the customer gateway device
- C . The public IP address of the customer gateway device
- D . The public IP address of the NAT device in front of the customer gateway device
D
Explanation:
When setting up an AWS managed VPN connection and creating a customer gateway resource, if the customer gateway device resides behind a NAT device, you should use the public IP address of the NAT device. This is because the VPN connection from AWS will be established to the public IP address that AWS can reach.
Identify the Public IP Address of the NAT Device:
Determine the public IP address assigned to the NAT device in front of the customer gateway.
Create Customer Gateway Resource:
Navigate to the VPC console in the AWS Management Console.
In the navigation pane, choose "Customer Gateways" and then click "Create Customer Gateway".
Enter a name for the customer gateway.
For the "IP Address", enter the public IP address of the NAT device.
Configure VPN Connection:
Create a VPN connection by navigating to the "VPN Connections" section and clicking "Create VPN Connection".
Select the created customer gateway and complete the VPN setup wizard.
Update Routing and Configuration:
Ensure that the routing configurations on both the AWS side and the on-premises side are updated to route traffic through the VPN connection.
Configure the customer gateway device (behind the NAT) to accept traffic from the NAT device and route it appropriately.
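The create step above can be sketched with boto3; the NAT public IP below is a hypothetical placeholder. Note that the address passed is the NAT device's public IP, not the gateway device's private IP.

```python
def customer_gateway_params(nat_public_ip: str) -> dict:
    # Parameters for ec2.create_customer_gateway. The IP address must be
    # the public IP of the NAT device, because that is the endpoint AWS
    # reaches when it brings up the VPN tunnels.
    return {
        "Type": "ipsec.1",          # the only supported VPN type
        "PublicIp": nat_public_ip,  # NAT device's public IP, not a private address
        "BgpAsn": 65000,            # ASN for dynamic (BGP) routing
    }

# Hypothetical usage (requires boto3 and AWS credentials):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.create_customer_gateway(**customer_gateway_params("203.0.113.10"))
```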
Reference: AWS Managed VPN Connections
Customer Gateway Resource
A SysOps administrator is responsible for managing a fleet of Amazon EC2 instances. These EC2 instances upload build artifacts to a third-party service. The third-party service recently implemented a strict IP allow list that requires all build uploads to come from a single IP address.
What change should the systems administrator make to the existing build fleet to comply with this new requirement?
- A . Move all of the EC2 instances behind a NAT gateway and provide the gateway IP address to the service.
- B . Move all of the EC2 instances behind an internet gateway and provide the gateway IP address to the service.
- C . Move all of the EC2 instances into a single Availability Zone and provide the Availability Zone IP address to the service.
- D . Move all of the EC2 instances to a peered VPC and provide the VPC IP address to the service.
A
Explanation:
To ensure all EC2 instances upload build artifacts through a single IP address:
A: Move all of the EC2 instances behind a NAT gateway. Provide the IP address of the NAT gateway to the third-party service for the allow list. A NAT gateway enables instances in a private subnet to connect to services outside AWS (such as a third-party service) but prevents the internet from initiating connections with those instances. Using a NAT gateway standardizes all outgoing traffic to use a single IP address. More information on NAT gateways can be found in AWS documentation NAT Gateways.
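As a sketch, the route that sends the build fleet's internet traffic through the NAT gateway looks like this; the route table and gateway IDs below are hypothetical placeholders.

```python
def nat_default_route(nat_gateway_id: str) -> dict:
    # Route table entry for the private subnets: all internet-bound
    # traffic goes through the NAT gateway, so it leaves AWS from the
    # NAT gateway's single Elastic IP address.
    return {
        "DestinationCidrBlock": "0.0.0.0/0",
        "NatGatewayId": nat_gateway_id,
    }

# Hypothetical usage (requires boto3 and AWS credentials):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.create_route(RouteTableId="rtb-0abc1234", **nat_default_route("nat-0abc1234"))
```

The Elastic IP attached to the NAT gateway is the single address to give to the third-party service for its allow list.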
A company has multiple Amazon EC2 instances that run a resource-intensive application in a development environment. A SysOps administrator is implementing a solution to stop these EC2 instances when they are not in use.
Which solution will meet this requirement?
- A . Assess AWS CloudTrail logs to verify that there is no EC2 API activity. Invoke an AWS Lambda function to stop the EC2 instances.
- B . Create an Amazon CloudWatch alarm to stop the EC2 instances when the average CPU utilization is lower than 5% for a 30-minute period.
- C . Create an Amazon CloudWatch metric to stop the EC2 instances when the VolumeReadBytes metric is lower than 500 for a 30-minute period.
- D . Use AWS Config to invoke an AWS Lambda function to stop the EC2 instances based on resource configuration changes.
B
Explanation:
To stop EC2 instances in a development environment when they are not in use, you can create a CloudWatch alarm based on CPU utilization.
Create CloudWatch Alarm:
Navigate to the CloudWatch console.
Select "Alarms" and click on "Create Alarm".
Choose the EC2 instance metric for CPU utilization.
Set the condition to trigger the alarm when the average CPU utilization is less than 5% for a continuous 30-minute period.
Reference: Creating Amazon CloudWatch Alarms
Configure Alarm Actions:
In the actions section of the alarm creation, specify the action to stop the instance.
This can be done either through the alarm's built-in EC2 stop action or by invoking an AWS Lambda function.
Example using Lambda:
import boto3

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')
    # Stop the target instance; replace 'instance-id' with the real instance ID.
    response = ec2.stop_instances(
        InstanceIds=['instance-id']
    )
    return response
Reference: Using Amazon CloudWatch Alarms
By setting up this CloudWatch alarm, the EC2 instances will automatically stop when they are not being utilized, reducing costs in the development environment.
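The same result is available without Lambda by attaching the built-in EC2 stop action to the alarm. A sketch of the alarm parameters, assuming a hypothetical instance ID:

```python
def stop_on_idle_alarm_params(instance_id: str, region: str = "us-east-1") -> dict:
    # CloudWatch alarm definition: stop the instance when average CPU
    # utilization stays below 5% for 30 minutes (6 periods of 5 minutes).
    return {
        "AlarmName": f"stop-idle-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 6,
        "Threshold": 5.0,
        "ComparisonOperator": "LessThanThreshold",
        # Built-in EC2 stop action -- no Lambda function required.
        "AlarmActions": [f"arn:aws:automate:{region}:ec2:stop"],
    }

# Hypothetical usage (requires boto3 and AWS credentials):
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**stop_on_idle_alarm_params("i-0abc1234"))
```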
A company uses AWS CloudFormation to manage a stack of Amazon EC2 instances on AWS. A SysOps administrator needs to keep the instances and all of the instances’ data, even if someone deletes the stack.
Which solution will meet these requirements?
- A . Set the DeletionPolicy attribute to Snapshot for the EC2 instance resource in the CloudFormation template.
- B . Automate backups by using Amazon Data Lifecycle Manager (Amazon DLM).
- C . Create a backup plan in AWS Backup.
- D . Set the DeletionPolicy attribute to Retain for the EC2 instance resource in the CloudFormation template.
D
Explanation:
To prevent the EC2 instances and their data from being deleted when a CloudFormation stack is deleted:
DeletionPolicy Attribute: In the CloudFormation template that defines the EC2 instances, set the DeletionPolicy attribute to Retain for each EC2 instance resource. This setting ensures that when the CloudFormation stack is deleted, the EC2 instances are not terminated.
Impact of the Retain Policy: With this policy, all data on the instance, such as data on its attached EBS volumes, remains intact even after the stack deletion. The resources will remain in your AWS account and will need to be managed or deleted manually thereafter.
This approach is directly supported by AWS CloudFormation and provides a simple and effective way to protect EC2 instances and their data during stack deletions.
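In the template, the attribute sits at the resource level, as in this minimal sketch (the logical name, AMI ID, and instance type below are hypothetical placeholders):

```yaml
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    DeletionPolicy: Retain            # instance (and its attached volumes) survives stack deletion
    Properties:
      ImageId: ami-0abcdef1234567890  # hypothetical AMI ID
      InstanceType: t3.micro
```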
A company runs an application on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances are in an Auto Scaling group. The application sometimes becomes slow and unresponsive. Amazon CloudWatch metrics show that some EC2 instances are experiencing high CPU load.
A SysOps administrator needs to create a CloudWatch dashboard that can automatically display CPU metrics of all the EC2 instances. The metrics must include new instances that are launched as part of the Auto Scaling group.
What should the SysOps administrator do to meet these requirements in the MOST operationally efficient way?
- A . Create a CloudWatch dashboard. Use activity notifications from the Auto Scaling group to invoke a custom AWS Lambda function. Use the Lambda function to update the CloudWatch dashboard to monitor the CPUUtilization metric for the new instance IDs.
- B . Create a CloudWatch dashboard. Run a custom script on each EC2 instance to stream the CPU utilization to the dashboard.
- C . Use CloudWatch metrics explorer to filter by the aws:autoscaling:groupName tag and to create a visualization for the CPUUtilization metric. Add the visualization to a CloudWatch dashboard.
- D . Use CloudWatch metrics explorer to filter by instance state and to create a visualization for the CPUUtilization metric. Add the visualization to a CloudWatch dashboard.
C
Explanation:
CloudWatch Metrics Explorer is a powerful tool for creating dynamic dashboards based on tags. This method is efficient for monitoring Auto Scaling groups:
Use Metrics Explorer: Navigate to the Metrics Explorer in the CloudWatch console and select the CPUUtilization metric. Use the aws:autoscaling:groupName tag to filter the metric, ensuring that it only shows data for EC2 instances within the specified Auto Scaling group.
Create Visualization: Configure the visualization settings as needed and add it to a CloudWatch dashboard.
Monitor Automatically: This setup will automatically update to include metrics from new EC2 instances that join the Auto Scaling group, without any need for manual intervention or scripting.
Reference: Using CloudWatch Metrics Explorer
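A sketch of what the resulting dashboard source can look like; the Auto Scaling group name is a hypothetical placeholder, and the explorer widget schema shown here is summarized rather than complete.

```python
ASG_NAME = "my-asg"  # hypothetical Auto Scaling group name

# Metrics explorer widget: filters CPUUtilization by the
# aws:autoscaling:groupName tag, so instances launched later by the
# group appear on the dashboard automatically.
dashboard_body = {
    "widgets": [
        {
            "type": "explorer",
            "x": 0, "y": 0, "width": 24, "height": 6,
            "properties": {
                "metrics": [
                    {
                        "metricName": "CPUUtilization",
                        "resourceType": "AWS::EC2::Instance",
                        "stat": "Average",
                    }
                ],
                "labels": [
                    {"key": "aws:autoscaling:groupName", "value": ASG_NAME}
                ],
                "period": 300,
            },
        }
    ]
}

# Hypothetical usage (requires boto3 and AWS credentials):
# import boto3, json
# boto3.client("cloudwatch").put_dashboard(
#     DashboardName="asg-cpu", DashboardBody=json.dumps(dashboard_body))
```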
A company runs an application on an Amazon EC2 instance. A SysOps administrator creates an Auto Scaling group and an Application Load Balancer (ALB) to handle an increase in demand. However, the EC2 instances are failing the health check.
What should the SysOps administrator do to troubleshoot this issue?
- A . Verify that the Auto Scaling group is configured to use all AWS Regions.
- B . Verify that the application is running on the protocol and the port that the listener is expecting.
- C . Verify the listener priority in the ALB. Change the priority if necessary.
- D . Verify the maximum number of instances in the Auto Scaling group. Change the number if necessary.
B
Explanation:
When EC2 instances fail health checks from the Application Load Balancer (ALB), it’s often due to a mismatch between what the ALB is checking for and what the application is providing. Ensuring that the application is correctly configured to respond on the expected protocol and port is crucial.
Steps:
Check ALB Health Check Configuration:
Open the Amazon EC2 console.
Navigate to the Load Balancers section.
Select your ALB and go to the "Health Checks" tab.
Verify the protocol (HTTP/HTTPS) and port specified in the health check configuration.
Verify Application Configuration:
Ensure that the application on the EC2 instances is running and listening on the same protocol and port configured in the ALB health check.
Check firewall settings, security groups, and network ACLs to ensure that traffic is allowed on the specified port.
Test Connectivity:
Use tools like curl or telnet from another instance or from within the same instance to ensure the application responds correctly to health check requests.
Review Health Check Logs:
Look at the logs for the application and the ALB to identify any errors or misconfigurations.
Reference: ALB Health Check Configuration
Troubleshooting ALB Health Checks
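The connectivity test in step 3 can be sketched in Python instead of telnet; the host and port in the usage comment are placeholders.

```python
import socket

def port_is_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    # TCP connect probe, equivalent to `telnet host port`: returns True
    # if something accepts connections on that port within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, unreachable, or timed out
        return False

# Example: check whether the application answers on the health check port.
# port_is_listening("10.0.1.25", 8080)
```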
Application A runs on Amazon EC2 instances behind a Network Load Balancer (NLB). The EC2 instances are in an Auto Scaling group and are in the same subnet that is associated with the NLB. Other applications from an on-premises environment cannot communicate with Application A on port 8080.
To troubleshoot the issue, a SysOps administrator analyzes the flow logs. The flow logs include the following records:
What is the reason for the rejected traffic?
- A . The security group of the EC2 instances has no Allow rule for the traffic from the NLB.
- B . The security group of the NLB has no Allow rule for the traffic from the on-premises environment.
- C . The ACL of the on-premises environment does not allow traffic to the AWS environment.
- D . The network ACL that is associated with the subnet does not allow outbound traffic for the ephemeral port range.
D
Explanation:
The rejected traffic in the flow logs is due to the network ACL associated with the subnet not allowing outbound traffic for the ephemeral port range.
Network ACLs:
Network ACLs act as a firewall for controlling traffic in and out of one or more subnets.
By default, NACLs allow all inbound and outbound traffic, but custom NACLs require specific rules to allow traffic.
Ephemeral Ports:
Ephemeral ports are temporary ports that clients use for the return side of a connection. The range depends on the client operating system, so AWS recommends opening 1024-65535 to cover all clients.
Ensure that the network ACL allows outbound traffic on these ports.
Steps to Resolve:
Check the network ACL rules for the associated subnet.
Add outbound rules to allow traffic from the ephemeral port range (1024-65535).
Reference: Amazon VPC Network ACLs
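The fix can be sketched with boto3; the network ACL ID and rule number below are hypothetical placeholders.

```python
def ephemeral_egress_rule(network_acl_id: str) -> dict:
    # Outbound (Egress=True) network ACL rule that allows TCP return
    # traffic to clients on the ephemeral port range 1024-65535.
    return {
        "NetworkAclId": network_acl_id,
        "RuleNumber": 200,            # hypothetical; must not clash with existing rules
        "Protocol": "6",              # 6 = TCP
        "RuleAction": "allow",
        "Egress": True,
        "CidrBlock": "0.0.0.0/0",
        "PortRange": {"From": 1024, "To": 65535},
    }

# Hypothetical usage (requires boto3 and AWS credentials):
# import boto3
# boto3.client("ec2").create_network_acl_entry(**ephemeral_egress_rule("acl-0abc1234"))
```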