Practice Free DOP-C02 Exam Online Questions
A company has a mobile application that makes HTTP API calls to an Application Load Balancer (ALB). The ALB routes requests to an AWS Lambda function. Many different versions of the application are in use at any given time, including versions that are in testing by a subset of users. The version of the application is defined in the user-agent header that is sent with all requests to the API.
After a series of recent changes to the API, the company has observed issues with the application. The company needs to gather a metric for each API operation by response code for each version of the application that is in use. A DevOps engineer has modified the Lambda function to extract the API operation name, the version information from the user-agent header, and the response code.
Which additional set of actions should the DevOps engineer take to gather the required metrics?
- A . Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.
- B . Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs Insights query to populate CloudWatch metrics from the log lines. Specify response code and application version as dimensions for the metric.
- C . Configure the ALB access logs to write to an Amazon CloudWatch Logs log group. Modify the Lambda function to respond to the ALB with the API operation name, response code, and version number as response metadata. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.
- D . Configure AWS X-Ray integration on the Lambda function. Modify the Lambda function to create an X-Ray subsegment with the API operation name, response code, and version number. Configure X-Ray insights to extract an aggregated metric for each API operation name and to publish the metric to Amazon CloudWatch. Specify response code and application version as dimensions for the metric.
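For reference, the log-and-metric-filter approach described in options A and B could be expressed with a CloudFormation metric filter like the sketch below. The log group name, namespace, and JSON field names are illustrative assumptions, and emitting the operation name, response code, and application version as the filter's three dimensions is one way to obtain a per-operation, per-response-code, per-version metric (metric filters support at most three dimensions).

Resources:
  ApiOperationMetricFilter:
    Type: AWS::Logs::MetricFilter
    Properties:
      LogGroupName: /aws/lambda/api-backend   # illustrative Lambda log group name
      FilterPattern: '{ $.operation = * }'    # matches structured JSON log lines that contain an operation field
      MetricTransformations:
        - MetricNamespace: MobileApp/Api      # illustrative namespace
          MetricName: ApiOperationCount
          MetricValue: '1'
          Unit: Count
          Dimensions:
            - Key: Operation
              Value: $.operation
            - Key: ResponseCode
              Value: $.responseCode
            - Key: AppVersion
              Value: $.appVersion

The Lambda function would write one JSON log line per request containing those three fields so that the filter can extract them.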
What is the primary purpose of AWS CodePipeline in DevOps workflows?
- A . Storing source code
- B . Building machine learning models
- C . Automating continuous integration and continuous delivery (CI/CD)
- D . Monitoring network performance
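As a hedged illustration of the CI/CD automation that option C describes, a minimal pipeline definition in CloudFormation might look like the sketch below. The role ARN, bucket names, and object key are placeholders, not values from the question.

Resources:
  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: arn:aws:iam::123456789012:role/PipelineServiceRole   # placeholder service role
      ArtifactStore:
        Type: S3
        Location: my-pipeline-artifact-bucket                       # placeholder artifact bucket
      Stages:
        - Name: Source
          Actions:
            - Name: PullSource
              ActionTypeId:
                Category: Source
                Owner: AWS
                Provider: S3
                Version: '1'
              Configuration:
                S3Bucket: my-source-bucket                           # placeholder source bucket (versioning enabled)
                S3ObjectKey: app/source.zip
              OutputArtifacts:
                - Name: SourceOutput
        - Name: Deploy
          Actions:
            - Name: DeployToS3
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Provider: S3
                Version: '1'
              Configuration:
                BucketName: my-deploy-bucket                         # placeholder deployment bucket
                Extract: 'true'
              InputArtifacts:
                - Name: SourceOutput

Each stage runs automatically when the previous one succeeds, which is the continuous integration and continuous delivery behavior the question refers to.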
A company wants to use AWS Systems Manager documents to bootstrap physical laptops for developers. The bootstrap code is stored in GitHub. A DevOps engineer has already created a Systems Manager activation and has installed the Systems Manager Agent with the registration code and activation ID on all the laptops.
Which set of steps should be taken next?
- A . Configure the Systems Manager document to use the AWS-RunShellScript command to copy the files from GitHub to Amazon S3, then use the aws:downloadContent plugin with a sourceType of S3.
- B . Configure the Systems Manager document to use the aws:configurePackage plugin with an install action and point to the Git repository.
- C . Configure the Systems Manager document to use the aws:downloadContent plugin with a sourceType of GitHub and sourceInfo with the repository details.
- D . Configure the Systems Manager document to use the aws:softwareInventory plugin and run the script from the Git repository.
C
Explanation:
Configure the Systems Manager document to use the aws:downloadContent plugin with a sourceType of GitHub and sourceInfo with the repository details:
The aws:downloadContent plugin can download content from various sources, including GitHub, which is necessary for bootstrapping the laptops with the code stored in the GitHub repository. For example:
schemaVersion: '2.2'
description: "Download and run bootstrap script from GitHub"
mainSteps:
  - action: aws:downloadContent
    name: downloadBootstrapScript
    inputs:
      sourceType: GitHub
      sourceInfo: '{"owner":"my-org","repository":"my-repo","path":"scripts/bootstrap.sh","getOptions":"branch:main"}'
      destinationPath: /tmp/bootstrap.sh
  - action: aws:runShellScript
    name: runBootstrapScript
    inputs:
      runCommand:
        - chmod +x /tmp/bootstrap.sh
        - /tmp/bootstrap.sh
This setup ensures that the bootstrap code is downloaded from GitHub and executed on the laptops using Systems Manager.
Reference: AWS Systems Manager aws:downloadContent Plugin
Running Commands Using Systems Manager
A company uses Amazon RDS for all databases in its AWS accounts. The company uses AWS Control Tower to build a landing zone that has an audit and logging account. All databases must be encrypted at rest for compliance reasons. The company's security engineer needs to receive notification about any noncompliant databases that are in the company's accounts.
Which solution will meet these requirements with the MOST operational efficiency?
- A . Use AWS Control Tower to activate the optional detective control (guardrail) to determine whether the RDS storage is encrypted. Create an Amazon Simple Notification Service (Amazon SNS) topic in the company's audit account. Create an Amazon EventBridge rule to filter noncompliant events from the AWS Control Tower control (guardrail) to notify the SNS topic. Subscribe the security engineer's email address to the SNS topic.
- B . Use AWS CloudFormation StackSets to deploy AWS Lambda functions to every account. Write the Lambda function code to determine whether the RDS storage is encrypted in the account the function is deployed to. Send the findings as an Amazon CloudWatch metric to the management account. Create an Amazon Simple Notification Service (Amazon SNS) topic. Create a CloudWatch alarm that notifies the SNS topic when metric thresholds are met. Subscribe the security engineer's email address to the SNS topic.
- C . Create a custom AWS Config rule in every account to determine whether the RDS storage is encrypted. Create an Amazon Simple Notification Service (Amazon SNS) topic in the audit account. Create an Amazon EventBridge rule to filter noncompliant events from the AWS Control Tower control (guardrail) to notify the SNS topic. Subscribe the security engineer's email address to the SNS topic.
- D . Launch an Amazon EC2 instance. Run an hourly cron job by using the AWS CLI to determine whether the RDS storage is encrypted in each AWS account. Store the results in an RDS database. Notify the security engineer by sending email messages from the EC2 instance when noncompliance is detected.
A
Explanation:
Activate AWS Control Tower Guardrail:
Use AWS Control Tower to activate a detective guardrail that checks whether RDS storage is encrypted.
Create SNS Topic for Notifications:
Set up an Amazon Simple Notification Service (SNS) topic in the audit account to receive notifications about non-compliant databases.
Create EventBridge Rule to Filter Non-compliant Events:
Create an Amazon EventBridge rule that filters events related to the guardrail’s findings on non-compliant RDS instances.
Configure the rule to send notifications to the SNS topic when non-compliant events are detected.
Subscribe Security Engineer’s Email to SNS Topic:
Subscribe the security engineer’s email address to the SNS topic to receive notifications when non-compliant databases are detected.
By using AWS Control Tower to activate a detective guardrail and setting up SNS notifications for non-compliant events, the company can efficiently monitor and ensure that all RDS databases are encrypted at rest.
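A minimal sketch of the notification wiring in the audit account is shown below. It assumes the guardrail surfaces its findings as AWS Config compliance change events on the default event bus; the topic name, email address, and event pattern details are illustrative, not taken from the question.

Resources:
  NoncompliantDbTopic:
    Type: AWS::SNS::Topic

  SecurityEngineerSubscription:
    Type: AWS::SNS::Subscription
    Properties:
      TopicArn: !Ref NoncompliantDbTopic
      Protocol: email
      Endpoint: security-engineer@example.com   # illustrative address

  NoncompliantDbRule:
    Type: AWS::Events::Rule
    Properties:
      Description: Notify on noncompliant RDS encryption findings
      EventPattern:
        source:
          - aws.config
        detail-type:
          - Config Rules Compliance Change
        detail:
          newEvaluationResult:
            complianceType:
              - NON_COMPLIANT
      Targets:
        - Arn: !Ref NoncompliantDbTopic
          Id: NotifySecurityEngineer

In practice the SNS topic also needs a resource policy that allows events.amazonaws.com to publish to it, and the event pattern can be narrowed to the specific guardrail's Config rule name.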
Reference: AWS Control Tower Guardrails
Amazon SNS
Amazon EventBridge
A company is developing a web application's infrastructure using AWS CloudFormation. The database engineering team maintains the database resources in a CloudFormation template, and the software development team maintains the web application resources in a separate CloudFormation template. As the scope of the application grows, the software development team needs to use resources maintained by the database engineering team. However, both teams have their own review and lifecycle management processes that they want to keep. Both teams also require resource-level change-set reviews. The software development team would like to deploy changes to this template using their CI/CD pipeline.
Which solution will meet these requirements?
- A . Create a stack export from the database CloudFormation template and import those references into the web application CloudFormation template.
- B . Create a CloudFormation nested stack to make cross-stack resource references and parameters available in both stacks.
- C . Create a CloudFormation stack set to make cross-stack resource references and parameters available in both stacks.
- D . Create input parameters in the web application CloudFormation template and pass resource names and IDs from the database stack.
A
Explanation:
Stack Export and Import:
Use the Export feature in CloudFormation to share outputs from one stack (e.g., database resources) and use them as inputs in another stack (e.g., web application resources).
Steps to Create Stack Export:
Define the resources in the database CloudFormation template and use the Outputs section to export necessary values.
Outputs:
  DBInstanceEndpoint:
    Value: !GetAtt DBInstance.Endpoint.Address
    Export:
      Name: DBInstanceEndpoint
Steps to Import into Web Application Stack:
In the web application CloudFormation template, use the ImportValue function to import these exported values.
Resources:
  MyResource:
    Type: "AWS::SomeResourceType"
    Properties:
      SomeProperty: !ImportValue DBInstanceEndpoint
Resource-Level Change-Set Reviews:
Both teams can continue using their respective review processes, as changes to each stack are managed independently.
Use CloudFormation change sets to preview changes before deploying.
By exporting resources from the database stack and importing them into the web application stack, both teams can maintain their separate review and lifecycle management processes while sharing necessary resources.
Reference: AWS CloudFormation Export
AWS CloudFormation ImportValue
A company is performing vulnerability scanning for all Amazon EC2 instances across many accounts. The accounts are in an organization in AWS Organizations. Each account’s VPCs are attached to a shared transit gateway. The VPCs send traffic to the internet through a central egress VPC. The company has enabled Amazon Inspector in a delegated administrator account and has enabled scanning for all member accounts.
A DevOps engineer discovers that some EC2 instances are listed in the "not scanning" tab in Amazon Inspector.
Which combination of actions should the DevOps engineer take to resolve this issue? (Choose three.)
- A . Verify that AWS Systems Manager Agent is installed and is running on the EC2 instances that Amazon Inspector is not scanning.
- B . Associate the target EC2 instances with security groups that allow outbound communication on port 443 to the AWS Systems Manager service endpoint.
- C . Grant inspector:StartAssessmentRun permissions to the IAM role that the DevOps engineer is using.
- D . Configure EC2 Instance Connect for the EC2 instances that Amazon Inspector is not scanning.
- E . Associate the target EC2 instances with instance profiles that grant permissions to communicate with AWS Systems Manager.
- F . Create a managed-instance activation. Use the Activation Code and the Activation ID to register the EC2 instances.
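As a hedged sketch of the instance-profile approach that option E describes, the following CloudFormation snippet attaches the AWS managed AmazonSSMManagedInstanceCore policy to an EC2 instance role; the resource names are illustrative.

Resources:
  SsmInstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore   # lets the agent communicate with Systems Manager

  SsmInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref SsmInstanceRole

The instance profile would then be associated with the EC2 instances that Amazon Inspector is not scanning, alongside network access to the Systems Manager endpoints.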
A company requires its developers to tag all Amazon Elastic Block Store (Amazon EBS) volumes in an account to indicate a desired backup frequency. This requirement includes EBS volumes that do not require backups. The company uses custom tags named Backup_Frequency that have values of none, daily, or weekly that correspond to the desired backup frequency. An audit finds that developers are occasionally not tagging the EBS volumes.
A DevOps engineer needs to ensure that all EBS volumes always have the Backup_Frequency tag so that the company can perform backups at least weekly unless a different value is specified.
Which solution will meet these requirements?
- A . Set up AWS Config in the account. Create a custom rule that returns a compliance failure for all Amazon EC2 resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.
- B . Set up AWS Config in the account. Use a managed rule that returns a compliance failure for EC2::Volume resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.
- C . Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.
- D . Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events or EBS ModifyVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.
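A hedged sketch of the managed-rule-plus-remediation pattern that option B describes is shown below. It assumes AWS Config is already recording EC2 volumes in the account; the remediation runbook name and its parameter names are placeholders for a custom Automation runbook, not a specific AWS-provided document.

Resources:
  BackupFrequencyTagRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: ebs-backup-frequency-tag
      Scope:
        ComplianceResourceTypes:
          - AWS::EC2::Volume
      Source:
        Owner: AWS
        SourceIdentifier: REQUIRED_TAGS          # AWS managed required-tags rule
      InputParameters:
        tag1Key: Backup_Frequency

  BackupFrequencyTagRemediation:
    Type: AWS::Config::RemediationConfiguration
    Properties:
      ConfigRuleName: !Ref BackupFrequencyTagRule
      TargetType: SSM_DOCUMENT
      TargetId: Custom-ApplyBackupFrequencyTag   # hypothetical custom Automation runbook
      Automatic: true
      MaximumAutomaticAttempts: 3
      RetryAttemptSeconds: 60
      Parameters:
        ResourceId:                              # hypothetical runbook parameter
          ResourceValue:
            Value: RESOURCE_ID
        TagValue:                                # hypothetical runbook parameter
          StaticValue:
            Values:
              - weekly

The automatic remediation applies the Backup_Frequency tag with a value of weekly to any noncompliant volume, which matches the default weekly backup frequency the question requires.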