Practice Free DOP-C02 Exam Online Questions
A company has microservices running in AWS Lambda that read data from Amazon DynamoDB. The
Lambda code is manually deployed by developers after successful testing. The company now needs the tests and deployments to be automated and to run in the cloud. Additionally, traffic to the new versions of each microservice should be shifted incrementally over time after deployment.
Which solution meets all of these requirements and provides the MOST developer velocity?
- A . Create an AWS CodePipeline configuration and set up a post-commit hook to trigger the pipeline after tests have passed. Use AWS CodeDeploy and create a canary deployment configuration that specifies the percentage of traffic and the interval.
- B . Create an AWS CodeBuild configuration that triggers when the test code is pushed. Use AWS CloudFormation to trigger an AWS CodePipeline configuration that deploys the new Lambda versions and specifies the traffic shift percentage and interval.
- C . Create an AWS CodePipeline configuration and set up the source code step to trigger when code is pushed. Set up the build step to use AWS CodeBuild to run the tests. Set up an AWS CodeDeploy configuration to deploy, and then select the CodeDeployDefault.LambdaLinear10PercentEvery3Minutes option.
- D . Use the AWS CLI to set up a post-commit hook that uploads the code to an Amazon S3 bucket after tests have passed. Set up an S3 event trigger that runs a Lambda function that deploys the new version. Use an interval in the Lambda function to deploy the code over time at the required percentage.
C
Explanation:
https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-configurations.html
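For illustration, one common way to request this traffic shift is an AWS SAM DeploymentPreference, which SAM translates into a CodeDeploy deployment group that uses the CodeDeployDefault.LambdaLinear10PercentEvery3Minutes configuration. A minimal sketch, shown in CloudFormation JSON form (the function name, runtime, and code location are placeholder assumptions):
{
  "Transform": "AWS::Serverless-2016-10-31",
  "Resources": {
    "OrdersFunction": {
      "Type": "AWS::Serverless::Function",
      "Properties": {
        "Handler": "index.handler",
        "Runtime": "nodejs18.x",
        "CodeUri": "src/",
        "AutoPublishAlias": "live",
        "DeploymentPreference": {
          "Type": "Linear10PercentEvery3Minutes"
        }
      }
    }
  }
}
AutoPublishAlias is required so that CodeDeploy has a Lambda alias whose traffic it can shift between the old and new versions.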
A company hired a penetration tester to simulate an internal security breach. The tester performed port scans on the company’s Amazon EC2 instances. The company’s security measures did not detect the port scans.
The company needs a solution that automatically provides notification when port scans are performed on EC2 instances. The company creates and subscribes to an Amazon Simple Notification Service (Amazon SNS) topic.
What should the company do next to meet the requirement?
- A . Ensure that Amazon GuardDuty is enabled. Create an Amazon CloudWatch alarm for detected EC2 and port scan findings. Connect the alarm to the SNS topic.
- B . Ensure that Amazon Inspector is enabled. Create an Amazon EventBridge event for detected network reachability findings that indicate port scans. Connect the event to the SNS topic.
- C . Ensure that Amazon Inspector is enabled. Create an Amazon EventBridge event for detected CVEs that cause open port vulnerabilities. Connect the event to the SNS topic.
- D . Ensure that AWS CloudTrail is enabled. Create an AWS Lambda function to analyze the CloudTrail logs for unusual amounts of traffic from an IP address range. Connect the Lambda function to the SNS topic.
A
Explanation:
Ensure that Amazon GuardDuty is Enabled:
Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior.
It can detect port scans and generate findings for these events.
Create an Amazon CloudWatch Alarm for Detected EC2 and Port Scan Findings:
Configure GuardDuty to monitor for port scans and other threats.
Create a CloudWatch alarm that triggers when GuardDuty detects port scan activities.
Connect the Alarm to the SNS Topic:
The CloudWatch alarm should be configured to send notifications to the SNS topic subscribed by the security team.
This setup ensures that the security team receives near-real-time notifications when a port scan is
detected on the EC2 instances.
Example configuration steps:
Enable GuardDuty and ensure it is monitoring the relevant AWS accounts.
GuardDuty findings are delivered to Amazon EventBridge (formerly CloudWatch Events), so the alarm or notification is typically wired up as an EventBridge rule that matches GuardDuty port scan findings and forwards them to the SNS topic. Example event pattern (Recon:EC2/Portscan is GuardDuty's port scan finding type):
{
  "source": ["aws.guardduty"],
  "detail-type": ["GuardDuty Finding"],
  "detail": {
    "type": ["Recon:EC2/Portscan"]
  }
}
Add the SNS topic (for example, arn:aws:sns:region:account-id:SecurityAlerts) as the rule's target so that matching findings produce notifications to the security team.
Reference: Amazon GuardDuty
Creating custom responses to GuardDuty findings with Amazon EventBridge
A DevOps engineer is using AWS CodeDeploy across a fleet of Amazon EC2 instances in an EC2 Auto Scaling group. The associated CodeDeploy deployment group, which is integrated with EC2 Auto Scaling, is configured to perform in-place deployments with CodeDeployDefault.OneAtATime. During an ongoing new deployment, the engineer discovers that, although the overall deployment finished successfully, two out of five instances have the previous application revision deployed. The other three instances have the newest application revision.
What is likely causing this issue?
- A . The two affected instances failed to fetch the new deployment.
- B . A failed AfterInstall lifecycle event hook caused the CodeDeploy agent to roll back to the previous version on the affected instances.
- C . The CodeDeploy agent was not installed on the two affected instances.
- D . EC2 Auto Scaling launched two new instances while the new deployment had not yet finished, causing the previous version to be deployed on the affected instances.
D
Explanation:
When CodeDeploy is integrated with EC2 Auto Scaling, any instance that Auto Scaling launches while a deployment is still in progress is bootstrapped through the Auto Scaling lifecycle hook with the last successfully deployed application revision, not the revision that is currently being rolled out. Once the in-progress deployment finishes, the original instances run the new revision, but the instances launched during the deployment remain on the previous revision until a subsequent deployment updates them. This produces the mixed fleet that the engineer observed.
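For reference, a deployment group of the kind described here might be created with an input along these lines (a sketch; the application, group, role, and Auto Scaling group names are placeholders):
{
  "applicationName": "WebApp",
  "deploymentGroupName": "WebAppFleet",
  "serviceRoleArn": "arn:aws:iam::<account-id>:role/CodeDeployServiceRole",
  "autoScalingGroups": ["WebAppAsg"],
  "deploymentConfigName": "CodeDeployDefault.OneAtATime",
  "deploymentStyle": {
    "deploymentType": "IN_PLACE",
    "deploymentOption": "WITHOUT_TRAFFIC_CONTROL"
  },
  "outdatedInstancesStrategy": "UPDATE"
}
The outdatedInstancesStrategy setting determines what happens to instances launched during a deployment: with UPDATE, CodeDeploy schedules an automatic follow-up deployment to bring them to the newest revision, while with IGNORE they stay on the previous revision until the next deployment.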
A company recently deployed its web application on AWS. The company is preparing for a large-scale sales event and must ensure that the web application can scale to meet the demand.
The application’s frontend infrastructure includes an Amazon CloudFront distribution that has an Amazon S3 bucket as an origin. The backend infrastructure includes an Amazon API Gateway API, several AWS Lambda functions, and an Amazon Aurora DB cluster.
The company’s DevOps engineer conducts a load test and identifies that the Lambda functions can fulfill the peak number of requests. However, the DevOps engineer notices request latency during the initial burst of requests. Most of the requests to the Lambda functions produce queries to the database. A large portion of the invocation time is used to establish database connections.
Which combination of steps will provide the application with the required scalability? (Select TWO)
- A . Configure a higher reserved concurrency for the Lambda functions.
- B . Configure a higher provisioned concurrency for the Lambda functions.
- C . Convert the DB cluster to an Aurora global database. Add additional Aurora Replicas in AWS Regions based on the locations of the company’s customers.
- D . Refactor the Lambda functions. Move the code blocks that initialize database connections into the function handlers.
- E . Use Amazon RDS Proxy to create a proxy for the Aurora database. Update the Lambda functions to use the proxy endpoints for database connections.
BE
Explanation:
The correct answer is B and E. Configuring a higher provisioned concurrency for the Lambda functions ensures that execution environments are initialized and ready to respond to the initial burst of requests without cold start latency. Using Amazon RDS Proxy to create a proxy for the Aurora database enables the Lambda functions to reuse existing database connections and reduces the overhead of establishing new ones. It also improves the scalability and availability of the database by managing the connection pool and handling failovers. Option A is incorrect because reserved concurrency only caps the number of concurrent executions for a function; it does not pre-warm execution environments.
Option C is incorrect because converting the DB cluster to an Aurora global database does not address the database connection latency and adds cost and complexity. Option D is incorrect because moving the code blocks that initialize database connections into the function handlers forces every invocation to establish a new connection, which increases latency rather than reducing it.
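As a sketch of option B (the resource names, alias, and concurrency value are placeholder assumptions), provisioned concurrency can be attached to a published version through a Lambda alias in CloudFormation:
{
  "OrdersFunctionVersion": {
    "Type": "AWS::Lambda::Version",
    "Properties": {
      "FunctionName": { "Ref": "OrdersFunction" }
    }
  },
  "OrdersLiveAlias": {
    "Type": "AWS::Lambda::Alias",
    "Properties": {
      "FunctionName": { "Ref": "OrdersFunction" },
      "FunctionVersion": { "Fn::GetAtt": ["OrdersFunctionVersion", "Version"] },
      "Name": "live",
      "ProvisionedConcurrencyConfig": {
        "ProvisionedConcurrentExecutions": 100
      }
    }
  }
}
For option E, the functions would then be pointed at the RDS Proxy endpoint, for example through an environment variable, instead of connecting directly to the Aurora cluster endpoint, so that connections are pooled and reused across invocations.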
Reference:
AWS Lambda Provisioned Concurrency
Using Amazon RDS Proxy with AWS Lambda
Certified DevOps Engineer – Professional (DOP-C02) Study Guide (page 173)
A company is using an organization in AWS Organizations to manage multiple AWS accounts. The
company’s development team wants to use AWS Lambda functions to meet resiliency requirements and is rewriting all applications to work with Lambda functions that are deployed in a VPC. The development team is using Amazon Elastic File System (Amazon EFS) as shared storage in Account A in the organization.
The company wants to continue to use Amazon EFS with Lambda. Company policy requires all serverless projects to be deployed in Account B.
A DevOps engineer needs to reconfigure an existing EFS file system to allow Lambda functions to access the data through an existing EFS access point.
Which combination of steps should the DevOps engineer take to meet these requirements? (Select THREE.)
- A . Update the EFS file system policy to provide Account B with access to mount and write to the EFS file system in Account A.
- B . Create SCPs to set permission guardrails with fine-grained control for Amazon EFS.
- C . Create a new EFS file system in Account B. Use AWS Database Migration Service (AWS DMS) to keep data from Account A and Account B synchronized.
- D . Update the Lambda execution roles with permission to access the VPC and the EFS file system.
- E . Create a VPC peering connection to connect Account A to Account B.
- F . Configure the Lambda functions in Account B to assume an existing IAM role in Account A.
ADE
Explanation:
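Option A updates the EFS file system policy in Account A so that principals in Account B are allowed to mount and write to the file system. Option D gives the Lambda execution role in Account B the permissions it needs to attach to the VPC and use the EFS access point. Option E provides the network path between the two accounts so the functions can reach the EFS mount targets. As an illustration of option A, a file system policy along these lines (the account IDs, Region, and file system ID are placeholders) grants Account B mount and write access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<account-b-id>:root" },
      "Action": [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite"
      ],
      "Resource": "arn:aws:elasticfilesystem:<region>:<account-a-id>:file-system/<file-system-id>"
    }
  ]
}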
A company gives its employees limited rights to AWS. DevOps engineers have the ability to assume an administrator role. For tracking purposes, the security team wants to receive a near-real-time notification when the administrator role is assumed.
How should this be accomplished?
- A . Configure AWS Config to publish logs to an Amazon S3 bucket. Use Amazon Athena to query the logs and send a notification to the security team when the administrator role is assumed.
- B . Configure Amazon GuardDuty to monitor when the administrator role is assumed and send a notification to the security team.
- C . Create an Amazon EventBridge rule that uses an AWS Management Console sign-in event pattern and publishes a message to an Amazon SNS topic if the administrator role is assumed.
- D . Create an Amazon EventBridge rule that uses an AWS API Call via CloudTrail event pattern to invoke an AWS Lambda function that publishes a message to an Amazon SNS topic if the administrator role is assumed.
D
Explanation:
Create an Amazon EventBridge Rule Using an AWS CloudTrail Event Pattern:
AWS CloudTrail logs API calls made in your account, including actions performed by roles.
Create an EventBridge rule that matches CloudTrail events where the AssumeRole API call is made to assume the administrator role.
Invoke an AWS Lambda Function:
Configure the EventBridge rule to trigger a Lambda function whenever the rule’s conditions are met.
The Lambda function will handle the logic to send a notification.
Publish a Message to an Amazon SNS Topic:
The Lambda function will publish a message to an SNS topic to notify the security team.
Subscribe the security team’s email address to this SNS topic to receive real-time notifications.
Example EventBridge rule pattern:
{
"source": ["aws.cloudtrail"],
"detail-type": ["AWS API Call via CloudTrail"],
"detail": {
"eventSource": ["sts.amazonaws.com"],
"eventName": ["AssumeRole"],
"requestParameters": {
"roleArn": ["arn:aws:iam::<account-id>:role/AdministratorRole"]
}
}
}
Example Lambda function (Node.js) to publish to SNS:
const AWS = require('aws-sdk');
const sns = new AWS.SNS();
exports.handler = async (event) => {
  const params = {
    Message: `Administrator role assumed: ${JSON.stringify(event.detail)}`,
    TopicArn: 'arn:aws:sns:<region>:<account-id>:<sns-topic>'
  };
  await sns.publish(params).promise();
};
Reference: Creating EventBridge Rules
Using AWS Lambda with Amazon SNS
A company has an AWS CodePipeline pipeline that is configured with an Amazon S3 bucket in the eu-west-1 Region. The pipeline deploys an AWS Lambda application to the same Region. The pipeline consists of an AWS CodeBuild project build action and an AWS CloudFormation deploy action.
The CodeBuild project uses the aws cloudformation package AWS CLI command to build an artifact that contains the Lambda function code’s .zip file and the CloudFormation template. The CloudFormation deploy action references the CloudFormation template from the output artifact of the CodeBuild project’s build action.
The company wants to also deploy the Lambda application to the us-east-1 Region by using the pipeline in eu-west-1. A DevOps engineer has already updated the CodeBuild project to use the aws cloudformation package command to produce an additional output artifact for us-east-1.
Which combination of additional steps should the DevOps engineer take to meet these requirements? (Choose two.)
- A . Modify the CloudFormation template to include a parameter for the Lambda function code’s zip file location. Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to pass in the us-east-1 artifact location as a parameter override.
- B . Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to use the CloudFormation template from the us-east-1 output artifact.
- C . Create an S3 bucket in us-east-1. Configure the S3 bucket policy to allow CodePipeline to have read and write access.
- D . Create an S3 bucket in us-east-1. Configure S3 Cross-Region Replication (CRR) from the S3 bucket in eu-west-1 to the S3 bucket in us-east-1.
- E . Modify the pipeline to include the S3 bucket for us-east-1 as an artifact store. Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to use the CloudFormation template from the us-east-1 output artifact.
A company detects unusual login attempts in many of its AWS accounts. A DevOps engineer must implement a solution that sends a notification to the company’s security team when multiple failed login attempts occur. The DevOps engineer has already created an Amazon Simple Notification Service (Amazon SNS) topic and has subscribed the security team to the SNS topic.
Which solution will provide the notification with the LEAST operational effort?
- A . Configure AWS CloudTrail to send log management events to an Amazon CloudWatch Logs log group. Create a CloudWatch Logs metric filter to match failed ConsoleLogin events. Create a CloudWatch alarm that is based on the metric filter. Configure an alarm action to send messages to the SNS topic.
- B . Configure AWS CloudTrail to send log management events to an Amazon S3 bucket. Create an Amazon Athena query that returns a failure if the query finds failed logins in the logs in the S3 bucket. Create an Amazon EventBridge rule to periodically run the query. Create a second EventBridge rule to detect when the query fails and to send a message to the SNS topic.
- C . Configure AWS CloudTrail to send log data events to an Amazon CloudWatch Logs log group. Create a CloudWatch Logs metric filter to match failed ConsoleLogin events. Create a CloudWatch alarm that is based on the metric filter. Configure an alarm action to send messages to the SNS topic.
- D . Configure AWS CloudTrail to send log data events to an Amazon S3 bucket. Configure an Amazon S3 event notification for the s3:ObjectCreated event type. Filter the event type by ConsoleLogin failed events. Configure the event notification to forward to the SNS topic.
A company deploys an application to Amazon EC2 instances. The application runs Amazon Linux 2 and uses AWS CodeDeploy.
The application has the following file structure for its code repository:
The appspec.yml file has the following contents in the files section:
What will the result be for the deployment of the config.txt file?
- A . The config.txt file will be deployed to only /var/www/html/config/config.txt.
- B . The config.txt file will be deployed to /usr/local/src/config.txt and to /var/www/html/config/config.txt.
- C . The config.txt file will be deployed to only /usr/local/src/config.txt.
- D . The config.txt file will be deployed to /usr/local/src/config.txt and to /var/www/html/application/web/config.txt.
B
Explanation:
Deployment of the config.txt file based on the appspec.yml:
The files section contains two mappings: one copies config/config.txt to /usr/local/src/config.txt, and the other uses source: / to copy the entire revision, including the config directory, to /var/www/html.
Result of the deployment:
Each entry in the files section is applied independently, so the explicit mapping places the file at /usr/local/src/config.txt, and the source: / mapping also places a copy at /var/www/html/config/config.txt.
Therefore, the correct answer is:
B. The config.txt file will be deployed to /usr/local/src/config.txt and to /var/www/html/config/config.txt.
Reference: AWS CodeDeploy AppSpec File Reference
AWS CodeDeploy Deployment Process
A company is migrating its on-premises Windows applications and Linux applications to AWS. The company will use automation to launch Amazon EC2 instances to mirror the on-premises configurations. The migrated applications require access to shared storage that uses SMB for Windows and NFS for Linux.
The company is also creating a pilot light disaster recovery (DR) environment in another AWS Region. The company will use automation to launch and configure the EC2 instances in the DR Region. The company needs to replicate the storage to the DR Region.
Which storage solution will meet these requirements?
- A . Use Amazon S3 for the application storage. Create an S3 bucket in the primary Region and an S3 bucket in the DR Region. Configure S3 Cross-Region Replication (CRR) from the primary Region to the DR Region.
- B . Use Amazon Elastic Block Store (Amazon EBS) for the application storage. Create a backup plan in AWS Backup that creates snapshots of the EBS volumes that are in the primary Region and replicates the snapshots to the DR Region.
- C . Use a Volume Gateway in AWS Storage Gateway for the application storage. Configure Cross-Region Replication (CRR) of the Volume Gateway from the primary Region to the DR Region.
- D . Use Amazon FSx for NetApp ONTAP for the application storage. Create an FSx for ONTAP instance in the DR Region. Configure NetApp SnapMirror replication from the primary Region to the DR Region.
D
Explanation:
To meet the requirements of migrating its on-premises Windows and Linux applications to AWS and creating a pilot light DR environment in another AWS Region, the company should use Amazon FSx for NetApp ONTAP for the application storage. Amazon FSx for NetApp ONTAP is a fully managed service that provides highly reliable, scalable, high-performing, and feature-rich file storage built on NetApp’s popular ONTAP file system. FSx for ONTAP supports multiple protocols, including SMB for Windows and NFS for Linux, so the company can access the shared storage from both types of applications. FSx for ONTAP also supports NetApp SnapMirror replication, which enables the company to replicate the storage to the DR Region. NetApp SnapMirror replication is efficient, secure, and incremental, and it preserves the data deduplication and compression benefits of FSx for ONTAP. The company can use automation to launch and configure the EC2 instances in the DR Region and then use NetApp SnapMirror to restore the data from the primary Region.
The other options are not correct because they do not meet the requirements or follow best practices. Using Amazon S3 for the application storage is not a good option because S3 is an object storage service that does not natively support the SMB or NFS protocols. The company would need additional services or software to mount S3 buckets as file systems, which would add complexity and cost. Using Amazon EBS for the application storage is also not a good option because EBS is block storage that does not natively provide SMB or NFS shares. The company would need to set up and manage file servers on EC2 instances to provide shared access to the EBS volumes, which would add overhead and maintenance. Using a Volume Gateway in AWS Storage Gateway for the application storage is not a valid option because a Volume Gateway exposes iSCSI block volumes rather than SMB or NFS file shares, so it cannot provide the shared file storage that both the Windows and Linux applications require.
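As a sketch only (the storage capacity, throughput, and subnet IDs are placeholder assumptions), an FSx for ONTAP file system can be declared in CloudFormation as follows; the SnapMirror relationship to the file system in the DR Region is then configured from the ONTAP side:
{
  "OntapFileSystem": {
    "Type": "AWS::FSx::FileSystem",
    "Properties": {
      "FileSystemType": "ONTAP",
      "StorageCapacity": 1024,
      "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
      "OntapConfiguration": {
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 256
      }
    }
  }
}
SMB and NFS access is then exposed through storage virtual machines (SVMs) and volumes created on the file system, which both the Windows and Linux applications can mount.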
References:
1: What is Amazon FSx for NetApp ONTAP? – FSx for ONTAP
2: Amazon FSx for NetApp ONTAP
3: Amazon FSx for NetApp ONTAP | NetApp
4: AWS Announces General Availability of Amazon FSx for NetApp ONTAP
5: Replicating Data with NetApp SnapMirror – FSx for ONTAP
6: What Is Amazon S3? – Amazon Simple Storage Service
7: What Is Amazon Elastic Block Store (Amazon EBS)? – Amazon Elastic Compute Cloud
8: What Is AWS Storage Gateway? – AWS Storage Gateway