Practice Free DOP-C02 Exam Online Questions
A DevOps engineer is creating an AWS CloudFormation template to deploy a web service. The web service will run on Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). The DevOps engineer must ensure that the service can accept requests from clients that have IPv6 addresses.
What should the DevOps engineer do with the CloudFormation template so that IPv6 clients can access the web service?
- A . Add an IPv6 CIDR block to the VPC and the private subnet for the EC2 instances. Create route table entries for the IPv6 network, use EC2 instance types that support IPv6, and assign IPv6 addresses to each EC2 instance.
- B . Assign each EC2 instance an IPv6 Elastic IP address. Create a target group, and add the EC2 instances as targets. Create a listener on port 443 of the ALB, and associate the target group with the ALB.
- C . Replace the ALB with a Network Load Balancer (NLB). Add an IPv6 CIDR block to the VPC and subnets for the NLB, and assign the NLB an IPv6 Elastic IP address.
- D . Add an IPv6 CIDR block to the VPC and subnets for the ALB. Create a listener on port 443, and specify the dualstack IP address type on the ALB. Create a target group, and add the EC2 instances as targets. Associate the target group with the ALB.
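For reference, the dualstack listener setup that option D describes can be sketched with boto3 as below. This is a minimal illustration only; all resource names, subnet/security-group/VPC IDs, and the certificate ARN are placeholders, not values from the question.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Internet-facing ALB in public subnets that have IPv6 CIDR blocks assigned.
# "dualstack" makes the ALB reachable over both IPv4 and IPv6.
alb = elbv2.create_load_balancer(
    Name="web-service-alb",
    Subnets=["subnet-11111111", "subnet-22222222"],  # placeholder IDs
    SecurityGroups=["sg-33333333"],
    Scheme="internet-facing",
    Type="application",
    IpAddressType="dualstack",
)
alb_arn = alb["LoadBalancers"][0]["LoadBalancerArn"]

# Target group for the EC2 instances in the private subnet. The targets can
# stay on IPv4; the ALB terminates IPv6 client connections on their behalf.
tg = elbv2.create_target_group(
    Name="web-service-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-44444444",  # placeholder
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# HTTPS listener on port 443 that forwards to the target group.
elbv2.create_listener(
    LoadBalancerArn=alb_arn,
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:region:account:certificate/placeholder"}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```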
A company has its AWS accounts in an organization in AWS Organizations. AWS Config is manually configured in each AWS account. The company needs to implement a solution to centrally configure AWS Config for all accounts in the organization. The solution also must record resource changes to a central account.
Which combination of actions should a DevOps engineer perform to meet these requirements? (Choose two.)
- A . Configure a delegated administrator account for AWS Config. Enable trusted access for AWS Config in the organization.
- B . Configure a delegated administrator account for AWS Config. Create a service-linked role for AWS Config in the organization’s management account.
- C . Create an AWS CloudFormation template to create an AWS Config aggregator. Configure a CloudFormation stack set to deploy the template to all accounts in the organization.
- D . Create an AWS Config organization aggregator in the organization’s management account. Configure data collection from all AWS accounts in the organization and from all AWS Regions.
- E . Create an AWS Config organization aggregator in the delegated administrator account. Configure data collection from all AWS accounts in the organization and from all AWS Regions.
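To make the organization aggregator that options D and E describe concrete, here is a minimal boto3 sketch, run in whichever account hosts the aggregator. The aggregator name and role ARN are placeholders.

```python
import boto3

config = boto3.client("config")

# Aggregate AWS Config data from every account and every Region in the
# organization. The role must allow AWS Config to list organization accounts.
config.put_configuration_aggregator(
    ConfigurationAggregatorName="org-aggregator",
    OrganizationAggregationSource={
        "RoleArn": "arn:aws:iam::111122223333:role/ConfigAggregatorRole",  # placeholder
        "AllAwsRegions": True,
    },
)
```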
A developer is maintaining a fleet of 50 Amazon EC2 Linux servers. The servers are part of an Amazon EC2 Auto Scaling group, and also use Elastic Load Balancing for load balancing.
Occasionally, some application servers are terminated after failing ELB HTTP health checks. The developer would like to perform a root cause analysis on the issue, but the servers are terminated before the application logs can be accessed.
How can log collection be automated?
- A . Use Auto Scaling lifecycle hooks to put instances in a Pending:Wait state. Create an Amazon CloudWatch alarm for EC2 Instance Terminate Successful and trigger an AWS Lambda function that invokes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
- B . Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an AWS Config rule for EC2 Instance-terminate Lifecycle Action and trigger a step function that invokes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
- C . Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon CloudWatch subscription filter for EC2 Instance Terminate Successful and trigger a CloudWatch agent that invokes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
- D . Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon EventBridge rule for EC2 Instance-terminate Lifecycle Action and trigger an AWS Lambda function that invokes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
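As a sketch of the Lambda function that several of these options describe, the handler below reacts to the "EC2 Instance-terminate Lifecycle Action" event, runs a log-collection script through SSM Run Command, and releases the lifecycle hook. The bucket name and log directory are assumptions.

```python
import boto3

ssm = boto3.client("ssm")
autoscaling = boto3.client("autoscaling")

def handler(event, context):
    """Invoked by an EventBridge rule for 'EC2 Instance-terminate Lifecycle Action'."""
    detail = event["detail"]
    instance_id = detail["EC2InstanceId"]

    # Copy application logs to S3 while the instance is held in Terminating:Wait.
    # The bucket name and log path are placeholders.
    ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="AWS-RunShellScript",
        Parameters={
            "commands": [
                f"aws s3 cp /var/log/app/ s3://example-log-bucket/{instance_id}/ --recursive"
            ]
        },
    )

    # In a real function you would poll get_command_invocation until the
    # command finishes before completing the lifecycle action.
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=detail["LifecycleHookName"],
        AutoScalingGroupName=detail["AutoScalingGroupName"],
        LifecycleActionToken=detail["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",
    )
```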
A video-sharing company stores its videos in Amazon S3. The company has observed a sudden increase in video access requests, but the company does not know which videos are most popular. The company needs to identify the general access pattern for the video files. This pattern includes the number of users who access a certain file on a given day, as well as the number of pull requests for certain files.
How can the company meet these requirements with the LEAST amount of effort?
- A . Activate S3 server access logging. Import the access logs into an Amazon Aurora database. Use an Aurora SQL query to analyze the access patterns.
- B . Activate S3 server access logging. Use Amazon Athena to create an external table with the log files. Use Athena to create a SQL query to analyze the access patterns.
- C . Invoke an AWS Lambda function for every S3 object access event. Configure the Lambda function to write the file access information, such as user, S3 bucket, and file key, to an Amazon Aurora database. Use an Aurora SQL query to analyze the access patterns.
- D . Record an Amazon CloudWatch Logs log message for every S3 object access event. Configure a CloudWatch Logs log stream to write the file access information, such as user, S3 bucket, and file key, to an Amazon Kinesis Data Analytics for SQL application. Perform a sliding window analysis.
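To make the Athena approach in option B concrete, the sketch below runs an access-pattern query with boto3. It assumes a table named s3_access_logs has already been created over the server access logs (for example, with the regex SerDe DDL from the AWS documentation); the database, table, and bucket names are placeholders.

```python
import boto3

athena = boto3.client("athena")

# Most-requested objects and how many distinct client IPs fetched each one.
query = """
SELECT key,
       COUNT(*) AS get_requests,
       COUNT(DISTINCT remote_ip) AS distinct_clients
FROM s3_access_logs
WHERE operation = 'REST.GET.OBJECT'
GROUP BY key
ORDER BY get_requests DESC
LIMIT 20
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "s3_logs_db"},  # placeholder
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```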
A company is refactoring applications to use AWS. The company identifies an internal web application that needs to make Amazon S3 API calls in a specific AWS account.
The company wants to use its existing identity provider (IdP) auth.company.com for authentication. The IdP supports only OpenID Connect (OIDC). A DevOps engineer needs to secure the web application’s access to the AWS account.
Which combination of steps will meet these requirements? (Select THREE.)
- A . Configure AWS IAM Identity Center. Configure an IdP. Upload the IdP metadata from the existing IdP.
- B . Create an IAM IdP by using the provider URL, audience, and signature from the existing IdP.
- C . Create an IAM role that has a policy that allows the necessary S3 actions. Configure the role’s trust policy to allow the OIDC IdP to assume the role if the sts.amazon.com:aud context key is appid_from_idp.
- D . Create an IAM role that has a policy that allows the necessary S3 actions. Configure the role’s trust policy to allow the OIDC IdP to assume the role if the auth.company.com:aud context key is appid_from_idp.
- E . Configure the web application to use the AssumeRoleWithWebIdentity API operation to retrieve temporary credentials. Use the temporary credentials to make the S3 API calls.
- F . Configure the web application to use the GetFederationToken API operation to retrieve temporary credentials. Use the temporary credentials to make the S3 API calls.
B, D, E
Explanation:
Step 1: Creating an Identity Provider in IAM
You first need to configure AWS to trust the external identity provider (IdP), which in this case supports OpenID Connect (OIDC). The IdP will handle the authentication, and AWS will handle the authorization based on the IdP’s token.
Action: Create an IAM Identity Provider (IdP) in AWS using the existing provider’s URL, audience, and signature. This step is essential for establishing trust between AWS and the external IdP.
Why: This allows AWS to accept tokens from your external IdP (auth.company.com) for authentication.
Reference: AWS documentation on IAM Identity Providers.
So, this corresponds to Option B: Create an IAM IdP by using the provider URL, audience, and signature from the existing IdP.
Step 2: Creating an IAM Role with Specific Permissions
Next, you need to create an IAM role with a trust policy that allows the external IdP to assume it when certain conditions are met. Specifically, the trust policy needs to allow the role to be assumed based on the context key auth.company.com:aud (audience claim in the token).
Action: Create an IAM role that has the necessary permissions (e.g., Amazon S3 access). The role’s trust policy should specify the OIDC IdP as the trusted entity and validate the audience claim (auth.company.com:aud), which comes from the token provided by the IdP.
Why: This step ensures that only the specified web application authenticated via OIDC can assume the IAM role to make API calls.
Reference: AWS documentation on OIDC and Role Assumption.
This corresponds to Option D: Create an IAM role that has a policy that allows the necessary S3 actions. Configure the role’s trust policy to allow the OIDC IdP to assume the role if the auth.company.com:aud context key is appid_from_idp.
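A minimal boto3 sketch of steps B and D follows. The thumbprint, account ID, and role name are illustrative placeholders, and the permissions policy that grants the S3 actions is omitted.

```python
import json
import boto3

iam = boto3.client("iam")

# Option B: register the external OIDC IdP with IAM. The thumbprint is the
# hash of the IdP's TLS certificate (placeholder value here).
provider = iam.create_open_id_connect_provider(
    Url="https://auth.company.com",
    ClientIDList=["appid_from_idp"],
    ThumbprintList=["0000000000000000000000000000000000000000"],  # placeholder
)

# Option D: trust policy that lets only tokens whose audience claim matches
# the application's client ID assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Federated": provider["OpenIDConnectProviderArn"]},
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {"auth.company.com:aud": "appid_from_idp"}
            },
        }
    ],
}

iam.create_role(
    RoleName="WebAppS3Role",  # placeholder
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
# Attach or embed a policy granting the required S3 actions (not shown).
```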
Step 3: Using Temporary Credentials via AssumeRoleWithWebIdentity API
To securely make Amazon S3 API calls, the web application will need temporary credentials. The web application can use the AssumeRoleWithWebIdentity API call to assume the IAM role configured in the previous step and obtain temporary AWS credentials. These credentials can then be used to interact with Amazon S3.
Action: The web application must be configured to call the AssumeRoleWithWebIdentity API operation, passing the OIDC token from the IdP to obtain temporary credentials.
Why: This allows the web application to authenticate via the external IdP and then authorize access to AWS resources securely using short-lived credentials.
Reference: AWS documentation on AssumeRoleWithWebIdentity.
This corresponds to Option E: Configure the web application to use the AssumeRoleWithWebIdentity API operation to retrieve temporary credentials. Use the temporary credentials to make the S3 API calls.
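A sketch of option E's credential exchange follows; the role ARN is a placeholder, and oidc_token_from_idp stands in for the ID token returned by auth.company.com.

```python
import boto3

sts = boto3.client("sts")

oidc_token_from_idp = "<ID token issued by auth.company.com>"  # placeholder

# Exchange the OIDC token for short-lived AWS credentials.
resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::111122223333:role/WebAppS3Role",  # placeholder
    RoleSessionName="web-app-session",
    WebIdentityToken=oidc_token_from_idp,
)

creds = resp["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
# s3 can now make whatever S3 API calls the role's policy allows.
```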
Summary of Selected Answers:
B: Create an IAM IdP by using the provider URL, audience, and signature from the existing IdP.
D: Create an IAM role that has a policy that allows the necessary S3 actions. Configure the role’s trust policy to allow the OIDC IdP to assume the role if the auth.company.com:aud context key is appid_from_idp.
E: Configure the web application to use the AssumeRoleWithWebIdentity API operation to retrieve temporary credentials. Use the temporary credentials to make the S3 API calls.
This setup enables the web application to use OpenID Connect (OIDC) for authentication and securely interact with Amazon S3 in a specific AWS account using short-lived credentials obtained through AWS Security Token Service (STS).
A company has set up AWS CodeArtifact repositories with public upstream repositories. The company’s development team consumes open source dependencies from the repositories in the company’s internal network.
The company’s security team recently discovered a critical vulnerability in the most recent version of a package that the development team consumes. The security team has produced a patched version to fix the vulnerability. The company needs to prevent the vulnerable version from being downloaded. The company also needs to allow the security team to publish the patched version.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Update the status of the affected CodeArtifact package version to unlisted.
- B . Update the status of the affected CodeArtifact package version to deleted.
- C . Update the status of the affected CodeArtifact package version to archived.
- D . Update the CodeArtifact package origin control settings to allow direct publishing and to block upstream operations.
- E . Update the CodeArtifact package origin control settings to block direct publishing and to allow upstream operations.
B, D
Explanation:
Update the status of the affected CodeArtifact package version to deleted:
Deleting the vulnerable package version prevents it from being available for download by any users or systems, ensuring that the compromised version is not consumed.
Update the CodeArtifact package origin control settings to allow direct publishing and to block upstream operations:
By allowing direct publishing, the security team can publish the patched version of the package directly to the CodeArtifact repository.
Blocking upstream operations prevents the repository from automatically fetching and serving the vulnerable package version from upstream public repositories.
By deleting the vulnerable version and configuring the origin control settings to allow direct publishing and block upstream operations, the company ensures that only the patched version is available and the vulnerable version cannot be downloaded.
Reference: Managing Package Versions in CodeArtifact
Package Origin Controls in CodeArtifact
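These two actions map to two CodeArtifact API calls, sketched below with boto3. The domain, repository, package format, package name, and version are placeholders.

```python
import boto3

codeartifact = boto3.client("codeartifact")

# Delete the vulnerable version so it can no longer be resolved or downloaded.
codeartifact.delete_package_versions(
    domain="example-domain",
    repository="shared-repo",
    format="npm",              # placeholder format
    package="example-package",
    versions=["1.2.3"],        # the vulnerable version
)

# Allow the security team to publish the patched version directly, and block
# upstream operations so the public (vulnerable) version cannot be pulled back in.
codeartifact.put_package_origin_configuration(
    domain="example-domain",
    repository="shared-repo",
    format="npm",
    package="example-package",
    restrictions={"publish": "ALLOW", "upstream": "BLOCK"},
)
```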
A company uses an Amazon Aurora PostgreSQL global database that has two secondary AWS Regions. A DevOps engineer has configured the database parameter group to guarantee an RPO of 60 seconds. Write operations on the primary cluster are occasionally blocked because of the RPO setting.
The DevOps engineer needs to reduce the frequency of blocked write operations.
Which solution will meet these requirements?
- A . Add an additional secondary cluster to the global database.
- B . Enable write forwarding for the global database.
- C . Remove one of the secondary clusters from the global database.
- D . Configure synchronous replication for the global database.
C
Explanation:
Step 1: Reducing Replication Lag in Aurora Global Databases
In Amazon Aurora global databases, write operations on the primary cluster can be delayed due to the time it takes to replicate to secondary clusters, especially when there are multiple secondary regions involved.
Issue: The write operations are occasionally blocked due to the RPO setting, which guarantees replication within 60 seconds.
Action: Remove one of the secondary clusters from the global database.
Why: Fewer secondary clusters will reduce the overall replication lag, improving write performance and reducing the frequency of blocked writes.
Reference: AWS documentation on Aurora Global Database.
This corresponds to Option C: Remove one of the secondary clusters from the global database.
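For illustration, detaching a secondary cluster is a single RDS API call; the identifiers below are placeholders, and the detached cluster becomes a standalone regional cluster.

```python
import boto3

rds = boto3.client("rds")

# Remove one secondary cluster from the global database to reduce the
# replication lag that triggers RPO-based write blocking.
rds.remove_from_global_cluster(
    GlobalClusterIdentifier="example-global-db",
    DbClusterIdentifier="arn:aws:rds:us-west-2:111122223333:cluster:secondary-cluster",
)
```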
A company uses AWS Secrets Manager to store a set of sensitive API keys that an AWS Lambda function uses. When the Lambda function is invoked, the Lambda function retrieves the API keys and makes an API call to an external service. The Secrets Manager secret is encrypted with the default AWS Key Management Service (AWS KMS) key.
A DevOps engineer needs to update the infrastructure to ensure that only the Lambda function’s execution role can access the values in Secrets Manager. The solution must apply the principle of least privilege.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Update the default KMS key for Secrets Manager to allow only the Lambda function’s execution role to decrypt.
- B . Create a KMS customer managed key that trusts Secrets Manager and allows the Lambda function’s execution role to decrypt. Update Secrets Manager to use the new customer managed key.
- C . Create a KMS customer managed key that trusts Secrets Manager and allows the account’s :root principal to decrypt. Update Secrets Manager to use the new customer managed key.
- D . Ensure that the Lambda function’s execution role has the KMS permissions scoped on the resource level. Configure the permissions so that the KMS key can encrypt the Secrets Manager secret.
- E . Remove all KMS permissions from the Lambda function’s execution role.
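As a hedged sketch of the customer managed key approach in option B, the key policy below trusts Secrets Manager (via the kms:ViaService condition) and allows only the Lambda execution role to decrypt. The account ID, Region, role name, and secret name are assumptions.

```python
import json
import boto3

kms = boto3.client("kms")
secretsmanager = boto3.client("secretsmanager")

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Root principal retains key administration (standard practice).
            "Sid": "KeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            # Only the Lambda execution role may decrypt, and only through
            # Secrets Manager in this Region.
            "Sid": "LambdaDecryptViaSecretsManager",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/lambda-execution-role"},
            "Action": "kms:Decrypt",
            "Resource": "*",
            "Condition": {
                "StringEquals": {"kms:ViaService": "secretsmanager.us-east-1.amazonaws.com"}
            },
        },
    ],
}

key = kms.create_key(
    Policy=json.dumps(key_policy),
    Description="CMK for the Lambda function's Secrets Manager secret",
)

# Re-encrypt the secret with the new customer managed key.
secretsmanager.update_secret(
    SecretId="example/api-keys",  # placeholder
    KmsKeyId=key["KeyMetadata"]["KeyId"],
)
```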
A production account has a requirement that any Amazon EC2 instance that has been logged in to manually must be terminated within 24 hours. All applications in the production account are using Auto Scaling groups with the Amazon CloudWatch Logs agent configured.
How can this process be automated?
- A . Create a CloudWatch Logs subscription to an AWS Step Functions application. Configure an AWS Lambda function to add a tag to the EC2 instance that produced the login event and mark the instance to be decommissioned. Create an Amazon EventBridge rule to invoke a second Lambda function once a day that will terminate all instances with this tag.
- B . Create an Amazon CloudWatch alarm that will be invoked by the login event. Send the notification to an Amazon Simple Notification Service (Amazon SNS) topic that the operations team is subscribed to, and have them terminate the EC2 instance within 24 hours.
- C . Create an Amazon CloudWatch alarm that will be invoked by the login event. Configure the alarm to send to an Amazon Simple Queue Service (Amazon SQS) queue. Use a group of worker instances to process messages from the queue, which then schedules an Amazon EventBridge rule to be invoked.
- D . Create a CloudWatch Logs subscription to an AWS Lambda function. Configure the function to add a tag to the EC2 instance that produced the login event and mark the instance to be decommissioned. Create an Amazon EventBridge rule to invoke a daily Lambda function that terminates all instances with this tag.
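A minimal sketch of the daily cleanup Lambda that option D describes follows; the tag key and value are assumptions.

```python
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    """Runs once a day via EventBridge; terminates instances that an earlier
    Lambda tagged for decommissioning after a manual login was detected."""
    instance_ids = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[
            {"Name": "tag:decommission", "Values": ["true"]},  # placeholder tag
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    ):
        for reservation in page["Reservations"]:
            instance_ids.extend(i["InstanceId"] for i in reservation["Instances"])

    if instance_ids:
        ec2.terminate_instances(InstanceIds=instance_ids)
```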
A company must encrypt all AMIs that the company shares across accounts. A DevOps engineer has access to a source account where an unencrypted custom AMI has been built. The DevOps engineer also has access to a target account where an Amazon EC2 Auto Scaling group will launch EC2 instances from the AMI. The DevOps engineer must share the AMI with the target account.
The company has created an AWS Key Management Service (AWS KMS) key in the source account.
Which additional steps should the DevOps engineer perform to meet the requirements? (Choose three.)
- A . In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the KMS key in the copy action.
- B . In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the default Amazon Elastic Block Store (Amazon EBS) encryption key in the copy action.
- C . In the source account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role in the target account.
- D . In the source account, modify the key policy to give the target account permissions to create a grant. In the target account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role.
- E . In the source account, share the unencrypted AMI with the target account.
- F . In the source account, share the encrypted AMI with the target account.
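The copy-and-share portion of this workflow (options A and F) can be sketched with boto3 as below; the AMI ID, KMS key ARN, Region, and target account ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# In the source account: copy the unencrypted AMI to an encrypted AMI,
# specifying the customer managed KMS key.
copy = ec2.copy_image(
    Name="web-service-ami-encrypted",
    SourceImageId="ami-0123456789abcdef0",  # placeholder
    SourceRegion="us-east-1",
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/placeholder",
)

# Wait for the copy to finish before sharing it.
ec2.get_waiter("image_available").wait(ImageIds=[copy["ImageId"]])

# Share the encrypted AMI with the target account.
ec2.modify_image_attribute(
    ImageId=copy["ImageId"],
    LaunchPermission={"Add": [{"UserId": "444455556666"}]},  # target account
)
```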