Practice Free SOA-C02 Exam Online Questions
A company runs its entire suite of applications on Amazon EC2 instances. The company plans to move the applications to containers and AWS Fargate. Within 6 months, the company plans to retire its EC2 instances and use only Fargate. The company has been able to estimate its future Fargate costs.
A SysOps administrator needs to choose a purchasing option to help the company minimize costs. The SysOps administrator must maximize any discounts that are available and must ensure that there are no unused reservations.
Which purchasing option will meet these requirements?
- A . Compute Savings Plans for 1 year with the No Upfront payment option
- B . Compute Savings Plans for 1 year with the Partial Upfront payment option
- C . EC2 Instance Savings Plans for 1 year with the All Upfront payment option
- D . EC2 Reserved Instances for 1 year with the Partial Upfront payment option
A
Explanation:
To minimize costs while moving from EC2 instances to AWS Fargate, Compute Savings Plans are the most flexible and cost-effective option. Compute Savings Plans apply to a variety of compute services including AWS Fargate, Amazon EC2, and AWS Lambda, allowing for greater flexibility in managing costs as the company transitions to using only Fargate.
Compute Savings Plans:
Compute Savings Plans can reduce costs by up to 66 percent compared to On-Demand pricing.
Compute Savings Plans offer the flexibility to move across instance types, AWS Regions, and operating systems.
Payment Options:
The No Upfront payment option provides the most flexibility and avoids large upfront capital expenditures.
The Partial Upfront payment option offers more savings but requires an initial payment.
1-Year Term:
A 1-year term is suitable for the company’s 6-month transition period, allowing for cost savings without a long-term commitment.
Reference: AWS Savings Plans
Compute Savings Plans
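As a rough illustration of the discount arithmetic behind a Savings Plans commitment (the hourly rate below is a hypothetical example, not actual AWS pricing; only the "up to 66%" figure comes from the explanation above):

```python
# Rough sketch of Savings Plans discount arithmetic.
# The on-demand rate is a hypothetical example, not actual AWS pricing.
on_demand_hourly = 0.10   # hypothetical Fargate on-demand rate (USD/hour)
max_discount = 0.66       # "up to 66%" savings versus On-Demand

effective_hourly = on_demand_hourly * (1 - max_discount)
hours_per_year = 24 * 365

on_demand_annual = on_demand_hourly * hours_per_year
savings_plan_annual = effective_hourly * hours_per_year
annual_savings = on_demand_annual - savings_plan_annual

print(f"On-demand annual cost:    ${on_demand_annual:,.2f}")
print(f"Savings Plan annual cost: ${savings_plan_annual:,.2f}")
print(f"Annual savings:           ${annual_savings:,.2f}")
```

Because the commitment applies to any compute usage (EC2 now, Fargate later), the discount continues through the migration with no unused reservation left behind.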
The SysOps administrator needs to resolve high disk I/O issues during the bootstrap process of Nitro-based EC2 instances in an Auto Scaling group with gp3 EBS volumes. (Select TWO):
- A . Increase the EC2 instance size.
- B . Increase the EBS volume capacity.
- C . Increase the EBS volume IOPS.
- D . Increase the EBS volume throughput.
- E . Change the instance type to an instance that is not Nitro-based.
C, D
Explanation:
To address high I/O requirements during the bootstrap process, increasing both IOPS and throughput on the EBS volume is recommended:
Increase EBS Volume IOPS: Enhances the instance’s ability to handle multiple read and write operations per second, essential for data-heavy tasks like downloading Docker images.
Increase EBS Volume Throughput: Provides higher data transfer rates, reducing bottlenecks during intensive I/O operations.
Increasing instance size is unnecessary if the primary issue is disk I/O, and changing from Nitro-based instances would not address the underlying storage performance need.
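A minimal sketch of the parameters for raising IOPS and throughput on an existing gp3 volume, in the shape used by the boto3 EC2 `modify_volume` call. The volume ID and target values are hypothetical examples; gp3 volumes start at a 3,000 IOPS / 125 MiB/s baseline and can be provisioned up to 16,000 IOPS and 1,000 MiB/s:

```python
# Sketch of a gp3 performance modification (boto3 "modify_volume" call
# shape). The volume ID and target values below are hypothetical.
modify_volume_params = {
    "VolumeId": "vol-0123456789abcdef0",  # hypothetical volume ID
    "Iops": 6000,       # up from the gp3 baseline of 3,000 IOPS
    "Throughput": 500,  # MiB/s, up from the gp3 baseline of 125 MiB/s
}

# With boto3 this would be applied as:
#   import boto3
#   ec2 = boto3.client("ec2")
#   ec2.modify_volume(**modify_volume_params)

# Sanity-check the targets stay within gp3 service limits.
assert modify_volume_params["Iops"] <= 16000
assert modify_volume_params["Throughput"] <= 1000
print("gp3 modification parameters are within service limits")
```

For an Auto Scaling group, the same IOPS and throughput values would go into the launch template's block device mappings so every new instance gets them.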
A company is running a website on Amazon EC2 instances that are in an Auto Scaling group. When the website traffic increases, additional instances take several minutes to become available because of a long-running user data script that installs software. A SysOps administrator must decrease the time that is required for new instances to become available.
Which action should the SysOps administrator take to meet this requirement?
- A . Reduce the scaling thresholds so that instances are added before traffic increases
- B . Purchase Reserved Instances to cover 100% of the maximum capacity of the Auto Scaling group
- C . Update the Auto Scaling group to launch instances that have a storage optimized instance type
- D . Use EC2 Image Builder to prepare an Amazon Machine Image (AMI) that has pre-installed software
D
Explanation:
To reduce the time required for new instances to become available in an Auto Scaling group, pre-installing the necessary software in the AMI used by the Auto Scaling group is the most effective solution.
Use EC2 Image Builder:
Utilize EC2 Image Builder to create a custom AMI that includes all the required software and configurations.
This reduces the setup time during instance launch, as the user data script will no longer need to install the software.
Reference: EC2 Image Builder
Update Auto Scaling Group:
Update the Auto Scaling group to use the new AMI with pre-installed software.
Reference: Auto Scaling Groups
This solution ensures that new instances can handle traffic more quickly, reducing latency during scaling events.
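The update step above can be sketched as a new launch template version that points at the pre-baked AMI, in the shape used by the boto3 EC2 `create_launch_template_version` call. The template name and AMI ID are hypothetical examples:

```python
# Sketch of pointing an Auto Scaling group's launch template at a new
# AMI with pre-installed software (boto3 "create_launch_template_version"
# call shape). All names and IDs below are hypothetical.
new_version_params = {
    "LaunchTemplateName": "web-app-template",  # hypothetical template name
    "SourceVersion": "$Latest",
    "VersionDescription": "Pre-baked AMI from EC2 Image Builder",
    "LaunchTemplateData": {
        # Hypothetical ID of the Image Builder output AMI.
        "ImageId": "ami-0123456789abcdef0",
        # No UserData key: the software is already in the image, so there
        # is no long-running install step at launch.
    },
}

# With boto3:
#   ec2 = boto3.client("ec2")
#   ec2.create_launch_template_version(**new_version_params)

print("New AMI:", new_version_params["LaunchTemplateData"]["ImageId"])
```

The Auto Scaling group then picks up the new version (for example by keeping its launch template version set to `$Latest`).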
A SysOps administrator manages a company’s Amazon S3 buckets. The SysOps administrator has identified 5 GB of incomplete multipart uploads in an S3 bucket in the company’s AWS account. The SysOps administrator needs to reduce the number of incomplete multipart upload objects in the S3 bucket.
Which solution will meet this requirement?
- A . Create an S3 Lifecycle rule on the S3 bucket to delete expired markers or incomplete multipart uploads
- B . Require users that perform uploads of files into Amazon S3 to use the S3 TransferUtility.
- C . Enable S3 Versioning on the S3 bucket that contains the incomplete multipart uploads.
- D . Create an S3 Object Lambda Access Point to delete incomplete multipart uploads.
A
Explanation:
To manage incomplete multipart uploads in an Amazon S3 bucket, creating an S3 Lifecycle rule to specifically address these uploads is the most effective method. The rule can be configured to automatically delete expired multipart upload parts, which helps in cleaning up unused data and reducing storage costs. Option A is correct as it directly addresses the requirement to manage incomplete uploads effectively. Reference on setting up S3 Lifecycle policies can be found here Amazon S3 Lifecycle.
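The lifecycle rule can be sketched as the configuration shape used by the boto3 S3 `put_bucket_lifecycle_configuration` call. The bucket name, rule ID, and 7-day window are hypothetical examples:

```python
# Sketch of an S3 Lifecycle configuration that aborts incomplete
# multipart uploads (shape used by boto3
# "put_bucket_lifecycle_configuration"). Rule ID, bucket name, and the
# 7-day window are hypothetical examples.
lifecycle_config = {
    "Rules": [
        {
            "ID": "abort-incomplete-multipart-uploads",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix: apply to the whole bucket
            "AbortIncompleteMultipartUpload": {
                # Delete parts of uploads that have not completed
                # within 7 days of starting.
                "DaysAfterInitiation": 7
            },
        }
    ]
}

# With boto3:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="example-bucket",
#       LifecycleConfiguration=lifecycle_config,
#   )

print("Rule status:", lifecycle_config["Rules"][0]["Status"])
```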
A company has a list of pre-approved Amazon Machine Images (AMIs) for developers to use to launch Amazon EC2 instances. However, developers are still launching EC2 instances from unapproved AMIs.
A SysOps administrator must implement a solution that automatically terminates any instances that are launched from unapproved AMIs.
Which solution will meet this requirement?
- A . Set up an AWS Config managed rule to check if instances are running from AMIs that are on the list of pre-approved AMIs. Configure an automatic remediation action so that an AWS Systems Manager Automation runbook terminates any instances that are noncompliant with the rule
- B . Store the list of pre-approved AMIs in an Amazon DynamoDB global table that is replicated to all AWS Regions that the developers use. Create Regional EC2 launch templates. Configure the launch templates to check AMIs against the list and to terminate any instances that are not on the list
- C . Select the Amazon CloudWatch metric that shows all running instances and the AMIs that the instances were launched from Create a CloudWatch alarm that terminates an instance if the metric shows the use of an unapproved AMI.
- D . Create a custom Amazon Inspector finding to compare a running instance’s AMI against the list of pre-approved AMIs. Create an AWS Lambda function that terminates instances. Configure Amazon Inspector to report findings of unapproved AMIs to an Amazon Simple Queue Service (Amazon SQS) queue to invoke the Lambda function.
A
Explanation:
AWS Config Managed Rule:
AWS Config can be used to assess, audit, and evaluate the configurations of AWS resources. The managed rule can check if instances are launched from approved AMIs.
Steps:
Go to the AWS Management Console.
Navigate to AWS Config.
Create a managed rule that checks for EC2 instances running approved AMIs.
Configure the rule to use a list of approved AMIs.
Automatic Remediation with Systems Manager Automation:
AWS Systems Manager Automation runbooks can automate the process of remediating non-compliant resources.
Steps:
Create a Systems Manager Automation runbook that terminates instances not running approved AMIs.
Attach the runbook to the AWS Config rule for automatic remediation.
Reference: AWS Config Rules, AWS Systems Manager Automation
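The managed-rule step can be sketched as the shape used by the boto3 Config `put_config_rule` call, using the `APPROVED_AMIS_BY_ID` managed rule. The rule name and AMI IDs are hypothetical examples:

```python
import json

# Sketch of an AWS Config rule that flags EC2 instances launched from
# AMIs outside an approved list (boto3 "put_config_rule" call shape,
# using the APPROVED_AMIS_BY_ID managed rule). The rule name and AMI
# IDs below are hypothetical.
config_rule = {
    "ConfigRuleName": "approved-amis-check",
    "Source": {
        "Owner": "AWS",
        "SourceIdentifier": "APPROVED_AMIS_BY_ID",
    },
    # The managed rule takes a comma-separated list of approved AMI IDs.
    "InputParameters": json.dumps({
        "amiIds": "ami-0123456789abcdef0,ami-0fedcba9876543210"
    }),
    "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
}

# With boto3 (automatic remediation via a Systems Manager Automation
# runbook such as AWS-TerminateEC2Instance would then be attached with
# "put_remediation_configurations"):
#   config = boto3.client("config")
#   config.put_config_rule(ConfigRule=config_rule)

print("Managed rule:", config_rule["Source"]["SourceIdentifier"])
```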
A company has an Amazon CloudFront distribution that uses an Amazon S3 bucket as its origin. During a review of the access logs, the company determines that some requests are going directly to the S3 bucket by using the website hosting endpoint. A SysOps administrator must secure the S3 bucket to allow requests only from CloudFront.
What should the SysOps administrator do to meet this requirement?
- A . Create an origin access identity (OAI) in CloudFront. Associate the OAI with the distribution. Remove access to and from other principals in the S3 bucket policy. Update the S3 bucket policy to allow access only from the OAI.
- B . Create an origin access identity (OAI) in CloudFront. Associate the OAI with the distribution. Update the S3 bucket policy to allow access only from the OAI. Create a new origin, and specify the S3 bucket as the new origin. Update the distribution behavior to use the new origin. Remove the existing origin.
- C . Create an origin access identity (OAI) in CloudFront. Associate the OAI with the distribution. Update the S3 bucket policy to allow access only from the OAI. Disable website hosting. Create a new origin, and specify the S3 bucket as the new origin. Update the distribution behavior to use the new origin. Remove the existing origin.
- D . Update the S3 bucket policy to allow access only from the CloudFront distribution. Remove access to and from other principals in the S3 bucket policy. Disable website hosting. Create a new origin, and specify the S3 bucket as the new origin. Update the distribution behavior to use the new origin. Remove the existing origin.
A
Explanation:
To secure the S3 bucket and allow access only from CloudFront, the following steps should be taken:
Create an OAI in CloudFront:
In the CloudFront console, create an origin access identity (OAI) and associate it with your CloudFront distribution.
Reference: Restricting Access to S3 Buckets
Update S3 Bucket Policy:
Modify the S3 bucket policy to allow access only from the OAI. This involves adding a policy statement that grants the OAI permission to get objects from the bucket and removing any other public access permissions.
Example Policy:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3EXAMPLE"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```
Reference: Bucket Policy Examples
Test Configuration:
Ensure that the S3 bucket is not publicly accessible and that requests to the bucket through the CloudFront distribution are successful.
Reference: Testing CloudFront OAI
A company wants to reduce costs for jobs that can be completed at any time. The jobs currently run by using multiple Amazon EC2 On-Demand Instances, and the jobs take slightly less than 2 hours to complete. If a job fails for any reason, it must be restarted from the beginning.
Which solution will meet these requirements MOST cost-effectively?
- A . Purchase Reserved Instances for the jobs.
- B . Submit a request for a one-time Spot Instance for the jobs.
- C . Submit a request for Spot Instances with a defined duration for the jobs.
- D . Use a mixture of On-Demand Instances and Spot Instances for the jobs.
C
Explanation:
To reduce costs effectively for jobs that are flexible in their scheduling and have a clear, predictable runtime:
Spot Instances with Defined Duration (Spot Blocks): Spot Instances offer significant discounts compared to On-Demand pricing. For workloads like the described jobs that have a predictable duration (slightly less than 2 hours), requesting Spot Instances with a defined duration (also known as Spot Blocks) is ideal. This option allows you to request Spot Instances guaranteed to not be terminated by AWS due to price changes for the duration specified.
Cost Efficiency: This method ensures that the instances will run for the duration required to complete the jobs without interruption, unless AWS experiences an exceptional shortage of capacity. The cost savings compared to On-Demand Instances can be substantial, especially for regular, predictable workloads.
Risk Mitigation: Although Spot Instances can be interrupted, using them with a defined duration reduces the risk of interruptions within the set time frame, making them suitable for jobs that can tolerate a restart in rare cases of interruption after the block time expires.
This strategy combines cost savings with the performance requirements of the jobs, making it an optimal choice for tasks that are not time-critical but need completion within a predictable timeframe.
A SysOps administrator needs to implement a backup strategy for Amazon EC2 resources and Amazon RDS resources.
The backup strategy must meet the following retention requirements:
• Daily backups: must be kept for 6 days
• Weekly backups: must be kept for 4 weeks
• Monthly backups: must be kept for 11 months
• Yearly backups: must be kept for 7 years
Which backup strategy will meet these requirements with the LEAST administrative effort?
- A . Use Amazon Data Lifecycle Manager to create an Amazon Elastic Block Store (Amazon EBS) snapshot policy. Create tags on each resource that needs to be backed up. Create multiple schedules according to the requirements within the policy. Set the appropriate frequency and retention period.
- B . Use AWS Backup to create a new backup plan for each retention requirement with a backup frequency of daily, weekly, monthly, or yearly. Set the retention period to match the requirement. Create tags on each resource that needs to be backed up. Set up resource assignment by using the tags.
- C . Create an AWS Lambda function. Program the Lambda function to use native tooling to take backups of file systems in Amazon EC2 and to make copies of databases in Amazon RDS. Create an Amazon EventBridge rule to invoke the Lambda function.
- D . Use Amazon Data Lifecycle Manager to create an Amazon Elastic Block Store (Amazon EBS) snapshot policy. Create tags on each resource that needs to be backed up. Set up resource assignment by using the tags. Create multiple schedules according to the requirements within the policy. Set the appropriate frequency and retention period. In Amazon RDS, activate automated backups on the required DB instances.
B
Explanation:
AWS Backup provides a centralized way to manage backups across AWS services. Here’s how to implement the required backup strategy with minimal administrative effort:
Create Backup Plans: Set up different backup plans in AWS Backup, each configured for a specific backup frequency: daily, weekly, monthly, and yearly.
Set Retention Periods: For each backup plan, configure the retention settings to align with the required retention durations: 6 days, 4 weeks, 11 months, and 7 years respectively.
Tag Resources: Apply tags to each EC2 and RDS resource that needs to be backed up. This allows for the automated inclusion of these resources in the respective backup plans based on their tags.
Assign Resources to Backup Plans: Use the tags to define which resources are included in each backup plan, ensuring that all necessary resources are backed up according to the defined schedules and retention policies.
AWS Documentation
Reference: More details on setting up and managing AWS Backup can be found here: AWS Backup.
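One of these plans (the daily one) can be sketched in the shape used by the boto3 Backup `create_backup_plan` call. The plan name, vault name, schedule, and tag values are hypothetical examples; the weekly, monthly, and yearly plans would follow the same shape with their own schedules and retention periods:

```python
# Sketch of the daily AWS Backup plan (boto3 "create_backup_plan" call
# shape). Plan name, vault name, schedule, and tag values below are
# hypothetical examples.
daily_backup_plan = {
    "BackupPlanName": "daily-backups",
    "Rules": [
        {
            "RuleName": "daily-rule",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 ? * * *)",  # 05:00 UTC daily
            "Lifecycle": {"DeleteAfterDays": 6},  # keep daily backups 6 days
        }
    ],
}

# With boto3, the plan is created and resources are assigned by tag:
#   backup = boto3.client("backup")
#   plan = backup.create_backup_plan(BackupPlan=daily_backup_plan)
#   backup.create_backup_selection(
#       BackupPlanId=plan["BackupPlanId"],
#       BackupSelection={
#           "SelectionName": "tagged-resources",
#           "IamRoleArn": "arn:aws:iam::111122223333:role/BackupRole",
#           "ListOfTags": [{
#               "ConditionType": "STRINGEQUALS",
#               "ConditionKey": "backup",
#               "ConditionValue": "daily",
#           }],
#       },
#   )

print("Daily retention:", daily_backup_plan["Rules"][0]["Lifecycle"]["DeleteAfterDays"], "days")
```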
A SysOps administrator has blocked public access to all company Amazon S3 buckets. The SysOps administrator wants to be notified when an S3 bucket becomes publicly readable in the future.
What is the MOST operationally efficient way to meet this requirement?
- A . Create an AWS Lambda function that periodically checks the public access settings for each S3 bucket. Set up Amazon Simple Notification Service (Amazon SNS) to send notifications.
- B . Create a cron script that uses the S3 API to check the public access settings for each S3 bucket. Set up Amazon Simple Notification Service (Amazon SNS) to send notifications
- C . Enable S3 Event Notifications for each S3 bucket. Subscribe S3 Event Notifications to an Amazon Simple Notification Service (Amazon SNS) topic.
- D . Enable the s3-bucket-public-read-prohibited managed rule in AWS Config. Subscribe the AWS Config rule to an Amazon Simple Notification Service (Amazon SNS) topic.
D
Explanation:
AWS Config can continuously monitor and record your AWS resource configurations. It provides AWS Config rules that automatically check the configuration of AWS resources and notify you of compliance and non-compliance.
Steps:
Enable AWS Config:
Open the AWS Config console.
Follow the steps to set up AWS Config if it is not already enabled.
Add AWS Managed Rules:
In the AWS Config console, choose "Rules".
Add the s3-bucket-public-read-prohibited managed rule.
Configure the rule to check all S3 buckets.
Set Up SNS Notifications:
Create an Amazon SNS topic.
Subscribe your email or other communication channels to the SNS topic.
In AWS Config, configure the rule to send notifications to the SNS topic whenever there is a compliance change.
This approach ensures operational efficiency as AWS Config will automatically monitor S3 buckets and notify you through SNS if any bucket becomes publicly accessible.
Reference: AWS Config Managed Rules
Setting Up AWS Config
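The notification step can be sketched as an Amazon EventBridge event pattern that matches AWS Config compliance changes for this managed rule and forwards them to an SNS topic. The EventBridge rule name and SNS topic ARN in the comments are hypothetical examples:

```python
# Sketch of an EventBridge event pattern matching AWS Config compliance
# changes for the s3-bucket-public-read-prohibited managed rule, so that
# NON_COMPLIANT results can be routed to an SNS topic.
event_pattern = {
    "source": ["aws.config"],
    "detail-type": ["Config Rules Compliance Change"],
    "detail": {
        "configRuleName": ["s3-bucket-public-read-prohibited"],
        # Only notify when a bucket becomes non-compliant
        # (i.e. publicly readable).
        "newEvaluationResult": {
            "complianceType": ["NON_COMPLIANT"]
        },
    },
}

# With boto3 (rule name and topic ARN are hypothetical):
#   import json
#   events = boto3.client("events")
#   events.put_rule(
#       Name="s3-public-read-alert",
#       EventPattern=json.dumps(event_pattern),
#   )
#   events.put_targets(
#       Rule="s3-public-read-alert",
#       Targets=[{"Id": "sns", "Arn": "arn:aws:sns:us-east-1:111122223333:alerts"}],
#   )

print("Matching detail-type:", event_pattern["detail-type"][0])
```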