Practice Free CLF-C02 Exam Online Questions
A company’s information security manager is supervising a move to AWS and wants to ensure that AWS best practices are followed. The manager has concerns about the potential misuse of AWS account root user credentials.
Which of the following is an AWS best practice for using the AWS account root user credentials?
- A . Allow only the manager to use the account root user credentials for normal activities.
- B . Use the account root user credentials only for Amazon EC2 instances from the AWS Free Tier.
- C . Use the account root user credentials only when they alone must be used to perform a required function.
- D . Use the account root user credentials only for the creation of private VPC subnets.
C
Explanation:
The AWS best practice for using the AWS account root user credentials is to use them only when they alone must be used to perform a required function. The AWS account root user credentials have full access to all the resources in the account, and therefore pose a security risk if compromised or misused. You should create individual IAM users with the minimum necessary permissions for everyday tasks, and use AWS Organizations to manage multiple accounts. You should also enable multi-factor authentication (MFA) and rotate the password for the root user regularly. Some of the functions that require the root user credentials are changing the account name, closing the account, changing the support plan, and restoring an IAM user’s access.
Which factors affect costs in the AWS Cloud? (Select TWO.)
- A . The number of unused AWS Lambda functions
- B . The number of configured Amazon S3 buckets
- C . Inbound data transfers without acceleration
- D . Outbound data transfers without acceleration
- E . Compute resources that are currently in use
D, E
Explanation:
Outbound data transfers without acceleration and compute resources that are currently in use are the factors that affect costs in the AWS Cloud. Outbound data transfers without acceleration refer to the amount of data that is transferred from AWS to the internet, without using any service that can optimize the speed and cost of the data transfer, such as AWS Global Accelerator or Amazon CloudFront. Outbound data transfers are charged at different rates depending on the source and destination AWS Regions, and the volume of data transferred. Compute resources that are currently in use refer to the AWS services and resources that provide computing capacity, such as Amazon EC2 instances, AWS Lambda functions, or Amazon ECS tasks. Compute resources are charged based on the type, size, and configuration of the resources, and the duration and frequency of their usage.
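As a rough illustration of how these two billed factors combine, here is a minimal Python sketch. The per-unit rates are hypothetical placeholders for illustration only, not actual AWS prices.

```python
# Sketch of how the two billed factors combine into a monthly estimate.
# Rates below are HYPOTHETICAL placeholders, not real AWS prices.

OUTBOUND_RATE_PER_GB = 0.09   # assumed $/GB for internet egress
COMPUTE_RATE_PER_HOUR = 0.10  # assumed $/hour for a running instance

def estimate_monthly_cost(outbound_gb: float, compute_hours: float) -> float:
    """Inbound transfers and idle, unused resources add nothing here."""
    egress = outbound_gb * OUTBOUND_RATE_PER_GB
    compute = compute_hours * COMPUTE_RATE_PER_HOUR
    return round(egress + compute, 2)
```

Note that unused Lambda functions and empty S3 buckets contribute no term to this estimate, which is why options A and B are incorrect.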
A company wants to use the latest technologies and wants to minimize its capital investment. Instead of upgrading on-premises infrastructure, the company wants to move to the AWS Cloud.
Which AWS Cloud benefit does this scenario describe?
- A . Increased speed to market
- B . The trade of infrastructure expenses for operating expenses
- C . Massive economies of scale
- D . The ability to go global in minutes
B
Explanation:
The trade of infrastructure expenses for operating expenses is one of the benefits of the AWS Cloud. By moving to the AWS Cloud, the company avoids the upfront capital costs of purchasing and maintaining on-premises infrastructure, such as servers, storage, networking, and software. Instead, the company pays only for the AWS resources and services it uses, as it uses them. This reduces the risk and complexity of planning and managing IT infrastructure and allows the company to focus on innovation and growth. Increased speed to market, massive economies of scale, and the ability to go global in minutes are also benefits of the AWS Cloud, but they do not describe this scenario. Increased speed to market means the company can launch new products and services faster by using AWS services and tools. Massive economies of scale means the company benefits from the lower costs and higher performance that AWS achieves by operating at a large scale. The ability to go global in minutes means the company can deploy its applications and data in multiple Regions and Availability Zones around the world to reach customers faster and improve performance and reliability.
Which AWS service is a fully managed NoSQL database service?
- A . Amazon RDS
- B . Amazon Redshift
- C . Amazon DynamoDB
- D . Amazon Aurora
C
Explanation:
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. It supports both document and key-value data models and is designed to handle large amounts of data across multiple servers. Other options, like Amazon RDS and Aurora, are managed relational database services, and Amazon Redshift is a data warehousing service.
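To illustrate the key-value data model that DynamoDB uses, here is a minimal in-process sketch. The class, keys, and attributes are hypothetical stand-ins, not real DynamoDB API calls.

```python
# Minimal sketch of DynamoDB's key-value data model: each item is
# addressed by a (partition_key, sort_key) pair. Hypothetical, in-memory.

class KeyValueTable:
    def __init__(self):
        self._items = {}

    def put_item(self, partition_key, sort_key, attributes):
        # Items are schemaless: any set of attributes can be stored.
        self._items[(partition_key, sort_key)] = dict(attributes)

    def get_item(self, partition_key, sort_key):
        # Lookups by full key are fast and predictable.
        return self._items.get((partition_key, sort_key))

table = KeyValueTable()
table.put_item("user#42", "profile", {"name": "Ana", "plan": "pro"})
```

Contrast this with the relational model of RDS and Aurora, where data lives in tables with fixed columns and is queried with SQL.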
Which AWS database service provides in-memory data storage?
- A . Amazon DynamoDB
- B . Amazon ElastiCache
- C . Amazon RDS
- D . Amazon Timestream
B
Explanation:
The correct answer is B because Amazon ElastiCache is a service that provides in-memory data storage. Amazon ElastiCache is a fully managed, scalable, and high-performance service that supports two popular open-source in-memory engines: Redis and Memcached. Amazon ElastiCache allows users to store and retrieve data from fast, low-latency, and high-throughput in-memory systems. Users can use Amazon ElastiCache to improve the performance of their applications by caching frequently accessed data, reducing database load, and enabling real-time data processing. The other options are incorrect because they are not services that provide in-memory data storage. Amazon DynamoDB is a service that provides key-value and document data storage. Amazon RDS is a service that provides relational data storage. Amazon Timestream is a service that provides time series data storage.
Reference: Amazon ElastiCache FAQs
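The caching use case described above can be sketched with the cache-aside pattern. A plain Python dict stands in for a Redis or Memcached cluster here, and the keys and data are made up for illustration.

```python
# Cache-aside pattern, the typical ElastiCache use case. A dict stands
# in for Redis/Memcached so the sketch runs without a cluster.

database = {"product:1": {"title": "Widget", "price": 9.99}}  # slow backing store
cache = {}                                                    # in-memory tier

def get_product(key):
    if key in cache:          # cache hit: no database round trip
        return cache[key]
    value = database[key]     # cache miss: read from the database
    cache[key] = value        # populate the cache for later reads
    return value
```

Repeated reads of the same key are served from memory, which is how caching reduces database load.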
A company wants to run its workload on Amazon EC2 instances for more than 1 year. This workload will run continuously.
Which option offers a discounted hourly rate compared to the hourly rate of On-Demand Instances?
- A . AWS Graviton processor
- B . Dedicated Hosts
- C . EC2 Instance Savings Plans
- D . Amazon EC2 Auto Scaling instances
C
Explanation:
EC2 Instance Savings Plans are a flexible pricing model that offers discounted hourly rates on Amazon EC2 instance usage for a 1-year or 3-year term. EC2 Instance Savings Plans provide savings of up to 72% off On-Demand rates in exchange for a commitment to a specific instance family in a chosen AWS Region (for example, M5 in Virginia). These plans automatically apply to usage regardless of size (for example, m5.xlarge, m5.2xlarge), OS (for example, Windows, Linux), and tenancy (Host, Dedicated, Default) within the specified family in a Region. With an EC2 Instance Savings Plan, you can change your instance size within the instance family (for example, from c5.xlarge to c5.2xlarge) or the operating system (for example, from Windows to Linux), or move from Dedicated tenancy to Default, and continue to receive the discounted rate provided by your EC2 Instance Savings Plan [4][5][6][7].
References: [4] Compute Savings Plans – Amazon Web Services, [5] What are Savings Plans? – Savings Plans, [6] How To Cut Your AWS Bill With Savings Plans (and avoid some common …), [7] AWS Savings Plans vs Reserved Instances – GorillaStack
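The "up to 72% off On-Demand" figure quoted above can be illustrated with simple arithmetic. The On-Demand hourly rate below is a hypothetical example price, not a real quote.

```python
# Arithmetic behind the Savings Plans discount. The On-Demand rate
# used in the test is a HYPOTHETICAL example price.

def savings_plan_rate(on_demand_hourly: float, discount: float) -> float:
    """Effective hourly rate after the Savings Plan discount."""
    return round(on_demand_hourly * (1 - discount), 4)

def yearly_savings(on_demand_hourly: float, discount: float) -> float:
    """Savings over one year of continuous (24x7) usage."""
    hours = 24 * 365
    return round(on_demand_hourly * discount * hours, 2)
```

For a workload that runs continuously for more than a year, as in this question, the committed discount compounds over every billed hour, which is why Savings Plans beat On-Demand pricing here.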
Which services can be used to deploy applications on AWS? (Select TWO.)
- A . AWS Elastic Beanstalk
- B . AWS Config
- C . AWS OpsWorks
- D . AWS Application Discovery Service
- E . Amazon Kinesis
A, C
Explanation:
The services that can be used to deploy applications on AWS are:
AWS Elastic Beanstalk. This is a service that simplifies the deployment and management of web applications on AWS. Users can upload their application code and Elastic Beanstalk automatically handles the provisioning, scaling, load balancing, monitoring, and health checking of the resources needed to run the application. Users can also retain full control and access to the underlying resources and customize their configuration settings. Elastic Beanstalk supports multiple platforms, such as Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker. [AWS Elastic Beanstalk Overview] AWS Certified Cloud Practitioner – aws.amazon.com
AWS OpsWorks. This is a service that provides configuration management and automation for AWS resources. Users can define the application architecture and the configuration of each resource using Chef or Puppet, which are popular open-source automation platforms. OpsWorks then automatically creates and configures the resources according to the user’s specifications. OpsWorks also provides features such as auto scaling, monitoring, and integration with other AWS services. OpsWorks has two offerings: OpsWorks for Chef Automate and OpsWorks for Puppet Enterprise. [AWS OpsWorks Overview] AWS Certified Cloud Practitioner – aws.amazon.com
Which AWS service can a company use to manage encryption keys in the cloud?
- A . AWS License Manager
- B . AWS Certificate Manager (ACM)
- C . AWS CloudHSM
- D . AWS Directory Service
C
Explanation:
AWS CloudHSM provides hardware-based key management to manage and protect encryption keys in the AWS Cloud. It allows customers to generate and use their own encryption keys while complying with rigorous security requirements. While AWS Certificate Manager (ACM) manages SSL/TLS certificates, it does not handle encryption keys independently, and AWS License Manager and AWS Directory Service are not designed for managing encryption keys. AWS KMS is also relevant for key management but wasn’t listed as an option in this question.
A company is running applications on Amazon EC2 instances in the same AWS account for several different projects. The company wants to track the infrastructure costs for each of the projects separately. The company must conduct this tracking with the least possible impact to the existing infrastructure and with no additional cost.
What should the company do to meet these requirements?
- A . Use a different EC2 instance type for each project.
- B . Publish project-specific custom Amazon CloudWatch metrics for each application.
- C . Deploy EC2 instances for each project in a separate AWS account.
- D . Use cost allocation tags with values that are specific to each project.
D
Explanation:
The correct answer is D because cost allocation tags are a way to track the infrastructure costs for each project separately. Cost allocation tags are key-value pairs that can be attached to AWS resources, such as EC2 instances, and used to categorize and group them for billing purposes. The other options do not meet the requirements of the question. Using a different EC2 instance type for each project does not help track costs per project and may affect the performance and compatibility of the applications. Publishing project-specific custom Amazon CloudWatch metrics does not help track costs per project and may incur additional charges for using CloudWatch. Deploying EC2 instances for each project in a separate AWS account would allow per-project cost tracking, but it requires restructuring the existing infrastructure, which conflicts with the requirement for the least possible impact.
Reference: Using Cost Allocation Tags
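What cost allocation tagging enables can be sketched as a simple grouping of per-resource spend by tag value. The instance IDs, tags, and costs below are made up for illustration; in practice AWS Cost Explorer performs this grouping once the tags are activated for cost allocation.

```python
from collections import defaultdict

# Sketch of grouping per-resource spend by a "project" cost allocation
# tag. Instance IDs, tags, and costs are hypothetical examples.

instances = [
    {"id": "i-0a1", "tags": {"project": "checkout"}, "cost": 12.50},
    {"id": "i-0b2", "tags": {"project": "search"},   "cost": 8.00},
    {"id": "i-0c3", "tags": {"project": "checkout"}, "cost": 4.25},
]

def cost_by_project(resources):
    totals = defaultdict(float)
    for r in resources:
        # Untagged resources fall into a catch-all bucket.
        totals[r["tags"].get("project", "untagged")] += r["cost"]
    return dict(totals)
```

Tagging requires no change to the instances themselves and no extra charge, which is what satisfies the "least possible impact, no additional cost" constraint.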
A company has an online shopping website and wants to store customers’ credit card data. The company must meet Payment Card Industry (PCI) standards.
Which service can the company use to access AWS compliance documentation?
A company provides a web-based ecommerce service that runs in two Availability Zones within a single AWS Region. The web service distributes content that is stored in the Amazon S3 Standard storage class. The company wants to improve the web service’s performance globally.
What should the company do to meet this requirement?
- A . Change the S3 storage class to S3 Intelligent-Tiering.
- B . Deploy an Amazon CloudFront distribution to cache web server content in edge locations.
- C . Use Amazon API Gateway for the web service.
- D . Migrate the website ecommerce servers to Amazon EC2 with enhanced networking.
B
Explanation:
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. CloudFront can cache web server content in edge locations, which are located closer to end users, to improve the web service’s performance globally.
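The latency benefit of edge caching can be illustrated with a toy model: repeated requests are served from a nearby edge location instead of the distant origin. The latency figures are assumed values for illustration, not measurements.

```python
# Toy model of edge caching. Latency numbers are ASSUMED examples.

ORIGIN_LATENCY_MS = 250  # assumed round trip to the distant origin
EDGE_LATENCY_MS = 20     # assumed round trip to a nearby edge location

def serve(requests, edge_cache=None):
    """Return total latency in ms for a sequence of requested paths."""
    if edge_cache is None:
        edge_cache = set()
    total = 0
    for path in requests:
        if path in edge_cache:
            total += EDGE_LATENCY_MS   # hit: served from the edge
        else:
            total += ORIGIN_LATENCY_MS # miss: fetch from origin...
            edge_cache.add(path)       # ...then cache at the edge
    return total
```

Only the first request for each object pays the origin round trip; every subsequent request from that region is served at edge latency.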