Practice Free CAS-005 Exam Online Questions
You want to set up JSA to collect network traffic flows from network devices on your network.
Which two statements are correct when performing this task? (Choose two.)
- A . BGP FlowSpec is used to collect traffic flows from Junos OS devices.
- B . Statistical sampling increases processor utilization
- C . Statistical sampling decreases event correlation accuracy.
- D . Superflows reduce traffic licensing requirements.
C, D
Explanation:
Statistical sampling involves collecting a representative subset of data rather than examining all traffic. While this method decreases processor utilization by reducing the volume of data that must be analyzed and stored, it can also lead to decreased accuracy in event correlation because not all events are captured.
Superflows in JSA are aggregated flow records that represent summaries of multiple flow records. This aggregation reduces the number of flows that need to be processed and stored, which can help in managing licensing requirements related to the volume of traffic being analyzed, especially in environments with high traffic volumes.
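To make the sampling trade-off concrete, the sketch below (illustrative only, not JSA code; the flow records and the 1-in-10 sampling rate are invented for the example) shows how statistical sampling cuts the processing load but can drop exactly the record a correlation rule needs:

```python
# Illustrative only: why statistical sampling lowers processor load
# but hurts event correlation accuracy. The flow list and 1-in-10
# sampling rate are invented for this example.

flows = [{"src": f"10.0.0.{i}", "suspicious": False} for i in range(100)]
flows[37]["suspicious"] = True  # one rare malicious flow among 100

# Statistical sampling: keep every 10th record -> 90% less to analyze...
sampled = flows[::10]

# ...but the suspicious flow at index 37 never makes it into the sample,
# so a correlation rule keyed on it can never fire.
captured = any(f["suspicious"] for f in sampled)
print(len(sampled), captured)  # 10 records kept, suspicious flow missed
```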
A company that uses containers to run its applications is required to identify vulnerabilities on every container image in a private repository. The security team needs to be able to quickly evaluate whether to respond to a given vulnerability.
Which of the following will allow the security team to achieve the objective with the least effort?
- A . SAST scan reports
- B . Centralized SBoM
- C . CIS benchmark compliance reports
- D . Credentialed vulnerability scan
B
Explanation:
A centralized Software Bill of Materials (SBoM) is the best solution for identifying vulnerabilities in container images in a private repository. An SBoM provides a comprehensive inventory of all components, dependencies, and their versions within a container image, facilitating quick evaluation and response to vulnerabilities.
Why Centralized SBoM?
Comprehensive Inventory: An SBoM lists all software components, including their versions and dependencies, allowing for thorough vulnerability assessments.
Quick Identification: Centralizing SBoM data enables rapid identification of affected containers when a vulnerability is disclosed.
Automation: SBoMs can be integrated into automated tools for continuous monitoring and alerting of vulnerabilities.
Regulatory Compliance: Helps in meeting compliance requirements by providing a clear and auditable record of all software components used.
Other options, while useful, do not provide the same level of comprehensive and efficient vulnerability management.
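As a sketch of the "quick identification" point above, the snippet below searches a centralized SBoM store for every image containing a newly disclosed vulnerable package. The structure loosely follows the CycloneDX JSON `components` layout, but the image names, package names, and versions are invented for illustration:

```python
# Sketch: given a centralized store of per-image SBoMs (CycloneDX-style
# "components" lists), find every image containing a vulnerable package.
# Image names, package names, and versions are invented for illustration.

sboms = {
    "web-frontend:1.4": {"components": [
        {"name": "openssl", "version": "3.0.7"},
        {"name": "zlib", "version": "1.2.13"},
    ]},
    "payments-api:2.1": {"components": [
        {"name": "openssl", "version": "3.0.1"},
    ]},
}

def affected_images(sboms, package, bad_versions):
    """Return the images whose SBoM lists a vulnerable package version."""
    return [image for image, sbom in sboms.items()
            if any(c["name"] == package and c["version"] in bad_versions
                   for c in sbom["components"])]

print(affected_images(sboms, "openssl", {"3.0.1"}))  # ['payments-api:2.1']
```

Because the lookup is a query against an inventory the team already holds, no rescanning of images is needed when a new CVE drops, which is what makes this the least-effort option.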
A hospital provides tablets to its medical staff to enable them to more quickly access and edit patients’ charts. The hospital wants to ensure that if a tablet is identified as lost or stolen and a remote command is issued, the risk of data loss can be mitigated within seconds.
The tablets are configured as follows to meet hospital policy:
• Full disk encryption is enabled
• "Always On" corporate VPN is enabled
• eFuse-backed keystore is enabled
• Wi-Fi 6 is configured with SAE
• Location services is disabled
• Application allow list is configured
Which of the following solutions should the hospital use to meet the requirement?
- A . Revoking the user certificates used for VPN and Wi-Fi access
- B . Performing cryptographic obfuscation
- C . Using geolocation to find the device
- D . Configuring the application allow list to only permit emergency calls
- E . Returning the device’s solid-state media to zero
E
Explanation:
To mitigate the risk of data loss on a lost or stolen tablet quickly, the most effective strategy is to return the device’s solid-state media to zero, which effectively erases all data on the device.
Here’s why:
Immediate Data Erasure: Returning the solid-state media to zero ensures that all data is wiped instantly, mitigating the risk of data loss if the device is lost or stolen.
Full Disk Encryption: Even though the tablets are already encrypted, physically erasing the data ensures that no residual data can be accessed if someone attempts to bypass encryption.
Compliance and Security: This method adheres to best practices for data security and compliance, ensuring that sensitive patient data cannot be accessed by unauthorized parties.
Reference: CompTIA Security+ SY0-601 Study Guide by Mike Chapple and David Seidl
NIST Special Publication 800-88: Guidelines for Media Sanitization
ISO/IEC 27002:2013 – Information Security Management
A security architect wants to develop a baseline of security configurations. These configurations will automatically be applied when a machine is created.
Which of the following technologies should the security architect deploy to accomplish this goal?
- A . Short
- B . CASB
- C . Ansible
- D . CMDB
C
Explanation:
To develop a baseline of security configurations that will be automatically utilized when a machine is created, the security architect should deploy Ansible.
Here’s why:
Automation: Ansible is an automation tool that allows for the configuration, management, and deployment of applications and systems. It ensures that security configurations are consistently applied across all new machines.
Scalability: Ansible can scale to manage thousands of machines, making it suitable for large enterprises that need to maintain consistent security configurations across their infrastructure.
Compliance: By using Ansible, organizations can enforce compliance with security policies and standards, ensuring that all systems are configured according to best practices.
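As a hedged illustration of how such a baseline could be expressed (a minimal sketch, not a vetted hardening standard; the package name and sshd settings below are assumptions chosen for the example), a playbook run against every newly provisioned machine might look like:

```yaml
# Minimal sketch of a security-baseline playbook applied at machine
# creation (e.g., from a provisioning pipeline). The package name and
# sshd settings are illustrative assumptions, not a vetted baseline.
- name: Apply security configuration baseline
  hosts: new_machines
  become: true
  tasks:
    - name: Ensure auditd is installed
      ansible.builtin.package:
        name: auditd
        state: present

    - name: Disable SSH password authentication
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PasswordAuthentication'
        line: 'PasswordAuthentication no'
      notify: restart sshd

  handlers:
    - name: restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted
```

Because playbooks are declarative and idempotent, rerunning them also detects and corrects drift from the baseline on existing machines.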
Reference: CompTIA Security+ SY0-601 Study Guide by Mike Chapple and David Seidl
Ansible Documentation: Best Practices
NIST Special Publication 800-40: Guide to Enterprise Patch Management Technologies
A company recently experienced an incident in which an advanced threat actor was able to shim malicious code into the hardware stack of a domain controller. The forensic team cryptographically validated that both the underlying firmware of the box and the operating system had not been compromised. However, the attacker was able to exfiltrate information from the server using a steganographic technique within LDAP.
Which of the following is the best way to reduce the risk of recurrence?
- A . Enforcing allow lists for authorized network ports and protocols
- B . Measuring and attesting to the entire boot chain
- C . Rolling the cryptographic keys used for hardware security modules
- D . Using code signing to verify the source of OS updates
A
Explanation:
The scenario describes a sophisticated attack where the threat actor used steganography within LDAP to exfiltrate data. Given that the hardware and OS firmware were validated and found uncompromised, the attack vector likely exploited a network communication channel. To mitigate such risks, enforcing allow lists for authorized network ports and protocols is the most effective strategy.
Here’s why this option is optimal:
Port and Protocol Restrictions: By creating an allow list, the organization can restrict communications to only those ports and protocols that are necessary for legitimate business operations. This reduces the attack surface by preventing unauthorized or unusual traffic.
Network Segmentation: Enforcing such rules helps in segmenting the network and ensuring that only approved communications occur, which is critical in preventing data exfiltration methods like steganography.
Preventing Unauthorized Access: Allow lists ensure that only predefined, trusted connections are allowed, blocking potential paths that attackers could use to infiltrate or exfiltrate data.
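A minimal sketch of such an egress allow list (nftables syntax; the specific ports chosen for a domain controller segment are assumptions for illustration, not a vetted policy) could look like:

```
table inet dc_allowlist {
    chain output {
        type filter hook output priority 0; policy drop;

        # Permit only explicitly approved protocols; anything else,
        # including covert channels over unexpected ports, is dropped.
        ct state established,related accept
        udp dport 53 accept                 # DNS
        tcp dport { 88, 389, 636 } accept   # Kerberos, LDAP, LDAPS
    }
}
```

The default-drop policy is the key design choice: new or unusual outbound channels fail closed until someone explicitly approves them.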
Other options, while beneficial in different contexts, are not directly addressing the network communication threat:
B. Measuring and attesting to the entire boot chain: While this improves system integrity, it doesn’t directly mitigate the risk of data exfiltration through network channels.
C. Rolling the cryptographic keys used for hardware security modules: This is useful for securing data and communications but doesn’t directly address the specific method of exfiltration described.
D. Using code signing to verify the source of OS updates: Ensures updates are from legitimate sources, but it doesn’t mitigate the risk of network-based data exfiltration.
Reference: CompTIA SecurityX Study Guide
NIST Special Publication 800-41, "Guidelines on Firewalls and Firewall Policy"
CIS Controls Version 8, Control 9: Limitation and Control of Network Ports, Protocols, and Services
An organization is implementing Zero Trust architecture. A systems administrator must increase the effectiveness of the organization’s context-aware access system.
Which of the following is the best way to improve the effectiveness of the system?
- A . Secure zone architecture
- B . Always-on VPN
- C . Accurate asset inventory
- D . Microsegmentation
D
Explanation:
Microsegmentation is a critical strategy within Zero Trust architecture that enhances context-aware access systems by dividing the network into smaller, isolated segments. This reduces the attack surface and limits lateral movement of attackers within the network. It ensures that even if one segment is compromised, the attacker cannot easily access other segments. This granular approach to network security is essential for enforcing strict access controls and monitoring within Zero Trust environments.
Reference: CompTIA SecurityX Study Guide, Chapter on Zero Trust Security, Section on Microsegmentation and Network Segmentation.
Which of the following best describes the challenges associated with widespread adoption of homomorphic encryption techniques?
- A . Incomplete mathematical primitives
- B . No use cases to drive adoption
- C . Quantum computers not yet capable
- D . insufficient coprocessor support
D
Explanation:
Homomorphic encryption allows computations to be performed on encrypted data without decrypting it, providing strong privacy guarantees.
However, widespread adoption is challenging primarily for performance reasons: every homomorphic operation involves large modular-arithmetic computations, making it orders of magnitude slower than working on plaintext, and dedicated hardware (coprocessor) acceleration to close that gap is not yet widely available.
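To ground the performance point, here is a toy additively homomorphic scheme (textbook Paillier with deliberately tiny, insecure primes chosen for the example): multiplying two ciphertexts adds the underlying plaintexts. Real deployments need 2048-bit-plus moduli, and every single homomorphic operation costs big-number modular exponentiations, which is exactly why coprocessor support matters.

```python
# Toy Paillier cryptosystem: additively homomorphic, i.e. multiplying
# ciphertexts adds plaintexts. Tiny primes for illustration only --
# real parameters need 2048+ bit moduli, hence the coprocessor problem.
import math
import random

p, q = 13, 17                 # toy primes (insecure on purpose)
n = p * q                     # public modulus
n2 = n * n
g = n + 1                     # standard choice of generator
lam = math.lcm(p - 1, q - 1)  # private key
mu = pow(lam, -1, n)          # valid because L(g^lam mod n^2) = lam mod n

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

# Homomorphic addition: Enc(12) * Enc(30) decrypts to 12 + 30
c = (encrypt(12) * encrypt(30)) % n2
print(decrypt(c))  # 42
```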
A threat hunter is identifying potentially malicious activity associated with an APT. When the threat hunter runs queries against the SIEM platform with a date range of 60 to 90 days ago, the involved account seems to be typically most active in the evenings. When the threat hunter reruns the same query with a date range of 5 to 30 days ago, the account appears to be most active in the early morning.
Which of the following techniques is the threat hunter using to better understand the data?
- A . TTP-based inquiries
- B . User behavior analytics
- C . Adversary emulation
- D . OSINT analysis activities
B
Explanation:
User behavior analytics (UBA) detects anomalous activity by analyzing historical patterns and comparing them to recent behavior. The time shift in account activity suggests potential compromise or misuse.
TTP-based inquiries (A) focus on known attack tactics, techniques, and procedures but do not involve behavior tracking.
Adversary emulation (C) simulates attacks but does not analyze real data trends.
OSINT analysis (D) gathers intelligence from public sources, which is unrelated to internal account behavior analysis.
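The hunter's comparison can be sketched as a baseline-versus-recent profile check (the login hours below are invented to mirror the scenario, and real UBA platforms model far richer features than the modal activity hour):

```python
# Sketch of the UBA comparison: build an hourly-activity profile for a
# baseline window and a recent window, then flag a shift in peak hour.
# The login hours below are invented to mirror the scenario.
from collections import Counter

baseline_hours = [20, 21, 21, 22, 20, 23, 21]  # 60-90 days ago: evenings
recent_hours   = [5, 6, 6, 7, 5, 6, 6]         # 5-30 days ago: early morning

def peak_hour(hours):
    """Most frequent activity hour in the window."""
    return Counter(hours).most_common(1)[0][0]

shift = abs(peak_hour(baseline_hours) - peak_hour(recent_hours))
anomalous = shift >= 6  # flag large deviations from the learned pattern
print(peak_hour(baseline_hours), peak_hour(recent_hours), anomalous)  # 21 6 True
```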
Reference: CompTIA SecurityX (CAS-005) Exam Objectives – Domain 4.0 (Security Operations), Section on Threat Intelligence and User Behavior Analytics (UBA)
Users are experiencing a variety of issues when trying to access corporate resources. Examples include:
• Connectivity issues between local computers and file servers within branch offices
• Inability to download corporate applications on mobile endpoints while working remotely
• Certificate errors when accessing internal web applications
Which of the following actions are the most relevant when troubleshooting the reported issues? (Select two).
- A . Review VPN throughput
- B . Check IPS rules
- C . Restore static content on the CDN.
- D . Enable secure authentication using NAC
- E . Implement advanced WAF rules.
- F . Validate MDM asset compliance
A, F
Explanation:
The reported issues suggest problems related to network connectivity, remote access, and certificate management. Reviewing VPN throughput (A) addresses the branch-office connectivity problems and the failed application downloads over the always-on remote connection, while validating MDM asset compliance (F) ensures mobile endpoints carry the correct certificates and configuration profiles for internal applications.
A company is having issues with its vulnerability management program. New devices/IPs are added and dropped regularly, making the vulnerability report inconsistent.
Which of the following actions should the company take to most likely improve the vulnerability management process?
- A . Request a weekly report with all new assets deployed and decommissioned
- B . Extend the DHCP lease time to allow the devices to remain with the same address for a longer period.
- C . Implement a shadow IT detection process to avoid rogue devices on the network
- D . Perform regular discovery scanning throughout the IT landscape using the vulnerability management tool
D
Explanation:
To improve the vulnerability management process in an environment where new devices/IPs are added and dropped regularly, the company should perform regular discovery scanning throughout the IT landscape using the vulnerability management tool.
Here’s why:
Accurate Asset Inventory: Regular discovery scans help maintain an up-to-date inventory of all assets, ensuring that the vulnerability management process includes all relevant devices and IPs.
Consistency in Reporting: By continuously discovering and scanning new and existing assets, the company can generate consistent and comprehensive vulnerability reports that reflect the current state of the network.
Proactive Management: Regular scans enable the organization to proactively identify and address vulnerabilities on new and existing assets, reducing the window of exposure to potential threats.
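The inventory-reconciliation step can be sketched as a diff between successive discovery scans (the IP sets are hypothetical results; a real implementation would pull them from the vulnerability management tool's discovery output):

```python
# Sketch: reconcile the asset inventory after each discovery scan so the
# vulnerability report always reflects the live IP population.
# The IP sets are hypothetical scan results for illustration.

previous_scan = {"10.0.0.5", "10.0.0.8", "10.0.0.9"}
current_scan  = {"10.0.0.5", "10.0.0.9", "10.0.0.12"}

def reconcile(previous, current):
    """Return assets to add to and drop from the scan inventory."""
    return {"new": sorted(current - previous),
            "decommissioned": sorted(previous - current)}

delta = reconcile(previous_scan, current_scan)
print(delta)  # {'new': ['10.0.0.12'], 'decommissioned': ['10.0.0.8']}
```

Running this after every scheduled discovery scan keeps the report consistent even as devices churn, which is the failure mode described in the question.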
Reference: CompTIA Security+ SY0-601 Study Guide by Mike Chapple and David Seidl
NIST Special Publication 800-40: Guide to Enterprise Patch Management Technologies
CIS Controls: Control 1 – Inventory and Control of Hardware Assets