Practice Free SPLK-2002 Exam Online Questions
What is the minimum reference server specification for a Splunk indexer?
- A . 12 CPU cores, 12GB RAM, 800 IOPS
- B . 16 CPU cores, 16GB RAM, 800 IOPS
- C . 24 CPU cores, 16GB RAM, 1200 IOPS
- D . 28 CPU cores, 32GB RAM, 1200 IOPS
A
Explanation:
The minimum reference server specification for a Splunk indexer is 12 CPU cores, 12GB RAM, and 800 average IOPS. This baseline assumes a moderate daily indexing volume (on the order of 100GB per day, with headroom for peaks) and a typical concurrent search load; larger or busier deployments call for more capacity or additional indexers. The other options all exceed the minimum reference specification. For more information, see [Reference hardware] in the Splunk documentation.
What is the default log size for Splunk internal logs?
- A . 10MB
- B . 20 MB
- C . 25MB
- D . 30MB
C
Explanation:
Splunk internal logs are stored in the $SPLUNK_HOME/var/log/splunk directory by default. The default maximum size for these log files is 25MB: when a log file reaches 25MB, Splunk rolls it to a backup file and starts a new one. The default number of backup files is 5, so Splunk keeps up to 5 rolled copies of each log file.
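As an illustration, the rotation settings live in $SPLUNK_HOME/etc/log.cfg, with local overrides in log-local.cfg. Below is a minimal sketch of a rolling-file appender; the stanza name, appender name, and exact values shown here are assumptions based on typical defaults:

```
# $SPLUNK_HOME/etc/log-local.cfg -- copy the stanza you want to change from log.cfg
# Roll splunkd.log at ~25 MB and keep 5 rolled copies.
[splunkd]
appender.A1=RollingFileAppender
appender.A1.fileName=${SPLUNK_HOME}/var/log/splunk/splunkd.log
appender.A1.maxFileSize=25000000
appender.A1.maxBackupIndex=5
```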
On search head cluster members, where in $splunk_home does the Splunk Deployer deploy app content by default?
- A . etc/apps/
- B . etc/slave-apps/
- C . etc/shcluster/
- D . etc/deploy-apps/
B
Explanation:
According to the Splunk documentation1, the Splunk Deployer deploys app content to the etc/slave-apps/ directory on the search head cluster members by default. This directory contains the apps that the deployer distributes to the members as part of the configuration bundle.
The other options are false because:
The etc/apps/ directory contains the apps that are installed locally on each member, not the apps that are distributed by the deployer2.
The etc/shcluster/ directory contains the configuration files for the search head cluster, not the apps that are distributed by the deployer3.
The etc/deploy-apps/ directory is not a valid Splunk directory, as it does not exist in the Splunk file system structure4.
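For context, here is how app content typically moves from the deployer to the members. This is a minimal sketch in which the hostnames, ports, and credentials are placeholders:

```
# On each search head cluster member: point at the deployer (server.conf)
[shclustering]
conf_deploy_fetch_url = https://deployer.example.com:8089

# On the deployer: stage apps under $SPLUNK_HOME/etc/shcluster/apps/, then push the bundle
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme
```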
When designing the number and size of indexes, which of the following considerations should be applied?
- A . Expected daily ingest volume, access controls, number of concurrent users
- B . Number of installed apps, expected daily ingest volume, data retention time policies
- C . Data retention time policies, number of installed apps, access controls
- D . Expected daily ingest volumes, data retention time policies, access controls
D
Explanation:
When designing the number and size of indexes, the following considerations should be applied:
- Expected daily ingest volumes: the amount of data the Splunk platform will ingest and index per day. This affects storage capacity, indexing performance, and license usage. Plan the number and size of indexes around expected (and peak) daily ingest volumes so the deployment can handle the data load and meet business requirements [1][2].
- Data retention time policies: how long data must remain stored and searchable. This affects storage capacity, data availability, and compliance. Plan indexes around retention policies and the data lifecycle so the deployment keeps data for the required period and meets legal or regulatory obligations [1][3].
- Access controls: the mechanism for granting or restricting access to data by Splunk users or roles. This affects data security, privacy, and governance. Plan indexes around access controls and data sensitivity so the deployment protects data from unauthorized or inappropriate access and meets ethical or organizational standards [1][4].
Option D is the correct answer because it reflects the most relevant and important considerations for designing the number and size of indexes.
Option A is incorrect because the number of concurrent users is not a direct factor for designing the number and size of indexes, but rather a factor for designing the search head capacity and the search head clustering configuration [5].
Option B is incorrect because the number of installed apps is not a direct factor for designing the number and size of indexes, but rather a factor for designing the app compatibility and the app performance [6].
Option C is incorrect because it omits the expected daily ingest volumes, which is a crucial factor for designing the number and size of indexes.
Reference: [1] Splunk Validated Architectures; [2] Indexer capacity planning; [3] Set a retirement and archiving policy for your indexes; [4] About securing Splunk Enterprise; [5] Search head capacity planning; [6] App installation and management overview.
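To make this concrete, retention and size are governed per index in indexes.conf, while access is granted per role in authorize.conf. Below is a minimal sketch; the index name, role name, and values are placeholder assumptions:

```
# indexes.conf -- size and retention per index
[web_prod]
homePath   = $SPLUNK_DB/web_prod/db
coldPath   = $SPLUNK_DB/web_prod/colddb
thawedPath = $SPLUNK_DB/web_prod/thaweddb
frozenTimePeriodInSecs = 7776000      # retain ~90 days, then freeze
maxTotalDataSizeMB     = 500000       # cap the total size of the index

# authorize.conf -- restrict which roles can search the index
[role_web_team]
srchIndexesAllowed = web_prod
```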
Where does the Splunk deployer send apps by default?
- A . etc/slave-apps/<app-name>/default
- B . etc/deploy-apps/<app-name>/default
- C . etc/apps/<appname>/default
- D . etc/shcluster/<app-name>/default
D
Explanation:
The Splunk deployer sends apps to the search head cluster members by default from the path etc/shcluster/<app-name>/default. The deployer is the Splunk component that distributes apps and configuration updates to the members of a search head cluster.
Splunk's documentation recommends placing the configuration bundle in the $SPLUNK_HOME/etc/shcluster/apps directory on the deployer, from which it is distributed to the search head cluster members. Within each app's directory, settings can sit under default or local subdirectories, with local taking precedence over default. Note that etc/shcluster/<app-name>/default, as written in the option, is not the exact standard directory structure and may reflect a simplification in the question; the path from which the deployer pushes configuration bundles is $SPLUNK_HOME/etc/shcluster/apps.
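For orientation, the staging area on the deployer typically looks like the sketch below; the app and file names are placeholders:

```
$SPLUNK_HOME/etc/shcluster/
└── apps/
    └── my_app/
        ├── default/
        │   └── savedsearches.conf
        └── local/
            └── props.conf
```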
metrics.log is stored in which index?
- A . main
- B . _telemetry
- C . _internal
- D . _introspection
C
Explanation:
According to the Splunk documentation1, metrics.log is a file that contains various metrics data for reviewing product behavior, such as pipeline, queue, thruput, and tcpout_connections. Metrics.log is stored in the _internal index by default2, which is a special index that contains internal logs and metrics for Splunk Enterprise.
The other options are false because:
main is the default index for user data, not internal data3.
_telemetry is an index that contains data collected by the Splunk Telemetry feature, which sends anonymous usage and performance data to Splunk4.
_introspection is an index that contains data collected by the Splunk Monitoring Console, which monitors the health and performance of Splunk components.
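As a quick check, metrics.log can be queried directly from the _internal index. A minimal example using the common per-sourcetype throughput group (the group and field names follow the standard metrics.log layout):

```
index=_internal source=*metrics.log* group=per_sourcetype_thruput
| stats sum(kb) AS total_kb BY series
| sort - total_kb
```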
What log file would you search to verify if you suspect there is a problem interpreting a regular expression in a monitor stanza?
- A . btool.log
- B . metrics.log
- C . splunkd.log
- D . tailing_processor.log
D
Explanation:
The tailing_processor.log file is the best place to search if you suspect a problem interpreting a regular expression in a monitor stanza. This log contains information about how Splunk monitors files and directories, including any errors or warnings raised while parsing the monitor stanza. The splunkd.log file contains general information about the Splunk daemon, but it may not include the specific details about the monitor stanza. The btool.log file records configuration-file layering, not the runtime behavior of the monitor stanza. The metrics.log file records Splunk performance metrics, not file-monitoring problems. For more information, see About Splunk Enterprise logging in the Splunk documentation.
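For reference, the regular expressions in question are typically the whitelist and blacklist settings of a monitor stanza in inputs.conf, which are matched against the full path of each candidate file. A minimal sketch with placeholder paths, patterns, index, and sourcetype:

```
# inputs.conf -- monitor stanza; whitelist/blacklist are regexes matched against the file path
[monitor:///var/log/myapp]
whitelist = \.log$
blacklist = \.(gz|zip)$
index = web_prod
sourcetype = myapp_log
```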
Which of the following are true statements about Splunk indexer clustering?
- A . All peer nodes must run exactly the same Splunk version.
- B . The master node must run the same or a later Splunk version than search heads.
- C . The peer nodes must run the same or a later Splunk version than the master node.
- D . The search head must run the same or a later Splunk version than the peer nodes.
A, D
Explanation:
The following statements are true about Splunk indexer clustering:
All peer nodes must run exactly the same Splunk version. This is a requirement for indexer clustering, as different Splunk versions may have different data formats or features that are incompatible with each other. All peer nodes must run the same Splunk version as the master node and the search heads that connect to the cluster.
The search head must run the same or a later Splunk version than the peer nodes. This is a recommendation for indexer clustering, as a newer Splunk version may have new features or bug fixes that improve the search functionality or performance. The search head should not run an older Splunk version than the peer nodes, as this may cause search errors or failures.
The following statements are false about Splunk indexer clustering:
The master node must run the same or a later Splunk version than the search heads. This is not a requirement or a recommendation for indexer clustering, as the master node does not participate in the search process. The master node should run the same Splunk version as the peer nodes, as this ensures the cluster compatibility and functionality.
The peer nodes must run the same or a later Splunk version than the master node. This is not a requirement or a recommendation for indexer clustering, as the peer nodes do not coordinate the cluster activities. The peer nodes should run the same Splunk version as the master node, as this ensures the cluster compatibility and functionality. For more information, see [About indexer clusters and index replication] and [Upgrade an indexer cluster] in the Splunk documentation.
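A quick way to confirm that every node runs the identical version is the splunk version CLI command; a minimal sketch (the path assumes a default installation):

```
# Run on the master/manager node, every peer node, and every search head
$SPLUNK_HOME/bin/splunk version
```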
New data has been added to a monitor input file. However, searches only show older data.
Which splunkd.log channel would help troubleshoot this issue?
- A . ModularInputs
- B . TailingProcessor
- C . ChunkedLBProcessor
- D . ArchiveProcessor
B
Explanation:
The TailingProcessor channel in the splunkd.log file would help troubleshoot this issue, because it contains information about the files that Splunk monitors and indexes, such as the file path, size, modification time, and CRC checksum. It also logs any errors or warnings that occur during the file monitoring process, such as permission issues, file rotation, or file truncation. The TailingProcessor channel can help identify if Splunk is reading the new data from the monitor input file or not, and what might be causing the problem.
Option B is the correct answer.
Option A is incorrect because the ModularInputs channel logs information about the modular inputs that Splunk uses to collect data from external sources, such as scripts, APIs, or custom applications. It does not log information about the monitor input file.
Option C is incorrect because the ChunkedLBProcessor channel logs information about the load balancing process that Splunk uses to distribute data among multiple indexers. It does not log information about the monitor input file.
Option D is incorrect because the ArchiveProcessor channel logs information about the archive process that Splunk uses to move data from the hot/warm buckets to the cold/frozen buckets. It does not log information about the monitor input file [1][2].
Reference:
[1] https://docs.splunk.com/Documentation/Splunk/9.1.2/Troubleshooting/WhatSplunklogsaboutitself#splunkd.log
[2] https://docs.splunk.com/Documentation/Splunk/9.1.2/Troubleshooting/Didyouloseyourfishbucket#Check_the_splunkd.log_file
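To act on this, the TailingProcessor channel can be pulled straight from the _internal index; a minimal example (component and log_level are standard splunkd.log fields, and you can add the monitored file's path as a keyword to narrow the results):

```
index=_internal sourcetype=splunkd component=TailingProcessor (log_level=WARN OR log_level=ERROR)
| sort - _time
```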