bitsIO is a Splunk Professional Services Company

Decoding Splunk Index Definitions

Decoding Indexes.conf

The indexes.conf configuration file is used to manage and configure index settings. Use the [default] stanza to define global settings and an [<index>] stanza to define index-level settings. If a setting is defined at both the global level and in a specific stanza, the value in the specific stanza takes precedence.
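For example, a minimal indexes.conf sketch illustrating that precedence (the index name web and the size values here are hypothetical, not recommendations):

```ini
[default]
# Global default, applied to every index that does not override it.
maxTotalDataSizeMB = 500000

[web]
homePath   = $SPLUNK_DB/web/db
coldPath   = $SPLUNK_DB/web/colddb
thawedPath = $SPLUNK_DB/web/thaweddb
# Overrides the [default] value for this index only.
maxTotalDataSizeMB = 100000
```

The [web] index uses 100000 for maxTotalDataSizeMB; every other index inherits 500000 from [default].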

Below is a list of all available settings in indexes.conf, categorized by setting type: path, size, time, count, boolean, and value.

Global Settings:

Path
 tsidxStatsHomePath
Size
 rtRouterQueueSize
 bucketRebuildMemoryHint
 memPoolMB
Time
 serviceInactiveIndexesPeriod
 serviceSubtaskTimingPeriod
 processTrackerServiceInterval
 hotBucketTimeRefreshInterval
Count
 indexThreads
 selfStorageThreads
 maxRunningProcessGroups
 maxRunningProcessGroupsLowPriority
 sync
Boolean
 rtRouterThreads
 assureUTF8
 enableRealtimeSearch
 inPlaceUpdates
Value
 defaultDatabase
 lastChanceIndex
 suppressBannerList

Per Index Settings:

Path
 bloomHomePath
 summaryHomePath
 tstatsHomePath
 remotePath

Size
 maxTotalDataSizeMB
 maxGlobalDataSizeMB
 rawChunkSizeBytes
 maxMemMB
 minStreamGroupQueueSize

Boolean
 disabled
 deleted
 isReadOnly
 createBloomfilter
 enableOnlineBucketRepair
 enableDataIntegrityControl
 syncMeta
 enableTsidxReduction
 suspendHotRollByDeleteQuery
 tsidxWritingLevel
Time
 maxBloomBackfillBucketAge
 hotlist_recency_secs
 hotlist_bloom_filter_recency_hours
 rotatePeriodInSecs
 minRawFileSyncSecs
 quarantinePastSecs
 quarantineFutureSecs
 serviceMetaPeriod
 partialServiceMetaPeriod
 throttleCheckPeriod
 maxTimeUnreplicatedWithAcks
 maxTimeUnreplicatedNoAcks
 streamingTargetTsidxSyncPeriodMsec
 tsidxReductionCheckPeriodInSec
 timePeriodInSecBeforeTsidxReduction

Count
 maxMetaEntries
 maxConcurrentOptimizes

Value
 splitByIndexKeys
 journalCompression

APPENDIX

Setting: Description

bucketRebuildMemoryHint: Suggestion for the bucket rebuild process for the size (in bytes) of the tsidx file it will try to build
hotlist_bloom_filter_recency_hours: The cache manager attempts to defer eviction of the non-journal and non-tsidx bucket files, such as the bloomfilter file, until the interval between the bucket's latest time and the current time exceeds this setting
hotlist_recency_secs: The cache manager attempts to defer bucket eviction until the interval between the bucket's latest time and the current time exceeds this setting
inPlaceUpdates: If true, metadata updates are written to the .data files directly
journalCompression: gzip|lz4|zstd. Defaults to gzip. zstd is only supported in Splunk 7.2.x and later
maxConcurrentOptimizes: The number of concurrent optimize processes that can run against the hot DB
maxGlobalDataSizeMB: The maximum amount of local disk space (in MB) that a remote-storage-enabled index can occupy, shared across all peers in the cluster
maxMemMB: The amount of memory to allocate for indexing
maxMetaEntries: Sets the maximum number of unique lines in .data files in a bucket
maxTimeUnreplicatedWithAcks: Puts an upper limit on how long events can sit unacknowledged in a raw slice
minRawFileSyncSecs: How frequently we force a filesystem sync while compressing journal slices
minStreamGroupQueueSize: Minimum size of the queue that stores events in memory before committing them to a tsidx file
partialServiceMetaPeriod: Related to serviceMetaPeriod. If set, it enables partial metadata sync every <integer> seconds
processTrackerServiceInterval: Controls how often the indexer checks the status of the child OS processes it has launched, to see if it can launch new processes for queued requests
rawChunkSizeBytes: Target uncompressed size in bytes for an individual raw slice in the rawdata journal of the index
rotatePeriodInSecs: Controls the service period: how often splunkd performs certain housekeeping tasks
rtRouterThreads: Set this to 1 if you expect to use non-indexed real-time searches regularly
selfStorageThreads: Specifies the number of threads used to transfer data to customer-owned remote storage
serviceMetaPeriod: Defines how frequently metadata is synced to disk, in seconds
splitByIndexKeys: Valid values are: host, sourcetype, source, metric_name. This setting only applies to metric indexes
streamingTargetTsidxSyncPeriodMsec: Period at which tsidx files are force-synced on streaming targets
suppressBannerList: Suppresses index-missing warning banner messages for the specified indexes
sync: The index processor syncs events every <integer> events
throttleCheckPeriod: Defines how frequently Splunk checks for the index throttling condition
tsidxStatsHomePath: An absolute path that specifies where Splunk creates namespace data with the 'tscollect' command


More on Lookups

CSV Lookup:
 
CSV lookups are file-based lookups that match field values from your events to field values in the static table represented by a CSV file. They output corresponding field values from the table to your events. They are also referred to as static lookups.
CSV lookups are best for small sets of data. The general workflow for creating a CSV lookup in Splunk Web is to upload a file, share the lookup table file, and then create the lookup definition from the lookup table file. CSV inline lookup table files, and inline lookup definitions that use CSV files, are both dataset types.
CSV lookups can be invoked by using the following search commands: lookup, inputlookup, and outputlookup.
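For example, each of those commands can use a CSV lookup (the lookup definition name user_info and its fields here are hypothetical):

```spl
... | lookup user_info username OUTPUT department
| inputlookup user_info
... | outputlookup user_info
```

lookup enriches search results with matching fields from the table, inputlookup reads the table contents directly, and outputlookup writes search results back to the table.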
 
KV Store Lookup:
 
A KV Store lookup matches fields in your events to fields in a KV Store collection and outputs corresponding fields from that collection to your events. Best practice is to use a KV Store lookup when you have a large lookup table or a table that is updated often.
 
KV Store lookups can be invoked through REST endpoints or by using the following search commands: lookup, inputlookup, and outputlookup.

Differences:

Lookup Type: KV Store
Pros:
  • Enables per-record insert/updates (“upserts”).
  • Allows optional data type enforcement on write operations.
  • Allows you to define field accelerations to improve search performance.
  • Provides REST API access to the data collection.
Cons:
  • Does not support case-insensitive field lookups.

Lookup Type: CSV
Pros:
  • Performs well for files that are small or rarely modified.
  • CSV files are easier to modify manually.
  • Integrating with other applications such as Microsoft Excel is easier because CSV is a standard format.
  • Supports case-sensitive field lookups.
Cons:
  • Requires a full rewrite of the file for edit operations.
  • Does not support REST API access.

Therefore, choose your lookup type depending on your use case.
Below are some guidelines:
  • The KV Store is designed for large collections, and is the easiest way to develop an application that uses key-value data.
  • The KV Store is a good solution when data requires user interaction using the REST interface and when you have a frequently-changing data set.
  • A CSV-based lookup is a good solution when the data set is small or changes infrequently, and when distributed search is required.

Using Lookups in Splunk

Lookups are very useful for enriching your original event data. They add key-value pairs to your existing events to make more sense of your data. Let’s dive into how to use CSV lookups.

Limitations of CSV files:
There are some restrictions on the files that can be used for CSV lookups.

  1. The table in the CSV file should have at least two columns. One column represents a field with a set of values that includes values belonging to a field in your events. The column does not have to have the same name as the event field. Any column can have multiple instances of the same value, which is a multivalued field.
  2. The characters in the CSV file must be plain ASCII text and valid UTF-8 characters. Non-UTF-8 characters are not supported.
  3. CSV files cannot have “\r” line endings (OSX 9 or earlier)
  4. CSV files cannot have header rows that exceed 4096 characters.
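These restrictions can be checked programmatically before uploading. A minimal sketch in Python (the validate_lookup_csv helper is hypothetical, not a Splunk API):

```python
import csv

def validate_lookup_csv(data: bytes) -> list[str]:
    """Check a CSV lookup file body against the restrictions listed above.
    Returns a list of problems; an empty list means the file looks usable."""
    problems = []
    # 2. Characters must be valid UTF-8.
    try:
        text = data.decode("utf-8")
    except UnicodeDecodeError:
        return ["file is not valid UTF-8"]
    # 3. Bare "\r" line endings (classic Mac OS) are not supported.
    if "\r" in text.replace("\r\n", ""):
        problems.append("file uses bare \\r line endings")
    lines = text.splitlines()
    if not lines:
        return ["file is empty"]
    # 4. The header row must not exceed 4096 characters.
    if len(lines[0]) > 4096:
        problems.append("header row exceeds 4096 characters")
    # 1. The table must have at least two columns.
    header = next(csv.reader([lines[0]]))
    if len(header) < 2:
        problems.append("table has fewer than two columns")
    return problems

# A hypothetical two-column lookup table passes; a one-column file does not.
good = b"ip,country\n10.0.0.1,US\n10.0.0.2,DE\n"
bad = b"only_one_column\nvalue\n"
print(validate_lookup_csv(good))  # []
print(validate_lookup_csv(bad))   # ['table has fewer than two columns']
```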
Upload the lookup table file:
To use a lookup table file, you must upload the file to your Splunk platform.
Steps
  1. Select Settings > Lookups to go to the Lookups manager page.
  2. In the Actions column, click Add new next to Lookup table files.
  3. Select a Destination app from the list.
    Your lookup table file is saved in the directory where the application resides. For example: $SPLUNK_HOME/etc/users/<username>/<app_name>/lookups/.
  4. Click Choose File to look for the CSV file to upload. The Splunk software saves your CSV file in $SPLUNK_HOME/etc/system/lookups/, or in $SPLUNK_HOME/etc/<app_name>/lookups/ if the lookup belongs to a specific app.
  5. Enter the destination filename. This is the name the lookup table file will have on the Splunk server. If you are uploading a gzipped CSV file, enter a filename ending in “.gz”. If you are uploading a plaintext CSV file, use a filename ending in “.csv”.
  6. Click Save.
Share a lookup table file with apps:
After you upload the lookup file, tell the Splunk software which applications can use this file. The default app is Launcher.
  1. Select Settings > Lookups.
  2. From the Lookup manager, click Lookup table files.
  3. Click Permissions in the Sharing column of the lookup you want to share.
  4. In the Permissions dialog box, under Object should appear in, select All apps to share globally. If you want the lookup to be specific to this app only, select This app only. You can also keep your lookup private by selecting Keep private.
  5. Click Save.
Create a CSV lookup definition:
Steps
  1. Select Settings > Lookups.
  2. Click Lookup definitions.
  3. Click New.
  4. Select a Destination app from the drop-down list.
    Your lookup table file is saved in the directory where the application resides. For example: $SPLUNK_HOME/etc/users/<username>/<app_name>/lookups/.
  5. Give your lookup definition a unique Name.
  6. Select File-based as the lookup Type.
  7. Select the Lookup file from the drop-down list. For a CSV lookup, the file extension must be .csv.
  8. Click Save.
Your lookup is defined as a file-based CSV lookup and appears in the list of lookup definitions.
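Behind the scenes, Splunk Web writes this definition to transforms.conf in the destination app. A sketch with hypothetical stanza and file names:

```ini
# transforms.conf (names are illustrative)
[my_csv_lookup]
filename = mylookup.csv
```

The definition can then be invoked in a search, for example: | lookup my_csv_lookup ip AS ip_address OUTPUT country AS ip_city.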
Share the lookup definition with apps:
After you create the lookup definition, specify in which apps you want to use the definition.
  1. Select Settings > Lookups.
  2. Click Lookup definitions.
  3. In the Lookup definitions list, click Permissions in the Sharing column of the lookup definition you want to share.
  4. In the Permissions dialog box, under Object should appear in, select All apps to share globally. If you want the lookup to be specific to this app only, select This app only. You can also keep your lookup private by selecting Keep private.
  5. Click Save.
Define an automatic lookup:
Manual lookups are applied to the results of a search when they are invoked with the lookup command. Automatic lookups are applied to all searches at search time.
Prerequisite: a lookup definition that you have defined previously.
Steps
  1. In Splunk Web, select Settings > Lookups.
  2. Under Actions for Automatic Lookups, click Add new.
  3. Select the Destination app.
  4. Give your automatic lookup a unique Name.
  5. Select the Lookup table that you want to use in your fields lookup.

    This is the name of the lookup definition that you defined on the Lookup Definition page.

  6. In the Apply to menu, select a host, source, or source type value to apply the lookup, and give it a name in the named field.
  7. Under Lookup input fields provide one or more pairs of input fields.

    The first field is the field in the lookup table that you want to match. The second field is a field from your events that matches the lookup table field. For example, you can have an ip_address field in your events that matches an ip field in the lookup table. So you would enter ip = ip_address in the automatic lookup definition.

  8. Under Lookup output fields provide one or more pairs of output fields.

    The first field is the corresponding field that you want to output to events. The second field is the name that the output field should have in your events. For example, the lookup table may have a field named country that you may want to output to your events as ip_city. So you would enter country=ip_city in the automatic lookup definition.

  9. You can select the checkbox for Overwrite field values to overwrite the field values when the lookup runs.
    Note: This is equivalent to configuring your fields lookup in props.conf.
  10. Click Save.
The Automatic lookup view appears, and the lookup that you have defined is listed.
The automatic lookup field “description” then appears in the events returned at search time.
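Conceptually, the input/output field pairs above make an automatic lookup behave like a left join against the lookup table at search time. A minimal Python sketch of that matching and renaming (the apply_lookup helper and field names are hypothetical):

```python
def apply_lookup(event, table, input_map, output_map, overwrite=False):
    """Enrich one event dict from a lookup table (a list of row dicts).

    input_map:  {lookup_field: event_field}, e.g. {"ip": "ip_address"}
    output_map: {lookup_field: event_field}, e.g. {"country": "ip_city"}
    """
    for row in table:
        # Match every input pair: lookup-table value == event value.
        if all(row.get(lf) == event.get(ef) for lf, ef in input_map.items()):
            # Copy each output field into the event under its new name,
            # respecting the "Overwrite field values" checkbox.
            for lf, ef in output_map.items():
                if overwrite or ef not in event:
                    event[ef] = row[lf]
            break
    return event

table = [{"ip": "10.0.0.1", "country": "US"}]
event = {"ip_address": "10.0.0.1", "action": "login"}
print(apply_lookup(event, table, {"ip": "ip_address"}, {"country": "ip_city"}))
# {'ip_address': '10.0.0.1', 'action': 'login', 'ip_city': 'US'}
```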

IOT – New insights from sensors, devices and industrial control systems

Experts at bitsIO can help you monitor your Industrial Sensors/Devices.

Splunk is the ultimate choice to:

  • Gain real-time insight from sensors, devices and industrial and operational technologies
  • Collect, manage and analyze the velocity, volume and variety of data
  • Complement and integrate with existing operational technologies

Monitoring and Diagnostics

Ensure that equipment in the field operates as intended. Monitor and track unplanned device or system downtime. Understand the cause of failure on a device to improve efficiency and availability. Identify outliers and issues in device production or deployment.

Security, Safety and Compliance

Help protect mission-critical assets and industrial systems against cybersecurity threats. Gain visibility into system performance or set points that could put machines or people at risk and satisfy compliance reporting requirements.

Predictive Maintenance

Gain real-time insight into asset deployment, utilization and resource consumption. Recognize patterns and trends, and use operational data to proactively approach long-term industrial asset management, maintenance and performance.

Asset Performance Management

Gain real-time insights into the health and performance of your industrial assets. Use machine learning to detect anomalies and deviations from normal behavior to take corrective action, improving uptime, reliability and longevity.

 

Reach out to us at info@bitsioinc.com. We are an ELITE partner for Splunk, specializing in Splunk Professional Services, with many years of experience in use cases such as Security, IT Ops and IoT.

Splunk Partner

bitsIO is now Elite Splunk partner

We are proud to announce that bitsIO is now an Elite Splunk Professional Services partner. It has been an honor working as a Splunk partner, and we couldn’t have achieved this without our passionate bitsIO team.

What does bitsIO bring to the table?

For many years, organizations worldwide have vouched for the success of Splunk-based solutions. Not only does Splunk serve as a reliable tool for big data processing, it also offers heightened security within dynamic environments.

But, “Why bitsIO?,” you might wonder.

Our holistic, 360-degree approach will help you draw better ROI on your Splunk investments. Our team of seasoned consultants understands the Splunk ecosystem inside out, and they believe in developing customized solutions based on thorough analysis, with the clear end goal of improving your business.

Reap the benefits of steadfast Enterprise Security and IT Service Intelligence that impact the bottom line while also improving your data security and processing.

From development, architecture, implementation and configuration to monitoring, maintenance, analytics and targeted modifications, we will work with you at each stage of this journey using best practices. Furthermore, we will train your own team to maintain your Splunk systems so you remain in full control.

Teaming up with bitsIO Inc. gives you access to a team of ace Splunk experts at a fraction of the cost. Enjoy hassle-free, round-the-clock support, a focused approach, full confidentiality and real-time access to reports and analytics.