Consolidating indexing/forwarding apps

There is often a good reason to consolidate apps that forward data to Splunk or transform data before it is written to disk. Consolidation reduces administrative overhead and allows a single package to be deployed to every system that matches the same criteria.

I will use Hadoop for this example. Say, hypothetically, you have 600 nodes in a Hadoop cluster (all on a Linux platform) on which you would also like to monitor CPU, memory, and disk metrics. Within that Hadoop system, components such as Spark, Hive, HiveServer2, and Platfora each have their own logs and data inputs. Some of these components also have Apache web frontends whose logs will need to be parsed, but not every node runs them.
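A consolidated app for this scenario is mostly a single `inputs.conf` covering the shared log paths and OS metrics. The sketch below illustrates the idea; all file paths, index names, and sourcetypes here are assumptions and should be adjusted to match your actual install locations and naming conventions. The scripted metric inputs assume the scripts shipped with the Splunk Add-on for Unix and Linux are bundled in the app's `bin` directory.

```ini
# inputs.conf for a hypothetical consolidated Hadoop-node app.
# Paths, indexes, and sourcetypes are illustrative assumptions.

# Hadoop daemon logs
[monitor:///var/log/hadoop/*.log]
index = hadoop
sourcetype = hadoop:log
disabled = 0

# Spark logs
[monitor:///var/log/spark/*.log]
index = hadoop
sourcetype = spark:log

# Hive / HiveServer2 logs
[monitor:///var/log/hive/*.log]
index = hadoop
sourcetype = hive:log

# OS metrics via scripted inputs (scripts from the
# Splunk Add-on for Unix and Linux, assumed to be in this app's bin/)
[script://./bin/cpu.sh]
interval = 60
index = os
sourcetype = cpu

[script://./bin/df.sh]
interval = 300
index = os
sourcetype = df
```

Because only some nodes run the Apache frontends, those inputs are better kept in a separate app and targeted at the relevant hosts through a deployment server server class, rather than folded into the package that goes to all 600 nodes.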

It takes ...
