Reduce Log Volume with the Severity Filter
Reducing the volume of logs you’re sending to your destinations is a great way to increase the signal in your analysis tools and mitigate the costs associated with long-term log storage. This tutorial will show you how to use the Severity Filter processor in BindPlane OP to filter out logs in your pipeline.
Below, I have a simple pipeline configured that’s sending Postgres logs to both a Google and a Splunk destination. Postgres is used here only as an example; log filtering works the same way with any log source.
To start, navigate to any agent and use Snapshots to inspect the log stream and determine what to filter.
- Navigate to the bottom of the configuration page and click one of the agents.
- On the agent details page, click the "View Recent Telemetry" button at the top right of the page.
When I do that, I see a lot of info and debug logs that I’d like to keep sending to Google but don’t need in Splunk.
Let’s filter them out.
- Navigate back to your configuration.
- Click the processor node just before the Splunk destination.
- Choose the “Severity Filter” processor and set the minimum severity to Warn (a sketch of the resulting filter logic appears after this list).
- You’ll see a “1” appear on the processor node, indicating that the processor was successfully added.
- After the agent receives the new configuration, you’ll see the throughput measurements update to reflect the reduction of data going to Splunk.
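Under the hood, BindPlane OP renders processors like this into OpenTelemetry Collector configuration for the agent. The exact output depends on your BindPlane version, but a minimal sketch of an equivalent severity filter, assuming the standard OpenTelemetry filter processor and its OTTL severity enums (the processor name below is illustrative), looks roughly like this:

```yaml
processors:
  # Hypothetical processor ID; BindPlane generates its own names.
  filter/severity:
    error_mode: ignore
    logs:
      log_record:
        # Drop any log record below Warn (Trace, Debug, and Info),
        # so only Warn, Error, and Fatal continue toward Splunk.
        - severity_number < SEVERITY_NUMBER_WARN
```

Because this processor sits only on the path to the Splunk destination, the Google destination keeps receiving the full, unfiltered log stream.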
For even more control, you can use the Filter Log Record Attribute processor to filter your logs based on other attributes. Use Snapshots to inspect your logs and determine what you’d like to filter.
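As a rough sketch of what attribute-based filtering can look like in OpenTelemetry Collector terms (the "environment" attribute and "dev" value below are purely illustrative, not something your logs necessarily carry):

```yaml
processors:
  # Hypothetical example of filtering on a log record attribute.
  filter/attributes:
    error_mode: ignore
    logs:
      log_record:
        # Drop records whose "environment" attribute equals "dev".
        - attributes["environment"] == "dev"
```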