The team at observIQ are avid programmers, gamers, traders, thinkers, and innovators who build elaborate home networks for fun, for work, and simply because we enjoy technology. We believe in it as a medium for improving life. Our home networks and labs are constantly growing in size and footprint – custom apps, devices, and servers – making it challenging to gauge our technical exposure. With rising risks, it’s important to monitor performance and potential security threats so you stay protected, even in your own home.
Enter log management. Just as you would with your financial health, keeping a paper trail of all your technical transactions is considered best practice. Log management solutions, such as observIQ, make your homelab activities visible at a glance so you are always aware of what breadcrumbs you leave online, and can maintain your home network with confidence.
With a log management solution, we can:
- Track and monitor logon behavior – know who is accessing our network and devices
- Use our network’s assets more effectively – declutter hardware/software usage with useful insights from logs
- Gain insights into the operational health of our labs – apps, servers, networks
- Visualize our log data in dashboards to easily understand trends
- React faster to incidents in our networks
Follow along as we implement log management in a typical homelab consisting of a Ubiquiti access point and switch, pfSense firewall, Fedora workstation, and a Mac as a daily driver.
The Setup (A Simple Diagram)
Setup in 3 Simple Steps
While other tools make gathering logs from disparate sources and devices a long, technical, drawn-out process, observIQ gets it done in minutes. With the Syslog Source, the observIQ log agent can function as a Syslog server with just a couple of clicks, allowing you to gather logs from any network device. We can also use that same agent to grab system logs directly from the Fedora workstation using observIQ’s journald Source.
Prerequisites: Configure your Network Devices to output to Syslog
As a prerequisite, we must first configure our network devices to output events over TCP/UDP to a Syslog server (in this case, the observIQ log agent). Below, you can see a quick snippet of the configuration for our pfSense firewall and Ubiquiti access point to send logs to a Syslog server.
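For context, a Syslog "server" is conceptually simple: it listens on a TCP or UDP port (514 by default) for formatted messages from network devices. The sketch below is a minimal, illustrative Python version of what a UDP syslog listener does under the hood – it is not the observIQ agent's implementation, which handles all of this for you:

```python
import re
import socket

# RFC 3164-style messages begin with a priority value in angle brackets,
# e.g. "<13>Oct 11 22:14:15 fw1 sshd[432]: Failed password for root"
PRI_PATTERN = re.compile(r"^<(\d{1,3})>(.*)$", re.DOTALL)

def parse_priority(message: str):
    """Split a syslog message into (facility, severity, body)."""
    match = PRI_PATTERN.match(message)
    if not match:
        return None, None, message
    pri = int(match.group(1))
    return pri // 8, pri % 8, match.group(2)  # facility, severity, rest

def serve(host="0.0.0.0", port=514):
    """Listen for UDP syslog datagrams and print the parsed fields."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, addr = sock.recvfrom(4096)
        facility, severity, body = parse_priority(data.decode(errors="replace"))
        print(f"{addr[0]} facility={facility} severity={severity} {body}")
```

The priority encoding (facility × 8 + severity) is what lets a receiver know at a glance whether a message is an error or routine noise.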
Step 1: Install the observIQ log agent
We then install the observIQ log agent on the Fedora workstation by copying and running the one-line agent installation command. Installation typically takes less than 15 seconds.
Step 2: Add Syslog and journald log sources to the observIQ log agent
Next, we’ll add two Sources to our agent. First, we’ll add a journald source that will allow us to gather system logs from our Fedora machine, in just a few clicks. After that, we’ll add a Syslog source to our agent as well – which turns the observIQ agent into a functioning syslog server. Both Sources require next-to-no configuration.
Step 3: Complete setup and begin exploring the logs
After the sources are added to the agent, logs from all our sources become accessible within observIQ, allowing us to sort, search, and visualize all of our logs in a single place.
Using Your Logs Effectively
Now that you have all your logs collated, how can you use them to your advantage?
With the additional visibility, we can search for and identify:
- Errors in our network’s devices
- Login attempts and failures from both authorized and unauthorized users
- New devices accessing our network
- Blocked malware and denied service attacks
- Password and access credential changes
- Event logs for shared access folders and files
- Service installations and configurations
- Process runs and unexpected pauses in processes
Create Saved Searches
observIQ saves you time by letting you save search queries. The next time you need one, simply pull it up from the dropdown menu. Searching for errors with a specific application or host? Looking to track system reboots or unusual traffic? A saved search puts those results at your fingertips.
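Under the hood, a saved search is just a named, reusable query. A rough Python analogy of the idea (the names and event fields here are illustrative, not observIQ's API):

```python
# A saved search maps a name to a reusable filter predicate.
saved_searches = {
    "ssh_failures": lambda e: e["source"] == "sshd" and "Failed" in e["message"],
    "cron_jobs": lambda e: e["source"] == "cron",
}

def run_saved_search(name, events):
    """Re-run a stored query against a batch of log events."""
    predicate = saved_searches[name]
    return [e for e in events if predicate(e)]
```

Saving the query once means you never retype the filter logic when the same question comes up again.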
Create Alert Definitions
Some events in your network need immediate attention; ideally, you’d be notified proactively. For such events, observIQ gives you the ability to create threshold-based alert definitions. When there’s a logon attempt, a network failure, or an app or system failure, observIQ creates an alert in-app and can notify you via email, Slack, or PagerDuty.
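A threshold-based alert boils down to counting matching events inside a rolling time window. The sketch below is an illustrative Python version of that idea (not observIQ's implementation; the predicate and field names are assumptions):

```python
from collections import deque

class ThresholdAlert:
    """Fire when `threshold` matching events occur within `window_seconds`."""

    def __init__(self, predicate, threshold, window_seconds):
        self.predicate = predicate
        self.threshold = threshold
        self.window = window_seconds
        self.timestamps = deque()

    def observe(self, event, now):
        """Feed one event; return True if the alert should fire."""
        if not self.predicate(event):
            return False
        self.timestamps.append(now)
        # Drop matches that have aged out of the rolling window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) >= self.threshold

# Example: alert on 3 failed logons within 60 seconds.
alert = ThresholdAlert(lambda e: "Failed password" in e["message"], 3, 60)
```

The frequency controls mentioned above correspond to tuning the threshold and window so a brief blip doesn't page you, but a sustained pattern does.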
Lastly, visualizing your log data is important. Using built-in Dashboards or creating custom dashboards is a simple and effective way to understand what’s happening in your environment at a quick glance.
Best Practices in Log Management for Homelabs
As with every software routine, there are tried and tested practices worth considering when implementing effective log management. Like a fingerprint, every homelab is unique, and it’s your call to pick and choose the practices that are the right fit for you.
Gather the logs that matter most
Implementing effective log management means being able to gather all the logs that matter most and being able to filter through the noise to understand an incident or get to a root cause quickly. With observIQ, it’s incredibly simple to gather any log file.
Enriching your logs
To get the most out of your log data, it’s important to enrich and tag your logs with useful context. With observIQ, all of your logs are enriched automatically and provide important context like IP address, Hostname, Source Name, User ID, Event ID, Severity, and more, making it easy to trace issues back to their source.
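Enrichment simply means attaching useful context to each raw line at collection time. Here's a hypothetical Python sketch of the idea – the field names are illustrative, not observIQ's actual schema:

```python
import socket
from datetime import datetime, timezone

# Syslog-style numeric severities mapped to readable labels.
SEVERITY_NAMES = {0: "emergency", 3: "error", 4: "warning", 6: "info", 7: "debug"}

def enrich(raw_line, source_name, severity=6):
    """Wrap a raw log line in a record carrying useful context."""
    return {
        "message": raw_line.rstrip("\n"),
        "source_name": source_name,
        "hostname": socket.gethostname(),  # where the log was collected
        "severity": SEVERITY_NAMES.get(severity, "info"),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```

Once every record carries a hostname, source, and severity, tracing an issue back to the machine that produced it becomes a filter rather than a hunt.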
Keep your logs safe
Logs say a lot about your network, and much of that information is confidential. It is best to limit log access to a small number of users within your home network. Sharing access for test runs or debugging network failures should be done with caution.
Why is observIQ the Obvious Choice for Your Network?
For starters, we give you the ability to get up and running in under 5 minutes. With observIQ’s simple yet powerful Sources like Syslog and journald, you can gather logs from almost any device. With the observIQ Free Plan, you also gain access to the full feature set of the platform, including pre-made Dashboards and Live Tail.
When you need help, our helpful support staff will assist you with your setup, for free! Get started with a trial today!
Today, we’re excited to announce a new, completely free pricing tier for observIQ: the 3-day free plan.
With the observIQ free plan, you can ingest and index up to 3 gigabytes of logs per day with a 3-day rolling retention period. The free plan also provides unrestricted access to the observIQ feature set – including guided one-line agent installation, fleet and agent lifecycle management, built-in dashboards, a vast Source/integration library, live tail, and alerts – all attached to an optimized and hosted Elastic stack. No standing up and managing your own logging backend – no wasting time digging around in docs or configuration files.
The free plan gives anyone access to simple yet powerful hosted log management – all completely free.
What observIQ Features Are Available in the Free Plan?
All the features, quite simply – there are no restrictions; no functionality is hidden behind a paywall. Here’s a quick rundown:
- Blazing-fast log agent powered by Stanza – outperforming other popular OSS agents like Fluentd, Fluent Bit, and Logstash
- Single-line agent installation commands for Linux, Mac, Windows, and Kubernetes
- Over 40 pre-built Sources/integrations for popular technologies
- Fleet management: install and manage your Agents and Sources from the UI
- Automated log parsing and enrichment
- Built-in Dashboards
- Alerts – notifications via Email, PagerDuty, and Slack
- Live Tail – giving the ability to debug, live
- In-app chat with our support team to answer questions and listen to feedback
Who can take advantage of the observIQ free plan?
Professionals: DevOps, ITOps, SRE, DevSecOps
If you’re a professional managing a fleet of containerized applications, databases, or Windows machines, the free plan gives you the needed headroom and history to investigate and analyze incidents from a wide array of technologies and cloud-native applications. With simple yet powerful platform support for Kubernetes, Docker, Linux, Windows, and more – you can deploy a scalable logging solution in minutes and create threshold-based alerts to notify on critical incidents in your environment.
Installation commands are also automation-friendly and compatible with popular frameworks like Ansible and Microsoft System Center Configuration Manager (SCCM).
Enthusiasts: Homelabbers, Gamers, Personal Users
If you’re building out a homelab or just looking to monitor your gaming desktop, mining rig, or Plex server, the observIQ Free plan is a perfect fit for you as well. observIQ offers broad support for generic log sources like File, journald, JSON, CSV, and Syslog – giving you the ability to monitor activity in any log you’re interested in. Home networking gear, appliances, and firewalls like Ubiquiti UniFi and pfSense can commonly output to Syslog, and can be easily ingested and parsed with observIQ’s Syslog integration in a few clicks. observIQ can be used to monitor and map common security incidents as well, such as logon activity in Windows.
How Do I Access The Free observIQ 3-day Plan?
To get started, sign up for a free trial of observIQ here:
At any time during your trial, navigate to the billing page and choose the 3-Day Retention plan. No credit card required. Hit ‘apply’, and you’re good to go.
What Can I Do With The observIQ Free Plan?
Gather, Parse, and Ship your logs to observIQ in Less Than 5 Minutes
Utilize Over 40 Built-in Sources
Out of the box, observIQ offers more than 40 different Sources to add to your Agents. You can see a full list of supported Sources on our integrations page: https://observiq.com/integrations/
For context, a Source in observIQ is a pre-made parsing pipeline for the targeted technology. The pipeline contains parsing rules and tells the observIQ agent which files to read. The raw pipeline is hidden from the user; the user only verifies the file path and a few simple configuration options as part of Source configuration – the observIQ agent does the rest. Below are some of the most popular Sources you can utilize:
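To make the pipeline idea concrete, here's a rough Python sketch of what a Source amounts to: a file path to read plus parsing rules that turn raw lines into structured fields. This is entirely illustrative – the class, the pattern, and the NGINX example are assumptions, not observIQ's internal format:

```python
import re

class Source:
    """A Source pairs a file path with parsing rules for one technology."""

    def __init__(self, name, file_path, pattern):
        self.name = name
        self.file_path = file_path          # the only thing the user verifies
        self.pattern = re.compile(pattern)  # hidden parsing rule

    def parse(self, line):
        """Turn one raw line into a structured record, or None if it doesn't match."""
        match = self.pattern.match(line)
        return dict(match.groupdict(), source=self.name) if match else None

# Hypothetical NGINX access-log Source.
nginx = Source(
    name="nginx",
    file_path="/var/log/nginx/access.log",
    pattern=r'(?P<client>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)"',
)
```

The user-facing part of the configuration is just `name` and `file_path`; the regex is the kind of detail the pre-made pipeline keeps out of sight.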
Explore Your Logs With Powerful Search and Dynamic Filters
After you’ve shipped your logs to observIQ, you can use the Explore page to search and filter your logs to identify and investigate incidents in your environment. The dynamic filter bar allows you to easily search your logs by Severity, Agent, Source, or Type so you can cut through any noise and find the events you’re looking for.
Visualize Your Logs With Pre-made Dashboards
For many of the Sources in observIQ, a pre-made source-specific dashboard is automatically deployed to your account as soon as the Source is created and added to your Agent. Dashboards provide insight into the health of your environment at a quick glance and are the perfect starting point for incident investigation. Kubernetes, Windows, NGINX, and Syslog are just a few examples of sources with pre-made dashboards. You can find a full list of dashboards here.
Manage the Lifecycle of Your Agents From Fleet
From the Fleet page, you can manage the lifecycle of your Agents and Sources – all from the comfort of the UI. You can install, update, modify, and delete without digging around in configuration files. You can also track Agent health and keep tabs on per-Agent log usage.
Create Threshold-based Alerts – Notify Your Existing Channels
With the free plan, you can also create threshold-based alerts with your log data. Using Search and Filters, you can create an alert definition directly from the Explore page in observIQ, and avoid alert fatigue by using customizable frequency controls. You can also utilize Notifiers to notify Email, Slack, or Pagerduty when an alert triggers – allowing you to incorporate them into your existing workflow.
Debug Live with Live Tail
With the free plan, you’ll have full access to observIQ’s Live Tail functionality as well. Live Tail gives you the ability to stream and analyze your logs in real time, without having to SSH or RDP into a specific system and run tail -f and grep.
If you’re running Kubernetes, Live Tail is a great replacement for tools like kubetail or kail, allowing you to easily tail your logs from a specific deployment, daemonset, or pod with dynamic filters.
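For comparison, here's roughly what the tail -f plus grep workflow looks like replicated in Python – the per-host chore that a hosted live tail saves you from repeating on every machine (a minimal sketch; the file path in the example is hypothetical):

```python
import time

def follow(path, substring=None, from_start=False):
    """Yield lines appended to a file, optionally filtered – like tail -f piped to grep."""
    with open(path) as f:
        if not from_start:
            f.seek(0, 2)  # jump to the end of the file, like tail -f
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.25)  # no new data yet; wait and retry
                continue
            if substring is None or substring in line:
                yield line.rstrip("\n")

# Example (streams matching lines forever):
# for line in follow("/var/log/syslog", substring="error"):
#     print(line)
```

Multiply this loop by every host, container, and pod you run, and the appeal of a single streaming view becomes obvious.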
Why Choose the observIQ Free Plan?
observIQ provides simple yet powerful hosted log management, and the free plan makes it accessible to individual users, enthusiasts, and professionals alike – quite simply, you’ve got nothing to lose. With 3 gigabytes of daily ingestion and 3 days of retention, you have the flexibility you need to monitor the health of your environment, investigate incidents, and alert on undesirable behavior.
If you’re interested in integrating a log management solution in your stack, you can save time and money by checking out the free plan, avoiding the potential headache of manually configuring log agents, and standing up and maintaining your own logging backend.
To sign up for a free plan, create your account at https://app.observiq.com/sign_up/ and select the 3-day plan on the billing page.
Signing up for a free plan will also get you a free observIQ t-shirt! Happy logging!
What Is Log Management?
Log management encompasses the processes of managing the trove of computer-generated event log data, including:
- Collection & Aggregation
- Normalization
- Search & Analysis
- Storage & Archiving
There are two ways that IT teams typically approach event log management. Using a log management tool, you can filter and discard events you don’t need, only gathering relevant information – eliminating noise and redundancy at the point of ingestion. This makes it easier to find what you need quickly, and also helps to maximize the performance of the log management system tool itself.
Conversely, you can also collect every generated event and allow your log management tools to sort, filter and search the data. This can increase storage costs due to the size of the data, potentially impact system performance, and increase noise – but will allow you to analyze all of your log data, instead of a subset.
Your log management system will also simplify the collection of log data by aggregating all of your logs from various sources into one place. A log management system is also important in this step because it can normalize data into a consistent format and output to make log analysis easier, or even possible at all.
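Normalization means mapping records that arrive in different shapes onto one consistent schema so they can be queried together. An illustrative Python sketch – the target field names are assumptions, though the journald field names (`_HOSTNAME`, `PRIORITY`, `MESSAGE`) are real journald conventions:

```python
# Two sources report similar events in very different shapes;
# both get mapped onto one common record schema.
SYSLOG_SEVERITY = {"err": "error", "warn": "warning", "crit": "critical"}
JOURNALD_SEVERITY = {"3": "error", "4": "warning", "6": "info"}

def normalize_syslog(record):
    """Map a syslog-style record onto the common schema."""
    return {
        "timestamp": record["ts"],
        "host": record["host"],
        "severity": SYSLOG_SEVERITY.get(record["pri"], record["pri"]),
        "message": record["msg"],
    }

def normalize_journald(record):
    """Map a journald-style record onto the same schema."""
    return {
        "timestamp": record["__REALTIME_TIMESTAMP"],
        "host": record["_HOSTNAME"],
        "severity": JOURNALD_SEVERITY.get(record["PRIORITY"], "info"),
        "message": record["MESSAGE"],
    }
```

After normalization, a single query like severity == "error" spans both sources, which is exactly what makes cross-source analysis possible at all.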
While you may need to retain log data to comply with industry regulations, perhaps the biggest benefit of a log management system is the ability to search, sort, and analyze data. By utilizing saved searches, filters, and complex queries, you can surface irregularities and instantly drill down to the underlying data.
With a centralized approach, you can more quickly search across all your log data to discover patterns that might otherwise be impossible to recognize if logs only exist on isolated systems.
Storage & Archiving
Your log management system also plays an important role in storing and archiving your data. For long-term trending and analysis, it’s important to implement a log management tool that can store and archive your data for a time period that meets your compliance and analytical needs. Though log rotation rules are often defined on a per-app, service, or component basis, it’s important to capture and send logs to an external backend or log management tool as well.
Without log management, even if you know something isn’t working properly within your systems, finding the log event to diagnose the problem can be challenging. When there’s an incident, you can’t afford to waste time trying to manually sift through log data on disparate systems.
Lastly, aside from the sheer volume of data produced every day, much of the data is created as an audit trail and is not produced in a human-readable format. Converting the data can be costly, time-consuming, and stressful under a looming deadline.
Why Is Log Management Important?
Log management plays a significant role in maintaining a healthy, efficient, and secure infrastructure. It helps system administrators, developers, and IT security teams.
System Administrators need to ensure systems are working optimally. Log management tools provide a baseline of how systems function normally and can flag anomalies when they arise. This allows sysadmins to quickly see irregularities that need investigation.
Log management tools also allow system admins to create their own rules and triggers for generating alerts based on activity, patterns, and thresholds.
Developers also benefit from log management to monitor for errors and streamline their development process. By aggregating data — and converting it from unstructured data into a searchable format — developers can more quickly identify problems and debug software.
IT Security Teams
Cyber threats continue to escalate. The FBI reports a more than 300% increase in threat complaints within the past year and estimates are that cybercrime cost more than $1 trillion in 2020. With more employees than ever working from home and using Wi-Fi resources, security has become increasingly complex as well.
Log management tools allow security teams to more quickly identify suspicious activity. These tools monitor systems 24/7, so potential breaches and dangers can trigger alerts. When an unauthorized action occurs, security teams can take immediate action to mitigate the threat or stop its spread.
Using Log Management Tools
Three of the most important things you need log management tools for are investigating incidents, proactive alerting, and aggregation.
With 72% of companies reporting hybrid or private cloud strategies and 58% of all workloads existing in private or hybrid cloud, managing all of the log information is a challenge of its own. Mixing proprietary software and hardware with open source log management only accentuates the complexity.
Between on-prem and off-premises data centers, in-house servers, legacy applications, and cloud platforms, maintaining an on-premises log solution can also be costly.
The solution is to efficiently and automatically aggregate logs from hybrid-cloud environments, containerized environments, and microservice architectures into one log platform at scale. observIQ has a robust API for integrations outside of log agents and can help solve for common technologies like Windows events, databases, and Kubernetes.
Customized real-time alerts let you react quickly. This is helpful for both security threats and system performance issues.
With observIQ, you can set alert definitions, customize conditions that trigger alerts, and set warning levels for different events. You can also define notification channels, such as e-mail, Slack or Pagerduty to incorporate observIQ into your existing workflow.
When you experience instability or suffer an outage, you must be able to quickly assess the situation, identify the source, and take remedial action. Tracing an incident in your logs can be a painfully slow and tedious problem — especially when time is of the essence.
With observIQ Cloud, your logs are parsed automatically and enriched to provide the necessary context to filter, search, and visualize events quickly and easily. You can also tag logs with custom labels to specify the data center, region, or environment for complete traceability. This allows you to identify the root cause of issues fast and reduce your MTTR.
Next-Generation Log Management and Observability Solutions
observIQ builds next-generation observability solutions for ITOps and DevOps teams. Our solutions are built by engineers for engineers to accelerate, simplify, and enhance observability across today’s hybrid environments.
We offer four simple pricing tiers based on ingestion and retention. You can also try observIQ for free with full access to our platform, unlimited users, and three-day retention of up to 3 GB per day.
Contact us to get started!
The OpenTelemetry project is an ambitious endeavor with the goal of bringing together various technologies to form a vendor-neutral observability platform. Within the past year, many of the biggest names in tech have added native support for it within their commercial projects.
Formed through a merger of the OpenTracing and OpenCensus projects under the Cloud Native Computing Foundation (CNCF), OpenTelemetry (powered by observIQ’s log agent, Stanza) aims to make rich data collection across applications easier and more consumable.
What Is the OpenTelemetry Project?
OpenTelemetry is an ecosystem of instrumentation libraries and tools that are used to generate, collect, process, and export telemetry data for analysis. OpenTelemetry helps you better understand your software behavior and performance.
OpenTelemetry provides a standard using open-source software to produce metrics from cloud-native infrastructure and applications. Using language-specific instrumentation libraries, signals can be exported directly from applications. Alternately, the OpenTelemetry Collector can capture signals from web frameworks, RPC systems, storage clients, and other applications in use.
OpenTelemetry can be used to capture logs, metrics, and traces. This allows you to observe the state of microservices within applications. You can trace requests as they flow between microservices, capture related events, and track resource usage of shared systems.
Another way OpenTelemetry is being used is to identify potential constraints, to create tiered requests within applications so shared resources can be prioritized.
The OpenTelemetry ecosystem makes real-life tasks easier. Here are some examples:
- Adding custom attributes to automatically tracked spans, making it easy to query data
- Filtering out synthetic traffic
- Identifying long-running tasks
- Segmenting telemetry data using resource APIs
Importance of the OpenTelemetry Project
Modern cloud-native applications and data are distributed. This can make it difficult to compile the data you need into a single source. OpenTelemetry solves this problem by tracing and extracting data cross-platform. By standardizing the way telemetry data is collected and transmitted, it creates a common instrumentation format across various services.
Managing the performance of these complex and diverse environments has become a significant concern for development teams. It takes instrumentation for all of your frameworks and libraries, across multiple programming languages, to understand the collective behavior of all your services and applications.
Without such a standard, teams may be left with data silos or blind spots that negatively impact troubleshooting. OpenTelemetry makes the detection and resolution of problems easier. With complete interoperability, it provides a standard form of instrumentation across all of your services.
Why Use OpenTelemetry
By providing a standard for observability for native cloud applications, OpenTelemetry can significantly reduce the amount of time developing and implementing mechanisms to collect application telemetry. This frees up developers to spend time working on enhancing features.
The Benefits of Open Telemetry
OpenTelemetry is used by developers to examine features and find bugs. It provides several important benefits, including:
- The flexibility to change backends without having to change instrumentation
- A single set of standards that lets you work with more vendors, projects, and platforms
- Simplified telemetry data management and export
- Installation and integration that is often as simple as dropping in a few lines of code
- Freedom from lock-in to a vendor’s configuration and roadmap priorities
- No waiting for vendor support for instrumentation when new technologies emerge
Broad Language Support
OpenTelemetry provides language-specific instrumentation libraries for most popular languages, including Java, Python, Go, JavaScript, .NET, C++, PHP, and Ruby.
OpenTelemetry Architecture Components
OpenTelemetry is made up of a series of components, including:
- APIs – A core component, APIs are language-specific (Python, Java, .NET, etc.) and provide the basic pathway for adding OpenTelemetry to applications
- SDK – A language-specific SDK acts as the bridge between the API and the Exporter, delivering the data gathered from instrumented code. SDKs allow for configuration, including transaction sampling and request filtering.
- Exporters – OpenTelemetry Exporters let you configure where you want the telemetry sent. They decouple instrumentation from the backend configuration to make it easier to switch backends.
- Collector – The OpenTelemetry Collector is optional but helps create a seamless solution. It creates greater flexibility for sending and receiving telemetry within the backend. The Collector utilizes two models for deployment. You can use either an agent residing on the application host or implement a standalone process separate from the application itself.
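The decoupling the Exporter provides can be sketched in a few lines of Python. This is an illustration of the pattern only – not the actual OpenTelemetry SDK, whose real classes and method names differ:

```python
import time

class Span:
    """A minimal span: a named, timed operation carrying attributes."""
    def __init__(self, name):
        self.name = name
        self.attributes = {}
        self.start = time.time()
        self.end = None

    def set_attribute(self, key, value):
        self.attributes[key] = value

class ConsoleExporter:
    """One of many interchangeable backends a span can be sent to."""
    def export(self, span):
        print(f"{span.name} {span.end - span.start:.3f}s {span.attributes}")

class Tracer:
    """The API/SDK side: instrumented code only ever talks to the Tracer,
    so swapping the exporter never touches application code."""
    def __init__(self, exporter):
        self.exporter = exporter

    def start_span(self, name):
        return Span(name)

    def end_span(self, span):
        span.end = time.time()
        self.exporter.export(span)
```

Swapping ConsoleExporter for, say, a network exporter changes one constructor argument while every instrumented call site stays exactly as it was – which is the whole point of separating the API from the backend.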
What Is OpenTracing?
OpenTracing was a project undertaken by CNCF in 2016. It aimed to provide vendor-neutral ways of managing distributed tracing to help developers trace requests from start to finish.
The OpenCensus project, which Google made open source in 2018 after using it internally for their Census library, had much the same application.
observIQ Contributes Log Agent to the OpenTelemetry Project
observIQ’s Stanza Log Agent is a key part of the OpenTelemetry project. The log management platform is the first to take full advantage of the log agent technology that has been incorporated into OpenTelemetry.
observIQ accelerated the OpenTelemetry Project by contributing the Stanza open-source agent to the project as part of its effort to enable high-quality telemetry for all. Stanza is a small-footprint, high-capability log shipping agent. It uses roughly 10% of the CPU and memory of other popular log agents.
You can sign up for a free trial of observIQ to see Stanza in action as a native component.
If you’re interested in taking part in the OpenTelemetry Project, join the GitHub community to get started.
If you’re investigating incidents on your Windows hosts, sifting through the Event Viewer can be a painful experience. It’s best to collect and ship Windows Events to a separate backend for easier visualization and analysis – but depending on the solution you choose, this can take some significant legwork. Often, this can require manually configuring a 3rd party tool or agent, just to get started.
In this post, I’m going to walk through just how easy it is to collect, parse, and visualize Windows Events from multiple Windows machines with observIQ – all in less than 5 minutes – without needing to set up any 3rd-party tools. No digging around in configuration files to specify log formats or parsing rules – no need to stand up your own backend and storage.
Whether you’re an enthusiast, or an ITOps or Devops professional, observIQ provides tools you need to collect, parse and analyze Windows events, faster and easier than any other solution on the market.
Before We Start: A Few Simple Pre-Reqs
1. Sign Up for an observIQ Cloud Trial
First, sign up for a free 14-day trial of observIQ Cloud – no credit card required.
2. Choose Your Windows Machines
Next, assemble the list of Windows machines you want to monitor. These can be Windows 10 workstations or Windows Server machines, versions 2008 through 2019.
3. Verify Your Access
For the selected machines, verify you have both Administrator privileges and RDP (Remote Desktop) access for any remote machines – you’ll need both to install the observIQ log agent.
That’s it! Now we’re ready to proceed.
Install observIQ Agents on Your Windows Hosts: A Few Simple Steps
To begin, log into your newly-created observIQ account and follow the 3 simple steps below:
1. Create a Template
Time: [1 minute]
The first thing you’ll need to do is create a Template in observIQ. Navigate to the Fleet > Templates page and click Add Template.
On the Add New Template page, select Windows as the platform, and provide a friendly name for your Template. In this case, we’ll call it something simple: Windows Event Log Template. Next, click Create.
2. Add a Windows Event Log Source to Your Template
Time: [1 minute]
Next, you’ll be taken to your newly-created Windows Event Log Template. From here, we’ll add a Source to our template. Click Add Source.
On the Choose Source Type page, search for Windows Event Log in the list.
On the Configure Source panel, provide a friendly name for your Source. Again, we’ll name it something simple: Windows Event Log Source. Then choose the event channels you’re interested in collecting events from. For this example, let’s leave the 3 default selections for System, Application, and Security, as these are typically the most important channels to monitor.
3. Install the observIQ Log Agent Using a One-Line Installation Command
Time: [3 minutes, (30 seconds per Windows host)]
Next, click Add Agents to generate a one-line agent installation command.
Copy the one-line agent installation command to your clipboard.
Now, we can install the observIQ log agent on each of the Windows hosts. Simply RDP into each system, open the CMD Prompt as an Administrator, paste and run the command. The necessary installation files will be downloaded and installed automatically on your Windows machine in 5-10 seconds.
As each installation succeeds, the agent will be automatically detected by observIQ, and associated with your Template. Configuration is complete!
Now you have the observIQ log agent installed on each of your machines. Each agent is collecting and parsing the Windows Events based on options we’ve selected (Application, System, Security) in our Windows Event Log source that we’ve added to our Template. Let’s go take a look.
Exploring Your Windows Events on the Discover Page
Return to the Explore > Discover page in observIQ. You’ll now see Windows Events flowing into your account. In the Type column, you’ll see logs from the three channels we selected in our Source, the severity, and a summary of the event as well.
To quickly sort through your logs, you can use observIQ’s dynamic filter bar and easily filter your events by Severity, Agent, Source, or Type.
The moment we created our Windows Event Log Source, Windows Event dashboards were automatically deployed to the account: one for application and system events, and one for security events. You can find them on the Explore > Dashboards page.
And there you have it: Windows events in 5 minutes – it’s really that simple with observIQ. With guided configuration, support for popular Sources like Windows Event Log, and automatically installed dashboards, you can easily start analyzing your events in minutes, as opposed to hours or days.
For more information about the other Windows integrations observIQ supports, check out our integrations page: https://observiq.com/integrations/
If you’re interested in starting an observIQ Cloud trial, you can sign up here.