Today, we’re excited to announce a new, completely free pricing tier for observIQ: the 3-day free plan.
With the observIQ free plan, you can ingest and index up to 3 gigabytes of logs per day with a 3-day rolling retention period. The free plan also provides unrestricted access to the observIQ feature set – including guided one-line agent installation, fleet and agent lifecycle management, built-in dashboards, a vast Source/integration library, live tail, and alerts – all attached to an optimized, hosted Elastic stack. No standing up and managing your own logging backend, and no wasting time digging around in docs or configuration files.
The free plan gives anyone access to simple yet powerful hosted log management – all completely free.
What observIQ Features Are Available in the Free Plan?
All the features, quite simply – there are no restrictions; no functionality is hidden behind a paywall. Here’s a quick rundown:
- Blazing-fast log agent powered by Stanza – outperforming other popular OSS agents like Fluentd, Fluent Bit, and Logstash
- Single-line agent installation commands for Linux, Mac, Windows, and Kubernetes
- Over 40 pre-built Sources/integrations for popular technologies
- Fleet management: install and manage your Agents and Sources from the UI
- Automated log parsing and enrichment
- Built-in Dashboards
- Alerts – notifications via email, PagerDuty, and Slack
- Live Tail – giving you the ability to debug, live
- In-app chat with our support team to answer questions and listen to feedback
Who can take advantage of the observIQ free plan?
Professionals: DevOps, ITOps, SRE, DevSecOps
If you’re a professional managing a fleet of containerized applications, databases, or Windows machines, the free plan gives you the needed headroom and history to investigate and analyze incidents from a wide array of technologies and cloud-native applications. With simple yet powerful platform support for Kubernetes, Docker, Linux, Windows, and more – you can deploy a scalable logging solution in minutes and create threshold-based alerts to notify on critical incidents in your environment.
Additionally, installation commands are automation-friendly and compatible with popular frameworks like Ansible and Microsoft System Center Configuration Manager (SCCM).
Enthusiasts: Homelabbers, Gamers, Personal Users
If you’re building out a homelab or just looking to monitor your gaming desktop, mining rig, or Plex server, the observIQ free plan is a perfect fit for you as well. observIQ offers broad support for generic log sources like File, Journald, JSON, CSV, and Syslog – giving you the ability to monitor activity in any log you’re interested in. Home networking gear, appliances, and firewalls like Ubiquiti UniFi and pfSense commonly output to Syslog, and can be easily ingested and parsed with observIQ’s Syslog integration in a few clicks. observIQ can be used to monitor and map common security incidents as well, such as logon activity in Windows.
How Do I Access The Free observIQ 3-day Plan?
To get started, sign up for a free observIQ trial at https://app.observiq.com/sign_up/.
At any time during your trial, navigate to the billing page and choose the 3-Day Retention plan. No credit card required. Hit ‘apply’, and you’re good to go.
What Can I Do With The observIQ Free Plan?
Gather, Parse, and Ship your logs to observIQ in Less Than 5 Minutes
Utilize Over 40 Built-in Sources
Out of the box, observIQ offers more than 40 different Sources to add to your Agents. You can see the full list of supported Sources on our integrations page: https://observiq.com/integrations/
For context: a Source in observIQ is a pre-made parsing pipeline for the targeted technology. The pipeline contains parsing rules and tells the observIQ agent which files to read. The raw pipeline is hidden from the user; you only verify the file path and a few simple configuration options as part of Source configuration – the observIQ agent does the rest. Below are some of the most popular Sources you can utilize:
Explore Your Logs With Powerful Search and Dynamic Filters
After you’ve shipped your logs to observIQ, you can use the Explore page to search and filter your logs to identify and investigate incidents in your environment. The dynamic filter bar allows you to easily search your logs by Severity, Agent, Source, or Type so you can cut through any noise and find the events you’re looking for.
Visualize Your Logs With Pre-made Dashboards
For many of the Sources in observIQ, a pre-made source-specific dashboard will automatically be deployed to your account as soon as the Source is created and added to your Agent. Dashboards provide insight into the health of your environment at a quick glance and are the perfect starting point for incident investigation. Kubernetes, Windows, NGINX, and Syslog are just a few examples of Sources with pre-made dashboards. You can find a full list of dashboards here.
Manage the Lifecycle of Your Agents From Fleet
From the Fleet page, you can manage the lifecycle of your Agents and Sources – all from the comfort of the UI. You can install, update, modify, and delete without digging around in configuration files. You can also track Agent health and keep tabs on per-Agent log usage.
Create Threshold-based Alerts – Notify Your Existing Channels
With the free plan, you can also create threshold-based alerts with your log data. Using Search and Filters, you can create an alert definition directly from the Explore page in observIQ, and avoid alert fatigue with customizable frequency controls. You can also utilize Notifiers to notify email, Slack, or PagerDuty when an alert triggers – allowing you to incorporate alerts into your existing workflow.
Debug Live with Live Tail
With the free plan, you’ll have full access to observIQ’s Live Tail functionality as well. Live Tail gives you the ability to stream and analyze your logs in real-time, without having to SSH or RDP into a specific system and run tail -f and grep.
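For comparison, here’s the manual workflow Live Tail replaces (the host and log path below are illustrative, not specific to any product):

```shell
# After SSHing into the host, follow the application log and filter
# for errors by hand -- the manual loop Live Tail replaces.
# (/var/log/myapp/app.log is an illustrative path)
tail -f /var/log/myapp/app.log | grep --line-buffered "ERROR"
```

This works for one log on one machine; Live Tail does the equivalent across your whole fleet from the browser.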
If you’re running Kubernetes, Live Tail is a great replacement for tools like kubetail or kail, allowing you to easily tail your logs from a specific deployment, daemonset, or pod with dynamic filters.
Why Choose the observIQ Free Plan?
observIQ provides simple yet powerful hosted log management, and the free plan makes it accessible to individual users, enthusiasts, and professionals alike – quite simply, you’ve got nothing to lose. With 3 gigabytes of daily ingestion and 3 days of retention, you have the flexibility you need to monitor the health of your environment, investigate incidents, and alert on undesirable behavior.
If you’re interested in integrating a log management solution into your stack, you can save time and money by checking out the free plan – avoiding the potential headache of manually configuring log agents and standing up and maintaining your own logging backend.
To get started with the free plan, sign up for your account at https://app.observiq.com/sign_up/ and select the 3-day plan on the billing page.
Signing up for a free plan will also get you a free observIQ t-shirt! Happy logging!
What Is Log Management?
Log management encompasses the processes of managing the trove of computer-generated event log data: collecting and aggregating it, parsing and normalizing it, searching and analyzing it, and storing and archiving it.
There are two ways that IT teams typically approach event log management. Using a log management tool, you can filter and discard events you don’t need, only gathering relevant information – eliminating noise and redundancy at the point of ingestion. This makes it easier to find what you need quickly, and also helps to maximize the performance of the log management tool itself.
Conversely, you can also collect every generated event and allow your log management tools to sort, filter and search the data. This can increase storage costs due to the size of the data, potentially impact system performance, and increase noise – but will allow you to analyze all of your log data, instead of a subset.
Your log management system will also simplify the collection of log data by aggregating all of your logs from various sources into one place. A log management system is also important in this step because it can normalize data into a consistent format and output to make log analysis easier, or even possible at all.
While you may need to retain log data to comply with industry regulations, perhaps the biggest benefit of a log management system is the ability to search, sort, and analyze data. By utilizing saved searches, filters, and complex queries, you can surface irregularities and instantly drill down to the underlying data.
With a centralized approach, you can more quickly search across all your log data to discover patterns that might otherwise be impossible to recognize if logs only exist on isolated systems.
Storage & Archiving
Your log management system also plays an important role in storing and archiving your data. For long-term trending and analysis, it’s important to implement a log management tool that can store and archive your data for a time period that meets your compliance and analytical needs. Though log rotation rules are often defined on a per-app, service, or component basis, it’s important to capture and send logs to an external backend or log management tool as well.
Without log management, even if you know something isn’t working properly within your systems, finding the log event to diagnose the problem can be challenging. When there’s an incident, you can’t afford to waste time trying to manually sift through log data on disparate systems.
Lastly, aside from the sheer volume of data that is produced every day, much of the data is created as an audit trail and not produced in a human-readable format. Converting the data can be costly, time-consuming, and stressful under a looming deadline.
Why Is Log Management Important?
Log management plays a significant role in maintaining a healthy, efficient, and secure infrastructure. It helps system administrators, developers, and IT security teams.
System Administrators need to ensure systems are working optimally. Log management tools provide a baseline of how systems function normally and can flag anomalies when they arise. This allows sysadmins to quickly see irregularities that need investigation.
Log management tools also allow system admins to create their own rules and triggers for generating alerts based on activity, patterns, and thresholds.
Developers also benefit from log management to monitor for errors and streamline their development process. By aggregating data — and converting it from unstructured data into a searchable format — developers can more quickly identify problems and debug software.
IT Security Teams
Cyber threats continue to escalate. The FBI reports a more than 300% increase in threat complaints within the past year, and estimates put the cost of cybercrime at more than $1 trillion in 2020. With more employees than ever working from home on consumer Wi-Fi networks, security has become increasingly complex as well.
Log management tools allow security teams to more quickly identify suspicious activity. These tools monitor systems 24/7, so potential breaches and dangers can trigger alerts. When an unauthorized action occurs, this allows security teams to take immediate action to mitigate the threat or stop its spread.
Using Log Management Tools
The three most important things you need log management tools for are investigating incidents, proactive alerting, and aggregation.
With 72% of companies reporting hybrid or private cloud strategies and 58% of all workloads existing in private or hybrid cloud, managing all of the log information is a challenge of its own. Mixing proprietary software and hardware with open source log management only accentuates the complexity.
Between on-prem and off-premises data centers, in-house servers, legacy applications, and cloud platforms, maintaining an on-premises log solution can also be costly.
The solution is to efficiently and automatically aggregate logs from hybrid-cloud environments, containerized environments, and microservice architectures into one log platform at scale. observIQ has a robust API for integrations beyond log agents and covers common technologies like Windows events, databases, and Kubernetes.
Customized real-time alerts let you react quickly. This is helpful for both security threats and system performance issues.
With observIQ, you can set alert definitions, customize conditions that trigger alerts, and set warning levels for different events. You can also define notification channels, such as email, Slack, or PagerDuty, to incorporate observIQ into your existing workflow.
When you experience instability or suffer an outage, you must be able to quickly assess the situation, identify the source, and take remedial action. Tracing an incident in your logs can be a painfully slow and tedious problem — especially when time is of the essence.
With observIQ Cloud, your logs are parsed automatically and enriched to provide the necessary context to filter, search, and visualize events quickly and easily. You can also tag logs with custom labels to specify the data center, region, or environment for complete traceability. This allows you to identify the root cause of issues fast and reduce your MTTR.
Next-Generation Log Management and Observability Solutions
observIQ builds next-generation observability solutions for ITOps and DevOps teams. Our solutions are built by engineers for engineers to accelerate, simplify, and enhance observability across today’s hybrid environments.
We offer four simple pricing tiers based on ingestion and retention. You can also try observIQ for free with full access to our platform and unlimited users, with three-day retention of up to 3 GB ingested per day.
Contact us to get started!
The OpenTelemetry project is an ambitious endeavor with the goal of bringing together various technologies to form a vendor-neutral observability platform. Within the past year, many of the biggest names in tech have added native support for it within their commercial products.
Formed through a merger of the OpenTracing and OpenCensus projects under the Cloud Native Computing Foundation (CNCF), OpenTelemetry (powered by observIQ’s log agent, Stanza) aims to make rich data collections across applications easier and more consumable.
What Is the OpenTelemetry Project?
OpenTelemetry is an ecosystem of instrumentation libraries and tools that are used to generate, collect, process, and export telemetry data for analysis. OpenTelemetry helps you better understand your software behavior and performance.
OpenTelemetry provides a standard using open-source software to produce metrics from cloud-native infrastructure and applications. Using language-specific instrumentation libraries, signals can be exported directly from applications. Alternatively, the OpenTelemetry Collector can capture signals from web frameworks, RPC systems, storage clients, and other applications in use.
OpenTelemetry can be used to capture logs, metrics, and traces. This allows you to observe the state of microservices within applications. You can trace requests as they flow between microservices, capture related events, and track resource usage of shared systems.
OpenTelemetry is also used to identify potential constraints and to create tiered requests within applications so that shared resources can be prioritized.
The OpenTelemetry ecosystem makes real-life tasks easier. Here are some examples:
- Adding custom attributes to automatically track spans, thereby making it easy to query data
- Filtering out synthetic traffic
- Identifying long-running tasks
- Segmenting telemetry data using resource APIs
Importance of the OpenTelemetry Project
Modern cloud-native applications and data are distributed. This can make it difficult to compile the data you need into a single source. OpenTelemetry solves this problem by tracing and extracting data cross-platform. By standardizing the way telemetry data is collected and transmitted, it creates a common instrumentation format across various services.
Managing the performance of these complex and diverse environments has become a significant concern for development teams. It takes instrumentation for all of your frameworks and libraries, across multiple programming languages, to understand the collective behavior of all your services and applications.
Without such a standard, teams may be left with data silos or blind spots that negatively impact troubleshooting. OpenTelemetry makes the detection and resolution of problems easier. With complete interoperability, it provides a standard form of instrumentation across all of your services.
Why Use OpenTelemetry
By providing a standard for observability for cloud-native applications, OpenTelemetry can significantly reduce the amount of time spent developing and implementing mechanisms to collect application telemetry. This frees up developers to spend time enhancing features.
The Benefits of OpenTelemetry
OpenTelemetry is used by developers to examine features and find bugs. It provides several important benefits, including:
- The flexibility to change backends without having to change instrumentation
- A single set of standards that lets you work with more vendors, projects, and platforms
- Simplified telemetry data management and export
- Simple installation and integration – often as easy as dropping in a few lines of code
- No lock-in to a single vendor’s configuration and roadmap priorities
- No waiting for vendor support for instrumentation when new technologies emerge
Broad Language Support
OpenTelemetry offers instrumentation libraries for most popular languages, including Java, Python, Go, JavaScript, .NET, C++, Ruby, and PHP.
OpenTelemetry Architecture Components
OpenTelemetry is made up of a series of components, including:
- APIs – A core component; APIs are language-specific (Python, Java, .NET, etc.) and provide the basic pathway for adding OpenTelemetry to applications
- SDK – A language-specific SDK acts as the bridge to deliver data gathered from the API to the Exporter. SDKs allow for configuration, including transaction sampling and request filtering.
- Exporters – OpenTelemetry Exporters let you configure where you want the telemetry sent. They decouple instrumentation from the backend configuration to make it easier to switch backends.
- Collector – The OpenTelemetry Collector is optional but helps create a seamless solution. It creates greater flexibility for sending and receiving telemetry within the backend. The Collector utilizes two models for deployment. You can use either an agent residing on the application host or implement a standalone process separate from the application itself.
What Is OpenTracing?
OpenTracing was a project undertaken by CNCF in 2016. It aimed to provide vendor-neutral ways of managing distributed tracing to help developers trace requests from start to finish.
The OpenCensus project, which Google made open source in 2018 after using it internally for their Census library, had much the same application.
observIQ Contributes Log Agent to the OpenTelemetry Project
observIQ’s Stanza Log Agent is a key part of the OpenTelemetry project, and the observIQ log management platform is the first to take full advantage of the log agent technology that has been incorporated into OpenTelemetry.
observIQ accelerated the OpenTelemetry Project by contributing the Stanza open-source agent to the project as part of its effort to enable high-quality telemetry for all. Stanza is a small-footprint, high-capability log shipping agent. It uses roughly 10% of the CPU and memory of other popular log agents.
You can sign up for a free trial of observIQ to see Stanza in action as a native component.
If you’re interested in taking part in the OpenTelemetry Project, join the GitHub community to get started.
If you’re investigating incidents on your Windows hosts, sifting through the Event Viewer can be a painful experience. It’s best to collect and ship Windows Events to a separate backend for easier visualization and analysis – but depending on the solution you choose, this can take some significant legwork. Often, it can require manually configuring a 3rd-party tool or agent just to get started.
In this post, I’m going to walk through just how easy it is to collect, parse, and visualize Windows Events from multiple Windows machines with observIQ – all in less than 5 minutes – without needing to set up any 3rd-party tools. No digging around in configuration files to specify log formats or parsing rules – no need to stand up your own backend and storage.
Whether you’re an enthusiast or an ITOps or DevOps professional, observIQ provides the tools you need to collect, parse, and analyze Windows events, faster and easier than any other solution on the market.
Before We Start: A Few Simple Pre-Reqs
1. Sign Up for an observIQ Cloud Trial
First, sign up for an observIQ Cloud free 14-day trial – no credit card required.
2. Choose Your Windows Machines
Next, assemble the list of Windows machines you want to monitor. These can be Windows 10 workstations, or servers running Windows Server 2008 through 2019.
3. Verify Your Access
For the selected machines, verify you have both Administrator privileges and RDP (Remote Desktop) access for any remote machines – you’ll need both to install the observIQ log agent.
That’s it! Now we’re ready to proceed.
Install observIQ Agents on Your Windows Hosts: A Few Simple Steps
To begin, log into your newly-created observIQ account and follow the 3 simple steps below:
1. Create a Template
Time: [1 minute]
The first thing you’ll need to do is create a Template in observIQ. Navigate to the Fleet > Templates page and click Add Template.
On the Add New Template page, select Windows as the platform, and provide a friendly name for your Template. In this case, we’ll call it something simple: Windows Event Log Template. Next, click Create.
2. Add a Windows Event Log Source to Your Template
Time: [1 minute]
Next, you’ll be taken to your newly-created Windows Event Log Template. From here, we’ll add a Source to our template. Click Add Source.
On the Choose Source Type page, search for Windows Event Log in the list.
On the Configure Source panel, provide a friendly name for your Source. Again, we’ll name it something simple: Windows Event Log Source. Then choose the event channels you’re interested in collecting events from. For this example, let’s leave the 3 default selections for System, Application, and Security, as these are typically the most important channels to monitor.
3. Install the observIQ Log Agent Using a One-Line Installation Command
Time: [3 minutes (30 seconds per Windows host)]
Next, click Add Agents to generate a one-line agent installation command.
Copy the one-line agent installation command to your clipboard.
Now, we can install the observIQ log agent on each of the Windows hosts. Simply RDP into each system, open the CMD Prompt as an Administrator, paste and run the command. The necessary installation files will be downloaded and installed automatically on your Windows machine in 5-10 seconds.
As each installation succeeds, the agent will be automatically detected by observIQ, and associated with your Template. Configuration is complete!
Now you have the observIQ log agent installed on each of your machines. Each agent is collecting and parsing Windows Events based on the options we selected (Application, System, Security) in the Windows Event Log Source we added to our Template. Let’s go take a look.
Exploring Your Windows Events on the Discover Page
Return to the Explore > Discover page in observIQ. You’ll now see Windows Events flowing into your account. In the Type column, you’ll see logs from the three channels we selected in our Source, the severity, and a summary of the event as well.
To quickly sort through your logs, you can use observIQ’s dynamic filter bar and easily filter your events by Severity, Agent, Source, or Type.
The moment we created our Windows Event Log Source, Windows Event dashboards were automatically deployed to this account: one for application and system events, and one for security events. You can find them on the Explore > Dashboards page.
And there you have it: Windows events in 5 minutes – it’s really that simple with observIQ. With guided configuration, support for popular Sources like Windows Event Log, and automatically installed dashboards, you can easily start analyzing your events in minutes, as opposed to hours or days.
For more information about the other Windows integrations observIQ supports, check out our integrations page: https://observiq.com/integrations/
If you’re interested in starting an observIQ Cloud Trial, you can sign up here.
In my first post in the Kubernetes Logging Simplified blog series, I touched on some of the ‘need to know’ concepts and architectures to effectively manage your application logs in Kubernetes – providing steps on how to implement a Cluster-level logging solution to debug and analyze your application workloads.
In my second post, I’m going to touch on another signal to keep an eye on: Kubernetes events. Kubernetes events are important objects that can provide visibility into your cluster resources, and can be useful to correlate with your application and system logs.
What is a Kubernetes Event?
Kubernetes events are JSON objects, made accessible via the Kubernetes API, that signify a state change of a Kubernetes resource. These changes are reported to the API by their related component. For example, if a pod is evicted or created, a container fails to start, or a node restarts – each of these state changes generates a Kubernetes Event.
Unlike container logs, Kubernetes events don’t ultimately get logged to a file somewhere; Kubernetes lacks a built-in mechanism to ship these events to an external backend. As a result, attempting to utilize a typical node-level log agent architecture to grab these events may not work. These events can be captured with a custom application, a number of OSS tools, or an observIQ Event Collector, which I’ll walk through below.
What information does a Kubernetes Event Contain?
In addition to useful environment metadata, a Kubernetes Event contains the following key bits of information:
- When the event occurred
- Severity of the event (info, warning, error)
- Reason the event occurred (abbreviated description of the event)
- Kind of Kubernetes resource (node, pod, container)
- Description of the event
- Component that reported the event (kubelet, kube-proxy, kube-api, etc)
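You can see these same fields yourself with a quick query (this assumes kubectl access to a cluster; python3 is used here only as a portable JSON extractor, and the output format is my own):

```shell
# Print the key fields from each event: timestamp, severity, reason,
# resource kind, and message (requires access to a running cluster)
kubectl get events -o json | python3 -c '
import json, sys
for e in json.load(sys.stdin)["items"]:
    print(e.get("lastTimestamp"), e["type"], e["reason"],
          e["involvedObject"]["kind"], "-", e["message"])
'
```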
Why is a Kubernetes Event useful to capture?
Tracking Kubernetes Events can be useful to understand what’s happening in your cluster over time, which can be particularly helpful to review during a postmortem. Digging into the ‘when’ and ‘why’ over time can reveal useful trends and provide a good discussion point when an application or service fails.
Real-Time Environment Awareness
If you’re using a complete Cluster-level logging solution like observIQ, Kubernetes Events can be used as the basis for informational or error-level alerts that notify Slack, for example – real-time notifications that keep your entire team in the loop as the state of your cluster resources changes.
Container Log Correlation
Having visibility into the state of your resources can help provide useful hints as to what’s happening with your applications. Kubernetes Events gathered by observIQ are automatically enriched with Kubernetes Metadata like namespace, deployment, and pod names – all of which allow you to correlate an application log directly to a resource event with a single filter.
How do I get Kubernetes Events?
By default, events are stored in etcd for a limited amount of time, typically ~60 minutes, and are made accessible by kubectl commands. Though the commands are useful to learn and employ in certain situations, utilizing a custom application or implementing a complete Cluster-level logging solution that captures, ships, and stores events for long-term analysis is highly recommended.
Accessing Kubernetes Events with Kubectl
Here are a few commands that will allow you to see your events:
kubectl describe pods <podname>
Describing a pod will provide you with related Kubernetes event information, if available:
kubectl get events
Provides a list of current Kubernetes events for all resources:
kubectl get events -o json
Same as above, but each Kubernetes Event is presented as the raw JSON object:
Accessing Kubernetes Events with OSS Tools:
Both kube-events and kubernetes-event-exporter are nifty, highly customizable tools that can capture and forward Kubernetes events to a preferred output or sink (e.g., S3, Kafka).
Accessing Kubernetes Events with observIQ
With observIQ, you can easily enable Kubernetes Event collection by deploying the observIQ log agent as an Event Collector. Just select the option on your Kubernetes Template. See steps below:
Deploying the observIQ Agent as a Kubernetes Event Collector
In my first post in the series, I walked through how to create a Kubernetes Template and enable container log collection. With observIQ, you can easily enable or disable logging options from your template, even after you’ve deployed agents to your cluster. In this example, we’ll enable event collection in our existing template, re-apply the observiq-agent.yaml, and add an observIQ Event Collector to our existing deployment. This will leave us with both 1) an observIQ agent daemonset that gathers the application logs and 2) a single Event Collector deployment that gathers the Kubernetes Events, running side by side.
Update your Kubernetes Template in observIQ
Navigate to the Fleet > Templates page and choose your previously created Kubernetes template. Select the ‘Enable Cluster Events’ option, then click ‘Update’.
Next, click ‘Add Agents’.
On the Install Kubernetes Agents page, download the newly generated observiq-agent.yaml, copy it to your cluster, and apply it by running the kubectl apply -f observiq-agent.yaml command. After 15-30 seconds, you’ll see the new Event Collector in the discovery panel below.
View your Kubernetes Events in observIQ
After a few minutes, you’ll start to see Kubernetes Events on the observIQ Explore page. The messages will be typed as k8s.events. Opening up one of the k8s.events entries, you can see the parsed JSON object, as well as useful labels and metadata that have been automatically added to the event to help correlate it to a specific application.
Gathering your Kubernetes events is important if you want a complete understanding of what’s going on in your cluster. Kubernetes Events are easily accessible with kubectl commands but are short-lived. Container logs and Kubernetes events can be correlated together – but it can be challenging without the right tool.
For a complete log management solution that will capture your Kubernetes Events, container logs, and more, sign up for a free 14-day trial of observIQ Cloud here: