The Illusive World of Monitoring for Legacy Systems

by Paul S on November 23, 2021

In the tech industry, we obsess over the latest and greatest. When it comes to observability, we’re always looking at the most advanced hardware, the enthusiasts’ favorite systems, and the tech venture capital trends to get an idea of what to build for next. observIQ is no exception. We watched as ARM architecture went from an enthusiasts’ fringe technology for neat homemade projects to mainstream, and we were the first to natively support log management for ARM-based systems on our cloud platform.

Consumers tend to replace their personal electronics about every four years, and software has reached a point where updates are constant and automatic. Developers around the world pour hours and billions of dollars into maximizing the value of every last flop. But what about the systems that were left behind, the old-school infrastructures that are too old to make use of new advancements? There are billions of dollars worth of legacy systems still in use around the world, with no hope of receiving an update any time soon. The United States Department of Defense, for example, still uses a computer system that was designed in 1958. Retail companies collectively spend 58% of their IT budgets on maintaining legacy systems built before 2000. Have observability solutions left these infrastructures behind as well?

Largely, yes. That’s probably a big mistake.

The biggest concern with legacy systems is obviously security. As hardware, firmware, and software age, hacking techniques overtake them. That’s one reason why keeping systems updated is so important. When it comes to defense and other government-related systems, the policy is to handle vulnerability from a physical access perspective. Access points into legacy systems are kept physically guarded, even to the extent that authorized users are not allowed to bring personal electronics to access points. That works, for the most part, but when it comes to retail, school districts, utility companies, banks, and dozens of other organizations that depend on legacy systems, armed guards are not a viable option.

The hard truth is that legacy systems are hacked all the time. In high school, it was a running joke how easy it was to break into the district’s academic database and change whatever we wanted. Of course, doing so would be noticed almost immediately since physical records were also retained, but it didn’t change the fact that the system security was beyond laughable. They’ve since updated their system and gone fully digital, but many districts across the country have not, and have no plans to in the near future.

Observability is a roundabout method of increasing security. It can’t prevent breaches, but it can track them. That allows the owners of compromised legacy systems to compensate for breaches in several different ways: correcting information changed during breaches, collecting more information to track down attackers, retaining data about breaches for insurance, compliance, and other future uses, and so on. At the end of the day, it’s data about what’s going on in the system, and that can be made beneficial in a myriad of imaginative ways. So if there are so many legacy systems out there, and they could almost all benefit from enhanced observability, why don’t log management companies support legacy systems?

The answer is multifaceted, but one intuitive factor is incentive. Even though billions are spent on legacy systems every year, the companies and infrastructures that depend on them have evolved without log management for so long that they’ve invested heavily in other means of compensating for the extra expense of the old systems. Even though 80% of legacy system users profess that the lack of new tech adoption will hurt their organizations in the long run, no one has the will to pack it up and roll out something new. If they did, they would have done it already. If a log management company built a robust solution for a pre-’90s system, there is no guarantee that anyone would actually want to use it. In fact, it’s unlikely. Anyone still using legacy systems with the will to implement a log management system for higher visibility and security would likely have the will to implement an entirely new system anyway. In the case of government agencies, which are held back more by bureaucracy than by unwillingness to change, there is no guarantee that your system will pass the bureaucratic approval process, and if it could, the agency in question has likely already implemented something.

Another reason is the origin of observability. Observability really emerged as an industry in the ’90s. From its inception on, it’s been focused on supporting the next big technology, not the last one. There were many big shifts in the ’90s, both with hardware and operating systems, that mark a sort of flashpoint in the history of computing. Personal computing took off. Work in the United States started down the inevitable path of digitization. The need for robust observability solutions then and well into the future was clear. No one thought legacy systems would last this long. And little by little, they are disappearing, which brings us to the final reason.

No one wants to build an amazing technology for the mere purpose of compensating for the vulnerabilities of that which is obsolete and on its way out the door. There may be some money in it, but there is no prestige, and if you have the capability to construct solutions for the past, you also have the ability to solve problems today and in the future, and you’re probably better off focusing on where the money is going, not where it was. 

So, as unsatisfying as it is, it looks like legacy systems won’t ever receive the same intense, competitive support that contemporary systems get from the observability industry. That said, there are still workarounds. Many open source observability solutions are out there – some created by individuals and published on platforms like GitHub, others built by large, dedicated open source teams, like OpenTelemetry. It’s not out of the question that an inspired IT pro at an organization that depends on legacy systems could adapt some combination of open source tools to work on their system. They probably won’t get more than a ceremonial employee of the month, but it’s at least a possibility.

How to set up Stanza as the log agent for your GCP?

by Deepa Ramachandra on November 12, 2021

Stanza is a robust log agent. GCP users can use Stanza to ingest large volumes of log data. Before we dive into the configuration steps, here’s a matrix detailing the functional differences between the common log agents used by GCP users.

[Chart comparing the specifications of four common logging agents – FluentD, FluentBit, Logstash, and Stanza – depicting the advantages of Stanza over the other agents]

Stanza was built as a modernized alternative to FluentD, FluentBit, and Logstash. GCP users can now install Stanza on their VMs or GKE clusters to ingest logs and route them to the GCP Logs Explorer. In this post, we detail the steps for installing Stanza in Linux, Windows, and Kubernetes environments and viewing the logs in the GCP Logs Explorer.


Stanza for a Linux VM in GCP:

1. Single-line installation command: in your VM, run the following command to install Stanza:

sh -c "$(curl -fsSlL <Stanza install script URL>)"

2. Once the installation is complete, the following message displays. It provides the commands for starting and stopping Stanza. In addition, it shows the path for the config file.


3. Check that Stanza is running:

ps -ef | grep stanza

4. Open the config.yaml using the command vi config.yaml. In this example we use the vi editor; alternatively, any Linux/Unix editor can be used to edit and save the config file.

5. Comment out everything except the following in the config.yaml and save the file:

pipeline:
  - type: file_input
    include:
      - file path
    output: example_output
  - id: example_output
    type: google_cloud_output


6. After the config file is saved, stop and start Stanza using the following commands, and verify that the service is running.

systemctl stop stanza

systemctl start stanza

ps -ef | grep stanza

7. Run a search query in GCP’s Logs Explorer to verify that the logs are being sent to GCP and are available to view.


Stanza for a Windows VM in GCP:

1. Access the command line and run the single-line installation command for Windows:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12 ; Invoke-Expression ((New-Object net.webclient).DownloadString('')); Log-Agent-Install

2. Check that the Stanza service is running. In PowerShell:

Get-Service stanza

3. Make the following edits to the config.yaml:

pipeline:
  - type: file_input
    include:
      - file path
    output: example_output
  - id: example_output
    type: google_cloud_output


4. After the config file is saved, stop and start Stanza using the following commands:

net stop stanza 

net start stanza 


5. Verify that the logs are routed to the correct path and are available in GCP’s Logs Explorer. To do this, enter the following query in the Logs Explorer console: "stanza.log" stanza

Stanza for ingesting logs from GKE clusters:

1. As a prerequisite, follow the linked instructions to create a credentials secret from a JSON key file.

2. Download and add the following files to the bucket in GCP. The files are available at


3. Run the following command in the GKE cluster to create the service account. Our sample is an e-commerce application; for test purposes, use the application linked here.

kubectl apply -f service_account.yaml

4. Run the following command to create the config map. 

kubectl apply -f config.yaml

5. Run the following command to deploy the agent to the GKE cluster

kubectl apply -f deployment.yaml

6. Here’s a sample of the expected configuration if you choose to do it manually.

7. Verify in the Logs Explorer that Stanza ingests the GKE cluster logs for the application or the test application.


Stanza for ingesting logs from MySQL databases in GCP:

1. Use the following default configurations to begin ingesting logs. 

pipeline:
  - enable_error_log: true
    enable_general_log: false
    enable_mariadb_audit_log: false
    enable_slow_log: false
    error_log_path: /var/log/mysql/error.log
    start_at: end
    type: mysql
  - type: google_cloud_output

2. Verify in the Logs Explorer that the logs are ingested from the database.


Stanza is a lightweight log ingestor and transporter. Combined with the great features of GCP, you should be able to set up flawless end-to-end observability for your applications. Try it out and write to us with your requests and suggestions.

What’s Wrong With Observability Pricing?

by Paul S on November 9, 2021

Observability Pricing: Bargain or Rip-off?


There’s something wrong with the pricing of observability services. Not just because it costs a lot – it certainly does – but also because it’s almost impossible to discern, in many cases, exactly how the costs are calculated. The service itself, the number of users, the number of sources, the analytics, the retention period, extended data retention, and the engineers on staff who maintain the whole system are all factors that feed into the final expense. For some companies, it’s as much as thirty percent of their outside vendor expenses. Observability is a $17 billion industry, projected to grow by at least $5 billion within five years. More new customers are entering the market than new players, which is enough to keep prices rising despite the increased competition. But there’s more going on than mere demand driving up prices. Observability is also one of the fastest-innovating spaces in the tech world today. So are these immense costs really worth the innovations?


Spoiler alert: no.


Most consumers are simply overpaying for metrics, traces, and logs.


Okay, there is obviously more nuance than that. Observability is critical. Companies can’t just opt out. Every tech company needs it, many non-tech companies find value in it, and as security, compliance, and administrative needs turn digital in nearly every industry around the world, the observability market is only going to expand and dig deeper into everyday business until it is a universal norm. Indispensable products often come with a premium price, sans price regulation. Look at the average cost of smartphones over the last decade – phones have become the single most essential item for anyone to have, even in underdeveloped regions, and “entry level” phones now cost double what flagships cost in the late 2000s.


Like smartphones, much of the increase in price is justified by increased value. In observability, that comes in the form of unique services, improved efficiency, direct customer support, and ease of use. But also like smartphones, observability can trap you in a specific ecosystem (like *cough cough* Apple), and those trapped in an ecosystem they cannot easily move out of pay a premium for no added value beyond remaining in that ecosystem. It’s probably not fair to go so far as to call such practices “predatory,” since there is still value in the ecosystem and a relatively unobstructed choice between observability solutions, but no one competing in the observability space is going out of their way to make it easier to switch to a competing platform. 


Pricing structures vary wildly across observability services and it’s easier to look internally at what expenses those companies undertake to provide their services than it is to look at how they price their services externally. The costs are based on a myriad of factors that boil down to feature set, user base, data sources, and data volume. Most observability solutions base their pricing on some combination of those factors, which makes sense, because they correlate directly to costs that their customers create for them. The problem is that the largest observability solutions toy with their pricing schemes in order to appeal to a wider variety of customers, rope in small companies with apparently low costs only to double or triple them later, upsell for “premium” features that are actually essential for most businesses, and carefully craft large jumps in their pricing models that correlate to the customer’s cost of switching. It’s smart if you’re staring at the bottom line, but frustrating for customers.


No two companies have identical observability needs. A large portion of the increased competition and increased prices of observability solutions can be attributed to the fact that more companies are constructing broader platforms to service the widening breadth of use cases, and then subsidizing the cost of those new features by charging more for their services overall – meaning more to firms that don’t necessarily need the same bells and whistles as everyone else. Look at a whale like Netflix. They are primarily concerned with maintaining large bandwidth connections, hosting content as cheaply as possible, and moving video from their servers to their customers on the most efficient paths. It isn’t important to them if you’re loading the video from a data center across the street or on the other side of the world, they just want to get the data to you as cheaply as possible from somewhere where they are keeping it as cheaply as possible without any severe slowdowns in connection speed. Security, while still a concern internally, is less of an issue when it comes to the actual delivery of their services. Sending The Office from one machine to another isn’t exactly a national security threat.  Whenever a company like Netflix is hacked, the worst thing that happens is we have to get a new credit card mailed to us in 3-5 business days. Observability solutions for them serve as a tool primarily for optimization. Another massive whale you may not have even heard of, APEX Clearing, handles market transactions for brokers like Robinhood, SoFi, and Acorns. Chances are pretty good that a financial institution you regularly engage with does business with APEX clearing and you aren’t even aware of it. In a way, that’s a good thing. They also move massive amounts of data, and speed and efficiency are nice, but unlike Netflix, security is literally a make or break factor for their business. 
A major hack of their systems or observability solution, like the SolarWinds hack in 2020 that exposed the financial data of millions of companies and citizens, can have massive economic consequences. For the same observability solution to appeal to both types of users, the feature set needs to be twice as large and twice as robust. There is certainly some overlap, but the ‘must-have’ feature set between any two customers can look wildly different.


Observability is changing fast. Firms need to innovate or acquire innovators to keep up. That leads to increased quality, but also increased cost and feature bloat that isn’t relevant to many customers. Pricing models are ultimately based on expenses incurred by observability services, but are often composed of an obscure combination of factors that leaves customers uncertain exactly how their bills are calculated. Observability platforms promise customers competitive pricing, but often fail to include crucial services in basic plans or levy massive fees and price increases as the customer becomes more dependent on their services and switching becomes less feasible. 


There is no clear solution. People need observability and are willing to pay a lot for it, but no one wants to pay more than they have to, especially if it is not maximizing their value. The first step to avoiding this situation, as a customer, is finding an observability solution that offers transparent pricing. This can be tricky. Many solutions boast pricing pages that seem easy enough to understand, but one of the four factors discussed above (features, users, sources, and volume) is probably hidden in the fine print as a means to trigger a massive price jump somewhere down the road. The next best solution is to find a service that charges based on only one factor. Most likely, that will be volume. Volume has the most direct correlation to cost for the providers, so providers with the most transparent pricing models simply charge directly based on consumption, like observIQ. A volume-based pricing model makes it easy to predict what your costs will be, even as you scale. You may end up paying for features you don’t need, but you won’t get roped into upsells or locked into pricing jumps as you grow your business.
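
To see why a single-factor, volume-based model is easier to reason about, here’s a minimal sketch; every rate in it is hypothetical, purely for illustration:

```python
def monthly_cost(gb_ingested: float, rate_per_gb: float = 0.50,
                 retention_multiplier: float = 1.0) -> float:
    """Project a monthly bill under a purely volume-based pricing model.

    Every number here is a hypothetical example rate, not a real price.
    """
    return gb_ingested * rate_per_gb * retention_multiplier

# Cost scales linearly with volume, so projections stay predictable as you grow:
for gb in (100, 500, 1000):
    print(f"{gb} GB/month -> ${monthly_cost(gb):,.2f}")
```

Because cost is a straight line in volume, doubling ingestion exactly doubles the bill – there are no hidden tier jumps to model.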


At observIQ we hope to lead by example. We include all of our features in every plan. You only pay based on volume and retention. Our costs are transparent and predictable, and you never pay for adding users, sources, or new features. Check us out!

5 Weird Use Cases for Log Management

by Paul S on October 26, 2021


We’re all familiar with the typical use cases for log management, such as monitoring cloud infrastructures, development environments, and local IT infrastructure. So we thought it would be fun to cover some of the less usual, more wild use cases for log management, just to show that log management tools are more versatile, and more interesting, than they may seem. If any of these use cases look too interesting to ignore, let us know and we can do a full article on them!

1. Monitoring Household Video Game Time (Or Netflix, HBO, etc.)


I think we all saw our gaming and streaming hours increasing during the COVID-19 lockdowns. For parents, the value of tracking game and stream time is an easy sell. Most parents want to limit their young children’s screen time. There’s nothing wrong with games and shows, but most agree that variety and moderation is the key to a fulfilling lifestyle. Any adult can make use of monitoring their own gaming and streaming. For the rare few of us with the power to self-moderate unassisted, there’s less value, but most of us are used to seeing hours, even days, disappear into screens. Maybe it won’t motivate us to change anything, but at least we will have some visibility into our screen time.

2. Collecting data on your IoT devices


I don’t have any kids, but the other day I had a pregnant friend over, and some of our conversation was naturally about babies. Within twenty-four hours, Amazon was advertising baby diapers to me. Which of my eleven networked devices recorded and sent that information? It’s no secret that many IoT devices, like Alexa, Google Home, and our TVs, are constantly collecting data on us. It’s about time we return the favor. With plug-and-play observability solutions like observIQ supporting a wider range of sources and log types every day, it’s becoming much easier to monitor the tech that’s monitoring you. You could check, for example, how frequently applications on your IoT devices are active, when they’re communicating with the cloud, and maybe even get some insight into what data they’re sending out.

3. Checking which apps on your PC are hogging resources


I can’t count how often my computer’s fans are running at full tilt while I’m doing something innocuous, like word processing. I’m always left wondering: what is my computer doing in the background that’s heating up its internal components so much? The big fear is always that someone has managed to sneak a crypto-mining or data-stealing program onto your machine, but usually it’s just bloatware that came with some free application, or a confused application eating resources to do a whole lot of nothing. On a PC, you can see real-time hardware usage with Task Manager, and there is a host of other hardware monitoring software for PC and Mac, but local hardware monitoring applications don’t provide the same depth of information as log management platforms, nor can they effectively track usage over time or present information on useful dashboards.

4. Generating insights from trading bot activity


It’s always fun when a human gets a rare chance to play the meta game against machines and win. When it comes to day trading, bots make up 80% of all trades. That’s mostly due to the fact that bots can trade faster, not that they’re actually smarter than us when it comes to financial predictions. The stock market is a complicated beast that responds to literally everything our species does, our planet does, and sometimes it even seems like it responds to the stars, as if connected to some otherworldly informant. Ultimately, a lot of it feels random, and whether that’s true or not, we’re a long way off from any AI understanding what’s really going on. (And manipulating prices by moving large amounts of wealth around is not the same as understanding how and why prices are what they are. AI has the same data to look at as everyone else, and it’s no less obscure just because they can do math faster.)

So where does log management come in? Well, if you have access to a network of machines running trading bots, each of those bots will behave differently, based on their programming and instructions. Collectively, they can generate trends that become apparent to an intuitive mind before they are clear to the bots themselves. Sure, you could hook a bot up to that information pipeline and instruct a bot to trade based on the other bots’ trades, but what’s the fun in that when you can watch for trends yourself and take advantage of a big swing? It doesn’t always work, so this is certainly not financial advice, but it’s quite satisfying when it pays off.
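
As a toy illustration of spotting a “big swing,” here’s a sketch that watches a stream of per-minute bot trade counts (hypothetical values, as if parsed from trading-bot logs) and flags sharp deviations from the recent moving average:

```python
from collections import deque

def detect_swings(counts, window=5, threshold=2.0):
    """Return indices where a value jumps past threshold x the recent average.

    `counts` is a hypothetical series of per-minute bot trade counts;
    the window and threshold are illustrative tuning knobs, not advice.
    """
    recent = deque(maxlen=window)
    swings = []
    for i, count in enumerate(counts):
        if len(recent) == window:
            avg = sum(recent) / window
            if avg and count / avg >= threshold:
                swings.append(i)
        recent.append(count)
    return swings

# A quiet stream with one sudden burst at index 5:
print(detect_swings([10, 11, 9, 10, 10, 30, 10]))  # -> [5]
```

This is the same rolling-baseline idea many log management dashboards use for alerting, just applied to trading activity instead of error rates.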

5. Learn about your AI as it learns about you


Personal assistants are taking hold. Siri, the original AI assistant that began as an iOS app that Apple purchased and folded into its OS, was quickly followed by Amazon’s Alexa and Google Assistant. After that, Samsung released Bixby, and a handful of other small players emerged over time. The original version of Siri was worse than an average high school coding project would be today, but back in the early 2010s, it was the first program that could reliably listen and understand what a human was saying to it. Now our AI assistants understand almost everything we say, incorporating context and tone to derive deeper significance. They can make appointments for us and remind us of things we may have forgotten, even without us asking for a reminder. How does it know all of that?

It’s all linked to your profile on whatever platform your AI assistant of choice lives on. Monitoring your AI assistant is similar to monitoring your IoT devices, but more focused on one application and often on only one device – your phone. With the right log management setup, you can see when your AI is sifting through your phone’s data and other applications, and listening in on your daily life, to ship it off to the cloud for processing and incorporation into your digital clone. Different people regard our machines’ surveillance of us in different ways, but whether you think it’s good or bad, it’s happening, so why not get some insight into exactly who Siri thinks you are?


Why use observIQ


If any of these use cases look interesting to you, check out observIQ. observIQ is the easiest log management platform to use, especially for those who are new to observability. It has all the log management features expected in a robust observability solution, wrapped in an approachable package with an intuitive user interface. Let us know on Twitter if you want to see a longer article about any of these use cases, or if you think of one of your own you want to share.

Log Management 101: Log Sources to Monitor

by Deepa Ramachandra on October 22, 2021

Log management software provides the primary diagnostic data in your applications’ development, deployment, and maintenance. However, choosing which log sources to monitor can be a daunting task. The primary concern with monitoring all sources is the high price tag that many SIEM tools on the market charge based on the number of users and sources ingesting logs. At observIQ, we offer unlimited users and sources. You only pay for what you ingest and how long you retain it; if your retention needs are minimal, you can use observIQ for free. Ingesting logs from some sources, such as firewalls, IDS, active directories, and endpoint tools, is pretty straightforward. But some sources critical to your incident response plan can have complex configurations, which may deter you from implementing log management for those sources. With advanced log management tools such as observIQ, most sources come preconfigured to work with the log agent, making adding sources simple. As a best practice in logging, businesses evaluate and implement logging for sources that:

  • Are required for actionable solutions such as monitoring, troubleshooting, etc. The suggested best practice is to archive non-actionable logs if they are needed for compliance. This evaluation can tone down the noise in your log ingestion and make the log management process more need-based.
  • Maximize the return on investment through increased visibility into application events
  • Address existing compliance needs
  • Cover event logs from areas of the network/infrastructure that are prone to hacks and malicious attacks
  • Eliminate blind spots in the network

On the web, you will find generic information about log sources and what you can do with data ingested into any SIEM tool. But there is a critical step that every business or individual evaluating a log management tool must take: take stock of all your network and application components. In-house or custom applications developed recently often create logs in a standard format such as JavaScript Object Notation (JSON) or Syslog, but not all logs are saved in a standard format. Logs from servers and firewalls are easily ingested and parsed seamlessly. However, when working with DNS and other physical security platforms, log management is a challenge. Not logging DNS or security platforms can undermine security and block insights into key network components. Many businesses skip challenging logging components, fearing the high human effort involved in developing the plug-ins necessary to ingest from a multitude of sources. Multiple surveys of organizations of all sizes estimate that businesses ingest logs from less than half of the sources in their network. Be aware of all aspects of your application and infrastructure that you need to log from.

A handy chart of what to log in log management tools

1. DNS

DNS logging provides highly detailed information about DNS data sent and received by the DNS server. DNS attacks include DNS hijacking, DNS tunneling, Denial-of-Service (DoS) attacks, command and control, and cache poisoning. DNS logs help identify information related to these attacks so that the source can be traced. These include detailed data on records requested, client IP, request flags, zone transfers, query logs, rate timing, and DNS signing.


DNS logs are a wealth of data on user site access, malicious site requests, DNS attacks, Denial-of-Service (DoS) events, and more. Based on DNS logs, businesses can make critical network security decisions. DNS is the basic protocol that allows applications and web browsers to operate using domain names. Although DNS was not initially intended for general-purpose tunneling, many utilities have been developed to tunnel over DNS. Because the data transfer capabilities of DNS are unintentional, DNS is often an ignored space in log ingestion. If the tunneling capabilities of DNS are not monitored, they can pose a severe risk to the network. The two core components that need monitoring in DNS are payload and traffic analysis.

Logging all components of DNS can become very noisy and make analysis impossible. DNS log data is often voluminous and in a multi-line format. Listed below are a few common scenarios where ingesting DNS logs is mandatory and helpful:

  • When DNS queries are made over TCP instead of UDP
  • When there are requests from an internal RFC 1918 IP address that are not associated with the business’ domain
  • When a zone transfer occurs to an unauthorized or unknown DNS server
  • When a record request mentions an unconventional file type and has many meaningless characters embedded in it
  • When there are requests to non-standard ports at hours outside standard usage times
  • When there are country domain extensions that are uncommon for the business’ network, such as .ru (Russia) or .cn (China); a very common trigger is when the business does not operate in those countries
  • When there are increased lookup requests and failures
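
Several of the scenarios above can be checked mechanically once DNS logs are parsed. Here’s a minimal, illustrative sketch in Python; the record fields and thresholds are hypothetical and not tied to any particular DNS server’s log format:

```python
# Hypothetical parsed DNS query records. Field names ("protocol", "qname")
# are illustrative; real DNS server logs vary by product.
SUSPICIOUS_TLDS = (".ru", ".cn")   # uncommon for this example business
MAX_LABEL_LEN = 40                 # very long labels can indicate DNS tunneling

def flag_query(record):
    """Return the list of anomaly rules a single DNS query trips."""
    reasons = []
    if record.get("protocol") == "TCP":
        reasons.append("query over TCP instead of UDP")
    name = record.get("qname", "")
    if name.endswith(SUSPICIOUS_TLDS):
        reasons.append("uncommon country-code TLD")
    if any(len(label) > MAX_LABEL_LEN for label in name.split(".")):
        reasons.append("unusually long label (possible tunneling)")
    return reasons

print(flag_query({"protocol": "TCP", "qname": "updates.example.ru"}))
```

A real log management pipeline would run rules like these continuously over ingested DNS events and raise alerts, rather than inspecting one record at a time.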


2. Packet Capture Logs

Packet capture (PCAP) is an API and file format used to record packet data for network traffic across the OSI layers. It is important to note that PCAP does not log by itself; instead, a network analyzer collects and records the packet data. The events logged in a .pcap file include:

  • Malware detections
  • Bandwidth usage
  • DHCP server issues
  • DNS events

During network threat analysis, packet file data helps detect network intrusions and other suspicious packet transfers. Packet data also helps drill down to the root cause of a network issue. For example, if you see a response failure from an application call, studying the packet rate and packet contents can reveal whether the issue lies with the request or the response. PCAP logs are in a simple format, making ingestion and parsing simple for most log agents. 
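Because the classic .pcap format is well documented, even a few lines of code can validate a capture file and read its global header. This sketch parses only the 24-byte header (byte order, snapshot length, and link type), not the packets themselves:

```python
import struct

def read_pcap_header(data: bytes):
    """Parse the 24-byte global header of a classic .pcap file.

    Returns (snaplen, linktype). The 4-byte magic number tells us
    which byte order the capturing machine used."""
    if len(data) < 24:
        raise ValueError("not a pcap file: header too short")
    magic = data[:4]
    if magic == b"\xd4\xc3\xb2\xa1":    # written on a little-endian machine
        endian = "<"
    elif magic == b"\xa1\xb2\xc3\xd4":  # written on a big-endian machine
        endian = ">"
    else:
        raise ValueError("not a pcap file: bad magic number")
    # version_major, version_minor, thiszone, sigfigs, snaplen, network
    _, _, _, _, snaplen, network = struct.unpack(endian + "HHiIII", data[4:24])
    return snaplen, network
```

A link type of 1 means Ethernet; log agents use the same header to decide how to decode the packet records that follow.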

3. Cloud Platform Logs

The majority of businesses host their applications on cloud platforms such as Google Cloud Platform (GCP), Amazon Web Services (AWS), etc., so it is essential that your log management software ingest logs from these platforms. Many businesses shy away from ingesting cloud platform logs owing to discrepancies between the log formats and the parsing agent. Companies like observIQ offer ready-to-use plug-ins for these platforms, making log ingestion possible in a matter of minutes. Our log agent, Stanza, can efficiently manage the volume of events cloud platforms generate, an area where most log parsers fail. However, we recommend limiting ingestion to actionable events only, to keep the noise from cloud platform events at a manageable level. Some of the most critical cloud platform events to consider monitoring are:

  • Changes in user permissions and roles
  • Any changes made to the instances, such as creation/deletion in the cloud infrastructure
  • Requests to buckets containing sensitive data by users accessing them remotely 
  • Unauthorized account or file access
  • Communication to external sources 
  • Alerts for the transformation of personally identifiable information
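One minimal way to cut cloud platform logs down to actionable events is an allow-list filter applied at ingestion. The event names and the `methodName` field below are hypothetical stand-ins; real audit log schemas differ between GCP, AWS, and other providers:

```python
import json

# Hypothetical set of actionable audit events: permission changes,
# instance creation/deletion, role updates.
ACTIONABLE = {"SetIamPolicy", "DeleteInstance", "CreateInstance", "UpdateRole"}

def actionable_events(raw_lines):
    """Yield only the audit events worth alerting on, dropping the noise."""
    for line in raw_lines:
        event = json.loads(line)
        if event.get("methodName") in ACTIONABLE:
            yield event

logs = [
    '{"methodName": "ListInstances", "principal": "dev@example.com"}',
    '{"methodName": "SetIamPolicy", "principal": "admin@example.com"}',
]
alerts = list(actionable_events(logs))  # only the SetIamPolicy event survives
```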


4. Windows Event Logs

The event log from Windows is critical for securing the network, troubleshooting issues, and retaining events for compliance purposes. Windows logs events such as network connections, errors, network traffic, system behavior, and unauthorized access. A Windows system can produce a large volume of log data on a daily basis; while managing such volume is difficult, log management software makes handling Windows event logs easier. In a Windows environment, three primary categories of event logs are tracked: system logs, application logs, and security logs. System Monitor (Sysmon), which is a driver, also writes to the Windows event log; it monitors and logs events such as file creation and modification, driver installations and deletions, process creation, raw disk access, etc. Centralizing Windows events onto a Windows server from which the SIEM or log agent can read them is the recommended practice. The log management software you choose should aggregate all Windows logs into a common format, provide an alerting mechanism for network anomalies, and offer visualization capabilities. observIQ offers simplified logging for Windows events; check our user documentation for a simple plug-in configuration for Windows logs.
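Windows events exported as XML (for example via `wevtutil`) can be picked apart with standard tooling. A minimal sketch, assuming the standard Windows event XML namespace; the sample event is illustrative (4625 is the failed-logon event ID):

```python
import xml.etree.ElementTree as ET

# Standard namespace used by exported Windows events
NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

def parse_event(xml_text):
    """Extract EventID, Provider, and Level from one exported Windows event."""
    root = ET.fromstring(xml_text)
    system = root.find("e:System", NS)
    return {
        "event_id": system.findtext("e:EventID", namespaces=NS),
        "provider": system.find("e:Provider", NS).get("Name"),
        "level": system.findtext("e:Level", namespaces=NS),
    }

sample = """<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-Security-Auditing"/>
    <EventID>4625</EventID>
    <Level>0</Level>
  </System>
</Event>"""
```

In practice a log agent does this normalization for you; the point is that the System block carries the fields you aggregate and alert on.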


5. Database Logs

Companies have apprehensions about logging database events because they worry that logging could adversely affect the server’s performance. Capturing events from databases and tables can be challenging given the large number of databases most applications use, and if databases were created by third-party service providers, access restrictions can make logging events in them difficult. To gain visibility into databases, a good Database Activity Monitoring (DAM) system helps. Since a DAM works like a firewall in its restrictive functionality, logging database events via the DAM makes for a more compliance-oriented monitoring practice. Another approach is to use a stored procedure that captures specific events and logs accurate information about each event along with the record ID. Stored procedures can be used to track administrator access, malicious and unauthorized access attempts, authorization failures, start and end events for servers, all schema-related operations, requests to modify a large set of records in the database, etc. 
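To illustrate that audit pattern without a full database server, the sketch below uses SQLite, which has triggers rather than stored procedures; the trigger stands in for the stored-procedure approach and records each modification with its record ID. The table names and schema are hypothetical:

```python
import sqlite3

# In-memory database with an audit trail: every UPDATE on accounts is
# logged to audit_log along with the affected record's ID.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER);
CREATE TABLE audit_log (record_id INTEGER, action TEXT,
                        logged_at TEXT DEFAULT CURRENT_TIMESTAMP);
CREATE TRIGGER log_update AFTER UPDATE ON accounts
BEGIN
  INSERT INTO audit_log (record_id, action) VALUES (OLD.id, 'update');
END;
""")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.execute("UPDATE accounts SET balance = 50 WHERE id = 1")
rows = conn.execute("SELECT record_id, action FROM audit_log").fetchall()
```

Because the trigger runs inside the database, the audit rows can then be shipped by the log agent like any other source, without instrumenting the application.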

6. Linux Logs

Log all Linux events to get a timeline of system, application, and operating-system-related events. Errors and issues on the desktop are logged in different locations, and where the logs are written is configurable in most cases. All Linux logs are written in plain text format, and /var/log is the directory to which they are saved. In Linux, nearly every event can be logged, anything from package manager events to Apache servers. All logs except authorization-related logs go through syslog into the /var/log directory. The most critical Linux logs are application logs, event logs, service logs, and system logs.
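Most files under /var/log follow the classic syslog line layout, which a short regular expression can split into fields. A minimal sketch, assuming RFC 3164-style lines; the sample line is made up for illustration:

```python
import re

# RFC 3164-style line as typically seen in /var/log/syslog:
# "Nov 23 10:14:02 web01 sshd[2214]: Failed password for root ..."
SYSLOG_RE = re.compile(
    r"^(?P<timestamp>\w{3} +\d+ \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) "
    r"(?P<process>[\w\-/]+)(?:\[(?P<pid>\d+)\])?: "
    r"(?P<message>.*)$"
)

def parse_syslog(line):
    """Split one syslog line into named fields, or return None if it doesn't match."""
    m = SYSLOG_RE.match(line)
    return m.groupdict() if m else None

entry = parse_syslog("Nov 23 10:14:02 web01 sshd[2214]: Failed password for root from 10.0.0.9")
```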


7. Infrastructure Device Logs

Infrastructure devices are the lifeline of the information transport architecture. Monitoring the routers and switches in a network gives insight into the health and functioning of the network, and enriching the log data from infrastructure devices gives increased visibility into machine and user interaction. Infrastructure logs are also vital components of a compliance package. Infrastructure logs from highly distributed environments do not have a straightforward alerting mechanism in place; when networks fail, only through detailed log analysis can DevOps triage and fix the issue. Sophisticated monitoring tools such as observIQ have alerting mechanisms based on thresholds and metric indicators. Logs from different infrastructure devices are output in various formats; in most cases they are unstructured data best suited for batch processing through a tool rather than manual analysis. Most application developers are interested in logging configuration changes to infrastructure devices: knowing where a configuration change originated helps address any issues that arise from misconfigurations. 
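Threshold-based alerting of the kind mentioned above can be sketched in a few lines: count error-severity entries per device and flag any device that crosses a limit. The line format and the threshold value here are hypothetical:

```python
from collections import Counter

THRESHOLD = 3  # hypothetical: alert after 3 or more error-level events per device

def devices_to_alert(lines):
    """Return device names whose error/critical count meets the threshold."""
    errors = Counter()
    for line in lines:
        device, severity, _msg = line.split(" ", 2)  # "<device> <severity> <message>"
        if severity in ("ERROR", "CRITICAL"):
            errors[device] += 1
    return sorted(d for d, n in errors.items() if n >= THRESHOLD)

lines = [
    "router1 ERROR link down",
    "router1 ERROR link down",
    "router1 CRITICAL bgp flap",
    "switch2 INFO port up",
]
alerts = devices_to_alert(lines)  # router1 crosses the threshold, switch2 does not
```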


8. Containerized Application Logs

Although containerized applications are a new and emerging form of application management, they are becoming increasingly business-critical, and this happens to be the most in-vogue log ingestion source for businesses of all sizes. To effectively log a containerized application, it is necessary to collate the logs from the application, the host OS, and Docker. We have written extensively about logging Kubernetes applications in this space, and we will continue to document more use cases in the future. 


9. Web Server Logs

Logging web server events can be tough, but web server logs help businesses understand the end user’s interaction with the application. While in the past businesses ignored the need to log web servers, with the increasing need to study user behavior, logs from web servers such as Apache and NGINX are viewed with new interest. Web server logs contain useful information such as:

  • User visits and application access logs
  • User logins and the duration for which each user stays logged in
  • Page view counts
  • User information such as the browser used to access the application and the OS version
  • Bot access to the application
  • HTTP requests
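Much of the information above comes straight out of the access log. A sketch that parses one line in the Apache/NGINX common log format; the sample hit is fabricated for illustration:

```python
import re

# Apache/NGINX "common log format":
# <ip> <identd> <user> [<time>] "<method> <path> <proto>" <status> <size>
CLF_RE = re.compile(
    r'^(?P<ip>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<proto>[^"]+)" '
    r'(?P<status>\d{3}) (?P<size>\d+|-)$'
)

def parse_access_line(line):
    """Split one access-log line into named fields, or return None on mismatch."""
    m = CLF_RE.match(line)
    return m.groupdict() if m else None

hit = parse_access_line(
    '10.0.0.7 - alice [23/Nov/2021:10:15:32 +0000] "GET /login HTTP/1.1" 200 512'
)
```

Page view counts, bot detection, and browser breakdowns are then just aggregations over these parsed fields (the combined log format adds the User-Agent string for the latter).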


10. Security Device Logs

As more businesses adopt a cloud-based approach to application delivery, the devices at the edge of customer interaction are becoming extremely valuable. Security devices such as firewalls are experiencing a large spike in traffic loads with this shift to cloud infrastructure. Logs from security devices give details about network security and user activity; skipping them is like leaving the final piece out of a puzzle. 


We outlined a generic set of log sources in this post. But the complexity of logging from a myriad of sources is a problem for most of our users, and observIQ is working to fix that. We have over 60 integrations with log sources at the time of this post, and the list is growing rapidly. We build these integrations based on popular requests, so feel free to pitch your log source request to our customer support team. In the next post, join us in configuring these log sources in your observIQ account, which you can set up for free!


