
Unexpected Parallels Between Yoga and Observability

by Deepa Ramachandra on September 22, 2021

Parallels between Yoga and Observability

Yoga is to ideal human health what observability is to an application’s ideal functioning. It is well established that observability is a critical factor in the successful implementation and maintenance of cloud-native, serverless, cloud-agnostic, and microservices-based applications. Mature observability helps DevOps and development teams cross the boundaries of complex systems and gain complete visibility into their functioning. The definition of observability is derived from control theory, which states: “A system is said to be observable if, for any possible evolution of state and control vectors, the current state can be estimated using only the information from outputs (physically, this generally corresponds to information obtained by sensors). In other words, one can determine the behavior of the entire system from the system’s outputs. On the other hand, if the system is not observable, there are state trajectories that are not distinguishable by only measuring the outputs.” (Wikipedia)

A woman in a yogic pose beside a graphic representing observability

There are many parallels between a yogic practice and observability. We think it’s a fun and interesting way to define the basic concepts of observability. 

 

1. Observability is preventive 

The most basic rule of thumb in yoga is to train the mind, through regulated practice, to ward off disruptions in any of the three doshas: Vata, Pitta, and Kapha. In observability, this translates to aligning the three pillars of observability (logs, metrics, and traces) to smooth out the application’s functions. As in yoga, the practice of observability is preventive.

2. Observability is reactive

Yogic stances are reactive to the body’s transitions, flows, and capabilities. With each pose, the mind reacts to the flow of the body, and the body adapts to cues from its surroundings. This reactive capability in yoga is called vinyasa. In observability, there needs to be a synchronized flow between development, DevOps, and the observability platform to react to events in the application. Most organizations that operate in a service-based implementation model have preset SLAs for their mean time to resolution (MTTR). Adhering to those SLAs without a logging solution is, in most cases, impossible.

3. Observability is scalable

An individual can practice yoga at any level, starting at the most basic level, called Karma yoga, and working their way up to Kriya yoga, which is more of an endurance training for the mind and body. So is observability. An observability implementation has no limit to its scalability. Organizations can start with a simple log management solution, which is the core of observability, and build their way up to a more sophisticated observability strategy.

4. Observability is not restrictive

Yoga does not require expensive equipment or gear to practice. The simplicity of yoga makes it adaptable across the globe. People from all walks of life are able to embrace yoga in their own way. Similarly, observability solutions such as observIQ recognize that users of all levels access their tools, and offer free plans. There are also many open-source observability solutions, such as OpenTelemetry, that businesses can implement with little to no monetary investment.

5. Observability is derivative

As a yogi continues to practice, they learn more about the strengths and weaknesses of their body and mind. A stronger yogic practice called Gnana yoga allows the individual to build the intellect to overcome their weaknesses. This derivative nature parallels observability, which gives users the ability to study, analyze, and formulate solutions to problems based on the data.

6. Observability’s approach is holistic 

According to yoga, every body has five layers, called koshas: annamaya, manomaya, pranamaya, vijnanamaya, and anandamaya. Observing each of these layers closely and practicing the relevant yogic stances for each is the best approach to embracing yoga. Similarly, in observability, it is necessary to manage events from the five phases of the SDLC: planning, analysis, design, implementation, and maintenance.

7. Observability is collaborative

Modern yogic stances have evolved to be collaborative between two or more individuals, who elevate each other’s strengths to gain physical and mental benefits. So is observability. Most observability platforms today are highly collaborative, allowing users to work together on data analysis, error handling, and debugging.

8. Observability is transitional

The yoga practiced today has changed drastically from the initial practice 5,000 years ago. Modern versions of yoga factor in the current-day need to be quicker, result-driven, and less strenuous. So are observability platforms in the cloud era. Observability has been around since we started using computing systems, but with the advent of cloud-native, serverless, and microservices-driven application development, it has taken center stage. Tools catering to modern-day needs offer new-age conveniences such as quick implementations, pre-configured plugins, and more. The implementation-to-use timeline is a matter of minutes.

9. Observability is integrative

New-age yogis are experimenting with combining yogic postures with other physical training disciplines such as acrobatics and Pilates. This approach brings in people who have mastered those disciplines to make yogic postures more efficient. It is similar to observability because, in the observability space, there is broad agreement that there is never a one-size-fits-all solution. OpenTelemetry was the result of this thought process. With an integrative approach like OpenTelemetry, organizations can pick and choose the most appropriate solution for themselves without compromising on quality.

10. Observability is simple with the right solution

Yoga, as we discussed earlier, is practiced in many forms. There is a growing consensus that there is no single perfect way to do a specific yogic pose. The days when poses were strictly dictated by rhythmic movements of the body and breathing techniques are long gone; now yoga is simplified for everyone to do. So is observability. Companies like ours offer a solution for small to medium-sized businesses, individuals, contractors, or even a user who simply wants a smart-home logging solution. Such solutions do not require endless terminal queries or exhaustive installation steps; they offer a quick start that gets everyone up and running.

If this post makes you curious about observability or our product, reach out to our awesome customer support team. Until we get back with our next installment of interesting new features and product insights, stay observant with observIQ. 

observIQ Cloud and the OpenTelemetry Collector

by Deepa Ramachandra on September 17, 2021

An upgrade to observIQ’s log agent – incorporating OpenTelemetry

Our log agent is powerful, efficient, and highly adaptable. Now, with OpenTelemetry setting new standards in the observability space, we wanted to incorporate that collaboration into our log agent and offer our users the ability to take advantage of the OpenTelemetry ecosystem. Starting today, you can upgrade the log agents in your observIQ account to the new OpenTelemetry-based observIQ log agent with a single click.

OpenTelemetry’s logging USP adheres to the “textbook” definition of a log. By aligning our agent with the OpenTelemetry collector, we aim to attain textbook perfection for our log management capabilities. To understand why this is a game-changer, let’s dive into the basic architecture of the OpenTelemetry log collector.

Log Collector’s Architecture

The OpenTelemetry collector is designed to support logs from legacy systems, logging libraries, and cloud-native applications. The problem it addresses is the patchwork of incohesive logging solutions and libraries that offer incomplete correlation between the aspects of observability data, namely metrics, logs, and traces. By standardizing how observability data is parsed, ingested, distributed, and consumed, OpenTelemetry has made telemetry data highly relevant and informative.

OpenTelemetry has standardized data models for all of the logs, metrics, and traces that it parses. Once parsed, the OpenTelemetry collector enriches the data further to create more correlation between the signals. The most notable factor here is that the enrichment across logs, traces, and metrics uses the same attribute names and values, maintaining uniformity across all observability data. OpenTelemetry’s log collection follows a defined log data model that dictates the information that should or should not be recorded in the log data. This log data model was created with the intent of having log data transmitted, saved, and analyzed in a standardized manner. It is expected that existing log libraries will align themselves with this data model in future versions.

The components of the collector’s architecture are receivers, processors, and exporters. Log data flows through a pipeline defined in the collector configuration, just as traces and metrics do. The receiver is essentially the entry point for the log data, where the data is collected, assembled, and forwarded to the processor, which enriches and correlates the data. Once enriched, the data is transmitted to the destination paths or applications via the exporters.

An illustration of OpenTelemetry's collector architecture
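To make that receiver-to-processor-to-exporter flow concrete, here is a minimal configuration sketch. It assumes commonly available components (the filelog receiver, batch processor, and logging exporter); the log path is a placeholder, and your agent’s generated configuration may differ.

```yaml
# Minimal collector configuration sketch: one logs pipeline wired from a
# receiver through a processor to an exporter. Paths are placeholders.
receivers:
  filelog:
    include:
      - /var/log/myapp/*.log      # hypothetical application log files

processors:
  batch:                          # groups records into batches before export

exporters:
  logging:                        # writes received telemetry to the collector's own output

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [logging]
```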

How does correlation work?

The primary aspect of correlation, according to OpenTelemetry standards, is time-based correlation, where logs, traces, and metrics are mapped to each other based on the time or time period of execution. The next level of correlation is based on the execution context: logs and traces in the same execution context are associated with each other using a TraceID and SpanID. Another significant correlation factor is the resource name included in the traces and metrics data.
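As a rough illustration of how those correlation handles appear on a single record, here is a log entry rendered with field names from the OpenTelemetry log data model; the values themselves are hypothetical.

```yaml
# One log record in the OpenTelemetry log data model (illustrative values).
Timestamp: "2021-09-17T10:32:01.337Z"        # enables time-based correlation
TraceId: "4bf92f3577b34da6a3ce929d0e0e4736"  # shared with the related trace
SpanId: "00f067aa0ba902b7"                   # ties the record to one span in that trace
SeverityText: "ERROR"
SeverityNumber: 17
Body: "failed to connect to the payment service"
Resource:
  service.name: "checkout"                   # the same resource attributes appear on metrics and traces
Attributes:
  http.status_code: 503
```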

What difference does this upgrade make to your log management?

  • It leverages the benefits of the evolving OpenTelemetry log collector’s components.
  • Restarting the agent is now an isolated event that does not require restarting the application or the other components of your infrastructure.
  • Log levels are more refined and standardized, mirroring OpenTelemetry’s severity text values such as trace, debug, info, warn, error, and fatal.
  • observIQ’s log agent now adopts new log rotation and checkpoint-resuming capabilities using OpenTelemetry’s filelog receiver, as sketched below.
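The rotation and checkpoint behavior in the last bullet comes from the filelog receiver paired with a storage extension. The sketch below assumes the file_storage extension and placeholder paths; the agent’s actual generated configuration may look different.

```yaml
# Sketch: tail rotating log files and persist read checkpoints so the
# agent can resume where it left off after a restart.
extensions:
  file_storage:
    directory: /var/lib/log-agent/checkpoints   # hypothetical checkpoint directory

receivers:
  filelog:
    include:
      - /var/log/myapp/app*.log   # rotated files matched by the glob
    start_at: beginning
    storage: file_storage         # resume from saved offsets instead of re-reading

exporters:
  logging:

service:
  extensions: [file_storage]
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [logging]
```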

Steps to upgrade to the log agent

Technically the upgrade to the new log agent works at the click of a button. This is in line with our mantra – keep everything simple. 

a gif of the simple log installation and upgrade on observIQ

The gif above is unedited, to show how simple the upgrade is. It takes under 15 seconds to navigate, initiate, and complete the upgrade. Try this out in your observIQ account or sign up for an account.

 

The upgraded log agent, combined with the vast set of source plugins that observIQ offers, makes your log management a breeze. As always, we are around to assist you with any log management questions or requests you may have. Please reach out to our awesome support team. Stay observant with observIQ.

 

Logging GitLab Runners for macOS and Linux

by Deepa Ramachandra on September 14, 2021

GitLab is the DevOps lifecycle tool of choice for many application developers. It was developed to offer continuous integration and deployment pipeline features on an open-source licensing model.

GitLab Runner is an open-source application that integrates with the GitLab CI/CD pipeline to automate running jobs in the pipeline. It is written in Go, making it platform agnostic. It can be installed on any supported operating system, in a locally hosted application environment, or within a container. A GitLab Runner executes the jobs outlined in the .gitlab-ci.yml file and sends the results back to GitLab.

GitLab build runner common workflow

The .gitlab-ci.yml file lives in the root directory of the GitLab repository. The most common stages in a GitLab workflow are outlined below, followed by a sketch of a matching .gitlab-ci.yml file:

  • Pre-build – the operations performed before the application build commences when there are new code changes/commits. As a precursor to this set of operations, all libraries and tools required to create the build are installed.
  • Build – the executable operations that create a new build of the application with the new changes/commits.
  • Test – a set of tests to ensure that the new build of the application is functioning as expected.
  • Deploy – a set of executable operations that follow a successful test run to move the new build to the designated production or other environments.
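As referenced above, here is a sketch of a .gitlab-ci.yml that maps onto those stages. The job names, image, and script commands are hypothetical; substitute your own build and deployment commands.

```yaml
# Hypothetical .gitlab-ci.yml covering the build, test, and deploy stages.
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  image: node:16                  # assumed build image
  script:
    - npm ci                      # pre-build: install the libraries and tools the build needs
    - npm run build

test-job:
  stage: test
  image: node:16
  script:
    - npm test                    # verify the new build behaves as expected

deploy-job:
  stage: deploy
  script:
    - ./deploy.sh production      # placeholder deployment script
  only:
    - main                        # deploy only after changes land on the main branch
```

A registered runner picks up these jobs, executes the scripts, and streams each job log back to GitLab.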

Advantages of using Gitlab build runners

Before we dive into logging build runners, it is worth understanding the reasons for implementing a GitLab build runner:

  • To streamline the otherwise complicated application development and deployment process with a declarative approach. Instead of one large deployment, deployment is handled as a trimmed-down, repeatable, and automated series of tasks that create interdependent builds automatically, making every software build automated and saving developers the time and effort spent running and testing builds.
  • Easily validating the code commits and changes. Both development and DevOps teams have full control over build management irrespective of the application model. 
  • A more systematic process for validating code changes ahead of merge via automated tests. The test run could be anything from a functional test to a more enhanced application performance test. This ensures that the application is continuously tested to meet functional and business requirement standards and to easily identify all possible application defects and security-related vulnerabilities.
  • GitLab Runner is built in Go, making it a multi-platform CI solution that works well across operating systems such as Windows, macOS, and Linux. It is also language agnostic, offering support for most commonly used development languages such as Java, Ruby, PHP, etc.
  • The declarative process of the GitLab Runner enables developers to build security into each automated process, ensuring access control and management across the CI pipeline. User access control is established at the job level and branch level, giving way to establishing compliance practices in the automated build runs. It is also easier to ensure code compliance with set standards via built-in automated code quality checks.
  • GitLab build runners offer a strong integrative approach to both upstream and downstream processes. This is advantageous for developers who want to strike a balance between cloud-native solutions and integrations. Processes such as source code control, project management, versioning, artifact repository management, compliance adherence, etc. are built into the CI pipeline.
  • The most critical benefit of all is the increased observability into CI pipeline execution. As builds run, GitLab logs all events to the designated location. Build errors, code quality issues, test failures, and other application metrics can be identified at a more granular level.

GitLab build runners for macOS and Linux

GitLab released beta versions of custom build cloud runners for the macOS and Linux operating systems. The build cloud runner for Linux is aimed at managing Linux-based applications in the GCP environment, while the macOS version targets build cloud management in an Apple device-based ecosystem.

The build cloud runners are architecturally different from the other GitLab shared runners. GitLab’s detailed posts offer more information about the architecture and performance enhancements of the build cloud runners.

Logging build cloud runners

GitLab logs the outcome of every job as it is processed. By default, GitLab sends the job logs from the GitLab Runner in chunks. For efficiency, GitLab recommends incremental logging, in which Redis is used as a temporary cache for job logs. Designated object storage in the log archiving process ensures that the logs are rotated as the CI pipeline executes new builds. GitLab’s common logging workflow is:

  • During the patching phase, GitLab logs every job execution status onto the file storage in the stored path.
  • During the overwriting phase, GitLab logs the status of all completed jobs onto the file storage in the stored path.
  • During the archiving phase, GitLab moves all logs of a completed build to the artifacts folder in the stored path.
  • During the uploading phase, GitLab moves all logs from the artifacts folders to the configured object storage in the selected path.

When using incremental logging, the first two steps of the logging workflow change. GitLab uses Redis to store chunks of logs, and a preset persistent store to save the logs instead of the file storage. In this case, Redis is used as a first-class storage option that retains the log data until it reaches a set limit. Once that threshold is reached, the chunk of the log is transmitted to the persistent store; from there, it is moved to object storage at the end of the stipulated build run time.

observIQ solution

While the logging options offered by GitLab build runners are elaborate, it is most productive and useful to have the logs ingested into a log management tool with an effective log analysis option. observIQ offers that solution. With observIQ, you can have your GitLab build runner logs ingested seamlessly, parsed, indexed, and aligned graphically for extensive analysis; a rough sketch of this kind of collection is shown below. To check out our solution, sign up for a free account and try it out. If you have any comments or suggestions, send us feedback. We love making plug-in requests possible.
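As a hedged sketch of that kind of collection (not observIQ’s actual plugin configuration), an OpenTelemetry-style agent could tail the runner’s log files and forward them to an ingest endpoint. The log path and endpoint below are assumptions.

```yaml
# Sketch: tail GitLab Runner logs and ship them to a log management backend.
receivers:
  filelog:
    include:
      - /var/log/gitlab-runner/*.log   # hypothetical runner log location

exporters:
  otlp:
    endpoint: ingest.example.com:4317  # placeholder ingest endpoint
    tls:
      insecure: false                  # keep TLS on for a hosted backend

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [otlp]
```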

observIQ Releases First PnP Solution for monitoring arm-based Kubernetes

by Paul S on September 10, 2021

observIQ Cloud – the first plug and play observability solution for arm-based Kubernetes

 

The Emergence of Arm-Based Kubernetes

 

Arm-based Kubernetes clusters have been in use for a while, albeit mostly for niche uses, by enthusiasts and DIY hobbyists. But that is changing. Arm architecture offers an efficiency and scalability that other architectures do not, and that makes it appealing to businesses. There are a number of hardware, firmware, software, and optimization hurdles to overcome before arm processors can compete with the leading architectures for cloud environments, but it’s already taking off for IoT, and more companies, such as Apple, are betting on arm as the future of computing. One big advantage, which arises from the architecture’s overall simplicity, is that it makes it much easier to construct and scale multiprocessor systems, and it’s usually possible to add new hardware to a system without ditching the old. Processes can be redistributed so that more powerful hardware handles more demanding tasks, boosting overall performance.

Investment on the hardware side will be what ultimately drives arm from a niche architecture into the mainstream, but a lot needs to be done throughout our global computing infrastructure to facilitate that additional technology. Software needs to evolve to maximize the value of the hardware. That’s going to take time and effort, but when it comes to observability, observIQ already has you covered.

k8s dashboard in observIQ Cloud

The Software Lifecycle

 

There is a pervasive inverse relationship between the versatility of software and ease of use. Generally, the more versatile something is, the harder it is to understand, implement, and maintain. Many companies approach this issue from two angles simultaneously. First, they construct add-ons and tools that make their versatile software more streamlined for common, basic use cases. The prime example of this is operating systems. Operating systems are the foundation of our computing world. They are endlessly versatile, yet very few people know how to use them beyond the predefined paths offered by OS publishers like Apple and Microsoft, let alone understand what is going on under the hood. Doing more requires in-depth knowledge and tedious effort. Second, they segment or partition their software into packages that suit different needs. It’s mostly a strategy for generating extra revenue by attracting customers who don’t need everything they have to offer, but will pay a little bit less for access to what they can use. This leads to competition – companies emerge with simple tools that more efficiently address the segmented use cases of the big, versatile software. New companies develop broader applications to push their growth, and the cycle continues. 

Kubernetes is a very versatile framework. There are plenty of tools that streamline common use cases, but arm-based Kubernetes is still in its emergent phase. It’s more common as an IoT framework, but only because enthusiasts at home are driving that side of the market. It has tremendous potential for cloud services. It’s more energy efficient. It scales beautifully. It’s versatile. It’s just not widely used yet.

 

Monitoring Kubernetes ARM with observIQ Cloud 

 

One of the biggest barriers keeping Kubernetes on arm from mainstream success is the lack of arm-optimized software, and beyond that, the lack of tools needed to build out arm-optimized software. Visibility is a massive piece of the puzzle. Whether you’re monitoring your home network or building out a massive cloud infrastructure, there are hardly any observability tools at your disposal for arm-based computing. Sure, there are some open-source options and personal projects shared publicly on GitHub, but nothing enterprise-level. Nothing plug and play. Until now.

observIQ just released custom integrations that support monitoring arm-based infrastructures. If you’re familiar with us, you already know that all of our agents only take seconds to configure and install, and prefilled templates with the appropriate paths and kubeconfig make it simple, even for a newbie. You can monitor all of your Kubernetes clusters on one account, and add as many users as you want for free. If you’re monitoring a home-network that ships less than 3GB of logs per day, you’ll never pay a penny. Try it out.

OpenTelemetry – Defining Observability Industry Standards

by Paul S on September 7, 2021

What is OpenTelemetry?

 

Plenty of blogs have answered the very Google-able question, “What is OpenTelemetry?” To keep it short and sweet, OpenTelemetry is a collaborative effort across the observability space to create industry-wide standards that will benefit all cloud service providers and observability customers. 

Technically speaking, OpenTelemetry is a collection of APIs, SDKs, exporters, and collectors. Many vendors are contributing to the project, and it’s ultimately up to them how much of OpenTelemetry’s tech they incorporate into their cloud service platforms. The goal is simple: streamline the aspects of observability technology that can be so that innovation can accelerate everything else. So far, the project is on track to do exactly that. 

5 OpenTelemetry Contributors

 

What is the Value of OpenTelemetry?

 

“Observability” is a broadly defined industry, and that broad definition has led to the emergence of an absolute zoo of service providers. Some providers are certainly better than others in measurable ways, but for the most part it’s up to the customer to determine what collection of features, at what price, best suits their needs. A few choices are good for competition and good for customers, but too many choices are a burden, especially in a highly technical space where many customers are not interested in spending too much time learning about the technology – they have a need that can be met without a deep understanding of the technology, but then they have to learn enough to pick a solution that’s right for them anyway. OpenTelemetry is, in some ways, a consolidation of industry standards, which will make it easier for providers to target specific niches and for customers to understand in which niche their needs lie.

The density of providers in observability also leads to inefficiencies for the entire industry. Many observability developers are happy to release the improvements they’ve made to the technology as open source, but without clear instrumentation standards, implementing the new technologies is often as difficult as developing them in the first place. OpenTelemetry will serve as an ongoing repository for the latest and greatest technologies in the industry, and by frontloading the integration work, implementing new tech into cloud platforms will be much quicker. When all is said and done, customers will feel the benefit.

Data analytics is one of the most prominent ways of extracting value from observability. Analytics tools need to be built around or adjusted to the format of the data they are meant to examine. OpenTelemetry integrates easily with popular frameworks, such as MySQL, Kafka, and WSGI, and standardizes the way data is collected and stored. This will ultimately allow customers to collect more data, the right data, and analyze it in more valuable ways.
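As one small, hedged example of what that integration can look like in practice, the collector ecosystem includes receivers for systems like Kafka, so an existing event stream can be pulled into the same standardized pipeline; the broker, topic, and protocol version below are placeholders, and the message encoding may need to be configured to match your data.

```yaml
# Sketch: consume an existing Kafka topic and emit it as standardized log data.
receivers:
  kafka:
    brokers: ["broker-1.example.com:9092"]   # placeholder broker address
    topic: app-logs                          # placeholder topic
    protocol_version: 2.0.0

exporters:
  logging:

service:
  pipelines:
    logs:
      receivers: [kafka]
      exporters: [logging]
```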

All in all, OpenTelemetry is a collaboration that is propelling technology in observability forward in a way that could never be possible without cooperation across the industry. 

 

What is Exciting About OpenTelemetry?

 

OpenTelemetry is exciting for many reasons, and for different reasons depending on who is asking. End users ought to be the most excited about OpenTelemetry, since the most value will flow to them, though they are ironically the least aware. It’s exciting both in terms of the technology and the culture of the industry. Observability needs didn’t emerge overnight – they’ve been constantly changing and growing as tech expanded from Silicon Valley around the world from the early nineties until today, and they continue to evolve every day. Most companies arrived as needs emerged that existing companies couldn’t adapt to as quickly as a new startup. That’s typical behavior for any market, but the now $17 billion and growing industry is at an inflection point. It can either fragment into several smaller markets, which will lead to more of the same, or it can do some soul searching and figure out exactly what it needs to be to add the most value to the world. Achieving the latter, which is hard to do with so many competitors and far-reaching technologies, is the ambition of OpenTelemetry. The project marks a long-needed convergence in the observability industry. Over 300 companies have come together to contribute and share, strengthening the solutions available to customers in every niche.

That doesn’t mean the observability space is colluding its way out of fair competition – in fact, the opposite. OpenTelemetry standards are becoming so integral and critical to the space that companies are now competing on the implementation of their technology rather than the foundation of it. What that means in the long run is that giants like Splunk and new players like observIQ (both OpenTelemetry contributors) will have essentially the same toolkit to work with. All cloud service observability solutions will become broader, more scalable, easier to implement, and niche solutions will have more capacity to focus on specific problems in their niche, rather than their observability foundations.

 

What Are the Limitations and Risks of OpenTelemetry?

 

OpenTelemetry is built on a somewhat paradoxical collaboration. The vast majority of its contributors work for competing companies, which raises the question: what’s in it for them? The obvious answer is that the OpenTelemetry project is far from a zero-sum game. Every company contributing to the project, though it may be competing with the others on the cloud services side, stands to gain beneficial technology from the collaborative effort that will make its cloud services stronger. And a more unified observability infrastructure makes different solutions work better together in instances where customers have use for more than one service. The flip side is that OpenTelemetry will probably never be a full-service observability solution. There will always be hoops to jump through with implementation, and likely ongoing maintenance. Observability customers can look confidently on OpenTelemetry and trust that all of its contributors have a strong service to offer them, but implementing OpenTelemetry solutions without paying for a cloud service will present a challenge for many.

 

observIQ is a Proud Contributor of OpenTelemetry

observIQ has been working alongside the OpenTelemetry community for several years, and we are proud of our team and contributions, especially the Stanza agent for log management, which was incorporated into OpenTelemetry by the OTel team as the primary logging agent within the last year.

If you want to learn more about the observIQ cloud log management platform, bringing OpenTelemetry standards to customers with simple installation and an easy user interface, go try it out for yourself! It’s free. 
