Auto-instrumentation with SigNoz
Instrumenting applications with SigNoz using OpenTelemetry auto-instrumentation on Kubernetes.
Modern observability is no longer optional. As systems grow in complexity, manually adding tracing and metrics to every service quickly becomes unmanageable. This is where application instrumentation and OpenTelemetry auto-instrumentation step in.
In this guide, we’ll walk through how to enable application-level observability at scale using SigNoz, the OpenTelemetry Operator, and Kubernetes auto-instrumentation — with minimal code changes and maximum consistency.
What is application instrumentation?
Application instrumentation is the process of adding hooks into your application so it can emit telemetry data:
- Traces → how requests flow across services
- Metrics → latency, throughput, errors
- Logs → structured, correlated runtime events
Traditionally, this meant modifying application code, importing SDKs, and manually wiring exporters. That approach works — but it doesn’t scale well in large platforms or shared clusters.
Enter OpenTelemetry
OpenTelemetry (OTel) is an open standard and set of tools that define how telemetry is generated, processed, and exported. It provides:
- Language SDKs (Java, Go, Python, Node.js, etc.)
- A vendor-neutral data model
- The OpenTelemetry Collector for processing and exporting telemetry
SigNoz is built on top of OpenTelemetry, which means anything you emit using OTel can be visualized and analyzed directly in SigNoz.
From manual instrumentation to auto-instrumentation
The classic approach (manual)
In a manual setup, developers typically:
- Add OpenTelemetry SDKs to the application
- Configure exporters and resource attributes
- Wrap handlers, HTTP clients, database calls, etc.
This gives full control — but it also means:
- Code changes per service
- Inconsistent setups across teams
- Slow adoption at scale
Auto-instrumentation: the shift
OpenTelemetry supports auto-instrumentation, where the SDK is injected at runtime, without touching application code.
This is especially powerful in Kubernetes environments:
- Platform teams manage observability
- Developers keep focusing on business logic
- Instrumentation becomes declarative
The upstream OpenTelemetry documentation explains this concept well: 👉 https://opentelemetry.io/docs/platforms/kubernetes/operator/automatic/
As an SRE or platform engineer, this is exactly what you want:
“I want to enable tracing for hundreds of workloads using annotations — not pull requests.”
Auto-instrumentation with the OpenTelemetry Operator
What is the OpenTelemetry Operator?
The OpenTelemetry Operator is a Kubernetes Operator that manages:
- OpenTelemetry Collectors
- Auto-instrumentation lifecycle
- Configuration through Kubernetes CRDs
It provides a Kubernetes-native way to deploy and operate OpenTelemetry at scale.
Why use the Operator?
Key benefits:
- Simplified deployment of collectors
- Centralized configuration using CRDs
- Automated lifecycle management
- Auto-instrumentation for supported runtimes
Architecture overview
At a high level, the Operator works like this:
- You declare telemetry intent using Custom Resources
- The Operator reconciles desired vs actual state
- Collectors and instrumentation are injected automatically
OpenTelemetry Collector deployment modes
The Operator supports multiple collector deployment strategies:
Sidecar
A collector runs alongside each application pod. This keeps telemetry local but increases pod complexity.
DaemonSet
A collector runs on every node. Ideal for:
- Host metrics
- Container logs
- Kubelet metrics
Deployment (default)
A centralized collector deployment that’s easy to scale and manage.
StatefulSet
Useful when you need stable identities or persistent storage.
In this guide, we’ll use the DaemonSet mode, as it’s well-suited for Kubernetes infrastructure telemetry and works perfectly with SigNoz.
k8s-infra vs OpenTelemetry Operator
Before going further, it’s important to understand when to use what.
When to use SigNoz k8s-infra
- Fast, one-command setup
- SigNoz-ready dashboards and alerts
- Minimal configuration overhead
Use k8s-infra when you want a quick, production-ready Kubernetes observability setup.
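As a sketch, a typical k8s-infra install via Helm looks like the following (the `otelCollectorEndpoint` value comes from the SigNoz chart; the endpoint placeholder is yours to fill in with your SigNoz collector address):

```shell
# Add the SigNoz Helm repository and install the k8s-infra chart
helm repo add signoz https://charts.signoz.io
helm repo update

# Point the agents at your SigNoz OTLP endpoint (placeholder below)
helm install k8s-infra signoz/k8s-infra \
  --namespace signoz --create-namespace \
  --set otelCollectorEndpoint=<your-signoz-collector>:4317
```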
When to use the OpenTelemetry Operator
- You need application auto-instrumentation
- You want Kubernetes-native CRD management
- You need fine-grained control
Recommended approach
Start with k8s-infra for cluster visibility.
Add the OpenTelemetry Operator when you need auto-instrumentation for specific workloads.
Installing the OpenTelemetry Operator
Step 1: Install cert-manager
The OpenTelemetry Operator relies on admission webhooks: it uses validating and mutating webhooks to inject instrumentation into pods, and Kubernetes requires these webhooks to be served over TLS. cert-manager automatically generates and rotates the certificates for them.
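A straightforward way to install cert-manager is via its official Helm chart (the `installCRDs` flag below is the long-standing name; newer chart versions also accept `crds.enabled`):

```shell
# Add the Jetstack repo and install cert-manager together with its CRDs
helm repo add jetstack https://charts.jetstack.io
helm repo update

helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true
```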
Step 2: Install the OpenTelemetry Operator (Helm)
We’ll use Helm to simplify upgrades and lifecycle management.
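The Operator is published in the official OpenTelemetry Helm charts repository:

```shell
# Add the OpenTelemetry Helm charts repo and install the Operator
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update

helm install opentelemetry-operator open-telemetry/opentelemetry-operator \
  --namespace opentelemetry-operator-system \
  --create-namespace
```

Note that recent chart versions may ask you to pick a default collector image explicitly (for example via `manager.collectorImage.repository`); check the chart's values for your version.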
Configuring the OpenTelemetry Collector (DaemonSet)
We’ll deploy a DaemonSet collector to:
- Collect logs from /var/log/pods
- Collect host metrics
- Receive OTLP data from instrumented apps
- Export everything to SigNoz
RBAC configuration
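The DaemonSet collector needs read access to pod and node metadata (for the `k8sattributes` processor and for kubelet/host metrics). A minimal sketch, assuming an `observability` namespace and an `otel-collector` service account (both names are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector
  namespace: observability
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
  # Read-only access to the objects the collector enriches telemetry with
  - apiGroups: [""]
    resources: ["pods", "namespaces", "nodes", "nodes/stats"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otel-collector
subjects:
  - kind: ServiceAccount
    name: otel-collector
    namespace: observability
```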
Collector Custom Resource
This collector receives telemetry and forwards it to SigNoz via OTLP.
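A minimal DaemonSet collector CR might look like the following. The names (`otel-daemonset`, the `observability` namespace, the `otel-collector` service account) are assumptions carried over from the RBAC sketch, and the SigNoz endpoint shown is the default in-cluster service name when SigNoz is installed via Helm in the `signoz` namespace — replace it with your own address. The Operator does not mount host paths automatically, so `/var/log/pods` is declared explicitly:

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel-daemonset
  namespace: observability
spec:
  mode: daemonset
  serviceAccount: otel-collector
  # filelog reads host paths, so they must be mounted explicitly
  volumes:
    - name: varlogpods
      hostPath:
        path: /var/log/pods
  volumeMounts:
    - name: varlogpods
      mountPath: /var/log/pods
      readOnly: true
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
      filelog:
        include:
          - /var/log/pods/*/*/*.log
      hostmetrics:
        collection_interval: 30s
        scrapers:
          cpu: {}
          memory: {}
    processors:
      k8sattributes: {}
      batch: {}
    exporters:
      otlp:
        # Replace with your SigNoz collector address
        endpoint: signoz-otel-collector.signoz.svc.cluster.local:4317
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [k8sattributes, batch]
          exporters: [otlp]
        metrics:
          receivers: [otlp, hostmetrics]
          processors: [k8sattributes, batch]
          exporters: [otlp]
        logs:
          receivers: [otlp, filelog]
          processors: [k8sattributes, batch]
          exporters: [otlp]
```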
Testing auto-instrumentation with a real app
To validate everything end-to-end, we’ll deploy a Java application and auto-instrument it.
Deploy Nexus Repository Manager
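As a sketch, a minimal Deployment using the public `sonatype/nexus3` image (the `nexus` namespace is an assumption, and sizing is illustrative — Nexus needs a fair amount of memory in practice):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nexus
  namespace: nexus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nexus
  template:
    metadata:
      labels:
        app: nexus
    spec:
      containers:
        - name: nexus
          image: sonatype/nexus3
          ports:
            - containerPort: 8081  # Nexus web UI
```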
Enabling Java auto-instrumentation
Since Nexus is a Java application, we define an Instrumentation CR:
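A sketch of such a CR, assuming the collector from the previous section (Operator-managed collectors expose a `<name>-collector` service) and the `nexus` namespace:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: java-instrumentation
  namespace: nexus
spec:
  exporter:
    # Service name follows the Operator's <name>-collector convention
    endpoint: http://otel-daemonset-collector.observability.svc.cluster.local:4317
  propagators:
    - tracecontext
    - baggage
  sampler:
    # Sample everything; lower the ratio for high-traffic services
    type: parentbased_traceidratio
    argument: "1"
```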
Annotate the workload
Add the following annotation to the pod template:
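For a Java workload, the annotation is `instrumentation.opentelemetry.io/inject-java`; the value `"true"` tells the Operator to use the single Instrumentation CR in the pod's namespace:

```yaml
spec:
  template:
    metadata:
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
```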
Once applied, the Operator injects the OpenTelemetry Java agent automatically.
Instead of annotating each workload, you can annotate the namespace.
All workloads in that namespace will be instrumented automatically — ideal when using one namespace per application.
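Namespace-level annotation can be done with a single command (assuming the `nexus` namespace from above):

```shell
# Annotate the namespace; new pods created in it get the Java agent injected
kubectl annotate namespace nexus \
  instrumentation.opentelemetry.io/inject-java="true"
```

Keep in mind that injection happens at pod creation, so already-running pods need a restart (for example, `kubectl rollout restart`) to pick up the agent.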
Observing traces in SigNoz
Finally, generate some traffic:
- Log into Nexus
- Upload artifacts
- Create repositories
Within seconds, traces appear in SigNoz — fully correlated and enriched with Kubernetes metadata.
Final thoughts
Auto-instrumentation completely changes how observability is adopted:
- No code changes
- Platform-driven
- Consistent across teams
Combined with SigNoz and the OpenTelemetry Operator, it provides a clean, scalable, and future-proof observability stack.
If you’re running Kubernetes at scale, this is the way forward.