Network Observer
The network observer is an optional companion to the K8s agent. It consumes network flow data from Cilium Hubble Relay, maps connections between workloads, and pushes topology information to the Shoehorn API. This gives you a live map of TCP traffic between services without any application instrumentation.
Prerequisites
- Cilium as the cluster CNI with Hubble enabled
- Hubble Relay deployed (aggregates flows from all nodes)
The observer connects to Hubble Relay via gRPC. It does not require privileged containers or eBPF capabilities — Cilium handles all eBPF-level observation at the CNI layer.
How It Works
The observer runs as a single-replica Deployment (not a DaemonSet). Hubble Relay already aggregates flows from every node, so one replica is sufficient.
The pipeline:
- Hubble client — connects to Hubble Relay via gRPC, streams network flows
- Flow processor — filters to EGRESS flows (to avoid double-counting), discards reply packets
- Entity mapper — maps Hubble pod/workload metadata to Shoehorn catalog entity IDs using the namespace.workload convention
- Aggregator — counts connections per source/destination pair in 60-second windows
- Pusher — flushes aggregated data to the Shoehorn API
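The filter, map, and aggregate stages above can be sketched in a few lines of Go. This is a minimal illustration, not the observer's actual code: the Flow struct is a simplified stand-in for Hubble's flow types, and the field names are assumptions.

```go
package main

import "fmt"

// Flow is a simplified stand-in for a Hubble flow; the field names
// here are illustrative, not the real Hubble API types.
type Flow struct {
	SrcNamespace, SrcWorkload string
	DstNamespace, DstWorkload string
	Direction                 string // "EGRESS" or "INGRESS"
	IsReply                   bool
}

// edge identifies a source/destination pair by catalog entity ID.
type edge struct{ Src, Dst string }

// entityID applies the namespace.workload convention.
func entityID(ns, workload string) string {
	return ns + "." + workload
}

// aggregate keeps only egress, non-reply flows and counts connections
// per source/destination pair. In the real observer this runs over a
// 60-second window before the pusher flushes it to the Shoehorn API.
func aggregate(flows []Flow) map[edge]int {
	counts := make(map[edge]int)
	for _, f := range flows {
		// Drop ingress flows (each connection would otherwise be seen
		// twice, once per side) and reply packets.
		if f.Direction != "EGRESS" || f.IsReply {
			continue
		}
		e := edge{
			Src: entityID(f.SrcNamespace, f.SrcWorkload),
			Dst: entityID(f.DstNamespace, f.DstWorkload),
		}
		counts[e]++
	}
	return counts
}

func main() {
	flows := []Flow{
		{"shop", "frontend", "shop", "cart", "EGRESS", false},
		{"shop", "cart", "shop", "frontend", "INGRESS", true}, // reply, dropped
		{"shop", "frontend", "shop", "cart", "EGRESS", false},
	}
	counts := aggregate(flows)
	fmt.Println(counts[edge{Src: "shop.frontend", Dst: "shop.cart"}]) // 2
}
```

Keying the map on the mapped entity IDs (rather than raw pod names) is what lets connection counts line up with catalog entities in the topology view.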
What You See in Shoehorn
Once the observer is running and the API is processing flow data:
- Network topology in the Operations dashboard showing which services talk to each other
- Per-workload connections in the entity detail view
- Traffic volume as connection counts per time window
Things to Consider
TLS to Hubble Relay
By default, the observer connects to Hubble Relay over plaintext gRPC. For production clusters, enable TLS in the Helm values. The chart supports configuring TLS certificates for the gRPC connection.
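A values fragment for enabling TLS might look like the following. The key names and paths here are illustrative assumptions, not the chart's actual schema — check the chart's values file for the real option names.

```yaml
# Illustrative only — consult the shoehorn-dev/helm-charts values.yaml
# for the actual key names.
networkObserver:
  hubbleRelay:
    address: hubble-relay.kube-system.svc:4245
    tls:
      enabled: true
      caCert: /certs/hubble-relay-ca.crt
      clientCert: /certs/client.crt
      clientKey: /certs/client.key
```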
Resource Usage
The observer is lightweight. A single replica handles flow data from clusters with hundreds of nodes. Typical consumption is similar to the main agent (50m CPU, 64Mi memory).
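If you want to pin requests to the typical consumption figures above, a values fragment could look like this. The key path is an assumption — the chart's values file defines the real structure.

```yaml
# Illustrative key path; the figures match the typical consumption above.
networkObserver:
  resources:
    requests:
      cpu: 50m
      memory: 64Mi
```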
Deployment Topology
Deploy one observer per cluster, in the same namespace as the main agent. The Helm chart supports enabling the observer as an optional component alongside the agent.
Entity ID Matching
The observer maps flows to entity IDs using the namespace.workload-name convention. For flows to appear correctly in the topology view, the workloads must also be discovered by the main K8s agent so that matching catalog entities exist.
Flows involving pods that do not map to a known entity (e.g., external traffic, kube-system pods) are still aggregated but may appear as unresolved endpoints in the UI.
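The resolution rule can be sketched as a lookup against the set of known catalog entities, with a fallback marker for endpoints that do not resolve. The known-entity set and the "unresolved:" prefix are illustrative assumptions; the observer's actual representation of unresolved endpoints may differ.

```go
package main

import "fmt"

// resolve maps a flow endpoint to a catalog entity ID using the
// namespace.workload-name convention, falling back to an "unresolved"
// marker when no matching entity exists (e.g. external traffic or
// kube-system pods). The prefix is a hypothetical marker for this sketch.
func resolve(known map[string]bool, ns, workload string) string {
	id := ns + "." + workload
	if known[id] {
		return id
	}
	return "unresolved:" + id
}

func main() {
	// Entities the main K8s agent has already discovered.
	known := map[string]bool{"shop.frontend": true, "shop.cart": true}
	fmt.Println(resolve(known, "shop", "frontend"))
	fmt.Println(resolve(known, "kube-system", "coredns"))
}
```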
Installation
The network observer is included in the shoehorn-dev/helm-charts Helm chart as an optional component. See the chart values for configuration options.