# NodeByte Now Discovers Your Kubernetes Infrastructure Automatically
A single script registers your entire K8s cluster — nodes, namespaces, workloads, services, and ingresses — into your NodeByte inventory with full parent-child hierarchy.
## The Gap
If you manage both traditional infrastructure and Kubernetes clusters, your inventory tool probably only knows about one of them. Physical servers, VMs, and network devices live in one system. Pods, deployments, and services live in kubectl. You end up context-switching between dashboards to answer basic questions about what's running where.
NodeByte's new Kubernetes inventory script closes that gap.
## What It Does
A single Bash script queries your cluster via kubectl and registers every resource into NodeByte through the existing registration API. It discovers:
- The cluster itself — version, API server URL, node and namespace counts
- Nodes — kubelet version, OS, architecture, container runtime, capacity, allocatable resources, conditions, labels, and taints
- Namespaces — status and labels
- Workloads — Deployments, StatefulSets, and DaemonSets with replica counts, container images, and rollout strategy
- Services — type, cluster IP, external IPs, ports, and load balancer addresses
- Ingresses — ingress class, routing rules, TLS configuration, and endpoints
Every resource lands in NodeByte as an inventory node with rich metadata stored in structured JSONB fields — the same data you'd get from kubectl describe, but searchable and browsable alongside the rest of your infrastructure.
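Conceptually, each discovered resource is flattened into one registration payload. Here is a minimal Bash sketch of that step; the field names (`hostname`, `kind`, `parent_hostname`, `metadata`) are illustrative assumptions, so check the script itself for the real payload shape:

```shell
# Build a registration payload for one resource. Field names here are
# assumptions for illustration, not NodeByte's documented schema.
payload() {
  local hostname="$1" kind="$2" parent="$3" metadata="$4"
  printf '{"hostname":"%s","kind":"%s","parent_hostname":"%s","metadata":%s}' \
    "$hostname" "$kind" "$parent" "$metadata"
}

# One node, with a slice of the metadata kubectl reports for it.
payload "worker-1" "device" "production" \
  '{"kubelet":"v1.29.0","os":"Ubuntu 22.04","runtime":"containerd://1.7.2"}'
```

Whatever ends up in the metadata argument is what lands in the JSONB field, so any detail kubectl reports stays searchable.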
## Hierarchical Relationships
Resources aren't dumped into a flat list. The script registers them top-down with parent-child relationships:
```
Cluster
├── Node: worker-1
├── Node: worker-2
├── Namespace: production
│   ├── Deployment: api-server (3/3 ready)
│   ├── Service: api-server (ClusterIP, port 8080)
│   └── Ingress: api.example.com
└── Namespace: monitoring
    ├── StatefulSet: prometheus (1/1 ready)
    └── DaemonSet: node-exporter (2/2 ready)
```
This means you can drill into a cluster in the NodeByte UI the same way you'd navigate kubectl — but with your bare metal servers, VMs, and network gear sitting right alongside.
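The ordering matters: a parent has to be registered before its children reference it. A minimal sketch of that top-down walk, assuming a hypothetical `register NAME KIND PARENT` helper (stubbed with `echo` here; names are taken from the example tree above):

```shell
# Top-down registration: cluster first, then nodes and namespaces,
# then namespaced resources. register() is a stand-in for the real
# API call made by the script.
register() { echo "register name=$1 kind=$2 parent=$3"; }

register "prod-cluster" cluster ""
register "worker-1" device "prod-cluster"
register "worker-2" device "prod-cluster"
for ns in production monitoring; do
  register "$ns" namespace "prod-cluster"
  # ...followed by each workload, service, and ingress with parent "$ns"
done
```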
## Running It
The script needs three things: kubectl access to your cluster, a NodeByte registration token, and your NodeByte URL.
```shell
NODEBYTE_TOKEN="your-registration-token" \
NODEBYTE_URL="https://your-nodebyte-instance" \
bash kubernetes-inventory.sh
```
Output looks like this:

```
Kubernetes inventory for cluster: production (context: prod-cluster)
Nodebyte URL: https://nodebyte.example.com
── Cluster ────────────────────────────────────────────────────────
production (cluster)      cluster     ✓ registered
── Nodes ──────────────────────────────────────────────────────────
worker-1                  device      ✓ registered
worker-2                  device      ↻ updated
── Namespace: default ─────────────────────────────────────────────
default (ns)              namespace   ✓ registered
api-server                workload    ✓ registered
api-server                service     ✓ registered
```
Re-running is safe — the registration endpoint is idempotent, upserting by hostname. Run it once manually, or put it on a cron to keep your inventory in sync.
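For the cron option, an hourly entry is plenty; the schedule, script path, and log location below are examples, not requirements:

```cron
# Hourly inventory sync; adjust path and schedule to taste.
0 * * * * NODEBYTE_TOKEN="your-registration-token" NODEBYTE_URL="https://nodebyte.example.com" bash /opt/nodebyte/kubernetes-inventory.sh >> /var/log/nodebyte-inventory.log 2>&1
```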
## Configuration
Environment variables control what gets collected:
| Variable | Default | Description |
|----------|---------|-------------|
| `K8S_CLUSTER_NAME` | kubectl context name | Custom cluster display name |
| `K8S_CONTEXT` | current context | Which kubectl context to query |
| `K8S_SKIP_SYSTEM` | `0` | Set to `1` to skip kube-system and friends |
| `K8S_NAMESPACES` | all | Comma-separated namespace whitelist |
| `K8S_RESOURCES` | all types | Comma-separated resource types to collect |
| `NODEBYTE_PARENT_HOSTNAME` | none | Nest the cluster under an existing node |
| `NODEBYTE_TAGS` | none | Extra tags appended to every registration |
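Putting a few of those together: a scoped run that skips system namespaces, inventories only two namespaces, and nests the cluster under an existing node. The namespace names, parent hostname, and tags are placeholders:

```shell
K8S_SKIP_SYSTEM=1 \
K8S_NAMESPACES="production,staging" \
NODEBYTE_PARENT_HOSTNAME="hv-01.example.com" \
NODEBYTE_TAGS="kubernetes,prod" \
NODEBYTE_TOKEN="your-registration-token" \
NODEBYTE_URL="https://your-nodebyte-instance" \
bash kubernetes-inventory.sh
```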
Registration tokens can be scoped with `allowed_kinds`, so you can issue one that only permits Kubernetes resource types, keeping it isolated from your manually managed inventory.
## Why a Script, Not an Operator
We considered building a Kubernetes operator that watches resources and syncs to NodeByte automatically. We went with a script instead because:
- No cluster-side dependencies. The script runs from any machine with `kubectl` access. Nothing gets installed in the cluster.
- Works with any cluster. Managed Kubernetes, bare-metal k3s, kind — it doesn't matter. If `kubectl get` works, the script works.
- Easy to audit. It's 600 lines of Bash. You can read it, modify it, and wrap it in whatever scheduling system you already use.
An operator might make sense later for real-time sync. But for inventory, a periodic snapshot is usually all you need.
## What's Next
The Kubernetes script joins the existing Docker and LXD inventory scripts, giving NodeByte coverage across the three most common ways teams run workloads today. If your infrastructure spans physical hardware, VMs, containers, and orchestrators, they all show up in one place now.
The script is available in the scripts/ directory of the NodeByte repository. Create a registration token from the dashboard, point the script at your cluster, and your Kubernetes infrastructure shows up in your inventory in seconds.