Use when the user has live Kubernetes resources already running in a cluster — applied by `kubectl apply`, installed via a different workflow, or inherited from a previous operator — and wants to bring them under ConfigHub management without a GitOps tool or a Helm chart in hand. Trigger phrases include "adopt these existing resources into ConfigHub", "I have stuff in the cluster, how do I get it into cub?", "reverse engineer my namespace into Units", "import from the cluster", "cub unit import", "we kubectl-apply'd everything and want to migrate", or "bring the running state into ConfigHub". Creates Units pre-bound to a cluster Target, runs `cub unit import` with `--where-resource` filters to pull live manifests, then hands off to `config-as-data` doctrine for ongoing management. Do not load for Helm-installed charts (use `import-from-helm`), ArgoCD Applications (use `import-from-argocd`), Flux HelmReleases / Kustomizations (use `import-from-flux`), or new YAML being authored from scratch (use `config-as-data`).
Onboarding ramp for teams whose workloads are running in Kubernetes but are not managed by Helm, ArgoCD, or Flux — typically kubectl apply-driven setups, or configurations inherited from a previous operator. Pulls the current live state of selected resources into ConfigHub Units so that configuration-as-data management can start from what's actually running.
This skill targets users with existing cluster state and no DRY source of truth. The import captures the live state as literal YAML in ConfigHub Units. From that point forward:
- Mutations go through `cub-mutate` + `cub-apply`, not ad-hoc `kubectl apply`.
- Use `--where-resource` filters to scope the import to what the user actually wants to own, not everything in a namespace.
- If the user's cluster state came from a Helm chart, ArgoCD, or Flux — even if they're not sure — route them to those skills instead. `import-from-cluster` is the path of last resort; a chart/GitOps history is better preserved by the specialized import skills.
## When to use

- The user has `kubectl apply`-driven workloads and wants to start managing them with ConfigHub.
- Mixed clusters: some workloads Helm-installed (route those to `import-from-helm`), some kubectl-applied (handle here).

## When not to use

- Helm-installed charts: `import-from-helm`.
- ArgoCD Applications: `import-from-argocd`.
- Flux HelmReleases / Kustomizations: `import-from-flux`.
- New YAML authored from scratch: `config-as-data`.
- Ongoing drift handling: `cub unit refresh` + the forthcoming drift-reconcile skill.

## Prerequisites

- `cub organization list` succeeds (proves a valid token; `cub context get` / `cub info` / `cub version` don't require one).
- `kubectl config current-context` points at the right cluster.
- The Kubernetes bridge is installed in that cluster (`cub worker list-function <worker>` advertises Kubernetes/YAML functions). If not, run `worker-bootstrap` first — the Worker is what performs the import.
- A cluster Target exists (in a `workers-<cluster>` Space per `space-topology`) backed by that Worker. Confirm with `cub target list --space "*"`.
- A destination Space for the imported Units (e.g., `<app>-imported-<env>`); split into env-Spaces later via `cub unit create --dest-space` if the layout from `space-topology` applies.

## Scoping with `--where-resource`

`cub unit import` takes a `--where-resource` expression that filters which cluster resources are pulled into the Unit. SQL-ish, similar to `--where-data`. Key toggles (Kubernetes-specific):
- `import.include_system` — default `false`. When `true`, pulls resources from system namespaces (`kube-system`, `kube-public`, `kube-node-lease`). Almost never what the user wants.
- `import.include_cluster` — default `false`. When `true`, pulls cluster-scoped resources (ClusterRole, ClusterRoleBinding, CRD, StorageClass, Namespace, etc.). Reach for this only when intentionally adopting cluster-wide state.
- `import.include_custom` — default `false`. When `true`, pulls custom resources (CRs). Usually needed when adopting an operator's state (e.g., cert-manager Certificates, Flux HelmReleases — though if those are Flux-managed you should be using `import-from-flux`).

Typical scopes:
```shell
# Everything user-land in one namespace.
--where-resource "metadata.namespace = 'payments-prod'"

# Workloads only (Deployments + Services + ConfigMaps) in a namespace.
--where-resource "metadata.namespace = 'payments-prod' AND kind IN ('Deployment','Service','ConfigMap')"

# Multiple namespaces, include the operator's custom resources.
--where-resource "metadata.namespace IN ('payments-prod','payments-staging') AND import.include_custom = true"

# Everything pinned to a specific image (audit use).
--where-resource "spec.template.spec.containers.*.image = 'ghcr.io/acme/payments:v1.2.3'"
```

## Unit granularity

`cub unit import` works on a Unit that already exists and has a Target bound. For onboarding, choose one of:
- One Unit per namespace (e.g., everything in `payments-prod`). Easy to review as a block; ongoing changes mutate one big Unit.
- One Unit per workload; see `import-unit-granularity` for the decision.

For the first pass, default to one Unit per logical workload (`<app>` slug, contents = that app's resources):
```shell
cub unit create --space <app>-<env> <app>
cub unit set-target --space <app>-<env> <app> <workers-space>/<cluster-target>
```

The Unit starts empty; the target binding tells `cub unit import` which cluster to pull from.
```shell
cub unit import <app> --space <app>-<env> \
  --where-resource "metadata.namespace = '<ns>' AND metadata.labels.app = '<app>'" \
  --dry-run
```

Inspect the dry-run output. Confirm:
- The resource set is what you expect (no `kube-system` leakage, no unexpected CRs, no Service-of-loadBalancer-in-spec kinds of pulls).
- Transient server-side fields are still present (e.g., `kubectl.kubernetes.io/last-applied-configuration`). These can be cleaned up post-import via `cub-mutate` / a `cub function set` delete-path-style function.

Tighten the filter and re-dry-run until the set is right.
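Spotting the transient fields called out above can be scripted rather than eyeballed. A minimal sketch, assuming the dry-run output was saved to a file; the YAML here is a stand-in sample, not real `cub unit import --dry-run` output:

```shell
# Stand-in sample of saved dry-run output (real output would come from cub).
cat > /tmp/dryrun.yaml <<'EOF'
kind: Deployment
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{"apiVersion":"apps/v1"}'
  uid: 2f9c1a7e
EOF

# Flag fields that commonly leak through an import; any hit means
# post-import cleanup (cub-mutate / delete-path) will be needed.
grep -En 'last-applied-configuration|uid:|resourceVersion|creationTimestamp' /tmp/dryrun.yaml
```

Here the grep surfaces both the `last-applied-configuration` annotation and the `uid` field, so this sample would need cleanup after import.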
```shell
cub unit import <app> --space <app>-<env> \
  --where-resource "metadata.namespace = '<ns>' AND metadata.labels.app = '<app>'" \
  --wait
```

The Worker pulls the matching resources, strips transient fields, and sets the Unit's Data to the resulting YAML. The Unit's LiveState also reflects what's currently running (same content, assuming nothing changed between the import start and finish).
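A quick sanity check once the import finishes is to count the resources that landed in the Unit. A minimal sketch with a stand-in file; against a real Unit you would pipe the exported YAML from `cub unit get` instead:

```shell
# /tmp/unit.yaml stands in for `cub unit get <app> --space <app>-<env> -o yaml`.
cat > /tmp/unit.yaml <<'EOF'
kind: Deployment
---
kind: Service
---
kind: ConfigMap
EOF

# One top-level `kind:` per imported resource.
grep -c '^kind:' /tmp/unit.yaml
# → 3
```

If the count doesn't match what the dry-run showed, the filter or the import scope changed between runs.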
```shell
cub unit get <app> --space <app>-<env> -o yaml
```

Scan for:
- Fields the API server defaulted (`imagePullPolicy: Always`, `schedulerName: default-scheduler`, resources with defaulted limits). Decide per field: keep as-is, or strip via `cub function set`.
- Leftover metadata; clean with the `strip-metadata` functions (see `references/functions-catalog.md`).
- Secrets: `data` values are base64-encoded but unencrypted. Do not commit these — manage via an external SecretStore (see `references/yaml-patterns.md`).

## Post-import: `config-as-data` discipline

From here, ongoing management follows `config-as-data`:
- Don't `kubectl edit`; don't `kubectl apply` directly. Changes go through `cub-mutate` + `cub-apply`.
- Attach the `platform/standard-vets` Filter to the Space (see `triggers-and-applygates`) so every future mutation runs schema and policy validation.
- If a Unit fails the `vet-schemas` gate (common — live resources often have admission-added fields that trip schema validation), use `cub-mutate` to clean the data, not a re-import.

Applying the Unit is now a no-op (the data matches live state). The first meaningful apply comes when the user makes their first ConfigHub-driven change:
```shell
cub unit apply <app> --space <app>-<env> --wait
```

From there, `verify-apply` takes over.
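The Secrets caveat above deserves emphasis: base64 is an encoding, not encryption, so anyone who can read the imported Unit data can recover the plaintext. A self-contained illustration (the value is a made-up example):

```shell
# A Secret's .data value, as it would appear in imported YAML. Decoding it
# requires nothing more than base64 -- no key, no cluster access.
encoded="cGFzc3dvcmQxMjM="
printf '%s' "$encoded" | base64 -d
# → password123
```

This is why the skill routes Secret values to an external SecretStore rather than committing them as Unit data.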
## Allowed operations

`cub unit create/update/set-target/import/apply`, `cub worker get/list-function`, and read-only `kubectl get/describe` for scoping confirmation.

## Never do

- `kubectl apply/edit/delete` on the resources being adopted (breaks the single-source-of-truth assumption).
- Importing into a Space without a Target (`cub unit import` will fail).
- Importing resources still actively managed by Helm / ArgoCD / Flux (route to the right skill instead — importing them here creates a split-brain between that controller and ConfigHub).

## Failure modes

- The Worker doesn't advertise the Kubernetes bridge — stop, run `worker-bootstrap`.
- No Target is bound when `cub unit import` is called — cub will reject with "target must be attached". Pre-create and bind first.

## Verify

- `cub unit list --space <app>-<env>` — the Unit exists.
- `cub unit get <app> --space <app>-<env> -o yaml` — Data contains the expected resources, stripped of transient fields.
- `cub unit bridgestate <app> --space <app>-<env>` — target binding healthy.
- `cub unit livestate <app> --space <app>-<env>` — LiveState matches Data (no drift at import time).
- With `platform/standard-vets`: `cub function vet vet-schemas --space <app>-<env> --unit <app>` — passes, or produces a readable set of cleanup items for `cub-mutate`.
- `cub unit get <app> --space <app>-<env> --web` — the imported Unit with revision 1 showing the import source.
- `cub revision list <app> --space <app>-<env> --web` — provenance starting at import.

## References

- `cub unit import --help` — full `--where-resource` syntax and Kubernetes-specific toggles (authoritative over anything this skill says about flag names).
- `references/cub-cli.md` — CLI discipline; `--where-data` vs `--where-resource` scoping.
- `references/functions-catalog.md` — cleanup functions for post-import (`strip-metadata`, etc.).
- `references/yaml-patterns.md` — Secrets handling.
- Related skills: `worker-bootstrap` (prereq), `space-topology` (Space layout), `target-bind` (pre-bind the Unit), `config-as-data` (post-import doctrine), `cub-mutate` (cleanup), `triggers-and-applygates` (add policy post-import), `import-unit-granularity` (one-Unit-per-what decision).