Use when the user is deciding how to slice Kubernetes YAML into ConfigHub Units — phrases like "one Unit or many?", "how should I split these resources?", "should Deployment and Service be one Unit?", "where do CRDs go?", "Unit per resource or per bundle?", "should my namespace and its RBAC be together?", "per-app or per-namespace?". Applies a short set of rules (CRDs separate; rendered-from-generator stays bundled; otherwise split by ownership / references / lifecycle / blast radius) and routes the user to the right import skill with a concrete Unit-slug plan. Do not load for authoring new YAML (use `config-as-data`), for executing an import the user has already scoped (use the matching `import-from-*` skill), or when the split is already forced by the tool in use (`cub helm install` and `cub gitops import` split CRDs automatically — just run them).
A decision helper for one question: how many Units, and what goes in each? Produces a concrete split — Unit slugs + the --where-resource predicate (or equivalent) for each — and hands off to the import skill that will execute it.
Load this when the user has picked a source (`import-from-helm` / `-kustomize` / `-argocd` / `-flux` / `-cluster`) but is unsure how many Units it should produce. Authoring new YAML is `config-as-data`; execution is handed off to the matching `import-from-*` skill after the decision is made here; where the resulting Units live is `space-topology`. Tool defaults stand: `cub helm install` always produces `<release>` + `<release>-crds`; `cub gitops import` always produces dry + wet + crds with the right link predicates. Don't re-litigate their defaults.

Independent of everything else, CustomResourceDefinitions go in a Unit separate from anything that uses them. This is not negotiable:

- CRDs must be applied and established before any custom resource that uses them (`kubectl wait --for=condition=established`). Separate Units let you sequence apply cleanly without a shell script around one big Unit (sketched below).
- `cub helm install` and `cub gitops import` already split CRDs (`<release>-crds` / a `-crds` wet Unit linked with where-resource `"kind = 'CustomResourceDefinition'"`). Hand-rolled imports should match.
- Slug convention: `<app>-crds`.
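A minimal sketch of the sequencing a separate CRDs Unit buys you. The directory names and the CRD name `widgets.example.com` are placeholders, and plain `kubectl apply` stands in for however the Units actually get applied:

```sh
# Apply the <app>-crds Unit's contents first, wait for the API to be served,
# then apply the workload. One combined apply can race CRs against a CRD
# that is not yet established.
kubectl apply -f crds/
kubectl wait --for=condition=established --timeout=60s crd/widgets.example.com
kubectl apply -f workload/
```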
If the source is Helm, Kustomize, Argo, or Flux, the generator already reasoned about what belongs together. Don't re-split unless there's a concrete reason. One workload-Unit + one CRDs-Unit is the default.
Reasons that would justify further splitting from a generator: ownership, lifecycle, or blast radius diverging within the render (the same axes used for cluster imports below).
Otherwise, one Unit. Run `cub helm install` or `cub gitops import` and stop.
**import-from-cluster → split by Kubernetes best practices**

When there's no generator to defer to, split by how the resources actually want to be operated. Axes, in order: ownership, references, lifecycle, blast radius. The baseline split:
| Unit slug | Contents | Rationale |
|---|---|---|
| `<ns>-namespace` | The Namespace resource itself | Cluster-scoped; platform-team owned; changes almost never; different lifecycle from anything inside the namespace. |
| `<ns>-policy` | NetworkPolicy, ServiceAccount, Role, RoleBinding, per-namespace ResourceQuota, LimitRange | Namespace-scoped policy; usually platform- or platform-plus-app co-owned; changes with policy updates, not app releases. |
| `<app>` | Deployment (or StatefulSet / DaemonSet), Service, ConfigMap, HorizontalPodAutoscaler, PodDisruptionBudget, ServiceMonitor | App-team owned; day-to-day change cadence; tightly cross-referenced. Blast radius is the workload. |
| `<app>-crds` | Any CRDs the app ships | Lifecycle + apply-order distinct from the workload. |
| `<infra>-cluster` | ClusterRole, ClusterRoleBinding, StorageClass, PriorityClass, cluster-scoped CRDs | Cluster-wide blast radius; typically platform-team owned. |
| `<operator>-crs` | Custom resources (Certificate, HelmRelease, etc.) | Referenced resources; change independently of the operator itself. |
Multi-workload apps: one `<app>` Unit per workload (`<app>-api`, `<app>-worker`), not one mega-Unit with every workload; they usually have different replica counts, different canary policies, and different incident ownership.
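To make that concrete, a hedged sketch using the same `cub unit import` scoping shown in the execution section below; the `component` label key and its values are assumptions about how the workloads are labeled, and the Units are assumed to be pre-created and bound to a Target first:

```sh
SPACE=<app>-<env>
NS=<namespace>
# One Unit per workload, scoped by a per-component label (illustrative key).
cub unit import --space "$SPACE" <app>-api \
  --where-resource "metadata.namespace = '$NS' AND metadata.labels.app = '<app>' AND metadata.labels.component = 'api'" \
  --dry-run
cub unit import --space "$SPACE" <app>-worker \
  --where-resource "metadata.namespace = '$NS' AND metadata.labels.app = '<app>' AND metadata.labels.component = 'worker'" \
  --dry-run
```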
**Anti-patterns**

- Bundling CRDs with the workload that uses them: apply-order failures (CRD not established before CR), and rollback of a workload accidentally takes down its CRDs.
- Forgetting `import.include_cluster` when the split needs cluster-scoped resources such as the Namespace or CRDs.
- Splitting by kind. Doesn't map to ownership or lifecycle; produces "all ConfigMaps" or "all Services" Units that cut across unrelated apps.

**Executing the split (`cub unit import` and hand rolls)**

To execute a split with `cub unit import`: pre-create each Unit, bind it to the same cluster Target, then call `cub unit import` per Unit with a scoped `--where-resource`. This is exactly what `cub gitops import` does internally when it splits CRDs off the wet Unit.
Example for splitting a namespace's state into four Units:
SPACE=<app>-<env>
TARGET=<workers-space>/<cluster-target>
NS=<namespace>
# Pre-create each Unit and bind it to the same cluster Target
for u in <ns>-namespace <ns>-policy <app> <app>-crds; do
cub unit create --space "$SPACE" "$u"
cub unit set-target --space "$SPACE" "$u" "$TARGET"
done
# CRDs (cluster-scoped; needs include_cluster)
cub unit import --space "$SPACE" <app>-crds \
--where-resource "kind = 'CustomResourceDefinition' AND import.include_cluster = true AND metadata.labels.app = '<app>'" \
--dry-run
# Namespace (cluster-scoped; platform-team owned).
cub unit import --space "$SPACE" <ns>-namespace \
--where-resource "kind = 'Namespace' AND metadata.name = '$NS' AND import.include_cluster = true" \
--dry-run
# Namespace-scoped policy (NetworkPolicy, RBAC, ResourceQuota, LimitRange).
# `--where-resource` supports AND only — if you'd want Namespace + policy in a
# single Unit, do two imports into separate Units rather than trying to OR.
cub unit import --space "$SPACE" <ns>-policy \
--where-resource "metadata.namespace = '$NS' AND kind IN ('NetworkPolicy','ServiceAccount','Role','RoleBinding','ResourceQuota','LimitRange')" \
--dry-run
# Workload
cub unit import --space "$SPACE" <app> \
--where-resource "metadata.namespace = '$NS' AND kind IN ('Deployment','StatefulSet','DaemonSet','Service','ConfigMap','HorizontalPodAutoscaler','PodDisruptionBudget','ServiceMonitor') AND metadata.labels.app = '<app>'" \
  --dry-run

Dry-run each, confirm the resource set, then re-run without `--dry-run`. Same pattern for Helm-rendered bundles split by hand: store the output of `helm template` in files, then `cub unit create` each from its scoped file. (Though unless you have a specific reason, `cub helm install` already does CRD splitting correctly; prefer it.)
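A hedged sketch of that hand-rolled Helm path, assuming yq v4 is available to split the multi-document stream; the release name, chart path, and file locations are placeholders, and the exact `cub unit create` invocation for loading each file is left to that command's help:

```sh
# Render once, then split the stream into one file per planned Unit.
helm template my-release ./chart > /tmp/rendered.yaml
# CRDs go to their own file (matches the <app>-crds convention).
yq 'select(.kind == "CustomResourceDefinition")' /tmp/rendered.yaml > /tmp/my-release-crds.yaml
# Everything else stays with the workload Unit.
yq 'select(.kind != null and .kind != "CustomResourceDefinition")' /tmp/rendered.yaml > /tmp/my-release.yaml
# Then `cub unit create` each Unit from its scoped file, as described above.
```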
Defaults, preconditions, and scope:

- Generator sources keep their defaults: `<release>` + `<release>-crds` (Helm) or dry/wet/crds (Argo/Flux) per the matching import skill. Only split further if ownership / lifecycle / blast radius diverge within the render.
- The output here is the split itself: Unit slugs plus their `--where-resource` predicates.
- Hand off to the matching `import-from-*` skill for execution.
- Make sure `kubectl get -n <ns> -o yaml > /tmp/inventory.yaml` or a similar inventory is available for reference; you need to see what's actually there before prescribing a split (a sketch of gathering one follows below).
- The Space layout comes from `space-topology` (or will as part of this). Units don't exist in a vacuum.
- Read-only and decision-making only: `kubectl get` and `cub ... list/get` to inspect inventory. No `cub unit create` / `import` / `update`; hand that off to the specialized import skill after the decision is made.
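A sketch of gathering that inventory with plain kubectl; the kinds listed are a common starting set, not an exhaustive one:

```sh
NS=<namespace>
# Namespaced workloads and their wiring, with labels (labels drive the split).
kubectl get -n "$NS" deploy,sts,ds,svc,cm,hpa,pdb --show-labels
# Namespace-scoped policy objects.
kubectl get -n "$NS" networkpolicy,serviceaccount,role,rolebinding,resourcequota,limitrange --show-labels
# Cluster-scoped pieces that would need import.include_cluster.
kubectl get crd
# Full manifests for reference while writing --where-resource predicates.
kubectl get -n "$NS" deploy,sts,ds,svc,cm,hpa,pdb -o yaml > /tmp/inventory.yaml
```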
There's nothing to apply here; the skill's output is a split proposal. The user (or the import skill called next) verifies by running --dry-run and reviewing the per-Unit resource set before importing.
References:

- `kubectl get -n <ns> --show-labels` — source inventory the recommendation was based on.
- `references/cub-cli.md` — `--where-resource` / `--where-data` scoping mechanics (including ConfigHub.ResourceType, ConfigHub.ResourceName, import.include_system, import.include_cluster, import.include_custom).
- https://docs.confighub.com/markdown/guide/rendered-manifests.md — the `cub gitops import` splitting flow this skill mirrors.
- https://docs.confighub.com/markdown/guide/helm-charts.md — the `cub helm install` release + crds default.
- Related skills: `space-topology` (where the Units go), `import-from-helm`, `import-from-kustomize`, `import-from-argocd`, `import-from-flux`, `import-from-cluster`, `config-as-data` (post-import doctrine).