Kustomize Rollout


April 18, 2025

Kustomize is a tool built into kubectl which helps with the management of YAML. It does a lot of things, but one of the major ones is having overlays per deployment. It is not uncommon to have a single base and an overlay per deployment. However, this can cause issues when you need to fix your base, as it will happily update all your overlaid environments en masse, which is less than ideal. Here is how I have fixed that for my deployments.

Setup

Let’s say we have the following kustomize layout. It is a simple deployment with a service and ingress. There are four overlays: the three standard stages (dev, staging, prod) plus a local testing overlay meant to be deployed manually to a local instance.

    base
    ├── deployment.yaml
    ├── ingress.yaml
    ├── kustomization.yaml
    └── service.yaml
    overlays
    ├── dev
    │   └── kustomization.yaml
    ├── staging
    │   └── kustomization.yaml
    ├── prod
    │   └── kustomization.yaml
    └── local
        └── kustomization.yaml
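    The base’s kustomization.yaml simply lists the shared resources (a sketch; the file names match the layout above):

```yaml
# base/kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml
  - ingress.yaml
```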
    Option 1: Patches

    In this first option, everything that makes a deployment unique is tracked only in the overlay, via patches. This works very well for simple patches like the following:

    # overlays/dev/kustomization.yaml
    resources:
      - ../base
    
    patches:
      - patch: |-
          - op: replace
            path: /spec/rules/0/host
            value: dev.example.com
        target:
          kind: Ingress
          name: app

    However, things are a little more annoying if you have to exclude or remove items. There is no clean way in kustomize to exclude entire objects coming from a base, so the only way to handle this is to keep full object copies in each overlay. If you have a large discrepancy between environments, or a lot of objects, this becomes error-prone.

    Option 2: A/B Bases

    Note

    This style is more complicated, but it prevents accidental rollouts. It should be reserved for when the deployment’s maturity requires it.

    In this style there are a, b, provider-a, and provider-b bases, as well as all the standard overlays, plus a local overlay that cannot contain provider-specific objects.

    base
    ├── a
    │   ├── deployment.yaml
    │   ├── ingress.yaml
    │   ├── kustomization.yaml
    │   └── service.yaml
    ├── provider-a
    │   ├── backend-config.yaml
    │   ├── cert.yaml
    │   ├── external-secret.yaml
    │   ├── frontend-config.yaml
    │   └── kustomization.yaml
    ├── b
    │   ├── deployment.yaml
    │   ├── ingress.yaml
    │   ├── kustomization.yaml
    │   └── service.yaml
    └── provider-b
        ├── backend-config.yaml
        ├── cert.yaml
        ├── external-secret.yaml
        ├── frontend-config.yaml
        └── kustomization.yaml
    overlays
    ├── dev
    │   └── kustomization.yaml
    ├── staging
    │   └── kustomization.yaml
    ├── prod
    │   └── kustomization.yaml
    └── local
        └── kustomization.yaml
    A standard rollout is as follows:

    1. Every overlay points at *a
      1. local and provider-a directly at a
      2. dev, staging, and prod at provider-a
    2. Create / update b to be what is in a
    3. Point local to b
    4. Update b until it works for local
    5. Create / update provider-b based on provider-a
      1. Make sure that provider-b points to b
    6. Point dev to provider-b
    7. Update provider-b until it works for dev
    8. Roll out the changes to staging and prod
    9. (optional) Delete a and provider-a
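    In practice the “pointing” is just the resources entry in each kustomization.yaml; step 6, for example, is a one-line change (the relative paths are a sketch based on the layout above):

```yaml
# overlays/dev/kustomization.yaml
resources:
  - ../../base/provider-b   # was: ../../base/provider-a
```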

    Blast Radius Diff

    Because every update touches so many files, it is useful to create a script that diffs the output of kustomize between what is currently deployed (usually the main branch) and what is currently on the working branch. This is the only way to get a real idea of the blast radius of the change.
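    A minimal sketch of such a script, assuming bash, git worktree, and kubectl are available, that the deployed state lives on main, and that the overlay directory is passed as an argument:

```shell
# Diff the fully rendered manifests of an overlay between main and the
# current checkout. A small diff means a small blast radius.
blast_radius_diff() {
  local overlay="$1" tmp rc
  tmp="$(mktemp -d)"
  # check out the deployed state (assumed to be main) into a throwaway worktree
  git worktree add --detach "$tmp" main >/dev/null
  diff -u \
    <(kubectl kustomize "$tmp/$overlay") \
    <(kubectl kustomize "$overlay")
  rc=$?
  git worktree remove --force "$tmp"
  return $rc
}
```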

    Option 3: Versioned Helm Charts

    The final option - which is complicated as it introduces a new tool - is to produce a versioned Helm chart, then reference it, together with a values file, in each overlay. In this case the Helm chart is the base, so no separate base is needed.

    overlays
    ├── dev
    │   ├── kustomization.yaml
    │   └── values.yaml
    ├── staging
    │   ├── kustomization.yaml
    │   └── values.yaml
    ├── prod
    │   ├── kustomization.yaml
    │   └── values.yaml
    └── local
        ├── kustomization.yaml
        └── values.yaml
    Each kustomization.yaml would look like this:

    helmCharts:
    - name: app
      includeCRDs: false
      valuesFile: values.yaml
      version: 3.1.3
      repo: https://oci.example.com/repos/app

    Separately you will need a repo for the Helm chart, with a CI pipeline that pushes a versioned chart to an OCI registry. To roll out a change you update the version field and make any corrections needed to the values file.
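    With a recent Helm (3.8+), that CI step can be as simple as the following (the chart path, version, and registry are assumptions matching the example above):

```shell
# package the chart and push it to the OCI registry
helm package ./chart --version 3.1.3
helm push app-3.1.3.tgz oci://oci.example.com/repos
```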

    Note

    It is not uncommon to do local testing directly from the Helm chart, and to have a special dev cluster for manually testing the chart before it gets rolled out. This means the local vs. provider-specific objects have to be handled by logic exposed in values.yaml.

    # values.yaml
    provider: "local" # only include the CRDs valid for local deployments
    # or provide explicit disable flags
    disableExternalSecrets: true
    disableManagedCertificates: true
    
    # hardcoded secrets are now needed
    secrets:
      someSecret:
        key: some-secret-value
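    Inside the chart, those flags gate the provider-specific templates; a hypothetical sketch for the ExternalSecret (the flag name matches the values file above):

```yaml
# templates/external-secret.yaml
{{- if not .Values.disableExternalSecrets }}
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
# ...rest of the object...
{{- end }}
```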

    Don’t use chartHome

    Kustomize does support a chartHome option, which uses a local file path to find the Helm chart. Don’t use it, as it is the worst of all worlds: you either have to manage the chart A/B style, or you have to add the stage logic to the chart directly, which means you cannot truly test it before rollout. It will bite you.