Cluster Add-ons with ApplicationSets

1. Overview

Managing clusters at scale requires standardization and automation. Tasks that look trivial with a handful of clusters become a burden when managing tens or even hundreds of them.

How did we get here? Read the backstory below, told from the perspective of a cluster administrator named Nick.

While scrolling through Slack one morning, Nick got a ping that one of the product teams wanted to do performance testing on their application ahead of a new feature launch that ties together many different services. They were worried about how the influx of traffic might affect the environment. Given this fear, Nick decided to provision an environment dedicated to performance testing, based on the stage environment.

Right now, they use a cluster for each environment (dev, stage, and prod), with a folder per cluster containing the Argo CD Applications for the “add-ons” (a standard set of Kubernetes resources used by applications, expected to be on every cluster).

diagram of cluster folders

Adding this new environment with a new cluster requires lots of copying and pasting of Argo CD Applications from an existing cluster configuration to a folder for the new one, and changing any details specific to the cluster (e.g. cluster name). While not a huge burden at this scale, it’s easy to forget to change something.

diagram of copying Applications between cluster folders

Nick updated the Terraform configuration to set up the new perf environment and create the cluster. Nick added the cluster to the central Argo CD instance and created the App-of-Apps to deploy the Applications from the cluster’s folder in their GitOps repo. Then, Nick confirmed that all of the Applications were synced and healthy.

diagram of terraform creating a cluster and argo cd pull from git to the cluster

After handing over the reins to the product team, Nick got a ping on Slack from a developer who had deployed their application to the perf environment, but it wasn't working the way it did in stage. After some investigation, Nick found that, in the transition to the new cluster, the Application for cert-manager still pointed to the values file of the stage cluster. The values file contained the wrong subdomain, causing the DNS challenge to fail when the dev's application requested a new cert: a minor oversight when copying over the Argo CD Applications for the new cluster's add-ons.

diagram of mistake copying Applications

Nick thinks to himself, there must be a better way! A life without toil. 💡

In this tutorial presented by Akuity, attendees will learn how to leverage ApplicationSets, a powerful Argo CD feature, to streamline the deployment of a standard set of "add-ons" (like Sealed Secrets, cert-manager, and Kyverno) to each cluster. We will address the dynamic nature of clusters by demonstrating how they can "opt-in" to specific tooling and configurations, allowing for flexibility without sacrificing standardization.

Manually creating Applications is error-prone and tedious, especially when the Applications are nearly identical aside from a few values that can be determined automatically. ApplicationSets template Applications and populate them using generators.
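
For illustration, a minimal ApplicationSet might look like the following sketch (the name, repo URL, and path are hypothetical; the workshop's actual ApplicationSets are covered below). The cluster generator emits one result per cluster registered to Argo CD, and each result is rendered through the template to produce an Application:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: example # hypothetical name
  namespace: argocd
spec:
  generators:
    # One result per cluster registered to Argo CD, exposing
    # parameters like {{name}} and {{server}}.
    - clusters: {}
  template:
    metadata:
      name: 'example-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/manifests # hypothetical repo
        targetRevision: HEAD
        path: .
      destination:
        server: '{{server}}'
        namespace: default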

diagram of an ApplicationSet creating Applications

In the control-plane repo, the bootstrap folder contains two ApplicationSets: one for the cluster add-ons (cluster-addons-appset.yaml) and one for the cluster apps (cluster-apps-appset.yaml). The distinction between "add-ons" and "apps" is the delineation between cluster admin and developer. The cluster admin is primarily responsible for the "add-ons", as they are central to the cluster and used by many other teams. The developers are responsible for their apps.

diagram with two ApplicationSets with one for add-ons and one for apps with dev/admin split

1.1. Prerequisites

The tutorial assumes you have experience working with Kubernetes, GitOps, and Argo CD. As such, some underlying concepts won't be explained during this tutorial.

The tutorial requires that you have the following:

2. Creating the Workshop Environment

2.1. Kubernetes Clusters using devcontainer

To demonstrate multi-cluster management, you will create an environment for this workshop using a devcontainer. The devcontainer will be built based on the specification in the devcontainer.json in the workshop repo. Then, it will start up two k3d clusters named dev and prod.

The easiest way to get started is with GitHub Codespaces (especially if you are on conference wifi).

Open workshop in GitHub Codespaces

2.2. Akuity Platform Sign Up

This scenario demonstrates deploying applications to a cluster external to Argo CD.

Similar to how the GitHub repo hosts the Helm charts, which describe what resources to deploy into Kubernetes, the Akuity Platform will host the Application manifests, which describe how to deploy those resources into the cluster, along with Argo CD itself, which will apply the changes to the cluster.

tip

Sign up for a free 30-day trial of the Akuity Platform!

  1. Create an account on the Akuity Platform.

  2. To log in with GitHub SSO, click "Continue with GitHub".

note

You can also use Google SSO or an email and password combo.

  3. Click Authorize akuityio.

  4. Click the create or join link.

  5. Click + New organization in the upper right hand corner of the dashboard.

  6. Name your organization following the rules listed below the Organization Name field.

2.3. Create your Argo CD Instance

You can create your Argo CD instance using either the Akuity Platform Dashboard or the akuity CLI. The steps below use the CLI.

  1. Check your akuity CLI version.

    akuity version
  2. Log into the akuity CLI.

    akuity login
  3. Set your organization name in the akuity config in one command using some bash-fu:

    akuity config set --organization-id=$(akuity org list | awk 'NR==2 {print $1}')
  4. Create the Argo CD instance on the Akuity Platform.

    akuity argocd apply -f akuity-platform/
  5. Apply the agent install manifests to the clusters.

    kubectx k3d-dev && \
    akuity argocd cluster get-agent-manifests \
      --instance-name=cluster-addons dev | kubectl apply -f -

    kubectx k3d-prod && \
    akuity argocd cluster get-agent-manifests \
      --instance-name=cluster-addons prod | kubectl apply -f -
  6. From the Akuity Platform Dashboard, at the top, next to the Argo CD instance name and status, click the instance URL (e.g., <instance-id>.cd.akuity.cloud) to open the Argo CD login page in a new tab.

  7. Enter the username admin and the password akuity-argocd.

  8. Log into the argocd CLI.

    argocd login \
    "$(akuity argocd instance get cluster-addons -o json | jq -r '.id').cd.training.akuity.cloud" \
    --username admin \
    --password akuity-argocd \
    --grpc-web

You now have a fully-managed Argo CD instance 🎉

After the bootstrap Application automatically syncs, it will create two ApplicationSets: cluster-addons and cluster-apps. Notice that they do not create any Applications. Each cluster can opt-in to add-ons, meaning that nothing is deployed unless the cluster indicates that it should be.
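
You can confirm this from the CLI (assuming the argocd login from the previous section):

# Both ApplicationSets exist...
argocd appset list
# ...but neither has generated any Applications yet.
argocd app list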

3. Enabling Cluster Add-ons

3.1. addons ApplicationSets

For each add-on there is a corresponding ApplicationSet.

bootstrap/
  appset-addon-cert-manager.yaml
  appset-addon-external-secrets.yaml
  appset-addon-kyverno.yaml

Each ApplicationSet will use the cluster generator to create an Application per cluster registered to Argo CD, but only if two conditions are met:

  1. The cluster name (from the akuity.io/argo-cd-cluster-name label) is not in-cluster. This condition prevents Argo CD from attempting to deploy add-ons to the cluster it is running in.

  2. The cluster has a label in the format enable_<add_on_name> set to 'true'. This ensures that clusters opt in to add-ons, rather than add-ons being deployed to every cluster by default.

generators:
  - clusters:
      selector:
        matchExpressions:
          # Don't deploy add-ons to the cluster running Argo CD (i.e. the Akuity Platform).
          - {key: 'akuity.io/argo-cd-cluster-name', operator: NotIn, values: [in-cluster]}

          # Check the label to see if the add-on is enabled.
          - key: enable_cert_manager
            operator: In
            values: ['true']

The results from the generator look like:

- name: prod
  nameNormalized: prod
  server: https://cluster-prod:8001
  metadata:
    labels:
      key: value
    annotations:
      key: value

The labels and annotations are sourced from the cluster configuration, which on the Akuity Platform is defined in the Cluster resource (or from the Dashboard).

apiVersion: argocd.akuity.io/v1alpha1
kind: Cluster
metadata:
  name: dev
  labels:
    environment: "dev"
    enable_cert_manager: "true"

In addition to the selectors, the clusters generator includes values, which are added to each result for use in the Application template. The addonChartVersion value specifies the version of the Helm chart to deploy.

generators:
  - clusters:
      values:
        # Default chart version.
        addonChartVersion: v1.13.1

These values are used in combination with a top-level merge generator to enable specifying the Helm chart version based on the environment (arbitrary groupings of clusters).

generators:
  - merge:
      mergeKeys: [server]
      generators:
        # ... (main cluster generator)
        - clusters:
            selector:
              matchLabels:
                environment: dev
            values:
              addonChartVersion: v1.14.5

The merge generator combines the results from the two generators based on the server key. The values on the main clusters generator become the defaults, which generators later in the list override for matching clusters.
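
For example, a cluster labeled environment: dev would produce a merged result along these lines (an illustrative sketch; the server URL is hypothetical):

- name: dev
  nameNormalized: dev
  server: https://cluster-dev:8001
  values:
    # From the environment: dev generator, overriding the default v1.13.1.
    addonChartVersion: v1.14.5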

The add-ons are sourced directly from their Helm chart repositories, along with values files from a Git repository. Traditionally, accomplishing this combination required a Helm umbrella chart or specifying the values inline in the Application manifest. Here, however, you will take advantage of the multiple sources feature added in Argo CD 2.6.

sources:
  # Helm chart source.
  - repoURL: 'https://charts.jetstack.io'
    chart: 'cert-manager'
    targetRevision: '{{.values.addonChartVersion}}'
    helm:
      releaseName: 'cert-manager'
      ignoreMissingValueFiles: true
      valueFiles:
        - '$values/clusters/{{.name}}/addons/cert-manager.yaml'

  # Helm valueFiles source.
  - repoURL: 'https://github.com/akuity-cluster-addons-workshop/control-plane'
    targetRevision: 'HEAD'
    ref: values

The first source is for the Helm chart repository, which uses the addonChartVersion from the generator as the targetRevision for the Helm chart.

Then, each cluster can override the default values with a file named after the add-on in the clusters/<cluster-name>/addons/ folder (e.g. for the prod cluster and the cert-manager add-on, the values file would be clusters/prod/addons/cert-manager.yaml).

clusters/
  dev/
    addons/
      cert-manager.yaml
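
For example, clusters/dev/addons/cert-manager.yaml might contain an override like this (an illustrative sketch; installCRDs is a standard cert-manager chart value, but the workshop's actual overrides may differ):

# Install the cert-manager CRDs along with the chart.
installCRDs: true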

The second source fetches the cluster-specific values file from the GitOps repository for use by the first source. The ref key in the second source assigns it a name (values) that other sources can use to reference its files. The first source uses that name in helm.valueFiles: the $values prefix resolves to the root of the second source, followed by the path to the values file within it.

ref: values
# ...
helm:
  valueFiles:
    - '$values/clusters/{{.name}}/addons/cert-manager.yaml'

3.2. Enable the cert-manager Add-on for dev

You'll start by deploying the cert-manager add-on for the dev cluster by adding the enable_cert_manager: true label to the cluster configuration.

Run the following command:

akuity argocd cluster update \
  --instance-name=cluster-addons dev \
  --label enable_cert_manager=true
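
Once the ApplicationSet controller reconciles, it should generate a cert-manager Application for the dev cluster. You can confirm this from the CLI (assuming the argocd login from earlier):

# A cert-manager Application targeting the dev cluster should now appear.
argocd app list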

4. Deploying Applications

4.1. cluster-apps ApplicationSet

The apps are unique to each cluster, so they are located in the clusters/<cluster-name>/apps folder.
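
For example, apps for the dev cluster would live under the following layout (illustrative; guestbook.yaml is a hypothetical manifest):

clusters/
  dev/
    apps/
      guestbook.yaml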

Similar to the cluster-addons ApplicationSet, the cluster-apps ApplicationSet uses the cluster generator to create an Application for each cluster.

generators:
  - clusters:
      selector:
        matchExpressions:
          - {key: 'akuity.io/argo-cd-cluster-name', operator: NotIn, values: [in-cluster]}
          - key: ready
            operator: In
            values: ['true']

Using the same selector technique, the creation of the Application is gated on the ready: true label in the cluster configuration. Once the cluster has this label, the ApplicationSet generates an apps-<cluster-name> Application pointing to the clusters/<cluster-name>/apps folder (using the name attribute supplied by the cluster generator).

metadata:
  name: 'apps-{{name}}'
# ...
source:
  repoURL: https://github.com/akuity-cluster-addons-workshop/control-plane
  targetRevision: HEAD
  path: 'clusters/{{name}}/apps'

The Application is configured to take any plain Kubernetes manifest in the folder and deploy it to the Argo CD control plane (i.e. the in-cluster destination in the argocd namespace). Therefore, the folder is expected to contain Argo CD manifests (e.g. Application, ApplicationSet, and AppProject).

source:
  # ...
  directory:
    recurse: true
destination:
  name: 'in-cluster'
  namespace: 'argocd'
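
For example, a developer could commit an Application manifest like this to the folder, such as the hypothetical guestbook.yaml from the layout above (the app, names, and namespaces are illustrative):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook-dev
  namespace: argocd
spec:
  project: default
  source:
    # Argo CD's public example apps repository.
    repoURL: https://github.com/argoproj/argocd-example-apps
    targetRevision: HEAD
    path: guestbook
  destination:
    # Deploy to the cluster registered as "dev".
    name: dev
    namespace: guestbook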

4.2. Mark the Cluster as Ready

Now that the add-ons have been deployed to the cluster, you'll add the ready: true label to the cluster configuration to mark the cluster as ready for Applications.

Run the following command:

akuity argocd cluster update \
  --instance-name=cluster-addons dev \
  --label ready=true
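
Based on the template shown earlier, the cluster-apps ApplicationSet should now generate an Application named apps-dev. You can verify it from the CLI (assuming the earlier argocd login):

# Inspect the generated Application for the dev cluster.
argocd app get apps-dev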