
Cluster Add-ons with ApplicationSets

1. Overview

Managing clusters at scale requires standardization and automation. Tasks that look trivial with a handful of clusters become a burden when managing tens or even hundreds of them.

How did we get here? Read the backstory below.

Backstory, from the perspective of a cluster administrator named Nick:

While scrolling through Slack one morning, Nick got a ping that one of the product teams wanted to do performance testing on their application because of a new feature launch that ties together many different services. They were worried about how the influx of traffic might affect the environment. Given this fear, Nick decided to provision an environment dedicated to performance testing, based on the stage environment.

Right now, they are using a cluster for each environment (dev, stage, and prod), with a folder for each one that contains the Argo CD Applications for the “add-ons” (a standard set of Kubernetes resources used by applications, expected to be on every cluster).

diagram of cluster folders

Adding this new environment with a new cluster requires lots of copying and pasting of Argo CD Applications from an existing cluster configuration to a folder for the new one, and changing any details specific to the cluster (e.g. cluster name). While not a huge burden at this scale, it’s easy to forget to change something.

diagram of copying Applications between cluster folders

Nick updated the Terraform configuration to set up the new perf environment and create the cluster. Nick added the cluster to the central Argo CD instance and created the App-of-Apps to deploy the Applications from the cluster’s folder in their GitOps repo. Then, Nick confirmed that all of the Applications were synced and healthy.

diagram of terraform creating a cluster and argo cd pull from git to the cluster

After handing over the reins to the product team, Nick got a ping on Slack from a developer who deployed their application to the perf environment, but it wasn't working the way it did in stage. After some investigation, Nick found that, in the transition to the new cluster, the Application for cert-manager pointed to the values file of the stage cluster. The values file contained the wrong subdomain, leading to the DNS challenge failing when the dev's application requested a new cert: a minor oversight when copying over the Argo CD Applications for the new cluster's add-ons.

diagram of mistake copying Applications

Nick thinks to himself, there must be a better way! A life without toil. 💡

In this tutorial presented by Akuity, attendees will learn how to leverage ApplicationSets, a powerful Argo CD feature, to streamline the deployment of a standard set of "add-ons" (like Sealed Secrets, cert-manager, and Kyverno) to each cluster. We will address the dynamic nature of clusters by demonstrating how they can "opt-in" to specific tooling and configurations, allowing for flexibility without sacrificing standardization.

Manually creating Applications is error-prone and tedious, especially when the Applications are nearly identical aside from a few values that can be determined automatically. ApplicationSets template Applications and populate them using generators.

diagram of an ApplicationSet creating Applications
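To see the shape of the feature before diving into the workshop's own manifests, here is a minimal sketch of an ApplicationSet using the simplest generator (list). The repo, app, and cluster names are hypothetical:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook            # hypothetical example, not part of the workshop repo
  namespace: argocd
spec:
  goTemplate: true           # enables the {{.param}} template syntax used below
  generators:
    - list:                  # the list generator emits one item per element
        elements:
          - cluster: dev
          - cluster: prod
  template:                  # the Application template, rendered once per item
    metadata:
      name: 'guestbook-{{.cluster}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/guestbook   # hypothetical repo
        targetRevision: HEAD
        path: manifests
      destination:
        name: '{{.cluster}}'
        namespace: guestbook

This would render two Applications, guestbook-dev and guestbook-prod, differing only in their destination cluster. The ApplicationSets in this workshop use the same mechanics, but with generators that discover their items dynamically.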

In the control-plane repo, the bootstrap folder contains two ApplicationSets, one for the cluster add-ons (cluster-addons-appset.yaml) and one for the cluster apps (cluster-apps-appset.yaml). The distinction between "add-ons" and "apps" is the delineation between cluster admin and developer. The cluster admin is primarily responsible for the "add-ons", as they are central to the cluster and used by many other teams. The developers are responsible for their apps.

diagram with two ApplicationSets with one for add-ons and one for apps with dev/admin split
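Putting those file names in context, the relevant portion of the control-plane repo looks like this (only the bootstrap folder is shown):

control-plane/
  bootstrap/
    cluster-addons-appset.yaml
    cluster-apps-appset.yaml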

1.1. Prerequisites

The tutorial assumes you have experience working with Kubernetes, GitOps, and Argo CD. As such, some underlying concepts won't be explained during this tutorial.

The tutorial requires that you have the following:

2. Creating the Workshop Environment

2.1. Kubernetes Clusters using devcontainer

To demonstrate multi-cluster management, you will create an environment for this workshop using a devcontainer. The devcontainer will be built based on the specification in the devcontainer.json in the workshop repo. Then, it will start up two minikube clusters named dev and prod.

The easiest way to get started is with GitHub Codespaces (especially if you are on conference wifi).

Open workshop in GitHub Codespaces

2.2. Akuity Platform Sign Up

This scenario demonstrates deploying applications to a cluster external to Argo CD.

Just as the GitHub repo hosts the Helm charts, which describe what resources to deploy into Kubernetes, the Akuity Platform will host the Application manifests, which describe how to deploy those resources into the cluster, along with Argo CD itself, which will implement the changes on the cluster.

tip

Sign up for a free 30-day trial of the Akuity Platform!

  1. Create an account on the Akuity Platform.

  2. To log in with GitHub SSO, click "Continue with GitHub".

note

You can also use Google SSO or an email and password combo.

  3. Click Authorize akuityio.

  4. Click the create or join link.

  5. Click + New organization in the upper right hand corner of the dashboard.

  6. Name your organization following the rules listed below the Organization Name field.

2.3. Create your Argo CD Instance

You can create your Argo CD instance using either the Akuity Platform dashboard or the akuity CLI. The steps below use the dashboard.

  1. Navigate to Argo CD.

  2. Click + Create in the upper right hand corner of the dashboard.

  3. Name your instance following the rules listed below the Instance Name field.

  4. (Optionally) Choose the Argo CD version you want to use.

  5. Click + Create.

At this point, your Argo CD instance will begin initializing. The start-up typically takes under 2 minutes.

2.3.1. Configure Your Instance

While the instance is initializing, you can prepare it for the rest of the lab.

  1. In the dashboard for the Argo CD instance, click Settings.

  2. On the inner sidebar, under "Security & Access", click System Accounts.

  3. Enable the "Admin Account" by clicking the toggle and clicking Confirm on the prompt.

  4. Then, for the admin user, click Set Password.

  5. Enter the password akuity-argocd, then click Submit.

  6. In the top, next to the Argo CD instance name and status, click the instance URL (e.g., <instance-id>.cd.akuity.cloud) to open the Argo CD login page in a new tab.

  7. Enter the username admin and the password akuity-argocd.

  8. Log into the argocd CLI:

    argocd login \
      "<instance-id>.cd.training.akuity.cloud" \
      --username admin \
      --password akuity-argocd \
      --grpc-web

2.3.2. Deploy an Agent to the Cluster

You must connect the cluster to Argo CD to deploy the application resources. The Akuity Platform uses an agent-based architecture for connecting external clusters. So, you will provision an agent and deploy it to the cluster.

  1. Back on the Akuity Platform, in the top left of the dashboard for the Argo CD instance, click Clusters.

  2. In the top right, click Connect a cluster.

  3. Enter dev as the "Cluster Name".

  4. In the bottom right, click Connect Cluster.

  5. To get the agent install command, click Copy to Clipboard. Then, in the bottom right, Done.

  6. Open your terminal and set the cluster context to dev by running:

    cluster-dev

  7. Paste and run the command against the cluster. The command will create the akuity namespace and deploy the resources for the Akuity Agent.

  8. Check the pods in the akuity namespace. Wait for the Running status on all pods (approx. 1 minute).

    % kubectl get pods -n akuity
    NAME                                                      READY   STATUS    RESTARTS   AGE
    akuity-agent-<replicaset-id>-<pod-id>                     1/1     Running   0          65s
    akuity-agent-<replicaset-id>-<pod-id>                     1/1     Running   0          65s
    argocd-application-controller-<replicaset-id>-<pod-id>    2/2     Running   0          65s
    argocd-notifications-controller-<replicaset-id>-<pod-id>  1/1     Running   0          65s
    argocd-redis-<replicaset-id>-<pod-id>                     1/1     Running   0          65s
    argocd-repo-server-<replicaset-id>-<pod-id>               1/1     Running   0          64s
    argocd-repo-server-<replicaset-id>-<pod-id>               1/1     Running   0          64s

    Re-run the kubectl get pods -n akuity command to check for updates on the pod statuses.

  9. Back on the Clusters dashboard, confirm that the cluster shows a green heart before the name, indicating a healthy status.

  10. Repeat these steps for the prod cluster.

2.3.3. Bootstrapping Argo CD

To deploy the ApplicationSets that will automate the creation of Applications for your clusters, you will manually create an initial bootstrap Application.

  1. Navigate to the UI of your Argo CD instance.

  2. Create an Application to manage the Argo CD configuration using the bootstrap-app.yaml manifest in the workshop control-plane repo.

    • Click + NEW APP.

    • Click EDIT AS YAML.

    • Paste the contents of bootstrap-app.yaml.

    • Click SAVE.

    • Click CREATE.

You now have a fully-managed Argo CD instance 🎉

After the bootstrap Application automatically syncs, it will create two ApplicationSets: cluster-addons and cluster-apps. Notice that they do not create any Applications. Each cluster can opt in to add-ons, meaning that nothing is deployed unless the cluster indicates that it should be.

3. Enabling Cluster Add-ons

3.1. cluster-addons ApplicationSet

The add-ons are located in the charts/addons folder, which contains Helm umbrella charts for the various add-ons deployed to clusters. These umbrella charts contain the default values for any cluster that opts into the add-on. Each cluster can override the defaults with a file named after the add-on in the clusters/<cluster-name>/addons/ folder (e.g. for the dev cluster and the cert-manager add-on, the values file would be clusters/dev/addons/cert-manager.yaml).

charts/
  addons/
    cert-manager/
      Chart.yaml
      values.yaml

clusters/
  dev/
    addons/
      cert-manager.yaml
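As an example, a dev-specific override file for the cert-manager add-on might look like the following. The keys are illustrative; in an umbrella chart, overrides for a subchart are nested under the dependency's name, and the real keys depend on the chart:

# clusters/dev/addons/cert-manager.yaml (hypothetical contents)
cert-manager:
  installCRDs: true   # example override; actual keys depend on the umbrella chart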

The cluster-addons ApplicationSet is a complex example of generators. It contains two different generators, git and clusters, and a matrix meta-generator to bring it all together.

generators:
  - matrix:
      generators:
        - git:
            repoURL: https://github.com/akuity-cluster-addons-workshop/control-plane
            revision: HEAD
            directories:
              - path: charts/addons/*
        - clusters:
            selector:
              matchExpressions:
                - {key: 'akuity.io/argo-cd-cluster-name', operator: NotIn, values: [in-cluster]}
                - key: enable_{{ .path.basename | snakecase }}
                  operator: In
                  values: ['true']

The git generator is fairly straightforward.

generators:
  - git:
      repoURL: https://github.com/akuity-cluster-addons-workshop/control-plane
      revision: HEAD
      directories:
        - path: charts/addons/*

It takes a repoURL, a revision, and a directories list, which together determine where to get generator attributes from. In this case, the git generator will produce an item for each folder in charts/addons of the control-plane repo on the main branch (HEAD points to the tip of the main branch). The result will be:

- path.basename: cert-manager
- path.basename: external-secrets
- path.basename: kyverno

The cluster generator is more complex.

generators:
  - clusters:
      selector:
        matchExpressions:
          - {key: 'akuity.io/argo-cd-cluster-name', operator: NotIn, values: [in-cluster]}
          - key: enable_{{ .path.basename | snakecase }}
            operator: In
            values: ['true']

For each cluster registered with Argo CD, the cluster generator produces the name, server, labels, and annotations attributes based on the cluster's configuration. For example:

- name: dev
  server: https://....
  metadata:
    labels:
      enable_cert_manager: true
    annotations: {}

The results returned from the cluster generator are filtered based on the metadata.labels, with two conditions:

  • the akuity.io/argo-cd-cluster-name label (which is automatically added to each cluster configuration by the Akuity Platform), excluding any that contain the value in-cluster. This prevents the ApplicationSet from attempting to deploy add-ons to the cluster hosting Argo CD, which on the Akuity Platform will only accept Argo CD manifests.

  • the label enable_{{ .path.basename | snakecase }} with the value true. This expression takes the attributes returned from the git generator to determine what labels to filter on. So if the git generator returned .path.basename: cert-manager, the cluster generator will only include clusters with the enable_cert_manager: true label.

    This label is what enables clusters to opt in to add-ons based on their metadata. If a cluster does not have an enable_ label corresponding to the add-on folder name, the add-on will not be deployed to that cluster. (The sketch below shows how such a label appears on a cluster's configuration.)
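For context, the Akuity Platform manages these labels through the cluster configuration (you will set one in section 3.2). In a vanilla Argo CD install, the same opt-in would be expressed as labels on the declarative cluster Secret; a sketch, with a hypothetical server URL:

apiVersion: v1
kind: Secret
metadata:
  name: dev-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster   # marks this Secret as a cluster config
    enable_cert_manager: 'true'               # opts the cluster in to the add-on
stringData:
  name: dev
  server: https://dev.example.com:6443        # hypothetical API server URL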

Then, to bring it all together, the matrix generator will combine the results from both the git and cluster generators. Each combination of a cluster (matching the selector expressions) and an add-on found in the charts/addons folder will produce an Application.

Take the example of a cluster named dev with the label enable_cert_manager: true, and a charts/addons folder that contains the folders cert-manager and external-secrets.

The git generator will return:

- path.basename: cert-manager
- path.basename: external-secrets

The cluster generator will return:

- name: dev

The matrix generator will combine the results and return:

- name: dev
  path.basename: cert-manager

The combination of the dev cluster and the external-secrets add-on is omitted due to the selector on the cluster generator.

Once these conditions are met (the cluster has the enable_xxx label and the add-on folder exists), the ApplicationSet will produce an Application whose source points to the add-on's Helm umbrella chart, using the path attribute supplied by the git generator.

source:
  repoURL: 'https://github.com/akuity-cluster-addons-workshop/control-plane'
  path: '{{.path.path}}'
  targetRevision: HEAD

The Application uses the add-on name (the path.basename attribute supplied by the git generator) as the release name when templating the Helm chart, and uses the cluster name and add-on name to determine which values file to supply to the chart. The use of ignoreMissingValueFiles here allows the Application to work even if no cluster-specific overrides have been defined in the repo.

source:
  ...
  helm:
    releaseName: '{{.path.basename}}'
    ignoreMissingValueFiles: true
    valueFiles:
      - '../../../clusters/{{.name}}/addons/{{.path.basename}}.yaml'

And finally, the cluster name is used for the destination cluster and the add-on name is used for the destination namespace.

destination:
  namespace: '{{.path.basename}}'
  name: '{{.name}}'
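Putting the pieces together for the dev cluster and the cert-manager add-on, the rendered Application would look roughly like this (the metadata name and project are illustrative, since the ApplicationSet's name template and project are not shown above):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dev-cert-manager    # illustrative; the actual name depends on the template
  namespace: argocd
spec:
  project: default          # assumption; not shown in the snippets above
  source:
    repoURL: 'https://github.com/akuity-cluster-addons-workshop/control-plane'
    path: charts/addons/cert-manager
    targetRevision: HEAD
    helm:
      releaseName: cert-manager
      ignoreMissingValueFiles: true
      valueFiles:
        # resolves from charts/addons/cert-manager back to the repo root
        - '../../../clusters/dev/addons/cert-manager.yaml'
  destination:
    namespace: cert-manager
    name: dev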

3.2. Enable the cert-manager Add-on for dev

You'll start by deploying the cert-manager add-on for the dev cluster by adding the enable_cert_manager: true label to the cluster configuration.

Run the following command:

akuity argocd cluster update \
  --instance-name=cluster-addons dev \
  --label enable_cert_manager=true
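Once the label is applied, the ApplicationSet controller will detect the matching cluster and generate the Application. One way to confirm (assuming the argocd CLI session from section 2.3.1) is to list the Applications and look for the cert-manager add-on targeting the dev cluster:

argocd app list --grpc-web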

4. Deploying Applications

4.1. cluster-apps ApplicationSet

The apps are unique to each cluster, so they are located in the clusters/<cluster-name>/apps folder.

Similar to the cluster-addons ApplicationSet, the cluster-apps ApplicationSet uses the cluster generator to create an Application for each cluster.

generators:
  - clusters:
      selector:
        matchExpressions:
          - {key: 'akuity.io/argo-cd-cluster-name', operator: NotIn, values: [in-cluster]}
          - key: ready
            operator: In
            values: ['true']

Using the same selector technique, the creation of the Application is gated on the ready: true label in the cluster configuration. Once the cluster has this label, the ApplicationSet generates an apps-<cluster-name> Application which points to the clusters/<cluster-name>/apps folder (using the name attribute supplied by the cluster generator).

metadata:
  name: 'apps-{{name}}'
...
source:
  repoURL: https://github.com/akuity-cluster-addons-workshop/control-plane
  targetRevision: HEAD
  path: 'clusters/{{name}}/apps'

The Application is configured to take any plain Kubernetes manifest in the folder and deploy it to the Argo CD control-plane (i.e. the in-cluster destination in the argocd namespace). Therefore, the folder is expected to contain Argo CD manifests (e.g. Application, ApplicationSet, and AppProject).

source:
  ...
  directory:
    recurse: true
destination:
  name: 'in-cluster'
  namespace: 'argocd'
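As an example, a developer could commit a hypothetical Application manifest like the following to clusters/dev/apps/; the repo URL, path, and names are all illustrative:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook-dev       # hypothetical developer-owned app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/guestbook   # hypothetical repo
    targetRevision: HEAD
    path: manifests
  destination:
    name: dev               # deploy the app's resources to the dev cluster
    namespace: guestbook

Because the apps-dev Application syncs this manifest to the in-cluster destination in the argocd namespace, Argo CD picks it up as a new Application and deploys the workload to the dev cluster.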

4.2. Mark the Cluster as Ready

Now that the add-ons have been deployed to the cluster, you'll add the ready: true label to the cluster configuration to mark the cluster as ready for Applications.

Run the following command:

akuity argocd cluster update \
  --instance-name=cluster-addons dev \
  --label ready=true
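After the ApplicationSet picks up the new label, an apps-dev Application should appear. Assuming the argocd CLI session from earlier, you can check its status with:

argocd app get apps-dev --grpc-web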