
Advanced GitOps Workshop

1. Overview

The Advanced GitOps tutorial, presented by Akuity, will take you through creating a self-service multi-tenant Kubernetes environment using Argo CD and GitHub Actions. The tutorial will cover the following topics:

  • Automating Application generation using ApplicationSets.

  • Using GitHub for Single Sign-on (SSO).

  • Managing manifests for promotion between environments.

  • Enabling self-service environment creation for teams.

Naturally, the Argo CD configuration is managed by the cluster administrators. However, developers will not need administrators to create their environments. With GitOps, teams can self-onboard, leveraging Helm Charts provided by the administrators to abstract the resources that comprise an environment and pull requests to propose the creation of and changes to them.

1.1. Prerequisites

The tutorial assumes you have experience working with Kubernetes, GitOps, GitHub Actions (or a similar CI engine), and Argo CD. As such, some underlying concepts won't be explained during this tutorial.

The tutorial requires that you have the following:

  • a dedicated Kubernetes cluster with Cluster Admin access.

    • This tutorial can be completed with a local Kubernetes cluster on your machine and does not require a publicly accessible cluster, only egress traffic (i.e., internet access). Consider using Docker Desktop and kind (a minimal kind configuration is sketched after this list).

    • Admin access is required to create namespaces and cluster roles.

  • a GitHub Account - you will use this to:

    • host public repositories for the control plane, demo application, and deployment configuration.

    • use GitHub Codespaces to run Terraform for repository setup.

    • run GitHub Actions. (The repositories are public, so there is no cost.)

    • create an Argo CD instance on the Akuity Platform.

  • the Kubernetes command-line tool, kubectl.

  • a local IDE set up with access to your GitHub account.

    tip

    Throughout this tutorial, all changes to repositories can be done from your browser by changing github.com to github.dev in the URL for the repository. See the GitHub docs for more details.

  • a browser with internet access.
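
If you choose kind for your local cluster, a minimal configuration might look like the sketch below. This is only an illustration, assuming a single-node cluster with internet egress is sufficient; the file name kind.yaml and the cluster name gitops-workshop are hypothetical.

    # kind.yaml - a minimal single-node cluster (hypothetical example)
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    name: gitops-workshop   # arbitrary; any cluster name works
    nodes:
      - role: control-plane

Create the cluster with kind create cluster --config kind.yaml and verify access with kubectl cluster-info.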

This tutorial shows placeholder text between less-than and greater-than symbols (i.e., <...>), indicating that you must substitute it with the value relevant to your scenario.

  • <username> - Your GitHub username.

2. Repositories

The tutorial will use the following repositories:

  • The control-plane repository defines the desired state of the Kubernetes platform and enables self-service onboarding for teams. It contains the following directories:

    • argocd - configuration for Argo CD (e.g., Applications, AppProjects)
    • charts - local and Umbrella Helm Charts.
    • clusters - cluster-specific configurations (e.g., ClusterSecretStore).
    • teams - a folder for each team.
  • The demo-app repository contains the source code for a simple Golang application packaged into a container image used to demonstrate Argo CD driven GitOps processes and CI.

  • The demo-app-deploy repository uses Kustomize to define the manifests to deploy the demo-app and contains the following directories (an example overlay is sketched after this list):

    • base/ - the application manifests agnostic to any environment.
      • deployment.yaml - defines the Deployment.
      • service.yaml - defines the Service.
    • env/ - the overlays specific to environments.
      • dev/
      • stage/
      • prod/
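
To illustrate the layout above, an overlay's kustomization.yaml might look roughly like the sketch below. It is a hypothetical example, not the repository's actual content: the base reference and image override are assumptions based on the structure and image name described in this tutorial.

    # env/dev/kustomization.yaml (hypothetical sketch)
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - ../../base          # pulls in deployment.yaml and service.yaml
    images:
      - name: ghcr.io/<username>/demo-app
        newTag: latest      # later updated automatically by CI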

2.1. Automated Repository Setup

You will run Terraform in a GitHub Codespace to set up the repositories and add a token to their GitHub Actions secrets.

  1. Open the automated-repo-setup repo in GitHub Codespaces.

  2. Generate a GitHub Personal Access Token (PAT).

  • Set Note to adv-gitops-workshop.
  • Set the Expiration to 7 days.
  • Under "Select scopes", select:
    • repo
    • write:packages
    • user:email
    • delete_repo - optional, include if you want to use Terraform to clean up the repos after the workshop.
  • Click Generate token at the bottom of the page.
  • Copy the generated PAT.
  3. Use Terraform to create the repositories.

  • Run terraform apply in the terminal of the Codespace.
  • When prompted for var.github_token, paste the PAT.
  • Review the plan, type yes, then hit return/enter to apply it.
  • This will result in the creation of three GitHub repositories under your account.
Bonus Question (1)

What is the purpose of the github_release.setup-complete resource?

At this point, the Codespace can be stopped.

  1. Hit cmd/ctrl + shift + p.
  2. Enter >codespaces: Stop Current Codespace.
  3. Hit return/enter.

2.2. Building Images

To deploy an application using Argo CD, you need a container image. It would be boring if you deployed a pre-built image, so you will build one specific to you!

You are going to leverage GitHub Actions to build an image and push it to the GitHub Container Registry (GHCR). The workflow is already defined in the repository and was triggered by the demo-app repository creation.

  1. View the image created in the GitHub Packages page.
    • Navigate to https://github.com/<username>/demo-app/pkgs/container/demo-app

The demo-app image is available at ghcr.io/<username>/demo-app with the latest and the commit ID (SHA) tags.

If the GitHub Actions workflow fails to push the demo-app image with a 403 response, either the token is missing the write:packages scope, or the package already exists for your user but the repository has not been granted access to it.

3. Argo CD Setup

The control plane setup assumes that you have already prepared a Kubernetes cluster with internet access.

3.1. Creating an Argo CD Instance

The tutorial demonstrates deploying resources to clusters connected to Argo CD. You will create an Argo CD instance using the Akuity Platform to simplify the installation and connecting clusters.

tip

Sign up for a free 14-day trial of the Akuity Platform!

  1. Log into the Akuity Platform with your GitHub account.

    • If you don't already have an organization (i.e., it's your first time logging in), create one named after your GitHub username.
  2. Create an Argo CD instance on the Akuity Platform with the name adv-gitops (or any permitted string).

  3. Enable Declarative Management for the Argo CD instance.

info

Declarative Management enables using the Akuity Platform with the App of Apps pattern and ApplicationSets by exposing the in-cluster destination. Using the in-cluster destination means an Application can create other Applications in the cluster hosting Argo CD.

  4. Connect your Kubernetes cluster by creating an agent named workshop.

    caution

    Using the name workshop is crucial because the manifests provided for the tutorial assume this is the destination.name available for Applications in Argo CD.

  5. Enable the admin user and generate a password.

  6. Access your Argo CD instance.

4. Cluster Bootstrapping

4.1. Automating Application Creation with ApplicationSets

Each cluster will get two addons (kyverno and external-secrets), the cluster-specific configurations, and Namespaces for the teams. Each of these requires an Application in Argo CD to install it into the cluster.

Manually creating Applications is error-prone and tedious. Scripting config management tools can automate the process, but there is a better way. ApplicationSets template Applications and populate them using generators.

These are the ApplicationSets in the control-plane repo:

  • argocd/clusters-appset.yaml will generate an Application for each cluster registered to Argo CD (excluding the in-cluster) and point it to the clusters/{{name}} folder in the control-plane repo.

  • argocd/addons-appset.yaml will generate an Application for each combination of cluster registered to Argo CD (excluding the in-cluster) and Helm Chart folder in the list generator.

    # generators
    - list:
        elements: # The values in the pairs below are folder names from `charts/`.
          - addonChart: external-secrets
          - addonChart: kyverno
    - clusters:
        selector:
          matchExpressions:
            - {key: 'akuity.io/argo-cd-cluster-name', operator: NotIn, values: [in-cluster]}

    The addons-values.yaml file from the clusters/{{name}}/ folder is passed to each Helm Chart. Using Umbrella charts (setting sub-charts as dependencies in the local chart), the same values file can be passed to each chart without fear of conflicts.

    # clusters/{{name}}/addons-values.yaml
    external-secrets: {}
    kyverno: {}

    # values file from the cluster folder
    source:
      helm:
        valueFiles:
          - '../../clusters/{{name}}/addons-values.yaml'
  • argocd/teams-appset.yaml will generate an Application for each folder in teams/. The resulting Application will use the team Helm Chart (in the charts/ folder) with the values.yaml from the team's folder.


    # File structure of the `teams` folder.
    teams/
      <username>/
        values.yaml

    # ApplicationSet `git` generator.
    - git:
        repoURL: https://github.com/<username>/control-plane
        revision: HEAD
        directories:
          - path: "teams/*"

    # Application spec template using the values file.
    source:
      helm:
        releaseName: '{{path.basename}}' # The team name.
        valueFiles:
          - '../../{{path}}/values.yaml' # The team's folder.
  • In the team Helm Chart, the repo-appset.yaml template will create an ApplicationSet for each item in the repos value. Each one will generate an Application for the repository name and all the folders found under env/.

    {{- range .Values.repos }}
    ---
    apiVersion: argoproj.io/v1alpha1
    kind: ApplicationSet
    metadata:
      name: '{{ $.Release.Name }}-{{ . | trimSuffix "-deploy" }}'
    spec:
      generators:
        - git:
            repoURL: 'https://github.com/{{ $.Values.githubOrg | default $.Release.Name }}/{{ . }}'
            revision: HEAD
            directories:
              - path: env/*
      template:
        metadata:
          name: '{{ $.Release.Name }}-{{ . | trimSuffix "-deploy" }}-{{`{{path.basename}}`}}'
        spec:
          project: '{{ $.Release.Name }}'
          source:
            repoURL: 'https://github.com/{{ $.Values.githubOrg | default $.Release.Name }}/{{ . }}'
            targetRevision: 'HEAD'
            path: '{{`{{path}}`}}'
          destination:
            name: '{{ $.Values.cluster }}'
            namespace: '{{ $.Release.Name }}-{{`{{path.basename}}`}}'
          syncPolicy:
            automated: {}
    {{- end }}
    • Notice the use of {{`{{path}}`}} in the template; wrapping the expression in backticks prevents Helm from interpreting the ApplicationSet template syntax ({{path}}) while rendering the chart.

4.2. Bootstrapping with App of Apps

To enable the GitOps process, a top-level Application is used to manage the Argo CD configuration and propagate repository changes to the argocd namespace on the in-cluster destination (i.e., the Akuity Platform control plane).

  1. Navigate to the Argo CD UI.

  2. Create an Application to manage the Argo CD configuration using the argocd-app.yaml manifest at the root of the control-plane repo.

    • Click + NEW APP.

    • Click EDIT AS YAML.

    • Paste the contents of argocd-app.yaml.

    • Click SAVE.

    • Click CREATE.

The argocd Application resource tree.

After creating the argocd Application, the automated sync policy deployed the addons, clusters, and teams ApplicationSets, along with the default AppProject (taking ownership of the existing project).

The clusters ApplicationSet generated an Application for the workshop cluster, which created a couple of Kyverno ClusterPolicy resources and an External Secrets SecretStore.

The addons ApplicationSet generated a Kyverno and External Secrets Application for the workshop cluster.

The addons create the CRDs on which resources in the cluster Application depend. This setup relies on the sync retry mechanism to make the resources eventually consistent.
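
For reference, the top-level Application is conceptually similar to the sketch below. Treat it as an approximation, not the file itself: the repoURL placeholder, project, and retry settings are assumptions, and the argocd-app.yaml in your control-plane repo remains the source of truth.

    # Approximation of argocd-app.yaml (the file in the repo is authoritative)
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: argocd
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://github.com/<username>/control-plane
        targetRevision: HEAD
        path: argocd                # the ApplicationSets and AppProjects
      destination:
        name: in-cluster            # the Akuity Platform control plane
        namespace: argocd
      syncPolicy:
        automated: {}
        retry:
          limit: 5                  # resources converge once the addon CRDs exist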

5. Self-service Teams

Each team needs an AppProject, a Namespace for each environment, and Applications. A team can request to create these resources, but they are ultimately managed by the administrators who approve the Pull Request.

The teams are provided with a Helm Chart, charts/team, to abstract the concepts of Namespaces, AppProjects, and ApplicationSets. Instead of writing those manifests directly, a team creates a new folder under teams/ and adds the example values.yaml from teams/USERNAME (a hypothetical example is sketched below).
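
Based on the values referenced by the repo-appset.yaml template shown earlier (cluster, repos, and the optional githubOrg), a team's values file might look roughly like this hypothetical sketch; the example file in the control-plane repo is the source of truth.

    # teams/<username>/values.yaml (hypothetical sketch)
    cluster: workshop        # the destination.name of the connected agent
    repos:
      - demo-app-deploy      # each entry produces an ApplicationSet
    # githubOrg: <username>  # optional; defaults to the team (release) name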

5.1. Creating a Team Environment

You will create a team for the tutorial using your GitHub username.

  1. Open the control-plane repository.

  2. Copy the teams/EXAMPLE folder to a new folder named with your GitHub username, converted to lowercase if required (i.e., teams/<username>).

  3. Commit the changes to the main branch.

Once the teams ApplicationSet has detected the change in Git, it will create an Application for the Namespaces on the cluster, the AppProject for the team, and an ApplicationSet for each repo in the values.yaml.

On the Applications page of Argo CD, select the team's project on the filter to experience what they would see.

The demo-app is deployed using the latest tag to each namespace. Check out the Deployment logs to confirm that your <username> is being printed.

Bonus Question (2)

Why is a separate Application used to create the Namespace resources instead of using the CreateNamespace sync option?

6. Rendered Manifests

Using config management tools, like Kustomize, makes it much easier to maintain the environment-specific configuration. However, it also makes it much more challenging to determine what exactly changed in the resulting manifests, which is a critical requirement for production environments. Luckily, there is a solution for this: the Rendered Manifests pattern!

The idea is to create a branch with plain YAML manifests generated by the config management tool (i.e., Kustomize). The process of maintaining rendered manifests branches, of course, should be automated. A Continuous Integration system, like GitHub Actions, is a perfect tool for this task.
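
As a mental model, a rendering job could look roughly like the sketch below: build each overlay with Kustomize and force-push the output to a branch named after the path (e.g., env/dev), matching the targetRevision change made later in this section. The job and step names, bot identity, and branch layout are assumptions; the rendered-manifests.yaml workflow shipped in demo-app-deploy is the source of truth.

    # Simplified sketch of a render job (the repo's rendered-manifests.yaml is authoritative)
    jobs:
      render-manifests:
        runs-on: ubuntu-latest
        permissions:
          contents: write
        strategy:
          matrix:
            env: [dev, stage, prod]
        steps:
          - uses: actions/checkout@v4
          - uses: imranismail/setup-kustomize@v1
          - name: Render env/${{ matrix.env }} and push it to its branch
            run: |
              git config user.name "Render Bot"
              git config user.email "no-reply@example.com"
              kustomize build "env/${{ matrix.env }}" -o rendered.yaml
              git checkout --orphan "env/${{ matrix.env }}"
              git rm -rf --cached .
              git add rendered.yaml
              git commit -m "render(${{ matrix.env }}): ${GITHUB_SHA}"
              git push --force origin "env/${{ matrix.env }}"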

6.1. Adding Rendered Manifests to the demo-app-deploy Repo

The demo-app-deploy repo contains the application's manifests, so you'll add a workflow to generate the rendered manifests.

  1. Navigate to your demo-app-deploy repo.

  2. Open .github/workflows/rendered-manifests.yaml:

    • Uncomment the render-manifests job.

    • Delete the placeholder job.

    • Commit and push the changes.

    • Once the workflow runs, it will create a branch for each environment and push the rendered manifests to that branch.

  3. Navigate to your control-plane repository to update the team Helm Chart (charts/team) to use the rendered manifests from the branches.

    • Open the file charts/team/templates/repo-appset.yaml.

    • Update the spec.template.spec.source.targetRevision to {{ "{{" }}path{{ "}}" }}

    • Update the spec.template.spec.source.path to ./

# charts/team/templates/repo-appset.yaml

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: '{{ $.Release.Name }}-{{ . | trimSuffix "-deploy" }}'
spec:
  template:
    spec:
      source:
-       targetRevision: 'HEAD'
-       path: '{{ "{{" }}path{{ "}}" }}'
+       targetRevision: '{{ "{{" }}path{{ "}}" }}'
+       path: './'

Wait for the ApplicationSet to reconcile and see that nothing changes in the Applications, since the rendered manifests are the same as what was already deployed.

7. Automating Image Tag Updates

Updating the image tag in the kustomization.yaml is boring, and no one likes to make trivial tag changes manually. To make the process more efficient, you are going to automate it with GitHub Actions!

7.1. Automating the dev Environment

The dev environment is intended to track the trunk of the repository (i.e., the main branch), and there is minimal impact if a change breaks the application. Given this, you will automate the deployment to dev each time a new build is created in the demo-app repo.

  1. Navigate to your demo-app repository.

  2. Uncomment the deploy-dev job in .github/workflows/ci.yaml workflow file:

  deploy-dev:
    runs-on: ubuntu-latest
    needs: build-image
    steps:
      - uses: imranismail/setup-kustomize@v1

      - name: Update env/dev image with Kustomize
        run: |
          git config --global user.name "Deploy Bot"
          git config --global user.email "no-reply@akuity.io"
          git clone https://bot:${{ secrets.DEPLOY_PAT }}@github.com/${{ github.repository_owner }}/demo-app-deploy.git
          cd demo-app-deploy/env/dev
          kustomize edit set image ghcr.io/${{ github.repository_owner }}/demo-app:${{ github.sha }}
          git commit -a -m "chore(dev): deploy demo-app:${{ github.sha }}"
          git notes append -m "image: ghcr.io/${{ github.repository_owner }}/demo-app:${{ github.sha }}"
          git push origin "refs/notes/*" --force && git push --force
  3. Commit and push the changes, then watch the action run!

The CI deploy-dev action running.

When the workflow completes (successfully), check out the latest commit on your demo-app-deploy repo.

The automated dev deployment commit in the demo-app-deploy repo.

Once changes are pushed, the CI workflow will build a new image and update the dev environment with the corresponding image tag. Developers no longer need to manually change deployment manifests to update the dev environment.

7.2. Automating Promotion to stage and prod

Updating the manifests for the stage and prod deployments requires a more careful process, but it can also be automated.

  1. Allow GitHub Actions to create and approve pull requests on the demo-app-deploy repo.

    • Navigate to: https://github.com/<username>/demo-app-deploy/settings/actions

    • Under Workflow permissions, check the box for Allow GitHub Actions to create and approve pull requests.

    • Click Save.

  2. In your demo-app-deploy repo, open .github/workflows/promote.yaml:

    • Uncomment the promote-image-change job.

    • Delete the placeholder job.

    • Commit and push the changes.

    While the workflow is running, read on to learn more about the logic.

    Engineers might make arbitrary changes in the deployment repository, but only the image change should be automated. To distinguish the image change from other changes, the workflow uses git notes to explicitly annotate the image promotion commit. It checks the note on the commit, and if the note matches the image: <image> pattern, it propagates the image change to the staging and production environments (a simplified sketch of this check appears after this list).

    Remember the approval requirement: the approval step is implemented using a GitHub pull request. Instead of updating the production environment directly, the workflow pushes the changes to the auto-promotion-prod branch and creates a pull request, so an approver signs off on the change by merging the pull request.

  3. To demonstrate the promotion workflow, make a change in your demo-app repo. Some suggestions:

    • Add another exclamation mark to the output.
    • Change the colour of the text.
    • Break it somehow, promote that bug into prod at 17:00 on a Friday, then see how quickly you can get the fix to prod.
      • The real challenge is creating a bug that fails fast (it can't just be a syntax error, because then the build would fail).
      • Use the colour brown, as it is invalid. The fix (and the initial feature depending on it) will be out to prod in ~2:30. Plus, there is no actual downtime, since the Pod never becomes healthy and the original ReplicaSet stays up.
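
A simplified sketch of the git notes check described in step 2, assuming the image: <image> note format written by the deploy-dev job; the promote.yaml workflow in your demo-app-deploy repo is the source of truth, and the step and environment variable names here are hypothetical.

    # Simplified sketch of the git-notes check (promote.yaml in the repo is authoritative)
    - name: Look for an image promotion note on the pushed commit
      run: |
        git fetch origin "refs/notes/*:refs/notes/*"
        NOTE=$(git notes show "${GITHUB_SHA}" 2>/dev/null || true)
        if [[ "${NOTE}" =~ ^image:\ (.+)$ ]]; then
          echo "IMAGE=${BASH_REMATCH[1]}" >> "${GITHUB_ENV}"   # picked up by later promotion steps
        else
          echo "No image note found; skipping promotion."
        fi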

8. Summary

To summarize what you've built so far, you have a multi-tenant Argo CD instance. The instance is managed using a GitOps-based process where engineers can self-onboard their team by creating a pull request. Each application development team independently manages their application by maintaining a separate deployment repository. Deployment manifests are generated using Kustomize, allowing engineers to avoid duplications and efficiently introduce environment-specific changes.

The simple image tag changes are fully automated using the CI pipelines. On top of this, teams leverage the "rendered manifests" pattern, which turns the Git history into a traceable audit log of every infrastructure change. Believe it or not, you've achieved more than many companies do in several years.

9. Clean Up

Complete this clean-up checklist after finishing the tutorial. After doing so, you can follow the tutorial again from the start.

  1. Delete the Argo CD instance on the Akuity Platform.
  2. Delete the control-plane, demo-app, and demo-app-deploy repos.
    • This step can be completed by running terraform destroy in the Codespace used during the automated repo setup.
  3. Delete the workshop cluster.
  4. Delete the demo-app package from your GitHub account.
  5. Delete the GitHub PATs you generated.

10. Bonus Question Answers

  1. The repositories have a GitHub Actions workflow (update-placeholders.yaml) to update placeholder text in the files from the repository templates. The challenge is that the DEPLOY_PAT, created in the same Terraform apply, needs to exist before the workflow can run. The github_release.setup-complete resource creates a tag named setup-complete on the repositories to trigger the workflow, and it depends_on the github_actions_secret.deploy-pat resource, which ensures the workflow runs only after the GitHub secret is added.

  2. The AppProject for the team only permits creating resources in the specified cluster and in namespaces matching the pattern <team-name>-*. This means that Namespace resources can't be created by Applications in the team's project. Instead, a separate Application is created in the default project, which does permit Namespace resources.
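
To make the restriction concrete, a team's AppProject is conceptually similar to the sketch below. The field values are assumptions for illustration only; the AppProject template in charts/team is the source of truth.

    # Conceptual sketch of a team AppProject (charts/team holds the real template)
    apiVersion: argoproj.io/v1alpha1
    kind: AppProject
    metadata:
      name: <username>
      namespace: argocd
    spec:
      sourceRepos:
        - 'https://github.com/<username>/*'   # assumption for illustration
      destinations:
        - name: workshop                      # only the team's cluster
          namespace: '<username>-*'           # only namespaces matching the team pattern
      clusterResourceWhitelist: []            # no cluster-scoped resources, so no Namespaces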