
Introduction to Argo CD for FinTech

This tutorial will walk you through building a container image with GitHub Actions, and then implementing Argo CD with the Akuity Platform to manage the deployment of that container to Kubernetes using Helm charts and GitOps.

Ultimately, you will have a Kubernetes cluster, with Applications deployed using an Argo CD control plane.

1. Prerequisites

To follow this tutorial, make sure you have a minimal working knowledge of Git, containers, Kubernetes, and Helm.

The tutorial requires that you also have a GitHub Account. You will use this to:

  • host a public repo for the GitOps configuration.
  • utilize GitHub Codespaces for the workshop environment. Ensure that you have quota available; the free tier includes 30 hours a month on a 4-vCPU machine.
  • create an account on the Akuity Platform.

The tutorial was written and tested using the following tool and component versions:

  • Argo CD: v2.10.1
  • Docker Desktop: 4.13.1
  • Kubernetes: v1.29.1
  • k3d: v5.6.3

2. Setting up Your Environment

2.1. Create the Repository from a Template

In this scenario, you will utilize GitHub to host the Git repo containing the Kubernetes manifests and the source for building a container image. To represent this in the workshop, you will create a repository from a template.

  1. Click this link or click "Use this template" from the akuity/intro-argo-cd-tutorial-template repo main page.

  2. Ensure the desired "Owner" is selected (e.g., your account and not an organization).

  3. Enter intro-argo-cd for the "Repository name".

  4. Then click Create repository from template.

  5. Next, you'll start a new Codespace by clicking the green Code button on the repo page, selecting the Codespaces tab, and then selecting Create codespace on main.

The Codespace will open in another browser tab with information about setting up your codespace. Once it's done setting up, you should see a terminal in the browser with the repo open.

2.1.1. Verify Environment

The Codespace setup installs all of the necessary tools, including k3d and kubectl. You can verify that the environment is ready by going through the following steps:

  1. Verify that k3d is installed and running a Kubernetes cluster.

    k3d cluster list

    You should see the following output:

    NAME   SERVERS   AGENTS   LOADBALANCER
    dev    1/1       0/0      true
  2. Check that the cluster works by running kubectl get nodes.

    % kubectl get nodes
    NAME                STATUS   ROLES           AGE     VERSION
    dev-control-plane   Ready    control-plane   7m44s   v1.29.2

    Fetching the nodes will demonstrate that kubectl can connect to the cluster and query the API server. The node should be in the "Ready" status.

2.2. Akuity Platform Sign Up

This scenario demonstrates deploying applications to a cluster external to Argo CD.

Similar to how the GitHub repo hosts the Helm charts, which describe what resources to deploy into Kubernetes, the Akuity Platform will host the Application manifests, which describe how to deploy those resources into the cluster, along with Argo CD itself, which will implement the changes on the cluster.

tip

Sign up for a free 30-day trial of the Akuity Platform!

  1. Create an account on the Akuity Platform.

  2. To log in with GitHub SSO, click "Continue with GitHub".

note

You can also use Google SSO or an email and password combo.

  3. Click Authorize akuityio.

  4. Click the create or join link.

  5. Click + New organization in the upper right hand corner of the dashboard.

  6. Name your organization following the rules listed below the Organization Name field.

2.3. Set Up Access for the akuity CLI

  1. Select the Organization you want to create a key for, from the pull down menu.

  2. Switch to the API Keys tab.

  3. Click the + New Key button on the lower right side of the page.

  4. Set the Description for the key to workshop.

  5. Assign the Owner Role to the key.

  6. Click the Create button.

  7. Click the Copy to Clipboard button, then paste and run the commands in the Codespace terminal.

2.4. Create your Argo CD Instance

You can create your Argo CD instance using either the Akuity Platform Dashboard or the akuity CLI; the steps below use the CLI.

  1. Set your organization name in the akuity config

    akuity config set --organization-id=$(akuity org list | awk 'NR==2 {print $1}')

    info

    This command uses some bash-fu, so let's break that down:

    $(akuity org list | awk 'NR==2 {print $1}'): This part of the command is a command substitution enclosed in $(), which means it is executed first, and its output is used as the value for the --organization-id parameter.

    akuity org list: This command lists all organizations in your Akuity Platform account. In this case, there should only be one.

    |: The pipe symbol takes the output of the previous command (the list of organizations) and passes it as input to the next command.

    awk 'NR==2 {print $1}': awk is a text processing tool. This specific awk command is used to extract the first field ($1) from the second line (NR==2) of the input provided by the akuity org list command.
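    For example, suppose akuity org list printed a header row followed by a single organization, as in this illustrative output (the actual columns may differ):

        ID        NAME
        org_123   my-workshop-org

    Then awk 'NR==2 {print $1}' selects org_123 from the second line, and the command substitution passes it to --organization-id.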

  2. Create the Argo CD instance on the Akuity Platform

    akuity argocd apply -f akuity-platform/
  3. Wait for the instance to become healthy, then apply the agent install manifests to the cluster.

    akuity argocd cluster get-agent-manifests --instance-name=argo-cd dev | kubectl apply -f -
  4. From the Akuity Platform Dashboard, in the top, next to the Argo CD instance name and status, click the instance URL (e.g., <instance-id>.cd.akuity.cloud) to open the Argo CD login page in a new tab.

  5. Enter the username admin and the password akuity-argocd.

You now have a fully-managed Argo CD instance 🎉

3. Building a Container Image with GitHub Actions

Creating Your Container Image

The goal of continuous delivery is to react to new releases (versions) of applications and deploy them to the desired location. For the sake of this workshop, you'll use a simplified example of building a container image, which you will later deploy to Kubernetes following GitOps principles with Argo CD.

The repository that you cloned contains a Dockerfile at the root. The file itself is rather simple: it takes an existing container image as the base and builds a new container image from it. The Dockerfile is purely illustrative, representing what would normally be a more complex set of instructions that generate an image based on source files. In the context of the workshop, the Dockerfile is used to build custom container images for deployment into your Kubernetes cluster.

FROM quay.io/akuity/argo-cd-learning-assets/guestbook:latest

The .github/workflows folder contains a workflow file build.yaml used to build an image based on the Dockerfile and push it to the GitHub Container Registry (GHCR) associated with the repository. The workflow is configured to push a new image for each change that includes the Dockerfile, tagging it with latest and the first 7 characters of the commit SHA (e.g. fe68a20).
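The exact workflow lives in .github/workflows/build.yaml in your repo. As a rough sketch, a workflow with that behavior might look like the following (action versions, step names, and the short-SHA handling are assumptions, not the template's exact contents):

    name: build

    on:
      push:
        paths:
          - Dockerfile

    permissions:
      contents: read
      packages: write

    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          # Check out the repo so the Dockerfile is available to the build.
          - uses: actions/checkout@v4

          # Compute the 7-character short SHA used as an image tag.
          - name: Set short SHA
            run: echo "SHORT_SHA=${GITHUB_SHA::7}" >> "$GITHUB_ENV"

          # Authenticate to GHCR using the workflow's built-in token.
          - uses: docker/login-action@v3
            with:
              registry: ghcr.io
              username: ${{ github.actor }}
              password: ${{ secrets.GITHUB_TOKEN }}

          # Build the image and push it with the latest and short-SHA tags.
          # Note: GHCR image paths must be lowercase.
          - uses: docker/build-push-action@v5
            with:
              context: .
              push: true
              tags: |
                ghcr.io/${{ github.repository }}:latest
                ghcr.io/${{ github.repository }}:${{ env.SHORT_SHA }}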

Using Your Container Image

To take advantage of the container image built by GitHub Actions after modifying the Dockerfile, you must update the Helm chart to reference it.

  1. Navigate to the guestbook/values.yaml file in your repository.

  2. In a separate tab, open the package (container image in GHCR) associated with your repo.

    GHCR Package on the repo main page

  3. Copy the container image repo address and paste it into the image.repository value for the Helm chart (see the sketch after this list).

    GHCR Package page

  4. Commit and push the change to the repo.
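For reference, the image section of guestbook/values.yaml should end up looking something like this sketch (the ghcr.io path and default tag are illustrative; use the address copied from your package page):

    image:
      # Must point at your GHCR package
      repository: ghcr.io/<username>/intro-argo-cd
      tag: latest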

4. Using Argo CD to Deploy Helm Charts

4.1. Create an Application in Argo CD

Now, using an Application, you will declaratively tell Argo CD how to deploy the Helm charts.

Start by creating an Application to deploy the guestbook Helm Chart from the repo.

  1. Navigate to the Argo CD UI, and click NEW APP.

  2. In the top right, click EDIT AS YAML.

  3. Paste the contents of apps/guestbook-dev.yaml from your repo.

info

This manifest describes an Application.

  • The name of the Application is guestbook-dev.
  • The source is your repo with the Helm charts.
  • The destination is the cluster connected by the agent.
  • The sync policy will automatically create the namespace.
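Based on those points, apps/guestbook-dev.yaml looks roughly like the following sketch (the repo URL, destination name, and namespace are placeholders; defer to the file in your repo):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: guestbook-dev
      namespace: argocd
    spec:
      project: default
      source:
        # Your repo hosting the Helm charts
        repoURL: https://github.com/<username>/intro-argo-cd
        path: guestbook
        targetRevision: HEAD
      destination:
        # The cluster connected by the agent
        name: dev
        namespace: guestbook-dev
      syncPolicy:
        syncOptions:
          # Automatically create the destination namespace
          - CreateNamespace=true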
  4. Click SAVE. At this point, the UI has translated the Application manifest into the corresponding fields in the wizard.

  5. In the top left, click CREATE. The new app pane will close and show the card for the Application you created. The status on the card will show "Missing" and "OutOfSync".

  6. Click on the Application card titled argocd/guestbook-dev.

info

In this state, the Application resource tree shows the manifests generated from the source repo URL and path defined. You can click DIFF to see what manifests the Application rendered. Since auto-sync is disabled, the resources do not exist in the destination yet.

  7. In the top bar, click SYNC then SYNCHRONIZE to instruct Argo CD to create the resources defined by the Application.

The resource tree will expand as the Deployment creates a ReplicaSet that makes a pod, and the Service creates an Endpoints resource and an EndpointSlice. The Application will remain in the "Progressing" state until the pod for the deployment is running.

Afterwards, all the top-level resources (i.e., those rendered from the Application source) in the tree will show a green checkmark, indicating that they are synced (i.e., present in the cluster).

4.2. Syncing Changes Manually

Trigger a new container build

To trigger the build of the container image and push to your GHCR, modify the tag in the FROM statement in the Dockerfile from latest to 0.1.0 then commit and push the change.

-- FROM quay.io/akuity/argo-cd-learning-assets/guestbook:latest
++ FROM quay.io/akuity/argo-cd-learning-assets/guestbook:0.1.0

Pushing the commit with the change will trigger the build GitHub Actions workflow on your repository, generating the package (container image) in your GHCR.

Using the container image

An Application now manages the deployment of the guestbook Helm chart. So what happens when you want to deploy a new image tag?

Instead of running helm upgrade guestbook-dev ./guestbook, you will trigger a sync of the Application.

  1. Navigate to your repo on GitHub, and open the file guestbook/values-dev.yaml.

  2. In the top right of the file, click the pencil icon to edit.

  3. Override the default image.tag value by setting it to the newest tag on your package.

    image:
      tag: <commit-sha>
  4. Click Commit changes....

  5. Add a commit message. For example chore(guestbook): bump dev tag.

  6. Click Commit changes.

  7. Switch to the Argo CD UI and go to the argocd/guestbook-dev Application.

  8. In the top right, click the REFRESH button to trigger Argo CD to check for any changes to the Application source and resources.

    note

    The default sync interval is 3 minutes. Any changes made in Git may not apply for up to 3 minutes.

  9. In the top bar, click SYNC then SYNCHRONIZE to instruct Argo CD to deploy the changes.

Due to the change in the repo, Argo CD will detect that the Application is out-of-sync. It will template the Helm chart (i.e., helm template) and patch the guestbook-dev deployment with the new image tag, triggering a rolling update.
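For intuition, the rendering step is roughly what you would get from running Helm locally (a sketch; which values files the Application layers in depends on its spec):

    helm template guestbook-dev ./guestbook -f ./guestbook/values-dev.yaml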

4.3. Enable Auto-sync and Self-heal for the Guestbook Application

Now that you are using an Application to describe how to deploy the Helm chart into the cluster, you can configure the sync policy to automatically apply changes, removing the need for developers to manually trigger a deployment for changes that have already made it through the approval process.

  1. In the top menu, click DETAILS.

  2. Under the SYNC POLICY section, click ENABLE AUTO-SYNC and on the prompt, click OK.

  3. Below that, on the right of "SELF HEAL", click ENABLE.

  4. In the top right of the DETAILS pane, click the X to close it.

If the Application was out-of-sync, this would immediately trigger a sync. In this case, your Application is already in sync, so Argo CD made no changes.

4.4. Demonstrate Application Auto-sync via Git

With auto-sync enabled on the guestbook-dev Application, changes made to the main branch in the repo will be applied automatically to the cluster. You will demonstrate this by updating the number of replicas for the guestbook-dev deployment.

  1. Navigate to your repo on GitHub, and open the file guestbook/values.yaml.

  2. In the top right of the file, click the pencil icon to edit.

  3. Update the replicaCount value to 2.

  4. In the top right, click Commit changes....

  5. Add a commit message. For example chore(guestbook): scale to 2 replicas.

  6. In the bottom left, click Commit changes.

  7. Switch to the Argo CD UI and go to the argocd/guestbook-dev Application.

  8. In the top right, click the REFRESH button to trigger Argo CD to check for any changes to the Application source and resources.

You can view the details of the sync operation by clicking SYNC STATUS in the top menu. It displays what "REVISION" it was for, what triggered it (i.e., "INITIATED BY: automated sync policy"), and the result of the sync (i.e., what resources changed).

5. Managing Argo CD Applications Declaratively

5.1. Create an App of Apps

One of the benefits of using Argo CD is that you are now codifying the deployment process for the Helm charts in the Application spec.

Earlier in the lab, you created the guestbook-dev Application imperatively, using the UI. Do you want to manage the Application manifests declaratively too? This is where the App of Apps pattern comes in.

Argo CD can take plain Kubernetes manifests from a directory and deploy them. The Application manifests are Kubernetes resources, just like a Deployment. Therefore, an Application pointing to a folder in Git containing Application manifests will work the same as the guestbook-dev Application from earlier, with the caveat that for the Argo CD application controller to pick up Applications, they must be created in the argocd namespace.

  1. Navigate to the Applications dashboard in the Argo CD UI, and click NEW APP.

  2. In the top right, click EDIT AS YAML.

  3. Paste the contents of app-of-apps.yaml (in the repo's root).

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: bootstrap
      namespace: argocd
    spec:
      destination:
        name: in-cluster
      project: default
      source:
        path: apps
        repoURL: https://github.com/<repo>
        targetRevision: HEAD

    This Application will watch the apps/ directory in your repo which contains Application manifests for the guestbook-stg and guestbook-prod instances of the guestbook chart, and a new portal Helm chart.

  4. Click SAVE.

  5. Then, in the top left, click CREATE.

  6. Click on the Application card titled argocd/bootstrap.

    At this point, the Application will be out-of-sync. The diff will show the addition of the argocd.argoproj.io/tracking-id annotation to the existing guestbook-dev Application, which indicates that the App of Apps now manages it.

    kind: Application
    metadata:
    ++  annotations:
    ++    argocd.argoproj.io/tracking-id: 'bootstrap:argoproj.io/Application:argocd/guestbook-dev'
      generation: 44
      labels:
    ...
        path: guestbook
        repoURL: 'https://github.com/<username>/intro-argo-cd-tutorial'
      syncPolicy:
        automated: {}

    The diff will also show a new Application for the portal Helm chart.

  7. To apply the changes, in the top bar, click SYNC then SYNCHRONIZE.

From this Application, you can see all of the other Applications managed by it in the resource tree. Each child Application resource has a link to its view on the resource card.

6. Tool Detection and Sync Waves

After you created the App of Apps, it deployed the portal Application based on the manifest in Git (apps/portal.yaml). Since the automated sync policy is not enabled on it, the Application remains out-of-sync. We'll go through some explanation before triggering the sync.

  1. Click on the Application card titled argocd/portal.

6.1. Tool Detection

The portal Application utilizes Kustomize via the kustomization.yaml file in the portal folder. Unlike the guestbook-dev Application, the portal does not specify Kustomize in the source spec. Instead, it relies on Argo CD's automatic tool detection. If a config management tool is not specified in the source of an Application, Argo CD will check for the following:

  1. A Chart.yaml file in the folder. If found, Argo CD will assume that the folder contains a Helm chart.
  2. A kustomization.yaml file in the folder. If found, Argo CD will use Kustomize.

If neither is found, Argo CD will default to plain Kubernetes manifests.

Since the portal folder contains the kustomization.yaml, Argo CD knows to use Kustomize.
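For illustration, a minimal kustomization.yaml along these lines is all it takes for the detection to kick in (the resource file names are assumptions about the portal folder's layout):

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - backend-deployment.yaml
      - backend-service.yaml
      - frontend-deployment.yaml
      - frontend-service.yaml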

6.2. Sync Phases, Waves, & Hooks

Each time Argo CD performs a sync on an Application, it does not simply generate the manifests and apply them to the cluster. Each sync contains multiple phases and waves. It may also create Kubernetes Job resources known as hooks. The portal Application is an example of this functionality.

The portal folder contains a frontend and backend Deployment with corresponding Services, and 3 Jobs. Yet the resources shown in the out-of-sync Application don't contain those Jobs. This is because the Jobs contain the argocd.argoproj.io/hook annotation, which indicates to Argo CD that the resource should be applied during a sync. Typically hooks are Jobs or (Argo) Workflows, but they can be any resource.

In the portal Application, hooks are used to update the SQL schema for the backend, and to bring up and down a maintenance page for the frontend. The schema update Job runs in the PreSync phase, which happens before the Sync phase when resources are applied. Therefore, before the image for the backend Deployment is updated, the schema can be updated in preparation.

The frontend Deployment and related resource hooks rely on sync waves to ensure they are executed in the correct order. Each phase in a sync can contain multiple sync waves. The default sync wave, which all resources that don't specify the argocd.argoproj.io/sync-wave annotation fall on, is 0. Sync waves can be negative, so a wave of -1 will run before all resources on the default wave.

The hook to bring up the maintenance page for the frontend runs on sync wave 1. The frontend Deployment and Service are on wave 2. Then the hook to bring down the maintenance page is on wave 3. This order of waves ensures that the maintenance page is brought up first. While it's up, the changes are made to the frontend resources. Then once their sync is complete, the page is brought back down.
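As a sketch, the metadata driving this ordering would look something like the following excerpts (the resource names and hook phases are illustrative, not the exact contents of the portal folder):

    # Wave 1: hook that brings up the maintenance page
    metadata:
      name: maintenance-page-up
      annotations:
        argocd.argoproj.io/hook: Sync
        argocd.argoproj.io/sync-wave: "1"
    ---
    # Wave 2: the frontend Deployment (its Service is annotated the same way)
    metadata:
      name: frontend
      annotations:
        argocd.argoproj.io/sync-wave: "2"
    ---
    # Wave 3: hook that brings the maintenance page back down
    metadata:
      name: maintenance-page-down
      annotations:
        argocd.argoproj.io/hook: Sync
        argocd.argoproj.io/sync-wave: "3"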

6.3. Sync It!

With an understanding of how the portal Application sync works, go ahead and trigger a sync and watch as the resources are created in a precisely choreographed fashion, rather than all at once.

  1. In the top bar, click SYNC then SYNCHRONIZE.

7. Review

You have reached the end of the tutorial. You have an Argo CD instance deploying Helm charts from your repo into your cluster.

You no longer manually deploy Helm chart changes. The Application spec describes the process for deploying the Helm charts declaratively. There's no longer a need for direct access to the cluster, removing the primary source of configuration drift.