Introduction to Argo CD on EKS

This tutorial will walk you through implementing Argo CD on EKS with the Akuity Platform, to manage the deployment of the Helm charts in a declarative fashion.

Ultimately, you will have a Kubernetes cluster, with Applications deployed using an Argo CD control plane.

1. Prerequisites

To follow this tutorial, make sure you have a minimal working knowledge of Kubernetes and Helm.

The tutorial requires that you have:

  • an AWS EKS cluster.
  • a GitHub account. You will use this to:
    • host a public repo for the GitOps configuration.
    • run GitHub Codespaces for the workshop environment. Ensure that you have quota available; the free tier includes 120 core hours per month (i.e., 30 hours on a 4-vCPU machine).
    • create an account on the Akuity Platform (via GitHub SSO).

The tutorial was written and tested using the following tool and component versions:

  • Argo CD: v2.10.1
  • Docker Desktop: 4.13.1
  • Kubernetes: v1.29.1
  • kind: v0.22.0

2. Setting up Your Environment

2.1. Create the Repository from a Template

In this scenario, you manage the application Helm charts in version control. To represent this in the lab, you will create a repository from a template containing the application Helm charts.

  1. Click this link or click "Use this template" from the akuity/intro-argo-cd-eks-tutorial-template repo main page.

  2. Ensure the desired "Owner" is selected (e.g., your account and not an organization).

  3. Enter intro-argo-cd-tutorial for the "Repository name".

  4. Then click Create repository from template.

  5. Next, you'll start a new Codespace by clicking the green Code button on the repo page, selecting the Codespaces tab, and then selecting Create codespace on main.

The Codespace will open in another browser tab with information about setting up your Codespace. Once it's done setting up, you should see a terminal in the browser with the repo open.

2.2. Setup kubectl Access to EKS

The Codespace setup installed all the necessary tools, including the aws and kubectl CLIs. However, you'll need to configure the kubectl context to access your EKS cluster.

  1. Authenticate the aws CLI in the Codespace.

  2. Test that the authentication is working by getting the name of your EKS cluster.

    aws eks list-clusters

    You should see output similar to the following:

    {
      "clusters": [
        "eksworkshop-eksctl"
      ]
    }
  3. Create a kubeconfig file for your cluster using the following command.

    aws eks update-kubeconfig --name $(aws eks list-clusters | jq -r .clusters[0])

    You should see output similar to the following:

    Added new context arn:aws:eks:us-east-1:338615488317:cluster/<cluster-name> to /home/vscode/.kube/config
    info

    This command uses some bash-fu, so let's break that down:

    $(aws eks list-clusters | jq -r .clusters[0]): This part of the command is a command substitution enclosed in $(), which means it is executed first and its output is used as the value for the --name parameter.

    aws eks list-clusters: This AWS CLI command lists all EKS clusters in the current AWS account and region.

    |: The pipe symbol takes the output of the previous command (the list of clusters) and passes it as input to the next command.

    jq -r .clusters[0]: jq is a command-line tool for processing JSON. The -r option tells jq to output raw text instead of JSON-formatted text. The .clusters[0] part extracts the first element of the clusters array from the JSON output provided by the aws eks list-clusters command.
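To see the filter in isolation, you can run it against a sample payload shaped like the aws eks list-clusters output (jq is preinstalled in the Codespace):

```shell
# Sample JSON in the shape returned by `aws eks list-clusters`
json='{"clusters": ["eksworkshop-eksctl", "staging-cluster"]}'

# `-r` prints the raw string; `.clusters[0]` selects the first cluster name
echo "$json" | jq -r '.clusters[0]'
# prints: eksworkshop-eksctl
```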

  4. Test that kubectl can access the cluster.

    kubectl get nodes

    You should see output similar to the following:

    NAME                              STATUS   ROLES    AGE   VERSION
    ip-192-168-108-33.ec2.internal    Ready    <none>   2d    v1.28.8-eks-ae9a62a
    ip-192-168-158-117.ec2.internal   Ready    <none>   2d    v1.28.8-eks-ae9a62a
    ip-192-168-182-251.ec2.internal   Ready    <none>   2d    v1.28.8-eks-ae9a62a

2.3. Akuity Platform Sign Up

This scenario demonstrates deploying applications to a cluster external to Argo CD.

Similar to how the GitHub repo hosts the Helm charts, which describe what resources to deploy into Kubernetes, the Akuity Platform will host the Application manifests, which describe how to deploy those resources into the cluster, along with Argo CD itself, which will apply the changes to the cluster.

tip

Sign up for a free 30-day trial of the Akuity Platform!

  1. Create an account on the Akuity Platform.

  2. To log in with GitHub SSO, click "Continue with GitHub".

note

You can also use Google SSO or an email and password combination.

  3. Click Authorize akuityio.

  4. Click the create or join link.

  5. Click + New organization in the upper right hand corner of the dashboard.

  6. Name your organization following the rules listed below the Organization Name field.

2.4. Setup Access for the akuity CLI

  1. Select the Organization you want to create a key for, from the pull down menu.

  2. Switch to the API Keys tab.

  3. Click + New Key button on the lower right side of the page.

  4. Set the Description for the key to workshop.

  5. Assign the Owner Role to the key.

  6. Click the Create button.

  7. Click the Copy to Clipboard button, then paste and run the commands in the Codespace terminal.

2.5. Create your Argo CD Instance

You can create your Argo CD instance using either the Akuity Platform Dashboard or the akuity CLI. The steps below use the Dashboard.

  1. Navigate to Argo CD.

  2. Click + Create in the upper right hand corner of the dashboard.

  3. Name your instance following the rules listed below the Instance Name field.

  4. (Optionally) Choose the Argo CD version you want to use.

  5. Click + Create.

At this point, your Argo CD instance will begin initializing. The start-up typically takes under 2 minutes.

2.5.1. Configure Your Instance

While the instance is initializing, you can prepare it for the rest of the lab.

  1. In the dashboard for the Argo CD instance, click Settings.

  2. On the inner sidebar, under "Security & Access", click System Accounts.

  3. Enable the "Admin Account" by clicking the toggle and clicking Confirm on the prompt.

  4. Then, for the admin user, click Set Password.

  5. Click Regenerate Password, then click Copy.

  6. In the bottom right of the Set password prompt, hit Close.

  7. In the top, next to the Argo CD instance name and status, click the instance URL (e.g., <instance-id>.cd.akuity.cloud) to open the Argo CD login page in a new tab.

  8. Enter the username admin and the password copied previously.

You now have a fully-managed Argo CD instance 🎉

2.6. Deploy an Agent to the Cluster

You must connect the cluster to Argo CD to deploy the application resources. The Akuity Platform uses an agent-based architecture for connecting external clusters. So, you will provision an agent and deploy it to the cluster.

  1. Back on the Akuity Platform, in the top left of the dashboard for the Argo CD instance, click Clusters.

  2. In the top right, click Connect a cluster.

  3. Enter eks as the "Cluster Name".

  4. In the bottom right, click Connect Cluster.

  5. To get the agent install command, click Copy to Clipboard. Then, in the bottom right, click Done.

  6. Open your terminal and check that your target is the correct cluster by running kubectl config current-context.

    You should see output similar to the following:

    arn:aws:eks:us-east-1:338615488317:cluster/<cluster-name>
  7. Paste and run the command against the cluster. The command will create the akuity namespace and deploy the resources for the Akuity Agent.

  8. Check the pods in the akuity namespace. Wait for the Running status on all pods (approx. 1 minute).

    % kubectl get pods -n akuity
    NAME                                                       READY   STATUS    RESTARTS   AGE
    akuity-agent-<replicaset-id>-<pod-id>                      1/1     Running   0          65s
    akuity-agent-<replicaset-id>-<pod-id>                      1/1     Running   0          65s
    argocd-application-controller-<replicaset-id>-<pod-id>     2/2     Running   0          65s
    argocd-notifications-controller-<replicaset-id>-<pod-id>   1/1     Running   0          65s
    argocd-redis-<replicaset-id>-<pod-id>                      1/1     Running   0          65s
    argocd-repo-server-<replicaset-id>-<pod-id>                1/1     Running   0          64s
    argocd-repo-server-<replicaset-id>-<pod-id>                1/1     Running   0          64s

    Re-run the kubectl get pods -n akuity command to check for updates on the pod statuses.

  9. Back on the Clusters dashboard, confirm that the cluster shows a green heart before the name, indicating a healthy status.

3. Using Argo CD to Deploy Helm Charts

3.1. Create an Application in Argo CD

Now, using an Application, you will declaratively tell Argo CD how to deploy the Helm charts.

Start by creating an Application to deploy the guestbook Helm Chart from the repo.

  1. Navigate to the Argo CD UI, and click NEW APP.

  2. In the top right, click EDIT AS YAML.

  3. Paste the contents of apps/guestbook-dev.yaml from your repo.

info

This manifest describes an Application.

  • The name of the Application is guestbook-dev.
  • The source is your repo with the Helm charts.
  • The destination is the cluster connected by the agent.
  • The sync policy will automatically create the namespace.
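For reference, an Application manifest of this shape looks roughly like the following. This is a hypothetical sketch; the repo URL, cluster name, and values file are assumptions, not the exact contents of apps/guestbook-dev.yaml:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<username>/intro-argo-cd-tutorial  # assumed
    path: guestbook
    targetRevision: HEAD
    helm:
      valueFiles:
        - values-dev.yaml          # assumed values file
  destination:
    name: eks                      # the cluster connected by the agent
    namespace: guestbook-dev
  syncPolicy:
    syncOptions:
      - CreateNamespace=true       # the policy that auto-creates the namespace
```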
  4. Click SAVE. At this point, the UI has translated the Application manifest into the corresponding fields in the wizard.

  5. In the top left, click CREATE. The new app pane will close, showing the card for the Application you created. The status on the card will show "Missing" and "OutOfSync".

  6. Click on the Application card titled argocd/guestbook-dev.

info

In this state, the Application resource tree shows the manifests generated from the source repo URL and path defined. You can click DIFF to see what manifests the Application rendered. Since auto-sync is disabled, the resources do not exist in the destination yet.

  7. In the top bar, click SYNC then SYNCHRONIZE to instruct Argo CD to create the resources defined by the Application.

The resource tree will expand as the Deployment creates a ReplicaSet that makes a pod, and the Service creates an Endpoint and EndpointSlice. The Application will remain in the "Progressing" state until the pod for the deployment is running.

Afterwards, all the top-level resources (i.e., those rendered from the Application source) in the tree will show a green checkmark, indicating that they are synced (i.e., present in the cluster).

3.2. Syncing Changes Manually

An Application now manages the deployment of the guestbook Helm chart. So what happens when you want to deploy a new image tag?

Instead of running helm upgrade guestbook-dev ./guestbook, you will trigger a sync of the Application.

  1. Navigate to your repo on GitHub, and open the file guestbook/values-dev.yaml.

  2. In the top right of the file, click the pencil icon to edit.

  3. Override the default image.tag value by setting it to 0.2.0.

    image:
      tag: 0.2.0
  4. Click Commit changes....

  5. Add a commit message. For example, chore(guestbook): bump dev tag to 0.2.0.

  6. Click Commit changes.

  7. Switch to the Argo CD UI and go to the argocd/guestbook-dev Application.

  8. In the top right, click the REFRESH button to trigger Argo CD to check for any changes to the Application source and resources.

    note

    The default sync interval is 3 minutes. Any changes made in Git may not apply for up to 3 minutes.

  9. In the top bar, click SYNC then SYNCHRONIZE to instruct Argo CD to deploy the changes.

Due to the change in the repo, Argo CD will detect that the Application is out-of-sync. It will template the Helm chart (i.e., helm template) and patch the guestbook-dev deployment with the new image tag, triggering a rolling update.

3.3. Enable Auto-sync and Self-heal for the Guestbook Application

Now that you are using an Application to describe how to deploy the Helm chart into the cluster, you can configure the sync policy to automatically apply changes — removing the need for developers to manually trigger a deployment for changes that already made it through the approval processes.

  1. In the top menu, click DETAILS.

  2. Under the SYNC POLICY section, click ENABLE AUTO-SYNC and on the prompt, click OK.

  3. Below that, on the right of "SELF HEAL", click ENABLE.

  4. In the top right of the DETAILS pane, click the X to close it.

If the Application was out-of-sync, this would immediately trigger a sync. In this case, your Application is already in sync, so Argo CD made no changes.

3.4. Demonstrate Application Auto-sync via Git

With auto-sync enabled on the guestbook-dev Application, changes made to the main branch in the repo will be applied automatically to the cluster. You will demonstrate this by updating the number of replicas for the guestbook-dev deployment.
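For reference, the edit in the steps below amounts to this one-line change in guestbook/values.yaml (a sketch; the file's other keys are omitted):

```yaml
# guestbook/values.yaml (excerpt)
replicaCount: 2
```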

  1. Navigate to your repo on GitHub, and open the file guestbook/values.yaml.

  2. In the top right of the file, click the pencil icon to edit.

  3. Update the replicaCount value to 2.

  4. In the top right, click Commit changes....

  5. Add a commit message. For example, chore(guestbook): scale to 2 replicas.

  6. In the bottom left, click Commit changes.

  7. Switch to the Argo CD UI and go to the argocd/guestbook-dev Application.

  8. In the top right, click the REFRESH button to trigger Argo CD to check for any changes to the Application source and resources.

You can view the details of the sync operation by clicking SYNC STATUS in the top menu. It displays which "REVISION" the sync was for, what triggered it (i.e., "INITIATED BY: automated sync policy"), and the result of the sync (i.e., which resources changed).

4. Managing Argo CD Applications Declaratively

4.1. Create an App of Apps

One of the benefits of using Argo CD is that you are now codifying the deployment process for the Helm charts in the Application spec.

Earlier in the lab, you created the guestbook-dev Application imperatively, using the UI. Do you want to manage the Application manifests declaratively too? This is where the App of Apps pattern comes in.

Argo CD can take plain Kubernetes manifests from a directory and deploy them. Application manifests are Kubernetes resources, just like a Deployment. Therefore, an Application pointing to a folder in Git containing Application manifests will work the same as the guestbook-dev Application from earlier, with the caveat that, for the Argo CD application controller to pick them up, the Applications must be created in the argocd namespace.

  1. Navigate to the Applications dashboard in the Argo CD UI, and click NEW APP.

  2. In the top right, click EDIT AS YAML.

  3. Paste the contents of app-of-apps.yaml (in the repo's root).

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: bootstrap
      namespace: argocd
    spec:
      destination:
        name: in-cluster
      project: default
      source:
        path: apps
        repoURL: https://github.com/<username>/intro-argo-cd-tutorial # Update to your repo URL.
        targetRevision: HEAD

    This Application will watch the apps/ directory in your repo which contains Application manifests for the guestbook-stg and guestbook-prod instances of the guestbook chart, and a new portal Helm chart.

  4. Click SAVE.

  5. Then, in the top left, click CREATE.

  6. Click on the Application card titled argocd/bootstrap.

    At this point, the Application will be out-of-sync. The diff will show the addition of the argocd.argoproj.io/tracking-id annotation to the existing guestbook-dev Application, which indicates that the App of Apps now manages it.

      kind: Application
      metadata:
    ++  annotations:
    ++    argocd.argoproj.io/tracking-id: 'bootstrap:argoproj.io/Application:argocd/guestbook-dev'
        generation: 44
        labels:
    ...
        path: guestbook
        repoURL: 'https://github.com/<username>/intro-argo-cd-eks-tutorial'
      syncPolicy:
        automated: {}

    The diff will also show a new Application for the portal Helm chart.

  7. To apply the changes, in the top bar, click SYNC then SYNCHRONIZE.

From this Application, you can see all of the other Applications managed by it in the resource tree. Each child Application resource has a link to its view on the resource card.

5. Tool Detection and Sync Waves

After you created the App of Apps, it deployed the portal Application based on the manifest in Git (apps/portal.yaml). Since the automated sync policy is not enabled, the Application remains out-of-sync. We'll go through some explanation before triggering the sync.

  1. Click on the Application card titled argocd/portal.

5.1. Tool Detection

The portal Application utilizes Kustomize via the kustomization.yaml file in the portal folder. Unlike the guestbook-dev Application, the portal does not specify Kustomize in the source spec. Instead, it relies on Argo CD's automatic tool detection. If a config management tool is not specified in the source of an Application, Argo CD will check for the following:

  1. A Chart.yaml file in the folder. If found, Argo CD will assume that the folder contains a Helm chart.
  2. A kustomization.yaml file in the folder. If found, Argo CD will use Kustomize.

If neither is found, Argo CD will default to plain Kubernetes manifests.

Since the portal folder contains the kustomization.yaml, Argo CD knows to use Kustomize.
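A minimal kustomization.yaml of the kind Argo CD detects might look like this (the resource file names are assumptions about the portal folder's layout):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:                      # plain manifests composed by Kustomize
  - frontend-deployment.yaml
  - frontend-service.yaml
  - backend-deployment.yaml
  - backend-service.yaml
```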

5.2. Sync Phases, Waves, & Hooks

Each time Argo CD performs a sync on an Application, it does not simply generate the manifests and apply them to the cluster. Each sync contains multiple phases and waves. It may also create Kubernetes Job resources known as hooks. The portal Application is an example of this functionality.

The portal folder contains frontend and backend Deployments with corresponding Services, and 3 Jobs. Yet the resource tree of the out-of-sync Application doesn't contain those Jobs. This is because the Jobs carry the argocd.argoproj.io/hook annotation, which indicates to Argo CD that the resource should be applied during a sync. Typically, hooks are Jobs or (Argo) Workflows, but they can be any resource.

In the portal Application, hooks are used to update the SQL schema for the backend and to bring a maintenance page up and down for the frontend. The schema update Job runs in the PreSync phase, which happens before the Sync phase, when resources are applied. Therefore, the schema can be updated in preparation before the image for the backend Deployment is updated.

The frontend Deployment and related resource hooks rely on sync waves to ensure they are executed in the correct order. Each phase in a sync can contain multiple sync waves. The default sync wave, which all resources that don't specify the argocd.argoproj.io/sync-wave annotation fall into, is 0. Sync waves can be negative, so a wave of -1 will run before all resources on the default wave.

The hook to bring up the maintenance page for the frontend runs on sync wave 1. The frontend Deployment and Service are on wave 2. Then the hook to bring down the maintenance page is on wave 3. This order of waves ensures that the maintenance page is brought up first. While it's up, the changes are made to the frontend resources. Then once their sync is complete, the page is brought back down.
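The annotations described above can be sketched as follows. These manifests are hypothetical; the names, images, and exact hook policies are illustrative, not the portal repo's actual files:

```yaml
# A PreSync hook Job: only applied during a sync, before the Sync phase.
apiVersion: batch/v1
kind: Job
metadata:
  name: schema-update
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded  # clean up after success
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: registry.example.com/schema-migrate:latest  # assumed image
---
# A Sync-phase hook on wave 1, so it runs before the wave-2 frontend resources.
apiVersion: batch/v1
kind: Job
metadata:
  name: maintenance-page-up
  annotations:
    argocd.argoproj.io/hook: Sync
    argocd.argoproj.io/sync-wave: "1"
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: toggle
          image: registry.example.com/maintenance-toggle:latest  # assumed image
```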

5.3. Sync It!

With an understanding of how the portal Application sync works, go ahead and trigger a sync and watch as the resources are created in a precisely choreographed fashion, rather than all at once.

  1. In the top bar, click SYNC then SYNCHRONIZE.

6. Review

You have reached the end of the tutorial. You have an Argo CD instance deploying Helm charts from your repo into your cluster.

You no longer manually deploy Helm chart changes. The Application spec describes the process for deploying the Helm charts declaratively. There's no longer a need for direct access to the cluster, removing the primary source of configuration drift.