Introduction to Argo CD
This tutorial will walk you through implementing Argo CD with the Akuity Platform to manage the deployment of Helm charts declaratively.
Ultimately, you will have a Kubernetes cluster with Applications deployed using an Argo CD control plane.
A video version of this tutorial is available with an introduction to Continuous Delivery, GitOps, and Argo CD. 👇
- 1. Prerequisites
- 2. Setting up Your Environment
- 3. Using Argo CD to Deploy Helm Charts
- 4. Managing Argo CD Applications Declaratively
- 5. Tool Detection, and Sync Waves
- 6. Review
1. Prerequisites​
To follow this tutorial, make sure you have a minimal working knowledge of the following concepts:
The tutorial requires that you also have a GitHub Account. You will use this to:
- host a public repo for the GitOps configuration.
- utilize GitHub Codespaces for the workshop environment. Ensure that you have quota available; the free tier includes 30 hours per month on a 4-vCPU machine.
- create an account on the Akuity Platform.
The tutorial was written and tested using the following tool and component versions:
- Argo CD: v2.10.1
- Docker Desktop: 4.13.1
- Kubernetes: v1.27.13-k3s1
- k3d: v5.4.1
2. Setting up Your Environment​
2.1. Create the Repository from a Template​
In this scenario, you manage the application Helm charts in version control. To represent this in the lab, you will create a repository from a template containing the application Helm charts.
- Click this link or click "Use this template" from the akuity/intro-argo-cd-tutorial-template repo main page.
- Ensure the desired "Owner" is selected (e.g., your account and not an organization).
- Enter intro-argo-cd for the "Repository name".
- Then click Create repository from template.
- Next, you'll start a new Codespace by clicking the green Code button on the repo page, selecting the Codespaces tab, and then selecting Create codespace on main.
If you have other Codespaces running, you may run into usage limits. You should shut down Codespaces that you are not using.
The Codespace will open in another browser tab with information about setting up your codespace. Once it's done setting up, you should see a terminal in the browser with the repo open.
2.1.1. Verify Environment​
Part of the Codespace setup was to install all the necessary tools, including k3d and kubectl. You can verify that the environment is ready by going through the following:
- Verify that k3d is installed and running a Kubernetes cluster.
  k3d cluster list
  You should see the following output:
  NAME   SERVERS   AGENTS   LOADBALANCER
  dev    1/1       0/0      true
- Check that the cluster works by running kubectl get nodes.
  % kubectl get nodes
  NAME                STATUS   ROLES           AGE     VERSION
  dev-control-plane   Ready    control-plane   7m44s   v1.29.2
  Fetching the nodes will demonstrate that kubectl can connect to the cluster and query the API server. The node should be in the "Ready" status.
2.2. Akuity Platform Sign Up​
This scenario demonstrates deploying applications to a cluster external to Argo CD.
Similar to how the GitHub repo hosts the Helm charts, which describe what resources to deploy into Kubernetes, the Akuity Platform will host the Application manifests, which describe how to deploy those resources into the cluster, along with Argo CD itself, which will implement the changes on the cluster.
Sign up for a free 30-day trial of the Akuity Platform!
- Create an account on the Akuity Platform.
- To log in with GitHub SSO, click "Continue with GitHub". You can also use Google SSO or an email and password combination.
- Click Authorize akuityio.
- Click the create or join link.
- Click + New organization in the upper right-hand corner of the dashboard.
- Name your organization following the rules listed below the Organization Name field.
2.3. Create your Argo CD Instance​
You can create your Argo CD instance using either the Akuity Platform Dashboard or the akuity CLI. The Dashboard steps are described in this section and the next; the equivalent CLI steps follow at the end of section 2.4.
- Navigate to Argo CD.
- Click + Create in the upper right-hand corner of the dashboard.
- Name your instance following the rules listed below the Instance Name field.
- Optionally, choose the Argo CD version you want to use.
- Click + Create.
At this point, your Argo CD instance will begin initializing. The start-up typically takes under 2 minutes.
2.3.1. Configure Your Instance​
While the instance is initializing, you can prepare it for the rest of the lab.
- In the dashboard for the Argo CD instance, click Settings.
- On the inner sidebar, under "Security & Access", click System Accounts.
- Enable the "Admin Account" by clicking the toggle and clicking Confirm on the prompt.
- Then, for the admin user, click Set Password.
- Click Regenerate Password, then click Copy.
- In the bottom right of the Set password prompt, hit Close.
- At the top, next to the Argo CD instance name and status, click the instance URL (e.g., <instance-id>.cd.akuity.cloud) to open the Argo CD login page in a new tab.
- Enter the username admin and the password copied previously.
You now have a fully-managed Argo CD instance 🎉
2.4. Deploy an Agent to the Cluster​
You must connect the cluster to Argo CD to deploy the application resources. The Akuity Platform uses an agent-based architecture for connecting external clusters. So, you will provision an agent and deploy it to the cluster.
- Back on the Akuity Platform, in the top left of the dashboard for the Argo CD instance, click Clusters.
- In the top right, click Connect a cluster.
- Enter dev as the "Cluster Name".
- In the bottom right, click Connect Cluster.
- To get the agent install command, click Copy to Clipboard. Then, in the bottom right, click Done.
- Open your terminal and check that your target is the correct cluster by running kubectl config current-context. If you are following along using k3d, you should see the following:
  % kubectl config current-context
  k3d-dev
- Paste and run the command against the cluster. The command will create the akuity namespace and deploy the resources for the Akuity Agent.
- Check the pods in the akuity namespace. Wait for the Running status on all pods (approx. 1 minute).
  % kubectl get pods -n akuity
  NAME                                                      READY   STATUS    RESTARTS   AGE
  akuity-agent-<replicaset-id>-<pod-id>                     1/1     Running   0          65s
  akuity-agent-<replicaset-id>-<pod-id>                     1/1     Running   0          65s
  argocd-application-controller-<replicaset-id>-<pod-id>    2/2     Running   0          65s
  argocd-notifications-controller-<replicaset-id>-<pod-id>  1/1     Running   0          65s
  argocd-redis-<replicaset-id>-<pod-id>                     1/1     Running   0          65s
  argocd-repo-server-<replicaset-id>-<pod-id>               1/1     Running   0          64s
  argocd-repo-server-<replicaset-id>-<pod-id>               1/1     Running   0          64s
  Re-run the kubectl get pods -n akuity command to check for updates on the pod statuses.
- Back on the Clusters dashboard, confirm that the cluster shows a green heart before the name, indicating a healthy status.
If you prefer to use the akuity CLI instead of the Dashboard, the following steps create the instance and connect the cluster:
- Check your akuity CLI version.
  akuity version
- Log in to the akuity CLI.
  akuity login
- Open the link displayed and enter the code.
- Set your organization name in the akuity config.
  akuity config set --organization-name=<name>
  You can find your organization name by running akuity org list. Replace <name> with your organization's name.
  ID                 NAME         ROLES
  r89390kxc0sf8y5r   akuity-bot   owner
  Or do it in one command with some bash-fu:
  akuity config set --organization-id=$(akuity org list | awk 'NR==2 {print $1}')
- Create the Argo CD instance on the Akuity Platform.
  akuity argocd apply -f akuity-platform/
- Apply the agent install manifests to the cluster.
  akuity argocd cluster get-agent-manifests --instance-name=argo-cd dev | kubectl apply -f -
- From the Akuity Platform Dashboard, at the top, next to the Argo CD instance name and status, click the instance URL (e.g., <instance-id>.cd.akuity.cloud) to open the Argo CD login page in a new tab.
- Enter the username admin and the password akuity-argocd.
You now have a fully-managed Argo CD instance 🎉
3. Using Argo CD to Deploy Helm Charts​
3.1. Create an Application in Argo CD​
Now, using an Application, you will declaratively tell Argo CD how to deploy the Helm charts.
Start by creating an Application to deploy the guestbook Helm chart from the repo.
- Navigate to the Argo CD UI, and click NEW APP.
- In the top right, click EDIT AS YAML.
- Paste the contents of apps/guestbook-dev.yaml from your repo.
  This manifest describes an Application (sketched at the end of this section):
  - The name of the Application is guestbook-dev.
  - The source is your repo with the Helm charts.
  - The destination is the cluster connected by the agent.
  - The sync policy will automatically create the namespace.
- Click SAVE. At this point, the UI has translated the Application manifest into the corresponding fields in the wizard.
- In the top left, click CREATE. The new app pane will close and show the card for the Application you created. The status on the card will show "Missing" and "OutOfSync".
- Click on the Application card titled argocd/guestbook-dev.
In this state, the Application resource tree shows the manifests generated from the source repo URL and path defined in the Application.
- In the top bar, click SYNC then SYNCHRONIZE to instruct Argo CD to create the resources defined by the Application.
The resource tree will expand as the Deployment creates a ReplicaSet that makes a pod, and the Service creates an Endpoint and EndpointSlice. The Application will remain in the "Progressing" state until the pod for the deployment is running.
Afterwards, all the top-level resources (i.e., those rendered from the Application source) in the tree will show a green checkmark, indicating that they are synced (i.e., present in the cluster).
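For reference, the manifest you pasted from apps/guestbook-dev.yaml follows the general shape below. This is a sketch rather than the exact file contents: the repository URL, chart path, values file name, cluster name, and namespace are assumptions based on how the repo and cluster are named in this tutorial.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<username>/intro-argo-cd-tutorial  # your repo with the Helm charts
    path: guestbook                                                # folder containing the chart
    targetRevision: HEAD
    helm:
      valueFiles:
        - values-dev.yaml          # dev-specific overrides (e.g., image.tag)
  destination:
    name: dev                      # the cluster connected by the agent
    namespace: guestbook-dev
  syncPolicy:
    syncOptions:
      - CreateNamespace=true       # the sync policy that creates the namespace
```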
3.2. Syncing Changes Manually​
An Application now manages the deployment of the guestbook Helm chart. So what happens when you want to deploy a new image tag?
Instead of running helm upgrade guestbook-dev ./guestbook, you will trigger a sync of the Application.
- Navigate to your repo on GitHub, and open the file guestbook/values-dev.yaml.
- In the top right of the file, click the pencil icon to edit.
- Override the default image.tag value by setting it to 0.2.0.
  image:
    tag: 0.2.0
- Click Commit changes...
- Add a commit message. For example, chore(guestbook): bump dev tag to 0.2.0.
- Click Commit changes.
- Switch to the Argo CD UI and go to the argocd/guestbook-dev Application.
- In the top right, click the REFRESH button to trigger Argo CD to check for any changes to the Application source and resources.
  Note: The default sync interval is 3 minutes. Any changes made in Git may not apply for up to 3 minutes.
- In the top bar, click SYNC then SYNCHRONIZE to instruct Argo CD to deploy the changes.
Due to the change in the repo, Argo CD will detect that the Application is out-of-sync. It will template the Helm chart (i.e., helm template) and patch the guestbook-dev deployment with the new image tag, triggering a rolling update.
3.3. Enable Auto-sync and Self-heal for the Guestbook Application​
Now that you are using an Application to describe how to deploy the Helm chart into the cluster, you can configure the sync policy to automatically apply changes — removing the need for developers to manually trigger a deployment for changes that already made it through the approval processes.
- In the top menu, click DETAILS.
- Under the SYNC POLICY section, click ENABLE AUTO-SYNC and, on the prompt, click OK.
- Below that, on the right of "SELF HEAL", click ENABLE.
- In the top right of the DETAILS pane, click the X to close it.
If the Application was out-of-sync, this would immediately trigger a sync. In this case, your Application is already in sync, so Argo CD made no changes.
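These two toggles correspond to the automated sync policy in the Application spec. If you were setting the same options declaratively in the manifest, the relevant fields would look roughly like this (a sketch of just the syncPolicy block):

```yaml
spec:
  syncPolicy:
    automated:
      selfHeal: true           # revert changes made directly on the cluster back to the Git state
    syncOptions:
      - CreateNamespace=true   # unchanged from before
```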
3.4. Demonstrate Application Auto-sync via Git​
With auto-sync enabled on the guestbook-dev Application, changes made to the main branch in the repo will be applied automatically to the cluster. You will demonstrate this by updating the number of replicas for the guestbook-dev deployment.
- Navigate to your repo on GitHub, and open the file guestbook/values.yaml.
- In the top right of the file, click the pencil icon to edit.
- Update the replicaCount to 2.
- In the top right, click Commit changes...
- Add a commit message. For example, chore(guestbook): scale to 2 replicas.
- In the bottom left, click Commit changes.
- Switch to the Argo CD UI and go to the argocd/guestbook-dev Application.
- In the top right, click the REFRESH button to trigger Argo CD to check for any changes to the Application source and resources.
You can view the details of the sync operation from the top menu of the Application view.
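For clarity, the change committed to guestbook/values.yaml is a single key; a sketch of the edited line (other keys in the file stay as they were):

```yaml
replicaCount: 2   # auto-sync applies this change without a manual SYNC
```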
4. Managing Argo CD Applications Declaratively​
4.1. Create an App of Apps​
One of the benefits of using Argo CD is that you are now codifying the deployment process for the Helm charts in the Application spec.
Earlier in the lab, you created the guestbook-dev Application imperatively, using the UI. Do you want to manage the Application manifests declaratively too? This is where the App of Apps pattern comes in.
Argo CD can take plain Kubernetes manifests from a directory and deploy them. Application manifests are Kubernetes resources, just like a Deployment. Therefore, an Application pointing to a folder in Git containing Application manifests will work the same as the guestbook-dev Application from earlier, with the caveat that, for the Argo CD application controller to pick up Applications, they must be created in the argocd namespace.
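Concretely, the apps/ directory in your repo holds one Application manifest per deployment. The layout below is illustrative; the stg and prod file names are assumptions based on the Applications described later in this section:

```
apps/
├── guestbook-dev.yaml    # the Application you created earlier, now managed from Git
├── guestbook-stg.yaml
├── guestbook-prod.yaml
└── portal.yaml
```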
- Navigate to the Applications dashboard in the Argo CD UI, and click NEW APP.
- In the top right, click EDIT AS YAML.
- Paste the contents of app-of-apps.yaml (in the repo's root).
  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: bootstrap
    namespace: argocd
  spec:
    destination:
      name: in-cluster
    project: default
    source:
      path: apps
      repoURL: https://github.com/<username>/intro-argo-cd-tutorial # Update to your repo URL.
      targetRevision: HEAD
  This Application will watch the apps/ directory in your repo, which contains Application manifests for the guestbook-stg and guestbook-prod instances of the guestbook chart, and a new portal Helm chart.
- Click SAVE.
- Then, in the top left, click CREATE.
- Click on the Application card titled argocd/bootstrap.
  At this point, the Application will be out-of-sync. The diff will show the addition of the argocd.argoproj.io/tracking-id annotation to the existing guestbook-dev Application, which indicates that the App of Apps now manages it:
  kind: Application
  metadata:
  ++  annotations:
  ++    argocd.argoproj.io/tracking-id: 'bootstrap:argoproj.io/Application:argocd/guestbook-dev'
      generation: 44
      labels:
    ...
    path: guestbook
    repoURL: 'https://github.com/<username>/intro-argo-cd-tutorial'
    syncPolicy:
      automated: {}
  The diff will also show a new Application for the portal Helm chart.
- To apply the changes, in the top bar, click SYNC then SYNCHRONIZE.
From this Application, you can see all of the other Applications managed by it in the resource tree. Each child Application resource has a link to its view on the resource card.
5. Tool Detection, and Sync Waves​
After you created the App of Apps, it deployed the portal Application based on the manifest in Git (apps/portal.yaml). Since the automated sync policy is not enabled, the portal Application remains out-of-sync. We'll go through some explanation before triggering the sync.
- Click on the Application card titled argocd/portal.
5.1. Tool Detection​
The portal Application utilizes Kustomize via the kustomization.yaml file in the portal folder. Unlike the guestbook-dev Application, the portal Application does not specify Kustomize in the source spec. Instead, it relies on Argo CD's automatic tool detection. If a config management tool is not specified in the source of an Application, Argo CD will check for the following:
- A Chart.yaml file in the folder. If found, Argo CD will assume that the folder contains a Helm chart.
- A kustomization.yaml file in the folder. If found, Argo CD will use Kustomize.
If neither is found, Argo CD will default to plain Kubernetes manifests.
Since the portal folder contains a kustomization.yaml, Argo CD knows to use Kustomize.
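For illustration, a kustomization.yaml that Argo CD would detect can be as simple as a list of the manifests to render. This is a sketch, not the actual file from the repo, and the file names are assumptions:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - backend-deployment.yaml
  - backend-service.yaml
  - frontend-deployment.yaml
  - frontend-service.yaml
  # the hook Jobs described in the next section would be listed here too
```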
5.2. Sync Phases, Waves, & Hooks​
Each time Argo CD performs a sync on an Application, it does not simply generate the manifests and apply them to the cluster. Each sync contains multiple phases and waves. It may also create Kubernetes Job resources known as hooks. The portal Application is an example of this functionality.
The portal folder contains a frontend and a backend Deployment with corresponding Services, and 3 Jobs. Yet the resources shown in the out-of-sync Application don't contain those Jobs. This is because the Jobs contain the argocd.argoproj.io/hook annotation, which indicates to Argo CD that the resource should be applied during a sync. Typically hooks are Jobs or (Argo) Workflows, but they can be any resource.
In the portal Application, hooks are used to update the SQL schema for the backend and to bring a maintenance page up and down for the frontend. The schema update Job runs in the PreSync phase, which happens before the Sync phase, when resources are applied. Therefore, before the image for the backend Deployment is updated, the schema can be updated in preparation.
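A hook is an ordinary manifest carrying the hook annotation. A sketch of what the schema-update Job's metadata could look like (the name and delete policy are assumptions; the hook annotation is the part Argo CD acts on):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: portal-schema-update                              # hypothetical name
  annotations:
    argocd.argoproj.io/hook: PreSync                      # run before the Sync phase applies resources
    argocd.argoproj.io/hook-delete-policy: HookSucceeded  # optional: clean up the Job once it succeeds
spec:
  # ... pod template running the schema migration goes here ...
```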
The frontend Deployment and related resource hooks rely on sync waves to ensure they are executed in the correct order. Each phase in a sync can contain multiple sync waves. The default sync wave, which all resources that don't specify the argocd.argoproj.io/sync-wave annotation fall on, is 0. Sync waves can be negative, so a wave of -1 will run before all resources on the default wave.
The hook to bring up the maintenance page for the frontend runs on sync wave 1. The frontend Deployment and Service are on wave 2. Then the hook to bring down the maintenance page is on wave 3. This order of waves ensures that the maintenance page is brought up first. While it's up, the changes are made to the frontend resources. Then, once their sync is complete, the page is brought back down.
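Expressed as annotations, that choreography amounts to assigning each resource a wave within the Sync phase. A sketch of the relevant metadata (resource kinds and names are abbreviated; the wave numbers follow the description above):

```yaml
# Hook that brings the maintenance page up
metadata:
  annotations:
    argocd.argoproj.io/hook: Sync
    argocd.argoproj.io/sync-wave: "1"
---
# The frontend Deployment and Service
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "2"
---
# Hook that brings the maintenance page back down
metadata:
  annotations:
    argocd.argoproj.io/hook: Sync
    argocd.argoproj.io/sync-wave: "3"
```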
5.3. Sync It!​
With an understanding of how the portal Application sync works, go ahead and trigger a sync, and watch as the resources are created in a precisely choreographed fashion rather than all at once.
- In the top bar, click SYNC then SYNCHRONIZE.
6. Review​
You have reached the end of the tutorial. You have an Argo CD instance deploying Helm charts from your repo into your cluster.
You no longer manually deploy Helm chart changes. The Application spec describes the process for deploying the Helm charts declaratively. There's no longer a need for direct access to the cluster, removing the primary source of configuration drift.