
Getting Started

This guide walks you through setting up and using the Akuity Self Hosted Platform on your own infrastructure. If you have any questions or need help, please reach out to your designated support contact or through your shared communication channel (Slack, Teams, etc.).

Prerequisites

Before starting, you will need two key pieces of information:

  • Your Akuity Platform license key
  • A key for the Akuity Platform OCI registry

Logging into the registry

# This will prompt you for your registry key
helm registry login us-docker.pkg.dev -u _json_key_base64
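For example, if your registry key was delivered as a JSON key file, it can be base64-encoded and piped to the login command. This is a minimal sketch; the file name key.json is an assumption:

# Assumes the registry key was provided as a JSON key file named key.json
base64 < key.json | tr -d '\n' | helm registry login us-docker.pkg.dev -u _json_key_base64 --password-stdin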

Infrastructure Requirements

The Akuity Platform is designed to run on a dedicated Kubernetes cluster. Regardless of where that cluster is hosted, you will need the following infrastructure in addition to the cluster itself:

  • A postgres database accessible from the Kubernetes cluster
  • Kubernetes network policy engine (e.g. Calico, Cilium, etc.)
  • A network-level load balancer to route traffic to the platform
  • DNS configured to point traffic to the load balancer
  • An OIDC identity provider for authentication (e.g. Google, Okta, etc.)
  • A wildcard TLS certificate for the domain you are using for the platform (or the ability to generate one on the fly using cert-manager)
  • SMTP (optional)
  • A cluster autoscaler (optional)

The next sections cover each of these requirements in more detail. Where applicable, provider-specific guidance is included for the most commonly used cloud providers.

If you have any other questions, please reach out to Akuity support.

Network Security

Akuity Platform deploys NetworkPolicy resources in order to ensure ingress and egress connectivity to Pods are allowed on an as-needed basis, and to disallow cross-namespace communication between tenants. For this to work, a Kubernetes network policy engine must be installed to enforce these policies.
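For illustration only, the kind of policy the engine must be able to enforce looks like the following default-deny ingress policy, which blocks cross-namespace traffic into a namespace unless the source Pod is in the same namespace. The namespace name is hypothetical; Akuity Platform deploys its own policies automatically:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-cross-namespace
  namespace: tenant-example        # hypothetical tenant namespace
spec:
  podSelector: {}                  # applies to all Pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}          # only allow traffic from Pods in the same namespace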

Akuity recommends using the AWS VPC-CNI as the network policy engine. As an alternative, Calico can also be used alongside the AWS VPC-CNI.

Calico Installation

If you choose to use Calico, follow the installation instructions provided by Tigera for EKS.
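For reference, a typical Helm-based install of the Tigera operator looks roughly like the sketch below; always consult Tigera's EKS documentation for the current chart version and any EKS-specific values:

# A sketch only - see Tigera's EKS installation guide for authoritative steps
helm repo add projectcalico https://docs.tigera.io/calico/charts
helm repo update
helm install calico projectcalico/tigera-operator --namespace tigera-operator --create-namespace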

Kubernetes Auto-Scaling

As Argo CD instances are created and more clusters connect to Akuity Platform, additional capacity is needed in the host control-plane cluster. Akuity Platform has no direct dependency on a specific autoscaler, but it deploys HPA resources to scale automatically based on CPU/memory load. A Kubernetes cluster autoscaler automatically adds compute capacity to the cluster as needed.

There are two popular solutions for autoscaling, in addition to vendor-specific solutions:

  1. Cluster Autoscaler
  2. Karpenter (AWS and Azure only)

If running on EKS, Akuity recommends Karpenter, as it provides better flexibility for right-sizing EC2 instances according to required capacity. The following EC2 instance types (Nitro-based to support prefix delegation) are recommended; a sample Karpenter NodePool restricting provisioning to these types is shown after the list:

  • t3.large
  • t3.xlarge
  • t3a.large
  • t3a.xlarge
  • t3a.2xlarge
  • m5.large
  • m5.xlarge
  • m5a.large
  • m5a.xlarge
  • c5.large
  • c5.xlarge
  • r6a.large
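
A minimal sketch of a Karpenter NodePool limited to a subset of the instance types above. The NodePool/EC2NodeClass APIs vary between Karpenter versions, and the default EC2NodeClass is assumed to already exist:

apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: akuity-platform
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default               # assumed to exist already
      requirements:
        - key: node.kubernetes.io/instance-type
          operator: In
          values: ["t3.large", "t3.xlarge", "m5.large", "m5.xlarge", "c5.large", "c5.xlarge"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]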

PostgreSQL Database

Akuity Platform requires a PostgreSQL database to store its data. The database can be self-hosted or a managed service provided by your cloud provider. It must be accessible from the Kubernetes cluster where Akuity Platform is installed; securing the connection to the database is your responsibility. Whichever option you choose, the database must meet the following requirements:

  • Engine: PostgreSQL
  • Version: 14.6+
  • Extensions: citext, uuid-ossp

Regardless of your cloud provider, the appropriate instance class/size is highly variable and depends on a number of factors:

  • Number of Argo CD instances
  • Number of total clusters across instances
  • Number of applications across instances

If using an Amazon-managed database, Akuity recommends using Amazon Aurora.

Requirements

  • Engine: Amazon Aurora PostgreSQL-Compatible Edition
  • Version: 14.6, 14.7, 14.8
  • Default parameter groups: aurora-postgresql14
  • Enforce TLS (certificates) with "rds.force_ssl" = "1" (see the parameter group example after this list)
  • Encryption enabled
  • Aurora Replica or Reader node in a different AZ for the purposes of high-availability failover and to provide read-only instances.
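
As an illustration, enforcing TLS on an existing Aurora cluster parameter group could look like the following AWS CLI call. The parameter group name akuity-aurora-pg14 is hypothetical, and the appropriate ApplyMethod depends on your engine version:

# Hypothetical parameter group name; verify whether the parameter is dynamic
# (immediate) or static (pending-reboot) for your engine version
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name akuity-aurora-pg14 \
  --parameters "ParameterName=rds.force_ssl,ParameterValue=1,ApplyMethod=immediate"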

DB Instance Class:

The following are examples of configurations that have been tested:

Limits                                             DB Instance Class
10 instances, 10 clusters, 100 applications        db.t4g.medium (2 vCPU, 4 GiB RAM)
5 instances, 1000 clusters, 5000 applications      db.r6g.xlarge (4 vCPU, 32 GiB RAM)
note

The Aurora Serverless v2 instance class is not recommended due to cost inefficiency. Akuity Platform’s database load is constant rather than bursty, so Aurora Serverless v2 costs significantly more to support the same or a smaller number of Argo CD instances, clusters, and applications.

Networking

Akuity recommends the following:

  • Private subnets for both the Kubernetes workloads and database VPC.
  • While it is possible and supported to provision the RDS database in the same VPC as the EKS cluster, consider placing the database in its own VPC connected via Transit Gateway. This separation makes it easier to replace the EKS cluster, which is possible because Akuity Platform runs only stateless workloads.

Roles

The database user that will be used for Akuity Platform should be granted the pg_read_all_data role. You should connect to the database and run:

GRANT pg_read_all_data TO <user> WITH ADMIN OPTION;
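
If you have not already created a dedicated database user, the sequence might look like the following sketch; the role name akuity_platform and the password are hypothetical placeholders:

-- Hypothetical role name and password; adjust to your environment
CREATE ROLE akuity_platform WITH LOGIN PASSWORD 'change-me';
GRANT pg_read_all_data TO akuity_platform WITH ADMIN OPTION;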

Extensions

The citext extension must also be enabled on the database instance. To enable this extension, you should connect to the database and run:

CREATE EXTENSION citext;
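
The requirements above also list the uuid-ossp extension; if it is not already available on the instance, it can be enabled the same way:

CREATE EXTENSION IF NOT EXISTS "uuid-ossp";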

Load Balancing and Ingress

Akuity Platform bundles the Traefik Ingress Controller as part of its installation. Upgrades of Traefik and its CRDs are handled as part of Akuity Platform upgrades.

Akuity Platform assumes the presence of the AWS Load Balancer Controller and will provision a Kubernetes Service with the NLB-specific annotations needed by the platform, e.g.:

apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: traefik-external
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
    service.beta.kubernetes.io/aws-load-balancer-type: external

An NLB is used (as opposed to ALB) for the cloud load balancer since:

  • Traffic to the Akuity Platform is a mix of protocols (HTTP, TCP, gRPC)
  • Routing decisions are configured at Ingress level, not at the Load Balancer

TLS

By default, Akuity Platform requires the public certificate and private key of a wildcard TLS certificate. TLS must be terminated inside the cluster at the Ingress (Traefik), not at the NLB. This is because, by default, Akuity Platform leverages Traefik features that require traffic to pass through to the Ingress (e.g. IP allowlists, websocket support, HTTP/2 and gRPC).

note

For advanced use cases where you need to bring your own ingress controller or have a hard requirement on terminating TLS at the load balancer, it is possible to disable TLS termination. See the Bring Your Own Ingress document for more information.

The wildcard certificate given to Akuity Platform must be valid for the following domains (replace akuity.example.com with your own domain):

  • akuity.example.com
  • *.cd.akuity.example.com
  • *.cdsvcs.akuity.example.com
  • *.kargo.akuity.example.com
  • *.kargosvcs.akuity.example.com

Akuity Platform does not have any requirements on how the certificate is provisioned. TLS certificates can be obtained in different ways. Some options include:

  1. Purchased from a traditional certificate provider (e.g. Verisign, DigiCert)
  2. Free, time-limited (90 day) certificates from Let’s Encrypt

If the TLS Secret is managed separately and not supplied to the Helm chart (e.g. generated by cert-manager or deployed via an ExternalSecret), the resulting Secret must be named akuity-platform-tls in the traefik-external namespace. Akuity Platform assumes the presence of this Secret so that it can be referenced by the Traefik Ingress.
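
If you are supplying the certificate yourself rather than using cert-manager, a sketch for creating the expected Secret from existing certificate files (the file names are placeholders) is:

# Placeholder file names; create the namespace first if it does not exist yet
kubectl create secret tls akuity-platform-tls \
  --cert=wildcard.akuity.example.com.crt \
  --key=wildcard.akuity.example.com.key \
  --namespace traefik-external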

Cert-Manager (optional)

If you are using cert-manager to manage your TLS certificates, the following example shows how to do so with a Let's Encrypt issuer. These instructions can be adapted for other issuers as well.

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
      - dns01:
          route53:
            region: us-east-1
        selector:
          dnsZones:
            - akuity.example.com
            - cd.akuity.example.com
            - cdsvcs.akuity.example.com
            - kargo.akuity.example.com
            - kargosvcs.akuity.example.com
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: akuity.example.com
  namespace: akuity-platform
spec:
  dnsNames:
    - akuity.example.com
    - '*.cd.akuity.example.com'
    - '*.cdsvcs.akuity.example.com'
    - '*.kargo.akuity.example.com'
    - '*.kargosvcs.akuity.example.com'
  issuerRef:
    name: letsencrypt-prod
  # This secret name is important, as it is the name that Akuity Platform expects
  secretName: akuity-platform-tls

DNS

Akuity Platform requires DNS to be configured for the domain you are using for the platform. As the platform creates resources, two kinds of DNS names come into play:

Argo CD instances will be provisioned with human-friendly and reconfigurable DNS names like:

  • myargocd-1.cd.akuity.example.com
  • myargocd-2.cd.akuity.example.com

Agent components will connect to static, ID specific DNS names like:

  • x8ax38dmb1d81h-cplane.cdsvcs.akuity.example.com
  • x8ax38dmb1d81h-cache.cdsvcs.akuity.example.com

Configuring DNS is platform-specific; instructions are provided below.

tip

To automate the creation of these records, External DNS can be used by adding additional annotations to the NLB Service object:

apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: traefik-external
  annotations:
    external-dns.alpha.kubernetes.io/alias: "true"
    external-dns.alpha.kubernetes.io/hostname: '*.cd.akuity.example.com.,*.cdsvcs.akuity.example.com.'

Akuity Platform requires a Route53 Hosted Zone (e.g. akuity.example.com) with the following records pointing to the NLB hostname/IP:

Record Name                        Type    Value/Route traffic to
akuity.example.com                 A       ALIAS to NLB
*.cd.akuity.example.com            A       ALIAS to NLB
*.cdsvcs.akuity.example.com        A       ALIAS to NLB
*.kargo.akuity.example.com         A       ALIAS to NLB
*.kargosvcs.akuity.example.com     A       ALIAS to NLB
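
Once the records exist, you can spot-check resolution with dig (replace the domain with your own; any label under the wildcard zones will do). Each name should resolve to the NLB:

dig +short akuity.example.com
dig +short test.cd.akuity.example.com
dig +short test.cdsvcs.akuity.example.com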

OIDC Identity Provider

Akuity Platform requires login via an OIDC-compliant identity provider (IdP) using the OAuth 2.0 login flow. It can be configured to integrate directly with an OIDC provider, or through Dex as an intermediary.

Dex is bundled as an optional component of the helm chart. You may wish to enable Dex for the following reasons:

  • Dex supports local, static users/passwords with dummy email addresses. This is useful for getting started, reducing upfront requirements, and for testing purposes (see the sketch after this list).
  • If obtaining a client application from your organization’s IdP is a difficult process or not possible, Dex can be used as an intermediary to other identity sources you may have easier access to, such as GitHub.
  • If you are using Azure AD and wish to limit Akuity Platform authentication by Azure AD group, Dex can translate Azure AD groups into OIDC claims and limit logins to particular groups (without any configuration on the IdP side).
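
As a sketch of the first bullet above, Dex's native configuration supports static users like the following. How this configuration is wired into the Akuity Platform Helm values depends on the chart; consult your values.yaml. The email, username, userID, and hash below are placeholders:

# Dex native config format; all values here are placeholders
staticPasswords:
  - email: admin@example.com
    username: admin
    userID: 08a8684b-db88-4b73-90a9-3cd1661f5466
    hash: <bcrypt hash of the password, e.g. generated with htpasswd -bnBC 10 "" password>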

SMTP

SMTP is used to send emails notifying users that they have been invited to an organization. It is optional. If not configured, users will not be notified about a pending invite, but they can still navigate to their account page to accept it.

Installation

For longer-term management and updates, we recommend using Argo CD to manage the platform. However, there is a chicken-and-egg problem: the initial installation and creation of the manifests must happen before Argo CD can manage them. To assist with this, we provide scaffolding for bootstrapping your installation. To use it, run the following commands in the directory where you want to keep the manifest files:

curl -fsSL -o quickstart.zip https://dl.akuity.io/self-hosted/quickstart.zip && unzip quickstart.zip && rm quickstart.zip
# Feel free to rename the directory if you want before continuing
cd quickstart
helm pull oci://us-docker.pkg.dev/akuity/akp-sh/charts/akuity-platform --untar --untardir charts

This directory contains a values.yaml file with some of the most common configuration items available for you to customize. You will need to edit this file and fill out the details specific to your installation.

If you need to make additional customizations that are not supported by the chart, you can use the kustomization.yaml file to make those last-mile modifications.
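
For example, a last-mile customization in kustomization.yaml could add a patch on top of the rendered chart output. The patch target below is purely hypothetical, and the scaffold's existing entries are omitted:

# Excerpt only - keep the scaffold's existing entries and append a patch like this
patches:
  - target:
      kind: Deployment
      name: portal-server          # hypothetical workload name
    patch: |-
      - op: add
        path: /spec/template/metadata/labels/team
        value: platform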

Once you have configured your values and customizations, use the following command to deploy the Akuity Platform:

kustomize build --enable-helm | kubectl apply -f -

Once you have run this command, the Akuity Platform will be installed in your cluster. You can then commit the manifests to your Git repository and use Argo CD to manage the platform going forward; a sample Application is sketched below.
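
A minimal sketch of an Argo CD Application managing these manifests from Git. The repository URL and path are placeholders, and because the scaffold renders the chart through kustomize with --enable-helm, Argo CD's kustomize build options must include --enable-helm (e.g. via kustomize.buildOptions in the argocd-cm ConfigMap):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: akuity-platform
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/infra/akuity-platform.git   # placeholder repository
    targetRevision: main
    path: quickstart                                              # placeholder path
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true
      selfHeal: true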

Changelogs

You can find the changelogs for the Self Hosted Platform in the changelog section of the documentation.