Creating ephemeral preview apps with Argo CD

It's pretty wild that Heroku introduced review apps in 2015, and yet it's still not a standard feature of CI/CD systems!

Put simply, the idea is that every time your team opens a pull request, an instance of the app is provisioned which reflects the changes made in the PR. When the PR is closed, the app is destroyed.

We use Argo CD; it has good support for this pattern but it's not particularly well-documented. This post describes how to combine GitHub, GitHub Actions, Argo CD, and Kubernetes to set up your own ephemeral preview apps.

Overview of the final setup

I'll break this down into a few steps, but first here's a diagram of the setup we're aiming for:

Two pull requests, resulting in two ephemeral preview apps

In summary:

  • For each pull request in your app's repo
  • … a paired pull request will be created in your "infrastructure" repo
  • … which is watched by a pull request ApplicationSet generator in Argo CD
  • … which creates, updates, and deletes an Argo CD Application for each PR

You could go for variants of this setup – e.g. having a single monorepo for all infrastructure – without changing the core idea.

Step 1: Create a repo for your infrastructure

Following GitOps best practice, Argo CD recommends separating your infrastructure code from your application code, and I'll assume you're heeding that advice – although this setup would work just about as well if you had a single repo for both.

The structure we're running looks like:

infrastructure-repo
├── argo
│   ├── application-set-durable.yaml
│   ├── application-set-previews.yaml
│   └── kustomization.yaml
└── k8s
    ├── base
    │   ├── deployment.yaml
    │   └── kustomization.yaml
    └── overlays
        ├── preview
        │   ├── kustomization.yaml
        │   └── version-template.yaml
        ├── prod
        │   └── ...
        └── test
            └── ...

Here, the argo directory contains the Argo CD ApplicationSet definitions, and the k8s directory contains the Kubernetes manifests for the app itself.

I won't cover the Kustomize setup here, because it's no different from what you'd do in a non-Argo CD setup.
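For orientation, though, here's a rough sketch of what the preview overlay could look like. This is an assumption about the layout, not our actual files: it treats version-template.yaml as a patch whose image tag CI rewrites for each pull request (see step 3).

# k8s/overlays/preview/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  # A strategic-merge patch; CI substitutes the image tag before committing.
  - path: version-template.yaml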

Step 2: Create an Argo CD ApplicationSet for your preview apps

ApplicationSets are very flexible, as the example use cases in the docs show. However, we'll focus on one specific use case: creating ephemeral preview apps for GitHub pull requests.

To do this, we'll use a pull request generator in our ApplicationSet, which creates an Argo CD Application for each pull request in a given repo – in other words, it does all of the heavy lifting.

Here's an example ApplicationSet definition for myapp:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapp-applicationset-previews
  namespace: argocd
spec:
  generators:
    - pullRequest:
        github:
          owner: oughtinc
          repo: myapp-infra
          tokenRef:
            secretName: github-token
            key: token
  template:
    metadata:
      name: "myapp-{{branch}}-{{number}}"
    spec:
      syncPolicy:
        automated:
          prune: true
        syncOptions:
          - CreateNamespace=true
      source:
        repoURL: "https://github.com/oughtinc/myapp-infra.git"
        targetRevision: "{{head_sha}}"
        path: k8s/overlays/preview
        kustomize:
          commonLabels:
            app.kubernetes.io/instance: "{{branch}}-{{number}}"
      project: default
      destination:
        server: https://our-test-cluster.eks.amazonaws.com
        namespace: "myapp-{{branch}}"

  • The generators section defines a pull request generator, which will create an Argo CD Application for each open pull request in the myapp-infra repo.
  • The template section is the template for those Applications: the {{branch}}, {{number}}, and {{head_sha}} placeholders are filled in per pull request by the generator.
  • The CreateNamespace=true sync option means each preview app gets its own namespace: this is fairly important (see "Gotchas" below).
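The tokenRef above points at a Secret in the argocd namespace. If you don't already have one, it might look something like this – the token value is a placeholder, and it needs read access to the repo's pull requests:

apiVersion: v1
kind: Secret
metadata:
  name: github-token
  namespace: argocd
type: Opaque
stringData:
  token: <a GitHub personal access token>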

Step 3: Create PRs in your infrastructure repo for each PR in your app repo

Because we separated out our infrastructure code from our application code, we need to create a PR in our infrastructure repo for each PR in our app repo.

This is what our continuous integration pipeline does:

Continuous integration pipeline

The key points here are:

  • The Docker images are tagged with the SHA of the latest commit on the pull request: you're looking for ${{ github.event.pull_request.head.sha }}.
    • You could use other tags here, but we've found that the SHA is useful for post-hoc debugging.
  • We create or re-use an existing branch in the infrastructure repo which is "paired" to the PR's branch. There's no tangible link between the two branches – it's just that we derive the name of the infrastructure branch from the name of the app branch.
  • Creating a PR in the infrastructure repo is super easy (see the workflow sketch after this list):
    hub pull-request --base main --message "Infrastructure PR for $infra_branch"
    
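Putting those points together, here's a condensed sketch of the workflow. Treat it as an illustration rather than our exact pipeline: the registry URL, the INFRA_REPO_TOKEN secret, and the sed line that templates the image tag into version-template.yaml are all placeholders, and it assumes the hub CLI is available on the runner.

name: preview
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  preview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Tag the image with the PR's head SHA, which helps with post-hoc debugging.
      - name: Build and push the Docker image
        run: |
          tag="${{ github.event.pull_request.head.sha }}"
          docker build -t "registry.example.com/myapp:$tag" .
          docker push "registry.example.com/myapp:$tag"

      - name: Update the paired infrastructure branch and open a PR
        env:
          GITHUB_TOKEN: ${{ secrets.INFRA_REPO_TOKEN }} # needs write access to myapp-infra
        run: |
          # Derive the paired branch name from the app branch (normalised; see "Gotchas").
          infra_branch=$(echo "$GITHUB_HEAD_REF" | sed -e 's/[^[:alnum:].]/-/g' | tr '[:upper:]' '[:lower:]')
          git clone "https://x-access-token:${GITHUB_TOKEN}@github.com/oughtinc/myapp-infra.git"
          cd myapp-infra
          git config user.name ci-bot && git config user.email ci-bot@example.com
          git checkout "$infra_branch" 2>/dev/null || git checkout -b "$infra_branch"
          # Placeholder templating step: write the new image tag into the preview overlay.
          sed -i "s|newTag:.*|newTag: ${{ github.event.pull_request.head.sha }}|" k8s/overlays/preview/version-template.yaml
          git commit -am "Update myapp image to ${{ github.event.pull_request.head.sha }}"
          git push origin "$infra_branch"
          hub pull-request --base main --message "Infrastructure PR for $infra_branch" || true # no-op if the PR already exists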

Step 4: Clean up the infrastructure PRs when the app PRs are closed

The pull request generator will handle cleaning up all of the Kubernetes resources associated with your ephemeral app – but only when the infrastructure PR is merged or closed, at which point the generator removes the corresponding Application, taking the preview app's resources with it.

Because the application engineers are going to be working on the application repo PR, we need to make sure the paired infrastructure PR is closed at the right moment.

Here's what our CI pipeline does on the pull_request.closed event:

Continuous integration steps when PRs are closed
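A minimal sketch of that job, using the gh CLI (preinstalled on GitHub-hosted runners) instead of hub, and assuming the same branch-naming convention and token as in step 3:

name: preview-cleanup
on:
  pull_request:
    types: [closed]

jobs:
  close-infra-pr:
    runs-on: ubuntu-latest
    steps:
      - name: Close the paired infrastructure PR
        env:
          GITHUB_TOKEN: ${{ secrets.INFRA_REPO_TOKEN }}
        run: |
          # Re-derive the paired branch name, then close its PR. Closing (or merging)
          # the infrastructure PR is what triggers Argo CD to prune the preview app.
          infra_branch=$(echo "$GITHUB_HEAD_REF" | sed -e 's/[^[:alnum:].]/-/g' | tr '[:upper:]' '[:lower:]')
          gh pr close "$infra_branch" --repo oughtinc/myapp-infra --delete-branch

If the app PR was merged and the paired infrastructure PR contains real infrastructure changes, you'd merge it rather than close it; see the last gotcha below.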

Gotchas

As always, the simple setup described above hides some easy-to-make mistakes, which hopefully you can now avoid:

  • When creating a branch in the infrastructure repo, don't use the application repo's branch name verbatim.
    • This is because the branch name can end up in Kubernetes resource names, domain names, etc. Make sure to normalise it, e.g.:
      infra_branch=$(echo "$GITHUB_HEAD_REF" | sed -e 's/[^[:alnum:].]/-/g' | tr '[:upper:]' '[:lower:]')
      
  • Create a separate Kubernetes namespace for each preview app.
    • It would be possible to re-use a single namespace for all the preview apps, distinguishing between the resources with Kustomize's nameSuffix or namePrefix feature.
    • However, this makes it much more fiddly to clean up the preview apps. More importantly, it means resource names aren't stable between preview apps – your code would need to do some sort of name lookup to know how to communicate with other services, whereas with one namespace per app a service is always reachable at the same in-namespace DNS name.
  • Allow for pushes to be made to the infrastructure branch, and merge the infrastructure PR.
    • Above, I've described quite a simple application change, where we just need to update the Docker images.
    • However, you often want to update the app and its infrastructure at the same time: adding secrets, changing the number of replicas, etc.
    • In this case, it's convenient (and good for auditability) to see the infrastructure and application changes as part of a single diff on the infrastructure PR.
    • You'll want some kind of meat-space process to ensure the infrastructure PR is manually reviewed before it's merged.