It’s pretty wild that Heroku introduced review apps in 2015, and yet it’s still not a standard feature of CI/CD systems!
Put simply, the idea is that every time your team opens a pull request, an instance of the app is provisioned which reflects the changes made in the PR. When the PR is closed, the app is destroyed.
We use Argo CD; it has good support for this pattern but it’s not particularly well-documented. This post describes how to combine GitHub, GitHub Actions, Argo CD, and Kubernetes to set up your own ephemeral preview apps.
I’ll break this down into a few steps, but first here’s a diagram of the setup we’re aiming for:
In summary:

1. An engineer opens a pull request in the application repo.
2. The CI pipeline opens a paired pull request in the infrastructure repo.
3. Argo CD's pull request generator notices the infrastructure PR and provisions a preview app for it.
4. When the application PR is closed, CI closes the infrastructure PR, and Argo CD tears the preview app down.
You could go for variants of this setup – e.g. having a single monorepo for all infrastructure – without changing the core idea.
Following GitOps best practice, Argo CD recommends separating your infrastructure code from your application code, and I’ll assume you’re heeding that advice – although this setup would work just about as well if you had a single repo for both.
The structure we’re running looks like:
```
infrastructure-repo
├── argo
│   ├── application-set-durable.yaml
│   ├── application-set-previews.yaml
│   └── kustomization.yaml
└── k8s
    ├── base
    │   ├── deployment.yaml
    │   └── kustomization.yaml
    └── overlays
        ├── preview
        │   ├── kustomization.yaml
        │   └── version-template.yaml
        ├── prod
        │   └── ...
        └── test
            └── ...
```
Here, the `argo` directory contains the Argo CD ApplicationSet definitions, and the `k8s` directory contains the Kubernetes manifests for the app itself.
I won’t cover the Kustomize setup here, because it’s no different from what you’d do in a non-Argo CD setup.
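That said, for orientation, here’s a minimal sketch of what the preview overlay’s `kustomization.yaml` could look like. The file names match the tree above, but the role of `version-template.yaml` is my assumption:

```yaml
# k8s/overlays/preview/kustomization.yaml (illustrative sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

patches:
  # version-template.yaml presumably patches the base Deployment to point
  # at the image built for the PR under review.
  - path: version-template.yaml
```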
ApplicationSets are very flexible, as evidenced by the exemplar use cases given in the docs. However, we’ll focus on one specific use case: creating ephemeral preview apps for GitHub pull requests.
To do this, we’ll use a pull request generator in our ApplicationSet, which creates an Argo CD Application for each pull request in a given repo – in other words, it does all of the heavy lifting.
Here’s an example ApplicationSet definition for `myapp`:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapp-applicationset-previews
  namespace: argocd
spec:
  generators:
    - pullRequest:
        github:
          owner: oughtinc
          repo: myapp-infra
          tokenRef:
            secretName: github-token
            key: token
  template:
    metadata:
      name: "myapp-{{branch}}-{{number}}"
    spec:
      syncPolicy:
        automated:
          prune: true
        syncOptions:
          - CreateNamespace=true
      source:
        repoURL: "https://github.com/oughtinc/myapp-infra.git"
        targetRevision: "{{head_sha}}"
        path: k8s/overlays/preview
        kustomize:
          commonLabels:
            app.kubernetes.io/instance: "{{branch}}-{{number}}"
      project: default
      destination:
        server: https://our-test-cluster.eks.amazonaws.com
        namespace: "myapp-{{branch}}"
```
Note that the pull request generator is watching the `myapp-infra` repo. Because we separated our infrastructure code from our application code, we need to create a PR in our infrastructure repo for each PR in our app repo.
This is what our continuous integration pipeline does:
*Continuous integration pipeline*
The key points here are:

- The version of the app to deploy in the preview is identified by the PR’s head commit, `${{ github.event.pull_request.head.sha }}`.
- The paired infrastructure PR is opened with `hub pull-request --base main --message "Infrastructure PR for $infra_branch"`.
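To make that concrete, here’s a minimal sketch of what such a workflow job might look like. This isn’t our exact pipeline: the secret name (`INFRA_REPO_TOKEN`), the `APP_VERSION` file, and the commit details are all illustrative assumptions.

```yaml
# .github/workflows/preview.yaml (sketch; names and paths are assumptions)
name: Open infrastructure PR
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  infra-pr:
    runs-on: ubuntu-latest
    steps:
      # Check out the infrastructure repo (not the app repo) with a token
      # that can push branches and open PRs there.
      - uses: actions/checkout@v4
        with:
          repository: oughtinc/myapp-infra
          token: ${{ secrets.INFRA_REPO_TOKEN }}

      - name: Open the paired infrastructure PR
        env:
          GITHUB_TOKEN: ${{ secrets.INFRA_REPO_TOKEN }}  # hub reads this
        run: |
          # Sanitize the app branch name into something Kubernetes-safe
          infra_branch=$(echo $GITHUB_HEAD_REF | sed -e 's/[^[:alnum:].]/-/g' | tr '[:upper:]' '[:lower:]')

          git config user.name "ci-bot"
          git config user.email "ci-bot@users.noreply.github.com"
          git checkout -b "$infra_branch"

          # Record the exact app commit to deploy (APP_VERSION is a
          # hypothetical stand-in for however you pin the app version).
          echo "${{ github.event.pull_request.head.sha }}" > k8s/overlays/preview/APP_VERSION
          git add k8s/overlays/preview/APP_VERSION
          git commit -m "Preview app for $infra_branch"
          git push --force -u origin "$infra_branch"

          # On re-runs (synchronize) the PR already exists, so tolerate failure.
          # Assumes the hub CLI is available on the runner.
          hub pull-request --base main --message "Infrastructure PR for $infra_branch" || true
```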
The pull request generator will handle cleaning up all of the Kubernetes resources associated with your ephemeral app – but only when the PR is merged or closed.
Because the application engineers are going to be working on the application repo PR, we need to make sure the paired infrastructure PR is closed at the right moment.
Here’s what our CI pipeline does on the `pull_request.closed` event:
*Continuous integration steps when PRs are closed*
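Here’s a matching sketch for the cleanup side. Again, the secret name is an assumption, and I’ve used the `gh` CLI’s `pr close` rather than `hub`, since it can close a PR by branch name in one line:

```yaml
# Sketch of the cleanup job; secret and repo names are assumptions.
name: Close infrastructure PR
on:
  pull_request:
    types: [closed]

jobs:
  close-infra-pr:
    runs-on: ubuntu-latest
    steps:
      - name: Close the paired infrastructure PR
        env:
          GH_TOKEN: ${{ secrets.INFRA_REPO_TOKEN }}
        run: |
          # Recompute the same sanitized branch name used when opening the PR
          infra_branch=$(echo $GITHUB_HEAD_REF | sed -e 's/[^[:alnum:].]/-/g' | tr '[:upper:]' '[:lower:]')

          # Closing the infra PR is what prompts the pull request generator
          # to prune the preview app's Kubernetes resources.
          gh pr close "$infra_branch" --repo oughtinc/myapp-infra --delete-branch
```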
As always, the simple setup described above belies some easy-to-make mistakes which hopefully you can now avoid:

- Branch names frequently contain characters that aren’t valid in Kubernetes object names, so sanitize the branch name before using it anywhere in a manifest:

  ```shell
  infra_branch=$(echo $GITHUB_HEAD_REF | sed -e 's/[^[:alnum:].]/-/g' | tr '[:upper:]' '[:lower:]')
  ```

  For example, `Feature/Add_Login` becomes `feature-add-login`.

- If you need the resources of different preview apps to have distinct names, look at Kustomize’s `nameSuffix` or `namePrefix` feature.