Posted on 12 October 2023, updated on 23 October 2023.
When you manage multiple clusters with ArgoCD, it can be worthwhile to centralize that management in a single instance. The main advantages are simpler cluster management and a lower resource footprint for tooling components.
Let’s get started managing multiple clusters with ArgoCD!
ArgoCD: main principles
This article assumes prior knowledge about ArgoCD and will focus on using it to manage multiple clusters. We will, however, go through the main concepts of ArgoCD and how it is installed in a cluster.
What’s an Application?
ArgoCD adds a new set of CRDs to your cluster when installed. The most important of them to begin working with ArgoCD is the Application. An Application is a way to tell ArgoCD what to deploy in a cluster by linking a manifest source to a destination cluster.
The YAML manifest of an Application is as follows:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-team/my-repo.git
    targetRevision: HEAD
    path: deployments/my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: default
- The spec.source field defines where to find the YAML manifests that you want to deploy. Multiple kinds of sources are supported, each with their own parameters, such as raw manifests, Helm charts, and Kustomize customizations. The repoURL and path fields respectively define in which repository the manifests are located and in which sub-directory of that repository.
- The spec.destination field defines where the manifests should be deployed: which cluster and which namespace.
- The spec.project field lets you organize your ArgoCD Applications into projects. We'll talk about this later!
The app-of-apps pattern
We saw earlier that the spec.source.path field defines the directory inside the repository where manifests are located. This directory would typically contain manifests for Deployments, Services, Ingresses... but what happens if it contains manifests for Applications?
The answer is that the base Application you defined now deploys and manages other Applications. This is known as the app-of-apps pattern and is very common when using ArgoCD.
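As an illustration, here is a minimal sketch of a parent Application pointing at a directory that itself contains Application manifests (the repository URL, names, and paths are hypothetical):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-team/my-repo.git
    targetRevision: HEAD
    path: apps            # this directory contains other Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd

Syncing this single root Application is enough to have ArgoCD create and manage every Application found under the apps directory.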
ArgoCD installation
ArgoCD is usually installed in two steps. The first step is to deploy it into the cluster using either raw manifests or Helm. You can find more information on this first step in ArgoCD's installation documentation. We will now focus on the second and most interesting step, which is to use ArgoCD to manage itself!
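For reference, this first step typically boils down to one of the following commands (adjust the version, URL, and values file to your setup; values.yaml here is just an example):

# Install ArgoCD from the official raw manifests
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Or install it with the community Helm chart
helm repo add argo https://argoproj.github.io/argo-helm
helm install argocd argo/argo-cd --namespace argocd --create-namespace --values values.yaml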
First, we set up a new repository dedicated to ArgoCD and the applications it will manage. A typical directory structure for this repository is as follows:
├── apps
│   └── tooling
│       └── argocd
│           └── argocd.yaml
└── deployments
    └── argocd
        └── tooling
            ├── Chart.yaml
            ├── templates
            │   └── repo-secret.yaml
            └── values.yaml
The repository is split into two directories. The apps directory contains YAML files defining Applications. The deployments directory contains raw YAML manifests, or Chart.yaml and values.yaml files, that define what will be deployed in the cluster.
For now, we only have one environment (tooling) and one application to deploy in it (argocd), but you can extend this structure to deploy other applications in other environments.
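For example, adding a hypothetical my-app application in a staging environment could extend the layout like this (the names are purely illustrative):

├── apps
│   ├── staging
│   │   └── my-app.yaml
│   └── tooling
│       └── argocd
│           └── argocd.yaml
└── deployments
    ├── argocd
    │   └── tooling
    └── my-app
        └── staging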
The definition of the argocd.yaml Application file is the following:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@gitlab.com:<ARGO_REPO_PATH>/argocd.git
    path: deployments/argocd/tooling
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
Then, we need to give ArgoCD access to the repository we created. For this, we add a Deploy Key to the GitLab or GitHub repository. Create a new SSH key and add its public part to your repository through GitLab/GitHub's interface.
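Generating the key pair can be done with ssh-keygen; the file name below is just an example:

# Generate an SSH key pair dedicated to ArgoCD (ed25519, no passphrase)
ssh-keygen -t ed25519 -C "argocd-repo-access" -f ./argocd-repo-key -N ""
# argocd-repo-key.pub goes into the repository's Deploy Key settings,
# argocd-repo-key (the private part) goes into the Secret shown below.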
To give the private part of the key to ArgoCD, we create a Secret with a specific label. This is the purpose of the repo-secret.yaml file.
As this file will be committed to the Git repository, the correct way to handle it would be with a tool like external-secrets. For simplicity in this tutorial, we will use a plain Secret directly, but please note that this is strongly discouraged!
This is the definition of the secret to connect to the repository:
apiVersion: v1
kind: Secret
metadata:
  name: repo-credentials
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@gitlab.com:<ARGO_REPO_PATH>/argocd.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----
Once the repository has been created with these YAML files and ArgoCD installed using Helm or raw manifests, we can finally use ArgoCD to manage itself.
First, manually apply the Secret file to your Kubernetes cluster so that ArgoCD can access the repository. Then, manually apply the ArgoCD Application file. Heading over to ArgoCD's interface, you should now see an argocd Application and the resources it deployed!
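Assuming the directory structure above and a kubeconfig pointing at the tooling cluster, this one-time bootstrap looks roughly like this:

# Give ArgoCD access to the repository...
kubectl apply -n argocd -f deployments/argocd/tooling/templates/repo-secret.yaml
# ...then hand the argocd Application over to ArgoCD itself
kubectl apply -n argocd -f apps/tooling/argocd/argocd.yaml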
The resources you see in the interface are not re-created at the moment you make ArgoCD manage itself. ArgoCD is smart enough to realize that the resources to be deployed are already there.
Adding clusters to an ArgoCD instance
Now that ArgoCD is installed and managing resources in a tooling cluster, we’ll get to the most interesting part: adding other clusters for it to manage, such as staging and production clusters.
The first part, focusing on cluster access, is cloud provider-specific because it depends on the way your cloud provider gives access to Kubernetes clusters. We’ll focus on using AWS EKS in this article, but it can be adapted to other providers as well.
Prerequisites: cluster access
In order to make ArgoCD manage other clusters, we need to allow it to access them. To do so in AWS, we first associate an IAM role with ArgoCD, then add this role to the aws-auth ConfigMap. This ConfigMap is specific to AWS EKS and associates IAM roles with Kubernetes RBAC groups inside the cluster.
Creating the role
The role that we create in AWS IAM is a bit uncommon because we will not attach any policies to it. We usually create roles to give entities specific permissions on AWS services, such as listing the objects in an S3 bucket or creating RDS database instances: that is what policies are for.
But here, we don't want ArgoCD to access AWS services, only to modify resources in our Kubernetes clusters. So, the role is only there to give ArgoCD an identity in AWS IAM.
The only statements in our role will be AssumeRole statements:
- One AssumeRoleWithWebIdentity statement will make the link between this role and ArgoCD's ServiceAccount. To do this, you first have to set up OIDC in your cluster. Since ArgoCD has multiple ServiceAccounts, we will allow any ServiceAccount in the argocd namespace to assume this role. A better security measure would be to restrict this statement to specific ServiceAccounts.
- One AssumeRole statement will allow this role to assume itself. This is only needed if the other clusters you want to manage are located in other AWS accounts.
The final role to create will look like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/<CLUSTER_ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "oidc.eks.eu-west-1.amazonaws.com/id/<CLUSTER_ID>:sub": "system:serviceaccount:argocd:*"
        }
      }
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/argocd"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
We then need to modify ArgoCD's ServiceAccount with an annotation indicating the ARN of this role, like so:
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/argocd
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/name: argocd-server
    app.kubernetes.io/part-of: argocd
  name: argocd-server
Also, add this annotation to the application-controller and applicationset-controller ServiceAccounts.
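If ArgoCD is already running and you prefer not to redeploy, the same annotation can be set in place with kubectl (the ServiceAccount names below match a default installation, so double-check them in your cluster):

kubectl -n argocd annotate serviceaccount argocd-application-controller \
  eks.amazonaws.com/role-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:role/argocd
kubectl -n argocd annotate serviceaccount argocd-applicationset-controller \
  eks.amazonaws.com/role-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:role/argocd
# restart the corresponding pods so they pick up the new IAM credentials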
Updating the ConfigMap
Now that ArgoCD has a role, we need to allow it to create resources inside our clusters. To do so, EKS uses a ConfigMap named aws-auth, which defines which IAM roles get which RBAC permissions. Simply edit this ConfigMap to map ArgoCD's role to the system:masters group.
It should look like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    ...
    - groups:
        - system:masters
      rolearn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/argocd
      username: arn:aws:iam::<AWS_ACCOUNT_ID>:role/argocd
    ...
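Assuming you have admin access to the target cluster, the quickest way to make this change by hand is:

kubectl edit configmap aws-auth -n kube-system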
Defining new clusters
Finally, we can add a new cluster to ArgoCD. This is done in a similar way to adding a repository, with a special Secret.
As with the repository secret, we will use a plain Secret for simplicity, but it is highly recommended to use external-secrets since the file will be committed to a repository. The Secret only defines the cluster's name (here its ARN), its API endpoint and certificate, as well as the role that ArgoCD should use to access it.
You should create a Secret for each cluster you want to manage with ArgoCD, and they should look like this:
apiVersion: v1
kind: Secret
metadata:
  labels:
    argocd.argoproj.io/secret-type: cluster
  name: staging-cluster
  namespace: argocd
type: Opaque
stringData:
  config: |
    {
      "awsAuthConfig": {
        "clusterName": "staging",
        "roleARN": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/argocd"
      },
      "tlsClientConfig": {
        "insecure": false,
        "caData": "LS0tl...=="
      }
    }
  name: "arn:aws:eks:eu-west-1:<AWS_ACCOUNT_ID>:cluster/staging"
  server: "https://<CLUSTER_ID>.gr7.eu-west-1.eks.amazonaws.com"
Once this secret is created, you should be able to see the new cluster in ArgoCD's interface. To deploy Applications to it, use the application.spec.destination.name or application.spec.destination.server field.
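Concretely, the destination of an Application targeting this staging cluster can reference it either by name or by server URL (a short sketch; use one of the two, and note that the namespace is illustrative):

# By cluster name, as declared in the cluster Secret
destination:
  name: "arn:aws:eks:eu-west-1:<AWS_ACCOUNT_ID>:cluster/staging"
  namespace: default

# Or by API server URL
destination:
  server: "https://<CLUSTER_ID>.gr7.eu-west-1.eks.amazonaws.com"
  namespace: default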
Organizing clusters and teams
ArgoCD AppProjects
To separate teams or environments, we can use ArgoCD AppProjects. An AppProject can restrict Applications to specific clusters and/or repositories, as well as specify which Kubernetes resources they can deploy. For example, you can create an AppProject for team-foo in the staging cluster like so:
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: staging-team-foo
spec:
  destinations:
    - namespace: "*"
      server: "https://<STAGING_CLUSTER_ID>.gr7.eu-west-1.eks.amazonaws.com"
  sourceRepos:
    - "*"
  clusterResourceWhitelist:
    - group: "*"
      kind: "*"
Once the project is created, you can use it in the application.spec.project field. This means that the Application can only be deployed in the staging cluster.
Once all the Applications are grouped into projects, you can filter Applications by project in ArgoCD's interface. But more importantly, you can give specific users permissions on a specific project, as we will do next.
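For instance, an Application owned by team-foo would reuse the project defined above like this (a short sketch; the namespace is illustrative):

spec:
  project: staging-team-foo   # the AppProject restricts allowed destinations and repositories
  destination:
    server: "https://<STAGING_CLUSTER_ID>.gr7.eu-west-1.eks.amazonaws.com"
    namespace: team-foo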
ArgoCD RBAC
ArgoCD manages RBAC through the argocd-rbac-cm ConfigMap, and users through the argocd-cm ConfigMap.
First, you declare users in argocd-cm simply by adding lines such as accounts.foo-user: apiKey, login. This adds a foo-user account that can connect through ArgoCD's interface or use an API key. Then, edit the argocd-rbac-cm ConfigMap to define roles with permissions and assign them to users.
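For reference, the corresponding user declaration in argocd-cm is a single data entry (a minimal sketch keeping only the relevant key):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # foo-user can log in through the UI/CLI and generate API keys
  accounts.foo-user: apiKey, login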
For example, we could want the foo-user, part of team-foo, to have the following permissions:
- Synchronize and update Applications from the staging-team-foo and production-team-foo AppProjects.
- Read-only permission on all other Applications.
To do so, we would format argocd-rbac-cm as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
data:
  policy.csv: |
    # Define a foo role and give it permissions
    p, role:foo, applications, sync, staging-team-foo/*, allow
    p, role:foo, applications, update, production-team-foo/*, allow
    # Assign foo-user to the roles foo and readonly (pre-existing role)
    g, foo-user, role:foo
    g, foo-user, role:readonly
By using this combination of AppProjects and RBAC, we can easily allow users and teams to access and modify the Applications that they need.
An alternative to this system would be to use ApplicationSets: they are a bit more complex to set up but much more powerful.
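To give an idea of what that looks like, here is a minimal sketch of an ApplicationSet using the cluster generator to deploy the same application to every cluster registered in ArgoCD (the application name and path are hypothetical):

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app
  namespace: argocd
spec:
  generators:
    # one Application is generated per cluster Secret known to ArgoCD
    - clusters: {}
  template:
    metadata:
      name: "my-app-{{name}}"
    spec:
      project: default
      source:
        repoURL: git@gitlab.com:<ARGO_REPO_PATH>/argocd.git
        targetRevision: HEAD
        path: deployments/my-app
      destination:
        server: "{{server}}"
        namespace: default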
Conclusion
In this article, we saw how to manage multiple clusters from a single ArgoCD instance and also a simple way to organize your teams in ArgoCD.
I hope you learned something from it and that it made you want to use ArgoCD if you don’t already!