Posted on 3 February 2022, updated on 16 March 2023.
Restricting the permissions of people connecting to your cluster can be very useful if you work on a large project with a lot of developers.
In this article, we are going to talk about Kubernetes in AWS and how to manage access to your EKS cluster.
Prerequisites
- Have an EKS Cluster deployed
- Have the admin rights on the cluster
- Ability to create AWS IAM roles
- Use IAM roles to manage permissions in AWS
- Ability to create a profile that assumes an AWS role
Assumptions
- You have full access to the EKS cluster by assuming a role. We are going to call this role role-devops.
- The profile you use is named devops.
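You can quickly check which identity a profile resolves to with the AWS CLI; with the devops profile, the Arn field of the output should reference role-devops:
aws sts get-caller-identity --profile devops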
Target of this tutorial
You want to give the developers working on your project restricted access to your cluster. In this tutorial, we want to give them a set of permissions to manage the following resources in the namespaces app and poc:
- pods
- replicasets
- deployments
- configmaps
- secrets
Create the AWS Role
The first thing to do is to create an AWS role named role-developer dedicated to the developers. This role should have at least the following permissions:
{
  "Statement": [
    {
      "Action": [
        "eks:AccessKubernetesApi",
        "eks:Describe*",
        "eks:List*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ],
  "Version": "2012-10-17"
}
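If you prefer the command line, here is a minimal sketch of creating this role with the AWS CLI, assuming the policy above is saved as eks-policy.json and that you have written a trust policy allowing your developers to assume the role in trust-policy.json (both file names are placeholders):
# Create the role with its trust policy
aws iam create-role --role-name role-developer --assume-role-policy-document file://trust-policy.json
# Attach the EKS permissions as an inline policy
aws iam put-role-policy --role-name role-developer --policy-name eks-access --policy-document file://eks-policy.json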
Each developer should have the right to assume this role, and each of them can create a profile named developer based on it.
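As a sketch, such a profile in ~/.aws/config could look like this, assuming each developer's own credentials live under a default profile:
[profile developer]
role_arn = arn:aws:iam::1234567890:role/role-developer
source_profile = default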
Set up RBAC in the EKS cluster
Now we need to create the Kubernetes Roles with the correct permissions. You need one role for each namespace. Here is the YAML of these roles:
For namespace app:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: app
  name: developers-app
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - pods/log
      - pods/status
      - secrets
      - configmaps
      - podtemplates
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
      - deletecollection
  - apiGroups:
      - apps
    resources:
      - replicasets
      - deployments
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
      - deletecollection
For namespace poc:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: poc
  name: developers-poc
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - pods/log
      - pods/status
      - secrets
      - configmaps
      - podtemplates
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
      - deletecollection
  - apiGroups:
      - apps
    resources:
      - replicasets
      - deployments
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
      - deletecollection
Now you need to bind these roles to a group with the Kubernetes resource RoleBinding:
For namespace app:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-app
  namespace: app
subjects:
  - kind: Group
    name: developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developers-app
  apiGroup: rbac.authorization.k8s.io
For namespace poc:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-poc
  namespace: poc
subjects:
  - kind: Group
    name: developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developers-poc
  apiGroup: rbac.authorization.k8s.io
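Assuming you saved these four manifests in a directory named rbac/ (a hypothetical layout), you can apply and check them with:
kubectl apply -f rbac/
kubectl get role,rolebinding -n app
kubectl get role,rolebinding -n poc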
Link the AWS role to the Kubernetes RBAC
Finally, you need to associate the AWS role role-developer that you created with the Kubernetes group developers referenced by the RoleBinding resources.
In an EKS cluster, in the namespace kube-system, you can find a configmap named aws-auth. This configmap is used by EKS to map AWS IAM roles to Kubernetes users and groups, and thus to grant AWS identities permissions inside the cluster.
If you describe this configmap with the following command:
kubectl describe configmap aws-auth -n kube-system
You will see that your role role-devops is there and that it is linked to the group system:masters (cluster admin):
- "groups":
- "system:masters"
"rolearn": "arn:aws:iam::1234567890:role/role-devops"
"username": "role-devops"
So you have to modify this configmap and add the following entry under mapRoles:
- "groups":
- "developers"
"rolearn": "arn:aws:iam::1234567890:role/role-developer"
"username": "role-developer"
The EKS cluster manages the permissions internally: the role role-developer now has the expected permissions on the resources pods, replicasets, deployments, configmaps, and secrets in the namespaces app and poc.
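As the cluster admin, you can sanity-check the RBAC rules themselves (not the aws-auth mapping) with kubectl impersonation; some-user is an arbitrary placeholder, only the group matters here:
# Should answer "yes": the developers group can manage deployments in app
kubectl auth can-i create deployments -n app --as=some-user --as-group=developers
# Should answer "no": no permissions were granted in kube-system
kubectl auth can-i list pods -n kube-system --as=some-user --as-group=developers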
Test to make sure everything works smoothly
Now everything is set up for developers to connect to the cluster with limited access. First, they need to configure their ~/.kube/config file with the following command:
aws eks update-kubeconfig --profile developer --name <eks-cluster-name> --region <region>
Now they can make kubectl calls. For example:
kubectl get pod -n app
All the pods in the namespace app should be listed. But if they run the following commands, the requests will be denied because you did not grant them the corresponding permissions:
# List pods in kube-system namespace
kubectl get pod -n kube-system
# List nodes
kubectl get nodes
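The denied calls will return an error that looks something like this (the user name comes from the username field you set in aws-auth):
Error from server (Forbidden): pods is forbidden: User "role-developer" cannot list resource "pods" in API group "" in the namespace "kube-system"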
When you are in charge of an EKS cluster in Amazon Web Services, it is easy to manage access and permissions in your Kubernetes cluster effectively using AWS roles and Kubernetes resources. The main steps are:
- Create an AWS role that has the permissions to access EKS resources
- In your Amazon EKS cluster:
  - Create Roles with the specific permissions you want to grant on Kubernetes resources
  - Bind these Roles to a group using RoleBindings
  - Update the aws-auth configmap to link the AWS role to the Kubernetes group
Now that you know how to give people on your team restricted access to your EKS cluster, you should make sure that you have set up good firewall practices in order to have a secure Kubernetes cluster.
And if you find creating and managing Route53 records for the endpoints in your cluster tedious and time-consuming, I recommend you check out External DNS. It is a solution worth your interest since it automatically manages the DNS records of your Ingresses and Kubernetes Services, saving you a lot of time!