Posted on 28 October 2021, updated on 21 December 2023.
Terraform is a powerful tool that is now widely used to manage cloud infrastructure, but collaborating on it is not always easy. Before merging new changes it can be hard to know which state the infrastructure is currently in and which state it will be in after the merge: should you apply the changes before merging, or right after? If you have ever encountered this problem, Atlantis can help you. Once it is set up, you will be able to see the impact of your changes on the infrastructure even if you do not have Terraform installed on your computer.
This article covers the deployment and configuration of Atlantis on AWS EKS in order to deploy AWS resources from your Gitlab merge requests.
Deploy an Atlantis-ready Kubernetes cluster with Terraform
In this article, we will deploy Atlantis in a Kubernetes cluster on AWS with EKS. You can easily set up an EKS cluster with Terraform using these two snippets: main.tf
resource "aws_default_subnet" "default_az1" {
availability_zone = "eu-west-3a"
}
resource "aws_default_subnet" "default_az2" {
availability_zone = "eu-west-3b"
}
resource "aws_iam_role" "eks_cluster" {
name = "eks-cluster"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.eks_cluster.name
}
resource "aws_iam_role_policy_attachment" "AmazonEKSServicePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
role = aws_iam_role.eks_cluster.name
}
resource "aws_eks_cluster" "aws_eks" {
name = "eks_cluster_atlantis"
role_arn = aws_iam_role.eks_cluster.arn
vpc_config {
subnet_ids = [aws_default_subnet.default_az1.id, aws_default_subnet.default_az2.id]
}
tags = {
Name = "EKS_atlantis"
}
}
resource "aws_iam_role" "eks_nodes" {
name = "eks_nodes_atlantis"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.eks_nodes.name
}
resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.eks_nodes.name
}
resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.eks_nodes.name
}
resource "aws_eks_node_group" "node" {
cluster_name = aws_eks_cluster.aws_eks.name
node_group_name = "node_atlantis"
node_role_arn = aws_iam_role.eks_nodes.arn
subnet_ids = [aws_default_subnet.default_az1.id, aws_default_subnet.default_az2.id]
scaling_config {
desired_size = 1
max_size = 1
min_size = 1
}
# Ensure that IAM Role permissions are created before and deleted after EKS Node Group.
depends_on = [
aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
]
}
provider.tf
provider "aws" {
profile = "default"
region = "eu-west-3"
}
If you want further explanations on how to deploy an EKS cluster with Terraform and how to connect to it, you can read this article.
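Once the cluster is created, you can point kubectl at it with the AWS CLI; for example, assuming the default profile and the region and cluster name used above:
aws eks update-kubeconfig --region eu-west-3 --name eks_cluster_atlantis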
In order to work properly, Atlantis needs high permissions on your AWS account: if you want to be able to create any AWS resource with Terraform, the Atlantis user must have full access to the account. Let's create an atlantis user with the AdministratorAccess policy on AWS. To do so, just add this Terraform file to the two previous ones. atlantis.tf
resource "aws_iam_user" "atlantis_user" {
name = "atlantis"
}
resource "aws_iam_user_policy_attachment" "atlantis" {
policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
user = aws_iam_user.atlantis_user.name
}
resource "aws_iam_access_key" "atlantis_access_key" {
user = aws_iam_user.atlantis_user.name
}
Do not forget to output the access key ID and secret in your Terraform code. output.tf
output "atlantis_access_key_id" {
value = aws_iam_access_key.atlantis_access_key.id
}
output "atlantis_access_key_secret" {
value= aws_iam_access_key.atlantis_access_key.secret
sensitive = true
}
Configure your Gitlab to work with Atlantis
Now that your Atlantis-ready Kubernetes cluster is running, some additional configuration on your Gitlab is required to allow your future Atlantis installation to interact with your merge requests.
First, you will need to create an access token to give Atlantis the right to post messages on your Merge Requests. There are two options. If you are using a self-hosted Gitlab, you can create a project access token with the API permission in the settings of your project (Settings > Access Tokens). Unfortunately, project access tokens are currently disabled on Gitlab.com. The other option is to use either your account or a dedicated account to create a personal access token with the API permission, in the Preferences > Access Tokens tab of that account.
Before deploying Atlantis, you will also need to create a random string that will be your Webhook secret.
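Any sufficiently long random string will do; you can, for example, generate one with:
openssl rand -hex 32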
Keep your two newly generated secrets somewhere safe: you will need them to deploy Atlantis in the next part.
Release Atlantis in your Kubernetes cluster
There are several ways of deploying Atlantis, but in this article we will use the Atlantis Helm chart. First, add the Atlantis chart repository with:
helm repo add runatlantis https://runatlantis.github.io/helm-charts
Then create a values.yaml file to configure the installation of Atlantis. For the sake of simplicity, the secrets will be written directly in the values of the Helm chart, but this is not really good practice: Kubernetes Secrets should be used instead.
gitlab:
  user: <access-token-user>
  token: <access-token-secret>
  secret: <webhook-secret>
  hostname: <hostname of your gitlab> # defaults to gitlab.com

# Replace this with your own repo whitelist:
orgWhitelist: gitlab.com/mygroup/* # Don't forget the "*" to whitelist all the repositories of your group

service:
  type: LoadBalancer
  port: 80

# AWS credentials output when you created your Kubernetes cluster on EKS
aws:
  credentials: |
    [default]
    aws_access_key_id=<atlantis-access-key-id>
    aws_secret_access_key=<atlantis-access-key-secret>
    region=eu-west-3
As you may have noticed, Atlantis is configured to use an AWS user with an access key, which is not ideal. An AWS IAM role bound to the Atlantis pod through a service account would be more efficient and secure. Unfortunately, this is not currently well supported by Atlantis: when doing so, it ends up assuming the node's role to run its Terraform commands, so nothing works as expected since the node's permissions are limited.
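You can then deploy the chart with this values file, assuming a release name of atlantis (which matches the service name used in the next command):
helm install atlantis runatlantis/atlantis -f values.yaml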
Once your Atlantis pod is running properly, you can get the public IP associated with it by running:
kubectl describe svc atlantis
If it worked well, you can access it at this public IP in your web browser.
Now go back to Gitlab, to the project in which you want to use Atlantis. To configure the webhook, go to Settings > Webhooks, write <your atlantis URL>/events in the URL field, and check the following boxes:
- Push events
- Comments
- Merge request events
How to use Atlantis
Once Atlantis is successfully deployed, you can try it out by creating a new branch on your repository and pushing code to it. Opening a new Merge Request will now automatically run terraform plan in your repository and display the result in the comments.
Let's have a look at Atlantis' commands:
- atlantis help
- atlantis plan: runs terraform plan in your repository
- atlantis apply: applies the plans; if no directory is specified, it will apply the plans in all directories where atlantis plan was run
- the -w and -d flags can be used to select the workspace and the directory in which to run the commands (see the example below)
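For example, to apply only the plan of a specific directory and workspace, you could comment the following on the Merge Request (the directory and workspace names here are purely illustrative):
atlantis apply -d environments/production -w staging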
Everything is highly configurable, both server-side and project-side. The server-side configuration can be changed by adding a repoConfig value to your Helm values.yaml:
repoConfig: |
  ---
  repos:
  - id: /.*/
    branch:
    apply_requirements: []
    workflow: default
    allowed_overrides: []
    allow_custom_workflows: false
  workflows:
    default:
      plan:
        steps: [init, plan]
      apply:
        steps: [apply]
By default, every user who is authorized to comment on the Merge Requests will be able to run the atlantis plan and atlantis apply commands on the server. You can restrict this by setting apply_requirements in the server's configuration with the approved and mergeable keywords (see the example after this list):
- approved: atlantis apply can only be run on approved Merge Requests.
- mergeable: atlantis apply can only be run on Merge Requests that are currently in a mergeable state.
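For example, to require both conditions on every repository, a minimal sketch of the corresponding repoConfig (to be merged into the configuration shown above) would be:
repoConfig: |
  ---
  repos:
  - id: /.*/
    apply_requirements: [approved, mergeable]
    workflow: default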
The apply command is very powerful, since the Atlantis user is authorized to create any resource with Terraform on your AWS account. You should therefore be very careful with both your Atlantis configuration and your Gitlab configuration: a configuration that is not restrictive enough on either side would allow almost anybody to open a Merge Request and create any resource on your AWS account with Terraform.
You can also define workflows server-side in the values.yaml. Workflows allow you to override the default plan and apply commands, but also to create custom ones. You can configure Atlantis to run almost any Terraform or even Bash command, such as tests or linting. For more explanations on how to configure this, you can refer to the official documentation.
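As an illustration, here is a sketch of a custom workflow that runs terraform fmt before planning; the workflow name and the extra step are assumptions for the example, not defaults of the chart:
repoConfig: |
  ---
  repos:
  - id: /.*/
    workflow: custom
  workflows:
    custom:
      plan:
        steps:
        - run: terraform fmt -check
        - init
        - plan
      apply:
        steps: [apply]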
I have not had the opportunity to use Atlantis on a project yet, but I still want to give you some advantages and some drawbacks of using it on yours:
Advantages:
- Easier collaboration on Terraform code: anyone can run terraform plan and terraform apply and see the modifications directly on the merge request.
- Developers writing Terraform code: Atlantis can be used to allow developers to write and apply Terraform code without having to configure their access to the infrastructure locally.
- Highly configurable: you can create custom scripts, override existing commands, and even configure it to use Terragrunt.
Drawbacks:
- Security issues: If not well configured, it can be a security hole in your infrastructure
I hope your newly deployed Atlantis will help you collaborate better within your Ops team, but also give you the opportunity to onboard your developers and allow them to manage their infrastructure as they want without having to configure Terraform. Atlantis can definitely help you enforce DevOps standards, reducing the border between Ops and developers.