Posted on 19 August 2020, updated on 21 September 2023.
DigitalOcean? I haven’t heard that name in years…
Recently I had to deploy a Kubernetes cluster on DigitalOcean, a Cloud Provider that I had heard of but never used. Less known than AWS, GCP, or Azure, it is still a real player in Cloud solutions, with 13 data centers around the world!
What is DigitalOcean?
It is promoted as a developer-oriented experience, focusing on a clean API and user interface to create and interact with cloud components. And it is true that, coming from AWS or GCP, it feels like a breath of fresh air. You have only a few services, each of them with a clear purpose:
Figure 1: DigitalOcean's simple interface
- Droplets: Your basic VM solution, starting at $5 a month
- Volumes: Hard drives for your droplets
- Databases: Managed databases
- Spaces: S3-like blob storage
- Images: Manage your VM snapshots and backups, as well as your Docker images
- Networking: DNS, IP, load balancers, VPC, and firewall are all in there
- Monitoring: An optional monitoring solution for your droplets
- Kubernetes: The DigitalOcean managed Kubernetes service, using the previous components
For a solo developer or a small team, this seems like a good fit, since you can hold every component in your head. However, it may lack some advanced features for bigger players, such as a truly fine-grained IAM policy system. I'll cover in detail some pain points I had using DigitalOcean.
But first, let's confirm that it is really developer-friendly by creating a quick Kubernetes cluster from the Web UI. From a single interface, you get to choose:
- the Kubernetes version: you can already use the latest v1.18!
- the region in which your cluster will be deployed, out of 8 around the world. However, multi-AZ is not supported.
- a VPC network
- your node pools, with only the number and size of the droplets configurable
- tags and naming
Then click on Create cluster, and in a few minutes (around 6 minutes, I measured) you've got a functional cluster.
Meanwhile, you can set up the doctl CLI. You'll need a personal access token, which you can generate in the API section. Then run doctl auth init -t <personal-access-token> and you'll be able to see your Kubernetes cluster once it has been set up.
$ doctl kubernetes cluster list
ID                                      Name                                   Region    Version        Auto Upgrade    Status     Node Pools
77d389a1-a609-49b9-bf4c-55443cf7ec11    my-digital-ocean-kubernetes-cluster    ams3      1.18.6-do.0    false           running    default-node-pool
Update your ~/.kube/config file with doctl kubernetes cluster kubeconfig save my-digital-ocean-kubernetes-cluster and you're good to manage your cluster with kubectl!
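As a quick sanity check, listing the nodes should show the droplets of your default node pool:

kubectl get nodes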
I won't go into how to deploy an application on Kubernetes, but you'll find several posts about this on this blog.
Here let's deploy our cluster properly with Terraform, since I don't want to rely on a web interface for my infrastructure.
Deploying a Kubernetes cluster on DigitalOcean with Terraform
Terraform is a solution from HashiCorp for managing Infrastructure as Code: you write your desired state, and terraform builds the corresponding infrastructure through a modular system of providers. There is a specific provider for DigitalOcean, which translates the Terraform definitions into calls to the DigitalOcean API.
The documentation is comprehensive, but we'll only use the digitalocean_kubernetes_cluster and digitalocean_kubernetes_node_pool resources to deploy a second node pool dedicated to, let's say, applications. To keep things simple, I'll put all my code in a single main.tf and won't use any variables. For a real project, take some time to set up a proper Terraform project layout.
First, we need to set up the DigitalOcean Terraform provider itself, which is very simple. You need to provide the personal access token from earlier, either through a variable and the prompt (don't commit your token!) or through environment variables: the provider automatically uses DIGITALOCEAN_TOKEN and DIGITALOCEAN_ACCESS_TOKEN if they exist.
export DIGITALOCEAN_TOKEN=*******************************
Then you can write in your main.tf:
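Something like this minimal sketch should do, assuming Terraform 0.13+ for the required_providers syntax (the token stays in the environment, so no credentials end up in the code):

terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}

# No token argument: the provider picks it up from DIGITALOCEAN_TOKEN
provider "digitalocean" {}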
To spice things up a little, we'll use a remote state which, instead of storing your state in a local terraform.tfstate, synchronizes it with a remote storage solution such as S3 (you can skip this part if you are in a hurry). Here we'll stay with DigitalOcean and use Spaces, its S3-compatible object storage. You'll need:
- an access key and a secret key from the web interface. You can store them as environment variables:
export SPACES_ACCESS_KEY="***************"
export SPACES_SECRET_KEY="*********************"
- a Space created. You can do it either from the web UI or with a CLI tool that speaks S3, such as s3cmd:
s3cmd --host=ams3.digitaloceanspaces.com \
  --host-bucket='%(bucket)s.ams3.digitaloceanspaces.com' \
  --access_key=$SPACES_ACCESS_KEY \
  --secret_key=$SPACES_SECRET_KEY \
  mb s3://kube-terraform-state
Here I created a Space named kube-terraform-state in the “Amsterdam 3” region, so my endpoint is logically https://kube-terraform-state.ams3.digitaloceanspaces.com/
To initialize your Terraform backend with this remote state, you need to specify that you want to use an S3-compatible remote backend.
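Here is a sketch of that backend declaration, assuming the kube-terraform-state Space created above. The S3 backend insists on a region even though Spaces ignores it, and the skip_* flags disable AWS-specific checks that Spaces cannot answer:

terraform {
  backend "s3" {
    endpoint = "ams3.digitaloceanspaces.com"
    bucket   = "kube-terraform-state"
    key      = "terraform.tfstate"
    # Required by the S3 backend but ignored by Spaces
    region = "us-east-1"
    # Spaces is not AWS: skip the AWS-specific validations
    skip_credentials_validation = true
    skip_metadata_api_check     = true
  }
}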
Then, at terraform init time, you provide the Spaces access keys:
terraform init \
  -backend-config="access_key=$SPACES_ACCESS_KEY" \
  -backend-config="secret_key=$SPACES_SECRET_KEY"
If you encounter an error, clean your local state with rm -rf ./.terraform.
Your Terraform state is now stored remotely, so several people can work on the same Terraform project. Be aware, though, that the S3 backend's locking mechanism relies on DynamoDB, which Spaces doesn't provide, so avoid running terraform apply concurrently.
Now, let's create the Kubernetes cluster, with a second node pool for our applications, playing along with the few options available.
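Here is a sketch of the two resources; the names, droplet sizes, and node counts are my own choices, so adapt them to your needs (doctl kubernetes options gives you the valid slugs and versions):

resource "digitalocean_kubernetes_cluster" "my_cluster" {
  name    = "my-digital-ocean-kubernetes-cluster"
  region  = "ams3"
  version = "1.18.6-do.0"

  # The default node pool, mandatory on the cluster resource
  node_pool {
    name       = "default-node-pool"
    size       = "s-2vcpu-2gb"
    node_count = 2
  }
}

# The second pool, dedicated to our applications
resource "digitalocean_kubernetes_node_pool" "applications" {
  cluster_id = digitalocean_kubernetes_cluster.my_cluster.id
  name       = "applications-node-pool"
  size       = "s-2vcpu-4gb"
  node_count = 2
  tags       = ["applications"]
}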
We are all set! Just run terraform plan to see what it will create. If that seems fine, run terraform apply to create the cluster.
This should take a few minutes.
Now you can deploy your application in Kubernetes!
Nowadays the major Cloud Providers all offer a managed Kubernetes service: EKS for AWS, AKS for Azure, and GKE for GCP. So where does DOKS stand?
The PROS and CONS of DigitalOcean
The pros of DigitalOcean
DigitalOcean feels simple and is cheap
And it's quite refreshing. For example, you can enable automatic upgrades of your cluster version, a feature that isn't available everywhere. All of this without paying an extra fee for the control plane: you only pay the price of the droplets running your cluster, unlike other clouds such as AWS, which can charge up to $70 a month for the managed service alone.
Cilium installed by default
By default, the managed service installs Cilium as the CNI. I find this open-source solution better than the default CNIs installed by other Cloud providers.
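You can see it for yourself: the Cilium agents run as a DaemonSet in kube-system (k8s-app=cilium is the standard Cilium label):

kubectl -n kube-system get pods -l k8s-app=cilium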
PVC in Kubernetes
The cloud controller manager is very functional overall. For example, the provisioning of hard drives for my PersistentVolumeClaims worked like a charm!
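As an illustration, here is a minimal PersistentVolumeClaim sketch, assuming the do-block-storage StorageClass that the DigitalOcean CSI driver installs by default (the claim name and size are arbitrary):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce # a block storage volume attaches to a single node
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 5Gi

Applying this manifest provisions a DigitalOcean Volume behind the scenes and binds it to the claim.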
The cons of DigitalOcean
Not much choice for the base image
Apart from the size, you don't have any control over the machines on which Kubernetes is installed. No ARM CPUs or alternative Linux OS 😥
Same for the nodes
You cannot taint the nodes from the interface or from Terraform. I like to keep my pods organized: for example, my GitLab Runners shouldn't impact my application pods, and taints keep each pod in its right place. Relying only on NodeAffinity isn't very comfortable.
No private Load balancer
I encountered issues when setting up a totally private cluster (apart from the kubectl API). If you set up a DigitalOcean-managed load balancer, you have to keep all your nodes exposed, with very permissive firewall rules. That wasn't acceptable for our project, and we had to build a custom private load balancer with nginx, an unfortunate experience. A private load balancer offering would be a great improvement.
Role management
You don't have a granular role and permission system such as AWS IAM, which is often a mandatory feature for larger organizations.
Conclusion
We have discovered DigitalOcean, an outsider in a world dominated by a few big Cloud Providers. Its Kubernetes solution is quite complete, with a few drawbacks however. Finally, you have deployed your cluster using Terraform: your infrastructure is versioned, just like your code!