Kubernetes is a container orchestration system built to deploy and scale applications across a cluster architecture. Google Cloud Platform offers a Kubernetes cluster as a managed, production-ready service with Google Kubernetes Engine (GKE).
Terraform is an infrastructure as code (IaC) tool by HashiCorp that allows provisioning of a wide variety of infrastructure resources through a collection of plugins. IaC means we write configuration files that describe the infrastructure we want; when we run Terraform, it compares them with the current state of the deployed resources and provisions or updates the necessary resources to match the desired state.
Run the following commands to download and install Terraform.
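For example, on a Linux amd64 machine (the version number below is only an example; check the HashiCorp releases page for the latest one):

```shell
# Download the Terraform binary; 1.5.7 is an example version
curl -LO https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip
unzip terraform_1.5.7_linux_amd64.zip
# Move the binary somewhere on your PATH
sudo mv terraform /usr/local/bin/
```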
We can check that Terraform is properly installed with the terraform version command:
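For example:

```shell
terraform version
```

This prints the installed Terraform version.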
Before we can start using GCP, we need to create a project and activate billing on it. Don’t worry, this will not cost you anything: Google offers $300 in free credit when you start using GCP and will not charge you unless you manually upgrade to a paid account.
Once we have our project, we can install and configure the Google Cloud SDK and the Kubernetes command line tool. The SDK provides the tools used to interact with the Google Cloud Platform REST API; they allow us to create and manage GCP resources from a command-line interface. Run the following commands to install and initialize it:
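Assuming the SDK has been installed following the official instructions for your platform, initialization and the kubectl component install look like this:

```shell
# Authenticate with your Google account and pick a default project
gcloud init
# Install the Kubernetes command line tool
gcloud components install kubectl
```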
Google Cloud offers an advanced permissions management system with Cloud Identity and Access Management (Cloud IAM). Terraform needs to be authorized to communicate with the Google Cloud API to create and manage resources in our GCP project. We achieve this by enabling the corresponding APIs and creating a service account with appropriate roles.
First, enable the Google Cloud APIs we will be using:
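The exact set of APIs below is an assumption; at a minimum the Compute Engine and Kubernetes Engine APIs are needed:

```shell
gcloud services enable \
  compute.googleapis.com \
  container.googleapis.com \
  iam.googleapis.com \
  cloudresourcemanager.googleapis.com
```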
Then create a service account:
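For example:

```shell
gcloud iam service-accounts create service_account_name
```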
Here, service_account_name is the name of our service account; it cannot contain spaces or special characters. You can name it terraform-gke, for example.
Now we can grant the necessary roles for our service account to create a GKE cluster and the associated resources:
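The role list below is an assumption (a plausible set for creating a cluster, its node pools, and the cluster service account); adjust it to your needs:

```shell
# Bind each role to the service account on the project
for role in roles/container.admin roles/compute.viewer roles/iam.serviceAccountUser; do
  gcloud projects add-iam-policy-binding project_name \
    --member "serviceAccount:service_account_name@project_name.iam.gserviceaccount.com" \
    --role "$role"
done
```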
Finally, we can create and download a key file that Terraform will use to authenticate as the service account against the Google Cloud Platform API:
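The key file name terraform-gke-keyfile.json is an example; any name works as long as you use it consistently in the Terraform configuration later:

```shell
gcloud iam service-accounts keys create terraform-gke-keyfile.json \
  --iam-account service_account_name@project_name.iam.gserviceaccount.com
```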
To work on our infrastructure with a team, we can use source control to share our infrastructure code. By default, Terraform stores the state of our infrastructure in a local state file. We could commit it with our infrastructure code, but the best practice for sharing a Terraform state when working with teams is to store it in remote storage. In our case, we will configure Terraform to store the state in a Google Cloud Storage Bucket.
First, let’s create a bucket, we could do it graphically on the Google Cloud Console, or we can use the Google Cloud SDK we just installed:
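A sketch using gsutil from the SDK; replace project_name, location, and the bucket name as described below:

```shell
gsutil mb -p project_name -l location gs://terraform-state-my-first-gke/
```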
- project_name: the name of the GCP project you created earlier.
- location: you can choose europe-west4 if you are in Europe or us-central1 if you are in America. You can see a complete list of available locations in the Google Cloud documentation.
- bucket name: it must be globally unique, for example terraform-state-my-first-gke. If the name you chose is unavailable, try again with a different name.

Once we have our bucket, we can activate object versioning to allow for state recovery in the case of accidental deletions and human error:
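Assuming the bucket name above:

```shell
gsutil versioning set on gs://terraform-state-my-first-gke/
```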
Finally, we can grant read/write permissions on this bucket to our service account:
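One way to do this with gsutil; the roles/storage.objectAdmin role is an assumption that grants read/write access to the bucket's objects:

```shell
gsutil iam ch \
  "serviceAccount:service_account_name@project_name.iam.gserviceaccount.com:roles/storage.objectAdmin" \
  gs://terraform-state-my-first-gke/
```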
We can now configure Terraform to use this bucket to store the state. Create the following terraform.tf file in the same directory where you downloaded the service account key file. Make sure to replace the bucket name with yours.
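A minimal version of this file, assuming the bucket and key file names used earlier:

```hcl
terraform {
  backend "gcs" {
    bucket      = "terraform-state-my-first-gke"
    credentials = "terraform-gke-keyfile.json"
  }
}
```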
We can now run terraform init, and the output will display that the Google Cloud Storage backend is properly configured.
We should now have a terraform.tf file and the service account key file in our working directory. Here is our target directory structure:
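Assuming the file names used in this article (the key file name is an example), the layout looks like this:

```
.
├── .terraform/
├── main.tf
├── providers.tf
├── terraform-gke-keyfile.json
├── terraform.tf
├── variables.auto.tfvars
└── variables.tf
```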
We will create each of those files and learn their purpose.
The .terraform/ directory is created and managed by Terraform; this is where it stores the external modules and plugins we will reference. To create the GKE cluster with Terraform, we will use the Google Terraform provider and a GKE community module. A module is a package of Terraform code that combines different resources to create something more complex. In our case, we will use a single module that creates many resources for us, such as a Google Container cluster, node pools, and a cluster service account.
To configure Terraform to communicate with the Google Cloud API and to create GCP resources, create a providers.tf file:
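A sketch of this file; the key file name is an example, and the google-beta provider is included because some versions of the GKE module require it:

```hcl
# Both providers authenticate with the service account key file
provider "google" {
  credentials = file("terraform-gke-keyfile.json")
  project     = var.project_id
  region      = var.region
}

provider "google-beta" {
  credentials = file("terraform-gke-keyfile.json")
  project     = var.project_id
  region      = var.region
}
```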
Then create the following main.tf file, where we reference the module mentioned earlier and provide it with the appropriate variables:
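A sketch using the community GKE module; the input names below are typical but may differ between module versions, so check the module's documentation:

```hcl
module "gke" {
  source  = "terraform-google-modules/kubernetes-engine/google"
  version = "~> 30.0" # example version; pin to one you have tested

  project_id = var.project_id
  name       = var.cluster_name
  region     = var.region

  # The default network is used here for simplicity
  network           = "default"
  subnetwork        = "default"
  ip_range_pods     = ""
  ip_range_services = ""

  node_pools = [
    {
      name         = "default-node-pool"
      machine_type = var.machine_type
      disk_size_gb = 10
    },
  ]
}
```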
Now create a variables.tf file to describe the variables referenced in the previous file and their types:
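A sketch matching the variables referenced above:

```hcl
variable "project_id" {
  type        = string
  description = "The ID of the GCP project"
}

variable "cluster_name" {
  type        = string
  description = "The name of the GKE cluster"
}

variable "region" {
  type        = string
  description = "The region where the cluster will be created"
}

variable "machine_type" {
  type        = string
  description = "The machine type of the cluster nodes"
}
```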
Finally, create the following variables.auto.tfvars file to specify values for the variables defined above:
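For example (the cluster name here is an arbitrary choice):

```hcl
project_id   = "project_name"
cluster_name = "my-first-gke"
region       = "europe-west4"
machine_type = "g1-small"
```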
- region: you can choose the same as the location of your bucket, for example europe-west4.
- machine_type: you can choose g1-small, which corresponds to a Compute Engine instance with 1 vCPU and 1.7 GB of memory and is sufficient for a small Kubernetes cluster.

Now that we have created all the necessary files, let’s run terraform init again to install the required plugins. If you are curious, you can compare the content of the .terraform/ directory before and after running this command.
To get a complete list of the resources Terraform will create to achieve the state described in the configuration files you just wrote, run:
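The plan command reads the configuration and prints the execution plan without changing anything:

```shell
terraform plan
```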
And to create the GKE cluster, apply the plan:
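Apply reads the same configuration and asks for confirmation before creating the resources:

```shell
terraform apply
```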
When prompted for confirmation, type in “yes” and wait a few minutes for Terraform to build the cluster.
When Terraform is done, we can check the status of the cluster and configure the kubectl command line tool to connect to it with:
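For example (the cluster name and region must match the values in variables.auto.tfvars):

```shell
# List clusters to check the status
gcloud container clusters list
# Configure kubectl to talk to the new cluster
gcloud container clusters get-credentials my-first-gke --region europe-west4
```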
In this article, we have learned how to use Terraform to build a Kubernetes cluster on Google Cloud Platform. Check out our next article in the series Kubernetes on Google Cloud Platform: Deploy your app with Helm.