/ kubernetes

Part I: Create a Google Kubernetes Engine (GKE) cluster with Terraform

Nowadays Terraform is the de facto standard for infrastructure as code (IaC), and it definitely deserves it. I will show you how easy it is to set up an autoscaling, auto-healing Kubernetes cluster using Terraform, Google Kubernetes Engine and the Google Cloud Platform.

Please note that running Kubernetes on GCP will incur charges, but new GCP accounts come with free credit, so be sure to use it to trial this setup.


Project structure

└── gke
    ├── cluster.tf
    ├── gcp.tf
    └── variables.tf

Let's create this structure and empty files:

$ mkdir -p terraform-gke/gke
$ cd terraform-gke/gke
$ for file in cluster gcp variables; do touch $file.tf; done

Cluster Specification

Specify all the variables in the variables.tf file:

# Variables
variable "project" {
  default = "<your_project_name>"
}

variable "region" {
  default = "europe-west1"
}

variable "cluster_name" {
  default = "<your_cluster_name>"
}

variable "cluster_zone" {
  default = "europe-west1-b"
}

variable "cluster_k8s_version" {
  default = "1.9.7-gke.3"
}

variable "initial_node_count" {
  default = 2
}

variable "autoscaling_min_node_count" {
  default = 1
}

variable "autoscaling_max_node_count" {
  default = 2
}

variable "disk_size_gb" {
  default = 100
}

variable "disk_type" {
  default = "pd-standard"
}

variable "machine_type" {
  default = "n1-standard-4"
}

Specify the provider in the gcp.tf file. In our case, it is Google Cloud.

# Google Cloud Platform
provider "google" {
  project = "${var.project}"
  region  = "${var.region}"
}

We can now create our Kubernetes cluster on Google Cloud using the google_container_cluster resource. In Google Cloud this is a single resource, but it encapsulates many components (a managed instance group and template, persistent storage, GCE instances for the worker nodes, and the GKE master).

Specify all resource components in the cluster.tf file:

# GKE Cluster
resource "google_container_cluster" "cluster" {
  name               = "${var.cluster_name}"
  zone               = "${var.cluster_zone}"
  min_master_version = "${var.cluster_k8s_version}"

  addons_config {
    network_policy_config {
      disabled = true
    }

    http_load_balancing {
      disabled = false
    }

    kubernetes_dashboard {
      disabled = false
    }
  }

  node_pool {
    name               = "default-pool"
    initial_node_count = "${var.initial_node_count}"

    management {
      auto_repair = true
    }

    autoscaling {
      min_node_count = "${var.autoscaling_min_node_count}"
      max_node_count = "${var.autoscaling_max_node_count}"
    }

    node_config {
      preemptible  = false
      disk_size_gb = "${var.disk_size_gb}"
      disk_type    = "${var.disk_type}"

      machine_type = "${var.machine_type}"

      # Common default scopes; adjust to your needs
      oauth_scopes = [
        "https://www.googleapis.com/auth/devstorage.read_only",
        "https://www.googleapis.com/auth/logging.write",
        "https://www.googleapis.com/auth/monitoring",
      ]

      labels {
        env = "prod"
      }
    }
  }
}
# Output for K8S
output "client_certificate" {
  value     = "${google_container_cluster.cluster.master_auth.0.client_certificate}"
  sensitive = true
}

output "client_key" {
  value     = "${google_container_cluster.cluster.master_auth.0.client_key}"
  sensitive = true
}

output "cluster_ca_certificate" {
  value     = "${google_container_cluster.cluster.master_auth.0.cluster_ca_certificate}"
  sensitive = true
}

output "host" {
  value     = "${google_container_cluster.cluster.endpoint}"
  sensitive = true
}

We will create the following:

  • A cluster with 2 worker nodes
  • Autoscaling enabled, keeping the node pool between 1 and 2 nodes
  • Auto repair enabled for worker nodes
  • HTTP load balancing enabled
  • The Kubernetes dashboard enabled

The output variables will be used later when we deploy applications. They are marked sensitive to avoid printing their values to standard output.
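As a sketch of where these outputs lead, a Kubernetes provider block in the same configuration could consume the cluster's credentials directly (this block is illustrative and not part of the files above; base64decode is needed because GKE returns the certificates base64-encoded):

# Hypothetical kubernetes provider wired to the cluster (sketch)
provider "kubernetes" {
  host = "https://${google_container_cluster.cluster.endpoint}"

  client_certificate     = "${base64decode(google_container_cluster.cluster.master_auth.0.client_certificate)}"
  client_key             = "${base64decode(google_container_cluster.cluster.master_auth.0.client_key)}"
  cluster_ca_certificate = "${base64decode(google_container_cluster.cluster.master_auth.0.cluster_ca_certificate)}"
}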

Building the Kubernetes cluster

We need to initialize our Terraform environment, which downloads the plugins required by the Google Cloud provider. Run:

$ terraform init

Now we can preview what we want to create without actually creating anything:

$ terraform plan

If we are satisfied with our preview, it is time to build our cluster:

$ terraform apply

The cluster creation usually takes 10-15 minutes to complete.

How to verify your newly created Kubernetes cluster is running

There are a couple of ways to do that. You can go to the Google Cloud console and select your cluster, or you can use command-line access via kubectl. For the latter, first fetch your cluster credentials by running:

$ gcloud container clusters get-credentials <your_cluster_name> --zone europe-west1-b --project <your_project_name>

This will fetch the Kubernetes cluster credentials and store them on your local machine, so you can now use the kubectl command to have full access to your cluster. For example:

$ kubectl get nodes
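Beyond listing nodes, a couple of standard kubectl commands (not specific to this cluster) give a quick health check:

$ kubectl cluster-info
$ kubectl get pods --all-namespaces

The first prints the master and service endpoints; the second shows the system pods GKE runs in the kube-system namespace.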

Destroying your cluster

If you want to get rid of the resources we’ve created, you can do so with one easy command:

$ terraform destroy

Terraform will ask you to confirm before it destroys your cluster because it cannot be undone.


That's it! You now have a fully functioning, auto-scaling and auto-healing Kubernetes cluster on Google Cloud Platform. Now you need to build and deploy your applications on Kubernetes. See our next article to find out how to build and deploy your Docker containers on your Kubernetes cluster.

If you have some comments or questions, please feel free to leave them in the comment section.

If you need help with managing a Kubernetes cluster, Pacto Systems can offer experienced engineers to help. Please contact us for more information.

We will regularly be posting articles, so be sure to register with the blog to be notified of the next post. This article was written and edited by Aaron East and Ruslan Chepurkin.