
Part II: How to configure a dev environment using Kubernetes on GCP

In this article I will describe, step by step, how to configure a managed Kubernetes cluster on Google Cloud Platform with all the tools you need to start building and deploying Docker containers. We will be using the following tools: Helm & Tiller, an ingress-controller, a certificate-management tool (cert-manager), ChartMuseum, GitLab, kube-slack and Heptio Ark.

We assume you have a basic understanding of Kubernetes and have read through Part I: Create a Google Kubernetes Engine (GKE) cluster with Terraform. You should also read about Helm charts and understand how to use them, as Helm charts are what you will create to deploy your containers to Kubernetes.

Steps

  1. Create a Kubernetes cluster
  2. Give your current IAM user a cluster-admin RBAC role
  3. Install Helm
  4. Set up Kubernetes namespaces
  5. Install ingress-controller
  6. Install certificate-management tool
  7. Install ChartMuseum to store Helm packages
  8. Install GitLab
  9. Install kube-slack, a monitoring service for Kubernetes
  10. Install Heptio Ark backup and restore tool

Step 1. Create a Kubernetes cluster

In my previous article, Part I: Create a Google Kubernetes Engine (GKE) cluster with Terraform, I described how to create a Kubernetes cluster using Terraform. I recommend you follow that article to create your cluster and then continue with Step 2.

Alternatively, you can use the console to create your cluster. Log in to the Google Cloud Platform console, then go to Kubernetes Engine -> Clusters -> Create cluster. Fill out the form and click “Create”.

Here you go! In a couple of minutes, your cluster will be up and running, ready for you to connect and start working. The end of Part I: Create a Google Kubernetes Engine (GKE) cluster with Terraform shows how to connect to your Kubernetes cluster and verify it is functioning.

Step 2. Give your current IAM user a cluster-admin RBAC role

Granting roles to application-specific service accounts is a best practice: it ensures that your applications operate only within the scope you have specified.

Before you can create roles in Kubernetes, you must grant your own user the cluster-admin role by running the following command:

$ kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin --user <your_user_email_address>
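
If you are not sure which email address your current user is authenticated as, you can look it up first and then check that the binding took effect. A quick sketch using standard gcloud and kubectl commands:

$ gcloud config get-value account        # prints the active account's email
$ kubectl auth can-i create clusterroles # should now print "yes"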

For more information, see the GCP documentation on Role-Based Access Control.

Step 3. Install Helm

Install the Helm client on your local machine.

For macOS, run

$ brew install kubernetes-helm

Or install it from the script:

$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash

Create a file rbac-config.yaml with the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Create a service account with the cluster-admin role for the Helm server (Tiller):

$ kubectl create -f rbac-config.yaml

Install the Helm server (Tiller) inside your Kubernetes cluster:

$ helm init --service-account tiller
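
To verify the installation, check that the Tiller pod is running and that the client and server report versions (the app=helm label is what helm init applies to the Tiller deployment):

$ kubectl get pods -n kube-system -l app=helm
$ helm version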

There are a couple of other RBAC configurations for installing the Helm server; see the Helm documentation for details.

Step 4. Set up Kubernetes namespaces

In my case I will create four namespaces:

  • pactosystems-tools
  • pactosystems-dev
  • pactosystems-test
  • pactosystems-prod

$ kubectl create namespace <namespace_name>
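
For example, to create all four namespaces in one go:

$ for ns in pactosystems-tools pactosystems-dev pactosystems-test pactosystems-prod; do
    kubectl create namespace $ns
  done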

We will also create a Docker registry secret in each namespace:

$ kubectl create secret docker-registry pactosystems-hub-docker \
      -n <namespace_name> \
      --docker-email=<docker_user_email> \
      --docker-username=<docker_user_name> \
      --docker-password=<docker_user_password> \
      --docker-server=https://index.docker.io/v1/
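
For Kubernetes to actually use this secret when pulling private images, reference it from your pod specs via imagePullSecrets. A minimal sketch with a hypothetical deployment (the myapp name and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: pactosystems-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      imagePullSecrets:
      - name: pactosystems-hub-docker   # the secret created above
      containers:
      - name: myapp
        image: <docker_user_name>/myapp:latest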

Step 5. Install ingress-controller

Before we start installing Helm charts, I want to give you a few pieces of advice:

  • All apps that we need across all our environments will go in the tools namespace.
  • Set resource requests and limits on your apps. This lets Kubernetes make better decisions about which nodes to place Pods on and take advantage of autoscaling.
  • Pin versions in your Helm charts so that a new update to an app does not break your setup.

Let's install the ingress-controller Helm chart:

$ helm install \
    --name ingress \
    --namespace pactosystems-tools \
    --set rbac.create=true \
    --set tcp.22="pactosystems-tools/git-gitlab:ssh" \
    --set controller.resources.limits.memory=128Mi \
    --set controller.resources.requests.cpu=100m \
    --set controller.resources.requests.memory=128Mi \
    --set defaultBackend.resources.limits.memory=64Mi \
    --set defaultBackend.resources.requests.cpu=10m \
    --set defaultBackend.resources.requests.memory=64Mi \
    stable/nginx-ingress --version 0.15.0

I want to point your attention to the --set tcp.22="pactosystems-tools/git-gitlab:ssh" parameter that we are passing to the ingress-controller. It exposes port 22 so that we have access over SSH to our Git repository.
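
Under the hood, the chart writes this mapping into a TCP-services ConfigMap that nginx reads. You can inspect it to confirm the mapping is in place (the ConfigMap name below follows the chart's naming convention for a release called ingress; adjust if yours differs):

$ kubectl get configmap ingress-nginx-ingress-tcp \
    -n pactosystems-tools -o yaml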

Get external IP of LoadBalancer that ingress-controller has created:

$ kubectl --namespace pactosystems-tools get services -o wide -w ingress-nginx-ingress-controller

Point the DNS records of the domain you want to use at this IP.

Step 6. Install certificate-management tool

Cert-manager is a Kubernetes add-on that automates the management and issuance of TLS certificates from various issuing sources.

It periodically ensures that certificates are valid and up to date, and attempts to renew them at an appropriate time before expiry.

$ helm install \
    --name certs \
    --namespace pactosystems-tools \
    --set resources.requests.cpu=10m \
    --set resources.requests.memory=32Mi \
    --set ingressShim.defaultIssuerName=letsencrypt-prod \
    --set ingressShim.defaultIssuerKind=ClusterIssuer \
    stable/cert-manager --version v0.3.0

Create a file clusterissuer.yaml, replacing <your_email>:

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: pactosystems-tools
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <your_email>
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}

Create the ClusterIssuer:

$ kubectl create -f clusterissuer.yaml
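
You can check that the issuer has successfully registered with Let's Encrypt by inspecting its status conditions:

$ kubectl describe clusterissuer letsencrypt-prod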

Step 7. Install ChartMuseum to store Helm packages

Create a file chartmuseum-values.yaml with the following content:

replicaCount: 1
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1
image:
  repository: chartmuseum/chartmuseum
  tag: v0.7.0
  pullPolicy: IfNotPresent
env:
  open:
    STORAGE: local
    ALLOW_OVERWRITE: false
    DISABLE_API: false
  secret:
    BASIC_AUTH_USER: <basic_http_user>
    BASIC_AUTH_PASS: <basic_http_password>
resources:
  limits:
    memory: 128Mi
  requests:
    cpu: 50m
    memory: 128Mi
persistence:
  enabled: true
  accessMode: ReadWriteOnce
  size: 8Gi
ingress:
  enabled: true
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: nginx
  hosts:
    <chartmuseum.domain.com>:
    - /
    - /charts
    - /index.yaml
  tls:
  - secretName: <chartmuseum.domain.com>-tls
    hosts:
    - <chartmuseum.domain.com>

Make sure you replace <basic_http_user>, <basic_http_password> and <chartmuseum.domain.com> with your own data.

Install the ChartMuseum Helm chart and add it to your Helm repo list:

$ helm install \
    --name charts-repo \
    --namespace pactosystems-tools \
    -f chartmuseum-values.yaml stable/chartmuseum --version 1.4.0

$ helm repo add chartmuseum https://<basic_http_user>:<basic_http_password>@<chartmuseum.domain.com>
$ helm repo update
$ helm plugin install https://github.com/chartmuseum/helm-push
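
With the push plugin installed, you can package and publish your own charts to ChartMuseum. A quick sketch with a hypothetical chart called myapp:

$ helm create myapp                 # scaffold a new chart
$ helm push myapp/ chartmuseum      # upload it to your repository
$ helm repo update
$ helm search chartmuseum/myapp     # it should now appear in search results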

Step 8. Install GitLab Community Edition

Add the GitLab Helm repository:

$ helm repo add gitlab https://charts.gitlab.io

Review the chart's configuration settings before installing. We recommend saving your configuration options in a values.yaml file for easier future upgrades.
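
As a minimal sketch, a values.yaml for gitlab-omnibus might look like the following. The baseDomain and legoEmail keys are the chart's documented settings for the wildcard domain and the Let's Encrypt registration email; double-check the chart's README for your version:

baseDomain: <domain.com>     # GitLab will be served at gitlab.<domain.com>
legoEmail: <your_email>      # email used for Let's Encrypt registration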

Now you can install the GitLab Helm chart:

$ helm install \
    --name git \
    --namespace pactosystems-tools \
    -f values.yaml gitlab/gitlab-omnibus

Step 9. Install kube-slack, a monitoring service for Kubernetes

Configure an Incoming Webhook in Slack and replace <webhook-url> with its URL:

$ helm install \
    --name alerts \
    --namespace pactosystems-tools \
    --set resources.limits.memory=128Mi \
    --set resources.requests.cpu=100m \
    --set resources.requests.memory=128Mi \
    --set slackUrl=<webhook-url> \
    stable/kube-slack --version 0.1.0

To ignore all errors from a pod, mark it with the kube-slack/ignore-pod annotation.
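
For example, to silence alerts for a single pod (a sketch; the annotation name comes from the kube-slack README):

apiVersion: v1
kind: Pod
metadata:
  name: noisy-pod
  annotations:
    kube-slack/ignore-pod: "true"
spec:
  containers:
  - name: app
    image: <your_image>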

Step 10. Install Heptio Ark backup and restore tool

And last but not least, we will talk about backup and restore.

Clone or fork the Ark repository:

$ git clone git@github.com:heptio/ark.git
$ cd ark

Create a GCS bucket; in our case I called it ark-backup-1:

$ export BUCKET=ark-backup-1
$ gsutil mb gs://$BUCKET/

View your current config settings:

$ gcloud config list

# the `project` value from the previous results
$ export PROJECT_ID=ci-cd-prod

Create a service account:

$ gcloud iam service-accounts create heptio-ark \
  --display-name "Heptio Ark service account"

Then list all accounts and find the heptio-ark account that you just created. Set the SERVICE_ACCOUNT_EMAIL variable to match its email value:

$ gcloud iam service-accounts list
$ export SERVICE_ACCOUNT_EMAIL=heptio-ark@ci-cd-prod.iam.gserviceaccount.com

Attach policies to give heptio-ark the necessary permissions to function:

$ export ROLE_PERMISSIONS=(
    compute.disks.get
    compute.disks.create
    compute.disks.createSnapshot
    compute.snapshots.get
    compute.snapshots.create
    compute.snapshots.useReadOnly
    compute.snapshots.delete
    compute.projects.get
)

$ gcloud iam roles create heptio_ark.server \
    --project $PROJECT_ID \
    --title "Heptio Ark Server" \
    --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"

$ gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
    --role projects/$PROJECT_ID/roles/heptio_ark.server

$ gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}

Create a service account key, specifying an output file (credentials-ark) in your local directory:

$ gcloud iam service-accounts keys create credentials-ark \
    --iam-account $SERVICE_ACCOUNT_EMAIL

Credentials and configuration

Apply the Ark prerequisite definitions (the heptio-ark namespace, the custom resource definitions, and the service account):

$ kubectl apply -f examples/common/00-prereqs.yaml

Create a Secret. From the directory containing the credentials file that you just created, run:

$ kubectl create secret generic cloud-credentials \
    --namespace heptio-ark \
    --from-file cloud=credentials-ark

In the file examples/gcp/00-ark-config.yaml, replace <YOUR_BUCKET> with the name of the GCS bucket that you created at the very beginning.

Start the server

$ kubectl apply -f examples/gcp/00-ark-config.yaml
$ kubectl apply -f examples/gcp/10-deployment.yaml

Let's run a Kubernetes job that schedules a backup every day at 7 pm and keeps each backup for 168 hours (7 days). We will also exclude the kube-system and heptio-ark namespaces.

$ kubectl run schedule-ark-backup \
    --namespace heptio-ark \
    --restart=OnFailure \
    --serviceaccount=ark \
    --image=pactosystems/ark_client:0.0.2 \
    --command -- ark schedule create k8s-backup --ttl 168h0m0s --schedule "0 19 * * *" --exclude-namespaces=kube-system,heptio-ark
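
Before deleting the job, you can confirm that the schedule was created by checking the job's logs (a quick sanity check; the pod must have completed first):

$ kubectl logs -n heptio-ark job/schedule-ark-backup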

After the job has finished successfully, delete it, since it is only used once:

$ kubectl delete job schedule-ark-backup --namespace heptio-ark

To restore from backup

Choose a backup name to restore from. You can find the available backups in the GCS bucket that you created at the very beginning.
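
For example, you can list the completed backups straight from the bucket (assuming Ark's default layout, which stores them under a backups/ prefix):

$ gsutil ls gs://ark-backup-1/backups/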

Run kubernetes job to run a restore process:

$ kubectl run ark-restore \
    --namespace heptio-ark \
    --restart=OnFailure \
    --serviceaccount=ark \
    --image=pactosystems/ark_client:0.0.2 \
    --command -- ark restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

After the job has finished successfully, delete it:

$ kubectl delete job ark-restore --namespace heptio-ark

Summary

That's it! You now have a fully functioning development environment running on GCP to build and deploy your Docker containers.

If you have some comments or questions, please feel free to leave them in the comment section.

If you need help with managing a Kubernetes cluster, Pacto Systems can offer experienced engineers to help. Please contact us for more information.

We will be posting articles regularly, so be sure to register with the blog to be notified of the next post. This article was written and edited by Aaron East and Ruslan Chepurkin.