Learning Kubernetes with Minimal Cost in GKE

Raju Dawadi
Jun 14, 2020

I often get the question: how much will it cost to learn Kubernetes? There are various options, providers and tools for creating a cluster, both in the cloud and on a local system. Before diving into GKE (Google Kubernetes Engine), I would like to present a few other options which are good for the starting phase but may not give a full picture of how all the pieces connect.

image credit: spot.io

Local System

Minikube is the popular one, but I have come to love microk8s, not only because of its easy installation (snap install microk8s) but also because it provides easy integration with Istio out of the box. Both of these offer a single-node cluster.
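As a minimal sketch, getting a local cluster up with microk8s looks roughly like this (assuming a snap-enabled Linux system; the istio addon name is as it appeared in the microk8s addon list at the time of writing):

# Install microk8s (the snap requires classic confinement)
sudo snap install microk8s --classic

# Wait until the single-node cluster is ready
microk8s status --wait-ready

# Enable DNS and the Istio addon
microk8s enable dns istio

# Interact with the cluster using the bundled kubectl
microk8s kubectl get nodes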

Playground

If there is one platform I can name for technical and interactive learning, it would be Katacoda. The interactive, browser-based scenarios cover beginner to intermediate topics for free. Check out the courses at: https://www.katacoda.com/courses/kubernetes

Okteto

I simply like Okteto when it comes to just running containers in Kubernetes with minimal effort. For learning, even the free tier is good, although many components are restricted. It provides a load balancer, a multi-node environment and a pre-baked kubeconfig file, which makes diving in possible in less than a minute.
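As a rough usage sketch, once the kubeconfig is downloaded from the Okteto dashboard (the file path below is just a placeholder), pointing kubectl at it is all that is needed:

# Point kubectl at the downloaded kubeconfig (path and file name are hypothetical)
export KUBECONFIG=$HOME/Downloads/okteto-kube.config

# Verify access to the hosted namespace
kubectl get pods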

Why GKE(Google Kubernetes Engine) Then?

It is great for everything from beginner experiments to production workloads, with a managed cluster and better observability and flexibility. Ranging from multiple node pools to the Istio mesh to built-in monitoring of all cluster resources, GKE gives a good idea of how a service runs while abstracting the internals.

But running Kubernetes in the cloud costs more money than the rest of the above options. As beginners, we all try to use fewer resources and achieve more, but knowingly or unknowingly we might be using more cloud resources than we need. In this post, I will walk you through steps which reduce the billing while still giving a good experience while learning Kubernetes. Don't treat this the same as a production setup. As for the master node of GKE, it is free for one cluster per billing account. That means, irrespective of the number of worker nodes, you pay only for the nodes themselves, not the control plane.

1. Use Small and Preemptible Nodes

By default, GKE tries to create a 3-node cluster of n1-standard-1 machines (1 vCPU, 3.75GB memory), each costing $24.2725 per month. In the course of diving into Kubernetes, we will most probably run a few containers, learn the lifecycle and scale gradually. For that, there is another option to reduce the compute instance (node) pricing: Preemptible VMs. This type of instance is short-lived, lasting at most 24 hours and often less, depending on available resources in the zone and other parameters, after which it is replaced. Preemptible nodes cut the cost by 60–70% or more.

Standard vs Preemptible VM pricing
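As a sketch, a small learning cluster with preemptible nodes can be created with a command along these lines (the cluster name and zone are just examples):

# One small preemptible node instead of the default three standard nodes
gcloud container clusters create learning-cluster \
  --zone us-central1-a \
  --machine-type n1-standard-1 \
  --num-nodes 1 \
  --preemptible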

2. Enable Autoscaling

Autoscaling is at the heart of GKE and can be enabled with a few simple actions; nodes get scaled automatically when resource requests are high. Even in the above scenario, we can start with 1 node and configure it to autoscale up to a limit of 3.

GKE Node Autoscale
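A sketch of turning autoscaling on for an existing cluster's default node pool (cluster name and zone are again examples):

# Let the default node pool scale between 1 and 3 nodes based on pending pods
gcloud container clusters update learning-cluster \
  --zone us-central1-a \
  --node-pool default-pool \
  --enable-autoscaling \
  --min-nodes 1 \
  --max-nodes 3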

3. Node Disk Size

This is often ignored when creating a Kubernetes cluster, but it adds to the cost. Standard provisioned disk space costs $0.040 per GB per month, and by default GKE creates each node with a 100GB disk; yet unless you are running a very large number of workloads on a huge VM, that space goes unused. I would suggest starting with 10–15GB so that performance is not impacted too much (disk size also determines IOPS), and then gradually creating new nodes with larger disks as the need arises.
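For example, the boot disk size can be set when creating a cluster or node pool via the --disk-size flag (the pool and cluster names here are placeholders):

# Create a node pool whose nodes use a 15GB boot disk instead of the default 100GB
gcloud container node-pools create small-disk-pool \
  --cluster learning-cluster \
  --zone us-central1-a \
  --machine-type n1-standard-1 \
  --num-nodes 1 \
  --disk-size 15GB \
  --preemptible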

4. Fewer Load Balancers

Unless you are creating load balancers of different types, it doesn't make sense to add a new load balancer per Ingress; instead, utilize a single one which can forward traffic to multiple Kubernetes services based on path or host. The HTTP(S) load balancer can also be provisioned with a Google-managed SSL certificate, so the overhead of handling SSL yourself is reduced too. But based on the need for external or internal load balancers, the count may vary. Read more about Ingress networking in this GCloud Guide.
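As a sketch, a single fan-out Ingress like the one below (the service names and paths are hypothetical) provisions one HTTP(S) load balancer that routes to several services by path:

# One Ingress, and therefore one HTTP(S) load balancer, routing to two services
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
  - http:
      paths:
      - path: /app1/*
        backend:
          serviceName: app1-svc
          servicePort: 80
      - path: /app2/*
        backend:
          serviceName: app2-svc
          servicePort: 80
EOF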

5. Allocate Resource Limits

With autoscaling configured, if an application container requests much higher resources because of a memory issue or unusual processing, new nodes are auto-provisioned, which adds unnecessary cost. So it's good practice to set resource limits for a container, and even per-namespace limits via a ResourceQuota. Here are the examples:

# Container resource limit in the pod definition
...
  - name: busybox
    image: busybox:1.28
    resources:
      limits:
        memory: 200Mi
        cpu: 300m
      requests:
        memory: 100Mi
        cpu: 100m
...
# Namespace resource quota
apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-quota-example
spec:
  hard:
    requests.cpu: "3"
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi
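A quota applies to a namespace, so as a usage sketch (the namespace and file names are arbitrary), it would be created and verified like this:

# Create a namespace for experiments and attach the quota to it
kubectl create namespace learning
kubectl apply -f resource-quota-example.yaml --namespace=learning

# Confirm the quota and the current usage against it
kubectl describe resourcequota resource-quota-example --namespace=learning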

By considering these few things, you can reduce the pricing of GKE and of Kubernetes clusters from other cloud providers. Make sure to monitor the cluster behaviour, which is much easier in Google Cloud Platform with the built-in workloads dashboard as well as through Stackdriver Monitoring.

That’s it for now. Feel free to comment if you have interesting tips, and connect with me on Twitter and LinkedIn, where I keep sharing interesting updates.
