Deployment Flow: GitLab to Kubernetes Cluster on AWS (kops) with Image from ECR Built by CodeBuild
So, you are using GitLab for CI/CD, version control, and more. Now you want a flow to deploy your application to AWS, where a Kubernetes cluster is created by kops or a similar tool. We will use AWS ECR to host our Docker image and CodeBuild to build it, so that image builds are fast and do not wait in the GitLab runner queue, which also lets you keep the GitLab server spec minimal if you run the Community Edition. Now, let's dive into the flow.
- Commit is pushed to a branch or a tag is created on GitLab
- Send input artifact to S3 bucket
- Create AWS CodeBuild project with input artifact from S3*
- Trigger AWS CodeBuild to create a new Docker image
- Push Docker image with a tag matching the branch or Git tag name
- Create K8s cluster on AWS with kops*
- Create new Kubernetes service account for GitLab with access to deploy on the specified namespace*
- Helm install new chart on the cluster*
- Configure ingress with ELB (classic)*
- Deploy new image to Kubernetes
* -> First time only
We create the Dockerfile, .gitlab-ci.yml, and buildspec.yml files at the root of the project. In buildspec.yml, we replace the TAG_NAME value with the branch or Git tag name during the build stage, which is configured in the .gitlab-ci.yml file.
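A minimal sketch of what that buildspec.yml could contain; the repository URI and region below are placeholders, not the exact values from this setup:

version: 0.2

env:
  variables:
    # Placeholder ECR repository URI; replace with your own account/region/repo
    REPO_URI: "123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app"

phases:
  pre_build:
    commands:
      # Log in to ECR (CLI syntax matching the docker:17.09.0 CodeBuild image era)
      - $(aws ecr get-login --no-include-email --region us-west-2)
  build:
    commands:
      # TAG_NAME is a literal placeholder that the GitLab job replaces with the branch/tag name
      - docker build -t $REPO_URI:TAG_NAME .
  post_build:
    commands:
      - docker push $REPO_URI:TAG_NAME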

The .gitlab-ci.yml file goes like the following. We base64 encode the AWS_CREDENTIALS and KOPS_KUBE_CONFIG values and add them as secret environment variables.
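A minimal sketch of what this .gitlab-ci.yml could look like; the bucket, CodeBuild project, deployment, and ECR names are placeholders, and the jobs assume the runner image provides the aws CLI, zip, and kubectl:

stages:
  - build
  - deploy

variables:
  AWS_DEFAULT_REGION: us-west-2     # placeholder region
  S3_BUCKET: gitlab-my-app          # input artifact bucket
  CODEBUILD_PROJECT: my-app-build   # placeholder CodeBuild project name

build:
  stage: build
  script:
    # Restore AWS credentials from the base64-encoded secret variable
    - mkdir -p ~/.aws && echo "$AWS_CREDENTIALS" | base64 -d > ~/.aws/credentials
    # Replace the TAG_NAME placeholder in buildspec.yml with the branch/tag name
    - sed -i "s/TAG_NAME/${CI_COMMIT_REF_NAME}/g" buildspec.yml
    # Zip the source and upload it as the CodeBuild input artifact
    - zip -r myappcode.zip .
    - aws s3 cp myappcode.zip s3://${S3_BUCKET}/myappcode.zip
    # Kick off the CodeBuild project that builds and pushes the image to ECR
    - aws codebuild start-build --project-name ${CODEBUILD_PROJECT}

deploy:
  stage: deploy
  when: manual
  script:
    # Restore the kubeconfig generated for the gitlab service account
    - mkdir -p ~/.kube && echo "$KOPS_KUBE_CONFIG" | base64 -d > ~/.kube/config
    # Point the deployment at the freshly built image tag (names are placeholders)
    - kubectl set image deployment/my-app my-app=123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:${CI_COMMIT_REF_NAME}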

We have a simple Go app.
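For illustration, a minimal version could be a plain HTTP server; the port and response text below are just assumptions:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Respond with a simple message on every request
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from my-app!")
	})
	// Listen on 8080; keep this in sync with the Dockerfile and the chart's service
	http.ListenAndServe(":8080", nil)
}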

And a Dockerfile with a multi-stage build that places the application artifact on an Alpine image.
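A sketch of such a Dockerfile, assuming the Go app above; the base image versions and exposed port are assumptions:

# Build stage: compile a static Go binary
FROM golang:1.10-alpine AS builder
WORKDIR /go/src/my-app
COPY . .
RUN CGO_ENABLED=0 go build -o /my-app .

# Final stage: ship only the binary on a minimal Alpine image
FROM alpine:3.7
COPY --from=builder /my-app /my-app
EXPOSE 8080
ENTRYPOINT ["/my-app"]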

Time to dive into the Kubernetes cluster on AWS with kops. Assuming you have installed kops and generated AWS credentials for an IAM user with the following policies:
- AmazonEC2FullAccess
- AmazonRoute53FullAccess
- AmazonS3FullAccess
- IAMFullAccess
- AmazonVPCFullAccess
Now, create an S3 bucket for state storage:
aws s3api create-bucket --bucket my-unique-bucket-aws
Enable versioning and define the KOPS_STATE_STORE environment variable referring to the bucket:
aws s3api put-bucket-versioning --bucket my-unique-bucket-aws --versioning-configuration Status=Enabled
export KOPS_STATE_STORE=s3://my-unique-bucket-aws
Assuming you have the example.com domain, which is essential for cluster communication between worker nodes and the master, along with etcd server discovery, create a Route 53 hosted zone:
ID=$(uuidgen) && \
aws route53 create-hosted-zone \
--name cluster.example.com \
--caller-reference $ID \
| jq .DelegationSet.NameServers
This outputs the NS records to which your domain should point. The domain name example.com can also be a subdomain.
Time to Create Cluster
kops create cluster \
--name cluster.example.com \
--zones us-west-2a \
--state s3://my-unique-bucket-aws \
--node-size m4.large \
--node-count 2 \
--master-size m4.large \
--yes
Within a few minutes your cluster will be ready with a master and 2 nodes. You can SSH directly into the master IP using your private SSH key from the default location (~/.ssh/id_rsa).
You can do a quick smoke test by creating a deployment and accessing it:
kubectl run my-nginx-app --image nginx:latest
kubectl expose deployment my-nginx-app --port=80 --type=LoadBalancer
kubectl describe svc my-nginx-app
Within a few minutes a load balancer URL is generated and the deployed nginx page is accessible.
Time for Helm
Create a new service account for helm with cluster-admin access for now:
kubectl create sa tiller -n kube-system
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
Now, create a new Helm chart named my-app and install it:
helm create my-app
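Installing the chart with its default values would then be something like this (Helm v2 syntax to match the tiller setup above; the release name is an assumption):

helm install --name my-app ./my-app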

Create Service Account and kubeconfig for GitLab Deployment
Get the script from this gist, give execute permission (chmod +x) and run it:
./kubectl-sa-kubeconfig.sh gitlab default
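Roughly, the script does something like the following (a sketch under assumptions, not the gist itself):

#!/bin/sh
SA_NAME=$1      # service account name, e.g. gitlab
NAMESPACE=$2    # target namespace, e.g. default
CONF="/tmp/kube/k8s-${SA_NAME}-${NAMESPACE}-conf"

# Create the service account and allow it to deploy into the namespace
kubectl create serviceaccount "${SA_NAME}" -n "${NAMESPACE}"
kubectl create rolebinding "${SA_NAME}-edit" --clusterrole=edit \
  --serviceaccount="${NAMESPACE}:${SA_NAME}" -n "${NAMESPACE}"

# Pull the service account token out of its secret
SECRET=$(kubectl get sa "${SA_NAME}" -n "${NAMESPACE}" -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "${SECRET}" -n "${NAMESPACE}" -o jsonpath='{.data.token}' | base64 -d)

# Write a kubeconfig that authenticates with that token
mkdir -p /tmp/kube
kubectl config --kubeconfig="${CONF}" set-cluster cluster --server=https://127.0.0.1 --insecure-skip-tls-verify=true
kubectl config --kubeconfig="${CONF}" set-credentials "${SA_NAME}" --token="${TOKEN}"
kubectl config --kubeconfig="${CONF}" set-context default --cluster=cluster --user="${SA_NAME}" --namespace="${NAMESPACE}"
kubectl config --kubeconfig="${CONF}" use-context default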
It will create a new service account named gitlab and generate a kubeconfig file at /tmp/kube/k8s-gitlab-default-conf. Open the file with your favorite text editor and change the master server address (server: https://127.0.0.1) to the cluster's API server address. Then base64 encode the file (base64 /tmp/kube/k8s-gitlab-default-conf) and add it to the GitLab secret environment variables as KOPS_KUBE_CONFIG.
Create CodeBuild Trigger and ECR Registry
While creating the CodeBuild project, keep the configuration as below:
- Source Provider: Amazon S3
- Source (input artifact): arn:aws:s3:::gitlab-my-app/myappcode.zip
- Image: aws/codebuild/docker:17.09.0
- Auto create new role

It will create a new role, to which we need to attach a policy granting the EC2 Container Registry GetAuthorizationToken permission.
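That permission corresponds to a policy along these lines (ecr:GetAuthorizationToken does not support resource-level restrictions, so the resource is *):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    }
  ]
}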
Create ECR Repository
Head over to ECR, give it a name, and a new repository is created. Add a policy to the CodeBuild role so that it can push images to ECR.
Finally, create a new AWS user for GitLab with access to put objects into our input artifact S3 bucket and to trigger CodeBuild. Base64 encode the credentials file or its content and add it to the GitLab secret variables under the name AWS_CREDENTIALS.
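As a sketch, that user's policy could look like this; the bucket name matches the input artifact bucket used earlier, and you may want to narrow codebuild:StartBuild down to your project's ARN:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::gitlab-my-app/*"
    },
    {
      "Effect": "Allow",
      "Action": "codebuild:StartBuild",
      "Resource": "*"
    }
  ]
}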
Now we are done. When a new commit is pushed, the pipeline zips our application code and sends it to the S3 bucket, then triggers the CodeBuild project, which creates a new Docker image and pushes it to the ECR registry. Finally, we can manually trigger the deployment of our Helm-installed application on the Kubernetes cluster on AWS.
Cheers !!!