Vagrant lets you provision VMs locally

Creating a Local Kubernetes Cluster with Vagrant

Uğur Akgül
8 min read · Nov 1, 2021


Hi! I hope you have been okay since we last met. It’s been a long time, and I’ve missed writing so much. In this post I will be talking about local Kubernetes clusters, and why and how to bootstrap them.

What is a Local Kubernetes Cluster and WHY?

A local Kubernetes cluster is, well, as the name says, a cluster on your local machine, say a laptop (a powerful one). You can bring these clusters up and down with no anxiety about messing up another person’s work. And the same holds for you: a coworker cannot mess around with your cluster either, so your environment is completely safe from the intrusion of others. (Well, if you keep your laptop safe from the intrusion of others, of course :) )

Another benefit of working locally is that you can develop your environment and your cluster so easily, without having to wait for merges, approvals, etc. Since it’s all on your local machine, you can quickly change one variable and bootstrap the whole cluster from scratch. If there is a critical error in your cluster, no worries: you can bring it down, change the configuration file, and start all over again.

Since you will bring the cluster up and down many times, the configuration must produce the same result every time the cluster bootstraps. That’s why it needs to be idempotent.

To manage idempotency you can use Ansible, but in this post I will be using some bash scripts, because they are simple and do the job.

What is Vagrant?

Vagrant is a tool for building and managing virtual machine environments in a single workflow. It can build virtual machines very easily, and you can configure these virtual machines with its Vagrantfiles. With Vagrant you will decrease VM setup time and have your own playground to test in. Vagrant VMs are provisioned on top of VirtualBox, VMware, AWS, or the other providers listed in the documentation here. In this post I will be using VirtualBox.

Let’s Start The Fun Part

Before we begin, we need some prerequisites so that we don’t end up saying “well, it was working on my machine”.

Prerequisites

  • VirtualBox (I will be using version 6.1.28)
  • Vagrant (I will be using version 2.2.7)

You can access the project and all of the code below from THIS repository.

After you have installed VirtualBox and Vagrant, there is nothing to stop you from provisioning your first virtual machine. For resource reasons, I will be building the Kubernetes cluster with 2 nodes: 1 master and 1 worker. Each VM will have 1 CPU and 2 GB of memory and will use Ubuntu 20.04 as its base image. If your version of this project works, you can increase the number of workers and have a multi-worker Kubernetes cluster.

Remember that we want these processes to be automated. So we will have Vagrant run these scripts for us, and after our VMs are created we won’t have to do anything: we will be ready to use our playground.

First, let’s create our scripts.

Basic Kubernetes Installation Parts

To install Kubernetes, we must first install the required packages on all virtual machines. There are 25–30 commands needed to complete this step, so to automate the process we will be using a bash script.

The dependency installation script (install-kubernetes-dependencies.sh) will be executed on all VMs.

In this script we are configuring all virtual machines for the Kubernetes cluster. The configuration consists of installing the required packages, configuring the hosts file, disabling swap, configuring sysctl, and installing the Docker runtime. You can of course use a different container runtime. These steps are required for a Kubernetes cluster to run smoothly.
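
The full script lives in the repository; the sketch below is only meant to show its general shape, assuming Ubuntu 20.04, the Docker runtime, and the Kubernetes apt repository that was current when this post was written. The node IPs written to the hosts file are illustrative and must match the ones in your Vagrantfile.

#!/bin/bash
# Sketch of install-kubernetes-dependencies.sh -- the full version is in the repository.
set -euo pipefail

# Map static node IPs to hostnames (illustrative values; they must match the Vagrantfile).
cat <<EOF >>/etc/hosts
172.16.8.10 master
172.16.8.11 node-01
EOF

# Disable swap -- the kubelet refuses to start while swap is enabled.
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# Kernel module and sysctl settings required for pod networking.
modprobe br_netfilter
cat <<EOF >/etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system

# Install the Docker container runtime.
apt-get update
apt-get install -y apt-transport-https ca-certificates curl docker.io
systemctl enable --now docker

# Add the Kubernetes apt repository and install kubeadm, kubelet and kubectl.
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" >/etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl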

After this script finishes, all of our VMs will be ready to bootstrap Kubernetes correctly.

To Configure Master Node

After configuring the prerequisites on all VMs, we will configure the master node of our cluster. Again we will be using a bash script for automation. Note that we will execute this script only on the master node, so later, when we create our Vagrantfile, we need to configure it so that this script runs only on the master.

Configuring the master node.

In this script we are creating the Kubernetes cluster: initializing Kubernetes via kubeadm, creating a join command for the worker nodes, configuring kubectl so we can run kubectl commands without any trouble, and finally installing a CNI, in this case Flannel.
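
The real script is in the repository; a rough sketch with the same structure (the master IP and the Flannel pod CIDR below are illustrative placeholders, and the join command is written to the shared folder as described later) could look like this:

#!/bin/bash
# Sketch of configure-master-node.sh -- run only on the master node.
set -euo pipefail

# Illustrative values; the real ones are set in the repository.
master_node="172.16.8.10"
pod_network_cidr="10.244.0.0/16"   # Flannel's default pod network

initialize_cluster() {
  kubeadm init \
    --apiserver-advertise-address=$master_node \
    --pod-network-cidr=$pod_network_cidr \
    --ignore-preflight-errors=NumCPU
}

configure_kubectl() {
  # Let the vagrant user run kubectl without sudo.
  mkdir -p /home/vagrant/.kube
  cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
  chown -R vagrant:vagrant /home/vagrant/.kube
}

create_join_command() {
  # Write the join command to the shared folder so the workers can pick it up.
  kubeadm token create --print-join-command > /vagrant/join-command.sh
}

install_network_cni() {
  kubectl apply -f /vagrant/kube-flannel.yml
}

export KUBECONFIG=/etc/kubernetes/admin.conf
initialize_cluster
configure_kubectl
create_join_command
install_network_cni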

IMPORTANT NOTE:

With the command below, we configure our Kubernetes control plane to run on the master node’s IP.

kubeadm init --apiserver-advertise-address=$master_node --pod-network-cidr=$pod_network_cidr --ignore-preflight-errors=NumCPU

If we don’t pass apiserver-advertise-address, kubeadm automatically takes the default address from the VM, which belongs to the enp0s3 network adapter. This adapter is the default adapter created by Vagrant no matter what, and we will not be using it, so we are telling our cluster to run on our defined IP and therefore on our defined network adapter.

Since our Vagrant VMs have 2 different network adapters, if we deploy our CNI as-is, it will automatically attach to Vagrant’s default network adapter. If we don’t change this, our pods will not be reachable. We need to change this setting in the Flannel deployment. If you look at

install_network_cni() {
  kubectl apply -f /vagrant/kube-flannel.yml
}

you can see that we are applying a local file, /vagrant/kube-flannel.yml. The relevant configuration is in the lines below; the file can be found in the GitHub repository I referred to at the beginning of this post.

containers:
  - name: kube-flannel
    image: quay.io/coreos/flannel:v0.15.0
    command:
      - /opt/bin/flanneld
    args:
      - --ip-masq
      - --kube-subnet-mgr
      - --iface=enp0s8

By giving --iface=enp0s8 we are configuring our Flannel CNI to work on our defined network adapter. This is an important step: if you don’t configure Flannel like this, your pods will not be reachable. (With the Weave CNI you don’t have to configure this option.)

And we are passing --ignore-preflight-errors=NumCPU because all our VMs have 1 CPU (since we are working on our local machine), while kubeadm by default requires 2 CPUs. If that requirement is not met, it throws an error in the preflight check step and stops the installation. By passing this argument we tell kubeadm to ignore the 2-CPU restriction.

After successfully executing this script on the master node, only the worker node(s) remain to be configured.

Configuring Worker Node(s)

To configure the worker node(s) we will be using (of course) a bash script, and this one is tiny, because the worker node only needs to join the cluster created by the master node.

In this step we simply run the join command that the master node created earlier. The join command is stored in the shared folder, which is one of the beauties of Vagrant: every VM defined in this Vagrantfile can read and write this folder. The shared folder is bound to the folder on the host that contains the Vagrantfile.
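
As a minimal sketch, assuming the master node wrote its join command to /vagrant/join-command.sh as in the master-node sketch above, the worker script can be as small as:

#!/bin/bash
# Sketch of configure-worker-nodes.sh -- run only on the worker nodes.
set -euo pipefail

# The master node wrote its kubeadm join command into the shared folder;
# every VM can read /vagrant, so the worker just executes it.
bash /vagrant/join-command.sh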

Now that we have all of our bash scripts and automation done, we can move on to Vagrant and configure its Vagrantfile in our final step.

Configuring Vagrantfile

Vagrant uses a Vagrantfile to provision virtual machines, just like Docker uses a Dockerfile to build images. A Vagrantfile is written in Ruby, so a little bit of Ruby knowledge is valuable.

Because a Vagrantfile is written in Ruby, you can use all of Ruby’s tips and tricks to modify something or create workarounds in a Vagrantfile.

Let’s take a look at our Vagrantfile.

In our Vagrantfile we describe our VMs, defining their resources and hostnames. For every VM we execute install-kubernetes-dependencies.sh. We execute the configure-master-node.sh script only on the master node and configure-worker-nodes.sh only on the worker nodes.
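
The real Vagrantfile is in the repository; a condensed sketch with the same structure (the box name and the master IP are assumptions, while the worker IP matches the one used later in this post) might look like this:

# Sketch of the Vagrantfile -- the full version lives in the repository.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"   # Ubuntu 20.04 base image

  nodes = {
    "master"  => "172.16.8.10",   # illustrative IP, must match the scripts
    "node-01" => "172.16.8.11",
  }

  nodes.each do |name, ip|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: ip   # shows up as enp0s8 inside the VM

      node.vm.provider "virtualbox" do |vb|
        vb.cpus = 1
        vb.memory = 2048
      end

      # Common prerequisites run on every VM.
      node.vm.provision "shell", path: "install-kubernetes-dependencies.sh"

      # The master and worker scripts run only on their respective nodes.
      if name == "master"
        node.vm.provision "shell", path: "configure-master-node.sh"
      else
        node.vm.provision "shell", path: "configure-worker-nodes.sh"
      end
    end
  end
end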

Powering Up

After creating our scripts and Vagrantfile, there is only one step left. The moment of truth. You can give it a go with the command:

vagrant up

You need to execute this command in the folder that contains the Vagrantfile, just like docker build.

This command should take ~10mins to successfully execute.

If you configured everything correctly you should see something like this.

Installation completed

Now let’s SSH into the master node and check the Kubernetes cluster status:

vagrant ssh master

and get the Kubernetes nodes:

kubectl get nodes
kubectl get nodes output

And there it is. Your pocket Kubernetes cluster is at your service. From now on you can start using Kubernetes commands. Let’s create a simple nginx web page.

To create a deployment, we can use the /vagrant/nginx-deployment.yml file, which I added to the repository.
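
The actual manifests are in the repository; a minimal sketch of what the deployment and the NodePort service can look like (combined here for brevity, with nodePort 30080 matching the port used below) is:

# Sketch of /vagrant/nginx-deployment.yml and /vagrant/nginx-service.yml,
# combined into one listing; the real manifests are in the repository.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # the port used to reach nginx from the host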

kubectl apply -f /vagrant/nginx-deployment.yml
kubectl apply -f /vagrant/nginx-service.yml
kubectl apply deployment and service

Let’s see if our pod is working

kubectl get pods -o wide
kubectl get pods

Let’s check our service which will expose our pod

kubectl get service
kubectl get service

Now we can see our nginx web page at 172.16.8.11:30080, which is the IP of node-01 plus the NodePort of our service.

Nginx is working on 172.16.8.11:30080

And so our Kubernetes cluster is working and is accessible from outside the VMs.

Final Thoughts

Vagrant is a powerful tool for provisioning VMs. If you have a slightly better laptop or PC, you can have your playground at your fingertips, just like that. This is very valuable if you are constantly developing something and dealing with that “approval time” or the “downtime of the developer cluster”. Vagrant removes the “Can I break something in this environment? Will I break something? Did I break something???” thoughts from your mind, because you are working on your local machine. And since it is an idempotent way to provision VMs, if it works on your machine, then it should work on any other machine that can bootstrap your Vagrantfile. So no more “it works on my machine”.

Thank you for reading this far; I hope you enjoyed this post and learned something from it. Feel free to ask me your questions, I would love to answer them. Stay safe and be well :)

Written by Uğur Akgül

Tech Lead, Platform Engineering @TurkNet // You can find me at https://www.linkedin.com/in/hikmetugurakgul/
