
Steps to Set Up a Kubernetes Cluster

In this blog, I will show you how to set up a Kubernetes cluster using Kubeadm.

For this hands-on, I used Ubuntu EC2 instances hosted in an AWS environment. I launched one master node and one worker node (both EC2 instances were of the t2.medium instance type).

Please follow the steps below to build a Kubernetes cluster with Kubeadm:

1. Install Docker on all nodes (master and worker):

#curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

#sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

#sudo apt-get update

#sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu

#sudo apt-mark hold docker-ce

Here, we first configure apt to install Docker from the official Docker repository.

Then we install Docker and hold the package so that it is not automatically upgraded.
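As a quick sanity check (an extra step, not part of the original walkthrough), you can confirm that the package is actually held:

```shell
# Packages on hold are excluded from "apt-get upgrade";
# docker-ce should appear in this list after the hold above.
apt-mark showhold

# Confirm that the pinned Docker version is the one installed.
docker --version
```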

2. Verify that Docker is up and running with:

ubuntu@ip-172-31-36-106:~$ sudo systemctl status docker

docker.service - Docker Application Container Engine

Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)

Active: active (running) since Thu 2020-10-29 13:55:44 UTC; 17h ago


Main PID: 874 (dockerd)
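Beyond checking the service status, an optional extra verification (assuming the standard hello-world image is reachable from your EC2 instance) is to run a throwaway container:

```shell
# Pull and run the hello-world image; a success message confirms the
# Docker daemon can pull images and start containers. --rm removes
# the container again once it exits.
sudo docker run --rm hello-world
```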

3. Install Kubeadm, Kubelet, and Kubectl on all nodes:

# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

# cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

# sudo apt-get update

# sudo apt-get install -y kubelet=1.14.5-00 kubeadm=1.14.5-00 kubectl=1.14.5-00

# sudo apt-mark hold kubelet kubeadm kubectl

4. Bootstrap the cluster on the Kube master node:

# sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Take note that the kubeadm init command prints a long kubeadm join command to the screen. You will need that kubeadm join command in a later step! Mine looked like this:

sudo kubeadm join <master-ip>:6443 --token 53widj.y0fqvaxib8e9y96d --discovery-token-ca-cert-hash sha256:3fb41b6623f099647f212153291592772ca363d0dc39ffbaba65b04eea7884cf --ignore-preflight-errors all
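If you lose this join command, or the bootstrap token expires (tokens are valid for 24 hours by default), kubeadm can print a fresh one. Run this on the master node:

```shell
# Create a new bootstrap token and print the complete join command,
# including the API server endpoint and CA certificate hash.
sudo kubeadm token create --print-join-command

# List existing tokens and their expiry times.
sudo kubeadm token list
```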

5. When it is done, set up the local kubeconfig on the master node:

#mkdir -p $HOME/.kube

#sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

#sudo chown $(id -u):$(id -g) $HOME/.kube/config
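With the kubeconfig in place, kubectl should now be able to reach the API server. A quick sanity check (extra commands, not in the original steps):

```shell
# Print the control plane endpoint kubectl is configured to talk to.
kubectl cluster-info

# Show the active kubeconfig (certificate data is redacted).
kubectl config view
```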

Run the following command on the Kube master node to verify it is up and running:

ubuntu@ip-172-31-36-106:~$ kubectl version

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.5", GitCommit:"0e9fcb426b100a2aea5ed5c25b3d8cfbb01a8acf", GitTreeState:"clean", BuildDate:"2019-08-05T09:21:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.10", GitCommit:"575467a0eaf3ca1f20eb86215b3bde40a5ae617a", GitTreeState:"clean", BuildDate:"2019-12-11T12:32:32Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

6. Join the worker nodes to the cluster.

Copy the kubeadm join command that was printed by the kubeadm init command earlier, including the token and hash. Run this command on each worker node, making sure to add sudo in front of it:

#sudo kubeadm join $some_ip:6443 --token $some_token --discovery-token-ca-cert-hash $some_hash

Actual command:

# sudo kubeadm join <master-ip>:6443 --token 53widj.y0fqvaxib8e9y96d --discovery-token-ca-cert-hash sha256:3fb41b6623f099647f212153291592772ca363d0dc39ffbaba65b04eea7884cf --ignore-preflight-errors all

Now, on the Kube master node, make sure your nodes joined the cluster successfully:

ubuntu@ip-172-31-36-106:~$ kubectl get nodes


NAME               STATUS     ROLES    AGE   VERSION

ip-172-31-36-106   NotReady   master   17h   v1.14.5

ip-172-31-41-248   NotReady   <none>   17h   v1.14.5

If you see the status NotReady for the master and worker nodes, we need to set up cluster networking.

Turn on iptables bridge calls on all nodes:

# echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf

# sudo sysctl -p
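To confirm the change took effect (an extra check, not in the original steps):

```shell
# After "sysctl -p", this setting should report a value of 1, meaning
# traffic crossing the Linux bridge is passed through iptables chains.
sysctl net.bridge.bridge-nf-call-iptables
```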

Next, install the Flannel networking plugin by running this only on the Kube master node:

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Now flannel is installed! Make sure it is working by checking the node status again:

ubuntu@ip-172-31-36-106:~$ kubectl get nodes


NAME               STATUS   ROLES    AGE   VERSION

ip-172-31-36-106   Ready    master   17h   v1.14.5

ip-172-31-41-248   Ready    <none>   17h   v1.14.5

After a short time, all nodes should be in the Ready state. If they are not all Ready the first time you run kubectl get nodes, wait a few moments and try again.
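As an optional smoke test once everything is Ready (nginx here is just an example image, not part of the original walkthrough), you can deploy a pod and confirm it gets scheduled onto the worker node:

```shell
# Create a single-replica test deployment from the master node.
kubectl create deployment nginx --image=nginx

# Wait until the rollout completes, then check which node the pod landed on.
kubectl rollout status deployment/nginx
kubectl get pods -o wide

# Clean up the test deployment.
kubectl delete deployment nginx
```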

