How to Configure K8S Multi-Node Cluster over AWS Cloud via Ansible Role

📌 Ansible Role to configure a K8S multi-node cluster over the AWS cloud.
🔅 Create an Ansible playbook to launch three AWS EC2 instances.
🔅 Create an Ansible playbook to configure Docker on those instances.
🔅 Create a playbook to configure the K8S master and worker nodes on the above EC2 instances using kubeadm.

The code link is attached below.

Launch three EC2 instances on the AWS cloud

In this task, we will deploy a Kubernetes cluster on the AWS cloud via Ansible roles.

I have made two different roles for this purpose:

aws_provision launches three EC2 instances, and cluster_setup configures the Kubernetes cluster on the launched instances.

This is the main task file for the aws_provision role.
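As a rough sketch of what such a task file might contain (the module choice and variable names here are my assumptions, not the author's actual code):

```yaml
# tasks/main.yml of the aws_provision role (illustrative sketch)
- name: Launch EC2 instances for the cluster
  amazon.aws.ec2_instance:
    key_name: "{{ key_name }}"           # assumed var: SSH key pair name
    instance_type: "{{ instance_type }}" # e.g. t2.micro
    image_id: "{{ ami_id }}"             # e.g. an Amazon Linux 2 AMI
    region: "{{ region }}"
    security_group: "{{ sg_name }}"
    count: 3                             # one master + two workers
    tags:
      Name: k8s-node
    state: present
```

AWS credentials would typically be supplied via environment variables or an Ansible vault rather than hard-coded in the role.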

This is the vars file that stores the variables for the aws_provision role.
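A vars file for such a role might look like this (all values below are placeholders, not the author's actual configuration):

```yaml
# vars/main.yml of the aws_provision role (placeholder values)
key_name: mykey                  # name of an existing EC2 key pair
instance_type: t2.micro
ami_id: ami-0123456789abcdef0    # placeholder AMI ID for your region
region: ap-south-1
sg_name: k8s-sg                  # security group allowing SSH and K8s ports
```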

Now we run ansible-playbook to execute the aws_provision role.
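The top-level playbook that calls the role could be as simple as the following sketch (file and role names assumed from context):

```yaml
# setup.yml — top-level playbook (illustrative)
- hosts: localhost
  connection: local   # AWS API calls run from the controller node
  roles:
    - aws_provision
```

It would then be run with `ansible-playbook setup.yml`.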

We can now see that three EC2 instances have been launched on AWS.

Configure a Kubernetes cluster consisting of one master and two worker nodes

The steps involved are:

🔅 Configure the yum repository for Kubernetes

🔅 Install Docker

🔅 Install kubeadm, kubelet, and kubectl

🔅 Start the kubelet service

🔅 Pull the images for kubeadm

🔅 Edit daemon.json

🔅 Restart the Docker service

🔅 Install iproute-tc

🔅 Initialize the master via kubeadm init

🔅 Copy the admin kubeconfig file

🔅 Generate the join token and run it on the worker nodes
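One step above is worth expanding: kubeadm expects Docker to use the systemd cgroup driver, which is why daemon.json is edited and the Docker service restarted. In Ansible this step might look like the following sketch (task names are my own):

```yaml
# Illustrative tasks: set Docker's cgroup driver to systemd (needed by kubeadm)
- name: Configure /etc/docker/daemon.json
  copy:
    dest: /etc/docker/daemon.json
    content: |
      {
        "exec-opts": ["native.cgroupdriver=systemd"]
      }

- name: Restart Docker to apply the cgroup driver change
  service:
    name: docker
    state: restarted
```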

This is the task file for the cluster_setup role.
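A condensed sketch of what such a task file might contain is shown below. The inventory group name `master`, the pod network CIDR, and the repo URL are assumptions (the Kubernetes yum repo URL in particular has changed over time), and the real role would split master and worker tasks:

```yaml
# tasks/main.yml of the cluster_setup role (condensed, illustrative sketch)
- name: Configure the yum repository for Kubernetes
  yum_repository:
    name: kubernetes
    description: Kubernetes repo
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: no

- name: Install Docker, kubeadm, kubelet, kubectl, and iproute-tc
  package:
    name: [docker, kubeadm, kubelet, kubectl, iproute-tc]
    state: present

- name: Start and enable the Docker and kubelet services
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: [docker, kubelet]

- name: Initialize the control plane          # master only
  command: kubeadm init --pod-network-cidr=10.244.0.0/16
  when: "'master' in group_names"             # assumed inventory group name

- name: Generate the worker join command      # master only
  command: kubeadm token create --print-join-command
  register: join_cmd
  when: "'master' in group_names"
```

The registered `join_cmd` output would then be run on the worker nodes to join them to the cluster.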

This is the directory structure for the cluster_setup role.

Now we run the ansible-playbook command to configure the cluster.

Now we check the configuration by logging into the master node and running `kubectl get nodes`.

Finally, our cluster is set up!

If anything is unclear, refer to the code below.

I automate things 😉