Kubernetes with Kubespray

Distribution Overview

  • Control plane storage: etcd
  • CRI: containerd
  • CNI: calico
  • Ingress: nginx
  • Optionally install kubeadm
  • Kubernetes components can be installed as binaries on the target infrastructure or run as containers
  • Service routing with kube-proxy in iptables or IPVS mode; cluster DNS is provided by CoreDNS
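A minimal sketch of how these component choices map onto Kubespray's configuration variables (the variable names come from Kubespray's sample group_vars; the `inventory/mycluster` path is an assumption, created here for illustration):

```shell
# Illustrative: record the CRI, CNI, and kube-proxy mode in the cluster config.
# Variable names are Kubespray's; the inventory path is hypothetical.
CFG=inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
mkdir -p "$(dirname "$CFG")"
cat <<'EOF' > "$CFG"
container_manager: containerd   # CRI
kube_network_plugin: calico     # CNI
kube_proxy_mode: iptables       # kube-proxy service routing mode
EOF
grep kube_network_plugin "$CFG"
```

In a real deployment you would edit the copies of these files that ship with the sample inventory rather than generate them from scratch.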

Installation Architectures

  • Single Node Cluster: You deploy everything on just one node. This node should have substantial hardware capabilities — keep in mind that you are installing the official Kubernetes binaries, not an optimized distribution such as K3s.
  • Single controller, multi worker: You configure the cluster to have one controller node and several worker nodes. The same requirement applies: the controller node needs capable hardware, while the workloads run on the additional worker nodes.
  • Multi controller, multi worker: This is the recommended way to set up a Kubernetes cluster. The number of controller nodes should conform to the equation 2n + 1 (i.e., 3, 5, 7, …) so that a quorum remains in the case that a controller node goes down.
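The 2n + 1 rule can be checked with a quick calculation: a quorum requires a strict majority, so a cluster of 2n + 1 controllers keeps working as long as no more than n of them are down.

```shell
# Fault tolerance of a quorum-based control plane:
# 2n+1 members survive the loss of n members (a majority of n+1 remains).
for members in 1 3 5 7; do
  echo "$members controller(s) -> survives $(( (members - 1) / 2 )) failure(s)"
done
```

This is also why an even number of controllers buys nothing: 4 members tolerate only 1 failure, the same as 3.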

Installation Process

  • Kubespray controller: The computer or server on which you install Ansible and all required libraries
  • K8S controller nodes: The node(s) designated as controller nodes
  • K8S worker nodes: The node(s) designated as worker nodes
  • Controller nodes need at least 1.5 GB of RAM; worker nodes need at least 1 GB
  • On the nodes, a compatible OS needs to be installed
  • Ensure SSH access to the nodes
  • The installation process uses an Ansible Galaxy role that does all the heavy lifting: it defines a Python virtual environment in which the Ansible version is isolated, installs all required Python libraries, and clones the actual Ansible files that Kubespray uses
  • The inventory is composed of three groups: control plane nodes, worker nodes, and etcd servers
  • Copy the sample inventory with cp -rfp inventory/sample inventory/mycluster and define the nodes and their roles as controller or worker
  • Decide and define which Kubernetes Components to use
  • Define these components in the configuration files inventory/mycluster/group_vars/all/all.yml and inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
  • Run the Ansible playbook with ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml and your cluster will be created
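The steps above can be sketched as a single command sequence (the repository URL and requirements file are Kubespray's; the virtual-env path and inventory name are illustrative):

```shell
# Sketch of the full installation flow on the Kubespray controller machine.
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
python3 -m venv .venv && source .venv/bin/activate   # isolate the Ansible version
pip install -r requirements.txt                      # install required Python libraries
cp -rfp inventory/sample inventory/mycluster         # start from the sample inventory
# ...edit inventory/mycluster/hosts.yaml and the group_vars files, then:
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```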

Upgrade Process

  • Upgrade the control plane: The Ansible playbook upgrade-cluster.yml is called with the desired Kubernetes version, limited to the controller and etcd nodes
ansible-playbook upgrade-cluster.yml \
-b -i inventory/sample/hosts.ini \
-e kube_version=v1.25.0 \
--limit "kube_control_plane:etcd"
  • Upgrade the worker nodes: The same playbook is used, but you limit the run to the nodes you want to upgrade
ansible-playbook upgrade-cluster.yml \
-b -i inventory/sample/hosts.ini \
-e kube_version=v1.25.0 \
--limit "node5*"
  • Upgrade individual components: The following components can be upgraded separately by running cluster.yml with the matching tag, for example:
  • Docker
  • Containerd
  • etcd
  • kubelet and kube-proxy
  • network plugins
  • kube-apiserver, kube-scheduler, and kube-controller-manager
  • Add-ons (such as KubeDNS)
ansible-playbook \
-b -i inventory/sample/hosts.ini \
--tags=docker \
cluster.yml
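Before limiting a run to one component, the tags defined in the playbook can be inspected without executing anything, using ansible-playbook's standard --list-tags option:

```shell
# Dry inspection: print every tag cluster.yml supports; no task is executed.
ansible-playbook -i inventory/sample/hosts.ini --list-tags cluster.yml
```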

Customization

  • etcd
  • Container runtimes: Containerd, Docker, CRI-O, Kata Containers, gVisor
  • Network plugins: cni-plugins, calico, canal, cilium, flannel, kube-ovn, kube-router, multus, weave
  • Ingress and load balancing: kube-vip, ALB Ingress, MetalLB, Nginx Ingress
  • Storage provisioners: cephfs-provisioner, rbd-provisioner, aws-ebs-csi-plugin, azure-csi-plugin, cinder-csi-plugin, gcp-pd-csi-plugin, local-path-provisioner, local-volume-provisioner
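Optional components like these are switched on via boolean flags in the add-ons configuration. A hypothetical sketch (the three variable names come from Kubespray's sample addons.yml; the inventory path is an assumption):

```shell
# Illustrative: enable a few optional components in the add-ons config.
# Flag names are Kubespray's; the inventory path is hypothetical.
ADDONS=inventory/mycluster/group_vars/k8s_cluster/addons.yml
mkdir -p "$(dirname "$ADDONS")"
cat <<'EOF' > "$ADDONS"
ingress_nginx_enabled: true
metallb_enabled: true
local_path_provisioner_enabled: true
EOF
grep -c ': true' "$ADDONS"   # three add-ons enabled
```

The next run of cluster.yml then deploys the enabled add-ons alongside the core components.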

Conclusion
