Kubernetes Concepts

Node Management, Service Discovery, and More

Removing a Worker Node from a Cluster

When you need to remove a worker node from a Kubernetes cluster, follow these steps:

  1. On the control plane node, drain the worker node:
kubectl drain <node-name> --ignore-daemonsets

Draining ensures that no new pods are scheduled on the node, and existing pods are safely evicted.
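Once the node has been drained, its node object is typically removed from the cluster as well. A minimal sketch, with <node-name> as a placeholder:

# Delete the node object from the API server
kubectl delete node <node-name>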

If you plan to re-add the node to another cluster or remove it entirely, perform these additional steps on the worker node:

  1. Reset kubeadm:
sudo kubeadm reset
  2. Manually remove other configurations as needed (a typical cleanup sketch follows).
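kubeadm reset does not remove CNI configuration, iptables rules, or IPVS tables. A manual cleanup along these lines is common; this is only a sketch, and the exact paths and commands depend on the CNI plugin and kube-proxy mode in use:

# Remove leftover CNI configuration
sudo rm -rf /etc/cni/net.d

# Flush iptables rules added by kube-proxy and the CNI plugin
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X

# Clear IPVS tables if kube-proxy ran in IPVS mode
sudo ipvsadm --clear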

Image Pulling Process in Kubernetes

The process of pulling an image and running it on a worker node involves several components:

  1. The scheduler assigns the pod to a node.
  2. The kubelet on that node asks the container runtime (for example, containerd or CRI-O) to pull the image through the Container Runtime Interface (CRI).
  3. The runtime pulls the image from the registry, honoring the pod's imagePullPolicy and any imagePullSecrets.
  4. Once the image is available locally, the kubelet starts the container.
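A minimal pod spec that controls this behavior might look like the following; the image reference and the regcred secret name are placeholders for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
spec:
  containers:
  - name: app
    image: registry.example.com/team/app:1.0   # placeholder image in a private registry
    imagePullPolicy: IfNotPresent              # pull only if the image is not cached on the node
  imagePullSecrets:
  - name: regcred                              # placeholder Secret holding registry credentials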

NodePort Services and New Worker Nodes

When a new worker node joins a Kubernetes cluster with existing NodePort services:

  1. The new node registers itself with the Kubernetes API server.
  2. It automatically starts listening on the NodePorts assigned to existing services.

This means you can access the service through the new node's IP address and the assigned NodePort.
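For reference, a NodePort service of this kind might be declared as follows; the service name, selector, and port numbers are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web            # placeholder pod label
  ports:
  - port: 80            # cluster-internal service port
    targetPort: 8080    # container port on the backing pods
    nodePort: 30080     # must fall within the node port range (30000-32767 by default)

After the new node joins, curl http://<new-node-ip>:30080 should reach the service just as it does through the existing nodes.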

kube-proxy and Service Discovery

kube-proxy plays a crucial role in service discovery and load balancing: it runs on every node, watches Service and EndpointSlice objects through the API server, and programs iptables or IPVS rules so that traffic sent to a Service's ClusterIP or NodePort is distributed across the backing pods.
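In a kubeadm-provisioned cluster, one quick way to see which mode kube-proxy is using is to inspect its ConfigMap (the ConfigMap name kube-proxy assumes a kubeadm setup):

# Show the configured proxy mode (iptables, ipvs, or empty for the platform default)
kubectl -n kube-system get cm kube-proxy -o yaml | grep mode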

Node Affinity, Taints, and Tolerations

These concepts help control where pods are scheduled: node affinity attracts pods to nodes whose labels match the pod's requirements, while taints repel pods from a node unless the pods carry a matching toleration (a node affinity sketch follows the taint examples below).

Examples of using taints and tolerations:

# Add a taint
kubectl taint nodes node1 key1=value1:NoSchedule

# Remove a taint
kubectl taint nodes node1 key1=value1:NoSchedule-

# Make a pod tolerant to a taint (in pod spec):
tolerations:
- key: "example-key"
  operator: "Exists"
  effect: "NoSchedule"
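For node affinity, a pod spec sketch like the following restricts scheduling to nodes carrying a matching label; the label key disktype and value ssd are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: affinity-demo
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype          # placeholder node label
            operator: In
            values:
            - ssd
  containers:
  - name: app
    image: nginx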

Priority and Preemption

Kubernetes allows you to set pod priorities and control preemption:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting
value: 1000000
preemptionPolicy: Never
globalDefault: false
description: "This priority class will not cause other pods to be preempted."

---

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  priorityClassName: high-priority-nonpreempting
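Because preemptionPolicy is set to Never, this pod is placed ahead of lower-priority pods in the scheduling queue but will not evict running pods to make room. Once applied, the class and the pod's effective priority can be checked with commands like these:

# List defined priority classes and their values
kubectl get priorityclasses

# Confirm the priority value assigned to the pod
kubectl get pod nginx -o jsonpath='{.spec.priority}{"\n"}'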

Autoscaling in Kubernetes

Kubernetes supports several types of autoscaling:

  1. Horizontal Pod Autoscaler (HPA): scales the number of pod replicas based on observed metrics such as CPU utilization.
  2. Vertical Pod Autoscaler (VPA): adjusts the CPU and memory requests of pods to match actual usage.
  3. Cluster Autoscaler: adds or removes worker nodes based on pending pods and node utilization.
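As a quick sketch, an HPA for a hypothetical deployment named web (the name and thresholds are placeholders) can be created imperatively:

# Keep average CPU around 50%, scaling between 2 and 10 replicas
kubectl autoscale deployment web --cpu-percent=50 --min=2 --max=10

# Check the autoscaler's current status and targets
kubectl get hpa web

Note that HPA and VPA need a metrics source such as metrics-server to be installed in the cluster.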

A Few Kubernetes Commands

Here are some useful commands for troubleshooting DNS and networking issues in your Kubernetes cluster:

# Verify kube-dns service IP
kubectl get svc -n kube-system kube-dns

# Check CoreDNS pod IPs
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide

# Inspect kube-dns service
kubectl describe svc kube-dns -n kube-system

# Check CoreDNS configuration
kubectl get configmap coredns -n kube-system -o yaml

# Restart CoreDNS pods
kubectl rollout restart deployment coredns -n kube-system

# Check network policies
kubectl get networkpolicies --all-namespaces

# Verify cluster CIDR and service CIDR
kubectl cluster-info dump | grep -m 1 cluster-cidr
kubectl cluster-info dump | grep -m 1 service-cluster-ip-range

# Check kube-proxy configuration
kubectl -n kube-system get cm kube-proxy -o yaml | grep clusterCIDR

# Verify node network plugin status
kubectl describe node $(hostname) | grep -i network

# Check cni0 interface status
ip link show cni0
sudo ip link set cni0 up

# Verify iptables rules related to CNI
sudo iptables-save | grep -Ei 'cni|flannel'