Node Management, Service Discovery, and More
When you need to remove a worker node from a Kubernetes cluster, follow these steps:
kubectl drain <node-name> --ignore-daemonsets
Draining cordons the node so no new pods are scheduled on it, and safely evicts the pods already running there (DaemonSet-managed pods are skipped because of the flag above).
If you plan to re-add the node to another cluster or remove it entirely, perform these additional steps on the worker node:
sudo kubeadm reset
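Taken together, a sketch of the full removal sequence might look like this (run the first two commands from a machine with cluster-admin access; `<node-name>` is a placeholder, and `--delete-emptydir-data` requires a reasonably recent kubectl):

```shell
# Cordon the node and evict its pods (DaemonSet pods are left in place)
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

# Remove the node object from the cluster
kubectl delete node <node-name>

# On the worker node itself: undo everything kubeadm set up
sudo kubeadm reset
```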
The process of pulling an image and running it on a worker node involves several components: the scheduler assigns the pod to a node, the kubelet on that node requests the image through the Container Runtime Interface (CRI), and the container runtime (for example containerd or CRI-O) pulls the image from the registry and unpacks it locally.
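One way to watch those components at work is to follow a pod's events and then list the images the runtime holds on the node (the pod name is a placeholder, and crictl being installed on the node is an assumption):

```shell
# Watch the kubelet/runtime pull the image (look for "Pulling" and "Pulled" events)
kubectl describe pod <pod-name> | grep -A 5 Events

# On the worker node: list images the container runtime has pulled
sudo crictl images
```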
When a new worker node joins a Kubernetes cluster with existing NodePort services, kube-proxy on the new node programs the same NodePort rules there. This means you can access the service through the new node's IP address and the assigned NodePort.
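A quick way to verify this behaviour is to expose a deployment on a NodePort and reach it through the newly joined node (the deployment name, node IP, and port are placeholders):

```shell
# Expose an existing deployment on a NodePort
kubectl expose deployment nginx --type=NodePort --port=80

# Find the assigned port (in the 30000-32767 range by default)
kubectl get svc nginx

# Reach the service through the newly joined node's IP
curl http://<new-node-ip>:<node-port>
```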
kube-proxy plays a crucial role in service discovery and load balancing: it watches the API server for Service and endpoint changes and programs iptables (or IPVS) rules on every node, so that traffic sent to a Service's ClusterIP or NodePort is distributed across the backing pods.
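You can inspect the rules kube-proxy maintains directly (a sketch assuming the default iptables mode; exact chain names vary by version):

```shell
# Show which proxy mode kube-proxy is running in
kubectl -n kube-system get cm kube-proxy -o yaml | grep mode

# Inspect the NAT rules kube-proxy programs for Services
sudo iptables-save -t nat | grep KUBE-SVC
```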
Taints and tolerations help control pod scheduling: a taint repels pods from a node unless a pod declares a matching toleration.
Examples of using taints and tolerations:
# Add a taint
kubectl taint nodes node1 key1=value1:NoSchedule
# Remove a taint
kubectl taint nodes node1 key1=value1:NoSchedule-
# Make a pod tolerant to a taint (in the pod spec):
tolerations:
- key: "example-key"
  operator: "Exists"
  effect: "NoSchedule"
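For context, here is a minimal sketch of a complete Pod manifest carrying that toleration (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: "example-key"
    operator: "Exists"
    effect: "NoSchedule"
```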
Kubernetes allows you to set pod priorities and control preemption:
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting
value: 1000000
preemptionPolicy: Never
globalDefault: false
description: "This priority class will not cause other pods to be preempted."
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  priorityClassName: high-priority-nonpreempting
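After applying the manifests above, you can confirm the priority was resolved (a sketch; assumes the PriorityClass and the nginx pod exist in the cluster):

```shell
# List defined priority classes and their numeric values
kubectl get priorityclass

# Confirm the pod picked up the numeric priority from its class
kubectl get pod nginx -o jsonpath='{.spec.priority}'
```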
Kubernetes supports various types of autoscaling: the Horizontal Pod Autoscaler (HPA) adjusts the number of replicas based on observed metrics, the Vertical Pod Autoscaler (VPA) adjusts pods' resource requests, and the Cluster Autoscaler adds or removes nodes as scheduling demand changes.
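As a quick illustration of horizontal autoscaling (assumes an existing deployment named nginx with CPU requests set, and metrics-server installed in the cluster):

```shell
# Scale the deployment between 1 and 5 replicas, targeting 50% CPU utilization
kubectl autoscale deployment nginx --cpu-percent=50 --min=1 --max=5

# Watch the HorizontalPodAutoscaler's current and target metrics
kubectl get hpa nginx
```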
Here are some crucial commands for troubleshooting and managing your Kubernetes cluster:
# Verify kube-dns service IP
kubectl get svc -n kube-system kube-dns
# Check CoreDNS pod IPs
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
# Inspect kube-dns service
kubectl describe svc kube-dns -n kube-system
# Check CoreDNS configuration
kubectl get configmap coredns -n kube-system -o yaml
# Restart CoreDNS pods
kubectl rollout restart deployment coredns -n kube-system
# Check network policies
kubectl get networkpolicies --all-namespaces
# Verify cluster CIDR and service CIDR
kubectl cluster-info dump | grep -m 1 cluster-cidr
kubectl cluster-info dump | grep -m 1 service-cluster-ip-range
# Check kube-proxy configuration
kubectl -n kube-system get cm kube-proxy -o yaml | grep clusterCIDR
# Verify node network plugin status (assumes the node name matches the hostname)
kubectl describe node $(hostname) | grep -i network
# Check the cni0 interface status and bring it up if it is down
ip link show cni0
sudo ip link set cni0 up
# Verify iptables rules related to CNI
sudo iptables-save | grep -Ei 'cni|flannel'
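To round out the DNS checks above, you can resolve a cluster name from inside a throwaway pod (the busybox image tag is an assumption; some busybox builds ship a flaky nslookup):

```shell
# Run a one-off pod and resolve the API server's service name
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local
```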