Complete Guide to Kubernetes: Container Orchestration at Scale
Kubernetes, often abbreviated as K8s, has become the de facto standard for container orchestration, enabling organizations to deploy, scale, and manage containerized applications across clusters of machines. Originally developed by Google and now maintained by the Cloud Native Computing Foundation, Kubernetes automates the complex tasks of scheduling containers, maintaining desired state, handling failures, and scaling applications based on demand.
The power of Kubernetes lies in its declarative configuration model. Rather than specifying how to achieve a desired state, you describe what you want, and Kubernetes continuously works to make reality match your specification. This approach simplifies operations, enables self-healing systems, and provides a consistent interface across different infrastructure providers.
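As a quick illustration of this loop (a sketch, assuming a manifest file named deployment.yaml that declares three nginx replicas, as in the Deployments section below):
# Declare the desired state; Kubernetes reconciles the cluster to match it
kubectl apply -f deployment.yaml
# Simulate a failure: delete one pod and watch a replacement appear
kubectl delete pod <some-nginx-pod>
kubectl get pods --watch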
Core Kubernetes Concepts
Understanding Kubernetes requires familiarity with its building blocks. Pods represent the smallest deployable units, containing one or more containers that share network and storage. Deployments manage pod lifecycle, ensuring the correct number of replicas run and enabling rolling updates. Services provide stable network endpoints for accessing pods, abstracting away individual pod addresses.
The Kubernetes control plane orchestrates the cluster, with the API server handling all requests, the scheduler placing pods on nodes, and controllers maintaining desired state. Worker nodes run the actual workloads, with the kubelet agent managing containers and the kube-proxy handling networking.
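Once a cluster is running (see the installation options below), these components are visible directly; on most local distributions the control-plane components themselves run as pods in the kube-system namespace:
# Inspect cluster nodes and control-plane components
kubectl get nodes
kubectl get pods -n kube-system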
Installing Kubernetes
Local Development with Minikube
# Install minikube
# Linux
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
# macOS
brew install minikube
# Windows
winget install Kubernetes.minikube
# Start cluster
minikube start
minikube start --cpus 4 --memory 8192
# Status and dashboard
minikube status
minikube dashboard
# Stop and delete
minikube stop
minikube delete
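Minikube also ships optional addons; for example, the metrics-server addon backs the kubectl top commands used later in this guide:
# Optional addons (metrics-server is needed for `kubectl top`)
minikube addons list
minikube addons enable metrics-server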
Kind (Kubernetes in Docker)
# Install kind
# Linux
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
# macOS via Homebrew
brew install kind
# Create cluster
kind create cluster
kind create cluster --name dev-cluster
# Multi-node cluster
cat <<EOF > kind-multi-node.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
kind create cluster --config kind-multi-node.yaml
kubectl Installation
# Linux
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install kubectl /usr/local/bin/kubectl
# macOS
brew install kubectl
# Windows
winget install Kubernetes.kubectl
# Verify
kubectl version --client
kubectl cluster-info
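When several local clusters exist (for example minikube and kind side by side), kubectl contexts choose which one commands target:
# Switch between clusters via contexts (kind prefixes its contexts with "kind-")
kubectl config get-contexts
kubectl config use-context kind-dev-cluster
kubectl config current-context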
Working with Pods
# Pod manifest (pod.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
# Create pod
kubectl apply -f pod.yaml
kubectl run nginx --image=nginx:alpine
# List pods
kubectl get pods
kubectl get pods -o wide
kubectl get pods --all-namespaces
# Describe pod
kubectl describe pod nginx-pod
# Pod logs
kubectl logs nginx-pod
kubectl logs -f nginx-pod
kubectl logs nginx-pod -c container-name
# Execute in pod
kubectl exec -it nginx-pod -- /bin/sh
# Delete pod
kubectl delete pod nginx-pod
kubectl delete -f pod.yaml
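Instead of writing manifests from scratch, kubectl can generate a starting point without creating anything in the cluster:
# Generate a pod manifest locally (nothing is created)
kubectl run nginx --image=nginx:alpine --dry-run=client -o yaml > pod.yaml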
Deployments and ReplicaSets
# Deployment manifest (deployment.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.24
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
# Create deployment
kubectl apply -f deployment.yaml
kubectl create deployment nginx --image=nginx --replicas=3
# List deployments
kubectl get deployments
kubectl get deploy
# Scale deployment
kubectl scale deployment nginx-deployment --replicas=5
# Update image
kubectl set image deployment/nginx-deployment nginx=nginx:1.25
# Rollout status
kubectl rollout status deployment/nginx-deployment
kubectl rollout history deployment/nginx-deployment
# Rollback
kubectl rollout undo deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment --to-revision=2
# Delete deployment
kubectl delete deployment nginx-deployment
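Rolling update behaviour can be tuned in the Deployment spec; a brief sketch of the strategy fields (the values shown are illustrative):
# In deployment.yaml, under spec:
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1          # at most one extra pod during an update
    maxUnavailable: 0    # never drop below the desired replica count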
Services
# Service manifest (service.yaml)
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
# Service types
# ClusterIP - Internal only (default)
# NodePort - External via node port
# LoadBalancer - Cloud load balancer
# ExternalName - DNS alias
# NodePort service
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
# Create service
kubectl apply -f service.yaml
kubectl expose deployment nginx --port=80 --type=NodePort
# List services
kubectl get services
kubectl get svc
# Describe service
kubectl describe service nginx-service
# Access service
kubectl port-forward svc/nginx-service 8080:80
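On minikube, NodePort and LoadBalancer services can also be opened directly through the minikube CLI:
# Convenience access on a minikube cluster
minikube service nginx-nodeport
minikube service nginx-nodeport --url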
ConfigMaps and Secrets
# ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: "db.example.com"
  LOG_LEVEL: "info"
  config.json: |
    {
      "setting": "value"
    }
# Create ConfigMap
kubectl create configmap app-config --from-literal=KEY=value
kubectl create configmap app-config --from-file=config.json
# Secret
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  username: YWRtaW4=     # base64 encoded
  password: cGFzc3dvcmQ=
# Create Secret
kubectl create secret generic app-secrets --from-literal=password=secret
# Use in deployment
spec:
  containers:
  - name: app
    image: myapp
    env:
    - name: DATABASE_HOST
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: DATABASE_HOST
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: app-secrets
          key: password
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: app-config
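To verify what was created, note that Secret values are only base64-encoded, not encrypted:
# Inspect ConfigMaps and decode a Secret value
kubectl get configmap app-config -o yaml
kubectl get secret app-secrets -o jsonpath='{.data.password}' | base64 -d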
Persistent Storage
# PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard
# Use in pod
spec:
  containers:
  - name: app
    image: myapp
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc
# List PVCs
kubectl get pvc
kubectl describe pvc data-pvc
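A claim stays in Pending until a PersistentVolume is bound to it, either statically or by a dynamic provisioner for the chosen storage class:
# Check volume binding (STATUS should move from Pending to Bound)
kubectl get storageclass
kubectl get pv
kubectl get pvc data-pvc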
Namespaces
# Create namespace
kubectl create namespace development
kubectl create namespace production
# List namespaces
kubectl get namespaces
# Deploy to namespace
kubectl apply -f deployment.yaml -n development
# Set default namespace
kubectl config set-context --current --namespace=development
# Resource quotas
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: development
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
Helm Package Manager
# Install Helm
# Linux/macOS
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# macOS
brew install helm
# Add repositories
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add stable https://charts.helm.sh/stable   # legacy repo (deprecated, archived charts)
helm repo update
# Search charts
helm search repo nginx
# Install chart
helm install my-nginx bitnami/nginx
helm install my-nginx bitnami/nginx --set service.type=NodePort
# List releases
helm list
helm list --all-namespaces
# Upgrade release
helm upgrade my-nginx bitnami/nginx --set replicaCount=3
# Uninstall
helm uninstall my-nginx
# Create chart
helm create mychart
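Before installing or upgrading, it helps to inspect a chart's configurable values and render its templates locally:
# Inspect values and render templates without installing
helm show values bitnami/nginx > values.yaml
helm template my-nginx bitnami/nginx -f values.yaml
# Install-or-upgrade in one idempotent command
helm upgrade --install my-nginx bitnami/nginx -f values.yaml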
Monitoring and Debugging
# Get all resources
kubectl get all
kubectl get all -A
# Resource usage
kubectl top nodes
kubectl top pods
# Events
kubectl get events
kubectl get events --sort-by='.lastTimestamp'
# Debug pod
kubectl describe pod pod-name
kubectl logs pod-name
kubectl logs pod-name --previous
kubectl exec -it pod-name -- /bin/sh
# Port forwarding
kubectl port-forward pod/pod-name 8080:80
kubectl port-forward svc/service-name 8080:80
# Proxy
kubectl proxy
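When a container image has no shell, recent clusters support attaching an ephemeral debug container instead of kubectl exec:
# Attach a temporary debugging container to a running pod
kubectl debug -it pod-name --image=busybox --target=container-name
# Or clone the pod with a debug container added
kubectl debug pod-name -it --image=busybox --copy-to=pod-name-debug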
Production Best Practices
# Resource limits
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"
# Liveness and readiness probes
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
# Pod disruption budget
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: myapp
# Network policies
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
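Scaling on demand, mentioned at the start of this guide, is typically handled by the Horizontal Pod Autoscaler; a minimal example (requires metrics-server):
# Autoscale between 2 and 10 replicas at 50% average CPU utilization
kubectl autoscale deployment nginx-deployment --min=2 --max=10 --cpu-percent=50
kubectl get hpa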
Conclusion
Kubernetes provides the foundation for cloud-native application deployment, offering powerful abstractions for managing containerized workloads at any scale. While the learning curve is substantial, mastering Kubernetes enables building resilient, scalable systems that can run consistently across different cloud providers and on-premises infrastructure. The ecosystem of tools including Helm, monitoring solutions, and service meshes extends these capabilities further.