Whether you’re a DevOps engineer or not, you’ve probably heard about Kubernetes or K8s. In this blog, we’ll learn what it is and how it works. This is just a brief overview - in the next chapter, we’ll dive deeper into K8s.
Remember our Docker journey? We learned to containerize applications with docker run, docker-compose, and Dockerfiles. But what happens when you need to run hundreds or thousands of containers across multiple servers?
Docker Limitations:
# This works great for development
docker run -p 3000:3000 my-app
docker-compose up -d

But in production, you need more: containers spread across multiple machines, automatic restarts when something crashes, load balancing between replicas, zero-downtime updates, and scaling up or down on demand.
Kubernetes to the Rescue: K8s is like having a smart manager for your container fleet. It handles all the complex orchestration so you can focus on your applications.
Think of it this way:
| Docker | Kubernetes |
|---|---|
| Single container | Multiple containers |
| One machine | Multiple machines |
| Manual management | Automated orchestration |
| Basic networking | Advanced networking |
| Simple scaling | Intelligent scaling |
Note: Docker does have Docker Swarm for managing multiple containers across multiple machines, but it's far less powerful than K8s, so very few people use it.
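For the curious, a minimal Swarm workflow looks roughly like this (the service name web is just an example):

# Turn this machine into a Swarm manager
docker swarm init

# Run 3 replicas of an nginx service across the swarm
docker service create --name web --replicas 3 -p 8080:80 nginx:alpine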
Kubernetes = K + ubernete + s. The word ‘ubernete’ has 8 letters => K8s :)))
You control a K8s cluster through the kubectl CLI (behind the scenes, this tool calls the cluster's API server). The best way to learn K8s is to install it on your local machine, and MicroK8s is the simplest way to get Kubernetes running locally. It's perfect for learning and development.
# Install MicroK8s
sudo snap install microk8s --classic

At this point, your machine is a K8s cluster with 1 master node. Note that MicroK8s installs its own kubectl as microk8s kubectl. In the real world, you'd install a standalone kubectl on your machine and configure it to control your MicroK8s cluster (and any other clusters you have; e.g., I'm managing about 10 clusters).
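For example, a minimal way to point a standalone kubectl at MicroK8s (assuming a fresh machine with no existing kubeconfig):

# Install kubectl
sudo snap install kubectl --classic

# Export the MicroK8s kubeconfig (this overwrites ~/.kube/config;
# merge it manually if you already manage other clusters)
sudo microk8s config > ~/.kube/config

# Verify: this should list your single node
kubectl get nodes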
Let’s deploy a simple nginx app using kubectl commands:
# Get nodes in the cluster
sudo microk8s kubectl get nodes
# Create a deployment
sudo microk8s kubectl create deployment nginx-app --image=nginx:alpine
# Scale the deployment
sudo microk8s kubectl scale deployment nginx-app --replicas=3
# Expose the deployment
sudo microk8s kubectl expose deployment nginx-app --port=80 --type=LoadBalancer
# Check what we created
sudo microk8s kubectl get pods
sudo microk8s kubectl get services
sudo microk8s kubectl get deployments
# Get detailed info
sudo microk8s kubectl describe pod <pod-name>
sudo microk8s kubectl describe service nginx-app
# View logs
sudo microk8s kubectl logs <pod-name>
sudo microk8s kubectl logs -f <pod-name> # Follow logs
# Execute commands in pod
sudo microk8s kubectl exec -it <pod-name> -- /bin/sh

You should now see three nginx pods in sudo microk8s kubectl get pods. To keep things organized, create a namespace with sudo microk8s kubectl create namespace <namespace-name> and add --namespace <namespace-name> to any kubectl command to tell kubectl to operate on that namespace. If you don't specify a namespace, kubectl operates on the default namespace (as in the commands above).
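Here's a quick sketch (the namespace name dev and the deployment nginx-demo are just examples):

# Create a namespace and deploy into it
sudo microk8s kubectl create namespace dev
sudo microk8s kubectl create deployment nginx-demo --image=nginx:alpine --namespace dev

# List pods in that namespace (a plain 'get pods' won't show them)
sudo microk8s kubectl get pods --namespace dev

Back to our nginx-app in the default namespace: Deployments also make rolling updates and rollbacks painless.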
# Update the image
sudo microk8s kubectl set image deployment nginx-app nginx=nginx:latest
# Check rollout status
sudo microk8s kubectl rollout status deployment nginx-app
# Rollback if needed
sudo microk8s kubectl rollout undo deployment nginx-app
# Check rollout history
sudo microk8s kubectl rollout history deployment nginx-app

Scaling up or down is a one-liner:

# Scale up
sudo microk8s kubectl scale deployment nginx-app --replicas=5 # Scale to 5 pods
# Scale down
sudo microk8s kubectl scale deployment nginx-app --replicas=2
# Auto-scaling (if metrics-server is enabled)
sudo microk8s kubectl autoscale deployment nginx-app --min=2 --max=10 --cpu-percent=50

(On MicroK8s you can turn on the metrics server with sudo microk8s enable metrics-server.)

So far we've managed everything with imperative commands. In practice, you describe your resources declaratively in a YAML file and let K8s make reality match it:

# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer

# Apply the configuration
sudo microk8s kubectl apply -f nginx-deployment.yaml
# Check status
sudo microk8s kubectl get pods
sudo microk8s kubectl get services
# Update the deployment
sudo microk8s kubectl apply -f nginx-deployment.yaml
# Delete resources
sudo microk8s kubectl delete -f nginx-deployment.yaml

Some kubectl commands you should know:
# Cluster info
sudo microk8s kubectl cluster-info
sudo microk8s kubectl get nodes
# Pods
sudo microk8s kubectl get pods
sudo microk8s kubectl get pods -o wide
sudo microk8s kubectl describe pod <pod-name>
sudo microk8s kubectl logs <pod-name>
# Deployments
sudo microk8s kubectl get deployments
sudo microk8s kubectl describe deployment <deployment-name>
sudo microk8s kubectl rollout status deployment <deployment-name>
# Services
sudo microk8s kubectl get services
sudo microk8s kubectl describe service <service-name>
# Namespaces
sudo microk8s kubectl get namespaces
sudo microk8s kubectl create namespace my-namespace
# Delete resources
sudo microk8s kubectl delete pod <pod-name>
sudo microk8s kubectl delete deployment <deployment-name>
sudo microk8s kubectl delete service <service-name>

This is just the beginning! In the next chapter, we'll dive deeper into K8s.
✅ What we covered:

- Why plain Docker isn't enough in production, and what Kubernetes solves
- Installing a local single-node cluster with MicroK8s
- Deploying, exposing, and scaling an app with kubectl
- Rolling updates and rollbacks
- Declarative deployments with YAML manifests
- Essential kubectl commands
Kubernetes might seem complex at first, but it's just a container orchestrator that makes your life easier. Start with the basics, practice locally with MicroK8s, and gradually explore advanced features. The best way to learn K8s is by doing: deploy applications, break things, and fix them! Don't know how to fix something? Don't forget that these days you have AI assistants to help you.