Canary Deployments in Kubernetes: Hands-on Guide
A practical guide to implementing canary deployments in Kubernetes using kind or Docker Desktop
TL;DR: Canary deployments let you roll out a new version of your application to a small subset of users first, minimizing risk and enabling quick rollbacks. This guide walks you through implementing canary deployments in Kubernetes using either kind or Docker Desktop.
What is a Canary Deployment?#
A canary deployment is a deployment strategy where a new version of an application is gradually rolled out to a small percentage of users before being released to the entire user base. This approach helps in:
- Reducing risk by testing new versions with a limited audience
- Enabling quick rollbacks if issues are detected
- Gathering real-world performance metrics before full deployment
- Validating new features with actual users
Prerequisites#
Before we begin, ensure you have:
- Docker Desktop with Kubernetes enabled OR kind installed
- kubectl command-line tool
- Basic understanding of Kubernetes concepts
Setting Up the Environment#
Option 1: Using kind#
```bash
# Install kind if not already installed
brew install kind

# Create a new cluster
kind create cluster --name canary-demo

# Verify cluster creation
kubectl cluster-info --context kind-canary-demo
```
Option 2: Using Docker Desktop#
- Open Docker Desktop
- Go to Settings > Kubernetes
- Enable Kubernetes
- Click “Apply & Restart”
Creating a Sample Application#
Let’s create a simple web application with two versions that display their version information:
```bash
# Create a namespace for our demo
kubectl create namespace canary-demo

# Create deployment for version 1
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-v1
  namespace: canary-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
      version: v1
  template:
    metadata:
      labels:
        app: webapp
        version: v1
    spec:
      containers:
        - name: webapp
          image: nginx:1.19
          ports:
            - containerPort: 80
          volumeMounts:
            - name: html
              mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          configMap:
            name: webapp-v1-html
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-v1-html
  namespace: canary-demo
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
      <title>WebApp v1</title>
      <style>
        body { font-family: Arial, sans-serif; text-align: center; padding: 50px; }
        .version { color: #666; font-size: 0.8em; }
      </style>
    </head>
    <body>
      <h1>Welcome to WebApp</h1>
      <p class="version">Version: v1</p>
    </body>
    </html>
EOF

# Create a service to expose the application
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: webapp
  namespace: canary-demo
spec:
  selector:
    app: webapp   # no version label: matches pods from both deployments
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP
EOF
```
Implementing the Canary Deployment#
Now, let’s deploy version 2 of our application as a canary:
```bash
# Create the canary deployment: 1 replica next to the 3 stable replicas,
# so the shared Service sends it roughly 25% of traffic. Finer-grained
# weighting (e.g. 10%) comes later via the Ingress controller.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-v2
  namespace: canary-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
      version: v2
  template:
    metadata:
      labels:
        app: webapp
        version: v2
    spec:
      containers:
        - name: webapp
          image: nginx:1.20
          ports:
            - containerPort: 80
          volumeMounts:
            - name: html
              mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          configMap:
            name: webapp-v2-html
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-v2-html
  namespace: canary-demo
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
      <title>WebApp v2</title>
      <style>
        body { font-family: Arial, sans-serif; text-align: center; padding: 50px; }
        .version { color: #666; font-size: 0.8em; }
      </style>
    </head>
    <body>
      <h1>Welcome to WebApp</h1>
      <p class="version">Version: v2</p>
    </body>
    </html>
EOF
```
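Because a single Service selects both versions, the split simply follows the replica ratio. A back-of-the-envelope helper makes the arithmetic explicit (plain bash, no cluster needed; `canary_share` is a hypothetical name, not part of any tool):

```bash
# Approximate canary traffic share when one Service selects both
# versions: the split follows the replica ratio (integer percent).
canary_share() {
  local stable=$1 canary=$2
  echo $(( canary * 100 / (stable + canary) ))
}

canary_share 3 1   # 3 stable replicas, 1 canary replica -> prints 25
```

To approach a 10% share with replica counts alone you would need 9 stable replicas per canary replica, which is why Ingress- or mesh-based weighting is used for finer control.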
Monitoring the Canary Deployment#
To monitor the deployment and verify the traffic split, we can use this shell script (run it from a machine that can reach the Service, e.g. via `kubectl port-forward` or the Ingress configured below):
```bash
#!/bin/bash
# monitor-canary.sh
# Usage: ./monitor-canary.sh <number-of-requests>

REQUESTS=${1:-100}
SERVICE_URL="http://webapp.local"  # Update this with your actual service URL

echo "Monitoring canary deployment with $REQUESTS requests..."
echo "----------------------------------------"

v1_count=0
v2_count=0

for i in $(seq 1 "$REQUESTS"); do
  response=$(curl -s "$SERVICE_URL")
  if [[ $response == *"Version: v1"* ]]; then
    v1_count=$((v1_count + 1))
  elif [[ $response == *"Version: v2"* ]]; then
    v2_count=$((v2_count + 1))
  fi
  echo -n "."
  sleep 0.1
done

echo -e "\n----------------------------------------"
echo "Results:"
echo "v1 responses: $v1_count"
echo "v2 responses: $v2_count"
echo "v1 percentage: $((v1_count * 100 / REQUESTS))%"
echo "v2 percentage: $((v2_count * 100 / REQUESTS))%"
```
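On top of the raw counts, a small gate can decide whether the observed split is close enough to the intended weight to keep rolling out. This is a hypothetical helper sketched for illustration, not part of the script above:

```bash
# Hypothetical gate: is the observed canary percentage within
# `tol` percentage points of the target weight?
within_tolerance() {
  local observed=$1 target=$2 tol=$3
  local diff=$(( observed - target ))
  [ "${diff#-}" -le "$tol" ]   # ${diff#-} strips a leading minus sign
}

if within_tolerance 12 10 5; then
  echo "canary on track: continue rollout"
else
  echo "split drifted: investigate before increasing weight"
fi
```

In practice you would feed it the percentages printed by `monitor-canary.sh`, alongside error-rate and latency checks.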
Traffic Splitting with Ingress#
For a precise split that doesn't depend on replica counts, we can use an Ingress controller with canary support:
```bash
# Install NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml

# NGINX canary routing needs two Ingress objects for the same host:
# a primary one for the stable backend, and a canary-annotated one
# pointing at a Service that selects only the canary pods.
# First, create per-version Services:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: webapp-v1
  namespace: canary-demo
spec:
  selector:
    app: webapp
    version: v1
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-v2
  namespace: canary-demo
spec:
  selector:
    app: webapp
    version: v2
  ports:
    - port: 80
      targetPort: 80
EOF

# Then the primary Ingress plus the canary Ingress with a 10% weight
# (the canary Ingress keeps the name "webapp" so the patch commands
# later in this guide target it directly)
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-primary
  namespace: canary-demo
spec:
  ingressClassName: nginx
  rules:
    - host: webapp.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp-v1
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  namespace: canary-demo
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
    - host: webapp.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp-v2
                port:
                  number: 80
EOF
```
Alternative: Using Istio for Canary Deployments#
While NGINX Ingress provides basic traffic splitting capabilities, Istio offers more sophisticated features for canary deployments:
- Fine-grained traffic control: Istio allows you to route traffic based on various criteria like HTTP headers, cookies, or source IP.
- Advanced monitoring: Built-in metrics and tracing capabilities.
- Automatic retries and circuit breaking: Better handling of failures.
- A/B testing support: More sophisticated testing capabilities.
To implement canary deployments with Istio:
```bash
# Install Istio
istioctl install --set profile=demo -y

# Label the namespace for Istio injection, then restart the deployments
# so the existing pods are recreated with the sidecar injected
kubectl label namespace canary-demo istio-injection=enabled
kubectl rollout restart deployment -n canary-demo

# Create VirtualService and DestinationRule for traffic splitting
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: webapp
  namespace: canary-demo
spec:
  hosts:
    - webapp   # the in-mesh Service name
  http:
    - route:
        - destination:
            host: webapp
            subset: v1
          weight: 90
        - destination:
            host: webapp
            subset: v2
          weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: webapp
  namespace: canary-demo
spec:
  host: webapp
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
EOF
```
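The route weights in the VirtualService are written to sum to 100, with each destination getting its stated share. A tiny sanity check for that convention (a hypothetical bash helper, useful in a CI lint step):

```bash
# Sanity check: VirtualService route weights should sum to 100.
check_weights() {
  local total=0 w
  for w in "$@"; do total=$(( total + w )); done
  [ "$total" -eq 100 ]
}

check_weights 90 10 && echo "weights OK"
```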
Gradual Rollout Strategy#
To gradually increase the canary traffic:
- Start with 10% traffic
- Monitor metrics and logs
- If successful, increase to 25%
- Continue monitoring
- If still successful, increase to 50%
- Finally, roll out to 100%
```bash
# Update canary weight to 25%
kubectl patch ingress webapp -n canary-demo --type=merge \
  -p '{"metadata":{"annotations":{"nginx.ingress.kubernetes.io/canary-weight":"25"}}}'
```
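The staged schedule above (10 → 25 → 50 → 100) can be encoded in a small helper so the patch command never has to be edited by hand; `next_weight` is a hypothetical name for illustration:

```bash
# Next step in the 10 -> 25 -> 50 -> 100 rollout schedule.
next_weight() {
  case $1 in
    10) echo 25 ;;
    25) echo 50 ;;
    50) echo 100 ;;
    *)  echo 10 ;;   # anything else: (re)start at 10%
  esac
}

w=$(next_weight 25)
echo "next canary weight: $w"
# then apply it, e.g.:
# kubectl patch ingress webapp -n canary-demo --type=merge \
#   -p "{\"metadata\":{\"annotations\":{\"nginx.ingress.kubernetes.io/canary-weight\":\"$w\"}}}"
```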
Rollback Strategy#
If issues are detected:
```bash
# Scale down canary deployment
kubectl scale deployment webapp-v2 -n canary-demo --replicas=0

# Set canary weight to 0
kubectl patch ingress webapp -n canary-demo --type=merge \
  -p '{"metadata":{"annotations":{"nginx.ingress.kubernetes.io/canary-weight":"0"}}}'
```
Best Practices#
- Monitoring: Implement comprehensive monitoring before starting canary deployments
- Metrics: Define clear success metrics for the canary
- Automation: Automate the rollout process where possible
- Testing: Ensure proper testing before canary deployment
- Documentation: Maintain clear documentation of the process
Cleanup#
```bash
# Delete the namespace and all resources
kubectl delete namespace canary-demo

# If using kind
kind delete cluster --name canary-demo
```
Conclusion#
Canary deployments provide a safe way to roll out new versions of your applications. By following this guide, you can implement canary deployments in your Kubernetes clusters and reduce the risk associated with deployments.
Remember to:
- Start with a small percentage of traffic
- Monitor closely
- Have a clear rollback strategy
- Document the process
- Automate where possible
Happy deploying!