Canary Deployments in Kubernetes: Hands-on Guide

TL;DR

Canary deployments allow you to gradually roll out new versions of your application to a subset of users, minimizing risk and enabling quick rollbacks. This guide will walk you through implementing canary deployments in Kubernetes using either kind or Docker Desktop.

What is a Canary Deployment?

A canary deployment is a deployment strategy where a new version of an application is gradually rolled out to a small percentage of users before being released to the entire user base. This approach helps in:

  • Reducing risk by testing new versions with a limited audience
  • Enabling quick rollbacks if issues are detected
  • Gathering real-world performance metrics before full deployment
  • Validating new features with actual users

Prerequisites

Before we begin, ensure you have:

  1. Docker Desktop with Kubernetes enabled OR kind installed
  2. kubectl command-line tool
  3. Basic understanding of Kubernetes concepts
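
Before diving in, you can sanity-check the toolchain with a small helper script (a sketch; drop `kind` from the list if you are using Docker Desktop):

```bash
# check-prereqs.sh -- report which of the required CLI tools are on PATH
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1 found"
  else
    echo "missing: $1 -- install it before continuing"
    return 1
  fi
}

# kind is only needed for Option 1 below
for tool in docker kubectl kind; do
  check_tool "$tool" || true
done
```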

Setting Up the Environment

Option 1: Using kind

```bash
# Create a kind cluster with ingress support
cat <<EOF | kind create cluster --name canary-demo --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF

# Install NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

# Wait for ingress controller to be ready
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s
```
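
Option 2: Using Docker Desktop

If you prefer Docker Desktop, enable Kubernetes under Settings → Kubernetes and wait for the cluster to come up. Then install the NGINX Ingress Controller; the manifest path below follows the same ingress-nginx repository layout as the kind manifest above (verify it against the project's current docs):

```bash
# Install NGINX Ingress Controller (the generic/cloud manifest works for Docker Desktop)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml

# Wait for ingress controller to be ready
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s
```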

Creating a Sample Application

Let’s create a simple web application with two versions that display their version information:

```bash
# Create a namespace for our demo
kubectl create namespace canary-demo

# Create deployment for version 1
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-v1
  namespace: canary-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
      version: v1
  template:
    metadata:
      labels:
        app: webapp
        version: v1
    spec:
      containers:
      - name: webapp
        image: nginx:1.19
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        configMap:
          name: webapp-v1-html
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-v1-html
  namespace: canary-demo
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
        <title>WebApp v1</title>
        <style>
            body { font-family: Arial, sans-serif; text-align: center; padding: 50px; }
            .version { color: #666; font-size: 0.8em; }
        </style>
    </head>
    <body>
        <h1>Welcome to WebApp</h1>
        <p class="version">Version: v1</p>
    </body>
    </html>
EOF

# Create a service to expose the application
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: webapp
  namespace: canary-demo
spec:
  selector:
    app: webapp
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
EOF
```

Implementing the Canary Deployment

Now, let’s deploy version 2 of our application as a canary:

```bash
# Create the canary deployment for version 2
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-v2
  namespace: canary-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
      version: v2
  template:
    metadata:
      labels:
        app: webapp
        version: v2
    spec:
      containers:
      - name: webapp
        image: nginx:1.20
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        configMap:
          name: webapp-v2-html
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-v2-html
  namespace: canary-demo
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
        <title>WebApp v2</title>
        <style>
            body { font-family: Arial, sans-serif; text-align: center; padding: 50px; }
            .version { color: #666; font-size: 0.8em; }
        </style>
    </head>
    <body>
        <h1>Welcome to WebApp</h1>
        <p class="version">Version: v2</p>
    </body>
    </html>
EOF
```

Traffic Splitting with Ingress

So far, traffic through the shared `webapp` Service is split only by replica count (3 v1 pods to 1 v2 pod, roughly 75/25). For precise, adjustable control over the split, we can use the NGINX Ingress Controller's canary annotations:
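
Before creating the Ingress objects, it helps to see what a canary weight means: each request is independently routed to the canary with probability weight/100, so the split is statistical, not exact. A quick local simulation (no cluster needed; `route_requests` is a helper invented for this sketch):

```bash
# Simulate weight-based canary routing with awk's rand(): each of n requests
# goes to the canary when a uniform draw in [0,100) falls below the weight.
route_requests() {
  # usage: route_requests <weight-percent> <total-requests>
  awk -v w="$1" -v n="$2" 'BEGIN {
    srand()
    canary = 0
    for (i = 0; i < n; i++) if (rand() * 100 < w) canary++
    print canary
  }'
}

hits=$(route_requests 20 1000)
echo "canary received $hits of 1000 requests (~$((hits * 100 / 1000))%)"
```

Run it a few times: the canary share hovers around 20% but varies, which is why the monitoring script later in this guide counts responses over many requests.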

Creating Ingress Resources for Canary Deployment

For canary deployments with NGINX, we need two Ingress resources, each with its own backend Service: the controller splits traffic between the primary and canary Ingress by weight, so the canary Ingress must point at a Service that selects only the v2 pods (and the primary at one that selects only v1).

```bash
# Create version-specific services plus primary and canary ingress resources
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: webapp-v1
  namespace: canary-demo
spec:
  selector:
    app: webapp
    version: v1
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-v2
  namespace: canary-demo
spec:
  selector:
    app: webapp
    version: v2
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress-primary
  namespace: canary-demo
spec:
  ingressClassName: nginx
  rules:
  - host: webapp.localhost
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-v1
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress-canary
  namespace: canary-demo
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"
spec:
  ingressClassName: nginx
  rules:
  - host: webapp.localhost
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-v2
            port:
              number: 80
EOF
```

Accessing the Application Locally

Add the following entry to your /etc/hosts file:

```
127.0.0.1 webapp.localhost
```

Now you can access your application at http://webapp.localhost in your browser.

Monitoring the Canary Deployment

To monitor the deployment and verify traffic splitting, we can use this shell script:

```bash
#!/bin/bash

# monitor-canary.sh
# Usage: ./monitor-canary.sh <number-of-requests>

REQUESTS=${1:-10}
SERVICE_URL="http://webapp.localhost"  # Update this with your actual service URL

echo "Monitoring canary deployment with $REQUESTS requests..."
echo "----------------------------------------"

v1_count=0
v2_count=0

for i in $(seq 1 "$REQUESTS"); do
    response=$(curl -s "$SERVICE_URL")
    if [[ $response == *"Version: v1"* ]]; then
        v1_count=$((v1_count + 1))
    elif [[ $response == *"Version: v2"* ]]; then
        v2_count=$((v2_count + 1))
    fi
    echo -n "."
    sleep 0.1
done

echo -e "\n----------------------------------------"
echo "Results:"
echo "v1 responses: $v1_count"
echo "v2 responses: $v2_count"
echo "v1 percentage: $((v1_count * 100 / REQUESTS))%"
echo "v2 percentage: $((v2_count * 100 / REQUESTS))%"
```

Gradual Rollout Strategy

To gradually increase the canary traffic:

  1. Start with a small share of traffic (we used a canary weight of 20%)
  2. Monitor metrics and logs
  3. If successful, increase to 25%
  4. Continue monitoring
  5. If still successful, increase to 50%
  6. Finally, roll out to 100%

```bash
# Update canary weight to 25%
kubectl patch ingress webapp-ingress-canary -n canary-demo --type=merge \
  -p '{"metadata":{"annotations":{"nginx.ingress.kubernetes.io/canary-weight":"25"}}}'
```
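
The manual patch above can be scripted. Here is a sketch of automating the weight progression; the `run` helper and `DRY_RUN` flag are introduced here (not part of the guide's manifests), and the default dry-run mode only prints the commands so you can review the plan first:

```bash
#!/usr/bin/env bash
# rollout.sh -- walk the canary weight through 10 -> 25 -> 50 -> 100.
# DRY_RUN=1 (the default) prints each command instead of executing it.

DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

rollout() {
  for weight in 10 25 50 100; do
    run kubectl patch ingress webapp-ingress-canary -n canary-demo --type=merge \
      -p "{\"metadata\":{\"annotations\":{\"nginx.ingress.kubernetes.io/canary-weight\":\"$weight\"}}}"
    # In a real rollout you would check metrics and logs here before continuing
    run sleep 300
  done
}

rollout
```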

Rollback Strategy

If issues are detected:

```bash
# Set canary weight to 0 so no traffic reaches the canary
kubectl patch ingress webapp-ingress-canary -n canary-demo --type=merge \
  -p '{"metadata":{"annotations":{"nginx.ingress.kubernetes.io/canary-weight":"0"}}}'

# Then scale down the canary deployment
kubectl scale deployment webapp-v2 -n canary-demo --replicas=0
```

Best Practices

  1. Monitoring: Implement comprehensive monitoring before starting canary deployments
  2. Metrics: Define clear success metrics for the canary
  3. Automation: Automate the rollout process where possible
  4. Testing: Ensure proper testing before canary deployment
  5. Documentation: Maintain clear documentation of the process
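
The "Metrics" practice can be made concrete with a small gate: compare the canary's observed error rate against a threshold before promoting it. A sketch with illustrative numbers; `should_promote` is a hypothetical helper and its inputs would come from your monitoring system:

```bash
# canary-gate.sh -- promote only if the canary error rate stays at or below
# the allowed percentage; otherwise signal a rollback.
should_promote() {
  local errors=$1 requests=$2 max_error_pct=$3
  # integer comparison: errors/requests <= max_error_pct/100
  if [ $((errors * 100)) -le $((requests * max_error_pct)) ]; then
    echo "promote"
  else
    echo "rollback"
  fi
}

# Example: 2 errors in 100 canary requests with a 5% error budget
should_promote 2 100 5   # prints "promote"
```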

Cleanup

```bash
# Delete the namespace and all resources
kubectl delete namespace canary-demo

# If using kind
kind delete cluster --name canary-demo
```

Conclusion

Canary deployments provide a safe way to roll out new versions of your applications. By following this guide, you can implement canary deployments in your Kubernetes clusters and reduce the risk associated with deployments.

Remember to:

  • Start with a small percentage of traffic
  • Monitor closely
  • Have a clear rollback strategy
  • Document the process
  • Automate where possible

Happy deploying!

Canary Deployments in Kubernetes: Hands-on Guide
https://sanjaybalaji.dev/blog/kubernetes-canary-deployments
Author: Sanjay Balaji
Published: May 17, 2025