
TL;DR: Canary deployments let you roll out a new version of your application gradually to a subset of users, minimizing risk and enabling quick rollbacks. This guide walks you through implementing canary deployments in Kubernetes using either kind or Docker Desktop.

What is a Canary Deployment?#

A canary deployment is a deployment strategy where a new version of an application is gradually rolled out to a small percentage of users before being released to the entire user base. This approach helps in:

  • Reducing risk by testing new versions with a limited audience
  • Enabling quick rollbacks if issues are detected
  • Gathering real-world performance metrics before full deployment
  • Validating new features with actual users
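To build intuition for what a weighted split means, here is a toy shell simulation (a standalone illustration, not part of the cluster setup below) that deterministically routes every tenth request to the canary, i.e. a fixed 10% weight:

```shell
#!/bin/bash
# simulate-split.sh - toy model of a 10% canary split (hypothetical helper,
# not a real router): request i goes to the canary when i is a multiple of 10.
TOTAL=100
stable=0
canary=0
for i in $(seq 1 "$TOTAL"); do
    if [ $((i % 10)) -eq 0 ]; then
        canary=$((canary + 1))   # every 10th request -> canary
    else
        stable=$((stable + 1))   # the rest -> stable version
    fi
done
echo "stable=$stable canary=$canary"   # → stable=90 canary=10
```

A real load balancer picks probabilistically rather than by counter, but the steady-state proportions are the same.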

Prerequisites#

Before we begin, ensure you have:

  1. Docker Desktop with Kubernetes enabled OR kind installed
  2. kubectl command-line tool
  3. Basic understanding of Kubernetes concepts
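A quick way to check for the required tools from a terminal (a small convenience script; the name `check-prereqs.sh` is a choice made here, not something the guide requires):

```shell
#!/bin/bash
# check-prereqs.sh - report which required tools are on the PATH
summary=""
for tool in docker kubectl kind; do
    if command -v "$tool" >/dev/null 2>&1; then
        status="found"
    else
        status="missing"
    fi
    echo "$tool: $status"
    summary="$summary $tool=$status"
done
```

Only one of Docker Desktop's Kubernetes or kind is needed, so a "kind: missing" line is fine if you plan to use Docker Desktop.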

Setting Up the Environment#

Option 1: Using kind#

# Install kind if not already installed
brew install kind

# Create a new cluster
kind create cluster --name canary-demo

# Verify cluster creation
kubectl cluster-info --context kind-canary-demo

Option 2: Using Docker Desktop#

  1. Open Docker Desktop
  2. Go to Settings > Kubernetes
  3. Enable Kubernetes
  4. Click “Apply & Restart”

Creating a Sample Application#

Let's start with version 1 of a simple web application that displays its version information; version 2 will be added later as the canary:

# Create a namespace for our demo
kubectl create namespace canary-demo

# Create deployment for version 1
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-v1
  namespace: canary-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
      version: v1
  template:
    metadata:
      labels:
        app: webapp
        version: v1
    spec:
      containers:
      - name: webapp
        image: nginx:1.19
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        configMap:
          name: webapp-v1-html
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-v1-html
  namespace: canary-demo
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
        <title>WebApp v1</title>
        <style>
            body { font-family: Arial, sans-serif; text-align: center; padding: 50px; }
            .version { color: #666; font-size: 0.8em; }
        </style>
    </head>
    <body>
        <h1>Welcome to WebApp</h1>
        <p class="version">Version: v1</p>
    </body>
    </html>
EOF

# Create a service to expose the application
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: webapp
  namespace: canary-demo
spec:
  selector:
    app: webapp
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
EOF

Implementing the Canary Deployment#

Now, let's deploy version 2 of our application as a canary. Note that with a plain Service selecting both versions, traffic is split roughly by pod count, so one v2 replica alongside three v1 replicas receives about 25% of requests, not 10%:

# Create canary deployment (1 replica out of 4 total ≈ 25% of traffic)
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-v2
  namespace: canary-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
      version: v2
  template:
    metadata:
      labels:
        app: webapp
        version: v2
    spec:
      containers:
      - name: webapp
        image: nginx:1.20
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        configMap:
          name: webapp-v2-html
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-v2-html
  namespace: canary-demo
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
        <title>WebApp v2</title>
        <style>
            body { font-family: Arial, sans-serif; text-align: center; padding: 50px; }
            .version { color: #666; font-size: 0.8em; }
        </style>
    </head>
    <body>
        <h1>Welcome to WebApp</h1>
        <p class="version">Version: v2</p>
    </body>
    </html>
EOF
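
With a plain Service selecting both versions, the canary's traffic share is set by the replica ratio rather than an explicit weight. A quick calculation of the expected share for the deployments above (3 stable replicas, 1 canary replica):

```shell
#!/bin/bash
# Expected canary share when a Service load-balances evenly across all
# matching pods: canary_pods / total_pods.
STABLE_REPLICAS=3
CANARY_REPLICAS=1
share=$((100 * CANARY_REPLICAS / (STABLE_REPLICAS + CANARY_REPLICAS)))
echo "Expected canary traffic: ${share}%"   # → Expected canary traffic: 25%
```

Approximating a 10% split with replicas alone would take, e.g., 9 stable pods and 1 canary pod; the Ingress-based weighting later in this guide gives finer control without over-provisioning.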

Monitoring the Canary Deployment#

To monitor the deployment and verify traffic splitting, we can use this shell script:

#!/bin/bash

# monitor-canary.sh
# Usage: ./monitor-canary.sh <number-of-requests>

REQUESTS=${1:-100}
# Update with your actual service URL. With the Ingress from this guide,
# add "127.0.0.1 webapp.local" to /etc/hosts; without an Ingress,
# port-forward instead:
#   kubectl port-forward -n canary-demo svc/webapp 8080:80
SERVICE_URL="http://webapp.local"

echo "Monitoring canary deployment with $REQUESTS requests..."
echo "----------------------------------------"

v1_count=0
v2_count=0

for i in $(seq 1 "$REQUESTS"); do
    response=$(curl -s "$SERVICE_URL")
    if [[ $response == *"Version: v1"* ]]; then
        v1_count=$((v1_count + 1))
    elif [[ $response == *"Version: v2"* ]]; then
        v2_count=$((v2_count + 1))
    fi
    echo -n "."
    sleep 0.1
done

echo -e "\n----------------------------------------"
echo "Results:"
echo "v1 responses: $v1_count"
echo "v2 responses: $v2_count"
echo "v1 percentage: $((v1_count * 100 / REQUESTS))%"
echo "v2 percentage: $((v2_count * 100 / REQUESTS))%"

Traffic Splitting with Ingress#

For more precise traffic splitting, we can use the NGINX Ingress Controller's canary annotations. NGINX evaluates these on a second, canary Ingress that coexists with a primary Ingress for the same host, and the canary Ingress should route to a service that selects only the v2 pods (the shared webapp service still matches both versions, so stable traffic remains a pod-count mix unless you also give it a version selector):

# Install NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml

# Create a v2-only service, the primary Ingress, and the canary Ingress
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: webapp-v2
  namespace: canary-demo
spec:
  selector:
    app: webapp
    version: v2
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-main
  namespace: canary-demo
spec:
  ingressClassName: nginx
  rules:
  - host: webapp.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  namespace: canary-demo
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
  - host: webapp.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-v2
            port:
              number: 80
EOF

Alternative: Using Istio for Canary Deployments#

While NGINX Ingress provides basic traffic splitting capabilities, Istio offers more sophisticated features for canary deployments:

  1. Fine-grained traffic control: Istio allows you to route traffic based on various criteria like HTTP headers, cookies, or source IP.
  2. Advanced monitoring: Built-in metrics and tracing capabilities.
  3. Automatic retries and circuit breaking: Better handling of failures.
  4. A/B testing support: More sophisticated testing capabilities.
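
As a sketch of the first capability, a VirtualService can match on an HTTP header and send those requests straight to the canary, regardless of weight (the `x-canary` header name is a hypothetical choice; the `webapp` host and `v1`/`v2` subsets correspond to the DestinationRule defined later in this section):

```shell
# Route requests carrying "x-canary: true" to v2; everyone else gets v1
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: webapp-header-routing
  namespace: canary-demo
spec:
  hosts:
  - webapp
  http:
  - match:
    - headers:
        x-canary:
          exact: "true"
    route:
    - destination:
        host: webapp
        subset: v2
  - route:
    - destination:
        host: webapp
        subset: v1
EOF
```

This pattern lets internal testers opt in to the canary by setting a header, while regular users stay on the stable version.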

To implement canary deployments with Istio:

# Install Istio
istioctl install --set profile=demo -y

# Label namespace for Istio sidecar injection, then restart existing
# deployments so their pods pick up the sidecar
kubectl label namespace canary-demo istio-injection=enabled
kubectl rollout restart deployment -n canary-demo

# Create VirtualService for traffic splitting
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: webapp
  namespace: canary-demo
spec:
  hosts:
  - webapp   # in-cluster service name; external traffic to webapp.local would also need a Gateway
  http:
  - route:
    - destination:
        host: webapp
        subset: v1
      weight: 90
    - destination:
        host: webapp
        subset: v2
      weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: webapp
  namespace: canary-demo
spec:
  host: webapp
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF

Gradual Rollout Strategy#

To gradually increase the canary traffic:

  1. Start with 10% traffic
  2. Monitor metrics and logs
  3. If successful, increase to 25%
  4. Continue monitoring
  5. If still successful, increase to 50%
  6. Finally, roll out to 100%

For example, to raise the canary weight to 25% via the NGINX canary Ingress:

# Update canary weight to 25%
kubectl patch ingress webapp -n canary-demo --type=merge -p '{"metadata":{"annotations":{"nginx.ingress.kubernetes.io/canary-weight":"25"}}}'
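
The whole progression can be scripted. This sketch (file name `rollout.sh` is assumed here) steps the canary weight through the stages above, pausing between them for monitoring; setting `KUBECTL=echo` dry-runs it without a cluster:

```shell
#!/bin/bash
# rollout.sh - step the NGINX canary weight through increasing stages.
# KUBECTL=echo makes the rollout a dry run that only prints the commands.
KUBECTL="${KUBECTL:-kubectl}"

rollout() {
    local pause="${1:-300}"   # seconds to monitor at each stage
    local weight
    for weight in 10 25 50 100; do
        echo "Setting canary weight to ${weight}%"
        "$KUBECTL" patch ingress webapp -n canary-demo --type=merge \
            -p "{\"metadata\":{\"annotations\":{\"nginx.ingress.kubernetes.io/canary-weight\":\"${weight}\"}}}"
        if [ "$weight" -lt 100 ]; then
            sleep "$pause"
        fi
    done
}
```

With a live cluster, source the file and run `rollout`; `KUBECTL=echo rollout 0` prints the kubectl commands without executing them. A production version would check metrics between stages instead of sleeping blindly.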

Rollback Strategy#

If issues are detected:

# Scale down canary deployment
kubectl scale deployment webapp-v2 -n canary-demo --replicas=0

# Set canary weight to 0
kubectl patch ingress webapp -n canary-demo --type=merge -p '{"metadata":{"annotations":{"nginx.ingress.kubernetes.io/canary-weight":"0"}}}'

Best Practices#

  1. Monitoring: Implement comprehensive monitoring before starting canary deployments
  2. Metrics: Define clear success metrics for the canary
  3. Automation: Automate the rollout process where possible
  4. Testing: Ensure proper testing before canary deployment
  5. Documentation: Maintain clear documentation of the process
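
As a minimal illustration of points 2 and 3, a promote-or-rollback decision can be reduced to comparing an observed error rate against a threshold. This is a hypothetical helper; a real setup would pull the request counts from a metrics system such as Prometheus:

```shell
#!/bin/bash
# canary-verdict.sh - decide promote vs rollback from request counts.
# Usage: verdict <total-requests> <failed-requests> [max-error-percent]
verdict() {
    local total="$1" failed="$2" threshold="${3:-1}"
    # integer percentage, rounded down
    local error_pct=$((failed * 100 / total))
    if [ "$error_pct" -le "$threshold" ]; then
        echo "promote (error rate ${error_pct}% <= ${threshold}%)"
    else
        echo "rollback (error rate ${error_pct}% > ${threshold}%)"
    fi
}

verdict 1000 5 1    # 5/1000 rounds down to 0% -> promote
verdict 1000 50 1   # 50/1000 is 5% -> rollback
```

Tools like Flagger or Argo Rollouts automate exactly this loop: evaluate metrics, adjust the weight, and roll back automatically on failure.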

Cleanup#

# Delete the namespace and all resources
kubectl delete namespace canary-demo

# Remove the NGINX Ingress Controller and Istio if you installed them
kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
istioctl uninstall --purge -y

# If using kind, delete the entire cluster
kind delete cluster --name canary-demo

Conclusion#

Canary deployments provide a safe way to roll out new versions of your applications. By following this guide, you can implement canary deployments in your Kubernetes clusters and reduce the risk associated with deployments.

Remember to:

  • Start with a small percentage of traffic
  • Monitor closely
  • Have a clear rollback strategy
  • Document the process
  • Automate where possible

Happy deploying!

Canary Deployments in Kubernetes: Hands-on Guide
https://sanjaybalaji.dev/blog/kubernetes-canary-deployments
Author: Sanjay Balaji
Published: April 27, 2024