# Observability Power Trio: Prometheus, Grafana & Loki
A fun, hands-on guide to setting up a powerful monitoring and logging stack with Docker.
Tired of flying blind? Wondering what your applications are really doing? Need to find that needle-in-a-haystack log entry from last Tuesday? Fear not, weary developer! Meet the dream team of open-source observability: Prometheus, Grafana, and Loki (or the PLG stack, as the cool kids call it).
- Prometheus: The metric maestro. It scrapes data about your systems and applications (CPU, memory, request counts, custom metrics, you name it!) and stores it efficiently. Think of it as the tireless accountant of your system’s performance.
- Grafana: The visualization virtuoso. It takes data from Prometheus (and Loki, and many others!) and turns it into beautiful, insightful dashboards. It’s the artist that makes sense of the numbers.
- Loki: The log lord. Inspired by Prometheus, it indexes metadata about your logs (like labels) rather than the full text, making it super efficient and cost-effective for log aggregation. It’s the librarian who knows exactly where every log entry is filed.
Together, they provide a unified view of your system’s metrics and logs, making troubleshooting and monitoring a breeze (and maybe even… fun?). In this guide, we’ll set up this powerful trio using Docker Compose for a quick and easy local environment. Let’s get our monitoring party started!
## Prerequisites
You’ll need Docker and Docker Compose installed on your machine. If you don’t have them, head over to the official Docker website and grab them.
## Setting Up the Stage (Directory and Configs)
First, create a directory for our project; let's call it `plg-stack`:

```bash
mkdir plg-stack
cd plg-stack
```

Now, let's create the configuration files needed for our stars.
### 1. Prometheus Configuration (`prometheus.yml`)

Create a file named `prometheus.yml`. This tells Prometheus what to scrape. We'll configure it to scrape itself initially, and later you can add your own applications.
```yaml
# prometheus.yml
global:
  scrape_interval: 15s     # How frequently to scrape targets
  evaluation_interval: 15s # How frequently to evaluate rules

scrape_configs:
  - job_name: 'prometheus' # Scrape Prometheus itself
    static_configs:
      - targets: ['localhost:9090'] # Default Prometheus port

  - job_name: 'loki' # Scrape Loki
    static_configs:
      - targets: ['loki:3100']

  - job_name: 'promtail' # Scrape Promtail (our log shipper)
    static_configs:
      - targets: ['promtail:9080']

  # Add your application scraping jobs here!
  # Example:
  # - job_name: 'my-app'
  #   static_configs:
  #     - targets: ['host.docker.internal:8081'] # If your app runs locally outside Docker
```
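The commented example targets an app running on your host machine via `host.docker.internal`. If your application instead runs as another service in the same Compose file (and joins the `monitor-net` network defined later), you can target it by service name, just like the `loki` and `promtail` jobs above. A minimal sketch, where the `my-app` service name, port `8081`, and the default `/metrics` path are assumptions for illustration:

```yaml
# Hypothetical extra job appended under scrape_configs: in prometheus.yml
  - job_name: 'my-app'
    static_configs:
      - targets: ['my-app:8081'] # Resolved by Docker's DNS on the monitor-net network
```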
### 2. Loki Configuration (`loki-config.yml`)

Create `loki-config.yml`. This is a basic configuration telling Loki where to store data.
```yaml
# loki-config.yml
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

common:
  instance_addr: 127.0.0.1
  path_prefix: /tmp/loki # Storage path inside the container
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://localhost:9093 # Optional: for alerting based on logs

# Limits can be adjusted for production
# limits_config:
#   enforce_metric_name: false
#   reject_old_samples: true
#   reject_old_samples_max_age: 168h
```
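The `ruler` block is what enables that optional log-based alerting. As a rough, hedged sketch of what such a rule could look like: rule files live in per-tenant subdirectories of the configured rules directory, and with `auth_enabled: false` the tenant ID is `fake`, so a path like `/tmp/loki/rules/fake/rules.yml` is a reasonable assumption; the exact wiring can vary by Loki version.

```yaml
# Hypothetical Loki alerting rule, e.g. /tmp/loki/rules/fake/rules.yml (path assumed)
groups:
  - name: container-log-alerts
    rules:
      - alert: HighErrorLogRate
        # LogQL: rate of log lines containing "error" for the containerlogs job
        expr: sum(rate({job="containerlogs"} |= "error" [5m])) > 10
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Container logs are producing more than 10 error lines per second"
```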
### 3. Promtail Configuration (`promtail-config.yml`)

Promtail is the agent that tails your log files and ships them to Loki. Create `promtail-config.yml`. This example scrapes Docker container logs and adds useful labels.
```yaml
# promtail-config.yml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml # Stores the last read position

clients:
  - url: http://loki:3100/loki/api/v1/push # Loki's push endpoint

scrape_configs:
  - job_name: containers
    static_configs:
      - targets:
          - localhost
        labels:
          job: containerlogs # A job label for these logs
          __path__: /var/lib/docker/containers/*/*.log # Docker container logs (mounted in docker-compose.yml below)
    pipeline_stages:
      - docker: {} # Parses the Docker JSON log format

  # Example for scraping local system logs (requires volume mapping)
  # - job_name: system
  #   static_configs:
  #     - targets:
  #         - localhost
  #       labels:
  #         job: varlogs
  #         __path__: /var/log/*log # Path to system logs
```
Note: The `scrape_configs` in Promtail are for discovering log files; they are not the same as Prometheus scrape configs.
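If you want richer labels than a file path (for example, the container name), Promtail also supports Docker service discovery. Here is a hedged sketch of an alternative job, assuming you additionally mount the Docker socket (`/var/run/docker.sock`) into the Promtail container, which the compose file below does not do by default:

```yaml
# Hypothetical alternative job using Docker service discovery (requires the Docker socket)
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'                 # Docker reports names as "/name"; strip the leading slash
        target_label: 'container_name'
```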
## Docker Compose Magic (`docker-compose.yml`)

Now, the pièce de résistance! Create `docker-compose.yml` to orchestrate our services.
```yaml
# docker-compose.yml
version: '3.8'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus # Persistent storage for metrics
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
      - '--web.enable-lifecycle' # Allows reloading config via API
    ports:
      - '9090:9090'
    networks:
      - monitor-net

  loki:
    image: grafana/loki:latest
    container_name: loki
    volumes:
      - ./loki-config.yml:/etc/loki/local-config.yaml
      - loki_data:/tmp/loki # Persistent storage for logs (matches path_prefix in loki-config.yml)
    command: -config.file=/etc/loki/local-config.yaml
    ports:
      - '3100:3100' # HTTP port
    networks:
      - monitor-net

  promtail:
    image: grafana/promtail:latest
    container_name: promtail
    volumes:
      - ./promtail-config.yml:/etc/promtail/config.yml
      - /var/log:/var/log:ro # Mount host logs read-only (adjust as needed)
      - /var/lib/docker/containers:/var/lib/docker/containers:ro # Mount Docker container logs read-only
      - /tmp/positions:/tmp # To store read positions
    command: -config.file=/etc/promtail/config.yml
    depends_on:
      - loki
    ports:
      - '9080:9080' # Metrics endpoint
    networks:
      - monitor-net

  grafana:
    image: grafana/grafana-oss:latest
    container_name: grafana
    volumes:
      - grafana_data:/var/lib/grafana # Persistent storage for dashboards, etc.
      # You can pre-configure datasources and dashboards here too
      # - ./grafana/provisioning/datasources:/etc/grafana/provisioning/datasources
      # - ./grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=yourstrongpassword # Change this!
      - GF_USERS_ALLOW_SIGN_UP=false
      # Uncomment to enable anonymous access
      # - GF_AUTH_ANONYMOUS_ENABLED=true
      # - GF_AUTH_ANONYMOUS_ORG_ROLE=Viewer
    depends_on:
      - prometheus
      - loki
    ports:
      - '3000:3000'
    networks:
      - monitor-net

volumes:
  prometheus_data: {}
  loki_data: {}
  grafana_data: {}

networks:
  monitor-net:
    driver: bridge
```
Important Security Note:

- Change the default Grafana admin password (`yourstrongpassword`)!
- Be careful about the volumes you mount into Promtail, especially `/var/log` and `/var/lib/docker/containers`. Ensure Promtail only has read access if possible (`:ro`).
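As the commented-out volume mounts above hint, Grafana can also provision data sources from files instead of you adding them by hand later. A small sketch, assuming you uncomment the provisioning mount and create `grafana/provisioning/datasources/datasources.yml` (the file name itself is arbitrary):

```yaml
# grafana/provisioning/datasources/datasources.yml (hypothetical file, loaded at Grafana startup)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090 # Service name on the Docker network
    isDefault: true
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
```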
## Showtime! Running the Stack

You’ve set the stage, written the script, and cast the actors. Time to raise the curtain!

From your `plg-stack` directory, run:

```bash
docker-compose up -d
```

Docker Compose will pull the images and start the containers in detached mode (`-d`). You can check the status with `docker-compose ps`.
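Tip: because we passed `--web.enable-lifecycle` to Prometheus, you can edit `prometheus.yml` on your host and reload it without restarting the container, e.g. with `curl -X POST http://localhost:9090/-/reload`.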
## Exploring Your New Kingdom (Grafana)

1. Access Grafana: Open your browser and navigate to `http://localhost:3000`.
2. Login: Use the username `admin` and the password you set in `docker-compose.yml` (e.g., `yourstrongpassword`). You’ll be prompted to change it on first login.
3. Add Data Sources:
   - Click the gear icon (Configuration) on the left sidebar, then “Data Sources”.
   - Click “Add data source”.
   - Select “Prometheus”.
     - URL: `http://prometheus:9090` (use the service name from `docker-compose.yml`, since Grafana talks to Prometheus over the Docker network).
     - Click “Save & Test”. You should see a green checkmark!
   - Click “Add data source” again.
   - Select “Loki”.
     - URL: `http://loki:3100`
     - Click “Save & Test”. Success again!
4. Explore Metrics (Prometheus):
   - Click the compass icon (Explore) on the left sidebar.
   - Make sure “Prometheus” is selected in the top-left dropdown.
   - In the “Metrics browser” field, start typing `prom` – you should see metrics like `prometheus_http_requests_total`. Select one.
   - Click “Run query”. Voilà! Metrics from Prometheus itself.
5. Explore Logs (Loki):
   - In the Explore view, switch the top-left dropdown to “Loki”.
   - Click the “Log browser” button.
   - Under “1. Select labels”, choose `job` and then `containerlogs`.
   - Click “Show logs”. You should see logs from your running Docker containers (including Prometheus, Grafana, etc.)! You can filter further using the other labels Promtail attached (like `filename`).
6. Import a Dashboard (Optional):
   - Grafana has a vast library of pre-built dashboards.
   - Click the “+” icon (Create) on the left sidebar, then “Import”.
   - You can paste a Grafana.com dashboard ID (e.g., `1860` for Node Exporter Full, if you were monitoring a host) or upload a JSON file. Find dashboards relevant to Prometheus, Loki, or your specific applications.
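Once you’re comfortable with the Log browser, you can also type LogQL queries directly in the Explore view. For example, `{job="containerlogs"} |= "error"` selects the same log stream but keeps only lines containing the word “error”.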
## Conclusion: Your Observability Journey Begins!
You’ve done it! You’ve successfully set up a powerful, local Prometheus, Grafana, and Loki stack using Docker Compose. You can now scrape metrics from your applications (configure Prometheus), ship logs (configure Promtail), and visualize everything beautifully in Grafana.
This setup is fantastic for local development and learning. For production, you’d want to consider persistent storage options more carefully, configure security (TLS, authentication), potentially use clustering for Loki/Prometheus, and set up alerting.
But for now, pat yourself on the back! You’ve taken a massive step towards understanding your systems better. Happy monitoring!
Questions? Got your own PLG stack tips? Share them below!