Deploying `nginx-proxy` in Docker Swarm: A Practical Guide
Ever found yourself wrestling with Nginx configurations for your Dockerized web services? If you're running a Docker Swarm, `nginx-proxy` is about to become your new best friend. This awesome tool takes the headache out of reverse proxy setup by automatically configuring Nginx for your containers. In this guide, we'll walk through how to deploy `nginx-proxy` in your Docker Swarm cluster, break down a sample setup, and highlight some key things you need to know.
Why `nginx-proxy` is a Game-Changer for Docker Swarm
So, why bother with `nginx-proxy` when you're already managing a Swarm? Simple: it makes exposing your applications to the outside world incredibly easy. For a Swarm setup, it brings a lot to the table:

- Automagic Configuration: Forget writing Nginx config files by hand. `nginx-proxy` keeps an eye on your Docker events and updates Nginx on the fly as your containers start or stop. It's like magic, but it's just good engineering!
- Effortless SSL: While not directly in our `docker-compose` example, `nginx-proxy` plays super nicely with `docker-letsencrypt-nginx-proxy-companion`. This combo handles all your SSL certificate needs automatically, so you can get HTTPS without breaking a sweat.
- Smart Routing: You can easily tell `nginx-proxy` where to send traffic by just setting a `VIRTUAL_HOST` environment variable on your application containers. It's that straightforward (see the quick sketch right after this list).
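To make that last point concrete, here's a rough sketch of what it looks like for a one-off Swarm service. The image `my-web-app:latest` and the hostname `demo.example.com` are placeholders, and `frontend` is the network we'll create in the stack file later in this guide:

```bash
# Attach a service to the proxy network and tell nginx-proxy which hostname it serves
docker service create \
  --name demo \
  --network frontend \
  --env VIRTUAL_HOST=demo.example.com \
  my-web-app:latest
```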
What You'll Need
Before we dive into the fun stuff, make sure you have:
- A Docker Swarm cluster up and running.
- A compose file written in version 3.x (or newer) of the compose file format. We'll use it to define our services, and Swarm's `docker stack deploy` reads it directly, so you don't need the standalone `docker-compose` tool.
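Not sure whether your Swarm is actually initialized? A quick sanity check from a manager node:

```bash
# Should print "active" if this node is part of a Swarm
docker info --format '{{ .Swarm.LocalNodeState }}'

# Lists every node in the cluster (works on manager nodes)
docker node ls
```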
Your `docker-compose.yml` Blueprint for Swarm
Here’s a `docker-compose.yml` file that sets up `nginx-proxy` as a global service across your Docker Swarm. Think of this as the foundation for your dynamic reverse proxy:
```yaml
version: "3.9"

volumes:
  ssl:
  dhparam:
  challenges:

services:
  proxy:
    image: mesudip/nginx-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ssl:/etc/ssl
      - dhparam:/etc/nginx/dhparam/
      - challenges:/tmp/acme-challenges/
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host
      - target: 443
        published: 443
        protocol: tcp
        mode: host
    networks:
      - frontend
    deploy:
      mode: global
      placement:
        constraints:
          - node.labels.nginx == true

networks:
  frontend:
    name: frontend
    attachable: true
```
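With that file saved, deploying it is a single command on a manager node. The stack name `proxy` here is just an example:

```bash
# Deploy (or update) the proxy stack
docker stack deploy --compose-file docker-compose.yml proxy
```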
Detailed Explanation of the Configuration
Let's break down each section of this `docker-compose.yml` file:
- `volumes`: These are like persistent storage areas for `nginx-proxy`. They're super important for keeping your data safe.
  - `ssl`: This is where all your precious SSL/TLS certificates will live, along with the ACME account key.
  - `dhparam`: Stores Diffie-Hellman parameters. Sounds fancy, right? It's just a way to boost your HTTPS security.
  - `challenges`: When you get SSL certificates, services like Let's Encrypt need to verify you own the domain. This volume handles those verification challenges. If you run multiple Nginx containers per host, you must share this volume so that whichever instance receives a challenge request can answer it.
- `services.proxy`: This is where we define our `nginx-proxy` service itself.
  - `volumes`:
    - `/var/run/docker.sock:/var/run/docker.sock:ro`: This is the secret sauce! It gives `nginx-proxy` read-only access to your Docker daemon, so it can "watch" what Docker is doing (like when containers start or stop) and peek at their settings (like environment variables). This is how it magically configures Nginx.
    - `ssl:/etc/ssl`: Mounts our `ssl` volume into the container, so Nginx can find and use your certificates.
    - `dhparam:/etc/nginx/dhparam/`: Same idea, but for those Diffie-Hellman parameters.
    - `challenges:/tmp/acme-challenges/`: And for the ACME challenges.
  - `ports`: We're opening up the standard web ports: 80 for regular HTTP and 443 for secure HTTPS.
    - `mode: host`: This is a crucial detail for Swarm. Instead of letting Swarm's routing mesh handle things, this binds the container's ports directly to the host machine's network. Why? Because `nginx-proxy` needs to be the first point of contact for web traffic. If you don't set the mode to host, Nginx will see the IP of `docker-proxy` instead of the actual client.
  - `networks: - frontend`: We're connecting our proxy service to a network we've named `frontend`. This allows it to chat with other services that are also on this network.
  - `deploy`: This section is all about how Docker Swarm should deploy and manage our `nginx-proxy`.
    - `mode: global`: This is super handy! It tells Swarm to run one instance of `nginx-proxy` on every single eligible node in your cluster. This means high availability and that traffic can be handled locally on any node.
    - `placement.constraints: - node.labels.nginx == true`: This gives us precise control. We're telling Swarm: "Hey, only put `nginx-proxy` on nodes that have the label `nginx=true`." This is great for dedicating specific nodes as your proxy entry points.
To tag a Swarm node with this label, just run this command:
```bash
docker node update --label-add nginx=true <node-id>
```
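If you want to confirm the label stuck, `docker node inspect` can show it (replace `<node-id>` with your node's ID or hostname):

```bash
# Print the labels set on a node; expect something like map[nginx:true]
docker node inspect <node-id> --format '{{ .Spec.Labels }}'
```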
- `networks.frontend`: This defines our custom overlay network.
  - `name: frontend`: Giving it a clear name makes it easy to find and use.
  - `attachable: true`: This is a really neat trick for Swarm! It means that other containers or services, even if they're defined in completely separate `docker-compose.yml` files or deployed individually, can still connect to this `frontend` network.

How to use `attachable` in another stack:

Let's say you have a separate application stack (`my-app-stack.yml`) that needs to be exposed via `nginx-proxy`. You can simply declare the `frontend` network as `external` in your application's `docker-compose.yml`:

```yaml
# my-app-stack.yml
services:
  web:
    image: my-web-app:latest
    environment:
      - VIRTUAL_HOST=https://myapp.example.com
    networks:
      - frontend # Connect to the existing frontend network

networks:
  frontend:
    external: true # This network already exists and is attachable
```
By setting `external: true`, Docker Swarm knows not to create a new `frontend` network but to connect to the one already created by your `nginx-proxy` stack. This allows `nginx-proxy` to discover and proxy your `my-web-app` service.
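Because the network is attachable, even a standalone container started with plain `docker run` (outside any stack) can join it, and `nginx-proxy` can discover it the same way, as long as it lands on a node running the proxy (more on that in the next section). A quick sketch, with the image and hostname as placeholders:

```bash
# A one-off container attached to the shared overlay network
docker run -d \
  --network frontend \
  --env VIRTUAL_HOST=test.example.com \
  nginx:alpine
```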
The Big Catch: Per-Host Nginx Configuration
Alright, here's something super important to wrap your head around when using `nginx-proxy` (and its SSL companion) in Docker Swarm: `nginx-proxy` builds its Nginx configuration files on a per-host basis. What does that mean? It means the Nginx config on a particular Swarm node only knows about, and includes routes for, backend services that are actually running on that very same node.
The Implication: If you have a web application (let's call it `my-app`) that's only running on Node C, but your `nginx-proxy` instances are on Node A and Node B, then `nginx-proxy` on A and B will have no idea `my-app` even exists. So, traffic won't get routed to it.
Your Action Item: To make sure `nginx-proxy` can find and route to your services, you need to deploy your services with strict placement constraints that match where your `nginx-proxy` instances are running. If `nginx-proxy` is set to run on nodes with `node.labels.nginx == true`, then any service you want it to proxy must also be deployed to a node with that same label.
Let's look at an example to make this crystal clear:
- You've got `nginx-proxy` running happily on Node A and Node B (because they both have `nginx=true`).
- Now, you deploy `my-app` with `VIRTUAL_HOST=myapp.com`, but you tell Swarm to run it only on Node C (which, crucially, does not have `nginx=true`).
What happens? `nginx-proxy` on Node A and Node B will never "see" `my-app`, and `myapp.com` won't work. To fix this, `my-app` needs to be deployed to a node where `nginx-proxy` is also running (like Node A or Node B).
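Here's what that fix looks like in compose terms. This is a minimal sketch, assuming `my-app` lives in its own stack file and reuses the `frontend` network from earlier:

```yaml
# my-app-stack.yml - pin the app to the same nodes that run nginx-proxy
services:
  web:
    image: my-web-app:latest
    environment:
      - VIRTUAL_HOST=myapp.com
    networks:
      - frontend
    deploy:
      placement:
        constraints:
          - node.labels.nginx == true # same label the proxy is constrained to

networks:
  frontend:
    external: true
```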
Wrapping It Up
Deploying `nginx-proxy` in Docker Swarm is a fantastic way to automate your reverse proxy and SSL setup for containerized applications. By understanding how to use `global` deployment mode, node labels for smart placement, and `attachable` networks for flexible service connections, you can build a really robust and scalable proxy layer. Just keep that "per-host configuration" limitation in mind, and plan your service deployments accordingly. Get this right, and you'll be smoothly routing traffic in your Swarm in no time!