I’ve wanted to put Jenkins behind an NGINX reverse proxy for a long time. I hit walls here and there, but I finally got things sorted out, at least to the point where I can reach the Jenkins master through the NGINX reverse proxy in my browser, with both running in Docker containers.
There is a lot to cover, so this blog post is going to be quite long.
First of all, I need to make clear what I want to accomplish.
I want to have multiple instances of Jenkins hosted on a single Docker host. Depending on the hostname of the incoming HTTP request, the NGINX reverse proxy routes the traffic to the appropriate Jenkins instance.
The diagram below represents the logical architecture.

Here is the list of things that will be covered in this blog entry.
- Docker Network
- Jenkins on Docker
- NGINX on Docker as Reverse Proxy
- Putting NGINX and Jenkins Together
Docker Network
As you can see, there are 3 containers in the diagram above. The NGINX reverse proxy routes traffic to the Jenkins instances, which means NGINX needs to be able to communicate with them. The containers in the diagram exist within a Docker network.
By default, Docker containers use the bridge network, meaning that Docker manages its own network (this is configurable). If you execute ip addr
in your terminal, you’ll see network interfaces like these.
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:67:fe:f4:d5 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
103: br-1418967f1f10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:63:60:8f:d1 brd ff:ff:ff:ff:ff:ff
    inet 172.25.0.1/16 brd 172.25.255.255 scope global br-1418967f1f10
       valid_lft forever preferred_lft forever
    inet6 fe80::42:63ff:fe60:8fd1/64 scope link
       valid_lft forever preferred_lft forever
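Those interfaces correspond to Docker networks: docker0 is the default bridge, and br-1418967f1f10 is a user-defined bridge whose interface name carries the first characters of its network ID. You can cross-reference them with the network list:
# List Docker networks; for user-defined bridges, the NETWORK ID column
# matches the br-<id> interface name seen in ip addr.
docker network ls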
Each Docker container you spin up gets its own IP address within the Docker network. I provisioned a Jenkins container, and I can check its IP address with the steps below.
- Check the container ID: docker ps -a
- Run the following command to get the IP address (a cleaner one-liner follows this list): docker inspect [container ID] | grep IPAddress
- I got 172.25.0.2 as the IP address of the container. This means it is within the CIDR 172.25.0.1/16 of the br-1418967f1f10 bridge network above. Based on that CIDR, the network can host roughly 65,000 containers (a /16 provides 65,536 addresses, a few of which are reserved), which is way more than enough for normal operations.
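If you prefer to skip the grep, docker inspect also accepts a Go-template format string; the one-liner below is just one way to print the address directly:
# Print only the container's IP address on each network it is attached to.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' [container ID]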
Why is this important for this article? Because I am planning to provision NGINX as a Docker container, and I want the NGINX container to communicate with the Jenkins container within the Docker network without using the host network. That keeps the traffic isolated from the host and takes network security up a level. It is important to understand the networking aspect of containerization.
Jenkins on Docker
First of all, I need to provision a Jenkins container. Here is the initial Docker Compose file (YAML) I created.
version: '3.8'
services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    user: '1000'
    volumes:
      - ./jenkins_home:/var/jenkins_home
    ports:
      - '8080:8080'
      - '2000:2000'
    restart: unless-stopped
    environment:
      - JENKINS_SLAVE_AGENT_PORT=2000
      - JENKINS_JAVA_OPTIONS=-Djava.awt.headless=true
      - JENKINS_LOG=/var/jenkins_home/logs/jenkins.log
Please note that you should run mkdir jenkins_home (create the directory) beforehand so you don’t get an error: as you can see in docker-compose.yaml, /var/jenkins_home in the container is mapped to the host’s ./jenkins_home directory, where all the data is persisted.
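A minimal sketch of that preparation step, assuming the container runs as UID 1000 (per user: '1000' in the Compose file); the chown is only needed if your host user has a different UID and the directory would otherwise not be writable by the container:
# Create the bind-mounted data directory before the first start.
mkdir -p jenkins_home
# The container runs as UID 1000; if your host user has a different UID,
# hand the directory over so Jenkins can write to it.
sudo chown -R 1000:1000 jenkins_home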
If you save this file as docker-compose.yaml, run docker-compose up -d, and then open http://foobar:8080/ in your browser, you should see the initial setup UI. If you want to stop the container, you can just run docker-compose down.
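Before touching the browser, a couple of standard Docker Compose commands are enough to sanity-check the container (just a quick verification sketch):
# Confirm the container is up and see which ports are published.
docker-compose ps
# Follow the Jenkins log; the initial admin password is printed here on first startup.
docker-compose logs -f jenkins
# Optionally confirm that the web UI answers on the published port.
curl -I http://localhost:8080/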
The Jenkins container would be operational enough as it is. However, it’s limited in the sense that if you want another instance of Jenkins on the same machine, you would have to use another port, like http://foobar:8081 or http://foobar:8082. That means opening more ports as you add Jenkins instances to the same Docker host. Also, it’s kind of ugly. Putting an NGINX reverse proxy in front of the Jenkins instances on the Docker host solves the problem.
Moreover, NGINX can be the front man that handles SSL. It’s best practice nowadays to protect even internal web applications with encrypted communication over SSL. You may want URLs that look like the following:
https://jenkins1.foobar and https://jenkins2.foobar
This way the URLs are easier to remember and the traffic is protected by SSL.
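For hostname-based routing like this to work, both names have to resolve to the Docker host. If you don’t have internal DNS, one simple option for a lab (an assumption on my part, with a placeholder IP) is to add entries to /etc/hosts on the client machine:
# /etc/hosts on the client machine; replace 192.168.1.50 with the
# Docker host's actual IP address.
192.168.1.50    jenkins1.foobar
192.168.1.50    jenkins2.foobar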
NGINX on Docker as Reverse Proxy
Here is the docker-compose.yaml file that I’ve got for NGINX.
version: '3.8'
services:
  reverse:
    image: nginx:latest
    volumes:
      - ./data/nginx.conf:/etc/nginx/nginx.conf
      - ./data/conf.d:/etc/nginx/conf.d
    ports:
      - "80:80"
      - "443:443"
    networks:
      proxynet:
    restart: always
networks:
  proxynet:
    name: custom_network
For the NGINX container, I am mapping /etc/nginx/nginx.conf to the local ./data/nginx.conf file and the /etc/nginx/conf.d directory to the local ./data/conf.d directory. Make sure that you have the ./data and ./data/conf.d directories before you run docker-compose up -d.
I am also defining the proxynet network (named custom_network) in docker-compose.yaml because I want a Docker network where NGINX and Jenkins can communicate with each other.
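Once this stack is up, you can check that the network was created and see which containers are attached to it; nothing fancy, just the standard inspection command:
# Show the subnet of custom_network and the containers attached to it.
docker network inspect custom_network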
Putting NGINX and Jenkins Containers Together
Here is the docker-compose.yaml file that spins up NGINX and Jenkins within the same subnet within Docker. Notice that I commented out the - '8080:8080' line. That means I stopped exposing port 8080 of the Jenkins container on the host side because it is not necessary anymore; NGINX and Jenkins will communicate with each other within the Docker network subnet.
version: '3.8'
services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    user: '1000'
    volumes:
      - ./jenkins_home:/var/jenkins_home
    ports:
      #- '8080:8080'
      - '2000:2000'
    networks:
      proxynet:
    restart: unless-stopped
    environment:
      - JENKINS_SLAVE_AGENT_PORT=2000
      - JENKINS_JAVA_OPTIONS=-Djava.awt.headless=true
      - JENKINS_LOG=/var/jenkins_home/logs/jenkins.log
    #command: --enable-future-java
  reverse:
    image: nginx:latest
    volumes:
      - ./data/nginx.conf:/etc/nginx/nginx.conf
      - ./data/conf.d:/etc/nginx/conf.d
    ports:
      - "80:80"
      - "443:443"
    networks:
      proxynet:
    restart: always
networks:
  proxynet:
    name: custom_network
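Before adding any NGINX configuration, you can confirm that the jenkins service name resolves inside custom_network. One way (the curlimages/curl image is just a convenient choice of mine, not something the setup requires) is a throwaway container on the same network:
# Resolve and hit the Jenkins container by its service name from inside
# custom_network; no host ports are involved.
docker run --rm --network custom_network curlimages/curl -sI http://jenkins:8080/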
Also make sure you have the following nginx.conf file under the ./data directory.
user nginx;
worker_processes 2;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
    accept_mutex off;
}

http {
    include /etc/nginx/mime.types;

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    client_max_body_size 300m;
    client_body_buffer_size 128k;

    gzip on;
    gzip_http_version 1.0;
    gzip_comp_level 6;
    gzip_min_length 0;
    gzip_buffers 16 8k;
    gzip_proxied any;
    gzip_types text/plain text/css text/xml text/javascript application/xml application/xml+rss application/javascript application/json;
    gzip_disable "MSIE [1-6]\.";
    gzip_vary on;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;

        sendfile on;
        #tcp_nopush on;
        keepalive_timeout 65;
        client_max_body_size 300m;
        client_body_buffer_size 128k;

        gzip on;
        gzip_http_version 1.0;
        gzip_comp_level 6;
        gzip_min_length 0;
        gzip_buffers 16 8k;
        gzip_proxied any;
        gzip_types text/plain text/css text/xml text/javascript application/xml application/xml+rss application/javascript application/json;
        gzip_disable "MSIE [1-6]\.";
        gzip_vary on;
    }

    include /etc/nginx/conf.d/*.conf;
}
You also need to have the following ssl.conf file under the ./data/conf.d directory. Notice that this file does 2 important things. One is to enable SSL communication between NGINX and the client side; if you wonder how to create the SSL certificate, I have written about just that in my previous blog post. The second is to route the HTTPS traffic to the server jenkins:8080. This communication is done within the Docker network subnet, so no traffic goes out to the host’s network. You can also see that the SSL certificate files are under the conf.d/ssl directory.
upstream jenkins_upstream {
    server jenkins:8080;
}

server {
    server_name jenkins.linux-mint.local;
    listen 443 ssl;

    ssl_certificate     /etc/nginx/conf.d/ssl/jenkins.linux-mint.local.crt;
    ssl_certificate_key /etc/nginx/conf.d/ssl/jenkins.linux-mint.local.key;

    location / {
        proxy_set_header Host              $host:$server_port;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        resolver 127.0.0.11;
        #proxy_redirect http:// https://;
        proxy_pass http://jenkins_upstream;

        # Required for new HTTP-based CLI
        proxy_http_version 1.1;
        proxy_request_buffering off;
        proxy_buffering off; # Required for HTTP-based CLI to work over SSL

        # workaround for https://issues.jenkins-ci.org/browse/JENKINS-45651
        add_header 'X-SSH-Endpoint' 'jenkins.linux-mint.local:50022' always;
    }
}
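The certificate paths above assume a ./data/conf.d/ssl directory on the host. The previous blog post mentioned earlier covers certificate creation properly; for a quick lab setup, a self-signed certificate generated with openssl is enough (the subject and validity period below are arbitrary choices for illustration):
# Generate a throwaway self-signed certificate for the lab hostname.
mkdir -p data/conf.d/ssl
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -subj "/CN=jenkins.linux-mint.local" \
  -keyout data/conf.d/ssl/jenkins.linux-mint.local.key \
  -out data/conf.d/ssl/jenkins.linux-mint.local.crt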
Once it’s all done, just execute docker-compose up -d in the directory where you have docker-compose.yaml. If it’s successful and you access the URL https://jenkins.linux-mint.local, you should see the initial screen where you can start to manage the Jenkins instance.

If you execute cat jenkins_home/secrets/initialAdminPassword, you will see the initial password to enter on that screen. Then you just have to click the Continue button, and you are on your way to setting up Jenkins.
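If the page doesn’t come up, a few quick checks (assuming the service names from the Compose file above) usually narrow the problem down:
# Validate the NGINX configuration inside the running container.
docker-compose exec reverse nginx -t
# Watch the proxy and Jenkins logs for errors.
docker-compose logs -f reverse jenkins
# Hit the HTTPS endpoint, ignoring the self-signed certificate warning.
curl -kI https://jenkins.linux-mint.local/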

Recap
I have covered the steps to spin up Jenkins behind NGINX as a reverse proxy. Based on my experience, this is the most flexible and safest way to provision Jenkins. There are additional steps to provision agents (slaves) for it; I will cover that topic some other time.