
Setting Up an NGINX Reverse Proxy with a Node.js Cluster Using Docker

Published at
1/10/2025
Categories
docker
devops
nginx
containers
Author
yash_patil16

In this blog, I’ll walk you through a project where I set up an NGINX server as a reverse proxy to handle requests for a Node.js application cluster. The setup uses Docker to containerize both the NGINX server and the Node.js applications, enabling seamless scaling and management. By the end of this, you'll understand why NGINX is an essential tool for modern web development and how to configure it for such use cases.

What is NGINX?

NGINX (pronounced "engine-x") is a high-performance web server and reverse proxy server. It is widely used for its speed, reliability, and ability to handle concurrent connections. Here are some of its key functionalities:

  • Web Server: NGINX can serve static files like HTML, CSS, and JavaScript with exceptional speed.

    The Apache web server also provides this functionality, but NGINX is often favored for its high performance, low resource consumption, and ability to handle a large number of concurrent connections.

  • Reverse Proxy: It forwards incoming client requests to backend (upstream) servers and returns the responses to the client. This improves scalability and security: the end user never talks to the backend servers directly; NGINX acts as an intermediary and handles that communication.

  • Load Balancing: NGINX can distribute incoming traffic across multiple backend (upstream) servers using algorithms such as round-robin (the default) or least connections; see the short snippet after this list for how the algorithm is switched.


  • Kubernetes Ingress Controller: NGINX is often used as an ingress controller in Kubernetes clusters. In this role, NGINX receives requests from a cloud load balancer and routes them to services inside the cluster. This ensures that the cluster remains secure and only the load balancer is exposed to the public.
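
Coming back to the load-balancing point above: switching the algorithm is a one-line change in the upstream block. This is a minimal sketch with placeholder backend names, not part of this project's configuration:

upstream backend {
    least_conn;           # use least-connections instead of the default round-robin
    server app1:3000;
    server app2:3000;
    server app3:3000;
}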

Bonus: NGINX in Kubernetes

When used as a Kubernetes ingress controller, NGINX takes on a similar role but within a cluster. Here’s how it works:

  1. A cloud load balancer forwards requests to the NGINX ingress controller.

  2. The ingress controller routes the requests to the appropriate Kubernetes service.

  3. The service forwards the requests to the pods (application instances).


This setup ensures that the Kubernetes cluster remains secure, with only the cloud load balancer exposed to external traffic.
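
To make this concrete, here is a minimal Ingress resource that the NGINX ingress controller would act on. The hostname and Service name (myapp.example.com, my-nodejs-service) are placeholders, not part of this project:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nodejs-ingress
spec:
  ingressClassName: nginx              # handled by the NGINX ingress controller
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-nodejs-service   # routes to the pods behind this Service
                port:
                  number: 3000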

In this project, we use NGINX as a reverse proxy and load balancer for a Node.js cluster serving a simple web page.


Here is the GitHub link for the project, which contains the source code, the custom NGINX configuration I wrote, and the Docker Compose file used to containerize the whole setup.

Project Overview

The project consists of the following components:

  1. NGINX Server: Listens on port 8080 and forwards incoming HTTP requests to a Node.js cluster.

  2. Node.js Cluster: Comprises three Docker containers, each running a Node.js application on port 3000.

  3. Docker Compose: Orchestrates the deployment of all containers.

Here’s how the setup works:

  1. A client sends an HTTP request to the NGINX server on port 8080.

  2. NGINX, acting as a reverse proxy, forwards the request to one of the Node.js containers using a round-robin load-balancing strategy.

  3. The Node.js container processes the request and returns the response via NGINX.


Custom NGINX Configuration

Below is the NGINX configuration file I wrote for this project:

worker_processes auto;

events {
    worker_connections 1024;
}

http {
    include mime.types;

    # Upstream block to define the Node.js backend servers
    upstream nodejs_cluster {
        server  app1:3000;
        server  app2:3000;
        server  app3:3000;
    }

    server {
        listen 8080;  # Listen on port 8080 for HTTP
        server_name localhost;

        # Proxying requests to the Node.js cluster
        location / {
            proxy_pass http://nodejs_cluster;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}

Explanation:

  • The configuration is organized into blocks such as events, http, and server. Inside these blocks, directives determine the behavior of the server.

  • The worker_processes directive controls how many worker processes NGINX spawns to accept and process requests from clients. Each worker handles many concurrent requests in a single-threaded event loop, so the number of workers strongly influences how well NGINX handles traffic. It should be tuned to the server's hardware and expected load; in production it is usually set to the number of CPU cores, which is what worker_processes auto; does.

  • The worker_connections directive in the events block configures how many concurrent connections each worker process can handle. For example, on a 4-core machine (so 4 workers with worker_processes auto;) and worker_connections 1024, NGINX can handle roughly 4 × 1024 = 4096 simultaneous connections.

  • The main logic lives in the http block: it defines which port NGINX handles user requests on and where requests for particular domains or IP addresses are forwarded. NGINX can also be configured to serve HTTPS with SSL/TLS encryption, but I have not set that up for this simple example.

  • The server block listens on port 8080 and forwards requests to the upstream servers defined.

  • The upstream block defines the Node.js backend servers (three containers in our case). When NGINX acts as a reverse proxy, requests reaching the backend servers originate from NGINX, not directly from the client. As a result, the backends see the NGINX server's IP address as the source of each request.

  • Why are the upstream servers named app1, app2, and app3? Because Docker's internal DNS service resolves app1, app2, and app3 to the Node.js container services we defined in the Docker Compose file.
  • We also want to forward information from the original client request, which is useful for logging. The proxy_pass directive sends requests to the upstream cluster, while headers like Host and X-Real-IP preserve client information (a small optional extension is shown after this list).

  • Another important directive is include mime.types;. With it, NGINX sets the correct content type on the files it serves in the response, which helps the browser process and render them properly.
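
As a small optional extension of the location block above (not part of my original configuration), the standard X-Forwarded-For header can also be passed along so the backends see the full chain of client IPs:

location / {
    proxy_pass http://nodejs_cluster;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    # Append the client IP to any existing X-Forwarded-For header
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}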

That covers the NGINX configuration. It is a useful foundation for other projects, since the overall logic stays the same.


Docker Compose File

Here’s the docker-compose.yml file that defines the entire setup:

version: '3'
services:
  app1:
    build: ./app
    environment:
      - APP_NAME=App1
    image: yashpatil16/nginx-app1:latest
    ports:
      - "3001:3000"

  app2:
    build: ./app
    environment:
      - APP_NAME=App2
    image: yashpatil16/nginx-app2:latest
    ports:
      - "3002:3000"

  app3:
    build: ./app
    environment:
      - APP_NAME=App3
    image: yashpatil16/nginx-app3:latest
    ports:
      - "3003:3000"

  nginx:
    build: ./app/nginx
    image: yashpatil16/nginx:nodejs-app
    ports:
      - "8080:8080"
    depends_on:
      - app1
      - app2
      - app3

Explanation:

  • The app1, app2, and app3 services each build and run a Node.js application, exposing port 3000 internally (a minimal sketch of such an app follows this list).

  • The nginx service builds the NGINX image, exposing port 8080 to the host.

  • The depends_on directive ensures that the Node.js containers are started before NGINX (it controls start order only, not readiness).
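
For context, here is a minimal sketch of what each container's Node.js application could look like. The real source is in the linked repository, so treat this only as an illustration; APP_NAME is the environment variable set in the Compose file:

const http = require('http');

const appName = process.env.APP_NAME || 'App';

const server = http.createServer((req, res) => {
  // Log which instance handled the request -- this is what shows up in the container logs
  console.log(`Request handled by ${appName}`);
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(`<h1>Hello from ${appName}</h1>`);
});

server.listen(3000, () => console.log(`${appName} listening on port 3000`));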

I have also written a custom Dockerfile for the NGINX container that tells it to use my configuration instead of the default configuration file.

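It looks roughly like this (a sketch of the idea; the exact base image tag and file paths may differ from the repository):

FROM nginx:latest

# Replace the default NGINX configuration with the custom one shown earlier
COPY nginx.conf /etc/nginx/nginx.conf

EXPOSE 8080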


Running the Project

  1. Build and start the containers:

    docker-compose up --build
    
  2. Access the application in your browser:

    http://localhost:8080
    

    NGINX will forward the request to one of the Node.js containers and return the response.


To verify the round-robin load balancing, check the container logs: successive requests are served by different containers (App1, App2, and App3, as named in the Docker Compose file).
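
For example, assuming the service names from the Compose file above, you can generate a few requests and watch the logs:

# Send a few requests so each backend gets a turn
curl http://localhost:8080
curl http://localhost:8080
curl http://localhost:8080

# Tail the logs of the Node.js containers to see which app handled each request
docker-compose logs -f app1 app2 app3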

Conclusion

That wraps things up. We covered what NGINX is, the functionality it offers, and how to set up an NGINX server as a reverse proxy for our upstream servers. This was a simple setup, but the same logic applies to other projects, with a few more configuration tweaks here and there.

Connect with me on LinkedIn: https://www.linkedin.com/in/yash-patil-24112a258/
