Tue. July 22nd, 2025

Hello, everyone! 🌐 Behind many of the websites and services we use today are powerful pieces of technology that go unseen. One piece of software in particular is synonymous with “high performance” and “efficiency,” and that’s Nginx. 🚀

In this article, we’re going to take a deep dive into what Nginx is, why so many companies and developers love it, and even what its core features are and how to configure them. Whether you’re interested in web development or already working with it, we hope you’ll find it informative!

—

💡 What is Nginx?

Nginx is an open source web server software developed by Igor Sysoev. It was initially created to solve traffic problems for Rambler.ru, a large Russian website, and has since become one of the most used web servers in the world.

Beyond simply serving web pages, Nginx performs a number of roles, including:

  • Web Server: Efficiently serves static files.
  • Reverse Proxy: Takes requests from clients, forwards them to internal servers, and returns responses.
  • Load Balancer: Distributes traffic across multiple servers to improve system reliability and performance.
  • HTTP Cache: Stores frequently requested content to speed up responses.
  • SSL/TLS Terminator: Reduces the burden on backend servers by performing encryption/decryption tasks on their behalf.

In a word, Nginx is a versatile all-rounder that handles the complex requirements of modern web services! 🛠️

—

🚀 Core Features of Nginx: Why Nginx?

There are clear reasons why Nginx is so widely used – let’s take a look at its core features.

1. Asynchronous, Event-Driven Architecture

Most traditional web servers (for example, Apache with its default prefork MPM) create a new process or thread to handle each client connection. Nginx is different. It adopts an asynchronous, event-driven architecture, efficiently handling many concurrent connections within each single-threaded worker process.

  • How does it work? 🧐 Nginx handles requests with a small set of worker processes. Each worker waits on events from many client connections at once (an event loop) and processes requests quickly as they arrive. It’s like one skilled multitasker rotating through many jobs instead of hiring a new worker for each one.
  • Pros: Handles a very large number of concurrent requests (solving the C10K problem) with far less memory and CPU. 💡
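The concurrency capacity of this model is governed by two directives. A minimal sketch (the values are illustrative, not tuned recommendations for any particular workload):

```nginx
worker_processes auto;  # one single-threaded worker per CPU core

events {
    # maximum simultaneous connections each worker can hold open
    worker_connections 4096;
    # rough ceiling: worker_processes × worker_connections concurrent connections
}
```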

2. Low Resource Consumption

Thanks to the architecture described above, Nginx uses much less memory and CPU resources compared to other web servers like Apache. This is a huge advantage, especially in environments that require high performance or need to handle a lot of traffic with limited resources.

3. High Concurrency Processing

Nginx is very fast at its core tasks, including serving static files, reverse proxying, and load balancing, and it can reliably sustain hundreds of thousands of concurrent connections. This makes it especially well suited to web services that must handle high traffic. 📈

4. Versatility and Extensibility

In addition to the basic functionality of a web server, it has most of the features required for modern web services built-in, including reverse proxy, load balancing, caching, SSL/TLS handling, and more. It is also modular and extensible, allowing you to add features to meet your customization needs.

—

🛡️ A deep dive into Nginx’s key features

Let’s take a closer look at the various features of Nginx.

1. Web Server

Nginx excels at serving static files (HTML, CSS, JavaScript, images, etc.), especially when large websites need to deliver tons of static content quickly.

Example:

server {
    listen 80;
    server_name example.com; # set domain name

    # Set the root directory of your website
    root /var/www/html;
    index index.html index.htm; # Specify the default file

    # find and serve the file for every request
    location / {
        try_files $uri $uri/ =404;
    }
}

2. Reverse Proxy

A reverse proxy sits between the client (user) and the actual web application server (WAS, Backend Server), receiving requests from the client, forwarding them to the WAS, and forwarding the WAS’s response back to the client.

  • Why use it?
    • Security enhancement: Hides the IP address of the WAS to protect it from direct attacks.
    • Load balancing: Reduces load by distributing requests across multiple WASs.
    • SSL/TLS termination: Reduces the burden on the WAS by performing encryption/decryption on its behalf.
    • Caching: Caches frequently requested content to improve response times.

Examples:

server {
    listen 80;
    server_name api.example.com;

    location / {
        # Forward requests to the backend application server (e.g. Node.js, Django, Spring Boot)
        proxy_pass http://localhost:3000;

        # pass header information from the original request to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

3. Load Balancer 🚥

Load balancing is a technique that increases the availability, scalability, and reliability of an application by distributing incoming network traffic across multiple servers. Nginx provides powerful load balancing capabilities.

Main load balancing methods:

  • Round Robin (default): Distributes requests to a group of servers in order.
  • Least Connections: Sends requests to the server with the fewest number of active connections.
  • IP Hash: Hashes the client’s IP address so that requests from the same client always go to the same server. (Useful for session persistence)

Example:

http {
    # define a server group named 'backend'
    upstream backend {
        # set the load balancing method (default is round robin)
        # least_conn;
        # ip_hash;

        server backend1.example.com:8080;
        server backend2.example.com:8080;
        server backend3.example.com:8080;
    }

    server {
        listen 80;
        server_name myapp.example.com;

        location / {
            # Forward requests to the 'backend' server group
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
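Beyond choosing a method, each server line in an upstream block accepts tuning parameters. The sketch below (with the same hypothetical hostnames as above) shows weighted round robin combined with passive health checking:

```nginx
upstream backend {
    # backend1 receives roughly 3x the traffic of an unweighted server
    server backend1.example.com:8080 weight=3;

    # mark this server as failed after 3 errors, retry it after 30 seconds
    server backend2.example.com:8080 max_fails=3 fail_timeout=30s;

    # backup servers only receive traffic when all primary servers are down
    server backend3.example.com:8080 backup;
}
```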

4. Caching Server

With proxy caching, Nginx can store frequently requested content (e.g., images, JS, CSS files, etc.) on the Nginx server itself, and then respond directly from Nginx on the next request instead of going to the backend server, greatly improving response time.

Example:

http {
    # Set the cache storage path and size
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=10g
                     inactive=60m use_temp_path=off;

    server {
        listen 80;
        server_name cdn.example.com;

        location /static/ {
            proxy_pass http://static_backend; # Static file backend server
            proxy_cache my_cache; # use the cache zone defined above
            proxy_cache_valid 200 302 10m; # cache 200 and 302 responses for 10 minutes
            proxy_cache_valid 404 1m; # Cache 404 responses for 1 minute
            add_header X-Proxy-Cache $upstream_cache_status; # cache status check header
        }
    }
}
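A useful companion to the setup above is proxy_cache_use_stale, which lets Nginx serve an expired cached copy when the backend is failing, so the site stays readable through backend outages. A minimal sketch extending the /static/ location:

```nginx
location /static/ {
    proxy_pass http://static_backend;
    proxy_cache my_cache;

    # Serve a stale cache entry if the backend errors out or times out,
    # or while another request is already refreshing the entry
    proxy_cache_use_stale error timeout updating;

    # Let only one request at a time populate a missing cache entry
    proxy_cache_lock on;
}
```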

5. SSL/TLS Termination

SSL/TLS Termination is a feature that allows Nginx to handle the encryption/decryption of encrypted communications (HTTPS) on your behalf. This allows your backend application servers to process only HTTP communications without the burden of encryption, saving resources and improving performance.

Example:

server {
    listen 443 ssl; # HTTPS default port
    server_name secure.example.com;

    # Specify the path to the SSL/TLS certificate and private key
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # Additional SSL settings for security (recommended)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers 'TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:EECDH+AESGCM:EDH+AESGCM';
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://localhost:8080; # Forward to backend HTTP server
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https; # Notify backend that request came in HTTPS
    }
}
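A common companion to the block above is a plain-HTTP server that redirects everything to HTTPS, so users who type the bare domain still land on the encrypted site:

```nginx
server {
    listen 80;
    server_name secure.example.com;

    # Permanently redirect every HTTP request to its HTTPS equivalent
    return 301 https://$host$request_uri;
}
```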

—

🆚 Nginx vs. Apache: A Quick Comparison

Nginx and Apache are both powerful web servers, but they differ in their design philosophies and performance characteristics.

| Feature | Nginx | Apache HTTP Server |
| --- | --- | --- |
| Architecture | Asynchronous, event-driven (non-blocking) | Process/thread-driven (blocking) |
| Performance | High concurrency, low resource usage, fast static file serving | Higher resource usage than Nginx; limits on concurrent processing |
| Main uses | Reverse proxy, load balancer, high-performance static file server | General-purpose web server; supports complex .htaccess configurations |
| Configuration | Centralized nginx.conf | Decentralized .htaccess support |
| Learning curve | Some learning required for initial setup | Relatively easy, with extensive documentation available |

Verdict: If you need to handle high traffic, or want a reverse proxy/load balancer in a microservices architecture, Nginx is the clear choice. Apache remains a good fit when you depend on complex modules or need frequent .htaccess-based configuration changes, but for most modern web services Nginx is the stronger default.

—

📚 Guide to Nginx Default Settings (nginx.conf)

Nginx’s configuration file is typically located in /etc/nginx/nginx.conf or /etc/nginx/conf.d/*.conf. The configuration has a hierarchical structure and is roughly divided into the following blocks: main, events, http, server, and location.

1. Main configuration file (nginx.conf) structure

# main context (global settings)
user nginx; # user running Nginx
worker_processes auto; # number of worker processes (tailored to the number of CPU cores)

error_log /var/log/nginx/error.log warn; # error log path
pid /var/run/nginx.pid; # Process ID file

events {
    # events context (how network connections are handled)
    worker_connections 1024; # maximum number of connections each worker process can handle
    # use epoll; # use the high-performance I/O event handling model in Linux (automatically selected by default)
}

http {
    # http context (overall settings for the web server)
    include /etc/nginx/mime.types; # include MIME type definition file
    default_type application/octet-stream; # default file type

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main; # access log path

    sendfile on; # optimize file transfer
    #tcp_nopush on; # send response headers and the start of the file in one packet (effective with sendfile)

    keepalive_timeout 65; # Time to keep a Keep-Alive connection with the client

    gzip on; # Enable Gzip compression (reduces data transfer)

    # Include server blocks (per-site configuration files usually live in this directory)
    include /etc/nginx/conf.d/*.conf;
}

2. server block

A single server block is responsible for one website or domain. It uses the listen (port) and server_name (domain) directives to handle specific requests.

server {
    listen 80; # listen for HTTP requests on port 80
    server_name example.com www.example.com; # domain to be handled by this server block

    # ... specific settings for this domain ...
}

3. location block

The location block defines how requests for specific URL paths should be handled. For example, you can set it to behave differently for certain paths, such as /, /api, /images, etc.

server {
    listen 80;
    server_name mywebsite.com;

    # 1. handle root path (/) requests
    location / {
        root /var/www/html; # Path to the main files of the website
        index index.html; # file to show by default
        try_files $uri $uri/ =404; # 404 error if no file exists
    }

    # 2. proxy the /api path request to the backend server
    location /api {
        proxy_pass http://localhost:8080; # forward requests to the local server on port 8080
        proxy_set_header Host $host; # keep original Host header
    }

    # 3. Set up caching for /static route requests
    location /static {
        expires 30d; # set static files to be cached for 30 days
        root /var/www/static;
    }
}
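When several location blocks could match the same request, Nginx chooses by modifier, not by the order they appear in the file. A sketch of the matching priority (the paths are hypothetical):

```nginx
location = /healthz  { return 200; }      # 1. exact match wins immediately
location ^~ /assets/ { root /var/www; }   # 2. a longest-prefix match with ^~ skips the regex checks
location ~ \.php$    { return 403; }      # 3. first matching regex, in file order
location /docs       { root /var/www; }   # 4. otherwise, the longest plain prefix match
```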

—

📦 Install Nginx (Quick Guide)

On most Linux distributions, Nginx can be easily installed through the package manager.

  • On Ubuntu/Debian:

    sudo apt update
    sudo apt install nginx
    sudo systemctl start nginx
    sudo systemctl enable nginx
  • On CentOS/RHEL:

    sudo yum install epel-release # Add the EPEL repository, which provides the nginx package (CentOS 7 and earlier)
    sudo yum install nginx
    sudo systemctl start nginx
    sudo systemctl enable nginx

After installation, open the server’s IP address (http://YOUR_SERVER_IP) in a web browser. If you see the Nginx welcome page, the installation succeeded! 🎉

—

✨ Conclusion: Nginx, the essential engine of modern web services

We’ve covered what Nginx is, its core features, key capabilities, and even how to set it up. Nginx is a powerful web server and reverse proxy that combines high performance, efficiency, and flexibility.

From small personal projects to large distributed systems, Nginx plays an essential role in ensuring the reliability and performance of modern web services. Why not take advantage of Nginx to build more robust and reliable web services?

If you have any questions, feel free to leave them in the comments! Next time, we’ll dive into more in-depth settings or specific features of Nginx. Thanks for reading! 🙏
