
7 Tips for NGINX Performance Tuning

by Lakindu Jayasena

Tip 1 – Adjust Worker Processes & Worker Connections

The architecture of Nginx has one master process and several worker processes. The purpose of the master process is to read and evaluate the configuration as well as manage the worker processes. The worker processes' job is to handle requests, establishing connections with the client and the server.

By default, the worker_processes directive is set to auto (recommended), which sets the number of worker processes to match the number of CPU cores available in the system. However, if more traffic is coming to your Nginx web server than the current cores can handle, it's recommended to upgrade your machine to more cores and re-adjust worker_processes to the new number of CPU cores accordingly.

To figure out how many worker processes we require, we first need to know how many CPU cores are available in our system. To find how many cores you have on your Nginx server, run the following command.

grep processor /proc/cpuinfo | wc -l

Then we can change the worker_processes parameter inside the /etc/nginx/nginx.conf file.

worker_processes auto;

Worker connections are the maximum number of connections (TCP sessions) that each worker process can handle simultaneously. By increasing this number, we increase the capacity of each worker process. Combining worker processes and worker connections gives you the maximum number of clients that can be served per second.

Max Number of Clients/Second = Worker processes * Worker connections

The default is 512, but most systems have enough resources to support a larger number. Keep in mind that most web browsers open at least two connections per server, so the number of clients actually served can be half of that figure.

The following command lists the resource limits your system allows a process to use, including the maximum number of open files; worker_connections cannot exceed that limit.

ulimit -a

On most systems the open-files limit defaults to 1024, so 1024 is a sensible maximum for the worker connections setting, and it's a good value to get the full potential out of Nginx.
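If your system's open-files limit is lower than the number of connections you need, you can also raise the limit for the worker processes from within Nginx using the worker_rlimit_nofile directive. This is a minimal sketch; the value below is only illustrative and should match your environment.

# Main context of /etc/nginx/nginx.conf
worker_rlimit_nofile 4096;    # raise the open-file limit (RLIMIT_NOFILE) for worker processes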

Let’s update the main config /etc/nginx/nginx.conf

worker_connections 1024;
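Putting the two directives together, the relevant part of /etc/nginx/nginx.conf might look like the minimal sketch below; note that worker_connections belongs inside the events block.

# /etc/nginx/nginx.conf
worker_processes auto;              # one worker per available CPU core

events {
    worker_connections 1024;        # max simultaneous connections per worker
}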

Tip 2 – Enabling Gzip Compression

Gzip is a well-known compression format used for compressing and decompressing files. It reduces the amount of data Nginx has to transfer over the network, so enabling gzip saves bandwidth and improves website load time on slow connections.
Nowadays most clients and servers support gzip. When a gzip-compatible client/web browser requests a resource from a gzip-enabled server, the server compresses the response before sending it back to the client. That is a great way to optimize the Nginx server and handle requests even more efficiently.

The overall configuration of Gzip compression might look like this. Open up the file /etc/nginx/nginx.conf and add the following directives inside the server block.

server {
    ...
    gzip on;
    gzip_vary on;
    gzip_min_length 10240;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
    gzip_disable "MSIE [1-6]\.";    
    ...
}

Here’s an explanation for the configuration, line by line:

  • gzip on; – Enables gzip compression.
  • gzip_vary on; – Tells proxies to cache both gzipped and regular versions of a resource.
  • gzip_min_length 10240; – Tells NGINX not to compress anything smaller than the defined size (here about 10 KB).
  • gzip_proxied expired no-cache no-store private auth; – Compresses data even for clients that connect via proxies; here compression is enabled when the response's "Expires" or "Cache-Control" headers disable caching (expired, no-cache, no-store, private), or when the request carries an "Authorization" header (auth).
  • gzip_types; – Specifies the MIME types of files that can be compressed.
  • gzip_disable "MSIE [1-6]\."; – Disables compression for Internet Explorer versions 1-6.
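To verify that compression is actually being applied, request a resource (larger than the gzip_min_length above) with an Accept-Encoding header and inspect the response headers. The URL below is only a placeholder for your own site.

# Perform a GET, discard the body, and print the response headers
curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" https://example.com/css/style.css | grep -iE "content-encoding|vary"
# Expected output for a compressed response:
#   Content-Encoding: gzip
#   Vary: Accept-Encoding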

Tip 3 – Change static content caching duration on Nginx

Static content is content of a website that rarely changes, for example media files, documents, and CSS & JS files. Caching is a mechanism for keeping frequently accessed files in easily accessible locations. Enabling caching reduces bandwidth use and improves the performance of the website: when a client request arrives at your site, the cached version is served unless the file has changed since it was last cached. In the Nginx configuration, you can add the following directives to tell client browsers to cache the site's static files for faster access.

Add the following section inside the /etc/nginx/sites-available/domainname.conf vhost file (within the server block).

location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 365d;
}

In this example, all .jpg, .jpeg, .png, .gif, .ico, .css, and .js files get an Expires header with a date 365 days in the future from the browser access time.
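To confirm the header is being sent, you can fetch one of the static files and check the response; the file path below is only an example.

curl -s -o /dev/null -D - https://example.com/images/logo.png | grep -iE "expires|cache-control"
# Expected output (dates will differ):
#   Expires: <a date roughly 365 days ahead>
#   Cache-Control: max-age=31536000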

Tip 4 – Change the size of the Buffers

Nginx buffers are also quite important for Nginx performance optimization. If the buffer sizes are too low, Nginx has to write to temporary files, causing constant heavy disk I/O. To prevent that, set the buffer sizes appropriately.

The following are the parameters that need to be adjusted inside the /etc/nginx/nginx.conf file for optimum performance:

http {
    ...
    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 4 4k;
    ...
}
  • client_body_buffer_size – Sets buffer size for reading client request body. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file.
  • client_header_buffer_size – Sets the buffer size for reading the client request header.
  • client_max_body_size – Sets the maximum allowed size of the client request body, specified in the “Content-Length” request header field. If the size of a request exceeds the configured value, the 413 (Request Entity Too Large) error is returned to the client.
  • large_client_header_buffers – Maximum number and size of buffers for large client headers. A request line cannot exceed the size of one buffer, or the 414 (Request-URI Too Large) error is returned to the client.

With the values above, Nginx will work well for most sites, but for even further optimization you can tweak the values and test the performance.
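A quick way to sanity-check client_max_body_size is to post a file larger than the 8m limit and confirm that Nginx rejects it with a 413 status. The upload URL below is only a placeholder.

# Create a 10 MB test file, larger than the 8m limit configured above
dd if=/dev/zero of=/tmp/bigfile bs=1M count=10

# The request should print 413 (Request Entity Too Large)
curl -s -o /dev/null -w "%{http_code}\n" --data-binary @/tmp/bigfile https://example.com/upload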

Tip 5 – Reducing Timeouts

Tuning timeouts also improves Nginx performance considerably, and keepalive connections reduce the CPU and network overhead required for opening and closing connections.

The following are the parameters that need to be adjusted inside the /etc/nginx/nginx.conf file for optimum performance:

http {
    ...
    client_body_timeout 12;
    client_header_timeout 12;
    keepalive_timeout 15;
    send_timeout 10;
    ...
}
  • client_header_timeout – Defines a timeout for reading the client request header. If a client does not transmit the entire header within this time, the request is terminated with the 408 (Request Time-out) error.
  • client_body_timeout  – Defines a timeout for reading the client request body. The timeout is set only for a period between two successive read operations, not for the transmission of the whole request body. If a client does not transmit anything within this time, the request is terminated with the 408 (Request Time-out) error.
  • keepalive_timeout – Sets the timeout during which a keep-alive client connection will stay open on the server side.
  • send_timeout – Sets a timeout for transmitting a response to the client. If the client does not receive anything from the server within this time, the connection is closed.
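To see keepalive_timeout in action, you can ask curl to fetch two URLs in a single invocation; within the keepalive window the second request should reuse the first connection. The domain is only a placeholder.

curl -sv -o /dev/null -o /dev/null https://example.com/ https://example.com/ 2>&1 | grep -i "re-using"
# Expect a line such as "Re-using existing connection" before the second request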

Tip 6 – Disabling access logs (If required)

Logging is very important for troubleshooting issues and auditing. However, access logging consumes a large amount of disk storage and extra CPU/IO cycles, reducing server performance, because every single request coming to the server is logged.

There are two ways to go about this.

  • Disable access logging entirely, which will save your environment a good deal of additional processing and hard drive space.
access_log off;

  • If access logging is required, enable access-log buffering. This lets Nginx buffer a series of log entries and write them to the log file at once, instead of performing a separate write operation for each request.
access_log /var/log/nginx/access.log main buffer=16k;
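If you keep buffered logging, you can additionally cap how long entries may sit in memory with the optional flush parameter; the 5-minute value below is just an example.

# Write buffered entries when the 16 KB buffer fills or every 5 minutes, whichever comes first
access_log /var/log/nginx/access.log main buffer=16k flush=5m;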

As an alternative solution, you could use open-source tools like the ELK stack, which centralize all the logs of your systems.

Tip 7 – Configure HTTP/2 Support

HTTP/2 is the successor of the HTTP/1.x network protocol. It is widely used to reduce latency, minimize protocol overhead, add support for request prioritization, and make web applications load much faster. Hence, it is vital to stay current with performance optimization techniques and strategies. The main focus of HTTP/2 is to reduce overall web page loading time, thus optimizing performance. It also focuses on network and server-side resource usage, as well as enhanced security, because in practice HTTP/2 requires SSL/TLS encryption (browsers only support it over TLS).

As a prerequisite, make sure your Nginx version is 1.9.5 or higher and that it was built with the ngx_http_v2_module module; otherwise you have to add it manually. The server must also be configured with SSL/TLS.
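You can check both the version and whether the module was compiled in with the following commands:

nginx -v                                   # prints the Nginx version
nginx -V 2>&1 | grep -o http_v2_module     # prints "http_v2_module" if HTTP/2 support is built in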
Your HTTPS server block should now resemble the following:

server {
    listen 443 ssl http2;
    ...
    # Remove old and insecure cipher suites if present, and add the following.
    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;  
}
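After testing and reloading the configuration, you can confirm that HTTP/2 is being negotiated; the commands below assume a systemd-based system, and example.com stands in for your domain.

sudo nginx -t && sudo systemctl reload nginx    # validate the config, then reload

curl -sI --http2 https://example.com | head -n 1
# The status line should start with "HTTP/2"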

Do you have any questions or suggestions about this article? Feel free to let us know using the comment form below.
