
10 Essential NGINX Configuration Settings for Blazing-Fast Website Performance



Introduction

Optimizing NGINX for better website performance is crucial for keeping your site fast and consistently available. Whether you're using NGINX as a web server, as a reverse proxy for load balancing, or for HTTP caching, this blog post offers insights on how to fine-tune NGINX configuration settings to keep your site speedy and efficient.

Fine-tuning NGINX configuration settings should not be treated as a one-time fix, since website or application load varies over time. Make it an ongoing practice: re-evaluate the load and readjust the configuration settings to keep performance at its best.

In this guide, we'll explore 10 essential NGINX configuration tips that'll have your website zooming past the competition. 

Optimizing Worker Processes and Connections

Let's start with worker processes and connections - the first configuration settings you should fine-tune for NGINX performance. Getting these right means striking a balance between performance and the available system resources. Too few, and NGINX can't keep up; too many, and you pay a penalty in CPU context switching!

First up, worker processes. The general rule of thumb is to set this to the number of CPU cores you have. Here's how you can do it:

worker_processes auto;

This little line tells NGINX to automatically detect the number of cores and use the optimal number of worker processes. 

Next, move on to the worker connections. This setting determines how many simultaneous connections each worker process can handle. You can safely start with the number 1024 and readjust this value after doing a performance test.

events {
   worker_connections 1024;
}

But here's the thing - the right number depends on your server's resources and expected traffic. For a high-traffic site, bumping this number up to 2048 or beyond can make a real difference. It's all about finding that sweet spot for your specific needs.

Remember, more isn't always better. There's a point of diminishing returns, and you don't want to overload your server otherwise you'll end up with a mess!
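Putting the two settings together, a minimal sketch of this tuning might look like the following (the worker_rlimit_nofile value is an illustrative assumption; it should be at least as large as worker_connections):

```nginx
# Auto-detect CPU cores and spawn one worker per core
worker_processes auto;

# Illustrative: raise the per-worker open-file limit so it can
# cover worker_connections plus open log and static files
worker_rlimit_nofile 2048;

events {
    # Start at 1024 and readjust after a performance test
    worker_connections 1024;
}
```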

Fine-tuning Buffers and Timeouts

Alright, let's dive into the next configuration settings - buffers and timeouts. These settings might not sound exciting, but they can make or break your site's performance.

First up is the client body buffer size. This determines how much of a client's request body NGINX will buffer in memory. Here's a config you can start with:

client_body_buffer_size 10K;
client_max_body_size 8m;

The first line sets a 10K buffer for typical request bodies, while the second caps request bodies (including file uploads) at 8 MB; anything larger is rejected with a 413 error. Setting these values too low can cause issues with form submissions and file uploads.

Next, let's talk about client header buffer size:

client_header_buffer_size 1k;

This is usually enough for most requests. But if you're dealing with a lot of cookies or long URLs, you might need to bump it up.
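If a request's headers don't fit in that 1k buffer, NGINX falls back to the larger buffers controlled by large_client_header_buffers, which you can raise as well (the values shown below are NGINX's defaults):

```nginx
# Fallback buffers for oversized headers (long URLs, big cookies):
# up to 4 buffers of 8k each - these are the default values
large_client_header_buffers 4 8k;
```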

Now, onto timeouts. These are crucial for keeping your server responsive.

client_body_timeout 12;
client_header_timeout 12;
keepalive_timeout 15;
send_timeout 10;

These settings give clients 12 seconds to send headers and body, keep connections alive for 15 seconds, and allow 10 seconds for transmitting a response. It's a balancing act - too short, and you might cut off slow connections; too long, and you risk tying up server resources.
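One related directive worth considering alongside these timeouts (not mentioned above, but commonly paired with them) is reset_timedout_connection, which frees the memory tied up by timed-out clients:

```nginx
# Send a TCP reset to timed-out connections instead of closing
# them gracefully, releasing their buffers immediately
reset_timedout_connection on;
```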

Implementing Efficient Caching Strategies

This is where the real performance magic happens. A well-configured caching strategy can dramatically reduce server load and speed up content delivery. It's like having a super-efficient librarian who remembers every book's location!

Microcaching

First up, microcaching. This is fantastic for dynamic content that doesn't change every millisecond.

fastcgi_cache_path /tmp/nginx_cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

server {
    set $no_cache 0;

    # Don't cache POST requests
    if ($request_method = POST) {
        set $no_cache 1;
    }

    location ~ \.php$ {
        fastcgi_cache my_cache;
        fastcgi_cache_valid 200 60m;
        fastcgi_cache_use_stale error timeout http_500 http_503;
        fastcgi_cache_lock on;
        fastcgi_cache_bypass $no_cache;
        fastcgi_no_cache $no_cache;
    }
}

This setup caches PHP responses for 60 minutes. It's a lifesaver for busy sites! On a high-traffic blog, microcaching like this can cut server load dramatically.

Browser caching

Next, let's set up some browser caching headers:

location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
   expires 1y;
   add_header Cache-Control "public, max-age=31536000";
}

This tells browsers to cache static assets for a year. Repeat visitors will load those files straight from their local cache instead of hitting your server again.

Proxy caching

Lastly, don't forget about proxy caching:

proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

server {
    location / {
        proxy_cache my_cache;
        proxy_cache_valid 200 60m;
        proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
    }
}

This caches responses from upstream servers, reducing load on your backend. Remember, caching is powerful, but use it wisely. You don't want to serve stale content to your users. Always test thoroughly!
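One handy way to test a proxy cache is to expose the cache status in a response header, so you can watch hits and misses from curl or your browser's dev tools. A sketch, reusing the my_cache zone from above (the header name X-Cache-Status is a convention, not a requirement):

```nginx
location / {
    proxy_cache my_cache;
    proxy_cache_valid 200 60m;

    # $upstream_cache_status reports HIT, MISS, BYPASS, EXPIRED, etc.
    add_header X-Cache-Status $upstream_cache_status;
}
```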


Enabling Gzip Compression

Next, let's talk about Gzip compression - your secret weapon for shrinking file sizes and speeding up load times. Compressing responses before they leave the server means less data on the wire and faster delivery!

Here's a config that you can try out:

gzip on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_proxied any;
gzip_vary on;
gzip_types
 application/atom+xml
 application/javascript
 application/json
 application/ld+json
 application/manifest+json
 application/rss+xml
 application/vnd.geo+json
 application/vnd.ms-fontobject
 application/x-font-ttf
 application/x-web-app-manifest+json
 application/xhtml+xml
 application/xml
 font/opentype
 image/bmp
 image/svg+xml
 image/x-icon
 text/cache-manifest
 text/css
 text/plain
 text/vcard
 text/vnd.rim.location.xloc
 text/vtt
 text/x-component
 text/x-cross-domain-policy;

Let's break this down:

   gzip on: enables Gzip compression. Simple, right?
   gzip_comp_level 5: sets the compression level. It's a balance between CPU usage and compression ratio. I've found 5 to be the sweet spot, but feel free to experiment!
   gzip_min_length 256: only compresses responses that are at least 256 bytes. No point compressing tiny files!
   gzip_proxied any: enables compression for all proxied requests, regardless of their headers.
   gzip_vary on: adds the Vary header. This helps with caching compressed content.
   gzip_types: lists the MIME types we want to compress. It's quite a long list but worth it!

Implementing this on a content-heavy site can cut transfer sizes dramatically. It's like watching a magic trick - same content, way faster delivery.

Caution: While Gzip is awesome, it does use some CPU. If you're on a very CPU-constrained system, you might want to adjust the compression level or be more selective with the file types you compress.

Also, don't bother compressing already compressed files like JPEGs or modern video formats - you'll just waste energy!

Optimizing SSL/TLS Settings

Security and speed - You don't have to give up either one, but they are closely linked.  Let's dive into optimizing your SSL/TLS settings for both safety and performance!

First up, let's enable SSL session caching:

ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;

This caches SSL session parameters to speed up subsequent connections. It's like teaching NGINX a secret handshake so it doesn't have to go through the whole ritual every time!

Next, implement OCSP stapling with the following configuration:

ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;

OCSP stapling lets the server send the SSL certificate's revocation status along with the certificate itself, so clients don't have to query the certificate authority separately - saving time on verification!

Now, let's choose our SSL protocols and ciphers:

ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;

This config disables older, insecure protocols and prioritizes strong, modern ciphers. On an e-commerce site, these settings can boost your security score and shave precious milliseconds off HTTPS negotiation times.

Remember, SSL/TLS settings are a moving target. What's secure today might not be tomorrow. Keep an eye on best practices and update your config regularly. A little bit of regular upkeep can make a big difference!

Configuring Keepalive Connections

Keepalive connections, also referred to as persistent connections, let the client and server keep the TCP link open after a request completes. Several requests can then travel over the same connection, removing the need to set up a fresh connection for every request. Way more efficient!

Here's a config that you can try out:

keepalive_timeout 65;
keepalive_requests 100;

Let's break it down:

keepalive_timeout: keeps connections open for 65 seconds. This means a client can make multiple requests over the same connection within this time frame.
keepalive_requests: allows up to 100 requests on a single keepalive connection before closing it.

Do not set these values super high. Too long a timeout, and you risk tying up server resources. Too many requests, and you might not distribute the load evenly across worker processes.

Keep in mind that if the timeout is too high, a traffic spike can leave new users unable to connect because the workers are tied up holding idle keepalive connections.

If you're using NGINX as a reverse proxy, don't forget to set up keepalives to your upstream servers too:

upstream backend {
    server backend1.example.com;
    keepalive 32;
}

server {
    location /api/ {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://backend;
    }
}

This keeps connections to your backend servers alive, reducing the overhead of constantly opening new connections. Remember, the perfect keepalive settings depend on your specific use case. Monitor your server's performance and adjust accordingly.

Tweaking Open File Cache

You can improve static file handling with open_file_cache, a directive that lets NGINX cache metadata about open files - descriptors, sizes, and modification times. Note that it does not cache the file contents themselves.

open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;

The first line tells NGINX to cache metadata for up to 1000 files, evicting entries that haven't been accessed in the last 20 seconds. The second line makes NGINX re-check cached file info every 30 seconds, since you don't want to serve outdated metadata.

The third directive, open_file_cache_min_uses, instructs NGINX to only cache files that have been requested at least twice, keeping the cache filled with the good stuff. The last directive also caches lookup errors, which can save you from hammering a missing file repeatedly.

Implementing open_file_cache on a site with tons of small static files can cut load times noticeably. However, if you're dealing with files that change frequently, you might want to tweak these settings or even turn them off for certain locations.
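For example, you could switch the cache off for a single location whose files change constantly (the /uploads/ path here is purely illustrative):

```nginx
# Illustrative: skip file-metadata caching for a directory
# whose contents are rewritten frequently
location /uploads/ {
    open_file_cache off;
}
```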


Leveraging HTTP/2 for Enhanced Performance

HTTP/2 is a major upgrade over HTTP/1.1, providing better performance, particularly for web applications. NGINX supports HTTP/2, and enabling it can mean quicker page loads, lower latency, and a better experience for users.

So, how do we enable HTTP/2 in NGINX? It's surprisingly simple! Just add the http2 parameter to the listen directive in your server block:

server {
    listen 443 ssl http2;
}

Remember, HTTP/2 only works over HTTPS, so make sure you've got your SSL/TLS configuration sorted out first.
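For context, a minimal HTTPS server block with HTTP/2 enabled might look like this (the server name and certificate paths are placeholders to substitute with your own):

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;  # placeholder

    # Placeholder paths - point these at your real certificate files
    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;
}
```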

One of the coolest features of HTTP/2 is server push: you can push critical assets to the client before they even ask for them. (Note that server push was removed from NGINX in version 1.25.1, so this applies only to older releases.) Here's how you can set it up:

location / {
    root /var/www/html;
    index index.html;
    http2_push /styles.css;
    http2_push /script.js;
}

This tells NGINX to push styles.css and script.js to the client when they request the root page, serving up content before it's even been asked for! But pushing too many resources can slow things down. Use it wisely, for critical above-the-fold content.

Monitoring and Testing Your NGINX Configuration

Finally, always monitor and test your NGINX setup to keep it running smoothly! Keep an eye on NGINX metrics for any signs of abnormality. Here are the four metrics you should always monitor.

Connections: how many active connections your server is handling. Too many, and you might need to scale up.
Requests per second: your raw traffic rate. A sudden spike could mean trouble.
Request processing time: if this starts creeping up, your users are gonna get restless.
Error rates: the proportion of 4xx and 5xx responses; a rising rate usually points to misconfiguration or backend trouble.

Now, how do you see these metrics? Well, NGINX Plus has some fancy real-time monitoring built-in. But if you're using open-source NGINX like me, don't worry! There are tons of great tools out there. 

You can use Prometheus with Grafana for visualizations. Here's a quick config to expose NGINX's stub_status endpoint, which a Prometheus exporter can then scrape:

location /nginx_status {
    stub_status on;
    allow 127.0.0.1;
    deny all;
}

Then you can use the NGINX Prometheus exporter to scrape these metrics. But monitoring is only half the battle. You've also gotta test your config changes before deploying in production.


Testing procedure:

Always run nginx -t before reloading your config. Use ab (Apache Bench) or wrk for quick load tests. They're like giving your server a quick workout to see if it's fit. For more complex scenarios, you can use JMeter, a Swiss Army knife for performance testing.

Should you possess sufficient resources, establish a testing environment that replicates your production environment - this is where you can experiment with the riskier elements.

Remember, performance tuning is an ongoing process. What works today might not be enough tomorrow. Keep an eye on those metrics, test regularly, and don't be afraid to tweak your config.

Conclusion

To sum up, optimizing NGINX configuration settings is like fine-tuning a high-performance engine - it takes time, patience, and a bit of experimentation. By making these 10 crucial tweaks, you're on track to boosting your website's speed significantly, keeping your visitors pleased and your servers running smoothly. Dive into your NGINX config file and start optimizing using this guide!
