This is a guide on how to optimize NGINX. Fortunately, the stock configuration works well for most use cases, and there aren't any magic options that will make it perform twice as fast or deliver other massive gains.
Worker Processes & Connections
Worker processes define how many NGINX processes will be opened to serve requests.
As a general rule of thumb, you should have one process for every core available. Since NGINX v1.3.8 and v1.2.5 it is recommended to set the value to auto, and NGINX will set the amount for you. You can determine the maximum number of connections with the formula:
worker_processes * worker_connections
For example, 4 worker processes with 1024 connections each can serve roughly 4096 simultaneous connections. In /etc/nginx/nginx.conf you can set the worker_processes directive to auto or to the number of CPU cores available:
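A minimal sketch of that setting in nginx.conf:

```nginx
# Let NGINX spawn one worker per available CPU core
worker_processes auto;
```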
Worker connections define how many connections each worker process can handle. By default this is set to 768, though in practice that capacity is roughly halved because browsers open at least two connections per server. The value can go as high as the open-file limit set in Linux.
The value can be checked by issuing:
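The per-process open-file limit can be checked with ulimit (the exact value depends on your system):

```shell
# Show the maximum number of open file descriptors for the current shell
ulimit -n
```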
Usually this is set to 1024 on most Linux distributions. It is safe enough to set worker_connections to 1024. Test whether you actually need to go higher and raise the ulimit value for open files accordingly; otherwise you can leave it at that.
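A sketch of both directives together; 1024 here is the safe starting point from above, not a tuned value:

```nginx
worker_processes auto;

events {
    # Connections each worker can handle; raise ulimit -n before going higher
    worker_connections 1024;
}
```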
Logging
By default NGINX logs every request it serves, which in turn consumes CPU and I/O cycles. There are a few ways of dealing with logs and reducing the resources they use. You can configure logging server-wide in nginx.conf or on a per-server-block basis.
Instead of writing every event to the file as it happens, we can enable buffering, which collects a series of entries in memory and writes them to the file in batches. To enable buffering, add the buffer=size parameter to access_log; NGINX writes to the file once the buffer reaches the specified size. You can also add flush=time to force a write after the specified interval.
access_log /var/log/nginx/access.log buffer=64k flush=5m;
Writing To Memory
Writing to memory is much faster than writing to disk and will greatly reduce I/O usage. Create a memory-backed partition with tmpfs and point your access log somewhere inside it.
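A sketch of the setup; the mount point and size are example values, and anything stored in tmpfs is lost on reboot:

```shell
# Create a 64 MB in-memory filesystem for logs (requires root)
mount -t tmpfs -o size=64m tmpfs /var/log/nginx-ram

# Then point the access log there in your NGINX configuration:
#   access_log /var/log/nginx-ram/access.log;
```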
Disabling Logging
If you want to avoid the I/O usage entirely, just disable logging altogether. If you have no need for the logs or never look at them, this is your best option. You can always enable them again if you wish.
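The access_log directive accepts off; a minimal sketch for a single server block:

```nginx
server {
    # Stop writing access log entries for this server block
    access_log off;
}
```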
Buffers
Buffers hold data in memory instead of writing it to disk. It is important to note that if the buffer sizes are set too low, NGINX will fall back to writing a temporary file on disk.
client_body_buffer_size sets the buffer size for POST data sent by the client, which usually consists of form submissions, file uploads, etc. Be wary of setting this to a high value, as it opens the door to a denial-of-service attack if many connections are opened and there isn't enough memory to allocate. Only dedicate an amount of RAM you are comfortable with.
client_body_buffer_size 16k; # Default value on 64-bit platforms
proxy_buffers deal with responses from your upstream servers and follow the same guidelines as above. The difference is that they are constrained by an upper limit, proxy_max_temp_file_size, which defaults to 1024 MB: responses that don't fit in the buffers spill to a temporary file on disk, up to that size.
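A sketch with illustrative buffer sizes (these are example values, not the NGINX defaults); the temp-file cap shown is the documented default:

```nginx
# Eight 16k buffers per connection for upstream responses (example values)
proxy_buffers 8 16k;
# Cap how much of an oversized response may spill to a temp file on disk
proxy_max_temp_file_size 1024m;
```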
GZIP
GZIP compresses assets before delivering them. It can reduce the size of pages by up to 70%, which means less time needed to load pages and lower bandwidth usage.
Enabling the gzip directive will yield decent results right off the bat, which can be done by adding:
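The directive goes in the http block, or in a server block if you only want it for one site:

```nginx
gzip on;
```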
You can also adjust the compression level with gzip_comp_level, which ranges from 1 to 9. In most cases you will see diminishing returns after level 4 or 5. Level 1 is fine for most cases, but feel free to experiment.
gzip_comp_level 1; # Default gzip compression level
By default GZIP only compresses the text/html MIME type, but you can define additional types using the following:
gzip_types text/plain text/css application/json;
HTTP/2
HTTP/2 brings a handful of improvements that may improve your site's performance. Chiefly, it multiplexes requests over fewer TCP connections, which means a lower time to first byte and less time spent on TLS handshakes. You will need SSL enabled on your site to benefit from these improvements, though. It is also relatively easy to enable: just append http2 to your listen directive:
listen 443 ssl http2;