Introduction
Nginx is lightweight and fast, and it is often used as a replacement for the sometimes overbearing Apache 2. Like any server software, though, Nginx still needs to be tuned to get the best performance out of it.
Requirements
- A fresh Debian 7 VPS with its initial setup completed.
- Nginx installed, configured, and running on the server.
- A basic understanding of Linux fundamentals.
Worker Processes and Worker Connections
You will need to begin with two directives: ‘worker_processes’ and ‘worker_connections’.
Before changing any settings, you need to know what these directives control. The ‘worker_processes’ directive is the backbone of Nginx: it tells the server how many worker processes to spawn once it has bound to the proper IP and port(s). It is common practice to run 1 worker process per core. Setting it higher will not harm your system, but it will usually leave idle processes lying about.
To figure out what to set ‘worker_processes’ to, look at the number of cores in your setup. On a basic VPS this will most likely be a single core. If you later resize to a larger plan, check your core count again and adjust the number accordingly. You can do this by grepping the CPU info:
grep processor /proc/cpuinfo | wc -l
If it returns 1, that is the number of cores on the machine.
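If it is available on your system, the ‘nproc’ utility from GNU coreutils gives the same answer more directly; this is just an optional shortcut and is not required for the rest of the guide.
nproc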
The ‘worker_connections’ directive tells the worker processes how many clients can be served simultaneously by Nginx.
The default value is 768; however, since every browser usually opens at least 2 connections per server, the number of clients that can actually be served is halved. This is why you should adjust ‘worker_connections’ to the machine's full potential. You can check the core's limit with the ulimit command shown below.
ulimit -n
On a smaller machine, such as a 512MB VPS, this number will probably read 1024, which is a good starting value.
Now update your config.
sudo nano /etc/nginx/nginx.conf
worker_processes 1;
worker_connections 1024;
Remember: the maximum number of clients that can be served is the number of worker connections multiplied by the number of cores (worker processes).
In this situation, the server can serve 1024 clients per second, though even this is mitigated further by the ‘keepalive_timeout’ directive.
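For reference, here is a minimal sketch of how these two directives are laid out in /etc/nginx/nginx.conf: ‘worker_processes’ sits at the top level, while ‘worker_connections’ lives inside the ‘events’ block. Your actual file will contain more than this.
worker_processes 1;

events {
    worker_connections 1024;
}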
Buffers
Another very useful and important tweak is adjusting the buffer sizes. If the buffer sizes are too low, Nginx has to write to a temporary file, which causes the disk to read and write constantly. There are a few directives you need to understand before making any changes.
client_body_buffer_size: Handles the buffer size for the client request body, which is the data sent to Nginx in POST actions (usually form submissions).
client_header_buffer_size: Similar to the previous directive, but for the client request header. For all intents and purposes, 1K is usually a decent size.
client_max_body_size: The maximum allowed size for a client request body. If this is exceeded, Nginx responds with a 413 error (Request Entity Too Large).
large_client_header_buffers: The maximum number and size of buffers for large client headers.
client_body_buffer_size 10K;
client_header_buffer_size 1k;
client_max_body_size 8m;
large_client_header_buffers 2 1k;
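If you want to confirm that the ‘client_max_body_size’ limit is actually enforced, a quick check from the shell works; the test file path and the localhost URL below are only placeholders for your own setup, and the request should come back with a 413 status when the body exceeds the 8m limit.
# Create a 10MB test file, larger than the 8m limit above
dd if=/dev/zero of=/tmp/big.bin bs=1M count=10
# POST it and print only the HTTP status code; 413 means the limit is working
curl -s -o /dev/null -w "%{http_code}\n" --data-binary @/tmp/big.bin http://localhost/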
Timeouts
Timeouts can also drastically improve performance.
The ‘client_body_timeout’ and ‘client_header_timeout’ directives are responsible for the time the server will wait for a client body or client header to be sent after a request. If neither a body nor a header is sent, the server will issue a 408 error (Request Timeout).
The ‘keepalive_timeout’ directive assigns the timeout for keep-alive connections with the client. Simply put, Nginx will close connections with the client after this period of time.
Finally, ‘send_timeout’ is set not for the entire transfer of the response, but only between two successive write operations to the client; if the client accepts nothing within this time, Nginx will shut down the connection.
client_body_timeout 12;
client_header_timeout 12;
keepalive_timeout 15;
send_timeout 10;
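One rough way to observe the header timeout in action is to open a connection, send only part of a request, and wait; this is just an illustrative sketch, and the exact behavior (a 408 response versus a silent close) can vary with your Nginx and netcat versions.
# Send an incomplete request header, then keep the connection idle;
# with the values above, Nginx should terminate it after about 12 seconds
(printf 'GET / HTTP/1.1\r\nHost: localhost\r\n'; sleep 30) | nc localhost 80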
Gzip Compression
Gzip compression can greatly reduce the amount of data Nginx has to send over the network. Be careful when increasing the ‘gzip_comp_level’, though: set it too high and the server will start wasting CPU cycles for little additional gain.
gzip on;
gzip_comp_level 2;
gzip_min_length 1000;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain application/x-javascript text/xml text/css application/xml;
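To check that compression is actually being applied, you can request a text asset while advertising gzip support; the stylesheet path below is only an example, so substitute a file that exists on your server and is larger than the 1000-byte ‘gzip_min_length’ above.
curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://localhost/style.css
# A "Content-Encoding: gzip" header in the output means compression is working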
Static File Caching
You can set expires headers for files that do not change often and are served regularly. This directive can be added to the actual Nginx server block.
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 365d;
}
Add or remove file types in the list above to match the types of files your Nginx server actually serves.
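To verify that the expiry headers are being sent, request one of the matching files and inspect the response headers; the image path below is only an example.
curl -I http://localhost/images/logo.png
# With "expires 365d;" the response should include an "Expires:" date roughly one
# year in the future and a "Cache-Control: max-age=31536000" header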
Logging
Nginx logs every request that hits the VPS to a log file. If you use an analytics service to monitor this instead, you may want to turn this functionality off. Just modify the ‘access_log’ directive:
access_log off;
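If you would rather keep logging for dynamic pages and only skip it for static assets, a more targeted variation (shown here as a sketch, not something this guide requires) is to disable it inside the static-files location block from earlier:
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 365d;
    access_log off;
}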
Save and close the file, then execute the command below.
sudo service nginx restart
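Any time you edit nginx.conf, you can also validate the syntax before restarting, so a typo does not take the server down; Nginx ships with a built-in configuration test for this.
sudo nginx -t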
Conclusion
A properly configured server is one that is monitored and tweaked accordingly.
None of the variables above are set in stone; they will have to be adjusted to each unique case. Further down the road, you may want to improve your machine's performance even more by researching load balancing and horizontal scaling. Those are just a couple of the many enhancements a good sysadmin can make to a server.