Nginx
Prerequisites:
To start, I'd like to run the nginx server based on an Ubuntu container.
If needed, create an image first.
Installation:
To run the container:
docker run --name=nginx -it -p 80:80 -v .:/sites/demo ubuntu
IMPORTANT! Run the container from the folder that contains the site files!
Now we can build nginx from the source code. To view the available configure flags, check the
nginx.org documentation.
To build the source code, use the following command (this also sets the paths where the configuration will be created; see the flag notes below):
./configure --sbin-path=/usr/bin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --with-pcre --pid-path=/var/run/nginx.pid --with-http_ssl_module
This is the main advantage of installing nginx from source: it allows us to choose exactly which configuration options and modules to build with.
Then, run:
1. make
2. make install
3. Check that the configuration files were created at /etc/nginx/.
4. Check the nginx version to see that it is working, using nginx -V.
5. Run nginx using the command: nginx
6. Check that nginx is running using: ps aux | grep nginx
7. We can also see that nginx is running using the browser.
We can validate the configuration that we wrote using nginx -t.
To start nginx as a service, we can create a service script based on the nginx init file:
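A minimal sketch of such a systemd unit, based on the standard nginx unit file and assuming the paths from our ./configure flags (placed at /lib/systemd/system/nginx.service):

[Unit]
Description=The NGINX HTTP and reverse proxy server
After=network.target

[Service]
Type=forking
PIDFile=/var/run/nginx.pid
ExecStartPre=/usr/bin/nginx -t
ExecStart=/usr/bin/nginx
ExecReload=/usr/bin/nginx -s reload
ExecStop=/bin/kill -s QUIT $MAINPID

[Install]
WantedBy=multi-user.target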
From now on we can run nginx using systemctl, as we added nginx as a service.
Run nginx using:
systemctl start nginx.service
To start at boot:
systemctl enable nginx.service
Flag notes:
--sbin-path sets the location of the nginx executable to /usr/bin/nginx, which is where executables are located (all the commands that we execute are there).
--conf-path sets the configuration path to /etc/nginx/nginx.conf, which is where the nginx configuration is located.
--error-log-path sets the location of the primary error, warnings and diagnostics file.
--http-log-path sets the file location to which access logs of our site will go.
--with-pcre forces the usage of the PCRE library.
--pid-path sets the name of the nginx.pid file that will store the process ID of the main process, which lets us look up the main process ID.
--with-http_ssl_module allows serving HTTPS.
Nginx configuration:
There are a few terms used within nginx:
1. Directive – a specific configuration option that is set in the configuration file and consists of a name and a value.
2. Context – a section within a configuration where directives are set (like a scope). Contexts can also be nested.
3. Main context – where we configure global directives that apply to the master process.
To change the index.html page, we need to have the files in place and edit /etc/nginx/nginx.conf.
Conf file content:
root – the directory from which we serve requests. So if we access, say, localhost/hello, nginx looks in /sites/demo, which is the root directory, and resolves /hello under it.
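A minimal sketch of such a nginx.conf, assuming the /sites/demo root used throughout these notes:

events {}

http {
    server {
        listen 80;
        server_name localhost;
        root /sites/demo;
    }
}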
To test the nginx.conf file, run the command: nginx -t – this checks the syntax.
When we do this, we can see that we have only HTML without CSS. We can confirm the CSS loads using:
curl -i http://localhost/cover.css
It does work, but the Content-Type is text/plain, so the browser doesn't parse it as CSS.
To fix this, we need to provide nginx with the content type for a given file extension, so it can be previewed correctly.
We can fix this using a types mapping, which maps a content type to a file extension:
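A sketch of such a types block inside the http context (only these two extensions would then be mapped):

http {
    types {
        text/html html;
        text/css css;
    }
}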
We can do this in an easier way, using the mime.types file in /etc/nginx/.
This file contains the same mapping as above, but with many more extensions.
We can use this file by including it in our nginx.conf:
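For example:

http {
    include mime.types;
}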
The location context takes a parameter: the URI to match. Inside it we can preview another page, show a text, or do anything else we would like.
The response that we return needs to contain the status code and the returned value.
This is called a prefix location, which means that anything starting with /greet will match, like /greetings or /greet/hello.
To create an exact match for /greet (to match only /greet), we need to add the = modifier.
We can also evaluate the match based on a regex using the ~ sign.
This will, for example, allow access to /greet followed by a number from 0-9, and it is case sensitive.
To make the match case insensitive, we use the ~* sign before the match value.
If we had a prefix match and a regex match, nginx would give priority to the regex match.
We can also use a preferential prefix with ^~, which gives priority to the prefix match over the regex match.
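A sketch of all the modifiers together (the response texts are illustrative; nginx picks one location per request according to the priorities listed next):

location /greet {
    return 200 'prefix match';
}

location = /greet {
    return 200 'exact match';
}

location ~ /greet[0-9] {
    return 200 'case-sensitive regex match';
}

location ~* /greet[0-9] {
    return 200 'case-insensitive regex match';
}

location ^~ /Greet2 {
    return 200 'preferential prefix match';
}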
Priority of matches:
1. Exact match ( = ) – the highest priority match, because it matches the URI exactly.
2. Preferential prefix ( ^~ ) – the second priority, as it is preferred over the regex match.
3. Regex match ( ~ or ~* ) – whether case sensitive or insensitive, nginx takes whichever regex match is declared first.
4. Prefix match – the lowest priority; any other match is prioritized higher.
Variables:
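nginx provides built-in variables such as $host, $uri and $args, which can be used in the configuration. A small sketch (the /inspect location is illustrative):

location /inspect {
    return 200 "$host\n$uri\n$args";
}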
If statements in nginx.conf:
Note: it is highly discouraged to use if statements in the location context.
Here we check that if we get an API key that is not 1234, we return a string.
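A sketch of that check in the server context, assuming the key arrives as a query argument named apikey (available as $arg_apikey):

if ( $arg_apikey != 1234 ) {
    return 401 "Incorrect API key";
}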
Since we are in /sites/demo we can access /thumb.png, but we can also access different folders.
A redirect will change the URL: we were accessing /logo and the URL changed to /thumb.png.
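A sketch of such a redirect, using a 307 (temporary redirect) status:

location /logo {
    return 307 /thumb.png;
}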
A rewrite will NOT change the URL but will create a new request behind the scenes. So we accessed /user/Daniel but got /greet (A REWRITE IS INTERNAL!).
Example:
rewrite ^/user/\w+ /greet;
This means: rewrite a URI that starts with /user and is followed by one or more word characters, to /greet.
So if someone accesses /user/Daniel, the request will be rewritten to /greet.
We can also capture certain parts of the original request using regex groups ():
rewrite ^/user/(\w+) /greet/$1; -> this captures the name.
rewrite ^/user/(\w+)/(\w+) /greet/$2; -> here $2 refers to the second captured group.
So when we capture certain parts, we can also create specific locations (see the sketch below):
When we receive /user/Daniel, nginx captures "Daniel", rewrites the URI to /greet/Daniel, then sees there is a matching location and serves it.
In this case, if the user Daniel logs in, the rewrite sends him to his own location; if any other user logs in, he is directed to /greet.
Because /greet is a prefix match, anything that comes after /greet is still served by /greet, but a URI with a more specific location match is forwarded there.
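A sketch of this (the exact-match location for Daniel is illustrative):

rewrite ^/user/(\w+) /greet/$1;

location /greet {
    return 200 "Hello User";
}

location = /greet/Daniel {
    return 200 "Hello Daniel";
}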
We can use the last flag in a rewrite statement; the last flag states that after this rewrite is done, the URI will not be rewritten again, even if more matching rewrites follow.
If we have:
rewrite ^/user/(\w+) /greet/$1;
rewrite /greet/john /thumb.png;
In this case, if /user/john is accessed, it is rewritten to /greet/john, and then /greet/john is rewritten to /thumb.png.
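With the last flag on the first rewrite, the second rewrite is skipped, and /user/john is served by the /greet location as /greet/john:

rewrite ^/user/(\w+) /greet/$1 last;
rewrite /greet/john /thumb.png;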
We can use try_files either in the server context, so any request the server receives is handled by it, or inside a location.
If we used try_files in the server context, it would intercept every request and serve what we put there.
So, if we have: try_files /thumb.png /greet;
nginx will check if /sites/demo/thumb.png exists; if so, it will serve it. Since we put this in the server context, it will always return the thumb.png image. If /thumb.png did not exist, the request would go to the /greet location, since it is the last argument.
In this case, when we try to access /cat.png, nginx checks whether /sites/demo/cat.png exists. Since it doesn't exist, it falls through to /greet. The /greet location exists, so the request is rewritten to /greet and returns 200 "hello".
With $uri as the first argument, nginx checks whether the requested URI exists; if so, it serves it. If the URI doesn't exist, it tries /cat.png. If /cat.png doesn't exist either, it falls back to the /greet location.
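A sketch of that directive in the server context:

try_files $uri /cat.png /greet;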
Here, when a user accesses the /secure URI, we log his access into both access.log and secure.access.log (secure.access.log is a file that we created).
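A sketch of such a location (the return line is illustrative; the log paths follow our ./configure flags):

location /secure {
    access_log /var/log/nginx/secure.access.log;
    access_log /var/log/nginx/access.log;
    return 200 "Welcome to the secure area";
}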
Since a container should run one service at a time, we need to create two containers with Docker Compose: one that will hold the nginx service, and one that will hold the PHP service.
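A minimal sketch of such a docker-compose file, using the official nginx and php:fpm images for brevity (in these notes nginx was built from source, so the real setup may differ):

version: "3"
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - .:/sites/demo
  php:
    image: php:fpm
    volumes:
      - .:/var/www/html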
If a request points to a directory, we want to tell nginx which file to load, using the index directive.
The default value serves our index.html file at /sites/demo/index.html.
location /
This location will match anything. We first try to serve the URI; if it doesn't exist, then the URI as a directory, looking for an index file (either .php or .html); if that also doesn't exist, return 404.
location ~ \.php$
This location will match anything that ends with .php (using regex we say: match anything that ends ($) with \.php).
In this location we include the fastcgi configuration and pass requests to the PHP socket we created on the second Docker container (where php is the container name and 9000 is the port the PHP container is listening on).
Then we set the script path from which the PHP server runs (/var/www/html is the root directory of the PHP server, so we point the script filename at the PHP server's root directory).
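Putting the pieces together, a sketch (fastcgi_params is the standard parameter file shipped with nginx; SCRIPT_FILENAME points at the PHP container's root directory):

server {
    listen 80;
    server_name localhost;
    root /sites/demo;
    index index.php index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
        fastcgi_pass php:9000;
    }
}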
To connect to the PHP container, run: docker exec -it php /bin/bash
To preview index.php instead of index.html:
Create an index.php file; then, when trying to access the main page, we get index.php instead of index.html, since we set the index directive to first serve index.php and, if it doesn't exist, index.html.
Worker Processes:
When starting nginx, a master process is created. This process creates worker processes under it that listen for clients. The default number of workers is 1.
We can set the number of worker processes in the nginx.conf file (in the main context) using:
worker_processes 2;
This sets the number of workers to 2.
Adding another worker doesn't necessarily improve performance. Since nginx is asynchronous, performance depends on the CPU cores, so adding workers beyond the number of cores doesn't change anything.
Each core can run one worker. This means that if we have an 8-core CPU we can have 8 workers, one per core.
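We can also let nginx detect the number of cores and spawn one worker per core:

worker_processes auto;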
We can also set the number of connections each worker can handle using:
worker_connections
To know how many files each process can open at a time, run:
ulimit -n
This is the number of files a worker can open; if we exceed this number, we will max out our server.
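A sketch (worker_connections lives in the events context; 1024 is a common ulimit -n value). The theoretical maximum number of simultaneous connections is then worker_processes x worker_connections:

events {
    worker_connections 1024;
}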
We can also change the location of the file where nginx stores its process ID, using the pid directive.
The default value is /var/run/nginx.pid.
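For example (the new path is illustrative):

pid /var/run/new_nginx.pid;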
Buffers and Timeouts:
These tweaks refer to the requests coming from clients, not to how the server processes the requests.
First, buffering is the operation of writing data to memory, either from a request that we read or from a file that we load; we buffer the data into memory.
If the data doesn't fit in the buffer, nginx writes the remainder to the hard disk, which is much slower than memory.
Secondly, a timeout is an option where we can say, for example: after 60 seconds connected to a client, stop the connection.
We add these configurations in the http context, so they apply to all our requests.
We can set the units of each directive as follows:
100 – 100 bytes
10k – 10 kilobytes
10m – 10 megabytes
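A sketch of common buffer and timeout directives in the http context (the values are illustrative starting points, not recommendations):

http {
    client_body_buffer_size 10K;    # memory buffer for a request (POST) body
    client_max_body_size 8m;        # requests with a larger body get a 413 error
    client_header_buffer_size 1k;   # memory buffer for request headers

    client_body_timeout 12;         # seconds to wait between consecutive reads of the body
    client_header_timeout 12;       # seconds to wait for the full header
    keepalive_timeout 15;           # seconds to keep an idle connection open
    send_timeout 10;                # seconds to wait for the client to accept data

    sendfile on;                    # send static files from the kernel, skipping userspace buffers
    tcp_nopush on;                  # optimize the size of the packets sent with sendfile
}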
Adding dynamic modules:
1. Copy the old configuration flags that we built the server with, using nginx -V.
2. List all the modules using ./configure --help in the installation folder.
3. Run ./configure, paste the old configuration flags, then also add the new modules.
4. Set --modules-path=/etc/nginx/modules
(this flag sets the path where dynamic modules are installed and loaded from)
5. In case the configuration fails, install the needed packages.
6. Run make.
7. Run make install.
Dynamic modules are not loaded automatically; we need to load them in nginx.conf:
8. load_module modules/<module_name>.so;
It is important to notice that we set the modules directory (/etc/nginx/modules) under the same directory where our nginx.conf file is located (/etc/nginx/nginx.conf); this way we can reference the modules with a short relative path.
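Putting the steps together, a sketch (the image filter module stands in here as an example of a module compiled as dynamic; the existing flags come from nginx -V):

nginx -V    # copy the existing configure arguments from here
./configure <existing flags from nginx -V> \
    --modules-path=/etc/nginx/modules \
    --with-http_image_filter_module=dynamic
make
make install

After make install, the module is loaded at the top of nginx.conf with:
load_module modules/ngx_http_image_filter_module.so;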
Performance:
Headers and expires:
An Expires header is a header that the server responds with, indicating how long the client may cache the response.
This way the client doesn't need to request something it already has in cache. If we change something, the client can request it again and get a different response.
This improves performance, since the server doesn't need to serve a request the client has already cached.
Then, when we request thumb.png, we can see our headers using F12, in the network section.
Or, using curl: curl -I http://localhost/thumb.png
We also set the Pragma header to public, as it is the older version of the same caching header:
add_header Pragma public;
We add the Vary header, which tells the client that the content of this response can vary based on the value of the request header Accept-Encoding (meaning the response depends on that request header):
add_header Vary Accept-Encoding;
Setting the expiry date of the cache using the expires directive:
expires 1M; (1 month)
This tells the client to store the image for 1 month, and only then request the image from the server again.
We can also set a location for static resources like images/CSS/JS and so on:
Here, we set access_log off, so we don't log every time a client requests an image. Then we add the usual headers and expire the resources after 1 month.
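A sketch of that location:

location ~* \.(css|js|jpg|png)$ {
    access_log off;
    add_header Cache-Control public;
    add_header Pragma public;
    add_header Vary Accept-Encoding;
    expires 1M;
}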
Compressed Responses with gzip:
When a user sends us a request, it can add a header named Accept-Encoding.
This header tells us that we may return the response in an encoded form, like gzip.
This compresses the file and makes sending it much faster.
When the client receives the response, it decompresses it.
1. To enable gzip, add gzip on; in the http context, so every response can be gzipped.
Note that any child context under http can override the gzip directive.
2. Set gzip_comp_level – this level tells nginx how much to compress the files. A lower number (e.g. 3) results in a bigger file but uses fewer resources; a higher number results in a smaller file, which is better for the client, but uses more server resources (the maximum level is 9).
Note that above level 5 the file size doesn't shrink by much while using far more resources, so it is better to use a level between 1 and 5.
3. Set gzip_types, e.g. gzip_types text/css text/javascript; to tell nginx what it may compress.
4. Since we only compress when the client indicates that it accepts compressed files via the Accept-Encoding header, we also add add_header Vary Accept-Encoding; to tell the client that the response depends on this header.
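A sketch of the resulting http-context configuration:

http {
    gzip on;
    gzip_comp_level 3;
    gzip_types text/css text/javascript;
}

To check it, request a file while simulating a client that accepts gzip (the file name is illustrative); the response should include Content-Encoding: gzip:

curl -I -H "Accept-Encoding: gzip, deflate" http://localhost/style.css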
To enable microcaching:
1. Configure in the http context (so this will be applied to all servers):
fastcgi_cache_path /tmp/nginx_cache levels=1:2 keys_zone=CACHEZONE:100m inactive=60m; (see the note below)
2. Set the cache key, which determines which cache entry serves each request:
fastcgi_cache_key "$scheme$request_method$host$request_uri";
scheme = http/https, request_method = POST/GET, host = localhost, request_uri = what was requested.
If we removed $scheme, we would serve both http and https from the same cache entry; with $scheme, http and https are served from different entries.
Note that this string is what gets hashed into the cache key.
3. In the location ~ \.php$ block, add: fastcgi_cache CACHEZONE; so nginx knows which cache zone to store responses in.
Note: the cache is written to /tmp/nginx_cache. With levels=1:2, the last character of the hashed cache entry names a directory and the two characters before it name a subdirectory (the levels part can also be omitted). keys_zone sets the name and size of the in-memory zone for the cache keys. The inactive keyword sets how long a record is kept without being accessed before it is deleted (default: 10 minutes).
4. Set fastcgi_cache_valid 200 404 60m; – this makes responses with status 200 and 404 valid in the cache for 60 minutes.
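Putting it together, a sketch:

http {
    fastcgi_cache_path /tmp/nginx_cache levels=1:2 keys_zone=CACHEZONE:100m inactive=60m;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    server {
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
            fastcgi_pass php:9000;
            fastcgi_cache CACHEZONE;
            fastcgi_cache_valid 200 404 60m;
        }
    }
}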
We can check whether a response was served from the server or from the cache using:
$upstream_cache_status
We can pass this variable in a header on all the responses (in the http context):
add_header X-Cache $upstream_cache_status;
A HIT value means that the response was served from the cache.
A MISS value means that the response was not served from the cache.
Cache exceptions: we set a variable named $no_cache and set it to false (0).
If we receive a skipcache argument, we set the $no_cache value to true (1); if the request is a POST, we also set the value to true.
Then, in the PHP location, if the $no_cache value is false, nginx will not bypass the cache and will write the response to disk as usual.
But if the value is true, nginx will bypass the cache, not serve from it, and also not write the response to the disk.
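A sketch of that logic (the skipcache query argument is the example's own convention):

server {
    set $no_cache 0;

    if ($arg_skipcache = 1) {       # ?skipcache=1 skips the cache
        set $no_cache 1;
    }

    if ($request_method = POST) {   # never cache POST requests
        set $no_cache 1;
    }

    location ~ \.php$ {
        fastcgi_cache_bypass $no_cache;   # don't answer this request from the cache
        fastcgi_no_cache $no_cache;       # don't write this response to the cache
    }
}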
HTTP2:
Differences between HTTP/1.1 and HTTP/2:
HTTP/1 is a text protocol, which means we can read what is written, whereas HTTP/2 is a binary protocol, which we cannot read directly.
Transferring data using a binary protocol is less error-prone and faster.
HTTP/2 compresses headers, which enables faster transmission, and also uses persistent connections and multiplexed streaming, which takes a number of streams and transmits the data over one connection. HTTP/1, by contrast, creates a connection for each response, which takes time.
Opening connections:
Opening a connection takes time, since we need to perform a TCP handshake and also pass headers in the requests/responses. That's why it is better for us to multiplex several streams over one connection.
When requesting an HTML page from a server using HTTP/1, we need more connections as we request more parts of the page: things like scripts, images and CSS files. These increase the number of connections and the time it takes to see the whole page and its content.
Since the browser can open only a limited number of connections, when the connections stack up, further requests have to wait.
When requesting that same HTML page from a server using HTTP/2, the server returns the HTML page, and then when we request other parts of the page, the browser uses that same connection, and the server streams the data over it.
Meaning we open fewer connections and enable the option to send data faster.
To use HTTP/2 we need to enable HTTPS (SSL). We already have SSL installed, but we need to rebuild our source code with the HTTP/2 module:
1. Go to the nginx source folder from the installation.
2. Copy the configure flags from nginx -V.
3. Run ./configure with the old flags, plus --modules-path=/etc/nginx/modules --with-http_v2_module
4. Compile with make.
5. Install with make install.
Because our container needs to listen on port 443, we need to change our docker-compose file to listen on both 443 and 80:
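A sketch of the relevant part of the compose file:

services:
  nginx:
    ports:
      - "80:80"
      - "443:443"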
Then, to enable SSL in the nginx server:
1. In the server context, change the listen port to 443 and add ssl to enable the SSL module (listen 443 ssl).
2. Add ssl_certificate /etc/nginx/ssl/self.crt;
3. Tell nginx where to find the signing key with which it signs the response, using:
ssl_certificate_key /etc/nginx/ssl/self.key;
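If the self-signed pair doesn't exist yet, it can be created with openssl, for example (the validity period is illustrative):

openssl req -x509 -nodes -days 30 -newkey rsa:2048 \
    -keyout /etc/nginx/ssl/self.key -out /etc/nginx/ssl/self.crt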
To enable HTTP/2:
1. Add the http2 directive in the server context:
http2 on;
We can see that our server returns HTTP/2 responses using:
curl -Ik https://localhost/index.html
Server push:
HTTP/2 has a server push feature. This feature enables us, say when a client requests an HTML page, to send the CSS and PNG files together with that HTML.
Using the server push feature, we ask only for the HTML page (for example with the nghttp client and its -nys flags), but we also receive the CSS and PNG files.
In the server context, add a location:
location /index.html {
    http2_push /style.css;
    http2_push /thumb.png;
}
Note that we are not specifying the resource itself (style.css), but rather the request for the resource (/style.css).
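To see the push in action, a sketch with nghttp (-n discards the downloaded data, -y accepts the self-signed certificate, -s prints statistics, where pushed resources show up as additional responses):

nghttp -nys https://localhost/index.html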
a. In the http context, create a virtual host that will redirect all traffic to HTTPS.
b. Make the server listen on port 80.
c. The server needs to listen on the same IP as the original server.
d. Return any request from this server as a redirect to https, with the same host and the same request URI.
This will redirect any request from HTTP to HTTPS and fix the issue from before.
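A sketch of that virtual host:

server {
    listen 80;
    server_name localhost;
    return 301 https://$host$request_uri;
}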
1. Disable SSL: since the SSL protocol is outdated, we need to disable it and enable TLSv1, TLSv1.1 and TLSv1.2 instead.
2. Optimize cipher suites and tell nginx which ciphers to use and which not to.
Ciphers prefixed with ! are ciphers that we don't want to use.
3. Allow our server to perform key exchanges between the client and server using DH params.
Note that these params need to be created at /etc/nginx/ssl.
4. Enable HSTS, which is a header that tells the browser not to load anything over HTTP, so we can minimize redirects.
5. Enable the SSL session cache, which caches the TLS handshake sessions established between the server and the client. The cache holds entries for a set amount of time. This improves SSL connection times, since there is no need to perform the full handshake again. We want the session cache to be shared among all the workers. We also give the client a session ticket; this SSL ticket is trusted by the server and allows it to bypass the need to reread the session.
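A sketch of these directives in the server context (the cipher list and timings are illustrative):

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
add_header Strict-Transport-Security "max-age=31536000" always;
ssl_session_cache shared:SSL:40m;
ssl_session_timeout 4h;
ssl_session_tickets on;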
To generate the params:
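For example (the output path matches the ssl_dhparam directive above; the key size should match our RSA key):

openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048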
To test our server, we can install siege, which is a tool used to load-test the server.
Run: siege -v -r 2 -c 5 (-v: verbose; -r: number of test repetitions; -c: concurrent connections. So here we are doing 2 rounds of 5 connections, 10 requests in total.)
Rate limiting:
If we have 5 connections to the server, and the rate limit is 1 per second, then the first one will work, but the server will reject the other 4.
With a burst value of 5: if we send 8 requests per second, 1 is forwarded immediately, 5 are sent to the queue and served at the rate of 1r/s, and we are left with 2 that are rejected.
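A sketch of such a limit (the zone name is illustrative; keying on $request_uri limits per URI, while $binary_remote_addr would limit per client IP):

http {
    limit_req_zone $request_uri zone=MYZONE:10m rate=1r/s;

    server {
        location / {
            limit_req zone=MYZONE burst=5;
        }
    }
}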
Nodelay keyword:
We can also add the nodelay keyword, which is only applicable to a zone that already defines a burst value as well.
The nodelay keyword serves the allowed burst of requests as quickly as possible, not adhering to the rate limit, but it still maintains the rate limit for freeing queue slots for new requests.
If we send 6 requests, all 6 will be served immediately, but the queue will then be full (since its capacity is 1+5), and each slot is freed at the rate interval, here 1 per second.
If we send another 6 requests 2 seconds later, two slots will have been freed, so 2 requests fill the queue and the other 4 are rejected.
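A sketch:

limit_req zone=MYZONE burst=5 nodelay;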
Basic Auth:
We can add authentication to our site, allowing only permitted users to enter different parts of it.
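A sketch of such a protected location (the realm text is illustrative):

location /secure {
    auth_basic "Secure Area";
    auth_basic_user_file /etc/nginx/.htpasswd;
}

The password file can be created with the htpasswd tool from apache2-utils, e.g. htpasswd -c /etc/nginx/.htpasswd user1.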
Note that the difference between fastcgi_pass and proxy_pass is the protocol they speak: fastcgi_pass talks FastCGI to a PHP backend, while proxy_pass forwards plain HTTP to a different server, with nginx accepting the request and then forwarding it onward.
If we run PHP and nginx in containers, PHP-FPM can only listen using FastCGI, not HTTP.
The load balancing will be round robin by default, which sends the requests to the servers in order.
If one server is dead, nginx will automatically still serve requests to the other servers that are alive.
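A sketch of a round-robin upstream (the ports are illustrative):

upstream php_servers {
    server localhost:10001;
    server localhost:10002;
    server localhost:10003;
}

server {
    listen 8888;
    location / {
        proxy_pass http://php_servers;
    }
}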
Load Balancing Options:
Sticky Sessions:
When we use sticky sessions, the request is bound to the user's IP and, if possible, always proxied to the same server.
To allow sticky sessions, in the upstream block, add:
ip_hash;
This keeps a mapping in memory from an IP address to its corresponding proxy server.
Least Connections:
When using this option, nginx will forward requests to the server with the fewest active connections, so the load is balanced and not just forwarded to the next server in order.
To enable least-connections load balancing, add in the upstream block:
least_conn;