Deploy web applications with the NGINX web server

NGINX (pronounced "Engine-X") is one of the most popular web servers in the world and is trusted by many of the busiest websites. It is open source and, at the time of writing, powers more than 400 million websites. Among its most commonly used features, which we can also put to work in our application, are its HTTP server, reverse proxy, content cache, and load balancer capabilities. There are many more features that can be used to build a robust and highly secure web server. Because of its event-driven (asynchronous) architecture, NGINX is highly scalable and uses a small but predictable amount of memory, even under heavy load. Since it is open source, we can use it for free; there is also a paid version called NGINX Plus, which includes dedicated customer support as well as additional features that are not available in the free version.

The image below shows the server instance that we have configured for our web server.

Web server instance

As per our server deployment architecture, we need to initiate the VPN connection in order to reach the web server, application server, and database server instances; without it, those servers are not accessible. Start the OpenVPN connection and then connect to the web server instance from our local machine using the following command:

   
   	ssh -i "travel-app.cer" ec2-user@54.184.95.134
   

Adjust the user and the static IP address in the SSH command to match your own server information.

Web server instance SSH connection using VPN

As discussed in the previous chapter, the first thing we should do is update all the currently installed packages to their latest available versions and remove any obsolete packages.

   
   	sudo yum -y update
	sudo yum -y upgrade
   

Installation of NGINX


Amazon Linux 2

The NGINX web server is not available through the yum package manager by default, because the Extra Packages for Enterprise Linux (EPEL) repository is not included in Amazon Linux 2 out of the box. Installing and enabling the EPEL repository provides access to packages that are otherwise unavailable through yum, and NGINX is one of them. Install the repository by issuing the following command:

   
   	sudo amazon-linux-extras install epel -y
   

Now that EPEL is installed, we also need to enable it.

   
   	sudo yum-config-manager --enable epel
   

To verify that the EPEL repository is installed and enabled, run the following command:

   
   	sudo yum repolist
   
AWS EPEL repolist

To make sure that we always install the latest stable version of NGINX, we will add NGINX's official stable repository to yum:

   
   	sudo vi /etc/yum.repos.d/nginx.repo
   

Paste the following content in the above file:

   
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/amzn2/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

   

Install NGINX web server:

   
   	sudo yum install nginx -y
   

Now that the NGINX web server is successfully installed, let's check its version.

   
   	nginx -v
   

For a successful installation, it should produce output similar to the following:

   
   	nginx version: nginx/1.22.0
   
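
If you also want to see the build options the package was compiled with, the uppercase -V flag prints the configure arguments along with the version:

   
   	nginx -V
   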

Ubuntu

To install NGINX, first update all the packages:

   
   	sudo apt-get update -y && sudo apt-get upgrade -y
   


Install all prerequisites:

   
sudo apt install curl gnupg2 ca-certificates lsb-release ubuntu-keyring -y
   

Import the official NGINX signing key so that apt can verify the packages' authenticity. Fetch the key:

   
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
    | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null
   

Verify that the downloaded file contains the proper key:

   
gpg --dry-run --quiet --import --import-options import-show /usr/share/keyrings/nginx-archive-keyring.gpg
   

To set up the apt repository for stable nginx packages, run the following command:

   
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/ubuntu `lsb_release -cs` nginx" \
    | sudo tee /etc/apt/sources.list.d/nginx.list
   

To install nginx, run the following commands:

   
sudo apt-get update -y 
sudo apt-get install nginx -y
   

Now that our NGINX web server is installed, let's start and manage the service.

To start the NGINX web server:

   
   	sudo systemctl start nginx
   

To restart the NGINX web server:

   
   	sudo systemctl restart nginx
   

To check the status of the NGINX web server:

   
   	sudo systemctl status nginx
   
NGINX web server status check

To stop the NGINX web server:

   
   	sudo systemctl stop nginx
   

To make sure that the NGINX web server starts after each system reboot, enable it:

   
   	sudo systemctl enable nginx
   
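
To confirm that the service is now registered to start at boot, query its enablement state; the command should print enabled:

   
   	systemctl is-enabled nginx
   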

Now that our NGINX web server is up and running, let's try to access it from our local machine.

NGINX web server running
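
Opening the server's public IP address in a browser should show the default NGINX welcome page. We can also check from the terminal with curl (the IP address below is the one from the SSH example earlier; replace it with your own server's address):

   
   	curl -I http://54.184.95.134
   

A 200 OK status line together with a Server: nginx header confirms that the web server is reachable.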

NGINX comes with a fairly basic configuration file called nginx.conf, located in the /etc/nginx/ directory. Let's check the contents of that file.

Note:

All comments have been removed from the configuration shown below:

   
	user  nginx;
	worker_processes  auto;

	error_log  /var/log/nginx/error.log notice;
	pid        /var/run/nginx.pid;


	events {
	    worker_connections  1024;
	}


	http {
	    include       /etc/nginx/mime.types;
	    default_type  application/octet-stream;
	    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
	                      '$status $body_bytes_sent "$http_referer" '
	                      '"$http_user_agent" "$http_x_forwarded_for"';
	    access_log  /var/log/nginx/access.log  main;
	    sendfile        on;
	    keepalive_timeout  65;
	    include /etc/nginx/conf.d/*.conf;
	}
   

We should never add server blocks directly to the nginx.conf file, even if we are dealing with a very basic configuration. This file is intended for configuring the server processes only.

Inside the conf.d directory there is another configuration file called default.conf. Site configuration files should be placed in this directory; we can create multiple site configuration files to host multiple applications under different domains. Usually, default.conf is renamed to SITE_DOMAIN.com.conf. Since we do not have any domains yet, we will use the default setup. The default.conf file looks like this:

   
   	server {
	    listen       80;
	    server_name  localhost;
	    location / {
	        root   /usr/share/nginx/html;
	        index  index.html index.htm;
	    }
	    error_page   500 502 503 504  /50x.html;
	    location = /50x.html {
	        root   /usr/share/nginx/html;
	    }
	}
   
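
Although we will stick with the default setup for now, here is a minimal sketch of what such a per-domain site configuration file might look like (the file name and the travel-app.example.com domain are purely hypothetical):

   
   	# /etc/nginx/conf.d/travel-app.example.com.conf (hypothetical example)
	server {
	    listen       80;
	    # a single virtual server can answer for several names
	    server_name  travel-app.example.com www.travel-app.example.com;

	    location / {
	        root   /usr/share/nginx/html;
	        index  index.html index.htm;
	    }
	}
   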

In NGINX, there are a couple of important terms that we need to know in order to understand the configuration clearly. Every option in an NGINX configuration file is expressed as a directive.


Context


From the above NGINX configuration file, we can see that the events, http, server, and location directives contain curly brackets {}, and inside those curly brackets additional directives are defined. Any directive that encloses other directives within {} defines a context named after that directive: the events context, http context, server context, and location context. We also see some directives defined at the top of the nginx.conf file that are not contained within any curly brackets; these belong to the main (or global) context. The global context contains other contexts, which in turn can contain many layers of sub-contexts. Directives defined in an outer context are inherited by its child contexts and, if needed, can easily be overridden in those child contexts.
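
For example, a directive such as access_log set in the http context applies to every virtual server inside it unless a particular server block overrides it. A minimal sketch (the log file paths here are just placeholders for illustration):

   
   	http {
	    # applies to every virtual server defined below
	    access_log  /var/log/nginx/access.log  main;

	    server {
	        # inherits access_log from the http context
	        listen 80;
	    }

	    server {
	        listen 8080;
	        # overrides the inherited value for this server only
	        access_log  /var/log/nginx/other_access.log  main;
	    }
	}
   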

Directive


The NGINX configuration options are called directives. A directive can be either a simple directive or a block directive. A simple directive is a configuration option consisting of just a name and a value, terminated by a semicolon. A block directive uses curly brackets {}, and between those curly brackets we can place simple directives as well as further block directives.
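
To make the distinction concrete, here is a tiny excerpt from the configuration above: worker_processes is a simple directive, while events is a block directive containing a simple directive of its own.

   
   	# simple directive: a name and a value, terminated by a semicolon
	worker_processes  auto;

	# block directive: a name followed by { ... }
	events {
	    worker_connections  1024;   # a simple directive inside the events context
	}
   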


Let's go through some of the important NGINX configuration directives from the above configuration files. Please refer to the following link for a detailed explanation.

http://nginx.org/en/docs/ngx_core_module.html

  • user

    It defines the user and group credentials used by the worker processes. If the group is omitted, a group with the same name as the user is used automatically.

To list the users in Linux, issue the following command:

   
   	cat /etc/passwd
   

In the list of users, you should also see the nginx user:

   
   	nginx:x:995:993:Nginx web server:/var/lib/nginx:/sbin/nologin
   

To list all the running processes along with the users that own them:

   
   	ps aux
   

You will see a line like the following, where the nginx user is running an NGINX worker process:

   
nginx    20785  0.0  0.3  40608  3068 ?        S    08:57   0:00 nginx: worker process
   
  • worker_processes

    It defines the number of worker processes. The auto parameter tries to detect the number of available CPU cores and sets the value accordingly.
  • error_log

    It is used for configuring error logging. The first parameter defines the file that stores the error log. The second parameter determines the level of logging and can be one of debug, info, notice, warn, error, crit, alert, or emerg.
  • pid

    It defines a file that will store the process ID of the main process.
  • worker_connections

    It sets the maximum number of simultaneous connections that can be opened by a worker process.

    Let's find the process ID of the NGINX web server:
   
   	ps aux | grep 'nginx'
   
NGINX PID check


As we can see, the PID of the NGINX worker process is 767. To inspect the process limits for this PID, issue the following command:

   
   	cat /proc/<PID>/limits
   

NGINX worker process limits check

In the screenshot above, you can see that the Max open files row has both a soft limit and a hard limit of 65535, so we will use 65535 as the value for the worker_connections directive. On your machine the limit might be different; assign the value accordingly. (A one-line command that reads this limit directly is sketched after this list.)

  • log_format

    It defines the format in which client requests are written to the access log files.
  • access_log

    All client requests are logged to the file specified by this directive.
  • keepalive_timeout

    It sets a timeout during which a keep-alive client connection will stay open on the server side. After the specified period of time, NGINX closes the connection with the client.
  • server

    It sets the configuration for a virtual server in the http context. We can define multiple virtual servers.
  • listen

    It sets the IP address and port on which the server will accept client requests.
  • server_name

    It sets the name of a virtual server. Usually this is the domain name on which we want to listen for client requests. We can specify multiple values here.
  • location

    It sets the configuration based on the request URI. Each location block can either serve a file or proxy the request to a backend server. We can also rewrite URIs as needed.
  • root

    It specifies the folder on the filesystem from which static files will be served.
  • proxy_pass

    It passes client requests on to the specified proxied (backend) server. The response from the backend server is then returned to the client.
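
As an aside to the worker_connections item above, the open-files limit can be read in a single step by combining the PID file with the limits check. A small sketch, assuming the pid path /var/run/nginx.pid from the configuration above (worker processes normally inherit the master process's limits):

   
   	# print the "Max open files" limits of the NGINX master process
   	sudo grep "Max open files" /proc/$(cat /var/run/nginx.pid)/limits
   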

Let's do some essential modifications in the configuration file:

For ease of debugging, we will create multiple error log files with different log levels. This will help us massively when problems arise on the server: based on the error severity, we can check the associated error log file.

   
   	error_log   /var/log/nginx/error.log;
	error_log   /var/log/nginx/error_debug.log debug;
	error_log   /var/log/nginx/error_extreme.log emerg;
	error_log   /var/log/nginx/error_critical.log crit;

   
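
Once these files exist, an easy way to watch errors as they happen is to follow the relevant log while reproducing the problem; pick the file that matches the severity you care about:

   
   	sudo tail -f /var/log/nginx/error_debug.log
   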

Let's make this web server a reverse proxy for our Node.js application. Here, we use the proxy_pass directive to forward client requests whose path starts with /api/ to the backend server.

   
	  server {
	  	...
	  	location /api/ {
	      proxy_pass      http://PRIVATE_IP_APPLICATION_SVC:3000;
	    }
	    ...
	  }
   

Replace PRIVATE_IP_APPLICATION_SVC with the private IP address of the backend application server.
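
Optionally, we can also forward information about the original client to the backend using the proxy_set_header directive; note that the log_format defined earlier already references an X-Forwarded-For header. A minimal sketch (whether your backend actually reads these headers depends on the application):

   
	  	location /api/ {
	      proxy_pass        http://PRIVATE_IP_APPLICATION_SVC:3000;
	      # forward the original Host header and client address to the backend
	      proxy_set_header  Host              $host;
	      proxy_set_header  X-Real-IP         $remote_addr;
	      proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
	    }
   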

Let's modify the server block to listen on both IPv4 and IPv6 addresses and make this server block the default one. Any client request whose Host header does not match a server_name value will be served by the server block carrying the default_server parameter.

   
   	 server {
	    listen 80 default_server;
	    listen [::]:80 default_server;
	    ...
	}
   

For security purposes, let's deny any client requests for hidden files (files whose names begin with a dot).

   
   	 location ~ /\. {
	    deny all;
	}
   
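
Once the updated configuration has been reloaded (shown later in this chapter), we can verify this rule from the command line; requests for any dot-file should now receive a 403 Forbidden response. The .env file name below is just an arbitrary example, and the IP address is the one used earlier:

   
   	curl -I http://54.184.95.134/.env
   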

Our final nginx.conf file looks like below:

   
   	 user nginx;
	worker_processes  auto;

	error_log   /var/log/nginx/error.log;
	error_log   /var/log/nginx/error_debug.log debug;
	error_log   /var/log/nginx/error_extreme.log emerg;
	error_log   /var/log/nginx/error_critical.log crit;

	pid        /var/run/nginx.pid;

	events {
	    worker_connections  65535;
	}

	http {
	    include       /etc/nginx/mime.types;
	    default_type  application/octet-stream;
	    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
	                      '$status $body_bytes_sent "$http_referer" '
	                      '"$http_user_agent" "$http_x_forwarded_for"';
	    access_log  /var/log/nginx/access.log  main;
	    sendfile        on;
	    keepalive_timeout  65;
	    include /etc/nginx/conf.d/*.conf;
	}
   

Our final default.conf file looks like below:

   
   	 server {
	    listen 80 default_server;
	    listen [::]:80 default_server;
	    server_name  localhost;
	    location / {
	        root   /usr/share/nginx/html;
	        index  index.html index.htm;
	    }

	    location /api/ {
	        proxy_pass      http://172.26.5.60:3000;
	    }

	    location ~ /\. {
	        deny all;
	    }

	    error_page   500 502 503 504  /50x.html;
	    location = /50x.html {
	        root   /usr/share/nginx/html;
	    }
	}
   

To check the syntax of NGINX configuration, issue the following command:

   
   	sudo nginx -t
   
NGINX status check

After making configuration changes, we need to reload the configuration. To reload it, issue the following command:

   
   	sudo nginx -s reload
   
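
Alternatively, since NGINX is managed as a systemd service on both Amazon Linux 2 and Ubuntu, the same reload can be triggered through systemctl:

   
   	sudo systemctl reload nginx
   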

Now, let's make a request to the hotel GET API endpoint from the browser.

NGINX API request

As you can see, we get an empty array response from our backend server, to which the client's GET request was proxied by the NGINX web server. That's it for this chapter.

In our next chapter, we will discuss and enhance our server deployment architecture diagram.
