Deploying a Node.js application in place of a "real" webserver

Date: Wed Jun 14 2017
Tags: Node.js

Node.js is an exciting new software stack for developing web applications, or for implementing a server for any other sort of network protocol. Most people, though, will probably use it to deploy web applications accessed like any others, much as with the traditional LAMP/PHP approach. The obvious question, then, is: what's the best way to deploy a Node.js web application? Indeed, this very question was asked just today.

I've covered this question in some detail in my book, Node Web Development (due out any time now), and I recommend that book for a fuller treatment.

The first consideration is that Node.js does not provide a full web server implementation out of the box. It provides all the pieces required to build a proper web server, but it's like getting a box full of Lego bricks: some assembly is required.

Node.js comes with a comprehensive HTTP(S) implementation, both server and client. While that's the core of a web server, you as the application developer still have to do quite a bit. "Real" web servers like Apache or Nginx bake in features like authentication protocols, and the zillion other expected features that have accumulated over the Web's twenty years. These features do not exist in Node's http module; some (but not all) of them have been implemented in third-party modules, which again leaves you with some assembly required.

The Express framework implements a heck of a lot of the expected features of a proper web server.

The next consideration is integrating the Node application with normal background-process management: an init script, or the equivalent for your server OS. If your OS uses traditional /etc/init.d-style scripts, it's straightforward to copy one of those scripts and change it to invoke Node to run your application.
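A skeletal sketch of such an init script, assuming hypothetical paths and an app named "myapp" (none of which are from the original post), might look like this:

```shell
#!/bin/sh
# Hypothetical /etc/init.d/myapp -- all paths and names are assumptions.
NODE=/usr/local/bin/node
APP=/var/www/myapp/app.js
PIDFILE=/var/run/myapp.pid

case "$1" in
  start)
    echo "Starting myapp"
    $NODE $APP &
    echo $! > $PIDFILE
    ;;
  stop)
    echo "Stopping myapp"
    kill "$(cat $PIDFILE)" && rm -f $PIDFILE
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
```

A real script would also handle restart, status, and log redirection, but the core job is just starting and stopping the Node process.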

The next consideration is that Node has a single execution thread, so it will consume only one core of your server's CPU. This means Node puts the question of how to scale the application front and center in your mind. Scaling your application across multiple cores on the same server is much the same task as scaling it out to multiple servers in a cloud deployment.

The Cluster project is one Node toolkit for building a scaled Node based server to handle multiple cores on the same system.

On the Dreamhost VPS I use to host this website, getting a Node server to run on port 80 would be tricky. That's because Dreamhost is a semi-managed VPS, and Dreamhost VPS users have to take a circuitous route to deploy anything other than Apache or Nginx on port 80. You, in your situation, may have other reasons limiting your choices: perhaps you cannot deploy Node so that it directly answers port 80, and instead must host some other server on port 80 and use its proxy feature to make the Node process appear to be on port 80.

It might be tempting to configure an Apache reverse proxy and run the Node process on some other port. But stop and think about that for a second. Node is a highly asynchronous, non-threaded, blisteringly fast server, and Apache is a threaded, heavyweight, not-exactly-fast server. Node was created partly as a remedy for Apache. Putting Apache in front of Node as a proxy is a bit like painting your Ferrari in concrete: it might still move down the highway, just not very well.

Putting Node behind Nginx using a proxy configuration probably makes more sense, since Nginx is a similarly asynchronous, non-threaded server.
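A minimal sketch of such an Nginx proxy configuration, assuming the Node process listens on a back-end port like 3000 and a placeholder hostname (both are assumptions for illustration):

```nginx
server {
    listen 80;
    server_name example.com;  # placeholder hostname

    location / {
        # Forward everything to the Node process on its back-end port.
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

With this in place, visitors talk to Nginx on port 80 while the Node process runs unprivileged on its own port.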