
      Create a MEAN app with Angular and Docker Compose

      Introduction

      Note: Updated 30/03/2019

      This article has been updated to reflect changes in both Docker and Angular since it was first written. The current version of Angular is 7, and the update also adds an attached Docker volume to the Angular client so that you don’t need to run docker-compose build every time you make a change.

      Docker allows us to run applications inside containers. These containers in most cases communicate with each other.

      Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.

      We’ll build an Angular app in one container and point it to an Express API in a second container, which in turn connects to MongoDB running in a third container.

      If you haven’t worked with Docker before, this would be a good starting point as we will explain every step covered, in some detail.

      Why Use Docker

      1. Docker images usually include only what your application needs to run. As a result, you don’t have to worry about having a whole operating system with things you will never use. This results in smaller images of your application.
      2. Platform Independent – I bet you’ve heard the phrase ‘It works on my machine but doesn’t work on the server’. With Docker, all either environment needs is the Docker Engine (the Docker daemon), and once we have a successful build of our image, it should run anywhere.
      3. Once you have an image of your application built, you can easily share the image with anyone who wants to run your application. They need not worry about dependencies, or setting up their individual environments. All they need to have is Docker Engine installed.
      4. Isolation – You’ll see from the article that I try to separate the individual apps so they are independent and only point to each other. The reason behind this is that each part of our entire application should be somewhat independent, and scalable on its own. Docker in this instance makes scaling these individual parts as easy as spinning up another instance of their images. This concept of building isolated, independently scalable parts of an entire system is what is called the Microservices Approach. You can read more about it in Introduction to Microservices.
      5. Docker images usually have tags, referring to their versions. This means you can have versioned builds of your image, enabling you to roll back to a previous version should something unexpected break.

      You need to have docker and docker-compose installed on your system. Instructions for installing docker on your platform can be found here.

      Instructions for installing docker-compose can be found here.

      Verify your installation by running:

      1. docker -v

      Output

      Docker version 18.09.2, build 6247962
      1. docker-compose -v

      Output

      docker-compose version 1.23.2, build 1110ad01
      1. node -v

      Output

      v11.12.0

      Next, you need to know how to build a simple Angular app and an Express App. We’ll be using the Angular CLI to build a simple app.

      We’ll now separately build out these three parts of our app. The approach we are going to take is building the app in our local environment, then dockerizing the app.

      Once these are running, we’ll connect the three docker containers. Note that we are only building two images ourselves: the Angular app and the Express/Node API. The third container will run a MongoDB image that we’ll just pull from Docker Hub.

      Docker Hub is a repository for docker images. It’s where we pull official docker images such as MongoDB, Node.js, and Ubuntu, and we can also create custom images and push them to Docker Hub for others to pull and use.

      Let’s create a directory for our whole setup; we’ll call it mean-docker.

      1. mkdir mean-docker

      Next, we’ll create an Angular app and make sure it runs in a docker container.

      Create a directory called angular-client inside the mean-docker directory we created above, and initialize an Angular App with the Angular CLI.

      We’ll use npx, a tool that allows us to run CLI apps without installing them globally. It comes bundled with npm since npm version 5.2.0.

      1. npx @angular/cli new angular-client
      ? Would you like to add Angular routing? No
      ? Which stylesheet format would you like to use? CSS
      

      This scaffolds an Angular app, and npm installs the app’s dependencies. Our directory structure should look like this:

      └── mean-docker
          └── angular-client
              ├── README.md
              ├── angular.json
              ├── e2e
              ├── node_modules
              ├── package.json
              ├── package-lock.json
              ├── src
              ├── tsconfig.json
              └── tslint.json
      

      Running npm start inside the angular-client directory should start the angular app at http://localhost:4200.

      To dockerize any app, we usually need to write a Dockerfile.

      A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.

      Let’s quickly brainstorm what our angular app needs in order to run:

      1. We need an image with Node.js installed on it
      2. We could have the Angular CLI installed on the image, but the package.json file has it as a dependency, so it’s not a requirement.
      3. We can add our Angular app to the image and install its dependencies.
      4. It needs to expose port 4200 so that we can access it from our host machine through localhost:4200.
      5. If all these requirements are met, we can run npm start in the container (which in turn runs ng serve, since it’s a script in the package.json file), and our app should run.

      Those are the exact instructions we are going to write in our Dockerfile.

      mean-docker/angular-client/Dockerfile

      # Create image based on the official Node 10 image from dockerhub
      FROM node:10
      
      # Create a directory where our app will be placed
      RUN mkdir -p /app
      
      # Change directory so that our commands run inside this new directory
      WORKDIR /app
      
      # Copy dependency definitions
      COPY package*.json /app/
      
      # Install dependencies
      RUN npm install
      
      # Get all the code needed to run the app
      COPY . /app/
      
      # Expose the port the app runs in
      EXPOSE 4200
      
      # Serve the app
      CMD ["npm", "start"]
      

      I’ve commented on the file to clearly show what each instruction does.

      Note: Before we build the image, if you are keen, you may have noticed that the line COPY . /app/ copies our whole directory into the container, including node_modules. To ignore this, and other files that are irrelevant to our container, we can add a .dockerignore file and list what is to be ignored. This file is often identical to the .gitignore file.

      Create a .dockerignore file.

      mean-docker/angular-client/.dockerignore

      node_modules/
      

      One last thing we have to do before building the image is to ensure that the app is served on all network interfaces (0.0.0.0), so it is reachable from outside the container. To do this, go into your package.json and change the start script to:

      mean-docker/angular-client/package.json

      {
       ...
        "scripts": {
          "start": "ng serve --host 0.0.0.0",
          ...
        },
        ...
      }
      

      To build the image we will use docker build command. The syntax is

      1. docker build -t <image_name>:<tag> <directory_with_Dockerfile>

      Make sure you are in the mean-docker/angular-client directory, then build your image.

      1. cd angular-client
      1. docker build -t angular-client:dev .

      -t is short for --tag, and refers to the name and tag given to the image being built. In this case the tag will be angular-client:dev.

      The . (dot) at the end refers to the current directory. Docker will look for the Dockerfile in our current directory and use it to build an image.

      This could take a while depending on your internet connection.

      Now that the image is built, we can run a container based on that image, using this syntax

      1. docker run -d --name <container_name> -p <host-port:exposed-port> <image-name>

      The -d flag tells docker to run the container in detached mode. Meaning, it will run and get you back to your host, without going into the container.

      1. docker run -d --name angular-client -p 4200:4200 angular-client:dev

      Output

      8310253fe80373627b2c274c5a9de930dc7559b3dc8eef4abe4cb09aa1828a22

      --name refers to the name that will be assigned to the container.

      -p or --publish refers to how a port on our host machine maps to a port in the docker container. In this case, localhost:4200 should point to the container’s port 4200, and thus the syntax 4200:4200 (host:container).

      Visiting localhost:4200 in your host browser should now serve the angular app from the container.

      You can stop the container running with:

      1. docker stop angular-client

      We’ve containerized the angular app, so we are now two steps away from our complete setup.

      Containerizing an express app should now be straightforward. Create a directory in the mean-docker directory called express-server.

      1. mkdir express-server

      Add the following package.json file inside the app.

      mean-docker/express-server/package.json

      {
        "name": "express-server",
        "version": "0.0.0",
        "private": true,
        "scripts": {
          "start": "node server.js"
        },
        "dependencies": {
          "body-parser": "~1.15.2",
          "express": "~4.14.0"
        }
      }
      

      Then, we’ll create a simple express app inside it. Create a server.js file and a routes/api.js file:

      1. cd express-server
      1. touch server.js
      1. mkdir routes && cd routes
      1. touch api.js

      mean-docker/express-server/server.js

      
      const express = require('express');
      const path = require('path');
      const http = require('http');
      const bodyParser = require('body-parser');
      
      
      const api = require('./routes/api');
      
      const app = express();
      
      
      app.use(bodyParser.json());
      app.use(bodyParser.urlencoded({ extended: false }));
      
      
      app.use('/', api);
      
      
      const port = process.env.PORT || '3000';
      app.set('port', port);
      
      
      const server = http.createServer(app);
      
      
      server.listen(port, () => console.log(`API running on localhost:${port}`));
      
      mean-docker/express-server/routes/api.js

      const express = require('express');
      const router = express.Router();
      
      
      router.get('/', (req, res) => {
          res.send('api works');
      });
      
      module.exports = router;
      

      This is a simple express app. From the express-server directory, install the dependencies and start the app:

      1. npm install
      1. npm start

      Going to localhost:3000 in your browser should serve the app.

      To run this app inside a Docker container, we’ll also create a Dockerfile for it. It should be pretty similar to what we already have for the angular-client.

      mean-docker/express-server/Dockerfile

      # Create image based on the official Node 6 image from the dockerhub
      FROM node:6
      
      # Create a directory where our app will be placed
      RUN mkdir -p /usr/src/app
      
      # Change directory so that our commands run inside this new directory
      WORKDIR /usr/src/app
      
      # Copy dependency definitions
      COPY package.json /usr/src/app
      
      # Install dependencies
      RUN npm install
      
      # Get all the code needed to run the app
      COPY . /usr/src/app
      
      # Expose the port the app runs in
      EXPOSE 3000
      
      # Serve the app
      CMD ["npm", "start"]
      

      You can see the file is pretty much the same as the angular-client Dockerfile, except for the base image version, the working directory, and the exposed port.

      You could also add a .dockerignore file to ignore files we do not need.

      mean-docker/express-server/.dockerignore

      node_modules/
      

      We can then build the image and run a container based on the image with:

      1. docker build -t express-server:dev .
      1. docker run -d --name express-server -p 3000:3000 express-server:dev

      Going to localhost:3000 in your browser should serve the API.

      Once you are done, you can stop the container with

      1. docker stop express-server

      The last part of our MEAN setup, before we connect them all together, is MongoDB. We don’t need a Dockerfile to build a MongoDB image, because an official one already exists on Docker Hub. We only need to know how to run it.

      Assuming we had a MongoDB image already, we’d run a container based on the image with

      1. docker run -d --name mongodb -p 27017:27017 mongo

      The image name in this instance is mongo, the last parameter, and the container name will be mongodb.

      Docker will check whether you have a mongo image already downloaded or built. If not, it will pull the image from Docker Hub. If you run the above command, you should have a mongodb instance running inside a container.

      To check if MongoDB is running, go to http://localhost:27017 in your browser, and you should see this message: “It looks like you are trying to access MongoDB over HTTP on the native driver port.”

      Alternatively, if you have the mongo client installed on your host machine, run mongo in the terminal; it should connect and give you the mongo shell without any errors.

      To connect and run multiple containers with docker, we use Docker Compose.

      Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application’s services. Then, using a single command, you create and start all the services from your configuration.

      docker-compose is usually installed when you install docker. To check whether you have it installed, run:

      1. docker-compose

      You should see a list of commands from docker-compose. If not, you can go through the installation instructions here.

      Note: Ensure that you have docker-compose version 1.6 and above by running docker-compose -v

      Create a docker-compose.yml file at the root of our setup.

      1. touch docker-compose.yml

      Our directory tree should now look like this.

      .
      ├── angular-client
      ├── docker-compose.yml
      └── express-server
      

      Then edit the docker-compose.yml file

      mean-docker/docker-compose.yml

      version: '2' 
      
      
      services:
        angular: 
          build: angular-client 
          ports:
            - "4200:4200" 
      
        express: 
          build: express-server 
          ports:
            - "3000:3000" 
      
        database: 
          image: mongo 
          ports:
            - "27017:27017" 
      

      The docker-compose.yml file is a simple configuration file telling docker-compose which images to build and which containers to run. That’s pretty much it.

      Now, to run containers based on the three images, simply run

      1. docker-compose up

      This will build the images if they are not already built, and then run them. Once it’s running, your terminal will show logs from all three containers.

      You can visit all three apps: http://localhost:4200, http://localhost:3000, and MongoDB on localhost:27017. You’ll see that all three containers are running.

      Finally, the fun part.

      Express and MongoDB

      We now finally need to connect the three containers. We’ll first create a simple CRUD feature in our API using mongoose. You can go through Easily Develop Node.js and MongoDB Apps with Mongoose to get a more detailed explanation of mongoose.

      First of all, add mongoose to your express server package.json

      mean-docker/express-server/package.json

      {
        "name": "express-server",
        "version": "0.0.0",
        "private": true,
        "scripts": {
          "start": "node server.js"
        },
        "dependencies": {
          "body-parser": "~1.15.2",
          "express": "~4.14.0",
          "mongoose": "^4.7.0"
        }
      }
      

      We need to update our API to use MongoDB:

      mean-docker/express-server/routes/api.js

      
      const mongoose = require('mongoose');
      const express = require('express');
      const router = express.Router();
      
      
      const dbHost = 'mongodb://database/mean-docker';
      
      
      mongoose.connect(dbHost);
      
      
      const userSchema = new mongoose.Schema({
        name: String,
        age: Number
      });
      
      
      const User = mongoose.model('User', userSchema);
      
      
      
      router.get('/', (req, res) => {
              res.send('api works');
      });
      
      
      router.get('/users', (req, res) => {
          User.find({}, (err, users) => {
              if (err) return res.status(500).send(err);

              res.status(200).json(users);
          });
      });
      
      
      router.get('/users/:id', (req, res) => {
          User.findById(req.params.id, (err, user) => {
              if (err) return res.status(500).send(err);

              res.status(200).json(user);
          });
      });
      
      
      router.post('/users', (req, res) => {
          let user = new User({
              name: req.body.name,
              age: req.body.age
          });
      
          user.save(error => {
              if (error) return res.status(500).send(error);
      
              res.status(201).json({
                  message: 'User created successfully'
              });
          });
      });
      
      module.exports = router;
      

      Two main differences: first, our connection to MongoDB is the line const dbHost = 'mongodb://database/mean-docker';. The database host here is the same as the database service we create in the docker-compose file.

      We’ve also added the REST routes GET /users, GET /users/:id and POST /users.
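To make the request/response behaviour of those routes concrete without a running database, here is a small dependency-free sketch. The `db` array and `handle` function are illustrative stand-ins I’ve made up, with an in-memory array in place of MongoDB:

```javascript
// In-memory stand-in for the User collection
const db = [];

// Mirror the routes' logic as a plain function:
// (method, path, body) -> { status, body }
function handle(method, path, body) {
  if (method === 'GET' && path === '/users') {
    return { status: 200, body: db };
  }
  const match = path.match(/^\/users\/(.+)$/);
  if (method === 'GET' && match) {
    const user = db.find(u => u.id === match[1]);
    return user
      ? { status: 200, body: user }
      : { status: 404, body: { message: 'User not found' } };
  }
  if (method === 'POST' && path === '/users') {
    const user = { id: String(db.length + 1), name: body.name, age: body.age };
    db.push(user);
    return { status: 201, body: { message: 'User created successfully' } };
  }
  return { status: 404, body: { message: 'Not found' } };
}

// Example: create a user, then list users
console.log(handle('POST', '/users', { name: 'Ada', age: 36 }).status); // 201
console.log(handle('GET', '/users').body.length); // 1
```

The real handlers differ only in that the data lives in MongoDB and the lookups are asynchronous.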

      Update the docker-compose file, telling the express service to link to the database service.

      mean-docker/docker-compose.yml

      version: '2' 
      
      
      services:
        angular: 
          build: angular-client 
          ports:
            - "4200:4200" 
          volumes:
            - ./angular-client:/app 
      
        express: 
          build: express-server 
          ports:
            - "3000:3000" 
          links:
            - database
      
        database: 
          image: mongo 
          ports:
            - "27017:27017" 
      

      The links property of the docker-compose file creates a connection to the other service with the name of the service as the hostname. In this case database will be the hostname. Meaning, to connect to it from the express service, we should use database:27017. That’s why we made the dbHost equal to mongodb://database/mean-docker.
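As a side note, hard-coding the hostname ties the code to Compose. A common pattern is to read the database host from an environment variable and fall back to the service name. This is a sketch under assumptions: `DB_HOST` is a variable name I’ve made up, not something the tutorial defines.

```javascript
// Build the MongoDB connection string from an environment variable,
// falling back to the Compose service name ("database").
function buildDbHost(env) {
  const host = env.DB_HOST || 'database';
  return `mongodb://${host}/mean-docker`;
}

// Inside the express container (no DB_HOST set), this resolves to the
// service-name hostname. On your host machine you could export
// DB_HOST=localhost before running the server directly.
console.log(buildDbHost(process.env));
```

In api.js you would then call `mongoose.connect(buildDbHost(process.env))` instead of hard-coding the string.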

      Also, I’ve added a volume to the angular service. This will enable changes we make to the Angular app to automatically trigger recompilation in the container.

      Angular and Express

      The last part is to connect the Angular app to the express server. To do this, we’ll need to make some modifications to our angular app to consume the express API.

      Add the Angular HTTP Client.

      mean-docker/angular-client/src/app/app.module.ts

      import { BrowserModule } from '@angular/platform-browser';
      import { NgModule } from '@angular/core';
      import { HttpClientModule } from '@angular/common/http'; 
      
      import { AppComponent } from './app.component';
      
      @NgModule({
        declarations: [
          AppComponent
        ],
        imports: [
          BrowserModule,
          HttpClientModule 
        ],
        providers: [],
        bootstrap: [AppComponent]
      })
      export class AppModule { }
      

      mean-docker/angular-client/src/app/app.component.ts

      import { Component, OnInit } from '@angular/core';
      import { HttpClient } from '@angular/common/http';
      
      @Component({
        selector: 'app-root',
        templateUrl: './app.component.html',
        styleUrls: ['./app.component.css']
      })
      export class AppComponent implements OnInit {
        title = 'app works!';
      
        
        API = 'http://localhost:3000';
      
        
        people: any[] = [];
      
        constructor(private http: HttpClient) {}
      
        
        ngOnInit() {
          this.getAllPeople();
        }
      
        
        addPerson(name, age) {
          this.http.post(`${this.API}/users`, {name, age})
            .subscribe(() => {
              this.getAllPeople();
            })
        }
      
        
        getAllPeople() {
          this.http.get(`${this.API}/users`)
            .subscribe((people: any) => {
              console.log(people)
              this.people = people
            })
        }
      }
      

      Angular best practices guides usually recommend separating most logic into a service/provider. We’ve placed all the code in the component here for brevity.

      We’ve imported the OnInit interface to call events when the component is initialized, then added two methods, addPerson and getAllPeople, that call the API.

      Notice that this time around, our API points to localhost. This is because although the Angular app runs inside a container, it is served to the browser, and it is the browser that makes the requests. It will thus make requests to the exposed Express API. As a result, we don’t need to link Angular and Express in the docker-compose.yml file.

      Next, we need to make some changes to the template. First, add Bootstrap via CDN to the index.html.

      mean-docker/angular-client/src/index.html

      <!doctype html>
      <html>
      <head>
        <meta charset="utf-8">
        <title>Angular Client</title>
        <base href="/">
      
        <meta name="viewport" content="width=device-width, initial-scale=1">
      
        
        <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.2/css/bootstrap.min.css">
      
        <link rel="icon" type="image/x-icon" href="favicon.ico">
      </head>
      <body>
        <app-root>Loading...</app-root>
      </body>
      </html>
      

      Then update the app.component.html template

      mean-docker/angular-client/src/app/app.component.html

      
      <nav class="navbar navbar-light bg-faded">
        <div class="container">
          <a class="navbar-brand" href="#">Mean Docker</a>
        </div>
      </nav>
      
      <div class="container">
          <h3>Add new person</h3>
          <form>
            <div class="form-group">
              <label for="name">Name</label>
              <input type="text" class="form-control" id="name" #name>
            </div>
            <div class="form-group">
              <label for="age">Age</label>
              <input type="number" class="form-control" id="age" #age>
            </div>
            <button type="button" (click)="addPerson(name.value, age.value)" class="btn btn-primary">Add person</button>
          </form>
      
          <h3>People</h3>
          
          <div class="card card-block col-md-3" *ngFor="let person of people">
            <h4 class="card-title">{{person.name}}  {{person.age}}</h4>
          </div>
      </div>
      

      The above template shows the component’s properties and bindings. We are almost done.

      Since we’ve made changes to our code, we need to rebuild our Docker Compose images:

      1. docker-compose up --build

      The --build flag tells docker-compose that we’ve made changes and it needs to do a clean build of our images.

      Once this is done, go to localhost:4200 in your browser and open the developer console.

      You’ll see a No 'Access-Control-Allow-Origin' error. To quickly fix this, we need to enable Cross-Origin Resource Sharing (CORS) in our express app. We’ll do this with a simple middleware.

      mean-docker/express-server/server.js

      
      
      
      app.use(bodyParser.json());
      app.use(bodyParser.urlencoded({ extended: false }));
      
      
      app.use(function(req, res, next) {
        res.header("Access-Control-Allow-Origin", "*")
        res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept")
        next()
      })
      
      
      app.use('/', api);
      
      
      

      We can now run docker-compose again with the build flag. You should be in the mean-docker directory.

      1. docker-compose up --build

      Go to localhost:4200 in your browser, and the app should now be talking to the API.

      Note: I added an attached volume to the docker-compose file, and we now no longer need to rebuild the service every time we make a change.

      I bet you’ve learned a thing or two about MEAN or docker and docker-compose.

      The problem with our setup, however, is that any time we make changes to the express API, we need to run docker-compose up --build.

      This can get tedious or even boring over time. We’ll look at this in another article.

      Debugging Create React App Applications in Visual Studio Code

      Introduction

      In this post, we are going to create a Create React App application, then add configuration to debug it in Visual Studio Code.

      For a Create React App application, install the Debugger for Chrome extension, create a debug configuration in VS Code, and then run in debug mode.

      To debug a Create React App application, you will first need one to work with. I’ll provide the basic steps, but for more reference on how to get started, look at the Create React App page.

      First, you’ll need to install the Create React App.

      1. npm install -g create-react-app

      After that finishes, you’ll need to actually generate your new application. This will take a bit as it needs to install lots of npm packages.

      1. create-react-app my-app

      Open the project in VS Code and you should see the following.

      Create React App Project in VS Code

      Now that you’ve got your new fancy React app, go ahead and run it to make sure everything looks right.

      1. npm start

      Should look like this.

      Create React App Project Running

      Assuming you’ve made it this far, we are ready to start debugging! Before we do, however, it’s worth understanding how configuring debugging in VS Code works. Basically, debug configurations are saved in a launch.json file which is stored inside of a .vscode folder. This .vscode folder is used to store different configurations for Code including our required debugging stuff.

      Before you create your debug configuration, you need to install the Debugger for Chrome extension. Find and install this extension from the extension tab in VS Code. After installing, reload VS Code.

      Debugger for Chrome

      Now, to create a debug configuration, you can open the debug panel (the bug-looking button on the left panel). At the top of the debug panel, you should see a dropdown that says “No Configurations”.

      Create Debug Configurations

      To the right of that dropdown, there is a gear icon. Click this button to have VS Code automatically generate that .vscode folder and launch.json file mentioned above.

      Then choose Chrome.

      Choose Chrome Debug Configuration

      You should get the following configuration created for you.

      Create React App Debug Configuration

      The only thing we need to do is update the port from 8080 to 3000.

      Updated Create React App Debug Configuration
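For reference, the resulting .vscode/launch.json should look something like this. The field values below are what the Debugger for Chrome extension typically generates, with only the port updated, so treat it as a sketch rather than an exact transcript:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "chrome",
      "request": "launch",
      "name": "Launch Chrome against localhost",
      "url": "http://localhost:3000",
      "webRoot": "${workspaceFolder}"
    }
  ]
}
```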

      Now we’re ready! Go ahead and click the play button at the top of the Debug panel, which will launch an instance of Chrome in debug mode. Keep in mind your app should already be running from the npm start earlier. In VS Code, you should see the Debug toolbar pop up.

      With this up and running, you can set a breakpoint in your App.js. Open up your App.js and add a breakpoint inside of the render function by clicking in the gutter (to the left of the line numbers). Should look like this.

      Now, refresh debugging by clicking the refresh button on the debugging toolbar. This should open your application again and trigger this breakpoint. You should be directed back to VS Code directly to the place where you set your breakpoint.

      From here, you can set more breakpoints, inspect variables, etc. If you are interested in learning more about debugging JavaScript in general in either Chrome or VS Code you can check out Debugging JavaScript in Chrome and Visual Studio Code.

      If you have any follow-up questions or comments, leave one below or find me on Twitter @jamesqquick.

      For video content, check out my YouTube Channel

      Deploying a Node App to Digital Ocean

      Introduction

      There are various platforms that help with deploying Node.js apps to production.

      In this tutorial, we’ll look at how to deploy a Node.js app to DigitalOcean. Compared to these other platforms, DigitalOcean is cheaper, and you can also log on to your server and configure it however you like.

      You get more control over your deployment, and it’s a great way to see exactly how Node apps are deployed to production.

      This tutorial assumes the following:

      Let’s quickly build a sample app that we’ll use for the purpose of this tutorial. It’s going to be a pretty simple app.

      1. // create a new directory
      2. mkdir sample-nodejs-app
      3. // change to new directory
      4. cd sample-nodejs-app
      5. // Initialize npm
      6. npm init -y
      7. // install express
      8. npm install express
      9. // create an index.js file
      10. touch index.js

      Open index.js and paste the code below into it:

      
      
      const express = require('express')
      const app = express()
      
      app.get('/', (req, res) => {
        res.send('Hey, I\'m a Node.js app!')
      })
      
      app.listen(3000, () => {
        console.log('Server is up on 3000')
      })
      

      You can start the app with:

      1. node index.js

      And access it on http://localhost:3000.

      You should get:

      Output

      Hey, I'm a Node.js app!

      The complete code is available on GitHub.

      Now let’s take our awesome app to production.

      Log in to your DigitalOcean account and create a new droplet (server). We’ll be going with One-click apps. Select Node.js as shown below:

      Next, we’ll choose the $10 plan. The app would work perfectly on the $5 plan, but we wouldn’t be able to install the npm dependencies, because npm requires at least 1GB of RAM to install dependencies. There is a way around this by creating swap memory, but that’s beyond the scope of this tutorial.

      Next, select a datacenter region, we’ll go with the default:

      Next, add a new SSH key or choose from the existing ones that you have added. You can get your SSH key by running the command below on your local computer:

      1. cat ~/.ssh/id_rsa.pub

      The command above will print your SSH key on the terminal, which you can then copy and paste in the SSH Key Content field. Also, give your SSH key a name.

      Finally, choose a hostname for the droplet and click the Create button.

      After a couple of seconds, you’ll have your new server up and running on Ubuntu 16.04 with Node.js version 6.11.2. Note the IP address of the server, as we’ll be using it to access the server.

      Before we start configuring the server for the app, let’s quickly create a non-root user, which we’ll use for the rest of the tutorial.

      Note: As a security measure, it is recommended to carry out tasks on your server as a non-root user with administrative privileges.

      First, we need to log in to the server as root. We can do that using the server’s IP address:

      1. ssh root@SERVER_IP_ADDRESS

      Once we are logged in to the server, we can move on to create a new user:

      1. adduser mezie

      This will create a new user called mezie, you can name the user whatever you like. You will be asked a few questions, starting with the account password.

      Having created the new user, we need to give it administrative privileges. That is, the user will be able to carry out administrative tasks by using the sudo command:

      1. usermod -aG sudo mezie

      The command above adds the user mezie to the sudo group.

      Now the user can run commands with superuser privileges.

      You need to copy your public key to your new server. Enter the command below on your local computer:

      1. cat ~/.ssh/id_rsa.pub

      This will print your SSH key to the terminal, which you can then copy.

      For the new user to log in to the server with SSH key, we must add the public key to a special file in the user’s home directory.

      Still logged in as root on the server, enter the following command:

      1. su - mezie

      This will temporarily switch to the new user. Now you’ll be in your new user’s home directory.

      Next, we need to create a new directory called .ssh and restrict its permission:

      1. mkdir ~/.ssh
      2. chmod 700 ~/.ssh

      Next, within the .ssh directory, create a new file called authorized_keys:

      1. touch ~/.ssh/authorized_keys

      Next, open the file with vim:

      1. vim ~/.ssh/authorized_keys

      Next, paste your public key (copied above) into the file. To save the file, hit ESC to exit insert mode, then type :wq and press ENTER.

      Next, restrict the permissions of the authorized_keys file with this command:

      1. chmod 600 ~/.ssh/authorized_keys
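      These modes are not arbitrary: sshd will refuse to use authorized_keys if the file or the .ssh directory is readable by other users. If you ever need to double-check them, the octal modes can be printed with stat; a quick demonstration on throwaway paths (not your real key files):

```shell
# Recreate the permission setup on throwaway paths and print the modes
mkdir -p /tmp/demo_home/.ssh
chmod 700 /tmp/demo_home/.ssh                    # directory: owner only
touch /tmp/demo_home/.ssh/authorized_keys
chmod 600 /tmp/demo_home/.ssh/authorized_keys    # file: owner read/write only

# %a prints the octal permission bits (GNU coreutils stat)
stat -c '%a %n' /tmp/demo_home/.ssh /tmp/demo_home/.ssh/authorized_keys
```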

      Type the command below to return to the root user:

      1. exit

      Now your public key is installed, and you can use SSH keys to log in as your user.

      To make sure you can log in as the new user with SSH, enter the command below in a new terminal on your local computer:

      1. ssh mezie@SERVER_IP_ADDRESS

      If all went well, you’ll be logged in to the server as the new user with SSH.

      The rest of the tutorial assumes you are logged in to the server with the new user created (mezie in my case).

      We are going to clone the app onto the server, directly in the user’s home directory (that is, /home/mezie in my case):

      1. git clone https://github.com/ammezie/sample-nodejs-app.git

      Next, we install the dependencies:

      1. cd sample-nodejs-app
      2. npm install

      Once the dependencies are installed we can test the app to make sure everything is working as expected. We’ll do so with:

      1. node index.js

      The app is listening on port 3000 and can be accessed at http://localhost:3000. To test the app is actually working, open a new terminal (still on the server) and enter the command below:

      1. curl http://localhost:3000

      You should get an output as below:

      1. Hey, I'm a Node.js app!

      Good! The app is up and running fine. But whenever the app crashes, we’d need to start it again manually, which is not a recommended approach. So, we need a process manager to help us start the app and restart it whenever it crashes. We’ll use PM2 for this.

      We’ll install it globally through npm:

      1. sudo npm install -g pm2

      With PM2 installed, we can start the app with it:

      1. pm2 start index.js

      Once the app is started you will get an output from PM2 indicating the app has started.

      To launch PM2 on system startup or reboot, enter the command below:

      1. pm2 startup systemd

      You’ll get the following output:

      1. [PM2] Init System found: systemd
      2. [PM2] To setup the Startup Script, copy/paste the following command:
      3. sudo env PATH=$PATH:/usr/local/bin /usr/local/lib/node_modules/pm2/bin/pm2 startup systemd -u mezie --hp /home/mezie

      Copy and run the last command from the output above:

      1. sudo env PATH=$PATH:/usr/local/bin /usr/local/lib/node_modules/pm2/bin/pm2 startup systemd -u mezie --hp /home/mezie

      Now PM2 will start at boot up. Finally, save the current PM2 process list so that PM2 knows which processes to resurrect after a reboot:

      1. pm2 save

      Next, we’ll install Nginx as the web server to use as a reverse proxy, which will allow us to access the app directly with an IP address or domain instead of appending the port to the IP address.

      1. sudo apt-get update
      2. sudo apt-get install nginx

      Because we chose One-click Apps while creating our Droplet, the ufw firewall is set up and running. Now, we need to open the firewall for HTTP only, since we are not concerned with SSL in this tutorial:

      1. sudo ufw allow 'Nginx HTTP'

      Finally, we set up Nginx as a reverse proxy server. To do this, run:

      1. sudo vim /etc/nginx/sites-available/default

      Within the server block you should have an existing location / block. Replace the contents of that block with the following configuration:

      // /etc/nginx/sites-available/default
      
      ...
      location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:3000;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
      }
      

      Save and exit vim.
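      One note on the configuration above: it is enough for plain HTTP traffic, but if the app ever uses WebSockets, Nginx must also forward the connection-upgrade headers. A sketch of the extra directives for that case (only add them if you actually need WebSocket support):

```nginx
location / {
    # ... existing proxy_set_header / proxy_pass lines from above ...

    # Extra directives required for WebSocket connections
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```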

      Test to make sure there are no syntax errors in the configuration by running:

      1. sudo nginx -t

      Then restart Nginx:

      1. sudo systemctl restart nginx

      Now you should be able to access the app in your browser with your server’s IP address.

      In this tutorial, we have seen how to deploy a Node.js app to DigitalOcean. We also saw how to set up a reverse proxy server with Nginx.