
      Setting Up a MEAN Stack Single Page Application

      Introduction

      Beginning an application from scratch can sometimes be the hardest thing to do. Staring at an empty folder and a file with no code in it yet can be a very daunting thing.

      In today’s tutorial, we will be looking at the starting setup for a Node.js, AngularJS, MongoDB, and Express application (otherwise known as MEAN). I put those in the wrong order, I know.

      This will be a starting point for those that want to learn how to begin a MEAN stack application. Projects like mean.io and meanjs.org are more fully-fledged MEAN applications with many great features you’d want for a production project.

      You will be able to start from absolute scratch and create a basic application structure that will allow you to build any sort of application you want.

      This article has been updated to work with Express 4.0

      A lot of the applications we’ve dealt with so far have had a specific function, like our Node and Angular To-Do Single Page Application. We are going to step away from that and build just a good old getting-started application.

      This will be very barebones but hopefully, it will help you set up your applications. Let’s just call it a starter kit.

      Application Requirements

      This tutorial will be more based on application structure and creating a solid foundation for single-page MEAN stack applications. For more information on CRUD, authentication, or other topics in MEAN apps we’ll make sure to write other tutorials to fill those gaps.

      Three letters out of the MEAN stack will be handled on the backend, our server. We will create our server, configure our application, and handle application routing.

      Tools Required

      We will need Node installed and, to make our lives easier, we’ll use bower to pull in all our dependencies.

      Bower isn’t really necessary. You could pull in all the files we need yourself (bootstrap, angular, angular-route), but bower just gets them all for you! For more info, read our guide on bower to get a better understanding.

      By the end of this tutorial, we will have a basic application structure that will help us develop our Node backend along with our Angular frontend. Here’s what it will look like.

              - app
                  ----- models/
                  ---------- nerd.js <!-- the nerd model to handle CRUD -->
              ----- routes.js
              - config
                  ----- db.js
              - node_modules <!-- created by npm install -->
              - public <!-- all frontend and angular stuff -->
              ----- css
              ----- js
              ---------- controllers <!-- angular controllers -->
              ---------- services <!-- angular services -->
              ---------- app.js <!-- angular application -->
              ---------- appRoutes.js <!-- angular routes -->
              ----- img
              ----- libs <!-- created by bower install -->
              ----- views
              ---------- home.html
              ---------- nerd.html
              ---------- geek.html
              ----- index.html
              - .bowerrc <!-- tells bower where to put files (public/libs) -->
              - bower.json <!-- tells bower which files we need -->
              - package.json <!-- tells npm which packages we need -->
              - server.js <!-- set up our node application -->
      

      We’ll fill in our files within this folder structure. All backend work is done in server.js, app, and config, while all the frontend is handled in the public folder.

      All Node applications will start with a package.json file so let’s begin with that.

              {
                "name": "starter-node-angular",
                "main": "server.js",
                "dependencies": {
                  "express" : "~4.5.1",
                  "mongoose" : "~3.8.0",
                  "body-parser" : "~1.4.2",
                  "method-override" : "~2.0.2"
                }
              }
      

      That’s it! Now our application will know that we want to use Express and Mongoose.

      Express is a Node.js web application framework that will help us create our application. Mongoose is an ODM (object data modeling library) that will help us communicate with our MongoDB database.

      Install Node Modules

      To install the dependencies we just set up, go into your console and type:

      1. npm install

      You’ll see npm bring those modules into the node_modules directory that it creates.

      Now that we have those, let’s set up our application in server.js.

      Since this is our starter kit for a single-page MEAN application, we are going to keep this simple. The entire code for the file is here and it is commented for help understanding.

      server.js

          
          // modules =================================================
          var express        = require('express');
          var app            = express();
          var bodyParser     = require('body-parser');
          var methodOverride = require('method-override');

          // configuration ===========================================

          // config files
          var db = require('./config/db');

          // set our port
          var port = process.env.PORT || 8080;

          // get all data/stuff of the body (POST) parameters

          // parse application/json
          app.use(bodyParser.json());

          // parse application/vnd.api+json as json
          app.use(bodyParser.json({ type: 'application/vnd.api+json' }));

          // parse application/x-www-form-urlencoded
          app.use(bodyParser.urlencoded({ extended: true }));

          // override with the X-HTTP-Method-Override header in the request
          app.use(methodOverride('X-HTTP-Method-Override'));

          // set the static files location so /public/img will be /img for users
          app.use(express.static(__dirname + '/public'));

          // routes ==================================================
          require('./app/routes')(app);

          // start app ===============================================
          app.listen(port);

          // shoutout to the user
          console.log('Magic happens on port ' + port);

          // expose app
          exports = module.exports = app;
      

      We have now pulled in our modules, configured our application (database config and some Express settings), wired up our routes, and started our server. Notice how we didn’t pull in mongoose here. There’s no need for it yet. We will be using it in our model that we will define soon.

      Let’s look at config, a quick model, and routes since we haven’t created those yet. Those will be the last things that the backend side of our application needs.

      Config config/

      I know it doesn’t seem like much now since we are only putting the db.js config file here, but this is more for demonstration purposes. In the future, you may want to add more config files and call them in server.js, so this is how we will do it.

      config/db.js

              module.exports = {
                  url : 'mongodb://localhost/stencil-dev'
              }
      

      Now that this file is defined and we’ve called it in our server.js using var db = require('./config/db');, you can call any items inside of it using db.url.

      To get this working, you’ll want a local MongoDB database installed, or you can grab a quick one-off database from a hosted service like Modulus or Mongolab. Just create an account at one of those, create a database with your own credentials, and you’ll get the URL string to use in your own config file.
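
      When you are ready to actually talk to that database, a minimal sketch of wiring this config into Mongoose might look like the following (for example near the top of server.js, once you decide to pull Mongoose in there); treat it as an assumption about where you connect, not as part of this starter’s server.js:

          // grab mongoose and the db config, then connect using the url it exports
          var mongoose = require('mongoose');
          var db       = require('./config/db');

          mongoose.connect(db.url);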

      Next up, we’ll create a quick Mongoose model so that we can define our Nerds in our database.

      Nerd Model app/models/nerd.js

      This will be all that is required to create records in our database. Once we define our Mongoose model, it will let us handle creating, reading, updating, and deleting our nerds.

      Let’s go into the app/models/nerd.js file and add the following:

      app/models/nerd.js

          
          // grab the mongoose module
          var mongoose = require('mongoose');

          // define our nerd model
          // module.exports allows us to pass this to other files when it is called
          module.exports = mongoose.model('Nerd', {
              name : {type : String, default: ''}
          });
      

      This is where we will use the Mongoose module and be able to define our Nerd model with a name attribute with data type String. If you want more fields, feel free to add them here. Read up on the Mongoose docs to see all the things you can define.
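
      For instance, a hypothetical version of the model with a couple of extra fields might look like this:

          // the same nerd model with a few more (hypothetical) attributes
          module.exports = mongoose.model('Nerd', {
              name      : {type : String, default: ''},
              email     : {type : String, default: ''},
              superNerd : {type : Boolean, default: false}
          });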

      Let’s move on to the routes and use the model we just created.

      Node Routes app/routes.js

      In the future, you can use the app folder to add more models, controllers, routes, and anything backend (Node) specific to your app.

      Let’s get to our routes. When creating a single-page application, you will usually want to separate the functions of the backend application and the frontend application as much as possible.

      Separation of Routes

      To separate the duties of the different parts of our application, we can define as many routes as we want for our Node backend. This could include API routes or any other routes of that nature.

      We won’t be diving into those since we’re not really handling creating an API or doing CRUD in this tutorial, but just know that this is where you’d handle those routes.

      We’ve commented out the place to put those routes here.

      app/routes.js

          
          // grab the nerd model we just created
          var Nerd = require('./models/nerd');
          var path = require('path');

          module.exports = function(app) {

              // server routes ===========================================================
              // handle things like api calls and authentication routes here

              // sample api route
              app.get('/api/nerds', function(req, res) {
                  // use mongoose to get all nerds in the database
                  Nerd.find(function(err, nerds) {

                      // if there is an error retrieving, send the error
                      // nothing after res.send(err) will execute
                      if (err)
                          res.send(err);

                      res.json(nerds); // return all nerds in JSON format
                  });
              });

              // route to handle creating a nerd goes here (app.post)
              // route to handle deleting a nerd goes here (app.delete)

              // frontend routes =========================================================
              // route to handle all angular requests
              app.get('*', function(req, res) {
                  res.sendFile(path.join(__dirname, '../public/index.html')); // load our public/index.html file
              });

          };
      

      This is where you can handle your API routes. For all other routes (*), we will send the user to our frontend application where Angular can handle routing them from there.

      Backend Done!

      We now have everything we need for our server! At this point, we can start it, send a user the Angular app (index.html), and handle one API route to get all the nerds.

      Let’s create that index.html file so that we can test out our server.

      Create an Index View File public/index.html

      Let’s just open up this file and add some quick text so we can test our server.

      public/index.html

          <!doctype html>
          <html lang="en">
          <head>
              <meta charset="UTF-8">
      
              <title>Starter MEAN Single Page Application</title>
      
          </head>
          <body>
      
              we did it!
      
          </body>
          </html>
      

      With all the backend (and a tiny frontend piece) in place, let’s start up our server. Go into your console and type:

      1. node server.js

      Now we can go into our browser and see http://localhost:8080 in action.

      So simple, and yet so beautiful. Now let’s get to the frontend single-page AngularJS stuff.

      With all of our backend work in place, we can focus on the frontend. Our Node backend will send any user that visits our application to our index.html file since we’ve defined that in our catch-all route (app.get('*')).

      The frontend work will require a few pieces.

      We will need certain files for our application like bootstrap and of course angular. We will tell bower to grab those components for us.

      Bower is a great frontend tool to manage your frontend resources. You just specify the packages you need and it will go grab them for you. Here’s an article on getting started with bower.

      First, we will need Bower installed on our machine. Just type in npm install -g bower into your console.

      After you have done that, you will now have access to bower globally on your system. We will need 2 files to get Bower working for us (.bowerrc and bower.json). We’ll place both of these in the root of our project.

      .bowerrc will tell Bower where to place our files:

          {
              "directory": "public/libs"
          }
      

      bower.json is similar to package.json and will tell Bower which packages are needed.

          {
              "name": "starter-node-angular",
              "version": "1.0.0",
              "dependencies": {
                  "bootstrap": "latest",
                  "font-awesome": "latest",
                  "animate.css": "latest",
                  "angular": "latest",
                  "angular-route": "latest"
              }
          }
      

      Let’s run it! In your console, in the root of your application, type:

      1. bower install

      You can see bower pull in all the files we needed and now we have them in public/libs!

      Now we can get down to business and work on our Angular stuff.

      For our Angular application, we will want controllers, services, routes, and a main app.js file to tie them all together.

      Let’s create the files needed for our Angular application. This will be done in public/js. Here is the application structure for our frontend:

              - public
              ----- js
              ---------- controllers
              -------------------- MainCtrl.js
              -------------------- NerdCtrl.js
              ---------- services
              -------------------- NerdService.js
              ---------- app.js
              ---------- appRoutes.js
      

      Once we have created our controllers, services, and routes, we will combine them all and inject these modules into our main app.js file to get everything working together.

      We won’t go too far in-depth here so let’s just show off both of our controllers and their code.

      public/js/controllers/MainCtrl.js

          angular.module('MainCtrl', []).controller('MainController', function($scope) {
      
              $scope.tagline = 'To the moon and back!';
      
          });
      

      public/js/controllers/NerdCtrl.js

          angular.module('NerdCtrl', []).controller('NerdController', function($scope) {
      
              $scope.tagline = 'Nothing beats a pocket protector!';
      
          });
      

      Of course in the future, you will be doing a lot more with your controllers, but since this is more about application setup, we’ll move on to the services.

      This is where you would use $http or $resource to do your API calls to the Node backend from your Angular frontend.

      public/js/services/NerdService.js

          angular.module('NerdService', []).factory('Nerd', ['$http', function($http) {

              return {
                  // call to get all nerds
                  get : function() {
                      return $http.get('/api/nerds');
                  },

                  // these will only work once the matching API routes are defined in app/routes.js

                  // call to POST and create a new nerd
                  create : function(nerdData) {
                      return $http.post('/api/nerds', nerdData);
                  },

                  // call to DELETE a nerd
                  delete : function(id) {
                      return $http.delete('/api/nerds/' + id);
                  }
              }

          }]);
      

      That’s it for our services. The only function that will work in that NerdService is the get function. The other two are just placeholders and they won’t work unless you define those specific routes in your app/routes.js file. For more on building APIs, here’s a tutorial for Building a RESTful Node API.
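
      For instance, a hedged sketch of what those two routes might look like (added next to the existing GET route in app/routes.js, using the same Mongoose model) is shown below; the :nerd_id URL parameter is an assumption that matches the id the service passes in:

          // create a nerd (accessed at POST /api/nerds)
          app.post('/api/nerds', function(req, res) {
              Nerd.create({ name: req.body.name }, function(err, nerd) {
                  if (err)
                      res.send(err);

                  // return the newly created nerd
                  res.json(nerd);
              });
          });

          // delete a nerd (accessed at DELETE /api/nerds/:nerd_id)
          app.delete('/api/nerds/:nerd_id', function(req, res) {
              Nerd.remove({ _id: req.params.nerd_id }, function(err) {
                  if (err)
                      res.send(err);

                  res.json({ message: 'successfully deleted' });
              });
          });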

      These services will call our Node backend, retrieve data in JSON format, and then we can use it in our Angular controllers.
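
      For example, a hypothetical version of NerdCtrl.js that injects the Nerd service and loads the list of nerds from the API could look like this:

          angular.module('NerdCtrl', []).controller('NerdController', ['$scope', 'Nerd', function($scope, Nerd) {

              $scope.tagline = 'Nothing beats a pocket protector!';

              // use the Nerd service to hit our Node API and put the results on scope
              Nerd.get().then(function(response) {
                  $scope.nerds = response.data;
              });

          }]);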

      Now we will define our Angular routes inside of our public/js/appRoutes.js file.

      public/js/appRoutes.js

          angular.module('appRoutes', []).config(['$routeProvider', '$locationProvider', function($routeProvider, $locationProvider) {

              $routeProvider

                  // home page
                  .when('/', {
                      templateUrl: 'views/home.html',
                      controller: 'MainController'
                  })

                  // nerds page that will use the NerdController
                  .when('/nerds', {
                      templateUrl: 'views/nerd.html',
                      controller: 'NerdController'
                  });

              // use the HTML5 History API for clean URLs (requires the <base href="/"> tag in index.html)
              $locationProvider.html5Mode(true);

          }]);
      

      Our Angular frontend will use the template file and inject it into the <div ng-view></div> in our index.html file. It will do this without a page refresh which is exactly what we want for a single page application.

      For more information on Angular routing and templating, check out our other tutorial: Single Page Apps with AngularJS.

      With all of the Angular routing ready to go, we just need to create the view files and then the smaller template files (home.html, nerd.html, and geek.html) will be injected into our index.html file inside of the <div ng-view></div>.

      Notice in our index.html file we are calling the resources we pulled in using bower.

      public/index.html

          <!doctype html>
          <html lang="en">
          <head>
              <meta charset="UTF-8">
              <base href="/">

              <title>Starter Node and Angular</title>

              <!-- CSS -->
              <link rel="stylesheet" href="libs/bootstrap/dist/css/bootstrap.min.css">
              <link rel="stylesheet" href="css/style.css"> <!-- custom styles -->

              <!-- JS -->
              <script src="libs/angular/angular.min.js"></script>
              <script src="libs/angular-route/angular-route.min.js"></script>

              <!-- ANGULAR CUSTOM -->
              <script src="js/controllers/MainCtrl.js"></script>
              <script src="js/controllers/NerdCtrl.js"></script>
              <script src="js/services/NerdService.js"></script>
              <script src="js/appRoutes.js"></script>
              <script src="js/app.js"></script>
          </head>
          <body ng-app="sampleApp" ng-controller="NerdController">
          <div class="container">

              <!-- HEADER -->
              <nav class="navbar navbar-inverse">
                  <div class="navbar-header">
                      <a class="navbar-brand" href="/">Stencil: Node and Angular</a>
                  </div>

                  <!-- LINKS TO OUR PAGES. ANGULAR HANDLES THE ROUTING HERE -->
                  <ul class="nav navbar-nav">
                      <li><a href="/nerds">Nerds</a></li>
                  </ul>
              </nav>

              <!-- ANGULAR DYNAMIC CONTENT -->
              <div ng-view></div>

          </div>
          </body>
          </html>
      
      
          
      
      public/views/home.html

          <div class="jumbotron text-center">
              <h1>Home Page 4 Life</h1>
      
              <p>{{ tagline }}</p>
          </div>
      
      
          
      
      public/views/nerd.html

          <div class="jumbotron text-center">
              <h1>Nerds and Proud</h1>
      
              <p>{{ tagline }}</p>
          </div>
      

      We have defined our resources, controllers, services, and routes and included the files in our index.html. Now let’s make them all work together.

      Let’s set up our Angular app to use all of our components by injecting them as dependencies into our main module.

      public/js/app.js

          angular.module('sampleApp', ['ngRoute', 'appRoutes', 'MainCtrl', 'NerdCtrl', 'NerdService']);
      

      Now we have an application that has a Node.js backend and an AngularJS frontend. We can use this foundation to build any sort of application moving forward. We can add authentication and CRUD functionality to create a good application.

      Also, for those looking for this project with the addition of the Jade templating engine, Florian Zemke has created a Jade version at his GitHub repo.

      Next Steps

      Moving forward, I’d encourage you to take this and see if it fits your needs. The point of this was to have a foundation for starting applications so that we aren’t reinventing the wheel every time we start a new project.

      This is a very barebones example; for something more fully featured, I’d encourage you to take a look at mean.io as a more in-depth starter application.

      Check out the GitHub repo for this project and take from it what you need. Sound off in the comments if you have any questions about how to expand this into your own applications.

      We’ve put this tutorial together as a starter kit at the GitHub repo. We’ll keep adding features to it on request and any updates we think will be helpful for applications.

      Hopefully, it will be a good foundation for any sort of Single Page MEAN Stack Application.

      To Use the Starter App

      1. Download the code
      2. Install the npm modules: npm install
      3. Install the bower components: bower install
      4. Start the server: node server.js
      5. Visit the application in your browser at http://localhost:8080
      6. Use this starter kit to build any application you need.

      Further Reading: When building MEAN stack apps, the backend Node application will usually be an API that we build. This will allow the Angular frontend to consume our API that we built through Angular services. The next step is to hash out building a Node API. This next tutorial will teach us that and then we can go further in-depth on how to build the frontend Angular application to consume our new API.

      This article is part of our Getting MEAN series.

      How To Install Elasticsearch, Logstash, and Kibana (Elastic Stack) on Ubuntu 20.04

      Introduction

      The Elastic Stack — formerly known as the ELK Stack — is a collection of open-source software produced by Elastic which allows you to search, analyze, and visualize logs generated from any source in any format, a practice known as centralized logging. Centralized logging can be useful when attempting to identify problems with your servers or applications as it allows you to search through all of your logs in a single place. It’s also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.

      The Elastic Stack has four main components:

      • Elasticsearch: a distributed RESTful search engine which stores all of the collected data.
      • Logstash: the data processing component of the Elastic Stack which sends incoming data to Elasticsearch.
      • Kibana: a web interface for searching and visualizing logs.
      • Beats: lightweight, single-purpose data shippers that can send data from hundreds or thousands of machines to either Logstash or Elasticsearch.

      In this tutorial, you will install the Elastic Stack on an Ubuntu 20.04 server. You will learn how to install all of the components of the Elastic Stack — including Filebeat, a Beat used for forwarding and centralizing logs and files — and configure them to gather and visualize system logs. Additionally, because Kibana is normally only available on the localhost, we will use Nginx to proxy it so it will be accessible over a web browser. We will install all of these components on a single server, which we will refer to as our Elastic Stack server.

      Note: When installing the Elastic Stack, you must use the same version across the entire stack. In this tutorial we will install the latest versions of the entire stack which are, at the time of this writing, Elasticsearch 7.7.1, Kibana 7.7.1, Logstash 7.7.1, and Filebeat 7.7.1.

      To complete this tutorial, you will need an Ubuntu 20.04 server set up by following an initial server setup guide (including a non-root sudo user and a UFW firewall), with Nginx installed, since we will use it later in this guide to proxy Kibana.

      Additionally, because the Elastic Stack is used to access valuable information about your server that you would not want unauthorized users to access, it’s important that you keep your server secure by installing a TLS/SSL certificate. This is optional but strongly encouraged.

      However, because you will ultimately make changes to your Nginx server block over the course of this guide, it would likely make more sense for you to complete the Let’s Encrypt on Ubuntu 20.04 guide at the end of this tutorial’s second step. With that in mind, if you plan to configure Let’s Encrypt on your server, you will need the following in place before doing so:

      • A fully qualified domain name (FQDN). This tutorial will use your_domain throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.

      • Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them.

        • An A record with your_domain pointing to your server’s public IP address.
        • An A record with www.your_domain pointing to your server’s public IP address.

      The Elasticsearch components are not available in Ubuntu’s default package repositories. They can, however, be installed with APT after adding Elastic’s package source list.

      All of the packages are signed with the Elasticsearch signing key in order to protect your system from package spoofing. Packages which have been authenticated using the key will be considered trusted by your package manager. In this step, you will import the Elasticsearch public GPG key and add the Elastic package source list in order to install Elasticsearch.

      To begin, use cURL, the command line tool for transferring data with URLs, to import the Elasticsearch public GPG key into APT. Note that we are using the arguments -fsSL to silence all progress and possible errors (except for a server failure) and to allow cURL to make a request on a new location if redirected. Pipe the output of the cURL command into the apt-key program, which adds the public GPG key to APT.

      1. curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

      Next, add the Elastic source list to the sources.list.d directory, where APT will search for new sources:

      1. echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

      Next, update your package lists so APT will read the new Elastic source:

      1. sudo apt update

      Then install Elasticsearch with this command:

      1. sudo apt install elasticsearch

      Elasticsearch is now installed and ready to be configured. Use your preferred text editor to edit Elasticsearch’s main configuration file, elasticsearch.yml. Here, we’ll use nano:

      1. sudo nano /etc/elasticsearch/elasticsearch.yml

      Note: Elasticsearch’s configuration file is in YAML format, which means that we need to maintain the indentation format. Be sure that you do not add any extra spaces as you edit this file.

      The elasticsearch.yml file provides configuration options for your cluster, node, paths, memory, network, discovery, and gateway. Most of these options are preconfigured in the file but you can change them according to your needs. For the purposes of our demonstration of a single-server configuration, we will only adjust the settings for the network host.

      Elasticsearch listens for traffic from everywhere on port 9200. You will want to restrict outside access to your Elasticsearch instance to prevent outsiders from reading your data or shutting down your Elasticsearch cluster through its REST API (https://en.wikipedia.org/wiki/Representational_state_transfer). To restrict access and therefore increase security, find the line that specifies network.host, uncomment it, and replace its value with localhost like this:

      /etc/elasticsearch/elasticsearch.yml

      . . .
      # ---------------------------------- Network -----------------------------------
      #
      # Set the bind address to a specific IP (IPv4 or IPv6):
      #
      network.host: localhost
      . . .
      

      We have specified localhost so that Elasticsearch listens only on its loopback interface. If you want it to listen on a specific external interface instead, you can specify its IP in place of localhost. Save and close elasticsearch.yml. If you’re using nano, you can do so by pressing CTRL+X, followed by Y and then ENTER.

      These are the minimum settings you can start with in order to use Elasticsearch. Now you can start Elasticsearch for the first time.

      Start the Elasticsearch service with systemctl. Give Elasticsearch a few moments to start up. Otherwise, you may get errors about not being able to connect.

      1. sudo systemctl start elasticsearch

      Next, run the following command to enable Elasticsearch to start up every time your server boots:

      1. sudo systemctl enable elasticsearch

      You can test whether your Elasticsearch service is running by sending an HTTP request:

      1. curl -X GET "localhost:9200"

      You will see a response showing some basic information about your local node, similar to this:

      Output

      { "name" : "Elasticsearch", "cluster_name" : "elasticsearch", "cluster_uuid" : "qqhFHPigQ9e2lk-a7AvLNQ", "version" : { "number" : "7.7.1", "build_flavor" : "default", "build_type" : "deb", "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f", "build_date" : "2020-03-26T06:34:37.794943Z", "build_snapshot" : false, "lucene_version" : "8.5.1", "minimum_wire_compatibility_version" : "6.8.0", "minimum_index_compatibility_version" : "6.0.0-beta1" }, "tagline" : "You Know, for Search" }

      Now that Elasticsearch is up and running, let’s install Kibana, the next component of the Elastic Stack.

      According to the official documentation, you should install Kibana only after installing Elasticsearch. Installing in this order ensures that the components each product depends on are correctly in place.

      Because you’ve already added the Elastic package source in the previous step, you can just install the remaining components of the Elastic Stack using apt:

      1. sudo apt install kibana

      Then enable and start the Kibana service:

      1. sudo systemctl enable kibana
      2. sudo systemctl start kibana

      Because Kibana is configured to only listen on localhost, we must set up a reverse proxy to allow external access to it. We will use Nginx for this purpose, which should already be installed on your server.

      First, use the openssl command to create an administrative Kibana user which you’ll use to access the Kibana web interface. As an example we will name this account kibanaadmin, but to ensure greater security we recommend that you choose a non-standard name for your user that would be difficult to guess.

      The following command will create the administrative Kibana user and password, and store them in the htpasswd.users file. You will configure Nginx to require this username and password and read this file momentarily:

      1. echo "kibanaadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users

      Enter and confirm a password at the prompt. Remember or take note of this login, as you will need it to access the Kibana web interface.

      Next, we will create an Nginx server block file. As an example, we will refer to this file as your_domain, although you may find it helpful to give yours a more descriptive name. For instance, if you have a FQDN and DNS records set up for this server, you could name this file after your FQDN.

      Using nano or your preferred text editor, create the Nginx server block file:

      1. sudo nano /etc/nginx/sites-available/your_domain

      Add the following code block into the file, being sure to update your_domain to match your server’s FQDN or public IP address. This code configures Nginx to direct your server’s HTTP traffic to the Kibana application, which is listening on localhost:5601. Additionally, it configures Nginx to read the htpasswd.users file and require basic authentication.

      Note that if you followed the prerequisite Nginx tutorial through to the end, you may have already created this file and populated it with some content. In that case, delete all the existing content in the file before adding the following:

      /etc/nginx/sites-available/your_domain

      server {
          listen 80;
      
          server_name your_domain;
      
          auth_basic "Restricted Access";
          auth_basic_user_file /etc/nginx/htpasswd.users;
      
          location / {
              proxy_pass http://localhost:5601;
              proxy_http_version 1.1;
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection 'upgrade';
              proxy_set_header Host $host;
              proxy_cache_bypass $http_upgrade;
          }
      }
      

      When you’re finished, save and close the file.

      Next, enable the new configuration by creating a symbolic link to the sites-enabled directory. If you already created a server block file with the same name in the Nginx prerequisite, you do not need to run this command:

      1. sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/your_domain

      Then check the configuration for syntax errors:

      1. sudo nginx -t

      If any errors are reported in your output, go back and double check that the content you placed in your configuration file was added correctly. Once you see syntax is ok in the output, go ahead and reload the Nginx service:

      1. sudo systemctl reload nginx

      If you followed the initial server setup guide, you should have a UFW firewall enabled. To allow connections to Nginx, we can adjust the rules by typing:

      1. sudo ufw allow 'Nginx Full'

      Note: If you followed the prerequisite Nginx tutorial, you may have created a UFW rule allowing the Nginx HTTP profile through the firewall. Because the Nginx Full profile allows both HTTP and HTTPS traffic through the firewall, you can safely delete the rule you created in the prerequisite tutorial. Do so with the following command:

      1. sudo ufw delete allow 'Nginx HTTP'

      Kibana is now accessible via your FQDN or the public IP address of your Elastic Stack server. You can check the Kibana server’s status page by navigating to the following address and entering your login credentials when prompted:

      http://your_domain/status
      

      This status page displays information about the server’s resource usage and lists the installed plugins.

      Kibana status page

      Note: As mentioned in the Prerequisites section, it is recommended that you enable SSL/TLS on your server. You can follow the Let’s Encrypt guide now to obtain a free SSL certificate for Nginx on Ubuntu 20.04. After obtaining your SSL/TLS certificates, you can come back and complete this tutorial.

      Now that the Kibana dashboard is configured, let’s install the next component: Logstash.

      Although it’s possible for Beats to send data directly to the Elasticsearch database, it is common to use Logstash to process the data. This will allow you more flexibility to collect data from different sources, transform it into a common format, and export it to another database.

      Install Logstash with this command:

      1. sudo apt install logstash

      After installing Logstash, you can move on to configuring it. Logstash’s configuration files reside in the /etc/logstash/conf.d directory. For more information on the configuration syntax, you can check out the configuration reference that Elastic provides. As you configure the file, it’s helpful to think of Logstash as a pipeline which takes in data at one end, processes it in one way or another, and sends it out to its destination (in this case, the destination being Elasticsearch). A Logstash pipeline has two required elements, input and output, and one optional element, filter. The input plugins consume data from a source, the filter plugins process the data, and the output plugins write the data to a destination.

      Logstash pipeline
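
      As an illustration of the optional filter element (not one of the files this tutorial creates), a hypothetical filter saved as, say, /etc/logstash/conf.d/10-syslog-filter.conf could parse classic syslog-style lines into structured fields with grok. The file name and pattern here are assumptions to adapt to your own logs:

          filter {
            grok {
              # Split a syslog line into timestamp, host, program, pid, and message fields
              match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
            }
          }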

      Create a configuration file called 02-beats-input.conf where you will set up your Filebeat input:

      1. sudo nano /etc/logstash/conf.d/02-beats-input.conf

      Insert the following input configuration. This specifies a beats input that will listen on TCP port 5044.

      /etc/logstash/conf.d/02-beats-input.conf

      input {
        beats {
          port => 5044
        }
      }
      

      Save and close the file.

      Next, create a configuration file called 30-elasticsearch-output.conf:

      1. sudo nano /etc/logstash/conf.d/30-elasticsearch-output.conf

      Insert the following output configuration. Essentially, this output configures Logstash to store the Beats data in Elasticsearch, which is running at localhost:9200, in an index named after the Beat used. The Beat used in this tutorial is Filebeat:

      /etc/logstash/conf.d/30-elasticsearch-output.conf

      output {
        if [@metadata][pipeline] {
          elasticsearch {
            hosts => ["localhost:9200"]
            manage_template => false
            index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
            pipeline => "%{[@metadata][pipeline]}"
          }
        } else {
          elasticsearch {
            hosts => ["localhost:9200"]
            manage_template => false
            index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
          }
        }
      }

      Save and close the file.

      Test your Logstash configuration with this command:

      1. sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t

      If there are no syntax errors, your output will display Config Validation Result: OK. Exiting Logstash after a few seconds. If you don’t see this in your output, check for any errors noted in your output and update your configuration to correct them. Note that you’ll receive warnings from OpenJDK, but they should not cause any problems and can be ignored.

      If your configuration test is successful, start and enable Logstash to put the configuration changes into effect:

      1. sudo systemctl start logstash
      2. sudo systemctl enable logstash

      Now that Logstash is running correctly and is fully configured, let’s install Filebeat.

      The Elastic Stack uses several lightweight data shippers called Beats to collect data from various sources and transport them to Logstash or Elasticsearch. Here are the Beats that are currently available from Elastic:

      • Filebeat: collects and ships log files.
      • Metricbeat: collects metrics from your systems and services.
      • Packetbeat: collects and analyzes network data.
      • Winlogbeat: collects Windows event logs.
      • Auditbeat: collects Linux audit framework data and monitors file integrity.
      • Heartbeat: monitors services for their availability with active probing.

      In this tutorial we will use Filebeat to forward local logs to our Elastic Stack.

      Install Filebeat using apt:

      1. sudo apt install filebeat

      Next, configure Filebeat to connect to Logstash. Here, we will modify the example configuration file that comes with Filebeat.

      Open the Filebeat configuration file:

      1. sudo nano /etc/filebeat/filebeat.yml

      Note: As with Elasticsearch, Filebeat’s configuration file is in YAML format. This means that proper indentation is crucial, so be sure to use the same number of spaces that are indicated in these instructions.

      Filebeat supports numerous outputs, but you’ll usually only send events directly to Elasticsearch or to Logstash for additional processing. In this tutorial, we’ll use Logstash to perform additional processing on the data collected by Filebeat. Filebeat will not need to send any data directly to Elasticsearch, so let’s disable that output. To do so, find the output.elasticsearch section and comment out the following lines by preceding them with a #:

      /etc/filebeat/filebeat.yml

      ...
      #output.elasticsearch:
        # Array of hosts to connect to.
        #hosts: ["localhost:9200"]
      ...
      

      Then, configure the output.logstash section. Uncomment the lines output.logstash: and hosts: ["localhost:5044"] by removing the #. This will configure Filebeat to connect to Logstash on your Elastic Stack server at port 5044, the port for which we specified a Logstash input earlier:

      /etc/filebeat/filebeat.yml

      output.logstash:
        # The Logstash hosts
        hosts: ["localhost:5044"]
      

      Save and close the file.

      The functionality of Filebeat can be extended with Filebeat modules. In this tutorial we will use the system module, which collects and parses logs created by the system logging service of common Linux distributions.

      Let’s enable it:

      1. sudo filebeat modules enable system

      You can see a list of enabled and disabled modules by running:

      1. sudo filebeat modules list

      You will see a list similar to the following:

      Output

      Enabled:
      system

      Disabled:
      apache2
      auditd
      elasticsearch
      icinga
      iis
      kafka
      kibana
      logstash
      mongodb
      mysql
      nginx
      osquery
      postgresql
      redis
      traefik

      By default, Filebeat is configured to use default paths for the syslog and authorization logs. In the case of this tutorial, you do not need to change anything in the configuration. You can see the parameters of the module in the /etc/filebeat/modules.d/system.yml configuration file.
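
      For reference, the system module’s defaults in that file look roughly like the following (the exact comments vary between Filebeat versions, so treat this as an approximation rather than the file’s literal contents):

          - module: system
            # Syslog
            syslog:
              enabled: true
              # var.paths:

            # Authorization logs
            auth:
              enabled: true
              # var.paths: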

      Next, we need to set up the Filebeat ingest pipelines, which parse the log data before sending it through Logstash to Elasticsearch. To load the ingest pipeline for the system module, enter the following command:

      1. sudo filebeat setup --pipelines --modules system

      Next, load the index template into Elasticsearch. An Elasticsearch index is a collection of documents that have similar characteristics. Indexes are identified with a name, which is used to refer to the index when performing various operations within it. The index template will be automatically applied when a new index is created.

      To load the template, use the following command:

      1. sudo filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'

      Output

      Index setup finished.

      Filebeat comes packaged with sample Kibana dashboards that allow you to visualize Filebeat data in Kibana. Before you can use the dashboards, you need to create the index pattern and load the dashboards into Kibana.

      As the dashboards load, Filebeat connects to Elasticsearch to check version information. To load dashboards when Logstash is enabled, you need to disable the Logstash output and enable Elasticsearch output:

      1. sudo filebeat setup -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200'] -E setup.kibana.host=localhost:5601

      You should receive output similar to this:

      Output

      Overwriting ILM policy is disabled. Set `setup.ilm.overwrite:true` for enabling.

      Index setup finished.
      Loading dashboards (Kibana must be running and reachable)
      Loaded dashboards
      Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
      See more: https://www.elastic.co/guide/en/elastic-stack-overview/current/xpack-ml.html
      Loaded machine learning job configurations
      Loaded Ingest pipelines

      Now you can start and enable Filebeat:

      1. sudo systemctl start filebeat
      2. sudo systemctl enable filebeat

      If you’ve set up your Elastic Stack correctly, Filebeat will begin shipping your syslog and authorization logs to Logstash, which will then load that data into Elasticsearch.

      To verify that Elasticsearch is indeed receiving this data, query the Filebeat index with this command:

      1. curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'

      You should receive output similar to this:

      Output

      ...
      {
        "took" : 4,
        "timed_out" : false,
        "_shards" : {
          "total" : 2,
          "successful" : 2,
          "skipped" : 0,
          "failed" : 0
        },
        "hits" : {
          "total" : {
            "value" : 4040,
            "relation" : "eq"
          },
          "max_score" : 1.0,
          "hits" : [
            {
              "_index" : "filebeat-7.7.1-2020.06.04",
              "_type" : "_doc",
              "_id" : "FiZLgXIB75I8Lxc9ewIH",
              "_score" : 1.0,
              "_source" : {
                "cloud" : {
                  "provider" : "digitalocean",
                  "instance" : {
                    "id" : "194878454"
                  },
                  "region" : "nyc1"
                },
                "@timestamp" : "2020-06-04T21:45:03.995Z",
                "agent" : {
                  "version" : "7.7.1",
                  "type" : "filebeat",
                  "ephemeral_id" : "cbcefb9a-8d15-4ce4-bad4-962a80371ec0",
                  "hostname" : "june-ubuntu-20-04-elasticstack",
                  "id" : "fbd5956f-12ab-4227-9782-f8f1a19b7f32"
                },
      ...

      If your output shows 0 total hits, Elasticsearch is not loading any logs under the index you searched for, and you will need to review your setup for errors. If you received the expected output, continue to the next step, in which we will see how to navigate through some of Kibana’s dashboards.
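
      If you need to double-check which indices exist while troubleshooting, one extra step beyond this tutorial is to list them with Elasticsearch’s cat API and confirm that a filebeat-* index appears:

      1. curl -XGET 'http://localhost:9200/_cat/indices?v'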

      Let’s return to the Kibana web interface that we installed earlier.

      In a web browser, go to the FQDN or public IP address of your Elastic Stack server. After entering the login credentials you defined in Step 2, you will see the Kibana homepage:

      Kibana Homepage

      Click the Discover link in the left-hand navigation bar (you may have to click the Expand icon at the very bottom left to see the navigation menu items). On the Discover page, select the predefined filebeat-* index pattern to see Filebeat data. By default, this will show you all of the log data over the last 15 minutes. You will see a histogram with log events, and some log messages below:

      Discover page

      Here, you can search and browse through your logs and also customize your dashboard. At this point, though, there won’t be much in there because you are only gathering syslogs from your Elastic Stack server.

      Use the left-hand panel to navigate to the Dashboard page and search for the Filebeat System dashboards. Once there, you can select the sample dashboards that come with Filebeat’s system module.

      For example, you can view detailed stats based on your syslog messages:

      Syslog Dashboard

      You can also view which users have used the sudo command and when:

      Sudo Dashboard

      Kibana has many other features, such as graphing and filtering, so feel free to explore.

      In this tutorial, you’ve learned how to install and configure the Elastic Stack to collect and analyze system logs. Remember that you can send just about any type of log or indexed data to Logstash using Beats, but the data becomes even more useful if it is parsed and structured with a Logstash filter, as this transforms the data into a consistent format that can be read easily by Elasticsearch.

      How to Create a LEMP Stack on Linux


      LEMP stack refers to a development framework for Web and mobile applications based on four open source components:

      1. Linux operating system
      2. NGINX Web server
      3. MySQL relational database management system (RDBMS)
      4. PHP, Perl, or Python programming language

      NGINX contributes to the acronym “LEMP” because English-speakers pronounce NGINX as “engine-x”, hence an “E”.

      Before You Begin

      1. If you have not already done so, create a Linode account and Compute Instance. See our
        Getting Started with Linode and
        Creating a Compute Instance guides.

      2. Follow our
        Setting Up and Securing a Compute Instance guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.

      Note

      The steps in this guide require root privileges. Be sure to run the steps below as root or with the sudo prefix. For more information on privileges, see our
      Users and Groups guide.

      How Does LEMP Differ from LAMP?

      LAMP is just like LEMP, except with Apache in place of NGINX.

      LAMP played a crucial role in the Web for
      over twenty years. NGINX was released publicly in 2004, largely to address faults in LAMP. LEMP use spread widely after 2008, and NGINX is now the second
      most popular Web server, after the
      Apache Web server that LAMP uses.

      Both LEMP and LAMP combine open source tools to supply the essentials for a Web application. This includes an underlying Linux operating system which hosts everything else, including:

      • The NGINX or Apache Web server that receives and responds to end-user actions.
      • The MySQL RDBMS which stores information including user profile, event histories, and application-specific content which has a lifespan beyond an individual transaction.
      • A programming language for business logic that defines a particular application.

      Abundant documentation and rich communities of practitioners make both LEMP and LAMP natural choices for development. The difference between them is confined to the Web server part of the stack.

      Apache Versus NGINX

      In broad terms, the two Web servers have much in common.
      NGINX is faster than Apache, but requires more expertise in certain aspects of its configuration and use, and is less robust on Windows than Apache. Apache works usefully “out of the box”, while, as we see below, NGINX demands a couple of additional steps before its installation is truly usable.

      RDBMS and Programming Language

      Two other variations deserve clarity in regard to the initials “M” and “P”.
      MariaDB is a drop-in replacement for MySQL. The differences between the two are explained in
      this tutorial. Everything you do with MySQL applies immediately with MariaDB as well.

      While several different programming languages work well in a LEMP stack, this guide focuses on PHP. However, nearly all the principles of LEMP illustrated below apply with Python or another alternative.

      LEMP Benefits

      LEMP has a deep track record of successful deliveries. Hundreds of millions of working Web applications depend on it.

      LEMP’s suitability extends beyond purely technical dimensions. Its flexible open-source licensing enables development teams to focus on their programming and operations, with few legal constraints to complicate their engineering.

      Install the LEMP Stack

      Linode’s support for LEMP begins with abundant documentation, including
      How to Install the LEMP Stack on Ubuntu 18.04.

      Rich collections of documentation are available to readers
      new to Linux and its command line. This guide assumes familiarity with the command line and Linux filesystems, along with permission to run as root or with sudo privileges. With the “L” (Linux) in place, the installation in this Guide focuses purely on the “EMP” layers of LEMP.

      Install “E”, “M”, and “P” Within “L”

      Different distributions of Linux require subtly different LEMP installations. The sequence below works across a range of Ubuntu versions, and is a good model for other Debian-based distributions.

      1. Update your host package index with:

        sudo apt-get update -y
        
      2. Now upgrade your installed packages:

        sudo apt-get upgrade -y
        
      3. Install software-properties-common and apt-transport-https to manage the PHP PPA repository:

        sudo apt-get install software-properties-common apt-transport-https -y
        
      4. Now provide a reference to the current PHP repository:

        sudo add-apt-repository ppa:ondrej/php -y
        
      5. Update the package index again:

        sudo apt update -y
        
      6. Install the rest of the LEMP stack:

        sudo apt-get install nginx php-mysql mysql-server php8.1-fpm -y
        

      The installation demands a small amount of interaction to give information about geographic location and timezone. Depending on circumstances, you may need to verify the country and timezone your server is located in.

      Start Services

      1. Start the “E” (NGINX), “M” (MySQL), and “P” (PHP) services:

        sudo service nginx start
        sudo service mysql start
        sudo service php8.1-fpm start
        
      2. Check on these services:

        sudo service --status-all
        

        You should see them all running:

        [ + ]  mysql
        [ + ]  nginx
        [ + ]  php8.1-fpm

      Verify PHP

      Verify the healthy operation of these services.

      1. For PHP, launch:

        php --version
        

        You should see:

        PHP 8.1.x (cli) (built: ...
        Copyright © The PHP Group ...
      2. Go one step further with verification of the PHP configuration through the command:

        php -m
        

        The result you see is:

        [PHP Modules]
        calendar
        Core
        ...
        mysqli
        mysqlnd
        ...

      This demonstrates that PHP is installed and that the modules needed to communicate with the rest of the LEMP stack are in place.

      Verify NGINX

      Verification of NGINX service is a little more involved. The first step is
      identification of the IP address of the host.

      1. Navigate a browser to a URL such as http://localhost or http://23.77.NNN.NNN, henceforth referred to as $LEMP_HOST

        Your Web browser shows a default display of:

        Welcome to nginx!
        If you see this page, the nginx web server is successfully installed and working.  ...
      2. With the default NGINX configuration verified, update it to enable PHP. Edit the file located at /etc/nginx/sites-enabled/default and change this section:

        File: /etc/nginx/sites-enabled/default
        
        location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
        }

        To become:

        File: /etc/nginx/sites-enabled/default
        
        location / {
               # First attempt to serve request as file, then
               # as directory, then fall back to displaying a 404.
               try_files $uri $uri/ =404;
        }
        location ~ \.php {
               include snippets/fastcgi-php.conf;
               fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
        }
      3. Back at the Linux command line, activate this new configuration with:

        service nginx restart
        
      4. Next, ensure that NGINX communicates with PHP by creating the file /var/www/html/php-test.php with contents:

        File: /var/www/html/php-test.php
        
        <?php
        phpinfo();
        ?>
        
      5. Now direct your browser to http://$LEMP_HOST/php-test.php.

        Your browser shows several pages of diagnostic output, starting with:

        PHP Version 8.1.9
           System Linux ... 5.10.76-linuxkit #1 SMP Mon Nov 8 ...
           ...

      The location of /var/www/html/php-test.php is configurable. This means that a particular distribution of Linux and NGINX might designate a different directory. /var/www/html is common, especially for a new Ubuntu instance with NGINX “out of the box”. In practice, it’s common to modify the NGINX defaults a great deal to allow for tasks such as caching, special handling for static requests, virtual hosts, logging, and security.
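
      As one hypothetical example of such a customization, a location block like the following, added inside the server block in /etc/nginx/sites-enabled/default, tells browsers to cache static assets. The file extensions and the 30-day expiry are assumptions to adjust for your own site:

          location ~* \.(css|js|png|jpg|jpeg|gif|ico|svg)$ {
              # Serve static assets directly and let browsers cache them
              expires 30d;
              add_header Cache-Control "public";
              try_files $uri =404;
          }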

      Verify MySQL

      When you install MySQL according to the directions above, the default installation doesn’t require a password to open the MySQL monitor.

      1. No password is required. You only need one command:

        mysql
        

        And you see:

        Welcome to the MySQL monitor ...
      2. You can leave the MySQL monitor and return to the Linux command line with:

        \q
        

      Your LEMP stack is now installed, activated, and ready for application development. For a basic LEMP installation, this consists of placing programming source code in the /var/www/html directory, and occasionally updating the configurations of the LEMP layers.

      Use the LEMP Stack to Create an Example Application

      You can create a minimal model application that exercises each component and typical interactions between them. This application collects a record of each Web request made to the server in its backend database. A more refined version of this application could be used to collect:

      • Sightings of a rare bird at different locations.
      • Traffic at voting stations.
      • Requests for customer support.
      • Tracking data for a company automobile.

      The configuration and source below apply to LEMP environments. Even if your LEMP stack used different commands during installation, the directions that follow apply with a minimum amount of customization or disruption.

      Prepare a Database to Receive Data

      Start application development by configuring the database to receive program data.

      1. Re-enter the MySQL monitor with:

        mysql
        
      2. While connected to MySQL, create a database instance specific to this development:

        CREATE DATABASE model_application;
        
      3. Enter that database with:

        USE model_application;
        
      4. Define a table for the program data:

        CREATE TABLE events (
            timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            client_ip INT(4) UNSIGNED NOT NULL
        );
        
      5. Create a database account:

        CREATE USER 'automation'@'localhost' IDENTIFIED BY 'abc123';
        
      6. Now allow PHP to access it:

        GRANT ALL PRIVILEGES ON model_application.* TO 'automation'@'localhost' WITH GRANT OPTION;
        
      7. Quit MySQL:

        \q
        

      A polished application uses tighter security privileges, but this sample application adopts simple choices to maintain focus on the teamwork between the different LEMP layers.
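
      As an illustration of tighter privileges, a hypothetical hardening step for this schema would replace the broad grant above with INSERT-only access to the one table the application writes to:

          REVOKE ALL PRIVILEGES ON model_application.* FROM 'automation'@'localhost';
          GRANT INSERT ON model_application.events TO 'automation'@'localhost';
          FLUSH PRIVILEGES;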

      Create Application Source Code

      Create /var/www/html/event.php with the following content:

      File: /var/www/html/event.php
      
      <?php
          $connection = new mysqli("127.0.0.1", "automation", "abc123", "model_application");
          $client_ip = $_SERVER['REMOTE_ADDR'];
          // INET_ATON() packs an IPv4 string representation into
          // four octets in a standard way.
          $query = "INSERT INTO events(client_ip)
      VALUES(INET_ATON('$client_ip'))";
          $connection->query($query);
          echo 'Your request has successfully created one database record.';
      ?>
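
      For example, a slightly hardened and hypothetical variant of this insert would use a prepared statement instead of interpolating $client_ip into the query string:

          <?php
              $connection = new mysqli("127.0.0.1", "automation", "abc123", "model_application");
              $client_ip = $_SERVER['REMOTE_ADDR'];
              // Bind the address as a parameter and let INET_ATON() pack it,
              // rather than splicing the value into the SQL text.
              $statement = $connection->prepare("INSERT INTO events(client_ip) VALUES(INET_ATON(?))");
              $statement->bind_param("s", $client_ip);
              $statement->execute();
              echo 'Your request has successfully created one database record.';
          ?>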
      

      Verify Operation of the Application

      1. event.php is the only program source code for our minimal model application. With it in place, instruct your browser to visit http://$LEMP_HOST/event.php.

        You should see:

        Your request has successfully created one database record.
      2. You can also exercise the application from different remote browser connections. With a different browser, perhaps from a different desktop, again navigate to http://$LEMP_HOST/event.php.

      View Collected Data

      The model application exhibits the expected behavior from a Web application and your browser reports success. Viewed through the Web browser, the application does the right thing.

      1. To confirm it updated the database, re-enter the MySQL monitor:

        mysql
        
      2. Enter the example application database:

        USE model_application;
        
      3. Pose the query:

        select timestamp, inet_ntoa(client_ip) from events;
        

        You should see output such as:

        +---------------------+----------------------+
        | timestamp           | inet_ntoa(client_ip) |
        +---------------------+----------------------+
        | 2022-08-03 02:26:44 | 127.0.0.1            |
        | 2022-08-03 02:27:18 | 98.200.8.79          |
        | 2022-08-05 02:27:23 | 107.77.220.62        |
        +---------------------+----------------------+

      This demonstrates the flow of data from a Web browser to the database server. Each row in the events table reflects one request from a Web browser to connect to the application. As the application goes into practical use, rows accumulate in the table.

      Application Context

      LEMP is a trustworthy basis for Web development, with decades of successful deliveries over a range of requirements. It directly supports only
      server-side processing. The model application above delivers pure HTML to the browser. However, LEMP is equally capable of serving up CSS and
      JavaScript, but does not build in tooling for these client-side technologies. Projects reliant on elaborate user interface effects usually choose a framework focused on the client side.
      React is an example of such a framework.

      Server-side orientation remains adequate for many applications, and LEMP fits these well. Server-side computation typically involves several functions beyond the model application above, including:

      • Account Management
      • Forms Processing
      • Security Restrictions
      • Analytic and Cost Reporting
      • Exception Handling
      • Quality Assurance Instrumentation

      Contemporary applications often build in a
      model-view-controller (MVC) architecture, and/or define a
      representational state transfer (REST) perspective. A commercial-grade installation usually migrates the database server to a separate dedicated host. Additionally, high-volume applications often introduce load balancers, security-oriented proxies,
      content delivery network (CDN) services, and other refinements. These functions are layers over the basic data flow between user, browser, business logic processing, and datastore that the model application embodies. The model application is a good first example.

      Conclusion

      You just installed a working LEMP stack, activated it, and created a model application. All the needs of a specific Web application have a place in this same model.


