      How to Create a LEMP Stack on Linux


      The LEMP stack refers to a development framework for Web and mobile applications based on four open source components:

      1. Linux operating system
      2. NGINX Web server
      3. MySQL relational database management system (RDBMS)
      4. PHP, Perl, or Python programming language

      NGINX contributes the “E” in “LEMP” because English speakers pronounce NGINX as “engine-x”.

      Before You Begin

      1. If you have not already done so, create a Linode account and Compute Instance. See our
        Getting Started with Linode and
        Creating a Compute Instance guides.

      2. Follow our
        Setting Up and Securing a Compute Instance guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.

      Note

      The steps in this guide require root privileges. Be sure to run the steps below as root or with the sudo prefix. For more information on privileges, see our
      Users and Groups guide.

      How Does LEMP Differ from LAMP?

      LAMP is just like LEMP, except with Apache in place of NGINX.

      LAMP played a crucial role in the Web for
      over twenty years. NGINX was released publicly in 2004, largely to address faults in LAMP. LEMP use spread widely after 2008, and NGINX is now the second
      most popular Web server, after the
      Apache Web server that LAMP uses.

      Both LEMP and LAMP combine open source tools to supply the essentials for a Web application: an underlying Linux operating system that hosts everything else, including:

      • The NGINX or Apache Web server that receives and responds to end-user actions.
      • The MySQL RDBMS, which stores information such as user profiles, event histories, and application-specific content with a lifespan beyond an individual transaction.
      • A programming language for business logic that defines a particular application.

      Abundant documentation and rich communities of practitioners make both LEMP and LAMP natural choices for development. The difference between them is confined to the Web server part of the stack.

      Apache Versus NGINX

      In broad terms, the two Web servers have much in common.
      NGINX is faster than Apache, but requires more expertise in certain aspects of its configuration and use, and is less robust on Windows than Apache. Apache works usefully “out of the box”, while, as we see below, NGINX demands a couple of additional steps before its installation is truly usable.

      RDBMS and Programming Language

      Two other variations deserve clarification, concerning the initials “M” and “P”.
      MariaDB is a drop-in replacement for MySQL. The differences between the two are explained in
      this tutorial. Everything you do with MySQL in this guide applies equally to MariaDB.

      While several different programming languages work well in a LEMP stack, this guide focuses on PHP. However, nearly all the principles of LEMP illustrated below apply with Python or another alternative.

      LEMP Benefits

      LEMP has a deep track record of successful deliveries. Hundreds of millions of working Web applications depend on it.

      LEMP’s suitability extends beyond purely technical dimensions. Its flexible open-source licensing enables development teams to focus on their programming and operations, with few legal constraints to complicate their engineering.

      Install the LEMP Stack

      Linode’s support for LEMP begins with abundant documentation, including
      How to Install the LEMP Stack on Ubuntu 18.04.

      Rich collections of documentation are available to readers
      new to Linux and its command line. This guide assumes familiarity with the command line and Linux filesystems, along with permission to run as root or with sudo privileges. With the “L” (Linux) in place, the installation in this Guide focuses purely on the “EMP” layers of LEMP.

      Install “E”, “M”, and “P” Within “L”

      Different distributions of Linux require subtly different LEMP installations. The sequence below works across a range of Ubuntu versions, and is a good model for other Debian-based distributions.

      1. Update your host package index with:

        sudo apt-get update -y
        
      2. Now upgrade your installed packages:

        sudo apt-get upgrade -y
        
      3. Install software-properties-common and apt-transport-https to manage the PHP PPA repository:

        sudo apt-get install software-properties-common apt-transport-https -y
        
      4. Now provide a reference to the current PHP repository:

        sudo add-apt-repository ppa:ondrej/php -y
        
      5. Update the package index again:

        sudo apt update -y
        
      6. Install the rest of the LEMP stack:

        sudo apt-get install nginx php-mysql mysql-server php8.1-fpm -y
        

      The installation demands a small amount of interaction to give information about geographic location and timezone. Depending on circumstances, you may need to verify the country and timezone your server is located in.
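
      If you prefer to set the timezone explicitly after installation, a minimal approach on Ubuntu uses timedatectl. The zone below is only a placeholder; list the valid names with timedatectl list-timezones and substitute your own:

        sudo timedatectl set-timezone America/Chicago
        timedatectl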

      Start Services

      1. Start the “E” (NGINX), “M” (MySQL), and “P” (PHP) services:

        sudo service nginx start
        sudo service mysql start
        sudo service php8.1-fpm start
        
      2. Check on these services:

        sudo service --status-all
        

        You should see them all running:

        [ + ]  mysql
        [ + ]  nginx
        [ + ]  php8.1-fpm
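
      On current Ubuntu releases, which use systemd, you can also query an individual service directly. This is an equivalent check, not an additional requirement:

        sudo systemctl status nginx --no-pager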

      Verify PHP

      Verify the healthy operation of these services.

      1. For PHP, launch:

        php --version
        

        You should see:

        PHP 8.1.x (cli) (built: ...
        Copyright © The PHP Group ...
      2. Go one step further with verification of the PHP configuration through the command:

        php -m
        

        The result you see is:

        [PHP Modules]
        calendar
        Core
        ...
        mysqli
        mysqlnd
        ...

      This demonstrates that PHP is installed and that the modules needed to communicate with the rest of the LEMP stack are in place.
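
      If you want a single scripted check that the MySQL-facing extension is actually loadable, the PHP command line can report it directly. This assumes the php-mysql package installed above provides the mysqli extension, which it normally does; the command prints bool(true) when the module is available:

        php -r 'var_dump(extension_loaded("mysqli"));'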

      Verify NGINX

      Verification of NGINX service is a little more involved. The first step is
      identification of the IP address of the host.

      1. Navigate a browser to your server’s address, such as http://localhost or http://23.77.NNN.NNN. This guide refers to that address as $LEMP_HOST from here on.

        Your Web browser shows a default display of:

        Welcome to nginx!
        If you see this page, the nginx web server is successfully installed and working.  ...
      2. With the default NGINX configuration verified, update it to enable PHP. Edit the file located at /etc/nginx/sites-enabled/default and change this section:

        File: /etc/nginx/sites-enabled/default
        
        location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
        }

        To become:

        File: /etc/nginx/sites-enabled/default
        
        location / {
               # First attempt to serve request as file, then
               # as directory, then fall back to displaying a 404.
               try_files $uri $uri/ =404;
        }
        location ~ \.php {
               include snippets/fastcgi-php.conf;
               fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
        }
      3. Back at the Linux command line, activate this new configuration with:

        sudo service nginx restart
        
      4. Next, ensure that NGINX communicates with PHP by creating the file /var/www/html/php-test.php with contents:

        File: /var/www/html/php-test.php
        
        <?php
        phpinfo();
        ?>
        
      5. Now direct your browser to http://$LEMP_HOST/php-test.php.

        Your browser shows several pages of diagnostic output, starting with:

        PHP Version 8.1.9
           System Linux ... 5.10.76-linuxkit #1 SMP Mon Nov 8 ...
           ...

      The location /var/www/html/php-test.php is configurable: a particular distribution of Linux and NGINX might designate a different directory, although /var/www/html is the common default for a new Ubuntu instance with NGINX “out of the box”. In practice, it’s common to modify the NGINX defaults considerably to allow for tasks such as caching, special handling for static requests, virtual hosts, and security logging.
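
      Whenever you customize the NGINX configuration beyond the defaults, it is worth validating the syntax before reloading so that a typo does not take the Web server down. A minimal check looks like this:

        sudo nginx -t
        sudo service nginx reload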

      Verify MySQL

      When you install MySQL according to the directions above, the root account authenticates through the system socket rather than with a password.

      1. Because you are running as root (or with sudo), no password is required. You only need one command:

        mysql
        

        And you see:

        Welcome to the MySQL monitor ...
      2. You can leave the MySQL monitor and return to the Linux command line with:

        \q
        

      Your LEMP stack is now installed, activated, and ready for application development. For a basic LEMP installation, this consists of placing programming source code in the /var/www/html directory, and occasionally updating the configurations of the LEMP layers.

      Use the LEMP Stack to Create an Example Application

      You can create a minimal model application that exercises each component and typical interactions between them. This application collects a record of each Web request made to the server in its backend database. A more refined version of this application could be used to collect:

      • Sightings of a rare bird at different locations.
      • Traffic at voting stations.
      • Requests for customer support.
      • Tracking data for a company automobile.

      The configuration and source below apply to LEMP environments. Even if your LEMP stack used different commands during installation, the directions that follow apply with a minimum amount of customization or disruption.

      Prepare a Database to Receive Data

      Start application development by configuring the database to receive program data.

      1. Re-enter the MySQL monitor with:

        mysql
        
      2. While connected to MySQL, create a database instance specific to this development:

        CREATE DATABASE model_application;
        
      3. Enter that database with:

        USE model_application;
        
      4. Define a table for the program data:

        CREATE TABLE events (
            timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            client_ip INT(4) UNSIGNED NOT NULL
        );
        
      5. Create a database account:

        CREATE USER 'automation'@'localhost' IDENTIFIED BY 'abc123';
        
      6. Now allow PHP to access it:

        GRANT ALL PRIVILEGES ON model_application.* TO 'automation'@'localhost' WITH GRANT OPTION;
        
      7. Quit MySQL:

        \q
        

      A polished application uses tighter security privileges, but this sample application adopts simple choices to maintain focus on the teamwork between the different LEMP layers.
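
      As an illustration of what tighter privileges might look like for this particular application (these statements are optional and shown only as a sketch), the automation account could be limited to inserting rows into the events table:

        mysql -e "REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'automation'@'localhost';"
        mysql -e "GRANT INSERT ON model_application.events TO 'automation'@'localhost';"
        mysql -e "FLUSH PRIVILEGES;"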

      Create Application Source Code

      Create /var/www/html/event.php with the following content:

      File: /var/www/html/event.php
      
      <?php
          $connection = new mysqli("127.0.0.1", "automation", "abc123", "model_application");
          $client_ip = $_SERVER['REMOTE_ADDR'];
          // INET_ATON() packs an IPv4 string representation into
          // four octets in a standard way.
          $query = "INSERT INTO events(client_ip) VALUES(INET_ATON('$client_ip'))";
          $connection->query($query);
          echo 'Your request has successfully created one database record.';
      ?>
      

      Verify Operation of the Application

      1. event.php is the only program source code for our minimal model application. With it in place, instruct your browser to visit http://$LEMP_HOST/event.php.

        You should see:

        Your request has successfully created one database record.
      2. You can also exercise the application from different remote browser connections. With a different browser, perhaps from a different desktop, again navigate to http://$LEMP_HOST/event.php, or exercise it from the command line as shown below.
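
      As a command-line equivalent of the browser request, assuming the shell variable LEMP_HOST holds your server’s address, you can run:

        curl http://$LEMP_HOST/event.php

      Each invocation should return the same success message and add one more row to the events table.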

      View Collected Data

      The model application exhibits the expected behavior of a Web application: your browser reports success for each request.

      1. To confirm it updated the database, re-enter the MySQL monitor:

        mysql
        
      2. Enter the example application database:

        USE model_application;
        
      3. Pose the query:

        select timestamp, inet_ntoa(client_ip) from events;
        

        You should see output such as:

        +---------------------+----------------------+
        | timestamp           | inet_ntoa(client_ip) |
        +---------------------+----------------------+
        | 2022-08-03 02:26:44 | 127.0.0.1            |
        | 2022-08-03 02:27:18 | 98.200.8.79          |
        | 2022-08-05 02:27:23 | 107.77.220.62        |
        +---------------------+----------------------+

      This demonstrates the flow of data from a Web browser to the database server. Each row in the events table reflects one request from a Web browser to connect to the application. As the application goes into practical use, rows accumulate in the table.
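
      As the table grows, simple SQL reports become possible. For example, a quick daily count of requests can be run from the Linux command line; this query is illustrative only:

        mysql -e "SELECT DATE(timestamp) AS day, COUNT(*) AS requests FROM model_application.events GROUP BY day;"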

      Application Context

      LEMP is a trustworthy basis for Web development, with decades of successful deliveries over a range of requirements. It directly supports only server-side processing: the model application above delivers pure HTML to the browser. LEMP can serve CSS and JavaScript just as well, but it builds in no tooling for these client-side technologies. Projects that rely on elaborate user-interface effects usually choose a framework focused on the client side, such as React.

      Server-side orientation remains adequate for many applications, and LEMP fits these well. Server-side computation typically involves several functions beyond the model application above, including:

      • Account Management
      • Forms Processing
      • Security Restrictions
      • Analytic and Cost Reporting
      • Exception Handling
      • Quality Assurance Instrumentation

      Contemporary applications often build in a
      model-view-controller (MVC) architecture, and/or define a
      representational state transfer (REST) perspective. A commercial-grade installation usually migrates the database server to a separate dedicated host. Additionally, high-volume applications often introduce load balancers, security-oriented proxies,
      content delivery network (CDN) services, and other refinements. These functions are layers over the basic data flow between user, browser, business logic processing, and datastore that the model application embodies. The model application is a good first example.

      Conclusion

      You just installed a working LEMP stack, activated it, and created a model application. Everything a specific Web application needs has a place in this same model.




      How to Create a MERN Stack on Linux


      Of all the possible technical bases for a modern web site, MERN holds a leading position in popularity. This introduction familiarizes you with the essential tools used by a large share of web sites worldwide.

      Before You Begin

      1. If you have not already done so, create a Linode account and Compute Instance. See our
        Getting Started with Linode and
        Creating a Compute Instance guides.

      2. Follow our
        Setting Up and Securing a Compute Instance guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.

      Note

      The steps in this guide require root privileges. Be sure to run the steps below as root or with the sudo prefix. For more information on privileges, see our
      Users and Groups guide.

      What is the MERN stack?

      MERN refers to MongoDB, Express.js, ReactJS, and Node.js, four software tools which cooperate to power millions of web sites worldwide. In broad terms:

      • MongoDB manages data, such as customer information, technical measurements, and event records.
      • Express.js is a web application framework for the “behaviors” of particular applications. For example, how data flows from catalog to shopping cart.
      • ReactJS is a library of user-interface components for managing the visual “state” of a web application.
      • Node.js is a back-end runtime environment for the server side of a web application.

      Linode has
      many articles on each of these topics, and supports thousands of
      Linode customers who have created successful applications based on these tools.

      One of MERN’s important distinctions is that the JavaScript programming language is used throughout the entire stack. Certain competing stacks use PHP or Python on the back end, JavaScript on the front end, and perhaps SQL for data storage. MERN developers focus on a single programming language, JavaScript, with all the economies that implies for training and tooling.

      Install the MERN stack

      You can install a basic MERN stack on a 64-bit x86_64
      Linode Ubuntu 20.04 host in under half an hour. As of this writing, parts of MERN for Ubuntu 22.04 remain experimental. While thousands of variations are possible, this section typifies a correct “on-boarding” sequence. The emphasis here is on “correct”, as scores of already-published tutorials embed enough subtle errors to block their use by readers starting from scratch.

      Install MongoDB

      1. Update the repository cache:

        apt update -y
        
      2. Install the networking and certificate dependencies MongoDB requires (systemctl itself is already provided by systemd on Ubuntu, so it is not installed as a separate package):

        apt install ca-certificates curl gnupg2 wget -y
        
      3. Configure access to the official MongoDB Community Edition repository with the MongoDB public GPG key:

        wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | apt-key add -
        
      4. Create a MongoDB list file:

        echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-5.0.list
        
      5. Update the repository cache again:

        apt update -y
        
      6. Install MongoDB itself:

        apt install mongodb-org -y
        
      7. Enable the MongoDB service so that it starts at boot:

        systemctl enable mongod
        
      8. Launch the MongoDB service:

        systemctl start mongod
        
      9. Verify the MongoDB service:

        systemctl status mongod
        

        You should see diagnostic information that concludes:

        … Started MongoDB Database Server.
        
      10. For an even stronger confirmation that the MongoDB server is ready for useful action, connect to it directly:

        mongo
        
      11. Now issue this command:

        db.runCommand({ connectionStatus: 1 })
        

        You should see, along with many other details, this summary of the connectionStatus:

        … MongoDB server … "ok" : 1 …
        
      12. Exit Mongo:

         exit
        

      Install Node.js

      While the acronym is MERN, the true order of its dependencies is better written as “MNRE”: ReactJS and Express.js conventionally require Node.js, so the next installation steps focus on Node.js. As with MongoDB, the current Node.js release is not available from the main Ubuntu repository, so you add the NodeSource repository.

      1. Run this command to add the NodeSource repository:

        curl -sL https://deb.nodesource.com/setup_16.x | bash -
        
      2. Install Node.js itself:

        apt-get install nodejs -y
        
      3. Verify the installation:

        node -v
        

        You should see v16.15.1 or perhaps later.
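
      npm is normally installed alongside Node.js by the NodeSource package, and the React and Express steps below depend on it. You can confirm it is present with:

        npm -v

      This prints an npm version in the 8.x series or later.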

      Install React.js

      1. Next, install React.js:

        mkdir demonstration; cd demonstration
        npx --yes create-react-app frontend
        cd frontend
        npm run build
        

      Templates for all the HTML, CSS, and JS for your model application are now present in the demonstration/frontend directory.
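
      You can confirm that the production bundle exists by listing the build directory from within demonstration/frontend, where you just ran the build (the exact file names vary between create-react-app releases). Expect to see index.html, a static directory, and related assets:

        ls build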

      Install Express.js

      1. Express.js is the final component of the basic MERN stack. Install it along with the application’s other back-end dependencies:

        cd ..; mkdir server; cd server
        npm init -y
        cd ..
        npm install cors express mongodb mongoose nodemon
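
      As a quick, optional sanity check that the back-end dependencies resolved correctly, run the following from the demonstration directory:

        node -e "require('express'); require('mongoose'); console.log('Express and Mongoose load correctly.');"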
        

      Use the MERN stack to create an example application

      The essence of a web application is to respond to a request from a web browser with an appropriate result, backed by a datastore that “remembers” crucial information from one session to the next. Any realistic full-scale application involves account management, database backup, context dependence, and other refinements. Rather than risk the distraction and loss of focus these details introduce, this section illustrates the simplest possible use of MERN to implement a
      three-tier operation typical of real-world applications.

      “Three-tier” in this context refers to the teamwork web applications embody between:

      • The presentation in the web browser of the state of an application
      • The “back end” of the application which realizes that state
      • The datastore which supports the back end beyond a single session of the front end or even the restart of the back end.

      You can create a tiny application which receives a request from a web browser, creates a database record based on that request, and responds to the request. The record is visible within the Mongo datastore.

      Initial configuration of the MERN application

      1. Create demonstration/server/index.js with this content:

        const express = require('express');
        const bodyParser = require('body-parser');
        const mongoose = require('mongoose');
        const routes = require('../routes/api');
        const app = express();
        const port = 4200;

        // Connect to the database
        mongoose
          .connect('mongodb://127.0.0.1:27017/', { useNewUrlParser: true })
          .then(() => console.log(`Database connected successfully`))
          .catch((err) => console.log(err));

        // Override mongoose's deprecated Promise with Node's Promise.
        mongoose.Promise = global.Promise;

        // Allow cross-origin requests so a separately served front end can call the API.
        app.use((req, res, next) => {
          res.header('Access-Control-Allow-Origin', '*');
          res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept');
          next();
        });

        // Parse JSON request bodies and mount the API routes.
        app.use(bodyParser.json());
        app.use('/api', routes);

        // Log any errors and continue.
        app.use((err, req, res, next) => {
          console.log(err);
          next();
        });

        app.listen(port, () => {
          console.log(`Server runs on port ${port}.`);
        });
        
      2. Create demonstration/routes/api.js with this content:

        const express = require('express');
        const router = express.Router();

        var MongoClient = require('mongodb').MongoClient;
        var url = "mongodb://127.0.0.1:27017/";
        const mongoose = require('mongoose');
        var db = mongoose.connection;

        // GET /api/record?item=<value> inserts one document into the
        // "demonstration" collection of the "mydb" database.
        router.get('/record', (req, res, next) => {
          const item = req.query.item;
          MongoClient.connect(url, function(err, db) {
            if (err) throw err;
            var dbo = db.db("mydb");
            var myobj = { name: item };
            // Name the callback parameter "result" so it does not shadow the Express "res".
            dbo.collection("demonstration").insertOne(myobj, function(err, result) {
              if (err) throw err;
              console.log(`One item (${item}) inserted.`);
              db.close();
              // Respond so the browser request does not hang.
              res.send(`One item (${item}) inserted.`);
            })
          });
        })

        module.exports = router;
        
      3. Create demonstration/server/server.js with this content. This file sketches a more fully structured server: it expects ./routes/record and ./db/conn modules, the dotenv package, and a config.env file, none of which are created in this guide, and it is not used by the verification steps below:

        const express = require("express");
        const app = express();
        const cors = require("cors");
        require("dotenv").config({ path: "./config.env" });
        const port = process.env.PORT || 4200;
        app.use(cors());
        app.use(express.json());
        app.use(require("./routes/record"));
        const dbo = require("./db/conn");
        
        app.listen(port, () => {
          // Connect on start.
          dbo.connectToServer(function (err) {
            if (err) console.error(err);
          });
          console.log(`Server is running on port: ${port}`);
        });
        

      Verify your application

      1. Launch the application server:

        node server/index.js
        
      2. In a convenient Web browser (or with curl, as shown after this list), request:

        localhost:4200/api/record?item=this-new-item
        

        At this point, your terminal should display:

        One item (this-new-item) inserted.
        
      3. Now launch an interactive shell to connect to the MongoDB datastore:

        mongo
        
      4. Within the MongoDB shell, request:

        use mydb
        db.demonstration.find({})
        

        Mongo should report that it finds a record:

        { "_id" : ObjectId("62c84fe504d6ca2aa325c36b"), "name" : "this-new-item" }
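
        You can also issue the same request from the command line while the server from step 1 is still running; this is an equivalent of the browser request in step 2:

        curl "localhost:4200/api/record?item=another-item"

        The terminal running the server should then log another insertion, and the new document appears in the next db.demonstration.find({}) query.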
        

      This demonstrates a minimal MERN action:

      • The web browser issues a request with particular data.
      • The Express application server receives the data from the request, and acts on the MongoDB datastore.
      • In a complete application, the React front end built earlier would issue such requests and render the results; in this minimal demonstration, the browser calls the Express route directly.

      Conclusion

      You now know how to install each of the basic components of the MERN stack on a standard Ubuntu 20.04 server, and team them together to demonstrate a possible MERN action: creation of one database record based on a browser request.

      Any real-world application involves considerably more configuration and source files. MERN enjoys abundant tooling to make the database and web connections more secure, to validate data systematically, to structure a
      complete Application Programming Interface (API), and to simplify debugging. Nearly all practical applications need to create records, update, delete, and list them. All these other refinements and extensions use the elements already present in the workflow above. You can build everything your full application needs from this starting point.





      How to Set Up TOBS, The Observability Stack, for Kubernetes Monitoring


      Introduction

      TOBS, short for The Observability Stack, is a pre-packaged distribution of monitoring tools and dashboard interfaces which can be installed into any existing Kubernetes cluster. It includes many of the most popular open-source observability tools with Prometheus and Grafana as a baseline, including PromLens, TimescaleDB, Alertmanager, and others. Together, these provide a straightforward, maintainable solution for analyzing server traffic and identifying any potential problems with a deployment up to a very large scale.

      TOBS makes use of standard Kubernetes Helm charts in order to configure and update deployments. It can be installed into any Kubernetes cluster, but it can be demonstrated more effectively if you’re running kubectl to manage your cluster from a local machine rather than a remote node. DigitalOcean’s Managed Kubernetes will provide you with a configuration like this by default.

      In this tutorial, you will install TOBS into an existing Kubernetes cluster, and learn how to update, configure, and browse its component dashboards.

      Prerequisites

      To follow this tutorial, you will need:

      • An existing Kubernetes cluster.
      • The kubectl command-line tool, installed on your local machine and configured to connect to that cluster (verified in Step 1).

      Step 1 — Verifying your Kubernetes Configuration

      In order to install TOBS, you should first have a valid Kubernetes configuration set up with kubectl, from which you can reach your worker nodes. You can test this by running kubectl get nodes:

      If kubectl is able to connect to your Kubernetes cluster and it’s up and running as expected, this command will return a list of nodes with the Ready status:

      Output

      NAME                   STATUS   ROLES    AGE   VERSION
      pool-uqv8a47h0-ul5a7   Ready    <none>   22m   v1.21.5
      pool-uqv8a47h0-ul5am   Ready    <none>   21m   v1.21.5
      pool-uqv8a47h0-ul5aq   Ready    <none>   21m   v1.21.5

      If this is successful, you can move on to Step 2. If not, you should review your configuration details for any issues.

      By default, kubectl will look for a file at ~/.kube/config in order to understand your environment. In order to verify that this file exists and contains valid YAML syntax, you can run head on it to view its first several lines:

      • head ~/.kube/config

      Output

      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: …

      If the file does not exist, ensure that you are logged in as the same user that you configured Kubernetes with. ~/ paths reflect individual users’ home directories, and Kubernetes configurations are saved per-user by default.

      If you are using DigitalOcean’s Managed Kubernetes, ensure that you have run the doctl kubernetes cluster kubeconfig save command after setting up a cluster so that your local machine can authenticate to it. This will create a ~/.kube/config file:

      • doctl kubernetes cluster kubeconfig save your-cluster-name

      If you are using this machine to access multiple clusters, you should review the Kubernetes documentation on using environment variables and multiple configuration files in order to avoid conflicts. After configuring your kubectl environment, you can move on to installing TOBS in the next step.
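
      As a concrete illustration of that multi-cluster approach (the second file name here is only a placeholder), you can point the KUBECONFIG environment variable at a colon-separated list of configuration files and then confirm which contexts kubectl can see:

      • export KUBECONFIG=~/.kube/config:~/.kube/other-cluster-config

      • kubectl config get-contexts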

      Step 2 — Installing TOBS and Testing Your Endpoints

      TOBS includes the following components:

      • Prometheus is a time series database and monitoring tool that works by polling metrics endpoints and scraping and processing the data exposed by these endpoints. It allows you to query this data using PromQL, a time series data query language.
      • Alertmanager, usually deployed alongside Prometheus, forms the alerting layer of the stack, handling alerts generated by Prometheus and deduplicating, grouping, and routing them to integrations like email or PagerDuty. To learn more about Alertmanager, consult the Prometheus documentation on alerting.
      • Grafana is a data visualization and analytics tool that allows you to build dashboards and graphs for your metrics data.
      • kube-state-metrics is an add-on agent that listens to the Kubernetes API server and generates metrics about the state of Kubernetes objects like Deployments and Pods. These metrics are served as plaintext on HTTP endpoints and consumed by Prometheus.
      • Lastly, node-exporter is a Prometheus exporter that runs on cluster nodes and provides OS and hardware metrics like CPU and memory usage to Prometheus. These metrics are also served as plaintext on HTTP endpoints and consumed by Prometheus.

      In order to install TOBS, you first need to run the TOBS installer on your control-plane. This will set up the tobs command and configuration directories. As mentioned in the prerequisites, the tobs command is only designed to work on Linux/macOS/BSD systems (like the official Kubernetes binaries), so if you have been using Windows up to now, you should be working in the Windows Subsystem for Linux environment.

      Retrieve and run the TOBS installer:

      • curl --proto '=https' --tlsv1.2 -sSLf https://tsdb.co/install-tobs-sh | sh

      Output

      tobs 0.7.0 was successfully installed 🎉 Binary is available at /root/.local/bin/tobs.

      You can now push TOBS to your Kubernetes cluster. This is done with a one-liner using your newly-installed tobs command:

      • tobs install

      This will generate several lines of output and may take a few moments. Depending on your exact version of Kubernetes, there may be several warnings in the output, but you can ignore these as long as you eventually receive the Welcome to tobs message:

      Output

      WARNING: Using a generated self-signed certificate for TLS access to TimescaleDB.
      This should only be used for development and demonstration purposes.
      To use a signed certificate, use the "--tls-timescaledb-cert" and "--tls-timescaledb-key" flags when issuing the tobs install command.
      Creating TimescaleDB tobs-certificate secret
      Creating TimescaleDB tobs-credentials secret
      skipping to create TimescaleDB s3 backup secret as backup option is disabled.
      2022/01/10 11:25:34 Transport: unhandled response frame type *http.http2UnknownFrame
      2022/01/10 11:25:35 Transport: unhandled response frame type *http.http2UnknownFrame
      2022/01/10 11:25:35 Transport: unhandled response frame type *http.http2UnknownFrame
      Installing The Observability Stack
      2022/01/10 11:25:37 Transport: unhandled response frame type *http.http2UnknownFrame
      W0110 11:25:55.438728   75479 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
      W0110 11:25:55.646392   75479 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
      …
      👋🏽 Welcome to tobs, The Observability Stack for Kubernetes
      …

      The output from this point onward will contain instructions for connecting to each of Prometheus, TimescaleDB, PromLens, and Grafana’s web endpoints in your browser. It is reproduced in full below for reference:

      Output

      ###############################################################################
      🔥 PROMETHEUS NOTES:
      ###############################################################################
      Prometheus can be accessed via port 9090 on the following DNS name from within your cluster:
      tobs-kube-prometheus-prometheus.default.svc.cluster.local

      Get the Prometheus server URL by running these commands in the same shell:
        tobs prometheus port-forward

      The Prometheus alertmanager can be accessed via port 9093 on the following DNS name from within your cluster:
      tobs-kube-prometheus-alertmanager.default.svc.cluster.local

      Get the Alertmanager URL by running these commands in the same shell:
        export POD_NAME=$(kubectl get pods --namespace default -l "app=alertmanager,alertmanager=tobs-kube-prometheus-alertmanager" -o jsonpath="{.items[0].metadata.name}")
        kubectl --namespace default port-forward $POD_NAME 9093

      WARNING! Persistence is disabled on AlertManager. You will lose your data when the AlertManager pod is terminated.

      ###############################################################################
      🐯 TIMESCALEDB NOTES:
      ###############################################################################
      TimescaleDB can be accessed via port 5432 on the following DNS name from within your cluster:
      tobs.default.svc.cluster.local

      To get your password for superuser run:
        tobs timescaledb get-password -U <user>

      To connect to your database, chose one of these options:
      1. Run a postgres pod and connect using the psql cli:
         tobs timescaledb connect -U <user>
      2. Directly execute a psql session on the master node
         tobs timescaledb connect -m

      ###############################################################################
      🧐 PROMLENS NOTES:
      ###############################################################################
      PromLens is a PromQL query builder, analyzer, and visualizer.

      You can access PromLens via a local browser by executing:
        tobs promlens port-forward

      Then you can point your browser to http://127.0.0.1:8081/.

      ###############################################################################
      📈 GRAFANA NOTES:
      ###############################################################################
      1. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:
         tobs-grafana.default.svc.cluster.local

         You can access grafana locally by executing:
         tobs grafana port-forward

         Then you can point your browser to http://127.0.0.1:8080/.

      2. The 'admin' user password can be retrieved by:
         tobs grafana get-password

      3. You can reset the admin user password with grafana-cli from inside the pod.
         tobs grafana change-password <password-you-want-to-set>

      Each of these components is provided with a DNS name internal to your cluster so that they can be accessed from any of your worker nodes, e.g. tobs-kube-prometheus-prometheus.default.svc.cluster.local for Prometheus. In addition, there is a port forwarding command configured for each that allows you to access them from a local web browser.

      In a new terminal, run tobs prometheus port-forward:

      • tobs prometheus port-forward

      This will occupy the terminal as long as the port forwarding process is active. You can press Ctrl+C to gracefully quit a blocking process such as this one when you want to stop forwarding the port. Next, in a web browser, go to the URL http://127.0.0.1:9090/. You should see the full Prometheus interface running and producing metrics from your cluster:

      Prometheus welcome
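
      While the port forward is active, you can also confirm from another terminal that Prometheus is responding, without using the browser. A healthy instance replies with a short message such as “Prometheus Server is Healthy.” (the exact wording may differ between Prometheus versions):

      • curl http://127.0.0.1:9090/-/healthy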

      You can do the same for Grafana, which is accessible at http://127.0.0.1:8080/ as long as port forwarding is active in another process. First, you’ll need to use the get-password command provided by the installer output:

      • tobs grafana get-password

      Output

      your-grafana-password

      You can then use this password to log into the Grafana interface by running its port forwarding command and opening http://127.0.0.1:8080/ in your browser.

      • tobs grafana port-forward

      Grafana welcome

      You now have a working TOBS stack running in your Kubernetes cluster. You can refer to the individual components’ documentation in order to learn their respective features. In the last step of this tutorial, you’ll learn how to make updates to the TOBS configuration itself.

      Step 3 — Editing TOBS Configurations and Upgrading

      TOBS’ configuration contains some parameters for the individual applications in the stack, as well as some parameters for the TOBS deployment itself. It is generated and stored as a Kubernetes Helm chart. You can output your current configuration by running tobs helm show-values. However, this will output the entire long configuration to your terminal, which can be difficult to read. You can instead redirect the output to a file with the .yaml extension, because Helm charts are all valid YAML syntax:

      • tobs helm show-values > values.yaml

      The file contents will look like this:

      ~/values.yaml

      2022/01/10 11:56:37 Transport: unhandled response frame type *http.http2UnknownFrame
      # Values for configuring the deployment of TimescaleDB
      # The charts README is at:
      #    https://github.com/timescale/timescaledb-kubernetes/tree/master/charts/timescaledb-single
      # Check out the various configuration options (administration guide) at:
      #    https://github.com/timescale/timescaledb-kubernetes/blob/master/charts/timescaledb-single/admin-guide.md
      cli: false
      
      # Override the deployment namespace
      namespaceOverride: ""
      …
      

      You can review the additional parameters available for TOBS’ configuration by reading the TOBS documentation.

      If you ever modify this file in order to update your deployment, you can re-install TOBS over itself using the updated configuration. Just pass the -f option to the tobs install command with the YAML file as an additional argument:

      • tobs install -f values.yaml

      Finally, you can upgrade TOBS with the following command:

      • tobs upgrade

      This performs the equivalent of a helm upgrade by fetching the newest upstream chart.
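
      If you have the Helm CLI installed locally (it is not required by tobs itself), one way to confirm which chart version is now deployed, assuming TOBS was installed into the default namespace as in this tutorial, is:

      • helm list --namespace default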

      Conclusion

      In this tutorial, you learned to deploy and configure TOBS, The Observability Stack, on an existing Kubernetes cluster. TOBS is particularly helpful because it eliminates the need to individually maintain configuration details for each of these apps, while providing standardized monitoring for the applications running on your cluster.

      Next, you might want to learn how to use Cert-Manager to handle HTTPS ingress to your Kubernetes cluster.


