      How to Deploy a Packer Image with Terraform


      Packer and Terraform, both by HashiCorp, stand out as infrastructure automation tools. Despite some overlap, the two have distinct and complementary features, which makes them an effective pair: Packer creates machine images that Terraform then deploys as a complete infrastructure.

      Learn more about Packer in our
      Using the Linode Packer Builder to Create Custom Images guide. Discover how you can leverage Terraform in our
      Beginner’s Guide to Terraform.

      In this tutorial, find out how to use Packer and Terraform together to deploy Linode instances. The tutorial uses the Linode Terraform provider to deploy several instances based on a Linode image built with Packer.

      Before You Begin

      1. If you have not already done so, create a Linode account and Compute Instance. See our
        Getting Started with Linode and
        Creating a Compute Instance guides.

      2. Follow our
        Setting Up and Securing a Compute Instance guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.

      Note

      This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you’re not familiar with the sudo command, see the
      Users and Groups guide.

      How to Install the Prerequisites

      To get started, install both Packer and Terraform on the same system. Below you can find links to installation guides for the two tools, as well as steps covering most Linux operating systems.

      Installing Packer

      Packer’s installation process varies substantially depending on your operating system. Refer to the
      official installation guide for instructions if your system is not covered here.

      Debian / Ubuntu

      sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
      curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
      sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
      sudo apt-get update && sudo apt-get install packer

      AlmaLinux / CentOS Stream / Rocky Linux

      sudo yum install -y yum-utils
      sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
      sudo yum -y install packer

      Fedora

      sudo dnf install -y dnf-plugins-core
      sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
      sudo dnf -y install packer

      Afterward, verify your installation and display the installed version with the following command:
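
      packer --version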

      1.8.4

      Installing Terraform

      Terraform’s installation process also varies depending on your operating system. Refer to HashiCorp’s
      official documentation on installing the Terraform CLI for systems that are not covered here. You can also refer to the section on installing Terraform in our guide
      Use Terraform to Provision Linode Environments.

      Debian / Ubuntu

      sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
      wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg
      echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
      sudo apt update && sudo apt install terraform

      AlmaLinux / CentOS Stream / Rocky Linux

      sudo yum install -y yum-utils
      sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
      sudo yum -y install terraform

      Fedora

      sudo dnf install -y dnf-plugins-core
      sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
      sudo dnf -y install terraform

      Afterward, verify your installation with:
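
      terraform -version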

      Terraform v1.3.3
      on linux_amd64

      How to Build a Packer Image

      Packer automates the creation of machine images. These images are helpful when looking to streamline your process for provisioning infrastructure. Such images give you a consistent basis for deploying instances.

      Moreover, deploying from images is more efficient. Rather than executing a series of installations and commands on each provisioned instance, the provisioning tool can deploy a ready-made image.

      The examples in this tutorial use a Linode image built with Packer. Linode provides a builder for Packer, which lets you put together images specifically for Linode instances.

      To do so, follow along with our guide on
      Using the Linode Packer Builder to Create Custom Images. By the end, you should have a Packer-built image on your Linode account.

      The remaining steps in this tutorial should work no matter what kind of image you built following the guide linked above. However, the Packer image used in the examples to follow has the label packer-linode-image-1, runs on an Ubuntu 20.04 base, and has NGINX installed.

      How to Configure Terraform

      Terraform focuses on automating the provisioning process, allowing you to deploy your infrastructure entirely from code.

      To learn more about deploying Linode instances with Terraform, see our tutorial on how to
      Use Terraform to Provision Linode Environments.

      This tutorial covers a similar series of steps, but specifically demonstrates how you can work with custom Linode images.

      Before moving ahead, create a directory for your Terraform scripts and make it your working directory. This tutorial uses the linode-terraform directory in the current user’s home directory:

      mkdir ~/linode-terraform
      cd ~/linode-terraform

      The rest of the tutorial assumes you are working out of this directory.

      Setting Up the Linode Provider

      Terraform’s providers act as abstractions of APIs, giving Terraform an interface for working with various resources on host platforms.

      Linode has its own Terraform provider, which you can learn more about from its Terraform
      provider registry page.

      To use the provider, you just need a couple of short blocks in a Terraform script.

      Create a new Terraform file named packer-linode.tf, which acts as the base for this tutorial’s Terraform project:
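
      nano packer-linode.tf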

      Give it the contents shown here:

      File: packer-linode.tf
      terraform {
        required_providers {
          linode = {
            source = "linode/linode"
            version = "1.29.3"
          }
        }
      }
      provider "linode" {
        token = var.token
      }

      The terraform block declares the project’s required providers (here, Linode). The provider block then configures the Linode provider. The token argument allows the provider to authenticate its connection to the Linode API.

      When done, press CTRL+X to exit nano, Y to save, and Enter to confirm.

      Assigning Terraform Variables

      Above, you can see that the token value for the Linode provider uses the var.token variable. Although not required, variables make Terraform scripts much more adaptable and manageable.

      This tutorial handles variables using two files.

      1. First, create a variables.tf file:
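
        nano variables.tf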

        Now fill it with the contents shown below. This file defines all the variables for the Terraform project. Some of these variables have default values, which Terraform automatically uses if not otherwise assigned. Other variables need to be assigned, which you can see in the next file.

        File: variables.tf
        variable "token" {
          description = "The Linode API Personal Access Token."
        }
        variable "password" {
          description = "The root password for the Linode instances."
        }
        variable "ssh_key" {
          description = "The location of an SSH key file for use on the Linode instances."
          default = "~/.ssh/id_rsa.pub"
        }
        variable "node_count" {
          description = "The number of instances to create."
          default = 1
        }
        variable "region" {
          description = "The name of the region in which to deploy instances."
          default = "us-east"
        }
        variable "image_id" {
          description = "The ID for the Linode image to be used in provisioning the instances"
          default = "linode/ubuntu20.04"
        }

        When done, press CTRL+X to exit nano, Y to save, and Enter to confirm.

      2. Now create a terraform.tfvars file:
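
        nano terraform.tfvars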

        This file, with the .tfvars ending, is a place for assigning variable values. Give the file the contents below, replacing the values in angle brackets (<...>) with your actual values:

        File: terraform.tfvars
        token = "<LinodeApiToken>"
        password = "<RootPassword>"
        node_count = 2
        image_id = "private/<LinodeImageId>"

        The <LinodeApiToken> needs to be an API token associated with your Linode account. You can follow our
        Get an API Access Token guide to generate a personal access token. Be sure to give the token “Read/Write” permissions.

        Above, you can see a value of private/<LinodeImageId> for the image_id. This value should match the image ID for the Linode image you created with Packer. All custom Linode images are prefaced with private/ and conclude with the image’s ID. In these examples, private/17691867 is assumed to be the ID for the Linode image built with Packer.

        There are two main ways to get your image ID:

        • The Linode image ID appears at the end of the output when you use Packer to create the image. For instance, in the guide on creating a Linode image with Packer linked above, you can find the output:

          ==> Builds finished. The artifacts of successful builds are:
          --> linode.example-linode-image: Linode image: packer-linode-image-1 (private/17691867)
        • The Linode API has an endpoint for listing available images. The list includes your custom images if you call it with your API token.

          You can use a cURL command to list all images available to you, public and private. Replace $LINODE_API_TOKEN with your Linode API token:

          curl -H "Authorization: Bearer $LINODE_API_TOKEN" https://api.linode.com/v4/images

          The output can be overwhelming in the command line, so you may want to use another tool to prettify the JSON response, as was done for the result shown here:

          {
              "pages": 1,
              "data": [{
                  "id": "private/17691867",
                  "label": "packer-linode-image-1",
                  "description": "Example Packer Linode Image",
                  // [...]

        When done, press CTRL+X to exit nano, Y to save, and Enter to confirm.

      Defining the Linode Resource

      The next step for the Terraform script is to define the actual resource to be provisioned. In this case, the script needs to provision Linode instances, which can be done using the linode_instance resource.

      Open the packer-linode.tf file created earlier and add the details shown here to the end:
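
      nano packer-linode.tf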

      File: packer-linode.tf
      resource "linode_instance" "packer_linode_instance" {
        count = var.node_count
        image = var.image_id
        label = "packer-image-linode-${count.index + 1}"
        group = "packer-image-instances"
        region = var.region
        type = "g6-standard-1"
        authorized_keys = [ chomp(file(var.ssh_key)) ]
        root_pass = var.password
        connection {
          type = "ssh"
          user = "root"
          password = var.password
          host = self.ip_address
        }
        provisioner "remote-exec" {
          inline = [
            # Update the system.
            "apt-get update -qq",
            # Disable password authentication; users can only connect with an SSH key.
            "sed -i '/PasswordAuthentication/d' /etc/ssh/sshd_config",
            "echo \"PasswordAuthentication no\" >> /etc/ssh/sshd_config",
            # Check to make sure NGINX is running.
            "systemctl status nginx --no-pager"
          ]
        }
      }

      And with that, the Terraform project is ready to provision two Linode instances based on your Packer-built image. Most of the configuration details for the resource block are managed by variables, so you shouldn’t need to modify much of the resource block to adjust things like the number of instances to provision.

      The remote-exec provisioner, and specifically the inline list within it, is where much of the customization comes in. This block defines shell commands to be executed on the newly provisioned instance. The commands here are relatively simple, but this provisioner can give you fine-grained control of operations on the instance.

      How to Provision a Packer Image with Terraform

      From here, a handful of Terraform commands are all you need to provision and manage Linode instances from the Packer-built image.

      First, Terraform needs to run some initialization around the script. This installs any prerequisites, specifically the linode provider in this example, and sets up Terraform’s lock file.
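
      Do so with the init command, run from within your project directory:

      terraform init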

      Running Terraform’s plan command is also good practice. Here, Terraform checks your script for immediate errors and provides an outline of the projected resources to deploy. You can think of it as a light dry run.
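
      terraform plan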

      Review the plan, and when ready, provision your instances with the apply command. This may take several minutes to process, depending on your systems and the number of instances being deployed.
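
      terraform apply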

      linode_instance.packer_linode_instance[0] (remote-exec): Connected!
      linode_instance.packer_linode_instance[0] (remote-exec): ● nginx.service - A high performance web server and a reverse proxy server
      linode_instance.packer_linode_instance[0] (remote-exec):      Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
      linode_instance.packer_linode_instance[0] (remote-exec):      Active: active (running) since Thu 2022-10-27 15:56:42 UTC; 9s ago
      [...]
      
      Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

      In the future, whenever you want to remove the instances created with Terraform, you can use the destroy command from within your Terraform script directory.
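
      terraform destroy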

      As with the apply command, you get a preview of the instances and are asked to confirm before the instances are destroyed.

      Conclusion

      This tutorial outlined how to use Terraform to deploy Linode instances built with a Packer image. This arrangement provides an efficient setup for provisioning and managing Linode instances. Terraform streamlines the process of provisioning infrastructure, and it is made even more efficient using pre-built images from Packer.

      The example covered in this tutorial is fairly simple. But the setup can be readily adapted and expanded on to deploy more robust and complex infrastructures.





      How to Use the Linode Ansible Collection to Deploy a Linode


      Ansible is a popular open-source Infrastructure as Code (IaC) tool that can be used to complete common IT tasks like cloud provisioning and configuration management across a wide array of infrastructure components. Commonly seen as a solution to multi-cloud configurations, automation, and continuous delivery issues, Ansible is considered by many to be an industry standard in the modern cloud landscape.

      Ansible Collections are the latest standard for managing Ansible content, empowering users to install roles, modules, and plugins with less developer and administrative overhead than ever before.
      The Linode Ansible collection provides the basic plugins needed to get started using Linode services with Ansible right away.

      This guide shows how to install the Linode Ansible collection, configure Ansible and securely store secrets with Ansible Vault, and deploy a new Linode instance using a playbook.

      Before You Begin

      Note

      The steps outlined in this guide require
      Ansible version 2.9.10 or greater and were tested on a Linode running Ubuntu 22.04. The instructions can be adapted to other Linux distributions or operating systems.
      1. Provision a server that acts as the Ansible
        control node, from which other compute instances are deployed. Follow the instructions in our
        Creating a Compute Instance guide to create a Linode running Ubuntu 22.04. A shared CPU 1GB Nanode is suitable. You can also use an existing workstation or laptop if you prefer.

      2. Add a limited Linux user to your control node Linode by following the
        Add a Limited User Account section of our
        Setting Up and Securing a Compute Instance guide. Ensure that all commands for the rest of this guide are entered as your limited user.

      3. Ensure that you have performed system updates:

        sudo apt update && sudo apt upgrade
        
      4. Install Ansible on your control node. Follow the steps in the
        Install Ansible section of the
        Getting Started With Ansible – Basic Installation and Setup guide.

      5. Ensure you have Python version 2.7 or higher installed on your control node. Issue the following command to check your system’s Python version:

        python --version
        

        Many operating systems, including Ubuntu 22.04, instead have Python 3 installed by default. The Python 3 interpreter can usually be invoked with the python3 command, and the remainder of this guide assumes Python 3 is installed and used. For example, you can run this command to check your Python 3 version:

        python3 --version
        
      6. Install the pip package manager:

        sudo apt install python3-pip
        
      7. Generate a Linode API v4 access token with permission to read and write Linodes and record it in a password manager or other safe location. Follow the
        Get an Access Token section of the
        Getting Started with the Linode API guide.

      Install the Linode Ansible Collection

      The Linode Ansible collection is currently open-source and hosted on both a
      public Github repository and on
      Ansible Galaxy. Ansible Galaxy is Ansible’s own community-focused repository, providing information on and access to a wide array of
      Ansible collections and
      Ansible roles. Ansible Galaxy support is built into the latest versions of Ansible by default. While users can install the Linode Ansible collection
      from source or by
      using git, these steps show how to use Ansible Galaxy:

      1. Install required dependencies for Ansible:

        sudo -H pip3 install -Iv 'resolvelib<0.6.0'
        
      2. Download the latest version of the Linode Ansible collection using the ansible-galaxy command:

        ansible-galaxy collection install linode.cloud
        

        Once the collection is installed, all of its files are stored in the default collections folder, ~/.ansible/collections/ansible_collections/.

      3. Install the Python module dependencies required for the Linode Ansible collection. The Linode collection’s installation directory contains a requirements.txt file that lists the Python dependencies, including the official
        Python library for the Linode API v4. Use pip to install these dependencies:

        sudo pip3 install -r ~/.ansible/collections/ansible_collections/linode/cloud/requirements.txt
        

      The Linode Ansible collection is now installed and ready to deploy and manage Linode services.

      Configure Ansible

      When interfacing with the Linode Ansible collection, it is generally good practice to use variables to securely store sensitive strings like API tokens. This section shows how to securely store and access the
      Linode API Access token (generated in the
      Before You Begin section) along with a root password that is assigned to new Linode instances. Both of these are encrypted with
      Ansible Vault.

      Create an Ansible Vault Password File

      1. From the control node’s home directory, create a development directory to hold user-generated Ansible files. Then navigate to this new directory:

        mkdir development && cd development
        
      2. In the development directory, create a new empty text file called .vault-pass (with no file extension). Then generate a unique, complex new password (for example, by using a password manager), copy it into the new file, and save it. This password is used to encrypt and decrypt information stored with Ansible Vault:

        File: ~/development/.vault-pass
        <PasteYourAnsibleVaultPasswordHere>

        This is an Ansible Vault password file. A password file provides your Vault password to Ansible Vault’s encryption commands. Ansible Vault also offers other options for password management. To learn more about password management, read Ansible’s
        Providing Vault Passwords documentation.

      3. Set permissions on the file so that only your user can read and write to it:

        chmod 600 .vault-pass
        

        Caution

        Do not check this file into version control. If this file is located in a Git repository, add it to your
        .gitignore file.

      Create an Ansible Configuration File

      Create an Ansible configuration file called ansible.cfg with a text editor of your choice. Copy this snippet into the file:

      File: ~/development/ansible.cfg
      [defaults]
      vault_password_file = ./.vault-pass

      These lines specify the location of your password file.
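
      If you want to confirm that Ansible picks up this setting, you can run the following command from the same directory. It lists any configuration values that differ from Ansible’s defaults, which should include the Vault password file path:

      ansible-config dump --only-changed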

      Encrypt Variables with Ansible Vault

      1. Create a directory to store variable files used with your
        Ansible playbooks:

        mkdir -p ~/development/group_vars/
        
      2. Make a new empty text file called vars.yml in this directory. In the next steps, your encrypted API token and root password are stored in this file:

        touch ~/development/group_vars/vars.yml
        
      3. Generate a unique, complex new password (for example, by using a password manager) that should be used as the root password for new compute instances created with the Linode Ansible collection. This should be different from the Ansible Vault password specified in the .vault-pass file.

      4. Use the following ansible-vault encrypt_string command to encrypt the new root password, replacing MySecureRootPassword with your password. Because this command is run from inside your ~/development directory, the Ansible Vault password in your .vault-pass file is used to perform the encryption:

         ansible-vault encrypt_string 'MySecureRootPassword' --name 'password' | tee -a group_vars/vars.yml
        

        In the above command, tee -a group_vars/vars.yml appends the encrypted string to your vars.yml file. Once completed, output similar to the following appears:

        password: !vault |
            $ANSIBLE_VAULT;1.1;AES256
            30376134633639613832373335313062366536313334316465303462656664333064373933393831
            3432313261613532346134633761316363363535326333360a626431376265373133653535373238
            38323166666665376366663964343830633462623537623065356364343831316439396462343935
            6233646239363434380a383433643763373066633535366137346638613261353064353466303734
            3833
      5. Run the following command to add a newline at the end of your vars.yml file:

        echo "" >> group_vars/vars.yml
        
      6. Use the following ansible-vault encrypt_string command to encrypt your Linode API token and append it to your vars.yml file, replacing MyAPIToken with your own access token:

        ansible-vault encrypt_string 'MyAPIToken' --name 'api_token' | tee -a group_vars/vars.yml
        
      7. Run the following command to add another newline at the end of your vars.yml file:

        echo "" >> group_vars/vars.yml
        

        Your vars.yml file should now resemble:

        File: ~/development/group_vars/vars.yml
        password: !vault |
                  $ANSIBLE_VAULT;1.1;AES256
                  30376134633639613832373335313062366536313334316465303462656664333064373933393831
                  3432313261613532346134633761316363363535326333360a626431376265373133653535373238
                  38323166666665376366663964343830633462623537623065356364343831316439396462343935
                  6233646239363434380a383433643763373066633535366137346638613261353064353466303734
                  3833
        api_token: !vault |
                  $ANSIBLE_VAULT;1.1;AES256
                  65363565316233613963653465613661316134333164623962643834383632646439306566623061
                  3938393939373039373135663239633162336530373738300a316661373731623538306164363434
                  31656434356431353734666633656534343237333662613036653137396235353833313430626534
                  3330323437653835660a303865636365303532373864613632323930343265343665393432326231
                  61313635653463333630636631336539643430326662373137303166303739616262643338373834
                  34613532353031333731336339396233623533326130376431346462633832353432316163373833
                  35316333626530643736636332323161353139306533633961376432623161626132353933373661
                  36663135323664663130

      Understanding Fully Qualified Collection Namespaces

      Ansible is now configured and the Linode Ansible collection is installed. You can create
      playbooks to leverage the collection and create compute instances and other Linode resources.

      Within playbooks, the Linode Ansible collection is further divided by resource types through the
      Fully Qualified Collection Name (FQCN) affiliated with the desired resource. These names serve as identifiers that help Ansible more easily and authoritatively distinguish between modules and plugins within a collection.

      Modules

      The Linode Ansible collection includes a module for each supported Linode resource type. Each module’s GitHub page lists all of the configuration options available for the resource it applies to. A full, dynamically updated list of all modules can be found in the
      Linode Ansible Collections Github Repo.

      Deploy a Linode with the Linode Ansible Collection

      This section shows how to write a playbook that leverages the Linode Ansible collection and your encrypted API token and root password to create a new Linode instance:

      1. Create a playbook file called deploylinode.yml in your ~/development directory. Copy this snippet into the file and save it:

        File: ~/development/deploylinode.yml
        - name: Create Linode Instance
          hosts: localhost
          vars_files:
              - ./group_vars/vars.yml
          tasks:
            - name: Create a Linode instance
              linode.cloud.instance:
                api_token: "{{ api_token }}"
                label: my-ansible-linode
                type: g6-nanode-1
                region: us-east
                image: linode/ubuntu22.04
                root_pass: "{{ password }}"
                state: present
        • The playbook contains the Create Linode Instance play. When run, the control node receives the necessary instructions from Ansible and uses the Linode API to deploy infrastructure as needed.

        • The vars_files key provides the location of the variable file or files used to populate information related to tasks for the play.

        • The task in the playbook is defined by the name, which serves as a label, and the FQCN used to configure the resource, in this case a Linode compute instance.

        • The configuration options associated with the FQCN are defined. The configuration options for each FQCN are unique to the resource.

          For options where secure strings are used, the encrypted variables in the ./group_vars/vars.yml file are inserted. This includes the API token and root password.

      2. Once the playbook is saved, enter the following command to run it and create a Linode Nanode instance. Because this command is run from inside your ~/development directory, the Ansible Vault password in your .vault-pass file is used by the playbook to decrypt the variables:

        ansible-playbook deploylinode.yml
        

        Once completed, output similar to the following appears:

        PLAY [Create Linode] *********************************************************************
        
        TASK [Gathering Facts] *******************************************************************
        ok: [localhost]
        
        TASK [Create a new Linode.] **************************************************************
        changed: [localhost]
        
        PLAY RECAP *******************************************************************************
        localhost                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
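
        To confirm that the instance was deployed, you can check the Cloud Manager or query the Linode API with the same token. For example, assuming your access token is stored in a LINODE_API_TOKEN environment variable:

        curl -H "Authorization: Bearer $LINODE_API_TOKEN" https://api.linode.com/v4/linode/instances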





      How to Deploy TOBS on LKE


      Updated by Rajakavitha Kodhandapani


      In this guide, deploy TOBS to your Linode Kubernetes Engine (LKE) cluster using Helm, then use kubectl port-forward for local access to your monitoring interfaces.

      The Prometheus Operator Monitoring Stack

      TOBS, short for The Observability Stack, is a pre-packaged distribution of monitoring tools and dashboard interfaces which can be installed on any existing Kubernetes cluster. It includes many of the most popular open-source observability tools such as Prometheus, Grafana, Promlens, TimescaleDB, and others. Together, these provide a maintainable solution to analyze the traffic on the server and identify any potential problems with a deployment. You can use Helm charts to configure and update TOBS deployments.

      TOBS includes the following components:

      • OpenTelemetry collector is deployed to collect traces.
      • Alertmanager is deployed alongside Prometheus, forms the alerting layer of the stack, and handles alerts generated by Prometheus.
      • Grafana is a data visualization and analytics tool that allows you to build dashboards and graphs for your metrics data.
      • PromLens helps users build PromQL queries with ease. PromLens is a PromQL query builder that helps you build, understand, and fix your queries much more effectively.
      • TimescaleDB is for long-term storage of metric data. Long-term storage provides the ability to perform post-hoc analysis on metric data over long periods of time. Such data analysis can be used for capacity planning, identifying slow-moving regressions, trend analysis, auditing, and more. For information about connecting to the database from the cluster, see
        TimescaleDB Documentation
      • Promscale provides the translation layer between Prometheus and the database. It allows the Prometheus server to store and retrieve metrics from TimescaleDB, and allows users to use PromQL on Promscale and Prometheus.
      • Prometheus is an open-source systems monitoring and alerting stack. It has become the de-facto standard in metric monitoring and is the basis of standards such as OpenMetrics. It allows you to monitor and understand how your infrastructure and applications are performing. Service discovery allows Prometheus to automatically discover components within your Kubernetes cluster that are already emitting metrics.
      • kube-state-metrics exports the metrics related to Kubernetes resources such as the status and count of Kubernetes resources, with visibility of the desired resources and the current resources, as well as the trends in your cluster.
      • Node-Exporter is deployed to export node related metrics such as CPU, memory usage, and others from the Kubernetes cluster.

      Before You Begin

      Note

      This guide was written using
      Kubernetes version 1.23.
      1. Deploy an LKE Cluster. This guide was written using an example node pool with three
        2 GB Linodes. Depending on the workloads you plan to deploy on your cluster, you may consider using Linodes with more available resources.

      2. Install
        Helm 3 to your local environment.

      3. Install
        kubectl to your local environment and
        connect to your cluster.

      4. Create the monitoring namespace on your LKE cluster:

        kubectl create namespace monitoring
        
      5. Add the stable Helm charts repository to your Helm repos:

        helm repo add stable https://charts.helm.sh/stable
        
      6. Update your Helm repositories:

        helm repo update
        

      TOBS Minimal Deployment

      In this section, learn to deploy TOBS for individual/local access with kubectl
      Port-Forward.

      Deploy The Observability Stack

      1. Install a certificate manager for your LKE cluster:

         kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
        
      2. Using Helm, deploy the TOBS release labeled lke-monitor in the monitoring namespace on your LKE cluster:

        helm repo add timescale https://charts.timescale.com/
        helm repo update
        helm install --wait lke-monitor timescale/tobs --namespace monitoring
        
      3. Verify that the Prometheus Operator has been deployed to your LKE cluster and its components are running and ready by checking the pods in the monitoring namespace:

        kubectl -n monitoring get pods
        

        You should see a similar output to the following:

        NAME                                                        READY   STATUS      RESTARTS      AGE
        alertmanager-tobs-kube-prometheus-alertmanager-0            2/2     Running     0             2m13s
        lke-monitor-connection-secret-j4sdh                         0/1     Completed   0             2m35s
        lke-monitor-grafana-54d979dcf5-tkkgj                        3/3     Running     2 (65s ago)   2m32s
        lke-monitor-grafana-db-swm8g                                0/1     Completed   3             2m35s
        lke-monitor-kube-state-metrics-6bc5c44b9-g8r5g              1/1     Running     0             2m27s
        lke-monitor-prometheus-node-exporter-b4vvg                  1/1     Running     0             2m33s
        lke-monitor-prometheus-node-exporter-bbcnd                  1/1     Running     0             2m34s
        lke-monitor-prometheus-node-exporter-frrfp                  1/1     Running     0             2m26s
        lke-monitor-promlens-569cfbd586-bkhrr                       1/1     Running     0             2m34s
        lke-monitor-promscale-86d574986c-9wj2z                      1/1     Running     4 (64s ago)   2m27s
        lke-monitor-timescaledb-0                                   1/1     Running     0             2m30s
        opentelemetry-operator-controller-manager-8cf5c85c8-krdj5   2/2     Running     0             2m27s
        prometheus-tobs-kube-prometheus-prometheus-0                2/2     Running     0             2m13s
        tobs-kube-prometheus-operator-5b4f674986-55r4k              1/1     Running     0             2m34s

      Access Monitoring Interfaces with Port-Forward

      1. List the services running in the monitoring namespace and review their respective ports:

        kubectl -n monitoring get svc
        

        You should see an output similar to the following:

        NAME                                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                           AGE
        alertmanager-operated                                       ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   3m41s
        lke-monitor                                                 ClusterIP   10.128.40.142    <none>        5432/TCP                     4m3s
        lke-monitor-config                                          ClusterIP   None             <none>        8008/TCP                     4m3s
        lke-monitor-grafana                                         ClusterIP   10.128.102.243   <none>        80/TCP                       4m3s
        lke-monitor-kube-state-metrics                              ClusterIP   10.128.208.39    <none>        8080/TCP                     4m3s
        lke-monitor-prometheus-node-exporter                        ClusterIP   10.128.170.88    <none>        9100/TCP                     4m3s
        lke-monitor-promlens                                        ClusterIP   10.128.45.92     <none>        80/TCP                       4m3s
        lke-monitor-promscale-connector                             ClusterIP   10.128.198.88    <none>        9201/TCP,9202/TCP            4m3s
        lke-monitor-replica                                         ClusterIP   10.128.137.189   <none>        5432/TCP                     4m3s
        opentelemetry-operator-controller-manager-metrics-service   ClusterIP   10.128.45.42     <none>        8443/TCP                     4m3s
        opentelemetry-operator-webhook-service                      ClusterIP   10.128.12.89     <none>        443/TCP                      4m3s
        prometheus-operated                                         ClusterIP   None             <none>        9090/TCP                     3m41s
        tobs-kube-prometheus-alertmanager                           ClusterIP   10.128.33.44     <none>        9093/TCP                     4m3s
        tobs-kube-prometheus-operator                               ClusterIP   10.128.175.39    <none>        443/TCP                      4m3s
        tobs-kube-prometheus-prometheus                             ClusterIP   10.128.106.173   <none>        9090/TCP                     4m3s

        From the above output, the resource services you will access have the corresponding ports:

        Resource       Service Name                          Port
        Prometheus     tobs-kube-prometheus-prometheus       9090
        Alertmanager   tobs-kube-prometheus-alertmanager     9093
        Grafana        lke-monitor-grafana                   80
      2. Use kubectl
        port-forward to open a connection to a service, then access the service’s interface by entering the corresponding address in your web browser:

        Note

        Press control+C on your keyboard to terminate a port-forward process after entering any of the following commands.

        • To provide access to the Prometheus interface at the address 127.0.0.1:9090 in your web browser, enter:

          kubectl -n monitoring \
          port-forward \
          svc/tobs-kube-prometheus-prometheus \
          9090
          
        • To provide access to the Alertmanager interface at the address 127.0.0.1:9093 in your web browser, enter:

          kubectl -n monitoring \
          port-forward \
          svc/tobs-kube-prometheus-alertmanager  \
          9093
          
        • To provide access to the Grafana interface at the address 127.0.0.1:8081 in your web browser, enter:

          kubectl -n monitoring \
          port-forward \
          svc/lke-monitor-grafana  \
          8081:80
          

          When accessing the Grafana interface, log in as admin. You can get the password using:

          kubectl get secret --namespace monitoring lke-monitor-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
          

          The Grafana dashboards are accessible at Dashboards > Manage from the left navigation bar.

      TOBS eliminates the need to maintain configuration details for each of the applications, while providing standardized monitoring for the applications running on your cluster.
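
      Because the entire stack is installed as a single Helm release, later configuration changes or chart updates can be applied with helm upgrade against the same release. A minimal sketch, reusing the release name and chart from this guide:

      helm upgrade lke-monitor timescale/tobs --namespace monitoring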






