      How to Use APT to Manage Packages in Debian and Ubuntu


      Advanced Package Tool, more commonly known as
      APT, is a package management system for Ubuntu, Debian, Kali Linux, and other Debian-based Linux distributions. It acts as a front-end to the lower-level
      dpkg package manager, which is used for installing, managing, and providing information on .deb packages. In addition to these functions, APT interfaces with repositories to obtain packages and also provides very efficient dependency management.

      Most distributions that use APT also include a collection of command-line tools that can be used to interface with APT. These tools include apt-get, apt-cache, and the newer apt, which essentially combines both of the previous tools with some modified functionality. Other package managers and tools also exist for interacting with APT or dpkg. A popular one is called
      Aptitude. Aptitude includes both a command-line interface as well as an interactive user interface. While it does offer advanced functionality, it is not commonly installed by default and is not covered in this guide.

      This guide aims to walk you through using APT and its command-line tools to perform common functions related to package management. The commands and examples used throughout this guide default to using the apt command. Many of the commands are interchangeable with apt-get or apt-cache equivalents, though there are some differences in options and output.

      Before You Begin

      Before running the commands within this guide, you will need:

      1. A system running on Debian or Ubuntu. Other Linux distributions that employ the APT package manager can also be used. Review the
        Creating a Compute Instance guide if you do not yet have a compatible system.

      2. Login credentials to the system for either the root user (not recommended) or a standard user account (belonging to the sudo group) and the ability to access the system through
        SSH or
        Lish. Review the
        Setting Up and Securing a Compute Instance guide for assistance on creating and securing a standard user account.

      Note

      Some commands in this guide require elevated privileges and are prefixed with the sudo command. If you are logged in as the root user (not recommended), you can omit the sudo prefix if desired. If you’re not familiar with the sudo command, see the
      Linux Users and Groups guide.

      What’s the difference between apt and apt-get/apt-cache?

      While there are more similarities than differences, there are a few important points to consider when deciding which command to use.

      • apt: A newer end-user tool that consolidates the functionality of both apt-get and apt-cache. Compared to the others, the apt tool is more straightforward and user-friendly. It also has some extra features, such as a status bar and the ability to list packages. Both Ubuntu and Debian recommend the apt command over apt-get and apt-cache. See
        apt Ubuntu man pages.
      • apt-get and apt-cache: The apt-get command manages the installation, upgrades, and removal of packages (and their dependencies). The apt-cache command is used to search for packages and retrieve details about a package. Updates to these commands are designed to never introduce breaking changes, even at the expense of the user experience. The output works well for machine readability and these commands are best limited to use within scripts. See
        apt-get Ubuntu man pages and
        apt-cache Ubuntu man pages.

      In short, apt is a single tool that encompasses most of the functionality of other APT-specific tooling. It is designed primarily for interacting with APT as an end-user and its default functionality may change to include new features or best practices. If you prefer not to risk breaking compatibility and/or prefer to interact with plainer output, apt-get and apt-cache can be used instead, though the exact commands may vary.
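      For instance, installing the same package in the two styles might look like the following. This is a minimal sketch; nginx is purely a placeholder package name:

      # Interactive use: apt, with a progress bar and friendlier output
      sudo apt install nginx

      # Script use: apt-get, with a stable interface and plainer output
      sudo apt-get install -y nginx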

      Installing Packages

      Installs the specified package and all required dependencies. Replace [package] with the name of the package you wish to install. The apt install command is interchangeable with apt-get install.

      sudo apt install [package]
      

      Before installing packages, it’s highly recommended to obtain updated package version and dependency information and to upgrade existing packages and dependencies to their latest versions. See
      Updating Package Information and
      Upgrading Packages for more details. These actions can be performed quickly by running the following sequence of commands:

      sudo apt update && sudo apt upgrade
      

      Additional options, commands, and notes:

      • Install a specific version by adding an equal sign after the package name, followed by the version number you’d like to install (see the version-discovery sketch after this list).

        sudo apt install [package]=[version]
        
      • Reinstall a package and any dependencies by running the following command. This is useful if an installation for a package becomes corrupt or dependencies were somehow removed.

        sudo apt reinstall [package]
        

      Updating Package Information

      Downloads package information from all the sources/repositories configured on your system (within /etc/apt/sources.list). This command obtains details about the latest version for all available packages as well as their dependencies. It should be the first step before installing or upgrading packages on your system.

      sudo apt update
      

      This command is equivalent to apt-get update.

      Upgrading Packages

      Upgrades all packages to their latest versions, including upgrading existing dependencies and installing new ones. It’s important to note that apt upgrade never removes currently installed packages from your system.

      sudo apt upgrade
      

      This command is equivalent to apt-get upgrade --with-new-pkgs. Without the --with-new-pkgs option, the apt-get upgrade command only upgrades existing packages/dependencies and ignores any packages that require new dependencies to be installed.
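      If you want to preview exactly what an upgrade would do before committing to it, both apt and apt-get accept a simulation flag. A quick sketch:

      # Dry run: print the actions an upgrade would take without performing them
      apt -s upgrade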

      Before upgrading packages, it’s highly recommended to obtain updated package version and dependency information. See
      Updating Package Information for more details. These two actions can be performed together through the following sequence of commands:

      sudo apt update && sudo apt upgrade
      

      Additional options, commands, and notes:

      • To view a list of all available upgrades, use the list command with the --upgradeable option.

        apt list --upgradeable
        
      • To upgrade a specific package, use the install command and append the package name. If the package is already installed, it is upgraded to the latest version your system knows about. To only upgrade (not install) a package, use the --only-upgrade option. In the following command, replace [package] with the name of the package you wish to upgrade.

        sudo apt install --only-upgrade [package]
        
      • The apt full-upgrade command (equivalent to apt-get dist-upgrade) can remove packages as well as upgrade and install them. In most cases, it is not recommended to routinely run these commands. To remove unneeded packages (including kernels), use apt autoremove instead.

      Uninstalling Packages

      Removes the specified package from the system, but retains any packages that were installed to satisfy dependencies as well as some configuration files. Replace [package] with the name of the package you’d like to remove.

      sudo apt remove [package]
      

      To remove the package as well as any configuration files, run the following command. This can also be used to just remove configuration files for previously removed packages.

      sudo apt purge [package]
      

      Both of these commands are equivalent to apt-get remove and apt-get purge, respectively.

      • To remove any unused dependencies, run apt autoremove (apt-get autoremove). This is commonly done after uninstalling a package or after upgrading packages and can sometimes help in reducing disk space (and clutter).

        sudo apt autoremove
        

      Common Command Options

      The following options are available for most of the commands discussed in this guide.

      • Multiple packages can be acted on together by separating their names with spaces. For example:

        sudo apt install [package1] [package2]
        
      • Automatically accept prompts by adding the -y or --yes option. This is useful when writing scripts to prevent any user interaction when it’s implied that the user wishes to perform the action on the specified packages (see the script-oriented sketch after this list).

        sudo apt install [package] -y
        
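      When running fully unattended (for example, from a provisioning script), it can also help to silence dpkg’s configuration prompts by setting the DEBIAN_FRONTEND environment variable. A sketch, with [package] as a placeholder:

        # Suppress interactive dpkg configuration dialogs during install
        sudo DEBIAN_FRONTEND=noninteractive apt-get install -y [package]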

      Listing Packages

      The apt list command lists all available, installed, or upgradeable packages. This can be incredibly useful for locating specific packages – especially when combined with grep or less. There is no direct equivalent command within apt-cache.

      • List all packages that are installed

        apt list --installed
        
      • List all packages that have an upgrade available

        apt list --upgradeable
        
      • List all versions of all available packages

        apt list --all-versions
        

      Additional options, commands, and notes:

      • Use
        grep to quickly search through the list for specific package names or other strings. Replace [string] with the package name or other term you wish to search for.

        apt list --installed | grep [string]
        
      • Use a content viewer like
        less to interact with the output, which may help you view or search for your desired information.

        apt list --installed | less
        

      Searching for Available Packages

      Searches through all available packages for the specified term or regex string.

      apt search [string]
      

      The command apt-cache search is similar, though the output for apt search is more user-friendly.

      Additional options, commands, and notes:

      • Use the --full option to see the full description/summary for each package.

        apt search --full [string]
        
      • To find packages whose titles or short/long descriptions contain multiple terms, delimit each string with a space.

        apt search [string1] [string2]
        

      Viewing Information About Packages

      Displays information about an installed or available package. The following command is similar to apt-cache show --no-all-versions [package].

      apt show [package]
      

      The information in the output includes:

      • Package: The name of the package.
      • Version: The version of the package.
      • Installed-Size: The amount of space this package consumes on the disk, not including any dependencies.
      • Depends: A list of dependencies.
      • APT-Manual-Installed: Designates whether the package was manually installed or automatically installed (for instance, as a dependency for another package). This is visible within apt (not apt-cache).
      • APT-Sources: The repository where the package information was stored. This is visible within apt (not apt-cache).
      • Description: A long description of the package.
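      Because the output is plain text, individual fields can be pulled out with grep. A small sketch, using curl as a placeholder package; the warning apt prints about its unstable CLI goes to stderr, which is discarded here:

      apt show curl 2>/dev/null | grep -E '^(Package|Version|Depends):'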

      Adding Repositories

      A repository is a collection of packages (typically for a specific Linux distribution and version) that are stored on a remote system. This enables software distributors to store a package (including new versions) in one place and enable users to quickly install that package onto their system. In most cases, we obtain packages from a repository – as opposed to manually downloading package files.

      Information about repositories that are configured on your system is stored within /etc/apt/sources.list or the directory /etc/apt/sources.list.d/. Repositories can be added manually by editing (or adding) a sources.list configuration file, though most repositories also require adding the GPG public key to APT’s keyring. To automate this process, it’s recommended to use the
      add-apt-repository utility.

      sudo add-apt-repository [repository]
      

      Replace [repository] with the URL of the repository or, in the case of a PPA (Personal Package Archive), the reference to that PPA.
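      For example, adding a PPA might look like the following; the PPA name here is purely illustrative:

      sudo add-apt-repository ppa:example/ppa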

      Once a repository has been added, you can update your package list and install the package. See
      Updating Package Information and
      Installing Packages.

      Cloning Packages to Another System

      If you wish to replicate the currently installed packages to another system without actually copying over any other data, consider using the
      apt-clone utility. This software is compatible with Debian-based systems and is available through Ubuntu’s official repository.

      1. Install apt-clone.

        sudo apt install apt-clone
        
      2. Create a backup containing a list of all installed packages, replacing [name] with the name of the backup (such as my-preferred-packages).

        apt-clone clone [name]
        

        This command creates a new file using the name you provided, appending .apt-clone.tar.gz.

      3. Copy the file to your new system, for example with scp as sketched after these steps. See the
        Download Files from Your Linode guide or the
        File Transfer section for more information.

      4. Install apt-clone on the new system (see Step 1).

      5. Using apt-clone, run the following command to restore the packages. Replace [name] with the name used in Step 2 (or whatever the file is called). If the file is located within a different directory than your current directory, adjust the command to include the path.

        sudo apt-clone restore [name].apt-clone.tar.gz
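      As mentioned in Step 3, one way to copy the backup file between systems is scp. A minimal sketch, assuming SSH access to the new system; the user and address are placeholders:

        scp [name].apt-clone.tar.gz [user]@[new-system-ip]:~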




      How To Install Matomo Web Analytics on Ubuntu 20.04


      Introduction

      Matomo is an open-source, self-hosted web analytics application written in PHP.

      In this tutorial you will install Matomo and a MariaDB database using Docker Compose, then install Nginx to act as a reverse proxy for the Matomo app. Finally, you will enable secure HTTPS connections by using Certbot to download and configure SSL certificates from the Let’s Encrypt Certificate Authority.

      Prerequisites

      In order to complete this tutorial, you’ll first need a server with Docker and Docker Compose installed and the UFW firewall configured.

      Note: These prerequisite steps can be skipped if you’re using DigitalOcean’s One-Click Docker Image. This image will have Docker, Docker Compose, and UFW already installed and configured.

      Launch a new Docker image in the region of your choice, then log in as the root user and proceed with the tutorial. Because you’ll be using the root user, you could leave off the sudo parts of all the commands that follow, but it’s not necessary.

      Finally, to enable SSL you’ll need a domain name pointed at your server’s public IP address. This should be something like example.com or matomo.example.com, for instance. If you’re using DigitalOcean, please see our DNS Quickstart for information on creating domain resources in our control panel.

      When you’ve satisfied all the prerequisites, proceed to Step 1, where you’ll download and launch the Matomo software.

      Step 1 — Running Matomo and MariaDB with Docker Compose

      Your first step will be to create the Docker Compose configuration that will launch containers for both the Matomo app and a MariaDB database.

      This tutorial will put your configuration inside a matomo directory in your home directory. You could also choose to work in an /opt/matomo directory or some other directory of your choosing.

      First ensure you’re in your home directory:

      • cd ~

      Then create the matomo directory and cd into it:

      • mkdir matomo
      • cd matomo

      Now open a new blank YAML file called docker-compose.yml:

      • nano docker-compose.yml

      This is the configuration file that the docker-compose software will read when bringing up your containers. Paste the following into the file:

      docker-compose.yml

      version: "3"
      
      services:
        db:
          image: mariadb
          command: --max-allowed-packet=64MB
          restart: always
          environment:
            - MARIADB_DATABASE=matomo
            - MARIADB_USER
            - MARIADB_PASSWORD
            - MARIADB_ROOT_PASSWORD
          volumes:
            - ./db:/var/lib/mysql
      
        app:
          image: matomo
          restart: always
          volumes:
            - ./matomo:/var/www/html
          ports:
            - 127.0.0.1:8080:80
      

      The file defines two services: a db service, which is the MariaDB container, and an app service, which runs the Matomo software. Both services also mount a local directory (./db and ./matomo) where they store their data, and the app service additionally publishes port 8080 on the loopback (127.0.0.1) interface, which we’ll connect to via localhost.

      Save the file and exit your text editor to continue. In nano, press CTRL+O then ENTER to save, then CTRL+X to exit.

      The MariaDB container needs some configuration to be passed to it through environment variables in order to function. The docker-compose.yml file lists these environment variables, but not all of them have associated values. That’s because it’s good practice to keep passwords out of your docker-compose.yml file, especially if you’ll be committing it to a Git repository or other source control system.

      Instead, we’ll put the necessary information in a .env file in the same directory, which the docker-compose command will automatically load when we start our containers.
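      If you track this directory with Git (an assumption; skip this otherwise), you can also tell Git to ignore the .env file so the credentials never leave this system:

      • echo ".env" >> .gitignore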

      Open a new .env file with nano:

      • nano .env

      You’ll need to fill in a user name and password, as well as a strong password for the MariaDB root superuser account:

      .env

      MARIADB_USER=matomo
      MARIADB_PASSWORD=a_strong_password_for_user
      MARIADB_ROOT_PASSWORD=a_strong_password_for_root
      

      One way of generating a strong password is to use the openssl command, which should be available on most any operating system. The following command will print out a random base64-encoded string, generated from 30 bytes of random data, that you can use as a password:

      • openssl rand 30 | base64 -w 0 ; echo

      When you’re done filling out the information in your .env file, save it and exit your text editor.

      You’re now ready to bring up the two containers with docker-compose:

      • sudo docker-compose up -d

      The up subcommand tells docker-compose to start the containers (and volumes and networks) defined in the docker-compose.yml file, and the -d flag tells it to do so in the background (“daemonize”) so the command doesn’t take over your terminal. docker-compose will print some brief output as it starts the containers:

      Output

      Creating matomo_db_1  ... done
      Creating matomo_app_1 ... done

      When that’s done, Matomo should be running. You can test that a webserver is running at localhost:8080 by fetching the homepage using the curl command:

      • curl --head http://localhost:8080

      This will print out only the HTTP headers from the response:

      Output

      HTTP/1.1 200 OK
      Date: Tue, 25 Jan 2022 19:56:16 GMT
      Server: Apache/2.4.51 (Debian)
      X-Powered-By: PHP/8.0.14
      X-Matomo-Request-Id: 1e953
      Cache-Control: no-store, must-revalidate
      Referrer-Policy: same-origin
      Content-Security-Policy: default-src 'self' 'unsafe-inline' 'unsafe-eval'; img-src 'self' 'unsafe-inline' 'unsafe-eval' data:;
      Set-Cookie: MATOMO_SESSID=dde7d477b0822e166ed90448964ec1e7; path=/; HttpOnly; SameSite=Lax
      Content-Type: text/html; charset=utf-8

      The 200 OK response means the Matomo server is up and running, but it’s only available on localhost. The X-Matomo-Request-Id header indicates that the server is Matomo and not something else that might be configured to listen on port 8080. Next we’ll set up Nginx to proxy public traffic to the Matomo container.

      Step 2 — Installing and Configuring Nginx

      Putting a web server such as Nginx in front of your Matomo server can improve performance by offloading caching, compression, and static file serving to a more efficient process. We’re going to install Nginx and configure it to reverse proxy requests to Matomo, meaning it will take care of handing requests from your users to Matomo and back again. Using a non-containerized Nginx will also make it easier to add Let’s Encrypt SSL certificates in the next step.

      First, refresh your package list, then install Nginx using apt:

      • sudo apt update
      • sudo apt install nginx

      Allow public traffic to ports 80 and 443 (HTTP and HTTPS) using the “Nginx Full” UFW application profile:

      • sudo ufw allow "Nginx Full"

      Output

      Rule added
      Rule added (v6)

      Next, open up a new Nginx configuration file in the /etc/nginx/sites-available directory. We’ll call ours matomo.conf but you could use a different name:

      • sudo nano /etc/nginx/sites-available/matomo.conf

      Paste the following into the new configuration file, being sure to replace your_domain_here with the domain that you’ve configured to point to your Matomo server. This should be something like matomo.example.com, for instance:

      /etc/nginx/sites-available/matomo.conf

      server {
          listen       80;
          listen       [::]:80;
          server_name  your_domain_here;
      
          access_log  /var/log/nginx/matomo.access.log;
          error_log   /var/log/nginx/matomo.error.log;
      
          location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Proto https;
            proxy_pass http://localhost:8080;
        }
      }
      

      This configuration is HTTP-only for now, as we’ll let Certbot take care of configuring SSL in the next step. The rest of the config sets up logging locations and then passes all traffic, as well as some important proxy headers, along to http://localhost:8080, the Matomo container we started up in the previous step.

      Save and close the file, then enable the configuration by linking it into /etc/nginx/sites-enabled/:

      • sudo ln -s /etc/nginx/sites-available/matomo.conf /etc/nginx/sites-enabled/

      Use nginx -t to verify that the configuration file syntax is correct:

      • sudo nginx -t

      Output

      nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
      nginx: configuration file /etc/nginx/nginx.conf test is successful

      And finally, reload the nginx service to pick up the new configuration:

      • sudo systemctl reload nginx

      Your Matomo site should now be available on plain HTTP. Load http://your_domain_here (you may have to click through a security warning) and it will look like this:

      Screenshot of the first page of the Matomo web installation process

      Now that you have your site up and running over HTTP, it’s time to secure the connection with Certbot and Let’s Encrypt certificates. You should do this before going through Matomo’s web-based setup procedure.

      Step 3 — Installing Certbot and Setting Up SSL Certificates

      Thanks to Certbot and the Let’s Encrypt free certificate authority, adding SSL encryption to our Matomo app will take only two commands.

      First, install Certbot and its Nginx plugin:

      • sudo apt install certbot python3-certbot-nginx

      Next, run certbot in --nginx mode, and specify the same domain you used in the Nginx server_name config:

      • sudo certbot --nginx -d your_domain_here

      You’ll be prompted to agree to the Let’s Encrypt terms of service, and to enter an email address.

      Afterwards, you’ll be asked if you want to redirect all HTTP traffic to HTTPS. It’s up to you, but this is generally recommended and safe to do.

      After that, Let’s Encrypt will confirm your request and Certbot will download your certificate:

      Output

      Congratulations! You have successfully enabled https://matomo.example.com

      You should test your configuration at:
      https://www.ssllabs.com/ssltest/analyze.html?d=matomo.example.com
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      IMPORTANT NOTES:
      - Congratulations! Your certificate and chain have been saved at:
        /etc/letsencrypt/live/matomo.example.com/fullchain.pem
        Your key file has been saved at:
        /etc/letsencrypt/live/matomo.example.com/privkey.pem
        Your cert will expire on 2021-12-06. To obtain a new or tweaked version of this
        certificate in the future, simply run certbot again with the "certonly" option.
        To non-interactively renew *all* of your certificates, run "certbot renew"
      - Your account credentials have been saved in your Certbot configuration directory
        at /etc/letsencrypt. You should make a secure backup of this folder now. This
        configuration directory will also contain certificates and private keys obtained
        by Certbot so making regular backups of this folder is ideal.
      - If you like Certbot, please consider supporting our work by:
        Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
        Donating to EFF: https://eff.org/donate-le

      Certbot will automatically reload Nginx to pick up the new configuration and certificates. Reload your site and it should switch you over to HTTPS automatically if you chose the redirect option.
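      The certbot package also sets up automated renewal for you. If you would like to verify that renewal will succeed when the certificate nears expiry, you can perform a dry run:

      • sudo certbot renew --dry-run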

      Your site is now secure and it’s safe to continue with the web-based setup steps.

      Step 4 — Setting Up Matomo

      Back in your web browser you should now have Matomo’s Welcome! page open via a secure https:// connection. Now you can enter usernames and passwords safely to complete the installation process.

      Click the Next button. You’ll be taken to the System Check step:

      Screenshot of Matomo's "System Check" page with a list of system properties with green checkmarks next to them

      This is a summary of the system Matomo is running on, and everything should be green checkmarks indicating there are no problems. Scroll all the way to the bottom and click the Next button.

      Now you’ll be on the Database Setup page:

      Screenshot of Matomo's "Database Setup" page, with a form for inputting database connection details

      The information you fill in on this page will tell the Matomo application how to connect to the MariaDB database. You’ll need the MARIADB_USER and MARIADB_PASSWORD that you chose in Step 1. You can copy them out of your .env file if you need to.

      Fill out the first four fields:

      • Database Server: db
      • Login: the username you set in the MARIADB_USER environment variable
      • Password: the password you set in the MARIADB_PASSWORD environment variable
      • Database Name: matomo

      The defaults are fine for the remaining two fields.

      Click Next once more. You’ll get a confirmation that the database was set up correctly. Click Next again. You’ll then need to set up an admin user, and finally you’ll set up information about the first website you want to collect analytics for.

      After all that, you should end up on step 8, a Congratulations page. You’re almost all done. Scroll down to the bottom and click the Continue to Matomo button, and you’ll be taken to the homepage:

      Screenshot of the Matomo homepage

      There will be a large warning at the top of the page. There’s a small update you’ll need to do to Matomo’s configuration file to finish up this process.

      Back on the command line, open up the configuration file with a text editor:

      • sudo nano matomo/config/config.ini.php

      Near the top you should have a [General] section. Add the last three lines shown below (the second trusted_hosts[] entry, assume_secure_protocol, and force_ssl) to the end of that section:

      config.ini.php

      [General]
      proxy_client_headers[] = "HTTP_X_FORWARDED_FOR"
      proxy_host_headers[] = "HTTP_X_FORWARDED_HOST"
      salt = "e0a81d6e54d6d2200efd0f0ef6ef8563"
      trusted_hosts[] = "localhost"
      trusted_hosts[] = "example.com"
      trusted_hosts[] = "localhost:8080"
      assume_secure_protocol = 1
      force_ssl = 1
      

      These options let Matomo know that it’s safe to use port 8080, and that it should assume it’s always being accessed over a secure connection.

      Save and close the configuration file, then switch back to your browser and reload the page. The error should be gone, and you’ll be presented with a login prompt:

      Screenshot of Matomo's "Sign in" screen with a form for username and password

      Log in with the admin account you created during setup, and you should be taken to the dashboard:

      Screenshot of Matomo's homepage dashboard with a placeholder indicating "No data has been recorded yet" and instructions on how to set up the tracking code

      Because you’ve probably not set up your tracking code yet, the dashboard will indicate that no data has been recorded. Follow the instructions to finish setting up the JavaScript code on your website to start receiving analytics data.

      Conclusion

      In this tutorial, you launched the Matomo analytics app and a MariaDB database using Docker Compose, then set up an Nginx reverse proxy and secured it using Let’s Encrypt SSL certificates.

      You’re now ready to set up your website and add the Matomo analytics tracking script. For more information about operating the Matomo software, please see the official Matomo documentation.




      How To Build A Security Information and Event Management (SIEM) System with Suricata and the Elastic Stack on Ubuntu 20.04


      Introduction

      The previous tutorials in this series guided you through installing, configuring, and running Suricata as an Intrusion Detection (IDS) and Intrusion Prevention (IPS) system. You also learned about Suricata rules and how to create your own.

      In this tutorial you will explore how to integrate Suricata with Elasticsearch, Kibana, and Filebeat to begin creating your own Security Information and Event Management (SIEM) system. SIEM tools are used to collect, aggregate, store, and analyze event data to search for security threats and suspicious activity on your networks and servers.

      The components that you will use to build your own SIEM tool are:

      • Elasticsearch to store, index, correlate, and search the security events that come from your Suricata server.
      • Kibana to display and navigate around the security event logs that are stored in Elasticsearch.
      • Filebeat to parse Suricata’s eve.json log file and send each event to Elasticsearch for processing.
      • Suricata to scan your network traffic for suspicious events, and either log or drop invalid packets.

      First you’ll install and configure Elasticsearch and Kibana with some specific authentication settings. Then you’ll add Filebeat to your Suricata system to send its eve.json logs to Elasticsearch.

      Finally, you’ll learn how to connect to Kibana using SSH and your web browser, and then load and interact with Kibana dashboards that show Suricata’s events and alerts.

      Prerequisites

      If you have been following this tutorial series then you should already have Suricata running on an Ubuntu 20.04 server. This server will be referred to as your Suricata server.

      You will also need a second server to host Elasticsearch and Kibana. This server will be referred to as your Elasticsearch server. It should also be an Ubuntu 20.04 server.

      For the purposes of this tutorial, both servers should be able to communicate using private IP addresses. You can use a VPN like WireGuard to connect your servers, or use a cloud-provider that has private networking between hosts. You can also choose to run Elasticsearch, Kibana, Filebeat, and Suricata on the same server for experimenting.

      Step 1 — Installing Elasticsearch and Kibana

      The first step in this tutorial is to install Elasticsearch and Kibana on your Elasticsearch server. To get started, add the Elastic GPG key to your server with the following command:

      • curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

      Next, add the Elastic source list to the sources.list.d directory, where apt will search for new sources:

      • echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

      Now update your server’s package index and install Elasticsearch and Kibana:

      • sudo apt update
      • sudo apt install elasticsearch kibana

      Once you are done installing the packages, find and record your server’s private IP address using the ip address show command (the -brief flag condenses the output):

      • ip -brief address show

      You will receive output like the following:

      Output

      lo               UNKNOWN        127.0.0.1/8 ::1/128
      eth0             UP             159.89.122.115/20 10.20.0.8/16 2604:a880:cad:d0::e56:8001/64 fe80::b832:69ff:fe46:7e5d/64
      eth1             UP             10.137.0.5/16 fe80::b883:5bff:fe19:43f3/64

      The private network interface in this output is the eth1 device, with the IPv4 address 10.137.0.5/16. Your device name and IP addresses will be different; however, the address will be from one of the following reserved blocks:

      • 10.0.0.0 to 10.255.255.255 (10/8 prefix)
      • 172.16.0.0 to 172.31.255.255 (172.16/12 prefix)
      • 192.168.0.0 to 192.168.255.255 (192.168/16 prefix)

      If you would like to learn more about how these blocks are allocated, see the RFC 1918 specification.

      Record the private IP address for your Elasticsearch server (in this case 10.137.0.5). This address will be referred to as your_private_ip in the remainder of this tutorial. Also note the name of the network interface, in this case eth1. In the next part of this tutorial you will configure Elasticsearch and Kibana to listen for connections on the private IP address coming from your Suricata server.

      Step 2 — Configuring Elasticsearch

      Elasticsearch is configured to only accept local connections by default. Additionally, it does not have any authentication enabled, so tools like Filebeat will not be able to send logs to it. In this section of the tutorial you will configure the network settings for Elasticsearch and then enable Elasticsearch’s built-in xpack security module.

      Configuring Elasticsearch Networking

      Since your Elasticsearch and Suricata servers are separate, you will need to configure Elasticsearch to listen for connections on its private network interface. You will also need to configure your firewall rules to allow access to Elasticsearch on your private network interface.

      Open the /etc/elasticsearch/elasticsearch.yml file using nano or your preferred editor:

      • sudo nano /etc/elasticsearch/elasticsearch.yml

      Find the commented out #network.host: 192.168.0.1 line between lines 50–60 and add a new line after it that configures the network.bind_host setting, as highlighted below:

      /etc/elasticsearch/elasticsearch.yml

      # By default Elasticsearch is only accessible on localhost. Set a different
      # address here to expose this node on the network:
      #
      #network.host: 192.168.0.1
      network.bind_host: ["127.0.0.1", "your_private_ip"]
      #
      # By default Elasticsearch listens for HTTP traffic on the first free port it
      # finds starting at 9200. Set a specific HTTP port here:
      

      Substitute your private IP in place of the your_private_ip address. This line will ensure that Elasticsearch is still available on its local address so that Kibana can reach it, as well as on the private IP address for your server.

      Next, move to the end of the file by pressing the nano shortcut CTRL+V (page down) until you reach the last line.

      Add the following highlighted lines to the end of the file:

      /etc/elasticsearch/elasticsearch.yml

      . . .
      discovery.type: single-node
      xpack.security.enabled: true
      

      The discovery.type setting allows Elasticsearch to run as a single node, as opposed to in a cluster of other Elasticsearch servers. The xpack.security.enabled setting turns on some of the security features that are included with Elasticsearch.

      Save and close the file when you are done editing it. If you are using nano, you can do so with CTRL+X, then Y and ENTER to confirm.

      Finally, add firewall rules to ensure your Elasticsearch server is reachable on its private network interface. If you followed the prerequisite tutorials and are using the Uncomplicated Firewall (ufw), run the following commands:

      • sudo ufw allow in on eth1
      • sudo ufw allow out on eth1

      Substitute your private network interface in place of eth1 if it uses a different name.

      Next you will start the Elasticsearch daemon and then configure passwords for use with the xpack security module.

      Starting Elasticsearch

      Now that you have configured networking and the xpack security settings for Elasticsearch, you need to start it for the changes to take effect.

      Run the following systemctl command to start Elasticsearch:

      • sudo systemctl start elasticsearch.service

      Once Elasticsearch finishes starting, you can continue to the next section of this tutorial where you will generate passwords for the default users that are built-in to Elasticsearch.

      Configuring Elasticsearch Passwords

      Now that you have enabled the xpack.security.enabled setting, you need to generate passwords for the default Elasticsearch users. Elasticsearch includes a utility in the /usr/share/elasticsearch/bin directory that can automatically generate random passwords for these users.

      Run the following command to cd to the directory and then generate random passwords for all the default users:

      • cd /usr/share/elasticsearch/bin
      • sudo ./elasticsearch-setup-passwords auto

      You will receive output like the following. When prompted to continue, press y and then RETURN or ENTER:

      Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
      The passwords will be randomly generated and printed to the console.
      Please confirm that you would like to continue [y/N]y
      
      
      Changed password for user apm_system
      PASSWORD apm_system = eWqzd0asAmxZ0gcJpOvn
      
      Changed password for user kibana_system
      PASSWORD kibana_system = 1HLVxfqZMd7aFQS6Uabl
      
      Changed password for user kibana
      PASSWORD kibana = 1HLVxfqZMd7aFQS6Uabl
      
      Changed password for user logstash_system
      PASSWORD logstash_system = wUjY59H91WGvGaN8uFLc
      
      Changed password for user beats_system
      PASSWORD beats_system = 2p81hIdAzWKknhzA992m
      
      Changed password for user remote_monitoring_user
      PASSWORD remote_monitoring_user = 85HF85Fl6cPslJlA8wPG
      
      Changed password for user elastic
      PASSWORD elastic = 6kNbsxQGYZ2EQJiqJpgl
      

      You will not be able to run the utility again, so make sure to record these passwords somewhere secure. You will need to use the kibana_system user’s password in the next section of this tutorial, and the elastic user’s password in the Configuring Filebeat step of this tutorial.
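      If you would like to confirm that the generated credentials work, you can query Elasticsearch directly. A quick check; substitute the elastic user’s password from your own output in place of the example value:

      • curl -u elastic:6kNbsxQGYZ2EQJiqJpgl http://127.0.0.1:9200

      A successful request returns a short JSON document describing the node and cluster.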

      At this point in the tutorial you are finished configuring Elasticsearch. The next section explains how to configure Kibana’s network settings and its xpack security module.

      Step 3 — Configuring Kibana

      In the previous section of this tutorial, you configured Elasticsearch to listen for connections on your Elasticsearch server’s private IP address. You will need to do the same for Kibana so that Filebeat on your Suricata server can reach it.

      First you’ll enable Kibana’s xpack security functionality by generating some secrets that Kibana will use to store data in Elasticsearch. Then you’ll configure Kibana’s network setting and authentication details to connect to Elasticsearch.

      Enabling xpack.security in Kibana

      To get started with xpack security settings in Kibana, you need to generate some encryption keys. Kibana uses these keys to store session data (like cookies), as well as various saved dashboards and views of data in Elasticsearch.

      You can generate the required encryption keys using the kibana-encryption-keys utility that is included in the /usr/share/kibana/bin directory. Run the following to cd to the directory and then generate the keys:

      • cd /usr/share/kibana/bin/
      • sudo ./kibana-encryption-keys generate -q

      The -q flag suppresses the tool’s instructions so that you only receive output like the following:

      Output

      xpack.encryptedSavedObjects.encryptionKey: 66fbd85ceb3cba51c0e939fb2526f585 xpack.reporting.encryptionKey: 9358f4bc7189ae0ade1b8deeec7f38ef xpack.security.encryptionKey: 8f847a594e4a813c4187fa93c884e92b

      Copy your output somewhere secure. You will now add these keys to Kibana’s /etc/kibana/kibana.yml configuration file.

      Open the file using nano or your preferred editor:

      • sudo nano /etc/kibana/kibana.yml

      Move to the end of the file by pressing the nano shortcut CTRL+V (page down) until you reach the last line, then paste the three xpack lines that you copied:

      /etc/kibana/kibana.yml

      . . .
      
      # Specifies locale to be used for all localizable strings, dates and number formats.
      # Supported languages are the following: English - en , by default , Chinese - zh-CN .
      #i18n.locale: "en"
      
      xpack.encryptedSavedObjects.encryptionKey: 66fbd85ceb3cba51c0e939fb2526f585
      xpack.reporting.encryptionKey: 9358f4bc7189ae0ade1b8deeec7f38ef
      xpack.security.encryptionKey: 8f847a594e4a813c4187fa93c884e92b
      

      Keep the file open and proceed to the next section where you will configure Kibana’s network settings.

      Configuring Kibana Networking

      To configure Kibana’s networking so that it is available on your Elasticsearch server’s private IP address, find the commented out #server.host: "localhost" line in /etc/kibana/kibana.yml. The line is near the beginning of the file. Add a new line after it with your server’s private IP address, as highlighted below:

      /etc/kibana/kibana.yml

      # Kibana is served by a back end server. This setting specifies the port to use.
      #server.port: 5601
      
      # Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
      # The default is 'localhost', which usually means remote machines will not be able to connect.
      # To allow connections from remote users, set this parameter to a non-loopback address.
      #server.host: "localhost"
      server.host: "your_private_ip"
      

      Substitute your private IP in place of the your_private_ip address.

      Save and close the file when you are done editing it. If you are using nano, you can do so with CTRL+X, then Y and ENTER to confirm.

      Next, you’ll need to configure the username and password that Kibana uses to connect to Elasticsearch.

      Configuring Kibana Credentials

      There are two ways to set the username and password that Kibana uses to authenticate to Elasticsearch. The first is to edit the /etc/kibana/kibana.yml configuration file and add the values there. The second method is to store the values in Kibana’s keystore, which is an obfuscated file that Kibana can use to store secrets.

      We’ll use the keystore method in this tutorial since it avoids editing Kibana’s configuration file directly.

      If you prefer to edit the file instead, the settings to configure in it are elasticsearch.username and elasticsearch.password. If you choose to edit the configuration file, skip the rest of the steps in this section.

      To add a secret to the keystore using the kibana-keystore utility, first cd to the /usr/share/kibana/bin directory. Next, run the following command to set the username for Kibana:

      • sudo ./kibana-keystore add elasticsearch.username

      You will receive a prompt like the following:

      Username Entry

      Enter value for elasticsearch.username: *************
      

      Enter kibana_system when prompted, either by copying and pasting, or typing the username carefully. Each character that you type will be masked with an * asterisk character. Press ENTER or RETURN when you are done entering the username.

      Now repeat the same command for the password. Be sure to copy the password for the kibana_system user that you generated in the previous section of this tutorial. For reference, in this tutorial the example password is 1HLVxfqZMd7aFQS6Uabl.

      Run the following command to set the password:

      • sudo ./kibana-keystore add elasticsearch.password

      When prompted, paste the password to avoid any transcription errors:

      Password Entry

      Enter value for elasticsearch.password: ********************
      

      Starting Kibana

      Now that you have configured networking and the xpack security settings for Kibana, as well as added credentials to the keystore, you need to start it for the changes to take effect.

      Run the following systemctl command to start Kibana:

      • sudo systemctl start kibana.service

      Once Kibana starts, you can continue to the next section of this tutorial where you will configure Filebeat on your Suricata server to send its logs to Elasticsearch.

      Step 4 — Installing Filebeat

      Now that your Elasticsearch and Kibana processes are configured with the correct network and authentication settings, the next step is to install and set up Filebeat on your Suricata server.

      To get started installing Filebeat, add the Elastic GPG key to your Suricata server with the following command:

      • curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

      Next, add the Elastic source list to the sources.list.d directory, where apt will search for new sources:

      • echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

      Now update the server’s package index and install the Filebeat package:

      • sudo apt update
      • sudo apt install filebeat

      Next you’ll need to configure Filebeat to connect to both Elasticsearch and Kibana. Open the /etc/filebeat/filebeat.yml configuration file using nano or your preferred editor:

      • sudo nano /etc/filebeat/filebeat.yml

      Find the Kibana section of the file around line 100. Add a line after the commented out #host: "localhost:5601" line that points to your Kibana instance’s private IP address and port:

      /etc/filebeat/filebeat.yml

      . . .
      # Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
      # This requires a Kibana endpoint configuration.
      setup.kibana:
      
        # Kibana Host
        # Scheme and port can be left out and will be set to the default (http and 5601)
        # In case you specify and additional path, the scheme is required: http://localhost:5601/path
        # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
        #host: "localhost:5601"
        host: "your_private_ip:5601"
      
      . . .
      

      This change will ensure that Filebeat can connect to Kibana in order to create the various SIEM indices, dashboards, and processing pipelines in Elasticsearch to handle your Suricata logs.

      Next, find the Elasticsearch Output section of the file around line 130 and edit the hosts, username, and password settings to match the values for your Elasticsearch server:

      /etc/filebeat/filebeat.yml

      output.elasticsearch:
        # Array of hosts to connect to.
        hosts: ["your_private_ip:9200"]
      
        # Protocol - either `http` (default) or `https`.
        #protocol: "https"
      
        # Authentication credentials - either API key or username/password.
        #api_key: "id:api_key"
        username: "elastic"
        password: "6kNbsxQGYZ2EQJiqJpgl"
      
      . . .
      

      Substitute in your Elasticsearch server’s private IP address on the hosts line in place of the your_private_ip value. Uncomment the username field and leave it set to the elastic user. Change the password field from changeme to the password for the elastic user that you generated in the Configuring Elasticsearch Passwords section of this tutorial.

      Save and close the file when you are done editing it. If you are using nano, you can do so with CTRL+X, then Y and ENTER to confirm.

      Next, enable Filebeat’s built-in Suricata module with the following command:

      • sudo filebeat modules enable suricata
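      You can confirm the module was enabled by listing Filebeat’s modules; suricata should now appear in the enabled section of the output:

      • sudo filebeat modules list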

      Now that Filebeat is configured to connect to Elasticsearch and Kibana, with the Suricata module enabled, the next step is to load the SIEM dashboards and pipelines into Elasticsearch.

      Run the filebeat setup command. It may take a few minutes to load everything:

      • sudo filebeat setup

      Once the command finishes you should receive output like the following:

      Output

      Overwriting ILM policy is disabled. Set `setup.ilm.overwrite: true` for enabling.
      Index setup finished.
      Loading dashboards (Kibana must be running and reachable)
      Loaded dashboards
      Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
      See more: https://www.elastic.co/guide/en/machine-learning/current/index.html
      It is not possible to load ML jobs into an Elasticsearch 8.0.0 or newer using the Beat.
      Loaded machine learning job configurations
      Loaded Ingest pipelines

      If there are no errors, use the systemctl command to start Filebeat. It will begin sending events from Suricata’s eve.json log to Elasticsearch once it is running.

      • sudo systemctl start filebeat.service
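      Once Filebeat has been running for a minute or two, you can check from your Elasticsearch server that Suricata events are arriving. A quick check using the _cat indices API; substitute the elastic user’s password from your own setup:

      • curl -u elastic:6kNbsxQGYZ2EQJiqJpgl "http://your_private_ip:9200/_cat/indices?v"

      A filebeat-* index with a growing docs.count indicates that events are flowing.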

      Now that you have Filebeat, Kibana, and Elasticsearch configured to process your Suricata logs, the last step in this tutorial is to connect to Kibana and explore the SIEM dashboards.

      Step 5 — Navigating Kibana’s SIEM Dashboards

      Kibana is the graphical component of the Elastic stack. You will use Kibana with your browser to explore Suricata’s event and alert data. Since you configured Kibana to only be available via your Elasticsearch server’s private IP address, you will need to use an SSH tunnel to connect to Kibana.

      Connecting to Kibana with SSH

      SSH has an option -L that lets you forward network traffic on a local port over its connection to a remote IP address and port on a server. You will use this option to forward traffic from your browser to your Kibana instance.

      On Linux, macOS, and up-to-date versions of Windows 10 and higher, you can use the built-in SSH client to create the tunnel. You will use this command each time you want to connect to Kibana. You can close this connection at any time and then run the SSH command again to re-establish the tunnel.

      Run the following command in a terminal on your local desktop or laptop computer to create the SSH tunnel to Kibana:

      • ssh -L 5601:your_private_ip:5601 sammy@203.0.113.5 -N

      The various arguments to SSH are:

      • The -L flag forwards traffic on your local system’s port 5601 through the SSH connection to the remote server.
      • The your_private_ip:5601 portion of the command specifies the service on your Elasticsearch server where your traffic will be forwarded to. In this case that service is Kibana. Be sure to substitute your Elasticsearch server’s private IP address in place of your_private_ip.
      • The 203.0.113.5 address is the public IP address that you use to connect to and administer your server. Substitute your Elasticsearch server’s public IP address in its place.
      • The -N flag instructs SSH not to run a command like an interactive /bin/bash shell, and instead just hold the connection open. It is generally used when forwarding ports like in this example.

      If you would like to close the tunnel at any time, press CTRL+C.
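      If you find yourself re-creating this tunnel often, you can optionally store its settings in your local ~/.ssh/config file. A sketch, using the same placeholder addresses as above:

      Host kibana-tunnel
          HostName 203.0.113.5
          User sammy
          LocalForward 5601 your_private_ip:5601

      With that entry in place, running ssh -N kibana-tunnel opens the same tunnel.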

      On Windows your terminal should resemble the following screenshot:

      Screenshot of Windows Command Prompt Showing SSH Command to Port Forward to Kibana

      Note: You may be prompted to enter a password if you are not using an SSH key. Type or paste it into the prompt and press ENTER or RETURN.

      On macOS and Linux your terminal will be similar to the following screenshot:

      Screenshot of a macOS or Linux Terminal Showing SSH Command to Port Forward to Kibana

      Once you have connected to your Elasticsearch server over SSH with the port forward in place, open your browser and visit http://127.0.0.1:5601. You will be redirected to Kibana’s login page:

      Screenshot of a Browser on Kibana's Login Page

      If your browser cannot connect to Kibana you will receive a message like the following in your terminal:

      Output

      channel 3: open failed: connect failed: No route to host

      This error indicates that your SSH tunnel is unable to reach the Kibana service on your server. Ensure that you have specified the correct private IP address for your Elasticsearch server and reload the page in your browser.

      Log in to your Kibana server using elastic for the Username, and the password for the elastic user that you recorded earlier in this tutorial.

      Browsing Kibana SIEM Dashboards

      Once you are logged into Kibana you can explore the Suricata dashboards that Filebeat configured for you.

      In the search field at the top of the Kibana Welcome page, input the search terms type:dashboard suricata. This search will return two results: the Suricata Events and Suricata Alerts dashboards per the following screenshot:

      Screenshot of a Browser Using Kibana's Global Search Box to Locate Suricata Dashboards

      Click the [Filebeat Suricata] Events Overview result to visit the Kibana dashboard that shows an overview of all logged Suricata events:

      Screenshot of a Browser on Kibana's Suricata Events Dashboard

      To visit the Suricata Alerts dashboard, repeat the search or click the Alerts link that is included in the Events dashboard. Your page should resemble the following screenshot:

      Screenshot of a Browser on Kibana's Suricata Alerts Dashboard

      If you would like to inspect the events and alerts that each dashboard displays, scroll to the bottom of the page where you will find a table that lists each event and alert. You can expand each entry to view the original log entry from Suricata, and examine in detail the various fields like source and destination IPs for an alert, the attack type, Suricata signature ID, and others.

      Kibana also has a built-in set of Security dashboards that you can access using the menu on the left side of the browser window. Navigate to the Network dashboard for an overview of events displayed on a map, as well as aggregate data about events on your network. Your dashboard should resemble the following screenshot:

      Screenshot of a Browser on Kibana's Security -> Network Dashboard

      You can scroll to the bottom of the Network dashboard for a table that lists all of the events that match your specified search timeframe. You can also examine each event in detail, or select an event to generate a Kibana timeline that you can then use to investigate specific traffic flows, alerts, or community IDs.

      Conclusion

      In this tutorial you installed and configured Elasticsearch and Kibana on a standalone server. You configured both tools to be available on a private IP address. You also configured Elasticsearch and Kibana’s authentication settings using the xpack security module that is included with each tool.

      After completing the Elasticsearch and Kibana configuration steps, you also installed and configured Filebeat on your Suricata server. You used Filebeat to populate Kibana’s dashboards and start sending Suricata logs to Elasticsearch.

      Finally, you created an SSH tunnel to your Elasticsearch server and logged into Kibana. You located the new Suricata Events and Alerts dashboards, as well as the Network dashboard.

      The last tutorial in this series will guide you through using Kibana’s SIEM functionality to process your Suricata alerts. In it you will explore how to create cases to track specific alerts, timelines to correlate network flows, and rules to match specific Suricata events that you would like to track or analyze in more detail.


