      How To Build A Security Information and Event Management (SIEM) System with Suricata and the Elastic Stack on CentOS 8 Stream



      Introduction

      The previous tutorials in this series guided you through installing, configuring, and running Suricata as an Intrusion Detection System (IDS) and an Intrusion Prevention System (IPS). You also learned about Suricata rules and how to create your own.

      In this tutorial you will explore how to integrate Suricata with Elasticsearch, Kibana, and Filebeat to begin creating your own Security Information and Event Management (SIEM) tool using the Elastic stack and CentOS 8 Stream. SIEM tools are used to collect, aggregate, store, and analyze event data to search for security threats and suspicious activity on your networks and servers.

      The components that you will use to build your own SIEM are:

      • Elasticsearch to store, index, correlate, and search the security events that come from your Suricata server.
      • Kibana to display and navigate around the security event logs that are stored in Elasticsearch.
      • Filebeat to parse Suricata’s eve.json log file and send each event to Elasticsearch for processing.
      • Suricata to scan your network traffic for suspicious events, and either log or drop invalid packets.

      First you’ll install and configure Elasticsearch and Kibana with some specific authentication settings. Then you’ll add Filebeat to your Suricata system to send its eve.json logs to Elasticsearch.

      Finally, you’ll learn how to connect to Kibana using SSH and your web browser, and then load and interact with Kibana dashboards that show Suricata’s events and alerts.

      Prerequisites

      If you have been following this tutorial series then you should already have Suricata running on a CentOS 8 Stream server. This server will be referred to as your Suricata server.

      You will also need a second server to host Elasticsearch and Kibana. This server will be referred to as your Elasticsearch server. It should be a CentOS 8 Stream server with:

      For the purposes of this tutorial, both servers should be able to communicate using private IP addresses. You can use a VPN like WireGuard to connect your servers, or use a cloud-provider that has private networking between hosts. You can also choose to run Elasticsearch, Kibana, Filebeat, and Suricata on the same server for experimenting.

      Step 1 — Installing Elasticsearch and Kibana

      The first step in this tutorial is to install Elasticsearch and Kibana on your Elasticsearch server. To get started, add the Elastic GPG key to your server with the following command:

      • sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

      Next, create an elasticsearch.repo file in your /etc/yum.repos.d directory with the following contents, using vi or your preferred editor. This ensures that the upstream Elasticsearch repositories will be used when installing new packages via yum:

      • sudo vi /etc/yum.repos.d/elasticsearch.repo

      /etc/yum.repos.d/elasticsearch.repo

      [elasticsearch]
      name=Elasticsearch repository for 7.x packages
      baseurl=https://artifacts.elastic.co/packages/7.x/yum
      gpgcheck=1
      gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
      enabled=0
      autorefresh=1
      type=rpm-md
      

      If you are using vi, when you are finished making changes, press ESC and then :x to write the changes to the file and quit.

      Now install Elasticsearch and Kibana using the dnf command. Press Y to accept any prompts about GPG key fingerprints:

      • sudo dnf install --enablerepo=elasticsearch elasticsearch kibana

      The --enablerepo option is used to override the default disabled setting in the /etc/yum.repos.d/elasticsearch.repo file. This approach ensures that the Elasticsearch and Kibana packages do not get accidentally upgraded when you install other package updates to your server.
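
      If you would like to confirm that the repository stays disabled for day-to-day updates, you can list it with dnf. This is an optional check:

      • dnf repolist --all | grep elasticsearch

      The repository should be reported as disabled in the status column.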

      Once you are done installing the packages, find and record your server’s private IP address using the ip command:

      • ip -brief address show

      You will receive output like the following:

      Output

      lo               UNKNOWN        127.0.0.1/8 ::1/128
      eth0             UP             159.89.122.115/20 10.20.0.8/16 2604:a880:cad:d0::e56:8001/64 fe80::b832:69ff:fe46:7e5d/64
      eth1             UP             10.137.0.5/16 fe80::b883:5bff:fe19:43f3/64

      The private network interface in this output is the highlighted eth1 device, with the IPv4 address 10.137.0.5. Your device name and IP addresses will be different. Regardless of your device name and private IP address, the address will be from the following reserved blocks:

      • 10.0.0.0 to 10.255.255.255 (10/8 prefix)
      • 172.16.0.0 to 172.31.255.255 (172.16/12 prefix)
      • 192.168.0.0 to 192.168.255.255 (192.168/16 prefix)

      If you would like to learn more about how these blocks are allocated, visit the RFC 1918 specification.

      Record the private IP address for your Elasticsearch server (in this case 10.137.0.5). This address will be referred to as your_private_ip in the remainder of this tutorial. Also note the name of the network interface, in this case eth1. In the next part of this tutorial you will configure Elasticsearch and Kibana to listen for connections on the private IP address coming from your Suricata server.

      Step 2 — Configuring Elasticsearch

      Elasticsearch is configured to only accept local connections by default. Additionally, it does not have any authentication enabled, so tools like Filebeat will not be able to send logs to it. In this section of the tutorial you will configure the network settings for Elasticsearch and then enable Elasticsearch’s built-in xpack security module.

      Configuring Elasticsearch Networking

      Since your Elasticsearch and Suricata servers are separate, you will need to configure Elasticsearch to listen for connections on its private network interface.

      Open the /etc/elasticsearch/elasticsearch.yml file using vi or your preferred editor:

      • sudo vi /etc/elasticsearch/elasticsearch.yml

      Find the commented out #network.host: 192.168.0.1 line between lines 50–60 and add a new line after it that configures the network.bind_host setting, as highlighted below:

      # By default Elasticsearch is only accessible on localhost. Set a different
      # address here to expose this node on the network:
      #
      #network.host: 192.168.0.1
      network.bind_host: ["127.0.0.1", "your_private_ip"]
      #
      # By default Elasticsearch listens for HTTP traffic on the first free port it
      # finds starting at 9200. Set a specific HTTP port here:
      

      Substitute your private IP in place of the your_private_ip address. This line will ensure that Elasticsearch is still available on its local address so that Kibana can reach it, as well as on the private IP address for your server.

      Next, go to the end of the file using the vi shortcut SHIFT+G.

      Add the following highlighted lines to the end of the file:

      . . .
      discovery.type: single-node
      xpack.security.enabled: true
      

      The discovery.type setting allows Elasticsearch to run as a single node, as opposed to in a cluster of other Elasticsearch servers. The xpack.security.enabled setting turns on some of the security features that are included with Elasticsearch.

      Save and close the file when you are done editing it.

      Finally, add firewall rules to ensure your Elasticsearch server is reachable on its private network interface. If you followed the prerequisite tutorials and are using firewalld, run the following commands:

      • sudo firewall-cmd --permanent --zone=internal --change-interface=eth1
      • sudo firewall-cmd --permanent --zone=internal --add-service=elasticsearch
      • sudo firewall-cmd --permanent --zone=internal --add-service=kibana
      • sudo systemctl reload firewalld.service

      Substitute your private network interface name in place of eth1 in the first command if yours is different. That command changes the interface rules to use the internal Firewalld zone, which is more permissive than the default public zone.

      The next commands add rules to allow Elasticsearch traffic on ports 9200 and 9300, along with Kibana traffic on port 5601.

      The final command reloads the Firewalld service with the new permanent rules in place.
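
      If you would like to verify that the new rules are in place, you can list the internal zone’s configuration. In this optional check, the output should include your private network interface along with the elasticsearch and kibana services:

      • sudo firewall-cmd --zone=internal --list-all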

      Next you will start the Elasticsearch daemon and then configure passwords for use with the xpack security module.

      Starting Elasticsearch

      Now that you have configured networking and the xpack security settings for Elasticsearch, you need to start it for the changes to take effect.

      Run the following systemctl command to start Elasticsearch:

      • sudo systemctl start elasticsearch.service
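
      Elasticsearch can take a minute or so to initialize. As an optional check, you can confirm that the service is running and listening on the addresses that you configured:

      • sudo systemctl status elasticsearch.service
      • sudo ss -lnt | grep -E '9200|9300'

      The second command should show listeners on 127.0.0.1 as well as your private IP address.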

      Once Elasticsearch finishes starting, you can continue to the next section of this tutorial where you will generate passwords for the default users that are built-in to Elasticsearch.

      Configuring Elasticsearch Passwords

      Now that you have enabled the xpack.security.enabled setting, you need to generate passwords for the default Elasticsearch users. Elasticsearch includes a utility in the /usr/share/elasticsearch/bin directory that can automatically generate random passwords for these users.

      Run the following command to cd to the directory and then generate random passwords for all the default users:

      • cd /usr/share/elasticsearch/bin
      • sudo ./elasticsearch-setup-passwords auto

      You will receive output like the following. When prompted to continue, press y and then RETURN or ENTER:

      Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
      The passwords will be randomly generated and printed to the console.
      Please confirm that you would like to continue [y/N]y
      
      
      Changed password for user apm_system
      PASSWORD apm_system = eWqzd0asAmxZ0gcJpOvn
      
      Changed password for user kibana_system
      PASSWORD kibana_system = 1HLVxfqZMd7aFQS6Uabl
      
      Changed password for user kibana
      PASSWORD kibana = 1HLVxfqZMd7aFQS6Uabl
      
      Changed password for user logstash_system
      PASSWORD logstash_system = wUjY59H91WGvGaN8uFLc
      
      Changed password for user beats_system
      PASSWORD beats_system = 2p81hIdAzWKknhzA992m
      
      Changed password for user remote_monitoring_user
      PASSWORD remote_monitoring_user = 85HF85Fl6cPslJlA8wPG
      
      Changed password for user elastic
      PASSWORD elastic = 6kNbsxQGYZ2EQJiqJpgl
      

      You will not be able to run the utility again, so make sure to record these passwords somewhere secure. You will need to use the kibana_system user’s password in the next section of this tutorial, and the elastic user’s password in the Configuring Filebeat step of this tutorial.
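
      If you would like to confirm that authentication is now required, you can send a test request to Elasticsearch with curl, substituting the elastic user’s password that you just generated for the example value shown here:

      • curl -u elastic:6kNbsxQGYZ2EQJiqJpgl http://127.0.0.1:9200

      You should receive a JSON response with the node name and version details, while a request without the -u flag will be rejected with an authentication error.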

      At this point in the tutorial you are finished configuring Elasticsearch. The next section explains how to configure Kibana’s network settings and its xpack security module.

      Step 3 — Configuring Kibana

      In the previous section of this tutorial, you configured Elasticsearch to listen for connections on your Elasticsearch server’s private IP address. You will need to do the same for Kibana so that Filebeat on your Suricata server can reach it.

      First you’ll enable Kibana’s xpack security functionality by generating some secrets that Kibana will use to store data in Elasticsearch. Then you’ll configure Kibana’s network settings and authentication details to connect to Elasticsearch.

      Enabling xpack.security in Kibana

      To get started with xpack security settings in Kibana, you need to generate some encryption keys. Kibana uses these keys to store session data (like cookies), as well as various saved dashboards and views of data in Elasticsearch.

      You can generate the required encryption keys using the kibana-encryption-keys utility that is included in the /usr/share/kibana/bin directory. Run the following to cd to the directory and then generate the keys:

      • cd /usr/share/kibana/bin/
      • sudo ./kibana-encryption-keys generate -q --force

      The -q flag suppresses the tool’s instructions, and the --force flag will ensure that you create new keys. You should receive output like the following:

      Output

      xpack.encryptedSavedObjects.encryptionKey: 66fbd85ceb3cba51c0e939fb2526f585
      xpack.reporting.encryptionKey: 9358f4bc7189ae0ade1b8deeec7f38ef
      xpack.security.encryptionKey: 8f847a594e4a813c4187fa93c884e92b

      Copy these three keys somewhere secure. You will now add them to Kibana’s /etc/kibana/kibana.yml configuration file.

      Open the file using vi or your preferred editor:

      • sudo vi /etc/kibana/kibana.yml

      Go to the end of the file using the vi shortcut SHIFT+G. Paste the three xpack lines that you copied to the end of the file:

      /etc/kibana/kibana.yml

      . . .
      
      # Specifies locale to be used for all localizable strings, dates and number formats.
      # Supported languages are the following: English - en , by default , Chinese - zh-CN .
      #i18n.locale: "en"
      
      xpack.encryptedSavedObjects.encryptionKey: 66fbd85ceb3cba51c0e939fb2526f585
      xpack.reporting.encryptionKey: 9358f4bc7189ae0ade1b8deeec7f38ef
      xpack.security.encryptionKey: 8f847a594e4a813c4187fa93c884e92b
      

      Keep the file open and proceed to the next section where you will configure Kibana’s network settings.

      Configuring Kibana Networking

      To configure Kibana’s networking so that it is available on your Elasticsearch server’s private IP address, find the commented out #server.host: "localhost" line in /etc/kibana/kibana.yml. The line is near the beginning of the file. Add a new line after it with your server’s private IP address, as highlighted below:

      /etc/kibana/kibana.yml

      # Kibana is served by a back end server. This setting specifies the port to use.
      #server.port: 5601
      
      # Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
      # The default is 'localhost', which usually means remote machines will not be able to connect.
      # To allow connections from remote users, set this parameter to a non-loopback address.
      #server.host: "localhost"
      server.host: "your_private_ip"
      

      Substitute your private IP in place of the your_private_ip address.

      Save and close the file when you are done editing it. Next, you’ll need to configure the username and password that Kibana uses to connect to Elasticsearch.

      Configuring Kibana Credentials

      There are two ways to set the username and password that Kibana uses to authenticate to Elasticsearch. The first is to edit the /etc/kibana/kibana.yml configuration file and add the values there. The second method is to store the values in Kibana’s keystore, which is an obfuscated file that Kibana can use to store secrets.

      We’ll use the keystore method in this tutorial since it avoids editing Kibana’s configuration file directly.

      If you prefer to edit the file instead, the settings to configure in it are elasticsearch.username and elasticsearch.password.

      If you choose to edit the configuration file, skip the rest of the steps in this section.
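
      For reference, if you do edit /etc/kibana/kibana.yml directly instead of using the keystore, the two entries would look similar to the following sketch, with your generated kibana_system password in place of the placeholder value:

      /etc/kibana/kibana.yml

      elasticsearch.username: "kibana_system"
      elasticsearch.password: "your_kibana_system_password"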

      To add a secret to the keystore using the kibana-keystore utility, first cd to the /usr/share/kibana/bin directory. Next, run the following command to set the username for Kibana:

      • cd /usr/share/kibana/bin
      • sudo ./kibana-keystore add elasticsearch.username

      You will receive a prompt like the following:

      Username Entry

      Enter value for elasticsearch.username: *************
      

      Enter kibana_system when prompted, either by copying and pasting, or typing the username carefully. Each character that you type will be masked with an * asterisk character. Press ENTER or RETURN when you are done entering the username.

      Now repeat the process, this time to save the password. Be sure to copy the password for the kibana_system user that you generated in the previous section of this tutorial. For reference, in this tutorial the example password is 1HLVxfqZMd7aFQS6Uabl.

      Run the following command to set the password:

      • sudo ./kibana-keystore add elasticsearch.password

      When prompted, paste the password to avoid any transcription errors:

      Password Entry

      Enter value for elasticsearch.password: ********************
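
      You can optionally confirm that both values were saved by listing the keys in the keystore. The list subcommand prints only the key names, not the secret values:

      • sudo ./kibana-keystore list

      The output should include elasticsearch.username and elasticsearch.password.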
      

      Starting Kibana

      Now that you have configured networking and the xpack security settings for Kibana, as well as added credentials to the keystore, you need to start it for the changes to take effect.

      Run the following systemctl command to start Kibana:

      • sudo systemctl start kibana.service
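
      Kibana can take a short while to initialize. As an optional check, you can confirm that it is listening on port 5601 on your private IP address:

      • sudo ss -lnt | grep 5601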

      Once Kibana starts, you can continue to the next section of this tutorial where you will configure Filebeat on your Suricata server to send its logs to Elasticsearch.

      Step 4 — Installing Filebeat

      Now that your Elasticsearch and Kibana processes are configured with the correct network and authentication settings, the next step is to install and set up Filebeat on your Suricata server.

      To get started installing Filebeat, add the Elastic GPG key to your Suricata server with the following command:

      • sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

      Next, create an elasticsearch.repo file in your /etc/yum.repos.d directory with the following contents, using vi or your preferred editor:

      • sudo vi /etc/yum.repos.d/elasticsearch.repo

      /etc/yum.repos.d/elasticsearch.repo

      [elasticsearch]
      name=Elasticsearch repository for 7.x packages
      baseurl=https://artifacts.elastic.co/packages/7.x/yum
      gpgcheck=1
      gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
      enabled=0
      autorefresh=1
      type=rpm-md
      

      When you are finished making changes save and exit the file. Now install the Filebeat package using the dnf command:

      • sudo dnf install --enablerepo=elasticsearch filebeat

      Next you’ll need to configure Filebeat to connect to both Elasticsearch and Kibana. Open the /etc/filebeat/filebeat.yml configuration file using vi or your preferred editor:

      • sudo vi /etc/filebeat/filebeat.yml

      Find the Kibana section of the file around line 100. Add a line after the commented out #host: "localhost:5601" line that points to your Kibana instance’s private IP address and port:

      /etc/filebeat/filebeat.yml

      . . .
      # Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
      # This requires a Kibana endpoint configuration.
      setup.kibana:
      
        # Kibana Host
        # Scheme and port can be left out and will be set to the default (http and 5601)
        # In case you specify and additional path, the scheme is required: http://localhost:5601/path
        # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
        #host: "localhost:5601"
        host: "your_private_ip:5601"
      
      . . .
      

      This change will ensure that Filebeat can connect to Kibana in order to create the various SIEM indices, dashboards, and processing pipelines in Elasticsearch to handle your Suricata logs.

      Next, find the Elasticsearch Output section of the file around line 130 and edit the hosts, username, and password settings to match the values for your Elasticsearch server:

      output.elasticsearch:
        # Array of hosts to connect to.
        hosts: ["your_private_ip:9200"]
      
        # Protocol - either `http` (default) or `https`.
        #protocol: "https"
      
        # Authentication credentials - either API key or username/password.
        #api_key: "id:api_key"
        username: "elastic"
        password: "6kNbsxQGYZ2EQJiqJpgl"
      
      . . .
      

      Substitute in your Elasticsearch server’s private IP address on the hosts line. Uncomment the username field and leave it set to the elastic user. Change the password field from changeme to the password for the elastic user that you generated in the Configuring Elasticsearch Passwords section of this tutorial.

      Save and close the file when you are done editing it. Next, enable Filebeat’s built-in Suricata module with the following command:

      • sudo filebeat modules enable suricata

      Now that Filebeat is configured to connect to Elasticsearch and Kibana, with the Suricata module enabled, the next step is to load the SIEM dashboards and pipelines into Elasticsearch.
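
      Before loading the dashboards, you can optionally verify that Filebeat’s configuration is valid and that it can reach your Elasticsearch server using Filebeat’s built-in test subcommands:

      • sudo filebeat test config
      • sudo filebeat test output

      Both commands should report successful results if the configuration parses correctly and the connection to Elasticsearch on port 9200 works.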

      Run the filebeat setup command. It may take a few minutes to load everything:

      • sudo filebeat setup

      Once the command finishes you should receive output like the following:

      Output

      Overwriting ILM policy is disabled. Set `setup.ilm.overwrite: true` for enabling.
      Index setup finished.
      Loading dashboards (Kibana must be running and reachable)
      Loaded dashboards
      Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
      See more: https://www.elastic.co/guide/en/machine-learning/current/index.html
      It is not possble to load ML jobs into an Elasticsearch 8.0.0 or newer using the Beat.
      Loaded machine learning job configurations
      Loaded Ingest pipelines

      If there are no errors, use the systemctl command to start Filebeat. It will begin sending events from Suricata’s eve.json log to Elasticsearch once it is running.

      • sudo systemctl start filebeat.service

      Now that you have Filebeat, Kibana, and Elasticsearch configured to process your Suricata logs, the last step in this tutorial is to connect to Kibana and explore the SIEM dashboards.

      Step 5 — Navigating Kibana’s SIEM Dashboards

      Kibana is the graphical component of the Elastic stack. You will use Kibana with your browser to explore Suricata’s event and alert data. Since you configured Kibana to only be available via your Elasticsearch server’s private IP address, you will need to use an SSH tunnel to connect to Kibana.

      Connecting to Kibana with SSH

      SSH has an option -L that lets you forward network traffic on a local port over its connection to a remote IP address and port on a server. You will use this option to forward traffic from your browser to your Kibana instance.

      On Linux, macOS, and updated versions of Windows 10 and higher, you can use the built-in SSH client to create the tunnel. You will use this command each time you want to connect to Kibana. You can close this connection at any time and then run the SSH command again to re-establish the tunnel.

      Run the following command in a terminal on your local desktop or laptop computer to create the SSH tunnel to Kibana:

      • ssh -L 5601:your_private_ip:5601 sammy@203.0.113.5 -N

      The various arguments to SSH are:

      • The -L flag forwards traffic on your local system’s port 5601 to the remote server.
      • The your_private_ip:5601 portion of the command specifies the service on your Elasticsearch server where your traffic will be forwarded to. In this case that service is Kibana. Be sure to substitute your Elasticsearch server’s private IP address in place of your_private_ip.
      • The 203.0.113.5 address is the public IP address that you use to connect to and administer your server. Substitute your Elasticsearch server’s public IP address in its place.
      • The -N flag instructs SSH to not run a command like an interactive /bin/bash shell, and instead just hold the connection open. It is generally used when forwarding ports like in this example.

      If you would like to close the tunnel at any time, press CTRL+C.

      On Windows your terminal should resemble the following screenshot:

      Note: You may be prompted to enter a password if you are not using an SSH key. Type or paste it into the prompt and press ENTER or RETURN.

      Screenshot of Windows Command Prompt Showing SSH Command to Port Forward to Kibana

      On macOS and Linux your terminal will be similar to the following screenshot:

      Screenshot of a macOS or Linux Terminal Showing SSH Command to Port Forward to Kibana

      Once you have connected to your Elasticsearch server over SSH with the port forward in place, open your browser and visit http://127.0.0.1:5601. You will be redirected to Kibana’s login page:

      Screenshot of a Browser on Kibana's Login Page

      If your browser cannot connect to Kibana you will receive a message like the following in your terminal:

      Output

      channel 3: open failed: connect failed: No route to host

      This error indicates that your SSH tunnel is unable to reach the Kibana service on your server. Ensure that you have specified the correct private IP address for your Elasticsearch server and reload the page in your browser.

      Log in to your Kibana server using elastic for the Username, and the password that you copied earlier in this tutorial for the user.

      Browsing Kibana SIEM Dashboards

      Once you are logged into Kibana you can explore the Suricata dashboards that Filebeat configured for you.

      In the search field at the top of the Kibana Welcome page, input the search terms type:dashboard suricata. This search will return two results: the Suricata Events and Suricata Alerts dashboards per the following screenshot:

      Screenshot of a Browser Using Kibana's Global Search Box to Locate Suricata Dashboards

      Click the [Filebeat Suricata] Events Overview result to visit the Kibana dashboard that shows an overview of all logged Suricata events:

      Screenshot of a Browser on Kibana's Suricata Events Dashboard

      To visit the Suricata Alerts dashboard, repeat the search or click the Alerts link that is included in the Events dashboard. Your page should resemble the following screenshot:

      Screenshot of a Browser on Kibana's Suricata Alerts Dashboard

      If you would like to inspect the events and alerts that each dashboard displays, scroll to the bottom of the page where you will find a table that lists each event and alert. You can expand each entry to view the original log entry from Suricata, and examine in detail the various fields like source and destination IPs for an alert, the attack type, Suricata signature ID, and others.

      Kibana also has a built-in set of Security dashboards that you can access using the menu on the left side of the browser window. Navigate to the Network dashboard for an overview of events displayed on a map, as well as aggregate data about events on your network. Your dashboard should resemble the following screenshot:

      Screenshot of a Browser on Kibana's Security -> Network Dashboard

      You can scroll to the bottom of the Network dashboard for a table that lists all of the events that match your specified search timeframe. You can also examine each event in detail, or select an event to generate a Kibana timeline that you can then use to investigate specific traffic flows, alerts, or community IDs.

      Conclusion

      In this tutorial you installed and configured Elasticsearch and Kibana on a standalone server. You configured both tools to be available on a private IP address. You also configured Elasticsearch and Kibana’s authentication settings using the xpack security module that is included with each tool.

      After completing the Elasticsearch and Kibana configuration steps, you also installed and configured Filebeat on your Suricata server. You used Filebeat to populate Kibana’s dashboards and start sending Suricata logs to Elasticsearch.

      Finally, you created an SSH tunnel to your Elasticsearch server and logged into Kibana. You located the new Suricata Events and Alerts dashboards, as well as the Network dashboard.

      The last tutorial in this series will guide you through using Kibana’s SIEM functionality to process your Suricata alerts. In it you will explore how to create cases to track specific alerts, timelines to correlate network flows, and rules to match specific Suricata events that you would like to track or analyze in more detail.




      How To Install Suricata on CentOS 8 Stream




      Introduction

      Suricata is a Network Security Monitoring (NSM) tool that uses sets of community-created and user-defined signatures (also referred to as rules) to examine and process network traffic. Suricata can generate log events, trigger alerts, and drop traffic when it detects suspicious packets or requests to any number of different services running on a server.

      By default Suricata works as a passive Intrusion Detection System (IDS) to scan for suspicious traffic on a server or network. It will generate and log alerts for further investigation. It can also be configured as an active Intrusion Prevention System (IPS) to log, alert, and completely block network traffic that matches specific rules.

      You can deploy Suricata on a gateway host in a network to scan all incoming and outgoing network traffic from other systems, or you can run it locally on individual machines in either mode.

      In this tutorial you will learn how to install Suricata, and how to customize some of its default settings on CentOS 8 Stream to suit your needs. You will also learn how to download existing sets of signatures (usually referred to as rulesets) that Suricata uses to scan network traffic. Finally, you’ll learn how to test whether Suricata is working correctly when it detects suspicious requests and data in a response.

      Prerequisites

      Depending on your network configuration and how you intend to use Suricata, you may need more or less CPU and RAM for your server. Generally, the more traffic you plan to inspect the more resources you should allocate to Suricata. In a production environment plan to use at least 2 CPUs and 4 or 8GB of RAM to start with. From there you can scale up resources according to Suricata’s performance and the amount of traffic that you need to process.

      If you plan to use Suricata to protect the server that it is running on, you will need:

      Otherwise, if you plan to use Suricata on a gateway host to monitor and protect multiple servers, you will need to ensure that the host’s networking is configured correctly.

      If you are using DigitalOcean you can follow this guide on How to Configure a Droplet as a VPC Gateway. Those instructions should work for most CentOS, Fedora, and other RedHat derived servers as well.

      Step 1 — Installing Suricata

      To get started installing Suricata, you will need to add the Open Information Security Foundation’s (OISF) software repository information to your CentOS system. You can use the dnf copr enable command to do this. You will also need to add the Extra Packages for Enterprise Linux (EPEL) repository.

      To enable the Community Projects (copr) subcommand for the dnf package tool, run the following:

      • sudo dnf install 'dnf-command(copr)'

      You will be prompted to install some additional dependencies, as well as accept the GPG key for the CentOS Linux distribution. Press y and ENTER each time to finish installing the copr package.

      Next run the following command to add the OISF repository to your system and update the list of available packages:

      • sudo dnf copr enable @oisf/suricata-6.0

      Press y and ENTER when you are prompted to confirm that you want to add the repository.

      Now add the epel-release package, which will make some extra dependency packages available for Suricata:

      • sudo dnf install epel-release

      When you are prompted to import the GPG key, press y and ENTER to accept.

      Now that you have the required software repositories enabled, you can install the suricata package using the dnf command:

      • sudo dnf install suricata

      When you are prompted to add the GPG key for the OISF repository, press y and ENTER. The package and its dependencies will now be downloaded and installed.

      Next, enable the suricata.service so that it will run when your system restarts. Use the systemctl command to enable it:

      • sudo systemctl enable suricata.service

      You should receive output like the following indicating the service is enabled:

      Output

      Created symlink /etc/systemd/system/multi-user.target.wants/suricata.service → /usr/lib/systemd/system/suricata.service.

      Before moving on to the next section of this tutorial, which explains how to configure Suricata, stop the service using systemctl:

      • sudo systemctl stop suricata.service

      Stopping Suricata ensures that when you edit and test the configuration file, any changes that you make will be validated and loaded when Suricata starts up again.

      Step 2 — Configuring Suricata For The First Time

      The Suricata package from the OISF repositories ships with a configuration file that covers a wide variety of use cases. The default mode for Suricata is IDS mode, so no traffic will be dropped, only logged. Leaving this mode set to the default is a good idea as you learn Suricata. Once you have Suricata configured and integrated into your environment, and have a good idea of the kinds of traffic that it will alert you about, you can opt to turn on IPS mode.

      However, the default configuration still has a few settings that you may need to change depending on your environment and needs.

      Suricata can include a Community ID field in its JSON output to make it easier to match individual event records to records in datasets generated by other tools.

      If you plan to use Suricata with other tools like Zeek or Elasticsearch, adding the Community ID now is a good idea.

      To enable the option, open /etc/suricata/suricata.yaml using vi or your preferred editor:

      • sudo vi /etc/suricata/suricata.yaml

      Find line 120 which reads # Community Flow ID. If you are using vi type 120gg to go directly to the line. Below that line is the community-id key. Set it to true to enable the setting:

      /etc/suricata/suricata.yaml

      . . .
            # Community Flow ID
            # Adds a 'community_id' field to EVE records. These are meant to give
            # records a predictable flow ID that can be used to match records to
            # output of other tools such as Zeek (Bro).
            #
            # Takes a 'seed' that needs to be same across sensors and tools
            # to make the id less predictable.
      
            # enable/disable the community id feature.
            community-id: true
      . . .
      

      Now when you examine events, they will have an ID like 1:S+3BA2UmrHK0Pk+u3XH78GAFTtQ= that you can use to correlate records across different NSM tools.

      Save and close the /etc/suricata/suricata.yaml file. If you are using vi, you can do so with ESC and then :x then ENTER to save and exit the file.

      Determining Which Network Interface(s) To Use

      You may need to override the default network interface or interfaces that you would like Suricata to inspect traffic on. The configuration file that comes with the OISF Suricata package defaults to inspecting traffic on a device called eth0. If your system uses a different default network interface, or if you would like to inspect traffic on more than one interface, then you will need to change this value.

      To determine the device name of your default network interface, you can use the ip command as follows:

      • ip -p -j route show default

      The -p flag formats the output to be more readable, and the -j flag prints the output as JSON.

      You should receive output like the following:

      Output

      [ {
              "dst": "default",
              "gateway": "203.0.113.254",
              "dev": "eth0",
              "protocol": "static",
              "metric": 100,
              "flags": [ ]
          } ]

      The dev line indicates the default device. In this example output, the device is the highlighted eth0 interface. Your output may show a device name like ens... or eno.... Whatever the name is, make a note of it.

      Now you can edit Suricata’s configuration and verify or change the interface name. Open the /etc/suricata/suricata.yaml configuration file using vi or your preferred editor:

      • sudo vi /etc/suricata/suricata.yaml

      Scroll through the file until you come to a line that reads af-packet: around line 580. If you are using vi you can also go to the line directly by entering 580gg. Below that line is the default interface that Suricata will use to inspect traffic. Edit the line to match your interface like the highlighted example that follows:

      /etc/suricata/suricata.yaml

      # Linux high speed capture support
      af-packet:
        - interface: eth0
          # Number of receive threads. "auto" uses the number of cores
          #threads: auto
          # Default clusterid. AF_PACKET will load balance packets based on flow.
          cluster-id: 99
      . . .
      

      If you want to inspect traffic on additional interfaces, you can add more - interface: eth... YAML objects. For example, to add a device named enp0s1, scroll down to the bottom of the af-packet section to around line 650. To add a new interface, insert it before the - interface: default section like the following highlighted example:

      /etc/suricata/suricata.yaml

          #  For eBPF and XDP setup including bypass, filter and load balancing, please
          #  see doc/userguide/capture-hardware/ebpf-xdp.rst for more info.
      
        - interface: enp0s1
          cluster-id: 98
      
        - interface: default
          #threads: auto
          #use-mmap: no
          #tpacket-v3: yes
      

      Be sure to choose a unique cluster-id value for each - interface object.

      Keep your editor open and proceed to the next section where you will configure live rule reloading. If you do not want to enable that setting then you can save and close the /etc/suricata/suricata.yaml file. If you are using vi, you can do so with ESC, then :x and ENTER to save and quit.

      Configuring Live Rule Reloading

      Suricata supports live rule reloading, which means you can add, remove, and edit rules without needing to restart the running Suricata process. To enable the live reload option, scroll to the bottom of the configuration file and add the following lines:

      /etc/suricata/suricata.yaml

      . . .
      
      detect-engine:
        - rule-reload: true
      

      With this setting in place, you will be able to send the SIGUSR2 system signal to the running process, and Suricata will reload any changed rules into memory.

      A command like the following will notify the Suricata process to reload its rulesets, without restarting the process:

      • sudo kill -usr2 $(pidof suricata)

      The $(pidof suricata) portion of the command invokes a subshell, and finds the process ID of the running Suricata daemon. The beginning sudo kill -usr2 part of the command uses the kill utility to send the SIGUSR2 signal to the process ID that is reported back by the subshell.

      You can use this command any time you run suricata-update or when you add or edit your own custom rules.
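
      For example, once Suricata is running you could combine the two steps, fetching updated rulesets and then signaling the running process to reload them, in a single line like the following:

      • sudo suricata-update && sudo kill -usr2 $(pidof suricata)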

      Save and close the /etc/suricata/suricata.yaml file. If you are using vi, you can do so with ESC, then :x and ENTER to confirm.

      Step 3 — Updating Suricata Rulesets

      At this point in the tutorial, if you were to start Suricata, you would receive a warning message like the following in the logs that there are no loaded rules:

      Output

      <Warning> - [ERRCODE: SC_ERR_NO_RULES(42)] - No rule files match the pattern /var/lib/suricata/rules/suricata.rules

      By default the Suricata package includes a limited set of detection rules (in the /etc/suricata/rules directory), so turning Suricata on at this point would only detect a limited amount of bad traffic.

      Suricata includes a tool called suricata-update that can fetch rulesets from external providers. Run it as follows to download an up-to-date ruleset for your Suricata server:

      • sudo suricata-update

      You should receive output like the following:

      Output

      19/10/2021 -- 19:31:03 - <Info> -- Using data-directory /var/lib/suricata.
      19/10/2021 -- 19:31:03 - <Info> -- Using Suricata configuration /etc/suricata/suricata.yaml
      19/10/2021 -- 19:31:03 - <Info> -- Using /usr/share/suricata/rules for Suricata provided rules.
      . . .
      19/10/2021 -- 19:31:03 - <Info> -- No sources configured, will use Emerging Threats Open
      19/10/2021 -- 19:31:03 - <Info> -- Fetching https://rules.emergingthreats.net/open/suricata-6.0.3/emerging.rules.tar.gz.
       100% - 3062850/3062850
      . . .
      19/10/2021 -- 19:31:06 - <Info> -- Writing rules to /var/lib/suricata/rules/suricata.rules: total: 31011; enabled: 23649; added: 31011; removed 0; modified: 0
      19/10/2021 -- 19:31:07 - <Info> -- Writing /var/lib/suricata/rules/classification.config
      19/10/2021 -- 19:31:07 - <Info> -- Testing with suricata -T.
      19/10/2021 -- 19:31:32 - <Info> -- Done.

      The highlighted lines indicate suricata-update has fetched the free Emerging Threats ET Open Rules, and saved them to Suricata’s /var/lib/suricata/rules/suricata.rules file. It also indicates the number of rules that were processed, in this example, 31011 were added and of those 23649 were enabled.

      Adding Ruleset Providers

      The suricata-update tool can fetch rules from a variety of free and commercial ruleset providers. Some rulesets like the ET Open set that you already added are available for free, while others require a paid subscription.

      You can list the default set of rule providers using the list-sources command of suricata-update like this:

      • sudo suricata-update list-sources

      You will receive a list of sources like the following:

      Output

      . . .
      19/10/2021 -- 19:27:34 - <Info> -- Adding all sources
      19/10/2021 -- 19:27:34 - <Info> -- Saved /var/lib/suricata/update/cache/index.yaml
      Name: et/open
        Vendor: Proofpoint
        Summary: Emerging Threats Open Ruleset
        License: MIT
      . . .

      For example, if you wanted to include the tgreen/hunting ruleset, you could enable it using the following command:

      • sudo suricata-update enable-source tgreen/hunting

      Then run suricata-update again and the new set of rules will be added, in addition to the existing ET Open rules and any others that you have downloaded.

      Step 4 — Validating Suricata’s Configuration

      Now that you have edited Suricata’s configuration file to include the optional Community ID, specified the default network interface, and enabled live rule reloading, it is a good idea to test the configuration.

      Suricata has a built-in test mode that will check the configuration file and any included rules for validity. Validate your changes from the previous section using the -T flag to run Suricata in test mode. The -v flag will print some additional information, and the -c flag tells Suricata where to find its configuration file:

      • sudo suricata -T -c /etc/suricata/suricata.yaml -v

      The test can take some time depending on the amount of CPU you have allocated to Suricata and the number of rules that you have added, so be prepared to wait for a minute or two for it to complete.

      With the default ET Open ruleset you should receive output like the following:

      Output

      21/10/2021 -- 15:00:40 - <Info> - Running suricata under test mode
      21/10/2021 -- 15:00:40 - <Notice> - This is Suricata version 6.0.3 RELEASE running in SYSTEM mode
      21/10/2021 -- 15:00:40 - <Info> - CPUs/cores online: 2
      21/10/2021 -- 15:00:40 - <Info> - fast output device (regular) initialized: fast.log
      21/10/2021 -- 15:00:40 - <Info> - eve-log output device (regular) initialized: eve.json
      21/10/2021 -- 15:00:40 - <Info> - stats output device (regular) initialized: stats.log
      21/10/2021 -- 15:00:46 - <Info> - 1 rule files processed. 23879 rules successfully loaded, 0 rules failed
      21/10/2021 -- 15:00:46 - <Info> - Threshold config parsed: 0 rule(s) found
      21/10/2021 -- 15:00:47 - <Info> - 23882 signatures processed. 1183 are IP-only rules, 4043 are inspecting packet payload, 18453 inspect application layer, 107 are decoder event only
      21/10/2021 -- 15:01:13 - <Notice> - Configuration provided was successfully loaded. Exiting.
      21/10/2021 -- 15:01:13 - <Info> - cleaning up signature grouping structure... complete

      If there is an error in your configuration file, then the test mode will generate a specific error code and message that you can use to help troubleshoot. For example, including a rules file that does not exist called test.rules would generate an error like the following:

      Output

      21/10/2021 -- 15:10:15 - <Info> - Running suricata under test mode
      21/10/2021 -- 15:10:15 - <Notice> - This is Suricata version 6.0.3 RELEASE running in SYSTEM mode
      21/10/2021 -- 15:10:15 - <Info> - CPUs/cores online: 2
      21/10/2021 -- 15:10:15 - <Info> - eve-log output device (regular) initialized: eve.json
      21/10/2021 -- 15:10:15 - <Info> - stats output device (regular) initialized: stats.log
      21/10/2021 -- 15:10:21 - <Warning> - [ERRCODE: SC_ERR_NO_RULES(42)] - No rule files match the pattern /var/lib/suricata/rules/test.rules

      With that error you could then edit your configuration file to include the correct path, or fix invalid variables and configuration options.

      Once your Suricata test mode run completes successfully you can move to the next step, which is starting Suricata in daemon mode.

      Step 5 — Running Suricata

      Now that you have a valid Suricata configuration and ruleset, you can start the Suricata server. Run the following systemctl command:

      • sudo systemctl start suricata.service

      You can examine the status of the service using the systemctl status command:

      • sudo systemctl status suricata.service

      You should receive output like the following:

      Output

      ● suricata.service - Suricata Intrusion Detection Service
         Loaded: loaded (/usr/lib/systemd/system/suricata.service; enabled; vendor preset: disabled)
         Active: active (running) since Thu 2021-10-21 18:22:56 UTC; 1min 57s ago
           Docs: man:suricata(1)
        Process: 24588 ExecStartPre=/bin/rm -f /var/run/suricata.pid (code=exited, status=0/SUCCESS)
       Main PID: 24590 (Suricata-Main)
          Tasks: 1 (limit: 23473)
         Memory: 80.2M
         CGroup: /system.slice/suricata.service
                 └─24590 /sbin/suricata -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid -i eth0 --user suricata

      Oct 21 18:22:56 suricata systemd[1]: Starting Suricata Intrusion Detection Service..
      Oct 21 18:22:56 suricata systemd[1]: Started Suricata Intrusion Detection Service.
      . . .

      As with the test mode command, it will take Suricata a minute or two to load and parse all of the rules. You can use the tail command to watch for a specific message in Suricata’s logs that indicates it has finished starting:

      • sudo tail -f /var/log/suricata/suricata.log

      You will receive a number of lines of output, and the terminal may appear to be stuck while Suricata loads. Continue waiting for output until you receive a line like the following:

      Output

      19/10/2021 -- 19:22:39 - <Info> - All AFP capture threads are running.

      This line indicates Suricata is running and ready to inspect traffic. You can exit the tail command using CTRL+C.

      Now that you have verified that Suricata is running, the next step in this tutorial is to check whether Suricata detects a request to a test URL that is designed to generate an alert.

      Step 6 — Testing Suricata Rules

      The ET Open ruleset that you downloaded contains over 30,000 rules. A full explanation of how Suricata rules work and how to construct them is beyond the scope of this introductory tutorial. A subsequent tutorial in this series will explain how rules work and how to build your own.

      For the purposes of this tutorial, testing whether Suricata is detecting suspicious traffic with the configuration that you generated is sufficient. The Suricata Quickstart recommends testing the ET Open rule with number 2100498 using the curl command.

      Run the following to generate an HTTP request, which will return a response that matches Suricata’s alert rule:

      • curl http://testmynids.org/uid/index.html

      The curl command will output a response like the following:

      Output

      uid=0(root) gid=0(root) groups=0(root)

      This example response data is designed to trigger an alert, by pretending to return the output of a command like id that might run on a compromised remote system via a web shell.

      Now you can check Suricata’s logs for a corresponding alert. There are two logs that are enabled with the default Suricata configuration. The first is in /var/log/suricata/fast.log and the second is a machine-readable log in /var/log/suricata/eve.json.

      Examining /var/log/suricata/fast.log

      To check for a log entry in /var/log/suricata/fast.log that corresponds to your curl request use the grep command. Using the 2100498 rule identifier from the Quickstart documentation, search for entries that match it using the following command:

      • grep 2100498 /var/log/suricata/fast.log

      If your request used IPv6, then you should receive output like the following, where 2001:DB8::1 is your system’s public IPv6 address:

      Output

      10/21/2021-18:35:54.950106 [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 2600:9000:2000:4400:0018:30b3:e400:93a1:80 -> 2001:DB8::1:34628

      If your request used IPv4, then your log should have a message like this, where 203.0.113.1 is your system’s public IPv4 address:

      Output

      10/21/2021-18:35:57.247239 [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 204.246.178.81:80 -> 203.0.113.1:36364

      Note the highlighted 2100498 value in the output, which is the Signature ID (sid) that Suricata uses to identify a rule.

      Examining /var/log/suricata/eve.json

      Suricata also logs events to /var/log/suricata/eve.json (nicknamed the EVE log) using JSON to format entries.

      The Suricata documentation recommends using the jq utility to read and filter the entries in this file. Install jq if you do not have it on your system using the following dnf command:

      • sudo dnf install jq

      Once you have jq installed, you can filter the events in the EVE log by searching for the 2100498 signature with the following command:

      • jq 'select(.alert .signature_id==2100498)' /var/log/suricata/eve.json

      The command examines each JSON entry and prints any that have an alert object, with a signature_id key that matches the 2100498 value that you are searching for. The output will resemble the following:

      Output

      {
        "timestamp": "2021-10-21T19:42:47.368856+0000",
        "flow_id": 775889108832281,
        "in_iface": "eth0",
        "event_type": "alert",
        "src_ip": "203.0.113.1",
        "src_port": 80,
        "dest_ip": "147.182.148.159",
        "dest_port": 38920,
        "proto": "TCP",
        "community_id": "1:vuSfAFyy7oUq0LQC5+KNTBSuPxg=",
        "alert": {
          "action": "allowed",
          "gid": 1,
          "signature_id": 2100498,
          "rev": 7,
          "signature": "GPL ATTACK_RESPONSE id check returned root",
          "category": "Potentially Bad Traffic",
      . . .
      }

      Note the highlighted "signature_id": 2100498, line, which is the key that jq is searching for. Also note the highlighted "community_id": "1:vuSfAFyy7oUq0LQC5+KNTBSuPxg=", line in the JSON output. This key is the generated Community Flow Identifier that you enabled in Suricata’s configuration file.

      Each alert will generate a unique Community Flow Identifier. Other NSM tools can also generate the same identifier to enable cross-referencing a Suricata alert with output from other tools.

      A matching log entry in either log file means that Suricata successfully inspected the network traffic, matched it against a detection rule, and generated an alert for subsequent analysis or logging. A future tutorial in this series will explore how to send Suricata alerts to a Security Information and Event Management (SIEM) system for further processing.

      Step 7 — Handling Suricata Alerts

      Once you have alerts set up and tested, you can choose how you want to handle them. For some use cases, logging alerts for auditing purposes may be sufficient; or you may prefer to take a more active approach to blocking traffic from systems that generate repeated alerts.

      If you would like to block traffic based on the alerts that Suricata generates, one approach is to use entries from the EVE log and then add firewall rules to restrict access to your system or systems. You can use the jq tool to extract specific fields from an alert, and then add firewalld or iptables rules to block requests.

      Again, this example is a hypothetical scenario using deliberately crafted request and response data. Your knowledge of the systems and protocols that your environment should be able to access is essential in order to determine which traffic is legitimate and which can be blocked.
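
      As a rough sketch of that workflow, assuming the default EVE log location used in this tutorial and firewalld as the firewall, you could list the source addresses of recent alerts with jq and then block a specific offender with a rich rule. The 203.0.113.100 address below is purely a hypothetical example:

      • jq -r 'select(.event_type=="alert") | .src_ip' /var/log/suricata/eve.json | sort -u
      • sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.100" reject'
      • sudo firewall-cmd --reload

      Review the extracted addresses carefully before adding any rules, since blocking a legitimate host could interrupt your own services.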

      Conclusion

      In this tutorial you installed Suricata from the OISF software repositories. Installing Suricata this way ensures that you can receive updates whenever a new version of Suricata is released. After installing Suricata you edited the default configuration to add a Community Flow ID for use with other security tools. You also enabled live rule reloading, and downloaded an initial set of rules.

      Once you validated Suricata’s configuration, you started the process and generated some test HTTP traffic. You verified that Suricata could detect suspicious traffic by examining both of the default logs to make sure they contained an alert corresponding to the rule you were testing.

      For more information about Suricata, visit the official Suricata site. For more details on any of the configuration options that you configured in this tutorial, refer to the Suricata User Guide.

      Now that you have Suricata installed and configured, you can continue to the next tutorial in this series, Understanding Suricata Signatures, where you’ll explore how to write your own custom Suricata rules. You’ll learn about different ways to create alerts, or even how to drop traffic entirely, based on criteria like invalid TCP/IP packets, the contents of DNS queries, HTTP requests and responses, and even TLS handshakes.


