

      How To Troubleshoot Terraform

      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.


      Things do not always go according to plan: deployments can fail, existing resources may break unexpectedly, and you and your team could be forced to fix the issue as soon as possible. Understanding the methods to approach debugging your Terraform project is crucial when you need to make a swift response.

      As with other programming languages and frameworks, setting log levels in Terraform gives you insight into its internal workflows at the verbosity you need when troubleshooting. By logging its internal actions, you can uncover implicitly hidden errors, such as variables defaulting to an unsuitable data type. Also common with frameworks is the ability to import and use third-party modules (or libraries) to reduce code duplication between projects.

      In this tutorial, you’ll verify that variables always have sensible values and you’ll specify exactly which versions of providers and modules you need to prevent conflicts. You’ll also enable various levels of debug mode verbosity, which can help you diagnose an underlying issue in Terraform itself.


      Prerequisites

      • A DigitalOcean Personal Access Token, which you can create via the DigitalOcean Control Panel. You can find instructions to do this in How to Generate a Personal Access Token.
      • Terraform installed on your local machine and a project set up with the DigitalOcean provider. Complete Step 1 and Step 2 of the How To Use Terraform with DigitalOcean tutorial, and be sure to name the project folder terraform-troubleshooting, instead of loadbalance. During Step 2, do not include the pvt_key variable and the SSH key resource.

      Note: We have specifically tested this tutorial using Terraform 0.13.

      Setting Version Constraints

      Although the ability to make use of third-party modules and providers can minimize code duplication and effort when writing your Terraform code, it is also possible for developers of third-party modules to release new versions that can potentially bring breaking changes to your specific code. To prevent this, Terraform allows you to specify version boundaries to ensure that only the versions you want are installed and used. Specifying versions means that you will require the versions you’ve tested in your code, but it also leaves the possibility for a future update.

      Version constraints are specified as strings and passed in to the version parameter when you define module or provider requirements. As part of the Prerequisites, you’ve already requested the digitalocean provider. Open the file in which you declared it to review the version requirements it specifies.

      You’ll find the provider code as follows:


      terraform {
        required_providers {
          digitalocean = {
            source  = "digitalocean/digitalocean"
            version = "1.22.2"
          }
        }
      }
      In this case, you have requested the digitalocean provider, and explicitly set the version to 1.22.2. When your project only requires a specific version, this is the most effortless way to accomplish that.

      Version constraints can be more complex than just specifying one version. They can contain one or more groups of conditions, separated by a comma (,). The groups each define an acceptable range of versions and may include operators, such as:

      • >, <, >=, <=: for comparisons, such as >=1.0, which would require the version to be equal to or greater than 1.0.
      • !=: for excluding a specific version; != 1.0 would prevent version 1.0 from being used and requested.
      • ~>: for matching the specified version up to the right-most version part, which is allowed to increment (~>1.5.10 will match 1.5.10 and 1.5.11, but won’t match 1.5.9).

      Here are two examples of version constraints with multiple groups:

      • >=1.0, <2.0: allows all versions from the 1.0 series onward, up to, but not including, 2.0.
      • >1.0, != 1.5: allows versions greater than, but not equal to 1.0, with the exception of 1.5, which it also excludes.
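      Combining these operators, the earlier provider requirement could, as a sketch, be loosened to accept any tested 1.x release at or above 1.22.2 while refusing a future 2.0 (the exact bounds here are illustrative):

```hcl
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      # Accept patch and minor updates within the 1.x series, but never 2.0
      version = ">= 1.22.2, < 2.0.0"
    }
  }
}
```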

      For a potential available version to be selected, it must pass every specified constraint and remain compatible with other modules and providers, as well as the version of Terraform that you’re using. If Terraform deems no combination acceptable, it won’t be able to perform any tasks because the dependencies remain unresolved. When Terraform identifies acceptable versions satisfying the constraints, it uses the latest one available.

      In this section, you’ve learned about locking the range of module and resource versions that you can install in your project by specifying version constraints. This is useful when you want stability by using only tested and approved versions of third-party code. In the next section, you’ll configure Terraform to show more verbose logs, which are necessary for bug reports and further debugging in case of crashes.

      Enabling Debug Mode

      There could be a bug or malformed input within your workflow, which may result in your resources not provisioning as intended. In such rare cases, it’s important to know how to access detailed logs describing what Terraform is doing. They may aid in pinpointing the cause of the error, tell you if it’s user-made, or prompt you to report the issue to Terraform developers if it’s an internal bug.

      Terraform exposes the TF_LOG environment variable for setting the level of logging verbosity, of which there are five:

      • TRACE: the most elaborate verbosity, shows every step taken by Terraform and produces enormous outputs with internal logs.
      • DEBUG: describes what happens internally in a more concise way compared to TRACE.
      • INFO: shows general, high-level messages about the execution process.
      • WARN: logs warnings, which may indicate misconfiguration or mistakes, but are not critical to execution.
      • ERROR: shows errors that prevent Terraform from continuing.

      To specify a desired log level, set the TF_LOG environment variable to the appropriate value, replacing log_level accordingly:

      • export TF_LOG=log_level

      If TF_LOG is defined, but the value is not one of the five listed verbosity levels, Terraform will default to TRACE.

      You’ll now define a Droplet resource and try deploying it with different log levels. You’ll store the Droplet definition in its own file, so create and open it for editing:

      Add the following lines:


      resource "digitalocean_droplet" "test-droplet" {
        image  = "ubuntu-18-04-x64"
        name   = "test-droplet"
        region = "fra1"
        size   = "s-1vcpu-1gb"

      This Droplet will run Ubuntu 18.04 with one CPU core and 1GB RAM in the fra1 region; you’ll call it test-droplet. That is all you need to define, so save and close the file.

      Before deploying the Droplet, set the log level to DEBUG by running:

      • export TF_LOG=DEBUG

      Then, plan the Droplet provisioning:

      • terraform plan -var "do_token=${DO_PAT}"

      The output will be very long, and you can inspect it more closely to find that each line starts with the level of verbosity (importance) in square brackets. You’ll see that most of the lines start with [DEBUG].

      [WARN] and [INFO] are also present; that’s because TF_LOG sets the lowest log level. This means that you’d have to set TF_LOG to TRACE to show TRACE and all other log levels at the same time.
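      Since a DEBUG run can produce thousands of lines, it can help to filter the output by the bracketed level. Here is a minimal shell sketch; the log lines below are invented for illustration and are not real Terraform output:

```shell
# Sample lines shaped like Terraform's logging output (illustrative only)
log='2021/01/20 06:54:35 [DEBUG] plugin: starting plugin
2021/01/20 06:54:35 [INFO] backend/local: starting Plan operation
2021/01/20 06:54:35 [WARN] Provider produced an unexpected value
2021/01/20 06:54:35 [ERROR] eval: apply errored'

# Keep only the warning and error lines, as you might when skimming a long run
filtered=$(printf '%s\n' "$log" | grep -E '\[(WARN|ERROR)\]')
printf '%s\n' "$filtered"
```

      In practice, you would pipe the output of terraform plan through the same grep filter.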

      If an internal error occurred, Terraform will show the stack trace and exit, stopping execution. From there, you’ll be able to locate where in the source code the error occurred, and if it’s a bug, report it to Terraform developers. Otherwise, if it’s an error in your code, Terraform will point it out to you, so you can fix it in your project.

      Here is how the log output looks when the DigitalOcean backend can’t verify your API token; Terraform reports a user error caused by incorrect input:


      ...
      digitalocean_droplet.test-droplet: Creating...
      2021/01/20 06:54:35 [ERROR] eval: *terraform.EvalApplyPost, err: Error creating droplet: POST 401 Unable to authenticate you
      2021/01/20 06:54:35 [ERROR] eval: *terraform.EvalSequence, err: Error creating droplet: POST 401 Unable to authenticate you

      Error: Error creating droplet: POST 401 Unable to authenticate you

        on line 1, in resource "digitalocean_droplet" "test-droplet":
         1: resource "digitalocean_droplet" "test-droplet" {
      ...

      You’ve now learned to enable more verbose logging modes. They are very useful for diagnosing crashes and unexpected Terraform behavior. In the next section, you’ll review verifying variables and preventing edge cases.

      Validating Variables

      In this section, you’ll ensure that variables always have sensible and appropriate values according to their type and validation parameters.

      In HCL (HashiCorp Configuration Language), when defining a variable you do not necessarily need to specify anything except its name. You would declare an example variable called test_ip like this:

      variable "test_ip" { }

      You can then use this variable throughout your code, passing its value in when you run Terraform.

      While that will work, this definition has two shortcomings: first, there is no default, so you must supply a value every time you run Terraform; and second, the value can be of any type (bool, string, and so on), which may not be suitable for its purpose. To remedy this, you should always specify a default value and a type:

      variable "test_ip" {
        type    = string
        default = ""

      By setting a default value, you ensure that the code referencing the variable remains operational in the event that a more specific value was not provided. When you specify a type, Terraform can validate the new value the variable should be set to, and show an error if it’s non-conforming to the type. An instance of this behavior would be trying to fit a string into a number.

      A new feature of Terraform 0.13 is that you can provide a validation routine for variables that can give an error message if the validation fails. Examples of validation would be checking the length of the new value if it’s a string, or looking for at least one match with a RegEx expression in case of structured data.

      To add input validation to your variable, define a validation block:

      variable "test_ip" {
        type    = string
        default = ""
        validation {
          condition     = can(regex("d{1,3}.d{1,3}.d{1,3}.d{1,3}", var.test_ip))
          error_message = "The provided value is not a valid IP address."

      Under validation, you can specify two parameters within the curly braces:

      • A condition that accepts an expression evaluating to a bool, which signifies whether the validation passes.
      • An error_message that specifies the error message to show in case the validation does not pass.

      In this example, you compute the condition by searching for a regex match in the variable value and passing the result to the can function. can returns true if the expression passed to it evaluates without errors, so it’s useful for checking whether a function completed successfully or returned results.

      The regex function used here accepts a Regular Expression (RegEx), applies it to a given string, and returns the matched substrings. The RegEx matches four groups of one to three digits, separated by dots. Note that this is a loose check: it does not verify that each of the four numbers is below 256. You can learn more about RegEx by visiting the Introduction to Regular Expressions tutorial.
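      You can exercise the same pattern outside Terraform. This quick sketch uses grep’s POSIX ERE syntax, where Terraform’s \\d becomes [0-9]; like the Terraform condition, it is a loose match:

```shell
# The pattern from the validation block, rewritten as a POSIX ERE for grep
pattern='[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'

# Helper: report whether a candidate string contains an IP-shaped substring
check() { echo "$1" | grep -Eq "$pattern" && echo valid || echo invalid; }

check "192.168.0.1"   # prints "valid"
check "not-an-ip"     # prints "invalid"
```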

      You now know how to specify a default value for a variable, how to set its type, and how to enable input validation using RegEx expressions.


      In this tutorial, you’ve learned to troubleshoot Terraform by enabling debug mode and setting the log verbosity to appropriate levels. You’ve also learned about some of the advanced features of variables, such as declaring validation procedures and setting good defaults. Leaving out default values is a common pitfall that may cause strange issues further along in your project’s development.

      For the stability of your project, locking third-party module and provider versions is recommended, since it leaves you with a stable system and an upgrade path when it becomes necessary.

      Verification of input values for variables is not confined to matching with regex. For more built-in functions that you can make use of, visit the official docs.

      To learn more about Terraform, check out our How To Manage Infrastructure with Terraform series.


      How To Troubleshoot Common HAProxy Errors

      Part of the Series:
      Common HAProxy Errors

      This tutorial series explains how to troubleshoot and fix some of the most common errors that you may encounter when using the HAProxy TCP and HTTP proxy server.

      Each tutorial in this series includes descriptions of common HAProxy configuration, network, filesystem, or permission errors. The series begins with an overview of the commands and log files that you can use to troubleshoot HAProxy. Subsequent tutorials examine specific errors in detail.


      There are three main commands and a common log location that you can use to get started troubleshooting HAProxy errors. Generally, when you are troubleshooting HAProxy, you will use these commands in the order indicated here, and then examine the log file for specific diagnostic data.

      The commands and log that you will commonly use to troubleshoot HAProxy across most Linux distributions are:

      • systemctl – Used to control and interact with Linux services via the systemd service manager.
      • journalctl – Used to query and view the logs that are generated by systemd.
      • haproxy – When troubleshooting, this command is used to check HAProxy’s configuration.
      • /var/log/haproxy.log – This file contains log entries from HAProxy itself detailing TCP and HTTP traffic that is being handled by the server.

      The following sections describe these commands in more detail, how to use them, and where in HAProxy’s logs you can find additional information about errors.

      systemctl Commands for HAProxy

      To troubleshoot common HAProxy errors using the systemd service manager, the first step is to inspect the state of the HAProxy processes on your system. The following systemctl commands will query systemd for the state of HAProxy’s processes on most Linux distributions.

      • sudo systemctl status haproxy.service -l --no-pager

      The -l flag will ensure that output is not truncated or ellipsized. The --no-pager flag will make sure that output will go directly to your terminal without requiring any interaction on your part to view it. If you omit the --no-pager flag you will be able to scroll through the output using arrow keys, or the page up and down keys. To quit from the pager use the q key. You should receive output like this:


      ● haproxy.service - HAProxy Load Balancer
           Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled)
           Active: active (running) since Thu 2020-08-20 19:30:11 UTC; 5s ago
             Docs: man:haproxy(1)
                   file:/usr/share/doc/haproxy/configuration.txt.gz
          Process: 487 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q $EXTRAOPTS (code=exited, status=0/SUCCESS)
         Main PID: 488 (haproxy)
            Tasks: 2 (limit: 2344)
      . . .
      Aug 19 21:31:46 d6cdd0c71489 systemd[1]: Started HAProxy Load Balancer.

      Your output may be slightly different depending on which Linux distribution you are using, but in any case, make a note of the Active line in the output. If your HAProxy server does not show active (running) as in the example output but you expect it should, there may be an error. Typically, if there is a problem, you will have a line like the following in your output (note the failed status):

      Example Error Output

      Active: failed (Result: exit-code) since Thu 2020-08-20 19:32:26 UTC; 6s ago

      If there is a problem with your HAProxy process or configuration you can troubleshoot it further using the journalctl command.

      journalctl Commands for HAProxy

      To inspect the systemd logs for HAProxy, you can use the journalctl command. The systemd logs for HAProxy will usually indicate whether there is a problem with starting or managing the HAProxy process.

      These logs are separate from HAProxy’s request and error logs. journalctl displays logs from systemd that describe the HAProxy service itself, from startup to shutdown, along with any process errors that may be encountered along the way.

      • sudo journalctl -u haproxy.service --since today --no-pager

      The --since today flag will limit the output of the command to log entries beginning at 00:00:00 of the current day only. Using this option will help restrict the volume of log entries that you need to examine when checking for errors. You should receive output like the following (there may be a few extra lines between the Starting and Started lines depending on your Linux distribution):


      Aug 20 19:37:08 d6cdd0c71489 systemd[1]: Starting HAProxy Load Balancer...
      . . .
      Aug 20 19:37:08 d6cdd0c71489 systemd[1]: Started HAProxy Load Balancer.

      If there is an error, you will have a line in the output that is similar to the following, with the main difference between Linux distributions being the yourhostname portion:

      Example Error Output

      Aug 20 19:32:25 yourhostname systemd[1]: Failed to start HAProxy Load Balancer.

      If your HAProxy server has errors in the journalctl logs like the previous example, then the next step to troubleshoot possible issues is investigating HAProxy’s configuration using the haproxy command line tool.

      Troubleshooting with haproxy

      To troubleshoot HAProxy configuration issues, use the haproxy -c command. The tool will parse your HAProxy files and detect any errors or missing settings before attempting to start the server.

      Run the command like this on Ubuntu, Debian, CentOS, and Fedora based distributions. Be sure to change the path to the configuration file if you are using a different filename or location:

      • sudo haproxy -c -f /etc/haproxy/haproxy.cfg

      A working HAProxy configuration will result in output like the following:


      Configuration file is valid

      If there is an error in your HAProxy configuration, like a typo or misplaced directive, haproxy -c will detect it and attempt to notify you about the problem.

      For example, attempting to use the bind directive in haproxy.cfg in the wrong location will result in messages like the following:

      Example Error Output

      [ALERT] 232/194354 (199) : parsing [/etc/haproxy/haproxy.cfg:13] : unknown keyword 'bind' in 'global' section
      [ALERT] 232/194354 (199) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
      [ALERT] 232/194354 (199) : Fatal errors found in configuration.

      In this example, the bind directive is misplaced inside a global configuration section, so HAProxy generates the unknown keyword error. The message also includes the line number, 13, so that you can edit the file and fix or remove the erroneous line without having to search through the file.
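      For reference, here is a minimal sketch of a configuration in which bind sits where HAProxy expects it, inside a frontend section rather than global; the names, address, and timeouts are illustrative:

```
global
    daemon

defaults
    mode    http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend main
    bind *:80
    default_backend app

backend app
    server app1 127.0.0.1:8080
```

      Running haproxy -c -f against a file laid out like this should report that the configuration is valid.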

      Learning how to use haproxy -c to detect and fix errors is useful when you are troubleshooting an existing error, or before you reload HAProxy with an edited configuration that may contain errors.

      HAProxy Log Files

      HAProxy log files are a very helpful resource for troubleshooting. Generally, any error that you receive in a browser or other HTTP client will have a corresponding entry in HAProxy’s logs. Sometimes HAProxy will also output errors related to configuration and other debugging information to its log files.

      On Ubuntu and Debian based Linux distributions, the haproxy package includes scripts that configure log output in /var/log/haproxy.log.

      On CentOS, Fedora, and other RedHat-derived Linux distributions, haproxy does not output to a log file by default. To log HAProxy output logs to /var/log/haproxy.log, follow this quickstart tutorial, How To Configure HAProxy Logging with Rsyslog on CentOS 8.

      When you are troubleshooting HAProxy using its log file, examine /var/log/haproxy.log for errors using a tool like tail or less. For example, to view the last two lines of the log using tail, run the following command:

      • sudo tail -n 2 /var/log/haproxy.log

      An example error will resemble the following lines, regardless of which Linux distribution you are using to run your HAProxy server:

      Log Examples

      Aug 20 19:36:21 d6cdd0c71489 haproxy[19202]: [ALERT] 258/134605 (19202) : Proxy 'app', server 'app1' [/etc/haproxy/haproxy.cfg:88] verify is enabled by default but no CA file specified. If you're running on a LAN where you're certain to trust the server's certificate, please set an explicit 'verify none' statement on the 'server' line, or use 'ssl-server-verify none' in the global section to disable server-side verifications by default.
      Aug 20 19:36:22 d6cdd0c71489 haproxy[4451]: [20/Aug/2020:19:36:22.288] main app/<NOSRV> 0/-1/-1/-1/1 503 212 - - SC-- 1/1/0/0/0 0/0 "GET / HTTP/1.1"

      These example lines are just for illustration purposes. If you are diagnosing errors with your HAProxy server, chances are the lines in your logs will have different contents than these. Some lines will include success responses and other non-critical diagnostic entries.

      Regardless of your Linux distribution, the format of the lines in your HAProxy logs will include any HTTP status codes that are returned to clients, along with requesting IPs and the status of backend servers.
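      For instance, you can pull the status code field out of a log line with standard tools. This sketch reuses the 503 line from the example above:

```shell
# One request line in HAProxy's HTTP log format (the 503 example from earlier)
logline='Aug 20 19:36:22 d6cdd0c71489 haproxy[4451]: [20/Aug/2020:19:36:22.288] main app/<NOSRV> 0/-1/-1/-1/1 503 212 - - SC-- 1/1/0/0/0 0/0 "GET / HTTP/1.1"'

# Extract the first space-delimited three-digit field, which is the HTTP status code
status=$(echo "$logline" | grep -oE ' [1-5][0-9]{2} ' | head -n 1 | tr -d ' ')
echo "$status"   # prints "503"
```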

      Once you have an idea of what might be causing problems with your HAProxy server you can continue researching and troubleshooting the issue. The HTTP status code and text description are especially useful, since they give you explicit and specific terms that you can use to narrow down the range of possible causes of a problem.


      Troubleshooting HAProxy errors can range from diagnosing errors with the service itself, to locating misconfigured options for modules, or to examining customized access control rules in detail. This introduction to diagnosing issues with HAProxy explained how to use a number of utilities to help narrow down the possible causes of errors. Usually, you will use these utilities in the same order, although you can always skip some, or start directly with examining logs if you have a general idea of what the problem might be.

      However, as a general sequence for troubleshooting, it helps to be methodical and use these tools in the order described. Start troubleshooting with systemctl to examine the state of the HAProxy server. If you need more information, examine the systemd logs for HAProxy using the journalctl command. If the issue is still not apparent after checking journalctl, testing HAProxy’s configuration using haproxy -c -f /etc/haproxy/haproxy.cfg is the next step. Finally, for in-depth troubleshooting, examining HAProxy’s log files will usually indicate a specific error, with helpful diagnostic messages and error codes.

      The rest of the tutorials in this series will examine some common errors that you may encounter when using HAProxy in more detail.


      How to Troubleshoot and Fix a Brute-Force Attack in WordPress on a DigitalOcean Droplet


      While running a WordPress installation through a hosting service can be a convenient way to start a website, it’s not without security vulnerabilities that may sometimes be hard to troubleshoot. Brute-force attacks, cyberattacks that rapidly guess logins, passwords, and other personal information, happen when these vulnerabilities are exploited, and can sometimes originate from your own website.

      When facing brute-force attacks from your Droplets on DigitalOcean, it’s imperative to remove the threat quickly. While there are a number of ways to identify and remove compromised files vulnerable to attack, this tutorial aims to provide you with some steps to help you detect, resolve, and secure your WordPress installation(s) across DigitalOcean Droplets from vulnerabilities in the future.

      Step 1: Identify the Source of the Brute-Force Attack

      The first step in troubleshooting an issue with a brute-force attack initiated from your Droplet is to identify the malware responsible for the malicious traffic. There are numerous tools and options available, but ClamAV is a good tool for an initial attempt to identify and remove the malware.

      Most Linux distributions have ClamAV in their package management system, and typically you’ll need to install ClamAV and then run it.

      • For Ubuntu, Debian, and most Debian-based distributions, you can run:
      • sudo apt-get install clamav clamav-daemon
      • For CentOS 8, you need to enable the EPEL (Extra Packages for Enterprise Linux) repo, an official repository of packages supported by the Fedora Project, and then install ClamAV.

      You can do so with a single command:

      • dnf --enablerepo=epel -y install clamav clamav-update

      Once ClamAV is installed, you can scan your system with:

      • clamscan --infected --recursive /path/to/wordpress/sites

      Replace the path with the correct path to your WordPress site. The --recursive parameter makes the command recurse through subdirectories, and the path in this example points to the root folder where all WordPress installations are located, so a single command scans all your WordPress sites.

      ClamAV will return a list of all files it finds suspicious, but will not take any action yet. After investigating the files ClamAV flagged and confirming they can be safely removed without causing further damage to your system, you can re-run the command with the --remove option to delete them.

      --remove will delete any files it finds suspicious with no input from you, so it is NOT RECOMMENDED to run with --remove as your first scan until you can confirm the results.

      In cases where ClamAV does not find any malware, you will need to manually investigate and find the malware. While there are several ways to do this, a good starting point is to find and identify any recently uploaded files, based on the file’s timestamp information.

      To do this, use the ‘find’ command:

      • find /path/to/wordpress/site -mtime -DAYS

      To use this command, replace the /path/to/wordpress/site with the file path to your WordPress site, and -DAYS with how many days to go back. For example, if you wanted to look back 1 day, it would be -1; to look back 10 days, it would be -10.

      Take time to investigate any files that were uploaded or modified that you’re unaware of.
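      If you want to see how the -mtime filter behaves before pointing it at a live site, you can try it in a scratch directory first. This sketch assumes GNU touch for the -d flag:

```shell
# Demo of -mtime filtering in a temporary directory, not a real WordPress path
tmp=$(mktemp -d)
touch "$tmp/recent.php"                 # modified just now
touch -d '30 days ago' "$tmp/old.php"   # modified a month ago

# Only files modified within the last day are reported
found=$(find "$tmp" -type f -mtime -1)
echo "$found"

rm -r "$tmp"
```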

      Step 2: Update your WordPress Installation

      After identifying the malware, the next step to prevent malicious attacks from reoccurring is to update your WordPress installation. Patch WordPress itself along with any installed themes or plugins, and if the compromise was inside a plugin or theme’s install directory, remove and reinstall that component. You may be able to remove all malicious files by hand, but in most cases a clean installation of a compromised component is preferred.

      You can perform these updates from within WordPress’ administration UI in most cases, which doesn’t require the use of any additional tools. WordPress also offers an automatic update option that you’re encouraged to enable in order to reduce the time your websites might be vulnerable to newly discovered security issues.

      Another helpful piece of advice in preventing malicious attacks is to update all components, even the ones that are marked as inactive. In some situations, even disabled plugins and themes may be accessible and able to be compromised if not kept updated. If you’re sure you don’t need a theme or plugin, the best course of action would be to remove it in its entirety.

      In some cases, a theme or plugin may be abandoned by its author: even with the most recent version installed, it may contain an issue that will never be fixed. If an up-to-date component was still the source of a compromise, you may need to consider substituting it with an alternative that is actively maintained.

      Step 3: Secure Your WordPress Installation Against Malicious Attacks

      Once you have both removed any malicious files and ensured all components are updated, it’s time to secure your WordPress installation. The next step we recommend is to change all passwords for users that have access to the administration UI, especially those that have full admin rights, or the ability to upload or modify file contents.

      Checking your filesystem permissions, if you’re not aware of the current configuration, is also an important step in securing your WordPress installation, as the wrong permissions can allow file read and write access you didn’t intend. WordPress provides a good outline of what the settings should be and how to update them in its official documentation.

      As a step in securing your Droplet’s installation, you can also install a plugin to limit the amount of failed login attempts, which dramatically reduces the risk of brute force attacks. The wp-limit-login-attempts plugin is a popular option to use.

      Finally, consider using a WordPress security plugin like Jetpack or Wordfence. These plugins help actively combat intrusion attempts and provide a final layer of security to ensure that your site is only used for what you intend.

      An alternative to using a server-side plugin like Jetpack or Wordfence would be to investigate whether Cloudflare’s caching and Web Application Firewall (WAF) service might be a good fit for your specific use case. To learn more about this option, check out Cloudflare’s documentation.


      Navigating troubleshooting options when brute-force attacks originate from your Droplets can be cumbersome, but in this tutorial, we shared some steps to help you detect, resolve, and secure your WordPress installation(s) across Droplets. For more security-related information to help manage Droplets, check out our Recommended Security Measures article.
