
      How To Protect Sensitive Data in Terraform


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

Terraform provides automation to provision your infrastructure in the cloud. To do this, Terraform authenticates with cloud providers (and other providers) to deploy resources and perform the planned actions. However, the information Terraform needs for authentication, such as API keys or passwords for database users, is sensitive data that you should always keep secret, since it unlocks access to your services.

If a malicious third party were to acquire this sensitive information, they could breach your security by presenting themselves as a known, trusted user. They could then modify, delete, and replace the resources and services available under the scope of the obtained keys. To prevent this from happening, it is essential to properly secure your project and safeguard its state file, which stores all the project secrets.

      By default, Terraform stores the state file locally in the form of unencrypted JSON, allowing anyone with access to the project files to read the secrets. While a solution to this is to restrict access to the files on disk, another option is to store the state remotely in a backend that encrypts the data automatically, such as DigitalOcean Spaces.

      In this tutorial, you’ll hide sensitive data in outputs during execution and store your state in a secure cloud object storage, which encrypts data at rest. You’ll use DigitalOcean Spaces in this tutorial as your cloud object storage. You’ll also use tfmask, which is an open source program written in Go that dynamically censors values in the Terraform execution log output.

      Prerequisites

• A DigitalOcean Personal Access Token, which you can create via the DigitalOcean control panel. You can find instructions in How to Generate a Personal Access Token.
      • Terraform installed on your local machine and a project set up with the DigitalOcean provider. Complete Step 1 and Step 2 of the How To Use Terraform with DigitalOcean tutorial, and be sure to name the project folder terraform-sensitive, instead of loadbalance. During Step 2, do not include the pvt_key variable and the SSH key resource.
      • A DigitalOcean Space with API keys (access and secret). To learn how to create a DigitalOcean Space and API keys, see How To Create a DigitalOcean Space and API Key.

      Note: This tutorial has specifically been tested with Terraform 0.13.

      Marking Outputs as sensitive

      In this step, you’ll hide outputs in code by setting their sensitive parameter to true. This is useful when secret values are part of the Terraform output that you’re storing indefinitely, or you need to share the output logs beyond your team for analysis.

Assuming you are in the terraform-sensitive directory, which you created as part of the prerequisites, you'll define a Droplet and an output showing its IP address. You'll store them in a file named droplets.tf, so create and open it for editing by running:

• nano droplets.tf

      Add the following lines:

      terraform-sensitive/droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web-1"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      
      output "droplet_ip_address" {
        value = digitalocean_droplet.web.ipv4_address
      }
      

This code will deploy a Droplet called web-1 in the fra1 region, running Ubuntu 18.04 with 1 GB of RAM and one CPU core. The droplet_ip_address output exposes the Droplet's IP address, which you'll receive in the Terraform log.

      To deploy this Droplet, execute the code by running the following command:

      • terraform apply -var "do_token=${DO_PAT}"

Terraform will report the actions it plans to take:

      Output

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.web will be created
  + resource "digitalocean_droplet" "web" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + id                   = (known after apply)
      + image                = "ubuntu-18-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + ipv6_address_private = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "web-1"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "fra1"
      + resize_disk          = true
      + size                 = "s-1vcpu-1gb"
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
...

      Enter yes when prompted. You’ll receive the following output:

      Output

digitalocean_droplet.web: Creating...
...
digitalocean_droplet.web: Creation complete after 33s [id=216255733]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

droplet_ip_address = your_droplet_ip_address

The IP address appears in the output. If you're sharing this output with others, or if it will be publicly available because of automated deployment processes, it's important to take steps to hide this data.

      To censor it, you’ll need to set the sensitive attribute of the droplet_ip_address output to true.

Open droplets.tf for editing:

• nano droplets.tf

      Add the highlighted line:

      terraform-sensitive/droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web-1"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      
      output "droplet_ip_address" {
        value = digitalocean_droplet.web.ipv4_address
        sensitive = true
      }
      

      Save and close the file when you’re done.

      Apply the project again by running:

      • terraform apply -var "do_token=${DO_PAT}"

      The output will be:

      Output

digitalocean_droplet.web: Refreshing state... [id=216255733]

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

droplet_ip_address = <sensitive>

You’ve now explicitly censored the IP address, the value of the output. Censoring outputs is useful when the Terraform logs end up in a public space, or when you want the values to remain hidden without deleting them from the code. You’ll also want to censor outputs that contain passwords and API tokens, as they are sensitive information as well.
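For example, here is a minimal sketch of censoring a database password, assuming your project defined a hypothetical digitalocean_database_cluster resource named db:

output "db_password" {
  # Hide the password from plan/apply logs; note it remains readable in the state file
  value     = digitalocean_database_cluster.db.password
  sensitive = true
}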

      You’ve now hidden the values of the defined outputs by marking them as sensitive. In the next step, you’ll configure Terraform to store your project’s state in the encrypted cloud, instead of locally.

      Storing State in an Encrypted Remote Backend

The state file stores all information about your deployed infrastructure, including all its internal relationships and secrets. By default, it’s stored in plaintext, locally on the disk. Storing it remotely, in the cloud, provides a higher level of security: if the cloud storage service supports encryption at rest, the state file is stored encrypted at all times, so potential attackers won’t be able to gather information from it. Storing the state file encrypted remotely is different from marking outputs as sensitive; it changes where and how Terraform stores the state, not what Terraform displays.

      You’ll now configure your project to store the state file in a DigitalOcean Space. As a result it will be encrypted at rest and protected with TLS in transit.

By default, the Terraform state file is called terraform.tfstate and is located in the root of every initialized directory. You can view its contents by running:

• cat terraform.tfstate

      The contents of the file will be similar to this:

      {
        "version": 4,
        "terraform_version": "0.13.1",
        "serial": 3,
        "lineage": "926017f6-d7be-e1fa-99e4-f2a988026ed4",
        "outputs": {
          "droplet_ip_address": {
            "value": "...",
            "type": "string",
            "sensitive": true
          }
        },
        "resources": [
          {
            "mode": "managed",
            "type": "digitalocean_droplet",
            "name": "web",
            "provider": "provider["registry.terraform.io/digitalocean/digitalocean"]",
            "instances": [
              {
                "schema_version": 1,
                "attributes": {
                  "backups": false,
                  "created_at": "...",
                  "disk": 25,
                  "id": "216255733",
                  "image": "ubuntu-18-04-x64",
                  "ipv4_address": "...",
                  "ipv4_address_private": "10.135.0.3",
                  "ipv6": false,
                  "ipv6_address": "",
                  "ipv6_address_private": null,
                  "locked": false,
                  "memory": 1024,
                  "monitoring": false,
                  "name": "web-1",
                  "price_hourly": 0.00744,
                  "price_monthly": 5,
                  "private_networking": true,
                  "region": "fra1",
                  "resize_disk": true,
                  "size": "s-1vcpu-1gb",
                  "ssh_keys": null,
                  "status": "active",
                  "tags": [],
                  "urn": "do:droplet:216255733",
                  "user_data": null,
                  "vcpus": 1,
                  "volume_ids": [],
                  "vpc_uuid": "fc52519c-dc84-11e8-8b13-3cfdfea9f160"
                },
                "private": "..."
              }
            ]
          }
        ]
      }
      
      

      The state file contains all the resources you’ve deployed, as well as all outputs and their computed values. Gaining access to this file is enough to compromise the entire deployed infrastructure. To prevent that from happening, you can store it encrypted in the cloud.

Terraform supports multiple backends, which are storage and retrieval mechanisms for the state. Examples include local for local storage, pg for the Postgres database, and s3 for S3-compatible storage, which you’ll use to connect to your Space.
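For instance, a sketch of what the pg backend pointing at a Postgres database could look like (the connection string is hypothetical):

terraform {
  backend "pg" {
    # Hypothetical connection string to a Postgres instance
    conn_str = "postgres://user:password@db.example.com/terraform_backend"
  }
}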

The backend configuration is specified under the main terraform block, which is currently in provider.tf. Open it for editing by running:

• nano provider.tf

      Add the following lines:

      terraform-sensitive/provider.tf

      terraform {
        required_providers {
          digitalocean = {
            source = "digitalocean/digitalocean"
            version = "1.22.2"
          }
        }
      
        backend "s3" {
          key      = "state/terraform.tfstate"
          bucket   = "your_space_name"
          region   = "us-west-1"
          endpoint = "https://spaces_endpoint"
          skip_region_validation      = true
          skip_credentials_validation = true
          skip_metadata_api_check     = true
        }
      }
      
      variable "do_token" {}
      
      provider "digitalocean" {
        token = var.do_token
      }
      

The s3 backend block first specifies the key, which is the location of the Terraform state file in the Space. Passing in state/terraform.tfstate means that you will store it as terraform.tfstate under the state directory.

The endpoint parameter tells Terraform where the Space is located, and bucket defines the exact Space to connect to. The skip_region_validation, skip_credentials_validation, and skip_metadata_api_check parameters disable validations that do not apply to DigitalOcean Spaces. Note that region must be set to a valid AWS region name (such as us-west-1) to satisfy the s3 backend, even though it has no bearing on Spaces; the Space’s actual location is determined by the endpoint.
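For reference, here is a sketch of the same block with hypothetical values filled in (an example bucket name and the endpoint of a Space in the fra1 region):

  backend "s3" {
    key      = "state/terraform.tfstate"
    bucket   = "example-terraform-state"
    region   = "us-west-1"
    endpoint = "https://fra1.digitaloceanspaces.com"
    skip_region_validation      = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
  }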

      Remember to put in your bucket name and the Spaces endpoint, including the region, which you can find in the Settings tab of your Space. When you are done customizing the endpoint, save and close the file.

      Next, put the access and secret keys for your Space in environment variables, so you’ll be able to reference them later. Run the following commands, replacing the highlighted placeholders with your key values:

      • export SPACE_ACCESS_KEY="your_space_access_key"
      • export SPACE_SECRET_KEY="your_space_secret_key"

      Then, configure Terraform to use the Space as its backend by running:

      • terraform init -backend-config "access_key=$SPACE_ACCESS_KEY" -backend-config "secret_key=$SPACE_SECRET_KEY"

The -backend-config argument provides a way to set backend parameters at runtime, which you are using here to set the Space keys. You’ll be asked whether you wish to copy the existing state to the cloud, or start anew:

      Output

Initializing the backend...
Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "local" backend to the
  newly configured "s3" backend. No existing state was found in the newly
  configured "s3" backend. Do you want to copy this state to the new "s3"
  backend? Enter "yes" to copy and "no" to start with an empty state.

      Enter yes when prompted. The rest of the output will be the following:

      Output

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Using previously-installed digitalocean/digitalocean v1.22.2

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

      Your project will now store its state in your Space. If you receive an error, double-check that you’ve provided the correct keys, endpoint, and bucket name.
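As an alternative to passing each parameter on the command line, Terraform can also read backend parameters from a file via -backend-config=PATH. A sketch, assuming a hypothetical file named spaces.tfbackend that you keep out of version control:

# spaces.tfbackend (do not commit this file)
access_key = "your_space_access_key"
secret_key = "your_space_secret_key"

You would then initialize with:

• terraform init -backend-config=spaces.tfbackend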

The local state file has been emptied, which you can check by showing its contents:

• cat terraform.tfstate

      There will be no output, as expected.

      You can try modifying the Droplet definition and applying it to check that the state is still being correctly managed.

Open droplets.tf for editing:

• nano droplets.tf

      Modify the highlighted lines:

      terraform-sensitive/droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "test-droplet"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      
      output "droplet_ip_address" {
        value = digitalocean_droplet.web.ipv4_address
        sensitive = false
      }
      

      Save and close the file, then apply the project by running:

      • terraform apply -var "do_token=${DO_PAT}"

      You will receive the following output:

      Output

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # digitalocean_droplet.web will be updated in-place
  ~ resource "digitalocean_droplet" "web" {
        backups              = false
        created_at           = "2020-11-11T18:43:03Z"
        disk                 = 25
        id                   = "216419273"
        image                = "ubuntu-18-04-x64"
        ipv4_address         = "159.89.21.92"
        ipv4_address_private = "10.135.0.4"
        ipv6                 = false
        locked               = false
        memory               = 1024
        monitoring           = false
      ~ name                 = "web-1" -> "test-droplet"
        price_hourly         = 0.00744
        price_monthly        = 5
        private_networking   = true
        region               = "fra1"
        resize_disk          = true
        size                 = "s-1vcpu-1gb"
        status               = "active"
        tags                 = []
        urn                  = "do:droplet:216419273"
        vcpus                = 1
        volume_ids           = []
        vpc_uuid             = "fc52519c-dc84-11e8-8b13-3cfdfea9f160"
    }

Plan: 0 to add, 1 to change, 0 to destroy.
...

      Enter yes when prompted, and Terraform will apply the new configuration to the existing Droplet, meaning that it’s correctly communicating with the Space its state is stored on:

      Output

digitalocean_droplet.web: Modifying... [id=216419273]
digitalocean_droplet.web: Still modifying... [id=216419273, 10s elapsed]
digitalocean_droplet.web: Modifications complete after 12s [id=216419273]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Outputs:

droplet_ip_address = your_droplet_ip_address

      You’ve configured the s3 backend for your project, so that you’re storing the state encrypted in the cloud, in a DigitalOcean Space. In the next step, you’ll use tfmask, a tool that will dynamically censor all sensitive outputs and information in Terraform logs.

      Using tfmask in CI/CD Environments

In this section, you’ll download tfmask and use it to dynamically censor sensitive data from the whole output log Terraform generates when executing a command. It censors the values of variables and parameters whose names match a regular expression that you provide.

Dynamically matching names is possible when they follow a pattern (for example, when they contain the word password or secret). The advantage of using tfmask over only marking outputs as sensitive is that it also censors matched parts of the resource declarations that Terraform prints out while executing. Hiding these is imperative when the execution logs may be public, such as in automated CI/CD environments.

      Compiled binaries of tfmask are available at its releases page on GitHub. For Linux, run the following command to download it:

      • sudo curl -L https://github.com/cloudposse/tfmask/releases/download/0.7.0/tfmask_linux_amd64 -o /usr/bin/tfmask

      Mark it as executable by running:

      • sudo chmod +x /usr/bin/tfmask

tfmask works on the output of terraform plan and terraform apply, masking the values of all variables whose names match a regular expression that you specify. You supply the masking character and the regular expression using the TFMASK_CHAR and TFMASK_VALUES_REGEX environment variables, respectively.

      You’ll now use tfmask to censor the name and ipv4_address of the Droplet that Terraform would deploy. First, you’ll need to set the mentioned environment variables by running:

      • export TFMASK_CHAR="*"
      • export TFMASK_VALUES_REGEX="(?i)^.*(ipv4_address|name).*$"

This regular expression will match all lines containing ipv4_address or name, ignoring case.
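If you anticipate other sensitive fields appearing later, you can broaden the pattern up front. For example, a hypothetical extension of the expression above that also matches names mentioning password, secret, or token:

• export TFMASK_VALUES_REGEX="(?i)^.*(ipv4_address|name|password|secret|token).*$"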

To make Terraform plan an action for your Droplet, modify its definition. Open droplets.tf for editing:

• nano droplets.tf

      Modify the Droplet’s name:

      terraform-sensitive/droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      
      output "droplet_ip_address" {
        value = digitalocean_droplet.web.ipv4_address
        sensitive = false
      }
      

      Save and close the file.

      Because you’ve changed an attribute of the Droplet, Terraform will show its full definition in its output. Plan the configuration, but pipe it to tfmask to censor variables according to the regex expression:

      • terraform plan -var "do_token=${DO_PAT}" | tfmask

      You’ll receive output similar to the following:

      Output

...
digitalocean_droplet.web: Refreshing state... [id=216419273]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # digitalocean_droplet.web will be updated in-place
  ~ resource "digitalocean_droplet" "web" {
        backups              = false
        created_at           = "2020-11-11T18:43:03Z"
        disk                 = 25
        id                   = "216419273"
        image                = "ubuntu-18-04-x64"
        ipv4_address         = "************"
        ipv4_address_private = "**********"
        ipv6                 = false
        locked               = false
        memory               = 1024
        monitoring           = false
      ~ name                 = "**********************************"
        price_hourly         = 0.00744
        price_monthly        = 5
        private_networking   = true
        region               = "fra1"
        resize_disk          = true
        size                 = "s-1vcpu-1gb"
        status               = "active"
        tags                 = []
        urn                  = "do:droplet:216419273"
        vcpus                = 1
        volume_ids           = []
        vpc_uuid             = "fc52519c-dc84-11e8-8b13-3cfdfea9f160"
    }

Plan: 0 to add, 1 to change, 0 to destroy.
...

Note that tfmask has censored the values for name, ipv4_address, and ipv4_address_private using the character you specified in the TFMASK_CHAR environment variable, because their names match the regular expression.

Censoring values in the Terraform logs this way is very useful for CI/CD, where the logs may be publicly available. The benefit of tfmask is that you have full control over which variables to censor, using the regular expression. You can also specify keywords you want to censor that may not currently exist, but that you anticipate using in the future.

      You can destroy the deployed resources by running the following command and entering yes when prompted:

      • terraform destroy -var "do_token=${DO_PAT}"

      Conclusion

In this article, you’ve worked with several ways to hide and secure sensitive data in your Terraform project. The first measure, using sensitive to hide values from the outputs, is useful when only the logs are accessible; the values themselves stay present in the state stored on disk.

To remedy that, you can opt to store the state file remotely, which you’ve achieved with DigitalOcean Spaces. This allows you to make use of encryption at rest. You also used tfmask, a tool that censors the values of variables, matched using a regular expression, during terraform plan and terraform apply.

You can also check out HashiCorp Vault to store secrets and secret data. It can be integrated with Terraform to inject secrets into resource definitions, so you’ll be able to connect your project with your existing Vault workflow. You may want to check out our tutorial on How To Build a Hashicorp Vault Server Using Packer and Terraform on DigitalOcean.

      For more on using Terraform, read other articles in our How To Manage Infrastructure with Terraform series.




      How To Create Reusable Infrastructure with Terraform Modules and Templates


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

One of the main benefits of Infrastructure as Code (IaC) is reusing parts of the defined infrastructure. In Terraform, you can use modules to encapsulate logically connected components into one entity and customize them using the input variables you define. By using modules to define your infrastructure at a high level, you can separate development, staging, and production environments by passing different values to the same modules, which minimizes code duplication and maximizes conciseness.

You are not limited to using only your custom modules. Terraform Registry is integrated into Terraform and lists providers, which you can incorporate into your project right away by defining them in the required_providers section, as well as modules, which you can pull in by referencing their Registry path in a module block. Referencing public modules can speed up your workflow and reduce code duplication. If you have a useful module and would like to share it with the world, you can look into publishing it on the Registry for other developers to use.
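For example, pulling a module in from the Registry looks like the following sketch (the Registry path and input here are hypothetical):

module "droplet_fleet" {
  # Hypothetical module published on the Terraform Registry
  source  = "example-org/droplet-fleet/digitalocean"
  version = "~> 1.0"

  # Input variable exposed by the module
  droplet_count = 3
}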

In this tutorial, we’ll consider some of the ways of defining and reusing code in Terraform projects. You’ll reference modules from the Terraform Registry, separate development and production environments using modules, learn about templates and how they are used, and specify resource dependencies explicitly using the depends_on meta argument.

      Prerequisites

      • A DigitalOcean Personal Access Token, which you can create via the DigitalOcean control panel. You can find instructions to do that at: How to Generate a Personal Access Token.
      • Terraform installed on your local machine and a project set up with the DigitalOcean provider. Complete Step 1 and Step 2 of the How To Use Terraform with DigitalOcean tutorial and be sure to name the project folder terraform-reusability, instead of loadbalance. During Step 2, do not include the pvt_key variable and the SSH key resource.
      • The droplet-lb module available under modules in terraform-reusability. Follow the How to Build a Custom Module tutorial and work through it until the droplet-lb module is functionally complete. (That is, until the cd ../.. command in the Creating a Module section.)
      • Knowledge of Terraform project structuring approaches. For more information, see How To Structure a Terraform Project.
      • (Optional) Two separate domains whose nameservers are pointed to DigitalOcean at your registrar. Refer to the How To Point to DigitalOcean Nameservers From Common Domain Registrars tutorial to set this up. Note that you don’t need to do this if you don’t plan on deploying the project you’ll create through this tutorial.

      Note: We have specifically tested this tutorial using Terraform 0.13.

      Separating Development and Production Environments

In this section, you’ll use modules to achieve separation between your target deployment environments. You’ll arrange these according to the structure of a more complex project. You’ll first create a project with two modules, one of which will define the Droplets and Load Balancers, while the other will set up the DNS domain records. Afterward, you’ll write configurations for two different environments (dev and prod), which will call the same modules.

      Creating the dns-records module

      As part of the prerequisites, you have set up the project initially under terraform-reusability and created the droplet-lb module in its own subdirectory under modules. You’ll now set up the second module, called dns-records, containing variables, outputs, and resource definitions. Assuming you’re in terraform-reusability, create dns-records by running:

      • mkdir modules/dns-records

Navigate to it:

• cd modules/dns-records

This module will comprise the definitions for your domain and the DNS records that you’ll later point to the Load Balancers. You’ll first define the variables, which will become the inputs that this module exposes. You’ll store them in a file called variables.tf. Create it for editing:

• nano variables.tf

      Add the following variable definitions:

      terraform-reusability/modules/dns-records/variables.tf

      variable "domain_name" {}
      variable "ipv4_address" {}
      

Save and close the file. You’ll now define the domain and the accompanying A and CNAME records in a file named records.tf. Create and open it for editing by running:

• nano records.tf

      Add the following resource definitions:

      terraform-reusability/modules/dns-records/records.tf

      resource "digitalocean_domain" "domain" {
        name = var.domain_name
      }
      
      resource "digitalocean_record" "domain_A" {
        domain = digitalocean_domain.domain.name
        type   = "A"
        name   = "@"
        value  = var.ipv4_address
      }
      
      resource "digitalocean_record" "domain_CNAME" {
        domain = digitalocean_domain.domain.name
        type   = "CNAME"
        name   = "www"
        value  = var.ipv4_address
      }
      

First, you define the domain in your DigitalOcean account for your domain name. DigitalOcean will automatically add the three DigitalOcean nameservers as NS records. Then, you define an A record for your domain, routing it to the IP address supplied in the ipv4_address variable (the @ as the record’s name signifies the bare domain name, without subdomains). For the sake of completeness, the CNAME record that follows specifies that the www subdomain should point to the apex domain, and therefore to the same IP address. Save and close the file when you’re done.

Next, you’ll define the outputs for this module. The outputs will show the FQDN (fully qualified domain name) of the created records. Create and open outputs.tf for editing:

• nano outputs.tf

      Add the following lines:

      terraform-reusability/modules/dns-records/outputs.tf

      output "A_fqdn" {
        value = digitalocean_record.domain_A.fqdn
      }
      
      output "CNAME_fqdn" {
        value = digitalocean_record.domain_CNAME.fqdn
      }
      

      Save and close the file when you’re done.

With the variables, DNS records, and outputs defined, the last thing you’ll need to specify are the provider requirements for this module. You’ll specify that the dns-records module requires the digitalocean provider in a file called provider.tf. Create and open it for editing:

• nano provider.tf

      Add the following lines:

      terraform-reusability/modules/dns-records/provider.tf

      terraform {
        required_providers {
          digitalocean = {
            source = "digitalocean/digitalocean"
          }
        }
        required_version = ">= 0.13"
      }
      

      When you’re done, save and close the file. The dns-records module now requires the digitalocean provider and is functionally complete.

      Creating Different Environments

      The following is the current structure of the terraform-reusability project:

terraform-reusability/
      ├─ modules/
      │  ├─ dns-records/
      │  │  ├─ outputs.tf
      │  │  ├─ provider.tf
      │  │  ├─ records.tf
      │  │  ├─ variables.tf
      │  ├─ droplet-lb/
      │  │  ├─ droplets.tf
      │  │  ├─ lb.tf
      │  │  ├─ outputs.tf
      │  │  ├─ provider.tf
      │  │  ├─ variables.tf
      ├─ main.tf
      ├─ provider.tf
      

      So far, you have two modules in your project: the one you just created (dns-records) and droplet-lb, which you created as part of the prerequisites.

To facilitate different environments, you’ll store the dev and prod environment config files under a directory called environments, which will reside in the root of the project. Both environments will call the same two modules, but with different parameter values. The advantage of this is that when the modules change internally in the future, you’ll only need to update the values you are passing in.

First, navigate to the root of the project by running:

• cd ../..

      Then, create the dev and prod directories under environments at the same time:

      • mkdir -p environments/dev && mkdir environments/prod

The -p argument instructs mkdir to create all directories along the given path.

Navigate to the dev directory, as you’ll first configure that environment:

• cd environments/dev

You’ll store the code in a file named main.tf, so create it for editing:

• nano main.tf

      Add the following lines:

      terraform-reusability/environments/dev/main.tf

      module "droplets" {
        source   = "../../modules/droplet-lb"
      
        droplet_count = 2
        group_name    = "dev"
      }
      
      module "dns" {
        source   = "../../modules/dns-records"
      
        domain_name   = "your_dev_domain"
        ipv4_address  = module.droplets.lb_ip
      }
      

Here you call and configure the two modules, droplet-lb and dns-records, which will together result in the creation of two Droplets fronted by a Load Balancer, with the DNS records for the supplied domain set up to point to that Load Balancer. Remember to replace your_dev_domain with your desired domain name for the dev environment, then save and close the file.

Next, you’ll configure the DigitalOcean provider and create a variable for it so it can accept the personal access token you created as part of the prerequisites. Open a new file, called provider.tf, for editing:

• nano provider.tf

      Add the following lines:

      terraform-reusability/environments/dev/provider.tf

      terraform {
        required_providers {
          digitalocean = {
            source = "digitalocean/digitalocean"
            version = "1.22.2"
          }
        }
      }
      
      variable "do_token" {}
      
      provider "digitalocean" {
        token = var.do_token
      }
      

      In this code, you require the digitalocean provider to be available and pass in the do_token variable to its instance. Save and close the file.

Initialize the configuration by running:

• terraform init

      You’ll receive the following output:

      Output

Initializing modules...
- dns in ../../modules/dns-records
- droplets in ../../modules/droplet-lb

Initializing the backend...

Initializing provider plugins...
- Finding latest version of digitalocean/digitalocean...
- Installing digitalocean/digitalocean v2.0.2...
- Installed digitalocean/digitalocean v2.0.2 (signed by a HashiCorp partner, key ID F82037E524B9C0E8)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/plugins/signing.html

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, we recommend adding version constraints in a required_providers block
in your configuration, with the constraint strings suggested below.

* digitalocean/digitalocean: version = "~> 2.0.2"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

The configuration for the prod environment is similar. Navigate to its directory by running:

• cd ../prod

Create and open main.tf for editing:

• nano main.tf

      Add the following lines:

      terraform-reusability/environments/prod/main.tf

      module "droplets" {
        source   = "../../modules/droplet-lb"
      
        droplet_count = 5
        group_name    = "prod"
      }
      
      module "dns" {
        source   = "../../modules/dns-records"
      
        domain_name   = "your_prod_domain"
        ipv4_address  = module.droplets.lb_ip
      }
      

      The difference between this and your dev code is that there will be five Droplets deployed. Furthermore, the domain name, which you should replace with your prod domain name, will be different. Save and close the file when you’re done.

Then, copy over the provider configuration from dev:

• cp ../dev/provider.tf .

Initialize this configuration as well:

• terraform init

      The output of this command will be the same as the previous time you ran it.

      You can try planning the configuration to see what resources Terraform would create by running:

      • terraform plan -var "do_token=${DO_PAT}"

      The output for prod will be the following:

      Output

...
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.dns.digitalocean_domain.domain will be created
  + resource "digitalocean_domain" "domain" {
      + id   = (known after apply)
      + name = "your_prod_domain"
      + urn  = (known after apply)
    }

  # module.dns.digitalocean_record.domain_A will be created
  + resource "digitalocean_record" "domain_A" {
      + domain = "your_prod_domain"
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "@"
      + ttl    = (known after apply)
      + type   = "A"
      + value  = (known after apply)
    }

  # module.dns.digitalocean_record.domain_CNAME will be created
  + resource "digitalocean_record" "domain_CNAME" {
      + domain = "your_prod_domain"
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "www"
      + ttl    = (known after apply)
      + type   = "CNAME"
      + value  = (known after apply)
    }

  # module.droplets.digitalocean_droplet.droplets[0] will be created
  + resource "digitalocean_droplet" "droplets" {
      ...
      + name = "prod-0"
      ...
    }

  # module.droplets.digitalocean_droplet.droplets[1] will be created
  + resource "digitalocean_droplet" "droplets" {
      ...
      + name = "prod-1"
      ...
    }

  # module.droplets.digitalocean_droplet.droplets[2] will be created
  + resource "digitalocean_droplet" "droplets" {
      ...
      + name = "prod-2"
      ...
    }

  # module.droplets.digitalocean_droplet.droplets[3] will be created
  + resource "digitalocean_droplet" "droplets" {
      ...
      + name = "prod-3"
      ...
    }

  # module.droplets.digitalocean_droplet.droplets[4] will be created
  + resource "digitalocean_droplet" "droplets" {
      ...
      + name = "prod-4"
      ...
    }

  # module.droplets.digitalocean_loadbalancer.www-lb will be created
  + resource "digitalocean_loadbalancer" "www-lb" {
      ...
      + name = "lb-prod"
      ...

Plan: 9 to add, 0 to change, 0 to destroy.
...

This would deploy five Droplets with a Load Balancer. It would also create the prod domain you specified, with the two DNS records pointing to the Load Balancer. You can try planning the configuration for the dev environment as well; you’ll note that two Droplets would be planned for deployment.

      Note: You can apply this configuration for the dev and prod environments with the following command:

      • terraform apply -var "do_token=${DO_PAT}"

      The following demonstrates how you have structured this project:

terraform-reusability/
      ├─ environments/
      │  ├─ dev/
      │  │  ├─ main.tf
      │  │  ├─ provider.tf
      │  ├─ prod/
      │  │  ├─ main.tf
      │  │  ├─ provider.tf
      ├─ modules/
      │  ├─ dns-records/
      │  │  ├─ outputs.tf
      │  │  ├─ provider.tf
      │  │  ├─ records.tf
      │  │  ├─ variables.tf
      │  ├─ droplet-lb/
      │  │  ├─ droplets.tf
      │  │  ├─ lb.tf
      │  │  ├─ outputs.tf
      │  │  ├─ provider.tf
      │  │  ├─ variables.tf
      ├─ main.tf
      ├─ provider.tf
      

      The addition is the environments directory, which holds the code for the dev and prod environments.

The benefit of this approach is that further changes to modules automatically propagate to all areas of your project. Aside from customizing module inputs, this approach avoids repetition and promotes reusability as much as possible, even across deployment environments. Overall, this reduces clutter and allows you to trace modifications using a version-control system.

      In the final two sections of this tutorial, you’ll review the depends_on meta argument and the templatefile function.

      Declaring Dependencies to Build Infrastructure in Order

While planning actions, Terraform automatically tries to infer existing dependencies and builds them into its dependency graph. The main dependencies it can detect are explicit references; for example, when an output value of a module is passed as a parameter to another resource. In this scenario, the module must first complete its deployment to provide the output value.

The dependencies that Terraform can’t detect are hidden ones: side effects and relationships that are not inferable from the code. An example is when an object depends not on the existence, but on the behavior of another one, and does not access its attributes in code. To overcome this, you can use depends_on to specify dependencies explicitly. Since Terraform 0.13, you can also use depends_on on modules to force the listed resources to be fully deployed before deploying the module itself. The depends_on meta argument can be used with every resource type, and it accepts a list of other resources on which the resource depends.

In the previous steps of this tutorial, you haven’t specified any explicit dependencies using depends_on, because the resources you’ve created have no side effects that aren’t inferable from the code. Terraform is able to detect the references made in the code you’ve written and schedules the resources for deployment accordingly.

      depends_on accepts a list of references to other resources. Its syntax looks like this:

      resource "resource_type" "res" {
        depends_on = [...] # List of resources
      
        # Parameters...
      }
      

Remember that you should only use depends_on as a last resort. If used, keep it well documented, because the behavior that the resources depend on may not be immediately obvious.
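As an illustration, suppose a Droplet’s bootstrap process assumes a particular tag already exists, even though nothing in the Droplet’s arguments references it. This sketch (the resources here are hypothetical) makes that ordering explicit:

resource "digitalocean_tag" "deploy_ready" {
  name = "deploy-ready"
}

resource "digitalocean_droplet" "web" {
  image  = "ubuntu-18-04-x64"
  name   = "web-1"
  region = "fra1"
  size   = "s-1vcpu-1gb"

  # Nothing above references the tag, so Terraform cannot infer the
  # ordering on its own; depends_on makes it explicit.
  depends_on = [digitalocean_tag.deploy_ready]
}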

      Using Templates for Customization

      In Terraform, templating is substituting results of expressions in appropriate places, such as when setting attribute values on resources or constructing strings. You’ve used it in the previous steps and the tutorial prerequisites to dynamically generate Droplet names and other parameter values.

When substituting values into strings, you surround the expression with ${}. Template substitution is often used in loops to facilitate customization of the created resources. It also allows for module customization by substituting inputs in resource attributes.
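For instance, here is a minimal sketch of inline interpolation that uses count.index to generate distinct Droplet names (the resource shown is illustrative):

resource "digitalocean_droplet" "web" {
  count  = 2
  image  = "ubuntu-18-04-x64"
  # Renders as "web-0" and "web-1"
  name   = "web-${count.index}"
  region = "fra1"
  size   = "s-1vcpu-1gb"
}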

Terraform offers the templatefile function, which accepts two arguments: the path of a file to read from disk and a map of variables paired with their values. It returns the contents of the file with the expressions substituted, just as Terraform would normally render them when planning or applying the project. Because functions are not part of the dependency graph, the file cannot be dynamically generated from another part of the project.

Imagine that the contents of a template file called droplets.tmpl are as follows:

      %{ for address in addresses ~}
      ${address}:80
      %{ endfor ~}
      

Longer declarations must be surrounded with %{}, as is the case with the for and endfor declarations, which signify the start and end of the for loop, respectively. The contents and type of the addresses variable are not known until the function is called and actual values are provided, like so:

      templatefile("${path.module}/droplets.tmpl", { addresses = ["192.168.0.1", "192.168.1.1"] })
      

      The value that this templatefile call will return is the following:

      Output

192.168.0.1:80
192.168.1.1:80

This function has its use cases, but they are uncommon. For example, you could use it when part of the configuration must exist in a proprietary format, but depends on the rest of the values and must be generated dynamically. In the majority of cases, it’s better to specify all configuration parameters directly in Terraform code, where possible.
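As a usage sketch, you could expose the rendered result through an output to inspect it, reusing the illustrative file path and values from above:

output "backend_addresses" {
  value = templatefile("${path.module}/droplets.tmpl", {
    addresses = ["192.168.0.1", "192.168.1.1"]
  })
}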

      Conclusion

      In this article, you’ve maximized code reuse in an example Terraform project. The main way is to package often-used features and configurations as a customizable module and use it whenever needed. By doing so, you do not duplicate the underlying code (which can be error prone) and enable faster turnaround times, since modifying the module is almost all you need to do to introduce changes.

      You’re not limited to your own modules. As you’ve seen, Terraform Registry provides third-party modules and providers that you can incorporate in your project.

      Check out the rest of the How To Manage Infrastructure with Terraform series.




      DevOps From a Virtual Minecraft World: Using Terraform to Deploy DigitalOcean Kubernetes Apps



      About the Talk

In this lighthearted and demo-driven talk, Nic and Erik find themselves stranded with a shipwrecked Kubernetes cluster and no way to salvage it. Determined never to find themselves in this sticky situation again, they set out to learn how to use HashiCorp Terraform to consistently and reliably deploy DigitalOcean Kubernetes clusters and applications, provision storage, and expose them using ingress and load balancers.

      Learn with them as they embark on a voyage of discovery, rebuilding their Kubernetes hopes and dreams using infrastructure as code, and ensure you never find yourself lost at sea.


      About the Presenters

      Recovering pathological liar, arguably the most handsome man in the north. Nic Jackson is a Developer Advocate at HashiCorp and the author of “Building Microservices in Go,” a book that examines the best patterns and practices for building microservices with the Go programming language. When not working, you can find Nic contributing to the open-source developer tool Shipyard, and teaching application development on YouTube.

      Erik Veld divides his time between developing software and teaching others about technology and modern development practices. He has over a decade of experience working as a developer, operator, and IT consultant for various global companies and startups. The lessons learned and scars accumulated from this time prepared him well for his current love of demoing cutting edge technologies live on stage. He also founded Instruqt, a hands-on learning platform. When he is not doing something with computers, you can find him in the kitchen or renovating his house. He is currently a Developer Advocate at HashiCorp.




