      How to Use Terraform With Linode Object Storage


Terraform is a powerful Infrastructure as Code (IaC) application for deploying and managing infrastructure. It can be used to add, modify, and delete resources including servers, networking elements, and storage objects. Linode maintains an official Terraform provider that can configure common Linode infrastructure components. This guide provides a brief introduction to Terraform and explains how to use it to create
      Linode Object Storage solutions.

      What is Terraform?

      Terraform is an open source product that is available in free and commercial editions. Terraform configuration files are declarative in form. The files describe the end state of the system and explain what to configure, but not how to configure it. Terraform files use either Terraform’s HashiCorp Configuration Language (HCL) or the JavaScript Object Notation (JSON) format to define the infrastructure. Both languages work well with Terraform because they are easy to use and read. Terraform uses a modular and incremental approach to encourage reuse and maintainability. It is available for macOS, Windows, and most Linux distributions.

Terraform uses providers to manage resources. A provider is a plugin that wraps an infrastructure vendor's API and is typically developed in conjunction with that vendor. Terraform's provider-based system allows users to create, modify, and destroy network infrastructure from different vendors. Developers can import these providers into their configuration files to help declare and configure their infrastructure components. Providers are available for most major vendors, including
      Linode. Terraform users can browse through a complete listing of the various providers in the
      Terraform Registry.

      Linode offers a useful
      Beginner’s Guide to Terraform as an introduction to the main Terraform concepts. Additionally, Terraform documentation includes a number of
      Tutorials, including guides to the more popular providers.

      How to Use Terraform

      To use Terraform, create a file that defines the intended configuration of all network elements. This file includes a list of all required providers and data sources. A data source object provides access to a variety of methods and attributes about a particular infrastructure component. The file also fully describes the various resources, including servers and storage objects, that Terraform should create, manage, or delete.

      Terraform files are written using either HCL or JSON as a text file with the .tf extension. It is possible to use input variables, functions, and modules for greater flexibility, modularity, and maintainability. Users develop their configuration files on their own workstations, and use the Terraform client to push the configuration out to their network. The client relies upon implementation details from the providers to execute the changes.

Before applying the configuration, users should execute the terraform plan command. This command generates a summary of all the intended changes. At this point, no changes have been applied, so the configuration can still be safely revised or even abandoned if necessary.

      When the Terraform plan is ready to implement, the terraform apply command is used to deploy the changes. Terraform keeps track of all changes in an internal state file. This results in increased efficiency because only changes to the existing configuration are executed. New changes and modifications can be added to existing Terraform files without deleting the pre-existing resources. Terraform also understands the various dependencies between resources, and creates the infrastructure using the proper sequence.
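
In practice, this workflow reduces to a short sequence of commands run from the directory containing the .tf files. The following is a minimal sketch of a typical session, assuming the configuration has already been written:

        terraform init     # download the required providers and prepare the working directory
        terraform plan     # preview the intended changes without applying them
        terraform apply    # deploy the configuration and record the results in the state file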

Terraform can be used in a multi-developer environment in conjunction with a version control system. Developers can also build their own provider infrastructure for use instead of, or alongside, third-party providers. HashiCorp provides more details about how the product works and how to use it in its
      Introduction to Terraform summary.

      Note

      Terraform is very powerful, but it can be a difficult tool to use. Syntax errors can be hard to debug. Before attempting to create any infrastructure, it is a good idea to read the
      Linode Introduction to the HashiCorp Configuration Language. The documentation about the
      Linode Provider in the Terraform Registry is also essential. Consult Linode’s extensive collection of
      Terraform guides for more examples and explanations.

      Before You Begin

      1. If you have not already done so, create a Linode account and Compute Instance. See our
        Getting Started with Linode and
        Creating a Compute Instance guides.

      2. Follow our
        Setting Up and Securing a Compute Instance guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.

      3. Ensure all Linode servers are updated. The following commands can be used to update Ubuntu systems.

        sudo apt update && sudo apt upgrade

      Note

      This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you are not familiar with the sudo command, see the
      Users and Groups guide.

      How to Download and Install Terraform

      These instructions are geared towards Ubuntu 22.04 users, but are generally applicable to earlier Ubuntu releases. Instructions for other Linux distributions and macOS are available on the
      Terraform Downloads Portal. The following example demonstrates how to download and install the latest release of Terraform.

      1. Install the system dependencies for Terraform.

        sudo apt install software-properties-common gnupg2 curl
      2. Import the GPG key.

        curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
      3. Add the Hashicorp repository to apt.

        sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
4. Download the updates for Terraform and install the application. This installs the latest available release of Terraform, which was 1.3.4 at the time of writing.

        sudo apt update && sudo apt install terraform
        Get:1 https://apt.releases.hashicorp.com jammy/main amd64 terraform amd64 1.3.4 [19.5 MB]
        Fetched 19.5 MB in 0s (210 MB/s)
        Selecting previously unselected package terraform.
        (Reading database ... 109186 files and directories currently installed.)
        Preparing to unpack .../terraform_1.3.4_amd64.deb ...
        Unpacking terraform (1.3.4) ...
        Setting up terraform (1.3.4) ...
      5. Confirm the application has been installed correctly. Use the terraform command without any parameters and ensure the Terraform help information is displayed.

        Usage: terraform [global options] <subcommand> [args]
        
        The available commands for execution are listed below.
        The primary workflow commands are given first, followed by
        less common or more advanced commands.
        
        Main commands:
        init          Prepare your working directory for other commands
        ...
        -version      An alias for the "version" subcommand.
      6. To determine the current release of Terraform, use the terraform -v command.

        Terraform v1.3.4
        on linux_amd64
      7. Create a directory for the new Terraform project and change to this directory.

        mkdir ~/terraform
        cd ~/terraform

      Creating a Terraform File to Create Linode Object Storage

      To deploy the necessary infrastructure for a Linode Object Storage solution, create a Terraform file defining the final state of the system. This file must include the following sections:

      • The terraform definition, which includes the required providers. In this case, only the Linode provider is included.
      • The Linode provider.
      • The linode_object_storage_cluster data source.
      • At least one linode_object_storage_bucket resource. A storage bucket provides a space to store files and text objects.
      • (Optional) A linode_object_storage_key.
      • A list of linode_object_storage_object items. An object storage object can be a text file or a string of text. All storage objects are stored in a particular object storage bucket.

      To construct the Terraform file, execute the following instructions. For more information on how to create a Terraform file, see the
      Terraform documentation.

      1. Create the file linode-terraform-storage.tf inside the terraform directory.

        nano linode-terraform-storage.tf
      2. At the top of the file, add a terraform section, including all required_providers for the infrastructure. In this case, the only required provider is linode. Set the source to linode/linode. Use the current version of the linode provider. At publication time, the version is 1.29.4. To determine the current version, see the
        Linode Namespace in the Terraform Registry.

        File: /terraform/linode-terraform-storage.tf
        
        terraform {
          required_providers {
            linode = {
              source = "linode/linode"
              version = "1.29.4"
            }
          }
        }
      3. Define the linode provider. Include the
        Linode v4 API token for the account. See the
        Getting Started with the Linode API guide for more information about tokens.

        Note

        To hide sensitive information, such as API tokens, declare a variables.tf file and store the information there. Retrieve the variables using the var keyword. See the
        Linode introduction to HCL for guidance on how to use variables.
        File: /terraform/linode-terraform-storage.tf
        
        provider "linode" {
          token = "THE_LINODE_API_TOKEN"
        }
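
        As the note above suggests, the API token does not have to be hard-coded in the provider block. A minimal sketch, assuming a separate variables.tf file in the same directory (the variable name linode_api_token is purely illustrative):

        File: /terraform/variables.tf

        # Illustrative variable; supply the value in terraform.tfvars or with the -var option.
        variable "linode_api_token" {
          description = "Linode v4 API token used by the linode provider"
          sensitive   = true
        }

        The provider block would then use token = var.linode_api_token, keeping the token itself out of the main configuration file.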
      4. Create a linode_object_storage_cluster data source. In the following code sample, the new cluster object is named primary. Designate a region for the cluster using the id attribute. In the following example, the region is eu-central-1. The cluster object provides access to the domain, status, and region of the cluster. See the Terraform registry documentation for the
        Linode Object Storage Cluster data source for more information.

        Note

        Not all regions support storage clusters. For a full list of all data centers where a storage cluster can be configured, see the Linode
        Object Storage Product Information.
        File: /terraform/linode-terraform-storage.tf
        
        data "linode_object_storage_cluster" "primary" {
            id = "eu-central-1"
        }
      5. Optional: Create a linode_object_storage_key to control access to the storage objects. Provide a name for the key and a label to help identify it.

        File: /terraform/linode-terraform-storage.tf
        
        resource "linode_object_storage_key" "storagekey" {
            label = "image-access"
        }
6. Create a linode_object_storage_bucket resource. The cluster attribute for the bucket must contain the id of the cluster data source object. In this example, the cluster identifier can be retrieved using the data.linode_object_storage_cluster.primary.id attribute. Assign a label to the storage bucket. The label must be unique within the region, so choose a reasonably distinctive name. The following example sets the label to mybucket-j1145.

        Set the access_key and secret_key attributes to the access_key and secret_key fields of the storage key. In the following example, the name of the key is linode_object_storage_key.storagekey. If you skipped the previous step and are not using an object storage key, do not include these attributes.

        Note

        The Linode Object Storage Bucket resource contains many other configurable attributes. It is possible to set life cycle rules, versioning, and access control rules, and to associate the storage bucket with TLS/SSL certificates. For more information, see the
        Linode Object Storage Bucket documentation in the Terraform registry.
        File: /terraform/linode-terraform-storage.tf
        
        resource "linode_object_storage_bucket" "mybucket-j1145" {
          cluster = data.linode_object_storage_cluster.primary.id
          label = "mybucket-j1145"
          access_key = linode_object_storage_key.storagekey.access_key
          secret_key = linode_object_storage_key.storagekey.secret_key
        }
      7. Add items to the storage bucket. To add a file or a block of text to the bucket, create a linode_object_storage_object resource. Specify a cluster and bucket to store the object in and a key to uniquely identify the storage object within the cluster. To use a storage key, include the secret_key and access_key of the storage key.

        To add a text file to storage, specify the file path as the source attribute using the following example as a guide. This example adds the file terraform_test.txt to the bucket mybucket-j1145 in cluster primary. For more information on adding storage objects, see the
        Linode Storage Object resource documentation.

        File: /terraform/linode-terraform-storage.tf
        
        resource "linode_object_storage_object" "object1" {
            bucket  = linode_object_storage_bucket.mybucket-j1145.label
            cluster = data.linode_object_storage_cluster.primary.id
            key     = "textfile-object"
        
            secret_key = linode_object_storage_key.storagekey.secret_key
            access_key = linode_object_storage_key.storagekey.access_key
        
            source = pathexpand("~/terraform_test.txt")
        }
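
        The source attribute points to a file on the workstation running Terraform, so ~/terraform_test.txt must exist before the configuration is applied. If it does not, create a small placeholder file first, for example:

        echo "Terraform object storage test" > ~/terraform_test.txt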
      8. Optional: The storage bucket can also hold strings of text. To store a string, declare a new linode_object_storage_object, including the bucket, cluster, and storage key information as before. Choose a new unique key for the text object. The content attribute should be set to the text string. Fill in the content_type and content_language to reflect the nature of the text.

        File: /terraform/linode-terraform-storage.tf
        
        resource "linode_object_storage_object" "object2" {
            bucket  = linode_object_storage_bucket.mybucket-j1145.label
            cluster = data.linode_object_storage_cluster.primary.id
            key     = "freetext-object"
        
            secret_key = linode_object_storage_key.storagekey.secret_key
            access_key = linode_object_storage_key.storagekey.access_key
        
            content          = "This is the content of the Object..."
            content_type     = "text/plain"
            content_language = "en"
        }
      9. When all sections have been added, the .tf file should resemble the following example.

        File: /terraform/linode-terraform-storage.tf
        
        terraform {
          required_providers {
            linode = {
              source = "linode/linode"
              version = "1.29.4"
            }
          }
        }
        
        provider "linode" {
          token = "THE_LINODE_API_TOKEN"
        }
        
        data "linode_object_storage_cluster" "primary" {
            id = "eu-central-1"
        }
        
        resource "linode_object_storage_key" "storagekey" {
            label = "image-access"
        }
        
        resource "linode_object_storage_bucket" "mybucket-j1145" {
          cluster = data.linode_object_storage_cluster.primary.id
          label = "mybucket-j1145"
          access_key = linode_object_storage_key.storagekey.access_key
          secret_key = linode_object_storage_key.storagekey.secret_key
        }
        
        resource "linode_object_storage_object" "object1" {
            bucket  = linode_object_storage_bucket.mybucket-j1145.label
            cluster = data.linode_object_storage_cluster.primary.id
            key     = "textfile-object"
        
            secret_key = linode_object_storage_key.storagekey.secret_key
            access_key = linode_object_storage_key.storagekey.access_key
        
            source = pathexpand("~/terraform_test.txt")
        }
        
        resource "linode_object_storage_object" "object2" {
            bucket  = linode_object_storage_bucket.mybucket-j1145.label
            cluster = data.linode_object_storage_cluster.primary.id
            key     = "freetext-object"
        
            secret_key = linode_object_storage_key.storagekey.secret_key
            access_key = linode_object_storage_key.storagekey.access_key
        
            content          = "This is the content of the Object..."
            content_type     = "text/plain"
            content_language = "en"
        }
      10. When done, press CTRL+X to exit nano, then Y to save, and Enter to confirm.

      Using Terraform to Configure Linode Object Storage

      Terraform commands act upon the linode-terraform-storage.tf file to analyze the contents and deploy the correct infrastructure. To create the Linode object storage infrastructure items in the file, run the following commands.

      1. Initialize Terraform using the terraform init command. Terraform confirms it is initialized.

        Initializing the backend...
        
        Initializing provider plugins...
        - Finding linode/linode versions matching "1.29.4"...
        - Installing linode/linode v1.29.4...
        - Installed linode/linode v1.29.4 (signed by a HashiCorp partner, key ID F4E6BBD0EA4FE463)
        ...
        Terraform has been successfully initialized!
        ...
      2. Run the terraform plan command to gain an overview of the anticipated infrastructure changes. This plan catalogs the components Terraform intends to add, modify, or delete. It is important to review the output carefully to ensure the plan is accurate and there are no unexpected changes. If the results are not satisfactory, change the .tf file and try again.

        data.linode_object_storage_cluster.primary: Reading...
        data.linode_object_storage_cluster.primary: Read complete after 0s [id=eu-central-1]
        
        Terraform used the selected providers to generate the following execution plan. Resource actions are indicated
        with the following symbols:
          + create
        
        Terraform will perform the following actions:
        
          # linode_object_storage_bucket.mybucket-j1145 will be created
          + resource "linode_object_storage_bucket" "mybucket-j1145" {
              + access_key   = (known after apply)
              + acl          = "private"
              + cluster      = "eu-central-1"
              + cors_enabled = true
              + hostname     = (known after apply)
              + id           = (known after apply)
              + label        = "mybucket-j1145"
              + secret_key   = (sensitive)
              + versioning   = (known after apply)
            }
        
          # linode_object_storage_key.storagekey will be created
          + resource "linode_object_storage_key" "storagekey" {
              + access_key = (known after apply)
              + id         = (known after apply)
              + label      = "image-access"
              + limited    = (known after apply)
              + secret_key = (sensitive value)
            }
        
          # linode_object_storage_object.object1 will be created
          + resource "linode_object_storage_object" "object1" {
              + access_key    = (known after apply)
              + acl           = "private"
              + bucket        = "mybucket-j1145"
              + cluster       = "eu-central-1"
              + content_type  = (known after apply)
              + etag          = (known after apply)
              + force_destroy = false
              + id            = (known after apply)
              + key           = "textfile-object"
              + secret_key    = (sensitive)
              + source        = "/home/username/terraform_test.txt"
              + version_id    = (known after apply)
            }
        
          # linode_object_storage_object.object2 will be created
          + resource "linode_object_storage_object" "object2" {
              + access_key       = (known after apply)
              + acl              = "private"
              + bucket           = "mybucket-j1145"
              + cluster          = "eu-central-1"
              + content          = "This is the content of the Object..."
              + content_language = "en"
              + content_type     = "text/plain"
              + etag             = (known after apply)
              + force_destroy    = false
              + id               = (known after apply)
              + key              = "freetext-object"
              + secret_key       = (sensitive)
              + version_id       = (known after apply)
            }
        
        Plan: 4 to add, 0 to change, 0 to destroy.
      3. When all further changes to the .tf file have been made, use terraform apply to deploy the changes. If any errors appear, edit the .tf file and run terraform plan and terraform apply again. Terraform displays a list of the intended changes and asks whether to proceed.

        Plan: 4 to add, 0 to change, 0 to destroy.
        
        Do you want to perform these actions?
          Terraform will perform the actions described above.
          Only 'yes' will be accepted to approve.
        
          Enter a value:
      4. Enter yes to continue. Terraform displays a summary of all changes and confirms the operation has been completed. If any errors appear, edit the .tf file and run the commands again.

        linode_object_storage_key.storagekey: Creating...
        linode_object_storage_key.storagekey: Creation complete after 3s [id=367232]
  linode_object_storage_bucket.mybucket-j1145: Creating...
        linode_object_storage_bucket.mybucket-j1145: Creation complete after 6s [id=eu-central-1:mybucket-j1145]
        linode_object_storage_object.object1: Creating...
        linode_object_storage_object.object2: Creating...
        linode_object_storage_object.object1: Creation complete after 0s [id=mybucket-j1145/textfile-object]
        linode_object_storage_object.object2: Creation complete after 0s [id=mybucket-j1145/freetext-object]
        
        Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
      5. View the
        Object Storage summary page of the Linode Dashboard to ensure all objects have been correctly created and configured. Select the name of the Object Storage Bucket to view a list of all object storage objects inside the bucket. This page also allows you to download any files and text objects in the bucket.
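
      Optionally, Terraform can also print useful attributes once the apply completes. The following is a small sketch that adds an output for the bucket's hostname, an attribute exported by the bucket resource (as seen in the plan output above):

        output "bucket_hostname" {
          # hostname is exported by the linode_object_storage_bucket resource
          value = linode_object_storage_bucket.mybucket-j1145.hostname
        }

      Add the block to the .tf file and run terraform apply again; the hostname appears in the apply summary and can be displayed at any time with terraform output bucket_hostname.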

      Deleting and Editing the Linode Storage Objects

To delete the entire storage configuration, use the terraform destroy command. This causes Terraform to delete every object it is tracking in the project's state, including the buckets, keys, and storage objects defined in linode-terraform-storage.tf. Run the command terraform plan -destroy first to obtain a summary of the objects Terraform intends to delete. To delete only a subset of the infrastructure, remove the corresponding resource blocks from the .tf file and run terraform plan followed by terraform apply; Terraform then destroys any resources that remain in its state but are no longer present in the configuration.

      terraform plan -destroy
      terraform destroy
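
      Alternatively, a single resource can be removed without editing the file by passing Terraform's -target option to the destroy command, for example:

      terraform destroy -target=linode_object_storage_object.object2

      Use -target sparingly; it is intended for exceptional situations, and routine changes are better handled by editing the configuration and running terraform apply.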

      To modify the contents of an object storage object, edit the .tf file containing the configuration so it reflects the new configuration. Run terraform plan to review the changes, then run terraform apply. Terraform automatically makes the necessary changes. Use this command with caution because it might cause an object to be deleted and re-created rather than modified.

      terraform plan
      terraform apply

      Conclusion

Terraform is a powerful and efficient Infrastructure as Code (IaC) application. It automates the process of deploying infrastructure. To use Terraform, describe the final state of the infrastructure in HCL or JSON. Use the terraform plan command from the Terraform client to preview the changes and terraform apply to deploy the configuration.

      The
Linode Provider exposes the resources and data sources needed to configure
      Linode Object Storage infrastructure. First declare the Linode provider and the
      Linode Object Storage Cluster data source. Define the object storage infrastructure using
      Linode object storage buckets,
      object storage keys, and
      object storage objects. The object storage objects are the files or strings of text to be stored. For more information on using Terraform, consult the
      Terraform documentation.


      How to Deploy a Packer Image with Terraform


Both Packer and Terraform, tools by HashiCorp, stand out for their infrastructure-automation capabilities. Despite some overlap, the tools have distinct and complementary features. This makes them an effective pair, with Packer used to create images that Terraform then deploys as complete infrastructure.

      Learn more about Packer in our
      Using the Linode Packer Builder to Create Custom Images guide. Discover how you can leverage Terraform in our
      Beginner’s Guide to Terraform.

      In this tutorial, find out how to use Packer and Terraform together to deploy Linode instances. The tutorial uses the Linode Terraform provider to deploy several instances based on a Linode image built with Packer.

      Before You Begin

      1. If you have not already done so, create a Linode account and Compute Instance. See our
        Getting Started with Linode and
        Creating a Compute Instance guides.

      2. Follow our
        Setting Up and Securing a Compute Instance guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.

      Note

      This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you’re not familiar with the sudo command, see the
      Users and Groups guide.

      How to Install the Prerequisites

      To get started, install both Packer and Terraform on the same system. Below you can find links to installation guides for the two tools, as well as steps covering most Linux operating systems.

      Installing Packer

      Packer’s installation process varies substantially depending on your operating system. Refer to the
      official installation guide for instructions if your system is not covered here.

      Debian / Ubuntu

      sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
      sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
      sudo apt-get update && sudo apt-get install packer

      AlmaLinux / CentOS Stream / Rocky Linux

      sudo yum install -y yum-utils
      sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
      sudo yum -y install packer

      Fedora

      sudo dnf install -y dnf-plugins-core
      sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
      sudo dnf -y install packer

      Afterward, verify your installation and display the installed version with the following command:
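
      packer --version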

      1.8.4

      Installing Terraform

      Terraform’s installation process also varies depending on your operating system. Refer to HashiCorp’s
      official documentation on installing the Terraform CLI for systems that are not covered here. You can also refer to the section on installing Terraform in our guide
      Use Terraform to Provision Linode Environments.

      Debian / Ubuntu

      sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
      wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg
      echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
      sudo apt update && sudo apt install terraform

      AlmaLinux / CentOS Stream / Rocky Linux

      sudo yum install -y yum-utils
      sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
      sudo yum -y install terraform

      Fedora

      sudo dnf install -y dnf-plugins-core
      sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
      sudo dnf -y install terraform

      Afterward, verify your installation with:
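
      terraform -v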

      Terraform v1.3.3
      on linux_amd64

      How to Build a Packer Image

      Packer automates the creation of machine images. These images are helpful when looking to streamline your process for provisioning infrastructure. Such images give you a consistent basis for deploying instances.

      Moreover, images are much more efficient. Rather than executing a series of installations and commands with each provisioned instance, the provisioning tool can deploy ready-made images.

The examples in this tutorial use a Linode image built with Packer. Linode has a builder available for Packer, which lets you put together images specifically for a Linode instance.

      To do so, follow along with our guide on
      Using the Linode Packer Builder to Create Custom Images. By the end, you should have a Packer-built image on your Linode account.

      The remaining steps in this tutorial should work no matter what kind of image you built following the guide linked above. However, the Packer image used in the examples to follow has the label packer-linode-image-1, runs on an Ubuntu 20.04 base, and has NGINX installed.

      How to Configure Terraform

      Terraform focuses on automating the provisioning process, allowing you to deploy your infrastructure entirely from code.

      To learn more about deploying Linode instances with Terraform, see our tutorial on how to
      Use Terraform to Provision Linode Environments.

      This tutorial covers a similar series of steps, but specifically demonstrates how you can work with custom Linode images.

Before moving ahead, create a directory for your Terraform scripts and make it your working directory. This tutorial uses the linode-terraform directory in the current user's home directory:

      mkdir ~/linode-terraform
      cd ~/linode-terraform

      The rest of the tutorial assumes you are working out of this directory.

      Setting Up the Linode Provider

      Terraform’s providers act as abstractions of APIs, giving Terraform an interface for working with various resources on host platforms.

      Linode has its own Terraform provider, which you can learn more about from its Terraform
      provider registry page.

      To use the provider, you just need a couple of short blocks in a Terraform script.

      Create a new Terraform file named packer-linode.tf, which acts as the base for this tutorial’s Terraform project:
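
      nano packer-linode.tf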

      Give it the contents shown here:

      File: packer-linode.tf
      
      terraform {
        required_providers {
          linode = {
            source = "linode/linode"
            version = "1.29.3"
          }
        }
      }
      provider "linode" {
        token = var.token
      }

The terraform block starts the project by indicating its required providers (e.g. Linode). The provider block then configures the Linode provider. The token argument allows the provider to authenticate its connection to the Linode API.

      When done, press CTRL+X to exit nano, Y to save, and Enter to confirm.

      Assigning Terraform Variables

      Above, you can see that the token value for the Linode provider uses the var.token variable. Although not required, variables make Terraform scripts much more adaptable and manageable.

      This tutorial handles variables using two files.

      1. First, create a variables.tf file:
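
        nano variables.tf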

        Now fill it with the contents shown below. This file defines all the variables for the Terraform project. Some of these variables have default values, which Terraform automatically uses if not otherwise assigned. Other variables need to be assigned, which you can see in the next file.

        File: variables.tf
        
        variable "token" {
          description = "The Linode API Personal Access Token."
        }
        variable "password" {
          description = "The root password for the Linode instances."
        }
        variable "ssh_key" {
          description = "The location of an SSH key file for use on the Linode instances."
          default = "~/.ssh/id_rsa.pub"
        }
        variable "node_count" {
          description = "The number of instances to create."
          default = 1
        }
        variable "region" {
          description = "The name of the region in which to deploy instances."
          default = "us-east"
        }
        variable "image_id" {
          description = "The ID for the Linode image to be used in provisioning the instances"
          default = "linode/ubuntu20.04"
        }

        When done, press CTRL+X to exit nano, Y to save, and Enter to confirm.

      2. Now create a terraform.tfvars file:
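
        nano terraform.tfvars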

This file, with the .tfvars ending, is a place for assigning variable values. Give the file the contents below, replacing the values in angle brackets (<...>) with your actual values:

        File: terraform.tfvars
        
        token = "<LinodeApiToken>"
        password = "<RootPassword>"
        node_count = 2
        image_id = "private/<LinodeImageId>"

        The <LinodeApiToken> needs to be an API token associated with your Linode account. You can follow our
        Get an API Access Token guide to generate a personal access token. Be sure to give the token “Read/Write” permissions.

        Above, you can see a value of private/<LinodeImageId> for the image_id. This value should match the image ID for the Linode image you created with Packer. All custom Linode images are prefaced with private/ and conclude with the image’s ID. In these examples, private/17691867 is assumed to be the ID for the Linode image built with Packer.

        There are two main ways to get your image ID:

        • The Linode image ID appears at the end of the output when you use Packer to create the image. For instance, in the guide on creating a Linode image with Packer linked above, you can find the output:

          ==> Builds finished. The artifacts of successful builds are:
          --> linode.example-linode-image: Linode image: packer-linode-image-1 (private/17691867)
        • The Linode API has an endpoint for listing available images. The list includes your custom images if you call it with your API token.

          You can use a cURL command to list all images available to you, public and private. Replace $LINODE_API_TOKEN with your Linode API token:

          curl -H "Authorization: Bearer $LINODE_API_TOKEN" \https://api.linode.com/v4/images

          The output can be overwhelming in the command line, so you may want to use another tool to prettify the JSON response. This has been done with the result shown here:

          {
              "pages": 1,
              "data": [{
                  "id": "private/17691867",
                  "label": "packer-linode-image-1",
                  "description": "Example Packer Linode Image",
                  // [...]

        When done, press CTRL+X to exit nano, Y to save, and Enter to confirm.

      Defining the Linode Resource

      The next step for the Terraform script is to define the actual resource to be provisioned. In this case, the script needs to provision Linode instances, which can be done using the linode_instance resource.

      Open the packer-linode.tf file created earlier and add the details shown here to the end:

      File: packer-linode.tf
      
      resource "linode_instance" "packer_linode_instance" {
        count = var.node_count
        image = var.image_id
        label = "packer-image-linode-${count.index + 1}"
        group = "packer-image-instances"
        region = var.region
        type = "g6-standard-1"
        authorized_keys = [ chomp(file(var.ssh_key)) ]
        root_pass = var.password
        connection {
          type = "ssh"
          user = "root"
          password = var.password
          host = self.ip_address
        }
        provisioner "remote-exec" {
          inline = [
            # Update the system.
            "apt-get update -qq",
            # Disable password authentication; users can only connect with an SSH key.
            "sed -i '/PasswordAuthentication/d' /etc/ssh/sshd_config",
            "echo \"PasswordAuthentication no\" >> /etc/ssh/sshd_config",
            # Check to make sure NGINX is running.
            "systemctl status nginx --no-pager"
          ]
        }
      }

And with that, the Terraform project is ready to provision two Linode instances based on your Packer-built image. Most of the configuration details for the resource block are managed by variables, so you shouldn't need to fiddle with much of the resource block to adjust things like the number of instances to provision.

      The remote-exec provisioner, and specifically the inline list within it, is where much of the customization comes in. This block defines shell commands to be executed on the newly provisioned instance. The commands here are relatively simple, but this provisioner can give you fine-grained control of operations on the instance.

      How to Provision a Packer Image with Terraform

      From here, a handful of Terraform commands are all you need to provision and manage Linode instances from the Packer-built image.

      First, Terraform needs to run some initialization around the script. This installs any prerequisites, specifically the linode provider in this example, and sets up Terraform’s lock file.
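
      terraform init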

      Running Terraform’s plan command is also good practice. Here, Terraform checks your script for immediate errors and provides an outline of the projected resources to deploy. You can think of it as a light dry run.
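
      terraform plan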

      Review the plan, and when ready, provision your instances with the apply command. This may take several minutes to process, depending on your systems and the number of instances being deployed.
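
      terraform apply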

      linode_instance.packer_linode_instance[0] (remote-exec): Connected!
      linode_instance.packer_linode_instance[0] (remote-exec): ● nginx.service - A high performance web server and a reverse proxy server
      linode_instance.packer_linode_instance[0] (remote-exec):      Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
      linode_instance.packer_linode_instance[0] (remote-exec):      Active: active (running) since Thu 2022-10-27 15:56:42 UTC; 9s ago
      [...]
      
      Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

      In the future, whenever you want to remove the instances created with Terraform, you can use the destroy command from within your Terraform script directory.
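
      terraform destroy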

      As with the apply command, you get a preview of the instances and are asked to confirm before the instances are destroyed.

      Conclusion

      This tutorial outlined how to use Terraform to deploy Linode instances built with a Packer image. This arrangement provides an efficient setup for provisioning and managing Linode instances. Terraform streamlines the process of provisioning infrastructure, and it is made even more efficient using pre-built images from Packer.

      The example covered in this tutorial is fairly simple. But the setup can be readily adapted and expanded on to deploy more robust and complex infrastructures.


      How To Use Terraform With Your Team


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      When multiple people are working on the same Terraform project from different locations simultaneously, it is important to handle the infrastructure code and project state correctly to avoid overwriting errors. The solution is to store the state remotely instead of locally. A remote system is available to all members of your team, and it is possible for them to lock the state while they’re working.

      One such remote backend is pg, which stores the state in a PostgreSQL database. During the course of this tutorial, you’ll use it with a DigitalOcean Managed Database to ensure data availability.

      Terraform also supports the official, managed cloud offering by HashiCorp called Terraform Cloud—a proprietary app that syncs your team’s work in one place and offers a user interface for configuration and management.

      In this tutorial, you’ll create an organization in Terraform Cloud to which you’ll connect your project. You’ll then use your organization to set up workspaces and resources. You will store your state in the managed cloud so it is always available. You’ll also set up the pg backend with an accompanying managed PostgreSQL database.

      Prerequisites

      • A DigitalOcean Personal Access Token, which you can create via the DigitalOcean Control Panel. You can find instructions to create this in How to Generate a Personal Access Token.
      • Terraform installed on your local machine and a project set up with the DigitalOcean provider. Complete Step 1 of the How To Use Terraform with DigitalOcean tutorial.
      • If you would like to use a pg backend, you will need a Managed PostgreSQL database cluster created and accessible. For more information, visit the Quickstart guide. You can use a separate database for this tutorial.
      • If you would like to use HashiCorp’s managed cloud, you will need an account with Terraform Cloud. You can create one on their sign-up page.

      Note: We have specifically tested this tutorial using Terraform 0.13.

      Storing State in Terraform Cloud

      In this step, you’ll create a project that deploys a Droplet, but instead of storing the state locally, you’ll use Terraform Cloud as the backend with the remote provider. This entails creating the organization and workspace in Terraform Cloud, writing the infrastructure code, and planning it.

      Creating an Organization

      Terraform Cloud allows you to have multiple organizations, which house your workspaces and modules. Paid-plan organizations can have multiple teams with access-level control features, while the free plan you’ll use provides only one team per organization. You can invite team members to join the organization.

      Start off by heading over to Terraform Cloud and logging in. If you haven’t yet created an organization, it will prompt you to do so.

      Terraform Cloud - Create a new organization

      Enter an organization name of your choosing and remember that it must be unique among all names in Terraform Cloud. You’ll receive an error if the name already exists. The email address should already be filled in with the address of your account. Once you’re finished, click the Create organization button to continue.

      It will then ask you to select the type of workspace.

      Terraform Cloud - Choosing a workspace type

      Since you’ll interface with Terraform Cloud using the command line, click the CLI-driven workflow option. Then, input a name for your workspace.

      Terraform Cloud - Setting workspace name

      Type in a workspace name of your choosing (we’ll call it sammy), then click Create workspace to finalize the organization creation process. It will then direct you to a workspace settings page.

      Terraform Cloud - Workspace settings

      You’ve now created your workspace, which is a part of your organization. Since you just created it, your workspace contains no infrastructure code. In the central part of the interface, Terraform Cloud gives you starting instructions for connecting to this workspace.

      Before connecting to it, you’ll need to configure the version of Terraform that the cloud will use to execute your commands. To set it, click the Settings dropdown in the upper-right corner and select General from the list. When the page opens, navigate to the Terraform Version dropdown and select 0.13.1 (for this tutorial).

      Terraform Cloud - Setting Terraform Version

      Then, click the Save settings button to save the changes.

      To connect your project to your organization and workspace, you’ll first need to log in using the command line. Before you run the command, navigate to the tokens page to create a new access token for your server, which will provide access to your account. You’ll receive a prompt to create an API token.

      Terraform Cloud - Create API token

      The default description is fine, so click Create API token to create it.

      Terraform Cloud - Created API token

      Click the token value, or the icon after it, to copy the API token. You’ll use this token to connect your project to your Terraform Cloud account.

      In the command line, run the following command to log in:
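
      • terraform login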

      You’ll receive the following output:

      Output

Terraform will request an API token for app.terraform.io using your browser.

If login is successful, Terraform will store the token in plain text in
the following file for use by subsequent commands:
    /home/sammy/.terraform.d/credentials.tfrc.json

Do you want to proceed?
  Only 'yes' will be accepted to confirm.

...

      Terraform is warning you that the token will be stored locally. Enter yes when it prompts you:

      Output

---------------------------------------------------------------------------------
Open the following URL to access the tokens page for app.terraform.io:
    https://app.terraform.io/app/settings/tokens?source=terraform-login
---------------------------------------------------------------------------------

Generate a token using your browser, and copy-paste it into this prompt.

Terraform will store the token in plain text in the following file
for use by subsequent commands:
    /home/sammy/.terraform.d/credentials.tfrc.json

Token for app.terraform.io:
  Enter a value:

      Paste in the token you’ve copied and confirm with ENTER. Terraform will show a success message:

      Output

Retrieved token for user your_username

---------------------------------------------------------------------------------

Success! Terraform has obtained and saved an API token.

The new API token will be used for any future Terraform command
that must make authenticated requests to app.terraform.io.

      You’ve configured your local Terraform installation to access your Terraform Cloud account. You’ll now create a project that deploys a Droplet and configure it to use Terraform Cloud for storing its state.

      Setting Up the Project

      First, create a directory named terraform-team-remote where you’ll store the project:

      • mkdir ~/terraform-team-remote

      Navigate to it:

      • cd ~/terraform-team-remote

      To set up your project, you’ll need to:

      • define and configure the remote provider, which interfaces with Terraform Cloud.
      • require the digitalocean provider to be able to deploy DigitalOcean resources.
      • define and initialize variables that you’ll use.

      You’ll store the provider and module requirements specifications in a file named provider.tf. Create and open it for editing by running:
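
      • nano provider.tf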

      Add the following lines:

      ~/terraform-team-remote/provider.tf

      terraform {
        required_version = "0.13.1"
      
        required_providers {
          digitalocean = {
            source = "digitalocean/digitalocean"
            version = ">1.22.2"
          }
        }
      
        backend "remote" {
          hostname = "app.terraform.io"
          organization = "your_organization_name"
      
          workspaces {
            name = "your_workspace_name"
          }
        }
      }
      
      variable "do_token" {}
      
      provider "digitalocean" {
        token = var.do_token
      }
      

      Here, you first specify your Terraform version. Then, you specify the digitalocean provider as required and set the backend to remote.

Its hostname is set to app.terraform.io, which is the address of Terraform Cloud. For the organization and workspaces.name values, replace the placeholders with the names you specified.

      Next, you define a variable called do_token, which you pass to the digitalocean provider created after it. You’ve now configured your project to connect to your organization, so save and close the file.

      Initialize your project with the following command:
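
      • terraform init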

      The output will be similar to this:

      Output

Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Finding digitalocean/digitalocean versions matching "> 1.22.2"...
- Installing digitalocean/digitalocean v2.3.0...
- Installed digitalocean/digitalocean v2.3.0 (signed by a HashiCorp partner, key ID F82037E524B9C0E8)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/plugins/signing.html

Terraform has been successfully initialized!
...

      Next, define the Droplet in a file called droplets.tf. Create and open it for editing by running:
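
      • nano droplets.tf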

      Add the following lines:

      ~/terraform-team-remote/droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web-1"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      

      This code will deploy a Droplet called web-1 in the fra1 region, running Ubuntu 18.04 on 1GB RAM and one CPU core. That is all you need to define, so save and close the file.

What's left to define are the variable values. The remote backend does not support passing values to variables through the command line, so you'll have to pass them in using variable files or set them in Terraform Cloud. Terraform reads variable values from files with filenames ending in .auto.tfvars. Create and open a file called vars.auto.tfvars for editing, in which you'll define the do_token variable:
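
      • nano vars.auto.tfvars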

      Add the following line, replacing your_do_token with your DigitalOcean API token:

      vars.auto.tfvars

      do_token = "your_do_token"
      

      When you’re done, save and close the file. Terraform will automatically read this file when planning actions.

      Your project is now complete and set up to use Terraform Cloud as its backend. You’ll now plan and apply the Droplet and review how that reflects in the Cloud app.

      Applying the Configuration

      Since you haven’t yet planned or applied your project, the workspace in Terraform Cloud is currently empty. You can try applying the project by running the following command to update it:
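
      • terraform apply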

      You’ll notice that the output is different from when you use local as your backend:

      Output

Running apply in the remote backend. Output will stream here. Pressing Ctrl-C
will cancel the remote apply if it's still pending. If the apply started it
will stop streaming the logs, but will not stop the apply running remotely.

Preparing the remote apply...

To view this run in a browser, visit:
https://app.terraform.io/app/sammy-shark/sammy/runs/run-QnAh2HDwx6zWbNV1

Waiting for the plan to start...

Terraform v0.13.1
Configuring remote state backend...
Initializing Terraform configuration...
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.web will be created
  + resource "digitalocean_droplet" "web" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + id                   = (known after apply)
      + image                = "ubuntu-18-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "web-1"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "fra1"
      + resize_disk          = true
      + size                 = "s-1vcpu-1gb"
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

...

      When using the remote backend, Terraform is not planning or applying configuration from the local machine. Instead, it delegates those tasks to the cloud, and only streams the output to the console in real time.

      Enter yes when prompted. Terraform will soon finish applying the configuration, and you can navigate to the workspace on the Terraform Cloud website to find that it has applied a new action.

      Terraform Cloud - New Run Applied

      You can now destroy the deployed resources by running the following:
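
      • terraform destroy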

      In this section, you’ve connected your project to Terraform Cloud. You’ll now use another backend, pg, which stores the state in a PostgreSQL database.

      Storing State in a Managed PostgreSQL Database

In this section, you'll set up a project that deploys a Droplet, much like the previous step. This time, however, you'll store the state in a DigitalOcean Managed PostgreSQL database using the pg backend. This backend supports state locking, so the state won't ever be overwritten by two or more changes happening at the same time.

      Start by creating a directory named terraform-team-pg in which you’ll store the project:

      • mkdir ~/terraform-team-pg

      Navigate to it:
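
      • cd ~/terraform-team-pg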

Like the previous section, you'll first define the requirements, then pass in the connection string for the database and require the digitalocean provider. Create and open provider.tf for editing:
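
      • nano provider.tf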

      Add the following lines:

      ~/terraform-team-pg/provider.tf

      terraform {
        required_providers {
          digitalocean = {
            source = "digitalocean/digitalocean"
            version = ">1.22.2"
          }
        }
      
        backend "pg" {
          conn_str = "your_db_connection_string"
        }
      }
      
      variable "do_token" {}
      
      provider "digitalocean" {
        token = var.do_token
      }
      

      Here you require the digitalocean provider and define the pg backend, which accepts a connection string. Then, you define the do_token variable and pass it to the instance of the digitalocean provider.

      Remember to replace your_db_connection_string with the connection string for your managed database from your DigitalOcean Control Panel. Then save and close the file.

Warning: To continue, in the Settings of your database, make sure the IP address of the machine from which you're running Terraform is on the allowlist.

      Initialize the project by running:
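
      • terraform init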

      The output will be similar to the following:

      Output

Initializing the backend...

Successfully configured the backend "pg"! Terraform will automatically
use this backend unless the backend configuration changes.

Error: No existing workspaces.

Use the "terraform workspace" command to create and select a new
workspace. If the backend already contains existing workspaces, you may
need to update the backend configuration.

      Terraform successfully initialized the backend; meaning it connected to the database. However, it complains about not having a workspace, since it does not create one during initialization. To resolve this, create a default workspace and switch to it by running:

      • terraform workspace new default

      The output will be the following:

      Output

Created and switched to workspace "default"!

You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.

      To finish the initialization process, run terraform init again:
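
      • terraform init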

      You’ll receive output showing it has successfully completed:

      Output

Initializing the backend...

Initializing provider plugins...
- Finding digitalocean/digitalocean versions matching "> 1.22.2"...
- Installing digitalocean/digitalocean v2.3.0...
- Installed digitalocean/digitalocean v2.3.0 (signed by a HashiCorp partner, key ID F82037E524B9C0E8)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/plugins/signing.html

Terraform has been successfully initialized!

      Since the Droplet definition is the same as in the previous project, you can copy it over by running:

      • cp ../terraform-team-remote/droplets.tf .

      You’ll need your DigitalOcean token in an environment variable. Create one, replacing your_do_token with your token:

      • export DO_PAT="your_do_token"

      To check that the connection to the database is working, try planning the configuration:

      • terraform plan -var "do_token=${DO_PAT}"

      The output will be similar to the following:

      Output

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.web will be created
  + resource "digitalocean_droplet" "web" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + id                   = (known after apply)
      + image                = "ubuntu-18-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "web-1"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "fra1"
      + resize_disk          = true
      + size                 = "s-1vcpu-1gb"
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

...

      Terraform reported no errors and planned out the actions as usual. It successfully connected to your PostgreSQL database and stored its state. Multiple people can now work on this simultaneously with the project remaining synchronized.

      Conclusion

      In this tutorial, you’ve used two different backends: Terraform Cloud, which is HashiCorp’s managed cloud offering for Terraform; and pg, which allows you to store the project’s state in a PostgreSQL database. You used a managed PostgreSQL database from DigitalOcean, which you can provision and use with Terraform within minutes.

      For more information about the features of Terraform Cloud, visit the official docs.

      To learn more about using Terraform, check out our series on How To Manage Infrastructure with Terraform.


