
      How to Use Terraform With Linode Object Storage


      Terraform is a powerful Infrastructure as Code (IaC) application for deploying and managing infrastructure. It can be used to add, modify, and delete resources, including servers, networking elements, and storage objects. Linode, a HashiCorp partner, maintains an official Terraform provider for configuring common Linode infrastructure items. This guide provides a brief introduction to Terraform and explains how to use it to create Linode Object Storage solutions.

      What is Terraform?

      Terraform is an open source product that is available in free and commercial editions. Terraform configuration files are declarative in form: they describe the end state of the system and specify what to configure, but not how to configure it. Terraform files use either the HashiCorp Configuration Language (HCL) or the JavaScript Object Notation (JSON) format to define the infrastructure. Both formats work well with Terraform because they are easy to write and read. Terraform uses a modular and incremental approach to encourage reuse and maintainability. It is available for macOS, Windows, and most Linux distributions.
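
      For a sense of the two formats, the short sketch below declares the same hypothetical Linode instance first in HCL and then in Terraform's JSON variant (JSON-based configuration files use the .tf.json extension). The label and region values here are placeholders.

        resource "linode_instance" "example" {
          label  = "example-instance"
          region = "us-east"
        }

      The equivalent .tf.json file:

        {
          "resource": {
            "linode_instance": {
              "example": {
                "label": "example-instance",
                "region": "us-east"
              }
            }
          }
        }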

      Terraform uses providers to manage resources. A provider, which is essentially a plugin that wraps the vendor's API, is typically created in conjunction with the infrastructure vendor. Terraform's provider-based system allows users to create, modify, and destroy network infrastructure from different vendors. Developers can import these providers into their configuration files to help declare and configure their infrastructure components. Providers are available for most major vendors, including
      Linode. Terraform users can browse a complete listing of the available providers in the
      Terraform Registry.

      Linode offers a useful
      Beginner’s Guide to Terraform as an introduction to the main Terraform concepts. Additionally, the Terraform documentation includes a number of
      Tutorials, including guides for the more popular providers.

      How to Use Terraform

      To use Terraform, create a file that defines the intended configuration of all network elements. This file includes a list of all required providers and data sources. A data source object provides access to a variety of methods and attributes about a particular infrastructure component. The file also fully describes the various resources, including servers and storage objects, that Terraform should create, manage, or delete.

      Terraform files are written using either HCL or JSON as a text file with the .tf extension. It is possible to use input variables, functions, and modules for greater flexibility, modularity, and maintainability. Users develop their configuration files on their own workstations, and use the Terraform client to push the configuration out to their network. The client relies upon implementation details from the providers to execute the changes.

      Before applying the configuration, users should execute the terraform plan command. This command generates a summary of all the intended changes. At this point, the changes have not yet been applied, so the configuration can still be safely revised or even abandoned.

      When the Terraform plan is ready to implement, the terraform apply command is used to deploy the changes. Terraform keeps track of all changes in an internal state file. This results in increased efficiency because only changes to the existing configuration are executed. New changes and modifications can be added to existing Terraform files without deleting the pre-existing resources. Terraform also understands the various dependencies between resources, and creates the infrastructure using the proper sequence.

      Terraform can be used in a multi-developer environment in conjunction with a version control system. Developers can also build their own provider infrastructure for use instead of, or alongside, third-party providers. HashiCorp provides more details about how the product works and how to use it in its
      Introduction to Terraform summary.

      Note

      Terraform is very powerful, but it can be a difficult tool to use. Syntax errors can be hard to debug. Before attempting to create any infrastructure, it is a good idea to read the
      Linode Introduction to the HashiCorp Configuration Language. The documentation about the
      Linode Provider in the Terraform Registry is also essential. Consult Linode’s extensive collection of
      Terraform guides for more examples and explanations.
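
      Two built-in commands catch many such problems early: terraform fmt normalizes the formatting of configuration files, and terraform validate checks a configuration for syntax and internal consistency errors before anything is deployed. Both are run from an initialized working directory, so they are most useful once the installation and setup steps below are complete.

        terraform fmt
        terraform validate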

      Before You Begin

      1. If you have not already done so, create a Linode account and Compute Instance. See our
        Getting Started with Linode and
        Creating a Compute Instance guides.

      2. Follow our
        Setting Up and Securing a Compute Instance guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.

      3. Ensure all Linode servers are updated. The following commands can be used to update Ubuntu systems.

        sudo apt update && sudo apt upgrade

      Note

      This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you are not familiar with the sudo command, see the
      Users and Groups guide.

      How to Download and Install Terraform

      These instructions are geared towards Ubuntu 22.04 users, but are generally applicable to earlier Ubuntu releases. Instructions for other Linux distributions and macOS are available on the
      Terraform Downloads Portal. The following example demonstrates how to download and install the latest release of Terraform.

      1. Install the system dependencies for Terraform.

        sudo apt install software-properties-common gnupg2 curl
      2. Import the GPG key.

        curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
      3. Add the Hashicorp repository to apt.

        sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
      4. Download the updates for Terraform and install the application. This installs Terraform release 1.3.4, the most recent release at the time of writing.

        sudo apt update && sudo apt install terraform
        Get:1 https://apt.releases.hashicorp.com jammy/main amd64 terraform amd64 1.3.4 [19.5 MB]
        Fetched 19.5 MB in 0s (210 MB/s)
        Selecting previously unselected package terraform.
        (Reading database ... 109186 files and directories currently installed.)
        Preparing to unpack .../terraform_1.3.4_amd64.deb ...
        Unpacking terraform (1.3.4) ...
        Setting up terraform (1.3.4) ...
      5. Confirm the application has been installed correctly. Use the terraform command without any parameters and ensure the Terraform help information is displayed.

        Usage: terraform [global options] <subcommand> [args]
        
        The available commands for execution are listed below.
        The primary workflow commands are given first, followed by
        less common or more advanced commands.
        
        Main commands:
        init          Prepare your working directory for other commands
        ...
        -version      An alias for the "version" subcommand.
      6. To display the installed version of Terraform, use the terraform -v command.

        Terraform v1.3.4
        on linux_amd64
      7. Create a directory for the new Terraform project and change to this directory.

        mkdir ~/terraform
        cd ~/terraform

      Creating a Terraform File to Create Linode Object Storage

      To deploy the necessary infrastructure for a Linode Object Storage solution, create a Terraform file defining the final state of the system. This file must include the following sections:

      • The terraform definition, which includes the required providers. In this case, only the Linode provider is included.
      • The Linode provider.
      • The linode_object_storage_cluster data source.
      • At least one linode_object_storage_bucket resource. A storage bucket provides a space to store files and text objects.
      • (Optional) A linode_object_storage_key.
      • A list of linode_object_storage_object items. An object storage object can be a text file or a string of text. All storage objects are stored in a particular object storage bucket.

      To construct the Terraform file, execute the following instructions. For more information on how to create a Terraform file, see the
      Terraform documentation.

      1. Create the file linode-terraform-storage.tf inside the terraform directory.

        nano linode-terraform-storage.tf
      2. At the top of the file, add a terraform section, including all required_providers for the infrastructure. In this case, the only required provider is linode. Set the source to linode/linode. Use the current version of the linode provider. At publication time, the version is 1.29.4. To determine the current version, see the
        Linode Namespace in the Terraform Registry.

        File: /terraform/linode-terraform-storage.tf
        terraform {
          required_providers {
            linode = {
              source = "linode/linode"
              version = "1.29.4"
            }
          }
        }
      3. Define the linode provider. Include the
        Linode v4 API token for the account. See the
        Getting Started with the Linode API guide for more information about tokens.

        Note

        To hide sensitive information, such as API tokens, declare a variables.tf file and store the information there. Retrieve the variables using the var keyword. See the
        Linode introduction to HCL for guidance on how to use variables.
        File: /terraform/linode-terraform-storage.tf
        provider "linode" {
          token = "THE_LINODE_API_TOKEN"
        }
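
        A minimal sketch of the variables approach, assuming a variable named linode_token, follows. Terraform prompts for the value at run time unless it is supplied in a terraform.tfvars file or through the TF_VAR_linode_token environment variable.

        File: /terraform/variables.tf

        variable "linode_token" {
          description = "Linode v4 API token"
          type        = string
          sensitive   = true
        }

        The provider block can then reference the variable instead of the literal token:

        provider "linode" {
          token = var.linode_token
        }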
      4. Create a linode_object_storage_cluster data source. In the following code sample, the new cluster object is named primary. Designate a region for the cluster using the id attribute. In the following example, the region is eu-central-1. The cluster object provides access to the domain, status, and region of the cluster. See the Terraform registry documentation for the
        Linode Object Storage Cluster data source for more information.

        Note

        Not all regions support storage clusters. For a full list of all data centers where a storage cluster can be configured, see the Linode
        Object Storage Product Information.
        File: /terraform/linode-terraform-storage.tf
        data "linode_object_storage_cluster" "primary" {
            id = "eu-central-1"
        }
      5. Optional: Create a linode_object_storage_key to control access to the storage objects. Provide a name for the key and a label to help identify it.

        File: /terraform/linode-terraform-storage.tf
        resource "linode_object_storage_key" "storagekey" {
            label = "image-access"
        }
      6. Create a linode_object_storage_bucket resource. The cluster attribute for the bucket must contain the id of the cluster data source object. In this example, the cluster identifier can be retrieved using the data.linode_object_storage_cluster.primary.id attribute. Assign a label to the storage bucket. This label must be unique within the region, so choose a reasonably distinctive name. The following example sets the label to mybucket-j1145.

        Set the access_key and secret_key attributes to the access_key and secret_key fields of the storage key. In the following example, the name of the key is linode_object_storage_key.storagekey. If you skipped the previous step and are not using an object storage key, do not include these attributes.

        Note

        The Linode Object Storage Bucket resource contains many other configurable attributes. It is possible to set life cycle rules, versioning, and access control rules, and to associate the storage bucket with TLS/SSL certificates. For more information, see the
        Linode Object Storage Bucket documentation in the Terraform registry.
        File: /terraform/linode-terraform-storage.tf
        resource "linode_object_storage_bucket" "mybucket-j1145" {
          cluster = data.linode_object_storage_cluster.primary.id
          label = "mybucket-j1145"
          access_key = linode_object_storage_key.storagekey.access_key
          secret_key = linode_object_storage_key.storagekey.secret_key
        }
      7. Add items to the storage bucket. To add a file or a block of text to the bucket, create a linode_object_storage_object resource. Specify the cluster and bucket to store the object in, along with a key to uniquely identify the storage object within the bucket. To use a storage key, include the secret_key and access_key of the storage key.

        To add a text file to storage, specify the file path as the source attribute using the following example as a guide. This example adds the file terraform_test.txt to the bucket mybucket-j1145 in cluster primary. For more information on adding storage objects, see the
        Linode Storage Object resource documentation.

        File: /terraform/linode-terraform-storage.tf
        resource "linode_object_storage_object" "object1" {
            bucket  = linode_object_storage_bucket.mybucket-j1145.label
            cluster = data.linode_object_storage_cluster.primary.id
            key     = "textfile-object"
        
            secret_key = linode_object_storage_key.storagekey.secret_key
            access_key = linode_object_storage_key.storagekey.access_key
        
            source = pathexpand("~/terraform_test.txt")
        }
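
        Note that terraform plan and terraform apply fail if the source file does not exist. If there is no terraform_test.txt in your home directory yet, create a placeholder first:

        echo "Hello from Linode Object Storage" > ~/terraform_test.txt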
      8. Optional: The storage bucket can also hold strings of text. To store a string, declare a new linode_object_storage_object, including the bucket, cluster, and storage key information as before. Choose a new unique key for the text object. The content attribute should be set to the text string. Fill in the content_type and content_language to reflect the nature of the text.

        File: /terraform/linode-terraform-storage.tf
        resource "linode_object_storage_object" "object2" {
            bucket  = linode_object_storage_bucket.mybucket-j1145.label
            cluster = data.linode_object_storage_cluster.primary.id
            key     = "freetext-object"
        
            secret_key = linode_object_storage_key.storagekey.secret_key
            access_key = linode_object_storage_key.storagekey.access_key
        
            content          = "This is the content of the Object..."
            content_type     = "text/plain"
            content_language = "en"
        }
      9. When all sections have been added, the .tf file should resemble the following example.

        File: /terraform/linode-terraform-storage.tf
        terraform {
          required_providers {
            linode = {
              source = "linode/linode"
              version = "1.29.4"
            }
          }
        }
        
        provider "linode" {
          token = "THE_LINODE_API_TOKEN"
        }
        
        data "linode_object_storage_cluster" "primary" {
            id = "eu-central-1"
        }
        
        resource "linode_object_storage_key" "storagekey" {
            label = "image-access"
        }
        
        resource "linode_object_storage_bucket" "mybucket-j1145" {
          cluster = data.linode_object_storage_cluster.primary.id
          label = "mybucket-j1145"
          access_key = linode_object_storage_key.storagekey.access_key
          secret_key = linode_object_storage_key.storagekey.secret_key
        }
        
        resource "linode_object_storage_object" "object1" {
            bucket  = linode_object_storage_bucket.mybucket-j1145.label
            cluster = data.linode_object_storage_cluster.primary.id
            key     = "textfile-object"
        
            secret_key = linode_object_storage_key.storagekey.secret_key
            access_key = linode_object_storage_key.storagekey.access_key
        
            source = pathexpand("~/terraform_test.txt")
        }
        
        resource "linode_object_storage_object" "object2" {
            bucket  = linode_object_storage_bucket.mybucket-j1145.label
            cluster = data.linode_object_storage_cluster.primary.id
            key     = "freetext-object"
        
            secret_key = linode_object_storage_key.storagekey.secret_key
            access_key = linode_object_storage_key.storagekey.access_key
        
            content          = "This is the content of the Object..."
            content_type     = "text/plain"
            content_language = "en"
        }
      10. When done, press CTRL+X to exit nano, then Y to save, and Enter to confirm.

      Using Terraform to Configure Linode Object Storage

      Terraform commands act upon the linode-terraform-storage.tf file to analyze the contents and deploy the correct infrastructure. To create the Linode object storage infrastructure items in the file, run the following commands.

      1. Initialize Terraform using the terraform init command. Terraform confirms it is initialized.

        Initializing the backend...
        
        Initializing provider plugins...
        - Finding linode/linode versions matching "1.29.4"...
        - Installing linode/linode v1.29.4...
        - Installed linode/linode v1.29.4 (signed by a HashiCorp partner, key ID F4E6BBD0EA4FE463)
        ...
        Terraform has been successfully initialized!
        ...
      2. Run the terraform plan command to gain an overview of the anticipated infrastructure changes. This plan catalogs the components Terraform intends to add, modify, or delete. It is important to review the output carefully to ensure the plan is accurate and there are no unexpected changes. If the results are not satisfactory, change the .tf file and try again.

        data.linode_object_storage_cluster.primary: Reading...
        data.linode_object_storage_cluster.primary: Read complete after 0s [id=eu-central-1]
        
        Terraform used the selected providers to generate the following execution plan. Resource actions are indicated
        with the following symbols:
          + create
        
        Terraform will perform the following actions:
        
          # linode_object_storage_bucket.mybucket-j1145 will be created
          + resource "linode_object_storage_bucket" "mybucket-j1145" {
              + access_key   = (known after apply)
              + acl          = "private"
              + cluster      = "eu-central-1"
              + cors_enabled = true
              + hostname     = (known after apply)
              + id           = (known after apply)
              + label        = "mybucket-j1145"
              + secret_key   = (sensitive)
              + versioning   = (known after apply)
            }
        
          # linode_object_storage_key.storagekey will be created
          + resource "linode_object_storage_key" "storagekey" {
              + access_key = (known after apply)
              + id         = (known after apply)
              + label      = "image-access"
              + limited    = (known after apply)
              + secret_key = (sensitive value)
            }
        
          # linode_object_storage_object.object1 will be created
          + resource "linode_object_storage_object" "object1" {
              + access_key    = (known after apply)
              + acl           = "private"
              + bucket        = "mybucket-j1145"
              + cluster       = "eu-central-1"
              + content_type  = (known after apply)
              + etag          = (known after apply)
              + force_destroy = false
              + id            = (known after apply)
              + key           = "textfile-object"
              + secret_key    = (sensitive)
              + source        = "/home/username/terraform_test.txt"
              + version_id    = (known after apply)
            }
        
          # linode_object_storage_object.object2 will be created
          + resource "linode_object_storage_object" "object2" {
              + access_key       = (known after apply)
              + acl              = "private"
              + bucket           = "mybucket-j1145"
              + cluster          = "eu-central-1"
              + content          = "This is the content of the Object..."
              + content_language = "en"
              + content_type     = "text/plain"
              + etag             = (known after apply)
              + force_destroy    = false
              + id               = (known after apply)
              + key              = "freetext-object"
              + secret_key       = (sensitive)
              + version_id       = (known after apply)
            }
        
        Plan: 4 to add, 0 to change, 0 to destroy.
      3. When all further changes to the .tf file have been made, use terraform apply to deploy the changes. If any errors appear, edit the .tf file and run terraform plan and terraform apply again. Terraform displays a list of the intended changes and asks whether to proceed.

        Plan: 4 to add, 0 to change, 0 to destroy.
        
        Do you want to perform these actions?
          Terraform will perform the actions described above.
          Only 'yes' will be accepted to approve.
        
          Enter a value:
      4. Enter yes to continue. Terraform displays a summary of all changes and confirms the operation has been completed. If any errors appear, edit the .tf file and run the commands again.

        linode_object_storage_key.storagekey: Creating...
        linode_object_storage_key.storagekey: Creation complete after 3s [id=367232]
        linode_object_storage_bucket.mybucket-j1145: Creating...
        linode_object_storage_bucket.mybucket-j1145: Creation complete after 6s [id=eu-central-1:mybucket-j1145]
        linode_object_storage_object.object1: Creating...
        linode_object_storage_object.object2: Creating...
        linode_object_storage_object.object1: Creation complete after 0s [id=mybucket-j1145/textfile-object]
        linode_object_storage_object.object2: Creation complete after 0s [id=mybucket-j1145/freetext-object]
        
        Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
      5. View the
        Object Storage summary page of the Linode Cloud Manager to ensure all objects have been correctly created and configured. Select the name of the Object Storage Bucket to view a list of all object storage objects inside the bucket. This page also allows you to download any files and text objects in the bucket.
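
      You can also confirm what Terraform is tracking from the command line. The terraform state list command prints the address of every resource and data source recorded in the state file; for this guide's configuration, the output should resemble the following.

      terraform state list

      data.linode_object_storage_cluster.primary
      linode_object_storage_bucket.mybucket-j1145
      linode_object_storage_key.storagekey
      linode_object_storage_object.object1
      linode_object_storage_object.object2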

      Deleting and Editing the Linode Storage Objects

      To delete the storage object configuration, use the terraform destroy command. This causes Terraform to destroy every resource recorded in its state, as declared in the Terraform files in the directory. For example, running terraform destroy against the linode-terraform-storage.tf file deletes all the storage buckets, keys, and storage objects. To delete only a subset of the infrastructure, remove the relevant resources from the file and run terraform apply instead: Terraform destroys any resources that remain in its state but no longer appear in the configuration. Before a full destroy, run terraform plan -destroy to obtain a summary of the objects Terraform intends to delete.

      terraform plan -destroy
      terraform destroy
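
      Terraform also supports destroying a single resource while leaving the rest of the configuration in place. For example, the following command, a sketch using Terraform's -target option, removes only the free-text object defined earlier:

      terraform destroy -target=linode_object_storage_object.object2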

      To modify the contents of an object storage object, edit the .tf file containing the configuration so it reflects the new configuration. Run terraform plan to review the changes, then run terraform apply. Terraform automatically makes the necessary changes. Use this command with caution because it might cause an object to be deleted and re-created rather than modified.

      terraform plan
      terraform apply

      Conclusion

      Terraform is a powerful and efficient Infrastructure as Code (IaC) application that automates the process of deploying infrastructure. To use it, describe the final state of the network in HCL or JSON. Use the terraform plan command from the Terraform client to preview the changes and terraform apply to deploy the configuration.

      The
      Linode Provider includes an API for configuring
      Linode Object Storage infrastructure. First declare the Linode provider and the
      Linode Object Storage Cluster data source. Define the object storage infrastructure using
      Linode object storage buckets,
      object storage keys, and
      object storage objects. The object storage objects are the files or strings of text to be stored. For more information on using Terraform, consult the
      Terraform documentation.


      How to Migrate to Linode Object Storage


      Linode Object Storage is S3-compatible, so it not only offers the benefits of S3 but can also leverage common S3 tooling. This lets Linode Object Storage work alongside hyperscale object storage services such as AWS S3 and Google Cloud Storage.

      This tutorial covers the tooling needed to make migration from AWS S3 to Linode Object Storage a smooth and straightforward process. It explains what you need to know before migrating, then gives you two options depending on your needs: using rclone to migrate one or a few buckets, or using a custom Python script to migrate all of your buckets at once.

      Before You Begin

      1. Familiarize yourself with our
        Getting Started with Linode guide, and complete the steps for setting your Linode’s hostname and timezone.

      2. This guide uses sudo wherever possible. Complete the sections of our
        How to Secure Your Server guide to create a standard user account, harden SSH access, and remove unnecessary network services.

      3. Update your system.

        • Debian and Ubuntu:

          sudo apt update && sudo apt upgrade
          
        • AlmaLinux, CentOS Stream (8 or later), Fedora, and Rocky Linux:

          sudo dnf upgrade
          

      Note

      This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you’re not familiar with the sudo command, see the
      Users and Groups guide.

      How S3 Migration Works

      While popularized by AWS, S3 has become a widely used model for object storage. Because S3-compatible object storage services share the same model, they can interact with the same tooling.

      Linode Object Storage is no different. For instance, you can fully operate your Linode buckets through the s3cmd tool commonly used for managing AWS S3 and other S3-compatible services. You can learn more about that in our guide
      Using S3cmd with Object Storage.

      As such, most tools designed for working with S3 can be used seamlessly with either AWS S3 or Linode Object Storage.

      This includes the two tools used in this tutorial: the rclone file-transfer utility and Amazon's Boto3 SDK for Python.

      What to Consider Before Migrating to Linode Object Storage

      Both migration processes in this tutorial require you to have access and secret keys for your AWS S3 and Linode Object Storage instances.

      • To learn about creating and managing access keys for Linode Object Storage, take a look at our guide
        Manage Access Keys

      • For more on AWS access keys, take a look at the AWS
        documentation on access keys. Essentially, navigate to the Security Credentials page, scroll down, and select Create access key.

      Throughout the rest of this tutorial, and in its supplementary files, you need to substitute the placeholders for your own keys. This means replacing AWS_ACCESS_KEY and AWS_SECRET_KEY with the access and secret keys, respectively, for your AWS S3 instance. Likewise, replace LINODE_ACCESS_KEY and LINODE_SECRET_KEY with your access and secret keys, respectively, for your Linode Object Storage instance.

      You also need the region name used for each instance:

      • Linode Object Storage: The region name for your bucket is provided in the endpoint URL. For instance, if your endpoint URL is example-aws-bucket-1.us-southeast-1.linodeobjects.com, the region name for your bucket is us-southeast-1.

      • AWS S3: The region name for your bucket is provided on the dashboard, within the listing of your buckets.

      This tutorial uses us-east-2 as the placeholder for the AWS S3 region and us-southeast-1 as the placeholder for the Linode Object Storage region. Replace both throughout with your own instances’ regions.

      How to Migrate a Bucket from AWS S3 to Linode Object Storage

      When migrating one or only a few buckets, rclone provides the smoothest process. Enter the credentials and connection details for your S3 instances, and you can migrate a bucket with a single command.

      These next few sections walk you through that process. They demonstrate how to set up rclone on your system, how to configure it, and the commands used to migrate buckets.

      Setting Up the Prerequisites

      To get started, you need to install the rclone tool and connect it to both your AWS S3 and Linode Object Storage instances.

      1. Install rclone. rclone is a command-line utility for managing and transferring files across a wide range of storage backends, and it comes with full support for connecting to and transferring data over S3.

        • Debian and Ubuntu:

          sudo apt install rclone
          
        • Fedora:

          sudo dnf install rclone
          
        • AlmaLinux, CentOS Stream, and Rocky Linux:

          sudo dnf install epel-release
          sudo dnf install rclone
          

        You can then verify your installation with:

        rclone version
        
        rclone v1.53.3-DEV
        - os/arch: linux/amd64
        - go version: go1.18
      2. Create an rclone configuration file with the connection details for the AWS S3 and Linode Object Storage instances. The rclone configuration file, located at ~/.config/rclone/rclone.conf, can hold multiple connection configurations. Here, the connections are named awss3 and linodes3.

        Replace the AWS_ACCESS_KEY, AWS_SECRET_KEY, LINODE_ACCESS_KEY, and LINODE_SECRET_KEY with your instances’ access and secret keys. Placeholder regions have been provided below — us-east-2 for AWS and us-southeast-1 for Linode. Be sure to replace these as well with your instances’ actual region names.

        File: ~/.config/rclone/rclone.conf
        [awss3]
        type = s3
        env_auth = false
        acl = private
        access_key_id = AWS_ACCESS_KEY
        secret_access_key = AWS_SECRET_KEY
        region = us-east-2
        location_constraint = us-east-2
        
        [linodes3]
        type = s3
        env_auth = false
        acl = private
        access_key_id = LINODE_ACCESS_KEY
        secret_access_key = LINODE_SECRET_KEY
        region = us-southeast-1
        endpoint = us-southeast-1.linodeobjects.com
      3. You can then verify your configuration by listing the remote storage sources for rclone:

        rclone listremotes --long
        
        awss3:    s3
        linodes3: s3

        You can further verify the connections by listing the contents of a bucket on one of the storage services. This command, for instance, lists the contents of the example-aws-bucket-1 bucket on the service configured as awss3:

        rclone tree awss3:example-aws-bucket-1
        

        In this case, the AWS S3 bucket has two text files.

        /
        ├── example-text-file-1.txt
        └── example-text-file-2.txt

      Syncing Buckets

      rclone works by copying files from a storage source to a storage destination. Once you have a configuration like the one above, copying can be easily done with a command like the following. This example copies objects from an AWS S3 bucket named example-aws-bucket-1 to a Linode Object Storage bucket named example-linode-bucket-1:

      rclone copy awss3:example-aws-bucket-1 linodes3:example-linode-bucket-1 -P
      
      Transferred:   	      177 / 177 Bytes, 100%, 468 Bytes/s, ETA 0s
      Transferred:            2 / 2, 100%
      Elapsed time:         0.5s

      The -P option tells rclone to output the steps in the transfer process. You can also test out a given copy command by using the --dry-run option.

      As an alternative to the copy command, you can use the sync command. With sync, any files in the destination that are not in the source are deleted. In other words, this command has rclone make the contents of the destination bucket exactly match the contents of the source bucket. Use sync only when you strictly need the destination to match the source.
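
      For example, a dry run of sync on the same example buckets looks like the following. The --dry-run option reports what would be copied and deleted without changing anything; remove it to perform the actual sync.

      rclone sync awss3:example-aws-bucket-1 linodes3:example-linode-bucket-1 -P --dry-run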

      Verifying the Results

      The simplest way to verify the results is through rclone itself. You can use a tree command like the one shown below:

      rclone tree linodes3:example-linode-bucket-1
      
      /
      ├── example-text-file-1.txt
      └── example-text-file-2.txt

      Alternatively, you can also check in the Linode Cloud Manager by navigating to your Object Storage instance and selecting the target bucket.

      Objects reflected in Linode Object Storage bucket

      How to Migrate All Buckets from AWS S3 to Linode Object Storage

      The approach covered above works well when you need to migrate a few buckets. But it quickly becomes unrealistic when you have numerous buckets you need to migrate from AWS to Linode.

      To address this, the following sections walk you through using a custom Python script for migrating AWS S3 buckets to a Linode Object Storage instance.

      The script uses Boto3, Amazon’s Python SDK for interacting with and managing AWS S3 buckets. The SDK can readily interface with many other S3-compatible services, including Linode Object Storage.

      Setting Up the Prerequisites

      This process uses Python 3 with the Boto3 library to connect to and operate the AWS S3 and Linode Object Storage buckets. To get this working, you also need to provide credentials for connecting to each of your instances.

      Follow the steps here to get the prerequisite software you need and find links to download the migration script and its configuration file.

      1. Ensure that you have Python 3 and Pip 3 installed. You can find instructions for installing these on your system in the
        Install Python 3 and pip3 section of our guide on installing the Linode CLI.

      2. Install the Boto3 Python library via Pip 3:

        pip3 install boto3
        
      3. Download the configuration file for the migration script
        here. Then, modify the configurations to match your AWS and Linode instances’ credentials and regions.

        Note that the endpoint_url value needs to have the http/https prefix and should be the Linode endpoint excluding the bucket portion of the URL.

      4. Finally, download the migration script
        here.

      Understanding the Script

      The script downloaded above should already cover most use cases for migrating from an AWS S3 instance to a Linode Object Storage instance. Nevertheless, you may want to familiarize yourself with the script and make adjustments to fit your particular needs.

      To help make navigating and reviewing the script easier, here is a rough diagram of its operations. The diagram does not represent a one-to-one outline of the script. Instead, its purpose is to clarify the script’s organization and order of operations.

      Rough diagram of the migration script
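
      For orientation, the following minimal Python sketch shows the core Boto3 pattern this kind of migration relies on: one S3 client per service, with the Linode client pointed at a custom endpoint_url. It is not the downloaded script itself; the credentials, regions, and endpoint are placeholders, and it ignores details such as pagination for buckets with more than 1,000 objects.

        import boto3

        # Client for the AWS S3 source. Credentials and region are placeholders.
        aws = boto3.client(
            "s3",
            aws_access_key_id="AWS_ACCESS_KEY",
            aws_secret_access_key="AWS_SECRET_KEY",
            region_name="us-east-2",
        )

        # Client for the Linode Object Storage destination. The endpoint_url
        # is what points Boto3 at Linode instead of AWS.
        linode = boto3.client(
            "s3",
            aws_access_key_id="LINODE_ACCESS_KEY",
            aws_secret_access_key="LINODE_SECRET_KEY",
            region_name="us-southeast-1",
            endpoint_url="https://us-southeast-1.linodeobjects.com",
        )

        # Recreate each source bucket on Linode and copy its objects across.
        for bucket in aws.list_buckets()["Buckets"]:
            name = bucket["Name"]
            linode.create_bucket(Bucket=name)
            for obj in aws.list_objects_v2(Bucket=name).get("Contents", []):
                body = aws.get_object(Bucket=name, Key=obj["Key"])["Body"].read()
                linode.put_object(Bucket=name, Key=obj["Key"], Body=body)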

      Running the Script

      When you are ready, you can run the script with the following Python command:

      python3 s3_migration.py
      

      The output indicates the script’s progress and provides alerts if any errors are encountered along the way.

      Verifying the Results

      You can verify the script’s success in the same manner as shown in the section on rclone above. Probably the most accessible method is navigating to the Linode Cloud Manager and taking a look at your Object Storage instance. There, you should see the buckets from your AWS S3 instance and, within them, the objects that have been migrated.

      Objects migrated from AWS and reflected in a new Linode Object Storage bucket

      Conclusion

      This tutorial has covered the tools you need to migrate from an AWS S3 instance to a Linode Object Storage instance. You can readily migrate one or even a few buckets with a straightforward rclone setup. Or you can use our custom script to migrate all of your buckets from one instance to another.


      Working with CORS Policies on Linode Object Storage


      Linode Object Storage offers a globally-available, S3-compatible storage solution. Whether you are storing critical backup files or data for a static website, S3 object storage can efficiently answer the call.

      To make the most of object storage, you may need to access the data from other domains. For instance, your dynamic applications may opt to use S3 for static file storage.

      This leaves you dealing with Cross-Origin Resource Sharing, or CORS. However, it’s often not clear how to effectively navigate CORS policies or deal with issues as they come up.

      This tutorial aims to clarify how to work with CORS and S3. It covers tools and approaches for effectively reviewing and managing CORS policies for Linode Object Storage or most other S3-compatible storage solutions.

      CORS and S3 Storage – What You Need to Know

      Linode Object Storage is an S3 (Simple Storage Service) solution. With S3, data is stored as objects in “buckets.” This gives S3 a flat approach to storage, in contrast to hierarchical, logistically more complicated structures like traditional file systems. Objects stored in S3 can also be given rich metadata.

      CORS defines how clients and servers from different domains may share resources. Generally, CORS policies restrict access to resources to requests from the same domain. By managing your CORS policies, you can open up services to requests from specified origin domains, or from any domains whatsoever.

      An S3-compatible service like Linode Object Storage can provide excellent storage for applications. However, you also want to keep your data as secure as possible while still allowing your applications the access they need.

      This is where managing CORS policies on your object storage service becomes imperative. Applications and other tools often need to access stored resources from particular domains. Implementing specific CORS policies controls what kinds of requests and responses each origin domain is allowed.

      Working with CORS Policies on Linode Object Storage

      One of the best tools for managing policies on your S3, including Linode Object Storage, is s3cmd. Follow along with our guide
      Using S3cmd with Object Storage to:

      1. Install s3cmd on your system. The installation takes place on the system from which you intend to manage your S3 instance.

      2. Configure s3cmd for your Linode Object Storage instance. This includes indicating the instance’s access key, endpoint, etc.

      You can verify the connection to your object storage instance with the command to list your buckets. This example lists the one bucket used for this tutorial, example-cors-bucket:

      s3cmd ls
      
      2022-09-24 16:13  s3://example-cors-bucket

      Once you have s3cmd set up for your S3 instance, use it to follow along with the upcoming sections of this tutorial. These show you how to use the tool to review and deploy CORS policies.

      Reviewing CORS Policies for Linode Object Storage

      You can get the current CORS policies for your S3 bucket using s3cmd’s info command. The command provides general information on the designated bucket, including its policies:

      s3cmd info s3://example-cors-bucket
      
      s3://example-cors-bucket/ (bucket):
         Location:  default
         Payer:     BucketOwner
         Expiration Rule: none
         Policy:    none
         CORS:      <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><CORSRule><AllowedMethod>GET</AllowedMethod><AllowedMethod>PUT</AllowedMethod><AllowedMethod>DELETE</AllowedMethod><AllowedMethod>HEAD</AllowedMethod><AllowedMethod>POST</AllowedMethod><AllowedOrigin>*</AllowedOrigin><AllowedHeader>*</AllowedHeader></CORSRule></CORSConfiguration>
         ACL:       31ffbc26-d6ed-4bc3-8a14-ad78fe8f95b6: FULL_CONTROL

      This bucket already has a CORS policy in place. This is because it was set up with the CORS Enabled setting using the Linode Cloud Manager web interface.

      The basic CORS policy above is fairly permissive, allowing access for any request method from any domain. Keep reading to see how you can fine-tune such policies to better fit your particular needs.

      Deploying CORS Policies on Linode Object Storage

      As you can see above, the Linode Cloud Manager can set up a general CORS policy for your bucket. However, if you need more fine-grained control, you need to deploy custom CORS policies.

      Creating CORS policies follows a similar methodology to the one outlined in our
      Define Access and Permissions using Bucket Policies tutorial.

      These next sections break down the particular fields needed for CORS policies and how each affects your bucket’s availability.

      Configuring Policies

      The overall structure for CORS policies on S3 looks like the following. While policies on your object storage instance can generally be set with JSON or XML, CORS policies must use the XML format:

      File: cors_policies.xml
      <CORSConfiguration>
        <CORSRule>
          <AllowedHeader>*</AllowedHeader>
      
          <AllowedMethod>GET</AllowedMethod>
          <AllowedMethod>PUT</AllowedMethod>
          <AllowedMethod>POST</AllowedMethod>
          <AllowedMethod>DELETE</AllowedMethod>
          <AllowedMethod>HEAD</AllowedMethod>
      
          <AllowedOrigin>*</AllowedOrigin>
      
          <ExposeHeader>*</ExposeHeader>
      
          <MaxAgeSeconds>3000</MaxAgeSeconds>
        </CORSRule>
      </CORSConfiguration>

      To break this structure down:

      • The policy introduces a list of one or more <CORSRule> elements within a <CORSConfiguration> element. Each <CORSRule> element contains policy details.

      • Policies tend to have some combination of the five types of elements shown in the example above.

        The <AllowedHeader>, <AllowedMethod>, and <AllowedOrigin> elements are almost always present. Further, there may be multiple of these elements within a single <CORSRule>.

        The other two elements, <ExposeHeader> and <MaxAgeSeconds>, are optional. There can be multiple <ExposeHeader> elements, but only one <MaxAgeSeconds>.

      • <AllowedHeader> lets you specify request headers allowed for the given policy. You can find a list of commonly used request headers in AWS’s
        Common Request Headers documentation.

      • <AllowedMethod> lets you specify request methods that the given policy applies to. The full range of supported HTTP request methods is shown in the example above.

      • <AllowedOrigin> lets you specify request origins for the policy. These are the domains from which cross-origin requests can be made.

      • <ExposeHeader> can specify which response headers the policy allows to be exposed. You can find a list of commonly used response headers in AWS’s
        Common Response Headers documentation.

      • <MaxAgeSeconds> can specify the amount of time, in seconds, that browsers are allowed to cache the response to preflight requests. Having this cache allows the browser to repeat the original requests without having to send another preflight request.

      Example CORS Policies

      To give more concrete ideas of how you can work with CORS policies, the following are two additional example policies. One provides another simple, but more limited, policy, while the other presents a more complicated set of two policies.

      • First, a public access read-only policy. This lets any origin, with any request headers, make GET and HEAD requests to the bucket. However, the policy does not expose custom response headers.

        File: cors_policies.xml
        <CORSConfiguration>
          <CORSRule>
            <AllowedHeader>*</AllowedHeader>
        
            <AllowedMethod>GET</AllowedMethod>
            <AllowedMethod>HEAD</AllowedMethod>
        
            <AllowedOrigin>*</AllowedOrigin>
          </CORSRule>
        </CORSConfiguration>
            
      • Next, a set of policies for fine control over requests from example.com. The <AllowedOrigin> elements specify the range of possible example.com domains. The two policies distinguish the kinds of headers allowed based on the kinds of request methods.

        File: cors_policies.xml
        <CORSConfiguration>
          <CORSRule>
            <AllowedHeader>Authorization</AllowedHeader>
        
            <AllowedMethod>GET</AllowedMethod>
            <AllowedMethod>HEAD</AllowedMethod>
        
            <AllowedOrigin>http://example.com</AllowedOrigin>
            <AllowedOrigin>http://*.example.com</AllowedOrigin>
            <AllowedOrigin>https://example.com</AllowedOrigin>
            <AllowedOrigin>https://*.example.com</AllowedOrigin>
        
            <ExposeHeader>Access-Control-Allow-Origin</ExposeHeader>
        
            <MaxAgeSeconds>3000</MaxAgeSeconds>
          </CORSRule>
          <CORSRule>
            <AllowedHeader>Authorization</AllowedHeader>
            <AllowedHeader>Origin</AllowedHeader>
            <AllowedHeader>Content-*</AllowedHeader>
        
            <AllowedMethod>PUT</AllowedMethod>
            <AllowedMethod>POST</AllowedMethod>
            <AllowedMethod>DELETE</AllowedMethod>
        
            <AllowedOrigin>http://example.com</AllowedOrigin>
            <AllowedOrigin>http://*.example.com</AllowedOrigin>
            <AllowedOrigin>https://example.com</AllowedOrigin>
            <AllowedOrigin>https://*.example.com</AllowedOrigin>
        
            <ExposeHeader>ETag</ExposeHeader>
        
            <MaxAgeSeconds>3000</MaxAgeSeconds>
          </CORSRule>
        </CORSConfiguration>
            

      Deploying Policies

      The next step is to actually deploy your CORS policies. Once you do, your S3 bucket starts following them to determine what origins to allow and what request and response information to permit.

      Follow these steps to put your CORS policies into practice on your S3 instance.

      1. Save your CORS policy into an XML file. This example uses a file named cors_policies.xml, which contains the second example policy above.

      2. Use s3cmd’s setcors command to deploy the CORS policies to the bucket. This command takes the policy XML file and the bucket identifier as arguments:

        s3cmd setcors cors_policies.xml s3://example-cors-bucket
        
      3. Verify the new CORS policies using the info command as shown earlier in this tutorial:

        s3cmd info s3://example-cors-bucket
        
        s3://example-cors-bucket/ (bucket):
           Location:  default
           Payer:     BucketOwner
           Expiration Rule: none
           Policy:    none
           CORS:      <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><CORSRule><AllowedMethod>GET</AllowedMethod><AllowedMethod>HEAD</AllowedMethod><AllowedOrigin>http://*.example.com</AllowedOrigin><AllowedOrigin>http://example.com</AllowedOrigin><AllowedOrigin>https://*.example.com</AllowedOrigin><AllowedOrigin>https://example.com</AllowedOrigin><AllowedHeader>Authorization</AllowedHeader><MaxAgeSeconds>3000</MaxAgeSeconds><ExposeHeader>Access-Control-Allow-Origin</ExposeHeader></CORSRule><CORSRule><AllowedMethod>PUT</AllowedMethod><AllowedMethod>DELETE</AllowedMethod><AllowedMethod>POST</AllowedMethod><AllowedOrigin>http://*.example.com</AllowedOrigin><AllowedOrigin>http://example.com</AllowedOrigin><AllowedOrigin>https://*.example.com</AllowedOrigin><AllowedOrigin>https://example.com</AllowedOrigin><AllowedHeader>Authorization</AllowedHeader><AllowedHeader>Content-*</AllowedHeader><AllowedHeader>Origin</AllowedHeader><MaxAgeSeconds>3000</MaxAgeSeconds><ExposeHeader>ETag</ExposeHeader></CORSRule></CORSConfiguration>
           ACL:       31ffbc26-d6ed-4bc3-8a14-ad78fe8f95b6: FULL_CONTROL

      Troubleshooting Common CORS Errors

      Having CORS-related issues on your S3 instance? Take these steps to help narrow down the issue and figure out the kind of policy change needed to resolve it.

      1. Review your instance’s CORS policies using s3cmd:

        s3cmd info s3://example-cors-bucket
        

        This can give you a concrete reference for what policies are in place and the specific details of each, like header and origin information.

      2. Review the request and response data. This can give you insights on any possible inconsistencies between existing CORS policies and the actual requests and responses.

        You can use a tool like cURL for this. First, use s3cmd to create a signed URL to an object on your storage instance. This example command creates a URL for an example.txt object and makes the URL last 300 seconds:

        s3cmd signurl s3://example-cors-bucket/example.txt +300
        

        Now, until the URL expires, you can use a cURL command like this one to send a request for the object:

        curl -v "http://example-cors-bucket.us-southeast-1.linodeobjects.com/index.md?AWSAccessKeyId=example-access-key&Expires=1664121793&Signature=example-signature"
        

        The -v option gives you verbose results, outputting more details to help you dissect any request and response issues.

      3. Compare the results of the cURL request to the CORS policy on your instance. A more CORS-specific variation of the request is sketched after this list.
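
      For a CORS-specific check (a sketch; the origin and signed URL are placeholders), send the request with an Origin header and look for Access-Control-Allow-Origin and related headers in the verbose output. S3-compatible services only return these headers when the request matches one of the bucket's CORS rules:

      curl -v -H "Origin: https://example.com" "http://example-cors-bucket.us-southeast-1.linodeobjects.com/example.txt?AWSAccessKeyId=example-access-key&Expires=1664121793&Signature=example-signature"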

      Conclusion

      This guide covers the tools and approaches you need to start managing CORS for your Linode Object Storage or another S3-compatible instance. Once you have these, addressing CORS issues is a matter of reviewing and adjusting policies against desired origins and request types.

      Keep improving your resources for managing your S3 through our collection of
      object storage guides. These cover a range of topics to help you with S3 generally, and Linode Object Storage in particular.

      Have more questions or want some help getting started? Feel free to reach out to our
      Support team.
