

      WordPress Permalinks: How to Manage Your Website’s URL Structure

      In this article, we’ll explain what permalinks are and give you a tour of all the default permalink structures WordPress offers. Then, you’ll learn how to create custom ones in two ways.

      Each page on your website has a unique URL that enables visitors to identify and access it directly. Ideally, you want these URLs to be as easy to read and type as possible. In technical terms, those unique URLs are known as permalinks.

      When you use a Content Management System (CMS) such as WordPress, it automatically generates URLs for your pages and posts based on the permalink structure you choose. However, it’s important to note that some options are better suited to readability and Search Engine Optimization (SEO) than others.

      What WordPress Permalinks Are

      In a nutshell, WordPress permalinks are the unique URLs the platform sets for each of your posts and pages. Take these two permalinks, for example:
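      (illustrative URLs with a placeholder domain and slug)
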

      They both illustrate unique permalink structures. The first uses the blog post’s name as its slug, which is the segment of the URL that identifies a unique page. Instead of its name, the second example uses the post’s unique ID as its slug.

      From a technical standpoint, both URLs work exactly the same. However, it’s easy to see that the first approach is much more user-friendly. Not only is it easier to remember, but it also tells visitors what the page is all about. This is known as a “pretty” (as opposed to “ugly”) permalink.

      More importantly, using keywords that explain what your page is about can help search engines understand its purpose. In other words, they’re better from an SEO perspective. If you’re not using an optimized permalink structure, you’re leaving organic traffic on the table.

      The 6 Types of Permalink Structures in WordPress (And Which One You Should Use)

      Before we jump in, it’s important to note that if you’re building a new WordPress website, you should set your preferred permalink structure as early as possible. Conventional wisdom suggests locking down your permalink structure within the first six months, while your SEO is still in the growth stage.

      If your website has been around for longer than that, you can still change your permalink structure. However, you may impact your search rankings if you don’t implement redirects to your new URLs.

      With that in mind, let’s help you identify the best structure for your WordPress website.

      1. Plain

      We’ve already introduced this permalink structure above. Here’s another example to refresh your memory:
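      (illustrative URL; the domain and post ID are placeholders)
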


      As its name implies, this structure is bare-bones. The slug is simply the page’s ID as assigned in your database, so it doesn’t provide any information about the page you’re visiting. In most cases, you’ll want to use a structure that gives users (and search engines) a bit more to go on.

      2. Day and Name

      As the name implies, this permalink structure uses your post or page’s name and the day of its publication as part of the URL. Here’s an example:
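      (illustrative URL; the date reflects the post’s publication day)
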


      The upside of this structure is that it tells your users how old the post is. In some cases, this can help them assess if it’s still relevant without having to hunt for a date in the text. You’ll often find news or magazine sites use this structure — essentially, any website creating time-specific content.

      Dating your posts also has a downside. For example, imagine you have an excellent post that was published two years ago, and it’s considered a definitive source of information on a particular topic. Some readers might simply look at the date and think the advice is no longer relevant, regardless of whether that’s true or not.

      To be clear, it’s always advisable to include the date of publication somewhere within your post, but there’s no compelling reason to add it as part of your URL.

      3. Month and Name

      This permalink structure is almost identical to the one we just covered. The only difference is that it doesn’t include the day of publication as part of your post’s URL:
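      (illustrative URL with a placeholder domain)
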


      From a functional standpoint, we’re dealing with the same set of pros and cons as with the Day and Name structure. It can be nice for visitors to ascertain how old your post or page is at a glance, but it can also make some of your content look outdated.


      4. Numeric

      The Numeric permalink structure shares a lot of similarities with the Plain option. Let’s check out a URL using this setting so you can see it in action:
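      (illustrative URL; 123 stands in for the post’s ID)
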


      As with Plain permalinks, this setting uses your post’s ID as its slug, and it doesn’t provide users with any additional information. In this case, you also get a short permalink, but it comes with no other advantages, so it shouldn’t be your top option.

      5. Post Name

      Out of all the default WordPress permalink structures, this one is our favorite. It identifies your posts and pages according to their name, which makes for clean and memorable URLs, such as:
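      (illustrative URL with a placeholder domain)
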


      The great thing is that you can name your posts and pages any way you want, and even customize the slugs if those titles get too long. As a rule of thumb, your slug should remain three to five words long. That way, it’s still short enough for your visitors to remember, and search engines will be able to easily identify what the post is about.
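      For instance, a post with a long title could use a shortened slug (both the title and slug here are illustrative):

      Post title: Ten Simple Ways to Speed Up Your WordPress Website
      Slug: speed-up-wordpress
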

      6. Custom Structure

      If you’re not sold on any of the structures we’ve talked about so far, WordPress also enables you to build your own. For example, if you’re running a blog, you can set up individual categories for your roundups and reviews and include them in your links.

      Here’s an example:
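      (illustrative URL including a category segment)
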


      In practice, WordPress provides you with ten structure tags that you can use to build custom permalinks. If you want to know what they are and how to use them, keep reading — we’ll cover all the basics in the next section.

      2 Ways to Create Custom Permalinks

      As we mentioned a minute ago, WordPress offers you a built-in method to create custom permalinks. However, you can also use plugins to achieve similar results if you want a bit more control over the procedure. Let’s talk about both methods.

      1. Use WordPress’ Custom Structure Tags

      WordPress enables you to use ten types of structure tags to create custom permalinks. Let’s take a minute to get to know them, then we’ll see them in practice:

      • Date Tags: This category includes %year%, %monthnum%, %day%, %hour%, %minute%, and %second%, and they work exactly as you expect them to. Adding any of these tags to your permalink structure will include those numbers within your URL.
      • Post ID and Name: These tags are %post_id% and %postname%, respectively. The former displays the unique ID for any of your posts or pages, while the second shows their full title.
      • Category and Author: You can add these options using the %category% and %author% tags, respectively.

      To use any of these tags, you need to access your dashboard and go to the Settings > Permalinks tab. Once you’re in, you can choose any permalink structure you want out of the ones we talked about earlier.

      If you want to create your own, select the Custom Structure option at the bottom of the list:

      [Image: WordPress custom permalink structure settings]

      Now, all you have to do is mix and match the structure tags we talked about earlier.

      For example, /%category%/%postname%/ would result in this URL:
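      (illustrative URL for a post in a hypothetical “reviews” category)
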


      You can use as many tags as you want for your custom permalink structure. However, we recommend keeping things short. We also recommend that you avoid using dates, so that your content remains evergreen.

      When you’re ready, remember to save your changes, and that’s it!

      2. Use the Custom Permalinks Plugin

      If you’ve been using WordPress for a while, you’ll know there are plugins for nearly every scenario you can imagine. Customizing permalinks is no exception. One option is the Permalink Manager plugin:

      [Image: Permalink Manager Lite WordPress plugin]

      This tool enables you to customize the permalink structure of your posts and pages and change the URLs of individual posts, all from a single screen.

      To get started, install the plugin, activate it, then navigate to the Tools > Permalink Manager tab. Inside, you’ll find a list of all your posts and the option to tweak their permalinks:

      [Image: Permalink Manager Lite plugin for custom permalinks]

      The URL Editor section also includes tabs for your Pages and Media, which work just the same as the Posts section. When you’re done checking out these options, move over to the Permastructures tab. Here, you can set unique permalink structures for your posts, pages, and media:

      [Image: WordPress default permalinks structure]

      As you can see, the plugin also uses WordPress’ default structure tags to help you build new permalinks. All you have to do is put them in the order you want and save your changes:

      [Image: structure tags for custom post types]

      If you’re not happy with your new structures, you can always use the Restore to Default Permastructure button below each field. That’s pretty much all you need to know to start using the plugin.

      Set Up Your WordPress Permalinks

      A lot of people don’t pay any attention to the structure of their website’s URLs, which is a mistake. It may seem like a small detail, but using the right permalink structure can improve SEO rankings and make your website more user-friendly.

      As a general rule of thumb, you want to avoid URLs that include long strings of numbers or unwieldy phrases. The best approach is often to use your post names as your default permalink structure and shorten them manually when necessary. If that’s not a good fit for your website, you can always create custom permalink structures using WordPress tags or a plugin.


      How To Structure a Terraform Project


      Structuring Terraform projects appropriately according to their use cases and perceived complexity is essential to ensure their maintainability and extensibility in day-to-day operations. A systematic approach to properly organizing code files is necessary to ensure that the project remains scalable during deployment and usable to you and your team.

      In this tutorial, you’ll learn about structuring Terraform projects according to their general purpose and complexity. Then, you’ll create a project with a simple structure using the more common features of Terraform: variables, locals, data sources, and provisioners. In the end, your project will deploy an Ubuntu 18.04 server (Droplet) on DigitalOcean, install an Apache web server, and point your domain to the web server.


      Note: This tutorial has specifically been tested with Terraform 0.13.

      Understanding a Terraform Project’s Structure

      In this section, you’ll learn what Terraform considers a project, how you can structure the infrastructure code, and when to choose which approach. You’ll also learn about Terraform workspaces, what they do, and how Terraform stores state.

      A resource is an entity of a cloud service (such as a DigitalOcean Droplet) declared in Terraform code that is created according to specified and inferred properties. Multiple resources form infrastructure with their mutual connections.

      Terraform uses a specialized programming language for defining infrastructure, called Hashicorp Configuration Language (HCL). HCL code is typically stored in files ending with the extension .tf. A Terraform project is any directory that contains .tf files and has been initialized using the init command, which sets up Terraform caches and default local state.

      Terraform state is the mechanism via which it keeps track of resources that are actually deployed in the cloud. State is stored in backends—locally, on disk, or remotely, on a file storage cloud service or specialized state management software, for optimal redundancy and reliability. You can read more about different backends in the Terraform documentation.
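      As a sketch of what a remote backend declaration can look like, here is a hypothetical configuration using the S3-compatible backend against DigitalOcean Spaces (the endpoint, bucket, and key are illustrative, not part of the project built below):

      terraform {
        backend "s3" {
          endpoint = ""
          bucket   = "my-terraform-state"
          key      = "project/terraform.tfstate"
          region   = "us-east-1"

          # Spaces is S3-compatible but not AWS, so skip AWS-specific checks
          skip_credentials_validation = true
          skip_metadata_api_check     = true
        }
      }
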

      Project workspaces allow you to have multiple states in the same backend, tied to the same configuration. This allows you to deploy multiple distinct instances of the same infrastructure. Each project starts with a workspace named default—this will be used if you do not explicitly create or switch to another one.
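      In practice, workspaces are managed with the terraform workspace subcommands. For example (the workspace name here is arbitrary):

      • terraform workspace new staging
      • terraform workspace select staging
      • terraform workspace list
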

      Modules in Terraform (akin to libraries in other programming languages) are parametrized code containers enclosing multiple resource declarations. They allow you to abstract away a common part of your infrastructure and reuse it later with different inputs.
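      For instance, invoking a hypothetical network module from a project’s root configuration might look like this (the module name, path, and input are illustrative):

      module "network" {
        source = "./modules/network"

        vpc_name = "example-vpc"
      }
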

      A Terraform project can also include external code files for use with dynamic data inputs, which can parse the JSON output of a CLI command and offer it for use in resource declarations. In this tutorial, you’ll do this with a Python script.

      Now that you know what a Terraform project consists of, let’s review two general approaches of Terraform project structuring.

      Simple Structure

      Suitable for small and testing projects, with a few resources of varying types and variables. It has a few configuration files, usually one per resource type (or a few helper files alongside a main one), and no custom modules, because most of the resources are unique and there aren’t enough of them to be generalized and reused. Following this, most of the code is stored in the same directory, next to each other. These projects often have a few variables (such as an API key for accessing the cloud) and may use dynamic data inputs and other Terraform and HCL features, though not prominently.

      As an example of the file structure of this approach, this is what the project we’ll build in this tutorial will look like in the end:

      └── tf/
          └── external/

      As this project will deploy an Apache web server Droplet and set up DNS records, the definitions of the project variables, the DigitalOcean Terraform provider, the Droplet, and the DNS records will each live in their own file. The minimum required Terraform and DigitalOcean provider versions will be specified in a dedicated requirements file, while the Python script that will generate a name for the Droplet (and be used as a dynamic data source) will be stored in the external folder, to separate it from the HCL code.

      Complex Structure

      Contrary to the simple structure, this approach is suitable for large projects, with clearly defined subdirectory structures containing multiple modules of varying levels of complexity, aside from the usual code. These modules can depend on each other. Coupled with version control systems, these projects can make extensive use of workspaces. This approach is suitable for larger projects managing multiple apps, while reusing code as much as possible.

      Development, staging, quality assurance, and production infrastructure instances can also be housed under the same project in different directories by relying on common modules, thus eliminating duplicate code and making the project the central source of truth. Here is the file structure of an example project with a more complex structure, containing multiple deployment apps, Terraform modules, and target cloud environments:

      └── tf/
          ├── modules/
          │   ├── network/
          │   │   └── …
          │   └── spaces/
          │       └── …
          └── applications/
              ├── backend-app/
              │   ├── env/
              │   │   ├── dev.tfvars
              │   │   ├── staging.tfvars
              │   │   ├── qa.tfvars
              │   │   └── production.tfvars
              │   └── …
              └── frontend-app/
                  ├── env/
                  │   ├── dev.tfvars
                  │   ├── staging.tfvars
                  │   ├── qa.tfvars
                  │   └── production.tfvars
                  └── …

      This approach will further be explored later in this series.

      You now know what a Terraform project is, how to best structure it according to perceived complexity, and what role Terraform workspaces serve. In the next steps, you’ll create a project with a simple structure that will provision a Droplet with an Apache web server installed and DNS records set up for your domain. You’ll first initialize your project with the DigitalOcean provider and variables, and then proceed to define the Droplet, a dynamic data source to provide its name, and a DNS record for deployment.

      Step 1 — Setting Up Your Initial Project

      In this section, you’ll add the DigitalOcean Terraform provider to your project, define the project variables, and declare a DigitalOcean provider instance, so that Terraform will be able to connect to your account.

      Start off by creating a directory for your Terraform project with the following command:

      • mkdir ~/apache-droplet-terraform

      Navigate to it:

      • cd ~/apache-droplet-terraform

      Since this project will follow the simple structuring approach, you’ll store the provider, variables, Droplet, and DNS record code in separate files, per the file structure from the previous section. First, you’ll need to add the DigitalOcean Terraform provider to your project as a required provider.

      Create a file to hold the version requirements and open it for editing in your text editor.

      Add the following lines:


      terraform {
        required_providers {
          digitalocean = {
            source  = "digitalocean/digitalocean"
            version = "1.22.2"
          }
        }

        required_version = ">= 0.13"
      }

      In this terraform block, you list the required providers (DigitalOcean, version 1.22.2) and set the minimum required version of Terraform to 0.13 or higher. When you are done, save and close the file.

      Then, define the variables your project will expose in a file of their own, following the approach of storing different resource types in separate code files:

      Add the following variables:


      variable "do_token" {}
      variable "domain_name" {}

      Save and close the file.

      The do_token variable will hold your DigitalOcean Personal Access Token, and domain_name will specify your desired domain name. The SSH key you’ll later fetch by name will be automatically installed on the deployed Droplet.
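      If you prefer not to pass values on the command line at every run, you could also keep them in a variable definitions file and supply it with the -var-file flag (the file name and values here are illustrative; avoid committing real tokens to version control):

      do_token    = "your_do_api_token"
      domain_name = ""

      You would then run, for example, terraform plan -var-file="project.tfvars".
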

      Next, let’s define the DigitalOcean provider instance for this project. You’ll store it in a file of its own. Create and open it for editing:

      Add the provider:


      provider "digitalocean" {
        token = var.do_token
      }
      Save and exit when you’re done. You’ve defined the digitalocean provider, which corresponds to the required provider you specified earlier, and set its token to the value of the do_token variable, which will be supplied at runtime.

      In this step, you have created a directory for your project, requested the DigitalOcean provider to be available, declared project variables, and set up the connection to a DigitalOcean provider instance to use an auth token that will be provided later. You’ll now write a script that will generate dynamic data for your project definitions.

      Step 2 — Creating a Python Script for Dynamic Data

      Before continuing on to defining the Droplet, you’ll create a Python script that will generate the Droplet’s name dynamically and declare a data source resource to parse it. The name will be generated by concatenating a constant string (web) with the current time of the local machine, expressed in the UNIX epoch format. A naming script can be useful when multiple Droplets are generated according to a naming scheme, to easily differentiate between them.

      You’ll store the script in a directory named external. First, create the directory by running:

      • mkdir external

      The external directory resides in the root of your project and will store non-HCL code files, like the Python script you’ll write.

      Create the script file under external and open it for editing:

      • nano external/

      Add the following code:


      import json, time

      fixed_name = "web"

      result = {
        "name": f"{fixed_name}-{int(time.time())}",
      }

      print(json.dumps(result))
      This Python script imports the json and time modules, declares a dictionary named result, and sets the value of its name key to an interpolated string combining fixed_name with the current UNIX time of the machine it runs on. The result is then converted to JSON and written to stdout. The output will be different each time the script is run:


      {"name": "web-1597747959"}

      When you’re done, save and close the file.

      Note: Large and complex structured projects require more thought put into how external data sources are created and used, especially in terms of portability and error handling. Terraform expects the executed program to write a human-readable error message to stderr and gracefully exit with a non-zero status, which is something not shown in this step because of the simplicity of the task. Additionally, it expects the program to have no side effects, so that it can be re-run as many times as needed.

      For more info on what Terraform expects, visit the official docs on data sources.
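As a sketch of what that error handling could look like, the script might be restructured as follows (the function name and error message wording are illustrative additions, not part of the original tutorial):

```python
import json
import sys
import time


def generate_name(prefix="web"):
    # Terraform's external data source protocol expects a flat JSON object
    # of string keys and string values, printed to stdout.
    return {"name": f"{prefix}-{int(time.time())}"}


def main():
    try:
        print(json.dumps(generate_name()))
    except Exception as exc:
        # On failure, write a human-readable message to stderr and exit
        # with a non-zero status, as Terraform expects.
        print(f"failed to generate droplet name: {exc}", file=sys.stderr)
        sys.exit(1)


if __name__ == "__main__":
    main()
```

The script remains side-effect free, so Terraform can safely re-run it as many times as needed.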

      Now that the script is ready, you can define the data source, which will pull the data from it. You’ll store the data source in a file of its own in the root of your project, as per the simple structuring approach.

      Create it for editing by running:

      Add the following definition:


      data "external" "droplet_name" {
        program = ["python3", "${path.module}/external/"]
      }
      Save and close the file.

      This data source is called droplet_name and executes the script in the external directory you just created using Python 3. It automatically parses the script’s output and provides the deserialized data under its result attribute for use within other resource definitions.

      With the data source now declared, you can define the Droplet that Apache will run on.

      Step 3 — Defining the Droplet

      In this step, you’ll write the definition of the Droplet resource and store it in a code file dedicated to Droplets, as per the simple structuring approach. Its name will come from the dynamic data source you have just created, and will be different each time it’s deployed.

      Create the Droplet’s definition file and open it for editing:

      Add the following Droplet resource definition:


      data "digitalocean_ssh_key" "ssh_key" {
        name = "your_ssh_key_name"
      }

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   =
        region = "fra1"
        size   = "s-1vcpu-1gb"

        ssh_keys = [
        ]
      }

      You first declare a DigitalOcean SSH key resource called ssh_key, which will fetch a key from your account by its name. Make sure to replace the highlighted code with your SSH key name.

      Then, you declare a Droplet resource, called web. Its actual name in the cloud will be different, because it’s being requested from the droplet_name external data source. To bootstrap the Droplet resource with an SSH key each time it’s deployed, the ID of the ssh_key is passed into the ssh_keys parameter, so that DigitalOcean will know which key to apply.

      For now, this is all you need to configure for the Droplet, so save and close the file when you’re done.

      You’ll now write the configuration for the DNS record that will point your domain to the just declared Droplet.

      Step 4 — Defining DNS Records

      The last step in the process is to configure the DNS record pointing to the Droplet from your domain.

      You’ll store the DNS config in a file of its own, because it’s a separate resource type from the others you have created in the previous steps. Create and open it for editing:

      Add the following lines:


      resource "digitalocean_record" "www" {
        domain = var.domain_name
        type   = "A"
        name   = "@"
        value  = digitalocean_droplet.web.ipv4_address
      }
      This code declares a DigitalOcean DNS record at your domain name (passed in using the domain_name variable), of type A. The record’s name is @, a placeholder that routes to the domain itself, and its value is the Droplet’s IP address. You can replace the name value with something else, which will result in a subdomain being created.

      When you’re done, save and close the file.

      Now that you’ve configured the Droplet, the name generator data source, and a DNS record, you’ll move on to deploying the project in the cloud.

      Step 5 — Planning and Applying the Configuration

      In this section, you’ll initialize your Terraform project, deploy it to the cloud, and check that everything was provisioned correctly.

      Now that the project infrastructure is defined completely, all that is left to do before deploying it is to initialize the Terraform project. Do so by running the following command:

      • terraform init

      You’ll receive the following output:


      Initializing the backend...

      Initializing provider plugins...
      - Finding digitalocean/digitalocean versions matching "1.22.2"...
      - Finding latest version of hashicorp/external...
      - Installing hashicorp/external v1.2.0...
      - Installed hashicorp/external v1.2.0 (signed by HashiCorp)
      - Installing digitalocean/digitalocean v1.22.2...
      - Installed digitalocean/digitalocean v1.22.2 (signed by a HashiCorp partner, key ID F82037E524B9C0E8)

      Partner and community providers are signed by their developers.
      If you'd like to know more about provider signing, you can read about it here:

      The following providers do not have any version constraints in configuration,
      so the latest version was installed.

      To prevent automatic upgrades to new major versions that may contain breaking
      changes, we recommend adding version constraints in a required_providers block
      in your configuration, with the constraint strings suggested below.

      * hashicorp/external: version = "~> 1.2.0"

      Terraform has been successfully initialized!

      You may now begin working with Terraform. Try running "terraform plan" to see
      any changes that are required for your infrastructure. All Terraform commands
      should now work.

      If you ever set or change modules or backend configuration for Terraform,
      rerun this command to reinitialize your working directory. If you forget, other
      commands will detect it and remind you to do so if necessary.

      You’ll now be able to deploy your Droplet with a dynamically generated name and an accompanying domain to your DigitalOcean account.

      Start by defining the domain name and your personal access token as environment variables, so you won’t have to copy the values each time you run Terraform. Run the following commands, replacing the highlighted values:

      • export DO_PAT="your_do_api_token"
      • export DO_DOMAIN_NAME="your_domain"

      You can find your API token in your DigitalOcean Control Panel.

      Run the plan command with the variable values passed in to see what steps Terraform would take to deploy your project:

      • terraform plan -var "do_token=${DO_PAT}" -var "domain_name=${DO_DOMAIN_NAME}"

      The output will be similar to the following:


      Refreshing Terraform state in-memory prior to plan...
      The refreshed state will be used to calculate this plan, but will not be
      persisted to local or remote state storage.

      data.digitalocean_ssh_key.ssh_key: Refreshing state...
      data.external.droplet_name: Refreshing state...

      ------------------------------------------------------------------------

      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

        # digitalocean_droplet.web will be created
        + resource "digitalocean_droplet" "web" {
            + backups              = false
            + created_at           = (known after apply)
            + disk                 = (known after apply)
            + id                   = (known after apply)
            + image                = "ubuntu-18-04-x64"
            + ipv4_address         = (known after apply)
            + ipv4_address_private = (known after apply)
            + ipv6                 = false
            + ipv6_address         = (known after apply)
            + ipv6_address_private = (known after apply)
            + locked               = (known after apply)
            + memory               = (known after apply)
            + monitoring           = false
            + name                 = "web-1597780013"
            + price_hourly         = (known after apply)
            + price_monthly        = (known after apply)
            + private_networking   = (known after apply)
            + region               = "fra1"
            + resize_disk          = true
            + size                 = "s-1vcpu-1gb"
            + ssh_keys             = [
                + "...",
              ]
            + status               = (known after apply)
            + urn                  = (known after apply)
            + vcpus                = (known after apply)
            + volume_ids           = (known after apply)
            + vpc_uuid             = (known after apply)
          }

        # digitalocean_record.www will be created
        + resource "digitalocean_record" "www" {
            + domain = "your_domain"
            + fqdn   = (known after apply)
            + id     = (known after apply)
            + name   = "@"
            + ttl    = (known after apply)
            + type   = "A"
            + value  = (known after apply)
          }

      Plan: 2 to add, 0 to change, 0 to destroy.

      ------------------------------------------------------------------------

      Note: You didn't specify an "-out" parameter to save this plan, so Terraform
      can't guarantee that exactly these actions will be performed if
      "terraform apply" is subsequently run.

      The lines starting with a green + signify that Terraform will create each of the resources that follow after—which is exactly what should happen, so you can apply the configuration:

      • terraform apply -var "do_token=${DO_PAT}" -var "domain_name=${DO_DOMAIN_NAME}"

      The output will be the same as before, except that this time you’ll be asked to confirm:


      Plan: 2 to add, 0 to change, 0 to destroy.

      Do you want to perform these actions?
        Terraform will perform the actions described above.
        Only 'yes' will be accepted to approve.

        Enter a value: yes

      Enter yes, and Terraform will provision your Droplet and the DNS record:


      digitalocean_droplet.web: Creating...
      ...
      digitalocean_droplet.web: Creation complete after 33s [id=204432105]
      digitalocean_record.www: Creating...
      digitalocean_record.www: Creation complete after 1s [id=110657456]

      Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

      Terraform has now recorded the deployed resources in its state. To confirm that the DNS records and the Droplet were connected successfully, you can extract the IP address of the Droplet from the local state and check whether it matches the public DNS records for your domain. Run the following command to get the IP address:

      • terraform show | grep "ipv4"

      You’ll receive your Droplet’s IP address:


      ipv4_address = "your_Droplet_IP"

      You can check the public A records by running:

      • nslookup -type=a your_domain | grep "Address" | tail -1

      The output will show the IP address to which the A record points:


      Address: your_Droplet_IP

      They are the same, as they should be, meaning that the Droplet and DNS record were provisioned successfully.

      For the changes in the next step to take place, destroy the deployed resources by running:

      • terraform destroy -var "do_token=${DO_PAT}" -var "domain_name=${DO_DOMAIN_NAME}"

      When prompted, enter yes to continue.

      In this step, you have created your infrastructure and applied it to your DigitalOcean account. You’ll now modify it to automatically install the Apache web server on the provisioned Droplet using Terraform provisioners.

      Step 6 — Running Code Using Provisioners

      Now you’ll set up the installation of the Apache web server on your deployed Droplet by using the remote-exec provisioner to execute custom commands.

Terraform provisioners can be used to execute specific actions on created remote resources (the remote-exec provisioner) or on the local machine the code is executing on (the local-exec provisioner). If a provisioner fails, the node is marked as tainted in the current state, which means that it will be deleted and recreated during the next run.
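As an illustration (not part of this tutorial's project), the two provisioner types can sit side by side in the same resource; the Droplet arguments are abbreviated here:

```
resource "digitalocean_droplet" "web" {
  # ... image, name, region, size, ssh_keys ...

  # Executes commands on the created Droplet over SSH
  provisioner "remote-exec" {
    inline = ["apt update"]
  }

  # Executes a command on the machine running Terraform
  provisioner "local-exec" {
    command = "echo ${self.ipv4_address} >> created_ips.txt"
  }
}
```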

To connect to a provisioned Droplet, Terraform needs the private counterpart of the SSH key set up on the Droplet. The best way to pass in the location of the private key is with a variable, so open the file containing your variable definitions for editing:

      Add the highlighted line:


      variable "do_token" {}
      variable "domain_name" {}
      variable "private_key" {}

      You have now added a new variable, called private_key, to your project. Save and close the file.
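If you want the variable to be self-documenting, HCL also lets you attach a type and a description. A small optional sketch (the description text is my own, not from the source):

```
variable "private_key" {
  type        = string
  description = "Path to the private SSH key used to connect to the Droplet"
}
```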

Next, you'll add the connection data and remote provisioner declarations to your Droplet configuration. Open the Droplet configuration file for editing:

      Extend the existing code with the highlighted lines:


data "digitalocean_ssh_key" "ssh_key" {
  name = "your_ssh_key_name"
}

resource "digitalocean_droplet" "web" {
  image  = "ubuntu-18-04-x64"
  name   = "web-1" # the Droplet name was elided in the source; substitute your own
  region = "fra1"
  size   = "s-1vcpu-1gb"
  ssh_keys = [
    data.digitalocean_ssh_key.ssh_key.id
  ]

  connection {
    host        = self.ipv4_address
    user        = "root"
    type        = "ssh"
    private_key = file(var.private_key)
    timeout     = "2m"
  }

  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/bin",
      # Install Apache
      "apt update",
      "apt -y install apache2"
    ]
  }
}
The connection block specifies how Terraform should connect to the target Droplet. The provisioner block contains, in its inline parameter, the array of commands it will execute after provisioning: updating the package manager cache and installing Apache. Save and exit when you're done.
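Conceptually, remote-exec runs the inline entries as lines of a shell script on the Droplet, in order. A local sketch of the script those three entries amount to (printed rather than executed, since the real run happens on the Droplet over SSH):

```shell
# The inline entries above amount to this shell script,
# which remote-exec executes line by line on the Droplet.
script='export PATH=$PATH:/usr/bin
apt update
apt -y install apache2'
printf '%s\n' "$script"
```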

      You can create a temporary environment variable for the private key path as well:

      • export DO_PRIVATE_KEY="private_key_location"

Note: The private key, and any other file that you wish to load from within Terraform, must be placed within the project. See the How to Set Up SSH Keys on Ubuntu 18.04 tutorial for more information about SSH key setup on Ubuntu 18.04 or other distributions.

      Try applying the configuration again:

      • terraform apply -var "do_token=${DO_PAT}" -var "domain_name=${DO_DOMAIN_NAME}" -var "private_key=${DO_PRIVATE_KEY}"

      Enter yes when prompted. You’ll receive output similar to before, but followed with long output from the remote-exec provisioner:


digitalocean_droplet.web: Creating...
digitalocean_droplet.web: Still creating... [10s elapsed]
digitalocean_droplet.web: Still creating... [20s elapsed]
digitalocean_droplet.web: Still creating... [30s elapsed]
digitalocean_droplet.web: Provisioning with 'remote-exec'...
digitalocean_droplet.web (remote-exec): Connecting to remote host via SSH...
digitalocean_droplet.web (remote-exec):   Host: ...
digitalocean_droplet.web (remote-exec):   User: root
digitalocean_droplet.web (remote-exec):   Password: false
digitalocean_droplet.web (remote-exec):   Private key: true
digitalocean_droplet.web (remote-exec):   Certificate: false
digitalocean_droplet.web (remote-exec):   SSH Agent: false
digitalocean_droplet.web (remote-exec):   Checking Host Key: false
digitalocean_droplet.web (remote-exec): Connected!
...
digitalocean_droplet.web: Creation complete after 1m5s [id=204442200]
digitalocean_record.www: Creating...
digitalocean_record.www: Creation complete after 1s [id=110666268]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

      You can now navigate to your domain in a web browser. You will see the default Apache welcome page.

      Apache Web Server - Default Page

      This means that Apache was installed successfully, and that Terraform provisioned everything correctly.

      To destroy the deployed resources, run the following command and enter yes when prompted:

      • terraform destroy -var "do_token=${DO_PAT}" -var "domain_name=${DO_DOMAIN_NAME}" -var "private_key=${DO_PRIVATE_KEY}"

You have now completed a small Terraform project with a simple structure that deploys the Apache web server on a Droplet and sets up DNS records for the desired domain.


You have learned about two general approaches to structuring your Terraform projects according to their complexity. You then deployed a Droplet running Apache with DNS records for your domain, following the simple structuring approach and using the remote-exec provisioner to execute commands.

      For reference, here is the file structure of the project you created in this tutorial:

      └── tf/
          └── external/

      The resources you defined (the Droplet, the DNS record and dynamic data source, the DigitalOcean provider and variables) are stored each in its own separate file, according to the simple project structure outlined in the first section of this tutorial.

      For more information about Terraform provisioners and their parameters, visit the official documentation.
