
      How To Improve Website Performance Using gzip and Nginx on Ubuntu 20.04


      The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      A website’s performance depends partially on the size of all the files that a user’s browser must download. Reducing the size of those transmitted files can make your website faster. It can also make your website cheaper for those who pay for their bandwidth usage on metered connections.

gzip is a popular data compression program. You can configure Nginx to use gzip to compress the files it serves on the fly. Browsers that support gzip then decompress the files upon retrieval with no loss of data, but with the benefit of a smaller amount of data traveling between the web server and the browser. The good news is that gzip support is ubiquitous among all major browsers, so there is no reason not to use it.

Because of the way compression works in general, and how gzip works in particular, certain files compress better than others. For example, text files compress very well, often ending up more than two times smaller. On the other hand, images such as JPEG or PNG files are already compressed by their nature, and compressing them a second time with gzip yields little or no benefit. Compressing files uses up server resources, so it is best to compress only those files that will benefit from the size reduction.
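If you want to see this difference for yourself, you can compress local copies of a text file and a JPEG with the gzip command and compare the resulting sizes. The file names below are only placeholders; substitute any text file and image you have on hand:

• gzip -k report.txt
• gzip -k photo.jpg
• ls -lh report.txt.gz photo.jpg.gz

The -k flag keeps the original files in place. The compressed .gz copy of the text file will typically be a fraction of the original size, while the JPEG will stay roughly the same size.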

      In this tutorial, you will configure Nginx to use gzip compression. This will reduce the size of content sent to your website’s visitors and improve performance.

      Prerequisites

To follow this tutorial, you will need:

      • One Ubuntu 20.04 server with a sudo non-root user.
      • Nginx installed on your server.

      Step 1 — Creating Test Files

In this step, we will create several test files in the default Nginx directory. We’ll use these files later to check Nginx’s default gzip compression behavior and to test that the configuration changes have the intended effect.

      To infer what kind of file is served over the network, Nginx does not analyze the file contents; that would be prohibitively slow. Instead, it looks up the file extension to determine the file’s MIME type, which denotes its purpose.

      Because of this behavior, the content of our test files is irrelevant. By naming the files appropriately, we can trick Nginx into thinking that, for example, one entirely empty file is an image and another is a stylesheet.
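For reference, the extension-to-MIME-type mappings Nginx consults are defined in /etc/nginx/mime.types, which the main configuration file includes. An abridged excerpt looks similar to this:

      /etc/nginx/mime.types

      types {
          text/html                 html htm shtml;
          text/css                  css;
          application/javascript    js;
          image/jpeg                jpeg jpg;
          # ... many more mappings ...
      }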

      Create a file named test.html in the default Nginx directory using truncate. This extension denotes that it’s an HTML page:

      • sudo truncate -s 1k /var/www/html/test.html

      Let’s create a few more test files in the same manner: one jpg image file, one css stylesheet, and one js JavaScript file:

      • sudo truncate -s 1k /var/www/html/test.jpg
      • sudo truncate -s 1k /var/www/html/test.css
      • sudo truncate -s 1k /var/www/html/test.js

      The next step is to check how Nginx behaves with respect to compressing requested files on a fresh installation with the files we have just created.

      Step 2 — Checking the Default Behavior

      Let’s check if the HTML file named test.html is served with compression. The command requests a file from our Nginx server and specifies that it is fine to serve gzip compressed content by using an HTTP header (Accept-Encoding: gzip):

      • curl -H "Accept-Encoding: gzip" -I http://localhost/test.html

      In response, you should see several HTTP response headers:

      Output

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 09 Feb 2021 19:04:25 GMT
Content-Type: text/html
Last-Modified: Tue, 09 Feb 2021 19:03:41 GMT
Connection: keep-alive
ETag: W/"6022dc8d-400"
Content-Encoding: gzip

In the last line, you can see the Content-Encoding: gzip header. This tells us that gzip compression was used to send this file. That’s because Nginx has gzip compression enabled automatically, even on a fresh Ubuntu 20.04 installation.

      However, by default, Nginx compresses only HTML files. Every other file will be served uncompressed, which is less than optimal. To verify that, you can request our test image named test.jpg in the same way:

      • curl -H "Accept-Encoding: gzip" -I http://localhost/test.jpg

      The result should be slightly different than before:

      Output

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 09 Feb 2021 19:05:49 GMT
Content-Type: image/jpeg
Content-Length: 1024
Last-Modified: Tue, 09 Feb 2021 19:03:45 GMT
Connection: keep-alive
ETag: "6022dc91-400"
Accept-Ranges: bytes

      There is no Content-Encoding: gzip header in the output, which means the file was served without any compression.

      You can repeat the test with the test CSS stylesheet:

      • curl -H "Accept-Encoding: gzip" -I http://localhost/test.css

      Once again, there is no mention of compression in the output:

      Output

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 09 Feb 2021 19:06:04 GMT
Content-Type: text/css
Content-Length: 1024
Last-Modified: Tue, 09 Feb 2021 19:03:45 GMT
Connection: keep-alive
ETag: "6022dc91-400"
Accept-Ranges: bytes

      In the next step, we’ll tell Nginx to compress all sorts of files that will benefit from using gzip.

      Step 3 — Configuring Nginx’s gzip Settings

      To change the Nginx gzip configuration, open the main Nginx configuration file in nano or your favorite text editor:

      • sudo nano /etc/nginx/nginx.conf

      Find the gzip settings section, which looks like this:

      /etc/nginx/nginx.conf

      . . .
      ##
      # `gzip` Settings
      #
      #
      gzip on;
      gzip_disable "msie6";
      
      # gzip_vary on;
      # gzip_proxied any;
      # gzip_comp_level 6;
      # gzip_buffers 16 8k;
      # gzip_http_version 1.1;
      # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
      . . .
      

You can see that gzip compression is indeed enabled by the gzip on directive, but several additional settings are commented out with a # sign and have no effect. We’ll make several changes to this section:

      • Enable the additional settings by uncommenting all of the commented lines (i.e., by deleting the # at the beginning of the line)
      • Add the gzip_min_length 256; directive, which tells Nginx not to compress files smaller than 256 bytes. Very small files barely benefit from compression.
• Extend the gzip_types directive with additional file types denoting web fonts, icons, XML feeds, JSON structured data, and SVG images.

      After these changes have been applied, the settings section should look like this:

      /etc/nginx/nginx.conf

      . . .
      ##
      # `gzip` Settings
      #
      #
      gzip on;
      gzip_disable "msie6";
      
      gzip_vary on;
      gzip_proxied any;
      gzip_comp_level 6;
      gzip_buffers 16 8k;
      gzip_http_version 1.1;
      gzip_min_length 256;
      gzip_types
        application/atom+xml
        application/geo+json
        application/javascript
        application/x-javascript
        application/json
        application/ld+json
        application/manifest+json
        application/rdf+xml
        application/rss+xml
        application/xhtml+xml
        application/xml
        font/eot
        font/otf
        font/ttf
        image/svg+xml
        text/css
        text/javascript
        text/plain
        text/xml;
      . . .
      

      Save and close the file to exit.
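Before applying the new configuration, you can optionally check it for syntax errors:

      • sudo nginx -t

      If the configuration is valid, the command will report that the syntax is ok and the test is successful.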

      To enable the new configuration, restart Nginx:

      • sudo systemctl restart nginx

      Next, let’s make sure our new configuration works.

      Step 4 — Verifying the New Configuration

      Execute the same request as before for the test HTML file:

      • curl -H "Accept-Encoding: gzip" -I http://localhost/test.html

      The response will stay the same since compression has already been enabled for that filetype:

      Output

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 09 Feb 2021 19:04:25 GMT
Content-Type: text/html
Last-Modified: Tue, 09 Feb 2021 19:03:41 GMT
Connection: keep-alive
ETag: W/"6022dc8d-400"
Content-Encoding: gzip

      However, if we request the previously uncompressed CSS stylesheet, the response will be different:

      • curl -H "Accept-Encoding: gzip" -I http://localhost/test.css

      Now gzip is compressing the file:

      Output

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 09 Feb 2021 19:21:54 GMT
Content-Type: text/css
Last-Modified: Tue, 09 Feb 2021 19:03:45 GMT
Connection: keep-alive
Vary: Accept-Encoding
ETag: W/"6022dc91-400"
Content-Encoding: gzip

Of all the test files created in Step 1, only the test.jpg image should remain uncompressed. We can test this the same way:

      • curl -H "Accept-Encoding: gzip" -I http://localhost/test.jpg

      There is no gzip compression:

      Output

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 09 Feb 2021 19:25:40 GMT
Content-Type: image/jpeg
Content-Length: 1024
Last-Modified: Tue, 09 Feb 2021 19:03:45 GMT
Connection: keep-alive
ETag: "6022dc91-400"
Accept-Ranges: bytes

As expected, the Content-Encoding: gzip header is not present in the output.

      If that is the case, you have configured gzip compression in Nginx successfully.

      Conclusion

Changing the Nginx configuration to use gzip compression is easy, but the benefits can be immense. Not only will visitors with limited bandwidth receive the site faster, but all other users will see noticeable speed gains as well. Search engines will also appreciate the site loading faster: loading speed is now an important factor in how they rank websites, and using gzip is one big step toward improving it.




      Developer Burnout: Yes, You Can Improve Your Team’s Wellness



      About the Talk

Understanding stress management in the workplace can help prevent burnout. Jaime Woo talks through stress prevention techniques supported by trusted research and demonstrates how to incorporate them into your team’s culture.

      What You’ll Learn

      • Ingredients of stress
      • Structural factors that can lead to stress and burnout
      • Techniques to manage stress and reduce burnout


      About the Presenter

      Jaime began his career as a molecular biologist before following his passion for communications, working at DigitalOcean, Riot Games, and Shopify, where he launched the engineering communications function. He co-founded Incident Labs, which helps SRE and infrastructure teams gain visibility into work and improve engineering velocity. He has spent three years learning about mental health and mindfulness. He is an avid lover of dumplings.




      How To Improve Flexibility Using Terraform Variables, Dependencies, and Conditionals


      Introduction

HashiCorp Configuration Language (HCL), which Terraform uses, provides many useful structures and capabilities found in other programming languages. Using loops in your infrastructure code can greatly reduce code duplication and increase readability, allowing for easier future refactoring and greater flexibility. HCL also provides a few common data structures, such as lists and maps (also called arrays and dictionaries, respectively, in other languages), as well as conditionals for execution path branching.

Unique to Terraform is the ability to manually specify the resources one depends on. While the execution graph it builds when running your code already contains the detected links (which are correct in most scenarios), you may occasionally need to force a dependency relationship that Terraform was unable to detect.

      In this article, we’ll review the data structures HCL provides, its looping features for resources (the count key, for_each, and for), and writing conditionals to handle known and unknown values, as well as explicitly specifying dependency relationships between resources.

      Prerequisites

      • A DigitalOcean account. If you do not have one, sign up for a new account.

      • A DigitalOcean Personal Access Token, which you can create via the DigitalOcean control panel. Instructions to do that can be found in this link: How to Generate a Personal Access Token.

      • Terraform installed on your local machine and a project set up with the DigitalOcean provider. Complete Step 1 and Step 2 of the How To Use Terraform with DigitalOcean tutorial, and be sure to name the project folder terraform-flexibility, instead of loadbalance. During Step 2, you do not need to include the pvt_key variable and the SSH key resource.

      • A fully registered domain name added to your DigitalOcean account. For instructions on how to do that, visit the official docs.

      Note: This tutorial has specifically been tested with Terraform 0.13.

      Data Types in HCL

      In this section, before you learn more about loops and other features of HCL that make your code more flexible, we’ll first go over the available data types and their uses.

The HashiCorp Configuration Language supports primitive and complex data types. Primitive data types are strings, numbers, and boolean values: the basic types that cannot be derived from others. Complex types, on the other hand, group multiple values into a single one. The two kinds of complex values are structural and collection types.

      Structural types allow values of different types to be grouped together. The main example is the resource definitions you use to specify what your infrastructure will look like. Compared to the structural types, collection types also group values, but only ones of the same type. The three collection types available in HCL that we are interested in are lists, maps, and sets.
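As a quick illustration of a structural type (this variable is hypothetical and not used elsewhere in this tutorial), an object groups values of different types under named attributes:

      variable "droplet_settings" {
        type = object({
          name    = string
          size    = string
          backups = bool
        })

        default = {
          name    = "example-droplet"
          size    = "s-1vcpu-1gb"
          backups = false
        }
      }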

      Lists

      Lists are similar to arrays in other programming languages. They contain a known number of elements of the same type, which can be accessed using the array notation ([]) by their whole-number index, starting from 0. Here is an example of a list variable declaration holding names of Droplets you’ll deploy in the next steps:

      variable "droplet_names" {
        type    = list(string)
        default = ["first", "second", "third"]
      }
      

      For the type, you explicitly specify that it’s a list whose element type is string, and then provide its default value. Values enumerated in brackets signify a list in HCL.

      Maps

Maps are collections of key-value pairs, where each value is accessed using its key of type string. There are two ways of specifying maps inside curly brackets: using colons (:) or equal signs (=) to separate keys from values. In both cases, the value must be enclosed in quotes. When using colons, the key must be enclosed in quotes as well.

      The following map definition containing Droplet names for different environments is written using the equal sign:

      variable "droplet_env_names" {
        type = map(string)
      
        default = {
          development = "dev-droplet"
          staging = "staging-droplet"
          production = "prod-droplet"
        }
      }
      

      If the key starts with a number, you must use the colon syntax:

      variable "droplet_env_names" {
        type = map(string)
      
        default = {
          "1-development": "dev-droplet"
          "2-staging": "staging-droplet"
          "3-production": "prod-droplet"
        }
      }
      

      Sets

Sets do not support element ordering, meaning that traversing a set is not guaranteed to yield the same order each time and that its elements cannot be accessed in a targeted way. Sets contain unique elements repeated exactly once; specifying the same element multiple times results in it being coalesced into a single instance in the set.

      Declaring a set is similar to declaring a list, the only difference being the type of the variable:

      variable "droplet_names" {
        type    = set(string)
        default = ["first", "second", "third", "fourth"]
      }
      

      Now that you’ve learned about the types of data structures HCL offers and reviewed the syntax of lists, maps, and sets, which we’ll use throughout this tutorial, you’ll move on to trying some flexible ways of deploying multiple instances of the same resource in Terraform.

      Setting the Number of Resources Using the count Key

      In this section, you’ll create multiple instances of the same resource using the count key. The count key is a parameter available on all resources that specifies how many instances of it to create.

You’ll see how it works by writing a Droplet resource, which you’ll store in a file named droplets.tf, in the project directory you created as part of the prerequisites. Create and open it for editing by running:

      • nano droplets.tf

      Add the following lines:

      terraform-flexibility/droplets.tf

      resource "digitalocean_droplet" "test_droplet" {
        count  = 3
        image  = "ubuntu-18-04-x64"
        name   = "web"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      

      This code defines a Droplet resource called test_droplet, running Ubuntu 18.04 with 1GB RAM.

      Note that the value of count is set to 3, which means that Terraform will attempt to create three instances of the same resource. When you are done, save and close the file.

      You can plan the project to see what actions Terraform would take by running:

      • terraform plan -var "do_token=${DO_PAT}"

      The output will be similar to this:

      Output

      ... An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: # digitalocean_droplet.test_droplet[0] will be created + resource "digitalocean_droplet" "test_droplet" { ... name = "web" ... } # digitalocean_droplet.test_droplet[1] will be created + resource "digitalocean_droplet" "test_droplet" { ... name = "web" ... } # digitalocean_droplet.test_droplet[2] will be created + resource "digitalocean_droplet" "test_droplet" { ... name = "web" ... } Plan: 3 to add, 0 to change, 0 to destroy. ...

The output details that Terraform would create three instances of test_droplet, all with the same name web. While possible, this is not ideal, so let’s modify the Droplet definition to make the name of each instance different. Open droplets.tf for editing:

      • nano droplets.tf

      Modify the highlighted line:

      terraform-flexibility/droplets.tf

      resource "digitalocean_droplet" "test_droplet" {
        count  = 3
        image  = "ubuntu-18-04-x64"
        name   = "web.${count.index}"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      

      Save and close the file.

      The count object provides the index parameter, which contains the index of the current iteration, starting from 0. The current index is substituted into the name of the Droplet using string interpolation, which allows you to dynamically build a string by substituting variables. You can plan the project again to see the changes:

      • terraform plan -var "do_token=${DO_PAT}"

      The output will be similar to this:

      Output

      ... An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: # digitalocean_droplet.test_droplet[0] will be created + resource "digitalocean_droplet" "test_droplet" { ... name = "web.0" ... } # digitalocean_droplet.test_droplet[1] will be created + resource "digitalocean_droplet" "test_droplet" { ... name = "web.1" ... } # digitalocean_droplet.test_droplet[2] will be created + resource "digitalocean_droplet" "test_droplet" { ... name = "web.2" ... } Plan: 3 to add, 0 to change, 0 to destroy. ...

      This time, the three instances of test_droplet will have their index in their names, making them easier to track.

      You now know how to create multiple instances of a resource using the count key, as well as fetch and use the index of an instance during provisioning. Next, you’ll learn how to fetch the Droplet’s name from a list.

      Getting Droplet Names From a List

      In situations when multiple instances of the same resource need to have custom names, you can dynamically retrieve them from a list variable you define. During the rest of the tutorial, you’ll see several ways of automating Droplet deployment from a list of names, promoting flexibility and ease of use.

You’ll first need to define a list containing the Droplet names. Create a file called variables.tf and open it for editing:

      • nano variables.tf

      Add the following lines:

      terraform-flexibility/variables.tf

      variable "droplet_names" {
        type    = list(string)
        default = ["first", "second", "third", "fourth"]
      }
      

      Save and close the file. This code defines a list called droplet_names, containing the strings first, second, third, and fourth.

Open droplets.tf for editing:

      • nano droplets.tf

      Modify the highlighted lines:

      terraform-flexibility/droplets.tf

      resource "digitalocean_droplet" "test_droplet" {
        count  = length(var.droplet_names)
        image  = "ubuntu-18-04-x64"
        name   =  var.droplet_names[count.index]
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      

      To improve flexibility, instead of manually specifying a constant number of elements, you pass in the length of the droplet_names list to the count parameter, which will always return the number of elements in the list. For the name, you fetch the element of the list positioned at count.index, using the array bracket notation. Save and close the file when you’re done.
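If you’d like to experiment with length and the bracket notation outside of a resource definition, you can start an interactive session with terraform console and evaluate expressions directly. The values here are purely illustrative:

      > length(["first", "second", "third"])
      3
      > ["first", "second", "third"][1]
      "second"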

      Try planning the project again. You’ll receive output similar to this:

      Output

      ... An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: # digitalocean_droplet.test_droplet[0] will be created + resource "digitalocean_droplet" "test_droplet" { ... + name = "first" ... } # digitalocean_droplet.test_droplet[1] will be created + resource "digitalocean_droplet" "test_droplet" { ... + name = "second" ... } # digitalocean_droplet.test_droplet[2] will be created + resource "digitalocean_droplet" "test_droplet" { ... + name = "third" ... } # digitalocean_droplet.test_droplet[3] will be created + resource "digitalocean_droplet" "test_droplet" { ... + name = "fourth" ... Plan: 4 to add, 0 to change, 0 to destroy. ...

As a result of these modifications, Terraform would deploy four Droplets, named after the successive elements of the droplet_names list.

      You’ve learned about count, its features and syntax, and using it together with a list to modify the resource instances. You’ll now see its disadvantages, and how to overcome them.

      Understanding the Disadvantages of count

      Now that you know how count is used, you’ll see its disadvantages when modifying the list it’s used with.

      Let’s try deploying the Droplets to the cloud:

      • terraform apply -var "do_token=${DO_PAT}"

      Enter yes when prompted. The end of your output will be similar to this:

      Output

      Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Now let’s create one more Droplet instance by enlarging the droplet_names list. Open variables.tf for editing:

      • nano variables.tf

      Add a new element to the beginning of the list:

      terraform-flexibility/variables.tf

      variable "droplet_names" {
        type    = list(string)
        default = ["zero", "first", "second", "third", "fourth"]
      }
      

      When you’re done, save and close the file.

      Plan the project:

      • terraform plan -var "do_token=${DO_PAT}"

      You’ll receive output like this:

      Output

      ... An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: + create ~ update in-place Terraform will perform the following actions: # digitalocean_droplet.test_droplet[0] will be updated in-place ~ resource "digitalocean_droplet" "test_droplet" { ... ~ name = "first" -> "zero" ... } # digitalocean_droplet.test_droplet[1] will be updated in-place ~ resource "digitalocean_droplet" "test_droplet" { ... ~ name = "second" -> "first" ... } # digitalocean_droplet.test_droplet[2] will be updated in-place ~ resource "digitalocean_droplet" "test_droplet" { ... ~ name = "third" -> "second" ... } # digitalocean_droplet.test_droplet[3] will be updated in-place ~ resource "digitalocean_droplet" "test_droplet" { ... ~ name = "fourth" -> "third" ... } # digitalocean_droplet.test_droplet[4] will be created + resource "digitalocean_droplet" "test_droplet" { ... + name = "fourth" ... } Plan: 1 to add, 4 to change, 0 to destroy. ...

      The output shows that Terraform would rename the first four Droplets and create a fifth one called fourth, because it considers the instances as an ordered list and identifies the elements (Droplets) by their index number in the list. This is how Terraform initially considers the four Droplets:

Index Number | 0     | 1      | 2     | 3
Droplet Name | first | second | third | fourth

When a new Droplet named zero is added to the beginning, the internal list representation looks like this:

Index Number | 0    | 1     | 2      | 3     | 4
Droplet Name | zero | first | second | third | fourth

The four initial Droplets are now shifted one place to the right. Terraform then compares the two states shown in the tables: at position 0, the Droplet was called first, and because the name is different in the second table, it plans an update action. This continues until position 4, which has no comparable element in the first table, so a Droplet provisioning action is planned instead.

      This means that adding a new element to the list anywhere but to the very end would result in resources being modified when they do not need to be. Similar update actions would be planned if an element of the droplet_names list was removed.

      Incomplete resource tracking is the main downfall of using count for deploying a dynamic number of differing instances of the same resource. For a constant number of constant instances, count is a simple solution that works well. In situations like this, though, when some attributes are being pulled in from a variable, the for_each loop, which you’ll learn about later in this tutorial, is a much better choice.

      Referencing the Current Resource (self)

      Another downside of count is that referencing an arbitrary instance of a resource by its index is not possible in some cases.

The main example is destroy-time provisioners, which run when the resource is planned to be destroyed. The reason is that the requested instance may not exist (it may already be destroyed) or referencing it would create a mutual dependency cycle. In such situations, instead of referring to the object through the list of instances, you can access only the current resource, through the self keyword.

To demonstrate its usage, you’ll now add a destroy-time local provisioner to the test_droplet definition, which will show a message when run. Open droplets.tf for editing:

      • nano droplets.tf

      Add the following highlighted lines:

      terraform-flexibility/droplets.tf

      resource "digitalocean_droplet" "test_droplet" {
        count  = length(var.droplet_names)
        image  = "ubuntu-18-04-x64"
        name   =  var.droplet_names[count.index]
        region = "fra1"
        size   = "s-1vcpu-1gb"
      
        provisioner "local-exec" {
          when    = destroy
          command = "echo 'Droplet ${self.name} is being destroyed!'"
        }
      }
      

      Save and close the file.

      The local-exec provisioner runs a command on the local machine Terraform is running on. Because the when parameter is set to destroy, it will run only when the resource is going to be destroyed. The command it runs echoes a string to stdout, which substitutes the name of the current resource using self.name.

      Because you’ll be creating the Droplets in a different way in the next section, destroy the currently deployed ones by running the following command:

      • terraform destroy -var "do_token=${DO_PAT}"

Enter yes when prompted. You’ll see the local-exec provisioner run four times:

      Output

      ... digitalocean_droplet.test_droplet["first"] (local-exec): Executing: ["/bin/sh" "-c" "echo 'Droplet first is being destroyed!'"] digitalocean_droplet.test_droplet["second"] (local-exec): Executing: ["/bin/sh" "-c" "echo 'Droplet second is being destroyed!'"] digitalocean_droplet.test_droplet["second"] (local-exec): Droplet second is being destroyed! digitalocean_droplet.test_droplet["third"] (local-exec): Executing: ["/bin/sh" "-c" "echo 'Droplet third is being destroyed!'"] digitalocean_droplet.test_droplet["third"] (local-exec): Droplet third is being destroyed! digitalocean_droplet.test_droplet["fourth"] (local-exec): Executing: ["/bin/sh" "-c" "echo 'Droplet fourth is being destroyed!'"] digitalocean_droplet.test_droplet["fourth"] (local-exec): Droplet fourth is being destroyed! digitalocean_droplet.test_droplet["first"] (local-exec): Droplet first is being destroyed! ...

      In this step, you learned the disadvantages of count. You’ll now learn about the for_each loop construct, which overcomes them and works on a wider array of variable types.

      Looping Using for_each

      In this section, you’ll consider the for_each loop, its syntax, and how it helps flexibility when defining resources with multiple instances.

      for_each is a parameter available on each resource, but unlike count, which requires a number of instances to create, for_each accepts a map or a set. Each element of the provided collection is traversed once and an instance is created for it. for_each makes the key and value available under the each keyword as attributes (the pair’s key and value as each.key and each.value, respectively). When a set is provided, the key and value will be the same.

Because it provides the current element in the each object, you won’t have to manually access the desired element as you did with lists. In the case of sets, that’s not even possible, as sets have no observable internal ordering. Lists can also be passed in, but they must first be converted into a set using the toset function.
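For example, if you passed in the droplet_env_names map from earlier, each.key would hold the environment name and each.value the corresponding Droplet name. A sketch of such a resource (not part of this tutorial’s project) could look like this:

      resource "digitalocean_droplet" "env_droplet" {
        for_each = var.droplet_env_names
        image    = "ubuntu-18-04-x64"
        name     = each.value
        region   = "fra1"
        size     = "s-1vcpu-1gb"

        # Tag each Droplet with the environment it belongs to (the map key)
        tags = [each.key]
      }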

      The main advantage of using for_each, aside from being able to enumerate all three collection data types, is that only the actually affected elements will be modified, created, or deleted. If you change the order of the elements in the input, no actions will be planned, and if you add, remove, or modify an element from the input, appropriate actions will be planned only for that element.

Let’s convert the Droplet resource from count to for_each and see how it works in practice. Open droplets.tf for editing by running:

      • nano droplets.tf

      Modify the highlighted lines:

      terraform-flexibility/droplets.tf

      resource "digitalocean_droplet" "test_droplet" {
        for_each = toset(var.droplet_names)
        image    = "ubuntu-18-04-x64"
        name     = each.value
        region   = "fra1"
        size     = "s-1vcpu-1gb"
      }
      

      You can remove the local-exec provisioner. When you’re done, save and close the file.

      The first line replaces count and invokes for_each, passing in the droplet_names list in the form of a set using the toset function, which automatically converts the given input. For the Droplet name, you specify each.value, which holds the value of the current element from the set of Droplet names.

      Plan the project by running:

      • terraform plan -var "do_token=${DO_PAT}"

      The output will detail steps Terraform would take:

      Output

      ... An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: # digitalocean_droplet.test_droplet["first"] will be created + resource "digitalocean_droplet" "test_droplet" { ... + name = "first" ... } # digitalocean_droplet.test_droplet["fourth"] will be created + resource "digitalocean_droplet" "test_droplet" { ... + name = "fourth" ... } # digitalocean_droplet.test_droplet["second"] will be created + resource "digitalocean_droplet" "test_droplet" { ... + name = "second" ... } # digitalocean_droplet.test_droplet["third"] will be created + resource "digitalocean_droplet" "test_droplet" { ... + name = "third" ... } # digitalocean_droplet.test_droplet["zero"] will be created + resource "digitalocean_droplet" "test_droplet" { ... + name = "zero" ... } Plan: 5 to add, 0 to change, 0 to destroy. ...

      Unlike when using count, Terraform now considers each instance individually, and not as elements of an ordered list. Every instance is linked to an element of the given set, as signified by the shown string element in the brackets next to each resource that will be created.

      Apply the plan to the cloud by running:

      • terraform apply -var "do_token=${DO_PAT}"

Enter yes when prompted. When it finishes, you’ll remove one element from the droplet_names list to demonstrate that other instances won’t be affected. Open variables.tf for editing:

      • nano variables.tf

      Modify the list to look like this:

      terraform-flexibility/variables.tf

      variable "droplet_names" {
        type    = list(string)
        default = ["first", "second", "third", "fourth"]
      }
      

      Save and close the file.

      Plan the project again, and you’ll receive the following output:

      Output

      ... An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: - destroy Terraform will perform the following actions: # digitalocean_droplet.test_droplet["zero"] will be destroyed - resource "digitalocean_droplet" "test_droplet" { ... - name = "zero" -> null ... } Plan: 0 to add, 0 to change, 1 to destroy. ...

      This time, Terraform would destroy only the removed instance (zero), and would not touch any of the other instances, which is the correct behavior.

      In this step, you’ve learned about for_each, how to use it, and its advantages over count. Next, you’ll learn about the for loop, its syntax and usage, and when it can be used to automate certain tasks.

      Looping Using for

      The for loop works on collections, and creates a new collection by applying a transformation to each element of the input. The exact type of the output will depend on whether the loop is surrounded by brackets ([]) or braces ({}), which give a list or a map, respectively. As such, it is suitable for querying resources and forming structured outputs for later processing.

      The general syntax of the for loop is:

      for element in collection:
      transform(element)
      if condition
      

      Similarly to other programming languages, you first name the traversal variable (element) and specify the collection to enumerate. The body of the loop is the transformational step, and the optional if clause can be used for filtering the input collection.
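For example, the following local value (hypothetical and not used elsewhere in this project) uppercases each Droplet name and keeps only the names longer than five characters, using the built-in upper and length functions:

      locals {
        long_droplet_names = [
          for name in var.droplet_names :
          upper(name)
          if length(name) > 5
        ]
      }

      With the droplet_names list defined earlier, long_droplet_names would evaluate to ["SECOND", "FOURTH"].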

You’ll now work through a few examples using outputs, which you’ll store in a file named outputs.tf. Create and open it for editing by running the following command:

      • nano outputs.tf

      Add the following lines to output pairs of deployed Droplet names and their IP addresses:

      terraform-flexibility/outputs.tf

      output "ip_addresses" {
        value = {
          for instance in digitalocean_droplet.test_droplet:
          instance.name => instance.ipv4_address
        }
      }
      

This code specifies an output called ip_addresses and defines a for loop that iterates over the instances of the test_droplet resource you’ve been customizing in the previous steps. Because the loop is surrounded by curly braces, its output will be a map. The transformational step for maps is similar to lambda functions in other programming languages; here it creates a key-value pair by combining the instance name as the key with its IPv4 address as the value.

      Save and close the file, then refresh Terraform state to account for the new output by running:

      • terraform refresh -var "do_token=${DO_PAT}"

      The Terraform refresh command updates the local state with the actual infrastructure state in the cloud.

Then, check the contents of the outputs:

      • terraform output

      Output

ip_addresses = {
  "first" = "ip_address"
  "fourth" = "ip_address"
  "second" = "ip_address"
  "third" = "ip_address"
}

Terraform has shown the contents of the ip_addresses output, which is a map constructed by the for loop. (The order of the entries may be different for you.) The loop will work seamlessly for any number of entries, meaning that you can add a new element to the droplet_names list and the new Droplet, created without any further manual input, will automatically show up in this output as well.

      By surrounding the for loop in square brackets, you can make the output a list. For example, you could output only Droplet IP addresses, which is useful for external software that may be parsing the data. The code would look like this:

      terraform-flexibility/outputs.tf

      output "ip_addresses" {
        value = [
          for instance in digitalocean_droplet.test_droplet:
          instance.ipv4_address
        ]
      }
      

      Here, the transformational step simply selects the IP address attribute. It would give the following output:

      Output

ip_addresses = [
  "ip_address",
  "ip_address",
  "ip_address",
  "ip_address",
]

As noted before, you can also filter the input collection using the if clause. Here is how you would write the loop if you wanted to filter by the fra1 region:

      terraform-flexibility/outputs.tf

      output "ip_addresses" {
        value = [
          for instance in digitalocean_droplet.test_droplet:
          instance.ipv4_address
          if instance.region == "fra1"
        ]
      }
      

      In HCL, the == operator checks the equality of the values of the two sides—here it checks if instance.region is equal to fra1. If it is, the check passes and the instance is transformed and added to the output, otherwise it is skipped. The output of this code would be the same as the prior example, because all Droplet instances are in the fra1 region, according to the test_droplet resource definition. The if conditional is also useful when you want to filter the input collection for other values in your project, like the Droplet size or distribution.

      Because you’ll be creating resources differently in the next section, destroy the currently deployed ones by running the following command:

      • terraform destroy -var "do_token=${DO_PAT}"

      Enter yes when prompted to finish the process.

      We’ve gone over the for loop, its syntax, and examples of usage in outputs. You’ll now learn about conditionals and how they can be used together with count.

      Directives and Conditionals

      In one of the previous sections, you’ve seen the count key and how it works. You’ll now learn about ternary conditional operators, which you can use elsewhere in your Terraform code, and how they can be used with count.

      The syntax of the ternary operator is:

      condition ? value_if_true : value_if_false
      

condition is an expression that evaluates to a boolean (true or false). If the condition is true, the expression evaluates to value_if_true; if the condition is false, the result is value_if_false.
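The ternary operator can be used anywhere a value is expected, not just for count. For instance, this hypothetical snippet picks a larger Droplet size when an is_production variable is set to true:

      size = var.is_production ? "s-2vcpu-4gb" : "s-1vcpu-1gb"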

      The main use of ternary operators is to enable or disable single resource creation according to the contents of a variable. This can be achieved by passing in the result of the comparison (either 1 or 0) to the count key on the desired resource.

Let’s add a variable called create_droplet, which will control whether a Droplet will be created. First, open variables.tf for editing:

      • nano variables.tf

      Add the highlighted lines:

      terraform-flexibility/variables.tf

      variable "droplet_names" {
        type    = list(string)
        default = ["first", "second", "third", "fourth"]
      }
      
      variable "create_droplet" {
        type = bool
        default = true
      }
      

      This code defines the create_droplet variable of type bool. Save and close the file.

Then, to modify the Droplet declaration, open droplets.tf for editing by running:

      • nano droplets.tf

      Modify your file like the following:

      terraform-flexibility/droplets.tf

      resource "digitalocean_droplet" "test_droplet" {
        count  = var.create_droplet ? 1 : 0
        image  = "ubuntu-18-04-x64"
        name   =  "test_droplet"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      

For count, you use a ternary operator that returns 1 if the create_droplet variable is true, or 0 if it is false, which results in no Droplets being provisioned. Save and close the file when you’re done.

Generate the execution plan with the variable set to false by running:

      • terraform plan -var "do_token=${DO_PAT}" -var "create_droplet=false"

      You’ll receive the following output:

      Output

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.

      Because create_droplet was passed in the value of false, the count of instances is 0, and no Droplets will be created.

      You’ve reviewed how to use the ternary conditional operator together with the count key to enable a higher level of flexibility in choosing whether to deploy desired resources. Next you’ll learn about explicitly setting resource dependencies for your resources.

      Explicitly Setting Resource Dependencies

      While creating the execution plan for your project, Terraform detects dependency chains between resources and implicitly orders them so that they will be built in the appropriate order. In the majority of cases, it is able to detect relationships by scanning all expressions in resources and building a graph.

However, when a resource requires something else to already exist at the cloud provider before it can be provisioned, such as access control settings, but does not reference it in code, there is no clear sign to Terraform that the two are related. In such cases, Terraform will not know they depend on each other behaviorally, and the dependency must be specified manually using the depends_on argument.

The depends_on key is available on every resource and is used to specify which resources a given resource has hidden dependencies on. Hidden dependency relationships form when a resource depends on another resource’s behavior without using any of its data in its declaration, which is what would otherwise prompt Terraform to connect them.

      Here is an example of how depends_on is specified in code:

      resource "digitalocean_droplet" "droplet" {
        image  = "ubuntu-18-04-x64"
        name   = "web"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      
        depends_on = [
          # Resources...
        ]
      }
      

      It accepts a list of references to other resources, and it does not accept arbitrary expressions.

      depends_on should be used sparingly, and only when all other options are exhausted. Its use signifies that what you are trying to declare is stepping outside the boundaries of Terraform’s automated dependency detection system; it may signify that the resource is explicitly depending on more resources than it needs to.

      You’ve now learned about explicitly setting additional dependencies for a resource using the depends_on key, and when it should be used.

      Conclusion

      In this article, we’ve gone over the features of HCL that improve flexibility and scalability of your code, such as count for specifying the number of resource instances to deploy and for_each as an advanced way of looping over collection data types and customizing instances. When used correctly, they greatly reduce code duplication and operational overhead of managing the deployed infrastructure.

      You’ve also learned about conditionals and ternary operators, and how they can be utilized to control if a resource will get deployed. While Terraform’s automated dependency analysis system is quite capable, there may be cases where you need to manually specify resource dependencies using the depends_on key.

      To learn more about Terraform, check out our How To Manage Infrastructure with Terraform series.


