
      How to Distribute Go Modules


      The author selected the Diversity in Tech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Many modern programming languages allow developers to distribute ready-made libraries for others to use in their programs, and Go is no exception. While some languages use a central repository to install these libraries, Go distributes them from the same version control repository used to create the libraries. Go also uses a versioning system called semantic versioning to show users when and what kinds of changes have been made. This helps users know whether a newer version of a module is safe to quickly upgrade to and helps ensure their software continues to work with the module going forward.

      In this tutorial, you will create and publish a new module, learn to use semantic versioning, and publish a semantic version of your module.

      Prerequisites

      To follow this tutorial, you will need:

      • Go installed on your local machine.
      • Git installed on your local machine, and a GitHub account.
      • An empty, public GitHub repository named pubmodule, which you will clone and publish your module to.

      Creating a Module to Publish

      Unlike many other programming languages, a Go module is distributed directly from the source code repository it lives in instead of an independent package repository. This makes it easier for users to find modules referenced in their code and for module maintainers to publish new versions of their module. In this section, you’ll create a new module, which you will then publish to make it available for other users.

      To start creating your module, you’ll use git clone on the empty repository you created as part of the prerequisites to download the initial repository. This repository can be cloned anywhere you’d like on your computer, but many developers tend to have a directory for their projects. In this tutorial, you’ll use a directory named projects.

      Make the projects directory and navigate to it:

      mkdir projects
      cd projects
      

      From the projects directory, run git clone to clone your repository to your computer:

      • git clone git@github.com:your_github_username/pubmodule.git

      Cloning the module will download your empty module into the pubmodule directory inside your projects directory. You may get a warning that you’ve cloned an empty repository, but this isn’t anything to worry about:

      Output

      Cloning into 'pubmodule'...
      warning: You appear to have cloned an empty repository.

      Next, change into the directory you downloaded:

      • cd pubmodule

      Once you’re in the module’s directory, you’ll use go mod init to create your new module and pass in the repository’s location as the module name. Ensuring the module name matches the repository location is important because this is how the go tool finds where to download your module when it’s used in other projects:

      • go mod init github.com/your_github_username/pubmodule

      Go will confirm your module is created by letting you know it’s created the go.mod file:

      Output

      go: creating new go.mod: module github.com/your_github_username/pubmodule

      Lastly, use your favorite text editor, such as nano, to create and open a file with the same name as your repository: pubmodule.go.

      nano pubmodule.go
      

      The name of this file can be anything, but using the same name as the package makes it easier to know where to start when working with an unfamiliar package. The package name itself, though, should be the same as your repository name. This way, when someone references a method or type from your package, it matches the repository, such as pubmodule.MyFunction. This will make it easier for them to know where the package came from in case they need to refer to it later.

      Next, add a Hello method to your package that will return the string Hello, You!. This will be the function available to anyone importing your package:

      projects/pubmodule/pubmodule.go

      package pubmodule
      
      func Hello() string {
        return "Hello, You!"
      }
      

      You’ve now created a new module using go mod init with a module name that matches your remote repository (github.com/your_github_username/pubmodule). You’ve also added a file named pubmodule.go to your module with a function called Hello that users of your module can call. Next, you’ll publish your module to make it available to others.

      Publishing the Module

      Once you’ve created a local module and you’re ready to make it available to other users, it’s time to publish your module. Since Go modules are distributed from the same code repositories they’re stored in, you’ll commit your code to your local Git repository and push it to your repository at github.com/your_github_username/pubmodule.

      Before you commit your code to your local Git repository, it’s a good idea to make sure you won’t be committing any files you don’t expect to commit, which would then be published publicly when you push the code to GitHub. Using the git status command inside the pubmodule directory will show you all the files and changes that will be committed:

      • git status

      The output will look similar to this:

      Output

      On branch main

      No commits yet

      Untracked files:
        (use "git add <file>..." to include in what will be committed)
              go.mod
              pubmodule.go

      You should see the go.mod file the go mod init command created, and the pubmodule.go file you created the Hello function in. Depending on how you created your repository, you may have a different branch name than this output. Most commonly, the names will be either main or master.

      When you’re sure you have only the files you’re looking for, you can then stage the files with git add and commit them to the repository with git commit:

      • git add .
      • git commit -m "Initial Commit"

      The output will look similar to this:

      Output

      [main (root-commit) 931071d] Initial Commit
       2 files changed, 8 insertions(+)
       create mode 100644 go.mod
       create mode 100644 pubmodule.go

      Finally, use the git push command to push your module to the GitHub repository:

      • git push

      The output will look similar to this:

      Output

      Enumerating objects: 4, done.
      Counting objects: 100% (4/4), done.
      Delta compression using up to 8 threads
      Compressing objects: 100% (3/3), done.
      Writing objects: 100% (4/4), 367 bytes | 367.00 KiB/s, done.
      Total 4 (delta 0), reused 0 (delta 0), pack-reused 0
      To github.com:your_github_username/pubmodule.git
       * [new branch]      main -> main

      After running the git push command, your module will be pushed to your repository and will now be available for anyone else to use. If you don’t have any versions published, Go will use the code in your repository’s default branch as the code for your module. It doesn’t matter if your default branch is named main, master, or something else, only what your repository’s default branch is set to.

      In this section, you took the local Go module you created and published it to your GitHub repository to make it available for other people to use. While you now have a published module, another part of maintaining a public module is ensuring users of your module can use a stable version of it. You’ll likely want to make changes and add features to your module going forward, but if you make those changes without using versions in your module, you could accidentally break the code of someone using your module. To solve this problem, you can add versions to your module when you reach a new milestone in development. When adding new versions, though, be sure to choose a meaningful version number so your users know whether it’s safe for them to upgrade right away or not.

      Semantic Versioning

      A meaningful version number gives your users an idea of how much the public interface, or API, they interact with has changed. Go conveys these changes through a versioning scheme known as semantic versioning, or “SemVer” for short. (The scheme is called semantic versioning because the version string itself conveys meaning, or semantics, about the underlying code changes.) Go’s module system follows SemVer to determine which versions are newer than the version you’re currently using, as well as whether a newer version of a module is safe to upgrade to automatically.

      Semantic versioning gives each number in a version string a meaning. A typical version in SemVer contains three primary numbers: the major version, the minor version, and the patch version. Each of these numbers is combined with a . to form the version, such as 1.2.3. The numbers are ordered with the major version first, the minor version second, and the patch version last. This way, when looking at a version, you can see which one is newer because the number in a specific spot is higher than previous versions. For example, the version 2.2.3 is newer than 1.2.3 because the major version is higher. Likewise, the version 1.4.3 is newer than 1.2.10 because the minor version is higher. Even though 10 is higher than 3 in the patch version, the minor version 4 is higher than 2 so that version takes precedence. When a number in the version string increases, all the other parts of the version following it reset to 0. For example, increasing the minor version of 1.3.10 would result in 1.4.0 and increasing the major version of 2.4.1 would result in 3.0.0.
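
      If you’d like to experiment with these ordering rules yourself, the golang.org/x/mod/semver package, which the go tool itself builds on, implements them directly. The following is a minimal sketch; note one requirement of this package: version strings must carry the v prefix used by Go module tags:

      package main
      
      import (
          "fmt"
      
          "golang.org/x/mod/semver"
      )
      
      func main() {
          // semver.Compare returns -1, 0, or +1, ordering versions by
          // major, then minor, then patch.
          fmt.Println(semver.Compare("v2.2.3", "v1.2.3"))  // 1: major 2 beats major 1
          fmt.Println(semver.Compare("v1.4.3", "v1.2.10")) // 1: minor 4 beats minor 2, despite patch 10
          fmt.Println(semver.Compare("v1.3.10", "v1.4.0")) // -1: bumping the minor version reset the patch to 0
      }
      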

      Using these rules allows Go to determine which version of a module to use when you run go get. As an example, suppose you have a project using version 1.4.3 of the module, github.com/your_github_username/pubmodule. If you depend on pubmodule being stable, you may only want to automatically upgrade the patch version (the .3). If you run the command go get -u=patch github.com/your_github_username/pubmodule, Go would see that you want to upgrade the patch version of the module and would only look for new versions with 1.4 as the major and minor part of the version.

      When creating a new release of your module, it’s important to consider how the public API of your module has changed. Each part of a semantic version string conveys the scope of API change to both you and your users. These types of changes typically fall into three different categories, lining up with each component of the version. The smallest changes increase the patch version, medium-sized changes increase the minor version, and the largest changes increase the major version. Using these categories to determine which version number to increase will help you avoid breaking your own code and the code of anyone else who relies on your module.

      Major Version Numbers

      The first number in a SemVer version is the major version number (1.4.3). The major version number is the most important number to consider when releasing a new version of your module. A major version change is used to signal backward-incompatible changes to your public API. A backward-incompatible change would be any change in your module that would cause someone’s program to break if they upgraded without making any other changes. Breaking could mean anything from a failure to build because a function name has changed, to a change in how the library works that results in the same method returning "v1" instead of "1". This only applies to your public API, though, meaning any exported types or methods someone else could use. If the version only includes improvements a user of your library would not notice, it doesn’t need a major version change. A way to remember which changes fit into this category might be that anything considered an “update” or a “delete” would be a major version increase.

      Note: Unlike the other types of numbers in SemVer, the major version 0 has an additional special significance. The major version 0 is considered the “in development” version. Any SemVer with a major version 0 is not considered stable and anything can change in the API at any time. When you create a new module it’s best to start with major version 0 and only update minor and patch versions until you’ve finished initial development of your module. Once your module’s public API is done changing and considered stable for your users, it’s time to start with version 1.0.0.

      Take the following code as an example of what a major version change might look like. You have a function called UserAddress that currently accepts a string as a parameter and returns a string:

      func UserAddress(username string) string {
          // return user address as a string
      }
      

      While the function currently returns a string, you may determine it would be better for you and your users if the function returned a struct like *Address. This way you can include additional data already split apart, such as a postal code:

      type Address struct {
          Address    string
          PostalCode string
      }
      
      func UserAddress(username string) *Address {
          // return user address and postal code struct
      }
      

      This would be an example of a major version change because it would require your users to make changes to their own code in order to use it. The same would be true if you decided to remove UserAddress completely because your users would need to update their code to use the replacement.

      Another example of a major version change would be adding a new parameter to the UserAddress function, even if it still returns a string:

      func UserAddress(username string, uppercase bool) string {
          // return user address as a string, uppercase if bool is true
      }
      

      Since this change also requires your users to update their code if they’re using the UserAddress function, it would also require a major version increase.

      Not all changes you make to your code will be as drastic, though. Sometimes you’ll make changes to your public API that add new functions or values, but that don’t change any existing ones.

      Minor Version Numbers

      The second number in a SemVer version is the minor version number (1.4.3). A minor version change is used to signal backward-compatible changes to your public API. A backward-compatible change would be any change that doesn’t affect code or projects currently using your module. Similar to the major version number, this only affects your public API. A way to remember which changes fit in this category might be anything considered an “addition”, but not an “update”.

      Using the same example from the major version number, imagine you have a method named UserAddress that returns a string:

      func UserAddress(username string) string {
          // return user address as a string
      }
      

      This time, though, instead of updating UserAddress to return *Address, you decide to add a completely new method named UserAddressDetail:

      type Address struct {
          Address    string
          PostalCode string
      }
      
      func UserAddress(username string) string {
          // return user address as a string
      }
      
      func UserAddressDetail(username string) *Address {
          // return user address and postal code struct
      }
      

      Adding this new UserAddressDetail function doesn’t require changes by your users if they update to this version of your module, so it would be considered a minor version number increase. They can continue using UserAddress and would only need to update their code if they’d rather include the additional information from UserAddressDetail.

      Public API changes likely aren’t the only time you’ll release a new version of your module, though. Bugs are an inevitable part of software development, and the patch version number is there to patch those holes.

      Patch Version Numbers

      The patch version number is the last number in a SemVer version (1.4.3). A patch version change is any change that doesn’t affect the module’s public API. Changes that don’t affect a module’s public API tend to be things like bug fixes or security fixes. Using the UserAddress function from the previous examples again, suppose a release of your module is missing part of an address in the string the function returns. If you release a new version of your module to fix that bug, it would only increase the patch version. The release wouldn’t include any changes to how a user uses the UserAddress public API, only the correctness of the data returned.
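
      As a sketch of what such a patch release might contain, the following hedged example fixes the hypothetical missing-address-part bug while leaving the function’s signature untouched. The unexported lookupAddress helper is an assumption made for illustration, not code from earlier in this tutorial:

      package pubexample
      
      // lookupAddress is a hypothetical unexported helper. Because it isn't
      // part of the public API, changing it never forces a major version bump.
      func lookupAddress(username string) (street, postalCode string) {
          return "123 Example St", "10001"
      }
      
      // UserAddress previously dropped the postal code from its result. The
      // fix changes only the function body, not the signature, so users can
      // upgrade safely and only the patch version needs to increase.
      func UserAddress(username string) string {
          street, postalCode := lookupAddress(username)
          return street + " " + postalCode
      }
      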

      As you’ve seen in this section, carefully choosing a new version number is an important way to earn the trust of your users. Using semantic versioning shows users the amount of work required to update to a new version, and you won’t accidentally surprise them with an update that breaks their program. After considering the changes you’ve made to your module and determining the next version number to use, you can publish the new version and make it available to your users.

      Publishing a New Module Version

      Before you publish a new version of your module, you’ll need to update your module with the changes you’re planning to make. Without any changes, you won’t be able to determine which part of the semantic version to increase. For the module in this tutorial, you’ll add a new Goodbye method to complement the Hello method, and then you’ll publish that new version for users to use.

      First, open the pubmodule.go file and add the new Goodbye method to your public API:

      pubmodule/pubmodule.go

      package pubmodule
      
      func Hello() string {
        return "Hello, You!"
      }
      
      func Goodbye() string {
        return "Goodbye for now!"
      }
      

      Once you’ve saved your change, you’ll want to check which changes are expected to be committed by running git status:

      • git status

      The output will look similar to this, showing that the only change in your module is the method you added to pubmodule.go:

      Output

      On branch main
      Your branch is up to date with 'origin/main'.

      Changes not staged for commit:
        (use "git add <file>..." to update what will be committed)
        (use "git restore <file>..." to discard changes in working directory)
              modified:   pubmodule.go

      no changes added to commit (use "git add" and/or "git commit -a")

      Next, add the change to the staged files and commit the change to your local repository with git add and git commit:

      • git add .
      • git commit -m "Add Goodbye method"

      The output will look similar to this:

      Output

      [main 3235010] Add Goodbye method
       1 file changed, 4 insertions(+)

      After the changes are committed, you’ll need to push them to your GitHub repository. In a larger software project, or when working with other developers on a project, this step would commonly be slightly different. When doing development on a new feature, a developer would create a Git branch to put changes in until the new feature is stable and ready to be released. Once that happens, another developer would review the changes in the branch to add a second pair of eyes that might catch issues the first developer may have missed. Once the review is finished, the branch would then be merged into the default branch (such as master or main). Between releases, the default branch would accumulate these types of changes until it’s time to publish a new release.

      Since your module here doesn’t go through this process, pushing the changes you’ve made to the repository will simulate the accumulation of changes instead:

      • git push

      The output will look similar to this:

      Output

      Enumerating objects: 5, done.
      Counting objects: 100% (5/5), done.
      Delta compression using up to 8 threads
      Compressing objects: 100% (3/3), done.
      Writing objects: 100% (3/3), 369 bytes | 369.00 KiB/s, done.
      Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
      To github.com:your_github_username/pubmodule.git
         931071d..3235010  main -> main

      The output shows the new code is ready for users to use in the default branch.

      Up to this point, everything you’ve done has been the same as initially publishing your module. However, now an important part of releasing a new version comes up: choosing a new version number.

      If you look at the changes you’ve made to the module, the only change to the public API (or really any change) is adding the Goodbye method to your module. Since a user could update from the previous version, which only had the Hello function, without making changes on their part, this change would be a backward-compatible change. In semantic versioning, a backward-compatible change to the public API would mean an increase in the minor version number. This is the first version of your module being published, though, so there’s no previous version to increase. If you consider 0.0.0 to be “no version” then incrementing the minor version would lead you to version 0.1.0, the next version of your module.

      Now that you have a version number to give to the release of your module, you can use it, paired with Git tags, to publish a new version. When developers use Git to keep track of their source code, even in languages other than Go, a common convention is to use Git’s tags to keep track of which code was released for a specific version. This way, if they ever need to make changes to an old version, they can use the tag. Since Go is already downloading modules from the source repositories, it takes advantage of this practice by using those same version tags.

      To publish a new version of your own module using these tags, you will tag the code you’re releasing with the git tag command. As an argument to the git tag command, you’ll also need to provide the version tag. To create the version tag, start with the prefix v, for version, and add your SemVer immediately after it. In the case of your module, your final version tag would be v0.1.0. Now, run git tag to tag your module with the version tag:

      • git tag v0.1.0

      Once the version tag is added locally, you’ll still need to push the tag to your GitHub repository, which you can do using git push with origin:

      • git push origin v0.1.0

      After the git push command succeeds, you’ll see that a new tag, v0.1.0, has been created:

      Output

      Total 0 (delta 0), reused 0 (delta 0), pack-reused 0
      To github.com:your_github_username/pubmodule.git
       * [new tag]         v0.1.0 -> v0.1.0

      The output above shows that your tag has been pushed and your GitHub repository has a new v0.1.0 tag available for users of your module to reference.

      Now that you’ve published a new version of your module with git tag, whenever a user runs go get to get the latest version of your module, it will no longer download a version based on the latest commit hash from the default branch. Once a module has a released version, the go tool will start using those versions to determine the best way to update the module. Paired with semantic versioning, this allows you to iterate and improve your modules while also providing your users with a consistent and stable experience.
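
      To see what this looks like from a user’s perspective, here is a hedged sketch of a hypothetical consumer program. After fetching the tagged release with go get github.com/your_github_username/pubmodule@v0.1.0, the user could call both of your exported functions:

      package main
      
      import (
          "fmt"
      
          "github.com/your_github_username/pubmodule"
      )
      
      func main() {
          // Both functions come from the v0.1.0 release of the module.
          fmt.Println(pubmodule.Hello())
          fmt.Println(pubmodule.Goodbye())
      }
      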

      Conclusion

      In this tutorial, you created a public Go module and published it to a GitHub repository so that other people can use it. You also used semantic versioning to determine the best version number for your module. Finally, you expanded your module’s functionality and, using semantic versioning, published the new version with the confidence that you won’t be breaking programs that depend on it.

      If you’d like more information on semantic versioning, including how to add information other than numbers to your version, the Semantic Versioning website goes into great detail. The Go documentation also has a module version numbering page that explains how Go specifically uses SemVer.

      For more information on Go modules, the Go project has a series of blog posts detailing how Go tools interact with and understand modules. The Go project also has a very detailed and technical reference for Go modules in the Go Modules Reference.

      This tutorial is also part of the DigitalOcean How to Code in Go series. The series covers a number of Go topics, from installing Go for the first time to how to use the language itself.




      How to Use Go Modules


      Introduction

      In version 1.13, the authors of Go made Go modules the default way of managing the libraries a Go project depends on, after first introducing them experimentally in version 1.11. Go modules were added in response to a growing need to make it easier for developers to maintain various versions of their dependencies, as well as to add more flexibility in the way developers organize their projects on their computer. Go modules commonly consist of one project or library and contain a collection of Go packages that are then released together. Go modules solve many problems with GOPATH, the original system, by allowing users to put their project code in their chosen directory and to specify versions of dependencies for each module.

      In this tutorial, you will create your own public Go module and add a package to your new module. You will also add someone else’s public module to your project as a dependency and install a specific version of that module.

      Prerequisites

      To follow this tutorial, you will need:

      • Go version 1.16 or higher installed on your local machine. This tutorial was written with Go 1.16, which you’ll see reflected in the go.mod files that follow.

      Creating a New Module

      At first glance, a Go module looks similar to a Go package. A module has a number of Go code files implementing the functionality of a package, but it also has two additional and important files in the root: the go.mod file and the go.sum file. These files contain information the go tool uses to keep track of your module’s configuration, and are commonly maintained by the tool so you don’t need to.

      The first thing to do is decide the directory the module will live in. With the introduction of Go modules, it became possible for Go projects to be located anywhere on the filesystem, not just a specific directory defined by Go. You may already have a directory for your projects, but in this tutorial, you’ll create a directory called projects and the new module will be called mymodule. You can create the projects directory either through an IDE or via the command line.

      If you’re using the command line, begin by making the projects directory and navigating to it:

      mkdir projects
      cd projects
      

      Next, you’ll create the module directory itself. Usually, the module’s top-level directory name is the same as the module name, which makes things easier to keep track of. In your projects directory, run the following command to create the mymodule directory:

      • mkdir mymodule

      Once you’ve created the module directory, the directory structure will look like this:

      └── projects
          └── mymodule
      

      The next step is to create a go.mod file within the mymodule directory to define the Go module itself. To do this, you’ll use the go tool’s mod init command and provide it with the module’s name, which in this case is mymodule. Navigate into the mymodule directory and create the module by running go mod init with the module’s name:

      • cd mymodule
      • go mod init mymodule

      This command will return the following output when creating the module:

      Output

      go: creating new go.mod: module mymodule

      With the module created, your directory structure will now look like this:

      └── projects
          └── mymodule
              └── go.mod
      

      Now that you have created a module, let’s take a look inside the go.mod file to see what the go mod init command did.

      Understanding the go.mod File

      When you run commands with the go tool, the go.mod file is a very important part of the process. It’s the file that contains the name of the module and versions of other modules your own module depends on. It can also contain other directives, such as replace, which can be helpful for doing development on multiple modules at once.

      In the mymodule directory, open the go.mod file using nano, or your favorite text editor:

      nano go.mod
      

      The contents will look similar to this, which isn’t much:

      projects/mymodule/go.mod

      module mymodule
      
      go 1.16
      

      The first line, the module directive, tells Go the name of your module so that when it’s looking at import paths in a package, it knows not to look elsewhere for mymodule. The mymodule value comes from the parameter you passed to go mod init:

      module mymodule
      

      The only other line in the file at this point, the go directive, tells Go which version of the language the module is targeting. In this case, since the module was created using Go 1.16, the go directive says 1.16:

      go 1.16
      

      As more information is added to the module, this file will expand, but it’s a good idea to look at it now to see how it changes as dependencies are added further on.

      You’ve now created a Go module with go mod init and looked at what an initial go.mod file contains, but your module doesn’t do anything yet. It’s time to take your module further and add some code.

      Adding Go Code to Your Module

      To ensure the module is created correctly and to add code so you can run your first Go module, you’ll create a main.go file within the mymodule directory. The main.go file is commonly used in Go programs to signal the starting point of a program. The file’s name isn’t as important as the main function inside, but matching the two makes it easier to find. In this tutorial, the main function will print out Hello, Modules! when run.

      To create the file, open the main.go file using nano, or your favorite text editor:

      • nano main.go

      In the main.go file, add the following code to define your main package, import the fmt package, then print out the Hello, Modules! message in the main function:

      projects/mymodule/main.go

      package main
      
      import "fmt"
      
      func main() {
          fmt.Println("Hello, Modules!")
      }
      

      In Go, each directory is considered its own package, and each file has its own package declaration line. In the main.go file you just created, the package is named main. Typically, you can name the package any way you’d like, but the main package is special in Go. When Go sees that a package is named main it knows the package should be considered a binary, and should be compiled into an executable file, instead of a library designed to be used in another program.

      After the package is defined, the import declaration says to import the fmt package so you can use its Println function to print the Hello, Modules! message to the screen.

      Finally, the main function is defined. The main function is another special case in Go, related to the main package. When Go sees a function named main inside a package named main, it knows the main function is the first function it should run. This is known as a program’s entry point.

      Once you have created the main.go file, the module’s directory structure will look similar to this:

      └── projects
          └── mymodule
              └── go.mod
              └── main.go
      

      If you are familiar with using Go and the GOPATH, running code in a module is similar to how you would do it from a directory in the GOPATH. (Don’t worry if you are not familiar with the GOPATH, because using modules replaces its usage.)

      There are two common ways to run an executable program in Go: building a binary with go build or running a file with go run. In this tutorial, you’ll use go run to run the module directly instead of building a binary, which would have to be run separately.

      Run the main.go file you’ve created with go run:

      • go run main.go

      Running the command will print the Hello, Modules! text as defined in the code:

      Output

      Hello, Modules!

      In this section, you added a main.go file to your module with an initial main function that prints Hello, Modules!. At this point, your program doesn’t yet benefit from being a Go module — it could be a file anywhere on your computer being run with go run. The first real benefit of Go modules is being able to add dependencies to your project in any directory and not just the GOPATH directory structure. You can also add packages to your module. In the next section, you will expand your module by creating an additional package within it.

      Adding a Package to Your Module

      Similar to a standard Go package, a module may contain any number of packages and sub-packages, or it may contain none at all. For this example, you’ll create a package named mypackage inside the mymodule directory.

      Create this new package by running the mkdir command inside the mymodule directory with the mypackage argument:

      • mkdir mypackage

      This will create the new directory mypackage as a sub-package of the mymodule directory:

      └── projects
          └── mymodule
              └── mypackage
              └── main.go
              └── go.mod
      

      Use the cd command to change the directory to your new mypackage directory, and then use nano, or your favorite text editor, to create a mypackage.go file. This file could have any name, but using the same name as the package makes it easier to find the primary file for the package:

      • cd mypackage
      • nano mypackage.go

      In the mypackage.go file, add a function called PrintHello that will print the message Hello, Modules! This is mypackage speaking! when called:

      projects/mymodule/mypackage/mypackage.go

      package mypackage
      
      import "fmt"
      
      func PrintHello() {
          fmt.Println("Hello, Modules! This is mypackage speaking!")
      }
      

      Since you want the PrintHello function to be available from another package, the capital P in the function name is important. The capital letter means the function is exported and available to any outside program. For more detail about how package visibility works in Go, see Understanding Package Visibility in Go.
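
      To make the visibility rule concrete, the sketch below adds a hypothetical unexported greeting helper alongside PrintHello; only the capitalized function is callable from outside mypackage. This is an illustrative variation, not a change you need to make to your tutorial file:

      package mypackage
      
      import "fmt"
      
      // PrintHello is exported: the leading capital letter makes it available
      // to any package that imports mymodule/mypackage.
      func PrintHello() {
          fmt.Println(greeting())
      }
      
      // greeting is unexported (lowercase), so it can only be called from
      // within the mypackage package itself.
      func greeting() string {
          return "Hello, Modules! This is mypackage speaking!"
      }
      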

      Now that you’ve created the mypackage package with an exported function, you will need to import it from the mymodule package to use it. This is similar to how you would import other packages, such as the fmt package previously, except this time you’ll include your module’s name at the beginning of the import path. Open your main.go file from the mymodule directory and add a call to PrintHello by adding the highlighted lines below:

      projects/mymodule/main.go

      
      package main
      
      import (
          "fmt"
      
          "mymodule/mypackage"
      )
      
      func main() {
          fmt.Println("Hello, Modules!")
      
          mypackage.PrintHello()
      }
      

      If you take a closer look at the import statement, you’ll see the new import begins with mymodule, which is the same module name you set in the go.mod file. This is followed by the path separator and the package you want to import, mypackage in this case:

      "mymodule/mypackage"
      

      In the future, if you add packages inside mypackage, you would also add them to the end of the import path in a similar way. For example, if you had another package called extrapackage inside mypackage, your import path for that package would be mymodule/mypackage/extrapackage.
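
      As a hedged illustration, assuming a hypothetical extrapackage directory exposing a DoSomething function (neither of which you’ve created in this tutorial), the import and call would look like this:

      package main
      
      import (
          "mymodule/mypackage/extrapackage" // hypothetical nested package
      )
      
      func main() {
          // The import path mirrors the directory layout on disk:
          // projects/mymodule/mypackage/extrapackage
          extrapackage.DoSomething()
      }
      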

      Run your updated module with go run and main.go from the mymodule directory as before (if you’re still in the mypackage directory, move back up one level first):

      • cd ..
      • go run main.go

      When you run the module again you’ll see both the Hello, Modules! message from earlier as well as the new message printed from your new mypackage’s PrintHello function:

      Output

      Hello, Modules!
      Hello, Modules! This is mypackage speaking!

      You’ve now added a new package to your initial module by creating a directory called mypackage with a PrintHello function. As your module’s functionality expands, though, it can be useful to start using other people’s modules in your own. In the next section, you’ll add a remote module as a dependency to yours.

      Adding a Remote Module as a Dependency

      Go modules are distributed from version control repositories, commonly Git repositories. When you want to add a new module as a dependency to your own, you use the repository’s path as a way to reference the module you’d like to use. When Go sees the import path for one of these modules, it can infer where to find it remotely based on that repository path.

      For this example, you’ll add a dependency on the github.com/spf13/cobra library to your module. Cobra is a popular library for creating console applications, but this tutorial won’t cover Cobra itself in depth.

      Similar to when you created the mymodule module, you’ll again use the go tool. However, this time, you’ll run the go get command from the mymodule directory. Run go get and provide the module you’d like to add. In this case, you’ll get github.com/spf13/cobra:

      • go get github.com/spf13/cobra

      When you run this command, the go tool will look up the Cobra repository from the path you specified and determine which version of Cobra is the latest by looking at the repository’s branches and tags. It will then download that version and keep track of the one it chose by adding the module name and the version to the go.mod file for future reference.

      Now, open the go.mod file in the mymodule directory to see how the go tool updated the go.mod file when you added the new dependency. The example below could change depending on the current version of Cobra that’s been released or the version of the Go tooling you’re using, but the overall structure of the changes should be similar:

      projects/mymodule/go.mod

      module mymodule
      
      go 1.16
      
      require (
          github.com/inconshreveable/mousetrap v1.0.0 // indirect
          github.com/spf13/cobra v1.2.1 // indirect
          github.com/spf13/pflag v1.0.5 // indirect
      )
      

      A new section using the require directive has been added. This directive tells Go which module you want, such as github.com/spf13/cobra, and the version of the module you added. Sometimes require directives will also include an // indirect comment. This comment says that, at the time the require directive was added, the module is not referenced directly in any of the module’s source files. A few additional require lines were also added to the file. These lines are other modules Cobra depends on that the Go tool determined should be referenced as well.

      You may have also noticed that a new file, go.sum, was created in the mymodule directory after running the go get command. This is another important file for Go modules and contains information used by Go to record specific hashes and versions of dependencies. This ensures consistency of the dependencies, even if they are installed on a different machine.

      Once you have the dependency downloaded, you’ll want to update your main.go file with some minimal Cobra code to use the new dependency. Update main.go in the mymodule directory with the code below:

      projects/mymodule/main.go

      package main
      
      import (
          "fmt"
      
          "github.com/spf13/cobra"
      
          "mymodule/mypackage"
      )
      
      func main() {
          cmd := &cobra.Command{
              Run: func(cmd *cobra.Command, args []string) {
                  fmt.Println("Hello, Modules!")
      
                  mypackage.PrintHello()
              },
          }
      
          fmt.Println("Calling cmd.Execute()!")
          cmd.Execute()
      }
      

      This code creates a cobra.Command structure with a Run function containing your existing “Hello” statements, which will then be executed with a call to cmd.Execute(). Now, run the updated code:

      • go run main.go

      You’ll see the following output, which looks similar to what you saw before. This time, though, it’s using your new dependency as shown by the Calling cmd.Execute()! line:

      Output

      Calling cmd.Execute()!
      Hello, Modules!
      Hello, Modules! This is mypackage speaking!

      Using go get to add the latest version of a remote dependency, such as github.com/spf13/cobra here, makes it easier to keep your dependencies updated with the latest bug fixes. However, there may be times when you’d rather use a specific version of a module, a repository tag, or a repository branch. In the next section, you’ll use go get to reference these versions when you’d like that option.

      Using a Specific Version of a Module

      Since Go modules are distributed from a version control repository, they can use version control features such as tags, branches, and even commits. You can reference these in your dependencies using the @ symbol at the end of the module path along with the version you’d like to use. Earlier, when you installed the latest version of Cobra, you were taking advantage of this capability, but you didn’t need to add it explicitly to your command. The go tool knows that if a specific version isn’t provided using @, it should use the special version latest. The latest version isn’t actually in the repository, like my-tag or my-branch may be. It’s built into the go tool as a helper so you don’t need to search for the latest version yourself.

      For example, when you added your dependency initially, you could have also used the following command for the same result:

      • go get github.com/spf13/cobra@latest

      Now, imagine there’s a module you use that’s currently in development. For this example, call it your_domain/sammy/awesome. There’s a new feature being added to this awesome module and work is being done in a branch called new-feature. To add this branch as a dependency of your own module you would provide go get with the module path, followed by the @ symbol, followed by the name of the branch:

      • go get your_domain/sammy/awesome@new-feature

      Running this command would cause go to connect to the your_domain/sammy/awesome repository, download the new-feature branch at the current latest commit for the branch, and add that information to the go.mod file.

      Branches aren’t the only way you can use the @ option, though. This syntax can be used for tags and even specific commits to the repository. For example, sometimes the latest version of the library you’re using may have a broken commit. In these cases, it can be useful to reference the commit just before the broken commit.

      Using your module’s Cobra dependency as an example, suppose you need to reference commit 07445ea of github.com/spf13/cobra because it has some changes you need and you can’t use another version for some reason. In this case, you can provide the commit hash after the @ symbol the same as you would for a branch or a tag. Run the go get command in your mymodule directory with the module and version to download the new version:

      • go get github.com/spf13/cobra@07445ea

      If you open your module’s go.mod file again you’ll see that go get has updated the require line for github.com/spf13/cobra to reference the commit you specified:

      projects/mymodule/go.mod

      module mymodule
      
      go 1.16
      
      require (
          github.com/inconshreveable/mousetrap v1.0.0 // indirect
          github.com/spf13/cobra v1.1.2-0.20210209210842-07445ea179fc // indirect
          github.com/spf13/pflag v1.0.5 // indirect
      )
      

      Since a commit is a particular point in time, unlike a tag or a branch, Go includes additional information in the require directive to ensure it’s using the correct version in the future. The result is called a pseudo-version, and if you look closely at v1.1.2-0.20210209210842-07445ea179fc you’ll see its three parts: a base version (v1.1.2-0, marking a pre-release of the next patch version after the preceding tag, v1.1.1), the UTC timestamp of the commit (20210209210842), and a longer prefix of the commit hash you provided (07445ea179fc).
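
      Because SemVer orders a pre-release before its release version, this pseudo-version sorts after the v1.1.1 tag it follows and before any future v1.1.2. As a small sketch, you can verify that ordering with the golang.org/x/mod/semver package:

      package main
      
      import (
          "fmt"
      
          "golang.org/x/mod/semver"
      )
      
      func main() {
          pseudo := "v1.1.2-0.20210209210842-07445ea179fc"
      
          fmt.Println(semver.Compare(pseudo, "v1.1.1")) // 1: after the tagged commit it builds on
          fmt.Println(semver.Compare(pseudo, "v1.1.2")) // -1: still a pre-release of v1.1.2
      }
      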

      Go modules also use this functionality to support releasing different versions of the module. When a Go module releases a new version, a new tag is added to the repository with the version number as the tag. If you want to use a specific version, you can look at a list of tags in the repository to find the version you’re looking for. If you already know the version, you may not need to search through the tags because version tags are named consistently.

      Returning to Cobra as an example, suppose you want to use Cobra version 1.1.1. You could look at the Cobra repository and see it has a tag named v1.1.1, among others. To use this tagged version, you would use the @ symbol in a go get command, just as you would use a non-version tag or branch. Now, update your module to use Cobra 1.1.1 by running the go get command with v1.1.1 as the version:

      • go get github.com/spf13/cobra@v1.1.1

      Now if you open your module’s go.mod file, you’ll see go get has updated the require line for github.com/spf13/cobra to reference the version you provided:

      projects/mymodule/go.mod

      module mymodule
      
      go 1.16
      
      require (
          github.com/inconshreveable/mousetrap v1.0.0 // indirect
          github.com/spf13/cobra v1.1.1 // indirect
          github.com/spf13/pflag v1.0.5 // indirect
      )
      

      Finally, if you’re using a specific version of a library, such as the 07445ea commit or v1.1.1 from earlier, but you determine you’d rather start using the latest version, it’s possible to do this by using the special latest version. To update your module to the latest version of Cobra, run go get again with the module path and the latest version:

      • go get github.com/spf13/cobra@latest

      Once this command finishes, the go.mod file will update to look like it did before you referenced a specific version of Cobra. Depending on your version of Go and the current latest version of Cobra your output may look slightly different, but you should still see that the github.com/spf13/cobra line in the require section is updated to the latest version again:

      projects/mymodule/go.mod

      module mymodule
      
      go 1.16
      
      require (
          github.com/inconshreveable/mousetrap v1.0.0 // indirect
          github.com/spf13/cobra v1.2.1 // indirect
          github.com/spf13/pflag v1.0.5 // indirect
      )
      

      The go get command is a powerful tool you can use to manage dependencies in your go.mod file without needing to edit it manually. As you saw in this section, using the @ character with a module name allows you to use particular versions for a module, from release versions to specific repository commits. It can even be used to go back to the latest version of your dependencies. Using a combination of these options will allow you to ensure the stability of your programs in the future.

      Conclusion

      In this tutorial, you created a Go module with a sub-package and used that package within your module. You also added another module to yours as a dependency and explored how to reference module versions in various ways.

      For more information on Go modules, the Go project has a series of blog posts about how the Go tools interact with and understand modules. The Go project also has a very detailed and technical reference for Go modules in the Go Modules Reference.

      This tutorial is also part of the DigitalOcean How to Code in Go series. The series covers a number of Go topics, from installing Go for the first time to how to use the language itself.




      How To Create Reusable Infrastructure with Terraform Modules and Templates


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      One of the main benefits of Infrastructure as Code (IAC) is reusing parts of the defined infrastructure. In Terraform, you can use modules to encapsulate logically connected components into one entity and customize them using input variables you define. By using modules to define your infrastructure at a high level, you can separate development, staging, and production environments by only passing in different values to the same modules, which minimizes code duplication and maximizes conciseness.

      You are not limited to using only your custom modules. Terraform Registry is integrated into Terraform and lists modules and providers that you can incorporate in your project right away by defining them in the required_providers section. Referencing public modules can speed up your workflow and reduce code duplication. If you have a useful module and would like to share it with the world, you can look into publishing it on the Registry for other developers to use.

      In this tutorial, we’ll consider some of the ways of defining and reusing code in Terraform projects. You’ll reference modules from the Terraform Registry, separate development and production environments using modules, learn about templates and how they are used, and how to specify resource dependencies explicitly using the depends_on meta argument.

      Prerequisites

      • A DigitalOcean Personal Access Token, which you can create via the DigitalOcean control panel. You can find instructions to do that at: How to Generate a Personal Access Token.
      • Terraform installed on your local machine and a project set up with the DigitalOcean provider. Complete Step 1 and Step 2 of the How To Use Terraform with DigitalOcean tutorial and be sure to name the project folder terraform-reusability, instead of loadbalance. During Step 2, do not include the pvt_key variable and the SSH key resource.
      • The droplet-lb module available under modules in terraform-reusability. Follow the How to Build a Custom Module tutorial and work through it until the droplet-lb module is functionally complete. (That is, until the cd ../.. command in the Creating a Module section.)
      • Knowledge of Terraform project structuring approaches. For more information, see How To Structure a Terraform Project.
      • (Optional) Two separate domains whose nameservers are pointed to DigitalOcean at your registrar. Refer to the How To Point to DigitalOcean Nameservers From Common Domain Registrars tutorial to set this up. Note that you don’t need to do this if you don’t plan on deploying the project you’ll create through this tutorial.

      Note: We have specifically tested this tutorial using Terraform 0.13.

      Separating Development and Production Environments

      In this section, you’ll use modules to achieve separation between your target deployment environments. You’ll arrange these according to the structure of a more complex project. You’ll first create a project with two modules, one of which will define the Droplets and Load Balancers, and the other one will set up the DNS domain records. After, you’ll write configuration for two different environments (dev and prod), which will call the same modules.

      Creating the dns-records module

      As part of the prerequisites, you have set up the project initially under terraform-reusability and created the droplet-lb module in its own subdirectory under modules. You’ll now set up the second module, called dns-records, containing variables, outputs, and resource definitions. Assuming you’re in terraform-reusability, create dns-records by running:

      • mkdir modules/dns-records

      Navigate to it:

      • cd modules/dns-records

      This module will comprise the definitions for your domain and the DNS records that you’ll later point to the Load Balancers. You’ll first define the variables, which will become inputs that this module will expose. You’ll store them in a file called variables.tf. Create it for editing:

      • nano variables.tf

      Add the following variable definitions:

      terraform-reusability/modules/dns-records/variables.tf

      variable "domain_name" {}
      variable "ipv4_address" {}
      

      Save and close the file. You’ll now define the domain and the accompanying A and CNAME records in a file named records.tf. Create and open it for editing by running:

      • nano records.tf

      Add the following resource definitions:

      terraform-reusability/modules/dns-records/records.tf

      resource "digitalocean_domain" "domain" {
        name = var.domain_name
      }
      
      resource "digitalocean_record" "domain_A" {
        domain = digitalocean_domain.domain.name
        type   = "A"
        name   = "@"
        value  = var.ipv4_address
      }
      
      resource "digitalocean_record" "domain_CNAME" {
        domain = digitalocean_domain.domain.name
        type   = "CNAME"
        name   = "www"
        value  = var.ipv4_address
      }
      

      First, you define the domain in your DigitalOcean account for your domain name. The cloud will automatically add the three DigitalOcean nameservers as NS records. Then, you define an A record for your domain, routing it to the IP address supplied in the ipv4_address variable (the @ as the record’s name signifies the bare domain itself, without subdomains). For the sake of completeness, the CNAME record that follows specifies that the www subdomain should also point to the same IP address. Save and close the file when you’re done.

      Next, you’ll define the outputs for this module. The outputs will show the FQDN (fully qualified domain name) of the created records. Create and open outputs.tf for editing:

      • nano outputs.tf

      Add the following lines:

      terraform-reusability/modules/dns-records/outputs.tf

      output "A_fqdn" {
        value = digitalocean_record.domain_A.fqdn
      }
      
      output "CNAME_fqdn" {
        value = digitalocean_record.domain_CNAME.fqdn
      }
      

      Save and close the file when you’re done.

      With the variables, DNS records, and outputs defined, the last thing you’ll need to specify is the provider requirements for this module. You’ll specify that the dns-records module requires the digitalocean provider in a file called provider.tf. Create and open it for editing:

      • nano provider.tf

      Add the following lines:

      terraform-reusability/modules/dns-records/provider.tf

      terraform {
        required_providers {
          digitalocean = {
            source = "digitalocean/digitalocean"
          }
        }
        required_version = ">= 0.13"
      }
      

      When you’re done, save and close the file. The dns-records module now requires the digitalocean provider and is functionally complete.

      Creating Different Environments

      The following is the current structure of the terraform-reusability project:

      terraform-reusability/
      ├─ modules/
      │  ├─ dns-records/
      │  │  ├─ outputs.tf
      │  │  ├─ provider.tf
      │  │  ├─ records.tf
      │  │  ├─ variables.tf
      │  ├─ droplet-lb/
      │  │  ├─ droplets.tf
      │  │  ├─ lb.tf
      │  │  ├─ outputs.tf
      │  │  ├─ provider.tf
      │  │  ├─ variables.tf
      ├─ main.tf
      ├─ provider.tf
      

      So far, you have two modules in your project: the one you just created (dns-records) and droplet-lb, which you created as part of the prerequisites.

      To facilitate different environments, you’ll store the dev and prod environment config files under a directory called environments, which will reside in the root of the project. Both environments will call the same two modules, but with different parameter values. The advantage of this is when the modules change internally in the future, you’ll only need to update the values you are passing in.

      First, navigate to the root of the project by running:

      • cd ../..

      Then, create the dev and prod directories under environments at the same time:

      • mkdir -p environments/dev && mkdir environments/prod

      The -p argument instructs mkdir to create all directories in the given path, even if they don’t yet exist.

      Navigate to the dev directory, as you’ll first configure that environment:

      • cd environments/dev

      You’ll store the code in a file named main.tf, so create it for editing:

      • nano main.tf

      Add the following lines:

      terraform-reusability/environments/dev/main.tf

      module "droplets" {
        source   = "../../modules/droplet-lb"
      
        droplet_count = 2
        group_name    = "dev"
      }
      
      module "dns" {
        source   = "../../modules/dns-records"
      
        domain_name   = "your_dev_domain"
        ipv4_address  = module.droplets.lb_ip
      }
      

      Here you call and configure the two modules, droplet-lb and dns-records, which will together result in the creation of two Droplets. They’re fronted by a Load Balancer; the DNS records for the supplied domain are set up to point to that Load Balancer. Remember to replace your_dev_domain with your desired domain name for the dev environment, then save and close the file.

      Next, you’ll configure the DigitalOcean provider and create a variable for it to be able to accept the personal access token you created as part of the prerequisites. Open a new file, called provider.tf, for editing:

      • nano provider.tf

      Add the following lines:

      terraform-reusability/environments/dev/provider.tf

      terraform {
        required_providers {
          digitalocean = {
            source = "digitalocean/digitalocean"
            version = "1.22.2"
          }
        }
      }
      
      variable "do_token" {}
      
      provider "digitalocean" {
        token = var.do_token
      }
      

      In this code, you require the digitalocean provider to be available and pass in the do_token variable to its instance. Save and close the file.

Initialize the configuration by running:

• terraform init

      You’ll receive the following output:

      Output

Initializing modules...
- dns in ../../modules/dns-records
- droplets in ../../modules/droplet-lb

Initializing the backend...

Initializing provider plugins...
- Finding latest version of digitalocean/digitalocean...
- Installing digitalocean/digitalocean v2.0.2...
- Installed digitalocean/digitalocean v2.0.2 (signed by a HashiCorp partner, key ID F82037E524B9C0E8)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/plugins/signing.html

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, we recommend adding version constraints in a required_providers block
in your configuration, with the constraint strings suggested below.

* digitalocean/digitalocean: version = "~> 2.0.2"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

The configuration for the prod environment is similar. Navigate to its directory by running:

• cd ../prod

Create and open main.tf for editing:

• nano main.tf

      Add the following lines:

      terraform-reusability/environments/prod/main.tf

      module "droplets" {
        source   = "../../modules/droplet-lb"
      
        droplet_count = 5
        group_name    = "prod"
      }
      
      module "dns" {
        source   = "../../modules/dns-records"
      
        domain_name   = "your_prod_domain"
        ipv4_address  = module.droplets.lb_ip
      }
      

The difference between this and your dev code is that five Droplets will be deployed, and that the domain name (which you should replace with your prod domain name) will be different. Save and close the file when you’re done.

Then, copy over the provider configuration from dev:

• cp ../dev/provider.tf .

Initialize this configuration as well:

• terraform init

      The output of this command will be the same as the previous time you ran it.

      You can try planning the configuration to see what resources Terraform would create by running:

      • terraform plan -var "do_token=${DO_PAT}"

      The output for prod will be the following:

      Output

...
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.dns.digitalocean_domain.domain will be created
  + resource "digitalocean_domain" "domain" {
      + id   = (known after apply)
      + name = "your_prod_domain"
      + urn  = (known after apply)
    }

  # module.dns.digitalocean_record.domain_A will be created
  + resource "digitalocean_record" "domain_A" {
      + domain = "your_prod_domain"
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "@"
      + ttl    = (known after apply)
      + type   = "A"
      + value  = (known after apply)
    }

  # module.dns.digitalocean_record.domain_CNAME will be created
  + resource "digitalocean_record" "domain_CNAME" {
      + domain = "your_prod_domain"
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "www"
      + ttl    = (known after apply)
      + type   = "CNAME"
      + value  = (known after apply)
    }

  # module.droplets.digitalocean_droplet.droplets[0] will be created
  + resource "digitalocean_droplet" "droplets" {
      ...
      + name = "prod-0"
      ...
    }

  # module.droplets.digitalocean_droplet.droplets[1] will be created
  + resource "digitalocean_droplet" "droplets" {
      ...
      + name = "prod-1"
      ...
    }

  # module.droplets.digitalocean_droplet.droplets[2] will be created
  + resource "digitalocean_droplet" "droplets" {
      ...
      + name = "prod-2"
      ...
    }

  # module.droplets.digitalocean_droplet.droplets[3] will be created
  + resource "digitalocean_droplet" "droplets" {
      ...
      + name = "prod-3"
      ...
    }

  # module.droplets.digitalocean_droplet.droplets[4] will be created
  + resource "digitalocean_droplet" "droplets" {
      ...
      + name = "prod-4"
      ...
    }

  # module.droplets.digitalocean_loadbalancer.www-lb will be created
  + resource "digitalocean_loadbalancer" "www-lb" {
      ...
      + name = "lb-prod"
      ...

Plan: 9 to add, 0 to change, 0 to destroy.
...

This would deploy five Droplets fronted by a Load Balancer. It would also create the prod domain you specified, with the two DNS records pointing to the Load Balancer. You can try planning the configuration for the dev environment as well; you’ll note that two Droplets would be planned for deployment.

      Note: You can apply this configuration for the dev and prod environments with the following command:

      • terraform apply -var "do_token=${DO_PAT}"
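When you no longer need the deployed resources, you can destroy them by running the analogous command from the same environment directory:

• terraform destroy -var "do_token=${DO_PAT}"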

The following shows how you have now structured this project:

terraform-reusability/
      ├─ environments/
      │  ├─ dev/
      │  │  ├─ main.tf
      │  │  ├─ provider.tf
      │  ├─ prod/
      │  │  ├─ main.tf
      │  │  ├─ provider.tf
      ├─ modules/
      │  ├─ dns-records/
      │  │  ├─ outputs.tf
      │  │  ├─ provider.tf
      │  │  ├─ records.tf
      │  │  ├─ variables.tf
      │  ├─ droplet-lb/
      │  │  ├─ droplets.tf
      │  │  ├─ lb.tf
      │  │  ├─ outputs.tf
      │  │  ├─ provider.tf
      │  │  ├─ variables.tf
      ├─ main.tf
      ├─ provider.tf
      

      The addition is the environments directory, which holds the code for the dev and prod environments.

The benefit of this approach is that further changes to modules automatically propagate to all parts of your project that use them. Aside from the module inputs you customize per environment, this approach avoids repetition and promotes reusability as much as possible, even across deployment environments. Overall, this reduces clutter and makes modifications easy to trace with a version-control system.

      In the final two sections of this tutorial, you’ll review the depends_on meta argument and the templatefile function.

      Declaring Dependencies to Build Infrastructure in Order

While planning actions, Terraform automatically tries to infer existing dependencies and builds them into its dependency graph. The main dependencies it can detect are explicit references; for example, when an output value of a module is passed as a parameter to another resource. In this scenario, the module must first complete its deployment to provide the output value. You’ve already relied on this behavior: passing module.droplets.lb_ip into the dns-records module guarantees that the Load Balancer exists before the DNS records that point to it are created.

The dependencies that Terraform can’t detect are hidden ones: they involve side effects and mutual references that aren’t inferable from the code. An example of this is when an object depends not on the existence, but on the behavior of another one, and does not access its attributes from code. To overcome this, you can use depends_on to specify the dependencies explicitly. Since Terraform 0.13, you can also use depends_on on modules to force all the listed resources to be fully deployed before the module itself is deployed. It’s possible to use the depends_on meta argument with every resource type, and it accepts a list of references to other resources on which the specified resource depends.

In the previous steps of this tutorial, you didn’t specify any explicit dependencies using depends_on, because the resources you created have no side effects that aren’t inferable from the code. Terraform detects the references made in the code you’ve written and schedules the resources for deployment accordingly.

      depends_on accepts a list of references to other resources. Its syntax looks like this:

      resource "resource_type" "res" {
        depends_on = [...] # List of resources
      
        # Parameters...
      }
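
For instance, purely as a hypothetical illustration in this project, you could force the dns module to wait for every resource in droplet-lb to be deployed (not just the Load Balancer whose IP it already references) by listing the module as an explicit dependency:

module "dns" {
  source = "../../modules/dns-records"

  domain_name  = "your_dev_domain"
  ipv4_address = module.droplets.lb_ip

  # Hypothetical: block deployment of this module until everything in
  # droplet-lb is fully deployed, not only the resources referenced above
  depends_on = [module.droplets]
}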
      

Remember that you should only use depends_on as a last resort. If you do use it, keep it well documented, because the behavior that the resources depend on may not be immediately obvious.

      Using Templates for Customization

In Terraform, templating means substituting the results of expressions in appropriate places, such as when setting attribute values on resources or constructing strings. You’ve used it in the previous steps and in the tutorial prerequisites to dynamically generate Droplet names and other parameter values.

When substituting values into strings, you specify the value surrounded by ${}. Template substitution is often used in loops to facilitate customization of the created resources, and it also allows for module customization by substituting inputs into resource attributes.
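For example, the droplet-lb module from the prerequisites builds its Droplet names by combining an input variable with the loop index. A sketch of that pattern (the image, region, and size values here are illustrative) looks like this:

resource "digitalocean_droplet" "droplets" {
  count  = var.droplet_count
  image  = "ubuntu-20-04-x64"
  # Substitute the group name and the loop index into the Droplet name,
  # yielding names such as dev-0 or prod-3
  name   = "${var.group_name}-${count.index}"
  region = "fra1"
  size   = "s-1vcpu-1gb"
}
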

Terraform offers the templatefile function, which accepts two arguments: the path of a file on disk to read and a map of variables paired with their values. The value it returns is the contents of the file with every expression substituted, just as Terraform would normally render it when planning or applying the project. Because functions are not part of the dependency graph, the file cannot be dynamically generated by another part of the project.

Imagine that the contents of the template file called droplets.tmpl are as follows:

      %{ for address in addresses ~}
      ${address}:80
      %{ endfor ~}
      

Directives, such as the for and endfor declarations that signify the start and end of the for loop respectively, are surrounded with %{}. The ~ before the closing brace strips the whitespace and newline that follow the directive, so the rendered output contains only the generated lines. The contents and type of the addresses variable are not known until the function is called and actual values are provided, like so:

      templatefile("${path.module}/droplets.tmpl", { addresses = ["192.168.0.1", "192.168.1.1"] })
      

      The value that this templatefile call will return is the following:

      Output

192.168.0.1:80
192.168.1.1:80

This function has its use cases, but they are uncommon. For example, you could use it when part of the configuration needs to exist in a proprietary format but depends on values from the rest of the project, and so must be generated dynamically. In the majority of cases, it’s better to specify all configuration parameters directly in Terraform code, where possible.
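As a hypothetical sketch of such a case, you could pair templatefile with the local_file resource from the hashicorp/local provider to write a dynamically generated file to disk during apply:

resource "local_file" "backend_list" {
  # Hypothetical: render the backend list from droplets.tmpl above and
  # write it to disk when the project is applied; requires the
  # hashicorp/local provider
  filename = "${path.module}/backends.cfg"
  content = templatefile("${path.module}/droplets.tmpl", {
    addresses = ["192.168.0.1", "192.168.1.1"]
  })
}
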

      Conclusion

In this article, you’ve maximized code reuse in an example Terraform project. The main technique is to package frequently used features and configurations as a customizable module and use it wherever needed. That way, you avoid duplicating the underlying code (which can be error prone) and enable faster turnaround times, since modifying the module is almost all you need to do to roll out changes.

You’re not limited to your own modules. As you’ve seen, the Terraform Registry provides third-party modules and providers that you can incorporate into your project.

      Check out the rest of the How To Manage Infrastructure with Terraform series.


