
      How to Use a Private Go Module in Your Own Project


      The author selected the Diversity in Tech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      One beneficial aspect of Go’s ecosystem is that a large number of modules are open source. Since they’re open source they can be freely accessed, examined, used, and learned from. However, sometimes it’s necessary to make a private Go module for various reasons, such as keeping proprietary business logic internal to your company.

      In this tutorial, you will publish a private Go module, set up authentication to access a private module, and use a private Go module in a project.

Prerequisites

To follow this tutorial, you will need:

• Go installed on your machine.
• Git installed, and configured with access to your GitHub account.
• A private, empty GitHub repository named mysecret, which will hold your private module.
• A GitHub personal access token, which you will use later for HTTPS authentication. If you intend to use SSH instead, an SSH key added to your GitHub account will also work.

      Distributing a Private Module

      Unlike many programming languages, Go distributes modules from repositories instead of a central package server. One benefit of this approach is that publishing a private module is very similar to publishing a public one. Instead of requiring a completely separate private package server, a Go private module is distributed via a private source code repository. Since most source code hosting options support this out of the box, there’s no need to set up an additional private server.

Before you can use a private module, you’ll need access to one. In this section, you’ll create and publish a private module that you can then access from another Go program later in the tutorial.

      To create your new private Go module, start by cloning the private GitHub repository where it will live. As part of the prerequisites you created a private, empty repository named mysecret in your GitHub account and this is the one you will use for your private module. This repository can be cloned anywhere you’d like on your computer, but many developers tend to have a directory for their projects. In this tutorial, you’ll use a directory named projects.

      Make the projects directory and navigate to it:

      • mkdir projects
      • cd projects

From the projects directory, run git clone to clone your private mysecret repository to your computer:

• git clone [email protected]:your_github_username/mysecret.git

Git will confirm it has cloned your module and may warn you that you have cloned an empty repository. If so, this is not something you need to worry about:

      Output

Cloning into 'mysecret'...
warning: You appear to have cloned an empty repository.

      Next, use cd to go into the new mysecret directory you cloned and use go mod init, along with the name of your private repository, to create a new Go module:

      • cd mysecret
      • go mod init github.com/your_github_username/mysecret

Now that your module is created, it’s time to add a function you can use from another project. Use nano, or your favorite text editor, to open a file with the same name as your repository, such as mysecret.go. The name isn’t significant, and could be anything, but using the same name as the repository makes it easier to determine which file to look in first when working with a new module:

• nano mysecret.go

      In the mysecret.go file, name the package with the same name as your repository, then add a SecretProcess function to print the line Running the secret process! when called:

      projects/mysecret/mysecret.go

      package mysecret
      
      import "fmt"
      
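// SecretProcess prints a message showing that the secret process is running.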
      func SecretProcess() {
          fmt.Println("Running the secret process!")
      }
      

      Now that you have your private module created, you will publish it to your private repository for others to use. Since your private repository only allows you to access it initially, you’re able to control who has access to your private module. You might restrict access to yourself, but you could also give access to friends or coworkers as well.

      Since both private and public Go modules are source repositories, publishing a private Go module follows the same process as publishing a public one. To publish your new module, stage your changes in the current directory using the git add command, then commit those changes to your local repository with the git commit command:

      • git add .
      • git commit -m "Initial private module implementation"

      You will see a confirmation from Git that your initial commit has succeeded as well as a summary of the files included in the commit:

      Output

[main (root-commit) bda059d] Initial private module implementation
 2 files changed, 10 insertions(+)
 create mode 100644 go.mod
 create mode 100644 mysecret.go

Now the only part left is to move your changes to your GitHub repository. Similar to a public module, use the git push command to publish your code:

• git push origin main

Git will then push your changes and make them available to anyone with access to your private repository:

Output

      Enumerating objects: 4, done.
      Counting objects: 100% (4/4), done.
      Delta compression using up to 8 threads
      Compressing objects: 100% (3/3), done.
      Writing objects: 100% (4/4), 404 bytes | 404.00 KiB/s, done.
      Total 4 (delta 0), reused 0 (delta 0), pack-reused 0
      To github.com:your_github_username/mysecret.git
       * [new branch]      main -> main
      

      As with a public Go module, you can also add versions to your private Go module. The Publishing a New Module Version section of the How to Distribute Go Modules tutorial includes information on how to do this.
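For example, a minimal sketch of publishing a first stable version, assuming you want to tag the current commit as v1.0.0, looks like this:

• git tag v1.0.0
• git push origin v1.0.0

Once the tag is pushed, other projects can request the module at that version, for example with go get github.com/your_github_username/[email protected], instead of a commit-based pseudo-version.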

      In this section, you created a new module with a SecretProcess function and published it to your private mysecret GitHub repository, making it a private Go module. In order to access this module from another Go program, though, you’ll need to configure Go so it knows how to access the module.

      Configuring Go to Access Private Modules

While Go modules are commonly distributed from their source code repositories, the Go team also runs a few central Go module services to help ensure modules continue to exist if something happens to the original repository. By default, Go is configured to use these services, but they can cause problems when you try to download a private module because they don’t have access to those modules. To tell Go that some import paths are private and that it shouldn’t try to use the central Go services, you can use the GOPRIVATE environment variable. The GOPRIVATE environment variable is a comma-separated list of module path prefixes; when Go encounters a matching path, it accesses the module directly instead of going through the central services. One such example would be the private module you just created.

      In order to use the private module, you will tell Go which path to consider private by setting it in the GOPRIVATE variable. There are a few choices you can make when setting your GOPRIVATE variable values. One option is to set GOPRIVATE to github.com. This might not be what you’re looking for, though, because this would tell Go not to use the central services for any module hosted on github.com, including the ones that aren’t yours.

      The next option would be to set GOPRIVATE to only your own user path, such as github.com/your_github_username. This solves the problem of considering all of GitHub private, but at some point you may have public modules you’ve created that you’d like to download through the Go module mirror. Doing this wouldn’t cause any problems and would be a perfectly reasonable option, but you also have the option of getting even more specific.

      The most specific option would be setting GOPRIVATE to match the path of your module exactly, such as: github.com/your_github_username/mysecret. This solves both of the issues from the previous options, but also introduces the issue that you need to add each of your private repositories to GOPRIVATE individually, such as shown here:

      GOPRIVATE=github.com/your_github_username/mysecret,github.com/your_github_username/othersecret
      
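The values in GOPRIVATE are treated as glob patterns, so if listing every repository individually becomes tedious, a single wildcard entry per account can cover them all:

GOPRIVATE=github.com/your_github_username/*
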

Choosing the best option for yourself is a matter of weighing the pros and cons of your situation.

      Since you only have a single private module now, we’ll use the full repository name for the value. To set the GOPRIVATE=github.com/your_github_username/mysecret environment variable in your current terminal, use the export command:

      • export GOPRIVATE=github.com/your_github_username/mysecret
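
The export command only sets the variable for your current terminal session. If you’d like the setting to persist across sessions, you can instead record it in Go’s own environment file with go env -w:

• go env -w GOPRIVATE=github.com/your_github_username/mysecret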

If you’d like to double-check that it’s set, you can use the env command along with grep to check for the GOPRIVATE name:

• env | grep GOPRIVATE

      Output

      GOPRIVATE=github.com/your_github_username/mysecret

      Even though Go now knows your module is private, it’s still not quite enough to use the module yet. If you try to go get your private module into another module, you’ll likely see an error similar to:

      • go get github.com/your_github_username/mysecret

      Output

go get: module github.com/your_github_username/mysecret: git ls-remote -q origin in /Users/your_github_username/go/pkg/mod/cache/vcs/2f8c...b9ea: exit status 128:
        fatal: could not read Username for 'https://github.com': terminal prompts disabled
Confirm the import path was entered correctly.
If this is a private repository, see https://golang.org/doc/faq#git_https for additional information.

      This error message says Go tried to download your module, but it encountered something it still doesn’t have access to. Since Git is being used to download the module, it would usually ask you to enter your credentials. However, in this case, Go is calling Git for you and can’t prompt for them. At this point, to access your module you’ll need to provide a way for Git to retrieve your credentials without your immediate input.

      Providing Private Module Credentials for HTTPS

One way to tell Git how to log in on your behalf is the .netrc file. Located in a user’s home directory, the .netrc file contains various host names as well as login credentials for those hosts. It’s widely used by a number of tools, including Git.

      By default, when go get tries to download a module, it will try to use HTTPS first. However, as shown in the previous example, it’s not able to prompt you for your username and password. To give Git your credentials, you’ll need to have a .netrc that includes github.com in your home directory.

To create a .netrc file on Linux, MacOS, or Windows Subsystem for Linux (WSL), open the .netrc file in your home directory (~/) so you can edit it:

• nano ~/.netrc

      Next, create a new entry in the file. The machine value should be the hostname you’re setting the credentials for, which is github.com in this case. The login value should then be your GitHub username. Finally, the password value should be the GitHub personal access token you created.

      ~/.netrc

      machine github.com
      login your_github_username
      password your_github_access_token
      

If you’d prefer, you can also put the entire entry on one line in the file:

      ~/.netrc

      machine github.com login your_github_username password your_github_access_token
      
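Since the .netrc file contains your access token in plain text, it’s a good idea to restrict its permissions so only your user can read it:

• chmod 600 ~/.netrc
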

      Note: If you are using Bitbucket for your source code hosting you may also need to add a second entry for api.bitbucket.org in addition to bitbucket.org. In the past, Bitbucket provided hosting for multiple types of version control, so Go would use the API to check the type of repository before trying to download it. While this is no longer the case, the API check still exists. If you encounter this issue, an example error message may look like this:

go get bitbucket.org/your_bitbucket_username/mysecret: reading https://api.bitbucket.org/2.0/repositories/your_bitbucket_username/protocol?fields=scm: 403 Forbidden
    server response: Access denied. You must have write or admin access.
      

      If you see the 403 Forbidden error when trying to download a private module, double check the hostname Go is trying to connect to. It could indicate another hostname, such as api.bitbucket.org, that you need to add to your .netrc file.

      Now your environment is set up to use HTTPS authentication for downloading your private module. Even though HTTPS is the default way Go and Git will try to download a module, it’s also possible to tell Git to use SSH instead. Using SSH instead of HTTPS can be useful so you can use the same SSH key you used to push your private module. It also allows you to use deploy keys when setting up a CI/CD environment if you’d rather not create a personal access token.

      Providing Private Module Credentials for SSH

      To use your SSH key as the authentication method for your private Go module instead of HTTPS, Git provides a configuration option called insteadOf. The insteadOf option allows you to say that “instead of” using https://github.com/ as the request URL for all Git requests, you’d prefer to use ssh://[email protected]/.

On Linux, MacOS, and WSL this configuration lives in the .gitconfig file. You may already be familiar with this file, as it’s also where your commit email address and name are configured. To edit the file, use nano, or your favorite text editor, and open the ~/.gitconfig file in your home directory:

• nano ~/.gitconfig

      Once you have the file open, edit it to include a url section for ssh://[email protected]/ as in the example below:

      ~/.gitconfig

      [user]
          email = [email protected]
          name = Sammy the Shark
      
      [url "ssh://[email protected]/"]
          insteadOf = https://github.com/
      

      The order of the url section relative to the user section doesn’t matter, and you also don’t need to worry if there’s nothing else in the file except for the url section you just added. The order of the email and name fields inside the user section also does not matter.
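
If you’d rather not edit the file by hand, you can add the same url section with a single git config command:

• git config --global url."ssh://[email protected]/".insteadOf "https://github.com/"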

This new section tells Git that any URL you use that starts with https://github.com/ should have that prefix replaced with ssh://[email protected]/ instead. Since Go uses HTTPS by default, this also affects your go get commands. Using your private module as an example, this means Go turns the github.com/your_github_username/mysecret import path into the URL https://github.com/your_github_username/mysecret. When Git encounters this URL it will see that it matches the https://github.com/ prefix referenced by insteadOf and will turn the resulting URL into ssh://[email protected]/your_github_username/mysecret.

      This same pattern can be used for domains other than GitHub as long as the ssh://git@ URL works for that host as well.
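
For example, assuming you also kept private modules on gitlab.com, the matching entry would look like this:

~/.gitconfig

[url "ssh://[email protected]/"]
    insteadOf = https://gitlab.com/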

      In this section, you configured Git to use SSH to download Go modules by updating your .gitconfig file and adding a url section. Now that authentication for your private module is set up, you can access it for use in your Go programs.

      Using a Private Module

In the previous sections, you configured Go to access your private Go module via HTTPS, SSH, or possibly both. Now that Go can access your private module, you can use it much like any public module you may have used in the past. In this section, you’ll create a new Go module that uses your private module.

In the directory you use for your projects, such as projects, create a directory named myproject for the new project using the mkdir command:

• mkdir myproject

Once the directory is created, use cd to go into it and initialize a new Go module with go mod init, based on the repository URL your project would live at, such as github.com/your_github_username/myproject. If you don’t plan to push your project to another repository, the module name could be just myproject, or any other name, but it’s good practice to use the full URL, since most modules being shared will need one.

      • cd myproject
      • go mod init github.com/your_github_username/myproject

      Output

      go: creating new go.mod: module github.com/your_github_username/myproject

Now, create your project’s first code file by opening main.go with nano, or your favorite text editor:

• nano main.go

      Inside the file, set up the initial main function you will call your private module from:

      projects/myproject/main.go

      package main
      
      import "fmt"
      
      func main() {
          fmt.Println("My new project!")
      }
      

To run your project now and make sure everything is set up correctly, use the go run command and provide it the main.go file:

• go run main.go

      Output

      My new project!

      Next, add your private module as a dependency of your new project using go get, similar to how you would for a public module:

      • go get github.com/your_github_username/mysecret

      The go tool will then download your private module’s code and add it as a dependency using a version string matching your latest commit hash and the time of that commit:

      Output

go: downloading github.com/your_github_username/mysecret v0.0.0-20210920195630-bda059d63fa2
go get: added github.com/your_github_username/mysecret v0.0.0-20210920195630-bda059d63fa2
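
Your project’s go.mod now records the dependency with that pseudo-version. A sketch of how the file might look at this point (the go directive will match your installed Go version):

projects/myproject/go.mod

module github.com/your_github_username/myproject

go 1.17

require github.com/your_github_username/mysecret v0.0.0-20210920195630-bda059d63fa2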

      Finally, open the main.go file again and update it to add a call to your private module’s SecretProcess function in the main function. You’ll also need to update the import statement to add your github.com/your_github_username/mysecret private module as an import as well:

      projects/myproject/main.go

      package main
      
      import (
          "fmt"
      
          "github.com/your_github_username/mysecret"
      )
      
      func main() {
          fmt.Println("My new project!")
          mysecret.SecretProcess()
      }
      

To see the final project running with your private module, use the go run command again, providing the main.go file as the parameter:

• go run main.go

You will see the My new project! line from the original code, but now you’ll also see a Running the secret process! line from your imported mysecret module:

      Output

My new project!
Running the secret process!

In this section, you used go mod init to create a new Go module to access the private module you published earlier. Once you had the module created, you then used go get to download your private module as you would with a public Go module. Finally, you used go run to compile and run your Go program using the private module.

      Conclusion

      In this tutorial, you created and published a private Go module. You also set up both HTTPS and SSH authentication to access your private Go module. Finally, you used your private module in a new project.

      For more information on Go modules, the Go project has a series of blog posts detailing how the Go tools interact with and understand modules. The Go project also has a very detailed and technical reference for Go modules in the Go Modules Reference.

      In addition to the GOPRIVATE environment variable, more variables are available to use when working with private Go modules. They can be seen in detail in the Private Modules section of the Go Modules Reference.

      If you’re interested in exploring the .netrc file in more detail, the GNU website on .netrc includes a list of all the available keywords. The git-config documentation also includes more information about how the insteadOf configuration option you used works in addition to other options that are available.

      This tutorial is also part of the DigitalOcean How to Code in Go series. The series covers a number of Go topics, from installing Go for the first time to how to use the language itself.




      How To Set Up a Gatsby Project with TypeScript


      The author selected the Diversity in Tech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      TypeScript is a superset of JavaScript that adds optional static typing at build time, which cuts down on debugging runtime errors. It has grown into a powerful alternative to JavaScript over the years. At the same time, Gatsby has emerged as a useful front-end framework for creating static websites. TypeScript’s static-typing abilities go well with a static-site generator like Gatsby, and Gatsby has built-in support for coding in TypeScript.

      In this tutorial, you’re going to use Gatsby’s built-in capabilities to configure a Gatsby project for TypeScript. After this tutorial, you will have learned how to integrate TypeScript into your Gatsby project.

      Prerequisites

      • You will need to have both Node and npm installed in order to run a development environment and handle TypeScript- or Gatsby-related packages, respectively. This tutorial was tested with Node.js version 14.13.0 and npm version 6.14.8. To install on macOS or Ubuntu 18.04, follow the steps in How to Install Node.js and Create a Local Development Environment on macOS or the Installing Using a PPA section of How To Install Node.js on Ubuntu 18.04.
      • To create a new Gatsby project, you will need the Gatsby CLI command line tool installed on your computer. To set this up, follow Step 1 in How to Set Up Your First Gatsby Site. This step will also show you how to create a new Gatsby project with the gatsby new command.
      • You will need some familiarity with GraphQL queries and using GraphiQL to query for local image data. If you’d like a refresher on the query sandbox in GraphiQL, read How to Handle Images with GraphQL and the Gatsby Image API.
      • You will need sufficient knowledge of JavaScript, especially ES6+ syntax such as destructuring and imports/exports. You can find more information on these topics in Understanding Destructuring, Rest Parameters, and Spread Syntax in JavaScript and Understanding Modules and Import and Export Statements in JavaScript.
      • Since Gatsby is a React-based framework, you will be refactoring and creating components in this tutorial. You can learn more about this in How to Create Custom Components in React.
      • Additionally, you will need TypeScript installed on your machine. To do this, refer to the official TypeScript website. If you are using an editor besides Visual Studio Code, you may need to go through a few extra steps to make sure you have TypeScript performing type-checks at build time and showing any errors. For example, if you’re using Atom, you’ll need to install the atom-typescript package to be able to achieve a true TypeScript experience. If you would like to download TypeScript only for your project, do so after the Gatsby project folder has been set up.

      Step 1 — Creating a New Gatsby Site and Removing Boilerplate

      To get started, you’re going to create your Gatsby site and make sure that you can run a server and view the site. After that, you will remove some unused boilerplate files and code. This will set your project up for edits in later steps.

      Open your computer’s console/terminal and run the following command:

      • gatsby new gatsby-typescript-tutorial

      This will take a few seconds to run as it sets up the necessary boilerplate files and folders for the Gatsby site. After it is finished, cd into the project’s directory:

      • cd gatsby-typescript-tutorial

To make sure the site’s development environment can start properly, run the following command:

• gatsby develop

      After a few seconds, you will receive the following message in the console:

      Output

...
You can now view gatsby-starter-default in the browser.

  http://localhost:8000

      Usually, the default port is :8000, but you can change this by running gatsby develop -p another_number instead.

      Head over to your preferred browser and type http://localhost:8000 in the address bar to find the site. It will look like this:

      Gatsby Default Starter Site

Next, you’ll remove all unnecessary files. This includes gatsby-node.js, gatsby-browser.js, and gatsby-ssr.js:

      • rm gatsby-node.js
• rm gatsby-browser.js
      • rm gatsby-ssr.js

      Next, to finish setup, you’re going to remove some boilerplate code from your project’s index page. In your project’s root directory, head to the src directory, followed by pages and then open the index.js file.

      For this tutorial, you are only going to work with an <Image /> component, so you can delete code related to the <Link /> component, along with the h1 and p elements. Your file will then look like the following:

      gatsby-typescript-tutorial/src/pages/index.js

      import React from "react"
      
      import Layout from "../components/layout"
      import Image from "../components/image"
      import SEO from "../components/seo"
      
      const IndexPage = () => (
        <Layout>
          <SEO title="Home" />
          <div style={{ maxWidth: `300px`, marginBottom: `1.45rem` }}>
            <Image />
          </div>
        </Layout>
      )
      
      export default IndexPage
      

      Save and close the file.

      Now that you’ve created your project and completed some initial setup, you are ready to install the necessary plugins.

      Step 2 — Installing Dependencies

      In order to set up support for TypeScript in Gatsby, you’ll need some additional plugins and dependencies, which you will install in this step.

      The gatsby-plugin-typescript plugin already comes with a newly created Gatsby site. Unless you want to change any of its default options, you don’t have to add this plugin to your gatsby-config.js file explicitly. This Gatsby plugin makes writing .ts and .tsx files in TypeScript possible.
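
If you did want to override the defaults, you would add the plugin to gatsby-config.js explicitly. A minimal sketch, assuming you want to enable the plugin’s isTSX and allExtensions options (the values here are illustrative):

gatsby-typescript-tutorial/gatsby-config.js

module.exports = {
  plugins: [
    {
      resolve: `gatsby-plugin-typescript`,
      options: {
        isTSX: true,         // treat files as containing JSX
        allExtensions: true, // apply the transform to .ts files as well
      },
    },
    // ...the rest of the default plugins
  ],
}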

      Since your app can read TypeScript files, you will now change Gatsby’s JavaScript files to a TypeScript file extension. In particular, change header.js, image.js, layout.js, and seo.js in src/components and index.js in src/pages to header.tsx, image.tsx, layout.tsx, seo.tsx, and index.tsx:

      • mv src/components/header.js src/components/header.tsx
      • mv src/components/image.js src/components/image.tsx
      • mv src/components/layout.js src/components/layout.tsx
      • mv src/components/seo.js src/components/seo.tsx
      • mv src/pages/index.js src/pages/index.tsx

The mv command renames each file to the name given as its second argument. The files get the .tsx extension because they contain JSX.

      There is one important caveat about the gatsby-plugin-typescript plugin, however: it doesn’t include type-checking at build time (a core function of TypeScript). If you’re using VS Code, this won’t be an issue because TypeScript is a supported language in Visual Studio. But if you’re using another editor, like Atom, you will need to do some extra configurations to achieve a full TypeScript development experience.

Since Gatsby is a React-based framework, adding some additional React-related typing is also recommended. To add type-checking for types specific to React, run the following command:

• npm install @types/react

To add type-checking for types related to the React DOM, use this command:

• npm install @types/react-dom

      Now that you’ve become familiar with the plugin gatsby-plugin-typescript, you are ready to configure your Gatsby site for TypeScript in the next step.

      Step 3 — Configuring TypeScript for Gatsby with the tsconfig.json File

      In this step, you will create a tsconfig.json file. A tsconfig.json file has two primary purposes: establishing the root directory of the TypeScript project (include) and overriding the TypeScript compiler’s default configurations (compilerOptions). There are a couple of ways to create this file. If you have the tsc command line tool installed with npm, you could create a new tsconfig file with tsc --init. But the file is then populated with many default options and comments.

Instead, create a new file at the root of your directory (gatsby-typescript-tutorial/) and name it tsconfig.json.

      Next, create an object with two properties, compilerOptions and include, populated with the following code:

      gatsby-typescript-tutorial/tsconfig.json

       {
        "compilerOptions": {
          "module": "commonjs",
          "target": "es6",
          "jsx": "preserve",
          "lib": ["dom", "es2015", "es2017"],
          "strict": true,
          "noEmit": true,
          "isolatedModules": true,
          "esModuleInterop": true,
          "skipLibCheck": true,
          "noUnusedLocals": true,
          "noUnusedParameters": true,
          "removeComments": false
        },
        "include": ["./src/**/*"]
      }
      

      Note:
      This configuration is partially based on the gatsby-starter-typescript-plus starter.

      Save this file and close it when you are done.

      The include property points to an array of filenames or paths that the compiler knows to convert from TypeScript to JavaScript.

      Here is a brief explanation of each option used in compilerOptions:

      • module - Sets the module system for the project; commonjs is used by default.
      • target - Depending on what version of JavaScript you’re using, this option determines which features to downlevel and which to leave alone. This can be helpful if your project is deployed to older environments vs. newer environments.
      • jsx - Setting for how JSX is treated in .tsx files. The preserve option leaves the JSX unchanged.
      • lib - An array of specified type-definitions of different JS libraries/APIs (dom, es2015, etc.).
      • strict - When set to true, this enables TypeScript’s type-checking abilities at build-time.
• noEmit - Since Gatsby already uses Babel to compile your code to readable JavaScript, you set this option to true to leave TypeScript out of the compilation step.
      • isolatedModules - By choosing Babel as your compiler/transpiler, you are opting for compilation one file at a time, which may cause potential problems at runtime. Setting this option to true allows TypeScript to warn you if you are about to run into this problem.
• esModuleInterop - Enabling this option allows your use of CommonJS (the module system you set) and ES modules (importing and exporting custom variables and functions) to work together better, and allows namespace objects for all imports.
• noUnusedLocals and noUnusedParameters - Enabling these two options tells TypeScript to report errors if you create an unused local variable or parameter.
      • removeComments - Setting this to false (or not setting it at all) allows there to be comments present after any TypeScript files have been converted to JavaScript.

      You can learn more about these different options and many more by visiting TypeScript’s reference guide for tsconfig.

      Now that TypeScript is configured for Gatsby, you are going to complete your TypeScript integration by refactoring some of your boilerplate files in src/components and src/pages.

      Step 4 — Refactoring seo.tsx for TypeScript

      In this step, you’re going to add some TypeScript syntax to the seo.tsx file. This step goes in depth to explain some concepts of TypeScript; the next step will show how to refactor other boilerplate code in a more abbreviated manner.

      One feature of TypeScript is its flexibility with its syntax. If you don’t want to add typing to your variables explicitly, you don’t have to. Gatsby believes that adopting TypeScript in your workflow “can and should be incremental”, and so this step will concentrate on three core TypeScript concepts:

      • basic types
      • defining types and interfaces
      • working with build-time errors

      Basic Types in TypeScript

      TypeScript supports basic datatypes including: boolean, number, and string. The major syntactical difference with TypeScript, compared to JavaScript, is that variables can now be defined with an associated type.

For example, the following code block shows how to assign the basic types:

      let num: number;
      num = 0
      
      let str: string;
      str = "TypeScript & Gatsby"
      
      let typeScriptIsAwesome: boolean;
      typeScriptIsAwesome = true;
      

      In this code, num must be a number, str must be a string, and typeScriptIsAwesome must be a boolean.

Now you will examine the defaultProps and propTypes declarations in the seo.tsx file, found in the src/components directory. Open the file in your editor and look for the following lines:

      gatsby-typescript-tutorial/src/components/seo.tsx

      ...
      import React from "react"
      import PropTypes from "prop-types"
      import { Helmet } from "react-helmet"
      import { useStaticQuery, graphql } from "gatsby"
      
      ...
            ].concat(meta)}
          />
        )
      }
      
      
      SEO.defaultProps = {
        lang: `en`,
        meta: [],
        description: ``,
      }
      
      SEO.propTypes = {
        description: PropTypes.string,
        lang: PropTypes.string,
        meta: PropTypes.arrayOf(PropTypes.object),
        title: PropTypes.string.isRequired,
      }
      
      export default SEO
      

By default, a Gatsby site’s SEO component comes with a weak typing system using PropTypes. The defaultProps and propTypes are explicitly declared using the imported PropTypes class. For example, the meta prop of the propTypes object is declared as an array of objects, each validated with PropTypes.object. Some props are marked as required (isRequired) while others are not, implying they are optional.

      Since you are using TypeScript, you will be replacing this typing system. Go ahead and delete defaultProps and propTypes (along with the import statement for the PropTypes at the top of the file). Your file will look like the following:

      gatsby-typescript-tutorial/src/components/seo.tsx

       ...
      import React from "react"
      import { Helmet } from "react-helmet"
      import { useStaticQuery, graphql } from "gatsby"
      
      
      ...
            ].concat(meta)}
          />
        )
      }
      
      export default SEO
      

      Now that you’ve removed the default typing, you’ll write out the type aliases with TypeScript.

      Defining TypeScript Interfaces

      In TypeScript, an interface is used to define the “shape” of a custom type. These are used to represent the value type of complex pieces of data like React components and function parameters. In the seo.tsx file, you’re going to build an interface to replace the defaultProps and propTypes definitions that were deleted.

Add the following lines:

gatsby-typescript-tutorial/src/components/seo.tsx

       ...
      import React from "react"
      import { Helmet } from "react-helmet"
      import { useStaticQuery, graphql } from "gatsby"
      
      interface SEOProps {
        description?: string,
        lang?: string,
        meta?: Array<{name: string, content: string}>,
        title: string
      }
      
      ...
      
      
      

The interface SEOProps accomplishes what SEO.propTypes did by setting each property’s associated data type, as well as marking some as optional with the ? character.
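
To make the optional markers concrete, here is a quick sketch, assuming the SEOProps interface above, of values that would and would not satisfy it:

const minimal: SEOProps = { title: "Home" }                    // valid: optional props omitted
const full: SEOProps = { title: "Home", lang: "en", meta: [] } // valid: optional props provided
// const broken: SEOProps = { description: "no title" }        // error: Property 'title' is missing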

      Typing a Function

Just like in JavaScript, functions play an important role in TypeScript applications. You can even type functions by specifying the datatype of the arguments passed into them. In the seo.tsx file, you will now work on the defined SEO function component. Under where the interface for SEOProps was defined, you’re going to explicitly declare the type of the SEO component’s function arguments by annotating them with SEOProps:

Add the following code:

gatsby-typescript-tutorial/src/components/seo.tsx

      ...
      interface SEOProps {
        description?: string,
        lang?: string,
        meta?: Array<{name: string, content: string}>,
        title: string
      }
      
      function SEO({ description='', lang='en', meta=[], title }: SEOProps) {
        ...
      }
      

Here you set defaults for the SEO function arguments so that they adhere to the interface, and you annotate the destructured arguments with : SEOProps. Remember that you at least have to include title in the list of arguments passed to the SEO component, because it was defined as a required property in the SEOProps interface you defined earlier.

      Lastly, you can revise the metaDescription and defaultTitle constant declarations by setting their type, which is string in this case:

      gatsby-typescript-tutorial/src/components/seo.tsx

       ...
      function SEO({ description='', lang='en', meta=[], title }: SEOProps) {
        const { site } = useStaticQuery(
          graphql`
            query {
              site {
                siteMetadata {
                  title
                  description
                  author
                }
              }
            }
          `
        )
      
        const metaDescription: string = description || site.siteMetadata.description
        const defaultTitle: string = site.siteMetadata?.title
      ...
      

      Another type in TypeScript is the any type. For situations where you’re dealing with a variable whose type is unclear or difficult to define, use any as a last resort to avoid any build-time errors.

      An example of using the any type is when dealing with data fetched from a third-party, like an API request or a GraphQL query. In the seo.tsx file, where the destructured site property is defined with a GraphQL static query, set its type to any:

      gatsby-typescript-tutorial/src/components/seo.tsx

      ...
      interface SEOProps {
        description?: string,
        lang?: string,
        meta?: Array<{name: string, content: string}>,
        title: string
      }
      
function SEO({ description='', lang='en', meta=[], title }: SEOProps) {
        const { site }: any = useStaticQuery(
          graphql`
            query {
              site {
                siteMetadata {
                  title
                  description
                  author
                }
              }
            }
          `
        )
        ...
      }
      

      Save and exit the file.

      It’s important to always keep the defined values consistent with their type. Otherwise, you will see build-time errors appear via the TypeScript compiler.

      Build-Time Errors

It will be helpful to become accustomed to the errors TypeScript will catch and report at build-time. The idea is that TypeScript catches these errors, mostly type-related, at build-time, which cuts down on the amount of debugging at runtime in the long run.

One example of a build-time error occurs when you declare a variable of one type but assign it a value of another type. If you were to change the value of one of the keyword arguments passed to the SEO component to one of a different type, the TypeScript compiler would detect the inconsistency and report the error. The following is an image of what this looks like in VSCode:

      A build-time error in VSCode when the description variable is set to a number.

The error says Type 'number' is not assignable to type 'string'. This is because, when you set up your interface, you said the description property would be of type string, and the value 0 is of type number. If you change the value of description back to a string, the error message will go away.
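
For example, a hypothetical call like the following, with description set to a number, would trigger that exact error:

// Error: Type 'number' is not assignable to type 'string'.
<SEO title="Home" description={0} />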

      Step 5 — Refactoring the Rest of the Boilerplate

      Lastly, you will refactor the remaining boilerplate files with TypeScript: layout.tsx, image.tsx, and header.tsx. Like seo.tsx, these component files are located in the src/components directory.

Open src/components/layout.tsx. Toward the bottom is the Layout.propTypes definition. Delete the following lines:

      gatsby-typescript-tutorial/src/components/layout.tsx

       import React from "react"
      import PropTypes from "prop-types"
      import { useStaticQuery, graphql } from "gatsby"
      ...
      
      Layout.propTypes = {
        children: PropTypes.node.isRequired,
      }
      
      export default Layout
      

The deleted code declared the children prop as a required prop of type node, per the PropTypes class. Since the children in the layout could be anything from simple text to React child components, use ReactNode as the associated type by importing it near the top and adding it to an interface.

Add the following lines:

      gatsby-typescript-tutorial/src/components/layout.tsx

      ...
      import React, { ReactNode } from "react"
      import { useStaticQuery, graphql } from "gatsby"
      
      import Header from "./header"
      import "./layout.css"
      
      interface LayoutProps {
        children: ReactNode
      }
      
      const Layout = ({ children }: LayoutProps) => {
        ...
      

      Next, add a type to the data variable that stores a GraphQL query that fetches site title data. Since this query object is coming from a third-party entity like GraphQL, give data an any type. Lastly, add the string type to the siteTitle variable that works with that data:

      gatsby-typescript-tutorial/src/components/layout.tsx

       ...
      const Layout = ({ children }: LayoutProps) => {
        const data: any = useStaticQuery(graphql`
        query MyQuery {
          site {
            siteMetadata {
              title
            }
          }
        }
      `)
      
      const siteTitle: string = data.site.siteMetadata?.title || `Title`
      
        return (
          <>
            <Header siteTitle={siteTitle} />
            <div
      ...
      

      Save and close the file.

      Next, open the src/components/image.tsx file.

      Here you are dealing with a similar situation as layout.tsx. There is a data variable that stores a GraphQL query that could have an any type. The image fluid data that is passed into the fluid attribute of the <Img /> component could be separated from the return statement into its own variable. It’s also a complex variable like data, so give this an any type as well:

      gatsby-typescript-tutorial/src/components/image.tsx

      ...
      const Image = () => {
        const data: any = useStaticQuery(graphql`
          query {
            placeholderImage: file(relativePath: { eq: "gatsby-astronaut.png" }) {
              childImageSharp {
                fluid(maxWidth: 300) {
                  ...GatsbyImageSharpFluid
                }
              }
            }
          }
        `)
      
        if (!data?.placeholderImage?.childImageSharp?.fluid) {
          return <div>Picture not found</div>
        }
      
        const imageFluid: any = data.placeholderImage.childImageSharp.fluid
      
        return <Img fluid={imageFluid} />
      }
      
      export default Image
      

      Save and close the file.

Now open the src/components/header.tsx file. This file also comes with predefined prop types using the PropTypes class. Like seo.tsx and layout.tsx, replace Header.defaultProps and Header.propTypes with an interface using the same prop names:

      gatsby-typescript-tutorial/src/components/header.tsx

      import { Link } from "gatsby"
      import React from "react"
      
      interface HeaderProps {
        siteTitle: string
      }
      
      const Header = ({ siteTitle }: HeaderProps) => (
        <header
          style={{
            background: `rebeccapurple`,
            marginBottom: `1.45rem`,
          }}
        >
          <div
            style={{
              margin: `0 auto`,
              maxWidth: 960,
              padding: `1.45rem 1.0875rem`,
            }}
          >
            <h1 style={{ margin: 0 }}>
              <Link
                to="/"
                style={{
                  color: `white`,
                  textDecoration: `none`,
                }}
              >
                {siteTitle}
              </Link>
            </h1>
          </div>
        </header>
      )
      
      export default Header
      

      Save and close the file.

With your files refactored for TypeScript, you can now restart the server to make sure everything is working. Run the following command:

• gatsby develop

      When you navigate to localhost:8000, your browser will render the following:

      Gatsby Default Development page

      Conclusion

TypeScript’s static-typing capabilities go a long way in keeping debugging at a minimum. It’s also a great language for Gatsby sites, since it’s supported by default. Gatsby itself is a useful front-end tool for creating static sites, such as landing pages.

      You now have two popular tools at your disposal. To learn more about TypeScript and all you can do with it, head over to the official TypeScript handbook.




      How To Deploy Multiple Environments in Your Terraform Project Without Duplicating Code


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Terraform offers advanced features that become increasingly useful as your project grows in size and complexity. It’s possible to alleviate the cost of maintaining complex infrastructure definitions for multiple environments by structuring your code to minimize repetitions and introducing tool-assisted workflows for easier testing and deployment.

Terraform associates a state with a backend, which determines where and how state is stored and retrieved. Every state has only one backend and is tied to an infrastructure configuration. Certain backends, such as local or s3, may contain multiple states. In that case, each pairing of a state with its infrastructure configuration inside the backend describes a workspace. Workspaces allow you to deploy multiple distinct instances of the same infrastructure configuration without storing them in separate backends.
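
For example, a minimal sketch of an s3 backend definition capable of holding multiple workspace states, with placeholder bucket and key values, looks like this:

terraform {
  backend "s3" {
    bucket = "example-terraform-states"  # placeholder bucket name
    key    = "project/terraform.tfstate" # object path for the state
    region = "us-east-1"                 # region the bucket lives in
  }
}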

      In this tutorial, you’ll first deploy multiple infrastructure instances using different workspaces. You’ll then deploy a stateful resource, which, in this tutorial, will be a DigitalOcean Volume. Finally, you’ll reference pre-made modules from the Terraform Registry, which you can use to supplement your own.

      Prerequisites

      • A DigitalOcean Personal Access Token, which you can create via the DigitalOcean Control Panel. You can find instructions for this in the How to Generate a Personal Access Token tutorial.
      • Terraform installed on your local machine and a project set up with the DO provider. Complete Step 1 and Step 2 of the How To Use Terraform with DigitalOcean tutorial, and be sure to name the project folder terraform-advanced, instead of loadbalance. During Step 2, do not include the pvt_key variable and the SSH key resource.

      Note: We have specifically tested this tutorial using Terraform 0.13.

      Deploying Multiple Infrastructure Instances Using Workspaces

      Multiple workspaces are useful when you want to deploy or test a modified version of your main infrastructure without creating a separate project and setting up authentication keys again. Once you have developed and tested a feature using the separate state, you can incorporate the new code into the main workspace and possibly delete the additional state. When you init a Terraform project, regardless of backend, Terraform creates a workspace called default. It is always present and you can never delete it.

However, multiple workspaces are not a suitable solution for creating multiple environments, such as staging and production. This is because workspaces only track the state; they do not store the code or its modifications.

      Since workspaces do not track the actual code, you should manage the code separation between multiple workspaces at the version control (VCS) level by matching them to their infrastructure variants. How you can achieve this is dependent on the VCS tool itself; for example, in Git branches would be a fitting abstraction. To make it easier to manage the code for multiple environments, you can break them up into reusable modules, so that you avoid repeating similar code for each environment.
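
For example, a minimal sketch of pairing a Git branch with a matching workspace, using an illustrative add-volumes feature name, would be:

• git checkout -b add-volumes
• terraform workspace new add-volumes

Work committed on the add-volumes branch is then planned and applied against the add-volumes state, keeping both the code and the state of the variant separate from the main deployment.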

      Deploying Resources in Workspaces

      You’ll now create a project that deploys a Droplet, which you’ll apply from multiple workspaces.

      You’ll store the Droplet definition in a file called droplets.tf.

Assuming you’re in the terraform-advanced directory, create and open it for editing by running:

• nano droplets.tf

      Add the following lines:

      droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web-${terraform.workspace}"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      

      This definition will create a Droplet running Ubuntu 18.04 with one CPU core and 1 GB RAM in the fra1 region. Its name will contain the name of the current workspace it is deployed from. When you’re done, save and close the file.

      Apply the project for Terraform to run its actions with:

      • terraform apply -var "do_token=${DO_PAT}"

      Your output will be similar to the following:

      Output

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.web will be created
  + resource "digitalocean_droplet" "web" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + id                   = (known after apply)
      + image                = "ubuntu-18-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + ipv6_address_private = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "web-default"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "fra1"
      + resize_disk          = true
      + size                 = "s-1vcpu-1gb"
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
...

      Enter yes when prompted to deploy the Droplet in the default workspace.

The name of the Droplet will be web-default, because the workspace you start with is called default. You can list the workspaces to confirm that it’s the only one available:

• terraform workspace list

      You’ll receive the following output:

      Output

      * default

      The asterisk (*) means that you currently have that workspace selected.

      Create and switch to a new workspace called testing, which you’ll use to deploy a different Droplet, by running workspace new:

      • terraform workspace new testing

      You’ll have output similar to:

      Output

Created and switched to workspace "testing"!

You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.

Plan the deployment of the Droplet again by running:

      • terraform plan -var "do_token=${DO_PAT}"

      The output will be similar to the previous run:

      Output

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.web will be created
  + resource "digitalocean_droplet" "web" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + id                   = (known after apply)
      + image                = "ubuntu-18-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + ipv6_address_private = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "web-testing"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "fra1"
      + resize_disk          = true
      + size                 = "s-1vcpu-1gb"
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
...

      Notice that Terraform plans to deploy a Droplet called web-testing, which it has named differently from web-default. This is because the default and testing workspaces have separate states and have no knowledge of each other’s resources—even though they stem from the same code.
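
Because this project uses the local backend, you can see this isolation on disk: Terraform keeps the state of every workspace other than default in its own subdirectory under terraform.tfstate.d. Listing that directory shows one entry per additional workspace:

• ls terraform.tfstate.d

Output

testing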

To confirm that you’re in the testing workspace, output the current one with workspace show:

• terraform workspace show

      The output will be the name of the current workspace:

      Output

      testing

      To delete a workspace, you first need to destroy all its deployed resources. Then, if it’s active, you need to switch to another one using workspace select. Since the testing workspace here is empty, you can switch to default right away:

      • terraform workspace select default

You’ll receive output from Terraform confirming the switch:

      Output

      Switched to workspace "default".

      You can then delete it by running workspace delete:

      • terraform workspace delete testing

      Terraform will then perform the deletion:

      Output

      Deleted workspace "testing"!

      You can destroy the Droplet you’ve deployed in the default workspace by running:

      • terraform destroy -var "do_token=${DO_PAT}"

      Enter yes when prompted to finish the process.

      In this section, you’ve worked in multiple Terraform workspaces. In the next section, you’ll deploy a stateful resource.

      Deploying Stateful Resources

      Stateless resources do not store data, so you can create and replace them quickly, because they are not unique. Stateful resources, on the other hand, contain data that is unique or not simply re-creatable; therefore, they require persistent data storage.

Since you may end up destroying such resources, or since multiple resources may require the same data, it’s best to store the data in a separate entity, such as a DigitalOcean Volume.

      Volumes are objects that you can attach to Droplets (servers), but are separate from them, and provide additional storage space. In this step, you’ll define the Volume and connect it to a Droplet in droplets.tf.

Open it for editing:

• nano droplets.tf

      Add the following lines:

      droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web-${terraform.workspace}"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      
      resource "digitalocean_volume" "volume" {
        region                  = "fra1"
        name                    = "new-volume"
        size                    = 10
        initial_filesystem_type = "ext4"
        description             = "New Volume for Droplet"
      }
      
      resource "digitalocean_volume_attachment" "volume_attachment" {
        droplet_id = digitalocean_droplet.web.id
        volume_id  = digitalocean_volume.volume.id
      }
      

      Here you define two new resources, the Volume itself and a Volume attachment. The Volume will be 10GB, formatted as ext4, called new-volume, and located in the same region as the Droplet. To connect the Volume to the Droplet, since they are separate entities, you define a Volume attachment object. volume_attachment takes the Droplet and Volume IDs and instructs the DigitalOcean cloud to make the Volume available to the Droplet as a disk device.
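
If you’d like an easy way to verify the Volume after applying, one option, not part of the tutorial’s files, is to also add an output that exposes the Volume’s URN before you save:

output "volume_urn" {
  description = "The uniform resource name of the new Volume"
  value       = digitalocean_volume.volume.urn
}

After an apply, running terraform output volume_urn would then print the identifier DigitalOcean assigns to the Volume.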

      When you’re done, save and close the file.

      Plan this configuration by running:

      • terraform plan -var "do_token=${DO_PAT}"

      The actions that Terraform will plan will be the following:

      Output

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.web will be created
  + resource "digitalocean_droplet" "web" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + id                   = (known after apply)
      + image                = "ubuntu-18-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + ipv6_address_private = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "web-default"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "fra1"
      + resize_disk          = true
      + size                 = "s-1vcpu-1gb"
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

  # digitalocean_volume.volume will be created
  + resource "digitalocean_volume" "volume" {
      + description             = "New Volume for Droplet"
      + droplet_ids             = (known after apply)
      + filesystem_label        = (known after apply)
      + filesystem_type         = (known after apply)
      + id                      = (known after apply)
      + initial_filesystem_type = "ext4"
      + name                    = "new-volume"
      + region                  = "fra1"
      + size                    = 10
      + urn                     = (known after apply)
    }

  # digitalocean_volume_attachment.volume_attachment will be created
  + resource "digitalocean_volume_attachment" "volume_attachment" {
      + droplet_id = (known after apply)
      + id         = (known after apply)
      + volume_id  = (known after apply)
    }

Plan: 3 to add, 0 to change, 0 to destroy.
...

      The output details that Terraform would create a Droplet, a Volume, and a Volume attachment, which connects the Volume to the Droplet.

      You’ve now defined and connected a Volume (a stateful resource) to a Droplet. In the next section, you’ll review public, pre-made Terraform modules that you can incorporate in your project.

      Referencing Pre-made Modules

      Aside from creating your own custom modules for your projects, you can also use pre-made modules and providers from other developers, which are publicly available at Terraform Registry.

      In the modules section you can search the database of available modules and sort by provider in order to find the module with the functionality you need. Once you’ve found one, you can read its description, which lists the inputs and outputs the module provides, as well as its external module and provider dependencies.

      Terraform Registry - SSH key Module

You’ll now add the DigitalOcean SSH key module to your project. You’ll store the code separately from your existing definitions, in a file called ssh-key.tf. Create and open it for editing by running:

• nano ssh-key.tf

      Add the following lines:

      ssh-key.tf

      module "ssh-key" {
        source         = "clouddrove/ssh-key/digitalocean"
        key_path       = "~/.ssh/id_rsa.pub"
        key_name       = "new-ssh-key"
        enable_ssh_key = true
      }
      

This code defines an instance of the clouddrove/ssh-key/digitalocean module from the registry and sets some of the parameters it offers. It adds a public SSH key to your account by reading it from ~/.ssh/id_rsa.pub.

      When you’re done, save and close the file.

Before you plan this code, you must download the referenced module by running:

• terraform init

      You’ll receive output similar to the following:

      Output

Initializing modules...
Downloading clouddrove/ssh-key/digitalocean 0.13.0 for ssh-key...
- ssh-key in .terraform/modules/ssh-key

Initializing the backend...

Initializing provider plugins...
- Using previously-installed digitalocean/digitalocean v1.22.2

Terraform has been successfully initialized!
...

      You can now plan the code for the changes:

      • terraform plan -var "do_token=${DO_PAT}"

      You’ll receive output similar to this:

      Output

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

...
  # module.ssh-key.digitalocean_ssh_key.default[0] will be created
  + resource "digitalocean_ssh_key" "default" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "devops"
      + public_key  = "ssh-rsa ... demo@clouddrove"
    }

Plan: 4 to add, 0 to change, 0 to destroy.
...

      The output shows that you would create the SSH key resource, which means that you downloaded and invoked the module from your code.

      Conclusion

      Bigger projects can make use of some advanced features Terraform offers to help reduce complexity and make maintenance easier. Workspaces allow you to test new additions to your code without touching the stable main deployments. You can also couple workspaces with a version control system to track code changes. Using pre-made modules can also shorten development time, but may incur additional expenses or time in the future if the module becomes obsolete.

      For further resources on using Terraform, check out our How To Manage Infrastructure With Terraform series.


