
      July 2019

      Getting Started with Kubernetes: Use kubeadm to Deploy a Cluster on Linode


      Updated and contributed by Linode

      Linode offers several pathways for users to easily deploy a Kubernetes cluster. If you prefer the command line, you can create a Kubernetes cluster with one command using the Linode CLI’s k8s-alpha plugin and Terraform. Or, if you prefer a full-featured GUI, Linode’s Rancher integration enables you to deploy and manage Kubernetes clusters with a simple web interface. The Linode Kubernetes Engine, currently under development with an early access beta version on its way this summer, allows you to spin up a Kubernetes cluster with Linode handling the management and maintenance of your control plane. These are all great options for production-ready deployments.

      Kubeadm is a cloud provider agnostic tool that automates many of the tasks required to get a cluster up and running. Users of kubeadm can run a few simple commands on individual servers to turn them into a Kubernetes cluster consisting of a master node and worker nodes. This guide will walk you through installing kubeadm and using it to deploy a Kubernetes cluster on Linode. While the kubeadm approach requires more manual steps than other Kubernetes cluster creation pathways offered by Linode, this solution will be covered as a way to dive deeper into the various components that make up a Kubernetes cluster and the ways in which they interact with each other to provide a scalable and reliable container orchestration mechanism.

      Note

      This guide’s example instructions will result in the creation of three billable Linodes. Information on how to tear down the Linodes is provided at the end of the guide. Interacting with the Linodes via the command line provides the most opportunity for learning; however, this guide is written so that users can also benefit by reading along.

      Before You Begin

      1. Deploy three Linodes running Ubuntu 18.04 with the following system requirements:

        • One Linode to use as the master node with 4GB RAM and 2 CPU cores.
        • Two Linodes to use as the worker nodes, each with 1GB RAM and 1 CPU core.
      2. Follow the Getting Started and the Securing Your Server guides for instructions on setting up your Linodes. The steps in this guide assume the use of a limited user account with sudo privileges.

      Note

      When following the Getting Started guide, make sure that each Linode is using a different hostname. Not following this guideline will leave you unable to join some or all nodes to the cluster in a later step.
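
      For example, assuming the hostnames used in this guide’s example output (kube-master, kube-node-1, and kube-node-2), you could set each Linode’s hostname with hostnamectl, substituting the appropriate name on each machine:

        sudo hostnamectl set-hostname kube-master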
      3. Disable swap memory on your Linodes. Kubernetes requires that you disable swap memory on any cluster nodes to prevent the Kubernetes scheduler (kube-scheduler) from ever sending a pod to a node that has run out of CPU/memory or reached its designated CPU/memory limit.

        sudo swapoff -a
        

        Verify that your swap has been disabled. You should expect to see a value of 0 returned.

        cat /proc/meminfo | grep 'SwapTotal'
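
        With swap disabled, the SwapTotal line should read:

          SwapTotal:             0 kB

        Note that swapoff -a only disables swap until the next reboot. To make the change persistent, also remove or comment out any swap entries in /etc/fstab, for example with this sketch of a sed one-liner (assuming a standard fstab layout):

          sudo sed -i '/ swap / s/^/#/' /etc/fstab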
        

        To learn more about managing compute resources for containers, see the official Kubernetes documentation.

      4. Read the Beginner’s Guide to Kubernetes to familiarize yourself with the major components and concepts of Kubernetes. The current guide assumes a working knowledge of common Kubernetes concepts and terminology.

      Build a Kubernetes Cluster

      Kubernetes Cluster Architecture

      A Kubernetes cluster consists of a master node and worker nodes. The master node hosts the control plane, which is the combination of all the components that provide it the ability to maintain the desired cluster state. This cluster state is defined by manifest files and the kubectl tool. While the control plane components can be run on any cluster node, it is a best practice to isolate the control plane on its own node and to run any application containers on a separate worker node. A cluster can have a single worker node or up to 5000. Each worker node must be able to maintain running containers in a pod and be able to communicate with the master node’s control plane.

      Below is the Kubernetes tooling you will need to install on your master and worker nodes in order to meet the minimum requirements for a functioning Kubernetes cluster, as described above.

      • kubeadm (master and worker nodes): This tool provides a simple way to create a Kubernetes cluster by automating the tasks required to get a cluster up and running. New Kubernetes users with access to a cloud hosting provider, like Linode, can use kubeadm to build out a playground cluster. kubeadm is also used as a foundation to create more mature Kubernetes deployment tooling.
      • Container runtime (master and worker nodes): A container runtime is responsible for running the containers that make up a cluster’s pods. This guide will use Docker as the container runtime.
      • kubelet (master and worker nodes): kubelet ensures that all pod containers running on a node are healthy and meet the specifications for a pod’s desired behavior.
      • kubectl (master and worker nodes): A command line tool used to manage a Kubernetes cluster.
      • Control plane (master node only): The set of services that form the Kubernetes master structure and allow it to control the cluster. kubeadm runs the control plane services as containers on the master node. The control plane will be created when you initialize kubeadm later in this guide.

      Install the Container Runtime: Docker

      Docker is the software responsible for running the pod containers on each node. You can use other container runtime software with Kubernetes, such as Containerd and CRI-O. You will need to install Docker on all three Linodes.

      These steps install Docker Community Edition (CE) using the official Ubuntu repositories. To install on another distribution, see the official installation page.

      1. Remove any older installations of Docker that may be on your system:

        sudo apt remove docker docker-engine docker.io
        
      2. Make sure you have the necessary packages to allow the use of Docker’s repository:

        sudo apt install apt-transport-https ca-certificates curl software-properties-common
        
      3. Add Docker’s GPG key:

        curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
        
      4. Verify the fingerprint of the GPG key:

        sudo apt-key fingerprint 0EBFCD88
        

        You should see output similar to the following:

          
        pub   4096R/0EBFCD88 2017-02-22
                Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
        uid                  Docker Release (CE deb) <docker@docker.com>
        sub   4096R/F273FCD8 2017-02-22
        
        
      5. Add the stable Docker repository:

        sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
        
      6. Update your package index and install Docker CE:

        sudo apt update
        sudo apt install docker-ce
        
      7. Add your limited Linux user account to the docker group. Replace $USER with your username:

        sudo usermod -aG docker $USER
        

        Note

        After entering the usermod command, you will need to close your SSH session and open a new one for this change to take effect.

      8. Check that the installation was successful by running the built-in “Hello World” program:

        sudo docker run hello-world
        
      9. Set up the Docker daemon to use systemd as the cgroup driver, instead of the default cgroupfs. This is a recommended step so that the kubelet and Docker both use the same cgroup manager, which makes it easier for Kubernetes to know which resources are available on your cluster’s nodes.

        sudo bash -c 'cat > /etc/docker/daemon.json <<EOF
        {
          "exec-opts": ["native.cgroupdriver=systemd"],
          "log-driver": "json-file",
          "log-opts": {
            "max-size": "100m"
          },
          "storage-driver": "overlay2"
        }
        EOF'
        
      10. Create a systemd directory for Docker:

        sudo mkdir -p /etc/systemd/system/docker.service.d
        
      11. Restart Docker:

        sudo systemctl daemon-reload
        sudo systemctl restart docker
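
        To confirm that Docker is now using the systemd cgroup driver, check the output of docker info:

        docker info | grep -i cgroup

        The output should include the line Cgroup Driver: systemd.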
        

      Install kubeadm, kubelet, and kubectl

      Complete the steps outlined in this section on all three Linodes.

      1. Update the system and install the dependencies required for installation:

        sudo apt-get update && sudo apt-get install -y apt-transport-https curl
        
      2. Add the required GPG key to your apt-sources keyring to authenticate the Kubernetes related packages you will install:

        curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
        
      3. Add Kubernetes to the package manager’s list of sources:

        sudo bash -c "cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
        deb https://apt.kubernetes.io/ kubernetes-xenial main
        EOF"
        
      4. Update apt, install kubeadm, kubelet, and kubectl, and hold the packages at their installed versions:

        sudo apt-get update
        sudo apt-get install -y kubelet kubeadm kubectl
        sudo apt-mark hold kubelet kubeadm kubectl
        
      5. Verify that kubeadm, kubelet, and kubectl are installed by retrieving their version information. Each command should return version information for its package. Note that kubectl version also attempts to contact a cluster’s API server; before the cluster is initialized, an error for the server portion is expected.

        kubeadm version
        kubelet --version
        kubectl version
        

      Set up the Kubernetes Control Plane

      After installing the Kubernetes related tooling on all your Linodes, you are ready to set up the Kubernetes control plane on the master node. The control plane is responsible for allocating resources to your cluster, maintaining the health of your cluster, and ensuring that it meets the minimum requirements you designate for the cluster.

      The primary components of the control plane are the kube-apiserver, kube-controller-manager, kube-scheduler, and etcd. kubeadm provides a way to easily initialize the Kubernetes master node with all the necessary control plane components. For more information on each control plane component, see the Beginner’s Guide to Kubernetes.

      In addition to the baseline control plane components, there are several add-ons that can be installed on the master node to access additional cluster features. You will need to install a networking and network policy provider add-on that will implement Kubernetes’ network model on the cluster’s pod network.

      This guide will use Calico as the pod network add-on. Calico is a secure and open source L3 networking and network policy provider for containers. There are several other network and network policy providers to choose from. To view a full list of providers, refer to the official Kubernetes documentation.

      Note

      kubeadm only supports Container Network Interface (CNI) based networks. CNI consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers.

      1. Initialize kubeadm on the master node. This command will run checks against the node to ensure it contains all required Kubernetes dependencies. If the checks pass, it will then install the control plane components.

        When issuing this command, it is necessary to set the pod network range that Calico will use to allow your pods to communicate with each other. It is recommended to use the private IP address space, 10.2.0.0/16.

        Note

        The pod network IP range should not overlap with the service IP network range. The default service IP address range is 10.96.0.0/12. You can provide an alternative service IP address range using the --service-cidr=10.97.0.0/12 option when initializing kubeadm. Replace 10.97.0.0/12 with the desired service IP range.

        For a full list of available kubeadm initialization options, see the official Kubernetes documentation.

        sudo kubeadm init --pod-network-cidr=10.2.0.0/16
        

        You should see a similar output:

          
        Your Kubernetes control-plane has initialized successfully!
        
        To start using your cluster, you need to run the following as a regular user:
        
          mkdir -p $HOME/.kube
          sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
          sudo chown $(id -u):$(id -g) $HOME/.kube/config
        
        You should now deploy a pod network to the cluster.
        Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
          https://kubernetes.io/docs/concepts/cluster-administration/addons/
        
        Then you can join any number of worker nodes by running the following on each as root:
        
        kubeadm join 192.0.2.0:6443 --token udb8fn.nih6n1f1aijmbnx5 \
            --discovery-token-ca-cert-hash sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26
              
        

        The kubeadm join command will be used in the Join a Worker Node to the Cluster section of this guide to bootstrap the worker nodes to the Kubernetes cluster. This command should be kept handy for later use. Below is a description of the required options you will need to pass in with the kubeadm join command:

        • The master node’s IP address and the Kubernetes API server’s port number. In the example output, this is 192.0.2.0:6443. The Kubernetes API server’s port number is 6443 by default on all Kubernetes installations.
        • A bootstrap token. The bootstrap token has a 24-hour TTL (time to live). A new bootstrap token can be generated if your current token expires.
        • A CA key hash. This is used to verify the authenticity of the data retrieved from the Kubernetes API server during the bootstrap process.
      2. Copy the admin.conf configuration file to your limited user account. This file contains a description of the cluster, users, and contexts, and copying it allows you to communicate with your cluster via kubectl with administrative (superuser) privileges.

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
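
        To confirm that kubectl can communicate with the cluster using the copied configuration, retrieve the cluster state:

        kubectl cluster-info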
        
      3. Install the necessary Calico manifests to your master node and apply them using kubectl. The first file, rbac-kdd.yaml, works with Kubernetes’ role-based access control (RBAC) to provide Calico components access to necessary parts of the Kubernetes API. The second file, calico.yaml, configures a self-hosted Calico installation that uses the Kubernetes API directly as the datastore (instead of etcd).

        kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
        kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
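
        You can watch the Calico pods, along with the rest of the control plane components, start up by listing the pods in the kube-system namespace in watch mode (press Ctrl+C to exit):

        kubectl get pods -n kube-system --watch

        Once the calico-node and coredns pods reach the Running status, the master node will report a Ready status.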
        

      Inspect the Master Node with Kubectl

      After completing the previous section, your Kubernetes master node is ready with all the necessary components to manage a cluster. To gain a better understanding of all the parts that make up the master’s control plane, this section will walk you through inspecting your master node. If you have not yet reviewed the Beginner’s Guide to Kubernetes, it will be helpful to do so prior to continuing with this section as it relies on the understanding of basic Kubernetes concepts.

      1. View the current state of all nodes in your cluster. At this stage, the only node you should expect to see is the master node, since worker nodes have yet to be bootstrapped. A STATUS of Ready indicates that the master node contains all necessary components, including the pod network add-on, to start managing clusters.

        kubectl get nodes
        

        Your output should resemble the following:

          
        NAME          STATUS   ROLES    AGE   VERSION
        kube-master   Ready    master   1h    v1.14.1
            
        
      2. Inspect the available namespaces in your cluster.

        kubectl get namespaces
        

        Your output should resemble the following:

          
        NAME              STATUS   AGE
        default           Active   23h
        kube-node-lease   Active   23h
        kube-public       Active   23h
        kube-system       Active   23h
            
        

        Below is an overview of each namespace installed by default on the master node by kubeadm:

        • default: The default namespace contains objects with no other assigned namespace. By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of Pods, Services, and Deployments used by the cluster.
        • kube-system: The namespace for objects created by the Kubernetes system. This includes all resources used by the master node.
        • kube-public: This namespace is created automatically and is readable by all users. It contains information, like certificate authority data (CA), that helps kubeadm join and authenticate worker nodes.
        • kube-node-lease: The kube-node-lease namespace contains lease objects that are used by kubelet to determine node health. kubelet creates and periodically renews a Lease on a node. The node lifecycle controller treats this lease as a health signal. kube-node-lease was released to beta in Kubernetes 1.14.
      3. View all resources available in the kube-system namespace. The kube-system namespace contains the widest range of resources, since it houses all control plane resources. Replace kube-system with another namespace to view its corresponding resources.

        kubectl get all -n kube-system
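
        To also see which node each of these resources is running on (at this stage, everything is scheduled on the master node), add the -o wide flag:

        kubectl get all -n kube-system -o wide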
        

      Join a Worker Node to the Cluster

      Now that your Kubernetes master node is set up, you can join worker nodes to your cluster. In order for a worker node to join a cluster, it must trust the cluster’s control plane, and the control plane must trust the worker node. This trust is managed via a shared bootstrap token and a certificate authority (CA) key hash. kubeadm handles the exchange between the control plane and the worker node. At a high-level the worker node bootstrap process is the following:

      1. kubeadm retrieves information about the cluster from the Kubernetes API server. The bootstrap token and CA key hash are used to ensure the information originates from a trusted source.

      2. kubelet can take over and begin the bootstrap process, since it has the necessary cluster information retrieved in the previous step. The bootstrap token is used to gain access to the Kubernetes API server and submit a certificate signing request (CSR), which is then signed by the control plane.

      3. The worker node’s kubelet is now able to connect to the Kubernetes API server using the node’s established identity.

      Before continuing, you will need to make sure that you know your Kubernetes API server’s IP address, that you have a bootstrap token, and a CA key hash. This information was provided when kubeadm was initialized on the master node in the Set up the Kubernetes Control Plane section of this guide. If you no longer have this information, you can regenerate the necessary information from the master node.


      Regenerate a Bootstrap Token

      These commands should be issued from your master node.

      1. Generate a new bootstrap token and display the kubeadm join command with the necessary options to join a worker node to the master node’s control plane:

        kubeadm token create --print-join-command
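
        The output is a complete kubeadm join command with a fresh token, similar to the following (the token below is a placeholder):

        kubeadm join 192.0.2.0:6443 --token b0f7b8.8d1767876297d85c --discovery-token-ca-cert-hash sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26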
        

      Follow the steps below on each node you would like to bootstrap to the cluster as a worker node.

      1. SSH into the Linode that will be used as a worker node in the Kubernetes cluster. Replace username with your limited user account and 192.0.2.1 with the worker node’s IP address:

        ssh username@192.0.2.1
        
      2. Join the node to your cluster using kubeadm. Ensure you replace 192.0.2.0:6443 with the IP address for your master node along with its Kubernetes API server’s port number, udb8fn.nih6n1f1aijmbnx5 with your bootstrap token, and sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26 with your CA key hash. The bootstrap process will take a few moments.

        sudo kubeadm join 192.0.2.0:6443 --token udb8fn.nih6n1f1aijmbnx5 \
        --discovery-token-ca-cert-hash sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26
        

        When the bootstrap process has completed, you should see a similar output:

          
          This node has joined the cluster:
        * Certificate signing request was sent to apiserver and a response was received.
        * The Kubelet was informed of the new secure connection details.
        
        Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
              
        
      3. Repeat the steps outlined above on the second worker node to bootstrap it to the cluster.

      4. SSH into the master node and verify the worker nodes have joined the cluster:

         kubectl get nodes
        

        You should see a similar output.

          
        NAME          STATUS   ROLES    AGE     VERSION
        kube-master   Ready    master   1d22h   v1.14.1
        kube-node-1   Ready    <none>   1d22h   v1.14.1
        kube-node-2   Ready    <none>   1d22h   v1.14.1
              
        

      Next Steps

      Now that you have a Kubernetes cluster up and running, you can begin experimenting with the various ways to configure pods, group resources, and deploy services that are exposed to the public internet. To help you get started with this, move on to follow along with the Deploy a Static Site on Linode using Kubernetes guide.

      Tear Down Your Cluster

      If you are done experimenting with your Kubernetes Cluster, be sure to remove the Linodes you have running in order to avoid being further billed for them. See the Removing Services section of the Billing and Payments guide.


      Create and Deploy a Docker Container Image to a Kubernetes Cluster


      Updated and contributed by Linode

      Kubernetes and Docker

      Kubernetes is a system that automates the deployment, scaling, and management of containerized applications. Containerizing an application requires a base image that can be used to create an instance of a container. Once an application’s image exists, you can push it to a centralized container registry that Kubernetes can use to deploy container instances in a cluster’s pods.

      While Kubernetes supports several container runtimes, Docker is a very popular choice. Docker images are created using a Dockerfile that contains all commands, in their required order of execution, needed to build a given image. For example, a Dockerfile might contain instructions to install a specific operating system referencing another image, install an application’s dependencies, and execute configuration commands in the running container.

      Docker Hub is a centralized container image registry that can host your images and make them available for sharing and deployment. You can also find and use official Docker images and vendor specific images. When combined with a remote version control service, like GitHub, Docker Hub allows you to automate building container images and trigger actions for further automation with other services and tooling.

      Scope of This Guide

      This guide will show you how to package a Hugo static site in a Docker container image, host the image on Docker Hub, and deploy the container image on a Kubernetes cluster running on Linode. This example is meant to demonstrate how applications can be containerized using Docker to leverage the deployment and scaling power of Kubernetes.

      Hugo is written in Go and is known for being extremely fast at compiling sites, even very large ones. It is well-supported, well-documented, and has an active community. Some useful Hugo features include shortcodes, which are an easy way to include predefined templates inside of your Markdown, and a built-in LiveReload web server, which allows you to preview your site changes locally as you make them.

      Note

      This guide was written using kubectl version 1.14.

      Before You Begin

      1. Create a Kubernetes cluster with one worker node. This can be done in two ways:

        1. Deploy a Kubernetes cluster using kubeadm.
          • You will need to deploy two Linodes. One will serve as the master node and the other will serve as a worker node.
        2. Deploy a Kubernetes cluster using k8s-alpha CLI.
      2. Create a GitHub account if you don’t already have one.

      3. Create a Docker Hub account if you don’t already have one.

      Set up the Development Environment

      Development of your Hugo site and Docker image will take place locally on your personal computer. You will need to install Hugo, Docker CE, and Git, a version control software, on your personal computer to get started.

      1. Use the How to Install Git on Linux, Mac or Windows guide for the steps needed to install Git.

      2. Install Hugo. Hugo’s official documentation contains more information on installation methods, like Installing Hugo from Tarball. Below are installation instructions for common operating systems:

        • Debian/Ubuntu:

          sudo apt-get install hugo
          
        • Fedora, Red Hat and CentOS:

          sudo dnf install hugo
          
        • Mac, using Homebrew:

          brew install hugo
          
      3. These steps install Docker Community Edition (CE) using the official Ubuntu repositories. To install on another distribution, see the official installation page.

        1. Remove any older installations of Docker that may be on your system:

          sudo apt remove docker docker-engine docker.io
          
        2. Make sure you have the necessary packages to allow the use of Docker’s repository:

          sudo apt install apt-transport-https ca-certificates curl software-properties-common
          
        3. Add Docker’s GPG key:

          curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
          
        4. Verify the fingerprint of the GPG key:

          sudo apt-key fingerprint 0EBFCD88
          

          You should see output similar to the following:

            
          pub   4096R/0EBFCD88 2017-02-22
                  Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
          uid                  Docker Release (CE deb) <docker@docker.com>
          sub   4096R/F273FCD8 2017-02-22
          
          
        5. Add the stable Docker repository:

          sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
          

          Note

          For Ubuntu 19.04, if you get an E: Package 'docker-ce' has no installation candidate error, it is because the stable version of Docker for this release is not yet available. Therefore, you will need to use the edge / test repository.

          sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable edge test"
          
        6. Update your package index and install Docker CE:

          sudo apt update
          sudo apt install docker-ce
          
        7. Add your limited Linux user account to the docker group:

          sudo usermod -aG docker $USER
          

          Note

          After entering the usermod command, you will need to close your SSH session and open a new one for this change to take effect.

        8. Check that the installation was successful by running the built-in “Hello World” program:

          docker run hello-world
          

      Create a Hugo Site

      Initialize the Hugo Site

      In this section you will use the Hugo CLI (command line interface) to create your Hugo site and initialize a Hugo theme. Hugo’s CLI provides several useful commands for common tasks needed to build, configure, and interact with your Hugo site.

      1. Create a new Hugo site on your local computer. This command will create a folder named example-site and scaffold Hugo’s directory structure inside it:

        hugo new site example-site
        
      2. Move into your Hugo site’s root directory:

        cd example-site
        
      3. You will use Git to add a theme to your Hugo site’s directory. Initialize your Hugo site’s directory as a Git repository:

        git init
        
      4. Install the Ananke theme as a submodule of your Hugo site’s Git repository. Git submodules allow one Git repository to be stored as a subdirectory of another Git repository, while still being able to maintain each repository’s version control information separately. The Ananke theme’s repository will be located in the ~/example-site/themes/ananke directory of your Hugo site.

        git submodule add https://github.com/budparr/gohugo-theme-ananke.git themes/ananke
        

        Note

        Hugo has many available themes that can be installed as a submodule of your Hugo site’s directory.
      5. Add the theme to your Hugo site’s configuration file. The configuration file (config.toml) is located at the root of your Hugo site’s directory.

        echo 'theme = "ananke"' >> config.toml
        

      Add Content to the Hugo Site

      You can now begin to add content to your Hugo site. In this section you will add a new post to your Hugo site and generate the corresponding static file by building the Hugo site on your local computer.

      1. Create a new content file for your site. This command will generate a Markdown file with an auto-populated date and title:

        hugo new posts/my-first-post.md
        
      2. You should see a similar output. Note that the file is located in the content/posts/ directory of your Hugo site:

          
        /home/username/example-site/content/posts/my-first-post.md created
            
        
      3. Open the Markdown file in the text editor of your choice to begin modifying its content; you can copy and paste the example snippet into your file, which contains an updated front matter section at the top and some example Markdown body text.

        Set your desired value for title. Then, set the draft state to false and add your content below the --- in Markdown syntax, if desired:

        /home/username/example-site/content/posts/my-first-post.md
        
        ---
        title: "My First Post"
        date: 2019-05-07T11:25:11-04:00
        draft: false
        ---
        
        # Kubernetes Objects
        
        In Kubernetes, there are a number of objects that are abstractions of your Kubernetes system’s desired state. These objects represent your application, its networking, and disk resources – all of which together form your application. Kubernetes objects can describe:
        
        - Which containerized applications are running on the cluster
        - Application resources
        - Policies that should be applied to the application


        About front matter

        Front matter is a collection of metadata about your content, and it is embedded at the top of your file within opening and closing --- delimiters.

        Front matter is a powerful Hugo feature that provides a mechanism for passing data that is attached to a specific piece of content to Hugo’s rendering engine. Hugo accepts front matter in TOML, YAML, and JSON formats. In the example snippet, there is YAML front matter for the title, date, and draft state of the Markdown file. These variables will be referenced and displayed by your Hugo theme.
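
        For example, the same front matter expressed in TOML would use +++ delimiters instead of ---:

          +++
          title = "My First Post"
          date = 2019-05-07T11:25:11-04:00
          draft = false
          +++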

      4. Once you have added your content, you can preview your changes by building and serving the site using Hugo’s built-in webserver:

        hugo server
        
      5. You will see a similar output:

          
                           | EN
        +------------------+----+
          Pages            | 11
          Paginator pages  |  0
          Non-page files   |  0
          Static files     |  3
          Processed images |  0
          Aliases          |  1
          Sitemaps         |  1
          Cleaned          |  0
        
        Total in 7 ms
        Watching for changes in /home/username/example-site/{content,data,layouts,static,themes}
        Watching for config changes in /home/username/example-site/config.toml
        Serving pages from memory
        Running in Fast Render Mode. For full rebuilds on change: hugo server --disableFastRender
        Web Server is available at http://localhost:1313/ (bind address 127.0.0.1)
        Press Ctrl+C to stop
        
        
      6. The output will provide a URL to preview your site. Copy and paste the URL into a browser to access the site. In the above example Hugo’s web server URL is http://localhost:1313/.

      7. When you are happy with your site’s content you can build the site:

        hugo -v
        

        Hugo will generate your site’s static HTML files and store them in a public directory that it will create inside your project. The static files that are generated by Hugo are the files that will be served to the internet through your Kubernetes cluster.

      8. View the contents of your site’s public directory:

        ls public
        

        Your output should resemble the following example. When you built the site, the Markdown file you created and edited earlier was used to generate its corresponding static HTML file at public/posts/my-first-post/index.html.

          
          404.html    categories  dist        images      index.html  index.xml   posts       sitemap.xml tags
            
        

      Version Control the Site with Git

      The example Hugo site was initialized as a local Git repository in the previous section. You can now version control all content, theme, and configuration files with Git. Once you have used Git to track your local Hugo site files, you can easily push them to a remote Git repository, like GitHub or GitLab. Storing your Hugo site files on a remote Git repository opens up many possibilities for collaboration and automating Docker image builds. This guide will not cover automated builds, but you can learn more about it on Docker’s official documentation.

      1. Add a .gitignore file to your Git repository. Any files or directories added to the .gitignore file will not be tracked by Git. The Docker image you will create in the next section will handle building your static site files. For this reason it is not necessary to track the public directory and its content.

        echo 'public/' >> .gitignore
        
      2. Display the state of your current working directory (root of your Hugo site):

        git status
        
      3. Stage all your files to be committed:

        git add -A
        
      4. Commit all your changes and add a meaningful commit message:

        git commit -m 'Add content, theme, and config files.'
        

        Note

        Any time you complete work related to one logical change to the Hugo site, you should make sure you commit the changes to your Git repository. Keeping your commits attached to small changes makes it easier to understand the changes and to roll back to previous commits, if necessary. See the Getting Started with Git guide for more information.

      Create a Docker Image

      Create the Dockerfile

      A Dockerfile contains the steps needed to build a Docker image. The Docker image provides the minimum setup and configuration necessary to deploy a container that satisfies its specific use case. The Hugo site’s minimum Docker container configuration requirements are an operating system, Hugo, the Hugo site’s content files, and the NGINX web server.

      1. In your Hugo site’s root directory, create and open a file named Dockerfile using the text editor of your choice. Add the following content to the file. You can read the Dockerfile comments to learn what each command will execute in the Docker container.

        Dockerfile
        
        # Install the container's OS.
        FROM ubuntu:latest as HUGOINSTALL
        
        # Install Hugo.
        RUN apt-get update
        RUN apt-get install -y hugo
        
        # Copy the contents of the current working directory to the hugo-site
        # directory. The directory will be created if it doesn't exist.
        COPY . /hugo-site
        
        # Use Hugo to build the static site files.
        RUN hugo -v --source=/hugo-site --destination=/hugo-site/public
        
        # Install NGINX and deactivate NGINX's default index.html file.
        # Move the static site files to NGINX's html directory.
        # This directory is where the static site files will be served from by NGINX.
        FROM nginx:stable-alpine
        RUN mv /usr/share/nginx/html/index.html /usr/share/nginx/html/old-index.html
        COPY --from=HUGOINSTALL /hugo-site/public/ /usr/share/nginx/html/
        
        # The container will listen on port 80 using the TCP protocol.
        EXPOSE 80
            
      2. Add a .dockerignore file to your Hugo repository. It is important to ensure that your images are as small as possible to reduce the time it takes to build, pull, push, and deploy the container. The .dockerignore file excludes files and directories that are not necessary for the function of your container or that may contain sensitive information that you do not want included in the image. Since the Docker image will build the static Hugo site files, you can ignore the public/ directory. You can also exclude any Git related files and directories because they are not needed on the running container.

        echo -e "public/\n.git/\n.gitmodules/\n.gitignore" >> .dockerignore
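
        The resulting .dockerignore file should contain the following entries:

          public/
          .git/
          .gitmodules/
          .gitignore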
        
      3. Follow steps 2 through 4 in the Version Control the Site with Git section to add any new files created in this section to your local Git repository.

      Build the Docker Image

      You are now ready to build the Docker image. When Docker builds an image it incorporates the build context. A build context includes any files and directories located in the current working directory. By default, Docker assumes the current working directory is also the location of the Dockerfile.

      Note

      If you have not yet created a Docker Hub account, you will need to do so before proceeding with this section.
      1. Build the Docker image and add a tag mydockerhubusername/hugo-site:v1 to the image. Ensure you are in the root directory of your Hugo site. The tag will make it easy to reference a specific image version when creating your Kubernetes deployment manifest. Replace mydockerhubusername with your Docker Hub username and hugo-site with a Docker repository name you prefer.

        docker build -t mydockerhubusername/hugo-site:v1 .
        

        You should see a similar output. Portions of the output have been omitted for brevity:

          
        Sending build context to Docker daemon  3.307MB
        Step 1/9 : FROM ubuntu:latest as HUGOINSTALL
         ---> 94e814e2efa8
        Step 2/9 : RUN apt-get update
         ---> Using cache
         ---> e651df397e32
         ...
        
        Successfully built 50c590837916
        Successfully tagged mydockerhubusername/hugo-site:v1
            
        
      2. View all locally available Docker images:

        docker images
        

        You should see the mydockerhubusername/hugo-site image with the v1 tag listed in the output:

          
        REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
        mydockerhubusername/hugo-site   v1                  50c590837916        1 day ago           16.5MB
            
        


      Push your Hugo Site Repository to GitHub

      You can push your local Hugo site’s Git repository to GitHub in order to set up Docker automated builds. Docker automated builds will build an image using an external repository as the build context and automatically push the image to your Docker Hub repository. This step is not necessary to complete this guide.

      Host your Image on Docker Hub

      Hosting your Hugo site’s image on Docker Hub will enable you to use the image in a Kubernetes cluster deployment. You will also be able to share the image with collaborators and the rest of the Docker community.

      1. Log into your Docker Hub account via the command line on your local computer. Enter your username and password when prompted.

        docker login
        
      2. Push the local Docker image to Docker Hub. Replace mydockerhubusername/hugo-site:v1 with your image’s tag name.

        docker push mydockerhubusername/hugo-site:v1
        
      3. Navigate to Docker Hub to view your image on your account.

        The URL for your image repository should be similar to the following: https://cloud.docker.com/repository/docker/mydockerhubusername/hugo-site. Replace the username and repository name with your own.

      Configure your Kubernetes Cluster

      This section will use kubectl to configure and manage your Kubernetes cluster. If your cluster was deployed using kubeadm, you will need to log into your master node to execute the kubectl commands in this section. If, instead, you used the k8s-alpha CLI you can run all commands from your local computer.

      In this section, you will create namespace, deployment, and service manifest files for your Hugo site deployment and apply them to your cluster with kubectl. Each manifest file creates different resources on the Kubernetes API that are used to create and maintain the Hugo site’s pods on the worker nodes.

      Create the Namespace

      Namespaces provide a powerful way to logically partition your Kubernetes cluster and isolate components and resources to avoid collisions across the cluster. A common use-case is to encapsulate dev/testing/production environments with namespaces so that they can each utilize the same resource names across each stage of development.

      Namespaces add a layer of complexity to a cluster that may not always be necessary. It is important to keep this in mind when formulating the architecture for a project’s application. This example will create a namespace for demonstration purposes, but it is not a requirement. One situation where a namespace would be beneficial, in the context of this guide, would be if you were a developer and wanted to manage Hugo sites for several clients with a single Kubernetes cluster.

      1. Create a directory to store your Hugo site’s manifest files.

        mkdir -p clientx/k8s-hugo/
        
      2. Create the manifest file for your Hugo site’s namespace with the following content:

        clientx/k8s-hugo/ns-hugo-site.yaml
        
        apiVersion: v1
        kind: Namespace
        metadata:
          name: hugo-site
              
        • The manifest file declares the version of the API in use, the kind of resource that is being defined, and metadata about the resource. All manifest files should provide this information.
        • The key-value pair name: hugo-site defines the namespace object’s unique name.
      3. Create the namespace from the ns-hugo-site.yaml manifest.

        kubectl create -f clientx/k8s-hugo/ns-hugo-site.yaml
        
      4. View all available namespaces in your cluster:

        kubectl get namespaces
        

        You should see the hugo-site namespace listed in the output:

          
        NAME          STATUS   AGE
        default       Active   1d
        hugo-site     Active   1d
        kube-public   Active   1d
        kube-system   Active   1d
            
        

      Create the Service

      The service will group together all pods for the Hugo site, expose the same port on all pods to the internet, and load balance site traffic between all pods. It is best to create a service prior to any controllers (like a deployment) so that the Kubernetes scheduler can distribute the pods for the service as they are created by the controller.

      The Hugo site’s service manifest file will use the NodePort method to get external traffic to the Hugo site service. NodePort opens a specific port on all the Nodes and any traffic that is sent to this port is forwarded to the service. Kubernetes will choose the port to open on the nodes if you do not provide one in your service manifest file. It is recommended to let Kubernetes handle the assignment. Kubernetes will choose a port in the default range, 30000-32767.

      Note

      The k8s-alpha CLI creates clusters that are pre-configured with useful Linode service integrations, like the Linode Cloud Controller Manager (CCM) which provides access to Linode’s load balancer service, NodeBalancers. In order to use Linode’s NodeBalancers you can use the LoadBalancer service type instead of NodePort in your Hugo site’s service manifest file. For more details, see the Kubernetes Cloud Controller Manager for Linode GitHub repository.
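
      For reference, a minimal sketch of the Hugo site’s service manifest using the LoadBalancer type (assuming the Linode CCM is installed on your cluster) differs from the NodePort manifest below only in its type field:

        apiVersion: v1
        kind: Service
        metadata:
          name: hugo-site
          namespace: hugo-site
        spec:
          selector:
            app: hugo-site
          ports:
          - protocol: TCP
            port: 80
            targetPort: 80
          type: LoadBalancer
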
      1. Create the manifest file for your service with the following content.

        clientx/k8s-hugo/service-hugo.yaml
        
        apiVersion: v1
        kind: Service
        metadata:
          name: hugo-site
          namespace: hugo-site
        spec:
          selector:
            app: hugo-site
          ports:
          - protocol: TCP
            port: 80
            targetPort: 80
          type: NodePort
            
        • The spec key defines the Hugo site service object’s desired behavior. It will create a service that exposes TCP port 80 on any pod with the app: hugo-site label.
        • The exposed container port is defined by the targetPort: 80 key-value pair.
      2. Create the service for your hugo site:

        kubectl create -f clientx/k8s-hugo/service-hugo.yaml
        
      3. View the service and its corresponding information:

        kubectl get services -n hugo-site
        

        Your output will resemble the following:

          
        NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
        hugo-site   NodePort   10.108.110.6   <none>        80:30304/TCP   1d
            
        

      Create the Deployment

      A deployment is a controller that helps manage the state of your pods. The Hugo site deployment will define how many pods should be kept up and running with the Hugo site service and which container image should be used.

      1. Create the manifest file for your Hugo site’s deployment. Copy the following contents to your file.

        clientx/k8s-hugo/deployment.yaml
        
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: hugo-site
          namespace: hugo-site
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: hugo-site
          template:
            metadata:
              labels:
                app: hugo-site
            spec:
              containers:
              - name: hugo-site
                image: mydockerhubusername/hugo-site:v1
                imagePullPolicy: Always
                ports:
                - containerPort: 80
              
        • The deployment’s object spec states that the deployment should have 3 replica pods. This means at any given time the cluster will have 3 pods that run the Hugo site service.
        • The template field provides all the information needed to create actual pods.
        • The label app: hugo-site helps the deployment know which service pods to target.
        • The container field states that any containers connected to this deployment should use the Hugo site image mydockerhubusername/hugo-site:v1 that was created in the Build the Docker Image section of this guide.
        • imagePullPolicy: Always means that the container image will be pulled every time the pod is started.
        • containerPort: 80 states the port number to expose on the pod’s IP address. The system does not rely on this field to expose the container port; instead, it provides information about the network connections a container uses.
      2. Create the deployment for your hugo site:

        kubectl create -f clientx/k8s-hugo/deployment.yaml
        
      3. View the Hugo site’s deployment:

        kubectl get deployment hugo-site -n hugo-site
        

        Your output will resemble the following:

          
        NAME        READY   UP-TO-DATE   AVAILABLE   AGE
        hugo-site   3/3     3            3           1d
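
        Because the deployment controller continuously reconciles the number of running pods with the desired replica count, you can scale the Hugo site after the fact. For example, to run five replicas instead of three:

        kubectl scale deployment hugo-site --replicas=5 -n hugo-site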
            
        

      View the Hugo Site

      After creating all required manifest files to configure your Hugo site’s Kubernetes cluster, you should be able to view the site using a worker node’s IP address and its exposed port.

      1. Get your worker node’s external IP address. Copy down the EXTERNAL-IP value for any worker node in the cluster:

        kubectl get nodes -o wide
        
      2. Access the hugo-site service to view its exposed port.

        kubectl get svc -n hugo-site
        

        The output will resemble the following. Copy down the listed port number in the 30000-32767 range.

          
        NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
        hugo-site   NodePort   10.108.110.6   <none>        80:30304/TCP   1d
            
        
      3. Open a browser window and enter a worker node’s IP address and exposed port. An example URL for your Hugo site would be http://192.0.2.1:30304. Your Hugo site should appear.

        If desired, you can purchase a domain name and use Linode’s DNS Manager to assign a domain name to the cluster’s worker node IP address.
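
        You can also test the site from the command line with curl, substituting your worker node’s IP address and exposed port:

        curl http://192.0.2.1:30304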

      Tear Down Your Cluster

      To avoid being further billed for your Kubernetes cluster, tear down your cluster’s Linodes. If you have Linodes that existed for only part of a monthly billing cycle, you’ll be billed at the hourly rate for that service. See How Hourly Billing Works to learn more.

      Next Steps

      Now that you are familiar with basic Kubernetes concepts, like configuring pods, grouping resources, and deploying services, you can deploy a Kubernetes cluster on Linode for production use.


      Troubleshooting Kubernetes


      Updated by Linode. Written by Linode Community.

      Troubleshooting issues with Kubernetes can be complex, and it can be difficult to account for all the possible error conditions you may see. This guide tries to equip you with the core tools that can be useful when troubleshooting, and it introduces some situations that you may find yourself in.


      Where to go for help outside this guide

      If your issue is not covered by this guide, we also recommend researching and posting in the Linode Community Questions site and in #linode on the Kubernetes Slack, where other Linode users (and the Kubernetes community) can offer advice.

      If you are running a cluster on Linode’s managed LKE service, and you are experiencing an issue related to your master/control plane components, you can report these issues to Linode by contacting Linode Support. Examples in this category include:

      • Kubernetes’ API server not running. If kubectl does not respond as expected, this can indicate problems with the API server.

      • The CCM, CSI, Calico, or kube-dns pods are not running.

      • Annotations on LoadBalancer services aren’t functioning.

      • PersistentVolumes are not re-attaching.

      Please note that the kube-apiserver and etcd pods will not be visible for LKE clusters; this is expected.


      General Troubleshooting Strategies

      To troubleshoot issues with the applications running on your cluster, you can rely on the kubectl command to gather debugging information. kubectl includes a set of subcommands that can be used to research issues with your cluster, and this guide will highlight four of them: get, describe, logs, and exec.

      To troubleshoot issues with your cluster, you may need to directly view the logs that are generated by Kubernetes’ components.

      kubectl get

      Use the get command to list different kinds of resources in your cluster (nodes, pods, services, etc). The output will show the status for each resource returned. For example, this output shows that a pod is in the CrashLoopBackOff status, which means it should be investigated further:

      kubectl get pods
      NAME              READY     STATUS             RESTARTS   AGE
      ghost-0           0/1       CrashLoopBackOff   34         2h
      mariadb-0         1/1       Running            0          2h
      
      • Use the --namespace flag to show resources in a certain namespace:

        # Show pods in the `kube-system` namespace
        kubectl get pods --namespace kube-system
        

        Note

        If you’ve set up Kubernetes using automated solutions like Linode’s Kubernetes Engine, k8s-alpha CLI, or Rancher, you’ll see csi-linode and ccm-linode pods in the kube-system namespace. This is normal as long as they’re in the Running status.
      • Use the -o flag to return the resources as YAML or JSON. The Kubernetes API’s complete description for the returned resources will be shown:

        # Get pods as YAML API objects
        kubectl get pods -o yaml
        
      • Sort the returned resources with the --sort-by flag:

        # Sort by name
        kubectl get pods --sort-by=.metadata.name
        
      • Use the --selector or -l flag to get resources that match a label. This is useful for finding all pods for a given service:

        # Get pods which match the app=ghost selector
        kubectl get pods -l app=ghost
        
      • Use the --field-selector flag to return resources which match different resource fields:

        # Get all pods that are Pending
        kubectl get pods --field-selector status.phase=Pending
        
        # Get all pods that are not in the kube-system namespace
        kubectl get pods --field-selector metadata.namespace!=kube-system
        

      kubectl describe

      Use the describe command to return a detailed report of the state of one or more resources in your cluster. Pass a resource type to the describe command to get a report for each of those resources:

      kubectl describe nodes
      

      Pass the name of a resource to get a report for just that object:

      kubectl describe pods ghost-0
      

      You can also use the --selector (-l) flag to filter the returned resources, as with the get command.

      kubectl logs

      Use the logs command to print logs collected by a pod:

      kubectl logs mariadb-0
      
      • Use the --selector (-l) flag to print logs from all pods that match a selector:

        kubectl logs -l app=ghost
        
      • If a pod’s container was killed and restarted, you can view the previous container’s logs with the --previous or -p flag:

        kubectl logs -p ghost-0
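
      • To stream new log lines as they arrive, or to limit output to the most recent lines, combine the -f (follow) and --tail flags:

        kubectl logs -f ghost-0 --tail=20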
        

      kubectl exec

      You can run arbitrary commands on a pod’s container by passing them to kubectl’s exec command:

      kubectl exec mariadb-0 -- ps aux
      

      The full syntax for the command is:

      kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
      

      Note

      The -c flag is optional, and is only needed when the specified pod is running more than one container.

      It is possible to run an interactive shell on an existing pod/container. Pass the -it flags to exec and run the shell:

      kubectl exec -it mariadb-0 -- /bin/bash
      

      Enter exit to leave this shell when you are done.

      Viewing Master and Worker Logs

      If the Kubernetes API server isn’t working normally, then you may not be able to use kubectl to troubleshoot. When this happens, or if you are experiencing other more fundamental issues with your cluster, you can instead log directly into your nodes and view the logs present on your filesystem.

      Non-systemd systems

      If your nodes do not run systemd, the logs on your master nodes should be located at:

      • /var/log/kube-apiserver.log
      • /var/log/kube-scheduler.log
      • /var/log/kube-controller-manager.log

      On your worker nodes:

      • /var/log/kubelet.log
      • /var/log/kube-proxy.log

      systemd systems

      If your nodes run systemd, you can access the logs that kubelet generates with journalctl:

      journalctl --unit kubelet
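
      Add the -f flag to follow the log as new entries are written:

      journalctl --unit kubelet -f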
      

      Logs for your other Kubernetes software components can be found through your container runtime. When using Docker, you can use the docker ps and docker logs commands to investigate. For example, to find the container running your API server:

      docker ps | grep apiserver
      

      The output will display a list of information separated by tabs:

      2f4e6242e1a2    cfdda15fbce2    "kube-apiserver --au…"  2 days ago  Up 2 days   k8s_kube-apiserver_kube-apiserver-k8s-trouble-1-master-1_kube-system_085b2ab3bd6d908bde1af92bd25e5aaa_0
      

      The first entry (in this example: 2f4e6242e1a2) will be an alphanumeric string, and it is the ID for the container. Copy this string and pass it to docker logs to view the logs for your API server:

      docker logs ${CONTAINER_ID}
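
      As with kubectl, you can limit the output to the most recent lines and follow it as it grows:

      docker logs --tail 100 -f ${CONTAINER_ID}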
      

      Troubleshooting Examples

      Viewing the Wrong Cluster

      If your kubectl commands are not returning the resources and information you expect, then your client may be assigned to the wrong cluster context. To view all of the cluster contexts on your system, run:

      kubectl config get-contexts
      

      An asterisk will appear next to the active context:

      CURRENT   NAME                                        CLUSTER            AUTHINFO
                my-cluster-kayciZBRO5s@my-cluster           my-cluster         my-cluster-kayciZBRO5s
      *         other-cluster-kaUzJOMWJ3c@other-cluster     other-cluster      other-cluster-kaUzJOMWJ3c
      

      To switch to another context, run:

      kubectl config use-context ${CLUSTER_NAME}
      

      For example:

      kubectl config use-context my-cluster-kayciZBRO5s@my-cluster
      

      Can’t Provision Cluster Nodes

      If you are not able to create new nodes in your cluster, you may see an error message similar to:

        
      Error creating a Linode Instance: [400] Account Limit reached. Please open a support ticket.
      
      

      This is a reference to the total number of Linode resources that can exist on your account. To create new Linode instances for your cluster, you will need to either remove other instances on your account, or request a limit increase. To request a limit increase, contact Linode Support.

      Insufficient CPU or Memory

      If one of your pods requests more memory or CPU than is available on your worker nodes, then one of these scenarios may happen:

      • The pod will remain in the Pending state, because the scheduler cannot find a node to run it on. This will be visible when running kubectl get pods.

        If you run the kubectl describe command on your pod, the Events section may list a FailedScheduling event, along with a message like Failed for reason PodExceedsFreeCPU and possibly others. You can run kubectl describe nodes to view information about the allocated resources for each node.

      • The pod may continually crash. For example, the Ghost pod specified by Ghost’s Helm chart will show the following error in its logs when not enough memory is available:

        kubectl logs ghost --tail=5
        1) SystemError
        
        Message: You are recommended to have at least 150 MB of memory available for smooth operation. It looks like you have ~58 MB available.
        

      If your cluster has insufficient resources for a new pod, you will need to:

      • Reduce the number of other pods/deployments/applications running on your cluster,
      • Resize the Linode instances that represent your worker nodes to a higher-tier plan, or
      • Add a new worker node to your cluster.
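
      For context, pods declare these needs through the resources field of their manifests; the scheduler uses the requested values when placing pods on nodes. A minimal sketch with hypothetical values:

        apiVersion: v1
        kind: Pod
        metadata:
          name: example-pod
        spec:
          containers:
          - name: app
            image: nginx:stable-alpine
            resources:
              requests:
                memory: "256Mi"
                cpu: "500m"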
