
      Using Git Hooks in Your Development Workflow

      Introduction

      Git, the version control system created by Linus Torvalds, author of the Linux kernel, has become one of the most widely used version control systems in the world, largely because of its distributed nature, high performance, and reliability.

      In this tutorial, we’ll look at git hooks. Hooks are a git feature that extends git’s behavior by allowing developers to run event-triggered scripts.

      We’ll look through the different types of git hooks and implement a few to get you well on the way to customizing your own.

      A git hook is a script that git executes before or after a relevant git event or action is triggered.

      Throughout the developer version control workflow, git hooks enable you to customize git’s internal behavior when certain events are triggered.

      They can be used to perform actions such as:

      1. Push to staging or production without leaving git, and without having to mess with SSH or FTP.
      2. Prevent commits by enforcing a commit policy.
      3. Prevent pushes or merges that don’t conform to certain standards or meet guideline expectations.
      4. Facilitate continuous deployment.

      This proves extremely helpful for developers as git gives them the flexibility to fine-tune their development environment and automate development.

      Before we get started, there are a few key programs we need to install.

      1. git
      2. Node.js
      3. bash

      Confirm that you’ve installed them correctly by running the following in your terminal:

      1. git --version && node --version && bash --version

      You should see results similar to the following:

      git version 2.7.4 (Apple Git-66)
      v6.2.2
      GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin15)
      Copyright (C) 2007 Free Software Foundation, Inc.

      We’ll be using the following directory structure, so go ahead and lay out your project like this.

      +-- git-hooks
          +-- custom-hooks
          +-- src
          |   +-- index.js
          +-- test
          |   +-- test.js
          +-- .jscsrc
      

      That’s all for now as far as prerequisites go, so let’s dive in.

      git hooks can be categorized into two main types. These are:

      1. Client-side hooks
      2. Server-side hooks

      In this tutorial, we’ll focus more on client-side hooks. However, we will briefly discuss server-side hooks.

      Client-side hooks

      These are hooks installed and maintained on the developer’s local repository and are executed when events on the local repository are triggered. Because they are maintained locally, they are also known as local hooks.

      Since they are local, they cannot be used as a way to enforce universal commit policies on a remote repository as each developer can alter their hooks. However, they make it easier for developers to adhere to workflow guidelines like linting and commit message guides.

      Installing local hooks

      Initialize the project we just created as a git repository by running

      1. git init

      Next, let’s navigate to the .git/hooks directory in our project and expose the contents of the folder

      1. cd ./.git/hooks && ls

      We’ll notice a few files inside the hooks directory, namely

      applypatch-msg.sample
      commit-msg.sample
      post-update.sample
      pre-applypatch.sample
      pre-commit.sample
      pre-push.sample
      pre-rebase.sample
      prepare-commit-msg.sample
      update.sample
      

      These scripts are the default hooks that git has so helpfully gifted us with. Notice that their names make reference to git events like pushes, commits, and rebases.

      Useful in their own right, they also serve as a guideline on how hooks for certain events can be triggered.

      The .sample extension prevents them from being run, so to enable them, remove the .sample extension from the script name.
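
      For example, assuming you wanted to enable the sample pre-commit hook, you could rename it like this:

      1. mv .git/hooks/pre-commit.sample .git/hooks/pre-commit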

      The hooks we’ll write here will be in bash though you can use Python or even Perl. Git hooks can be written in any language as long as the file is executable.

      We make the hook executable by using the chmod utility.

      1. chmod +x .git/hooks/<insert-hook-name-here>

      Order of execution

      Mimicking the developer workflow for the commit process, hooks are executed in the following order.

              <pre-commit>
                   |
          <prepare-commit-msg>
                   |
              <commit-msg>
                   |
             <post-commit>
      

      pre-commit

      The pre-commit hook is executed before git asks the developer for a commit message or creates a commit object. This hook can be used to make sure certain checks pass before a commit is considered worthy of being made. No arguments are passed to the pre-commit script, and if the script exits with a non-zero value, the commit event will be aborted.

      Before we get into anything heavy, let’s create a simple pre-commit hook to get us comfortable.

      Create a pre-commit hook inside the .git/hooks directory like this.

      1. touch pre-commit && vi pre-commit

      Enter the following into the pre-commit hook file

      #!/bin/bash
      
      echo "Can you make a commit? Well, it depends."
      exit 1
      

      Save and exit the editor by pressing Esc and then typing :wq.

      Don’t forget to make the hook file executable by running:

      1. chmod +x .git/hooks/pre-commit

      Let’s write out some code to test our newly minted hook against. At the root of our project, create a file called hello-world.py:

      1. touch hello-world.py

      Inside the file, enter the following:

      print ('Hello Hooks') 
      
      

      Next, let’s add the file into our git staging environment and begin a commit.

      1. git add . && git commit

      Are you surprised that git doesn’t let us commit our work?

      As an experiment, modify the last line in the pre-commit hook we created from exit 1 to exit 0 and trigger another commit.

      Now that we understand that a hook is just an event-triggered script, let’s create something with more utility.

      In our example below, we want to make sure that all the tests for our code pass and that we have no linting errors before we commit.

      We’re using mocha as our JavaScript test framework and jscs as our linter.

      Fill the following into the .git/hooks/pre-commit file

      #!/bin/bash
      
      
      # Count failing mocha tests from the JSON reporter output.
      num_of_failures=`mocha -R json | grep failures -m 1 | awk '{print $2}' | sed 's/,//'`
      
      # Run the jscs linter and count the failures reported in its junit output.
      errors=`jscs -r inline ./test/test.js`
      num_of_linting_errors=`jscs -r junit ./test/test.js | grep failures -m 1 | awk '{print $4}' | sed 's/failures=//' | sed 's/>//' | sed 's/"//g'`
      
      if [ "$num_of_failures" != '0' ]; then
        echo "$num_of_failures tests have failed. You cannot commit until all tests pass.
              Commit exiting with a non-zero status."
        exit 1
      fi
      
      if [ "$num_of_linting_errors" != '0' ]; then
        echo "Linting errors present. $errors"
        exit 1
      fi
      

      Save the document and exit the vi editor as usual by pressing Esc and then typing :wq.

      The first line of the script indicates that we want the script to be run as a bash script. If the script were a Python one, we would instead use

      1. #!/usr/bin/env python

      Make the file executable as we mentioned before by running

      1. chmod +x .git/hooks/pre-commit

      To give our commit hook something to test against, we’ll be creating a method that returns true when an input string contains vowels and false otherwise.

      Create and populate a package.json file at the root of our git-hooks folder by running

      1. npm init --yes

      Install the project dependencies like this:

      1. npm install chai mocha jscs --save-dev

      Let’s write a test for our prospective hasVowels method.

      git-hooks/test/test.js

      const expect = require('chai').expect;
      require('../src/index');
      
      describe('Test hasVowels', () => {
        it('should return false if the string has no vowels', () => {
          expect('N VWLS'.hasVowels()).to.equal(false);
        });
        it('should return true if the string has vowels', () => {
          expect('No vowels'.hasVowels()).to.equal(true)
      
          
          expect('Has vowels'.hasVowels()).to.equal(false);
        });
      });
      

      git-hooks/src/index.js

      
      String.prototype.hasVowels = function hasVowels() {
        const vowels = new RegExp('[aeiou]', 'i');
        return vowels.test(this);
      };
      

      To configure the jscs linter, fill the following into the .jscsrc file we created at the beginning.

      .jscsrc

      {
          "preset": "airbnb",
          "disallowMultipleLineBreaks": null,
          "requireSemicolons": true
      }
      

      Now add all the created files into the staging environment and trigger a commit.

      1. git add . && git commit

      What do you think will happen?

      You’re right. Git prevents us from making a commit. Rightfully so, because our tests have failed.
      Worry not. Our pre-commit script has helpfully provided us with hints regarding what could be wrong.

      This is what it tells us:

      1 tests have failed. You cannot commit until all tests pass.
              Commit exiting with a non-zero status.
      

      If you can’t take my word for it, the screenshot below serves as confirmation.

      [Screenshot: the pre-commit hook aborting the commit because a test failed]

      Let’s fix things. Edit line 13 in test/test.js to

      expect('Has vowels'.hasVowels()).to.equal(true);
      

      Next, add the files to your staging environment with git add . as we did before, and run git commit.

      Git still prevents us from committing.

      Linting errors present. ./test/test.js: line 10, col 49, requireSemicolons: Missing semicolon after statement
      

      Edit line 10 in test/test.js to

      expect('No vowels'.hasVowels()).to.equal(true);
      

      Now, running git commit after git add . should provide no challenges because our tests and linting have both passed.

      You can skip the pre-commit hook by running git commit --no-verify.

      prepare-commit-msg

      The prepare-commit-msg hook is executed after the pre-commit hook and before the commit message editor opens. It can be used to populate the editor with a default commit message.

      This hook takes one, two, or three arguments.

      1. The name of the file that contains the commit message to be used.
      2. The type of commit. This can be message, template, merge, or squash.
      3. The SHA-1/hash of a commit (when operating on an existing commit).

      In the code below, we’re electing to populate the commit editor workspace with a helpful commit message format reminder prefaced by the name of the current branch.

      .git/hooks/prepare-commit-msg

      #!/bin/bash
      
      
      current_branch=`git rev-parse --abbrev-ref HEAD`
      
      echo "#$current_branch Commit messages should be of the form [#StoryID:CommitType] Commit Message." > $1
      

      Running git commit on the main branch will yield the following in the commit message editor

      #main Commit messages should be of the form [#StoryID:CommitType] Commit Message.
      

      We can continue to edit our commit message and exit out of the editor as usual.

      commit-msg

      This hook is executed after the prepare-commit-msg hook. It can be used to reformat the commit message after it has been input or to validate the message against some checks. For example, it could be used to check for commit message spelling errors or length, before the commit is allowed.

      This hook takes one argument: the location of the file that holds the commit message.

      .git/hooks/commit-msg

      #!/bin/bash
      
      # Commit messages must match the form [#StoryID:CommitType]Commit Message,
      # e.g. [#135316555:Feature]Create Kafka Audit Trail. Merge commits are allowed through.
      commit_standard_regex='^\[#[0-9]{9,}:[a-z]+\].+|^merge'
      error_message="Aborting commit. Please ensure your commit message meets the
                     standard requirement. '[#StoryID:CommitType]Commit Message'
                    Use '[#135316555:Feature]Create Kafka Audit Trail' for reference"
      
      
      if ! grep -iqE "$commit_standard_regex" "$1"; then
          echo "$error_message" >&2
          exit 1
      fi
      

      In the code above, we’re validating the user-supplied commit message against a standard format using a regular expression. If the supplied message does not conform to the regular expression, an error message is written to standard error, the script exits with a status of one, and the commit is aborted.

      Go ahead. Create a change and try to make a commit of a form other than [#135316555:Chore]Test commit-msg hook

      Git will abort the commit process and give you a handy little tip regarding the format of your commit message.

      [Screenshot: the commit-msg hook rejecting a non-conforming commit message]

      post-commit

      This hook is executed after the commit-msg hook and since the commit has already been made it cannot abort the commit process.

      It can, however, be used to notify the relevant stakeholders that a commit has been made.
      We could write a post-commit hook, say, to email our project team lead whenever we make a commit; a rough sketch of that idea follows.
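
      For illustration, here is a minimal sketch of that email idea. It assumes the mail utility is installed and a mail transfer agent is configured locally; the recipient address is a placeholder:

      #!/bin/bash
      
      # Hypothetical post-commit hook: email a summary of the latest commit to a
      # team lead. Assumes `mail` and a working local mail setup; the address is
      # a placeholder, not part of this tutorial's project.
      recipient="team-lead@example.com"
      commit_summary=$(git log -1 --pretty=format:'%h %s by %an')
      
      echo "New commit on $(git rev-parse --abbrev-ref HEAD): $commit_summary" | \
        mail -s "New commit" "$recipient"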

      In this case, let’s congratulate ourselves on our hard work.

      .git/hooks/post-commit

      #!/bin/bash
      
      # The say command is macOS-specific; on Linux, substitute a desktop
      # notification or a simple echo.
      say Congratulations! You\'ve just made a commit! Time for a break.
      

      post-checkout

      The post-checkout hook is executed after a successful git checkout is performed. It can be used to conveniently delete temporary files or prepare the checked out development environment by performing installations.

      Its exit status does not affect the checkout process.

      In the hook below, after checking out a branch, we’ll pull changes made by others on the remote branch and perform some installations.

      .git/hooks/post-checkout

      #!/bin/bash
      
      
      # Gather repository, branch, and directory information.
      repository_name=$(basename "$(git rev-parse --show-toplevel)")
      current_branch=$(git rev-parse --abbrev-ref HEAD)
      present_working_directory=$(basename "$(pwd)")
      requirements=$(ls | grep 'requirements.txt')
      
      echo "Pulling remote branch ....."
      git pull origin $current_branch
      
      echo
      
      echo "Installing nodeJS dependencies ....."
      npm install
      
      echo
      
      echo "Installing yarn package ....."
      npm install yarn
      echo "Yarning dependencies ......"
      yarn
      
      echo
      
      
      # If we're at the repository root and a requirements.txt exists, create and
      # activate a Python virtual environment for this branch. mkvirtualenv and
      # workon are provided by virtualenvwrapper, which is assumed to be installed.
      if [ "$present_working_directory" == "$repository_name" ] && [ "$requirements" == 'requirements.txt' ]; then
        echo "Creating virtual environments for project ......."
        source "$(which virtualenvwrapper.sh)"
        echo
        mkvirtualenv "$repository_name/$current_branch"
        workon "$repository_name/$current_branch"
        echo "Installing python dependencies ......."
        pip install -r requirements.txt
      fi
      

      Don’t forget to make the script executable.

      To test the script out, create another branch and check it out like this.

      1. git checkout -b <new-branch>

      pre-rebase

      This hook is executed before a rebase and can be used to stop the rebase if it is not desirable.

      It takes one or two parameters:

      1. The upstream repository
      2. The branch to be rebased. (This parameter is empty if the rebase is being performed on the current branch)

      Let’s outlaw all rebasing on our repository.

      .git/hooks/pre-rebase

      #!/bin/bash
      
      echo " No rebasing until we grow up. Aborting rebase."
      exit 1
      

      Phew! We’ve gone through quite a number of client-side hooks. If you’re still with me, good work!

      Persisting hooks

      I’ve got some bad news and good news. Which one would you like first?

      The bad

      The .git/hooks directory is not tracked by version control and so does not persist when we clone a remote repository or push changes to one. This is why we stated earlier that local hooks cannot be used to enforce commit policies.

      The good

      Now before you start sweating, there are a few ways we can get around this.

      1. We can use symbolic links or symlinks to link our custom hooks to the ones in the .git/hooks folder.

      Create a pre-rebase file in our custom-hooks directory and copy the pre-rebase hook we created in .git/hooks/pre-rebase into it. Next, the rm command removes the pre-rebase hook in .git/hooks:

      1. touch custom-hooks/pre-rebase && cp .git/hooks/pre-rebase custom-hooks/pre-rebase && rm -f .git/hooks/pre-rebase

      Next, use the ln command to link the pre-rebase file in custom-hooks into the .git/hooks directory. Note that the link target is resolved relative to .git/hooks, so the path needs to step back up to the project root:

      1. ln -s ../../custom-hooks/pre-rebase .git/hooks/pre-rebase

      To confirm that the files have been linked, run the following

      1. ls -la .git/hooks

      The output for the pre-rebase file should be similar to this:

      1. lrwxr-xr-x 1 emabishi staff 29B Dec 27 14:57 pre-rebase -> ../../custom-hooks/pre-rebase

      Notice the l character prefixing the filesystem file permissions line.

      To unlink the two files,

      1. unlink .git/hooks/pre-rebase

      or

      1. rm -f .git/hooks/pre-rebase
      2. We can create a directory to store our hooks outside the .git/hooks directory. We’ve already done this by storing our pre-rebase hook in the custom-hooks directory. Like our other files, this folder can be pushed to our remote repository; see the sketch below for how to point git at it.
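
      If your team is on git 2.9 or later, one way to use such a tracked hooks directory is the core.hooksPath setting. A minimal sketch, assuming the hooks live in custom-hooks:

      1. git config core.hooksPath custom-hooks

      Each developer still runs this once after cloning, but the hook scripts themselves now live in version control.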

      Server-side hooks

      These are hooks that are executed on a remote repository when certain events are triggered there.

      Is it clear now? Client-side hooks respond to events on a local repository whilst server-side hooks respond to events triggered on a remote repository.

      We’d come across some of them when we listed the files in the .git/hooks directory.

      Let’s look at a few of these hooks now.

      Order of execution

      The server-side hooks we’ll look at here are executed with the following hierarchy.

             <pre-receive>
                   |
                <update>
                   |
             <post-receive>
      

      pre-receive

      This hook is triggered on the remote repository just before the pushed refs are updated and can abort the receive process if it exits with a non-zero status.

      Since the hook is executed just before the remote is updated, it can be used to enforce commit policies and reject the entire commit if it is deemed unsatisfactory.
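
      As an illustration (a sketch only, not part of the original tutorial), a pre-receive hook could enforce the same commit message policy we used client-side. It reads one line per updated ref from standard input, in the form old-sha new-sha ref-name:

      #!/bin/bash
      
      # Hypothetical pre-receive hook: reject the push if any incoming commit
      # message fails the [#StoryID:CommitType] format check used earlier.
      commit_standard_regex='^\[#[0-9]{9,}:[a-z]+\].+|^merge'
      
      while read old_sha new_sha ref_name; do
        # For simplicity this assumes the ref already exists on the remote;
        # pushes of brand-new branches would need extra handling.
        for commit in $(git rev-list "$old_sha".."$new_sha"); do
          message=$(git log --format=%B -n 1 "$commit")
          if ! grep -iqE "$commit_standard_regex" <<< "$message"; then
            echo "Rejecting push: commit $commit does not meet the message standard." >&2
            exit 1
          fi
        done
      done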

      update

      The update hook is called after the pre-receive hook and functions similarly. The difference is that it runs once for each ref being pushed to the remote repository. It can be used as a fine-tooth comb to reject or accept each ref independently.
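
      For example (again a hedged sketch rather than anything from the original article), the update hook receives the ref name, the old SHA, and the new SHA as arguments, so it could block branch deletions and force pushes to main:

      #!/bin/bash
      
      # Hypothetical update hook: arguments are <ref-name> <old-sha> <new-sha>.
      ref_name=$1
      old_sha=$2
      new_sha=$3
      zero="0000000000000000000000000000000000000000"
      
      # Block ref deletions.
      if [ "$new_sha" = "$zero" ]; then
        echo "Deleting refs on this remote is not allowed ($ref_name)." >&2
        exit 1
      fi
      
      # Block non-fast-forward (force) pushes to main.
      if [ "$ref_name" = "refs/heads/main" ] && [ "$old_sha" != "$zero" ]; then
        if ! git merge-base --is-ancestor "$old_sha" "$new_sha"; then
          echo "Force pushes to main are not allowed." >&2
          exit 1
        fi
      fi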

      post-receive

      This hook is triggered after an update has been done on the remote repository and so cannot abort the update process. Like the post-commit client-side hook, it can be used to trigger notifications on a successful remote repository update.

      In fact, it is more suited for this because a log of the notifications will be stored on a remote server.
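
      A sketch of such a notification hook might post each updated ref to a chat or CI webhook; the URL below is a placeholder, not a real endpoint:

      #!/bin/bash
      
      # Hypothetical post-receive hook: announce every updated ref to a webhook.
      webhook_url="https://example.com/your-webhook"
      
      while read old_sha new_sha ref_name; do
        short_sha=$(git rev-parse --short "$new_sha")
        curl -s -X POST -H 'Content-Type: application/json' \
          -d "{\"text\": \"$ref_name updated to $short_sha\"}" \
          "$webhook_url" > /dev/null
      done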

      We’ve looked at quite a few hooks which should get you up and running. However, I’d love for you to do some more exploration.

      For a more comprehensive look at git hooks, the official Git documentation on hooks is a good place to start.

      It’s a brave new world out there when it comes to git hooks, so luckily, you don’t always have to write your own custom scripts. You can find a pretty comprehensive list of useful frameworks here.

      All the code we’ve written can be found here.

      How To Migrate a Docker Compose Workflow for Rails Development to Kubernetes


      Introduction

      When building modern, stateless applications, containerizing your application’s components is the first step in deploying and scaling on distributed platforms. If you have used Docker Compose in development, you will have modernized and containerized your application by:

      • Extracting necessary configuration information from your code.
      • Offloading your application’s state.
      • Packaging your application for repeated use.

      You will also have written service definitions that specify how your container images should run.

      To run your services on a distributed platform like Kubernetes, you will need to translate your Compose service definitions to Kubernetes objects. This will allow you to scale your application with resiliency. One tool that can speed up the translation process to Kubernetes is kompose, a conversion tool that helps developers move Compose workflows to container orchestrators like Kubernetes or OpenShift.

      In this tutorial, you will translate Compose services to Kubernetes objects using kompose. You will use the object definitions that kompose provides as a starting point and make adjustments to ensure that your setup will use Secrets, Services, and PersistentVolumeClaims in the way that Kubernetes expects. By the end of the tutorial, you will have a single-instance Rails application with a PostgreSQL database running on a Kubernetes cluster. This setup will mirror the functionality of the code described in Containerizing a Ruby on Rails Application for Development with Docker Compose and will be a good starting point to build out a production-ready solution that will scale with your needs.

      Prerequisites

      Step 1 — Installing kompose

      To begin using kompose, navigate to the project’s GitHub Releases page, and copy the link to the current release (version 1.22.0 as of this writing). Paste this link into the following curl command to download the latest version of kompose:

      • curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-linux-amd64 -o kompose

      For details about installing on non-Linux systems, please refer to the installation instructions.

      Make the binary executable:

      • chmod +x ./kompose

      Move it to your PATH:

      • sudo mv ./kompose /usr/local/bin/kompose

      To verify that it has been installed properly, you can do a version check:

      • kompose version

      If the installation was successful, you will see output like the following:

      Output

      1.22.0 (955b78124)

      With kompose installed and ready to use, you can now clone the Rails project code that you will be translating to Kubernetes.

      Step 2 — Cloning and Packaging the Application

      To use our application with Kubernetes, we will need to clone the project code and package the application so that the kubelet service can pull the image.

      Our first step will be to clone the rails-sidekiq repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in Containerizing a Ruby on Rails Application for Development with Docker Compose, which uses a demo Rails application to demonstrate how to set up a development environment using Docker Compose. You can find more information about the application itself in the series Rails on Containers.

      Clone the repository into a directory called rails_project:

      • git clone https://github.com/do-community/rails-sidekiq.git rails_project

      Navigate to the rails_project directory:

      • cd rails_project

      Now checkout the code for this tutorial from the compose-workflow branch:

      • git checkout compose-workflow

      Output

      Branch 'compose-workflow' set up to track remote branch 'compose-workflow' from 'origin'.
      Switched to a new branch 'compose-workflow'

      The rails_project directory contains files and directories for a shark information application that works with user input. It has been modernized to work with containers: sensitive and specific configuration information has been removed from the application code and refactored to be injected at runtime, and the application’s state has been offloaded to a PostgreSQL database.

      For more information about designing modern, stateless applications, please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes.

      The project directory includes a Dockerfile with instructions for building the application image. Let’s build the image now so that you can push it to your Docker Hub account and use it in your Kubernetes setup.

      Using the docker build command, build the image with the -t flag, which allows you to tag it with a memorable name. In this case, tag the image with your Docker Hub username and name it rails-kubernetes or a name of your own choosing:

      • docker build -t your_dockerhub_user/rails-kubernetes .

      The . in the command specifies that the build context is the current directory.

      It will take a minute or two to build the image. Once it is complete, check your images:

      • docker images

      You will see the following output:

      Output

      REPOSITORY                              TAG       IMAGE ID       CREATED       SIZE
      your_dockerhub_user/rails-kubernetes    latest    24f7e88b6ef2   2 days ago    606MB
      alpine                                  latest    d6e46aa2470d   6 weeks ago   5.57MB

      Next, log in to the Docker Hub account you created in the prerequisites:

      • docker login -u your_dockerhub_user

      When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your user’s home directory with your Docker Hub credentials.

      Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_user with your own Docker Hub username:

      • docker push your_dockerhub_user/rails-kubernetes

      You now have an application image that you can pull to run your application with Kubernetes. The next step will be to translate your application service definitions to Kubernetes objects.

      Step 3 — Translating Compose Services to Kubernetes Objects with kompose

      Our Docker Compose file, here called docker-compose.yml, lays out the definitions that will run our services with Compose. A service in Compose is a running container, and service definitions contain information about how each container image will run. In this step, we will translate these definitions to Kubernetes objects by using kompose to create yaml files. These files will contain specs for the Kubernetes objects that describe their desired state.

      We will use these files to create different types of objects: Services, which will ensure that the Pods running our containers remain accessible; Deployments, which will contain information about the desired state of our Pods; a PersistentVolumeClaim to provision storage for our database data; a ConfigMap for environment variables injected at runtime; and a Secret for our application’s database user and password. Some of these definitions will be in the files kompose will create for us, and others we will need to create ourselves.

      First, we will need to modify some of the definitions in our docker-compose.yml file to work with Kubernetes. We will include a reference to our newly-built application image in our app service definition and remove the bind mounts, volumes, and additional commands that we used to run the application container in development with Compose. Additionally, we’ll redefine both containers’ restart policies to be in line with the behavior Kubernetes expects.

      If you have followed the steps in this tutorial and checked out the compose-workflow branch with git, then you should have a docker-compose.yml file in your working directory.

      If you don’t have a docker-compose.yml then be sure to visit the previous tutorial in this series, Containerizing a Ruby on Rails Application for Development with Docker Compose, and paste the contents from the linked section into a new docker-compose.yml file.

      Open the file with nano or your favorite editor:

      • nano docker-compose.yml

      The current definition for the app application service looks like this:

      ~/rails_project/docker-compose.yml

      . . .
      services:
        app:
          build:
            context: .
            dockerfile: Dockerfile
          depends_on:
            - database
            - redis
          ports:
            - "3000:3000"
          volumes:
            - .:/app
            - gem_cache:/usr/local/bundle/gems
            - node_modules:/app/node_modules
          env_file: .env
          environment:
            RAILS_ENV: development
      . . .
      

      Make the following edits to your service definition:

      • Replace the build: line with image: your_dockerhub_user/rails-kubernetes
      • Remove the following context: ., and dockerfile: Dockerfile lines.
      • Remove the volumes list.

      The finished service definition will now look like this:

      ~/rails_project/docker-compose.yml

      . . .
      services:
        app:
          image: your_dockerhub_user/rails-kubernetes
          depends_on:
            - database
            - redis
          ports:
            - "3000:3000"
          env_file: .env
          environment:
            RAILS_ENV: development
      . . .
      

      Next, scroll down to the database service definition and make the following edits:

      • Remove the - ./init.sql:/docker-entrypoint-initdb.d/init.sql volume line. Instead of using values from the local SQL file, we will pass the values for our POSTGRES_USER and POSTGRES_PASSWORD to the database container using the Secret we will create in Step 4.
      • Add a ports: section that will make PostgreSQL available inside your Kubernetes cluster on port 5432.
      • Add an environment: section with a PGDATA variable that points to a directory inside /var/lib/postgresql/data. This setting is required when PostgreSQL is configured to use block storage, since the database engine expects to find its data files in a sub-directory.

      The database service definition should look like this when you are finished editing it:

      ~/rails_project/docker-compose.yml

      . . .
        database:
          image: postgres:12.1
          volumes:
            - db_data:/var/lib/postgresql/data
          ports:
            - "5432:5432"
          environment:
            PGDATA: /var/lib/postgresql/data/pgdata
      . . .
      

      Next, edit the redis service definition to expose its default TCP port by adding a ports: section with the default 6379 port. Adding the ports: section will make Redis available inside your Kubernetes cluster. Your edited redis service should resemble the following:

      ~/rails_project/docker-compose.yml

      . . .
        redis:
          image: redis:5.0.7
          ports:
            - "6379:6379"
      

      After editing the redis section of the file, continue to the sidekiq service definition. Just as with the app service, you’ll need to switch from building a local docker image to pulling from Docker Hub. Make the following edits to your sidekiq service definition:

      • Replace the build: line with image: your_dockerhub_user/rails-kubernetes
      • Remove the following context: ., and dockerfile: Dockerfile lines.
      • Remove the volumes list.

      ~/rails_project/docker-compose.yml

      . . .
        sidekiq:
          image: your_dockerhub_user/rails-kubernetes
          depends_on:
            - app
            - database
            - redis
          env_file: .env
          environment:
              RAILS_ENV: development
          entrypoint: ./entrypoints/sidekiq-entrypoint.sh
      

      Finally, at the bottom of the file, remove the gem_cache and node_modules volumes from the top-level volumes key. The key will now look like this:

      ~/rails_project/docker-compose.yml

      . . .
      volumes:
        db_data:
      

      Save and close the file when you are finished editing.

      For reference, your completed docker-compose.yml file should contain the following:

      ~/rails_project/docker-compose.yml

      version: '3'
      
      services:
        app:
          image: your_dockerhub_user/rails-kubernetes
          depends_on:
              - database
              - redis
          ports:
              - "3000:3000"
          env_file: .env
          environment:
              RAILS_ENV: development
      
        database:
          image: postgres:12.1
          volumes:
              - db_data:/var/lib/postgresql/data
          ports:
              - "5432:5432"
          environment:
              PGDATA: /var/lib/postgresql/data/pgdata
      
        redis:
          image: redis:5.0.7
          ports:
              - "6379:6379"
      
        sidekiq:
          image: your_dockerhub_user/rails-kubernetes
          depends_on:
              - app
              - database
              - redis
          env_file: .env
          environment:
              RAILS_ENV: development
          entrypoint: ./entrypoints/sidekiq-entrypoint.sh
      
      volumes:
        db_data:
      

      Before translating our service definitions, we will need to write the .env file that kompose will use to create the ConfigMap with our non-sensitive information. Please see Step 2 of Containerizing a Ruby on Rails Application for Development with Docker Compose for a longer explanation of this file.

      In that tutorial, we added .env to our .gitignore file to ensure that it would not copy to version control. This means that it did not copy over when we cloned the rails-sidekiq repository in Step 2 of this tutorial. We will therefore need to recreate it now.

      Create the file:

      • nano .env

      kompose will use this file to create a ConfigMap for our application. However, instead of assigning all of the variables from the app service definition in our Compose file, we will only add settings for PostgreSQL and Redis. We will assign the database name, username, and password separately when we manually create a Secret object in Step 4.

      Add the following port and database name information to the .env file. Feel free to rename your database if you would like:

      ~/rails_project/.env

      DATABASE_HOST=database
      DATABASE_PORT=5432
      REDIS_HOST=redis
      REDIS_PORT=6379
      

      Save and close the file when you are finished editing.

      You are now ready to create the files with your object specs. kompose offers multiple options for translating your resources. You can:

      • Create yaml files based on the service definitions in your docker-compose.yml file with kompose convert.
      • Create Kubernetes objects directly with kompose up.
      • Create a Helm chart with kompose convert -c.

      For now, we will convert our service definitions to yaml files and then add to and revise the files that kompose creates.

      Convert your service definitions to yaml files with the following command:

      • kompose convert

      After you run this command, kompose will output information about the files it has created:

      Output

      INFO Kubernetes file "app-service.yaml" created
      INFO Kubernetes file "database-service.yaml" created
      INFO Kubernetes file "redis-service.yaml" created
      INFO Kubernetes file "app-deployment.yaml" created
      INFO Kubernetes file "env-configmap.yaml" created
      INFO Kubernetes file "database-deployment.yaml" created
      INFO Kubernetes file "db-data-persistentvolumeclaim.yaml" created
      INFO Kubernetes file "redis-deployment.yaml" created
      INFO Kubernetes file "sidekiq-deployment.yaml" created

      These include yaml files with specs for the Rails application Service, Deployment, and ConfigMap, as well as for the db-data PersistentVolumeClaim and PostgreSQL database Deployment. Also included are files for Redis and Sidekiq respectively.

      To keep these manifests out of the main directory for your Rails project, create a new directory called k8s-manifests and then use the mv command to move the generated files into it:

      • mkdir k8s-manifests
      • mv *.yaml k8s-manifests

      Finally, cd into the k8s-manifests directory. We’ll work from inside this directory from now on to keep things tidy:

      • cd k8s-manifests

      These files are a good starting point, but in order for our application’s functionality to match the setup described in Containerizing a Ruby on Rails Application for Development with Docker Compose we will need to make a few additions and changes to the files that kompose has generated.

      Step 4 — Creating Kubernetes Secrets

      In order for our application to function in the way we expect, we will need to make a few modifications to the files that kompose has created. The first of these changes will be generating a Secret for our database user and password and adding it to our application and database Deployments. Kubernetes offers two ways of working with environment variables: ConfigMaps and Secrets. kompose has already created a ConfigMap with the non-confidential information we included in our .env file, so we will now create a Secret with our confidential information: our database name, username and password.

      The first step in manually creating a Secret will be to convert the data to base64, an encoding scheme that allows you to uniformly transmit data, including binary data.

      First convert the database name to base64 encoded data:

      • echo -n 'your_database_name' | base64

      Note down the encoded value.

      Next convert your database username:

      • echo -n 'your_database_username' | base64

      Again record the value you see in the output.

      Finally, convert your password:

      • echo -n 'your_database_password' | base64

      Take note of the value in the output here as well.
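
      If you want to sanity-check any of these values before pasting them into the manifest, you can decode them again. The --decode flag works with GNU coreutils; some BSD/macOS versions of base64 use -D instead:

      • echo 'your_encoded_value' | base64 --decode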

      Open a file for the Secret:

      • nano secret.yaml

      Note: Kubernetes objects are typically defined using YAML, which strictly forbids tabs and requires two spaces for indentation. If you would like to check the formatting of any of your yaml files, you can use a linter or test the validity of your syntax using kubectl create with the --dry-run and --validate flags:

      • kubectl create -f your_yaml_file.yaml --dry-run --validate=true

      In general, it is a good idea to validate your syntax before creating resources with kubectl.

      Add the following code to the file to create a Secret that will define your DATABASE_NAME, DATABASE_USER and DATABASE_PASSWORD using the encoded values you just created. Be sure to replace the highlighted placeholder values here with your encoded database name, username and password:

      ~/rails_project/k8s-manifests/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: database-secret
      data:
        DATABASE_NAME: your_database_name
        DATABASE_PASSWORD: your_encoded_password
        DATABASE_USER: your_encoded_username
      

      We have named the Secret object database-secret, but you are free to name it anything you would like.

      These secrets are used with the Rails application so that it can connect to PostgreSQL. However, the database itself needs to be initialized with these same values. So next, copy the three lines and paste them at the end of the file. Edit the last three lines and change the DATABASE prefix for each variable to POSTGRES. Finally change the POSTGRES_NAME variable to read POSTGRES_DB.

      Your final secret.yaml file should contain the following:

      ~/rails_project/k8s-manifests/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: database-secret
      data:
        DATABASE_NAME: your_database_name
        DATABASE_PASSWORD: your_encoded_password
        DATABASE_USER: your_encoded_username
        POSTGRES_DB: your_database_name
        POSTGRES_PASSWORD: your_encoded_password
        POSTGRES_USER: your_encoded_username
      

      Save and close this file when you are finished editing. As you did with your .env file, be sure to add secret.yaml to your .gitignore file to keep it out of version control.

      With secret.yaml written, our next step will be to ensure that our application and database Deployments both use the values that we added to the file. Let’s start by adding references to the Secret to our application Deployment.

      Open the file called app-deployment.yaml:

      • nano app-deployment.yaml

      The file’s container specifications include the following environment variables defined under the env key:

      ~/rails_project/k8s-manifests/app-deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      . . .
          spec:
            containers:
              - env:
                  - name: DATABASE_HOST
                    valueFrom:
                      configMapKeyRef:
                        key: DATABASE_HOST
                        name: env
                  - name: DATABASE_PORT
                    valueFrom:
                      configMapKeyRef:
                        key: DATABASE_PORT
                        name: env
                  - name: RAILS_ENV
                    value: development
                  - name: REDIS_HOST
                    valueFrom:
                      configMapKeyRef:
                        key: REDIS_HOST
                        name: env
                  - name: REDIS_PORT
                    valueFrom:
                      configMapKeyRef:
                        key: REDIS_PORT
                        name: env
      . . .
      

      We will need to add references to our Secret so that our application will have access to those values. Instead of including a configMapKeyRef key to point to our env ConfigMap, as is the case with the existing values, we’ll include a secretKeyRef key to point to the values in our database-secret secret.

      Add the following Secret references after the - name: REDIS_PORT variable section:

      ~/rails_project/k8s-manifests/app-deployment.yaml

      . . .
          spec:
            containers:
              - env:
              . . .  
                  - name: REDIS_PORT
                    valueFrom:
                      configMapKeyRef:
                        key: REDIS_PORT
                        name: env
                  - name: DATABASE_NAME
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: DATABASE_NAME
                  - name: DATABASE_PASSWORD
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: DATABASE_PASSWORD
                  - name: DATABASE_USER
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: DATABASE_USER
      . . .
      
      

      Save and close the file when you are finished editing. As with your secret.yaml file, be sure to validate your edits using kubectl to ensure there are no issues with spaces, tabs, and indentation:

      • kubectl create -f app-deployment.yaml --dry-run --validate=true

      Output

      deployment.apps/app created (dry run)

      Next, we’ll add the same values to the database-deployment.yaml file.

      Open the file for editing:

      • nano database-deployment.yaml

      In this file, we will add references to our Secret for the following variable keys: POSTGRES_DB, POSTGRES_USER and POSTGRES_PASSWORD. The postgres image makes these variables available so that you can modify the initialization of your database instance. The POSTGRES_DB variable creates a default database that is available when the container starts. The POSTGRES_USER and POSTGRES_PASSWORD together create a privileged user that can access the created database.

      Using these values means that the user we create has access to all of the administrative and operational privileges of that role in PostgreSQL. When working in production, you will want to create a dedicated application user with appropriately scoped privileges, as sketched below.
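
      As a rough illustration of what that might look like once the database is running (the pod name, admin user, role name, and password here are placeholders, not values defined elsewhere in this tutorial):

      • kubectl exec -i your_database_pod -- psql -U your_database_username -d your_database_name \
          -c "CREATE ROLE app_user WITH LOGIN PASSWORD 'choose_a_strong_password';" \
          -c "GRANT CONNECT ON DATABASE your_database_name TO app_user;" \
          -c "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user;"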

      Under the POSTGRES_DB, POSTGRES_USER and POSTGRES_PASSWORD variables, add references to the Secret values:

      ~/rails_project/k8s-manifests/database-deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      . . .
          spec:
            containers:
              - env:
                  - name: PGDATA
                    value: /var/lib/postgresql/data/pgdata
                  - name: POSTGRES_DB
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: POSTGRES_DB
                  - name: POSTGRES_PASSWORD
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: POSTGRES_PASSWORD        
                  - name: POSTGRES_USER
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: POSTGRES_USER
      . . .
      

      Save and close the file when you are finished editing. Again be sure to lint your edited file using kubectl with the --dry-run --validate=true arguments.

      With your Secret in place, you can move on to creating the database Service and ensuring that your application container only attempts to connect to the database once it is fully set up and initialized.

      Step 5 — Modifying the PersistentVolumeClaim and Exposing the Application Frontend

      Before running our application, we will make two final changes to ensure that our database storage will be provisioned properly and that we can expose our application frontend using a LoadBalancer.

      First, let’s modify the storage resource defined in the PersistentVolumeClaim that kompose created for us. This Claim allows us to dynamically provision storage to manage our application’s state.

      To work with PersistentVolumeClaims, you must have a StorageClass created and configured to provision storage resources. In our case, because we are working with DigitalOcean Kubernetes, our default StorageClass provisioner is set to dobs.csi.digitalocean.com — DigitalOcean Block Storage.

      We can check this by typing:

      • kubectl get storageclass

      If you are working with a DigitalOcean cluster, you will see the following output:

      Output

      NAME                         PROVISIONER                 RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
      do-block-storage (default)   dobs.csi.digitalocean.com   Delete          Immediate           true                   76m

      If you are not working with a DigitalOcean cluster, you will need to create a StorageClass and configure a provisioner of your choice. For details about how to do this, please see the official documentation.

      When kompose created db-data-persistentvolumeclaim.yaml, it set the storage resource to a size that does not meet the minimum size requirements of our provisioner. We will therefore need to modify our PersistentVolumeClaim to use the minimum viable DigitalOcean Block Storage unit: 1GB. Please feel free to modify this to meet your storage requirements.

      Open db-data-persistentvolumeclaim.yaml:

      • nano db-data-persistentvolumeclaim.yaml

      Replace the storage value with 1Gi:

      ~/rails_project/k8s-manifests/db-data-persistentvolumeclaim.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: db-data
        name: db-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
      status: {}
      

      Also note that the accessModes: ReadWriteOnce setting means the volume provisioned by this Claim can be mounted read-write by only a single node. Please see the documentation for more information about different access modes.

      Save and close the file when you are finished.

      Next, open app-service.yaml:

      • nano app-service.yaml

      We are going to expose this Service externally using a DigitalOcean Load Balancer. If you are not using a DigitalOcean cluster, please consult the relevant documentation from your cloud provider for information about their load balancers. Alternatively, you can follow the official Kubernetes documentation on setting up a highly available cluster with kubeadm, but in this case you will not be able to use PersistentVolumeClaims to provision storage.

      Within the Service spec, specify LoadBalancer as the Service type:

      ~/rails_project/k8s-manifests/app-service.yaml

      apiVersion: v1
      kind: Service
      . . .
      spec:
        type: LoadBalancer
        ports:
      . . .
      

      When we create the app Service, a load balancer will be automatically created, providing us with an external IP where we can access our application.

      Save and close the file when you are finished editing.

      With all of our files in place, we are ready to start and test the application.

      Note:
      If you would like to compare your edited Kubernetes manifests to a set of reference files to be certain that your changes match this tutorial, the companion GitHub repository contains a set of tested manifests. You can compare each file individually, or you can also switch your local git branch to use the kubernetes-workflow branch.

      If you opt to switch branches, be sure to copy your secret.yaml file into the new checked out version, since we added it to .gitignore earlier in the tutorial.

      Step 6 — Starting and Accessing the Application

      It’s time to create our Kubernetes objects and test that our application is working as expected.

      To create the objects we’ve defined, we’ll use kubectl create with the -f flag, which will allow us to specify the files that kompose created for us, along with the files we wrote. Run the following command to create the Rails application and PostgreSQL database, Redis cache, and Sidekiq Services and Deployments, along with your Secret, ConfigMap, and PersistentVolumeClaim:

      • kubectl create -f app-deployment.yaml,app-service.yaml,database-deployment.yaml,database-service.yaml,db-data-persistentvolumeclaim.yaml,env-configmap.yaml,redis-deployment.yaml,redis-service.yaml,secret.yaml,sidekiq-deployment.yaml

      You will receive the following output, indicating that the objects have been created:

      Output

      deployment.apps/app created
      service/app created
      deployment.apps/database created
      service/database created
      persistentvolumeclaim/db-data created
      configmap/env created
      deployment.apps/redis created
      service/redis created
      secret/database-secret created
      deployment.apps/sidekiq created

      To check that your Pods are running, type:

      • kubectl get pods

      You don’t need to specify a Namespace here, since we have created our objects in the default Namespace. If you are working with multiple Namespaces, be sure to include the -n flag when running this command, along with the name of your Namespace.

      You will see output similar to the following while your database container is starting (the status will be either Pending or ContainerCreating):

      Output

      NAME                       READY   STATUS    RESTARTS   AGE
      app-854d645fb9-9hv7w       1/1     Running   0          23s
      database-c77d55fbb-bmfm8   0/1     Pending   0          23s
      redis-7d65467b4d-9hcxk     1/1     Running   0          23s
      sidekiq-867f6c9c57-mcwks   1/1     Running   0          23s

      Once the database container is started, you will have output like this:

      Output

      NAME                       READY   STATUS    RESTARTS   AGE
      app-854d645fb9-9hv7w       1/1     Running   0          30s
      database-c77d55fbb-bmfm8   1/1     Running   0          30s
      redis-7d65467b4d-9hcxk     1/1     Running   0          30s
      sidekiq-867f6c9c57-mcwks   1/1     Running   0          30s

      The Running STATUS indicates that your Pods are bound to nodes and that the containers associated with those Pods are running. READY indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.

      Note:
      If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

      • kubectl describe pods your_pod
      • kubectl logs your_pod

      Now that your application is up and running, the last step that is required is to run Rails’ database migrations. This step will load a schema into the PostgreSQL database for the demo application.

      To run pending migrations you’ll exec into the running application pod and then call the rake db:migrate command.

      First, find the name of the application pod with the following command:

      • kubectl get pods

      Find the pod that corresponds to your application like the highlighted pod name in the following output:

      Output

      NAME                       READY   STATUS    RESTARTS   AGE
      app-854d645fb9-9hv7w       1/1     Running   0          30s
      database-c77d55fbb-bmfm8   1/1     Running   0          30s
      redis-7d65467b4d-9hcxk     1/1     Running   0          30s
      sidekiq-867f6c9c57-mcwks   1/1     Running   0          30s

      With that pod name noted down, you can now run the kubectl exec command to complete the database migration step.

      Run the migrations with this command:

      • kubectl exec your_app_pod_name -- rake db:migrate

      You should receive output similar to the following, which indicates that the database schema has been loaded:

      Output

      == 20190927142853 CreateSharks: migrating =====================================
      -- create_table(:sharks)
         -> 0.0190s
      == 20190927142853 CreateSharks: migrated (0.0208s) ============================
      
      == 20190927143639 CreatePosts: migrating ======================================
      -- create_table(:posts)
         -> 0.0398s
      == 20190927143639 CreatePosts: migrated (0.0421s) =============================
      
      == 20191120132043 CreateEndangereds: migrating ================================
      -- create_table(:endangereds)
         -> 0.8359s
      == 20191120132043 CreateEndangereds: migrated (0.8367s) =======================

      With your containers running and data loaded, you can now access the application. To get the IP for the app LoadBalancer, type:

      • kubectl get services

      You will receive output like the following:

      Output

      NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
      app          LoadBalancer   10.245.73.142   your_lb_ip    3000:31186/TCP   21m
      database     ClusterIP      10.245.155.87   <none>        5432/TCP         21m
      kubernetes   ClusterIP      10.245.0.1      <none>        443/TCP          21m
      redis        ClusterIP      10.245.119.67   <none>        6379/TCP         21m

      The EXTERNAL-IP associated with the app service is the IP address where you can access the application. If you see a <pending> status in the EXTERNAL-IP column, this means that your load balancer is still being created.

      Once you see an IP in that column, navigate to it in your browser: http://your_lb_ip:3000.

      You should see the following landing page:

      Application Landing Page

      Click on the Get Shark Info button. You will have a page with a button to create a new shark:

      Shark Info Form

      Click it and when prompted, enter the username and password from earlier in the tutorial series. If you did not change these values then the defaults are sammy and shark respectively.

      In the form, add a shark of your choosing. To demonstrate, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

      Filled Shark Form

      Click on the Submit button. You will see a page with this shark information displayed back to you:

      Shark Output

      You now have a single instance setup of a Rails application with a PostgreSQL database running on a Kubernetes cluster. You also have a Redis cache and a Sidekiq worker to process data that users submit.

      Conclusion

      The files you have created in this tutorial are a good starting point to build from as you move toward production. As you develop your application, you can work on implementing the following:




      How to Use a Remote Docker Server to Speed Up Your Workflow


      Introduction

      Building CPU-intensive images and binaries is a very slow and time-consuming process that can turn your laptop into a space heater at times. Pushing Docker images on a slow connection takes a long time, too. Luckily, there’s an easy fix for these issues. Docker lets you offload all those tasks to a remote server so your local machine doesn’t have to do that hard work.

      This feature was introduced in Docker 18.09. It brings support for connecting to a Docker host remotely via SSH. It requires very little configuration on the client, and only needs a regular Docker server without any special config running on a remote machine. Prior to Docker 18.09, you had to use Docker Machine to create a remote Docker server and then configure the local Docker environment to use it. This new method removes that additional complexity.

      In this tutorial, you’ll create a Droplet to host the remote Docker server and configure the docker command on your local machine to use it.

      Prerequisites

      To follow this tutorial, you’ll need:

      • A DigitalOcean account. You can create an account if you don’t have one already.
      • Docker installed on your local machine or development server. If you are working with Ubuntu 18.04, follow Steps 1 and 2 of How To Install and Use Docker on Ubuntu 18.04; otherwise, follow the official documentation for information about installing on other operating systems. Be sure to add your non-root user to the docker group, as described in Step 2 of the linked tutorial.

      Step 1 – Creating the Docker Host

      To get started, spin up a Droplet with a decent amount of processing power. The CPU Optimized plans are perfect for this purpose, but Standard ones work just as well. If you will be compiling resource-intensive programs, the CPU Optimized plans provide dedicated CPU cores which allow for faster builds. Otherwise, the Standard plans offer a more balanced CPU to RAM ratio.

      The Docker One-click image takes care of all of the setup for us. Follow this link to create a 16GB/8vCPU CPU-Optimized Droplet with Docker from the control panel.

      Alternatively, you can use doctl to create the Droplet from your local command line. To install it, follow the instructions in the doctl README file on GitHub.

      The following command creates a new 16GB/8vCPU CPU-Optimized Droplet in the FRA1 region based on the Docker One-click image:

      • doctl compute droplet create docker-host \
          --image docker-18-04 \
          --region fra1 \
          --size c-8 \
          --wait \
          --ssh-keys $(doctl compute ssh-key list --format ID --no-header | sed 's/$/,/' | tr -d '\n' | sed 's/,$//')

      The doctl command uses the ssh-keys value to specify which SSH keys it should apply to your new Droplet. We use a subshell to call doctl compute ssh-key list to retrieve the SSH keys associated with your DigitalOcean account, and then parse the results using the sed and tr commands to format the data correctly. This command includes all of your account’s SSH keys, but you can replace the highlighted subcommand with the fingerprint of any key you have in your account.

      Once the Droplet is created you’ll see its IP address among other details:

      Output

      ID           Name          Public IPv4      Private IPv4   Public IPv6   Memory   VCPUs   Disk   Region   Image                                Status   Tags   Features   Volumes
      148681562    docker-host   your_server_ip                                16384    8       100    fra1     Ubuntu Docker 5:18.09.6~3 on 18.04   active

      You can learn more about using the doctl command in the tutorial How To Use doctl, the Official DigitalOcean Command-Line Client.

      When the Droplet is created, you’ll have a ready to use Docker server. For security purposes, create a Linux user to use instead of root.

      First, connect to the Droplet with SSH as the root user:

      • ssh root@your_server_ip

      Once connected, add a new user. This command adds one named sammy:

      • adduser sammy

      Then add the user to the docker group to give it permission to run commands on the Docker host.

      • sudo usermod -aG docker sammy

      Finally, exit from the remote server by typing exit.

      Now that the server is ready, let's configure the local docker command to use it.

      Step 2 – Configuring Docker to Use the Remote Host

      To use the remote host as your Docker host instead of your local machine, set the DOCKER_HOST environment variable to point to the remote host. This variable will instruct the Docker CLI client to connect to the remote server.

      • export DOCKER_HOST=ssh://sammy@your_server_ip

      Now any Docker command you run will be run on the Droplet. For example, if you start a web server container and expose a port, it will be run on the Droplet and will be accessible through the port you exposed on the Droplet's IP address.
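
      For instance (a quick sanity check using the public nginx image as a stand-in for your own service, with an arbitrary port choice), the container below runs on the Droplet, and the default nginx page becomes reachable at http://your_server_ip:8080:

      • docker run --rm -d -p 8080:80 --name remote-nginx nginx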

      To verify that you're accessing the Droplet as the Docker host, run docker info:

      • docker info

      You will see your Droplet's hostname listed in the Name field like so:

      Output

      … Name: docker-host

      One thing to keep in mind is that when you run a docker build command, the build context (all files and folders accessible from the Dockerfile) will be sent to the host and then the build process will run. Depending on the size of the build context and the amount of files, it may take a longer time compared to building the image on a local machine. One solution would be to create a new directory dedicated to the Docker image and copy or link only the files that will be used in the image so that no unneeded files will be uploaded inadvertently.
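
      A minimal sketch of that approach, using hypothetical paths for a project whose image only needs the Dockerfile and a src directory:

      • mkdir ~/myapp-image
      • cp ~/myapp/Dockerfile ~/myapp-image/
      • cp -r ~/myapp/src ~/myapp-image/
      • cd ~/myapp-image && docker build -t your_dockerhub_user/myapp .

      A .dockerignore file in the project root is another common way to keep large or sensitive files out of the build context.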

      Conclusion

      You've created a remote Docker host and connected to it locally. The next time your laptop's battery is running low or you need to build a heavy Docker image, use your shiny remote Docker server instead of your local machine.

      You might also be interested in learning how to optimize Docker images for production, or how to optimize them specifically for Kubernetes.


