
      September 2018

      Webinar Series: Building Blocks for Doing CI/CD with Kubernetes


      Webinar Series

      This article supplements a webinar series on doing CI/CD with Kubernetes. The series discusses how to take a Cloud Native approach to building, testing, and deploying applications, covering release management, Cloud Native tools, Service Meshes, and CI/CD tools that can be used with Kubernetes. It is designed to help developers and businesses that are interested in integrating CI/CD best practices with Kubernetes into their workflows.

      This tutorial includes the concepts and commands from the first session of the series, Building Blocks for Doing CI/CD with Kubernetes.

      Introduction

      If you are getting started with containers, you will likely want to know how to automate building, testing, and deployment. By taking a Cloud Native approach to these processes, you can leverage the right infrastructure APIs to package and deploy applications in an automated way.

      Two building blocks for this automation are container images and container orchestrators. Over the last year or so, Kubernetes has become the default choice for container orchestration. In this first article of the CI/CD with Kubernetes series, you will:

      • Build container images with Docker, Buildah, and Kaniko.
      • Set up a Kubernetes cluster with Terraform, and create Deployments and Services.
      • Extend the functionality of a Kubernetes cluster with Custom Resources.

      By the end of this tutorial, you will have container images built with Docker, Buildah, and Kaniko, and a Kubernetes cluster with Deployments, Services, and Custom Resources.

      Future articles in the series will cover related topics: package management for Kubernetes, CI/CD tools like Jenkins X and Spinnaker, Service Meshes, and GitOps.

      Prerequisites

      Step 1 — Building Container Images with Docker and Buildah

      A container image is a self-contained entity with its own application code, runtime, and dependencies that you can use to create and run containers. You can use different tools to create container images, and in this step you will build containers with two of them: Docker and Buildah.

      Building Container Images with Dockerfiles

      Docker builds your container images automatically by reading instructions from a Dockerfile, a text file that includes the commands required to assemble a container image. Using the docker image build command, you can create an automated build that will execute the command-line instructions provided in the Dockerfile. When building the image, you will also pass the build context with the Dockerfile, which contains the set of files required to create an environment and run an application in the container image.

      Typically, you will create a project folder for your Dockerfile and build context. Create a folder called demo to begin:
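
      • mkdir ~/demo
      • cd ~/demo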

      Next, create a Dockerfile inside the demo folder:
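
      • nano Dockerfile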

      Add the following content to the file:

      ~/demo/Dockerfile

      FROM ubuntu:16.04
      
      LABEL MAINTAINER [email protected]
      
      RUN apt-get update \
          && apt-get install -y nginx \
          && apt-get clean \
          && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
          && echo "daemon off;" >> /etc/nginx/nginx.conf
      
      EXPOSE 80
      CMD ["nginx"]
      

      This Dockerfile consists of a set of instructions that will build an image to run Nginx. During the build process ubuntu:16.04 will function as the base image, and the nginx package will be installed. Using the CMD instruction, you've also configured nginx to be the default command when the container starts.

      Next, you'll build the container image with the docker image build command, using the current directory (.) as the build context. Passing the -t option to this command names the image nkhare/nginx:latest:

      • sudo docker image build -t nkhare/nginx:latest .

      You will see the following output:

      Output

      Sending build context to Docker daemon 49.25MB
      Step 1/5 : FROM ubuntu:16.04
       ---> 7aa3602ab41e
      Step 2/5 : MAINTAINER [email protected]
       ---> Using cache
       ---> 552b90c2ff8d
      Step 3/5 : RUN apt-get update && apt-get install -y nginx && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* && echo "daemon off;" >> /etc/nginx/nginx.conf
       ---> Using cache
       ---> 6bea966278d8
      Step 4/5 : EXPOSE 80
       ---> Using cache
       ---> 8f1c4281309e
      Step 5/5 : CMD ["nginx"]
       ---> Using cache
       ---> f545da818f47
      Successfully built f545da818f47
      Successfully tagged nginx:latest

      Your image is now built. You can list your Docker images using the following command:
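
      • sudo docker image ls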

      Output

      REPOSITORY     TAG       IMAGE ID       CREATED         SIZE
      nkhare/nginx   latest    4073540cbcec   3 seconds ago   171MB
      ubuntu         16.04     7aa3602ab41e   11 days ago

      You can now use the nkhare/nginx:latest image to create containers.

      Building Container Images with Project Atomic's Buildah

      Buildah is a CLI tool, developed by Project Atomic, for quickly building Open Container Initiative (OCI)-compliant images. OCI provides specifications for container runtimes and images in an effort to standardize industry best practices.

      Buildah can create an image either from a working container or from a Dockerfile. It can build images completely in user space without the Docker daemon, and can perform image operations like build, list, push, and tag. In this step, you'll compile Buildah from source and then use it to create a container image.
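
      Once Buildah is installed later in this step, you can also try its working-container workflow as an alternative to a Dockerfile. The following is only a rough sketch: the working container name that buildah from prints may differ on your system, and my-ubuntu-image is an arbitrary example name:

      • sudo buildah from ubuntu:16.04
      • sudo buildah run ubuntu-working-container -- apt-get update
      • sudo buildah commit ubuntu-working-container my-ubuntu-image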

      To install Buildah you will need the required dependencies, including tools that will enable you to manage packages and package security, among other things. Run the following commands to install these packages:

      • cd
      • sudo apt-get install software-properties-common
      • sudo add-apt-repository ppa:alexlarsson/flatpak
      • sudo add-apt-repository ppa:gophers/archive
      • sudo apt-add-repository ppa:projectatomic/ppa
      • sudo apt-get update
      • sudo apt-get install bats btrfs-tools git libapparmor-dev libdevmapper-dev libglib2.0-dev libgpgme11-dev libostree-dev libseccomp-dev libselinux1-dev skopeo-containers go-md2man

      Because you will compile the buildah source code to create its package, you'll also need to install Go:

      • sudo apt-get update
      • sudo curl -O https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz
      • sudo tar -xvf go1.8.linux-amd64.tar.gz
      • sudo mv go /usr/local
      • sudo echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.profile
      • source ~/.profile
      • go version

      You will see the following output, indicating a successful installation:

      Output

      go version go1.8 linux/amd64

      You can now get the buildah source code to create its package, along with the runc binary. runc is the reference implementation of the OCI container runtime specification, which you will use to run your Buildah containers.

      Run the following commands to install runc and buildah:

      • mkdir ~/buildah
      • cd ~/buildah
      • export GOPATH=`pwd`
      • git clone https://github.com/projectatomic/buildah ./src/github.com/projectatomic/buildah
      • cd ./src/github.com/projectatomic/buildah
      • make runc all TAGS="apparmor seccomp"
      • sudo cp ~/buildah/src/github.com/opencontainers/runc/runc /usr/bin/.
      • sudo apt install buildah

      Next, create the /etc/containers/registries.conf file to configure your container registries:

      • sudo nano /etc/containers/registries.conf

      Add the following content to the file to specify your registries:

      /etc/containers/registries.conf

      
      # This is a system-wide configuration file used to
      # keep track of registries for various container backends.
      # It adheres to TOML format and does not support recursive
      # lists of registries.
      
      # The default location for this configuration file is /etc/containers/registries.conf.
      
      # The only valid categories are: 'registries.search', 'registries.insecure',
      # and 'registries.block'.
      
      [registries.search]
      registries = ['docker.io', 'registry.fedoraproject.org', 'quay.io', 'registry.access.redhat.com', 'registry.centos.org']
      
      # If you need to access insecure registries, add the registry's fully-qualified name.
      # An insecure registry is one that does not have a valid SSL certificate or only does HTTP.
      [registries.insecure]
      registries = []
      
      # If you need to block pull access from a registry, uncomment the section below
      # and add the registries fully-qualified name.
      #
      # Docker only
      [registries.block]
      registries = []
      

      The registries.conf configuration file specifies which registries should be consulted when completing image names that do not include a registry or domain portion.

      Now run the following command to build an image, using the https://github.com/do-community/rsvpapp repository as the build context. This repository also contains the relevant Dockerfile:

      • sudo buildah build-using-dockerfile -t rsvpapp:buildah github.com/do-community/rsvpapp

      This command creates an image named rsvpapp:buildah from the Dockerfile available in the https://github.com/do-community/rsvpapp repository.

      To list the images, use the following command:
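
      • sudo buildah images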

      You will see the following output:

      Output

      IMAGE ID       IMAGE NAME                              CREATED AT           SIZE
      b0c552b8cf64   docker.io/teamcloudyuga/python:alpine   Sep 30, 2016 04:39   95.3 MB
      22121fd251df   localhost/rsvpapp:buildah               Sep 11, 2018 14:34   114 MB

      One of these images is localhost/rsvpapp:buildah, which you just created. The other, docker.io/teamcloudyuga/python:alpine, is the base image from the Dockerfile.

      Once you have built the image, you can push it to Docker Hub. This will allow you to store it for future use. You will first need to log in to your Docker Hub account from the command line:

      • docker login -u your-dockerhub-username -p your-dockerhub-password

      Once the login is successful, you will get a file, ~/.docker/config.json, that will contain your Docker Hub credentials. You can then use that file with buildah to push images to Docker Hub.

      For example, if you wanted to push the image you just created, you could run the following command, citing the authfile and the image to push:

      • sudo buildah push --authfile ~/.docker/config.json rsvpapp:buildah docker://your-dockerhub-username/rsvpapp:buildah

      You can also push the resulting image to the local Docker daemon using the following command:

      • sudo buildah push rsvpapp:buildah docker-daemon:rsvpapp:buildah

      Finally, take a look at the Docker images you have created:
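
      • sudo docker image ls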

      Output

      REPOSITORY     TAG       IMAGE ID       CREATED          SIZE
      rsvpapp        buildah   22121fd251df   4 minutes ago    108MB
      nkhare/nginx   latest    01f0982d91b8   17 minutes ago   172MB
      ubuntu         16.04     b9e15a5d1e1a   5 days ago       115MB

      As expected, you should now see a new image, rsvpapp:buildah, that has been exported using buildah.

      You now have experience building container images with two different tools, Docker and Buildah. Let's move on to discussing how to set up a cluster of containers with Kubernetes.

      Step 2 — Setting Up a Kubernetes Cluster on DigitalOcean using kubeadm and Terraform

      There are different ways to set up Kubernetes on DigitalOcean. To learn more about how to set up Kubernetes with kubeadm, for example, you can look at How To Create a Kubernetes Cluster Using Kubeadm on Ubuntu 18.04.

      Since this tutorial series discusses taking a Cloud Native approach to application development, we'll apply this methodology when setting up our cluster. Specifically, we will automate our cluster creation using kubeadm and Terraform, a tool that simplifies creating and changing infrastructure.

      Using your personal access token, you will connect to DigitalOcean with Terraform to provision 3 servers. You will run the kubeadm commands inside of these VMs to create a 3-node Kubernetes cluster containing one master node and two workers.

      On your Ubuntu server, create a pair of SSH keys, which will allow password-less logins to your VMs:
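
      • ssh-keygen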

      You will see the following output:

      Output

      Generating public/private rsa key pair.
      Enter file in which to save the key (~/.ssh/id_rsa):

      Press ENTER to save the key pair in the ~/.ssh directory in your home directory, or enter another destination.

      Next, you will see the following prompt:

      Output

      Enter passphrase (empty for no passphrase):

      In this case, press ENTER without entering a passphrase to enable password-less logins to your nodes.

      You will see a confirmation that your key pair has been created:

      Output

      Your identification has been saved in ~/.ssh/id_rsa.
      Your public key has been saved in ~/.ssh/id_rsa.pub.
      The key fingerprint is:
      SHA256:lCVaexVBIwHo++NlIxccMW5b6QAJa+ZEr9ogAElUFyY root@3b9a273f18b5
      The key's randomart image is:
      +---[RSA 2048]----+
      |++.E ++o=o*o*o |
      |o +..=.B = o |
      |. .* = * o |
      | . =.o + * |
      | . . o.S + . |
      | . +. . |
      | . ... = |
      | o= . |
      | ... |
      +----[SHA256]-----+

      Get your public key by running the following command, which will display it in your terminal:
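
      • cat ~/.ssh/id_rsa.pub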

      Add this key to your DigitalOcean account by following these directions.

      Next, install Terraform:

      • sudo apt-get update
      • sudo apt-get install unzip
      • wget https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_linux_amd64.zip
      • unzip terraform_0.11.7_linux_amd64.zip
      • sudo mv terraform /usr/bin/.
      • terraform version

      You will see output confirming your Terraform installation:

      Output

      Terraform v0.11.7

      Next, run the following commands to install kubectl, a CLI tool that will communicate with your Kubernetes cluster, and to create a ~/.kube directory in your user's home directory:

      • sudo apt-get install apt-transport-https
      • curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
      • sudo touch /etc/apt/sources.list.d/kubernetes.list
      • echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
      • sudo apt-get update
      • sudo apt-get install kubectl
      • mkdir -p ~/.kube

      Creating the ~/.kube directory will enable you to copy the configuration file to this location. You’ll do that once you run the Kubernetes setup script later in this section. By default, the kubectl CLI looks for the configuration file in the ~/.kube directory to access the cluster.

      Next, clone the sample project repository for this tutorial, which contains the Terraform scripts for setting up the infrastructure:

      • git clone https://github.com/do-community/k8s-cicd-webinars.git

      Go to the Terraform script directory:

      • cd k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/

      Get a fingerprint of your SSH public key:

      • ssh-keygen -E md5 -lf ~/.ssh/id_rsa.pub | awk '{print $2}'

      You will see output like the following, with the highlighted portion representing your key:

      Output

      MD5:dd:d1:b7:0f:6d:30:c0:be:ed:ae:c7:b9:b8:4a:df:5e

      Keep in mind that your key will differ from what's shown here.

      Save the fingerprint to an environment variable so Terraform can use it:

      • export FINGERPRINT=dd:d1:b7:0f:6d:30:c0:be:ed:ae:c7:b9:b8:4a:df:5e

      Next, export your DO personal access token:

      • export TOKEN=your-do-access-token

      Now take a look at the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ project directory:
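
      • ls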

      Output

      cluster.tf destroy.sh files outputs.tf provider.tf script.sh

      This folder contains the necessary scripts and configuration files for deploying your Kubernetes cluster with Terraform.

      Execute the script.sh script to trigger the Kubernetes cluster setup:
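
      • ./script.sh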

      When the script execution is complete, kubectl will be configured to use the Kubernetes cluster you've created.

      List the cluster nodes using kubectl get nodes:
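
      • kubectl get nodes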

      Output

      NAME                STATUS    ROLES     AGE       VERSION
      k8s-master-node     Ready     master    2m        v1.10.0
      k8s-worker-node-1   Ready     <none>    1m        v1.10.0
      k8s-worker-node-2   Ready     <none>    57s       v1.10.0

      You now have one master and two worker nodes in the Ready state.

      With a Kubernetes cluster set up, you can now explore another option for building container images: Kaniko from Google.

      Step 3 — Building Container Images with Kaniko

      Earlier in this tutorial, you built container images with Dockerfiles and Buildah. But what if you could build container images directly on Kubernetes? There are ways to run the docker image build command inside of Kubernetes, but this isn't native Kubernetes tooling. You would have to depend on the Docker daemon to build images, and it would need to run on one of the Pods in the cluster.

      A tool called Kaniko allows you to build container images with a Dockerfile on an existing Kubernetes cluster. In this step, you will build a container image with a Dockerfile using Kaniko. You will then push this image to Docker Hub.

      In order to push your image to Docker Hub, you will need to pass your Docker Hub credentials to Kaniko. In the previous step, you logged into Docker Hub and created a ~/.docker/config.json file with your login credentials. Let's use this configuration file to create a Kubernetes ConfigMap object to store the credentials inside the Kubernetes cluster. The ConfigMap object is used to store configuration parameters, decoupling them from your application.

      To create a ConfigMap called docker-config using the ~/.docker/config.json file, run the following command:

      • sudo kubectl create configmap docker-config --from-file=$HOME/.docker/config.json
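
      If you'd like to confirm that your credentials were stored, you can inspect the new ConfigMap. This optional check prints the object, including the contents of config.json:

      • kubectl get configmap docker-config -o yaml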

      Next, you can create a Pod definition file called pod-kaniko.yml in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory (though it can go anywhere).

      First, make sure that you are in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory:

      • cd ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/

      Create the pod-kaniko.yml file:
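
      • nano pod-kaniko.yml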

      Add the following content to the file to specify what will happen when you deploy your Pod. Be sure to replace your-dockerhub-username in the Pod's args field with your own Docker Hub username:

      ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/pod-kaniko.yml

      apiVersion: v1
      kind: Pod
      metadata:
        name: kaniko
      spec:
        containers:
        - name: kaniko
          image: gcr.io/kaniko-project/executor:latest
          args: ["--dockerfile=./Dockerfile",
                  "--context=/tmp/rsvpapp/",
                  "--destination=docker.io/your-dockerhub-username/rsvpapp:kaniko",
                  "--force" ]
          volumeMounts:
            - name: docker-config
              mountPath: /root/.docker/
            - name: demo
              mountPath: /tmp/rsvpapp
        restartPolicy: Never
        initContainers:
          - image: python
            name: demo
            command: ["/bin/sh"]
            args: ["-c", "git clone https://github.com/do-community/rsvpapp.git /tmp/rsvpapp"] 
            volumeMounts:
            - name: demo
              mountPath: /tmp/rsvpapp
        volumes:
          - name: docker-config
            configMap:
              name: docker-config
          - name: demo
            emptyDir: {}
      

      This configuration file describes what will happen when your Pod is deployed. First, the Init container will clone the Git repository with the Dockerfile, https://github.com/do-community/rsvpapp.git, into a shared volume called demo. Init containers run before application containers and can be used to run utilities or other tasks that you do not want to run from your application containers. Your application container, kaniko, will then build the image using the Dockerfile and push the resulting image to Docker Hub, using the credentials you passed to the ConfigMap volume docker-config.

      To deploy the kaniko pod, run the following command:

      • kubectl apply -f pod-kaniko.yml

      You will see the following confirmation:

      Output

      pod/kaniko created

      Get the list of pods:
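
      • kubectl get pods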

      You will see the following list:

      Output

      NAME      READY     STATUS     RESTARTS   AGE
      kaniko    0/1       Init:0/1   0          47s

      Wait a few seconds, and then run kubectl get pods again for a status update:
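
      • kubectl get pods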

      You will see the following:

      Output

      NAME      READY     STATUS    RESTARTS   AGE
      kaniko    1/1       Running   0          1m

      Finally, run kubectl get pods once more for a final status update:
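
      • kubectl get pods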

      Output

      NAME      READY     STATUS      RESTARTS   AGE
      kaniko    0/1       Completed   0          2m

      This sequence of output tells you that the Init container ran, cloning the GitHub repository inside of the demo volume. After that, the Kaniko build process ran and eventually finished.

      Check the logs of the pod:
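
      • kubectl logs kaniko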

      You will see the following output:

      Output

      time="2018-08-02T05:01:24Z" level=info msg="appending to multi args docker.io/your-dockerhub-username/rsvpapp:kaniko" time="2018-08-02T05:01:24Z" level=info msg="Downloading base image nkhare/python:alpine" . . . ime="2018-08-02T05:01:46Z" level=info msg="Taking snapshot of full filesystem..." time="2018-08-02T05:01:48Z" level=info msg="cmd: CMD" time="2018-08-02T05:01:48Z" level=info msg="Replacing CMD in config with [/bin/sh -c python rsvp.py]" time="2018-08-02T05:01:48Z" level=info msg="Taking snapshot of full filesystem..." time="2018-08-02T05:01:49Z" level=info msg="No files were changed, appending empty layer to config." 2018/08/02 05:01:51 mounted blob: sha256:bc4d09b6c77b25d6d3891095ef3b0f87fbe90621bff2a333f9b7f242299e0cfd 2018/08/02 05:01:51 mounted blob: sha256:809f49334738c14d17682456fd3629207124c4fad3c28f04618cc154d22e845b 2018/08/02 05:01:51 mounted blob: sha256:c0cb142e43453ebb1f82b905aa472e6e66017efd43872135bc5372e4fac04031 2018/08/02 05:01:51 mounted blob: sha256:606abda6711f8f4b91bbb139f8f0da67866c33378a6dcac958b2ddc54f0befd2 2018/08/02 05:01:52 pushed blob sha256:16d1686835faa5f81d67c0e87eb76eab316e1e9cd85167b292b9fa9434ad56bf 2018/08/02 05:01:53 pushed blob sha256:358d117a9400cee075514a286575d7d6ed86d118621e8b446cbb39cc5a07303b 2018/08/02 05:01:55 pushed blob sha256:5d171e492a9b691a49820bebfc25b29e53f5972ff7f14637975de9b385145e04 2018/08/02 05:01:56 index.docker.io/your-dockerhub-username/rsvpapp:kaniko: digest: sha256:831b214cdb7f8231e55afbba40914402b6c915ef4a0a2b6cbfe9efb223522988 size: 1243

      From the logs, you can see that the kaniko container built the image from the Dockerfile and pushed it to your Docker Hub account.

      You can now pull the Docker image. Be sure again to replace your-dockerhub-username with your Docker Hub username:

      • docker pull your-dockerhub-username/rsvpapp:kaniko

      You will see a confirmation of the pull:

      Output

      kaniko: Pulling from your-dockerhub-username/rsvpapp
      c0cb142e4345: Pull complete
      bc4d09b6c77b: Pull complete
      606abda6711f: Pull complete
      809f49334738: Pull complete
      358d117a9400: Pull complete
      5d171e492a9b: Pull complete
      Digest: sha256:831b214cdb7f8231e55afbba40914402b6c915ef4a0a2b6cbfe9efb223522988
      Status: Downloaded newer image for your-dockerhub-username/rsvpapp:kaniko

      You have now successfully built a Kubernetes cluster and created new images from within the cluster. Let's move on to discussing Deployments and Services.

      Step 4 — Creating Kubernetes Deployments and Services

      Kubernetes Deployments allow you to run your applications. Deployments specify the desired state for your Pods, ensuring consistency across your rollouts. In this step, you will create a file called deployment.yml in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory that defines an Nginx Deployment.

      First, open the file:
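
      • nano deployment.yml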

      Add the following configuration to the file to define your Nginx Deployment:

      ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/deployment.yml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-deployment
        labels:
          app: nginx
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx:1.7.9
              ports:
              - containerPort: 80
      
      

      This file defines a Deployment named nginx-deployment that creates three pods, each running an nginx container on port 80.

      To deploy the Deployment, run the following command:

      • kubectl apply -f deployment.yml

      You will see a confirmation that the Deployment was created:

      Output

      deployment.apps/nginx-deployment created

      List your Deployments:
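
      • kubectl get deployments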

      Output

      NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      nginx-deployment   3         3         3            3           29s

      You can see that the nginx-deployment Deployment has been created and that the desired and current Pod counts are the same: 3.

      To list the Pods that the Deployment created, run the following command:
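
      • kubectl get pods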

      Output

      NAME                                READY     STATUS      RESTARTS   AGE
      kaniko                              0/1       Completed   0          9m
      nginx-deployment-75675f5897-nhwsp   1/1       Running     0          1m
      nginx-deployment-75675f5897-pxpl9   1/1       Running     0          1m
      nginx-deployment-75675f5897-xvf4f   1/1       Running     0          1m

      You can see from this output that the desired number of Pods are running.

      To expose an application deployment internally and externally, you will need to create a Kubernetes object called a Service. Each Service specifies a ServiceType, which defines how the service is exposed. In this example, we will use a NodePort ServiceType, which exposes the Service on a static port on each node.

      To do this, create a file, service.yml, in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory:
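
      • nano service.yml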

      Add the following content to define your Service:

      ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/service.yml

      kind: Service
      apiVersion: v1
      metadata:
        name: nginx-service
      spec:
        selector:
          app: nginx
        type: NodePort
        ports:
        - protocol: TCP
          port: 80
          targetPort: 80
          nodePort: 30111
      

      These settings define the Service, nginx-service, and specify that it will target port 80 on your Pods. nodePort defines the static port on each node through which the application will accept external traffic.

      To deploy the Service run the following command:

      • kubectl apply -f service.yml

      You will see a confirmation:

      Output

      service/nginx-service created

      List the Services:
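
      • kubectl get services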

      You will see the following list:

      Output

      NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
      kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP        5h
      nginx-service   NodePort    10.100.98.213   <none>        80:30111/TCP   7s

      Your Service, nginx-service, is exposed on port 30111 and you can now access it on any of the node’s public IPs. For example, navigating to http://node_1_ip:30111 or http://node_2_ip:30111 should take you to Nginx's standard welcome page.

      Once you have tested the Deployment, you can clean up both the Deployment and Service:

      • kubectl delete deployment nginx-deployment
      • kubectl delete service nginx-service

      These commands will delete the Deployment and Service you have created.

      Now that you have worked with Deployments and Services, let's move on to creating Custom Resources.

      Step 5 — Creating Custom Resources in Kubernetes

      Kubernetes offers a limited but production-ready set of functionality out of the box. You can extend Kubernetes' offerings, however, using its Custom Resources feature. In Kubernetes, a resource is an endpoint in the Kubernetes API that stores a collection of API objects. A Pod resource contains a collection of Pod objects, for instance. With Custom Resources, you can add custom offerings for networking, storage, and more. These additions can be created or removed at any point.

      In addition to creating custom objects, you can also employ sub-controllers of the Kubernetes Controller component in the control plane to make sure that the current state of your objects is equal to the desired state. The Kubernetes Controller has sub-controllers for specified objects. For example, ReplicaSet is a sub-controller that makes sure the desired Pod count remains consistent. When you combine a Custom Resource with a Controller, you get a true declarative API that allows you to specify the desired state of your resources.

      In this step, you will create a Custom Resource and related objects.

      To create a Custom Resource, first make a file called crd.yml in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory:
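
      • nano crd.yml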

      Add the following Custom Resource Definition (CRD):

      ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/crd.yml

      apiVersion: apiextensions.k8s.io/v1beta1
      kind: CustomResourceDefinition
      metadata:
        name: webinars.digitalocean.com
      spec:
        group: digitalocean.com
        version: v1
        scope: Namespaced
        names:
          plural: webinars
          singular: webinar
          kind: Webinar
          shortNames:
          - wb
      

      To deploy the CRD defined in crd.yml, run the following command:

      • kubectl create -f crd.yml

      You will see a confirmation that the resource has been created:

      Output

      customresourcedefinition.apiextensions.k8s.io/webinars.digitalocean.com created

      The crd.yml file has created a new RESTful resource path: /apis/digitalocean.com/v1/namespaces/*/webinars. You can now refer to your objects using webinars, webinar, Webinar, and wb, as you listed them in the names section of the CustomResourceDefinition. You can check the RESTful resource with the following command:

      • kubectl proxy & curl 127.0.0.1:8001/apis/digitalocean.com

      Note: If you followed the initial server setup guide in the prerequisites, then you will need to allow traffic to port 8001 in order for this test to work. Enable traffic to this port with the following command:
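
      • sudo ufw allow 8001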

      You will see the following output from the curl command:

      Output

      HTTP/1.1 200 OK
      Content-Length: 238
      Content-Type: application/json
      Date: Fri, 03 Aug 2018 06:10:12 GMT

      {
        "apiVersion": "v1",
        "kind": "APIGroup",
        "name": "digitalocean.com",
        "preferredVersion": {
          "groupVersion": "digitalocean.com/v1",
          "version": "v1"
        },
        "serverAddressByClientCIDRs": null,
        "versions": [
          {
            "groupVersion": "digitalocean.com/v1",
            "version": "v1"
          }
        ]
      }

      Next, create an object that uses the new Custom Resource by opening a file called webinar.yml:
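
      • nano webinar.yml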

      Add the following content to create the object:

      ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/webinar.yml

      apiVersion: "digitalocean.com/v1"
      kind: Webinar
      metadata:
        name: webinar1
      spec:
        name: webinar
        image: nginx
      

      Run the following command to push these changes to the cluster:

      • kubectl apply -f webinar.yml

      You will see the following output:

      Output

      webinar.digitalocean.com/webinar1 created

      You can now manage your webinar objects using kubectl. For example:
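
      • kubectl get webinar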

      Output

      NAME       CREATED AT
      webinar1   21s

      You now have an object called webinar1. If there had been a Controller, it would have intercepted the object creation and performed any defined operations.

      Deleting a Custom Resource Definition

      To delete all of the objects for your Custom Resource, use the following command:

      • kubectl delete webinar --all

      You will see:

      Output

      webinar.digitalocean.com "webinar1" deleted

      Remove the Custom Resource Definition itself:

      • kubectl delete crd webinars.digitalocean.com

      You will see a confirmation that it has been deleted:

      Output

      customresourcedefinition.apiextensions.k8s.io "webinars.digitalocean.com" deleted

      After deletion you will not have access to the API endpoint that you tested earlier with the curl command.

      This sequence is an introduction to how you can extend Kubernetes functionality without modifying the Kubernetes source code.

      Step 6 — Deleting the Kubernetes Cluster

      To destroy the Kubernetes cluster itself, you can use the destroy.sh script from the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform folder. Make sure that you are in this directory:

      • cd ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform

      Run the script:
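
      • ./destroy.sh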

      By running this script, you'll allow Terraform to communicate with the DigitalOcean API and delete the servers in your cluster.

      Conclusion

      In this tutorial, you used different tools to create container images. With these images, you can create containers in any environment. You also set up a Kubernetes cluster using Terraform, and created Deployment and Service objects to deploy and expose your application. Additionally, you extended Kubernetes' functionality by defining a Custom Resource.

      You now have a solid foundation to build a CI/CD environment on Kubernetes, which we'll explore in future articles.




      How to Manage an SQL Database


      Introduction

      SQL databases come with all the commands you need to add, modify, delete, and query your data. This cheat sheet-style guide provides a quick reference to some of the most commonly used SQL commands.

      How to Use This Guide:

      • This guide is in cheat sheet format with self-contained command-line snippets
      • Jump to any section that is relevant to the task you are trying to complete
      • When you see highlighted text in this guide’s commands, keep in mind that this text should refer to the columns, tables, and data in your own database.
      • Throughout this guide, the example data values given are all wrapped in apostrophes ('). In SQL, it is necessary to wrap any data values that consist of strings in apostrophes. This isn’t required for numeric data, but it also won’t cause any issues if you do include apostrophes.

      Please note that, while SQL is recognized as a standard, most SQL database programs have their own proprietary extensions. This guide uses MySQL as the example relational database management system (RDBMS), but the commands given will work with other relational database programs, including PostgreSQL, MariaDB, and SQLite. Where there are significant differences between RDBMSs, we have included the alternative commands.

      Opening up the Database Prompt (using Socket/Trust Authentication)

      By default on Ubuntu 18.04, the root MySQL user can authenticate without a password using the following command:
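
      • sudo mysql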

      To open up a PostgreSQL prompt, use the following command. This example will log you in as the postgres user, which is the included superuser role, but you can replace that with any already-created role:
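
      • sudo -u postgres psql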

      Opening up the Database Prompt (using Password Authentication)

      If your root MySQL user is set to authenticate with a password, you can do so with the following command:
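
      • mysql -u root -p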

      If you've already set up a non-root user account for your database, you can also use this method to log in as that user:
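
      • mysql -u username -p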

      The above command will prompt you for your password after you run it. If you'd like to supply your password as part of the command, immediately follow the -p option with your password, with no space between them:
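
      • mysql -u root -ppassword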

      Creating a Database

      The following command creates a database with default settings.

      • CREATE DATABASE database_name;

      If you want your database to use a character set and collation different than the defaults, you can specify those using this syntax:

      • CREATE DATABASE database_name CHARACTER SET character_set COLLATE collation;

      Listing Databases

      To see what databases exist in your MySQL or MariaDB installation, run the following command:
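
      • SHOW DATABASES;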

      In PostgreSQL, you can see what databases have been created with the following command:
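
      • \list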

      Deleting a Database

      To delete a database, including any tables and data held within it, run a command that follows this structure:

      • DROP DATABASE IF EXISTS database;

      Creating a User

      To create a user profile for your database without specifying any privileges for it, run the following command:

      • CREATE USER username IDENTIFIED BY 'password';

      PostgreSQL uses a similar, but slightly different, syntax:

      • CREATE USER user WITH PASSWORD 'password';

      If you want to create a new user and grant them privileges in one command, you can do so by issuing a GRANT statement. The following command creates a new user and grants them full privileges to every database and table in the RDBMS:

      • GRANT ALL PRIVILEGES ON *.* TO 'username'@'localhost' IDENTIFIED BY 'password';

      Deleting a User

      Use the following syntax to delete a database user profile:

      • DROP USER IF EXISTS username;

      Note that this command will not by default delete any tables created by the deleted user, and attempts to access such tables may result in errors.

      Selecting a Database

      Before you can create a table, you first have to tell the RDBMS the database in which you'd like to create it. In MySQL and MariaDB, do so with the following syntax:
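
      • USE database;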

      In PostgreSQL, you must use the following command to select your desired database:
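
      • \connect database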

      Creating a Table

      The following command structure creates a new table with the name table, and includes two columns, each with their own specific data type:

      • CREATE TABLE table ( column_1 column_1_data_type, column_2 column_2_data_type );

      Deleting a Table

      To delete a table entirely, including all its data, run the following:

      • DROP TABLE IF EXISTS table;

      Inserting Data into a Table

      Use the following syntax to populate a table with one row of data:

      • INSERT INTO table ( column_A, column_B, column_C ) VALUES ( 'data_A', 'data_B', 'data_C' );

      You can also populate a table with multiple rows of data using a single command, like this:

      • INSERT INTO table ( column_A, column_B, column_C ) VALUES ( 'data_1A', 'data_1B', 'data_1C' ), ( 'data_2A', 'data_2B', 'data_2C' ), ( 'data_3A', 'data_3B', 'data_3C' );

      Deleting Data from a Table

      To delete a row of data from a table, use the following command structure. Note that value should be the value held in the specified column in the row that you want to delete:

      • DELETE FROM table WHERE column='value';

      Note: If you do not include a WHERE clause in a DELETE statement, as in the following example, it will delete all the data held in a table, but not the columns or the table itself:
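
      • DELETE FROM table;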

      Changing Data in a Table

      Use the following syntax to update the data held in a given row. Note that the WHERE clause at the end of the command tells SQL which row to update. value is the value held in column_A that aligns with the row you want to change.

      Note: If you fail to include a WHERE clause in an UPDATE statement, the command will replace the data held in every row of the table.

      • UPDATE table SET column_1 = value_1, column_2 = value_2 WHERE column_A=value;

      Inserting a Column

      The following command syntax will add a new column to a table:

      • ALTER TABLE table ADD COLUMN column data_type;

      Deleting a Column

      A command following this structure will delete a column from a table:

      • ALTER TABLE table DROP COLUMN column;

      Performing Basic Queries

      To view all the data from a single column in a table, use the following syntax:

      • SELECT column FROM table;

      To query multiple columns from the same table, separate the column names with a comma:

      • SELECT column_1, column_2 FROM table;

      You can also query every column in a table by replacing the names of the columns with an asterisk (*). In SQL, asterisks act as placeholders to represent “all”:
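
      • SELECT * FROM table;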

      Using WHERE Clauses

      You can narrow down the results of a query by appending the SELECT statement with a WHERE clause, like this:

      • SELECT column FROM table WHERE conditions_that_apply;

      For example, you can query all the data from a single row with a syntax like the following. Note that value should be a value held in both the specified column and the row you want to query:

      • SELECT * FROM table WHERE column = value;

      Working with Comparison Operators

      A comparison operator in a WHERE clause defines how the specified column should be compared against the value. Here are some common SQL comparison operators:

      Operator      What it does
      =             tests for equality
      !=            tests for inequality
      <             tests for less-than
      >             tests for greater-than
      <=            tests for less-than or equal-to
      >=            tests for greater-than or equal-to
      BETWEEN       tests whether a value lies within a given range
      IN            tests whether a row's value is contained in a set of specified values
      EXISTS        tests whether rows exist, given the specified conditions
      LIKE          tests whether a value matches a specified string
      IS NULL       tests for NULL values
      IS NOT NULL   tests for all values other than NULL
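
      For example, using the placeholder names from this guide, the following query returns every row whose column value falls within a given range:

      • SELECT * FROM table WHERE column BETWEEN value_1 AND value_2;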

      Working with Wildcards

      SQL allows the use of wildcard characters. These are useful if you're trying to find a specific entry in a table, but aren't sure of what that entry is exactly.

      Asterisks (*) are placeholders that represent “all”; the following will query every column in a table:
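
      • SELECT * FROM table;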

      Percentage signs (%) represent zero or more unknown characters.

      • SELECT * FROM table WHERE column LIKE 'val%';

      Underscores (_) are used to represent a single unknown character:

      • SELECT * FROM table WHERE column LIKE 'v_lue';

      Counting Entries in a Column

      The COUNT function is used to find the number of entries in a given column. The following syntax will return the total number of values held in column:

      • SELECT COUNT(column) FROM table;

      You can narrow down the results of a COUNT function by appending a WHERE clause, like this:

      • SELECT COUNT(column) FROM table WHERE column=value;

      Finding the Average Value in a Column

      The AVG function is used to find the average (in this case, the mean) amongst values held in a specific column. Note that the AVG function will only work with columns holding numeric values; when used on a column holding string values, it may return an error or 0:

      • SELECT AVG(column) FROM table;

      Finding the Sum of Values in a Column

      The SUM function is used to find the sum total of all the numeric values held in a column:

      • SELECT SUM(column) FROM table;

      As with the AVG function, if you run the SUM function on a column holding string values it may return an error or just 0, depending on your RDBMS.

      Finding the Largest Value in a Column

      To find the largest numeric value in a column or the last value alphabetically, use the MAX function:

      • SELECT MAX(column) FROM table;

      Finding the Smallest Value in a Column

      To find the smallest numeric value in a column or the first value alphabetically, use the MIN function:

      • SELECT MIN(column) FROM table;

      Sorting Results with ORDER BY Clauses

      An ORDER BY clause is used to sort query results. The following query syntax returns the values from column_1 and column_2 and sorts the results by the values held in column_1 in ascending order or, for string values, in alphabetical order:

      • SELECT column_1, column_2 FROM table ORDER BY column_1;

      To perform the same action, but order the results in descending or reverse alphabetical order, append the query with DESC:

      • SELECT column_1, column_2 FROM table ORDER BY column_1 DESC;

      Sorting Results with GROUP BY Clauses

      The GROUP BY clause is similar to the ORDER BY clause, but it is used to sort the results of a query that includes an aggregate function such as COUNT, MAX, MIN, or SUM. On their own, the aggregate functions described in the previous section will only return a single value. However, you can view the results of an aggregate function performed on every matching value in a column by including a GROUP BY clause.

      The following syntax will count the number of matching values in column_2 and group them in ascending or alphabetical order:

      • SELECT COUNT(column_1), column_2 FROM table GROUP BY column_2;

      To perform the same action, but group the results in descending or reverse alphabetical order, append the query with DESC:

      • SELECT COUNT(column_1), column_2 FROM table GROUP BY column_2 DESC;

      Querying Multiple Tables with JOIN Clauses

      JOIN clauses are used to create result-sets that combine rows from two or more tables. A JOIN clause will only work if the two tables each have a column with an identical name and data type, as in this example:

      • SELECT table_1.column_1, table_2.column_2 FROM table_1 JOIN table_2 ON table_1.common_column=table_2.common_column;

      This is an example of an INNER JOIN clause. An INNER JOIN will return all the records that have matching values in both tables, but won't show any records that don't have matching values.

      It's possible to return all the records from one of two tables, including values that do not have a corresponding match in the other table, by using an outer JOIN clause. Outer JOIN clauses are written as either LEFT JOIN or RIGHT JOIN.

      A LEFT JOIN clause returns all the records from the “left” table and only the matching records from the “right” table. In the context of outer JOIN clauses, the left table is the one referenced in the FROM clause, and the right table is any other table referenced after the JOIN statement. The following will show every record from table_1 and only the matching values from table_2. Any values that do not have a match in table_2 will appear as NULL in the result-set:

      • SELECT table_1.column_1, table_2.column_2 FROM table_1 LEFT JOIN table_2 ON table_1.common_column=table_2.common_column;

      A RIGHT JOIN clause functions the same as a LEFT JOIN, but it returns all the results from the right table, and only the matching values from the left:

      • SELECT table_1.column_1, table_2.column_2 FROM table_1 RIGHT JOIN table_2 ON table_1.common_column=table_2.common_column;

      Combining Multiple SELECT Statements with UNION Clauses

      A UNION operator is useful for combining the results of two (or more) SELECT statements into a single result-set:

      • SELECT column_1 FROM table UNION SELECT column_2 FROM table;

      Additionally, the UNION clause can combine two (or more) SELECT statements querying different tables into the same result-set:

      • SELECT column FROM table_1 UNION SELECT column FROM table_2;

      Conclusion

      This guide covers some of the more common commands in SQL used to manage databases, users, and tables, and query the contents held in those tables. There are, however, many combinations of clauses and operators that all produce unique result-sets. If you're looking for a more comprehensive guide to working with SQL, we encourage you to check out Oracle's Database SQL Reference.

      Additionally, if there are common SQL commands you'd like to see in this guide, please ask or make suggestions in the comments below.




      Kubernetes Networking Under the Hood


      Introduction

      Kubernetes is a powerful container orchestration system that can manage the deployment and operation of containerized applications across clusters of servers. In addition to coordinating container workloads, Kubernetes provides the infrastructure and tools necessary to maintain reliable network connectivity between your applications and services.

      The Kubernetes cluster networking documentation states that the basic requirements of a Kubernetes network are:

      • all containers can communicate with all other containers without NAT
      • all nodes can communicate with all containers (and vice-versa) without NAT
      • the IP that a container sees itself as is the same IP that others see it as

      In this article we will discuss how Kubernetes satisfies these networking requirements within a cluster: how data moves inside a pod, between pods, and between nodes.

      We will also show how a Kubernetes Service can provide a single static IP address and DNS entry for an application, easing communication with services that may be distributed among multiple constantly scaling and shifting pods.

      If you are unfamiliar with the terminology of Kubernetes pods and nodes or other basics, our article An Introduction to Kubernetes covers the general architecture and components involved.

      Let’s first take a look at the networking situation within a single pod.

      Pod Networking

      In Kubernetes, a pod is the most basic unit of organization: a group of tightly-coupled containers that are all closely related and perform a single function or service.

      Networking-wise, Kubernetes treats pods similar to a traditional virtual machine or a single bare-metal host: each pod receives a single unique IP address, and all containers within the pod share that address and communicate with each other over the lo loopback interface using the localhost hostname. This is achieved by assigning all of the pod’s containers to the same network stack.

      This situation should feel familiar to anybody who has deployed multiple services on a single host before the days of containerization. All the services need to use a unique port to listen on, but otherwise communication is uncomplicated and has low overhead.
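
      As a minimal sketch of this behavior (the pod, container, and image names here are only illustrative), the following pod runs an nginx container and a busybox container that polls it over localhost, since both containers share the pod's network namespace:

      apiVersion: v1
      kind: Pod
      metadata:
        name: shared-namespace-demo
      spec:
        containers:
        - name: web
          image: nginx
          ports:
          - containerPort: 80
        - name: poller
          image: busybox
          # wget reaches the web container through the shared localhost interface
          command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 5; done"]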

      Pod to Pod Networking

      Most Kubernetes clusters will need to deploy multiple pods per node. Pod to pod communication may happen between two pods on the same node, or between two different nodes.

      Pod to Pod Communication on One Node

      On a single node you can have multiple pods that need to communicate directly with each other. Before we trace the route of a packet between pods, let’s inspect the networking setup of a node. The following diagram provides an overview, which we will walk through in detail:

      Networking overview of a single Kubernetes node

      Each node has a network interface – eth0 in this example – attached to the Kubernetes cluster network. This interface sits within the node’s root network namespace. This is the default namespace for networking devices on Linux.

      Just as process namespaces enable containers to isolate running applications from each other, network namespaces isolate network devices such as interfaces and bridges. Each pod on a node is assigned its own isolated network namespace.

      Pod namespaces are connected back to the root namespace with a virtual ethernet pair, essentially a pipe between the two namespaces with an interface on each end (here we’re using veth1 in the root namespace, and eth0 within the pod).

      Finally, the pods are connected to each other and to the node’s eth0 interface via a bridge, br0 (your node may use something like cbr0 or docker0). A bridge essentially works like a physical ethernet switch, using either ARP (address resolution protocol) or IP-based routing to look up other local interfaces to direct traffic to.
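
      If you'd like to see these pieces on one of your own nodes, you can list them from the root network namespace. These commands are only an inspection sketch; the bridge and veth interface names will vary with your cluster's network plugin:

      • ip addr show
      • ip link show type bridge
      • ip link show type veth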

      Let’s trace a packet from pod1 to pod2 now:

      • pod1 creates a packet with pod2‘s IP as its destination
      • The packet travels over the virtual ethernet pair to the root network namespace
      • The packet continues to the bridge br0
      • Because the destination pod is on the same node, the bridge sends the packet to pod2‘s virtual ethernet pair
      • the packet travels through the virtual ethernet pair, into pod2‘s network namespace and the pod’s eth0 network interface

      Now that we’ve traced a packet from pod to pod within a node, let’s look at how pod traffic travels between nodes.

      Pod to Pod Communication Between Two Nodes

      Because each pod in a cluster has a unique IP, and every pod can communicate directly with all other pods, a packet moving between pods on two different nodes is very similar to the previous scenario.

      Let’s trace a packet from pod1 to pod3, which is on a different node:

      Networking diagram between two Kubernetes nodes

      • pod1 creates a packet with pod3‘s IP as its destination
      • The packet travels over the virtual ethernet pair to the root network namespace
      • The packet continues to the bridge br0
      • The bridge finds no local interface to route to, so the packet is sent out the default route toward eth0
      • Optional: if your cluster requires a network overlay to properly route packets to nodes, the packet may be encapsulated in a VXLAN packet (or another network virtualization technique) before heading to the network. Alternately, the network itself may be set up with the proper static routes, in which case the packet travels to eth0 and out to the network unaltered.
      • The packet enters the cluster network and is routed to the correct node.
      • The packet enters the destination node on eth0
      • Optional: if your packet was encapsulated, it will be de-encapsulated at this point
      • The packet continues to the bridge br0
      • The bridge routes the packet to the destination pod’s virtual ethernet pair
      • The packet passes through the virtual ethernet pair to the pod’s eth0 interface

      Now that we are familiar with how packets are routed via pod IP addresses, let’s take a look at Kubernetes services and how they build on top of this infrastructure.

      Pod to Service Networking

      It would be difficult to send traffic to a particular application using just pod IPs, as the dynamic nature of a Kubernetes cluster means pods can be moved, restarted, upgraded, or scaled in and out of existence. Additionally, some services will have many replicas, so we need some way to load balance between them.

      Kubernetes solves this problem with Services. A Service is an API object that maps a single virtual IP (VIP) to a set of pod IPs. Additionally, Kubernetes provides a DNS entry for each service’s name and virtual IP, so services can be easily addressed by name.

      The mapping of virtual IPs to pod IPs within the cluster is coordinated by the kube-proxy process on each node. This process sets up either iptables or IPVS to automatically translate VIPs into pod IPs before sending the packet out to the cluster network. Individual connections are tracked so packets can be properly de-translated when they return. IPVS and iptables can both do load balancing of a single service virtual IP into multiple pod IPs, though IPVS has much more flexibility in the load balancing algorithms it can use.

      Note: these translation and connection tracking processes happen entirely in the Linux kernel. kube-proxy reads from the Kubernetes API and updates iptables or IPVS, but it is not in the data path for individual packets. This is more efficient and higher performance than previous versions of kube-proxy, which functioned as a user-land proxy.
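
      If your cluster runs kube-proxy in iptables mode, you can inspect the NAT rules it maintains for Services. This is only an inspection sketch, and the rules will differ from cluster to cluster:

      • sudo iptables -t nat -L KUBE-SERVICES -n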

      Let’s follow the route a packet takes from a pod, pod1 again, to a service, service1:

      Networking diagram between two Kubernetes nodes, showing DNAT translation of virtual IPs

      • pod1 creates a packet with service1‘s IP as its destination
      • The packet travels over the virtual ethernet pair to the root network namespace
      • The packet continues to the bridge br0
      • The bridge finds no local interface to route the packet to, so the packet is sent out the default route toward eth0
      • Iptables or IPVS, set up by kube-proxy, match the packet’s destination IP and translate it from a virtual IP to one of the service’s pod IPs, using whichever load balancing algorithms are available or specified
      • Optional: your packet may be encapsulated at this point, as discussed in the previous section
      • The packet enters the cluster network and is routed to the correct node.
      • The packet enters the destination node on eth0
      • Optional: if your packet was encapsulated, it will be de-encapsulated at this point
      • The packet continues to the bridge br0
      • The packet is sent to the virtual ethernet pair via veth1
      • The packet passes through the virtual ethernet pair and enters the pod network namespace via its eth0 network interface

      When the packet returns to node1, the VIP-to-pod-IP translation will be reversed, and the packet will travel back through the bridge and virtual ethernet pair to the correct pod.

      Conclusion

      In this article we’ve reviewed the internal networking infrastructure of a Kubernetes cluster. We’ve discussed the building blocks that make up the network, and detailed the hop-by-hop journey of packets in different scenarios.

      For more information about Kubernetes, take a look at our Kubernetes tutorials tag and the official Kubernetes documentation.


