WordPress has always been a user-friendly platform that is flexible and easy to learn. However, those without coding skills may struggle to perform certain customization tasks.
Fortunately, Version 5.9 introduced Full-Site Editing, which brings together both new and existing features to provide centralized control of your entire site. One of the most significant changes is the addition of new site-level blocks.
In this post, we’ll discuss everything you need to know about site blocks in WordPress. We’ll also look at some examples and show you how to use them. Let’s get started!
An Introduction to WordPress Blocks
Blocks have been a fundamental aspect of WordPress since late 2018. Version 5.0 of WordPress replaced the classic editor with a new WordPress block editor called Gutenberg.
A block is a specific element that you can add to your site. For instance, there are blocks for images, headings, lists, paragraphs, and more. This system provides users with a simple and intuitive way to create a unique website.
Each block comes with a set of customization options, such as alignment, color, and size. Additionally, blocks can be moved around via a drag-and-drop editor, facilitating a simpler page-building process.
Common WordPress Blocks
Gutenberg introduced blocks for various purposes. There may be some blocks that you will never touch. However, there are others you’ll probably use every time you create a post. Let’s take a look at some of the most common options in the new block editor.
The Heading Block
The Heading block provides several choices for configuring and styling headings:
This block can help you organize your content more efficiently. For instance, you can select the heading level H2 for main sections, and H3-H6 for subsections. Additionally, you can add a hyperlink to the heading.
The Paragraph Block
The Paragraph block is the most frequently used block in the Gutenberg editor:
This element enables users to write text and customize the typography. Usually, headings are used to group relevant paragraphs together and split up the page’s content.
The Image Block
Image blocks enable you to upload photos or artwork to your site:
You can then use the settings to resize and crop your images. You can also add captions and alt text.
The Video Block
You can also add videos to your post. There are different options for displaying videos:
For instance, you can upload them to your site’s Media Library, or embed them from YouTube and other video-sharing platforms. You can also add text tracks such as subtitles, captions, chapters, and descriptions to the block.
The List Block
The List block enables you to insert bulleted or numbered lists into your page:
This block comes with styling options such as bold and italics, as well as more intricate rich-text controls. Additionally, you can add hyperlinks to list items.
New Site Blocks in WordPress 5.9
Now that Full-Site Editing is here, individual blocks can also be used for editing your site’s theme. You can use the new editor to customize all aspects of your site:
This feature has replaced the Customizer. However, it only supports block-based themes, such as Twenty Twenty-Two. If you’re using a ‘standard’ theme, you’ll still have access to the Customizer (and the Gutenberg block editor), but you won’t be able to use the Full-Site Editor.
The Full-Site Editor comes with templates for different pages, such as your archive or home page. Additionally, you can customize more areas of your site, such as your header and footer. There is also a new Global Styles feature, which enables you to define site-wide settings for your blocks.
Related: Decoding WordPress: An Introduction to Global Styles
In addition, the Full-Site Editor has introduced a range of ‘theme blocks’. Often nicknamed ‘site blocks’, these new additions enable you to use and edit global elements such as the site logo and tagline, navigation, and post lists.
The Benefits of Using Theme Blocks
The new theme blocks were introduced to make the web design process in WordPress simpler and more streamlined. Previously, the WordPress theme editor had limited customization options, and users who wanted unique designs often needed to use custom code.
Theme blocks remove the need for coding (and third-party page builder plugins) in most cases. Each block has a variety of styling and display options, offering users the flexibility to create almost any layout and design. Whether you’re a WordPress beginner or an experienced web developer, the process of creating custom sites is now faster and easier.
Let’s take a look at some notable site blocks that have been added with WordPress 5.9. This is just a brief introduction – we’ll delve more deeply into each of these Gutenberg blocks shortly.
Navigation
This feature enables you to add your site’s navigation menu to a page:
You can customize both the design and structure of your menu. For instance, you can add submenu items, change the color and alignment, and more.
Query Loop
A query loop is a block that displays a set of posts based on specific conditions and parameters:
This is a great way to showcase posts on a particular topic. You can filter content by post categories, tags, authors, and keywords. The block also comes with different styling options for the post feed.
Template Part
Template parts are used to organize the structure of a site. They’re essentially collections or containers of other content blocks:
They can only be used when editing templates, so you’ll find this block in the Full-Site Editor. Each template part has a user-generated name. When adding a block, you can choose an existing template or create a new one.
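If you're curious what's happening under the hood, a block theme stores each template part as a plain HTML file of block markup, and the Full-Site Editor writes that markup for you as you add blocks. A simplified sketch of a header part (the file name and blocks here are just an example, not taken from any particular theme):

parts/header.html
<!-- wp:group {"layout":{"type":"flex"}} -->
<div class="wp-block-group">
<!-- wp:site-logo /-->
<!-- wp:site-title /-->
<!-- wp:navigation /-->
</div>
<!-- /wp:group -->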
Related: Decoding WordPress: Custom Templates and Template Parts
How to Use Common WordPress Site Blocks (6 New Theme Blocks)
Now, let’s take a detailed look at a few common theme blocks. For each of the new blocks, we’ll discuss its purpose and the steps for using it.
1. Navigation
Navigation blocks are used for editing your site’s menus. When you add this block to your page, you are given three options: select an existing menu, add all of the site’s pages, or start with an empty menu:
You can include additional menu items as well as indented items, which appear as subpages. Moreover, you can change the links and names of each item using the “anchor” icon in the toolbar. The toolbar also enables you to change the alignment and other layout settings.
2. Login/out
The login/out block provides a simple way to add a login and logout button to your website:
It automatically displays the correct link depending on the status of the user. You also have the option to display the login/out button as a switch.
3. Template Part
This element can be thought of as a group of blocks. The template part helps you organize the structure of your page. These blocks can only be used when editing templates, and they’re an excellent way to manage global areas such as headers and footers:
Template parts can be added in the site editor. Upon selecting the block, you’ll be asked if you want to add an existing template or create a new one.
If you opt for the latter, you’ll be prompted to enter a name for the template. Then, you can go ahead and add in blocks to create the desired layout.
4. Site Title
As the name suggests, this block is used to display the title of your site:
By default, the title links to the home page, but this can be turned off with a toggle switch in the settings. There’s also a range of styling options, including text and background colors, font size, line height, letter spacing, and other typography settings.
5. Post Excerpt
Post excerpts give readers a sneak peek into a post, and can help them decide whether or not they wish to read the entire article:
Most of the time, this block will be a child element of a query loop. It displays either the first 55 words of a post, or the set excerpt for that post. You can also add a “read more” link. This will take the user directly to the full post.
6. Query Loop
Query loops can be used to display a set of posts based on specific conditions and parameters. For example, you may use this block to show all posts in a particular category or by a specific author:
Query loops are made up of multiple blocks, including post titles, dates, excerpts, and featured images. You have the option to start blank and add nested blocks manually, or start with a premade layout and edit from there.
You can then alter the width, alignment, arrangement, and colors. You can also change the number of posts that the query loop displays.
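If you ever peek at the underlying template, you'll see that the editor saves the Query Loop and its nested blocks as block markup. A simplified example (the attributes shown are illustrative):

<!-- wp:query {"query":{"perPage":3,"postType":"post","order":"desc","orderBy":"date"}} -->
<div class="wp-block-query">
<!-- wp:post-template -->
<!-- wp:post-title /-->
<!-- wp:post-date /-->
<!-- wp:post-excerpt /-->
<!-- /wp:post-template -->
</div>
<!-- /wp:query -->

You never have to write this markup yourself; it's generated as you adjust the block's settings.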
Conclusion
In the past, customizing WordPress sites may have been challenging for some users. However, with the release of new site blocks in version 5.9, the process has become a lot easier.
Thanks to the Full-Site Editing feature, you can now make changes to your entire site from a unified interface. You can also customize individual elements such as the site title and tagline, navigation menu, and template parts like headers and footers.
Related: WordPress 6.0: Making Gutenberg “Guten-Better”
If you’re looking for a fast, reliable, and affordable place to host your WordPress site, we’ve got you covered. Check out our DreamHost WordPress Hosting plans!
The author selected Open Sourcing Mental Illness to receive a donation as part of the Write for DOnations program.
Introduction
When creating a web application using Vue.js, it’s a best practice to construct your application in small, modular blocks of code. Not only does this keep the parts of your application focused, but it also makes the application easier to update as it grows in complexity. Since an app generated from the Vue CLI requires a build step, you have access to Single-File Components (SFCs) to introduce modularity into your app. SFCs have the .vue extension and contain <template>, <script>, and <style> tags, and can be implemented in other components.
SFCs give developers a way to create their own HTML tags for each of their components and then use them in their application. In the same way that the <p> HTML tag renders a paragraph in the browser, a component tag renders the SFC, along with any non-rendered functionality it contains, wherever it is placed in the Vue template.
In this tutorial, you are going to create an SFC and use props to pass data down and slots to inject content between tags. By the end of this tutorial, you will have a general understanding of what SFCs are and how to approach code re-usability.
Prerequisites
Step 1 — Setting Up the Project
In this tutorial, you are going to be creating an airport card component that displays a number of airports and their codes in a series of cards. After following the Prerequisites section, you will have a new Vue project named sfc-project. In this section, you will import data into this generated application. This data will be an array of objects consisting of a few properties that you will use to display information in the browser.
Once the project is generated, open your terminal and cd or change directory into the root src folder:
From there, create a new directory named data with the mkdir command, then create a new file with the name us-airports.js using the touch command:
mkdir data
touch data/us-airports.js
In your text editor of choice, open this new JavaScript file and add in the following local data:
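A minimal sketch of this file (the property names and the airports themselves are illustrative placeholders for whatever you'd like to display):

src/data/us-airports.js
export default [
  {
    name: 'Cincinnati/Northern Kentucky International Airport',
    abbreviation: 'CVG',
    city: 'Hebron',
    country: 'United States'
  },
  {
    name: 'Seattle-Tacoma International Airport',
    abbreviation: 'SEA',
    city: 'Seattle',
    country: 'United States'
  },
  {
    name: 'Minneapolis-Saint Paul International Airport',
    abbreviation: 'MSP',
    city: 'Bloomington',
    country: 'United States'
  }
]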
This data is an array of objects consisting of a few airports in the United States. This will be rendered in a Single-File Component later in the tutorial.
Save and exit the file.
Next, you will create another set of airport data. This data will consist of European airports. Using the touch command, create a new JavaScript file named eu-airports.js:
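touch data/eu-airports.js

Open the file and add data in the same shape as the US file. A sketch (the airports are example picks from each country):

src/data/eu-airports.js
export default [
  {
    name: 'Charles de Gaulle Airport',
    abbreviation: 'CDG',
    city: 'Paris',
    country: 'France'
  },
  {
    name: 'Frankfurt Airport',
    abbreviation: 'FRA',
    city: 'Frankfurt',
    country: 'Germany'
  },
  {
    name: 'Leonardo da Vinci-Fiumicino Airport',
    abbreviation: 'FCO',
    city: 'Rome',
    country: 'Italy'
  }
]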
This set of data is for European airports in France, Germany, and Italy, respectively.
Save and exit the file.
Next, in the root directory, run the following command in your terminal to start your Vue CLI application running on a local development server:
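npm run serve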
This will open the application in your browser on localhost:8080. The port number may be different on your machine.
Visit the address in your browser. You will find the following start up screen:
Next, start a new terminal and open your App.vue file in your src folder. In this file, delete the img and HelloWorld tags in the <template> and the components section and import statement in the <script>. Your App.vue will resemble the following:
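A rough sketch of the stripped-down file (the remaining styles are whatever Vue CLI generated for you):

src/App.vue
<template>
</template>

<script>
export default {
  name: 'App'
}
</script>

<style>
#app {
  font-family: Avenir, Helvetica, Arial, sans-serif;
  text-align: center;
  color: #2c3e50;
  margin-top: 60px;
}
</style>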
After this, import the us-airports.js file that you created earlier. In order to make this data reactive so you can use it in the <template>, you need to import the ref function from vue. You will need to return the airport reference so the data can be used in the HTML template.
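A sketch of what this can look like (the markup, class names, and displayed fields are illustrative, and match the styles added next):

src/App.vue
<template>
  <div class="wrapper">
    <div class="card" v-for="airport in airports" :key="airport.abbreviation">
      <h2>{{ airport.abbreviation }}</h2>
      <p>{{ airport.name }}</p>
      <p>{{ airport.city }}, {{ airport.country }}</p>
    </div>
  </div>
</template>

<script>
import { ref } from 'vue'
import allAirports from './data/us-airports.js'

export default {
  name: 'App',
  setup() {
    const airports = ref(allAirports)
    return { airports }
  }
}
</script>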
In this snippet, you imported the data and rendered it using <div> elements and the v-for directive in the template.
At this point, the data is imported and ready to be used in the App.vue component. But first, add some styling to make the data easier for users to read. In this same file, add the following CSS in the <style> tag:
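A sketch of styles along these lines (the exact values are up to you):

<style>
.wrapper {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  grid-gap: 16px;
  max-width: 960px;
  margin: 0 auto;
}

.card {
  border: 1px solid #ccc;
  border-radius: 4px;
  padding: 16px;
  text-align: left;
}
</style>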
In this case, you are using CSS Grid to compose these cards of airport codes into a grid of three. Notice how this grid is set up in the .wrapper class. The .card class is the card or section that contains each airport code, name, and location. If you would like to learn more about CSS, check out our How To Style HTML with CSS.
Open your browser and navigate to localhost:8080. You will find a number of cards with airport codes and information:
Now that you have set up your initial app, you can refactor the data into a Single-File Component in the next step.
Step 2 — Creating a Single-File Component
Since Vue CLI uses Webpack to build your app into something the browser can read, your app can use SFCs or .vue files instead of plain JavaScript. These files are a way for you to create small blocks of scalable and reusable code. If you were to change one component, it would be updated everywhere.
These .vue components usually consist of three parts: <template>, <script>, and <style> elements. SFC components can have either scoped or unscoped styles. When a component has scoped styles, the CSS between the <style> tags will only affect the HTML in the <template> in the same file. If a component has unscoped styles, the CSS will affect the parent component as well as its children.
With your project successfully set up, you are now going to break these airport cards into a component called AirportCards.vue. As it stands now, the HTML in the App.vue is not very reusable. You will break this off into its own component so you can import it anywhere else into this app while preserving the functionality and visuals.
In your terminal, create this .vue file in the components directory:
touch src/components/AirportCards.vue
Open the AirportCards.vue component in your text editor. To illustrate how you can re-use blocks of code using components, move most of the code from the App.vue file to the AirportCards.vue component:
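As a sketch, the new component takes over the template, the data import, and the card styles, and App.vue is reduced to rendering it (the details are illustrative):

src/components/AirportCards.vue
<template>
  <div class="wrapper">
    <div class="card" v-for="airport in airports" :key="airport.abbreviation">
      <h2>{{ airport.abbreviation }}</h2>
      <p>{{ airport.name }}</p>
      <p>{{ airport.city }}, {{ airport.country }}</p>
    </div>
  </div>
</template>

<script>
import { ref } from 'vue'
import allAirports from '../data/us-airports.js'

export default {
  name: 'AirportCards',
  setup() {
    const airports = ref(allAirports)
    return { airports }
  }
}
</script>

<style scoped>
/* .wrapper and .card rules moved over from App.vue */
</style>

src/App.vue
<template>
  <AirportCards />
</template>

<script>
import AirportCards from './components/AirportCards.vue'

export default {
  name: 'App',
  components: {
    AirportCards
  }
}
</script>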
Now that AirportCards is a standalone component, you have put it in the <template> HTML as you would a <p> tag.
When you open up localhost:8080 in your browser, nothing will change. The same three airport cards will still display, because you are rendering the new SFC in the <AirportCards /> element.
Next, add this same component in the template again to illustrate the re-usability of components:
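For example, in the App.vue template:

<template>
  <AirportCards />
  <airport-cards />
</template>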
You may notice that this new instance of AirportCards.vue uses kebab-case rather than PascalCase. When referencing components, Vue accepts either: each capitalized word in the PascalCase name becomes a lowercase, hyphen-separated word in the kebab-case tag. The same applies to props, which will be explained in the next section.
Note: The case that you use is up to personal preference, but consistency is important. Vue.js recommends using kebab-case as it follows the HTML standard.
Open the browser and visit localhost:8080. You will find the cards duplicated:
This adds modularity to your app, but the data is still static. The row of cards is useful if you want to show the same three airports, but changing the data source would require changing the hard-coded data. In the next step, you are going to expand this component further by registering props and passing data from the parent down to the child component.
Step 3 — Leveraging Props to Pass Down Data
In the previous step, you created an AirportCards.vue component that rendered a number of cards from the data in the us-airports.js file. In addition to that, you also doubled the same component reference to illustrate how you can easily duplicate code by adding another instance of that component in the <template>.
However, leaving the data static will make it difficult to change the data in the future. When working with SFCs, it can help if you think of components as functions. These functions are components that can take in arguments (props) and return something (HTML). For this case, you will pass data into the airports parameter to return dynamic HTML.
Open the AirportCards.vue component in your text editor. You are currently importing data from the us-airports.js file. Remove this import statement, as well as the setup function in the <script> tag:
Save the file. At this point, nothing will render in the browser.
Next, move forward by defining a prop. This prop can be named anything; it just describes the data coming in and associates it with a name.
To create a prop, add the props property in the component. The value of this is a series of key/value pairs. The key is the name of the prop, and the value is the description of the data. It’s best to provide as much description as you can:
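A sketch of the component's <script> with an airports prop described by its type and a required flag:

src/components/AirportCards.vue
<script>
export default {
  name: 'AirportCards',
  props: {
    airports: {
      type: Array,
      required: true
    }
  }
}
</script>

The template's v-for can stay exactly as it was; airports now refers to the incoming prop instead of locally imported data.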
Now, in AirportCards.vue, the property airports refers to the data that is passed in. Save and exit from the file.
Next, open the App.vue component in your text editor. Like before, you will need to import data from the us-airports.js file and import the ref function from Vue to make it reactive for the HTML template:
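A sketch of App.vue passing the data down through the prop (the binding name matches the prop defined above):

src/App.vue
<template>
  <AirportCards :airports="airports" />
  <airport-cards />
</template>

<script>
import { ref } from 'vue'
import AirportCards from './components/AirportCards.vue'
import usAirports from './data/us-airports.js'

export default {
  name: 'App',
  components: {
    AirportCards
  },
  setup() {
    const airports = ref(usAirports)
    return { airports }
  }
}
</script>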
If you open your browser and visit localhost:8080, you will find the same US airports as before:
There is another AirportCards.vue instance in your template. Since you defined props within that component, you can pass any data with the same structure to render a number of cards from different airports. This is where the eu-airports.js file from the initial setup comes in.
In App.vue, import the eu-airports.js file, wrap it in the ref function to make it reactive, and return it:
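Continuing the sketch, App.vue now feeds each instance its own dataset (the airportsEurope name is an arbitrary choice):

src/App.vue
<template>
  <AirportCards :airports="airports" />
  <airport-cards :airports="airportsEurope" />
</template>

<script>
import { ref } from 'vue'
import AirportCards from './components/AirportCards.vue'
import usAirports from './data/us-airports.js'
import euAirports from './data/eu-airports.js'

export default {
  name: 'App',
  components: {
    AirportCards
  },
  setup() {
    const airports = ref(usAirports)
    const airportsEurope = ref(euAirports)
    return { airports, airportsEurope }
  }
}
</script>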
Open your browser and visit localhost:8080. You will find the European airport data rendered below the US airport data:
You have now successfully passed in a different dataset into the same component. With props, you are essentially re-assigning data to a new name and using that new name to reference data in the child component.
At this point, this application is starting to become more dynamic. But there is still something else that you can do to make this even more focused and re-usable. In Vue.js, you can use something called slots. In the next step, you are going to create a Card.vue component with a default slot that injects the HTML into a placeholder.
Step 4 — Creating a General Card Component Using Slots
Slots are a great way to create re-usable components, especially if you do not know if the HTML in that component will be similar. In the previous step, you created another AirportCards.vue instance with different data. In that example, the HTML is the same for each. It’s a <div> in a v-for loop with paragraph tags.
Open your terminal and create a new file using the touch command. This file will be named Card.vue:
touch src/components/Card.vue
In your text editor, open the new Card.vue component. You are going to take some of the CSS from AirportCards.vue and add it into this new component.
Create a <style> tag and add in the following CSS:
In between the <div class="card">, add the <slot /> component. This is a component that is provided to you by Vue. There is no need to import this component; it is globally imported by Vue.js:
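Putting those two pieces together, a sketch of Card.vue (the .card rules are the same ones used for the airport cards earlier):

src/components/Card.vue
<template>
  <div class="card">
    <slot />
  </div>
</template>

<style scoped>
.card {
  border: 1px solid #ccc;
  border-radius: 4px;
  padding: 16px;
  text-align: left;
}
</style>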
Since you have a <slot /> in your Card.vue component, the HTML between the <card> tags is injected in its place while preserving all styles that have been associated with a card.
Save the file. When you open your browser at localhost:8080, you will find the same cards that you’ve had previously. The difference now is that AirportCards.vue references the Card.vue component:
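A sketch of the refactored component, with the card markup and styles now delegated to Card.vue:

src/components/AirportCards.vue
<template>
  <div class="wrapper">
    <Card v-for="airport in airports" :key="airport.abbreviation">
      <h2>{{ airport.abbreviation }}</h2>
      <p>{{ airport.name }}</p>
      <p>{{ airport.city }}, {{ airport.country }}</p>
    </Card>
  </div>
</template>

<script>
import Card from './Card.vue'

export default {
  name: 'AirportCards',
  components: {
    Card
  },
  props: {
    airports: {
      type: Array,
      required: true
    }
  }
}
</script>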
To show the power of slots, open the App.vue component in your application and import the Card.vue component:
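A sketch of App.vue using the generic card with arbitrary content between the tags (the headings and counts are illustrative):

src/App.vue
<template>
  <Card>
    <h2>US Airports</h2>
    <p>{{ airports.length }} airports</p>
  </Card>
  <Card>
    <h2>European Airports</h2>
    <p>{{ airportsEurope.length }} airports</p>
  </Card>
  <AirportCards :airports="airports" />
  <airport-cards :airports="airportsEurope" />
</template>

<script>
import { ref } from 'vue'
import AirportCards from './components/AirportCards.vue'
import Card from './components/Card.vue'
import usAirports from './data/us-airports.js'
import euAirports from './data/eu-airports.js'

export default {
  name: 'App',
  components: {
    AirportCards,
    Card
  },
  setup() {
    const airports = ref(usAirports)
    const airportsEurope = ref(euAirports)
    return { airports, airportsEurope }
  }
}
</script>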
Save the file and visit localhost:8080 in the browser. Your browser will now render additional elements displaying the number of airports in the datasets:
The HTML between the <card /> tags is not exactly the same, but it still renders a generic card. When leveraging slots, you can use this functionality to create small, re-usable components that have a number of different uses.
Conclusion
In this tutorial, you created single-file components and used props and slots to create reusable blocks of code. In the project, you created an AirportCards.vue component that renders a number of airport cards. You then broke up the AirportCards.vue component further into a Card.vue component with a default slot.
You ended up with a number of components that are dynamic and can be used in many different contexts, all while keeping your code maintainable and in line with the D.R.Y. software principle.
To learn more about Vue components, it is recommended to read through the Vue documentation. For more tutorials on Vue, check out the How To Develop Websites with Vue.js series page.
This article supplements a webinar series on doing CI/CD with Kubernetes. The series discusses how to take a Cloud Native approach to building, testing, and deploying applications, covering release management, Cloud Native tools, Service Meshes, and CI/CD tools that can be used with Kubernetes. It is designed to help developers and businesses that are interested in integrating CI/CD best practices with Kubernetes into their workflows.
This tutorial includes the concepts and commands from the first session of the series, Building Blocks for Doing CI/CD with Kubernetes.
Introduction
If you are getting started with containers, you will likely want to know how to automate building, testing, and deployment. By taking a Cloud Native approach to these processes, you can leverage the right infrastructure APIs to package and deploy applications in an automated way.
Two building blocks for doing automation include container images and container orchestrators. Over the last year or so, Kubernetes has become the default choice for container orchestration. In this first article of the CI/CD with Kubernetes series, you will:
Build container images with Docker, Buildah, and Kaniko.
Set up a Kubernetes cluster with Terraform, and create Deployments and Services.
Extend the functionality of a Kubernetes cluster with Custom Resources.
By the end of this tutorial, you will have container images built with Docker, Buildah, and Kaniko, and a Kubernetes cluster with Deployments, Services, and Custom Resources.
Future articles in the series will cover related topics: package management for Kubernetes, CI/CD tools like Jenkins X and Spinnaker, Service Meshes, and GitOps.
Prerequisites
Step 1 — Building Container Images with Docker and Buildah
A container image is a self-contained entity with its own application code, runtime, and dependencies that you can use to create and run containers. You can use different tools to create container images, and in this step you will build containers with two of them: Docker and Buildah.
Building Container Images with Dockerfiles
Docker builds your container images automatically by reading instructions from a Dockerfile, a text file that includes the commands required to assemble a container image. Using the docker image build command, you can create an automated build that will execute the command-line instructions provided in the Dockerfile. When building the image, you will also pass the build context with the Dockerfile, which contains the set of files required to create an environment and run an application in the container image.
Typically, you will create a project folder for your Dockerfile and build context. Create a folder called demo to begin:
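mkdir demo
cd demo

Inside this folder, create a Dockerfile. A minimal sketch that matches the description that follows (the exact instructions may differ from the original file):

demo/Dockerfile
FROM ubuntu:16.04

RUN apt-get update && apt-get install -y nginx

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]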
This Dockerfile consists of a set of instructions that will build an image to run Nginx. During the build process ubuntu:16.04 will function as the base image, and the nginx package will be installed. Using the CMD instruction, you've also configured nginx to be the default command when the container starts.
Next, you'll build the container image with the docker image build command, using the current directory (.) as the build context. Passing the -t option to this command names the image nkhare/nginx:latest:
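docker image build -t nkhare/nginx:latest .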
Your image is now built. You can list your Docker images using the following command:
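docker image ls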
Output
REPOSITORY TAG IMAGE ID CREATED SIZE
nkhare/nginx latest 4073540cbcec 3 seconds ago 171MB
ubuntu 16.04 7aa3602ab41e 11 days ago
You can now use the nkhare/nginx:latest image to create containers.
Building Container Images with Project Atomic-Buildah
Buildah is a CLI tool, developed by Project Atomic, for quickly building Open Container Initiative (OCI)-compliant images. OCI provides specifications for container runtimes and images in an effort to standardize industry best practices.
Buildah can create an image either from a working container or from a Dockerfile. It can build images completely in user space without the Docker daemon, and can perform image operations like build, list, push, and tag. In this step, you'll compile Buildah from source and then use it to create a container image.
To install Buildah you will need the required dependencies, including tools that will enable you to manage packages and package security, among other things. Run the following commands to install these packages:
You will see the following output, indicating a successful installation:
Output
go version go1.8 linux/amd64
You can now get the buildah source code to create its package, along with the runc binary. runc is the implementation of the OCI container runtime, which you will use to run your Buildah containers.
Run the following commands to install runc and buildah:
Next, create the /etc/containers/registries.conf file to configure your container registries:
sudo nano /etc/containers/registries.conf
Add the following content to the file to specify your registries:
/etc/containers/registries.conf
# This is a system-wide configuration file used to
# keep track of registries for various container backends.
# It adheres to TOML format and does not support recursive
# lists of registries.
# The default location for this configuration file is /etc/containers/registries.conf.
# The only valid categories are: 'registries.search', 'registries.insecure',
# and 'registries.block'.
[registries.search]
registries = ['docker.io', 'registry.fedoraproject.org', 'quay.io', 'registry.access.redhat.com', 'registry.centos.org']
# If you need to access insecure registries, add the registry's fully-qualified name.
# An insecure registry is one that does not have a valid SSL certificate or only does HTTP.
[registries.insecure]
registries = []
# If you need to block pull access from a registry, uncomment the section below
# and add the registries fully-qualified name.
#
# Docker only
[registries.block]
registries = []
The registries.conf configuration file specifies which registries should be consulted when completing image names that do not include a registry or domain portion.
Now run the following command to build an image, using the https://github.com/do-community/rsvpapp repository as the build context. This repository also contains the relevant Dockerfile:
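A command along these lines does this; buildah bud ("build-using-dockerfile") accepts a Git URL as its build context:

sudo buildah bud -t rsvpapp:buildah github.com/do-community/rsvpapp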
This command creates an image named rsvpapp:buildah from the Dockerfile available in the https://github.com/do-community/rsvpapp repository.
To list the images, use the following command:
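sudo buildah images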
You will see the following output:
Output
IMAGE ID IMAGE NAME CREATED AT SIZE
b0c552b8cf64 docker.io/teamcloudyuga/python:alpine Sep 30, 2016 04:39 95.3 MB
22121fd251df localhost/rsvpapp:buildah Sep 11, 2018 14:34 114 MB
One of these images is localhost/rsvpapp:buildah, which you just created. The other, docker.io/teamcloudyuga/python:alpine, is the base image from the Dockerfile.
Once you have built the image, you can push it to Docker Hub. This will allow you to store it for future use. You will first need to log in to your Docker Hub account from the command line:
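docker login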
Once the login is successful, you will get a file, ~/.docker/config.json, that will contain your Docker Hub credentials. You can then use that file with buildah to push images to Docker Hub.
For example, if you wanted to push the image you just created, you could run the following command, citing the authfile and the image to push:
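A sketch of such a command (the destination image name is illustrative):

sudo buildah push --authfile ~/.docker/config.json rsvpapp:buildah docker://your-dockerhub-username/rsvpapp:buildah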
Finally, take a look at the Docker images you have created:
Output
REPOSITORY TAG IMAGE ID CREATED SIZE
rsvpapp buildah 22121fd251df 4 minutes ago 108MB
nkhare/nginx latest 01f0982d91b8 17 minutes ago 172MB
ubuntu 16.04 b9e15a5d1e1a 5 days ago 115MB
As expected, you should now see a new image, rsvpapp:buildah, that has been exported using buildah.
You now have experience building container images with two different tools, Docker and Buildah. Let's move on to discussing how to set up a cluster of containers with Kubernetes.
Step 2 — Setting Up a Kubernetes Cluster on DigitalOcean using kubeadm and Terraform
There are different ways to set up Kubernetes on DigitalOcean. To learn more about how to set up Kubernetes with kubeadm, for example, you can look at How To Create a Kubernetes Cluster Using Kubeadm on Ubuntu 18.04.
Since this tutorial series discusses taking a Cloud Native approach to application development, we'll apply this methodology when setting up our cluster. Specifically, we will automate our cluster creation using kubeadm and Terraform, a tool that simplifies creating and changing infrastructure.
Using your personal access token, you will connect to DigitalOcean with Terraform to provision 3 servers. You will run the kubeadm commands inside of these VMs to create a 3-node Kubernetes cluster containing one master node and two workers.
On your Ubuntu server, create a pair of SSH keys, which will allow password-less logins to your VMs:
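ssh-keygen -t rsa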
You will see the following output:
Output
Generating public/private rsa key pair.
Enter file in which to save the key (~/.ssh/id_rsa):
Press ENTER to save the key pair in the ~/.ssh directory in your home directory, or enter another destination.
Next, you will see the following prompt:
Output
Enter passphrase (empty for no passphrase):
In this case, press ENTER without a password to enable password-less logins to your nodes.
You will see a confirmation that your key pair has been created:
Output
Your identification has been saved in ~/.ssh/id_rsa.
Your public key has been saved in ~/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:lCVaexVBIwHo++NlIxccMW5b6QAJa+ZEr9ogAElUFyY root@3b9a273f18b5
The key's randomart image is:
+---[RSA 2048]----+
|++.E ++o=o*o*o |
|o +..=.B = o |
|. .* = * o |
| . =.o + * |
| . . o.S + . |
| . +. . |
| . ... = |
| o= . |
| ... |
+----[SHA256]-----+
Get your public key by running the following command, which will display it in your terminal:
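cat ~/.ssh/id_rsa.pub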
Add this key to your DigitalOcean account by following these directions.
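Before moving on, make sure Terraform is installed; if it is not, follow the official installation instructions for your platform (this tutorial was written against v0.11.7). Then confirm the installation by checking the version:

terraform version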
You will see output confirming your Terraform installation:
Output
Terraform v0.11.7
Next, run the following commands to install kubectl, a CLI tool that will communicate with your Kubernetes cluster, and to create a ~/.kube directory in your user's home directory:
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install kubectl
mkdir -p ~/.kube
Creating the ~/.kube directory will enable you to copy the configuration file to this location. You’ll do that once you run the Kubernetes setup script later in this section. By default, the kubectl CLI looks for the configuration file in the ~/.kube directory to access the cluster.
Next, clone the sample project repository for this tutorial, which contains the Terraform scripts for setting up the infrastructure:
This folder contains the necessary scripts and configuration files for deploying your Kubernetes cluster with Terraform.
Execute the script.sh script to trigger the Kubernetes cluster setup:
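Assuming the script lives alongside the Terraform configuration in the cloned repository:

./script.sh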
When the script execution is complete, kubectl will be configured to use the Kubernetes cluster you've created.
List the cluster nodes using kubectl get nodes:
Output
NAME STATUS ROLES AGE VERSION
k8s-master-node Ready master 2m v1.10.0
k8s-worker-node-1 Ready <none> 1m v1.10.0
k8s-worker-node-2 Ready <none> 57s v1.10.0
You now have one master and two worker nodes in the Ready state.
With a Kubernetes cluster set up, you can now explore another option for building container images: Kaniko from Google.
Step 3 — Building Container Images with Kaniko
Earlier in this tutorial, you built container images with Dockerfiles and Buildah. But what if you could build container images directly on Kubernetes? There are ways to run the docker image build command inside of Kubernetes, but this isn't native Kubernetes tooling. You would have to depend on the Docker daemon to build images, and it would need to run on one of the Pods in the cluster.
A tool called Kaniko allows you to build container images with a Dockerfile on an existing Kubernetes cluster. In this step, you will build a container image with a Dockerfile using Kaniko. You will then push this image to Docker Hub.
In order to push your image to Docker Hub, you will need to pass your Docker Hub credentials to Kaniko. In the previous step, you logged into Docker Hub and created a ~/.docker/config.json file with your login credentials. Let's use this configuration file to create a Kubernetes ConfigMap object to store the credentials inside the Kubernetes cluster. The ConfigMap object is used to store configuration parameters, decoupling them from your application.
To create a ConfigMap called docker-config using the ~/.docker/config.json file, run the following command:
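kubectl create configmap docker-config --from-file=$HOME/.docker/config.json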
Next, you can create a Pod definition file called pod-kaniko.yml in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory (though it can go anywhere).
First, make sure that you are in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory:
cd ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/
Create the pod-kaniko.yml file:
Add the following content to the file to specify what will happen when you deploy your Pod. Be sure to replace your-dockerhub-username in the Pod's args field with your own Docker Hub username:
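A sketch of such a Pod definition, assuming the stock Kaniko executor image and the alpine/git image for the clone step (the image names, mount paths, and argument values follow common Kaniko conventions rather than reproducing the original file verbatim; Kaniko reads registry credentials from /kaniko/.docker/config.json):

pod-kaniko.yml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  restartPolicy: Never
  initContainers:
    - name: git-clone
      image: alpine/git
      args:
        - clone
        - https://github.com/do-community/rsvpapp.git
        - /workspace
      volumeMounts:
        - name: demo
          mountPath: /workspace
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - "--dockerfile=Dockerfile"
        - "--context=/workspace"
        - "--destination=your-dockerhub-username/rsvpapp:kaniko"
      volumeMounts:
        - name: demo
          mountPath: /workspace
        - name: docker-config
          mountPath: /kaniko/.docker
  volumes:
    - name: demo
      emptyDir: {}
    - name: docker-config
      configMap:
        name: docker-config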
This configuration file describes what will happen when your Pod is deployed. First, the Init container will clone the Git repository with the Dockerfile, https://github.com/do-community/rsvpapp.git, into a shared volume called demo. Init containers run before application containers and can be used to run utilities or other tasks that are not desirable to run from your application containers. Your application container, kaniko, will then build the image using the Dockerfile and push the resulting image to Docker Hub, using the credentials you passed to the ConfigMap volume docker-config.
To deploy the kaniko pod, run the following command:
kubectl apply -f pod-kaniko.yml
You will see the following confirmation:
Output
pod/kaniko created
Get the list of pods:
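kubectl get pods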
You will see the following list:
Output
NAME READY STATUS RESTARTS AGE
kaniko 0/1 Init:0/1 0 47s
Wait a few seconds, and then run kubectl get pods again for a status update:
You will see the following:
Output
NAME READY STATUS RESTARTS AGE
kaniko 1/1 Running 0 1m
Finally, run kubectl get pods once more for a final status update:
Output
NAME READY STATUS RESTARTS AGE
kaniko 0/1 Completed 0 2m
This sequence of output tells you that the Init container ran, cloning the GitHub repository inside of the demo volume. After that, the Kaniko build process ran and eventually finished.
Check the logs of the pod:
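kubectl logs kaniko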
You will see the following output:
Output
time="2018-08-02T05:01:24Z" level=info msg="appending to multi args docker.io/your-dockerhub-username/rsvpapp:kaniko"
time="2018-08-02T05:01:24Z" level=info msg="Downloading base image nkhare/python:alpine"
.
.
.
ime="2018-08-02T05:01:46Z" level=info msg="Taking snapshot of full filesystem..."
time="2018-08-02T05:01:48Z" level=info msg="cmd: CMD"
time="2018-08-02T05:01:48Z" level=info msg="Replacing CMD in config with [/bin/sh -c python rsvp.py]"
time="2018-08-02T05:01:48Z" level=info msg="Taking snapshot of full filesystem..."
time="2018-08-02T05:01:49Z" level=info msg="No files were changed, appending empty layer to config."
2018/08/02 05:01:51 mounted blob: sha256:bc4d09b6c77b25d6d3891095ef3b0f87fbe90621bff2a333f9b7f242299e0cfd
2018/08/02 05:01:51 mounted blob: sha256:809f49334738c14d17682456fd3629207124c4fad3c28f04618cc154d22e845b
2018/08/02 05:01:51 mounted blob: sha256:c0cb142e43453ebb1f82b905aa472e6e66017efd43872135bc5372e4fac04031
2018/08/02 05:01:51 mounted blob: sha256:606abda6711f8f4b91bbb139f8f0da67866c33378a6dcac958b2ddc54f0befd2
2018/08/02 05:01:52 pushed blob sha256:16d1686835faa5f81d67c0e87eb76eab316e1e9cd85167b292b9fa9434ad56bf
2018/08/02 05:01:53 pushed blob sha256:358d117a9400cee075514a286575d7d6ed86d118621e8b446cbb39cc5a07303b
2018/08/02 05:01:55 pushed blob sha256:5d171e492a9b691a49820bebfc25b29e53f5972ff7f14637975de9b385145e04
2018/08/02 05:01:56 index.docker.io/your-dockerhub-username/rsvpapp:kaniko: digest: sha256:831b214cdb7f8231e55afbba40914402b6c915ef4a0a2b6cbfe9efb223522988 size: 1243
From the logs, you can see that the kaniko container built the image from the Dockerfile and pushed it to your Docker Hub account.
You can now pull the Docker image. Be sure again to replace your-dockerhub-username with your Docker Hub username:
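docker pull your-dockerhub-username/rsvpapp:kaniko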
You have now successfully built a Kubernetes cluster and created new images from within the cluster. Let's move on to discussing Deployments and Services.
Step 4 — Creating Kubernetes Deployments and Services
Kubernetes Deployments allow you to run your applications. Deployments specify the desired state for your Pods, ensuring consistency across your rollouts. In this step, you will create a file called deployment.yml in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory to define an Nginx Deployment.
First, open the file:
Add the following configuration to the file to define your Nginx Deployment:
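A minimal sketch of such a Deployment (the nginx image tag and the app label are illustrative choices):

deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.15
          ports:
            - containerPort: 80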
This file defines a Deployment named nginx-deployment that creates three pods, each running an nginx container on port 80.
To deploy the Deployment, run the following command:
kubectl apply -f deployment.yml
You will see a confirmation that the Deployment was created:
Output
deployment.apps/nginx-deployment created
List your Deployments:
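kubectl get deployments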
Output
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 3 3 29s
You can see that the nginx-deployment Deployment has been created and the desired and current count of the Pods are same: 3.
To list the Pods that the Deployment created, run the following command:
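kubectl get pods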
Output
NAME READY STATUS RESTARTS AGE
kaniko 0/1 Completed 0 9m
nginx-deployment-75675f5897-nhwsp 1/1 Running 0 1m
nginx-deployment-75675f5897-pxpl9 1/1 Running 0 1m
nginx-deployment-75675f5897-xvf4f 1/1 Running 0 1m
You can see from this output that the desired number of Pods are running.
To expose an application deployment internally and externally, you will need to create a Kubernetes object called a Service. Each Service specifies a ServiceType, which defines how the service is exposed. In this example, we will use a NodePort ServiceType, which exposes the Service on a static port on each node.
To do this, create a file, service.yml, in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory:
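A sketch of the Service definition (the nodePort value corresponds to the port shown in the output below and must fall within the cluster's NodePort range, 30000-32767 by default):

service.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30111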
These settings define the Service, nginx-service, and specify that it will target port 80 on your Pod. nodePort defines the port where the application will accept external traffic.
To deploy the Service run the following command:
kubectl apply -f service.yml
You will see a confirmation:
Output
service/nginx-service created
List the Services:
You will see the following list:
Output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5h
nginx-service NodePort 10.100.98.213 <none> 80:30111/TCP 7s
Your Service, nginx-service, is exposed on port 30111 and you can now access it on any of the node’s public IPs. For example, navigating to http://node_1_ip:30111 or http://node_2_ip:30111 should take you to Nginx's standard welcome page.
Once you have tested the Deployment, you can clean up both the Deployment and Service:
kubectl delete deployment nginx-deployment
kubectl delete service nginx-service
These commands will delete the Deployment and Service you have created.
Now that you have worked with Deployments and Services, let's move on to creating Custom Resources.
Step 5 — Creating Custom Resources in Kubernetes
Kubernetes offers limited but production-ready functionalities and features. It is possible to extend Kubernetes' offerings, however, using its Custom Resources feature. In Kubernetes, a resource is an endpoint in the Kubernetes API that stores a collection of API objects. A Pod resource contains a collection of Pod objects, for instance. With Custom Resources, you can add custom offerings for networking, storage, and more. These additions can be created or removed at any point.
In addition to creating custom objects, you can also employ sub-controllers of the Kubernetes Controller component in the control plane to make sure that the current state of your objects is equal to the desired state. The Kubernetes Controller has sub-controllers for specified objects. For example, ReplicaSet is a sub-controller that makes sure the desired Pod count remains consistent. When you combine a Custom Resource with a Controller, you get a true declarative API that allows you to specify the desired state of your resources.
In this step, you will create a Custom Resource and related objects.
To create a Custom Resource, first make a file called crd.yml in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory:
Add the following Custom Resource Definition (CRD):
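A sketch of the definition, using the apiextensions.k8s.io/v1beta1 API that matches the Kubernetes version used in this tutorial (newer clusters use apiextensions.k8s.io/v1 with a versions list and a schema):

crd.yml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: webinars.digitalocean.com
spec:
  group: digitalocean.com
  version: v1
  scope: Namespaced
  names:
    plural: webinars
    singular: webinar
    kind: Webinar
    shortNames:
      - wb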
To deploy the CRD defined in crd.yml, run the following command:
kubectl create -f crd.yml
You will see a confirmation that the resource has been created:
Output
customresourcedefinition.apiextensions.k8s.io/webinars.digitalocean.com created
The crd.yml file has created a new RESTful resource path: /apis/digitalocean.com/v1/namespaces/*/webinars. You can now refer to your objects using webinars, webinar, Webinar, and wb, as you listed them in the names section of the CustomResourceDefinition. You can check the RESTful resource with the following command:
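One way to do this is to expose the Kubernetes API locally with kubectl proxy, which listens on port 8001 by default, and then query the new path with curl:

kubectl proxy &
curl http://localhost:8001/apis/digitalocean.com/v1/namespaces/default/webinars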
Note: If you followed the initial server setup guide in the prerequisites, then you will need to allow traffic to port 8001 in order for this test to work. Enable traffic to this port with the following command:
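sudo ufw allow 8001

Next, describe an instance of your new Webinar kind in a file named webinar.yml. A minimal sketch (the group, version, kind, and object name match the output that follows; the fields under spec are purely illustrative):

webinar.yml
apiVersion: "digitalocean.com/v1"
kind: Webinar
metadata:
  name: webinar1
spec:
  name: webinar1
  image: nginx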
Run the following command to push these changes to the cluster:
kubectl apply -f webinar.yml
You will see the following output:
Output
webinar.digitalocean.com/webinar1 created
You can now manage your webinar objects using kubectl. For example:
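kubectl get webinars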
Output
NAME CREATED AT
webinar1 21s
You now have an object called webinar1. If there had been a Controller, it would have intercepted the object creation and performed any defined operations.
Deleting a Custom Resource Definition
To delete all of the objects for your Custom Resource, use the following command:
kubectl delete webinar --all
You will see:
Output
webinar.digitalocean.com "webinar1" deleted
Remove the Custom Resource itself:
kubectl delete crd webinars.digitalocean.com
You will see a confirmation that it has been deleted:
After deletion you will not have access to the API endpoint that you tested earlier with the curl command.
This sequence is an introduction to how you can extend Kubernetes functionalities without modifying your Kubernetes code.
Step 6 — Deleting the Kubernetes Cluster
To destroy the Kubernetes cluster itself, you can use the destroy.sh script from the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform folder. Make sure that you are in this directory:
cd ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform
Run the script:
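./destroy.sh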
By running this script, you'll allow Terraform to communicate with the DigitalOcean API and delete the servers in your cluster.
Conclusion
In this tutorial, you used different tools to create container images. With these images, you can create containers in any environment. You also set up a Kubernetes cluster using Terraform, and created Deployment and Service objects to deploy and expose your application. Additionally, you extended Kubernetes' functionality by defining a Custom Resource.
You now have a solid foundation to build a CI/CD environment on Kubernetes, which we'll explore in future articles.