
      Recommended Steps For New FreeBSD 12.0 Servers


      Introduction

      When setting up a new FreeBSD server, there are a number of optional steps you can take to get your server into a more production-friendly state. In this guide, we will cover some of the most common examples.

      We will set up a simple, easy-to-configure firewall that denies most traffic. We will also make sure that your server’s time zone accurately reflects its location. We will set up NTP polling in order to keep the server’s time accurate and, finally, demonstrate how to add some extra swap space to your server.

      Before you get started with this guide, you should log in and configure your shell environment the way you’d like it. You can find out how to do this by following this guide.

      How To Configure a Simple IPFW Firewall

      The first task is setting up a simple firewall to secure your server.

      FreeBSD supports and includes three separate firewalls. These are called pf, ipfw, and ipfilter. In this guide, we will be using ipfw as our firewall. ipfw is a secure, stateful firewall written and maintained as part of FreeBSD.

      Configuring the Basic Firewall

      Almost all of your configuration will take place in the /etc/rc.conf file. To modify the configuration you’ll use the sysrc command, which allows users to change configuration in /etc/rc.conf in a safe manner. Inside this file you’ll add a number of different lines to enable and control how the ipfw firewall will function. You’ll start with the essential rules; run the following command to begin:

      • sudo sysrc firewall_enable="YES"

      Each time you run sysrc to modify your configuration, you’ll receive output showing the changes:

      Output

      firewall_enable: NO -> YES

      As you may expect, this first command enables the ipfw firewall, starting it automatically at boot and allowing it to be started with the usual service commands.

      Now run the following:

      • sudo sysrc firewall_quiet="YES"

      This tells ipfw not to output anything to standard out when it performs certain actions. This might seem like a matter of preference, but it actually affects the functionality of the firewall.

      Two factors combine to make this an important option. The first is that the firewall configuration script is executed in the current shell environment, not as a background task. The second is that when the ipfw command reads a configuration script without the "quiet" flag, it reads and outputs each line, in turn, to standard out. When it outputs a line, it immediately executes the associated action.

      Most firewall configuration files flush the current rules at the top of the script in order to start with a clean slate. If the ipfw firewall comes across a line like this without the quiet flag, it will immediately flush all rules and revert to its default policy, which is usually to deny all connections. If you’re configuring the firewall over SSH, this would drop the connection, close the current shell session, and none of the rules that follow would be processed, effectively locking you out of the server. The quiet flag allows the firewall to process the rules as a set instead of implementing each one individually.
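As an illustrative sketch (not the literal contents of FreeBSD's ruleset script), this is the difference the flag makes at the top of a typical ruleset:

      # Without -q, the flush would prompt, print, and take effect immediately,
      # cutting off an SSH session before any later rule is read:
      ipfw -q flush
      # With -q, processing continues and later rules, such as re-allowing SSH,
      # are applied as part of the same set:
      ipfw -q add allow tcp from any to me 22 in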

      After these two lines, you can begin configuring the firewall’s behavior. Now select "workstation" as the type of firewall you’ll configure:

      • sudo sysrc firewall_type="workstation"

      This sets the firewall to protect the server from which you’re configuring the firewall using stateful rules. A stateful firewall monitors the state of network connections over time and stores information about these connections in memory for a short time. As a result, not only can rules be defined on what connections the firewall should allow, but a stateful firewall can also use the data it has learned about previous connections to evaluate which connections can be made.
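As a sketch of what stateful rules look like in ipfw terms (illustrative only, not the exact rules the "workstation" type generates):

      ipfw -q add check-state                                 # match packets against the state table
      ipfw -q add allow tcp from me to any setup keep-state   # record each outbound connection so its replies are let back in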

      The /etc/rc.conf file also allows you to customize the services you want clients to be able to access by using the firewall_myservices and firewall_allowservices options.

      Run the following command to open ports that should be accessible on your server, such as port 22 for your SSH connection and port 80 for a conventional HTTP web server. If you use SSL on your web server, make sure to add port 443:

      • sudo sysrc firewall_myservices="22/tcp 80/tcp 443/tcp"

      The firewall_myservices option is set to a list of TCP ports or services, separated by spaces, that should be accessible on your server.

      Note: You could also use services by name. The services that FreeBSD knows by name are listed in the /etc/services file. For instance, you could change the previous command to something like this:

• sudo sysrc firewall_myservices="ssh http https"

      This would have the same results.
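If you want to confirm how a service name maps to a port before using it, you can look it up in that file directly:

      • grep -E '^(ssh|http|https)[[:space:]]' /etc/services

This prints the matching entries, such as ssh 22/tcp, http 80/tcp, and https 443/tcp.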

      The firewall_allowservices option lists items that should be allowed to access the provided services. Therefore it allows you to limit access to your exposed services (from firewall_myservices) to particular machines or network ranges. For example, this could be useful if you want a machine to host web content for an internal company network. The keyword "any" means that any IPs can access these services, making them completely public:

      • sudo sysrc firewall_allowservices="any"

      The firewall_logdeny option tells ipfw to log all connection attempts that are denied to a file located at /var/log/security. Run the following command to set this:

      • sudo sysrc firewall_logdeny="YES"

      To check on the changes you’ve made to the firewall configuration, run the following command:

      • grep 'firewall' /etc/rc.conf

      This portion of the /etc/rc.conf file will look like this:

      Output

firewall_enable="YES"
      firewall_quiet="YES"
      firewall_type="workstation"
      firewall_myservices="22/tcp 80/tcp 443/tcp"
      firewall_allowservices="any"
      firewall_logdeny="YES"

      Remember to adjust the firewall_myservices option to reference the services you wish to expose to clients.

      Allowing UDP Connections (Optional)

The ports and services listed in the firewall_myservices option in the /etc/rc.conf file allow access for TCP connections. If you have services that you wish to expose that use UDP, you need to edit the /etc/rc.firewall file. Open it with vi or your preferred editor:

      • sudo vi /etc/rc.firewall

      You configured your firewall to use the "workstation" firewall type, so look for a section that looks like this:

      /etc/rc.firewall

      . . .
      
      [Ww][Oo][Rr][Kk][Ss][Tt][Aa][Tt][Ii][Oo][Nn])
      
      . . .
      

      There is a section within this block that is dedicated to processing the firewall_allowservices and firewall_myservices values that you set. It will look like this:

      /etc/rc.firewall

      for i in ${firewall_allowservices} ; do
        for j in ${firewall_myservices} ; do
          ${fwcmd} add pass tcp from $i to me $j
        done
      done
      

      After this section, you can add any services or ports that should accept UDP packets by adding lines like this:

      ${fwcmd} add pass udp from any to me port_num
      

In vi, press i to switch to INSERT mode and add your content. In the previous example, you can leave the "any" keyword if the connection should be allowed for all clients, or change it to a specific IP address or network range. The port_num should be replaced by the port number or service name you wish to allow UDP access to. For example, if you're running a DNS server, you may wish to have a line that looks something like this:

      for i in ${firewall_allowservices} ; do
        for j in ${firewall_myservices} ; do
          ${fwcmd} add pass tcp from $i to me $j
        done
      done
      
      ${fwcmd} add pass udp from 192.168.2.0/24 to me 53
      

      This will allow any client from within the 192.168.2.0/24 network range to access a DNS server operating on the standard port 53. Note that in this example you would also want to open this port up for TCP connections as that is used by DNS servers for longer replies.
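Following the same pattern, a companion TCP rule scoped to that network range might look like this (a sketch; alternatively, you could add 53/tcp to firewall_myservices, though access would then be governed by the firewall_allowservices list instead):

      ${fwcmd} add pass tcp from 192.168.2.0/24 to me 53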

      Save and close the file when you are finished.

      Starting the Firewall

When you are finished with your configuration, you can start the firewall by typing:

      • sudo service ipfw start

The firewall will start correctly, blocking unwanted traffic while adhering to your allowed services and ports. This firewall will start automatically at every boot.

You also want to configure a limit on how many denials per IP address will be logged. This will prevent your logs from filling up from a single, persistent user. Open the /etc/sysctl.conf file with vi or your preferred editor:

      • sudo vi /etc/sysctl.conf

At the bottom of the file, cap the logging at 5 messages by adding the following line:

      /etc/sysctl.conf

      ...
      net.inet.ip.fw.verbose_limit=5
      

Save and close the file when you are finished. The setting will take effect on the next boot.

      To implement this same behavior for your currently active session without restarting, you can use the sysctl command itself, like this:

      • sudo sysctl net.inet.ip.fw.verbose_limit=5

      This should immediately implement the limit for this boot.
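You can confirm the value currently in effect by querying the same key:

      • sysctl net.inet.ip.fw.verbose_limit

      Output

      net.inet.ip.fw.verbose_limit: 5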

      How To Set the Time Zone for Your Server

      It is a good idea to correctly set the time zone for your server. This is an important step for when you configure NTP time synchronization in the next section.

FreeBSD comes with a menu-based tool called tzsetup for configuring time zones. To set the time zone for your server, call this command with sudo privileges:

      • sudo tzsetup

First, you will be asked to select the region of the world your server is located in:

      FreeBSD region of the world

      You will need to choose a sub-region or country next:

      FreeBSD country

      Note: To navigate these menus, you'll need to use the PAGE UP and PAGE DOWN keys. If you do not have these on your keyboard, you can use FN + DOWN or FN + UP.

      Finally, select the specific time zone that is appropriate for your server:

      FreeBSD time zone

      Confirm the time zone selection that is presented based on your choices.

      At this point, your server's time zone should match the selections you made.

      How To Configure NTP to Keep Accurate Time

      Now that you have the time zone configured on your server, you can set up NTP, or Network Time Protocol. This will help keep your server's time in sync with others throughout the world. This is important for time-sensitive client-server interactions as well as accurate logging.

      Again, you can enable the NTP service on your server by adjusting the /etc/rc.conf file. Run the following command to add the line ntpd_enable="YES" to the file:

      • sudo sysrc ntpd_enable="YES"

      You also need to add a second line that will sync the time on your machine with the remote NTP servers at boot. This is necessary because it allows your server to exceed the normal drift limit on initialization. Your server will likely be outside of the drift limit at boot because your time zone will be applied prior to the NTP daemon starting, which will offset your system time:

      • sudo sysrc ntpd_sync_on_start="YES"

If you did not have this line, your NTP daemon would fail when started, because the time zone settings applied earlier in the boot process skew your system time beyond the allowed drift.

You can start your ntpd service by typing:

      • sudo service ntpd start

This will maintain your server's time by synchronizing with the NTP servers listed in /etc/ntp.conf.
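Once ntpd has been running for a few minutes, you can check which servers it is polling and how far off your clock is. The peer list shown will depend on the pools configured in /etc/ntp.conf:

      • ntpq -p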

How To Configure Extra Swap Space

      On FreeBSD servers configured on DigitalOcean, 1 Gigabyte of swap space is automatically configured regardless of the size of your server. You can see this by typing:

      • swapinfo -g

      It should show something like this:

      Output

Device            1G-blocks     Used    Avail   Capacity
      /dev/gpt/swapfs           1        0        1       0%

      Some users and applications may need more swap space than this. This is accomplished by adding a swap file.

      The first thing you need to do is to allocate a chunk of the filesystem for the file you want to use for swap. You'll use the truncate command, which can quickly allocate space on the fly.

      We'll put the swapfile in /swapfile for this tutorial but you can put the file anywhere you wish, like /var/swapfile for example. This file will provide an additional 1 Gigabyte of swap space. You can adjust this number by modifying the value given to the -s option:

      • sudo truncate -s 1G /swapfile

      After you allocate the space, you need to lock down access to the file. Normal users should not have any access to the file:

      • sudo chmod 0600 /swapfile
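You can check that the file has the expected size and permissions before going further. Because truncate creates a sparse file, it will not consume a full Gigabyte of disk until the swap is actually used:

      • ls -lh /swapfile

The output should show a 1.0G file owned by root with 0600 permissions.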

      Next, associate a pseudo-device with your file and configure it to mount at boot by typing:

      • echo "md99 none swap sw,file=/swapfile,late 0 0" | sudo tee -a /etc/fstab

      This command adds a line that looks like this to the /etc/fstab file:

      md99 none swap sw,file=/swapfile,late 0 0
      

In this line, md99 names the memory disk device that will back the swap file, none and swap mark it as swap space rather than a mounted filesystem, and the late option delays activation until the filesystems are mounted, which is required for file-backed swap. After the line is added to your /etc/fstab file, you can activate the swap file for the current session by typing:

      • sudo swapon -aL

You can verify that the swap file is now working by using the swapinfo command again:

      • swapinfo -g

You should see the additional device (/dev/md99) associated with your swap file:

      Output

Device            1G-blocks     Used    Avail   Capacity
      /dev/gpt/swapfs           1        0        1       0%
      /dev/md99                 1        0        1       0%
      Total                     2        0        2       0%

      This swap file will be mounted automatically at each boot.

      Conclusion

      The steps outlined in this guide can be used to bring your FreeBSD server into a more production-ready state. By configuring basic essentials like a firewall, NTP synchronization, and appropriate swap space, your server can be used as a good base for future installations and services.




      How to Wireframe a Website (In 6 Steps)


      If you’re in the process of creating a website, either for yourself or a client, you’re likely concerned about User Experience (UX). After all, your site won’t be very successful if visitors can’t figure out how to navigate it and find the information they need.

      Fortunately, there’s a handy strategy you can use to work on improving UX before your site ever hits the web. By using a wireframe, you can test drive user flows and page layouts, so you know exactly how they’ll work on your live website.

      In this post, we’ll discuss what wireframes are and why they’re essential in web design. Then we’ll share six steps to help you create mockups for your own site. Let’s get started!


      An Introduction to Wireframes (And Why They’re Useful)

      A wireframe is like a UX blueprint for your website. It maps out certain features of your site, such as menus, buttons, and layouts, while stripping away the visual design. This gives you an idea of your site’s underlying functionality and navigation, without distracting elements such as its color scheme and content.

      An example of a wireframe.

      The purpose of a wireframe is to maximize a site’s UX potential before it’s even available to visitors. By creating mockups of your site’s UX on paper or with a digital wireframing tool, you can troubleshoot issues before they become a problem for your users. This can save you time and money down the line.

      Whether you’re planning a small one-page site, a huge company portal, or something in between, wireframing can be a beneficial part of the planning process. Unless you’re reusing a tried-and-true template with a UX design you’re confident in, wireframing could provide significant benefits to your site.

      After all, effective UX design focuses on getting your site’s key functionality just right. Without a design that supports a strong, positive UX, you run the risk of higher bounce rates and lower conversion rates. A wireframe will not only smooth out your creative process; it could also help promote your site’s overall success.

      How to Wireframe a Website (In 6 Steps)

      Creating a wireframe can become a time-consuming process, especially if things don’t go well during the testing stage. However, taking the time to iron out UX issues ahead of time will give your site a much better chance of success down the line. The six steps listed below will help you get started.

      Step 1: Gather the Tools for Wireframing

      There are two main methods for creating wireframes — by hand or digitally. If you’re going with the former option, all you’ll need is a pen and paper to get started. Some designers begin with a ‘low-fidelity’ paper wireframe for brainstorming and then create a ‘high-fidelity’ digital version later.

      As far as digital options go, there are a wide variety of wireframe tools available. If this is your first wireframe, or if you’re a single Do It Yourself (DIY) site owner and not a designer, you might try a free tool such as Wireframe.cc.

      The Wireframe.cc tool.

      This simple wireframing tool keeps your drafts from becoming cluttered by limiting your color palette. You can create easy designs with its drag-and-drop interface, and annotate your drafts so that you don’t forget important information.

      Another option is Wirify, a bookmarklet that you can add to your browser.

      The Wirify bookmarklet.

      This tool’s interface turns existing web pages into wireframes. Rather than helping you draft UX design for a new site, it’s most helpful for website redesigns.

      If you’re willing to spend a little money, on the other hand, you might look into Balsamiq mockups.

      The Balsamiq wireframing platform.

      It boasts an easy-to-use, collaborative wireframing interface that’s great for teams and professionals who need real-time collaboration. However, it is limited to static wireframing. If you’d like a more comprehensive tool that can also be used for prototyping (which we’ll discuss later in this post), you might try out Prott.

      Step 2: Do Your Target User and UX Design Research

      Before you start drafting your wireframe, it’s helpful to do some research. For starters, you’ll want to know who your target audience is. This can help you determine which features need to be most prominent on your site so that visitors can find what they need.

      User personas can be a helpful design tool for this part of the process. Try creating some for your potential user groups, so you have a reference you can return to throughout the wireframe design process. Personas can also help create a marketing strategy later on, so hang on to them.

      It’s also wise to research some UX design trends and best practices. This can provide insight into elements such as menu layouts, the positioning of your logo and other significant branding elements, and content layouts. Users find it easier to navigate a website that follows convention when it comes to these features.

      Step 3: Determine Your Optimal User Flows

      A ‘user flow’ refers to the path a visitor takes to complete a specific goal on your website. So for example, if you have an e-commerce site, one user flow might be from a product page to the end of the checkout process.

      Determining the key tasks users will need to complete on your site can help you create the most straightforward user flow for each potential goal. This will help maximize UX by making your website easy and enjoyable to use.

      That said, it can be hard to get into the mind of a hypothetical user. Asking yourself these questions can help when you’re trying to work out your primary user flows:

      • What problems do you intend to solve for users? What goals might they be hoping to achieve by coming to your site?
      • How can you organize your content (such as buttons, links, and menus) to support those goals?
      • What should users see first when they arrive on your site, which can help orient them and let them know they’re in the right place?
      • What are the user expectations for a site like yours?
      • What Call to Action (CTA) buttons will you provide, and where can you place them so users will notice?

      Each of these answers will suggest something vital about the way you’ll need to design your pages.

      Step 4: Start Drafting Your Wireframe

      Now that you’ve gathered your tools and key information for your wireframe, you can start drafting. Keep in mind that the purpose of this task is not to create a complete design for your website. You’re focusing solely on UX, and how you can create a page that is easy to navigate and understand.

      To that end, your wireframe should include features and formats that are important to how your users will interact with and make use of your website. These might include:

      • A layout noting where you’ll place any images, branding elements, written content, and video players
      • Your navigation menu, including a list of each item it will include and the order in which they will appear
      • Any links and buttons present on the page
      • Footer content, such as your contact information and social media links

      Your answers to the questions in the previous step will likely help with this stage of the process as well. Remember to consider web design conventions, user expectations, and information hierarchies when placing these elements on your page.

      There are also several elements that aren’t appropriate for a wireframe. Visual design features, such as your color scheme, typography, and any decorative displays, should be left off of your wireframe. In fact, it’s best to keep your wireframe in grayscale so that you can focus on usability.

      You also don’t need to insert images, videos, written content, or your actual brand elements such as your logo and tagline. Placeholders for these features will get the job done. The idea is to avoid incorporating anything that could provide a distraction from user flows and navigation elements that are fundamental to UX.


      Step 5: Perform Usability Testing to Try Out Your Design

      Once you have your initial wireframe completed, you’ll need to carry out some testing. This will help you determine if it has accomplished its goal of mapping out the simplest and most natural user flows and UX for your site. There are several ways to go about this.

      If you’re working with a team, your first round of testing will probably take place internally. Each team member should spend some time with the wireframe to see if it makes sense. Have everyone work independently so as not to influence one another, and take notes on any issues they run into.

      However, there are also tools that can provide more objective usability testing for your wireframe. These tests are meant to imitate actual users, which can be particularly helpful. Just because your team of web designers finds your wireframe logical doesn’t mean that the average site user will.

UsabilityHub is a platform that connects your designs with real users to give you feedback on how the average visitor perceives your wireframe.

      The UsabilityHub home page.

      It offers a free plan so that even small sites and non-designers can put this tool to good use. For professional designers and teams, there are also plans that provide advanced features to help with more extensive and in-depth testing.

      Step 6: Turn Your Wireframe Into a Prototype

      After your wireframe has undergone testing, and you’ve determined the best possible UX design for your site, it’s time to turn it into a prototype. Unlike wireframes, which are static, prototypes include some basic functionality so that you can test out user flows more realistically.

      As we mentioned in the first step, it can be helpful to choose a platform that can turn your wireframe into a prototype. Prott, for instance, enables you to create interactive, high-fidelity prototypes from your wireframe.

      The Prott wireframing prototyping platform.

      However, if you prefer a different wireframing tool, some platforms focus specifically on prototyping. InVision is a high-quality platform that makes it easy for teams to work together and communicate about mockups.

      The InVision prototyping platform.

      Whichever tool you choose, you’ll want to put your prototype through another round of user testing once it’s complete. After your prototype has passed, you can get to building your actual site with the confidence that your UX will be top-notch right from your launch date.

      Making Wireframes to Improve UX

      When it comes to designing a website, solid UX is crucial if you want to set your project up for success. Wireframing your website before you start building pages can help you get UX right before you’ve even launched your site.

      After you’ve finished designing your site, you’ll need a hosting plan that can keep up with your stellar UX. At DreamHost, we provide high-quality shared hosting plans that won’t let your users down. Check them out today!




      How To Do Canary Deployments With Istio and Kubernetes


      Introduction

      When introducing new versions of a service, it is often desirable to shift a controlled percentage of user traffic to a newer version of the service in the process of phasing out the older version. This technique is called a canary deployment.

      Kubernetes cluster operators can orchestrate canary deployments natively using labels and Deployments. This technique has certain limitations, however: traffic distribution and replica counts are coupled, which in practice means replica ratios must be controlled manually in order to limit traffic to the canary release. In other words, to direct 10% of traffic to a canary deployment, you would need to have a pool of ten pods, with one pod receiving 10% of user traffic, and the other nine receiving the rest.
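As a sketch of that coupling, using Deployment names like the ones you will create later in this tutorial: when both versions sit behind the same Service, traffic spreads roughly evenly across Pods, so reaching a 90/10 split means scaling replicas to match:

      • kubectl scale deployment/nodejs-v1 --replicas=9
      • kubectl scale deployment/nodejs-v2 --replicas=1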

      Deploying with an Istio service mesh can address this issue by enabling a clear separation between replica counts and traffic management. The Istio mesh allows fine-grained traffic control that decouples traffic distribution and management from replica scaling. Instead of manually controlling replica ratios, you can define traffic percentages and targets, and Istio will manage the rest.

      In this tutorial, you will create a canary deployment using Istio and Kubernetes. You will deploy two versions of a demo Node.js application, and use Virtual Service and Destination Rule resources to configure traffic routing to both the newer and older versions. This will be a good starting point to build out future canary deployments with Istio.

      Prerequisites

      Note: We highly recommend a cluster with at least 8GB of available memory and 4vCPUs for this setup. This tutorial will use three of DigitalOcean’s standard 4GB/2vCPU Droplets as nodes.

      Step 1 — Packaging the Application

      In the prerequisite tutorial, How To Install and Use Istio With Kubernetes, you created a node-demo Docker image to run a shark information application and pushed this image to Docker Hub. In this step, you will create another image: a newer version of the application that you will use for your canary deployment.

      Our original demo application emphasized some friendly facts about sharks on its Shark Info page:

      Shark Info Page

      But we have decided in our new canary version to emphasize some scarier facts:

      Scary Shark Info Page

      Our first step will be to clone the code for this second version of our application into a directory called node_image. Using the following command, clone the nodejs-canary-app repository from the DigitalOcean Community GitHub account. This repository contains the code for the second, scarier version of our application:

      • git clone https://github.com/do-community/nodejs-canary-app.git node_image

Navigate to the node_image directory:

      • cd node_image

      This directory contains files and folders for the newer version of our shark information application, which offers users information about sharks, like the original application, but with an emphasis on scarier facts. In addition to the application files, the directory contains a Dockerfile with instructions for building a Docker image with the application code. For more information about the instructions in the Dockerfile, see Step 3 of How To Build a Node.js Application with Docker.

      To test that the application code and Dockerfile work as expected, you can build and tag the image using the docker build command, and then use the image to run a demo container. Using the -t flag with docker build will allow you to tag the image with your Docker Hub username so that you can push it to Docker Hub once you've tested it.

      Build the image with the following command:

      • docker build -t your_dockerhub_username/node-demo-v2 .

The . in the command specifies that the build context is the current directory. We've named the image node-demo-v2 to reference the node-demo image we created in How To Install and Use Istio With Kubernetes.

Once the build process is complete, you can list your images with docker images:

      • docker images

      You will see the following output confirming the image build:

      Output

REPOSITORY                             TAG         IMAGE ID       CREATED         SIZE
      your_dockerhub_username/node-demo-v2   latest      37f1c2939dbf   5 seconds ago   77.6MB
      node                                   10-alpine   9dfa73010b19   2 days ago      75.3MB

      Next, you'll use docker run to create a container based on this image. We will include three flags with this command:

      • -p: This publishes the port on the container and maps it to a port on our host. We will use port 80 on the host, but you should feel free to modify this as necessary if you have another process running on that port. For more information about how this works, see this discussion in the Docker docs on port binding.
      • -d: This runs the container in the background.
      • --name: This allows us to give the container a customized name.

Run the following command to create and run the container:

      • docker run --name node-demo-v2 -p 80:8080 -d your_dockerhub_username/node-demo-v2

Inspect your running containers with docker ps:

      • docker ps

      You will see output confirming that your application container is running:

      Output

CONTAINER ID   IMAGE                                  COMMAND                  CREATED         STATUS         PORTS                  NAMES
      49a67bafc325   your_dockerhub_username/node-demo-v2   "docker-entrypoint.s…"   8 seconds ago   Up 6 seconds   0.0.0.0:80->8080/tcp   node-demo-v2

      You can now visit your server IP in your browser to test your setup: http://your_server_ip. Your application will display the following landing page:

      Application Landing Page

      Click on the Get Shark Info button to get to the scarier shark information:

      Scary Shark Info Page

Now that you have tested the application, you can stop the running container. Use docker ps again to get your CONTAINER ID:

      • docker ps

      Output

CONTAINER ID   IMAGE                                  COMMAND                  CREATED              STATUS              PORTS                  NAMES
      49a67bafc325   your_dockerhub_username/node-demo-v2   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:80->8080/tcp   node-demo-v2

Stop the container with docker stop. Be sure to replace the CONTAINER ID listed here with your own application CONTAINER ID:

      • docker stop 49a67bafc325

      Now that you have tested the image, you can push it to Docker Hub. First, log in to the Docker Hub account you created in the prerequisites:

      • docker login -u your_dockerhub_username

      When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your non-root user's home directory with your Docker Hub credentials.

      Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_username with your own Docker Hub username:

      • docker push your_dockerhub_username/node-demo-v2

      You now have two application images saved to Docker Hub: the node-demo image, and node-demo-v2. We will now modify the manifests you created in the prerequisite tutorial How To Install and Use Istio With Kubernetes to direct traffic to the canary version of your application.

      Step 2 — Modifying the Application Deployment

      In How To Install and Use Istio With Kubernetes, you created an application manifest with specifications for your application Service and Deployment objects. These specifications describe each object's desired state. In this step, you will add a Deployment for the second version of your application to this manifest, along with version labels that will enable Istio to manage these resources.

      When you followed the setup instructions in the prerequisite tutorial, you created a directory called istio_project and two yaml manifests: node-app.yaml, which contains the specifications for your Service and Deployment objects, and node-istio.yaml, which contains specifications for your Istio Virtual Service and Gateway resources.

Navigate to the istio_project directory now:

      • cd istio_project

Open node-app.yaml with nano or your favorite editor to make changes to your application manifest:

      • nano node-app.yaml

      Currently, the file looks like this:

~/istio_project/node-app.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: nodejs
        labels: 
          app: nodejs
      spec:
        selector:
          app: nodejs
        ports:
        - name: http
          port: 8080 
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nodejs
        labels:
          version: v1
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nodejs
        template:
          metadata:
            labels:
              app: nodejs
              version: v1
          spec:
            containers:
            - name: nodejs
              image: your_dockerhub_username/node-demo
              ports:
              - containerPort: 8080
      

      For a full explanation of this file's contents, see Step 3 of How To Install and Use Istio With Kubernetes.

      We have already included version labels in our Deployment metadata and template fields, following Istio's recommendations for Pods and Services. Now we can add specifications for a second Deployment object, which will represent the second version of our application, and make a quick modification to the name of our first Deployment object.

      First, change the name of your existing Deployment object to nodejs-v1:

~/istio_project/node-app.yaml

      ...
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nodejs-v1
        labels:
          version: v1
      ...
      

      Next, below the specifications for this Deployment, add the specifications for your second Deployment. Remember to add the name of your own image to the image field:

~/istio_project/node-app.yaml

      ...
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nodejs-v2
        labels:
          version: v2
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nodejs
        template:
          metadata:
            labels:
              app: nodejs
              version: v2
          spec:
            containers:
            - name: nodejs
              image: your_dockerhub_username/node-demo-v2
              ports:
              - containerPort: 8080
      

      Like the first Deployment, this Deployment uses a version label to specify the version of the application that corresponds to this Deployment. In this case, v2 will distinguish the application version associated with this Deployment from v1, which corresponds to our first Deployment.

      We've also ensured that the Pods managed by the v2 Deployment will run the node-demo-v2 canary image, which we built in the previous Step.
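Once you apply this manifest in Step 4, you can confirm that each Pod carries the label Istio will route on by asking kubectl to print the version label as a column:

      • kubectl get pods -L version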

      Save and close the file when you are finished editing.

      With your application manifest modified, you can move on to making changes to your node-istio.yaml file.

      Step 3 — Weighting Traffic with Virtual Services and Adding Destination Rules

      In How To Install and Use Istio With Kubernetes, you created Gateway and Virtual Service objects to allow external traffic into the Istio mesh and route it to your application Service. Here, you will modify your Virtual Service configuration to include routing to your application Service subsets — v1 and v2. You will also add a Destination Rule to define additional, version-based policies to the routing rules you are applying to your nodejs application Service.

Open the node-istio.yaml file:

      • nano node-istio.yaml

      Currently, the file looks like this:

      ~/istio_project/node-istio.yaml

      apiVersion: networking.istio.io/v1alpha3
      kind: Gateway
      metadata:
        name: nodejs-gateway
      spec:
        selector:
          istio: ingressgateway 
        servers:
        - port:
            number: 80
            name: http
            protocol: HTTP
          hosts:
          - "*"
      ---
      apiVersion: networking.istio.io/v1alpha3
      kind: VirtualService
      metadata:
        name: nodejs
      spec:
        hosts:
        - "*"
        gateways:
        - nodejs-gateway
        http:
        - route:
          - destination:
              host: nodejs
      

      For a complete explanation of the specifications in this manifest, see Step 4 of How To Install and Use Istio With Kubernetes.

      Our first modification will be to the Virtual Service. Currently, this resource routes traffic entering the mesh through our nodejs-gateway to our nodejs application Service. What we would like to do is configure a routing rule that will send 80% of traffic to our original application, and 20% to the newer version. Once we are satisfied with the canary's performance, we can reconfigure our traffic rules to gradually send all traffic to the newer application version.

      Instead of routing to a single destination, as we did in the original manifest, we will add destination fields for both of our application subsets: the original version (v1) and the canary (v2).

      Make the following additions to the Virtual Service to create this routing rule:

      ~/istio_project/node-istio.yaml

      ...
      apiVersion: networking.istio.io/v1alpha3
      kind: VirtualService
      metadata:
        name: nodejs
      spec:
        hosts:
        - "*"
        gateways:
        - nodejs-gateway
        http:
        - route:
          - destination:
              host: nodejs
              subset: v1
            weight: 80
          - destination:
              host: nodejs
              subset: v2
            weight: 20
      

The policy that we have added includes two destinations: the subset of our nodejs Service that is running the original version of our application, v1, and the subset that is running the canary, v2. The v1 subset will receive 80% of incoming traffic, while the canary will receive 20%.

      Next, we will add a Destination Rule that will apply rules to incoming traffic after that traffic has been routed to the appropriate Service. In our case, we will configure subset fields to send traffic to Pods with the appropriate version labels.

      Add the following code below your Virtual Service definition:

      ~/istio_project/node-istio.yaml

      ...
      ---
      apiVersion: networking.istio.io/v1alpha3
      kind: DestinationRule
      metadata:
        name: nodejs
      spec:
        host: nodejs
        subsets:
        - name: v1
          labels:
            version: v1
        - name: v2
          labels:
            version: v2
      

      Our Rule ensures that traffic to our Service subsets, v1 and v2, reaches Pods with the appropriate labels: version: v1 and version: v2. These are the labels that we included in our application Deployment specs.

      If we wanted, however, we could also apply specific traffic policies at the subset level, enabling further specificity in our canary deployments. For additional information about defining traffic policies at this level, see the official Istio documentation.

      Save and close the file when you have finished editing.

      With your application manifests revised, you are ready to apply your configuration changes and examine your application traffic data using the Grafana telemetry addon.

      Step 4 — Applying Configuration Changes and Accessing Traffic Data

      The application manifests are updated, but we still need to apply these changes to our Kubernetes cluster. We'll use the kubectl apply command to apply our changes without completely overwriting the existing configuration. After doing this, you will be able to generate some requests to your application and look at the associated data in your Istio Grafana dashboards.

      Apply your configuration to your application Service and Deployment objects:

      • kubectl apply -f node-app.yaml

      You will see the following output:

      Output

service/nodejs unchanged
      deployment.apps/nodejs-v1 created
      deployment.apps/nodejs-v2 created

      Next, apply the configuration updates you've made to node-istio.yaml, which include the changes to the Virtual Service and the new Destination Rule:

      • kubectl apply -f node-istio.yaml

      You will see the following output:

      Output

gateway.networking.istio.io/nodejs-gateway unchanged
      virtualservice.networking.istio.io/nodejs configured
      destinationrule.networking.istio.io/nodejs created

      You are now ready to generate traffic to your application. Before doing that, however, first check to be sure that you have the grafana Service running:

      • kubectl get svc -n istio-system | grep grafana

      Output

      grafana ClusterIP 10.245.233.51 <none> 3000/TCP 4d2h

Also check for the associated Pods:

      • kubectl get pods -n istio-system | grep grafana

      Output

      grafana-67c69bb567-jpf6h 1/1 Running 0 4d2h

      Finally, check for the grafana-gateway Gateway and grafana-vs Virtual Service:

      • kubectl get gateway -n istio-system | grep grafana

      Output

      grafana-gateway 3d5h
      • kubectl get virtualservice -n istio-system | grep grafana

      Output

      grafana-vs [grafana-gateway] [*] 4d2h

      If you don't see output from these commands, check Steps 2 and 5 of How To Install and Use Istio With Kubernetes, which discuss how to enable the Grafana telemetry addon when installing Istio and how to enable HTTP access to the Grafana Service.

      You can now access your application in the browser. To do this, you will need the external IP associated with your istio-ingressgateway Service, which is a LoadBalancer Service type. We matched our nodejs-gateway Gateway with this controller when writing our Gateway manifest in How To Install and Use Istio With Kubernetes. For more detail on the Gateway manifest, see Step 4 of that tutorial.

      Get the external IP for the istio-ingressgateway Service with the following command:

      • kubectl get svc -n istio-system

      You will see output like the following:

      Output

NAME                     TYPE           CLUSTER-IP       CLUSTER-IP        PORT(S)                                                                                                                                      AGE
      grafana                  ClusterIP      10.245.85.162    <none>            3000/TCP                                                                                                                                     42m
      istio-citadel            ClusterIP      10.245.135.45    <none>            8060/TCP,15014/TCP                                                                                                                           42m
      istio-galley             ClusterIP      10.245.46.245    <none>            443/TCP,15014/TCP,9901/TCP                                                                                                                   42m
      istio-ingressgateway     LoadBalancer   10.245.171.39    ingressgateway_ip 15020:30707/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:30285/TCP,15030:31668/TCP,15031:32297/TCP,15032:30853/TCP,15443:30406/TCP   42m
      istio-pilot              ClusterIP      10.245.56.97     <none>            15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                       42m
      istio-policy             ClusterIP      10.245.206.189   <none>            9091/TCP,15004/TCP,15014/TCP                                                                                                                 42m
      istio-sidecar-injector   ClusterIP      10.245.223.99    <none>            443/TCP                                                                                                                                      42m
      istio-telemetry          ClusterIP      10.245.5.215     <none>            9091/TCP,15004/TCP,15014/TCP,42422/TCP                                                                                                       42m
      prometheus               ClusterIP      10.245.100.132   <none>            9090/TCP                                                                                                                                     42m

      The istio-ingressgateway should be the only Service with the TYPE LoadBalancer, and the only Service with an external IP.

      Navigate to this external IP in your browser: http://ingressgateway_ip.

      You should see the following landing page:

      Application Landing Page

Click on the Get Shark Info button. You will see one of two shark information pages:

      Shark Info Page

      Scary Shark Info Page

      Click refresh on this page a few times. You should see the friendlier shark information page more often than the scarier version.
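If you would rather generate traffic from the command line, a quick loop of requests works as well. Substitute your own external IP for ingressgateway_ip; over a sample of this size, the 80/20 split becomes easy to see in the dashboards:

      • for i in $(seq 1 100); do curl -s -o /dev/null http://ingressgateway_ip/; done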

      Once you have generated some load by refreshing five or six times, you can head over to your Grafana dashboards.

      In your browser, navigate to the following address, again using your istio-ingressgateway external IP and the port that's defined in the Grafana Gateway manifest: http://ingressgateway_ip:15031.

      You will see the following landing page:

      Grafana Home Dash

      Clicking on Home at the top of the page will bring you to a page with an istio folder. To get a list of dropdown options, click on the istio folder icon:

      Istio Dash Options Dropdown Menu

      From this list of options, click on Istio Service Dashboard.

      This will bring you to a landing page with another dropdown menu:

      Service Dropdown in Istio Service Dash

      Select nodejs.default.svc.cluster.local from the list of available options.

      If you navigate down to the Service Workloads section of the page, you will be able to look at Incoming Requests by Destination And Response Code:

      Service Workloads Dashboards

      Here, you will see a combination of 200 and 304 HTTP response codes, indicating successful OK and Not Modified responses. The responses labeled nodejs-v1 should outnumber the responses labeled nodejs-v2, indicating that incoming traffic is being routed to our application subsets following the parameters we defined in our manifests.

      Conclusion

      In this tutorial, you deployed a canary version of a demo Node.js application using Istio and Kubernetes. You created Virtual Service and Destination Rule resources that together allowed you to send 80% of your traffic to your original application service, and 20% to the newer version. Once you are satisfied with the performance of the newer application version, you can update your configuration settings as desired.
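For example, once you trust the canary, you could move to an even split without reopening the manifest by patching the Virtual Service weights in place. This is a sketch using a JSON patch against the route entries defined earlier; adjust the values as your rollout progresses:

      • kubectl patch virtualservice nodejs --type=json -p='[{"op": "replace", "path": "/spec/http/0/route/0/weight", "value": 50}, {"op": "replace", "path": "/spec/http/0/route/1/weight", "value": 50}]'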

      For more information about traffic management in Istio, see the related high-level overview in the documentation, as well as specific examples that use Istio's bookinfo and helloworld sample applications.


