      How to Create and Host Your First Webinar


Hosting a webinar remains one of the most effective — if slightly daunting — tactics for engaging your existing audience, building a new one, and growing your business online.

      Especially in this new era of working from home, online events have proliferated, with a threefold increase in webinar audience reported in 2020 compared to 2019.

      If the very mention of a webinar works you up into a cold sweat, you’re not alone. Webinars can be super scary if you’ve never created and hosted one before.

      But take a few breaths. Relax. Stay calm.

      We’re here to break it down for you and make sure you’re extra-prepared for hosting your very first webinar.


      What Is a Webinar, Anyway?

      A webinar is simply a video workshop or presentation. It’s often live (but not always) and is usually interactive and largely unscripted.

      The point isn’t to sell, sell, sell to the webinar attendee. It’s about providing information and advice to encourage and inspire your audience to solve issues themselves. Of course, there’s no harm in promoting your service and product somewhere along the way, but making it your primary focus won’t go down well with most audiences.

      Our take: The best webinar marketing focuses on building brand awareness and engagement with attendees rather than hard sells.


      Why You Should Host a Webinar

      Here are some important reasons for you to host your first webinar:

      They’re Cheap and Easy

      For a start, webinars are a lot less hassle than they seem. They’re relatively straightforward to create, as long as you find the right hosting software and have a good strategy. That said, you’ll still need to put in the hard work to make it a success.

They can also be done on a shoestring budget, using just your smartphone. So, while you can splash out and buy all the fancy gear, there’s absolutely no reason you can’t do it with minimal equipment.

      They Make Great Evergreen Content

      These days, a lot of visual content is here today, gone tomorrow, but an evergreen webinar can live on and on!

      If you record your webinar, you can reuse it in the future, either by reposting it or repurposing it for different audiences.

      For example, you might try chopping your webinar into smaller pieces and posting them in a series on your social media or blog. Or you could put it on your website in full to be downloaded in return for a sign-up to your newsletter.

      They Add Value

By providing something informative, valuable, and high-quality, you’re helping your customers solve a problem while simultaneously promoting your own brand. It’s a win-win.

      So whether you’re demonstrating your latest product or educating about a certain technique or issue, make sure you focus it on your audience’s real — rather than perceived — need.

      It’s Great for Your Brand

      Along with social media, interactive content like a live webinar is a good way to connect with your audience and potential customers.

      Not only do webinars help put a human face to your brand — they’re a great way of showing your customers a sneak peek behind the curtain — but they can also set you up as a thought leader in your industry.

      By providing your audience with actionable tips and valuable knowledge, you can come across as an authority, not just another brand.

      They Can Boost Conversions

Whether the goal of your webinar is to directly increase your revenue or conversions are simply a happy by-product, there’s good evidence that webinars deliver a strong ROI.

      By establishing yourself as an authoritative, trustworthy brand and by reaching your audience more directly, you can expect to increase sales on your website.

      1. Choose Your Webinar Topic

      Choosing your topic isn’t quite as easy as it sounds. Should you talk about what you know? Of course.

      But first, consider this: What does your audience want to know?

      After all, you can spend all day talking about things you’re passionate about. But if those topics don’t help your audience, they’ll drop out fast.

      If you’re looking to provide value to your audience, your webinar needs to match up with what your audience is asking for. Your expertise needs to answer their questions.

      But how do you know what your audience wants to hear about?

      The most obvious answer is to ask them directly. Use your social channels to directly ask your audience what questions they need answers to. You could also send out a survey to your email list to gather more details.

      Alternatively, you can search through your data to see which social posts or blog articles get the most traction. Or, you can look at your Google Analytics to identify some of the search queries that bring people to your website.

      2. Choose Your Webinar Format

      There are several options for how you conduct your webinar.

      You could go down the traditional route of a webinar presentation or take on a more dynamic Q&A interview format. Just keep in mind that the best webinars always find a way to solicit audience participation.

      What’s most important for your first webinar is that you choose a format that you think will work best for your goals. You can always iterate for future webinars based on what you learn from the first attempt.

      A Presentation

      Most webinars are done as presentations simply because presentations are the most straightforward format. They’re also a good option if you’re targeting a small audience.

Usually, the host addresses the camera while delivering a presentation that the audience can follow on a PowerPoint deck, whiteboard, or video running in the background.

      If you want to inject a bit more interactivity into this option, you could enable a chat box so your audience can ask questions throughout the webinar for you to answer at the end.

      Product Demonstrations

      Great for e-commerce businesses, a demonstration-style webinar is a good option if you have a new product or service that you’d like to share with users.

      Depending on your product, you can simply address the camera while showing your audience how to use the item, or you can screen share if your product is digital.

      Interview

      A Q&A with an industry expert or panel of influencers is guaranteed to give your audience extra value.

Identify and reach out to people who are regarded as thought leaders in your niche, and frame the webinar as an exciting opportunity for them to take part in (after all, it will give them exposure too).

      This format might take a bit more preparation. You shouldn’t rehearse the interview ahead of time, as it may come across as stale to your audience. But it would be helpful to create a list of questions and send them to your interviewees in advance so they can be prepared and share informative responses.

      3. Build Your Toolkit

      Next, it’s time to handle the technical details of creating a webinar.

There are plenty of webinar hosting tools available, including popular webinar software like Zoom, Livestorm, or ClickMeeting.

      When choosing your webinar platform, you need to think about a few things:

      • How much does it cost?
      • How big an audience does it allow?
      • How easy is it to use?
      • Does it let you record the video?
      • Does it allow for Q&As?
      • Does it let you screen share or show a PowerPoint presentation?

      Once you’ve chosen a webinar service, it’s time to think about the other tools you might need: a camera (although most smartphones have high-resolution cameras), microphones, a recording device (if this isn’t built into your hosting tool), strong internet, and good lighting.

      4. Produce the Webinar Content

      So, you know your topic. You’ve identified your format. You have your webinar tool. Now it’s time to focus on the content.

If you’re hosting a presentation, you can get started on producing your slide deck. Be careful not to write a formal script; instead, add a few key points to your slides so your audience can follow along with what you’re saying.

Make your slide deck visually appealing and include images and color. BrightTALK has found that video is the preferred learning format among its respondents, with 85% of them preferring that video content in webinar form.

      A Q&A with an expert or a panel will involve more planning. Ask for questions from your audience in advance so that you can field them to your experts on the day.

Make sure you plan out your speakers and the order of the questions, and time a run-through so you know how long to allow for your webinar.

      5. Set Up a Landing Page

Your webinar’s landing page is where you can send your audience to find out more about the event and register to attend. Host it on your own website: that way, you can learn more about your attendees, and they get an easy way to learn more about your business by clicking around your site after they’ve signed up.

      Your landing page needs to be fully optimized. Include a target keyword in your page title and in the on-page copy. Embed a registration form that captures enough information without being overly complex.

      Ideally, you should integrate your form into your other marketing tools so that you can turn an attendee into a prospect (use an opt-in checkbox and watch out for GDPR rules for data privacy).

      6. Determine the Date and Time

      When is the best time to host your webinar? ON24 says Wednesdays and Thursdays at 11:00 a.m. are best. LiveWebinar suggests that any midweek day (Tuesday to Thursday) is fine.

      We suggest letting your own data tell you when’s best. Use Google Analytics to see when people are most engaging with your website. Chances are that when they’re on your website, they’ll also be likely to check out your webinar.

      Watch out for time zones if you have an international audience. And steer clear of the start or end of the working day when your audience may be commuting.

      7. Promote Your Webinar

      Of course, your webinar is nothing without an audience.

      But how do you ensure you get people — the right people — to attend?

      You’ve already optimized a landing page. You now have to be creative in how you share that link.

      First of all, you could set up a series of paid ads, either on Google AdWords or social ads (Facebook Ads, Sponsored Tweets, etc.). This tactic gives you more control over the audience you’re trying to reach, but it does come at a modest cost.

      Your free options include promoting your landing page on your free social media accounts. If you already have a regular e-newsletter, create an email campaign inviting your contacts to the webinar.

Send reminder emails as the date approaches, and entice your audience with some information about the webinar, such as the key points you’ll be covering or details about your guest speaker.

      8. Prepare and Practice

      When it comes to your first webinar, you won’t want to leave anything to chance.

      Set aside time to rehearse your webinar and do a tech run, paying particular attention to details like lighting and sound, camera angles, and your backdrop. Make a note of timing so you don’t run over or, worse, have time to fill. Have a plan in place for any technical issues that might arise on the day.

      Most importantly, practice strategies for staying calm and confident. Focus on your breathing and remind yourself that you are the expert and you’re well-prepared.

      9. Follow up After Your Webinar

      You just hosted a great webinar — congratulations! Give yourself a well-deserved pat on the back.

      But the job’s not over yet.

      The last thing you want to do is let all your hard work go to waste by forgetting to follow up with your audience.

      After your webinar ends, send the attendees a thank you email with the webinar recording or slide deck, along with a request for feedback. If your webinar included a product demonstration or special offer, be sure to include those details in a post-webinar email campaign.

      Make sure you send the recorded webinar to your audience who registered but who couldn’t attend as well. And don’t forget to repurpose the webinar into a blog post, YouTube video, Twitter series, or ebook as part of your content marketing strategy.

      Ready to Host Your First Live Event?

      Whether you need help finding a target audience, crafting the ideal social media strategy, or building a webinar registration site, we can help! Subscribe to our monthly digest so you never miss an article.

      Now Over to You…

You know that hosting a webinar is a solid marketing move. It’ll improve your brand awareness, set you apart as a trusted authority, and drive growth for your business.

      These tips will help prepare you to host a successful webinar, but if you need help hosting your website, we’re here to help you with that too. Check out our affordable shared hosting plans today!




      Webinar Series: GitOps Tool Sets on Kubernetes with CircleCI and Argo CD


      Webinar Series

      This article supplements a webinar series on doing CI/CD with Kubernetes. The series discusses how to take a cloud native approach to building, testing, and deploying applications, covering release management, cloud native tools, service meshes, and CI/CD tools that can be used with Kubernetes. It is designed to help developers and businesses that are interested in integrating CI/CD best practices with Kubernetes into their workflows.

      This tutorial includes the concepts and commands from the last session of the series, GitOps Tool Sets on Kubernetes with CircleCI and Argo CD.

      Warning: The procedures in this tutorial are meant for demonstration purposes only. As a result, they don’t follow the best practices and security measures necessary for a production-ready deployment.

      Introduction

      Using Kubernetes to deploy your application can provide significant infrastructural advantages, such as flexible scaling, management of distributed components, and control over different versions of your application. However, with the increased control comes an increased complexity that can make CI/CD systems of cooperative code development, version control, change logging, and automated deployment and rollback particularly difficult to manage manually. To account for these difficulties, DevOps engineers have developed several methods of Kubernetes CI/CD automation, including the system of tooling and best practices called GitOps. GitOps, as proposed by Weaveworks in a 2017 blog post, uses Git as a “single source of truth” for CI/CD processes, integrating code changes in a single, shared repository per project and using pull requests to manage infrastructure and deployment.

      There are many tools that use Git as a focal point for DevOps processes on Kubernetes, including Gitkube developed by Hasura, Flux by Weaveworks, and Jenkins X, the topic of the second webinar in this series. In this tutorial, you will run through a demonstration of two additional tools that you can use to set up your own cloud-based GitOps CI/CD system: The Continuous Integration tool CircleCI and Argo CD, a declarative Continuous Delivery tool.

      CircleCI uses GitHub or Bitbucket repositories to organize application development and to automate building and testing on Kubernetes. By integrating with the Git repository, CircleCI projects can detect when a change is made to the application code and automatically test it, sending notifications of the change and the results of testing over email or other communication tools like Slack. CircleCI keeps logs of all these changes and test results, and the browser-based interface allows users to monitor the testing in real time, so that a team always knows the status of their project.

      As a sub-project of the Argo workflow management engine for Kubernetes, Argo CD provides Continuous Delivery tooling that automatically synchronizes and deploys your application whenever a change is made in your GitHub repository. By managing the deployment and lifecycle of an application, it provides solutions for version control, configurations, and application definitions in Kubernetes environments, organizing complex data with an easy-to-understand user interface. It can handle several types of Kubernetes manifests, including ksonnet applications, Kustomize applications, Helm charts, and YAML/json files, and supports webhook notifications from GitHub, GitLab, and Bitbucket.

In this last article of the CI/CD with Kubernetes series, you will try out these GitOps tools by:

• Setting up a CircleCI workflow that tests your application code, builds an image, and pushes it to Docker Hub.

• Installing and configuring Argo CD on your Kubernetes cluster.

• Connecting Argo CD to GitHub and deploying your application.

• Testing your Continuous Deployment setup.

      By the end of this tutorial, you will have a basic understanding of how to construct a CI/CD pipeline on Kubernetes with a GitOps tool set.

      Prerequisites

      To follow this tutorial, you will need:

      • An Ubuntu 16.04 server with 16 GB of RAM or above. Since this tutorial is meant for demonstration purposes only, commands are run from the root account. Note that the unrestrained privileges of this account do not adhere to production-ready best practices and could affect your system. For this reason, it is suggested to follow these steps in a test environment such as a virtual machine or a DigitalOcean Droplet.

      • A Docker Hub Account. For an overview on getting started with Docker Hub, please see these instructions.

      • A GitHub account and basic knowledge of GitHub. For a primer on how to use GitHub, check out our How To Create a Pull Request on GitHub tutorial.

      • Familiarity with Kubernetes concepts. Please refer to the article An Introduction to Kubernetes for more details.

      • A Kubernetes cluster with the kubectl command line tool. This tutorial has been tested on a simulated Kubernetes cluster, set up in a local environment with Minikube, a program that allows you to try out Kubernetes tools on your own machine without having to set up a true Kubernetes cluster. To create a Minikube cluster, follow Step 1 of the second webinar in this series, Kubernetes Package Management with Helm and CI/CD with Jenkins X.

      Step 1 — Setting Up your CircleCI Workflow

In this step, you will put together a standard CircleCI workflow that involves three jobs: testing code, building an image, and pushing that image to Docker Hub. In the testing phase, CircleCI will use pytest to test the code for a sample RSVP application. Then, it will build an image from the application code and push that image to Docker Hub.

      First, give CircleCI access to your GitHub account. To do this, navigate to https://circleci.com/ in your favorite web browser:

      CircleCI Landing Page

      In the top right of the page, you will find a Sign Up button. Click this button, then click Sign Up with GitHub on the following page. The CircleCI website will prompt you for your GitHub credentials:

      Sign In to GitHub CircleCI Page

      Entering your username and password here gives CircleCI the permission to read your GitHub email address, deploy keys and add service hooks to your repository, create a list of your repositories, and add an SSH key to your GitHub account. These permissions are necessary for CircleCI to monitor and react to changes in your Git repository. If you would like to read more about the requested permissions before giving CircleCI your account information, see the CircleCI documentation.

      Once you have reviewed these permissions, enter your GitHub credentials and click Sign In. CircleCI will then integrate with your GitHub account and redirect your browser to the CircleCI welcome page:

      Welcome page for CircleCI

      Now that you have access to your CircleCI dashboard, open up another browser window and navigate to the GitHub repository for this webinar, https://github.com/do-community/rsvpapp-webinar4. If prompted to sign in to GitHub, enter your username and password. In this repository, you will find a sample RSVP application created by the CloudYuga team. For the purposes of this tutorial, you will use this application to demonstrate a GitOps workflow. Fork this repository to your GitHub account by clicking the Fork button at the top right of the screen.

      When you’ve forked the repository, GitHub will redirect you to https://github.com/your_GitHub_username/rsvpapp-webinar4. On the left side of the screen, you will see a Branch: master button. Click this button to reveal the list of branches for this project. Here, the master branch refers to the current official version of the application. On the other hand, the dev branch is a development sandbox, where you can test changes before promoting them to the official version in the master branch. Select the dev branch.

      Now that you are in the development section of this demonstration repository, you can start setting up a pipeline. CircleCI requires a YAML configuration file in the repository that describes the steps it needs to take to test your application. The repository you forked already has this file at .circleci/config.yml; in order to practice setting up CircleCI, delete this file and make your own.

      To create this configuration file, click the Create new file button and make a file named .circleci/config.yml:

      GitHub Create a new file Page

      Once you have this file open in GitHub, you can configure the workflow for CircleCI. To learn about this file’s contents, you will add the sections piece by piece. First, add the following:

      .circleci/config.yml

      version: 2
      jobs:
        test:
          machine:
            image: circleci/classic:201808-01
            docker_layer_caching: true
          working_directory: ~/repo
      
      . . .
      

In the preceding code, version refers to the version of the CircleCI configuration format that you will use. jobs:test: means that you are setting up a test for your application, and machine:image: indicates where CircleCI will do the testing, in this case a virtual machine based on the circleci/classic:201808-01 image.

      Next, add the steps you would like CircleCI to take during the test:

      .circleci/config.yml

      . . .
          steps:
            - checkout
            - run:
                name: install dependencies
                command: |
                  sudo rm /var/lib/dpkg/lock
                  sudo dpkg --configure -a
                  sudo apt-get install software-properties-common
                  sudo add-apt-repository ppa:fkrull/deadsnakes
                  sudo apt-get update
                  sleep 5
                  sudo rm /var/lib/dpkg/lock
                  sudo dpkg --configure -a
                  sudo apt-get install python3.5
                  sleep 5
                  python -m pip install -r requirements.txt
      
            # run tests!
            # this example uses Django's built-in test-runner
            # other common Python testing frameworks include pytest and nose
            # https://pytest.org
            # https://nose.readthedocs.io
      
            - run:
                name: run tests
                command: |
                  python -m pytest tests/test_rsvpapp.py  
      
      . . .
      

The steps of the test are listed out after steps:, starting with - checkout, which will check out your project's source code and copy it into the job's space. Next, the - run: name: install dependencies step runs the listed commands to install the dependencies required for the test. In this case, the tests run with the Python testing tool pytest (the comments in the file also mention Django's built-in test-runner as a common alternative). After CircleCI downloads these dependencies, the - run: name: run tests step will instruct CircleCI to run the tests on your application.

      With the test job completed, add in the following contents to describe the build job:

      .circleci/config.yml

      . . .
        build:
      
          machine:
            image: circleci/classic:201808-01
            docker_layer_caching: true
          working_directory: ~/repo
      
          steps:
            - checkout 
            - run:
                name: build image
                command: |
                  docker build -t $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1 .
      
        push:
          machine:
            image: circleci/classic:201808-01
            docker_layer_caching: true
          working_directory: ~/repo
          steps:
            - checkout 
            - run:
                name: Push image
                command: |
                  docker build -t $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1 .
                  echo $DOCKERHUB_PASSWORD | docker login --username $DOCKERHUB_USERNAME --password-stdin
                  docker push $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1    
      
      . . .
      

As before, machine:image: means that CircleCI will build the application in a virtual machine based on the specified image. Under steps:, you will find - checkout again, followed by - run: name: build image. This means that CircleCI will build a Docker image from your application code and tag it as rsvpapp under your Docker Hub username, with the commit SHA as the image tag. You will set the $DOCKERHUB_USERNAME environment variable in the CircleCI interface, which the tutorial will cover after this YAML file is complete.

      After the build job is done, the push job will push the resulting image to your Docker Hub account.

      Finally, add the following lines to determine the workflows that coordinate the jobs you defined earlier:

      .circleci/config.yml

      . . .
      workflows:
        version: 2
        build-deploy:
          jobs:
            - test:
                context: DOCKERHUB
                filters:
                  branches:
                    only: dev        
            - build:
                context: DOCKERHUB 
                requires:
                  - test
                filters:
                  branches:
                    only: dev
            - push:
                context: DOCKERHUB
                requires:
                  - build
                filters:
                  branches:
                    only: dev
      

These lines ensure that CircleCI executes the test, build, and push jobs in the correct order. context: DOCKERHUB refers to the context in which the test will take place. You will create this context after finalizing this YAML file. The only: dev line restricts the workflow to trigger only when there is a change to the dev branch of your repository, ensuring that CircleCI will build and test the code from dev.

      Now that you have added all the code for the .circleci/config.yml file, its contents should be as follows:

      .circleci/config.yml

      version: 2
      jobs:
        test:
          machine:
            image: circleci/classic:201808-01
            docker_layer_caching: true
          working_directory: ~/repo
      
          steps:
            - checkout
            - run:
                name: install dependencies
                command: |
                  sudo rm /var/lib/dpkg/lock
                  sudo dpkg --configure -a
                  sudo apt-get install software-properties-common
                  sudo add-apt-repository ppa:fkrull/deadsnakes
                  sudo apt-get update
                  sleep 5
                  sudo rm /var/lib/dpkg/lock
                  sudo dpkg --configure -a
                  sudo apt-get install python3.5
                  sleep 5
                  python -m pip install -r requirements.txt
      
            # run tests!
            # this example uses Django's built-in test-runner
            # other common Python testing frameworks include pytest and nose
            # https://pytest.org
            # https://nose.readthedocs.io
      
            - run:
                name: run tests
                command: |
                  python -m pytest tests/test_rsvpapp.py  
      
        build:
      
          machine:
            image: circleci/classic:201808-01
            docker_layer_caching: true
          working_directory: ~/repo
      
          steps:
            - checkout 
            - run:
                name: build image
                command: |
                  docker build -t $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1 .
      
        push:
          machine:
            image: circleci/classic:201808-01
            docker_layer_caching: true
          working_directory: ~/repo
          steps:
            - checkout 
            - run:
                name: Push image
                command: |
                  docker build -t $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1 .
                  echo $DOCKERHUB_PASSWORD | docker login --username $DOCKERHUB_USERNAME --password-stdin
                  docker push $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1    
      
      workflows:
        version: 2
        build-deploy:
          jobs:
            - test:
                context: DOCKERHUB
                filters:
                  branches:
                    only: dev        
            - build:
                context: DOCKERHUB 
                requires:
                  - test
                filters:
                  branches:
                    only: dev
            - push:
                context: DOCKERHUB
                requires:
                  - build
                filters:
                  branches:
                    only: dev
      

      Once you have added this file to the dev branch of your repository, return to the CircleCI dashboard.

      Next, you will create a CircleCI context to house the environment variables needed for the workflow that you outlined in the preceding YAML file. On the left side of the screen, you will find a SETTINGS button. Click this, then select Contexts under the ORGANIZATION heading. Finally, click the Create Context button on the right side of the screen:

      Create Context Screen for CircleCI

CircleCI will then ask you for the name of this context. Enter DOCKERHUB, then click Create. Once you have created the context, select the DOCKERHUB context and click the Add Environment Variable button. For the first variable, enter the name DOCKERHUB_USERNAME and, in the Value field, your Docker Hub username.

      Add Environment Variable Screen for CircleCI

      Then add another environment variable, but this time, name it DOCKERHUB_PASSWORD and fill in the Value field with your Docker Hub password.

When you’ve created the two environment variables for your DOCKERHUB context, create a CircleCI project for the test RSVP application. To do this, select the ADD PROJECTS button from the left-hand side menu. This will yield a list of GitHub projects tied to your account. Select rsvpapp-webinar4 from the list and click the Set Up Project button.

      Note: If rsvpapp-webinar4 does not show up in the list, reload the CircleCI page. Sometimes it can take a moment for the GitHub projects to show up in the CircleCI interface.

      You will now find yourself on the Set Up Project page:

      Set Up Project Screen for CircleCI

      At the top of the screen, CircleCI instructs you to create a config.yml file. Since you have already done this, scroll down to find the Start Building button on the right side of the page. By selecting this, you will tell CircleCI to start monitoring your application for changes.

      Click on the Start Building button. CircleCI will redirect you to a build progress/status page, which as yet has no build.

To test the pipeline trigger, go to the recently forked repository at https://github.com/your_GitHub_username/rsvpapp-webinar4 and make some changes in the dev branch only. Since you have added the branch filter only: dev to your .circleci/config.yml file, CircleCI will build only when there is a change in the dev branch. Make a change to the dev branch code (a minimal example follows the screenshot), and you will find that CircleCI has triggered a new workflow in the user interface. Click on the running workflow and you will find the details of what CircleCI is doing:

      CircleCI Project Workflow Page
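If you're wondering what such a change might look like, here is a minimal way to trigger a build from the command line. This is a sketch that assumes you have Git installed locally and that your fork contains a README.md file:

• git clone https://github.com/your_GitHub_username/rsvpapp-webinar4.git
• cd rsvpapp-webinar4
• git checkout dev
• echo "CI trigger test" >> README.md
• git commit -am "Test CircleCI workflow trigger"
• git push origin dev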

      With your CircleCI workflow taking care of the Continuous Integration aspect of your GitOps CI/CD system, you can install and configure Argo CD on top of your Kubernetes cluster to address Continuous Deployment.

      Step 2 — Installing and Configuring Argo CD on your Kubernetes Cluster

      Just as CircleCI uses GitHub to trigger automated testing on changes to source code, Argo CD connects your Kubernetes cluster into your GitHub repository to listen for changes and to automatically deploy the updated application. To set this up, you must first install Argo CD into your cluster.

      First, create a namespace named argocd:

      • kubectl create namespace argocd

      Within this namespace, Argo CD will run all the services and resources it needs to create its Continuous Deployment workflow.
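You can confirm that the namespace exists before moving on:

• kubectl get namespace argocd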

Next, download and apply the Argo CD manifest from the official GitHub repository for Argo:

      • kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v0.9.2/manifests/install.yaml

      In this command, the -n flag directs kubectl to apply the manifest to the namespace argocd, and -f specifies the file name for the manifest that it will apply, in this case the one downloaded from the Argo repository.

      By using the kubectl get command, you can find the pods that are now running in the argocd namespace:

      • kubectl get pod -n argocd

      Using this command will yield output similar to the following:

      NAME                                      READY     STATUS    RESTARTS   AGE
      application-controller-6d68475cd4-j4jtj   1/1       Running   0          1m
      argocd-repo-server-78f556f55b-tmkvj       1/1       Running   0          1m
      argocd-server-78f47bf789-trrbw            1/1       Running   0          1m
      dex-server-74dc6c5ff4-fbr5g               1/1       Running   0          1m
      

      Now that Argo CD is running on your cluster, download the Argo CD CLI tool so that you can control the program from your command line:

      • curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v0.9.2/argocd-linux-amd64

      Once you’ve downloaded the file, use chmod to make it executable:

      • chmod +x /usr/local/bin/argocd

      To find the Argo CD service, run the kubectl get command in the namespace argocd:

      • kubectl get svc -n argocd argocd-server

      You will get output similar to the following:

      Output

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
argocd-server   ClusterIP   10.109.189.243   <none>        80/TCP,443/TCP   8m

      Now, access the Argo CD API server. This server does not automatically have an external IP, so you must first expose the API so that you can access it from your browser at your local workstation. To do this, use kubectl port-forward to forward port 8080 on your local workstation to the 80 TCP port of the argocd-server service from the preceding output:

      • kubectl port-forward svc/argocd-server -n argocd 8080:80

      The output will be:

      Output

Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080

      Once you run the port-forward command, your command prompt will disappear from your terminal. To enter more commands for your Kubernetes cluster, open a new terminal window and log onto your remote server.

      To complete the connection, use ssh to forward the 8080 port from your local machine. First, open up an additional terminal window and, from your local workstation, enter the following command, with remote_server_IP_address replaced by the IP address of the remote server on which you are running your Kubernetes cluster:

      • ssh -L 8080:localhost:8080 root@remote_server_IP_address

      To make sure that the Argo CD server is exposed to your local workstation, open up a browser and navigate to the URL localhost:8080. You will see the Argo CD landing page:

      Sign In Page for ArgoCD

      Now that you have installed Argo CD and exposed its server to your local workstation, you can continue to the next step, in which you will connect GitHub into your Argo CD service.

      Step 3 — Connecting Argo CD to GitHub

      To allow Argo CD to listen to GitHub and synchronize deployments to your repository, you first have to connect Argo CD into GitHub. To do this, log into Argo.

      By default, the password for your Argo CD account is the name of the pod for the Argo CD API server. Switch back to the terminal window that is logged into your remote server but is not handling the port forwarding. Retrieve the password with the following command:

      • kubectl get pods -n argocd -l app=argocd-server -o name | cut -d'/' -f 2

      You will get the name of the pod running the Argo API server:

      Output

      argocd-server-b686c584b-6ktwf

      Enter the following command to log in from the CLI:

      • argocd login localhost:8080

      You will receive the following prompt:

      Output

      WARNING: server certificate had error: x509: certificate signed by unknown authority. Proceed insecurely (y/n)?

      For the purposes of this demonstration, type y to proceed without a secure connection. Argo CD will then prompt you for your username and password. Enter admin for username and the complete argocd-server pod name for your password. Once you put in your credentials, you’ll receive the following message:

      Output

'admin' logged in successfully
Context 'localhost:8080' updated

      Now that you have logged in, use the following command to change your password:

      • argocd account update-password

Argo CD will ask you for your current password and the password you would like to change it to. Choose a secure password and enter it at the prompts. Once you have done this, use your new password to relogin:

• argocd relogin

Enter your password again, and you will get:

      Output

      Context 'localhost:8080' updated

      If you were deploying an application on a cluster external to the Argo CD cluster, you would need to register the application cluster's credentials with Argo CD. If, as is the case with this tutorial, Argo CD and your application are on the same cluster, then you will use https://kubernetes.default.svc as the Kubernetes API server when connecting Argo CD to your application.

      To demonstrate how one might register an external cluster, first get a list of your Kubernetes contexts:

      • kubectl config get-contexts

      You'll get:

      Output

CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         minikube   minikube   minikube

      To add a cluster, enter the following command, with the name of your cluster in place of the highlighted name:

      • argocd cluster add minikube

      In this case, the preceding command would yield:

      Output

INFO[0000] ServiceAccount "argocd-manager" created
INFO[0000] ClusterRole "argocd-manager-role" created
INFO[0000] ClusterRoleBinding "argocd-manager-role-binding" created, bound "argocd-manager" to "argocd-manager-role"
Cluster 'minikube' added

Now that you have set up your login credentials for Argo CD and tested how to add an external cluster, move over to the Argo CD landing page and log in from your local workstation. Argo CD will direct you to the Argo CD applications page:

      Argo CD Applications Screen

      From here, click the Settings icon from the left-side tool bar, click Repositories, then click CONNECT REPO. Argo CD will present you with three fields for your GitHub information:

      Argo CD Connect Git Repo Page

      In the field for Repository URL, enter https://github.com/your_GitHub_username/rsvpapp-webinar4, then enter your GitHub username and password. Once you've entered your credentials, click the CONNECT button at the top of the screen.

      Once you've connected your repository containing the demo RSVP app to Argo CD, choose the Apps icon from the left-side tool bar, click the + button in the top right corner of the screen, and select New Application. From the Select Repository page, select your GitHub repository for the RSVP app and click next. Then choose CREATE APP FROM DIRECTORY to go to a page that asks you to review your application parameters:

      Argo CD Review application parameters Page

      The Path field designates where the YAML file for your application resides in your GitHub repository. For this project, type k8s. For Application Name, type rsvpapp, and for Cluster URL, select https://kubernetes.default.svc from the dropdown menu, since Argo CD and your application are on the same Kubernetes cluster. Finally, enter default for Namespace.

      Once you have filled out your application parameters, click on CREATE at the top of the screen. A box will appear, representing your application:

      Argo CD APPLICATIONS Page with rsvpapp

      After Status:, you will see that your application is OutOfSync with your GitHub repository. To deploy your application as it is on GitHub, click ACTIONS and choose Sync. After a few moments, your application status will change to Synced, meaning that Argo CD has deployed your application.
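You can also trigger the same sync from the Argo CD CLI, a one-line alternative to the UI that assumes you are still logged in from earlier in this step:

• argocd app sync rsvpapp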

      Once your application has been deployed, click your application box to find a detailed diagram of your application:

      Argo CD Application Details Page for rsvpapp

To find this deployment on your Kubernetes cluster, switch back to the terminal window for your remote server and enter:

• kubectl get pod

You will receive output with the pods that are running your app:

      Output

NAME                      READY     STATUS    RESTARTS   AGE
rsvp-755d87f66b-hgfb5     1/1       Running   0          12m
rsvp-755d87f66b-p2bsh     1/1       Running   0          12m
rsvp-db-54996bf89-gljjz   1/1       Running   0          12m

Next, check the services:

• kubectl get svc

You'll find a service for the RSVP app and your MongoDB database, in addition to the number of the port from which your app is running, highlighted in the following:

      NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
      kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        2h
      mongodb      ClusterIP   10.102.150.54   <none>        27017/TCP      25m
      rsvp         NodePort    10.106.91.108   <none>        80:31350/TCP   25m
      

      You can find your deployed RSVP app by navigating to your_remote_server_IP_address:app_port_number in your browser, using the preceding highlighted number for app_port_number:

      RSVP Application
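You could also verify from the command line that the app is responding. This quick check assumes the NodePort 31350 from the preceding output:

• curl http://your_remote_server_IP_address:31350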

      Now that you have deployed your application using Argo CD, you can test your Continuous Deployment system and adjust it to automatically sync with GitHub.

      Step 4 — Testing your Continuous Deployment Setup

      With Argo CD set up, test out your Continuous Deployment system by making a change in your project and triggering a new build of your application.

      In your browser, navigate to https://github.com/your_GitHub_username/rsvpapp-webinar4, click into the master branch, and update the k8s/rsvp.yaml file to deploy your app using the image built by CircleCI as a base. Add dev after image: nkhare/rsvpapp:, as shown in the following:

rsvpapp-webinar4/k8s/rsvp.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: rsvp
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: rsvp
        template:
          metadata:
            labels:
              app: rsvp
          spec:
            containers:
            - name: rsvp-app
        image: nkhare/rsvpapp:dev
              imagePullPolicy: Always
              livenessProbe:
                httpGet:
                  path: /
                  port: 5000
                periodSeconds: 30
                timeoutSeconds: 1
                initialDelaySeconds: 50
              env:
              - name: MONGODB_HOST
                value: mongodb
              ports:
              - containerPort: 5000
                name: web-port
      . . .
      

      Instead of pulling the original image from Docker Hub, Argo CD will now use the dev image created in the Continuous Integration system to build the application.

Commit the change, then return to the Argo CD UI. You will notice that nothing has changed yet; this is because you have not activated automatic synchronization and must sync the application manually.

      To manually sync the application, click the blue circle in the top right of the screen, and click Sync. A new menu will appear, with a field to name your new revision and a checkbox labeled PRUNE:

      Synchronization Page for Argo CD

      Clicking this checkbox will ensure that, once Argo CD spins up your new application, it will destroy the outdated version. Click the PRUNE box, then click SYNCHRONIZE at the top of the screen. You will see the old elements of your application spinning down, and the new ones spinning up with your CircleCI-made image. If the new image included any changes, you would find these new changes reflected in your application at the URL your_remote_server_IP_address:app_port_number.

      As mentioned before, Argo CD also has an auto-sync option that will incorporate changes into your application as you make them. To enable this, open up your terminal for your remote server and use the following command:

      • argocd app set rsvpapp --sync-policy automated

      To make sure that revisions are not accidentally deleted, the default for automated sync has prune turned off. To turn automated pruning on, simply add the --auto-prune flag at the end of the preceding command.
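In other words, the full command for automated sync with pruning would be:

• argocd app set rsvpapp --sync-policy automated --auto-prune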

      Now that you have added Continuous Deployment capabilities to your Kubernetes cluster, you have completed the demonstration GitOps CI/CD system with CircleCI and Argo CD.

      Conclusion

      In this tutorial, you created a pipeline with CircleCI that triggers tests and builds updated images when you change code in your GitHub repository. You also used Argo CD to deploy an application, automatically incorporating the changes integrated by CircleCI. You can now use these tools to create your own GitOps CI/CD system that uses Git as its organizing theme.

      If you'd like to learn more about Git, check out our An Introduction to Open Source series of tutorials. To explore more DevOps tools that integrate with Git repositories, take a look at How To Install and Configure GitLab on Ubuntu 18.04.




      Webinar Series: Kubernetes Package Management with Helm and CI/CD with Jenkins X


      Webinar Series

      This article supplements a webinar series on doing CI/CD with Kubernetes. The series discusses how to take a cloud native approach to building, testing, and deploying applications, covering release management, cloud native tools, service meshes, and CI/CD tools that can be used with Kubernetes. It is designed to help developers and businesses that are interested in integrating CI/CD best practices with Kubernetes into their workflows.

      This tutorial includes the concepts and commands from the second session of the series, Kubernetes Package Management with Helm and CI/CD with Jenkins X.

      Warning: The procedures in this tutorial are meant for demonstration purposes only. As a result, they don’t follow the best practices and security measures necessary for a production-ready deployment.

      Introduction

      In order to reduce error and organize complexity when deploying an application, CI/CD systems must include robust tooling for package management/deployment and pipelines with automated testing. But in modern production environments, the increased complexity of cloud-based infrastructure can present problems for putting together a reliable CI/CD environment. Two Kubernetes-specific tools developed to solve this problem are the Helm package manager and the Jenkins X pipeline automation tool.

      Helm is a package manager specifically designed for Kubernetes, maintained by the Cloud Native Computing Foundation (CNCF) in collaboration with Microsoft, Google, Bitnami, and the Helm contributor community. At a high level, it accomplishes the same goals as Linux system package managers like APT or YUM: managing the installation of applications and dependencies behind the scenes and hiding the complexity from the user. But with Kubernetes, the need for this kind of management is even more pronounced: Installing applications requires the complex and tedious orchestration of YAML files, and upgrading or rolling back releases can be anywhere from difficult to impossible. In order to solve this problem, Helm runs on top of Kubernetes and packages applications into pre-configured resources called charts, which the user can manage with simple commands, making the process of sharing and managing applications more user-friendly.
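To make this concrete, the day-to-day chart workflow boils down to a handful of commands. The following sketch uses Helm v2 syntax with the default stable repository and a hypothetical release name, demo-db; it assumes Helm has already been initialized, as you will do in Step 2:

• helm search mysql
• helm install stable/mysql --name demo-db
• helm ls
• helm rollback demo-db 1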

      Jenkins X is a CI/CD tool used to automate production pipelines and environments for Kubernetes. Using Docker images, Helm charts, and the Jenkins pipeline engine, Jenkins X can automatically manage releases and versions and promote applications between environments on GitHub.

      In this second article of the CI/CD with Kubernetes series, you will preview these two tools by:

      • Managing, creating, and deploying Kubernetes packages with Helm.

      • Building a CI/CD pipeline with Jenkins X.

      Though a variety of Kubernetes platforms can use Helm and Jenkins X, in this tutorial you will run a simulated Kubernetes cluster, set up in your local environment. To do this, you will use Minikube, a program that allows you to try out Kubernetes tools on your own machine without having to set up a true Kubernetes cluster.

      By the end of this tutorial, you will have a basic understanding of how these Kubernetes-native tools can help you implement a CI/CD system for your cloud application.

      Prerequisites

      To follow this tutorial, you will need:

      • An Ubuntu 16.04 server with 16 GB of RAM or above. Since this tutorial is meant for demonstration purposes only, commands are run from the root account. Note that the unrestrained privileges of this account do not adhere to production-ready best practices and could affect your system. For this reason, it is suggested to follow these steps in a test environment such as a virtual machine or a DigitalOcean Droplet.

      • A GitHub account and GitHub API token. Be sure to record this API token so that you can enter it during the Jenkins X portion of this tutorial.

      • Familiarity with Kubernetes concepts. Please refer to the article An Introduction to Kubernetes for more details.

      Step 1 — Creating a Local Kubernetes Cluster with Minikube

      Before setting up Minikube, you will have to install its dependencies, including the Kubernetes command line tool kubectl, the bidirectional data transfer relay socat, and the container program Docker.

      First, make sure that your system’s package manager can access packages over HTTPS with apt-transport-https:

      • apt-get update
      • apt-get install apt-transport-https

      Next, in order to ensure the kubectl download is valid, add the GPG key for the official Google repository to your system:

      • curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

      Once you have added the GPG key, create the file /etc/apt/sources.list.d/kubernetes.list by opening it in your text editor:

      • nano /etc/apt/sources.list.d/kubernetes.list

      Once this file is open, add the following line:

      /etc/apt/sources.list.d/kubernetes.list

      deb http://apt.kubernetes.io/ kubernetes-xenial main
      

This points your system to the source for downloading kubectl. Once you have added the line, save and exit the file. With the nano text editor, you can do this by pressing CTRL+X, typing y, and pressing ENTER.

      Finally, update the source list for APT and install kubectl, socat, and docker.io:

      • apt-get update
      • apt-get install -y kubectl socat docker.io

      Note: For Minikube to simulate a Kubernetes cluster, you must download the docker.io package rather than the newer docker-ce release. For production-ready environments, docker-ce would be the more appropriate choice, since it is better maintained in the official Docker repository.

      Now that you have installed kubectl, you can proceed with installing Minikube. First, use curl to download the program’s binary:

      • curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.28.0/minikube-linux-amd64

Next, change the access permissions of the file you just downloaded so that your system can execute it:

• chmod +x minikube

Finally, copy the minikube file to the executable path at /usr/local/bin/ and remove the original file from your home directory:

      • cp minikube /usr/local/bin/
      • rm minikube

      With Minikube installed on your machine, you can now start the program. To create a Minikube Kubernetes cluster, use the following command:

      • minikube start --vm-driver none

      The flag --vm-driver none instructs Minikube to run Kubernetes on the local host using containers rather than a virtual machine. Running Minikube this way means that you do not need to download a VM driver, but also means that the Kubernetes API server will run insecurely as root.

      Warning: Because the API server with root privileges will have unlimited access to the local host, it is not recommended to run Minikube using the none driver on personal workstations.

Now that you have started Minikube, check to make sure that your cluster is running with the following command:

• minikube status

You will receive the following output, with your IP address in place of your_IP_address:

      minikube: Running
      cluster: Running
      kubectl: Correctly Configured: pointing to minikube-vm at your_IP_address
      

      Now that you have set up your simulated Kubernetes cluster using Minikube, you can gain experience with Kubernetes package management by installing and configuring the Helm package manager on top of your cluster.

      Step 2 — Setting Up the Helm Package Manager on your Cluster

      In order to coordinate the installation of applications on your Kubernetes cluster, you will now install the Helm package manager. Helm consists of a helm client that runs outside the cluster and a tiller server that manages application releases from within the cluster. You will have to install and configure both to successfully run Helm on your cluster.

      To install the Helm binaries, first use curl to download the following installation script from the official Helm GitHub repository into a new file named get_helm.sh:

      • curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh

Since this script requires root access, change the permission of get_helm.sh so that the owner of the file (in this case, root) can read, write, and execute it:

• chmod 700 get_helm.sh

Now, execute the script:

• ./get_helm.sh

      When the script finishes, you will have helm installed to /usr/local/bin/helm and tiller installed to /usr/local/bin/tiller.

      Though tiller is now installed, it does not yet have the correct roles and permissions to access the necessary resources in your Kubernetes cluster. To assign these roles and permissions to tiller, you will have to create a service account named tiller. In Kubernetes, a service account represents an identity for processes that run in a pod. After a process is authenticated through a service account, it can then contact the API server and access cluster resources. If a pod is not assigned a specific service account, it gets the default service account. You will also have to create a Role-Based access control (RBAC) rule that authorizes the tiller service account.

In the Kubernetes RBAC API, a role contains rules that determine a set of permissions. A Role is defined within a namespace and can only grant access to resources within that namespace. A ClusterRole can grant the same permissions at the cluster level, covering cluster-scoped resources like nodes as well as namespaced resources like pods. To assign the tiller service account the right role, create a YAML file called rbac_helm.yaml and open it in your text editor:

• nano rbac_helm.yaml

Add the following lines to the file to configure the tiller service account:

      rbac_helm.yaml

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: tiller
        namespace: kube-system
      ---
      apiVersion: rbac.authorization.k8s.io/v1beta1
      kind: ClusterRoleBinding
      metadata:
        name: tiller
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
        - kind: ServiceAccount
          name: tiller
          namespace: kube-system
      
        - kind: User
          name: "admin"
          apiGroup: rbac.authorization.k8s.io
      
        - kind: User
          name: "kubelet"
          apiGroup: rbac.authorization.k8s.io
      
        - kind: Group
          name: system:serviceaccounts
          apiGroup: rbac.authorization.k8s.io
      

In the preceding file, the ServiceAccount allows the tiller processes to access the apiserver as an authenticated service account. The ClusterRoleBinding then assigns the built-in cluster-admin ClusterRole, which grants permissions at the cluster level, to a list of subjects: the tiller service account, the admin and kubelet users, and the system:serviceaccounts group.
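
Binding cluster-admin is convenient for this demonstration, but it is a very broad grant. As a point of contrast, here is a minimal sketch of a namespace-scoped Role that only allows reading pods; the name pod-reader is hypothetical and this file is not used in this tutorial:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: pod-reader
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]

A production setup would typically bind tiller to a narrowly scoped Role like this in a single namespace rather than to cluster-admin.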

      Next, deploy the configuration in rbac_helm.yaml with the following command:

      • kubectl apply -f rbac_helm.yaml

With the tiller configuration deployed, you can now initialize Helm with the --service-account flag to use the service account you just set up:

      • helm init --service-account tiller

      You will receive the following output, representing a successful initialization:

      Output

Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

This creates a tiller pod in the kube-system namespace. It also creates the default .helm configuration directory in your $HOME directory and configures the default Helm stable chart repository at https://kubernetes-charts.storage.googleapis.com and the local Helm repository at http://127.0.0.1:8879/charts.
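
If you would like to confirm that these two repositories were registered, you can list them; this is a quick sanity check rather than a required step:

• helm repo list

The output should include the stable and local entries with the URLs noted above.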

      To make sure that the tiller pod is running in the kube-system namespace, enter the following command:

      • kubectl --namespace kube-system get pods

      In your list of pods, tiller-deploy will appear, as is shown in the following output:

      Output

NAME                                    READY     STATUS    RESTARTS   AGE
etcd-minikube                           1/1       Running   0          2h
kube-addon-manager-minikube             1/1       Running   0          2h
kube-apiserver-minikube                 1/1       Running   0          2h
kube-controller-manager-minikube        1/1       Running   0          2h
kube-dns-86f4d74b45-rjql8               3/3       Running   0          2h
kube-proxy-dv268                        1/1       Running   0          2h
kube-scheduler-minikube                 1/1       Running   0          2h
kubernetes-dashboard-5498ccf677-wktkl   1/1       Running   0          2h
storage-provisioner                     1/1       Running   0          2h
tiller-deploy-689d79895f-bggbk          1/1       Running   0          5m

      If the tiller pod's status is Running, it can now manage Kubernetes applications from inside your cluster on behalf of Helm.

To make sure that the entire Helm application is working, search the Helm package repositories for an application like MongoDB:

• helm search mongodb

      In the output, you will see a list of possible applications that fit your search term:

      Output

NAME                                 CHART VERSION   APP VERSION   DESCRIPTION
stable/mongodb                       5.4.0           4.0.6         NoSQL document-oriented database that stores JSON-like do...
stable/mongodb-replicaset            3.9.0           3.6           NoSQL document-oriented database that stores JSON-like do...
stable/prometheus-mongodb-exporter   1.0.0           v0.6.1        A Prometheus exporter for MongoDB metrics
stable/unifi                         0.3.1           5.9.29        Ubiquiti Network's Unifi Controller

      Now that you have installed Helm on your Kubernetes cluster, you can learn more about the package manager by creating a sample Helm chart and deploying an application from it.

      Step 3 — Creating a Chart and Deploying an Application with Helm

      In the Helm package manager, individual packages are called charts. Within a chart, a set of files defines an application, which can vary in complexity from a pod to a structured, full-stack app. You can download charts from the Helm repositories, or you can use the helm create command to create your own.
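
For example, to inspect an existing chart rather than write one, you could download and unpack it from the stable repository. As a sketch, using the mongodb chart you searched for earlier (any chart name would work here), the --untar flag leaves the chart's files in a local directory for you to browse:

• helm fetch stable/mongodb --untar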

To test out the capabilities of Helm, create a new Helm chart named demo with the following command:

• helm create demo

      In your home directory, you will find a new directory called demo, within which you can create and edit your own chart templates.

Move into the demo directory and use ls to list its contents:

• cd demo
• ls

      You will find the following files and directories in demo:

      demo

      charts  Chart.yaml  templates  values.yaml
      

Using your text editor, open up the Chart.yaml file:

• nano Chart.yaml

      Inside, you will find the following contents:

      demo/Chart.yaml

      apiVersion: v1
      appVersion: "1.0"
      description: A Helm chart for Kubernetes
      name: demo
      version: 0.1.0
      

In this Chart.yaml file, you will find fields like apiVersion, which must always be v1; a description that gives additional information about what demo is; the name of the chart; and the version number, which Helm uses as a release marker. When you are done examining the file, close out of your text editor.
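
As a side note on that version field: if you later changed the chart's templates or defaults, you would typically increment it so that Helm can tell the packages apart. A hypothetical next iteration of this file might read:

demo/Chart.yaml

apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: demo
version: 0.1.1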

Next, open up the values.yaml file:

• nano values.yaml

      In this file, you will find the following contents:

      demo/values.yaml

      # Default values for demo.
      # This is a YAML-formatted file.
      # Declare variables to be passed into your templates.
      
      replicaCount: 1
      
      image:
        repository: nginx
        tag: stable
        pullPolicy: IfNotPresent
      
      nameOverride: ""
      fullnameOverride: ""
      
      service:
        type: ClusterIP
        port: 80
      
      ingress:
        enabled: false
        annotations: {}
          # kubernetes.io/ingress.class: nginx
          # kubernetes.io/tls-acme: "true"
        paths: []
        hosts:
          - chart-example.local
        tls: []
        #  - secretName: chart-example-tls
        #    hosts:
        #      - chart-example.local
      
      resources: {}
        # We usually recommend not to specify default resources and to leave this as a conscious
        # choice for the user. This also increases chances charts run on environments with little
        # resources, such as Minikube. If you do want to specify resources, uncomment the following
        # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
        # limits:
        #  cpu: 100m
        #  memory: 128Mi
        # requests:
        #  cpu: 100m
        #  memory: 128Mi
      
      nodeSelector: {}
      
      tolerations: []
      
      affinity: {}
      

      By changing the contents of values.yaml, chart developers can supply default values for the application defined in the chart, controlling replica count, image base, ingress access, secret management, and more. Chart users can supply their own values for these parameters with a custom YAML file using helm install. When a user provides custom values, these values will override the values in the chart’s values.yaml file.
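
As a sketch of that override mechanism, either of the following would install the chart with your own values taking precedence; custom-values.yaml here is a hypothetical file you would write yourself:

• helm install --name web -f custom-values.yaml ./demo
• helm install --name web --set replicaCount=2 ./demo

The --set flag is handy for one-off overrides, while a values file is easier to keep under version control.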

Close out the values.yaml file and list the contents of the templates directory with the following command:

• ls templates

      Here you will find templates for various files that can control different aspects of your chart:

      templates

      deployment.yaml  _helpers.tpl  ingress.yaml  NOTES.txt  service.yaml  tests
      

Now that you have explored the demo chart, you can experiment with Helm chart installation by installing demo. Return to your home directory with the following command:

• cd

      Install the demo Helm chart under the name web with helm install:

      • helm install --name web ./demo

      You will get the following output:

      Output

NAME:   web
LAST DEPLOYED: Wed Feb 20 20:59:48 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
web-demo  ClusterIP  10.100.76.231   <none>        80/TCP    0s

==> v1/Deployment
NAME      DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
web-demo  1        0        0           0          0s

==> v1/Pod(related)
NAME                       READY  STATUS             RESTARTS  AGE
web-demo-5758d98fdd-x4mjs  0/1    ContainerCreating  0         0s

NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=demo,app.kubernetes.io/instance=web" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80

      In this output, you will find the STATUS of your application, plus a list of relevant resources in your cluster.

Next, list the deployments created by the demo Helm chart with the following command:

• kubectl get deploy

      This will yield output that will list your active deployments:

      Output

NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
web-demo   1         1         1            1           4m

      Listing your pods with the command kubectl get pods would show the pods that are running your web application, which would look like the following:

      Output

NAME                        READY     STATUS    RESTARTS   AGE
web-demo-5758d98fdd-nbkqd   1/1       Running   0          4m

To demonstrate how changes in the Helm chart can release different versions of your application, open up demo/values.yaml in your text editor, change replicaCount: to 3, and change tag: under image: from stable to latest. In the following code block, you will find what the YAML file should look like after you have finished modifying it, with the changes in place:

      demo/values.yaml

      # Default values for demo.
      # This is a YAML-formatted file.
      # Declare variables to be passed into your templates.
      
      replicaCount: 3
      
      image:
        repository: nginx
        tag: latest
        pullPolicy: IfNotPresent
      
      nameOverride: ""
      fullnameOverride: ""
      
      service:
        type: ClusterIP
        port: 80
      . . .
      

      Save and exit the file.

Before you deploy this new version of your web application, list your Helm releases as they are now with the following command:

• helm list

      You will receive the following output, with the one deployment you created earlier:

      Output

NAME   REVISION   UPDATED                    STATUS     CHART        APP VERSION   NAMESPACE
web    1          Wed Feb 20 20:59:48 2019   DEPLOYED   demo-0.1.0   1.0           default

      Notice that REVISION is listed as 1, indicating that this is the first revision of the web application.

To deploy the web application with the latest changes made to demo/values.yaml, upgrade the application with the following command:

• helm upgrade web ./demo

Now, list the Helm releases again:

• helm list

      You will receive the following output:

      Output

NAME   REVISION   UPDATED                    STATUS     CHART        APP VERSION   NAMESPACE
web    2          Wed Feb 20 21:18:12 2019   DEPLOYED   demo-0.1.0   1.0           default

      Notice that REVISION has changed to 2, indicating that this is the second revision.

To find the history of the Helm releases for web, use the following:

• helm history web

      This will show both of the revisions of the web application:

      Output

      REVISION        UPDATED                         STATUS          CHART           DESCRIPTION
      1               Wed Feb 20 20:59:48 2019        SUPERSEDED      demo-0.1.0      Install complete
      2               Wed Feb 20 21:18:12 2019        DEPLOYED        demo-0.1.0      Upgrade complete
      

To roll back your application to revision 1, enter the following command:

• helm rollback web 1

      This will yield the following output:

      Output

      Rollback was a success! Happy Helming!

Now, bring up the Helm release history:

• helm history web

      You will receive the following list:

      Output

REVISION        UPDATED                         STATUS          CHART           DESCRIPTION
1               Wed Feb 20 20:59:48 2019        SUPERSEDED      demo-0.1.0      Install complete
2               Wed Feb 20 21:18:12 2019        SUPERSEDED      demo-0.1.0      Upgrade complete
3               Wed Feb 20 21:28:48 2019        DEPLOYED        demo-0.1.0      Rollback to 1

      By rolling back the web application, you have created a third revision that has the same settings as revision 1. Remember, you can always tell which revision is active by finding the DEPLOYED item under STATUS.
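
If you ever need to double-check which configuration is live after a rollback, you can also query the release directly; helm status prints the current state, resources, and notes of a named release:

• helm status web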

To prepare for the next section, clean up your testing area by deleting your web release with the helm delete command:

• helm delete web

Examine the Helm release history again:

• helm history web

      You will receive the following output:

      Output

REVISION        UPDATED                         STATUS          CHART           DESCRIPTION
1               Wed Feb 20 20:59:48 2019        SUPERSEDED      demo-0.1.0      Install complete
2               Wed Feb 20 21:18:12 2019        SUPERSEDED      demo-0.1.0      Upgrade complete
3               Wed Feb 20 21:28:48 2019        DELETED         demo-0.1.0      Deletion complete

The STATUS for REVISION 3 has changed to DELETED, indicating that your deployed instance of web has been deleted. However, while this deletes the release, it does not remove it from Helm's local release store. To delete the release completely, run the helm delete command with the --purge flag.
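
For example, to remove every trace of the web release and then confirm that no deleted releases remain, you could run:

• helm delete web --purge
• helm list --deleted

Note that after purging, revision numbering for a new release named web would start over at 1.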

In this step, you have managed application releases on Kubernetes with Helm. If you would like to study Helm further, check out our An Introduction to Helm, the Package Manager for Kubernetes tutorial, or review the official Helm documentation.

      Next, you will set up and test the pipeline automation tool Jenkins X by using the jx CLI to create a CI/CD-ready Kubernetes cluster.

      Step 4 — Setting Up the Jenkins X Environment

      With Jenkins X, you can create your Kubernetes cluster from the ground up with pipeline automation and CI/CD solutions built in. By installing the jx CLI tool, you will be able to efficiently manage application releases, Docker images, and Helm charts, in addition to automatically promoting your applications across environments in GitHub.

Since you will be using jx to create your cluster, you must first delete the Minikube cluster that you already have. To do this, use the following command:

• minikube delete

This will delete the local simulated Kubernetes cluster, but will not delete the default directories created when you first installed Minikube. To clean these off your machine, use the following commands:

      • rm -rf ~/.kube
      • rm -rf ~/.minikube
      • rm -rf /etc/kubernetes/*
      • rm -rf /var/lib/minikube/*

      Once you have completely cleared Minikube from your machine, you can move on to installing the Jenkins X binary.

      First, download the compressed jx file from the official Jenkins X GitHub repository with the curl command and uncompress it with the tar command:

      • curl -L https://github.com/jenkins-x/jx/releases/download/v1.3.781/jx-linux-amd64.tar.gz | tar xzv

Next, move the downloaded jx file to the executable path at /usr/local/bin:

• mv jx /usr/local/bin

      Jenkins X comes with a Docker Registry that runs inside your Kubernetes cluster. Since this is an internal element, security measures such as self-signed certificates can cause trouble for the program. To fix this, set Docker to use insecure registries for the local IP range. To do this, create the file /etc/docker/daemon.json and open it in your text editor:

      • nano /etc/docker/daemon.json

      Add the following contents to the file:

      /etc/docker/daemon.json

      {
        "insecure-registries" : ["0.0.0.0/0"]
      }
      

Save and exit the file. For these changes to take effect, restart the Docker service with the following command:

• systemctl restart docker
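
A note on scope: the 0.0.0.0/0 entry trusts every registry address, which is the simplest setting for this demonstration. On a machine that runs other workloads, you might restrict the exception to your cluster's address range instead. A hypothetical narrower configuration could look like:

/etc/docker/daemon.json

{
  "insecure-registries" : ["192.168.0.0/16"]
}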

To verify that you have configured Docker with insecure registries, use the following command:

• docker info

      At the end of the output, you should see the following highlighted line:

      Output

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 15
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
. . .
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 0.0.0.0/0
 127.0.0.0/8
Live Restore Enabled: false

      Now that you have downloaded Jenkins X and configured the Docker registry, use the jx CLI tool to create a Minikube Kubernetes cluster with CI/CD capabilities:

      • jx create cluster minikube --cpu=5 --default-admin-password=admin --vm-driver=none --memory=13314

Here you are creating a Kubernetes cluster using Minikube, with the flag --cpu=5 to set 5 CPUs and --memory=13314 to give your cluster 13314 MB of memory. Since Jenkins X is a robust but large program, these specifications will ensure that Jenkins X works without problems in this demonstration. Also, you are using --default-admin-password=admin to set the Jenkins X password as admin and --vm-driver=none to set up the cluster locally, as you did in Step 1.

      As Jenkins X spins up your cluster, you will receive various prompts at different times throughout the process that set the parameters for your cluster and determine how it will communicate with GitHub to manage your production environments.

      First, you will receive the following prompt:

      Output

      ? disk-size (MB) 150GB

      Press ENTER to continue. Next, you will be prompted for the name you wish to use with git, the email address you wish to use with git, and your GitHub username. Enter each of these when prompted, then press ENTER.

      Next, Jenkins X will prompt you to enter your GitHub API token:

      Output

To be able to create a repository on GitHub we need an API Token
Please click this URL https://github.com/settings/tokens/new?scopes=repo,read:user,read:org,user:email,write:repo_hook,delete_repo

Then COPY the token and enter in into the form below:

? API Token:

      Enter your token here, or create a new token with the appropriate permissions using the highlighted URL in the preceding code block.

      Next, Jenkins X will ask:

      Output

? Do you wish to use GitHub as the pipelines Git server: (Y/n)

? Do you wish to use your_GitHub_username as the pipelines Git user for GitHub server: (Y/n)

      Enter Y for both questions.

      After this, Jenkins X will prompt you to answer the following:

      Output

? Select Jenkins installation type: [Use arrows to move, type to filter]
> Static Master Jenkins
  Serverless Jenkins

? Pick workload build pack: [Use arrows to move, type to filter]
> Kubernetes Workloads: Automated CI+CD with GitOps Promotion
  Library Workloads: CI+Release but no CD

For the first prompt, select Static Master Jenkins; for the second, select Kubernetes Workloads: Automated CI+CD with GitOps Promotion. When prompted to select an organization for your environment repository, select your GitHub username.

      Finally, you will receive the following output, which verifies successful installation and provides your Jenkins X admin password.

      Output

Creating GitHub webhook for your_GitHub_username/environment-horsehelix-production for url http://jenkins.jx.your_IP_address.nip.io/github-webhook/

Jenkins X installation completed successfully


        ********************************************************

             NOTE: Your admin password is: admin

        ********************************************************


Your Kubernetes context is now set to the namespace: jx
To switch back to your original namespace use: jx namespace default
For help on switching contexts see: https://jenkins-x.io/developing/kube-context/

To import existing projects into Jenkins:       jx import
To create a new Spring Boot microservice:       jx create spring -d web -d actuator
To create a new microservice from a quickstart: jx create quickstart

Next, use the jx get command to receive a list of URLs that show information about your application:

• jx get urls

      This command will yield a list similar to the following:

      Name                      URL
      jenkins                   http://jenkins.jx.your_IP_address.nip.io
      jenkins-x-chartmuseum     http://chartmuseum.jx.your_IP_address.nip.io
      jenkins-x-docker-registry http://docker-registry.jx.your_IP_address.nip.io
      jenkins-x-monocular-api   http://monocular.jx.your_IP_address.nip.io
      jenkins-x-monocular-ui    http://monocular.jx.your_IP_address.nip.io
      nexus                     http://nexus.jx.your_IP_address.nip.io
      

You can use these URLs to view Jenkins X data about your CI/CD environment via a UI by entering the address into your browser and then entering your username and password; in this case, both are admin.

      Next, in order to ensure that the service accounts in the namespaces jx, jx-staging, and jx-production have admin privileges, modify your RBAC policies with the following commands:

      • kubectl create clusterrolebinding jx-staging1 --clusterrole=cluster-admin --user=admin --user=expose --group=system:serviceaccounts --serviceaccount=jx-staging:expose --namespace=jx-staging
      • kubectl create clusterrolebinding jx-staging2 --clusterrole=cluster-admin --user=admin --user=expose --group=system:serviceaccounts --serviceaccount=jx-staging:default --namespace=jx-staging
• kubectl create clusterrolebinding jx-production1 --clusterrole=cluster-admin --user=admin --user=expose --group=system:serviceaccounts --serviceaccount=jx-production:expose --namespace=jx-production
• kubectl create clusterrolebinding jx-production2 --clusterrole=cluster-admin --user=admin --user=expose --group=system:serviceaccounts --serviceaccount=jx-production:default --namespace=jx-production
      • kubectl create clusterrolebinding jx-binding1 --clusterrole=cluster-admin --user=admin --user=expose --group=system:serviceaccounts --serviceaccount=jx:expose --namespace=jx
      • kubectl create clusterrolebinding jx-binding2 --clusterrole=cluster-admin --user=admin --user=expose --group=system:serviceaccounts --serviceaccount=jx:default --namespace=jx

      Now that you have created your local Kubernetes cluster with Jenkins X functionality built in, you can move on to creating an application on the platform to test its CI/CD capabilities and experience a Jenkins X pipeline.

      Step 5 — Creating a Test Application in Your Jenkins X Environment

      With your Jenkins X environment set up in your Kubernetes cluster, you now have CI/CD infrastructure in place that can help you automate a testing pipeline. In this step, you will try this out by setting up a test application in a working Jenkins X pipeline.

      For demonstration purposes, this tutorial will use a sample RSVP application created by the CloudYuga team. You can find this application, along with other webinar materials, at the DO-Community GitHub repository.

      First, clone the sample application from the repository with the following command:

      • git clone https://github.com/do-community/rsvpapp.git

Once you've cloned the repository, move into the rsvpapp directory and remove the git files:

• cd rsvpapp
• rm -r .git/

To initialize a git repository and a Jenkins X project for a new application, you can use jx create to start from scratch or a template, or jx import to import an existing application from a local project or git repository. For this tutorial, import the sample RSVP application by running the following command from within the application's home directory:

• jx import

      Jenkins X will prompt you for your GitHub username, whether you'd like to initialize git, a commit message, your organization, and the name you would like for your repository. Answer yes to initialize git, then provide the rest of the prompts with your individual GitHub information and preferences. As Jenkins X imports the application, it will create Helm charts and a Jenkinsfile in your application's home directory. You can modify these charts and the Jenkinsfile as per your requirements.

      Since the sample RSVP application runs on port 5000 of its container, modify your charts/rsvpapp/values.yaml file to match this. Open the charts/rsvpapp/values.yaml in your text editor:

      • nano charts/rsvpapp/values.yaml

      In this values.yaml file, set service:internalPort: to 5000. Once you have made this change, your file should look like the following:

      charts/rsvpapp/values.yaml

      # Default values for python.
      # This is a YAML-formatted file.
      # Declare variables to be passed into your templates.
      replicaCount: 1
      image:
        repository: draft
        tag: dev
        pullPolicy: IfNotPresent
      service:
        name: rsvpapp
        type: ClusterIP
        externalPort: 80
        internalPort: 5000
        annotations:
          fabric8.io/expose: "true"
          fabric8.io/ingress.annotations: "kubernetes.io/ingress.class: nginx"
      resources:
        limits:
          cpu: 100m
          memory: 128Mi
        requests:
          cpu: 100m
          memory: 128Mi
      ingress:
        enabled: false
      

      Save and exit your file.

      Next, change the charts/preview/requirements.yaml to fit with your application. requirements.yaml is a YAML file in which developers can declare chart dependencies, along with the location of the chart and the desired version. Since our sample application uses MongoDB for database purposes, you'll need to modify the charts/preview/requirements.yaml file to list MongoDB as a dependency. Open the file in your text editor with the following command:

      • nano charts/preview/requirements.yaml

      Edit the file by adding the mongodb-replicaset entry after the alias: cleanup entry, as is highlighted in the following code block:

      charts/preview/requirements.yaml

      # !! File must end with empty line !!
      dependencies:
      - alias: expose
        name: exposecontroller
        repository: http://chartmuseum.jenkins-x.io
        version: 2.3.92
      - alias: cleanup
        name: exposecontroller
        repository: http://chartmuseum.jenkins-x.io
        version: 2.3.92
      - name: mongodb-replicaset
        repository: https://kubernetes-charts.storage.googleapis.com/
        version: 3.5.5
      
        # !! "alias: preview" must be last entry in dependencies array !!
        # !! Place custom dependencies above !!
      - alias: preview
        name: rsvpapp
        repository: file://../rsvpapp
      

      Here you have specified the mongodb-replicaset chart as a dependency for the preview chart.

      Next, repeat this process for your rsvpapp chart. Create the charts/rsvpapp/requirements.yaml file and open it in your text editor:

      • nano charts/rsvpapp/requirements.yaml

      Once the file is open, add the following, making sure that there is a single line of empty space before and after the populated lines:

      charts/rsvpapp/requirements.yaml

      
      dependencies:
      - name: mongodb-replicaset
        repository: https://kubernetes-charts.storage.googleapis.com/
        version: 3.5.5
      
      

      Now you have specified the mongodb-replicaset chart as a dependency for your rsvpapp chart.
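
The Jenkins X pipeline will resolve these dependencies for you during the build, but if you want to verify locally that the requirements file parses and the dependency chart can be fetched, one option is Helm's dependency command, run against the chart directory:

• helm dependency update charts/rsvpapp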

      Next, in order to connect the frontend of the sample RSVP application to the MongoDB backend, add a MONGODB_HOST environment variable to your deployment.yaml file in charts/rsvpapp/templates/. Open this file in your text editor:

      • nano charts/rsvpapp/templates/deployment.yaml

      Add the following highlighted lines to the file, in addition to one blank line at the top of the file and two blank lines at the bottom of the file. Note that these blank lines are required for the YAML file to work:

      charts/rsvpapp/templates/deployment.yaml

      
      apiVersion: extensions/v1beta1
      kind: Deployment
      metadata:
        name: {{ template "fullname" . }}
        labels:
          draft: {{ default "draft-app" .Values.draft }}
          chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
      spec:
        replicas: {{ .Values.replicaCount }}
        template:
          metadata:
            labels:
              draft: {{ default "draft-app" .Values.draft }}
              app: {{ template "fullname" . }}
      {{- if .Values.podAnnotations }}
            annotations:
      {{ toYaml .Values.podAnnotations | indent 8 }}
      {{- end }}
          spec:
            containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              env:
              - name: MONGODB_HOST
                value: "mongodb://{{.Release.Name}}-mongodb-replicaset-0.{{.Release.Name}}-mongodb-replicaset,{{.Release.Name}}-mongodb-replicaset-1.{{.Release.Name}}-mongodb-replicaset,{{.Release.Name}}-mongodb-replicaset-2.{{.Release.Name}}-mongodb-replicaset:27017"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              ports:
              - containerPort: {{ .Values.service.internalPort }}
              resources:
      {{ toYaml .Values.resources | indent 12 }}
      
      
      

      With these changes, Helm will be able to deploy your application with MongoDB as its database.

Next, examine the Jenkinsfile generated by Jenkins X by opening the file from your application's home directory:

• nano Jenkinsfile

      This Jenkinsfile defines the pipeline that is triggered every time you commit a version of your application to your GitHub repository. If you wanted to automate your code testing so that the tests are triggered every time the pipeline is triggered, you would add the test to this document.

      To demonstrate this, add a customized test case by replacing sh "python -m unittest" under stage('CI Build and push snapshot') and stage('Build Release') in the Jenkinsfile with the following highlighted lines:

      /rsvpapp/Jenkinsfile

      . . .
        stages {
          stage('CI Build and push snapshot') {
            when {
              branch 'PR-*'
            }
            environment {
              PREVIEW_VERSION = "0.0.0-SNAPSHOT-$BRANCH_NAME-$BUILD_NUMBER"
              PREVIEW_NAMESPACE = "$APP_NAME-$BRANCH_NAME".toLowerCase()
              HELM_RELEASE = "$PREVIEW_NAMESPACE".toLowerCase()
            }
            steps {
              container('python') {
                sh "pip install -r requirements.txt"
                sh "python -m pytest tests/test_rsvpapp.py"
                sh "export VERSION=$PREVIEW_VERSION && skaffold build -f skaffold.yaml"
                sh "jx step post build --image $DOCKER_REGISTRY/$ORG/$APP_NAME:$PREVIEW_VERSION"
                dir('./charts/preview') {
                  sh "make preview"
                  sh "jx preview --app $APP_NAME --dir ../.."
                }
              }
            }
          }
          stage('Build Release') {
            when {
              branch 'master'
            }
            steps {
              container('python') {
      
                // ensure we're not on a detached head
                sh "git checkout master"
                sh "git config --global credential.helper store"
                sh "jx step git credentials"
      
                // so we can retrieve the version in later steps
              sh "echo \$(jx-release-version) > VERSION"
              sh "jx step tag --version \$(cat VERSION)"
                sh "pip install -r requirements.txt"
                sh "python -m pytest tests/test_rsvpapp.py"
                sh "export VERSION=`cat VERSION` && skaffold build -f skaffold.yaml"
              sh "jx step post build --image $DOCKER_REGISTRY/$ORG/$APP_NAME:\$(cat VERSION)"
              }
            }
          }
      . . .
      

      With the added lines, the Jenkins X pipeline will install dependencies and carry out a Python test whenever you commit a change to your application.

      Now that you have changed the sample RSVP application, commit and push these changes to GitHub with the following commands:

      • git add *
      • git commit -m update
      • git push

When you push these changes to GitHub, you will trigger a new build of your application. If you open the Jenkins UI by navigating to http://jenkins.jx.your_IP_address.nip.io and entering "admin" for your username and password, you will find information about your new build. If you click "Build History" from the menu on the left side of the page, you should see a history of your committed builds. If you click on the blue icon next to a build then select "Console Output" from the left-hand menu, you will find the console output for the automated steps in your pipeline. Scrolling to the end of this output, you will find the following message:

      Output

      . . . Finished: SUCCESS

      This means that your application has passed your customized tests and is now successfully deployed.

Once Jenkins X builds the application release, it will promote the application to the staging environment. To verify that your application is running, list the applications running on your Kubernetes cluster by using the following command:

• jx get applications

      You will receive output similar to the following:

      Output

APPLICATION   STAGING   PODS   URL
rsvpapp       0.0.2     1/1    http://rsvpapp.jx-staging.your_IP_address.nip.io

      From this, you can see that Jenkins X has deployed your application in your jx-staging environment as version 0.0.2. The output also shows the URL that you can use to access your application. Visiting this URL will show you the sample RSVP application:

      Sample RSVP Application in the Staging Environment

      Next, check out the activity of your application with the following command:

      • jx get activity -f rsvpapp

      You will receive output similar to the following:

      Output

STEP                                        STARTED AGO   DURATION   STATUS
your_GitHub_username/rsvpappv/master #1     3h42m23s      4m51s      Succeeded Version: 0.0.1
  Checkout Source                           3h41m52s      6s         Succeeded
  CI Build and push snapshot                3h41m46s                 NotExecuted
  Build Release                             3h41m46s      56s        Succeeded
  Promote to Environments                   3h40m50s      3m17s      Succeeded
  Promote: staging                          3h40m29s      2m36s      Succeeded
    PullRequest                             3h40m29s      1m16s      Succeeded  PullRequest: https://github.com/your_GitHub_username/environment-horsehelix-staging/pull/1 Merge SHA: dc33d3747abdacd2524e8c22f0b5fbb2ac3f6fc7
    Update                                  3h39m13s      1m20s      Succeeded  Status: Success at: http://jenkins.jx.your_IP_address.nip.io/job/your_GitHub_username/job/environment-horsehelix-staging/job/master/2/display/redirect
    Promoted                                3h39m13s      1m20s      Succeeded  Application is at: http://rsvpapp.jx-staging.your_IP_address.nip.io
  Clean up                                  3h37m33s      1s         Succeeded
your_GitHub_username/rsvpappv/master #2     28m37s        5m57s      Succeeded Version: 0.0.2
  Checkout Source                           28m18s        4s         Succeeded
  CI Build and push snapshot                28m14s                   NotExecuted
  Build Release                             28m14s        56s        Succeeded
  Promote to Environments                   27m18s        4m38s      Succeeded
  Promote: staging                          26m53s        4m0s       Succeeded
    PullRequest                             26m53s        1m4s       Succeeded  PullRequest: https://github.com/your_GitHub_username/environment-horsehelix-staging/pull/2 Merge SHA: 976bd5ad4172cf9fd79f0c6515f5006553ac6611
    Update                                  25m49s        2m56s      Succeeded  Status: Success at: http://jenkins.jx.your_IP_address.nip.io/job/your_GitHub_username/job/environment-horsehelix-staging/job/master/3/display/redirect
    Promoted                                25m49s        2m56s      Succeeded  Application is at: http://rsvpapp.jx-staging.your_IP_address.nip.io
  Clean up                                  22m40s        0s         Succeeded

      Here you are getting the Jenkins X activity for the RSVP application by applying a filter with -f rsvpapp.

      Next, list the pods running in the jx-staging namespace with the following command:

      • kubectl get pod -n jx-staging

      You will receive output similar to the following:

      NAME                                 READY     STATUS    RESTARTS   AGE
      jx-staging-mongodb-replicaset-0      1/1       Running   0          6m
      jx-staging-mongodb-replicaset-1      1/1       Running   0          6m
      jx-staging-mongodb-replicaset-2      1/1       Running   0          5m
      jx-staging-rsvpapp-c864c4844-4fw5z   1/1       Running   0          6m
      

      This output shows that your application is running in the jx-staging namespace, along with three pods of the backend MongoDB database, adhering to the changes you made to the YAML files earlier.

      Now that you have run a test application through the Jenkins X pipeline, you can try out promoting this application to the production environment.

      To finish up this demonstration, you will complete the CI/CD process by promoting the sample RSVP application to your jx-production namespace.

      First, use jx promote in the following command:

      • jx promote rsvpapp --version=0.0.2 --env=production

      This will promote the rsvpapp application running with version=0.0.2 to the production environment. Throughout the build process, Jenkins X will prompt you to enter your GitHub account information. Answer these prompts with your individual responses as they appear.

After successful promotion, check the list of applications:

• jx get applications

      You will receive output similar to the following:

      Output

APPLICATION   STAGING   PODS   URL                                                PRODUCTION   PODS   URL
rsvpapp       0.0.2     1/1    http://rsvpapp.jx-staging.your_IP_address.nip.io   0.0.2        1/1    http://rsvpapp.jx-production.your_IP_address.nip.io

With this PRODUCTION information, you can confirm that Jenkins X has promoted rsvpapp to the production environment. For further verification, visit the production URL http://rsvpapp.jx-production.your_IP_address.nip.io in your browser. You should see the working application, now running in "production":

      Sample RSVP Application in the Production Environment

Finally, list your pods in the jx-production namespace:

      • kubectl get pod -n jx-production

      You will find that rsvpapp and the MongoDB backend pods are running in this namespace:

      NAME                                     READY     STATUS    RESTARTS   AGE
      jx-production-mongodb-replicaset-0       1/1       Running   0          1m
      jx-production-mongodb-replicaset-1       1/1       Running   0          1m
      jx-production-mongodb-replicaset-2       1/1       Running   0          55s
      jx-production-rsvpapp-54748d68bd-zjgv7   1/1       Running   0          1m 
      

      This shows that you have successfully promoted the RSVP sample application to your production environment, simulating the production-ready deployment of an application at the end of a CI/CD pipeline.

      Conclusion

In this tutorial, you used Helm to manage packages on a simulated Kubernetes cluster and customized a Helm chart to package and deploy your own application. You also set up a Jenkins X environment on your Kubernetes cluster and ran a sample application through a CI/CD pipeline from start to finish.

      You now have experience with these tools that you can use when building a CI/CD system on your own Kubernetes cluster. If you'd like to learn more about Helm, check out our An Introduction to Helm, the Package Manager for Kubernetes and How To Install Software on Kubernetes Clusters with the Helm Package Manager articles. To explore further CI/CD tools on Kubernetes, you can read about the Istio service mesh in the next tutorial in this webinar series.


