      How to Use the Linode Ansible Collection to Deploy a Linode


      Ansible is a popular open-source Infrastructure as Code (IaC) tool that can be used to complete common IT tasks like cloud provisioning and configuration management across a wide array of infrastructure components. Commonly seen as a solution to multi-cloud configurations, automation, and continuous delivery issues, Ansible is considered by many to be an industry standard in the modern cloud landscape.

      Ansible Collections are the latest standard for managing Ansible content, empowering users to install roles, modules, and plugins with less developer and administrative overhead than ever before.
      The Linode Ansible collection provides the basic plugins needed to get started using Linode services with Ansible right away.

      This guide shows how to:

      • Install the Linode Ansible collection
      • Configure Ansible and securely encrypt sensitive strings with Ansible Vault
      • Write and run a playbook that deploys a Compute Instance using the collection

      Caution

      The playbook in this guide deploys a billable Compute Instance to your Linode account. If you do not want to keep the instance, remove it when you are finished to avoid additional charges.

      Before You Begin

      Note

      The steps outlined in this guide require
      Ansible version 2.9.10 or greater and were tested on a Linode running Ubuntu 22.04. The instructions can be adapted to other Linux distributions or operating systems.
      1. Provision a server that acts as the Ansible
        control node, from which other compute instances are deployed. Follow the instructions in our
        Creating a Compute Instance guide to create a Linode running Ubuntu 22.04. A shared CPU 1GB Nanode is suitable. You can also use an existing workstation or laptop if you prefer.

      2. Add a limited Linux user to your control node Linode by following the
        Add a Limited User Account section of our
        Setting Up and Securing a Compute Instance guide. Ensure that all commands for the rest of this guide are entered as your limited user.

      3. Ensure that you have performed system updates:

        sudo apt update && sudo apt upgrade
        
      4. Install Ansible on your control node. Follow the steps in the
        Install Ansible section of the
        Getting Started With Ansible – Basic Installation and Setup guide.

      5. Ensure you have Python version 2.7 or higher installed on your control node. Issue the following command to check your system’s Python version:

        python --version
        

        Many operating systems, including Ubuntu 22.04, ship with Python 3 installed by default and may not provide a python command at all. The Python 3 interpreter can usually be invoked with the python3 command, and the remainder of this guide assumes Python 3 is installed and used. For example, you can run this command to check your Python 3 version:

        python3 --version
        
      6. Install the pip package manager:

        sudo apt install python3-pip
        
      7. Generate a Linode API v4 access token with permission to read and write Linodes and record it in a password manager or other safe location. Follow the
        Get an Access Token section of the
        Getting Started with the Linode API guide.

      Install the Linode Ansible Collection

      The Linode Ansible collection is open source and hosted on both a
      public GitHub repository and on
      Ansible Galaxy. Ansible Galaxy is Ansible’s own community-focused repository, providing information on and access to a wide array of
      Ansible collections and
      Ansible roles. Ansible Galaxy support is built into recent versions of Ansible by default. While you can install the Linode Ansible collection
      from source or by
      using git, these steps show how to use Ansible Galaxy:

      1. Install required dependencies for Ansible:

        sudo -H pip3 install -Iv 'resolvelib<0.6.0'
        
      2. Download the latest version of the Linode Ansible collection using the ansible-galaxy command:

        ansible-galaxy collection install linode.cloud
        

        Once the collection is installed, its files are stored in the default collections directory, ~/.ansible/collections/ansible_collections/.

      3. Install the Python module dependencies required for the Linode Ansible collection. The Linode collection’s installation directory contains a requirements.txt file that lists the Python dependencies, including the official
        Python library for the Linode API v4. Use pip to install these dependencies:

        sudo pip3 install -r ~/.ansible/collections/ansible_collections/linode/cloud/requirements.txt
        

      The Linode Ansible collection is now installed and ready to deploy and manage Linode services.
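      To confirm the installation, you can check your Ansible version and, on Ansible 2.10 or newer, list the installed collection; if everything worked, the second command reports the installed linode.cloud version:

        ansible --version
        ansible-galaxy collection list linode.cloud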

      Configure Ansible

      When interfacing with the Linode Ansible collection, it is generally good practice to use variables to securely store sensitive strings like API tokens. This section shows how to securely store and access the
      Linode API Access token (generated in the
      Before You Begin section) along with a root password that is assigned to new Linode instances. Both of these are encrypted with
      Ansible Vault.

      Create an Ansible Vault Password File

      1. From the control node’s home directory, create a development directory to hold user-generated Ansible files. Then navigate to this new directory:

        mkdir development && cd development
        
      2. In the development directory, create a new empty text file called .vault-pass (with no file extension). Then generate a unique, complex new password (for example, by using a password manager), copy it into the new file, and save it. This password is used to encrypt and decrypt information stored with Ansible Vault:

        File: ~/development/.vault-pass

        <PasteYourAnsibleVaultPasswordHere>

        This is an Ansible Vault password file. A password file provides your Vault password to Ansible Vault’s encryption commands. Ansible Vault also offers other options for password management. To learn more about password management, read Ansible’s
        Providing Vault Passwords documentation.

      3. Set permissions on the file so that only your user can read and write to it:

        chmod 600 .vault-pass
        

        Caution

        Do not check this file into version control. If this file is located in a Git repository, add it to your
        .gitignore file.

      Create an Ansible Configuration File

      Create an Ansible configuration file called ansible.cfg with a text editor of your choice. Copy this snippet into the file:

      File: ~/development/ansible.cfg

      [defaults]
      VAULT_PASSWORD_FILE = ./.vault-pass

      These lines specify the location of your password file.
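      Optionally, you can confirm that Ansible picks up this setting by running the following from inside the ~/development directory; it lists every configuration value that differs from the defaults, which should include the vault password file path:

        ansible-config dump --only-changed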

      Encrypt Variables with Ansible Vault

      1. Create a directory to store variable files used with your
        Ansible playbooks:

        mkdir -p ~/development/group_vars/
        
      2. Make a new empty text file called vars.yml in this directory. In the next steps, your encrypted API token and root password are stored in this file:

        touch ~/development/group_vars/vars.yml
        
      3. Generate a unique, complex new password (for example, by using a password manager) that should be used as the root password for new compute instances created with the Linode Ansible collection. This should be different from the Ansible Vault password specified in the .vault-pass file.

      4. Use the following ansible-vault encrypt_string command to encrypt the new root password, replacing MySecureRootPassword with your password. Because this command is run from inside your ~/development directory, the Ansible Vault password in your .vault-pass file is used to perform the encryption:

         ansible-vault encrypt_string 'MySecureRootPassword' --name 'password' | tee -a group_vars/vars.yml
        

        In the above command, tee -a group_vars/vars.yml appends the encrypted string to your vars.yml file. Once completed, output similar to the following appears:

        password: !vault |
            $ANSIBLE_VAULT;1.1;AES256
            30376134633639613832373335313062366536313334316465303462656664333064373933393831
            3432313261613532346134633761316363363535326333360a626431376265373133653535373238
            38323166666665376366663964343830633462623537623065356364343831316439396462343935
            6233646239363434380a383433643763373066633535366137346638613261353064353466303734
            3833
      5. Run the following command to add a newline at the end of your vars.yml file:

        echo "" >> group_vars/vars.yml
        
      6. Use the following ansible-vault encrypt_string command to encrypt your Linode API token and append it to your vars.yml file, replacing MyAPIToken with your own access token:

        ansible-vault encrypt_string 'MyAPIToken' --name 'api_token' | tee -a group_vars/vars.yml
        
      7. Run the following command to add another newline at the end of your vars.yml file:

        echo "" >> group_vars/vars.yml
        

        Your vars.yml file should now resemble:

        File: ~/development/group_vars/vars.yml
        
        password: !vault |
                  $ANSIBLE_VAULT;1.1;AES256
                  30376134633639613832373335313062366536313334316465303462656664333064373933393831
                  3432313261613532346134633761316363363535326333360a626431376265373133653535373238
                  38323166666665376366663964343830633462623537623065356364343831316439396462343935
                  6233646239363434380a383433643763373066633535366137346638613261353064353466303734
                  3833
        api_token: !vault |
                  $ANSIBLE_VAULT;1.1;AES256
                  65363565316233613963653465613661316134333164623962643834383632646439306566623061
                  3938393939373039373135663239633162336530373738300a316661373731623538306164363434
                  31656434356431353734666633656534343237333662613036653137396235353833313430626534
                  3330323437653835660a303865636365303532373864613632323930343265343665393432326231
                  61313635653463333630636631336539643430326662373137303166303739616262643338373834
                  34613532353031333731336339396233623533326130376431346462633832353432316163373833
                  35316333626530643736636332323161353139306533633961376432623161626132353933373661
                  36663135323664663130
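      As an optional sanity check, you can ask Ansible to decrypt and print one of the values with an ad-hoc debug task. This is only a sketch: it assumes the command is run from inside ~/development (so the ansible.cfg and .vault-pass files there are used), and note that it prints the decrypted token to your terminal:

        ansible localhost -e "@group_vars/vars.yml" -m ansible.builtin.debug -a "var=api_token"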

      Understanding Fully Qualified Collection Namespaces

      Ansible is now configured and the Linode Ansible collection is installed. You can create
      playbooks to leverage the collection and create compute instances and other Linode resources.

      Within playbooks, the Linode Ansible collection is further divided by resource type through the
      Fully Qualified Collection Name (FQCN) associated with the desired resource. These names serve as identifiers that help Ansible unambiguously distinguish between modules and plugins within a collection.
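      For example, the module that manages compute instances is addressed as linode.cloud.instance: linode is the namespace, cloud is the collection, and instance is the module. Inside a playbook task, the FQCN is used as the module name, as in this short sketch (the values shown are placeholders):

        - name: Ensure a compute instance exists
          linode.cloud.instance:          # namespace.collection.module
            api_token: "{{ api_token }}"
            label: example-instance
            type: g6-nanode-1
            region: us-east
            state: present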

      Modules

      Each module included with the Linode Ansible collection has its own FQCN and a documentation page listing all of the configuration options available for the resource that the module manages. A full, dynamically updated list of all modules and plugins can be found in the
      Linode Ansible Collection GitHub repository.

      Inventory Plugins

      In addition to modules, the collection includes a dynamic inventory plugin that can build an Ansible inventory from the instances already present on your Linode account; its documentation is available in the same repository.

      Deploy a Linode with the Linode Ansible Collection

      This section shows how to write a playbook that leverages the Linode Ansible collection and your encrypted API token and root password to create a new Linode instance:

      1. Create a playbook file called deploylinode.yml in your ~/development directory. Copy this snippet into the file and save it:

        File: ~/development/deploylinode.yml
        
        - name: Create Linode Instance
          hosts: localhost
          vars_files:
              - ./group_vars/vars.yml
          tasks:
            - name: Create a Linode instance
              linode.cloud.instance:
                api_token: "{{ api_token }}"
                label: my-ansible-linode
                type: g6-nanode-1
                region: us-east
                image: linode/ubuntu22.04
                root_pass: "{{ password }}"
                state: present
        • The playbook contains the Create Linode Instance play. When run, the control node receives the necessary instructions from Ansible and uses the Linode API to deploy infrastructure as needed.

        • The vars_files key provides the location of the variable file or files used to populate information related to tasks for the play.

        • The task in the playbook is defined by the name, which serves as a label, and the FQCN used to configure the resource, in this case a Linode compute instance.

        • The configuration options associated with the FQCN are defined. The configuration options for each FQCN are unique to the resource.

          For options where secure strings are used, the encrypted variables in the ./group_vars/vars.yml file are inserted. This includes the API token and root password.

      2. Once the playbook is saved, enter the following command to run it and create a Linode Nanode instance. Because this command is run from inside your ~/development directory, the Ansible Vault password in your .vault-pass file is used by the playbook to decrypt the variables:

        ansible-playbook deploylinode.yml
        

        Once completed, output similar to the following appears:

        PLAY [Create Linode] *********************************************************************
        
        TASK [Gathering Facts] *******************************************************************
        ok: [localhost]
        
        TASK [Create a new Linode.] **************************************************************
        changed: [localhost]
        
        PLAY RECAP *******************************************************************************
        localhost                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
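      The same module can also remove the instance once you no longer need it. Below is a minimal sketch of a companion playbook (the removelinode.yml filename is arbitrary) that reuses the encrypted API token and identifies the instance by its label, assuming the module's state: absent option removes a matching instance. Run it with ansible-playbook removelinode.yml from the ~/development directory:

        File: ~/development/removelinode.yml

        - name: Remove Linode Instance
          hosts: localhost
          vars_files:
              - ./group_vars/vars.yml
          tasks:
            - name: Remove the Linode instance
              linode.cloud.instance:
                api_token: "{{ api_token }}"
                label: my-ansible-linode
                state: absent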


      How to Deploy TOBS on LKE



      In this guide, you deploy
      TOBS to your Linode Kubernetes Engine (LKE) cluster using
      Helm, and use kubectl port-forward for local access to your monitoring interfaces.

      The Prometheus Operator Monitoring Stack

      TOBS, short for The Observability Stack, is a pre-packaged distribution of monitoring tools and dashboard interfaces which can be installed on any existing Kubernetes cluster. It includes many of the most popular open-source observability tools such as Prometheus, Grafana, Promlens, TimescaleDB, and others. Together, these provide a maintainable solution to analyze the traffic on the server and identify any potential problems with a deployment. You can use Helm charts to configure and update TOBS deployments.

      TOBS includes the following components:

      • OpenTelemetry collector is deployed to collect traces.
      • Alertmanager is deployed alongside Prometheus, forms the alerting layer of the stack, and handles alerts generated by Prometheus.
      • Grafana is a data visualization and analytics tool that allows you to build dashboards and graphs for your metrics data.
      • PromLens is a PromQL query builder that helps you build, understand, and fix your queries more effectively.
      • TimescaleDB is for long-term storage of metric data. Long-term storage provides the ability to perform post-hoc analysis on metric data over long periods of time. Such data analysis can be used for capacity planning, identifying slow-moving regressions, trend analysis, auditing, and more. For information about connecting to the database from the cluster, see
        TimescaleDB Documentation
      • Promscale provides the translation layer between Prometheus and the database. It allows the Prometheus server to store and retrieve metrics from TimescaleDB, and allows users to use PromQL on Promscale and Prometheus.
      • Prometheus is an open-source systems monitoring and alerting stack. It has become the de-facto standard in metric monitoring and is the basis of standards such as OpenMetrics. It allows you to monitor and understand how your infrastructure and applications are performing. Service discovery allows Prometheus to automatically discover components within your Kubernetes cluster that are already emitting metrics.
      • kube-state-metrics exports metrics about Kubernetes resources, such as their status and count, providing visibility into the desired versus current state of resources and into trends in your cluster.
      • Node-Exporter is deployed to export node related metrics such as CPU, memory usage, and others from the Kubernetes cluster.

      Before You Begin

      Note

      This guide was written using
      Kubernetes version 1.23.
      1. Deploy an LKE Cluster. This guide was written using an example node pool with three
        2 GB Linodes. Depending on the workloads you plan to deploy on your cluster, you may consider using Linodes with more available resources.

      2. Install
        Helm 3 to your local environment.

      3. Install
        kubectl to your local environment and
        connect to your cluster.

      4. Create the monitoring namespace on your LKE cluster:

        kubectl create namespace monitoring
        
      5. Add the stable Helm charts repository to your Helm repos:

        helm repo add stable https://charts.helm.sh/stable
        
      6. Update your Helm repositories:

        helm repo update
        

      TOBS Minimal Deployment

      This section shows how to deploy TOBS and access its monitoring interfaces locally with kubectl
      port-forward.

      Deploy The Observability Stack

      1. Install a certificate manager for your LKE cluster:

         kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
        
      2. Using Helm, deploy the TOBS release labeled lke-monitor in the monitoring namespace on your LKE cluster:

        helm repo add timescale https://charts.timescale.com/
        helm repo update
        helm install --wait lke-monitor timescale/tobs --namespace monitoring
        
      3. Verify that the Prometheus Operator has been deployed to your LKE cluster and its components are running and ready by checking the pods in the monitoring namespace:

        kubectl -n monitoring get pods
        

        You should see a similar output to the following:

        NAME                                                        READY   STATUS      RESTARTS      AGE
        alertmanager-tobs-kube-prometheus-alertmanager-0            2/2     Running     0             2m13s
        lke-monitor-connection-secret-j4sdh                         0/1     Completed   0             2m35s
        lke-monitor-grafana-54d979dcf5-tkkgj                        3/3     Running     2 (65s ago)   2m32s
        lke-monitor-grafana-db-swm8g                                0/1     Completed   3             2m35s
        lke-monitor-kube-state-metrics-6bc5c44b9-g8r5g              1/1     Running     0             2m27s
        lke-monitor-prometheus-node-exporter-b4vvg                  1/1     Running     0             2m33s
        lke-monitor-prometheus-node-exporter-bbcnd                  1/1     Running     0             2m34s
        lke-monitor-prometheus-node-exporter-frrfp                  1/1     Running     0             2m26s
        lke-monitor-promlens-569cfbd586-bkhrr                       1/1     Running     0             2m34s
        lke-monitor-promscale-86d574986c-9wj2z                      1/1     Running     4 (64s ago)   2m27s
        lke-monitor-timescaledb-0                                   1/1     Running     0             2m30s
        opentelemetry-operator-controller-manager-8cf5c85c8-krdj5   2/2     Running     0             2m27s
        prometheus-tobs-kube-prometheus-prometheus-0                2/2     Running     0             2m13s
        tobs-kube-prometheus-operator-5b4f674986-55r4k              1/1     Running     0             2m34s
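        Optionally, you can also confirm the state of the Helm release itself with standard Helm commands:

        helm -n monitoring list
        helm -n monitoring status lke-monitor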

      Access Monitoring Interfaces with Port-Forward

      1. List the services running in the monitoring namespace and review their respective ports:

        kubectl -n monitoring get svc
        

        You should see an output similar to the following:

        NAME                                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                           AGE
        alertmanager-operated                                       ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   3m41s
        lke-monitor                                                 ClusterIP   10.128.40.142    <none>        5432/TCP                     4m3s
        lke-monitor-config                                          ClusterIP   None             <none>        8008/TCP                     4m3s
        lke-monitor-grafana                                         ClusterIP   10.128.102.243   <none>        80/TCP                       4m3s
        lke-monitor-kube-state-metrics                              ClusterIP   10.128.208.39    <none>        8080/TCP                     4m3s
        lke-monitor-prometheus-node-exporter                        ClusterIP   10.128.170.88    <none>        9100/TCP                     4m3s
        lke-monitor-promlens                                        ClusterIP   10.128.45.92     <none>        80/TCP                       4m3s
        lke-monitor-promscale-connector                             ClusterIP   10.128.198.88    <none>        9201/TCP,9202/TCP            4m3s
        lke-monitor-replica                                         ClusterIP   10.128.137.189   <none>        5432/TCP                     4m3s
        opentelemetry-operator-controller-manager-metrics-service   ClusterIP   10.128.45.42     <none>        8443/TCP                     4m3s
        opentelemetry-operator-webhook-service                      ClusterIP   10.128.12.89     <none>        443/TCP                      4m3s
        prometheus-operated                                         ClusterIP   None             <none>        9090/TCP                     3m41s
        tobs-kube-prometheus-alertmanager                           ClusterIP   10.128.33.44     <none>        9093/TCP                     4m3s
        tobs-kube-prometheus-operator                               ClusterIP   10.128.175.39    <none>        443/TCP                      4m3s
        tobs-kube-prometheus-prometheus                             ClusterIP   10.128.106.173   <none>        9090/TCP                     4m3s

        From the above output, the resource services you will access have the corresponding ports:

        Resource       Service Name                        Port
        Prometheus     tobs-kube-prometheus-prometheus     9090
        Alertmanager   tobs-kube-prometheus-alertmanager   9093
        Grafana        lke-monitor-grafana                 80
      2. Use kubectl
        port-forward to open a connection to a service, then access the service’s interface by entering the corresponding address in your web browser:

        Note

        Press control+C on your keyboard to terminate a port-forward process after entering any of the following commands.

        • To provide access to the Prometheus interface at the address 127.0.0.1:9090 in your web browser, enter:

          kubectl -n monitoring \
          port-forward \
          svc/tobs-kube-prometheus-prometheus \
          9090
          
        • To provide access to the Alertmanager interface at the address 127.0.0.1:9093 in your web browser, enter:

          kubectl -n monitoring \
          port-forward \
          svc/tobs-kube-prometheus-alertmanager  \
          9093
          
        • To provide access to the Grafana interface at the address 127.0.0.1:8081 in your web browser, enter:

          kubectl -n monitoring \
          port-forward \
          svc/lke-monitor-grafana  \
          8081:80
          

          When accessing the Grafana interface, log in as admin. You can get the password using:

          kubectl get secret --namespace monitoring lke-monitor-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
          

          The Grafana dashboards are accessible at Dashboards > Manage from the left navigation bar.

      TOBS eliminates the need to maintain configuration details for each of the applications, while providing standardized monitoring for the applications running on your cluster.
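      Because the whole stack is managed as a single Helm release, later configuration changes are applied through Helm rather than by editing each component individually. A minimal sketch, assuming a hypothetical values.yaml file containing your overrides for the timescale/tobs chart:

        helm upgrade lke-monitor timescale/tobs --namespace monitoring -f values.yaml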


      How to Deploy to Kubernetes using Argo CD and GitOps


      Introduction

      Using Kubernetes to deploy your application can provide significant infrastructural advantages, such as flexible scaling, management of distributed components, and control over different versions of your application. However, with that increased control comes increased complexity. Continuous Integration and Continuous Deployment (CI/CD) systems usually work at a high level of abstraction in order to provide version control, change logging, and rollback functionality. A popular approach to this abstraction layer is called GitOps.

      GitOps, as originally proposed by Weaveworks in a 2017 blog post, uses Git as a “single source of truth” for CI/CD processes, integrating code changes in a single, shared repository per project and using pull requests to manage infrastructure and deployment.

      There are several tools that use Git as a focal point for DevOps processes on Kubernetes. In this tutorial, you will learn to use Argo CD, a declarative Continuous Delivery tool. Argo CD provides Continuous Delivery tooling that automatically synchronizes and deploys your application whenever a change is made in your GitHub repository. By managing the deployment and lifecycle of an application, it provides solutions for version control, configurations, and application definitions in Kubernetes environments, organizing complex data with an easy-to-understand user interface. It can handle several types of Kubernetes manifests, including Jsonnet, Kustomize applications, Helm charts, and YAML/json files, and supports webhook notifications from GitHub, GitLab, and Bitbucket.

      In this article, you will use Argo CD to synchronize and deploy an application from a GitHub repository.

      Prerequisites

      To follow this tutorial, you will need:

      • A Kubernetes cluster, along with a valid kubeconfig for it.

      • The kubectl command-line tool installed on your local machine and configured to connect to your cluster. You can verify the connection in Step 1.

      Step 1 — Installing Argo CD on Your Cluster

      In order to install Argo CD, you should first have a valid Kubernetes configuration set up with kubectl, from which you can ping your worker nodes. You can test this by running kubectl get nodes:

      This command should return a list of nodes with the Ready status:

      Output

      NAME                   STATUS   ROLES    AGE   VERSION
      pool-uqv8a47h0-ul5a7   Ready    <none>   22m   v1.21.5
      pool-uqv8a47h0-ul5am   Ready    <none>   21m   v1.21.5
      pool-uqv8a47h0-ul5aq   Ready    <none>   21m   v1.21.5

      If kubectl does not return a set of nodes with the Ready status, you should review your cluster configuration and the Kubernetes documentation.
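      Two quick checks that can help narrow down configuration problems are confirming which context kubectl is currently using and whether the cluster's API server is reachable:

      • kubectl config current-context

      • kubectl cluster-info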

      Next, create the argocd namespace in your cluster, which will contain Argo CD and its associated services:

      • kubectl create namespace argocd

      After that, you can apply the Argo CD installation manifest provided by the project maintainers:

      • kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

      Once the installation completes successfully, you can use the watch command to check the status of your Kubernetes pods:

      • watch kubectl get pods -n argocd

      By default, there should be five pods that eventually receive the Running status as part of a stock Argo CD installation.

      Output

      NAME                                  READY   STATUS    RESTARTS   AGE
      argocd-application-controller-0       1/1     Running   0          2m28s
      argocd-dex-server-66f865ffb4-chwwg    1/1     Running   0          2m30s
      argocd-redis-5b6967fdfc-q4klp         1/1     Running   0          2m30s
      argocd-repo-server-656c76778f-vsn7l   1/1     Running   0          2m29s
      argocd-server-cd68f46f8-zg7hq         1/1     Running   0          2m28s

      You can press Ctrl+C to exit the watch interface. You now have Argo CD running in your Kubernetes cluster! However, because of the way Kubernetes creates abstractions around your network interfaces, you won’t be able to access it directly without forwarding ports from inside your cluster. You’ll learn how to handle that in the next step.

      Step 2 — Forwarding Ports to Access Argo CD

      Because Kubernetes deploys services to arbitrary network addresses inside your cluster, you’ll need to forward the relevant ports in order to access them from your local machine. Argo CD sets up a service named argocd-server on port 443 internally. Because port 443 is the default HTTPS port, and you may be running some other HTTP/HTTPS services, it’s common practice to forward it to an arbitrarily chosen higher port, such as 8080, like so:

      • kubectl port-forward svc/argocd-server -n argocd 8080:443

      Port forwarding will block the terminal it’s running in as long as it’s active, so you’ll probably want to run this in a new terminal window while you continue to work. You can press Ctrl+C to gracefully quit a blocking process such as this one when you want to stop forwarding the port.

      In the meantime, you should be able to access Argo CD in a web browser by navigating to localhost:8080. However, you’ll be prompted for a login password which you’ll need to use the command line to retrieve in the next step. You’ll probably need to click through a security warning because Argo CD has not yet been configured with a valid SSL certificate.

      Note: Using LetsEncrypt HTTPS certificates with Kubernetes is best accomplished with the use of additional tooling like Cert-Manager.

      Step 3 — Working with Argo CD from the Command Line

      For the next steps, you’ll want to have the argocd command installed locally for interfacing with and changing settings in your Argo CD instance. Argo CD’s official documentation recommends that you install it via the Homebrew package manager. Homebrew is very popular for managing command line tools on MacOS, and has more recently been ported to Linux to facilitate maintaining tools like this one.

      If you don’t already have Homebrew installed, you can retrieve and install it with a one-line command:

      • /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

      You may be prompted for your password during the installation process. Afterward, you should have the brew command available in your terminal. You can use it to install Argo CD:

      • brew install argocd

      This in turn provides the argocd command. Before using it, you’ll want to use kubectl again to retrieve the admin password which was automatically generated during your installation, so that you can use it to log in. You’ll pass it a path to a particular JSON file that’s stored using Kubernetes secrets, and extract the relevant value:

      • kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo

      Output

      fbP20pvw-o-D5uxH

      You can then log into your Argo CD dashboard by going back to localhost:8080 in a browser and logging in as the admin user with your own password:

      Argo CD app status

      Once everything is working, you can use the same credentials to log in to Argo CD via the command line, by running argocd login. This will be necessary for deploying from the command line later on:

      • argocd login localhost:8080

      You’ll receive the equivalent certificate warning again on the command line here, and should enter y to proceed when prompted. If desired, you can then change your password to something more secure or more memorable by running argocd account update-password. After that, you’ll have a fully working Argo CD configuration. In the final steps of this tutorial, you’ll learn how to use it to actually deploy some example applications.
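      As a quick check that the CLI session is working, you can list the applications Argo CD knows about; on a fresh installation the list is empty:

      • argocd app list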

      Step 4 — Handling Multiple Clusters (Optional)

      Before deploying an application, you should review where you actually want to deploy it. By default, Argo CD will deploy applications to the same cluster that Argo CD itself is running in, which is fine for a demo, but is probably not what you’ll want in production. In order to list all of the clusters known to your current machine, you can use kubectl config:

      • kubectl config get-contexts -o name

      Output

      test-deploy-cluster
      test-target-cluster

      Assuming that you’ve installed Argo CD into test-deploy-cluster, and you wanted to use it to deploy applications onto test-target-cluster, you could register test-target-cluster with Argo CD by running argocd cluster add:

      • argocd cluster add test-target-cluster

      This will add the additional cluster’s login details to Argo CD, and enable Argo CD to deploy services on the cluster.
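      You can confirm the registration afterward by listing the clusters Argo CD is aware of:

      • argocd cluster list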

      Step 5 — Deploying an Example Application (Optional)

      Now that you have Argo CD running and you have an understanding of how to deploy applications to different Kubernetes clusters, it’s time to put it into practice. The Argo CD project maintains a repository of example applications that have been architected to showcase GitOps fundamentals. Many of these examples are ports of the same guestbook demo app to different kinds of Kubernetes manifests, such as Jsonnet. In this case, you’ll be deploying the helm-guestbook example, which uses a Helm chart, one of the most durable Kubernetes management solutions.

      In order to do that, you’ll use the argocd app create command, providing the path to the Git repository, the specific helm-guestbook example, and passing your default destination and namespace:

      • argocd app create helm-guestbook --repo https://github.com/argoproj/argocd-example-apps.git --path helm-guestbook --dest-server https://kubernetes.default.svc --dest-namespace default

      After “creating” the application inside of Argo CD, you can check its status with argocd app get:

      • argocd app get helm-guestbook

      Output

      Name:               helm-guestbook
      Project:            default
      Server:             https://kubernetes.default.svc
      Namespace:          default
      URL:                https://localhost:8080/applications/helm-guestbook
      Repo:               https://github.com/argoproj/argocd-example-apps.git
      Target:
      Path:               helm-guestbook
      SyncWindow:         Sync Allowed
      Sync Policy:        <none>
      Sync Status:        OutOfSync from (53e28ff)
      Health Status:      Missing
      
      GROUP  KIND        NAMESPACE  NAME            STATUS     HEALTH   HOOK  MESSAGE
             Service     default    helm-guestbook  OutOfSync  Missing
      apps   Deployment  default    helm-guestbook  OutOfSync  Missing

      The OutOfSync application status is normal. You’ve retrieved the application’s helm chart from Github and created an entry for it in Argo CD, but you haven’t actually spun up any Kubernetes resources for it yet. In order to actually deploy the application you’ll run argocd app sync:

      • argocd app sync helm-guestbook

      sync is synonymous with deployment here in keeping with the principles of GitOps – the goal when using Argo CD is for your application to always track 1:1 with its upstream configuration.

      Output

      TIMESTAMP                  GROUP  KIND        NAMESPACE  NAME            STATUS     HEALTH       HOOK  MESSAGE
      2022-01-19T11:01:48-08:00         Service     default    helm-guestbook  OutOfSync  Missing
      2022-01-19T11:01:48-08:00  apps   Deployment  default    helm-guestbook  OutOfSync  Missing
      2022-01-19T11:01:48-08:00         Service     default    helm-guestbook  Synced     Healthy
      2022-01-19T11:01:48-08:00         Service     default    helm-guestbook  Synced     Healthy            service/helm-guestbook created
      2022-01-19T11:01:48-08:00  apps   Deployment  default    helm-guestbook  OutOfSync  Missing            deployment.apps/helm-guestbook created
      2022-01-19T11:01:49-08:00  apps   Deployment  default    helm-guestbook  Synced     Progressing        deployment.apps/helm-guestbook created
      
      Name:               helm-guestbook
      Project:            default
      Server:             https://kubernetes.default.svc
      Namespace:          default
      URL:                https://localhost:8080/applications/helm-guestbook
      Repo:               https://github.com/argoproj/argocd-example-apps.git
      Target:
      Path:               helm-guestbook
      SyncWindow:         Sync Allowed
      Sync Policy:        <none>
      Sync Status:        Synced to (53e28ff)
      Health Status:      Progressing
      
      Operation:          Sync
      Sync Revision:      53e28ff20cc530b9ada2173fbbd64d48338583ba
      Phase:              Succeeded
      Start:              2022-01-19 11:01:49 -0800 PST
      Finished:           2022-01-19 11:01:50 -0800 PST
      Duration:           1s
      Message:            successfully synced (all tasks run)
      
      GROUP  KIND        NAMESPACE  NAME            STATUS  HEALTH       HOOK  MESSAGE
             Service     default    helm-guestbook  Synced  Healthy            service/helm-guestbook created
      apps   Deployment  default    helm-guestbook  Synced  Progressing        deployment.apps/helm-guestbook created

      You have now successfully deployed an application using Argo CD! It is possible to accomplish the same thing from the Argo CD web interface, but it is usually quicker and more reproducible to deploy via the command line. However, it is very helpful to check on your Argo CD web dashboard after deployment in order to verify that your applications are running properly. You can see that by opening localhost:8080 in a browser:

      Argo CD app status

      At this point, the last thing to do is to ensure you can access your new deployment in a browser. To do that, you’ll forward another port, the way you did for Argo CD itself. Internally, the helm-guestbook app runs on the regular HTTP port 80, and in order to avoid conflicting with anything that might be running on your own port 80 or on the port 8080 you’re using for Argo CD, you can forward it to port 9090:

      • kubectl port-forward svc/helm-guestbook 9090:80

      As before, you’ll probably want to do this in another terminal, because it will block that terminal until you press Ctrl+C to stop forwarding the port. You can then open localhost:9090 in a browser window to see your example guestbook app:

      Guestbook app

      Any further pushes to this GitHub repository will automatically be reflected in Argo CD, which will resync your deployment while providing continuous availability.
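      If you find that new commits are only applied after you run argocd app sync manually, you can optionally enable Argo CD's automated sync policy for the application, which tells it to apply detected changes on its own:

      • argocd app set helm-guestbook --sync-policy automated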

      Conclusion

      You’ve now seen the fundamentals of installing and deploying applications using Argo CD. Because Kubernetes requires so many layers of abstraction, it’s important to ensure that your deployments are as maintainable as possible, and the GitOps philosophy is a good solution.

      Next, you may want to learn about deploying TOBS, The Observability Stack, for monitoring the uptime, health, and logging of your Kubernetes cluster.


