
      Linode Kubernetes Engine v1.44.0


The dockershim component was removed in upstream Kubernetes starting at version 1.24 (see the Dockershim Removal FAQ). The Linode Kubernetes Engine has kept this component installed on 1.24 LKE nodes in case any customers still rely on it. When deploying a new LKE cluster using Kubernetes v1.24 (or a later version), the default container runtime has been changed to containerd.
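
      If you want to confirm which runtime a given node is using, the wide output of kubectl get nodes includes a CONTAINER-RUNTIME column; this is a standard kubectl check you can run against any cluster:

      • kubectl get nodes -o wide

      Nodes running containerd report a runtime string such as containerd://1.4.x, while nodes still using the Docker Engine via dockershim report docker://….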




      How to Set Up TOBS, The Observability Stack, for Kubernetes Monitoring


      Introduction

TOBS, short for The Observability Stack, is a pre-packaged distribution of monitoring tools and dashboard interfaces that can be installed into any existing Kubernetes cluster. Built around Prometheus and Grafana as a baseline, it also includes many other popular open-source observability tools such as PromLens, TimescaleDB, and Alertmanager. Together, these provide a straightforward, maintainable solution for analyzing server traffic and identifying potential problems with a deployment, even at very large scale.

TOBS makes use of standard Kubernetes Helm charts to configure and update deployments. It can be installed into any Kubernetes cluster, but it is easiest to demonstrate if you’re running kubectl to manage your cluster from a local machine rather than from a remote node. DigitalOcean’s Managed Kubernetes will provide you with a configuration like this by default.

      In this tutorial, you will install TOBS into an existing Kubernetes cluster, and learn how to update, configure, and browse its component dashboards.

      Prerequisites

      To follow this tutorial, you will need:

      Step 1 — Verifying your Kubernetes Configuration

In order to install TOBS, you should first have a valid Kubernetes configuration set up with kubectl, from which you can reach your worker nodes. You can test this by running kubectl get nodes:

      • kubectl get nodes

      If kubectl is able to connect to your Kubernetes cluster and it’s up and running as expected, this command will return a list of nodes with the Ready status:

      Output

NAME                   STATUS   ROLES    AGE   VERSION
      pool-uqv8a47h0-ul5a7   Ready    <none>   22m   v1.21.5
      pool-uqv8a47h0-ul5am   Ready    <none>   21m   v1.21.5
      pool-uqv8a47h0-ul5aq   Ready    <none>   21m   v1.21.5

      If this is successful, you can move on to Step 2. If not, you should review your configuration details for any issues.

By default, kubectl will look for a file at ~/.kube/config in order to understand your environment. In order to verify that this file exists and contains valid YAML syntax, you can run head on it to view its first several lines:

      • head ~/.kube/config

      Output

apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: …

      If the file does not exist, ensure that you are logged in as the same user that you configured Kubernetes with. ~/ paths reflect individual users’ home directories, and Kubernetes configurations are saved per-user by default.
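
      Beyond inspecting the raw file, you can also have kubectl parse the configuration itself, which catches YAML errors and shows which cluster the current context points at. This prints the active configuration with certificate data redacted:

      • kubectl config view --minify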

      If you are using DigitalOcean’s Managed Kubernetes, ensure that you have run the doctl kubernetes cluster kubeconfig save command after setting up a cluster so that your local machine can authenticate to it. This will create a ~/.kube/config file:

      • doctl kubernetes cluster kubeconfig save your-cluster-name

      If you are using this machine to access multiple clusters, you should review the Kubernetes documentation on using environment variables and multiple configuration files in order to avoid conflicts. After configuring your kubectl environment, you can move on to installing TOBS in the next step.
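
      As a brief sketch of that approach: kubectl merges every file listed in the KUBECONFIG environment variable, and you can then switch between clusters by context name. The file paths and context name below are placeholders for your own:

      # Point kubectl at multiple config files for this shell session
      export KUBECONFIG=~/.kube/config:~/.kube/config-staging
      # List the contexts kubectl now knows about, then switch between them
      kubectl config get-contexts
      kubectl config use-context your-staging-context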

      Step 2 — Installing TOBS and Testing Your Endpoints

      TOBS includes the following components:

      • Prometheus is a time series database and monitoring tool that works by polling metrics endpoints and scraping and processing the data exposed by these endpoints. It allows you to query this data using PromQL, a time series data query language.
      • Alertmanager, usually deployed alongside Prometheus, forms the alerting layer of the stack, handling alerts generated by Prometheus and deduplicating, grouping, and routing them to integrations like email or PagerDuty. To learn more about Alertmanager, consult the Prometheus documentation on alerting.
      • Grafana is a data visualization and analytics tool that allows you to build dashboards and graphs for your metrics data.
      • kube-state-metrics is an add-on agent that listens to the Kubernetes API server and generates metrics about the state of Kubernetes objects like Deployments and Pods. These metrics are served as plaintext on HTTP endpoints and consumed by Prometheus.
• Lastly, node-exporter is a Prometheus exporter that runs on cluster nodes and provides OS and hardware metrics like CPU and memory usage to Prometheus. These metrics are also served as plaintext on HTTP endpoints and consumed by Prometheus, as shown in the example below.
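
      To see what one of these plaintext endpoints looks like, you can port-forward a metrics service and fetch its /metrics path with curl once TOBS is installed. The service name below is an assumption based on typical kube-prometheus-stack release naming; run kubectl get svc to find the exact name in your cluster:

      # Forward the node-exporter service locally (service name is an assumption; check kubectl get svc)
      kubectl port-forward svc/tobs-prometheus-node-exporter 9100:9100
      # In another terminal, view the first few lines of the exposition-format output
      curl -s http://127.0.0.1:9100/metrics | head -n 5

      Each line of the response is either a # HELP/# TYPE annotation or a metric sample, which is exactly the format Prometheus scrapes.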

In order to install TOBS, you first need to run the TOBS installer on the machine from which you control your cluster. This will set up the tobs command and configuration directories. As mentioned in the prerequisites, the tobs command is only designed to work on Linux/macOS/BSD systems (like the official Kubernetes binaries), so if you have been using Windows up to now, you should be working in the Windows Subsystem for Linux environment.

      Retrieve and run the TOBS installer:

• curl --proto '=https' --tlsv1.2 -sSLf https://tsdb.co/install-tobs-sh | sh

      Output

tobs 0.7.0 was successfully installed 🎉
      Binary is available at /root/.local/bin/tobs.

You can now push TOBS to your Kubernetes cluster. This is done with a one-liner using your newly-installed tobs command:

      • tobs install

      This will generate several lines of output and may take a few moments. Depending on your exact version of Kubernetes, there may be several warnings in the output, but you can ignore these as long as you eventually receive the Welcome to tobs message:

      Output

WARNING: Using a generated self-signed certificate for TLS access to TimescaleDB. This should only be used for development and demonstration purposes. To use a signed certificate, use the "--tls-timescaledb-cert" and "--tls-timescaledb-key" flags when issuing the tobs install command.
      Creating TimescaleDB tobs-certificate secret
      Creating TimescaleDB tobs-credentials secret
      skipping to create TimescaleDB s3 backup secret as backup option is disabled.
      2022/01/10 11:25:34 Transport: unhandled response frame type *http.http2UnknownFrame
      2022/01/10 11:25:35 Transport: unhandled response frame type *http.http2UnknownFrame
      2022/01/10 11:25:35 Transport: unhandled response frame type *http.http2UnknownFrame
      Installing The Observability Stack
      2022/01/10 11:25:37 Transport: unhandled response frame type *http.http2UnknownFrame
      W0110 11:25:55.438728 75479 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
      W0110 11:25:55.646392 75479 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
      …
      👋🏽 Welcome to tobs, The Observability Stack for Kubernetes
      …

      The output from this point onward will contain instructions for connecting to each of Prometheus, TimescaleDB, PromLens, and Grafana’s web endpoints in your browser. It is reproduced in full below for reference:

      Output

###############################################################################
      🔥 PROMETHEUS NOTES:
      ###############################################################################
      Prometheus can be accessed via port 9090 on the following DNS name from within your cluster:
      tobs-kube-prometheus-prometheus.default.svc.cluster.local

      Get the Prometheus server URL by running these commands in the same shell:
        tobs prometheus port-forward

      The Prometheus alertmanager can be accessed via port 9093 on the following DNS name from within your cluster:
      tobs-kube-prometheus-alertmanager.default.svc.cluster.local

      Get the Alertmanager URL by running these commands in the same shell:
        export POD_NAME=$(kubectl get pods --namespace default -l "app=alertmanager,alertmanager=tobs-kube-prometheus-alertmanager" -o jsonpath="{.items[0].metadata.name}")
        kubectl --namespace default port-forward $POD_NAME 9093

      WARNING! Persistence is disabled on AlertManager. You will lose your data when the AlertManager pod is terminated.
      ###############################################################################
      🐯 TIMESCALEDB NOTES:
      ###############################################################################
      TimescaleDB can be accessed via port 5432 on the following DNS name from within your cluster:
      tobs.default.svc.cluster.local

      To get your password for superuser run:
        tobs timescaledb get-password -U <user>

      To connect to your database, chose one of these options:
      1. Run a postgres pod and connect using the psql cli:
         tobs timescaledb connect -U <user>
      2. Directly execute a psql session on the master node
         tobs timescaledb connect -m
      ###############################################################################
      🧐 PROMLENS NOTES:
      ###############################################################################
      PromLens is a PromQL query builder, analyzer, and visualizer.

      You can access PromLens via a local browser by executing:
        tobs promlens port-forward

      Then you can point your browser to http://127.0.0.1:8081/.
      ###############################################################################
      📈 GRAFANA NOTES:
      ###############################################################################
      1. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:
         tobs-grafana.default.svc.cluster.local
         You can access grafana locally by executing:
           tobs grafana port-forward
         Then you can point your browser to http://127.0.0.1:8080/.
      2. The 'admin' user password can be retrieved by:
         tobs grafana get-password
      3. You can reset the admin user password with grafana-cli from inside the pod.
         tobs grafana change-password <password-you-want-to-set>

Each of these is provided with a DNS name internal to your cluster so that it can be accessed from any of your worker nodes, e.g. tobs-kube-prometheus-alertmanager.default.svc.cluster.local for Alertmanager. In addition, a port forwarding command is provided for each so that you can access them from a local web browser.
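
      These internal names only resolve from inside the cluster, so one way to test them is with a short-lived pod. This sketch uses the public curlimages/curl image to hit Prometheus’s standard /-/healthy endpoint at the DNS name from the install output:

      • kubectl run tobs-dns-test --rm -it --restart=Never --image=curlimages/curl -- curl -s http://tobs-kube-prometheus-prometheus.default.svc.cluster.local:9090/-/healthy

      A response like Prometheus is Healthy. confirms that in-cluster DNS and the Prometheus service are both working; the exact wording varies by Prometheus version.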

      In a new terminal, run tobs prometheus port-forward:

      • tobs prometheus port-forward

      This will occupy the terminal as long as the port forwarding process is active. You can press Ctrl+C to gracefully quit a blocking process such as this one when you want to stop forwarding the port. Next, in a web browser, go to the URL http://127.0.0.1:9090/. You should see the full Prometheus interface running and producing metrics from your cluster:

      Prometheus welcome
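
      While the port forward is active, you can also query Prometheus from the command line through its HTTP API. The built-in up metric, which reports whether each scrape target is reachable, makes a good first query:

      • curl -s 'http://127.0.0.1:9090/api/v1/query?query=up'

      This returns a JSON document listing every target Prometheus is scraping, with a value of 1 for reachable targets and 0 for unreachable ones.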

      You can do the same for Grafana, which is accessible at http://127.0.0.1:8080/ as long as port forwarding is active in another process. First, you’ll need to use the get-password command provided by the installer output:

      • tobs grafana get-password

      Output

      your-grafana-password

      You can then use this password to log into the Grafana interface by running its port forwarding command and opening http://127.0.0.1:8080/ in your browser.

      • tobs grafana port-forward

      Grafana welcome

      You now have a working TOBS stack running in your Kubernetes cluster. You can refer to the individual components’ documentation in order to learn their respective features. In the last step of this tutorial, you’ll learn how to make updates to the TOBS configuration itself.

      Step 3 — Editing TOBS Configurations and Upgrading

TOBS’ configuration contains parameters for the individual applications in the stack, as well as parameters for the TOBS deployment itself. It is generated and stored as a Kubernetes Helm chart. You can output your current configuration by running tobs helm show-values. However, this prints the entire, lengthy configuration to your terminal, which can be difficult to read. You can instead redirect the output to a file with the .yaml extension, since Helm values files are valid YAML:

      • tobs helm show-values > values.yaml

      The file contents will look like this:

      ~/values.yaml

      # Values for configuring the deployment of TimescaleDB
      # The charts README is at:
      #    https://github.com/timescale/timescaledb-kubernetes/tree/master/charts/timescaledb-single
      # Check out the various configuration options (administration guide) at:
      #    https://github.com/timescale/timescaledb-kubernetes/blob/master/charts/timescaledb-single/admin-guide.md
      cli: false
      
      # Override the deployment namespace
      namespaceOverride: ""
      …
      

You can review the additional parameters available for TOBS’ configuration by reading the TOBS documentation.
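
      Because the generated file is long, it can help to search it for the settings you care about before editing. The patterns below are only examples; key names vary between chart versions, so grep against your own file:

      # List the top-level configuration sections in the generated file
      grep -n '^[a-zA-Z]' values.yaml
      # Or search for a specific kind of setting, such as boolean toggles
      grep -n 'enabled:' values.yaml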

      If you ever modify this file in order to update your deployment, you can re-install TOBS over itself using the updated configuration. Just pass the -f option to the tobs install command with the YAML file as an additional argument:

      • tobs install -f values.yaml

Finally, you can upgrade TOBS with the following command:

      • tobs upgrade

      This performs the equivalent of a helm upgrade by fetching the newest upstream chart.

      Conclusion

      In this tutorial, you learned to deploy and configure TOBS, The Observability Stack, on an existing Kubernetes cluster. TOBS is particularly helpful because it eliminates the need to individually maintain configuration details for each of these apps, while providing standardized monitoring for the applications running on your cluster.

      Next, you might want to learn how to use Cert-Manager to handle HTTPS ingress to your Kubernetes cluster.




      How to Deploy to Kubernetes using Argo CD and GitOps


      Introduction

      Using Kubernetes to deploy your application can provide significant infrastructural advantages, such as flexible scaling, management of distributed components, and control over different versions of your application. However, with that increased control comes increased complexity. Continuous Integration and Continuous Deployment (CI/CD) systems usually work at a high level of abstraction in order to provide version control, change logging, and rollback functionality. A popular approach to this abstraction layer is called GitOps.

      GitOps, as originally proposed by Weaveworks in a 2017 blog post, uses Git as a “single source of truth” for CI/CD processes, integrating code changes in a single, shared repository per project and using pull requests to manage infrastructure and deployment.

There are several tools that use Git as a focal point for DevOps processes on Kubernetes. In this tutorial, you will learn to use Argo CD, a declarative Continuous Delivery tool. Argo CD provides Continuous Delivery tooling that automatically synchronizes and deploys your application whenever a change is made in your GitHub repository. By managing the deployment and lifecycle of an application, it provides solutions for version control, configurations, and application definitions in Kubernetes environments, organizing complex data with an easy-to-understand user interface. It can handle several types of Kubernetes manifests, including Jsonnet, Kustomize applications, Helm charts, and YAML/JSON files, and supports webhook notifications from GitHub, GitLab, and Bitbucket.

      In this article, you will use Argo CD to synchronize and deploy an application from a GitHub repository.

      Prerequisites

      To follow this tutorial, you will need:

      Step 1 — Installing Argo CD on Your Cluster

In order to install Argo CD, you should first have a valid Kubernetes configuration set up with kubectl, from which you can reach your worker nodes. You can test this by running kubectl get nodes:

      • kubectl get nodes

      This command should return a list of nodes with the Ready status:

      Output

NAME                   STATUS   ROLES    AGE   VERSION
      pool-uqv8a47h0-ul5a7   Ready    <none>   22m   v1.21.5
      pool-uqv8a47h0-ul5am   Ready    <none>   21m   v1.21.5
      pool-uqv8a47h0-ul5aq   Ready    <none>   21m   v1.21.5

      If kubectl does not return a set of nodes with the Ready status, you should review your cluster configuration and the Kubernetes documentation.

      Next, create the argocd namespace in your cluster, which will contain Argo CD and its associated services:

      • kubectl create namespace argocd

After that, you can apply the Argo CD installation manifest provided by the project maintainers:

      • kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

      Once the installation completes successfully, you can use the watch command to check the status of your Kubernetes pods:

      • watch kubectl get pods -n argocd

      By default, there should be five pods that eventually receive the Running status as part of a stock Argo CD installation.

      Output

NAME                                  READY   STATUS    RESTARTS   AGE
      argocd-application-controller-0       1/1     Running   0          2m28s
      argocd-dex-server-66f865ffb4-chwwg    1/1     Running   0          2m30s
      argocd-redis-5b6967fdfc-q4klp         1/1     Running   0          2m30s
      argocd-repo-server-656c76778f-vsn7l   1/1     Running   0          2m29s
      argocd-server-cd68f46f8-zg7hq         1/1     Running   0          2m28s

      You can press Ctrl+C to exit the watch interface. You now have Argo CD running in your Kubernetes cluster! However, because of the way Kubernetes creates abstractions around your network interfaces, you won’t be able to access it directly without forwarding ports from inside your cluster. You’ll learn how to handle that in the next step.

      Step 2 — Forwarding Ports to Access Argo CD

Because Kubernetes deploys services to arbitrary network addresses inside your cluster, you’ll need to forward the relevant ports in order to access them from your local machine. Argo CD sets up a service named argocd-server on port 443 internally. Because port 443 is the default HTTPS port and you may be running other HTTP/HTTPS services, it’s common practice to forward it to an arbitrarily chosen higher port, such as 8080, like so:

      • kubectl port-forward svc/argocd-server -n argocd 8080:443

      Port forwarding will block the terminal it’s running in as long as it’s active, so you’ll probably want to run this in a new terminal window while you continue to work. You can press Ctrl+C to gracefully quit a blocking process such as this one when you want to stop forwarding the port.

      In the meantime, you should be able to access Argo CD in a web browser by navigating to localhost:8080. However, you’ll be prompted for a login password which you’ll need to use the command line to retrieve in the next step. You’ll probably need to click through a security warning because Argo CD has not yet been configured with a valid SSL certificate.

Note: Using Let’s Encrypt HTTPS certificates with Kubernetes is best accomplished with the use of additional tooling like Cert-Manager.

      Step 3 — Working with Argo CD from the Command Line

For the next steps, you’ll want to have the argocd command installed locally for interfacing with and changing settings in your Argo CD instance. Argo CD’s official documentation recommends that you install it via the Homebrew package manager. Homebrew is very popular for managing command line tools on macOS, and has more recently been ported to Linux to facilitate maintaining tools like this one.

      If you don’t already have Homebrew installed, you can retrieve and install it with a one-line command:

• /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

You may be prompted for your password during the installation process. Afterward, you should have the brew command available in your terminal. You can use it to install Argo CD:

      • brew install argocd

This in turn provides the argocd command. Before using it, you’ll want to use kubectl again to retrieve the admin password which was automatically generated during your installation, so that you can use it to log in. You’ll extract it from a JSON document stored as a Kubernetes secret, using a jsonpath query:

      • kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo

      Output

      fbP20pvw-o-D5uxH

      You can then log into your Argo CD dashboard by going back to localhost:8080 in a browser and logging in as the admin user with your own password:

      Argo CD app status

      Once everything is working, you can use the same credentials to log in to Argo CD via the command line, by running argocd login. This will be necessary for deploying from the command line later on:

      • argocd login localhost:8080

      You’ll receive the equivalent certificate warning again on the command line here, and should enter y to proceed when prompted. If desired, you can then change your password to something more secure or more memorable by running argocd account update-password. After that, you’ll have a fully working Argo CD configuration. In the final steps of this tutorial, you’ll learn how to use it to actually deploy some example applications.
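
      If you script this setup later, the password retrieval and login can be combined non-interactively. The --insecure flag below only skips the same self-signed certificate check you just accepted, which is a reasonable shortcut for this local, port-forwarded setup:

      # Read the generated admin password into a shell variable, then log in with it
      ARGOCD_PWD=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
      argocd login localhost:8080 --username admin --password "$ARGOCD_PWD" --insecure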

      Step 4 — Handling Multiple Clusters (Optional)

      Before deploying an application, you should review where you actually want to deploy it. By default, Argo CD will deploy applications to the same cluster that Argo CD itself is running in, which is fine for a demo, but is probably not what you’ll want in production. In order to list all of the clusters known to your current machine, you can use kubectl config:

      • kubectl config get-contexts -o name

      Output

test-deploy-cluster
      test-target-cluster

      Assuming that you’ve installed Argo CD into test-deploy-cluster, and you wanted to use it to deploy applications onto test-target-cluster, you could register test-target-cluster with Argo CD by running argocd cluster add:

• argocd cluster add test-target-cluster

      This will add the additional cluster’s login details to Argo CD, and enable Argo CD to deploy services on the cluster.
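
      Once registered, the target cluster appears in argocd cluster list along with its API server URL, and you can direct deployments to it by passing that URL as a destination. The application details below are placeholders; substitute the URL that argocd cluster list reports for your cluster:

      # Show registered clusters and their API server endpoints
      argocd cluster list
      # Deploy to the registered cluster by using its server URL as the destination
      argocd app create my-app --repo https://github.com/your-org/your-repo.git --path your-app-path --dest-server https://your-target-cluster-endpoint --dest-namespace default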

      Step 5 — Deploying an Example Application (Optional)

      Now that you have Argo CD running and you have an understanding of how to deploy applications to different Kubernetes clusters, it’s time to put it into practice. The Argo CD project maintains a repository of example applications that have been architected to showcase GitOps fundamentals. Many of these examples are ports of the same guestbook demo app to different kinds of Kubernetes manifests, such as Jsonnet. In this case, you’ll be deploying the helm-guestbook example, which uses a Helm chart, one of the most durable Kubernetes management solutions.

      In order to do that, you’ll use the argocd app create command, providing the path to the Git repository, the specific helm-guestbook example, and passing your default destination and namespace:

      • argocd app create helm-guestbook --repo https://github.com/argoproj/argocd-example-apps.git --path helm-guestbook --dest-server https://kubernetes.default.svc --dest-namespace default
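
      At this point the application exists only as an entry inside Argo CD; nothing has been deployed yet. You can confirm that it was registered by listing all applications:

      • argocd app list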

      After “creating” the application inside of Argo CD, you can check its status with argocd app get:

      • argocd app get helm-guestbook

      Output

Name:               helm-guestbook
      Project:            default
      Server:             https://kubernetes.default.svc
      Namespace:          default
      URL:                https://localhost:8080/applications/helm-guestbook
      Repo:               https://github.com/argoproj/argocd-example-apps.git
      Target:
      Path:               helm-guestbook
      SyncWindow:         Sync Allowed
      Sync Policy:        <none>
      Sync Status:        OutOfSync from (53e28ff)
      Health Status:      Missing

      GROUP  KIND        NAMESPACE  NAME            STATUS     HEALTH   HOOK  MESSAGE
             Service     default    helm-guestbook  OutOfSync  Missing
      apps   Deployment  default    helm-guestbook  OutOfSync  Missing

The OutOfSync application status is normal. You’ve retrieved the application’s Helm chart from GitHub and created an entry for it in Argo CD, but you haven’t actually spun up any Kubernetes resources for it yet. In order to actually deploy the application, you’ll run argocd app sync:

      • argocd app sync helm-guestbook

sync is synonymous with deployment here, in keeping with the principles of GitOps: the goal when using Argo CD is for your application to always track 1:1 with its upstream configuration.

      Output

TIMESTAMP                  GROUP        KIND   NAMESPACE             NAME       STATUS     HEALTH       HOOK  MESSAGE
      2022-01-19T11:01:48-08:00         Service     default    helm-guestbook  OutOfSync  Missing
      2022-01-19T11:01:48-08:00  apps   Deployment  default    helm-guestbook  OutOfSync  Missing
      2022-01-19T11:01:48-08:00         Service     default    helm-guestbook  Synced     Healthy
      2022-01-19T11:01:48-08:00         Service     default    helm-guestbook  Synced     Healthy            service/helm-guestbook created
      2022-01-19T11:01:48-08:00  apps   Deployment  default    helm-guestbook  OutOfSync  Missing            deployment.apps/helm-guestbook created
      2022-01-19T11:01:49-08:00  apps   Deployment  default    helm-guestbook  Synced     Progressing        deployment.apps/helm-guestbook created

      Name:               helm-guestbook
      Project:            default
      Server:             https://kubernetes.default.svc
      Namespace:          default
      URL:                https://localhost:8080/applications/helm-guestbook
      Repo:               https://github.com/argoproj/argocd-example-apps.git
      Target:
      Path:               helm-guestbook
      SyncWindow:         Sync Allowed
      Sync Policy:        <none>
      Sync Status:        Synced to (53e28ff)
      Health Status:      Progressing

      Operation:          Sync
      Sync Revision:      53e28ff20cc530b9ada2173fbbd64d48338583ba
      Phase:              Succeeded
      Start:              2022-01-19 11:01:49 -0800 PST
      Finished:           2022-01-19 11:01:50 -0800 PST
      Duration:           1s
      Message:            successfully synced (all tasks run)

      GROUP  KIND        NAMESPACE  NAME            STATUS  HEALTH       HOOK  MESSAGE
             Service     default    helm-guestbook  Synced  Healthy            service/helm-guestbook created
      apps   Deployment  default    helm-guestbook  Synced  Progressing        deployment.apps/helm-guestbook created

      You have now successfully deployed an application using Argo CD! It is possible to accomplish the same thing from the Argo CD web interface, but it is usually quicker and more reproducible to deploy via the command line. However, it is very helpful to check on your Argo CD web dashboard after deployment in order to verify that your applications are running properly. You can see that by opening localhost:8080 in a browser:

      Argo CD app status

      At this point, the last thing to do is to ensure you can access your new deployment in a browser. To do that, you’ll forward another port, the way you did for Argo CD itself. Internally, the helm-guestbook app runs on the regular HTTP port 80, and in order to avoid conflicting with anything that might be running on your own port 80 or on the port 8080 you’re using for Argo CD, you can forward it to port 9090:

      • kubectl port-forward svc/helm-guestbook 9090:80

      As before, you’ll probably want to do this in another terminal, because it will block that terminal until you press Ctrl+C to stop forwarding the port. You can then open localhost:9090 in a browser window to see your example guestbook app:

      Guestbook app

Any further pushes to this GitHub repository will automatically be reflected in Argo CD, which can then resync your deployment while providing continuous availability.
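
      Note that this application was created with Sync Policy: <none>, so by default new commits appear in Argo CD as OutOfSync and wait for a manual sync. If you want Argo CD to deploy upstream changes on its own, you can switch the application to an automated sync policy:

      • argocd app set helm-guestbook --sync-policy automated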

      Conclusion

      You’ve now seen the fundamentals of installing and deploying applications using Argo CD. Because Kubernetes requires so many layers of abstraction, it’s important to ensure that your deployments are as maintainable as possible, and the GitOps philosophy is a good solution.

      Next, you may want to learn about deploying TOBS, The Observability Stack, for monitoring the uptime, health, and logging of your Kubernetes cluster.


