
      How to Set Up TOBS, The Observability Stack, for Kubernetes Monitoring


      Introduction

TOBS, short for The Observability Stack, is a pre-packaged distribution of monitoring tools and dashboard interfaces that can be installed into any existing Kubernetes cluster. It bundles many of the most popular open-source observability tools, with Prometheus and Grafana as a baseline, along with PromLens, TimescaleDB, Alertmanager, and others. Together, these provide a straightforward, maintainable solution for analyzing server traffic and identifying potential problems with a deployment, even at very large scale.

      TOBS makes use of standard Kubernetes Helm charts in order to configure and update deployments. It can be installed into any Kubernetes cluster, but it can be demonstrated more effectively if you’re running kubectl to manage your cluster from a local machine rather than a remote node. DigitalOcean’s Managed Kubernetes will provide you with a configuration like this by default.

      In this tutorial, you will install TOBS into an existing Kubernetes cluster, and learn how to update, configure, and browse its component dashboards.

      Prerequisites

      To follow this tutorial, you will need:

      Step 1 — Verifying Your Kubernetes Configuration

      In order to install TOBS, you should first have a valid Kubernetes configuration set up with kubectl from which you can ping your worker nodes. You can test this by running kubectl get nodes:
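
      • kubectl get nodes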

      If kubectl is able to connect to your Kubernetes cluster and it’s up and running as expected, this command will return a list of nodes with the Ready status:

      Output

      NAME                   STATUS   ROLES    AGE   VERSION
      pool-uqv8a47h0-ul5a7   Ready    <none>   22m   v1.21.5
      pool-uqv8a47h0-ul5am   Ready    <none>   21m   v1.21.5
      pool-uqv8a47h0-ul5aq   Ready    <none>   21m   v1.21.5

      If this is successful, you can move on to Step 2. If not, you should review your configuration details for any issues.

      By default, kubectl will look for a file at ~/.kube/config in order to understand your environment. In order to verify that this file exists and contains valid YAML syntax, you can run head on it to view its first several lines:
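
      • head ~/.kube/config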

      Output

      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: …

      If the file does not exist, ensure that you are logged in as the same user that you configured Kubernetes with. ~/ paths reflect individual users’ home directories, and Kubernetes configurations are saved per-user by default.

      If you are using DigitalOcean’s Managed Kubernetes, ensure that you have run the doctl kubernetes cluster kubeconfig save command after setting up a cluster so that your local machine can authenticate to it. This will create a ~/.kube/config file:

      • doctl kubernetes cluster kubeconfig save your-cluster-name

      If you are using this machine to access multiple clusters, you should review the Kubernetes documentation on using environment variables and multiple configuration files in order to avoid conflicts.
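
      As a minimal sketch of the environment-variable approach, kubectl will merge every file listed, colon-separated, in the KUBECONFIG variable; the second file path here is hypothetical:

      • export KUBECONFIG=~/.kube/config:~/.kube/staging-cluster-config

      After configuring your kubectl environment, you can move on to installing TOBS in the next step.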

      Step 2 — Installing TOBS and Testing Your Endpoints

      TOBS includes the following components:

      • Prometheus is a time series database and monitoring tool that works by polling metrics endpoints and scraping and processing the data exposed by these endpoints. It allows you to query this data using PromQL, a time series data query language.
      • Alertmanager, usually deployed alongside Prometheus, forms the alerting layer of the stack, handling alerts generated by Prometheus and deduplicating, grouping, and routing them to integrations like email or PagerDuty. To learn more about Alertmanager, consult the Prometheus documentation on alerting.
      • Grafana is a data visualization and analytics tool that allows you to build dashboards and graphs for your metrics data.
      • kube-state-metrics is an add-on agent that listens to the Kubernetes API server and generates metrics about the state of Kubernetes objects like Deployments and Pods. These metrics are served as plaintext on HTTP endpoints and consumed by Prometheus.
      • Lastly is node-exporter, a Prometheus exporter that runs on cluster nodes and provides OS and hardware metrics like CPU and memory usage to Prometheus. These metrics are also served as plaintext on HTTP endpoints and consumed by Prometheus.

      In order to install TOBS, you first need to run the TOBS installer on your control-plane. This will set up the tobs command and configuration directories. As mentioned in the prerequisites, the tobs command is only designed to work on Linux/macOS/BSD systems (like the official Kubernetes binaries), so if you have been using Windows up to now, you should be working in the Windows Subsystem for Linux environment.

      Retrieve and run the TOBS installer:

      • curl --proto '=https' --tlsv1.2 -sSLf https://tsdb.co/install-tobs-sh | sh

      Output

      tobs 0.7.0 was successfully installed 🎉
      Binary is available at /root/.local/bin/tobs.

      You can now deploy TOBS into your Kubernetes cluster. This is done with a one-liner using your newly installed tobs command:
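
      • tobs install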

      This will generate several lines of output and may take a few moments. Depending on your exact version of Kubernetes, there may be several warnings in the output, but you can ignore these as long as you eventually receive the Welcome to tobs message:

      Output

      WARNING: Using a generated self-signed certificate for TLS access to TimescaleDB.
      This should only be used for development and demonstration purposes.
      To use a signed certificate, use the "--tls-timescaledb-cert" and "--tls-timescaledb-key" flags when issuing the tobs install command.
      Creating TimescaleDB tobs-certificate secret
      Creating TimescaleDB tobs-credentials secret
      skipping to create TimescaleDB s3 backup secret as backup option is disabled.
      2022/01/10 11:25:34 Transport: unhandled response frame type *http.http2UnknownFrame
      2022/01/10 11:25:35 Transport: unhandled response frame type *http.http2UnknownFrame
      2022/01/10 11:25:35 Transport: unhandled response frame type *http.http2UnknownFrame
      Installing The Observability Stack
      2022/01/10 11:25:37 Transport: unhandled response frame type *http.http2UnknownFrame
      W0110 11:25:55.438728   75479 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
      W0110 11:25:55.646392   75479 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
      …
      👋🏽 Welcome to tobs, The Observability Stack for Kubernetes
      …

      The output from this point onward will contain instructions for connecting to each of Prometheus, TimescaleDB, PromLens, and Grafana’s web endpoints in your browser. It is reproduced in full below for reference:

      Output

      ###############################################################################
      🔥 PROMETHEUS NOTES:
      ###############################################################################
      Prometheus can be accessed via port 9090 on the following DNS name from within your cluster:
      tobs-kube-prometheus-prometheus.default.svc.cluster.local

      Get the Prometheus server URL by running these commands in the same shell:
        tobs prometheus port-forward

      The Prometheus alertmanager can be accessed via port 9093 on the following DNS name from within your cluster:
      tobs-kube-prometheus-alertmanager.default.svc.cluster.local

      Get the Alertmanager URL by running these commands in the same shell:
        export POD_NAME=$(kubectl get pods --namespace default -l "app=alertmanager,alertmanager=tobs-kube-prometheus-alertmanager" -o jsonpath="{.items[0].metadata.name}")
        kubectl --namespace default port-forward $POD_NAME 9093

      WARNING! Persistence is disabled on AlertManager. You will lose your data when the AlertManager pod is terminated.

      ###############################################################################
      🐯 TIMESCALEDB NOTES:
      ###############################################################################
      TimescaleDB can be accessed via port 5432 on the following DNS name from within your cluster:
      tobs.default.svc.cluster.local

      To get your password for superuser run:
        tobs timescaledb get-password -U <user>

      To connect to your database, chose one of these options:

      1. Run a postgres pod and connect using the psql cli:
        tobs timescaledb connect -U <user>

      2. Directly execute a psql session on the master node
        tobs timescaledb connect -m

      ###############################################################################
      🧐 PROMLENS NOTES:
      ###############################################################################
      PromLens is a PromQL query builder, analyzer, and visualizer.

      You can access PromLens via a local browser by executing:
        tobs promlens port-forward

      Then you can point your browser to http://127.0.0.1:8081/.

      ###############################################################################
      📈 GRAFANA NOTES:
      ###############################################################################
      1. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:
      tobs-grafana.default.svc.cluster.local

      You can access grafana locally by executing:
        tobs grafana port-forward

      Then you can point your browser to http://127.0.0.1:8080/.

      2. The 'admin' user password can be retrieved by:
        tobs grafana get-password

      3. You can reset the admin user password with grafana-cli from inside the pod.
        tobs grafana change-password <password-you-want-to-set>

      Each of these components is provided with a DNS name internal to your cluster so that it can be accessed from any of your worker nodes, e.g. tobs-kube-prometheus-prometheus.default.svc.cluster.local for Prometheus. In addition, there is a port forwarding command configured for each that allows you to access them from a local web browser.

      In a new terminal, run tobs prometheus port-forward:

      • tobs prometheus port-forward

      This will occupy the terminal as long as the port forwarding process is active. You can press Ctrl+C to gracefully quit a blocking process such as this one when you want to stop forwarding the port. Next, in a web browser, go to the URL http://127.0.0.1:9090/. You should see the full Prometheus interface running and producing metrics from your cluster:

      Prometheus welcome

      You can do the same for Grafana, which is accessible at http://127.0.0.1:8080/ as long as port forwarding is active in another process. First, you’ll need to use the get-password command provided by the installer output:

      • tobs grafana get-password

      Output

      your-grafana-password

      You can then use this password to log into the Grafana interface by running its port forwarding command and opening http://127.0.0.1:8080/ in your browser.

      • tobs grafana port-forward

      Grafana welcome

      You now have a working TOBS stack running in your Kubernetes cluster. You can refer to the individual components’ documentation in order to learn their respective features. In the last step of this tutorial, you’ll learn how to make updates to the TOBS configuration itself.

      Step 3 — Editing TOBS Configurations and Upgrading

      TOBS’ configuration contains some parameters for the individual applications in the stack, as well as some parameters for the TOBS deployment itself. It is generated and stored as a Kubernetes Helm chart. You can output your current configuration by running tobs helm show-values. However, this will output the entire long configuration to your terminal, which can be difficult to read. You can instead redirect the output to a file with the .yaml extension, because Helm charts are all valid YAML syntax:

      • tobs helm show-values > values.yaml

      The file contents will look like this:

      ~/values.yaml

      2022/01/10 11:56:37 Transport: unhandled response frame type *http.http2UnknownFrame
      # Values for configuring the deployment of TimescaleDB
      # The charts README is at:
      #    https://github.com/timescale/timescaledb-kubernetes/tree/master/charts/timescaledb-single
      # Check out the various configuration options (administration guide) at:
      #    https://github.com/timescale/timescaledb-kubernetes/blob/master/charts/timescaledb-single/admin-guide.md
      cli: false
      
      # Override the deployment namespace
      namespaceOverride: ""
      …
      

      You can review the additional parameters available for TOBS’ configuration by reading the TOBS documentation.

      If you ever modify this file in order to update your deployment, you can re-install TOBS over itself using the updated configuration. Just pass the -f option to the tobs install command with the YAML file as an additional argument:

      • tobs install -f values.yaml

      Finally, you can upgrade TOBS with the following command:
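
      • tobs upgrade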

      This performs the equivalent of a helm upgrade by fetching the newest upstream chart.

      Conclusion

      In this tutorial, you learned to deploy and configure TOBS, The Observability Stack, on an existing Kubernetes cluster. TOBS is particularly helpful because it eliminates the need to individually maintain configuration details for each of these apps, while providing standardized monitoring for the applications running on your cluster.

      Next, you might want to learn how to use Cert-Manager to handle HTTPS ingress to your Kubernetes cluster.




      How To Set Up a Video Streaming Server using Nginx-RTMP on Ubuntu 20.04


      Introduction

      There are many use cases for streaming video. Service providers such as Twitch are very popular for handling the web discovery and community management aspects of streaming, and free software such as OBS Studio is widely used for combining video overlays from multiple different stream sources in real time. While these platforms are very powerful, in some cases you may want to be able to host a stream that does not rely on other service providers.

      In this tutorial, you will learn how to configure the Nginx web server to host an independent RTMP video stream that can be linked and viewed in different applications. RTMP, the Real-Time Messaging Protocol, defines the fundamentals of most internet video streaming. You will also learn how to host HLS and DASH streams that support more modern platforms using the same technology.

      Prerequisites

      To complete this guide, you will need:

      This tutorial will use the placeholder domain name your_domain for URLs and hostnames. Substitute this with your own domain name or IP address as you work through the tutorial.

      Step 1 — Installing and Configuring Nginx-RTMP

      Most modern streaming tools support the RTMP protocol, which defines the basic parameters of an internet video stream. The Nginx web server includes a module that allows you to provide an RTMP stream with minimal configuration from a dedicated URL, just like it provides HTTP access to web pages by default. The Nginx RTMP module isn’t included automatically with Nginx, but on Ubuntu 20.04 and most other Linux distributions you can install it as an additional package.

      Begin by running the following commands as a non-root user to update your package listings and install the Nginx module:

      • sudo apt update
      • sudo apt install libnginx-mod-rtmp

      Installing the module won’t automatically start providing a stream. You’ll need to add a configuration block to your Nginx configuration file that defines where and how the stream will be available.

      Using nano or your favorite text editor, open Nginx’s main configuration file, /etc/nginx/nginx.conf, and add this configuration block to the end of the file:

      • sudo nano /etc/nginx/nginx.conf

      /etc/nginx/nginx.conf

      . . .
      rtmp {
              server {
                      listen 1935;
                      chunk_size 4096;
                      allow publish 127.0.0.1;
                      deny publish all;
      
                      application live {
                              live on;
                              record off;
                      }
              }
      }
      
      • listen 1935 means that RTMP will be listening for connections on port 1935, which is standard.
      • chunk_size 4096 means that RTMP will be sending data in 4KB blocks, which is also standard.
      • allow publish 127.0.0.1 and deny publish all mean that the server will only allow video to be published from the same server, to avoid any other users pushing their own streams.
      • application live defines an application block that will be available at the /live URL path.
      • live on enables live mode so that multiple users can connect to your stream concurrently, a baseline assumption of video streaming.
      • record off disables Nginx-RTMP’s recording functionality, so that all streams are not separately saved to disk by default.

      Save and close the file. If you are using nano, press Ctrl+X, then when prompted, Y and Enter.

      This provides the beginning of your RTMP configuration. By default, it listens on port 1935, which means you’ll need to open that port in your firewall. If you configured ufw as part of your initial server setup, run the following command:
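
      • sudo ufw allow 1935/tcp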

      Now you can reload Nginx with your changes:

      • sudo systemctl reload nginx.service

      You should now have a working RTMP server. In the next section, we’ll cover streaming video to your RTMP server from both local and remote sources.

      Step 2 — Sending Video to Your RTMP Server

      There are multiple ways to send video to your RTMP server. One option is to use ffmpeg, a popular command line audio-video utility, to play a video file directly on your server. If you don’t have a video file already on the server, you can download one using youtube-dl, a command line tool for capturing video from streaming platforms like YouTube. In order to use youtube-dl, you’ll need an up to date Python installation on your server as well.

      First, install Python and its package manager, pip:

      • sudo apt install python3-pip

      Next, use pip to install youtube-dl:
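
      • sudo pip3 install youtube-dl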

      Now you can use youtube-dl to download a video from YouTube. If you don’t have one in mind, try this video, introducing DigitalOcean’s App Platform:

      • youtube-dl https://www.youtube.com/watch?v=iom_nhYQIYk

      You’ll see some output as youtube-dl combines the video and audio streams it’s downloading back into a single file – this is normal.

      Output

      [youtube] iom_nhYQIYk: Downloading webpage
      WARNING: Requested formats are incompatible for merge and will be merged into mkv.
      [download] Destination: Introducing App Platform by DigitalOcean-iom_nhYQIYk.f137.mp4
      [download] 100% of 32.82MiB in 08:40
      [download] Destination: Introducing App Platform by DigitalOcean-iom_nhYQIYk.f251.webm
      [download] 100% of 1.94MiB in 00:38
      [ffmpeg] Merging formats into "Introducing App Platform by DigitalOcean-iom_nhYQIYk.mkv"
      Deleting original file Introducing App Platform by DigitalOcean-iom_nhYQIYk.f137.mp4 (pass -k to keep)
      Deleting original file Introducing App Platform by DigitalOcean-iom_nhYQIYk.f251.webm (pass -k to keep)

      You should now have a video file in your current directory with a title like Introducing App Platform by DigitalOcean-iom_nhYQIYk.mkv. In order to stream it, you’ll want to install ffmpeg:
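
      • sudo apt install ffmpeg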

      And use ffmpeg to send it to your RTMP server:

      • ffmpeg -re -i "Introducing App Platform by DigitalOcean-iom_nhYQIYk.mkv" -c:v copy -c:a aac -ar 44100 -ac 1 -f flv rtmp://localhost/live/stream

      This ffmpeg command is doing a few things to prepare the video for a streaming-friendly format. This isn’t an ffmpeg tutorial, so you don’t need to examine it too closely, but you can understand the various options as follows:

      • -re specifies that input will be read at its native framerate.
      • -i "Introducing App Platform by DigitalOcean-iom_nhYQIYk.mkv" specifies the path to our input file.
      • -c:v is set to copy, meaning that you’re copying over the video format you got from YouTube natively.
      • -c:a has other parameters, namely aac -ar 44100 -ac 1, because you need to resample the audio to an RTMP-friendly format. aac is a widely supported audio codec, -ar 44100 sets a common sample rate of 44100 Hz, and -ac 1 downmixes the audio to a single (mono) channel for compatibility purposes.
      • -f flv wraps the video in an flv format container for maximum compatibility with RTMP.

      The video is sent to rtmp://localhost/live/stream because you defined the live configuration block in Step 1, and stream is an arbitrarily chosen URL for this video.

      Note: You can learn more about ffmpeg options from ffmprovisr, a community-maintained catalog of ffmpeg command examples, or refer to the official documentation.

      While ffmpeg is streaming the video, it will print timecodes:

      Output

      frame=  127 fps= 25 q=-1.0 size=     405kB time=00:00:05.00 bitrate= 662.2kbits/s speed=
      frame=  140 fps= 25 q=-1.0 size=     628kB time=00:00:05.52 bitrate= 931.0kbits/s speed=
      frame=  153 fps= 25 q=-1.0 size=     866kB time=00:00:06.04 bitrate=1173.1kbits/s speed=

      This is standard ffmpeg output. If you were converting video to a different format, these might be helpful in order to understand how efficiently the video is being resampled, but in this case, you just want to see that it’s being played back consistently. Using this sample video, you should get exact fps= 25 increments.

      While ffmpeg is running, you can connect to your RTMP stream from a video player. If you have VLC, mpv, or another media player installed locally, you should be able to view your stream by opening the URL rtmp://your_domain/live/stream in your media player. Your stream will terminate after ffmpeg has finished playing the video. If you want it to keep looping indefinitely, you can add -stream_loop -1 to the beginning of your ffmpeg command.
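
      For example, a looping version of the earlier command would look like this:

      • ffmpeg -stream_loop -1 -re -i "Introducing App Platform by DigitalOcean-iom_nhYQIYk.mkv" -c:v copy -c:a aac -ar 44100 -ac 1 -f flv rtmp://localhost/live/stream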

      Note: You can also stream directly to, for example, Facebook Live using ffmpeg without needing to use Nginx-RTMP at all by replacing rtmp://localhost/live/stream in your ffmpeg command with rtmps://live-api-s.facebook.com:443/rtmp/your-facebook-stream-key. YouTube uses URLs like rtmp://a.rtmp.youtube.com/live2. Other streaming providers that can consume RTMP streams should behave similarly.

      Now that you’ve learned to stream static video sources from the command line, you’ll learn how to stream video from dynamic sources using OBS on a desktop.

      Step 3 — Streaming Video to Your Server via OBS (Optional)

      Streaming via ffmpeg is convenient when you have a prepared video that you want to play back, but live streaming can be much more dynamic. The most popular software for live streaming is OBS, or Open Broadcaster Software – it is free, open source, and very powerful.

      OBS is a desktop application, and will connect to your server from your local computer.

      After installing OBS, configuring it means customizing which of your desktop windows and audio sources you want to add to your stream, and then adding credentials for a streaming service. This tutorial will not be covering your streaming configuration, as it is down to preference, and by default, you can have a working demo by just streaming your entire desktop. In order to set your streaming service credentials, open OBS’ settings menu, navigate to the Stream option and input the following options:

      Streaming Service: Custom
      Server: rtmp://your_domain/live
      Play Path/Stream Key: obs_stream
      

      obs_stream is an arbitrarily chosen path – in this case, your video would be available at rtmp://your_domain/live/obs_stream. You do not need to enable authentication, but you do need to add an additional entry to the IP whitelist that you configured in Step 1.

      Back on the server, open Nginx’s main configuration file, /etc/nginx/nginx.conf, and add an additional allow publish entry for your local IP address. If you don’t know your local IP address, it’s best to just go to a site like What’s my IP which can tell you where you accessed it from:

      • sudo nano /etc/nginx/nginx.conf

      /etc/nginx/nginx.conf

      . . .
                      allow publish 127.0.0.1;
                      allow publish your_local_ip_address;
                      deny publish all;
      . . .
      

      Save and close the file, then reload Nginx:

      • sudo systemctl reload nginx.service

      You should now be able to close OBS’ settings menu and click Start Streaming from the main interface! Try viewing your stream in a media player as before. Now that you’ve seen the fundamentals of streaming video in action, you can add a few other features to your server to make it more production-ready.

      Step 4 — Adding Monitoring to Your Configuration (Optional)

      Now that you have Nginx configured to stream video using the Nginx-RTMP module, a common next step is to enable the RTMP statistics page. Rather than adding more and more configuration details to your main nginx.conf file, Nginx allows you to add per-site configurations to individual files in a subdirectory called sites-available/. In this case, you’ll create one called rtmp:

      • sudo nano /etc/nginx/sites-available/rtmp

      Add the following contents:

      /etc/nginx/sites-available/rtmp

      server {
          listen 8080;
          server_name  localhost;
      
          # rtmp stat
          location /stat {
              rtmp_stat all;
              rtmp_stat_stylesheet stat.xsl;
          }
          location /stat.xsl {
              root /var/www/html/rtmp;
          }
      
          # rtmp control
          location /control {
              rtmp_control all;
          }
      }
      

      Save and close the file. The stat.xsl file from this configuration block is used to style and display an RTMP statistics page in your browser. It is provided by the libnginx-mod-rtmp library that you installed earlier, but it comes zipped up by default, so you will need to unzip it and put it in the /var/www/html/rtmp directory to match the above configuration. Note that you can find additional information about any of these options in the Nginx-RTMP documentation.

      Create the /var/www/html/rtmp directory, and then uncompress the stat.xsl.gz file with the following commands:

      • sudo mkdir /var/www/html/rtmp
      • sudo gunzip -c /usr/share/doc/libnginx-mod-rtmp/examples/stat.xsl.gz > /var/www/html/rtmp/stat.xsl

      Finally, to access the statistics page that you added, you will need to open another port in your firewall. Specifically, the listen directive is configured with port 8080, so you will need to add a rule to access Nginx on that port. However, you probably don’t want others to be able to access your stats page, so it’s best only to allow it for your own IP address. Run the following command:

      • sudo ufw allow from your_ip_address to any port http-alt

      Next, you’ll need to activate this new configuration. Nginx’s convention is to create symbolic links (like shortcuts) from files in sites-available/ to another folder called sites-enabled/ as you decide to enable or disable them. Using full paths for clarity, make that link:

      • sudo ln -s /etc/nginx/sites-available/rtmp /etc/nginx/sites-enabled/rtmp

      Now you can reload Nginx again to process your changes:

      • sudo systemctl reload nginx.service

      You should now be able to go to http://your_domain:8080/stat in a browser to see the RTMP statistics page. Visit and refresh the page while streaming video and watch as the stream statistics change.

      You’ve now seen how to monitor your video stream and push it to third party providers. In the final section, you’ll learn how to provide it directly in a browser without the use of third party streaming platforms or standalone media player apps.

      Step 5 — Creating Modern Streams for Browsers (Optional)

      As a final step, you may want to add support for newer streaming protocols so that users can stream video from your server using a web browser directly. There are two protocols that you can use to create HTTP-based video streams: Apple’s HLS and MPEG DASH. They both have advantages and disadvantages, so you will probably want to support both.

      The Nginx-RTMP module supports both standards. To add HLS and DASH support to your server, you will need to modify the rtmp block in your nginx.conf file. Open /etc/nginx/nginx.conf using nano or your preferred editor, then add the following highlighted directives:

      • sudo nano /etc/nginx/nginx.conf

      /etc/nginx/nginx.conf

      . . .
      rtmp {
              server {
      . . .
                      application live {
                              live on;
                              record off;
                              hls on;
                              hls_path /var/www/html/stream/hls;
                              hls_fragment 3;
                              hls_playlist_length 60;
      
                              dash on;
                              dash_path /var/www/html/stream/dash;
                      }
              }
      }
      . . .
      

      Save and close the file. Next, add this to the bottom of your sites-available/rtmp:

      • sudo nano /etc/nginx/sites-available/rtmp

      /etc/nginx/sites-available/rtmp

      . . .
      server {
          listen 8088;
      
          location / {
              add_header Access-Control-Allow-Origin *;
              root /var/www/html/stream;
          }
      }
      
      types {
          application/dash+xml mpd;
      }
      

      Note: The Access-Control-Allow-Origin * header enables CORS, or Cross-Origin Resource Sharing, which is disabled by default. This communicates to any web browsers accessing data from your server that the server may load resources from other ports or domains. CORS is needed for maximum compatibility with HLS and DASH clients, and a common configuration toggle in many other web deployments.

      Save and close the file. Note that you’re using port 8088 here, which is another arbitrary choice for this tutorial to ensure we aren’t conflicting with any services you may be running on port 80 or 443. You’ll want to open that port in your firewall for now too:
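
      • sudo ufw allow 8088/tcp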

      Finally, create a stream directory in your web root to match the configuration block, so that Nginx can generate the necessary files for HLS and DASH:

      • sudo mkdir /var/www/html/stream

      Reload Nginx again:

      • sudo systemctl reload nginx

      You should now have an HLS stream available at http://your_domain:8088/hls/stream.m3u8 and a DASH stream available at http://your_domain:8088/dash/stream.mpd. These endpoints will generate any necessary metadata on top of your RTMP video feed in order to support modern APIs.
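
      As a quick check, if you have ffmpeg installed on your local machine, its bundled ffplay player can open either stream while a broadcast is running; for example:

      • ffplay http://your_domain:8088/hls/stream.m3u8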

      Conclusion

      The configuration options that you used in this tutorial are all documented in the Nginx RTMP Wiki page. Nginx modules typically share common syntax and expose a very large set of configuration options, and you can review their documentation to change any of your settings from here.

      Nearly all internet video streaming is implemented on top of RTMP, HLS, and DASH, and by using the approach that you have explored in this tutorial, you can provide your stream via other broadcasting services, or expose it any other way you choose. Next, you could look into configuring Nginx as a reverse proxy in order to make some of these different video endpoints available as subdomains.




      How to Set Up Squid Proxy for Private Connections on Ubuntu 20.04


      Introduction

      Proxy servers are a type of server application that functions as a gateway between an end user and an internet resource. Through a proxy server, an end user is able to control and monitor their web traffic for a wide variety of purposes, including privacy, security, and caching. For example, you can use a proxy server to make web requests from a different IP address than your own. You can also use a proxy server to research how the web is served differently from one jurisdiction to the next, or avoid some methods of surveillance or web traffic throttling.

      Squid is a stable, popular, open-source HTTP proxy. In this tutorial, you will be installing and configuring Squid to provide an HTTP proxy on a Ubuntu 20.04 server.

      Prerequisites

      To complete this guide, you will need:

      You will use the domain name your_domain in this tutorial, but you should substitute this with your own domain name, or IP address.

      Step 1 — Installing Squid Proxy

      Squid has many use cases beyond routing an individual user’s outbound traffic. In the context of large-scale server deployments, it can be used as a distributed caching mechanism, a load balancer, or another component of a routing stack. However, some methods of horizontally scaling server traffic that would typically have involved a proxy server have been surpassed in popularity by containerization frameworks such as Kubernetes, which distribute more components of an application. At the same time, using proxy servers to redirect web requests as an individual user has become increasingly popular for protecting your privacy. This is helpful to keep in mind when working with open-source proxy servers which may appear to have many dozens of features in a lower-priority maintenance mode. The use cases for a proxy have changed over time, but the fundamental technology has not.

      Begin by running the following commands as a non-root user to update your package listings and install Squid Proxy:

      • sudo apt update
      • sudo apt install squid

      Squid will automatically set up a background service and start after being installed. You can check that the service is running properly:

      • systemctl status squid.service

      Output

      ● squid.service - Squid Web Proxy Server
           Loaded: loaded (/lib/systemd/system/squid.service; enabled; vendor preset: enabled)
           Active: active (running) since Wed 2021-12-15 21:45:15 UTC; 2min 11s ago

      By default, Squid does not allow any clients to connect to it from outside of this server. In order to enable that, you’ll need to make some changes to its configuration file, which is stored in /etc/squid/squid.conf. Open it in nano or your favorite text editor:

      • sudo nano /etc/squid/squid.conf

      Be advised that Squid’s default configuration file is very, very long, and contains a massive number of options that have been temporarily disabled by putting a # at the start of the line they’re on, also called being commented out. You will most likely want to search through the file to find the lines you want to edit. In nano, this is done by pressing Ctrl+W, entering your search term, pressing Enter, and then repeatedly pressing Alt+W to find the next instance of that term if needed.

      Begin by navigating to the line containing the phrase http_access deny all. You should see a block of text explaining Squid’s default access rules:

      /etc/squid/squid.conf

      . . . 
      #
      # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
      #
      include /etc/squid/conf.d/*
      # Example rule allowing access from your local networks.
      # Adapt localnet in the ACL section to list your (internal) IP networks
      # from where browsing should be allowed
      #http_access allow localnet
      http_access allow localhost
      
      # And finally deny all other access to this proxy
      http_access deny all
      . . . 
      

      From this, you can see the current behavior – localhost is allowed; other connections are not. Note that these rules are parsed sequentially, so it’s a good idea to keep the deny all rule at the bottom of this configuration block. You could change that rule to allow all, enabling anyone to connect to your proxy server, but you probably don’t want to do that. Instead, you can add a line above http_access allow localhost that includes your own IP address, like so:

      /etc/squid/squid.conf

      #
      # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
      #
      include /etc/squid/conf.d/*
      # Example rule allowing access from your local networks.
      acl localnet src your_ip_address
      # Adapt localnet in the ACL section to list your (internal) IP networks
      # from where browsing should be allowed
      #http_access allow localnet
      http_access allow localhost
      
      • acl means an Access Control List, a common term for permissions policies
      • localnet in this case is the name of your ACL.
      • src is where the request would originate from under this ACL, i.e., your IP address.

      If you don’t know your local IP address, it’s quickest to go to a site like What’s my IP which can tell you where you accessed it from. After making that change, save and close the file. If you are using nano, press Ctrl+X, and then when prompted, Y and then Enter.

      At this point, you could restart Squid and connect to it, but there’s more you can do in order to secure it first.

      Step 2 — Securing Squid

      Most proxies, and most client-side apps that connect to proxies (e.g., web browsers) support multiple methods of authentication. These can include shared keys, or separate authentication servers, but most commonly entail regular username-password pairs. Squid allows you to create username-password pairs using built-in Linux functionality, as an additional or an alternative step to restricting access to your proxy by IP address. To do that, you’ll create a file called /etc/squid/passwords and point Squid’s configuration to it.

      First, you’ll need to install some utilities from the Apache project in order to have access to a password generator that Squid likes.

      • sudo apt install apache2-utils

      This package provides the htpasswd command, which you can use in order to generate a password for a new Squid user. Squid’s usernames won’t overlap with system usernames in any way, so you can use the same name you’ve logged in with if you want. You’ll be prompted to add a password as well:

      • sudo htpasswd -c /etc/squid/passwords your_squid_username

      This will store your username along with a hash of your new password in /etc/squid/passwords, which will be used as an authentication source by Squid. You can cat the file afterward to see what that looks like:

      • sudo cat /etc/squid/passwords

      Output

      sammy:$apr1$Dgl.Mtnd$vdqLYjBGdtoWA47w4q1Td.

      After verifying that your username and password have been stored, you can update Squid’s configuration to use your new /etc/squid/passwords file. Using nano or your favorite text editor, reopen the Squid configuration file and add the following highlighted lines:

      • sudo nano /etc/squid/squid.conf

      /etc/squid/squid.conf

      …
      #
      # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
      #
      include /etc/squid/conf.d/*
      auth_param basic program /usr/lib/squid3/basic_ncsa_auth /etc/squid/passwords
      auth_param basic realm proxy
      acl authenticated proxy_auth REQUIRED
      # Example rule allowing access from your local networks.
      acl localnet src your_ip_address
      # Adapt localnet in the ACL section to list your (internal) IP networks
      # from where browsing should be allowed
      #http_access allow localnet
      http_access allow localhost
      http_access allow authenticated
      # And finally deny all other access to this proxy
      http_access deny all
      …
      

      These additional directives tell Squid to check in your new passwords file for password hashes that can be parsed using the basic_ncsa_auth mechanism, and to require authentication for access to your proxy. You can review Squid’s documentation for more information on this or other authentication methods. After that, you can finally restart Squid with your configuration changes. This might take a moment to complete.

      • sudo systemctl restart squid.service

      And don’t forget to open port 3128 in your firewall if you’re using ufw:
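
      • sudo ufw allow 3128/tcp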

      In the next step, you’ll connect to your proxy at last.

      Step 3 — Connecting through Squid

      In order to demonstrate your Squid server, you’ll use a command line program called curl, which is popular for making different types of web requests. In general, if you want to verify whether a given connection should be working in a browser under ideal circumstances, you should always test first with curl. You’ll be using curl on your local machine in order to do this – it’s installed by default on all modern Windows, Mac, and Linux environments, so you can open any local shell to run this command:

      • curl -v -x http://your_squid_username:your_squid_password@your_server_ip:3128 http://www.google.com/

      The -x argument passes a proxy server to curl; in this case, you’re connecting to the proxy over the http:// protocol, providing your username and password for that server, and then requesting a known-working website like google.com. If the command was successful, you should see the following output:

      Output

      *   Trying 138.197.103.77...
      * TCP_NODELAY set
      * Connected to 138.197.103.77 (138.197.103.77) port 3128 (#0)
      * Proxy auth using Basic with user 'sammy'
      > GET http://www.google.com/ HTTP/1.1

      It is also possible to access https:// websites with your Squid proxy without making any further configuration changes. These make use of a separate proxy directive called CONNECT in order to preserve SSL between the client and the server:

      • curl -v -x http://your_squid_username:your_squid_password@your_server_ip:3128 https://www.google.com/

      Output

      *   Trying 138.197.103.77...
      * TCP_NODELAY set
      * Connected to 138.197.103.77 (138.197.103.77) port 3128 (#0)
      * allocate connect buffer!
      * Establish HTTP proxy tunnel to www.google.com:443
      * Proxy auth using Basic with user 'sammy'
      > CONNECT www.google.com:443 HTTP/1.1
      > Host: www.google.com:443
      > Proxy-Authorization: Basic c2FtbXk6c2FtbXk=
      > User-Agent: curl/7.55.1
      > Proxy-Connection: Keep-Alive
      >
      < HTTP/1.1 200 Connection established
      <
      * Proxy replied OK to CONNECT request
      * CONNECT phase completed!

      The credentials that you used for curl should now work anywhere else you might want to use your new proxy server.

      Conclusion

      In this tutorial, you learned to deploy Squid, a popular, open-source proxy server, for proxying traffic with little to no overhead. Many applications have built-in proxy support (often at the OS level) going back decades, making this proxy stack highly reusable.

      Next, you may want to learn how to deploy Dante, a SOCKS proxy which can run alongside Squid for proxying different types of web traffic.

      Because one of the most common use cases for proxy servers is proxying traffic to and from different global regions, you may want to review how to use Ansible to automate server deployments next, in case you find yourself wanting to duplicate this configuration in other data centers.


