
      Streaming

      Building Interactive Live Streaming Apps for Millions of Viewers


      How to Join

      This Tech Talk is free and open to everyone. Register below to get a link to join the live stream or receive the video recording after it airs.

Date and Time: February 15, 2022, 11:00 a.m.–12:00 p.m. ET / 4:00–5:00 p.m. GMT

      About the Talk

      Today’s CDNs aren’t optimized for real-time, scalable, multi-way live streaming. See how an XDN (Experience Delivery Network) helps deliver seamless interactive experiences for massive audiences by managing signal acquisition/ingest, latency, interactivity, and scaling in the cloud.

      What You’ll Learn

      • When to use cloud services, edge compute, and traditional CDNs
      • Protocols that reduce live streaming latency
• Monitoring strategies that deliver a seamless viewer experience
      • Avoiding vendor lock-in by controlling your stack

      This Talk Is Designed For

      • Software developers
• Anyone who wants to build applications that offer live, interactive streaming
      • Broadcasters and businesses that use OTT (over-the-top communications)

      Prerequisites

      Basic knowledge of how streaming is applied to cloud-based distribution and delivery to end users

      Resources

      Introducing XDN
      (Experience Delivery Network)

      About the Presenters

      Chris Allen, CEO, Red5 Pro
      Mark Pace, CTO, Red5 Pro
      Monica Dunphy, Developer Evangelist, Red5 Pro

      Moderated by
      Darian Wilkin, Senior Solutions Engineer, DigitalOcean

      Need help with large deployments, migration planning, or a proof of concept? Fill out this form and we’ll be in touch!




      How To Set Up a Video Streaming Server using Nginx-RTMP on Ubuntu 20.04


      Introduction

      There are many use cases for streaming video. Service providers such as Twitch are very popular for handling the web discovery and community management aspects of streaming, and free software such as OBS Studio is widely used for combining video overlays from multiple different stream sources in real time. While these platforms are very powerful, in some cases you may want to be able to host a stream that does not rely on other service providers.

      In this tutorial, you will learn how to configure the Nginx web server to host an independent RTMP video stream that can be linked and viewed in different applications. RTMP, the Real-Time Messaging Protocol, defines the fundamentals of most internet video streaming. You will also learn how to host HLS and DASH streams that support more modern platforms using the same technology.

      Prerequisites

      To complete this guide, you will need:

      This tutorial will use the placeholder domain name your_domain for URLs and hostnames. Substitute this with your own domain name or IP address as you work through the tutorial.

      Step 1 — Installing and Configuring Nginx-RTMP

      Most modern streaming tools support the RTMP protocol, which defines the basic parameters of an internet video stream. The Nginx web server includes a module that allows you to provide an RTMP stream with minimal configuration from a dedicated URL, just like it provides HTTP access to web pages by default. The Nginx RTMP module isn’t included automatically with Nginx, but on Ubuntu 20.04 and most other Linux distributions you can install it as an additional package.

      Begin by running the following commands as a non-root user to update your package listings and install the Nginx module:

      • sudo apt update
      • sudo apt install libnginx-mod-rtmp

      Installing the module won’t automatically start providing a stream. You’ll need to add a configuration block to your Nginx configuration file that defines where and how the stream will be available.

      Using nano or your favorite text editor, open Nginx’s main configuration file, /etc/nginx/nginx.conf, and add this configuration block to the end of the file:

      • sudo nano /etc/nginx/nginx.conf

      /etc/nginx/nginx.conf

      . . .
      rtmp {
              server {
                      listen 1935;
                      chunk_size 4096;
                      allow publish 127.0.0.1;
                      deny publish all;
      
                      application live {
                              live on;
                              record off;
                      }
              }
      }
      
      • listen 1935 means that RTMP will be listening for connections on port 1935, which is standard.
      • chunk_size 4096 means that RTMP will be sending data in 4KB blocks, which is also standard.
      • allow publish 127.0.0.1 and deny publish all mean that the server will only allow video to be published from the same server, to avoid any other users pushing their own streams.
      • application live defines an application block that will be available at the /live URL path.
      • live on enables live mode so that multiple users can connect to your stream concurrently, a baseline assumption of video streaming.
• record off disables Nginx-RTMP’s recording functionality, so that streams are not saved to disk by default.

      Save and close the file. If you are using nano, press Ctrl+X, then when prompted, Y and Enter.

This provides the beginning of your RTMP configuration. By default, it listens on port 1935, which means you’ll need to open that port in your firewall. If you configured ufw as part of your initial server setup, run the following command:

• sudo ufw allow 1935/tcp

      Now you can reload Nginx with your changes:

      • sudo systemctl reload nginx.service
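
If you’d like to confirm that Nginx is now listening for RTMP connections, a quick check with ss (installed by default on Ubuntu 20.04) should show a listener on port 1935:

• ss -ltn | grep 1935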

      You should now have a working RTMP server. In the next section, we’ll cover streaming video to your RTMP server from both local and remote sources.

      Step 2 — Sending Video to Your RTMP Server

      There are multiple ways to send video to your RTMP server. One option is to use ffmpeg, a popular command line audio-video utility, to play a video file directly on your server. If you don’t have a video file already on the server, you can download one using youtube-dl, a command line tool for capturing video from streaming platforms like YouTube. In order to use youtube-dl, you’ll need an up to date Python installation on your server as well.

      First, install Python and its package manager, pip:

      • sudo apt install python3-pip

Next, use pip to install youtube-dl:

• sudo pip3 install youtube-dl

      Now you can use youtube-dl to download a video from YouTube. If you don’t have one in mind, try this video, introducing DigitalOcean’s App Platform:

      • youtube-dl https://www.youtube.com/watch?v=iom_nhYQIYk

      You’ll see some output as youtube-dl combines the video and audio streams it’s downloading back into a single file – this is normal.

      Output

[youtube] iom_nhYQIYk: Downloading webpage
WARNING: Requested formats are incompatible for merge and will be merged into mkv.
[download] Destination: Introducing App Platform by DigitalOcean-iom_nhYQIYk.f137.mp4
[download] 100% of 32.82MiB in 08:40
[download] Destination: Introducing App Platform by DigitalOcean-iom_nhYQIYk.f251.webm
[download] 100% of 1.94MiB in 00:38
[ffmpeg] Merging formats into "Introducing App Platform by DigitalOcean-iom_nhYQIYk.mkv"
Deleting original file Introducing App Platform by DigitalOcean-iom_nhYQIYk.f137.mp4 (pass -k to keep)
Deleting original file Introducing App Platform by DigitalOcean-iom_nhYQIYk.f251.webm (pass -k to keep)

You should now have a video file in your current directory with a title like Introducing App Platform by DigitalOcean-iom_nhYQIYk.mkv. In order to stream it, you’ll want to install ffmpeg:

• sudo apt install ffmpeg

      And use ffmpeg to send it to your RTMP server:

      • ffmpeg -re -i "Introducing App Platform by DigitalOcean-iom_nhYQIYk.mkv" -c:v copy -c:a aac -ar 44100 -ac 1 -f flv rtmp://localhost/live/stream

      This ffmpeg command is doing a few things to prepare the video for a streaming-friendly format. This isn’t an ffmpeg tutorial, so you don’t need to examine it too closely, but you can understand the various options as follows:

      • -re specifies that input will be read at its native framerate.
      • -i "Introducing App Platform by DigitalOcean-iom_nhYQIYk.mkv" specifies the path to our input file.
      • -c:v is set to copy, meaning that you’re copying over the video format you got from YouTube natively.
• -c:a has other parameters, namely aac -ar 44100 -ac 1, because you need to resample the audio to an RTMP-friendly format. aac is a widely supported audio codec, 44100 Hz is a common sample rate, and -ac 1 downmixes the audio to a single (mono) channel for compatibility purposes.
      • -f flv wraps the video in an flv format container for maximum compatibility with RTMP.

      The video is sent to rtmp://localhost/live/stream because you defined the live configuration block in Step 1, and stream is an arbitrarily chosen URL for this video.

      Note: You can learn more about ffmpeg options from ffmprovisr, a community-maintained catalog of ffmpeg command examples, or refer to the official documentation.

      While ffmpeg is streaming the video, it will print timecodes:

      Output

frame= 127 fps= 25 q=-1.0 size=     405kB time=00:00:05.00 bitrate= 662.2kbits/s speed=
frame= 140 fps= 25 q=-1.0 size=     628kB time=00:00:05.52 bitrate= 931.0kbits/s speed=
frame= 153 fps= 25 q=-1.0 size=     866kB time=00:00:06.04 bitrate=1173.1kbits/s speed=

This is standard ffmpeg output. If you were converting video to a different format, these figures would help you gauge how efficiently the video is being resampled, but in this case, you just want to see that it’s being played back consistently. Using this sample video, fps= should hold steady at 25.

      While ffmpeg is running, you can connect to your RTMP stream from a video player. If you have VLC, mpv, or another media player installed locally, you should be able to view your stream by opening the URL rtmp://your_domain/live/stream in your media player. Your stream will terminate after ffmpeg has finished playing the video. If you want it to keep looping indefinitely, you can add -stream_loop -1 to the beginning of your ffmpeg command.
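
For example, a looping version of the command from earlier in this step would be:

• ffmpeg -stream_loop -1 -re -i "Introducing App Platform by DigitalOcean-iom_nhYQIYk.mkv" -c:v copy -c:a aac -ar 44100 -ac 1 -f flv rtmp://localhost/live/stream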

      Note: You can also stream directly to, for example, Facebook Live using ffmpeg without needing to use Nginx-RTMP at all by replacing rtmp://localhost/live/stream in your ffmpeg command with rtmps://live-api-s.facebook.com:443/rtmp/your-facebook-stream-key. YouTube uses URLs like rtmp://a.rtmp.youtube.com/live2. Other streaming providers that can consume RTMP streams should behave similarly.
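
For instance, using the sample file from earlier, a push straight to Facebook Live would look something like the following, where your-facebook-stream-key is a placeholder for the key shown in your Facebook Live settings:

• ffmpeg -re -i "Introducing App Platform by DigitalOcean-iom_nhYQIYk.mkv" -c:v copy -c:a aac -ar 44100 -ac 1 -f flv "rtmps://live-api-s.facebook.com:443/rtmp/your-facebook-stream-key"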

      Now that you’ve learned to stream static video sources from the command line, you’ll learn how to stream video from dynamic sources using OBS on a desktop.

      Step 3 — Streaming Video to Your Server via OBS (Optional)

      Streaming via ffmpeg is convenient when you have a prepared video that you want to play back, but live streaming can be much more dynamic. The most popular software for live streaming is OBS, or Open Broadcaster Software – it is free, open source, and very powerful.

      OBS is a desktop application, and will connect to your server from your local computer.

After installing OBS, configuring it means choosing which of your desktop windows and audio sources you want to add to your stream, and then adding credentials for a streaming service. This tutorial will not cover stream composition, as it comes down to preference; by default, you can have a working demo by just streaming your entire desktop. To set your streaming service credentials, open OBS’ settings menu, navigate to the Stream option, and input the following options:

      Streaming Service: Custom
      Server: rtmp://your_domain/live
      Play Path/Stream Key: obs_stream
      

      obs_stream is an arbitrarily chosen path – in this case, your video would be available at rtmp://your_domain/live/obs_stream. You do not need to enable authentication, but you do need to add an additional entry to the IP whitelist that you configured in Step 1.

Back on the server, open Nginx’s main configuration file, /etc/nginx/nginx.conf, and add an additional allow publish entry for the IP address you’ll be streaming from. If you aren’t sure what that address is, you can check a site like What’s My IP, which shows the public address your connection appears to come from:

      • sudo nano /etc/nginx/nginx.conf

      /etc/nginx/nginx.conf

      . . .
                      allow publish 127.0.0.1;
                      allow publish your_local_ip_address;
                      deny publish all;
      . . .
      

      Save and close the file, then reload Nginx:

      • sudo systemctl reload nginx.service

      You should now be able to close OBS’ settings menu and click Start Streaming from the main interface! Try viewing your stream in a media player as before. Now that you’ve seen the fundamentals of streaming video in action, you can add a few other features to your server to make it more production-ready.

      Step 4 — Adding Monitoring to Your Configuration (Optional)

      Now that you have Nginx configured to stream video using the Nginx-RTMP module, a common next step is to enable the RTMP statistics page. Rather than adding more and more configuration details to your main nginx.conf file, Nginx allows you to add per-site configurations to individual files in a subdirectory called sites-available/. In this case, you’ll create one called rtmp:

      • sudo nano /etc/nginx/sites-available/rtmp

      Add the following contents:

      /etc/nginx/sites-available/rtmp

      server {
          listen 8080;
          server_name  localhost;
      
          # rtmp stat
          location /stat {
              rtmp_stat all;
              rtmp_stat_stylesheet stat.xsl;
          }
          location /stat.xsl {
              root /var/www/html/rtmp;
          }
      
          # rtmp control
          location /control {
              rtmp_control all;
          }
      }
      

      Save and close the file. The stat.xsl file from this configuration block is used to style and display an RTMP statistics page in your browser. It is provided by the libnginx-mod-rtmp library that you installed earlier, but it comes zipped up by default, so you will need to unzip it and put it in the /var/www/html/rtmp directory to match the above configuration. Note that you can find additional information about any of these options in the Nginx-RTMP documentation.

      Create the /var/www/html/rtmp directory, and then uncompress the stat.xsl.gz file with the following commands:

      • sudo mkdir /var/www/html/rtmp
• sudo sh -c 'gunzip -c /usr/share/doc/libnginx-mod-rtmp/examples/stat.xsl.gz > /var/www/html/rtmp/stat.xsl'

      Finally, to access the statistics page that you added, you will need to open another port in your firewall. Specifically, the listen directive is configured with port 8080, so you will need to add a rule to access Nginx on that port. However, you probably don’t want others to be able to access your stats page, so it’s best only to allow it for your own IP address. Run the following command:

      • sudo ufw allow from your_ip_address to any port http-alt

      Next, you’ll need to activate this new configuration. Nginx’s convention is to create symbolic links (like shortcuts) from files in sites-available/ to another folder called sites-enabled/ as you decide to enable or disable them. Using full paths for clarity, make that link:

      • sudo ln -s /etc/nginx/sites-available/rtmp /etc/nginx/sites-enabled/rtmp

      Now you can reload Nginx again to process your changes:

      • sudo systemctl reload nginx.service

      You should now be able to go to http://your_domain:8080/stat in a browser to see the RTMP statistics page. Visit and refresh the page while streaming video and watch as the stream statistics change.
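
If you’d rather check from the server itself, the same statistics are served as XML, and a quick request with curl (install it with sudo apt install curl if it isn’t present) should return them:

• curl http://localhost:8080/stat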

      You’ve now seen how to monitor your video stream and push it to third party providers. In the final section, you’ll learn how to provide it directly in a browser without the use of third party streaming platforms or standalone media player apps.

      Step 5 — Creating Modern Streams for Browsers (Optional)

      As a final step, you may want to add support for newer streaming protocols so that users can stream video from your server using a web browser directly. There are two protocols that you can use to create HTTP-based video streams: Apple’s HLS and MPEG DASH. They both have advantages and disadvantages, so you will probably want to support both.

The Nginx-RTMP module supports both standards. To add HLS and DASH support to your server, you will need to modify the rtmp block in your nginx.conf file. Open /etc/nginx/nginx.conf using nano or your preferred editor, then add the following hls and dash directives to the application live block:

      • sudo nano /etc/nginx/nginx.conf

      /etc/nginx/nginx.conf

      . . .
      rtmp {
              server {
      . . .
                      application live {
                              live on;
                              record off;
                              hls on;
                              hls_path /var/www/html/stream/hls;
                              hls_fragment 3;
                              hls_playlist_length 60;
      
                              dash on;
                              dash_path /var/www/html/stream/dash;
                      }
              }
      }
      . . .
      

      Save and close the file. Next, add this to the bottom of your sites-available/rtmp:

      • sudo nano /etc/nginx/sites-available/rtmp

      /etc/nginx/sites-available/rtmp

      . . .
      server {
          listen 8088;
      
          location / {
              add_header Access-Control-Allow-Origin *;
              root /var/www/html/stream;
          }
      }
      
      types {
          application/dash+xml mpd;
      }
      

      Note: The Access-Control-Allow-Origin * header enables CORS, or Cross-Origin Resource Sharing, which is disabled by default. This communicates to any web browsers accessing data from your server that the server may load resources from other ports or domains. CORS is needed for maximum compatibility with HLS and DASH clients, and a common configuration toggle in many other web deployments.

Save and close the file. Note that you’re using port 8088 here, which is another arbitrary choice for this tutorial, made to avoid conflicting with any services you may be running on port 80 or 443. You’ll want to open that port in your firewall for now too:

• sudo ufw allow 8088/tcp

      Finally, create a stream directory in your web root to match the configuration block, so that Nginx can generate the necessary files for HLS and DASH:

      • sudo mkdir /var/www/html/stream

      Reload Nginx again:

      • sudo systemctl reload nginx

      You should now have an HLS stream available at http://your_domain:8088/hls/stream.m3u8 and a DASH stream available at http://your_domain:8088/dash/stream.mpd. These endpoints will generate any necessary metadata on top of your RTMP video feed in order to support modern APIs.
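
While a stream is running, you can sanity-check both endpoints from the server with curl; each request should return a small manifest file rather than an error:

• curl http://localhost:8088/hls/stream.m3u8
• curl http://localhost:8088/dash/stream.mpd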

      Conclusion

      The configuration options that you used in this tutorial are all documented in the Nginx RTMP Wiki page. Nginx modules typically share common syntax and expose a very large set of configuration options, and you can review their documentation to change any of your settings from here.

      Nearly all internet video streaming is implemented on top of RTMP, HLS, and DASH, and by using the approach that you have explored in this tutorial, you can provide your stream via other broadcasting services, or expose it any other way you choose. Next, you could look into configuring Nginx as a reverse proxy in order to make some of these different video endpoints available as subdomains.




      How To Set Up Physical Streaming Replication with PostgreSQL 12 on Ubuntu 20.04


      The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Streaming replication is a popular method you can use to horizontally scale your relational databases. It uses two or more copies of the same database cluster running on separate machines. One database cluster is referred to as the primary and serves both read and write operations; the others, referred to as the replicas, serve only read operations. You can also use streaming replication to provide high availability of a system. If the primary database cluster or server were to unexpectedly fail, the replicas are able to continue serving read operations, or (one of the replicas) become the new primary cluster.

PostgreSQL is a widely used relational database that supports both logical and physical replication. Logical replication streams high-level changes from the primary database cluster to the replica databases, which lets you stream changes to just a single database or table in a database. In physical replication, by contrast, changes to the WAL (Write-Ahead Log) files are streamed and replayed on the replica clusters. As a result, you can’t replicate specific areas of a primary database cluster; instead, all changes to the primary are replicated.

      In this tutorial, you will set up physical streaming replication with PostgreSQL 12 on Ubuntu 20.04 using two separate machines running two separate PostgreSQL 12 clusters. One machine will be the primary and the other, the replica.

      Prerequisites

      To complete this tutorial, you will need the following:

• Two separate Ubuntu 20.04 machines; one referred to as the primary and the other referred to as the replica. You can set these up with our Initial Server Setup Guide, including non-root users with sudo permissions and a firewall.
      • Your firewalls configured to allow HTTP/HTTPS and traffic on port 5432—the default port used by PostgreSQL 12. You can follow How To Set Up a Firewall with ufw on Ubuntu 20.04 to configure these firewall settings.
      • PostgreSQL 12 running on both Ubuntu 20.04 Servers. Follow Step 1 of the How To Install and Use PostgreSQL on Ubuntu 20.04 tutorial that covers the installation and basic usage of PostgreSQL on Ubuntu 20.04.

      Step 1 — Configuring the Primary Database to Accept Connections

In this first step, you’ll configure the primary database to allow your replica database(s) to connect. By default, PostgreSQL only listens on localhost (127.0.0.1) for connections. To change this, you’ll first edit the listen_addresses configuration parameter on the primary database.

On your primary server, run the following command to connect to the PostgreSQL cluster as the default postgres user:

• sudo -u postgres psql

      Once you have connected to the database, you’ll modify the listen_addresses parameter using the ALTER SYSTEM command:

• ALTER SYSTEM SET listen_addresses TO 'your_primary_IP_addr';

Replace 'your_primary_IP_addr' with an IP address of the primary machine itself. The listen_addresses parameter controls which of the server’s own network interfaces PostgreSQL listens on, so this must be an address that belongs to the primary and that the replica can reach.

      You will receive the following output:

      Output

      ALTER SYSTEM

The command you just entered instructs the PostgreSQL database cluster to listen for TCP/IP connections on that interface rather than only on localhost. If the primary has several network interfaces, you can list their addresses separated by commas, or use '*' to listen on all of them, though limiting this to the address the replica needs is the more conservative choice. Which machines are actually allowed to connect is controlled separately in pg_hba.conf, which you’ll edit in the next step.

      Note: You can also run the command on the database from the terminal using psql -c as follows:

• sudo -u postgres psql -c "ALTER SYSTEM SET listen_addresses TO 'your_primary_IP_addr';"

      Alternatively, you can change the value for listen_addresses by manually editing the postgresql.conf configuration file, which you can find in the /etc/postgresql/12/main/ directory by default. You can also get the location of the configuration file by running SHOW config_file; on the database cluster.

To open the file using nano, run:

      • sudo nano /etc/postgresql/12/main/postgresql.conf
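
If you take this route, the relevant line in that file would look something like the following sketch; as above, the value should be an address belonging to the primary itself:

/etc/postgresql/12/main/postgresql.conf

. . .
listen_addresses = 'your_primary_IP_addr'
. . .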

Once you’re done (the change takes effect when you restart the primary cluster at the end of the next step), your primary database will accept connections from other machines. Next, you’ll create a role with the appropriate permissions that the replica will use when connecting to the primary.

      Step 2 — Creating a Special Role with Replication Permissions

      Now, you need to create a role in the primary database that has permission to replicate the database. Your replica will use this role when connecting to the primary. Creating a separate role just for replication also has security benefits. Your replica won’t be able to manipulate any data on the primary; it will only be able to replicate the data.

      To create a role, you need to run the following command on the primary cluster:

      • CREATE ROLE test WITH REPLICATION PASSWORD 'testpassword' LOGIN;

      You’ll receive the following output:

      Output

      CREATE ROLE

      This command creates a role named test with the password 'testpassword', which has permission to replicate the database cluster.

PostgreSQL has a special replication pseudo-database that the replica connects to, but you first need to edit the /etc/postgresql/12/main/pg_hba.conf configuration file to allow your replica to access it. So, exit the PostgreSQL command prompt by running:

• \q

      Now that you’re back at your terminal command prompt, open the /etc/postgresql/12/main/pg_hba.conf configuration file using nano:

      • sudo nano /etc/postgresql/12/main/pg_hba.conf

      Append the following line to the end of the pg_hba.conf file:

      /etc/postgresql/12/main/pg_hba.conf

      . . .
      host    replication     test    your-replica-IP/32   md5
      

      This ensures that your primary allows your replica to connect to the replication pseudo-database using the role, test, you created earlier. The host value means to accept non-local connections via plain or SSL-encrypted TCP/IP sockets. replication is the name of the special pseudo-database that PostgreSQL uses for replication. Finally, the value md5 is the type of authentication used. If you want to have more than one replica, just add the same line again to the end of the file with the IP address of your other replica.
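
For example, with a hypothetical second replica (your-other-replica-IP is a placeholder), the end of the file would contain one such line per replica:

/etc/postgresql/12/main/pg_hba.conf

. . .
host    replication     test    your-replica-IP/32         md5
host    replication     test    your-other-replica-IP/32   md5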

      To ensure these changes to the configuration file are implemented, you need to restart the primary cluster using:

      • sudo systemctl restart postgresql@12-main

      If your primary cluster restarted successfully, it is correctly set up and ready to start streaming once your replica connects. Next, you’ll move on to setting up your replica cluster.
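
Before you do, you can optionally confirm from the replica machine that the primary is reachable on port 5432. The pg_isready utility ships with the PostgreSQL packages you installed in the prerequisites; it should report that the primary is accepting connections:

• pg_isready -h primary-ip-addr -p 5432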

      Step 3 — Backing Up the Primary Cluster on the Replica

      As you are setting up physical replication with PostgreSQL in this tutorial, you need to perform a physical backup of the primary cluster’s data files into the replica’s data directory. To do this, you’ll first clear out all the files in the replica’s data directory. The default data directory for PostgreSQL on Ubuntu is /var/lib/postgresql/12/main/.

You can also find PostgreSQL’s data directory by running the following command on the replica’s database:

• SHOW data_directory;

      Once you have the location of the data directory, run the following command to remove everything:

      • sudo -u postgres rm -r /var/lib/postgresql/12/main/*

      Since the default owner of the files in the directory is the postgres user, you will need to run the command as postgres using sudo -u postgres.

      Note:
In the exceedingly rare case that a file in the directory is corrupted and the command does not work, remove the main directory altogether and recreate it with the appropriate permissions as follows:

      • sudo -u postgres rm -r /var/lib/postgresql/12/main
      • sudo -u postgres mkdir /var/lib/postgresql/12/main
      • sudo -u postgres chmod 700 /var/lib/postgresql/12/main

      Now that the replica’s data directory is empty, you can perform a physical backup of the primary’s data files. PostgreSQL conveniently has the utility pg_basebackup that simplifies the process. It even allows you to put the server into standby mode using the -R option.

      Execute the pg_basebackup command on the replica as follows:

      • sudo -u postgres pg_basebackup -h primary-ip-addr -p 5432 -U test -D /var/lib/postgresql/12/main/ -Fp -Xs -R
      • The -h option specifies a non-local host. Here, you need to enter the IP address of your server with the primary cluster.

• The -p option specifies the port number to connect to on the primary server. By default, PostgreSQL uses port 5432.

      • The -U option allows you to specify the user you connect to the primary cluster as. This is the role you created in the previous step.

      • The -D flag is the output directory of the backup. This is your replica’s data directory that you emptied just before.

• The -Fp option specifies that the data should be output in plain format instead of as a tar file.

      • -Xs streams the contents of the WAL log as the backup of the primary is performed.

      • Lastly, -R creates an empty file, named standby.signal, in the replica’s data directory. This file lets your replica cluster know that it should operate as a standby server. The -R option also adds the connection information about the primary server to the postgresql.auto.conf file. This is a special configuration file that is read whenever the regular postgresql.conf file is read, but the values in the .auto file override the values in the regular configuration file.

      When the pg_basebackup command connects to the primary, you will be prompted to enter the password for the role you created in the previous step. Depending on the size of your primary database cluster, it may take some time to copy all the files.
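
Once the copy finishes, you can optionally confirm that the backup landed where expected and that the standby marker was created. Listing the data directory as the postgres user (it isn’t readable by other accounts) should show standby.signal alongside the usual cluster files:

• sudo -u postgres ls /var/lib/postgresql/12/main/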

Your replica will now have all the data files from the primary that it requires to begin replication. Next, you’ll put the replica into standby mode and start replicating.

      Step 4 — Restarting and Testing the Clusters

      Now that the primary cluster’s data files have been successfully backed up on the replica, the next step is to restart the replica database cluster to put it into standby mode. To restart the replica database, run the following command:

      • sudo systemctl restart postgresql@12-main

If your replica cluster restarted in standby mode successfully, it should have already connected to the primary database cluster on your other machine. To check whether the replica has connected and the primary is streaming, connect to the primary database cluster by running:

• sudo -u postgres psql

      Now query the pg_stat_replication table on the primary database cluster as follows:

      • SELECT client_addr, state FROM pg_stat_replication;

      Running this query on the primary cluster will output something similar to the following:

      Output

  client_addr    |   state
-----------------+-----------
 your_replica_IP | streaming

      If you have similar output, then the primary is correctly streaming to the replica.

      Conclusion

      You now have two Ubuntu 20.04 servers each with a PostgreSQL 12 database cluster running with physical streaming between them. Any changes now made to the primary database cluster will also appear in the replica cluster.
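
As a quick, optional smoke test (using a hypothetical throwaway table name), create a table on the primary:

• sudo -u postgres psql -c "CREATE TABLE replication_smoke_test (id integer);"

Then query it on the replica, where it should appear almost immediately; since the replica is read-only, the table could only have arrived via replication:

• sudo -u postgres psql -c "SELECT * FROM replication_smoke_test;"

You can drop the table on the primary afterwards to clean up.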

      You can also add more replicas to your setup if your databases need to handle more traffic.

      If you wish to learn more about physical streaming replication including how to set up synchronous replication to ensure zero chance of losing any mission-critical data, you can read the entry in the official PostgreSQL docs.

      You can check out our PostgreSQL topic page for more tutorials and content.


