
      How To Set Up Multi-Node Deployments With Rancher 2.1, Kubernetes, and Docker Machine on Ubuntu 18.04


      The author selected Code Org to receive a donation as part of the Write for DOnations program.

      Introduction

      Rancher is a popular open-source container management platform. Released in early 2018, Rancher 2.X works on Kubernetes and has incorporated new tools such as multi-cluster management and built-in CI pipelines. In addition to the enhanced security, scalability, and straightforward deployment tools already in Kubernetes, Rancher offers a graphical user interface that makes managing containers easier. Through Rancher’s GUI, users can manage secrets, securely handle roles and permissions, scale nodes and pods, and set up load balancers and volumes without needing a command line tool or complex YAML files.

      In this tutorial, you will deploy a multi-node Rancher 2.1 server using Docker Machine on Ubuntu 18.04. By the end, you’ll be able to provision new DigitalOcean Droplets and container pods via the Rancher UI to quickly scale up or down your hosting environment.

      Prerequisites

      Before you start this tutorial, you’ll need a DigitalOcean account, in addition to the following:

      • A DigitalOcean Personal Access Token, which you can create following the instructions in this tutorial. This token will allow Rancher to have API access to your DigitalOcean account.

      • A fully registered domain name with an A record that points to the IP address of the Droplet you create in Step 1. You can learn how to point domains to DigitalOcean Droplets by reading through DigitalOcean’s Domains and DNS documentation. Throughout this tutorial, substitute your domain for example.com.

      Step 1 — Creating a Droplet With Docker Installed

      To start and configure Rancher, you’ll need to create a new Droplet with Docker installed. To accomplish this, you can use DigitalOcean’s Docker image.

      First, log in to your DigitalOcean account and choose Create Droplet. Then, under the Choose an Image section, select the Marketplace tab. Select Docker 18.06.1~ce~3 on 18.04.

      Choose the Docker 18.06 image from the One-click Apps menu

      Next, select a Droplet no smaller than 2GB and choose a datacenter region for your Droplet.

      Finally, add your SSH keys, provide a host name for your Droplet, and press the Create button.

      It will take a few minutes for the server to provision and for Docker to download. Once the Droplet deploys successfully, you’re ready to start Rancher in a new Docker container.

      Step 2 — Starting and Configuring Rancher

      The Droplet you created in Step 1 will run Rancher in a Docker container. In this step, you will start the Rancher container and ensure it has a Let’s Encrypt SSL certificate so that you can securely access the Rancher admin panel. Let’s Encrypt is an automated, open-source certificate authority that allows developers to provision ninety-day SSL certificates for free.

      Log in to your new Droplet:
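
• ssh root@your_droplet_ip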

      To make sure Docker is running, enter:
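
• docker -v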

      Check that the listed Docker version is what you expect. You can start Rancher with a Let's Encrypt certificate already installed by running the following command:

      • docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v /host/rancher:/var/lib/rancher rancher/rancher --acme-domain example.com

The --acme-domain option installs an SSL certificate from Let's Encrypt to ensure your Rancher admin panel is served over HTTPS. This command also instructs the Droplet to fetch the rancher/rancher Docker image and start a Rancher instance in a container that will restart automatically if it ever goes down accidentally. To ease recovery in the event of data loss, the command mounts a volume on the host machine (at /host/rancher) that contains the Rancher data.

      To see all the running containers, enter:
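
• docker ps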

      You'll see output similar to the following (with a unique container ID and name):

      Output

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                                      NAMES
7b2afed0a599        rancher/rancher     "entrypoint.sh"     12 seconds ago      Up 11 seconds       0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   wizardly_fermat

      If the container is not running, you can execute the docker run command again.

      Before you can access the Rancher admin panel, you'll need to set your admin password and Rancher server URL. The Rancher admin interface will give you access to all of your running nodes, pods, and secrets, so it is important that you use a strong password for it.

      Go to the domain name that points to your new Droplet in your web browser. The first time you visit this address, Rancher will let you set a password:

      Set your Rancher password using the prompt

      When asked for your Rancher server URL, use the domain name pointed at your Droplet.

      You have now completed your Rancher server setup, and you will see the Rancher admin home screen:

      The Rancher admin home screen

      You're ready to continue to the Rancher cluster setup.

      Step 3 — Configuring a Cluster With a Single Node

To use Rancher, you'll need to create a cluster with at least one node. A cluster is a group of one or more nodes; the Kubernetes Architecture documentation provides more background on how clusters are organized. In this tutorial, nodes correspond to Droplets that Rancher will manage. Pods represent a group of running Docker containers within the Droplet. Each node can run many pods. Using the Rancher UI, you can set up clusters and nodes in an underlying Kubernetes environment.

      By the end of this step, you will have set up a cluster with a single node ready to run your first pod.

      In Rancher, click Add Cluster, and select DigitalOcean as the infrastructure provider.

      Select DigitalOcean from the listed infrastructure providers

      Enter a Cluster Name and scroll down to the Node Pools section. Enter a Name Prefix, leave the Count at 1 for now, and check etcd, Control Plane, and Worker.

• etcd is Kubernetes' key-value store for keeping your entire environment's state. To maintain high availability, you should run three or five etcd nodes so that if one goes down your environment will still be manageable.
      • Control Plane checks through all of the Kubernetes Objects — such as pods — in your environment and keeps them up to date with the configuration you provide in the Rancher admin interface.
      • Workers run the actual workloads and monitoring agents that ensure your containers stay running and networked. Worker nodes are where your pods will run the software you deploy.

      Create a Node Pool with a single Node

      Before creating the cluster, click Add Node Template to configure the specific options for your new node.

      Enter your DigitalOcean Personal Access Token in the Access Token input box and click Next: Configure Droplet.

      Next, select the same Region and Droplet Size as Step 1. For Image, be sure to select Ubuntu 16.04.5 x64 as there's currently a compatibility issue with Rancher and Ubuntu 18.04. Hit Create to save the template.

      Finally, click Create at the Add Cluster page to kick off the provisioning process. It will take a few minutes for Rancher to complete this step, but you will see a new Droplet in your DigitalOcean Droplets dashboard when it's done.

      In this step, you've created a new cluster and node onto which you will deploy a workload in the next section.

      Step 4 — Deploying a Web Application Workload

Once the new cluster and node are ready, you can deploy your first pod in a workload. A Kubernetes Pod is the smallest unit of work available to Kubernetes and, by extension, Rancher. Workloads describe a single group of pods that you deploy together. For example, you may run multiple pods of your web server in a single workload to ensure that if one pod slows down with a particular request, other instances can handle incoming requests. In this section, you're going to deploy an Nginx Hello World image to a single pod.

      Hover over Global in the header and select Default. This will bring you to the Default project dashboard. You'll focus on deploying a single project in this tutorial, but from this dashboard you can also create multiple projects to achieve isolated container hosting environments.

      To start configuring your first pod, click Deploy.

      Enter a Name, and put nginxdemos/hello in the Docker Image field. Next, map port 80 in the container to port 30000 on the host nodes. This will ensure that the pods you deploy are available on each node at port 30000. You can leave Protocol set to TCP, and the next dropdown as NodePort.

      Note: While this method of running the pod on every node's port is easier to get started, Rancher also includes Ingress to provide load balancing and SSL termination for production use.

      The input form for deploying a Workload

      To launch the pod, scroll to the bottom and click Launch.

      Rancher will take you back to the default project home page, and within a few seconds your pod will be ready. Click the link 30000/tcp just below the name of the workload and Rancher will open a new tab with information about the running container's environment.

      Server address, Server name, and other output from the running NGINX container

      The Server address and port you see on this page are those of the internal Docker network, and not the public IP address you see in your browser. This means that Rancher is working and routing traffic from http://first_node_ip:30000/ to the workload as expected.
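
If you prefer to verify this from the command line, you can request the NodePort directly with curl (substituting first_node_ip with the node's public IP address):

• curl http://first_node_ip:30000/

The returned HTML should include the same server address and name details you saw in the browser.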

      At this point, you've successfully deployed your first workload of one pod to a single Rancher node. Next, you'll see how to scale your Rancher environment.

      Step 5 — Scaling Nodes and Pods

      Rancher gives you two ways to scale your hosting resources: increasing the number of pods in your workload or increasing the number of nodes in your cluster.

      Adding pods to your workload will give your application more running processes. This will allow it to handle more traffic and enable zero-downtime deployments, but each node can handle only a finite number of pods. Once all your nodes have hit their pod limit, you will have to increase the number of nodes if you want to continue scaling up.

      Another consideration is that while increasing pods is typically free, you will have to pay for each node you add to your environment. In this step, you will scale up both nodes and pods, and add another node to your Rancher cluster.

      Note: This part of the tutorial will provision a new DigitalOcean Droplet automatically via the API, so be aware that you will incur extra charges while the second node is running.

      Navigate to the cluster home page of your Rancher installation by selecting Cluster: your-cluster-name from the top navigation bar. Next click Nodes from the top navigation bar.

      Use the top navbar dropdown to select your Cluster

      This page shows that you currently have one running node in the cluster. To add more nodes, click Edit Cluster, and scroll to the Node Pools section at the bottom of the page. Click Add Node Pool, enter a prefix, and check the Worker box. Click Save to update the cluster.

      Add a Node Pool as a Worker only

Within 2–5 minutes, Rancher will provision a second Droplet and mark the node as Active in the cluster's dashboard. This second node is only a worker, which means it will not run the Rancher etcd or Control Plane containers, leaving it with more capacity for running workloads.

Note: Having an odd number of etcd nodes ensures that they can always reach a quorum (or consensus). If you only have one etcd node, you run the risk of your cluster becoming unreachable if that node goes down. In a production environment it is a better practice to run three or five etcd nodes.

      When the second node is ready, you will be able to see the workload you deployed in the previous step on this node by navigating to http://second_node_ip:30000/ in your browser.

      Scaling up nodes gives you more Droplets to distribute your workloads on, but you may also want to run more instances of each pod within a workload. To add more pods, return to the Default project page, press the arrow to the left of your hello-world workload, and click + twice to add two more pods.

      Running three Hello World Pods in a Workload

      Rancher will automatically deploy more pods and distribute the running containers to each node depending on where there is availability.

      You can now scale your nodes and pods to suit your application's requirements.

      Conclusion

      You've now set up multi-node deployments using Rancher 2.1 on Ubuntu 18.04, and have scaled up to two running nodes and multiple pods within a workload. You can use this strategy to host and scale any kind of Docker container that you need to run in your application and use Rancher's dashboard and alerts to help you maximize the performance of your workloads and nodes within each cluster.




      How To Create a Multi-Node MySQL Cluster on Ubuntu 18.04


      Introduction

      The MySQL Cluster distributed database provides high availability and throughput for your MySQL database management system. A MySQL Cluster consists of one or more management nodes (ndb_mgmd) that store the cluster’s configuration and control the data nodes (ndbd), where cluster data is stored. After communicating with the management node, clients (MySQL clients, servers, or native APIs) connect directly to these data nodes.

      With MySQL Cluster there is typically no replication of data, but instead data node synchronization. For this purpose a special data engine must be used — NDBCluster (NDB). It’s helpful to think of the cluster as a single logical MySQL environment with redundant components. Thus, a MySQL Cluster can participate in replication with other MySQL Clusters.

      MySQL Cluster works best in a shared-nothing environment. Ideally, no two components should share the same hardware. For simplicity and demonstration purposes, we’ll limit ourselves to using only three servers. We will set up two servers as data nodes which sync data between themselves. The third server will be used for the Cluster Manager and also for the MySQL server/client. If you spin up additional servers, you can add more data nodes to the cluster, decouple the cluster manager from the MySQL server/client, and configure more servers as Cluster Managers and MySQL servers/clients.

      Prerequisites

      To complete this tutorial, you will need a total of three servers: two servers for the redundant MySQL data nodes (ndbd), and one server for the Cluster Manager (ndb_mgmd) and MySQL server/client (mysqld and mysql).

In the same DigitalOcean data center, create three Ubuntu 18.04 Droplets with private networking enabled.

      Be sure to note down the private IP addresses of your three Droplets. In this tutorial our cluster nodes have the following private IP addresses:

      • 198.51.100.0 will be the first MySQL Cluster data node
      • 198.51.100.1 will be the second data node
      • 198.51.100.2 will be the Cluster Manager & MySQL server node

      Once you’ve spun up your Droplets, configured a non-root user, and noted down the IP addresses for the 3 nodes, you’re ready to begin with this tutorial.

      Step 1 — Installing and Configuring the Cluster Manager

      We’ll first begin by downloading and installing the MySQL Cluster Manager, ndb_mgmd.

To install the Cluster Manager, we first need to fetch the appropriate .deb installer file from the official MySQL Cluster download page.

      From this page, under Select Operating System, choose Ubuntu Linux. Then, under Select OS Version, choose Ubuntu Linux 18.04 (x86, 64-bit).

      Scroll down until you see DEB Package, NDB Management Server, and click on the Download link for the one that does not contain dbgsym (unless you require debug symbols). You will be brought to a Begin Your Download page. Here, right click on No thanks, just start my download. and copy the link to the .deb file.

      Now, log in to your Cluster Manager Droplet (in this tutorial, 198.51.100.2), and download this .deb file:

      • cd ~
      • wget https://dev.mysql.com/get/Downloads/MySQL-Cluster-7.6/mysql-cluster-community-management-server_7.6.6-1ubuntu18.04_amd64.deb

      Install ndb_mgmd using dpkg:

      • sudo dpkg -i mysql-cluster-community-management-server_7.6.6-1ubuntu18.04_amd64.deb

      We now need to configure ndb_mgmd before first running it; proper configuration will ensure correct synchronization and load distribution among the data nodes.

      The Cluster Manager should be the first component launched in any MySQL cluster. It requires a configuration file, passed in as an argument to its executable. We’ll create and use the following configuration file: /var/lib/mysql-cluster/config.ini.

      On the Cluster Manager Droplet, create the /var/lib/mysql-cluster directory where this file will reside:

      • sudo mkdir /var/lib/mysql-cluster

      Then create and edit the configuration file using your preferred text editor:

      • sudo nano /var/lib/mysql-cluster/config.ini

      Paste the following text into your editor:

      /var/lib/mysql-cluster/config.ini

      [ndbd default]
      # Options affecting ndbd processes on all data nodes:
      NoOfReplicas=2  # Number of replicas
      
      [ndb_mgmd]
      # Management process options:
      hostname=198.51.100.2 # Hostname of the manager
      datadir=/var/lib/mysql-cluster  # Directory for the log files
      
      [ndbd]
      hostname=198.51.100.0 # Hostname/IP of the first data node
      NodeId=2            # Node ID for this data node
      datadir=/usr/local/mysql/data   # Remote directory for the data files
      
      [ndbd]
      hostname=198.51.100.1 # Hostname/IP of the second data node
      NodeId=3            # Node ID for this data node
      datadir=/usr/local/mysql/data   # Remote directory for the data files
      
      [mysqld]
      # SQL node options:
      hostname=198.51.100.2 # In our case the MySQL server/client is on the same Droplet as the cluster manager
      

After pasting in this text, be sure to replace the hostname values above with the correct IP addresses of the Droplets you’ve configured. Setting this hostname parameter is an important security measure that prevents other servers from connecting to the Cluster Manager.

      Save the file and close your text editor.

      This is a pared-down, minimal configuration file for a MySQL Cluster. You should customize the parameters in this file depending on your production needs. For a sample, fully configured ndb_mgmd configuration file, consult the MySQL Cluster documentation.

      In the above file you can add additional components like data nodes (ndbd) or MySQL server nodes (mysqld) by appending instances to the appropriate section.

      We can now start the manager by executing the ndb_mgmd binary and specifying its config file using the -f flag:

      • sudo ndb_mgmd -f /var/lib/mysql-cluster/config.ini

      You should see the following output:

      Output

MySQL Cluster Management Server mysql-5.7.22 ndb-7.6.6
2018-07-25 21:48:39 [MgmtSrvr] INFO     -- The default config directory '/usr/mysql-cluster' does not exist. Trying to create it...
2018-07-25 21:48:39 [MgmtSrvr] INFO     -- Successfully created config directory

      This indicates that the MySQL Cluster Management server has successfully been installed and is now running on your Droplet.

      Ideally, we’d like to start the Cluster Management server automatically on boot. To do this, we’re going to create and enable a systemd service.

      Before we create the service, we need to kill the running server:
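
• sudo pkill -f ndb_mgmd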

      Now, open and edit the following systemd Unit file using your favorite editor:

      • sudo nano /etc/systemd/system/ndb_mgmd.service

      Paste in the following code:

      /etc/systemd/system/ndb_mgmd.service

      [Unit]
      Description=MySQL NDB Cluster Management Server
      After=network.target auditd.service
      
      [Service]
      Type=forking
      ExecStart=/usr/sbin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini
      ExecReload=/bin/kill -HUP $MAINPID
      KillMode=process
      Restart=on-failure
      
      [Install]
      WantedBy=multi-user.target
      

      Here, we’ve added a minimal set of options instructing systemd on how to start, stop and restart the ndb_mgmd process. To learn more about the options used in this unit configuration, consult the systemd manual.

      Save and close the file.

      Now, reload systemd’s manager configuration using daemon-reload:

      • sudo systemctl daemon-reload

      We’ll enable the service we just created so that the MySQL Cluster Manager starts on reboot:

      • sudo systemctl enable ndb_mgmd

      Finally, we’ll start the service:

      • sudo systemctl start ndb_mgmd

      You can verify that the NDB Cluster Management service is running:

      • sudo systemctl status ndb_mgmd

      You should see the following output:

      ● ndb_mgmd.service - MySQL NDB Cluster Management Server
         Loaded: loaded (/etc/systemd/system/ndb_mgmd.service; enabled; vendor preset: enabled)
         Active: active (running) since Thu 2018-07-26 21:23:37 UTC; 3s ago
        Process: 11184 ExecStart=/usr/sbin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini (code=exited, status=0/SUCCESS)
       Main PID: 11193 (ndb_mgmd)
          Tasks: 11 (limit: 4915)
         CGroup: /system.slice/ndb_mgmd.service
                 └─11193 /usr/sbin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini
      

This indicates that the ndb_mgmd MySQL Cluster Management server is now running as a systemd service.

      The final step for setting up the Cluster Manager is to allow incoming connections from other MySQL Cluster nodes on our private network.

      If you did not configure the ufw firewall when setting up this Droplet, you can skip ahead to the next section.

      We’ll add rules to allow local incoming connections from both data nodes:

      • sudo ufw allow from 198.51.100.0
      • sudo ufw allow from 198.51.100.1

      After entering these commands, you should see the following output:

      Output

      Rule added

      The Cluster Manager should now be up and running, and able to communicate with other Cluster nodes over the private network.

      Step 2 — Installing and Configuring the Data Nodes

      Note: All the commands in this section should be executed on both data nodes.

      In this step, we'll install the ndbd MySQL Cluster data node daemon, and configure the nodes so they can communicate with the Cluster Manager.

      To install the data node binaries we first need to fetch the appropriate .deb installer file from the official MySQL download page.

      From this page, under Select Operating System, choose Ubuntu Linux. Then, under Select OS Version, choose Ubuntu Linux 18.04 (x86, 64-bit).

      Scroll down until you see DEB Package, NDB Data Node Binaries, and click on the Download link for the one that does not contain dbgsym (unless you require debug symbols). You will be brought to a Begin Your Download page. Here, right click on No thanks, just start my download. and copy the link to the .deb file.

      Now, log in to your first data node Droplet (in this tutorial, 198.51.100.0), and download this .deb file:

      • cd ~
      • wget https://dev.mysql.com/get/Downloads/MySQL-Cluster-7.6/mysql-cluster-community-data-node_7.6.6-1ubuntu18.04_amd64.deb

      Before we install the data node binary, we need to install a dependency, libclass-methodmaker-perl:

      • sudo apt update
      • sudo apt install libclass-methodmaker-perl

We can now install the data node binary using dpkg:

      • sudo dpkg -i mysql-cluster-community-data-node_7.6.6-1ubuntu18.04_amd64.deb

      The data nodes pull their configuration from MySQL’s standard location, /etc/my.cnf. Create this file using your favorite text editor and begin editing it:
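
• sudo nano /etc/my.cnf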

      Add the following configuration parameter to the file:

      /etc/my.cnf

      [mysql_cluster]
      # Options for NDB Cluster processes:
      ndb-connectstring=198.51.100.2  # location of cluster manager
      

      Specifying the location of the Cluster Manager node is the only configuration needed for ndbd to start. The rest of the configuration will be pulled from the manager directly.

      Save and exit the file.

      In our example, the data node will find out that its data directory is /usr/local/mysql/data, per the manager's configuration. Before starting the daemon, we’ll create this directory on the node:

      • sudo mkdir -p /usr/local/mysql/data

      Now we can start the data node using the following command:
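
• sudo ndbd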

      You should see the following output:

      Output

2018-07-18 19:48:21 [ndbd] INFO     -- Angel connected to '198.51.100.2:1186'
2018-07-18 19:48:21 [ndbd] INFO     -- Angel allocated nodeid: 2

      The NDB data node daemon has been successfully installed and is now running on your server.

      We also need to allow incoming connections from other MySQL Cluster nodes over the private network.

      If you did not configure the ufw firewall when setting up this Droplet, you can skip ahead to setting up the systemd service for ndbd.

      We’ll add rules to allow incoming connections from the Cluster Manager and other data nodes:

      • sudo ufw allow from 198.51.100.0
      • sudo ufw allow from 198.51.100.2

      After entering these commands, you should see the following output:

      Output

      Rule added

Your MySQL data node Droplet can now communicate with both the Cluster Manager and the other data node over the private network.

      Finally, we’d also like the data node daemon to start up automatically when the server boots. We’ll follow the same procedure used for the Cluster Manager, and create a systemd service.

      Before we create the service, we’ll kill the running ndbd process:
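
• sudo pkill -f ndbd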

      Now, open and edit the following systemd Unit file using your favorite editor:

      • sudo nano /etc/systemd/system/ndbd.service

      Paste in the following code:

      /etc/systemd/system/ndbd.service

      [Unit]
      Description=MySQL NDB Data Node Daemon
      After=network.target auditd.service
      
      [Service]
      Type=forking
      ExecStart=/usr/sbin/ndbd
      ExecReload=/bin/kill -HUP $MAINPID
      KillMode=process
      Restart=on-failure
      
      [Install]
      WantedBy=multi-user.target
      

      Here, we’ve added a minimal set of options instructing systemd on how to start, stop and restart the ndbd process. To learn more about the options used in this unit configuration, consult the systemd manual.

      Save and close the file.

      Now, reload systemd’s manager configuration using daemon-reload:

      • sudo systemctl daemon-reload

      We’ll now enable the service we just created so that the data node daemon starts on reboot:

      • sudo systemctl enable ndbd

      Finally, we’ll start the service:

      • sudo systemctl start ndbd

You can verify that the ndbd data node service is running:

      • sudo systemctl status ndbd

      You should see the following output:

      Output

● ndbd.service - MySQL NDB Data Node Daemon
   Loaded: loaded (/etc/systemd/system/ndbd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2018-07-26 20:56:29 UTC; 8s ago
  Process: 11972 ExecStart=/usr/sbin/ndbd (code=exited, status=0/SUCCESS)
 Main PID: 11984 (ndbd)
    Tasks: 46 (limit: 4915)
   CGroup: /system.slice/ndbd.service
           ├─11984 /usr/sbin/ndbd
           └─11987 /usr/sbin/ndbd

This indicates that the ndbd MySQL Cluster data node daemon is now running as a systemd service. Your data node should now be fully functional and able to connect to the MySQL Cluster Manager.

      Once you’ve finished setting up the first data node, repeat the steps in this section on the other data node (198.51.100.1 in this tutorial).

      Step 3 — Configuring and Starting the MySQL Server and Client

      A standard MySQL server, such as the one available in Ubuntu's APT repository, does not support the MySQL Cluster engine NDB. This means we need to install the custom SQL server packaged with the other MySQL Cluster software we’ve installed in this tutorial.

      We’ll once again grab the MySQL Cluster Server binary from the official MySQL Cluster download page.

      From this page, under Select Operating System, choose Ubuntu Linux. Then, under Select OS Version, choose Ubuntu Linux 18.04 (x86, 64-bit).

      Scroll down until you see DEB Bundle, and click on the Download link (it should be the first one in the list). You will be brought to a Begin Your Download page. Here, right click on No thanks, just start my download. and copy the link to the .tar archive.

      Now, log in to the Cluster Manager Droplet (in this tutorial, 198.51.100.2), and download this .tar archive (recall that we are installing MySQL Server on the same node as our Cluster Manager – in a production setting you should run these daemons on different nodes):

      • cd ~
      • wget https://dev.mysql.com/get/Downloads/MySQL-Cluster-7.6/mysql-cluster_7.6.6-1ubuntu18.04_amd64.deb-bundle.tar

      We’ll now extract this archive into a directory called install. First, create the directory:
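
• mkdir install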

      Now extract the archive into this directory:

      • tar -xvf mysql-cluster_7.6.6-1ubuntu18.04_amd64.deb-bundle.tar -C install/

      Move into this directory, containing the extracted MySQL Cluster component binaries:
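
• cd install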

      Before we install the MySQL server binary, we need to install a couple of dependencies:

      • sudo apt update
      • sudo apt install libaio1 libmecab2

Now, we need to install the MySQL Cluster dependencies, bundled in the tar archive we just extracted:

      • sudo dpkg -i mysql-common_7.6.6-1ubuntu18.04_amd64.deb
      • sudo dpkg -i mysql-cluster-community-client_7.6.6-1ubuntu18.04_amd64.deb
      • sudo dpkg -i mysql-client_7.6.6-1ubuntu18.04_amd64.deb
      • sudo dpkg -i mysql-cluster-community-server_7.6.6-1ubuntu18.04_amd64.deb

      When installing mysql-cluster-community-server, a configuration prompt should appear, asking you to set a password for the root account of your MySQL database. Choose a strong, secure password, and hit <Ok>. Re-enter this root password when prompted, and hit <Ok> once again to complete installation.

      We can now install the MySQL server binary using dpkg:

• sudo dpkg -i mysql-server_7.6.6-1ubuntu18.04_amd64.deb

      We now need to configure this MySQL server installation.

      The configuration for MySQL Server is stored in the default /etc/mysql/my.cnf file.

      Open this configuration file using your favorite editor:

      • sudo nano /etc/mysql/my.cnf

      You should see the following text:

      /etc/mysql/my.cnf

      # Copyright (c) 2015, 2016, Oracle and/or its affiliates. All rights reserved.
      #
      # This program is free software; you can redistribute it and/or modify
      # it under the terms of the GNU General Public License as published by
      # the Free Software Foundation; version 2 of the License.
      #
      # This program is distributed in the hope that it will be useful,
      # but WITHOUT ANY WARRANTY; without even the implied warranty of
      # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
      # GNU General Public License for more details.
      #
      # You should have received a copy of the GNU General Public License
      # along with this program; if not, write to the Free Software
      # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
      
      #
      # The MySQL Cluster Community Server configuration file.
      #
      # For explanations see
      # http://dev.mysql.com/doc/mysql/en/server-system-variables.html
      
      # * IMPORTANT: Additional settings that can override those from this file!
      #   The files must end with '.cnf', otherwise they'll be ignored.
      #
      !includedir /etc/mysql/conf.d/
      !includedir /etc/mysql/mysql.conf.d/
      

      Append the following configuration to it:

      /etc/mysql/my.cnf

      . . .
      [mysqld]
      # Options for mysqld process:
      ndbcluster                      # run NDB storage engine
      
      [mysql_cluster]
      # Options for NDB Cluster processes:
      ndb-connectstring=198.51.100.2  # location of management server
      

      Save and exit the file.

      Restart the MySQL server for these changes to take effect:

      • sudo systemctl restart mysql

      MySQL by default should start automatically when your server reboots. If it doesn’t, the following command should fix this:

      • sudo systemctl enable mysql

      A SQL server should now be running on your Cluster Manager / MySQL Server Droplet.

      In the next step, we’ll run a few commands to verify that our MySQL Cluster installation is functioning as expected.

      Step 4 — Verifying MySQL Cluster Installation

      To verify your MySQL Cluster installation, log in to your Cluster Manager / SQL Server node.

      We’ll open the MySQL client from the command line and connect to the root account we just configured by entering the following command:
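
• mysql -u root -p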

      Enter your password when prompted, and hit ENTER.

      You should see an output similar to:

      Output

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.22-ndb-7.6.6 MySQL Cluster Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

      Once inside the MySQL client, run the following command:

• SHOW ENGINE NDB STATUS \G

      You should now see information about the NDB cluster engine, beginning with connection parameters:

      Output

*************************** 1. row ***************************
  Type: ndbcluster
  Name: connection
Status: cluster_node_id=4, connected_host=198.51.100.2, connected_port=1186, number_of_data_nodes=2, number_of_ready_data_nodes=2, connect_count=0
. . .

      This indicates that you’ve successfully connected to your MySQL Cluster.

Notice here the value of number_of_ready_data_nodes: 2. This redundancy allows your MySQL cluster to continue operating even if one of the data nodes fails. It also means that your SQL queries will be load balanced across the two data nodes.

      You can try shutting down one of the data nodes to test cluster stability. The simplest test would be to restart the data node Droplet in order to fully test the recovery process. You should see the value of number_of_ready_data_nodes change to 1 and back up to 2 again as the node reboots and reconnects to the Cluster Manager.

      To exit the MySQL prompt, simply type quit or press CTRL-D.

      This is the first test that indicates that the MySQL cluster, server, and client are working. We'll now go through an additional test to confirm that the cluster is functioning properly.

      Open the Cluster management console, ndb_mgm using the command:
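
• ndb_mgm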

      You should see the following output:

      Output

-- NDB Cluster -- Management Client --
ndb_mgm>

      Once inside the console enter the command SHOW and hit ENTER:
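
• SHOW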

      You should see the following output:

      Output

Connected to Management Server at: 198.51.100.2:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @198.51.100.0  (mysql-5.7.22 ndb-7.6.6, Nodegroup: 0, *)
id=3    @198.51.100.1  (mysql-5.7.22 ndb-7.6.6, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @198.51.100.2  (mysql-5.7.22 ndb-7.6.6)

[mysqld(API)]   1 node(s)
id=4    @198.51.100.2  (mysql-5.7.22 ndb-7.6.6)

      The above shows that there are two data nodes connected with node-ids 2 and 3. There is also one management node with node-id 1 and one MySQL server with node-id 4. You can display more information about each id by typing its number with the command STATUS as follows:
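
• 2 STATUS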

      The above command shows you the status, MySQL version, and NDB version of node 2:

      Output

      Node 2: started (mysql-5.7.22 ndb-7.6.6)

      To exit the management console type quit, and then hit ENTER.

      The management console is very powerful and gives you many other options for administering the cluster and its data, including creating an online backup. For more information consult the official MySQL documentation.

      At this point, you’ve fully tested your MySQL Cluster installation. The concluding step of this guide shows you how to create and insert test data into this MySQL Cluster.

      Step 5 — Inserting Data into MySQL Cluster

      To demonstrate the cluster’s functionality, let's create a new table using the NDB engine and insert some sample data into it. Note that in order to use cluster functionality, the engine must be specified explicitly as NDB. If you use InnoDB (default) or any other engine, you will not make use of the cluster.

      First, let's create a database called clustertest with the command:

      • CREATE DATABASE clustertest;

      Next, switch to the new database:
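
• USE clustertest;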

      Now, create a simple table called test_table like this:

      • CREATE TABLE test_table (name VARCHAR(20), value VARCHAR(20)) ENGINE=ndbcluster;

      We have explicitly specified the engine ndbcluster in order to make use of the cluster.

      Now, we can start inserting data using this SQL query:

      • INSERT INTO test_table (name,value) VALUES('some_name','some_value');

      To verify that the data has been inserted, run the following select query:

      • SELECT * FROM test_table;

      When you insert data into and select data from an ndbcluster table, the cluster load balances queries between all the available data nodes. This improves the stability and performance of your MySQL database installation.

      You can also set the default storage engine to ndbcluster in the my.cnf file that we edited previously. If you do this, you won’t need to specify the ENGINE option when creating tables. To learn more, consult the MySQL Reference Manual.
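
For example, here is a minimal sketch of that change, assuming you append the directive to the [mysqld] section you added earlier:

/etc/mysql/my.cnf

. . .
[mysqld]
# Options for mysqld process:
ndbcluster                          # run NDB storage engine
default-storage-engine=ndbcluster   # assumption: make NDB the default engine for new tables
. . .

After saving the file, restart MySQL (sudo systemctl restart mysql) so the new default takes effect; any table created without an explicit ENGINE option will then use ndbcluster.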

      Conclusion

      In this tutorial, we’ve demonstrated how to set up and configure a MySQL Cluster on Ubuntu 18.04 servers. It’s important to note that this is a minimal, pared-down architecture used to demonstrate the installation procedure, and there are many advanced options and features worth learning about before deploying MySQL Cluster in production (for example, performing backups). To learn more, consult the official MySQL Cluster documentation.


