      How To Back Up, Import, and Migrate Your Apache Kafka Data on CentOS 7


      The author selected the Tech Education Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Backing up your Apache Kafka data is an important practice that will help you recover from unintended data loss or bad data added to the cluster due to user error. Data dumps of cluster and topic data are an efficient way to perform backups and restorations.

      Importing and migrating your backed up data to a separate server is helpful in situations where your Kafka instance becomes unusable due to server hardware or networking failures and you need to create a new Kafka instance with your old data. Importing and migrating backed up data is also useful when you are moving the Kafka instance to an upgraded or downgraded server due to a change in resource usage.

      In this tutorial, you will back up, import, and migrate your Kafka data on a single CentOS 7 installation as well as on multiple CentOS 7 installations on separate servers. ZooKeeper is a critical component of Kafka’s operation. It stores information about cluster state such as consumer data, partition data, and the state of other brokers in the cluster. As such, you will also back up ZooKeeper’s data in this tutorial.

      Prerequisites

      To follow along, you will need:

      • A CentOS 7 server with at least 4GB of RAM and a non-root sudo user set up by following this tutorial.
      • A CentOS 7 server with Apache Kafka installed, to act as the source of the backup. Follow the How To Install Apache Kafka on CentOS 7 guide to set up your Kafka installation, if Kafka isn’t already installed on the source server.
      • OpenJDK 8 installed on the server. To install this version, follow these instructions on installing specific versions of OpenJDK.
      • Optional for Step 7 — Another CentOS 7 server with Apache Kafka installed, to act as the destination of the backup. Follow the article link in the previous prerequisite to install Kafka on the destination server. This prerequisite is required only if you are moving your Kafka data from one server to another. If you want to back up and import your Kafka data to a single server, you can skip this prerequisite.

      Step 1 — Creating a Test Topic and Adding Messages

      A Kafka message is the most basic unit of data storage in Kafka and is the entity that you will publish to and subscribe from Kafka. A Kafka topic is like a container for a group of related messages. When you subscribe to a particular topic, you will receive only messages that were published to that particular topic. In this section you will log in to the server that you would like to back up (the source server) and add a Kafka topic and a message so that you have some data populated for the backup.

      This tutorial assumes you have installed Kafka in the home directory of the kafka user (/home/kafka/kafka). If your installation is in a different directory, replace the ~/kafka portion of the following commands, and of the commands throughout the rest of this tutorial, with your Kafka installation's path.

      SSH into the source server by executing:

      • ssh sammy@source_server_ip

      Run the following command to log in as the kafka user:

      • sudo -iu kafka

      Create a topic named BackupTopic using the kafka-topics.sh shell utility file in your Kafka installation's bin directory, by typing:

      • ~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic BackupTopic

      Publish the string "Test Message 1" to the BackupTopic topic by using the ~/kafka/bin/kafka-console-producer.sh shell utility script.

      If you would like to add additional messages here, you can do so now.

      • echo "Test Message 1" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic BackupTopic > /dev/null

      The ~/kafka/bin/kafka-console-producer.sh file allows you to publish messages directly from the command line. Typically, you would publish messages using a Kafka client library from within your program, but since that involves different setups for different programming languages, you can use the shell script as a language-independent way of publishing messages during testing or while performing administrative tasks. The --topic flag specifies the topic that you will publish the message to.
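
      For example, if you want to publish a second message, you can pipe another string to the same script (an optional illustration; any string will do):

      • echo "Test Message 2" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic BackupTopic > /dev/null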

      Next, verify that the kafka-console-producer.sh script has published the message(s) by running the following command:

      • ~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic BackupTopic --from-beginning

      The ~/kafka/bin/kafka-console-consumer.sh shell script starts the consumer. Once started, it will subscribe to the topic that you published the "Test Message 1" message to in the previous command. The --from-beginning flag allows consuming messages that were published before the consumer was started. Without the flag, only messages published after the consumer was started will appear. On running the command, you will see the following output in the terminal:

      Output

      Test Message 1

      Press CTRL+C to stop the consumer.

      You've created some test data and verified that it's persisted. Now you can back up the state data in the next section.

      Step 2 — Backing Up the ZooKeeper State Data

      Before backing up the actual Kafka data, you need to back up the cluster state stored in ZooKeeper.

      ZooKeeper stores its data in the directory specified by the dataDir field in the ~/kafka/config/zookeeper.properties configuration file. You need to read the value of this field to determine the directory to back up. By default, dataDir points to the /tmp/zookeeper directory. If the value is different in your installation, replace /tmp/zookeeper with that value in the following commands.
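
      If you want to read just this value quickly, you can search for it with grep (an optional shortcut):

      • grep '^dataDir' ~/kafka/config/zookeeper.properties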

      Here is an example output of the ~/kafka/config/zookeeper.properties file:

      ~/kafka/config/zookeeper.properties

      ...
      ...
      ...
      # the directory where the snapshot is stored.
      dataDir=/tmp/zookeeper
      # the port at which the clients will connect
      clientPort=2181
      # disable the per-ip limit on the number of connections since this is a non-production config
      maxClientCnxns=0
      ...
      ...
      ...
      

      Now that you have the path to the directory, you can create a compressed archive file of its contents. Compressed archives are a better option than regular archives because they save disk space. Run the following command:

      • tar -czf /home/kafka/zookeeper-backup.tar.gz /tmp/zookeeper/*

      You can safely ignore the command's output, tar: Removing leading / from member names.

      The -c and -z flags tell tar to create an archive and apply gzip compression to the archive. The -f flag specifies the name of the output compressed archive file, which is zookeeper-backup.tar.gz in this case.

      You can run ls in your current directory to see zookeeper-backup.tar.gz as part of your output.
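
      If you would like to inspect what the archive captured without extracting it, you can list its contents (this works for any archive you create in this tutorial):

      • tar -tzf /home/kafka/zookeeper-backup.tar.gz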

      You have now successfully backed up the ZooKeeper data. In the next section, you will back up the actual Kafka data.

      Step 3 — Backing Up the Kafka Topics and Messages

      In this section, you will back up Kafka's data directory into a compressed tar file like you did for ZooKeeper in the previous step.

      Kafka stores topics, messages, and internal files in the directory that the log.dirs field specifies in the ~/kafka/config/server.properties configuration file. You need to read the value of this field to determine the directory to back up. By default and in your current installation, log.dirs points to the /tmp/kafka-logs directory. If the value is different in your installation, replace /tmp/kafka-logs in the following commands with the correct value.
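
      As with the ZooKeeper configuration, you can read just this value with grep if you prefer (an optional shortcut):

      • grep '^log.dirs' ~/kafka/config/server.properties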

      Here is an example output of the ~/kafka/config/server.properties file:

      ~/kafka/config/server.properties

      ...
      ...
      ...
      ############################# Log Basics #############################
      
      # A comma separated list of directories under which to store log files
      log.dirs=/tmp/kafka-logs
      
      # The default number of log partitions per topic. More partitions allow greater
      # parallelism for consumption, but this will also result in more files across
      # the brokers.
      num.partitions=1
      
      # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
      # This value is recommended to be increased for installations with data dirs located in RAID array.
      num.recovery.threads.per.data.dir=1
      ...
      ...
      ...
      

      First, stop the Kafka service so that the data in the log.dirs directory is in a consistent state when creating the archive with tar. To do this, return to your server's non-root user by typing exit and then run the following command:

      • sudo systemctl stop kafka
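
      If you want to confirm that the service has stopped before continuing, you can check its status while you are still acting as your non-root sudo user (an optional sanity check); the output should report the service as inactive (dead):

      • sudo systemctl status kafka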

      After stopping the Kafka service, log back in as your kafka user with:

      • sudo -iu kafka

      It is necessary to stop and start the Kafka and ZooKeeper services as your non-root sudo user because the Apache Kafka installation prerequisite restricted the kafka user as a security precaution. That step disables sudo access for the kafka user, so any sudo commands run as the kafka user will fail.

      Now, create a compressed archive file of the directory's contents by running the following command:

      • tar -czf /home/kafka/kafka-backup.tar.gz /tmp/kafka-logs/*

      Once again, you can safely ignore the command's output (tar: Removing leading / from member names).

      You can run ls in the current directory to see kafka-backup.tar.gz as part of the output.

      If you do not want to restore the data immediately, you can start the Kafka service again by typing exit to switch to your non-root sudo user and then running:

      • sudo systemctl start kafka

      Log back in as your kafka user:

      • sudo -iu kafka

      You have successfully backed up the Kafka data. You can now proceed to the next section, where you will be restoring the cluster state data stored in ZooKeeper.

      Step 4 — Restoring the ZooKeeper Data

      In this section you will restore the cluster state data that Kafka creates and manages internally when the user performs operations such as creating a topic, adding/removing additional nodes, and adding and consuming messages. You will restore the data to your existing source installation by deleting the ZooKeeper data directory and restoring the contents of the zookeeper-backup.tar.gz file. If you want to restore data to a different server, see Step 7.

      You need to stop the Kafka and ZooKeeper services as a precaution against the data directories receiving invalid data during the restoration process.

      First, stop the Kafka service by typing exit, to switch to your non-root sudo user, and then running:

      • sudo systemctl stop kafka

      Next, stop the ZooKeeper service:

      • sudo systemctl stop zookeeper

      Log back in as your kafka user:

      • sudo -iu kafka

      You can then safely delete the existing cluster data directory's contents with the following command:

      • rm -r /tmp/zookeeper/*

      Now restore the data you backed up in Step 2:

      • tar -C /tmp/zookeeper -xzf /home/kafka/zookeeper-backup.tar.gz --strip-components 2

      The -C flag tells tar to change to the directory /tmp/zookeeper before extracting the data. You specify the --strip-components 2 option to make tar extract the archive's contents in /tmp/zookeeper/ itself and not in another directory (such as /tmp/zookeeper/tmp/zookeeper/) inside of it.
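
      If you want to confirm that the extraction worked, you can list the directory; you should see the data ZooKeeper had written before the backup (typically a version-2 subdirectory):

      • ls /tmp/zookeeper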

      You have restored the cluster state data successfully. Now, you can proceed to the Kafka data restoration process in the next section.

      Step 5 — Restoring the Kafka Data

      In this section you will restore the backed up Kafka data to your existing source installation (or the destination server if you have followed the optional Step 7) by deleting the Kafka data directory and restoring the compressed archive file. This will allow you to verify that restoration works successfully.

      You can safely delete the existing Kafka data directory's contents with the following command:

      • rm -r /tmp/kafka-logs/*

      Now that you have deleted the data, your Kafka installation resembles a fresh installation with no topics or messages present in it. To restore your backed up data, extract the files by running:

      • tar -C /tmp/kafka-logs -xzf /home/kafka/kafka-backup.tar.gz --strip-components 2

      The -C flag tells tar to change to the directory /tmp/kafka-logs before extracting the data. You specify the --strip-components 2 option to ensure that the archive's contents are extracted in /tmp/kafka-logs/ itself and not in another directory (such as /tmp/kafka-logs/kafka-logs/) inside of it.
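
      Similarly, you can confirm the Kafka extraction by listing the directory; you should see a BackupTopic-0 folder for the topic's single partition, along with Kafka's internal files:

      • ls /tmp/kafka-logs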

      Now that you have extracted the data successfully, you can start the Kafka and ZooKeeper services again by typing exit, to switch to your non-root sudo user, and then executing:

      • sudo systemctl start kafka

      Start the ZooKeeper service with:

      • sudo systemctl start zookeeper

      Log back in as your kafka user:

      • sudo -iu kafka

      You have restored the Kafka data. You can now move on to verifying that the restoration was successful in the next section.

      Step 6 — Verifying the Restoration

      To test the restoration of the Kafka data, you will consume messages from the topic you created in Step 1.

      Wait a few minutes for Kafka to start up and then execute the following command to read messages from the BackupTopic:

      • ~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic BackupTopic --from-beginning

      If you get a warning like the following, you need to wait for Kafka to start fully:

      Output

      [2018-09-13 15:52:45,234] WARN [Consumer clientId=consumer-1, groupId=console-consumer-87747] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

      Retry the previous command in another few minutes or run sudo systemctl restart kafka as your non-root sudo user. If there are no issues in the restoration, you will see the following output:

      Output

      Test Message 1

      If you do not see this message, check whether you missed any commands in the previous sections and execute them.

      Now that you have verified the restored Kafka data, you have successfully backed up and restored your data on a single Kafka installation. You can continue to Step 7 to see how to migrate the cluster and topic data to an installation on another server.

      Step 7 — Migrating and Restoring the Backup to Another Kafka Server (Optional)

      In this section, you will migrate the backed up data from the source Kafka server to the destination Kafka server. To do so, you will first use the scp command to download the compressed tar.gz files to your local system. You will then use scp again to push the files to the destination server. Once the files are present in the destination server, you can follow the steps used previously to restore the backup and verify that the migration is successful.

      You are downloading the backup files to your local machine and then uploading them to the destination server, instead of copying them directly from the source server to the destination server, because the destination server does not have your source server's SSH key in its /home/sammy/.ssh/authorized_keys file, so the two servers cannot connect to each other directly. Your local machine, however, can connect to both servers, saving you the additional step of setting up SSH access from the source server to the destination server.
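
      Alternatively, if your local scp supports the -3 flag, you can copy a file between the two servers in a single command, routing the transfer through your local machine (a sketch of the same idea; the two-step download and upload shown below works everywhere):

      • scp -3 sammy@source_server_ip:/home/kafka/zookeeper-backup.tar.gz sammy@destination_server_ip:/home/sammy/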

      Download the zookeeper-backup.tar.gz and kafka-backup.tar.gz files to your local machine by executing:

      • scp sammy@source_server_ip:/home/kafka/zookeeper-backup.tar.gz .

      You will see output similar to:

      Output

      zookeeper-backup.tar.gz 100% 68KB 128.0KB/s 00:00

      Now run the following command to download the kafka-backup.tar.gz file to your local machine:

      • scp sammy@source_server_ip:/home/kafka/kafka-backup.tar.gz .

      You will see the following output:

      Output

      kafka-backup.tar.gz 100% 1031KB 488.3KB/s 00:02

      Run ls in the current directory of your local machine and you will see both of the files:

      Output

      kafka-backup.tar.gz zookeeper-backup.tar.gz

      Run the following command to transfer the zookeeper-backup.tar.gz file to /home/sammy/ on the destination server:

      • scp zookeeper-backup.tar.gz sammy@destination_server_ip:/home/sammy/zookeeper-backup.tar.gz

      Now run the following command to transfer the kafka-backup.tar.gz file to /home/sammy/ on the destination server:

      • scp kafka-backup.tar.gz sammy@destination_server_ip:/home/sammy/kafka-backup.tar.gz

      You have uploaded the backup files to the destination server successfully. Since the files are in the /home/sammy/ directory and the kafka user does not have the correct permissions to access them there, you can move the files to the /home/kafka/ directory and change their ownership.

      SSH into the destination server by executing:

      • ssh sammy@destination_server_ip

      Now move zookeeper-backup.tar.gz to /home/kafka/ by executing:

      • sudo mv zookeeper-backup.tar.gz /home/kafka/zookeeper-backup.tar.gz

      Similarly, run the following command to move kafka-backup.tar.gz to /home/kafka/:

      • sudo mv kafka-backup.tar.gz /home/kafka/kafka-backup.tar.gz

      Change the owner of the backup files by running the following command:

      • sudo chown kafka /home/kafka/zookeeper-backup.tar.gz /home/kafka/kafka-backup.tar.gz

      The previous mv and chown commands will not display any output.
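
      If you would like to verify the ownership change, you can list the files and check the owner column (an optional check):

      • ls -l /home/kafka/zookeeper-backup.tar.gz /home/kafka/kafka-backup.tar.gz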

      Now that the backup files are present in the destination server at the correct directory, follow the commands listed in Steps 4 to 6 of this tutorial to restore and verify the data for your destination server.

      Conclusion

      In this tutorial, you backed up, imported, and migrated your Kafka topics and messages from both the same installation and installations on separate servers. If you would like to learn more about other useful administrative tasks in Kafka, you can consult the operations section of Kafka's official documentation.

      To store backed up files such as zookeeper-backup.tar.gz and kafka-backup.tar.gz remotely, you can explore DigitalOcean Spaces. If Kafka is the only service running on your server, you can also explore other backup methods such as full instance backups.



      Import Existing Infrastructure to Terraform


      Updated by Linode. Contributed by Linode.

      Terraform is an orchestration tool that uses declarative code to build, change, and version infrastructure that is made up of server instances and services. You can use Linode’s official Terraform provider to interact with Linode services. Existing Linode infrastructure can be imported and brought under Terraform management. This guide will describe how to import existing Linode infrastructure into Terraform using the official Linode provider plugin.

      Before You Begin

      1. Terraform and the Linode Terraform provider should be installed in your development environment. You should also have a basic understanding of Terraform resources. To install and learn about Terraform, read our Use Terraform to Provision Linode Environments guide.

      2. To use Terraform you must have a valid API access token. For more information on creating a Linode API access token, visit our Getting Started with the Linode API guide.

      3. This guide uses the Linode CLI to retrieve information about the Linode infrastructure you will import to Terraform. For more information on the setup, installation, and usage of the Linode CLI, check out the Using the Linode CLI guide.

      Terraform’s Import Command

      Throughout this guide the terraform import command will be used to import Linode resources. At the time of writing this guide, the import command does not generate a Terraform resource configuration. Instead, it imports your existing resources into Terraform’s state.

      State is Terraform’s stored JSON mapping of your current Linode resources to their configurations. You can access and use the information provided by the state to manually create a corresponding resource configuration file and manage your existing Linode infrastructure with Terraform.
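
      For example, after an import you can list every resource that Terraform is currently tracking in its state with the state subcommand (a quick way to confirm that an import landed):

        terraform state list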

      Additionally, there is no current way to import more than one resource at a time. All resources must be individually imported.

      Caution

      When importing your infrastructure to Terraform, failure to accurately provide your Linode service’s ID information can result in the unwanted alteration or destruction of the service. Please follow the instructions provided in this guide carefully. It might be beneficial to use multiple Terraform Workspaces to manage separate testing and production infrastructures.

      Import a Linode to Terraform

      Retrieve Your Linode’s ID

      1. Using the Linode CLI, retrieve a list of all your Linode instances and find the ID of the Linode you would like to manage under Terraform:

        linode-cli linodes list --json --pretty
        
          
        [
          {
            "id": 11426126,
            "image": "linode/debian9",
            "ipv4": [
            "192.0.2.2"
            ],
            "label": "terraform-import",
            "region": "us-east",
            "status": "running",
            "type": "g6-standard-1"
          }
        ]
        
        

        This command will return a list of your existing Linodes in JSON format. From the list, find the Linode you would like to import and copy down its corresponding id. In this example, the Linode’s ID is 11426126. You will use your Linode’s ID to import your Linode to Terraform.
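
        If you have many Linodes and the jq utility installed, you can narrow the output to just the IDs and labels (an optional convenience; it assumes jq is available on your machine):

          linode-cli linodes list --json | jq '.[] | {id, label}'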

      Create An Empty Resource Configuration

      1. Ensure you are in your Terraform project directory. Create a Terraform configuration file to manage the Linode instance you will import in the next section. Your file can be named anything you like, but it must end in .tf. Add a Linode provider block with your API access token and an empty linode_instance resource configuration block in the file:

        Note

        The example resource block defines example_label as the label. This can be changed to any value you prefer. This label is used to reference your Linode resource configuration within Terraform, and does not have to be the same label originally assigned to the Linode when it was created outside of Terraform.

        linode_import.tf
        
        provider "linode" {
            token = "your_API_access_token"
        }
        
        resource "linode_instance" "example_label" {}

      Import Your Linode to Terraform

      1. Run the import command, supplying the linode_instance resource’s label and the Linode’s ID that was retrieved in the Retrieve Your Linode’s ID section:

        terraform import linode_instance.example_label linodeID
        

        You should see a similar output:

          
        linode_instance.example_label: Importing from ID "11426126"...
        linode_instance.example_label: Import complete!
          Imported linode_instance (ID: 11426126)
        linode_instance.example_label: Refreshing state... (ID: 11426126)
        
        Import successful!
        
        The resources that were imported are shown above. These resources are now in
        your Terraform state and will henceforth be managed by Terraform.
        
        

        This command will create a terraform.tfstate file with information about your Linode. You will use this information to fill out your resource configuration.

      2. To view the information created by terraform import, run the show command. This command will display a list of key-value pairs representing information about the imported Linode instance.

        terraform show
        

        You should see an output similar to the following:

          
        linode_instance.example_label:
          id = 11426126
          alerts.# = 1
          alerts.0.cpu = 90
          alerts.0.io = 10000
          alerts.0.network_in = 10
          alerts.0.network_out = 10
          alerts.0.transfer_quota = 80
          backups.# = 1
          boot_config_label = My Debian 9 Disk Profile
          config.# = 1
          config.0.comments =
          config.0.devices.# = 1
          config.0.devices.0.sda.# = 1
          config.0.devices.0.sda.0.disk_id = 24170011
          config.0.devices.0.sda.0.disk_label = Debian 9 Disk
          config.0.devices.0.sda.0.volume_id = 0
          config.0.devices.0.sdb.# = 1
          config.0.devices.0.sdb.0.disk_id = 24170012
          config.0.devices.0.sdb.0.disk_label = 512 MB Swap Image
          config.0.devices.0.sdb.0.volume_id = 0
          config.0.devices.0.sdc.# = 0
          config.0.devices.0.sdd.# = 0
          config.0.devices.0.sde.# = 0
          config.0.devices.0.sdf.# = 0
          config.0.devices.0.sdg.# = 0
          config.0.devices.0.sdh.# = 0
          config.0.helpers.# = 1
          config.0.helpers.0.devtmpfs_automount = true
          config.0.helpers.0.distro = true
          config.0.helpers.0.modules_dep = true
          config.0.helpers.0.network = true
          config.0.helpers.0.updatedb_disabled = true
          config.0.kernel = linode/grub2
          config.0.label = My Debian 9 Disk Profile
          config.0.memory_limit = 0
          config.0.root_device = /dev/root
          config.0.run_level = default
          config.0.virt_mode = paravirt
          disk.# = 2
          disk.0.authorized_keys.# = 0
          disk.0.filesystem = ext4
          disk.0.id = 24170011
          disk.0.image =
          disk.0.label = Debian 9 Disk
          disk.0.read_only = false
          disk.0.root_pass =
          disk.0.size = 50688
          disk.0.stackscript_data.% = 0
          disk.0.stackscript_id = 0
          disk.1.authorized_keys.# = 0
          disk.1.filesystem = swap
          disk.1.id = 24170012
          disk.1.image =
          disk.1.label = 512 MB Swap Image
          disk.1.read_only = false
          disk.1.root_pass =
          disk.1.size = 512
          disk.1.stackscript_data.% = 0
          disk.1.stackscript_id = 0
          group = Terraform
          ip_address = 192.0.2.2
          ipv4.# = 1
          ipv4.1835604989 = 192.0.2.2
          ipv6 = 2600:3c03::f03c:91ff:fef6:3ebe/64
          label = terraform-import
          private_ip = false
          region = us-east
          specs.# = 1
          specs.0.disk = 51200
          specs.0.memory = 2048
          specs.0.transfer = 2000
          specs.0.vcpus = 1
          status = running
          swap_size = 512
          type = g6-standard-1
          watchdog_enabled = true
        
        

        You will use this information in the next section.

        Note

        There is a current bug in the Linode Terraform provider that causes the Linode’s root_device configuration to display an import value of /dev/root, instead of /dev/sda. This is visible in the example output above: config.0.root_device = /dev/root. However, the correct disk, /dev/sda, is in fact targeted. For this reason, when running the terraform plan or the terraform apply commands, the output will display config.0.root_device: "/dev/root" => "/dev/sda".

        You can follow the corresponding GitHub issue for more details.

      Fill In Your Linode’s Configuration Data

      As mentioned in the Terraform’s Import Command section, you must manually create your resource configurations when importing existing infrastructure.

      1. Fill in the configuration values for the linode_instance resource block. In the example below, the necessary values were collected from the output of the terraform show command applied in Step 2 of the Import Your Linode to Terraform section. The file’s comments indicate the corresponding keys used to determine the values for the linode_instance configuration block.

        linode_instance_import.tf
        
        provider "linode" {
            token = "a12b3c4e..."
        }
        
        resource "linode_instance" "example_label" {
            label = "terraform-import" #label
            region = "us-east"         #region
            type = "g6-standard-1"     #type
            config {
                label = "My Debian 9 Disk Profile"     #config.0.label
                kernel = "linode/grub2"                #config.0.kernel
                root_device = "/dev/sda"               #config.0.root_device
                devices {
                    sda = {
                        disk_label = "Debian 9 Disk"    #config.0.devices.0.sda.0.disk_label
                    }
                    sdb = {
                        disk_label = "512 MB Swap Image" #config.0.devices.0.sdb.0.disk_label
                    }
                }
            }
            disk {
                label = "Debian 9 Disk"      #disk.0.label
                size = "50688"               #disk.0.size
            }
            disk {
                label = "512 MB Swap Image"  #disk.1.label
                size = "512"                 #disk.1.size
            }
        }
            

        Note

        If your Linode uses more than two disks (for instance, if you have attached a Block Storage Volume), you will need to add those disks to your Linode resource configuration block. In order to add a disk, you must add the disk to the devices stanza and create an additional disk stanza.

        Note

        If you have more than one configuration profile, you must choose which profile to boot from with the boot_config_label argument. For example:

        resource "linode_instance" "example_label" {
            boot_config_label = "My Debian 9 Disk Profile"
        ...
        
      2. To check for errors in your configuration, run the plan command:

        terraform plan
        

        terraform plan shows you the changes that would take place if you were to apply the configurations with a terraform apply. Running terraform plan is a good way to determine if the configuration you provided is exact enough for Terraform to take over the management of your Linode.

        Note

        Running terraform plan will display any changes that will be applied to your existing infrastructure based on your configuration file(s). However, you will not be notified about the addition and removal of disks with terraform plan. For this reason, it is vital that the values you include in your linode_instance resource configuration block match the values generated from running the terraform show command.

      3. Once you have verified the configurations you provided in the linode_instance resource block, you are ready to begin managing your Linode instance with Terraform. Any changes or updates can be made by updating your linode_instance_import.tf file, verifying the changes with the terraform plan command, and then applying them with the terraform apply command.

        For more available configuration options, visit the Linode Instance Terraform documentation.

      Import a Domain to Terraform

      Retrieve Your Domain’s ID

      1. Using the Linode CLI, retrieve a list of all your domains to find the ID of the domain you would like to manage under Terraform:

        linode-cli domains list --json --pretty
        

        You should see output like the following:

          
        [
          {
            "domain": "import-example.com",
            "id": 1157521,
            "soa_email": "[email protected]",
            "status": "active",
            "type": "master"
          }
        ]
        
        

        Find the domain you would like to import and copy down the ID. You will need this ID to import your domain to Terraform.

      Create an Empty Resource Configuration

      1. Ensure you are in your Terraform project directory. Create a Terraform configuration file to manage the domain you will import in the next section. Your file can be named anything you like, but must end in .tf. Add a Linode provider block with your API access token and an empty linode_domain resource configuration block to the file:

        domain_import.tf
        
        provider "linode" {
            token = "Your API Token"
        }
        
        resource "linode_domain" "example_label" {}

      Import Your Domain to Terraform

      1. Run the import command, supplying the linode_domain resource’s label, and the domain ID that was retrieved in the Retrieve Your Domain’s ID section:

        terraform import linode_domain.example_label domainID
        

        You should see output similar to the following:

          
        linode_domain.example_label: Importing from ID "1157521"...
        linode_domain.example_label: Import complete!
          Imported linode_domain (ID: 1157521)
        linode_domain.example_label: Refreshing state... (ID: 1157521)
        
        Import successful!
        
        The resources that were imported are shown above. These resources are now in
        your Terraform state and will henceforth be managed by Terraform.
        
        

        This command will create a terraform.tfstate file with information about your domain. You will use this information to fill out your resource configuration.

      2. To view the information created by terraform import, run the show command. This command will display a list of key-value pairs representing information about the imported domain:

        terraform show
        

        You should see output like the following:

          
        linode_domain.example_label:
          id = 1157521
          description =
          domain = import-example.com
          expire_sec = 0
          group =
          master_ips.# = 0
          refresh_sec = 0
          retry_sec = 0
          soa_email = [email protected]
          status = active
          ttl_sec = 0
          type = master
        
        

      Fill In Your Domain’s Configuration Data

      As mentioned in the Terraform’s Import Command section, you must manually create your resource configurations when importing existing infrastructure.

      1. Fill in the configuration values for the linode_domain resource block. The necessary values for the example resource configuration file were collected from the output of the terraform show command applied in Step 2 of the Import Your Domain to Terraform section.

        linode_domain_example.tf
        
        provider "linode" {
            token = "1a2b3c..."
        }
        
        resource "linode_domain" "example_label" {
            domain = "import-example.com"
            soa_email = "[email protected]"
            type = "master"
        }
            

        Note

        If your Domain type is slave, you’ll need to include a master_ips argument with its values set to the IP addresses of the master DNS servers for your domain.
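
        A minimal sketch of what that could look like (the IP address here is only a placeholder, not a value from your account):

          resource "linode_domain" "example_label" {
              domain = "import-example.com"
              type = "slave"
              master_ips = ["203.0.113.1"]
          }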

      2. Check for errors in your configuration by running the plan command:

        terraform plan
        

        terraform plan shows you the changes that would take place if you were to apply the configurations with the terraform apply command. Running terraform plan should result in Terraform displaying that no changes are to be made.

      3. Once you have verified the configurations you provided in the linode_domain block, you are ready to begin managing your domain with Terraform. Any changes or updates can be made by updating your linode_domain_example.tf file, verifying the changes with the terraform plan command, and then applying them with the terraform apply command.

        For more available configuration options, visit the Linode Domain Terraform documentation.

      Import a Block Storage Volume to Terraform

      Retrieve Your Block Storage Volume’s ID

      1. Using the Linode CLI, retrieve a list of all your volumes to find the ID of the Block Storage Volume you would like to manage under Terraform:

        linode-cli volumes list --json --pretty
        

        You should see output similar to the following:

          
        [
          {
            "id": 17045,
            "label": "import-example",
            "linode_id": 11426126,
            "region": "us-east",
            "size": 20,
            "status": "active"
          }
        ]
        
        

        Find the Block Storage Volume you would like to import and copy down the ID. You will use this ID to import your volume to Terraform.

      Create an Empty Resource Configuration

      1. Ensure you are in your Terraform project directory. Create a Terraform configuration file to manage the Block Storage Volume you will import in the next section. Your file can be named anything you like, but must end in .tf. Add a Linode provider block with your API access token and an empty linode_volume resource configuration block to the file:

        linode_volume_example.tf
        
        provider "linode" {
            token = "Your API Token"
        }
        
        resource "linode_volume" "example_label" {}

      Import Your Volume to Terraform

      1. Run the import command, supplying the linode_volume resource’s label, and the volume ID that was retrieved in the Retrieve Your Block Storage Volume’s ID section:

        terraform import linode_volume.example_label volumeID
        

        You should see output similar to the following:

          
        linode_volume.example_label: Importing from ID "17045"...
        linode_volume.example_label: Import complete!
          Imported linode_volume (ID: 17045)
        linode_volume.example_label: Refreshing state... (ID: 17045)
        
        Import successful!
        
        The resources that were imported are shown above. These resources are now in
        your Terraform state and will henceforth be managed by Terraform.
        
        

        This command will create a terraform.tfstate file with information about your Volume. You will use this information to fill out your resource configuration.
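        If you want to confirm that the Volume is now tracked in state, the standard terraform state list command prints the resource addresses Terraform is managing; after the import above it should include an entry such as linode_volume.example_label:

        terraform state list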

      2. To view the information created by terraform import, run the show command. This command will display a list of key-value pairs representing information about the imported Volume:

        terraform show
        

        You should see output like the following:

          
        linode_volume.example_label:
          id = 17045
          filesystem_path = /dev/disk/by-id/scsi-0Linode_Volume_import-example
          label = import-example
          linode_id = 11426126
          region = us-east
          size = "20"
          status = active
        
        

      Fill In Your Volume’s Configuration Data

      As mentioned in the Terraform’s Import Command section, you must manually create your resource configurations when importing existing infrastructure.

      1. Fill in the configuration values for the linode_volume resource block. The necessary values for the example resource configuration file were collected from the output of the terraform show command applied in Step 2 of the Import Your Volume to Terraform section:

        linode_volume_example.tf
        
        provider "linode" {
            token = "1a2b3c..."
        }
        
        resource "linode_volume" "example_label" {
            label = "import-example"
            region = "us-east"
            size = "20"
        }
            

        Note

        Though it is not required, it's a good idea to include the size argument in your configuration so that the Volume can be resized through Terraform should you ever choose to expand it. Note that it is not possible to reduce the size of a Volume.
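        For example, to expand the Volume later you could raise the size value in the resource block and then run terraform plan and terraform apply; 40 below is only an illustrative new size in GB:

        resource "linode_volume" "example_label" {
            label = "import-example"
            region = "us-east"
            size = "40"
        }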

      2. Check for errors in your configuration by running the plan command:

        terraform plan
        

        terraform plan shows you the changes that would take place if you were to apply the configurations with the terraform apply command. Running terraform plan should result in Terraform displaying that no changes are to be made.

      3. Once you have verified the configurations you provided in the linode_volume block, you are ready to begin managing your Block Storage Volume with Terraform. Any changes or updates can be made by updating your linode_volume_example.tf file, then verifying the changes with the terraform plan command, and finally applying them with the terraform apply command.

        For more available configuration options, visit the Linode Volume Terraform documentation.

      Import a NodeBalancer to Terraform

      Configuring Linode NodeBalancers with Terraform requires three separate resource configuration blocks: one to create the NodeBalancer, a second for the NodeBalancer Configuration, and a third for the NodeBalancer Nodes.
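      The three blocks are linked by IDs: the Configuration references its parent NodeBalancer through a nodebalancer_id argument, and each Node references both the NodeBalancer and its Configuration. As a rough sketch of that relationship (using Terraform interpolation instead of the hard-coded IDs this guide fills in later):

        resource "linode_nodebalancer" "example_nodebalancer_label" {
            # NodeBalancer arguments, e.g. label and region
        }

        resource "linode_nodebalancer_config" "example_nodebalancer_config_label" {
            nodebalancer_id = "${linode_nodebalancer.example_nodebalancer_label.id}"
        }

        resource "linode_nodebalancer_node" "example_nodebalancer_node_label" {
            nodebalancer_id = "${linode_nodebalancer.example_nodebalancer_label.id}"
            config_id = "${linode_nodebalancer_config.example_nodebalancer_config_label.id}"
            # Node arguments, e.g. label and address
        }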

      Retrieve Your NodeBalancer, NodeBalancer Config, NodeBalancer Node IDs

      1. Using the Linode CLI, retrieve a list of all your NodeBalancers to find the ID of the NodeBalancer you would like to manage under Terraform:

        linode-cli nodebalancers list --json --pretty
        

        You should see output similar to the following:

          
        [
          {
            "client_conn_throttle": 0,
            "hostname": "nb-192-0-2-3.newark.nodebalancer.linode.com",
            "id": 40721,
            "ipv4": "192.0.2.3",
            "ipv6": "2600:3c03:1::68ed:945f",
            "label": "terraform-example",
            "region": "us-east"
          }
        ]
        
        

        Find the NodeBalancer you would like to import and copy down the ID. You will use this ID to import your NodeBalancer to Terraform.

      2. Retrieve your NodeBalancer configuration by supplying the ID of the NodeBalancer you retrieved in the previous step:

        linode-cli nodebalancers configs-list 40721 --json --pretty
        

        You should see output similar to the following:

          
        [
          {
            "algorithm": "roundrobin",
            "check_passive": true,
            "cipher_suite": "recommended",
            "id": 35876,
            "port": 80,
            "protocol": "http",
            "ssl_commonname": "",
            "ssl_fingerprint": "",
            "stickiness": "table"
          }
        ]
        
        

        Copy down the ID of your NodeBalancer configuration; you will use it to import your NodeBalancer configuration to Terraform.

      3. Retrieve a list of Nodes corresponding to your NodeBalancer to find the label and address of your NodeBalancer Nodes. Supply the ID of your NodeBalancer as the first argument and the ID of your NodeBalancer configuration as the second:

        linode-cli nodebalancers nodes-list 40721 35876 --json --pretty
        

        You should see output like the following:

          
        [
          {
            "address": "192.168.214.37:80",
            "id": 327539,
            "label": "terraform-import",
            "mode": "accept",
            "status": "UP",
            "weight": 100
          }
        ]
        
        

        If your NodeBalancer has several backends, your output will list more than one Node. Copy down the ID of each Node; you will use them to import your Nodes to Terraform.
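        If several Nodes are listed, one way to pull out just their IDs is to filter the JSON output with jq (assuming jq is installed), using the same NodeBalancer and configuration IDs as above:

        linode-cli nodebalancers nodes-list 40721 35876 --json | jq '.[].id'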

      Create Empty Resource Configurations

      1. Ensure you are in your Terraform project directory. Create a Terraform configuration file to manage the NodeBalancer you will import in the next section. Your file can be named anything you like, but must end in .tf.

        Add a Linode provider block with your API access token and empty linode_nodebalancer, linode_nodebalancer_config, and linode_nodebalancer_node resource configuration blocks to the file. Be sure to give the resources appropriate labels. These labels will be used to reference the resources locally within Terraform:

        linode_nodebalancer_example.tf
        
        provider "linode" {
            token = "Your API Token"
        }
        
        resource "linode_nodebalancer" "example_nodebalancer_label" {}
        
        resource "linode_nodebalancer_config" "example_nodebalancer_config_label" {}
        
        resource "linode_nodebalancer_node" "example_nodebalancer_node_label" {}

        If you have more than one NodeBalancer Configuration, you will need to supply multiple linode_nodebalancer_config resource blocks with different labels. The same is true for NodeBalancer Nodes: each Node requires its own linode_nodebalancer_node block.
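        For instance, a NodeBalancer with two Configurations, each with one Node, might start from empty blocks like the following; the labels are placeholders you would choose yourself, and each block is then imported with its own terraform import command:

        resource "linode_nodebalancer_config" "example_config_http_label" {}
        resource "linode_nodebalancer_config" "example_config_https_label" {}

        resource "linode_nodebalancer_node" "example_node_http_label" {}
        resource "linode_nodebalancer_node" "example_node_https_label" {}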

      Import Your NodeBalancer, NodeBalancer Configuration, and NodeBalancer Nodes to Terraform

      1. Run the import command for your NodeBalancer, supplying your local label and the ID of your NodeBalancer as the last parameter.

        terraform import linode_nodebalancer.example_nodebalancer_label nodebalancerID
        

        You should see output similar to the following:

          
        linode_nodebalancer.example_nodebalancer_label: Importing from ID "40721"...
        linode_nodebalancer.example_nodebalancer_label: Import complete!
          Imported linode_nodebalancer (ID: 40721)
        linode_nodebalancer.example_nodebalancer_label: Refreshing state... (ID: 40721)
        
        Import successful!
        
        The resources that were imported are shown above. These resources are now in
        your Terraform state and will henceforth be managed by Terraform.
        
        
      2. Run the import command for your NodeBalancer configuration, supplying your local label and, as the last argument, the ID of your NodeBalancer and the ID of your NodeBalancer configuration separated by a comma.

        terraform import linode_nodebalancer_config.example_nodebalancer_config_label nodebalancerID,nodebalancerconfigID
        

        You should see output similar to the following:

          
        linode_nodebalancer_config.example_nodebalancer_config_label: Importing from ID "40721,35876"...
        linode_nodebalancer_config.example_nodebalancer_config_label: Import complete!
          Imported linode_nodebalancer_config (ID: 35876)
        linode_nodebalancer_config.example_nodebalancer_config_label: Refreshing state... (ID: 35876)
        
        Import successful!
        
        The resources that were imported are shown above. These resources are now in
        your Terraform state and will henceforth be managed by Terraform.
        
        
      3. Run the import command for your NodeBalancer Nodes, supplying your local label and, as the last argument, the ID of your NodeBalancer, the ID of your NodeBalancer Configuration, and the ID of your NodeBalancer Node, separated by commas.

        terraform import linode_nodebalancer_node.example_nodebalancer_node_label nodebalancerID,nodebalancerconfigID,nodebalancernodeID
        

        You should see output like the following:

          
        linode_nodebalancer_node.example_nodebalancer_node_label: Importing from ID "40721,35876,327539"...
        linode_nodebalancer_node.example_nodebalancer_node_label: Import complete!
          Imported linode_nodebalancer_node (ID: 327539)
        linode_nodebalancer_node.example_nodebalancer_node_label: Refreshing state... (ID: 327539)
        
        Import successful!
        
        The resources that were imported are shown above. These resources are now in
        your Terraform state and will henceforth be managed by Terraform.
        
        
      4. Running terraform import creates a terraform.tfstate file with information about your NodeBalancer. You will use this information to fill out your resource configuration. To view the information created by terraform import, run the show command:

        terraform show
        

        You should see output like the following:

          
        linode_nodebalancer.example_nodebalancer_label:
          id = 40721
          client_conn_throttle = 0
          created = 2018-11-16T20:21:03Z
          hostname = nb-192-0-2-3.newark.nodebalancer.linode.com
          ipv4 = 192.0.2.3
          ipv6 = 2600:3c03:1::68ed:945f
          label = terraform-example
          region = us-east
          transfer.% = 3
          transfer.in = 0.013627052307128906
          transfer.out = 0.0015048980712890625
          transfer.total = 0.015131950378417969
          updated = 2018-11-16T20:21:03Z
        
        linode_nodebalancer_config.example_nodebalancer_config_label:
          id = 35876
          algorithm = roundrobin
          check = none
          check_attempts = 2
          check_body =
          check_interval = 5
          check_passive = true
          check_path =
          check_timeout = 3
          cipher_suite = recommended
          node_status.% = 2
          node_status.down = 0
          node_status.up = 1
          nodebalancer_id = 40721
          port = 80
          protocol = http
          ssl_commonname =
          ssl_fingerprint =
          ssl_key =
          stickiness = table
        
        linode_nodebalancer_node.example_nodebalancer_node_label:
          id = 327539
          address = 192.168.214.37:80
          config_id = 35876
          label = terraform-import
          mode = accept
          nodebalancer_id = 40721
          status = UP
          weight = 100
        
        

      Fill In Your NodeBalancer’s Configuration Data

      As mentioned in the Terraform’s Import Command section, you must manually create your resource configurations when importing existing infrastructure.

      1. Fill in the configuration values for all three NodeBalancer resource configuration blocks. The necessary values for the example resource configuration file were collected from the output of the terraform show command applied in Step 4 of the Import Your NodeBalancer, NodeBalancer Configuration, and NodeBalancer Nodes to Terraform section:
        linode_nodebalancer_example.tf
      
      provider "linode" {
          token = "1a2b3c..."
      }
      
      resource "linode_nodebalancer" "nodebalancer_import" {
          label = "terraform-example"
          region = "us-east"
      }
      
      resource "linode_nodebalancer_config" "nodebalancer_config_import" {
          nodebalancer_id = "40721"
      }
      
      resource "linode_nodebalancer_node" "nodebalancer_node_import" {
          label = "terraform-import"
          address = "192.168.214.37:80"
          nodebalancer_id = "40721"
          config_id = "35876"
      }
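        If you would like Terraform to track more of the Configuration's settings, you can also copy optional arguments from the terraform show output into the linode_nodebalancer_config block; for example, using the sample values shown above (adjust these to match your own output):

        resource "linode_nodebalancer_config" "example_nodebalancer_config_label" {
            nodebalancer_id = "40721"
            port = 80
            protocol = "http"
            algorithm = "roundrobin"
            stickiness = "table"
            check_passive = true
        }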

      2. Check for errors in your configuration by running the plan command:

        terraform plan
        

        terraform plan shows you the changes that would take place if you were to apply the configurations with the terraform apply command. Running terraform plan should result in Terraform displaying that no changes are to be made.

      3. Once you have verified the configurations you provided in all three NodeBalancer configuration blocks, you are ready to begin managing your NodeBalancers with Terraform. Any changes or updates can be made by updating your linode_nodebalancer_example.tf file, then verifying the changes with the terraform plan command, and finally applying them with the terraform apply command.

        For more available configuration options, visit the Linode NodeBalancer, Linode NodeBalancer Config, and Linode NodeBalancer Node Terraform documentation.

      Next Steps

      You can follow a process similar to what has been outlined in this guide to begin importing other pieces of your Linode infrastructure such as images, SSH keys, access tokens, and StackScripts. Check out the links in the More Information section below for helpful information.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.
