
      Customizing Go Binaries with Build Tags


      Introduction

      In Go, a build tag, or a build constraint, is an identifier added to a piece of code that determines when the file should be included in a package during the build process. This allows you to build different versions of your Go application from the same source code and to toggle between them in a fast and organized manner. Many developers use build tags to improve the workflow of building cross-platform compatible applications, such as programs that require code changes to account for variances between different operating systems. Build tags are also used for integration testing, allowing you to quickly switch between the integrated code and the code with a mock service or stub, and for differing levels of feature sets within an application.

      Let’s take the problem of differing customer feature sets as an example. When writing some applications, you may want to control which features to include in the binary, such as an application that offers Free, Pro, and Enterprise levels. As the customer increases their subscription level in these applications, more features become unlocked and available. To solve this problem, you could maintain separate projects and try to keep them in sync with each other through the use of import statements. While this approach would work, over time it would become tedious and error prone. An alternative approach would be to use build tags.

      In this article, you will use build tags in Go to generate different executable binaries that offer Free, Pro, and Enterprise feature sets of a sample application. Each will have a different set of features available, with the Free version being the default.

      Prerequisites

      To follow the example in this article, you will need:

      Building the Free Version

      Let’s start by building the Free version of the application, as it will be the default when running go build without any build tags. Later on, we will use build tags to selectively add other parts to our program.

      In the src directory, create a folder with the name of your application. This tutorial will use app:

      • mkdir app

      Move into this folder:

      • cd app

      Next, make a new text file in your text editor of choice named main.go:

      • nano main.go

      Now, we’ll define the Free version of the application. Add in the following contents to main.go:

      main.go

      package main
      
      import "fmt"
      
      var features = []string{
        "Free Feature #1",
        "Free Feature #2",
      }
      
      func main() {
        for _, f := range features {
          fmt.Println(">", f)
        }
      }
      

      In this file, we created a program that declares a slice named features, which holds two strings that represent the features of our Free application. The main() function in the application uses a for loop to range through the features slice and print all of the features available to the screen.

      Save and exit the file. Now that this file is saved, we will no longer have to edit it for the rest of the article. Instead we will use build tags to change the features of the binaries we will build from it.

      Build and run the program:

      • go build
      • ./app

      You’ll receive the following output:

      Output

      > Free Feature #1
      > Free Feature #2

      The program has printed out our two free features, completing the Free version of our app.

      So far, you created an application that has a very basic feature set. Next, you will build a way to add more features into the application at build time.

      Adding the Pro Features With go build

      We have so far avoided making changes to main.go, simulating a common production environment in which code needs to be added without changing and possibly breaking the main code. Since we can’t edit the main.go file, we’ll need to use another mechanism for injecting more features into the features slice using build tags.

      Let’s create a new file called pro.go that will use an init() function to append more features to the features slice:

      Once the editor has opened the file, add the following lines:

      pro.go

      package main
      
      func init() {
        features = append(features,
          "Pro Feature #1",
          "Pro Feature #2",
        )
      }
      

      In this code, we used init() to run code before the main() function of our application, followed by append() to add the Pro features to the features slice. Save and exit the file.
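      If you haven’t used init() before, here is a tiny standalone sketch (separate from our app, purely illustrative) showing that init() always runs before main():

      package main
      
      import "fmt"
      
      var greeting string
      
      func init() {
          // Runs automatically before main(), so greeting is
          // already set by the time main() executes.
          greeting = "Hello from init"
      }
      
      func main() {
          fmt.Println(greeting)
      }
      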

      Compile and run the application using go build:

      • go build

      Since there are now two files in our current directory (pro.go and main.go), go build will create a binary from both of them. Execute this binary:

      • ./app

      This will give you the following feature set:

      Output

      > Free Feature #1
      > Free Feature #2
      > Pro Feature #1
      > Pro Feature #2

      The application now includes both the Pro and the Free features. However, this is not desirable: since there is no distinction between versions, the Free version now includes the features that are supposed to be available only in the Pro version. To fix this, you could include more code to manage the different tiers of the application, or you could use build tags to tell the Go toolchain which .go files to build and which to ignore. Let’s add build tags in the next step.

      Adding Build Tags

      You can now use build tags to distinguish the Pro version of your application from the Free version.

      Let’s start by examining what a build tag looks like:

      // +build tag_name
      

      By putting this line of code as the first line of your package and replacing tag_name with the name of your build tag, you will tag this package as code that can be selectively included in the final binary. Let’s see this in action by adding a build tag to the pro.go file to tell the go build command to ignore it unless the tag is specified. Open up the file in your text editor:

      • nano pro.go

      Then add the following highlighted line:

      pro.go

      // +build pro
      
      package main
      
      func init() {
        features = append(features,
          "Pro Feature #1",
          "Pro Feature #2",
        )
      }
      

      At the top of the pro.go file, we added // +build pro followed by a blank line. This trailing blank line is required; otherwise, Go interprets the tag as a regular comment. Build tag declarations must also be at the very top of a .go file. Nothing, not even comments, can be above build tags.

      The +build declaration tells the go build command that this isn’t a comment, but instead is a build tag. The second part is the pro tag. By adding this tag at the top of the pro.go file, the go build command will now only include the pro.go file when the pro tag is present.

      Compile and run the application again:

      • go build
      • ./app

      You’ll receive the following output:

      Output

      > Free Feature #1
      > Free Feature #2

      Since the pro.go file requires a pro tag to be present, the file is ignored and the application compiles without it.

      When running the go build command, we can use the -tags flag to conditionally include code in the compiled source by adding the tag itself as an argument. Let’s do this for the pro tag:

      • go build -tags pro
      • ./app

      This will output the following:

      Output

      > Free Feature #1
      > Free Feature #2
      > Pro Feature #1
      > Pro Feature #2

      Now we only get the extra features when we build the application using the pro build tag.

      This is fine if there are only two versions, but things get complicated when you add in more tags. To add in the Enterprise version of our app in the next step, we will use multiple build tags joined together with Boolean logic.

      Build Tag Boolean Logic

      When there are multiple build tags in a Go package, the tags interact with each other using Boolean logic. To demonstrate this, we will add the Enterprise level of our application using both the pro tag and the enterprise tag.

      In order to build an Enterprise binary, we will need to include the default features, the Pro level features, and a new set of features for Enterprise. First, open an editor and create a new file, enterprise.go, that will add the new Enterprise features:

      • nano enterprise.go

      The contents of enterprise.go will look almost identical to pro.go but will contain new features. Add the following lines to the file:

      enterprise.go

      package main
      
      func init() {
        features = append(features,
          "Enterprise Feature #1",
          "Enterprise Feature #2",
        )
      }
      

      Save and exit the file.

      Currently the enterprise.go file does not have any build tags, and as you learned when you added pro.go, this means that these features will be added to the Free version when executing go build. For pro.go, you added // +build pro and a blank line to the top of the file to tell go build that it should only be included when -tags pro is used. In this situation, you only needed one build tag to accomplish the goal. When adding the new Enterprise features, however, the binary must also include the Pro features.

      Let’s add support for the pro build tag to enterprise.go first. Open the file with your text editor:

      • nano enterprise.go

      Next, add the build tag before the package main declaration, making sure to include a blank line after the build tag:

      enterprise.go

      // +build pro
      
      package main
      
      func init() {
        features = append(features,
          "Enterprise Feature #1",
          "Enterprise Feature #2",
        )
      }
      

      Save and exit the file.

      Compile and run the application without any tags:

      • go build
      • ./app

      You’ll receive the following output:

      Output

      > Free Feature #1
      > Free Feature #2

      The Enterprise features no longer show up in the Free version. Now let’s add the pro build tag and build and run the application again:

      • go build -tags pro
      • ./app

      You’ll receive the following output:

      Output

      > Free Feature #1
      > Free Feature #2
      > Enterprise Feature #1
      > Enterprise Feature #2
      > Pro Feature #1
      > Pro Feature #2

      This is still not exactly what we need: the Enterprise features now show up when we try to build the Pro version. To solve this, we need to use another build tag. Unlike with the pro tag, however, we now need the Enterprise features to be included only when both the pro and enterprise tags are present.

      The Go build system accounts for this situation by allowing the use of some basic Boolean logic in the build tags system.

      Let’s open enterprise.go again:

      • nano enterprise.go

      Add another build tag, enterprise, on the same line as the pro tag:

      enterprise.go

      // +build pro enterprise
      
      package main
      
      func init() {
        features = append(features,
          "Enterprise Feature #1",
          "Enterprise Feature #2",
        )
      }
      

      Save and close the file.

      Now let’s compile and run the application with the new enterprise build tag.

      • go build -tags enterprise
      • ./app

      This will give the following:

      Output

      > Free Feature #1
      > Free Feature #2
      > Enterprise Feature #1
      > Enterprise Feature #2

      Now we have lost the Pro features. This is because when we put multiple build tags on the same line in a .go file, go build interprets them as using OR logic. With the addition of the line // +build pro enterprise, the enterprise.go file will be built if either the pro build tag or the enterprise build tag is present. We need to set up the build tags correctly to require both and use AND logic instead.

      Instead of putting both tags on the same line, if we put them on separate lines, then go build will interpret those tags using AND logic.

      Open enterprise.go once again, and let’s separate the build tags onto multiple lines:

      • nano enterprise.go

      enterprise.go

      // +build pro
      // +build enterprise
      
      package main
      
      func init() {
        features = append(features,
          "Enterprise Feature #1",
          "Enterprise Feature #2",
        )
      }
      

      Now compile and run the application with the new enterprise build tag.

      • go build -tags enterprise
      • ./app

      You’ll receive the following output:

      Output

      > Free Feature #1
      > Free Feature #2

      Still not quite there: because an AND statement requires both elements to be true, we need to pass both the pro and enterprise build tags when building.

      Let’s try again:

      • go build -tags "enterprise pro"
      • ./app

      You’ll receive the following output:

      Output

      > Free Feature #1 > Free Feature #2 > Enterprise Feature #1 > Enterprise Feature #2 > Pro Feature #1 > Pro Feature #2

      Now our application can be built from the same source tree in multiple ways, unlocking the features of the application accordingly.

      In this example, we used a new // +build tag to signify AND logic, but there are alternative ways to represent Boolean logic with build tags. The following table holds some examples of other syntactic formatting for build tags, along with their Boolean equivalent:

      Build Tag Syntax             Build Tag Sample            Boolean Statement
      Space-separated elements     // +build pro enterprise    pro OR enterprise
      Comma-separated elements     // +build pro,enterprise    pro AND enterprise
      Exclamation point elements   // +build !pro              NOT pro
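
      These operators can also be combined. As a quick illustrative sketch (this file and its feature name are hypothetical, not part of the tutorial’s app), the comma syntax together with ! would let you compile a file only into the Free binary, when neither tag is present:

      free.go

      // +build !pro,!enterprise
      
      package main
      
      func init() {
          features = append(features,
              "Free-only Feature",
          )
      }
      

      With this constraint, go build includes free.go by default, but omits it whenever -tags pro, -tags enterprise, or -tags "pro enterprise" is passed.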

      Conclusion

      In this tutorial, you used build tags to control which of your code was compiled into the binary. First, you declared build tags and used them with go build, then you combined multiple tags with Boolean logic. You then built a program that represented the different feature sets of a Free, Pro, and Enterprise version, showing the powerful level of control that build tags can give you over your project.

      If you’d like to learn more about build tags, take a look at the Golang documentation on the subject, or continue to explore our How To Code in Go series.




      Understanding defer in Go


      Introduction

      Go has many of the common control-flow keywords found in other programming languages, such as if, switch, and for. One keyword that isn’t found in most other programming languages is defer, and though it’s less common, you’ll quickly see how useful it can be in your programs.

      One of the primary uses of a defer statement is for cleaning up resources, such as open files, network connections, and database handles. When your program is finished with these resources, it’s important to close them to avoid exhausting the program’s limits and to allow other programs access to those resources. defer makes our code cleaner and less error prone by keeping the calls to close the file/resource in proximity to the open call.

      In this article we will learn how to properly use the defer statement for cleaning up resources as well as several common mistakes that are made when using defer.

      What is a defer Statement

      A defer statement adds the function call following the defer keyword onto a stack. All of the calls on that stack are called when the function in which they were added returns. Because the calls are placed on a stack, they are called in last-in-first-out order.

      Let’s look at how defer works by printing out some text:

      main.go

      package main
      
      import "fmt"
      
      func main() {
          defer fmt.Println("Bye")
          fmt.Println("Hi")
      }
      

      In the main function, we have two statements. The first statement starts with the defer keyword, followed by a print statement that prints out Bye. The next line prints out Hi.

      If we run the program, we will see the following output:

      Output

      Hi
      Bye

      Notice that Hi was printed first. This is because any statement that is preceded by the defer keyword isn’t invoked until the end of the function in which defer was used.

      Let’s take another look at the program, and this time we’ll add some comments to help illustrate what is happening:

      main.go

      package main
      
      import "fmt"
      
      func main() {
          // defer statement is executed, and places
          // fmt.Println("Bye") on a list to be executed prior to the function returning
          defer fmt.Println("Bye")
      
          // The next line is executed immediately
          fmt.Println("Hi")
      
          // fmt.Println("Bye") is now invoked, as we are at the end of the function scope
      }
      

      The key to understanding defer is that when the defer statement is executed, the arguments to the deferred function are evaluated immediately. When a defer executes, it places the statement following it on a list to be invoked prior to the function returning.
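
      To make that evaluation rule concrete, here is a small sketch (not part of the example above) showing that a deferred call captures its argument at the moment the defer statement runs, not when the deferred function finally executes:

      package main
      
      import "fmt"
      
      func main() {
          i := 1
          // i is evaluated here, so the deferred call prints 1,
          // even though i changes before the function returns
          defer fmt.Println("deferred i:", i)
          i = 2
          fmt.Println("current i:", i)
      }
      

      Running this prints current i: 2 followed by deferred i: 1.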

      Although these examples illustrate how defer behaves, they’re not typical of how defer is used when writing a Go program. It’s more likely we are using defer to clean up a resource, such as a file handle. Let’s look at how to do that next.

      Using defer to Clean Up Resources

      Using defer to clean up resources is very common in Go. Let’s first look at a program that writes a string to a file but does not use defer to handle the resource clean-up:

      main.go

      package main
      
      import (
          "io"
          "log"
          "os"
      )
      
      func main() {
          if err := write("readme.txt", "This is a readme file"); err != nil {
              log.Fatal("failed to write file:", err)
          }
      }
      
      func write(fileName string, text string) error {
          file, err := os.Create(fileName)
          if err != nil {
              return err
          }
          _, err = io.WriteString(file, text)
          if err != nil {
              return err
          }
          file.Close()
          return nil
      }
      

      In this program, there is a function called write that will first attempt to create a file. If it has an error, it will return the error and exit the function. Next, it tries to write the string This is a readme file to the specified file. If it receives an error, it will return the error and exit the function. Then, the function will try to close the file and release the resource back to the system. Finally the function returns nil to signify that the function executed without error.

      Although this code works, there is a subtle bug. If the call to io.WriteString fails, the function will return without closing the file and releasing the resource back to the system.

      We could fix the problem by adding another file.Close() statement, which is how you would likely solve this in a language without defer:

      main.go

      package main
      
      import (
          "io"
          "log"
          "os"
      )
      
      func main() {
          if err := write("readme.txt", "This is a readme file"); err != nil {
              log.Fatal("failed to write file:", err)
          }
      }
      
      func write(fileName string, text string) error {
          file, err := os.Create(fileName)
          if err != nil {
              return err
          }
          _, err = io.WriteString(file, text)
          if err != nil {
              file.Close()
              return err
          }
          file.Close()
          return nil
      }
      

      Now even if the call to io.WriteString fails, we will still close the file. While this was a relatively easy bug to spot and fix, with a more complicated function, it may have been missed.

      Instead of adding the second call to file.Close(), we can use a defer statement to ensure that regardless of which branches are taken during execution, we always call Close().

      Here’s the version that uses the defer keyword:

      main.go

      package main
      
      import (
          "io"
          "log"
          "os"
      )
      
      func main() {
          if err := write("readme.txt", "This is a readme file"); err != nil {
              log.Fatal("failed to write file:", err)
          }
      }
      
      func write(fileName string, text string) error {
          file, err := os.Create(fileName)
          if err != nil {
              return err
          }
          defer file.Close()
          _, err = io.WriteString(file, text)
          if err != nil {
              return err
          }
          return nil
      }
      

      This time we added the line of code defer file.Close(). This tells the compiler that it should execute file.Close() just before the write function returns.

      We have now ensured that even if we add more code and create another branch that exits the function in the future, we will always clean up and close the file.

      However, we have introduced yet another bug by adding the defer. We are no longer checking the potential error that can be returned from the Close method. This is because when we use defer, there is no way to communicate any return value back to our function.

      In Go, it is considered a safe and accepted practice to call Close() more than once without affecting the behavior of your program. If Close() is going to return an error, it will do so the first time it is called. This allows us to call it explicitly in the successful path of execution in our function.
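
      As a minimal sketch of that behavior (the file name demo.txt is just an example), the first Close does the real work, and any later Close simply returns a harmless "file already closed" error:

      package main
      
      import (
          "fmt"
          "os"
      )
      
      func main() {
          file, err := os.Create("demo.txt")
          if err != nil {
              fmt.Println(err)
              return
          }
          fmt.Println(file.Close()) // <nil>: the first Close succeeds (or reports the real error)
          fmt.Println(file.Close()) // "file already closed": harmless on repeated calls
      }
      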

      Let’s look at how we can both defer the call to Close, and still report on an error if we encounter one.

      main.go

      package main
      
      import (
          "io"
          "log"
          "os"
      )
      
      func main() {
          if err := write("readme.txt", "This is a readme file"); err != nil {
              log.Fatal("failed to write file:", err)
          }
      }
      
      func write(fileName string, text string) error {
          file, err := os.Create(fileName)
          if err != nil {
              return err
          }
          defer file.Close()
          _, err = io.WriteString(file, text)
          if err != nil {
              return err
          }
      
          return file.Close()
      }
      

      The only change in this program is the last line in which we return file.Close(). If the call to Close results in an error, this will now be returned as expected to the calling function. Keep in mind that our defer file.Close() statement is also going to run after the return statement. This means that file.Close() is potentially called twice. While this isn’t ideal, it is an acceptable practice as it should not create any side effects to your program.

      If, however, we receive an error earlier in the function, such as when we call WriteString, the function will return that error, and will also try to call file.Close because it was deferred. Although file.Close may (and likely will) return an error as well, it is no longer something we care about as we received an error that is more likely to tell us what went wrong to begin with.

      So far, we have seen how we can use a single defer to ensure that we clean up our resources properly. Next we will see how we can use multiple defer statements for cleaning up more than one resource.

      Multiple defer Statements

      It is normal to have more than one defer statement in a function. Let’s create a program that only has defer statements in it to see what happens when we introduce multiple defers:

      main.go

      package main
      
      import "fmt"
      
      func main() {
          defer fmt.Println("one")
          defer fmt.Println("two")
          defer fmt.Println("three")
      }
      

      If we run the program, we will receive the following output:

      Output

      three
      two
      one

      Notice that the order is the opposite of the order in which we called the defer statements. This is because each deferred call is stacked on top of the previous one, and they are then called in reverse when the function exits its scope (Last In, First Out).

      You can have as many deferred calls as needed in a function, but it is important to remember they will all be called in the reverse of the order in which they were deferred.

      Now that we understand the order in which multiple defers will execute, let’s see how we would use multiple defers to clean up multiple resources. We’ll create a program that opens a file, writes to it, then opens it again to copy the contents to another file.

      main.go

      package main
      
      import (
          "fmt"
          "io"
          "log"
          "os"
      )
      
      func main() {
          if err := write("sample.txt", "This file contains some sample text."); err != nil {
              log.Fatal("failed to create file")
          }
      
          if err := fileCopy("sample.txt", "sample-copy.txt"); err != nil {
              log.Fatal("failed to copy file: %s")
          }
      }
      
      func write(fileName string, text string) error {
          file, err := os.Create(fileName)
          if err != nil {
              return err
          }
          defer file.Close()
          _, err = io.WriteString(file, text)
          if err != nil {
              return err
          }
      
          return file.Close()
      }
      
      func fileCopy(source string, destination string) error {
          src, err := os.Open(source)
          if err != nil {
              return err
          }
          defer src.Close()
      
          dst, err := os.Create(destination)
          if err != nil {
              return err
          }
          defer dst.Close()
      
          n, err := io.Copy(dst, src)
          if err != nil {
              return err
          }
          fmt.Printf("Copied %d bytes from %s to %sn", n, source, destination)
      
          if err := src.Close(); err != nil {
              return err
          }
      
          return dst.Close()
      }
      

      We added a new function called fileCopy. In this function, we first open up our source file that we are going to copy from. We check to see if we received an error opening the file. If so, we return the error and exit the function. Otherwise, we defer the closing of the source file we just opened.

      Next we create the destination file. Again, we check to see if we received an error creating the file. If so, we return that error and exit the function. Otherwise, we also defer the Close() for the destination file. We now have two defer functions that will be called when the function exits its scope.

      Now that we have both files open, we will Copy() the data from the source file to the destination file. If that is successful, we will attempt to close both files. If we receive an error trying to close either file, we will return the error and exit function scope.

      Notice that we explicitly call Close() for each file, even though the defer will also call Close(). This is to ensure that if there is an error closing a file, we report the error. It also ensures that if for any reason the function exits early with an error, for instance if we failed to copy between the two files, that each file will still try to close properly from the deferred calls.

      Conclusion

      In this article we learned about the defer statement, and how it can be used to ensure that we properly clean up system resources in our program. Properly cleaning up system resources will make your program use less memory and perform better. To learn more about where defer is used, read the article on Handling Panics, or explore our entire How To Code in Go series.




      How To Analyze Managed Redis Database Statistics Using the Elastic Stack on Ubuntu 18.04


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Database monitoring is the continuous process of systematically tracking various metrics that show how the database is performing. By observing performance data, you can gain valuable insights and identify possible bottlenecks, as well as find additional ways of improving database performance. Such systems often implement alerting that notifies administrators when things go wrong. Gathered statistics can be used to not only improve the configuration and workflow of the database, but also those of client applications.

      The benefit of using the Elastic Stack (ELK stack) for monitoring your managed database is its excellent support for searching and the ability to ingest new data very quickly. It does not excel at updating the data, but this trade-off is acceptable for monitoring and logging purposes, where past data is almost never changed. Elasticsearch offers a powerful means of querying the data, which you can use through Kibana to get a better understanding of how the database fares through different time periods. This will allow you to correlate database load with real-life events to gain insight into how the database is being used.

      In this tutorial, you’ll import database metrics, generated by the Redis INFO command, into Elasticsearch via Logstash. This entails configuring Logstash to periodically run the command, parse its output and send it to Elasticsearch for indexing immediately afterward. The imported data can later be analyzed and visualized in Kibana. By the end of the tutorial, you’ll have an automated system pulling in Redis statistics for later analysis.

      Prerequisites

      Step 1 — Installing and Configuring Logstash

      In this section, you will install Logstash and configure it to pull statistics from your Redis database cluster, then parse them to send to Elasticsearch for indexing.

      Start off by installing Logstash with the following command:

      • sudo apt install logstash -y

      Once Logstash is installed, enable the service to automatically start on boot:

      • sudo systemctl enable logstash

      Before configuring Logstash to pull the statistics, let’s see what the data itself looks like. To connect to your Redis database, head over to your Managed Database Control Panel, and under the Connection details panel, select Flags from the dropdown:

      Managed Database Control Panel

      You’ll be shown a preconfigured command for the Redli client, which you’ll use to connect to your database. Click Copy and run the following command on your server, replacing redli_flags_command with the command you have just copied:

      • redli_flags_command info

      Since the output from this command is long, we’ll break it down and explain each of its sections.

      In the output of the Redis INFO command, sections are marked with #, which signifies a comment. The values are populated in the form of key:value, which makes them relatively easy to parse.

      Output

      # Server
      redis_version:5.0.4
      redis_git_sha1:ab60b2b1
      redis_git_dirty:1
      redis_build_id:7909f4de3561dc50
      redis_mode:standalone
      os:Linux 5.2.14-200.fc30.x86_64 x86_64
      arch_bits:64
      multiplexing_api:epoll
      atomicvar_api:atomic-builtin
      gcc_version:9.1.1
      process_id:72
      run_id:ddb7b96c93bbd0c369c6d06ce1c02c78902e13cc
      tcp_port:25060
      uptime_in_seconds:1733
      uptime_in_days:0
      hz:10
      configured_hz:10
      lru_clock:8687593
      executable:/usr/bin/redis-server
      config_file:/etc/redis.conf

      # Clients
      connected_clients:3
      client_recent_max_input_buffer:2
      client_recent_max_output_buffer:0
      blocked_clients:0
      . . .

      The Server section contains technical information about the Redis build, such as its version and the Git commit it’s based on. While the Clients section provides the number of currently opened connections.

      Output

      . . .
      # Memory
      used_memory:941560
      used_memory_human:919.49K
      used_memory_rss:4931584
      used_memory_rss_human:4.70M
      used_memory_peak:941560
      used_memory_peak_human:919.49K
      used_memory_peak_perc:100.00%
      used_memory_overhead:912190
      used_memory_startup:795880
      used_memory_dataset:29370
      used_memory_dataset_perc:20.16%
      allocator_allocated:949568
      allocator_active:1269760
      allocator_resident:3592192
      total_system_memory:1030356992
      total_system_memory_human:982.62M
      used_memory_lua:37888
      used_memory_lua_human:37.00K
      used_memory_scripts:0
      used_memory_scripts_human:0B
      number_of_cached_scripts:0
      maxmemory:463470592
      maxmemory_human:442.00M
      maxmemory_policy:allkeys-lru
      allocator_frag_ratio:1.34
      allocator_frag_bytes:320192
      allocator_rss_ratio:2.83
      allocator_rss_bytes:2322432
      rss_overhead_ratio:1.37
      rss_overhead_bytes:1339392
      mem_fragmentation_ratio:5.89
      mem_fragmentation_bytes:4093872
      mem_not_counted_for_evict:0
      mem_replication_backlog:0
      mem_clients_slaves:0
      mem_clients_normal:116310
      mem_aof_buffer:0
      mem_allocator:jemalloc-5.1.0
      active_defrag_running:0
      lazyfree_pending_objects:0
      . . .

      Here Memory confirms how much RAM Redis has allocated for itself, as well as the maximum amount of memory it can possibly use. If it starts running out of memory, it will free up keys using the strategy you specified in the Control Panel (shown in the maxmemory_policy field in this output).

      Output

      . . .
      # Persistence
      loading:0
      rdb_changes_since_last_save:0
      rdb_bgsave_in_progress:0
      rdb_last_save_time:1568966978
      rdb_last_bgsave_status:ok
      rdb_last_bgsave_time_sec:0
      rdb_current_bgsave_time_sec:-1
      rdb_last_cow_size:217088
      aof_enabled:0
      aof_rewrite_in_progress:0
      aof_rewrite_scheduled:0
      aof_last_rewrite_time_sec:-1
      aof_current_rewrite_time_sec:-1
      aof_last_bgrewrite_status:ok
      aof_last_write_status:ok
      aof_last_cow_size:0

      # Stats
      total_connections_received:213
      total_commands_processed:2340
      instantaneous_ops_per_sec:1
      total_net_input_bytes:39205
      total_net_output_bytes:776988
      instantaneous_input_kbps:0.02
      instantaneous_output_kbps:2.01
      rejected_connections:0
      sync_full:0
      sync_partial_ok:0
      sync_partial_err:0
      expired_keys:0
      expired_stale_perc:0.00
      expired_time_cap_reached_count:0
      evicted_keys:0
      keyspace_hits:0
      keyspace_misses:0
      pubsub_channels:0
      pubsub_patterns:0
      latest_fork_usec:353
      migrate_cached_sockets:0
      slave_expires_tracked_keys:0
      active_defrag_hits:0
      active_defrag_misses:0
      active_defrag_key_hits:0
      active_defrag_key_misses:0
      . . .

      In the Persistence section, you can see the last time Redis saved the keys it stores to disk, and if it was successful. The Stats section provides numbers related to client and in-cluster connections, the number of times the requested key was (or wasn’t) found, and so on.

      Output

      . . .
      # Replication
      role:master
      connected_slaves:0
      master_replid:9c1d345a46d29d08537981c4fc44e312a21a160b
      master_replid2:0000000000000000000000000000000000000000
      master_repl_offset:0
      second_repl_offset:-1
      repl_backlog_active:0
      repl_backlog_size:46137344
      repl_backlog_first_byte_offset:0
      repl_backlog_histlen:0
      . . .

      Note: The Redis project uses the terms “master” and “slave” in its documentation and in various commands. DigitalOcean generally prefers the alternative terms “primary” and “replica.”
      This guide will default to the terms “primary” and “replica” whenever possible, but note that there are a few instances where the terms “master” and “slave” unavoidably come up.

      By looking at the role under Replication, you’ll know if you’re connected to a primary or replica node. The rest of the section provides the number of currently connected replicas and the amount of data that the replica is lacking in regards to the primary. There may be additional fields if the instance you are connected to is a replica.

      Output

      . . .
      # CPU
      used_cpu_sys:1.972003
      used_cpu_user:1.765318
      used_cpu_sys_children:0.000000
      used_cpu_user_children:0.001707

      # Cluster
      cluster_enabled:0

      # Keyspace

      Under CPU, you’ll see the amount of system (used_cpu_sys) and user (used_cpu_user) CPU time Redis has consumed. The Cluster section contains only one field, cluster_enabled, which indicates whether Redis cluster mode is enabled. The final section, Keyspace, is empty here because the database does not contain any keys yet.
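
      Because the output is plain key:value lines separated by # section comments, it is straightforward to parse in most languages. As an illustration only (this Go sketch is not part of the Logstash pipeline you’ll build below), a few lines suffice to turn a snippet of INFO output into a map:

      package main
      
      import (
          "bufio"
          "fmt"
          "strings"
      )
      
      func main() {
          // A tiny sample of INFO output; the real output has many more fields.
          info := "# Server\r\nredis_version:5.0.4\r\nuptime_in_seconds:1733\r\n"
      
          stats := make(map[string]string)
          scanner := bufio.NewScanner(strings.NewReader(info))
          for scanner.Scan() {
              line := strings.TrimSpace(scanner.Text())
              // Skip blank lines and # section headers.
              if line == "" || strings.HasPrefix(line, "#") {
                  continue
              }
              parts := strings.SplitN(line, ":", 2)
              if len(parts) == 2 {
                  stats[parts[0]] = parts[1]
              }
          }
          fmt.Println(stats["redis_version"]) // 5.0.4
      }
      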

      Logstash will be tasked to periodically run the info command on your Redis database (similar to how you just did), parse the results, and send them to Elasticsearch. You’ll then be able to access them later from Kibana.

      You’ll store the configuration for indexing Redis statistics in Elasticsearch in a file named redis.conf under the /etc/logstash/conf.d directory, where Logstash stores configuration files. When started as a service, it will automatically run them in the background.

      Create redis.conf using your favorite editor (for example, nano):

      • sudo nano /etc/logstash/conf.d/redis.conf

      Add the following lines:

      /etc/logstash/conf.d/redis.conf

      input {
          exec {
              command => "redli_flags_command info"
              interval => 10
              type => "redis_info"
          }
      }
      
      filter {
          kv {
              value_split => ":"
              field_split => "\r\n"
              remove_field => [ "command", "message" ]
          }
      
          ruby {
              code =>
              "
              event.to_hash.keys.each { |k|
                  if event.get(k).to_i.to_s == event.get(k) # is integer?
                      event.set(k, event.get(k).to_i) # convert to integer
                  end
                  if event.get(k).to_f.to_s == event.get(k) # is float?
                      event.set(k, event.get(k).to_f) # convert to float
                  end
              }
              puts 'Ruby filter finished'
              "
          }
      }
      
      output {
          elasticsearch {
              hosts => "http://localhost:9200"
              index => "%{type}"
          }
      }
      

      Remember to replace redli_flags_command with the command shown in the control panel that you used earlier in the step.

      You define an input, a set of filters that will run on the collected data, and an output that will send the filtered data to Elasticsearch. The input uses the exec plugin, which periodically runs a command on the server after a set time interval (expressed in seconds). It also specifies a type parameter that defines the document type when indexed in Elasticsearch. The exec plugin passes down an event containing two string fields, command and message: the command field contains the command that was run, and the message field contains its output.

      There are two filters that will run sequentially on the data collected from the input. The kv filter stands for key-value filter, and is built in to Logstash. It is used for parsing data in the general form of key value_separator value, and provides parameters for specifying the value and field separators. The field separator is the string that separates one key-value pair from the next. In the case of the output of the Redis INFO command, the field separator (field_split) is a newline, and the value separator (value_split) is :. Lines that do not follow the defined form will be discarded, including comments.

      To configure the kv filter, you pass : to the value_split parameter, and \r\n (signifying a newline) to the field_split parameter. You also order it to remove the command and message fields from the current data object by passing them to remove_field as elements of an array, because they contain data that is no longer useful.

      The kv filter represents every value it parses as a string (text) type by design. This raises an issue, because Kibana can’t easily process string types, even when the value is actually a number. To solve this, you’ll use custom Ruby code to convert the number-only strings to numbers, where possible. The second filter is a ruby block that provides a code parameter accepting a string containing the code to be run.

      event is a variable that Logstash provides to your code, and contains the current data in the filter pipeline. As was noted before, filters run one after another, meaning that the Ruby filter will receive the parsed data from the kv filter. The Ruby code itself converts the event to a Hash and traverses through the keys, then checks if the value associated with the key could be represented as an integer or as a float (a number with decimals). If it can, the string value is replaced with the parsed number. When the loop finishes, it prints out a message (Ruby filter finished) to report progress.
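
      If the Ruby is hard to follow, the same idea expressed as a standalone Go sketch may help (purely illustrative, not part of the pipeline): a value is replaced by a number only when the string parses cleanly as an integer or a float.

      package main
      
      import (
          "fmt"
          "strconv"
      )
      
      // coerce mirrors the Ruby filter's intent: numeric-looking strings
      // become numbers, everything else passes through unchanged.
      func coerce(s string) interface{} {
          if i, err := strconv.Atoi(s); err == nil {
              return i
          }
          if f, err := strconv.ParseFloat(s, 64); err == nil {
              return f
          }
          return s
      }
      
      func main() {
          for _, v := range []string{"1733", "1.34", "standalone"} {
              fmt.Printf("%q -> %v\n", v, coerce(v))
          }
      }
      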

      The output sends the processed data to Elasticsearch for indexing. The resulting document will be stored in the redis_info index, defined in the input and passed in as a parameter to the output block.

      Save and close the file.

      You’ve installed Logstash using apt and configured it to periodically request statistics from Redis, process them, and send them to your Elasticsearch instance.

      Step 2 — Testing the Logstash Configuration

      Now you’ll test the configuration by running Logstash to verify it will properly pull the data.

      Logstash supports running a specific configuration by passing its file path to the -f parameter. Run the following command to test your new configuration from the last step:

      • sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf

      It may take some time to show the output, but you’ll soon see something similar to the following:

      Output

      WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
      Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
      [WARN ] 2019-09-20 11:59:53.440 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
      [INFO ] 2019-09-20 11:59:53.459 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.8.3"}
      [INFO ] 2019-09-20 12:00:02.543 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
      [INFO ] 2019-09-20 12:00:03.331 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
      [WARN ] 2019-09-20 12:00:03.727 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://localhost:9200/"}
      [INFO ] 2019-09-20 12:00:04.015 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>6}
      [WARN ] 2019-09-20 12:00:04.020 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
      [INFO ] 2019-09-20 12:00:04.071 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
      [INFO ] 2019-09-20 12:00:04.100 [Ruby-0-Thread-5: :1] elasticsearch - Using default mapping template
      [INFO ] 2019-09-20 12:00:04.146 [Ruby-0-Thread-5: :1] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
      [INFO ] 2019-09-20 12:00:04.295 [[main]-pipeline-manager] exec - Registering Exec Input {:type=>"redis_info", :command=>"...", :interval=>10, :schedule=>nil}
      [INFO ] 2019-09-20 12:00:04.315 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x73adceba run>"}
      [INFO ] 2019-09-20 12:00:04.483 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
      [INFO ] 2019-09-20 12:00:05.318 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
      Ruby filter finished
      Ruby filter finished
      Ruby filter finished
      ...

      You’ll see the Ruby filter finished message being printed at regular intervals (set to 10 seconds in the previous step), which means that the statistics are being shipped to Elasticsearch.

      You can exit Logstash by pressing CTRL + C on your keyboard. As previously mentioned, Logstash will automatically run all config files found under /etc/logstash/conf.d in the background when started as a service. Run the following command to start it:

      • sudo systemctl start logstash

      You’ve run Logstash to check if it can connect to your Redis cluster and gather data. Next, you’ll explore some of the statistical data in Kibana.

      Step 3 — Exploring Imported Data in Kibana

      In this section, you’ll explore and visualize the statistical data describing your database’s performance in Kibana.

      In your web browser, navigate to your domain where you exposed Kibana as a part of the prerequisite. You’ll see the default welcome page:

      Kibana - Welcome Page

      Before exploring the data Logstash is sending to Elasticsearch, you’ll first need to add the redis_info index to Kibana. To do so, click on Management from the left-hand vertical sidebar, and then on Index Patterns under the Kibana section.

      Kibana - Index Pattern Creation

      You’ll see a form for creating a new Index Pattern. Index Patterns in Kibana provide a way to pull in data from multiple Elasticsearch indexes at once, and can also be used to explore just a single index.

      Beneath the Index pattern text field, you’ll see the redis_info index listed. Type it in the text field and then click on the Next step button.

      You’ll then be asked to choose a timestamp field, so you’ll later be able to narrow your searches by a time range. Logstash automatically adds one, called @timestamp. Select it from the dropdown and click on Create index pattern to finish adding the index to Kibana.

      Kibana - Index Pattern Timestamp Selection

      To create and see existing visualizations, click on the Visualize item in the left-hand vertical menu. You’ll see the following page:

      Kibana - Visualizations

      To create a new visualization, click on the Create a visualization button, then select Line from the list of types that will pop up. Then, select the redis_info* index pattern you have just created as the data source. You’ll see an empty visualization:

      Kibana - Empty Visualization

      The left-side panel provides a form for editing parameters that Kibana will use to draw the visualization, which will be shown on the central part of the screen. On the upper-right hand side of the screen is the date range picker. If the @timestamp field is being used in the visualization, Kibana will only show the data belonging to the time interval specified in the range picker.

      You’ll now visualize the average Redis memory usage during a specified time interval. Click on Y-Axis under Metrics in the panel on the left to unfold it, then select Average as the Aggregation and select used_memory as the Field. This will populate the Y axis of the plot with the average values.

      Next, click on X-Axis under Buckets. For the Aggregation, choose Date Histogram. @timestamp should be automatically selected as the Field. Then, show the visualization by clicking on the blue play button on the top of the panel. If your database is brand new and mostly unused, you won’t see a very long line, but in all cases you will see an accurate portrayal of average memory usage. Here is how the resulting visualization may look after little to no usage:

      Kibana - Redis Memory Usage Visualization

      In this step, you have visualized memory usage of your managed Redis database, using Kibana. You can also use other plot types Kibana offers, such as the Visual Builder, to create more complicated graphs that portray more than one field at the same time. This will allow you to gain a better understanding of how your database is being used, which will help you optimize client applications, as well as your database itself.

      Conclusion

      You now have the Elastic stack installed on your server and configured to pull statistics data from your managed Redis database on a regular basis. You can analyze and visualize the data using Kibana, or some other suitable software, which will help you gather valuable insights and real-world correlations into how your database is performing.

      For more information about what you can do with your Redis Managed Database, visit the product docs. If you’d like to present the database statistics using another visualization type, check out the Kibana docs for further instructions.


