
      Advanced Troubleshooting DNS Problems


      The most common DNS error is a simple typo, whether it’s from the client or the server. Typos and other incorrect DNS data cause many problems. Even when data is correct, DNS can still be a difficult protocol to troubleshoot.

      The DNS process starts at the client, which queries a specified upstream DNS server. That server may already have the answer in its cache, in which case it returns an IP address and the query is resolved. If not, it queries further upstream until an authoritative answer is found. Depending on the query, the authority may sit behind a Top Level Domain (TLD) such as .com, .net, or .org, or an authoritative server on a local network may be the correct resolver.

      Possible causes of unsatisfied queries include:

      • No network circuit path from the client to the specified DNS server.
      • The specified DNS server might refuse the query because the client lacks the required credentials or permissions.
      • The cached information on the specified DNS server could be out of date.
      • The specified DNS server may have no upstream path to a TLD server.

      Fortunately, there are a variety of tools available for Windows, macOS, and Linux used to troubleshoot DNS problems. Most DNS troubleshooting tools focus on tracing the circuits to a DNS resolver or querying DNS servers for a correct response.

      Tracing the Resolver Network Circuit

      Clients must be able to reach the DNS server with a query. The most common failure is an interruption in the network circuit between the client app and the desired DNS server.

      Each client declares a default DNS server for DNS queries. If the address of the default DNS server is incorrect, or points to an unavailable resource, there can be no resolution to the request. In this case, the requesting application receives an error, or simply hangs. The IP address of the default DNS server must be known to troubleshoot further.

      The IP address of the default DNS server is found in the properties of the network adapter in use. To find yours, follow the instructions below for your operating system:

      Windows

      1. Right-click the network icon in the Notification Area of the Taskbar.

      2. Choose Network and Internet Settings.

      3. Select your network adapter.

      4. Scroll down to reveal the settings for that network adapter.
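
      Alternatively, the default DNS server can be read from the Windows command line. This quick check (an addition to, not a replacement for, the steps above) filters the adapter details down to the DNS entries:

      ipconfig /all | findstr /C:"DNS Servers"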

      macOS

      1. Click the network icon in the Menu Bar.

      2. Click Network Preferences.

      3. Select your network adapter.

      4. Click Advanced.

      5. Click DNS to reveal the settings for that adapter.
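
      The same values can also be read from the Terminal. In the example below, "Wi-Fi" is assumed to be the name of your active network service; substitute your own as needed:

      networksetup -getdnsservers Wi-Fi
      scutil --dns | grep nameserver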

      Linux

      In Linux, the address of the desired DNS server is set in the resolver configuration file located at /etc/resolv.conf.

      To find these values via the command line, enter:

      cat /etc/resolv.conf
      

      To find these values via the GUI in Ubuntu 22.04 LTS:

      1. Click the network icon in the GNOME Panel.

      2. Select your network adapter.

      3. Click <adapter/connection/device> Settings.

      4. Click the gear icon next to your chosen adapter/connection/device.

      Knowing the desired address helps trace the circuit to determine if the desired DNS server can be reached.
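
      Note that on Linux distributions using systemd-resolved (including recent Ubuntu releases), /etc/resolv.conf may list only the local stub resolver at 127.0.0.53. In that case, the actual upstream DNS servers can be displayed with:

      resolvectl status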

      Circuit Testing with Ping

      Troubleshooting the network communications circuit uses common tools like ping. Many queries fail because there is no network circuit to the desired DNS server. The ping command line tool is universal to all operating systems and does not require administrative rights.

      Open a command line and use ping to determine if the IP address responds. In this case, the public DNS server for Cloudflare (1.1.1.1) is cited:

      ping 1.1.1.1
      

      Or:

      ping cloudflare.com
      

      When a reply is returned, queries can be made to the server; in that case, skip to the next section.

      If there is no reply using ping, then the circuit is unusable and the requested resolver is not responding. This can be because it’s under attack, dead, or offline. Specify another resolver/DNS server and repeat the attempt.

      If no public address responds at all, the host’s Internet connection is likely down. If the circuit is up but the configured DNS server still fails to answer, specify another DNS server in the host’s network settings, then restart the network stack (or reboot the host) so the new entry is read. Well-known public alternatives are Cloudflare’s 1.1.1.1 and Google’s 8.8.8.8.
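
      As a brief illustration on Linux, the resolver file accepts one nameserver line per server. This is only a sketch: on hosts managed by NetworkManager or systemd-resolved the file is generated automatically, so the change belongs in those tools instead.

      # /etc/resolv.conf (sketch; may be overwritten by network management tools)
      nameserver 1.1.1.1
      nameserver 8.8.8.8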

      Note

      The traceroute command line tool is also handy in network circuit analysis. It uses either an IP address or a Fully Qualified Domain Name (FQDN) to show latencies between the host and the desired IP address/FQDN.
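
      For example, to trace the path to Cloudflare's resolver (use tracert in place of traceroute on Windows):

      traceroute 1.1.1.1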

      Tracing Resolvers with NSLookup

      Windows, macOS, and Linux contain the command line tool, nslookup (Name Server Lookup). It performs a query on a known address, using the default DNS resolver, unless another resolver is specified.

      nslookup google.com
      

      Renders:

      nslookup google.com
          Server:  one.one.one.one
          Address:  1.1.1.1
      
          Non-authoritative answer:
          Name:	google.com
          Addresses:  2607:f8b0:4004:c19::8a
                    2607:f8b0:4004:c19::71
                    2607:f8b0:4004:c19::8b
                    2607:f8b0:4004:c19::64
                    64.233.177.138
                    64.233.177.139
                    64.233.177.102
                    64.233.177.101
                    64.233.177.100
                    64.233.177.113

      Here, the default nameserver is Cloudflare’s public DNS server (one.one.one.one), with an IP address of 1.1.1.1. It returns a cached (non-authoritative) answer for the queried name (google.com), listing four IPv6 addresses and six IPv4 addresses.

      When a working network circuit is available, nslookup permits changing the nameserver so that entries from different nameservers can be compared. When a domain’s DNS entry is changed, it can take as long as 48 hours for the change to become uniform across nameservers/resolvers. This is because non-authoritative servers cache entries for the duration of a Time-To-Live (TTL) value for speedier responses: the answer is delivered from cache, without looking it up on the next higher-level server. Only when the cached entry expires does the resolver request a fresh answer from a higher authority and renew the answer it delivers.
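
      To compare answers, pass the address of another nameserver as a second argument to nslookup. For example, the following queries Google's public resolver instead of the default; a differing answer usually points to caches at different stages of their TTL:

      nslookup google.com 8.8.8.8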

      Start of Authority (SOA)

      Every domain used publicly in DNS has a registrar. The registrar keeps a record of ownership, which may not be visible to the public. The part of this record that must be public is the IP address(es) of the domain’s nameservers. Those nameservers/resolvers, in turn, hold the authoritative DNS records for the domain.

      An organization that hosts a domain generally also runs its nameserver and is the SOA for the domain and any subdomains beneath it. TLD nameservers, such as those for .com, .net, and .org, are tightly controlled. They are only accessible through other servers using the DNS Security Extensions (DNSSEC) authentication and security protocol.

      Hosting organizations like Linode may also use Content Distribution Networks (CDNs) that provide a presence where heavily-used sites manage IP and DNS access. CDNs are not covered by this guide, because their troubleshooting often requires tools that only the CDNs can provide. Nonetheless, these sites must respond appropriately when a host makes a query.

      Dig

      The most popular client-side DNS query tool beyond nslookup is dig (Domain Information Groper). On Windows, dig must be downloaded as part of the BIND package, which also includes an optional DNS server. The dig command is included with most versions of macOS and Linux.

      The dig tool allows specific DNS record types to be queried, including MX (mail) records, A (address) records, TXT records (free-form text details), and the other host records held by the SOA nameserver.

      A basic dig query needs only the domain name:

      dig linode.com
      

      The output looks similar to:

      ; <<>> DiG 9.16.30 <<>> linode.com
          ;; global options: +cmd
          ;; Got answer:
          ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 54592
          ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
      
          ;; OPT PSEUDOSECTION:
          ; EDNS: version: 0, flags:; udp: 1232
          ;; QUESTION SECTION:
          ;linode.com.                	IN  	A
      
          ;; ANSWER SECTION:
          linode.com.         	300 	IN  	A   	69.164.200.202
          linode.com.         	300 	IN  	A   	72.14.191.202
          linode.com.         	300 	IN  	A   	72.14.180.202
      
          ;; Query time: 62 msec
          ;; SERVER: 1.1.1.1#53(1.1.1.1)
          ;; WHEN: Thu Jul 14 17:11:03 Eastern Daylight Time 2022
          ;; MSG SIZE  rcvd: 87

      Options for the dig command include:

      • +trace, which follows the chain of DNS servers from the root down to the SOA.
      • the ANY query type, which requests all records, not just the default A record.
      • -f <filename>, which runs queries for each of the hosts listed in the specified file.

      The SOA can be found, and the path to it queried, to see if records match. This allows DNS resolvers along the path to the SOA to be adjusted for longer/shorter TTL caching. This accommodates records that change frequently, such as DynamicDNS records in Active Directory environments.
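
      For example, the following commands (output omitted) follow the delegation path from the root servers down to the authoritative nameserver, then display the SOA record itself:

      dig +trace linode.com
      dig linode.com soa +short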

      Domain Transfers

      When a domain transfer occurs between nameserver registrations, it takes time to propagate to the hundreds of thousands of resolvers that might be contacted. The dig command is the handiest tool to find outdated cache or incorrect entries.
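
      The @ syntax directs a query at a specific resolver, which makes stale caches easy to spot. In this sketch, the same record is requested from two public resolvers so the answers can be compared:

      dig @1.1.1.1 linode.com +short
      dig @8.8.8.8 linode.com +short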

      The Hosts File

      Windows, macOS, and Linux consult a hosts file before trying to resolve a name through DNS. The hosts file pre-dates DNS and takes precedence over any attempt to use a resolver.

      When present and filled in, the hosts file is the first resolver used by any host. In Windows, this file is located at \Windows\System32\drivers\etc\hosts. On macOS and Linux, it’s located at /etc/hosts.

      The hosts file is canonical, meaning that its entries override any resolver. When tracing odd DNS access problems, look for the presence of this file and use a text editor to examine its contents. Entries may be administratively placed to prevent access to specific sites. The file is sometimes used as a “private DNS” to control system behavior and prevent typos from sending users to typosquatting malware sites. The hosts file must be altered to allow subsequent access through a resolver to such sites.
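
      The file format is one entry per line: an IP address followed by one or more hostnames. The entries below are purely illustrative (192.0.2.0/24 is a reserved documentation range):

      127.0.0.1      localhost
      192.0.2.10     intranet.example.com intranet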

      Walled Gardens and Captive Portals

      Walled Garden and Captive Portal are terms for the same environment: barriers presented by network access providers, such as “Free WiFi” services deployed in public spaces, shops, schools, and government facilities. These offerings are controlled through an access control host or router, whether the network is wired or wireless. All DNS or IP address calls are intercepted and redirected to an authorization device.

      After supplying sufficient credentials, a user is provided either confined or open access. Within the Captive Portal or Walled Garden, DNS calls are often intercepted and routed through local or cloud-based DNS servers/resolvers. They resolve requests using their own entries, which may not be the public DNS entry for a user.

      The “wall” in the Walled Garden provides confined access, typically limiting access to social media, adult websites, or other sites chosen by the provider. Although browser DNS-over-HTTPS protocols can sometimes “leap” over, or bypass, these walls, some browsers do not support this protocol.

      To regain access, re-load the network stack and use a public resolver like Cloudflare’s 1.1.1.1 or Google’s 8.8.8.8.
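
      It can also help to clear the operating system's DNS cache after leaving a captive portal, so that stale portal answers are discarded. The exact command depends on the platform and version; common forms include:

      ipconfig /flushdns                                               (Windows)
      sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder   (macOS)
      resolvectl flush-caches                                         (Linux with systemd-resolved)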

      DNS Poisoning

      The answers provided by DNS servers are listed in the DNS server/resolver’s tables. Access to these tables is usually highly limited. When DNS is correctly configured for host security, it limits how DNS entries can be changed.

      It’s possible to poison DNS servers intentionally with spurious entries in an attempt to hijack browser activity. When performed by bad actors, these entries can redirect browser requests to malware, cryptomining scripts, or competitor sites.

      Administrators regularly use dig and a file of test hosts to compare changes in entries. The value of the test is to show when the list of hosts has changed, and for what reason. A changed address in a popular site may mean its entry has been hijacked upstream, locally, or even at a TLD server. It may also mean the IP address has simply been changed administratively. Poisoned entries must be deleted where possible.
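
      A minimal sketch of such a periodic check follows. The file name watchlist.txt is hypothetical and should contain one domain per line; the -f option runs dig in batch mode over that list, and diff highlights any entries that changed since the previous snapshot:

      dig -f watchlist.txt +short > dns-today.txt
      diff dns-yesterday.txt dns-today.txt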

      Hosting Provider Problems

      A hosting provider, or the domain registrar, can flush its DNS cache and inform the upstream/downstream providers it connects to. Administrators making changes to DNS entries must contend with the cache TTL on both upstream and downstream servers. Changes can propagate quickly where a CDN or similar service, such as Akamai or Cloudflare, is the stored address for downstream servers.

      Records

      Each hosted domain requires an A record that points to the IP address serving the domain. The CNAME record provides an alias for queries, allowing names such as “www” or other common prefixes to resolve to the same target.

      DNS MX records point to a domain’s mail servers. Most hosting providers run mail through an application such as cPanel. Email for a domain may be rejected because of missing mail-related DNS records used by:

      • Transport Layer Security (TLS)
      • DomainKeys Identified Mail (DKIM)
      • Sender Policy Framework (SPF)
      • Domain-based Message Authentication, Reporting, and Conformance (DMARC)

      These entries must be valid, and the mail engine used must comply with the DNS records for the domain.
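
      Because these records are ordinary DNS data, dig can verify what is actually published. The commands below are a sketch: replace linode.com with your own domain, and note that the DKIM selector name (shown here as selector) depends on your mail provider:

      dig linode.com mx +short
      dig linode.com txt +short
      dig _dmarc.linode.com txt +short
      dig selector._domainkey.linode.com txt +short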

      Host provider email records are entered in the provider’s DNS record for a domain, or maintained separately by a domain administrator.

      Alterations

      Some host providers and registrars lock DNS records so that they cannot be changed through fraudulent or unauthorized access. To prevent hijacking and fraud, these locks require obtaining a key, providing credentials, or clearing other verification steps before a record can be altered.

      The authentication credentials needed to change DNS records should be stored carefully. Recovering them usually takes time and requires multiple methods of authentication. Domain hijacks occur when credentials are guessed, spoofed, or brute-forced.

      Conclusion

      Simple typos are a huge problem. So are network outages that initially appear to be DNS problems. Verifying accuracy and the network path fixes many DNS outages. User/host configuration problems, especially an incorrect default DNS/resolver, can also be quickly checked.

      When resolvers must be checked, the toolkit can be simple. Much information can be revealed about the listings inside resolvers using nslookup and dig. Data can be compared from period to period.

      Following a hosting provider’s documentation helps you keep complete DNS records that work correctly with the provider, its services, and email.

      DNS servers are not invulnerable. They can be poisoned. It’s good practice to check important records on a periodic basis to test listing integrity, especially before users fill the support email box.




      Solving Real World Problems With Bash Scripts – A Tutorial


      Updated by Linode. Contributed by Mihalis Tsoukalos.

      Introduction

      This guide presents some of the advanced capabilities of the bash shell by showing practical and fully functional bash scripts. It also illustrates how you can work with dates and times in bash scripts and how to write and use functions in bash.

      In This Guide

      This guide covers functions in the bash shell, working with dates and times, practical bash scripts for administrators, and a set of additional examples.

      Note

      This guide is written for a non-root user. Depending on your configuration, some commands might require the help of sudo in order to properly execute. If you are not familiar with the sudo command, see the Users and Groups guide.

      Functions in bash shell

      The bash scripting language has support for functions. The parameters of a function can be accessed as $1, $2, etc. and you can have as many parameters as you want. If you are interested in finding out the name of the function, you can use the FUNCNAME variable. Functions are illustrated in functions.sh, which is as follows:

      functions.sh
      #!/bin/bash
      
      function f1 {
          echo Hello from $FUNCNAME!
          VAR="123"
      }
      
      f2() {
          p1=$1
          p2=$2
          sum=$((${p1} + ${p2}))
          echo "${sum}"
      }
      
      f1
      echo ${VAR}
      
      mySum="$(f2 1 2)"
      echo mySum = $mySum
      
      mySum="$(f2 10 -2)"
      echo mySum = $mySum

      Run the script with the following command:

      ./functions.sh
      

      The output will look like this:

        
      Hello from f1!
      123
      mySum = 3
      mySum = 8
      
      

      Note

      If you want to check whether a function parameter exists or not, you can use the statement:

      if [ -z "$1" ]
      

      Using bash Functions as Shell Commands

      This is a trick that allows you to use bash functions as shell commands. You can execute the above code as

      . ./functions.sh
      

      Notice the dot in front of the script name. After that, you can use f1 as a regular command in the terminal where you executed . ./functions.sh. You will also be able to use the f2 command with two integers of your choice to quickly calculate a sum. If you want a function to be globally available, you can put its implementation in a bash configuration file that is automatically executed each time a new bash session begins. A good place for that is ~/.bash_profile.
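
      As a minimal sketch, a small function added to ~/.bash_profile becomes available in every new bash session. The helper name sum2 used below is purely illustrative:

      # Added to ~/.bash_profile (sketch): available in every new bash session
      sum2() {
          echo "$(( $1 + $2 ))"
      }

      After adding it, open a new terminal (or run source ~/.bash_profile) and call sum2 3 4 directly from the prompt.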

      Working with Dates and Times

      Bash allows you to work with dates and times using traditional UNIX utilities such as date(1). The main difficulty many programmers run into when working with dates and times is getting or using the correct format. This is a matter of using date(1) with the correct parameters and has nothing to do with bash scripting per se. Using date(1) as date +[something] means that we want to use a custom format – this is signified by the use of + in the command line argument of date(1).

      A good way to create unique filenames is to use UNIX epoch time or, if you want your filename to be more descriptive, a date-time combination. The uniqueness of the filename comes from the level of detail in the format: with a sufficiently precise timestamp, it is very unlikely that two runs of the script on the same UNIX machine produce the same value.

      The example that follows will shed some light on the use of date(1).

      Using Dates and Times in bash scripts

      The code of dateTime.sh is the following:

      dateTime.sh
      #!/bin/bash
      
      # Print default output
      echo `date`
      
      # Print current date without the time
      echo `date +"%m-%d-%y"`
      
      # Use 4 digits for year
      echo `date +"%m-%d-%Y"`
      
      # Display time only
      echo `date +"%T"`
      
      # Display 12 hour time
      echo `date +"%r"`
      
      # Time without seconds
      echo `date +"%H:%M"`
      
      # Print full date
      echo `date +"%A %d %b %Y %H:%M:%S"`
      
      # Nanoseconds
      echo Nanoseconds: `date +"%s-%N"`
      
      # Different timezone by name
      echo Timezone: `TZ=":US/Eastern" date +"%T"`
      echo Timezone: `TZ=":Europe/London" date +"%T"`
      
      # Print epoch time - convenient for filenames
      echo `date +"%s"`
      
      # Print week number
      echo Week number: `date +"%V"`
      
      # Create unique filename
      f=`date +"%s"`
      touch $f
      ls -l $f
      rm $f
      
      # Add epoch time to existing file
      f="/tmp/test"
      touch $f
      mv $f $f.`date +"%s"`
      ls -l "$f".*
      rm "$f".*

      If you want an even more unique filename, you can also include nanoseconds when constructing it.
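
      For example, assuming GNU date (as found on Linux), a name built from seconds plus nanoseconds is effectively collision-free:

      f="backup.$(date +%s%N)"
      touch "$f"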

      Run the dateTime script:

      ./dateTime.sh
      

      The output of dateTime.sh will resemble the following:

        
      Fri Aug 30 13:05:09 EST 2019
      08-30-19
      08-30-2019
      13:05:09
      01:05:09 PM
      13:05
      Friday 30 Aug 2019 13:05:09
      Nanoseconds: 1567159562-373152585
      Timezone: 06:05:09
      Timezone: 10:05:09
      1567159509
      Week number: 35
      -rw-r--r--  1 mtsouk  staff  0 Aug 30 13:05 1567159509
      -rw-r--r--  1 mtsouk  wheel  0 Aug 30 13:05 /tmp/test.1567159509
      
      

      Bash scripts for Administrators

      This section will present some bash scripts that are generally helpful for UNIX system administrators and power users.

      Watching Free Disk Space

      The bash script that follows watches the free space of your hard disks and warns you when that free space drops below a given threshold – the value of the threshold is given by the user as a command line argument. Notice that if the program gets no command line argument, a default value is used as the threshold.

      freeDisk.sh
      #!/bin/bash
      
      # default value to use if none specified
      PERCENT=30
      
      # test if a command line argument is present
      if [[ $# -le 0 ]]
      then
          printf "Using default value for threshold!\n"
      # test if argument is an integer
      # if it is, use that as percent, if not use default
      else
          if [[ $1 =~ ^-?[0-9]+([0-9]+)?$ ]]
          then
              PERCENT=$1
          fi
      fi
      
      let "PERCENT += 0"
      printf "Threshold = %d\n" $PERCENT
      
      df -Ph | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5,$1 }' | while read data;
      do
          used=$(echo $data | awk '{print $1}' | sed s/%//g)
          p=$(echo $data | awk '{print $2}')
          if [ $used -ge $PERCENT ]
          then
              echo "WARNING: The partition "$p" has used $used% of total available space - Date: $(date)"
          fi
      done
      • The sed s/%//g command is used for omitting the percent sign from the output of df -Ph.
      • df is the command to report file system disk space usage, while the options -Ph specify POSIX output and human-readable, meaning, print sizes in powers of 1024.
      • awk(1) is used for extracting the desired fields from output of the df(1) command.

      Run ./freeDisk.sh with this command:

      ./freeDisk.sh
      

      The output of freeDisk.sh will resemble the following:

        
      Using default value for threshold!
      Threshold = 30
      WARNING: The partition "/dev/root" has used 61% of total available space - Date: Wed Aug 28 21:14:51 EEST 2019
      
      

      Note

      This script and others like it can be easily executed as cron jobs and automate tasks the UNIX way.

      Notice that the code of freeDisk.sh looks relatively complex. This is because bash is not good at the conversion between strings and numeric values – more than half of the code is for initializing the PERCENT variable correctly.

      Rotating Log Files

      The presented bash script will help you to rotate a log file after exceeding a defined file size. If the log file is connected to a server process, you might need to stop the process before the rotation and start it again after the log rotation is complete – this is not the case with rotate.sh.

      rotate.sh
      #!/bin/bash
      
      f="/home/mtsouk/connections.data"
      
      if [ ! -f $f ]
      then
        echo $f does not exist!
        exit
      fi
      
      touch ${f}
      MAXSIZE=$((4096*1024))
      
      size=`du -b ${f} | tr -s '\t' ' ' | cut -d' ' -f1`
      if [ ${size} -gt ${MAXSIZE} ]
      then
          echo Rotating!
          timestamp=`date +%s`
          mv ${f} ${f}.$timestamp
          touch ${f}
      fi
      • Note that the path to the log file /home/mtsouk/connections.data will not exist by default. You’ll need to either use a log file that already exists like kern.log on some Linux systems, or replace it with a new one.

      • Additionally, the value of MAXSIZE can be a value of your choice, and the script can be edited to suit the needs of your own configuration – you can even make changes to the existing code and provide the MAXSIZE value as a command line argument to the program.

      • The du command is used to estimate file space usage. It’s used to track the files and directories that are consuming excessive space on the hard disk. The -b option tells the command to print sizes in bytes.

      Run the rotate script with the following command:

      ./rotate.sh
      

      The output of rotate.sh when it has reached the threshold defined by MAXSIZE will resemble the following:

        
      Rotating!
      
      

      After running, two files will be created on the system. You can see them with this command:

      ls -l connections.data*
      
        
      -rw-r--r-- 1 mtsouk mtsouk       0 Aug 28 20:18 connections.data
      -rw-r--r-- 1 mtsouk mtsouk 2118655 Aug 28 20:18 connections.data.1567012710
      
      

      If you want to make rotate.sh more generic, you can provide the name of the log file as a command line argument to the bash script.
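
      A minimal sketch of that change: read the path from the first command line argument, falling back to the original hard-coded file when no argument is given.

      # replace the assignment at the top of rotate.sh with:
      f="${1:-/home/mtsouk/connections.data}"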

      Monitoring the Number of TCP Connections

      The presented bash script calculates the number of TCP connections on the current machine and prints that on the screen along with date and time related information.

      tcpConnect.sh
      #!/bin/bash
      
      C=$(/bin/netstat -nt | tail -n +3 | grep ESTABLISHED | wc -l)
      D=$(date +"%m %d")
      T=$(date +"%H %M")
      printf "%s %s %s\n" "$C" "$D" "$T"
      • The main reason for using the full path of netstat(1) when calling it is to make the script as secure as possible.
      • If you do not provide the full path then the script will search all the directories of the PATH variable to find that executable file.
      • Apart from the number of established connections (defined by the C variable), the script prints the month, day of the month, hour of the day, and minutes of the hour. If you want, you can also print the year and seconds.

      Execute the tcpConnect script with the following command:

      ./tcpConnect.sh
      

      The output will be similar to the following:

        
      8 08 28 16 22
      
      

      tcpConnect.sh can easily be executed as a cron(8) job by adding the following to your cron file:

      */4 * * * * /home/mtsouk/bin/tcpConnect.sh >> ~/connections.data
      

      The previous cron(8) job executes tcpConnect.sh every 4 minutes, every hour of each day and appends the results to ~/connections.data in order to be able to watch or visualize them at any time.

      Additional Examples

      Sorting in bash

      The presented example will show how you can sort integer values in bash using the sort(1) utility:

      sort.sh
      #!/bin/bash
      
      # test that at least one argument was passed
      if [[ $# -le 0 ]]
      then
          printf "Not enough arguments!\n"
          exit
      fi
      
      count=1
      
      for arg in "$@"
      do
          if [[ $arg =~ ^-?[0-9]+([0-9]+)?$ ]]
          then
              n[$count]=${arg}
              let "count += 1"
          else
              echo "$arg is not a valid integer!"
          fi
      done
      
      sort -n <(printf "%s\n" "${n[@]}")
      • The presented technique uses an array to store all integer values before sorting them.
      • All numeric values are given as command line arguments to the script.
      • The script tests whether each command line argument is a valid integer before adding it to the n array.
      • The sorting part is done using sort -n, which sorts the array numerically. If you want to deal with strings, then you should omit the -n option.
      • The printf command, after sort -n, prints every element of the array on a separate line, whereas the <( ) process substitution lets sort -n read the output of printf as if it were a file.

      Run the sort script with the following command:

      ./sort.sh 100 a 1.1 1 2 3 -1
      

      The output of sort.sh will resemble the following:

        
      a is not a valid integer!
      1.1 is not a valid integer!
      -1
      1
      2
      3
      100
      
      

      A Game Written in bash

      This section presents a simple guessing game written in bash(1). The logic of the game is based on a random number generator that produces a number between 1 and 20 and expects the user to guess it.

      guess.sh
      #!/bin/bash
      NUMGUESS=0
      
      echo "$0 - Guess a number between 1 and 20"
      
      (( secret = RANDOM % 20 + 1 ))
      
      while [[ guess -ne secret ]]
      do
          (( NUMGUESS = NUMGUESS + 1 ))
          read -p "Enter guess: " guess
      
          if (( guess < $secret )); then
              echo "Try higher..."
          elif (( $guess > $secret )); then
              echo "Try lower..."
          fi
      done
      
      printf "Yes! You guessed it in $NUMGUESS guesses.\n"

      Run the guess script:

      ./guess.sh
      

      The output of guess.sh will resemble the following:

        
      ./guess.sh - Guess a number between 1 and 20
      Enter guess: 1
      Try higher...
      Enter guess: 5
      Try higher...
      Enter guess: 7
      Try lower...
      Enter guess: 6
      Yes! You guessed it in 4 guesses.
      
      

      Calculating Letter Frequencies

      The following bash script calculates the number of times each letter appears in a file.

      freqL.sh
      #!/bin/bash
      
      if [ -z "$1" ]; then
          echo "Usage: $0 filename."
          exit 1
      fi
      
      filename=$1
      
      while read -n 1 c
      do
          echo "$c"
      done < "$filename" | grep '[[:alpha:]]' | sort | uniq -c | sort -nr
      • The script reads the input file character by character, prints each character, and processes the output using the grep, sort, and uniq commands to count the frequency of each character.
      • The [:alpha:] pattern used by grep(1) matches all alphabetic characters and is equivalent to A-Za-z.
      • If you also want to include numeric characters in the output, you should use [:alnum:] instead.
      • Additionally, if you want the output to be sorted alphabetically instead of numerically, you can execute freqL.sh and then process its output using the sort -k2,2 command.

      Run the freqL script:

      ./freqL.sh text.txt
      

      The output of freqL.sh will resemble the following:

        
         2 b
         1 s
         1 n
         1 i
         1 h
         1 a
      
      

      Note

      The file text.txt will not exist by default. You can use a pre-existing text file to test this script, or you can create the text.txt file using a text editor of your choice.

      Timing Out read Operations

      The read builtin command supports the -t timeout option, which allows you to time out a read operation after a given number of seconds. This is very convenient when user input might take too long to arrive. The technique is illustrated in timeOut.sh.

      timeOut.sh
      #!/bin/bash
      
      if [[ $# -le 0 ]]
      then
          printf "Not enough arguments!\n"
          exit
      fi
      
      TIMEOUT=$1

      while :
      do
        # read returns a non-zero exit status when the -t timeout expires
        if ! read -t $TIMEOUT -p "Do you want to Quit(Y/N): "; then
          echo "Timing out - user response took too long!"
          break
        fi
      
        case $REPLY in
        [yY]*)
          echo "Quitting!"
          break
          ;;
        [nN]*)
          echo "Do not quit!"
          ;;
        *) echo "Please choose Y or N!"
           ;;
        esac
      done
      • The timeout of the read operation is given as a command line argument to the script, an integer representing the number of seconds that will pass before the script will “time out” and exit.
      • The case block is what handles the available options.
      • Notice that what you are going to do in each case is up to you – the presented code uses simple commands to illustrate the technique.

      Run the timeOut script:

      ./timeOut.sh 10
      

      The output of timeOut.sh will resemble the following:

        
      Do you want to Quit(Y/N): Please choose Y or N!
      Do you want to Quit(Y/N): Y
      Quitting!
      
      

      Alternatively, you can wait the full ten seconds for your script to time out:

        
      Do you want to Quit(Y/N):
      Timing out - user response took too long!
      
      

      Converting tabs to spaces

      The presented utility, tabs2spaces.sh, reads a text file and converts each tab to a specified number of space characters. Notice that the presented script replaces each tab character with 4 spaces, but you can change that value in the code or even accept it as a command line argument (a sketch of that variation follows the notes below).

      tabs2spaces.sh
      #!/bin/bash
      
      for f in "$@"
      do
          if [ ! -f $f ]
          then
            echo $f does not exist!
            continue
          fi
          echo "Converting $f.";
          newFile=$(expand -t 4 "$f");
          echo "$newFile" > "$f";
      done
      • The script uses the expand(1) utility that does the job of converting tabs to spaces for us.
      • expand(1) writes its results to standard output – the script saves that output and replaces the current file with the new output, which means that the original file will change.
      • Although tabs2spaces.sh does not use any fancy techniques or code, it does the job pretty well.
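
      The following sketch shows the command line argument variation mentioned earlier. It assumes the first argument is the number of spaces per tab and the remaining arguments are the files to convert:

      #!/bin/bash

      # usage (sketch): ./tabs2spaces.sh <width> file1 [file2 ...]
      if [[ ! $1 =~ ^[0-9]+$ ]]
      then
          echo "The first argument must be the tab width (a positive integer)."
          exit 1
      fi

      width="$1"
      shift

      for f in "$@"
      do
          if [ ! -f "$f" ]
          then
              echo "$f does not exist!"
              continue
          fi
          echo "Converting $f."
          newFile=$(expand -t "$width" "$f")
          echo "$newFile" > "$f"
      done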

      Run the tabs2spaces script:

      ./tabs2spaces.sh textfile.txt
      

      The output of tabs2spaces.sh will resemble the following:

        
      Converting textfile.txt.
      
      

      Note

      The file textfile.txt will not exist by default. You can use a pre-existing text file to test this script, or you can create the textfile.txt file using a text editor of your choice.

      Counting files

      The following script will look into a predefined list of directories and count the number of files that exist in each directory and its subdirectories. If that number is above a threshold, then the script will generate a warning message.

      countFiles.sh
      #!/bin/bash
      
      DIRECTORIES="/bin:/home/mtsouk/code:/srv/www/www.mtsoukalos.eu/logs:/notThere"
      
      # default threshold (an arbitrary choice) used when no argument is given
      COUNT=50

      # test if a command line argument is present
      if [[ $# -le 0 ]]
      then
          echo "Using default value for COUNT!"
      else
          if [[ $1 =~ ^-?[0-9]+([0-9]+)?$ ]]
          then
              COUNT=$1
          fi
      fi
      
      while read -d ':' dir; do
          if [ ! -d "$dir" ]
          then
              echo "**" Skipping $dir
              continue
          fi
          files=`find $dir -type f | wc -l`
          if [ $files -lt $COUNT ]
          then
              echo "Everything is fine in $dir: $files"
          else
              echo "WARNING: Large number of files in $dir: $files!"
          fi
      done <<< "$DIRECTORIES:"

      The counting of the files is done with the find $dir -type f | wc -l command. You can read more about the find command in our guide.

      Run the countFiles script:

      ./countFiles.sh 100
      

      The output of countFiles.sh will resemble the following:

        
      WARNING: Large number of files in /bin: 118!
      Everything is fine in /home/mtsouk/code: 81
      WARNING: Large number of files in /srv/www/www.mtsoukalos.eu/logs: 106!
      ** Skipping /notThere
      
      

      Summary

      The bash scripting language is a powerful programming language that can save you time and energy when applied effectively. If you have a lot of useful bash scripts, then you can automate things by creating cron jobs that execute your bash scripts. It is up to the developer to decide whether they prefer to use bash or a different scripting language such as perl, ruby, or python.


      This guide is published under a CC BY-ND 4.0 license.


