
      How to Write the Perfect Meta Description & Supercharge Your Organic Click-Through Rate


      Strong Search Engine Optimization (SEO) fundamentals are essential if you want your content to be found organically in search engines. However, despite all of our efforts to cater to Google’s robots, we still need to ensure we’re paying special attention to the human element.

      Fortunately, a well-written meta description can catch readers’ attention and convince them that your content will answer their questions and is worth their time. In turn, it can increase your site’s organic click-through rate (CTR) and help tell Google your page is valuable.

      In this post, we’ll introduce you to meta descriptions and their importance to your website. Then, we’ll share some important tips to help you write better meta descriptions and supercharge your organic CTR. Let’s get started!

      An Introduction to Meta Descriptions

      Meta descriptions are the snippets of text you see underneath the title within Search Engine Results Pages (SERPs), as seen in the example below:

      Meta description examples

      The main goal of a good meta description is to give readers an idea of what the web page is all about. Naturally, titles also play a vital role here, but there’s only so much information you can fit into a single headline.

      Meta descriptions provide up to a couple of sentences to expand on your page’s content. You can either write them yourself or have search engines generate them automatically based on your web page’s content.

      As convenient as having search engines do the work for you might sound, we strongly recommend that you write your own meta descriptions. That way, you get full control over what shows up on the SERPs and social media sites while also increasing your chances of engaging users.
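
      In practice, a hand-written meta description lives in a meta tag inside your page’s head section; most content management systems and SEO plugins simply fill in this tag for you. Here’s a minimal sketch of what that markup looks like (the title and description wording below are purely illustrative):

      <head>
        <title>Your Page Title</title>
        <!-- The snippet search engines may display under your title in the SERPs -->
        <meta name="description" content="A short, accurate summary of this page that tells searchers what they'll find and why it's worth a click.">
      </head>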

      Let’s take a look at some meta description examples for a specific line of shoes. You can tell the meta description below was generated automatically, and it doesn’t give you much to go on:

      An example of a poorly-written meta description

      Here’s another result for the same product search, this one using a stronger meta description:

      A meta description from Nike news

      It’s important to understand that meta descriptions only give you a limited number of characters to play with. On desktops, that can be up to 160 characters, whereas mobile users will only see 120 of them. Roughly speaking, that means you get about two lines of text.

      Why Meta Descriptions Are Important

      SEO is all about relevance. In order to rank, you first need to produce high quality content designed to answer a specific question or cover a topic thoroughly and accurately. Beyond that, there are a whole bunch of SEO best practices, tips, and tricks that can help you rank better.

      Google has grouped many of these into signals its algorithm uses to rank content in Search, referred to as “ranking factors.” There are over 200 of them, in fact. Examples include keyword usage, metadata, media usage within content, backlinks, and engagement.

      While meta descriptions themselves are not a “ranking factor” — in that they don’t directly influence the rankings of your pages — they can indirectly help you by encouraging real human searchers to click on your results, thus signaling higher user engagement and potentially influencing your search position.

      What to Include in a Meta Description

      Two lines of text isn’t much, but more often than not, it’s enough to cover a few key elements. At a minimum, this should include:

      1. What your page is about
      2. How it can benefit the reader

      If a meta description is too vague, then you’re not selling users on the idea of visiting your website. You’ll probably still get clicks, of course, but not as many as you might have otherwise.

      Let’s say, for example, that you wanted to write a meta description for the article you’re reading right now. Here’s a not-so-good example:

      Have you ever wondered what meta descriptions are? Wonder no more, because we’ll tell you everything you need to know.

      While it hits on the article’s primary topic, it doesn’t do a good job of previewing the page’s actual content. Now let’s give it another go, keeping in mind the fundamental elements we want to include:

      Meta descriptions convey what your web page is all about and can indirectly influence rankings. Find out how to write the perfect meta description here.

      This is short and to the point, and we even had enough characters left over to include a simple Call to Action (CTA). It may not win any literary awards, but it will get the job done.

      How to Write the Perfect Meta Description (10 Key Tips)

      At this point, you know the basics of what a meta description should include. Now here are 10 tips that will help you really knock your meta descriptions out of the park.

      1. Use Relevant Keywords

      If you’re reading this, you’re probably familiar with the concept of keywords. Ideally, you’ll use key phrases organically throughout all of your content, including in metadata such as your descriptions.

      Let’s say, for example, that you’re writing a recipe and you want to optimize it for the search term “how to cook a healthy lasagna”. That’s an easy term to work into a meta description:

      Learning how to cook a healthy lasagna is easier than you might imagine. Let’s go over a recipe you can cook in under two hours!

      Including keywords within your meta descriptions is an SEO best practice. It gives search engines a better idea of what your content is all about.

      However, as always, make sure to work those keywords in organically. That means not stuffing your descriptions full of keywords.

      In fact, Google advises against using long lists of keywords in your meta descriptions. So, you’ll want to make sure your description still reads like something a human (not a bot) would write.

      2. Consider Meta Description Character Count

      So far, most of the examples we’ve shown you have come in well under the maximum character count for the major search engines. You want to get some mileage out of your meta descriptions, but in practice, running over the character count isn’t as serious a problem as you might think.

      To build on our earlier example of a healthy lasagna recipe, you could easily expand on its description to cover more information:

      Learning how to cook a healthy lasagna is easier than you might imagine. For this recipe, we’re substituting meat with eggplants, which means it will cook faster and feed up to four people.

      That example goes over the character limit for both desktop and mobile meta descriptions in Google. In practice, it would get cut off and look something like this:

      Learning how to cook a healthy lasagna is easier than you might imagine. For this recipe, we’re substituting meat with eggplants, which means it will cook …

      That snippet still provides plenty of information, so you don’t necessarily need to change it. What matters is that you include the essential details early on, so whatever does get cut off is just supplementary information.

      You’ll also want to keep in mind that Google recommends against very short meta descriptions. Although it’s great to be concise, a one-sentence description is unlikely to contain enough information to make readers click through.

      3. Create Unique Meta Descriptions

      When it comes to meta descriptions, there are two kinds of potential duplicates. It’s good practice to avoid both of them:

      • Mimicking other sites’ descriptions
      • Having several of your pages use the same description

      Overall, duplicate content is almost always bad news when it comes to SEO. Moreover, it can hurt your CTR if you have several pages competing for the same search terms. For this reason, Google recommends against repurposing the same meta description for different pages or posts.

      For practical purposes, there’s no reason all of your pages shouldn’t have unique meta descriptions. If it takes you more than a few minutes to write one, then you’re probably overthinking it.


      4. Write Compelling Copy

      Most meta descriptions are pretty boring, at least linguistically speaking. The need to cover so much information in such a limited space doesn’t lend itself well to innovation.

      One way to make your meta descriptions stand out is by using compelling language. To do that, take a look at what other websites are writing for the keywords you want to rank for. Let’s say, for example, that you’re looking for a cast iron pizza recipe.

      A lot of the top-ranking content will be similar, which means the meta descriptions will share elements as well. However, not all descriptions are equally effective:

      Examples of pizza recipes in the search results

      Some of our favorite hits from the above example include the words “crispy”, “buttery”, and “chewy”. There are five results here, but the first and last stand out due to their word choices.

      Think about it this way — if you’re staring at that page trying to decide which recipe to follow, you’ll probably pick the one that sounds more delicious. At that stage, you don’t know how good the recipe will be, so your only indicators are the title tag, picture, and word choice in the meta description.

      5. Be Specific and Concise

      A vague meta description is unlikely to get many click-throughs. If a reader doesn’t know what to expect from the outset, there’s little motivation for them to spend the effort visiting your page and reading your content.

      Although a meta description is short, you can pack details into it by being picky with your words. Using precise language can communicate the gist of your content without running over the character count.

      Google provides this example of a highly specific meta description for a product page:

      An example of a specific and detailed meta description

      When writing a meta description for a blog post, you might include a quick summary of its headings so that readers have a detailed overview of the content:

      An example of a meta description that uses a list format

      Alternatively, you might summarize one of the key points from your article in your meta description. The exact approach will depend on the nature of your post and what information is most relevant to readers.

      6. Make Your Meta Description Actionable

      Simply writing a plain summary of your blog post can tell potential readers what it’s about. However, it could come across as generic and even boring if you don’t add a little excitement to your meta description.

      You can address this by making your meta description actionable. This means prompting the reader to actually do something when they click on the headline and go to your article.

      In the following meta description example, the website engages readers by encouraging them to “try these tips”, while also explaining the benefits of the article:

      An example of a meta description with a call to action

      Using a CTA in your meta description can catch a reader’s attention when they’re scanning the search engine results. It can also tell them a little more about what they should expect to learn from your content.

      There are a couple of ways to make your meta description actionable. You can include a direct CTA at the end of it, like in the example above. Alternatively, you might use active voice throughout to excite the reader and incentivize them to click through:

      An example of a meta description with active language

      Both approaches are valid and can be effective for attracting readers to your site, so it’s a matter of preference. However, in all cases, we recommend staying away from passive language as much as possible.

      7. Consider Your Target Audience

      It’s also essential to consider your target audience when writing meta descriptions for your content. This is the group of people that typically read your blog or would benefit most from your content.

      It might seem like writing a more generic meta description will ensure your content appeals to more people. However, this approach is often counterproductive. That’s because opting for a bland meta description means you might miss out on attracting readers who will better connect with your content.

      One of the easiest ways to target your ideal audience is by thinking carefully about the tone of your meta description. For example, it could be a good idea to incorporate some humor and casual language if your target reader is Gen Z:

      An example of a humorous meta description

      Alternatively, a more serious and to-the-point description with formal language could be more appropriate if your content is targeted toward older professionals.

      8. Appeal to Emotion

      It’s also a good idea to appeal to emotion in your meta descriptions. This approach can involve targeting a positive emotion, such as a reader’s excitement, or a negative pain point, like fear or trepidation.

      Targeting emotions within your meta descriptions can be highly effective because it plays on readers’ psychological triggers. In fact, studies show that most people make emotional decisions when choosing brands or buying products.

      If you choose to target a positive emotion within your meta descriptions, consider using exciting words to make readers feel more invested in your content. Here’s an example:

      Asking for a promotion can supercharge your career and earn your employer’s respect. Learn how to ask for and actually get a promotion!

      The same approach can also work for negative emotions:

      Are you worried about what will happen to your family after you pass away? Check out our life insurance policies and find the best plan for you!

      In both scenarios, using a question at the start of the meta description can immediately tap into the reader’s state of mind. We’ll explore this a little more in the next tip.

      9. Answer a Specific Question or Concern

      Most people use search engines to answer a question they have in mind. You can see this in action with the People also ask section in Google search results:

      The People also ask section in Google search results

      Although a meta description doesn’t give you enough room to fully answer a question, you can target search intent. This refers to the reason a user is making a particular search online.

      Targeting search intent can convince readers that your post won’t be a waste of their time. By immediately bringing up the reason they’re searching in the first place, you can assure readers that the rest of your content will provide value.

      You can address search intent by posing a question within your meta description or outlining a concern the reader might have. For example, if you’re teaching readers how to create an online store, your meta description might look like this:

      Are you looking to turn your passion into a business and make money online? Check out our complete guide to creating an online store!

      Since search intent goes hand in hand with keyword research, you’ll want to consider it when planning and writing your post. Then, you can provide a concise and exciting invitation to keep reading within your meta description.

      Supercharge Your Organic CTR with Strong Meta Descriptions

      When you boil it down, SEO is very competitive. You’ll never be the only website within a niche, so you’ll need to look for ways to make your pages stand out in the SERPs. Fortunately, an informative and unique meta description is a great way to catch potential visitors’ eyes.

      There are a few ways to write a perfect meta description. You can include keywords and leverage interesting language. We also recommend being as specific and detailed as possible, using emotional vocabulary and phrases that will appeal to your target audience.

      Are you looking to maximize your reach and get new eyes on your site? Our DreamHost SEO marketing services can help you optimize your existing content, create new posts for your website, and provide monthly reports to track your progress. Check out our SEO plans today!





      How To Build a Rate Limiter With Node.js on App Platform


      The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Rate limiting manages your network’s traffic and limits the number of times someone can repeat an operation in a given duration, such as calling an API. A service without a layer of protection against rate limit abuse is prone to overload, which hampers your application’s proper operation for legitimate customers.

      In this tutorial, you will build a Node.js server that will check the IP address of the request and also calculate the rate of these requests by comparing the timestamp of requests per user. If an IP address crosses the limit you have set for the application, you will call Cloudflare’s API and add the IP address to a list. You will then configure a Cloudflare Firewall Rule that will ban all requests with IP addresses in the list.

      By the end of this tutorial, you will have built a Node.js project deployed on DigitalOcean’s App Platform that protects a Cloudflare routed domain with rate limiting.

      Prerequisites

      Before you begin this guide, you will need:

      Step 1 — Setting Up the Node.js Project and Deploying to DigitalOcean’s App Platform

      In this step, you will expand on your basic Express server, push your code to a GitHub repository, and deploy your application to App Platform.

      Open the project directory of the basic Express server with your code editor. Create a new file named .gitignore in the root directory of the project. Add the following lines to the newly created .gitignore file:

      .gitignore

      node_modules/
      .env
      

      The first line in your .gitignore file tells git not to track the node_modules directory, which keeps your repository size small. The node_modules directory can be regenerated whenever required by running the command npm install. The second line prevents the environment variable file from being tracked. You will create the .env file in a later step.

      Navigate to your server.js in your code editor and modify the following lines of code:

      server.js

      ...
      app.listen(process.env.PORT || 3000, () => {
          console.log(`Example app is listening on port ${process.env.PORT || 3000}`);
      });
      

      Conditionally reading PORT from the environment lets the server run on whatever port the platform assigns, with 3000 as the fallback.

      Note: The string in console.log() is wrapped within backticks (`) and not within quotes. This enables you to use template literals, which let you embed expressions within strings.

      Visit your terminal window and run your application:

      Your browser window will display Successful response. In your terminal, you will see the following output:

      Output

      Example app is listening on port 3000

      With your Express server running successfully, you’ll now deploy to App Platform.

      First, initialize git in the root directory of the project and push the code to your GitHub account. Navigate to the App Platform dashboard in the browser and click on the Create App button. Choose the GitHub option and authorize with GitHub, if necessary. Select your project’s repository from the dropdown list of projects you want to deploy to App Platform. Review the configuration, then give a name to the application. For the purpose of this tutorial, select the Basic plan as you’ll work in the application’s development phase. Once ready, click Launch App.

      Next, navigate to the Settings tab and click on the Domains section. Add your domain routed via Cloudflare into the Domain or Subdomain Name field. Select the You manage your domain option to copy the CNAME record that you’ll add to your domain’s DNS settings on Cloudflare.

      With your application deployed to App Platform, head over to your domain’s dashboard on Cloudflare in a new tab as you will return to App Platform’s dashboard later. Navigate to the DNS tab. Click on the Add Record button and select CNAME as your Type, @ as the root, and paste in the CNAME you copied from the App Platform. Click on the Save button, then navigate to the Domains section under the Settings tab in your App Platform’s Dashboard and click on the Add Domain button.

      Click the Deployments tab to see the details of the deployment. Once deployment finishes, you can open your_domain to view it on the browser. Your browser window will display: Successful response. Navigate to the Runtime Logs tab on the App Platform dashboard, and you will get the following output:

      Output

      Example app is listening on port 8080

      Note: The port number 8080 is the default assigned port by the App Platform. You can override this by changing the configuration while reviewing the app before deployment.

      With your application now deployed to App Platform, let’s look at how to set up a cache to calculate the rate of requests for the rate limiter.

      Step 2 — Caching User’s IP Address and Calculating Requests Per Second

      In this step, you will store a user’s IP address in a cache with an array of timestamps to monitor the requests per second of each user’s IP address. A cache is temporary storage for data frequently used by an application. The data in a cache is usually kept in quick access hardware like RAM (Random-Access Memory). The fundamental goal of a cache is to improve data retrieval performance by decreasing the need to visit the slower storage layer underneath it. You will use three npm packages: node-cache, is-ip, and request-ip to aid in the process.

      The request-ip package captures the user’s IP address used to request the server. The node-cache package creates an in-memory cache, which you will use to keep track of users’ requests. You’ll use the is-ip package to check whether an IP address is an IPv6 address. Install the node-cache, is-ip, and request-ip packages via npm in your terminal:

      • npm i node-cache is-ip request-ip

      Open the server.js file in your code editor and add the following lines of code below const express = require('express');:

      server.js

      ...
      const requestIP = require('request-ip');
      const nodeCache = require('node-cache');
      const isIp = require('is-ip');
      ...
      

      The first line here grabs the requestIP module from the request-ip package you installed. This module captures the user’s IP address used to request the server. The second line grabs the nodeCache module from the node-cache package. nodeCache creates an in-memory cache, which you will use to keep track of users’ requests per second. The third line takes the isIp module from the is-ip package. This checks whether an IP address is IPv6, which you will then format in CIDR notation per Cloudflare’s specification.

      Define a set of constant variables in your server.js file. You will use these constants throughout your application.

      server.js

      ...
      const TIME_FRAME_IN_S = 10;
      const TIME_FRAME_IN_MS = TIME_FRAME_IN_S * 1000;
      const MS_TO_S = 1 / 1000;
      const RPS_LIMIT = 2;
      ...
      

      TIME_FRAME_IN_S is a constant variable that determines the period over which your application will average the user’s timestamps. Increasing the period will increase the cache size and hence consume more memory. The TIME_FRAME_IN_MS constant holds the same period expressed in milliseconds. MS_TO_S is the conversion factor you will use to convert time in milliseconds to seconds. The RPS_LIMIT variable is the requests-per-second threshold that triggers the rate limiter; change the value as per your application’s requirements. The value 2 is a moderate threshold that is easy to trigger during the development phase.

      With Express, you can write and use middleware functions, which have access to all HTTP requests coming to your server. To define a middleware function, you will call app.use() and pass it a function. Create a function named ipMiddleware as middleware.

      server.js

      ...
      const ipMiddleware = async function (req, res, next) {
          let clientIP = requestIP.getClientIp(req);
          if (isIp.v6(clientIP)) {
              clientIP = clientIP.split(':').splice(0, 4).join(':') + '::/64';
          }
          next();
      };
      app.use(ipMiddleware);
      
      ...
      

      The getClientIp() function provided by requestIP takes the request object, req, from the middleware as a parameter. The .v6() function comes from the is-ip module and returns true if the argument passed to it is an IPv6 address. Cloudflare’s Lists requires IPv6 addresses in /64 CIDR notation, so you need to format the address to follow the pattern aaaa:bbbb:cccc:dddd::/64. The .split(':') method creates an array from the string containing the IP address, splitting it on the character :. The .splice(0, 4) method returns the first four elements of the array. The .join(':') method returns a string from the array, joined with the character :.
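
      To see how that chain of string and array methods produces the format, here is a hypothetical trace using a sample IPv6 address (not one from the tutorial):

      // Hypothetical trace with a sample IPv6 address:
      const sample = '2001:0db8:85a3:1234:abcd:ef00:0001:0002';
      // split -> ['2001','0db8','85a3','1234','abcd','ef00','0001','0002']
      // splice(0, 4) -> ['2001','0db8','85a3','1234']
      // join(':') + '::/64' -> '2001:0db8:85a3:1234::/64'
      const cidr = sample.split(':').splice(0, 4).join(':') + '::/64';
      console.log(cidr); // 2001:0db8:85a3:1234::/64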

      The next() call directs the middleware to go to the next middleware function if there is one. In your example, it will take the request to the GET route /. This is important to include at the end of your function. Otherwise, the request will not move forward from the middleware.

      Initialize an instance of node-cache by adding the following variable below the constants:

      server.js

      ...
      const IPCache = new nodeCache({ stdTTL: TIME_FRAME_IN_S, deleteOnExpire: false, checkperiod: TIME_FRAME_IN_S });
      ...
      

      With the constant variable IPCache, you are overriding the default parameters native to nodeCache with the custom properties:

      • stdTTL: The interval in seconds after which a key-value pair of cache elements will be evicted from the cache. TTL stands for Time To Live, and is a measure of time after which cache expires.
      • deleteOnExpire: Set to false as you will write a custom callback function to handle the expired event.
      • checkperiod: The interval in seconds after which an automatic check for expired elements is triggered. The default value is 600, and as your application’s element expiry is set to a lesser value, the check for expiry will also happen sooner.

      For more information on the default parameters of node-cache, you will find the node-cache npm package’s docs page useful. The following diagram will help you to visualise how a cache stores data:

      Schematic Representation of Data Stored in Cache

      You will now create a new key-value pair for the new IP address and append to an existing key-value pair if an IP address exists in the cache. The value is an array of timestamps corresponding to each request made to your application. In your server.js file, create the updateCache() function below the IPCache constant variable to add the timestamp of the request to cache:

      server.js

      ...
      const updateCache = (ip) => {
          let IPArray = IPCache.get(ip) || [];
          IPArray.push(new Date());
          IPCache.set(ip, IPArray, (IPCache.getTtl(ip) - Date.now()) * MS_TO_S || TIME_FRAME_IN_S);
      };
      ...
      

      The first line in the function gets the array of timestamps for the given IP address or, if none exists, initializes an empty array. In the following line, you push the present timestamp, obtained from new Date(), into the array. The .set() function provided by node-cache takes three arguments: the key, the value, and the TTL. This TTL overrides the standard stdTTL value set in the IPCache variable. If the IP address already exists in the cache, you use its existing TTL; otherwise, you set the TTL to TIME_FRAME_IN_S.

      The TTL for the current key-value pair is calculated by subtracting the present timestamp from the expiry timestamp. The difference is then converted to seconds and passed as the third argument to the .set() function. The .getTtl() function takes a key (the IP address) as an argument and returns the expiry of the key-value pair as a timestamp. If the IP address does not exist in the cache, it returns undefined, and the fallback value of TIME_FRAME_IN_S is used.

      Note: You need to convert timestamps from milliseconds to seconds because JavaScript stores them in milliseconds while the node-cache module uses seconds.
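
      To make the TTL arithmetic concrete, here is a hypothetical timeline; the timestamps are illustrative and assume TIME_FRAME_IN_S is still 10:

      // t = 0 s : first request  -> the key is absent, IPCache.getTtl(ip) returns undefined,
      //           so the fallback TTL of TIME_FRAME_IN_S (10 s) is used; the key expires at t = 10 s.
      // t = 4 s : second request -> IPCache.getTtl(ip) returns the expiry timestamp (t = 10 s);
      //           (expiry - Date.now()) * MS_TO_S = 6000 ms * (1 / 1000) = 6 s, so the key keeps
      //           its original expiry at t = 10 s instead of being pushed out to t = 14 s.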

      In the ipMiddleware middleware, add the following lines after the if code block if (isIp.v6(clientIP)) to calculate the requests per second of the IP address calling your application:

      server.js

      ...
          updateCache(clientIP);
          const IPArray = IPCache.get(clientIP);
          if (IPArray.length > 1) {
              const rps = IPArray.length / ((IPArray[IPArray.length - 1] - IPArray[0]) * MS_TO_S);
              if (rps > RPS_LIMIT) {
                  console.log('You are hitting limit', clientIP);
              }
          }
      ...
      

      The first line adds the timestamp of the request made by the IP address to the cache by calling the updateCache() function you declared. The second line collects the array of timestamps for the IP address. If the number of elements in the array is greater than one (calculating requests per second needs a minimum of two timestamps) and the requests per second exceed the threshold you defined in the constants, you console.log the IP address. The rps variable calculates the requests per second by dividing the number of requests by the time elapsed between the first and last timestamps, converted to seconds.
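
      As a quick sanity check on the formula, here is a hypothetical case with illustrative numbers (plain numbers stand in for the Date objects, since subtracting dates yields milliseconds either way):

      // Four requests spread over 1500 ms:
      const MS_TO_S = 1 / 1000;
      const timestamps = [0, 500, 1000, 1500];
      const rps = timestamps.length /
          ((timestamps[timestamps.length - 1] - timestamps[0]) * MS_TO_S);
      console.log(rps); // ~2.67, which exceeds an RPS_LIMIT of 2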

      Since you set the deleteOnExpire property to false in the IPCache variable, you now need to handle the expired event manually. node-cache provides a callback function that triggers on the expired event. Add the following lines of code below the IPCache constant variable:

      server.js

      ...
      IPCache.on('expired', (key, value) => {
          if (new Date() - value[value.length - 1] > TIME_FRAME_IN_MS) {
              IPCache.del(key);
          }
      });
      ...
      

      .on('expired') registers a callback function that accepts the key and value of the expired element as arguments. In your cache, value is an array of timestamps of requests. The if condition checks whether the last element in the array is at least TIME_FRAME_IN_MS older than the present time. If it is, no recent requests remain in the window, and the .del() function takes key as an argument and deletes the expired element from the cache.

      For the cases where only some elements of the array are older than TIME_FRAME_IN_MS, you need to remove just those expired timestamps from the cache. Add the following code in the callback function after the if code block if (new Date() - value[value.length - 1] > TIME_FRAME_IN_MS):

      server.js

      ...
          else {
              const updatedValue = value.filter(function (element) {
                  return new Date() - element < TIME_FRAME_IN_MS;
              });
              IPCache.set(key, updatedValue, TIME_FRAME_IN_S - (new Date() - updatedValue[0]) * MS_TO_S);
          }
      ...
      

      The filter() array method native to JavaScript runs a callback function to filter the elements in your array of timestamps. In your case, the callback keeps only the elements that are less than TIME_FRAME_IN_MS old and drops the rest. The filtered elements are assigned to the updatedValue variable, and the cache is updated with updatedValue and a new TTL. The new TTL is anchored to the first element in updatedValue, so the .on('expired') callback fires again when that element’s window elapses. It is calculated as TIME_FRAME_IN_S minus the time that has already passed since the first request’s timestamp in updatedValue.
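
      A brief hypothetical timeline may help here; it assumes TIME_FRAME_IN_S is 10 and that the expiry check fires right as the key’s TTL elapses:

      // Requests arrive at t = 0 s, 4 s, and 8 s; the key expires at t = 10 s and 'expired' fires.
      // The last timestamp (t = 8 s) is only 2 s old, so the else branch runs:
      //   filter() keeps t = 4 s and t = 8 s (both less than 10 s old) and drops t = 0 s.
      //   New TTL = TIME_FRAME_IN_S - (now - updatedValue[0]) * MS_TO_S = 10 - 6 = 4 s,
      //   so the key expires again at t = 14 s, exactly 10 s after the request at t = 4 s.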

      With your middleware functions now defined, visit your terminal window and run your application:

      Then, visit localhost:3000 in your web browser. Your browser window will display: Successful response. Refresh the page repeatedly to hit the RPS_LIMIT. Your terminal window will display:

      Output

      Example app is listening on port 3000
      You are hitting limit ::1

      Note: The IP address for localhost is shown as ::1. Your application will capture the public IP of a user when deployed outside localhost.

      Your application is now able to track the user’s requests and store their timestamps in the cache. In the next step, you will integrate Cloudflare’s API to set up the Firewall.

      Step 3 — Setting Up the Cloudflare Firewall

      In this step, you will set up Cloudflare’s Firewall to block IP Addresses when hitting the rate limit, create environment variables, and make calls to the Cloudflare API.

      Visit the Cloudflare dashboard in your browser, log in, and navigate to your account’s homepage. Open Lists under the Configurations tab. Create a new List with your_list as the name.

      Note: The Lists section is available on your Cloudflare account’s dashboard page and not your Cloudflare domain’s dashboard page.

      Navigate to the Home tab and open your_domain’s dashboard. Open the Firewall tab and click on Create a Firewall rule under the Firewall Rules section. Give the rule a name, your_rule_name, to identify it. In the Field dropdown, select IP Source Address, choose is in list for the Operator, and your_list for the Value. Under the Choose an action dropdown, select Block and click Deploy.

      Create a .env file in the project’s root directory with the following lines to call Cloudflare API from your application:

      .env

      ACCOUNT_MAIL=your_cloudflare_login_mail
      API_KEY=your_api_key
      ACCOUNT_ID=your_account_id
      LIST_ID=your_list_id
      

      To get a value for API_KEY, navigate to the API Tokens tab in the My Profile section of your Cloudflare dashboard. Click View in the Global API Key section and enter your Cloudflare password to view it. Visit the Lists section under the Configurations tab on the account’s homepage. Click on Edit beside the your_list list you created. Get the ACCOUNT_ID and LIST_ID from the URL of your_list in the browser. The URL has the following format:
      https://dash.cloudflare.com/your_account_id/configurations/lists/your_list_id

      Warning: Make sure the content of .env is kept confidential and not made public. Make sure you have the .env file listed in the .gitignore file you created in Step 1.

      Install the axios and dotenv packages via npm in your terminal.

      Open the server.js file in your code editor and add the following lines of code below the nodeCache constant variable:

      server.js

      ...
      const axios = require('axios');
      require('dotenv').config();
      ...
      

      The first line here grabs the axios module from the axios package you installed. You will use this module to make network calls to Cloudflare’s API. The second line requires and configures the dotenv module, which populates the process.env global variable with the values you placed in your .env file so that server.js can read them.

      Add the following inside the if (rps > RPS_LIMIT) condition within ipMiddleware, above console.log('You are hitting limit', clientIP), to call the Cloudflare API:

      server.js

      ...
          const url = `https://api.cloudflare.com/client/v4/accounts/${process.env.ACCOUNT_ID}/rules/lists/${process.env.LIST_ID}/items`;
          const body = [{ ip: clientIP, comment: 'your_comment' }];
          const headers = {
              'X-Auth-Email': process.env.ACCOUNT_MAIL,
              'X-Auth-Key': process.env.API_KEY,
              'Content-Type': 'application/json',
          };
          try {
              await axios.post(url, body, { headers });
          } catch (error) {
              console.log(error);
          }
      ...
      

      You are now calling the Cloudflare API through the URL to add an item, in this case an IP address, to your_list. The Cloudflare API takes your ACCOUNT_MAIL and API_KEY in the header of the request with the keys X-Auth-Email and X-Auth-Key. The body of the request takes an array of objects, with ip as the IP address to add to the list and a comment with the value your_comment to identify the entry. You can modify the value of comment with your own custom comment. The POST request made via axios.post() is wrapped in a try-catch block to handle any errors that may occur. The axios.post() function takes the url, the body, and an object with the headers to make the request.

      When testing out the API request, change the clientIP variable within the ipMiddleware function to a test IP address like 198.51.100.0/24, as Cloudflare does not accept localhost’s IP address in its Lists.

      server.js

      ...
      const clientIP = '198.51.100.0/24';
      ...
      

      Visit your terminal window and run your application:

      Then, visit localhost:3000 in your web browser. Your browser window will display: Successful response. Refresh the page repeatedly to hit the RPS_LIMIT. Your terminal window will display:

      Output

      Example app is listening on port 3000
      You are hitting limit ::1

      When you have hit the limit, open the Cloudflare dashboard and navigate to your_list’s page. You will see the IP address from the code added to your Cloudflare List named your_list. The Firewall block page itself will display after you push your changes to GitHub and the application redeploys.

      Warning: Make sure to change the value in your clientIP constant variable to requestIP.getClientIp(req) before deploying or pushing the code to GitHub.

      Deploy your application by committing the changes and pushing the code to GitHub. As you have set up auto-deploy, the code from GitHub will automatically deploy to your DigitalOcean’s App Platform. As your .env file is not added to GitHub, you will need to add it to App Platform via the Settings tab at App-Level Environment Variables section. Add the key-value pair from your project’s .env file so your application can access its contents on the App Platform. After you save the environment variables, open your_domain in your browser after deployment finishes and refresh the page repeatedly to hit the RPS_LIMIT. Once you hit the limit, the browser will show Cloudflare’s Firewall page.

      Cloudflare's Error 1020 Page

      Navigate to the Runtime Logs tab on the App Platform dashboard, and you will view the following output:

      Output

      ...
      You are hitting limit your_public_ip

      You can open your_domain from a different device or via VPN to see that the Firewall bans only the IP address in your_list. You can delete the IP address from your_list through your Cloudflare dashboard.

      Note: Occasionally, it takes a few seconds for the Firewall to trigger due to the cached response from the browser.

      You have set up Cloudflare’s Firewall to block IP Addresses when users are hitting the rate limit by making calls to the Cloudflare API.

      Conclusion

      In this article, you built a Node.js project deployed on DigitalOcean’s App Platform connected to your domain routed via Cloudflare. You protected your domain against rate limit misuse by configuring a Firewall Rule on Cloudflare. From here, you can modify the Firewall Rule to show JS Challenge or CAPTCHA instead of banning the user. The Cloudflare documentation details the process.




      How To Implement PHP Rate Limiting with Redis on Ubuntu 20.04


      The author selected the Apache Software Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      Redis (Remote Dictionary Server) is an open-source, in-memory data-structure store. It keeps data in a server’s RAM, which is several times faster than even the fastest Solid State Drive (SSD). This makes Redis highly responsive and, therefore, well suited for rate limiting.

      Rate limiting is a technique that puts a cap on the number of times a user can request a resource from a server in a given timeframe. Many services implement rate limiting to prevent abuse of a service when a user tries to put too much load on a server.

      For instance, when you’re implementing a public API (Application Programming Interface) for your web application with PHP, you need some form of rate limiting. The reason is that when you release an API to the public, you’d want to put a control on the number of times an application user can repeat an action in a specific timeframe. Without any control, users may bring your system to a complete halt.

      Rejecting users’ requests that exceed a certain limit allows your application to run smoothly. If you have a lot of customers, rate limiting enforces a fair-usage policy that allows each customer to have high-speed access to your application. Rate limiting is also good for reducing bandwidth costs and minimizing congestion on your server.

      It might be practical to code a rate-limiting module by logging user activities in a database like MySQL. However, the end product may not scale when many users access the system, since the data must be fetched from disk and compared against the set limit. Not only is this slow, but relational database management systems are not designed for this purpose.

      Since Redis works as an in-memory database, it is a qualified candidate for creating a rate limiter, and it has been proven reliable for this purpose.

      In this tutorial, you’ll implement a PHP script for rate limiting with Redis on an Ubuntu 20.04 server.

      Prerequisites

      Before you begin, you’ll need the following:

      Step 1 — Installing the Redis Library for PHP

      First, you’ll begin by updating your Ubuntu server package repository index. Then, install the php-redis extension. This is a library that allows you to implement Redis in your PHP code. To do this, run the following commands:

      • sudo apt update
      • sudo apt install -y php-redis

      Next, restart the Apache server to load the php-redis library:

      • sudo systemctl restart apache2

      Once you’ve updated your software information index and installed the Redis library for PHP, you’ll now create a PHP resource that caps users’ access based on their IP address.

      Step 2 — Building a PHP Web Resource for Rate Limiting

      In this step, you’ll create a test.php file in the root directory (/var/www/html/) of your web server. This file will be accessible to the public, and users can type its address into a web browser to run it. However, for the purposes of this guide, you’ll test access to the resource later using the curl command.

      The sample resource file allows users to access it three times in a timeframe of 10 seconds. Users trying to exceed the limit will get an error informing them that they have been rate limited.

      The core functionality of this file relies heavily on the Redis server. When a user requests the resource for the first time, the PHP code in the file will create a key on the Redis server based on the user’s IP address.

      When the user visits the resource again, the PHP code will try to match the user’s IP address with the keys stored in the Redis server and increment the value by one if the key exists. The PHP code will keep checking if the incremented value hits the maximum limit set.

      The Redis key, which is based on the user’s IP address, will expire after 10 seconds; after this time period, logging the user’s visits to the web resource will begin again.

      To begin, open the /var/www/html/test.php file:

      • sudo nano /var/www/html/test.php

      Next, enter the following information to initialize the Redis class. Remember to enter the appropriate value for REDIS_PASSWORD:

      /var/www/html/test.php

      <?php
      
      $redis = new Redis();
      $redis->connect('127.0.0.1', 6379);
      $redis->auth('REDIS_PASSWORD');
      

      $redis->auth implements plain text authentication to the Redis server. This is OK while you’re working locally (via localhost), but if you’re using a remote Redis server, consider using SSL authentication.
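
      If you do connect to a remote Redis server, phpredis supports a tls:// prefix on the host for an encrypted connection. The hostname below is a placeholder, and the exact requirements (a TLS-enabled Redis server and a phpredis build with TLS support) depend on your setup, so treat this as a sketch rather than a drop-in replacement:

      $redis = new Redis();
      // Hypothetical TLS connection to a remote Redis server (placeholder hostname):
      $redis->connect('tls://redis.example.com', 6379);
      $redis->auth('REDIS_PASSWORD');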

      Next, in the same file, initialize the following variables:

      /var/www/html/test.php

      . . .
      $max_calls_limit  = 3;
      $time_period      = 10;
      $total_user_calls = 0;
      

      You’ve defined:

      • $max_calls_limit: the maximum number of times a user can call the resource.
      • $time_period: the timeframe in seconds within which a user is allowed to access the resource $max_calls_limit times.
      • $total_user_calls: a variable that tracks the number of times a user has requested the resource in the given timeframe.

      Next, add the following code to retrieve the IP address of the user requesting the web resource:

      /var/www/html/test.php

      . . .
      if (!empty($_SERVER['HTTP_CLIENT_IP'])) {
          $user_ip_address = $_SERVER['HTTP_CLIENT_IP'];
      } elseif (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
          $user_ip_address = $_SERVER['HTTP_X_FORWARDED_FOR'];
      } else {
          $user_ip_address = $_SERVER['REMOTE_ADDR'];
      }
      

      While this code uses the users’ IP address for demonstration purposes, if you’ve got a protected resource on the server that requires authentication, you might log users’ activities using their usernames or access tokens.

      In such a scenario, every user authenticated into your system will have a unique identifier (for example, a customer ID, developer ID, vendor ID, or even a user ID). (If you configure this, remember to use these identifiers in place of the $user_ip_address.)

      For this guide, the user IP address is sufficient for proving the concept. So, once you’ve retrieved the user’s IP address in the previous code snippet, add the next code block to your file:

      /var/www/html/test.php

      . . .
      if (!$redis->exists($user_ip_address)) {
          $redis->set($user_ip_address, 1);
          $redis->expire($user_ip_address, $time_period);
          $total_user_calls = 1;
      } else {
          $redis->INCR($user_ip_address);
          $total_user_calls = $redis->get($user_ip_address);
          if ($total_user_calls > $max_calls_limit) {
              echo "User " . $user_ip_address . " limit exceeded.";
              exit();
          }
      }
      
      echo "Welcome " . $user_ip_address . " total calls made " . $total_user_calls . " in " . $time_period . " seconds";
      

      In this code, you use an if...else statement to check if there is a key defined with the IP address on the Redis server. If the key doesn’t exist, if (!$redis->exists($user_ip_address)) {...}, you set it and define its value to 1 using the code $redis->set($user_ip_address, 1);.

      The $redis->expire($user_ip_address, $time_period); sets the key to expire within the time period—in this case, 10 seconds.

      If the user’s IP address does not exist as a Redis key, you set the variable $total_user_calls to 1.

      In the ...else {...}... statement block, you use the $redis->INCR($user_ip_address); command to increment the value of the Redis key for that IP address by 1. This only happens when the key is already set in the Redis server and therefore counts as a repeat request.

      The statement $total_user_calls = $redis->get($user_ip_address); retrieves the total requests the user makes by checking their IP address-based key on the Redis server.

      Toward the end of the file, you use the ...if ($total_user_calls > $max_calls_limit) {...}... statement to check whether the limit is exceeded; if so, you alert the user with echo "User " . $user_ip_address . " limit exceeded.";. Finally, you inform the user how many calls they have made in the time period with the echo "Welcome " . $user_ip_address . " total calls made " . $total_user_calls . " in " . $time_period . " seconds"; statement.
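
      To see how the counter and the expiry interact, here is a hypothetical request timeline for a single IP address, assuming the limits above of 3 calls per 10 seconds:

      // t = 0 s : request -> key created with value 1, expires at t = 10 s -> "total calls made 1"
      // t = 2 s : request -> INCR -> 2                                     -> "total calls made 2"
      // t = 3 s : request -> INCR -> 3                                     -> "total calls made 3"
      // t = 5 s : request -> INCR -> 4, which exceeds 3                    -> "limit exceeded."
      // t = 11 s: the key has expired, so the next request starts a fresh 10-second window at 1.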

      After adding all the code, your /var/www/html/test.php file will be as follows:

      /var/www/html/test.php

      <?php
      $redis = new Redis();
      $redis->connect('127.0.0.1', 6379);
      $redis->auth('REDIS_PASSWORD');
      
      $max_calls_limit  = 3;
      $time_period      = 10;
      $total_user_calls = 0;
      
      if (!empty($_SERVER['HTTP_CLIENT_IP'])) {
          $user_ip_address = $_SERVER['HTTP_CLIENT_IP'];
      } elseif (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
          $user_ip_address = $_SERVER['HTTP_X_FORWARDED_FOR'];
      } else {
          $user_ip_address = $_SERVER['REMOTE_ADDR'];
      }
      
      if (!$redis->exists($user_ip_address)) {
          $redis->set($user_ip_address, 1);
          $redis->expire($user_ip_address, $time_period);
          $total_user_calls = 1;
      } else {
          $redis->INCR($user_ip_address);
          $total_user_calls = $redis->get($user_ip_address);
          if ($total_user_calls > $max_calls_limit) {
              echo "User " . $user_ip_address . " limit exceeded.";
              exit();
          }
      }
      
      echo "Welcome " . $user_ip_address . " total calls made " . $total_user_calls . " in " . $time_period . " seconds";
      

      When you’ve finished editing the /var/www/html/test.php file, save and close it.

      You’ve now coded the logic needed to rate limit users on the test.php web resource. In the next step, you’ll test your script.

      Step 3 — Testing Redis Rate Limiting

      In this step, you’ll use the curl command to request the web resource that you coded in Step 2. To fully check the script, you’ll request the resource five times in a single command. You can do this by including a placeholder URL parameter at the end of the request URL: the value ?[1-5] tells curl to execute the request five times.

      Run the following command:

      • curl -H "Accept: text/plain" -H "Content-Type: text/plain" -X GET http://localhost/test.php?[1-5]

      After running the code, you will receive output similar to the following:

      Output

      [1/5]: http://localhost/test.php?1 --> <stdout>
      --_curl_--http://localhost/test.php?1
      Welcome 127.0.0.1 total calls made 1 in 10 seconds
      [2/5]: http://localhost/test.php?2 --> <stdout>
      --_curl_--http://localhost/test.php?2
      Welcome 127.0.0.1 total calls made 2 in 10 seconds
      [3/5]: http://localhost/test.php?3 --> <stdout>
      --_curl_--http://localhost/test.php?3
      Welcome 127.0.0.1 total calls made 3 in 10 seconds
      [4/5]: http://localhost/test.php?4 --> <stdout>
      --_curl_--http://localhost/test.php?4
      User 127.0.0.1 limit exceeded.
      [5/5]: http://localhost/test.php?5 --> <stdout>
      --_curl_--http://localhost/test.php?5
      User 127.0.0.1 limit exceeded.

      As you’ll note, the first three requests ran without a problem. However, your script has rate limited the fourth and fifth requests. This confirms that the Redis server is rate limiting users’ requests.

      In this guide, you’ve set low values for the following two variables:

      /var/www/html/test.php

      ...
      $max_calls_limit  = 3;
      $time_period      = 10;
      ...
      

      When designing your application in a production environment, you could consider higher values depending on how often you expect users to hit your application.

      It is best practice to check real-time stats before setting these values. For instance, if your server logs show that an average user hits your application 1,000 times every 60 seconds, you may use those values as a benchmark for throttling users.
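
      Continuing that example, the throttle variables might then be set like this; the numbers are illustrative and should come from your own traffic data:

      $max_calls_limit  = 1000;  // allow up to 1,000 requests...
      $time_period      = 60;    // ...per 60-second window for each user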

      To put things in a better perspective, here are some real-world examples of rate-limiting implementations (as of 2021):

      Conclusion

      In this tutorial, you implemented a PHP script for rate limiting with Redis on an Ubuntu 20.04 server to protect your web application from inadvertent or malicious overuse. You could extend the code further to suit your needs, depending on your use case.

      You might want to secure your Apache server for production use; follow the How To Secure Apache with Let’s Encrypt on Ubuntu 20.04 tutorial.

      You might also consider reading how Redis works as a database cache. Try out our How To Set Up Redis as a Cache for MySQL with PHP on Ubuntu 20.04 tutorial.

      You can find further resources on our PHP and Redis topic pages.


