

      Getting Started with Browserify


      Browserify changed my life.

      … My life as a JavaScript developer, anyway.

Browserify lets you use require in the browser, the same way you'd use it in Node. It's not just syntactic sugar for loading scripts on the client: it's a tool that brings all the resources of the npm ecosystem off of the server and into the client.

      Simple, yet immensely powerful.

In this article, we'll take a look at:

• Browserify fundamentals, and how it compares to Webpack;
• Transforms, source maps, and automatic rebuilds with Watchify and Beefy;
• Building a complete bundling workflow with npm scripts and Gulp.

      Let’s dive in.

Before we get started, make sure you've got Node and npm installed. I'm running Node 5.7.0 and npm 3.6.0, but versions shouldn't be a problem. Feel free to either grab the repo or code along.

      Anyone who’s worked with Node will be familiar with its CommonJS style require function.

      require-ing a module exposes its public API to the file you required it in:

"use strict";
const React = require('react');
let Component = React.createClass({ /* component definition elided */ });

      Node’s require implementation makes modularizing server-side code quite a straightforward task. Install, require, hack: Dead simple.

Module loading in the client is an inherently different beast. In the simplest case, you load your modules in a series of <script> tags in your HTML. This is perfectly valid, but it can be problematic for two reasons: every script dumps its exports into the global namespace, and you have to maintain the correct load order by hand.
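For concreteness, the simplest case looks something like this (the file names are hypothetical):

```html
<!-- Each module in its own tag, loaded in exactly the right order -->
<script src="vendor/ramda.js"></script>
<script src="helpers.js"></script>
<script src="main.js"></script>
```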

      The AMD specification and AMD loaders – Require.js being amongst the most popular – came about as solutions to these issues. And, frankly, they’re awesome. There’s nothing inherently wrong with Require.js, or AMD loaders in general, but the solutions furnished by newer tools like Browserify and Webpack bring distinct advantages over those offered by Require.js.

      Amongst other things, Browserify:

      We’ll take a look at all of this and a whole lot more throughout the article. But first, what’s the deal with Webpack?

      The religious wars between users of Angular and Ember, Grunt and Gulp, Browserify and Webpack, all prove the point: Choosing your development tools is serious business.

      The choice between Browserify or Webpack depends largely on the tooling workflow you already have and the exigencies of your project. There are a number of differences between their feature sets, but the most important distinction, to my mind, is one of intent:

      If your project and dependencies are already closely tied to the Node ecosystem, Browserify is a solid choice. If you need more power to manage static assets than you can shake a script at, Webpack’s your tool.

      I tend to stick with Browserify, as I rarely find myself in need of Webpack’s additional power. You might find Webpack to be a solid choice if your build pipeline gets complex enough, though.

      If you decide to check it out, take a look at Front-End Tooling Book’s chapter on Webpack, and Pete Hunt’s Webpack How-To before diving into the official docs.

      Time to get our hands dirty. The first step is to install Browserify. Fire up a terminal and run:

npm install --global browserify

      This installs the Browserify package and makes it available system-wide.

      Oh, and if you find yourself needing to use sudo for this, fix your npm permissions.

      Next, let’s give our little project a home. Find a suitable place on your hard drive and make a new folder for it:

mkdir Browserify_Introduction
cd Browserify_Introduction

      We’ll need a minimal home page, as well. Drop this into index.html:


<!doctype html>
<html>
  <head>
    <title>Getting Cozy with Browserify</title>
    <link rel="stylesheet" href="">
    <style>
      h1, p, div { text-align: center; }
      html       { background: #fffffe; }
    </style>
  </head>
  <body>
    <div class="container">
      <h2>Welcome to the Client Side.</h2>
      <div class="well">
        <p>I see you've got some numbers. Why not let me see them?</p>
        <div id="response"></div>
      </div>
    </div>
    <script src="main.js"></script>
    <script src="bundle.js"></script>
  </body>
</html>

      On the off chance you’re typing this out by hand, you’ll definitely have noticed the reference to the nonexistent main.js. Nonexistent files are no fun, so let’s make it exist.

      First, install Ramda:

npm install ramda --save

      There’s nothing special about Ramda, by the way. I just chose it because I like it. Any package would do.

      Now, drop this into main.js:


      "use strict";
      var R = require('ramda');
      var square = function square (x) { return x * x; }
      var squares = R.chain(square, [1, 2, 3, 4, 5]);
      document.getElementById('response').innerHTML = squares;

      This is simple, but let’s go step-by-step anyway.

      The important things to note are that we’re using Node’s require, available only in a Node environment, together with the DOM API, available only in the browser.

      That shouldn’t work. And, in fact, it doesn’t. If you open index.html in your browser and open up the console, you’ll find a ReferenceError just waiting to grab your attention.

      Reference Error

      Ew. Let’s get rid of that.

      In the same directory housing your main.js, run:

browserify main.js -o bundle.js

      Now open up index.html again, and you should see our array of squares smack dab in the middle of the page.

      It’s that simple.

      When you tell Browserify to bundle up main.js, it scans the file, and takes note of all the files you require. It then includes the source of those files in the bundle and repeats the process for its dependencies.

      In other words, Browserify traverses the dependency graph, using your main.js as its entry point, and includes the source of every dependency it finds.
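As a rough mental model (this is not Browserify's actual implementation), the traversal is a depth-first walk over a module-to-dependencies map; the graph below is a hypothetical stand-in for our project:

```javascript
// Toy model of bundling: walk the dependency graph from an entry point,
// visiting each module exactly once.
const graph = {
  'main.js': ['ramda'],
  'ramda':   []
};

function collectModules (entry, deps, seen = new Set()) {
  if (seen.has(entry)) return seen;    // already included in the bundle
  seen.add(entry);                     // "include the source" of this module
  for (const dep of deps[entry] || []) {
    collectModules(dep, deps, seen);   // repeat the process for its dependencies
  }
  return seen;
}

console.log([...collectModules('main.js', graph)]); // → [ 'main.js', 'ramda' ]
```
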

      If you open up your bundle.js, you’ll see this in action. At the top is some obfuscated weirdness; then, a portion with your source code; and finally, the entirety of the Ramda library.

      Your Bundle

      Magic, eh?

      Let’s take a look at some additional Browserify fundamentals.

      Browserify isn’t limited to concatenating the source of your dependencies: It’s also capable of transforming the code along the way.

      “Transform” can mean many things. It can be compiling CoffeeScript to JavaScript, transpiling ES2015 to vanilla JavaScript, or even replacing const with var declarations.

      If it’s a change to your code, it counts as a transformation. We’ll take a look at using transforms in the full example, so hang on tight for usage details. For now, be sure to bookmark the growing list of available Browserify transforms for future reference.
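To make "transform" concrete, here's the const-to-var example as a plain string rewrite. Real Browserify transforms are streaming modules, so treat this purely as an illustration of the kind of change involved:

```javascript
// Toy "transform": rewrite const declarations to var declarations.
// A real transform would operate on a stream of source, not a string.
function constToVar (source) {
  return source.replace(/\bconst\b/g, 'var');
}

console.log(constToVar('const R = require("ramda");')); // → var R = require("ramda");
```
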

      One of the disadvantages to transformations – and builds in general – is mangled line references. When your code throws an error, you want the browser to tell you, “take a look at line 57, column 23”. Not, “take a look at variable q on line 1, column 18,278 of main.min.js.”

      The solution is source maps. They’re files that tell your browser how to translate between line references in your transformed code and line references in your original source.

      With Browserify, enabling source maps is trivial. Run:

browserify --debug main.js -o bundle.js

      The --debug flag tells Browserify to include source map information in bundle.js. That’s all you have to add to make it work.

      There is one downside to this, though: Adding source maps to bundle.js makes your bundle twice as large.

      That’s fine for development. But making your users download a file twice as big as the one they really need is a bit rude, don’t you think?

      The solution is to create two files: One for the source map, one for the bundle. If you’re using Browserify alone, the tool of choice for this is exorcist.

      Once you’ve installed it (npm install --global exorcist), you use it like this:

browserify main.js --debug | exorcist > bundle.js

This rips all the source map information out of bundle.js and spits it into a separate map file instead.

      That’s mostly all there is to using Exorcist. Be sure to check the exorcist documentation for the details.

      There is a whole swath of tools for Browserify that keep an eye on your files and rebuild your bundle whenever they change. We’ll take a look at two tools: Watchify, and Beefy.

      Using Watchify

      Watchify is a standard tool for automatically rebuilding your bundle.js whenever you update source files.

      First, install it with npm:

npm install --global watchify

      Next, delete your bundle.js.

      Now, navigate to your working directory in a new terminal, and run:

watchify main.js -o bundle.js -v

      The -v flag tells Watchify to notify you whenever it rebuilds your bundle. It’ll still work if you don’t include it, but you won’t be able to tell it’s doing anything.

      That aside, notice that using Watchify is identical to using Browserify! You should have gotten some output, and if you check, you’ll notice a newly updated bundle.js sitting in your working directory.

      Now, open up main.js and save it without changing anything. You’ll see Watchify rebuild your bundle and spit out some more logs – that’s all it takes to automatically rebuild your bundle when you change your source!

      The Watchify repo has all the information on more advanced usage, such as how to use it with Exorcist. Check them out if you need.

      If you ran the example, be sure to kill the Watchify process before moving on (just close the terminal you ran it in, or kill $(pgrep node) if you love you some CLI).


Using Beefy

Beefy makes it easy to enable live reload alongside automatic rebuilds. It does two big things for you: whenever you change anything, it rebuilds your bundle; and – if you tell it to – it automatically refreshes your browser with the changes.

      If you’re like me and need such a minimal feedback loop, it’s hard to go wrong with Beefy.

      To get started, go ahead and install it:

npm install -g beefy

      I’ve installed it globally because I use it so much. If you’d rather use it on a per-project basis, run:

npm install --save-dev beefy

      Either way, using it is straightforward. First, delete your bundle.js. Then, Spin up a new terminal, navigate to your working directory, and run:

beefy main.js --live

Beefy should print some information notifying you that it's listening on http://localhost:9966.

If it instead says Error: Could not find a suitable bundler!, run this:

beefy main.js --browserify $(which browserify) --live

      The --browserify $(which browserify) bit tells Beefy to use the global Browserify installation. You don’t need this unless you got the error.

      We told Beefy to watch main.js. If your entry point has a different name – say, app.js – you’d pass it that instead. The --live switch tells Beefy to automatically rebuild your bundle and reload the browser whenever you change your source code.

      Let’s see it in action. In your browser, navigate to http://localhost:9966. You should see the same home page we did last time.

      Our Initial Web Page

      Now, open up main.js, and change squares:


      "use strict";
      var R = require('ramda');
      var square = function square (x) { return x * x; }
      var squares = R.chain(square, [1, 2, 3, 4, 5, 6]);
      document.getElementById('response').innerHTML = squares

      Save it, and check out the web page. You should see an updated version of it:

      Our Web Page After Update

      And if you were watching it as you saved, you’d have noticed it update in real-time.

      Under the hood, Beefy rebuilds your main.js whenever the server receives a request for bundle.js. Beefy does not save a bundle.js to your working directory; when you need one for production, you’ll still have to build that using Browserify. We’ll see how to deal with that inconvenience in just a second.

      Again, that’s all there is to it. If you need anything more specific, the documentation’s got your back.

That's it for Browserify: The Essentials. Let's build a small Browserify configuration that:

• Serves our bundle with live reload, via Beefy;
• Compiles CoffeeScript files on-the-fly;
• Builds a minified bundle, with its source map split into a separate file.

      A real, production-quality workflow would do more. But this will show you how to use Browserify to do something nontrivial, and extending it for your own projects should be a cinch.

      We’ll be using npm scripts to set this up. In the next section, we’ll do it with Gulp.

      Let’s get to it.

      Installing Dependencies

      We’ll need to install some packages to get this done:

      You’ve already got Beefy, so don’t worry about installing it. To grab the others, run:

npm install --save-dev caching-coffeeify coffeeify minifyify

Now, let's start building out our scripts. Open up your package.json. You should find a scripts key about halfway down; it should include a key called "test".

      Right after it, add a "serve" task:

"serve" : "beefy main.js --live"

      You can see the whole package.json at my GitHub repo. If you had to use the --browserify $(which browserify) option earlier, you’ll have to do that here too.

      Save that, and back in your terminal, run npm run serve. You should see the same output we got when we ran Beefy earlier.

      You may get an ENOSPC error. If you do, run npm dedupe and try again. If that doesn’t help, the top answer on this SO thread will solve the problem.

      We just associated a command – beefy main.js --live – with a script name – serve. When we run npm run <NAME>, npm executes the command associated with the name you pass, located in the "scripts" section of your package.json. In this case, npm run serve fires up Beefy.
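Putting that together, the relevant slice of package.json looks something like this (your other fields will differ):

```json
{
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "serve": "beefy main.js --live"
  }
}
```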

      Sweet start. Let’s finish it up.

      Open up package.json again, and add to your serve script:

      "serve" : "beefy main.js --browserify -t caching-coffeeify --live"

      When using Beefy, the --browserify option lets you pass options to Browserify. The -t flag tells Browserify you’re about to give it a transform to run. Caching-Coffeeify is a transform that compiles CoffeeScript to JavaScript, and optimizes to make sure it only recompiles what’s changed – whenever you want to compile CoffeeScript on-the-fly like this, Caching-Coffeeify is a better choice than plain ol’ Coffeeify.

Now, we can include CoffeeScript files in our project. To see this in action, create a CoffeeScript file alongside your main.js:

      "use strict"
      module.exports = () => [1, 2, 3, 4, 5]

      … And in main.js:


      "use strict";
      var R = require('ramda'),
            get_list = require('./');
      var square = function square (x) { return x * x; }
      var squares = R.chain(square, get_list());
      document.getElementById('response').innerHTML = squares

      Now, run npm run serve, navigate to http://localhost:9966, and everything should still work.

      A Build Task

      To add a script that builds out a minified bundle with stripped source maps, open up your package.json and add:

"serve" : "beefy main.js --browserify -t caching-coffeeify --live",
"build" : "browserify main.js --debug -t coffeeify -p [ minifyify --map --output build/ ] > build/bundle.js"

Now, in your working directory, run mkdir build. This is the folder we'll save our bundle.js and source map to. Run npm run build; check what's in your build folder; and voilà.

      Your Build Folder

      I assume you’re already familiar with Gulp. If not, check out the docs.

      Using npm scripts is fine for simple setups. But it’s already clear that this can get cumbersome and unreadable.

      That’s where Gulp comes in.

In the interest of brevity, we'll just set up a basic task that does the following:

• Bundles main.js, running our CoffeeScript and Babel transforms along the way;
• Writes the bundle to build/, and its source map to build/maps/.

      But if you like bells and whistles, check out the repo. It features a fancy watch task for you to get started with.

      As always, the first step is installation:

npm install -g gulp && npm install gulp --save-dev

We'll need to install a bit of a toolchain to make this work. Here's the command; the names of the dependencies match the requires in the Gulpfile below.

npm install --save-dev browserify watchify coffeeify babelify babel-preset-es2015 vinyl-source-stream vinyl-buffer gulp-livereload gulp-rename gulp-sourcemaps gulp-uglify gulp-util merge

      Swell. Now, create a Gulpfile that looks like this:


"use strict";
var babelify   = require('babelify'),
    browserify = require('browserify'),
    buffer     = require('vinyl-buffer'),
    coffeeify  = require('coffeeify'),
    gulp       = require('gulp'),
    gutil      = require('gulp-util'),
    livereload = require('gulp-livereload'),
    merge      = require('merge'),
    rename     = require('gulp-rename'),
    source     = require('vinyl-source-stream'),
    sourceMaps = require('gulp-sourcemaps'),
    watchify   = require('watchify');

var config = {
    js: {
        src: './main.js',        // Entry point
        outputDir: './build/',   // Directory to save the bundle to
        mapDir: './maps/',       // Subdirectory to save the maps to
        outputFile: 'bundle.js'  // Name to use for the bundle
    }
};

// Bundle the given bundler, writing source maps to their own directory
function bundle (bundler) {
    bundler
        .bundle()                                    // Start bundling
        .pipe(source(config.js.src))                 // Name the stream after the entry point
        .pipe(buffer())                              // Convert to a buffered vinyl object
        .pipe(sourceMaps.init({ loadMaps : true }))  // Strip the inline source maps
        .pipe(rename(config.js.outputFile))          // Rename the output file
        .pipe(sourceMaps.write(config.js.mapDir))    // Save the maps to their own directory
        .pipe(gulp.dest(config.js.outputDir))        // Save the bundle
        .pipe(livereload());                         // Reload the page if it's open
}

gulp.task('bundle', function () {
    var bundler = browserify(config.js.src)          // Pass Browserify the entry point ...
                    .transform(coffeeify)            // ... compile CoffeeScript files ...
                    .transform(babelify, { presets : [ 'es2015' ] });  // ... and transpile ES2015

    bundle(bundler);
});

Now if you run gulp bundle in your working directory, you'll have your bundle.js sitting in build/, and its source map sitting in build/maps/.

      This config is mostly Gulp-specific detail, so I’ll let the comments speak for themselves. The important thing to note is that, in our bundle task, we can easily chain transformations. This is a great example of how intuitive and fluent Browserify’s API can be. Check the documentation for everything else you can do with it.

      Whew! What a whirlwind tour. So far, you’ve learned:

      That’s more than enough to be productive with Browserify. There are a few links you should bookmark:

      And that about wraps it up. If you’ve got questions, comments, or confusion, drop a line in the comments – I’ll get back to you.

      Be sure to follow me on Twitter (@PelekeS) if you want a heads-up when I publish something new. Next time, we’ll make that boring home page a lot more interesting by using this tooling alongside React.

      Until then, keep getting cozy with Browserify. Go build something incredible.

      Getting Started with Redis in PHP


Redis is an open-source, in-memory data structure server with an advanced key-value cache and store, often referred to as a NoSQL database. It is also called a data structure server because it can store strings, hashes, lists, sets, sorted sets, and more.

      The essence of a key-value store is the ability to store some data, called a value inside a key. This data can later be retrieved only if we know the exact key used to store it.
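That contract is the same one a plain in-memory map gives you, just served over the network. As a dependency-free sketch (in JavaScript here, purely for illustration):

```javascript
// The key-value store contract in miniature: a value goes in under a key,
// and comes back out only if you ask with the exact same key.
const store = new Map();
store.set('message', 'Hello world');

console.log(store.get('message')); // → Hello world
console.log(store.get('Message')); // → undefined (keys must match exactly)
```
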

Salvatore Sanfilippo, the creator of Redis, once said that Redis can be used to replace an RDBMS. Now, although nothing is impossible, I think it would be a bad idea: using a key-value store for things like full-text search can be painful, especially once you consider ACID compliance and keeping data in sync.

      Below are just a few uses of Redis, though there are many more than this.

      • Caching can be used in the same manner as memcached.
      • Leaderboards or related problems.
      • Counting stuff.
      • Real-time analysis.
      • Deletion and filtering.
      • Show latest item listings on your home page.

This article's aim is not to show you the syntax of Redis (you can learn about Redis's syntax here); instead, we will learn how to use Redis in PHP.

      Redis is pretty easy to install and the instructions, included, are for both Windows and Linux users.

Installing Redis on Linux is pretty simple, but you'll need TCL. If you don't have it installed, you can simply run:

sudo apt-get install tcl

      To install Redis:

wget
tar xzf redis-2.8.19.tar.gz
cd redis-2.8.19
make

      Note: 2.8.19 should be replaced with the latest stable version of Redis.

All Redis binaries are saved in the src folder. To start the server:

src/redis-server

      Redis installation on Windows is very easy, just visit this link, download a package, and install.

      Install Predis a Redis Client for PHP

      Predis is a Redis Client for PHP. It is well written and has a lot of support from the community. To use Predis just clone the repository into your working directory:

git clone git://

      First, we’ll require the Redis Autoloader and register it. Then we’ll wrap the client in a try-catch block. The connection setting for connecting to Redis on a local server is different from connecting to a remote server.

require "predis/autoload.php";
Predis\Autoloader::register();
try {
    $redis = new Predis\Client();
}
catch (Exception $e) {
    // connection failed; inspect $e->getMessage() for details
}

      Now that we have successfully connected to the Redis server, let’s start using Redis.

      Redis supports a range of datatypes and you might wonder what a NoSQL key-value store has to do with datatypes? Well, these datatypes help developers store data in a meaningful way and can make data retrieval faster. Here are some of the datatypes supported by Redis:

      • String: Similar to Strings in PHP.
• List: Similar to a single-dimensional array in PHP. You can push, pop, shift and unshift elements, which are kept in order of insertion (FIFO: first in, first out).
      • Hash: Maps between string fields and string values. They are the perfect data type to represent objects (e.g.: A User with a number of fields like name, surname, and so forth).
      • Set: Similar to list, except that it has no order and each element may appear only once.
• Sorted Set: Similar to a Redis Set, with the unique feature that each member is associated with a score, used to order the set from the smallest score to the largest.

      Others are bitmaps and hyperloglogs, but they will not be discussed in this article, as they are pretty dense.

In Redis, the most important commands are SET, GET and EXISTS. These commands are used to store, check, and retrieve data from a Redis server. The Predis client exposes each Redis command as a method of the same name. For example:

$redis->set('message', 'Hello world');
$value = $redis->get('message');
echo ($redis->exists('message')) ? "Oui" : "please populate the message key";

INCR and DECR are commands used to increase or decrease a value.

          $redis->set("counter", 0);

We can also increase or decrease the value of the counter key by larger amounts with the INCRBY and DECRBY commands.

$redis->set("counter", 0);
$redis->incrby("counter", 15); // counter is now 15
$redis->incrby("counter", 5);  // counter is now 20
$redis->decrby("counter", 10); // counter is now 10

      There are a few basic Redis commands for working with lists and they are:

      • LPUSH: adds an element to the beginning of a list
• RPUSH: adds an element to the end of a list
      • LPOP: removes the first element from a list and returns it
      • RPOP: removes the last element from a list and returns it
      • LLEN: gets the length of a list
      • LRANGE: gets a range of elements from a list

      Simple List Usage:

$redis->rpush("languages", "french");  // [french]
$redis->rpush("languages", "arabic");  // [french, arabic]
$redis->lpush("languages", "english"); // [english, french, arabic]
$redis->lpush("languages", "swedish"); // [swedish, english, french, arabic]
$redis->lrange("languages", 0, -1);    // [swedish, english, french, arabic]
$redis->lrange("languages", 0, 1);     // [swedish, english]

A hash in Redis is a map between string fields and string values. The commands associated with hashes in Redis are:

      • HSET: sets a key-value on the hash
      • HGET: gets a key-value on the hash
      • HGETALL: gets all key-values from the hash
      • HMSET: mass assigns several key-values to a hash
      • HDEL: deletes a key from the object
      • HINCRBY: increments a key-value from the hash with a given value.
$key = 'linus torvalds';
$redis->hset($key, 'age', 44);
$redis->hset($key, 'country', 'finland');
$redis->hset($key, 'occupation', 'software engineer');
$redis->hset($key, 'reknown', 'linux kernel');
$redis->hset($key, 'to delete', 'i will be deleted');
$redis->hget($key, 'age');     // 44
$redis->hget($key, 'country'); // finland
$redis->hdel($key, 'to delete');
$redis->hincrby($key, 'age', 20); // age is now 64
$redis->hmset($key, [
    'age' => 44,
    'country' => 'finland',
    'occupation' => 'software engineer',
    'reknown' => 'linux kernel',
]);
$data = $redis->hgetall($key);

      The list of commands associated with sets includes:

• SADD: adds N values to the key
• SREM: removes N values from a key
• SISMEMBER: checks whether a value is a member of the set
• SMEMBERS: lists all values in the set.
$key = "countries";
$redis->sadd($key, 'china');
$redis->sadd($key, ['england', 'france', 'germany']);
$redis->sadd($key, 'china'); // ignored: 'china' is already in the set
$redis->srem($key, ['england', 'china']);
$redis->sismember($key, 'england'); // false

      Since Redis is an in-memory data store, you would probably not store data forever. Therefore, this brings us to EXPIRE, EXPIREAT, TTL, PERSIST:

      • EXPIRE: sets an expiration time, in seconds, for the key after which it is deleted
      • EXPIREAT: sets an expiration time using UNIX timestamps for the key after which it is deleted
      • TTL: gets the remaining time left for a key expiration
      • PERSIST: makes a key last forever by removing the expiration timer from the key.
$key = "expire in 1 hour";
$redis->set($key, "some value");       // the key must exist before we can expire it
$redis->expire($key, 3600);            // delete the key in 3600 seconds
$redis->expireat($key, time() + 3600); // delete the key at a UNIX timestamp one hour from now

      The commands listed in this article are just a handful of many existing Redis commands (see more redis commands).

      Future of Redis

Redis is a strong replacement for memcached: it is faster, scales better (it supports master-slave replication), and supports richer datatypes. Many companies (Facebook, Twitter, Instagram) have dropped memcached for Redis. Redis is open source, and many brilliant programmers from the open-source community have contributed patches.


      Get Started

      Determine Your Application’s Networking Architecture

      Consider your application’s requirements and determine how your application should communicate both internally and over the public internet. As part of this, review the range of options available for private and public network connectivity on the Linode platform: VPCs, VLANs, Private IPv4 addresses, and Public IPv4/IPv6 addresses. When choosing VPC for private networking (the most common product), determine if segmenting the VPC into multiple subnets is needed. Consider the number of IP addresses you need now (and might need in the future) per subnet and decide on an acceptable CIDR block as outlined with Valid IPv4 Ranges for Subnets.
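When weighing candidate CIDR blocks, the arithmetic is simple enough to sanity-check yourself. A quick sketch (any language would do; JavaScript here for illustration):

```javascript
// Number of IPv4 addresses covered by a CIDR prefix: 2^(32 - prefixLength).
function addressesInCidr (prefixLength) {
  return 2 ** (32 - prefixLength);
}

console.log(addressesInCidr(24)); // → 256
console.log(addressesInCidr(16)); // → 65536
```
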

      Create a VPC

Once you've determined that a VPC is needed, you can create it directly in the Cloud Manager using the Create VPC form, or as part of deploying a new Compute Instance. During this process, you'll need to define the following parameters:

      • Region: The data center where the VPC is deployed. Since VPCs are region-specific, only Compute Instances within that region can join the VPC.
      • Label: A string to identify the VPC. This should be unique to your account.
      • Subnet Label: A string to identify the subnet, which should be unique compared to other subnets on the same VPC.
      • Subnet CIDR range: The range of IP addresses that can be used by Compute Instances assigned to this subnet.

      While at least 1 subnet must be created, you can create up to 10 subnets per VPC.

      Review the Create a VPC guide for complete instructions.

      Assign Compute Instances

      You can assign existing Compute Instances to a VPC or, more commonly, deploy a new Compute Instance to the VPC. For further instructions, review the Assign a Compute Instance to a VPC page.

      • New Compute Instance: When creating a Compute Instance, there is an option to add it to an existing VPC. The VPC must already be created in the same data center as selected for the Compute Instance. When assigning a new instance to a VPC, you must also select the subnet that the instance should belong to. By default, an IPv4 address from the subnet’s CIDR range will be assigned to the instance, though you can opt to manually enter an IP address. Additionally, public IPv4 connectivity won’t be configured by default, though an option is present to configure 1:1 NAT on the VPC interface.

      • Existing Compute Instance: If you need to add an existing Compute Instance to a VPC, you can do so from the VPC page or by directly editing that instance’s Configuration Profile.
