
      How To Work with Files Using Streams in Node.js


      The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.

      Introduction

      The concept of streams in computing usually describes the delivery of data in a steady, continuous flow. You can use streams for reading from or writing to a source continuously, thus eliminating the need to fit all the data in memory at once.

      Using streams provides two major advantages. One is that you can use your memory efficiently since you do not have to load all the data into memory before you can begin processing. Another advantage is that using streams is time-efficient. You can start processing data almost immediately instead of waiting for the entire payload. These advantages make streams a suitable tool for large data transfer in I/O operations. Files are a collection of bytes that contain some data. Since files are a common data source in Node.js, streams can provide an efficient way to work with files in Node.js.

      Node.js provides a streaming API in the stream module, a core Node.js module, for working with streams. All Node.js streams are instances of the EventEmitter class (for more on this, see Using Event Emitters in Node.js). They emit different events you can listen for at various intervals during the data transmission process. The native stream module provides an interface consisting of different functions for listening to those events, which you can use to read and write data, manage the transmission life cycle, and handle transmission errors.

      There are four different kinds of streams in Node.js. They are:

      • Readable streams: streams you can read data from.
      • Writable streams: streams you can write data to.
      • Duplex streams: streams you can read from and write to (usually simultaneously).
      • Transform streams: a duplex stream in which the output (or writable stream) is dependent on the modification of the input (or readable stream).

      The file system module (fs) is a native Node.js module for manipulating files and navigating the local file system in general. It provides several methods for doing this. Two of these methods implement the streaming API. They provide an interface for reading and writing files using streams. Using these two methods, you can create readable and writable file streams.

      In this article, you will read from and write to a file using the fs.createReadStream and fs.createWriteStream functions. You will also use the output of one stream as the input of another and implement a custom transform stream. By performing these actions, you will learn to use streams to work with files in Node.js. To demonstrate these concepts, you will write a command-line program with commands that replicate the cat functionality found in Linux-based systems, write input from a terminal to a file, copy files, and transform the content of a file.

      Prerequisites

      To complete this tutorial, you will need:

      Step 1 — Setting up a File Handling Command-Line Program

      In this step, you will write a command-line program with basic commands. This command-line program will demonstrate the concepts you’ll learn later in the tutorial, where you’ll use these commands with the functions you’ll create to work with files.

      To begin, create a folder to contain all your files for this program. In your terminal, create a folder named node-file-streams:

      • mkdir node-file-streams

      Using the cd command, change your working directory to the new folder:

      • cd node-file-streams

      Next, create and open a file called mycliprogram in your favorite text editor. This tutorial uses GNU nano, a terminal text editor. To use nano to create and open your file, type the following command:

      • nano mycliprogram

      In your text editor, add the following code to specify the shebang, store the array of command-line arguments from the Node.js process, and store the list of commands the application should have.

      node-file-streams/mycliprogram

      #!/usr/bin/env node
      
      const args = process.argv;
      const commands = ['read', 'write', 'copy', 'reverse'];
      

      The first line contains a shebang, which is a path to the program interpreter. Adding this line tells the program loader to parse this program using Node.js.

      When you run a Node.js script on the command line, several command-line arguments are passed when the Node.js process runs. You can access these arguments using the argv property of the Node.js process. The argv property is an array that contains the command-line arguments passed to a Node.js script. In the second line, you assign that property to a variable called args.
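      As a quick illustration of argv's shape: the first entry is the path to the Node.js executable, the second is the path to the script, and any user-supplied arguments start at index 2.

```javascript
// process.argv is an array of strings; index 0 is the Node.js executable path
// and index 1 is the path to the running script.
const args = process.argv;

// For a run like `./mycliprogram read test.txt`, args[2] would be 'read'
// and args[3] would be 'test.txt'.
console.log(args[0]);
```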

      Next, create a getHelpText function to display a manual of how to use the program. Add the code below to your mycliprogram file:

      node-file-streams/mycliprogram

      ...
      const getHelpText = function() {
          const helpText = `
          simplecli is a simple cli program to demonstrate how to handle files using streams.
          usage:
              mycliprogram <command> <path_to_file>
      
              <command> can be:
              read: Print a file's contents to the terminal
              write: Write a message from the terminal to a file
              copy: Create a copy of a file in the current directory
              reverse: Reverse the content of a file and save its output to another file.
      
              <path_to_file> is the path to the file you want to work with.
          `;
          console.log(helpText);
      }
      

      The getHelpText function prints out the multi-line string you created as the help text for the program. The help text shows the command-line arguments or parameters that the program expects.

      Next, you’ll add the control logic to check the length of args and provide the appropriate response:

      node-file-streams/mycliprogram

      ...
      let command = '';
      
      if(args.length < 3) {
          getHelpText();
          return;
      }
      else if(args.length > 4) {
          console.log('More arguments provided than expected');
          getHelpText();
          return;
      }
      else {
          command = args[2]
          if(!args[3]) {
              console.log('This tool requires at least one path to a file');
              getHelpText();
              return;
          }
      }
      

      In the code snippet above, you have created an empty string command to store the command received from the terminal. The first if block checks whether the length of the args array is less than 3. If it is, no additional arguments were passed when running the program. In this case, it prints the help text to the terminal and terminates.

      The else if block checks to see if the length of the args array is greater than 4. If it is, then the program has received more arguments than it needs. The program will print a message to this effect along with the help text and terminate.

      Finally, in the else block, you store the third element or the element in the second index of the args array in the command variable. The code also checks whether there is a fourth element or an element with index = 3 in the args array. If the item does not exist, it prints a message to the terminal indicating that you need a file path to continue.

      Save the file. Then run the application:

      • ./mycliprogram

      You might get a permission denied error similar to the output below:

      Output

      -bash: ./mycliprogram: Permission denied

      To fix this error, you will need to give the file execution permissions, which you can do with the following command:

      • chmod +x mycliprogram

      Run the file again:

      • ./mycliprogram

      The output will look similar to this:

      Output

      simplecli is a simple cli program to demonstrate how to handle files using streams.
      usage:
          mycliprogram <command> <path_to_file>

          <command> can be:
          read: Print a file's contents to the terminal
          write: Write a message from the terminal to a file
          copy: Create a copy of a file in the current directory
          reverse: Reverse the content of a file and save its output to another file.

          <path_to_file> is the path to the file you want to work with.

      Finally, you are going to partially implement the commands in the commands array you created earlier. Open the mycliprogram file and add the code below:

      node-file-streams/mycliprogram

      ...
      switch(commands.indexOf(command)) {
          case 0:
              console.log('command is read');
              break;
          case 1:
              console.log('command is write');
              break;
          case 2:
              console.log('command is copy');
              break;
          case 3:
              console.log('command is reverse');
              break;
          default:
              console.log('You entered a wrong command. See help text below for supported functions');
              getHelpText();
              return;
      }
      

      Any time you enter a command found in the switch statement, the program runs the appropriate case block for the command. For this partial implementation, you print the name of the command to the terminal. If the string is not in the list of commands you created above, the program will print out a message to that effect with the help text. Then the program will terminate.
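      The indexOf mapping that drives the switch statement can be checked in isolation. Known commands map to their position in the commands array, while anything else maps to -1, which matches no case and falls through to default:

```javascript
const commands = ['read', 'write', 'copy', 'reverse'];

// Known commands map to their index in the array...
console.log(commands.indexOf('read')); // 0
console.log(commands.indexOf('copy')); // 2

// ...and an unknown command maps to -1, which hits the default branch.
console.log(commands.indexOf('delete')); // -1
```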

      Save the file, then re-run the program with the read command and any file name:

      • ./mycliprogram read test.txt

      The output will look similar to this:

      Output

      command is read

      You have now successfully created a command-line program. In the following section, you will replicate the cat functionality as the read command in the application using createReadStream().

      Step 2 — Reading a File with createReadStream()

      The read command in the command-line application will read a file from the file system and print it out to the terminal similar to the cat command in a Linux-based terminal. In this section, you will implement that functionality using createReadStream() from the fs module.

      The createReadStream function creates a readable stream that emits events you can listen to, since it inherits from the EventEmitter class. The data event is one of these events. Every time the readable stream reads a piece of data, it emits a data event containing that piece of data, or chunk. When used with a callback function, it invokes the callback with that chunk, and you can process the data within that callback function. In this case, you want to display that chunk in the terminal.

      To begin, add a text file to your working directory for easy access. In this section and some subsequent ones, you will be using a file called lorem-ipsum.txt. It is a text file containing ~1200 lines of lorem ipsum text generated using the Lorem Ipsum Generator, and it is hosted on GitHub. In your terminal, enter the following command to download the file to your working directory:

      • wget https://raw.githubusercontent.com/do-community/node-file-streams/999e66a11cd04bc59843a9c129da759c1c515faf/lorem-ipsum.txt

      To replicate the cat functionality in your command-line application, you’ll need to import the fs module because it contains the createReadStream function you need. To do this, open the mycliprogram file and add this line immediately after the shebang:

      node-file-streams/mycliprogram

      #!/usr/bin/env node
      
      const fs = require('fs');
      

      Next, you will create a function below the switch statement called read() with a single parameter: the file path for the file you want to read. This function will create a readable stream from that file and listen for the data event on that stream.

      node-file-streams/mycliprogram

      ...
      function read(filePath) {
          const readableStream = fs.createReadStream(filePath);
      
          readableStream.on('error', function (error) {
              console.log(`error: ${error.message}`);
          })
      
          readableStream.on('data', (chunk) => {
              console.log(chunk);
          })
      }
      

      The code also checks for errors by listening for the error event. When an error occurs, an error message will print to the terminal.

      Finally, you should replace console.log() with the read() function in the first case block case 0 as shown in the code block below:

      node-file-streams/mycliprogram

      ...
      switch (commands.indexOf(command)) {
          case 0:
              read(args[3]);
              break;
          ...
      }
      

      Save the file to persist the new changes and run the program:

      • ./mycliprogram read lorem-ipsum.txt

      The output will look similar to this:

      Output

      <Buffer 0a 0a 4c 6f 72 65 6d 20 69 70 73 75 6d 20 64 6f 6c 6f 72 20 73 69 74 20 61 6d 65 74 2c 20 63 6f 6e 73 65 63 74 65 74 75 72 20 61 64 69 70 69 73 63 69 ... >
      ...
      <Buffer 76 69 74 61 65 20 61 6e 74 65 20 66 61 63 69 6c 69 73 69 73 20 6d 61 78 69 6d 75 73 20 75 74 20 69 64 20 73 61 70 69 65 6e 2e 20 50 65 6c 6c 65 6e 74 ... >

      Based on the output above, you can see that the data was read in chunks or pieces, and these pieces of data are of the Buffer type. For the sake of brevity, the terminal output above shows only two chunks, and the ellipsis indicates that there are several buffers in between the chunks shown here. The larger the file, the greater the number of buffers or chunks.
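      You can see the Buffer-to-string relationship directly in a standalone snippet; decoding a Buffer with utf8 mirrors what the encoding option does for each chunk:

```javascript
// Without an encoding, stream chunks arrive as Buffers of raw bytes.
// For example, the bytes 0x4c 0x6f 0x72 0x65 0x6d spell 'Lorem' in UTF-8.
const chunk = Buffer.from('Lorem ipsum dolor sit amet', 'utf8');

console.log(Buffer.isBuffer(chunk)); // true
console.log(chunk.toString('utf8')); // Lorem ipsum dolor sit amet
```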

      To return the data in a human-readable format, you will set the encoding type of the data by passing the string value of the encoding type you want as a second argument to the createReadStream() function. Add the following highlighted code to set the encoding type to utf8.

      node-file-streams/mycliprogram

      
      ...
      const readableStream = fs.createReadStream(filePath, 'utf8')
      ...
      

      Re-running the program will display the contents of the file in the terminal. The program prints the lorem ipsum text from the lorem-ipsum.txt file line by line as it appears in the file.

      Output

      Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean est tortor, eleifend et enim vitae, mattis condimentum elit. In dictum ex turpis, ac rutrum libero tempus sed...
      ...
      ...Quisque nisi diam, viverra vel aliquam nec, aliquet ut nisi. Nullam convallis dictum nisi quis hendrerit. Maecenas venenatis lorem id faucibus venenatis. Suspendisse sodales, tortor ut condimentum fringilla, turpis erat venenatis justo, lobortis egestas massa massa sed magna. Phasellus in enim vel ante viverra ultricies.

      The output above shows a small fraction of the content of the file printed to the terminal. When you compare the terminal output with the lorem-ipsum.txt file, you will see that the content is the same and takes the same formatting as the file, just like with the cat command.

      In this section, you implemented the cat functionality in your command-line program to read the content of a file and print it to the terminal using the createReadStream function. In the next step, you will create a file based on the input from the terminal using createWriteStream().

      Step 3 — Writing to a File with createWriteStream()

      In this section, you will write input from the terminal to a file using createWriteStream(). The createWriteStream function returns a writable file stream that you can write data to. Like the readable stream in the previous step, this writable stream emits a set of events like error, finish, and pipe. Additionally, it provides the write function for writing data to the stream in chunks or bits. The write function takes in the chunk, which could be a string, a Buffer, a Uint8Array, or any other JavaScript value. It also allows you to specify an encoding type if the chunk is a string.

      To write input from a terminal to a file, you will create a function called write in your command-line program. In this function, you will create a prompt that receives input from the terminal (until the user terminates it) and writes the data to a file.

      First, you will need to import the readline module at the top of the mycliprogram file. The readline module is a native Node.js module that you can use to receive data from a readable stream like the standard input (stdin) or your terminal one line at a time. Open your mycliprogram file and add the highlighted line:

      node-file-streams/mycliprogram

      #!/usr/bin/env node
      
      const fs = require('fs');
      const readline = require('readline');
      

      Then, add the following code below the read() function.

      node-file-streams/mycliprogram

      ...
      function write(filePath) {
          const writableStream = fs.createWriteStream(filePath);
      
          writableStream.on('error', (error) => {
              console.log(`An error occurred while writing to the file. Error: ${error.message}`);
          });
      }
      

      Here, you are creating a writable stream with the filePath parameter. This file path will be the command-line argument after the write word. You are also listening for the error event in case anything goes wrong (for example, if you provide a filePath whose directory does not exist).

      Next, you will write the prompt to receive a message from the terminal and write it to the specified filePath using the readline module you imported earlier. To create a readline interface, a prompt, and to listen for the line event, update the write function as shown in the block:

      node-file-streams/mycliprogram

      ...
      function write(filePath) {
          const writableStream = fs.createWriteStream(filePath);
      
          writableStream.on('error', (error) => {
              console.log(`An error occurred while writing to the file. Error: ${error.message}`);
          });
      
          const rl = readline.createInterface({
              input: process.stdin,
              output: process.stdout,
              prompt: 'Enter a sentence: '
          });
      
          rl.prompt();
      
          rl.on('line', (line) => {
              switch (line.trim()) {
                  case 'exit':
                      rl.close();
                      break;
                  default:
                      const sentence = line + '\n';
                      writableStream.write(sentence);
                      rl.prompt();
                      break;
              }
          }).on('close', () => {
              writableStream.end();
              writableStream.on('finish', () => {
                  console.log(`All your sentences have been written to ${filePath}`);
              })
              setTimeout(() => {
                  process.exit(0);
              }, 100);
          });
      }
      

      You created a readline interface (rl) that allows the program to read the standard input (stdin) from your terminal on a line-by-line basis and write a specified prompt string to standard output (stdout). You also called the prompt() function to write the configured prompt message to a new line and to allow the user to provide additional input.

      Then you chained two event listeners together on the rl interface. The first one listens for the line event emitted each time the input stream receives an end-of-line input. This input could be a line feed character (\n), a carriage return character (\r), or both characters together (\r\n), and it usually occurs when you press the ENTER or return key on your computer. Therefore, any time you press either of these keys while typing in the terminal, the line event is emitted. The callback function receives a string, line, containing the single line of input.

      You trimmed the line and checked to see if it is the word exit. If not, the program will add a new line character to line and write the sentence to the filePath using the .write() function. Then you called the prompt function to prompt the user to enter another line of text. If the line is exit, the program calls the close function on the rl interface. The close function closes the rl instance and releases the standard input (stdin) and output (stdout) streams.

      This function brings us to the second event you listened for on the rl instance: the close event. This event is emitted when you call rl.close(). After writing data to a stream, you have to call the end function on the stream to tell your program that it should no longer write data to the writable stream. Doing this will ensure that the data is completely flushed to your output file. Therefore, when you type the word exit, you close the rl instance and stop your writable stream by calling the end function.

      To provide feedback to the user that the program has successfully written all the text from the terminal to the specified filePath, you listened for the finish event on writableStream. In the callback function, you logged a message to the terminal to inform the user when writing is complete. Finally, you exited the process after 100ms to give the finish event enough time to fire and log its message.

      Finally, to call this function in your mycliprogram, replace the console.log statement in the case 1 block in the switch statement with the new write function, as shown here:

      node-file-streams/mycliprogram

      ...
      switch (commands.indexOf(command)) {
          ...
      
          case 1:
              write(args[3]);
              break;
      
          ...
      }
      

      Save the file containing the new changes. Then run the command-line application in your terminal with the write command.

      • ./mycliprogram write output.txt

      At the Enter a sentence prompt, add any input you’d like. After a couple of entries, type exit.

      The output will look similar to this (with your input displaying instead of the highlighted lines):

      Output

      Enter a sentence: Twinkle, twinkle, little star
      Enter a sentence: How I wonder what you are
      Enter a sentence: Up above the hills so high
      Enter a sentence: Like a diamond in the sky
      Enter a sentence: exit
      All your sentences have been written to output.txt

      Check output.txt to see the file content using the read command you created earlier.

      • ./mycliprogram read output.txt

      The terminal output should contain all the text you have typed into the command except exit. Based on the input above, the output.txt file has the following content:

      Output

      Twinkle, twinkle, little star
      How I wonder what you are
      Up above the hills so high
      Like a diamond in the sky

      In this step, you wrote to a file using streams. Next, you will implement the function that copies files in your command-line program.

      Step 4 — Copying Files Using pipe()

      In this step, you will use the pipe function to create a copy of a file using streams. Although there are other ways to copy files using streams, using pipe is preferred because you don’t need to manage the data flow.

      For example, one way to copy files using streams would be to create a readable stream for the file, listen to the stream on the data event, and write each chunk from the stream event to a writable stream of the file copy. The snippet below shows an example:

      example.js

      const fs = require('fs');
      const readableStream = fs.createReadStream('lorem-ipsum.txt', 'utf8');
      const writableStream = fs.createWriteStream('lorem-ipsum-copy.txt');
      
      readableStream.on('data', (chunk) => {
          writableStream.write(chunk);
      });
      
      readableStream.on('end', () => {
          writableStream.end();
      });
      

      The disadvantage of this method is that you need to manage the events on both the readable and writable streams.

      The preferred method for copying files using streams is to use pipe. A plumbing pipe carries water from a source, such as a water tank, to a destination, such as a faucet or tap. Similarly, you use pipe to direct data from a source (readable) stream to a destination (writable) stream. (If you are familiar with the Linux-based bash shell, the pipe | operator directs data from one stream to another.)

      Piping in Node.js provides the ability to read data from a source and write it somewhere else without managing the data flow as you would using the first method. Unlike the previous approach, you do not need to manage the events on both the readable and writable streams. For this reason, it is a preferred approach for implementing a copy command in your command-line application that uses streams.

      In the mycliprogram file, you will add a new function invoked when a user runs the program with the copy command-line argument. The copy method will use pipe() to copy from an input file to the destination copy of the file. Create the copy function after the write function as shown below:

      node-file-streams/mycliprogram

      ...
      function copy(filePath) {
          const inputStream = fs.createReadStream(filePath)
          const fileCopyPath = filePath.split('.')[0] + '-copy.' + filePath.split('.')[1]
          const outputStream = fs.createWriteStream(fileCopyPath)
      
          inputStream.pipe(outputStream)
      
          outputStream.on('finish', () => {
              console.log(`You have successfully created a ${filePath} copy. The new file name is ${fileCopyPath}.`);
          })
      }
      

      In the copy function, you created an input or readable stream using fs.createReadStream(). You also generated a new name for the destination file (the copy) and created an output or writable stream using fs.createWriteStream(). Then you piped the data from the inputStream to the outputStream using .pipe(). Finally, you listened for the finish event and printed out a message on a successful file copy.

      Recall that to close a writable stream, you have to call the end() function on the stream. When piping streams, the end() function is called on the writable stream (outputStream) when the readable stream (inputStream) emits the end event. The end() function of the writable stream emits the finish event, and you listen for this event to indicate that you have finished copying a file.

      To see this function in action, open the mycliprogram file and update the case 2 block of the switch statement as shown below:

      node-file-streams/mycliprogram

      ...
      switch (commands.indexOf(command)) {
          ...
      
          case 2:
              copy(args[3]);
              break;
      
          ...
      }
      

      Calling the copy function in the case 2 block of the switch statements ensures that when you run the mycliprogram program with the copy command and the required file paths, the copy function is executed.

      Run mycliprogram:

      • ./mycliprogram copy lorem-ipsum.txt

      The output will look similar to this:

      Output

      You have successfully created a lorem-ipsum.txt copy. The new file name is lorem-ipsum-copy.txt.

      Within the node-file-streams folder, you will see a newly added file with the name lorem-ipsum-copy.txt.

      You have successfully added a copy function to your command-line program using pipe. In the next step, you will use streams to modify the content of a file.

      Step 5 — Reversing the Content of a File using Transform()

      In the previous three steps of this tutorial, you have worked with streams using the fs module. In this section, you will modify file streams using the Transform() class from the native stream module, which provides a transform stream. You can use a transform stream to read data, manipulate the data, and provide new data as output. Thus, the output is a ‘transformation’ of the input data. Node.js modules that use transform streams include the crypto module for cryptography and the zlib module with gzip for compressing and uncompressing files.

      You are going to implement a custom transform stream using the Transform() abstract class. The transform stream you create will reverse the contents of a file chunk by chunk, which will demonstrate how to use transform streams to modify the content of a file as you want.

      In the mycliprogram file, you will add a reverse function that the program will call when a user passes the reverse command-line argument.

      First, you need to import the Transform() class at the top of the file below the other imports. Add the highlighted line as shown below:

      node-file-streams/mycliprogram

      #!/usr/bin/env node
      ...
      const stream = require('stream');
      const Transform = stream.Transform || require('readable-stream').Transform;
      

      In Node.js versions earlier than v0.10, the Transform abstract class is missing. Therefore, the code block above includes the readable-stream polyfill so that this program can work with earlier versions of Node.js. If the Node.js version is v0.10 or later, the program uses the abstract class; if not, it uses the polyfill.

      Note: If you are using a Node.js version < 0.10, you will have to run npm init -y to create a package.json file and install the polyfill using npm install readable-stream to your working directory for the polyfill to be applied.

      Next, you will create the reverse function right under your copy function. In that function, you will create a readable stream using the filePath parameter, generate a name for the reversed file, and create a writable stream using that name. Then you create reverseStream, an instance of the Transform() class. When you call the Transform() class, you pass in an object containing one important function: the transform function.

      Beneath the copy function, add the code block below to add the reverse function.

      node-file-streams/mycliprogram

      ...
      function reverse(filePath) {
          const readStream = fs.createReadStream(filePath);
          const reversedDataFilePath = filePath.split('.')[0] + '-reversed.'+ filePath.split('.')[1];
          const writeStream = fs.createWriteStream(reversedDataFilePath);
      
          const reverseStream = new Transform({
              transform (data, encoding, callback) {
                  const reversedData = data.toString().split("").reverse().join("");
                  this.push(reversedData);
                  callback();
              }
          });
      
          readStream.pipe(reverseStream).pipe(writeStream).on('finish', () => {
              console.log(`Finished reversing the contents of ${filePath} and saving the output to ${reversedDataFilePath}.`);
          });
      }
      

      The transform function receives three parameters: the data (chunk), the encoding type, and a callback function. Within this function, you converted the data to a string, split the string into an array of characters, reversed the array, and joined the characters back together. This process rewrites the data backward instead of forward.
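      The reversal itself is plain string manipulation, shown here on its own. Note that the transform callback receives one chunk at a time, so for a file small enough to arrive as a single chunk, this reverses the whole file:

```javascript
// Split into characters, reverse the array, and join back into a string --
// the same operation the transform callback applies to each chunk.
const data = 'Lorem ipsum';
const reversedData = data.toString().split('').reverse().join('');

console.log(reversedData); // muspi meroL
```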

      Next, you connected the readStream to the reverseStream and finally to the writeStream using two pipe() functions. Finally, you listened for the finish event to alert the user when the file contents have been completely reversed.

      You will notice that the code above uses another syntax for listening for the finish event. Instead of listening for the finish event on writeStream on a new line, you chained the on function to the second pipe function. You can do this because pipe() returns the destination stream. In this case, chaining has the same effect as calling on('finish') directly on the writeStream.

      To wrap things up, replace the console.log statement in the case 3 block of the switch statement with reverse().

      node-file-streams/mycliprogram

      ...
      switch (commands.indexOf(command)) {
          ...
      
          case 3:
              reverse(args[3]);
              break;
      
          ...
      }
      

      To test this function, you will use another file containing the names of countries in alphabetical order (countries.csv). You can download it to your working directory by running the command below.

      • wget https://raw.githubusercontent.com/do-community/node-file-streams/999e66a11cd04bc59843a9c129da759c1c515faf/countries.csv

      You can then run mycliprogram.

      • ./mycliprogram reverse countries.csv

      The output will look similar to this:

      Output

      Finished reversing the contents of countries.csv and saving the output to countries-reversed.csv.

      Compare the contents of countries-reversed.csv with countries.csv to see the transformation. Each name is now written backward, and the order of the names has also been reversed (“Afghanistan” is written as “natsinahgfA” and appears last, and “Zimbabwe” is written as “ewbabmiZ” and appears first).

      You have successfully created a custom transform stream. You have also created a command-line program with functions that use streams for file handling.

      Conclusion

      Streams are used in native Node.js modules and in various yarn and npm packages that perform input/output operations because they provide an efficient way to handle data. In this article, you used various stream-based functions to work with files in Node.js. You built a command-line program with read, write, copy, and reverse commands, implementing each command in a function named accordingly. To implement these functions, you used the createReadStream and createWriteStream functions from the fs module, the pipe method available on readable streams, the createInterface function from the readline module, and the abstract Transform class. You then pieced these functions together into a small command-line program.

      As a next step, you could extend the command-line program you created to include other file system functionality you might want to use locally. A good example would be writing a personal tool that converts a .tsv data stream to .csv, or attempting to replicate the wget command you used in this article to download files from GitHub.

      The command-line program you have written handles command-line arguments itself and uses a simple prompt to get user input. You can learn more about building more robust and maintainable command-line applications by following How To Handle Command-line Arguments in Node.js Scripts and How To Create Interactive Command-line Prompts with Inquirer.js.

      Additionally, Node.js provides extensive documentation on the various Node.js stream module classes, methods, and events you might need for your use case.




      How To Build a Media Processing API in Node.js With Express and FFmpeg.wasm


      The author selected the Electronic Frontier Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      Handling media assets is becoming a common requirement of modern back-end services. Using dedicated, cloud-based solutions may help when you’re dealing with massive scale or performing expensive operations, such as video transcoding. However, the extra cost and added complexity may be hard to justify when all you need is to extract a thumbnail from a video or check that user-generated content is in the correct format. Particularly at a smaller scale, it makes sense to add media processing capability directly to your Node.js API.

      In this guide, you will build a media API in Node.js with Express and ffmpeg.wasm — a WebAssembly port of the popular media processing tool. You’ll build an endpoint that extracts a thumbnail from a video as an example. You can use the same techniques to add other features supported by FFmpeg to your API.

      When you’re finished, you will have a good grasp of handling binary data in Express and processing it with ffmpeg.wasm. You’ll also handle requests made to your API that cannot be processed in parallel.

      Prerequisites

      To complete this tutorial, you will need:

      This tutorial was verified with Node v16.11.0, npm v7.15.1, express v4.17.1, and ffmpeg.wasm v0.10.1.

      Step 1 — Setting Up the Project and Creating a Basic Express Server

      In this step, you will create a project directory, initialize a Node.js project and install the required dependencies, and set up a basic Express server.

      Start by opening the terminal and creating a new directory for the project:

      • mkdir ffmpeg-api

      Navigate to the new directory:

      • cd ffmpeg-api

      Use npm init to create a new package.json file. The -y parameter indicates that you’re happy with the default settings for the project.

      • npm init -y

      Finally, use npm install to install the packages required to build the API. The --save flag indicates that you wish to save those as dependencies in the package.json file.

      • npm install --save @ffmpeg/ffmpeg @ffmpeg/core express cors multer p-queue

      Now that you have installed the dependencies, you’ll set up a web server that responds to requests using Express.

      First, open a new file called server.mjs with nano or your editor of choice:

      • nano server.mjs

      The code in this file will register the cors middleware which will permit requests made from websites with a different origin. At the top of the file, import the express and cors dependencies:

      server.mjs

      import express from 'express';
      import cors from 'cors';
      

      Then, create an Express app and start the server on port 3000 by adding the following code below the import statements:

      server.mjs

      ...
      const app = express();
      const port = 3000;
      
      app.use(cors());
      
      app.listen(port, () => {
          console.log(`[info] ffmpeg-api listening at http://localhost:${port}`)
      });
      

      You can start the server by running the following command:

      • node server.mjs

      You’ll see the following output:

      Output

      [info] ffmpeg-api listening at http://localhost:3000

      When you try loading http://localhost:3000 in your browser, you’ll see Cannot GET /. This is Express telling you it is listening for requests.

      With your Express server now set up, you’ll create a client to upload the video and make requests to your Express server.

       Step 2 — Creating a Client and Testing the Server

      In this section, you’ll create a web page that will let you select a file and upload it to the API for processing.

      Start by opening a new file called client.html:

      • nano client.html

      In your client.html file, create a file input and a Create Thumbnail button. Below, add an empty <div> element to display errors and an image that will show the thumbnail that the API sends back. At the very end of the <body> tag, load a script called client.js. Your final HTML template should look as follows:

      client.html

      <!DOCTYPE html>
      <html lang="en">
      <head>
          <meta charset="UTF-8">
          <title>Create a Thumbnail from a Video</title>
          <style>
              #thumbnail {
                  max-width: 100%;
              }
          </style>
      </head>
      <body>
          <div>
              <input id="file-input" type="file" />
              <button id="submit">Create Thumbnail</button>
              <div id="error"></div>
              <img id="thumbnail" />
          </div>
          <script src="client.js"></script>
      </body>
      </html>
      

      Note that each element has a unique id. You’ll need them when referring to the elements from the client.js script. The styling on the #thumbnail element is there to ensure that the image fits on the screen when it loads.

      Save the client.html file and open client.js:

      • nano client.js

      In your client.js file, start by defining variables that store references to the HTML elements you created:

      client.js

      const fileInput = document.querySelector('#file-input');
      const submitButton = document.querySelector('#submit');
      const thumbnailPreview = document.querySelector('#thumbnail');
      const errorDiv = document.querySelector('#error');
      

      Then, attach a click event listener to the submitButton variable to check whether you’ve selected a file:

      client.js

      ...
      submitButton.addEventListener('click', async () => {
          const { files } = fileInput;
      });
      

      Next, create a function showError() that will output an error message when a file is not selected. Add the showError() function above your event listener:

      client.js

      const fileInput = document.querySelector('#file-input');
      const submitButton = document.querySelector('#submit');
      const thumbnailPreview = document.querySelector('#thumbnail');
      const errorDiv = document.querySelector('#error');
      
      function showError(msg) {
          errorDiv.innerText = `ERROR: ${msg}`;
      }
      
      submitButton.addEventListener('click', async () => {
      ...
      

      Now, you will build a function createThumbnail() that will make a request to the API, send the video, and receive a thumbnail in response. At the top of your client.js file, define a new constant with the URL to a /thumbnail endpoint:

      const API_ENDPOINT = 'http://localhost:3000/thumbnail';
      
      const fileInput = document.querySelector('#file-input');
      const submitButton = document.querySelector('#submit');
      const thumbnailPreview = document.querySelector('#thumbnail');
      const errorDiv = document.querySelector('#error');
      ...
      

      You will define and use the /thumbnail endpoint in your Express server.

      Next, add the createThumbnail() function below your showError() function:

      client.js

      ...
      function showError(msg) {
          errorDiv.innerText = `ERROR: ${msg}`;
      }
      
      async function createThumbnail(video) {
      
      }
      ...
      

      Web APIs frequently use JSON to transfer structured data to and from the client. To include a video in a JSON payload, you would have to encode it in base64, which would increase its size by about 30%. You can avoid this by using multipart requests instead. Multipart requests allow you to transfer structured data, including binary files, over HTTP without the unnecessary overhead. You can create them using the FormData() constructor function.
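      You can verify the base64 overhead yourself: encoding maps every 3 bytes of input to 4 output characters, so a 3,000-byte buffer grows to 4,000 characters, roughly a third larger:

```javascript
// Base64 maps each 3-byte group to 4 output characters.
const raw = Buffer.alloc(3000); // 3,000 bytes of binary data
const encoded = raw.toString('base64');

console.log(encoded.length); // 4000
```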

      Inside the createThumbnail() function, create an instance of FormData and append the video file to the object. Then make a POST request to the API endpoint using the Fetch API with the FormData() instance as the body. Interpret the response as a binary file (or blob) and convert it to a data URL so that you can assign it to the <img> tag you created earlier.

      Here’s the full implementation of createThumbnail():

      client.js

      ...
      async function createThumbnail(video) {
          const payload = new FormData();
          payload.append('video', video);
      
          const res = await fetch(API_ENDPOINT, {
              method: 'POST',
              body: payload
          });
      
          if (!res.ok) {
              throw new Error('Creating thumbnail failed');
          }
      
          const thumbnailBlob = await res.blob();
          const thumbnail = await blobToDataURL(thumbnailBlob);
      
          return thumbnail;
      }
      ...
      

      You’ll notice createThumbnail() calls a function named blobToDataURL() in its body. This is a helper function that will convert a blob to a data URL.

      Above your createThumbnail() function, create the function blobToDataURL() that returns a promise:

      client.js

      ...
      async function blobToDataURL(blob) {
          return new Promise((resolve, reject) => {
              const reader = new FileReader();
              reader.onload = () => resolve(reader.result);
              reader.onerror = () => reject(reader.error);
              reader.onabort = () => reject(new Error("Read aborted"));
              reader.readAsDataURL(blob);
          });
      }
      ...
      

      blobToDataURL() uses FileReader to read the contents of the binary file and format it as a data URL.

      With the createThumbnail() and showError() functions now defined, you can use them to finish implementing the event listener:

      client.js

      ...
      submitButton.addEventListener('click', async () => {
          const { files } = fileInput;
      
          if (files.length > 0) {
              const file = files[0];
              try {
                  const thumbnail = await createThumbnail(file);
                  thumbnailPreview.src = thumbnail;
              } catch(error) {
                  showError(error);
              }
          } else {
              showError('Please select a file');
          }
      });
      

      When a user clicks on the button, the event listener will pass the file to the createThumbnail() function. If successful, it will assign the thumbnail to the <img> element you created earlier. In case the user doesn’t select a file or the request fails, it will call the showError() function to display an error.

      At this point, your client.js file will look like the following:

      client.js

      const API_ENDPOINT = 'http://localhost:3000/thumbnail';
      
      const fileInput = document.querySelector('#file-input');
      const submitButton = document.querySelector('#submit');
      const thumbnailPreview = document.querySelector('#thumbnail');
      const errorDiv = document.querySelector('#error');
      
      function showError(msg) {
          errorDiv.innerText = `ERROR: ${msg}`;
      }
      
      async function blobToDataURL(blob) {
          return new Promise((resolve, reject) => {
              const reader = new FileReader();
              reader.onload = () => resolve(reader.result);
              reader.onerror = () => reject(reader.error);
              reader.onabort = () => reject(new Error("Read aborted"));
              reader.readAsDataURL(blob);
          });
      }
      
      async function createThumbnail(video) {
          const payload = new FormData();
          payload.append('video', video);
      
          const res = await fetch(API_ENDPOINT, {
              method: 'POST',
              body: payload
          });
      
          if (!res.ok) {
              throw new Error('Creating thumbnail failed');
          }
      
          const thumbnailBlob = await res.blob();
          const thumbnail = await blobToDataURL(thumbnailBlob);
      
          return thumbnail;
      }
      
      submitButton.addEventListener('click', async () => {
          const { files } = fileInput;
      
          if (files.length > 0) {
              const file = files[0];
      
              try {
                  const thumbnail = await createThumbnail(file);
                  thumbnailPreview.src = thumbnail;
              } catch(error) {
                  showError(error);
              }
          } else {
              showError('Please select a file');
          }
      });
      

      Start the server again by running:

      • node server.mjs

      With your client now set up, uploading the video file here will result in receiving an error message. This is because the /thumbnail endpoint is not built yet. In the next step, you’ll create the /thumbnail endpoint in Express to accept the video file and create the thumbnail.

       Step 3 — Setting Up an Endpoint to Accept Binary Data

      In this step, you will set up a POST request for the /thumbnail endpoint and use middleware to accept multipart requests.

      Open server.mjs in an editor:

      • nano server.mjs

      Then, import multer at the top of the file:

      server.mjs

      import express from 'express';
      import cors from 'cors';
      import multer from 'multer';
      ...
      

      Multer is a middleware that processes incoming multipart/form-data requests before passing them to your endpoint handler. It extracts fields and files from the body and attaches them to the request object in Express. You can configure where to store the uploaded files and set limits on file size and format.

      After importing it, initialize the multer middleware with the following options:

      server.mjs

      ...
      const app = express();
      const port = 3000;
      
      const upload = multer({
          storage: multer.memoryStorage(),
          limits: { fileSize: 100 * 1024 * 1024 }
      });
      
      app.use(cors());
      ...
      

      The storage option lets you choose where to store the incoming files. Calling multer.memoryStorage() will initialize a storage engine that keeps files in Buffer objects in memory as opposed to writing them to disk. The limits option lets you define various limits on what files will be accepted. Set the fileSize limit to 100MB or a different number that matches your needs and the amount of memory available on your server. This will prevent your API from crashing when the input file is too big.

      Note: Due to the limitations of WebAssembly, ffmpeg.wasm cannot handle input files over 2GB in size.

      Next, set up the POST /thumbnail endpoint itself:

      server.mjs

      ...
      app.use(cors());
      
      app.post('/thumbnail', upload.single('video'), async (req, res) => {
          const videoData = req.file.buffer;
      
          res.sendStatus(200);
      });
      
      app.listen(port, () => {
          console.log(`[info] ffmpeg-api listening at http://localhost:${port}`)
      });
      

      The upload.single('video') call will set up a middleware for that endpoint only that will parse the body of a multipart request that includes a single file. The first parameter is the field name. It must match the one you gave to FormData when creating the request in client.js. In this case, it’s video. multer will then attach the parsed file to the req parameter. The content of the file will be under req.file.buffer.
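      The field-name matching is easy to check in isolation. This sketch assumes Node.js 18 or later, where FormData and Blob are available as globals; the key appended here must match the name passed to upload.single() on the server:

```javascript
// The field name used here must match the one given to upload.single()
// on the server ('video' in this tutorial).
const payload = new FormData(); // global in Node.js 18+
payload.append('video', new Blob([new Uint8Array([0x00, 0x01])]), 'clip.mp4');

console.log([...payload.keys()]); // [ 'video' ]
```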

      At this point, the endpoint doesn’t do anything with the data it receives. It acknowledges the request by sending an empty 200 response. In the next step, you’ll replace that with the code that extracts a thumbnail from the video data received.

      Step 4 — Processing Media With ffmpeg.wasm

      In this step, you’ll use ffmpeg.wasm to extract a thumbnail from the video file received by the POST /thumbnail endpoint.

      ffmpeg.wasm is a pure WebAssembly and JavaScript port of FFmpeg. Its main goal is to allow running FFmpeg directly in the browser. However, because Node.js is built on top of V8 — Chrome’s JavaScript engine — you can use the library on the server too.

      The benefit of using a native port of FFmpeg over a wrapper built on top of the ffmpeg command is that if you’re planning to deploy your app with Docker, you don’t have to build a custom image that includes both FFmpeg and Node.js. This will save you time and reduce the maintenance burden of your service.

      Add the following import to the top of server.mjs:

      server.mjs

      import express from 'express';
      import cors from 'cors';
      import multer from 'multer';
      import { createFFmpeg } from '@ffmpeg/ffmpeg';
      ...
      

      Then, create an instance of ffmpeg.wasm and start loading the core:

      server.mjs

      ...
      import { createFFmpeg } from '@ffmpeg/ffmpeg';
      
      const ffmpegInstance = createFFmpeg({ log: true });
      let ffmpegLoadingPromise = ffmpegInstance.load();
      
      const app = express();
      ...
      

      The ffmpegInstance variable holds a reference to the library. Calling ffmpegInstance.load() starts loading the core into memory asynchronously and returns a promise. Store the promise in the ffmpegLoadingPromise variable so that you can check whether the core has loaded.

      Next, define the following helper function that will use ffmpegLoadingPromise to wait for the core to load in case the first request arrives before it’s ready:

      server.mjs

      ...
      let ffmpegLoadingPromise = ffmpegInstance.load();
      
      async function getFFmpeg() {
          if (ffmpegLoadingPromise) {
              await ffmpegLoadingPromise;
              ffmpegLoadingPromise = undefined;
          }
      
          return ffmpegInstance;
      }
      
      const app = express();
      ...
      

      The getFFmpeg() function returns a reference to the library stored in the ffmpegInstance variable. Before returning it, it checks whether the library has finished loading. If not, it will wait until ffmpegLoadingPromise resolves. In case the first request to your POST /thumbnail endpoint arrives before ffmpegInstance is ready to use, your API will wait and resolve it when it can rather than rejecting it.
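      This lazy-initialization pattern generalizes to any resource that loads asynchronously. The following is a minimal sketch with a simulated slow load standing in for ffmpegInstance.load(); concurrent callers all await the same promise and receive the same initialized object:

```javascript
let resource = null;
let loadingPromise = new Promise((resolve) => {
    // Simulate a slow initialization, like ffmpegInstance.load().
    setTimeout(() => {
        resource = { ready: true };
        resolve();
    }, 50);
});

async function getResource() {
    if (loadingPromise) {
        await loadingPromise; // wait for loading to finish the first time
        loadingPromise = undefined;
    }
    return resource;
}

// Both callers resolve with the same object once loading completes.
Promise.all([getResource(), getResource()]).then(([a, b]) => {
    console.log(a === b && a.ready); // true
});
```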

      Now, implement the POST /thumbnail endpoint handler. Replace res.sendStatus(200); at the end of the function with a call to getFFmpeg() to get a reference to ffmpeg.wasm when it’s ready:

      server.mjs

      ...
      app.post('/thumbnail', upload.single('video'), async (req, res) => {
          const videoData = req.file.buffer;
      
          const ffmpeg = await getFFmpeg();
      });
      ...
      

      ffmpeg.wasm works on top of an in-memory file system. You can read and write to it using ffmpeg.FS. When running FFmpeg operations, you will pass virtual file names to the ffmpeg.run function as an argument the same way as you would when working with the CLI tool. Any output files created by FFmpeg will be written to the file system for you to retrieve.

      In this case, the input file is a video. The output file will be a single PNG image. Define the following variables:

      server.mjs

      ...
          const ffmpeg = await getFFmpeg();
      
          const inputFileName = `input-video`;
          const outputFileName = `output-image.png`;
          let outputData = null;
      });
      ...
      

      The file names will be used on the virtual file system. outputData is where you’ll store the thumbnail when it’s ready.

      Call ffmpeg.FS() to write the video data to the in-memory file system:

      server.mjs

      ...
          let outputData = null;
      
          ffmpeg.FS('writeFile', inputFileName, videoData);
      });
      ...
      

      Then, run the FFmpeg operation:

      server.mjs

      ...
          ffmpeg.FS('writeFile', inputFileName, videoData);
      
          await ffmpeg.run(
              '-ss', '00:00:01.000',
              '-i', inputFileName,
              '-frames:v', '1',
              outputFileName
          );
      });
      ...
      

      The -ss parameter seeks to the specified time (in this case, 1 second from the beginning of the video). -i specifies the input file. -frames:v limits the number of frames written to the output (a single frame in this scenario). outputFileName at the end indicates where FFmpeg will write the output.

      After FFmpeg exits, use ffmpeg.FS() to read the data from the file system and delete both the input and output files to free up memory:

      server.mjs

      ...
          await ffmpeg.run(
              '-ss', '00:00:01.000',
              '-i', inputFileName,
              '-frames:v', '1',
              outputFileName
          );
      
          outputData = ffmpeg.FS('readFile', outputFileName);
          ffmpeg.FS('unlink', inputFileName);
          ffmpeg.FS('unlink', outputFileName);
      });
      ...
      

      Finally, dispatch the output data in the body of the response:

      server.mjs

      ...
          ffmpeg.FS('unlink', outputFileName);
      
          res.writeHead(200, {
              'Content-Type': 'image/png',
              'Content-Disposition': `attachment;filename=${outputFileName}`,
              'Content-Length': outputData.length
          });
          res.end(Buffer.from(outputData, 'binary'));
      });
      ...
      

      Calling res.writeHead() dispatches the response head. The second parameter includes custom HTTP headers with information about the data that will follow in the body. The res.end() function sends the data from its first argument as the body of the response and finalizes the response. The outputData variable is a raw array of bytes as returned by ffmpeg.FS(). Passing it to Buffer.from() initializes a Buffer so that res.end() handles the binary data correctly.
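      The conversion step is easy to see in miniature. Here, a Uint8Array stands in for the data returned by ffmpeg.FS('readFile', …), using the four magic-number bytes that begin every PNG file:

```javascript
// A Uint8Array standing in for the data returned by ffmpeg.FS('readFile', ...).
const outputData = new Uint8Array([0x89, 0x50, 0x4e, 0x47]); // PNG magic number
const body = Buffer.from(outputData);

// The Buffer wraps the same raw bytes, ready to be passed to res.end().
console.log(body.length); // 4
console.log(body[0] === 0x89); // true
```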

      At this point, your POST /thumbnail endpoint implementation should look like this:

      server.mjs

      ...
      app.post('/thumbnail', upload.single('video'), async (req, res) => {
          const videoData = req.file.buffer;
      
          const ffmpeg = await getFFmpeg();
      
          const inputFileName = `input-video`;
          const outputFileName = `output-image.png`;
          let outputData = null;
      
          ffmpeg.FS('writeFile', inputFileName, videoData);
      
          await ffmpeg.run(
              '-ss', '00:00:01.000',
              '-i', inputFileName,
              '-frames:v', '1',
              outputFileName
          );
      
          outputData = ffmpeg.FS('readFile', outputFileName);
          ffmpeg.FS('unlink', inputFileName);
          ffmpeg.FS('unlink', outputFileName);
      
          res.writeHead(200, {
              'Content-Type': 'image/png',
              'Content-Disposition': `attachment;filename=${outputFileName}`,
              'Content-Length': outputData.length
          });
          res.end(Buffer.from(outputData, 'binary'));
      });
      ...
      

      Aside from the 100MB file limit for uploads, there’s no input validation or error handling. When ffmpeg.wasm fails to process a file, reading the output from the virtual file system will fail and prevent the response from being sent. For the purposes of this tutorial, wrap the implementation of the endpoint in a try-catch block to handle that scenario:

      server.mjs

      ...
      app.post('/thumbnail', upload.single('video'), async (req, res) => {
          try {
              const videoData = req.file.buffer;
      
              const ffmpeg = await getFFmpeg();
      
              const inputFileName = `input-video`;
              const outputFileName = `output-image.png`;
              let outputData = null;
      
              ffmpeg.FS('writeFile', inputFileName, videoData);
      
              await ffmpeg.run(
                  '-ss', '00:00:01.000',
                  '-i', inputFileName,
                  '-frames:v', '1',
                  outputFileName
              );
      
              outputData = ffmpeg.FS('readFile', outputFileName);
              ffmpeg.FS('unlink', inputFileName);
              ffmpeg.FS('unlink', outputFileName);
      
              res.writeHead(200, {
                  'Content-Type': 'image/png',
                  'Content-Disposition': `attachment;filename=${outputFileName}`,
                  'Content-Length': outputData.length
              });
              res.end(Buffer.from(outputData, 'binary'));
          } catch(error) {
              console.error(error);
              res.sendStatus(500);
          }
      });
      ...
      

      There is a second issue: ffmpeg.wasm cannot handle two requests in parallel. You can try this yourself by launching the server:

      • node --experimental-wasm-threads server.mjs

      Note the flag required for ffmpeg.wasm to work. The library depends on WebAssembly threads and bulk memory operations. These have been in V8/Chrome since 2019. However, as of Node.js v16.11.0, WebAssembly threads remain behind a flag in case the proposal changes before it is finalized. Bulk memory operations also require a flag in older versions of Node. If you’re running Node.js 15 or lower, add --experimental-wasm-bulk-memory as well.

      The output of the command will look like this:

      Output

      [info] use ffmpeg.wasm v0.10.1
      [info] load ffmpeg-core
      [info] loading ffmpeg-core
      [info] fetch ffmpeg.wasm-core script from @ffmpeg/core
      [info] ffmpeg-api listening at http://localhost:3000
      [info] ffmpeg-core loaded

      Open client.html in a web browser and select a video file. When you click the Create Thumbnail button, you should see the thumbnail appear on the page. Behind the scenes, the site uploads the video to the API, which processes it and responds with the image. However, when you click the button repeatedly in quick succession, the API will handle the first request. The subsequent requests will fail:

      Output

      Error: ffmpeg.wasm can only run one command at a time
          at Object.run (.../ffmpeg-api/node_modules/@ffmpeg/ffmpeg/src/createFFmpeg.js:126:13)
          at file://.../ffmpeg-api/server.mjs:54:26
          at runMicrotasks (<anonymous>)
          at processTicksAndRejections (internal/process/task_queues.js:95:5)

      In the next section, you’ll learn how to deal with concurrent requests.

      Step 5 — Handling Concurrent Requests

      Since ffmpeg.wasm can only execute a single operation at a time, you’ll need a way of serializing requests that come in and processing them one at a time. In this scenario, a promise queue is a perfect solution. Instead of starting to process each request right away, it will be queued up and processed when all the requests that arrived before it have been handled.
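      The core idea behind a queue with concurrency 1 can be sketched in a few lines: each job is chained onto the previous one, so a slow job always finishes before the next begins. The helper below illustrates the idea only; it is not the p-queue implementation itself:

```javascript
let tail = Promise.resolve();
const completed = [];

// Chain each job after the previous one so only one runs at a time.
function enqueue(job) {
    const run = tail.then(job);
    tail = run.catch(() => {}); // keep the chain alive if a job rejects
    return run;
}

enqueue(async () => {
    await new Promise((resolve) => setTimeout(resolve, 30)); // slow job
    completed.push('first');
});
enqueue(async () => {
    completed.push('second'); // fast job, but queued behind the first
});

tail.then(() => console.log(completed.join(','))); // first,second
```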

      Open server.mjs in your preferred editor:

      • nano server.mjs

      Import p-queue at the top of server.mjs:

      server.mjs

      import express from 'express';
      import cors from 'cors';
      import multer from 'multer';
      import { createFFmpeg } from '@ffmpeg/ffmpeg';
      import PQueue from 'p-queue';
      ...
      

      Then, create a new queue near the top of the server.mjs file, below the ffmpegLoadingPromise variable:

      server.mjs

      ...
      const ffmpegInstance = createFFmpeg({ log: true });
      let ffmpegLoadingPromise = ffmpegInstance.load();
      
      const requestQueue = new PQueue({ concurrency: 1 });
      ...
      

      In the POST /thumbnail endpoint handler, wrap the calls to ffmpeg in a function that will be queued up:

      server.mjs

      ...
      app.post('/thumbnail', upload.single('video'), async (req, res) => {
          try {
              const videoData = req.file.buffer;
      
              const ffmpeg = await getFFmpeg();
      
              const inputFileName = `input-video`;
              const outputFileName = `thumbnail.png`;
              let outputData = null;
      
              await requestQueue.add(async () => {
                  ffmpeg.FS('writeFile', inputFileName, videoData);
      
                  await ffmpeg.run(
                      '-ss', '00:00:01.000',
                      '-i', inputFileName,
                      '-frames:v', '1',
                      outputFileName
                  );
      
                  outputData = ffmpeg.FS('readFile', outputFileName);
                  ffmpeg.FS('unlink', inputFileName);
                  ffmpeg.FS('unlink', outputFileName);
              });
      
              res.writeHead(200, {
                  'Content-Type': 'image/png',
                  'Content-Disposition': `attachment;filename=${outputFileName}`,
                  'Content-Length': outputData.length
              });
              res.end(Buffer.from(outputData, 'binary'));
          } catch(error) {
              console.error(error);
              res.sendStatus(500);
          }
      });
      ...
      

      Every time a new request comes in, it will only start processing when there’s nothing else queued up in front of it. Note that the final sending of the response can happen asynchronously. Once the ffmpeg.wasm operation finishes running, another request can start processing while the response goes out.

      To test that everything works as expected, start up the server again:

      • node --experimental-wasm-threads server.mjs

      Open the client.html file in your browser and try uploading a file.

      A screenshot of client.html with a thumbnail loaded

      With the queue in place, the API will now respond every time. The requests will be handled sequentially in the order in which they arrive.

      Conclusion

      In this article, you built a Node.js service that extracts a thumbnail from a video using ffmpeg.wasm. You learned how to upload binary data from the browser to your Express API using multipart requests and how to process media with FFmpeg in Node.js without relying on external tools or having to write data to disk.

      FFmpeg is an incredibly versatile tool. You can use the knowledge from this tutorial to take advantage of any features that FFmpeg supports and use them in your project. For example, to generate a three-second GIF, change the ffmpeg.run call to this on the POST /thumbnail endpoint:

      server.mjs

      ...
      await ffmpeg.run(
          '-y',
          '-t', '3',
          '-i', inputFileName,
          '-filter_complex', 'fps=5,scale=720:-1:flags=lanczos[x];[x]split[x1][x2];[x1]palettegen[p];[x2][p]paletteuse',
          '-f', 'gif',
          outputFileName
      );
      ...
      

      The library accepts the same parameters as the original ffmpeg CLI tool. You can use the official documentation to find a solution for your use case and test it quickly in the terminal.

      Thanks to ffmpeg.wasm being self-contained, you can dockerize this service using the stock Node.js base images and scale your service up by keeping multiple nodes behind a load balancer. Follow the tutorial How To Build a Node.js Application with Docker to learn more.

      If your use case requires performing more expensive operations, such as transcoding large videos, make sure that you run your service on machines with enough memory to store them. Due to current limitations in WebAssembly, the maximum input file size cannot exceed 2GB, although this might change in the future.

      Additionally, ffmpeg.wasm cannot take advantage of some x86 assembly optimizations from the original FFmpeg codebase. That means some operations can take a long time to finish. If that’s the case, consider whether this is the right solution for your use case. Alternatively, make requests to your API asynchronous. Instead of waiting for the operation to finish, queue it up and respond with a unique ID. Create another endpoint that the clients can query to find out whether the processing ended and the output file is ready. Learn more about the asynchronous request-reply pattern for REST APIs and how to implement it.
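As a rough illustration of that asynchronous request-reply pattern, here is a minimal in-memory job registry in plain JavaScript. The submitJob and getJob names are hypothetical, and a production service would persist jobs somewhere more durable:

```javascript
// A minimal in-memory job registry sketching the asynchronous
// request-reply pattern. submitJob/getJob are illustrative names.
let nextJobId = 1;
const jobs = new Map();

function submitJob(work) {
  const id = String(nextJobId++);
  jobs.set(id, { status: "pending", result: null });
  work()
    .then((result) => jobs.set(id, { status: "done", result }))
    .catch((error) => jobs.set(id, { status: "failed", result: String(error) }));
  return id; // respond to the client immediately with this ID
}

function getJob(id) {
  return jobs.get(id) ?? { status: "unknown", result: null };
}

// Usage: submit a task, return its ID at once, poll for the result.
const id = submitJob(async () => "thumbnail-ready");
console.log(getJob(id).status); // "pending"
setTimeout(() => console.log(getJob(id).status), 10); // status is now "done"
```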




      How To Process Images in Node.js With Sharp


      The author selected the Diversity in Tech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Digital image processing is a method of using a computer to analyze and manipulate images. The process involves reading an image, applying methods to alter or enhance the image, and then saving the processed image. It’s common for applications that handle user-uploaded content to process images. For example, if you’re writing a web application that allows users to upload images, users may upload unnecessary large images. This can negatively impact the application load speed, and also waste your server space. With image processing, your application can resize and compress all the user-uploaded images, which can significantly improve your application performance and save your server disk space.

Node.js has an ecosystem of libraries you can use to process images, such as sharp, jimp, and gm. This article will focus on the sharp module. sharp is a popular Node.js image processing library that supports various image file formats, such as JPEG, PNG, GIF, WebP, AVIF, SVG, and TIFF.

      In this tutorial, you’ll use sharp to read an image and extract its metadata, resize, change an image format, and compress an image. You will then crop, grayscale, rotate, and blur an image. Finally, you will composite images, and add text on an image. By the end of this tutorial, you’ll have a good understanding of how to process images in Node.js.

      Prerequisites

      To complete this tutorial, you’ll need:

      Step 1 — Setting Up the Project Directory and Downloading Images

      Before you start writing your code, you need to create the directory that will contain the code and the images you’ll use in this article.

      Open your terminal and create the directory for the project using the mkdir command:

      Move into the newly created directory using the cd command:

      Create a package.json file using npm init command to keep track of the project dependencies:

      The -y option tells npm to create the default package.json file.

      Next, install sharp as a dependency:

      You will use the following three images in this tutorial:

DigitalOcean mascot Sammy
Underwater ocean scene
Sammy with a transparent background

      Next, download the images in your project directory using the curl command.

      Use the following command to download the first image. This will download the image as sammy.png:

      • curl -O https://xpresservers.com/wp-content/uploads/2021/09/How-To-Process-Images-in-Nodejs-With-Sharp.png

      Next, download the second image with the following command. This will download the image as underwater.png:

      • curl -O https://xpresservers.com/wp-content/uploads/2021/09/1631157332_451_How-To-Process-Images-in-Nodejs-With-Sharp.png

      Finally, download the third image using the following command. This will download the image as sammy-transparent.png:

      • curl -O https://xpresservers.com/wp-content/uploads/2021/09/1631157333_547_How-To-Process-Images-in-Nodejs-With-Sharp.png

      With the project directory and the dependencies set up, you’re now ready to start processing images.

Step 2 — Reading Images and Extracting Metadata

In this section, you’ll write code to read an image and extract its metadata. Image metadata is text embedded into an image, which includes information about the image such as its type, width, and height.

      To extract the metadata, you’ll first import the sharp module, create an instance of sharp, and pass the image path as an argument. After that, you’ll chain the metadata() method to the instance to extract the metadata and log it into the console.

      To do this, create and open readImage.js file in your preferred text editor. This tutorial uses a terminal text editor called nano:

      Next, require in sharp at the top of the file:

      process_images/readImage.js

      const sharp = require("sharp");
      

sharp is a promise-based image processing module. Its methods, such as metadata(), return a promise. You can resolve the promise using the then method or use async/await, which has a cleaner syntax.

      To use async/await syntax, you’ll need to create an asynchronous function by placing the async keyword at the beginning of the function. This will allow you to use the await keyword inside the function to resolve the promise returned when you read an image.
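To see the two styles side by side without involving sharp, here is a stdlib-only sketch; readFakeMetadata is a made-up stand-in for a promise-returning call such as metadata():

```javascript
// A promise-returning function standing in for a sharp call such as
// metadata(); readFakeMetadata is an illustrative stand-in, not sharp.
function readFakeMetadata() {
  return Promise.resolve({ format: "png", width: 750, height: 483 });
}

// Style 1: resolve the promise with .then()
readFakeMetadata().then((metadata) => console.log(metadata.format)); // prints "png"

// Style 2: resolve the promise with async/await
async function show() {
  const metadata = await readFakeMetadata();
  console.log(metadata.width); // prints 750
}
show();
```

Both styles resolve the same promise; async/await simply lets you write the asynchronous steps as if they were sequential.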

      In your readImage.js file, define an asynchronous function, getMetadata(), to read the image, extract its metadata, and log it into the console:

      process_images/readImage.js

      const sharp = require("sharp");
      
      async function getMetadata() {
        const metadata = await sharp("sammy.png").metadata();
        console.log(metadata);
      }
      
      

getMetadata() is an asynchronous function, given the async keyword you placed before the function name. This lets you use the await syntax within the function. The getMetadata() function will read an image and return an object with its metadata.

      Within the function body, you read the image by calling sharp() which takes the image path as an argument, here with sammy.png.

      Apart from taking an image path, sharp() can also read image data stored in a Buffer, Uint8Array, or Uint8ClampedArray provided the image is JPEG, PNG, GIF, WebP, AVIF, SVG or TIFF.

      Now, when you use sharp() to read the image, it creates a sharp instance. You then chain the metadata() method of the sharp module to the instance. The method returns an object containing the image metadata, which you store in the metadata variable and log its contents using console.log().

      Your program can now read an image and return its metadata. However, if the program throws an error during execution, it will crash. To get around this, you need to capture the errors when they occur.

      To do that, wrap the code within the getMetadata() function inside a try...catch block:

      process_images/readImage.js

      const sharp = require("sharp");
      
      async function getMetadata() {
        try {
          const metadata = await sharp("sammy.png").metadata();
          console.log(metadata);
        } catch (error) {
          console.log(`An error occurred during processing: ${error}`);
        }
      }
      

Inside the try block, you read an image, then extract and log its metadata. When an error occurs during this process, execution skips to the catch section and logs the error, preventing the program from crashing.

      Finally, call the getMetadata() function by adding the highlighted line:

      process_images/readImage.js

      
      const sharp = require("sharp");
      
      async function getMetadata() {
        try {
          const metadata = await sharp("sammy.png").metadata();
          console.log(metadata);
        } catch (error) {
          console.log(`An error occurred during processing: ${error}`);
        }
      }
      
      getMetadata();
      

Now, save and exit the file. Enter y to save the changes you made in the file, and confirm the file name by pressing the ENTER or RETURN key.

      Run the file using the node command:

      You should see an output similar to this:

      Output

{
  format: 'png',
  width: 750,
  height: 483,
  space: 'srgb',
  channels: 3,
  depth: 'uchar',
  density: 72,
  isProgressive: false,
  hasProfile: false,
  hasAlpha: false
}

Now that you’ve read an image and extracted its metadata, you’ll resize an image, change its format, and compress it.

      Step 3 — Resizing, Changing Image Format, and Compressing Images

Resizing is the process of altering an image’s dimensions without cutting anything from it, which affects the image file size. In this section, you’ll resize an image, change its image type, and compress it. Image compression is the process of reducing an image’s file size while retaining as much visual quality as possible.

First, you’ll chain the resize() method from the sharp instance to resize the image, and save it in the project directory. Second, you’ll chain the toFormat() method to the resized image to change its format from png to jpeg. Additionally, you will pass an option to the toFormat() method to compress the image and save it to the directory.

      Create and open resizeImage.js file in your text editor:

      Add the following code to resize the image to 150px width and 97px height:

      process_images/resizeImage.js

      const sharp = require("sharp");
      
      async function resizeImage() {
        try {
          await sharp("sammy.png")
            .resize({
              width: 150,
              height: 97
            })
            .toFile("sammy-resized.png");
        } catch (error) {
          console.log(error);
        }
      }
      
      resizeImage();
      

      The resizeImage() function chains the sharp module’s resize() method to the sharp instance. The method takes an object as an argument. In the object, you set the image dimensions you want using the width and height property. Setting the width to 150 and the height to 97 will make the image 150px wide, and 97px tall.

      After resizing the image, you chain the sharp module’s toFile() method, which takes the image path as an argument. Passing sammy-resized.png as an argument will save the image file with that name in the working directory of your program.

      Now, save and exit the file. Run your program in the terminal:

      You will get no output, but you should see a new image file created with the name sammy-resized.png in the project directory.

      Open the image on your local machine. You should see an image of Sammy 150px wide and 97px tall:

      image resized to 150px width and 97px height

Now that you can resize an image, next you’ll convert the resized image format from png to jpeg, compress the image, and save it in the working directory. To do that, you will use the toFormat() method, which you’ll chain after the resize() method.

      Add the highlighted code to change the image format to jpeg and compress it:

      process_images/resizeImage.js

      const sharp = require("sharp");
      
      async function resizeImage() {
        try {
          await sharp("sammy.png")
            .resize({
              width: 150,
              height: 97
            })
            .toFormat("jpeg", { mozjpeg: true })
            .toFile("sammy-resized-compressed.jpeg");
        } catch (error) {
          console.log(error);
        }
      }
      
      resizeImage();
      

      Within the resizeImage() function, you use the toFormat() method of the sharp module to change the image format and compress it. The first argument of the toFormat() method is a string containing the image format you want to convert your image to. The second argument is an optional object containing output options that enhance and compress the image.

      To compress the image, you pass it a mozjpeg property that holds a boolean value. When you set it to true, sharp uses mozjpeg defaults to compress the image without sacrificing quality. The object can also take more options; see the sharp documentation for more details.

Note: Regarding the toFormat() method’s second argument, each image format takes an object with different properties. For example, the mozjpeg property is accepted only for JPEG images.

However, other image formats have equivalent options, such as quality, compression, and lossless. Make sure to refer to the documentation to know what kind of options are acceptable for the image format you are compressing.

      Next, you pass the toFile() method a different filename to save the compressed image as sammy-resized-compressed.jpeg.

      Now, save and exit the file, then run your code with the following command:

      You will receive no output, but an image file sammy-resized-compressed.jpeg is saved in your project directory.

      Open the image on your local machine and you will see the following image:

      Sammy image resized and compressed

      With your image now compressed, check the file size to confirm your compression is successful. In your terminal, run the du command to check the file size for sammy.png:

The -h option produces human-readable output, showing the file size in kilobytes, megabytes, and so on.

      After running the command, you should see an output similar to this:

      Output

      120K sammy.png

      The output shows that the original image is 120 kilobytes.

      Next, check the file size for sammy-resized.png:

      After running the command, you will see the following output:

      Output

      8.0K sammy-resized.png

      sammy-resized.png is 8 kilobytes down from 120 kilobytes. This shows that the resizing operation affects the file size.

      Now, check the file size for sammy-resized-compressed.jpeg:

      • du -h sammy-resized-compressed.jpeg

      After running the command, you will see the following output:

      Output

      4.0K sammy-resized-compressed.jpeg

      The sammy-resized-compressed.jpeg is now 4 kilobytes down from 8 kilobytes, saving you 4 kilobytes, showing that the compression worked.

      Now that you’ve resized an image, changed its format and compressed it, you will crop and grayscale the image.

      Step 4 — Cropping and Converting Images to Grayscale

In this step, you will crop an image and convert it to grayscale. Cropping is the process of removing unwanted areas from an image. You’ll use the extract() method to crop the sammy.png image. After that, you’ll chain the grayscale() method to the cropped image instance and convert it to grayscale.

      Create and open cropImage.js in your text editor:

      In your cropImage.js file, add the following code to crop the image:

      process_images/cropImage.js

      const sharp = require("sharp");
      
      async function cropImage() {
        try {
          await sharp("sammy.png")
            .extract({ width: 500, height: 330, left: 120, top: 70  })
            .toFile("sammy-cropped.png");
        } catch (error) {
          console.log(error);
        }
      }
      
      cropImage();
      

      The cropImage() function is an asynchronous function that reads an image and returns your image cropped. Within the try block, a sharp instance will read the image. Then, the sharp module’s extract() method chained to the instance takes an object with the following properties:

      • width: the width of the area you want to crop.
      • height: the height of the area you want to crop.
      • top: the vertical position of the area you want to crop.
      • left: the horizontal position of the area you want to crop.

      When you set the width to 500 and the height to 330, imagine that sharp creates a transparent box on top of the image you want to crop. Any part of the image that fits in the box will remain, and the rest will be cut:

      image showing the cropping area

      The top and left properties control the position of the box. When you set left to 120, the box is positioned 120px from the left edge of the image, and setting top to 70 positions the box 70px from the top edge of the image.

      The area of the image that fits within the box will be extracted out and saved into sammy-cropped.png as a separate image.
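Before calling extract(), you can sanity-check that the crop box fits inside the image bounds. The fitsWithin helper below is hypothetical, not a sharp API:

```javascript
// Check that an extract() region stays inside the image bounds.
// fitsWithin is a hypothetical helper, not part of sharp.
function fitsWithin(region, image) {
  return (
    region.left >= 0 &&
    region.top >= 0 &&
    region.left + region.width <= image.width &&
    region.top + region.height <= image.height
  );
}

// sammy.png is 750px wide and 483px tall, so the region used above fits:
const region = { width: 500, height: 330, left: 120, top: 70 };
console.log(fitsWithin(region, { width: 750, height: 483 })); // true
```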

      Save and exit the file. Run the program in the terminal:

      The output won’t be shown but the image sammy-cropped.png will be saved in your project directory.

      Open the image on your local machine. You should see the image cropped:

      image cropped

      Now that you cropped an image, you will convert the image to grayscale. To do that, you’ll chain the grayscale method to the sharp instance. Add the highlighted code to convert the image to grayscale:

      process_images/cropImage.js

      const sharp = require("sharp");
      
      async function cropImage() {
        try {
          await sharp("sammy.png")
            .extract({ width: 500, height: 330, left: 120, top: 70 })
            .grayscale()
            .toFile("sammy-cropped-grayscale.png");
        } catch (error) {
          console.log(error);
        }
      }
      
      cropImage();
      

      The cropImage() function converts the cropped image to grayscale by chaining the sharp module’s grayscale() method to the sharp instance. It then saves the image in the project directory as sammy-cropped-grayscale.png.

      Press CTRL+X to save and exit the file.

      Run your code in the terminal:

      Open sammy-cropped-grayscale.png on your local machine. You should now see the image in grayscale:

      image cropped and grayscaled

Now that you’ve cropped an image and converted it to grayscale, you’ll work with rotating and blurring images.

      Step 5 — Rotating and Blurring Images

In this step, you’ll rotate the sammy.png image at a 33-degree angle. You’ll also apply a Gaussian blur on the rotated image. A Gaussian blur is a technique of blurring an image using the Gaussian function, which reduces the noise level and detail on an image.

      Create a rotateImage.js file in your text editor:

      In your rotateImage.js file, write the following code block to create a function that rotates sammy.png to an angle of 33 degrees:

      process_images/rotateImage.js

      const sharp = require("sharp");
      
      async function rotateImage() {
        try {
          await sharp("sammy.png")
            .rotate(33, { background: { r: 0, g: 0, b: 0, alpha: 0 } })
            .toFile("sammy-rotated.png");
        } catch (error) {
          console.log(error);
        }
      }
      
      rotateImage();
      

The rotateImage() function is an asynchronous function that reads an image and returns it rotated to an angle of 33 degrees. Within the function, the rotate() method of the sharp module takes two arguments. The first argument is the rotation angle of 33 degrees. By default, sharp makes the background of the rotated image black. To remove the black background, you pass an object as a second argument to make the background transparent.

      The object has a background property which holds an object defining the RGBA color model. RGBA stands for red, green, blue, and alpha.

      • r: controls the intensity of the red color. It accepts an integer value of 0 to 255. 0 means the color is not being used, and 255 is red at its highest.

• g: controls the intensity of the green color. It accepts an integer value of 0 to 255. 0 means that the color green is not used, and 255 is green at its highest.

      • b: controls the intensity of blue. It also accepts an integer value between 0 and 255. 0 means that the blue color isn’t used, and 255 is blue at its highest.

• alpha: controls the opacity of the color defined by the r, g, and b properties. 0 or 0.0 makes the color fully transparent, and 1 or 1.0 makes it fully opaque.

      For the alpha property to work, you must make sure you define and set the values for r, g, and b. Setting the r, g, and b values to 0 creates a black color. To create a transparent background, you must define a color first, then you can set alpha to 0 to make it transparent.
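As a quick sanity check of those rules, the following sketch validates a background object of the shape rotate() accepts; isValidRgba is an illustrative helper, not part of sharp:

```javascript
// Validate an RGBA background object of the shape sharp's rotate()
// accepts; isValidRgba is an illustrative helper, not a sharp API.
function isValidRgba({ r, g, b, alpha }) {
  const channelOk = (v) => Number.isInteger(v) && v >= 0 && v <= 255;
  return channelOk(r) && channelOk(g) && channelOk(b) && alpha >= 0 && alpha <= 1;
}

console.log(isValidRgba({ r: 0, g: 0, b: 0, alpha: 0 })); // true (transparent black)
console.log(isValidRgba({ r: 0, g: 0, b: 300, alpha: 0.5 })); // false (b out of range)
```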

      Now, save and exit the file. Run your script in the terminal:

      Check for the existence of sammy-rotated.png in your project directory. Open it on your local machine.

      You should see the image rotated to an angle of 33 degrees:

      image rotated 33 degrees

      Next, you’ll blur the rotated image. You’ll achieve that by chaining the blur() method to the sharp instance.

      Enter the highlighted code below to blur the image:

      process_images/rotateImage.js

      const sharp = require("sharp");
      
      async function rotateImage() {
        try {
          await sharp("sammy.png")
            .rotate(33, { background: { r: 0, g: 0, b: 0, alpha: 0 } })
            .blur(4)
            .toFile("sammy-rotated-blurred.png");
        } catch (error) {
          console.log(error);
        }
      }
      
      rotateImage();
      

The rotateImage() function now reads the image, rotates it, and applies a Gaussian blur using the sharp module’s blur() method. The method accepts a single argument: a sigma value between 0.3 and 1000. Passing it 4 will apply a Gaussian blur with a sigma value of 4. After the image is blurred, you define a path to save the blurred image.

      Your script will now blur the rotated image with a sigma value of 4. Save and exit the file, then run the script in your terminal:

      After running the script, open sammy-rotated-blurred.png file on your local machine. You should now see the rotated image blurred:

      rotated image blurred

      Now that you’ve rotated and blurred an image, you’ll composite an image over another.

      Step 6 — Compositing Images Using composite()

Image composition is the process of combining two or more separate pictures to create a single image. This is done to create effects that borrow the best elements from the different photos. Another common use case is to watermark an image with a logo.

In this section, you’ll composite sammy-transparent.png over underwater.png. This will create an illusion of Sammy swimming deep in the ocean. To composite the images, you’ll chain the composite() method to the sharp instance.

Create and open the file compositeImages.js in your text editor:

      Now, create a function to composite the two images by adding the following code in the compositeImages.js file:

      process_images/compositeImages.js

      const sharp = require("sharp");
      
      async function compositeImages() {
        try {
          await sharp("underwater.png")
            .composite([
              {
                input: "sammy-transparent.png",
                top: 50,
                left: 50,
              },
            ])
            .toFile("sammy-underwater.png");
        } catch (error) {
          console.log(error);
        }
      }
      
      compositeImages()
      

      The compositeImages() function reads the underwater.png image first. Next, you chain the composite() method of the sharp module, which takes an array as an argument. The array contains a single object that reads the sammy-transparent.png image. The object has the following properties:

      • input: takes the path of the image you want to composite over the processed image. It also accepts a Buffer, Uint8Array, or Uint8ClampedArray as input.
      • top: controls the vertical position of the image you want to composite over. Setting top to 50 offsets the sammy-transparent.png image 50px from the top edge of the underwater.png image.
      • left: controls the horizontal position of the image you want to composite over another. Setting left to 50 offsets the sammy-transparent.png 50px from the left edge of the underwater.png image.

The composite() method requires the composited image to be the same size as or smaller than the processed image.

To visualize what the composite() method is doing, think of it as creating a stack of images. The sammy-transparent.png image is placed on top of the underwater.png image:

      a graphic showing an image stack

The top and left values position the sammy-transparent.png image relative to the underwater.png image.

      Save your script and exit the file. Run your script to create an image composition:

      node compositeImages.js
      

      Open sammy-underwater.png in your local machine. You should now see the sammy-transparent.png composited over the underwater.png image:

      an image composition

      You’ve now composited images using the composite() method. In the next step, you’ll use the composite() method to add text to an image.

      Step 7 — Adding Text on an Image

In this step, you’ll write text on an image. At the time of writing, sharp doesn’t have a native way of adding text to an image. To add text, first, you’ll write code to draw text using Scalable Vector Graphics (SVG). Once you’ve created the SVG image, you’ll write code to composite it with the sammy.png image using the composite() method.

SVG is an XML-based markup language for creating vector graphics for the web. You can draw text and basic shapes such as circles and triangles, as well as complex shapes such as illustrations and logos. Complex shapes are usually created with a graphics tool like Inkscape, which generates the SVG code. SVG shapes can be rendered and scaled to any size without losing quality.

      Create and open the addTextOnImage.js file in your text editor.

      In your addTextOnImage.js file, add the following code to create an SVG container:

      process_images/addTextOnImage.js

      const sharp = require("sharp");
      
      async function addTextOnImage() {
        try {
          const width = 750;
          const height = 483;
          const text = "Sammy the Shark";
      
          const svgImage = `
          <svg width="${width}" height="${height}">
          </svg>
          `;
        } catch (error) {
          console.log(error);
        }
      }
      
      addTextOnImage();
      

      The addTextOnImage() function defines four variables: width, height, text, and svgImage. width holds the integer 750, and height holds the integer 483. text holds the string Sammy the Shark. This is the text that you’ll draw using SVG.

The svgImage variable holds the svg element. The svg element has two attributes, width and height, which interpolate the width and height variables you defined earlier. The svg element creates a transparent container according to the given width and height.

      You gave the svg element a width of 750 and height of 483 so that the SVG image will have the same size as sammy.png. This will help in making the text look centered on the sammy.png image.

      Next, you’ll draw the text graphics. Add the highlighted code to draw Sammy the Shark on the SVG container:

      process_images/addTextOnImage.js

      async function addTextOnImage() {
          ...
    const svgImage = `
          <svg width="${width}" height="${height}">
          <text x="50%" y="50%" text-anchor="middle" class="title">${text}</text>
          </svg>
          `;
        ....
      }
      

      The SVG text element has four attributes: x, y, text-anchor, and class. x and y define the position for the text you are drawing on the SVG container. The x attribute positions the text horizontally, and the y attribute positions the text vertically.

      Setting x to 50% draws the text in the middle of the container on the x-axis, and setting y to 50% positions the text in the middle on y-axis of the SVG image.

      The text-anchor aligns text horizontally. Setting text-anchor to middle will align the text on the center at the x coordinate you specified.

      class defines a class name on the text element. You’ll use the class name to apply CSS styles to the text element.

      ${text} interpolates the string Sammy the Shark stored in the text variable. This is the text that will be drawn on the SVG image.
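Because the markup is just a template string, you can also wrap it in a small helper. The makeSvgText function below is an illustrative sketch, not part of sharp:

```javascript
// Build the SVG text markup from a width, height, and string;
// makeSvgText is an illustrative helper, not part of sharp.
function makeSvgText(width, height, text) {
  return `
  <svg width="${width}" height="${height}">
  <text x="50%" y="50%" text-anchor="middle" class="title">${text}</text>
  </svg>
  `;
}

const svgImage = makeSvgText(750, 483, "Sammy the Shark");
console.log(svgImage.includes('width="750"')); // true
```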

      Next, add the highlighted code to style the text using CSS:

      process_images/addTextOnImage.js

    const svgImage = `
          <svg width="${width}" height="${height}">
            <style>
            .title { fill: #001; font-size: 70px; font-weight: bold;}
            </style>
            <text x="50%" y="50%" text-anchor="middle" class="title">${text}</text>
          </svg>
          `;
      

      In this code, fill changes the text color to black, font-size changes the font size, and font-weight changes the font weight.

      At this point, you have written the code necessary to draw the text Sammy the Shark with SVG. Next, you’ll save the SVG image as a png with sharp so that you can see how SVG is drawing the text. Once that is done, you’ll composite the SVG image with sammy.png.

      Add the highlighted code to save the SVG image as a png with sharp:

      process_images/addTextOnImage.js

          ....
          const svgImage = `
          <svg width="${width}" height="${height}">
          ...
          </svg>
          `;
          const svgBuffer = Buffer.from(svgImage);
          const image = await sharp(svgBuffer).toFile("svg-image.png");
        } catch (error) {
          console.log(error);
        }
      }
      
      addTextOnImage();
      

      Buffer.from() creates a Buffer object from the SVG image. A buffer is a temporary space in memory that stores binary data.

After creating the buffer object, you create a sharp instance with the buffer object as input. In addition to an image path, sharp also accepts a Buffer, Uint8Array, or Uint8ClampedArray.

      Finally, you save the SVG image in the project directory as svg-image.png.
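You can see what Buffer.from() does with a quick stdlib-only example; the tiny SVG string here is just for illustration:

```javascript
// Buffer.from() copies the string's bytes (UTF-8 by default) into a
// Buffer, which sharp can then read the same way it reads a file.
const svgImage = '<svg width="10" height="10"></svg>';
const svgBuffer = Buffer.from(svgImage);

console.log(Buffer.isBuffer(svgBuffer)); // true
console.log(svgBuffer.length === svgImage.length); // true (all-ASCII string)
console.log(svgBuffer.toString() === svgImage); // true (round-trips back)
```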

      Here is the complete code:

      process_images/addTextOnImage.js

      const sharp = require("sharp");
      
      async function addTextOnImage() {
        try {
          const width = 750;
          const height = 483;
          const text = "Sammy the Shark";
      
          const svgImage = `
          <svg width="${width}" height="${height}">
            <style>
            .title { fill: #001; font-size: 70px; font-weight: bold;}
            </style>
            <text x="50%" y="50%" text-anchor="middle" class="title">${text}</text>
          </svg>
          `;
          const svgBuffer = Buffer.from(svgImage);
          const image = await sharp(svgBuffer).toFile("svg-image.png");
        } catch (error) {
          console.log(error);
        }
      }
      
      addTextOnImage()
      

      Save and exit the file, then run your script with the following command:

      node addTextOnImage.js
      

      Note: If you installed Node.js using Option 2 — Installing Node.js with Apt Using a NodeSource PPA or Option 3 — Installing Node Using the Node Version Manager and you get the error fontconfig error: cannot load default config file: no such file: (null), install fontconfig to generate the font configuration file.

      Update your server’s package index, then use apt install to install fontconfig.

      • sudo apt update
      • sudo apt install fontconfig

      Open svg-image.png on your local machine. You should now see the text Sammy the Shark rendered with a transparent background:

      svg text rendered

      Now that you’ve confirmed the SVG code draws the text, you will composite the text graphics onto sammy.png.

      Add the following highlighted code to composite the SVG text graphic onto the sammy.png image.

      process_images/addTextOnImage.js

      const sharp = require("sharp");
      
      async function addTextOnImage() {
        try {
          const width = 750;
          const height = 483;
          const text = "Sammy the Shark";
      
          const svgImage = `
          <svg width="${width}" height="${height}">
            <style>
            .title { fill: #001; font-size: 70px; font-weight: bold;}
            </style>
            <text x="50%" y="50%" text-anchor="middle" class="title">${text}</text>
          </svg>
          `;
          const svgBuffer = Buffer.from(svgImage);
          const image = await sharp("sammy.png")
            .composite([
              {
                input: svgBuffer,
                top: 0,
                left: 0,
              },
            ])
            .toFile("sammy-text-overlay.png");
        } catch (error) {
          console.log(error);
        }
      }
      
      addTextOnImage();
      

      The composite() method reads the SVG image from the svgBuffer variable and positions it 0 pixels from the top and 0 pixels from the left edge of sammy.png. Next, you save the composited image as sammy-text-overlay.png.

      Save and close your file, then run your program using the following command:

      node addTextOnImage.js
      

      Open sammy-text-overlay.png on your local machine. You should see text added over the image:

      text added on image

      You have now used the composite() method to add text created with SVG on another image.

      Conclusion

      In this article, you learned how to use sharp methods to process images in Node.js. First, you created an instance to read an image and used the metadata() method to extract the image metadata. You then used the resize() method to resize an image. Afterwards, you used the toFormat() method to change the image type and compress the image. Next, you used various sharp methods to crop, grayscale, rotate, and blur an image. Finally, you used the composite() method to composite an image and add text on an image.

      For more insight into additional sharp methods, visit the sharp documentation. If you want to continue learning Node.js, see the How To Code in Node.js series.


