      How To Work With Zip Files in Node.js


      The author selected Open Sourcing Mental Illness to receive a donation as part of the Write for DOnations program.

      Introduction

      Working with files is one of the most common tasks for developers. As your files grow in size, they start taking up significant space on your hard drive. Sooner or later you may need to transfer files to other servers or upload multiple files from your local machine to different platforms. Some of these platforms have file size limits and won’t accept large files. To get around this, you can group the files into a single ZIP file. ZIP is an archive format that packs and compresses files with a lossless compression algorithm, meaning the original data can be fully reconstructed when the archive is extracted. In Node.js, you can use the adm-zip module to create and read ZIP archives.

      In this tutorial, you will use the adm-zip module to compress, read, and decompress files. First, you’ll combine multiple files into a ZIP archive using adm-zip. You’ll then list the ZIP archive contents. After that, you’ll add a file to an existing ZIP archive, and finally, you’ll extract a ZIP archive into a directory.

      Prerequisites

      To follow this tutorial, you’ll need:

      Step 1 — Setting Up the Project

      In this step, you’ll create the directory for your project and install adm-zip as a dependency. This directory is where you’ll keep your program files. You’ll also create another directory containing text files and an image. You’ll archive this directory in the next section.

      Create a directory called zip_app with the following command:
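
      • mkdir zip_app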

      Navigate into the newly created directory with the cd command:
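
      • cd zip_app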

      Inside the directory, create a package.json file to manage the project dependencies:
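
      • npm init -y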

      The -y option creates a default package.json file.

      Next, install adm-zip with the npm install command:
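
      • npm install adm-zip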

      After you run the command, npm will install adm-zip and update the package.json file.

      Next, create a directory called test and move into it:
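
      • mkdir test && cd test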

      In this directory, you will create three text files and download an image. The three files will be filled with dummy content to make their file sizes larger. This will help to demonstrate ZIP compression when you archive this directory.

      Create the file1.txt and fill it with dummy content using the following command:

      • yes "dummy content" | head -n 100000 > file1.txt

      The yes command outputs the string dummy content repeatedly. Using the pipe |, you send the output of the yes command to be used as input for the head command. The head command prints part of the given input to the standard output, and the -n option specifies the number of lines to write. Finally, you redirect the head output to a new file, file1.txt, using >.

      Create a second file, file2.txt, with the string “dummy content” repeated for 300,000 lines:

      • yes "dummy content" | head -n 300000 > file2.txt

      Create another file, file3.txt, with the dummy content string repeated for 600,000 lines:

      • yes "dummy content" | head -n 600000 > file3.txt

      Finally, download an image into the directory using curl:

      • curl -O https://assets.digitalocean.com/how-to-process-images-in-node-js-with-sharp/underwater.png

      Move back into the main project directory with the following command:
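
      • cd ..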

      The .. will move you to the parent directory, which is zip_app.

      You’ve now created the project directory, installed adm-zip, and created a directory with files for archiving. In the next step, you’ll archive a directory using the adm-zip module.

      Step 2 — Creating a ZIP Archive

      In this step, you’ll use adm-zip to compress and archive the directory you created in the previous section.

      To archive the directory, you’ll import the adm-zip module and use the module’s addLocalFolder() method to add the directory to the adm-zip module’s ZIP object. Afterward, you’ll use the module’s writeZip() method to save the archive in your local system.

      Create and open a new file createArchive.js in your preferred text editor. This tutorial uses nano, a command-line text editor:
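
      • nano createArchive.js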

      Next, require in the adm-zip module in your createArchive.js file:

      zip_app/createArchive.js

      const AdmZip = require("adm-zip");
      

      The adm-zip module provides a class that contains methods for creating ZIP archives.

      Since it’s common to encounter large files during the archiving process, you might end up blocking the main thread until the ZIP archive is saved. To write non-blocking code, you’ll define an asynchronous function to create and save a ZIP archive.

      In your createArchive.js file, add the following highlighted code:

      zip_app/createArchive.js

      
      const AdmZip = require("adm-zip");
      
      async function createZipArchive() {
        const zip = new AdmZip();
        const outputFile = "test.zip";
        zip.addLocalFolder("./test");
        zip.writeZip(outputFile);
        console.log(`Created ${outputFile} successfully`);
      }
      
      createZipArchive();
      

      createZipArchive is an asynchronous function that creates a ZIP archive from a given directory. What makes it asynchronous is the async keyword placed before the function keyword. Within the function, you create an instance of the adm-zip module, which provides methods you can use for reading and creating archives. When you create an instance, adm-zip creates an in-memory ZIP to which you can add files or directories.

      Next, you define the archive name and store it in the outputFile variable. To add the test directory to the in-memory archive, you invoke the addLocalFolder() method from adm-zip with the directory path as an argument.

      After the directory is added, you invoke the writeZip() method from adm-zip with a variable containing the name of the ZIP archive. The writeZip() method saves the archive to your local disk.

      Once that’s done, you invoke console.log() to log that the ZIP file has been created successfully.

      Finally, you call the createZipArchive() function.

      Before you run the file, wrap the code in a try…catch block to handle runtime errors:

      zip_app/createArchive.js

      const AdmZip = require("adm-zip");
      
      async function createZipArchive() {
        try {
          const zip = new AdmZip();
          const outputFile = "test.zip";
          zip.addLocalFolder("./test");
          zip.writeZip(outputFile);
          console.log(`Created ${outputFile} successfully`);
        } catch (e) {
          console.log(`Something went wrong. ${e}`);
        }
      }
      
      createZipArchive();
      

      Within the try block, the code will attempt to create a ZIP archive. If successful, the createZipArchive() function will exit, skipping the catch block. If creating a ZIP archive triggers an error, execution will skip to the catch block and log the error in the console.

      Save and exit the file in nano with CTRL+X. Enter y to save the changes, and confirm the file by pressing ENTER on Windows, or the RETURN key on the Mac.

      Run the createArchive.js file using the node command:
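
      • node createArchive.js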

      You’ll receive the following output:

      Output

      Created test.zip successfully

      List the directory contents to see if the ZIP archive has been created:
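
      • ls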

      You’ll receive the following output showing the archive among the contents:

      Output

      createArchive.js node_modules package-lock.json package.json test test.zip

      With the confirmation that the ZIP archive has been created, you’ll compare the size of the ZIP archive with the size of the test directory to see how well the compression works.

      Check the test directory size using the du command:
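
      • du -h test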

      The -h flag instructs du to show the directory size in a human-readable format.

      After running the command, you will receive the following output:

      Output

      15M test

      Next, check the test.zip archive file size:
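
      • du -h test.zip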

      The du command logs the following output:

      Output

      760K test.zip

      As you can see, zipping the directory has dropped its size from 15 megabytes (MB) to 760 kilobytes (KB), a substantial reduction. The ZIP file is smaller and more portable.

      Now that you created a ZIP archive, you’re ready to list the contents in a ZIP file.

      Step 3 — Listing Files in a ZIP Archive

      In this step, you’ll read and list all files in a ZIP archive using adm-zip. To do that, you’ll instantiate the adm-zip module with your ZIP archive path. You’ll then call the module’s getEntries() method which returns an array of objects. Each object holds important information about an item in the ZIP archive. To list the files, you’ll iterate over the array and access the filename from the object, and log it in the console.

      Create and open readArchive.js in your favorite text editor:
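
      • nano readArchive.js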

      In your readArchive.js, add the following code to read and list contents in a ZIP archive:

      zip_app/readArchive.js

      const AdmZip = require("adm-zip");
      
      async function readZipArchive(filepath) {
        try {
          const zip = new AdmZip(filepath);
      
          for (const zipEntry of zip.getEntries()) {
            console.log(zipEntry.toString());
          }
        } catch (e) {
          console.log(`Something went wrong. ${e}`);
        }
      }
      
      readZipArchive("./test.zip");
      

      First, you require in the adm-zip module.

      Next, you define the readZipArchive() function, which is an asynchronous function. Within the function, you create an instance of adm-zip with the path of the ZIP file you want to read. The file path is provided by the filepath parameter. adm-zip will read the file and parse it.

      After reading the archive, you define a for...of statement that iterates over the array of objects that the getEntries() method from adm-zip returns when invoked. On each iteration, the object is assigned to the zipEntry variable. Inside the loop, you convert the object into a string representation using the toString() method, then log it in the console using the console.log() method.

      Finally, you invoke the readZipArchive() function with the ZIP archive file path as an argument.

      Save and exit your file, then run the file with the following command:
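
      • node readArchive.js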

      You will get output that resembles the following (edited for brevity):

      Output

      { "entryName": "file1.txt", "name": "file1.txt", "comment": "", "isDirectory": false, "header": { ... }, "compressedData": "<27547 bytes buffer>", "data": "<null>" } ...

      The console will log four objects. The other objects have been edited out to keep the tutorial brief.

      Each file in the archive is represented with an object similar to the one in the preceding output. To get the filename for each file, you need to access the name property.

      In your readArchive.js file, add the following highlighted code to access each filename:

      zip_app/readArchive.js

      const AdmZip = require("adm-zip");
      
      async function readZipArchive(filepath) {
        try {
          const zip = new AdmZip(filepath);
      
          for (const zipEntry of zip.getEntries()) {
            console.log(zipEntry.name);
          }
        } catch (e) {
          console.log(`Something went wrong. ${e}`);
        }
      }
      
      readZipArchive("./test.zip");
      

      Save and exit your text editor. Now, run the file again with the node command:
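
      • node readArchive.js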

      Running the file results in the following output:

      Output

      file1.txt
      file2.txt
      file3.txt
      underwater.png

      The output now logs the filename of each file in the ZIP archive.

      You can now read and list each file in a ZIP archive. In the next section, you’ll add a file to an existing ZIP archive.

      Step 4 — Adding a File to an Existing Archive

      In this step, you’ll create a file and add it to the ZIP archive you created earlier without extracting it. First, you’ll read the ZIP archive by creating an adm-zip instance. Second, you’ll invoke the module’s addFile() method to add the file in the ZIP. Finally, you’ll save the ZIP archive in the local system.

      Create another file, file4.txt, with the dummy content string repeated for 600,000 lines:

      • yes "dummy content" | head -n 600000 > file4.txt

      Create and open updateArchive.js in your text editor:
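
      • nano updateArchive.js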

      In your updateArchive.js file, require in the adm-zip module and the fs module, which allows you to work with files:

      zip_app/updateArchive.js

      const AdmZip = require("adm-zip");
      const fs = require("fs").promises;
      

      You require the promise-based version of the fs module, which allows you to write asynchronous code. When you invoke an fs method, it will return a promise.

      Next in your updateArchive.js file, add the following highlighted code to add a new file to the ZIP archive:

      zip_app/updateArchive.js

      const AdmZip = require("adm-zip");
      const fs = require("fs").promises;
      
      async function updateZipArchive(filepath) {
        try {
          const zip = new AdmZip(filepath);
      
          const content = await fs.readFile("./file4.txt");
          zip.addFile("file4.txt", content);
          zip.writeZip(filepath);
          console.log(`Updated ${filepath} successfully`);
        } catch (e) {
          console.log(`Something went wrong. ${e}`);
        }
      }
      
      updateZipArchive("./test.zip");
      

      updateZipArchive is an asynchronous function that reads a file from the filesystem and adds it to an existing ZIP. In the function, you create an instance of adm-zip with the ZIP archive file path provided by the filepath parameter. Next, you invoke the fs module’s readFile() method to read the file from the file system. The readFile() method returns a promise, which you resolve with the await keyword (await is only valid inside asynchronous functions). Once resolved, the method returns a buffer object containing the file contents.

      Next, you invoke the addFile() method from adm-zip. The method takes two arguments. The first argument is the filename you want to add to the archive, and the second argument is the buffer object containing the contents of the file that the readFile() method reads.

      Afterwards, you invoke adm-zip module’s writeZip() method to save and write new changes in the ZIP archive. Once that’s done, you call the console.log() method to log a success message.

      Finally, you invoke the updateZipArchive() function with the ZIP archive file path as an argument.

      Save and exit your file. Run the updateArchive.js file with the following command:
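
      • node updateArchive.js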

      You’ll see output like this:

      Output

      Updated ./test.zip successfully

      Now, confirm that the ZIP archive contains the new file. Run the readArchive.js file to list the contents in the ZIP archive with the following command:
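
      • node readArchive.js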

      You’ll receive the following output:

      file1.txt
      file2.txt
      file3.txt
      file4.txt
      underwater.png
      

      This confirms that the file has been added to the ZIP.

      Now that you can add a file to an existing archive, you’ll extract the archive in the next section.

      In this step, you’ll read and extract all contents in a ZIP archive into a directory. To extract a ZIP archive, you’ll instantiate adm-zip with the archive file path. After that, you’ll invoke the module’s extractAllTo() method with the directory name you want your extracted ZIP contents to reside.

      Create and open extractArchive.js in your text editor:
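
      • nano extractArchive.js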

      Require in the adm-zip module and the path module in your extractArchive.js file:

      zip_app/extractArchive.js

      const AdmZip = require("adm-zip");
      const path = require("path");
      

      The path module provides helpful methods for dealing with file paths.

      Still in your extractArchive.js file, add the following highlighted code to extract an archive:

      zip_app/extractArchive.js

      const AdmZip = require("adm-zip");
      const path = require("path");
      
      async function extractArchive(filepath) {
        try {
          const zip = new AdmZip(filepath);
          const outputDir = `${path.parse(filepath).name}_extracted`;
          zip.extractAllTo(outputDir);
      
          console.log(`Extracted to "${outputDir}" successfully`);
        } catch (e) {
          console.log(`Something went wrong. ${e}`);
        }
      }
      
      extractArchive("./test.zip");
      

      extractArchive() is an asynchronous function that takes a parameter containing the file path of the ZIP archive. Within the function, you instantiate adm-zip with the ZIP archive file path provided by the filepath parameter.

      Next, you define a template literal. Inside the template literal placeholder (${}), you invoke the parse() method from the path module with the file path. The parse() method returns an object, and its name property holds the name of the ZIP file without the file extension. The template literal interpolates that value and appends the string _extracted, and the result is stored in the outputDir variable. This will be the name of the directory that receives the extracted contents.

      Next, you invoke adm-zip module’s extractAllTo method with the directory name stored in the outputDir to extract the contents in the directory. After that, you invoke console.log() to log a success message.

      Finally, you call the extractArchive() function with the ZIP archive path.

      Save your file and exit the editor, then run the extractArchive.js file with the following command:
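
      • node extractArchive.js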

      You receive the following output:

      Output

      Extracted to "test_extracted" successfully

      Confirm that the directory containing the ZIP contents has been created:
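
      • ls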

      You will receive the following output:

      Output

      createArchive.js file4.txt package-lock.json readArchive.js test.zip updateArchive.js extractArchive.js node_modules package.json test test_extracted

      Now, navigate into the directory containing the extracted contents:
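
      • cd test_extracted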

      List the contents in the directory:
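
      • ls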

      You will receive the following output:

      Output

      file1.txt file2.txt file3.txt file4.txt underwater.png

      You can now see that the directory has all the files that were in the original directory.

      You’ve now extracted the ZIP archive contents into a directory.

      Conclusion

      In this tutorial, you created a ZIP archive, listed its contents, added a new file to the archive, and extracted all of its contents into a directory using the adm-zip module. This will serve as a good foundation for working with ZIP archives in Node.js.

      To learn more about the adm-zip module, view the adm-zip documentation. To continue building your Node.js knowledge, see the How To Code in Node.js series.




      How To Work with Files Using Streams in Node.js


      The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.

      Introduction

      The concept of streams in computing usually describes the delivery of data in a steady, continuous flow. You can use streams for reading from or writing to a source continuously, thus eliminating the need to fit all the data in memory at once.

      Using streams provides two major advantages. One is that you can use your memory efficiently since you do not have to load all the data into memory before you can begin processing. Another advantage is that using streams is time-efficient. You can start processing data almost immediately instead of waiting for the entire payload. These advantages make streams a suitable tool for large data transfer in I/O operations. Files are a collection of bytes that contain some data. Since files are a common data source in Node.js, streams can provide an efficient way to work with files in Node.js.

      Node.js provides a streaming API in the stream module, a core Node.js module, for working with streams. All Node.js streams are an instance of the EventEmitter class (for more on this, see Using Event Emitters in Node.js). They emit different events you can listen for at various intervals during the data transmission process. The native stream module provides an interface consisting of different functions for listening to those events that you can use to read and write data, manage the transmission life cycle, and handle transmission errors.

      There are four different kinds of streams in Node.js. They are:

      • Readable streams: streams you can read data from.
      • Writable streams: streams you can write data to.
      • Duplex streams: streams you can read from and write to (usually simultaneously).
      • Transform streams: a duplex stream in which the output (or writable stream) is dependent on the modification of the input (or readable stream).

      The file system module (fs) is a native Node.js module for manipulating files and navigating the local file system in general. It provides several methods for doing this. Two of these methods implement the streaming API. They provide an interface for reading and writing files using streams. Using these two methods, you can create readable and writable file streams.

      In this article, you will read from and write to a file using the fs.createReadStream and fs.createWriteStream functions. You will also use the output of one stream as the input of another and implement a custom transform stream. By performing these actions, you will learn to use streams to work with files in Node.js. To demonstrate these concepts, you will write a command-line program with commands that replicate the cat functionality found in Linux-based systems, write input from a terminal to a file, copy files, and transform the content of a file.

      Prerequisites

      To complete this tutorial, you will need:

      Step 1 — Setting up a File Handling Command-Line Program

      In this step, you will write a command-line program with basic commands. This command-line program will demonstrate the concepts you’ll learn later in the tutorial, where you’ll use these commands with the functions you’ll create to work with files.

      To begin, create a folder to contain all your files for this program. In your terminal, create a folder named node-file-streams:
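
      • mkdir node-file-streams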

      Using the cd command, change your working directory to the new folder:
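
      • cd node-file-streams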

      Next, create and open a file called mycliprogram in your favorite text editor. This tutorial uses GNU nano, a terminal text editor. To use nano to create and open your file, type the following command:
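
      • nano mycliprogram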

      In your text editor, add the following code to specify the shebang, store the array of command-line arguments from the Node.js process, and store the list of commands the application should have.

      node-file-streams/mycliprogram

      #!/usr/bin/env node
      
      const args = process.argv;
      const commands = ['read', 'write', 'copy', 'reverse'];
      

      The first line contains a shebang, which is a path to the program interpreter. Adding this line tells the program loader to parse this program using Node.js.

      When you run a Node.js script on the command line, several command-line arguments are passed to the Node.js process. You can access these arguments using the argv property of the Node.js process. The argv property is an array that contains the command-line arguments passed to a Node.js script. In the second line, you assign that property to a variable called args.
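
      For example, when you eventually run the finished program as ./mycliprogram read test.txt, args will hold something like the following (the first two paths shown here are placeholders and will vary by system):

      [
        '/usr/local/bin/node',   // path to the Node.js executable
        '/path/to/mycliprogram', // path to the script that is being run
        'read',                  // first user-supplied argument
        'test.txt'               // second user-supplied argument
      ]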

      Next, create a getHelpText function to display a manual of how to use the program. Add the code below to your mycliprogram file:

      node-file-streams/mycliprogram

      ...
      const getHelpText = function() {
          const helpText = `
          simplecli is a simple cli program to demonstrate how to handle files using streams.
          usage:
              mycliprogram <command> <path_to_file>
      
              <command> can be:
              read: Print a file's contents to the terminal
              write: Write a message from the terminal to a file
              copy: Create a copy of a file in the current directory
              reverse: Reverse the content of a file and save its output to another file.
      
              <path_to_file> is the path to the file you want to work with.
          `;
          console.log(helpText);
      }
      

      The getHelpText function prints out the multi-line string you created as the help text for the program. The help text shows the command-line arguments or parameters that the program expects.

      Next, you’ll add the control logic to check the length of args and provide the appropriate response:

      node-file-streams/mycliprogram

      ...
      let command = '';
      
      if(args.length < 3) {
          getHelpText();
          return;
      }
      else if(args.length > 4) {
          console.log('More arguments provided than expected');
          getHelpText();
          return;
      }
      else {
          command = args[2]
          if(!args[3]) {
              console.log('This tool requires at least one path to a file');
              getHelpText();
              return;
          }
      }
      

      In the code snippet above, you have created an empty string command to store the command received from the terminal. The first if block checks whether the length of the args array is less than 3. If it is, no additional arguments were passed when running the program. In this case, the program prints the help text to the terminal and terminates.

      The else if block checks to see if the length of the args array is greater than 4. If it is, then the program has received more arguments than it needs. The program will print a message to this effect along with the help text and terminate.

      Finally, in the else block, you store the third element or the element in the second index of the args array in the command variable. The code also checks whether there is a fourth element or an element with index = 3 in the args array. If the item does not exist, it prints a message to the terminal indicating that you need a file path to continue.

      Save the file. Then run the application:
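
      • ./mycliprogram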

      You might get a permission denied error similar to the output below:

      Output

      -bash: ./mycliprogram: Permission denied

      To fix this error, you will need to provide the file with execution permissions, which you can do with the following command:
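
      • chmod +x mycliprogram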

      Run the file again. The output will look similar to this:

      Output

      simplecli is a simple cli program to demonstrate how to handle files using streams.
      usage:
          mycliprogram <command> <path_to_file>

          <command> can be:
          read: Print a file's contents to the terminal
          write: Write a message from the terminal to a file
          copy: Create a copy of a file in the current directory
          reverse: Reverse the content of a file and save its output to another file.

          <path_to_file> is the path to the file you want to work with.

      Finally, you are going to partially implement the commands in the commands array you created earlier. Open the mycliprogram file and add the code below:

      node-file-streams/mycliprogram

      ...
      switch(commands.indexOf(command)) {
          case 0:
              console.log('command is read');
              break;
          case 1:
              console.log('command is write');
              break;
          case 2:
              console.log('command is copy');
              break;
          case 3:
              console.log('command is reverse');
              break;
          default:
              console.log('You entered a wrong command. See help text below for supported functions');
              getHelpText();
              return;
      }
      

      Any time you enter a command found in the switch statement, the program runs the appropriate case block for the command. For this partial implementation, you print the name of the command to the terminal. If the string is not in the list of commands you created above, the program will print out a message to that effect with the help text. Then the program will terminate.

      Save the file, then re-run the program with the read command and any file name:

      • ./mycliprogram read test.txt

      The output will look similar to this:

      Output

      command is read

      You have now successfully created a command-line program. In the following section, you will replicate the cat functionality as the read command in the application using createReadStream().

      Step 2 — Reading a File with createReadStream()

      The read command in the command-line application will read a file from the file system and print it out to the terminal similar to the cat command in a Linux-based terminal. In this section, you will implement that functionality using createReadStream() from the fs module.

      The createReadStream function creates a readable stream that emits events you can listen to, since it inherits from the EventEmitter class. The data event is one of these events. Every time the readable stream reads a piece of data, it emits the data event, releasing a piece of data. When used with a callback function, it invokes the callback with that piece of data, or chunk, and you can process that data within that callback function. In this case, you want to display that chunk in the terminal.

      To begin, add a text file to your working directory for easy access. In this section and some subsequent ones, you will be using a file called lorem-ipsum.txt. It is a text file containing ~1200 lines of lorem ipsum text generated using the Lorem Ipsum Generator, and it is hosted on GitHub. In your terminal, enter the following command to download the file to your working directory:

      • wget https://raw.githubusercontent.com/do-community/node-file-streams/999e66a11cd04bc59843a9c129da759c1c515faf/lorem-ipsum.txt

      To replicate the cat functionality in your command-line application, you’ll need to import the fs module because it contains the createReadStream function you need. To do this, open the mycliprogram file and add this line immediately after the shebang:

      node-file-streams/mycliprogram

      #!/usr/bin/env node
      
      const fs = require('fs');
      

      Next, you will create a function below the switch statement called read() with a single parameter: the file path for the file you want to read. This function will create a readable stream from that file and listen for the data event on that stream.

      node-file-streams/mycliprogram

      ...
      function read(filePath) {
          const readableStream = fs.createReadStream(filePath);
      
          readableStream.on('error', function (error) {
              console.log(`error: ${error.message}`);
          })
      
          readableStream.on('data', (chunk) => {
              console.log(chunk);
          })
      }
      

      The code also checks for errors by listening for the error event. When an error occurs, an error message will print to the terminal.

      Finally, you should replace console.log() with the read() function in the first case block case 0 as shown in the code block below:

      node-file-streams/mycliprogram

      ...
      switch (commands.indexOf(command)) {
          case 0:
              read(args[3]);
              break;
          ...
      }
      

      Save the file to persist the new changes and run the program:

      • ./mycliprogram read lorem-ipsum.txt

      The output will look similar to this:

      Output

      <Buffer 0a 0a 4c 6f 72 65 6d 20 69 70 73 75 6d 20 64 6f 6c 6f 72 20 73 69 74 20 61 6d 65 74 2c 20 63 6f 6e 73 65 63 74 65 74 75 72 20 61 64 69 70 69 73 63 69 ... >
      ...
      <Buffer 76 69 74 61 65 20 61 6e 74 65 20 66 61 63 69 6c 69 73 69 73 20 6d 61 78 69 6d 75 73 20 75 74 20 69 64 20 73 61 70 69 65 6e 2e 20 50 65 6c 6c 65 6e 74 ... >

      Based on the output above, you can see that the data was read in chunks or pieces, and these pieces of data are of the Buffer type. For the sake of brevity, the terminal output above shows only two chunks, and the ellipsis indicates that there are several buffers in between the chunks shown here. The larger the file, the greater the number of buffers or chunks.

      To return the data in a human-readable format, you will set the encoding type of the data by passing the encoding you want as a second argument to the createReadStream() function. In your read() function, update the createReadStream() call with the following highlighted second argument to set the encoding type to utf8.

      node-file-streams/mycliprogram

      
      ...
      const readableStream = fs.createReadStream(filePath, 'utf8')
      ...
      

      Re-running the program will display the contents of the file in the terminal. The program prints the lorem ipsum text from the lorem-ipsum.txt file line by line as it appears in the file.

      Output

      Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean est tortor, eleifend et enim vitae, mattis condimentum elit. In dictum ex turpis, ac rutrum libero tempus sed...
      ...
      ...Quisque nisi diam, viverra vel aliquam nec, aliquet ut nisi. Nullam convallis dictum nisi quis hendrerit. Maecenas venenatis lorem id faucibus venenatis. Suspendisse sodales, tortor ut condimentum fringilla, turpis erat venenatis justo, lobortis egestas massa massa sed magna. Phasellus in enim vel ante viverra ultricies.

      The output above shows a small fraction of the content of the file printed to the terminal. When you compare the terminal output with the lorem-ipsum.txt file, you will see that the content is the same and takes the same formatting as the file, just like with the cat command.

      In this section, you implemented the cat functionality in your command-line program to read the content of a file and print it to the terminal using the createReadStream function. In the next step, you will create a file based on the input from the terminal using createWriteStream().

      Step 3 — Writing to a File with createWriteStream()

      In this section, you will write input from the terminal to a file using createWriteStream(). The createWriteStream function returns a writable file stream that you can write data to. Like the readable stream in the previous step, this writable stream emits a set of events like error, finish, and pipe. Additionally, it provides the write function for writing data to the stream in chunks. The write function takes in the chunk, which could be a string, a Buffer, a Uint8Array, or any other JavaScript value. It also allows you to specify an encoding type if the chunk is a string.
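
      Before wiring this into the command-line program, here is a minimal sketch of that API on its own, using a hypothetical notes.txt output file:

      const fs = require('fs');

      const out = fs.createWriteStream('notes.txt'); // hypothetical output file

      out.write('first line\n', 'utf8');        // a string chunk with an explicit encoding
      out.write(Buffer.from('second line\n'));  // a Buffer chunk
      out.end();                                // signal that no more data will be written

      out.on('finish', () => {
          console.log('All data has been flushed to notes.txt');
      });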

      To write input from a terminal to a file, you will create a function called write in your command-line program. In this function, you will create a prompt that receives input from the terminal (until the user terminates it) and writes the data to a file.

      First, you will need to import the readline module at the top of the mycliprogram file. The readline module is a native Node.js module that you can use to receive data from a readable stream, like the standard input (stdin) or your terminal, one line at a time. Open your mycliprogram file and add the highlighted line:

      node-file-streams/mycliprogram

      #!/usr/bin/env node
      
      const fs = require('fs');
      const readline = require('readline');
      

      Then, add the following code below the read() function.

      node-file-streams/mycliprogram

      ...
      function write(filePath) {
          const writableStream = fs.createWriteStream(filePath);
      
          writableStream.on('error',  (error) => {
              console.log(`An error occurred while writing to the file. Error: ${error.message}`);
          });
      }
      

      Here, you are creating a writable stream with the filePath parameter. This file path will be the command-line argument after the write word. You are also listening for the error event if anything goes wrong (for example, if you provide a filePath that does not exist).

      Next, you will write the prompt to receive a message from the terminal and write it to the specified filePath using the readline module you imported earlier. To create a readline interface, a prompt, and to listen for the line event, update the write function as shown in the block:

      node-file-streams/mycliprogram

      ...
      function write(filePath) {
          const writableStream = fs.createWriteStream(filePath);
      
          writableStream.on('error',  (error) => {
              console.log(`An error occurred while writing to the file. Error: ${error.message}`);
          });
      
          const rl = readline.createInterface({
              input: process.stdin,
              output: process.stdout,
              prompt: 'Enter a sentence: '
          });
      
          rl.prompt();
      
          rl.on('line', (line) => {
              switch (line.trim()) {
                  case 'exit':
                      rl.close();
                      break;
                  default:
                      const sentence = line + '\n';
                      writableStream.write(sentence);
                      rl.prompt();
                      break;
              }
          }).on('close', () => {
              writableStream.end();
              writableStream.on('finish', () => {
                  console.log(`All your sentences have been written to ${filePath}`);
              })
              setTimeout(() => {
                  process.exit(0);
              }, 100);
          });
      }
      

      You created a readline interface (rl) that allows the program to read the standard input (stdin) from your terminal on a line-by-line basis and write a specified prompt string to standard output (stdout). You also called the prompt() function to write the configured prompt message to a new line and to allow the user to provide additional input.

      Then you chained two event listeners together on the rl interface. The first one listens for the line event, which is emitted each time the input stream receives an end-of-line input. This input could be a line feed character (\n), a carriage return character (\r), or both together (\r\n), and it usually occurs when you press the ENTER or RETURN key on your keyboard. Therefore, any time you press either of these keys while typing in the terminal, the line event is emitted. The callback function receives a string containing the single line of input, line.

      You trimmed the line and checked to see if it is the word exit. If not, the program will add a new line character to line and write the sentence to the filePath using the .write() function. Then you called the prompt function to prompt the user to enter another line of text. If the line is exit, the program calls the close function on the rl interface. The close function closes the rl instance and releases the standard input (stdin) and output (stdout) streams.

      This function brings us to the second event you listened for on the rl instance: the close event. This event is emitted when you call rl.close(). After writing data to a stream, you have to call the end function on the stream to tell your program that it should no longer write data to the writable stream. Doing this will ensure that the data is completely flushed to your output file. Therefore, when you type the word exit, you close the rl instance and stop your writable stream by calling the end function.

      To provide feedback to the user that the program has successfully written all the text from the terminal to the specified filePath, you listened for the finish event on writableStream. In the callback function, you logged a message to the terminal to inform the user when writing is complete. Finally, you exited the process after 100ms to provide enough time for the finish event to provide feedback.

      Finally, to call this function in your mycliprogram, replace the console.log statement in the case 1 block in the switch statement with the new write function, as shown here:

      node-file-streams/mycliprogram

      ...
      switch (commands.indexOf(command)) {
          ...
      
          case 1:
              write(args[3]);
              break;
      
          ...
      }
      

      Save the file containing the new changes. Then run the command-line application in your terminal with the write command.

      • ./mycliprogram write output.txt

      At the Enter a sentence prompt, add any input you’d like. After a couple of entries, type exit.

      The output will look similar to this (with your input displaying instead of the highlighted lines):

      Output

      Enter a sentence: Twinkle, twinkle, little star
      Enter a sentence: How I wonder what you are
      Enter a sentence: Up above the hills so high
      Enter a sentence: Like a diamond in the sky
      Enter a sentence: exit
      All your sentences have been written to output.txt

      Check output.txt to see the file content using the read command you created earlier.

      • ./mycliprogram read output.txt

      The terminal output should contain all the text you have typed into the command except exit. Based on the input above, the output.txt file has the following content:

      Output

      Twinkle, twinkle, little star
      How I wonder what you are
      Up above the hills so high
      Like a diamond in the sky

      In this step, you wrote to a file using streams. Next, you will implement the function that copies files in your command-line program.

      Step 4 — Copying Files Using pipe()

      In this step, you will use the pipe function to create a copy of a file using streams. Although there are other ways to copy files using streams, using pipe is preferred because you don’t need to manage the data flow.

      For example, one way to copy files using streams would be to create a readable stream for the file, listen to the stream on the data event, and write each chunk from the stream event to a writable stream of the file copy. The snippet below shows an example:

      example.js

      const fs = require('fs');
      const readableStream = fs.createReadStream('lorem-ipsum.txt', 'utf8');
      const writableStream = fs.createWriteStream('lorem-ipsum-copy.txt');
      
      readableStream.on('data', (chunk) => {
          writableStream.write(chunk);
      });

      readableStream.on('end', () => {
          writableStream.end();
      });
      

      The disadvantage of this method is that you need to manage the events on both the readable and writable streams.

      The preferred method for copying files using streams is to use pipe. A plumbing pipe passes water from a source, such as a water tank, to a destination, such as a faucet or tap. Similarly, you use pipe to direct data from a source stream to a destination stream. (If you are familiar with Linux-based shells, the pipe operator | directs data from one command’s output to another command’s input in much the same way.)

      Piping in Node.js provides the ability to read data from a source and write it somewhere else without managing the data flow as you would using the first method. Unlike the previous approach, you do not need to manage the events on both the readable and writable streams. For this reason, it is a preferred approach for implementing a copy command in your command-line application that uses streams.

      In the mycliprogram file, you will add a new function invoked when a user runs the program with the copy command-line argument. The copy method will use pipe() to copy from an input file to the destination copy of the file. Create the copy function after the write function as shown below:

      node-file-streams/mycliprogram

      ...
      function copy(filePath) {
          const inputStream = fs.createReadStream(filePath)
          const fileCopyPath = filePath.split('.')[0] + '-copy.' + filePath.split('.')[1]
          const outputStream = fs.createWriteStream(fileCopyPath)
      
          inputStream.pipe(outputStream)
      
          outputStream.on('finish', () => {
              console.log(`You have successfully created a ${filePath} copy. The new file name is ${fileCopyPath}.`);
          })
      }
      

      In the copy function, you created an input or readable stream using fs.createReadStream(). You also generated a new name for the destination copy of the file and created an output or writable stream using fs.createWriteStream(). Then you piped the data from the inputStream to the outputStream using .pipe(). Finally, you listened for the finish event and printed out a message on a successful file copy.

      Recall that to close a writable stream, you have to call the end() function on the stream. When piping streams, the end() function is called on the writable stream (outputStream) when the readable stream (inputStream) emits the end event. The end() function of the writable stream emits the finish event, and you listen for this event to indicate that you have finished copying a file.
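
      Note: If you ever need the destination to stay open after the source ends (for example, to pipe several sources into the same writable stream one after another), pipe() accepts an options object as a second argument. A minimal sketch:

      inputStream.pipe(outputStream, { end: false });
      // outputStream stays open; call outputStream.end() yourself when you are done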

      To see this function in action, open the mycliprogram file and update the case 2 block of the switch statement as shown below:

      node-file-streams/mycliprogram

      ...
      switch (commands.indexOf(command)) {
          ...
      
          case 2:
              copy(args[3]);
              break;
      
          ...
      }
      

      Calling the copy function in the case 2 block of the switch statements ensures that when you run the mycliprogram program with the copy command and the required file paths, the copy function is executed.

      Run mycliprogram:

      • ./mycliprogram copy lorem-ipsum.txt

      The output will look similar to this:

      Output

      You have successfully created a lorem-ipsum.txt copy. The new file name is lorem-ipsum-copy.txt.

      Within the node-file-streams folder, you will see a newly added file with the name lorem-ipsum-copy.txt.

      You have successfully added a copy function to your command-line program using pipe. In the next step, you will use streams to modify the content of a file.

      Step 5 — Reversing the Content of a File using Transform()

      In the previous three steps of this tutorial, you have worked with streams using the fs module. In this section, you will modify file streams using the Transform() class from the native stream module, which provides a transform stream. You can use a transform stream to read data, manipulate the data, and provide new data as output. Thus, the output is a ‘transformation’ of the input data. Node.js modules that use transform streams include the crypto module for cryptography and the zlib module with gzip for compressing and uncompressing files.
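
      For example, the following short sketch (separate from your command-line program) pipes the lorem-ipsum.txt file from earlier through zlib’s built-in gzip transform stream and writes the compressed output to a hypothetical lorem-ipsum.txt.gz file:

      const fs = require('fs');
      const zlib = require('zlib');

      fs.createReadStream('lorem-ipsum.txt')                  // readable source
          .pipe(zlib.createGzip())                            // built-in transform stream
          .pipe(fs.createWriteStream('lorem-ipsum.txt.gz'));  // writable destination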

      You are going to implement a custom transform stream using the Transform() abstract class. The transform stream you create will reverse the contents of a file, which will demonstrate how you can use transform streams to modify the content of a file however you want.

      In the mycliprogram file, you will add a reverse function that the program will call when a user passes the reverse command-line argument.

      First, you need to import the Transform() class at the top of the file below the other imports. Add the highlighted line as shown below:

      node-file-streams/mycliprogram

      #!/usr/bin/env node
      ...
      const stream = require('stream');
      const Transform = stream.Transform || require('readable-stream').Transform;
      

      In Node.js versions earlier than v0.10, the Transform abstract class is missing. Therefore, the code block above includes the readable-stream polyfill so that this program can work with earlier versions of Node.js. If the Node.js version is greater than 0.10, the program uses the native abstract class; if not, it uses the polyfill.

      Note: If you are using a Node.js version < 0.10, you will have to run npm init -y to create a package.json file and install the polyfill using npm install readable-stream to your working directory for the polyfill to be applied.

      Next, you will create the reverse function right under your copy function. In that function, you will create a readable stream using the filePath parameter, generate a name for the reversed file, and create a writable stream using that name. Then you create reverseStream, an instance of the Transform() class. When you call the Transform() class, you pass in an object containing one function. This important function is the transform function.

      Beneath the copy function, add the code block below to add the reverse function.

      node-file-streams/mycliprogram

      ...
      function reverse(filePath) {
          const readStream = fs.createReadStream(filePath);
          const reversedDataFilePath = filePath.split('.')[0] + '-reversed.'+ filePath.split('.')[1];
          const writeStream = fs.createWriteStream(reversedDataFilePath);
      
          const reverseStream = new Transform({
              transform (data, encoding, callback) {
                  const reversedData = data.toString().split("").reverse().join("");
                  this.push(reversedData);
                  callback();
              }
          });
      
          readStream.pipe(reverseStream).pipe(writeStream).on('finish', () => {
              console.log(`Finished reversing the contents of ${filePath} and saving the output to ${reversedDataFilePath}.`);
          });
      }
      

      The transform function receives three parameters: the data chunk, its encoding type, and a callback function. Within this function, you converted the data to a string, split the string into an array of individual characters, reversed that array, and joined it back together. This process rewrites the data backward instead of forward.

      Next, you connected the readStream to the reverseStream and finally to the writeStream using two pipe() functions. Finally, you listened for the finish event to alert the user when the file contents have been completely reversed.

      You will notice that the code above uses another syntax for listening for the finish event. Instead of listening for the finish event for the writeStream on a new line, you chained the on function to the second pipe function. You can chain some event listeners on a stream. In this case, doing this has the same effect as calling the on('finish') function on the writeStream.

      To wrap things up, replace the console.log statement in the case 3 block of the switch statement with reverse().

      node-file-streams/mycliprogram

      ...
      switch (commands.indexOf(command)) {
          ...
      
          case 3:
              reverse(args[3]);
              break;
      
          ...
      }
      

      To test this function, you will use another file containing the names of countries in alphabetical order (countries.csv). You can download it to your working directory by running the command below.

      • wget https://raw.githubusercontent.com/do-community/node-file-streams/999e66a11cd04bc59843a9c129da759c1c515faf/countries.csv

      You can then run mycliprogram.

      • ./mycliprogram reverse countries.csv

      The output will look similar to this:

      Output

      Finished reversing the contents of countries.csv and saving the output to countries-reversed.csv.

      Compare the contents of countries-reversed.csv with countries.csv to see the transformation. Each name is now written backward, and the order of the names has also been reversed (“Afghanistan” is written as “natsinahgfA” and appears last, and “Zimbabwe” is written as “ewbabmiZ” and appears first).

      You have successfully created a custom transform stream. You have also created a command-line program with functions that use streams for file handling.

      Conclusion

      Streams are used in native Node.js modules and in various yarn and npm packages that perform input/output operations because they provide an efficient way to handle data. In this article, you used various stream-based functions to work with files in Node.js. You built a command-line program with read, write, copy, and reverse commands. Then you implemented each of these commands in functions named accordingly. To implement the functions, you used the createReadStream and createWriteStream functions from the fs module, the pipe method available on readable streams, the createInterface function from the readline module, and the abstract Transform() class from the stream module. Finally, you pieced these functions together in a small command-line program.

      As a next step, you could extend the command-line program you created to include other file system functionality you might want to use locally. A good example could be writing a personal tool to convert data from .tsv files to .csv, or attempting to replicate the wget command you used in this article to download files from GitHub.

      The command-line program you have written handles command-line arguments itself and uses a simple prompt to get user input. You can learn more about building more robust and maintainable command-line applications by following How To Handle Command-line Arguments in Node.js Scripts and How To Create Interactive Command-line Prompts with Inquirer.js.

      Additionally, Node.js provides extensive documentation on the various Node.js stream module classes, methods, and events you might need for your use case.




      How To Use Static Files in Gatsby


      The author selected the Internet Archive to receive a donation as part of the Write for DOnations program.

      Introduction

      Like many popular Static Site Generators, Gatsby embraces the use of dynamic web frameworks, using React on the frontend and Node.js on the backend. But Gatsby can also pull in static files and assets, like images, CSS files, and JavaScript files.

      This tutorial covers the situations in which you might want to use static files with your Gatsby site. It will show you how best to add images, stylesheets (both globally and as modules), JavaScript files, and arbitrary files such as PDFs for your users to download.

      Prerequisites

      Before starting, here are a few things you will need:

      • A local installation of Node.js for running Gatsby and building your site. The installation procedure varies by operating system, but DigitalOcean has guides for Ubuntu 20.04 and macOS, and you can always find the latest release on the official Node.js download page.
      • A new Gatsby project, scaffolded from gatsby-starter-default. For satisfying this requirement and building a new Gatsby project from scratch, you can refer to Step 1 of the How To Set Up Your First Gatsby Website tutorial.
      • Some familiarity with React and JSX, as well as with HTML elements, if you want to customize the user interface (UI) of your posts beyond what is covered in this tutorial.
      • A program to unzip a zip archive file. On most operating systems, unzip is the command of choice, which you can download on Linux with your local package manager.
      • Access to the demo files repository used to provide sample files for this tutorial. You can access it at the DigitalOcean Community GitHub repository, and Step 1 will instruct you on how to download it.

      This tutorial was tested on Node.js v14.16.1, npm v6.14.12, Gatsby v3.13.1, and flexboxgrid v6.3.1.

      Step 1 — Preparing Example Files

      For the purposes of this tutorial, you will be working with a pre-arranged collection of static assets, which will be used throughout the following steps. The collection of files is available as a GitHub repository, and the first step of this tutorial is to download them and place them within your Gatsby project.

      First, you will extract the sample files into your project’s src directory, which you can either do manually by downloading the zip file and using an unzipping tool of your choice, or by running the following commands in your terminal at the root of your Gatsby project:

      • wget -O ../sample-assets.zip https://github.com/do-community/gatsby-static-files-tutorial-assets/archive/refs/heads/main.zip
      • unzip ../sample-assets.zip -d ./src

      The first command downloads an archive of the entire repo as a single zip file with wget, and the second unzips its contents into the src directory.

      Once the files are unzipped, the next step is to create an empty Gatsby page component that will serve as the demo page for this tutorial. Create an empty file at src/pages/static-files-demo.js, then open the file in your editor of choice and add the following code:

      src/pages/static-files-demo.js

      import * as React from "react"
      
      import Layout from "../components/layout"
      import Seo from "../components/seo"
      
      const StaticFilesDemo = () => (
        <Layout>
          <Seo title="Static Files Demo" />
          <h1>Static Files Demo</h1>
        </Layout>
      )
      
      export default StaticFilesDemo
      

      This code serves as a Gatsby page component file that you will use to generate a public page at http://localhost:8000/static-files-demo/. The StaticFilesDemo function is a React component that returns JSX, which becomes the page content. You use export default StaticFilesDemo as the final line, since Gatsby’s build system expects the default export of page components to be the React component responsible for rendering the page.

      After adding the page code, save the file, but keep it open as the following steps will add to it.

      In this first step you downloaded the static asset files that will be used throughout the tutorial and set up a demo page to build inside of. In the next step, you will add one of the most common forms of static assets: image files.

      Step 2 — Adding Images

      A common need for websites is to embed image files in a way that doesn’t impact the loading experience of the site. In this step, you will find out how to do this with Gatsby, using gatsby-plugin-image as well as some HTML to embed images into your Gatsby pages, while also optimizing for load time and bandwidth usage.

      Since gatsby-plugin-image is included in the gatsby-starter-default prerequisite, it is already installed as a dependency and ready for use. If you did not start your project from the gatsby-starter-default template, you can learn about installing and configuring gatsby-plugin-image in the official Gatsby docs.
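      For reference, the relevant configuration amounts to listing the plugin, together with the image-processing plugins it relies on, in gatsby-config.js. The following is a minimal sketch of entries the starter template already provides, not something you need to add:

      gatsby-config.js

      module.exports = {
        plugins: [
          // Provides the StaticImage component used later in this step
          `gatsby-plugin-image`,
          // Image processing used under the hood by gatsby-plugin-image
          `gatsby-plugin-sharp`,
          // Only required when sourcing images through GraphQL (dynamic images)
          `gatsby-transformer-sharp`,
        ],
      }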

      Open up the demo page component file that you created in the previous step and add the following highlighted code:

      src/pages/static-files-demo.js

      import * as React from "react"
      import { StaticImage } from "gatsby-plugin-image"
      
      import Layout from "../components/layout"
      import Seo from "../components/seo"
      
      const StaticFilesDemo = () => (
        <Layout>
          <Seo title="Static Files Demo" />
          <h1>Static Files Demo</h1>
      
          <section className="demo">
            <h2>Static Image Files Demo</h2>
      
            <figure className="image-demo">
              <StaticImage
                src="https://www.digitalocean.com/community/tutorials/gatsby-static-files-tutorial-assets-main/images/water.jpg"
                width={1000}
                quality={90}
                alt="Underwater view of clear, blue body of water"
              />
              <figcaption>
                Photo by{" "}
                <a target="_blank" rel="noreferrer noopener" href="https://unsplash.com/@cristianpalmer">
                  Cristian Palmer
                </a>
              </figcaption>
            </figure>
      
            <figure className="image-demo">
              <StaticImage
                src="https://www.digitalocean.com/community/tutorials/gatsby-static-files-tutorial-assets-main/images/turtle.jpg"
                width={1000}
                quality={90}
                alt="Overhead view of a turtle floating over blue water"
              />
              <figcaption>
                Photo by{" "}
                <a target="_blank" rel="noreferrer noopener" href="https://unsplash.com/@ruizra">
                  Randall Ruiz
                </a>
              </figcaption>
            </figure>
          </section>
        </Layout>
      )
      
      export default StaticFilesDemo
      

      Instead of using the standard HTML img tag directly and pointing it to a public image URL, in this code you are using the StaticImage component from gatsby-plugin-image and passing in the path to your local static images. This is the best practice approach, as StaticImage will generate multiple resized versions of your source images (using gatsby-plugin-sharp under the hood) and deliver the closest match to the visitor of your webpage (using the srcset feature), resulting in a faster page load and smaller download size.

      For the images passed to StaticImage, you used a quality of 90 to override the default value of 50, showing how gatsby-plugin-image can still offer improvements in filesize while preserving quality. You also specified a width of 1000, which serves as a cap on the maximum width, since both of the source images have an original width that far exceeds it. For the purposes of the demo and many web pages, 1000 pixels in width is more than enough. These two options have a substantial impact on performance, but there are many other options for gatsby-plugin-image that are worth exploring, which you can find in the Gatsby docs.
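      As a brief illustration of two of those additional options (a sketch only; the placeholder and formats props below are not part of the tutorial’s code), one of the images could be extended like this. The placeholder="blurred" value shows a low-resolution preview while the full image loads, and formats controls which modern formats are generated:

            <StaticImage
              src="../gatsby-static-files-tutorial-assets-main/images/water.jpg"
              width={1000}
              quality={90}
              placeholder="blurred"
              formats={["auto", "webp", "avif"]}
              alt="Underwater view of clear, blue body of water"
            />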

      In this page, the original versions of the two images take up roughly 4 MB combined, no matter what size screen they are viewed on. With StaticImage, on a small mobile phone they are compressed down to only about 100 kB, roughly 2.5% of the original size, and the loading time on a 3G connection drops from almost 2 minutes to a few seconds. The gatsby-plugin-image plugin also takes advantage of modern image formats that are better suited to compressed web delivery, such as WebP and AVIF.

      Save the file before moving on.

      Note: If you ever need to bypass gatsby-plugin-image and load images completely as-is, Gatsby offers a way to do this via the static folder, but this is generally advised against since this would not include the image optimizations mentioned earlier.

      You have now added several new images to your site, using best practices within Gatsby to provide an optimal user experience to visitors of your web pages. In the next step, you will focus on the styling component of web pages by adding static CSS files to your site.

      Step 3 — Adding CSS Stylesheets

      As with embedding images, there is more than one way to add CSS-based styling to a Gatsby site. Although inline styling and CSS-in-JS are always an option with Gatsby, for site-wide or component styling it is often a better practice to use dedicated static stylesheets. In this step, you will add your own custom CSS file to your Gatsby project, as well as a third-party CSS file, using an approach that follows Gatsby best practices.

      Start by creating a CSS file in the same folder as your demo page, at src/pages/static-files-demo.module.css. You are using the .module.css suffix to mark that this file is meant to be used as a CSS Module, which ensures that the styling will end up scoped to the component it is imported into and not other areas of the user interface (UI).

      After opening the newly created file, add the following code:

      src/pages/static-files-demo.module.css

      .headerText {
        width: 100%;
        text-align: center;
      }
      .container img {
        border-radius: 8px;
      }
      

      In this CSS, you are center-aligning the text in any element with the class of .headerText and making it full width, as well as giving a rounded edge to any img element inside an element with a .container class.

      Save the CSS file and close it. Now open back up the demo page component and add the following highlighted code:

      src/pages/static-files-demo.js

      import * as React from "react"
      import { StaticImage } from "gatsby-plugin-image"
      
      import Layout from "../components/layout"
      import Seo from "../components/seo"
      import * as DemoStyles from "./static-files-demo.module.css"
      
      const StaticFilesDemo = () => (
        <Layout>
          <Seo title="Static Files Demo" />
          <h1 className={DemoStyles.headerText}>Static Files Demo</h1>
      
          <section className={'demo ' + DemoStyles.container}>
            <h2>Static Image Files Demo</h2>
      
            <figure className="image-demo">
              ...
            </figure>
            <figure className="image-demo">
              ...
            </figure>
          </section>
        </Layout>
      )
      
      export default StaticFilesDemo
      

      The first change you made in this file is an import statement for the CSS file you just created. Instead of using a CSS @import statement, you are using the standard ES Module import statement, assigning the value of this import to the DemoStyles variable. Under the hood, Gatsby processes this CSS file with webpack, treating it as a CSS Module.

      You also updated the JSX in the component to use the classes from the CSS module file. You did this by updating the className attributes in strategic locations with the precise scoped class names from the DemoStyles import.
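      To make the scoping concrete, here is a minimal sketch (not code to add to your page) of what the import exposes: an object whose keys are the class names from the CSS file, mapped to generated, build-specific names.

      import * as DemoStyles from "./static-files-demo.module.css"

      console.log(DemoStyles.headerText)
      // Logs something like "static-files-demo-module--headerText--1a2b3"
      // (the exact value varies per build)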

      Save your file.

      The next part of this step involves the opposite scenario: adding static CSS that you want to affect the entire site. An example of the kind of CSS you might load this way is flexboxgrid, a small set of utility classes that help when designing flexbox-based layouts.

      Install this third-party CSS as a dependency by running this command in the root of your Gatsby project:

      • npm install flexboxgrid

      Next, instead of importing the CSS that this library provides within the same demo page as before, you will import it at the highest level of your project so that it applies globally. In the starter template, that level is the layout.js file, since every page wraps its content in the Layout component.

      Note: Another option for global CSS imports is in the gatsby-browser.js file, but Gatsby does not recommend this as the primary approach in most situations, as mentioned in the Gatsby docs about global styling. You can also use traditional <link> elements to import internal or external stylesheets, but that is also not recommended as it bypasses webpack.

      Open src/components/layout.js, and make the following change:

      src/components/layout.js

      /**
       * Layout component that queries for data
       * with Gatsby's useStaticQuery component
       *
       * See: https://www.gatsbyjs.com/docs/use-static-query/
       */
      
      import * as React from "react"
      import PropTypes from "prop-types"
      import { useStaticQuery, graphql } from "gatsby"
      
      import Header from "./header"
      import "./layout.css"
      import "../../node_modules/flexboxgrid/dist/flexboxgrid.min.css"
      
      const Layout = ({ children }) => {
        const data = useStaticQuery(graphql`
          query SiteTitleQuery {
            site {
              siteMetadata {
                title
              }
            }
          }
        `)
      
        return (
          <>
            ...
          </>
        )
      }
      
      Layout.propTypes = {
        children: PropTypes.node.isRequired,
      }
      
      export default Layout
      

      You have just added an import statement that directly imports the CSS file from the flexboxgrid library under node_modules. Since the CSS file is meant to be applied to the entire site, you are not assigning it to a variable, and because you don’t want to use it as a module, the filename does not end in .module.css.
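      As an aside, webpack can also resolve package names directly against node_modules, so an equivalent (and shorter) form of the same import would be the following; this is an alternative, not what the tutorial’s code uses:

      src/components/layout.js

      // Equivalent to the relative-path import above; webpack resolves the
      // package name against node_modules.
      import "flexboxgrid/dist/flexboxgrid.min.css"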

      Save and close layout.js to complete the process of globally importing the CSS across your Gatsby site.

      With the CSS globally imported, you will now use the classes from flexboxgrid in your demo page, without having to import the CSS file again. Open the demo page component file and update the code:

      src/pages/static-files-demo.js

      import * as React from "react"
      import { StaticImage } from "gatsby-plugin-image"
      
      import Layout from "../components/layout"
      import Seo from "../components/seo"
      import * as DemoStyles from "./static-files-demo.module.css"
      
      const StaticFilesDemo = () => (
        <Layout>
          <Seo title="Static Files Demo" />
          <h1 className={DemoStyles.headerText}>Static Files Demo</h1>
      
          <section className={'demo row around-xs ' + DemoStyles.container}>
            <h2 className="col-xs-12">Static Image Files Demo</h2>
      
            <figure className="image-demo col-xs-10 col-sm-5">
              ...
            </figure>
      
            <figure className="image-demo col-xs-10 col-sm-5">
              ...
            </figure>
          </section>
        </Layout>
      )
      
      export default StaticFilesDemo
      

      You have just added some classes to your demo page that use rulesets from the flexboxgrid CSS file. The row and around-xs classes on the section element turn it into a wrapping flex container with justify-content set to space-around, and the col-* classes control how much of the row each element takes up.
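      For a rough sense of what those classes apply, here is a simplified sketch of the library’s rules (the real stylesheet also adds padding, breakpoints, and other declarations):

      .row       { display: flex; flex-wrap: wrap; }
      .around-xs { justify-content: space-around; }
      .col-xs-12 { flex-basis: 100%; max-width: 100%; }
      .col-xs-10 { flex-basis: 83.333%; max-width: 83.333%; }
      /* col-sm-* rules apply only from the "sm" breakpoint upward */
      .col-sm-5  { flex-basis: 41.667%; max-width: 41.667%; }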

      Make sure to save your changes to the demo file before closing it. With this change, the page has become more responsive and the images will appear side-by-side on a large enough display.

      To preview your changes so far, run this command from the root of your project:

      • npm run develop

      This will start a local development server for your Gatsby site at http://localhost:8000/static-files-demo. Navigate to this URL and you will find your site rendered with your new styling:

      Screenshot showing that the demo images are now side-by-side, in a row, and with a space between them

      Note: Since Gatsby ships with React out of the box, in addition to the static CSS options outlined here, you also have the option of applying styling through React and React-based frameworks.

      In this step, you used static CSS files to add additional styling to your site. In the next step, you will use a similar approach to add static JavaScript files for added functionality.

      Step 4 — Adding JavaScript Files

      Gatsby already uses JavaScript on both the backend and the frontend, but that JavaScript is either Node.js code that only runs during the build process or React components used to generate the static HTML output. In this step, you will include JavaScript files that are neither Node.js- nor React-based, but still get pulled into every page generated by Gatsby.

      For this tutorial, you are adding a file that prints a message in the developer console of any visitor who opens it. You can inspect the JavaScript that will run by opening the file at src/gatsby-static-files-tutorial-assets-main/js/every-page.js.
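      If you prefer not to open the file, its contents amount to a plain browser script along these lines (a sketch; the exact message in the repository’s file may differ):

      // A script of this kind simply logs to the visitor's console on every page.
      console.log("Hello from every-page.js!") // the repository's actual message may differ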

      Rather than importing this file directly into a Gatsby file via an ES Module import (as you did with the CSS files), you will add it to the page via a Gatsby server-side rendering (SSR) API. This approach gives you fine-grained control over where in the DOM the JavaScript file is pulled in, and also prevents your code from being executed as part of Gatsby’s build process.

      However, before using the Gatsby SSR API, you need to make the static JavaScript file publicly accessible. To do this, use the special static folder that Gatsby supports by creating a folder named static in the root of your Gatsby project, then copying the static JavaScript file into it. You can do both of these actions manually in your file browser, or with the following commands run from the root of your project:

      • mkdir static
      • cp src/gatsby-static-files-tutorial-assets-main/js/every-page.js static/every-page.js

      With this action, the JavaScript file is now publicly accessible at http://localhost:8000/every-page.js. The next part of this step is to trigger it to load via HTML.
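      With the development server from the previous step still running, you can optionally confirm that the file is being served before wiring it into the HTML (an extra check, not a required step; you may need to restart the development server for newly added static files to be picked up):

      • curl http://localhost:8000/every-page.js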

      Open up gatsby-ssr.js in the root of your Gatsby project, as that is where Gatsby will allow you to hook into the server-rendering APIs. Now add the following code:

      gatsby-ssr.js

      /**
       * Implement Gatsby's SSR (Server Side Rendering) APIs in this file.
       *
       * See: https://www.gatsbyjs.com/docs/ssr-apis/
       */
      
      import * as React from "react"
      
      export const onRenderBody = ({ setPostBodyComponents }) => {
        setPostBodyComponents([
          <script type="text/javascript" src="https://www.digitalocean.com/every-page.js" key="every-page-js" defer></script>,
        ])
      }
      

      The first line you added imports React, which is necessary to use JSX in this file. Next, you export a function called onRenderBody, which receives an object containing the setPostBodyComponents function; you call setPostBodyComponents from within your own function, passing a script tag that loads your static JS file. Following best practices, the tag includes a unique key property and defer, since in this case it does not matter when the JavaScript executes.

      The setPostBodyComponents function takes the React components in the array passed as its first argument, in this case a single script tag, and renders them after the page’s body content, which triggers the loading of your script file in the browser. Save the file, but keep it open, as you will add to it again shortly.

      Now navigate to your http://localhost:8000/static-files-demo URL and open up a JavaScript console. You will find the message created by the JavaScript file, as shown in the following image:

      A browser with the demo page loaded, with the console message from `every-page.js` showing up in the JavaScript console.

      Note: If you are using the live development feature of Gatsby, you might need to halt and restart npm run develop before changes to this file take effect.

      You have now added a local static JavaScript file using the Gatsby SSR API, but the same strategy can also be used to load external JavaScript from other domains, also known as third-party scripts. To make the images in your demo easy to zoom in on, you will add a third-party lightbox library called Fancybox. In the same gatsby-ssr.js file, add the following lines:

      gatsby-ssr.js

      /**
       * Implement Gatsby's SSR (Server Side Rendering) APIs in this file.
       *
       * See: https://www.gatsbyjs.com/docs/ssr-apis/
       */
      
      import * as React from "react"
      
      export const onRenderBody = ({ setPostBodyComponents }) => {
        setPostBodyComponents([
          <script type="text/javascript" src="https://www.digitalocean.com/every-page.js" key="every-page-js" defer></script>,
          <script
            src="https://cdn.jsdelivr.net/npm/@fancyapps/[email protected]/dist/fancybox.umd.js"
            integrity="sha256-B34QrPZs5i0CQ3eqywkXHKIWw8msfAVH30RWj/i+dMo="
            crossOrigin="anonymous"
            key="fancybox-js"
            defer
          ></script>,
          <link
            rel="stylesheet"
            href="https://cdn.jsdelivr.net/npm/@fancyapps/[email protected]/dist/fancybox.css"
            integrity="sha256-WIieo0WFPkV7kcA2lQ4ZCO5gTg1Bs/SBX5YzEB4JkyM="
            crossOrigin="anonymous"
            key="fancybox-css"
          ></link>,
        ])
      }
      

      In this code, you have added both the JavaScript and the CSS for Fancybox, the third-party library, through the same onRenderBody Gatsby SSR API you are using for the local every-page.js file. Two extra attributes appear this time, both of which help with security. The crossOrigin="anonymous" attribute explicitly tells the browser not to send credentials to a domain that does not match your own, and the integrity attribute enforces Subresource Integrity (SRI), which protects against the file changing after you added it: if the downloaded file no longer matches the hash, the browser refuses to use it.
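      If you ever need to produce one of these integrity hashes yourself, for example for a script hosted elsewhere, a common approach is to hash the exact file contents and Base64-encode the digest (an aside, not part of this tutorial’s steps; replace the URL with the file you are verifying):

      • curl -s https://example.com/some-library.js | openssl dgst -sha256 -binary | openssl base64 -A

      The resulting value goes into the integrity attribute prefixed with sha256-.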

      Warning: As a rule of thumb, treat third-party scripts and styles as untrusted. In addition to inspecting them before use, always use SRI. In general, loading third-party assets via URL instead of bundling with imports is something that should be avoided when possible, but is sometimes necessary for analytics, embed widgets, and error-logging services.

      This completes the task of getting the third-party code to load, but for this specific library, there is another step to trigger the new UI features.

      Save and close gatsby-ssr.js, then open src/pages/static-files-demo.js back up and make the following edits:

      src/pages/static-files-demo.js

      import * as React from "react"
      import { StaticImage } from "gatsby-plugin-image"
      
      import Layout from "../components/layout"
      import Seo from "../components/seo"
      import * as DemoStyles from "./static-files-demo.module.css"
      
      const StaticFilesDemo = () => (
        <Layout>
          <Seo title="Static Files Demo" />
          <h1 className={DemoStyles.headerText}>Static Files Demo</h1>
      
          <section className={"demo row around-xs " + DemoStyles.container}>
            <h2 className="col-xs-12">Static Image Files Demo</h2>
      
            <figure className="image-demo col-xs-10 col-sm-5">
              <StaticImage
                data-fancybox
                src="https://www.digitalocean.com/community/tutorials/gatsby-static-files-tutorial-assets-main/images/water.jpg"
                ...
              />
              <figcaption>
                ...
              </figcaption>
            </figure>
      
            <figure className="image-demo col-xs-10 col-sm-5">
              <StaticImage
                data-fancybox
                src="https://www.digitalocean.com/community/tutorials/gatsby-static-files-tutorial-assets-main/images/turtle.jpg"
                ...
      
              />
              <figcaption>
                ...
              </figcaption>
            </figure>
          </section>
        </Layout>
      )
      
      export default StaticFilesDemo
      

      Adding data-fancybox tells the Fancybox library which images to trigger the lightbox effect on. With that, your users can open the lightbox viewer by clicking a demo image, as shown in the following GIF:

      Screen recording showing that clicking a single image on the demo page launches a full-page lightbox viewer, complete with controls, which is closed after switching between the images

      Note: For loading scripts or styles in the <head> of the page, both your own and third-party, the recommended approach is with the gatsby-plugin-react-helmet plugin. This plugin is bundled with the starter template, and you can add to an existing usage of it in src/components/seo.js.

      You just used two different ways to pull local and remote JavaScript into your Gatsby site, each with its own use cases and trade-offs. For file types not covered by this step or previous ones, the next section in this tutorial will address how to include arbitrary files in your Gatsby site.

      Step 5 — Adding Arbitrary Static Files

      You have now implemented three common types of static assets in web development: images, CSS, and JavaScript. But that still leaves lots of other file types that might be part of a website. In this step, you will explore adding arbitrary static files to your Gatsby site, so that they can be embedded or offered as downloads to your visitors.

      The first way to add arbitrary static files so that users can access them is to import them inside of JavaScript files and use webpack to generate a public link for you. This strategy is recommended by the official Gatsby docs on importing assets, since webpack will help prevent typos in paths, avoid unnecessary disk space usage for files that are never imported, and in some cases, even inline the contents of the file as a data URI.

      Open the demo page component file and add the following edit:

      src/pages/static-files-demo.js

      import * as React from "react"
      import { StaticImage } from "gatsby-plugin-image"
      
      import Layout from "../components/layout"
      import Seo from "../components/seo"
      import * as DemoStyles from "./static-files-demo.module.css"
      import helloWorldPdf from "../gatsby-static-files-tutorial-assets-main/hello-world.pdf"
      
      const StaticFilesDemo = () => (
        <Layout>
          <Seo title="Static Files Demo" />
          <h1 className={DemoStyles.headerText}>Static Files Demo</h1>
      
          <section>
            <h2>Arbitrary Static Files</h2>
            <a href={helloWorldPdf} title="Download the Hello World file">
              Access the Hello World file by clicking here.
            </a>
          </section>
      
          <section className={"demo row around-xs " + DemoStyles.container}>
            <h2 className="col-xs-12">Static Image Files Demo</h2>
            ...
          </section>
        </Layout>
      )
      
      export default StaticFilesDemo
      

      In this code, you are importing the static file (a PDF) directly in JavaScript. Webpack parses this import and generates a unique link, which becomes the value of helloWorldPdf, the URL your a tag points to.

      Due to the way assets are handled with this method, the final link will look rather random, like /static/hello-world-2f669160afa9b953cbe496f2d6ccb046.pdf. This works for most scenarios, but if you need a permanent, readable link, Gatsby offers another option in the form of the special static folder. Suppose you are an employer who wants employees to be able to bookmark the link your_domain.com/time-off-form.pdf; you will add a new file using this static folder.

      First, copy the static file time-off-form.pdf from the demo files directory to the root static folder. You can do this manually, or with the following command:

      • cp src/gatsby-static-files-tutorial-assets-main/time-off-form.pdf static/time-off-form.pdf

      Next, add a link to it in the page file:

      src/pages/static-files-demo.js

      import * as React from "react"
      
      ...
      
      import helloWorldPdf from "../gatsby-static-files-tutorial-assets-main/hello-world.pdf"
      
      const StaticFilesDemo = () => (
        <Layout>
          <Seo title="Static Files Demo" />
          <h1 className={DemoStyles.headerText}>Static Files Demo</h1>
      
          <section>
            <h2>Arbitrary Static Files</h2>
            <a href={helloWorldPdf} title="Download the Hello World file">
              Access the Hello World file by clicking here.
            </a>
            <br />
            <a href="https://www.digitalocean.com/time-off-form.pdf" title="Time Off Form">
              Request Time Off - Form to fill out and submit.
            </a>
          </section>
      
          <section className={"demo row around-xs " + DemoStyles.container}>
            ...
          </section>
        </Layout>
      )
      
      export default StaticFilesDemo
      

      Save the changes to this file and close it.

      With the static folder approach, you get to give your users a permanent path of /time-off-form.pdf, but you lose the benefits of cache-busting when the file changes and of webpack’s compilation step catching typos in file paths.

      Navigate to http://localhost:8000/time-off-form.pdf to view the following PDF:

      A sample PDF that has instructions for employees to log their time off.

      Thanks to your efforts in this step, visitors can now access additional static file types directly from your site. This is convenient for them, since they don’t have to leave your domain to reach these files, and beneficial for you, since it makes them less likely to leave the site entirely.

      Note: If you are using Markdown as a source within Gatsby and want to automatically copy files that are linked inside Markdown links to the public part of your site so that users can download them, please take a look at the gatsby-remark-copy-linked-files plugin.

      Conclusion

      Through following the steps in this tutorial, you have added new types of static files and assets to your website, integrating them into the Gatsby system and your existing content. These approaches can be applied to almost any new or existing Gatsby site and across hundreds of use-cases and file types, so you are now well-equipped to handle static files in Gatsby for the future.

      Static files are involved in many parts of web development, and they both shape and are shaped by related design decisions that are worth exploring further as you build out your site.

      If you would like to read more on Gatsby, check out the rest of the How To Create Static Web Sites with Gatsby.js series.


