
      November 2021

      How To Use PostgreSQL With Node.js on Ubuntu 20.04


      The author selected Society of Women Engineers to receive a donation as part of the Write for DOnations program.

      Introduction

      The Node.js ecosystem provides a set of tools for interfacing with databases. One of those tools is node-postgres, which contains modules that allow Node.js to interface with the PostgreSQL database. Using node-postgres, you will be able to write Node.js programs that can access and store data in a PostgreSQL database.

      In this tutorial, you’ll use node-postgres to connect and query the PostgreSQL (Postgres in short) database. First, you’ll create a database user and the database in Postgres. You will then connect your application to the Postgres database using the node-postgres module. Afterwards, you will use node-postgres to insert, retrieve, and modify data in the PostgreSQL database.

      Prerequisites

      To complete this tutorial, you will need:

      Step 1 – Setting Up the Project Directory

      In this step, you will create the directory for the Node application and install node-postgres using npm. This directory will hold the configuration files and scripts your application uses to interact with the PostgreSQL database.

      Create the directory for your project using the mkdir command:

      • mkdir node_pg_app

      Navigate into the newly created directory using the cd command:

      • cd node_pg_app

      Initialize the directory with a package.json file using the npm init command:

      • npm init -y

      The -y flag creates a default package.json file.

      Next, install the node-postgres module with npm install:

      • npm install pg

      You’ve now set up the directory for your project and installed node-postgres as a dependency. You’re now ready to create a user and a database in Postgres.

      Step 2 — Creating A Database User and a Database in PostgreSQL

      In this step, you’ll create a database user and the database for your application.

      When you install Postgres on Ubuntu for the first time, it creates a user postgres on your system, a database user named postgres, and a database postgres. The user postgres allows you to open a PostgreSQL session where you can do administrative tasks such as creating users and databases.

      PostgreSQL uses the peer authentication scheme for local connections, which allows an Ubuntu user to log in to the Postgres shell as long as their Ubuntu username matches an existing Postgres role. Since you already have a postgres user on Ubuntu and a postgres role in PostgreSQL created on your behalf, you’ll be able to log in to the Postgres shell.

      To log in, switch the Ubuntu user to postgres with sudo and open the Postgres shell using the psql command:

      • sudo -u postgres psql

      The command’s components are as follows:

      • -u: a flag that runs the command as the specified Ubuntu user. Passing postgres as the argument switches the user on Ubuntu to postgres.
      • psql: the Postgres interactive terminal, where you can enter SQL commands to create databases, roles, tables, and more.

      Once you log in to the Postgres shell, your prompt will look like the following:

      postgres=#

      postgres is the name of the database you’ll be interacting with and the # denotes that you’re logged in as a superuser.

      For the Node application, you’ll create a separate user and database that the application will use to connect to Postgres.

      To do that, create a new role with a strong password:

      • CREATE USER fish_user WITH PASSWORD 'password';

      A role in Postgres can be considered as a user or group depending on your use case. In this tutorial, you’ll use it as a user.

      Next, create a database and assign ownership to the user you created:

      • CREATE DATABASE fish OWNER fish_user;

      Assigning the database ownership to fish_user grants the role privileges to create, drop, and insert data into the tables in the fish database.

      With the user and database created, exit the Postgres interactive shell:

      • \q

      To log in to the Postgres shell as fish_user, you need a user on Ubuntu with the same name as the Postgres role you created.

      Create the user with the adduser command:

      • sudo adduser fish_user

      You have now created a user on Ubuntu, a PostgreSQL user, and a database for your Node application. Next, you’ll log in to the PostgreSQL interactive shell as fish_user and create a table.

      Step 3 — Opening A Postgres Shell With a Role and Creating a Table

      In this section, you’ll open the Postgres shell as the user you created in the previous section. Once you’ve logged in to the shell, you’ll create a table for the Node.js app.

      To open the shell as the fish_user, enter the following command:

      • sudo -u fish_user psql -d fish

      sudo -u fish_user switches your Ubuntu user to fish_user and then runs the psql command as that user. The -d flag specifies the database you want to connect to, which is fish in this case. If you don’t specify a database, psql tries to connect to a database named fish_user by default; since no such database exists, the command would return an error.

      Once you’re logged in to the psql shell, your prompt will look like the following:

      fish denotes that you’re now connected to the fish database.

      You can verify the connection using the \conninfo meta-command:

      • \conninfo

      You will receive output similar to the following:

      Output

      You are connected to database "fish" as user "fish_user" via socket in "/var/run/postgresql" at port "5432".

      The output confirms that you have indeed logged in as fish_user and are connected to the fish database.

      Next, you’ll create a table that will contain the data your application will insert.

      The table you’ll create will keep track of shark names and their colors. When populated with data, it will look like the following:

       id | name  | color
      ----+-------+-------
        1 | sammy | blue
        2 | jose  | teal

      Create the table using the SQL CREATE TABLE command:

      • CREATE TABLE shark(
      • id SERIAL PRIMARY KEY,
      • name VARCHAR(50) NOT NULL,
      • color VARCHAR(50) NOT NULL);

      The CREATE TABLE shark command creates a table with 3 columns:

      • id: an auto-incrementing field and primary key for the table. Each time you insert a row, Postgres will increment and populate the id value.

      • name and color: fields that can store 50 characters. NOT NULL is a constraint that prevents the fields from being empty.

      Verify that the table has been created with the right owner:

      • \dt

      The \dt meta-command lists all tables in the database.

      When you run the command, the output will resemble the following:

               List of relations
       Schema | Name  | Type  |   Owner
      --------+-------+-------+-----------
       public | shark | table | fish_user
      (1 row)
      
      

      The output confirms that the fish_user owns the shark table.

      Now exit the Postgres shell:

      • \q

      This takes you back to your project directory.

      With the table created, you’ll use the node-postgres module to connect to Postgres.

      Step 4 — Connecting To a Postgres Database

      In this step, you’ll use node-postgres to connect your Node.js application to the PostgreSQL database. To do that, you’ll use node-postgres to create a connection pool. A connection pool functions as a cache for database connections allowing your app to reuse the connections for all the database requests. This can speed up your application and save your server resources.

      Create and open a db.js file in your preferred editor. In this tutorial, you’ll use nano, a terminal text editor:

      • nano db.js

      In your db.js file, require the node-postgres module and use destructuring assignment to extract the Pool class from it.

      node_pg_app/db.js

      const { Pool } = require('pg')
      
      

      Next, create a Pool instance to create a connection pool:

      node_pg_app/db.js

      const { Pool } = require('pg')
      
      const pool = new Pool({
        user: 'fish_user',
        database: 'fish',
        password: 'password',
        port: 5432,
        host: 'localhost',
      })
      
      

      When you create the Pool instance, you pass a configuration object as an argument. This object contains the details node-postgres will use to establish a connection to Postgres.

      The object defines the following properties:

      • user: the user you created in Postgres.
      • database: the name of the database you created in Postgres.
      • password: the password for the user fish_user.
      • port: the port Postgres is listening on. 5432 is the default port.
      • host: the Postgres server you want node-postgres to connect to. Passing localhost connects node-postgres to the Postgres server installed on your system. If your Postgres server resides on another Droplet, the host would look like this: host: server_ip_address.

      Note: In production, it’s recommended to keep configuration values in a separate file, such as a .env file. If you’re using Git, add that file to your .gitignore file so it isn’t tracked by version control. The advantage is that it keeps sensitive information, such as your password, user, and database name, out of your source code and away from attackers.
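As a minimal sketch of that approach (this helper is hypothetical and not part of the tutorial's files), you could build the configuration object from environment variables instead of hard-coding it. PGUSER, PGPASSWORD, PGDATABASE, PGHOST, and PGPORT are the conventional Postgres variable names, which node-postgres also reads by default; the fallback values here are assumptions matching this tutorial's setup:

```javascript
// Hypothetical helper: assemble the pool configuration from environment
// variables, falling back to this tutorial's values where none are set.
function buildPoolConfig(env = process.env) {
  return {
    user: env.PGUSER || "fish_user",
    database: env.PGDATABASE || "fish",
    password: env.PGPASSWORD,
    port: Number(env.PGPORT || 5432), // env vars are strings, so convert
    host: env.PGHOST || "localhost",
  };
}

module.exports = { buildPoolConfig };
```

You could then call new Pool(buildPoolConfig()) in db.js, keeping the secrets out of the source file.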

      Once you create the instance, the Pool object is stored in the pool variable. node-postgres establishes the underlying connections lazily, when the first query runs. To use the pool anywhere in your app, you will need to export it. Update your db.js file so that it exports the pool instance:

      node_pg_app/db.js

      const { Pool } = require("pg");
      
      const pool = new Pool({
        user: "fish_user",
        database: "fish",
        password: "password",
        port: 5432,
        host: "localhost",
      });
      
      module.exports = { pool };
      
      

      Save the file and exit nano by pressing CTRL+X. Enter y to save the changes, and confirm your file name by pressing ENTER (or RETURN on a Mac).

      Now that you’ve connected your application to Postgres, you’ll use this connection to insert data in Postgres.

      Step 5 — Inserting Data Into the Postgres Database

      In this step, you’ll create a program that adds data into the PostgreSQL database using the connection pool you created in the db.js file. To ensure that the program inserts different data each time it runs, you’ll give it functionality to accept command-line arguments. When running the program, you’ll pass it the name and color of the shark.

      Create and open an insertData.js file in your editor:

      • nano insertData.js

      In your insertData.js file, add the following code to make the script process command-line arguments:

      node_pg_app/insertData.js

      const { pool } = require("./db");
      
      async function insertData() {
        const [name, color] = process.argv.slice(2);
        console.log(name, color);
      }
      
      insertData();
      

      First, you require the pool object from the db.js file. This allows your program to use the database connection pool to query the database.

      Next, you declare the insertData() function as an asynchronous function with the async keyword. This lets you use the await keyword to wait for the asynchronous database requests to complete.

      Within the insertData() function, you use the global process object to access the command-line arguments. The process.argv property holds all of the arguments in an array, including the node and insertData.js entries.

      For example, when you run the script in the terminal with node insertData.js sammy blue, the process.argv property will return an array: ['node', 'insertData.js', 'sammy', 'blue'] (the first two entries, normally full paths, have been shortened for brevity).

      To skip the first two elements, node and insertData.js, you call JavaScript’s slice() method on the process.argv array. This returns the elements from index 2 onwards, which are then destructured into the name and color variables.
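The argument handling above can be isolated into a small function and exercised with a hypothetical argv array (parseArgs() is a sketch for illustration, not part of the tutorial's files):

```javascript
// Sketch: the same slice-and-destructure pattern, made testable.
function parseArgs(argv) {
  // argv[0] is the node binary and argv[1] is the script path;
  // slice(2) keeps only the positional arguments.
  const [name, color] = argv.slice(2);
  return { name, color };
}
```

Calling parseArgs(["node", "insertData.js", "sammy", "blue"]) returns { name: "sammy", color: "blue" }.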

      Save your file and exit nano with CTRL+X. Run the file using node, passing it the arguments sammy and blue:

      • node insertData.js sammy blue

      After running the command, you will see the following output:

      Output

      sammy blue

      The function can now access the name and shark color from the command-line arguments. Next, you’ll modify the insertData() function to insert data into the shark table.

      Open the insertData.js file in your text editor again and update the insertData() function to query the database:

      node_pg_app/insertData.js

      const { pool } = require("./db");
      
      async function insertData() {
        const [name, color] = process.argv.slice(2);
        const res = await pool.query(
            "INSERT INTO shark (name, color) VALUES ($1, $2)",
            [name, color]
          );
        console.log(`Added a shark with the name ${name}`);
      }
      
      insertData();
      

      Now the insertData() function reads the shark’s name and color from the command-line arguments. Next, it awaits the pool.query() method from node-postgres, passing the SQL statement INSERT INTO shark (name, color) VALUES ($1, $2) as the first argument. This statement inserts a record into the shark table using what’s called a parameterized query: the $1 and $2 placeholders correspond to the name and color variables in the array passed as the second argument, [name, color]. When Postgres executes the statement, the values are substituted safely, protecting your application from SQL injection. After the query executes, the function logs a success message using console.log().
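To see why the placeholders matter, consider what happens if you build the SQL by string interpolation instead. This illustration uses plain string handling, no database needed; unsafeInsert() is hypothetical and exists only to demonstrate the problem:

```javascript
// Anti-pattern for illustration: interpolating user input into SQL text
// lets that input rewrite the statement itself.
function unsafeInsert(name, color) {
  return `INSERT INTO shark (name, color) VALUES ('${name}', '${color}')`;
}

const malicious = "sammy'); DROP TABLE shark; --";
const statement = unsafeInsert(malicious, "blue");
// statement now smuggles in a second, destructive command.
// With pool.query("INSERT INTO shark (name, color) VALUES ($1, $2)", [name, color]),
// node-postgres sends the values separately from the SQL text, so Postgres
// treats them strictly as data, never as SQL.
```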

      Before you run the script, wrap the code inside insertData() function in a try...catch block to handle runtime errors:

      node_pg_app/insertData.js

      const { pool } = require("./db");
      
      async function insertData() {
        const [name, color] = process.argv.slice(2);
        try {
          const res = await pool.query(
            "INSERT INTO shark (name, color) VALUES ($1, $2)",
            [name, color]
          );
          console.log(`Added a shark with the name ${name}`);
        } catch (error) {
          console.error(error)
        }
      }
      
      insertData()
      

      When the function runs, the code inside the try block executes. If successful, the function will skip the catch block and exit. However, if an error is triggered inside the try block, the catch block will execute and log the error in the console.
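That control flow can be sketched on its own, with no database involved; attemptInsert() below is a hypothetical stand-in for insertData(), where a thrown error plays the role of a rejected query:

```javascript
// Sketch: an error thrown (or a promise rejected) inside try jumps
// straight to the catch block; otherwise catch is skipped entirely.
async function attemptInsert(shouldFail) {
  try {
    if (shouldFail) throw new Error("connection refused");
    return "Added a shark";
  } catch (error) {
    return `Logged error: ${error.message}`;
  }
}
```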

      Your program can now take command-line arguments and use them to insert a record into the shark table.

      Save and exit out of your text editor. Run the insertData.js file with sammy and blue as command-line arguments:

      • node insertData.js sammy blue

      You’ll receive the following output:

      Output

      Added a shark with the name sammy

      Running the command inserts a record into the shark table with the name sammy and the color blue.

      Next, execute the file again with jose and teal as command-line arguments:

      • node insertData.js jose teal

      Your output will look similar to the following:

      Output

      Added a shark with the name jose

      This confirms you inserted another record into the shark table with the name jose and the color teal.

      You’ve now inserted two records in the shark table. In the next step, you’ll retrieve the data from the database.

      Step 6 — Retrieving Data From the Postgres Database

      In this step, you’ll retrieve all the records in the shark table using node-postgres and log them to the console.

      Create and open a retrieveData.js file in your favorite editor:

      • nano retrieveData.js

      In your retrieveData.js, add the following code to retrieve data from the database:

      node_pg_app/retrieveData.js

      const { pool } = require("./db");
      
      async function retrieveData() {
        try {
          const res = await pool.query("SELECT * FROM shark");
          console.log(res.rows);
        } catch (error) {
          console.error(error);
        }
      }
      
      retrieveData()
      

      The retrieveData() function reads all rows in the shark table and logs them to the console. Within the function’s try block, you invoke the pool.query() method from node-postgres with an SQL statement as an argument. The SQL statement SELECT * FROM shark retrieves all records in the shark table. Once they’re retrieved, the console.log() statement logs the rows.

      If an error is triggered, execution will skip to the catch block, and log the error. In the last line, you invoke the retrieveData() function.

      Next, save and close your editor. Run the retrieveData.js file:

      • node retrieveData.js

      You will see output similar to this:

      Output

      [ { id: 1, name: 'sammy', color: 'blue' }, { id: 2, name: 'jose', color: 'teal' } ]

      node-postgres returns the table rows as an array of JavaScript objects, one object per row.
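Because res.rows is a plain array, the usual array methods apply directly. A small sketch, using literal data mirroring the output above rather than a live query:

```javascript
// res.rows from the query above would look like this plain array:
const rows = [
  { id: 1, name: "sammy", color: "blue" },
  { id: 2, name: "jose", color: "teal" },
];

// Standard array methods work on it, e.g. collecting just the names:
const names = rows.map((shark) => shark.name);
```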

      You can now retrieve data from the database. Next, you’ll modify data in the table using node-postgres.

      Step 7 — Modifying Data In the Postgres Database

      In this step, you’ll use node-postgres to modify data in the Postgres database. This will allow you to change the data in any of the shark table records.

      You’ll create a script that takes two command-line arguments: id and name. You will use the id value to select the record you want in the table. The name argument will be the new value for the record whose name you want to change.

      Create and open the modifyData.js file:

      • nano modifyData.js

      In your modifyData.js file, add the following code to modify a record in the shark table:

      node_pg_app/modifyData.js

      const { pool } = require("./db");
      
      async function modifyData() {
        const [id, name] = process.argv.slice(2);
        try {
          const res = await pool.query("UPDATE shark SET name = $1 WHERE id = $2", [
            name,
            id,
          ]);
          console.log(`Updated the shark name to ${name}`);
        } catch (error) {
          console.error(error);
        }
      }
      
      modifyData();
      

      First, you require the pool object from the db.js file in your modifyData.js file.

      Next, you define an asynchronous function modifyData() to modify a record in Postgres. Inside the function, you define two variables id and name from the command-line arguments using the destructuring assignment.

      Within the try block, you invoke the pool.query() method from node-postgres, passing it an SQL statement as the first argument. In the UPDATE statement, the WHERE clause selects the record whose id matches the given value. Once the record is selected, SET name = $1 changes the value of its name field to the new value.

      Next, console.log() logs a message once the record’s name has been changed. Finally, you call the modifyData() function on the last line.

      Save and exit the file using CTRL+X. Run the modifyData.js file with 2 and san as the arguments:

      • node modifyData.js 2 san

      You will receive the following output:

      Output

      Updated the shark name to san

      To confirm that the record’s name has been changed from jose to san, run the retrieveData.js file again:

      • node retrieveData.js

      You will get output similar to the following:

      Output

      [ { id: 1, name: 'sammy', color: 'blue' }, { id: 2, name: 'san', color: 'teal' } ]

      The record with the id 2 now has the name san in place of jose.

      With that done, you’ve now successfully updated a record in the database using node-postgres.

      Conclusion

      In this tutorial, you used node-postgres to connect and query a Postgres database. You began by creating a user and database in Postgres. You then created a table, connected your application to Postgres using node-postgres, and inserted, retrieved, and modified data in Postgres using the node-postgres module.

      For more information about node-postgres, visit its documentation. To improve your Node.js skills, you can explore the How To Code in Node.js series.




      How To Design a Document Schema in MongoDB


      The author selected the Open Internet/Free Speech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      If you have a lot of experience working with relational databases, it can be difficult to move past the principles of the relational model, such as thinking in terms of tables and relationships. Document-oriented databases like MongoDB make it possible to break free from the rigidity and limitations of the relational model. However, the flexibility and freedom that come with being able to store self-descriptive documents in the database can lead to other pitfalls and difficulties.

      This conceptual article outlines five common guidelines related to schema design in a document-oriented database and highlights various considerations one should make when modeling relationships between data. It will also walk through several strategies one can employ to model such relationships, including embedding documents within arrays and using child and parent references, as well as when these strategies would be most appropriate to use.

      Guideline 1 — Storing Together What Needs to be Accessed Together

      In a typical relational database, data is kept in tables, and each table is constructed with a fixed list of columns representing various attributes that make up an entity, object, or event. For example, in a table representing students at a university, you might find columns holding each student’s first name, last name, date of birth, and a unique identification number.

      Typically, each table represents a single subject. If you wanted to store information about a student’s current studies, scholarships, or prior education, it could make sense to keep that data in a separate table from the one holding their personal information. You could then connect these tables to signify that there is a relationship between the data in each one, indicating that the information they contain has a meaningful connection.

      For instance, a table describing each student’s scholarship status could refer to students by their student ID number, but it would not store the student’s name or address directly, avoiding data duplication. In such a case, to retrieve complete information about any student, including their prior education and scholarships, a query would need to access more than one table at a time and then compile the results from different tables into one.

      This method of describing relationships through references is known as a normalized data model. Storing data this way — using multiple separate, concise objects related to each other — is also possible in document-oriented databases. However, the flexibility of the document model and the freedom it gives to store embedded documents and arrays within a single document means that you can model data differently than you might in a relational database.

      The underlying concept for modeling data in a document-oriented database is to “store together what will be accessed together.” Digging further into the student example, say that most students at this school have more than one email address. Because of this, the university wants the ability to store multiple email addresses with each student’s contact information.

      In a case like this, an example document could have a structure like the following:

      {
          "_id": ObjectId("612d1e835ebee16872a109a4"),
          "first_name": "Sammy",
          "last_name": "Shark",
          "emails": [
              {
                  "email": "[email protected]",
                  "type": "work"
              },
              {
                  "email": "[email protected]",
                  "type": "home"
              }
          ]
      }
      

      Notice that this example document contains an embedded list of email addresses.

      Representing more than a single subject inside a single document characterizes a denormalized data model. It allows applications to retrieve and manipulate all the relevant data for a given object (here, a student) in one go, without needing to access multiple separate objects and collections. Doing so also guarantees the atomicity of operations on such a document, without having to use multi-document transactions to ensure integrity.

      Storing together what needs to be accessed together using embedded documents is often the optimal way to represent data in a document-oriented database. In the following guidelines, you’ll learn how different relationships between objects, such as one-to-one or one-to-many relationships, can be best modeled in a document-oriented database.

      Guideline 2 — Modeling One-to-One Relationships with Embedded Documents

      A one-to-one relationship represents an association between two distinct objects where one object is connected with exactly one of another kind.

      Continuing with the student example from the previous section, each student has only one valid student ID card at any given point in time. One card never belongs to multiple students, and no student can have multiple identification cards. If you were to store all this data in a relational database, it would likely make sense to model the relationship between students and their ID cards by storing the student records and the ID card records in separate tables that are tied together through references.

      One common method for representing such relationships in a document database is by using embedded documents. As an example, the following document describes a student named Sammy and their student ID card:

      {
          "_id": ObjectId("612d1e835ebee16872a109a4"),
          "first_name": "Sammy",
          "last_name": "Shark",
          "id_card": {
              "number": "123-1234-123",
              "issued_on": ISODate("2020-01-23"),
              "expires_on": ISODate("2020-01-23")
          }
      }
      

      Notice that instead of a single value, this example document’s id_card field holds an embedded document representing the student’s identification card, described by an ID number, the card’s date of issue, and the card’s expiration date. The identity card essentially becomes a part of the document describing the student Sammy, even though it’s a separate object in real life. Usually, structuring the document schema like this so that you can retrieve all related information through a single query is a sound choice.

      Things become less straightforward if you encounter relationships connecting one object of a kind with many objects of another type, such as a student’s email addresses, the courses they attend, or the messages they post on the student council’s message board. In the next few guidelines, you’ll use these data examples to learn different approaches for working with one-to-many and many-to-many relationships.

      Guideline 3 — Modeling One-to-Few Relationships with Embedded Documents

      When an object of one type is related to multiple objects of another type, it can be described as a one-to-many relationship. A student can have multiple email addresses, a car can have numerous parts, or a shopping order can consist of multiple items. Each of these examples represents a one-to-many relationship.

      While the most common way to represent a one-to-one relationship in a document database is through an embedded document, there are several ways to model one-to-many relationships in a document schema. When considering your options for how to best model these, though, there are three properties of the given relationship you should consider:

      • Cardinality: Cardinality is the measure of the number of individual elements in a given set. For example, if a class has 30 students, you could say that class has a cardinality of 30. In a one-to-many relationship, the cardinality can be different in each case. A student could have one email address or multiple. They could be registered for just a few classes or they could have a completely full schedule. In a one-to-many relationship, the size of “many” will affect how you might model the data.
      • Independent access: Some related data will rarely, if ever, be accessed separately from the main object. For example, it might be uncommon to retrieve a single student’s email address without other student details. On the other hand, a university’s courses might need to be accessed and updated individually, regardless of the student or students that are registered to attend them. Whether or not you will ever access a related document alone will also affect how you might model the data.
      • Whether the relationship between data is strictly a one-to-many relationship: Consider the courses an example student attends at a university. From the student’s perspective, they can participate in multiple courses. On the surface, this may seem like a one-to-many relationship. However, university courses are rarely attended by a single student; more often, multiple students will attend the same class. In cases like this, the relationship in question is not really a one-to-many relationship, but a many-to-many relationship, and thus you’d take a different approach to model this relationship than you would a one-to-many relationship.

      Imagine you’re deciding how to store student email addresses. Each student can have multiple email addresses, such as one for work, one for personal use, and one provided by the university. A document representing a single email address might take a form like this:

      {
          "email": "[email protected]",
          "type": "work"
      }
      

      In terms of cardinality, there will be only a few email addresses for each student, since it’s unlikely that a student will have dozens — let alone hundreds — of email addresses. Thus, this relationship can be characterized as a one-to-few relationship, which is a compelling reason to embed email addresses directly into the student document and store them together. You don’t run any risk that the list of email addresses will grow indefinitely, which would make the document big and inefficient to use.

      Note: Be aware that there are certain pitfalls associated with storing data in arrays. For instance, a single MongoDB document cannot exceed 16MB in size. While it is possible and common to embed multiple documents using array fields, if the list of objects grows uncontrollably the document could quickly reach this size limit. Additionally, storing a large amount of data inside embedded arrays can have a significant impact on query performance.

      Embedding multiple documents in an array field will likely be suitable in many situations, but know that it also may not always be the best solution.

      Regarding independent access, email addresses will likely not be accessed separately from the student. As such, there is no clear incentive to store them as separate documents in a separate collection. This is another compelling reason to embed them inside the student’s document.

      The last thing to consider is whether this relationship is really a one-to-many relationship instead of a many-to-many relationship. Because an email address belongs to a single person, it’s reasonable to describe this relationship as a one-to-many relationship (or, perhaps more accurately, a one-to-few relationship) instead of a many-to-many relationship.

      These three assumptions suggest that embedding students’ various email addresses within the same documents that describe students themselves would be a good choice for storing this kind of data. A sample student’s document with email addresses embedded might take this shape:

      {
          "_id": ObjectId("612d1e835ebee16872a109a4"),
          "first_name": "Sammy",
          "last_name": "Shark",
          "emails": [
              {
                  "email": "[email protected]",
                  "type": "work"
              },
              {
                  "email": "[email protected]",
                  "type": "home"
              }
          ]
      }
      

      Using this structure, every time you retrieve a student’s document you will also retrieve the embedded email addresses in the same read operation.

      If you model a relationship of the one-to-few variety, where the related documents do not need to be accessed independently, embedding documents directly like this is usually desirable, as this can reduce the complexity of the schema.
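      To illustrate the single-read property, the following standalone JavaScript sketch models the students collection as a plain in-memory array. The documents and the findOne-style helper are hypothetical stand-ins rather than actual MongoDB driver calls; the point is that one lookup returns the student together with the embedded email addresses:

```javascript
// In-memory stand-in for a students collection, illustrating that one
// lookup returns the embedded email addresses along with the student.
const students = [
  {
    _id: "612d1e835ebee16872a109a4",
    first_name: "Sammy",
    last_name: "Shark",
    emails: [
      { email: "sammy@work.example.com", type: "work" }, // placeholder address
      { email: "sammy@home.example.com", type: "home" }, // placeholder address
    ],
  },
];

// Analogous to db.students.findOne({ _id: ... }) in the mongo shell.
const findOne = (collection, id) => collection.find((d) => d._id === id);

const sammy = findOne(students, "612d1e835ebee16872a109a4");
console.log(sammy.emails.length); // 2 (no second query needed)
```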

      As mentioned previously, though, embedding documents like this isn’t always the optimal solution. The next section provides more details on why this might be the case in some scenarios, and outlines how to use child references as an alternative way to represent relationships in a document database.

      Guideline 4 — Modeling One-to-Many and Many-to-Many Relationships with Child References

      The nature of the relationship between students and their email addresses informed how that relationship could best be modeled in a document database. There are some differences between this and the relationship between students and the courses they attend, so the way you model the relationships between students and their courses will be different as well.

      A document describing a single course that a student attends could follow a structure like this:

      {
          "name": "Physics 101",
          "department": "Department of Physics",
          "points": 7
      }
      

      Say that you decided from the outset to use embedded documents to store information about each student's courses, as in this example:

      {
          "_id": ObjectId("612d1e835ebee16872a109a4"),
          "first_name": "Sammy",
          "last_name": "Shark",
          "emails": [
              {
                  "email": "[email protected]",
                  "type": "work"
              },
              {
                  "email": "[email protected]",
                  "type": "home"
              }
          ],
          "courses": [
              {
                  "name": "Physics 101",
                  "department": "Department of Physics",
                  "points": 7
              },
              {
                  "name": "Introduction to Cloud Computing",
                  "department": "Department of Computer Science",
                  "points": 4
              }
          ]
      }
      

      This would be a perfectly valid MongoDB document and could well serve the purpose, but consider the three relationship properties you learned about in the previous guideline.

      The first one is cardinality. A student will likely only maintain a few email addresses, but they can attend multiple courses during their study. After several years of attendance, there could be dozens of courses the student took part in. Plus, they’d attend these courses along with many other students who are likewise attending their own set of courses over their years of attendance.

      If you decided to embed each course like the previous example, the student’s document would quickly get unwieldy. With a higher cardinality, the embedded document approach becomes less compelling.

      The second consideration is independent access. Unlike email addresses, it’s sound to assume there would be cases in which information about university courses would need to be retrieved on their own. For instance, say someone needs information about available courses to prepare a marketing brochure. Additionally, courses will likely need to be updated over time: the professor teaching the course might change, its schedule may fluctuate, or its prerequisites might need to be updated.

      If you were to store the courses as documents embedded within student documents, retrieving the list of all the courses offered by the university would become troublesome. Also, each time a course needs an update, you would need to go through all student records and update the course information everywhere. Both are good reasons to store courses separately and not embed them fully.

      The third thing to consider is whether the relationship between student and a university course is actually one-to-many or instead many-to-many. In this case, it’s the latter, as more than one student can attend each course. This relationship’s cardinality and independent access aspects suggest against embedding each course document, primarily for practical reasons like ease of access and update. Considering the many-to-many nature of the relationship between courses and students, it might make sense to store course documents in a separate collection with unique identifiers of their own.

      The documents representing courses in this separate collection might have a structure like these examples:

      {
          "_id": ObjectId("61741c9cbc9ec583c836170a"),
          "name": "Physics 101",
          "department": "Department of Physics",
          "points": 7
      },
      {
          "_id": ObjectId("61741c9cbc9ec583c836170b"),
          "name": "Introduction to Cloud Computing",
          "department": "Department of Computer Science",
          "points": 4
      }
      

      If you decide to store course information like this, you’ll need to find a way to connect students with these courses so that you will know which students attend which courses. In cases like this where the number of related objects isn’t excessively large, especially with many-to-many relationships, one common way of doing this is to use child references.

      With child references, a student’s document will reference the object identifiers of the courses that the student attends in an embedded array, as in this example:

      {
          "_id": ObjectId("612d1e835ebee16872a109a4"),
          "first_name": "Sammy",
          "last_name": "Shark",
          "emails": [
              {
                  "email": "[email protected]",
                  "type": "work"
              },
              {
                  "email": "[email protected]",
                  "type": "home"
              }
          ],
          "courses": [
              ObjectId("61741c9cbc9ec583c836170a"),
              ObjectId("61741c9cbc9ec583c836170b")
          ]
      }
      

      Notice that this example document still has a courses field which also is an array, but instead of embedding full course documents like in the earlier example, only the identifiers referencing the course documents in the separate collection are embedded. Now, when retrieving a student document, courses will not be immediately available and will need to be queried separately. On the other hand, it’s immediately known which courses to retrieve. Also, in case any course’s details need to be updated, only the course document itself needs to be altered. All references between students and their courses will remain valid.
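      The two-step read pattern this implies can be sketched with plain in-memory arrays standing in for the two collections. The helper logic below is illustrative rather than actual MongoDB driver code; the second step mirrors a query such as db.courses.find({ _id: { $in: student.courses } }):

```javascript
// In-memory sketch of resolving child references: first read the student,
// then fetch the referenced course documents in a second query.
const courses = [
  { _id: "61741c9cbc9ec583c836170a", name: "Physics 101", points: 7 },
  { _id: "61741c9cbc9ec583c836170b", name: "Introduction to Cloud Computing", points: 4 },
];

const student = {
  _id: "612d1e835ebee16872a109a4",
  first_name: "Sammy",
  // Child references: only identifiers are embedded, not full documents
  courses: ["61741c9cbc9ec583c836170a", "61741c9cbc9ec583c836170b"],
};

// Analogous to db.courses.find({ _id: { $in: student.courses } }).
const attended = courses.filter((c) => student.courses.includes(c._id));

console.log(attended.map((c) => c.name));
// [ 'Physics 101', 'Introduction to Cloud Computing' ]
```

Updating a course's details touches only the courses collection; every student's references remain valid.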

      Note: There is no firm rule for when the cardinality of a relation is too great to embed child references in this manner. You might choose a different approach at either a lower or higher cardinality if it’s what best suits the application in question. After all, you will always want to structure your data to suit the manner in which your application queries and updates it.

      If you model a one-to-many relationship where the amount of related documents is within reasonable bounds and related documents need to be accessed independently, favor storing the related documents separately and embedding child references to connect to them.

      Now that you’ve learned how to use child references to signify relationships between different types of data, this guide will outline an inverse concept: parent references.

      Guideline 5 — Modeling Unbounded One-to-Many Relationships with Parent References

      Using child references works well when there are too many related objects to embed them directly inside the parent document, but the amount is still within known bounds. However, there are cases when the number of associated documents might be unbounded and will continue to grow with time.

      As an example, imagine that the university’s student council has a message board where any student can post whatever messages they want, including questions about courses, travel stories, job postings, study materials, or just a free chat. A sample message in this example consists of a subject and a message body:

      {
          "_id": ObjectId("61741c9cbc9ec583c836174c"),
          "subject": "Books on kinematics and dynamics",
          "message": "Hello! Could you recommend good introductory books covering the topics of kinematics and dynamics? Thanks!",
          "posted_on": ISODate("2021-07-23T16:03:21Z")
      }
      

      You could use either of the two approaches discussed previously — embedding and child references — to model this relationship. If you were to decide on embedding, the student’s document might take a shape like this:

      {
          "_id": ObjectId("612d1e835ebee16872a109a4"),
          "first_name": "Sammy",
          "last_name": "Shark",
          "emails": [
              {
                  "email": "[email protected]",
                  "type": "work"
              },
              {
                  "email": "[email protected]",
                  "type": "home"
              }
          ],
          "courses": [
              ObjectId("61741c9cbc9ec583c836170a"),
              ObjectId("61741c9cbc9ec583c836170b")
          ],
          "message_board_messages": [
              {
                  "subject": "Books on kinematics and dynamics",
                  "message": "Hello! Could you recommend good introductory books covering the topics of kinematics and dynamics? Thanks!",
                  "posted_on": ISODate("2021-07-23T16:03:21Z")
              },
              . . .
          ]
      }
      

      However, if a student is prolific with writing messages their document will quickly become incredibly long and could easily exceed the 16MB size limit, so the cardinality of this relation suggests against embedding. Additionally, the messages might need to be accessed separately from the student, as could be the case if the message board page is designed to show the latest messages posted by students. This also suggests that embedding is not the best choice for this scenario.

      Note: You should also consider whether the message board messages are frequently accessed when retrieving the student’s document. If not, having them all embedded inside that document would incur a performance penalty when retrieving and manipulating this document, even when the list of messages would not be used often. Infrequent access of related data is often another clue that you shouldn’t embed documents.

      Now consider using child references instead of embedding full documents as in the previous example. The individual messages would be stored in a separate collection, and the student’s document could then have the following structure:

      {
          "_id": ObjectId("612d1e835ebee16872a109a4"),
          "first_name": "Sammy",
          "last_name": "Shark",
          "emails": [
              {
                  "email": "[email protected]",
                  "type": "work"
              },
              {
                  "email": "[email protected]",
                  "type": "home"
              }
          ],
          "courses": [
              ObjectId("61741c9cbc9ec583c836170a"),
              ObjectId("61741c9cbc9ec583c836170b")
          ],
          "message_board_messages": [
              ObjectId("61741c9cbc9ec583c836174c"),
              . . .
          ]
      }
      

      In this example, the message_board_messages field now stores the child references to all messages written by Sammy. However, changing the approach solves only one of the issues mentioned before in that it would now be possible to access the messages independently. But although the student’s document size would grow more slowly using the child references approach, the collection of object identifiers could also become unwieldy given the unbounded cardinality of this relation. A student could easily write thousands of messages during their four years of study, after all.

      In such scenarios, a common way to connect one object to another is through parent references. Unlike the child references described previously, it’s now not the student document referring to individual messages, but rather a reference in the message’s document pointing towards the student that wrote it.

      To use parent references, you would need to modify the message document schema to contain a reference to the student who authored the message:

      {
          "_id": ObjectId("61741c9cbc9ec583c836174c"),
          "subject": "Books on kinematics and dynamics",
          "message": "Hello! Could you recommend a good introductory books covering the topics of kinematics and dynamics? Thanks!",
          "posted_on": ISODate("2021-07-23T16:03:21Z"),
          "posted_by": ObjectId("612d1e835ebee16872a109a4")
      }
      

      Notice the new posted_by field contains the object identifier of the student’s document. Now, the student’s document won’t contain any information about the messages they’ve posted:

      {
          "_id": ObjectId("612d1e835ebee16872a109a4"),
          "first_name": "Sammy",
          "last_name": "Shark",
          "emails": [
              {
                  "email": "[email protected]",
                  "type": "work"
              },
              {
                  "email": "[email protected]",
                  "type": "home"
              }
          ],
          "courses": [
              ObjectId("61741c9cbc9ec583c836170a"),
              ObjectId("61741c9cbc9ec583c836170b")
          ]
      }
      

      To retrieve the list of messages written by a student, you would use a query on the messages collection and filter against the posted_by field. Having them in a separate collection makes it safe to let the list of messages grow without affecting any of the student’s documents.
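      This query pattern can be sketched with an in-memory array standing in for the messages collection. The documents and the helper below are illustrative stand-ins, not actual MongoDB driver code; the helper mirrors a query such as db.messages.find({ posted_by: studentId }):

```javascript
// In-memory sketch of a parent-reference lookup: messages live in their
// own collection, and each one points back at its author via posted_by.
const messages = [
  { _id: "m1", subject: "Books on kinematics and dynamics", posted_by: "sammy-id" },
  { _id: "m2", subject: "Study group this weekend?", posted_by: "sammy-id" },
  { _id: "m3", subject: "Selling a used bike", posted_by: "other-id" },
];

// Analogous to db.messages.find({ posted_by: studentId }).
const messagesBy = (studentId) =>
  messages.filter((m) => m.posted_by === studentId);

console.log(messagesBy("sammy-id").length); // 2
```

The messages collection can grow without bound while every student document stays the same size.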

      Note: When using parent references, creating an index on the field referencing the parent document can significantly increase the query performance each time you filter against the parent document identifier.

      If you model a one-to-many relationship where the amount of related documents is unbounded, regardless of whether the documents need to be accessed independently, it’s generally advised that you store related documents separately and use parent references to connect them to the parent document.

      Conclusion

      Thanks to the flexibility of document-oriented databases, determining the best way to model relationships in a document database is less of an exact science than it is in a relational database. By reading this article, you've acquainted yourself with embedding documents and with using child and parent references to store related data. You've learned to consider the relationship's cardinality, to avoid unbounded arrays, and to take into account whether a document will be accessed separately or frequently.

      These are just a few guidelines that can help you model typical relationships in MongoDB, but schema design is not one-size-fits-all. Always take into account your application and how it uses and updates the data when designing the schema.

      To learn more about schema design and common patterns for storing different kinds of data in MongoDB, we encourage you to check the official MongoDB documentation on that topic.




      How To Build a Telegram Quotes Generator Bot With Node.js, Telegraf, Jimp, and Pexels


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      In this tutorial, you will use Node.js, telegraf, jimp, and the Pexels API to build a Telegram chatbot that will send you a randomly selected image with a fact overlaid on it. A Telegram bot is a bot you can interact with using custom slash commands through your preferred Telegram client. You will create the bot through Telegram and define its logic in JavaScript to select a random animal image and a fact about that animal.

      At the end of this tutorial you will have a Telegram chatbot that looks like the following:

      [Screenshot: an example conversation with the finished bot]

      Once you’ve completed your bot, you will receive a fact about an animal whenever you send a custom Telegram slash command.

      Prerequisites

      To follow this tutorial, you will need the following tools:

      This tutorial was verified with Node v12.18.2 and npm v6.14.8.

      Step 1 — Creating the Project Root Directory

      In this section, you will create the directory where you will build the chatbot, create a Node project, and install the required dependencies.

      Open a terminal window and create a new directory called facts-bot:

      Navigate into the directory:

      Create a directory named temp:

      With the command above, you created a directory named temp. In this directory, you will temporarily store the images that your bot will send to the user.

      Now, you’ll create a new Node.js project. Running npm’s init command will create a package.json file, which will manage your dependencies and metadata.

      Run the initialization command:

      To accept the default values, press ENTER through all the prompts. Alternatively, you can personalize your responses; to do this, review npm's initialization settings in Step 1 of the tutorial How To Use Node.js Modules with npm and package.json.

      Open the package.json file and edit it:

      Now, you’ll update the properties in your package.json file. Replace the contents inside the file with the highlighted code:

      package.json

      {
        "name": "facts-bot",
        "version": "1.0.0",
        "description": "",
        "main": "main.js",
        "scripts": {
          "start": "nodemon main.js"
        },
        "author": "",
        "license": "ISC"
      }
      

      Here you changed the main and scripts properties. By changing the main property, you set the application's main file to main.js. This informs Node that the main.js file is the primary entry point to your program. In the scripts property, you added a script named start, which sets the command that runs when you start the application. When you call this script, nodemon will run the main.js file that you will create in the next step.

      With your settings now defined in your package.json file, you will now create a file that will store your environment variables. In your terminal, create a file named .env:

      touch .env
      

      In your .env file, you will store your Telegram bot token and Pexels API key. A Telegram Bot token allows you to interact with your Telegram bot. The Pexels API key allows you to interact with the Pexels API. You will store your environment variables in a later step.

      This time, you’ll use npm to install the dependencies telegraf, dotenv, pexels, jimp, and uuid. You’ll also use the --save flag to save the dependencies. In your terminal, run the following command:

      • npm install telegraf dotenv pexels jimp uuid --save

      In this command, you have installed:

      • telegraf: a library that helps you develop your own Telegram bots using JavaScript or TypeScript. You are going to use it to build your bot.
      • dotenv: a zero-dependency module that loads environment variables from a .env file into process.env. You are going to use this module to retrieve the bot token and Pexels API key from the .env file you created.
      • pexels: a convenient wrapper around the Pexels API that can be used both on the server in Node.js and the browser. You are going to use this module to retrieve animal images from Pexels.
      • jimp: an image processing library written entirely in JavaScript for Node, with zero external or native dependencies. You are going to use this library to edit images retrieved from Pexels and insert a fact about an animal in them.
      • uuid: a module that allows you to generate RFC-compliant UUIDs in JavaScript. You are going to use this module to create a unique name for the image retrieved from Pexels.

      Now, install nodemon as a dev dependency:

      • npm install nodemon --save-dev

      nodemon is a tool that helps develop Node.js applications by automatically restarting the Node application when it detects file changes in the directory. You will use this module to start and keep your app running as you test your bot.

      Note: At the time of writing, these are the versions of the modules in use: telegraf 4.3.0, dotenv 8.2.0, pexels 1.2.1, jimp 0.16.1, uuid 8.3.2, and nodemon 2.0.12.

      In this step, you created a project directory and initialized a Node.js project for your bot. You also installed the modules needed to build the bot. In the next step, you will register a bot in Telegram and retrieve an API key for the Pexels API.

      Step 2 — Registering Your Bot and Retrieving an API Key From the Pexels API

      In this section, you will first register a bot with BotFather, then retrieve an API key for the Pexels API. BotFather is a chatbot managed by Telegram that allows users to create and manage chatbots.

      Open your preferred Telegram client, search for @BotFather, and start the chat. Send the /newbot slash command and follow the instructions sent by the BotFather:

      [Screenshot: creating a new bot with @BotFather]

      After choosing your bot name and username you will receive a message containing your bot access token:

      [Screenshot: BotFather's message containing the bot access token]

      Copy the bot token, and open your .env file:

      Save the Bot token in a variable named BOT_TOKEN:

      .env

      BOT_TOKEN = "Your bot token"
      

      Now that you have saved your bot token in the .env file, it’s time to retrieve the Pexels API key.

      Navigate to Pexels, and log in to your Pexels account. Click on the Image & Video API tab and create a new API key:

      [Screenshot: creating a new API key on the Pexels Image & Video API page]

      Copy the API key, and open your .env file:

      Save the API key in a variable named PEXELS_API_KEY. Your .env should look like the following:

      .env

      BOT_TOKEN = "Your_bot_token"
      PEXELS_API_KEY = "Your_Pexels_API_key"
      

      In this section, you registered your bot, retrieved your Pexels API key, and saved both values in your .env file. In the next section, you are going to create the file responsible for running the bot.

      Step 3 — Creating the main.js File

      In this section, you will create and build out your bot. You will create a file named main.js, which will contain your bot's logic.

      In the root directory of your project, create and open the main.js file using your preferred text editor:

      Within the main.js file, add the following code to import the libraries you’ll use:

      main.js

      const { Telegraf } = require('telegraf')
      const { v4: uuidV4 } = require('uuid')
      require('dotenv').config()
      let factGenerator = require('./factGenerator')
      

      In this code block, you required in the telegraf, uuid, and dotenv modules, as well as a file named factGenerator.js. You are going to use the telegraf module to start and manage the bot, the uuid module to generate a unique file name for each image, and the dotenv module to retrieve the Telegram bot token and Pexels API key stored in the .env file. The factGenerator.js file will be used to retrieve a random animal image from Pexels, insert a fact about the animal into it, and delete the image after it's sent to the user. You will create this file in the next section.

      Below the require statements, add the following code to create an instance of the bot:

      main.js

      . . .
      
      const bot = new Telegraf(process.env.BOT_TOKEN)
      
      bot.start((ctx) => {
          let message = ` Please use the /fact command to receive a new fact`
          ctx.reply(message)
      })
      

      Here, you retrieved and used the BOT_TOKEN that BotFather sent, created a new bot instance, and assigned it to a variable called bot. After creating a new bot instance, you added a command listener for the /start command. This command is responsible for initiating a conversation between a user and the bot. Once a user sends a message containing /start the bot replies with a message asking the user to use the /fact command to receive a new fact.

      You have now created the command handler responsible for starting the interaction with your chatbot. Now, let’s create the command handler for generating a fact. Below the .start() command, add the following code:

      main.js

      . . .
      
      bot.command('fact', async (ctx) => {
          try {
              ctx.reply('Generating image, Please wait !!!')
              let imagePath = `./temp/${uuidV4()}.jpg`
              await factGenerator.generateImage(imagePath)
              await ctx.replyWithPhoto({ source: imagePath })
              factGenerator.deleteImage(imagePath)
          } catch (error) {
              console.log('error', error)
              ctx.reply('error sending image')
          }
      })
      
      bot.launch()
      

      In this code block, you created a command listener for the custom /fact slash command. Once this command is triggered from the Telegram user interface, the bot sends a message to the user. The uuid module is used to generate the image name and path. The image will be stored in the /temp directory that you created in Step 1. Afterwards, the image path is passed to a method named generateImage(), which you'll define in the factGenerator.js file, to generate an image containing a fact about an animal. Once the image is generated, it is sent to the user. Then, the image path is passed to a method named deleteImage() in the factGenerator.js file to delete the image. Lastly, you launched your bot by calling the bot.launch() method.

      The main.js file will look like the following:

      main.js

      const { Telegraf } = require('telegraf')
      const { v4: uuidV4 } = require('uuid')
      require('dotenv').config()
      let factGenerator = require('./factGenerator')
      
      
      const bot = new Telegraf(process.env.BOT_TOKEN)
      
      bot.start((ctx) => {
          let message = ` Please use the /fact command to receive a new fact`
          ctx.reply(message)
      })
      
      
      bot.command('fact', async (ctx) => {
          try {
              ctx.reply('Generating image, Please wait !!!')
              let imagePath = `./temp/${uuidV4()}.jpg`
              await factGenerator.generateImage(imagePath)
              await ctx.replyWithPhoto({ source: imagePath })
              factGenerator.deleteImage(imagePath)
          } catch (error) {
              console.log('error', error)
              ctx.reply('error sending image')
          }
      });
      
      
      bot.launch()
      

      You have created the file responsible for running and managing your bot. You will now set facts for the animal and build out the bot’s logic in the factGenerator.js file.

      Step 4 — Creating the Fact Generator File and Building the Bot Logic

      In this section, you will create files named facts.js and factGenerator.js. facts.js will store facts about animals in one data source. The factGenerator.js file will contain the code needed to retrieve a random fact about an animal from that file, retrieve an image from Pexels, use jimp to write the fact onto the retrieved image, and delete the image.

      In the root directory of your project, create and open the facts.js file using your preferred text editor:

      Within the facts.js file add the following code to create your data source:

      facts.js

      const facts = [
          {
              fact: "Mother pandas keep contact with their cub nearly 100% of the time during their first month - with the cub resting on her front and remaining covered by her paw, arm or head.",
              animal: "Panda"
          },
          {
              fact: "The elephant's temporal lobe (the area of the brain associated with memory) is larger and denser than that of people - hence the saying 'elephants never forget'.",
              animal: "Elephant"
          },
          {
              fact: "On average, males weigh 190kg and females weigh 126kg . They need this weight and power behind them to hunt large prey and defend their pride.  ",
              animal: "Lion"
          },
          {
              fact: "The Amazon river is home to four species of river dolphin that are found nowhere else on Earth. ",
              animal: "Dolphin"
          },
      ]
      
      module.exports = { facts }
      

      In this code block, you defined an array of objects containing facts about animals and stored it in a variable named facts. Each object has two properties: fact and animal. The fact property holds a fact about an animal, while the animal property stores the name of the animal. Lastly, you exported the facts array.

      Now, create a file named factGenerator.js:

      Inside the factGenerator.js file, add the following code to require in the dependencies you’ll use to build out the logic to make your animal image:

      factGenerator.js

      let { createClient } = require('pexels')
      let Jimp = require('jimp')
      const fs = require('fs')
      let { facts } = require('./facts')
      

      Here, you required in the pexels, jimp, and fs modules, as well as your facts.js file. You will use the pexels module to retrieve animal images from Pexels, the jimp module to edit the image retrieved from Pexels, and the fs module to delete the image from your file directory after it's sent to the user.

      Below the require statements, add the following code to generate an image:

      factGenerator.js

      . . .
      
      async function generateImage(imagePath) {
        let fact = randomFact()
        let photo = await getRandomImage(fact.animal)
        await editImage(photo, imagePath, fact.fact)
      }
      

      In this code block, you created a function named generateImage(). This function takes as an argument the path where the generated image will be saved. When this function is called, a function named randomFact() selects a random object from the facts.js file, and the value returned is stored in a variable named fact. The object's animal property is then passed to a function named getRandomImage(), which will use the pexels module to search for images of that animal and select one at random; the value returned is stored in a variable named photo. Finally, the photo, the imagePath, and the object's fact property are passed to a function named editImage(). The editImage() function uses the jimp module to insert the random fact into the random image and save the edited image to the imagePath.

      Here, you have created the function that is called when you send the /fact slash command to the bot. Now you’ll create the functions getRandomImage() and editImage() and construct the logic behind selecting and editing a random image.

      Below the generateImage() function, add the following code to set the randomization logic:

      factGenerator.js

      . . .
      
      function randomFact() {
        let fact = facts[randomInteger(0, (facts.length - 1))]
        return fact
      }
      
      
      function randomInteger(min, max) {
        return Math.floor(Math.random() * (max - min + 1)) + min;
      }
      

      You have now created the functions randomFact() and randomInteger(). The randomFact() function selects a random fact from the facts.js file by using randomInteger() to generate a valid array index, and returns the selected object. The randomInteger() function returns a random integer between min and max inclusive; here it is called with 0 and facts.length - 1.
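      As a quick sanity check, the randomization logic can be exercised on its own. The following standalone sketch repeats the randomInteger() function from above and samples it many times to show that every result is an integer within the requested bounds:

```javascript
// randomInteger() as defined in factGenerator.js
function randomInteger(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

// Draw many samples and confirm each one is an integer within [0, 4].
const samples = Array.from({ length: 1000 }, () => randomInteger(0, 4))
const inRange = samples.every(n => Number.isInteger(n) && n >= 0 && n <= 4)
console.log(inRange) // true
```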

      Now that you’ve defined functions to return a random fact and random integer, you’ll need to create a function to get a random image from Pexels. Below the randomInteger() function, add the following code to get a random image:

      factGenerator.js

      . . .
      
      async function getRandomImage(animal) {
        try {
          const client = createClient(process.env.PEXELS_API_KEY)
          const query = animal
          let image
      
          await client.photos.search({ query, per_page: 10 }).then(res => {
            let images = res.photos
            image = images[randomInteger(0, (images.length - 1))]
      
          })
      
          return image
      
        } catch (error) {
          console.log('error downloading image', error)
          return getRandomImage(animal)
        }
      }
      

      In this code block, you have created a function named getRandomImage(). This function takes as an argument an animal name. When this function is called, a client object is created using the createClient() method from the pexels module and the Pexels API key stored in the .env file. The animal name is stored in a variable called query, then the client object is used to search for images matching the query. Once the images are found, a random image is selected with the help of the randomInteger() function. Finally, the random image is returned to the calling generateImage() function. If an error occurs, the catch block logs the error and retries the search.
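      To illustrate the selection step without calling the Pexels API, the following sketch applies the same random-pick logic to a hypothetical response object shaped like the result of client.photos.search() (the ids and URLs here are placeholders, not real Pexels data):

```javascript
function randomInteger(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

// Hypothetical response shaped like the Pexels photos.search() result;
// the ids and URLs are placeholders for illustration only.
const res = {
  photos: [
    { id: 1, src: { medium: 'https://example.com/cat-1.jpg' } },
    { id: 2, src: { medium: 'https://example.com/cat-2.jpg' } },
    { id: 3, src: { medium: 'https://example.com/cat-3.jpg' } }
  ]
}

// The same selection logic used inside getRandomImage():
const images = res.photos
const image = images[randomInteger(0, images.length - 1)]
console.log(image.src.medium)
```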

      With your getRandomImage() function in place, the image selected needs to have a text overlay before it’s sent to your Telegram bot. Below the getRandomImage() function, add the following code to set the overlay:

      factGenerator.js

      . . .
      
      async function editImage(image, imagePath, fact) {
        try {
          let imgURL = image.src.medium
          let animalImage = await Jimp.read(imgURL).catch(error => console.log('error ', error))
          let animalImageWidth = animalImage.bitmap.width
          let animalImageHeight = animalImage.bitmap.height
          let imgDarkener = await new Jimp(animalImageWidth, animalImageHeight, '#000000')
          imgDarkener = await imgDarkener.opacity(0.5)
          animalImage = await animalImage.composite(imgDarkener, 0, 0);
      
      
        } catch (error) {
          console.log("error editing image", error)
        } 
      
      }
      
      

      Here, you created a function named editImage(). This function takes as arguments a random animal image, the imagePath, and a fact about the animal. In the variable imgURL, the URL for the medium size of the image is retrieved from the Pexels API response. Afterwards, the read() method of jimp is used to load the image. Once the image is loaded and stored in a variable named animalImage, its width and height are retrieved and stored in the variables animalImageWidth and animalImageHeight respectively. The variable imgDarkener stores a new instance of Jimp: a solid black image with the same dimensions as animalImage. The opacity() method of jimp is used to set imgDarkener's opacity to 50%. Finally, the composite() method of jimp is used to layer imgDarkener over the image in animalImage. This in turn darkens the image in animalImage so that the text stored in the fact variable, once added, will be visible over the image.

      Note: Jimp by default provides a method named color() that allows you to adjust an image’s tonal levels. For the purpose of this tutorial, you’ll write a custom tonal adjuster as the color() method does not offer the precision necessary here.

      At the bottom of the try block inside the editImage() function, add the following code:

      factGenerator.js

      . . .
      
      async function editImage(image, imagePath, fact) {
        try {
          . . .
      
          let posX = animalImageWidth / 15
          let posY = animalImageHeight / 15
          let maxWidth = animalImageWidth - (posX * 2)
          let maxHeight = animalImageHeight - posY
      
          let font = await Jimp.loadFont(Jimp.FONT_SANS_16_WHITE)
          await animalImage.print(font, posX, posY, {
            text: fact,
            alignmentX: Jimp.HORIZONTAL_ALIGN_CENTER,
            alignmentY: Jimp.VERTICAL_ALIGN_MIDDLE
          }, maxWidth, maxHeight)
      
          await animalImage.writeAsync(imagePath)
          console.log("Image generated successfully")
      
      
        } catch (error) {
          . . .
        }
      }
      
      

      In this code block, you used animalImageWidth and animalImageHeight to compute the values that position and wrap the text in animalImage. Next, you used the loadFont() method of jimp to load the font and store it in a variable named font. The font color is white, the type is sans-serif (SANS), and the size is 16. Finally, you used the print() method of jimp to print the fact on animalImage, and the writeAsync() method to save animalImage to the imagePath.
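      To see how those margin values work out, here is the same arithmetic applied to a hypothetical 300 by 150 pixel image:

```javascript
// Hypothetical dimensions, standing in for animalImage.bitmap values.
const animalImageWidth = 300
const animalImageHeight = 150

const posX = animalImageWidth / 15             // 20: left margin for the text
const posY = animalImageHeight / 15            // 10: top margin for the text
const maxWidth = animalImageWidth - (posX * 2) // 260: text wraps inside equal side margins
const maxHeight = animalImageHeight - posY     // 140: text stays above the bottom edge
console.log(posX, posY, maxWidth, maxHeight) // 20 10 260 140
```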

      Now that you’ve created the function responsible for editing the image, you’ll need a function to delete the image from your file structure after it is sent to the user. Below your editImage() function, add the following code:

      factGenerator.js

      . . .
      
      const deleteImage = (imagePath) => {
          fs.unlink(imagePath, (err) => {
              if (err) {
                  return
              }
              console.log('file deleted')
          })
      }
      
      
      module.exports = { generateImage, deleteImage }
      

      Here, you have created a function named deleteImage(). This function takes as an argument the imagePath. When this function is called, the unlink() method of the fs module deletes the file stored at imagePath. Lastly, you exported the generateImage() and deleteImage() functions.

      With your functions in place, the factGenerator.js file will look like the following:

      factGenerator.js

      let { createClient } = require('pexels')
      let Jimp = require('jimp')
      const fs = require('fs')
      let { facts } = require('./facts')
      
      async function generateImage(imagePath) {
        let fact = randomFact()
        let photo = await getRandomImage(fact.animal)
        await editImage(photo, imagePath, fact.fact)
      }
      
      
      function randomFact() {
        let fact = facts[randomInteger(0, (facts.length - 1))]
        return fact
      }
      
      
      function randomInteger(min, max) {
        return Math.floor(Math.random() * (max - min + 1)) + min;
      }
      
      
      async function getRandomImage(animal) {
        try {
          const client = createClient(process.env.PEXELS_API_KEY)
          const query = animal
          let image
      
          await client.photos.search({ query, per_page: 10 }).then(res => {
            let images = res.photos
            image = images[randomInteger(0, (images.length - 1))]
      
          })
      
          return image
      
        } catch (error) {
          console.log('error downloading image', error)
          return getRandomImage(animal)
        }
      }
      
      
      async function editImage(image, imagePath, fact) {
        try {
          let imgURL = image.src.medium
          let animalImage = await Jimp.read(imgURL).catch(error => console.log('error ', error))
          let animalImageWidth = animalImage.bitmap.width
          let animalImageHeight = animalImage.bitmap.height
          let imgDarkener = await new Jimp(animalImageWidth, animalImageHeight, '#000000')
          imgDarkener = await imgDarkener.opacity(0.5)
          animalImage = await animalImage.composite(imgDarkener, 0, 0);
      
          let posX = animalImageWidth / 15
          let posY = animalImageHeight / 15
          let maxWidth = animalImageWidth - (posX * 2)
          let maxHeight = animalImageHeight - posY
      
          let font = await Jimp.loadFont(Jimp.FONT_SANS_16_WHITE)
          await animalImage.print(font, posX, posY, {
            text: fact,
            alignmentX: Jimp.HORIZONTAL_ALIGN_CENTER,
            alignmentY: Jimp.VERTICAL_ALIGN_MIDDLE
          }, maxWidth, maxHeight)
      
          await animalImage.writeAsync(imagePath)
          console.log("Image generated successfully")
      
        } catch (error) {
          console.log("error editing image", error)
        }
      
      }
      
      
      const deleteImage = (imagePath) => {
        fs.unlink(imagePath, (err) => {
          if (err) {
            return
          }
          console.log('file deleted')
        })
      }
      
      
      module.exports = { generateImage, deleteImage }
      
      

      Save your factGenerator.js file. Return to your terminal, and run the following command to start your bot:

      Open your preferred Telegram client, and search for your bot. Send a message with the /start command to initiate the conversation, or click the Start button. Then, send a message with the /fact command to receive your image.

      You will receive an image similar to the following:

      [Image of a random animal with a fact printed over it]

      You will now see the image in your preferred Telegram client with a fact superimposed over it. You've created the file and functions responsible for retrieving a random fact from the facts.js file, retrieving an animal image from Pexels, and printing the fact onto the image.

      Conclusion

      In this tutorial, you built a Telegram chatbot that sends an image of an animal with a fact overlaid on it in response to a custom slash command. You created the command handlers for the bot with the telegraf module. You also created functions responsible for retrieving a random fact, retrieving random images from Pexels using the pexels module, and printing a fact over the random image using the jimp module. For more information, refer to the documentation for the Pexels API, telegraf, and jimp.


