

      An Introduction to the OSI Networking Model

      Computer networking is a complicated subject, with many interconnected layers and interactions. To help developers and engineers understand how the various networking components work together, several conceptual models have been developed. The
      Open Systems Interconnection (OSI) Model is a popular model that divides the networking stack into seven layers. This guide explains the OSI Model and describes each layer. It also lists the tools available for each layer and contrasts the OSI Model with the competing
      Internet Protocol suite.

      What is the OSI Model?

      The OSI Model provides a method for understanding how end-to-end internet communications work. It deconstructs the networking process into seven layers, each representing a different step of the transmission chain. Each layer has its own function and is responsible for well-defined tasks. Most user data passes through each layer upon both ingress and egress.

      The OSI model was originally developed in the 1970s and 1980s under the oversight of the International Organization for Standardization (ISO). It is formalized in the ITU-T series of
      X.200 recommendations. The model is mainly conceptual in nature and models the network at a high level of abstraction. It is designed to encourage a shared consensus of network standards and interoperability. While it has never been fully applied, it has gained popularity as a good educational model.

      The OSI Model originally included a number of network protocols to implement each of the different layers. However, these protocols were determined to be too complex and difficult to implement. They also involved too drastic of a change to established practices. Therefore, they were never adopted, and protocols from the Internet Protocol (IP) suite were used instead. The standard network protocols in use today show more complexity and do not perfectly align with the OSI model.

      The seven layers are numbered from lowest to highest. The highest layer is closest to the user applications, while the lowest relates to physical transmission. User data passes sequentially from the highest layer down through the lower layers until the device transmits it externally.

      The OSI model encourages a strict encapsulation model. Data from a higher level becomes part of the lower layer message. The data packet received from the higher layer is known as a service data unit (SDU). The lower layer prepends a header to the SDU. In some cases, it might also append a footer. The header and footer contain information intended for the peer layer of the receiving device. After the additional information is concatenated to the original packet from the higher layer, the message is called a Protocol Data Unit (PDU). The PDU is designed to be processed at the same layer on the destination node. This continues until the data reaches the physical layer. At this point, it is converted to a bitstream and physically transmitted to the receiver.
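
      The encapsulation chain described above can be sketched with mock headers. The following is an illustrative sketch only; the header names are invented placeholders, not real protocol formats:

```shell
# Each layer treats the message from above as its SDU, prepends a header
# (and possibly appends a trailer), and hands the resulting PDU downward.
payload="user-data"                 # application data, the initial SDU
segment="TCPHDR|$payload"           # transport layer prepends its header
packet="IPHDR|$segment"             # network layer prepends its header
frame="ETHHDR|$packet|FCS"          # data link layer adds a header and a trailer
echo "$frame"                       # ETHHDR|IPHDR|TCPHDR|user-data|FCS
```

      The receiving device reverses the process, stripping one layer of encapsulation at each step.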

      On the incoming side, the order is reversed. Traffic is first received at the physical layer. It then passes upward one layer at a time. At each layer, the receiving layer reviews the information in the header and removes the encapsulating material. If necessary, the packet is then passed to the higher layer. This process continues until the packet is completely consumed.

      The OSI Layer Architecture

      The seven layers, from lowest to highest, are listed below. Each layer is described in a separate section later in this guide.

      1. Physical Layer
      2. Data Link Layer
      3. Network Layer
      4. Transport Layer
      5. Session Layer
      6. Presentation Layer
      7. Application Layer

      The mnemonic “All People Seem To Need Data Processing” can be used to remember the layers from highest to lowest. Not all data flows begin at the application layer. Lower layers negotiate automatically after they are configured, even if they are not serving any higher-layer application. Additionally, packets might only be partially processed by intermediate devices. For example, a core router examines packets at the network layer. It then forwards the packet, sending it back down to the data link and physical layers for transmission.

      Each of the seven layers within the OSI model has its own set of responsibilities. The layers are numbered from the lowest layer, the physical layer, to the high-level application layer. Egress data passes from the higher layers to the lower layers, while ingress data passes from the lowest layer up to the upper layers.

      Layer 1: The Physical Layer

      The lowest layer is responsible for transmitting data to another device using some type of physical medium. It handles characteristics of the physical connection between nodes. All networked devices, from high-end network routers, mobile phones, and laptops, down to simple repeaters, transmit packets using the physical layer. Therefore all devices must use physical layer technologies to communicate with other devices. The physical layer converts data packets into a signal representing a stream of bits.

      This signal can be transmitted using a variety of techniques, including electrical, optical, and wireless encoding. Some examples of physical layer technologies include Wi-Fi, Ethernet, USB, and SONET/SDH. This layer is usually implemented in hardware through a chip, rather than in software. Physical layer standards usually include hardware specifications for the pin layout, cable attributes, and data encoding. However, some attributes might be software controlled, including duplex mode and framing.

      Physical layer protocols are responsible for implementing the following functionality:

      • Voltage levels
      • Physical data rates
      • Physical connector specifications
      • Maximum transmission length
      • Modulation or channel access
      • Framing and bit stuffing
      • Signal timing and frequency
      • Transmission mode/duplex
      • Auto-negotiation

      Many transmission standards specify details for both the physical and data link layers. The Ethernet standard is a good example.

      Physical Layer Tools

      In a lab environment, a multimeter or oscilloscope can verify signal quality and compliance. In a real-world setting, there is often no practical way to debug physical layer problems. A trial-and-error process of swapping out cables, connectors, and physical ports is often required. If a cable is flaky or defective, throw it out and use another.

      Layer 2: The Data Link Layer

      The data link layer is responsible for transferring data between two nodes that are either directly connected or lie within the same network. To send data to a different network, network layer functionality is required. Layer two protocols can often correct physical layer errors using error correction algorithms. At the data link layer, data is transported inside a frame. A network switch is an example of a data link layer device.

      Layer two specifications explain how to establish a connection and transmit data to another node. The Institute of Electrical and Electronics Engineers (IEEE) organization defines many of the data link specifications under the
      IEEE 802 family of standards. Some of these standards include Ethernet, Wireless LAN, Bluetooth, and Radio, while non-802 standards include the Point-to-Point Protocol (PPP) and Frame Relay. Unlike IP addresses, layer two addresses occupy a flat addressing space. This means the addresses are not hierarchical or routable.

      The IEEE 802 specifications can be further subdivided into two sub-layers, each with its own responsibilities.

      • Logical Link Control (LLC): This is the higher of the two layers. It acts as an interface between the network layer and the MAC layer. It encapsulates higher-layer protocols, and handles flow control, multiplexing, and error detection. However, some of those functions might also be handled at higher layers.
      • Medium Access Control (MAC): The MAC layer is closely entwined with the physical layer. The MAC layer controls network access, frame synchronization, byte/bit stuffing, and link addressing. It encapsulates data from the LLC layer into the appropriate format for the link layer protocol. It also adds and removes a frame checksum to help identify erroneous frames and implements collision detection.
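
      The frame checksum mentioned above can be pictured by analogy using the cksum utility. This is only a sketch; real MAC layers append a hardware-computed CRC-32 frame check sequence rather than a cksum value:

```shell
# The receiver recomputes the checksum over the frame; if it differs from
# the value the sender appended, the frame is flagged as erroneous.
fcs_tx=$(printf 'frame-payload' | cksum | cut -d' ' -f1)   # sender computes the FCS
fcs_rx=$(printf 'frame-pXyload' | cksum | cut -d' ' -f1)   # receiver checksums a corrupted copy
if [ "$fcs_tx" != "$fcs_rx" ]; then
    echo "checksum mismatch: frame discarded"
fi
```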

      Data Link Layer Tools

      For a complete analysis, a packet capture tool such as
      Wireshark can capture and analyze the frames. However, many Linux commands allow users to examine interface statistics, including packet counts and errors. The ip link command displays information about the network interfaces on the server. The command output includes the state, MTU, and MAC address of each link. See the
      Ubuntu ip command man page for more information.

      ip link

      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
          link/ether f2:3c:93:15:ce:03 brd ff:ff:ff:ff:ff:ff

      The nast utility is a packet sniffer for use in analyzing LAN traffic. It is not pre-installed, so users must install it using apt:

      sudo apt install nast

      Run the command with sudo privileges and terminate it using the Ctrl+C combination. Specify the interface to listen on using the -i option. The
      Ubuntu nast man page includes more details.

      sudo nast -i eth0

      Sniffing on:
      - Device:	eth0
      - MAC address:	F2:3C:93:15:CE:03
      - IP address:
      - Netmask:
      - Promisc mode:	Set
      - Filter:	None
      - Logging:	None
      ---[ TCP ]----------------------------------------------------------- ->
      TTL: 64 	Window: 501	Version: 4	Length: 112
      FLAGS: ---PA--	SEQ: 855325394 - ACK: 3719741052
      Packet Number: 1
      ---[ TCP ]----------------------------------------------------------- ->
      TTL: 64 	Window: 501	Version: 4	Length: 124
      FLAGS: ---PA--	SEQ: 855325454 - ACK: 3719741052
      Packet Number: 2
      Packets Received:		35805
      Packets Dropped by kernel:	14803

      To list the configuration and capabilities of each network interface, use the ip netconf command:

      ip netconf

      inet lo forwarding off rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
      inet eth0 forwarding off rp_filter loose mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
      inet all forwarding off rp_filter loose mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
      inet default forwarding off rp_filter loose mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
      inet6 lo forwarding off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
      inet6 eth0 forwarding off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
      inet6 all forwarding off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
      inet6 default forwarding off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off

      Layer 3: The Network Layer

      The network layer lies at the heart of the OSI network stack. It is responsible for addressing packets and routing them across the internet. Layer three data units are known as packets. The network layer allows packets to flow across non-adjacent networks. Most routers are network layer devices, although some also implement higher layer functions.

      Layer three protocols use the packet destination address to determine the best egress interface for the data. Before reaching its destination, a packet might be routed through many nodes. A path consists of all the routers a packet must pass through to reach a specific destination. Each network device a packet transits through is known as a hop. At each hop, the network layer processes the packet. If the packet has reached its final destination, the data is sent to the transport layer. Otherwise, the packet receives a new header and footer and is sent back to the data link layer for forwarding to the next hop.

      The network layer is responsible for breaking down packets that are too large for the lower layer links into smaller pieces. This process is called fragmentation. At the destination end, the network layer reassembles the fragments back into the original packet. Protocols at the network layer are not required to be reliable, although some protocols might report and retransmit missing packets. Network layer protocols are generally connectionless. Connections and sessions are managed by the higher layers.
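
      Fragmentation and reassembly can be pictured with a file-based analogy. This sketch is illustrative only; real IP fragments carry identifiers and offsets, while this example simply relies on the fragments sorting in order:

```shell
# Fragment a 16-byte "packet" into pieces no larger than a 6-byte "MTU",
# then reassemble the pieces in order and verify the original is intact.
tmp=$(mktemp -d)
printf '0123456789ABCDEF' > "$tmp/packet"
split -b 6 "$tmp/packet" "$tmp/frag_"        # creates frag_aa, frag_ab, frag_ac
cat "$tmp"/frag_* > "$tmp/reassembled"       # shell glob order preserves the sequence
cmp -s "$tmp/packet" "$tmp/reassembled" && echo "reassembled intact"
```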

      Many well-known network protocols operate at the network layer, including the following:

      • The Internet Protocol (IP). This protocol specifies the addressing format for the internet.
      • Routing protocols including Border Gateway Protocol (BGP) and Open Shortest Path First (OSPF). These protocols are responsible for determining the best path to the final destination.
      • The Multiprotocol Label Switching (MPLS) protocol. In reality, MPLS is a multi-layer protocol. It includes functionality from both the network and transport layers.
      • The various Internet Control Message Protocol (ICMP) control messages, and related applications like ping and traceroute.
      • Multicast standards, including the Internet Group Management Protocol (IGMP).

      Network Layer Tools

      The ip command is also quite useful for diagnosing network layer problems. The ip addr show command displays the IP address associated with each interface:

      ip addr show

      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
          link/ether f2:3c:93:15:ce:03 brd ff:ff:ff:ff:ff:ff
          inet brd scope global eth0
             valid_lft forever preferred_lft forever
          inet6 2a01:7e00::f03c:93ff:fe15:ce03/64 scope global dynamic mngtmpaddr noprefixroute
             valid_lft 5316sec preferred_lft 1716sec
          inet6 fe80::f03c:93ff:fe15:ce03/64 scope link
             valid_lft forever preferred_lft forever

      The ping and traceroute commands can determine whether a destination is reachable and track the path packets follow to reach it. These commands accept either a hostname or an IP address. Terminate the command using the Ctrl+C key combination.

      PING (2620:0:862:ed1a::1) 56 data bytes
      64 bytes from (2620:0:862:ed1a::1): icmp_seq=1 ttl=55 time=6.45 ms
      64 bytes from (2620:0:862:ed1a::1): icmp_seq=2 ttl=55 time=6.41 ms
      64 bytes from (2620:0:862:ed1a::1): icmp_seq=3 ttl=55 time=6.41 ms
      64 bytes from (2620:0:862:ed1a::1): icmp_seq=4 ttl=55 time=6.55 ms
      64 bytes from (2620:0:862:ed1a::1): icmp_seq=5 ttl=55 time=6.40 ms
      64 bytes from (2620:0:862:ed1a::1): icmp_seq=6 ttl=55 time=6.68 ms
      --- ping statistics ---
      6 packets transmitted, 6 received, 0% packet loss, time 5008ms
      rtt min/avg/max/mdev = 6.398/6.483/6.678/0.101 ms

      To view the contents of the system routing table, use the ip route show command. The ip neighbor show and ip nexthop show commands are also often useful:

      ip route show

      default via dev eth0 proto static dev eth0 proto kernel scope link src

      Layer 4: The Transport Layer

      The transport layer works in conjunction with the network layer to coordinate data transfer between the host and the destination. While the network layer is more concerned with addressing and routing, the transport layer is responsible for segmenting and ordering the data. It must collect and interleave packets from many different higher-level protocols. It must also associate these packets with the correct session. On the receiving side, the transport layer reassembles the packets and detects any missing segments. Some transport layer protocols also handle quality of service, congestion avoidance, reliability, and packet retransmission.

      Transport layer protocols are either connection-oriented or connectionless. The two most important transport protocols are the
      Transmission Control Protocol (TCP) and the
      User Datagram Protocol (UDP). Transport layer data units are sent from, and received on, a specific port. The full destination address consists of both an IP address and a port number. For ease of use, many protocols are associated with a specific, well-known port.
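
      On most Linux systems, the well-known port assigned to a service can be looked up in the services database (/etc/services) with getent:

```shell
# Print the name, port/protocol pair, and any aliases for common services.
getent services ssh
getent services http
```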

      • Transmission Control Protocol: TCP is a robust connection-oriented protocol. It implements reliability and error-checking and guarantees packets are delivered in order. It is used for applications that cannot tolerate corrupted or missing packets, such as file transfers and email. TCP segments data based on the maximum transmission unit (MTU) of the egress interface. Some portions of the TCP specification, including the graceful close technique, better align with the session layer of the OSI model.
      • User Datagram Protocol: UDP is a connectionless, lightweight protocol that is far less complex than TCP. Unlike TCP, UDP does not segment packets. It is not necessarily reliable and does not retransmit packets. It is a best effort option for performance-oriented applications that can tolerate missing or corrupted packets. UDP is a good choice for streaming video and applications using built-in buffering mechanisms.

      The Transport Layer Security (TLS) protocol somewhat aligns with the OSI transport layer, but it also provides features from the higher layers.

      Transport Layer Tools

      There is no generic transport layer monitoring tool for Linux. Instead, tools are available for specific protocols. For TCP, the tcptrack utility displays a list of current sessions. tcptrack does not come preinstalled, so install it using apt.

      sudo apt install tcptrack

      Use the -i option and the name of the interface to see all connections active on that interface. There is no UDP equivalent because UDP is connectionless. The
      Ubuntu tcptrack man page provides full usage instructions. Terminate the command using the Ctrl+C key combination.

      sudo tcptrack -i eth0

      Client                Server                State        Idle A Speed
                                                  ESTABLISHED  0s     10 KB/s

      tcpdump is a packet analyzer for monitoring outgoing and incoming packets on a specific interface. The -i option indicates the interface to listen on. If no interface is specified, tcpdump defaults to the lowest-numbered configured interface, often eth0. tcpdump can also monitor UDP packets and capture traffic from layers below the transport layer, and the -e option displays the Ethernet headers. Consult the
      Ubuntu tcpdump man page for a list of options. Terminate the command using the Ctrl+C key combination.

      sudo tcpdump -i eth0

      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
      18:52:14.806270 IP testworkstation.ssh > Flags [P.], seq 866780550:866780658, ack 3719759268, win 501, options [nop,nop,TS val 3917578569 ecr 3770283712], length 108

      Layer 5: The Session Layer

      The session layer is relatively lightweight. It is used to establish and maintain ongoing sessions of longer duration between two systems. It handles the negotiation of the connection and closes it when no longer required. The session layer often manages user authentication during the establishment phase. Sometimes the session layer provides a way to suspend, restart, or resume a session. Network sockets operate at this layer, and protocols including FTP and DNS make substantial use of session layer functionality. It is also heavily used by streaming services and web/video conferencing. For some services, session layer protocols use flow control for proper synchronization.

      Session Layer Tools

      In many applications, the session layer is bundled together with the presentation and application layers, and all three are managed as a single unit. Therefore, there are no generic tools for the session layer or any of the higher layers. Instead, users must employ application-specific tools. For instance, the
      FileZilla FTP application provides logs and a debug menu to help resolve FTP connectivity problems at the session level.

      Layer 6: The Presentation Layer

      The presentation layer is responsible for translating content between the application layer and the lower layers. It handles data formatting and translation, including data compression/decompression, encoding, and encryption. For some higher layer applications, the presentation layer might also handle graphics and operating system specific tasks. In most modern applications, the presentation and application layers are tightly integrated.

      An example of a protocol residing at the presentation layer is Multipurpose Internet Mail Extensions (MIME), which formats email messages. The Transport Layer Security (TLS) encryption protocol is also a presentation layer application.

      Layer 7: The Application Layer

      The application layer is the highest layer and the one closest to the end user of most software applications. There is a tendency to think of this layer as being equivalent to the application itself, but user applications actually sit above this layer and interact with it directly. Many application layer protocols are closely bound to the client software. They manage tasks including message handling, printer access, and database access. Some examples of application layer protocols include the following:

      • Hypertext Transfer Protocol (HTTP)
      • Simple Mail Transfer Protocol (SMTP)
      • Telnet
      • Secure Shell (SSH)
      • File Transfer Protocol (FTP)
      • Simple Network Management Protocol (SNMP)
      • Domain Name System (DNS)
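
      As a small illustration of application layer name resolution, getent queries the system resolver from the command line. For the name localhost, the answer comes from the local hosts database rather than a remote DNS server:

```shell
# Resolve a hostname through the system resolver.
getent hosts localhost
```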

      End-to-End Processing Using the OSI Model

      It is possible to use the OSI Model to explain how a user request passes from a client application down to the physical layer. For instance, depending on the web application, the following steps might occur when browsing the internet.

      1. The web browser client interacts with an application protocol at the application layer. The user request is translated to either an HTTP or HTTPS message. The DNS protocol is used to convert the domain name into an IP address.
      2. If HTTPS is used, the presentation layer encrypts the outgoing request using a TLS socket. If necessary, the data is encoded or translated to a different character set.
      3. At the session layer, a session is established to send and receive the HTTP/HTTPS messages. In most cases, the session layer opens a TCP session because web browsing requires reliable transmission. However, some streaming applications might opt for a UDP session.
      4. The transport layer TCP protocol initiates a connection to the destination server. When the session is operational, it transmits the packets in their original order and ensures all packets are sent and received. UDP sends all packets out in a best effort manner without a direct connection and does not wait for any acknowledgments. If necessary, the data packets are segmented into smaller packets. The transport protocol forwards all outgoing packets to the network layer.
      5. At the network layer, the routing protocols decide what egress interface to use based on the destination address. The data, including the address information, is encapsulated inside an IP packet. The packet is then forwarded to the data link layer.
      6. The data link layer converts the IP packets to frames, which might result in further fragmentation. It builds the frames based on the data link protocol being used.
      7. At the physical layer, the frames are converted to a stream of bits and transmitted onto the carrier media.

      Drawbacks of the OSI Model

      The OSI Model is useful as a tool for understanding networks. However, it has a number of drawbacks.

      • OSI is very complex, with too many layers. Some layers are far more significant than others.
      • There are too many OSI standards documents and recommendations.
      • The model does not reflect the real world network structure. In many cases, actual network models span multiple layers and do not align with the boundaries of the OSI layers.
      • The OSI protocols were not widely implemented and the model does not map very well to the protocols in use today.

      A Comparison Between the OSI Model and the Internet Protocol Suite

      The Internet Protocol suite is an alternative to the OSI model. The IP suite has four layers.

      1. Application layer: This maps to the OSI application and presentation layers and much of the session layer.
      2. Transport Layer: This includes some parts of the OSI session layer as well as the transport layer. The TCP and UDP protocols are part of this layer.
      3. Internet Layer: This layer closely matches the OSI network layer definition and includes the IP protocol.
      4. Link Layer: This encompasses both the physical and data link layers of the OSI model.

      The IP suite is considered less prescriptive and more flexible, and it better reflects actual usage. Protocols such as TCP, UDP, and the main routing protocols are derived from the IP suite. However, the IP suite is not as informative, conceptual, or comprehensive as the OSI model, and it is not as widely used as a teaching aid. To properly understand networking concepts, engineers should familiarize themselves with both models.


      The OSI Model is a framework for understanding network communications. It breaks the network stack down into seven layers. The layers range from the low-level physical layer up to the application layer residing closest to a computer user. At the heart of the model are the mid-level network and transport layers. The network layer addresses and routes packets, while the transport layer establishes and maintains a connection with a far-end device.

      Although the OSI Model is a handy learning model, it is relatively abstract and does not always reflect real world behavior. The OSI-based protocols were never really implemented, and most commonly-used network protocols are more closely related to the IP suite. However, the OSI Model is integral to many networking methods, and many of the common networking tools still map to the different OSI layers.

      More Information

      You may wish to consult the following resources for additional information
      on this topic. While these are provided in the hope that they will be
      useful, please note that we cannot vouch for the accuracy or timeliness of
      externally hosted materials.


      Introduction to the Bun JavaScript Runtime

      Bun introduces a new JavaScript runtime with exceptional performance, built-in bundling & transpiling, and first-class support for TypeScript & JSX. This up-and-coming tool promises to be an asset for JavaScript developers and a strong competitor to Node.js and Deno.

      In this tutorial, learn about the Bun JavaScript runtime and how it compares to other runtimes like Node.js and Deno. See how to set up Bun on your own system, and follow along to build an example React application with it.

      Before You Begin

      1. Familiarize yourself with our
        Getting Started with Linode guide, and complete the steps for setting your Linode’s hostname and timezone.

      2. This guide uses sudo wherever possible. Complete the sections of our
        How to Secure Your Server guide to create a standard user account, harden SSH access, and remove unnecessary network services.

      3. Update your system.

        • Debian and Ubuntu:

          sudo apt update && sudo apt upgrade
        • AlmaLinux, CentOS Stream (8 or later), Fedora, and Rocky Linux:

          sudo dnf upgrade


      This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you’re not familiar with the sudo command, see the
      Users and Groups guide.

      What Is Bun?

      Bun enters the field of JavaScript runtimes opposite options like Node.js and Deno. Built on the lightning-fast JavaScriptCore engine, the Bun runtime stands out for its speed and built-in bundling & transpiling features.

      These next sections aim to make you more familiar with Bun and what it has to offer. Keep reading to learn more about JavaScript runtimes in general, and how Bun stacks up against its main competitors.

      What Are JavaScript Runtimes?

      JavaScript runtimes are tools that allow you to run JavaScript outside of a browser. With a JavaScript runtime, you can use JavaScript to build server, desktop, and mobile applications.

      By far, the predominant JavaScript runtime is Node.js. Built on the V8 JavaScript engine behind Google Chrome, Node.js is the default JavaScript runtime for many developers.

      More recently, the creator of Node.js released a new JavaScript runtime, Deno. The Deno runtime, like Node.js, is built on the V8 JavaScript engine. However, Deno introduces numerous fundamental improvements over Node.js in terms of security, performance, and more. It also adds first-class support for TypeScript and JSX.

      The Bun Runtime

      The Bun runtime arose with a fresh approach to JavaScript runtimes. Developed using the Zig programming language, Bun constructs its runtime on the JavaScriptCore engine, used in Apple’s Safari web browser. The result is an incredibly fast runtime.

      Additionally, Bun has built-in handling for bundling and transpiling. With other runtimes, you need to rely on outside tools for bundling your JavaScript projects and for transpiling code from another language. Bun handles all of these features.

      What’s more, Bun’s runtime implements the Node.js algorithm for resolving modules. This means that Bun can make use of NPM packages. Bun’s bundler can find and install packages from the vast NPM repository and manage their dependencies, giving you a full-featured and seamless bundler.

      Like Deno, Bun also comes with first-class support for the TypeScript and JSX languages.

      Bun vs Node.js and Deno

      Bun offers some of the same advantages over Node.js as Deno. Besides the aforementioned first-class support for TypeScript and JSX, both offer performance and quality-of-life improvements over Node.js.

      However, the Bun runtime also aims to exceed Deno in terms of performance. Bun’s use of the JavaScriptCore engine has allowed Bun to achieve immense speed gains in its execution of JavaScript programs.

      With Bun, you also get simplified tooling. Bun includes transpiling and bundling features, which keeps you from having to adopt and maintain separate tools for those tasks.

      How to Install Bun

      Before proceeding, make sure your Linux system uses a version supported by Bun. Currently, Bun runs on systems using at least version 5.1 of the Linux kernel (though it prefers 5.6).

      You can check your kernel version with the command:

      uname -r

      On a CentOS Stream 9 system, for instance, the output is the kernel release string. CentOS Stream 9 systems generally run a 5.14 series kernel, which satisfies this requirement.

      For reference, here are versions of some popular Linux distributions that use at least version 5.1 of the Linux kernel:

      • CentOS Stream (9 or newer)
      • Debian (11 or newer)
      • Fedora (34 or newer)
      • Ubuntu (20.04 LTS or newer)

      The Bun installation script requires that you have Unzip installed on your system. You can install Unzip using one of the following commands:

      • Debian and Ubuntu:

        sudo apt install unzip
      • AlmaLinux, CentOS Stream, Fedora, and Rocky Linux:

        sudo dnf install unzip

      Bun can be installed using an installation script. The command below accesses the script and runs it in your shell session:

      curl -fsSL https://bun.sh/install | bash

      Once finished, the Bun installation script displays a success message:

      bun was installed successfully to /home/example-user/.bun/bin/bun

      The script may also prompt you to add two lines to your .bashrc file. You can quickly do so using the following commands, replacing /home/example-user with your own home directory:

      echo 'export BUN_INSTALL="/home/example-user/.bun"' >> ~/.bashrc
      echo 'export PATH="$BUN_INSTALL/bin:$PATH"' >> ~/.bashrc

      Restart your shell session by exiting and reentering it, and you are finally ready to start using Bun. At this point you can verify your installation by checking the Bun version:

      bun -v

      Example of a Bun Project

      Like NPM, Bun can be used to create and manage application projects. To give you an idea of Bun’s capabilities, the next series of steps walk you through creating and running a React application with Bun.

      The example adds a simple analog clock widget to the base React template, which lets you see more of how Bun manages project dependencies.

      1. Create a new Bun project. Do this by running the bun create command with a template name and a project directory.

        You can get a list of some useful available templates by running the create command without any arguments:

        bun create

        For this example, create your project from the React template, and give the project a directory of example-react-app, like this:

        bun create react ./example-react-app
      2. Afterward, be sure to change into the new project directory. The rest of these steps assume you are working out of this directory:

        cd example-react-app
      3. This already gives you a working React application; you just need to start it:

        bun dev
      4. You can see the application in action by navigating to localhost:3000 in your browser.

        To see the application remotely, you can use an SSH tunnel.

        • On Windows, use the PuTTY tool to set up your SSH tunnel. Follow the appropriate section of the
          Setting up an SSH Tunnel with Your Linode for Safe Browsing guide, replacing the example port number there with 3000.

        • On macOS or Linux, use the following command to set up the SSH tunnel. Replace example-user with your username on the application server and 192.0.2.1 with the server’s IP address:

          ssh -L3000:localhost:3000 example-user@192.0.2.1

        Default React application

      5. Use the CTRL+C key combination to stop Bun when you are finished viewing the application.

      6. Add an NPM package to your project. You can do so using the bun add command followed by the package name.

        This example uses the react-clock package, which allows you to easily render an analog clock for your React application:

        bun add react-clock
      7. The src/App.jsx file is the basis for the default React application. Open the file and incorporate the react-clock:

        nano src/App.jsx
      8. Replace the contents of src/App.jsx with the example file below. You can see the relatively simple modifications made to this file to incorporate the react-clock. The modified areas are prefaced with explanatory comments:

        File: src/App.jsx
        import logo from "./logo.svg";
        import "./App.css";
        // Import React modules to be used by react-clock.
        import React, { useEffect, useState } from 'react';
        // Import react-clock and its CSS file.
        import Clock from 'react-clock';
        import 'react-clock/dist/Clock.css';

        function App() {
            // Define a state variable for the clock value; initialize it with the
            // current date-time.
            const [clockValue, setValue] = useState(new Date());

            // Define an effect that updates the clock's value every second and
            // clears the interval when the component unmounts.
            useEffect(() => {
                const clockInterval = setInterval(() => setValue(new Date()), 1000);
                return () => {
                    clearInterval(clockInterval);
                };
            }, []);

            // Add to the default layout a <Clock/> tag for rendering the clock;
            // give it the clockValue to display.
            return (
                <div className="App">
                    <header className="App-header">
                        <img src={logo} className="App-logo" alt="logo" />
                        <h3>Welcome to React!</h3>
                        <Clock value={clockValue} />
                    </header>
                </div>
            );
        }

        export default App;
      9. Press CTRL+X to exit Nano, then Y to save, and Enter to confirm.

      10. Start up the application with Bun again:

        bun dev

        Once again, you should be able to visit the project by navigating to localhost:3000 in a web browser. Now you should see the default application modified with an analog clock.

        React application with an analog clock


      Now that you have a footing with the Bun runtime, you can start exploring and seeing all it has to offer. With its built-in bundling and transpiling, you can create and execute projects with simpler tooling, plus the benefits of Bun’s incredible performance.

      Keep learning about Bun through the links below, as well as through the
      official documentation.

      More Information

      You may wish to consult the following resources for additional information
      on this topic. While these are provided in the hope that they will be
      useful, please note that we cannot vouch for the accuracy or timeliness of
      externally hosted materials.


      An Introduction to the WordPress REST API

      When the REST API was finally added to WordPress core, it was the end of a long journey. Many had anticipated this change as the biggest step forward for WordPress in the platform’s history. However, if you’re not familiar with the REST API, you may be confused by what it all means.

      In short, the addition of the WordPress REST API turned WordPress into a fully-featured application framework. This significantly increased its ‘extensibility,’ or its ability to be extended with new features and capabilities. Plus, it expanded the platform’s potential for communicating with other sites and applications.

      An Introduction to REST APIs

      Before we dig deeper into the WordPress REST API, it’s important to get our terminology straight. This is a subject where we’ll need to use a lot of acronyms, so let’s clear those up first.

      First and foremost, you’ll need to know what Application Programming Interfaces (APIs) are. In the simplest terms, an API is a means by which one system enables other systems to connect to its data.

      For example, when a website adds a Facebook ‘like’ button to a page, it does this by hooking into Facebook’s API. This lets the web page use the API to receive data (the code for the like button) and send data (the like request).

      So, what is a REST API specifically? Representational State Transfer (REST) is a type of API specific to web services. It contains a standardized set of instructions and rules, making it easier for all ‘RESTful’ services to connect with each other.

      In short, REST APIs enable you to make requests to an external system. One example of this is Twitter. You can use its API to request a certain number of tweets from a specific user. The API will then return the tweets based on your request, which you can embed on your site using HTML and CSS.

      These requests are carried out using JavaScript Object Notation (JSON). This is a lightweight, text-based format designed for sending, receiving, and storing structured data.

      We’re going to cover JSON later in this article, but we recommend taking the time to familiarize yourself with this format upfront. This will help prime you for using the WordPress REST API and understanding some of the concepts we’ll be talking about.
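      To make this concrete, here is a minimal JavaScript sketch of JSON in action. The post object and its values are made up for illustration:

```javascript
// A made-up post object, similar in spirit to what a REST API returns.
const post = { id: 1, title: "Hello World", tags: ["intro", "news"] };

// Serialize: object -> JSON text, the form data takes when sent over the wire.
const wire = JSON.stringify(post);

// Parse: JSON text -> object, the form your code works with.
const restored = JSON.parse(wire);

console.log(restored.title); // → Hello World
```

      Everything the REST API sends and receives travels in exactly this serialized form.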

      What the WordPress REST API Is (And Why It’s Important)

      WordPress REST API

      The WordPress REST API functions in largely the same way as the examples we’ve touched on already. Basically, the WordPress REST API gives you full access to WordPress features from any JSON-compatible framework.

      Similarly to how Twitter’s API enables you to retrieve and send tweets, the WordPress REST API can be used to manage posts, users, categories, and much more from external platforms. It lets you use WordPress in a number of unprecedented ways.

      The REST API was announced all the way back in 2013. It started life as a plugin, meant to be incorporated into the WordPress core by Version 4.1. As so often happens, delays pushed the release back until it was finally implemented into the core with the release of WordPress 4.7 three years later.

      This was a long but worthwhile wait for many people who saw the WordPress REST API as an important step forward for the platform. You might be wondering why this addition was such a big deal, especially since a lot of users probably didn’t notice much difference. As it turns out, the inclusion of the REST API was a fundamental change to WordPress for many reasons.

      By implementing a REST API, WordPress took a step away from simply being a platform for creating websites. Instead, it’s now become a full-fledged application framework. This means developers can use a WordPress site to create applications for mobile devices and the web or as an information repository.

      This shift also enabled WordPress to take a step away from its reliance on PHP. By making WordPress compatible with any JSON-compatible language, the REST API greatly expanded the possibilities for developers, enabling them to use WordPress functionality with practically any framework.

      Finally, the REST API provides increased flexibility with the interfaces you can use to work with the platform. It made the admin interface completely optional since you can now interact with your WordPress site entirely through JSON commands.

      Now, let’s look at how JSON and the REST API come together to make this possible.

      How the REST API and JSON Work Together

      By now, you should have a handle on the theoretical aspects of the WordPress REST API. So, let’s look at the more practical side of the technology. The official handbook describes using the REST API as follows:

      “The WordPress REST API provides API endpoints for WordPress data types that allow developers to interact with sites remotely, by sending and receiving JSON (JavaScript Object Notation) objects.”

      The first word we need to focus on here is “endpoints”. The easiest way to think of an endpoint is as a piece of data or a function that can be called using a JSON request. By default, WordPress provides a huge number of standard endpoints to use, but developers can also create custom endpoints.

      To reach an endpoint, you must use a ‘route,’ which takes the form of a normal URL. You can even try this yourself right now.

      Go to your own WordPress site, and add /wp-json/wp/v2 to the end of its URL. If your site is https://example.com, you would enter https://example.com/wp-json/wp/v2.

      When you load this route, you will reach the endpoint, which in this case, returns all content and meta-data for your site in a (messy) JSON format. By using different routes, you can access different endpoints to get specific types of information and perform various tasks.
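      The pattern behind these routes is regular enough to sketch in a few lines of JavaScript. The helper below is a hypothetical convenience for illustration, not part of any WordPress library; it builds a v2 route for a resource, with an optional item ID:

```javascript
// Build a WordPress REST API v2 route for a site and resource;
// omitting the ID yields the collection route instead of a single item.
function wpRoute(site, resource, id) {
  const base = `${site}/wp-json/wp/v2/${resource}`;
  return id === undefined ? base : `${base}/${id}`;
}

console.log(wpRoute("https://example.com", "posts"));
// → https://example.com/wp-json/wp/v2/posts
console.log(wpRoute("https://example.com", "posts", 42));
// → https://example.com/wp-json/wp/v2/posts/42
```

      Every default endpoint follows this site/wp-json/wp/v2 prefix, which is why the same routes work across different WordPress installations.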

      There are three primary JSON requests you will use with the REST API, so let’s also take a quick look at them now. They are:

      • GET. This type of request is used for retrieving and listing data from the API. For example, you would use a GET request to return a list of users on your site or compile blog posts from a certain timeframe.
      • POST. This request is used for sending data to the API. It enables you to push new information to WordPress, such as adding new users and posts or updating existing data.
      • DELETE. As the name suggests, this request is used to delete data. This enables you to remove posts, pages, users, and more.

      GET and POST can sometimes be used with the same endpoint to achieve different results.

      For example, let’s look at the endpoint /me/settings/. If you were to perform a GET request on this endpoint, you would receive a list of the current user’s settings. However, by using a POST request on the same endpoint, you would be able to update the settings instead.
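      One way to picture the difference is to look at the raw requests each verb produces. The sketch below builds the options you might pass to fetch; the helper and the settings payload are illustrative, not part of the WordPress API itself:

```javascript
// Build fetch-style options for a JSON request: a GET carries no body,
// while a POST serializes its data as a JSON body.
function jsonRequest(method, data) {
  const options = { method, headers: { "Content-Type": "application/json" } };
  if (data !== undefined) options.body = JSON.stringify(data);
  return options;
}

// GET on /me/settings/ retrieves the current settings...
const read = jsonRequest("GET");

// ...while POST on the same endpoint updates them.
const update = jsonRequest("POST", { language: "en" });

console.log(update.body); // → {"language":"en"}
```

      The endpoint stays the same; only the HTTP method and the presence of a body change the outcome.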


      Getting Started with the WordPress REST API

      We’re now going to put all of this theory into practice and show you some very basic examples of what you can do with the REST API. This is only a taste to help you become comfortable using the REST API to process requests to WordPress.

      For more examples, we recommend checking out the official reference library and the REST API Resources.

      The following techniques will require you to use the command line to process JSON requests. This enables you to interact with your WordPress site by using a text-based interface and sending simple commands.

      If you don’t have any experience using the command line, we recommend taking some time to learn the basics first. You may also want to use SSH to create the connection with your site.

      Finally, when you’re ready, let’s look at some examples of how you can use the WordPress REST API!

      1. Return Posts from a Site

      While you will obviously need the proper authorization to edit a website, it’s possible to retrieve some information from almost any WordPress site. This is because the REST API is consistent across all WordPress installations.

      As we discussed, the main reason that APIs exist is to enable external applications to access some of your data. In this example, we can retrieve a single post from the official WordPress news blog:

      curl https://wordpress.org/news/wp-json/wp/v2/posts/1

      The ID has been set to 1, meaning that this request will retrieve the very first post on the blog. It might be hard to see since the JSON is not very readable, but among the code, you can spot all the content and meta-data for the post:

      retrieve a post from the WordPress blog using the WordPress Rest API

      You could then use this information in an application, for example, to display it using your own customized styling.
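      As a small illustration, suppose your application has parsed the JSON response into an object. The shape below follows the WP REST API’s rendered fields, but the values are sample data, not the real post:

```javascript
// A trimmed-down post object in the shape the WP REST API returns;
// title.rendered holds the HTML-rendered title. Values are sample data.
const post = {
  id: 1,
  date: "2023-01-15T12:00:00",
  title: { rendered: "My First Post" },
};

// Your application can now present the data with its own styling.
console.log(`${post.title.rendered} (published ${post.date})`);
// → My First Post (published 2023-01-15T12:00:00)
```
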

      If you want to return every post from the blog instead, all you have to do is remove the ID at the end. However, it’s more likely that you’ll want to return a select number of posts. The following request will return the latest three posts:

      curl "https://wordpress.org/news/wp-json/wp/v2/posts?per_page=3"

      You can try this out for yourself with other sites, and even your own blog.

      2. Update a Post

      Now, let’s try to make some changes to WordPress using the REST API. To do this, you will need to be logged in to the site you want to manage. For example, if you’re using SSH, you will need to log in to your server.

      In this example, we’ll update an existing post. First, let’s use a request to update the title of the post with the ID of 1:

      curl -X POST -H "Content-Type: application/json" -d '{"title":"A Brand New Title"}' https://example.com/wp-json/wp/v2/posts/1

      This is pretty self-explanatory. The title argument shows that you’re updating the post’s title, which is followed by the text string containing the replacement.

      There are plenty of other arguments you can use to make changes to a post. For instance, you can use a list to assign categories to the post, publish it, or change its contents entirely.
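      For instance, a fuller update payload might look like the sketch below. The title, status, and categories fields are real post arguments in the WP REST API; the specific values and category IDs here are made up:

```javascript
// An illustrative request body for updating a post, sent as the JSON
// body of a POST to /wp-json/wp/v2/posts/<id>.
const update = {
  title: "A Brand New Title", // replace the title
  status: "publish",          // publish the post
  categories: [3, 7],         // assign categories by ID (IDs are made up)
};

console.log(JSON.stringify(update));
// → {"title":"A Brand New Title","status":"publish","categories":[3,7]}
```
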

      3. Delete a User

      Finally, let’s look at how you can remove data using the REST API. In this example, we’ll remove a user from the site. Naturally, you’ll need to be logged in and authorized to manage users before you can use this function.

      Then, you can use the following request to delete the user with an ID of 101:

      curl -X DELETE https://example.com/wp-json/wp/v2/users/101

      This will remove the specified user from the site. You can use the reassign parameter to hand the user’s posts to another user based on their ID, and the force parameter to confirm a permanent deletion rather than moving the user to the trash.

      Through these examples, you can start to see how the REST API enables you to manage the content on your site and connect to others. If you want to learn more, we recommend digging deeper into the REST API Handbook.

      Explore WordPress Development

      The WordPress REST API was a huge step forward for the platform, away from its roots and into the future. Developers were excited from day one, but if you weren’t familiar with REST APIs to begin with, you might have been confused about why.

      Although the REST API might seem overwhelming for beginners, you don’t need to be an experienced developer to use some basic requests. For example, the API enables you to perform diverse tasks on your own site (or others), such as returning posts, updating posts, and deleting users.

      Are you looking for high-performance hosting for your WordPress site? At DreamHost, our DreamPress managed plans offer professional staging environments, automatic backups, built-in caching, and more. Check out our plans today!

