      Install and Configure Hubzilla


      Recent developments have led to a renewed interest in federated web applications. Much of the interest centers around the Twitter alternative Mastodon. However, there are many other federated applications worthy of attention. For example, the federated Hubzilla application allows users to create interconnected websites and channels. This guide provides an introduction to the Hubzilla application and explains how to install and configure it.

      What is the Federated Web?

      The federated web is also known as the Fediverse, which stands for “federated universe”. It is a collection of servers that are independent, but can communicate and interact with one another. The Fediverse is commonly used to host web content, including websites, social networks, and blogs. Some examples of federated web applications include Hubzilla, Friendica, Mastodon, and Pleroma.

      Communication protocols based on open standards allow the servers and users to communicate with each other across software boundaries. A user account or identity can potentially be used to create or access content on any server within the federation. In most cases, access control lists are used to control distribution and content creation rights.

      What is Hubzilla?

      Hubzilla is a highly versatile, free, and open source application. It allows users to create connected websites and tools in a modular fashion. Some of the main Hubzilla features include websites, social media, file sharing, photo sharing, forums, chat rooms, and calendars.

      In Hubzilla, all communications, content, permissions, and user identities are distributed across the federation. Hubzilla uses the ZOT web framework to enable these secure decentralized services. ZOT is a JSON-based communication framework designed with decentralization in mind. Hubzilla channels use the Open Web Auth (OWA) protocol to silently identify and authenticate themselves and access server resources. OWA is similar to OpenID but it is more efficient and does not require the Domain Name System (DNS). However, Hubzilla allows users to use OpenID to log in to off-grid sites.

      Hubzilla requires common web server technologies to operate, including a Linux LAMP stack or equivalent. A typical LAMP stack includes the Apache web server, the MySQL or MariaDB relational database management system (RDBMS), and the PHP programming language.

      Hubzilla uses a specific terminology to describe the relationship between servers and application users.

      • Hub: This is an individual server running Hubzilla. Each hub can host users and web content. Hubzilla allows anyone to run their own hub.
      • Grid: This is a decentralized network of hubs. A grid allows hubs, and therefore individual users, to connect with each other. Hubs communicate using the ZOT protocol and Hubzilla’s messaging system. However, a hub can be used as a stand-alone system. Hubs are not required to connect to the wider grid.
      • Channel: This is a Hubzilla entity within the grid. A channel can be an application or user, but it can also be a web page, blog, or forum. A user interacts with the Hubzilla grid using their own personalized channel called the me channel. Each channel has its own stream and can also connect to other channels. For example, a person can subscribe to a forum, and blog updates can be funneled to a web page. A channel is identified by its unique tag, which takes an email-style form such as channelname@hub.domain. Every channel is cryptographically secured and has its own set of authentication rights.

      Hubzilla follows the key federated principle of nomadic identity. This means a channel or account is not tied to a particular server and does not have a traditional server-based account. Channel authentication happens independently across the grid. This means users can move their identity from one hub to another, taking their data and connections with them. In addition, Hubzilla allows channels to clone themselves on multiple hubs. This enhances resiliency against external threats, including power failures and censorship.

      Hubzilla includes a sophisticated access control mechanism. It allocates grid-wide user permissions on a granular level. The various content channels are identity-aware. This allows for a single sign-on (SSO) service across the various channels. This is a huge difference from traditional websites, which authenticate users independently. All channel content includes access control permissions, so it is possible for different users to see dramatically different channel feeds. It is also possible to set permissions for users on an entirely different hub.

      Hubzilla is a complicated and multi-faceted application. It provides many features, including:

      • A wide variety of plug-ins for creating channel content and web applications. The Comanche description language provides a toolkit for building web pages, and users can apply customizable interfaces, themes, and widgets to their pages.
      • Built-in social network capabilities, including chat rooms. Hubzilla provides federation support to connect to Diaspora, GNU Social, Mastodon, and other applications. Hubzilla can also act as a cross-posting client for Twitter, WordPress, and other services, distributing copies of new posts to them.
      • Cloud file storage functionality. Files can be shared or published using Hubzilla’s access control mechanism.
      • The ability to create a Wiki page for each channel.
      • An affinity slider that lets users indicate how close they are to another connection or channel. Users can then filter the content they see, because only channels within the selected affinity range are displayed.
      • Connection filtering based on user-defined criteria.
      • Single sign-on across the grid.
      • An access-controlled photo album feature.
      • A powerful calendar feature, allowing users to share events and coordinate attendance. Hubzilla supports most popular calendar formats.
      • A flexible API for third-party use to configure and access hubs and create content.

      There are also a few drawbacks to using Hubzilla, including:

      • Some of the features and tools are not intuitive.
      • Documentation on Hubzilla is fragmented, disorganized, and incomplete.
      • Hubzilla lacks a solid mobile implementation.
      • It has a smaller user base than Mastodon or other federated applications.

      For more information about Hubzilla and its features, see the Hubzilla Documentation.

      Before You Begin

      1. If you have not already done so, create a Linode account and Compute Instance. See our Getting Started with Linode and Creating a Compute Instance guides.

      2. Follow our Setting Up and Securing a Compute Instance guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.

      3. Assign a domain name for the Hubzilla hub and point it to the IP address of the server. For information on domain names and pointing a domain name to a Linode, see the Linode DNS Manager guide.

      4. Enable email on the Linode server to allow Hubzilla to send out registration emails containing verification codes. Hubzilla requires a working mail server to authenticate new users. For more information on setting up a mail server, see our guides on email.

      Note
      This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you are not familiar with the sudo command, see the Users and Groups guide.

      How to Install Hubzilla

      Before downloading and installing Hubzilla, you must first install and configure a LAMP stack. A virtual host for the Hubzilla site is also mandatory. The following instructions are designed for Ubuntu 22.04 LTS, but are generally applicable to most Linux distributions.

      Install the LAMP Stack

      Hubzilla relies upon a LAMP stack consisting of the Apache web server, the MariaDB database, and the PHP programming language. MySQL can be used in place of MariaDB. Other substitutions are possible and might improve performance, but Hubzilla does not explicitly support them.

      1. Ensure the Ubuntu packages are up to date. Reboot the system if advised to do so.

        sudo apt-get update -y && sudo apt-get upgrade -y
      2. Install the Apache web server and the MariaDB RDBMS.

        sudo apt install apache2 mariadb-server -y
      3. Install PHP, including all required PHP libraries, along with some additional utilities.

        Note

        Hubzilla requires PHP release 8.0 or higher. The following command installs the current PHP release 8.1. To confirm the release of PHP in use, run the command php -v.

        sudo apt install openssh-server git php php-fpm php-curl php-gd php-mbstring php-xml php-mysql php-zip php-json php-cli imagemagick fail2ban wget libapache2-mod-fcgid -y
      4. Use the systemctl command to ensure Apache and MariaDB are running and configured to start automatically at boot time.

        sudo systemctl start apache2
        sudo systemctl enable apache2
        sudo systemctl start mysql
        sudo systemctl enable mysql
      5. Verify Apache is running using the systemctl status command.

        sudo systemctl status apache2
        apache2.service - The Apache HTTP Server
            Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
            Active: active (running) since Wed 2022-11-16 13:38:25 UTC; 10min ago
            
      6. Configure the firewall to allow Apache connections. Ensure SSH connections are also allowed.

        Note

        If you intend to use SSL to secure the Hubzilla installation, use the Apache Full profile. Otherwise, enable the Apache profile. Hubzilla tests the well-known HTTPS port first, so this port must not be open unless HTTPS is used. HTTPS is highly recommended for enhanced security. For conciseness, this guide does not configure HTTPS. However, the basic Hubzilla installation process is the same in both cases.

        sudo ufw allow OpenSSH
        sudo ufw allow in "Apache"
        sudo ufw enable
      7. Ensure the firewall is enabled.

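        The firewall state can be displayed with the ufw status command:

        sudo ufw status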
        Status: active
        
        To                         Action      From
        --                         ------      ----
        OpenSSH                    ALLOW       Anywhere
        Apache                     ALLOW       Anywhere
        OpenSSH (v6)               ALLOW       Anywhere (v6)
        Apache (v6)                ALLOW       Anywhere (v6)
            
      8. Visit the IP address or domain name of the server. The Apache2 Default Page should be displayed.

      Configure the LAMP Stack

      1. Enhance security for the MariaDB installation using the mysql_secure_installation utility. When prompted, answer the questions as follows:

        • For Enter current password for root (enter for none): enter the database password for the root account. If there is no password, hit the Enter key.

        • For Switch to unix_socket authentication and Change the root password?, enter n.

        • For Remove anonymous users?, Disallow root login remotely?, Remove test database and access to it? and Reload privilege tables now?, enter Y.

          sudo mysql_secure_installation
      2. Log in to MariaDB. Provide the root password if necessary.

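        For example, use the mysql command line client:

        sudo mysql -u root -p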
      3. Create a hubzilla database for the application to use.

        CREATE DATABASE hubzilla;
      4. Create a user for the new database. Supply a unique password in place of mypassword.

        CREATE USER 'hubzilla'@'localhost' IDENTIFIED BY 'mypassword';
      5. Grant all privileges for the hubzilla database to the new user. Flush the privileges and exit.

        GRANT ALL PRIVILEGES ON hubzilla.* TO 'hubzilla'@'localhost';
        FLUSH PRIVILEGES;
        EXIT;
      6. Hubzilla requires the mpm_event module. To enable this module, first stop Apache and disable the php8.1 and mpm_prefork modules.

        Note

        Disable the php module associated with the current PHP release. The module name takes the form phpX.Y, where X.Y is the major and minor release of PHP. Use the command php -v to find this release information.

        sudo systemctl stop apache2
        sudo a2dismod php8.1
        sudo a2dismod mpm_prefork
      7. Enable the Apache rewrite and mpm_event modules.

        sudo a2enmod rewrite
        sudo a2enmod mpm_event
      8. Connect Apache to PHP-FPM by enabling the following configuration and modules.

        sudo a2enconf php8.1-fpm
        sudo a2enmod proxy
        sudo a2enmod proxy_fcgi
      9. Restart Apache and verify it is active. If the restart fails, run the sudo apachectl configtest command. Review and correct any errors.

        sudo systemctl restart apache2
        sudo systemctl status apache2

      Configure A Hubzilla Virtual Host

      Hubzilla requires its own virtual host to function properly. Create a new hubzilla.conf file inside /etc/apache2/sites-available and then enable it. To create the virtual host file, follow these steps.

      1. Change to the /etc/apache2/sites-available/ directory and create the new hubzilla.conf file.

        cd /etc/apache2/sites-available
        sudo nano hubzilla.conf
      2. Add the following information to the file. For the ServerAdmin and ServerName fields, replace example.com with the name of the Hubzilla domain. In the ProxyPassMatch variable, use the release number of the local instance of PHP in place of 8.1 for php8.1-fpm. Save and close the file.

        File: /etc/apache2/sites-available/hubzilla.conf
        <VirtualHost *:80>
            ServerAdmin webmaster@example.com
            ServerName example.com
            DocumentRoot /var/www/html/hubzilla
            ProxyPassMatch ^/(.*\.php(/.*)?)$ unix:/run/php/php8.1-fpm.sock|fcgi://localhost/var/www/html/hubzilla
            <Directory /var/www/html/>
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/hubzilla_error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/hubzilla_access.log combined
        </VirtualHost>
      3. Enable the new site.

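        Use the a2ensite utility with the new configuration file:

        sudo a2ensite hubzilla.conf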
      4. Optional: For extra security, disable the default Apache site.

        sudo a2dissite 000-default.conf
      5. Restart Apache and verify its status.

        sudo systemctl restart apache2
        sudo systemctl status apache2

      Installing and Configuring Hubzilla

      The LAMP stack is now fully configured and ready for Hubzilla. Use git to download Hubzilla and install the add-ons. Before Hubzilla is ready for use, some permissions must also be changed. A cron job for proper Hubzilla maintenance must also be added.

      1. Change directory to /var/www/html.

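        cd /var/www/html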
      2. Use git to clone the latest release of Hubzilla from the code base.

        sudo git clone https://framagit.org/hubzilla/core.git hubzilla
      3. Change to the hubzilla directory and install the add-ons.

        cd hubzilla
        sudo util/add_addon_repo https://framagit.org/hubzilla/addons addons-official
      4. Create the store directory and ensure it is writable.

        sudo mkdir -p "store/[data]/smarty3"
      5. Change ownership and permissions for the hubzilla directory.

        sudo chown -R www-data:www-data /var/www/html/hubzilla/
        sudo chmod -R 755 /var/www/html/hubzilla/
      6. Optional: To access Hubzilla using HTTPS, which Hubzilla recommends, install certbot and use it to request and install a Let’s Encrypt certificate. For further instructions, see the Linode guide on How to Use Certbot to Enable HTTPS.

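        A typical sequence on Ubuntu, assuming the certbot Apache plugin is used and example.com is replaced with the Hubzilla domain, resembles the following:

        sudo apt install certbot python3-certbot-apache -y
        sudo certbot --apache -d example.com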
      7. Add a cron task to update the site every 10 minutes. Run the crontab -e command to edit the list of root cron jobs.

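        To edit the root user’s crontab, prefix the command with sudo:

        sudo crontab -e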
      8. Add the following line to the end of the cron file. /usr/bin/php represents the path to the PHP installation. Before proceeding, confirm the location of PHP using the command which php.

        */10 * * * *    cd /var/www/html/hubzilla; /usr/bin/php Zotlabs/Daemon/Master.php Cron > /dev/null 2>&1

      Configuring Hubzilla

      To initialize Hubzilla and get it ready for deployment, use the simple web interface. Follow these steps to configure Hubzilla.

      1. Using a web browser, navigate to the domain name of the Hubzilla server. Use the https prefix if TLS is enabled, otherwise use http.

      2. The web server displays the Hubzilla Server - Setup page. Ensure all options are selected and click Next.

        Hubzilla Server Setup

      3. Hubzilla displays the Database Connection page. Enter the login name, password, and database name for the MariaDB database. The login name and database name are both hubzilla. For the password, enter the password for the hubzilla user. Select the Submit button to proceed.

        Hubzilla Database Setup

      4. Hubzilla proceeds to the Site settings page. Enter an email address for the site administrator and select the default time zone. The Website URL must be set to the domain name for the Hubzilla server. Click Submit to continue.

        Note

        The administration email must contain a valid email address. Hubzilla sends a verification email to the account to confirm the registration.

        Hubzilla Site Settings

      5. Hubzilla confirms the installation is successful. This page also provides a link to the registration page to register as a new member. You must register for an account before you can use Hubzilla. Click the link to register now. To register later, visit the domain name of the Hubzilla server.

        Hubzilla Setup Confirmation

      6. On the Registration page, enter an account name, a short nickname to represent the channel on the site, a valid email address, and a password. If registering as an administrator, use the administrator email provided earlier in the Hubzilla site settings. Select the checkbox to agree to the terms of service and click Register.

        Note

        After selecting Register, Hubzilla must send an email to the email address of the account. The Linode must have email turned on to complete this process.

        For non-production Hubzilla testing, it is possible to remove the mail server requirements by editing a Hubzilla configuration file. To do so, open the .htconfig.php file located at /var/www/html/hubzilla and change the 1 in the following line:

        File: /var/www/html/hubzilla/.htconfig.php
        App::$config['system']['verify_email'] = 1;

        To instead be a 0:

        File: /var/www/html/hubzilla/.htconfig.php
        App::$config['system']['verify_email'] = 0;

        This configuration change is not recommended on production installations.

        Hubzilla Registration

      7. Hubzilla sends an email with a verification token to the account address. Enter the registration token on the next page to complete the registration.

        Hubzilla Verification

      For information on how to use Hubzilla’s features, see the Hubzilla user guide.

      Conclusion

      Hubzilla is a federated web application that allows users to create shared, interconnected web pages, blogs, social media feeds, and other interactive material. Each Hubzilla hub is connected to other hubs to form a grid. Hubzilla is based around channels. A channel can represent either users or web entities. Hubzilla uses nomadic identities, which allow users to access different hubs through a single server-independent account. A fine-grained access control system determines who can create or view channel content.

      To use Hubzilla, first install and configure a LAMP stack, including Apache, MariaDB, and PHP. Then use git to install Hubzilla. Configure Hubzilla and create an administrator account using Hubzilla’s online interface. For more information on Hubzilla, see the Hubzilla website.


      How to Install Chef on Ubuntu 20.04


      Chef is a free and open source Infrastructure as Code (IaC) application. It’s a configuration management system that allows administrators to provision and manage infrastructure using automation. A complete Chef workflow includes one or more Chef Workstations, a Chef Server, and a set of nodes. This guide provides some background on how Chef works, and explains how to install and configure Chef on Ubuntu 20.04.

      What is Chef?

      Chef is an IaC application for automating and streamlining the process of provisioning, configuring, deploying, and managing network nodes. It allows for continuous deployment and an automated environment. Chef can manage many types of components, including servers, containers, and networking infrastructure.

      Chef operates using a hub-and-spoke architecture, with the master Chef Server at the center. One or more Chef Workstations interact with the Server, which automates the configuration of one or more Chef nodes. Configuration assets move from the workstation to the server and finally to the nodes. Workstations cannot interact with the nodes directly. The Chef infrastructure consists of the following components.

      • Chef Workstation: A workstation is a server for creating and testing configuration code. The code is then pushed to the Chef Server. Several workstations can interact with the same server, but each workstation only links to one server. The
        Chef Workstation documentation contains more information on how to use the workstation.
      • Chef Server: The Chef Server is the “command center” for the entire system. It stores and maintains all the configuration files, code, and scripts. A Chef Server includes many components, including a web server and a PostgreSQL database. It is responsible for pushing the relevant assets to the various nodes and keeping track of the nodes under its management. Each server is efficient and robust, and can manage a large number of nodes.
      • Chef Node: The Chef Server deploys and manages a node using assets developed on the Chef Workstation. Each node is administered by a single Chef Server. Every node runs the Chef client, which queries the server for updates and keeps the node up to date.

      The following illustration indicates the relationship between the three parts of the Chef system.

      The workstations use Chef commands, such as the knife directive, to interact with the server. Chef incorporates extra security and authentication into all of its operations, using public key encryption. However, the Chef system is complex and has a high learning curve.

      Chef uses an idiosyncratic terminology based on cooking vocabulary. Some of the more important terms include the following:

      • Attribute: Specifies a value for an item on a node.
      • Bookshelf: Stores the various cookbooks and assets on a Chef Server using version control.
      • Chef-client: Runs on the node, and is responsible for verifying whether the node is up-to-date with the assets stored on the server.
      • Chef-repo: A directory on the Chef Workstation that contains the local cookbooks and configuration files.
      • Cookbook: The primary method of managing nodes. It contains information describing the final state of a node. The Chef server and node use the cookbook to guide configuration. Cookbooks contain recipes, along with attributes, libraries, templates, and scripts. These cookbooks can be developed on the workstation or downloaded from the
        Chef Supermarket.
      • Environment: Collects nodes into groups to better organize them. Similar configurations and scripts can be applied to the entire group.
      • Knife: A Chef Workstation uses the knife tool to correspond with the Chef Server. A knife command usually takes the format knife subcommand [ARGUMENT] (options).
      • Recipe: A recipe is contained within a cookbook. It explains the resources to add, change, or run on the node. Recipes are written in Ruby.
      • Resource: A resource is part of a recipe. It contains a type, name, and list of key-value pairs for a component.
      • Test Kitchen: This is a workstation module to help users test recipes before deployment.

      Linode has a helpful
      Beginner’s Guide to Chef. For detailed information about Chef, see the
      Chef documentation. Chef also makes the
      Learn Chef training resource available.

      Before You Begin

      1. If you have not already done so, create a Linode account and Compute Instance. See our
        Getting Started with Linode and
        Creating a Compute Instance guides.

      2. Follow our
        Setting Up and Securing a Compute Instance guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.

      3. At least three Linode systems running Ubuntu 20.04 are required to implement a Chef system. One server is for the Chef Workstation, the second is for the Chef Server, and a third represents a node under administration. Due to Chef’s memory demands, the Chef Server requires an 8GB Linode. The other two servers can be 2GB Linodes. Both the Chef Server and Chef Workstation should be configured using the previous instructions. Chef is used to set up the target node.

      4. Ensure all Linode servers are updated using the following command.

        sudo apt update && sudo apt upgrade
      5. Assign a domain name to the Chef Server. For information on domain names and pointing the domain name to a Linode, see the
        Linode DNS Manager guide.

      6. Configure the host name of the Chef Server so it matches the domain name. This allows SSL certificate allocation to proceed normally. To set the host name of an Ubuntu server, use the command sudo hostnamectl set-hostname <hostname>, replacing <hostname> with the actual name of your domain.

      Note

      This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you are not familiar with the sudo command, see the
      Users and Groups guide.

      How to Install and Configure the Chef Server

      Because the Chef Server operates as a hub for the entire system, it is best to install and configure it first. The Chef Server uses a high amount of resources, so it is important to use a dedicated Linode with at least 8GB of memory.

      How to Install the Chef Server

      The Chef Server Core can be downloaded using wget. The following steps demonstrate how to download the latest release of Chef Server for Ubuntu 20.04. For other releases of Ubuntu, see the
      Chef download page. For more detailed instructions, see the
      Chef Server installation page. To install the Chef Server, follow these steps.

      1. Download the Chef Server core using wget.

        wget https://packages.chef.io/files/stable/chef-server/15.1.7/ubuntu/20.04/chef-server-core_15.1.7-1_amd64.deb
      2. Install the server core.

        sudo dpkg -i chef-server-core_*.deb
        Selecting previously unselected package chef-server-core.
        (Reading database ... 108635 files and directories currently installed.)
        Preparing to unpack chef-server-core_15.1.7-1_amd64.deb ...
        Unpacking chef-server-core (15.1.7-1) ...
        Setting up chef-server-core (15.1.7-1) ...
        Thank you for installing Chef Infra Server!
      3. For better security and to preserve server space, remove the downloaded .deb file.

        rm chef-server-core_*.deb
      4. Start the Chef server. Answer yes when prompted to accept the product licenses.

        Note

        The installation process takes several minutes to complete. Upon a successful installation, the message Chef Infra Server Reconfigured! is displayed.

        sudo chef-server-ctl reconfigure

      How to Configure a Chef User and Organization

      To use Chef, configure an organization and at least one user on the Chef Server. This enables server access for workstations and nodes. To create these accounts, follow these steps.

      1. Create a .chef directory to store the keys. This should be a subdirectory located inside the home directory.

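        mkdir ~/.chef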
      2. Use the chef-server-ctl command to create a user account for the Chef administrator. Additional user accounts can be created later. Replace the USER_NAME, FIRST_NAME, LAST_NAME, EMAIL, and PASSWORD fields with the relevant information. For the --filename argument, replace USER_NAME.pem with the user name used earlier in the command.

        sudo chef-server-ctl user-create USER_NAME FIRST_NAME LAST_NAME EMAIL 'PASSWORD' --filename ~/.chef/USER_NAME.pem
      3. Review the user list and confirm the account now exists.

        sudo chef-server-ctl user-list
        USER_NAME
      4. Create a new organization, also using the chef-server-ctl command. Replace ORG_NAME and ORG_FULL_NAME with the actual name of the organization. The ORG_NAME field must be all lower case. The value for USER_NAME must be the same name used in the user-create command. For the --filename argument, in ORG_NAME.pem, replace ORG_NAME with the organization name used elsewhere in the command.

        sudo chef-server-ctl org-create ORG_NAME "ORG_FULL_NAME" --association_user USER_NAME --filename ~/.chef/ORG_NAME.pem
      5. List the organizations to confirm the new organization is successfully created.

        sudo chef-server-ctl org-list
        ORG_NAME

      How to Install and Configure a Chef Workstation

      A Chef Workstation is where users create and test recipes. Any Linode with at least 2GB of memory can be used for this task. Unlike the Chef Server, a workstation can also be used for other tasks. However, in a larger organization hosting many users, it is often efficient to centralize workstation activities on one server hosting multiple accounts.

      How to Install a Chef Workstation

      The steps for installing a Chef Workstation are similar to those for installing the Server. Download the correct file using wget, then install it. To install a Chef Workstation, follow these steps.

      1. Download the source files for the Chef Workstation. For different releases of the Workstation, or downloads for earlier releases, see the
        Chef Workstation Downloads page. For more information on the installation process, see the
        Chef Workstation Installation Documentation.

        wget https://packages.chef.io/files/stable/chef-workstation/22.10.1013/ubuntu/20.04/chef-workstation_22.10.1013-1_amd64.deb
      2. Install the Chef Workstation.

        sudo dpkg -i chef-workstation_*.deb
        Thank you for installing Chef Workstation!
      3. Remove the source file.

        rm chef-workstation_*.deb
      4. Confirm the correct release of the Chef Workstation is installed.

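        The chef -v command displays the versions of the installed components:

        chef -v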
        Chef Workstation version: 22.10.1013
        Chef Infra Client version: 17.10.0
        Chef InSpec version: 4.56.20
        Chef CLI version: 5.6.1
        Chef Habitat version: 1.6.521
        Test Kitchen version: 3.3.2
        Cookstyle version: 7.32.1

      How to Configure a Chef Workstation

      A few more items must be configured before the Workstation is operational. Tasks include generating a repository, editing the hosts file, and creating a subdirectory. To fully configure the workstation, follow these steps.

      1. Generate the chef-repo repository. This directory stores the Chef cookbooks and recipes. Enter yes when asked whether to accept the product licenses.

        chef generate repo chef-repo
        Your new Chef Infra repo is ready! Type `cd chef-repo` to enter it.
      2. Edit the /etc/hosts file. This file contains mappings between host names and their IP addresses. Add an entry for the Chef Server, containing the name of the server, which is also the domain name, and its IP address. In this example, this is indicated in the line 192.0.1.0 example.com. There must also be an entry for the local server. This is the 192.0.2.0 chefworkstation line in the example. This entry must contain the local IP address and the hostname of the server hosting the Chef Workstation. The file should resemble the following example.

        File: /etc/hosts
        127.0.0.1 localhost
        192.0.1.0 example.com
        192.0.2.0 chefworkstation
      3. Create a .chef subdirectory. This is where the knife file is stored, along with files for encryption and security.

        mkdir ~/chef-repo/.chef
        cd chef-repo

      How to Add RSA Private Keys

      RSA private keys enable better security between the Chef Server and associated workstations through the use of encryption. Earlier, RSA private keys were created on the Chef Server. Copying these keys to a workstation allows it to communicate with the server. To enable encryption using RSA private keys, follow these steps.

      Note

      SSH password authentication must be enabled on the Chef Server to complete the key exchange. If SSH password authentication has been disabled for better security, enable it again before proceeding. After the keys have been retrieved and added to the workstation, SSH password authentication can be disabled again. See the Linode guide to
      How to Secure Your Server for more information.
      1. On the workstation, generate an RSA key pair. This key can be used to initially access the Chef server to copy over the private encryption files.

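        Use the ssh-keygen utility to generate the key pair:

        ssh-keygen -t rsa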
        Generating public/private rsa key pair.
        Enter file in which to save the key (/home/username/.ssh/id_rsa):
      2. Hit the Enter key to accept the default file names id_rsa and id_rsa.pub. Ubuntu stores these files in the /home/username/.ssh directory.

        Created directory '/home/username/.ssh'.
        Enter passphrase (empty for no passphrase):
      3. Enter a password when prompted, then enter it again. An identifier and public key are saved to the directory.

        Your identification has been saved in /home/username/.ssh/id_rsa
        Your public key has been saved in /home/username/.ssh/id_rsa.pub
      4. Copy the new public key from the workstation to the Chef Server. In the following command, use the account name for the Chef Server along with its IP address.

        ssh-copy-id username@192.0.1.0
      5. Use the scp command to copy the .pem files from the Chef Server to the workstation. In the following example, replace username with the user account for the Chef Server and 192.0.1.0 with the actual Chef Server IP address.

        scp username@192.0.1.0:~/.chef/*.pem ~/chef-repo/.chef/
        Enter passphrase for key '/home/username/.ssh/id_rsa':
        username.pem                                                                       100% 1674     1.7MB/s   00:00
        testcompany.pem                                                                 100% 1678     4.7MB/s   00:00
      6. List the contents of the .chef subdirectory to ensure the .pem files were successfully copied.

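        ls ~/chef-repo/.chef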
        username.pem  testcompany.pem

      How to Configure Git on a Chef Workstation

      A version control system helps the Chef Workstation track any changes to the cookbooks and restore earlier versions if necessary. This example uses Git, which is compatible with the Chef system. The following steps explain how to configure Git, initialize a Git repository, add new files, and commit them.

      1. Configure Git using the git config command. Replace username and user@email.com with your own values.

        git config --global user.name username
        git config --global user.email user@email.com
      2. Add the .chef directory to the .gitignore file. This ensures system and auto-generated files are not shown in the output of git status and other Git commands.

        echo ".chef" > ~/chef-repo/.gitignore
      3. Ensure the chef-repo directory is the current working directory. Add and commit the existing files using git add and git commit.

        cd ~/chef-repo
        git add .
        git commit -m "initial commit"
        [master (root-commit) a3208a3] initial commit
        13 files changed, 343 insertions(+)
        create mode 100644 .chef-repo.txt
        ...
        create mode 100644 policyfiles/README.md
      4. Run the git status command to ensure all files have been committed.

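        git status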
        On branch master
        nothing to commit, working tree clean

      How to Generate a Chef Cookbook

      To generate a new Chef cookbook, use the chef generate command.

      chef generate cookbook my_cookbook

      How to Configure the Knife Utility

      The Chef Knife utility helps a Chef workstation communicate with the server. It provides a method of managing cookbooks, nodes, and the Chef environment. Chef uses the config.rb file in the .chef subdirectory to store the Knife configuration. To configure Knife, follow these steps.

      1. Create a config.rb file in the ~/chef-repo/.chef directory. This example uses vi, but any text editor can be used.

        cd ~/chef-repo/.chef
        vi config.rb
      2. Use the following config.rb file as an example of how to configure Knife. Copy this sample configuration to the file.

        File: ~/chef-repo/.chef/config.rb
        current_dir = File.dirname(__FILE__)
        log_level                :info
        log_location             STDOUT
        node_name                'node_name'
        client_key               "USER.pem"
        validation_client_name   'ORG_NAME-validator'
        validation_key           "ORG_NAME-validator.pem"
        chef_server_url          'https://example.com/organizations/ORG_NAME'
        cache_type               'BasicFile'
        cache_options( :path => "#{ENV['HOME']}/.chef/checksums" )
        cookbook_path            ["#{current_dir}/../cookbooks"]
      3. Make the following changes:

        • node_name must be the name of the user account created when configuring the Chef Server.
        • For client_key, replace USER with the user name associated with the .pem file, followed by .pem.
        • validation_client_name requires the same ORG_NAME used when creating the organization followed by -validator.
        • the validation_key field must contain the name used for ORG_NAME when the organization was created, followed by -validator.pem.
        • For chef_server_url, change example.com to the name of the domain. Follow the domain name with /organizations/ and the ORG_NAME used when creating the organization.
        • Leave the remaining fields unchanged.
      4. Move back to the chef-repo directory and fetch the necessary SSL certificates from the server using the knife ssl fetch command.

        Note

        The SSL certificates were generated when the Chef server was installed. The certificates are self-signed. This means a certificate authority has not verified them. Before fetching the certificates, log in to the Chef server and ensure the hostname and fully qualified domain name (FQDN) are the same. These values can be confirmed using the commands hostname and hostname -f.

        Knife has no means to verify these are the correct certificates. You should verify the authenticity of these certificates after downloading.
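
        cd ~/chef-repo
        knife ssl fetch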
        Adding certificate for example.com in /home/username/chef-repo/.chef/trusted_certs/example.com.crt
      5. To confirm the config.rb file is correct, run the knife client list command. The relevant validator name should be displayed.

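        knife client list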
        testcompany-validator

      How to Bootstrap a Node

      At this point, both the Chef Server and Chef Workstation are configured. They can now be used to bootstrap the node. The bootstrap process installs the chef client on the node and performs validation. The node can then retrieve any necessary updates from the Chef Server. To bootstrap the node, follow these steps.

      1. Log in to the target node, which is the node to be bootstrapped, and edit the /etc/hosts file. Add entries for the node, the Chef server domain name, and the workstation. The file should resemble the following example, using the actual names of the Chef Server, workstation, and the target node, along with their IP addresses.

        File: /etc/hosts
        127.0.0.1 localhost
        192.0.100.0 targetnode
        192.0.2.0 chefworkstation
        192.0.1.0 example.com
      2. Return to the Linode hosting the Chef Workstation and change the working directory to ~/chef-repo/.chef.

      3. Bootstrap the node using the knife bootstrap command. Specify the IP address of the target node for node_ip_address. This is the address of the node to bootstrap. In the following example, use the actual user name and password for the account in place of username and password. Enter the name of the node in place of nodename. Answer Y when asked “Are you sure you want to continue connecting”.

        Note

        The option to bootstrap using key-pair authentication no longer appears to be supported.

        knife bootstrap node_ip_address -U username -P password --sudo --use-sudo-password --node-name nodename
      4. Confirm the node has been successfully bootstrapped. List the client nodes using the knife client list command. All bootstrapped nodes should be listed.

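        knife client list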
        target-node
        testcompany-validator
      5. Add the bootstrapped node to the workstation /etc/hosts file as follows. Replace 192.0.100.0 targetnode with the IP address and name of the bootstrapped node.

        File: /etc/hosts
        127.0.0.1 localhost
        192.0.1.0 example.com
        192.0.2.0 chefworkstation
        192.0.100.0 targetnode

      How to Download and Apply a Cookbook (Optional)

      A cookbook is the most efficient way of keeping target nodes up to date. In addition, a cookbook can delete the validation.pem file that was created on the node when it was bootstrapped. It is important to delete this file for security reasons.

      It is not mandatory to download or create cookbooks to use Chef. But this section provides a brief example of how to download a cookbook and apply it to a node.

      1. On the Chef workstation, change to the ~/chef-repo/.chef directory.

      2. Download the cron-delvalidate cookbook from the Chef Supermarket. For more information on the supermarket command, see the
        Chef supermarket documentation.

        knife supermarket download cron-delvalidate
        Downloading cron-delvalidate from Supermarket at version 0.1.3 to /home/username/chef-repo/.chef/cron-delvalidate-0.1.3.tar.gz
        Cookbook saved: /home/username/chef-repo/.chef/cron-delvalidate-0.1.3.tar.gz
      3. If the cookbook is downloaded as a .tar.gz file, use the tar command to extract it. Copy the extracted directory to the cookbooks directory.

        tar -xf cron-delvalidate-0.1.3.tar.gz
        cp -r  cron-delvalidate ~/chef-repo/cookbooks/
      4. Review the cookbook’s default.rb file to see the recipe. This recipe is written in Ruby and demonstrates how a typical recipe is structured. It defines a cron resource named clientrun, which creates a cron job that runs the chef-client command on an hourly basis. The recipe also removes the extraneous validation.pem file.

        File: ~/chef-repo/cookbooks/cron-delvalidate/recipes/default.rb
        #
        # Cookbook Name:: cron-delvalidate
        # Recipe:: Chef-Client Cron & Delete Validation.pem
        #
        #
        cron "clientrun" do
          minute '0'
          hour '*/1'
          command "/usr/bin/chef-client"
          action :create
        end
        
        file "/etc/chef/validation.pem" do
          action :delete
        end
      5. Add the recipe to the run list for the node. In the following command, replace nodename with the name of the node.

        knife node run_list add nodename 'recipe[cron-delvalidate::default]'
        nodename:
          run_list: recipe[cron-delvalidate::default]
      6. Upload the cookbook and its recipes to the Chef Server.

        knife cookbook upload cron-delvalidate
        Uploading cron-delvalidate [0.1.3]
        Uploaded 1 cookbook.
      7. Run the chef-client command on the node using the knife ssh utility. This command causes the node to pull the recipes in its run list from the server. It also determines whether there are any updates. The Chef Server transmits the recipes to the target node. When the recipe runs, it deletes the file and installs a cron job to keep the node up to date in the future. In the following command, replace nodename with the actual name of the target node. Replace username with the name of a user account with sudo access. Enter the password for the account when prompted to do so.

        knife ssh 'name:nodename' 'sudo chef-client' -x username
        nodename Chef Infra Client, version 17.10.3
        nodename Patents: https://www.chef.io/patents
        nodename Infra Phase starting
        nodename Resolving cookbooks for run list: ["cron-delvalidate::default"]
        nodename Synchronizing cookbooks:
        nodename   - cron-delvalidate (0.1.3)
        nodename Installing cookbook gem dependencies:
        nodename Compiling cookbooks...
        nodename Loading Chef InSpec profile files:
        nodename Loading Chef InSpec input files:
        nodename Loading Chef InSpec waiver files:
        nodename Converging 2 resources
        nodename Recipe: cron-delvalidate::default
        nodename   * cron[clientrun] action create
        nodename     - add crontab entry for cron[clientrun]
        nodename   * file[/etc/chef/validation.pem] action delete (up to date)
        nodename
        nodename Running handlers:
        nodename Running handlers complete
        nodename Infra Phase complete, 1/2 resources updated in 03 seconds

      Conclusion

      Chef is an infrastructure as code (IaC) application for automating the deployment and management of infrastructure nodes. The Chef architecture consists of the Chef Server, which stores all the procedures, and a Chef Workstation, where the infrastructure code is developed. The managed nodes communicate with the server to receive updates. To use Chef, install the Chef Server and Chef Workstation software. Share RSA keys between the server and workstation, and install version control and the Chef Knife utility on the workstation. Bootstrap the target nodes using the knife bootstrap utility. After a node is bootstrapped, it is possible to download cookbooks and apply their recipes to each node through its run list. For more information, see the
      Chef documentation.


      How to Install Podman for Running Containers


      Podman is an open source containerization tool. Like Docker, Podman is a solution for creating, running, and managing containers. But Podman goes beyond Docker, using a secure daemonless process to run containers in rootless mode.

      For more on what Podman is and how it compares to Docker, you can refer to our guide
      Podman vs Docker. The guide familiarizes you with the basics of Podman and Docker and compares and contrasts the two tools.

      In this tutorial, learn everything you need to install and start using Podman on your Linux system. By the end, you should be able to run and manage containers using Podman.

      Before You Begin

      1. Familiarize yourself with our
        Getting Started with Linode guide, and complete the steps for setting your Linode’s hostname and timezone.

      2. This guide uses sudo wherever possible. Complete the sections of our
        How to Secure Your Server guide to create a standard user account, harden SSH access, and remove unnecessary network services.

      3. Update your system.

        • Debian or Ubuntu:

          sudo apt update && sudo apt upgrade
          
        • AlmaLinux, CentOS Stream, Fedora, or Rocky Linux:

          sudo dnf upgrade
          

      Note

      This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you’re not familiar with the sudo command, see the
      Users and Groups guide.

      How to Install Podman

      1. Podman is available through the default package managers on most Linux distributions.

        • AlmaLinux, CentOS Stream, Fedora, or Rocky Linux:

          sudo dnf install podman
          
        • Debian or Ubuntu:

          sudo apt install podman
          

          Note

          Podman is only available through the APT package manager for Debian 11 or Ubuntu 20.10 and later.

      2. Afterward, verify your installation by checking the installed Podman version:

        podman -v
        

        Your output may vary from what is shown here, but you are just looking to see that Podman installed successfully:

        podman version 4.1.1

      Configuring Podman for Rootless Usage

      Podman operates using root privileges by default, for instance by prefixing commands with sudo. However, Podman is also capable of running in rootless mode, an appealing feature when you want limited users to execute container actions securely.

      Docker can allow you to run commands as a limited user, but the Docker daemon still runs as root. This is a potential security issue with Docker, one that may allow limited users to execute privileged commands through the Docker daemon.

      Podman solves this with the option of a completely rootless setup, where containers operate in a non-root environment. Below you can find the steps to set up your Podman instance for rootless usage.

      1. Install the slirp4netns and fuse-overlayfs tools to support your rootless Podman operations.

        • AlmaLinux, CentOS Stream, Fedora, or Rocky Linux:

          sudo dnf install slirp4netns fuse-overlayfs
          
        • Debian or Ubuntu:

          sudo apt install slirp4netns fuse-overlayfs
          
      2. Add subuid and subgid ranges for your limited user. This example does so for the user example-user. It gives that user a starting sub-UID and sub-GID of 100000, each with a range of 65536 IDs:

        sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 example-user
        

      With Podman installed, everything is ready for you to start running containers with it. These next sections walk you through the major features of Podman for finding container images and running and managing containers.

      Getting an Image

      Podman offers a few methods for procuring container images, which you can follow along with below. These sections also give you a couple of images to start with, which are used in later sections for further examples.

      Searching for Images

      Perhaps the most straightforward way to get started with a container is by finding an existing image in a registry. With Podman’s search command, you can find matching images in any container registries you have set up.

      Note

      Podman may come with some registries configured by default. However, on some systems, it may first be necessary to configure these registries manually. You can do this by opening the /etc/containers/registries.conf file with your preferred text editor and adding a line like the following to the end:

      unqualified-search-registries=['registry.access.redhat.com', 'registry.fedoraproject.org', 'docker.io', 'quay.io']
      

      You can replace the registries listed here with ones that you would like to look for container images on.

      Podman’s GitHub also has a registries.conf file
      here that you can use as an initial reference.

      This example searches for images matching the term buildah:

      podman search buildah
      

      Keep in mind that your matches may differ depending on the registries your Podman instance is configured with:

      NAME                                                            DESCRIPTION
      registry.access.redhat.com/ubi8/buildah                         Containerized version of Buildah
      registry.access.redhat.com/ubi9/buildah                         rhcc_registry.access.redhat.com_ubi9/buildah
      registry.redhat.io/rhel8/buildah                                Containerized version of Buildah
      registry.redhat.io/rhel9/buildah                                rhcc_registry.access.redhat.com_rhel9/builda...
      [...]

      Downloading an Image

      After searching the registries, you can use Podman to download, or pull, a particular image. This can be accomplished with Podman’s pull command followed by the name of the container image:

      podman pull buildah
      

      As the search output shows, there may be multiple registries matching a given container image:

      Resolved "buildah" as an alias (/etc/containers/registries.conf.d/shortnames.conf)
      Trying to pull quay.io/buildah/stable:latest...
      Getting image source signatures
      [...]

      But you can also be more specific. You can specify the entire image name, with the registry path, to pull from a specific location.

      For instance, this next example pulls the Buildah image from the docker.io registry:

      podman pull docker.io/buildah/buildah
      

      As you can see, it skipped the part where it resolves the shortname alias and pulls the Buildah image directly from the specified source:

      Trying to pull docker.io/buildah/buildah:latest...
      Getting image source signatures
      [...]

      Building an Image

      Like Docker, Podman also gives you the ability to create a container image from a file. Typically, this build process uses the Dockerfile format, though Podman supports the Containerfile format as well.

      You can learn more about crafting Dockerfiles in our guide
      How to Use a Dockerfile to Build a Docker Image. This guide also includes links to further tutorials with more in-depth coverage of Dockerfiles.

      But for now, as an example to see Podman’s build capabilities in action, you can use the following Dockerfile:

      File: Dockerfile
      # Base on the most recently released Fedora
      FROM fedora:latest
      MAINTAINER ipbabble email buildahboy@redhat.com # not a real email
      
      # Install updates and httpd
      RUN echo "Updating all fedora packages"; dnf -y update; dnf -y clean all
      RUN echo "Installing httpd"; dnf -y install httpd && dnf -y clean all
      
      # Expose the default httpd port 80
      EXPOSE 80
      
      # Run the httpd
      CMD ["/usr/sbin/httpd", "-DFOREGROUND"]

      Place these contents in a file named Dockerfile. Then, working from the same directory the file is stored in, you can use the following Podman command to build an image from the file:

      podman build -t fedora-http-server .
      

      The -t option allows you to give the image a tag, or name – fedora-http-server in this case. The . at the end of the command specifies the directory in which the Dockerfile can be found, where a . represents the current directory.

      Keep reading onto the section below titled
      Running a Container Image to see how you can run a container from an image built as shown above.

      Podman’s build command works much like Docker’s, but is actually a subset of the build functionality within Buildah. In fact, Podman uses a portion of Buildah’s source code to implement its build function.

      Buildah offers more features and fine-grained control when it comes to building containers. For that reason, many see Podman and Buildah as complementary tools. Buildah provides a robust tool for crafting container images from both container files (e.g. Dockerfiles) and from scratch. Podman then excels at running and managing the resulting containers.
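
      For comparison, a roughly equivalent Buildah invocation for the Dockerfile above might look like the following sketch; buildah bud (build-using-dockerfile) is Buildah’s counterpart to podman build, and recent Buildah releases also accept buildah build:

      buildah bud -t fedora-http-server .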

      You can learn more about Buildah, including steps for setup and usage, in our guide
      How to Use Buildah to Build OCI Container Images.

      Listing Local Images

      Once you have one or more images locally on your system, you can see them using Podman’s images command. This gives you a list of images that have been created or downloaded onto your system:

      podman images
      

      After following the two sections above on downloading and then building container images, you can expect output similar to the following:

      REPOSITORY                         TAG         IMAGE ID      CREATED       SIZE
      localhost/fedora-http-server       latest      f6f5a66c8a4d  2 hours ago   328 MB
      quay.io/buildah/stable             latest      eef9e8be5fea  2 hours ago   358 MB
      registry.fedoraproject.org/fedora  latest      3a66698e6040  2 hours ago   169 MB

      Running a Container Image

      With images either downloaded or created, you can begin using Podman to run containers.

      The process can be relatively straightforward with Podman’s run command, which just takes the name of the image to run a container from.

      Here is an example using the Buildah image downloaded above. This example runs the Buildah image, specifically executing the buildah command in the resulting container:

      podman run buildah buildah -v
      

      The -v option is included to output the version of the application:

      buildah version 1.26.2 (image-spec 1.0.2-dev, runtime-spec 1.0.2-dev)

      Container operations can get more complicated from there, and Podman has plenty of features to support a wide range of needs when it comes to running containers.

      Take the fedora-http-server example created from a Dockerfile above. This image runs an HTTP server on the container’s port 80. The following command demonstrates how Podman lets you control how that container operates.

      The command runs the container, which automatically starts up the HTTP server. The -p option given here publishes the container’s port 80 to the local machine’s port 8080, while the --rm option automatically removes the container when it stops running, a fitting setup for a quick test.

      podman run -p 8080:80 --rm fedora-http-server
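
      Because httpd runs in the foreground, this command occupies your terminal while the container is up. If you prefer to keep that terminal free, you can run the container detached with the -d option instead; the container name fedora-web in this sketch is just an illustrative choice:

      podman run -d --rm -p 8080:80 --name fedora-web fedora-http-server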
      

      Now, on the machine where the container is running (from a separate terminal, if you ran the container in the foreground), use a cURL command to verify that the default web page is being served on port 8080:

      curl localhost:8080
      

      You should see the HTML of the Fedora HTTP Server Test Page:

      <!doctype html>
      <html>
        <head>
          <meta charset='utf-8'>
          <meta name='viewport' content='width=device-width, initial-scale=1'>
          <title>Test Page for the HTTP Server on Fedora</title>
          <style type="text/css">
            /*<![CDATA[*/
      
            html {
              height: 100%;
              width: 100%;
            }
              body {
      [...]

      Managing Containers and Images

      Podman prioritizes effectively running and managing containers. As such, it comes with plenty of commands for keeping track of and operating your containers.

      These next several sections walk through some of the most useful Podman operations, and can help you get the most out of your containers.

      Listing Containers

      Those working with containers often keep one or more containers, sometimes several, running in the background.

      To keep track of these containers, you can use Podman’s ps command. This lists the currently running containers on your system.

      For instance, if you are in the process of running the fedora-http-server container shown above, you can expect something like:

      podman ps
      
      CONTAINER ID  IMAGE                                COMMAND               CREATED        STATUS            PORTS                 NAMES
      daadb647b880  localhost/fedora-http-server:latest  /usr/sbin/httpd -...  8 seconds ago  Up 8 seconds ago  0.0.0.0:8080->80/tcp  suspicious_goodall

      And if you want to list all containers, not just the ones that are currently running, you can add the -a option to the command:

      podman ps -a
      

      The output of this command also includes the exited container from the buildah -v command executed using podman run further above:

      CONTAINER ID  IMAGE                                COMMAND               CREATED             STATUS                     PORTS                 NAMES
      db71818eda38  quay.io/buildah/stable:latest        buildah -v            12 minutes ago      Exited (0) 12 minutes ago                        exciting_kowalevski
      daadb647b880  localhost/fedora-http-server:latest  /usr/sbin/httpd -...  About a minute ago  Up About a minute ago      0.0.0.0:8080->80/tcp  suspicious_goodall

      Starting and Stopping Containers

      Podman can individually control when to stop and start containers, using the stop and start commands, respectively. Each of these commands takes either the container ID or container name as an argument, both of which you can find using the ps command, as shown above.

      For example, you can stop the fedora-http-server container above with:

      podman stop daadb647b880
      

      Had this container been run without the --rm option, which automatically removes the container when it has stopped running, you could start the container back up simply with:

      podman start daadb647b880
      

      For either command, you can substitute the container name for its ID, like so:

      podman stop suspicious_goodall
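
      Podman also provides a restart command, which stops and then starts a container in one step and likewise accepts either the ID or the name:

      podman restart suspicious_goodall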
      

      Removing a Container

      You can manually remove a container using Podman’s rm command, which, like the stop and start commands, takes either a container ID or name as an argument.

      podman rm daadb647b880
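
      Images can be cleaned up in much the same way with Podman’s rmi command, which takes an image name or ID. For instance, this sketch removes the locally built image from earlier, which works once no containers reference it:

      podman rmi localhost/fedora-http-server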
      

      Creating an Image from a Container

      Podman can capture a container as a new image using the commit command. This can be used to manually create an updated container image after components have been added to, removed from, or modified on a container.

      Like other container-related commands, this command takes the container ID or name as an argument. It’s also good practice to include an author name along with the commit, via the --author option:

      podman commit --author "Example User" daadb647b880
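
      You can also pass an image name as an optional second argument so the committed image is easier to find later; the name fedora-http-server-modified here is just an illustrative choice:

      podman commit --author "Example User" daadb647b880 fedora-http-server-modified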
      

      As noted in the section above on creating images with Podman, Buildah tends to offer more features and control when it comes to creating container images. But Podman is certainly capable in many cases and may be enough to fit your given needs.

      Conclusion

      Podman offers not just a simple alternative to Docker, but a powerful containerization tool with the added benefit of secure, rootless operation. With this tutorial, you have what you need to start using Podman to run and manage your containers.

      Keep learning about effective tools for working with containers through the links on Podman, Buildah, and Dockerfiles provided in the course of this tutorial, and continue sharpening your Podman knowledge with the resources listed at the end.

      More Information

      You may wish to consult the following resources for additional information
      on this topic. While these are provided in the hope that they will be
      useful, please note that we cannot vouch for the accuracy or timeliness of
      externally hosted materials.


