

      Podman vs Docker: Comparing the Two Containerization Tools

      Containers offer you powerful tools for developing and deploying applications. They give you distinct and portable virtual environments with a fraction of the overhead of traditional virtual machines.

      If you’re looking into containerization, you’ve likely seen Docker, the most popular and widely used containerization tool. But recently a capable and compelling alternative has emerged: Podman.

      Both tools follow the Open Container Initiative (OCI) standards, and both offer robust capabilities for running and managing containers.

      So which one should you use? What features make Docker best for some use cases and Podman better for others?

      This tutorial aims to help you answer these questions. Learn the key characteristics of each tool, see a breakdown of their pros and cons, and walk through an analysis of each tool’s best use cases.

      What Are Containers?

      Containers are lightweight and standalone virtual environments for applications. With containers, you can run multiple application environments on a single system or package application environments as images for others to easily run on different systems.

      Each container works off a set of instructions, allowing it to replicate the necessary virtual infrastructure and applications. The container then houses and manages the applications and all of their dependencies.

      A container can be saved as a container image. Such an image can then be used to recreate the container on other systems, requiring only a containerization tool, like Docker or Podman.

      Today, most containerization tools follow the OCI standards. Any containerization tools that conform to this standard can operate OCI containers built from other such tools. Thus, Podman can run containers built with Docker, and vice versa.
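      For example, because both tools understand the same OCI image format, either one can run the same image from the Docker Hub (a sketch assuming both tools are installed):

```shell
# Pull and run an image from the Docker Hub with either tool;
# both consume the same OCI image format.
docker run --rm docker.io/library/hello-world
podman run --rm docker.io/library/hello-world
```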

      What Is Docker?

      Docker is a platform for creating, deploying, and managing applications via containers. With Docker, you can create OCI-compliant containers using Dockerfiles (scripts for container creation) or existing container images.
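      As a minimal illustration of a Dockerfile (a sketch; app.py is a hypothetical script, not part of this guide):

```dockerfile
FROM python:3.11-slim        # build on top of an existing base image
WORKDIR /app
COPY app.py .                # add the application code to the image
CMD ["python", "app.py"]     # command the container runs on start
```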

      Docker has become an incredibly popular containerization tool, at least in part due to its relative simplicity. Its straightforward commands and the wealth of available documentation make Docker eminently approachable.

      Learn more about Docker in our guide
      An Introduction to Docker.

      What Is Podman?

      Podman, like Docker, is an open source engine for deploying and managing containerized applications. Podman builds OCI-compliant containers from existing images or from Containerfiles and Dockerfiles.

      The Podman engine was originally developed by Red Hat with the intention of providing a daemonless alternative to Docker. By employing a daemonless architecture, Podman seeks to remedy security concerns around Docker’s daemon-based process.

      Additionally, Podman’s daemonless architecture grants it a truly rootless mode. Docker commands can be run by non-root users, but the daemon that executes those commands still runs as root. Podman, by contrast, executes commands directly and avoids the need for root privileges.
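      The difference can be sketched as commands (assuming both tools are installed, and that your user is in the docker group for the Docker case):

```shell
# With Podman, an unprivileged user runs containers directly;
# no root-owned daemon sits between the CLI and the container.
podman run --rm docker.io/library/alpine id -u

# Docker accepts the same command from a non-root user, but the
# actual work is carried out by the daemon running as root.
docker run --rm alpine id -u
```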

      Learn more about getting started with Podman in our guide
      How to Install Podman for Running Containers.

      Docker vs Podman

      Both Podman and Docker are containerization tools. With either one, you can fully start up, deploy, and manage containers.

      However, each tool has its pros and cons. These next couple of sections explore each, providing a list to compare and contrast the two containerization engines.

      Afterwards, keep on reading to see our advice on which tool to use when.

      Docker Pros and Cons

      Docker Pros:

      • Simple and approachable. Docker’s commands are designed to be relatively simple and easy to use. Alongside that, Docker maintains one of the most frequently used registries for container images.

        The Docker Hub holds a wide collection of well-maintained container images, many of which are composed and updated officially. This makes it relatively easy to, for example, pull a container image for a LAMP stack and start working quickly with Docker.

      • Popular. Docker’s widespread usage means you are more likely to encounter it anywhere that works with containers. It also means you have a vast and easily accessible collection of user documentation and troubleshooting to pull from.

      Docker Cons:

      • Daemon-based architecture. Docker relies on a long-running daemon process, which may pose security concerns for some. Additionally, that daemon runs with root privileges. Thus, even limited users executing Docker commands have those commands fulfilled by a process with root privileges, a further security concern.

      Podman Pros and Cons

      Podman Pros:

      • Daemonless architecture. Podman directly interacts with containers and container images, without a long-running daemon process. Doing so reduces exposure to security risks.

      • Rootless processes. Because of its daemonless architecture, Podman can perform truly rootless operations. Users do not have to be granted root privileges to run Podman commands, and Podman does not have to rely on a root-privileged process.

      • Access to image registries. Podman can find and pull container images from numerous registries, including the Docker Hub. This means, with a little configuration, Podman can access the same image registries as Docker.

      Podman Cons:

      • Limited build features. Podman concerns itself primarily with running and managing containers. It can build containers and render them as images, often effectively for many use cases. However, its functionality for doing so represents a limited portion of the Buildah source code.

        Instead, Podman endorses using Buildah as a complementary tool for more feature-rich container building and fine-tuned control over the process.

      Which One Should You Use?

      Docker and Podman each stand as viable containerization options. Each tool has a lot to offer, and for most containerization needs, either one works just as well as the other.

      But in what cases should you consider one of these two tools over the other?

      When to Use Docker

      Docker is best suited for when you want a more approachable containerization option. Docker’s design makes it relatively quick to pick up, and its feature set includes everything you’re likely to need when working with containers.

      Docker covers the full container life cycle, from container composition to deployment and maintenance. And it accomplishes this with a straightforward set of commands.

      Docker has established usage at many companies, and many people have experience with it. When it comes to containerization tools, you are more likely to find people familiar with Docker than with most other tools.

      Looking to go forward with Docker? Be sure to reference the guide linked above, as well as our guide
      When and Why to Use Docker. To see Docker in action, you may also want to look at our guide on
      How to install Docker and deploy a LAMP Stack.

      When to Use Podman

      Podman offers stronger security options. Its daemonless architecture allows you to run rootless containers, and its direct (rather than long-running) processes for managing containers further secure them.

      Podman is a lightweight and specialized solution. It focuses on running, deploying, and managing containers, and gives you fine-grained control of these processes.

      At the same time, options for building containers and images are available, though limited. Podman keeps tightly focused on its specialization and prefers to work with Buildah as a complementary tool for building containers and container images.

      This specialization and light footprint are useful in contexts where you want more control for running and managing containers, but don’t need the more advanced build capabilities (or are able to rely on another tool for them).

      In fact, you can effectively use Docker and Podman side-by-side, considering both tools are OCI-compliant. For instance, you can use Docker for your development environment, where you are creating application images but security is less of a concern. Then, use Podman to run and maintain those images in a production environment.
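      Sketched as commands, with registry.example.com standing in for whatever registry you use:

```shell
# Development machine: build and push the image with Docker.
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0

# Production host: pull and run the same OCI image with Podman.
podman pull registry.example.com/myapp:1.0
podman run -d --name myapp registry.example.com/myapp:1.0
```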

      Start moving forward with Podman by checking out our guide
      How to Install Podman for Running Containers. You may also be interested in taking a look at Buildah via our guide
      How to Use Buildah to Build OCI Container Images.


      You now have the knowledge needed to make a decision between Podman and Docker. Both are OCI-compliant containerization tools, each offering particular advantages. Each tool stands as a robust option for running, deploying, and managing containers. Which one you choose comes down to what particular features and use cases you need to cover.


      How to Self-host Supabase with Docker

      Supabase is an open source Firebase alternative featuring a Postgres database, user authentication, and REST API capabilities. It offers a robust framework for creating the backend to Angular, React, Next.js, and other frontend applications.

      This tutorial, the first in our series on Supabase, introduces you to the basics of Supabase. It covers installing your own self-hosted Supabase instance with Docker, setting up an initial configuration, and securing your instance.

      Before You Begin

      1. Familiarize yourself with our
        Getting Started with Linode guide, and complete the steps for setting your Linode’s hostname and timezone.

      2. This guide uses sudo wherever possible. Complete the sections of our
        How to Secure Your Server guide to create a standard user account, harden SSH access, and remove unnecessary network services.

      3. Update your system.

        • Debian and Ubuntu:

            sudo apt update && sudo apt upgrade
        • AlmaLinux, CentOS Stream, Fedora, and Rocky Linux:

            sudo dnf upgrade


      This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you’re not familiar with the sudo command, see the
      Users and Groups guide.

      How to Install Supabase with Docker

      Docker is the recommended solution for self-hosting Supabase. Its containerization makes setting up and managing a platform like Supabase more convenient.

      These next few sections show you how to use Docker and Docker Compose to get your own Supabase instance up and running. This includes steps for installing Docker and downloading the necessary Supabase files.

      Afterward, keep reading to see how you can start configuring your instance to fit your security needs.

      Installing Docker and Docker Compose

      The first step is to install Docker and Docker Compose. Docker runs your Supabase instance while Docker Compose organizes and coordinates the instance’s parts.
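      To illustrate what Compose does, here is a minimal, hypothetical docker-compose.yml; Supabase’s actual file defines many more services (Postgres, Kong, Studio, and others), each coordinated in the same way:

```yaml
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example-password   # placeholder value
  studio:
    image: example/studio                   # placeholder image name
    ports:
      - "3000:3000"                         # host port : container port
    depends_on:
      - db                                  # Compose starts db before studio
```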

      1. Install Docker using the steps outlined in sections two and three of the following guides, depending on your Linux distribution.

      2. Install the Docker Compose plugin using your distribution’s package manager.

        • Debian and Ubuntu:

            sudo apt install docker-compose-plugin
        • AlmaLinux, CentOS Stream, Fedora, and Rocky Linux:

            sudo dnf install docker-compose-plugin
      3. Verify your Docker installation:

         docker -v

        Your version may differ from the one shown below, but that’s okay; you just want to get a version response:

        Docker version 20.10.17, build 100c701

      Download the Supabase Repository

      Supabase operates its Docker Compose setup out of its Git repository. Thus, you need to download your own copy of the repository to run your Supabase instance. Once you have it, the cloned repository houses your Supabase files and configuration.

      1. Clone the Supabase repository from GitHub. This creates a supabase subdirectory in your current directory:

         git clone --depth 1 https://github.com/supabase/supabase


        You may first need to install Git. Typically, you can do so through your system’s package manager.

        Debian and Ubuntu:

        sudo apt install git

        AlmaLinux, CentOS Stream, Fedora, and Rocky Linux:

        sudo dnf install git
      2. Change into the repository’s Docker subdirectory:

         cd supabase/docker
      3. Make a copy of the included configuration file, .env.example. For now, you can leave the contents of the file as is, but this file is where most of your instance’s configuration resides. Later, you can get some ideas for how to customize it for your security needs:

         cp .env.example .env

      Run Supabase

      You are now ready to start running your Supabase instance. You can start it up by running the appropriate Docker Compose command within the supabase/docker subdirectory:

      sudo docker compose up -d

      If you’re on a local machine, simply navigate to localhost:3000 in your web browser to see the Supabase interface:

      Supabase dashboard

      However, if you want to access Supabase remotely, you need to open the port in your system’s firewall. You can learn how to do so through our guide on
      securing your server.

      You also need to modify the URL values in your Supabase instance’s configuration to match your server’s remote address. Open Supabase’s .env file, and change the SITE_URL, API_EXTERNAL_URL, and PUBLIC_REST_URL variables, replacing localhost with your server’s remote address.

      This example uses a placeholder remote IP address of 192.0.2.1 for the server and assumes Supabase’s default ports:

      File: .env
      # [...]
      ## General
      SITE_URL=http://192.0.2.1:3000
      API_EXTERNAL_URL=http://192.0.2.1:8000
      # [...]
      # Studio - Configuration for the Dashboard
      PUBLIC_REST_URL=http://192.0.2.1:8000/rest/v1/ # replace if you intend to use Studio outside of localhost

      Similar changes need to be made again should you alter the server address or the instance’s ports. That is the case with the steps for implementing a reverse proxy server as shown further on in this tutorial.

      Once you have made the updates, restart your instance:

      sudo docker compose down
      sudo docker compose up -d

      After making the above preparations, you can access the Supabase interface remotely by navigating to port 3000 on your server’s remote IP address. For instance, if your server’s remote IP address is 192.0.2.1, navigate in a web browser to http://192.0.2.1:3000.

      How to Configure Supabase

      With your Supabase instance up and running, you can now adjust its configuration to fit your needs.

      Much of the Supabase configuration is controlled via the .env file as shown in the previous section. Open that file with your preferred text editor, make the desired changes, and save the file. For the changes to take effect you then need to stop your Supabase services and start them back up, like so:

      sudo docker compose down
      sudo docker compose up -d

      Securing Supabase

      The next several sections of this tutorial show you specific configurations you can use to make your Supabase instance more secure. This includes applying API keys and secrets as well as using a reverse proxy with SSL.

      Generating API Keys and Secrets

      Setting keys and secrets for your Supabase instance helps keep it secure. Doing so is actually part of the basic setup steps in Supabase’s documentation. These should certainly be set before running the instance in any production context.

      1. Generate two passwords without special characters and consisting of at least 32 characters, referred to henceforth as examplePassword1 and examplePassword2. You can generate random passwords for this purpose using Bitwarden’s
        password generator.

      2. Navigate to Supabase’s
        API-key generator. This tool takes examplePassword2 and creates two specific JSON Web Tokens (JWTs) from it. Input examplePassword2 into the JWT Secret field, and make sure ANON_KEY is selected as the Preconfigured Payload. Then, click the Generate JWT button to generate exampleJWT1 and save it along with your passwords.

      3. Repeat the above step, inputting examplePassword2 into the JWT Secret field, but this time select SERVICE_KEY as the Preconfigured Payload. Click the Generate JWT button to generate exampleJWT2 and save it along with your passwords.

      4. Open the .env file in your supabase/docker directory. Replace the values for POSTGRES_PASSWORD, JWT_SECRET, ANON_KEY, and SERVICE_ROLE_KEY with your examplePassword1, examplePassword2, exampleJWT1, and exampleJWT2, respectively:

        File: .env
        # [...]
        POSTGRES_PASSWORD=examplePassword1
        JWT_SECRET=examplePassword2
        ANON_KEY=exampleJWT1
        SERVICE_ROLE_KEY=exampleJWT2
        # [...]
      5. Open the Kong configuration file, which is located at volumes/api/kong.yml in the supabase/docker directory. Find the consumers section of the file, and replace the key values under the anon and service_role usernames with your exampleJWT1 and exampleJWT2, respectively:

        File: volumes/api/kong.yml
        consumers:
          - username: anon
            keyauth_credentials:
              - key: exampleJWT1
          - username: service_role
            keyauth_credentials:
              - key: exampleJWT2
      6. Restart your Supabase instance for these changes to take effect:

         sudo docker compose down
         sudo docker compose up -d

      Using a Reverse Proxy

      NGINX provides an excellent proxy. It routes traffic between the various Supabase endpoints, giving you control over what gets exposed and how.

      Moreover, NGINX provides a way to apply SSL certificates to your endpoints. Doing so, which is outlined in the next section, vastly improves your server’s security.

      1. Install NGINX. Follow steps two and three from our guide on
        How to Install and Use NGINX. Use the drop down at the top of the guide to select your Linux distribution and get the steps matched to it.

        Additionally, follow any directions in the above guide related to locating and preparing the NGINX default configuration. On Debian and Ubuntu, for instance, this just means finding the configuration file at /etc/nginx/sites-available/default. On AlmaLinux, by contrast, you first need to comment out a section in the /etc/nginx/nginx.conf file and create a /etc/nginx/conf.d/example.com.conf file (replacing example.com with your domain).

      2. Open the NGINX configuration file that you located/created as part of the above step. For this and following examples, the location is presumed to be /etc/nginx/sites-available/default, but know that your location may be different. Remove the configuration file’s default contents, and replace them with the following contents. Be sure to replace the server_name value (localhost in this example) with your server’s IP address or domain.

        File: /etc/nginx/sites-available/default
        map $http_upgrade $connection_upgrade {
            default upgrade;
            '' close;
        }

        upstream supabase {
            server localhost:3000;
        }

        upstream kong {
            server localhost:8000;
        }

        server {
            listen 80;
            server_name localhost;

            # REST
            location ~ ^/rest/v1/(.*)$ {
                proxy_set_header Host $host;
                proxy_pass http://kong;
                proxy_redirect off;
            }

            # AUTH
            location ~ ^/auth/v1/(.*)$ {
                proxy_set_header Host $host;
                proxy_pass http://kong;
                proxy_redirect off;
            }

            # REALTIME
            location ~ ^/realtime/v1/(.*)$ {
                proxy_redirect off;
                proxy_pass http://kong;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection $connection_upgrade;
                proxy_set_header Host $host;
            }

            # STUDIO
            location / {
                proxy_set_header Host $host;
                proxy_pass http://supabase;
                proxy_redirect off;
                proxy_set_header Upgrade $http_upgrade;
            }
        }
      3. Restart the NGINX service, which you can typically do with:

         sudo systemctl restart nginx

      Afterward, you should be able to access the Supabase dashboard without having to specify port 3000.


      Should you encounter a “bad gateway” error, your system may be denying NGINX due to SELinux rules. You can verify this by checking the NGINX logs at /var/log/nginx/error.log and looking for “Permission denied”.

      According to Stack Overflow, the issue can typically be resolved with the following command, which allows NGINX to make network connections on your system:

      sudo setsebool -P httpd_can_network_connect 1

      Adding an SSL Certificate

      The following steps show you how to apply an SSL certificate to Supabase using
      Certbot. Certbot allows you to easily request and download free certificates from
      Let’s Encrypt.

      With an SSL certificate, your instance’s traffic gets encrypted and secured over HTTPS.

      1. Follow along with our guide on
        Enabling HTTPS Using Certbot with NGINX up to the step for executing the certbot command. Be sure to select the appropriate Linux distribution from the dropdown at the top of that guide.

      2. Certbot needs to use port 80 for Let’s Encrypt verification, so temporarily stop NGINX:

        sudo systemctl stop nginx
      3. This guide uses a variant of the certbot command to retrieve the certificate only and to use a standalone verification method. Doing so gives more control over how the certificate is applied.

        You can achieve this with the command:

        sudo certbot certonly --standalone

        Follow along with the prompts, entering an email address for renewal notifications, accepting the terms of service, and entering your server’s domain name.

        Take note of the locations of your certificate files. Certbot outputs these locations upon success, and you need them for the next step. Typically, the locations resemble the following, with example.com replaced by your actual domain name:

        /etc/letsencrypt/live/example.com/fullchain.pem
        /etc/letsencrypt/live/example.com/privkey.pem

      4. Open your NGINX configuration file again (typically located at /etc/nginx/sites-available/default). Make the following changes to the beginning of the server section.

        Be sure to replace the ssl_certificate and ssl_certificate_key values here with the locations of the fullchain.pem and privkey.pem files created by Certbot. And replace the server_name value with your domain name:

        File: /etc/nginx/sites-available/default
        # [...]
        server {
            listen      80;
            server_name example.com;
            access_log  off;
            rewrite ^ https://$host$request_uri? permanent;
        }

        server {
            listen 443 ssl;
            server_name example.com;
            ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
            ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        # [...]
      5. Restart NGINX:

         sudo systemctl start nginx

      Now, you can access your Supabase instance in a web browser via the HTTPS version of your domain. And you can be assured that your Supabase instance is secured using SSL certification.


      You can optionally also add your server’s remote IP address to the NGINX configuration above and use that as well. However, you may receive a certificate warning in your browser. This is because the certificate was issued for your server’s domain name, not its IP address.


      Now you have your Supabase instance running and configured for your security needs. Take advantage of your instance by reading the
      Supabase documentation. There, you can find guides on getting started with the wide range of features Supabase has to offer.

      And continue learning with us in our upcoming series of guides on Supabase. These cover everything from setting up your instance, to linking your instance to Linode Object Storage, to building JavaScript applications with Supabase.

      Have more questions or want some help getting started? Feel free to reach out to our
      Support team.


      How To Build and Deploy a Flask Application Using Docker on Ubuntu 20.04

      The author selected the Tech Education Fund to receive a donation as part of the Write for DOnations program.


      Docker is an open-source application that allows administrators to create, manage, deploy, and replicate applications using containers. Containers can be thought of as a package that houses dependencies that an application requires to run at an operating system level. This means that each application deployed using Docker lives in an environment of its own and its requirements are handled separately.

      Flask is a web micro-framework that is built on Python. It is called a micro-framework because it does not require specific tools or plug-ins to run. The Flask framework is lightweight and flexible, yet highly structured, making it especially popular for small web apps written in Python.

      Deploying a Flask application with Docker will allow you to replicate the application across different servers with minimal reconfiguration.

      In this tutorial, you will create a Flask application and deploy it with Docker. This tutorial will also cover how to update an application after deployment.


      To follow this tutorial, you will need the following:

      Step 1 — Setting Up the Flask Application

      To get started, you will create a directory structure that will hold your Flask application. This tutorial will create a directory called TestApp in /var/www, but you can modify the command to name it whatever you’d like.

      • sudo mkdir /var/www/TestApp

      Move into the newly created TestApp directory:

      • cd /var/www/TestApp

      Next, create the base folder structure for the Flask application:

      • sudo mkdir -p app/static app/templates

      The -p flag indicates that mkdir will create a directory and all parent directories that don’t exist. In this case, mkdir will create the app parent directory in the process of making the static and templates directories.
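      You can see the effect of -p in a throwaway directory; without the flag, mkdir refuses to create a path whose parent does not yet exist:

```shell
tmp=$(mktemp -d)                                  # scratch directory
mkdir -p "$tmp/app/static" "$tmp/app/templates"   # parents created as needed
ls "$tmp/app"                                     # static  templates
rm -r "$tmp"                                      # clean up
```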

      The app directory will contain all files related to the Flask application such as its views and blueprints. Views are the code you write to respond to requests to your application. Blueprints create application components and support common patterns within an application or across multiple applications.

      The static directory is where assets such as images, CSS, and JavaScript files live. The templates directory is where you will put the HTML templates for your project.

      Now that the base folder structure is complete, you need to create the files needed to run the Flask application. First, create an __init__.py file inside the app directory using nano or a text editor of your choice. This file tells the Python interpreter that the app directory is a package and should be treated as such.

      Run the following command to create the file:

      • sudo nano app/__init__.py

      Packages in Python allow you to group modules into logical namespaces or hierarchies. This approach enables the code to be broken down into individual and manageable blocks that perform specific functions.
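      The package mechanism can be demonstrated with nothing but the standard library: a directory becomes an importable package as soon as it contains an __init__.py file (a sketch using a throwaway directory):

```python
import importlib
import pathlib
import sys
import tempfile

# Build a throwaway package on disk: a directory with an __init__.py
# file is a package the interpreter can import.
root = pathlib.Path(tempfile.mkdtemp())
pkg = root / "app"
pkg.mkdir()
(pkg / "__init__.py").write_text("greeting = 'hello world!'\n")

# Make the throwaway directory importable, then import the package.
sys.path.insert(0, str(root))
app = importlib.import_module("app")
print(app.greeting)  # attributes defined in __init__.py live on the package
```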

      Next, you will add code to __init__.py that will create a Flask instance and import the logic from the views.py file, which you will create after saving this file. Add the following code to your new file:


      from flask import Flask
      app = Flask(__name__)
      from app import views

      Once you’ve added that code, save and close the file. You can save and close the file by pressing Ctrl+X, then when prompted, Y and Enter.

      With the __init__.py file created, you’re ready to create the views.py file in your app directory. This file will contain most of your application logic.

      Next, add the code to your views.py file. This code will return the hello world! string to users who visit your web page:


      from app import app

      @app.route('/')
      def home():
         return "hello world!"

      The @app.route line above the function is called a decorator. Decorators are a Python language convention that are widely used by Flask; their purpose is to modify the functions immediately following them. In this case, the decorator tells Flask which URL will trigger the home() function. The hello world text returned by the home function will be displayed to the user on the browser.
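      The idea behind decorators can be shown with plain Python, no Flask required. This stdlib-only sketch mimics what @app.route does: it registers the decorated function under a URL pattern (the names route, routes, and home are illustrative, not Flask’s internals):

```python
# A registry mapping URL paths to handler functions.
routes = {}

def route(path):
    def register(func):
        routes[path] = func   # remember which function handles which URL
        return func           # hand the function back unchanged
    return register

@route("/")
def home():
    return "hello world!"

# Dispatching "/" looks up and calls home(), just as Flask would.
print(routes["/"]())
```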

      With the views.py file in place, you’re ready to create the uwsgi.ini file. This file will contain the uWSGI configurations for our application. uWSGI is a deployment option for Nginx that is both a protocol and an application server; the application server can serve uWSGI, FastCGI, and HTTP protocols.

      To create this file, run the following command:

      • sudo nano uwsgi.ini

      Next, add the following content to your file to configure the uWSGI server:


      [uwsgi]
      module = main
      callable = app
      master = true

      This code defines the module that the Flask application will be served from. In this case, this is the main.py file, referenced here as main. The callable option instructs uWSGI to use the app instance exported by the main application. The master option allows your application to keep running, so there is little downtime even when reloading the entire application.

      Next, create the main.py file, which is the entry point to the application. The entry point instructs uWSGI on how to interact with the application:

      • sudo nano main.py

      Next, copy and paste the following into the main.py file. This imports the Flask instance named app from the application package that was previously created.


      from app import app

      Finally, create a requirements.txt file to specify the dependencies that the pip package manager will install to your Docker deployment:

      • sudo nano requirements.txt

      Add the following line to add Flask as a dependency:

      Flask>=2.0.2
      This specifies the version of Flask to be installed. At the time of writing this tutorial, 2.0.2 is the latest Flask version, and specifying >=2.0.2 will ensure you get version 2.0.2 or newer. Because you’re making a basic test app in this tutorial, the syntax is unlikely to go out of date due to future updates to Flask, but if you wanted to be safe and still receive minor updates, you could specify that you don’t want to install a future major version by specifying something like Flask>=2.0.2,<3.0. You can check for updates at the official website for Flask, or on the Python Package Index’s landing page for the Flask library.
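      What a specifier like Flask>=2.0.2,<3.0 means can be sketched with plain tuple comparison (pip uses a more complete versioning scheme; this covers only simple x.y.z versions):

```python
# Parse "2.0.2" into (2, 0, 2) so versions compare numerically.
def parse(v):
    return tuple(int(part) for part in v.split("."))

# True when candidate satisfies >=2.0.2,<3.0.0.
def satisfies(candidate):
    return parse("2.0.2") <= parse(candidate) < parse("3.0.0")

print(satisfies("2.0.2"))   # True: the lower bound is inclusive
print(satisfies("2.3.1"))   # True: minor and patch updates are allowed
print(satisfies("3.0.0"))   # False: the next major version is excluded
```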

      Save and close the file. You have successfully set up your Flask application and are ready to set up Docker.

      Step 2 — Setting Up Docker

      In this step you will create two files, Dockerfile and start.sh, for your Docker deployment. The Dockerfile is a text document that contains the commands used to assemble the image. The start.sh file is a shell script that will build an image and create a container from the Dockerfile.

      First, create the Dockerfile:

      • sudo nano Dockerfile

      Next, add your desired configuration to the Dockerfile. These commands specify how the image will be built, and what extra requirements will be included.


      FROM tiangolo/uwsgi-nginx-flask:python3.8-alpine
      RUN apk --update add bash nano
      ENV STATIC_URL /static
      ENV STATIC_PATH /var/www/app/static
      COPY ./requirements.txt /var/www/requirements.txt
      RUN pip install -r /var/www/requirements.txt

      In this example, the Docker image will be built off an existing image, tiangolo/uwsgi-nginx-flask, which you can find on DockerHub. This particular Docker image is a good choice over others because it supports a wide range of Python versions and OS images.

      The first two lines specify the parent image that you’ll use to run the application and install the bash command processor and the nano text editor. ENV STATIC_URL /static is an environment variable specific to this Docker image. It defines the static folder where all assets such as images, CSS files, and JavaScript files are served from.

      The last two lines copy the requirements.txt file into the container so that pip can read it, then install the dependencies it specifies.

      Save and close the file after adding your configuration.

      With your Dockerfile in place, you’re almost ready to write your script that will build the Docker container. Before writing the script, first make sure that you have an open port to use in the configuration. To check if a port is free, run the following command:

      • sudo nc localhost 56733 < /dev/null; echo $?

      If the output of the command above is 1, then the port is free and usable. Otherwise, you will need to select a different port to use in your configuration file.
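
      The nc check above can also be expressed in Python, which makes the logic explicit: try to connect to the port, and treat a failed connection as free. This is a sketch using only the standard library:

```python
import socket

def port_is_free(port, host="localhost"):
    # connect_ex returns 0 on a successful connection, meaning some
    # process is already listening there; any other value means the
    # port is free to use.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1)
        return sock.connect_ex((host, port)) != 0

print(port_is_free(56733))
```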

      Once you’ve found an open port to use, create the start.sh script:

      • sudo nano start.sh

      The start.sh script is a shell script that will build an image from the Dockerfile and create a container from the resulting Docker image. Add your configuration to the new file:


      #!/bin/bash
      app="docker.test"
      docker build -t ${app} .
      docker run -d -p 56733:80 \
        --name=${app} \
        -v $PWD:/app ${app}

      The first line is called a shebang. It specifies that this is a bash file and should be executed as commands. The next line assigns the name you want to give the image and container to a variable named app. The line after that instructs Docker to build an image from your Dockerfile located in the current directory. This creates an image called docker.test in this example.

      The last three lines create a new container named docker.test that is exposed at port 56733. Finally, it links the present directory to the /app directory of the container.

      You use the -d flag to start a container in daemon mode, or as a background process. You include the -p flag to bind a port on the server to a particular port on the Docker container. In this case, you are binding port 56733 to port 80 on the Docker container. The -v flag specifies a Docker volume to mount on the container, and in this case, you are mounting the entire project directory to the /app folder on the Docker container.

      Save and close the file after adding your configuration.

      Execute the script to create the Docker image and build a container from the resulting image:

      • sudo bash start.sh

      Once the script finishes running, use the following command to list all running containers:

      • sudo docker ps

      You will receive output that shows the containers:


      CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS         PORTS                            NAMES
      58b05508f4dd   docker.test   "/entrypoint.sh /sta…"   12 seconds ago   Up 3 seconds   443/tcp, 0.0.0.0:56733->80/tcp   docker.test

      You will find that the docker.test container is running. Now that it is running, visit the IP address at the specified port in your browser: http://ip-address:56733

      You’ll see a page similar to the following:

      (Image: the application home page)

      In this step you have successfully deployed your Flask application on Docker. Next, you will use templates to display content to users.

      Step 3 — Serving Template Files

      Templates are files that display static and dynamic content to users who visit your application. In this step, you will create an HTML template to create a homepage for the application.

      Start by creating a home.html file in the app/templates directory:

      • sudo nano app/templates/home.html

      Add the code for your template. This code will create an HTML5 page that contains a title and some text.


      <!doctype html>
      <html lang="en-us">
        <head>
          <meta charset="utf-8">
          <meta http-equiv="x-ua-compatible" content="ie=edge">
          <title>Welcome home</title>
        </head>
        <body>
          <h1>Home Page</h1>
          <p>This is the home page of our application.</p>
        </body>
      </html>

      Save and close the file once you’ve added your template.

      Next, modify the app/views.py file to serve the newly created file:

      • sudo nano app/views.py

      First, add the following line at the beginning of your file to import the render_template method from Flask. This method parses an HTML file to render a web page to the user.


      from flask import render_template

      At the end of the file, you will also add a new route to render the template file. This code specifies that users are served the contents of the home.html file whenever they visit the /template route on your application.


      @app.route('/template')
      def template():
          return render_template('home.html')
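
      The @app.route decorator that Flask uses here is essentially a registry mapping URL paths to view functions. This toy sketch shows the mechanism; it is illustrative only, not Flask's actual implementation:

```python
# A minimal registry that mimics what a route decorator does:
# remember which function handles which path, return the function unchanged.
routes = {}

def route(path):
    def decorator(view_func):
        routes[path] = view_func
        return view_func
    return decorator

@route('/template')
def template():
    return "contents of home.html"

# Dispatching a request is then a dictionary lookup:
print(routes['/template']())  # contents of home.html
```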

      The updated app/views.py file will look like this:


      from flask import render_template
      from app import app

      @app.route('/')
      def home():
          return "Hello world!"

      @app.route('/template')
      def template():
          return render_template('home.html')

      Save and close the file when done.

      In order for these changes to take effect, you will need to stop and restart the Docker container. Run the following command to restart the container:

      • sudo docker stop docker.test && sudo docker start docker.test

      Visit your application at http://your-ip-address:56733/template to see the new template being served.


      In this step, you created a template file to serve to visitors of your application. In the next step, you will see how the changes you make to your application can take effect without having to restart the Docker container.

      Step 4 — Updating the Application

      Sometimes you will need to make changes to the application, whether it is installing new requirements, updating the Docker container, or HTML and logic changes. In this section, you will configure touch-reload to make these changes without needing to restart the Docker container.

      Python autoreloading watches the entire file system for changes and refreshes the application when it detects a change. Autoreloading is discouraged in production because it can become resource intensive very quickly. In this step, you will use touch-reload to watch for changes to a particular file and reload when the file is updated or replaced.

      To implement this, start by opening your uwsgi.ini file:

      • sudo nano uwsgi.ini

      Next, add the touch-reload line at the end of the file:


      module = main
      callable = app
      master = true
      touch-reload = /app/uwsgi.ini

      This specifies a file that will be modified to trigger an entire application reload. Once you’ve made the changes, save and close the file.
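
      Under the hood, touch-reload amounts to watching the file's modification time and reloading when it changes. The following is a simplified standard-library sketch of that check (uWSGI's actual implementation is in C):

```python
import os
import tempfile

def mtime(path):
    # The modification timestamp that the `touch` command updates.
    return os.stat(path).st_mtime

# Create a stand-in for uwsgi.ini and record its mtime.
fd, path = tempfile.mkstemp()
os.close(fd)
before = mtime(path)

# Simulate `touch` by bumping the timestamp explicitly.
os.utime(path, times=(before + 1, before + 1))

# A watcher like uWSGI's compares the current mtime to the recorded one.
needs_reload = mtime(path) != before
print(needs_reload)  # True
os.remove(path)
```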

      To demonstrate this, make a small change to your application. Start by opening your app/views.py file:

      • sudo nano app/views.py

      Replace the string returned by the home function:


      from flask import render_template
      from app import app

      @app.route('/')
      def home():
          return "<b>There has been a change</b>"

      @app.route('/template')
      def template():
          return render_template('home.html')

      Save and close the file after you’ve made a change.

      Next, if you open your application’s homepage at http://ip-address:56733, you will notice that the changes are not reflected. This is because the condition for reload is a change to the uwsgi.ini file. To reload the application, use touch to activate the condition:

      • sudo touch uwsgi.ini

      Reload the application homepage in your browser again. You will find that the application has incorporated the changes:

      (Image: the updated homepage)

      In this step, you set up a touch-reload condition to update your application after making changes.


      In this tutorial, you created and deployed a Flask application to a Docker container. You also configured touch-reload to refresh your application without needing to restart the container.

      With your new application on Docker, you can now scale with ease. To learn more about using Docker, check out their official documentation.
