
      How To Build and Deploy a Flask Application Using Docker on Ubuntu 20.04


      The author selected the Tech Education Fund to receive a donation as part of the Write for DOnations program.

      Introduction

Docker is an open-source application that allows administrators to create, manage, deploy, and replicate applications using containers. Containers can be thought of as packages that house the dependencies an application requires to run at the operating-system level. This means that each application deployed using Docker lives in an environment of its own, and its requirements are handled separately.

      Flask is a web micro-framework that is built on Python. It is called a micro-framework because it does not require specific tools or plug-ins to run. The Flask framework is lightweight and flexible, yet highly structured, making it especially popular for small web apps written in Python.

      Deploying a Flask application with Docker will allow you to replicate the application across different servers with minimal reconfiguration.

      In this tutorial, you will create a Flask application and deploy it with Docker. This tutorial will also cover how to update an application after deployment.

      Prerequisites

      To follow this tutorial, you will need the following:

      Step 1 — Setting Up the Flask Application

      To get started, you will create a directory structure that will hold your Flask application. This tutorial will create a directory called TestApp in /var/www, but you can modify the command to name it whatever you’d like.

      • sudo mkdir /var/www/TestApp

Move into the newly created TestApp directory:

• cd /var/www/TestApp

      Next, create the base folder structure for the Flask application:

      • sudo mkdir -p app/static app/templates

      The -p flag indicates that mkdir will create a directory and all parent directories that don’t exist. In this case, mkdir will create the app parent directory in the process of making the static and templates directories.

      The app directory will contain all files related to the Flask application such as its views and blueprints. Views are the code you write to respond to requests to your application. Blueprints create application components and support common patterns within an application or across multiple applications.
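This tutorial's app keeps all of its views in a single file, but as a brief, hypothetical illustration of the blueprint concept (you do not need to create this file for this tutorial), a blueprint groups related routes and is then registered on the application:

from flask import Blueprint, Flask

# A blueprint bundles related views under a common name and URL prefix.
admin = Blueprint('admin', __name__, url_prefix='/admin')

@admin.route('/')
def admin_home():
    return "admin home"

app = Flask(__name__)
# Registering the blueprint attaches all of its routes to the application.
app.register_blueprint(admin)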

      The static directory is where assets such as images, CSS, and JavaScript files live. The templates directory is where you will put the HTML templates for your project.

      Now that the base folder structure is complete, you need to create the files needed to run the Flask application. First, create an __init__.py file inside the app directory using nano or a text editor of your choice. This file tells the Python interpreter that the app directory is a package and should be treated as such.

      Run the following command to create the file:

      • sudo nano app/__init__.py

      Packages in Python allow you to group modules into logical namespaces or hierarchies. This approach enables the code to be broken down into individual and manageable blocks that perform specific functions.

      Next, you will add code to the __init__.py that will create a Flask instance and import the logic from the views.py file, which you will create after saving this file. Add the following code to your new file:

      /var/www/TestApp/app/__init__.py

      from flask import Flask
      app = Flask(__name__)
      from app import views
      

      Once you’ve added that code, save and close the file. You can save and close the file by pressing Ctrl+X, then when prompted, Y and Enter.

      With the __init__.py file created, you’re ready to create the views.py file in your app directory. This file will contain most of your application logic.

      Next, add the code to your views.py file. This code will return the hello world! string to users who visit your web page:

      /var/www/TestApp/app/views.py

      from app import app
      
      @app.route('/')
      def home():
         return "hello world!"
      

The @app.route line above the function is called a decorator. Decorators are a Python language feature widely used by Flask; their purpose is to modify the functions immediately following them. In this case, the decorator tells Flask which URL will trigger the home() function. The hello world text returned by the home function will be displayed to the user in the browser.
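If you are new to decorators, the following standalone sketch (not part of this tutorial's app) shows the underlying mechanism: a decorator receives the function defined below it and returns a replacement for it, which is how @app.route is able to register home() with Flask's routing table:

# A plain-Python decorator, shown for illustration only.
def announce(func):
    def wrapper(*args, **kwargs):
        # The wrapper runs extra logic around the original function.
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    # The name below the decorator is rebound to this wrapper.
    return wrapper

@announce
def home():
    return "hello world!"

print(home())  # Prints "Calling home", then "hello world!"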

      With the views.py file in place, you’re ready to create the uwsgi.ini file. This file will contain the uWSGI configurations for our application. uWSGI is a deployment option for Nginx that is both a protocol and an application server; the application server can serve uWSGI, FastCGI, and HTTP protocols.

To create this file, run the following command:

• sudo nano uwsgi.ini

      Next, add the following content to your file to configure the uWSGI server:

      /var/www/TestApp/uwsgi.ini

      [uwsgi]
      module = main
      callable = app
      master = true
      

      This code defines the module that the Flask application will be served from. In this case, this is the main.py file, referenced here as main. The callable option instructs uWSGI to use the app instance exported by the main application. The master option allows your application to keep running, so there is little downtime even when reloading the entire application.

Next, create the main.py file, which is the entry point to the application. The entry point instructs uWSGI on how to interact with the application:

• sudo nano main.py

      Next, copy and paste the following into the file. This imports the Flask instance named app from the application package that was previously created.

      /var/www/TestApp/main.py

      from app import app
      

      Finally, create a requirements.txt file to specify the dependencies that the pip package manager will install to your Docker deployment:

      • sudo nano requirements.txt

      Add the following line to add Flask as a dependency:

      /var/www/TestApp/requirements.txt

      Flask>=2.0.2
      

      This specifies the version of Flask to be installed. At the time of writing this tutorial, 2.0.2 is the latest Flask version, and specifying >=2.0.2 will ensure you get version 2.0.2 or newer. Because you’re making a basic test app in this tutorial, the syntax is unlikely to go out of date due to future updates to Flask, but if you wanted to be safe and still receive minor updates, you could specify that you don’t want to install a future major version by specifying something like Flask>=2.0.2,<3.0. You can check for updates at the official website for Flask, or on the Python Package Index’s landing page for the Flask library.

      Save and close the file. You have successfully set up your Flask application and are ready to set up Docker.

      Step 2 — Setting Up Docker

      In this step you will create two files, Dockerfile and start.sh, to create your Docker deployment. The Dockerfile is a text document that contains the commands used to assemble the image. The start.sh file is a shell script that will build an image and create a container from the Dockerfile.

First, create the Dockerfile:

• sudo nano Dockerfile

      Next, add your desired configuration to the Dockerfile. These commands specify how the image will be built, and what extra requirements will be included.

      /var/www/TestApp/Dockerfile

      FROM tiangolo/uwsgi-nginx-flask:python3.8-alpine
      RUN apk --update add bash nano
      ENV STATIC_URL /static
      ENV STATIC_PATH /var/www/app/static
      COPY ./requirements.txt /var/www/requirements.txt
      RUN pip install -r /var/www/requirements.txt
      

      In this example, the Docker image will be built off an existing image, tiangolo/uwsgi-nginx-flask, which you can find on DockerHub. This particular Docker image is a good choice over others because it supports a wide range of Python versions and OS images.

The first two lines specify the parent image that you’ll use to run the application and install the bash command processor and the nano text editor. ENV STATIC_URL /static is an environment variable specific to this Docker image. It defines the static folder where all assets such as images, CSS files, and JavaScript files are served from.

The last two lines copy the requirements.txt file into the container and then run pip to install the dependencies that the file specifies.

      Save and close the file after adding your configuration.

      With your Dockerfile in place, you’re almost ready to write your start.sh script that will build the Docker container. Before writing the start.sh script, first make sure that you have an open port to use in the configuration. To check if a port is free, run the following command:

      • sudo nc localhost 56733 < /dev/null; echo $?

      If the output of the command above is 1, then the port is free and usable. Otherwise, you will need to select a different port to use in your start.sh configuration file.

Once you’ve found an open port to use, create the start.sh script:

• sudo nano start.sh

      The start.sh script is a shell script that will build an image from the Dockerfile and create a container from the resulting Docker image. Add your configuration to the new file:

      /var/www/TestApp/start.sh

#!/bin/bash
app="docker.test"
docker build -t ${app} .
docker run -d -p 56733:80 \
  --name=${app} \
  -v $PWD:/app ${app}
      

The first line is called a shebang. It specifies that this is a bash file and should be executed as commands. The next line assigns the name you want to give the image and container to a variable named app. The line after that instructs Docker to build an image from your Dockerfile located in the current directory. This will create an image called docker.test in this example.

The last three lines create a new container named docker.test that is exposed at port 56733. Finally, it links the present directory to the /app directory of the container.

You use the -d flag to start a container in daemon mode, or as a background process. You include the -p flag to bind a port on the server to a particular port on the Docker container. In this case, you are binding port 56733 to port 80 on the Docker container. The -v flag specifies a Docker volume to mount on the container, and in this case, you are mounting the entire project directory to the /app folder on the Docker container.

      Save and close the file after adding your configuration.

Execute the start.sh script to create the Docker image and build a container from the resulting image:

• sudo bash start.sh

Once the script finishes running, use the following command to list all running containers:

• sudo docker ps

      You will receive output that shows the containers:

      Output

CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS         PORTS                            NAMES
58b05508f4dd   docker.test   "/entrypoint.sh /sta…"   12 seconds ago   Up 3 seconds   443/tcp, 0.0.0.0:56733->80/tcp   docker.test

      You will find that the docker.test container is running. Now that it is running, visit the IP address at the specified port in your browser: http://ip-address:56733

      You’ll see a page similar to the following:

      the home page

      In this step you have successfully deployed your Flask application on Docker. Next, you will use templates to display content to users.

      Step 3 — Serving Template Files

Templates are files that display static and dynamic content to users who visit your application. In this step, you will create an HTML template to build a homepage for the application.

      Start by creating a home.html file in the app/templates directory:

      • sudo nano app/templates/home.html

      Add the code for your template. This code will create an HTML5 page that contains a title and some text.

      /var/www/TestApp/app/templates/home.html

      
      <!doctype html>
      
      <html lang="en-us">   
        <head>
          <meta charset="utf-8">
          <meta http-equiv="x-ua-compatible" content="ie=edge">
          <title>Welcome home</title>
        </head>
      
        <body>
          <h1>Home Page</h1>
          <p>This is the home page of our application.</p>
        </body> 
      </html>
      

      Save and close the file once you’ve added your template.

Next, modify the app/views.py file to serve the newly created file:

• sudo nano app/views.py

      First, add the following line at the beginning of your file to import the render_template method from Flask. This method parses an HTML file to render a web page to the user.

      /var/www/TestApp/app/views.py

      from flask import render_template
      ...
      

      At the end of the file, you will also add a new route to render the template file. This code specifies that users are served the contents of the home.html file whenever they visit the /template route on your application.

      /var/www/TestApp/app/views.py

      ...
      
      @app.route('/template')
      def template():
          return render_template('home.html')
      

      The updated app/views.py file will look like this:

      /var/www/TestApp/app/views.py

      from flask import render_template
      from app import app 
      
      @app.route('/')
      def home():
          return "Hello world!"
      
      @app.route('/template')
      def template():
          return render_template('home.html')
      

      Save and close the file when done.

In order for these changes to take effect, you will need to stop and restart the Docker container. Run the following command to restart it:

      • sudo docker stop docker.test && sudo docker start docker.test

      Visit your application at http://your-ip-address:56733/template to see the new template being served.

      homepage

In this step, you created an HTML template file that is served to visitors of your application. In the next step you will see how the changes you make to your application can take effect without having to restart the Docker container.

      Step 4 — Updating the Application

      Sometimes you will need to make changes to the application, whether it is installing new requirements, updating the Docker container, or HTML and logic changes. In this section, you will configure touch-reload to make these changes without needing to restart the Docker container.

      Python autoreloading watches the entire file system for changes and refreshes the application when it detects a change. Autoreloading is discouraged in production because it can become resource intensive very quickly. In this step, you will use touch-reload to watch for changes to a particular file and reload when the file is updated or replaced.
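For comparison, here is a minimal, hypothetical sketch of what autoreloading looks like with Flask's built-in development server, assuming you ran the app directly instead of under uWSGI (this is for development only and is not how the Docker image in this tutorial serves the app):

# dev_server.py — illustration only; this tutorial serves the app with uWSGI instead.
from app import app

if __name__ == "__main__":
    # debug=True turns on Flask's reloader, which watches the source files
    # and restarts the development server whenever one of them changes.
    app.run(host="0.0.0.0", port=5000, debug=True)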

To implement this, start by opening your uwsgi.ini file:

• sudo nano uwsgi.ini

Next, add the touch-reload line to the end of the file:

      /var/www/TestApp/uwsgi.ini

[uwsgi]
module = main
      callable = app
      master = true
      touch-reload = /app/uwsgi.ini
      

      This specifies a file that will be modified to trigger an entire application reload. Once you’ve made the changes, save and close the file.

To demonstrate this, make a small change to your application. Start by opening your app/views.py file:

• sudo nano app/views.py

      Replace the string returned by the home function:

      /var/www/TestApp/app/views.py

      from flask import render_template
      from app import app
      
      @app.route('/')
      def home():
          return "<b>There has been a change</b>"
      
      @app.route('/template')
      def template():
          return render_template('home.html')
      

      Save and close the file after you’ve made a change.

Next, if you open your application’s homepage at http://ip-address:56733, you will notice that the changes are not reflected. This is because the condition for reload is a change to the uwsgi.ini file. To reload the application, use touch to activate the condition:

• sudo touch uwsgi.ini

      Reload the application homepage in your browser again. You will find that the application has incorporated the changes:

      Homepage Updated

      In this step, you set up a touch-reload condition to update your application after making changes.

      Conclusion

      In this tutorial, you created and deployed a Flask application to a Docker container. You also configured touch-reload to refresh your application without needing to restart the container.

      With your new application on Docker, you can now scale with ease. To learn more about using Docker, check out their official documentation.




      How to Monitor Docker Using Zabbix on Ubuntu 20.04


      The author selected the Open Source Initiative to receive a donation as part of the Write for DOnations program.

      Docker is a popular application that simplifies managing application processes in containers. Containers are similar to virtual machines, but since they are more dependent on the host operating system, they are more portable and resource-friendly.

      When using containers in a production environment, you should know if they are all running and what resources they are consuming. Zabbix is a monitoring system that can monitor the state of almost any element of your IT infrastructure, such as networks, servers, virtual machines, and applications.

      Zabbix recently introduced a new agent (Zabbix agent 2) with advanced capabilities. One of the key advantages of the new agent is the ability to expand functionality using plugins. At the moment, there are several plugins available, including one for monitoring Docker.

      In this tutorial, you will set up Docker monitoring in Zabbix using Zabbix agent 2 on Ubuntu 20.04. You’ll explore metrics and simulate a crash to generate a notification. In the end, you will have a monitoring system in place for your Docker application, which will notify you of any problems with containers.

      Prerequisites

      To follow this tutorial, you will need:

      • Two Ubuntu 20.04 servers set up by following the Initial Server Setup Guide for Ubuntu 20.04, including a non-root user with sudo privileges and a firewall configured with ufw.
      • The first server, where you’ll install Zabbix, will be the Zabbix server. Install the following components:
      • The second server, where you’ll install Docker (and the Zabbix agent), will be the Docker server. Install Docker by following Step 1 of the tutorial How To Install and Use Docker on Ubuntu 20.04.
      • A fully registered domain name. This tutorial will use your_domain as an example throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.

      • An A record with your_domain pointing to your Zabbix server’s public IP address. You can follow this introduction to DigitalOcean DNS for details on how to add it.

      Step 1 — Installing and Configuring Zabbix Agent 2

A Zabbix agent is a very small application that must be installed on every server or virtual machine you want to monitor. It will send monitoring data to the Zabbix server. Zabbix suggests using one of two agents: Zabbix agent or Zabbix agent 2. (You can learn about their differences in the documentation.) For this tutorial, we’ll use Zabbix agent 2, since it can work with Docker. In this step, you will learn how to install and configure it.

      Start by installing the agent software. Log in to the Docker server:

      • ssh sammy@docker_server_ip_address

      Run the following commands to install the repository configuration package:

      • wget https://repo.zabbix.com/zabbix/5.0/ubuntu/pool/main/z/zabbix-release/zabbix-release_5.0-1+focal_all.deb
      • sudo dpkg -i zabbix-release_5.0-1+focal_all.deb

Next, update the package index:

• sudo apt update

      Then install the Zabbix agent:

      • sudo apt install zabbix-agent2

      Next, you will need to edit the agent configuration file, which contains all the agent’s settings. These settings specify the address to transfer data to, the port to use for connections to and from the server, whether to use a secure connection, and much more. In most cases, the default settings will be fine. All settings within this file are documented via informative comments throughout the file.

      For this tutorial, you will need to edit the Zabbix agent settings to set up its connection to the Zabbix server. Open the agent configuration file in your text editor:

      • sudo nano /etc/zabbix/zabbix_agent2.conf

      By default, the agent is configured to send data to a server on the same host as the agent, so you will need to add the IP address of the Zabbix server. Find the following section:

      /etc/zabbix/zabbix_agent2.conf

      ...
      ### Option: Server
      #       List of comma delimited IP addresses, optionally in CIDR notation, or DNS names of Zabbix servers and Zabbix proxies.
      #       Incoming connections will be accepted only from the hosts listed here.
      #       If IPv6 support is enabled then '127.0.0.1', '::127.0.0.1', '::ffff:127.0.0.1' are treated equally
      #       and '::/0' will allow any IPv4 or IPv6 address.
      #       '0.0.0.0/0' can be used to allow any IPv4 address.
      #       Example: Server=127.0.0.1,192.168.1.0/24,::1,2001:db8::/32,zabbix.example.com
      #
      # Mandatory: yes, if StartAgents is not explicitly set to 0
      # Default:
      # Server=
      
      Server=127.0.0.1
      ...
      

      Change the default value to the IP of your Zabbix server:

      /etc/zabbix/zabbix_agent2.conf

      ...
      Server=zabbix_server_ip_address
      ...
      

      By default, the Zabbix server connects to the agent. But for some checks (for example, monitoring the logs), a reverse connection is required. For correct operation, you need to specify the Zabbix server address and a unique host name.

      Find the section that configures the active checks and change the default values as shown below:

      /etc/zabbix/zabbix_agent2.conf

      ...
      ##### Active checks related
      
      ### Option: ServerActive
      #       List of comma delimited IP:port (or DNS name:port) pairs of Zabbix servers and Zabbix proxies for active checks.
      #       If port is not specified, default port is used.
      #       IPv6 addresses must be enclosed in square brackets if port for that host is specified.
      #       If port is not specified, square brackets for IPv6 addresses are optional.
      #       If this parameter is not specified, active checks are disabled.
      #       Example: ServerActive=127.0.0.1:20051,zabbix.domain,[::1]:30051,::1,[12fc::1]
      #
      # Mandatory: no
      # Default:
      # ServerActive=
      
      ServerActive=zabbix_server_ip_address
      
      ### Option: Hostname
      #       Unique, case sensitive hostname.
      #       Required for active checks and must match hostname as configured on the server.
      #       Value is acquired from HostnameItem if undefined.
      #
      # Mandatory: no
      # Default:
      # Hostname=
      
      Hostname=Docker server
      ...
      

      Save and close the file.

      For the Zabbix agent to monitor Docker, you’ll need to add the zabbix user to the docker group:

      • sudo usermod -aG docker zabbix

      Now you can restart the Zabbix agent and set it to start at boot time:

      • sudo systemctl restart zabbix-agent2
      • sudo systemctl enable zabbix-agent2

      For good measure, check that the Zabbix agent is running correctly:

      • sudo systemctl status zabbix-agent2

      The output will look similar to this, indicating that the agent is running:

      Output

● zabbix-agent2.service - Zabbix Agent 2
     Loaded: loaded (/lib/systemd/system/zabbix-agent2.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2021-09-03 07:05:05 UTC; 1min 23s ago
...

The agent will listen on port 10050 for connections from the server. Configure UFW to allow connections to this port:

• sudo ufw allow 10050/tcp

      You can learn more about UFW in How To Set Up a Firewall with UFW on Ubuntu 20.04.

      In this step, you installed Zabbix agent 2, configured it to allow connections from your Zabbix server, and prepared to monitor Docker.

Your agent is now ready to send data to the Zabbix server. But in order to use that data, you have to register the host in the server’s web interface and link the Docker template to it. In the next step, you will complete the configuration.

      Step 2 — Adding the New Host to the Zabbix Server

      Installing an agent on the server you want to monitor is only half of the process. Each host you want to monitor needs to be registered on the Zabbix server, which you can do through the web interface. You also need to connect the appropriate template to it. Using templates, you can apply the same monitoring items, triggers, graphs, and low-level discovery rules to multiple hosts. When a template is linked to a host, all entities (items, triggers, graphs, etc.) of the template are added to the host.

      Log in to the Zabbix server web interface by navigating to the address http://zabbix_server_name or https://zabbix_server_name. (As mentioned in the tutorial linked in the prerequisites, How To Install and Configure Zabbix to Securely Monitor Remote Servers on Ubuntu 20.04, the default user is Admin and the password is zabbix.)

      Log in

      When you have logged in, click Configuration and then Hosts in the left navigation bar. Then click the Create host button in the top right corner of the screen. This will open the host configuration page.

      Create host

      Adjust the Host name and IP address to reflect the host name and IP address of your Docker server, then add the host to a group. You can select an existing group, such as Linux servers, or create your own group, such as Docker servers. Each host must be in a group and it can also be in multiple groups. To do this, enter the name of an existing or new group in the Groups field and select the desired value from the proposed list.

      After adding the group, click the Templates tab.

      Host templates

      To enable the necessary checks, you need to add a template. The template includes all the necessary checks, triggers, and graphs. The host can have multiple templates. To do this, enter the name of the template in the Search field and select it from the proposed list.

      For this tutorial, type Template App Docker and then select it from the list to add this template to the host. This will attach to the host all the items, triggers, and graphs required for Docker monitoring that were pre-configured in the template.

      Finally, click the Add button at the bottom of the form to create the host.

      You will see your new host in the list. Wait for a minute and reload the page to see green labels indicating that everything is working fine.

      Hosts

      In this step, you added a new host to the server and used ready-made templates to add the required checks.

      The Zabbix server is now monitoring your Docker server. In the next step, you will launch a test container and explore what metrics Zabbix can collect.

      Step 3 — Accessing Docker Metrics

      If you are running this tutorial in a new environment, then you have no containers running and nothing to monitor yet. In this step, you will start a test container and see what metrics are available. In this tutorial, you will run a test container based on ubuntu.

      Run the following command on the Docker server:

      • sudo docker run --name test_container -it ubuntu bash

      This will launch a test container named test_container using the ubuntu:latest image. The -it flag instructs Docker to allocate a pseudo-TTY connected to the container’s standard input, creating an interactive bash shell in the container.

      The output will look similar to this:

      Output

Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
35807b77a593: Pull complete
Digest: sha256:9d6a8699fb5c9c39cf08a0871bd6219f0400981c570894cd8cbea30d3424a31f
Status: Downloaded newer image for ubuntu:latest

      Note: Do not leave the shell, as you will need it in the next step.

      A couple of minutes after starting the container, Zabbix will automatically find a new container and add the appropriate metrics.

      To see a list of all metrics, click Monitoring and then Hosts in the left navigation bar. Then click Latest data in the corresponding line.

      Hosts

      You will see a table with all the metrics that the Zabbix agent collects for a given host. For each item, you can see the name, the history storage period, the time of the last check, and the last value.

      You can scroll through the list and see the metrics associated with the test_container. You can also enter the name of the container in the Name field and click Apply to find all the relevant metrics.

      Metrics

      To view the history of changes for a specific metric, click Graph (for numerical values) or History (for text values).

      You can also select one or more metrics in the list and click the Display graph button at the very bottom of the list to see several graphs together.

      Select metrics

      Graphs

      In this step, you launched a test container and viewed its metrics in the Zabbix web interface. Next, you will trigger a notification.

      Step 4 — Generating a Test Alert

      In this step, you’ll see how monitoring works by simulating an unexpected shutdown of the container, which triggers a notification from Zabbix.

Begin by simulating a container crash. Execute the following command in the container shell:

• exit 1

      With this command, you terminated the container with error code 1. This exit code is passed on to the caller of docker run, and is recorded in the test container’s metadata.

      To see the alert, click Monitoring and then Dashboard in the left navigation bar of the Zabbix web interface. In a minute or less, you will see the notification “Container has been stopped with error code”:

      Global view

      If you have previously configured email or other notifications, you will also receive a message similar to this:

      Problem started at 11:17:31 on 2021.09.03
      Problem name: Container /test_container: Container has been stopped with error code
      Host: Docker Server
      Severity: Average
      Operational data: 1
      Original problem ID: 103
      

      After checking the notifications, you can restart the container and the problem will disappear automatically.

      To restart the container, run the following command on the Docker server.

      • sudo docker start test_container

You can check that the container is running with this command:

• sudo docker ps

      The output will look similar to this:

      Output

CONTAINER ID   IMAGE    COMMAND   CREATED          STATUS         PORTS     NAMES
c9b8a264c2e1   ubuntu   "bash"    23 minutes ago   Up 7 seconds             test_container

To see the history of all alerts in the Zabbix interface, click Monitoring and then Problems in the left navigation bar. You can use options and filters to see events for a specific period of time, for specific hosts, and so on. To view history, select the Show: History option.

      Problems

      In this step, you simulated a container crash and received a notification in Zabbix.

      Conclusion

      In this tutorial, you learned how to set up a simple monitoring solution to help you track the health of your Docker containers. Now it can alert you to problems and you can analyze the processes taking place in your Docker application.

      With Zabbix, you can monitor not just containers, but also servers, databases, web applications, and much more using ready-made Zabbix official templates.

      To learn more about monitoring infrastructure, check out our Monitoring topic page.




      How To Set Up Laravel, Nginx, and MySQL With Docker Compose on Ubuntu 20.04


      The author selected The FreeBSD Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      Over the past few years, Docker has become a frequently used solution for deploying applications thanks to how it simplifies running and deploying applications in ephemeral containers. When you are using a LEMP application stack, for example, with PHP, Nginx, MySQL and the Laravel framework, Docker can significantly streamline the setup process.

      Docker Compose has further simplified the development process by allowing developers to define their infrastructure, including application services, networks, and volumes, in a single file. Docker Compose offers an efficient alternative to running multiple docker container create and docker container run commands.

      In this tutorial, you will build a web application using the Laravel framework, with Nginx as the web server and MySQL as the database, all inside Docker containers. You will define the entire stack configuration in a docker-compose file, along with configuration files for PHP, MySQL, and Nginx.

      Prerequisites

      Before you start, you will need:

      Step 1 — Downloading Laravel and Installing Dependencies

      As a first step, you will get the latest version of Laravel and install the dependencies for the project, including Composer, the application-level package manager for PHP. We will install these dependencies with Docker to avoid having to install Composer globally.

      First, check that you are in your home directory and clone the latest Laravel release to a directory called laravel-app:

      • cd ~
      • git clone https://github.com/laravel/laravel.git laravel-app

      Move into the laravel-app directory:

      Next, use Docker’s composer image to mount the directories that you will need for your Laravel project and avoid the overhead of installing Composer globally:

      • docker run --rm -v $(pwd):/app composer install

      Using the -v and --rm flags with docker run creates an ephemeral container that will be bind-mounted to your current directory before being removed. This will copy the contents of your ~/laravel-app directory to the container and also ensure that the vendor folder Composer creates inside the container is copied to your current directory.

      As a final step, set permissions on the project directory so that it is owned by your non-root user:

      • sudo chown -R sammy:sammy ~/laravel-app

      This will be important when you write the Dockerfile for your application image in Step 4, as it will allow you to work with your application code and run processes in your container as a non-root user.

      With your application code in place, you can move on to defining your services with Docker Compose.

      Step 2 — Creating the Docker Compose File

      Building your applications with Docker Compose simplifies the process of setting up and versioning your infrastructure. To set up your Laravel application, you will write a docker-compose file that defines your web server, database, and application services.

      Open the file:

      • nano ~/laravel-app/docker-compose.yml

In the docker-compose file, you will define three services: app, webserver, and db. Add the following code to the file, being sure to replace the MYSQL_ROOT_PASSWORD value, defined as an environment variable under the db service, with a strong password of your choice:

      ~/laravel-app/docker-compose.yml

      version: '3'
      services:
      
        #PHP Service
        app:
          build:
            context: .
            dockerfile: Dockerfile
          image: digitalocean.com/php
          container_name: app
          restart: unless-stopped
          tty: true
          environment:
            SERVICE_NAME: app
            SERVICE_TAGS: dev
          working_dir: /var/www
          networks:
            - app-network
      
        #Nginx Service
        webserver:
          image: nginx:alpine
          container_name: webserver
          restart: unless-stopped
          tty: true
          ports:
            - "80:80"
            - "443:443"
          networks:
            - app-network
      
        #MySQL Service
        db:
          image: mysql:5.7.22
          container_name: db
          restart: unless-stopped
          tty: true
          ports:
            - "3306:3306"
          environment:
            MYSQL_DATABASE: laravel
      MYSQL_ROOT_PASSWORD: your_mysql_root_password
            SERVICE_TAGS: dev
            SERVICE_NAME: mysql
          networks:
            - app-network
      
      #Docker Networks
      networks:
        app-network:
          driver: bridge
      

      The services defined here include:

      • app: This service definition contains the Laravel application and runs a custom Docker image, digitalocean.com/php, that you will define in Step 4. It also sets the working_dir in the container to /var/www.
      • webserver: This service definition pulls the nginx:alpine image from Docker and exposes ports 80 and 443.
      • db: This service definition pulls the mysql:5.7.22 image from Docker and defines a few environmental variables, including a database called laravel for your application and the root password for the database. You are free to name the database whatever you would like, and you should replace your_mysql_root_password with your own strong password. This service definition also maps port 3306 on the host to port 3306 on the container.

Each container_name property defines a name for the container, which corresponds to the name of the service. If you don’t define this property, Docker will assign a name to each container by combining a random adjective with the surname of a famous person, separated by an underscore.

      To facilitate communication between containers, the services are connected to a bridge network called app-network. A bridge network uses a software bridge that allows containers connected to the same bridge network to communicate with each other. The bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other. This creates a greater level of security for applications, ensuring that only related services can communicate with one another. It also means that you can define multiple networks and services connecting to related functions: front-end application services can use a frontend network, for example, and back-end services can use a backend network.

      Next, you’ll look at how to add volumes and bind mounts to your service definitions to persist your application data.

      Step 3 — Persisting Data

      Docker has powerful and convenient features for persisting data. In your application, you will make use of volumes and bind mounts for persisting the database, and application and configuration files. Volumes offer flexibility for backups and persistence beyond a container’s lifecycle, while bind mounts facilitate code changes during development, making changes to your host files or directories immediately available in your containers. Your setup will make use of both.

      Warning: By using bind mounts, you make it possible to change the host filesystem through processes running in a container, including creating, modifying, or deleting important system files or directories. This is a powerful ability with security implications, and could impact non-Docker processes on the host system. Use bind mounts with care.

      In the docker-compose file, define a volume called dbdata under the db service definition to persist the MySQL database:

      ~/laravel-app/docker-compose.yml

      ...
      #MySQL Service
      db:
        ...
          volumes:
      - dbdata:/var/lib/mysql
          networks:
            - app-network
        ...
      

      The named volume dbdata persists the contents of the /var/lib/mysql folder present inside the container. This allows you to stop and restart the db service without losing data.

      At the bottom of the file, add the definition for the dbdata volume:

      ~/laravel-app/docker-compose.yml

      ...
      #Volumes
      volumes:
        dbdata:
          driver: local
      

      With this definition in place, you will be able to use this volume across services.

      Next, add a bind mount to the db service for the MySQL configuration files you will create in Step 7:

      ~/laravel-app/docker-compose.yml

      ...
      #MySQL Service
      db:
        ...
          volumes:
            - dbdata:/var/lib/mysql
            - ./mysql/my.cnf:/etc/mysql/my.cnf
        ...
      

      This bind mount binds ~/laravel-app/mysql/my.cnf to /etc/mysql/my.cnf in the container.

      Next, add bind mounts to the webserver service. There will be two: one for your application code and another for the Nginx configuration definition that you will create in Step 6:

      ~/laravel-app/docker-compose.yml

      #Nginx Service
      webserver:
        ...
        volumes:
            - ./:/var/www
            - ./nginx/conf.d/:/etc/nginx/conf.d/
        networks:
            - app-network
      

      The first bind mount binds the application code in the ~/laravel-app directory to the /var/www directory inside the container. The configuration file that you will add to ~/laravel-app/nginx/conf.d/ will also be mounted to /etc/nginx/conf.d/ in the container, allowing you to add or modify the configuration directory’s contents as needed.

      Finally, add the following bind mounts to the app service for the application code and configuration files:

      ~/laravel-app/docker-compose.yml

      #PHP Service
      app:
        ...
        volumes:
             - ./:/var/www
             - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
        networks:
            - app-network
      

      The app service is bind-mounting the ~/laravel-app folder, which contains the application code, to the /var/www folder in the container. This will speed up the development process, since any changes made to your local application directory will be instantly reflected inside the container. You are also binding your PHP configuration file, ~/laravel-app/php/local.ini, to /usr/local/etc/php/conf.d/local.ini inside the container. You will create the local PHP configuration file in Step 5.

      Your docker-compose file will now look like this:

      ~/laravel-app/docker-compose.yml

      version: '3'
      services:
      
        #PHP Service
        app:
          build:
            context: .
            dockerfile: Dockerfile
          image: digitalocean.com/php
          container_name: app
          restart: unless-stopped
          tty: true
          environment:
            SERVICE_NAME: app
            SERVICE_TAGS: dev
          working_dir: /var/www
          volumes:
            - ./:/var/www
            - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
          networks:
            - app-network
      
        #Nginx Service
        webserver:
          image: nginx:alpine
          container_name: webserver
          restart: unless-stopped
          tty: true
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - ./:/var/www
            - ./nginx/conf.d/:/etc/nginx/conf.d/
          networks:
            - app-network
      
        #MySQL Service
        db:
          image: mysql:5.7.22
          container_name: db
          restart: unless-stopped
          tty: true
          ports:
            - "3306:3306"
          environment:
            MYSQL_DATABASE: laravel
            MYSQL_ROOT_PASSWORD: your_mysql_root_password
            SERVICE_TAGS: dev
            SERVICE_NAME: mysql
          volumes:
            - dbdata:/var/lib/mysql/
            - ./mysql/my.cnf:/etc/mysql/my.cnf
          networks:
            - app-network
      
      #Docker Networks
      networks:
        app-network:
          driver: bridge
      #Volumes
      volumes:
        dbdata:
          driver: local
      

      Save the file and exit your editor when you are finished making changes.

      With your docker-compose file written, you can now build the custom image for your application.

      Step 4 — Creating the Dockerfile

      Docker allows you to specify the environment inside of individual containers with a Dockerfile. A Dockerfile enables you to create custom images that you can use to install the software required by your application and configure settings based on your requirements. You can push the custom images you create to Docker Hub or any private registry.

      Your Dockerfile will be located in your ~/laravel-app directory. Create the file:

      • nano ~/laravel-app/Dockerfile

      This Dockerfile will set the base image and specify the necessary commands and instructions to build the Laravel application image. Add the following code to the file:

~/laravel-app/Dockerfile

      FROM php:7.4-fpm
      
      # Copy composer.lock and composer.json
      COPY composer.lock composer.json /var/www/
      
      # Set working directory
      WORKDIR /var/www
      
      # Install dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libpng-dev \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    zip \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    git \
    curl \
    libzip-dev
      
      # Clear cache
      RUN apt-get clean && rm -rf /var/lib/apt/lists/*
      
      # Install extensions
      RUN docker-php-ext-install pdo_mysql mbstring zip exif pcntl
RUN docker-php-ext-configure gd --with-freetype --with-jpeg
      RUN docker-php-ext-install gd
      
      # Install composer
      RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
      
      # Add user for laravel application
      RUN groupadd -g 1000 www
      RUN useradd -u 1000 -ms /bin/bash -g www www
      
      # Copy existing application directory contents
      COPY . /var/www
      
      # Copy existing application directory permissions
      COPY --chown=www:www . /var/www
      
      # Change current user to www
      USER www
      
      # Expose port 9000 and start php-fpm server
      EXPOSE 9000
      CMD ["php-fpm"]
      

First, the Dockerfile creates an image on top of the php:7.4-fpm Docker image. This is a Debian-based image that has the PHP FastCGI implementation PHP-FPM installed. The file also installs the prerequisite packages for Laravel: the pdo_mysql, mbstring, zip, exif, pcntl, and gd PHP extensions, along with Composer.

      The RUN directive specifies the commands to update, install, and configure settings inside the container, including creating a dedicated user and group called www. The WORKDIR instruction specifies the /var/www directory as the working directory for the application.

      Creating a dedicated user and group with restricted permissions mitigates the inherent vulnerability when running Docker containers, which run by default as root. Instead of running this container as root, you’ve created the www user, who has read/write access to the /var/www folder thanks to the COPY instruction that you are using with the --chown flag to copy the application folder’s permissions.

      Finally, the EXPOSE command exposes a port in the container, 9000, for the php-fpm server. CMD specifies the command that should run once the container is created. Here, CMD specifies "php-fpm", which will start the server.

      Save the file and exit your editor when you are finished making changes.

      You can now move on to defining your PHP configuration.

      Step 5 — Configuring PHP

      Now that you have defined your infrastructure in the docker-compose file, you can configure the PHP service to act as a PHP processor for incoming requests from Nginx.

      To configure PHP, you will create the local.ini file inside the php folder. This is the file that you bind-mounted to /usr/local/etc/php/conf.d/local.ini inside the container in Step 2. Creating this file will allow you to override the default php.ini file that PHP reads when it starts.

Create the php directory:

• mkdir ~/laravel-app/php

      Next, open the local.ini file:

      • nano ~/laravel-app/php/local.ini

      To demonstrate how to configure PHP, you’ll add the following code to set size limitations for uploaded files:

      ~/laravel-app/php/local.ini

      upload_max_filesize=40M
      post_max_size=40M
      

      The upload_max_filesize and post_max_size directives set the maximum allowed size for uploaded files, and demonstrate how you can set php.ini configurations from your local.ini file. You can put any PHP-specific configuration that you want to override in the local.ini file.

      Save the file and exit your editor.

      With your PHP local.ini file in place, you can move on to configuring Nginx.

      Step 6 — Configuring Nginx

      With the PHP service configured, you can modify the Nginx service to use PHP-FPM as the FastCGI server to serve dynamic content. The FastCGI server is based on a binary protocol for interfacing interactive programs with a web server. For more information, please refer to this article on Understanding and Implementing FastCGI Proxying in Nginx.

      To configure Nginx, you will create an app.conf file with the service configuration in the ~/laravel-app/nginx/conf.d/ folder.

      First, create the nginx/conf.d/ directory:

      • mkdir -p ~/laravel-app/nginx/conf.d

      Next, create the app.conf configuration file:

      • nano ~/laravel-app/nginx/conf.d/app.conf

      Add the following code to the file to specify your Nginx configuration:

      ~/laravel-app/nginx/conf.d/app.conf

      server {
          listen 80;
          index index.php index.html;
          error_log  /var/log/nginx/error.log;
          access_log /var/log/nginx/access.log;
          root /var/www/public;
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
              fastcgi_pass app:9000;
              fastcgi_index index.php;
              include fastcgi_params;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_param PATH_INFO $fastcgi_path_info;
          }
          location / {
              try_files $uri $uri/ /index.php?$query_string;
              gzip_static on;
          }
      }
      

      The server block defines the configuration for the Nginx web server with the following directives:

      • listen: This directive defines the port on which the server will listen to incoming requests.
      • error_log and access_log: These directives define the files for writing logs.
      • root: This directive sets the root folder path, forming the complete path to any requested file on the local file system.

In the php location block, the fastcgi_pass directive specifies that the app service is listening on a TCP socket on port 9000. This makes the PHP-FPM server listen over the network rather than on a Unix socket. Though a Unix socket has a slight advantage in speed over a TCP socket, it does not have a network protocol and thus skips the network stack. For cases where services are located on the same host, a Unix socket may make sense, but in cases where services run on different hosts, a TCP socket offers the advantage of allowing you to connect to distributed services. Because your app container runs separately from your webserver container, each with its own endpoint on the Docker network, a TCP socket makes the most sense for your configuration.

      Save the file and exit your editor when you are finished making changes.

      Thanks to the bind mount you created in Step 2, any changes you make inside the nginx/conf.d/ folder will be directly reflected inside the webserver container.

      Next, you’ll look at and configure your MySQL settings.

      Step 7 — Configuring MySQL

      With PHP and Nginx configured, you can enable MySQL to act as the database for your application.

      To configure MySQL, you will create the my.cnf file in the mysql folder. This is the file that you bind-mounted to /etc/mysql/my.cnf inside the container in Step 2. This bind mount allows you to override the my.cnf settings as and when required.

      To demonstrate how this works, you’ll add settings to the my.cnf file that enable the general query log and specify the log file.

      First, create the mysql directory:

      • mkdir ~/laravel-app/mysql

      Next, make the my.cnf file:

      • nano ~/laravel-app/mysql/my.cnf

      In the file, add the following code to enable the query log and set the log file location:

      ~/laravel-app/mysql/my.cnf

      [mysqld]
      general_log = 1
      general_log_file = /var/lib/mysql/general.log
      

      This my.cnf file enables logs, defining the general_log setting as 1 to allow general logs. The general_log_file setting specifies where the logs will be stored.

      Save the file and exit your editor.

      Your next step will be to start the containers.

      Step 8 — Modifying Environment Settings and Running the Containers

Now that you have defined all of your services in your docker-compose file and created the configuration files for these services, you can start the containers. As a final step, though, you will make a copy of the .env.example file that Laravel includes by default and name the copy .env, which is the file Laravel expects to define its environment:

• cp .env.example .env

      You can now modify the .env file on the app container to include specific details about your setup.

Open the file using nano or your text editor of choice:

• nano .env

      Find the block that specifies DB_CONNECTION and update it to reflect the specifics of your setup. You will modify the following fields:

      • DB_HOST will be your db database container.
      • DB_DATABASE will be the laravel database.
      • DB_USERNAME will be the username you will use for your database. In this case, you will use laraveluser.
      • DB_PASSWORD will be the secure password you would like to use for this user account.

      /var/www/.env

      DB_CONNECTION=mysql
      DB_HOST=db
      DB_PORT=3306
      DB_DATABASE=laravel
      DB_USERNAME=laraveluser
      DB_PASSWORD=your_laravel_db_password
      

      Save your changes and exit your editor.

With all of your services defined in your docker-compose file, use the following single command to start all of the containers, create the volumes, and set up and connect the networks:

• docker-compose up -d

      When you run docker-compose up for the first time, it will download all of the necessary Docker images, which might take a while. Once the images are downloaded and stored in your local machine, Compose will create your containers. The -d flag daemonizes the process, running your containers in the background.

Once the process is complete, use the following command to list all of the running containers:

• docker ps

      You will see the following output with details about your app, webserver, and db containers:

      Output

CONTAINER ID        NAMES               IMAGE                   STATUS              PORTS
c31b7b3251e0        db                  mysql:5.7.22            Up 2 seconds        0.0.0.0:3306->3306/tcp
ed5a69704580        app                 digitalocean.com/php    Up 2 seconds        9000/tcp
5ce4ee31d7c0        webserver           nginx:alpine            Up 2 seconds        0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp

      The CONTAINER ID in this output is a unique identifier for each container, while NAMES lists the service name associated with each. You can use both of these identifiers to access the containers. IMAGE defines the image name for each container, while STATUS provides information about the container’s state: whether it’s running, restarting, or stopped.

      You’ll now use docker-compose exec to set the application key for the Laravel application. The docker-compose exec command allows you to run specific commands in containers.

      The following command will generate a key and copy it to your .env file, ensuring that your user sessions and encrypted data remain secure:

      • docker-compose exec app php artisan key:generate

      You now have the environment settings required to run your application. To cache these settings into a file, which will boost your application’s load speed, run:

      • docker-compose exec app php artisan config:cache

      Your configuration settings will be loaded into /var/www/bootstrap/cache/config.php on the container.

      As a final step, visit http://your_server_ip in the browser. You will see the following home page for your Laravel application:

      Laravel Home Page

      With your containers running and your configuration information in place, you can move on to configuring your user information for the laravel database on the db container.

      Step 9 — Creating a User for MySQL

      The default MySQL installation only creates the root administrative account, which has unlimited privileges on the database server. In general, it’s better to avoid using the root administrative account when interacting with the database. Instead, let’s create a dedicated database user for your application’s Laravel database.

      To create a new user, execute an interactive bash shell on the db container with docker-compose exec:

      • docker-compose exec db bash

Inside the container, log into the MySQL root administrative account:

• mysql -u root -p

      You will be prompted for the password you set for the MySQL root account during installation in your docker-compose file.

Start by checking for the database called laravel, which you defined in your docker-compose file. Run the show databases command to check for existing databases:

• show databases;

      You will see the laravel database listed in the output:

      Output

+--------------------+
| Database           |
+--------------------+
| information_schema |
| laravel            |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

      Next, create the user account that will be allowed to access this database. Your username will be laraveluser, though you can replace this with another name if you’d prefer. Just be sure that your username and password here match the details you set in your .env file in the previous step:

      • GRANT ALL ON laravel.* TO 'laraveluser'@'%' IDENTIFIED BY 'your_laravel_db_password';

Flush the privileges to notify the MySQL server of the changes:

• FLUSH PRIVILEGES;

Exit MySQL:

• EXIT;

Finally, exit the container:

• exit

      You have configured the user account for your Laravel application database and are ready to migrate your data and work with the Tinker console.

      Step 10 — Migrating Data and Working with the Tinker Console

      With your application running, you can migrate your data and experiment with the tinker command, which will initiate a PsySH console with Laravel preloaded. PsySH is a runtime developer console and interactive debugger for PHP, and Tinker is a REPL specifically for Laravel. Using the tinker command will allow you to interact with your Laravel application from the command line in an interactive shell.

      First, test the connection to MySQL by running the Laravel artisan migrate command, which creates a migrations table in the database from inside the container:

      • docker-compose exec app php artisan migrate

      This command will migrate the default Laravel tables. The output confirming the migration will look like this:

      Output

Migration table created successfully.
Migrating: 2014_10_12_000000_create_users_table
Migrated:  2014_10_12_000000_create_users_table
Migrating: 2014_10_12_100000_create_password_resets_table
Migrated:  2014_10_12_100000_create_password_resets_table

      Once the migration is complete, you can run a query to check if you are properly connected to the database using the tinker command:

      • docker-compose exec app php artisan tinker

      Test the MySQL connection by getting the data you just migrated:

      • DB::table('migrations')->get();

      You will see output that looks like this:

      Output

=> Illuminate\Support\Collection {#2856
     all: [
       {#2862
         +"id": 1,
         +"migration": "2014_10_12_000000_create_users_table",
         +"batch": 1,
       },
       {#2865
         +"id": 2,
         +"migration": "2014_10_12_100000_create_password_resets_table",
         +"batch": 1,
       },
     ],
   }

      You can use tinker to interact with your databases and to experiment with services and models.

      With your Laravel application in place, you are ready for further development and experimentation.

      Conclusion

      You now have a LEMP stack application running on your server, which you’ve tested by accessing the Laravel welcome page and creating MySQL database migrations. Key to the simplicity of this installation is Docker Compose, which allows you to create a group of Docker containers, defined in a single file, with a single command.


