
How To Build Docker Images and Host a Docker Image Repository with GitLab


Introduction

Containerization is quickly becoming the most widely accepted method of packaging and deploying applications in cloud environments. The standardization it provides, along with its resource efficiency (compared to full virtual machines) and flexibility, make it a great enabler of the modern DevOps mindset. Many interesting cloud-native deployment, orchestration, and monitoring strategies become possible once your applications and microservices are fully containerized.

Docker containers are by far the most common container type today. Although public Docker image repositories like Docker Hub are full of containerized open-source software images that you can docker pull today, for private code you will need to either pay a service to build and store your images, or run your own software to do so.

GitLab Community Edition is a self-hosted software package that provides Git repository hosting, project tracking, CI/CD services, and a Docker image registry, among other features. In this tutorial we will use GitLab's continuous integration service to build Docker images from an example Node.js application. These images will then be tested and uploaded to our own private Docker registry.

Prerequisites

Before we begin, we need to set up a secure GitLab server and a GitLab CI runner to execute continuous integration tasks. The sections below provide links and further details.

A GitLab Server Secured with SSL

To store our source code, run CI/CD tasks, and host a Docker registry, we need a GitLab instance installed on an Ubuntu 16.04 server. GitLab currently recommends a server with at least 2 CPU cores and 4GB of RAM. Additionally, we will secure the server with SSL certificates from Let's Encrypt. To do this, we will need a domain name pointing at the server.

You can complete these prerequisites with the following tutorials:

A GitLab CI Runner

The tutorial How To Set Up Continuous Integration Pipelines with GitLab CI on Ubuntu 16.04 will give you an overview of GitLab's CI (continuous integration) service and show you how to set up a CI runner to process jobs. We will build on top of the demo application and runner infrastructure created in that tutorial.

Step 1 — Setting Up a Privileged GitLab CI Runner

In the prerequisite continuous integration tutorial, we set up a GitLab runner using sudo gitlab-runner register and its interactive configuration process. This runner is capable of running builds and tests of software inside isolated Docker containers.

However, in order to build Docker images, our runner needs full access to a Docker service itself. The recommended way to configure this is to use Docker's official docker-in-docker image to run the jobs. This requires granting the runner a privileged execution mode, so we will create a second runner with this mode enabled.
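For reference, registering a privileged runner (as we will do below) results in an entry like the following in the runner host's /etc/gitlab-runner/config.toml. This is only an illustrative sketch: the token is a placeholder generated by GitLab at registration time, and other defaults will vary on your system:

```toml
# Illustrative fragment of /etc/gitlab-runner/config.toml after registration.
# The token below is a placeholder; GitLab generates the real value.
[[runners]]
  name = "docker-builder"
  url = "https://gitlab.example.com/"
  token = "generated-by-gitlab"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true          # required for docker-in-docker builds
```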

Note: Granting the runner privileged mode basically disables all of the security advantages of using containers. Unfortunately, the other methods of enabling Docker-capable runners carry similar security implications. Please see the official GitLab documentation on Docker Build to learn more about the different runner options and which is best for your situation.

Because there are security implications to using a privileged runner, we are going to create a project-specific runner that will only accept Docker jobs on our hello_hapi project (GitLab admins can always manually add this runner to other projects later). From our hello_hapi project page, click Settings at the bottom of the left-hand menu, then click CI/CD in the submenu:

Now click the Expand button next to the Runners settings section:

You will find some information about setting up a Specific Runner, including a registration token. Take note of this token. When we use it to register a new runner, the runner will be locked to this project only.

While we're on this page, click the Disable shared Runners button. We want to make sure our Docker jobs always run on our privileged runner. If a non-privileged shared runner were available, GitLab might choose to use it, which would result in build errors.

Log in to the server that has your current CI runner on it. If you don't have a machine already set up with runners, go back and complete the Installing the GitLab CI Runner Service section of the prerequisite tutorial before proceeding.

Now, run the following command to set up the privileged project-specific runner:

• sudo gitlab-runner register -n \
    --url https://gitlab.example.com/ \
    --registration-token your-token \
    --executor docker \
    --description "docker-builder" \
    --docker-image "docker:latest" \
    --docker-privileged

      Output

Registering runner... succeeded                     runner=61SR6BwV
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!

Be sure to substitute your own information. We set all of our runner options on the command line instead of using the interactive prompts, because the prompts don't allow us to specify --docker-privileged mode.

Your runner is now set up, registered, and running. To verify, switch back to your browser. Click the wrench icon in the main GitLab menu bar, then click Runners in the left-hand menu. Your runners will be listed:

Now that we have a runner capable of building Docker images, let's set up a private Docker registry for it to push images to.

Step 2 — Setting Up GitLab's Docker Registry

Setting up your own Docker registry lets you push and pull images from your own private server, increasing security and reducing the dependencies your workflow has on outside services.

GitLab will set up a private Docker registry with just a few configuration updates. First we'll set up the URL where the registry will reside. Then we will (optionally) configure the registry to use an S3-compatible object storage service to store its data.

SSH in to your GitLab server, then open the GitLab configuration file:

      • sudo nano /etc/gitlab/gitlab.rb

Scroll down to the Container Registry settings section. We're going to uncomment the registry_external_url line and set it to our GitLab hostname with a port number of 5555:

      /etc/gitlab/gitlab.rb

      
      registry_external_url 'https://gitlab.example.com:5555'
      

Next, add the following two lines to tell the registry where to find our Let's Encrypt certificates:

      /etc/gitlab/gitlab.rb

      
      registry_nginx['ssl_certificate'] = "/etc/letsencrypt/live/gitlab.example.com/fullchain.pem"
      registry_nginx['ssl_certificate_key'] = "/etc/letsencrypt/live/gitlab.example.com/privkey.pem"
      

Save and close the file, then reconfigure GitLab:

      • sudo gitlab-ctl reconfigure

      Output

. . .
gitlab Reconfigured!

Update the firewall to allow traffic to the registry port:

• sudo ufw allow 5555

Now switch to another machine with Docker installed, and log in to the private Docker registry. If you don't have Docker on your local development computer, you can use whichever server is set up to run your GitLab CI jobs, since it has Docker installed:

      • docker login gitlab.example.com:5555

You will be prompted for your username and password. Use your GitLab credentials to log in.

      Output

      Login Succeeded

Success! The registry is set up and working. Currently it will store files on the GitLab server's local filesystem. If you'd like to use an object storage service instead, continue with this section. If not, skip down to Step 3.

To set up an object storage backend for the registry, we need to know the following information about our object storage service:

      • Access Key

      • Secret Key

• Region (us-east-1, for example), if using Amazon S3, or Region Endpoint if using an S3-compatible service (https://nyc.digitaloceanspaces.com)

• Bucket Name

If you're using DigitalOcean Spaces, you can find out how to set up a new Space and get the above information by reading How To Create a DigitalOcean Space and API Key.

When you have your object storage information, open the GitLab configuration file:

      • sudo nano /etc/gitlab/gitlab.rb

Once again, scroll down to the container registry section. Look for the registry['storage'] block, uncomment it, and update it to the following, again making sure to substitute your own information where appropriate:

      /etc/gitlab/gitlab.rb

      
      registry['storage'] = {
        's3' => {
    'accesskey' => 'your-key',
    'secretkey' => 'your-secret',
    'bucket' => 'your-bucket-name',
          'region' => 'nyc3',
          'regionendpoint' => 'https://nyc3.digitaloceanspaces.com'
        }
      }
      

If you're using Amazon S3, you only need region and not regionendpoint. If you're using an S3-compatible service such as Spaces, you will need regionendpoint. In that case, region doesn't actually configure anything and the value you enter doesn't matter, but it still needs to be present and not blank.
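For comparison, a plain Amazon S3 configuration would look something like the following sketch, with the key, secret, and bucket name as placeholders and us-east-1 chosen as an example region:

```ruby
# Illustrative gitlab.rb fragment for plain Amazon S3 (placeholder values)
registry['storage'] = {
  's3' => {
    'accesskey' => 'your-key',
    'secretkey' => 'your-secret',
    'bucket' => 'your-bucket-name',
    'region' => 'us-east-1'   # no regionendpoint needed for Amazon S3
  }
}
```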

Save and close the file.

Note: There is currently a bug where the registry will shut down after thirty seconds if your object storage bucket is empty. To avoid this, put a file in your bucket before running the next step. You can remove it later, after the registry has added its own objects.

If you're using DigitalOcean Spaces, you can drag and drop a file to upload it using the Control Panel interface.

Reconfigure GitLab one more time:

      • sudo gitlab-ctl reconfigure

On your other Docker machine, log in to the registry again to make sure all is well:

      • docker login gitlab.example.com:5555

You should get a Login Succeeded message.

Now that we've got our Docker registry set up, let's update our application's CI configuration to build and test our app, and push Docker images to our private registry.

Step 3 — Updating .gitlab-ci.yml and Building a Docker Image

Note: If you didn't complete the prerequisite GitLab CI article, you'll need to copy over the example repository to your GitLab server. Follow the Copying the Example Repository From GitHub section to do so.

To get our app building in Docker, we need to update the .gitlab-ci.yml file. You can edit this file right in GitLab by clicking on it from the main project page, then clicking the Edit button. Alternately, you could clone the repo to your local machine, edit the file, then git push it back up to GitLab. That would look like this:

• git clone git@gitlab.example.com:sammy/hello_hapi.git
      • cd hello_hapi
      • # edit the file w/ your favorite editor
      • git commit -am "updating ci configuration"
      • git push

First, delete everything in the file, then paste in the following configuration:

      .gitlab-ci.yml

      
      image: docker:latest
      services:
      - docker:dind
      
      stages:
      - build
      - test
      - release
      
      variables:
        TEST_IMAGE: gitlab.example.com:5555/sammy/hello_hapi:$CI_COMMIT_REF_NAME
        RELEASE_IMAGE: gitlab.example.com:5555/sammy/hello_hapi:latest
      
      before_script:
        - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN gitlab.example.com:5555
      
      build:
        stage: build
        script:
          - docker build --pull -t $TEST_IMAGE .
          - docker push $TEST_IMAGE
      
      test:
        stage: test
        script:
          - docker pull $TEST_IMAGE
          - docker run $TEST_IMAGE npm test
      
      release:
        stage: release
        script:
          - docker pull $TEST_IMAGE
          - docker tag $TEST_IMAGE $RELEASE_IMAGE
          - docker push $RELEASE_IMAGE
        only:
          - master
      

Be sure to update the highlighted URLs and usernames with your own information, then save with the Commit changes button in GitLab. If you're updating the file outside of GitLab, commit the changes and git push back up to GitLab.

This new config file tells GitLab to use the latest docker image (image: docker:latest) and link it to the docker-in-docker service (docker:dind). It then defines build, test, and release stages. The build stage builds the Docker image using the Dockerfile provided in the repo, then uploads it to our Docker image registry. If that succeeds, the test stage will download the image we just built and run the npm test command inside it. If the test stage is successful, the release stage will pull the image, tag it as hello_hapi:latest, and push it back up to the registry.
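The build stage expects a Dockerfile at the root of the repository. The prerequisite repository ships its own Dockerfile, but for orientation, a minimal Dockerfile for a Node.js app like this might look something like the sketch below (the base image and port are assumptions, not the repo's actual file):

```dockerfile
# Minimal Node.js image sketch (illustrative only; the example repo
# provides its own Dockerfile, which may differ from this)
FROM node:8

WORKDIR /usr/src/app

# Install dependencies first so Docker can cache this layer
COPY package.json ./
RUN npm install

# Copy in the application source
COPY . .

EXPOSE 3000
CMD [ "npm", "start" ]
```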

Depending on your workflow, you could also add further test stages, or even deploy stages that push the app to a staging or production environment.
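To make the tagging scheme concrete, here is how the two image names in the config are composed from the registry host, project path, and branch name. This is a local illustration only; the CI_COMMIT_REF_NAME value below is an assumption standing in for the variable GitLab sets on each job:

```shell
# Compose the same image names the pipeline uses, outside of CI.
REGISTRY=gitlab.example.com:5555      # registry host from Step 2
PROJECT=sammy/hello_hapi              # namespace/project on GitLab
CI_COMMIT_REF_NAME=master             # GitLab sets this per job; assumed here

TEST_IMAGE="$REGISTRY/$PROJECT:$CI_COMMIT_REF_NAME"
RELEASE_IMAGE="$REGISTRY/$PROJECT:latest"

echo "$TEST_IMAGE"      # gitlab.example.com:5555/sammy/hello_hapi:master
echo "$RELEASE_IMAGE"   # gitlab.example.com:5555/sammy/hello_hapi:latest
```

Because the branch name is part of the test tag, each branch gets its own image in the registry, while only master builds are promoted to the latest tag.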

Updating the configuration file should have triggered a new build. Return to the hello_hapi project in GitLab and click on the CI status indicator for the commit:

On the resulting page, you can click on any of the stages to see their progress:

Eventually, all stages should indicate they were successful by showing green check mark icons. We can find the Docker images that were just built by clicking the Registry item in the left-hand menu:

If you click the little "document" icon next to the image name, it will copy the appropriate docker pull ... command to your clipboard. You can then pull and run your image:

      • docker pull gitlab.example.com:5555/sammy/hello_hapi:latest
      • docker run -it --rm -p 3000:3000 gitlab.example.com:5555/sammy/hello_hapi:latest

      Output

> [email protected] start /usr/src/app
> node app.js
Server running at: http://56fd5df5ddd3:3000

The image has been pulled down from the registry and started in a container. Switch to your browser and connect to the app on port 3000 to test. In this case we're running the container on our local machine, so we can access it via localhost at the following URL:

      http://localhost:3000/hello/test
      

      Output

      Hello, test!

Success! You can stop the container with CTRL-C. From now on, every time we push new code to the master branch of our repository, we'll automatically build and test a new hello_hapi:latest image.

Conclusion

In this tutorial we set up a new GitLab runner to build Docker images, created a private Docker registry to store them in, and updated a Node.js app to be built and tested inside of Docker containers.

To learn more about the various components used in this setup, you can read the official documentation of GitLab CE, the GitLab Container Registry, and Docker.

By Brian Boucheron




      How to Install and Use Docker on Debian 9


      A previous version of this tutorial was written by finid.

      Introduction

      Docker is an application that simplifies the process of managing application processes in containers. Containers let you run your applications in resource-isolated processes. They’re similar to virtual machines, but containers are more portable, more resource-friendly, and more dependent on the host operating system.

      For a detailed introduction to the different components of a Docker container, check out The Docker Ecosystem: An Introduction to Common Components.

      In this tutorial, you’ll install and use Docker Community Edition (CE) on Debian 9. You’ll install Docker itself, work with containers and images, and push an image to a Docker Repository.

      Prerequisites

      To follow this tutorial, you will need the following:

      • One Debian 9 server set up by following the Debian 9 initial server setup guide, including a sudo non-root user and a firewall.
      • An account on Docker Hub if you wish to create your own images and push them to Docker Hub, as shown in Steps 7 and 8.

      Step 1 — Installing Docker

      The Docker installation package available in the official Debian repository may not be the latest version. To ensure we get the latest version, we’ll install Docker from the official Docker repository. To do that, we’ll add a new package source, add the GPG key from Docker to ensure the downloads are valid, and then install the package.

      First, update your existing list of packages:

      • sudo apt update

      Next, install a few prerequisite packages which let apt use packages over HTTPS:

      • sudo apt install apt-transport-https ca-certificates curl gnupg2 software-properties-common

      Then add the GPG key for the official Docker repository to your system:

      • curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

      Add the Docker repository to APT sources:

      • sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"

      Next, update the package database with the Docker packages from the newly added repo:

      • sudo apt update

      Make sure you are about to install from the Docker repo instead of the default Debian repo:

      • apt-cache policy docker-ce

      You'll see output like this, although the version number for Docker may be different:

      Output of apt-cache policy docker-ce

      docker-ce:
        Installed: (none)
        Candidate: 18.06.1~ce~3-0~debian
        Version table:
           18.06.1~ce~3-0~debian 500
              500 https://download.docker.com/linux/debian stretch/stable amd64 Packages
      

      Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Debian 9 (stretch).

      Finally, install Docker:

      • sudo apt install docker-ce

      Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it's running:

      • sudo systemctl status docker

      The output should be similar to the following, showing that the service is active and running:

      Output

● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2018-07-05 15:08:39 UTC; 2min 55s ago
     Docs: https://docs.docker.com
 Main PID: 21319 (dockerd)
   CGroup: /system.slice/docker.service
           ├─21319 /usr/bin/dockerd -H fd://
           └─21326 docker-containerd --config /var/run/docker/containerd/containerd.toml

      Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We'll explore how to use the docker command later in this tutorial.

      Step 2 — Executing the Docker Command Without Sudo (Optional)

By default, the docker command can only be run by the root user or by a user in the docker group, which is automatically created during Docker's installation process. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you'll get output like this:

      Output

      docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?. See 'docker run --help'.

      If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

      • sudo usermod -aG docker ${USER}

      To apply the new group membership, log out of the server and back in, or type the following:

      • su - ${USER}

      You will be prompted to enter your user's password to continue.

      Confirm that your user is now added to the docker group by typing:

      • id -nG

      Output

      sammy sudo docker

      If you need to add a user to the docker group that you're not logged in as, declare that username explicitly using:

      • sudo usermod -aG docker username

      The rest of this article assumes you are running the docker command as a user in the docker group. If you choose not to, please prepend the commands with sudo.

      Let's explore the docker command next.

      Step 3 — Using the Docker Command

      Using docker consists of passing it a chain of options and commands followed by arguments. The syntax takes this form:

      • docker [option] [command] [arguments]

      To view all available subcommands, type:

      • docker

      As of Docker 18, the complete list of available subcommands includes:

      Output

attach      Attach local standard input, output, and error streams to a running container
build       Build an image from a Dockerfile
commit      Create a new image from a container's changes
cp          Copy files/folders between a container and the local filesystem
create      Create a new container
diff        Inspect changes to files or directories on a container's filesystem
events      Get real time events from the server
exec        Run a command in a running container
export      Export a container's filesystem as a tar archive
history     Show the history of an image
images      List images
import      Import the contents from a tarball to create a filesystem image
info        Display system-wide information
inspect     Return low-level information on Docker objects
kill        Kill one or more running containers
load        Load an image from a tar archive or STDIN
login       Log in to a Docker registry
logout      Log out from a Docker registry
logs        Fetch the logs of a container
pause       Pause all processes within one or more containers
port        List port mappings or a specific mapping for the container
ps          List containers
pull        Pull an image or a repository from a registry
push        Push an image or a repository to a registry
rename      Rename a container
restart     Restart one or more containers
rm          Remove one or more containers
rmi         Remove one or more images
run         Run a command in a new container
save        Save one or more images to a tar archive (streamed to STDOUT by default)
search      Search the Docker Hub for images
start       Start one or more stopped containers
stats       Display a live stream of container(s) resource usage statistics
stop        Stop one or more running containers
tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top         Display the running processes of a container
unpause     Unpause all processes within one or more containers
update      Update configuration of one or more containers
version     Show the Docker version information
wait        Block until one or more containers stop, then print their exit codes

      To view the options available to a specific command, type:

      • docker docker-subcommand --help

      To view system-wide information about Docker, use:

      • docker info

      Let's explore some of these commands. We'll start by working with images.

      Step 4 — Working with Docker Images

      Docker containers are built from Docker images. By default, Docker pulls these images from Docker Hub, a Docker registry managed by Docker, the company behind the Docker project. Anyone can host their Docker images on Docker Hub, so most applications and Linux distributions you'll need will have images hosted there.

      To check whether you can access and download images from Docker Hub, type:

      • docker run hello-world

The output will indicate that Docker is working correctly:

      Output

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9db2ca6ccae0: Pull complete
Digest: sha256:4b8ff392a12ed9ea17784bd3c9a8b1fa3299cac44aca35a85c90c5e3c7afacdc
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.
...

      Docker was initially unable to find the hello-world image locally, so it downloaded the image from Docker Hub, which is the default repository. Once the image downloaded, Docker created a container from the image and the application within the container executed, displaying the message.

      You can search for images available on Docker Hub by using the docker command with the search subcommand. For example, to search for the Ubuntu image, type:

      • docker search ubuntu

The script will crawl Docker Hub and return a listing of all images whose names match the search string. In this case, the output will be similar to this:

      Output

NAME                                                   DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
ubuntu                                                 Ubuntu is a Debian-based Linux operating sys…   8320    [OK]
dorowu/ubuntu-desktop-lxde-vnc                         Ubuntu with openssh-server and NoVNC            214                [OK]
rastasheep/ubuntu-sshd                                 Dockerized SSH service, built on top of offi…   170                [OK]
consol/ubuntu-xfce-vnc                                 Ubuntu container with "headless" VNC session…   128                [OK]
ansible/ubuntu14.04-ansible                            Ubuntu 14.04 LTS with ansible                   95                 [OK]
ubuntu-upstart                                         Upstart is an event-based replacement for th…   88      [OK]
neurodebian                                            NeuroDebian provides neuroscience research s…   53      [OK]
1and1internet/ubuntu-16-nginx-php-phpmyadmin-mysql-5   ubuntu-16-nginx-php-phpmyadmin-mysql-5          43                 [OK]
ubuntu-debootstrap                                     debootstrap --variant=minbase --components=m…   39      [OK]
nuagebec/ubuntu                                        Simple always updated Ubuntu docker images w…   23                 [OK]
tutum/ubuntu                                           Simple Ubuntu docker images with SSH access     18
i386/ubuntu                                            Ubuntu is a Debian-based Linux operating sys…   13
1and1internet/ubuntu-16-apache-php-7.0                 ubuntu-16-apache-php-7.0                        12                 [OK]
ppc64le/ubuntu                                         Ubuntu is a Debian-based Linux operating sys…   12
eclipse/ubuntu_jdk8                                    Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, …   6                  [OK]
darksheer/ubuntu                                       Base Ubuntu Image -- Updated hourly             4                  [OK]
codenvy/ubuntu_jdk8                                    Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, …   4                  [OK]
1and1internet/ubuntu-16-nginx-php-5.6-wordpress-4      ubuntu-16-nginx-php-5.6-wordpress-4             3                  [OK]
pivotaldata/ubuntu                                     A quick freshening-up of the base Ubuntu doc…   2
1and1internet/ubuntu-16-sshd                           ubuntu-16-sshd                                  1                  [OK]
ossobv/ubuntu                                          Custom ubuntu image from scratch (based on o…   0
smartentry/ubuntu                                      ubuntu with smartentry                          0                  [OK]
1and1internet/ubuntu-16-healthcheck                    ubuntu-16-healthcheck                           0                  [OK]
pivotaldata/ubuntu-gpdb-dev                            Ubuntu images for GPDB development              0
paasmule/bosh-tools-ubuntu                             Ubuntu based bosh-cli                           0                  [OK]
...

      In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you've identified the image that you would like to use, you can download it to your computer using the pull subcommand.

      Execute the following command to download the official ubuntu image to your computer:

      • docker pull ubuntu

      You'll see the following output:

      Output

Using default tag: latest
latest: Pulling from library/ubuntu
6b98dfc16071: Pull complete
4001a1209541: Pull complete
6319fc68c576: Pull complete
b24603670dc3: Pull complete
97f170c87c6f: Pull complete
Digest: sha256:5f4bdc3467537cbbe563e80db2c3ec95d548a9145d64453b06939c4592d67b6d
Status: Downloaded newer image for ubuntu:latest

      After an image has been downloaded, you can then run a container using the downloaded image with the run subcommand. As you saw with the hello-world example, if an image has not been downloaded when docker is executed with the run subcommand, the Docker client will first download the image, then run a container using it.

      To see the images that have been downloaded to your computer, type:

      • docker images

      The output should look similar to the following:

      Output

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              16508e5c265d        13 days ago         84.1MB
hello-world         latest              2cb0d9787c4d        7 weeks ago         1.85kB

      As you'll see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which may then be uploaded (pushed is the technical term) to Docker Hub or other Docker registries.

      Let's look at how to run containers in more detail.

      Step 5 — Running a Docker Container

      The hello-world container you ran in the previous step is an example of a container that runs and exits after emitting a test message. Containers can be much more useful than that, and they can be interactive. After all, they are similar to virtual machines, only more resource-friendly.

      As an example, let's run a container using the latest image of Ubuntu. The combination of the -i and -t switches gives you interactive shell access into the container:

      • docker run -it ubuntu

      Your command prompt should change to reflect the fact that you're now working inside the container and should take this form:

      Output

      root@d9b100f2f636:/#

      Note the container id in the command prompt. In this example, it is d9b100f2f636. You'll need that container ID later to identify the container when you want to remove it.

      Now you can run any command inside the container. For example, let's update the package database inside the container. You don't need to prefix any command with sudo, because you're operating inside the container as the root user:

      • apt update

      Then install any application in it. Let's install Node.js:

      • apt install nodejs

      This installs Node.js in the container from the official Ubuntu repository. When the installation finishes, verify that Node.js is installed:

      • nodejs -v

      You'll see the version number displayed in your terminal:

      Output

      v8.10.0

      Any changes you make inside the container only apply to that container.

      To exit the container, type exit at the prompt.

      Let's look at managing the containers on our system next.

      Step 6 — Managing Docker Containers

      After using Docker for a while, you'll have many active (running) and inactive containers on your computer. To view the active ones, use:

      • docker ps

      You will see output similar to the following:

      Output

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

      In this tutorial, you started two containers; one from the hello-world image and another from the ubuntu image. Both containers are no longer running, but they still exist on your system.

      To view all containers — active and inactive, run docker ps with the -a switch:

      • docker ps -a

      You'll see output similar to this:

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                         PORTS               NAMES
d9b100f2f636        ubuntu              "/bin/bash"         About an hour ago   Exited (0) 8 minutes ago                           sharp_volhard
01c950718166        hello-world         "/hello"            About an hour ago   Exited (0) About an hour ago                       festive_williams
      
      

To view the latest container you created, pass docker ps the -l switch:

      • docker ps -l

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
d9b100f2f636        ubuntu              "/bin/bash"         About an hour ago   Exited (0) 10 minutes ago                       sharp_volhard

      To start a stopped container, use docker start, followed by the container ID or the container's name. Let's start the Ubuntu-based container with the ID of d9b100f2f636:

      • docker start d9b100f2f636

      The container will start, and you can use docker ps to see its status:

      CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
      d9b100f2f636        ubuntu              "/bin/bash"         About an hour ago   Up 8 seconds                            sharp_volhard
      
      

      To stop a running container, use docker stop, followed by the container ID or name. This time, we'll use the name that Docker assigned the container, which is sharp_volhard:

      • docker stop sharp_volhard

Once you've decided you no longer need a container, remove it with the docker rm command, again using either the container ID or the name. Use the docker ps -a command to find the container ID or name of the container associated with the hello-world image and remove it.

      • docker rm festive_williams

      You can start a new container and give it a name using the --name switch. You can also use the --rm switch to create a container that removes itself when it's stopped. See the docker run help command for more information on these options and others.
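As a quick illustration (the container name quick-test is only an example, and this sketch assumes Docker is installed and the daemon is running), the two switches can be combined like this:

```shell
# --name gives the container a predictable name instead of a random one;
# --rm removes the container automatically when it exits.
docker run --rm --name quick-test ubuntu echo "disposable container"

# Because of --rm, the stopped container is already gone, so this prints nothing:
docker ps -a --filter name=quick-test --format '{{.Names}}'
```

Without --rm, the stopped container would remain visible in docker ps -a until you removed it with docker rm.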

      Containers can be turned into images which you can use to build new containers. Let's look at how that works.

      Step 7 — Committing Changes in a Container to a Docker Image

When you start a container from a Docker image, you can create, modify, and delete files inside it just like you can with a virtual machine. The changes you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm command, the changes will be lost for good.

      This section shows you how to save the state of a container as a new Docker image.

After installing Node.js inside the Ubuntu container, you now have a container running off an image, but the container is different from the image you used to create it. You might want to reuse this Node.js container as the basis for new images later.

To do so, commit the changes to a new Docker image using the following command:

      • docker commit -m "What you did to the image" -a "Author Name" container_id repository/new_image_name

      The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container_id is the one you noted earlier in the tutorial when you started the interactive Docker session. Unless you created additional repositories on Docker Hub, the repository is usually your Docker Hub username.

      For example, for the user sammy, with the container ID of d9b100f2f636, the command would be:

      • docker commit -m "added Node.js" -a "sammy" d9b100f2f636 sammy/ubuntu-nodejs

      When you commit an image, the new image is saved locally on your computer. Later in this tutorial, you'll learn how to push an image to a Docker registry like Docker Hub so others can access it.

Listing the Docker images again will show the new image, as well as the old one that it was derived from:

• docker images

      You'll see output like this:

      Output

REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
sammy/ubuntu-nodejs   latest              7c1f35226ca6        7 seconds ago       179MB
ubuntu                latest              113a43faa138        4 weeks ago         81.2MB
hello-world           latest              e38bc07ac18e        2 months ago        1.85kB

In this example, ubuntu-nodejs is the new image, which was derived from the existing ubuntu image from Docker Hub. The size difference reflects the changes that were made; in this example, the change was that Node.js was installed. So next time you need to run a container using Ubuntu with Node.js pre-installed, you can just use the new image.

You can also build images from a Dockerfile, which lets you automate the installation of software in a new image. However, that's outside the scope of this tutorial.
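Although a full treatment is out of scope, a minimal sketch may help: the Dockerfile below (the base image and package come from this tutorial; the file itself is only an illustration) automates the same steps performed manually above.

```dockerfile
# Start from the same base image used in this tutorial
FROM ubuntu:latest

# Update the package database and install Node.js,
# mirroring the manual steps from the interactive session
RUN apt update && apt install -y nodejs
```

Running docker build -t sammy/ubuntu-nodejs . in the directory containing this file would produce an image comparable to the one created with docker commit, but in a repeatable way.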

      Now let's share the new image with others so they can create containers from it.

      Step 8 — Pushing Docker Images to a Docker Repository

The next logical step after creating a new image from an existing image is to share it with a select few of your friends, the whole world on Docker Hub, or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.

      This section shows you how to push a Docker image to Docker Hub. To learn how to create your own private Docker registry, check out How To Set Up a Private Docker Registry on Ubuntu 14.04.

      To push your image, first log into Docker Hub.

      • docker login -u docker-registry-username

You'll be prompted to authenticate using your Docker Hub password. If you specify the correct password, authentication will succeed. Then you can push your own image using:

      • docker push docker-registry-username/docker-image-name

      To push the ubuntu-nodejs image to the sammy repository, the command would be:

      • docker push sammy/ubuntu-nodejs

      The process may take some time to complete as it uploads the images, but when completed, the output will look like this:

      Output

The push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Pushed
5f70bf18a086: Pushed
a3b5c80a4eba: Pushed
7f18b442972b: Pushed
3ce512daaf78: Pushed
7aae4540b42d: Pushed
...

After pushing an image to a registry, it should be listed on your account's dashboard, like the one shown in the image below.

      New Docker image listing on Docker Hub

      If a push attempt results in an error of this sort, then you likely did not log in:

      Output

The push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Preparing
5f70bf18a086: Preparing
a3b5c80a4eba: Preparing
7f18b442972b: Preparing
3ce512daaf78: Preparing
7aae4540b42d: Waiting
unauthorized: authentication required

      Log in with docker login and repeat the push attempt. Then verify that it exists on your Docker Hub repository page.

You can now use docker pull sammy/ubuntu-nodejs to pull the image to a new machine and use it to run a new container.

      Conclusion

      In this tutorial you installed Docker, worked with images and containers, and pushed a modified image to Docker Hub. Now that you know the basics, explore the other Docker tutorials in the DigitalOcean Community.




      How To Install Docker Compose on Debian 9


      Introduction

      Docker is a great tool for automating the deployment of Linux applications inside software containers, but to take full advantage of its potential each component of an application should run in its own individual container. For complex applications with a lot of components, orchestrating all the containers to start up, communicate, and shut down together can quickly become unwieldy.

      The Docker community came up with a popular solution called Fig, which allowed you to use a single YAML file to orchestrate all of your Docker containers and configurations. This became so popular that the Docker team decided to make Docker Compose based on the Fig source, which is now deprecated. Docker Compose makes it easier for users to orchestrate the processes of Docker containers, including starting up, shutting down, and setting up intra-container linking and volumes.
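As a sketch of what that looks like in practice (the service names, images, ports, and paths below are purely illustrative, written in the same version-less YAML style as the example later in this tutorial), a single docker-compose.yml can describe a web server and its database together:

```yaml
# Two containers defined in one file; `docker-compose up` starts both,
# sets up the link between them, and mounts the declared volume.
web:
  image: nginx
  ports:
    - "8080:80"
  links:
    - db
db:
  image: postgres
  volumes:
    - ./pgdata:/var/lib/postgresql/data
```

Bringing both containers up then becomes a single docker-compose up, instead of two docker run commands with manual linking.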

      In this tutorial, we’ll show you how to install the latest version of Docker Compose to help you manage multi-container applications on a Debian 9 server.

      Prerequisites

To follow this article, you will need a Debian 9 server with Docker installed, as covered in the installation tutorial referenced below.

Note: Even though the prerequisites give instructions for installing Docker on Debian 9, the docker commands in this article should work on other operating systems as long as Docker is installed.

      Step 1 — Installing Docker Compose

Although we can install Docker Compose from the official Debian repositories, that version is several minor versions behind the latest release, so we'll install it from Docker's GitHub repository. The command below is slightly different from the one you'll find on the Releases page. By using the -o flag to specify the output file first rather than redirecting the output, this syntax avoids a permission denied error that would occur when using sudo with redirection.

      We’ll check the current release and, if necessary, update it in the command below:

      • sudo curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

      Next we’ll set the permissions:

      • sudo chmod +x /usr/local/bin/docker-compose

      Then we’ll verify that the installation was successful by checking the version:

      This will print out the version we installed:

      Output

      docker-compose version 1.22.0, build f46880fe

      Now that we have Docker Compose installed, we're ready to run a "Hello World" example.

      Step 2 — Running a Container with Docker Compose

      The public Docker registry, Docker Hub, includes a Hello World image for demonstration and testing. It illustrates the minimal configuration required to run a container using Docker Compose: a YAML file that calls a single image. We'll create this minimal configuration to run our hello-world container.

      First, we'll create a directory for the YAML file and move into it:

      • mkdir hello-world
      • cd hello-world

Then, we'll create the YAML file:

• nano docker-compose.yml

      Put the following contents into the file, save the file, and exit the text editor:

      docker-compose.yml

      my-test:
       image: hello-world
      

The first line in the YAML file is used as part of the container name. The second line specifies which image to use to create the container. When we run the docker-compose up command, it will look for a local image by the name we specified, hello-world.

We can look manually at images on our system with the docker images command:

• docker images

      When there are no local images at all, only the column headings display:

      Output

      REPOSITORY TAG IMAGE ID CREATED SIZE

Now, while still in the ~/hello-world directory, we'll execute the following command:

• docker-compose up

      The first time we run the command, if there's no local image named hello-world, Docker Compose will pull it from the Docker Hub public repository:

      Output

Pulling my-test (hello-world:)...
latest: Pulling from library/hello-world
9db2ca6ccae0: Pull complete
Digest: sha256:4b8ff392a12ed9ea17784bd3c9a8b1fa3299cac44aca35a85c90c5e3c7afacdc
Status: Downloaded newer image for hello-world:latest
. . .

      After pulling the image, docker-compose creates a container, attaches, and runs the hello program, which in turn confirms that the installation appears to be working:

      Output

. . .
Creating helloworld_my-test_1...
Attaching to helloworld_my-test_1
my-test_1  |
my-test_1  | Hello from Docker.
my-test_1  | This message shows that your installation appears to be working correctly.
my-test_1  |
. . .

      Then it prints an explanation of what it did:

      Output

To generate this message, Docker took the following steps:
my-test_1  |  1. The Docker client contacted the Docker daemon.
my-test_1  |  2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
my-test_1  |     (amd64)
my-test_1  |  3. The Docker daemon created a new container from that image which runs the
my-test_1  |     executable that produces the output you are currently reading.
my-test_1  |  4. The Docker daemon streamed that output to the Docker client, which sent it
my-test_1  |     to your terminal.

Docker containers only run as long as the command is active, so once hello finished running, the container stopped. Consequently, when we look at active processes with docker ps, the column headers will appear, but the hello-world container won't be listed because it's not running:

• docker ps

      Output

      CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

We can see the container information, which we'll need in the next step, by using the -a flag. This shows all containers, not just active ones:

• docker ps -a

      Output

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                     PORTS               NAMES
06069fd5ca23        hello-world         "/hello"            35 minutes ago      Exited (0) 35 minutes ago                      hello-world_my-test_1

      This displays the information we'll need to remove the container when we're done with it.

      Step 3 — Removing the Image (Optional)

To avoid using unnecessary disk space, we'll remove the local image. To do so, we'll need to delete all the containers that reference the image using the docker rm command, followed by either the CONTAINER ID or the NAME. Below, we're using the CONTAINER ID from the docker ps -a command we just ran. Be sure to substitute the ID of your container:

• docker rm 06069fd5ca23

Once all containers that reference the image have been removed, we can remove the image:

• docker image rm hello-world

      Conclusion

      We've now installed Docker Compose, tested our installation by running a Hello World example, and removed the test image and container.

      While the Hello World example confirmed our installation, the simple configuration does not show one of the main benefits of Docker Compose — being able to bring a group of Docker containers up and down all at the same time. To see the power of Docker Compose in action, you might like to check out this practical example, How To Configure a Continuous Integration Testing Environment with Docker and Docker Compose on Ubuntu 16.04.


