

      How To Test Your Ansible Deployment with InSpec and Kitchen


      The author selected the Diversity in Tech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      InSpec is an open-source auditing and automated testing framework used to describe and test for regulatory concerns, recommendations, or requirements. It is designed to be human-readable and platform-agnostic. Developers can work with InSpec locally or using SSH, WinRM, or Docker to run testing, so it’s unnecessary to install any packages on the infrastructure that is being tested.

      Although with InSpec you can run tests directly on your servers, there is a potential for human error that could cause issues in your infrastructure. To avoid this scenario, developers can use Kitchen to create a virtual machine and install an OS of their choice on the machines where tests are running. Kitchen is a test runner, or test automation tool, that allows you to test infrastructure code on one or more isolated platforms. It also supports many testing frameworks and is flexible with a driver plugin architecture for various platforms such as Vagrant, AWS, DigitalOcean, Docker, LXC containers, etc.

      In this tutorial, you’ll write tests for your Ansible playbooks running on a DigitalOcean Ubuntu 18.04 Droplet. You’ll use Kitchen as the test-runner and InSpec for writing the tests. By the end of this tutorial, you’ll be able to test your Ansible playbook deployment.

      Prerequisites

      Before you begin with this guide, you’ll need a DigitalOcean account in addition to the following:

      Step 1 — Setting Up and Initializing Kitchen

      As part of the prerequisites, you installed ChefDK, which comes packaged with kitchen. In this step, you’ll set up Kitchen to communicate with DigitalOcean.

      Before initializing Kitchen, you’ll create and move into a project directory. In this tutorial, we’ll call it ansible_testing_dir.

      Run the following command to create the directory:

      • mkdir ~/ansible_testing_dir

      And then move into it:

      • cd ~/ansible_testing_dir

      Using gem, install the kitchen-digitalocean package on your local machine. This allows you to tell kitchen to use the DigitalOcean driver when running tests:

      • gem install kitchen-digitalocean

      Within the project directory, you’ll run the kitchen init command specifying ansible_playbook as the provisioner and digitalocean as the driver when initializing Kitchen:

      • kitchen init --provisioner=ansible_playbook --driver=digitalocean

      You’ll see the following output:

      Output

      create kitchen.yml
      create chefignore
      create test/integration/default

      This has created the following within your project directory:

      • test/integration/default is the directory to which you’ll save your test files.

      • chefignore is the file you would use to ensure certain files are not uploaded to the Chef Infra Server, but you won’t be using it in this tutorial.

      • kitchen.yml is the file that describes your testing configuration: what you want to test and the target platforms.

      Now, you need to export your DigitalOcean credentials as environment variables to have access to create Droplets from your CLI. First, start with your DigitalOcean access token by running the following command:

      • export DIGITALOCEAN_ACCESS_TOKEN="YOUR_DIGITALOCEAN_ACCESS_TOKEN"

      You also need to get your SSH Key ID number; note that YOUR_DIGITALOCEAN_SSH_KEY_IDS must be the numeric ID of your SSH key, not the symbolic name. Using the DigitalOcean API, you can get the numeric ID of your keys with the following command:

      • curl -X GET https://api.digitalocean.com/v2/account/keys -H "Authorization: Bearer $DIGITALOCEAN_ACCESS_TOKEN"

      From this command you’ll see a list of your SSH Keys and related metadata. Read through the output to find the correct key and identify the ID number within the output:

      Output

      ... {"id":your-ID-number,"fingerprint":"fingerprint","public_key":"ssh-rsa your-ssh-key","name":"your-ssh-key-name" ...

      Note: If you would like to make your output more readable to obtain your numeric IDs, you can find and download jq based on your OS on the jq download page. Now, you can run the previous command piped into jq as follows:

      • curl -X GET https://api.digitalocean.com/v2/account/keys -H "Authorization: Bearer $DIGITALOCEAN_ACCESS_TOKEN" | jq

      You’ll see your SSH Key information formatted similarly to:

      Output

      { "ssh_keys": [ { "id": YOUR_SSH_KEY_ID, "fingerprint": "2f:d0:16:6b", "public_key": "ssh-rsa AAAAB3NzaC1yc2 [email protected]", "name": "sannikay" } ], }

      Once you’ve identified your SSH numeric IDs, export them with the following command:

      • export DIGITALOCEAN_SSH_KEY_IDS="YOUR_DIGITALOCEAN_SSH_KEY_ID"

      You’ve initialized kitchen and set up the environment variables for your DigitalOcean credentials. Now you’ll move on to create and run tests on your DigitalOcean Droplets directly from the command line.

      Step 2 — Creating the Ansible Playbook

      In this step, you’ll create a playbook and roles that set up Nginx and Node.js on the Droplet created by kitchen in the next step. Your tests will be run against the playbook to ensure the conditions specified in the playbook are met.

      To begin, create a roles directory for both the Nginx and Node.js roles:

      • mkdir -p roles/{nginx,nodejs}/tasks

      This will create a directory structure as follows:

      roles
      ├── nginx
      │   └── tasks
      └── nodejs
          └── tasks
      

      Now, create a main.yml file in the roles/nginx/tasks directory using your preferred editor:

      • nano roles/nginx/tasks/main.yml

      In this file, create a task that sets up and starts Nginx by adding the following content:

      roles/nginx/tasks/main.yml

      ---
      - name: Update cache repositories and install Nginx
        apt:
          name: nginx
          update_cache: yes
      
      - name: Change nginx directory permission
        file:
          path: /etc/nginx/nginx.conf
          mode: 0750
      
      - name: start nginx
        service:
          name: nginx
          state: started
      

      Once you’ve added the content, save and exit the file.

      In roles/nginx/tasks/main.yml, you define a task that will update the cache repository of your Droplet, which is an equivalent of running the apt update command manually on a server. This task also changes the Nginx configuration file permissions and starts the Nginx service.
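
      As a preview of the approach this tutorial builds toward, the outcomes of these tasks are exactly the kind of thing InSpec can assert. The following is only an illustrative sketch (the path and mode come from the task above; it is not part of the test file you create later):

      # Illustrative InSpec checks for the outcomes of the Nginx tasks above.
      describe file('/etc/nginx/nginx.conf') do
        it { should exist }
        its('mode') { should cmp '0750' }  # the mode set by the file task
      end

      describe service('nginx') do
        it { should be_running }           # the service task sets state: started
      end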

      You are also going to create a main.yml file in roles/nodejs/tasks to define a task that sets up Node.js:

      • nano roles/nodejs/tasks/main.yml

      Add the following tasks to this file:

      roles/nodejs/tasks/main.yml

      ---
      - name: Update caches repository
        apt:
          update_cache: yes
      
      - name: Add gpg key for NodeJS LTS
        apt_key:
          url: "https://deb.nodesource.com/gpgkey/nodesource.gpg.key"
          state: present
      
      - name: Add the NodeJS LTS repo
        apt_repository:
          repo: "deb https://deb.nodesource.com/node_{{ NODEJS_VERSION }}.x {{ ansible_distribution_release }} main"
          state: present
          update_cache: yes
      
      - name: Install Node.js
        apt:
          name: nodejs
          state: present
      
      

      Save and exit the file when you’re finished.

      In roles/nodejs/tasks/main.yml, you first define a task that will update the cache repository of your Droplet. Then with the next task you add the GPG key for Node.js that serves as a means of verifying the authenticity of the Node.js apt repository. The final two tasks add the Node.js apt repository and install Node.js.
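
      If you later want to verify these outcomes with InSpec as well, the apt and command resources cover them. The sketch below is illustrative only; the repository URL mirrors the apt_repository task above, and the version check is an assumption based on NODEJS_VERSION being 8:

      # Illustrative only; not part of the test file created in Step 3.
      describe apt('https://deb.nodesource.com/node_8.x') do
        it { should exist }
        it { should be_enabled }
      end

      describe command('node --version') do
        its('exit_status') { should eq 0 }
        its('stdout') { should match(/^v8\./) }  # Node.js 8.x from the LTS repo
      end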

      Now you’ll define your Ansible configurations, such as variables, the order in which you want your roles to run, and super user privilege settings. To do this, you’ll create a file named playbook.yml, which serves as an entry point for Kitchen. When you run your tests, Kitchen starts from your playbook.yml file and looks for the roles to run, which are your roles/nginx/tasks/main.yml and roles/nodejs/tasks/main.yml files.

      Run the following command to create playbook.yml:

      • nano playbook.yml

      Add the following content to the file:

      ansible_testing_dir/playbook.yml

      ---
       - hosts: all
         become: true
         remote_user: ubuntu
         vars:
          NODEJS_VERSION: 8
      

      Save and exit the file.

      You’ve created the Ansible playbook roles that you’ll be running your tests against to ensure conditions specified in the playbook are met.

      Step 3 — Writing Your InSpec Tests

      In this step, you’ll write tests to check if Node.js is installed on your Droplet. Before writing your test, let’s look at the format of an example InSpec test. As with many test frameworks, InSpec code resembles a natural language. InSpec has two main components, the subject to examine and the subject’s expected state:

      block A

      describe '<entity>' do
        it { <expectation> }
      end
      

      In block A, the keywords do and end define a block. The describe keyword introduces a test suite, which contains test cases. The it keyword is used for defining the test cases.

      <entity> is the subject you want to examine, for example, a package name, service, file, or network port. The <expectation> specifies the desired result or expected state, for example, Nginx should be installed or should have a specific version. You can check the InSpec DSL documentation to learn more about the InSpec language.
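
      As a concrete instance of block A, here is what it looks like with a real resource filled in (this example is purely illustrative and uses the service resource):

      describe service('ssh') do           # <entity>: the ssh service
        it { should be_running }           # <expectation>: it is running
      end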

      Another example InSpec test block:

      block B

      control 'Can be anything unique' do  
        impact 0.7                         
        title 'A human-readable title'     
        desc  'An optional description'
        describe '<entity>' do             
          it { <expectation> }
        end
      end
      

      The difference between block A and block B is the control block. The control block is used as a means of expressing a regulatory control, recommendation, or requirement. A control block has a name (usually a unique ID) and metadata such as desc, title, and impact, and it groups together related describe blocks that implement the checks.

      desc, title, and impact define metadata that describe the importance and purpose of the control with a succinct and complete description. impact takes a numeric value from 0.0 to 1.0: 0.0 to <0.01 is classified as no impact, 0.01 to <0.4 as low impact, 0.4 to <0.7 as medium impact, 0.7 to <0.9 as high impact, and 0.9 to 1.0 as critical.
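
      Putting block B together with concrete values might look like the following sketch; the control name, metadata, and port check are invented here purely for illustration:

      control 'ssh-1' do
        impact 0.9                                   # high impact, per the ranges above
        title 'SSH daemon should listen on port 22'
        desc  'The OpenSSH daemon should be listening on its default port.'
        describe port(22) do
          it { should be_listening }
        end
      end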

      Now you’ll implement a test. Using the syntax of block A, you’ll use InSpec’s package resource to test whether Node.js is installed on the system. You’ll create a file named sample.rb in your test/integration/default directory for your tests.

      Create sample.rb:

      • nano test/integration/default/sample.rb

      Add the following to your file:

      test/integration/default/sample.rb

      describe package('nodejs') do
        it { should be_installed }
      end
      

      Here your test uses the package resource to check that Node.js is installed.

      Save and exit the file when you’re finished.
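
      As an optional aside (not used in this tutorial’s test file), the package resource also exposes a version property, so you could additionally assert a minimum Node.js version; the constraint below is an assumption:

      describe package('nodejs') do
        it { should be_installed }
        its('version') { should cmp >= '8' }  # hypothetical minimum-version check
      end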

      To run this test, you need to edit kitchen.yml to specify the playbook you created earlier and to add to your configurations.

      Open your kitchen.yml file:

      • nano ansible_testing_dir/kitchen.yml

      Replace the content of kitchen.yml with the following:

      ansible_testing_dir/kitchen.yml

      ---
      driver:
        name: digitalocean
      
      provisioner:
        name: ansible_playbook
        hosts: test-kitchen
        playbook: ./playbook.yml
      
      verifier:
        name: inspec
      
      platforms:
        - name: ubuntu-18
          driver_config:
            ssh_key: PATH_TO_YOUR_PRIVATE_SSH_KEY
            tags:
              - inspec-testing
            region: fra1
            size: 1gb
            private_networking: false
          verifier:
            inspec_tests:
              - test/integration/default
      suites:
        - name: default
      
      

      The platform options include the following:

      • name: The image you’re using.
      • driver_config: Your DigitalOcean Droplet configuration. You’re specifying the following options for the driver_config:

        • ssh_key: The path to your private SSH key, which replaces the PATH_TO_YOUR_PRIVATE_SSH_KEY placeholder. Your private SSH key is located in the directory you specified when creating your SSH key.
        • tags: The tags associated with your Droplet.
        • region: The region where you want your Droplet to be hosted.
        • size: The memory you want your Droplet to have.
      • verifier: This defines that the project contains InSpec tests.

        • The inspec_tests part specifies that the tests exist under the project test/integration/default directory.

      Note that the name and region use abbreviations. You can check the test-kitchen documentation for the abbreviations you can use.

      Once you’ve added your configuration, save and exit the file.

      Run the kitchen test command to run the test. This will check to see if Node.js is installed, and it will purposefully fail because you don’t currently have the Node.js role in your playbook.yml file:

      • kitchen test

      You’ll see output similar to the following:

      Output: failing test results

      -----> Starting Kitchen (v1.24.0)
      -----> Cleaning up any prior instances of <default-ubuntu-18>
      -----> Destroying <default-ubuntu-18>...
             DigitalOcean instance <145268853> destroyed.
             Finished destroying <default-ubuntu-18> (0m2.63s).
      -----> Testing <default-ubuntu-18>
      -----> Creating <default-ubuntu-18>...
             DigitalOcean instance <145273424> created.
             Waiting for SSH service on 138.68.97.146:22, retrying in 3 seconds
             [SSH] Established
             (ssh ready)
             Finished creating <default-ubuntu-18> (0m51.74s).
      -----> Converging <default-ubuntu-18>...
      $$$$$$ Running legacy converge for 'Digitalocean' Driver
      -----> Installing Chef Omnibus to install busser to run tests

      PLAY [all] *********************************************************************

      TASK [Gathering Facts] *********************************************************
      ok: [localhost]

      PLAY RECAP *********************************************************************
      localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

             Downloading files from <default-ubuntu-18>
             Finished converging <default-ubuntu-18> (0m55.05s).
      -----> Setting up <default-ubuntu-18>...
      $$$$$$ Running legacy setup for 'Digitalocean' Driver
             Finished setting up <default-ubuntu-18> (0m0.00s).
      -----> Verifying <default-ubuntu-18>...
             Loaded tests from {:path=>". ansible_testing_dir.test.integration.default"}

      Profile: tests from {:path=>"ansible_testing_dir/test/integration/default"} (tests from {:path=>"ansible_testing_dir.test.integration.default"})
      Version: (not specified)
      Target: ssh://[email protected]:22

        System Package nodejs
           ×  should be installed
              expected that System Package nodejs is installed

      Test Summary: 0 successful, 1 failure, 0 skipped
      >>>>>> ------Exception-------
      >>>>>> Class: Kitchen::ActionFailed
      >>>>>> Message: 1 actions failed.
      >>>>>>     Verify failed on instance <default-ubuntu-18>. Please see .kitchen/logs/default-ubuntu-18.log for more details
      >>>>>> ----------------------
      >>>>>> Please see .kitchen/logs/kitchen.log for more details
      >>>>>> Also try running `kitchen diagnose --all` for configuration

      4.54s user 1.77s system 5% cpu 2:02.33 total

      The output notes that your test is failing because you don’t have Node.js installed on the Droplet you provisioned with kitchen. You’ll fix your test by adding the nodejs role to your playbook.yml file and run the test again.

      Edit the playbook.yml file to include the nodejs role:

      • nano playbook.yml

      Add the following highlighted lines to your file:

      ansible_testing_dir/playbook.yml

      ---
       - hosts: all
         become: true
         remote_user: ubuntu
         vars:
          NODEJS_VERSION: 8
      
         roles:
          - nodejs
      

      Save and close the file.

      Now, you’ll rerun the test using the kitchen test command:

      • kitchen test

      You’ll see the following output:

      Output

      ......
      Target: ssh://[email protected]:22

        System Package nodejs
           ✔  should be installed

      Test Summary: 1 successful, 0 failures, 0 skipped
             Finished verifying <default-ubuntu-18> (0m4.89s).
      -----> Destroying <default-ubuntu-18>...
             DigitalOcean instance <145512952> destroyed.
             Finished destroying <default-ubuntu-18> (0m2.23s).
             Finished testing <default-ubuntu-18> (2m49.78s).
      -----> Kitchen is finished. (2m55.14s)
      4.86s user 1.77s system 3% cpu 2:56.58 total

      Your test now passes because you have Node.js installed using the nodejs role.

      Here is a summary of what Kitchen is doing in the Test Action:

      • Destroys the Droplet if it exists
      • Creates the Droplet
      • Converges the Droplet
      • Verifies the Droplet with InSpec
      • Destroys the Droplet

      Kitchen will abort the run on your Droplet if it encounters any issues. This means if your Ansible playbook fails, InSpec won’t run and your Droplet won’t be destroyed. This gives you a chance to inspect the state of the instance and fix any issues. The behavior of the final destroy action can be overridden if desired. Check out the CLI help for the --destroy flag by running the kitchen help test command.

      You’ve written your first tests and run them against your playbook with one instance failing before fixing the issue. Next you’ll extend your test file.

      Step 4 — Adding Test Cases

      In this step, you’ll add more test cases to your test file to check that the required Nginx modules are installed on your Droplet and that the configuration file has the right permissions.

      Edit your sample.rb file to add more test cases:

      • nano test/integration/default/sample.rb

      Add the following test cases to the end of the file:

      test/integration/default/sample.rb

      . . .
      control 'nginx-modules' do
        impact 1.0
        title 'NGINX modules'
        desc 'The required NGINX modules should be installed.'
        describe nginx do
          its('modules') { should include 'http_ssl' }
          its('modules') { should include 'stream_ssl' }
          its('modules') { should include 'mail_ssl' }
        end
      end
      
      control 'nginx-conf' do
        impact 1.0
        title 'NGINX configuration'
        desc 'The NGINX config file should be owned by root, be writable only by the owner, and not be readable, writable, or executable by others.'
        describe file('/etc/nginx/nginx.conf') do
          it { should be_owned_by 'root' }
          it { should be_grouped_into 'root' }
          it { should_not be_readable.by('others') }
          it { should_not be_writable.by('others') }
          it { should_not be_executable.by('others') }
        end
      end
      

      These test cases check that the Nginx modules on your Droplet include http_ssl, stream_ssl, and mail_ssl. You are also checking the permissions of the /etc/nginx/nginx.conf file.

      You are using both the it and its keywords to define your test. The keyword its is only used to access properties of the resources. For example, modules is a property of nginx.

      Save and exit the file once you’ve added the test cases.
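
      As another aside (not part of sample.rb), its works the same way with other properties of the nginx resource, for example version; the minimum version below is only an assumption:

      describe nginx do
        its('version') { should cmp >= '1.14.0' }  # illustrative version check
      end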

      Now run the kitchen test command to test again:

      • kitchen test

      You’ll see the following output:

      Output

      ...
      Target: ssh://[email protected]:22

        ↺  nginx-modules: NGINX modules
           ↺  The `nginx` binary not found in the path provided.
        ×  nginx-conf: NGINX configuration (2 failed)
           ×  File /etc/nginx/nginx.conf should be owned by "root"
              expected `File /etc/nginx/nginx.conf.owned_by?("root")` to return true, got false
           ×  File /etc/nginx/nginx.conf should be grouped into "root"
              expected `File /etc/nginx/nginx.conf.grouped_into?("root")` to return true, got false
           ✔  File /etc/nginx/nginx.conf should not be readable by others
           ✔  File /etc/nginx/nginx.conf should not be writable by others
           ✔  File /etc/nginx/nginx.conf should not be executable by others

        System Package nodejs
           ✔  should be installed

      Profile Summary: 0 successful controls, 1 control failure, 1 control skipped
      Test Summary: 4 successful, 2 failures, 1 skipped

      You’ll see that some of the tests are failing. You’re going to fix those by adding the nginx role to your playbook file and rerunning the test. In the failing test, you’re checking for nginx modules and file permissions that are currently not present on your server.

      Open your playbook.yml file:

      • nano ansible_testing_dir/playbook.yml

      Add the following highlighted line to your roles:

      ansible_testing_dir/playbook.yml

      ---
      - hosts: all
        become: true
        remote_user: ubuntu
        vars:
          NODEJS_VERSION: 8
      
        roles:
        - nodejs
        - nginx
      

      Save and close the file when you’re finished.

      Then run your tests again:

      • kitchen test

      You’ll see the following output:

      Output

      ...
      Target: ssh://[email protected]:22

        ✔  nginx-modules: NGINX version
           ✔  Nginx Environment modules should include "http_ssl"
           ✔  Nginx Environment modules should include "stream_ssl"
           ✔  Nginx Environment modules should include "mail_ssl"
        ✔  nginx-conf: NGINX configuration
           ✔  File /etc/nginx/nginx.conf should be owned by "root"
           ✔  File /etc/nginx/nginx.conf should be grouped into "root"
           ✔  File /etc/nginx/nginx.conf should not be readable by others
           ✔  File /etc/nginx/nginx.conf should not be writable by others
           ✔  File /etc/nginx/nginx.conf should not be executable by others

        System Package nodejs
           ✔  should be installed

      Profile Summary: 2 successful controls, 0 control failures, 0 controls skipped
      Test Summary: 9 successful, 0 failures, 0 skipped

      After adding the nginx role to the playbook all your tests now pass. The output shows that the http_ssl, stream_ssl, and mail_ssl modules are installed on your Droplet and the right permissions are set for the configuration file.

      Once you’re finished, or you no longer need your Droplet, you can delete it by running the kitchen destroy command:

      • kitchen destroy

      Following this command you’ll see output similar to:

      Output

      -----> Starting Kitchen (v1.24.0)
      -----> Destroying <default-ubuntu-18>...
             Finished destroying <default-ubuntu-18> (0m0.00s).
      -----> Kitchen is finished. (0m5.07s)
      3.79s user 1.50s system 82% cpu 6.432 total

      You’ve written tests for your playbook, run the tests, and fixed the failing tests to ensure all the tests are passing. You’re now set up to create a virtual environment, write tests for your Ansible Playbook, and run your test on the virtual environment using Kitchen.

      Conclusion

      You now have a flexible foundation for testing your Ansible deployment, which allows you to test your playbooks before running on a live server. You can also package your test into a profile. You can use profiles to share your test through Github or the Chef Supermarket and easily run it on a live server.

      For more comprehensive details on InSpec and Kitchen, refer to the official InSpec documentation and the official Kitchen documentation.




      How To Audit a PostgreSQL Database with InSpec on Ubuntu 18.04


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      InSpec is an open-source, automated testing framework for testing and auditing your system to ensure the compliance of integration, security, and other policy requirements. Developers can test the actual state of their infrastructure and applications against a target state using InSpec code.

      To specify the policy requirements you're testing for, InSpec includes audit controls. Traditionally, developers enforce policy requirements manually, often right before deploying changes to production. With InSpec, however, developers can continuously evaluate compliance at every stage of product development, which helps solve issues earlier in the development process. The InSpec DSL (Domain Specific Language), built on RSpec, a testing DSL written in Ruby, specifies the syntax used to write the audit controls.

      InSpec also includes a collection of resources to assist in configuring specific parts of your system and to simplify making audit controls. There is a feature to write your own custom resources when you need to define a specific solution that isn’t available. Universal matchers allow you to compare resource values to expectations across all InSpec tests.

      In this tutorial, you’ll install InSpec on a server running Ubuntu 18.04. You will start by writing a test that verifies the operating system family of the server, then you’ll create a PostgreSQL audit profile from the ground up. This audit profile starts by checking that you have PostgreSQL installed on the server and that its services are running. Then you’ll add tests to check that the PostgreSQL service is running with the correct port, address, protocol, and user. Next you’ll test specific PostgreSQL configuration parameters, and finally, you’ll audit client authentication configuration.
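
      To give a flavor of what such checks look like before you begin, here is an illustrative snippet (not part of the profile you will build) that asserts PostgreSQL is listening on its default port:

      # Illustrative preview; the tutorial builds its real checks step by step below.
      describe port(5432) do
        it { should be_listening }
        its('processes') { should include 'postgres' }
      end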

      Prerequisites

      Before following this tutorial, you will need the following:

      Step 1 — Preparing the Environment

      In this step, you’ll download and unpack the latest stable version of InSpec into your home directory. InSpec provides installable binaries on their downloads page.

      Navigate to your home directory:

      • cd ~

      Now download the binary with curl:

      • curl -LO https://packages.chef.io/files/stable/inspec/3.7.11/ubuntu/18.04/inspec_3.7.11-1_amd64.deb

      Next, use the sha256sum command to generate a checksum of the downloaded file. This is to verify the integrity and authenticity of the downloaded file.

      • sha256sum inspec_3.7.11-1_amd64.deb

      Checksums for each binary are listed on the InSpec downloads page, so visit the downloads page to compare with your output from this command.

      Output

      e665948f9c0441e8648b08f8d3c8d34a86f9e994609877a7e4853c012dbc7523 inspec_3.7.11-1_amd64.deb

      If the checksums are different, delete the downloaded file and repeat the download process.

      Next, you'll install the downloaded binary. For this, you'll use dpkg, a package management tool that comes with all Debian-based systems, such as Ubuntu, by default. The -i flag prompts the dpkg command to install the package files.

      • sudo dpkg -i inspec_3.7.11-1_amd64.deb

      If there are no errors, it means that you've installed InSpec successfully. To verify the installation, enter the following command:

      • inspec version

      You'll receive output showing the version of InSpec you just installed:

      Output

      3.7.11

      If you don't see a version number displayed, repeat Step 1.

      After this, you can delete inspec_3.7.11-1_amd64.deb, since you no longer need it now that you've installed the package:

      • rm inspec_3.7.11-1_amd64.deb

      You've successfully installed InSpec on your server. In the next step, you will write a test to verify the operating system family of your server.

      Step 2 — Completing Your First InSpec Test

      In this step, you'll complete your first InSpec test, which will be testing that your operating system family is debian.

      You will use the os resource, which is a built-in InSpec audit resource to test the platform on which the system is running. You'll also use the eq matcher. The eq matcher is a universal matcher that tests for the exact equality of two values.

      An InSpec test consists of a describe block, which contains one or more it and its statements, each of which validates one of the resource's features. Each statement describes an expectation of a specific condition of the system as an assertion. Two keywords that you can include to make an assertion are should and should_not, which assert that the condition should be true or false, respectively.
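
      For example, a describe block using should_not might assert that the platform is not the Windows family (purely illustrative):

      describe os.family do
        it { should_not eq 'windows' }  # asserts the condition is false
      end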

      Create a file called os_family.rb to hold your test and open it with your text editor:

      • nano os_family.rb

      Add the following to your file:

      os_family.rb

      describe os.family do
        it {should eq 'debian'}
      end
      

      This test ensures that the operating system family of the target system is debian. Other possible values are windows, unix, bsd, and so on. You can find a complete list in the os resource documentation. Save and exit the file.

      Next, run your test with the following command:

      • inspec exec os_family.rb

      The test will pass, and you'll receive output resembling the following:

      Output

      Profile: tests from os_family.rb (tests from os_family.rb)
      Version: (not specified)
      Target: local://

        debian
           ✔  should eq "debian"

      Test Summary: 1 successful, 0 failures, 0 skipped

      In your output, the Profile contains the name of the profile that just executed. Since this test is not included in a profile, InSpec generates a default profile name from the test's file name: tests from os_family.rb. (You'll work with InSpec profiles in the next section, where you will start building your PostgreSQL InSpec profile.) Here InSpec presents the Version as not specified, because you can only specify versions in profiles.

      The Target field specifies the target system that the test is executed on, which can be local or a remote system via ssh. In this case, you've executed your test on the local system so the target shows local://.

      Usefully, the output also displays the executed test with a checkmark symbol (✔) to the left indicating a successful test. The output will show a cross symbol (✘) if the test fails.

      Finally, the test summary gives overall details about how many tests were successful, failed, and skipped. In this instance, you had a single successful test.

      Now you'll see what the output looks like for a failed test. Open os_family.rb:

      • nano os_family.rb

      In the test you created earlier in this step, you'll now change the expected value of the operating system family from debian to windows. Your file contents after this will be the following:

      os_family.rb

      describe os.family do
        it {should eq 'windows'}
      end
      

      Save and exit the file.

      Next, run the updated test with the following command:

      • inspec exec os_family.rb

      You will get output similar to the following:

      Output

      Profile: tests from os_family.fail.rb (tests from os_family.fail.rb)
      Version: (not specified)
      Target: local://

        debian
           (✘)  should eq "windows"
           expected: "windows"
                got: "debian"
           (compared using ==)

      Test Summary: 0 successful, 1 failure, 0 skipped

      As expected, the test failed. The output indicates that your expected (windows) and actual (debian) values do not match for the os.family property. The (compared using ==) output indicates that the eq matcher performed a string comparison between the two values to come up with this result.

      In this step, you've written a successful test that verifies the operating system family of the server. You've also created a failed test in order to see what the InSpec output for a failed test looks like. In the next step, you will start building the audit profile to test your PostgreSQL installation.

      Step 3 — Auditing Your PostgreSQL Installation

      Now, you will audit your PostgreSQL installation. You'll start by checking that you have PostgreSQL installed and its service is running correctly. Finally, you'll audit the PostgreSQL system port and process. For your PostgreSQL audit, you will create various InSpec controls, all within an InSpec profile named PostgreSQL.

      An InSpec control is a high-level grouping of related tests. Within a control, you can have multiple describe blocks, as well as metadata to describe your tests such as impact level, title, description, and tags. InSpec profiles organize controls to support dependency management and code reuse, which both help manage test complexity. They are also useful for packaging and sharing tests with the public via the Chef Supermarket. You can use profiles to define custom resources that you would implement as regular Ruby classes.
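
      A minimal sketch of that structure, with invented names, looks like the following; the PostgreSQL controls you write below follow this same shape:

      control 'example-1' do
        impact 0.5
        title 'A short, human-readable title'
        desc  'Metadata that explains why the grouped checks matter.'

        describe file('/etc/passwd') do
          it { should exist }
        end
      end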

      To create an InSpec profile, you will use the init command. Enter this command to create the PostgreSQL profile:

      • inspec init profile PostgreSQL

      This creates the profile in a new directory with the same name as your profile, in this case PostgreSQL. Now, move into the new directory:

      • cd PostgreSQL

      The directory structure will look like this:

      PostgreSQL/
      ├── controls
      │   └── example.rb
      ├── inspec.yml
      ├── libraries
      └── README.md
      

      The controls/example.rb file contains a sample control that tests to see if the /tmp folder exists on the target system. This is present only as a sample and you will replace it with your own test.
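      If you're curious, you can open controls/example.rb before replacing it. Its exact contents vary by InSpec version, but the generated sample looks roughly like the following, with a single control that checks /tmp:

      # sample generated by inspec init profile (contents vary by InSpec version)
      control 'tmp-1.0' do                      # a unique ID for this control
        impact 0.7                              # the criticality if this control fails
        title 'Create /tmp directory'
        desc 'An optional description...'

        describe file('/tmp') do                # the actual test
          it { should be_directory }
        end
      end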

      Your first test will be to ensure that you have the package postgresql-10 installed on your system and that you have the postgresql service installed, enabled, and running.

      Rename the controls/example.rb file to controls/postgresql.rb:

      • mv controls/example.rb controls/postgresql.rb

      Next, open the file with your text editor:

      • nano controls/postgresql.rb

      Replace the content of the file with the following:

      controls/postgresql.rb

      control '1-audit_installation' do
        impact 1.0
        title 'Audit PostgreSQL Installation'
        desc 'Postgres should be installed and running'
      
        describe package('postgresql-10') do
          it {should be_installed}
          its('version') {should cmp >= '10'}
        end
      
        describe service('postgresql@10-main') do
          it {should be_enabled}
          it {should be_installed}
          it {should be_running}
        end
      end
      

      In the preceding code block, you begin by defining the control with its name and metadata.

      In the first describe block, you use the package resource and pass in the PostgreSQL package name postgresql-10 as a resource argument. The package resource provides the matcher be_installed to test that the named package is installed on the system. It returns true if the package is installed, and false otherwise. Next, you use the its statement to validate that the version of the installed PostgreSQL package is at least 10. You use cmp instead of eq because package version strings usually contain other attributes apart from the numerical version; eq returns true only if there is an exact match, while cmp is less restrictive.
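      For example, on Ubuntu the installed version string includes packaging metadata, so a strict eq comparison against '10' would fail while cmp still passes. The version string in the comment below is hypothetical; your system will report its own:

      describe package('postgresql-10') do
        # hypothetical installed version: "10.10-0ubuntu0.18.04.1"
        # its('version') { should eq '10' }    # would fail: not an exact string match
        its('version') { should cmp >= '10' }  # passes: cmp performs a loose, version-aware comparison
      end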

      In the second describe block, you use the service resource and pass in the PostgreSQL 10 service name postgresql@10-main as a resource argument. The service resource provides the matchers be_enabled, be_installed, and be_running and they return true if you have the named service installed, enabled, and running on the target system respectively.

      Save and exit your file.

      Next, you will run your profile. Make sure you're in the ~/PostgreSQL directory before running the following command:

      • inspec exec .

      Since you completed the PostgreSQL prerequisite tutorial, your test will pass. Your output will look similar to the following:

      Output

      Profile: InSpec Profile (PostgreSQL)
      Version: 0.1.0
      Target:  local://

        ✔  1-audit_installation: Audit PostgreSQL Installation
           ✔  System Package postgresql-10 should be installed
           ✔  System Package postgresql-10 version should cmp >= "10"
           ✔  Service postgresql@10-main should be enabled
           ✔  Service postgresql@10-main should be installed
           ✔  Service postgresql@10-main should be running

      Profile Summary: 1 successful control, 0 control failures, 0 controls skipped
      Test Summary: 5 successful, 0 failures, 0 skipped

      The output indicates that your control was successful. A control is successful if, and only if, all the tests in it are successful. The output also confirms that all your tests were successful.

      Now that you've verified that the correct version of PostgreSQL is installed and the service is fine, you will create a new control that ensures that PostgreSQL is listening on the correct port, address, and protocol.

      For this test, you will also use attributes. An InSpec attribute parameterizes a profile so that it can easily be reused in different environments or against different target systems. You'll define a port attribute in the profile metadata and read it from your control.

      Open the inspec.yml file in your text editor:

      • nano inspec.yml

      Add the port attribute to the end of the file:

      inspec.yml

      ...
      attributes:
        - name: port
          type: string
          default: '5432'
      

      In the preceding code block, you added the port attribute and set it to a default value of 5432 because that is the port PostgreSQL listens on by default.

      Save and exit the file. Then run inspec check to verify that the profile is still valid, since you just edited inspec.yml:

      • inspec check .

      If there are no errors, you can proceed. Otherwise, open the inspec.yml file and ensure that the attribute is present at the end of the file.
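      Because port is now a profile attribute, you can also override its default value at run time instead of editing the profile. For example, if you kept a small YAML file named attributes.yml containing a single line such as port: '5433' (the file name and value here are hypothetical), you could pass it to the run; depending on your InSpec version, the flag is --attrs or its newer equivalent --input-file:

      • inspec exec . --attrs attributes.yml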

      Now you'll create the control that checks that PostgreSQL is listening on the correct port and that its process is running under the correct user. Open controls/postgresql.rb in your text editor:

      • nano controls/postgresql.rb

      Append the following control to the end of your current tests file controls/postgresql.rb:

      controls/postgresql.rb

      ...
      PORT = attribute('port')
      
      control '2-audit_address_port' do
        impact 1.0
        title 'Audit Process and Port'
        desc 'Postgres port should be listening and the process should be running'
      
        describe port(PORT) do
          it {should be_listening}
          its('addresses') {should include '127.0.0.1'}
          its('protocols') {should cmp 'tcp'}
        end
      
        describe processes('postgres') do
          it {should exist}
          its('users') {should include 'postgres'}
        end
      
        describe user('postgres') do
          it {should exist}
        end
      end
      

      Here you begin by declaring a PORT variable to hold the value of the port profile attribute. Then you declare the control and its metadata.

      In the first describe block, you include the port resource to test basic port properties. The port resource provides the be_listening matcher along with the addresses and protocols properties. You use the be_listening matcher to test that the named port is listening on the target system; it returns true if port 5432 is listening and false otherwise. The addresses property tests whether the specified address is associated with the port. In this case, PostgreSQL will be listening on the local address, 127.0.0.1. The protocols property tests the Internet protocol the port is listening for, which can be icmp, tcp/tcp6, or udp/udp6. PostgreSQL will be listening for tcp connections.

      In the second describe block, you include the processes resource. You use the processes resource to test properties for programs that are running on the system. First, you verify that the postgres process exists on the system, then you use the users matcher to test that the postgres user owns the postgres process.

      In the third describe block, you have the user resource. You include the user resource to test user properties for a user such as whether the user exists or not, the group the user belongs to, and so on. Using this resource, you test that the postgres user exists on the system. Save and exit controls/postgresql.rb.
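      If you later want to extend this control, the user resource exposes more than an existence check. A hedged sketch follows; the expected group, home directory, and shell are typical Debian/Ubuntu defaults and are assumptions you should verify against your own system before adding them:

      describe user('postgres') do
        it { should exist }
        its('group') { should eq 'postgres' }             # primary group of the account
        its('home')  { should eq '/var/lib/postgresql' }  # typical home directory on Debian/Ubuntu
        its('shell') { should eq '/bin/bash' }            # adjust to match your installation
      end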

      Next, run your profile with the following command:

      • inspec exec .

      The tests will pass, and your output will resemble the following:

      Output

      Profile: InSpec Profile (PostgreSQL)
      Version: 0.1.0
      Target:  local://

        ✔  1-audit_installation: Audit PostgreSQL Installation
           ✔  System Package postgresql-10 should be installed
           ✔  System Package postgresql-10 version should cmp >= "10"
           ✔  Service postgresql@10-main should be enabled
           ✔  Service postgresql@10-main should be installed
           ✔  Service postgresql@10-main should be running
        ✔  2-audit_address_port: Audit Process and Port
           ✔  Port 5432 should be listening
           ✔  Port 5432 addresses should include "127.0.0.1"
           ✔  Port 5432 protocols should cmp == "tcp"
           ✔  Processes postgres should exist
           ✔  Processes postgres users should include "postgres"
           ✔  User postgres should exist

      Profile Summary: 2 successful controls, 0 control failures, 0 controls skipped
      Test Summary: 11 successful, 0 failures, 0 skipped

      The output indicates that both of your controls and all of your tests were successful.

      In this section, you have created your first InSpec profile and control and used them to organize your tests. You've used several InSpec resources to ensure that you have the correct version of PostgreSQL installed, the PostgreSQL service enabled and running correctly, and that the PostgreSQL user exists on the system. With this set up you're ready to audit your configuration.

      Step 4 — Auditing Your PostgreSQL Configuration

      In this step, you'll audit a few PostgreSQL configuration values. This will give you a foundation for working with the configuration file, so that you can audit any PostgreSQL configuration parameter as needed.

      Now that you have tests auditing the PostgreSQL installation, you'll audit the PostgreSQL configuration itself. PostgreSQL has several configuration parameters that you can use to tune it as desired, and these are stored in the configuration file located by default at /etc/postgresql/10/main/postgresql.conf. Your deployments may have different requirements regarding PostgreSQL configuration, such as logging, password encryption, SSL, and replication strategies, and you specify these requirements in the configuration file.

      You will use the postgres_conf resource, which tests specific, named configuration options against expected values in the PostgreSQL configuration file.

      This test will assume some non-default PostgreSQL configuration values that you'll set manually.

      Open the PostgreSQL configuration file in your favorite text editor:

      • sudo nano /etc/postgresql/10/main/postgresql.conf

      Set the following configuration values. If the option already exists in the file but is commented out, uncomment it by removing the #, and set the value as provided:

      /etc/postgresql/10/main/postgresql.conf

      password_encryption = scram-sha-256
      logging_collector = on
      log_connections = on
      log_disconnections = on
      log_duration = on
      

      The configuration values you have set:

      • Ensure that saved passwords are always encrypted with the scram-sha-256 algorithm.
      • Enable the logging collector, which is a background process that captures log messages from the standard error (stderr) and redirects them to a log file.
      • Enable logging of connection attempts to the PostgreSQL server as well as successful connections.
      • Enable logging of session terminations.
      • Enable logging of the duration of every completed statement.

      Save and exit the configuration file. Then restart the PostgreSQL service:

      • sudo service postgresql@10-main restart
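      Optionally, before writing the test you can confirm that the new settings took effect by querying one of them directly with psql, for example:

      • sudo -u postgres psql -c 'SHOW password_encryption;'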

      You'll test for only a few configuration options, but you can test any PostgreSQL configuration option with the postgres_conf resource.

      You will pass in your PostgreSQL configuration directory, which is at /etc/postgresql/10/main, using a new profile attribute, postgres_conf_dir. This configuration directory is not the same across all operating systems and platforms, so by passing it in as a profile attribute, you'll be making this profile easier to reuse in different environments.

      Open your inspec.yml file:

      • nano inspec.yml

      Add this new attribute to the attributes section of inspec.yml:

      inspec.yml

      ...
        - name: postgres_conf_dir
          type: string
          default: '/etc/postgresql/10/main'
      

      Save and exit your file. Then run inspec check again to verify that the profile is still valid, since you just edited inspec.yml:

      • inspec check .

      If there are no errors, you can proceed. Otherwise, open the inspec.yml file and ensure that the above lines are present at the end of the file.

      Now you will create the control that audits the configuration values you are enforcing. Open controls/postgresql.rb in your text editor:

      • nano controls/postgresql.rb

      Append the following control to the end of the file:

      controls/postgresql.rb

      ...
      POSTGRES_CONF_DIR = attribute('postgres_conf_dir')
      POSTGRES_CONF_PATH = File.join(POSTGRES_CONF_DIR, 'postgresql.conf')
      
      control '3-postgresql' do
        impact 1.0
        title 'Audit PostgreSQL Configuration'
        desc 'Audits specific configuration options'
      
        describe postgres_conf(POSTGRES_CONF_PATH) do
          its('port') {should eq PORT}
          its('password_encryption') {should eq 'scram-sha-256'}
          its('ssl') {should eq 'on'}
          its('logging_collector') {should eq 'on'}
          its('log_connections') {should eq 'on'}
          its('log_disconnections') {should eq 'on'}
          its('log_duration') {should eq 'on'}
        end
      end
      

      Here you define two variables:

      • POSTGRES_CONF_DIR holds the postgres_conf_dir attribute as defined in the profile configuration.
      • POSTGRES_CONF_PATH holds the absolute path of the configuration file by concatenating the configuration file name with the configuration directory using File.join.

      Next, you define the control with its name and metadata. Then you use the postgres_conf resource together with the eq matcher to ensure your required values for the configuration options are correct. Save and exit controls/postgresql.rb.
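      While iterating on a single control like this one, you don't have to re-run the entire profile each time: inspec exec accepts a --controls flag that limits execution to the controls you name, for example:

      • inspec exec . --controls 3-postgresql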

      Next, you will run the test with the following command:

      • inspec exec .

      The tests will pass, and your output will resemble the following:

      Output

      Profile: InSpec Profile (PostgreSQL)
      Version: 0.1.0
      Target:  local://

        ✔  1-audit_installation: Audit PostgreSQL Installation
           ✔  System Package postgresql-10 should be installed
           ✔  System Package postgresql-10 version should cmp >= "10"
           ✔  Service postgresql@10-main should be enabled
           ✔  Service postgresql@10-main should be installed
           ✔  Service postgresql@10-main should be running
        ✔  2-audit_address_port: Audit Process and Port
           ✔  Port 5432 should be listening
           ✔  Port 5432 addresses should include "127.0.0.1"
           ✔  Port 5432 protocols should cmp == "tcp"
           ✔  Processes postgres should exist
           ✔  Processes postgres users should include "postgres"
           ✔  User postgres should exist
        ✔  3-postgresql: Audit PostgreSQL Configuration
           ✔  PostgreSQL Configuration port should eq "5432"
           ✔  PostgreSQL Configuration password_encryption should eq "scram-sha-256"
           ✔  PostgreSQL Configuration ssl should eq "on"
           ✔  PostgreSQL Configuration logging_collector should eq "on"
           ✔  PostgreSQL Configuration log_connections should eq "on"
           ✔  PostgreSQL Configuration log_disconnections should eq "on"
           ✔  PostgreSQL Configuration log_duration should eq "on"

      Profile Summary: 3 successful controls, 0 control failures, 0 controls skipped
      Test Summary: 18 successful, 0 failures, 0 skipped

      The output indicates that your three controls and all your tests were successful without any skipped tests or controls.

      In this step, you've added a new InSpec control that tests specific PostgreSQL configuration values from the configuration file using the postgres_conf resource. You audited only a few values here, but you can use this resource to test any configuration option in the configuration file.

      Step 5 — Auditing PostgreSQL Client Authentication

      Now that you've written some tests for your PostgreSQL configuration, you'll write some tests for client authentication. This is important for installations that need to enforce specific authentication methods for different kinds of users; for example, to require that clients connecting to PostgreSQL locally always authenticate with a password, or to reject connections from a specific IP address or address range.

      An important configuration for PostgreSQL installations where security is a concern is to allow only encrypted password authentication. PostgreSQL 10 supports two password encryption methods for client authentication: md5 and scram-sha-256. This test will require password encryption for all clients, which means that the METHOD field for every client entry in the client authentication configuration file must be set to either md5 or scram-sha-256. For these tests, you will use scram-sha-256, since it is more secure than md5.

      By default, local clients use the peer authentication method in the pg_hba.conf file. For the test, you need to change these entries to scram-sha-256. Open the /etc/postgresql/10/main/pg_hba.conf file:

      • sudo nano /etc/postgresql/10/main/pg_hba.conf

      The top of the file contains comments. Scroll down and look for uncommented lines where the authentication type is local, and change the authentication method from peer to scram-sha-256. For example, change:

      /etc/postgresql/10/main/pg_hba.conf

      ...
      local   all             postgres                                peer
      ...
      

      to:

      /etc/postgresql/10/main/pg_hba.conf

      ...
      local   all             postgres                                scram-sha-256
      ...
      

      At the end, your pg_hba.conf configuration will resemble the following:

      /etc/postgresql/10/main/pg_hba.conf

      ...
      local   all             postgres                                scram-sha-256
      
      # TYPE  DATABASE        USER            ADDRESS                 METHOD
      
      # "local" is for Unix domain socket connections only
      local   all             all                                     scram-sha-256
      # IPv4 local connections:
      host    all             all             127.0.0.1/32            scram-sha-256
      # IPv6 local connections:
      host    all             all             ::1/128                 scram-sha-256
      # Allow replication connections from localhost, by a user with the
      # replication privilege.
      local   replication     all                                     scram-sha-256
      host    replication     all             127.0.0.1/32            scram-sha-256
      host    replication     all             ::1/128                 scram-sha-256
      ...
      

      Save and exit the configuration file. Then restart the PostgreSQL service:

      • sudo service postgresql@10-main restart

      For this test, you'll use the postgres_hba_conf resource. This resource is used to test the client authentication data defined in the pg_hba.conf file. You'll pass in the path of your pg_hba.conf file as a parameter to this resource.

      Your control will consist of two describe blocks that check the auth_method fields for both local and host clients respectively to ensure that they are both equal to scram-sha-256. Open controls/postgresql.rb in your text editor:

      • nano controls/postgresql.rb

      Append the following control to the end of the test file controls/postgresql.rb:

      controls/postgresql.rb

      POSTGRES_HBA_CONF_FILE = File.join(POSTGRES_CONF_DIR, 'pg_hba.conf')
      
      control '4-postgres_hba' do
        impact 1.0
        title 'Require SCRAM-SHA-256 for ALL users, peers in pg_hba.conf'
        desc 'Require SCRAM-SHA-256 for ALL users, peers in pg_hba.conf. Do not allow untrusted authentication methods.'
      
        describe postgres_hba_conf(POSTGRES_HBA_CONF_FILE).where { type == 'local' } do
          its('auth_method') { should all eq 'scram-sha-256' }
        end
      
        describe postgres_hba_conf(POSTGRES_HBA_CONF_FILE).where { type == 'host' } do
          its('auth_method') { should all eq 'scram-sha-256' }
        end
      end
      

      In this code block, you define a new variable POSTGRES_HBA_CONF_FILE to store the absolute location of your pg_hba.conf file. File.join is a Ruby method to concatenate two file path segments with /. You use it here to join the POSTGRES_CONF_DIR variable, declared in the previous section, with the PostgreSQL configuration file pg_hba.conf. This will produce an absolute file path of the pg_hba.conf file and store it in the POSTGRES_HBA_CONF_FILE variable.

      After that, you declare and configure the control and its metadata. The first describe block checks that all configuration entries where the client type is local also have scram-sha-256 as their authentication methods. The second describe block does the same for cases where the client type is host. Save and exit controls/postgresql.rb.
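      Note that the where query is not limited to the type field; the postgres_hba_conf resource also exposes fields such as database, user, address, and auth_method. As a hedged sketch, if you later wanted to audit only the replication entries, you could add a block like this to the control:

      describe postgres_hba_conf(POSTGRES_HBA_CONF_FILE).where { database == 'replication' } do
        its('auth_method') { should all eq 'scram-sha-256' }
      end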

      You'll execute this profile as the postgres user, because read access to the PostgreSQL HBA configuration file is granted only to its owner and group, which is the postgres user. Execute the profile by running:

      • sudo -u postgres inspec exec .

      Your output will resemble the following:

      Output

      Profile: InSpec Profile (PostgreSQL)
      Version: 0.1.0
      Target:  local://

        ✔  1-audit_installation: Audit PostgreSQL Installation
           ✔  System Package postgresql-10 should be installed
           ✔  System Package postgresql-10 version should cmp >= "10"
           ✔  Service postgresql@10-main should be enabled
           ✔  Service postgresql@10-main should be installed
           ✔  Service postgresql@10-main should be running
        ✔  2-audit_address_port: Audit Process and Port
           ✔  Port 5432 should be listening
           ✔  Port 5432 addresses should include "127.0.0.1"
           ✔  Port 5432 protocols should cmp == "tcp"
           ✔  Processes postgres should exist
           ✔  Processes postgres users should include "postgres"
           ✔  User postgres should exist
        ✔  3-postgresql: Audit PostgreSQL Configuration
           ✔  PostgreSQL Configuration port should eq "5432"
           ✔  PostgreSQL Configuration password_encryption should eq "scram-sha-256"
           ✔  PostgreSQL Configuration ssl should eq "on"
           ✔  PostgreSQL Configuration logging_collector should eq "on"
           ✔  PostgreSQL Configuration log_connections should eq "on"
           ✔  PostgreSQL Configuration log_disconnections should eq "on"
           ✔  PostgreSQL Configuration log_duration should eq "on"
        ✔  4-postgres_hba: Require SCRAM-SHA-256 for ALL users, peers in pg_hba.conf
           ✔  Postgres Hba Config /etc/postgresql/10/main/pg_hba.conf with type == "local" auth_method should all eq "scram-sha-256"
           ✔  Postgres Hba Config /etc/postgresql/10/main/pg_hba.conf with type == "host" auth_method should all eq "scram-sha-256"

      Profile Summary: 4 successful controls, 0 control failures, 0 controls skipped
      Test Summary: 20 successful, 0 failures, 0 skipped

      This output indicates that the new control you added, together with all of the previous controls, passed. It also indicates that all the tests in your profile are successful.

      In this step, you have added a control to your profile that successfully audited your PostgreSQL client authentication configuration to ensure that all clients are authenticated via scram-sha-256 using the postgres_hba_conf resource.

      Conclusion

      You've set up InSpec and successfully audited a PostgreSQL 10 installation. In the process, you've used a selection of InSpec tools such as the InSpec DSL, matchers, resources, profiles, attributes, and the CLI. From here, you can incorporate other resources that InSpec provides in the Resources section of its documentation. InSpec also provides a mechanism for defining custom resources for your specific needs; these custom resources are written as regular Ruby classes.

      You can also explore the Compliance Profiles section of the Chef Supermarket, which contains publicly shared InSpec profiles that you can execute directly or extend in your own profiles, and you can share your own profiles with the general public there as well.

      You can go further by exploring other tools in the Chef universe, such as Chef and Habitat. InSpec is integrated with Habitat, which lets you ship your compliance controls together with your Habitat-packaged applications and run them continuously. You can explore official and community InSpec tutorials on the tutorials page. For more advanced InSpec references, check the official InSpec documentation.


