

      How To Automate Deployment Using CircleCI and GitHub on Ubuntu 18.04

      The author selected International Medical Corps to receive a donation as part of the Write for DOnations program.


      Continuous Integration/Continuous Deployment (CI/CD) is a development practice that allows software teams to build, test, and deploy applications more easily and quickly on multiple platforms. CircleCI is a popular automation platform that allows you to build and maintain CI/CD workflows for your projects.

      Having continuous deployment is beneficial in many ways. It helps to standardize the deployment steps of an application and to protect it from unrecorded changes. It also helps you avoid performing repetitive steps and lets you focus more on development. With CircleCI, you can have a single view across all your different deployment processes for development, testing, and production.

      In this tutorial, you’ll build a Node.js app locally and push it to GitHub. Following that, you’ll configure CircleCI to connect to a virtual private server (VPS) that’s running Ubuntu 18.04, and you’ll go through the steps to set up your code for auto-deployment on the VPS. By the end of the article, you will have a working CI/CD pipeline where CircleCI will pick up any code you push from your local environment to the GitHub repo and deploy it on your VPS.


      Before you get started, you’ll need to have the following:

      Step 1 — Creating a Local Node Project

      In this step, you’ll create a Node.js project locally that you will use during this tutorial as an example application. You’ll push this to a repo on GitHub later.

      Go ahead and run these commands on your local terminal so that you can set up a quick Node development environment.

      First, create a directory for the test project:

      • mkdir circleci-test

      Change into the new directory:

      • cd circleci-test

      Follow this up by initializing an npm environment to pull in dependencies if you have any. The -y flag will auto-accept every prompt thrown by npm init:

      • npm init -y

      For more information on npm, check out our How To Use Node.js Modules with npm and package.json tutorial.

      Next, create a basic server that serves Hello World! when someone accesses any route. Using a text editor, create a file called app.js in the root directory of the project. This tutorial will use nano:

      Add the following code to the app.js file:


      const http = require('http');
      http.createServer(function (req, res) {
        res.write('Hello World!');
        res.end();
      }).listen(8080);

      This sample server uses the http package to listen for incoming requests on port 8080 and fires a request listener function that replies with the string Hello World!.

      Save and close the file.

      You can test this on your local machine by running the following command from the same directory in the terminal. This will create a Node process that runs the server (app.js):

      • node app.js

      Now visit the http://localhost:8080 URL in your browser. Your browser will render the string Hello World!. Once you have tested the app, stop the server by pressing CTRL+C on the same terminal where you started the Node process.
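Alternatively, you can script the same check from a terminal instead of using a browser. This is a sketch, assuming curl is installed and you are in the project directory; the sleep gives the server a moment to start:

```shell
# Start the server in the background, query it, then stop it.
node app.js &
SERVER_PID=$!
sleep 1
curl -s --max-time 5 http://localhost:8080
kill "$SERVER_PID"
```

The --max-time flag keeps curl from waiting forever if the server never closes the response.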

      You’ve now set up your sample application. In the next step, you will add a configuration file in the project so that CircleCI can use it for deployment.

      Step 2 — Adding a Config File to Your Project

      CircleCI executes workflows according to a configuration file in your project folder. In this step, you will create that file to define the deployment workflow.

      Create a folder called .circleci in the root directory of your project:

      • mkdir .circleci

      Add a new file called config.yml in it:

      • nano .circleci/config.yml

      This will open a file with the YAML file extension. YAML is a language that is often used for configuration management.

      Add the following configuration to the new config.yml file:


      version: 2.1

      # Define the jobs we want to run for this project
      jobs:
        pull-and-build:
          docker:
            - image: arvindr226/alpine-ssh
          steps:
            - checkout
            - run: ssh -oStrictHostKeyChecking=no -v $USER@$IP "./deploy.sh"

      # Orchestrate our job run sequence
      workflows:
        version: 2
        deploy:
          jobs:
            - pull-and-build:
                filters:
                  branches:
                    only:
                      - main

      Save this file and exit the text editor.

      This file tells the CircleCI pipeline the following:

      • There is a job called pull-and-build whose steps involve spinning up a Docker container, SSHing from it to the VPS, and then running the deploy.sh script.
      • The Docker container serves the purpose of creating a temporary environment for executing the commands mentioned in the steps section of the config. In this case, all you need to do is SSH into the VPS and run the deploy script, so the environment needs to be lightweight but still allow the SSH command. The Docker image arvindr226/alpine-ssh is an Alpine Linux image that supports SSH.
      • deploy.sh is a file that you will create in the VPS. It will run every time as part of the deployment process and will contain steps specific to your project.
      • In the workflows section, you inform CircleCI that it needs to perform this job based on some filters, which in this case is that only changes to the main branch will trigger this job.

      Next, you will commit and push these files to a GitHub repository. You will do this by running the following commands from the project directory.

      First, initialize the Node.js project directory as a git repo:

      • git init

      Go ahead and add the new changes to the git repo:

      • git add .

      Then commit the changes:

      • git commit -m "initial commit"

      If this is the first time committing, git will prompt you to run some git config commands to identify you.

      From your browser navigate to GitHub and log in with your GitHub account. Create a new repository called circleci-test without a README or license file. Once you’ve created the repository, return to the command line to push your local files to GitHub.

      To follow GitHub protocol, rename your default branch to main with the following command:

      • git branch -M main

      Before you push the files for the first time, you need to add GitHub as a remote repository. Do that by running:

      • git remote add origin https://github.com/your_username/circleci-test.git

      Follow this with the push command, which will transfer the files to GitHub:

      • git push -u origin main

      You have now pushed your code to GitHub. In the next step, you’ll create a new user in the VPS that will execute the steps in the pull-and-build part.

      Step 3 — Creating a New User for Deployment

      Now that you have the project ready, you will create a deployment user in the VPS.

      Connect to your VPS as your sudo user:

      • ssh your_username@your_server_ip

      Next, create a new user that doesn’t use a password for login using the useradd command.

      • sudo useradd -m -d /home/circleci -s /bin/bash circleci

      This command creates a new user on the system. The -m flag instructs the command to create a home directory specified by the -d flag.

      circleci will be the new deployment user in this case. For security purposes, you are not going to add this user to the sudo group, since the only job of this user is to create an SSH connection from the VPS to the CircleCI network and run the script.
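If you ever rerun this setup, useradd will fail when the user already exists. A small guard like the following keeps the step repeatable (a sketch; the user_exists helper name is invented here):

```shell
# Return success if the given login exists in the passwd database.
user_exists() {
  getent passwd "$1" > /dev/null
}

if user_exists circleci; then
  echo "circleci already exists, skipping useradd"
else
  sudo useradd -m -d /home/circleci -s /bin/bash circleci
fi
```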

      Make sure that the firewall on your VPS is open to port 8080:

      • sudo ufw allow 8080

      You now need to create an SSH key, which the new user can use to log in. You are going to create an SSH key with no passphrase, or else CircleCI will not be able to decrypt it. You can find more information in the official CircleCI documentation. Also, CircleCI expects the format of the SSH keys to be in the PEM format, so you are going to enforce that while creating the key pair.

      Back on your local system, move to your home folder:

      • cd ~

      Then run the following command:

      • ssh-keygen -m PEM -t rsa -f .ssh/circleci

      This command creates an RSA key in the PEM format specified by the -m flag, with the key type specified by the -t flag. You also specify -f to create a new key pair called circleci; specifying the name avoids overwriting your existing id_rsa file.
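You can confirm the key really is in PEM format by inspecting its first line (assuming the key pair above was created at ~/.ssh/circleci):

```shell
# A PEM-format RSA private key starts with this header line:
# -----BEGIN RSA PRIVATE KEY-----
head -n 1 ~/.ssh/circleci
```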

      Print out the new public key:

      • cat ~/.ssh/circleci.pub

      This outputs the public key that you generated. You will need to register this public key in your VPS. Copy this to your clipboard.

      Back on the VPS, create a .ssh directory for the circleci user:

      • sudo mkdir /home/circleci/.ssh

      Here you’ll add the public key you copied from the local machine into a file called authorized_keys:

      • sudo nano /home/circleci/.ssh/authorized_keys

      Add the copied public key here, save the file, and exit the text editor.

      Give the circleci user ownership of its home directory so that it doesn’t run into permission issues during deployment:

      • sudo chown -R circleci:circleci /home/circleci

      Verify if you can log in as the new user by using the private key. Open a new terminal on your local system and run:

      • ssh circleci@your_server_ip -i ~/.ssh/circleci

      You will now be logged in to your VPS as the circleci user. This shows that the SSH connection is successful. Next, you will connect your GitHub repo to CircleCI.

      Step 4 — Adding Your GitHub Project to CircleCI

      In this step, you’ll connect your GitHub account to your CircleCI account and add the circleci-test project for CI/CD. If you signed up with your GitHub account, then your GitHub will be automatically linked with your CircleCI account. If not, head over to your CircleCI account settings and connect it.

      To add your circleci-test project, navigate to your CircleCI project dashboard:

      CircleCI Projects Tab

      Here you will find all the projects from GitHub listed. Click on Set Up Project for the project circleci-test. This will bring you to the project setup page:

      Setting up a project in the CircleCI Interface

      You’ll now have the option to set the config for the project, which you have already set in the repo. Since this is already set up, choose the Use Existing Config option. This will bring up a popup box confirming that you want to build the pipeline:

      Popup confirming the config file for the CircleCI build

      From here, go ahead and click on Start Building. This will bring you to the circleci-test pipeline page. For now, this pipeline will fail. This is because you must first update the SSH keys for your project.

      Navigate to the project settings and select the SSH keys section on the left.

      Retrieve the private key named circleci you created earlier from your local machine by running:

      • cat ~/.ssh/circleci

      Copy the output from this command.

      Under the Additional SSH Keys section, click on the Add SSH Key button.

      Adding SSH Keys section of the settings page

      This will open up a window asking you to enter the hostname and the SSH key. Enter a hostname of your choice, and add in the private SSH key that you copied from your local environment.

      CircleCI will now be able to log in as the new circleci user to the VPS using this key.

      The last step is to provide the username and IP of the VPS to CircleCI. In the same Project Settings page, go to the Environment Variables tab on the left:

      Environment Variables section of the settings page

      Add an environment variable named USER with a value of circleci, and another named IP with the IP address of your VPS (or the domain name of your VPS, if you have a DNS record).
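To see how these variables are consumed, recall the run step in config.yml: CircleCI injects USER and IP into the job's environment, and the shell expands them before ssh executes. You can preview the expansion locally (203.0.113.5 is a placeholder documentation address):

```shell
# Simulate the environment CircleCI provides and print the resulting command.
USER=circleci IP=203.0.113.5 sh -c 'echo ssh -oStrictHostKeyChecking=no "$USER@$IP"'
# prints: ssh -oStrictHostKeyChecking=no circleci@203.0.113.5
```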

      Once you’ve created these variables, you have completed the setup needed for CircleCI. Next, you will give the circleci user access to GitHub via SSH.

      Step 5 — Adding SSH Keys to GitHub

      You now need to provide a way for the circleci user to authenticate with GitHub so that it can perform git operations like git pull.

      To do this, you will create an SSH key for this user to authenticate against GitHub.

      Connect to the VPS as the circleci user:

      • ssh circleci@your_server_ip -i ~/.ssh/circleci

      Create a new SSH key pair with no passphrase (press ENTER at each prompt to accept the defaults and leave the passphrase empty):

      • ssh-keygen -t rsa

      Then output the public key:

      • cat ~/.ssh/id_rsa.pub

      Copy the output, then head over to your circleci-test GitHub repo’s deploy key settings (under Settings, then Deploy keys).

      Click on Add deploy key to add the copied public key. Fill the Title field with your desired name for the key, then add the copied public key in the Key field. Finally, click the Add key button to add the key to your account.

      Now that the circleci user has access to your GitHub account, you’ll use this SSH authentication to set up your project.

      Step 6 — Setting Up the Project on the VPS

      To set up the project, you will clone the repo and perform the initial setup on the VPS as the circleci user.

      On your VPS, run the following command to clone your repo over SSH:

      • git clone git@github.com:your_username/circleci-test.git

      Navigate into it:

      • cd circleci-test

      First, install the dependencies:

      • npm install

      Now test the app out by running the server you built:

      • node app.js

      Head over to your browser and try the address http://your_vps_ip:8080. You will receive the output Hello World!.

      Stop this process with CTRL+C and use pm2 to run this app as a background process.

      Install pm2 so that you can run the Node app as an independent process:

      • npm install -g pm2

      pm2 is a versatile process manager written in Node.js. Here it will help you keep the sample Node.js project running as an active process even after you log out of the server. You can read a bit more about this in the How To Set Up a Node.js Application for Production on Ubuntu 18.04 tutorial.

      Note: On some systems such as Ubuntu 18.04, installing an npm package globally can result in a permission error, which will interrupt the installation. Since it is a security best practice to avoid using sudo with npm install, you can instead resolve this by changing npm’s default directory. If you encounter an EACCES error, follow the instructions at the official npm documentation.

      You can use the pm2 start command to run the app.js file as a Node process. You can name it app using the --name flag to identify it later:

      • pm2 start app.js --name "app"

      You will also need to provide the deployment instructions. These commands will run every time the circleci user deploys the code.

      Head back to the home directory, since that will be the path the circleci user lands in during a successful login attempt:

      • cd ~

      Go ahead and create the deploy.sh file, which will contain the deploy instructions:

      • nano deploy.sh

      You will now use a Bash script to automate the deployment. Add the following to the file:

      #!/bin/bash
      #replace this with the path of your project on the VPS
      cd ~/circleci-test
      #pull from the branch
      git pull origin main
      # followed by instructions specific to your project that you used to do manually
      npm install
      export PATH=~/.npm-global/bin:$PATH
      source ~/.profile
      pm2 restart app

      This will automatically change the working directory to the project root, pull the code from GitHub, install the dependencies, then restart the app. Save and exit the file.
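One optional hardening step, not part of the original script: starting it with set -e makes Bash abort on the first failed command, so a failed git pull cannot be followed by a pm2 restart of stale code. A minimal demonstration of the behavior:

```shell
# With set -e, the script stops at the first failing command (false here),
# so the echo after it never runs.
bash -c 'set -e; false; echo "never reached"' || echo "deploy aborted early"
# prints: deploy aborted early
```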

      Make this file executable by running:

      • chmod +x deploy.sh

      Now head back to your local machine and make a quick change to test it out. Change into your project directory:

      • cd circleci-test

      Open up your app.js file:

      • nano app.js

      Now change the response string in the request listener as follows:


      const http = require('http');
      http.createServer(function (req, res) {
        res.write('Foo Bar!');
        res.end();
      }).listen(8080);

      Save the file and exit the text editor.

      Add this change and commit it:

      • git add .
      • git commit -m "modify app.js"

      Now push this to your main branch:

      • git push origin main

      This will trigger a new pipeline for deployment. Navigate to your CircleCI project dashboard to view the pipeline in action.

      Once it’s successful, refresh the browser at http://your_vps_ip:8080. Foo Bar! will now render in your browser.


      These are the steps to integrate CircleCI with your GitHub repository and Linux-based VPS. You can modify the deploy.sh script with more specific instructions related to your project.

      If you would like to learn more about CI/CD, check out our CI/CD topic page. For more on setting up workflows with CircleCI, head over to the CircleCI documentation.


      How To Automate Jenkins Job Configuration Using Job DSL

      The author selected the Internet Archive to receive a donation as part of the Write for DOnations program.


      Jenkins is a popular automation server, often used to orchestrate continuous integration (CI) and continuous deployment (CD) workflows. However, the process of setting up Jenkins itself has traditionally been a manual, siloed process for the system administrator. The process typically involves installing dependencies, running the Jenkins server, configuring the server, defining pipelines, and configuring jobs.

      Then came the Everything as Code (EaC) paradigm, which allowed administrators to define these manual tasks as declarative code that can be version-controlled and automated. In previous tutorials, we covered how to define Jenkins pipelines as code using Jenkinsfiles, as well as how to install dependencies and define configuration of a Jenkins server as code using Docker and JCasC. But using only Docker, JCasC, and pipelines to set up your Jenkins instance would only get you so far—these servers would not come pre-loaded with any jobs, so someone would still have to configure them manually. The Job DSL plugin provides a solution, and allows you to configure Jenkins jobs as code.

      In this tutorial, you’ll use Job DSL to configure two demo jobs: one that prints a 'Hello World' message in the console, and one that runs a pipeline from a Git repository. If you follow the tutorial to the end, you will have a minimal Job DSL script that you can build on for your own use cases.


      To complete this tutorial, you will need:

      Step 1 — Installing the Job DSL Plugin

      The Job DSL plugin provides the Job DSL features you’ll use in this tutorial for your demo jobs. In this step, you will install the Job DSL plugin.

      First, navigate to your_jenkins_url/pluginManager/available. In the search box, type in Job DSL. Next, in the resulting plugins list, check the box next to Job DSL and click Install without restart.

      Plugin Manager page showing Job DSL checked

      Note: If searching for Job DSL returned no results, it either means the Job DSL plugin is already installed, or that your Jenkins server’s plugin list is not updated.

      You can check if the Job DSL plugin is already installed by navigating to your_jenkins_url/pluginManager/installed and searching for Job DSL.

      You can update your Jenkins server’s plugin list by navigating to your_jenkins_url/pluginManager/available and clicking on the Check Now button at the bottom of the (empty) plugins list.

      After initiating the installation process, you’ll be redirected to a page that shows the progress of the installation. Wait until you see Success next to both Job DSL and Loading plugin extensions before continuing to the next step.

      You’ve installed the Job DSL plugin. You are now ready to use Job DSL to configure jobs as code. In the next step, you will define a demo job inside a Job DSL script. You’ll then incorporate the script into a seed job, which, when executed, will create the jobs defined.

      Step 2 — Creating a Seed Job

      The seed job is a normal Jenkins job that runs the Job DSL script; in turn, the script contains instructions that create additional jobs. In short, the seed job is a job that creates more jobs. In this step, you will construct a Job DSL script and incorporate it into a seed job. The Job DSL script that you’ll define will create a single freestyle job that prints a 'Hello World!' message in the job’s console output.

      A Job DSL script consists of API methods provided by the Job DSL plugin; you can use these API methods to configure different aspects of a job, such as its type (freestyle versus pipeline jobs), build triggers, build parameters, post-build actions, and so on. You can find all supported methods on the API reference site.

      Jenkins Job DSL API Reference web page

      By default, the site shows the API methods for job configuration settings that are available as part of the core Jenkins installation, as well as settings that are enabled by 184 supported plugins (accurate as of v1.77). To get a clearer picture of what API methods the Job DSL plugin provides for only the core Jenkins installation, click on the funnel icon next to the search box, and then check and uncheck the Filter by Plugin checkbox to deselect all the plugins.

      Jenkins Job DSL API reference web page showing only the core APIs

      The list of API methods is now significantly reduced. The ones that remain would work even if the Jenkins installation had no plugins installed apart from the Job DSL plugin.

      For the ‘Hello World’ freestyle job, you need the job API method (freeStyleJob is an alias of job and would also work). Let’s navigate to the documentation for the job method.

      job API method reference

      Click the ellipsis icon (…) in job(String name) { … } to show the methods and blocks that are available within the job block.

      Expanded view of the job API method reference

      Let’s go over some of the most commonly used methods and blocks within the job block:

      • parameters: setting parameters for users to input when they create a new build of the job.
      • properties: static values that are to be used within the job.
      • scm: configuration for how to retrieve the source code from a source-control management provider like GitHub.
      • steps: definitions for each step of the build.
      • triggers: apart from manually creating a build, specifies in what situations the job should be run (for example, periodically like a cron job, or after some events like a push to a GitHub repository).

      You can further expand child blocks to see what methods and blocks are available within. Click on the ellipsis icon (…) in steps { … } to uncover the shell(String command) method, which you can use to run a shell script.

      Reference for the Job DSL steps block

      Putting the pieces together, you can write a Job DSL script like the following to create a freestyle job that, when run, will print 'Hello World!' in the output console.

      job('demo') {
          steps {
              shell('echo Hello World!')
          }
      }
      To run the Job DSL script, we must first incorporate it into a seed job.

      To create the seed job, go to your_jenkins_url, log in (if necessary), click the New Item link on the left of the dashboard. On the screen that follows, type in seed, select Freestyle project, and click OK.

      Part of the New Item screen where you give the item the name of 'seed' and with the 'Freestyle project' option selected

      In the screen that follows, scroll down to the Build section and click on the Add build step dropdown. Next select Process Job DSLs.

      Screen showing the Add build step dropdown expanded and the Process Job DSLs option selected

      Then, click on the radio button next to Use the provided DSL script, and paste the Job DSL script you wrote into the DSL Script text area.

      Job DSL script added to the Process Job DSLs build step

      Click Save to create the job. This will take you to the seed job page.

      Seed job page

      Then, navigate to your_jenkins_url and confirm that the seed job is there.

      Jenkins jobs list showing the seed job

      You’ve successfully created a seed job that incorporates your Job DSL script. In the next step, you will run the seed job so that new jobs are created based on your Job DSL script.

      Step 3 — Running the Seed Job

      In this step, you will run the seed job and confirm that the jobs defined within the Job DSL script are indeed created.

      First, click back into the seed job page and click on the Build Now button on the left to run the seed job.

      Refresh the page and you’ll see a new section that says Generated Items; it lists the demo job that you’ve specified in your Job DSL script.

      Seed job page showing a list of generated items from running the seed job

      Navigate to your_jenkins_url and you will find the demo job that you specified in the Job DSL script.

      Jenkins jobs list showing the demo and seed jobs

      Click the demo link to go to the demo job page. You’ll see Seed job: seed, indicating that this job is created by the seed job. Now, click the Build Now link to run the demo job once.

      Demo job page showing a section on seed job

      This creates an entry inside the Build History box. Hover over the date of the entry to reveal a little arrow; click on it to reveal the dropdown. From the dropdown, choose Console Output.

      Screen showing the Console Output option selected in the dropdown for Build #1 inside the Build History box

      This will bring you to the logs and console output from this build. In it, you will find the line + echo Hello World! followed by Hello World!, which corresponds to the shell('echo Hello World!') step in your Job DSL script.

      Console output of build #1 showing the echo Hello World! command and output
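The + prefix in that log comes from shell trace mode: Jenkins typically runs shell build steps with tracing enabled, so each command is echoed before its output. You can reproduce the same log shape locally:

```shell
# The -x flag makes the shell print each command (prefixed with +) to stderr
# before executing it, which is exactly what the Jenkins console shows.
sh -xc 'echo Hello World!'
# + echo Hello World!
# Hello World!
```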

      You’ve run the demo job and confirmed that the echo step specified in the Job DSL script was executed. In the next and final step, you will modify and re-apply the Job DSL script to include an additional pipeline job.

      Step 4 — Defining Pipeline Jobs

      In line with the Everything as Code paradigm, more and more developers are choosing to define their builds as pipeline jobs—those that use a pipeline script (typically named Jenkinsfile)—instead of freestyle jobs. The demo job you’ve defined so far is a small demonstration. In this step, you will define a more realistic job that pulls down a Git repository from GitHub and runs a pipeline defined in one of its pipeline scripts.

      For Jenkins to pull a Git repository and build using pipeline scripts, you’ll need to install additional plugins. So, before you make any changes to the Job DSL script, first make sure that the required plugins are installed.

      Navigate to your_jenkins_url/pluginManager/installed and check the plugins lists for the presence of the Git, Pipeline: Job, and Pipeline: Groovy plugins. If any of them are not installed, go to your_jenkins_url/pluginManager/available and search for and select the plugins, then click Install without restart.

      Now that the required plugins are installed, let’s shift our focus to modifying your Job DSL script to include an additional pipeline job.

      We will be defining a pipeline job that pulls the code from the public jenkinsci/pipeline-examples Git repository and runs the environmentInStage.groovy declarative pipeline script found in it.

      Once again, navigate to the Jenkins Job DSL API Reference, click the funnel icon to bring up the Filter by Plugin menu, then deselect all the plugins except Git, Pipeline: Job, and Pipeline: Groovy.

      The Jenkins Job DSL API Reference page with all plugins deselected except for Pipeline: Job, and (not shown) Git and Pipeline: Groovy

      Click on pipelineJob on the left-hand side menu and expand the pipelineJob(String name) { … } block, then, in order, the definition { … }, cpsScm { … }, and scm { … } blocks.

      Expanded view of the pipelineJob API method block

      There are comments above each API method that explain their roles. For our use case, you’d want to define your pipeline job using a pipeline script found inside a GitHub repository. So you’d need to modify your Job DSL script as follows:

      job('demo') {
          steps {
              shell('echo Hello World!')
          }
      }

      pipelineJob('github-demo') {
          definition {
              cpsScm {
                  scm {
                      git {
                          remote {
                              github('jenkinsci/pipeline-examples')
                          }
                      }
                  }
                  scriptPath('declarative-examples/simple-examples/environmentInStage.groovy')
              }
          }
      }
      To make the change, go to your_jenkins_url/job/seed/configure and find the DSL Script text area, and replace the contents with your new Job DSL script. Then press Save. In the next screen, click on Build Now to re-run the seed job.

      Then, go to the Console Output page of the new build and you’ll find Added items: GeneratedJob{name="github-demo"}, which means you’ve successfully added the new pipeline job, whilst the existing job remains unchanged.

      Console output for the modified seed job, showing that the github-demo job has been added

      You can confirm this by going to your_jenkins_url; you will find the github-demo job in the list of jobs.

      Job list showing the github-demo job

      Finally, confirm that your job is working as intended by navigating to your_jenkins_url/job/github-demo/ and clicking Build Now. After the build has finished, navigate to your_jenkins_url/job/github-demo/1/console and you will find the Console Output page showing that Jenkins has successfully cloned the repository and executed the pipeline script.


      In this tutorial, you’ve used the Job DSL plugin to configure jobs on Jenkins servers in a consistent and repeatable way.

      But Job DSL is not the only tool in the Jenkins ecosystem that follows the Everything as Code (EaC) paradigm. You can also deploy Jenkins as Docker containers and set it up using Jenkins Configuration as Code (JCasC). Together, Docker, JCasC, Job DSL, and pipelines allow developers and administrators to deploy and configure Jenkins completely automatically, without any manual involvement.


      How To Execute Ansible Playbooks to Automate Server Setup


      Ansible is a modern configuration management tool that facilitates the task of setting up and maintaining remote servers. With a minimalist design intended to get users up and running quickly, it allows you to control one to hundreds of systems from a central location with either playbooks or ad hoc commands.

      While ad hoc commands allow you to run one-off tasks on servers registered within your inventory file, playbooks are typically used to automate a sequence of tasks for setting up services and deploying applications to remote servers. Playbooks are written in YAML, and can contain one or more plays.

      This short guide demonstrates how to execute Ansible playbooks to automate server setup, using an example playbook that sets up an Nginx server with a single static HTML page.


      In order to follow this guide, you’ll need:

      • One Ansible control node. This guide assumes your control node is an Ubuntu 20.04 machine with Ansible installed and configured to connect to your Ansible hosts using SSH keys. Make sure the control node has a regular user with sudo permissions and a firewall enabled, as explained in our Initial Server Setup guide. To set up Ansible, please follow our guide on How to Install and Configure Ansible on Ubuntu 20.04.
      • One or more Ansible hosts. An Ansible host is any machine that your Ansible control node is configured to automate. This guide assumes your Ansible hosts are remote Ubuntu 20.04 servers. Make sure each Ansible host has:
        • The Ansible control node’s SSH public key added to the authorized_keys of a system user. This user can be either root or a regular user with sudo privileges. To set this up, you can follow Step 2 of How to Set Up SSH Keys on Ubuntu 20.04.
      • An inventory file set up on the Ansible control node. Make sure you have a working inventory file containing all your Ansible hosts. To set this up, please refer to the guide on How To Set Up Ansible Inventories.

      Once you have met these prerequisites, run a connection test as outlined in our guide on How To Manage Multiple Servers with Ansible Ad Hoc Commands to make sure you’re able to connect and execute Ansible instructions on your remote nodes. In case you don’t have a playbook already available to you, you can create a testing playbook as described in the next section.

      Creating a Test Playbook

      To try out the examples described in this guide, you’ll need an Ansible playbook. We’ll set up a testing playbook that installs Nginx and sets up an index.html page on the remote server. This file will be copied from the Ansible control node to the remote nodes in your inventory file.

      Create a new file called playbook.yml in the same directory as your inventory file. If you followed our guide on how to create inventory files, this should be a folder called ansible inside your home directory:

      • cd ~/ansible
      • nano playbook.yml

      The following playbook has a single play and runs on all hosts from your inventory file by default. This is defined by the hosts: all directive at the beginning of the file. The become directive is then used to indicate that the following tasks must be executed with superuser privileges (root by default).

      It defines two tasks: one to install required system packages, and another to copy an index.html file to the remote host, saving it in Nginx’s default document root location, /var/www/html. Each task has tags, which can be used to control the playbook’s execution.

      Copy the following content to your playbook.yml file:


      ---
      - hosts: all
        become: true
        tasks:
          - name: Install Packages
            apt: name={{ item }} update_cache=yes state=latest
            loop: [ 'nginx', 'vim' ]
            tags: [ 'setup' ]

          - name: Copy index page
            copy:
              src: index.html
              dest: /var/www/html/index.html
              owner: www-data
              group: www-data
              mode: '0644'
            tags: [ 'update', 'sync' ]

      Save and close the file when you’re done. Then, create a new index.html file in the same directory, and place the following content in it:


      <html>
          <head>
              <title>Testing Ansible Playbooks</title>
          </head>
          <body>
              <h1>Testing Ansible Playbooks</h1>
              <p>This server was set up using an Nginx playbook.</p>
          </body>
      </html>

      Don’t forget to save and close the file.
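Before executing the playbook against your hosts, you can optionally ask Ansible to validate it without running any tasks, using the --syntax-check flag. This parses the playbook locally and reports YAML or structural errors:

```shell
# Parse the playbook and report syntax errors without connecting
# to any hosts or executing tasks.
ansible-playbook playbook.yml --syntax-check
```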

      Executing a Playbook

      To execute the testing playbook on all servers listed within your inventory file, which we’ll refer to as inventory throughout this guide, you may use the following command:

      • ansible-playbook -i inventory playbook.yml

      This will use the current system user as the remote SSH user, and the current system user’s SSH key to authenticate to the nodes. In case those aren’t the correct credentials to access the servers, you’ll need to include a few other parameters in the command, such as -u to define the remote user or --private-key to define the correct SSH keypair you want to use to connect. If your remote user requires a password for running commands with sudo, you’ll need to provide the -K option so that Ansible prompts you for the sudo password.

      More information about connection options is available in our Ansible Cheatsheet guide.
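For instance, a hypothetical invocation combining those connection options might look like the following, where sammy and the key path are placeholders for your own remote user and private key:

```shell
# Connect as the remote user "sammy" with an explicit private key,
# prompting for the sudo password (-K) before running the play.
ansible-playbook -i inventory playbook.yml \
  -u sammy \
  --private-key ~/.ssh/id_rsa \
  -K
```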

      Listing Playbook Tasks

      In case you’d like to list all tasks contained in a playbook, without executing any of them, you may use the --list-tasks argument:

      • ansible-playbook -i inventory playbook.yml --list-tasks


      playbook: playbook.yml

        play #1 (all): all	TAGS: []
          tasks:
            Install Packages	TAGS: [setup]
            Copy index page	TAGS: [sync, update]

      Tasks often have tags that allow you to have extended control over a playbook’s execution. To list current available tags in a playbook, you can use the --list-tags argument as follows:

      • ansible-playbook -i inventory playbook.yml --list-tags


      playbook: playbook.yml

        play #1 (all): all	TAGS: []
            TASK TAGS: [setup, sync, update]

      Executing Tasks by Tag

      To only execute tasks that are marked with specific tags, you can use the --tags argument, along with the tags that you want to trigger:

      • ansible-playbook -i inventory playbook.yml --tags=setup
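The --tags argument also accepts a comma-separated list. In the test playbook, a run like the following would trigger only the "Copy index page" task, since that is the task carrying the update and sync tags:

```shell
# Run only tasks tagged "update" or "sync", skipping the
# "setup"-tagged package installation task.
ansible-playbook -i inventory playbook.yml --tags=update,sync
```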

      Skipping Tasks by Tag

      To skip tasks that are marked with certain tags, you may use the --skip-tags argument, along with the names of tags that you want to exclude from execution:

      • ansible-playbook -i inventory playbook.yml --skip-tags=setup

      Starting Execution at Specific Task

      Another way to control the execution flow of a playbook is by starting the play at a certain task. This is useful when a playbook execution finishes prematurely, in which case you may want to resume from the task that failed rather than starting over. The task is referenced by its name, exactly as it appears in the playbook:

      • ansible-playbook -i inventory playbook.yml --start-at-task="Copy index page"

      Limiting Targets for Execution

      Many playbooks set up their target as all by default, and sometimes you want to limit execution to a single group or server. You can use -l (limit) to set the target group or server for that play:

      • ansible-playbook -l dev -i inventory playbook.yml
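The -l flag accepts a host name as well as a group name. As a sketch, assuming your inventory defines a host named server1, you could limit the run to just that machine:

```shell
# Limit the play to a single host from the inventory file
# (server1 is a placeholder for a host defined in your inventory).
ansible-playbook -l server1 -i inventory playbook.yml
```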

      Controlling Output Verbosity

      If you run into errors while executing Ansible playbooks, you can increase output verbosity in order to get more information about the problem you’re experiencing. You can do that by including the -v option in the command:

      • ansible-playbook -i inventory playbook.yml -v

      If you need more detail, you can use -vv or -vvv instead. If you’re unable to connect to the remote nodes, use -vvvv to obtain connection debugging information:

      • ansible-playbook -i inventory playbook.yml -vvvv


      In this guide, you’ve learned how to execute Ansible playbooks to automate server setup. You’ve also seen how to obtain information about playbooks, how to manipulate a playbook’s execution flow using tags, and how to adjust output verbosity in order to obtain detailed debugging information in a play.
