How To Configure Keyfile Authentication for MongoDB Replica Sets on Ubuntu 20.04


      Introduction

      MongoDB, also known as Mongo, is an open-source document database used in many modern web applications. It is classified as a NoSQL database because it does not rely on the relational database model. Instead, it uses JSON-like documents with dynamic schemas. This means that, unlike relational databases, MongoDB does not require a predefined schema before you add data to a database.

      When you’re working with multiple distributed MongoDB instances, as in the case of a replica set or a sharded database architecture, it’s important to ensure that the communications between them are secure. One way to do this is through keyfile authentication. This involves creating a special file that essentially functions as a shared password for each member in the cluster.

      This tutorial outlines how to update an existing replica set to use keyfile authentication. The procedure involved in this guide will also ensure that the replica set doesn’t go through any downtime, so the data within the replica set will remain available for any clients or applications that need access to it.

      Prerequisites

      To complete this tutorial, you will need:

      • Three servers, each running Ubuntu 20.04. All three of these servers should have an administrative non-root user and a firewall configured with UFW. To set this up, follow our initial server setup guide for Ubuntu 20.04.
      • MongoDB installed on each of your Ubuntu servers. Follow our tutorial on How To Install MongoDB on Ubuntu 20.04, making sure to complete each step on each of your servers.
      • All three of your MongoDB installations configured as a replica set. Follow this tutorial on How To Configure a MongoDB Replica Set on Ubuntu 20.04 to set this up.
      • SSH keys generated for each server. In addition, you should ensure that each server has the other two servers’ public keys added to its authorized_keys file. This is to ensure that each machine can communicate with one another over SSH, which will make it easier to distribute the keyfile to each of them in Step 2. To set these up, follow our guide on How To Set Up SSH Keys on Ubuntu 20.04.

Please note that, for clarity, this guide will follow the conventions established in the prerequisite replica set tutorial and refer to the three servers as mongo0, mongo1, and mongo2. It will also assume that you’ve completed Step 1 of that guide and configured each server’s hosts file so that the following hostnames will resolve to the given server’s IP address:

Hostname                 Resolves to
mongo0.replset.member    mongo0
mongo1.replset.member    mongo1
mongo2.replset.member    mongo2

There are a few instances in this guide in which you must run a command or update a file on only one of these servers. In such cases, this guide will default to using mongo0 in its examples.

Any commands that must be run or file changes that must be made on multiple servers will be explicitly noted as such.

      About Keyfile Authentication

      In MongoDB, keyfile authentication relies on Salted Challenge Response Authentication Mechanism (SCRAM), the database system’s default authentication mechanism. SCRAM involves MongoDB reading and verifying credentials presented by a user against a combination of their username, password, and authentication database, all of which are known by the given MongoDB instance. This is the same mechanism used to authenticate users who supply a password when connecting to the database.

In keyfile authentication, the keyfile acts as a shared password for each member in the cluster. A keyfile must contain between 6 and 1024 characters. Keyfiles can only contain characters from the base64 set, and note that MongoDB strips whitespace characters when reading keys. Beginning in version 4.2 of Mongo, keyfiles can use YAML format, allowing you to share multiple keys in a single keyfile.
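To illustrate, here’s a minimal sketch of what a multi-key keyfile might look like in the YAML format. The two key strings below are shortened placeholders, not keys you should actually use; a keyfile holding a single bare key string, like the one you’ll generate later in this guide, is also valid:

- dRNtMY16UUd0B9RcDQHHcjGpCHkxxkio
- rAbf0SJiHdW6PDtbLh5sjTqqPR27GgXm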

      Warning: The Community version of MongoDB comes with two authentication methods that can help keep your database secure, keyfile authentication and x.509 authentication. For production deployments that employ replication, the MongoDB documentation recommends using x.509 authentication, and it describes keyfiles as “bare-minimum forms of security” that are “best suited for testing or development environments.”

      The process of obtaining and configuring x.509 certificates comes with a number of caveats and decisions that must be made on a case-by-case basis, meaning that this procedure is beyond the scope of a DigitalOcean tutorial. If you plan on using a replica set in a production environment, we strongly encourage you to review the official MongoDB documentation on x.509 authentication.

      If you plan on using your replica set for testing or development, you can proceed with following this tutorial to add a layer of security to your cluster.

      Step 1 — Creating a User Administrator

      When you enable authentication in MongoDB, it will also enable role-based access control for the replica set. Per the MongoDB documentation:

      MongoDB uses Role-Based Access Control (RBAC) to govern access to a MongoDB system. A user is granted one or more roles that determine the user’s access to database resources and operations.

      When access control is enabled on a MongoDB instance, it means that you won’t be able to access any of the resources on the system unless you’ve authenticated as a valid MongoDB user. Even then, you must authenticate as a user with the appropriate privileges to access a given resource.

      If you don’t create a user for your MongoDB system before enabling keyfile authentication (and, consequently, access control), you will not be locked out of your replica set. You can create a MongoDB user which you can use to authenticate to the set and, if necessary, create other users through Mongo’s localhost exception. This is a special exception MongoDB makes for configurations that have enabled access control but lack users. This exception only allows you to connect to the database on the localhost and then create a user in the admin database.

      However, relying on the localhost exception to create a MongoDB user after enabling authentication means that your replica set will go through a period of downtime, since the replicas will not be able to authenticate their connection until after you create a user. This step outlines how to create a user before enabling authentication to ensure that your replica set remains available. This user will have permissions to create other users on the database, giving you the freedom to create other users with whatever permissions they need in the future. In MongoDB, a user with such permissions is known as a user administrator.

      To begin, connect to the primary member of your replica set. If you aren’t sure which of your members is the primary, you can run the rs.status() method to identify it.

      Run the following mongo command from the bash prompt of any of the Ubuntu servers hosting a MongoDB instance in your replica set. This command’s --eval option instructs the mongo operation to not open up the shell interface environment that appears when you run mongo by itself and instead run the command or method, wrapped in single quotes, that follows the --eval argument:

      • mongo --eval 'rs.status()'

rs.status() returns a lot of information, but the relevant portion of the output is the "members" array. In the context of MongoDB, an array is a collection of documents held between a pair of square brackets ([ and ]).

      In the "members": array you’ll find a number of documents, each of which contains information about one of the members in your replica set. Within each of these member documents, find the "stateStr" field. The member whose "stateStr" value is "PRIMARY" is the primary member of your replica set. The following example shows a situation where mongo0 is the primary:

      Output

. . .
"members" : [
        {
                "_id" : 0,
                "name" : "mongo0.replset.member:27017",
                "health" : 1,
                "state" : 1,
                "stateStr" : "PRIMARY",
                . . .
        },
. . .

      Once you know which of your replica set members is the primary, SSH into the server hosting that instance. For demonstration purposes, this guide will continue to use examples in which mongo0 is the primary:

      • ssh sammy@mongo0_ip_address

      After logging into the server, connect to MongoDB by opening up the mongo shell environment:
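• mongo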

      When creating a user in MongoDB, you must create them within a specific database which will be used as their authentication database. The combination of the user’s name and their authentication database serve as a unique identifier for that user.

Certain administrative actions are only available to users whose authentication database is the admin database — a special privileged database included in every MongoDB installation — including the ability to create new users. Because the goal of this step is to create a user administrator that can create other users in the replica set, connect to the admin database so you can grant this user the appropriate privileges:
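• use admin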

      Output

      switched to db admin

      MongoDB comes installed with a number of JavaScript-based shell methods you can use to manage your database. One of these, the db.createUser method, is used to create new users in the database in which the method is run.

      Initiate the db.createUser method:
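db.createUser(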

      Note: Mongo won’t register the db.createUser method as complete until you enter a closing parenthesis. Until you do, the prompt will change from a greater than sign (>) to an ellipsis (...).

      This method requires you to specify a username and password for the user, as well as any roles you want the user to have. Recall that MongoDB stores its data in JSON-like documents; when you create a new user, all you’re doing is creating a document to hold the appropriate user data as individual fields.

      As with objects in JSON, documents in MongoDB begin and end with curly braces ({ and }). Enter an opening curly brace to begin the user document:
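{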

      Next, enter a user: field, with your desired username as the value in double quotes followed by a comma. The following example specifies the username UserAdminSammy, but you can enter whatever username you like:
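user: "UserAdminSammy",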

      Next, enter a pwd field with the passwordPrompt() method as its value. When you execute the db.createUser method, the passwordPrompt() method will provide a prompt for you to enter your password. This is more secure than the alternative, which is to type out your password in cleartext as you did for your username.

      Note: The passwordPrompt() method is only compatible with MongoDB versions 4.2 and newer. If you’re using an older version of Mongo, then you will have to write out your password in cleartext, similarly to how you wrote out your username:
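pwd: "password",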

      Be sure to follow this field with a comma as well:
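pwd: passwordPrompt(),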

      Then enter a roles field followed by an array detailing the roles you want your administrative user to have. In MongoDB, roles define what actions the user can perform on the resources that they have access to. You can define custom roles yourself, but Mongo also comes with a number of built-in roles that grant commonly-needed permissions.

      Because you’re creating a user administrator, at a minimum you should grant them the built-in userAdminAnyDatabase role over the admin database. This will allow the user administrator to create and modify new users and roles. Because the administrative user has this role in the admin database, this will also grant it superuser access to the entire cluster:

      • roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]

      Following that, enter a closing brace to signify the end of the document:
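}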

      Then enter a closing parenthesis to close and execute the db.createUser method:
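)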

      All together, here’s what your db.createUser method should look like:

      > db.createUser(
      ... {
      ... user: "UserAdminSammy",
      ... pwd: passwordPrompt(),
      ... roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
      ... }
      ... )
      

      If each line’s syntax is correct, the method will execute properly and you’ll be prompted to enter a password:

      Output

      Enter password:

      Enter a strong password of your choosing. Then, you’ll receive a confirmation that the user was added:

      Output

Successfully added user: {
        "user" : "UserAdminSammy",
        "roles" : [
                {
                        "role" : "userAdminAnyDatabase",
                        "db" : "admin"
                }
        ]
}

      With that, you’ve added a MongoDB user profile which you can use to manage other users and roles on your system. You can test this out by creating another user, as outlined in the remainder of this step.

      Begin by authenticating as the user administrator you just created:

      • db.auth( "UserAdminSammy", passwordPrompt() )

      db.auth() will return 1 if authentication was successful:

      Output

      1

      Note: In the future, if you want to authenticate as the user administrator when connecting to the cluster, you can do so directly from your server prompt with a command like the following:

      • mongo -u "UserAdminSammy" -p --authenticationDatabase "admin"

      In this command, the -u option tells the shell that the following argument is the username which you want to authenticate as. The -p flag tells it to prompt you to enter a password, and the --authenticationDatabase option precedes the name of the user’s authentication database. If you enter an incorrect password or the username and authentication database do not match, you won’t be able to authenticate and you’ll have to try connecting again.

      Also, be aware that in order for you to create new users in the replica set as the user administrator, you must be connected to the set’s primary member.

      The procedure for adding another user is the same as it was for the user administrator. The following example creates a new user with the clusterAdmin role, which means they will be able to perform a number of operations related to replication and sharding. Within the context of MongoDB, a user with these privileges is known as a cluster administrator.

      Having a dedicated user to perform specific functions like this is a good security practice, as it limits the number of privileged users you have on your system. After you enable keyfile authentication later in this tutorial, any client that wants to perform any of the operations allowed by the clusterAdmin role — such as any of the rs. methods, like rs.status() or rs.conf() — must first authenticate as the cluster administrator.

      That said, you can provide whatever role you’d like to this user, and likewise provide them with a different name and authentication database. However, if you want the new user to function as a cluster administrator, then you must grant them the clusterAdmin role within the admin database.

      In addition to creating a user to serve as the cluster administrator, the following method names the user ClusterAdminSammy and uses the passwordPrompt() method to prompt you to enter a password:

      • db.createUser(
      • {
      • user: "ClusterAdminSammy",
      • pwd: passwordPrompt(),
      • roles: [ { role: "clusterAdmin", db: "admin" } ]
      • }
      • )

      Again, if you’re using a version of MongoDB that precedes version 4.2, then you will have to write out your password in cleartext instead of using the passwordPrompt() method.

      If each line’s syntax is correct, the method will execute properly and you’ll be prompted to enter a password:

      Output

      Enter password:

      Enter a strong password of your choosing. Then, you’ll receive a confirmation that the user was added:

      Output

Successfully added user: {
        "user" : "ClusterAdminSammy",
        "roles" : [
                {
                        "role" : "clusterAdmin",
                        "db" : "admin"
                }
        ]
}

      This output confirms that your user administrator is able to create new users and grant them roles. You can now close the MongoDB shell:
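• exit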

      Alternatively, you can close the shell by pressing CTRL + C.

      At this point, if you have any clients or applications connected to your MongoDB cluster, it would be a good time to create one or more dedicated users with the appropriate roles which they can use to authenticate to the database. Otherwise, read on to learn how to generate a keyfile, distribute it among the members of your replica set, and then configure each one to require the replica set members to authenticate with the keyfile.

      Step 2 — Creating and Distributing an Authentication Keyfile

      Before creating a keyfile, it can be helpful to create a directory on each server where you will store the keyfile in order to keep things organized. Run the following command, which creates a directory named mongo-security in the administrative Ubuntu user’s home directory, on each of your three servers:
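• mkdir ~/mongo-security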

      Then generate a keyfile on one of your servers. You can do this on any one of your servers but, for illustration purposes, this guide will generate the keyfile on mongo0.

      Navigate to the mongo-security directory you just created:
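• cd ~/mongo-security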

      Within that directory, create a keyfile with the following openssl command:

      • openssl rand -base64 768 > keyfile.txt

      Take note of this command’s arguments:

      • rand: instructs OpenSSL to generate pseudo-random bytes of data
      • -base64: specifies that the command should use base64 encoding to represent the pseudo-random data as printable text. This is important because, as mentioned previously, MongoDB keyfiles can only contain characters in the base64 set
      • 768: the number of bytes the command should generate. In base64 encoding, three binary bytes of data are represented as four characters. Because MongoDB keyfiles can have a maximum of 1024 characters, 768 is the maximum number of bytes you can generate for a valid keyfile

      Following this command’s 768 argument is a greater-than sign (>). This redirects the command’s output into a new file named keyfile.txt which will serve as your keyfile. Feel free to name the keyfile something other than keyfile.txt if you’d like, but be sure to change the filename whenever it appears in later commands.
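If you’d like to confirm that the generated key fits within the 1024-character limit, you can count its non-whitespace characters. Because MongoDB strips whitespace when reading keys, the newlines that openssl adds to wrap its output don’t count toward the limit:

• tr -d '\n' < keyfile.txt | wc -c

Output

1024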

      Next, modify the keyfile’s permissions so that only the owner has read access:
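• chmod 400 keyfile.txt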

      Following this, distribute the keyfile to the other two servers hosting the MongoDB instances in your replica set. Assuming you followed the prerequisite guide on How To Set Up SSH Keys, you can do so with the scp command:

      • scp keyfile.txt sammy@mongo1.replset.member:/home/sammy/mongo-security
      • scp keyfile.txt sammy@mongo2.replset.member:/home/sammy/mongo-security

      Notice that each of these commands copies the keyfile directly to the ~/mongo-security/ directories you created previously on mongo1 and mongo2. Be sure to change sammy to the name of the administrative Ubuntu user profile you created on each server.

      Next, change the file’s owner to the mongodb user profile. This is a special user that was created when you installed MongoDB, and it’s used to run the mongod service. This user must have access to the keyfile in order for MongoDB to use it for authentication.

      Run the following command on each of your servers to change the keyfile’s owner to the mongodb user account:

      • sudo chown mongodb:mongodb ~/mongo-security/keyfile.txt

      After changing the keyfiles’ owner on each server, you’re ready to reconfigure each of your MongoDB instances to enforce keyfile authentication.

      Step 3 — Enabling Keyfile Authentication

      Now that you’ve generated a keyfile and distributed it to each of the servers in your replica set, you can update the MongoDB configuration file on each server to enforce keyfile authentication.

      In order to avoid any downtime while configuring the members of your replica set to require authentication, this step involves reconfiguring the secondary members of the set first. Then, you’ll direct your primary member to step down and become a secondary member. This will cause the secondary members to hold an election to select a new primary, keeping your cluster available to whatever clients or applications need access to it. You’ll then reconfigure the former primary node to enable authentication.

      On each of your servers hosting a secondary member of your replica set, open up MongoDB’s configuration file with your preferred text editor:

      • sudo nano /etc/mongod.conf

      Within the file, find the security section. It will look like this by default:

      /etc/mongod.conf

      . . .
      #security:
      . . .
      

      Uncomment this line by removing the pound sign (#). Then, on the next line, add a keyFile: directive followed by the full path to the keyfile you created in the previous step:

      /etc/mongod.conf

      . . .
      security:
        keyFile: /home/sammy/mongo-security/keyfile.txt
      . . .
      

      Note that there are two spaces at the beginning of this new line. These are necessary for the configuration file to be read correctly. When you enter this line in your own configuration files, make sure that the path you provide reflects the actual path of the keyfile on each server.

      Below the keyFile directive, add a transitionToAuth directive with a value of true. When set to true, this configuration option allows the MongoDB instance to accept both authenticated and non-authenticated connections. This is useful when reconfiguring a replica set to enforce authentication, as it will ensure that your data remains available as you restart each member of the set:

      /etc/mongod.conf

      . . .
      security:
        keyFile: /home/sammy/mongo-security/keyfile.txt
        transitionToAuth: true
      . . .
      

Again, make sure that you include two spaces at the beginning of the transitionToAuth line.

      After making those changes, save and close the file. If you used nano to edit it, you can do so by pressing CTRL + X, Y, and then ENTER.

      Then restart the mongod service on both of the secondary instances’ servers to immediately put these changes into effect:

      • sudo systemctl restart mongod

      With that, you’ve configured keyfile authentication for the secondary members of your replica set. At this point, both authenticated and non-authenticated users can access these members without restriction.

      Next, you’ll repeat this procedure on the primary member. Before doing so, though, you must step down the member so it’s no longer the primary. To do this, open up the MongoDB shell on the server hosting the primary member. For illustration purposes, this guide will again assume this is mongo0:
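• mongo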

      From the prompt, run the rs.stepDown() method. This will instruct the primary to become a secondary member, and will cause the current secondary members to hold an election to determine which will serve as the new primary:
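• rs.stepDown()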

      If the method returns "ok" : 1 in the output, it means the primary member successfully stepped down to become a secondary:

      Output

{
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1614795467, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1614795467, 1)
}

      After stepping down the primary, you can close the Mongo shell:
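• exit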

      Next, open up the MongoDB configuration file on this server:

      • sudo nano /etc/mongod.conf

      Find the security section and uncomment the security header by removing the pound sign. Then add the same keyFile and transitionToAuth directives you added to the other MongoDB instances. After making these changes, the security section will look like this:

      /etc/mongod.conf

      . . .
      security:
        keyFile: /home/sammy/mongo-security/keyfile.txt
        transitionToAuth: true
      . . .
      

      Again, make sure that the file path after the keyFile directive reflects the keyfile’s actual location on this server.

      When finished, save and close the file. Then restart the mongod process:

      • sudo systemctl restart mongod

      Following that, all of your MongoDB instances are able to accept both authenticated and non-authenticated connections. In the final step of this guide, you’ll configure your instances to require users to authenticate before performing privileged actions.

      Step 4 — Restarting Each Member Without transitionToAuth to Enforce Authentication

At this point, each of your MongoDB instances is configured with transitionToAuth set to true. This means that even though you’ve enabled each server to use the keyfile you created to authenticate connections internally, they’re still able to accept non-authenticated connections.

      To change this and require each member to enforce authentication, reopen the mongod.conf file on each server:

      • sudo nano /etc/mongod.conf

Find the security section and disable the transitionToAuth directive by commenting its line out. You can do this by prepending it with a pound sign (#):

      /etc/mongod.conf

      . . .
      security:
        keyFile: /home/sammy/mongo-security/keyfile.txt
        #transitionToAuth: true
      . . .
      

      After disabling the transitionToAuth directive in each instance’s configuration file, save and close each file.

      Then, restart the mongod service on each server:

      • sudo systemctl restart mongod

      Following that, each of the MongoDB instances in your replica set will require you to authenticate to perform privileged actions.

      To test this, try running a MongoDB method that works when invoked by an authenticated user that has the appropriate privileges. Try running the following command from any of your Ubuntu servers’ prompts:

      • mongo --eval 'rs.status()'

Even though you ran this method successfully in Step 1, the rs.status() method can now only be run by a user that has been granted the clusterAdmin or clusterManager role, since you’ve enabled keyfile authentication. Regardless of whether you run this command on a server hosting the primary member or one of the secondary members, it will not work because you have not authenticated:

      Output

. . .
MongoDB server version: 4.4.4
{
        "operationTime" : Timestamp(1616184183, 1),
        "ok" : 0,
        "errmsg" : "command replSetGetStatus requires authentication",
        "code" : 13,
        "codeName" : "Unauthorized",
        "$clusterTime" : {
                "clusterTime" : Timestamp(1616184183, 1),
                "signature" : {
                        "hash" : BinData(0,"huJUmB/lrrxpx9YfnONM4mayJwo="),
                        "keyId" : NumberLong("6941116945081040899")
                }
        }
}

      Recall that, after enabling access control, all of the cluster administration methods (including rs. methods like rs.status()) will only work when invoked by an authenticated user that has been granted the appropriate cluster management roles. If you’ve created a cluster administrator — as outlined in Step 1 — and authenticate as that user, then this method will work as expected:

      • mongo -u "ClusterAdminSammy" -p --authenticationDatabase "admin" --eval 'rs.status()'

      After entering the user’s password when prompted, you will see the rs.status() method’s output:

      Output

. . .
MongoDB server version: 4.4.4
{
        "set" : "shard2",
        "date" : ISODate("2021-03-19T20:21:45.528Z"),
        "myState" : 2,
        "term" : NumberLong(4),
        "syncSourceHost" : "mongo1.replset.member:27017",
        "syncSourceId" : 2,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "majorityVoteCount" : 2,
. . .

      This confirms that the replica set is enforcing authentication, and that you’re able to authenticate successfully.

      Conclusion

      By completing this tutorial, you created a keyfile with OpenSSL and then configured a MongoDB replica set to require its members to use it for internal authentication. You also created a user administrator which will allow you to manage users and roles in the future. Throughout all of this, your replica set will not have gone through any downtime and your data will have remained available to your clients and applications.

      If you’d like to learn more about MongoDB, we encourage you to check out our entire library of MongoDB content.




      How To Install and Configure LXD on Ubuntu 20.04


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

A Linux container is a set of processes that is separated from the rest of the system. To the end user, a Linux container functions as a virtual machine, but it’s much more lightweight. You don’t have the overhead of running an additional Linux kernel, and the containers don’t require any CPU hardware virtualization support. This means you can create more containers than virtual machines on the same server.

      Imagine that you have a server that should run multiple web sites for your customers. On the one hand, each web site could be a virtual host/server block of the same instance of the Apache or Nginx web server. On the other hand, when using virtual machines, you would create a separate nested virtual machine for each website. Linux containers sit somewhere between virtual hosts and virtual machines.

      LXD lets you create and manage these containers. LXD provides a hypervisor service to manage the entire life cycle of containers. In this tutorial, you’ll configure LXD and use it to run Nginx in a container. You’ll then route traffic from the internet to the container to make a sample web page accessible.

      Prerequisites

      To complete this tutorial, you’ll need the following:
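• One Ubuntu 20.04 server with an administrative non-root user. You can set this up by following our initial server setup guide for Ubuntu 20.04.
• A block storage volume attached to the server, which this guide uses as the storage backend for LXD in Step 1.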

      Note: Starting from Ubuntu 20.04, LXD is available officially as a snap package. This is a new package format and it has several advantages. A snap package can be installed in any Linux distribution that supports snap packages. It is suggested to use a server with at least 2GB RAM when running the LXD snap package. The following table summarizes the features of the LXD snap package:

Feature                                        snap package
available LXD versions                         2.0, 3.0, 4.0, 4.x
memory requirements                            moderate, for snapd service; suggested server with 2GB RAM
upgrade considerations                         can defer LXD upgrade up to 60 days
ability to upgrade from the other package format   can upgrade from deb to snap

      Follow the rest of this tutorial to use LXD from the snap package in Ubuntu 20.04. If, however, you want to use the LXD deb package, see our tutorial How To Install and Use LXD on Ubuntu 18.04.

      Step 1 — Preparing Your Environment for LXD

      Before you configure and run LXD, you will prepare your server’s environment. This involves adding your sudo user to the lxd group and configuring your storage backend.

      Adding your non-root account to the lxd Unix group

When setting up your non-root account, add it to the lxd group using the following command. The adduser command takes as arguments the user account and the Unix group in order to add the user account to the existing Unix group:
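• sudo adduser sammy lxd

Here, sammy stands in for the name of your non-root account.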

      Now apply the new membership:
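• su - sammy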

      Enter your password and press ENTER.

      Finally, confirm that your user is now added to the lxd group:
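• id -nG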

      You will receive an output like this:
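Output

sammy sudo lxd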

      Now you are ready to continue configuring LXD.

      Preparing the storage backend

      To begin, you will configure the storage backend.

      The recommended storage backend for LXD when you run it on Ubuntu is the ZFS filesystem. ZFS also works very well with DigitalOcean Block Storage. To enable ZFS support in LXD, first update your package list and then install the zfsutils-linux auxiliary package:

      • sudo apt update
      • sudo apt install -y zfsutils-linux

      We are almost ready to run the LXD initialization script.

Before you do, you must identify and take note of the device name for your block storage.

      To do so, use ls to check the /dev/disk/by-id/ directory:
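• ls -l /dev/disk/by-id/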

      In this specific example, the full path of the device name is /dev/disk/by-id/scsi-0DO_Volume_volume-fra1-0:

      Output

total 0
lrwxrwxrwx 1 root root 9 Sep 16 20:30 scsi-0DO_Volume_volume-fra1-0 -> ../../sda

      Note down the full file path for your storage device. You will use it in the following step when you configure LXD.

      Step 2 — Initializing and Configuring LXD

      LXD is available as a snap package in Ubuntu 20.04. It comes pre-installed, but you must configure it.

      First, verify that the LXD snap package is installed. The command snap list shows installed snap packages:
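• snap list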

      Ubuntu 20.04 preinstalls LXD 4.0.3, and it is tracking the 4.0/stable channel. LXD 4.0 is supported for five years (until the year 2025). It will only receive security updates:

      Output of the "snap list" command — Listing the installed snap packages

Name    Version   Rev    Tracking       Publisher   Notes
core18  20200724  1885   latest/stable  canonical✓  base
lxd     4.0.3     16922  4.0/stable/…   canonical✓  -
snapd   2.45.3.1  8790   latest/stable  canonical✓  snapd

      To find more information about the LXD installed snap package, run snap info lxd. You will be able to see the available versions, including when the package was last updated.

      You will now configure LXD.

      Configuring Storage Options for LXD

      Start the LXD initialization process using the sudo lxd init command:
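• sudo lxd init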

      First, the program will ask if you want to enable LXD clustering. For the purposes of this tutorial, press ENTER to accept the default no, or type no and then press ENTER. LXD clustering is an advanced topic that enables high availability for your LXD setup and requires at least three LXD servers running in a cluster:

      Output

      Would you like to use LXD clustering? (yes/no) [default=no]: no

      The next six prompts deal with the storage pool. Give the following responses:

      • Press ENTER to configure a new storage pool.
      • Press ENTER to accept the default storage pool name.
      • Press ENTER to accept the default zfs storage backend.
      • Press ENTER to create a new ZFS pool.
      • Type yes to use an existing block device.
      • Lastly, type the full path to the block storage device name (This is what you recorded earlier. It should be something like: /dev/disk/by-id/device_name).

      Your answers will look like the following:

      Output

Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: default
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]: zfs
Create a new ZFS pool? (yes/no) [default=yes]: yes
Would you like to use an existing block device? (yes/no) [default=no]: yes
Path to the existing block device: /dev/disk/by-id/scsi-0DO_Volume_volume-fra1-0

      You have now configured the storage backend for LXD. Continuing with LXD’s init script, you will now configure some networking options.

      Configuring Networking Options for LXD

LXD now asks whether you want to connect to a MAAS (Metal as a Service) server. MAAS is software that makes a bare-metal server appear as, and be handled as if it were, a virtual machine.

      We are running LXD in standalone mode, therefore accept the default and answer no:

      Output

      Would you like to connect to a MAAS server? (yes/no) [default=no]: no

      You are then asked to configure a network bridge for LXD containers. This enables the following features:

      • Each container automatically gets a private IP address.
      • Each container can communicate with each other over the private network.
      • Each container can initiate connections to the internet.
      • Each container remains inaccessible from the internet by default; you cannot initiate a connection from the internet and reach a container unless you explicitly enable it. You’ll learn how to allow access to a specific container in the next step.

      When asked to create a new local network bridge, choose yes:

      Output

      Would you like to create a new local network bridge? (yes/no) [default=yes]: yes

      Then accept the default name, lxdbr0:

      Output

      What should the new bridge be called? [default=lxdbr0]: lxdbr0

      Accept the automated selection of private IP address range for the bridge:

      Output

What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: auto
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: auto

      Finally, LXD asks the following miscellaneous questions:

      When asked if you want to manage LXD over the network, press ENTER or answer no:

      Output

      Would you like LXD to be available over the network? (yes/no) [default=no]: no

      When asked if you want to update stale container images automatically, press ENTER or answer yes:

      Output

      Would you like stale cached images to be updated automatically? (yes/no) [default=yes] yes

When asked if you want to view and keep the YAML configuration you just created, answer yes if you do. Otherwise, press ENTER or answer no:

      Output

      Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: no

A script will run in the background. It is normal not to receive any output.

      You have now configured your network and storage options for LXD. Next you will create your first LXD container.

Step 3 — Creating and Configuring an LXD Container

      Now that you have successfully configured LXD, you are ready to create and manage your first container. In LXD, you manage containers using the lxc command followed by an action, such as list, launch, start, stop and delete.

      Use lxc list to view the available installed containers:
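• lxc list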

      Since this is the first time that the lxc command communicates with the LXD hypervisor, it shows some information about how to launch a container. Finally, the command shows an empty list of containers. This is expected because we haven’t created any yet:

Output of the "lxc list" command

To start your first container, try: lxc launch ubuntu:18.04

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

Now create a container that runs Nginx. To do so, first use the lxc launch command to create and start an Ubuntu 20.04 container named webserver.

Create the webserver container. The 20.04 in ubuntu:20.04 is a shortcut for Ubuntu 20.04. ubuntu: is the identifier for the preconfigured repository of LXD images. You could also use ubuntu:focal for the image name:

      • lxc launch ubuntu:20.04 webserver

      Note: You can find the full list of all available Ubuntu images by running lxc image list ubuntu: and other Linux distributions by running lxc image list images:. Both ubuntu: and images: are repositories of container images. For each container image, you can get more information with the command lxc image info ubuntu:20.04.

      Because this is the first time you’ve created a container, this command downloads the container image from the internet and caches it. You’ll see this output once your new container finishes downloading:

      Output

      Creating webserver Starting webserver

      With the webserver container started, use the lxc list command to show information about it. We added --columns ns4 in order to show only the columns for name, state and IPv4 address. The default lxc list command shows three more columns: the IPv6 address, whether the container is persistent or ephemeral, and whether there are snapshots available for each container:
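• lxc list --columns ns4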

The output shows a table with the name of each container, its current state, and its IPv4 address:

      Output

+-----------+---------+------------------------------------+
| NAME      | STATE   | IPV4                               |
+-----------+---------+------------------------------------+
| webserver | RUNNING | your_webserver_container_ip (eth0) |
+-----------+---------+------------------------------------+

      LXD’s DHCP server provides this IP address and in most cases it will remain the same even if the server is rebooted. However, in the following steps you will create iptables rules to forward connections from the internet to the container. Therefore, you should instruct LXD’s DHCP server to always give the same IP address to the container.

      The following set of commands will configure the container to obtain a static IP assignment. First, you will override the network configuration for the eth0 device that is inherited from the default LXD profile. This allows you to set a static IP address, which ensures proper communication of web traffic into and out of the container.

      Specifically, lxc config device is a command that performs the config action to configure a device. The first line has the sub-action override to override the device eth0 from the container webserver. The second line has the sub-action to set the ipv4.address field of the eth0 device of the webserver container to the IP address that was given by the DHCP server in the beginning.

      Run the first config command:

      • lxc config device override webserver eth0

      You will receive an output like this:

      Output

      Device eth0 overridden for webserver

      Now set the static IP:

      • lxc config device set webserver eth0 ipv4.address your_webserver_container_ip

      If the command is successful, you will receive no output.

      Restart the container:
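• lxc restart webserver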

      Now check the status of the container:
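• lxc list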

      You should see that the container is RUNNING and the IPV4 address is your static address.

      You are ready to install and configure Nginx inside the container.

Step 4 — Configuring Nginx Inside an LXD Container

      In this step you will connect to the webserver container and configure the web server.

Connect to the container with the lxc shell command, which takes the name of the container and starts a shell inside the container:
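• lxc shell webserver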

      Once inside the container, your shell prompt will look like the following:
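root@webserver:~#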

      This shell, even if it is a root shell, is limited to the container. Anything that you run in this shell stays in the container and cannot escape to the host server.

      Note: When getting a shell into a container, you may see a warning such as mesg: ttyname failed: No such device. This message is produced when the shell in the container tries to run the command mesg from the configuration file /root/.profile. You can safely ignore it. To avoid seeing it, you may remove the command mesg n || true from /root/.profile.

      Once inside your container, update the package list and install Nginx:

      • apt update
      • apt install nginx

      With Nginx installed, you will now edit the default Nginx web page. Specifically, you will add two lines of text so that it is clear that this site is hosted inside the webserver container.

      Using nano or your preferred editor, open the file /var/www/html/index.nginx-debian.html:

      • nano /var/www/html/index.nginx-debian.html

      Add the two highlighted phrases to the file:

      /var/www/html/index.nginx-debian.html

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx on LXD container webserver!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx on LXD container webserver!</h1>
<p>If you see this page, the nginx web server is successfully
installed and working. Further configuration is required.</p>
...

      You have edited the file in two places and specifically added the text on LXD container webserver. Save the file and exit your text editor.

      Now log out of the container:
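• logout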

      Once the server’s default prompt returns, use curl to test that the web server in the container is working. To do this, you’ll need the IP address of the web container, which you found using the lxc list command earlier.

      Use curl to test your web server:

      • curl http://your_webserver_container_ip

      You will receive the Nginx default HTML welcome page as output. Note that it includes your edits:

      Output

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx on LXD container webserver!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx on LXD container webserver!</h1>
<p>If you see this page, the nginx web server is successfully
installed and working. Further configuration is required.</p>
...

      The web server is working but you can only access it while on the host using the private IP. In the next step, you will route external requests to this container so the world can access your web site through the internet.

Step 5 — Forwarding Incoming Connections to the Nginx Container Using LXD

      Now that you have configured Nginx, it’s time to connect the webserver container to the internet. To begin, you need to set up the server to forward any connections that it may receive on port 80 to the webserver container. To do this, you’ll create an iptables rule to forward network connections. You can learn more about IPTables in our tutorials, How the IPtables Firewall Works and IPtables Essentials: Common Firewall Rules and Commands.

      This iptables command requires two IP addresses: the public IP address of the server (your_server_ip) and the private IP address of the webserver container (your_webserver_container_ip), which you can obtain using the lxc list command.

      Execute this command to create a new IPtables rule:

      • PORT=80 PUBLIC_IP=your_server_ip CONTAINER_IP=your_container_ip IFACE=eth0 sudo -E bash -c 'iptables -t nat -I PREROUTING -i $IFACE -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment "forward to the Nginx container"'

      Let’s study that command:

      • -t nat specifies that we’re using the nat table for address translation.
      • -I PREROUTING specifies that we’re adding the rule to the PREROUTING chain.
      • -i $IFACE specifies the interface eth0, which is the default public network interface on the host for Droplets.
      • -p TCP says we’re using the TCP protocol.
      • -d $PUBLIC_IP specifies the destination IP address for the rule.
• --dport $PORT specifies the destination port (such as 80).
      • -j DNAT says that we want to perform a jump to Destination NAT (DNAT).
      • --to-destination $CONTAINER_IP:$PORT says that we want the request to go to the IP address of the specific container and the destination port.

Note: You can reuse this command to set up forwarding rules for other containers. To do so, change the values of the PORT, PUBLIC_IP, CONTAINER_IP, and IFACE variables at the start of the command to match your new container.

      Now list your IPTables rules:

      • sudo iptables -t nat -L PREROUTING

      You’ll see output like this:

      Output

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DNAT       tcp  --  anywhere             your_server_ip       tcp dpt:http /* forward to the Nginx container */ to:your_container_ip:80
...

Now test that the web server is accessible from the internet.

      Use the curl command from your local machine to test the connections:

      • curl --verbose 'http://your_server_ip'

      You’ll see the headers followed by the contents of the web page you created in the container:

      Output

*   Trying your_server_ip...
* Connected to your_server_ip (your_server_ip) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.10.0 (Ubuntu)
...
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx on LXD container webserver!</title>
<style>
    body {
...

      This confirms that the requests are going to the container.

      Finally, you will save the firewall rule so that it reapplies after a reboot.

      To do so, first install the iptables-persistent package:

      • sudo apt install iptables-persistent

      When installing the package, the application will prompt you to save the current firewall rules. Accept and save all current rules.

      When you reboot your machine, the firewall rule will load. In addition, the Nginx service in your LXD container will automatically restart.

You’ve successfully configured LXD. In the final step you will learn how to stop and remove the container.

Step 6 — Stopping and Removing Containers Using LXD

      You may decide that you want to take down the container and delete it. In this step you will stop and remove your container.

      First, stop the container:
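• lxc stop webserver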

      Use the lxc list command to verify the status:
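• lxc list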

      You will see that the container’s state reads STOPPED:

      Output

+-----------+---------+------+------+------------+-----------+
| NAME      | STATE   | IPV4 | IPV6 | TYPE       | SNAPSHOTS |
+-----------+---------+------+------+------------+-----------+
| webserver | STOPPED |      |      | PERSISTENT | 0         |
+-----------+---------+------+------+------------+-----------+

      To remove the container, use lxc delete:
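• lxc delete webserver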

      Running lxc list again shows that there’s no container running:
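• lxc list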

      The command will output the following:

      +------+-------+------+------+------+-----------+
      | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
      +------+-------+------+------+------+-----------+
      

      Use the lxc help command to see additional options.

      To remove the firewall rule that routes traffic to the container, first locate the rule in the list of rules with this command, which associates a line number with each rule:

      • sudo iptables -t nat -L PREROUTING --line-numbers

      You’ll see your rule, prefixed with a line number, like this:

      Output

Chain PREROUTING (policy ACCEPT)
num  target     prot opt source               destination
1    DNAT       tcp  --  anywhere             your_server_ip       tcp dpt:http /* forward to the Nginx container */ to:your_container_ip

      Use that line number to remove the rule:

      • sudo iptables -t nat -D PREROUTING 1

      List the rules again to ensure removal:

      • sudo iptables -t nat -L PREROUTING --line-numbers

      The rule is removed:

      Output

Chain PREROUTING (policy ACCEPT)
num  target     prot opt source               destination

      Now save the changes so that the rule doesn’t come back when you restart your server:

      • sudo netfilter-persistent save

      You can now bring up another container with your own settings and add a new firewall rule to forward traffic to it.

      Conclusion

In this tutorial, you installed and configured LXD. You then created a website using Nginx running inside an LXD container and made it publicly available using IPtables.

      From here, you could configure more websites, each confined to its own container, and use a reverse proxy to direct traffic to the appropriate container. The tutorial How to Host Multiple Web Sites with Nginx and HAProxy Using LXD on Ubuntu 16.04 walks you through that setup.

      See the LXD reference documentation for more information on how to use LXD.

      To practice with LXD, you can try LXD online and follow the web-based tutorial.

      To get user support on LXD, visit the LXD discussion forum.




      How To Configure WebDAV Access with Apache on Ubuntu 20.04


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

WebDAV is an extension of the HTTP protocol that allows users to manage files on remote servers. There are many ways to use a WebDAV server. For example, you can share Word or Excel documents with your colleagues by uploading them to your WebDAV server, or share your music collection with your family and friends by simply giving them a URL. All of this can be achieved without anyone installing additional software, as everything is built right into their operating system.

      In this article, you’ll configure an Apache web server to enable WebDAV access from Windows, Mac, and Linux with SSL and password authentication.

      Prerequisites

Before you begin this guide, you will need an Ubuntu 20.04 server with a sudo-enabled non-root user, the Apache web server installed, and a domain name secured with an SSL certificate, for example one obtained with Certbot.

      WebDAV requires very few server resources, so any sized virtual machine will be enough to get your WebDAV server up and running.

      Log in to your server as the sudo-enabled, non-root user to start the first step.

      Step 1 — Enabling the WebDAV Apache Modules

The Apache web server provides a lot of functionality as optional modules. You can enable and disable these modules to add and remove their functionality from Apache. Its WebDAV functionality is included in a module that you installed along with Apache, but it is not enabled by default.

You enable the WebDAV module for Apache using the a2enmod utility. The following two commands will enable the WebDAV modules:

      • sudo a2enmod dav
      • sudo a2enmod dav_fs

      Now, restart Apache to load the new modules:

      • sudo systemctl restart apache2.service

      The WebDAV module is now loaded and running. In the next step, you will configure Apache to serve your files via WebDAV.

      Step 2 — Configuring Apache

      In this step, you will create all the configurations that Apache needs to implement a WebDAV server.

      First, create the WebDAV root folder at /var/www/webdav that will hold the files you want to make available over WebDAV:

      • sudo mkdir /var/www/webdav

      Then, set Apache’s user, www-data, to be the owner of the WebDAV directory:

      • sudo chown www-data:www-data /var/www/webdav

      Next, you need to create a location for the database file that Apache uses to manage and lock the files that WebDAV users are accessing. This file needs to be readable and writable by Apache, but must not be available from the website as this can leak sensitive information.

      Create a new directory with the mkdir utility for the database file at /usr/local/apache/var/:

      • sudo mkdir -p /usr/local/apache/var/

      The -p option tells the mkdir utility to create all the directories in the path you specified if they don’t exist.

      Next, set the owner and group of the new directory to Apache’s user and group with the chown utility:

      • sudo chown www-data:www-data /usr/local/apache/var

      Now, you need to edit the VirtualHost file that holds the Apache configuration about your domain name. This file is located in /etc/apache2/sites-enabled/ and ends in le-ssl.conf if you used Certbot to register the SSL certificate.

      Open the VirtualHost file with a text editor:

      • sudo nano /etc/apache2/sites-enabled/your_domain-le-ssl.conf

      On the first line, add the DavLockDB directive:

      /etc/apache2/sites-enabled/your_domain-le-ssl.conf

      DavLockDB /usr/local/apache/var/DavLock
      . . .
      

      Next, add the following Alias and Directory directives inside the <VirtualHost> tags following all the other directives:

      /etc/apache2/sites-enabled/your_domain-le-ssl.conf

      . . .
      Alias /webdav /var/www/webdav
      
      <Directory /var/www/webdav>
          DAV On
      </Directory>
      

      The Alias directive maps requests to http://your.server/webdav to the /var/www/webdav folder.

      The Directory directive tells Apache to enable WebDAV for the /var/www/webdav folder. You can find out more about mod_dav from the Apache docs.

      Your final VirtualHost file will be as follows, which includes the DavLockDB, Alias, and Directory directives in the correct locations:

      /etc/apache2/sites-enabled/your_domain-le-ssl.conf

      DavLockDB /usr/local/apache/var/DavLock
      <IfModule mod_ssl.c>
      <VirtualHost *:443>
              ServerAdmin admin@your_domain
              ServerName your_domain
              ServerAlias your_domain
              DocumentRoot /var/www/your_domain
              ErrorLog ${APACHE_LOG_DIR}/error.log
              CustomLog ${APACHE_LOG_DIR}/access.log combined
      
              SSLCertificateFile /etc/letsencrypt/live/your_domain/fullchain.pem
              SSLCertificateKeyFile /etc/letsencrypt/live/your_domain/privkey.pem
              Include /etc/letsencrypt/options-ssl-apache.conf
      
              Alias /webdav /var/www/webdav
      
              <Directory /var/www/webdav>
                  DAV On
              </Directory>
      
      </VirtualHost>
      </IfModule>
      

      If you make any syntax errors while editing Apache’s configuration, Apache will refuse to start. It’s good practice to check your configuration before restarting Apache.

      Use the apachectl utility to check the configuration:

      • sudo apachectl configtest

      If your configuration is error-free, apachectl will print Syntax OK. When you receive this message, it is safe to restart Apache to load the new configuration:

      • sudo systemctl restart apache2.service
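
      As an optional check, you can confirm that Apache is advertising WebDAV support by sending an OPTIONS request to the new location. This assumes curl is installed on the machine you are testing from (sudo apt install curl if it is not):

      • curl -i -X OPTIONS https://your_domain/webdav/

      Look for a header such as DAV: 1,2 in the response, which indicates that mod_dav is active for this path.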

      You’ve now configured Apache as a WebDAV server that serves files from /var/www/webdav. However, you don’t yet have authentication configured or enabled, so anyone who can reach your server will be able to read, write, and edit your files. In the next section, you will enable and configure WebDAV authentication.

      Step 3 — Adding Authentication to WebDAV

      The authentication method that you will use is called digest authentication. Digest authentication is more secure than the alternative, basic authentication, because it never sends passwords over the network in cleartext, and it is stronger still when coupled with HTTPS.

      Digest authentication works with a file that stores the usernames and hashed passwords of the users that are allowed to access the WebDAV server. Just as with the DavLockDB file, the digest file needs to be stored in a location that Apache can read and write to but that cannot be served from your website.

      As you already created /usr/local/apache/var/ for this purpose, you will place the digest file there as well.

      First, create an empty file called users.password at /usr/local/apache/var/ with the touch utility:

      • sudo touch /usr/local/apache/var/users.password

      Then change the owner and group to www-data so Apache can read and write to it:

      • sudo chown www-data:www-data /usr/local/apache/var/users.password

      New users are added to WebDAV using the htdigest utility. The following command adds the user sammy:

      • sudo htdigest /usr/local/apache/var/users.password webdav sammy

      The webdav in this command is the realm and should be thought of as the group you are adding the new user to. It is also the text displayed to users as they enter their username and password when they access your WebDAV server. You can choose whatever realm best describes your use case.

      htdigest will prompt you to enter a password and confirm it when you run it:

      Output

      Adding user sammy in realm webdav
      New password:
      Re-type new password:
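
      Each user that htdigest adds becomes a single line in users.password with the form username:realm:hash, where the hash is an MD5 digest derived from the username, realm, and password. An entry for sammy would look something like the following (the hash shown here is an illustrative placeholder, not a real value):

      /usr/local/apache/var/users.password

      sammy:webdav:5d41402abc4b2a76b9719d911017c592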

      Next, you’ll tell Apache to require authentication for WebDAV access and to use the users.password file.

      Open your VirtualHost file:

      • sudo nano /etc/apache2/sites-enabled/your_domain-le-ssl.conf

      Then, add the following lines inside the Directory directive block:

      /etc/apache2/sites-enabled/your_domain-le-ssl.conf

      AuthType Digest
      AuthName "webdav"
      AuthUserFile /usr/local/apache/var/users.password
      Require valid-user
      

      These directives do the following:

      • AuthType Digest: Use the digest authentication method.
      • AuthName "webdav": Only allow users from the webdav realm.
      • AuthUserFile /usr/local/apache/var/users.password: Use the usernames and passwords contained in /usr/local/apache/var/users.password.
      • Require valid-user: Allow access to any user listed in the users.password file that supplied the correct password.

      Your <Directory> directive will be as follows:

      /etc/apache2/sites-enabled/your_domain-le-ssl.conf

      <Directory /var/www/webdav>
        DAV On
        AuthType Digest
        AuthName "webdav"
        AuthUserFile /usr/local/apache/var/users.password
        Require valid-user
      </Directory>
      

      Next, enable the auth_digest Apache module so that Apache knows how to use the digest authentication method:

      • sudo a2enmod auth_digest

      Finally, restart Apache to load all the new configuration:

      • sudo systemctl restart apache2.service
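
      If you’d like to verify the authentication from the command line before moving on, you can make a WebDAV PROPFIND request with curl using digest credentials. This optional check assumes curl is installed:

      • curl --digest --user sammy -X PROPFIND https://your_domain/webdav/ -H "Depth: 0"

      curl will prompt you for sammy’s password. A request with valid credentials returns an XML multi-status response describing the directory, while a request without credentials is rejected with a 401 Unauthorized error.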

      You’ve now configured your WebDAV server to use HTTPS and digest authentication. It is ready to start serving files to your users. In the next section, you’ll access the WebDAV server from Windows, Linux, or macOS.

      Step 4 — Accessing WebDAV

      In this step, you’ll access a WebDAV server with the native file browsers of macOS, Windows, and Linux (KDE and GNOME).

      Before you start accessing your WebDAV server, you should put a file into the WebDAV folder so that you have something to test with.

      Open a new file with a text editor:

      • sudo nano /var/www/webdav/webdav-testfile.txt

      Add some text then save and exit. Now, set the owner and group of this file to www-data:

      • sudo chown www-data:www-data /var/www/webdav/webdav-testfile.txt
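
      Optionally, you can confirm that the file is reachable over WebDAV with curl before trying the graphical clients, using the sammy user you created in Step 3 (this assumes curl is installed):

      • curl --digest --user sammy https://your_domain/webdav/webdav-testfile.txt

      If everything is working, curl prints the text you added to the file.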

      You are now ready to start accessing and testing your WebDAV server.
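
      The sections below use graphical file browsers, but if you prefer the command line, you can also connect with a dedicated WebDAV client such as cadaver. cadaver is not part of this tutorial’s setup, so you would need to install it first:

      • sudo apt install cadaver
      • cadaver https://your_domain/webdav/

      cadaver prompts for your username and password and then provides an FTP-like shell with commands such as ls, get, and put.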

      Linux KDE

      First, open the KDE Dolphin file manager. Then edit the address bar with a URL that has the following form:

      webdavs://your_domain/webdav
      

      Image showing the WebDAV link in the Dolphin address bar

      When you hit ENTER you will be prompted to enter a username and password.

      Image showing the username and password dialog box

      Check the Remember password option if you want Dolphin to retain your password. Then click OK to continue. It will now present you with the contents of the /var/www/webdav/ directory, which you can manipulate as if they were on your local system.

      Bookmark your WebDAV server by grabbing the folder icon in the address bar and dragging it under the Remote section in the left-hand navigation panel.

      Image showing the WebDAV server in the Dolphin Remote locations

      Linux GNOME

      First, open the Files application by clicking on its icon on the left-hand side of the desktop.

      Image showing the Files icon

      When Files opens do the following:

      1. Click on + Other Locations.
      2. Enter the URL of your WebDAV instance with the following form:
      davs://your_domain/webdav
      

      Image showing the Files application

      Then, click on Connect. It will then prompt you with a username and password dialog box.

      Image showing the username and password dialog

      Enter your username and password then click Connect to log in to your WebDAV server. Check the Remember forever option if you do not want to enter your password every time you access your files.

      Your WebDAV folder will now be available in Files where you can manage your files:

      Image showing the WebDAV server in the Files application

      macOS

      First, open the Finder application. Next, click on the Go menu and then on Connect to Server.

      Image showing the Go menu in the Finder application

      You will now find a new dialog box where you enter the URL of the WebDAV server. This URL must have the following form:

      https://your_domain/webdav
      

      Image showing the URL entry dialog box

      Click on the Connect button to continue. It will prompt you to enter a username and password.

      Image showing the username and password dialog

      Click on Connect to complete adding your WebDAV server to your system.

      You will now find your WebDAV server in Finder under the Locations section.

      Image showing the WebDAV share in Finder

      Windows

      First, from the Start Menu, open the File Explorer application. When it opens, select This PC from the left-hand navigation panel.

      Image showing This PC in the navigation panel

      Next, click on the Map network drive icon in the top navigation bar.

      Image showing the Map network drive icon in top navigation panel

      Enter the URL of your WebDAV server with a URL of the following form:

      https://your_domain/webdav
      

      Image showing the URL entry dialog

      Click Finish to connect to your WebDAV server. It will prompt you to enter a username and password.

      Image showing username and password entry dialog

      Enter your username and password and click OK to log in to your server. Check the Remember my credentials option if you do not want to enter your password every time you access your files.

      Your WebDAV server will now appear as a location under the This PC section of the File Explorer left-hand navigation panel.

      Image showing the WebDAV share in File Explorer

      Conclusion

      You have now set up and configured a secure WebDAV server to serve files to your users. No matter what operating system your users run on their local systems, they will be able to access and manage the files on your WebDAV server.


