

      Recommended Steps to Secure a DigitalOcean Kubernetes Cluster

      The author selected Open Sourcing Mental Illness to receive a donation as part of the Write for DOnations program.


      Kubernetes, the open-source container orchestration platform, is steadily becoming the preferred solution for automating, scaling, and managing high-availability clusters. As a result of its increasing popularity, Kubernetes security has become more and more relevant.

      Considering the moving parts involved in Kubernetes and the variety of deployment scenarios, securing Kubernetes can sometimes be complex. Because of this, the objective of this article is to provide a solid security foundation for a DigitalOcean Kubernetes (DOKS) cluster. Note that this tutorial covers basic security measures for Kubernetes, and is meant to be a starting point rather than an exhaustive guide. For additional steps, see the official Kubernetes documentation.

      In this guide, you will take basic steps to secure your DigitalOcean Kubernetes cluster. You will configure secure local authentication with TLS/SSL certificates, grant permissions to local users with role-based access control (RBAC), grant permissions to Kubernetes applications and deployments with service accounts, and set up resource limits with the ResourceQuota and LimitRange admission controllers.


      In order to complete this tutorial you will need:

      • A DigitalOcean Kubernetes (DOKS) managed cluster with 3 Standard nodes configured with at least 2 GB RAM and 1 vCPU each. For detailed instructions on how to create a DOKS cluster, read our Kubernetes Quickstart guide. This tutorial uses DOKS version 1.16.2-do.1.
      • A local client configured to manage the DOKS cluster, with a cluster configuration file downloaded from the DigitalOcean Control Panel and saved as ~/.kube/config. For detailed instructions on how to configure remote DOKS management, read our guide How to Connect to a DigitalOcean Kubernetes Cluster. In particular, you will need:
        • The kubectl command-line interface installed on your local machine. You can read more about installing and configuring kubectl in its official documentation. This tutorial will use kubectl version 1.17.0-00.
        • The official DigitalOcean command-line tool, doctl. For instructions on how to install this, see the doctl GitHub page. This tutorial will use doctl version 1.36.0.

      Step 1 — Enabling Remote User Authentication

      After completing the prerequisites, you will end up with one Kubernetes superuser that authenticates through a predefined DigitalOcean bearer token. However, sharing those credentials is not a good security practice, since this account can cause large-scale and possibly destructive changes to your cluster. To mitigate this possibility, you can set up additional users to be authenticated from their respective local clients.

      In this section, you will authenticate new users to the remote DOKS cluster from local clients using secure SSL/TLS certificates. This will be a three-step process: First, you will create Certificate Signing Requests (CSR) for each user, then you will approve those certificates directly in the cluster through kubectl. Finally, you will build each user a kubeconfig file with the appropriate certificates. For more information regarding additional authentication methods supported by Kubernetes, refer to the Kubernetes authentication documentation.

      Creating Certificate Signing Requests for New Users

      Before starting, check the DOKS cluster connection from the local machine configured during the prerequisites:

      • kubectl cluster-info

      Depending on your configuration, the output will be similar to this one:


      Kubernetes master is running at https://your_cluster_endpoint
      CoreDNS is running at https://your_cluster_endpoint/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

      To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

      This means that you are connected to the DOKS cluster.

      Next, create a local folder for the client’s certificates. For the purpose of this guide, ~/certs will be used to store all certificates:

      • mkdir -p ~/certs

      In this tutorial, we will authorize a new user called sammy to access the cluster. Feel free to change this to a user of your choice. Using the SSL and TLS library OpenSSL, generate a new private key for your user using the following command:

      • openssl genrsa -out ~/certs/sammy.key 4096

      The -out flag will make the output file ~/certs/sammy.key, and 4096 sets the key as 4096-bit. For more information on OpenSSL, see our OpenSSL Essentials guide.
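
If you want to confirm the key was written correctly before moving on, OpenSSL can inspect it. This is an optional sanity check; a throwaway key is generated here so the commands can run anywhere, so substitute your own path (such as ~/certs/sammy.key) in practice:

```shell
# Optional sanity check: verify the key's consistency and confirm its size.
key=$(mktemp)
openssl genrsa -out "$key" 4096
openssl rsa -in "$key" -check -noout              # prints "RSA key ok"
openssl rsa -in "$key" -noout -text | head -n 1   # shows the bit length, e.g. "(4096 bit ...)"
```

The exact wording of the last line varies between OpenSSL versions, but the bit length is always shown.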

      Now, create a certificate signing request configuration file. Open the following file with a text editor (for this tutorial, we will use nano):

      • nano ~/certs/sammy.csr.cnf

      Add the following content into the sammy.csr.cnf file to specify in the subject the desired username as common name (CN), and the group as organization (O):


      [ req ]
      default_bits = 2048
      prompt = no
      default_md = sha256
      distinguished_name = dn
      req_extensions = v3_ext
      [ dn ]
      CN = sammy
      O = developers
      [ v3_ext ]
      authorityKeyIdentifier=keyid,issuer:always
      basicConstraints=CA:FALSE
      keyUsage=keyEncipherment,dataEncipherment
      extendedKeyUsage=serverAuth,clientAuth

      The certificate signing request configuration file contains all the necessary information: the user’s identity and the proper usage parameters. The last argument, extendedKeyUsage=serverAuth,clientAuth, will allow users to authenticate their local clients with the DOKS cluster using the certificate once it’s signed.

      Next, create the sammy certificate signing request:

      • openssl req -config ~/certs/sammy.csr.cnf -new -key ~/certs/sammy.key -nodes -out ~/certs/sammy.csr

      The -config flag lets you specify the configuration file for the CSR, and -new signals that you are creating a new CSR for the key specified by -key.

      You can check your certificate signing request by running the following command:

      • openssl req -in ~/certs/sammy.csr -noout -text

      Here you pass in the CSR with -in and use -text to print out the certificate request in text.

      The output will show the certificate request, the beginning of which will look like this:


      Certificate Request:
          Data:
              Version: 1 (0x0)
              Subject: CN = sammy, O = developers
              Subject Public Key Info:
                  Public Key Algorithm: rsaEncryption
                      RSA Public-Key: (4096 bit)
      ...

      Repeat the same procedure to create CSRs for any additional users. Once you have all certificate signing requests saved in the administrator’s ~/certs folder, proceed with the next step to approve them.
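
If you have several users to onboard, the key and CSR generation can be scripted. The sketch below uses hypothetical usernames and inlines the subject with -subj instead of a per-user configuration file; note that -subj only covers the CN and O fields, so to include the v3_ext extensions you would still point -config at a per-user copy of the .cnf file with the CN changed:

```shell
#!/bin/sh
# Sketch: batch-generate a private key and CSR per user.
# The usernames here are examples; "developers" is assumed as the shared group.
CERT_DIR="$HOME/certs"
mkdir -p "$CERT_DIR"

for user in sammy jesse casey; do
  # One 4096-bit private key per user
  openssl genrsa -out "$CERT_DIR/$user.key" 4096
  # CSR with the username as CN and the group as O, mirroring the .cnf layout
  openssl req -new -key "$CERT_DIR/$user.key" -nodes \
    -subj "/CN=$user/O=developers" \
    -out "$CERT_DIR/$user.csr"
done
```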

      Managing Certificate Signing Requests with the Kubernetes API

      You can either approve or deny TLS certificate requests sent to the Kubernetes API by using the kubectl command-line tool. This gives you the ability to ensure that the requested access is appropriate for the given user. In this section, you will send the certificate request for sammy and approve it.

      To send a CSR to the DOKS cluster use the following command:

      cat <<EOF | kubectl apply -f -
      apiVersion: certificates.k8s.io/v1beta1
      kind: CertificateSigningRequest
      metadata:
        name: sammy-authentication
      spec:
        groups:
        - system:authenticated
        request: $(cat ~/certs/sammy.csr | base64 | tr -d '\n')
        usages:
        - digital signature
        - key encipherment
        - server auth
        - client auth
      EOF

      Using a Bash here document, this command uses cat to pass the certificate request to kubectl apply.

      Let’s take a closer look at the certificate request:

      • name: sammy-authentication creates a metadata identifier, in this case called sammy-authentication.
      • request: $(cat ~/certs/sammy.csr | base64 | tr -d '\n') sends the sammy.csr certificate signing request to the cluster encoded as Base64, with the line breaks stripped.
      • server auth and client auth specify the intended usage of the certificate. In this case, the purpose is user authentication.
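
The tr -d '\n' part matters because base64 wraps long encoded output across multiple lines on most platforms, while the request: field must be a single continuous string. A quick way to see the difference locally:

```shell
# base64 wraps its encoded output into lines; tr -d '\n' strips the
# newlines so the encoded CSR becomes one continuous string.
head -c 100 /dev/zero | base64 | wc -l               # at least 1 line
head -c 100 /dev/zero | base64 | tr -d '\n' | wc -l  # 0: no newlines remain
```

The stripped form still decodes to the same bytes, which is why the API server accepts it.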

      The output will look similar to this:

      Output
      certificatesigningrequest.certificates.k8s.io/sammy-authentication created

      You can check the certificate signing request status using the command:

      • kubectl get csr

      Depending on your cluster configuration, the output will be similar to this:


      NAME                   AGE   REQUESTOR       CONDITION
      sammy-authentication   37s   your_DO_email   Pending

      Next, approve the CSR by using the command:

      • kubectl certificate approve sammy-authentication

      You will get a message confirming the operation:

      Output
      certificatesigningrequest.certificates.k8s.io/sammy-authentication approved

      Note: As an administrator you can also deny a CSR by using the command kubectl certificate deny sammy-authentication. For more information about managing TLS certificates, please read Kubernetes official documentation.

      Now that the CSR is approved, you can download the signed certificate to the local machine by running:

      • kubectl get csr sammy-authentication -o jsonpath='{.status.certificate}' | base64 --decode > ~/certs/sammy.crt

      This command decodes the Base64 certificate for proper usage by kubectl, then saves it as ~/certs/sammy.crt.

      With the sammy signed certificate in hand, you can now build the user’s kubeconfig file.

      Building Remote Users’ Kubeconfigs

      Next, you will create a specific kubeconfig file for the sammy user. This will give you more control over the user’s access to your cluster.

      The first step in building a new kubeconfig is making a copy of the current kubeconfig file. For the purpose of this guide, the new kubeconfig file will be called config-sammy:

      • cp ~/.kube/config ~/.kube/config-sammy

      Next, edit the new file:

      • nano ~/.kube/config-sammy

      Keep the first eight lines of this file, as they contain the necessary information for the SSL/TLS connection with the cluster. Then starting from the user parameter, replace the text with the following highlighted lines so that the file looks similar to the following:


      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: certificate_data
        name: do-nyc1-do-cluster
      contexts:
      - context:
          cluster: do-nyc1-do-cluster
          user: sammy
        name: do-nyc1-do-cluster
      current-context: do-nyc1-do-cluster
      kind: Config
      preferences: {}
      users:
      - name: sammy
        user:
          client-certificate: /home/your_local_user/certs/sammy.crt
          client-key: /home/your_local_user/certs/sammy.key

      Note: For both client-certificate and client-key, use the absolute path to their corresponding certificate location. Otherwise, kubectl will produce an error.

      Save and exit the file.

      You can test the new user connection using kubectl cluster-info:

      • kubectl --kubeconfig=/home/your_local_user/.kube/config-sammy cluster-info

      You will see an error similar to this:


      To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
      Error from server (Forbidden): services is forbidden: User "sammy" cannot list resource "services" in API group "" in the namespace "kube-system"

      This error is expected because the user sammy has no authorization to list any resource on the cluster yet. Granting authorization to users will be covered in the next step. For now, the output is confirming that the SSL/TLS connection was successful and the sammy authentication credentials were accepted by the Kubernetes API.

      Step 2 — Authorizing Users Through Role Based Access Control (RBAC)

      Once a user is authenticated, the API determines its permissions using the Kubernetes built-in Role Based Access Control (RBAC) model. RBAC is an effective method of restricting user rights based on the role assigned to them. From a security point of view, RBAC allows setting fine-grained permissions to limit users from accessing sensitive data or executing superuser-level commands. For more detailed information regarding user roles, refer to the Kubernetes RBAC documentation.

      In this step, you will use kubectl to assign the predefined role edit to the user sammy in the default namespace. In a production environment, you may want to use custom roles and/or custom role bindings.

      Granting Permissions

      In Kubernetes, granting permissions means assigning the desired role to a user. Assign edit permissions to the user sammy in the default namespace using the following command:

      • kubectl create rolebinding sammy-edit-role --clusterrole=edit --user=sammy --namespace=default

      This will give output similar to the following:

      Output
      rolebinding.rbac.authorization.k8s.io/sammy-edit-role created

      Let’s analyze this command in more detail:

      • create rolebinding sammy-edit-role creates a new role binding, in this case called sammy-edit-role.
      • --clusterrole=edit assigns the predefined role edit at a global scope (cluster role).
      • --user=sammy specifies what user to bind the role to.
      • --namespace=default grants the user role permissions within the specified namespace, in this case default.
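
For reference, the imperative command above corresponds roughly to this declarative manifest (a sketch using the standard rbac.authorization.k8s.io/v1 fields), which you could keep in version control and apply with kubectl apply -f:

```shell
# Write the declarative equivalent of
# `kubectl create rolebinding sammy-edit-role --clusterrole=edit --user=sammy --namespace=default`
cat <<'EOF' > sammy-edit-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sammy-edit-role
  namespace: default
subjects:
- kind: User
  name: sammy
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
EOF
```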

      Next, verify user permissions by checking whether sammy is allowed to get pods in the default namespace. RBAC authorization is working as expected if no errors are shown.

      • kubectl --kubeconfig=/home/your_local_user/.kube/config-sammy auth can-i get pods

      You will get the following output:

      Output
      yes
      Now that you have assigned permissions to sammy, you can practice revoking those permissions in the next section.

      Revoking Permissions

      Revoking permissions in Kubernetes is done by removing the user role binding.

      For this tutorial, delete the edit role from the user sammy by running the following command:

      • kubectl delete rolebinding sammy-edit-role

      You will get the following output:

      Output
      rolebinding.rbac.authorization.k8s.io "sammy-edit-role" deleted

      Verify that user permissions were revoked as expected by listing the default namespace pods:

      • kubectl --kubeconfig=/home/your_local_user/.kube/config-sammy --namespace=default get pods

      You will receive the following error:


      Error from server (Forbidden): pods is forbidden: User "sammy" cannot list resource "pods" in API group "" in the namespace "default"

      This shows that the authorization has been revoked.

      From a security standpoint, the Kubernetes authorization model gives cluster administrators the flexibility to change users’ rights on demand as required. Moreover, role-based access control is not limited to a physical user; you can also grant and revoke permissions for cluster services, as you will learn in the next section.

      For more information about RBAC authorization and how to create custom roles, please read the official documentation.

      Step 3 — Managing Application Permissions with Service Accounts

      As mentioned in the previous section, RBAC authorization mechanisms extend beyond human users. Non-human cluster users, such as applications, services, and processes running inside pods, authenticate with the API server using what Kubernetes calls service accounts. When a pod is created within a namespace, you can either let it use the default service account or you can define a service account of your choice. The ability to assign individual SAs to applications and processes gives administrators the freedom of granting or revoking permissions as required. Moreover, assigning specific SAs to production-critical applications is considered a best security practice. Since service accounts are used for authentication, and thus for RBAC authorization checks, cluster administrators could contain security threats by changing service account access rights and isolating the offending process.

      To demonstrate service accounts, this tutorial will use an Nginx web server as a sample application.

      Before assigning a particular SA to your application, you need to create the SA. Create a new service account called nginx-sa in the default namespace:

      • kubectl create sa nginx-sa

      You will get:


      serviceaccount/nginx-sa created

      Verify that the service account was created by running the following:

      • kubectl get sa

      This will give you a list of your service accounts:


      NAME       SECRETS   AGE
      default    1         22h
      nginx-sa   1         80s

      Now you will assign a role to the nginx-sa service account. For this example, grant nginx-sa the same permissions as the sammy user:

      • kubectl create rolebinding nginx-sa-edit \
        --clusterrole=edit \
        --serviceaccount=default:nginx-sa \
        --namespace=default

      Running this will yield the following:

      Output
      rolebinding.rbac.authorization.k8s.io/nginx-sa-edit created

      This command uses the same format as for the user sammy, except for the --serviceaccount=default:nginx-sa flag, where you assign the nginx-sa service account in the default namespace.

      Check that the role binding was successful using this command:

      • kubectl get rolebinding nginx-sa-edit

      This will give the following output:


      NAME            AGE
      nginx-sa-edit   23s

      Once you’ve confirmed that the role binding for the service account was successfully configured, you can assign the service account to an application. Assigning a particular service account to an application will allow you to manage its access rights in real-time and therefore enhance cluster security.

      For the purpose of this tutorial, an nginx pod will serve as the sample application. Create the new pod and specify the nginx-sa service account with the following command:

      • kubectl run nginx --image=nginx --port 80 --serviceaccount="nginx-sa"

      The first portion of the command creates a new pod running an nginx web server on port 80, and the last portion --serviceaccount="nginx-sa" indicates that this pod should use the nginx-sa service account and not the default SA.

      This will give you output similar to the following:


      deployment.apps/nginx created
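
Under the hood, the --serviceaccount flag sets the serviceAccountName field in the pod spec (for a deployment, in its pod template). A declarative sketch of an equivalent standalone pod would look like this:

```shell
# Write a pod manifest that pins the service account explicitly.
cat <<'EOF' > nginx-sa-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  serviceAccountName: nginx-sa   # use nginx-sa instead of the default SA
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOF
```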

      Verify that the new application is using the service account by using kubectl describe:

      • kubectl describe deployment nginx

      This will output a lengthy description of the deployment parameters. Under the Pod Template section, you will see output similar to this:


      ...
      Pod Template:
        Labels:           run=nginx
        Service Account:  nginx-sa
      ...

      In this section, you created the nginx-sa service account in the default namespace and assigned it to the nginx webserver. Now you can control nginx permissions in real-time by changing its role as needed. You can also group applications by assigning the same service account to each one and then make bulk changes to permissions. Finally, you could isolate critical applications by assigning them a unique SA.

      Summing up, the idea behind assigning roles to your applications/deployments is to fine-tune permissions. In real-world production environments, you may have several deployments requiring different permissions ranging from read-only to full administrative privileges. Using RBAC brings you the flexibility to restrict the access to the cluster as needed.

      Next, you will set up admission controllers to control resources and safeguard against resource starvation attacks.

      Step 4 — Setting Up Admission Controllers

      Kubernetes admission controllers are optional plug-ins that are compiled into the kube-apiserver binary to broaden security options. Admission controllers intercept requests after they pass the authentication and authorization phase. Once the request is intercepted, admission controllers execute the specified code just before the request is applied.

      While the outcome of either an authentication or authorization check is a boolean that allows or denies the request, admission controllers can be much more diverse. Admission controllers can validate requests in the same manner as authentication, but can also mutate or change the requests and modify objects before they are admitted.

      In this step, you will use the ResourceQuota and LimitRange admission controllers to protect your cluster by mutating requests that could contribute to a resource starvation or Denial-of-Service attack. The ResourceQuota admission controller allows administrators to restrict computing resources, storage resources, and the quantity of any object within a namespace, while the LimitRange admission controller restricts the resources that individual containers can use. Using these two admission controllers together will protect your cluster from attacks that render your resources unavailable.

      To demonstrate how ResourceQuota works, you will implement a few restrictions in the default namespace. Start by creating a new ResourceQuota object file:

      • nano resource-quota-default.yaml

      Add in the following object definition to set constraints for resource consumption in the default namespace. You can adjust the values as needed depending on your nodes’ physical resources:


      apiVersion: v1
      kind: ResourceQuota
      metadata:
        name: resource-quota-default
      spec:
        hard:
          pods: "2"
          requests.cpu: "500m"
          requests.memory: 1Gi
          limits.cpu: "1000m"
          limits.memory: 2Gi
          configmaps: "5"
          persistentvolumeclaims: "2"
          replicationcontrollers: "10"
          secrets: "3"
          services: "4"
          services.loadbalancers: "2"

      This definition uses the hard keyword to set hard constraints, such as the maximum number of pods, configmaps, PersistentVolumeClaims, ReplicationControllers, secrets, services, and loadbalancers. It also sets constraints on compute resources, like:

      • requests.cpu, which sets the maximum CPU value of requests in milliCPU, or one thousandth of a CPU core.
      • requests.memory, which sets the maximum memory value of requests in bytes.
      • limits.cpu, which sets the maximum CPU value of limits in milliCPUs.
      • limits.memory, which sets the maximum memory value of limits in bytes.
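
To get a feel for these units, a quick back-of-the-envelope check shows how many pods of a given size fit under the request quota. The pod sizes below are illustrative; the quota values mirror the example manifest:

```shell
# How many pods requesting 250m CPU / 512Mi memory fit under
# the quota of 500m CPU requests and 1Gi (1024Mi) memory requests?
quota_cpu_m=500;   pod_cpu_m=250
quota_mem_mi=1024; pod_mem_mi=512

echo "by CPU:    $(( quota_cpu_m / pod_cpu_m )) pods"
echo "by memory: $(( quota_mem_mi / pod_mem_mi )) pods"
# Both print 2 -- and the pods: "2" hard limit would cap it there anyway.
```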

      Save and exit the file.

      Now, create the object in the namespace running the following command:

      • kubectl create -f resource-quota-default.yaml --namespace=default

      This will yield the following:


      resourcequota/resource-quota-default created

      Notice that you are using the -f flag to indicate to Kubernetes the location of the ResourceQuota file and the --namespace flag to specify which namespace will be updated.

      Once the object has been created, your ResourceQuota will be active. You can check the default namespace quotas with describe quota:

      • kubectl describe quota --namespace=default

      The output will look similar to this, with the hard limits you set in the resource-quota-default.yaml file:


      Name:                   resource-quota-default
      Namespace:              default
      Resource                Used  Hard
      --------                ----  ----
      configmaps              0     5
      limits.cpu              0     1
      limits.memory           0     2Gi
      persistentvolumeclaims  0     2
      pods                    1     2
      replicationcontrollers  0     10
      requests.cpu            0     500m
      requests.memory         0     1Gi
      secrets                 2     3
      services                1     4
      services.loadbalancers  0     2

      ResourceQuotas are expressed in absolute units, so adding additional nodes will not automatically increase the values defined here. If more nodes are added, you will need to manually edit the values to keep the resources proportionate. ResourceQuotas can be modified as often as you need, and can be removed with kubectl delete resourcequota if they are no longer wanted.

      If you need to modify a particular ResourceQuota, update the corresponding .yaml file and apply the changes using the following command:

      • kubectl apply -f resource-quota-default.yaml --namespace=default

      For more information regarding the ResourceQuota Admission Controller, refer to the official documentation.

      Now that your ResourceQuota is set up, you will move on to configuring the LimitRange Admission Controller. Similar to how the ResourceQuota enforces limits on namespaces, the LimitRange enforces the declared limitations on individual containers, validating and mutating them as needed.

      In a similar way to before, start by creating the object file:

      • nano limit-range-default.yaml

      Now, you can use the LimitRange object to restrict resource usage as needed. Add the following content as an example of a typical use case:


      apiVersion: v1
      kind: LimitRange
      metadata:
        name: limit-range-default
      spec:
        limits:
        - max:
            cpu: "400m"
            memory: "1Gi"
          min:
            cpu: "100m"
            memory: "100Mi"
          default:
            cpu: "250m"
            memory: "800Mi"
          defaultRequest:
            cpu: "150m"
            memory: "256Mi"
          type: Container

      The sample values used in limit-range-default.yaml restrict container memory to a maximum of 1Gi and limit CPU usage to a maximum of 400m, which is a Kubernetes metric equivalent to 400 milliCPU, meaning the container is limited to using a little less than half a core.

      Next, deploy the object to the API server using the following command:

      • kubectl create -f limit-range-default.yaml --namespace=default

      This will give the following output:


      limitrange/limit-range-default created

      Now you can check the new limits with the following command:

      • kubectl describe limits --namespace=default

      Your output will look similar to this:


      Name:       limit-range-default
      Namespace:  default
      Type        Resource  Min    Max   Default Request  Default Limit  Max Limit/Request Ratio
      ----        --------  ---    ---   ---------------  -------------  -----------------------
      Container   cpu       100m   400m  150m             250m           -
      Container   memory    100Mi  1Gi   256Mi            800Mi          -

      To see LimitRanger in action, deploy a standard nginx container with the following command:

      • kubectl run nginx --image=nginx --port=80 --restart=Never

      This will give the following output:


      pod/nginx created

      Check how the admission controller mutated the container by running the following command:

      • kubectl get pod nginx -o yaml

      This will give many lines of output. Look in the container specification section to find the resource limits specified in the LimitRange Admission Controller:


      ...
      spec:
        containers:
        - image: nginx
          imagePullPolicy: IfNotPresent
          name: nginx
          ports:
          - containerPort: 80
            protocol: TCP
          resources:
            limits:
              cpu: 250m
              memory: 800Mi
            requests:
              cpu: 150m
              memory: 256Mi
      ...

      This would be the same as if you manually declared the resources and requests in the container specification.

      In this step, you used the ResourceQuota and LimitRange admission controllers to protect against malicious attacks toward your cluster’s resources. For more information about the LimitRange admission controller, read the official documentation.


      Throughout this guide, you configured a basic Kubernetes security template. This established user authentication and authorization, application privileges, and cluster resource protection. Combining all the suggestions covered in this article, you will have a solid foundation for a production Kubernetes cluster deployment. From there, you can start hardening individual aspects of your cluster depending on your scenario.

      If you would like to learn more about Kubernetes, check out our Kubernetes resource page, or follow our Kubernetes for Full-Stack Developers self-guided course.


      Recommended Steps To Harden Apache HTTP on FreeBSD 12.0

      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.


      Although the default installation of an Apache HTTP server is already safe to use, its configuration can be substantially improved with a few modifications. You can complement already present security mechanisms, for example, by setting protections around cookies and headers, so connections can’t be tampered with at the user’s client level. By doing this you can dramatically reduce the possibilities of several attack methods, like Cross-Site Scripting attacks (also known as XSS). You can also prevent other types of attacks, such as Cross-Site Request Forgery, or session hijacking, as well as Denial of Service attacks.

      In this tutorial you’ll implement some recommended steps to reduce how much information on your server is exposed. You will verify the directory listings and disable indexing to check the access to resources. You’ll also change the default value of the timeout directive to help mitigate Denial of Service type of attacks. Furthermore you’ll disable the TRACE method so sessions can’t be reversed and hijacked. Finally you’ll secure headers and cookies.

      Most of the configuration settings will be applied to the Apache HTTP main configuration file found at /usr/local/etc/apache24/httpd.conf.


      Before you begin this guide you’ll need the following:

      With the prerequisites in place, you have a FreeBSD system with a stack on top able to serve web content using anything written in PHP, such as major CMS software. Furthermore, you’ve encrypted safe connections through Let’s Encrypt.

      Reducing Server Information

      The operating system banner is a method used by computers, servers, and devices of all kinds to present themselves on networks. Malicious actors can use this information to exploit the relevant systems. In this section, you’ll reduce the amount of information published by this banner.

      Sets of directives control how this information is displayed. For this purpose the ServerTokens directive is important; by default it displays all details about the operating system and compiled modules to the client that’s connecting to it.

      You’ll use a tool for network scanning to check what information is currently revealed prior to applying any changes. To install nmap run the following command:

      • sudo pkg install nmap

      To get your server’s IP address, you can run the following command:

      • ifconfig vtnet0 | awk '/inet / {print $2}'
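
The awk filter extracts the second field of any line containing "inet ", which in ifconfig output is the IPv4 address. Here is the extraction in isolation, with sample output and a reserved documentation IP address (203.0.113.5) standing in for your server’s:

```shell
# awk prints the second field of any line containing "inet ",
# which in ifconfig-style output is the IPv4 address.
printf '\tinet 203.0.113.5 netmask 0xffffff00\n' | awk '/inet / {print $2}'
# → 203.0.113.5
```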

      You can check the web server response by using the following command:

      • nmap -sV -p 80 your-server-ip

      You invoke nmap to make a scan (hence the -s flag), to display the version (the -V flag) on port 80 (the -p flag) on the given IP or domain.

      You’ll receive information about your web server, similar to the following:


      Starting Nmap 7.80 ( https://nmap.org ) at 2020-01-22 00:30 CET
      Nmap scan report for your-server-ip
      Host is up (0.054s latency).

      PORT   STATE SERVICE VERSION
      80/tcp open  http    Apache httpd 2.4.41 ((FreeBSD) OpenSSL/1.1.1d-freebsd

      Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
      Nmap done: 1 IP address (1 host up) scanned in 7.59 seconds

      This output shows that information such as the operating system, the Apache HTTP version, and OpenSSL are visible. This can be useful for attackers to gain information about the server and choose the right tools to exploit, for example, a vulnerability in the software running on the server.

      The ServerTokens directive doesn’t come configured by default; without it, Apache HTTP displays the full information about the server, as the documentation states. To limit the information that is revealed about your server and configuration, you’ll place the ServerTokens directive inside the main configuration file.

      You’ll place this directive following the ServerName entry in the configuration file. Run the following command to find the directive:

      • grep -n 'ServerName' /usr/local/etc/apache24/httpd.conf

      You’ll find the line number that you can then search with vi:


      226:#ServerName www.example.com:80

      Run the following command:

      • sudo vi +226 /usr/local/etc/apache24/httpd.conf

      Add the following highlighted line:


      . . .
      ServerTokens Prod

      Save and exit the file with :wq and ENTER.

      Setting the ServerTokens directive to Prod will make it only display that this is an Apache web server.
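
Prod is the most restrictive of the values this directive accepts. Per the Apache HTTP documentation, the levels range from most to least verbose; the version numbers below are illustrative:

```apacheconf
# ServerTokens Full   -> Server: Apache/2.4.41 (FreeBSD) OpenSSL/1.1.1d-freebsd
# ServerTokens OS     -> Server: Apache/2.4.41 (FreeBSD)
# ServerTokens Min    -> Server: Apache/2.4.41
# ServerTokens Minor  -> Server: Apache/2.4
# ServerTokens Major  -> Server: Apache/2
# ServerTokens Prod   -> Server: Apache
```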

      For this to take effect, restart the Apache HTTP server:

      • sudo apachectl restart

      To test the changes, run the following command:

      • nmap -sV -p 80 your-server-ip

      You’ll see output similar to the following, with less information about your Apache web server:


      Starting Nmap 7.80 ( https://nmap.org ) at 2020-01-22 00:58 CET
      Nmap scan report for WPressBSD (your-server-ip)
      Host is up (0.056s latency).

      PORT   STATE SERVICE VERSION
      80/tcp open  http    Apache httpd

      Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
      Nmap done: 1 IP address (1 host up) scanned in 7.59 seconds

      You’ve seen what information the server was announcing prior to the change and you’ve now reduced this to the minimum. With this you’re providing fewer clues about your server to an external actor. In the next step you’ll manage the directory listings for your web server.
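      As an illustration of how Apache HTTP reads this setting, here is a hedged sketch (run against a throwaway temporary file, not your real httpd.conf) that extracts the configured ServerTokens level. When the directive is absent, Apache falls back to Full, the most verbose level:

      ```shell
      # Demo on a temporary file -- not your real configuration.
      conf=$(mktemp)
      cat > "$conf" <<'EOF'
      ServerName www.example.com:80
      ServerTokens Prod
      EOF

      # Extract the configured level; Apache defaults to Full when unset.
      level=$(awk '/^ServerTokens/ {print $2}' "$conf")
      echo "ServerTokens level: ${level:-Full}"

      rm -f "$conf"
      ```

      Deleting the ServerTokens line from the sample file and re-running the snippet prints Full, which is why adding the directive explicitly matters.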

      Managing Directory Listings

      In this step you’ll ensure the directory listing is correctly configured, so the right parts of the system are publicly available as intended, while the remainder are protected.

      Note: When an argument is declared it is active, but the + sign can visually reinforce that it is in fact enabled. When a minus sign - is placed before an argument, that argument is denied, for example, Options -Indexes.

      Arguments prefixed with + or - cannot be mixed with unprefixed arguments; Apache HTTP considers this bad syntax and may reject it at startup.

      Adding the statement Options -Indexes stops the content inside the data path /usr/local/www/apache24/data from being indexed (that is, listed) automatically when an index.html file doesn’t exist and a URL maps to this directory. This also applies to virtual host configurations such as the one used in the prerequisite tutorial for the Let’s Encrypt certificate.

      You will set the Options directive with the -Indexes argument and the +FollowSymLinks argument, which allows symbolic links to be followed. You’ll use the + symbol in order to comply with Apache HTTP’s conventions.

      Run the following command to find the line to edit in the configuration file:

      • grep -n 'Options Indexes FollowSymLinks' /usr/local/etc/apache24/httpd.conf

      You’ll see output similar to the following:


      263:Options Indexes FollowSymLinks

      Run this command to directly access the line for editing:

      • sudo vi +263 /usr/local/etc/apache24/httpd.conf

      Now edit the line as per the configuration:


      . . .
      Options -Indexes +FollowSymLinks
      . . .

      Save and exit the file with :wq and ENTER.

      Restart Apache HTTP to implement these changes:

      • sudo service apache24 restart

      At your domain in the browser, you’ll see a forbidden access message, also known as the 403 error. This is due to the changes you’ve applied: placing -Indexes into the Options directive has disabled Apache HTTP’s auto-index capability, and there is no index.html file inside the data path for it to serve instead.

      You can solve this by placing an index.html file inside the VirtualHost you enabled in the prerequisite tutorial for the Let’s Encrypt certificate. You’ll use the default index.html that ships with Apache HTTP and place it in the same folder as the DocumentRoot that you declared in the virtual host.


      <VirtualHost *:80>
          DocumentRoot "/usr/local/www/apache24/data/"
          ErrorLog "/var/log/"
          CustomLog "/var/log/" common
      . . .
      </VirtualHost>

      Use the following command to do this:

      • sudo cp /usr/local/www/apache24/data/index.html /usr/local/www/apache24/data/

      Now you’ll see an It works! message when visiting your domain.
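      The decision Apache HTTP now makes can be sketched in plain shell (a toy model run in a temporary directory, not what Apache actually executes): with -Indexes set, a request that maps to a directory is served only when an index.html exists, and otherwise gets a 403:

      ```shell
      # Toy model of the -Indexes behavior, run in a temporary directory.
      docroot=$(mktemp -d)

      decide() {
        if [ -f "$docroot/index.html" ]; then
          echo "serve index.html"
        else
          echo "403 Forbidden"   # auto-indexing is disabled, nothing to show
        fi
      }

      before=$(decide)           # no index.html yet
      touch "$docroot/index.html"
      after=$(decide)            # now the index page exists

      echo "$before"
      echo "$after"
      rm -rf "$docroot"
      ```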

      In this section you’ve placed restrictions on the Indexes argument so Apache HTTP does not automatically list and display content other than what you intend. Now if there is no index.html file inside the data path, Apache HTTP will not automatically create an index of contents. In the next step you’ll move beyond obscuring information and customize different directives.

      Reducing the Timeout Directive Value

      The Timeout directive sets the limit of time Apache HTTP will wait for new input/output before failing the connection request. This failure can occur due to different circumstances such as packets not arriving to the server or data not being confirmed as received by the client.

      By default the timeout is set to 60 seconds. In environments where the internet service is slow this default value may be sensible, but one minute is quite a long time, particularly if the server serves users with faster internet service. Furthermore, the time during which the server does not close the connection can be abused to perform Denial of Service (DoS) attacks. If a flood of these malicious connections occurs, the server will stumble and possibly become saturated and unresponsive.

      To change the value you’ll find the Timeout entries in the httpd-default.conf file:

      • grep -n 'Timeout' /usr/local/etc/apache24/extra/httpd-default.conf

      You’ll see similar output to:


      8 # Timeout: The number of seconds before receives and sends time out.
      10 Timeout 60
      26 # KeepAliveTimeout: Number of seconds to wait for the next request from the
      29 KeepAliveTimeout 5
      89 RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500

      In the output line 10 sets the Timeout directive value. To directly access this line run the following command:

      • sudo vi +10 /usr/local/etc/apache24/extra/httpd-default.conf

      You’ll change it to 30 seconds, for example, like the following:


      # Timeout: The number of seconds before receives and sends time out.
      Timeout 30

      Save and exit the file with :wq and ENTER.

      The value of the Timeout directive has to balance a range long enough for a legitimate connection to complete successfully, but short enough to discourage undesired connection attempts.

      Note: Denial of Service attacks can drain the server’s resources quite effectively. A complementary and very capable counter measure is using a threaded MPM to get the best performance out of how Apache HTTP handles connections and processes. In this tutorial How To Configure Apache HTTP with MPM Event and PHP-FPM on FreeBSD 12.0 there are steps on enabling this capability.

      For this change to take effect restart the Apache HTTP server:

      • sudo service apache24 restart

      You’ve changed the default value of the Timeout directive in order to partially mitigate DoS attacks.

      Disabling the TRACE method

      The Hypertext Transfer Protocol was developed following a client-server model and as such, the protocol has request methods to retrieve or place information from/to the server. The server needs to understand these sets of methods and the interaction between them. In this step you’ll configure the minimum necessary methods.

      The TRACE method, once considered harmless, can be leveraged to perform Cross Site Tracing attacks. These types of attacks allow malicious actors to steal user sessions through that method. The method was designed for debugging purposes: the server returns the same request originally sent by the client. Because the cookie from the browser’s session is sent to the server, it will be sent back again. However, this could potentially be intercepted by a malicious actor, who can then redirect a browser’s connection to a site of their control and not to the original server.

      Because of the possibility of the misuse of the TRACE method it is recommended to only use it for debugging and not in production. In this section you’ll disable this method.

      Edit the httpd.conf file with the following command and then press G to reach the end of the file:

      • sudo vi /usr/local/etc/apache24/httpd.conf

      Add the following entry path at the end of the file:


      . . .
      TraceEnable off

      Save and exit the file with :wq and ENTER.

      A good practice is to only specify the methods you’ll use in your Apache HTTP web server. This will help limit potential entry points for malicious actors.

      LimitExcept can be useful for this purpose since it will not allow any other methods than those declared in it. For example a configuration can be established like this one:


      DocumentRoot "/usr/local/www/apache24/data"
      <Directory "/usr/local/www/apache24/data">
          Options -Indexes +FollowSymLinks -Includes
          AllowOverride none
          <LimitExcept GET POST HEAD>
              deny from all
          </LimitExcept>
          Require all granted
      </Directory>
      As declared within the LimitExcept directive only the GET, POST, and HEAD methods are allowed in the configuration.

      • The GET method is part of the HTTP protocol and it is used to retrieve data.
      • The POST method is also part of the HTTP protocol and is used to send data to the server.
      • The HEAD method is similar to GET, however this has no response body.
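      The allow-list behavior that LimitExcept enforces can be sketched as follows (a shell analogy for illustration, not Apache’s actual implementation): any method in the declared set is allowed, and everything else is denied:

      ```shell
      # Shell analogy of the LimitExcept allow-list described above.
      allowed_methods="GET POST HEAD"

      check_method() {
        case " $allowed_methods " in
          *" $1 "*) echo "allow" ;;   # method is in the declared set
          *)        echo "deny"  ;;   # everything else, e.g. TRACE or DELETE
        esac
      }

      check_method GET     # allowed
      check_method TRACE   # denied
      ```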

      You’ll use the following command and place the LimitExcept block inside the file:

      • sudo vi +272 /usr/local/etc/apache24/httpd.conf

      To set this configuration you’ll place the following block into the DocumentRoot directive entry where the content will be read from, more specifically inside the Directory entry:


      . . .
      <LimitExcept GET POST HEAD>
         deny from all
      </LimitExcept>
      . . .

      To apply the changes restart Apache HTTP:

      • sudo service apache24 restart

      The newer directive AllowedMethods provides similar functionality, although its status is still experimental.

      You’ve seen what HTTP methods are, their use, and the protection they offer from malicious activity leveraging the TRACE method as well as how to declare what methods to use. Next you’ll work with further protections dedicated to HTTP headers and cookies.

      Securing Headers and Cookies

      In this step you’ll set specific directives to protect the sessions that the client machines will open when visiting your Apache HTTP web server. This way your server will not load unwanted content, encryption will not be downgraded, and you’ll avoid content sniffing.

      Headers are components of the requests methods. There are headers to adjust authentication, communication between server and client, caching, content negotiation, and so on.

      Cookies are bits of information sent by the server to the browser. These bits allow the server to recognize the client browser from one computer to another. They also allow servers to recognize user sessions. For example, they can track a shopping cart of a logged-in user, payment information, history, and so on. Cookies are used and retained in the client’s web browser since HTTP is a stateless protocol, meaning once the connection closes the server does not remember the request sent by one client, or another one.

      It is important to protect headers as well as cookies because they provide communication between the web browser client and the web server.

      The headers module comes activated by default. To check if it’s loaded you’ll use the following command:

      • sudo apachectl -M | grep 'headers'

      You’ll see the following output:


      headers_module (shared)

      If you don’t see any output, check if the module is activated inside Apache’s httpd.conf file:

      • grep -n 'mod_headers' /usr/local/etc/apache24/httpd.conf

      As output you’ll see an uncommented line referring to the specific module for headers:


      . . .
      122  LoadModule headers_module libexec/apache24/
      . . .

      Remove the hash symbol (#) at the beginning of the line, if present, to activate the module.

      By making use of the following Apache HTTP directives you’ll protect headers and cookies from malicious activity to reduce the risk for clients and servers.

      Now you’ll set the header’s protection. You’ll place all these header values in one block. You can choose to apply these values as you wish, but all are recommended.

      Edit the httpd.conf file with the following command and then press G to reach the end of the file:

      • sudo vi /usr/local/etc/apache24/httpd.conf

      Place the following block at the end of the file:


      . . .
      <IfModule mod_headers.c>
        # Add security and privacy related headers
        Header set Content-Security-Policy "default-src 'self'; upgrade-insecure-requests;"
        Header set Strict-Transport-Security "max-age=31536000; includeSubDomains"
        Header always edit Set-Cookie (.*) "$1; HttpOnly; Secure"
        Header set X-Content-Type-Options "nosniff"
        Header set X-XSS-Protection "1; mode=block"
        Header set Referrer-Policy "strict-origin"
        Header set X-Frame-Options "deny"
        SetEnv modHeadersAvailable true
      </IfModule>

      Save and exit the file with :wq and ENTER.

      Each of these directives works as follows:
      • Header set Strict-Transport-Security "max-age=31536000; includeSubDomains": HTTP Strict Transport Security (HSTS) is a mechanism for web servers and clients (mainly browsers) to establish communications using only HTTPS. By implementing this you’re mitigating man-in-the-middle attacks, where a third party in between the communication could potentially access the bits, and also tamper with them.

      • Header always edit Set-Cookie (.*) "$1; HttpOnly; Secure": The HttpOnly and Secure flags on cookies help prevent cross-site scripting attacks, also known as XSS. Cookies can be misused by attackers to pose as legitimate visitors presenting themselves as someone else (identity theft), or be tampered with.

      • Header set Referrer-Policy "strict-origin": The Referrer-Policy header sets what information is included as the referrer information in the header field.

      • Header set Content-Security-Policy "default-src 'self'; upgrade-insecure-requests;": The Content-Security-Policy header (CSP) will completely prevent loading content not specified in the parameters, which is helpful to prevent cross-site scripting (XSS) attacks. There are many possible parameters to configure the policy for this header. The bottom line is configuring it to load content from the same site and upgrade any content with an HTTP origin.

      • Header set X-XSS-Protection "1; mode=block": This supports older browsers that do not cope with Content-Security-Policy headers. The ‘X-XSS-Protection’ header provides protection against Cross-Site Scripting attacks. You do not need to set this header unless you need to support old browser versions, which is rare.

      • Header set X-Frame-Options "deny": This prevents clickjacking attacks. The X-Frame-Options header tells a browser whether a page can be rendered in a <frame>, <iframe>, <embed>, or <object>. This way content from your site cannot be embedded into other sites, preventing clickjacking attacks. Here you’re denying all frame rendering so the web page can’t be embedded anywhere else, not even inside the same web site. You can adapt this to your needs if, for example, you must authorize rendering some pages because they are advertisements or collaborations with specific websites.

      • Header set X-Content-Type-Options "nosniff": The 'X-Content-Type-Options’ header controls MIME types so they’re not changed and followed. MIME types are file format standards; they work for text, audio, video, image, and so on. This header blocks malicious actors from content sniffing those files and trying to alter the file types.
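      To sanity-check the result, you can grep a captured HTTP response for the headers you configured. This is a sketch run against sample response data saved to a temporary file (in practice you would capture a real response, for example with curl -I against your own domain); the sample below deliberately omits Content-Security-Policy so the check flags it:

      ```shell
      # Check a captured HTTP response (sample data here) for security headers.
      resp=$(mktemp)
      cat > "$resp" <<'EOF'
      HTTP/1.1 200 OK
      Strict-Transport-Security: max-age=31536000; includeSubDomains
      X-Content-Type-Options: nosniff
      X-Frame-Options: deny
      EOF

      missing=""
      for h in Strict-Transport-Security Content-Security-Policy \
               X-Content-Type-Options X-Frame-Options; do
        grep -qi "^$h:" "$resp" || missing="$missing $h"
      done

      echo "missing headers:$missing"
      rm -f "$resp"
      ```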

      Now restart Apache for the changes to take effect:

      • sudo service apache24 restart

      To check the security levels of your configuration settings, visit the security headers website. Having followed the steps in this tutorial, your domain will score an A grade.

      Note: If you check your headers and get an F grade, it could be because there is no index.html inside the DocumentRoot of your site, as instructed at the end of Step 2. If you get a grade other than an A or an F, check each Header set line for any misspelling that may have caused the downgrade.

      In this step you have worked with up to seven settings to improve the security of your headers and cookies. These will help prevent cross-site scripting, clickjacking, and other types of attacks.


      In this tutorial you’ve addressed several security aspects, from information disclosure, to protecting sessions, through setting alternative configuration settings for important functionality.

      For further resources on hardening Apache HTTP and extra tools to protect it, see the official Apache HTTP Server documentation.


      Recommended Steps For New FreeBSD 12.0 Servers


      When setting up a new FreeBSD server, there are a number of optional steps you can take to get your server into a more production-friendly state. In this guide, we will cover some of the most common examples.

      We will set up a simple, easy-to-configure firewall that denies most traffic. We will also make sure that your server’s time zone accurately reflects its location. We will set up NTP polling in order to keep the server’s time accurate and, finally, demonstrate how to add some extra swap space to your server.

      Before you get started with this guide, you should log in and configure your shell environment the way you’d like it. You can find out how to do this by following this guide.

      How To Configure a Simple IPFW Firewall

      The first task is setting up a simple firewall to secure your server.

      FreeBSD supports and includes three separate firewalls. These are called pf, ipfw, and ipfilter. In this guide, we will be using ipfw as our firewall. ipfw is a secure, stateful firewall written and maintained as part of FreeBSD.

      Configuring the Basic Firewall

      Almost all of your configuration will take place in the /etc/rc.conf file. To modify the configuration you’ll use the sysrc command, which allows users to change configuration in /etc/rc.conf in a safe manner. Inside this file you’ll add a number of different lines to enable and control how the ipfw firewall will function. You’ll start with the essential rules; run the following command to begin:

      • sudo sysrc firewall_enable="YES"

      Each time you run sysrc to modify your configuration, you’ll receive output showing the changes:


      firewall_enable: NO -> YES

      As you may expect, this first command enables the ipfw firewall, starting it automatically at boot and allowing it to be started with the usual service commands.

      Now run the following:

      • sudo sysrc firewall_quiet="YES"

      This tells ipfw not to output anything to standard out when it performs certain actions. This might seem like a matter of preference, but it actually affects the functionality of the firewall.

      Two factors combine to make this an important option. The first is that the firewall configuration script is executed in the current shell environment, not as a background task. The second is that when the ipfw command reads a configuration script without the "quiet" flag, it reads and outputs each line, in turn, to standard out. When it outputs a line, it immediately executes the associated action.

      Most firewall configuration files flush the current rules at the top of the script in order to start with a clean slate. If the ipfw firewall comes across a line like this without the quiet flag, it will immediately flush all rules and revert to its default policy, which is usually to deny all connections. If you’re configuring the firewall over SSH, this would drop the connection, close the current shell session, and none of the rules that follow would be processed, effectively locking you out of the server. The quiet flag allows the firewall to process the rules as a set instead of implementing each one individually.

      After these two lines, you can begin configuring the firewall’s behavior. Now select "workstation" as the type of firewall you’ll configure:

      • sudo sysrc firewall_type="workstation"

      This sets the firewall to protect the server from which you’re configuring the firewall using stateful rules. A stateful firewall monitors the state of network connections over time and stores information about these connections in memory for a short time. As a result, not only can rules be defined on what connections the firewall should allow, but a stateful firewall can also use the data it has learned about previous connections to evaluate which connections can be made.

      The /etc/rc.conf file also allows you to customize the services you want clients to be able to access by using the firewall_myservices and firewall_allowservices options.

      Run the following command to open ports that should be accessible on your server, such as port 22 for your SSH connection and port 80 for a conventional HTTP web server. If you use SSL on your web server, make sure to add port 443:

      • sudo sysrc firewall_myservices="22/tcp 80/tcp 443/tcp"

      The firewall_myservices option is set to a list of TCP ports or services, separated by spaces, that should be accessible on your server.

      Note: You could also use services by name. The services that FreeBSD knows by name are listed in the /etc/services file. For instance, you could change the previous command to something like this:

      • firewall_myservices="ssh http https"

      This would have the same results.

      The firewall_allowservices option lists items that should be allowed to access the provided services. Therefore it allows you to limit access to your exposed services (from firewall_myservices) to particular machines or network ranges. For example, this could be useful if you want a machine to host web content for an internal company network. The keyword "any" means that any IPs can access these services, making them completely public:

      • sudo sysrc firewall_allowservices="any"

      The firewall_logdeny option tells ipfw to log all connection attempts that are denied to a file located at /var/log/security. Run the following command to set this:

      • sudo sysrc firewall_logdeny="YES"

      To check on the changes you’ve made to the firewall configuration, run the following command:

      • grep 'firewall' /etc/rc.conf

      This portion of the /etc/rc.conf file will look like this:


      firewall_enable="YES"
      firewall_quiet="YES"
      firewall_type="workstation"
      firewall_myservices="22/tcp 80/tcp 443/tcp"
      firewall_allowservices="any"
      firewall_logdeny="YES"

      Remember to adjust the firewall_myservices option to reference the services you wish to expose to clients.

      Allowing UDP Connections (Optional)

      The ports and services listed in the firewall_myservices option in the /etc/rc.conf file allow access for TCP connections. If you have services that you wish to expose that use UDP, you need to edit the /etc/rc.firewall file:

      • sudo vi /etc/rc.firewall

      You configured your firewall to use the "workstation" firewall type, so look for a section that looks like this:


      . . .
      . . .

      There is a section within this block that is dedicated to processing the firewall_allowservices and firewall_myservices values that you set. It will look like this:


      for i in ${firewall_allowservices} ; do
        for j in ${firewall_myservices} ; do
          ${fwcmd} add pass tcp from $i to me $j
        done
      done

      After this section, you can add any services or ports that should accept UDP packets by adding lines like this:

      ${fwcmd} add pass udp from any to me port_num

      In vi, press i to switch to INSERT mode and add your content, then save and close the file by pressing ESC, typing :wq, and pressing ENTER. In the previous example, you can leave the "any" keyword if the connection should be allowed for all clients or change it to a specific IP address or network range. The port_num should be replaced by the port number or service name you wish to allow UDP access to. For example, if you're running a DNS server, you may wish to have a line that looks something like this:

      for i in ${firewall_allowservices} ; do
        for j in ${firewall_myservices} ; do
          ${fwcmd} add pass tcp from $i to me $j
        done
      done
      ${fwcmd} add pass udp from to me 53

      This will allow any client from within the network range to access a DNS server operating on the standard port 53. Note that in this example you would also want to open this port up for TCP connections as that is used by DNS servers for longer replies.

      Save and close the file when you are finished.
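      To see how rc.firewall expands those two lists into individual rules, here is a sketch with echo standing in for the real ${fwcmd} (the ipfw command), using example values chosen for illustration:

      ```shell
      # Sketch: how the nested loops expand the lists into one rule per pair.
      # echo stands in for the real ${fwcmd} (the ipfw command).
      firewall_allowservices="any"
      firewall_myservices="22/tcp 80/tcp"

      rule_count=0
      for i in $firewall_allowservices; do
        for j in $firewall_myservices; do
          echo "add pass tcp from $i to me $j"
          rule_count=$((rule_count + 1))
        done
      done

      echo "generated $rule_count rules"
      ```

      With one source ("any") and two services, two rules are generated; adding a second source would double that, since every source is paired with every service.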

      Starting the Firewall

      When you are finished with your configuration, you can start the firewall by typing:

      • sudo service ipfw start

      The firewall will start correctly, blocking unwanted traffic while adhering to your allowed services and ports. This firewall will start automatically at every boot.

      You also want to configure a limit on how many denials per IP address you'll log. This will prevent your logs from filling up from a single, persistent user. You can do this in the /etc/sysctl.conf file:

      • sudo vi /etc/sysctl.conf

      At the bottom of the file, you can limit your logging to "5" by adding the following line:


      net.inet.ip.fw.verbose_limit=5
      Save and close the file when you are finished. This will configure that setting on the next boot.

      To implement this same behavior for your currently active session without restarting, you can use the sysctl command itself, like this:

      • sudo sysctl net.inet.ip.fw.verbose_limit=5

      This should immediately implement the limit for this boot.

      How To Set the Time Zone for Your Server

      It is a good idea to correctly set the time zone for your server. This is an important step for when you configure NTP time synchronization in the next section.

      FreeBSD comes with a menu-based tool called tzsetup for configuring time zones. To set the time zone for your server, call this command with sudo privileges:

      • sudo tzsetup

      First, you will be asked to select the region of the world your server is located in:

      FreeBSD region of the world

      You will need to choose a sub-region or country next:

      FreeBSD country

      Note: To navigate these menus, you'll need to use the PAGE UP and PAGE DOWN keys. If you do not have these on your keyboard, you can use FN + DOWN or FN + UP.

      Finally, select the specific time zone that is appropriate for your server:

      FreeBSD time zone

      Confirm the time zone selection that is presented based on your choices.

      At this point, your server's time zone should match the selections you made.

      How To Configure NTP to Keep Accurate Time

      Now that you have the time zone configured on your server, you can set up NTP, or Network Time Protocol. This will help keep your server's time in sync with others throughout the world. This is important for time-sensitive client-server interactions as well as accurate logging.

      Again, you can enable the NTP service on your server by adjusting the /etc/rc.conf file. Run the following command to add the line ntpd_enable="YES" to the file:

      • sudo sysrc ntpd_enable="YES"

      You also need to add a second line that will sync the time on your machine with the remote NTP servers at boot. This is necessary because it allows your server to exceed the normal drift limit on initialization. Your server will likely be outside of the drift limit at boot because your time zone will be applied prior to the NTP daemon starting, which will offset your system time:

      • sudo sysrc ntpd_sync_on_start="YES"

      If you did not have this line, your NTP daemon would fail when started, due to the time zone settings applied earlier in the boot process that skew your system time.

      You can start your ntpd service by typing:

      • sudo service ntpd start

      This will maintain your server's time by synchronizing with the NTP servers listed in /etc/ntp.conf.

      How To Configure Extra Swap Space

      On FreeBSD servers configured on DigitalOcean, 1 Gigabyte of swap space is automatically configured regardless of the size of your server. You can see this by typing:

      • swapinfo -g

      It should show something like this:


      Device          1G-blocks Used Avail Capacity
      /dev/gpt/swapfs         1    0     1       0%

      Some users and applications may need more swap space than this. This is accomplished by adding a swap file.

      The first thing you need to do is to allocate a chunk of the filesystem for the file you want to use for swap. You'll use the truncate command, which can quickly allocate space on the fly.

      We'll put the swapfile in /swapfile for this tutorial but you can put the file anywhere you wish, like /var/swapfile for example. This file will provide an additional 1 Gigabyte of swap space. You can adjust this number by modifying the value given to the -s option:

      • sudo truncate -s 1G /swapfile
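      To see what truncate does, here is a small sketch on a throwaway temporary file (1M instead of 1G so it is quick): the file reports the requested size immediately, because the space is allocated sparsely rather than written out byte by byte:

      ```shell
      # Demo of truncate on a temporary file (1M here; the tutorial uses 1G).
      f=$(mktemp)
      truncate -s 1M "$f"

      # The apparent size is set immediately, without writing any data.
      size=$(wc -c < "$f")
      echo "apparent size: $size bytes"

      rm -f "$f"
      ```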

      After you allocate the space, you need to lock down access to the file. Normal users should not have any access to the file:

      • sudo chmod 0600 /swapfile

      Next, associate a pseudo-device with your file and configure it to mount at boot by typing:

      • echo "md99 none swap sw,file=/swapfile,late 0 0" | sudo tee -a /etc/fstab

      This command adds a line that looks like this to the /etc/fstab file:

      md99 none swap sw,file=/swapfile,late 0 0
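      The entry follows fstab's standard six-column layout. As a sketch, splitting the line on whitespace shows each field:

      ```shell
      # Split the fstab entry into its six standard fields.
      entry="md99 none swap sw,file=/swapfile,late 0 0"
      set -- $entry

      echo "device:     $1"   # memory disk device to create
      echo "mountpoint: $2"   # none -- swap is not mounted in the tree
      echo "type:       $3"   # swap
      echo "options:    $4"   # backing file, activated late in boot
      echo "dump/pass:  $5 $6"
      ```

      The late option matters here: it delays activation until the filesystem holding /swapfile is available.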

      After the line is added to your /etc/fstab file, you can activate the swap file for the session by typing:

      • sudo swapon -aqL

      You can verify that the swap file is now working by using the swapinfo command again:

      • swapinfo -g

      You should see the additional device (/dev/md99) associated with your swap file:


      Device          1G-blocks Used Avail Capacity
      /dev/gpt/swapfs         1    0     1       0%
      /dev/md99               1    0     1       0%
      Total                   2    0     2       0%

      This swap file will be mounted automatically at each boot.


      The steps outlined in this guide can be used to bring your FreeBSD server into a more production-ready state. By configuring basic essentials like a firewall, NTP synchronization, and appropriate swap space, your server can be used as a good base for future installations and services.
