
      How To Set Up an Nginx Ingress on DigitalOcean Kubernetes Using Helm


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

Kubernetes Ingresses offer you a flexible way of routing traffic from beyond your cluster to internal Kubernetes Services. Ingress Resources are objects in Kubernetes that define rules for routing HTTP and HTTPS traffic to Services. For these to work, an Ingress Controller must be present; its role is to implement the rules by accepting traffic (most likely via a Load Balancer) and routing it to the appropriate Services. Most Ingress Controllers use only one global Load Balancer for all Ingresses, which is more efficient than creating a Load Balancer for every Service you wish to expose.
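As a minimal sketch of what such a rule looks like (the names and domain here are placeholders, not resources used later in this tutorial), an Ingress Resource maps a hostname to a back-end Service:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - backend:
          serviceName: example-service
          servicePort: 80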

Helm is a package manager for Kubernetes. Using Helm Charts with your cluster provides configurability and lifecycle management, letting you update, roll back, and delete Kubernetes applications.

      In this guide, you’ll set up the Kubernetes-maintained Nginx Ingress Controller using Helm. You’ll then create an Ingress Resource to route traffic from your domains to example Hello World back-end services. Once you’ve set up the Ingress, you’ll install Cert-Manager to your cluster to be able to automatically provision Let’s Encrypt TLS certificates to secure your Ingresses.

      Prerequisites

• A DigitalOcean Kubernetes cluster with your connection configured as the kubectl default. Instructions on how to configure kubectl are shown under the Connect to your Cluster step when you create your cluster. To learn how to create a Kubernetes cluster on DigitalOcean, see Kubernetes Quickstart.

      • The Helm package manager installed on your local machine, and Tiller installed on your cluster. Complete steps 1 and 2 of the How To Install Software on Kubernetes Clusters with the Helm Package Manager tutorial.

      • A fully registered domain name with two available A records. This tutorial will use hw1.example.com and hw2.example.com throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.

      Step 1 — Setting Up Hello World Deployments

      In this section, before you deploy the Nginx Ingress, you will deploy a Hello World app called hello-kubernetes to have some Services to which you’ll route the traffic. To confirm that the Nginx Ingress works properly in the next steps, you’ll deploy it twice, each time with a different welcome message that will be shown when you access it from your browser.

      You’ll store the deployment configuration on your local machine. The first deployment configuration will be in a file named hello-kubernetes-first.yaml. Create it using a text editor:

      • nano hello-kubernetes-first.yaml

      Add the following lines:

      hello-kubernetes-first.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: hello-kubernetes-first
      spec:
        type: ClusterIP
        ports:
        - port: 80
          targetPort: 8080
        selector:
          app: hello-kubernetes-first
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello-kubernetes-first
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: hello-kubernetes-first
        template:
          metadata:
            labels:
              app: hello-kubernetes-first
          spec:
            containers:
            - name: hello-kubernetes
              image: paulbouwer/hello-kubernetes:1.5
              ports:
              - containerPort: 8080
              env:
              - name: MESSAGE
                value: Hello from the first deployment!
      

      This configuration defines a Deployment and a Service. The Deployment consists of three replicas of the paulbouwer/hello-kubernetes:1.5 image, and an environment variable named MESSAGE—you will see its value when you access the app. The Service here is defined to expose the Deployment in-cluster at port 80.

      Save and close the file.

      Then, create this first variant of the hello-kubernetes app in Kubernetes by running the following command:

      • kubectl create -f hello-kubernetes-first.yaml

      You’ll see the following output:

      Output

service/hello-kubernetes-first created
deployment.apps/hello-kubernetes-first created

      To verify the Service’s creation, run the following command:

      • kubectl get service hello-kubernetes-first

      The output will look like this:

      Output

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
hello-kubernetes-first   ClusterIP   10.245.85.236   <none>        80:31623/TCP   35s

      You’ll see that the newly created Service has a ClusterIP assigned, which means that it is working properly. All traffic sent to it will be forwarded to the selected Deployment on port 8080. Now that you have deployed the first variant of the hello-kubernetes app, you’ll work on the second one.
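If you'd like to see the app respond before any Ingress exists, one optional check (not a required step, and assuming curl is available on your machine) is to forward a local port to the Service and request it:

• kubectl port-forward service/hello-kubernetes-first 8080:80
• curl http://localhost:8080

The response should contain the message Hello from the first deployment!. Press CTRL + C to stop the port forward.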

      Open a file called hello-kubernetes-second.yaml for editing:

      • nano hello-kubernetes-second.yaml

      Add the following lines:

      hello-kubernetes-second.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: hello-kubernetes-second
      spec:
        type: ClusterIP
        ports:
        - port: 80
          targetPort: 8080
        selector:
          app: hello-kubernetes-second
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello-kubernetes-second
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: hello-kubernetes-second
        template:
          metadata:
            labels:
              app: hello-kubernetes-second
          spec:
            containers:
            - name: hello-kubernetes
              image: paulbouwer/hello-kubernetes:1.5
              ports:
              - containerPort: 8080
              env:
              - name: MESSAGE
                value: Hello from the second deployment!
      

      Save and close the file.

      This variant has the same structure as the previous configuration; the only differences are in the Deployment and Service names, to avoid collisions, and the message.

      Now create it in Kubernetes with the following command:

      • kubectl create -f hello-kubernetes-second.yaml

      The output will be:

      Output

service/hello-kubernetes-second created
deployment.apps/hello-kubernetes-second created

Verify that the second Service is up and running by listing all of your services:

• kubectl get service

      The output will be similar to this:

      Output

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
hello-kubernetes-first    ClusterIP   10.245.85.236   <none>        80:31623/TCP   54s
hello-kubernetes-second   ClusterIP   10.245.99.130   <none>        80:30303/TCP   12s
kubernetes                ClusterIP   10.245.0.1      <none>        443/TCP        5m

      Both hello-kubernetes-first and hello-kubernetes-second are listed, which means that Kubernetes has created them successfully.

You've created two deployments of the hello-kubernetes app with accompanying Services. Each one has a different message set in its deployment specification, which allows you to differentiate them during testing. In the next step, you'll install the Nginx Ingress Controller itself.

      Step 2 — Installing the Kubernetes Nginx Ingress Controller

Now you'll install the Kubernetes-maintained Nginx Ingress Controller using Helm. Note that there are several Nginx Ingress Controllers; the Kubernetes community maintains the one used in this guide, while Nginx Inc. maintains a separate kubernetes-ingress project.

      The Nginx Ingress Controller consists of a Pod and a Service. The Pod runs the Controller, which constantly polls the /ingresses endpoint on the API server of your cluster for updates to available Ingress Resources. The Service is of type LoadBalancer, and because you are deploying it to a DigitalOcean Kubernetes cluster, the cluster will automatically create a DigitalOcean Load Balancer, through which all external traffic will flow to the Controller. The Controller will then route the traffic to appropriate Services, as defined in Ingress Resources.

      Only the LoadBalancer Service knows the IP address of the automatically created Load Balancer. Some apps (such as ExternalDNS) need to know its IP address, but can only read the configuration of an Ingress. The Controller can be configured to publish the IP address on each Ingress by setting the controller.publishService.enabled parameter to true during helm install. It is recommended to enable this setting to support applications that may depend on the IP address of the Load Balancer.
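With this setting enabled, the Controller records the Load Balancer's address in the status of every Ingress it manages. After you create an Ingress Resource in Step 3, you can confirm this (a quick optional check) by listing your Ingresses and reading the ADDRESS column:

• kubectl get ingress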

      To install the Nginx Ingress Controller to your cluster, run the following command:

      • helm install stable/nginx-ingress --name nginx-ingress --set controller.publishService.enabled=true

      This command installs the Nginx Ingress Controller from the stable charts repository, names the Helm release nginx-ingress, and sets the publishService parameter to true.

      The output will look like:

      Output

NAME:   nginx-ingress
LAST DEPLOYED: ...
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                      DATA  AGE
nginx-ingress-controller  1     0s

==> v1/Pod(related)
NAME                                            READY  STATUS             RESTARTS  AGE
nginx-ingress-controller-7658988787-npv28       0/1    ContainerCreating  0         0s
nginx-ingress-default-backend-7f5d59d759-26xq2  0/1    ContainerCreating  0         0s

==> v1/Service
NAME                           TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
nginx-ingress-controller       LoadBalancer  10.245.9.107   <pending>    80:31305/TCP,443:30519/TCP  0s
nginx-ingress-default-backend  ClusterIP     10.245.221.49  <none>       80/TCP                      0s

==> v1/ServiceAccount
NAME           SECRETS  AGE
nginx-ingress  1        0s

==> v1beta1/ClusterRole
NAME           AGE
nginx-ingress  0s

==> v1beta1/ClusterRoleBinding
NAME           AGE
nginx-ingress  0s

==> v1beta1/Deployment
NAME                           READY  UP-TO-DATE  AVAILABLE  AGE
nginx-ingress-controller       0/1    1           0          0s
nginx-ingress-default-backend  0/1    1           0          0s

==> v1beta1/Role
NAME           AGE
nginx-ingress  0s

==> v1beta1/RoleBinding
NAME           AGE
nginx-ingress  0s

NOTES:
...

Helm has logged the Kubernetes resources it created as part of the chart installation.

      You can watch the Load Balancer become available by running:

      • kubectl get services -o wide -w nginx-ingress-controller
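Once the EXTERNAL-IP field changes from <pending> to an actual IP address, you can press CTRL + C to stop watching. If you only need the IP itself, for example in a script, a jsonpath query is a handy alternative (an optional convenience, not a step this tutorial depends on):

• kubectl get service nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'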

      You've installed the Nginx Ingress maintained by the Kubernetes community. It will route HTTP and HTTPS traffic from the Load Balancer to appropriate back-end Services, configured in Ingress Resources. In the next step, you'll expose the hello-kubernetes app deployments using an Ingress Resource.

      Step 3 — Exposing the App Using an Ingress

      Now you're going to create an Ingress Resource and use it to expose the hello-kubernetes app deployments at your desired domains. You'll then test it by accessing it from your browser.

      You'll store the Ingress in a file named hello-kubernetes-ingress.yaml. Create it using your editor:

      • nano hello-kubernetes-ingress.yaml

      Add the following lines to your file:

      hello-kubernetes-ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: hello-kubernetes-ingress
        annotations:
          kubernetes.io/ingress.class: nginx
      spec:
        rules:
        - host: hw1.example.com
          http:
            paths:
            - backend:
                serviceName: hello-kubernetes-first
                servicePort: 80
        - host: hw2.example.com
          http:
            paths:
            - backend:
                serviceName: hello-kubernetes-second
                servicePort: 80
      

      In the code above, you define an Ingress Resource with the name hello-kubernetes-ingress. Then, you specify two host rules, so that hw1.example.com is routed to the hello-kubernetes-first Service, and hw2.example.com is routed to the Service from the second deployment (hello-kubernetes-second).

      Remember to replace the highlighted domains with your own, then save and close the file.

      Create it in Kubernetes by running the following command:

      • kubectl create -f hello-kubernetes-ingress.yaml

      Next, you'll need to ensure that your two domains are pointed to the Load Balancer via A records. This is done through your DNS provider. To configure your DNS records on DigitalOcean, see How to Manage DNS Records.
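While you wait for DNS changes to propagate, you can simulate the routing by supplying the Host header yourself; this is an optional check, and your_load_balancer_ip is a placeholder for the external IP of the nginx-ingress-controller Service:

• curl -H "Host: hw1.example.com" http://your_load_balancer_ip

If the Ingress is working, the response should be the Hello World page for the first deployment.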

      You can now navigate to hw1.example.com in your browser. You will see the following:

      Hello Kubernetes - First Deployment

      The second variant (hw2.example.com) will show a different message:

      Hello Kubernetes - Second Deployment

      With this, you have verified that the Ingress Controller correctly routes requests; in this case, from your two domains to two different Services.

      You've created and configured an Ingress Resource to serve the hello-kubernetes app deployments at your domains. In the next step, you'll set up Cert-Manager, so you'll be able to secure your Ingress Resources with free TLS certificates from Let's Encrypt.

      Step 4 — Securing the Ingress Using Cert-Manager

      To secure your Ingress Resources, you'll install Cert-Manager, create a ClusterIssuer for production, and modify the configuration of your Ingress to take advantage of the TLS certificates. ClusterIssuers are Cert-Manager Resources in Kubernetes that provision TLS certificates. Once installed and configured, your app will be running behind HTTPS.

      Before installing Cert-Manager to your cluster via Helm, you'll manually apply the required CRDs (Custom Resource Definitions) from the jetstack/cert-manager repository by running the following command:

      • kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.8/deploy/manifests/00-crds.yaml

      You will see the following output:

      Output

customresourcedefinition.apiextensions.k8s.io/certificates.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/challenges.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/issuers.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/orders.certmanager.k8s.io created

      This shows that Kubernetes has applied the custom resources you require for cert-manager.

      Note: If you've followed this tutorial and the prerequisites, you haven't created a Kubernetes namespace called cert-manager, so you won't have to run the command in this note block. However, if this namespace does exist on your cluster, you'll need to inform Cert-Manager not to validate it with the following command:

      • kubectl label namespace cert-manager certmanager.k8s.io/disable-validation="true"

      The Webhook component of Cert-Manager requires TLS certificates to securely communicate with the Kubernetes API server. In order for Cert-Manager to generate certificates for it for the first time, resource validation must be disabled on the namespace it is deployed in. Otherwise, it would be stuck in an infinite loop; unable to contact the API and unable to generate the TLS certificates.

      The output will be:

      Output

      namespace/cert-manager labeled

      Next, you'll need to add the Jetstack Helm repository to Helm, which hosts the Cert-Manager chart. To do this, run the following command:

      • helm repo add jetstack https://charts.jetstack.io

      Helm will display the following output:

      Output

      "jetstack" has been added to your repositories

      Finally, install Cert-Manager into the cert-manager namespace:

      • helm install --name cert-manager --namespace cert-manager jetstack/cert-manager

      You will see the following output:

      Output

NAME:   cert-manager
LAST DEPLOYED: ...
NAMESPACE: cert-manager
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRole
NAME                                    AGE
cert-manager-edit                       3s
cert-manager-view                       3s
cert-manager-webhook:webhook-requester  3s

==> v1/Pod(related)
NAME                                     READY  STATUS             RESTARTS  AGE
cert-manager-5d669ffbd8-rb6tr            0/1    ContainerCreating  0         2s
cert-manager-cainjector-79b7fc64f-gqbtz  0/1    ContainerCreating  0         2s
cert-manager-webhook-6484955794-v56lx    0/1    ContainerCreating  0         2s

...

NOTES:
cert-manager has been deployed successfully!

In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).

More information on the different types of issuers and how to configure them
can be found in our documentation:

https://docs.cert-manager.io/en/latest/reference/issuers.html

For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:

https://docs.cert-manager.io/en/latest/reference/ingress-shim.html

      The output shows that the installation was successful. As listed in the NOTES in the output, you'll need to set up an Issuer to issue TLS certificates.

      You'll now create one that issues Let's Encrypt certificates, and you'll store its configuration in a file named production_issuer.yaml. Create it and open it for editing:

      • nano production_issuer.yaml

      Add the following lines:

      production_issuer.yaml

      apiVersion: certmanager.k8s.io/v1alpha1
      kind: ClusterIssuer
      metadata:
        name: letsencrypt-prod
      spec:
        acme:
          # The ACME server URL
          server: https://acme-v02.api.letsencrypt.org/directory
          # Email address used for ACME registration
          email: your_email_address
          # Name of a secret used to store the ACME account private key
          privateKeySecretRef:
            name: letsencrypt-prod
          # Enable the HTTP-01 challenge provider
          http01: {}
      

      This configuration defines a ClusterIssuer that contacts Let's Encrypt in order to issue certificates. You'll need to replace your_email_address with your email address in order to receive possible urgent notices regarding the security and expiration of your certificates.

      Save and close the file.

      Roll it out with kubectl:

      • kubectl create -f production_issuer.yaml

      You will see the following output:

      Output

      clusterissuer.certmanager.k8s.io/letsencrypt-prod created
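Optionally, you can verify that the ClusterIssuer registered an account with the ACME server by describing it; once registration succeeds, its status should report a Ready condition:

• kubectl describe clusterissuer letsencrypt-prod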

      With Cert-Manager installed, you're ready to introduce the certificates to the Ingress Resource defined in the previous step. Open hello-kubernetes-ingress.yaml for editing:

      • nano hello-kubernetes-ingress.yaml

      Add the highlighted lines:

      hello-kubernetes-ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: hello-kubernetes-ingress
        annotations:
          kubernetes.io/ingress.class: nginx
          certmanager.k8s.io/cluster-issuer: letsencrypt-prod
      spec:
        tls:
        - hosts:
          - hw1.example.com
          - hw2.example.com
          secretName: letsencrypt-prod
        rules:
        - host: hw1.example.com
          http:
            paths:
            - backend:
                serviceName: hello-kubernetes-first
                servicePort: 80
        - host: hw2.example.com
          http:
            paths:
            - backend:
                serviceName: hello-kubernetes-second
                servicePort: 80
      

The tls block under spec defines the Secret in which the certificates for your sites (listed under hosts), issued by the letsencrypt-prod ClusterIssuer, will be stored. The secretName must be different for every Ingress you create.

      Remember to replace the hw1.example.com and hw2.example.com with your own domains. When you've finished editing, save and close the file.

      Re-apply this configuration to your cluster by running the following command:

      • kubectl apply -f hello-kubernetes-ingress.yaml

      You will see the following output:

      Output

      ingress.extensions/hello-kubernetes-ingress configured

      You'll need to wait a few minutes for the Let's Encrypt servers to issue a certificate for your domains. In the meantime, you can track its progress by inspecting the output of the following command:

      • kubectl describe certificate hello-kubernetes

      The end of the output will look similar to this:

      Output

Events:
  Type    Reason              Age   From          Message
  ----    ------              ----  ----          -------
  Normal  Generated           56s   cert-manager  Generated new private key
  Normal  GenerateSelfSigned  56s   cert-manager  Generated temporary self signed certificate
  Normal  OrderCreated        56s   cert-manager  Created Order resource "hello-kubernetes-1197334873"
  Normal  OrderComplete       31s   cert-manager  Order "hello-kubernetes-1197334873" completed successfully
  Normal  CertIssued          31s   cert-manager  Certificate issued successfully

      When your last line of output reads Certificate issued successfully, you can exit by pressing CTRL + C. Navigate to one of your domains in your browser to test. You'll see the padlock to the left of the address bar in your browser, signifying that your connection is secure.
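If you also want to inspect the certificate from the command line, curl's verbose mode prints the issuer and validity details during the TLS handshake (replace the domain with one of your own):

• curl -vI https://hw1.example.com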

      In this step, you have installed Cert-Manager using Helm and created a Let's Encrypt ClusterIssuer. After, you updated your Ingress Resource to take advantage of the Issuer for generating TLS certificates. In the end, you have confirmed that HTTPS works correctly by navigating to one of your domains in your browser.

      Conclusion

      You have now successfully set up the Nginx Ingress Controller and Cert-Manager on your DigitalOcean Kubernetes cluster using Helm. You are now able to expose your apps to the Internet, at your domains, secured using Let's Encrypt TLS certificates.

      For further information about the Helm package manager, read this introduction article.




      How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes


      Introduction

      Kubernetes Ingresses allow you to flexibly route traffic from outside your Kubernetes cluster to Services inside of your cluster. This is accomplished using Ingress Resources, which define rules for routing HTTP and HTTPS traffic to Kubernetes Services, and Ingress Controllers, which implement the rules by load balancing traffic and routing it to the appropriate backend Services. Popular Ingress Controllers include Nginx, Contour, HAProxy, and Traefik. Ingresses provide a more efficient and flexible alternative to setting up multiple LoadBalancer services, each of which uses its own dedicated Load Balancer.

      In this guide, we’ll set up the Kubernetes-maintained Nginx Ingress Controller, and create some Ingress Resources to route traffic to several dummy backend services. Once we’ve set up the Ingress, we’ll install cert-manager into our cluster to manage and provision TLS certificates for encrypting HTTP traffic to the Ingress.

      Prerequisites

      Before you begin with this guide, you should have the following available to you:

      • A Kubernetes 1.10+ cluster with role-based access control (RBAC) enabled
      • The kubectl command-line tool installed on your local machine and configured to connect to your cluster. You can read more about installing kubectl in the official documentation.
      • A domain name and DNS A records which you can point to the DigitalOcean Load Balancer used by the Ingress. If you are using DigitalOcean to manage your domain’s DNS records, consult How to Manage DNS Records to learn how to create A records.
      • The Helm package manager installed on your local machine and Tiller installed on your cluster, as detailed in How To Install Software on Kubernetes Clusters with the Helm Package Manager.
      • The wget command-line utility installed on your local machine. You can install wget using the package manager built into your operating system.

      Once you have these components set up, you’re ready to begin with this guide.

      Step 1 — Setting Up Dummy Backend Services

      Before we deploy the Ingress Controller, we’ll first create and roll out two dummy echo Services to which we’ll route external traffic using the Ingress. The echo Services will run the hashicorp/http-echo web server container, which returns a page containing a text string passed in when the web server is launched. To learn more about http-echo, consult its GitHub Repo, and to learn more about Kubernetes Services, consult Services from the official Kubernetes docs.

On your local machine, create and edit a file called echo1.yaml using nano or your favorite editor:

• nano echo1.yaml

      Paste in the following Service and Deployment manifest:

      echo1.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: echo1
      spec:
        ports:
        - port: 80
          targetPort: 5678
        selector:
          app: echo1
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: echo1
      spec:
        selector:
          matchLabels:
            app: echo1
        replicas: 2
        template:
          metadata:
            labels:
              app: echo1
          spec:
            containers:
            - name: echo1
              image: hashicorp/http-echo
              args:
              - "-text=echo1"
              ports:
              - containerPort: 5678
      

In this file, we define a Service called echo1 which routes traffic to Pods with the app: echo1 label selector. It accepts TCP traffic on port 80 and routes it to port 5678, http-echo's default port.

      We then define a Deployment, also called echo1, which manages Pods with the app: echo1 Label Selector. We specify that the Deployment should have 2 Pod replicas, and that the Pods should start a container called echo1 running the hashicorp/http-echo image. We pass in the text parameter and set it to echo1, so that the http-echo web server returns echo1. Finally, we open port 5678 on the Pod container.

      Once you're satisfied with your dummy Service and Deployment manifest, save and close the file.

      Then, create the Kubernetes resources using kubectl create with the -f flag, specifying the file you just saved as a parameter:

      • kubectl create -f echo1.yaml

      You should see the following output:

      Output

service/echo1 created
deployment.apps/echo1 created

Verify that the Service started correctly by confirming that it has a ClusterIP, the internal IP on which the Service is exposed:

• kubectl get svc echo1

      You should see the following output:

      Output

NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
echo1   ClusterIP   10.245.222.129   <none>        80/TCP    60s

      This indicates that the echo1 Service is now available internally at 10.245.222.129 on port 80. It will forward traffic to containerPort 5678 on the Pods it selects.
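Because a ClusterIP is reachable only from inside the cluster, one optional way to test echo1 before the Ingress exists is a short-lived Pod that queries the Service by name (a throwaway check, not part of the original steps):

• kubectl run curl-test --rm -it --image=alpine --restart=Never -- wget -qO- http://echo1

If the Service is wired up correctly, this should print echo1 and then delete the test Pod.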

      Now that the echo1 Service is up and running, repeat this process for the echo2 Service.

Create and open a file called echo2.yaml:

• nano echo2.yaml

      echo2.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: echo2
      spec:
        ports:
        - port: 80
          targetPort: 5678
        selector:
          app: echo2
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: echo2
      spec:
        selector:
          matchLabels:
            app: echo2
        replicas: 1
        template:
          metadata:
            labels:
              app: echo2
          spec:
            containers:
            - name: echo2
              image: hashicorp/http-echo
              args:
              - "-text=echo2"
              ports:
              - containerPort: 5678
      

      Here, we essentially use the same Service and Deployment manifest as above, but name and relabel the Service and Deployment echo2. In addition, to provide some variety, we create only 1 Pod replica. We ensure that we set the text parameter to echo2 so that the web server returns the text echo2.

      Save and close the file, and create the Kubernetes resources using kubectl:

      • kubectl create -f echo2.yaml

      You should see the following output:

      Output

service/echo2 created
deployment.apps/echo2 created

Once again, verify that the Service is up and running:

• kubectl get svc

      You should see both the echo1 and echo2 Services with assigned ClusterIPs:

      Output

NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
echo1        ClusterIP   10.245.222.129   <none>        80/TCP    6m6s
echo2        ClusterIP   10.245.128.224   <none>        80/TCP    6m3s
kubernetes   ClusterIP   10.245.0.1       <none>        443/TCP   4d21h

      Now that our dummy echo web services are up and running, we can move on to rolling out the Nginx Ingress Controller.

      Step 2 — Setting Up the Kubernetes Nginx Ingress Controller

      In this step, we'll roll out the Kubernetes-maintained Nginx Ingress Controller. Note that there are several Nginx Ingress Controllers; the Kubernetes community maintains the one used in this guide and Nginx Inc. maintains kubernetes-ingress. The instructions in this tutorial are based on those from the official Kubernetes Nginx Ingress Controller Installation Guide.

      The Nginx Ingress Controller consists of a Pod that runs the Nginx web server and watches the Kubernetes Control Plane for new and updated Ingress Resource objects. An Ingress Resource is essentially a list of traffic routing rules for backend Services. For example, an Ingress rule can specify that HTTP traffic arriving at the path /web1 should be directed towards the web1 backend web server. Using Ingress Resources, you can also perform host-based routing: for example, routing requests that hit web1.your_domain.com to the backend Kubernetes Service web1.
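As a sketch of the path-based variant mentioned above (hypothetical names; this tutorial itself uses host-based rules), such a rule might look like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - http:
      paths:
      - path: /web1
        backend:
          serviceName: web1
          servicePort: 80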

      In this case, because we’re deploying the Ingress Controller to a DigitalOcean Kubernetes cluster, the Controller will create a LoadBalancer Service that spins up a DigitalOcean Load Balancer to which all external traffic will be directed. This Load Balancer will route external traffic to the Ingress Controller Pod running Nginx, which then forwards traffic to the appropriate backend Services.

      We'll begin by first creating the Kubernetes resources required by the Nginx Ingress Controller. These consist of ConfigMaps containing the Controller's configuration, Role-based Access Control (RBAC) Roles to grant the Controller access to the Kubernetes API, and the actual Ingress Controller Deployment. To see a full list of these required resources, consult the manifest from the Kubernetes Nginx Ingress Controller’s GitHub repo.

      To create these mandatory resources, use kubectl apply and the -f flag to specify the manifest file hosted on GitHub:

      • kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

      We use apply instead of create here so that in the future we can incrementally apply changes to the Ingress Controller objects instead of completely overwriting them. To learn more about apply, consult Managing Resources from the official Kubernetes docs.

      You should see the following output:

      Output

namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.extensions/nginx-ingress-controller created

      This output also serves as a convenient summary of all the Ingress Controller objects created from the mandatory.yaml manifest.

      Next, we'll create the Ingress Controller LoadBalancer Service, which will create a DigitalOcean Load Balancer that will load balance and route HTTP and HTTPS traffic to the Ingress Controller Pod deployed in the previous command.

      To create the LoadBalancer Service, once again kubectl apply a manifest file containing the Service definition:

      • kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml

      You should see the following output:

      Output

      service/ingress-nginx created

      Now, confirm that the DigitalOcean Load Balancer was successfully created by fetching the Service details with kubectl:

      • kubectl get svc --namespace=ingress-nginx

      You should see an external IP address, corresponding to the IP address of the DigitalOcean Load Balancer:

      Output

NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.245.247.67   203.0.113.0   80:32486/TCP,443:32096/TCP   20h

      Note down the Load Balancer's external IP address, as you'll need it in a later step.

      This load balancer receives traffic on HTTP and HTTPS ports 80 and 443, and forwards it to the Ingress Controller Pod. The Ingress Controller will then route the traffic to the appropriate backend Service.
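If you'd rather script this step, you can capture the external IP in a shell variable using a jsonpath query (an optional convenience, not something the tutorial depends on):

• export LB_IP=$(kubectl get svc ingress-nginx --namespace=ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
• echo $LB_IP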

      We can now point our DNS records at this external Load Balancer and create some Ingress Resources to implement traffic routing rules.

      Step 3 — Creating the Ingress Resource

      Let's begin by creating a minimal Ingress Resource to route traffic directed at a given subdomain to a corresponding backend Service.

      In this guide, we'll use the test domain example.com. You should substitute this with the domain name you own.

      We'll first create a simple rule to route traffic directed at echo1.example.com to the echo1 backend service and traffic directed at echo2.example.com to the echo2 backend service.

Begin by opening up a file called echo_ingress.yaml in your favorite editor:

• nano echo_ingress.yaml

      Paste in the following ingress definition:

      echo_ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: echo-ingress
      spec:
        rules:
        - host: echo1.example.com
          http:
            paths:
            - backend:
                serviceName: echo1
                servicePort: 80
        - host: echo2.example.com
          http:
            paths:
            - backend:
                serviceName: echo2
                servicePort: 80
      

      When you've finished editing your Ingress rules, save and close the file.

      Here, we've specified that we'd like to create an Ingress Resource called echo-ingress, and route traffic based on the Host header. An HTTP request Host header specifies the domain name of the target server. To learn more about Host request headers, consult the Mozilla Developer Network definition page. Requests with host echo1.example.com will be directed to the echo1 backend set up in Step 1, and requests with host echo2.example.com will be directed to the echo2 backend.

      You can now create the Ingress using kubectl:

      • kubectl apply -f echo_ingress.yaml

      You'll see the following output confirming the Ingress creation:

      Output

      ingress.extensions/echo-ingress created

      To test the Ingress, navigate to your DNS management service and create A records for echo1.example.com and echo2.example.com pointing to the DigitalOcean Load Balancer's external IP. The Load Balancer's external IP is the external IP address for the ingress-nginx Service, which we fetched in the previous step. If you are using DigitalOcean to manage your domain's DNS records, consult How to Manage DNS Records to learn how to create A records.

      Once you've created the necessary echo1.example.com and echo2.example.com DNS records, you can test the Ingress Controller and Resource you've created using the curl command line utility.

From your local machine, curl the echo1 Service:

• curl echo1.example.com

      You should get the following response from the echo1 service:

      Output

      echo1

      This confirms that your request to echo1.example.com is being correctly routed through the Nginx ingress to the echo1 backend Service.

Now, perform the same test for the echo2 Service:

• curl echo2.example.com

      You should get the following response from the echo2 Service:

      Output

      echo2

      This confirms that your request to echo2.example.com is being correctly routed through the Nginx ingress to the echo2 backend Service.

      At this point, you've successfully set up a basic Nginx Ingress to perform virtual host-based routing. In the next step, we'll install cert-manager using Helm to provision TLS certificates for our Ingress and enable the more secure HTTPS protocol.

      Step 4 — Installing and Configuring Cert-Manager

      In this step, we'll use Helm to install cert-manager into our cluster. cert-manager is a Kubernetes service that provisions TLS certificates from Let's Encrypt and other certificate authorities and manages their lifecycles. Certificates can be requested and configured by annotating Ingress Resources with the certmanager.k8s.io/issuer annotation, appending a tls section to the Ingress spec, and configuring one or more Issuers to specify your preferred certificate authority. To learn more about Issuer objects, consult the official cert-manager documentation on Issuers.

We'll first begin by using Helm to install cert-manager into our cluster:

      • helm install --name cert-manager --namespace kube-system --version v0.4.1 stable/cert-manager

      You should see the following output:

      Output

. . .

NOTES:
cert-manager has been deployed successfully!

In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).

More information on the different types of issuers and how to configure them
can be found in our documentation:

https://cert-manager.readthedocs.io/en/latest/reference/issuers.html

For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:

https://cert-manager.readthedocs.io/en/latest/reference/ingress-shim.html

      This indicates that the cert-manager installation was successful.

      Before we begin issuing certificates for our Ingress hosts, we need to create an Issuer, which specifies the certificate authority from which signed x509 certificates can be obtained. In this guide, we'll use the Let's Encrypt certificate authority, which provides free TLS certificates and offers both a staging server for testing your certificate configuration, and a production server for rolling out verifiable TLS certificates.

      Let's create a test Issuer to make sure the certificate provisioning mechanism is functioning correctly. Open a file named staging_issuer.yaml in your favorite text editor:

• nano staging_issuer.yaml

      Paste in the following ClusterIssuer manifest:

      staging_issuer.yaml

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: your_email_address_here
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    http01: {}
      

      Here we specify that we'd like to create a ClusterIssuer object called letsencrypt-staging, and use the Let's Encrypt staging server. We'll later use the production server to roll out our certificates, but the production server may rate-limit requests made against it, so for testing purposes it's best to use the staging URL.

      We then specify an email address to register the certificate, and create a Kubernetes Secret called letsencrypt-staging to store the certificate's private key. We also enable the HTTP-01 challenge mechanism. To learn more about these parameters, consult the official cert-manager documentation on Issuers.

      Roll out the ClusterIssuer using kubectl:

      • kubectl create -f staging_issuer.yaml

      You should see the following output:

      Output

      clusterissuer.certmanager.k8s.io/letsencrypt-staging created
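You can optionally confirm that the staging ClusterIssuer registered an ACME account by describing it and checking for a Ready condition in its status:

• kubectl describe clusterissuer letsencrypt-staging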

      Now that we've created our Let's Encrypt staging Issuer, we're ready to modify the Ingress Resource we created above and enable TLS encryption for the echo1.example.com and echo2.example.com paths.

Open up echo_ingress.yaml once again in your favorite editor:

• nano echo_ingress.yaml

      Add the following to the Ingress Resource manifest:

      echo_ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: echo-ingress
        annotations:  
          kubernetes.io/ingress.class: nginx
          certmanager.k8s.io/cluster-issuer: letsencrypt-staging
      spec:
        tls:
        - hosts:
          - echo1.example.com
          - echo2.example.com
          secretName: letsencrypt-staging
        rules:
        - host: echo1.example.com
          http:
            paths:
            - backend:
                serviceName: echo1
                servicePort: 80
        - host: echo2.example.com
          http:
            paths:
            - backend:
                serviceName: echo2
                servicePort: 80
      

      Here we add some annotations to specify the ingress.class, which determines the Ingress Controller that should be used to implement the Ingress Rules. In addition, we define the cluster-issuer to be letsencrypt-staging, the certificate Issuer we just created.

Finally, we add a tls block to specify the hosts for which we want to acquire certificates, and the secretName under which the issued certificate and its private key will be stored.

      When you're done making changes, save and close the file.

      We'll now update the existing Ingress Resource using kubectl apply:

      • kubectl apply -f echo_ingress.yaml

      You should see the following output:

      Output

      ingress.extensions/echo-ingress configured

You can use kubectl describe to track the state of the Ingress changes you've just applied:

• kubectl describe ingress

      Output

Events:
  Type    Reason             Age               From                      Message
  ----    ------             ----              ----                      -------
  Normal  CREATE             14m               nginx-ingress-controller  Ingress default/echo-ingress
  Normal  UPDATE             1m (x2 over 13m)  nginx-ingress-controller  Ingress default/echo-ingress
  Normal  CreateCertificate  1m                cert-manager              Successfully created Certificate "letsencrypt-staging"

      Once the certificate has been successfully created, you can run an additional describe on it to further confirm its successful creation:

      • kubectl describe certificate

      You should see the following output in the Events section:

      Output

Events:
  Type    Reason          Age  From          Message
  ----    ------          ---- ----          -------
  Normal  CreateOrder     50s  cert-manager  Created new ACME order, attempting validation...
  Normal  DomainVerified  15s  cert-manager  Domain "echo2.example.com" verified with "http-01" validation
  Normal  DomainVerified  3s   cert-manager  Domain "echo1.example.com" verified with "http-01" validation
  Normal  IssueCert       3s   cert-manager  Issuing certificate...
  Normal  CertObtained    1s   cert-manager  Obtained certificate from ACME server
  Normal  CertIssued      1s   cert-manager  Certificate issued successfully

      This confirms that the TLS certificate was successfully issued and HTTPS encryption is now active for the two domains configured.

      We're now ready to send a request to a backend echo server to test that HTTPS is functioning correctly.

      Run the following wget command to send a request to echo1.example.com and print the response headers to STDOUT:

      • wget --save-headers -O- echo1.example.com

      You should see the following output:

      Output

URL transformed to HTTPS due to an HSTS policy
--2018-12-11 14:38:24--  https://echo1.example.com/
Resolving echo1.example.com (echo1.example.com)... 203.0.113.0
Connecting to echo1.example.com (echo1.example.com)|203.0.113.0|:443... connected.
ERROR: cannot verify echo1.example.com's certificate, issued by ‘CN=Fake LE Intermediate X1’:
  Unable to locally verify the issuer's authority.
To connect to echo1.example.com insecurely, use `--no-check-certificate'.

      This indicates that HTTPS has successfully been enabled, but the certificate cannot be verified as it's a fake temporary certificate issued by the Let's Encrypt staging server.
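To confirm the response body despite the untrusted certificate, you can re-run wget with the flag the error message suggests; skipping verification is reasonable here only because this is a known test against the staging CA:

• wget --save-headers -O- --no-check-certificate echo1.example.com

The body of the response should now contain the text echo1.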

      Now that we've tested that everything works using this temporary fake certificate, we can roll out production certificates for the two hosts echo1.example.com and echo2.example.com.

      Step 5 — Rolling Out Production Issuer

      In this step we’ll modify the procedure used to provision staging certificates, and generate a valid, verifiable production certificate for our Ingress hosts.

      To begin, we'll first create a production certificate ClusterIssuer.

      Open a file called prod_issuer.yaml in your favorite editor:

• nano prod_issuer.yaml

      Paste in the following manifest:

      prod_issuer.yaml

      apiVersion: certmanager.k8s.io/v1alpha1
      kind: ClusterIssuer
      metadata:
        name: letsencrypt-prod
      spec:
        acme:
          # The ACME server URL
          server: https://acme-v02.api.letsencrypt.org/directory
          # Email address used for ACME registration
          email: your_email_address_here
          # Name of a secret used to store the ACME account private key
          privateKeySecretRef:
            name: letsencrypt-prod
          # Enable the HTTP-01 challenge provider
          http01: {}
      

      Note the different ACME server URL, and the letsencrypt-prod secret key name.

      When you're done editing, save and close the file.

      Now, roll out this Issuer using kubectl:

      • kubectl create -f prod_issuer.yaml

      You should see the following output:

      Output

      clusterissuer.certmanager.k8s.io/letsencrypt-prod created

Update echo_ingress.yaml to use this new Issuer:

• nano echo_ingress.yaml

      Make the following changes to the file:

      echo_ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: echo-ingress
        annotations:  
          kubernetes.io/ingress.class: nginx
          certmanager.k8s.io/cluster-issuer: letsencrypt-prod
      spec:
        tls:
        - hosts:
          - echo1.example.com
          - echo2.example.com
          secretName: letsencrypt-prod
        rules:
        - host: echo1.example.com
          http:
            paths:
            - backend:
                serviceName: echo1
                servicePort: 80
        - host: echo2.example.com
          http:
            paths:
            - backend:
                serviceName: echo2
                servicePort: 80
      

      Here, we update both the ClusterIssuer and secret key to letsencrypt-prod.

      Once you're satisfied with your changes, save and close the file.

      Roll out the changes using kubectl apply:

      • kubectl apply -f echo_ingress.yaml

      Output

      ingress.extensions/echo-ingress configured

      Wait a couple of minutes for the Let's Encrypt production server to issue the certificate. You can track its progress using kubectl describe on the certificate object:

      • kubectl describe certificate letsencrypt-prod

      Once you see the following output, the certificate has been issued successfully:

      Output

Events:
  Type    Reason          Age    From          Message
  ----    ------          ----   ----          -------
  Normal  CreateOrder     4m4s   cert-manager  Created new ACME order, attempting validation...
  Normal  DomainVerified  3m30s  cert-manager  Domain "echo2.example.com" verified with "http-01" validation
  Normal  DomainVerified  3m18s  cert-manager  Domain "echo1.example.com" verified with "http-01" validation
  Normal  IssueCert       3m18s  cert-manager  Issuing certificate...
  Normal  CertObtained    3m16s  cert-manager  Obtained certificate from ACME server
  Normal  CertIssued      3m16s  cert-manager  Certificate issued successfully

We'll now perform a test using curl to verify that HTTPS is working correctly:

• curl echo1.example.com

      You should see the following:

      Output

<html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx/1.15.6</center>
</body>
</html>

      This indicates that HTTP requests are being redirected to use HTTPS.
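You can have curl follow that redirect automatically with the -L flag, which should terminate at the HTTPS endpoint and return the echo1 text:

• curl -L echo1.example.com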

      Run curl on https://echo1.example.com:

      • curl https://echo1.example.com

      You should now see the following output:

      Output

      echo1

      You can run the previous command with the verbose -v flag to dig deeper into the certificate handshake and to verify the certificate information.

      At this point, you've successfully configured HTTPS using a Let's Encrypt certificate for your Nginx Ingress.

      Conclusion

      In this guide, you set up an Nginx Ingress to load balance and route external requests to backend Services inside of your Kubernetes cluster. You also secured the Ingress by installing the cert-manager certificate provisioner and setting up a Let's Encrypt certificate for two host paths.

      There are many alternatives to the Nginx Ingress Controller. To learn more, consult Ingress controllers from the official Kubernetes documentation.


