The Linode Kubernetes Engine (LKE) is Linode’s managed Kubernetes service. When you deploy an LKE cluster, you receive a Kubernetes Master which runs your cluster’s control plane components, at no additional cost. The control plane includes Linode’s Cloud Controller Manager (CCM), which provides a way for your cluster to access additional Linode services. Linode’s CCM provides access to Linode’s load balancing service, Linode NodeBalancers.
NodeBalancers provide your Kubernetes cluster with a reliable way of exposing resources to the public internet. The LKE control plane handles the creation and deletion of the NodeBalancer, and correctly identifies the resources, and their networking, that the NodeBalancer routes traffic to. Whenever a Kubernetes Service of the LoadBalancer type is created, your Kubernetes cluster creates a Linode NodeBalancer with the help of the Linode CCM.
Note
Adding external Linode NodeBalancers to your LKE cluster will incur additional costs. See Linode’s Pricing page for details.
Note
All existing LKE clusters receive CCM updates automatically every two weeks when a new LKE release is deployed. See the LKE Changelog for information on the latest LKE release.
Note
The Linode Terraform K8s module also deploys a Kubernetes cluster with the Linode CCM installed by default. Any Kubernetes cluster with a Linode CCM installation can make use of Linode NodeBalancers in the ways described in this guide.
In this Guide
This guide will show you:
Before You Begin
This guide assumes you have a working Kubernetes cluster that was deployed using the Linode Kubernetes Engine (LKE). You can deploy a Kubernetes cluster using LKE in the following ways:
Adding Linode NodeBalancers to your Kubernetes Cluster
To add an external load balancer to your Kubernetes cluster you can add the example lines to a new configuration file, or more commonly, to a Service file. When the configuration is applied to your cluster, Linode NodeBalancers will be created, and added to your Kubernetes cluster. Your cluster will be accessible via a public IP address and the NodeBalancers will route external traffic to a Service running on healthy nodes in your cluster.
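A sketch of such a Service manifest follows; the Service name, selector, and port values here are illustrative and should be replaced with your own:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
  labels:
    app: example-app
spec:
  # The LoadBalancer type instructs the Linode CCM to provision a NodeBalancer.
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: example-app
```

When this manifest is applied, the Linode CCM provisions a NodeBalancer and populates the Service's external IP address with the NodeBalancer's public IP.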
Note
Billing for Linode NodeBalancers begins as soon as the example configuration is successfully applied to your Kubernetes cluster.
Configuring your Linode NodeBalancers with Annotations
The Linode CCM accepts annotations that configure the behavior and settings of your cluster’s underlying NodeBalancers.
The table below provides a list of all available annotation suffixes.
Each annotation must be prefixed with service.beta.kubernetes.io/linode-loadbalancer-. For example, the complete value for the throttle annotation is service.beta.kubernetes.io/linode-loadbalancer-throttle.
Annotation values such as http are case-sensitive.
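For instance, a hypothetical Service that sets the throttle and default-protocol annotations would include the fully prefixed annotations in its metadata (the Service name, selector, and values shown are examples only):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
  annotations:
    # Limit each client IP to 4 new connections per second.
    service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
    # Use HTTP as the NodeBalancer's default protocol.
    service.beta.kubernetes.io/linode-loadbalancer-default-protocol: "http"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: example-app
```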
Annotations Reference
Each entry below lists the annotation suffix, its accepted values, its default value, and a description.

throttle
• Values: integer, 0–20 (0 disables the throttle)
• Default: 20
• Description: The client connection throttle limits the number of new connections per second from the same client IP.

default-protocol
• Values: string; one of tcp, http, https
• Default: tcp
• Description: Specifies the protocol for the NodeBalancer to use.

port-*
• Values: a JSON object of port configurations, for example: { "tls-secret-name": "prod-app-tls", "protocol": "https" }
• Default: None
• Description: Specifies a NodeBalancer port to configure, e.g. port-443. Ports 1–65534 are available for balancing. The available port configurations are:
  "tls-secret-name": use this key to provide a Kubernetes Secret name when setting up TLS termination for a service to be accessed over HTTPS. The Secret type should be kubernetes.io/tls.
  "protocol": specifies the protocol to use for this port, e.g. tcp, http, or https. The default protocol is tcp, unless you provide a different value for the default-protocol annotation.

check-type
• Values: string; one of none, connection, http, http_body
• Default: None
• Description: The type of health check to perform on Nodes to ensure that they are serving requests. The behavior of each check is the following:
  none: no check is performed
  connection: checks for a valid TCP handshake
  http: checks for a 2xx or 3xx response code
  http_body: checks for a specific string within the response body of the health check URL. Use the check-body annotation to provide the string to check for.

check-path
• Values: string
• Default: None
• Description: The URL path that the NodeBalancer uses to check the health of the back-end Nodes.

check-body
• Values: string
• Default: None
• Description: The string that must be present in the response body of the URL path used for health checks. The check-type annotation must be set to http_body for this check to apply.

check-interval
• Values: integer
• Default: None
• Description: The duration, in seconds, between health checks.

check-timeout
• Values: integer, between 1 and 30
• Default: None
• Description: The duration, in seconds, to wait for a health check to succeed before it is considered a failure.

check-attempts
• Values: integer, between 1 and 30
• Default: None
• Description: The number of health checks to perform before removing a back-end Node from service.

check-passive
• Values: boolean
• Default: false
• Description: When true, 5xx status codes cause the health check to fail.

preserve
• Values: boolean
• Default: false
• Description: When true, deleting a LoadBalancer Service does not delete the underlying NodeBalancer.
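As an illustration of the health-check annotations, a hypothetical Service could configure an http_body check as follows; the path, body string, and timing values are examples only:

```yaml
metadata:
  annotations:
    # Check the response body of /healthz for the string "ok".
    service.beta.kubernetes.io/linode-loadbalancer-check-type: "http_body"
    service.beta.kubernetes.io/linode-loadbalancer-check-path: "/healthz"
    service.beta.kubernetes.io/linode-loadbalancer-check-body: "ok"
    # Check every 10 seconds; fail after 3 checks that each time out at 5 seconds.
    service.beta.kubernetes.io/linode-loadbalancer-check-interval: "10"
    service.beta.kubernetes.io/linode-loadbalancer-check-timeout: "5"
    service.beta.kubernetes.io/linode-loadbalancer-check-attempts: "3"
```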
Configuring Linode NodeBalancers for TLS Encryption
This section describes how to set up TLS termination on your Linode NodeBalancers so a Kubernetes Service can be accessed over HTTPS.
Generating a TLS type Secret
Kubernetes allows you to store sensitive information in a Secret object for use within your cluster. This is useful for storing things like passwords and API tokens. In this section, you will create a Kubernetes secret to store Transport Layer Security (TLS) certificates and keys that you will then use to configure TLS termination on your Linode NodeBalancers.
In the context of the Linode CCM, Secrets are useful for storing Transport Layer Security (TLS) certificates and keys. The linode-loadbalancer-tls annotation requires TLS certificates and keys to be stored as Kubernetes Secrets with the type tls. Follow the steps in this section to create a Kubernetes TLS Secret.
Note
Generate a TLS key and certificate using a TLS toolkit like OpenSSL. Be sure to change the CN and O values to those of your own website domain.
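A sketch of such an OpenSSL command, assuming OpenSSL is installed; the file names and subject values are placeholders to replace with your own:

```shell
# Generate a self-signed certificate and a 4096-bit RSA key with no passphrase.
# Replace the CN and O values with your own domain and organization.
openssl req -newkey rsa:4096 \
  -x509 \
  -sha256 \
  -days 365 \
  -nodes \
  -out example.crt \
  -keyout example.key \
  -subj "/CN=example.com/O=example-org"
```

The resulting example.crt and example.key files are the inputs to the kubectl create secret tls command described in the next step.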
Create the secret using the create secret tls command. Ensure you substitute $SECRET_NAME for the name you’d like to give to your secret. This will be how you reference the secret in your Service manifest.
If your key is not formatted correctly you’ll receive an error stating that there is no PEM formatted data within the key file.
Configuring TLS within a Service
In order to use https you’ll need to instruct the Service to use the correct port using the required annotations. You can add the following code snippet to a Service file to enable TLS termination on your NodeBalancers:
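A sketch of such a snippet, assuming a TLS Secret named example-secret already exists; the Service name, selector, and target ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
  annotations:
    service.beta.kubernetes.io/linode-loadbalancer-default-protocol: "http"
    # Terminate TLS on port 443 using the example-secret Secret.
    service.beta.kubernetes.io/linode-loadbalancer-port-443: |
      { "tls-secret-name": "example-secret", "protocol": "https" }
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 80
  selector:
    app: example-app
```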
The service.beta.kubernetes.io/linode-loadbalancer-default-protocol annotation configures the NodeBalancer’s default protocol.
service.beta.kubernetes.io/linode-loadbalancer-port-443 specifies port 443 as the port to be configured. The value of this annotation is a JSON object designating the TLS secret name to use (example-secret) and the protocol to use for the port being configured (https).
If you have multiple Secrets and ports for different environments (testing, staging, etc.), you can define more than one secret and port pair:
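For instance, a hypothetical configuration could terminate TLS for a staging Secret on a second port alongside the production Secret; port 8443 and the Secret names here are examples:

```yaml
metadata:
  annotations:
    service.beta.kubernetes.io/linode-loadbalancer-default-protocol: "http"
    # Production certificate on port 443.
    service.beta.kubernetes.io/linode-loadbalancer-port-443: |
      { "tls-secret-name": "prod-secret", "protocol": "https" }
    # Staging certificate on port 8443.
    service.beta.kubernetes.io/linode-loadbalancer-port-8443: |
      { "tls-secret-name": "staging-secret", "protocol": "https" }
```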
kube-proxy will always attempt to proxy traffic to a random backend Pod. To direct traffic to the same Pod, you can use the sessionAffinity mechanism. When set to ClientIP, sessionAffinity ensures that all traffic from the same IP is directed to the same Pod. You can add the example lines to a Service configuration file to enable session affinity.
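A sketch of the relevant Service fields follows; the Service name, selector, and timeout value are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: LoadBalancer
  selector:
    app: example-app
  ports:
  - port: 80
    targetPort: 80
  # Route all traffic from a given client IP to the same Pod.
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      # Sticky-session duration, in seconds.
      timeoutSeconds: 100
```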
Removing Linode NodeBalancers from your Kubernetes Cluster
To delete a NodeBalancer and the Service that it represents, you can use the Service manifest file you used to create the NodeBalancer. Simply use the delete command and supply your file name with the -f flag:
kubectl delete -f example-service.yaml
Similarly, you can delete the Service by name:
kubectl delete service example-service
After deleting your service, its corresponding NodeBalancer will be removed from your Linode account.
Note
If your Service file used the preserve annotation, the underlying NodeBalancer will not be removed from your Linode account. See the annotations reference for details.
This guide is published under a CC BY-ND 4.0 license.
This guide will use an example Kubernetes Deployment and Service to demonstrate how to route external traffic to a Kubernetes application over HTTPS. This is accomplished using the NGINX Ingress Controller, cert-manager, and Linode NodeBalancers. The NGINX Ingress Controller uses Linode NodeBalancers, which are Linode’s load balancing service, to route a Kubernetes Service’s traffic to the appropriate backend Pods over HTTP and HTTPS. cert-manager creates a Transport Layer Security (TLS) certificate from the Let’s Encrypt certificate authority (CA), providing secure HTTPS access to a Kubernetes Service.
Before You Begin
This guide assumes you have a Kubernetes cluster with the Linode Cloud Controller Manager (CCM) installed. The Linode CCM is installed by default on clusters deployed with the Linode Kubernetes Engine and the Linode Terraform K8s module.
Note
The recommended way to deploy a Kubernetes cluster on Linode is using the Linode Kubernetes Engine (managed) or the Linode Terraform K8s module (unmanaged). However, to learn how to install the Linode CCM on a cluster that was not deployed in one of the two ways mentioned above, see the Installing the Linode CCM on an Unmanaged Kubernetes Cluster guide.
Install Helm 3 and kubectl in your local environment.
Purchase a domain name from a reliable domain registrar. In a later section, you will use Linode’s DNS Manager to create a new Domain and add DNS “A” records for two subdomains, one named blog and another named shop. Your subdomains will point to the example Kubernetes Services you create in this guide. The example domain names used throughout this guide are blog.example.com and shop.example.com.
Note
Optionally, you can create a wildcard DNS record, *.example.com, and point it to your NodeBalancer’s external IP address. Using a wildcard DNS record allows you to expose your Kubernetes Services without further configuration in the Linode DNS Manager.
Create an Example Application
The primary focus of this guide is to show you how to use the NGINX Ingress Controller and cert-manager to route traffic to a Kubernetes application over HTTPS. In this section, you will create two example applications that you will route external traffic to in a later section. The example application displays a page that returns information about the Deployment’s current backend Pod. This sample application is built using NGINX’s demo Docker image, nginxdemos/hello. You can replace the example applications used in this section with your own.
Create your Application Service and Deployment
Each example manifest file creates three Pods to serve the application.
Using a text editor, create a new file named hello-one.yaml with the contents of the example file.
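A sketch of what hello-one.yaml might contain, based on the nginxdemos/hello image and the three-Pod layout described above; the labels and port values are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-one
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: hello-one
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-one
spec:
  # Run three Pods to serve the application.
  replicas: 3
  selector:
    matchLabels:
      app: hello-one
  template:
    metadata:
      labels:
        app: hello-one
    spec:
      containers:
      - name: hello-one
        image: nginxdemos/hello
        ports:
        - containerPort: 80
```

A second file, hello-two.yaml, can mirror this manifest with each instance of hello-one replaced by hello-two. Applying both files, for example with kubectl create -f hello-one.yaml -f hello-two.yaml, should produce output like the lines that follow.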
service/hello-one created
deployment.apps/hello-one created
service/hello-two created
deployment.apps/hello-two created
Verify that the Services are running.
kubectl get svc
You should see a similar output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-one ClusterIP 10.128.94.166 80/TCP 6s
hello-two ClusterIP 10.128.102.187 80/TCP 6s
kubernetes ClusterIP 10.128.0.1 443/TCP 18m
Install the NGINX Ingress Controller
In this section you will use Helm to install the NGINX Ingress Controller on your Kubernetes Cluster. Installing the NGINX Ingress Controller will create Linode NodeBalancers that your cluster can make use of to load balance traffic to your example application.
Note
Add the Google stable Helm charts repository to your Helm repos.
Install the NGINX Ingress Controller. This installation will result in a Linode NodeBalancer being created.
helm install nginx-ingress stable/nginx-ingress
You will see a similar output:
NAME: nginx-ingress
LAST DEPLOYED: Mon Jul 20 10:27:03 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w nginx-ingress-controller'
...
Create Subdomain DNS Entries for your Example Applications
Now that Linode NodeBalancers have been created by the NGINX Ingress Controller, you can point subdomain DNS entries to the NodeBalancer’s public IPv4 address. Since this guide uses two example applications, it requires two subdomain entries.
Access your NodeBalancer’s assigned external IP address.
kubectl --namespace default get services -o wide -w nginx-ingress-controller
The command will return a similar output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
nginx-ingress-controller LoadBalancer 10.128.169.60 192.0.2.0 80:32401/TCP,443:30830/TCP 7h51m app.kubernetes.io/component=controller,app=nginx-ingress,release=nginx-ingress
Copy the IP address from the EXTERNAL-IP field, then navigate to Linode’s DNS Manager and add two “A” records, one for the blog subdomain and one for shop. Ensure you point each record to the NodeBalancer’s IPv4 address you retrieved in the previous step.
Now that your NGINX Ingress Controller has been deployed and your subdomains’ A records have been created, you are ready to enable HTTPS on each subdomain.
Create a TLS Certificate Using cert-manager
Note
Before performing the commands in this section, ensure that your DNS has had time to propagate across the internet. This process can take a while. You can query the status of your DNS by using the following command, substituting blog.example.com for your domain.
dig +short blog.example.com
If successful, the output should return the IP address of your NodeBalancer.
To enable HTTPS on your example application, you will create a Transport Layer Security (TLS) certificate from the Let’s Encrypt certificate authority (CA) using the ACME protocol. This will be facilitated by cert-manager, the native Kubernetes certificate management controller.
In this section you will install cert-manager using Helm and the required cert-manager CustomResourceDefinitions (CRDs). Then, you will create a ClusterIssuer resource to assist in creating a cluster’s TLS certificate.
Note
If you would like a deeper dive into cert-manager, see our cert-manager guide.
Add the Helm repository which contains the cert-manager Helm chart.
helm repo add jetstack https://charts.jetstack.io
Update your Helm repositories.
helm repo update
Install the cert-manager Helm chart. These basic configurations should be sufficient for many use cases; however, additional cert-manager configurable parameters can be found in cert-manager’s official documentation.
Verify that the corresponding cert-manager pods are now running.
kubectl get pods --namespace cert-manager
You should see a similar output:
NAME READY STATUS RESTARTS AGE
cert-manager-579d48dff8-84nw9 1/1 Running 3 1m
cert-manager-cainjector-789955d9b7-jfskr 1/1 Running 3 1m
cert-manager-webhook-64869c4997-hnx6n 1/1 Running 0 1m
Note
You should wait until all cert-manager pods are ready and running prior to proceeding to the next section.
Create a ClusterIssuer Resource
Create a manifest file named acme-issuer-prod.yaml that will be used to create a ClusterIssuer resource on your cluster. Ensure you replace [email protected] with your own email address.
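A sketch of what acme-issuer-prod.yaml might contain; the apiVersion shown matches recent cert-manager releases (older releases used cert-manager.io/v1alpha2), and the privateKeySecretRef name is an example you can change:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Replace with your own email address.
    email: [email protected]
    # Let's Encrypt's production ACME server.
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret that stores the ACME account's private key.
      name: letsencrypt-secret-prod
    solvers:
    # Solve challenges over HTTP using the NGINX Ingress class.
    - http01:
        ingress:
          class: nginx
```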
This manifest file creates a ClusterIssuer resource that will register an account on an ACME server. The value of spec.acme.server designates Let’s Encrypt’s production ACME server, which should be trusted by most browsers.
Note
Let’s Encrypt provides a staging ACME server that can be used to test issuing trusted certificates, while not worrying about hitting Let’s Encrypt’s production rate limits. The staging URL is https://acme-staging-v02.api.letsencrypt.org/directory.
The value of privateKeySecretRef.name provides the name of a secret containing the private key for this user’s ACME server account (this is tied to the email address you provide in the manifest file). The ACME server will use this key to identify you.
To ensure that you own the domain for which you will create a certificate, the ACME server will issue a challenge to a client. cert-manager provides two options for solving challenges, HTTP01 and DNS01. In this example, the HTTP01 challenge solver is used, configured in the solvers array. cert-manager spins up challenge solver Pods to solve the issued challenges and uses Ingress resources to route each challenge to the appropriate Pod.
Create the ClusterIssuer resource:
kubectl create -f acme-issuer-prod.yaml
You should see a similar output:
clusterissuer.cert-manager.io/letsencrypt-prod created
Enable HTTPS for your Application
Create the Ingress Resource
Create an Ingress resource manifest file named hello-app-ingress.yaml. If you assigned a different name to your ClusterIssuer, ensure you replace letsencrypt-prod with the name you used. Replace all hosts and host values with your own application’s domain name.
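A sketch of what hello-app-ingress.yaml might contain; the TLS secretName is an example (cert-manager creates the Secret for you), and the apiVersion may need to be adjusted for older clusters:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # Reference the ClusterIssuer created in the previous section.
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - blog.example.com
    - shop.example.com
    # cert-manager stores the issued certificate in this Secret.
    secretName: hello-app-tls
  rules:
  - host: blog.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-one
            port:
              number: 80
  - host: shop.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-two
            port:
              number: 80
```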
This resource defines how traffic coming from the Linode NodeBalancers is handled. In this case, NGINX will accept these connections over port 80, diverting traffic to both of your services via their domain names. The tls section of the Ingress resource manifest handles routing HTTPS traffic to the hostnames that are defined.
Create the Ingress resource.
kubectl create -f hello-app-ingress.yaml
You should see a similar output:
ingress.networking.k8s.io/hello-app-ingress created
Navigate to your app’s domain or, if you have been following along with the example, to blog.example.com and then shop.example.com. You should see the demo NGINX page load and display information about the Pod being used to serve your request.
Use your browser to view your TLS certificate. You should see that the certificate was issued by Let’s Encrypt Authority X3.
More Information
You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.