
      Deploy NodeBalancers with the Linode Cloud Controller Manager


      Updated by Linode. Written by Linode Community.

      The Linode Cloud Controller Manager (CCM) allows Kubernetes to deploy Linode NodeBalancers whenever a Service of the “LoadBalancer” type is created. This provides the Kubernetes cluster with a reliable way of exposing resources to the public internet. The CCM handles the creation and deletion of the NodeBalancer and correctly identifies the resources, and their networking, that the NodeBalancer will service.

      This guide will explain how to:

      • Create a service with the type “LoadBalancer.”
      • Use annotations to control the functionality of the NodeBalancer.
      • Use the NodeBalancer to terminate TLS encryption.

      Caution

      Using the Linode Cloud Controller Manager to create NodeBalancers will create billable resources on your Linode account. A NodeBalancer costs $10 a month. Be sure to follow the instructions at the end of the guide if you would like to delete these resources from your account.

      Before You Begin

      You should have a working knowledge of Kubernetes and familiarity with the kubectl command line tool before attempting the instructions found in this guide. For more information about Kubernetes, consult our Kubernetes Beginner’s Guide and our Getting Started with Kubernetes guide.

      When using the CCM for the first time, it’s highly suggested that you create a new Kubernetes cluster, as there are a number of issues that prevent the CCM from running on Nodes that are already in the “Ready” state. For a completely automated install, you can use the Linode CLI’s k8s-alpha command line tool, which utilizes Terraform to fully bootstrap a Kubernetes cluster on Linode. It includes the Linode Container Storage Interface (CSI) Driver plugin, the Linode CCM plugin, and the ExternalDNS plugin. For more information on creating a Kubernetes cluster with the Linode CLI, review our How to Deploy Kubernetes on Linode with the k8s-alpha CLI guide.

      Note

      To manually add the Linode CCM to your cluster, you must start kubelet with the --cloud-provider=external flag. kube-apiserver and kube-controller-manager must NOT supply the --cloud-provider flag. For more information, visit the upstream Cloud Controller documentation.
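      As a sketch, on a node whose kubelet is managed by systemd, the flag could be supplied through a drop-in file. The path and variable name below are common conventions, not taken from this guide:

```
# /etc/systemd/system/kubelet.service.d/20-cloud-provider.conf (hypothetical path)
# Adds the --cloud-provider=external flag to the kubelet's startup arguments
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"
```

      After adding a drop-in like this, you would reload systemd and restart the kubelet for the flag to take effect.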

      If you’d like to add the CCM to a cluster by hand, and you are using macOS, you can use the generate-manifest.sh file in the deploy folder of the CCM repository to generate a CCM manifest file that you can later apply to your cluster. Use the following command:

      ./generate-manifest.sh $LINODE_API_TOKEN us-east
      

      Be sure to replace $LINODE_API_TOKEN with a valid Linode API token, and replace us-east with the region of your choosing.

      To view a list of regions, you can use the Linode CLI, or you can view the Regions API endpoint.

      If you are not using macOS, you can copy the ccm-linode-template.yaml file and change the values of the data.apiToken and data.region fields manually.
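      Note that if these fields live under a Kubernetes Secret’s data section (as the generate-manifest.sh script assumes), their values must be base64-encoded before being pasted into the manifest. For example, assuming a placeholder token:

```shell
# Base64-encode an API token for use in a Secret's data field.
# "an-example-api-token" is a placeholder; substitute your real token.
printf '%s' "an-example-api-token" | base64
```
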

      Using the CCM

      To use the CCM, you must have a collection of Pods that need to be load balanced, usually from a Deployment. For this example, you will create a Deployment that deploys three NGINX Pods, and then create a Service to expose those Pods to the internet using the Linode CCM.

      1. Create a Deployment manifest describing the desired state of the three replica NGINX containers:

        nginx-deployment.yaml
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: nginx-deployment
          labels:
            app: nginx
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
              - name: nginx
                image: nginx
                ports:
                - containerPort: 80
      2. Use the create command to apply the manifest:

        kubectl create -f nginx-deployment.yaml
        
      3. Create a Service for the Deployment:

        nginx-service.yaml
        apiVersion: v1
        kind: Service
        metadata:
          name: nginx-service
          annotations:
            service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
          labels:
            app: nginx
        spec:
          type: LoadBalancer
          ports:
          - name: http
            port: 80
            protocol: TCP
            targetPort: 80
          selector:
            app: nginx
          sessionAffinity: None

        The above Service manifest includes a few key concepts.

        • The first is the spec.type of LoadBalancer. This type tells the Linode CCM to create a Linode NodeBalancer, which gives the Deployment it services a public-facing IP address for accessing the NGINX Pods.
        • Additional information is passed to the CCM through metadata annotations (service.beta.kubernetes.io/linode-loadbalancer-throttle in the example above), which are discussed in the next section.
      4. Use the create command to create the Service, and in turn, the NodeBalancer:

        kubectl create -f nginx-service.yaml
        

      You can log in to the Linode Cloud Manager to view your newly created NodeBalancer.

      Annotations

      There are a number of settings, called annotations, that you can use to further customize the functionality of your NodeBalancer. Each annotation should be included in the annotations section of the Service manifest file’s metadata, and all of the annotations are prefixed with service.beta.kubernetes.io/linode-loadbalancer-.

      Each annotation suffix is listed below with its accepted values, default value, and a description:

      • throttle — Values: 0-20 (0 disables the throttle). Default: 20. Client Connection Throttle; limits the number of new connections per second from the same client IP.
      • protocol — Values: tcp, http, https. Default: tcp. Specifies the protocol for the NodeBalancer.
      • tls — Example value: [ { "tls-secret-name": "prod-app-tls", "port": 443 } ]. Default: none. A JSON array (formatted as a string) that specifies which ports use TLS and their corresponding Secrets. The Secret type should be kubernetes.io/tls. For more information, see the TLS Encryption section.
      • check-type — Values: none, connection, http, http_body. Default: none. The type of health check to perform on Nodes to ensure that they are serving requests. connection checks for a valid TCP handshake, http checks for a 2xx or 3xx response code, and http_body checks for a certain string within the response body of the health check URL.
      • check-path — Value: string. Default: none. The URL path that the NodeBalancer will use to check on the health of the back-end Nodes.
      • check-body — Value: string. Default: none. The text that must be present in the body of the page used for health checks. For use with a check-type of http_body.
      • check-interval — Value: integer. Default: none. The duration, in seconds, between health checks.
      • check-timeout — Value: integer (between 1 and 30). Default: none. Duration, in seconds, to wait for a health check to succeed before it is considered a failure.
      • check-attempts — Value: integer (between 1 and 30). Default: none. Number of health checks to perform before removing a back-end Node from service.
      • check-passive — Value: boolean. Default: false. When true, 5xx status codes will cause the health check to fail.
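      For example, a Service that enables HTTP health checks might combine several of these annotations in its metadata. The values below are illustrative, not prescriptive:

```yaml
# Illustrative annotation block for a Service's metadata (values are examples)
metadata:
  annotations:
    service.beta.kubernetes.io/linode-loadbalancer-check-type: "http"
    service.beta.kubernetes.io/linode-loadbalancer-check-path: "/healthz"
    service.beta.kubernetes.io/linode-loadbalancer-check-interval: "10"
    service.beta.kubernetes.io/linode-loadbalancer-check-timeout: "5"
    service.beta.kubernetes.io/linode-loadbalancer-check-attempts: "3"
```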

      To learn more about checks, please see our reference guide to NodeBalancer health checks.

      TLS Encryption

      This section will describe how to set up TLS termination for a Service so that the Service can be accessed over https.

      Generating a TLS type Secret

      Kubernetes allows you to store secret information in a Secret object for use within your cluster. This is useful for storing things like passwords and API tokens. In the context of the Linode CCM, Secrets are useful for storing Transport Layer Security (TLS) certificates and keys. The linode-loadbalancer-tls annotation requires TLS certificates and keys to be stored as Kubernetes Secrets with the type of tls. Follow the next steps to create a valid tls type Secret:

      1. Generate a TLS key and certificate using a TLS toolkit like OpenSSL. Be sure to change the CN and O values to those of your own website domain.

        openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout key.pem -out cert.crt -subj "/CN=mywebsite.com/O=mywebsite.com"
        
      2. To create the secret, you can issue the create secret tls command, being sure to substitute $SECRET_NAME for the name you’d like to give to your secret. This will be how you reference the secret in your Service manifest.

        kubectl create secret tls $SECRET_NAME --key key.pem --cert cert.crt
        
      3. You can check to make sure your Secret has been successfully stored by using describe:

        kubectl describe secret $SECRET_NAME
        

        You should see output like the following:

          
        kubectl describe secret my-secret
        Name:         my-secret
        Namespace:    default
        Labels:       <none>
        Annotations:  <none>
        
        Type:  kubernetes.io/tls
        
        Data
        ====
        tls.crt:  1164 bytes
        tls.key:  1704 bytes
        
        

        If your key is not formatted correctly you’ll receive an error stating that there is no PEM formatted data within the key file.
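        You can also sanity-check the files with OpenSSL before creating the Secret. The sketch below regenerates a throwaway self-signed pair (the same command as above) so it stands alone:

```shell
# Generate a throwaway self-signed pair, as in the earlier step
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout key.pem -out cert.crt \
  -subj "/CN=mywebsite.com/O=mywebsite.com" 2>/dev/null

# Print the certificate's subject and validity window
openssl x509 -in cert.crt -noout -subject -dates

# Exit status 0 confirms the key is valid, PEM-formatted RSA
openssl rsa -in key.pem -check -noout
```
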

      Defining TLS within a Service

      In order to use https you’ll need to instruct the Service to use the correct port through the proper annotations. Take the following code snippet as an example:

      nginx-service.yaml
      ...
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-protocol: https
          service.beta.kubernetes.io/linode-loadbalancer-tls: '[ { "tls-secret-name": "my-secret",
            "port": 443 } ]'
      ...

      The linode-loadbalancer-protocol annotation identifies the https protocol. Then, the linode-loadbalancer-tls annotation defines which Secret and port to use for serving https traffic. If you have multiple Secrets and ports for different environments (testing, staging, etc.), you can define more than one secret and port pair:

      nginx-service-two-environments.yaml
      ...
          service.beta.kubernetes.io/linode-loadbalancer-tls: |
            [ { "tls-secret-name": "my-secret", "port": 443 }, { "tls-secret-name": "my-secret-staging", "port": 8443 } ]
      ...

      Next, you’ll need to set up your Service to expose the https port. The whole example might look like the following:

      nginx-service.yaml
      apiVersion: v1
      kind: Service
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-protocol: https
          service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
          service.beta.kubernetes.io/linode-loadbalancer-tls: '[ { "tls-secret-name": "my-secret",
            "port": 443 } ]'
        labels:
          app: nginx
        name: nginx-service
      spec:
        ports:
        - name: https
          port: 443
          protocol: TCP
          targetPort: 80
        selector:
          app: nginx
        type: LoadBalancer

      Note that here the NodeBalancer created by the Service is terminating the TLS encryption and proxying that to port 80 on the NGINX Pod. If you had a Pod that listened on port 443, you would set the targetPort to that value.
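      For instance, if the backend Pods listened on port 443 themselves, the ports entry of the Service would look like the following sketch, with the rest of the manifest unchanged:

```yaml
# Hypothetical ports entry for a backend that listens on port 443
ports:
- name: https
  port: 443
  protocol: TCP
  targetPort: 443
```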

      Session Affinity

      By default, kube-proxy proxies traffic to a random backend Pod. To ensure that traffic from a given client is always directed to the same Pod, you can use the sessionAffinity mechanism. When set to ClientIP, sessionAffinity ensures that all traffic from the same IP will be directed to the same Pod:

      session-affinity.yaml
      apiVersion: v1
      kind: Service
      metadata:
        name: nginx-service
        labels:
          app: nginx
      spec:
        type: LoadBalancer
        selector:
          app: nginx
        sessionAffinity: ClientIP
        sessionAffinityConfig:
          clientIP:
            timeoutSeconds: 100

      You can set the timeout for the session by using the spec.sessionAffinityConfig.clientIP.timeoutSeconds field.

      Troubleshooting

      If you are having problems with the CCM, such as the NodeBalancer not being created, you can check the CCM’s error logs. First, you’ll need to find the name of the CCM Pod in the kube-system namespace:

      kubectl get pods -n kube-system
      

      The Pod will be named ccm-linode- with five random characters at the end, like ccm-linode-jrvj2. Once you have the Pod name, you can view its logs. The --tail=n flag is used to return the last n lines, where n is the number of your choosing. The below example returns the last 100 lines:

      kubectl logs ccm-linode-jrvj2 -n kube-system --tail=100
      

      Note

      Currently the CCM only supports https ports within a manifest’s spec when the linode-loadbalancer-protocol is set to https. For regular http traffic, you’ll need to create an additional Service and NodeBalancer. For example, if you had the following in the Service manifest:

      unsupported-nginx-service.yaml
      ...
      spec:
        ports:
        - name: https
          port: 443
          protocol: TCP
          targetPort: 80
        - name: http
          port: 80
          protocol: TCP
          targetPort: 80
      ...

      The NodeBalancer would not be created and you would find an error similar to the following in your logs:

      ERROR: logging before flag.Parse: E0708 16:57:19.999318       1 service_controller.go:219] error processing service default/nginx-service (will retry): failed to ensure load balancer for service default/nginx-service: [400] [configs[0].protocol] The SSL private key and SSL certificate must be provided when using 'https'
      ERROR: logging before flag.Parse: I0708 16:57:19.999466       1 event.go:221] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx-service", UID:"5d1afc22-a1a1-11e9-ad5d-f23c919aa99b", APIVersion:"v1", ResourceVersion:"1248179", FieldPath:""}): type: 'Warning' reason: 'CreatingLoadBalancerFailed' Error creating load balancer (will retry): failed to ensure load balancer for service default/nginx-service: [400] [configs[0].protocol] The SSL private key and SSL certificate must be provided when using 'https'
      

      Removing the http port would allow you to create the NodeBalancer.
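      To keep serving plain http traffic alongside the https Service, you could create a second Service, and thus a second NodeBalancer, that only exposes port 80. A minimal sketch (the name nginx-service-http is hypothetical):

```yaml
# A separate http-only Service; creates its own NodeBalancer
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-http
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
```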

      Delete a NodeBalancer

      To delete a NodeBalancer and the Service that it represents, you can use the Service manifest file you used to create the NodeBalancer. Simply use the delete command and supply your file name with the -f flag:

      kubectl delete -f nginx-service.yaml
      

      Similarly, you can delete the Service by name:

      kubectl delete service nginx-service
      

      Updating the CCM

      The easiest way to update the Linode CCM is to edit the DaemonSet that creates the Linode CCM Pod. To do so, you can run the edit command.

      kubectl edit ds -n kube-system ccm-linode
      

      The CCM DaemonSet manifest will appear in vim. Press i to enter insert mode. Navigate to the image field (under spec.template.spec.containers) and change the field’s value to the desired version tag. For instance, if you had the following image:

      image: linode/linode-cloud-controller-manager:v0.2.2
      

      You could update the image to v0.2.3 by changing the image tag:

      image: linode/linode-cloud-controller-manager:v0.2.3
      

      For a complete list of CCM version tags, visit the CCM DockerHub page.

      Caution

      The CCM DaemonSet manifest may list latest as the image version tag, which does not necessarily point to the most recent release. To ensure you are running the latest version, first check the CCM DockerHub page, then use the most recent release tag.

      Press escape to exit insert mode, then type :wq and press enter to save your changes. A new Pod will be created with the new image, and the old Pod will be deleted.

      Next Steps

      To further take advantage of Linode products through Kubernetes, check out our guide on how to use the Linode Container Storage Interface (CSI), which allows you to create persistent volumes backed by Linode Block Storage.


      This guide is published under a CC BY-ND 4.0 license.




      Survey: How Do IT Leaders Grade Their Data Center and Cloud Infrastructure Strategies?


      We’re still merely entering the hybrid and multicloud era of information technology, but according to INAP’s latest survey research, the transformation is about to hit warp speed. Nearly 9 in 10 organizations with on-premise data centers plan to move at least some of their workloads off-premise into cloud, managed hosting or colocation in the next three years.

      As more companies diversify their infrastructure mix, how confident are IT leaders and managers that they’re taking the right approach?

      For INAP’s second annual installment of the State of IT Infrastructure Management survey, we asked 500 IT leaders and infrastructure managers to assess their data center and cloud strategies, assign a letter grade and give us their thoughts on why they chose a particular rating.

      How do the grades stack up among participants? What factors are most closely associated with A-grade infrastructures? And why do some infrastructure strategies fall short?

      Making the Grade in the Hybrid IT and Multicloud Era


      Instead of the classic bell curve so many of us were subject to during our years in academia, most of the IT infrastructure management professionals say their infrastructure strategy deserves an above average grade, with the majority—56.3 percent of respondents—giving their infrastructures a B. Roughly 19 percent think they deserve a C or below. While the results can be read as a vote of confidence for multiplatform, hybrid cloud and multicloud strategies, most respondents say there’s still plenty of room for improvement: Only 1 in 4 participants (25.2 percent) gave their infrastructure strategies an A.

      Factors Most Associated with A-Grade Infrastructure

      Still, it’s worth asking: What factors distinguish A’s from the rest of the crowd?

      Four groups in the data, regardless of company size, industry and headcount, are strongly correlated with high marks:

      Off-Premise Migrators

      A’s have a significantly smaller portion of their workloads on-premise (30 percent of workloads, on average) compared to C’s and below (45 percent).

      Colocation Customers

      Thirty-one percent of IT pros who have colocation as part of their infrastructure mix give themselves an A. This is six points higher than the total population.

      Cloud Diversifiers

      For companies already in the cloud, those who only host with public cloud platforms (AWS, Azure, Google) are less likely to give themselves A’s than those who adopt multicloud platform strategies—18 percent vs. 29 percent, respectively.

      Managed Services Super Users

      The more companies rely on third parties or cloud providers to fully manage their hosted environments (up to the application layer), the more likely they are to assign their infrastructure strategy an A. The average share of workloads fully managed: A’s (71 percent), B’s (62 percent), C’s (54 percent).

      Why Some IT Infrastructure Strategies Fall Short


      From the above results, no single explanation for why strategies did not earn top marks was selected by fewer than a fifth of respondents, but two clearly lead the pack:

      • Infrastructure not fully optimized for applications
      • Too much time managing and maintaining the infrastructure

      The first leading factor speaks to a simultaneous benefit and challenge of the multicloud and hybrid IT era. It’s more economical than ever to find a mix of infrastructure solutions that match the needs of individual workloads and applications. The flip side to that benefit is the simple fact that adopting new platforms can quickly lead to environment sprawl and raise the complexity of the overall strategy—making the goal of application optimization a tougher bar to clear.

      The second leading factor—improper time allocation—underscores a central theme of IT infrastructure management that will be discussed in greater depth in a future blog.

      Senior Leaders vs. Non-Senior IT Pros

      As previously noted, only 1 in 4 participants gave their infrastructure strategies an A. That number falls to 1 in 8 (12.6 percent) if we remove senior IT leaders from the mix. Non-senior infrastructure managers are also two times more likely to grade their infrastructure strategy a C. In other areas of the State of IT Infrastructure Management survey, senior leaders generally held a more optimistic outlook, and the infrastructure grades were no exception.

      Why might this be? We can only speculate, but senior leaders may be loath to give a low grade to a strategy they had a large part in shaping. Or perhaps it’s that non-senior leaders deal with more of the day-to-day tasks associated with infrastructure upkeep and don’t feel as positive about the strategy. Whatever the reason, these two groups are not seeing eye to eye.

      Strategizing to Earn the A-Grade

      When considering solutions—be it cloud, colocation and/or managed services—a lesson or two can be taken from those A-grade infrastructure strategies, and maybe from the C’s and below, as well.

      If you’re ready to level-up your strategy, but unsure where to start, INAP can help. We offer high-performance data center, cloud, network and managed services solutions that will earn your infrastructure strategy an A+.

      Laura Vietmeyer






      Manage Billing in Cloud Manager


      Updated by Linode. Written by Linode.

      We’ve done our best to create straightforward billing and payment policies. Still have questions? Use this guide to learn how to make payments, update your billing information, and remove services. To learn how billing works see the How Linode Billing Works guide. If you have a question that isn’t answered in either guide, please feel free to contact Support.

      Viewing Current Balance

      To view your current balance, follow the steps below. This shows you the sum of all Linode services used so far in the month, down to the hour.

      1. Log in to the Linode Cloud Manager.
      2. Select Account from the sidebar links.
      3. On the right side you will see your Billing Information panel.

        This customer has a $63.52 uninvoiced balance and $0 due

        Amount Due is the current invoiced balance and Uninvoiced Balance is the accrued balance that has not yet been invoiced for the month.

        Here, you can keep track of your outstanding balance. In the example above, the customer has accrued a $63.52 balance for Linode services this month so far, but it has not been invoiced yet. You can check this as frequently or infrequently as you wish. It gets updated every hour as you use and add Linode services.

      Making a Payment

      You can use the Cloud Manager to pay an outstanding balance or prepay for Linode services. Here’s how:

      1. Log in to the Linode Cloud Manager.
      2. Select Account from the sidebar links.
      3. Select Account & Billing.
      4. Expand the Make a Payment panel.

        The Make a Payment Panel

      5. Enter the amount of money you would like to pay in the Amount to Charge field.

      6. Enter the CVV number on the back of your credit card in the CVV field.

      7. Click Confirm Payment.

      The payment may take a few minutes to be applied to your account.

      Updating Credit Card Information

      Keep your credit card information up to date to prevent service interruptions. Here’s how:

      1. Log in to the Linode Cloud Manager.
      2. Select Account from the sidebar links.
      3. Select Account & Billing.
      4. Expand the Update Credit Card box and enter your credit card number and the card’s expiration date.
      5. Click Save. Your credit card information will be updated.

        Update your credit card information.

        Note

        If you have an outstanding balance, you will need to make a manual payment to bring your account up to date. See the Making a Payment section for more information.

        Note

        A $1.00 authorization hold may be placed on your credit card by your banking institution when our payment processor tests the validity of the card. This is normal behavior and does not result in a charge on your card.

      Accessing Billing History

      All of your billing history is stored in the Cloud Manager. Here’s how to access it:

      1. Log in to the Linode Cloud Manager.
      2. Select Account from the sidebar links.
      3. Select Account & Billing.
      4. Expand the Recent Invoices and Recent Payments panels.

      Select an invoice to view the charges for a particular month.

      Removing Services

      Our services are provided without a contract, so you’re free to remove services from your account at any time. Here’s how:

      1. Log in to the Linode Cloud Manager.
      2. To remove a Linode from your account, select Linodes from the sidebar links. Select the Linode you would like to remove, then select the Settings tab. Expand the Delete Linode panel and click Delete.
      3. To remove a NodeBalancer from your account, select NodeBalancers from the sidebar links. Open the menu of the NodeBalancer you would like to remove, then select Remove.
      4. To remove the Linode Backup Service, select Linodes from the sidebar links. Select the corresponding Linode. Under the Backups tab click the Cancel Backups button at the bottom of the page.

      Canceling Your Account

      You can cancel your account at any time. Please note that when you cancel your account, any uninvoiced balance remaining on your account will be charged to your account’s credit card. If you have any positive credit on your account at time of cancellation, then that credit will be used to pay for your uninvoiced balance.

      1. Log in to the Linode Cloud Manager.
      2. Click the Account link in the sidebar.
      3. On the right of the page, select the Close Account link.
      4. A confirmation form will appear. Enter your Linode username in the first field and enter any comments you’d like to leave in the second field.
      5. Click the Close Account button to complete your account cancellation.

      Your account will be canceled and all of your services will be deactivated.

      Note

      You do not have to cancel your account to prevent recurring charges. Instead, you can remove all Linodes and services from your account via the Linodes tab in the Cloud Manager. This will allow you to retain your Linode account. If you use Longview with non-Linode services, or want to keep your account name and history, you may find this to be a useful option. See the Removing Services section for more information.




