
      Getting Started with Kubernetes: A kubectl Cheat Sheet


      Introduction

      Kubectl is a command-line tool designed to manage Kubernetes objects and clusters. It provides a command-line interface for performing common operations like creating and scaling Deployments, switching contexts, and accessing a shell in a running container.

      How to Use This Guide:

      • This guide is in cheat sheet format with self-contained command-line snippets.
      • It is not an exhaustive list of kubectl commands, but contains many common operations and use cases. For a more thorough reference, consult the Kubectl Reference Docs.
      • Jump to any section that is relevant to the task you are trying to complete.

      Prerequisites

      • To use kubectl, you will need a Kubernetes cluster available to you. To learn how to create a Kubernetes cluster from scratch, you can consult How to Create a Kubernetes 1.11 Cluster Using Kubeadm on Ubuntu 18.04. Alternatively, you can provision a managed Kubernetes cluster in minutes using DigitalOcean Kubernetes. To get started creating a managed Kubernetes cluster on DigitalOcean, consult How to Create Kubernetes Clusters Using the Control Panel.
      • You will also need a remote machine on which you will install and run kubectl. kubectl can run on a variety of operating systems.

      Sample Deployment

      To demonstrate some of the operations and commands in this cheat sheet, we'll use a sample Deployment that runs 2 replicas of Nginx:

      nginx-deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-deployment
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx
              ports:
              - containerPort: 80
      

      Copy and paste this manifest into a file called nginx-deployment.yaml.

      Installing kubectl

      Note: These commands have only been tested on an Ubuntu 18.04 machine. To learn how to install kubectl on other operating systems, consult Install and Set Up kubectl from the Kubernetes docs.

      First, update your local package index and install the required dependencies:

      • sudo apt-get update && sudo apt-get install -y apt-transport-https

      Then add the Google Cloud GPG key to APT and make the kubectl package available to your system:

      • curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
      • echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
      • sudo apt-get update

      Finally, install kubectl:

      • sudo apt-get install -y kubectl

      Test that the installation succeeded using version:
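
      The check itself isn't shown above; assuming kubectl is on your PATH, a client-only version check looks like this:

      • kubectl version --client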

      Setting Up Shell Autocompletion

      Note: These commands have only been tested on an Ubuntu 18.04 machine. To learn how to set up autocompletion on other operating systems, consult Install and Set Up kubectl from the Kubernetes docs.

      kubectl includes a shell autocompletion script that you can make available to your system's existing shell autocompletion software.

      Installing kubectl Autocompletion

      First, check if you have bash-completion installed:
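
      The original snippet is omitted here; one way to verify, borrowed from the Kubernetes docs, is to ask bash for the completion bootstrap function:

      • type _init_completion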

      You should see some script output.

      Next, source the kubectl autocompletion script in your ~/.bashrc file:

      • echo 'source <(kubectl completion bash)' >>~/.bashrc
      • . ~/.bashrc

      Alternatively, you can add the completion script to the /etc/bash_completion.d directory:

      • kubectl completion bash >/etc/bash_completion.d/kubectl

      Usage

      To use the autocompletion feature, press the TAB key to display available kubectl commands:

      Output

      annotate apply autoscale completion cordon delete drain explain kustomize options port-forward rollout set uncordon api-resources attach certificate config cp describe . . .

      You can also display available commands after partially typing a command:
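
      For example, typing kubectl d and pressing TAB twice (assuming autocompletion is set up as above) produces the list below:

      • kubectl d[TAB][TAB]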

      Output

      delete describe diff drain

      Connecting, Configuring and Using Contexts

      Connecting

      To test that kubectl can authenticate with and access your Kubernetes cluster, use cluster-info:
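
      The command itself is omitted from the text; it takes no arguments:

      • kubectl cluster-info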

      If kubectl can successfully authenticate with your cluster, you should see the following output:

      Output

      Kubernetes master is running at https://kubernetes_master_endpoint CoreDNS is running at https://coredns_endpoint To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

      kubectl is configured using kubeconfig configuration files. By default, kubectl will look for a file called config in the $HOME/.kube directory. To change this, you can set the $KUBECONFIG environment variable to a custom kubeconfig file, or pass in the custom file at execution time using the --kubeconfig flag:

      • kubectl cluster-info --kubeconfig=path_to_your_kubeconfig_file
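
      If you prefer the environment-variable approach mentioned above, you can export it for the current shell session instead (the path below is a placeholder):

      • export KUBECONFIG=path_to_your_kubeconfig_file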

      Note: If you're using a managed Kubernetes cluster, your cloud provider should have made its kubeconfig file available to you.

      If you don't want to use the --kubeconfig flag with every command, and there is no existing ~/.kube/config file, create a directory called ~/.kube in your home directory if it doesn't already exist, and copy in the kubeconfig file, renaming it to config:

      • mkdir ~/.kube
      • cp your_kubeconfig_file ~/.kube/config

      Now, run cluster-info once again to test your connection.

      Modifying Your kubectl Configuration

      You can also modify your configuration using the kubectl config set of commands.

      To view your kubectl configuration, use the view subcommand:
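
      The corresponding command isn't shown; it is simply:

      • kubectl config view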

      Output

      apiVersion: v1 clusters: - cluster: certificate-authority-data: DATA+OMITTED . . .

      Modifying Clusters

      To fetch a list of clusters defined in your kubeconfig, use get-clusters:

      • kubectl config get-clusters

      Output

      NAME do-nyc1-sammy

      To add a cluster to your config, use the set-cluster subcommand:

      • kubectl config set-cluster new_cluster --server=server_address --certificate-authority=path_to_certificate_authority

      To delete a cluster from your config, use delete-cluster with the cluster's name:

      Note: This only deletes the cluster from your config and does not delete the actual Kubernetes cluster.

      • kubectl config delete-cluster cluster_name

      Modifying Users

      You can perform similar operations for users using set-credentials:

      • kubectl config set-credentials username --client-certificate=/path/to/cert/file --client-key=/path/to/key/file

      To delete a user from your config, you can run unset:

      • kubectl config unset users.username

      Contexts

      A context in Kubernetes is an object that contains a set of access parameters for your cluster. It consists of a cluster, namespace, and user triple. Contexts allow you to quickly switch between different sets of cluster configuration.
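
      For reference, here is a minimal sketch of how such a triple appears inside a kubeconfig file, using the cluster and user names from the examples in this guide (the namespace is illustrative):

      contexts:
      - context:
          cluster: do-nyc1-sammy
          namespace: default
          user: do-nyc1-sammy-admin
        name: do-nyc1-sammy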

      To see your current context, you can use current-context:

      • kubectl config current-context

      Output

      do-nyc1-sammy

      To see a list of all configured contexts, run get-contexts:

      • kubectl config get-contexts

      Output

      CURRENT NAME CLUSTER AUTHINFO NAMESPACE * do-nyc1-sammy do-nyc1-sammy do-nyc1-sammy-admin

      To set a context, use set-context:

      • kubectl config set-context context_name --cluster=cluster_name --user=user_name --namespace=namespace

      You can switch between contexts with use-context:

      • kubectl config use-context context_name

      Output

      Switched to context "do-nyc1-sammy"

      And you can delete a context with delete-context:

      • kubectl config delete-context context_name

      Using Namespaces

      A Namespace in Kubernetes is an abstraction that allows you to subdivide your cluster into multiple virtual clusters. By using Namespaces you can divide cluster resources among multiple teams and scope objects appropriately. For example, you can have a prod Namespace for production workloads and a dev Namespace for development and test workloads.

      To fetch and print a list of all the Namespaces in your cluster, use get namespace:
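
      The command to run (not shown above) is:

      • kubectl get namespace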

      Output

      NAME STATUS AGE default Active 2d21h kube-node-lease Active 2d21h kube-public Active 2d21h kube-system Active 2d21h

      To set a Namespace for your current context, use set-context --current:

      • kubectl config set-context --current --namespace=namespace_name

      To create a Namespace, use create namespace:

      • kubectl create namespace namespace_name

      Output

      namespace/sammy created

      Similarly, to delete a Namespace, use delete namespace:

      Warning: Deleting a Namespace will delete everything in the Namespace, including running Deployments, Pods, and other workloads. Only run this command if you're sure you'd like to kill whatever's running in the Namespace or if you're deleting an empty Namespace.

      • kubectl delete namespace namespace_name

      To fetch all Pods in a given Namespace or to perform other operations on resources in a given Namespace, make sure to include the --namespace flag:

      • kubectl get pods --namespace=namespace_name

      Managing Kubernetes Resources

      General Syntax

      The general syntax for most kubectl management commands is:

      • kubectl command type name flags

      Where

      • command is an operation you'd like to perform, like create
      • type is the Kubernetes resource type, like deployment
      • name is the resource's name, like app_frontend
      • flags are any optional flags you'd like to include

      For example, the following command retrieves information about a Deployment named app_frontend:

      • kubectl get deployment app_frontend

      Declarative Management and kubectl apply

      The recommended approach to managing workloads on Kubernetes is to rely on the cluster's declarative design as much as possible. This means that instead of running a series of commands to create, update, delete, and restart running Pods, you should define the workloads, services, and systems you'd like to run in YAML manifest files and provide these files to Kubernetes, which will handle the rest.

      In practice, this means using the kubectl apply command, which applies a particular configuration to a given resource. If the target resource doesn't exist, Kubernetes will create the resource. If the resource already exists, Kubernetes will save the current revision and update the resource according to the new configuration. This declarative approach exists in contrast to the imperative approach of running the kubectl create, kubectl edit, and kubectl scale set of commands to manage resources. To learn more about the different ways of managing Kubernetes resources, consult Kubernetes Object Management from the Kubernetes docs.

      Rolling Out a Deployment

      For example, to deploy the sample Nginx Deployment to your cluster, use apply and provide the path to the nginx-deployment.yaml manifest file:

      • kubectl apply -f nginx-deployment.yaml

      Output

      deployment.apps/nginx-deployment created

      The -f flag is used to specify a filename or URL containing a valid configuration. If you'd like to apply all manifests from a directory, you can use the -k flag:

      • kubectl apply -k manifests_dir

      You can track the rollout status using rollout status:

      • kubectl rollout status deployment/nginx-deployment

      Output

      Waiting for deployment "nginx-deployment" rollout to finish: 1 of 2 updated replicas are available... deployment "nginx-deployment" successfully rolled out

      An alternative to rollout status is the kubectl get command along with the -w (watch) flag:

      • kubectl get deployment -w

      Output

      NAME READY UP-TO-DATE AVAILABLE AGE nginx-deployment 0/2 2 0 3s nginx-deployment 1/2 2 1 3s nginx-deployment 2/2 2 2 3s

      Using rollout pause and rollout resume, you can pause and resume the rollout of a Deployment:

      • kubectl rollout pause deployment/nginx-deployment

      Output

      deployment.extensions/nginx-deployment paused
      • kubectl rollout resume deployment/nginx-deployment

      Output

      deployment.extensions/nginx-deployment resumed

      Modifying a Running Deployment

      If you'd like to modify a running Deployment, you can make changes to its manifest file and then run kubectl apply again to apply the update. For example, we'll modify the nginx-deployment.yaml file to change the number of replicas from 2 to 3:

      nginx-deployment.yaml

      . . .
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: nginx
      . . .
      

      The kubectl diff command allows you to see a diff between the currently running resources and the changes proposed in the supplied configuration file:

      • kubectl diff -f nginx-deployment.yaml

      Now allow Kubernetes to perform the update using apply:

      • kubectl apply -f nginx-deployment.yaml

      Running another get deployment should confirm the addition of a third replica.
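
      For example, a quick check of the Deployment might look like this:

      • kubectl get deployment nginx-deployment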

      If you run apply again without modifying the manifest file, Kubernetes will detect that no changes were made and won't perform any action.

      Using rollout history, you can see a list of the Deployment's previous revisions:

      • kubectl rollout history deployment/nginx-deployment

      Output

      deployment.extensions/nginx-deployment REVISION CHANGE-CAUSE 1 <none>

      With rollout undo, you can revert a Deployment to any of its previous revisions:

      • kubectl rollout undo deployment/nginx-deployment --to-revision=1

      Deleting a Deployment

      To delete a running Deployment, use kubectl delete:

      • kubectl delete -f nginx-deployment.yaml

      Output

      deployment.apps "nginx-deployment" deleted

      Imperative Management

      You can also use a set of imperative commands to directly manipulate and manage Kubernetes resources.

      Creating a Deployment

      Use create to create an object from a file, URL, or STDIN. Note that, unlike apply, if an object with the same name already exists, the operation will fail. The --dry-run flag allows you to preview the result of the operation without actually performing it:

      • kubectl create -f nginx-deployment.yaml --dry-run

      Output

      deployment.apps/nginx-deployment created (dry-run)

      We can now create the object:

      • kubectl create -f nginx-deployment.yaml

      Output

      deployment.apps/nginx-deployment created

      Modifying a Running Deployment

      Use scale to scale the number of replicas for the Deployment from 2 to 4:

      • kubectl scale --replicas=4 deployment/nginx-deployment

      Output

      deployment.extensions/nginx-deployment scaled

      You can edit any object in-place using kubectl edit. This will open the object's manifest in your default editor:

      • kubectl edit deployment/nginx-deployment

      You should see the following manifest file in your editor:

      nginx-deployment

      # Please edit the object below. Lines beginning with a '#' will be ignored,
      # and an empty file will abort the edit. If an error occurs while saving this file will be
      # reopened with the relevant failures.
      #
      apiVersion: extensions/v1beta1
      kind: Deployment
      . . . 
      spec:
        progressDeadlineSeconds: 600
        replicas: 4
        revisionHistoryLimit: 10
        selector:
          matchLabels:
      . . .
      

      Change the replicas value from 4 to 2, then save and close the file.

      Now run a get to inspect the changes:

      • kubectl get deployment/nginx-deployment

      Output

      NAME READY UP-TO-DATE AVAILABLE AGE nginx-deployment 2/2 2 2 6m40s

      We've successfully scaled the Deployment back down to 2 replicas on-the-fly. You can update most of a Kubernetes object's fields in a similar manner.

      Another useful command for modifying objects in-place is kubectl patch. Using patch, you can update an object's fields on-the-fly without having to open your editor. patch also allows for more complex updates with various merging and patching strategies. To learn more about these, consult Update API Objects in Place Using kubectl patch.

      The following command will patch the nginx-deployment object to update the replicas field from 2 to 4; deploy is shorthand for the deployment object:

      • kubectl patch deploy nginx-deployment -p '{"spec": {"replicas": 4}}'

      Output

      deployment.extensions/nginx-deployment patched

      We can now inspect the changes:

      • kubectl get deployment/nginx-deployment

      Output

      NAME READY UP-TO-DATE AVAILABLE AGE nginx-deployment 4/4 4 4 18m

      You can also create a Deployment imperatively using the run command. run will create a Deployment using an image provided as a parameter:

      • kubectl run nginx-deployment --image=nginx --port=80 --replicas=2

      The expose command lets you quickly expose a running Deployment with a Kubernetes Service, allowing connections from outside your Kubernetes cluster:

      • kubectl expose deploy nginx-deployment --type=LoadBalancer --port=80 --name=nginx-svc

      Output

      service/nginx-svc exposed

      Here we've exposed the nginx-deployment Deployment as a LoadBalancer Service, opening up port 80 to external traffic and directing it to container port 80. We name the Service nginx-svc. Using the LoadBalancer Service type, a cloud load balancer is automatically provisioned and configured by Kubernetes. To get the Service's external IP address, use get:

      • kubectl get svc nginx-svc

      Output

      NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-svc LoadBalancer 10.245.26.242 203.0.113.0 80:30153/TCP 22m

      You can access the running Nginx containers by navigating to the EXTERNAL-IP in your web browser.

      Inspecting Workloads and Debugging

      There are several commands you can use to get more information about workloads running in your cluster.

      Inspecting Kubernetes Resources

      kubectl get fetches a given Kubernetes resource and displays some basic information associated with it:

      • kubectl get deployment -o wide

      Output

      NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR nginx-deployment 4/4 4 4 29m nginx nginx app=nginx

      Since we did not provide a Deployment name or a Namespace, kubectl fetches all Deployments in the current Namespace. The -o flag provides additional information like CONTAINERS and IMAGES.
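
      The same flag accepts other formats as well; for instance, dumping the complete object as YAML is a common way to inspect every field:

      • kubectl get deployment nginx-deployment -o yaml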

      In addition to get, you can use describe to fetch a detailed description of the resource and its associated resources:

      • kubectl describe deploy nginx-deployment

      Output

      Name: nginx-deployment Namespace: default CreationTimestamp: Wed, 11 Sep 2019 12:53:42 -0400 Labels: run=nginx-deployment Annotations: deployment.kubernetes.io/revision: 1 Selector: run=nginx-deployment . . .

      The set of information presented will vary depending on the resource type. You can also use this command without specifying a resource name, in which case information will be provided for all resources of that type in the current Namespace.

      explain allows you to quickly pull up the configurable fields for a given resource type:

      • kubectl explain deployment.spec

      By appending additional fields, you can dive deeper into the field hierarchy:

      • kubectl explain deployment.spec.template.spec

      Gaining Shell Access to a Container

      To gain shell access into a running container, use exec. First, find the Pod that contains the running container you'd like access to:
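
      The listing command is missing from the text; in the current Namespace it is:

      • kubectl get pods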

      Output

      nginx-deployment-8859878f8-7gfw9 1/1 Running 0 109m nginx-deployment-8859878f8-z7f9q 1/1 Running 0 109m

      Let's exec into the first Pod. Since this Pod has only one container, we don't need to use the -c flag to specify which container we'd like to exec into.

      • kubectl exec -i -t nginx-deployment-8859878f8-7gfw9 -- /bin/bash

      Output

      root@nginx-deployment-8859878f8-7gfw9:/#

      You now have shell access to the Nginx container. The -i flag passes STDIN to the container, and -t gives you an interactive TTY. The -- double-dash acts as a separator between the kubectl command and the command you'd like to run inside the container. In this case, we are running /bin/bash.

      To run commands inside the container without opening a full shell, omit the -i and -t flags, and substitute the command you'd like to run instead of /bin/bash:

      • kubectl exec nginx-deployment-8859878f8-7gfw9 ls

      Output

      bin boot dev etc home lib lib64 media . . .

      Fetching Logs

      Another useful command is logs, which prints logs for Pods and containers, including terminated containers.

      To stream logs to your terminal output, you can use the -f flag:

      • kubectl logs -f nginx-deployment-8859878f8-7gfw9

      Output

      10.244.2.1 - - [12/Sep/2019:17:21:33 +0000] "GET / HTTP/1.1" 200 612 "-" "203.0.113.0" "-" 2019/09/16 17:21:34 [error] 6#6: *1 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.244.2.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "203.0.113.0", referrer: "http://203.0.113.0" . . .

      This command will keep running in your terminal until interrupted with CTRL+C. You can omit the -f flag if you'd like to print the log output and exit immediately.

      You can also use the -p flag to fetch logs for a terminated container. When this option is used within a Pod that had a prior running container instance, logs will print output from the terminated container:

      • kubectl logs -p nginx-deployment-8859878f8-7gfw9

      The -c flag allows you to specify the container you'd like to fetch logs from, if the Pod has multiple containers. You can use the --all-containers=true flag to fetch logs from all containers in the Pod.
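
      For example, targeting the container by the name given in the sample manifest (nginx) would look like this:

      • kubectl logs nginx-deployment-8859878f8-7gfw9 -c nginx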

      Port Forwarding and Proxying

      To gain network access to a Pod, you can use port-forward:

      • sudo kubectl port-forward pod/nginx-deployment-8859878f8-7gfw9 80:80

      Output

      Forwarding from 127.0.0.1:80 -> 80 Forwarding from [::1]:80 -> 80

      In this case we use sudo because local port 80 is a protected port. For most other ports you can omit sudo and run the kubectl command as your system user.

      Here we forward local port 80 (preceding the colon) to the Pod's container port 80 (after the colon).

      You can also use deploy/nginx-deployment as the resource type and name to forward to. If you do this, the local port will be forwarded to the Pod selected by the Deployment.
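
      A sketch of that variant, forwarding an unprivileged local port this time, would be:

      • kubectl port-forward deploy/nginx-deployment 8080:80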

      The proxy command can be used to access the Kubernetes API server locally:

      • kubectl proxy --port=8080

      Output

      Starting to serve on 127.0.0.1:8080

      In another shell, use curl to explore the API:

      • curl http://localhost:8080/api/

      Output

      { "kind": "APIVersions", "versions": [ "v1" ], "serverAddressByClientCIDRs": [ { "clientCIDR": "0.0.0.0/0", "serverAddress": "203.0.113.0:443" } ]

      Close the proxy by hitting CTRL+C.

      Conclusion

      This guide covers some of the more common kubectl commands you may use when managing a Kubernetes cluster and the workloads you've deployed to it.

      You can learn more about kubectl by consulting the official Kubernetes reference documentation.

      There are many more commands and variations that you may find useful as part of your work with kubectl. To learn more about all of your available options, you can run:

      kubectl --help

      Source link

      Getting Started with Kubernetes: A kubectl Cheat Sheet


      Introduction

      Kubectl is a command-line tool designed to manage Kubernetes objects and clusters. It provides a command-line interface for performing common operations like creating and scaling Deployments, switching contexts, and accessing a shell in a running container.

      How to Use This Guide:

      • This guide is in cheat sheet format with self-contained command-line snippets.
      • It is not an exhaustive list of kubectl commands, but contains many common operations and use cases. For a more thorough reference, consult the Kubectl Reference Docs
      • Jump to any section that is relevant to the task you are trying to complete.

      Prerequisites

      Sample Deployment

      To demonstrate some of the operations and commands in this cheat sheet, we’ll use a sample Deployment that runs 2 replicas of Nginx:

      nginx-deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-deployment
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx
              ports:
              - containerPort: 80
      

      Copy and paste this manifest into a file called nginx-deployment.yaml.

      Installing kubectl

      Note: These commands have only been tested on an Ubuntu 18.04 machine. To learn how to install kubectl on other operating systems, consult Install and Set Up kubectl from the Kubernetes docs.

      First, update your local package index and install required dependencies:

      • sudo apt-get update && sudo apt-get install -y apt-transport-https

      Then add the Google Cloud GPG key to APT and make the kubectl package available to your system:

      • curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
      • echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
      • sudo apt-get update

      Finally, install kubectl:

      • sudo apt-get install -y kubectl

      Test that the installation succeeded using version:

      Setting Up Shell Autocompletion

      Note: These commands have only been tested on an Ubuntu 18.04 machine. To learn how to set up autocompletion on other operating systems, consult Install and Set Up kubectl from the Kubernetes docs.

      kubectl includes a shell autocompletion script that you can make available to your system’s existing shell autocompletion software.

      Installing kubectl Autocompletion

      First, check if you have bash-completion installed:

      You should see some script output.

      Next, source the kubectl autocompletion script in your ~/.bashrc file:

      • echo 'source <(kubectl completion bash)' >>~/.bashrc
      • . ~/.bashrc

      Alternatively, you can add the completion script to the /etc/bash_completion.d directory:

      • kubectl completion bash >/etc/bash_completion.d/kubectl

      Usage

      To use the autocompletion feature, press the TAB key to display available kubectl commands:

      Output

      annotate apply autoscale completion cordon delete drain explain kustomize options port-forward rollout set uncordon api-resources attach certificate config cp describe . . .

      You can also display available commands after partially typing a command:

      Output

      delete describe diff drain

      Connecting, Configuring and Using Contexts

      Connecting

      To test that kubectl can authenticate with and access your Kubernetes cluster, use cluster-info:

      If kubectl can successfully authenticate with your cluster, you should see the following output:

      Output

      Kubernetes master is running at https://kubernetes_master_endpoint CoreDNS is running at https://coredns_endpoint To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

      kubectl is configured using kubeconfig configuration files. By default, kubectl will look for a file called config in the $HOME/.kube directory. To change this, you can set the $KUBECONFIG environment variable to a custom kubeconfig file, or pass in the custom file at execution time using the --kubeconfig flag:

      • kubectl cluster-info --kubeconfig=path_to_your_kubeconfig_file

      Note: If you’re using a managed Kubernetes cluster, your cloud provider should have made its kubeconfig file available to you.

      If you don’t want to use the --kubeconfig flag with every command, and there is no existing ~/.kube/config file, create a directory called ~/.kube in your home directory if it doesn’t already exist, and copy in the kubeconfig file, renaming it to config:

      • mkdir ~/.kube
      • cp your_kubeconfig_file ~/.kube/config

      Now, run cluster-info once again to test your connection.

      Modifying your kubectl Configuration

      You can also modify your config using the kubectl config set of commands.

      To view your kubectl configuration, use the view subcommand:

      Output

      apiVersion: v1 clusters: - cluster: certificate-authority-data: DATA+OMITTED . . .

      Modifying Clusters

      To fetch a list of clusters defined in your kubeconfig, use get-clusters:

      • kubectl config get-clusters

      Output

      NAME do-nyc1-sammy

      To add a cluster to your config, use the set-cluster subcommand:

      • kubectl config set-cluster new_cluster --server=server_address --certificate-authority=path_to_certificate_authority

      To delete a cluster from your config, use delete-cluster:

      Note: This only deletes the cluster from your config and does not delete the actual Kubernetes cluster.

      • kubectl config delete-cluster

      Modifying Users

      You can perform similar operations for users using set-credentials:

      • kubectl config set-credentials username --client-certificate=/path/to/cert/file --client-key=/path/to/key/file

      To delete a user from your config, you can run unset:

      • kubectl config unset users.username

      Contexts

      A context in Kubernetes is an object that contains a set of access parameters for your cluster. It consists of a cluster, namespace, and user triple. Contexts allow you to quickly switch between different sets of cluster configuration.

      To see your current context, you can use current-context:

      • kubectl config current-context

      Output

      do-nyc1-sammy

      To see a list of all configured contexts, run get-contexts:

      • kubectl config get-contexts

      Output

      CURRENT NAME CLUSTER AUTHINFO NAMESPACE * do-nyc1-sammy do-nyc1-sammy do-nyc1-sammy-admin

      To set a context, use set-context:

      • kubectl config set-context context_name --cluster=cluster_name --user=user_name --namespace=namespace

      You can switch between contexts with use-context:

      • kubectl config use-context context_name

      Output

      Switched to context "do-nyc1-sammy"

      And you can delete a context with delete-context:

      • kubectl config delete-context context_name

      Using Namespaces

      A Namespace in Kubernetes is an abstraction that allows you to subdivide your cluster into multiple virtual clusters. By using Namespaces you can divide cluster resources among multiple teams and scope objects appropriately. For example, you can have a prod Namespace for production workloads, and a dev Namespace for development and test workloads.

      To fetch and print a list of all the Namespaces in your cluster, use get namespace:

      Output

      NAME STATUS AGE default Active 2d21h kube-node-lease Active 2d21h kube-public Active 2d21h kube-system Active 2d21h

      To set a Namespace for your current context, use set-context --current:

      • kubectl config set-context --current --namespace=namespace_name

      To create a Namespace, use create namespace:

      • kubectl create namespace namespace_name

      Output

      namespace/sammy created

      Similarly, to delete a Namespace, use delete namespace:

      Warning: Deleting a Namespace will delete everything in the Namespace, including running Deployments, Pods, and other workloads. Only run this command if you’re sure you’d like to kill whatever’s running in the Namespace or if you’re deleting an empty Namespace.

      • kubectl delete namespace namespace_name

      To fetch all Pods in a given Namespace or to perform other operations on resources in a given Namespace, make sure to include the --namespace flag:

      • kubectl get pods --namespace=namespace_name

      Managing Kubernetes Resources

      General Syntax

      The general syntax for most kubectl management commands is:

      • kubectl command type name flags

      Where

      • command is an operation you’d like to perform, like create
      • type is the Kubernetes resource type, like deployment
      • name is the resource’s name, like app_frontend
      • flags are any optional flags you’d like to include

      For example the following command retrieves information about a Deployment named app_frontend:

      • kubectl get deployment app_frontend

      Declarative Management and kubectl apply

      The recommended approach to managing workloads on Kubernetes is to rely on the cluster’s declarative design as much as possible. This means that instead of running a series of commands to create, update, delete, and restart running Pods, you should define the workloads, services, and systems you’d like to run in YAML manifest files, and provide these files to Kubernetes, which will handle the rest.

      In practice, this means using the kubectl apply command, which applies a particular configuration to a given resource. If the target resource doesn’t exist, then Kubernetes will create the resource. If the resource already exists, then Kubernetes will save the current revision, and update the resource according to the new configuration. This declarative approach exists in contrast to the imperative approach of running the kubectl create , kubectl edit, and the kubectl scale set of commands to manage resources. To learn more about the different ways of managing Kubernetes resources, consult Kubernetes Object Management from the Kubernetes docs.

      Rolling out a Deployment

      For example, to deploy the sample Nginx Deployment to your cluster, use apply and provide the path to the nginx-deployment.yaml manifest file:

      • kubectl apply -f nginx-deployment.yaml

      Output

      deployment.apps/nginx-deployment created

      The -f flag is used to specify a filename or URL containing a valid configuration. If you’d like to apply all manifests from a directory, you can use the -k flag:

      • kubectl apply -k manifests_dir

      You can track the rollout status using rollout status:

      • kubectl rollout status deployment/nginx-deployment

      Output

      Waiting for deployment "nginx-deployment" rollout to finish: 1 of 2 updated replicas are available... deployment "nginx-deployment" successfully rolled out

      An alternative to rollout status is the kubectl get command, along with the -w (watch) flag:

      • kubectl get deployment -w

      Output

      NAME READY UP-TO-DATE AVAILABLE AGE nginx-deployment 0/2 2 0 3s nginx-deployment 1/2 2 1 3s nginx-deployment 2/2 2 2 3s

      Using rollout pause and rollout resume, you can pause and resume the rollout of a Deployment:

      • kubectl rollout pause deployment/nginx-deployment

      Output

      deployment.extensions/nginx-deployment paused
      • kubectl rollout resume deployment/nginx-deployment

      Output

      deployment.extensions/nginx-deployment resumed

      Modifying a Running Deployment

      If you’d like to modify a running Deployment, you can make changes to its manifest file and then run kubectl apply again to apply the update. For example, we’ll modify the nginx-deployment.yaml file to change the number of replicas from 2 to 3:

      nginx-deployment.yaml

      . . .
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: nginx
      . . .
      

      The kubectl diff command allows you to see a diff between currently running resources, and the changes proposed in the supplied configuration file:

      • kubectl diff -f nginx-deployment.yaml

      Now allow Kubernetes to perform the update using apply:

      • kubectl apply -f nginx-deployment.yaml

      Running another get deployment should confirm the addition of a third replica.

      If you run apply again without modifying the manifest file, Kubernetes will detect that no changes were made and won’t perform any action.

      Using rollout history you can see a list of the Deployment’s previous revisions:

      • kubectl rollout history deployment/nginx-deployment

      Output

      deployment.extensions/nginx-deployment REVISION CHANGE-CAUSE 1 <none>

      With rollout undo, you can revert a Deployment to any of its previous revisions:

      • kubectl rollout undo deployment/nginx-deployment --to-revision=1

      Deleting a Deployment

      To delete a running Deployment, use kubectl delete:

      • kubectl delete -f nginx-deployment.yaml

      Output

      deployment.apps "nginx-deployment" deleted

      Imperative Management

      You can also use a set of imperative commands to directly manipulate and manage Kubernetes resources.

      Creating a Deployment

      Use create to create an object from a file, URL, or STDIN. Note that unlike apply, if an object with the same name already exists, the operation will fail. The --dry-run flag allows you to preview the result of the operation without actually performing it:

      • kubectl create -f nginx-deployment.yaml --dry-run

      Output

      deployment.apps/nginx-deployment created (dry-run)

      We can now create the object:

      • kubectl create -f nginx-deployment.yaml

      Output

      deployment.apps/nginx-deployment created

      Modifying a Running Deployment

      Use scale to scale the number of replicas for the Deployment from 2 to 4:

      • kubectl scale --replicas=4 deployment/nginx-deployment

      Output

      deployment.extensions/nginx-deployment scaled

      You can edit any object in-place using kubectl edit. This will open up the object’s manifest in your default editor:

      • kubectl edit deployment/nginx-deployment

      You should see the following manifest file in your editor:

      nginx-deployment

      # Please edit the object below. Lines beginning with a '#' will be ignored,
      # and an empty file will abort the edit. If an error occurs while saving this file will be
      # reopened with the relevant failures.
      #
      apiVersion: extensions/v1beta1
      kind: Deployment
      . . . 
      spec:
        progressDeadlineSeconds: 600
        replicas: 4
        revisionHistoryLimit: 10
        selector:
          matchLabels:
      . . .
      

      Change the replicas value from 4 to 2, then save and close the file.

      Now run a get to inspect the changes:

      • kubectl get deployment/nginx-deployment

      Output

      NAME READY UP-TO-DATE AVAILABLE AGE nginx-deployment 2/2 2 2 6m40s

      We’ve successfully scaled the Deployment back down to 2 replicas on-the-fly. You can update most of a Kubernetes’ object’s fields in a similar manner.

      Another useful command for modifying objects in-place is kubectl patch. Using patch, you can update an object’s fields on-the-fly without having to open up your editor. patch also allows for more complex updates with various merging and patching strategies. To learn more about these, consult Update API Objects in Place Using kubectl patch.

      The following command will patch the nginx-deployment object to update the replicas field from 2 to 4; deploy is shorthand for the deployment object.

      • kubectl patch deploy nginx-deployment -p '{"spec": {"replicas": 4}}'

      Output

      deployment.extensions/nginx-deployment patched

      We can now inspect the changes:

      • kubectl get deployment/nginx-deployment

      Output

      NAME READY UP-TO-DATE AVAILABLE AGE nginx-deployment 4/4 4 4 18m

      You can also create a Deployment imperatively using the run command. run will create a Deployment using an image provided as a parameter:

      • kubectl run nginx-deployment --image=nginx --port=80 --replicas=2

      The expose command lets you quickly expose a running Deployment with a Kubernetes Service, allowing connections from outside your Kubernetes cluster:

      • kubectl expose deploy nginx-deployment --type=LoadBalancer --port=80 --name=nginx-svc

      Output

      service/nginx-svc exposed

      Here we’ve exposed the nginx-deployment Deployment as a LoadBalancer Service, opening up port 80 to external traffic and directing it to container port 80. We name the service nginx-svc. Using the LoadBalancer Service type, a cloud load balancer is automatically provisioned and configured by Kubernetes. To get the Service’s external IP address, use get:

      • kubectl get svc nginx-svc

      Output

      NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-svc LoadBalancer 10.245.26.242 203.0.113.0 80:30153/TCP 22m

      You can access the running Nginx containers by navigating to EXTERNAL-IP in your web browser.

      Inspecting Workloads and Debugging

      There are several commands you can use to get more information about workloads running in your cluster.

      Inspecting Kubernetes Resources

      kubectl get fetches a given Kubernetes resource and displays some basic information associated with it:

      • kubectl get deployment -o wide

      Output

      NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR nginx-deployment 4/4 4 4 29m nginx nginx app=nginx

      Since we did not provide a Deployment name or Namespace, kubectl fetches all Deployments in the current Namespace. The -o flag provides additional information like CONTAINERS and IMAGES.

      In addition to get, you can use describe to fetch a detailed description of the resource and associated resources:

      • kubectl describe deploy nginx-deployment

      Output

      Name: nginx-deployment Namespace: default CreationTimestamp: Wed, 11 Sep 2019 12:53:42 -0400 Labels: run=nginx-deployment Annotations: deployment.kubernetes.io/revision: 1 Selector: run=nginx-deployment . . .

      The set of information presented will vary depending on the resource type. You can also use this command without specifying a resource name, in which case information will be provided for all resources of that type in the current Namespace.

      explain allows you to quickly pull configurable fields for a given resource type:

      • kubectl explain deployment.spec

      By appending additional fields you can dive deeper into the field hierarchy:

      • kubectl explain deployment.spec.template.spec

      Gaining Shell Access to a Container

      To gain shell access into a running container, use exec. First, find the Pod that contains the running container you’d like access to:

      Output

      nginx-deployment-8859878f8-7gfw9 1/1 Running 0 109m nginx-deployment-8859878f8-z7f9q 1/1 Running 0 109m

      Let’s exec into the first Pod. Since this Pod has only one container, we don’t need to use the -c flag to specify which container we’d like to exec into.

      • kubectl exec -i -t nginx-deployment-8859878f8-7gfw9 -- /bin/bash

      Output

      root@nginx-deployment-8859878f8-7gfw9:/#

      You now have shell access to the Nginx container. The -i flag passes STDIN to the container, and -t gives you an interactive TTY. The -- double-dash acts as a separator for the kubectl command and the command you’d like to run inside the container. In this case, we are running /bin/bash.

      To run commands inside the container without opening a full shell, omit the -i and -t flags, and substitute the command you’d like to run instead of /bin/bash:

      • kubectl exec nginx-deployment-8859878f8-7gfw9 ls

      Output

      bin boot dev etc home lib lib64 media . . .

      Fetching Logs

      Another useful command is logs, which prints logs for Pods and containers, including terminated containers.

      To stream logs to your terminal output, you can use the -f flag:

      • kubectl logs -f nginx-deployment-8859878f8-7gfw9

      Output

      10.244.2.1 - - [12/Sep/2019:17:21:33 +0000] "GET / HTTP/1.1" 200 612 "-" "203.0.113.0" "-" 2019/09/16 17:21:34 [error] 6#6: *1 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.244.2.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "203.0.113.0", referrer: "http://203.0.113.0" . . .

      This command will keep running in your terminal until interrupted with a CTRL+C. You can omit the -f flag if you’d like to print log output and exit immediately.

      You can also use the -p flag to fetch logs for a terminated container. When this option is used within a Pod that had a prior running container instance, logs will print output from the terminated container:

      • kubectl logs -p nginx-deployment-8859878f8-7gfw9

      The -c flag allows you to specify the container you’d like to fetch logs from, if the Pod has multiple containers. You can use the --all-containers=true flag to fetch logs from all containers in the Pod.

      Port Forwarding and Proxying

      To gain network access to a Pod, you can use port-forward:

      • sudo kubectl port-forward pod/nginx-deployment-8859878f8-7gfw9 80:80

      Output

      Forwarding from 127.0.0.1:80 -> 80 Forwarding from [::1]:80 -> 80

      In this case we use sudo because local port 80 is a protected port. For most other ports you can omit sudo and run the kubectl command as your system user.

      Here we forward local port 80 (preceding the colon) to the Pod’s container port 80 (after the colon).

      You can also use deploy/nginx-deployment as the resource type and name to forward to. If you do this, the local port will be forwarded to the Pod selected by the Deployment.

      The proxy command can be used to access the Kubernetes API server locally:

      • kubectl proxy --port=8080

      Output

      Starting to serve on 127.0.0.1:8080

      In another shell, use curl to explore the API:

      curl http://localhost:8080/api/
      

      Output

      { "kind": "APIVersions", "versions": [ "v1" ], "serverAddressByClientCIDRs": [ { "clientCIDR": "0.0.0.0/0", "serverAddress": "203.0.113.0:443" } ]

      Close the proxy by hitting CTRL-C.

      Conclusion

      This guide covers some of the more common kubectl commands you may use when managing a Kubernetes cluster and workloads you’ve deployed to it.

      You can learn more about kubectl by consulting the official Kubernetes reference documentation.

      There are many more commands and variations that you may find useful as part of your work with kubectl. To learn more about all of your available options, you can run:

      kubectl --help
      



      Source link

      Getting Started with kubectl: A kubectl Cheat Sheet


      Introduction

      Kubectl is a command-line tool designed to manage Kubernetes objects and clusters. It provides a command-line interface for performing common operations like creating and scaling Deployments, switching contexts, and accessing a shell in a running container.

      How to Use This Guide:

      • This guide is in cheat sheet format with self-contained command-line snippets.
      • It is not an exhaustive list of kubectl commands, but contains many common operations and use cases. For a more thorough reference, consult the Kubectl Reference Docs
      • Jump to any section that is relevant to the task you are trying to complete.

      Prerequisites

      Sample Deployment

      To demonstrate some of the operations and commands in this cheat sheet, we’ll use a sample Deployment that runs 2 replicas of Nginx:

      nginx-deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-deployment
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx
              ports:
              - containerPort: 80
      

      Copy and paste this manifest into a file called nginx-deployment.yaml.

      Installing kubectl

      Note: These commands have only been tested on an Ubuntu 18.04 machine. To learn how to install kubectl on other operating systems, consult Install and Set Up kubectl from the Kubernetes docs.

      First, update your local package index and install required dependencies:

      • sudo apt-get update && sudo apt-get install -y apt-transport-https

      Then add the Google Cloud GPG key to APT and make the kubectl package available to your system:

      • curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
      • echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
      • sudo apt-get update

      Finally, install kubectl:

      • sudo apt-get install -y kubectl

      Test that the installation succeeded using version:

      Setting Up Shell Autocompletion

      Note: These commands have only been tested on an Ubuntu 18.04 machine. To learn how to set up autocompletion on other operating systems, consult Install and Set Up kubectl from the Kubernetes docs.

      kubectl includes a shell autocompletion script that you can make available to your system’s existing shell autocompletion software.

      Installing kubectl Autocompletion

      First, check if you have bash-completion installed:

      You should see some script output.

      Next, source the kubectl autocompletion script in your ~/.bashrc file:

      • echo 'source <(kubectl completion bash)' >>~/.bashrc
      • . ~/.bashrc

      Alternatively, you can add the completion script to the /etc/bash_completion.d directory:

      • kubectl completion bash >/etc/bash_completion.d/kubectl

      Usage

      To use the autocompletion feature, press the TAB key to display available kubectl commands:

      Output

      annotate apply autoscale completion cordon delete drain explain kustomize options port-forward rollout set uncordon api-resources attach certificate config cp describe . . .

      You can also display available commands after partially typing a command:

      Output

      delete describe diff drain

      Connecting, Configuring and Using Contexts

      Connecting

      To test that kubectl can authenticate with and access your Kubernetes cluster, use cluster-info:

      If kubectl can successfully authenticate with your cluster, you should see the following output:

      Output

      Kubernetes master is running at https://kubernetes_master_endpoint CoreDNS is running at https://coredns_endpoint To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

      kubectl is configured using kubeconfig configuration files. By default, kubectl will look for a file called config in the $HOME/.kube directory. To change this, you can set the $KUBECONFIG environment variable to a custom kubeconfig file, or pass in the custom file at execution time using the --kubeconfig flag:

      • kubectl cluster-info --kubeconfig=path_to_your_kubeconfig_file

      Note: If you’re using a managed Kubernetes cluster, your cloud provider should have made its kubeconfig file available to you.

      If you don’t want to use the --kubeconfig flag with every command, and there is no existing ~/.kube/config file, create a directory called ~/.kube in your home directory if it doesn’t already exist, and copy in the kubeconfig file, renaming it to config:

      • mkdir ~/.kube
      • cp your_kubeconfig_file ~/.kube/config

      Now, run cluster-info once again to test your connection.

      Modifying your kubectl Configuration

      You can also modify your config using the kubectl config set of commands.

      To view your kubectl configuration, use the view subcommand:

      Output

      apiVersion: v1 clusters: - cluster: certificate-authority-data: DATA+OMITTED . . .

      Modifying Clusters

      To fetch a list of clusters defined in your kubeconfig, use get-clusters:

      • kubectl config get-clusters

      Output

      NAME do-nyc1-sammy

      To add a cluster to your config, use the set-cluster subcommand:

      • kubectl config set-cluster new_cluster --server=server_address --certificate-authority=path_to_certificate_authority

      To delete a cluster from your config, use delete-cluster:

      Note: This only deletes the cluster from your config and does not delete the actual Kubernetes cluster.

      • kubectl config delete-cluster

      Modifying Users

      You can perform similar operations for users using set-credentials:

      • kubectl config set-credentials username --client-certificate=/path/to/cert/file --client-key=/path/to/key/file

      To delete a user from your config, you can run unset:

      • kubectl config unset users.username

      Contexts

      A context in Kubernetes is an object that contains a set of access parameters for your cluster. It consists of a cluster, namespace, and user triple. Contexts allow you to quickly switch between different sets of cluster configuration.

      To see your current context, you can use current-context:

      • kubectl config current-context

      Output

      do-nyc1-sammy

      To see a list of all configured contexts, run get-contexts:

      • kubectl config get-contexts

      Output

      CURRENT NAME CLUSTER AUTHINFO NAMESPACE * do-nyc1-sammy do-nyc1-sammy do-nyc1-sammy-admin

      To set a context, use set-context:

      • kubectl config set-context context_name --cluster=cluster_name --user=user_name --namespace=namespace

      You can switch between contexts with use-context:

      • kubectl config use-context context_name

      Output

      Switched to context "do-nyc1-sammy"

      And you can delete a context with delete-context:

      • kubectl config delete-context context_name

      Using Namespaces

      A Namespace in Kubernetes is an abstraction that allows you to subdivide your cluster into multiple virtual clusters. By using Namespaces you can divide cluster resources among multiple teams and scope objects appropriately. For example, you can have a prod Namespace for production workloads, and a dev Namespace for development and test workloads.

      To fetch and print a list of all the Namespaces in your cluster, use get namespace:

      Output

      NAME STATUS AGE default Active 2d21h kube-node-lease Active 2d21h kube-public Active 2d21h kube-system Active 2d21h

      To set a Namespace for your current context, use set-context --current:

      • kubectl config set-context --current --namespace=namespace_name

      To create a Namespace, use create namespace:

      • kubectl create namespace namespace_name

      Output

      namespace/sammy created

      Similarly, to delete a Namespace, use delete namespace:

      Warning: Deleting a Namespace will delete everything in the Namespace, including running Deployments, Pods, and other workloads. Only run this command if you’re sure you’d like to kill whatever’s running in the Namespace or if you’re deleting an empty Namespace.

      • kubectl delete namespace namespace_name

      To fetch all Pods in a given Namespace or to perform other operations on resources in a given Namespace, make sure to include the --namespace flag:

      • kubectl get pods --namespace=namespace_name
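      To list Pods across all Namespaces at once, you can use the --all-namespaces flag (or its -A shorthand in recent kubectl versions) instead:

      • kubectl get pods --all-namespaces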

      Managing Kubernetes Resources

      General Syntax

      The general syntax for most kubectl management commands is:

      • kubectl command type name flags

      Where

      • command is an operation you’d like to perform, like create
      • type is the Kubernetes resource type, like deployment
      • name is the resource’s name, like app_frontend
      • flags are any optional flags you’d like to include

      For example, the following command retrieves information about a Deployment named app_frontend:

      • kubectl get deployment app_frontend

      Declarative Management and kubectl apply

      The recommended approach to managing workloads on Kubernetes is to rely on the cluster’s declarative design as much as possible. This means that instead of running a series of commands to create, update, delete, and restart running Pods, you should define the workloads, services, and systems you’d like to run in YAML manifest files, and provide these files to Kubernetes, which will handle the rest.

      In practice, this means using the kubectl apply command, which applies a particular configuration to a given resource. If the target resource doesn’t exist, then Kubernetes will create the resource. If the resource already exists, then Kubernetes will save the current revision, and update the resource according to the new configuration. This declarative approach exists in contrast to the imperative approach of running the kubectl create, kubectl edit, and kubectl scale set of commands to manage resources. To learn more about the different ways of managing Kubernetes resources, consult Kubernetes Object Management from the Kubernetes docs.

      Rolling out a Deployment

      For example, to deploy the sample Nginx Deployment to your cluster, use apply and provide the path to the nginx-deployment.yaml manifest file:

      • kubectl apply -f nginx-deployment.yaml

      Output

      deployment.apps/nginx-deployment created

      The -f flag is used to specify a filename, directory, or URL containing a valid configuration. If you manage your manifests with Kustomize, you can instead point the -k flag at a directory containing a kustomization.yaml file:

      • kubectl apply -k manifests_dir

      You can track the rollout status using rollout status:

      • kubectl rollout status deployment/nginx-deployment

      Output

      Waiting for deployment "nginx-deployment" rollout to finish: 1 of 2 updated replicas are available...
      deployment "nginx-deployment" successfully rolled out

      An alternative to rollout status is the kubectl get command, along with the -w (watch) flag:

      • kubectl get deployment -w

      Output

      NAME               READY   UP-TO-DATE   AVAILABLE   AGE
      nginx-deployment   0/2     2            0           3s
      nginx-deployment   1/2     2            1           3s
      nginx-deployment   2/2     2            2           3s

      Using rollout pause and rollout resume, you can pause and resume the rollout of a Deployment:

      • kubectl rollout pause deployment/nginx-deployment

      Output

      deployment.extensions/nginx-deployment paused
      • kubectl rollout resume deployment/nginx-deployment

      Output

      deployment.extensions/nginx-deployment resumed

      Modifying a Running Deployment

      If you’d like to modify a running Deployment, you can make changes to its manifest file and then run kubectl apply again to apply the update. For example, we’ll modify the nginx-deployment.yaml file to change the number of replicas from 2 to 3:

      nginx-deployment.yaml

      . . .
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: nginx
      . . .
      

      The kubectl diff command allows you to see a diff between currently running resources, and the changes proposed in the supplied configuration file:

      • kubectl diff -f nginx-deployment.yaml

      Now allow Kubernetes to perform the update using apply:

      • kubectl apply -f nginx-deployment.yaml

      Running another get deployment should confirm the addition of a third replica.

      If you run apply again without modifying the manifest file, Kubernetes will detect that no changes were made and won’t perform any action.

      Using rollout history you can see a list of the Deployment’s previous revisions:

      • kubectl rollout history deployment/nginx-deployment

      Output

      deployment.extensions/nginx-deployment
      REVISION  CHANGE-CAUSE
      1         <none>
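      To inspect the details of a particular revision, you can pass the --revision flag:

      • kubectl rollout history deployment/nginx-deployment --revision=1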

      With rollout undo, you can revert a Deployment to any of its previous revisions:

      • kubectl rollout undo deployment/nginx-deployment --to-revision=1

      Deleting a Deployment

      To delete a running Deployment, use kubectl delete:

      • kubectl delete -f nginx-deployment.yaml

      Output

      deployment.apps "nginx-deployment" deleted

      Imperative Management

      You can also use a set of imperative commands to directly manipulate and manage Kubernetes resources.

      Creating a Deployment

      Use create to create an object from a file, URL, or STDIN. Note that unlike apply, if an object with the same name already exists, the operation will fail. The --dry-run flag allows you to preview the result of the operation without actually performing it (newer kubectl releases expect a value here, such as --dry-run=client):

      • kubectl create -f nginx-deployment.yaml --dry-run

      Output

      deployment.apps/nginx-deployment created (dry-run)

      We can now create the object:

      • kubectl create -f nginx-deployment.yaml

      Output

      deployment.apps/nginx-deployment created

      Modifying a Running Deployment

      Use scale to scale the number of replicas for the Deployment from 2 to 4:

      • kubectl scale --replicas=4 deployment/nginx-deployment

      Output

      deployment.extensions/nginx-deployment scaled

      You can edit any object in-place using kubectl edit. This will open up the object’s manifest in your default editor:

      • kubectl edit deployment/nginx-deployment

      You should see the following manifest file in your editor:

      nginx-deployment

      # Please edit the object below. Lines beginning with a '#' will be ignored,
      # and an empty file will abort the edit. If an error occurs while saving this file will be
      # reopened with the relevant failures.
      #
      apiVersion: extensions/v1beta1
      kind: Deployment
      . . . 
      spec:
        progressDeadlineSeconds: 600
        replicas: 4
        revisionHistoryLimit: 10
        selector:
          matchLabels:
      . . .
      

      Change the replicas value from 4 to 2, then save and close the file.

      Now run a get to inspect the changes:

      • kubectl get deployment/nginx-deployment

      Output

      NAME               READY   UP-TO-DATE   AVAILABLE   AGE
      nginx-deployment   2/2     2            2           6m40s

      We’ve successfully scaled the Deployment back down to 2 replicas on-the-fly. You can update most of a Kubernetes object’s fields in a similar manner.

      Another useful command for modifying objects in-place is kubectl patch. Using patch, you can update an object’s fields on-the-fly without having to open up your editor. patch also allows for more complex updates with various merging and patching strategies. To learn more about these, consult Update API Objects in Place Using kubectl patch.

      The following command will patch the nginx-deployment object to update the replicas field from 2 to 4; deploy is shorthand for the deployment resource type.

      • kubectl patch deploy nginx-deployment -p '{"spec": {"replicas": 4}}'

      Output

      deployment.extensions/nginx-deployment patched

      We can now inspect the changes:

      • kubectl get deployment/nginx-deployment

      Output

      NAME               READY   UP-TO-DATE   AVAILABLE   AGE
      nginx-deployment   4/4     4            4           18m
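      The patch above used kubectl’s default strategic merge patch. The same update could also be expressed as a JSON Patch by passing the --type flag; the following is a sketch that sets the same replicas value of 4:

      • kubectl patch deploy nginx-deployment --type=json -p '[{"op": "replace", "path": "/spec/replicas", "value": 4}]'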

      You can also create a Deployment imperatively using the run command. run will create a Deployment using an image provided as a parameter (note that in newer kubectl releases, run creates a standalone Pod instead, and kubectl create deployment is the equivalent command for creating a Deployment):

      • kubectl run nginx-deployment --image=nginx --port=80 --replicas=2

      The expose command lets you quickly expose a running Deployment with a Kubernetes Service, allowing connections from outside your Kubernetes cluster:

      • kubectl expose deploy nginx-deployment --type=LoadBalancer --port=80 --name=nginx-svc

      Output

      service/nginx-svc exposed

      Here we’ve exposed the nginx-deployment Deployment as a LoadBalancer Service, opening up port 80 to external traffic and directing it to container port 80. We name the service nginx-svc. Using the LoadBalancer Service type, a cloud load balancer is automatically provisioned and configured by Kubernetes. To get the Service’s external IP address, use get:

      • kubectl get svc nginx-svc

      Output

      NAME        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
      nginx-svc   LoadBalancer   10.245.26.242   203.0.113.0   80:30153/TCP   22m

      You can access the running Nginx containers by navigating to EXTERNAL-IP in your web browser.
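      As with Deployments, a Service can also be managed declaratively. The following is a minimal sketch of a roughly equivalent manifest; the filename nginx-svc.yaml is hypothetical, and the app: nginx selector assumes the Pod labels from the sample nginx-deployment.yaml, so adjust it to match your Deployment’s labels:

      nginx-svc.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: nginx-svc
      spec:
        type: LoadBalancer
        selector:
          app: nginx
        ports:
        - protocol: TCP
          port: 80
          targetPort: 80
      

      Applying this file with kubectl apply -f nginx-svc.yaml would produce a comparable Service.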

      Inspecting Workloads and Debugging

      There are several commands you can use to get more information about workloads running in your cluster.

      Inspecting Kubernetes Resources

      kubectl get fetches a given Kubernetes resource and displays some basic information associated with it:

      • kubectl get deployment -o wide

      Output

      NAME               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
      nginx-deployment   4/4     4            4           29m   nginx        nginx    app=nginx

      Since we did not provide a Deployment name or Namespace, kubectl fetches all Deployments in the current Namespace. The -o wide flag provides additional information like CONTAINERS and IMAGES.
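      The -o flag accepts other output formats as well. For example, -o yaml prints the full object as YAML, which is useful for inspecting fields that don’t appear in the table view:

      • kubectl get deployment nginx-deployment -o yaml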

      In addition to get, you can use describe to fetch a detailed description of the resource and associated resources:

      • kubectl describe deploy nginx-deployment

      Output

      Name:                   nginx-deployment
      Namespace:              default
      CreationTimestamp:      Wed, 11 Sep 2019 12:53:42 -0400
      Labels:                 run=nginx-deployment
      Annotations:            deployment.kubernetes.io/revision: 1
      Selector:               run=nginx-deployment
      . . .

      The set of information presented will vary depending on the resource type. You can also use this command without specifying a resource name, in which case information will be provided for all resources of that type in the current Namespace.

      explain allows you to quickly pull configurable fields for a given resource type:

      • kubectl explain deployment.spec

      By appending additional fields you can dive deeper into the field hierarchy:

      • kubectl explain deployment.spec.template.spec

      Gaining Shell Access to a Container

      To gain shell access into a running container, use exec. First, find the Pod that contains the running container you’d like access to:

      • kubectl get pods

      Output

      nginx-deployment-8859878f8-7gfw9   1/1     Running   0          109m
      nginx-deployment-8859878f8-z7f9q   1/1     Running   0          109m

      Let’s exec into the first Pod. Since this Pod has only one container, we don’t need to use the -c flag to specify which container we’d like to exec into.

      • kubectl exec -i -t nginx-deployment-8859878f8-7gfw9 -- /bin/bash

      Output

      root@nginx-deployment-8859878f8-7gfw9:/#

      You now have shell access to the Nginx container. The -i flag passes STDIN to the container, and -t gives you an interactive TTY. The -- double-dash acts as a separator for the kubectl command and the command you’d like to run inside the container. In this case, we are running /bin/bash.

      To run commands inside the container without opening a full shell, omit the -i and -t flags, and substitute the command you’d like to run instead of /bin/bash:

      • kubectl exec nginx-deployment-8859878f8-7gfw9 -- ls

      Output

      bin boot dev etc home lib lib64 media . . .

      Fetching Logs

      Another useful command is logs, which prints logs for Pods and containers, including terminated containers.

      To stream logs to your terminal output, you can use the -f flag:

      • kubectl logs -f nginx-deployment-8859878f8-7gfw9

      Output

      10.244.2.1 - - [12/Sep/2019:17:21:33 +0000] "GET / HTTP/1.1" 200 612 "-" "203.0.113.0" "-"
      2019/09/16 17:21:34 [error] 6#6: *1 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.244.2.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "203.0.113.0", referrer: "http://203.0.113.0"
      . . .

      This command will keep running in your terminal until interrupted with a CTRL+C. You can omit the -f flag if you’d like to print log output and exit immediately.

      You can also use the -p flag to fetch logs for a terminated container. When this flag is used with a Pod whose container has been restarted, logs will print output from the previous, terminated container instance:

      • kubectl logs -p nginx-deployment-8859878f8-7gfw9

      The -c flag allows you to specify the container you’d like to fetch logs from, if the Pod has multiple containers. You can use the --all-containers=true flag to fetch logs from all containers in the Pod.
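      For example, assuming the container is named nginx as in the sample manifest (adjust the name if your Pod’s container differs), you could fetch its logs with:

      • kubectl logs nginx-deployment-8859878f8-7gfw9 -c nginx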

      Port Forwarding and Proxying

      To gain network access to a Pod, you can use port-forward:

      • sudo kubectl port-forward pod/nginx-deployment-8859878f8-7gfw9 80:80

      Output

      Forwarding from 127.0.0.1:80 -> 80
      Forwarding from [::1]:80 -> 80

      In this case we use sudo because local port 80 is a protected port. For most other ports you can omit sudo and run the kubectl command as your system user.

      Here we forward local port 80 (preceding the colon) to the Pod’s container port 80 (after the colon).

      You can also use deploy/nginx-deployment as the resource type and name to forward to. If you do this, the local port will be forwarded to the Pod selected by the Deployment.
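      For example, the following sketch forwards unprivileged local port 8080 (an arbitrary choice that doesn’t require sudo) to the Deployment’s container port 80:

      • kubectl port-forward deploy/nginx-deployment 8080:80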

      The proxy command can be used to access the Kubernetes API server locally:

      • kubectl proxy --port=8080

      Output

      Starting to serve on 127.0.0.1:8080

      In another shell, use curl to explore the API:

      • curl http://localhost:8080/api/

      Output

      {
        "kind": "APIVersions",
        "versions": [
          "v1"
        ],
        "serverAddressByClientCIDRs": [
          {
            "clientCIDR": "0.0.0.0/0",
            "serverAddress": "203.0.113.0:443"
          }
        ]
      }
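      You can also reach namespaced resources through the proxy. For example, to list the Pods in the default Namespace:

      • curl http://localhost:8080/api/v1/namespaces/default/pods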

      Close the proxy by hitting CTRL-C.

      Conclusion

      This guide covers some of the more common kubectl commands you may use when managing a Kubernetes cluster and workloads you’ve deployed to it.

      You can learn more about kubectl by consulting the official Kubernetes reference documentation.

      There are many more commands and variations that you may find useful as part of your work with kubectl. To learn more about all of your available options, you can run:

      • kubectl --help


