Kubernetes Installation Guide (non-HA)

[TOC]

Cluster Information

1. Node planning

The nodes used to deploy the k8s cluster fall into the following two roles by purpose:

  • master: the cluster's master node, where the cluster is initialized; at least 2 CPUs / 4 GB RAM
  • slave: the cluster's worker (slave) nodes; there can be several; at least 2 CPUs / 4 GB RAM

To demonstrate adding slave nodes, this example deploys one master plus two slaves, planned as follows:

Hostname     Node IP         Role    Components
k8s-master   192.168.20.41   master  etcd, kube-apiserver, kube-controller-manager, kubectl, kubeadm, kubelet, kube-proxy, flannel
k8s-slave1   192.168.20.42   slave   kubectl, kubelet, kube-proxy, flannel
k8s-slave2   192.168.20.43   slave   kubectl, kubelet, kube-proxy, flannel

2. Component versions

Component    Version                            Notes
CentOS       7.8.2003
Kernel       Linux 3.10.0-1127.10.1.el7.x86_64
etcd         3.4.13-0                           deployed as a container; data is stored on a local path by default
coredns      1.7.0
kubeadm      v1.19.8
kubectl      v1.19.8
kubelet      v1.19.8
kube-proxy   v1.19.8
flannel      v0.11.0

Pre-installation Preparation

1. Configure hosts resolution

Nodes: run on all nodes (k8s-master, k8s-slave)

  • Set the hostname
    The hostname may only contain lowercase letters, digits, "." and "-", and must start and end with a lowercase letter or digit.
# on the master node
$ hostnamectl set-hostname k8s-master   # set the master node's hostname

# on the slave-1 node
$ hostnamectl set-hostname k8s-slave1   # set slave1's hostname

# on the slave-2 node
$ hostnamectl set-hostname k8s-slave2   # set slave2's hostname
Re-open the terminal (run `bash`) so the new hostname takes effect.
  • Add hosts entries
$ cat >>/etc/hosts<<EOF
192.168.20.41 k8s-master
192.168.20.42 k8s-slave1
192.168.20.43 k8s-slave2
EOF
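
Optionally, verify that the new names resolve from each node. This is a minimal sanity-check sketch (not part of the original steps); it just pings each entry once:

# run on any node after /etc/hosts has been updated on it
for h in k8s-master k8s-slave1 k8s-slave2; do ping -c1 -W1 $h >/dev/null && echo "$h OK" || echo "$h FAILED"; done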

2. Adjust system settings

Nodes: run on all master and slave nodes (k8s-master, k8s-slave)

The steps below use k8s-master as the example; run the same commands on the other nodes (substituting each machine's real IP and hostname).

  • Open security-group ports

If there are no security-group restrictions between nodes (machines on the internal network can reach each other freely), this can be skipped. Otherwise, at least the following must be reachable:
k8s-master node: TCP 6443, 2379, 2380, 60080, 60081; all UDP ports open
k8s-slave nodes: all UDP ports open

  • Set the iptables FORWARD policy
iptables -P FORWARD ACCEPT
  • Disable swap
swapoff -a
# prevent the swap partition from being mounted automatically at boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
  • Disable SELinux and the firewall
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld && systemctl stop firewalld
  • Adjust kernel parameters (a quick verification sketch follows at the end of this section)
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
  • Configure yum repositories
$ curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
$ curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
$ yum clean all && yum makecache
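
After the steps above, it is worth confirming that the kernel settings actually took effect (a small check sketch, not part of the original steps):

# the bridge module should be loaded and all three values below should be 1
$ lsmod | grep br_netfilter
$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward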

3. Install Docker

Nodes: all nodes

## list all available versions
$ yum list docker-ce --showduplicates | sort -r
## to install an older version: yum install docker-ce-cli-18.09.9-3.el7  docker-ce-18.09.9-3.el7
## install a specific recent version from the repo
$ yum install docker-ce-20.10.6 -y

## configure the Docker registry mirror
$ mkdir -p /etc/docker

sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "insecure-registries": ["192.168.10.50:5000"],
  "registry-mirrors": ["https://8xpk5wnt.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
## enable and start docker
$ systemctl enable docker && systemctl start docker

systemctl status docker    # check the service status
systemctl daemon-reload    # reload configuration
systemctl restart docker   # restart the service
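
Since kubeadm warns when Docker's cgroup driver is not systemd, a quick check after restarting Docker is useful (a small sketch, not in the original steps):

# should print: Cgroup Driver: systemd
$ docker info 2>/dev/null | grep -i "cgroup driver"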

Deploying Kubernetes

1. Install kubeadm, kubelet and kubectl

Nodes: run on all master and slave nodes (k8s-master, k8s-slave)

$ yum install -y kubelet-1.19.8 kubeadm-1.19.8 kubectl-1.19.8 --disableexcludes=kubernetes
## check the kubeadm version
$ kubeadm version
## enable kubelet at boot
$ systemctl enable kubelet 

2. Generate the init configuration file

Nodes: run only on the master node (k8s-master)

$ kubeadm config print init-defaults > kubeadm.yaml
$ cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.20.41  # apiserver address; single-master setup, so use the master node's internal IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # switch to the Aliyun image registry
kind: ClusterConfiguration
kubernetesVersion: v1.19.8   # Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16  # Pod CIDR; the flannel plugin expects this range
  serviceSubnet: 10.96.0.0/12
scheduler: {}

The fields in the manifest above are documented in several places; for a complete reference of these resource objects and their properties, see the godoc at https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2.
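
Optionally, the edited config can be exercised before the real initialization; kubeadm supports a dry-run mode that prints what would be done without changing the node (a sketch):

# validates kubeadm.yaml and prints the planned actions/manifests only
$ kubeadm init --config kubeadm.yaml --dry-run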

3. Pre-pull the images

Nodes: run only on the master node (k8s-master)

  # list the images that will be used; if everything is fine, you will get a list like the following
$ kubeadm config images list --config kubeadm.yaml
registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0
registry.aliyuncs.com/google_containers/pause:3.4.1
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
  # pre-pull the images to the local machine
$ kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.4.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
failed to pull image "registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0": output: Error response from daemon: pull access denied for registry.aliyuncs.com/google_containers/coredns/coredns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher


docker pull coredns/coredns   # workaround when the coredns pull above fails
docker tag coredns/coredns "registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0"  # re-tag it to the name kubeadm expects


vim /etc/resolv.conf          # DNS configuration
nameserver 114.114.114.114    # add this line
Save and quit (:wq).
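
Optionally confirm that all required images, including the re-tagged coredns, are now present locally (a quick check sketch):

# every image from `kubeadm config images list` should appear here
$ docker images | grep -E "google_containers|coredns"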

4. Initialize the master node

Tip: if you need to re-run the initialization, it can help to batch-stop and batch-remove containers first.

Stop all containers by NAME:

docker stop `docker ps |  awk 'NR!=1{print $NF}'`

Stop all containers by CONTAINER ID:

docker stop `docker ps |  awk 'NR!=1{print $1}'`

Remove all containers by NAME:

docker rm `docker ps -a |  awk 'NR!=1{print $NF}'`

Remove all containers by CONTAINER ID:

docker rm `docker ps -a |  awk 'NR!=1{print $1}'`

Images can be batch-deleted the same way.

Nodes: run only on the master node (k8s-master)

$ kubeadm init --config kubeadm.yaml

If the initialization succeeds, output like the following is printed at the end:


[root@k8s-master kubeadm]# kubeadm init --config kubeadm.yaml
W0620 20:41:00.888236    8381 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.8
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.20.41]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.20.41 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.20.41 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 72.005487 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.20.41:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:d69c0034ff7f5873dc0ff355215cf6a2df8b577a8aed64947e1ab77a047ba993




Next, follow the instructions printed above to configure kubectl client authentication:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

⚠️ Note: at this point `kubectl get nodes` should show the node as NotReady, because the network plugin has not been configured yet.

If an error occurs during initialization, adjust according to the error message, run `kubeadm reset`, and then run the init again.

5. Add the slave nodes to the cluster

Nodes: run on all slave nodes (k8s-slave)
On each slave node, run the following command. It is the command printed by `kubeadm init` on success; replace it with the one actually printed by your init.

kubeadm join 192.168.20.41:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:9da5f108ff5ffc10612a8f33ab3fb8d7330596bf6d1892c73bcf19d52ca85cdf

If you lose the join command, it can be regenerated with:

$ kubeadm token create --print-join-command
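
If you only need the discovery-token CA cert hash (for example to assemble the join command by hand), it can also be derived from the cluster CA on the master; this is the approach documented upstream (shown here as a sketch):

$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
    openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'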

# Troubleshooting errors during master initialization
[root@master ~]# kubeadm init --kubernetes-version=v1.20.0 --pod-network-cidr=172.22.0.0/16 --apiserver-advertise-address=10.0.0.7
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.0. Latest validated version: 19.03
        [WARNING Hostname]: hostname "master" could not be reached
        [WARNING Hostname]: hostname "master": lookup master on 223.6.6.6:53: no such host
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

# Fixes

Error 1:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
Fix: add the line "exec-opts": ["native.cgroupdriver=systemd"] to /etc/docker/daemon.json:
[root@master ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://q2hy3fzi.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Error 2:
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.0. Latest validated version: 19.03
Fix:
This warning can be ignored; it only means the Docker version is newer than the last validated one.

Error 3:
[WARNING Hostname]: hostname "master" could not be reached
Fix:
Edit the hosts file on all nodes:
[root@master ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.7 master
10.0.0.17 node1
10.0.0.27 node2

Error 4:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Fix:
Disable swap and comment out the swap entry in /etc/fstab:
[root@master ~]# swapoff -a
[root@master ~]# vim /etc/fstab
# /etc/fstab
# Created by anaconda on Tue Jul 21 11:31:26 2020
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
UUID=92d0df55-7937-4cef-9df7-daf5cd77f916 /                       xfs     defaults        0 0
UUID=6f2be111-89b2-4f3a-a015-fe3e6c5290fc /boot                   xfs     defaults        0 0
UUID=47913215-9063-4ff4-a56b-0df738def1bc /data                   xfs     defaults        0 0
#UUID=7ca8f821-027c-44d6-8eac-a4a83b409087 swap                    swap    defaults        0 0
/etc/kubernetes/manifests   # directory holding the static Pod YAML manifests

Aside: creating a symlink

ln -s <source> <link-name>

For example, to create a shortcut to /usr/share/phpmyadmin (a directory or file) under /etc/www:

ln -s /usr/share/phpmyadmin /etc/www

6. Install the flannel plugin

Nodes: run only on the master node (k8s-master). Flannel is the CNI plugin used here.

  • Download the flannel YAML manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  • Edit the config to specify the NIC name; around line 190 of the file, add one line:
$ vi kube-flannel.yml
...      
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth0  # if the machine has multiple NICs, specify the internal one; if omitted, the first NIC is used
        resources:
          requests:
            cpu: "100m"


  • Install the flannel network plugin (the image also needs to be pulled on the slave nodes)
# pull the image first; this can be slow from inside China
$ docker pull quay.io/coreos/flannel:v0.14.0-amd64
# install flannel (run on the deploy/master node)
$ kubectl apply -f kube-flannel.yml 
# to delete the flannel resources:
#  $ kubectl delete -f kube-flannel.yml

#  $ kubectl -n kube-system get pods -o wide   # check the pods
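
After applying the manifest, one kube-flannel pod per node should reach Running (a quick check sketch; the label below matches the upstream kube-flannel.yml, adjust it if your manifest differs):

# list the flannel DaemonSet pods and the nodes they run on
$ kubectl -n kube-system get pods -l app=flannel -o wide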

7. Make the master node schedulable (optional)

Nodes: k8s-master

By default, after deployment the master node cannot schedule business pods. To allow the master node to take part in pod scheduling, run:

$ kubectl taint node k8s-master node-role.kubernetes.io/master:NoSchedule-

System components will be deployed to the master node later in the course, so it is recommended to make the k8s-master node schedulable here.
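
If you later want the master to stop scheduling business pods again, the taint can simply be re-added (a sketch of the reverse operation):

# restore the NoSchedule taint on the master
$ kubectl taint node k8s-master node-role.kubernetes.io/master:NoSchedule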

8. Set up kubectl auto-completion

Nodes: k8s-master

kubectl command auto-completion:
yum install -y bash-completion

source /usr/share/bash-completion/bash_completion

source <(kubectl completion bash)
The above only takes effect for the current session. To make it take effect on every login, append the following to /root/.bashrc:

source /usr/share/bash-completion/bash_completion 
source <(kubectl completion bash)

Or, equivalently, run:

source <(kubectl completion bash) 
echo "source <(kubectl completion bash)" >> ~/.bashrc
echo "source <(helm completion bash)" >> ~/.bashrc

9. Verify the cluster

Nodes: run on the master node (k8s-master)

$ kubectl get nodes  # check that all cluster nodes are Ready

Create a test nginx service:

$ kubectl run  test-nginx --image=nginx:alpine

kubectl -n kube-system get pods   # list the system pods

kubectl -n kube-system logs kube-flannel-ds-amd64-htbf6   # view a flannel pod's logs

Check that the pod was created successfully, then access the pod IP to verify it is reachable:

$ kubectl get po -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
test-nginx-5bd8859b98-5nnnw   1/1     Running   0          9s    10.244.1.2   k8s-slave1   <none>           <none>
$ curl 10.244.1.2
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
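
Once the test succeeds, the test workload can be removed (a small optional cleanup; depending on the kubectl version, `kubectl run` creates either a bare Pod or a Deployment, so delete whichever object exists):

$ kubectl delete pod test-nginx 2>/dev/null || kubectl delete deployment test-nginx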

10. Deploy the dashboard

  • Deploy the service
# the approach below is recommended
$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
$ vi recommended.yaml
# change the Service to NodePort type, around line 45 of the file
......
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort  # add type: NodePort to turn this into a NodePort Service
......
  • Apply the modified manifest and check the access address; in this example the NodePort is 30133
$ kubectl apply -f recommended.yaml
$ kubectl -n kubernetes-dashboard get svc
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.105.62.124   <none>        8000/TCP        31m
kubernetes-dashboard        NodePort    10.103.74.46    <none>        443:30133/TCP   31m 
  • Create an admin ServiceAccount bound to the cluster-admin role for logging in to the dashboard
$ vi dashboard-admin.conf
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard

$ kubectl apply -f dashboard-admin.conf
$ kubectl -n kubernetes-dashboard get secret |grep admin-token
admin-token-fqdpf                  kubernetes.io/service-account-token   3      7m17s
# use this command to get the token, then paste it into the dashboard login page
$ kubectl -n kubernetes-dashboard get secret admin-token-fqdpf -o jsonpath={.data.token}|base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1rb2xHWHMwbWFPMjJaRzhleGRqaExnVi1BLVNRc2txaEhETmVpRzlDeDQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1mcWRwZiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjYyNWMxNjJlLTQ1ZG...
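
The dashboard should then be reachable in a browser at https://<any-node-ip>:30133 (accept the self-signed certificate) and the token above can be pasted on the login page. A quick reachability check from the command line (sketch; the IP below is this example's master node):

$ curl -k -s -o /dev/null -w "%{http_code}\n" https://192.168.20.41:30133/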

11. Clean up the environment

If you ran into other problems while installing the cluster, you can reset it with the following commands:

# run on every cluster node
kubeadm reset
ifconfig cni0 down && ip link delete cni0
ifconfig flannel.1 down && ip link delete flannel.1
rm -rf /run/flannel/subnet.env
rm -rf /var/lib/cni/
mv /etc/kubernetes/ /tmp
mv /var/lib/etcd /tmp
mv ~/.kube /tmp
iptables -F
iptables -t nat -F
ipvsadm -C
ip link del kube-ipvs0
ip link del dummy0