Upgrading a Multi-Master Kubernetes Cluster

Upgrading the control plane nodes

  • Refresh the yum repo cache on the first control plane node

    [root@k8s-prod-master1 ~]# yum makecache fast
  • Check the current Kubernetes version

    [root@k8s-prod-master1 ~]# kubeadm version
    kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"f3abc15296f3a3f54e4ee42e830c61047b13895f", GitTreeState:"clean", BuildDate:"2021-01-13T13:18:52Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
  • List the available upgrade versions

    [root@k8s-prod-master1 ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes

    From the versions listed, we choose 1.16.15-0 as the upgrade target.

    The upgrade rule: you may upgrade to any newer patch release within the current minor version, or to any release of the next minor version (minor + 1). For example, with the current version 1.15.5 (minor version 15), you can upgrade to 1.15.5+ or to 1.16.x, but you cannot skip a minor version, i.e. you cannot jump straight to 1.17.x.
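
    As a small convenience (not part of the original procedure), the same yum listing can be filtered down to just the candidate minor version; a minimal sketch:

    # show only the 1.16.x kubeadm builds from the kubernetes repo
    [root@k8s-prod-master1 ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes | grep '1\.16'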

  • Upgrade kubeadm to 1.16.15 on the first control plane node

    [root@k8s-prod-master1 ~]# yum install -y kubeadm-1.16.15-0 --disableexcludes=kubernetes
  • Verify the upgrade plan

    [root@k8s-prod-master1 ~]# kubeadm upgrade plan
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade] Fetching available versions to upgrade to
    [upgrade/versions] Cluster version: v1.15.5
    [upgrade/versions] kubeadm version: v1.16.15
    I0518 10:38:56.544764   20379 version.go:251] remote version is much newer: v1.21.1; falling back to: stable-1.16
    [upgrade/versions] Latest stable version: v1.16.15
    [upgrade/versions] Latest version in the v1.15 series: v1.15.12
    
    Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
    COMPONENT   CURRENT       AVAILABLE
    Kubelet     6 x v1.15.0   v1.15.12
    
    Upgrade to the latest version in the v1.15 series:
    
    COMPONENT            CURRENT   AVAILABLE
    API Server           v1.15.5   v1.15.12
    Controller Manager   v1.15.5   v1.15.12
    Scheduler            v1.15.5   v1.15.12
    Kube Proxy           v1.15.5   v1.15.12
    CoreDNS              1.3.1     1.6.2
    Etcd                 3.3.10    3.3.10
    
    You can now apply the upgrade by executing the following command:
    
            kubeadm upgrade apply v1.15.12
    
    _____________________________________________________________________
    
    Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
    COMPONENT   CURRENT       AVAILABLE
    Kubelet     6 x v1.15.0   v1.16.15
    
    Upgrade to the latest stable version:
    
    COMPONENT            CURRENT   AVAILABLE
    API Server           v1.15.5   v1.16.15
    Controller Manager   v1.15.5   v1.16.15
    Scheduler            v1.15.5   v1.16.15
    Kube Proxy           v1.15.5   v1.16.15
    CoreDNS              1.3.1     1.6.2
    Etcd                 3.3.10    3.3.15-0
    
    You can now apply the upgrade by executing the following command:
    
            kubeadm upgrade apply v1.16.15
    
    _____________________________________________________________________

    The output above shows that we can upgrade either to 1.15.12 or to 1.16.15, and it lists the component versions each target requires.
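
    Besides kubeadm upgrade plan, kubeadm can also preview the changes before anything is applied; a hedged sketch (output omitted here):

    # show the static pod manifest changes an upgrade to v1.16.15 would make
    [root@k8s-prod-master1 ~]# kubeadm upgrade diff v1.16.15
    # or walk through the whole apply without changing anything
    [root@k8s-prod-master1 ~]# kubeadm upgrade apply v1.16.15 --dry-run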

  • Based on the component versions listed above, pre-pull the images (an image-list check follows this block)

    [root@k8s-prod-master1 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-apiserver:v1.16.15
    [root@k8s-prod-master1 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-controller-manager:v1.16.15
    [root@k8s-prod-master1 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-scheduler:v1.16.15
    [root@k8s-prod-master1 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-proxy:v1.16.15
    [root@k8s-prod-master1 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/coredns:1.6.2
    [root@k8s-prod-master1 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/etcd:3.3.15-0
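
    To double-check that nothing is missing from the pull list above, kubeadm can print exactly which images the target version needs; a minimal sketch (it prints the default k8s.gcr.io names, which then map onto the harbor.olavoice.com/k8s.gcr.io prefix used here):

    [root@k8s-prod-master1 ~]# kubeadm config images list --kubernetes-version v1.16.15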
  • If the cluster was initialized with an image registry other than harbor.olavoice.com, the images must be re-tagged to match. The example below uses gcr.azk8s.cn, a mirror that is no longer accessible (a combined pull-and-retag sketch follows this block).

    [root@k8s-prod-master1 ~]# docker tag harbor.olavoice.com/k8s.gcr.io/kube-apiserver:v1.16.15 gcr.azk8s.cn/google_containers/kube-apiserver:v1.16.15
    [root@k8s-prod-master1 ~]# docker tag harbor.olavoice.com/k8s.gcr.io/kube-controller-manager:v1.16.15 gcr.azk8s.cn/google_containers/kube-controller-manager:v1.16.15
    [root@k8s-prod-master1 ~]# docker tag harbor.olavoice.com/k8s.gcr.io/kube-scheduler:v1.16.15 gcr.azk8s.cn/google_containers/kube-scheduler:v1.16.15
    [root@k8s-prod-master1 ~]# docker tag harbor.olavoice.com/k8s.gcr.io/kube-proxy:v1.16.15 gcr.azk8s.cn/google_containers/kube-proxy:v1.16.15
    [root@k8s-prod-master1 ~]# docker tag harbor.olavoice.com/k8s.gcr.io/coredns:1.6.2 gcr.azk8s.cn/google_containers/coredns:1.6.2
    [root@k8s-prod-master1 ~]# docker tag harbor.olavoice.com/k8s.gcr.io/etcd:3.3.15-0 gcr.azk8s.cn/google_containers/etcd:3.3.15-0
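
    The twelve pull/tag commands above can also be collapsed into a short loop. This is only a sketch under the assumptions of this article: harbor.olavoice.com/k8s.gcr.io is the mirror we can actually pull from, and gcr.azk8s.cn/google_containers is the prefix the cluster was initialized with; adjust both to your own environment.

    SRC=harbor.olavoice.com/k8s.gcr.io          # registry we can actually pull from
    DST=gcr.azk8s.cn/google_containers          # prefix recorded at cluster init time
    for img in kube-apiserver:v1.16.15 kube-controller-manager:v1.16.15 \
               kube-scheduler:v1.16.15 kube-proxy:v1.16.15 \
               coredns:1.6.2 etcd:3.3.15-0; do
        docker pull "$SRC/$img"
        docker tag  "$SRC/$img" "$DST/$img"
    done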
  • On the first control plane node, run kubeadm upgrade apply v1.16.15 to upgrade the Kubernetes components to 1.16.15 (a quick verification sketch follows the output)

    [root@k8s-prod-master1 ~]# kubeadm upgrade apply v1.16.15
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade/version] You have chosen to change the cluster version to "v1.16.15"
    [upgrade/versions] Cluster version: v1.15.5
    [upgrade/versions] kubeadm version: v1.16.15
    [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
    [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
    [upgrade/prepull] Prepulling image for component etcd.
    [upgrade/prepull] Prepulling image for component kube-apiserver.
    [upgrade/prepull] Prepulling image for component kube-controller-manager.
    [upgrade/prepull] Prepulling image for component kube-scheduler.
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
    [apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
    [apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
    [apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
    [apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
    [upgrade/prepull] Prepulled image for component kube-controller-manager.
    [upgrade/prepull] Prepulled image for component etcd.
    [upgrade/prepull] Prepulled image for component kube-apiserver.
    [upgrade/prepull] Prepulled image for component kube-scheduler.
    [upgrade/prepull] Successfully prepulled the images for all the control plane components
    [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.16.15"...
    Static pod: kube-apiserver-k8s-prod-master1 hash: 3063ebeaaeeb0b0ae290b42909feed15
    Static pod: kube-controller-manager-k8s-prod-master1 hash: e35efcd0b54080a8e2537ed9c174e4cd
    Static pod: kube-scheduler-k8s-prod-master1 hash: c888f571a5ca45c57074e8bd29d45798
    [upgrade/etcd] Upgrading to TLS for etcd
    Static pod: etcd-k8s-prod-master1 hash: 2c48bf5edd224ad10bf56cd5ead33095
    [upgrade/staticpods] Preparing for "etcd" upgrade
    [upgrade/staticpods] Renewing etcd-server certificate
    [upgrade/staticpods] Renewing etcd-peer certificate
    [upgrade/staticpods] Renewing etcd-healthcheck-client certificate
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-48-44/etcd.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: etcd-k8s-prod-master1 hash: 2c48bf5edd224ad10bf56cd5ead33095
    Static pod: etcd-k8s-prod-master1 hash: a576dcf3cdae038d4cd3520500c0de38
    [apiclient] Found 3 Pods for label selector component=etcd
    [upgrade/staticpods] Component "etcd" upgraded successfully!
    [upgrade/etcd] Waiting for etcd to become available
    [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests551642047"
    [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
    [upgrade/staticpods] Renewing apiserver certificate
    [upgrade/staticpods] Renewing apiserver-kubelet-client certificate
    [upgrade/staticpods] Renewing front-proxy-client certificate
    [upgrade/staticpods] Renewing apiserver-etcd-client certificate
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-48-44/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-apiserver-k8s-prod-master1 hash: 3063ebeaaeeb0b0ae290b42909feed15
    Static pod: kube-apiserver-k8s-prod-master1 hash: 3a6e3625419d59fc23a626ee48b98ae5
    [apiclient] Found 3 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
    [upgrade/staticpods] Renewing controller-manager.conf certificate
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-48-44/kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-controller-manager-k8s-prod-master1 hash: e35efcd0b54080a8e2537ed9c174e4cd
    Static pod: kube-controller-manager-k8s-prod-master1 hash: 4f8382a5e369e7caf52148006eb21dac
    [apiclient] Found 3 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
    [upgrade/staticpods] Renewing scheduler.conf certificate
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-48-44/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-scheduler-k8s-prod-master1 hash: c888f571a5ca45c57074e8bd29d45798
    Static pod: kube-scheduler-k8s-prod-master1 hash: 92ada396a5fce07cd05526431ce7ba3e
    [apiclient] Found 3 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.16.15". Enjoy!
    
    [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
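
    A quick post-apply check that the control plane pods on this node really restarted with the new images; a minimal sketch, assuming the static pod names shown in the output above:

    [root@k8s-prod-master1 ~]# kubectl -n kube-system get pods -o wide | grep k8s-prod-master1
    [root@k8s-prod-master1 ~]# kubectl -n kube-system get pod kube-apiserver-k8s-prod-master1 \
        -o jsonpath='{.spec.containers[0].image}{"\n"}'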
  • Refresh the yum repo cache on the second control plane node

    [root@k8s-prod-master2 ~]# yum makecache fast
  • Upgrade kubeadm to 1.16.15 on the second control plane node

    [root@k8s-prod-master2 ~]# yum install -y kubeadm-1.16.15-0 --disableexcludes=kubernetes
  • Pre-pull the images

    [root@k8s-prod-master2 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-apiserver:v1.16.15
    [root@k8s-prod-master2 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-controller-manager:v1.16.15
    [root@k8s-prod-master2 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-scheduler:v1.16.15
    [root@k8s-prod-master2 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-proxy:v1.16.15
    [root@k8s-prod-master2 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/coredns:1.6.2
    [root@k8s-prod-master2 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/etcd:3.3.15-0
  • On the second control plane node, run kubeadm upgrade node to upgrade the Kubernetes components to 1.16.15 (a rollback note follows the output)

    [root@k8s-prod-master2 ~]# kubeadm upgrade node
    [upgrade] Reading configuration from the cluster...
    [upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.16.15"...
    Static pod: kube-apiserver-k8s-prod-master2 hash: 41494b6b716efb9d74599b8f51e1a7bb
    Static pod: kube-controller-manager-k8s-prod-master2 hash: e35efcd0b54080a8e2537ed9c174e4cd
    Static pod: kube-scheduler-k8s-prod-master2 hash: c888f571a5ca45c57074e8bd29d45798
    [upgrade/etcd] Upgrading to TLS for etcd
    Static pod: etcd-k8s-prod-master2 hash: c29941cc1aa16a5fc1f0c505075d5069
    [upgrade/staticpods] Preparing for "etcd" upgrade
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-53-12/etcd.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: etcd-k8s-prod-master2 hash: c29941cc1aa16a5fc1f0c505075d5069
    Static pod: etcd-k8s-prod-master2 hash: 7c0e2e3107c5919fa31561ab80d4a6d1
    [apiclient] Found 3 Pods for label selector component=etcd
    [upgrade/staticpods] Component "etcd" upgraded successfully!
    [upgrade/etcd] Waiting for etcd to become available
    [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests450070159"
    [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-53-12/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-apiserver-k8s-prod-master2 hash: 41494b6b716efb9d74599b8f51e1a7bb
    Static pod: kube-apiserver-k8s-prod-master2 hash: 41494b6b716efb9d74599b8f51e1a7bb
    Static pod: kube-apiserver-k8s-prod-master2 hash: 41494b6b716efb9d74599b8f51e1a7bb
    Static pod: kube-apiserver-k8s-prod-master2 hash: 41494b6b716efb9d74599b8f51e1a7bb
    Static pod: kube-apiserver-k8s-prod-master2 hash: 3b31616504ce6e92cf5ed314dce90f74
    [apiclient] Found 3 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-53-12/kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-controller-manager-k8s-prod-master2 hash: e35efcd0b54080a8e2537ed9c174e4cd
    Static pod: kube-controller-manager-k8s-prod-master2 hash: 4f8382a5e369e7caf52148006eb21dac
    [apiclient] Found 3 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-53-12/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-scheduler-k8s-prod-master2 hash: c888f571a5ca45c57074e8bd29d45798
    Static pod: kube-scheduler-k8s-prod-master2 hash: 92ada396a5fce07cd05526431ce7ba3e
    [apiclient] Found 3 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [upgrade] The control plane instance for this node was successfully updated!
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [upgrade] The configuration for this node was successfully updated!
    [upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
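
    Note that, as the output shows, kubeadm backs up the old static pod manifests before replacing them. If a component fails to come back, the backed-up manifest can usually be copied back into /etc/kubernetes/manifests/ as a manual rollback; a sketch, using the timestamped directory from the output above (the name will differ on your run):

    [root@k8s-prod-master2 ~]# ls /etc/kubernetes/tmp/
    [root@k8s-prod-master2 ~]# ls /etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-53-12/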
  • Refresh the yum repo cache on the third control plane node

    [root@k8s-prod-master3 ~]# yum makecache fast
  • Upgrade kubeadm to 1.16.15 on the third control plane node

    [root@k8s-prod-master3 ~]# yum install -y kubeadm-1.16.15-0 --disableexcludes=kubernetes
  • Pre-pull the images

    [root@k8s-prod-master3 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-apiserver:v1.16.15
    [root@k8s-prod-master3 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-controller-manager:v1.16.15
    [root@k8s-prod-master3 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-scheduler:v1.16.15
    [root@k8s-prod-master3 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-proxy:v1.16.15
    [root@k8s-prod-master3 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/coredns:1.6.2
    [root@k8s-prod-master3 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/etcd:3.3.15-0
  • On the third control plane node, run kubeadm upgrade node to upgrade the Kubernetes components to 1.16.15 (an etcd sanity check follows the output)

    [root@k8s-prod-master3 ~]# kubeadm upgrade node
    [upgrade] Reading configuration from the cluster...
    [upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.16.15"...
    Static pod: kube-apiserver-k8s-prod-master3 hash: 67d0682f25ed725533617a42eac46523
    Static pod: kube-controller-manager-k8s-prod-master3 hash: e35efcd0b54080a8e2537ed9c174e4cd
    Static pod: kube-scheduler-k8s-prod-master3 hash: c888f571a5ca45c57074e8bd29d45798
    [upgrade/etcd] Upgrading to TLS for etcd
    Static pod: etcd-k8s-prod-master3 hash: 06486501aecbdefad0781a265587e663
    [upgrade/staticpods] Preparing for "etcd" upgrade
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-57-06/etcd.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: etcd-k8s-prod-master3 hash: 06486501aecbdefad0781a265587e663
    Static pod: etcd-k8s-prod-master3 hash: 038dd587665a6a1ab3259c2775cda1b3
    [apiclient] Found 3 Pods for label selector component=etcd
    [upgrade/staticpods] Component "etcd" upgraded successfully!
    [upgrade/etcd] Waiting for etcd to become available
    {"level":"warn","ts":"2021-05-18T10:57:24.548+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///<https://172.16.20.54:2379>","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
    [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests480460980"
    [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-57-06/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-apiserver-k8s-prod-master3 hash: 67d0682f25ed725533617a42eac46523
    Static pod: kube-apiserver-k8s-prod-master3 hash: 986c565fb0e7702ad9bc4f310db14929
    [apiclient] Found 3 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-57-06/kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-controller-manager-k8s-prod-master3 hash: e35efcd0b54080a8e2537ed9c174e4cd
    Static pod: kube-controller-manager-k8s-prod-master3 hash: 4f8382a5e369e7caf52148006eb21dac
    [apiclient] Found 3 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-57-06/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-scheduler-k8s-prod-master3 hash: c888f571a5ca45c57074e8bd29d45798
    Static pod: kube-scheduler-k8s-prod-master3 hash: 92ada396a5fce07cd05526431ce7ba3e
    [apiclient] Found 3 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [upgrade] The control plane instance for this node was successfully updated!
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [upgrade] The configuration for this node was successfully updated!
    [upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
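
    The transient "context deadline exceeded" warning during "[upgrade/etcd] Waiting for etcd to become available" is normally just this etcd member restarting under its new static pod; kubeadm retries, and the rest of the output shows the upgrade completing. A quick sanity check afterwards, a minimal sketch:

    [root@k8s-prod-master3 ~]# kubectl -n kube-system get pods -l component=etcd -o wide
    [root@k8s-prod-master3 ~]# kubectl -n kube-system get pod etcd-k8s-prod-master3 \
        -o jsonpath='{.spec.containers[0].image}{"\n"}'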
  • Check the cluster status

    [root@k8s-prod-master2 ~]# kubectl get nodes
    NAME               STATUS   ROLES    AGE    VERSION
    k8s-prod-master1   Ready    master   509d   v1.15.0
    k8s-prod-master2   Ready    master   509d   v1.15.0
    k8s-prod-master3   Ready    master   509d   v1.15.0
    olami-asr2         Ready    <none>   509d   v1.15.0
    olami-k8s-node1    Ready    <none>   447d   v1.15.0
    olami-nlp-model    Ready    <none>   145d   v1.15.0
  • Drain the k8s-prod-master1 node, marking it unschedulable and evicting its workloads (see the note on local storage after the output)

    [root@k8s-prod-master2 ~]# kubectl drain k8s-prod-master1 --ignore-daemonsets
    node/k8s-prod-master1 cordoned
    error: unable to drain node "k8s-prod-master1", aborting command...
    
    There are pending nodes to be drained:
    k8s-prod-master1
    error: cannot delete Pods with local storage (use --delete-local-data to override): kubesphere-system/redis-ha-haproxy-75776f44c4-lk8tq
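
    The drain aborts because the redis-ha-haproxy pod uses local (emptyDir) storage, but the node has still been cordoned ("node/k8s-prod-master1 cordoned"), which is enough here since only the kubelet and kubectl packages are upgraded next. If you do want such pods evicted anyway, kubectl's own suggestion applies; a sketch (note the flag discards the pods' emptyDir data):

    [root@k8s-prod-master2 ~]# kubectl drain k8s-prod-master1 --ignore-daemonsets --delete-local-data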
  • Upgrade kubectl and kubelet to 1.16.15 on the first control plane node (a quick version check follows)

    [root@k8s-prod-master1 ~]# yum install -y kubelet-1.16.15-0 kubectl-1.16.15-0 --disableexcludes=kubernetes
    [root@k8s-prod-master1 ~]# systemctl daemon-reload && systemctl restart kubelet
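
    A quick check that the node is now running the new binaries and that the kubelet came back up; a minimal sketch:

    [root@k8s-prod-master1 ~]# kubelet --version
    [root@k8s-prod-master1 ~]# kubectl version --short
    [root@k8s-prod-master1 ~]# systemctl is-active kubelet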
  • Uncordon the first control plane node so that it can be scheduled again

    [root@k8s-prod-master2 ~]# kubectl uncordon k8s-prod-master1
    node/k8s-prod-master1 uncordoned
  • Check the cluster status

    [root@k8s-prod-master2 ~]# kubectl  get nodes
    NAME               STATUS   ROLES    AGE    VERSION
    k8s-prod-master1   Ready    master   509d   v1.16.15
    k8s-prod-master2   Ready    master   509d   v1.15.0
    k8s-prod-master3   Ready    master   509d   v1.15.0
    olami-asr2         Ready    <none>   509d   v1.15.0
    olami-k8s-node1    Ready    <none>   447d   v1.15.0
    olami-nlp-model    Ready    <none>   145d   v1.15.0

    The version reported for k8s-prod-master1 has been updated to v1.16.15.

  • Drain the k8s-prod-master2 node, marking it unschedulable and evicting its workloads

    [root@k8s-prod-master1 ~]# kubectl drain k8s-prod-master2 --ignore-daemonsets
    node/k8s-prod-master2 cordoned
    error: unable to drain node "k8s-prod-master2", aborting command...
    
    There are pending nodes to be drained:
    k8s-prod-master2
    error: cannot delete Pods with local storage (use --delete-local-data to override): kubesphere-system/redis-ha-haproxy-75776f44c4-x8l2c
  • Upgrade kubectl and kubelet to 1.16.15 on the second control plane node

    [root@k8s-prod-master2 ~]# yum install -y kubelet-1.16.15-0 kubectl-1.16.15-0 --disableexcludes=kubernetes
    [root@k8s-prod-master2 ~]# systemctl daemon-reload && systemctl restart kubelet
  • Uncordon the second control plane node so that it can be scheduled again

    [root@k8s-prod-master1 ~]# kubectl uncordon k8s-prod-master2
    node/k8s-prod-master2 uncordoned
  • Check the cluster status

    [root@k8s-prod-master1 ~]# kubectl  get nodes
    NAME               STATUS   ROLES    AGE    VERSION
    k8s-prod-master1   Ready    master   509d   v1.16.15
    k8s-prod-master2   Ready    master   509d   v1.16.15
    k8s-prod-master3   Ready    master   509d   v1.15.0
    olami-asr2         Ready    <none>   509d   v1.15.0
    olami-k8s-node1    Ready    <none>   447d   v1.15.0
    olami-nlp-model    Ready    <none>   145d   v1.15.0

    The version reported for k8s-prod-master2 has been updated to v1.16.15.

  • Drain the k8s-prod-master3 node, marking it unschedulable and evicting its workloads

    [root@k8s-prod-master1 ~]# kubectl drain k8s-prod-master3 --ignore-daemonsets
    node/k8s-prod-master3 cordoned
    error: unable to drain node "k8s-prod-master3", aborting command...
    
    There are pending nodes to be drained:
    k8s-prod-master3
    error: cannot delete Pods with local storage (use --delete-local-data to override): kubesphere-system/redis-ha-haproxy-75776f44c4-ktts7
  • Upgrade kubectl and kubelet to 1.16.15 on the third control plane node

    [root@k8s-prod-master3 ~]# yum install -y kubelet-1.16.15-0 kubectl-1.16.15-0 --disableexcludes=kubernetes
    [root@k8s-prod-master3 ~]# systemctl daemon-reload && systemctl restart kubelet
  • Uncordon the third control plane node so that it can be scheduled again

    [root@k8s-prod-master1 ~]# kubectl uncordon k8s-prod-master3
    node/k8s-prod-master3 uncordoned
    [root@k8s-prod-master1 ~]# kubectl  get nodes
    NAME               STATUS   ROLES    AGE    VERSION
    k8s-prod-master1   Ready    master   509d   v1.16.15
    k8s-prod-master2   Ready    master   509d   v1.16.15
    k8s-prod-master3   Ready    master   509d   v1.16.15
    olami-asr2         Ready    <none>   509d   v1.15.0
    olami-k8s-node1    Ready    <none>   447d   v1.15.0
    olami-nlp-model    Ready    <none>   145d   v1.15.0

    The version reported for k8s-prod-master3 has been updated to v1.16.15. All three control plane nodes are now upgraded; next, the worker nodes.

Upgrading the worker nodes

The node olami-nlp-model is used as the example; the steps are identical for the other worker nodes.

  • Refresh the yum repo cache

    [root@olami-nlp-model ~]# yum makecache fast
  • Upgrade kubeadm to the target version

    [root@olami-nlp-model ~]# yum install -y kubeadm-1.16.15-0 --disableexcludes=kubernetes
  • Run kubeadm upgrade node to perform the upgrade

    [root@olami-nlp-model ~]# kubeadm upgrade node
    [upgrade] Reading configuration from the cluster...
    [upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [upgrade] Skipping phase. Not a control plane node
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [upgrade] The configuration for this node was successfully updated!
    [upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
  • Drain the node, marking it unschedulable and evicting the workloads running on it

    [root@k8s-prod-master1 ~]# kubectl drain olami-nlp-model --ignore-daemonsets
    node/olami-nlp-model cordoned
    error: unable to drain node "olami-nlp-model", aborting command...
    
    There are pending nodes to be drained:
    olami-nlp-model
    error: cannot delete Pods with local storage (use --delete-local-data to override): kubesphere-logging-system/fluentbit-operator-5cb575bcc6-r5jqh, kubesphere-monitoring-system/alertmanager-main-0
  • Upgrade kubectl and kubelet (a kubelet version check follows this block)

    [root@olami-nlp-model ~]# yum install -y kubelet-1.16.15-0 kubectl-1.16.15-0 --disableexcludes=kubernetes
    [root@olami-nlp-model ~]# systemctl daemon-reload && systemctl restart kubelet
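
    Once the kubelet restarts, it re-registers with the new version; a minimal sketch for confirming this from a control plane node:

    [root@k8s-prod-master1 ~]# kubectl get node olami-nlp-model \
        -o jsonpath='{.status.nodeInfo.kubeletVersion}{"\n"}'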
  • Uncordon the node so that it can be scheduled again

    [root@k8s-prod-master1 ~]# kubectl uncordon olami-nlp-model
    node/olami-nlp-model uncordoned
  • Check the cluster status

    [root@k8s-prod-master1 ~]# kubectl  get nodes
    NAME               STATUS   ROLES    AGE    VERSION
    k8s-prod-master1   Ready    master   509d   v1.16.15
    k8s-prod-master2   Ready    master   509d   v1.16.15
    k8s-prod-master3   Ready    master   509d   v1.16.15
    olami-asr2         Ready    <none>   509d   v1.15.0
    olami-k8s-node1    Ready    <none>   447d   v1.15.0
    olami-nlp-model    Ready    <none>   145d   v1.16.15
  • Run the same steps on the remaining nodes; the final cluster status:

    [root@k8s-prod-master1 ~]# kubectl  get nodes
    NAME               STATUS   ROLES    AGE    VERSION
    k8s-prod-master1   Ready    master   509d   v1.16.15
    k8s-prod-master2   Ready    master   509d   v1.16.15
    k8s-prod-master3   Ready    master   509d   v1.16.15
    olami-asr2         Ready    <none>   509d   v1.16.15
    olami-k8s-node1    Ready    <none>   447d   v1.16.15
    olami-nlp-model    Ready    <none>   145d   v1.16.15

    At this point the Kubernetes cluster upgrade is complete (a final addon check follows).
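
    As a last sanity check, it can be worth confirming that the cluster-wide addons were bumped by the upgrade as well; a minimal sketch, assuming the default kubeadm addon names (the kube-proxy DaemonSet and the coredns Deployment in kube-system):

    [root@k8s-prod-master1 ~]# kubectl -n kube-system get ds kube-proxy \
        -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
    [root@k8s-prod-master1 ~]# kubectl -n kube-system get deploy coredns \
        -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'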
