Deploying a one-master, two-worker k8s cluster with kubeadm

0. Prerequisites

1. Create three Ubuntu Server 20.04 VMs in VirtualBox, with the network set to bridged mode

Host        IP
k8s-master  192.168.1.248
k8s-node1   192.168.1.106
k8s-node2   192.168.1.251
  • Set a static IP on Ubuntu 20.04:
## Configure the master; the two nodes are configured similarly
sudo vim /etc/netplan/00-installer-config.yaml

# This is the network config written by 'subiquity'
network:
  ethernets:
    enp0s3:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.1.248/24]  
      gateway4: 192.168.1.1
      nameservers:
          addresses: [8.8.8.8]
  version: 2

sudo netplan apply
  • Add the host entries below on all three machines (takes effect on save; see the append sketch that follows)
192.168.1.248 k8s-master
192.168.1.106 k8s-node1
192.168.1.251 k8s-node2
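One way to append these entries to /etc/hosts on each of the three machines is a heredoc piped to tee, a minimal sketch:

cat <<EOF | sudo tee -a /etc/hosts
192.168.1.248 k8s-master
192.168.1.106 k8s-node1
192.168.1.251 k8s-node2
EOF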
  • Disable swap (a note on making this persistent follows the commands below)
sudo swapoff -a

# Check swap status
sudo free -m 
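Note that swapoff -a only disables swap until the next reboot. To make it persistent, comment out the swap entry in /etc/fstab; a minimal sketch (double-check /etc/fstab manually if the pattern is in doubt):

## Comment out any fstab line whose filesystem type is swap
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab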

2. Install Docker

docs.docker.com/engine/inst…
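The guide above covers several install methods. A minimal sketch using the docker.io package from Ubuntu's own repositories (an assumption here; the official guide installs docker-ce from Docker's apt repository):

sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker
docker --version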

3. Install kubelet / kubeadm / kubectl

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
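The modules-load.d file only takes effect at boot, so load the module in the current session and verify that the sysctls are applied:

sudo modprobe br_netfilter
lsmod | grep br_netfilter
sudo sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables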
sudo apt-get update && sudo apt-get install -y ca-certificates curl software-properties-common apt-transport-https

curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF 
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

sudo apt update
sudo apt install kubelet kubeadm kubectl

sudo systemctl enable kubelet && sudo systemctl start kubelet
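Optionally, hold the packages so a routine apt upgrade does not bump the cluster components unexpectedly:

sudo apt-mark hold kubelet kubeadm kubectl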

4. Configure the cluster – master

sudo kubeadm init --pod-network-cidr 172.16.0.0/16 \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers

Possible problems:

  • detected “cgroupfs” as the Docker cgroup driver. The recommended driver is “systemd”.
sudo vim /etc/docker/daemon.json

## Add (or merge into the existing JSON) the following:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

sudo systemctl restart docker
  • the number of available CPUs 1 is less than the required 2
In the VirtualBox settings, give the VM at least 2 CPUs
  • Error response from daemon: manifest for registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 not found:
## Manually pull the coredns image and retag it
sudo docker pull coredns/coredns:1.8.0

sudo docker tag coredns/coredns:1.8.0 registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.0

If kubeadm init completes without errors, the master node has been initialized successfully.

## After init completes, the output includes follow-up instructions:
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
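At this point kubectl can reach the API server; the master will typically show NotReady until a network plugin is installed:

kubectl get nodes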

5. Configure the cluster – nodes

## Join the two worker nodes to the cluster
sudo kubeadm join 192.168.1.248:6443 --token... (use the full command printed at the end of kubeadm init on the master)
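If the join command was not saved, or the token has expired (the default lifetime is 24 hours), generate a fresh one on the master:

sudo kubeadm token create --print-join-command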

At this point the nodes show a NotReady status because no network plugin has been installed yet.

## Install Calico
wget https://docs.projectcalico.org/manifests/calico.yaml

vim calico.yaml
## Change this value to the --pod-network-cidr passed to kubeadm init
- name: CALICO_IPV4POOL_CIDR
  value: "172.16.0.0/16"
  
kubectl apply -f calico.yaml
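Once the calico and coredns pods in kube-system are Running, the nodes should move to Ready:

kubectl get pods -n kube-system
kubectl get nodes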

Possible problems:

  • kubectl get pod shows kube-system coredns-57d4cbf879-22lwd 0/1 ErrImagePull, and kubectl describe pod shows the image cannot be pulled. The master already has the image locally (manually retagged earlier), and the Deployment's pull policy is imagePullPolicy: IfNotPresent.

Fix: the worker nodes do not have the image, so when these pods are scheduled onto a node the image cannot be found. Repeat the manual pull-and-retag steps from above on both worker nodes.

6. Component checks after deploying the cluster

  • dial tcp 127.0.0.1:10251: connect: connection refused


cd /etc/kubernetes/manifests
sudo vim kube-scheduler.yaml             ## comment out --port=0
sudo vim kube-controller-manager.yaml    ## comment out --port=0
sudo systemctl restart kubelet
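The connection-refused error above typically comes from kubectl get componentstatuses; after --port=0 is removed the static pods restart and the check should report Healthy:

kubectl get cs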

After the fix, the component status checks pass.
