
Deploying k8s 1.28.1 on Containerd (Ubuntu 20.04)

A detailed walkthrough of installing Kubernetes on containerd, working through the assorted problems that appear after k8s 1.24.

Cluster Setup

Component List

  • OS: Ubuntu 20.04
  • Kubernetes: 1.28.1
  • Container runtime: containerd 1.7.11
  • OCI runtime: runc 1.1.10
  • CNI: cni-plugins 1.4.0

Cluster Layout

IP          Hostname  Spec
11.0.1.147  master1   2C 4G 30G
11.0.1.148  master2   2C 4G 30G
11.0.1.149  node1     2C 4G 30G
11.0.1.150  node2     2C 4G 30G
11.0.1.151  node3     2C 4G 30G

Cluster Network Planning

  • Pod network: 10.244.0.0/16
  • Service network: 10.96.0.0/12
  • Node network: 11.0.1.0/24

Environment Initialization

Host Configuration

# Set the hostname (run the matching line on its node)
hostnamectl set-hostname master1
hostnamectl set-hostname master2
hostnamectl set-hostname node1
hostnamectl set-hostname node2
hostnamectl set-hostname node3

# Add every node to /etc/hosts
cat << EOF >> /etc/hosts
11.0.1.147 master1
11.0.1.148 master2
11.0.1.149 node1
11.0.1.150 node2
11.0.1.151 node3
EOF

# Time synchronization
timedatectl set-timezone Asia/Shanghai
# Install chrony to keep the clock in sync over the network
apt install chrony -y && systemctl enable --now chrony

# Configure passwordless SSH from master1 to the other nodes
# (run ssh-keygen first if /root/.ssh/id_rsa.pub does not exist yet)
ssh-copy-id -i /root/.ssh/id_rsa.pub root@11.0.1.148
ssh-copy-id -i /root/.ssh/id_rsa.pub root@11.0.1.149
ssh-copy-id -i /root/.ssh/id_rsa.pub root@11.0.1.150
ssh-copy-id -i /root/.ssh/id_rsa.pub root@11.0.1.151

Disable Swap

# Turn swap off now and comment it out of /etc/fstab so it stays off after a reboot
sudo swapoff -a && sed -i '/swap/s/^/#/' /etc/fstab
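
A quick check that swap really is off (swapon prints nothing and free reports 0B of swap):

swapon --show
free -h | grep -i swap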

Install IPVS Tools

# userspace tools needed for kube-proxy in IPVS mode
apt install -y ipset ipvsadm

Kernel Modules and Parameters

# Declare the required kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# Load the modules now
sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system

# Verify the modules and parameters took effect
lsmod | grep br_netfilter
lsmod | grep overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward


# Declare the IPVS kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

# Load the IPVS modules now
sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_sh
sudo modprobe nf_conntrack

# Confirm the IPVS modules are loaded
lsmod |grep -e ip_vs -e nf_conntrack

Install Containerd

Install containerd from the official binaries

wget -c https://github.com/containerd/containerd/releases/download/v1.7.11/containerd-1.7.11-linux-amd64.tar.gz
tar -xzvf containerd-1.7.11-linux-amd64.tar.gz
# The archive extracts to a bin/ directory that holds all of the containerd binaries
mv bin/* /usr/local/bin/
rm -rf bin

# Manage containerd with systemd
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service 
mv containerd.service  /usr/lib/systemd/system/
systemctl daemon-reload && systemctl enable --now containerd 
systemctl  status containerd

Install the OCI Runtime runc

# Install runc
# runc is the low-level container runtime; it implements the commands (create, run, init, ps, ...) that containerd invokes to actually run containers
curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.10/runc.amd64 && \
install -m 755 runc.amd64 /usr/local/sbin/runc
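
A simple sanity check that the binary is installed and executable:

runc --version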

Install CNI Plugins

wget -c https://github.com/containernetworking/plugins/releases/download/v1.4.0/cni-plugins-linux-amd64-v1.4.0.tgz
# Following the official instructions, create the directory where containerd looks for CNI plugins
mkdir -p /opt/cni/bin
tar -xzvf  cni-plugins-linux-amd64-v1.4.0.tgz -C /opt/cni/bin/
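
Optionally confirm the plugins were extracted (bridge, host-local, loopback, and friends should be listed):

ls /opt/cni/bin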

Modify the Containerd Configuration

# Adjust the containerd configuration; by default containerd pulls the sandbox image from registry.k8s.io
# Create the directory that holds the containerd configuration
mkdir -p /etc/containerd
# Dump the default configuration to a file
containerd config default | sudo tee /etc/containerd/config.toml

# Replace the sandbox (pause) image with the Aliyun mirror (kubeadm 1.28 recommends pause:3.9; keeping 3.8 only triggers a warning later)
sed -i 's#sandbox_image = "registry.k8s.io/pause:3.8"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"#' /etc/containerd/config.toml
# Switch the cgroup driver to systemd
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
# Point the registry mirror configuration at /etc/containerd/certs.d
sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#' /etc/containerd/config.toml
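
A quick grep confirms that all three changes actually landed in the file:

grep -E 'sandbox_image|SystemdCgroup|config_path' /etc/containerd/config.toml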

Configure Containerd Registry Mirrors

# Docker Hub mirrors
mkdir -p /etc/containerd/certs.d/docker.io
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://dockerproxy.com"]
  capabilities = ["pull", "resolve"]

[host."https://docker.m.daocloud.io"]
  capabilities = ["pull", "resolve"]

[host."https://reg-mirror.qiniu.com"]
  capabilities = ["pull", "resolve"]

[host."https://registry.docker-cn.com"]
  capabilities = ["pull", "resolve"]

[host."http://hub-mirror.c.163.com"]
  capabilities = ["pull", "resolve"]

EOF

# registry.k8s.io mirror
mkdir -p /etc/containerd/certs.d/registry.k8s.io
tee /etc/containerd/certs.d/registry.k8s.io/hosts.toml << 'EOF'
server = "https://registry.k8s.io"

[host."https://k8s.m.daocloud.io"]
  capabilities = ["pull", "resolve", "push"]
EOF

# docker.elastic.co mirror
mkdir -p /etc/containerd/certs.d/docker.elastic.co
tee /etc/containerd/certs.d/docker.elastic.co/hosts.toml << 'EOF'
server = "https://docker.elastic.co"

[host."https://elastic.m.daocloud.io"]
  capabilities = ["pull", "resolve", "push"]
EOF

# gcr.io mirror
mkdir -p /etc/containerd/certs.d/gcr.io
tee /etc/containerd/certs.d/gcr.io/hosts.toml << 'EOF'
server = "https://gcr.io"

[host."https://gcr.m.daocloud.io"]
  capabilities = ["pull", "resolve", "push"]
EOF

# ghcr.io mirror
mkdir -p /etc/containerd/certs.d/ghcr.io
tee /etc/containerd/certs.d/ghcr.io/hosts.toml << 'EOF'
server = "https://ghcr.io"

[host."https://ghcr.m.daocloud.io"]
  capabilities = ["pull", "resolve", "push"]
EOF

# k8s.gcr.io mirror
mkdir -p /etc/containerd/certs.d/k8s.gcr.io
tee /etc/containerd/certs.d/k8s.gcr.io/hosts.toml << 'EOF'
server = "https://k8s.gcr.io"

[host."https://k8s-gcr.m.daocloud.io"]
  capabilities = ["pull", "resolve", "push"]
EOF

# mcr.microsoft.com mirror
mkdir -p /etc/containerd/certs.d/mcr.microsoft.com
tee /etc/containerd/certs.d/mcr.microsoft.com/hosts.toml << 'EOF'
server = "https://mcr.microsoft.com"

[host."https://mcr.m.daocloud.io"]
  capabilities = ["pull", "resolve", "push"]
EOF

# nvcr.io mirror
mkdir -p /etc/containerd/certs.d/nvcr.io
tee /etc/containerd/certs.d/nvcr.io/hosts.toml << 'EOF'
server = "https://nvcr.io"

[host."https://nvcr.m.daocloud.io"]
  capabilities = ["pull", "resolve", "push"]
EOF

# quay.io mirror
mkdir -p /etc/containerd/certs.d/quay.io
tee /etc/containerd/certs.d/quay.io/hosts.toml << 'EOF'
server = "https://quay.io"

[host."https://quay.m.daocloud.io"]
  capabilities = ["pull", "resolve", "push"]
EOF

# registry.jujucharms.com mirror
mkdir -p /etc/containerd/certs.d/registry.jujucharms.com
tee /etc/containerd/certs.d/registry.jujucharms.com/hosts.toml << 'EOF'
server = "https://registry.jujucharms.com"

[host."https://jujucharms.m.daocloud.io"]
  capabilities = ["pull", "resolve", "push"]
EOF

# rocks.canonical.com mirror
mkdir -p /etc/containerd/certs.d/rocks.canonical.com
tee /etc/containerd/certs.d/rocks.canonical.com/hosts.toml << 'EOF'
server = "https://rocks.canonical.com"

[host."https://rocks-canonical.m.daocloud.io"]
  capabilities = ["pull", "resolve", "push"]
EOF

# Restart containerd
systemctl restart containerd 
systemctl status containerd

Create a Container to Confirm containerd Works (Optional)

# Pull an image to check that containerd can create and start containers
ctr i pull docker.io/library/nginx:alpine                      # a successful pull means the runtime is healthy
ctr images ls                                                  # list images
ctr c create --net-host docker.io/library/nginx:alpine nginx   # create a container
ctr task start -d nginx                                        # start it; if this works, containerd is fine
ctr containers ls                                              # list containers
ctr tasks kill -s SIGKILL  nginx                               # stop the container
ctr containers rm nginx                                        # remove the container
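
Note that plain ctr pulls bypass the certs.d mirror configuration, which only applies to pulls going through the CRI plugin (kubelet/crictl). To exercise the mirrors themselves from ctr, the hosts directory can be passed explicitly; a small sketch assuming the certs.d layout above:

ctr i pull --hosts-dir /etc/containerd/certs.d docker.io/library/nginx:alpine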

Install kubeadm, kubelet, and kubectl

# Install dependencies
apt install apt-transport-https ca-certificates -y
apt install vim lsof net-tools zip unzip tree wget curl bash-completion pciutils gcc make lrzsz tcpdump bind9-utils -y 

# Append the Aliyun Kubernetes apt repository to the sources list
echo 'deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main' >> /etc/apt/sources.list
# Import the repository signing key
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# Refresh the package index
apt update

# Check which kubeadm versions are available
apt-cache madison kubeadm | grep 1.28
apt-get install -y kubeadm=1.28.1-00 kubectl=1.28.1-00 kubelet=1.28.1-00
# Start kubelet on boot
systemctl enable kubelet
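
Optionally pin the packages so a routine apt upgrade cannot bump them unexpectedly (as the kubeadm documentation recommends):

apt-mark hold kubelet kubeadm kubectl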

Configure the crictl Socket

crictl config runtime-endpoint unix:///run/containerd/containerd.sock
crictl config image-endpoint unix:///run/containerd/containerd.sock
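
A quick check that crictl can reach containerd through the configured socket:

crictl version
crictl images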

kubeadm init

Option 1: Initialize the cluster directly with kubeadm init

The images can be pulled ahead of time:

kubeadm  config images list --kubernetes-version=v1.28.1 --image-repository=registry.aliyuncs.com/google_containers
kubeadm  config images pull --kubernetes-version=v1.28.1 --image-repository=registry.aliyuncs.com/google_containers

Initialize the cluster

The setup below is not highly available, but that does not get in the way of adding HA later: my other post, keepalived + nginx for a highly available apiserver, covers that part, and cluster initialization and HA apiserver setup are completely independent steps. Together the two posts cover several ways of running kubeadm init.

kubeadm init \
--apiserver-advertise-address=11.0.1.147 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.28.1 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
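
If the initialization fails partway (image pull timeouts, a wrong parameter, and so on), the node can be wiped back to a clean state before retrying; note that kubeadm reset removes any existing cluster state on that node:

kubeadm reset -f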

Option 2: Initialize the cluster from a configuration file

Generate the default initialization configuration file

kubeadm config print init-defaults >Kubernetes-cluster.yaml
vim Kubernetes-cluster.yaml

Kubernetes-cluster.yaml

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # Replace this with the control-plane node's own IP; the etcd container binds to this address and fails if it does not exist on the host
  advertiseAddress: 11.0.1.147
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master1  # node hostname
  taints: null
---
# controlPlaneEndpoint can be set here to a highly available apiserver address
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:  # an external etcd cluster can be used instead
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # domestic mirror
kind: ClusterConfiguration
kubernetesVersion: 1.28.1
networking:
  dnsDomain: cluster.local
  # added: specify the pod network CIDR
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs  # run kube-proxy in IPVS mode
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd

To view or modify this configuration after the cluster is up: kubectl -n kube-system edit cm kubeadm-config

Initialize the cluster using the configuration file

kubeadm init --config Kubernetes-cluster.yaml

Copy the kubeconfig

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
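
kubectl should now work on master1. Expect the node to show NotReady until the CNI plugin (Calico, installed below) is in place:

kubectl get nodes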

Join Nodes

Join worker nodes

Simply copy the join command printed at the end of kubeadm init:

kubeadm join 11.0.1.147:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:7b465a19bae495131a16b51967a0c329bce9fe7d49136c641929eda69cfd6969

Join an additional master

If kubeadm init printed a control-plane join command, just copy it.

If it did not, generate the control-plane certificate key yourself:

# Generate the certificate key
$ kubeadm init phase upload-certs --upload-certs
[upload-certs] Using certificate key:
d38fbc73dc4c113409597a59d65ee66e4641ca220a463b1efeac9baa14f2924a
# Create a token (the one printed by kubeadm init can also be reused)
$ kubeadm token create --print-join-command
kubeadm join 11.0.1.147:6443 --token m4l8th.81p28vmm5dh3nxl9 --discovery-token-ca-cert-hash sha256:7b465a19bae495131a16b51967a0c329bce9fe7d49136c641929eda69cfd6969
# Join master2
kubeadm join 11.0.1.147:6443 --token m4l8th.81p28vmm5dh3nxl9 --discovery-token-ca-cert-hash sha256:7b465a19bae495131a16b51967a0c329bce9fe7d49136c641929eda69cfd6969 --control-plane --certificate-key d38fbc73dc4c113409597a59d65ee66e4641ca220a463b1efeac9baa14f2924a

Troubleshooting

Adding a master node to a cluster that was initialized with a single master

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
error execution phase preflight:
One or more conditions for hosting a new control plane instance is not satisfied.

unable to add a new control plane instance to a cluster that doesn't have a stable controlPlaneEndpoint address

Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.


To see the stack trace of this error execute with --v=5 or higher

This error means the apiserver is not bound to a stable controlPlaneEndpoint. Fix it on master1:

kubectl -n kube-system edit cm kubeadm-config

# In the ClusterConfiguration under data, set controlPlaneEndpoint to a stable address that can reach the apiserver and is covered by its certificates; later keepalived/nginx can sit in front of the apiserver for real HA
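
For reference, the edited ClusterConfiguration only needs one extra line; shown here with this cluster's master1 address, but adjust it to your own endpoint, or to a VIP once keepalived/nginx is in place:

controlPlaneEndpoint: "11.0.1.147:6443"

After saving the ConfigMap, re-run the join command on master2; a successful run looks like the following: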
root@ubuntu:~# kubeadm join 11.0.1.147:6443 --token m4l8th.81p28vmm5dh3nxl9 --discovery-token-ca-cert-hash sha256:7b465a19bae495131a16b51967a0c329bce9fe7d49136c641929eda69cfd6969 --control-plane --certificate-key d38fbc73dc4c113409597a59d65ee66e4641ca220a463b1efeac9baa14f2924a
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W1223 14:04:19.065758   16517 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master2] and IPs [11.0.1.148 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master2] and IPs [11.0.1.148 127.0.0.1 ::1]
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master2] and IPs [10.96.0.1 11.0.1.148 11.0.1.147]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Configure Shell Completion (Optional)

apt install bash-completion -y
cat << EOF >> ~/.profile
alias k='kubectl'
source <(kubectl completion bash)
complete -F __start_kubectl k
EOF

source ~/.profile

Install Calico

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml

wget https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml
vi custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16  # must match the pod CIDR chosen above
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
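
After adjusting the CIDR, apply the custom resources and wait for the Calico pods to come up (the standard steps from the Calico quickstart):

kubectl create -f custom-resources.yaml
watch kubectl get pods -n calico-system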

Verify the Cluster

root@ubuntu:~# k get po -A
NAMESPACE          NAME                                       READY   STATUS              RESTARTS      AGE
calico-apiserver   calico-apiserver-66cb6b4b7f-8l67s          1/1     Running             0             57s
calico-apiserver   calico-apiserver-66cb6b4b7f-p8xs9          0/1     Running             0             57s
calico-system      calico-kube-controllers-86d48c97dc-vzzcd   1/1     Running             0             5m32s
calico-system      calico-node-29snn                          1/1     Running             0             5m32s
calico-system      calico-node-cqrrf                          1/1     Running             0             5m32s
calico-system      calico-node-gvpjn                          1/1     Running             0             5m32s
calico-system      calico-node-wq4mh                          1/1     Running             0             5m32s
calico-system      calico-node-xfvkw                          1/1     Running             0             5m32s
calico-system      calico-typha-55fd77b9db-2x8sv              1/1     Running             0             5m24s
calico-system      calico-typha-55fd77b9db-4r98k              1/1     Running             0             5m33s
calico-system      calico-typha-55fd77b9db-qk7cm              1/1     Running             0             5m24s
calico-system      csi-node-driver-bhzpm                      2/2     Running             0             5m32s
calico-system      csi-node-driver-bptcd                      2/2     Running             0             5m32s
calico-system      csi-node-driver-g884s                      2/2     Running             0             5m32s
calico-system      csi-node-driver-vm4p7                      0/2     ContainerCreating   0             5m32s
calico-system      csi-node-driver-zgmds                      2/2     Running             0             5m32s
kube-system        coredns-66f779496c-6fmh9                   1/1     Running             0             17h
kube-system        coredns-66f779496c-p47zp                   1/1     Running             0             17h
kube-system        etcd-master1                               1/1     Running             2             17h
kube-system        etcd-master2                               1/1     Running             0             16h
kube-system        kube-apiserver-master1                     1/1     Running             2             17h
kube-system        kube-apiserver-master2                     1/1     Running             0             16h
kube-system        kube-controller-manager-master1            1/1     Running             4             17h
kube-system        kube-controller-manager-master2            1/1     Running             1 (11h ago)   16h
kube-system        kube-proxy-bb2qd                           1/1     Running             0             17h
kube-system        kube-proxy-c4zqw                           1/1     Running             0             17h
kube-system        kube-proxy-cnwnl                           1/1     Running             0             16h
kube-system        kube-proxy-mtgn6                           1/1     Running             0             17h
kube-system        kube-proxy-tlgln                           1/1     Running             0             17h
kube-system        kube-scheduler-master1                     1/1     Running             4             17h
kube-system        kube-scheduler-master2                     1/1     Running             1 (11h ago)   16h
tigera-operator    tigera-operator-55585899bf-mcs5f           1/1     Running             0             5m46s
root@ubuntu:~# k get no
NAME      STATUS   ROLES           AGE   VERSION
master1   Ready    control-plane   17h   v1.28.1
master2   Ready    control-plane   16h   v1.28.1
node1     Ready    <none>          17h   v1.28.1
node2     Ready    <none>          17h   v1.28.1
node3     Ready    <none>          17h   v1.28.1
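
Since kube-proxy runs in IPVS mode, the ipvsadm installed earlier can also be used to confirm that virtual servers have been programmed for the Services:

ipvsadm -Ln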

References:

  1. Migrating Docker Engine nodes from dockershim to cri-dockerd | Kubernetes
  2. Bootstrapping clusters with kubeadm | Kubernetes
  3. Installing a specific Docker version on Ubuntu (official and domestic mirror methods) - CSDN blog
  4. Installing k8s 1.24.0 / 1.28.0 on Ubuntu 20.04 with containerd - CSDN blog
  5. Deploying a k8s v1.28.3 cluster with kubeadm - 小吉猫 - cnblogs.com
  6. How to add master nodes to a Kubernetes cluster that originally had a single master - zhihu.com
  7. [containerd] Registry mirror configuration - CSDN blog