Cluster Setup
Component list
- OS: Ubuntu 20.04
- Kubernetes: 1.28.1
- Container runtime: containerd 1.7.11 (containerd itself provides the CRI)
- OCI runtime: runc 1.10
- CNI: cni-plugins 1.4
Cluster layout
| IP | Host | Spec |
| --- | --- | --- |
| 11.0.1.147 | master1 (keepalived + nginx) | 2C 4G 30G |
| 11.0.1.148 | master2 (keepalived + nginx) | 2C 4G 30G |
| 11.0.1.149 | node1 | 2C 4G 30G |
| 11.0.1.150 | node2 | 2C 4G 30G |
| 11.0.1.151 | node3 | 2C 4G 30G |
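kubeadm join/debug output is much easier to read when every machine can resolve the hostnames above. A small convenience sketch (the file name `k8s-hosts.snippet` is arbitrary, not part of any tool) that generates the matching /etc/hosts lines from the table:

```shell
# Generate /etc/hosts entries for the planned nodes.
cat > k8s-hosts.snippet << 'EOF'
11.0.1.147 master1
11.0.1.148 master2
11.0.1.149 node1
11.0.1.150 node2
11.0.1.151 node3
EOF

# Then, on each node, as root:
#   cat k8s-hosts.snippet >> /etc/hosts
```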
Cluster network planning
- Pod network: 10.244.0.0/16
- Service network: 10.96.0.0/12
- Node network: 11.0.1.0/24
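The three ranges must be disjoint, or pod and service routing breaks in confusing ways. A quick pure-bash sanity check of the plan above (a throwaway sketch, not part of any official tooling):

```shell
#!/usr/bin/env bash
# Verify the pod, service, and node CIDRs do not overlap.

ip2int() {                         # dotted quad -> 32-bit integer
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

cidr_range() {                     # "a.b.c.d/len" -> "first last" (as integers)
  local ip=${1%/*} len=${1#*/} base mask
  base=$(ip2int "$ip")
  mask=$(( (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
  echo "$(( base & mask )) $(( (base & mask) | (0xFFFFFFFF >> len) ))"
}

overlap() {                        # true if two CIDRs share any address
  local a1 a2 b1 b2
  read -r a1 a2 <<< "$(cidr_range "$1")"
  read -r b1 b2 <<< "$(cidr_range "$2")"
  (( a1 <= b2 && b1 <= a2 ))
}

nets=("10.244.0.0/16" "10.96.0.0/12" "11.0.1.0/24")
for ((i = 0; i < ${#nets[@]}; i++)); do
  for ((j = i + 1; j < ${#nets[@]}; j++)); do
    if overlap "${nets[i]}" "${nets[j]}"; then
      echo "OVERLAP: ${nets[i]} ${nets[j]}"
    else
      echo "ok: ${nets[i]} ${nets[j]}"
    fi
  done
done
```

With the values planned here, all three pairs print `ok`.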
Install and configure keepalived

# on both nodes planned to hold the VIP (master1 and master2)
apt install keepalived -y

# master1
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    # vrrp_strict   # strict RFC mode conflicts with auth_pass and can block traffic to the VIP; leave it off
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33        # adjust to this host's NIC name
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass root
    }
    virtual_ipaddress {
        11.0.1.100
    }
}
EOF
# master2
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    # vrrp_strict   # same as on master1: leave strict mode off
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 50
    nopreempt
    priority 70
    advert_int 1
    authentication {
        # must match master1, otherwise each node ignores the other's
        # advertisements and both end up claiming the VIP
        auth_type PASS
        auth_pass root
    }
    virtual_ipaddress {
        11.0.1.100
    }
}
EOF
# restart keepalived
systemctl restart keepalived.service && systemctl enable keepalived.service
systemctl status keepalived.service

# check the VIP
ip a | grep 11.0.1.100

# stop keepalived on master1 and confirm the VIP fails over to master2
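As configured, the VIP only moves when the whole node goes down; if nginx alone dies, keepalived happily keeps the VIP on a machine that can no longer proxy. A common improvement is a `vrrp_script` health check. A sketch of such a script (the name `check_nginx.sh` and the demo `if` wrapper are my assumptions; the `/dev/tcp` probe is bash-specific):

```shell
#!/usr/bin/env bash
# check_nginx.sh (hypothetical): probe the local proxy port so keepalived
# can lower this node's priority when nginx stops accepting connections.

port_alive() {                     # usage: port_alive <port>
  timeout 2 bash -c "exec 3<>/dev/tcp/127.0.0.1/$1" 2>/dev/null
}

# As keepalived's track_script, the script body would simply end with:
#   port_alive 16443
# (keepalived acts on the exit status). Standalone demo:
if port_alive "${1:-16443}"; then echo "proxy port up"; else echo "proxy port down"; fi
```

To wire it in, a `vrrp_script chk_nginx { script "/etc/keepalived/check_nginx.sh" interval 2 weight -30 }` block in keepalived.conf plus `track_script { chk_nginx }` inside `vrrp_instance VI_1` would shift priority below master2's whenever the check fails.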
Install and configure nginx

apt install nginx -y
systemctl status nginx

# edit the nginx config file
cat /etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

# the stream block below is the only addition; everything else stays at the distro defaults
# (if `nginx -t` reports an unknown "stream" directive, install the libnginx-mod-stream package)
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 11.0.1.147:6443;    # master1 apiserver
        server 11.0.1.148:6443;    # master2 apiserver
    }

    server {
        listen 16443;              # nginx shares the host with the apiserver, so it cannot reuse 6443
        proxy_pass k8s-apiserver;  # plain TCP (L4) reverse proxy to the upstream group
    }
}
http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
# restart nginx
systemctl restart nginx && systemctl enable nginx && systemctl status nginx

# confirm the proxy port is listening
netstat -lntup | grep 16443
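Before pointing kubeadm at the VIP, it is worth confirming that the upstream block actually lists the masters you intended. A throwaway helper (a sketch; `list_backends` is a name I'm making up, built from plain awk/grep):

```shell
# list_backends: print the backend addresses found in the k8s-apiserver
# upstream block of an nginx config.
list_backends() {                  # usage: list_backends <path to nginx.conf>
  awk '/upstream k8s-apiserver/,/}/' "$1" | grep -oE '([0-9]+\.){3}[0-9]+:[0-9]+'
}

# e.g. list_backends /etc/nginx/nginx.conf
```

Against the config above it should print the two master addresses, `11.0.1.147:6443` and `11.0.1.148:6443`, one per line.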
Configure a highly available apiserver
The HA address can be passed when initializing the cluster with kubeadm init.
Command-line form:
# --control-plane-endpoint points at the VIP plus the nginx listen port (16443)
kubeadm init \
  --apiserver-advertise-address=11.0.1.147 \
  --apiserver-bind-port=6443 \
  --control-plane-endpoint=11.0.1.100:16443 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.1 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
YAML-based initialization
kubeadm init config file Kubernetes-cluster.yaml:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # replace with this control-plane node's IP; the etcd container binds to it
  # and fails if the address does not exist on the host
  advertiseAddress: 11.0.1.147
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master1          # node hostname
  taints: null
---
# controlPlaneEndpoint sets the highly available apiserver address
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
controlPlaneEndpoint: 11.0.1.100:16443   # the keepalived + nginx address (VIP plus nginx port)
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:   # an external etcd cluster can be configured here instead
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # mirror for hosts in China
kind: ClusterConfiguration
kubernetesVersion: 1.28.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"   # added: pod network CIDR
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs   # kube-proxy in ipvs mode; requires the ip_vs kernel modules on every node
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
Adding an HA address to an existing cluster

kubectl -n kube-system edit cm kubeadm-config
# change controlPlaneEndpoint to the HA address (11.0.1.100:16443)
A pitfall when moving a single-master cluster behind the VIP:
If joining a new node afterwards fails with a TLS error saying the connection address is not covered by the certificate, the apiserver serving certificate must be regenerated with the VIP in its SANs: delete /etc/kubernetes/pki/apiserver.crt and apiserver.key, run kubeadm init phase certs apiserver (with the updated config, or --apiserver-cert-extra-sans=11.0.1.100), then restart the kube-apiserver container so it picks up the new certificate.
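To confirm an apiserver certificate really covers the VIP, inspect its Subject Alternative Names with openssl. The sketch below builds a throwaway self-signed stand-in with the VIP as a SAN so it runs anywhere; against a real cluster, point CERT at /etc/kubernetes/pki/apiserver.crt instead (requires OpenSSL 1.1.1+ for `-addext`/`-ext`):

```shell
#!/usr/bin/env bash
# Inspect a certificate's Subject Alternative Names.
CERT="${CERT:-demo-apiserver.crt}"

# Demo only: generate a stand-in cert with the VIP included as a SAN.
if [ ! -f "$CERT" ]; then
  openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=kube-apiserver" \
    -addext "subjectAltName=DNS:kubernetes,IP:11.0.1.100,IP:11.0.1.147" \
    -keyout demo-apiserver.key -out "$CERT" 2>/dev/null
fi

# The check to run against the real cert after regenerating it:
openssl x509 -in "$CERT" -noout -ext subjectAltName
```

If `11.0.1.100` does not appear in the output against the real apiserver.crt, joins through the VIP will keep failing with the TLS error above.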