Quickly set up a k8s cluster with kubeadm
Version list (the installation below uses these versions as examples; check the official docs for compatibility of other versions):

Component | Version |
---|---|
docker | 20.10.6 |
k8s | v1.21.0 |
calico | v3.26.0 |
1. Prepare the machines
- Provision three machines that can reach each other over the internal network
- Do not use localhost as a machine's hostname; hostnames must not contain underscores, dots, or uppercase letters (this step can also be done later)
2. Install the prerequisites (run on every node)
2.1 Base environment
# Stop the firewall; on cloud servers, open the required ports in the security-group rules instead
systemctl stop firewalld
systemctl disable firewalld
# Set the hostname (run the matching command on each machine)
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2
# Check the result
hostnamectl status
# Add a hosts entry for the local hostname
echo "127.0.0.1 $(hostname)" >> /etc/hosts
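Beyond the loopback entry above, it also helps to map every node's hostname to its internal IP on all three machines, so the nodes can resolve each other by name. A sketch of /etc/hosts (the master IP matches the kubeadm init example later in this article; the node IPs are placeholders, substitute your own internal addresses):

```
# /etc/hosts (example; use your own internal IPs)
10.170.11.8   master
10.170.11.9   node1
10.170.11.10  node2
```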
# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Pass bridged IPv4 traffic to iptables:
# the bridge sysctls below only exist once the br_netfilter module is loaded
modprobe br_netfilter
# edit /etc/sysctl.conf
# if a key is already present, update it in place
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf
# if a key is not present yet, append it
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
# apply the settings
sysctl -p
# check the result
sysctl -a | grep call
2.2 Docker environment
2.2.1 Download docker-20.10.6-ce.tgz from the official download site, choosing the centos7 x86_64 build.
2.2.2 Upload and extract
Upload docker-20.10.6-ce.tgz to each server and extract it:
tar -zxvf docker-20.10.6-ce.tgz
cp docker/* /usr/bin/
2.2.3 创建docker.service
vi /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues
# still exist and systemd currently does not support the cgroup feature set
# required for containers run by docker
# listen on the local socket and on TCP 2375 for remote access
# (the TCP listener is unauthenticated; drop the -H tcp:// flag if you do not need remote access)
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this option.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
2.2.4 Start docker and create daemon.json
systemctl daemon-reload # pick up the newly created unit file
systemctl start docker
systemctl enable docker
vi /etc/docker/daemon.json
# Configure the registry mirror, storage paths, etc. Note that JSON allows no
# comments: set "graph" to your own image/container storage path and
# "registry-mirrors" to your own mirror address.
{
  "oom-score-adjust": -1000,
  "graph": "/xxx/docker",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  },
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 10,
  "registry-mirrors": ["xxxx"],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
## reload systemd and restart docker
sudo systemctl daemon-reload
sudo systemctl restart docker
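Since a malformed daemon.json stops the docker daemon from starting at all, it is worth syntax-checking the file before restarting. A minimal sketch, assuming python3 is installed; it runs against a sample written to /tmp so it can be tried anywhere, but on a real node you would point it at /etc/docker/daemon.json:

```shell
# Write a sample config to /tmp; on a real node, check /etc/docker/daemon.json instead.
cat > /tmp/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m", "max-file": "3" },
  "storage-driver": "overlay2"
}
EOF
# json.tool exits non-zero on invalid JSON (comments, trailing commas, ...).
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json OK"
```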
3. Install the k8s core: kubelet, kubeadm, kubectl (run on every node)
# on a machine with internet access (and the kubernetes yum repo configured), download the offline rpm packages in advance
# create a directory for the rpms:
mkdir -p /kubeadm-rpm
# download the packages without installing them:
yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0 --downloadonly --downloaddir /kubeadm-rpm
# on the target servers, remove any old versions first:
yum remove -y kubelet kubeadm kubectl
# upload the rpm packages to each server and install them:
yum -y install /kubeadm-rpm/*
# start kubelet and enable it at boot (it will crash-loop until the node is initialized by kubeadm; that is expected)
systemctl enable kubelet && systemctl start kubelet
4. Initialize the master node (run on master)
4.1 Image preparation (upload to all three servers)
kube-apiserver:v1.21.0
kube-proxy:v1.21.0
kube-controller-manager:v1.21.0
kube-scheduler:v1.21.0
coredns:v1.8.0
etcd:3.4.13-0
pause:3.4.1
# network plugin images (calico is used here)
calico-cni
calico-node
calico-kube-controllers
calico-pod2daemon-flexvol
## Note: the coredns image is special in k8s 1.21.0: kubeadm looks for it under a coredns/coredns path, so when pulling through Aliyun it must be re-tagged (the registry prefix here is this article's example; use your own):
docker tag registry.cn-hangzhou.aliyuncs.com/zzl/coredns:v1.8.0 registry.cn-hangzhou.aliyuncs.com/zzl/coredns/coredns:v1.8.0
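To avoid typing each pull by hand, the image list above can be turned into a small loop. A sketch that only prints the docker pull commands so they can be reviewed first (the registry prefix is this article's Aliyun example; substitute your own mirror, then pipe the output to sh to actually run it):

```shell
# Example mirror prefix from this article; replace with your own registry.
REPO=registry.cn-hangzhou.aliyuncs.com/zzl
IMAGES="kube-apiserver:v1.21.0 kube-proxy:v1.21.0 kube-controller-manager:v1.21.0 \
kube-scheduler:v1.21.0 coredns:v1.8.0 etcd:3.4.13-0 pause:3.4.1"
# Print the commands instead of running them, so they can be reviewed first.
for img in $IMAGES; do
  echo docker pull "$REPO/$img"
done
```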
4.2 kubeadm init the master node
######## kubeadm init on the one master ########
######## kubeadm join on the other workers ########
kubeadm init \
--apiserver-advertise-address=10.170.11.8 \
--image-repository registry.cn-hangzhou.aliyuncs.com/zzl \
--kubernetes-version v1.21.0 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16
# --apiserver-advertise-address: the master's (internal) IP
# --image-repository: the registry holding the prepared images; here the Aliyun repo registry.cn-hangzhou.aliyuncs.com/zzl is used
## Note on pod-network-cidr and service-cidr:
# pick reachable but non-overlapping ranges: the pod subnet, the service subnet, and the host's own subnet must not overlap
# e.g. with apiserver-advertise-address=10.170.x.x, pod-network-cidr=10.170.0.0/16 would clash and cannot be used
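The non-overlap rule above can be checked mechanically: two IPv4 CIDR blocks overlap exactly when they agree on the first min(len1, len2) bits. A minimal bash sketch (not part of the original article) that tests this for the CIDRs used here:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}
# Print "yes" if the two CIDR blocks overlap, "no" otherwise.
cidr_overlap() {
  local net1=${1%/*} len1=${1#*/} net2=${2%/*} len2=${2#*/}
  local i1 i2 minlen mask
  i1=$(ip_to_int "$net1"); i2=$(ip_to_int "$net2")
  minlen=$(( len1 < len2 ? len1 : len2 ))
  mask=$(( (0xFFFFFFFF << (32 - minlen)) & 0xFFFFFFFF ))
  if (( (i1 & mask) == (i2 & mask) )); then echo yes; else echo no; fi
}
cidr_overlap 10.96.0.0/16 192.168.0.0/16   # service vs pod CIDR above -> no
cidr_overlap 10.170.0.0/16 10.170.11.0/24  # host subnet vs itself -> yes
```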
#### then follow the printed instructions ####
## step 1 after init: copy the kubeconfig
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
## or export the environment variable
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
### deploy a pod network add-on
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
############## e.g. install calico #####################
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# (for an offline install, first download the yaml on a machine with internet access)
# download the latest manifest, or a specific version:
curl https://docs.projectcalico.org/manifests/calico.yaml -O
curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
# apply calico
kubectl apply -f calico.yaml
kubectl get pod -A ## list every pod deployed in the cluster
kubectl get nodes ## check the status of every node
5. Initialize the worker nodes
## run the join command printed by kubeadm init on the master, e.g.:
kubeadm join 172.24.80.222:6443 --token nz9azl.9bl27pyr4exy2wz4 \
--discovery-token-ca-cert-hash sha256:4bdc81a83b80f6bdd30bb56225f9013006a45ed423f131ac256ffe16bae73a20
# if the token has expired, create a new one
kubeadm token create --print-join-command
kubeadm token create --ttl 0 --print-join-command # --ttl 0 creates a token that never expires
kubeadm join 172.24.80.222:6443 --token y1eyw5.ylg568kvohfdsfco --discovery-token-ca-cert-hash sha256:6c35e4f73f72afd89bf1c8c303ee55677d2cdb1342d67bb23c852aba2efc7c73
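For reference, the --discovery-token-ca-cert-hash value is the SHA-256 digest of the cluster CA certificate's DER-encoded public key; on the master it can be recomputed from /etc/kubernetes/pki/ca.crt. A sketch of that pipeline (assuming openssl is available), demonstrated on a throwaway self-signed certificate so it can be tried on any machine:

```shell
# Generate a throwaway cert; on the master, use /etc/kubernetes/pki/ca.crt instead.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
        -out /tmp/ca.crt -subj "/CN=demo-ca" -days 1 2>/dev/null
# Extract the public key, DER-encode it, and take its SHA-256 digest.
openssl x509 -pubkey -noout -in /tmp/ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex
```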
6. Verify the cluster
# list all nodes
kubectl get nodes
# label the nodes
### add a label
kubectl label node <node-hostname> node-role.kubernetes.io/worker=''
### remove the label (note the trailing minus sign)
kubectl label node <node-hostname> node-role.kubernetes.io/worker-
7. Switch kube-proxy to ipvs mode
# 1. check which mode kube-proxy is currently using (substitute your own kube-proxy pod name)
kubectl logs -n kube-system kube-proxy-28xv4
# 2. edit the kube-proxy ConfigMap and set mode to ipvs; the default is iptables, which becomes slow once the cluster grows
kubectl edit cm kube-proxy -n kube-system
# change it as follows:
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
### after changing the ConfigMap, delete the old kube-proxy pods so they restart with the new configuration
kubectl get pod -A|grep kube-proxy
kubectl delete pod kube-proxy-xxxx -n kube-system
Original post: https://blog.csdn.net/aloney1/article/details/131276908