Environment
OS: CentOS Linux release 7.9.2009 (Core)
Hardware: three machines, each with 4 cores / 8 GB RAM
kubelet-1.18.2
kubeadm-1.18.2
kubectl-1.18.2
Docker version 20.10.17
A: 192.168.72.128 master
B: 192.168.72.129 node1
C: 192.168.72.130 node2
Note: in the rest of this article, A, B and C refer to these three machines.
Kubernetes core components
Master node (control plane)
• kube-apiserver: the control entry point of the whole system; the unified access point for all resources
• kube-controller-manager: runs the cluster's background control loops, e.g. tracking node health, Pod counts, and the binding between Pods and Services
• kube-scheduler: the scheduler; tracks node resources, receives Pod-creation requests from kube-apiserver and assigns each Pod to a node
• etcd: the cluster database; stores all cluster state
Worker nodes
• kube-proxy: the Pod network proxy; watches Service information (via the apiserver) and programs the corresponding forwarding rules (runs on every node)
• kubelet: runs on every node as its agent; accepts the Pods assigned to the node, manages their containers, and periodically reports container status back to kube-apiserver
Other components
• kubectl: the Kubernetes command-line client; turns your commands into API calls to kube-apiserver to operate the cluster
• dashboard: the Kubernetes web UI
• DNS: cluster-wide DNS, so services can reach each other by name
• flannel: a network (CNI) plugin
Preparation
Run on all three machines (A, B, C).
System settings
Host resolution
cat <<EOF >> /etc/hosts
192.168.72.128 master
192.168.72.129 node1
192.168.72.130 node2
EOF
Disable the firewall
systemctl disable firewalld --now
Disable SELinux
Permanently (takes effect after reboot):
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
Temporarily (takes effect immediately):
setenforce 0
Disable swap
Temporarily:
swapoff -a
Permanently:
Delete or comment out the swap line in /etc/fstab, on all three machines (skip this if there is no swap):
[root@master ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Aug 16 20:05:30 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=a22b6a52-782a-4845-9cec-fa5cf9a5309d /boot xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0
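Commenting the swap line can also be scripted. A minimal sketch, run here against a hypothetical temp copy so nothing real is touched; on the nodes the target would be /etc/fstab:

```shell
# Work on a temp copy for illustration; on a real node, target /etc/fstab instead.
tmp=$(mktemp)
cat > "$tmp" <<'FSTAB'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
FSTAB
# Prefix any not-yet-commented line mentioning swap with '#'
sed -ri 's|^([^#].*\bswap\b.*)$|#\1|' "$tmp"
grep swap "$tmp"
```

Remember to run swapoff -a as well, since editing fstab only affects the next boot.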
Kernel tuning
[root@master ~]# cat <<EOF >> /etc/sysctl.conf
# pass bridged traffic through iptables
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# disable IPv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF
[root@master ~]# sysctl -p
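One caveat: the net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so sysctl -p may complain about missing files on a fresh boot. A sketch of loading it now and on every boot (the k8s.conf filename is my own choice; run as root):

```shell
# Load the module immediately (requires root)
modprobe br_netfilter
# Load it automatically at boot; the filename "k8s.conf" is arbitrary
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
```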
Install Docker
See my earlier post on installing Docker.
Configure the Kubernetes yum repository
[root@master ~]# cat <<EOF > /etc/yum.repos.d/k8s-repo.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@master ~]# yum makecache
Install the cluster
Required on all three machines (A, B, C).
Install kubeadm
Kubernetes releases iterate quickly, so pin the exact version when installing.
Install with yum
yum -y install kubelet-1.18.2-0 kubeadm-1.18.2-0 kubectl-1.18.2-0 --disableexcludes=kubernetes
Start and enable kubelet
systemctl restart kubelet ; systemctl enable kubelet
Install the master
Run on server A; after starting it, wait a while, because images will be pulled.
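The slow part of kubeadm init is pulling the control-plane images, so they can optionally be pre-pulled first. A sketch using the same mirror and version as the init command below; the guard only exists so the snippet also runs where kubeadm is absent:

```shell
# Pre-pull control-plane images on server A (run as root).
if command -v kubeadm >/dev/null 2>&1; then
  kubeadm config images pull \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.18.2
else
  echo "kubeadm not installed yet"
fi
```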
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.18.2 --pod-network-cidr=10.244.0.0/16
Output like the following means the master is basically up; it contains two important pieces of information:
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.72.128:6443 --token e80nri.pb83aqxued1l1pd0 \
--discovery-token-ca-cert-hash sha256:e8bf3d1dd954fdac02f690f202115d55061341e3bb3d7d58077e62fa97648b3d
## Run these on server A
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install the worker nodes
Run on servers B and C.
### Run this on both B and C
kubeadm join 192.168.72.128:6443 --token e80nri.pb83aqxued1l1pd0 \
--discovery-token-ca-cert-hash sha256:e8bf3d1dd954fdac02f690f202115d55061341e3bb3d7d58077e62fa97648b3d
If the join command was cleared from the screen or forgotten, run the command below on server A to print it again. It is also handy later for adding more servers (D, E, ...); note that bootstrap tokens expire after 24 hours by default, so generate a fresh join command when that happens.
kubeadm token create --print-join-command
Check the nodes
They are all NotReady for now, because the flannel network plugin is not installed yet.
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 9m6s v1.18.2
node1 NotReady <none> 3m1s v1.18.2
node2 NotReady <none> 2m43s v1.18.2
[root@master ~]#
Install flannel
Create the flannel.yml file
Run on server A.
flannel.yml
Note: the Network subnet must match the --pod-network-cidr passed to kubeadm init above (10.244.0.0/16).
cat <<'EOF' > ./flannel.yml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
EOF
Apply it
[root@master ~]# kubectl apply -f flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master ~]# echo $?
0
[root@master ~]#
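While the DaemonSet rolls out, the flannel pods can be watched from server A (the fallback echo just keeps this sketch runnable on a machine without a cluster):

```shell
# One flannel pod should appear per node; app=flannel is the label set in flannel.yml
kubectl -n kube-system get pods -l app=flannel -o wide 2>/dev/null \
  || echo "no cluster reachable from this machine"
```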
Verify cluster status
All nodes have now become Ready.
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 19m v1.18.2
node1 Ready <none> 13m v1.18.2
node2 Ready <none> 13m v1.18.2
[root@master ~]#
kubectl command completion
kubectl has a lot of subcommands; install the bash-completion package so you can Tab-complete them.
[root@master ~]# yum -y install bash-completion.noarch
[root@master ~]# source <(kubectl completion bash)
[root@master ~]# source /etc/profile.d/bash_completion.sh
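The source line above only lasts for the current shell. To make completion permanent, append it to the shell's rc file; the sketch below writes to a temp file so it is safe to run anywhere, but on the node the target would be ~/.bashrc:

```shell
rc=$(mktemp)   # stand-in for ~/.bashrc in this illustration
echo 'source <(kubectl completion bash)' >> "$rc"
tail -n 1 "$rc"
```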
Vim paste mode
When pasting text containing # into vim, auto-indent and auto-comment mangle every line, which is annoying; adding the option below avoids that:
echo "set paste" >> /root/.vimrc
Change the ROLES name
The workers' ROLES column currently shows <none>, which is ugly, so let's change it.
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 24m v1.18.2
node1 Ready <none> 18m v1.18.2
node2 Ready <none> 18m v1.18.2
[root@master ~]#
Modify node1
kubectl edit nodes node1
Around line 22, under labels:, add node-role.kubernetes.io/worker: "" (mind the alignment):
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: node1
    kubernetes.io/os: linux
    node-role.kubernetes.io/worker: ""
Finally save with :wq, the same way as in vim.
node2
Repeat the same edit as for node1.
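Instead of editing each node object by hand, the same label can be applied with kubectl label; this assumes the cluster built above is reachable, and the fallback echo only keeps the sketch runnable elsewhere:

```shell
# Label both workers in one loop; "" is a valid (empty) label value
for n in node1 node2; do
  kubectl label node "$n" node-role.kubernetes.io/worker="" 2>/dev/null \
    || echo "could not label $n (no cluster reachable)"
done
```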
Verify
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 29m v1.18.2
node1 Ready worker 23m v1.18.2
node2 Ready worker 23m v1.18.2
[root@master ~]#
Useful kubectl commands
# list namespaces
kubectl get ns
# list pods in all namespaces
kubectl get pods -A
# list pods in a specific namespace
kubectl -n kube-system get pods
# show pod IPs and nodes
kubectl -n kube-system get pods -o wide
Reset the cluster
Run on all three machines (A, B, C).
kubeadm reset
Note that kubeadm reset does not remove $HOME/.kube/config or the CNI configuration under /etc/cni/net.d; delete those manually before re-initializing.
If you have questions, reply in the comments or send me a private message.