How to fix: Pod will not schedule onto the node with the local PVs, and the PersistentVolumeClaims will not bind to the local PersistentVolumes
I am trying to schedule mongo onto a specific node (qatar) in my cluster.
I see the following error message in the pod description:
Warning FailedScheduling 58m default-scheduler 0/7 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 6 node(s) didn't find available persistent volumes to bind.
Mongo depends on the following two claims:
[dsargrad@malta cfg]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-volume-learning-center-mongodb-0 Pending local-storage 3m57s
logs-volume-learning-center-mongodb-0 Pending local-storage 3m57s
[dsargrad@malta cfg]$ kubectl describe pvc data-volume-learning-center-mongodb-0
Name: data-volume-learning-center-mongodb-0
Namespace: default
StorageClass: local-storage
Status: Pending
Volume:
Labels: app=learning-center-mongodb-svc
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: learning-center-mongodb-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 4m45s persistentvolume-controller waiting for first consumer to be created before binding
Normal WaitForPodScheduled 12s (x19 over 4m42s) persistentvolume-controller waiting for pod learning-center-mongodb-0 to be scheduled
The two PVs I want them to bind to are as follows:
[dsargrad@malta cfg]$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mongo-data-pv 1Gi RWO Retain Available default/data-volume-learning-center-mongodb-0 local-storage 8m47s
mongo-logs-pv 1Gi RWO Retain Available default/logs-volume-learning-center-mongodb-0 local-storage 15m
These use "local" storage, on the node qatar.corp.sensis.com:
[dsargrad@malta cfg]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
benin.corp.sensis.com Ready <none> 45h v1.20.5
chad.corp.sensis.com Ready <none> 45h v1.20.5
malta.corp.sensis.com Ready control-plane,master 45h v1.20.5
qatar.corp.sensis.com Ready <none> 45h v1.20.5
spain.corp.sensis.com Ready <none> 45h v1.20.5
togo.corp.sensis.com Ready <none> 45h v1.20.5
tonga.corp.sensis.com Ready <none> 45h v1.20.5
My mongo pod fails to schedule:
[dsargrad@malta cfg]$ kubectl describe pod learning-center-mongodb-0
Name: learning-center-mongodb-0
Namespace: default
Priority: 0
Node: <none>
Labels: app=learning-center-mongodb-svc
controller-revision-hash=learning-center-mongodb-784678577f
statefulset.kubernetes.io/pod-name=learning-center-mongodb-0
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/learning-center-mongodb
Init Containers:
mongod-posthook:
Image: quay.io/mongodb/mongodb-kubernetes-operator-version-upgrade-post-start-hook:1.0.2
Port: <none>
Host Port: <none>
Command:
cp
version-upgrade-hook
/hooks/version-upgrade
Environment: <none>
Mounts:
/hooks from hooks (rw)
/var/run/secrets/kubernetes.io/serviceaccount from mongodb-kubernetes-operator-token-ldwsr (ro)
mongodb-agent-readinessprobe:
Image: quay.io/mongodb/mongodb-kubernetes-readinessprobe:1.0.1
Port: <none>
Host Port: <none>
Command:
cp
/probes/readinessprobe
/opt/scripts/readinessprobe
Environment: <none>
Mounts:
/opt/scripts from agent-scripts (rw)
/var/run/secrets/kubernetes.io/serviceaccount from mongodb-kubernetes-operator-token-ldwsr (ro)
Containers:
mongod:
Image: registry.hub.docker.com/library/mongo:4.2.6
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
#run post-start hook to handle version changes
/hooks/version-upgrade
# wait for config and keyfile to be created by the agent
while ! [ -f /data/automation-mongod.conf -a -f /var/lib/mongodb-mms-automation/authentication/keyfile ]; do sleep 3 ; done ; sleep 2 ;
# start mongod with this configuration
exec mongod -f /data/automation-mongod.conf;
Limits:
cpu: 1
memory: 500M
Requests:
cpu: 500m
memory: 400M
Environment:
AGENT_STATUS_FILEPATH: /healthstatus/agent-health-status.json
Mounts:
/data from data-volume (rw)
/healthstatus from healthstatus (rw)
/hooks from hooks (rw)
/var/lib/mongodb-mms-automation/authentication from learning-center-mongodb-keyfile (rw)
/var/log/mongodb-mms-automation from logs-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from mongodb-kubernetes-operator-token-ldwsr (ro)
mongodb-agent:
Image: quay.io/mongodb/mongodb-agent:10.27.0.6772-1
Port: <none>
Host Port: <none>
Command:
/bin/bash
-c
current_uid=$(id -u)
echo $current_uid
declare -r current_uid
if ! grep -q "${current_uid}" /etc/passwd ; then
sed -e "s/^mongodb:/builder:/" /etc/passwd > /tmp/passwd
echo "mongodb:x:$(id -u):$(id -g):,:/:/bin/bash" >> /tmp/passwd
cat /tmp/passwd
export NSS_WRAPPER_PASSWD=/tmp/passwd
export LD_PRELOAD=libnss_wrapper.so
export NSS_WRAPPER_GROUP=/etc/group
fi
agent/mongodb-agent -cluster=/var/lib/automation/config/cluster-config.json -skipMongoStart -noDaemonize -healthCheckFilePath=/var/log/mongodb-mms-automation/healthstatus/agent-health-status.json -serveStatusPort=5000 -useLocalMongoDbTools
Limits:
cpu: 1
memory: 500M
Requests:
cpu: 500m
memory: 400M
Readiness: exec [/opt/scripts/readinessprobe] delay=5s timeout=1s period=10s #success=1 #failure=60
Environment:
AGENT_STATUS_FILEPATH: /var/log/mongodb-mms-automation/healthstatus/agent-health-status.json
AUTOMATION_CONFIG_MAP: learning-center-mongodb-config
HEADLESS_AGENT: true
POD_NAMESPACE: default (v1:metadata.namespace)
Mounts:
/data from data-volume (rw)
/opt/scripts from agent-scripts (rw)
/var/lib/automation/config from automation-config (ro)
/var/lib/mongodb-mms-automation/authentication from learning-center-mongodb-keyfile (rw)
/var/log/mongodb-mms-automation from logs-volume (rw)
/var/log/mongodb-mms-automation/healthstatus from healthstatus (rw)
/var/run/secrets/kubernetes.io/serviceaccount from mongodb-kubernetes-operator-token-ldwsr (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
logs-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: logs-volume-learning-center-mongodb-0
ReadOnly: false
data-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-volume-learning-center-mongodb-0
ReadOnly: false
agent-scripts:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
automation-config:
Type: Secret (a volume populated by a Secret)
SecretName: learning-center-mongodb-config
Optional: false
healthstatus:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
hooks:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
learning-center-mongodb-keyfile:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
mongodb-kubernetes-operator-token-ldwsr:
Type: Secret (a volume populated by a Secret)
SecretName: mongodb-kubernetes-operator-token-ldwsr
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 7m19s default-scheduler 0/7 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, 6 node(s) didn't find available persistent volumes to bind.
Warning FailedScheduling 7m19s default-scheduler 0/7 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, 6 node(s) didn't find available persistent volumes to bind.
I used a claimRef when creating the PVs:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-logs-pv
  labels:
    app: learning-center-mongodb-svc
spec:
  capacity:
    storage: 1Gi
  claimRef:
    namespace: default
    name: logs-volume-learning-center-mongodb-0
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/storage/mongo/logs
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - qatar.corp.sensis.com
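The data PV is analogous. A sketch of it, reconstructed from the `kubectl describe pv mongo-data-pv` output further below (the exact original manifest may differ slightly):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-data-pv
  labels:
    app: learning-center-mongodb-svc
spec:
  capacity:
    storage: 1Gi
  claimRef:
    namespace: default
    name: data-volume-learning-center-mongodb-0
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/storage/mongo/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - qatar.corp.sensis.com
```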
My local storage class:
[dsargrad@malta cfg]$ kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-storage (default) kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 4h22m
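For reference, a manifest matching the columns above would look roughly like this (a sketch; field names follow the `storage.k8s.io/v1` API, and the default-class annotation is inferred from the `(default)` marker):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: false
```

With `WaitForFirstConsumer`, PVC binding is deliberately deferred until a pod using the claim is scheduled, which is why the PVC events above show `WaitForPodScheduled`.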
Here is the description of the data PV:
[dsargrad@malta cfg]$ kubectl describe pv mongo-data-pv
Name: mongo-data-pv
Labels: app=learning-center-mongodb-svc
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-storage
Status: Available
Claim: default/data-volume-learning-center-mongodb-0
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [qatar.corp.sensis.com]
Message:
Source:
Type: LocalVolume (a persistent volume backed by local storage on a node)
Path: /home/storage/mongo/data
Events: <none>
And the logs PV:
[dsargrad@malta cfg]$ kubectl describe pv mongo-logs-pv
Name: mongo-logs-pv
Labels: app=learning-center-mongodb-svc
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-storage
Status: Available
Claim: default/logs-volume-learning-center-mongodb-0
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [qatar.corp.sensis.com]
Message:
Source:
Type: LocalVolume (a persistent volume backed by local storage on a node)
Path: /home/storage/mongo/logs
Events: <none>
On the node qatar.corp.sensis.com, I have the folders referenced in the PVs: [screenshot of the directory with permissions]
Why is the pod not scheduled onto qatar.corp.sensis.com, and why do the PVCs not bind to the PVs?
Solution
I made the silly assumption that if a PVC declared a size, I would see it in the output of the describe command. I had to pull the YAML of the PVC spec to see that it was requesting more than the PVs were provisioned with.
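The subtlety is that Kubernetes quantities mix decimal suffixes (G = 10^9) with binary ones (Gi = 2^30), and a PVC can only bind to a PV whose capacity is at least the request. A quick sanity check (plain Python, not a Kubernetes API; the suffix table is a simplified subset covering just the units in this post):

```python
# Convert a Kubernetes-style quantity string to bytes.
# Simplified subset: only the suffixes that appear in this post.
UNITS = {"G": 10**9, "Gi": 2**30, "M": 10**6, "Mi": 2**20}

def to_bytes(quantity: str) -> int:
    # Try the longest suffixes first so "Gi" is not misread as "G".
    for suffix in sorted(UNITS, key=len, reverse=True):
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * UNITS[suffix]
    return int(quantity)

pv_capacity = to_bytes("1Gi")   # 1073741824 bytes
pvc_request = to_bytes("2G")    # 2000000000 bytes
# The claim asks for more than the PV offers, so it can never bind:
print(pvc_request > pv_capacity)  # True
```

So even though "2G" and "1Gi" both read as gigabyte-scale, the request exceeds the capacity and the scheduler reports no bindable volumes.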
The claims now bind successfully:
[dsargrad@malta cfg]$ kubectl apply -f *logs* --namespace default
persistentvolume/mongo-logs-pv configured
[dsargrad@malta cfg]$ kubectl apply -f *data* --namespace default
persistentvolume/mongo-data-pv configured
[dsargrad@malta cfg]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-volume-learning-center-mongodb-0 Bound mongo-data-pv 10Gi RWO local-storage 98m
logs-volume-learning-center-mongodb-0 Bound mongo-logs-pv 10Gi RWO local-storage 98m
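What changed in the reapplied manifests is the PV capacity: per the Bound output above, the PVs now advertise 10Gi, comfortably above the 2G request. The relevant fragment (a sketch; the rest of each manifest is unchanged):

```yaml
spec:
  capacity:
    storage: 10Gi   # was 1Gi; must be >= the PVC's 2G request to bind
```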
I found the detail I needed by looking closely at the PVC spec. Interestingly, this video on YouTube led me to the answer; watch from roughly 6:50 on.
In the spec below, note the requested storage size of "2G".
[dsargrad@malta cfg]$ kubectl get pvc logs-volume-learning-center-mongodb-0 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2021-03-31T15:55:40Z"
  finalizers:
    - kubernetes.io/pvc-protection
  labels:
    app: learning-center-mongodb-svc
  managedFields:
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:app: {}
        f:spec:
          f:accessModes: {}
          f:resources:
            f:requests:
              .: {}
              f:storage: {}
          f:volumeMode: {}
        f:status:
          f:phase: {}
      manager: kube-controller-manager
      operation: Update
      time: "2021-03-31T15:55:40Z"
  name: logs-volume-learning-center-mongodb-0
  namespace: default
  resourceVersion: "302313"
  uid: 09ef80fe-a45e-45e4-b515-9746b9265476
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2G
  storageClassName: local-storage
  volumeMode: Filesystem
status:
  phase: Pending