Pod will not schedule onto the node with the local PVs; the PersistentVolumeClaims will not bind to the local PersistentVolumes

I am trying to schedule mongo onto a given node (qatar) in my cluster.

I see the following error message in the pod description:

  Warning  FailedScheduling  58m   default-scheduler  0/7 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 6 node(s) didn't find available persistent volumes to bind.

Mongo depends on the following two claims:

[dsargrad@malta cfg]$ kubectl get pvc
NAME                                    STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
data-volume-learning-center-mongodb-0   Pending                                      local-storage   3m57s
logs-volume-learning-center-mongodb-0   Pending                                      local-storage   3m57s

[dsargrad@malta cfg]$ kubectl describe pvc data-volume-learning-center-mongodb-0
Name:          data-volume-learning-center-mongodb-0
Namespace:     default
StorageClass:  local-storage
Status:        Pending
Volume:
Labels:        app=learning-center-mongodb-svc
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       learning-center-mongodb-0
Events:
  Type    Reason                Age                   From                         Message
  ----    ------                ----                  ----                         -------
  Normal  WaitForFirstConsumer  4m45s                 persistentvolume-controller  waiting for first consumer to be created before binding
  Normal  WaitForPodScheduled   12s (x19 over 4m42s)  persistentvolume-controller  waiting for pod learning-center-mongodb-0 to be scheduled
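
(With volumeBindingMode: WaitForFirstConsumer, binding is deferred until a pod that uses the claim is scheduled, while the pod in turn cannot be scheduled until the scheduler finds a PV that satisfies the claim. A capacity or node-affinity mismatch therefore shows up as this pair of "waiting" events rather than as a direct binding error.)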

The two PVs I want them to bind to are as follows:

[dsargrad@malta cfg]$ kubectl get pv
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                           STORAGECLASS    REASON   AGE
mongo-data-pv   1Gi        RWO            Retain           Available   default/data-volume-learning-center-mongodb-0   local-storage            8m47s
mongo-logs-pv   1Gi        RWO            Retain           Available   default/logs-volume-learning-center-mongodb-0   local-storage            15m

These use "local" storage on the qatar.corp.sensis.com node:

[dsargrad@malta cfg]$ kubectl get nodes
NAME                    STATUS   ROLES                  AGE   VERSION
benin.corp.sensis.com   Ready    <none>                 45h   v1.20.5
chad.corp.sensis.com    Ready    <none>                 45h   v1.20.5
malta.corp.sensis.com   Ready    control-plane,master   45h   v1.20.5
qatar.corp.sensis.com   Ready    <none>                 45h   v1.20.5
spain.corp.sensis.com   Ready    <none>                 45h   v1.20.5
togo.corp.sensis.com    Ready    <none>                 45h   v1.20.5
tonga.corp.sensis.com   Ready    <none>                 45h   v1.20.5

My mongo pod fails to schedule:

[dsargrad@malta cfg]$ kubectl describe pod learning-center-mongodb-0
Name:           learning-center-mongodb-0
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=learning-center-mongodb-svc
                controller-revision-hash=learning-center-mongodb-784678577f
                statefulset.kubernetes.io/pod-name=learning-center-mongodb-0
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/learning-center-mongodb
Init Containers:
  mongod-posthook:
    Image:      quay.io/mongodb/mongodb-kubernetes-operator-version-upgrade-post-start-hook:1.0.2
    Port:       <none>
    Host Port:  <none>
    Command:
      cp
      version-upgrade-hook
      /hooks/version-upgrade
    Environment:  <none>
    Mounts:
      /hooks from hooks (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from mongodb-kubernetes-operator-token-ldwsr (ro)
  mongodb-agent-readinessprobe:
    Image:      quay.io/mongodb/mongodb-kubernetes-readinessprobe:1.0.1
    Port:       <none>
    Host Port:  <none>
    Command:
      cp
      /probes/readinessprobe
      /opt/scripts/readinessprobe
    Environment:  <none>
    Mounts:
      /opt/scripts from agent-scripts (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from mongodb-kubernetes-operator-token-ldwsr (ro)
Containers:
  mongod:
    Image:      registry.hub.docker.com/library/mongo:4.2.6
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/sh
      -c

      #run post-start hook to handle version changes
      /hooks/version-upgrade

      # wait for config and keyfile to be created by the agent
       while ! [ -f /data/automation-mongod.conf -a -f /var/lib/mongodb-mms-automation/authentication/keyfile ]; do sleep 3 ; done ; sleep 2 ;


      # start mongod with this configuration
      exec mongod -f /data/automation-mongod.conf;

    Limits:
      cpu:     1
      memory:  500M
    Requests:
      cpu:     500m
      memory:  400M
    Environment:
      AGENT_STATUS_FILEPATH:  /healthstatus/agent-health-status.json
    Mounts:
      /data from data-volume (rw)
      /healthstatus from healthstatus (rw)
      /hooks from hooks (rw)
      /var/lib/mongodb-mms-automation/authentication from learning-center-mongodb-keyfile (rw)
      /var/log/mongodb-mms-automation from logs-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from mongodb-kubernetes-operator-token-ldwsr (ro)
  mongodb-agent:
    Image:      quay.io/mongodb/mongodb-agent:10.27.0.6772-1
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/bash
      -c
      current_uid=$(id -u)
      echo $current_uid
      declare -r current_uid
      if ! grep -q "${current_uid}" /etc/passwd ; then
      sed -e "s/^mongodb:/builder:/" /etc/passwd > /tmp/passwd
      echo "mongodb:x:$(id -u):$(id -g):,:/:/bin/bash" >> /tmp/passwd
      cat /tmp/passwd
      export NSS_WRAPPER_PASSWD=/tmp/passwd
      export LD_PRELOAD=libnss_wrapper.so
      export NSS_WRAPPER_GROUP=/etc/group
      fi
      agent/mongodb-agent -cluster=/var/lib/automation/config/cluster-config.json -skipMongoStart -noDaemonize -healthCheckFilePath=/var/log/mongodb-mms-automation/healthstatus/agent-health-status.json -serveStatusPort=5000 -useLocalMongoDbTools
    Limits:
      cpu:     1
      memory:  500M
    Requests:
      cpu:      500m
      memory:   400M
    Readiness:  exec [/opt/scripts/readinessprobe] delay=5s timeout=1s period=10s #success=1 #failure=60
    Environment:
      AGENT_STATUS_FILEPATH:  /var/log/mongodb-mms-automation/healthstatus/agent-health-status.json
      AUTOMATION_CONFIG_MAP:  learning-center-mongodb-config
      HEADLESS_AGENT:         true
      POD_NAMESPACE:          default (v1:metadata.namespace)
    Mounts:
      /data from data-volume (rw)
      /opt/scripts from agent-scripts (rw)
      /var/lib/automation/config from automation-config (ro)
      /var/lib/mongodb-mms-automation/authentication from learning-center-mongodb-keyfile (rw)
      /var/log/mongodb-mms-automation from logs-volume (rw)
      /var/log/mongodb-mms-automation/healthstatus from healthstatus (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from mongodb-kubernetes-operator-token-ldwsr (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  logs-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  logs-volume-learning-center-mongodb-0
    ReadOnly:   false
  data-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-volume-learning-center-mongodb-0
    ReadOnly:   false
  agent-scripts:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  automation-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  learning-center-mongodb-config
    Optional:    false
  healthstatus:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  hooks:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  learning-center-mongodb-keyfile:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  mongodb-kubernetes-operator-token-ldwsr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  mongodb-kubernetes-operator-token-ldwsr
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  7m19s  default-scheduler  0/7 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, 6 node(s) didn't find available persistent volumes to bind.
  Warning  FailedScheduling  7m19s  default-scheduler  0/7 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, 6 node(s) didn't find available persistent volumes to bind.

I used a claimRef when creating the PVs, to pre-bind each PV to its PVC:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-logs-pv
  labels:
    app: learning-center-mongodb-svc
spec:
  capacity:
    storage: 1Gi
  claimRef:
    namespace: default
    name: logs-volume-learning-center-mongodb-0
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/storage/mongo/logs
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - qatar.corp.sensis.com
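
The data PV was presumably created the same way, differing only in its name, claimRef, and path; a sketch consistent with the describe output shown further below:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-data-pv
  labels:
    app: learning-center-mongodb-svc
spec:
  capacity:
    storage: 1Gi
  claimRef:
    namespace: default
    name: data-volume-learning-center-mongodb-0
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/storage/mongo/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - qatar.corp.sensis.com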

My local storage class:

[dsargrad@malta cfg]$ kubectl get storageclass
NAME                      PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-storage (default)   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  4h22m
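
For reference, a StorageClass matching this output would look roughly like the following sketch (reconstructed from the columns above, not the actual manifest):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # shown as "(default)" above
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: false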

Here is the description of the data PV:

[dsargrad@malta cfg]$ kubectl describe pv mongo-data-pv
Name:              mongo-data-pv
Labels:            app=learning-center-mongodb-svc
Annotations:       <none>
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      local-storage
Status:            Available
Claim:             default/data-volume-learning-center-mongodb-0
Reclaim Policy:    Retain
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          1Gi
Node Affinity:
  Required Terms:
    Term 0:        kubernetes.io/hostname in [qatar.corp.sensis.com]
Message:
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /home/storage/mongo/data
Events:    <none>

And the logs PV:

[dsargrad@malta cfg]$ kubectl describe pv mongo-logs-pv
Name:              mongo-logs-pv
Labels:            app=learning-center-mongodb-svc
Annotations:       <none>
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      local-storage
Status:            Available
Claim:             default/logs-volume-learning-center-mongodb-0
Reclaim Policy:    Retain
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          1Gi
Node Affinity:
  Required Terms:
    Term 0:        kubernetes.io/hostname in [qatar.corp.sensis.com]
Message:
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /home/storage/mongo/logs
Events:    <none>

On the node qatar.corp.sensis.com, I have the folders referenced in the PVs. [Screenshot of directory with permissions]

Why isn't the pod scheduled onto qatar.corp.sensis.com, and why don't the PVCs bind to the PVs?

Solution

I made the silly assumption that if a PVC requested a particular size, I would see that size in the output of the describe command. I had to get the YAML of the PVC spec to see that it was requesting more storage than the PVs provided.

The binding now succeeds:

[dsargrad@malta cfg]$ kubectl apply -f *logs* --namespace default
persistentvolume/mongo-logs-pv configured
[dsargrad@malta cfg]$ kubectl apply -f *data* --namespace default
persistentvolume/mongo-data-pv configured
[dsargrad@malta cfg]$ kubectl get pvc
NAME                                    STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS    AGE
data-volume-learning-center-mongodb-0   Bound    mongo-data-pv   10Gi       RWO            local-storage   98m
logs-volume-learning-center-mongodb-0   Bound    mongo-logs-pv   10Gi       RWO            local-storage   98m
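
The change that made binding succeed was presumably raising spec.capacity.storage on each PV so it covers the claim's request (the Bound output above shows 10Gi). A sketch of the updated logs PV, assuming everything else stayed as in the original manifest:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-logs-pv
  labels:
    app: learning-center-mongodb-svc
spec:
  capacity:
    storage: 10Gi  # raised from 1Gi; must be at least the PVC's 2G request
  claimRef:
    namespace: default
    name: logs-volume-learning-center-mongodb-0
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/storage/mongo/logs
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - qatar.corp.sensis.com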

I found the detail I needed by looking closely at the PVC spec. Interestingly, this video on YouTube led me to the answer; watch from around the 6:50 mark.

Note, in the YAML below, the requested storage size of "2G".

[dsargrad@malta cfg]$ kubectl get pvc logs-volume-learning-center-mongodb-0 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2021-03-31T15:55:40Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: learning-center-mongodb-svc
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:accessModes: {}
        f:resources:
          f:requests:
            .: {}
            f:storage: {}
        f:volumeMode: {}
      f:status:
        f:phase: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-03-31T15:55:40Z"
  name: logs-volume-learning-center-mongodb-0
  namespace: default
  resourceVersion: "302313"
  uid: 09ef80fe-a45e-45e4-b515-9746b9265476
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2G
  storageClassName: local-storage
  volumeMode: Filesystem
status:
  phase: Pending
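
Note the units: 2G means 2 × 10^9 bytes, which is about 1.86Gi, while each PV offered only 1Gi (2^30 bytes ≈ 1.07 × 10^9 bytes). Since neither PV's capacity covered the claim's request, the scheduler reported that no persistent volumes were available to bind.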
