Unable to bind to a volume when installing RabbitMQ on K8s

I am trying to install RabbitMQ with Helm, but the installation fails because of a volume problem.

This is my StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
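
Note that for local volumes the Kubernetes documentation recommends delaying binding until a consuming pod is scheduled, because the scheduler has to pick a node that satisfies the volume's node affinity before the claim is bound. A variant of the class above with delayed binding would look like:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
# Delay binding until a pod using the claim is scheduled,
# so node affinity on the local PV can be taken into account.
volumeBindingMode: WaitForFirstConsumer
```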

This is my PersistentVolume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: main-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /media/2TB-DATA/k8s-pv
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-dev
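
As a sanity check, a claim created by hand (the name `test-claim` here is purely illustrative) can show whether this PV is bindable at all, independent of the Helm chart:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  # Must match the PV's access modes and class to be a binding candidate.
  accessModes:
    - ReadWriteMany
  storageClassName: local-storage
  resources:
    requests:
      storage: 1Gi
```

If this claim also stays Pending, the problem is in the PV or StorageClass rather than in the chart.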

This is the output listing my StorageClass and PV:

# kubectl get storageclass
NAME                      PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
local-storage (default)   kubernetes.io/no-provisioner   Delete          Immediate           false                  14m
# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
main-pv   100Gi      RWX            Delete           Available           local-storage            40m

After installing RabbitMQ with:

helm install rabbitmq bitnami/rabbitmq

the pod is stuck in Pending, and I see this error:

# kubectl describe pvc
Name:          data-rabbitmq-0
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        app.kubernetes.io/instance=rabbitmq
               app.kubernetes.io/name=rabbitmq
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    rabbitmq-0
Events:
  Type    Reason         Age                     From                         Message
  ----    ------         ----                    ----                         -------
  Normal  FailedBinding  3m20s (x4363 over 18h)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
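
Which class, if any, the claim actually requested can be read directly from its spec:

```shell
kubectl get pvc data-rabbitmq-0 -o jsonpath='{.spec.storageClassName}'
```

An empty result means the claim was created without a class and no default StorageClass was applied at the time the claim was admitted.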

What am I doing wrong?

Solution

This may be platform related. Where are you trying to do this? I ask because I cannot reproduce it on GKE: there it works fine.
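
One thing worth checking on other platforms: if `local-storage` was not yet marked as the default class at the moment the chart created its claim, the claim's `storageClassName` stays empty and it can never match the PV. With the Bitnami chart the class can usually be passed explicitly (the `persistence.storageClass` parameter name is taken from the chart's values and may differ between chart versions, so verify it against your chart):

```shell
helm install rabbitmq bitnami/rabbitmq \
  --set persistence.storageClass=local-storage
```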

Cluster version, labels, nodes:

kubectl get nodes --show-labels
NAME                                       STATUS   ROLES    AGE   VERSION           LABELS
gke-cluster-1-default-pool-82008fd9-8x81   Ready    <none>   96d   v1.14.10-gke.36   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=europe-west4,failure-domain.beta.kubernetes.io/zone=europe-west4-b,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-cluster-1-default-pool-82008fd9-8x81,kubernetes.io/os=linux,test=node
gke-cluster-1-default-pool-82008fd9-qkp7   Ready    <none>   96d   v1.14.10-gke.36   beta.kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-cluster-1-default-pool-82008fd9-qkp7,test=node
gke-cluster-1-default-pool-82008fd9-tlc7   Ready    <none>   96d   v1.14.10-gke.36   beta.kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-cluster-1-default-pool-82008fd9-tlc7,test=node

PV and StorageClass:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: main-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: test
              operator: In
              values:
                - node-test

Install the chart:

helm install rabbitmq bitnami/rabbitmq
...
kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
...
pod/rabbitmq-0         1/1     Running   0          3m40s
...



kubectl describe pod rabbitmq-0
Name:           rabbitmq-0
Namespace:      default
Priority:       0
Node:           gke-cluster-1-default-pool-82008fd9-tlc7/10.164.0.29
Start Time:     Thu,03 Sep 2020 07:34:10 +0000
Labels:         app.kubernetes.io/instance=rabbitmq
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=rabbitmq
                controller-revision-hash=rabbitmq-8687f4cb9f
                helm.sh/chart=rabbitmq-7.6.4
                statefulset.kubernetes.io/pod-name=rabbitmq-0
Annotations:    checksum/secret: 433e8ea7590e8d9f1bb94ed2f55e6d9b95f8abef722a917b97a9e916921d7ac5
                kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container rabbitmq
Status:         Running
IP:             10.16.2.13
IPs:            <none>
Controlled By:  StatefulSet/rabbitmq
Containers:
  rabbitmq:
    Container ID:   docker://b1a567522f50ac4c0663db2d9eca5fd8721d9a3d900ac38bb58f0cae038162f2
    Image:          docker.io/bitnami/rabbitmq:3.8.7-debian-10-r0
    Image ID:       docker-pullable://bitnami/rabbitmq@sha256:9abd53aeef6d222fec318c97a75dd50ce19c16b11cb83a3e4fb91c4047ea0d4d
    Ports:          5672/TCP,25672/TCP,15672/TCP,4369/TCP
    Host Ports:     0/TCP,0/TCP,0/TCP
    State:          Running
      Started:      Thu,03 Sep 2020 07:34:34 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
    Liveness:   exec [/bin/bash -ec rabbitmq-diagnostics -q check_running] delay=120s timeout=20s period=30s #success=1 #failure=6
    Readiness:  exec [/bin/bash -ec rabbitmq-diagnostics -q check_running] delay=10s timeout=20s period=30s #success=1 #failure=3
    Environment:
      BITNAMI_DEBUG:            false
      MY_POD_IP:                 (v1:status.podIP)
      MY_POD_NAME:              rabbitmq-0 (v1:metadata.name)
      MY_POD_NAMESPACE:         default (v1:metadata.namespace)
      K8S_SERVICE_NAME:         rabbitmq-headless
      K8S_ADDRESS_TYPE:         hostname
      RABBITMQ_FORCE_BOOT:      no
      RABBITMQ_NODE_NAME:       rabbit@$(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
      K8S_HOSTNAME_SUFFIX:      .$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
      RABBITMQ_MNESIA_DIR:      /bitnami/rabbitmq/mnesia/$(RABBITMQ_NODE_NAME)
      RABBITMQ_LDAP_ENABLE:     no
      RABBITMQ_LOGS:            -
      RABBITMQ_ULIMIT_NOFILES:  65536
      RABBITMQ_USE_LONGNAME:    true
      RABBITMQ_ERL_COOKIE:      <set to the key 'rabbitmq-erlang-cookie' in secret 'rabbitmq'>  Optional: false
      RABBITMQ_USERNAME:        user
      RABBITMQ_PASSWORD:        <set to the key 'rabbitmq-password' in secret 'rabbitmq'>  Optional: false
      RABBITMQ_PLUGINS:         rabbitmq_management,rabbitmq_peer_discovery_k8s,rabbitmq_auth_backend_ldap
Mounts:
      /bitnami/rabbitmq/conf from configuration (rw)
      /bitnami/rabbitmq/mnesia from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rabbitmq-token-mclhw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-rabbitmq-0
    ReadOnly:   false
  configuration:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      rabbitmq-config
    Optional:  false
  rabbitmq-token-mclhw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  rabbitmq-token-mclhw
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                  Age    From                                               Message
  ----    ------                  ----   ----                                               -------
  Normal  Scheduled               6m42s  default-scheduler                                  Successfully assigned default/rabbitmq-0 to gke-cluster-1-default-pool-82008fd9-tlc7
  Normal  SuccessfulAttachVolume  6m36s  attachdetach-controller                            AttachVolume.Attach succeeded for volume "pvc-8145821b-ed09-11ea-b464-42010aa400e3"
  Normal  Pulling                 6m32s  kubelet,gke-cluster-1-default-pool-82008fd9-tlc7  Pulling image "docker.io/bitnami/rabbitmq:3.8.7-debian-10-r0"
  Normal  Pulled                  6m22s  kubelet,gke-cluster-1-default-pool-82008fd9-tlc7  Successfully pulled image "docker.io/bitnami/rabbitmq:3.8.7-debian-10-r0"
  Normal  Created                 6m18s  kubelet,gke-cluster-1-default-pool-82008fd9-tlc7  Created container rabbitmq
  Normal  Started                 6m18s  kubelet,gke-cluster-1-default-pool-82008fd9-tlc7  Started container rabbitmq
