How to use NFS storage with the CrunchyData Postgres Operator

I am trying out the CrunchyData postgres-operator (Helm) together with an NFS Helm chart. I am unable to create a cluster using NFS. Here is the configuration I followed:

Installed the NFS Helm chart from the stable repository:

helm install nfs-abc stable/nfs-server-provisioner
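
To confirm the provisioner is running and has registered a StorageClass, the following checks can be used (the grep pattern assumes the chart's default resource naming for the release nfs-abc):

kubectl get storageclass
kubectl get pods,svc | grep nfs-server-provisioner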

Set the postgres storage values (Doc):

backrest_storage: 'nfsstorage'
backup_storage: 'nfsstorage'
primary_storage: 'nfsstorage'
replica_storage: 'nfsstorage'

Set the storage configuration (Doc):

export CCP_SECURITY_CONTEXT='"supplementalGroups": [65534]'
export CCP_STORAGE_PATH=/nfsfileshare
export CCP_NFS_IP=data-nfs-dravoka-nfs-server-provisioner-0.default.svc.cluster.local
export CCP_STORAGE_MODE=ReadWriteMany
export CCP_STORAGE_CAPACITY=400M
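
If the intention is to bind the cluster to a statically provisioned NFS volume, Kubernetes also needs a PersistentVolume pointing at the NFS export. Below is a minimal sketch that simply mirrors the exported values above; the PV name is illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-dravoka                 # illustrative name
spec:
  capacity:
    storage: 400M                      # CCP_STORAGE_CAPACITY
  accessModes:
    - ReadWriteMany                    # CCP_STORAGE_MODE
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: data-nfs-dravoka-nfs-server-provisioner-0.default.svc.cluster.local   # CCP_NFS_IP
    path: /nfsfileshare                # CCP_STORAGE_PATH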

Created the PGO cluster:

pgo create cluster -n pgo dravoka --storage-config='nfsstorage' --pgbackrest-storage-config='nfsstorage' --pvc-size='2Gi'
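
The resulting PVCs and pods can then be inspected with:

kubectl get pvc -n pgo
kubectl get pods -n pgo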

PVC description:

kubectl describe -n pgo pvc dravoka
Name:          dravoka
Namespace:     pgo
StorageClass:  standard
Status:        Pending
Volume:
Labels:        pg-cluster=dravoka
               pgremove=true
               vendor=crunchydata
Annotations:   volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type     Reason              Age                    From                         Message
  ----     ------              ----                   ----                         -------
  Warning  ProvisioningFailed  112s (x10 over 7m45s)  persistentvolume-controller  Failed to provision volume with StorageClass "standard": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported

Pod description:

kubectl describe -n pgo pod dravoka-backrest-shared-repo-9fdd77886-j2mjv
Name:           dravoka-backrest-shared-repo-9fdd77886-j2mjv
Namespace:      pgo
Priority:       0
Node:           <none>
Labels:         name=dravoka-backrest-shared-repo
                pg-cluster=dravoka
                pg-pod-anti-affinity=preferred
                pgo-backrest-repo=true
                pod-template-hash=9fdd77886
                service-name=dravoka-backrest-shared-repo
                vendor=crunchydata
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/dravoka-backrest-shared-repo-9fdd77886
Containers:
  database:
    Image:      registry.developers.crunchydata.com/crunchydata/pgo-backrest-repo:centos7-4.4.1
    Port:       2022/TCP
    Host Port:  0/TCP
    Requests:
      memory:  48Mi
    Environment:
      PGBACKREST_STANZA:           db
      SSHD_PORT:                   2022
      PGBACKREST_DB_PATH:          /pgdata/dravoka
      PGBACKREST_REPO_PATH:        /backrestrepo/dravoka-backrest-shared-repo
      PGBACKREST_PG1_PORT:         5432
      PGBACKREST_LOG_PATH:         /tmp
      PGBACKREST_PG1_SOCKET_PATH:  /tmp
      PGBACKREST_DB_HOST:          dravoka
    Mounts:
      /backrestrepo from backrestrepo (rw)
      /sshd from sshd (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  sshd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  dravoka-backrest-repo-config
    Optional:    false
  backrestrepo:
    Type:        PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:   dravoka-pgbr-repo
    ReadOnly:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  76s (x7 over 9m58s)  default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 2 times)

Am I missing some configuration or doing something wrong? My goal is to use NFS as the postgres storage. Any help would be appreciated.

Solution

Failed to provision volume with StorageClass "standard": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported

So this is the root cause of the problem: you are provisioning the PVC with a StorageClass that does not support the required access mode (ReadWriteMany).
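
The provisioner behind the standard StorageClass can be confirmed directly (the PVC annotation above already shows kubernetes.io/gce-pd, which only supports ReadWriteOnce and ReadOnlyMany):

kubectl get storageclass standard -o yaml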

Looking at the doc, it seems you already have this configuration:

storage3_name: 'nfsstorage'
storage3_access_mode: 'ReadWriteMany'
storage3_size: '1G'
storage3_type: 'create'
storage3_supplemental_groups: 65534

storage3_access_mode sets the access mode to ReadWriteMany, which is not supported by that StorageClass.

Try changing it to ReadWriteOnce.
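
A sketch of the adjusted storage section, changing only the access mode and keeping the other values from the snippet above:

storage3_name: 'nfsstorage'
storage3_access_mode: 'ReadWriteOnce'
storage3_size: '1G'
storage3_type: 'create'
storage3_supplemental_groups: 65534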


Also, Postgres needs block storage to work, so even if NFS is mounted correctly, the Postgres cluster may not run properly. More explanation here.
