How to fix: Ceph PG repair does not start immediately
Every now and then my cluster reports an inconsistent PG. As the documentation suggests, I run ceph pg repair pg.id and the command reports "instructing pg x on osd y to repair", which looks like it is working as expected. However, the repair does not start immediately. What could be the reason for that? I run scrubbing around the clock, so at any given time there are at least 8-10 PGs being scrubbed or deep-scrubbed. Do PG operations such as scrubs and repairs form a queue, and is my repair command simply waiting for its turn? Or is something else going on here?
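For reference, this is roughly how I count the scrubbing PGs at any given moment (a quick sketch; the exact state strings and whether ceph config get is available depend on your release):
# ceph pg dump pgs_brief 2>/dev/null | grep -c 'scrubbing'
# ceph config get osd osd_max_scrubs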
Edit:
Output of ceph health detail:
pg 57.ee is active+clean+inconsistent, acting [16,46,74,59,5]
rados list-inconsistent-obj 57.ee --format=json-pretty
{
    "epoch": 55281,
    "inconsistents": [
        {
            "object": {
                "name": "10001a447c7.00005b03",
                "nspace": "",
                "locator": "",
                "snap": "head",
                "version": 150876
            },
            "errors": [],
            "union_shard_errors": [
                "read_error"
            ],
            "selected_object_info": {
                "oid": {
                    "oid": "10001a447c7.00005b03",
                    "key": "",
                    "snapid": -2,
                    "hash": 3954101486,
                    "max": 0,
                    "pool": 57,
                    "namespace": ""
                },
                "version": "55268'150876",
                "prior_version": "0'0",
                "last_reqid": "client.42086585.0:355736",
                "user_version": 150876,
                "size": 4194304,
                "mtime": "2021-03-15 21:52:43.651368",
                "local_mtime": "2021-03-15 21:52:45.399035",
                "lost": 0,
                "flags": [
                    "dirty",
                    "data_digest"
                ],
                "truncate_seq": 0,
                "truncate_size": 0,
                "data_digest": "0xf88f1537",
                "omap_digest": "0xffffffff",
                "expected_object_size": 0,
                "expected_write_size": 0,
                "alloc_hint_flags": 0,
                "manifest": {
                    "type": 0
                },
                "watchers": {}
            },
            "shards": [
                {
                    "osd": 5,
                    "primary": false,
                    "shard": 4,
                    "size": 1400832,
                    "data_digest": "0x00000000"
                },
                {
                    "osd": 16,
                    "primary": true,
                    "shard": 0
                },
                {
                    "osd": 46,
                    "shard": 1
                },
                {
                    "osd": 59,
                    "shard": 3,
                    "errors": [
                        "read_error"
                    ],
                    "size": 1400832
                },
                {
                    "osd": 74,
                    "shard": 2,
                    "data_digest": "0x00000000"
                }
            ]
        }
    ]
}
This PG is in an EC pool. When I run ceph pg repair 57.ee I get the output:
instructing pg 57.ees0 on osd.16 to repair
However, as you can see from the PG report, the inconsistent shard is on osd 59. I assume the "s0" at the end of that output refers to the first shard, so I also tried the repair command as:
ceph pg repair 57.ees3, but I got an error telling me it is an invalid command.
Solution
You have an I/O error, which typically happens because of a failing disk, as the shard error shows:
errors": [],"union_shard_errors": [
"read_error"
The problematic shard is on "osd": 59.
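If you want to confirm the failing-disk theory, one way (a sketch: the device path /dev/sdX is a placeholder, and you need shell access to the OSD host) is to locate the host behind osd.59 and check its kernel log and SMART data:
# ceph osd find 59
# dmesg | grep -i 'i/o error'
# smartctl -a /dev/sdX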
Try to force a read of the problematic object again (the output file below is just a scratch destination):
# rados -p EC_pool get 10001a447c7.00005b03 /tmp/10001a447c7.00005b03
The scrub causes the object to be read, and the read returns an error; that means the object is marked as missing, and when that happens Ceph will try to recover it from elsewhere (peering, recovery, backfill).
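You can watch that recovery happen, for example by following the cluster log or by polling the PG state until it returns to active+clean:
# ceph -w
# ceph pg 57.ee query | grep '"state"'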