Trying to use an iSCSI volume in a Kubernetes cluster, but getting "wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program"

Because of problems possibly caused by NFS (ref), I tried to set up an iSCSI volume mount in my K8s cluster, but I get this error:

MountVolume.MountDevice failed for volume "iscsipd-rw" : mount failed: exit status 32

Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/kubernetes.io/iscsi/iface-default/192.168.20.100:3260-iqn.2020-09.com.xxxx:yyyy.testtarget-lun-1 --scope -- mount -t ext4 -o defaults /dev/disk/by-path/ip-192.168.20.100:3260-iscsi-iqn.2020-09.com.xxxx:yyyy.testtarget-lun-1 /var/lib/kubelet/plugins/kubernetes.io/iscsi/iface-default/192.168.20.100:3260-iqn.2020-09.com.xxxx:yyyy.testtarget-lun-1

mount: /var/lib/kubelet/plugins/kubernetes.io/iscsi/iface-default/192.168.20.100:3260-iqn.2020-09.com.xxxx:yyyy.testtarget-lun-1: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error.

At first I followed this document to create the iSCSI initiator, and as errors came up in different situations I tried various settings over multiple attempts. The iSCSI initiator connection looks fine:

Command (m for help): p
Disk /dev/sdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 2CDE61DE-F57A-4C0B-AFB6-9DD7040A8BBD

Tue Apr 13 15:41:57 i@kt04:~$ lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                         8:0    0   64G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    1G  0 part /boot
├─sda3                      8:3    0   49G  0 part
│ └─ubuntu--vg-ubuntu--lv 253:0    0   63G  0 lvm  /
└─sda4                      8:4    0   14G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0   63G  0 lvm  /
sdb                         8:16   0    1G  0 disk
sr0                        11:0    1 1024M  0 rom

Tue Apr 13 15:45:33 i@kt04:~$ sudo ls -l /dev/disk/by-path/
total 0
lrwxrwxrwx 1 root root  9 Apr 13 15:41 ip-192.168.20.100:3260-iscsi-iqn.2020-09.com.xxxx:yyyy.testtarget-lun-1 -> ../../sdb
lrwxrwxrwx 1 root root  9 Apr 13 11:07 pci-0000:00:10.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Apr 13 11:07 pci-0000:00:10.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 13 11:07 pci-0000:00:10.0-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 13 11:07 pci-0000:00:10.0-scsi-0:0:0:0-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Apr 13 11:07 pci-0000:00:10.0-scsi-0:0:0:0-part4 -> ../../sda4
lrwxrwxrwx 1 root root  9 Apr 13 01:55 pci-0000:02:01.0-ata-1 -> ../../sr0

Tue Apr 13 15:46:18 i@kt04:~$ sudo iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
version 2.0-874
Target: iqn.2020-09.com.xxxx:yyyy.testtarget (non-flash)
        Current Portal: 192.168.20.100:3260,1
        Persistent Portal: 192.168.20.100:3260,1
                **********
                Interface:
                **********
                Iface Name: default
                Iface Transport: tcp
                Iface Initiatorname: iqn.2020-09.com.xxxx:yyyy.testtarget
                Iface IPaddress: 192.168.30.24
                Iface HWaddress: <empty>
                Iface Netdev: <empty>
                SID: 2
                iSCSI Connection State: LOGGED IN
                iSCSI Session State: LOGGED_IN
                Internal iscsid Session State: NO CHANGE
                *********
                Timeouts:
                *********
                Recovery Timeout: 120
                Target Reset Timeout: 30
                LUN Reset Timeout: 30
                Abort Timeout: 15
                *****
                CHAP:
                *****
                username: <empty>
                password: ********
                username_in: <empty>
                password_in: ********
                ************************
                Negotiated iSCSI params:
                ************************
                HeaderDigest: None
                DataDigest: None
                MaxRecvDataSegmentLength: 262144
                MaxXmitDataSegmentLength: 262144
                FirstBurstLength: 65536
                MaxBurstLength: 262144
                ImmediateData: Yes
                InitialR2T: Yes
                MaxOutstandingR2T: 1
                ************************
                Attached SCSI devices:
                ************************
                Host Number: 33 State: running
                scsi33 Channel 00 Id 0 Lun: 1
                        Attached scsi disk sdb          State: running

Tue Apr 13 15:57:55 i@kt04:~$ sudo systemctl status open-iscsi
● open-iscsi.service - Login to default iSCSI targets
   Loaded: loaded (/lib/systemd/system/open-iscsi.service; enabled; vendor preset: enabled)
   Active: active (exited) since Tue 2021-04-13 11:03:20 CST; 5h 6min ago
     Docs: man:iscsiadm(8)
           man:iscsid(8)
  Process: 1352 ExecStop=/lib/open-iscsi/logout-all.sh (code=exited, status=0/SUCCESS)
  Process: 1351 ExecStop=/bin/sync (code=exited, status=0/SUCCESS)
  Process: 1301 ExecStop=/lib/open-iscsi/umountiscsi.sh (code=exited, status=0/SUCCESS)
  Process: 1416 ExecStart=/lib/open-iscsi/activate-storage.sh (code=exited, status=0/SUCCESS)
  Process: 1383 ExecStart=/sbin/iscsiadm -m node --loginall=automatic (code=exited, status=0/SUCCESS)
 Main PID: 1416 (code=exited, status=0/SUCCESS)

Apr 13 11:03:20 kt04 systemd[1]: Starting Login to default iSCSI targets...
Apr 13 11:03:20 kt04 iscsiadm[1383]: Logging in to [iface: default, target: iqn.2020-09.com.xxxx:yyyy.testtarget, portal: 192.168.20.100,3260] (multiple)
Apr 13 11:03:20 kt04 iscsiadm[1383]: Login to [iface: default, target: iqn.2020-09.com.xxxx:yyyy.testtarget, portal: 192.168.20.100,3260] successful.
Apr 13 11:03:20 kt04 systemd[1]: Started Login to default iSCSI targets.

Tue Apr 13 16:09:28 i@kt04:~$ sudo systemctl status iscsid
● iscsid.service - iSCSI initiator daemon (iscsid)
   Loaded: loaded (/lib/systemd/system/iscsid.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2021-04-13 11:03:20 CST; 5h 6min ago
     Docs: man:iscsid(8)
  Process: 1374 ExecStart=/sbin/iscsid (code=exited, status=0/SUCCESS)
  Process: 1364 ExecStartPre=/lib/open-iscsi/startup-checks.sh (code=exited, status=0/SUCCESS)
 Main PID: 1377 (iscsid)
    Tasks: 2 (limit: 4915)
   CGroup: /system.slice/iscsid.service
           ├─1376 /sbin/iscsid
           └─1377 /sbin/iscsid

Apr 13 11:03:20 kt04 systemd[1]: Starting iSCSI initiator daemon (iscsid)...
Apr 13 11:03:20 kt04 iscsid[1374]: iSCSI logger with pid=1376 started!
Apr 13 11:03:20 kt04 systemd[1]: Started iSCSI initiator daemon (iscsid).
Apr 13 11:03:21 kt04 iscsid[1376]: iSCSI daemon with pid=1377 started!
Apr 13 11:03:21 kt04 iscsid[1376]: Connection2:0 to [target: iqn.2020-09.com.xxxx:yyyy.testtarget, portal: 192.168.20.100,3260] through [iface: default] is operational now

Tue Apr 13 16:21:21 i@kt04:~$ cat /proc/scsi/scsi
Attached devices:
Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: NECVMWar Model: VMware SATA CD00 Rev: 1.00
  Type:   CD-ROM                           ANSI  SCSI revision: 05
Host: scsi32 Channel: 00 Id: 00 Lun: 00
  Vendor: VMware   Model: Virtual disk     Rev: 2.0
  Type:   Direct-Access                    ANSI  SCSI revision: 06
Host: scsi33 Channel: 00 Id: 00 Lun: 01
  Vendor: SYNOLOGY Model: iSCSI Storage    Rev: 4.0
  Type:   Direct-Access                    ANSI  SCSI revision: 05
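
For context, the initiator setup I followed boils down to roughly the commands below. This is a sketch using the IQN and portal from my logs above, not the exact steps from the document I followed:

```shell
# Install the open-iscsi initiator tools (Ubuntu/Debian)
sudo apt-get install -y open-iscsi

# Discover targets exposed by the portal
sudo iscsiadm -m discovery -t sendtargets -p 192.168.20.100:3260

# Log in to the discovered target
sudo iscsiadm -m node -T iqn.2020-09.com.xxxx:yyyy.testtarget \
  -p 192.168.20.100:3260 --login

# Log in automatically on boot
sudo iscsiadm -m node -T iqn.2020-09.com.xxxx:yyyy.testtarget \
  -p 192.168.20.100:3260 -o update -n node.startup -v automatic
```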

I have tried using sdb as a raw disk, as shown above, and I have also created an sdb1 partition formatted with ext4 (I even tried LVM once), which led to a "mount failed: exit status 32" error with "/dev/sdb already mounted or mount point busy".
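
For reference, the partition-and-format attempt amounted to roughly this (reconstructed from memory, not the exact commands I ran):

```shell
# Create a single partition spanning the disk
# (interactive fdisk: n, p, 1, accept defaults, w)
sudo fdisk /dev/sdb

# Put an ext4 filesystem on the new partition
sudo mkfs.ext4 /dev/sdb1
```

Note that, judging from the mount arguments in the error above, kubelet mounts the whole LUN device (/dev/sdb via its by-path link), not /dev/sdb1, so a filesystem on the partition would not be seen anyway.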

The pod YAML I am using:

apiVersion: v1
kind: Pod
metadata:
  name: iscsipd
spec:
  nodeName: kt04
  containers:
  - name: iscsipd-rw
    image: kubernetes/pause
    volumeMounts:
    - mountPath: "/mnt/iscsipd"
      name: iscsipd-rw
  restartPolicy: Always
  volumes:
  - name: iscsipd-rw
    iscsi:
      targetPortal: 192.168.20.100:3260
      iqn: iqn.2020-09.com.xxxx:yyyy.testtarget
      lun: 1
      fsType: ext4
      readOnly: false

In my last attempt I used fdisk to create an sdb1 partition with ext4 but did not mount it under /mnt; the result is below, and I still get the same error "wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program":

Wed Apr 14 11:25:06 ice@kt04:~$ lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                         8:0    0   64G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    1G  0 part /boot
├─sda3                      8:3    0   49G  0 part
│ └─ubuntu--vg-ubuntu--lv 253:0    0   63G  0 lvm  /
└─sda4                      8:4    0   14G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0   63G  0 lvm  /
sdb                         8:16   0    1G  0 disk
└─sdb1                      8:17   0 1023M  0 part
sr0                        11:0    1 1024M  0 rom

In the NAS's iSCSI configuration panel (a Synology RS1221), the target shows as connected (the LUN uses thick provisioning).

Bare-metal k8s version: 1.19.6

iscsiadm version: 2.0-874

open-iscsi version: 2.0.874-5ubuntu2.10

Can anyone suggest something I can try to make this work, or point out what I am doing wrong?

Solution

Problem solved, thanks to [Long Wu Yuan] on the Kubernetes Slack #kubernetes-users channel.

Information gathered before the problem was solved:

$ df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               7.9G     0  7.9G   0% /dev
tmpfs                              1.6G  3.5M  1.6G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   62G   10G   49G  18% /
tmpfs                              7.9G     0  7.9G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/sda2                          976M  146M  764M  16% /boot

$ lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                         8:0    0   64G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    1G  0 part /boot
├─sda3                      8:3    0   49G  0 part
│ └─ubuntu--vg-ubuntu--lv 253:0    0   63G  0 lvm  /
└─sda4                      8:4    0   14G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0   63G  0 lvm  /
sdb                         8:16   0    1G  0 disk
└─sdb1                      8:17   0 1023M  0 part
sr0                        11:0    1 1024M  0 rom

After deleting the pod:

$ df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               7.9G     0  7.9G   0% /dev
tmpfs                              1.6G  3.5M  1.6G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   62G   10G   49G  18% /
tmpfs                              7.9G     0  7.9G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/sda2                          976M  146M  764M  16% /boot

Then run sudo dd if=/dev/zero of=/dev/sdb bs=1M count=512 status=progress:

$ sudo dd if=/dev/zero of=/dev/sdb bs=1M count=512 status=progress
512+0 records in
512+0 records out
536870912 bytes (537 MB, 512 MiB) copied, 4.85588 s, 111 MB/s
$ lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                         8:0    0   64G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    1G  0 part /boot
├─sda3                      8:3    0   49G  0 part
│ └─ubuntu--vg-ubuntu--lv 253:0    0   63G  0 lvm  /
└─sda4                      8:4    0   14G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0   63G  0 lvm  /
sdb                         8:16   0    1G  0 disk
sr0                        11:0    1 1024M  0 rom
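
In hindsight, zeroing 512 MiB is probably more than needed; wiping just the filesystem and partition-table signatures should achieve the same result. I did not test this in my setup, so treat it as an alternative sketch:

```shell
# Remove all filesystem/RAID/partition-table signatures from the disk
sudo wipefs -a /dev/sdb

# Verify that no signatures remain (no output means the disk is clean)
sudo wipefs /dev/sdb
```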

Then I applied the pod again, and it worked! df -h and lsblk after the pod was running:

$ df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               7.9G     0  7.9G   0% /dev
tmpfs                              1.6G  3.7M  1.6G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   62G   10G   49G  17% /
tmpfs                              7.9G     0  7.9G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/sda2                          976M  146M  764M  16% /boot

$ lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                         8:0    0   64G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    1G  0 part /boot
├─sda3                      8:3    0   49G  0 part
│ └─ubuntu--vg-ubuntu--lv 253:0    0   63G  0 lvm  /
└─sda4                      8:4    0   14G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0   63G  0 lvm  /
sdb                         8:16   0    1G  0 disk
sr0                        11:0    1 1024M  0 rom

As Long said, I should have taken the "bad superblock" part of the error message more seriously and worked out from there what was misconfigured in my environment for this kind of iSCSI volume.
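
For anyone who hits the same message, these commands show which on-disk signatures the mount is tripping over; in my case they would have pointed at the leftover GPT label and stale filesystem right away:

```shell
# Probe for filesystem and partition-table signatures
sudo blkid -p /dev/sdb

# Read the start of the device directly (reports e.g. a partition table)
sudo file -s /dev/sdb

# Show the detected filesystem for each block device
lsblk -f /dev/sdb

# Kernel messages from the failed mount usually name the real cause
sudo dmesg | tail -n 20
```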
