One of the server nodes in my cluster crashed while it was a member of a shared VG. Some time later, one of the other nodes lost its lease. I am now left with an active LV on a node but no lock for that LV's VG:
Attempting to start the lockspace completes immediately but has no effect:
[root@node3 ~]# /usr/sbin/vgchange --lock-start --lock-opt auto sbvg_drbdpool
Starting locking. Waiting until locks are ready...
[root@node3 ~]# lvs
VG sbvg_drbdpool lock skipped: storage failed for sanlock leases
Reading VG sbvg_drbdpool without a lock.
LV                 VG            Attr       LSize     Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
csilv2eg47tjvu91qf sbvg_datalake rwi---r---  100.00g
csilvnglz2r7n6i85  sbvg_datalake rwi---r--- 1000.00g
csilvpvr6bs1lps8u  sbvg_datalake -wi-------  100.00g
poctest_00000      sbvg_drbdpool Vwi-a-tz--   50.01g  thinpool        5.74
thinpool           sbvg_drbdpool twi-aotz--  931.31g                  0.31   10.55
[root@node3 ~]#
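At this point it may help to check what lvmlockd and sanlock themselves report about the lockspace. A diagnostic sketch (exact output varies by lvmlockd/sanlock version):

```shell
# Dump lvmlockd's view of its lockspaces and held locks
lvmlockctl --info

# Show sanlock's lockspace/client status, which should reveal any
# lease-storage errors for the VG's lockspace
sanlock client status
```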
Attempting to stop the locks, in the hope that this might clear the condition, shows this:
[root@node3 ~]# /usr/sbin/vgchange --lock-stop sbvg_drbdpool
VG sbvg_drbdpool lock skipped: storage failed for sanlock leases
Reading VG sbvg_drbdpool without a lock.
VG sbvg_drbdpool stop failed: LVs must first be deactivated
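The "LVs must first be deactivated" error suggests the lockspace cannot be stopped while the thin pool and its volumes are active. A possible recovery sequence (a sketch only; deactivating takes the thin volumes offline, so run it only when they are not in use):

```shell
# Deactivate all LVs in the VG so the lockspace can be stopped
vgchange -an sbvg_drbdpool

# Then stop and restart the lockspace
vgchange --lock-stop sbvg_drbdpool
vgchange --lock-start sbvg_drbdpool
```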
Dropping the lockspace and forcing the lock type did not clear the issue either:
lvmlockctl --drop sbvg_datalake
vgchange --lock-type none --lockopt force sbvg_drbdpool
vgchange --lock-start --lock-opt auto sbvg_drbdpool
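If forcing the lock type to none ever does succeed, the sanlock leases have to be recreated before shared locking works again. A hedged sketch of that final step (assumes the VG's LVs are deactivated and the lvmlockd and sanlock daemons are running on the node):

```shell
# Recreate the sanlock leases for the VG (rewrites the internal lock LV),
# then start the lockspace again
vgchange --lock-type sanlock sbvg_drbdpool
vgchange --lock-start sbvg_drbdpool
```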