Support Thin Provisioning with IDM locks in Device Mapper #122

Open
TProhofsky opened this issue Feb 26, 2021 · 1 comment

@TProhofsky (Collaborator)

Thin pools do not use lvmlockd for allocation; instead, the device mapper allocates space from the pool. We need to investigate having the device mapper use the lock manager (lockmgr).
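
To illustrate the point, here is a minimal, purely conceptual C model (not the actual dm-thin implementation; the struct and function names below are invented) of why allocation never reaches lvmlockd: the allocator lives inside the pool and is serialized only by a host-local lock, which is where any hook into the lock manager would have to be added.

```c
/*
 * Conceptual sketch only -- not the real dm-thin code.  It models why
 * thin-pool block allocation never reaches lvmlockd: the allocator
 * lives inside the (kernel-side) pool and is serialized only by an
 * in-memory lock on the local host, so a cross-host lock manager such
 * as IDM is never consulted on the allocation path.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct thin_pool {
        pthread_mutex_t lock;      /* host-local serialization only */
        uint64_t        nr_blocks; /* size of the data area in blocks */
        uint64_t        next_free; /* next unallocated data block */
};

/* Allocate one data block from the pool; no lvmlockd/IDM call is made. */
static bool pool_alloc_data_block(struct thin_pool *pool, uint64_t *block)
{
        bool ok = false;

        pthread_mutex_lock(&pool->lock);
        if (pool->next_free < pool->nr_blocks) {
                *block = pool->next_free++;
                ok = true;
        }
        pthread_mutex_unlock(&pool->lock);
        return ok;
}

int main(void)
{
        struct thin_pool pool = {
                .lock = PTHREAD_MUTEX_INITIALIZER,
                .nr_blocks = 4,
        };
        uint64_t blk;

        while (pool_alloc_data_block(&pool, &blk))
                printf("allocated data block %llu\n",
                       (unsigned long long)blk);
        printf("pool exhausted\n");
        return 0;
}
```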

@Leo-Yan (Contributor) commented Mar 3, 2021

Some investigation into the thin pool and thin LV:

Conclusion

  • First, for the thin pool operations, I observed consistent behavior
    for both sanlock and IDM after testing them.

  • As you said, I can confirm that the thin pool and its thin LVs reuse
    the same lock; more specifically, both of them use the thin pool's
    lock.

    This is shown by the code in lib/locking/lvmlockd.c, function
    _lockd_lv_thin(); see the sketch after this list.

  • When activating a thin pool and its thin LV, although the lvm
    command "lvchange" sends requests to lvmlockd to acquire the same
    lock for both the thin pool and the thin LV, lvmlockd only sends
    commands to the IDM lock manager/drive firmware for the first
    locking operation; afterwards, it bails out directly if the lock
    has already been acquired. So, except for the first locking
    operation, the later procedure completes on the host alone, with no
    interaction with the IDM lock manager or drive firmware.

  • When deactivating the thin pool and thin LVs, the lvm command only
    sends the unlocking operation for the last user (here a user means
    the thin pool or a thin LV); this is why we can achieve paired
    results for locking and unlocking.

    I understand the locking path uses the function pool_is_active() to
    check whether the thin pool is still active; if it is, it bails out
    directly.
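
Below is a small, self-contained C model of the behavior described above. It paraphrases the control flow of _lockd_lv_thin() in lib/locking/lvmlockd.c rather than quoting it, and the struct layouts and helper names are simplified inventions. It shows the two points at once: a thin LV's lock request is redirected to its pool's lock, and an unlock is skipped while the pool is still active, so only the last user really unlocks.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

enum lv_kind { LV_THIN_POOL, LV_THIN_VOLUME };

struct lv {
        const char  *name;
        enum lv_kind kind;
        struct lv   *pool;    /* for a thin LV, its thin pool */
        bool         active;  /* is this LV currently active on the host? */
};

/*
 * Stand-in for handing the request to lvmlockd.  In the real stack,
 * lvmlockd only forwards the first acquire of a given lock to the IDM
 * lock manager / drive firmware; repeat acquires return early on the
 * host.
 */
static bool lockd_lv(struct lv *lv, const char *mode)
{
        printf("lvmlockd request: lock \"%s\" mode=%s\n", lv->name, mode);
        return true;
}

static bool pool_is_active(const struct lv *pool)
{
        return pool->active;
}

/* Thin LVs and their pool share one lock: always lock/unlock the pool. */
static bool lockd_lv_thin(struct lv *lv, const char *mode)
{
        struct lv *pool = (lv->kind == LV_THIN_VOLUME) ? lv->pool : lv;

        /* Skip the unlock while the pool is still in use. */
        if (!strcmp(mode, "un") && pool_is_active(pool))
                return true;

        return lockd_lv(pool, mode);
}

int main(void)
{
        struct lv pool = { .name = "vg/pool0", .kind = LV_THIN_POOL };
        struct lv thin = { .name = "vg/thin1", .kind = LV_THIN_VOLUME,
                           .pool = &pool };

        lockd_lv_thin(&pool, "ex"); pool.active = true;  /* activate pool    */
        lockd_lv_thin(&thin, "ex"); thin.active = true;  /* activate thin LV */

        thin.active = false;
        lockd_lv_thin(&thin, "un"); /* skipped: the pool is still active */

        pool.active = false;
        lockd_lv_thin(&pool, "un"); /* last user gone: unlock is really sent */
        return 0;
}
```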

Problem

  • Based on the testing, I don't find any logic error in the original
    LVM and sanlock code, and I can see the IDM lock manager handles
    the thin pool in the same way.

  • I think one thing we could improve is to use dedicated locks for
    the thin pool and the thin LVs. If we really move forward in this
    direction, one special consideration is the thin LV's lock, which
    should reuse the thin pool's PV list; this allows the LV's lock to
    cover as complete a PV list as possible (see the sketch after this
    list).

  • Since the thin pool and thin LVs share the same single lock, if one
    host acquires the lock for any thin LV, no other host has a chance
    to use the thin pool or its thin LVs.

  • To be honest, I am a bit concerned about whether we should move
    forward with giving every thin LV and the thin pool dedicated
    locks; the reason is that this seems to me to be a common issue
    rather than one bound to a specific locking scheme. So moving
    forward would mean changing the LVM core code rather than only the
    IDM wrapper or the IDM lock manager.
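
To make the dedicated-lock idea a bit more concrete, here is a hypothetical C sketch; nothing like this exists today in LVM, the IDM wrapper, or the IDM lock manager, and every name in it is invented. It only models the special case called out above: because a thin LV maps onto no PVs of its own, the drive (PV) list backing its dedicated lock would have to be borrowed from its thin pool.

```c
#include <stdio.h>

#define MAX_PVS 8

struct pv_list {
        int         nr;
        const char *path[MAX_PVS];
};

struct lv {
        const char      *name;
        const struct lv *pool;  /* NULL for the thin pool itself */
        struct pv_list   pvs;   /* PVs the LV directly maps onto */
};

/*
 * Choose the PV list on which the (hypothetical) dedicated IDM lock
 * for 'lv' would live.  A thin LV has no PVs of its own, so reuse the
 * thin pool's PV list.
 */
static const struct pv_list *idm_lock_pvs(const struct lv *lv)
{
        if (lv->pool)
                return &lv->pool->pvs;
        return &lv->pvs;
}

int main(void)
{
        struct lv pool = {
                .name = "vg/pool0",
                .pvs  = { .nr = 2, .path = { "/dev/sda", "/dev/sdb" } },
        };
        struct lv thin = { .name = "vg/thin1", .pool = &pool };

        const struct pv_list *pvs = idm_lock_pvs(&thin);
        printf("dedicated lock for %s would be placed on:\n", thin.name);
        for (int i = 0; i < pvs->nr; i++)
                printf("  %s\n", pvs->path[i]);
        return 0;
}
```

Whether such a change would belong in the IDM layer at all, or in the LVM core, is exactly the open question raised in the last bullet above.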
