
ZFS LocalPV e2e test cases


E2e test cases for ZFS-LocalPV

Automated test cases in the e2e pipelines:

https://gitlab.openebs.ci/openebs/e2e-nativek8s/pipelines/

  • To see the CI/E2E summary of the GitLab pipelines, [click here] and switch to the k8s tab under the stable releases category.
  • To see the test results and the README files for the test cases, [click here] and navigate to the various stages.

Pipeline cluster environment details:

Kubernetes version: v1.17.2 (1 master + 3 worker nodes)

Application used for the E2e test flow: Percona MySQL

ZFS version:

```
$ modinfo zfs | grep -iw version
version:        0.8.1-1ubuntu14.4
```

OS details:

```
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.3 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
```

  • Installation

    1. Validation of installing the ZFS-LocalPV provisioner. (A minimal install sketch follows this list.)
    2. Install the ZFSPV controller in High Availability (HA) mode and check the behaviour of the zvolume when one of the controller replicas is down.
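
A minimal sketch of how the driver can be installed for these tests; the manifest URL and namespace below are assumptions based on the upstream defaults, not taken from the pipeline configuration.

```sh
# Install the ZFS-LocalPV CSI driver (controller + node agent).
# The manifest URL is the upstream default; the pipeline may pin a specific release.
kubectl apply -f https://openebs.github.io/charts/zfs-operator.yaml

# Confirm the driver pods are running before starting the test cases.
kubectl get pods -n kube-system | grep -i zfs
```
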
  • Volume Provisioning

    1. E2e test for successful provisioning and deprovisioning of the volume. (fsTypes: zfs, xfs, ext4 and btrfs; see the sketch after this list)
    2. E2e test of custom-topology support for ZFSPV, set via the storage-class.
    3. E2e test for raw block volume support for ZFSPV.
    4. E2e test for shared mount support for zfs-localpv.
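
A minimal provisioning sketch, assuming the `zfs.csi.openebs.io` provisioner and a zpool named `zfspv-pool` already created on the worker nodes (the pool and object names are illustrative):

```sh
# StorageClass with the desired filesystem; fstype can be zfs, xfs, ext4 or btrfs.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"   # assumed pool name
  fstype: "zfs"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-zfspv
spec:
  storageClassName: openebs-zfspv
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 4Gi
EOF
```

For the raw block volume case, the PVC additionally sets `volumeMode: Block`.
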
  • Zvolume Properties

    1. Verification of zvolume properties set via the storage-class.
    2. Modification of zvolume properties after zvolume creation, i.e. at runtime. (Properties: compression, dedup and recordsize; sketched below)
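
A sketch of both paths, assuming the property names exposed as StorageClass parameters and the `ZFSVolume` (`zv`) custom resource in the `openebs` namespace; the pool name, PV name and namespace are illustrative:

```sh
# Properties applied at creation time through StorageClass parameters.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv-props
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"
  fstype: "zfs"
  compression: "on"
  dedup: "on"
  recordsize: "128k"
EOF

# Runtime modification: edit the ZFSVolume CR backing the PV; the node agent
# re-applies the changed property to the underlying dataset.
kubectl edit zv pvc-<uuid> -n openebs

# On the node, verify that the dataset picked up the new values.
zfs get compression,dedup,recordsize zfspv-pool/pvc-<uuid>
```
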
  • Volume Resize

    1. ZFS volume resize test. (File-systems: zfs, xfs and ext4; a resize sketch follows)
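
A minimal resize sketch, assuming the StorageClass was created with `allowVolumeExpansion: true` and reusing the PVC name from the provisioning sketch above:

```sh
# Grow the claim from 4Gi to 8Gi; the driver expands the dataset/zvol and the filesystem.
kubectl patch pvc csi-zfspv -p '{"spec":{"resources":{"requests":{"storage":"8Gi"}}}}'

# Check the new capacity once the expansion is reflected in the PVC status.
kubectl get pvc csi-zfspv -o jsonpath='{.status.capacity.storage}'
```
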
  • Snapshot & clone

    1. E2e test case for ZFS-LocalPV snapshot and clone. (File-systems: zfs, xfs, ext4 and btrfs; see the sketch below)
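
A snapshot-and-clone sketch, assuming the external-snapshotter v1beta1 API available on Kubernetes v1.17 and the PVC/StorageClass names from the earlier sketch:

```sh
# Snapshot the source PVC, then clone it into a new PVC via dataSource.
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: zfspv-snapclass
driver: zfs.csi.openebs.io
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: zfspv-snap
spec:
  volumeSnapshotClassName: zfspv-snapclass
  source:
    persistentVolumeClaimName: csi-zfspv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfspv-clone
spec:
  storageClassName: openebs-zfspv
  dataSource:
    name: zfspv-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 4Gi
EOF
```
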
  • Backup & Restore

    1. Create a backup of the namespace after dumping some data into the application, and check data consistency after the restore.
    2. Create a backup of one namespace, restore it into another namespace, repeat this for 3-4 different namespaces, and check that zfs-localpv behaves correctly and the restores complete successfully.
    3. After taking a backup of multiple namespaces in a single namespace, deprovision the base volumes, restart the zfs-localpv driver components, and then restore the backup.
    4. If the backup is taken with volume snapshots, the restore points to the whole data instead of only the snapshot data. #issue
    5. Create schedules while continuously dumping data into the application; the restore should contain only the data captured by that particular scheduled backup.
    6. Create a backup of a namespace in which multiple applications are running on different nodes, and verify that backup and restore of this namespace succeed. (A Velero command sketch follows this list.)
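
These cases are typically driven with Velero; a minimal command sketch, assuming Velero is installed with a volume-snapshot provider configured for ZFS-LocalPV (application and namespace names are illustrative):

```sh
# Back up the application namespace, including volume snapshots.
velero backup create percona-backup --include-namespaces=app-percona --snapshot-volumes

# Restore into a different namespace to cover the namespace-mapping cases.
velero restore create percona-restore --from-backup=percona-backup \
  --namespace-mappings app-percona:app-percona-restored

# Scheduled backups for the schedule-based restore case (every 5 minutes).
velero schedule create percona-schedule --schedule="*/5 * * * *" \
  --include-namespaces=app-percona --snapshot-volumes
```
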
  • Infra-chaos

    1. Test case for restarting the Docker runtime on the node where the volume is provisioned.
    2. Test case for restarting the kubelet service on the node where the volume is provisioned. (See the restart sketch below.)
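
A minimal chaos sketch, assuming systemd-managed services on the target node (the grep patterns are illustrative):

```sh
# On the node where the volume is provisioned, restart the runtime and kubelet.
sudo systemctl restart docker
sudo systemctl restart kubelet

# From the control plane, verify the driver pods and the application recover.
kubectl get pods --all-namespaces -o wide | grep -Ei 'zfs|percona'
```
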
  • Upgrade Testing

    1. Upgrade the ZFS-LocalPV components and verify that existing volumes are not impacted.
    2. Provision a new volume after upgrading the zfspv components. (See the upgrade sketch below.)
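
A sketch of the upgrade step, assuming the same operator manifest used for installation; in the pipeline this would point at the specific target release:

```sh
# Re-apply the operator manifest for the target release; existing ZFSVolume CRs
# and already-provisioned volumes should survive the rollout untouched.
kubectl apply -f https://openebs.github.io/charts/zfs-operator.yaml

# Watch the controller and node-agent pods roll to the new image.
kubectl get pods -n kube-system | grep -i zfs
```
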
  • Manual test cases

  1. Check the parent volume; it should not be deleted while a volume snapshot is present.
  2. Test case for the scheduler, to verify that it does volume-count-based scheduling. (See the sketch below.)
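
A sketch of how these manual checks can be run, assuming the ZFSVolume (`zv`) and ZFSSnapshot custom resources live in the `openebs` namespace; the resource and field names reflect my understanding of the driver's CRDs and may differ by release:

```sh
# 1. Delete the source PVC while its snapshot still exists; the backing
#    ZFSVolume should not be removed until the snapshot is deleted.
kubectl delete pvc csi-zfspv
kubectl get zfssnapshot -n openebs
kubectl get zv -n openebs

# 2. Volume-count based scheduling: create several PVCs from the same
#    StorageClass and check how the volumes are spread across nodes.
kubectl get zv -n openebs -o custom-columns=NAME:.metadata.name,NODE:.spec.ownerNodeName
```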