
Add mocked disk device to test pci passthrough #3489

Merged: 1 commit merged into SUSE-Cloud:master on Jul 23, 2019

Conversation

@kwu83tw (Contributor) commented on Jun 15, 2019

Add mocked disk device to test pci passthrough

We need q35 as the machine type in order to support IOMMU, and we need to tweak the PCIe controllers to place our virtio device properly (see the libvirt XML sketch below).

  1. Modify the node config to accommodate the IOMMU-related section
  2. Modify the libvirt configuration for PCI passthrough
  3. Add the PCI-passthrough-related node template
  4. Expose the option want_pci_passthrough
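
For illustration, here is a rough sketch (not the actual mkcloud template) of the shape of libvirt domain XML this is aiming for: the q35 machine type, a virtual IOMMU, and the extra virtio disk on its own pcie-root-port so it can later be handed to the nested VM. The domain name, image path and controller index are made up, and the fragment omits memory, vcpu and the other elements a real domain needs:

    <domain type='kvm'>
      <name>compute-node</name>                        <!-- illustrative name -->
      <os>
        <type arch='x86_64' machine='q35'>hvm</type>   <!-- q35 is required for a vIOMMU -->
      </os>
      <features>
        <ioapic driver='qemu'/>                        <!-- needed by the intel iommu model -->
      </features>
      <devices>
        <!-- virtual IOMMU so the node VM can itself do PCI passthrough -->
        <iommu model='intel'>
          <driver intremap='on' caching_mode='on'/>
        </iommu>
        <!-- dedicated root port; devices on controller index N get PCI bus N -->
        <controller type='pci' index='8' model='pcie-root-port'/>
        <!-- the mocked disk that will later be passed through to the nested VM -->
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/pci-passthrough-test.qcow2'/>
          <target dev='vdb' bus='virtio'/>
          <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
        </disk>
      </devices>
    </domain>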

@kwu83tw (Contributor, Author) commented on Jun 15, 2019

This is the Jira card for this task: https://jira.suse.com/browse/SOC-9304

@JanZerebecki (Contributor) left a comment:

Please fix the failing checks.

JanZerebecki previously approved these changes on Jun 19, 2019

We need to specify q35 as the machine type in order to support IOMMU, and to tweak the PCIe controllers to place our virtio device properly.
  1. Add option for PCI passthrough (a bash sketch of this follows below)
  2. Add script to construct the node template
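
A minimal sketch of what exposing such an option usually looks like in mkcloud's bash code; the variable name comes from this PR, while the default and the guarded block below are assumptions rather than the actual implementation:

    # off by default so existing mkcloud runs are unaffected (assumed default)
    : ${want_pci_passthrough:=}

    if [[ $want_pci_passthrough = 1 ]]; then
        # switch the node definition to the q35 machine type and render the
        # passthrough variant of the node template (names are illustrative)
        machine_type=q35
        node_template=compute-node-pci-passthrough.xml
    fi
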
@jgrassler (Contributor) commented:

Looks like there's trouble with the CI jobs (unretrievable zypper repositories - this looks unrelated to this pull request). I restarted suse/mkcloud/stable normally and suse/mkcloud with a bit of a twist: I put an export want_pci_passthrough=1 in the custom_settings field for the suse/mkcloud job. This should show us (a) whether the code causes any regressions when want_pci_passthrough is not passed, and (b) whether the new code is actually exercised in the CI job.
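
For anyone trying the same thing outside Jenkins, the local equivalent would be roughly the following; only want_pci_passthrough comes from this PR, and the mkcloud step shown is just a common invocation that may differ for your setup:

    export want_pci_passthrough=1
    ./mkcloud plain    # or whatever combination of mkcloud steps you normally run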

@aspiers (Contributor) left a comment:

Do I correctly understand that this corresponds to this part of the Jira acceptance criteria?

    A sensible, Crowbar based test automation is available to minimally test the functionality in a virtualized environment. A virtio based device we pass through to a nested VM is enough: we just need to make sure PCI pass-through in general works - no need to pass a GPU through.

IOW, your compute node is a VM set up via mkcloud, and this PR tweaks mkcloud to do PCI pass-through from the physical host to that VM - right?

Based on a quick scan through, it all looks fine to me, although I'm not familiar enough with the exact details of what libvirt config changes are required for PCI pass-through in order to be able to review this thoroughly. (Although on a related note, I am getting gradually more familiar with that area of the nova code.)

But I have one remaining question: if this only passes through from the physical host to the compute node, don't you also need to test pass-through from the compute node to the (nested) guest? IOW, double pass-through by modifying the OpenStack functional tests to boot a nested VM with pass-through enabled?

@kwu83tw
Copy link
Contributor Author

kwu83tw commented Jul 22, 2019

    IOW, your compute node is a VM set up via mkcloud, and this PR tweaks mkcloud to do PCI pass-through from the physical host to that VM - right?

This PR does the preparatory work of configuring the node VM with the proper machine type and an additional virtio disk. The virtio disk on the compute node (L0) will later be passed through into its nested VM (L1). We're not going to pass anything through from the physical host to the VM.

As for the testing part (testing PCI passthrough from the L0 VM to the L1 VM), that will be handled in another PR: 3543.
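
For context, exercising that L0 to L1 pass-through later would presumably use the standard nova PCI passthrough configuration, along these lines; the PCI address, IDs and names below are illustrative only, and the actual test automation lives in the follow-up PR 3543:

    # 1. On the compute node (L0), whitelist the virtio disk's PCI address in
    #    nova.conf (1af4:1001 is the virtio-blk vendor/product; address made up):
    #      [pci]
    #      passthrough_whitelist = {"address": "0000:08:00.0"}
    #      alias = {"name": "testdev", "vendor_id": "1af4", "product_id": "1001", "device_type": "type-PCI"}

    # 2. Request the aliased device via a flavor and boot a nested VM (L1) with it:
    openstack flavor create --ram 1024 --disk 10 --vcpus 1 pci-test
    openstack flavor set pci-test --property "pci_passthrough:alias"="testdev:1"
    openstack server create --flavor pci-test --image "$IMAGE" --network "$NETWORK" pci-test-vm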

@JanZerebecki merged commit a70f26a into SUSE-Cloud:master on Jul 23, 2019