[WIP] Add VirtIO vhost-blk #7
Draft
LKomaryanskiy wants to merge 3 commits into xen-troops:v7.0.0-xt
Conversation
Although QEMU virtio is quite fast, there is still some room for
improvement. Disk latency can be reduced if virtio-blk requests are
handled in the host kernel instead of being passed to QEMU. This patch
adds a vhost-blk backend which sets up the vhost-blk kernel module to
process requests.
Test setup and results:
fio --direct=1 --rw=randread --bs=4k --ioengine=libaio --iodepth=128
QEMU drive options: cache=none
filesystem: xfs
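For reference, the full benchmark invocation implied by the options above can be sketched as follows (the job name, target device, and job count are placeholders, not stated in the PR):

```shell
# Sketch of the benchmark command from the fio options quoted above.
# --name, --filename and --numjobs are assumptions; point --filename
# at the virtio-blk (or vhost-blk) disk inside the guest.
FIO_CMD="fio --name=bench --filename=/dev/vdb --numjobs=1 \
  --direct=1 --rw=randread --bs=4k --ioengine=libaio --iodepth=128"
echo "$FIO_CMD"
# For the randwrite rows, replace --rw=randread with --rw=randwrite.
```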
SSD:
|                | randread, IOPS | randwrite, IOPS |
|----------------|----------------|-----------------|
| Host           | 95.8k          | 85.3k           |
| QEMU virtio    | 61.5k          | 79.9k           |
| QEMU vhost-blk | 95.6k          | 84.3k           |
RAMDISK (vq == vcpu == numjobs):
|                   | randread, IOPS | randwrite, IOPS | Notes                |
|-------------------|----------------|-----------------|----------------------|
| virtio, 1vcpu     | 133k           | 133k            |                      |
| virtio, 2vcpu     | 305k           | 306k            |                      |
| virtio, 4vcpu     | 310k           | 298k            |                      |
| virtio, 8vcpu     | 271k           | 252k            |                      |
| vhost-blk, 1vcpu  | 110k           | 113k            |                      |
| vhost-blk, 2vcpu  | 247k           | 252k            |                      |
| vhost-blk, 4vcpu  | 558k           | 556k            |                      |
| vhost-blk, 8vcpu  | 576k           | 575k            | single kernel thread |
| vhost-blk, 8vcpu  | 803k           | 779k            | two kernel threads   |
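As a quick sanity check, the vhost-blk/virtio randread ratios from the table above can be recomputed (a small awk sketch; the numbers are the table's IOPS values in thousands):

```shell
# Ratios of vhost-blk (single kernel thread) to plain virtio randread
# IOPS, per vcpu count, using the values from the RAMDISK table above.
awk 'BEGIN {
  split("133 305 310 271", virtio);  # virtio,    1/2/4/8 vcpus
  split("110 247 558 576", vhost);   # vhost-blk, 1/2/4/8 vcpus
  split("1 2 4 8", vcpus);
  for (i = 1; i <= 4; i++)
    printf "%s vcpu(s): vhost-blk/virtio = %.2fx\n",
           vcpus[i], vhost[i] / virtio[i];
}'
```

vhost-blk only pulls ahead once enough vcpus/vqueues are in play (about 1.80x at 4 vcpus, 2.13x at 8), and the two-kernel-thread run at 8 vcpus raises that further (803k vs 271k, roughly 2.96x).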
v2:
- fix g_new() to g_new0() for vq allocations
- add multithreading support
- fix last argument in vhost_dev_init()
- kick all vqueues in start, not only the first one
Signed-off-by: Andrey Zhadchenko <andrey.zhadchenko@virtuozzo.com>
Message-Id: <20221013203130.690327-2-andrey.zhadchenko@virtuozzo.com>
vhost-blk: The original patch uses a different version of some functions, with a smaller number of arguments; a logging function is also missing a parameter. This patch fixes these issues.
Signed-off-by: Leonid Komarianskyi <leonid_komarianskyi@epam.com>
…for building: The current dependency breaks QEMU usermode builds, as it tries to compile vhost-blk for usermode and as a result fails.
Signed-off-by: Leonid Komarianskyi <leonid_komarianskyi@epam.com>
Deedone pushed a commit to Deedone/qemu that referenced this pull request on Apr 11, 2025:
Add the reproducer from https://gitlab.com/qemu-project/qemu/-/issues/339

Without the previous commit, when running 'make check-qtest-i386' with
QEMU configured with '--enable-sanitizers' we get:

    ==4028352==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x619000062a00 at pc 0x5626d03c491a bp 0x7ffdb4199410 sp 0x7ffdb4198bc0
    READ of size 786432 at 0x619000062a00 thread T0
        #0 0x5626d03c4919 in __asan_memcpy (qemu-system-i386+0x1e65919)
        #1 0x5626d1c023cc in flatview_write_continue softmmu/physmem.c:2787:13
        #2 0x5626d1bf0c0f in flatview_write softmmu/physmem.c:2822:14
        #3 0x5626d1bf0798 in address_space_write softmmu/physmem.c:2914:18
        #4 0x5626d1bf0f37 in address_space_rw softmmu/physmem.c:2924:16
        #5 0x5626d1bf14c8 in cpu_physical_memory_rw softmmu/physmem.c:2933:5
        #6 0x5626d0bd5649 in cpu_physical_memory_write include/exec/cpu-common.h:82:5
        #7 0x5626d0bd0a07 in i8257_dma_write_memory hw/dma/i8257.c:452:9
        #8 0x5626d09f825d in fdctrl_transfer_handler hw/block/fdc.c:1616:13
        #9 0x5626d0a048b4 in fdctrl_start_transfer hw/block/fdc.c:1539:13
        #10 0x5626d09f4c3e in fdctrl_write_data hw/block/fdc.c:2266:13
        #11 0x5626d09f22f7 in fdctrl_write hw/block/fdc.c:829:9
        #12 0x5626d1c20bc5 in portio_write softmmu/ioport.c:207:17

    0x619000062a00 is located 0 bytes to the right of 512-byte region [0x619000062800,0x619000062a00)
    allocated by thread T0 here:
        #0 0x5626d03c66ec in posix_memalign (qemu-system-i386+0x1e676ec)
        #1 0x5626d2b988d4 in qemu_try_memalign util/oslib-posix.c:210:11
        #2 0x5626d2b98b0c in qemu_memalign util/oslib-posix.c:226:27
        #3 0x5626d09fbaf0 in fdctrl_realize_common hw/block/fdc.c:2341:20
        #4 0x5626d0a150ed in isabus_fdc_realize hw/block/fdc-isa.c:113:5
        #5 0x5626d2367935 in device_set_realized hw/core/qdev.c:531:13

    SUMMARY: AddressSanitizer: heap-buffer-overflow (qemu-system-i386+0x1e65919) in __asan_memcpy

    Shadow bytes around the buggy address:
      0x0c32800044f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
      0x0c3280004500: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x0c3280004510: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x0c3280004520: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x0c3280004530: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    =>0x0c3280004540:[fa]fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
      0x0c3280004550: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
      0x0c3280004560: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
      0x0c3280004570: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
      0x0c3280004580: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
      0x0c3280004590: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
    Shadow byte legend (one shadow byte represents 8 application bytes):
      Addressable:       00
      Heap left redzone: fa
      Freed heap region: fd
    ==4028352==ABORTING

[ kwolf: Added snapshot=on to prevent write file lock failure ]

Reported-by: Alexander Bulekov <alxndr@bu.edu>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alexander Bulekov <alxndr@bu.edu>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Upstream-Status: Backport [46609b9]
CVE: CVE-2021-3507
Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
%% original patch: CVE-2021-3507_2.patch
LKomaryanskiy pushed a commit that referenced this pull request on Feb 27, 2026:
ASAN spotted a leaking string in machine_set_loadparm():
Direct leak of 9 byte(s) in 1 object(s) allocated from:
#0 0x560ffb5bb379 in malloc ../projects/compiler-rt/lib/asan/asan_malloc_linux.cpp:69:3
#1 0x7f1aca926518 in g_malloc ../glib/gmem.c:106
#2 0x7f1aca94113e in g_strdup ../glib/gstrfuncs.c:364
#3 0x560ffc8afbf9 in qobject_input_type_str ../qapi/qobject-input-visitor.c:542:12
#4 0x560ffc8a80ff in visit_type_str ../qapi/qapi-visit-core.c:349:10
#5 0x560ffbe6053a in machine_set_loadparm ../hw/s390x/s390-virtio-ccw.c:802:10
#6 0x560ffc0c5e52 in object_property_set ../qom/object.c:1450:5
#7 0x560ffc0d4175 in object_property_set_qobject ../qom/qom-qobject.c:28:10
#8 0x560ffc0c6004 in object_property_set_str ../qom/object.c:1458:15
#9 0x560ffbe2ae60 in update_machine_ipl_properties ../hw/s390x/ipl.c:569:9
#10 0x560ffbe2aa65 in s390_ipl_update_diag308 ../hw/s390x/ipl.c:594:5
#11 0x560ffbdee132 in handle_diag_308 ../target/s390x/diag.c:147:9
#12 0x560ffbebb956 in helper_diag ../target/s390x/tcg/misc_helper.c:137:9
#13 0x7f1a3c51c730 (/memfd:tcg-jit (deleted)+0x39730)
Cc: qemu-stable@nongnu.org
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Message-ID: <20250509174938.25935-1-farosas@suse.de>
Fixes: 1fd396e ("s390x: Register TYPE_S390_CCW_MACHINE properties as class properties")
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
(cherry picked from commit bdf12f2)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
This is a rebase of the VirtIO vhost-blk patch, with fixes, onto the v7.0.0-xt branch. The original patch can be found here:
https://patchew.org/QEMU/20221013203130.690327-1-andrey.zhadchenko@virtuozzo.com/
This PR is still a WIP because, when the VirtIO vhost-blk feature is used on Gen3, DomA freezes during storage operations.