[LTS 9.2] CVE-2023-46813, CVE-2023-0597 #614
Open

pvts-mat wants to merge 8 commits into ctrliq:ciqlts9_2 from pvts-mat:ciqlts9_2-CVE-batch-8
jira VULN-6719
cve CVE-2023-46813
commit-author Borislav Petkov (AMD) <[email protected]>
commit a37cd2a

A virt scenario can be constructed where MMIO memory can be user memory. When
that happens, a race condition opens between when the hardware raises the #VC
and when the #VC handler gets to emulate the instruction.

If the MOVS is replaced with a MOVS accessing kernel memory in that small race
window, then a write to kernel memory happens as the access checks are not
done at emulation time.

Disable MMIO emulation in user mode temporarily until a sensible use case
appears and justifies properly handling the race window.

Fixes: 0118b60 ("x86/sev-es: Handle MMIO String Instructions")
Reported-by: Tom Dohrmann <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Tested-by: Tom Dohrmann <[email protected]>
Cc: <[email protected]>
(cherry picked from commit a37cd2a)
Signed-off-by: Marcin Wcisło <[email protected]>
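The change itself is essentially a one-line guard in the #VC MMIO emulation path (arch/x86/kernel/sev.c upstream). A minimal sketch, with names following the upstream patch and the surrounding decode/emulation logic elided:

static enum es_result vc_handle_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
{
    /*
     * Refuse MMIO emulation for user-mode accesses entirely: this
     * closes the window in which a user-controlled MOVS operand
     * could be redirected at kernel memory between the #VC being
     * raised and the instruction being emulated.
     */
    if (user_mode(ctxt->regs))
        return ES_UNSUPPORTED;

    /* ... instruction decode and kernel-mode MMIO emulation ... */
}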
jira VULN-6719
cve CVE-2023-46813
commit-author Joerg Roedel <[email protected]>
commit b9cb9c4

Check the IO permission bitmap (if present) before emulating IOIO #VC
exceptions for user-space. These permissions are checked by hardware already
before the #VC is raised, but due to the VC-handler decoding race they need
to be checked again in software.

Fixes: 25189d0 ("x86/sev-es: Add support for handling IOIO exceptions")
Reported-by: Tom Dohrmann <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Tested-by: Tom Dohrmann <[email protected]>
Cc: <[email protected]>
(cherry picked from commit b9cb9c4)
Signed-off-by: Marcin Wcisło <[email protected]>
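A condensed sketch of the software re-check, shaped after the upstream helper vc_ioio_check(); fault-injection details are simplified:

static enum es_result vc_ioio_check(struct es_em_ctxt *ctxt, u16 port, size_t size)
{
    if (user_mode(ctxt->regs)) {
        struct io_bitmap *iobm = current->thread.io_bitmap;
        size_t idx;

        /* No I/O bitmap means no user-space port access at all */
        if (!iobm)
            goto fault;

        /* A set bit in the bitmap denies access to that port */
        for (idx = port; idx < port + size; ++idx) {
            if (test_bit(idx, iobm->bitmap))
                goto fault;
        }
    }
    return ES_OK;

fault:
    /* Inject a #GP, as hardware would have done for a denied port */
    ctxt->fi.vector     = X86_TRAP_GP;
    ctxt->fi.error_code = 0;
    return ES_EXCEPTION;
}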
jira VULN-6719
cve CVE-2023-46813
commit-author Joerg Roedel <[email protected]>
commit 63e44bc

Check the memory operand of INS/OUTS before emulating the instruction. The
#VC exception can get raised from user-space, but the memory operand can be
manipulated to access kernel memory before the emulation actually begins and
after the exception handler has run.

[ bp: Massage commit message. ]

Fixes: 597cfe4 ("x86/boot/compressed/64: Setup a GHCB-based VC Exception handler")
Reported-by: Tom Dohrmann <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Cc: <[email protected]>
(cherry picked from commit 63e44bc)
Signed-off-by: Marcin Wcisło <[email protected]>
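A sketch of the operand check along the lines of the upstream helper; the caller wiring into the string-instruction emulation paths is elided:

static enum es_result vc_insn_string_check(struct es_em_ctxt *ctxt,
                                           unsigned long address, bool write)
{
    /*
     * A user-space INS/OUTS whose memory operand points into kernel
     * address space gets a page fault injected instead of being
     * emulated, so the operand cannot be flipped to a kernel address
     * in the decode/emulate race window.
     */
    if (user_mode(ctxt->regs) && fault_in_kernel_space(address)) {
        ctxt->fi.vector     = X86_TRAP_PF;
        ctxt->fi.error_code = X86_PF_USER;
        ctxt->fi.cr2        = address;
        if (write)
            ctxt->fi.error_code |= X86_PF_WRITE;
        return ES_EXCEPTION;
    }
    return ES_OK;
}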
jira VULN-8044
cve-pre CVE-2023-0597
commit-author Andrey Ryabinin <[email protected]>
commit 3f148f3

KASAN maps shadow for the entire CPU-entry-area:
  [CPU_ENTRY_AREA_BASE, CPU_ENTRY_AREA_BASE + CPU_ENTRY_AREA_MAP_SIZE]

This will explode once the per-cpu entry areas are randomized, since it will
increase CPU_ENTRY_AREA_MAP_SIZE to 512 GB and KASAN fails to allocate shadow
for such a big area. Fix this by allocating KASAN shadow only for the really
used cpu entry area addresses mapped by cea_map_percpu_pages().

Thanks to the 0day folks for finding and reporting this to be an issue.

[ dhansen: tweak changelog since this will get committed before peterz's
  actual cpu-entry-area randomization ]

Signed-off-by: Andrey Ryabinin <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Tested-by: Yujie Liu <[email protected]>
Cc: kernel test robot <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
(cherry picked from commit 3f148f3)
Signed-off-by: Marcin Wcisło <[email protected]>
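A sketch of the on-demand shadow population, with names as in the upstream patch; the helper lives in arch/x86/mm/kasan_init_64.c and is called from cea_map_percpu_pages() for just the range being mapped:

void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid)
{
    unsigned long shadow_start, shadow_end;

    /* Translate the mapped virtual range into its KASAN shadow range */
    shadow_start = (unsigned long)kasan_mem_to_shadow(va);
    shadow_start = round_down(shadow_start, PAGE_SIZE);
    shadow_end = (unsigned long)kasan_mem_to_shadow(va + size);
    shadow_end = round_up(shadow_end, PAGE_SIZE);

    /* Allocate shadow pages only for this range, not the whole window */
    kasan_populate_shadow(shadow_start, shadow_end, nid);
}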
jira VULN-8044
cve CVE-2023-0597
commit-author Peter Zijlstra <[email protected]>
commit 97e3d26
upstream-diff Included `linux/prandom.h' in `arch/x86/mm/cpu_entry_area.c'
  directly (compilation fails without it)

Seth found that the CPU-entry-area, the piece of per-cpu data that is mapped
into the userspace page-tables for kPTI, is not subject to any randomization,
irrespective of kASLR settings.

On x86_64 a whole P4D (512 GB) of virtual address space is reserved for this
structure, which is plenty large enough to randomize things a little. As such,
use a straightforward randomization scheme that avoids duplicates to spread
the existing CPUs over the available space.

[ bp: Fix le build. ]

Reported-by: Seth Jenkins <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
(cherry picked from commit 97e3d26)
Signed-off-by: Marcin Wcisło <[email protected]>
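A condensed sketch of the randomization scheme, names following the upstream patch, including the linux/prandom.h include that the upstream-diff note above refers to:

/* In arch/x86/mm/cpu_entry_area.c; the direct include is the backport's
 * only deviation from the upstream patch. */
#include <linux/prandom.h>

static DEFINE_PER_CPU_READ_MOSTLY(unsigned long, _cea_offset);

static __always_inline unsigned int cea_offset(unsigned int cpu)
{
    return per_cpu(_cea_offset, cpu);
}

static __init void init_cea_offsets(void)
{
    unsigned int max_cea = (CPU_ENTRY_AREA_MAP_SIZE - PAGE_SIZE) / CPU_ENTRY_AREA_SIZE;
    unsigned int i, j;

    for_each_possible_cpu(i) {
        unsigned int cea;
again:
        cea = prandom_u32_max(max_cea);

        /* Reject duplicates so no two CPUs share an entry-area slot */
        for_each_possible_cpu(j) {
            if (cea_offset(j) == cea)
                goto again;
            if (i == j)
                break;
        }
        per_cpu(_cea_offset, i) = cea;
    }
}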
jira VULN-8044
cve-bf CVE-2023-0597
commit-author Sean Christopherson <[email protected]>
commit 80d72a8

Recompute the physical address for each per-CPU page in the CPU entry area;
a recent commit inadvertently modified cea_map_percpu_pages() such that every
PTE is mapped to the physical address of the first page.

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
(cherry picked from commit 80d72a8)
Signed-off-by: Marcin Wcisło <[email protected]>
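The fix reduces to computing the physical address inside the per-page loop again; a sketch of the corrected cea_map_percpu_pages(), with the KASAN shadow call elided:

static void __init
cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
{
    /* ... KASAN shadow population for this range ... */

    /*
     * Recompute the backing physical address for every page; deriving
     * it once from the first page mapped all PTEs to that single page.
     */
    for (; pages; pages--, cea_vaddr += PAGE_SIZE, ptr += PAGE_SIZE)
        cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
}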
jira VULN-8044
cve-bf CVE-2023-0597
commit-author Sean Christopherson <[email protected]>
commit 9765014

Populate a KASAN shadow for the entire possible per-CPU range of the CPU
entry area instead of requiring that each individual chunk map a shadow.
Mapping shadows individually is error prone, e.g. the per-CPU GDT mapping
was left behind, which can lead to not-present page faults during KASAN
validation if the kernel performs a software lookup into the GDT. The DS
buffer is also likely affected.

The motivation for mapping the per-CPU areas on-demand was to avoid mapping
the entire 512GiB range that's reserved for the CPU entry area; shaving a few
bytes by not creating shadows for potentially unused memory was not a goal.

The bug is most easily reproduced by doing a sigreturn with a garbage CS in
the sigcontext, e.g.

  int main(void)
  {
      struct sigcontext regs;

      syscall(__NR_mmap, 0x1ffff000ul, 0x1000ul, 0ul, 0x32ul, -1, 0ul);
      syscall(__NR_mmap, 0x20000000ul, 0x1000000ul, 7ul, 0x32ul, -1, 0ul);
      syscall(__NR_mmap, 0x21000000ul, 0x1000ul, 0ul, 0x32ul, -1, 0ul);

      memset(&regs, 0, sizeof(regs));
      regs.cs = 0x1d0;
      syscall(__NR_rt_sigreturn);
      return 0;
  }

to coerce the kernel into doing a GDT lookup to compute CS.base when reading
the instruction bytes on the subsequent #GP to determine whether or not the
#GP is something the kernel should handle, e.g. to fixup UMIP violations or
to emulate CLI/STI for IOPL=3 applications.

  BUG: unable to handle page fault for address: fffffbc8379ace00
  #PF: supervisor read access in kernel mode
  #PF: error_code(0x0000) - not-present page
  PGD 16c03a067 P4D 16c03a067 PUD 15b990067 PMD 15b98f067 PTE 0
  Oops: 0000 [#1] PREEMPT SMP KASAN
  CPU: 3 PID: 851 Comm: r2 Not tainted 6.1.0-rc3-next-20221103+ #432
  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
  RIP: 0010:kasan_check_range+0xdf/0x190
  Call Trace:
   <TASK>
   get_desc+0xb0/0x1d0
   insn_get_seg_base+0x104/0x270
   insn_fetch_from_user+0x66/0x80
   fixup_umip_exception+0xb1/0x530
   exc_general_protection+0x181/0x210
   asm_exc_general_protection+0x22/0x30
  RIP: 0003:0x0
  Code: Unable to access opcode bytes at 0xffffffffffffffd6.
  RSP: 0003:0000000000000000 EFLAGS: 00000202
  RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00000000000001d0
  RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
  RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
  R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
  R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
   </TASK>

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Reported-by: [email protected]
Suggested-by: Andrey Ryabinin <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
(cherry picked from commit 9765014)
Signed-off-by: Marcin Wcisło <[email protected]>
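A sketch of the fix, assuming the upstream shape: the shadow is populated once per CPU for the whole entry-area range in setup_cpu_entry_area(), so individual mappings no longer have to (and no longer can forget to) map their own shadow chunks:

static void __init setup_cpu_entry_area(unsigned int cpu)
{
    struct cpu_entry_area *cea = get_cpu_entry_area(cpu);

    /*
     * Cover the CPU's entire entry-area range up front; with the
     * randomized layout only the used slots cost shadow memory, so
     * per-chunk on-demand mapping buys nothing but fragility.
     */
    kasan_populate_shadow_for_vaddr(cea, CPU_ENTRY_AREA_SIZE,
                                    early_cpu_to_node(cpu));

    /* ... GDT, entry stack, TSS, DS buffer mappings as before ... */
}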
jira VULN-8044
cve-bf CVE-2023-0597
commit-author Michal Koutný <[email protected]>
commit a3f547a

The commit 97e3d26 ("x86/mm: Randomize per-cpu entry area") fixed an omission
of KASLR on CPU entry areas. It doesn't take into account KASLR switches
though, which may result in unintended non-determinism when a user wants to
avoid it (e.g. debugging, benchmarking).

When KASLR is turned off, generate only a single combination of CPU entry
area offsets: the linear array that existed prior to randomization.

Since we have 3f148f3 ("x86/kasan: Map shadow for percpu pages on demand")
and followups, we can use the more relaxed guard kaslr_enabled() (in contrast
to kaslr_memory_enabled()).

Fixes: 97e3d26 ("x86/mm: Randomize per-cpu entry area")
Signed-off-by: Michal Koutný <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/all/20230306193144.24605-1-mkoutny%40suse.com
(cherry picked from commit a3f547a)
Signed-off-by: Marcin Wcisło <[email protected]>
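A sketch of the guard, assuming the upstream placement at the top of init_cea_offsets(): with KASLR disabled the offsets degenerate to the deterministic linear layout:

static __init void init_cea_offsets(void)
{
    unsigned int i;

    /*
     * KASLR off: keep the pre-randomization linear layout so runs
     * are reproducible for debugging and benchmarking.
     */
    if (!kaslr_enabled()) {
        for_each_possible_cpu(i)
            per_cpu(_cea_offset, i) = i;
        return;
    }

    /* ... randomized, duplicate-free offset selection as before ... */
}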
[LTS 9.2]
CVE-2023-46813 VULN-6719
CVE-2023-0597 VULN-8044
Commits

CVE-2023-46813
- 3e1baae
- c90c4a4
- 72de94d

CVE-2023-0597
- d468641
- 198ff55
- c051a40
- 7f0398a
- c861f7a
The solution of CVE-2023-0597 on LTS 9.2 is basically the same as on LTS 8.6, with two differences:

1) One of the LTS 8.6 commits was not picked for the solution because it's already present in the ciqlts9_2 history.

2) Of the upstream-diff for 97e3d26 (0) only the inclusion of linux/prandom.h remained, because all the missing prerequisites the ciqlts8_6 patch was manoeuvring around are conveniently present in the ciqlts9_2 version.

For comparison, refer to the table from the CVE-2023-0597 PR for LTS 8.6, augmented with the ciqlts9_2 column:

kABI check: passed
Boot test: passed
boot-test.log

Kselftests: passed relative

Reference: kselftests–ciqlts9_2–run1.log
Patch: kselftests–ciqlts9_2-CVE-batch-8–run1.log

Comparison: The test results for the reference and the patch are the same.