ZFS corruption related to snapshots post-2.0.x upgrade #12014


Closed
jgoerzen opened this issue May 8, 2021 · 293 comments · Fixed by #17340
Labels
Component: Encryption ("native encryption" feature) · Status: Triage Needed (new issue which needs to be triaged) · Type: Defect (incorrect behavior, e.g. crash, hang)

Comments

@jgoerzen

jgoerzen commented May 8, 2021

System information

Type Version/Name
Distribution Name Debian
Distribution Version Buster
Linux Kernel 5.10.0-0.bpo.5-amd64
Architecture amd64
ZFS Version 2.0.3-1~bpo10+1
SPL Version 2.0.3-1~bpo10+1

Describe the problem you're observing

Since upgrading to 2.0.x and enabling crypto, every week or so, I start to have issues with my zfs send/receive-based backups. Upon investigating, I will see output like this:

zpool status -v
  pool: rpool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
	corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
	entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 00:03:37 with 0 errors on Mon May  3 16:58:33 2021
config:

	NAME         STATE     READ WRITE CKSUM
	rpool        ONLINE       0     0     0
	  nvme0n1p7  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        <0xeb51>:<0x0>

Of note, the <0xeb51> is sometimes a snapshot name; if I zfs destroy the snapshot, it is replaced by this tag.

Bug #11688 implies that zfs destroy on the snapshot and then a scrub will fix it. For me, it did not. If I run a scrub without rebooting after seeing this kind of zpool status output, I get the following in very short order, and the scrub (and eventually much of the system) hangs:

[393801.328126] VERIFY3(0 == remove_reference(hdr, NULL, tag)) failed (0 == 1)
[393801.328129] PANIC at arc.c:3790:arc_buf_destroy()
[393801.328130] Showing stack for process 363
[393801.328132] CPU: 2 PID: 363 Comm: z_rd_int Tainted: P     U     OE     5.10.0-0.bpo.5-amd64 #1 Debian 5.10.24-1~bpo10+1
[393801.328133] Hardware name: Dell Inc. XPS 15 7590/0VYV0G, BIOS 1.8.1 07/03/2020
[393801.328134] Call Trace:
[393801.328140]  dump_stack+0x6d/0x88
[393801.328149]  spl_panic+0xd3/0xfb [spl]
[393801.328153]  ? __wake_up_common_lock+0x87/0xc0
[393801.328221]  ? zei_add_range+0x130/0x130 [zfs]
[393801.328225]  ? __cv_broadcast+0x26/0x30 [spl]
[393801.328275]  ? zfs_zevent_post+0x238/0x2a0 [zfs]
[393801.328302]  arc_buf_destroy+0xf3/0x100 [zfs]
[393801.328331]  arc_read_done+0x24d/0x490 [zfs]
[393801.328388]  zio_done+0x43d/0x1020 [zfs]
[393801.328445]  ? zio_vdev_io_assess+0x4d/0x240 [zfs]
[393801.328502]  zio_execute+0x90/0xf0 [zfs]
[393801.328508]  taskq_thread+0x2e7/0x530 [spl]
[393801.328512]  ? wake_up_q+0xa0/0xa0
[393801.328569]  ? zio_taskq_member.isra.11.constprop.17+0x60/0x60 [zfs]
[393801.328574]  ? taskq_thread_spawn+0x50/0x50 [spl]
[393801.328576]  kthread+0x116/0x130
[393801.328578]  ? kthread_park+0x80/0x80
[393801.328581]  ret_from_fork+0x22/0x30

However, I want to stress that this backtrace is not the original cause of the problem; it only appears if I do a scrub without first rebooting.

After that panic, the scrub stalled -- and a second error appeared:

zpool status -v
  pool: rpool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
	corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
	entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub in progress since Sat May  8 08:11:07 2021
	152G scanned at 132M/s, 1.63M issued at 1.41K/s, 172G total
	0B repaired, 0.00% done, no estimated completion time
config:

	NAME         STATE     READ WRITE CKSUM
	rpool        ONLINE       0     0     0
	  nvme0n1p7  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        <0xeb51>:<0x0>
        rpool/crypt/debian-1/home/jgoerzen/no-backup@[elided]-hourly-2021-05-07_02.17.01--2d:<0x0>

I have found the solution to this issue is to reboot into single-user mode and run a scrub. Sometimes it takes several scrubs, maybe even with some reboots in between, but eventually it will clear up the issue. If I reboot before scrubbing, I do not get the panic or the hung scrub.
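In shell terms, the recovery amounts to something like this (a minimal sketch; pool name as above, and the number of iterations varies):

# after rebooting into single-user mode
zpool scrub rpool
while zpool status rpool | grep -q 'scrub in progress'; do sleep 60; done
zpool status -v rpool   # repeat (rebooting between attempts if needed) until the error list is empty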

I run this same version of ZoL on two other machines, one of which runs this same kernel version. What is unique about this machine?

  • It is a laptop
  • It uses ZFS crypto (the others use LUKS)

I made a significant effort to rule out hardware issues, including running several memory tests and the built-in Dell diagnostics. I believe I have ruled that out.

Describe how to reproduce the problem

I can't at will. I have to wait for a spell.

Include any warning/errors/backtraces from the system logs

See above

Potentially related bugs

@jgoerzen jgoerzen added Status: Triage Needed New issue which needs to be triaged Type: Defect Incorrect behavior (e.g. crash, hang) labels May 8, 2021
@jgoerzen
Author

jgoerzen commented May 8, 2021

Two other interesting tidbits...

When I do the reboot after this issue occurs, the mounting of the individual zfs datasets is S L O W. Several seconds each, and that normally just flies by. After scrubbing, it is back to normal speed of mounting.

The datasets that have snapshot issues vary with each occurrence. Sometimes it's just one, sometimes many. But var is almost always included. (Though its parent, which has almost no activity ever, also shows up from time to time, so that's odd.)

@jstenback
Contributor

Same symptoms here, more or less. See also issue #11688.

@glueckself

glueckself commented May 9, 2021

I also have the symptom with the corrupted snapshots, without kernel panics so far.

So far it has only affected my Debian system with Linux 5.10 and zfs 2.0.3 (I've turned the server off for today; I can check the exact versions tomorrow). Also, while the system has the 2.0.3 zfs utils + module, the pool is still on the 0.8.6 format. I wasn't able to execute zfs list -r -t all <affected dataset>: it displayed "cannot iterate filesystems" and listed only a few snapshots (instead of the dozens it should have). Also, I couldn't destroy the affected snapshots because it said they didn't exist anymore. I couldn't send the dataset with syncoid at all.

On the corrupted system, after I got the mail from ZED, I manually ran a scrub at first, after which the zpool status said that there were no errors. However, the next zpool status, seconds after the first, again said that there were errors. Subsequent scrubs didn't clean the errors.

I've rebooted the server into an Ubuntu 20.10 live with zfs 0.8.4-1ubuntu11 (again, sorry that I haven't noted the version, can add it tomorrow) and after a scrub the errors were gone. Following scrubs haven't detected errors anymore. zfs list -r -t all ... again displayed a large list of snapshots.

The errors didn't seem to affect the data on the zvols (all 4 affected snapshots are of zvols). The zvols are used as disks for VMs with ext4 on them. I will verify them tomorrow.
EDIT: I checked one of the VM disks, neither fsck nor dpkg -V (verify checksums of all files installed from a package) could find any errors (except mismatching dpkg-checksums of config files I've changed - that is to be expected).

I have two other Ubuntu 21.04 based systems with zfs-2.0.2-1ubuntu5 which are not affected so far. However, they already have their pools upgraded to version 2. All are snapshotted with sanoid and have their datasets encrypted.

My next step will be to downgrade zfs back to 0.8.6 on the Debian system and see what happens.

EDIT:
More points I've noted while investigating with 0.8.4-1ubuntu11:

  • Creating new snapshots continued working for affected datasets, but destroying them didn't: right now I have 127 "frequently" snapshots (sanoid's term for the most frequent interval; in my case 15 minutes) instead of the 10 sanoid is configured to keep.
  • With 0.8, the destroying of the affected snapshots worked. Scrubbing afterwards didn't find any errors.

EDIT 2:

  • On 2.0.2 (Ubuntu 21.04 again), sanoid managed to successfully prune (destroy) all remaining snapshots that were past their valid time. A scrub afterwards didn't find any errors. I'll be running 2.0.2 for a while and see what happens.

@dcarosone

dcarosone commented May 21, 2021

I'm seeing this too, on Ubuntu 21.04, also using zfs encryption

I have znapzend running, and it makes a lot of snapshots. Sometimes, some of them are bad, and can't be used (for example, attempting to send them to a replica destination fails). I now use the skipIntermediates option, and so at least forward progress is made on the next snapshot interval.

In the most recent case (this morning) I had something like 4300 errors (many more than I'd seen previously). There are no block-level errors (read/write/cksum). They're cleared after destroying the affected snapshots and scrubbing (and maybe a reboot, depending on .. day?)

Warning! Speculation below:

  • this may be related to a race condition?
  • znapzend wakes up and makes recursive snapshots of about 6 first-level child datasets of rpool (ROOT, home, data, ...) all at the same time (as well as a couple of other pools, some of which still use LUKS for encryption underneath instead).
  • I have been having trouble with the ubuntu-native zsysd, which gets stuck at 100% cpu. Normally I get frustrated and just disable it.
  • However, recently, I have been trying to understand what it's doing and what's going wrong (it tries to collect every dataset, snapshot, and property in memory on startup). It seems like this has happened several times in the past few days while I have been letting zsysd run (so more contention for libzfs operations).
  • Update: I haven't seen this again since disabling zsysd; ~3 weeks and counting.

@aerusso
Contributor

aerusso commented Jun 12, 2021

@jgoerzen Can you

  1. Capture the zpool events -v report when one of these "bad" snapshots is created?
  2. Try to zfs send that snapshot (i.e., to zfs send ... | cat >/dev/null; notice the need to use cat).
  3. Reboot, and try to zfs send the snapshot (a rough sketch of these steps follows below).
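Roughly, in shell terms (the dataset and snapshot names are placeholders):

zpool events -v > /root/zpool-events.txt                 # step 1: save the event log before it rotates out
zfs send rpool/some/dataset@bad-snap | cat > /dev/null   # step 2: force a full read of the stream
echo $?                                                  # non-zero means the send failed
# step 3: reboot, then run the same send again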

In my case (#11688, which you already reference), I've discovered that rebooting "heals" the snapshot, at least when using the patchset I mentioned there.

@jgoerzen
Author

I'll be glad to. Unfortunately, I rebooted the machine yesterday, so I expect it will be about a week before the problem recurs.

It is interesting to see the discussion today in #11688. The unique factor about the machine that doesn't work for me is that I have encryption enabled. It wouldn't surprise me to see the same thing here, but I will of course wait for it to recur and let you know.

@jgoerzen
Author

Hello @aerusso,

The problem recurred over the weekend and I noticed it this morning.

Unfortunately, the incident that caused it had apparently already expired out of the zpool events buffer, which only went back less than an hour. However, I did find this in syslog:

Jun 20 01:17:39 athena zed: eid=34569 class=authentication pool='rpool' bookmark=12680:0:0:98
Jun 20 01:17:39 athena zed: eid=34570 class=data pool='rpool' priority=2 err=5 flags=0x180 bookmark=12680:0:0:242
Jun 20 01:17:40 athena zed: eid=34571 class=data pool='rpool' priority=2 err=5 flags=0x180 bookmark=12680:0:0:261
...
Jun 20 17:17:39 athena zed: eid=37284 class=authentication pool='rpool' bookmark=19942:0:0:98
Jun 20 17:17:39 athena zed: eid=37285 class=data pool='rpool' priority=2 err=5 flags=0x180 bookmark=19942:0:0:242
Jun 20 17:17:40 athena zed: eid=37286 class=data pool='rpool' priority=2 err=5 flags=0x180 bookmark=19942:0:0:261
...
Jun 20 18:17:28 athena zed: eid=37376 class=data pool='rpool' priority=2 err=5 flags=0x180 bookmark=21921:0:0:2072
Jun 20 18:17:29 athena zed: eid=37377 class=authentication pool='rpool' priority=2 err=5 flags=0x80 bookmark=21921:0:0:2072
Jun 20 18:17:29 athena zed: eid=37378 class=data pool='rpool' priority=2 err=5 flags=0x80 bookmark=21921:0:0:2072
Jun 20 18:17:40 athena zed: eid=37411 class=authentication pool='rpool' bookmark=21923:0:0:0

It should be noted that my hourly snap/send stuff runs at 17 minutes past the hour, so that may explain this timestamp correlation.

zpool status reported:

  pool: rpool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
	corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
	entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 00:04:12 with 0 errors on Sun Jun 13 00:28:13 2021
config:

	NAME         STATE     READ WRITE CKSUM
	rpool        ONLINE       0     0     0
	  nvme0n1p7  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        <0x5c81>:<0x0>
        <0x3188>:<0x0>
        rpool/crypt/debian-1@athena-hourly-2021-06-20_23.17.01--2d:<0x0>
        rpool/crypt/debian-1/var@athena-hourly-2021-06-20_23.17.01--2d:<0x0>
        <0x4de6>:<0x0>

Unfortunately, I forgot to attempt a zfs send before rebooting. Those snapshots, though not referenced directly, would have been included in a send -I that would have been issued. From my logs:

Jun 20 18:17:03 athena simplesnapwrap[4740]: Running: /sbin/zfs send -I rpool/crypt/debian-1/var@__simplesnap_bakfs1_2021-06-20T22:17:02__ rpool/crypt/debian-1/var@__simplesnap_bakfs1_2021-06-20T23:17:03__
Jun 20 18:17:03 athena simplesnap[2466/simplesnapwrap]: internal error: warning: cannot send 'rpool/crypt/debian-1/var@athena-hourly-2021-06-20_23.17.01--2d': Invalid argument

So I think that answers the question.

After a reboot but before a scrub, the zfs send you gave executes fine.

@cbreak-black

I have similar symptoms on an encrypted single-SSD Ubuntu 21.04 boot pool, using stock zfs from Ubuntu's repos. Deleting the affected snapshots and scrubbing previously cleared the errors, but on recurrence, repeated scrubbing (without deleting them) caused a deadlock. My system has ECC memory, so it's probably not RAM related.

  • Does this problem happen with slower pools (like hard disk pools)?
  • Does this problem happen with pools that have redundancy?
  • Does this problem happen with pools that don't trim (hard disk pools again)?

@aerusso
Contributor

aerusso commented Jul 4, 2021

@cbreak-black Was there a system restart between the occurrence of the corrupted snapshot and the problems? Restarting has "fixed" this symptom for me (though you will need to scrub twice for the message to disappear, I believe).

I have a suspicion that this may be a version of #10737 , which has an MR under way there. The behavior I am experiencing could be explained by that bug (syncoid starts many zfs sends on my machine, some of which are not finished; SSDs do the send much faster, so are more likely to get deeper into the zfs send before the next command in the pipeline times out; a reboot heals the issue, for me; there's no on disk corruption, as far as I can tell).

I'm holding off on trying to bisect this issue (at least) until testing that MR. (And all the above is conjecture!)

@cbreak-black

@aerusso No, without a restart I got into the scrub-hang, and had to restart hard. Afterwards, the scrub finished, and several of the errors vanished. The rest of the errors vanished after deleting the snapshots and scrubbing again.

@InsanePrawn
Contributor

InsanePrawn commented Jul 4, 2021

Can I join the club too? #10019
Note how it's also at 0x0. Sadly, I have since deleted said snapshot and dataset.

@aerusso
Contributor

aerusso commented Jul 4, 2021

@InsanePrawn I can't seem to find commit 4d5b4a33d in any repository I know of (and apparently neither can GitHub). However, in your report you say this was a "recent git master", and the commit I'm currently betting on being guilty is da92d5c, which was committed in November of the previous year, so I can't use your data point to rule out my theory!

Also, it sounds like you didn't have any good way to reproduce the error; however, you were using a test pool. Compared to my reproduction strategy (which is just: turn my computer on and browse the web, check mail, etc.) it might be easier to narrow in on a test case (or might have been easier a year and a half ago, when this was all fresh). Anyway, if you have any scripts or ideas of what you were doing that caused this besides "snapshots being created and deleted every couple minutes", that might be useful too. (I already tried lots of snapshot creations and deletions during fio on several datasets in a VM.)

@InsanePrawn
Contributor

InsanePrawn commented Jul 4, 2021

Yeah, I don't know why I didn't go look for the commit in my issue. Luckily for us, that server (and pool; it does say "yolo", but it's my private server's root pool, named back when crypto was still unreleased; I just won't cry much if it breaks) and the git repo on it still exist. Looks like 4d5b4a33d was two systemd-generator commits by me after 610eec4.

@InsanePrawn
Contributor

InsanePrawn commented Jul 4, 2021

FWIW, the dataset the issue appeared on was an essentially empty filesystem dataset (maybe a single small file inside) that had snapshots taken (without actual fs activity) at quick intervals (somewhere between 30s and 5m), in parallel with a few (5-15) other similarly empty datasets.
Edit: These were being snapshotted and replicated by zrepl, probably in a similar manner to what znapzend does.

The pool is a raidz2 on 3.5" spinning SATA disks.
I'm afraid I have nothing more to add in terms of reproduction :/

Edit: Turns out the dataset also still exists, the defective snapshot however does not anymore. I doubt that's helpful?

@aerusso
Contributor

aerusso commented Jul 5, 2021

@InsanePrawn Does running the zrepl workload reproduce the bug on 2.0.5 (or another recent release?)

I don't think the snapshot is terribly important, unless you're able to really dig into it with zdb (which I have not developed sufficient expertise to do). Rather, I think it's the workload, hardware setup, and (possibly, but I don't understand the mechanism at all) the dataset itself. Encryption is also a common theme, but that might just affect the presentation (i.e., there's no MAC to fail in the unencrypted, unauthenticated case).

Getting at zpool events -v showing the error would probably tell us something (see mine).

@cbreak-black

I've since added redundancy to my pool (it's now a mirror with two devices), and disabled autotrim. The snapshot corruption still happens. Still don't know what is causing it. And I also don't know if the corruption happens when creating the snapshot, and only later gets discovered (when I try to zfs send the snapshots), or if snapshots get corrupted some time in between creation and sending.

@aerusso
Contributor

aerusso commented Aug 14, 2021

@cbreak-black Can you enable the all-debug.sh ZEDlet, and put the temporary directory somewhere permanent (i.e., not the default of /tmp/zed.debug.log)?

This will get the output of zpool events -v as it is generated, and will give times, which you can conceivably triangulate with your other logs. There's other information in those logs that is probably useful, too.
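For reference, enabling it looks roughly like this (a sketch; the ZEDlet source path and log location vary by distribution):

ln -s /usr/libexec/zfs/zed.d/all-debug.sh /etc/zfs/zed.d/                # source path varies by distro
echo 'ZED_DEBUG_LOG="/var/log/zed.debug.log"' >> /etc/zfs/zed.d/zed.rc   # somewhere permanent, not /tmp
systemctl restart zfs-zed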

I'll repeat this here: if anyone gets me a reliable reproducer on a new pool, I have no doubt we'll be able to solve this in short order.

@wohali

wohali commented Sep 1, 2021

Just mentioning here that we saw this on TrueNAS 12.0-U5 with OpenZFS 2.0.5 as well -- see #11688 (comment) for our story.

@rincebrain
Contributor

Since I don't see anyone mentioning it here yet, #11679 contains a number of stories about the ARC getting confused when encryption is involved and, in a very similar looking illumos bug linked from there, eating data at least once.

@gamanakis
Contributor

gamanakis commented Sep 30, 2021

@jgoerzen are you using raw send/receive? If yes this is closely related to #12594.

@jgoerzen
Author

@gamanakis Nope, I'm not using raw (-w).
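(For context: a raw send is one invoked with zfs send -w/--raw, which transmits the encrypted blocks as stored instead of decrypting them first. The dataset name below is a placeholder.)

zfs send -w rpool/crypt/some/dataset@snap   # raw send; not what I use
zfs send rpool/crypt/some/dataset@snap      # non-raw send; my case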

@phreaker0

it's present in v2.1.1 as well:

Okt 09 01:01:14 tux sanoid[2043026]: taking snapshot ssd/container/debian-test@autosnap_2021-10-08_23:01:14_hourly
Okt 09 01:01:16 tux sanoid[2043026]: taking snapshot ssd/container/debian-test@autosnap_2021-10-08_23:01:14_frequently
Okt 09 01:01:16 tux kernel: VERIFY3(0 == remove_reference(hdr, NULL, tag)) failed (0 == 1)
Okt 09 01:01:16 tux kernel: PANIC at arc.c:3836:arc_buf_destroy()
Okt 09 01:01:16 tux kernel: Showing stack for process 435
Okt 09 01:01:16 tux kernel: CPU: 2 PID: 435 Comm: z_rd_int_1 Tainted: P           OE     5.4.0-84-generic #94-Ubuntu
Okt 09 01:01:16 tux kernel: Hardware name: GIGABYTE GB-BNi7HG4-950/MKHM17P-00, BIOS F1 05/24/2016
Okt 09 01:01:16 tux kernel: Call Trace:
Okt 09 01:01:16 tux kernel:  dump_stack+0x6d/0x8b
Okt 09 01:01:16 tux kernel:  spl_dumpstack+0x29/0x2b [spl]
Okt 09 01:01:16 tux kernel:  spl_panic+0xd4/0xfc [spl]
Okt 09 01:01:16 tux kernel:  ? kfree+0x231/0x250
Okt 09 01:01:16 tux kernel:  ? spl_kmem_free+0x33/0x40 [spl]
Okt 09 01:01:16 tux kernel:  ? kfree+0x231/0x250
Okt 09 01:01:16 tux kernel:  ? zei_add_range+0x140/0x140 [zfs]
Okt 09 01:01:16 tux kernel:  ? spl_kmem_free+0x33/0x40 [spl]
Okt 09 01:01:16 tux kernel:  ? zfs_zevent_drain+0xd3/0xe0 [zfs]
Okt 09 01:01:16 tux kernel:  ? zei_add_range+0x140/0x140 [zfs]
Okt 09 01:01:16 tux kernel:  ? zfs_zevent_post+0x234/0x270 [zfs]
Okt 09 01:01:16 tux kernel:  arc_buf_destroy+0xfa/0x100 [zfs]
Okt 09 01:01:16 tux kernel:  arc_read_done+0x251/0x4a0 [zfs]
Okt 09 01:01:16 tux kernel:  zio_done+0x407/0x1050 [zfs]
Okt 09 01:01:16 tux kernel:  zio_execute+0x93/0xf0 [zfs]
Okt 09 01:01:16 tux kernel:  taskq_thread+0x2fb/0x510 [spl]
Okt 09 01:01:16 tux kernel:  ? wake_up_q+0x70/0x70
Okt 09 01:01:16 tux kernel:  ? zio_taskq_member.isra.0.constprop.0+0x60/0x60 [zfs]
Okt 09 01:01:16 tux kernel:  kthread+0x104/0x140
Okt 09 01:01:16 tux kernel:  ? task_done+0xb0/0xb0 [spl]
Okt 09 01:01:16 tux kernel:  ? kthread_park+0x90/0x90
Okt 09 01:01:16 tux kernel:  ret_from_fork+0x1f/0x40

@phreaker0

@aerusso you wrote that da92d5c may be the cause of this issue. My workstation at work panics after a couple of days and I need to reset it. Could you provide a branch of 2.1.1 with this commit reverted (the revert causes merge conflicts I can't fix myself), so I could test whether the machine no longer crashes?

@aerusso
Contributor

aerusso commented Oct 14, 2021

@phreaker0 Unfortunately, the bug that da92d5c introduced (#10737) was fixed by #12299, which I believe is present in all maintained branches now. It does not fix #11688 (which I suspect is the same as this bug).

I'm currently running 0.8.6 on Linux 5.4.y, and am hoping to wait out this bug (I don't have a lot of time right now, or for the foreseeable future). But if you have a reliable reproducer (or a whole lot of time) you could bisect while running 5.4 (or some other pre-5.10 kernel). I can help anyone who wants to do that. If we can find the guilty commit, I have no doubt this can be resolved.

ryao added several commits to ryao/zfs that referenced this issue May 17, 2025
@phreaker0

I updated two of my machines (which experienced the issue quite heavily) with zfs 2.3.2 patched with the commit gamanakis@ab60df7.
It's been running without issues for about two and a half days now. Previously, the problems started after a few hours, so it looks like the problem is finally solved!

@HankB @pcd1193182 @gamanakis and everyone else who helped, thanks for your hard work!
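For anyone wanting to run the same test build, the rough shape is (a sketch; assumes a standard from-source build, then install or package per your distro):

git clone https://github.com/openzfs/zfs && cd zfs
git checkout zfs-2.3.2      # release tag
git cherry-pick ab60df7     # the fix commit referenced above
sh autogen.sh && ./configure && make -j"$(nproc)"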

gamanakis added a commit to gamanakis/zfs that referenced this issue May 18, 2025
gamanakis added a commit to gamanakis/zfs that referenced this issue May 19, 2025
ixhamza pushed a commit to truenas/zfs that referenced this issue May 19, 2025
behlendorf pushed a commit that referenced this issue May 19, 2025
Bisecting identified the redacted send/receive as the source of the bug
for issue #12014. Specifically the call to
dsl_dataset_hold_obj(&fromds) has been replaced by
dsl_dataset_hold_obj_flags() which passes a DECRYPT flag and creates
a key mapping. A subsequent dsl_dataset_rele_flags(&fromds) is missing
and the key mapping is not cleared. This may be inadvertently used, which
results in arc_untransform failing with ECKSUM in:
 arc_untransform+0x96/0xb0 [zfs]
 dbuf_read_verify_dnode_crypt+0x196/0x350 [zfs]
 dbuf_read+0x56/0x770 [zfs]
 dmu_buf_hold_by_dnode+0x4a/0x80 [zfs]
 zap_lockdir+0x87/0xf0 [zfs]
 zap_lookup_norm+0x5c/0xd0 [zfs]
 zap_lookup+0x16/0x20 [zfs]
 zfs_get_zplprop+0x8d/0x1d0 [zfs]
 setup_featureflags+0x267/0x2e0 [zfs]
 dmu_send_impl+0xe7/0xcb0 [zfs]
 dmu_send_obj+0x265/0x360 [zfs]
 zfs_ioc_send+0x10c/0x280 [zfs]

Fix this by restoring the call to dsl_dataset_hold_obj().

The same applies for to_ds: here replace dsl_dataset_rele(&to_ds) with
dsl_dataset_rele_flags().

Both leaked key mappings will cause a panic when exporting the
sending pool or unloading the zfs module after a non-raw send from
an encrypted filesystem.

Contributions-by: Hank Barta <[email protected]>
Contributions-by: Paul Dagnelie <[email protected]>
Reviewed-by: Alexander Motin <[email protected]>
Reviewed-by: Richard Yao <[email protected]>
Reviewed-by: Rob Norris <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: George Amanakis <[email protected]>
Closes #12014 
Closes #17340
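Per the commit message above, the leaked key mapping can be tripped deliberately on unpatched builds; a hedged sketch (pool and dataset names are placeholders):

zfs send -i tank/enc@a tank/enc@b | cat > /dev/null   # non-raw send from an encrypted dataset
zpool export tank   # panics on affected builds because of the leaked key mapping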
ryao added a commit to ryao/zfs that referenced this issue May 19, 2025
This check is currently limited to checking mismatches that occur in the
same stack frame. It does not detect across stack frames.

Signed-off-by: Richard Yao <[email protected]>
gamanakis added a commit to gamanakis/zfs that referenced this issue May 20, 2025
robn pushed a commit to robn/zfs that referenced this issue May 23, 2025
robn pushed a commit to robn/zfs that referenced this issue May 24, 2025
@optimist555

> I updated two of my machines (which experienced the issue quite heavily) with zfs 2.3.2 patched with the commit gamanakis@ab60df7. It's been running without issues for about two and a half days now. Previously, the problems started after a few hours, so it looks like the problem is finally solved!
>
> @HankB @pcd1193182 @gamanakis and everyone else who helped, thanks for your hard work!

Were you using raw send?

@phreaker0
> Were you using raw send?

no

tonyhutter pushed a commit that referenced this issue May 27, 2025
tonyhutter pushed a commit that referenced this issue May 28, 2025