
Commit 0b2cd50

chaseyu authored and Jaegeuk Kim committed
f2fs: fix to zero data after EOF for compressed file correctly
generic/091 may fail, and it bisects to the bad commit ba8dac3 ("f2fs: fix to zero post-eof page").

What causes generic/091 to fail is something like Testcase #1 below:
1. write 16k as compressed blocks
2. truncate to 12k
3. truncate to 20k
4. verify data in range of [12k, 16k]; however, data is not zero as expected

Script of Testcase #1:
mkfs.f2fs -f -O extra_attr,compression /dev/vdb
mount -t f2fs -o compress_extension=* /dev/vdb /mnt/f2fs
dd if=/dev/zero of=/mnt/f2fs/file bs=12k count=1
dd if=/dev/random of=/mnt/f2fs/file bs=4k count=1 seek=3 conv=notrunc
sync
truncate -s $((12*1024)) /mnt/f2fs/file
truncate -s $((20*1024)) /mnt/f2fs/file
dd if=/mnt/f2fs/file of=/mnt/f2fs/data bs=4k count=1 skip=3
od /mnt/f2fs/data
umount /mnt/f2fs

Analysis:
In step 2), we redirty all data pages from #0 to #3 in the compressed cluster and zero page #3; in step 3), f2fs_setattr() calls f2fs_zero_post_eof_page() to drop all page cache post EOF, including dirtied page #3; in step 4), when we read data from page #3, it decompresses the cluster and extracts random data into page #3; finally, we hit the non-zeroed data post EOF.

However, the commit ba8dac3 ("f2fs: fix to zero post-eof page") just lets the issue be reproduced easily; w/o the commit, the bug can be reproduced w/ Testcase #2 below:
1. write 16k as compressed blocks
2. truncate to 8k
3. truncate to 12k
4. truncate to 20k
5. verify data in range of [12k, 16k]; however, data is not zero as expected

Script of Testcase #2:
mkfs.f2fs -f -O extra_attr,compression /dev/vdb
mount -t f2fs -o compress_extension=* /dev/vdb /mnt/f2fs
dd if=/dev/zero of=/mnt/f2fs/file bs=12k count=1
dd if=/dev/random of=/mnt/f2fs/file bs=4k count=1 seek=3 conv=notrunc
sync
truncate -s $((8*1024)) /mnt/f2fs/file
truncate -s $((12*1024)) /mnt/f2fs/file
truncate -s $((20*1024)) /mnt/f2fs/file
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/f2fs/file of=/mnt/f2fs/data bs=4k count=1 skip=3
od /mnt/f2fs/data
umount /mnt/f2fs

Analysis:
In step 2), we redirty all data pages from #0 to #3 in the compressed cluster and zero pages #2 and #3; in step 3), we truncate page #3 in the page cache; in step 4), we expand the file size; in step 5), we hit random data post EOF for the same reason as in Testcase #1.

Root Cause:
In f2fs_truncate_partial_cluster(), after we truncate a partial data block in a compressed cluster, all pages in the cluster, including the one post EOF, are dirtied; after another truncation, the dirty page post EOF is dropped, but the on-disk compressed cluster is still valid. It may include non-zero data post EOF, resulting in previous non-zero data being exposed post EOF while reading.

Fix:
In f2fs_truncate_partial_cluster(), change as below to fix:
- call filemap_write_and_wait_range() to flush dirty pages
- call truncate_pagecache() to drop pages or zero the partial page post EOF
- call f2fs_do_truncate_blocks() to truncate the non-compressed cluster to the last valid block

Fixes: 3265d3d ("f2fs: support partial truncation on compressed inode")
Reported-by: Jan Prusakowski <[email protected]>
Signed-off-by: Chao Yu <[email protected]>
Signed-off-by: Jaegeuk Kim <[email protected]>
1 parent 0fe1c6b commit 0b2cd50

File tree

1 file changed: +16 −7 lines

fs/f2fs/compress.c

Lines changed: 16 additions & 7 deletions
@@ -1246,19 +1246,28 @@ int f2fs_truncate_partial_cluster(struct inode *inode, u64 from, bool lock)
 		for (i = cluster_size - 1; i >= 0; i--) {
 			struct folio *folio = page_folio(rpages[i]);
 			loff_t start = (loff_t)folio->index << PAGE_SHIFT;
+			loff_t offset = from > start ? from - start : 0;
 
-			if (from <= start) {
-				folio_zero_segment(folio, 0, folio_size(folio));
-			} else {
-				folio_zero_segment(folio, from - start,
-						folio_size(folio));
+			folio_zero_segment(folio, offset, folio_size(folio));
+
+			if (from >= start)
 				break;
-			}
 		}
 
 		f2fs_compress_write_end(inode, fsdata, start_idx, true);
+
+		err = filemap_write_and_wait_range(inode->i_mapping,
+				round_down(from, cluster_size << PAGE_SHIFT),
+				LLONG_MAX);
+		if (err)
+			return err;
+
+		truncate_pagecache(inode, from);
+
+		err = f2fs_do_truncate_blocks(inode,
+				round_up(from, PAGE_SIZE), lock);
 	}
-	return 0;
+	return err;
 }
 
 static int f2fs_write_compressed_pages(struct compress_ctx *cc,

0 commit comments
