
added module param to enable decompression of scrubbed blocks #17630


Open

alek-p wants to merge 1 commit into openzfs:master from alek-p:decompressive_scrub

Conversation

@alek-p (Contributor)

Motivation and Context

We've seen a handful of rare cases of corruption where the checksum for a block is correct, but the block fails to decompress.
Since a scrub does not decompress the blocks it reads, it currently will not catch this type of corruption.

To address this, we've implemented a module parameter (zfs_scrub_decompress) that, when set to a non-zero value, causes all scrubbed blocks to be decompressed; if decompression fails, the failure is reported in the zpool status -v output as a permanent data error.
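As a point of reference, OpenZFS tunables are normally exposed through the ZFS_MODULE_PARAM macro. A minimal sketch of how such a tunable could be declared (the exact declaration and placement in this PR are not shown on this page):

    /* Sketch only: scrub-time decompression tunable, off by default. */
    static int zfs_scrub_decompress = 0;

    ZFS_MODULE_PARAM(zfs, zfs_, scrub_decompress, INT, ZMOD_RW,
            "Decompress scrubbed blocks to further verify their integrity");

Once the module is loaded, a parameter declared this way can be toggled at runtime through /sys/module/zfs/parameters/zfs_scrub_decompress on Linux.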

Description

To implement this, the dsl_scan_scrub_done() callback was modified to optionally decompress scrubbed blocks.
The existing scrub behavior is preserved by default, as the tunable is set to 0 by default.
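As a rough illustration of the approach (a sketch under stated assumptions, not the PR's actual diff: the error bookkeeping is elided, and the zio_decompress_data() signature shown here varies across OpenZFS releases):

    static void
    dsl_scan_scrub_done(zio_t *zio)
    {
            blkptr_t *bp = zio->io_bp;

            if (zio->io_error == 0 && zfs_scrub_decompress != 0 &&
                BP_GET_COMPRESS(bp) != ZIO_COMPRESS_OFF) {
                    /* Scrub reads raw data, so io_abd holds PSIZE bytes. */
                    abd_t *dst = abd_alloc_linear(BP_GET_LSIZE(bp), B_FALSE);

                    /* Decompress purely as an integrity check. */
                    if (zio_decompress_data(BP_GET_COMPRESS(bp), zio->io_abd,
                        dst, BP_GET_PSIZE(bp), BP_GET_LSIZE(bp), NULL) != 0) {
                            /*
                             * Record a permanent data error so the block
                             * appears in zpool status -v (bookkeeping elided).
                             */
                    }
                    abd_free(dst);
            }

            /* ... existing scrub done-callback logic ... */
    }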

How Has This Been Tested?

Ran the zpool scrub-related tests in the ZFS test suite, which passed, along with some surface-level performance tests. The performance tests suggest about 10-15% higher peak CPU usage when using decompressive scrub.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Quality assurance (non-breaking change which makes the code more robust against bugs)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Library ABI change (libzfs, libzfs_core, libnvpair, libuutil and libzfsbootenv)
  • Documentation (a change to man pages or other documentation)


@alek-p added the Status: Code Review Needed (Ready for review and testing) label on Aug 13, 2025
    /*
     * If the block was read without error, is compressed, and we're doing
     * a decompression scrub we will now attempt to decompress it to
     * further verify its integrity.
     */
@behlendorf (Contributor)

For encrypted datasets, unless the encryption keys are loaded, I'd expect the decompression to fail. It'd be nice to optionally do this check as part of the normal pipeline.

@amotin (Member)

Existing scrub code always uses the ZIO_FLAG_RAW flag to disable both decryption and decompression. Unless that is what you mean, Brian, I wonder if, instead of the additional code here, we could just set the flags properly to make the ZIO pipeline also verify encryption MACs and only then decompress.

I also wonder whether it is realistic (I really doubt it happens now) to try to recover from those errors by reading from other data copies. I don't know what the use case of this patch is; the corruption could be either memory corruption (in which case all copies will be broken) or some on-disk corruption that managed to slip past the checksums (which is unlikely, but in that case there might be hope).

@behlendorf (Contributor)

Unless that is what you mean... we could just set the flags properly...

Yes, this is what I was suggesting. Although, I'm not convinced we can set the flags properly, since the encryption keys could be unloaded at any time while we're scrubbing. A new ZIO_FLAG_ might be needed as a hint to decrypt and decompress the block only if possible.
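One shape such a hint could take (purely hypothetical: neither the ZIO_FLAG_TRANSFORM_IF_POSSIBLE flag nor the scrub_keys_loaded() helper exists in OpenZFS; they only illustrate the idea):

    /* Issue the scrub read raw, but let the pipeline do better if it can. */
    int zio_flags = ZIO_FLAG_SCAN_THREAD | ZIO_FLAG_CANFAIL |
        ZIO_FLAG_RAW | ZIO_FLAG_TRANSFORM_IF_POSSIBLE;

    /*
     * In the pipeline, the hint would relax the RAW behavior only when the
     * dataset's encryption keys are currently loaded (hypothetical helper),
     * so an unloaded key never turns into a spurious decompression error.
     */
    if ((zio->io_flags & ZIO_FLAG_TRANSFORM_IF_POSSIBLE) &&
        scrub_keys_loaded(zio))
            zio->io_flags &= ~ZIO_FLAG_RAW;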

@alek-p (Contributor, Author)

I tried to go this route before by setting ZIO_FLAG_RAW_ENCRYPT instead of ZIO_FLAG_RAW and hoping to see a decompression failure during scrub, but I couldn't find any evidence of the failure. I'll look again.

The use case is that we have a handful of cases of on-disk corruption where the checksum for the block is correct, but the block fails to decompress. We would like to identify all such cases using a scrub. By the way, demand-reading such a block returns EIO and doesn't add anything to the errlog; we have a patch that changes this as well.

The recovery question is an interesting one. It might be possible to recover from this in some cases, but in the case I've dug into, it is not possible since we never had a good copy of that block. The bad block was received already containing the corruption (in a compressed send stream), and the source did not have redundancy configured.

@alek-p (Contributor, Author)

I've tried a POC of what I think you were proposing:

@@ -4742,6 +4745,9 @@ dsl_scan_scrub_cb(dsl_pool_t *dp,
        boolean_t needs_io = B_FALSE;
        int zio_flags = ZIO_FLAG_SCAN_THREAD | ZIO_FLAG_RAW | ZIO_FLAG_CANFAIL;
+       if (BP_GET_LEVEL(bp) == 0 && BP_GET_COMPRESS(bp) != ZIO_COMPRESS_OFF)
+               zio_flags &= ~ZIO_FLAG_RAW_COMPRESS;
+
        count_block(dp->dp_blkstats, bp);
        if (phys_birth <= scn->scn_phys.scn_min_txg ||
            phys_birth >= scn->scn_phys.scn_max_txg) {

and this causes failures in zio_decompress_data() across all files in the pool... I don't see why that would be happening.

Signed-off-by: Alek Pinchuk <alek.pinchuk@connectwise.com>

Reviewers

@behlendorf left review comments

@amotin left review comments

At least 1 approving review is required to merge this pull request.


Labels

Status: Code Review Needed (Ready for review and testing)


3 participants: @alek-p, @behlendorf, @amotin
