This is an archive of the discontinued Mercurial Phabricator instance.

recover: don't verify by default
ClosedPublic

Authored by valentin.gatienbaron on Jan 22 2020, 2:49 PM.

Details

Summary

The reasons are:

  • it's not that hard to trigger interrupted transactions: just run out of disk space
  • it takes forever to verify on large repos. Before --no-verify existed, I told people to C-c hg recover when the progress bar showed up. Now I tell them to pass --no-verify.
  • I don't remember a single case where the verification step was useful

This is technically a change of behavior. Perhaps this would be better
suited for tweakdefaults?

Diff Detail

Repository
rHG Mercurial
Branch
default
Lint
No Linters Available
Unit
No Unit Test Coverage

Event Timeline

pulkit added a subscriber: pulkit. Jan 22 2020, 3:13 PM

I also have experience with the C-c thing. Would a config option which enables --no-verify by default work for you?

+1, the point of having the --no-verify option was to eventually turn it on by default.

(also, we should tighten the window during transaction creation that requires hg recover)

It's already possible to control the default, like so:

[alias]
recover = recover --no-verify

So the current patch is meant to change the default. It could also be abandoned, which would be fine too given that it's easy to change the default for oneself, but I figured I'd propose the change.

marmoute accepted this revision. Jan 23 2020, 4:29 AM
pulkit added a subscriber: durin42. (Edited) Feb 10 2020, 1:07 PM

Sorry for the late reply. By changing the default, I am worried about the cases where a user has a broken repository and we only recover the transaction without verifying. I am not sure what those cases are. Also, I don't know why recover performs verify in the first place. Maybe @durin42, @marmoute or someone else knows?

durin42 accepted this revision as: durin42. Feb 10 2020, 4:37 PM

> Sorry for the late reply. By changing the default, I am worried about the cases where a user has a broken repository and we only recover the transaction without verifying. I am not sure what those cases are. Also, I don't know why recover performs verify in the first place. Maybe @durin42, @marmoute or someone else knows?

I _think_ it's just paranoia. As long as the bundle wasn't woefully corrupt, it shouldn't be a problem. I _think_ if we set some of the [server]-section bundle validation options (which should be cheap enough) we could ditch this completely safely.

As it stands, I'm fine with this patch if someone else has the confidence to push it.
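
For reference, a minimal hgrc sketch of that idea; that server.validate is among the [server]-section validation options being referred to is an assumption:

[server]
# Check that incoming changegroups are complete, i.e. that every file
# revision referenced by the new manifests is actually present.
validate = True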

> I _think_ it's just paranoia. As long as the bundle wasn't woefully corrupt, it shouldn't be a problem. I _think_ if we set some of the [server]-section bundle validation options (which should be cheap enough) we could ditch this completely safely.
> As it stands, I'm fine with this patch if someone else has the confidence to push it.

How does the validity of an input bundle affect recover? I would have thought it's only the validity of the journal that matters, and that's created entirely based on local data (file lengths or contents before writes).

Now I suppose the journal itself may well be truncated, or not written at all, when running out of disk space or in other error situations where the OS does the writes out of order.
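
To make the journal-based recovery concrete, here is a rough Python sketch of what replaying such a journal amounts to; the (path, offset) entry format and the remove-on-zero behavior are assumptions for illustration, not Mercurial's actual transaction code:

import os

def playback(journal):
    # journal: iterable of (path, offset) pairs recorded before each write.
    # Illustrative sketch only, not Mercurial's transaction code.
    for path, offset in journal:
        if offset:
            # Truncate back to the pre-transaction length, dropping any
            # data appended by the aborted transaction.
            with open(path, 'r+b') as f:
                f.truncate(offset)
        elif os.path.exists(path):
            # An offset of 0 is taken here to mean the file was created
            # by the transaction, so remove it entirely.
            os.unlink(path)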

durin42 accepted this revision. Feb 12 2020, 11:35 AM

> I _think_ it's just paranoia. As long as the bundle wasn't woefully corrupt, it shouldn't be a problem. I _think_ if we set some of the [server]-section bundle validation options (which should be cheap enough) we could ditch this completely safely.
> As it stands, I'm fine with this patch if someone else has the confidence to push it.

> How does the validity of an input bundle affect recover?

It could have borked linknodes or missing filenodes. That can happen in some cases of subtle revlog corruption. Back when I helped run code.google.com we saw a few cases of that, where clients couldn't push specific changes unless they pushed their whole repo. But anyway, I was mis-thinking about this: this is about hg recover, not about recovering something from a backup bundle (sigh), so we've been talking past each other.

> I would have thought it's only the validity of the journal that matters, and that's created entirely based on local data (file lengths or contents before writes).

Yes, you're right.

> Now I suppose the journal itself may well be truncated, or not written at all, when running out of disk space or in other error situations where the OS does the writes out of order.

Yeah, it's possible on a network FS or something, but this honestly seems like a safe change to me. Sorry I misread it last time through. :/

This revision is now accepted and ready to land. Feb 12 2020, 11:35 AM
This revision was automatically updated to reflect the committed changes.