This is an archive of the discontinued Mercurial Phabricator instance.

revlog: add a mechanism to verify expected file position before appending
ClosedPublic

Authored by spectral on Feb 3 2021, 8:16 PM.

Details

Summary

If someone uses hg debuglocks, or some non-hg process writes to the .hg
directory without respecting the locks, or if the repo's on a networked
filesystem, it's possible for the revlog code to write out corrupted data.

The form of this corruption can vary depending on what data was written and how
that happened. We are in the "networked filesystem" case (though I've had users
also do this to themselves with the "hg debuglocks" scenario), and most often
see this with the changelog. What ends up happening is we produce two items
(let's call them rev1 and rev2) in the .i file that have the same linkrev,
baserev, and offset into the .d file, while the data in the .d file is appended
properly. rev2's compressed_size is accurate for rev2, but when we go to
decompress the data in the .d file, we use the offset that's recorded in the
index file, which is the same as rev1's, and attempt to decompress
rev2.compressed_size bytes of rev1's data. This usually does not succeed. :)
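
As a toy illustration of the non-inline case (made-up data, not real revlog
code): if rev2's index entry reuses rev1's offset but keeps rev2's compressed
size, decompression reads a truncated prefix of rev1's stream and fails:

  import zlib

  # Made-up revision payloads; rev1 compresses to far more bytes than rev2.
  rev1_full = bytes(range(256)) * 64
  rev2_full = b"a much smaller second revision"
  rev1_comp = zlib.compress(rev1_full)
  rev2_comp = zlib.compress(rev2_full)
  datafile = rev1_comp + rev2_comp  # the .d file itself was appended correctly

  # healthy index: rev2's offset points just past rev1's data
  assert zlib.decompress(datafile[len(rev1_comp):]) == rev2_full

  # corrupted index: rev2 reuses rev1's offset (0) with rev2's compressed
  # size, so we inflate a truncated prefix of rev1's stream instead
  try:
      zlib.decompress(datafile[:len(rev2_comp)])
  except zlib.error as exc:
      print("decompression fails:", exc)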

When using inline data, this also fails, though I haven't investigated why too
closely. This shows up as a "patch decode" error. I believe what's happening
there is that we're basically ignoring the offset field, getting the data
properly, but since baserev != rev, it thinks this is a delta based on rev
(instead of a full text) and can't actually apply it as such.

For now, I'm going to make this an optional component and default it to being
entirely off. I may increase the default severity of this in the future, once
I've enabled it for my users and we gain more experience with it. Luckily, most
of my users have a versioned filesystem and can roll back to before the
corruption was written; it's just a hassle to do so, and not everyone knows how
(so it's a support burden). Users on other filesystems will not have that
luxury, and this can leave them with a corrupted repository that they are
unlikely to know how to resolve, and they'll see it as a data-loss event.
Refusing to create the corruption is a much better user experience.

This mechanism is not perfect. There may be false negatives (racy writes that
are not detected). There should not be any false positives (non-racy writes
that are reported as racy). This is not a mechanism that makes putting a repo
on a networked filesystem "safe" or "supported", just *less* likely to cause
corruption.
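
For illustration, here is a minimal sketch of the idea in Python (the function
name and error message are made up for this description, not the code that
landed): before appending, compare the offset the in-memory index expects to
write at against the file's actual size, and refuse to write if another
process has already appended behind our back.

  import os

  def verify_append_position(fh, filename, expected_offset):
      # `expected_offset` is where the in-memory revlog believes the next
      # entry starts; if another writer appended since we read the index,
      # the real on-disk size will already be past that point.
      actual = os.fstat(fh.fileno()).st_size
      if actual != expected_offset:
          raise RuntimeError(
              '%s: expected to append at offset %d, but file is %d bytes; '
              'another process probably wrote to the repository'
              % (filename, expected_offset, actual)
          )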

Diff Detail

Repository
rHG Mercurial
Lint
Automatic diff as part of commit; lint not applicable.
Unit
Automatic diff as part of commit; unit tests not applicable.

Event Timeline

spectral created this revision. Feb 3 2021, 8:16 PM

This still adds all of the function call overhead even when the feature is not used. I also don't like that this check is done repeatedly, e.g. during an unbundle. I don't think I would mind checking the size once per revlog on the first write, but not repeatedly.

spectral updated this revision to Diff 25481. Feb 4 2021, 2:46 PM

Switched to just being a boolean check instead of a dummy function. Also cleaned up the formatting issues; it turns out I didn't have black installed correctly.

I agree that in a "tight" loop like during a pull it's much less useful, and would be open to having the checks disabled for those operations if/when they're ever enabled by default. However, there are commands like split or rebase (with a merge tool specified) that can have several pauses while hg is waiting on the user (writing a description, choosing what to split, resolving conflicts) where the files have already been written to once, but the user had a chance to "forget" about what they were doing. If there was an easy way of having any user interaction reset the state so that the next write caused the checks to happen again, I might be open to doing that.

Sorry, I'm not super familiar with the phabricator workflow, should I be doing something more here?

This revision was not accepted when it landed; it landed in state Needs Review.
This revision was automatically updated to reflect the committed changes.

Seems like this patch made test-git-interop.t more broken than it was before. I'll apply the following patch in flight to make it just as broken as it was before:

diff --git a/hgext/git/__init__.py b/hgext/git/__init__.py
--- a/hgext/git/__init__.py
+++ b/hgext/git/__init__.py
@@ -90,7 +90,7 @@ class gitstore(object):  # store.basicst
             return os.path.join(self.path, b'..', b'.hg', f)
         raise NotImplementedError(b'Need to pick file for %s.' % f)

-    def changelog(self, trypending):
+    def changelog(self, trypending, concurrencychecker):
         # TODO we don't have a plan for trypending in hg's git support yet
         return gitlog.changelog(self.git, self._db)

@spectral I've noticed some flakiness on the new test introduced in this change: https://foss.heptapod.net/octobus/mercurial-devel/-/jobs/187438

Could you take a look at it?

I'm unable to reproduce. I've run the test over 10,000 times: I added a #testcases a b c d e f g h i j k l m n o p q r s t u v w x y z line so it ran 26 times per run, and I've run over 300 instances of that like run-tests.py -j26 -l --chg test-racy-mutations.t, over 100 with -j108, and other combinations (with and without the added testcases, with and without --chg, etc.).

I'm on a rather beefy Linux machine, based on Debian testing, using Python 3.9, on a commit descended from 856820b4.

It hasn't failed a single time, let alone like that. I also don't know what would cause that kind of failure. :(

Mhhh. It may be possible that your machine is too fast to cause the race. Our runners are not the fastest machines in the world and they are under quite a bit of load; it wouldn't be the first time I've seen that happen. Could you try to artificially load up your CPU while you run the tests with chg? Otherwise we'll have to dig deeper.

The error seems to hint at an attempted cleanup of a not fully initialized dirstateguard. So it might be the symptom of another failure that probably has a narrow window for happening.

The test does not use a Python extension, so it looks like that race window exists in the main code. @spectral, what is the range of code you want to ensure a race with, and how does the synchronization happen to reach it?

FWIW, it happened to me again (so far 100% of my pushes, with an impressive 2 pushes), so probably not a one-off. If you need access to the Heptapod CI to figure that out, please say so.

@spectral, what is the range of code you want to ensure a race with, and how does the synchronization happen to reach it?

I'm attempting to reproduce a timeline like:

  1. process A starts
  2. process A reads the changelog, identifies it as having, let's say, 10 entries
  3. process A acquires locks (note: I think steps 2 and 3 can be swapped)
  4. process A starts the editor (for a commit message or whatever)
  5. user deletes the lock file (maybe they don't realize why it's still held - it's in a screen session or something)
  6. process B starts and runs to completion, appending to the changelog. Changelog now has 11 entries.
  7. user finds process A's editor, quits it
  8. process A appends to the changelog, thinking there were only 10 entries, but it actually has 11 due to process B writing to it.

In the test, the synchronization happens between steps 4 and 5 - the "editor" signals it has started, which lets step 5 execute - and again between steps 6 and 7 - the "editor" has been waiting for a signal saying that step 6 has finished, and upon receiving it, exits 'normally'.
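
For reference, a rough Python rendering of that fake "editor" (the real test uses a shell script; the environment variable names here are made up):

  #!/usr/bin/env python3
  # Fake editor: announce that process A reached step 4, then block until the
  # test signals that process B (step 6) has finished, then exit "normally"
  # so process A goes on to append to the changelog (step 8).
  import os
  import sys
  import time

  announce = os.environ.get('EDITOR_STARTED_FILE')
  waitfor = os.environ.get('MUTATION_DONE_FILE')

  if announce:
      open(announce, 'w').close()

  deadline = time.time() + 60
  while waitfor and not os.path.exists(waitfor) and time.time() < deadline:
      time.sleep(0.1)

  sys.exit(0)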

The only way I can think of for this to break is if something is causing the dirstate.backup files to be deleted either in step 5 (debuglocks being used to delete the locks deletes the backup files) or step 6 (process B just blindly deletes all the backup files).

My guess: my machine has some security hardening enabled that your test runner doesn't have? Assuming you're using CPython, the number on the end of these files is the memory address of the dirstateguard object, and because of chg, there's actually a rather high likelihood that it'll be the same between processes. I don't know why I wasn't able to reproduce, but if I change id(self) to instead be a constant string, it does reproduce.
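
A hypothetical illustration of that failure mode (the backup-name pattern below is paraphrased for this discussion, not copied from dirstateguard):

  class fakeguard(object):
      def __init__(self, name):
          # In CPython, id(self) is the object's memory address. Two chg
          # workers forked from the same command server can allocate their
          # guard at the same address, so the backup file names collide and
          # one process can clobber or delete the other's backup.
          self.backupname = b'dirstate.backup.%s.%d' % (name, id(self))

  print(fakeguard(b'commit').backupname)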

I haven't tried to figure out how to run the tests on the test runner yet, but I'm relatively confident that this is the cause. I've sent D10504 to address it.

Thanks for looking into it!

If you want to run this through the Heptapod CI, simply put your change in a topic and push it to the Heptapod repositories. This will trigger a CI run.