
rust: translated random test of missingancestors
ClosedPublic

Authored by gracinet on Dec 12 2018, 10:11 AM.

Details

Summary

This is a Rust implementation of the random
DAG generator and the related incremental `missingancestors`
tests, which check the optimized implementation against a
naive brute-force one.
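The brute-force side of such a comparison can be sketched as follows. This is an illustrative reconstruction, not the actual hg-core code: the parent-list graph representation and the function names are assumptions.

```rust
use std::collections::HashSet;

type Revision = usize;

/// All ancestors of `revs`, including `revs` themselves.
/// `graph[r]` lists the parents of revision `r`.
fn ancestors(graph: &[Vec<Revision>], revs: &[Revision]) -> HashSet<Revision> {
    let mut seen = HashSet::new();
    let mut stack: Vec<Revision> = revs.to_vec();
    while let Some(r) = stack.pop() {
        if seen.insert(r) {
            stack.extend(&graph[r]);
        }
    }
    seen
}

/// Naive "missing ancestors": ancestors of `revs` that are not
/// ancestors of any of `bases`. A randomized test would check the
/// optimized implementation against this on generated DAGs.
fn missing_ancestors(
    graph: &[Vec<Revision>],
    bases: &[Revision],
    revs: &[Revision],
) -> HashSet<Revision> {
    let known = ancestors(graph, bases);
    ancestors(graph, revs).difference(&known).cloned().collect()
}

fn main() {
    // Linear graph 0 <- 1 <- 2 <- 3 (each revision's parent list).
    let graph = vec![vec![], vec![0], vec![1], vec![2]];
    let missing = missing_ancestors(&graph, &[1], &[3]);
    let expected: HashSet<Revision> = [2, 3].iter().cloned().collect();
    assert_eq!(missing, expected);
}
```

The quadratic cost of recomputing full ancestor sets is what makes this only suitable as a test oracle, not as the production algorithm.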

It is provided as an integration test, so that it
won't run by default if any unit test fails.

In case of a failed example, all the information needed
for reproduction, including the random seed, is included
in the panic message (this is how
`test_remove_ancestors_from_case1()` was generated).
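That self-reproducing style can be sketched like this (hypothetical names; the real test would embed the generated graph, the bases and revs, and the seed in the same way):

```rust
use std::panic;

/// Compare the optimized result against the naive one, embedding
/// the inputs in the panic message so a failure can be replayed
/// without re-seeding the generator. (Illustrative sketch.)
fn check_example(bases: &[usize], revs: &[usize], fast: &[usize], naive: &[usize]) {
    assert_eq!(
        fast, naive,
        "mismatch for bases={:?} revs={:?}",
        bases, revs
    );
}

fn main() {
    // Silence the default hook so the demo failure below stays quiet.
    panic::set_hook(Box::new(|_| {}));
    let result = panic::catch_unwind(|| check_example(&[1], &[3], &[2, 3], &[3]));
    assert!(result.is_err()); // the failing example panicked with its inputs
    // A matching example passes silently.
    check_example(&[1], &[3], &[2, 3], &[2, 3]);
}
```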

The whole test is rerunnable by passing the random seed
in the TEST_RANDOM_SEED environment variable.
The other parameters (number of iterations) can be passed
in the TEST_MISSING_ANCESTORS environment variable.
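Reading those variables could look like the following sketch. The variable names TEST_RANDOM_SEED and TEST_MISSING_ANCESTORS come from the description above; the helper names and default values are made up:

```rust
/// Parse an optional string as u64, falling back to a default.
fn parse_or(value: Option<String>, default: u64) -> u64 {
    value.and_then(|s| s.parse().ok()).unwrap_or(default)
}

/// Fetch a test parameter from the environment, with a default.
fn u64_from_env(name: &str, default: u64) -> u64 {
    parse_or(std::env::var(name).ok(), default)
}

fn main() {
    // Hypothetical defaults; the real test uses its own.
    let seed = u64_from_env("TEST_RANDOM_SEED", 0x1234);
    let iterations = u64_from_env("TEST_MISSING_ANCESTORS", 10);
    println!("seed={} iterations={}", seed, iterations);

    assert_eq!(parse_or(Some("42".to_string()), 7), 42);
    assert_eq!(parse_or(None, 7), 7);
}
```

Environment variables sidestep the fact that `cargo test` already claims the command-line arguments of the test binary.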

An alternative would have been to expose
MissingAncestors&lt;VecGraphs&gt; to Python, but that would have
polluted the release build used from Python, whereas this
changeset keeps everything within the tests submodule.

Diff Detail

Repository
rHG Mercurial
Lint
Automatic diff as part of commit; lint not applicable.
Unit
Automatic diff as part of commit; unit tests not applicable.

Event Timeline

gracinet created this revision.Dec 12 2018, 10:11 AM
gracinet updated this revision to Diff 12833.Dec 12 2018, 11:26 AM
gracinet updated this revision to Diff 12856.Dec 14 2018, 12:59 PM
yuja added a subscriber: yuja.Dec 14 2018, 10:29 PM
In case of a failed example, all needed information
for reproduction is included in the panic message,
so that we don't need to
seed the random generator explicitly. This is how
`test_remove_ancestors_from_case1()` has been generated.

Can we turn this into a utility command? I don't think it's a good idea to
include random data in module tests. We'll probably want to loop it over
various seeds to find interesting graphs.

@yuja: I'll look into it, maybe that's a case for benches (I've not played with them yet).

@yuja now that 4.9 is out, I'm getting back to this.

This test is useful for me, because it's the only one that really tests the correctness of `MissingAncestors`, and it's not a bench either.
I agree it feels out of place with the unit tests, so my proposal is to make an integration test out of it.

If we don't want to run it, we can do `cargo test --lib`, and in any case `cargo test` does not run integration tests (nor doctests) if the unit tests fail.

yuja added a comment.Feb 5 2019, 7:45 AM
This test is useful for me, because it's the only one that really tests the correctness of `MissingAncestors`, and it's not a bench either.
I agree it feels out of place with the unit tests, so my proposal is to make an integration test out of it.

or examples/ to build a dev-only standalone executable?

I don't know which is better, but I think we'll need a way to get around
the randomness (e.g. pass in a static random seed to reproduce a failed
example.)

@yuja we don't need a seed to reproduce: failed examples contain all the information (that's a difference with the Python version)

yuja added a comment.Feb 5 2019, 5:44 PM
@yuja we don't need a seed to reproduce: failed examples contain all the information (that's a difference with the Python version)

Yes, but isn't it handy if we can run the same test with some debugging aids?

I didn't look carefully through the original patch, so maybe I'm missing the point.
But IIRC, this test had some parameters we would probably want to tune. The
random seed is just an example.

gracinet edited the summary of this revision. (Show Details)Feb 12 2019, 9:05 AM
gracinet updated this revision to Diff 14044.

@yuja I'm trying to find a compromise between our points of view: I'd very much like it to be run automatically (hence an integration test), and you want it to be parametrizable. So I'm pushing a version that has both properties, working with environment variables, since command-line arguments are already used by the testing framework. I hope it suits you; if it doesn't, I'll put it in examples/ and find a way to run it anyway.

This revision was automatically updated to reflect the committed changes.