This is an archive of the discontinued Mercurial Phabricator instance.

run-tests: add support for external test result
ClosedPublic

Authored by lothiraldan on Jun 7 2018, 3:19 PM.

Details

Summary

The goal is to begin experimenting with custom test results. I'm not sure we
should offer any backward-compatibility guarantee on that plugin API, as it
doesn't change often and shouldn't have too many clients.
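
For readers unfamiliar with the plugin API being discussed, here is a minimal sketch of what an external test result class could look like, assuming it only needs to implement the unittest.TestResult interface that run-tests.py builds on. The class name and output format are illustrative, not the ones shipped with the actual patch, and the way the class gets loaded by run-tests is left out.

# Hypothetical external test result class; assumes only the standard
# unittest.TestResult hooks are needed. Not the code from the actual patch.
import sys
import unittest


class StreamingResult(unittest.TestResult):
    """Report each test outcome as soon as it is known."""

    def addSuccess(self, test):
        super(StreamingResult, self).addSuccess(test)
        sys.stdout.write('PASS %s\n' % str(test))

    def addFailure(self, test, err):
        super(StreamingResult, self).addFailure(test, err)
        sys.stdout.write('FAIL %s\n' % str(test))

    def addError(self, test, err):
        super(StreamingResult, self).addError(test, err)
        sys.stdout.write('ERROR %s\n' % str(test))

    def addSkip(self, test, reason):
        super(StreamingResult, self).addSkip(test, reason)
        sys.stdout.write('SKIP %s: %s\n' % (str(test), reason))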

Diff Detail

Repository
rHG Mercurial
Lint
Automatic diff as part of commit; lint not applicable.
Unit
Automatic diff as part of commit; unit tests not applicable.

Event Timeline

lothiraldan created this revision. Jun 7 2018, 3:19 PM

I see some what, but not any why. Why is this useful?

lothiraldan updated this revision to Diff 9031. Jun 12 2018, 5:05 PM

I see some what, but not any why. Why is this useful?

I need this changeset to integrate the Mercurial test runner with some external tools.

I see some what, but not any why. Why is this useful?

I need this changeset to integrate the Mercurial test runner with some external tools.

I'd still like more information. Why is the json report inadequate? What's your goal?

(Remember my perspective: every feature here is a liability, so anything we're not using for development on Mercurial is something I'm hesitant to take on in run-tests)

lothiraldan added a comment (edited). Jul 2 2018, 8:16 AM

I see some what, but not any why. Why is this useful?

I need this changeset to integrate the Mercurial test runner with some external tools.

I'd still like more information. Why is the json report inadequate? What's your goal?
(Remember my perspective: every feature here is a liability, so anything we're not using for development on Mercurial is something I'm hesitant to take on in run-tests)

I'm trying to integrate Mercurial with a new test format I'm developing called LITF (https://github.com/Lothiraldan/litf). The LITF format is stream-based, whereas the json report, although it contains the needed information, only dumps a file at the end of the run. Moreover, the format is not final yet, and I would like to avoid generating unnecessary noise on Phabricator / the mailing list.

The end goal is to have better tooling to launch only a subset of tests, to see the results of tests run on a remote machine, and to speed up the development process by, for example, relaunching only the failed tests within a subset.
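
A rough illustration of the difference described above, with made-up function and event names rather than the actual LITF schema: an end-of-run report becomes visible only once every test has finished, while a stream-based reporter emits one JSON line per event as soon as it happens, so an external tool can react while the run is still in progress.

# Hypothetical contrast between end-of-run and stream-based reporting.
# Event names and fields are invented, not the real LITF schema.
import json
import sys


def report_at_end(results, path='report.json'):
    # end-of-run style: nothing is visible until every test has run
    with open(path, 'w') as fp:
        json.dump(results, fp)


def report_streaming(event):
    # stream style: one JSON line per event, flushed immediately so a
    # consumer can act on it before the test run finishes
    sys.stdout.write(json.dumps(event) + '\n')
    sys.stdout.flush()


# e.g. report_streaming({'type': 'test_result',
#                        'test': 'test-commit.t',
#                        'outcome': 'failed'})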

durin42 accepted this revision as: durin42. Jul 3 2018, 2:00 PM

I guess I can live with that.

tests/basic_test_result.py:38

shouldn't this say "SKIP!"?

lothiraldan updated this revision to Diff 9476. Jul 9 2018, 10:41 AM
lothiraldan marked an inline comment as done. Jul 9 2018, 10:42 AM
durin42 accepted this revision. Jul 9 2018, 10:44 AM
This revision is now accepted and ready to land. Jul 9 2018, 10:44 AM
This revision was automatically updated to reflect the committed changes.