
rust-discovery: using from Python code
Needs Review · Public

Authored by gracinet on Wed, May 22, 1:00 PM.

Details

Reviewers
None
Group Reviewers
hg-reviewers
Summary

As previously done in other topics, the Rust version is used if it's been
built.
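The "use Rust if it's been built" selection can be sketched with a small conditional-import helper. This is a hedged illustration of the pattern, not Mercurial's actual helper; the module name `_rust_discovery_ext` and the function signature are assumptions for the example.

```python
# Hedged sketch of the "Rust if built, else pure Python" pattern.
# Not Mercurial's actual code; names are illustrative.

def importrust(modname, member, default=None):
    """Return the named member of the compiled Rust module when it is
    importable, else the given pure-Python default."""
    try:
        mod = __import__(modname)
    except ImportError:
        return default
    return getattr(mod, member, default)

class PurePartialDiscovery(object):
    """Stand-in for the pure Python partialdiscovery class."""

# '_rust_discovery_ext' is a hypothetical module name; when the Rust
# extension is not built, the pure Python class is used transparently.
partialdiscovery = importrust('_rust_discovery_ext', 'PartialDiscovery',
                              default=PurePartialDiscovery)
```

Callers then instantiate `partialdiscovery` without caring which implementation was selected.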

The fully-Rust version of the partialdiscovery class has a performance
advantage over the Python version (which itself uses the Rust MissingAncestors)
if the undecided set is big enough. Otherwise no sampling occurs, and the
discovery is reasonably fast anyway.

Note: it is hard to predict the size of the initial undecided set; it depends
on the kind of topological changes between the local and remote graphs.
The point of the Rust version is to make the bad cases acceptable.

More specifically, the performance advantages are:

  • faster sampling, especially takefullsample()
  • much faster addmissings() in almost all cases (see commit message in grandparent of the present changeset)
  • no conversion cost of the undecided set at the interface between Rust and Python
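The last point can be illustrated with a pure-Python stand-in for the interface shape: all large sets (undecided, common) live inside one object, and only small samples cross the boundary, rather than converting the whole undecided set on every call. Class and method names here are illustrative, not the actual API.

```python
class DiscoveryState(object):
    """Illustrative stand-in: discovery state stays in one place (in the
    real series, inside the Rust object), so the big undecided set is
    never converted back and forth across the Python/Rust boundary."""

    def __init__(self, undecided):
        self._undecided = set(undecided)
        self._common = set()

    def addcommons(self, revs):
        # Mark revisions as common and shrink the undecided set in place.
        self._common.update(revs)
        self._undecided.difference_update(revs)

    def takesample(self, size):
        # Only a small sample crosses the interface, not the whole set.
        return sorted(self._undecided)[:size]

    def iscomplete(self):
        return not self._undecided
```

With a large graph, `takesample()` returns a few hundred revisions per round while the million-entry undecided set never leaves the object.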

Measurements with big undecided sets

As an extreme example, for discovery between mozilla-try and mozilla-unified
(over one million undecided revisions, the same case as in dbd0fcca6dfc), we
get roughly 2.5x to 3x better performance:

  • Growing sample size (5% per round, starting at 200): time goes down from 210 to 72 seconds.
  • Constant sample size of 200: time goes down from 1853 to 659 seconds.
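The shape of the growing-sample schedule (not the timings) can be sketched in a few lines, assuming 5% growth per round starting at 200; the function name and parameters are illustrative, not Mercurial's internals.

```python
def sample_size_schedule(start=200, growth=1.05, rounds=10):
    """Hedged sketch of a growing sample-size schedule: the sample is
    allowed to grow by 5% each discovery round instead of staying
    constant, which cuts the number of round-trips on huge graphs."""
    size = float(start)
    sizes = []
    for _ in range(rounds):
        sizes.append(int(size))
        size *= growth
    return sizes
```

After a few dozen rounds the sample is several times the starting size, which is why far fewer rounds are needed than with a constant sample of 200.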

With a sample size computed from number of roots and heads of the
undecided set (respectsize is False), here are perfdiscovery results:

Before ! wall 9.358729 comb 9.360000 user 9.310000 sys 0.050000 (median of 50)
After ! wall 3.793819 comb 3.790000 user 3.750000 sys 0.040000 (median of 50)

In that latter case, the sample sizes are routinely in the hundreds of
thousands of revisions. While still faster, the Rust iteration in
addmissings has less of an advantage than with smaller sample sizes, but
one sees addcommons becoming faster, probably a consequence of not having
to copy big sets back and forth.

This example is not a goal in itself, but it showcases several different
areas in which the process can become slow, due to different factors, and
how this full Rust version can help.

Measurements with small undecided sets

In cases where the undecided set is small enough that no sampling occurs,
the Rust version has a disadvantage at init if targetheads is really big
(some time is lost in the translation to Rust data structures),
which is compensated by the faster addmissings().

On a private repository with over one million commits, we still get a minor
improvement of 6.8%:

Before ! wall 0.593585 comb 0.590000 user 0.550000 sys 0.040000 (median of 50)
After  ! wall 0.553035 comb 0.550000 user 0.520000 sys 0.030000 (median of 50)

What's interesting in that case is the first addinfo() call: 180ms for Rust
versus 233ms for Python+C, mostly because add_missings and the children cache
computation take less than 0.2ms on the Rust side versus over 40ms on the
Python side.
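As an illustration of the children cache mentioned above, here is a hedged sketch of inverting a parents mapping into a children mapping, the kind of precomputation that dominated the Python-side cost. The function name and the dict-of-tuples shape are assumptions for the example, not Mercurial's internal representation.

```python
def childrencache(parents):
    """Build a rev -> list-of-children mapping from a rev -> parents
    mapping (illustrative shape: dict of rev to tuple of parent revs).
    A single pass over the graph; each edge is visited exactly once."""
    children = {rev: [] for rev in parents}
    for rev, ps in parents.items():
        for p in ps:
            if p in children:
                children[p].append(rev)
    return children
```

On a graph with millions of edges, doing this inversion in Rust rather than as a Python-level loop is where most of the 40ms-to-0.2ms difference comes from.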

The worst case we have at hand is mozilla-try, prepared with
discovery-helper.sh for 10 heads and depth 10: time goes up 2.2% on the median.
In this case targetheads is really huge, with 165842 server heads.

Before ! wall 0.823884 comb 0.810000 user 0.790000 sys 0.020000 (median of 50)
After  ! wall 0.842607 comb 0.840000 user 0.800000 sys 0.040000 (median of 50)

Should that be considered a problem, further adjustments could be made, though
they would be premature at this stage: special variants of methods of the inner
MissingAncestors object, or retrieving local heads directly from Rust to avoid
the cost of conversion. Effort would probably be better spent at this point
improving the surroundings if needed.

Here's another data point with a smaller repository, pypy, where performance
is almost identical:

Before ! wall 0.015121 comb 0.030000 user 0.020000 sys 0.010000 (median of 186)
After ! wall 0.015009 comb 0.010000 user 0.010000 sys 0.000000 (median of 184)

Diff Detail

Repository
rHG Mercurial
Lint
Lint Skipped
Unit
Unit Tests Skipped

Event Timeline

gracinet created this revision.Wed, May 22, 1:00 PM

I think this series needs to be updated for D2647. I'm curious to see the timing numbers after that patch (I can get those myself once the series is updated if you're tired of running the perf commands).

gracinet edited the summary of this revision. (Show Details)Wed, Jun 12, 2:17 PM
gracinet updated this revision to Diff 15473.

Update of the whole series done, but it ended up in an inconsistent state and I have to go. Please don't act on it until I signal it's ready (sorry for the inconvenience).

gracinet edited the summary of this revision. (Show Details)Thu, Jun 13, 9:33 AM
gracinet updated this revision to Diff 15487.

Instability fixed, and I took the opportunity to rebase again to leverage policy.importrust, which was queued yesterday (rHGf7385ed775a8).

I added new performance measurements for the big pathological case (mozilla-try / mozilla-unified) with respectsize=False. The speedup compared to the parent commit is still about 2.5x for me, though not exactly for the same reasons, which is interesting.