- User Since: Jun 29 2017, 2:56 PM (111 w, 2 d)
Thu, Aug 15
I get 28 of them.
We will need more than just matching the category name to detect experimental/deprecated config options. There are such options outside of the experimental section, and we still need to hide them by default. You can grep for `# experimental config:` to see some examples.
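A minimal sketch of the point above, using an assumed snippet of declarations rather than the real `configitems` source: the `# experimental config:` marker can annotate an option in any section, so matching the section name alone misses it.

```python
# Assumed sample content; the real declarations live in Mercurial's
# configitems, this is only an illustration.
sample = """\
coreconfigitem('experimental', 'evolution')
# experimental config: ui.history-editing-backup
coreconfigitem('ui', 'history-editing-backup')
"""

# The flagged option sits in the 'ui' section, not 'experimental',
# yet the marker comment identifies it as experimental.
flagged = [line for line in sample.splitlines()
           if line.startswith('# experimental config:')]
print(len(flagged))  # 1
```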
Sun, Aug 11
Sat, Aug 10
Fri, Aug 9
Thu, Aug 8
We could warm them in increasing order to improve efficiency. However, this is for the full cache warming, so this looks good enough. (Consider doing them in order in a follow-up.)
It seems like we are not warming enough of the cache with the new scheme.
Wed, Aug 7
For some reason, my previous comment seems to have never made it to Phabricator:
Tue, Aug 6
Forcing this write seems like a good idea. Having it in its own
changeset seems like a good idea too (and please add a comment explaining why the write is forced).
Mon, Aug 5
Sat, Aug 3
The change looks good to me. However, we probably want to introduce a new filter level, 'wdir-independent-visible', to ensure we have a good branchcache in .hg/cache that most shares can use and that will be kept up to date. This also means we need to make sure it is warmed after each transaction.
Overall, the principle seems good. I made a couple of inline comments.
Interesting feature for sure. Thanks for looking into it.
Fri, Aug 2
Jul 4 2019
In the last couple of versions, we already saved minutes in real-life use cases simply by improving pure CPU processing time on the client side. Can you elaborate on what other kinds of evidence you would need to be convinced that CPU-bound cases exist for discovery?
The ×2.5 speedup is the kind of thing that motivates this series. Even if mozilla-unified/mozilla-try is just an example, it triggers the kind of pathological case we encounter in real life: a large undecided set. We keep finding such pathological cases from time to time, and we will keep finding them. In addition, there are cases with a legitimately large undecided set (the mozilla example, for one).
Having this faster code significantly reduces the impact of these pathological cases.
Jun 21 2019
Jun 17 2019
For the record, I am planning to make an extra pass on this this week (in case nobody else gets there first).
Jun 14 2019
Jun 7 2019
Jun 5 2019
Ah, I see. The move from ('plain', '') to ('', 'plain') is matching the key used for the vfsmap?
If so, go ahead with this patch on stable.
Can we go fully explicit, with both 'plain' and 'store' as the possible values?
May 28 2019
May 24 2019
(I did some experiments; this seems like a good spot to report them.)
May 23 2019
May 22 2019
The nodes in the above example were selected by a script because they had interesting properties. They are not based on a tag, so I can't give you one. How did you convert the repo? I think hg convert keeps a map somewhere; otherwise, using the commit message could work.
Can you give a summary of the total speedup of the series (from the base to the last changeset)? Also, I am not sure which case these numbers apply to: is this the compatibility mode, or after repository conversion? Can we have numbers for both?
Something based only on the number of roots can also over-sample. For a "simple" example, imagine an undecided set with many roots that eventually all merge into a few heads.
If most of that set is common between local and remote, a few questions about the part of the history near the heads will quickly "decide" many changesets; numerous questions about the roots part of the history won't.
We could maybe make it a function of both the number of heads and the number of roots. That is not strictly the number of connected sets, but it would provide a more conservative approach. It could over-sample for hourglass shapes, but those are probably less common.
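The heuristic above could be sketched roughly as follows. This is purely illustrative: the function name, the floor value, and the use of `max` are assumptions, not the actual Mercurial setdiscovery code.

```python
# Hypothetical sketch: scale the sample size with both endpoint counts
# of the undecided set, rather than with roots alone.

def conservative_sample_size(num_heads, num_roots, floor=200):
    # Taking the max of the two counts guards against both the
    # "many roots merging into few heads" case and its mirror image.
    # An hourglass-shaped graph may still be over-sampled, but that
    # shape is probably less common.
    return max(floor, num_heads, num_roots)

# Many roots, few heads: the roots dominate the sample size.
print(conservative_sample_size(num_heads=3, num_roots=500))   # 500
# Few of both: fall back to the floor.
print(conservative_sample_size(num_heads=40, num_roots=5))    # 200
```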
May 21 2019
I feel like I am missing something. Your commit message seems to talk about using at least as many items in the sample as there are independent connected sets. However, your code seems to use "heads(undecided)", which is quite different. Using independent connected sets seems like a good trade-off (though it might be expensive to compute). Using all heads can significantly bloat the discovery without giving it a significant edge in many cases.
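To illustrate why the two quantities differ: the toy graph below (an assumption, not real repository data) has two independent connected sets but four heads, so sizing the sample by heads asks twice as many questions as sizing it by connected sets.

```python
# Count independent connected sets with a simple union-find over an
# undirected view of the DAG. Edges are (parent, child) pairs.

def connected_set_count(edges, nodes):
    parent = {n: n for n in nodes}

    def find(x):
        # Path-halving find.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(n) for n in nodes})

# Two disjoint components, each a root forking into two heads.
nodes = ["a", "b", "c", "d", "e", "f"]
edges = [("a", "b"), ("a", "c"), ("d", "e"), ("d", "f")]

# Heads = nodes that are nobody's parent.
heads = [n for n in nodes if n not in {p for p, _ in edges}]
print(len(heads), connected_set_count(edges, nodes))  # 4 2
```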