This patch makes two optimizations:
- Avoid sorting headrevs since it's already sorted.
- Inline cl.node so there are no node hash table lookups inside the loop.
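The second optimization is the classic "bind the method to a local name"
micro-optimization: looking up cl.node once outside the loop avoids a
per-iteration attribute lookup. The sketch below illustrates the pattern with
a minimal stand-in changelog, not Mercurial's actual class:

```python
class FakeChangelog:
    """Minimal stand-in: maps revision numbers to fake 40-byte node hashes."""

    def __init__(self, n):
        self._nodes = [b"%040d" % r for r in range(n)]

    def node(self, rev):
        return self._nodes[rev]


def nodes_slow(cl, revs):
    # the attribute lookup cl.node happens on every iteration
    return [cl.node(r) for r in revs]


def nodes_fast(cl, revs):
    # bind the bound method once, outside the loop
    node = cl.node
    return [node(r) for r in revs]


cl = FakeChangelog(100)
assert nodes_slow(cl, range(100)) == nodes_fast(cl, range(100))
```

Both functions return the same result; the second simply does less work per
iteration, which matters when the loop body is tiny and runs thousands of
times (node was called 4087 times in the profile below).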
After this patch, branchcache.update's bottleneck is at the native code
path (headrevs and node):
    63 \ <lambda>  (4 times)  namespaces.py:55
    63  | branchtip  (4 times)  localrepo.py:965
    63  | branchmap  (4 times)  localrepo.py:953
    63  | _branchmapupdatecache  (6 times)  perftweaks.py:105
    62  | _branchmapupdate  perftweaks.py:85
    46   \ headrevs  changelog.py:336
    14   \ node  (4087 times)  changelog.py:361
We could either speed up headrevs, or make it lazy somehow so that not all
heads need to be loaded.