This is an archive of the discontinued Mercurial Phabricator instance.

util: allow lrucachedict to track cost of entries
Closed, Public

Authored by indygreg on Sep 6 2018, 9:17 PM.

Details

Summary

Currently, lrucachedict allows tracking of arbitrary items with the
only limit being the total number of items in the cache.

Caches can be a lot more useful when they are bounded by the
total size of the items in them rather than by the number of
items in the cache.

In preparation for teaching lrucachedict to enforce a max size of
cached items, we teach lrucachedict to optionally associate a numeric
cost value with each node.

We purposefully let the caller define their own cost for nodes.
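
For illustration, usage looks roughly like this (a sketch; insert()
and totalcost are from this patch, and the keyword name cost is
assumed to match it):

  from mercurial import util

  d = util.lrucachedict(4)  # capacity: 4 entries
  # The caller decides what cost means; here we charge the value's length.
  d.insert(b'ctx1', b'some revision text', cost=len(b'some revision text'))
  d.insert(b'ctx2', b'more text', cost=9)

  # The cache maintains a running total of the costs of live entries.
  print(d.totalcost)  # 27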

This does introduce some overhead. Most of it comes from
__setitem__(), since that function now calls into insert(), thus
introducing Python function call overhead.
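
The delegation looks roughly like this (a sketch, not the verbatim
patch):

  def __setitem__(self, k, v):
      # Every d[k] = v store now routes through insert(), paying one
      # extra Python-level function call; plain stores carry no cost.
      self.insert(k, v)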

$ hg perflrucachedict --size 4 --gets 1000000 --sets 1000000 --mixed 1000000
! gets
! wall 0.599552 comb 0.600000 user 0.600000 sys 0.000000 (best of 17)
! wall 0.614643 comb 0.610000 user 0.610000 sys 0.000000 (best of 17)
! inserts
! <not available>
! wall 0.655817 comb 0.650000 user 0.650000 sys 0.000000 (best of 16)
! sets
! wall 0.540448 comb 0.540000 user 0.540000 sys 0.000000 (best of 18)
! wall 0.805644 comb 0.810000 user 0.810000 sys 0.000000 (best of 13)
! mixed
! wall 0.651556 comb 0.660000 user 0.660000 sys 0.000000 (best of 15)
! wall 0.781357 comb 0.780000 user 0.780000 sys 0.000000 (best of 13)

$ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000
! gets
! wall 0.621014 comb 0.620000 user 0.620000 sys 0.000000 (best of 16)
! wall 0.615146 comb 0.620000 user 0.620000 sys 0.000000 (best of 17)
! inserts
! <not available>
! wall 0.698115 comb 0.700000 user 0.700000 sys 0.000000 (best of 15)
! sets
! wall 0.560247 comb 0.560000 user 0.560000 sys 0.000000 (best of 18)
! wall 0.832495 comb 0.830000 user 0.830000 sys 0.000000 (best of 12)
! mixed
! wall 0.686172 comb 0.680000 user 0.680000 sys 0.000000 (best of 15)
! wall 0.841359 comb 0.840000 user 0.840000 sys 0.000000 (best of 12)

We're still under 1us per insert (0.70s wall for 1,000,000 inserts
works out to ~0.70us each), which seems like reasonable performance
for a cache.

If we comment out the updating of self.totalcost during insert(),
performance of insert() is identical to that of __setitem__() before
this change. However, I don't want to make total cost evaluation
lazy: we will need the total cost at mutation time (to enforce a
maximum size), and computing it lazily requires a cache traversal,
which could be expensive for large caches.
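
To make that trade-off concrete, here is a standalone sketch of the
two accounting strategies (illustrative only; the linked-list node
bookkeeping of the real class is elided):

  class eagercost(object):
      """Maintain a running total: O(1) work per insert."""
      def __init__(self):
          self._costs = {}
          self.totalcost = 0

      def insert(self, key, cost):
          # Subtract any cost already charged for this key so a
          # replacement does not double-count.
          self.totalcost += cost - self._costs.get(key, 0)
          self._costs[key] = cost

  class lazycost(object):
      """Recompute on demand: an O(n) traversal per query, which a
      size-enforcing cache would pay on every mutation."""
      def __init__(self):
          self._costs = {}

      def insert(self, key, cost):
          self._costs[key] = cost

      @property
      def totalcost(self):
          return sum(self._costs.values())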

Diff Detail

Repository
rHG Mercurial
Lint
Automatic diff as part of commit; lint not applicable.
Unit
Automatic diff as part of commit; unit tests not applicable.

Event Timeline

indygreg created this revision. Sep 6 2018, 9:17 PM
lothiraldan added inline comments.
mercurial/util.py
1277

I'm not sure this line is tested; I didn't see a test where we replace an entry with an associated cost.
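
Something along these lines would exercise it (a sketch, assuming
the insert()/totalcost API from this patch): replacing an entry
should replace its cost rather than add to it.

  from mercurial import util

  d = util.lrucachedict(4)
  d.insert('a', 'v1', cost=5)
  d.insert('a', 'v2', cost=3)
  assert d.totalcost == 3, d.totalcost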

indygreg planned changes to this revision. Sep 7 2018, 3:12 PM
indygreg added inline comments.
mercurial/util.py
1277

Good catch! I'll send a revised patch.

FWIW, cost accounting on this data structure opens up a lot of potential around caching on revlogs. I have some alpha-quality commits to replace the full revision cache on the revlog with an lrucachedict and to add a decompressed chunk cache to revlogs. Such caches can speed up certain operations drastically. However, we need to be careful about adding always-on caches to revlogs because they can result in memory bloat.

I was thinking about adding context manager methods to revlogs to temporarily activate certain aggressive caches in order to facilitate certain operations. For example, a fulltext or chunk cache when applying delta groups could make it drastically faster to compute deltas during bulk insertion, and a chunk cache could make reverse walks significantly faster. I figured you'd be interested given recent work in this area :)
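
For a rough idea of the shape of such a context manager (purely
illustrative; the class and attribute names below are invented, not
real revlog internals):

  import contextlib

  class fakerevlog(object):
      def __init__(self):
          self._fulltextcache = None  # aggressive caching off by default

      @contextlib.contextmanager
      def aggressivecaching(self):
          # Enable a cache only while a bulk operation (e.g. applying
          # a delta group) runs, then drop it so steady-state memory
          # use stays low.
          self._fulltextcache = {}  # stand-in for a cost-bounded lrucachedict
          try:
              yield self
          finally:
              self._fulltextcache = None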

indygreg updated this revision to Diff 10832. Sep 7 2018, 3:14 PM
lothiraldan added inline comments. Sep 8 2018, 8:36 AM
mercurial/util.py
1277

Having a weighted cache would combine well with our work on intermediate snapshots. If we can keep the right intermediate snapshots in the cache, we will get a lot of useful cache hits.

This revision was automatically updated to reflect the committed changes.