This [very hacky] commit implements a new file store that uses
Redis to cache file data. It services requests from Redis (if
available) and falls back to a base store / interface (revlogs in
our case) on misses.
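The read-through pattern is simple enough to sketch. This is an illustrative
mock-up, not the actual implementation - the class and method names are made
up, and any object with get()/set() (like a redis.Redis client) would work:

```python
class CachingFileStore:
    """Read-through cache over a base file store.

    Serves file data from a key-value cache when possible and falls
    back to the base store (revlogs in our case) on misses,
    populating the cache along the way. Names here are hypothetical.
    """

    def __init__(self, cache, base):
        # ``cache`` needs get()/set() (e.g. a redis.Redis instance);
        # ``base`` is anything implementing the file store interface.
        self._cache = cache
        self._base = base

    def _key(self, path, node):
        # Cache key derived from file path and revision node.
        return b'filedata:%s:%s' % (path, node)

    def read(self, path, node):
        key = self._key(path, node)
        data = self._cache.get(key)
        if data is not None:
            return data  # cache hit: no base store access at all
        # Cache miss: consult the base store and populate the cache.
        data = self._base.read(path, node)
        self._cache.set(key, data)
        return data
```

The point is that the caching layer only needs the interface, not revlog
internals - the base store is swappable.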
The purpose of this commit is first to demonstrate the value of having
interfaces for storage. If we code to the interface, then another
implementation can come along and do useful things - like caching.
The other purpose was to investigate performance. Would a memory-backed
key-value store have a significant impact on the performance of our
experimental wire protocol command for serving file data fulltexts for
specific revisions? The answer is a very resounding yes!
Using the same mozilla-unified revision from the previous commit:
- no compression: 1478MB; ~94s wall; ~56s CPU
  w/ hot redis: 1478MB; ~9.6s wall; ~8.6s CPU
- zstd level 3: 343MB; ~97s wall; ~57s CPU
  w/ hot redis: 343MB; ~8.5s wall; ~8.3s CPU
- zstd level 1 w/ hot redis: 377MB; ~6.8s wall; ~6.6s CPU
- zlib level 6: 367MB; ~116s wall; ~74s CPU
  w/ hot redis: 367MB; ~36.7s wall; ~36s CPU
For the curious, the lsprof profiler says that our hotspot without
compression is in socket I/O. With zstd compression, the hotspot is
compression.
I reckon the reason for the socket I/O overhead is that we end up
writing many more chunks to the wire when uncompressed (compression
effectively ensures each output chunk is a similar, largeish size).
All those extra Python function calls and system calls do add up!
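To illustrate the chunking point, here's a hypothetical sketch of coalescing
small chunks into fewer, larger writes - the function name and target size
are made up for illustration and are not what the wire protocol actually
does, but they show why fewer, bigger writes mean fewer function/system
calls:

```python
def coalescechunks(chunks, target=65536):
    """Buffer small chunks into roughly ``target``-sized pieces.

    Emitting fewer, larger chunks means fewer Python-level calls and
    fewer send() system calls on the socket. (Illustrative only.)
    """
    buf = []
    size = 0
    for chunk in chunks:
        buf.append(chunk)
        size += len(chunk)
        if size >= target:
            yield b''.join(buf)
            buf = []
            size = 0
    if buf:
        # Flush whatever is left over at the end of the stream.
        yield b''.join(buf)
```

Compression gives you this batching for free, since the compressor
accumulates input and emits output frames of a similar, largish size.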
Anyway, I'm definitely happy with the performance improvements. I'd
say this was a useful experiment!