It seems like 2048 directories ought to be enough for any reasonable
use of Mercurial?
A previous version of this patch scanned for slashes before any allocations
occurred. That approach is slower than this one in the happy path, but much
faster when too many slashes are encountered. We may want to revisit it in the
future using memchr(), so the scan is handled by whatever well-optimized
implementation the libc provides.
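For comparison, a memchr()-based pre-scan along those lines might look roughly like the sketch below. This is illustrative only (the helper name toodeep() is made up, and the earlier revision of the patch may have differed); the 2048 cap matches the limit described in the note below.

  #include <stddef.h>
  #include <string.h>

  /*
   * Sketch: count '/' separators with memchr() before doing any allocation,
   * bailing out as soon as the cap is exceeded.  The libc's memchr() is
   * typically hand-optimized, so the scan itself is cheap.
   */
  static int toodeep(const char *path, size_t len)
  {
      size_t ncomponents = 1;
      const char *p = path, *end = path + len;

      while ((p = memchr(p, '/', end - p)) != NULL) {
          if (++ncomponents >= 2048)
              return 1; /* 2048 or more components: reject */
          p++;
      }
      return 0;
  }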
.. bc::

   Mercurial will now defend against OOMs by refusing to operate on paths
   with 2048 or more components. This means that *extremely* deep path
   hierarchies will be rejected, but we anticipate nobody is using
   hierarchies this deep.
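For context, the shape of the check being added is roughly the following. This is a simplified sketch, not the actual hunk: checkpathdepth() is a made-up name, and in the real change the failure is reported through Mercurial's usual error handling rather than a bare return code.

  #include <stddef.h>

  /*
   * Count components while walking the path byte by byte, as the existing
   * parsing loop already does, and refuse to continue once 2048 components
   * have been seen.
   */
  static int checkpathdepth(const char *path, size_t len)
  {
      size_t pos, ncomponents = 1;

      for (pos = 0; pos < len; pos++) {
          if (path[pos] == '/' && ++ncomponents >= 2048)
              return -1; /* too many components: refuse to operate */
      }
      return 0;
  }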
What code calls this function? Do we have any good perf numbers for introducing this loop?
I ask because the diffing code is surprisingly impacted by the "find newlines" stage: using an implementation that the compiler can expand to SSE/AVX instructions is substantially faster. FWIW, glibc and other libc implementations have assembly versions of strchr() and memchr(), which could be a significant win if the compiler isn't smart enough to recognize the "count occurrences of a character" pattern on its own.
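If it would help to gather numbers, a throwaway microbenchmark along these lines could compare the plain counting loop against a memchr()-based scan on a synthetic deep path. This is entirely hypothetical (names and sizes made up, POSIX clock_gettime() assumed) and is not part of the patch:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>

  /* Plain per-byte counting loop; the compiler may or may not vectorize it. */
  static size_t count_loop(const char *p, size_t len)
  {
      size_t i, n = 0;

      for (i = 0; i < len; i++)
          if (p[i] == '/')
              n++;
      return n;
  }

  /* memchr()-based counting, leaning on the libc's optimized scanner. */
  static size_t count_memchr(const char *p, size_t len)
  {
      size_t n = 0;
      const char *end = p + len;

      while ((p = memchr(p, '/', end - p)) != NULL) {
          n++;
          p++;
      }
      return n;
  }

  static double elapsed(struct timespec a, struct timespec b)
  {
      return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
  }

  int main(void)
  {
      size_t components = 5000, len, i, total = 0;
      struct timespec t0, t1, t2;
      int iters = 100000, k;
      char *path;

      /* Build a synthetic "a/a/a/..." path with `components` components. */
      len = components * 2 - 1;
      path = malloc(len + 1);
      if (!path)
          return 1;
      for (i = 0; i < len; i++)
          path[i] = (i % 2) ? '/' : 'a';
      path[len] = '\0';

      clock_gettime(CLOCK_MONOTONIC, &t0);
      for (k = 0; k < iters; k++)
          total += count_loop(path, len);
      clock_gettime(CLOCK_MONOTONIC, &t1);
      for (k = 0; k < iters; k++)
          total += count_memchr(path, len);
      clock_gettime(CLOCK_MONOTONIC, &t2);

      printf("loop:   %.3fs\nmemchr: %.3fs\n(checksum %zu)\n",
             elapsed(t0, t1), elapsed(t1, t2), total);
      free(path);
      return 0;
  }

Built with -O2, this would at least show whether the libc memchr() beats whatever the compiler does with the plain loop on the platforms we care about.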