This is an archive of the discontinued Mercurial Phabricator instance.

xdiff: add a preprocessing step that trims files
ClosedPublic

Authored by quark on Mar 4 2018, 7:50 PM.

Details

Summary

xdiff has an xdl_trim_ends step that removes common and unmatchable
lines. That is good in theory, but it happens too late - after splitting,
hashing, and adjusting the hash values so they are unique. Those splitting,
hashing, and hash-adjusting steps can have noticeable overhead.

Diffing two large files with minor (one-line-ish) changes is not uncommon.
In that case, the raw performance of those preparation steps matters a
great deal. Even allocating an O(N) array and storing line offsets in it is
expensive. Therefore my previous attempts [1] [2] cannot be good enough,
since they do not remove the O(N) array assignment.

This patch adds a preprocessing step - xdl_trim_files - that runs before
the other preprocessing steps. It counts the common prefix and suffix and
the number of lines in them (needed for displaying line numbers), without
doing anything else.
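The idea behind such a trimming pass can be sketched as follows. This is an illustrative sketch only, not the actual xdl_trim_files code; the trim_files name, the trim_result struct, and its fields are all hypothetical:

```c
#include <stddef.h>

/* Hypothetical sketch of the trimming idea: find the common prefix and
 * suffix of two buffers, rounded down to whole lines, and count the
 * lines inside the prefix (needed later to display correct line
 * numbers). The real xdl_trim_files differs in names and details. */
typedef struct {
  size_t prefix_bytes; /* bytes of common, line-aligned prefix */
  size_t prefix_lines; /* number of '\n's inside that prefix */
  size_t suffix_bytes; /* bytes of common, line-aligned suffix */
} trim_result;

static trim_result trim_files(const char *a, size_t na,
                              const char *b, size_t nb) {
  trim_result r = {0, 0, 0};
  size_t small = na < nb ? na : nb;
  size_t i, last_nl = 0;

  /* Common prefix: remember the position just past the last shared
   * newline, so the trimmed prefix ends on a line boundary. */
  for (i = 0; i < small && a[i] == b[i]; i++) {
    if (a[i] == '\n')
      last_nl = i + 1;
  }
  r.prefix_bytes = last_nl;
  for (i = 0; i < last_nl; i++)
    r.prefix_lines += (a[i] == '\n');

  /* Common suffix: scan backwards; whenever the byte just matched is a
   * newline, everything after it forms whole common lines. */
  {
    size_t j = 0, maxsuf = small - r.prefix_bytes;
    while (j < maxsuf && a[na - 1 - j] == b[nb - 1 - j]) {
      j++;
      if (a[na - j] == '\n')
        r.suffix_bytes = j - 1;
    }
  }
  return r;
}
```

With a = "common\nX\ntail\n" and b = "common\nY\ntail\n", this reports a 7-byte, 1-line prefix and a 5-byte suffix ("tail\n"), so only the differing middle lines have to go through splitting and hashing.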

Testing with a crafted large (169MB) file, with minor change:

open('a','w').write(''.join('%s\n' % (i % 100000) for i in xrange(30000000) if i != 6000000))
open('b','w').write(''.join('%s\n' % (i % 100000) for i in xrange(30000000) if i != 6003000))

Running xdiff by a simple binary [3], this patch improves the xdiff perf by
more than 10x for the above case:

# xdiff before this patch
2.41s user 1.13s system 98% cpu 3.592 total
# xdiff after this patch
0.14s user 0.16s system 98% cpu 0.309 total
# gnu diffutils
0.12s user 0.15s system 98% cpu 0.272 total
# (best of 20 runs)

It's still slightly slower than GNU diffutils, but it's pretty close now.

Testing with real repo data:

For the whole repo, this patch makes xdiff 25% faster:

# hg perfbdiff --count 100 --alldata -c d334afc585e2 --blocks [--xdiff]
# xdiff, after
! wall 0.058861 comb 0.050000 user 0.050000 sys 0.000000 (best of 100)
# xdiff, before
! wall 0.077816 comb 0.080000 user 0.080000 sys 0.000000 (best of 91)
# bdiff
! wall 0.117473 comb 0.120000 user 0.120000 sys 0.000000 (best of 67)

For long files (e.g. commands.py), the speedup is more than 3x - very
significant:

# hg perfbdiff --count 3000 --blocks commands.py.i 1 [--xdiff]
# xdiff, after
! wall 0.690583 comb 0.690000 user 0.690000 sys 0.000000 (best of 12)
# xdiff, before
! wall 2.240361 comb 2.210000 user 2.210000 sys 0.000000 (best of 4)
# bdiff
! wall 2.469852 comb 2.440000 user 2.440000 sys 0.000000 (best of 4)

[1]: https://phab.mercurial-scm.org/D2631
[2]: https://phab.mercurial-scm.org/D2634
[3]:

// Code to run xdiff from command line. No proper error handling.
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include "mercurial/thirdparty/xdiff/xdiff.h"
#define ensure(x) if (!(x)) exit(255);
mmfile_t readfile(const char *path) {
  struct stat st; int fd = open(path, O_RDONLY);
  fstat(fd, &st); mmfile_t file = { malloc(st.st_size), st.st_size };
  ensure(read(fd, file.ptr, st.st_size) == st.st_size); close(fd);
  return file;
}
int main(int argc, char const *argv[]) {
  mmfile_t a = readfile(argv[1]), b = readfile(argv[2]);
  xpparam_t xpp = {0}; xdemitconf_t xecfg = {0}; xdemitcb_t ecb = {0};
  xdl_diff(&a, &b, &xpp, &xecfg, &ecb);
  return 0;
}

Diff Detail

Repository
rHG Mercurial
Lint
Automatic diff as part of commit; lint not applicable.
Unit
Automatic diff as part of commit; unit tests not applicable.

Event Timeline

quark created this revision. Mar 4 2018, 7:50 PM
quark updated this revision to Diff 6645. Mar 4 2018, 8:08 PM
quark edited the summary of this revision. (Show Details) Mar 4 2018, 8:32 PM
quark updated this revision to Diff 6650. Mar 4 2018, 11:53 PM
quark edited the summary of this revision. (Show Details) Mar 5 2018, 12:24 AM
quark updated this revision to Diff 6664. Mar 6 2018, 1:09 AM

@quark: Will you be refactoring this based on upstream feedback? Or rebasing due to other xdiff changes that have since landed? i.e. should I review this patch now?

quark added a comment. Mar 6 2018, 11:04 PM

I'll do the rebase (probably tomorrow). It will make the xdl_do_diff2 change unnecessary, and maybe rename prefix_lines etc. to be more like the original code. But the main feature (xdl_trim_files) will probably stay unchanged and is worth a look now.

quark added a comment. Mar 7 2018, 1:06 AM

The patch sent to the git list was completely different from this one, because git has another layer, xdiff-interface.c, and trimming happens there. git also has some extra complexity, like context line handling, which is the hard part.

Since the context-line logic was removed and Mercurial handles context lines in a higher layer, the git upstream's concerns about context lines do not apply here.

quark edited the summary of this revision. (Show Details) Mar 7 2018, 5:41 PM
quark updated this revision to Diff 6708.
quark edited the summary of this revision. (Show Details) Mar 7 2018, 5:45 PM
indygreg requested changes to this revision. Mar 8 2018, 1:04 AM

I'm overall pretty happy with this. I'm requesting just a few minor fixups.

Also, if you (or anyone else for that matter) wanted to spend time to use better variable names and add comments throughout this code, it would be greatly appreciated. I find this code challenging to read because of its almost non-existent documentation.

mercurial/thirdparty/xdiff/xprepare.c
169

Bonus points if you resubmit this with more expressive variable names. Just because xdiff's code is almost impossible to read doesn't mean we should follow suit :)

183–193

I'm still showing this as a hot point in the code when compiling with default settings used by Python packaging tools. I suspect we can get better results on typical compiler flags by tweaking things a bit. But we can do that after this lands.

199–202

This is clever. But memrchr() will be easier to read. Plus I suspect it will be faster.

If you disagree, let's compromise at:

i = 0;
while (i <= reserved) {
   pp1--;
   i += (*pp1 == '\n');
}

There's no sense in using a for loop without the 3rd clause, IMO.

This revision now requires changes to proceed. Mar 8 2018, 1:04 AM
quark added inline comments. Mar 9 2018, 3:40 PM
mercurial/thirdparty/xdiff/xprepare.c
169

The style guide in the git community recommends matching whatever style surrounds the existing code. I think we actually do that too, since the new methods are not using foo_bar naming.

I'll add comments instead.

183–193

Yes. It's expected.

I did try various ways to optimize it before sending the patch, including:

  • Like memchr, testing 8 bytes at once. The difficulty: memory alignment is not guaranteed (e.g. msmall.ptr % 8 != mlarge.ptr % 8).
  • Using various SIMD-related compiler flags.

The first makes things slower, even when I told the compiler to pretend the memory is aligned. The second makes no difference.

199–202

I think the readability of the current code is better, since the memrchr version needs a "size" parameter, which is a burden on the existing logic.
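For illustration, the two styles under discussion can be contrasted like this. This is a toy sketch with made-up names, not the actual xprepare.c code; both functions back up over reserved + 1 newlines from the end of a buffer, and memrchr is a GNU extension that indeed needs an explicit size on every call:

```c
#define _GNU_SOURCE /* memrchr is a GNU extension */
#include <stddef.h>
#include <string.h>

/* Prototype in case _GNU_SOURCE was defined too late for string.h. */
extern void *memrchr(const void *s, int c, size_t n);

/* Style 1: decrement loop, as in the patch; no size bookkeeping.
 * Assumes the buffer contains at least reserved + 1 newlines. */
static const char *backup_loop(const char *end, long reserved) {
  const char *p = end;
  long i = 0;
  while (i <= reserved) {
    p--;
    i += (*p == '\n');
  }
  return p; /* points at the (reserved + 1)-th '\n' from the end */
}

/* Style 2: memrchr; each call must be told how many bytes remain,
 * which is the extra "size" bookkeeping mentioned above. */
static const char *backup_memrchr(const char *start, const char *end,
                                  long reserved) {
  const char *p = end;
  long i = 0;
  while (i <= reserved) {
    p = memrchr(start, '\n', (size_t)(p - start));
    i++;
  }
  return p;
}
```

Both return the same pointer on the same input; the difference is purely which bookkeeping each loop has to carry.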

I did some research before sending this patch. The glibc memchr basically relies on a maybe_contain_zero_byte trick that can test 8 bytes at once. But CPU SIMD instructions are faster than that trick.

The following code counts the "\n"s in a file in 3 ways: a naive loop, testing 8 bytes at once, and actually using memchr. See the benchmark at the end.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

char buf[64000000] __attribute__ ((aligned (16)));
int size;

static int count_naive() {
  int count = 0;
  for (int i = 0; i < size; ++i) {
    count += buf[i] == '\n';
  }
  return count;
}

static int count_memchr() {
  int count = 0;
  const char *p = buf, *end = buf + size;
  // memchr returns NULL once no '\n' remains in [p, end)
  while ((p = memchr(p, '\n', (size_t)(end - p))) != NULL) {
    count++;
    p++; // resume the search after the matched byte
  }
  return count;
}

static inline int maybe_contain_zero_byte(uint64_t x) {
  // See https://github.com/lattera/glibc/blob/master/string/memchr.c
  const uint64_t MAGIC_BITS = 0x7efefefefefefeff;
  return ((((x + MAGIC_BITS) ^ ~x) & ~MAGIC_BITS) != 0);
}

static int count_u64() {
  uint64_t *p = (uint64_t *)&buf;
  uint64_t x = '\n' + ('\n' << 8);
  int count = 0;
  x |= x << 16;
  x |= x << 32;
  // note: any trailing size % 8 bytes are ignored
  for (int i = 0; i < size / 8; ++i, ++p) {
    uint64_t v = *p ^ x;
    if (maybe_contain_zero_byte(v)) {
      const char *c = (const char *) p;
      for (int j = 0; j < 8; ++j) {
        count += (((v >> (8 * j)) & 0xff) == 0);
      }
    }
  }
  return count;
}

int main(int argc, char const *argv[]) {
  int fd = open(argv[1], O_RDONLY);
  size = (int) read(fd, buf, sizeof buf);
  if (argv[2] && argv[2][0] == 'n') {
    printf("naive:  %d\n", count_naive());
  } else if (argv[2] && argv[2][0] == 'm') {
    printf("memchr: %d\n", count_memchr());
  } else {
    printf("u64:    %d\n", count_u64());
  }
  return 0;
}

/*
# gcc 7.3.0
gcc -O2 a.c -o ao2
gcc -O3 -mavx2 a.c -o ao3

# best of 50 runs, wall time
# test case: random data
# head -c 64000000 /dev/urandom > /tmp/r 
./ao2 naive  0.069
./ao2 u64    0.043
./ao2 memchr 0.039
./ao3 naive  0.038  # best
./ao3 u64    0.043
./ao3 memchr 0.039

# test case: real code
# v=read('/home/quark/hg-committed/mercurial/commands.py')
# write('/tmp/c', v * (64000000/len(v)))
./ao2 naive  0.069
./ao2 u64    0.059
./ao2 memchr 0.055
./ao3 naive  0.038  # best
./ao3 u64    0.055
./ao3 memchr 0.055  # slower

# ruby script to run the tests
path = ARGV[0]
%w[./ao2 ./ao3].product(%w[naive u64 memchr]).each do |exe, name|
  time = 50.times.map do
    t1 = Time.now
    system exe, path, name, 1=>'/dev/null'
    Time.now - t1
  end.min
  puts "#{exe} #{name.ljust(6)} #{time.round(3)}"
end
*/

So I'd like to keep it simple and avoid over-optimization. After all, this is O(100)-ish, assuming line lengths won't be ridiculously long. Even if memchr is faster by 14%, it won't be noticeable. Not to mention it's 31% slower in the -O3 case.

quark updated this revision to Diff 6797. Mar 9 2018, 5:54 PM
indygreg accepted this revision. Mar 9 2018, 6:16 PM

I almost accepted the last version, and this one is mostly cosmetic changes. So LGTM!

Your work here is very much appreciated. Thank you for doing the thorough performance analysis.

This revision is now accepted and ready to land. Mar 9 2018, 6:16 PM
This revision was automatically updated to reflect the committed changes.