@akushner thought it'd be helpful to have a dedicated debug command to run if packfile unpleasantness is suspected, and I thought it'd be useful too.
Also added some missing metrics on remotefilelog stores.
Lint: Skipped
Unit Tests: Skipped
remotefilelog/basestore.py

- Line 46: We may want a try/except around all of these metric functions that call `_getfiles`. `_getfiles` could throw an exception if the directory isn't readable for some reason, and we don't want metric-logging code to bring down the process. Alternatively, we could add the try/except to `_getfiles` or `_listkeys`, print a warning if the directory is inaccessible, and return an empty list.
- Line 47: If we're just counting, we can use `_listkeys()` instead, so we don't have to resolve the filenames.
- Line 55: If we're just getting the file size, we can use `_listkeys` to get the information necessary to compute the file path to get the size.
- Line 59: Isn't `getmetrics` called at store creation time now, by your previous diff? If so, `getnumfiles` is expensive, right? How do we ensure that we don't accidentally call `getnumfiles` every time we want to log metrics?

remotefilelog/debugcommands.py

- Line 361: I'd write out the full name `debugpackstatus`. `st` isn't a super common acronym in the code base.
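The defensive wrapping suggested in the line 46 comment could look roughly like the sketch below. This is an illustration only: `safegetmetrics`, its `getfiles` parameter (standing in for the store's `_getfiles`), and the warning text are assumptions based on the comment, not code from this diff.

```python
def safegetmetrics(getfiles, warn=None):
    """Compute loose-file metrics, but never let an unreadable
    directory raise out of metric-logging code.

    ``getfiles`` stands in for the store's _getfiles(); it should
    yield (path, size) pairs and may raise OSError/IOError.
    ``warn`` is an optional callable like ui.warn.
    """
    try:
        files = list(getfiles())
    except (OSError, IOError) as ex:
        # Directory unreadable, bad permissions, etc. Warn and degrade
        # to empty metrics instead of crashing the process.
        if warn is not None:
            warn("warning: unable to read store for metrics: %s\n" % ex)
        return {}
    return {
        'numloosefiles': len(files),
        'totalloosesize': sum(size for _path, size in files),
    }
```

The same shape works whether the try/except lives in the metric functions themselves or inside `_getfiles`/`_listkeys`; the key property is that callers always get a dict back.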
Ready for another look. Please check my C.
remotefilelog/basestore.py

- Line 59: From our IRL chat, added `verbose=False`.
Needs a test
remotefilelog/basestore.py

- Lines 58–59: `os.path.join(self._getrootpath(), fnhashhex[:2], fnhashhex[2:])` would be the more platform-agnostic way of doing this.
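For illustration, the suggested `os.path.join` construction with a made-up root and hash (the two-character prefix directory mirrors the hgcache layout; the root path here is hypothetical):

```python
import hashlib
import os.path

# First two hex characters of the hash pick a subdirectory; the rest
# is the file name. os.path.join inserts the right separator per OS.
fnhashhex = hashlib.sha1(b'path/to/file').hexdigest()
storepath = os.path.join('/tmp/store', fnhashhex[:2], fnhashhex[2:])
```

This avoids hand-concatenating `/`, which breaks on platforms with a different path separator.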
I fleshed this out a bit since the last change, adding getmetrics() to datapackstore and printing the paths for remotefilelog stores, too.
remotefilelog/debugcommands.py

- Line 202: Not really related, but I noticed that `debugdatapack` aborted too early if one of the paths had any errors. I could split this out if you think it's warranted.
remotefilelog/debugcommands.py

- Line 377: It's based on Ruby/Underscore.js's name for the same function. Then again, who ever said Ruby was a paragon of naming?
remotefilelog/debugcommands.py

- Line 377: Yes. Whether `0` or `[]` should be `False` is a separate topic (I personally prefer the Ruby/Lua way - only nil and false are False). It seems you're using it to filter paths, which is fine, I guess?
- Lines 385–387: Maybe more common in Python: `[p for s in store.stores if s for p in getpaths(s)]` It's much shorter. Although I personally do not enjoy list comprehensions that much.
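The distinction being debated here — `filter(None, …)` drops every falsy value, not just `None` — in a quick self-contained example:

```python
items = [0, 1, None, '', 'a', [], [2]]

# filter(None, ...) keeps only truthy values: 0, '' and [] are dropped
# along with None.
truthy = list(filter(None, items))

# The stricter "compact"/"removenone" behavior drops only None.
compacted = [i for i in items if i is not None]
```

So the two are interchangeable only when the list can't contain meaningful falsy values, which is the case for a list of paths.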
remotefilelog/debugcommands.py

- Line 377: Yeah, it doesn't matter today, but I was looking ahead to whether we'd generalize this function / move it into a util. Then I'd expect it to behave the Ruby way too (or at least have a different name, like `filterfalsy`).
- Lines 385–387: Interesting, I didn't know you could chain them. I'll take the shorter way.
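The chained comprehension suggested in the review flattens one level of nesting while skipping falsy stores. A standalone sketch, with a stubbed `getpaths` (the real one returns a store's on-disk paths):

```python
def getpaths(store):
    # Stub standing in for the real helper: each store reports one path.
    return [store + '/packs']

stores = ['local', None, 'shared']

# Two for-clauses in one comprehension: the second iterates over the
# list produced for each store, so the nested lists are flattened and
# None stores are filtered out by the "if s" clause.
paths = [p for s in stores if s for p in getpaths(s)]
```

This is equivalent to `itertools.chain.from_iterable(getpaths(s) for s in stores if s)`, just without the import.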
Modified paths:

- cstore/py-datapackstore.h (9 lines)
- remotefilelog/__init__.py (4 lines)
- remotefilelog/basepack.py (2 lines)
- remotefilelog/basestore.py (49 lines)
- remotefilelog/contentstore.py (7 lines)
- remotefilelog/debugcommands.py (75 lines)
- remotefilelog/metadatastore.py (7 lines)
- tests/test-remotefilelog-repack.t (8 lines)
- tests/test-treemanifest-repack.t (8 lines)
- treemanifest/__init__.py (2 lines)
}
}

static PyObject *datapackstore_markforrefresh(py_datapackstore *self) {
  self->datapackstore.markForRefresh();
  Py_RETURN_NONE;
}

static PyObject *datapackstore_getmetrics(py_datapackstore *self, PyObject* kw) {
  return PyDict_New();
}

// --------- DatapackStore Declaration ---------

static PyMethodDef datapackstore_methods[] = {
  {"getdeltachain", (PyCFunction)datapackstore_getdeltachain, METH_VARARGS, ""},
  {"getmissing", (PyCFunction)datapackstore_getmissing, METH_O, ""},
  {"markforrefresh", (PyCFunction)datapackstore_markforrefresh, METH_NOARGS, ""},
  {"getmetrics", (PyCFunction)datapackstore_getmetrics, METH_KEYWORDS, ""},
  {NULL, NULL}
};

static PyTypeObject datapackstoreType = {
  PyObject_HEAD_INIT(NULL)
  0,                          /* ob_size */
  "cstore.datapackstore",     /* tp_name */
  sizeof(py_datapackstore),   /* tp_basicsize */
}
}

static PyObject *uniondatapackstore_markforrefresh(py_uniondatapackstore *self) {
  self->uniondatapackstore->markForRefresh();
  Py_RETURN_NONE;
}

static PyObject *uniondatapackstore_getmetrics(py_uniondatapackstore *self, PyObject* kw) {
  return PyDict_New();
}

// --------- UnionDatapackStore Declaration ---------

static PyMethodDef uniondatapackstore_methods[] = {
  {"get", (PyCFunction)uniondatapackstore_get, METH_VARARGS, ""},
  {"getdeltachain", (PyCFunction)uniondatapackstore_getdeltachain, METH_VARARGS, ""},
  {"getmissing", (PyCFunction)uniondatapackstore_getmissing, METH_O, ""},
  {"markforrefresh", (PyCFunction)uniondatapackstore_markforrefresh, METH_NOARGS, ""},
  {"getmetrics", (PyCFunction)uniondatapackstore_getmetrics, METH_KEYWORDS, ""},
  {NULL, NULL}
};

static PyTypeObject uniondatapackstoreType = {
  PyObject_HEAD_INIT(NULL)
  0,                             /* ob_size */
  "cstore.uniondatapackstore",   /* tp_name */
  sizeof(py_uniondatapackstore), /* tp_basicsize */
def debugdatapack(ui, *paths, **opts):
    return debugcommands.debugdatapack(ui, *paths, **opts)

@command('debughistorypack', [
    ], _('hg debughistorypack <path>'), norepo=True)
def debughistorypack(ui, path, **opts):
    return debugcommands.debughistorypack(ui, path)

@command('debugpackstatus', [], _('hg debugpackstatus'))
def debugpackstatus(ui, repo, **opts):
    return debugcommands.debugpackstatus(ui, repo)

@command('debugkeepset', [
    ], _('hg debugkeepset'))
def debugkeepset(ui, repo, **opts):
    # The command is used to measure keepset computation time
    def keyfn(fname, fnode):
        return fileserverclient.getcachekey(repo.name, fname, hex(fnode))
    repackmod.keepset(repo, keyfn)
    return
""" | """ | ||||
totalsize = 0 | totalsize = 0 | ||||
count = 0 | count = 0 | ||||
for __, __, size in self._getavailablepackfiles(): | for __, __, size in self._getavailablepackfiles(): | ||||
totalsize += size | totalsize += size | ||||
count += 1 | count += 1 | ||||
return totalsize, count | return totalsize, count | ||||
def getmetrics(self): | def getmetrics(self, verbose=False): | ||||
"""Returns metrics on the state of this store.""" | """Returns metrics on the state of this store.""" | ||||
size, count = self.gettotalsizeandcount() | size, count = self.gettotalsizeandcount() | ||||
return { | return { | ||||
'numpacks': count, | 'numpacks': count, | ||||
'totalpacksize': size, | 'totalpacksize': size, | ||||
} | } | ||||
def getpack(self, path): | def getpack(self, path): |
"""Returns the metadata dict for given node.""" | """Returns the metadata dict for given node.""" | ||||
for store in self.stores: | for store in self.stores: | ||||
try: | try: | ||||
return store.getmeta(name, node) | return store.getmeta(name, node) | ||||
except KeyError: | except KeyError: | ||||
pass | pass | ||||
raise KeyError((name, hex(node))) | raise KeyError((name, hex(node))) | ||||
def getmetrics(self): | def getmetrics(self, verbose=False): | ||||
metrics = [s.getmetrics() for s in self.stores] | metrics = [s.getmetrics(verbose=verbose) for s in self.stores] | ||||
return shallowutil.sumdicts(*metrics) | return shallowutil.sumdicts(*metrics) | ||||
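`shallowutil.sumdicts` itself isn't shown in this diff; presumably it merges the per-store metric dicts by summing values under the same key. A hypothetical reimplementation for illustration:

```python
def sumdicts(*dicts):
    """Merge dicts by summing the values of shared keys.

    Hypothetical sketch of remotefilelog's shallowutil.sumdicts; the
    real implementation is not part of this diff.
    """
    result = {}
    for d in dicts:
        for k, v in d.items():
            result[k] = result.get(k, 0) + v
    return result

# Example: combining per-store metrics into union-store totals.
combined = sumdicts({'numpacks': 2, 'totalpacksize': 100},
                    {'numpacks': 1, 'totalpacksize': 50})
```

Under that reading, a union store's `getmetrics` reports repo-wide totals while each member store only counts its own packs.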
def _getpartialchain(self, name, node):
    """Returns a partial delta chain for the given name/node pair.

    A partial chain is a chain that may not be terminated in a full-text.
    """
    for store in self.stores:

    raise RuntimeError("cannot add to a remote store")

def getmissing(self, keys):
    return keys

def markledger(self, ledger, options=None):
    pass

def getmetrics(self, verbose=False):
    return {}

class manifestrevlogstore(object):
    def __init__(self, repo):
        self._store = repo.store
        self._svfs = repo.svfs
        self._revlogs = dict()
        self._cl = revlog.revlog(self._svfs, '00changelog.i')
        self._repackstartlinkrev = 0
# debugcommands.py - debug logic for remotefilelog
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import itertools

from mercurial import error, filelog, revlog, util
from mercurial.node import bin, hex, nullid, short
from mercurial.i18n import _

from . import (
    constants,
    datapack,
    fileserverclient,
    historypack,
    shallowrepo,
        copyfrom = raw[(start + 80):divider]
        mapping[currentnode] = (p1, p2, linknode, copyfrom)
        start = divider + 1
    return size, firstnode, mapping

def debugdatapack(ui, *paths, **opts):
    failures = 0

> phillco: Not really related, but I noticed that `debugdatapack` aborted too early if one of the paths had any errors. I could split this out if you think it's warranted.
> quark: No need to split.

    for path in paths:
        if '.data' in path:
            path = path[:path.index('.data')]
        ui.write("%s:\n" % path)
        dpack = datapack.datapack(path)
        node = opts.get('node')
        if node:
            deltachain = dpack.getdeltachain('', bin(node))

            "".ljust(2 * hashlen - len("Total:")),
            str(totaldeltasize).ljust(12),
            str(totalblobsize).ljust(9),
            deltastr
        ))

    bases = {}
    nodes = set()
    for filename, node, deltabase, deltalen in dpack.iterentries():
        bases[node] = deltabase
        if node in nodes:
            ui.write(("Bad entry: %s appears twice\n" % short(node)))
            failures += 1
        nodes.add(node)
        if filename != lastfilename:
            printtotals()

        hashformatter(deltabase),
        str(deltalen).ljust(14),
        blobsize))
    if filename is not None:
        printtotals()

    failures += _sanitycheck(ui, set(nodes), bases)
    if failures > 1:
        ui.warn(("%d failures\n" % failures))
        return 1
def _sanitycheck(ui, nodes, bases):
    """
    Does some basic sanity checking on a packfile with ``nodes`` and ``bases``
    (a mapping of node->base):

    - Each deltabase must itself be a node elsewhere in the pack
    - There must be no cycles

            "P1 Node".ljust(14),
            "P2 Node".ljust(14),
            "Link Node".ljust(14),
            "Copy From"))
        lastfilename = filename
    ui.write("%s %s %s %s %s\n" % (short(node), short(p1node),
                                   short(p2node), short(linknode), copyfrom))
def debugpackstatus(ui, repo):

> durham: I'd write out the full name `debugpackstatus`. `st` isn't a super common acronym in the code base.

    def format(metrics, paths=None):
        paths = list(itertools.chain.from_iterable(compact(paths)))
        parts = []
        if 'numpacks' in metrics:
            parts.append("%d packs consuming %s" % (metrics['numpacks'],
                util.bytecount(metrics['totalpacksize'])))
        if 'numloosefiles' in metrics:
            parts.append("%d loose files consuming %s" % (
                metrics['numloosefiles'],
                util.bytecount(metrics['totalloosesize'])))
        if paths:
            parts.append("in %s" % ", ".join(paths))
        return ", ".join(parts)

    def compact(list):

> durham: I'd rename this 'removenone' or something.
> phillco: It's based on Ruby/Underscore.js's name for the same function. Then again, who ever said Ruby was a paragon of naming?
> quark: Could you use `filter(None, list)`? It is more common in python.
> phillco: @quark looks like that removes other falsy values too, like 0. But, I guess that is desirable?
> quark: Yes. Whether `0` or `[]` should be `False` is a separate topic (I personally prefer the Ruby/Lua way - only nil and false are False). It seems you're using it to filter paths, which is fine I guess?
> phillco: Yeah, it doesn't matter today but I was looking ahead if we were to generalize this function / move it into a util. Then I'd expect it to behave the Ruby way too (or at least have a different name, like filterfalsy).

        return [i for i in list if i is not None]

    def getpaths(store):
        if store is None:
            return []
        # Support union stores:
        if getattr(store, 'stores', None):
            return list(itertools.chain.from_iterable(compact(
                [getpaths(s) for s in store.stores]
            )))

> quark: Maybe more common in Python: `[p for s in store.stores if s for p in getpaths(s)]` It's much shorter. Although I personally do not enjoy list comprehensions that much.
> phillco: Interesting, I didn't know you could chain them. I'll take the shorter way.

        # Some stores use _path for the attribute, others path:
        try:
            return [store.path]
        except AttributeError:
            pass
        try:
            return [store._path]
        except AttributeError:
            pass
        return None

    mfl = repo.manifestlog
    if util.safehasattr(mfl, 'localdatastores'):
        ui.write(("Local Tree Store: %s\n" % format(
            shallowutil.sumdicts(*[s.getmetrics() for s in
                                   mfl.localdatastores]),
            [getpaths(p) for p in mfl.localdatastores]
        )))
        ui.write(("Shared Tree Store: %s\n" % format(
            shallowutil.sumdicts(*[s.getmetrics() for s in
                                   mfl.shareddatastores]),
            [getpaths(p) for p in mfl.shareddatastores]
        )))
    else:
        ui.write(("(No Tree Store)\n"))

    if (util.safehasattr(repo, 'contentstore') and
            util.safehasattr(repo, 'metadatastore')):
        ui.write(("File Content Store: %s\n" %

> durham: I'd use the words "File Data Store" and "File History Store" in the user facing code. content and metadata were old terms and should probably be replaced throughout the code.

                  format(repo.contentstore.getmetrics(verbose=True),
                         [getpaths(repo.contentstore)])))
        ui.write(("File Metadata Store: %s\n" %
                  format(repo.metadatastore.getmetrics(verbose=True),
                         [getpaths(repo.metadatastore)])))
    else:
        ui.write(("(No File Store)\n"))
def debugwaitonrepack(repo):
    with repo._lock(repo.svfs, "repacklock", True, None,
                    None, _('repacking %s') % repo.origroot):
        pass

def debugwaitonprefetch(repo):
    with repo._lock(repo.svfs, "prefetchlock", True, None,
                    None, _('prefetching in %s') % repo.origroot):
        pass
        if missing:
            missing = store.getmissing(missing)
    return missing

def markledger(self, ledger, options=None):
    for store in self.stores:
        store.markledger(ledger, options)

def getmetrics(self, verbose=False):
    metrics = [s.getmetrics(verbose=verbose) for s in self.stores]
    return shallowutil.sumdicts(*metrics)

class remotefilelogmetadatastore(basestore.basestore):
    def getancestors(self, name, node, known=None):
        """Returns as many ancestors as we're aware of.

        return value: {
            node: (p1, p2, linknode, copyfrom),

    def add(self, name, node, data):
        raise RuntimeError("cannot add to a remote store")

    def getmissing(self, keys):
        return keys

    def markledger(self, ledger, options=None):
        pass

    def getmetrics(self, verbose=False):
        return {}
  > [remotefilelog]
  > prefetchdays=0
  > EOF
  $ cd ..

# Test that repack cleans up the old files and creates new packs

  $ cd shallow
  $ hg debugpackstatus
  (No Tree Store)
  File Content Store: 0 packs consuming 0 bytes, 1 loose files consuming 168 bytes, in $TESTTMP/hgcache/master/packs, $TESTTMP/hgcache, $TESTTMP/shallow/.hg/store/data
  File Metadata Store: 0 packs consuming 0 bytes, 1 loose files consuming 168 bytes, in $TESTTMP/hgcache/master/packs, $TESTTMP/hgcache, $TESTTMP/shallow/.hg/store/data
  $ find $CACHEDIR | sort
  $TESTTMP/hgcache
  $TESTTMP/hgcache/master
  $TESTTMP/hgcache/master/11
  $TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072
  $TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/aee31534993a501858fb6dd96a065671922e7d51
  $TESTTMP/hgcache/repos
  $ hg repack
  $ find $CACHEDIR | sort
  $TESTTMP/hgcache
  $TESTTMP/hgcache/master
  $TESTTMP/hgcache/master/packs
  $TESTTMP/hgcache/master/packs/276d308429d0303762befa376788300f0310f90e.histidx
  $TESTTMP/hgcache/master/packs/276d308429d0303762befa376788300f0310f90e.histpack
  $TESTTMP/hgcache/master/packs/8e25dec685d5e0bb1f1b39df3acebda0e0d75c6e.dataidx
  $TESTTMP/hgcache/master/packs/8e25dec685d5e0bb1f1b39df3acebda0e0d75c6e.datapack
  $TESTTMP/hgcache/repos
  $ hg debugpackstatus
  (No Tree Store)
  File Content Store: 1 packs consuming 1.12 KB, 0 loose files consuming 0 bytes, in $TESTTMP/hgcache/master/packs, $TESTTMP/hgcache, $TESTTMP/shallow/.hg/store/data
  File Metadata Store: 1 packs consuming 1.29 KB, 0 loose files consuming 0 bytes, in $TESTTMP/hgcache/master/packs, $TESTTMP/hgcache, $TESTTMP/shallow/.hg/store/data

# Test that the packs are readonly

  $ ls_l $CACHEDIR/master/packs
  -r--r--r--    1145 276d308429d0303762befa376788300f0310f90e.histidx
  -r--r--r--     172 276d308429d0303762befa376788300f0310f90e.histpack
  -r--r--r--    1074 8e25dec685d5e0bb1f1b39df3acebda0e0d75c6e.dataidx
  -r--r--r--      69 8e25dec685d5e0bb1f1b39df3acebda0e0d75c6e.datapack

  Node          P1 Node       P2 Node       Link Node     Copy From
  1832e0765de9  a0c8bcbbb45c  000000000000  8e83608cbe60
  dir
  Node          P1 Node       P2 Node       Link Node     Copy From
  23226e7a252c  000000000000  000000000000  8e83608cbe60

- Repack and reverify
  $ hg debugpackstatus
  Local Tree Store: 0 packs consuming 0 bytes, in $TESTTMP/client/.hg/store/packs/manifests
  Shared Tree Store: 2 packs consuming 2.46 KB, in $TESTTMP/client/.hg/store/packs/manifests
  (No File Store)
  $ hg repack
  $ ls_l $CACHEDIR/master/packs/manifests | grep pack
  -r--r--r--     339 56e8c6f0ca2a324b8b5ca1a2730323a1b4d0793a.datapack
  -r--r--r--     262 7535b6084226436bbdff33043969e7fa963e8428.histpack
  $ hg debugdatapack $CACHEDIR/master/packs/manifests/*.datapack
  $TESTTMP/hgcache/master/packs/manifests/56e8c6f0ca2a324b8b5ca1a2730323a1b4d0793a:
  Node          P1 Node       P2 Node       Link Node     Copy From
  1832e0765de9  a0c8bcbbb45c  000000000000  8e83608cbe60
  a0c8bcbbb45c  000000000000  000000000000  1f0dee641bb7
  dir
  Node          P1 Node       P2 Node       Link Node     Copy From
  23226e7a252c  000000000000  000000000000  8e83608cbe60
  $ hg debugpackstatus
  Local Tree Store: 0 packs consuming 0 bytes, in $TESTTMP/client/.hg/store/packs/manifests
  Shared Tree Store: 1 packs consuming 1.46 KB, in $TESTTMP/client/.hg/store/packs/manifests
  (No File Store)

# Test repacking local manifest packs

  $ hg up -q 1
  $ echo a >> a && hg commit -Aqm 'modify a'
  $ echo b >> dir/b && hg commit -Aqm 'modify dir/b'
  $ ls_l .hg/store/packs/manifests | grep datapack
  -r--r--r--     248 5d1716bbef6e7200192de6509055d1ee31a4172c.datapack
  -r--r--r--     146 cffef142da32f3e52c1779490e5d0ddac5f9b82b.datapack
    raise RuntimeError("cannot add to a remote store")

def getmissing(self, keys):
    return keys

def markledger(self, ledger, options=None):
    pass

def getmetrics(self, verbose=False):
    return {}

def serverrepack(repo, incremental=False, options=None):
    packpath = repo.vfs.join('cache/packs/%s' % PACK_CATEGORY)
    revlogstore = manifestrevlogstore(repo)
    try: