This patch removes unneeded files from the vendored cbor library: docs/,
tests/, setup.py, setup.cfg and tox.ini.
It also fixes a couple of test-check* tests by making sure they skip the
third-party cbor library.
indygreg
hg-reviewers
Lint Skipped
Unit Tests Skipped
I'm going to fold the deletes into the previous commit and rewrite the commit message of this one accordingly.
| Status | Path | Packages |
|---|---|---|
| D M | mercurial/thirdparty/cbor/docs/conf.py (33 lines) | |
| D M | mercurial/thirdparty/cbor/docs/customizing.rst (132 lines) | |
| D M | mercurial/thirdparty/cbor/docs/index.rst (15 lines) | |
| D M | mercurial/thirdparty/cbor/docs/modules/decoder.rst (5 lines) | |
| D M | mercurial/thirdparty/cbor/docs/modules/encoder.rst (5 lines) | |
| D M | mercurial/thirdparty/cbor/docs/modules/types.rst (5 lines) | |
| D M | mercurial/thirdparty/cbor/docs/usage.rst (81 lines) | |
| D M | mercurial/thirdparty/cbor/docs/versionhistory.rst (78 lines) | |
| D M | mercurial/thirdparty/cbor/setup.cfg (46 lines) | |
| D M | mercurial/thirdparty/cbor/setup.py (12 lines) | |
| D M | mercurial/thirdparty/cbor/tests/test_decoder.py (342 lines) | |
| D M | mercurial/thirdparty/cbor/tests/test_encoder.py (318 lines) | |
| D M | mercurial/thirdparty/cbor/tests/test_types.py (36 lines) | |
| D M | mercurial/thirdparty/cbor/tox.ini (12 lines) | |
| M | tests/test-check-py3-compat.t (1 line) | |
| M | tests/test-check-pyflakes.t (1 line) | |
# coding: utf-8
#!/usr/bin/env python
import pkg_resources

extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.intersphinx'
]

templates_path = ['_templates']
source_suffix = '.rst'
master_doc = 'index'
project = 'cbor2'
author = u'Alex Grönholm'
copyright = u'2016, ' + author

v = pkg_resources.get_distribution(project).parsed_version
version = v.base_version
release = v.public

language = None
exclude_patterns = ['_build']
pygments_style = 'sphinx'
highlight_language = 'python'
todo_include_todos = False

html_theme = 'sphinx_rtd_theme'
html_static_path = ['_static']
htmlhelp_basename = project.replace('-', '') + 'doc'

intersphinx_mapping = {'python': ('http://docs.python.org/', None)}
Customizing encoding and decoding
=================================

Both the encoder and decoder can be customized to support a wider range of types.

On the encoder side, this is accomplished by passing a callback as the ``default`` constructor
argument. This callback will receive an object that the encoder could not serialize on its own.
The callback should then return a value that the encoder can serialize on its own, although the
return value is allowed to contain objects that also require the encoder to use the callback, as
long as it won't result in an infinite loop.

On the decoder side, you have two options: ``tag_hook`` and ``object_hook``. The former is called
by the decoder to process any semantic tags that have no predefined decoders. The latter is called
for any newly decoded ``dict`` objects, and is mostly useful for implementing a JSON compatible
custom type serialization scheme. Unless your requirements restrict you to JSON compatible types
only, it is recommended to use ``tag_hook`` for this purpose.

JSON compatibility
------------------

In certain applications, it may be desirable to limit the supported types to the same ones
serializable as JSON: (unicode) string, integer, float, boolean, null, array and object (dict).
This can be done by passing the ``json_compatible`` option to the encoder. When incompatible types
are encountered, a :class:`~cbor2.encoder.CBOREncodeError` is then raised.

For the decoder, there is no support for detecting incoming incompatible types yet.
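
A rough sketch of how this might look (not part of the original docs; it assumes
``json_compatible`` is accepted as a keyword argument the same way the other encoder
options are)::

    from cbor2 import dumps
    from cbor2.encoder import CBOREncodeError

    # JSON-compatible types pass through unchanged
    dumps({'name': 'point', 'coords': [1.5, 2.5]}, json_compatible=True)

    try:
        # bytes are not one of the JSON types listed above, so this should be rejected
        dumps({'payload': b'\x01\x02'}, json_compatible=True)
    except CBOREncodeError:
        pass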
Using the CBOR tags for custom types
------------------------------------

The most common way to use ``default`` is to call :meth:`~cbor2.encoder.CBOREncoder.encode`
to add a custom tag in the data stream, with the payload as the value::

    class Point(object):
        def __init__(self, x, y):
            self.x = x
            self.y = y

    def default_encoder(encoder, value):
        # Tag number 4000 was chosen arbitrarily
        encoder.encode(CBORTag(4000, [value.x, value.y]))

The corresponding ``tag_hook`` would be::

    def tag_hook(decoder, tag, shareable_index=None):
        if tag.tag != 4000:
            return tag

        # tag.value is now the [x, y] list we serialized before
        return Point(*tag.value)

Using dicts to carry custom types
---------------------------------

The same could be done with ``object_hook``, except less efficiently::

    def default_encoder(encoder, value):
        encoder.encode(dict(typename='Point', x=value.x, y=value.y))

    def object_hook(decoder, value):
        if value.get('typename') != 'Point':
            return value

        return Point(value['x'], value['y'])

You should make sure that whatever method you use to tell your "specially marked" dicts apart
from arbitrary data dicts cannot mistake one for the other.
Value sharing with custom types
-------------------------------

In order to properly encode and decode cyclic references with custom types, some special care has
to be taken. Suppose you have a custom type as below, where every child object contains a reference
to its parent and the parent contains a list of children::

    from cbor2 import dumps, loads, shareable_encoder, CBORTag

    class MyType(object):
        def __init__(self, parent=None):
            self.parent = parent
            self.children = []
            if parent:
                self.parent.children.append(self)

This would not normally be serializable, as it would lead to an endless loop (in the worst case)
and raise some exception (in the best case). Now, enter CBOR's extension tags 28 and 29. These tags
make it possible to add special markers into the data stream which can be later referenced and
substituted with the object marked earlier.

To do this, you will need to use the :meth:`~cbor2.encoder.shareable_encoder` decorator on your
``default`` hook function. It will automatically add the object to the shared values registry on
the encoder and prevent it from being serialized twice (instead writing a reference to the data
stream)::

    @shareable_encoder
    def default_encoder(encoder, value):
        # The state has to be serialized separately so that the decoder would have a chance to
        # create an empty instance before the shared value references are decoded
        serialized_state = encoder.encode_to_bytes(value.__dict__)
        encoder.encode(CBORTag(3000, serialized_state))

On the decoder side, you will need to initialize an empty instance for shared value lookup before
the object's state (which may contain references to it) is decoded.
This is done with the :meth:`~cbor2.decoder.CBORDecoder.set_shareable` method::

    def tag_hook(decoder, tag, shareable_index=None):
        # Return all other tags as-is
        if tag.tag != 3000:
            return tag

        # Create a raw instance before initializing its state to make it possible for cyclic
        # references to work
        instance = MyType.__new__(MyType)
        decoder.set_shareable(shareable_index, instance)

        # Separately decode the state of the new object and then apply it
        state = decoder.decode_from_bytes(tag.value)
        instance.__dict__.update(state)
        return instance

You could then verify that the cyclic references have been restored after deserialization::

    parent = MyType()
    child1 = MyType(parent)
    child2 = MyType(parent)
    serialized = dumps(parent, default=default_encoder, value_sharing=True)

    new_parent = loads(serialized, tag_hook=tag_hook)
    assert new_parent.children[0].parent is new_parent
    assert new_parent.children[1].parent is new_parent
.. include:: ../README.rst
   :start-line: 7
   :end-before: Project links

Table of contents
-----------------

.. toctree::
   :maxdepth: 2

   usage
   customizing
   versionhistory

* :ref:`API reference <modindex>`
:mod:`cbor2.decoder`
====================

.. automodule:: cbor2.decoder
   :members:

:mod:`cbor2.encoder`
====================

.. automodule:: cbor2.encoder
   :members:

:mod:`cbor2.types`
==================

.. automodule:: cbor2.types
   :members:
Basic usage
===========

Serializing and deserializing with cbor2 is pretty straightforward::

    from cbor2 import dumps, loads

    # Serialize an object as a bytestring
    data = dumps(['hello', 'world'])

    # Deserialize a bytestring
    obj = loads(data)

    # Efficiently deserialize from a file
    with open('input.cbor', 'rb') as fp:
        obj = load(fp)

    # Efficiently serialize an object to a file
    with open('output.cbor', 'wb') as fp:
        dump(obj, fp)

Some data types, however, require extra considerations, as detailed below.

String/bytes handling on Python 2
---------------------------------

The ``str`` type is encoded as binary on Python 2. If you want to encode strings as text on
Python 2, use unicode strings instead.
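
A minimal sketch of the difference, with the expected byte values taken from this library's own
test suite::

    from binascii import hexlify
    from cbor2 import dumps

    # On Python 2, plain ``str`` values take the byte-string path (CBOR major type 2)
    hexlify(dumps(b'\x01\x02\x03\x04'))  # b'4401020304'

    # Unicode strings are encoded as CBOR text strings (major type 3)
    hexlify(dumps(u'IETF'))              # b'6449455446'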
Date/time handling
------------------

The CBOR specification does not support naïve datetimes (that is, datetimes where ``tzinfo`` is
missing). When the encoder encounters such a datetime, it needs to know which timezone it belongs
to. To this end, you can specify a default timezone by passing a :class:`~datetime.tzinfo` instance
to the :func:`~cbor2.encoder.dump`/:func:`~cbor2.encoder.dumps` call as the ``timezone`` argument.
Decoded datetimes are always timezone aware.

By default, datetimes are serialized in a manner that retains their timezone offsets. You can
optimize the data stream size by passing ``datetime_as_timestamp=True`` to
:func:`~cbor2.encoder.dump`/:func:`~cbor2.encoder.dumps`, but this causes the timezone offset
information to be lost.
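
A short sketch of both options, with the expected byte values taken from this library's test
suite::

    from binascii import hexlify
    from datetime import datetime

    from cbor2 import dumps
    from cbor2.compat import timezone

    # A naive datetime: the ``timezone`` argument supplies the missing tzinfo
    hexlify(dumps(datetime(2013, 3, 21, 20, 4, 0), timezone=timezone.utc))
    # b'c074323031332d30332d32315432303a30343a30305a' (tag 0, RFC 3339 string)

    # The same instant as a compact epoch timestamp (tag 1); the offset is lost
    hexlify(dumps(datetime(2013, 3, 21, 20, 4, 0, tzinfo=timezone.utc),
                  datetime_as_timestamp=True))
    # b'c11a514b67b0'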
Cyclic (recursive) data structures
----------------------------------

If the encoder encounters a shareable object (i.e. a list or dict) that it has seen before, it
will by default raise :exc:`~cbor2.encoder.CBOREncodeError`, indicating that a cyclic reference
has been detected and value sharing was not enabled. CBOR does, however, have an extension
specification that allows the encoder to reference a previously encoded value without processing
it again. This makes it possible to serialize such cyclic references, but value sharing has to be
enabled by passing ``value_sharing=True`` to :func:`~cbor2.encoder.dump`/:func:`~cbor2.encoder.dumps`.

.. warning:: Support for value sharing is rare in other CBOR implementations, so think carefully
   whether you want to enable it. It also adds some overhead, as all potentially shareable values
   must be tagged as such.
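
The following round-trip sketch is based on the cyclic-structure cases in this library's test
suite (the behavior of ``loads`` on shared references is assumed from those tests)::

    from cbor2 import dumps, loads
    from cbor2.encoder import CBOREncodeError

    a = []
    a.append(a)              # a list that contains itself

    try:
        dumps(a)             # value sharing is disabled by default
    except CBOREncodeError:
        pass                 # "cyclic data structure detected ..."

    data = dumps(a, value_sharing=True)
    b = loads(data)
    assert b[0] is b         # the cycle survives the round trip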
Tag support
-----------

In addition to all standard CBOR tags, this library supports many extended tags:

===  ========================================  ====================================================
Tag  Semantics                                 Python type(s)
===  ========================================  ====================================================
0    Standard date/time string                 datetime.date / datetime.datetime
1    Epoch-based date/time                     datetime.date / datetime.datetime
2    Positive bignum                           int / long
3    Negative bignum                           int / long
4    Decimal fraction                          decimal.Decimal
5    Bigfloat                                  decimal.Decimal
28   Mark shared value                         N/A
29   Reference shared value                    N/A
30   Rational number                           fractions.Fraction
35   Regular expression                        ``_sre.SRE_Pattern`` (result of ``re.compile(...)``)
36   MIME message                              email.message.Message
37   Binary UUID                               uuid.UUID
258  Set of unique items                       set
===  ========================================  ====================================================

Arbitrary tags can be represented with the :class:`~cbor2.types.CBORTag` class.
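
For example (with the expected bytes taken from this library's tests), an application-specific
tag can be written and read back as a plain ``CBORTag`` when no ``tag_hook`` handles it::

    from binascii import hexlify
    from cbor2 import dumps, loads, CBORTag

    data = dumps(CBORTag(6000, u'Hello'))
    hexlify(data)                # b'd917706548656c6c6f'

    tag = loads(data)            # no handler for tag 6000, so the tag comes back as-is
    assert tag == CBORTag(6000, u'Hello')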
Version history
===============

This library adheres to `Semantic Versioning <http://semver.org/>`_.

**UNRELEASED**

- Added canonical encoding (via ``canonical=True``) (PR by Sekenre)
- Added support for encoding/decoding sets (semantic tag 258) (PR by Sekenre)

**4.0.1** (2017-08-21)

- Fixed silent truncation of decoded data if there are not enough bytes in the stream for an exact
  read (``CBORDecodeError`` is now raised instead)

**4.0.0** (2017-04-24)

- **BACKWARD INCOMPATIBLE** Value sharing has been disabled by default, for better compatibility
  with other implementations and better performance (since it is rarely needed)
- **BACKWARD INCOMPATIBLE** Replaced the ``semantic_decoders`` decoder option with the ``tag_hook``
  option
- **BACKWARD INCOMPATIBLE** Replaced the ``encoders`` encoder option with the ``default`` option
- **BACKWARD INCOMPATIBLE** Factored out the file object argument (``fp``) from all callbacks
- **BACKWARD INCOMPATIBLE** The encoder no longer supports every imaginable type implementing the
  ``Sequence`` or ``Map`` interface, as they turned out to be too broad
- Added the ``object_hook`` option for decoding dicts into complex objects
  (intended for situations where JSON compatibility is required and semantic tags cannot be used)
- Added encoding and decoding of simple values (``CBORSimpleValue``)
  (contributed by Jerry Lundström)
- Replaced the decoder for bignums with a simpler and faster version (contributed by orent)
- Made all relevant classes and functions available directly in the ``cbor2`` namespace
- Added proper documentation

**3.0.4** (2016-09-24)

- Fixed TypeError when trying to encode extension types (regression introduced in 3.0.3)

**3.0.3** (2016-09-23)

- No changes, just re-releasing due to git tagging screw-up

**3.0.2** (2016-09-23)

- Fixed decoding failure for datetimes with microseconds (tag 0)

**3.0.1** (2016-08-08)

- Fixed error in the cyclic structure detection code that could mistake one container for
  another, sometimes causing a bogus error about cyclic data structures where there was none

**3.0.0** (2016-07-03)

- **BACKWARD INCOMPATIBLE** Encoder callbacks now receive three arguments: the encoder instance,
  the value to encode and a file-like object. The callback must now either write directly to
  the file-like object or call another encoder callback instead of returning an iterable.
- **BACKWARD INCOMPATIBLE** Semantic decoder callbacks now receive four arguments: the decoder
  instance, the primitive value, a file-like object and the shareable index for the decoded value.
  Decoders that support value sharing must now set the raw value at the given index in
  ``decoder.shareables``.
- **BACKWARD INCOMPATIBLE** Removed support for iterative encoding (``CBOREncoder.encode()`` is no
  longer a generator function and always returns ``None``)
- Significantly improved performance (encoder ~30 % faster, decoder ~60 % faster)
- Fixed serialization round-trip for ``undefined`` (simple type #23)
- Added proper support for value sharing in callbacks

**2.0.0** (2016-06-11)

- **BACKWARD INCOMPATIBLE** Deserialize unknown tags as ``CBORTag`` objects so as not to lose
  information
- Fixed error messages coming from nested structures

**1.1.0** (2016-06-10)

- Fixed deserialization of cyclic structures

**1.0.0** (2016-06-08)

- Initial release
[metadata]
name = cbor2
description = Pure Python CBOR (de)serializer with extensive tag support
long_description = file: README.rst
author = Alex Grönholm
author_email = alex.gronholm@nextday.fi
url = https://github.com/agronholm/cbor2
license = MIT
license_file = LICENSE.txt
keywords = serialization cbor
classifiers =
    Development Status :: 5 - Production/Stable
    Intended Audience :: Developers
    License :: OSI Approved :: MIT License
    Programming Language :: Python
    Programming Language :: Python :: 2.7
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.3
    Programming Language :: Python :: 3.4
    Programming Language :: Python :: 3.5
    Programming Language :: Python :: 3.6

[options]
packages = find:

[options.extras_require]
test =
    pytest
    pytest-cov

[tool:pytest]
addopts = -rsx --cov --tb=short
testpaths = tests

[coverage:run]
source = cbor2

[coverage:report]
show_missing = true

[flake8]
max-line-length = 99
exclude = .tox,build,docs

[bdist_wheel]
universal = 1
from setuptools import setup

setup(
    use_scm_version={
        'version_scheme': 'post-release',
        'local_scheme': 'dirty-tag'
    },
    setup_requires=[
        'setuptools >= 36.2.7',
        'setuptools_scm >= 1.7.0'
    ]
)
from __future__ import division

import math
import re
import sys
from binascii import unhexlify
from datetime import datetime, timedelta
from decimal import Decimal
from email.message import Message
from fractions import Fraction
from io import BytesIO
from uuid import UUID

import pytest

from cbor2.compat import timezone
from cbor2.decoder import loads, CBORDecodeError, load, CBORDecoder
from cbor2.types import CBORTag, undefined, CBORSimpleValue


@pytest.mark.parametrize('payload, expected', [
    ('00', 0),
    ('01', 1),
    ('0a', 10),
    ('17', 23),
    ('1818', 24),
    ('1819', 25),
    ('1864', 100),
    ('1903e8', 1000),
    ('1a000f4240', 1000000),
    ('1b000000e8d4a51000', 1000000000000),
    ('1bffffffffffffffff', 18446744073709551615),
    ('c249010000000000000000', 18446744073709551616),
    ('3bffffffffffffffff', -18446744073709551616),
    ('c349010000000000000000', -18446744073709551617),
    ('20', -1),
    ('29', -10),
    ('3863', -100),
    ('3903e7', -1000)
])
def test_integer(payload, expected):
    decoded = loads(unhexlify(payload))
    assert decoded == expected


def test_invalid_integer_subtype():
    exc = pytest.raises(CBORDecodeError, loads, b'\x1c')
    assert str(exc.value).endswith('unknown unsigned integer subtype 0x1c')


@pytest.mark.parametrize('payload, expected', [
    ('f90000', 0.0),
    ('f98000', -0.0),
    ('f93c00', 1.0),
    ('fb3ff199999999999a', 1.1),
    ('f93e00', 1.5),
    ('f97bff', 65504.0),
    ('fa47c35000', 100000.0),
    ('fa7f7fffff', 3.4028234663852886e+38),
    ('fb7e37e43c8800759c', 1.0e+300),
    ('f90001', 5.960464477539063e-8),
    ('f90400', 0.00006103515625),
    ('f9c400', -4.0),
    ('fbc010666666666666', -4.1),
    ('f97c00', float('inf')),
    ('f9fc00', float('-inf')),
    ('fa7f800000', float('inf')),
    ('faff800000', float('-inf')),
    ('fb7ff0000000000000', float('inf')),
    ('fbfff0000000000000', float('-inf'))
])
def test_float(payload, expected):
    decoded = loads(unhexlify(payload))
    assert decoded == expected


@pytest.mark.parametrize('payload', ['f97e00', 'fa7fc00000', 'fb7ff8000000000000'])
def test_float_nan(payload):
    decoded = loads(unhexlify(payload))
    assert math.isnan(decoded)


@pytest.mark.parametrize('payload, expected', [
    ('f4', False),
    ('f5', True),
    ('f6', None),
    ('f7', undefined)
])
def test_special(payload, expected):
    decoded = loads(unhexlify(payload))
    assert decoded is expected


@pytest.mark.parametrize('payload, expected', [
    ('40', b''),
    ('4401020304', b'\x01\x02\x03\x04'),
])
def test_binary(payload, expected):
    decoded = loads(unhexlify(payload))
    assert decoded == expected


@pytest.mark.parametrize('payload, expected', [
    ('60', u''),
    ('6161', u'a'),
    ('6449455446', u'IETF'),
    ('62225c', u'\"\\'),
    ('62c3bc', u'\u00fc'),
    ('63e6b0b4', u'\u6c34')
])
def test_string(payload, expected):
    decoded = loads(unhexlify(payload))
    assert decoded == expected


@pytest.mark.parametrize('payload, expected', [
    ('80', []),
    ('83010203', [1, 2, 3]),
    ('8301820203820405', [1, [2, 3], [4, 5]]),
    ('98190102030405060708090a0b0c0d0e0f101112131415161718181819', list(range(1, 26)))
])
def test_array(payload, expected):
    decoded = loads(unhexlify(payload))
    assert decoded == expected


@pytest.mark.parametrize('payload, expected', [
    ('a0', {}),
    ('a201020304', {1: 2, 3: 4})
])
def test_map(payload, expected):
    decoded = loads(unhexlify(payload))
    assert decoded == expected


@pytest.mark.parametrize('payload, expected', [
    ('a26161016162820203', {'a': 1, 'b': [2, 3]}),
    ('826161a161626163', ['a', {'b': 'c'}]),
    ('a56161614161626142616361436164614461656145',
     {'a': 'A', 'b': 'B', 'c': 'C', 'd': 'D', 'e': 'E'})
])
def test_mixed_array_map(payload, expected):
    decoded = loads(unhexlify(payload))
    assert decoded == expected


@pytest.mark.parametrize('payload, expected', [
    ('5f42010243030405ff', b'\x01\x02\x03\x04\x05'),
    ('7f657374726561646d696e67ff', 'streaming'),
    ('9fff', []),
    ('9f018202039f0405ffff', [1, [2, 3], [4, 5]]),
    ('9f01820203820405ff', [1, [2, 3], [4, 5]]),
    ('83018202039f0405ff', [1, [2, 3], [4, 5]]),
    ('83019f0203ff820405', [1, [2, 3], [4, 5]]),
    ('9f0102030405060708090a0b0c0d0e0f101112131415161718181819ff', list(range(1, 26))),
    ('bf61610161629f0203ffff', {'a': 1, 'b': [2, 3]}),
    ('826161bf61626163ff', ['a', {'b': 'c'}]),
    ('bf6346756ef563416d7421ff', {'Fun': True, 'Amt': -2}),
])
def test_streaming(payload, expected):
    decoded = loads(unhexlify(payload))
    assert decoded == expected


@pytest.mark.parametrize('payload, expected', [
    ('e0', 0),
    ('e2', 2),
    ('f3', 19),
    ('f820', 32),
    ('e0', CBORSimpleValue(0)),
    ('e2', CBORSimpleValue(2)),
    ('f3', CBORSimpleValue(19)),
    ('f820', CBORSimpleValue(32))
])
def test_simple_value(payload, expected):
    decoded = loads(unhexlify(payload))
    assert decoded == expected


#
# Tests for extension tags
#

@pytest.mark.parametrize('payload, expected', [
    ('c074323031332d30332d32315432303a30343a30305a',
     datetime(2013, 3, 21, 20, 4, 0, tzinfo=timezone.utc)),
    ('c0781b323031332d30332d32315432303a30343a30302e3338303834315a',
     datetime(2013, 3, 21, 20, 4, 0, 380841, tzinfo=timezone.utc)),
    ('c07819323031332d30332d32315432323a30343a30302b30323a3030',
     datetime(2013, 3, 21, 22, 4, 0, tzinfo=timezone(timedelta(hours=2)))),
    ('c11a514b67b0', datetime(2013, 3, 21, 20, 4, 0, tzinfo=timezone.utc)),
    ('c11a514b67b0', datetime(2013, 3, 21, 22, 4, 0, tzinfo=timezone(timedelta(hours=2))))
], ids=['datetime/utc', 'datetime+micro/utc', 'datetime/eet', 'timestamp/utc', 'timestamp/eet'])
def test_datetime(payload, expected):
    decoded = loads(unhexlify(payload))
    assert decoded == expected


def test_bad_datetime():
    exc = pytest.raises(CBORDecodeError, loads, unhexlify('c06b303030302d3132332d3031'))
    assert str(exc.value).endswith('invalid datetime string: 0000-123-01')


def test_fraction():
    decoded = loads(unhexlify('c48221196ab3'))
    assert decoded == Decimal('273.15')


def test_bigfloat():
    decoded = loads(unhexlify('c5822003'))
    assert decoded == Decimal('1.5')


def test_rational():
    decoded = loads(unhexlify('d81e820205'))
    assert decoded == Fraction(2, 5)


def test_regex():
    decoded = loads(unhexlify('d8236d68656c6c6f2028776f726c6429'))
    expr = re.compile(u'hello (world)')
    assert decoded == expr


def test_mime():
    decoded = loads(unhexlify(
        'd824787b436f6e74656e742d547970653a20746578742f706c61696e3b20636861727365743d2269736f2d38'
        '3835392d3135220a4d494d452d56657273696f6e3a20312e300a436f6e74656e742d5472616e736665722d45'
        '6e636f64696e673a2071756f7465642d7072696e7461626c650a0a48656c6c6f203d413475726f'))
    assert isinstance(decoded, Message)
    assert decoded.get_payload() == 'Hello =A4uro'


def test_uuid():
    decoded = loads(unhexlify('d825505eaffac8b51e480581277fdcc7842faf'))
    assert decoded == UUID(hex='5eaffac8b51e480581277fdcc7842faf')


def test_bad_shared_reference():
    exc = pytest.raises(CBORDecodeError, loads, unhexlify('d81d05'))
    assert str(exc.value).endswith('shared reference 5 not found')


def test_uninitialized_shared_reference():
    fp = BytesIO(unhexlify('d81d00'))
    decoder = CBORDecoder(fp)
    decoder._shareables.append(None)
    exc = pytest.raises(CBORDecodeError, decoder.decode)
    assert str(exc.value).endswith('shared value 0 has not been initialized')


def test_cyclic_array():
    decoded = loads(unhexlify('d81c81d81d00'))
    assert decoded == [decoded]


def test_cyclic_map():
    decoded = loads(unhexlify('d81ca100d81d00'))
    assert decoded == {0: decoded}


def test_unhandled_tag():
    """
    Test that a tag is simply ignored and its associated value returned if there is no special
    handling available for it.
    """
    decoded = loads(unhexlify('d917706548656c6c6f'))
    assert decoded == CBORTag(6000, u'Hello')


def test_premature_end_of_stream():
    """
    Test that the decoder detects a situation where read() returned fewer than expected bytes.
    """
    exc = pytest.raises(CBORDecodeError, loads, unhexlify('437879'))
    exc.match('premature end of stream \(expected to read 3 bytes, got 2 instead\)')


def test_tag_hook():
    def reverse(decoder, tag, fp, shareable_index=None):
        return tag.value[::-1]

    decoded = loads(unhexlify('d917706548656c6c6f'), tag_hook=reverse)
    assert decoded == u'olleH'


def test_tag_hook_cyclic():
    class DummyType(object):
        def __init__(self, value):
            self.value = value

    def unmarshal_dummy(decoder, tag, shareable_index=None):
        instance = DummyType.__new__(DummyType)
        decoder.set_shareable(shareable_index, instance)
        instance.value = decoder.decode_from_bytes(tag.value)
        return instance

    decoded = loads(unhexlify('D81CD90BB849D81CD90BB843D81D00'), tag_hook=unmarshal_dummy)
    assert isinstance(decoded, DummyType)
    assert decoded.value.value is decoded


def test_object_hook():
    class DummyType(object):
        def __init__(self, state):
            self.state = state

    payload = unhexlify('A2616103616205')
    decoded = loads(payload, object_hook=lambda decoder, value: DummyType(value))
    assert isinstance(decoded, DummyType)
    assert decoded.state == {'a': 3, 'b': 5}


def test_error_major_type():
    exc = pytest.raises(CBORDecodeError, loads, b'')
    assert str(exc.value).startswith('error reading major type at index 0: ')


def test_load_from_file(tmpdir):
    path = tmpdir.join('testdata.cbor')
    path.write_binary(b'\x82\x01\x0a')
    with path.open('rb') as fp:
        obj = load(fp)

    assert obj == [1, 10]


@pytest.mark.skipif(sys.version_info < (3, 0), reason="No exception with python 2.7")
def test_nested_exception():
    exc = pytest.raises((CBORDecodeError, TypeError), loads, unhexlify('A1D9177082010201'))
    exc.match(r"error decoding value at index 8: "
              r"(unhashable type: 'CBORTag'|'CBORTag' objects are unhashable)")


def test_set():
    payload = unhexlify('d9010283616361626161')
    value = loads(payload)
    assert type(value) is set
    assert value == set([u'a', u'b', u'c'])
import re
from binascii import unhexlify
from collections import OrderedDict
from datetime import datetime, timedelta, date
from decimal import Decimal
from email.mime.text import MIMEText
from fractions import Fraction
from uuid import UUID

import pytest

from cbor2.compat import timezone
from cbor2.encoder import dumps, CBOREncodeError, dump, shareable_encoder
from cbor2.types import CBORTag, undefined, CBORSimpleValue


@pytest.mark.parametrize('value, expected', [
    (0, '00'),
    (1, '01'),
    (10, '0a'),
    (23, '17'),
    (24, '1818'),
    (100, '1864'),
    (1000, '1903e8'),
    (1000000, '1a000f4240'),
    (1000000000000, '1b000000e8d4a51000'),
    (18446744073709551615, '1bffffffffffffffff'),
    (18446744073709551616, 'c249010000000000000000'),
    (-18446744073709551616, '3bffffffffffffffff'),
    (-18446744073709551617, 'c349010000000000000000'),
    (-1, '20'),
    (-10, '29'),
    (-100, '3863'),
    (-1000, '3903e7')
])
def test_integer(value, expected):
    expected = unhexlify(expected)
    assert dumps(value) == expected


@pytest.mark.parametrize('value, expected', [
    (1.1, 'fb3ff199999999999a'),
    (1.0e+300, 'fb7e37e43c8800759c'),
    (-4.1, 'fbc010666666666666'),
    (float('inf'), 'f97c00'),
    (float('nan'), 'f97e00'),
    (float('-inf'), 'f9fc00')
])
def test_float(value, expected):
    expected = unhexlify(expected)
    assert dumps(value) == expected


@pytest.mark.parametrize('value, expected', [
    (b'', '40'),
    (b'\x01\x02\x03\x04', '4401020304'),
])
def test_bytestring(value, expected):
    expected = unhexlify(expected)
    assert dumps(value) == expected


def test_bytearray():
    expected = unhexlify('4401020304')
    assert dumps(bytearray(b'\x01\x02\x03\x04')) == expected


@pytest.mark.parametrize('value, expected', [
    (u'', '60'),
    (u'a', '6161'),
    (u'IETF', '6449455446'),
    (u'"\\', '62225c'),
    (u'\u00fc', '62c3bc'),
    (u'\u6c34', '63e6b0b4')
])
def test_string(value, expected):
    expected = unhexlify(expected)
    assert dumps(value) == expected


@pytest.mark.parametrize('value, expected', [
    (False, 'f4'),
    (True, 'f5'),
    (None, 'f6'),
    (undefined, 'f7')
], ids=['false', 'true', 'null', 'undefined'])
def test_special(value, expected):
    expected = unhexlify(expected)
    assert dumps(value) == expected


@pytest.mark.parametrize('value, expected', [
    (CBORSimpleValue(0), 'e0'),
    (CBORSimpleValue(2), 'e2'),
    (CBORSimpleValue(19), 'f3'),
    (CBORSimpleValue(32), 'f820')
])
def test_simple_value(value, expected):
    expected = unhexlify(expected)
    assert dumps(value) == expected


#
# Tests for extension tags
#

@pytest.mark.parametrize('value, as_timestamp, expected', [
    (datetime(2013, 3, 21, 20, 4, 0, tzinfo=timezone.utc), False,
     'c074323031332d30332d32315432303a30343a30305a'),
    (datetime(2013, 3, 21, 20, 4, 0, 380841, tzinfo=timezone.utc), False,
     'c0781b323031332d30332d32315432303a30343a30302e3338303834315a'),
    (datetime(2013, 3, 21, 22, 4, 0, tzinfo=timezone(timedelta(hours=2))), False,
     'c07819323031332d30332d32315432323a30343a30302b30323a3030'),
    (datetime(2013, 3, 21, 20, 4, 0), False, 'c074323031332d30332d32315432303a30343a30305a'),
    (datetime(2013, 3, 21, 20, 4, 0, tzinfo=timezone.utc), True, 'c11a514b67b0'),
    (datetime(2013, 3, 21, 22, 4, 0, tzinfo=timezone(timedelta(hours=2))), True, 'c11a514b67b0')
], ids=['datetime/utc', 'datetime+micro/utc', 'datetime/eet', 'naive', 'timestamp/utc',
        'timestamp/eet'])
def test_datetime(value, as_timestamp, expected):
    expected = unhexlify(expected)
    assert dumps(value, datetime_as_timestamp=as_timestamp, timezone=timezone.utc) == expected


def test_date():
    expected = unhexlify('c074323031332d30332d32315430303a30303a30305a')
    assert dumps(date(2013, 3, 21), timezone=timezone.utc) == expected


def test_naive_datetime():
    """Test that naive datetimes are gracefully rejected when no timezone has been set."""
    exc = pytest.raises(CBOREncodeError, dumps, datetime(2013, 3, 21))
    exc.match('naive datetime encountered and no default timezone has been set')


@pytest.mark.parametrize('value, expected', [
    (Decimal('14.123'), 'c4822219372b'),
    (Decimal('NaN'), 'f97e00'),
    (Decimal('Infinity'), 'f97c00'),
    (Decimal('-Infinity'), 'f9fc00')
], ids=['normal', 'nan', 'inf', 'neginf'])
def test_decimal(value, expected):
    expected = unhexlify(expected)
    assert dumps(value) == expected


def test_rational():
    expected = unhexlify('d81e820205')
    assert dumps(Fraction(2, 5)) == expected


def test_regex():
    expected = unhexlify('d8236d68656c6c6f2028776f726c6429')
    assert dumps(re.compile(u'hello (world)')) == expected


def test_mime():
    expected = unhexlify(
        'd824787b436f6e74656e742d547970653a20746578742f706c61696e3b20636861727365743d2269736f2d38'
        '3835392d3135220a4d494d452d56657273696f6e3a20312e300a436f6e74656e742d5472616e736665722d456'
        'e636f64696e673a2071756f7465642d7072696e7461626c650a0a48656c6c6f203d413475726f')
    message = MIMEText(u'Hello \u20acuro', 'plain', 'iso-8859-15')
    assert dumps(message) == expected


def test_uuid():
    expected = unhexlify('d825505eaffac8b51e480581277fdcc7842faf')
    assert dumps(UUID(hex='5eaffac8b51e480581277fdcc7842faf')) == expected


def test_custom_tag():
    expected = unhexlify('d917706548656c6c6f')
    assert dumps(CBORTag(6000, u'Hello')) == expected


def test_cyclic_array():
    """Test that an array that contains itself can be serialized with value sharing enabled."""
    expected = unhexlify('d81c81d81c81d81d00')
    a = [[]]
    a[0].append(a)
    assert dumps(a, value_sharing=True) == expected


def test_cyclic_array_nosharing():
    """Test that serializing a cyclic structure w/o value sharing will blow up gracefully."""
    a = []
    a.append(a)
    exc = pytest.raises(CBOREncodeError, dumps, a)
    exc.match('cyclic data structure detected but value sharing is disabled')


def test_cyclic_map():
    """Test that a dict that contains itself can be serialized with value sharing enabled."""
    expected = unhexlify('d81ca100d81d00')
    a = {}
    a[0] = a
    assert dumps(a, value_sharing=True) == expected


def test_cyclic_map_nosharing():
    """Test that serializing a cyclic structure w/o value sharing will fail gracefully."""
    a = {}
    a[0] = a
    exc = pytest.raises(CBOREncodeError, dumps, a)
    exc.match('cyclic data structure detected but value sharing is disabled')


@pytest.mark.parametrize('value_sharing, expected', [
    (False, '828080'),
    (True, 'd81c82d81c80d81d01')
], ids=['nosharing', 'sharing'])
def test_not_cyclic_same_object(value_sharing, expected):
    """Test that the same shareable object can be included twice if not in a cyclic structure."""
    expected = unhexlify(expected)
    a = []
    b = [a, a]
    assert dumps(b, value_sharing=value_sharing) == expected


def test_unsupported_type():
    exc = pytest.raises(CBOREncodeError, dumps, lambda: None)
    exc.match('cannot serialize type function')


def test_default():
    class DummyType(object):
        def __init__(self, state):
            self.state = state

    def default_encoder(encoder, value):
        encoder.encode(value.state)

    expected = unhexlify('820305')
    obj = DummyType([3, 5])
    serialized = dumps(obj, default=default_encoder)
    assert serialized == expected


def test_default_cyclic():
    class DummyType(object):
        def __init__(self, value=None):
            self.value = value

    @shareable_encoder
    def default_encoder(encoder, value):
        state = encoder.encode_to_bytes(value.value)
        encoder.encode(CBORTag(3000, state))

    expected = unhexlify('D81CD90BB849D81CD90BB843D81D00')
    obj = DummyType()
    obj2 = DummyType(obj)
    obj.value = obj2
    serialized = dumps(obj, value_sharing=True, default=default_encoder)
    assert serialized == expected


def test_dump_to_file(tmpdir):
    path = tmpdir.join('testdata.cbor')
    with path.open('wb') as fp:
        dump([1, 10], fp)

    assert path.read_binary() == b'\x82\x01\x0a'


@pytest.mark.parametrize('value, expected', [
    (OrderedDict([(b'a', b''), (b'b', b'')]), 'A2416140416240'),
    (OrderedDict([(b'b', b''), (b'a', b'')]), 'A2416140416240'),
    (OrderedDict([(u'a', u''), (u'b', u'')]), 'a2616160616260'),
    (OrderedDict([(u'b', u''), (u'a', u'')]), 'a2616160616260'),
    (OrderedDict([(b'00001', u''), (b'002', u'')]), 'A2433030326045303030303160'),
    (OrderedDict([(255, 0), (2, 0)]), 'a2020018ff00')
], ids=['bytes in order', 'bytes out of order', 'text in order',
        'text out of order', 'byte length', 'integer keys'])
def test_ordered_map(value, expected):
    expected = unhexlify(expected)
    assert dumps(value, canonical=True) == expected


@pytest.mark.parametrize('value, expected', [
    (3.5, 'F94300'),
    (100000.0, 'FA47C35000'),
    (3.8, 'FB400E666666666666'),
    (float('inf'), 'f97c00'),
    (float('nan'), 'f97e00'),
    (float('-inf'), 'f9fc00'),
    (float.fromhex('0x1.0p-24'), 'f90001'),
    (float.fromhex('0x1.4p-24'), 'fa33a00000'),
    (float.fromhex('0x1.ff8p-63'), 'fa207fc000')
], ids=['float 16', 'float 32', 'float 64', 'inf', 'nan', '-inf',
        'float 16 minimum positive subnormal', 'mantissa o/f to 32',
        'exponent o/f to 32'])
def test_minimal_floats(value, expected):
    expected = unhexlify(expected)
    assert dumps(value, canonical=True) == expected


def test_tuple_key():
    assert dumps({(2, 1): u''}) == unhexlify('a182020160')


@pytest.mark.parametrize('frozen', [False, True], ids=['set', 'frozenset'])
def test_set(frozen):
    value = {u'a', u'b', u'c'}
    if frozen:
        value = frozenset(value)

    serialized = dumps(value)
    assert len(serialized) == 10
    assert serialized.startswith(unhexlify('d9010283'))


@pytest.mark.parametrize('frozen', [False, True], ids=['set', 'frozenset'])
def test_canonical_set(frozen):
    value = {u'y', u'x', u'aa', u'a'}
    if frozen:
        value = frozenset(value)

    serialized = dumps(value, canonical=True)
    assert serialized == unhexlify('d9010284616161786179626161')
import pytest

from cbor2.types import CBORTag, CBORSimpleValue


def test_tag_repr():
    assert repr(CBORTag(600, 'blah')) == "CBORTag(600, 'blah')"


def test_tag_equals():
    tag1 = CBORTag(500, ['foo'])
    tag2 = CBORTag(500, ['foo'])
    tag3 = CBORTag(500, ['bar'])
    assert tag1 == tag2
    assert not tag1 == tag3
    assert not tag1 == 500


def test_simple_value_repr():
    assert repr(CBORSimpleValue(1)) == "CBORSimpleValue(1)"


def test_simple_value_equals():
    tag1 = CBORSimpleValue(1)
    tag2 = CBORSimpleValue(1)
    tag3 = CBORSimpleValue(21)
    assert tag1 == tag2
    assert tag1 == 1
    assert not tag1 == tag3
    assert not tag1 == 21
    assert not tag2 == "21"


def test_simple_value_too_big():
    exc = pytest.raises(TypeError, CBORSimpleValue, 256)
    assert str(exc.value) == 'simple value too big'
[tox]
envlist = py27, py33, py34, py35, py36, pypy, pypy3, flake8
skip_missing_interpreters = true

[testenv]
commands = python -m pytest {posargs}
extras = test

[testenv:flake8]
deps = flake8
commands = flake8 cbor2 tests
skip_install = true
#require test-repo

  $ . "$TESTDIR/helpers-testrepo.sh"
  $ cd "$TESTDIR"/..
  $ testrepohg files 'set:(**.py)' \
  > -X hgdemandimport/demandimportpy2.py \
  > -X mercurial/thirdparty/cbor \
  > | sed 's|\\|/|g' | xargs $PYTHON contrib/check-py3-compat.py
  contrib/python-zstandard/setup.py not using absolute_import
  contrib/python-zstandard/setup_zstd.py not using absolute_import
  contrib/python-zstandard/tests/common.py not using absolute_import
  contrib/python-zstandard/tests/test_buffer_util.py not using absolute_import
  contrib/python-zstandard/tests/test_compressor.py not using absolute_import
  contrib/python-zstandard/tests/test_compressor_fuzzing.py not using absolute_import
  contrib/python-zstandard/tests/test_data_structures.py not using absolute_import
  $ pyflakes test.py 2>/dev/null | "$TESTDIR/filterpyflakes.py"
  test.py:1: undefined name 'undefinedname'
  $ cd "`dirname "$TESTDIR"`"
  $ testrepohg locate 'set:**.py or grep("^#!.*python")' \
  > -X hgext/fsmonitor/pywatchman \
  > -X mercurial/pycompat.py -X contrib/python-zstandard \
  > -X mercurial/thirdparty/cbor \
  > 2>/dev/null \
  > | xargs pyflakes 2>/dev/null | "$TESTDIR/filterpyflakes.py"