Kiln » Kiln Extensions
Pushed to 2 repositories · Contained in tip

upgrade to Kiln 2.5.122 extensions

Changeset 08a3cb8d3d4a

Parent cfd62a21bd59

by Benjamin Pollack <benjamin@fogcreek.com>

Changes to 24 files · Showing diff from parent cfd62a21bd59

Change 1 of 1: .hgeol
@@ -0,0 +1,6 @@
+[patterns]
+**.py = native
+**.py.out = native
+
+[repository]
+native = LF
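The new `.hgeol` file maps `*.py` and `*.py.out` files to the repository-native line-ending style and declares that native style to be LF. Note that `.hgeol` is only honored by clients with Mercurial's bundled `eol` extension enabled; a minimal sketch of the relevant `hgrc` stanza (assuming a stock Mercurial client — Kiln may handle this server-side):

```ini
# Enable the bundled eol extension so .hgeol patterns are honored
# on checkout and commit.
[extensions]
eol =
```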
Change 1 of 1: _custom/json/__init__.py
@@ -1,318 +1,318 @@
+r"""A simple, fast, extensible JSON encoder and decoder
+
+JSON (JavaScript Object Notation) <http://json.org> is a subset of
+JavaScript syntax (ECMA-262 3rd edition) used as a lightweight data
+interchange format.
+
+json exposes an API familiar to uses of the standard library
+marshal and pickle modules.
+
+Encoding basic Python object hierarchies::
+
+    >>> import json
+    >>> json.dumps(['foo', {'bar': ('baz', None, 1.0, 2)}])
+    '["foo", {"bar": ["baz", null, 1.0, 2]}]'
+    >>> print json.dumps("\"foo\bar")
+    "\"foo\bar"
+    >>> print json.dumps(u'\u1234')
+    "\u1234"
+    >>> print json.dumps('\\')
+    "\\"
+    >>> print json.dumps({"c": 0, "b": 0, "a": 0}, sort_keys=True)
+    {"a": 0, "b": 0, "c": 0}
+    >>> from StringIO import StringIO
+    >>> io = StringIO()
+    >>> json.dump(['streaming API'], io)
+    >>> io.getvalue()
+    '["streaming API"]'
+
+Compact encoding::
+
+    >>> import json
+    >>> json.dumps([1,2,3,{'4': 5, '6': 7}], separators=(',',':'))
+    '[1,2,3,{"4":5,"6":7}]'
+
+Pretty printing (using repr() because of extraneous whitespace in the output)::
+
+    >>> import json
+    >>> print repr(json.dumps({'4': 5, '6': 7}, sort_keys=True, indent=4))
+    '{\n    "4": 5, \n    "6": 7\n}'
+
+Decoding JSON::
+
+    >>> import json
+    >>> json.loads('["foo", {"bar":["baz", null, 1.0, 2]}]')
+    [u'foo', {u'bar': [u'baz', None, 1.0, 2]}]
+    >>> json.loads('"\\"foo\\bar"')
+    u'"foo\x08ar'
+    >>> from StringIO import StringIO
+    >>> io = StringIO('["streaming API"]')
+    >>> json.load(io)
+    [u'streaming API']
+
+Specializing JSON object decoding::
+
+    >>> import json
+    >>> def as_complex(dct):
+    ...     if '__complex__' in dct:
+    ...         return complex(dct['real'], dct['imag'])
+    ...     return dct
+    ...
+    >>> json.loads('{"__complex__": true, "real": 1, "imag": 2}',
+    ...     object_hook=as_complex)
+    (1+2j)
+    >>> import decimal
+    >>> json.loads('1.1', parse_float=decimal.Decimal)
+    Decimal('1.1')
+
+Extending JSONEncoder::
+
+    >>> import json
+    >>> class ComplexEncoder(json.JSONEncoder):
+    ...     def default(self, obj):
+    ...         if isinstance(obj, complex):
+    ...             return [obj.real, obj.imag]
+    ...         return json.JSONEncoder.default(self, obj)
+    ...
+    >>> dumps(2 + 1j, cls=ComplexEncoder)
+    '[2.0, 1.0]'
+    >>> ComplexEncoder().encode(2 + 1j)
+    '[2.0, 1.0]'
+    >>> list(ComplexEncoder().iterencode(2 + 1j))
+    ['[', '2.0', ', ', '1.0', ']']
+
+
+Using json.tool from the shell to validate and
+pretty-print::
+
+    $ echo '{"json":"obj"}' | python -mjson.tool
+    {
+        "json": "obj"
+    }
+    $ echo '{ 1.2:3.4}' | python -mjson.tool
+    Expecting property name: line 1 column 2 (char 2)
+
+Note that the JSON produced by this module's default settings
+is a subset of YAML, so it may be used as a serializer for that as well.
+
+"""
+
+__version__ = '1.9'
+__all__ = [
+    'dump', 'dumps', 'load', 'loads',
+    'JSONDecoder', 'JSONEncoder',
+]
+
+__author__ = 'Bob Ippolito <bob@redivi.com>'
+
+from .decoder import JSONDecoder
+from .encoder import JSONEncoder
+
+_default_encoder = JSONEncoder(
+    skipkeys=False,
+    ensure_ascii=True,
+    check_circular=True,
+    allow_nan=True,
+    indent=None,
+    separators=None,
+    encoding='utf-8',
+    default=None,
+)
+
+def dump(obj, fp, skipkeys=False, ensure_ascii=True, check_circular=True,
+        allow_nan=True, cls=None, indent=None, separators=None,
+        encoding='utf-8', default=None, **kw):
+    """Serialize ``obj`` as a JSON formatted stream to ``fp`` (a
+    ``.write()``-supporting file-like object).
+
+    If ``skipkeys`` is ``True`` then ``dict`` keys that are not basic types
+    (``str``, ``unicode``, ``int``, ``long``, ``float``, ``bool``, ``None``)
+    will be skipped instead of raising a ``TypeError``.
+
+    If ``ensure_ascii`` is ``False``, then the some chunks written to ``fp``
+    may be ``unicode`` instances, subject to normal Python ``str`` to
+    ``unicode`` coercion rules. Unless ``fp.write()`` explicitly
+    understands ``unicode`` (as in ``codecs.getwriter()``) this is likely
+    to cause an error.
+
+    If ``check_circular`` is ``False``, then the circular reference check
+    for container types will be skipped and a circular reference will
+    result in an ``OverflowError`` (or worse).
+
+    If ``allow_nan`` is ``False``, then it will be a ``ValueError`` to
+    serialize out of range ``float`` values (``nan``, ``inf``, ``-inf``)
+    in strict compliance of the JSON specification, instead of using the
+    JavaScript equivalents (``NaN``, ``Infinity``, ``-Infinity``).
+
+    If ``indent`` is a non-negative integer, then JSON array elements and object
+    members will be pretty-printed with that indent level. An indent level
+    of 0 will only insert newlines. ``None`` is the most compact representation.
+
+    If ``separators`` is an ``(item_separator, dict_separator)`` tuple
+    then it will be used instead of the default ``(', ', ': ')`` separators.
+    ``(',', ':')`` is the most compact JSON representation.
+
+    ``encoding`` is the character encoding for str instances, default is UTF-8.
+
+    ``default(obj)`` is a function that should return a serializable version
+    of obj or raise TypeError. The default simply raises TypeError.
+
+    To use a custom ``JSONEncoder`` subclass (e.g. one that overrides the
+    ``.default()`` method to serialize additional types), specify it with
+    the ``cls`` kwarg.
+
+    """
+    # cached encoder
+    if (skipkeys is False and ensure_ascii is True and
+        check_circular is True and allow_nan is True and
+        cls is None and indent is None and separators is None and
+        encoding == 'utf-8' and default is None and not kw):
+        iterable = _default_encoder.iterencode(obj)
+    else:
+        if cls is None:
+            cls = JSONEncoder
+        iterable = cls(skipkeys=skipkeys, ensure_ascii=ensure_ascii,
+            check_circular=check_circular, allow_nan=allow_nan, indent=indent,
+            separators=separators, encoding=encoding,
+            default=default, **kw).iterencode(obj)
+    # could accelerate with writelines in some versions of Python, at
+    # a debuggability cost
+    for chunk in iterable:
+        fp.write(chunk)
+
+
+def dumps(obj, skipkeys=False, ensure_ascii=True, check_circular=True,
+        allow_nan=True, cls=None, indent=None, separators=None,
+        encoding='utf-8', default=None, **kw):
+    """Serialize ``obj`` to a JSON formatted ``str``.
+
+    If ``skipkeys`` is ``True`` then ``dict`` keys that are not basic types
+    (``str``, ``unicode``, ``int``, ``long``, ``float``, ``bool``, ``None``)
+    will be skipped instead of raising a ``TypeError``.
+
+    If ``ensure_ascii`` is ``False``, then the return value will be a
+    ``unicode`` instance subject to normal Python ``str`` to ``unicode``
+    coercion rules instead of being escaped to an ASCII ``str``.
+
+    If ``check_circular`` is ``False``, then the circular reference check
+    for container types will be skipped and a circular reference will
+    result in an ``OverflowError`` (or worse).
+
+    If ``allow_nan`` is ``False``, then it will be a ``ValueError`` to
+    serialize out of range ``float`` values (``nan``, ``inf``, ``-inf``) in
+    strict compliance of the JSON specification, instead of using the
+    JavaScript equivalents (``NaN``, ``Infinity``, ``-Infinity``).
+
+    If ``indent`` is a non-negative integer, then JSON array elements and
+    object members will be pretty-printed with that indent level. An indent
+    level of 0 will only insert newlines. ``None`` is the most compact
+    representation.
+
+    If ``separators`` is an ``(item_separator, dict_separator)`` tuple
+    then it will be used instead of the default ``(', ', ': ')`` separators.
+    ``(',', ':')`` is the most compact JSON representation.
+
+    ``encoding`` is the character encoding for str instances, default is UTF-8.
+
+    ``default(obj)`` is a function that should return a serializable version
+    of obj or raise TypeError. The default simply raises TypeError.
+
+    To use a custom ``JSONEncoder`` subclass (e.g. one that overrides the
+    ``.default()`` method to serialize additional types), specify it with
+    the ``cls`` kwarg.
+
+    """
+    # cached encoder
+    if (skipkeys is False and ensure_ascii is True and
+        check_circular is True and allow_nan is True and
+        cls is None and indent is None and separators is None and
+        encoding == 'utf-8' and default is None and not kw):
+        return _default_encoder.encode(obj)
+    if cls is None:
+        cls = JSONEncoder
+    return cls(
+        skipkeys=skipkeys, ensure_ascii=ensure_ascii,
+        check_circular=check_circular, allow_nan=allow_nan, indent=indent,
+        separators=separators, encoding=encoding, default=default,
+        **kw).encode(obj)
+
+
+_default_decoder = JSONDecoder(encoding=None, object_hook=None)
+
+
+def load(fp, encoding=None, cls=None, object_hook=None, parse_float=None,
+        parse_int=None, parse_constant=None, **kw):
+    """Deserialize ``fp`` (a ``.read()``-supporting file-like object
+    containing a JSON document) to a Python object.
+
+    If the contents of ``fp`` is encoded with an ASCII based encoding other
+    than utf-8 (e.g. latin-1), then an appropriate ``encoding`` name must
+    be specified. Encodings that are not ASCII based (such as UCS-2) are
+    not allowed, and should be wrapped with
+    ``codecs.getreader(fp)(encoding)``, or simply decoded to a ``unicode``
+    object and passed to ``loads()``
+
+    ``object_hook`` is an optional function that will be called with the
+    result of any object literal decode (a ``dict``). The return value of
+    ``object_hook`` will be used instead of the ``dict``. This feature
+    can be used to implement custom decoders (e.g. JSON-RPC class hinting).
+
+    To use a custom ``JSONDecoder`` subclass, specify it with the ``cls``
+    kwarg.
+
+    """
+    return loads(fp.read(),
+        encoding=encoding, cls=cls, object_hook=object_hook,
+        parse_float=parse_float, parse_int=parse_int,
+        parse_constant=parse_constant, **kw)
+
+
+def loads(s, encoding=None, cls=None, object_hook=None, parse_float=None,
+        parse_int=None, parse_constant=None, **kw):
+    """Deserialize ``s`` (a ``str`` or ``unicode`` instance containing a JSON
+    document) to a Python object.
+
+    If ``s`` is a ``str`` instance and is encoded with an ASCII based encoding
+    other than utf-8 (e.g. latin-1) then an appropriate ``encoding`` name
+    must be specified. Encodings that are not ASCII based (such as UCS-2)
+    are not allowed and should be decoded to ``unicode`` first.
+
+    ``object_hook`` is an optional function that will be called with the
+    result of any object literal decode (a ``dict``). The return value of
+    ``object_hook`` will be used instead of the ``dict``. This feature
+    can be used to implement custom decoders (e.g. JSON-RPC class hinting).
+
+    ``parse_float``, if specified, will be called with the string
+    of every JSON float to be decoded. By default this is equivalent to
+    float(num_str). This can be used to use another datatype or parser
+    for JSON floats (e.g. decimal.Decimal).
+
+    ``parse_int``, if specified, will be called with the string
+    of every JSON int to be decoded. By default this is equivalent to
+    int(num_str). This can be used to use another datatype or parser
+    for JSON integers (e.g. float).
+
+    ``parse_constant``, if specified, will be called with one of the
+    following strings: -Infinity, Infinity, NaN, null, true, false.
+    This can be used to raise an exception if invalid JSON numbers
+    are encountered.
+
+    To use a custom ``JSONDecoder`` subclass, specify it with the ``cls``
+    kwarg.
+
+    """
+    if (cls is None and encoding is None and object_hook is None and
+            parse_int is None and parse_float is None and
+            parse_constant is None and not kw):
+        return _default_decoder.decode(s)
+    if cls is None:
+        cls = JSONDecoder
+    if object_hook is not None:
+        kw['object_hook'] = object_hook
+    if parse_float is not None:
+        kw['parse_float'] = parse_float
+    if parse_int is not None:
+        kw['parse_int'] = parse_int
+    if parse_constant is not None:
+        kw['parse_constant'] = parse_constant
+    return cls(encoding=encoding, **kw).decode(s)
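The doctests in the module docstring above target Python 2. As a quick sanity check, the same API calls run unchanged against the Python 3 standard-library json module, whose interface this vendored copy mirrors (only the import source differs):

```python
import decimal
import json

# Compact encoding, as in the "Compact encoding" doctest:
# custom separators drop the default spaces.
compact = json.dumps([1, 2, 3, {'4': 5, '6': 7}], separators=(',', ':'))

# Specialized object decoding: object_hook is called with every
# decoded JSON object (dict) and may return a replacement value.
def as_complex(dct):
    if '__complex__' in dct:
        return complex(dct['real'], dct['imag'])
    return dct

value = json.loads('{"__complex__": true, "real": 1, "imag": 2}',
                   object_hook=as_complex)

# parse_float swaps in an alternative number type for JSON floats.
precise = json.loads('1.1', parse_float=decimal.Decimal)
```

`compact` is `'[1,2,3,{"4":5,"6":7}]'`, `value` is `(1+2j)`, and `precise` is `Decimal('1.1')`, matching the docstring's expected outputs.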
Change 1 of 1: _custom/json/decoder.py
@@ -1,339 +1,339 @@
-"""Implementation of JSONDecoder
-"""
-
-import re
-import sys
-
-from json.scanner import Scanner, pattern
-try:
-    from _json import scanstring as c_scanstring
-except ImportError:
-    c_scanstring = None
-
-__all__ = ['JSONDecoder']
-
-FLAGS = re.VERBOSE | re.MULTILINE | re.DOTALL
-
-NaN, PosInf, NegInf = float('nan'), float('inf'), float('-inf')
-
-
-def linecol(doc, pos):
-    lineno = doc.count('\n', 0, pos) + 1
-    if lineno == 1:
-        colno = pos
-    else:
-        colno = pos - doc.rindex('\n', 0, pos)
-    return lineno, colno
-
-
-def errmsg(msg, doc, pos, end=None):
-    lineno, colno = linecol(doc, pos)
-    if end is None:
-        fmt = '{0}: line {1} column {2} (char {3})'
-        return fmt.format(msg, lineno, colno, pos)
-    endlineno, endcolno = linecol(doc, end)
-    fmt = '{0}: line {1} column {2} - line {3} column {4} (char {5} - {6})'
-    return fmt.format(msg, lineno, colno, endlineno, endcolno, pos, end)
-
-
-_CONSTANTS = {
-    '-Infinity': NegInf,
-    'Infinity': PosInf,
-    'NaN': NaN,
-    'true': True,
-    'false': False,
-    'null': None,
-}
-
-
-def JSONConstant(match, context, c=_CONSTANTS):
-    s = match.group(0)
-    fn = getattr(context, 'parse_constant', None)
-    if fn is None:
-        rval = c[s]
-    else:
-        rval = fn(s)
-    return rval, None
-pattern('(-?Infinity|NaN|true|false|null)')(JSONConstant)
-
-
-def JSONNumber(match, context):
-    match = JSONNumber.regex.match(match.string, *match.span())
-    integer, frac, exp = match.groups()
-    if frac or exp:
-        fn = getattr(context, 'parse_float', None) or float
-        res = fn(integer + (frac or '') + (exp or ''))
-    else:
-        fn = getattr(context, 'parse_int', None) or int
-        res = fn(integer)
-    return res, None
-pattern(r'(-?(?:0|[1-9]\d*))(\.\d+)?([eE][-+]?\d+)?')(JSONNumber)
-
-
-STRINGCHUNK = re.compile(r'(.*?)(["\\\x00-\x1f])', FLAGS)
-BACKSLASH = {
-    '"': u'"', '\\': u'\\', '/': u'/',
-    'b': u'\b', 'f': u'\f', 'n': u'\n', 'r': u'\r', 't': u'\t',
-}
-
-DEFAULT_ENCODING = "utf-8"
-
-
-def py_scanstring(s, end, encoding=None, strict=True, _b=BACKSLASH,
-        _m=STRINGCHUNK.match):
-    if encoding is None:
-        encoding = DEFAULT_ENCODING
-    chunks = []
-    _append = chunks.append
-    begin = end - 1
-    while 1:
-        chunk = _m(s, end)
-        if chunk is None:
-            raise ValueError(
-                errmsg("Unterminated string starting at", s, begin))
-        end = chunk.end()
-        content, terminator = chunk.groups()
-        if content:
-            if not isinstance(content, unicode):
-                content = unicode(content, encoding)
-            _append(content)
-        if terminator == '"':
-            break
-        elif terminator != '\\':
-            if strict:
-                msg = "Invalid control character {0!r} at".format(terminator)
-                raise ValueError(errmsg(msg, s, end))
-            else:
-                _append(terminator)
-                continue
-        try:
-            esc = s[end]
-        except IndexError:
-            raise ValueError(
-                errmsg("Unterminated string starting at", s, begin))
-        if esc != 'u':
-            try:
-                m = _b[esc]
-            except KeyError:
-                msg = "Invalid \\escape: {0!r}".format(esc)
-                raise ValueError(errmsg(msg, s, end))
-            end += 1
-        else:
-            esc = s[end + 1:end + 5]
-            next_end = end + 5
-            msg = "Invalid \\uXXXX escape"
-            try:
-                if len(esc) != 4:
-                    raise ValueError
-                uni = int(esc, 16)
-                if 0xd800 <= uni <= 0xdbff and sys.maxunicode > 65535:
-                    msg = "Invalid \\uXXXX\\uXXXX surrogate pair"
-                    if not s[end + 5:end + 7] == '\\u':
-                        raise ValueError
-                    esc2 = s[end + 7:end + 11]
-                    if len(esc2) != 4:
-                        raise ValueError
-                    uni2 = int(esc2, 16)
-                    uni = 0x10000 + (((uni - 0xd800) << 10) | (uni2 - 0xdc00))
-                    next_end += 6
-                m = unichr(uni)
-            except ValueError:
-                raise ValueError(errmsg(msg, s, end))
-            end = next_end
-        _append(m)
-    return u''.join(chunks), end
-
-
-# Use speedup
-if c_scanstring is not None:
-    scanstring = c_scanstring
-else:
-    scanstring = py_scanstring
-
-def JSONString(match, context):
-    encoding = getattr(context, 'encoding', None)
-    strict = getattr(context, 'strict', True)
-    return scanstring(match.string, match.end(), encoding, strict)
-pattern(r'"')(JSONString)
-
-
-WHITESPACE = re.compile(r'\s*', FLAGS)
-
-
-def JSONObject(match, context, _w=WHITESPACE.match):
-    pairs = {}
-    s =
match.string - end = _w(s, match.end()).end() - nextchar = s[end:end + 1] - # Trivial empty object - if nextchar == '}': - return pairs, end + 1 - if nextchar != '"': - raise ValueError(errmsg("Expecting property name", s, end)) - end += 1 - encoding = getattr(context, 'encoding', None) - strict = getattr(context, 'strict', True) - iterscan = JSONScanner.iterscan - while True: - key, end = scanstring(s, end, encoding, strict) - end = _w(s, end).end() - if s[end:end + 1] != ':': - raise ValueError(errmsg("Expecting : delimiter", s, end)) - end = _w(s, end + 1).end() - try: - value, end = iterscan(s, idx=end, context=context).next() - except StopIteration: - raise ValueError(errmsg("Expecting object", s, end)) - pairs[key] = value - end = _w(s, end).end() - nextchar = s[end:end + 1] - end += 1 - if nextchar == '}': - break - if nextchar != ',': - raise ValueError(errmsg("Expecting , delimiter", s, end - 1)) - end = _w(s, end).end() - nextchar = s[end:end + 1] - end += 1 - if nextchar != '"': - raise ValueError(errmsg("Expecting property name", s, end - 1)) - object_hook = getattr(context, 'object_hook', None) - if object_hook is not None: - pairs = object_hook(pairs) - return pairs, end -pattern(r'{')(JSONObject) - - -def JSONArray(match, context, _w=WHITESPACE.match): - values = [] - s = match.string - end = _w(s, match.end()).end() - # Look-ahead for trivial empty array - nextchar = s[end:end + 1] - if nextchar == ']': - return values, end + 1 - iterscan = JSONScanner.iterscan - while True: - try: - value, end = iterscan(s, idx=end, context=context).next() - except StopIteration: - raise ValueError(errmsg("Expecting object", s, end)) - values.append(value) - end = _w(s, end).end() - nextchar = s[end:end + 1] - end += 1 - if nextchar == ']': - break - if nextchar != ',': - raise ValueError(errmsg("Expecting , delimiter", s, end)) - end = _w(s, end).end() - return values, end -pattern(r'\[')(JSONArray) - - -ANYTHING = [ - JSONObject, - JSONArray, - JSONString, - 
JSONConstant, - JSONNumber, -] - -JSONScanner = Scanner(ANYTHING) - - -class JSONDecoder(object): - """Simple JSON <http://json.org> decoder - - Performs the following translations in decoding by default: - - +---------------+-------------------+ - | JSON | Python | - +===============+===================+ - | object | dict | - +---------------+-------------------+ - | array | list | - +---------------+-------------------+ - | string | unicode | - +---------------+-------------------+ - | number (int) | int, long | - +---------------+-------------------+ - | number (real) | float | - +---------------+-------------------+ - | true | True | - +---------------+-------------------+ - | false | False | - +---------------+-------------------+ - | null | None | - +---------------+-------------------+ - - It also understands ``NaN``, ``Infinity``, and ``-Infinity`` as - their corresponding ``float`` values, which is outside the JSON spec. - """ - - _scanner = Scanner(ANYTHING) - __all__ = ['__init__', 'decode', 'raw_decode'] - - def __init__(self, encoding=None, object_hook=None, parse_float=None, - parse_int=None, parse_constant=None, strict=True): - """``encoding`` determines the encoding used to interpret any ``str`` - objects decoded by this instance (utf-8 by default). It has no - effect when decoding ``unicode`` objects. - - Note that currently only encodings that are a superset of ASCII work, - strings of other encodings should be passed in as ``unicode``. - - ``object_hook``, if specified, will be called with the result of - every JSON object decoded and its return value will be used in - place of the given ``dict``. This can be used to provide custom - deserializations (e.g. to support JSON-RPC class hinting). - - ``parse_float``, if specified, will be called with the string - of every JSON float to be decoded. By default this is equivalent to - float(num_str). This can be used to use another datatype or parser - for JSON floats (e.g. decimal.Decimal). 
- - ``parse_int``, if specified, will be called with the string - of every JSON int to be decoded. By default this is equivalent to - int(num_str). This can be used to use another datatype or parser - for JSON integers (e.g. float). - - ``parse_constant``, if specified, will be called with one of the - following strings: -Infinity, Infinity, NaN, null, true, false. - This can be used to raise an exception if invalid JSON numbers - are encountered. - - """ - self.encoding = encoding - self.object_hook = object_hook - self.parse_float = parse_float - self.parse_int = parse_int - self.parse_constant = parse_constant - self.strict = strict - - def decode(self, s, _w=WHITESPACE.match): - """ - Return the Python representation of ``s`` (a ``str`` or ``unicode`` - instance containing a JSON document) - - """ - obj, end = self.raw_decode(s, idx=_w(s, 0).end()) - end = _w(s, end).end() - if end != len(s): - raise ValueError(errmsg("Extra data", s, end, len(s))) - return obj - - def raw_decode(self, s, **kw): - """Decode a JSON document from ``s`` (a ``str`` or ``unicode`` beginning - with a JSON document) and return a 2-tuple of the Python - representation and the index in ``s`` where the document ended. - - This can be used to decode a JSON document from a string that may - have extraneous data at the end. 
- - """ - kw.setdefault('context', self) - try: - obj, end = self._scanner.iterscan(s, **kw).next() - except StopIteration: - raise ValueError("No JSON object could be decoded") - return obj, end +"""Implementation of JSONDecoder +""" + +import re +import sys + +from json.scanner import Scanner, pattern +try: + from _json import scanstring as c_scanstring +except ImportError: + c_scanstring = None + +__all__ = ['JSONDecoder'] + +FLAGS = re.VERBOSE | re.MULTILINE | re.DOTALL + +NaN, PosInf, NegInf = float('nan'), float('inf'), float('-inf') + + +def linecol(doc, pos): + lineno = doc.count('\n', 0, pos) + 1 + if lineno == 1: + colno = pos + else: + colno = pos - doc.rindex('\n', 0, pos) + return lineno, colno + + +def errmsg(msg, doc, pos, end=None): + lineno, colno = linecol(doc, pos) + if end is None: + fmt = '{0}: line {1} column {2} (char {3})' + return fmt.format(msg, lineno, colno, pos) + endlineno, endcolno = linecol(doc, end) + fmt = '{0}: line {1} column {2} - line {3} column {4} (char {5} - {6})' + return fmt.format(msg, lineno, colno, endlineno, endcolno, pos, end) + + +_CONSTANTS = { + '-Infinity': NegInf, + 'Infinity': PosInf, + 'NaN': NaN, + 'true': True, + 'false': False, + 'null': None, +} + + +def JSONConstant(match, context, c=_CONSTANTS): + s = match.group(0) + fn = getattr(context, 'parse_constant', None) + if fn is None: + rval = c[s] + else: + rval = fn(s) + return rval, None +pattern('(-?Infinity|NaN|true|false|null)')(JSONConstant) + + +def JSONNumber(match, context): + match = JSONNumber.regex.match(match.string, *match.span()) + integer, frac, exp = match.groups() + if frac or exp: + fn = getattr(context, 'parse_float', None) or float + res = fn(integer + (frac or '') + (exp or '')) + else: + fn = getattr(context, 'parse_int', None) or int + res = fn(integer) + return res, None +pattern(r'(-?(?:0|[1-9]\d*))(\.\d+)?([eE][-+]?\d+)?')(JSONNumber) + + +STRINGCHUNK = re.compile(r'(.*?)(["\\\x00-\x1f])', FLAGS) +BACKSLASH = { + '"': u'"', '\\': 
u'\\', '/': u'/', + 'b': u'\b', 'f': u'\f', 'n': u'\n', 'r': u'\r', 't': u'\t', +} + +DEFAULT_ENCODING = "utf-8" + + +def py_scanstring(s, end, encoding=None, strict=True, _b=BACKSLASH, _m=STRINGCHUNK.match): + if encoding is None: + encoding = DEFAULT_ENCODING + chunks = [] + _append = chunks.append + begin = end - 1 + while 1: + chunk = _m(s, end) + if chunk is None: + raise ValueError( + errmsg("Unterminated string starting at", s, begin)) + end = chunk.end() + content, terminator = chunk.groups() + if content: + if not isinstance(content, unicode): + content = unicode(content, encoding) + _append(content) + if terminator == '"': + break + elif terminator != '\\': + if strict: + msg = "Invalid control character {0!r} at".format(terminator) + raise ValueError(errmsg(msg, s, end)) + else: + _append(terminator) + continue + try: + esc = s[end] + except IndexError: + raise ValueError( + errmsg("Unterminated string starting at", s, begin)) + if esc != 'u': + try: + m = _b[esc] + except KeyError: + msg = "Invalid \\escape: {0!r}".format(esc) + raise ValueError(errmsg(msg, s, end)) + end += 1 + else: + esc = s[end + 1:end + 5] + next_end = end + 5 + msg = "Invalid \\uXXXX escape" + try: + if len(esc) != 4: + raise ValueError + uni = int(esc, 16) + if 0xd800 <= uni <= 0xdbff and sys.maxunicode > 65535: + msg = "Invalid \\uXXXX\\uXXXX surrogate pair" + if not s[end + 5:end + 7] == '\\u': + raise ValueError + esc2 = s[end + 7:end + 11] + if len(esc2) != 4: + raise ValueError + uni2 = int(esc2, 16) + uni = 0x10000 + (((uni - 0xd800) << 10) | (uni2 - 0xdc00)) + next_end += 6 + m = unichr(uni) + except ValueError: + raise ValueError(errmsg(msg, s, end)) + end = next_end + _append(m) + return u''.join(chunks), end + + +# Use speedup +if c_scanstring is not None: + scanstring = c_scanstring +else: + scanstring = py_scanstring + +def JSONString(match, context): + encoding = getattr(context, 'encoding', None) + strict = getattr(context, 'strict', True) + return 
scanstring(match.string, match.end(), encoding, strict) +pattern(r'"')(JSONString) + + +WHITESPACE = re.compile(r'\s*', FLAGS) + + +def JSONObject(match, context, _w=WHITESPACE.match): + pairs = {} + s = match.string + end = _w(s, match.end()).end() + nextchar = s[end:end + 1] + # Trivial empty object + if nextchar == '}': + return pairs, end + 1 + if nextchar != '"': + raise ValueError(errmsg("Expecting property name", s, end)) + end += 1 + encoding = getattr(context, 'encoding', None) + strict = getattr(context, 'strict', True) + iterscan = JSONScanner.iterscan + while True: + key, end = scanstring(s, end, encoding, strict) + end = _w(s, end).end() + if s[end:end + 1] != ':': + raise ValueError(errmsg("Expecting : delimiter", s, end)) + end = _w(s, end + 1).end() + try: + value, end = iterscan(s, idx=end, context=context).next() + except StopIteration: + raise ValueError(errmsg("Expecting object", s, end)) + pairs[key] = value + end = _w(s, end).end() + nextchar = s[end:end + 1] + end += 1 + if nextchar == '}': + break + if nextchar != ',': + raise ValueError(errmsg("Expecting , delimiter", s, end - 1)) + end = _w(s, end).end() + nextchar = s[end:end + 1] + end += 1 + if nextchar != '"': + raise ValueError(errmsg("Expecting property name", s, end - 1)) + object_hook = getattr(context, 'object_hook', None) + if object_hook is not None: + pairs = object_hook(pairs) + return pairs, end +pattern(r'{')(JSONObject) + + +def JSONArray(match, context, _w=WHITESPACE.match): + values = [] + s = match.string + end = _w(s, match.end()).end() + # Look-ahead for trivial empty array + nextchar = s[end:end + 1] + if nextchar == ']': + return values, end + 1 + iterscan = JSONScanner.iterscan + while True: + try: + value, end = iterscan(s, idx=end, context=context).next() + except StopIteration: + raise ValueError(errmsg("Expecting object", s, end)) + values.append(value) + end = _w(s, end).end() + nextchar = s[end:end + 1] + end += 1 + if nextchar == ']': + break + if nextchar != 
',': + raise ValueError(errmsg("Expecting , delimiter", s, end)) + end = _w(s, end).end() + return values, end +pattern(r'\[')(JSONArray) + + +ANYTHING = [ + JSONObject, + JSONArray, + JSONString, + JSONConstant, + JSONNumber, +] + +JSONScanner = Scanner(ANYTHING) + + +class JSONDecoder(object): + """Simple JSON <http://json.org> decoder + + Performs the following translations in decoding by default: + + +---------------+-------------------+ + | JSON | Python | + +===============+===================+ + | object | dict | + +---------------+-------------------+ + | array | list | + +---------------+-------------------+ + | string | unicode | + +---------------+-------------------+ + | number (int) | int, long | + +---------------+-------------------+ + | number (real) | float | + +---------------+-------------------+ + | true | True | + +---------------+-------------------+ + | false | False | + +---------------+-------------------+ + | null | None | + +---------------+-------------------+ + + It also understands ``NaN``, ``Infinity``, and ``-Infinity`` as + their corresponding ``float`` values, which is outside the JSON spec. + """ + + _scanner = Scanner(ANYTHING) + __all__ = ['__init__', 'decode', 'raw_decode'] + + def __init__(self, encoding=None, object_hook=None, parse_float=None, + parse_int=None, parse_constant=None, strict=True): + """``encoding`` determines the encoding used to interpret any ``str`` + objects decoded by this instance (utf-8 by default). It has no + effect when decoding ``unicode`` objects. + + Note that currently only encodings that are a superset of ASCII work, + strings of other encodings should be passed in as ``unicode``. + + ``object_hook``, if specified, will be called with the result of + every JSON object decoded and its return value will be used in + place of the given ``dict``. This can be used to provide custom + deserializations (e.g. to support JSON-RPC class hinting). 
+ + ``parse_float``, if specified, will be called with the string + of every JSON float to be decoded. By default this is equivalent to + float(num_str). This can be used to use another datatype or parser + for JSON floats (e.g. decimal.Decimal). + + ``parse_int``, if specified, will be called with the string + of every JSON int to be decoded. By default this is equivalent to + int(num_str). This can be used to use another datatype or parser + for JSON integers (e.g. float). + + ``parse_constant``, if specified, will be called with one of the + following strings: -Infinity, Infinity, NaN, null, true, false. + This can be used to raise an exception if invalid JSON numbers + are encountered. + + """ + self.encoding = encoding + self.object_hook = object_hook + self.parse_float = parse_float + self.parse_int = parse_int + self.parse_constant = parse_constant + self.strict = strict + + def decode(self, s, _w=WHITESPACE.match): + """ + Return the Python representation of ``s`` (a ``str`` or ``unicode`` + instance containing a JSON document) + + """ + obj, end = self.raw_decode(s, idx=_w(s, 0).end()) + end = _w(s, end).end() + if end != len(s): + raise ValueError(errmsg("Extra data", s, end, len(s))) + return obj + + def raw_decode(self, s, **kw): + """Decode a JSON document from ``s`` (a ``str`` or ``unicode`` beginning + with a JSON document) and return a 2-tuple of the Python + representation and the index in ``s`` where the document ended. + + This can be used to decode a JSON document from a string that may + have extraneous data at the end. + + """ + kw.setdefault('context', self) + try: + obj, end = self._scanner.iterscan(s, **kw).next() + except StopIteration: + raise ValueError("No JSON object could be decoded") + return obj, end
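The decoder above exposes its tuning knobs — `parse_float`, `parse_int`, `object_hook`, and the trailing-data-tolerant `raw_decode` — through the same interface the stdlib `json` module still ships. As a quick sanity check of that API (a sketch written against modern Python 3's `json`, on the assumption that it behaves equivalently for these hooks; the vendored copy in this diff targets Python 2):

```python
import json
from decimal import Decimal

# parse_float swaps in Decimal for every JSON real number;
# object_hook post-processes each decoded JSON object (dict).
decoder = json.JSONDecoder(
    parse_float=Decimal,
    object_hook=lambda d: {k.upper(): v for k, v in d.items()},
)

doc = '{"price": 9.95, "qty": 3}   trailing junk'

# raw_decode tolerates extra data after the document: it returns the
# decoded object plus the index in the string where the JSON ended,
# which is exactly what decode() uses to raise "Extra data".
obj, end = decoder.raw_decode(doc)
print(obj)                 # keys upper-cased by object_hook
print(type(obj["PRICE"]))  # Decimal, courtesy of parse_float
print(doc[end:].strip())   # the tail that plain decode() would reject
```

Calling `decoder.decode(doc)` on the same string raises `ValueError` (an `Extra data` error), which is why `raw_decode` exists as the lower-level entry point.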
Change 1 of 1 · _custom/json/encoder.py
 
@@ -1,384 +1,384 @@
-"""Implementation of JSONEncoder -""" - -import re -import math - -try: - from _json import encode_basestring_ascii as c_encode_basestring_ascii -except ImportError: - c_encode_basestring_ascii = None - -__all__ = ['JSONEncoder'] - -ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]') -ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])') -HAS_UTF8 = re.compile(r'[\x80-\xff]') -ESCAPE_DCT = { - '\\': '\\\\', - '"': '\\"', - '\b': '\\b', - '\f': '\\f', - '\n': '\\n', - '\r': '\\r', - '\t': '\\t', -} -for i in range(0x20): - ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i)) - -FLOAT_REPR = repr - -def floatstr(o, allow_nan=True): - # Check for specials. Note that this type of test is processor- and/or - # platform-specific, so do tests which don't depend on the internals. - - if math.isnan(o): - text = 'NaN' - elif math.isinf(o): - if math.copysign(1., o) == 1.: - text = 'Infinity' - else: - text = '-Infinity' - else: - return FLOAT_REPR(o) - - if not allow_nan: - msg = "Out of range float values are not JSON compliant: " + repr(o) - raise ValueError(msg) - - return text - - -def encode_basestring(s): - """Return a JSON representation of a Python string - - """ - def replace(match): - return ESCAPE_DCT[match.group(0)] - return '"' + ESCAPE.sub(replace, s) + '"' - - -def py_encode_basestring_ascii(s): - if isinstance(s, str) and HAS_UTF8.search(s) is not None: - s = s.decode('utf-8') - def replace(match): - s = match.group(0) - try: - return ESCAPE_DCT[s] - except KeyError: - n = ord(s) - if n < 0x10000: - return '\\u{0:04x}'.format(n) - else: - # surrogate pair - n -= 0x10000 - s1 = 0xd800 | ((n >> 10) & 0x3ff) - s2 = 0xdc00 | (n & 0x3ff) - return '\\u{0:04x}\\u{1:04x}'.format(s1, s2) - return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"' - - -if c_encode_basestring_ascii is not None: - encode_basestring_ascii = c_encode_basestring_ascii -else: - encode_basestring_ascii = py_encode_basestring_ascii - - -class JSONEncoder(object): - """Extensible JSON <http://json.org> 
encoder for Python data structures. - - Supports the following objects and types by default: - - +-------------------+---------------+ - | Python | JSON | - +===================+===============+ - | dict | object | - +-------------------+---------------+ - | list, tuple | array | - +-------------------+---------------+ - | str, unicode | string | - +-------------------+---------------+ - | int, long, float | number | - +-------------------+---------------+ - | True | true | - +-------------------+---------------+ - | False | false | - +-------------------+---------------+ - | None | null | - +-------------------+---------------+ - - To extend this to recognize other objects, subclass and implement a - ``.default()`` method with another method that returns a serializable - object for ``o`` if possible, otherwise it should call the superclass - implementation (to raise ``TypeError``). - - """ - __all__ = ['__init__', 'default', 'encode', 'iterencode'] - item_separator = ', ' - key_separator = ': ' - def __init__(self, skipkeys=False, ensure_ascii=True, - check_circular=True, allow_nan=True, sort_keys=False, - indent=None, separators=None, encoding='utf-8', default=None): - """Constructor for JSONEncoder, with sensible defaults. - - If skipkeys is False, then it is a TypeError to attempt - encoding of keys that are not str, int, long, float or None. If - skipkeys is True, such items are simply skipped. - - If ensure_ascii is True, the output is guaranteed to be str - objects with all incoming unicode characters escaped. If - ensure_ascii is false, the output will be unicode object. - - If check_circular is True, then lists, dicts, and custom encoded - objects will be checked for circular references during encoding to - prevent an infinite recursion (which would cause an OverflowError). - Otherwise, no such check takes place. - - If allow_nan is True, then NaN, Infinity, and -Infinity will be - encoded as such. 
This behavior is not JSON specification compliant, - but is consistent with most JavaScript based encoders and decoders. - Otherwise, it will be a ValueError to encode such floats. - - If sort_keys is True, then the output of dictionaries will be - sorted by key; this is useful for regression tests to ensure - that JSON serializations can be compared on a day-to-day basis. - - If indent is a non-negative integer, then JSON array - elements and object members will be pretty-printed with that - indent level. An indent level of 0 will only insert newlines. - None is the most compact representation. - - If specified, separators should be a (item_separator, key_separator) - tuple. The default is (', ', ': '). To get the most compact JSON - representation you should specify (',', ':') to eliminate whitespace. - - If specified, default is a function that gets called for objects - that can't otherwise be serialized. It should return a JSON encodable - version of the object or raise a ``TypeError``. - - If encoding is not None, then all input strings will be - transformed into unicode using that encoding prior to JSON-encoding. - The default is UTF-8. 
- - """ - self.skipkeys = skipkeys - self.ensure_ascii = ensure_ascii - self.check_circular = check_circular - self.allow_nan = allow_nan - self.sort_keys = sort_keys - self.indent = indent - self.current_indent_level = 0 - if separators is not None: - self.item_separator, self.key_separator = separators - if default is not None: - self.default = default - self.encoding = encoding - - def _newline_indent(self): - return '\n' + (' ' * (self.indent * self.current_indent_level)) - - def _iterencode_list(self, lst, markers=None): - if not lst: - yield '[]' - return - if markers is not None: - markerid = id(lst) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = lst - yield '[' - if self.indent is not None: - self.current_indent_level += 1 - newline_indent = self._newline_indent() - separator = self.item_separator + newline_indent - yield newline_indent - else: - newline_indent = None - separator = self.item_separator - first = True - for value in lst: - if first: - first = False - else: - yield separator - for chunk in self._iterencode(value, markers): - yield chunk - if newline_indent is not None: - self.current_indent_level -= 1 - yield self._newline_indent() - yield ']' - if markers is not None: - del markers[markerid] - - def _iterencode_dict(self, dct, markers=None): - if not dct: - yield '{}' - return - if markers is not None: - markerid = id(dct) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = dct - yield '{' - key_separator = self.key_separator - if self.indent is not None: - self.current_indent_level += 1 - newline_indent = self._newline_indent() - item_separator = self.item_separator + newline_indent - yield newline_indent - else: - newline_indent = None - item_separator = self.item_separator - first = True - if self.ensure_ascii: - encoder = encode_basestring_ascii - else: - encoder = encode_basestring - allow_nan = self.allow_nan - if self.sort_keys: - keys = 
dct.keys() - keys.sort() - items = [(k, dct[k]) for k in keys] - else: - items = dct.iteritems() - _encoding = self.encoding - _do_decode = (_encoding is not None - and not (_encoding == 'utf-8')) - for key, value in items: - if isinstance(key, str): - if _do_decode: - key = key.decode(_encoding) - elif isinstance(key, basestring): - pass - # JavaScript is weakly typed for these, so it makes sense to - # also allow them. Many encoders seem to do something like this. - elif isinstance(key, float): - key = floatstr(key, allow_nan) - elif isinstance(key, (int, long)): - key = str(key) - elif key is True: - key = 'true' - elif key is False: - key = 'false' - elif key is None: - key = 'null' - elif self.skipkeys: - continue - else: - raise TypeError("key {0!r} is not a string".format(key)) - if first: - first = False - else: - yield item_separator - yield encoder(key) - yield key_separator - for chunk in self._iterencode(value, markers): - yield chunk - if newline_indent is not None: - self.current_indent_level -= 1 - yield self._newline_indent() - yield '}' - if markers is not None: - del markers[markerid] - - def _iterencode(self, o, markers=None): - if isinstance(o, basestring): - if self.ensure_ascii: - encoder = encode_basestring_ascii - else: - encoder = encode_basestring - _encoding = self.encoding - if (_encoding is not None and isinstance(o, str) - and not (_encoding == 'utf-8')): - o = o.decode(_encoding) - yield encoder(o) - elif o is None: - yield 'null' - elif o is True: - yield 'true' - elif o is False: - yield 'false' - elif isinstance(o, (int, long)): - yield str(o) - elif isinstance(o, float): - yield floatstr(o, self.allow_nan) - elif isinstance(o, (list, tuple)): - for chunk in self._iterencode_list(o, markers): - yield chunk - elif isinstance(o, dict): - for chunk in self._iterencode_dict(o, markers): - yield chunk - else: - if markers is not None: - markerid = id(o) - if markerid in markers: - raise ValueError("Circular reference detected") - 
markers[markerid] = o - for chunk in self._iterencode_default(o, markers): - yield chunk - if markers is not None: - del markers[markerid] - - def _iterencode_default(self, o, markers=None): - newobj = self.default(o) - return self._iterencode(newobj, markers) - - def default(self, o): - """Implement this method in a subclass such that it returns a serializable - object for ``o``, or calls the base implementation (to raise a - ``TypeError``). - - For example, to support arbitrary iterators, you could implement - default like this:: - - def default(self, o): - try: - iterable = iter(o) - except TypeError: - pass - else: - return list(iterable) - return JSONEncoder.default(self, o) - - """ - raise TypeError(repr(o) + " is not JSON serializable") - - def encode(self, o): - """Return a JSON string representation of a Python data structure. - - >>> JSONEncoder().encode({"foo": ["bar", "baz"]}) - '{"foo": ["bar", "baz"]}' - - """ - # This is for extremely simple cases and benchmarks. - if isinstance(o, basestring): - if isinstance(o, str): - _encoding = self.encoding - if (_encoding is not None - and not (_encoding == 'utf-8')): - o = o.decode(_encoding) - if self.ensure_ascii: - return encode_basestring_ascii(o) - else: - return encode_basestring(o) - # This doesn't pass the iterator directly to ''.join() because the - # exceptions aren't as detailed. The list call should be roughly - # equivalent to the PySequence_Fast that ''.join() would do. - chunks = list(self.iterencode(o)) - return ''.join(chunks) - - def iterencode(self, o): - """Encode the given object and yield each string representation as - available. 
- - For example:: - - for chunk in JSONEncoder().iterencode(bigobject): - mysocket.write(chunk) - - """ - if self.check_circular: - markers = {} - else: - markers = None - return self._iterencode(o, markers) +"""Implementation of JSONEncoder +""" + +import re +import math + +try: + from _json import encode_basestring_ascii as c_encode_basestring_ascii +except ImportError: + c_encode_basestring_ascii = None + +__all__ = ['JSONEncoder'] + +ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]') +ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])') +HAS_UTF8 = re.compile(r'[\x80-\xff]') +ESCAPE_DCT = { + '\\': '\\\\', + '"': '\\"', + '\b': '\\b', + '\f': '\\f', + '\n': '\\n', + '\r': '\\r', + '\t': '\\t', +} +for i in range(0x20): + ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i)) + +FLOAT_REPR = repr + +def floatstr(o, allow_nan=True): + # Check for specials. Note that this type of test is processor- and/or + # platform-specific, so do tests which don't depend on the internals. + + if math.isnan(o): + text = 'NaN' + elif math.isinf(o): + if math.copysign(1., o) == 1.: + text = 'Infinity' + else: + text = '-Infinity' + else: + return FLOAT_REPR(o) + + if not allow_nan: + msg = "Out of range float values are not JSON compliant: " + repr(o) + raise ValueError(msg) + + return text + + +def encode_basestring(s): + """Return a JSON representation of a Python string + + """ + def replace(match): + return ESCAPE_DCT[match.group(0)] + return '"' + ESCAPE.sub(replace, s) + '"' + + +def py_encode_basestring_ascii(s): + if isinstance(s, str) and HAS_UTF8.search(s) is not None: + s = s.decode('utf-8') + def replace(match): + s = match.group(0) + try: + return ESCAPE_DCT[s] + except KeyError: + n = ord(s) + if n < 0x10000: + return '\\u{0:04x}'.format(n) + else: + # surrogate pair + n -= 0x10000 + s1 = 0xd800 | ((n >> 10) & 0x3ff) + s2 = 0xdc00 | (n & 0x3ff) + return '\\u{0:04x}\\u{1:04x}'.format(s1, s2) + return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"' + + +if 
c_encode_basestring_ascii is not None: + encode_basestring_ascii = c_encode_basestring_ascii +else: + encode_basestring_ascii = py_encode_basestring_ascii + + +class JSONEncoder(object): + """Extensible JSON <http://json.org> encoder for Python data structures. + + Supports the following objects and types by default: + + +-------------------+---------------+ + | Python | JSON | + +===================+===============+ + | dict | object | + +-------------------+---------------+ + | list, tuple | array | + +-------------------+---------------+ + | str, unicode | string | + +-------------------+---------------+ + | int, long, float | number | + +-------------------+---------------+ + | True | true | + +-------------------+---------------+ + | False | false | + +-------------------+---------------+ + | None | null | + +-------------------+---------------+ + + To extend this to recognize other objects, subclass and implement a + ``.default()`` method with another method that returns a serializable + object for ``o`` if possible, otherwise it should call the superclass + implementation (to raise ``TypeError``). + + """ + __all__ = ['__init__', 'default', 'encode', 'iterencode'] + item_separator = ', ' + key_separator = ': ' + def __init__(self, skipkeys=False, ensure_ascii=True, + check_circular=True, allow_nan=True, sort_keys=False, + indent=None, separators=None, encoding='utf-8', default=None): + """Constructor for JSONEncoder, with sensible defaults. + + If skipkeys is False, then it is a TypeError to attempt + encoding of keys that are not str, int, long, float or None. If + skipkeys is True, such items are simply skipped. + + If ensure_ascii is True, the output is guaranteed to be str + objects with all incoming unicode characters escaped. If + ensure_ascii is false, the output will be unicode object. 
+ + If check_circular is True, then lists, dicts, and custom encoded + objects will be checked for circular references during encoding to + prevent an infinite recursion (which would cause an OverflowError). + Otherwise, no such check takes place. + + If allow_nan is True, then NaN, Infinity, and -Infinity will be + encoded as such. This behavior is not JSON specification compliant, + but is consistent with most JavaScript based encoders and decoders. + Otherwise, it will be a ValueError to encode such floats. + + If sort_keys is True, then the output of dictionaries will be + sorted by key; this is useful for regression tests to ensure + that JSON serializations can be compared on a day-to-day basis. + + If indent is a non-negative integer, then JSON array + elements and object members will be pretty-printed with that + indent level. An indent level of 0 will only insert newlines. + None is the most compact representation. + + If specified, separators should be a (item_separator, key_separator) + tuple. The default is (', ', ': '). To get the most compact JSON + representation you should specify (',', ':') to eliminate whitespace. + + If specified, default is a function that gets called for objects + that can't otherwise be serialized. It should return a JSON encodable + version of the object or raise a ``TypeError``. + + If encoding is not None, then all input strings will be + transformed into unicode using that encoding prior to JSON-encoding. + The default is UTF-8. 
+ + """ + self.skipkeys = skipkeys + self.ensure_ascii = ensure_ascii + self.check_circular = check_circular + self.allow_nan = allow_nan + self.sort_keys = sort_keys + self.indent = indent + self.current_indent_level = 0 + if separators is not None: + self.item_separator, self.key_separator = separators + if default is not None: + self.default = default + self.encoding = encoding + + def _newline_indent(self): + return '\n' + (' ' * (self.indent * self.current_indent_level)) + + def _iterencode_list(self, lst, markers=None): + if not lst: + yield '[]' + return + if markers is not None: + markerid = id(lst) + if markerid in markers: + raise ValueError("Circular reference detected") + markers[markerid] = lst + yield '[' + if self.indent is not None: + self.current_indent_level += 1 + newline_indent = self._newline_indent() + separator = self.item_separator + newline_indent + yield newline_indent + else: + newline_indent = None + separator = self.item_separator + first = True + for value in lst: + if first: + first = False + else: + yield separator + for chunk in self._iterencode(value, markers): + yield chunk + if newline_indent is not None: + self.current_indent_level -= 1 + yield self._newline_indent() + yield ']' + if markers is not None: + del markers[markerid] + + def _iterencode_dict(self, dct, markers=None): + if not dct: + yield '{}' + return + if markers is not None: + markerid = id(dct) + if markerid in markers: + raise ValueError("Circular reference detected") + markers[markerid] = dct + yield '{' + key_separator = self.key_separator + if self.indent is not None: + self.current_indent_level += 1 + newline_indent = self._newline_indent() + item_separator = self.item_separator + newline_indent + yield newline_indent + else: + newline_indent = None + item_separator = self.item_separator + first = True + if self.ensure_ascii: + encoder = encode_basestring_ascii + else: + encoder = encode_basestring + allow_nan = self.allow_nan + if self.sort_keys: + keys = 
dct.keys() + keys.sort() + items = [(k, dct[k]) for k in keys] + else: + items = dct.iteritems() + _encoding = self.encoding + _do_decode = (_encoding is not None + and not (_encoding == 'utf-8')) + for key, value in items: + if isinstance(key, str): + if _do_decode: + key = key.decode(_encoding) + elif isinstance(key, basestring): + pass + # JavaScript is weakly typed for these, so it makes sense to + # also allow them. Many encoders seem to do something like this. + elif isinstance(key, float): + key = floatstr(key, allow_nan) + elif isinstance(key, (int, long)): + key = str(key) + elif key is True: + key = 'true' + elif key is False: + key = 'false' + elif key is None: + key = 'null' + elif self.skipkeys: + continue + else: + raise TypeError("key {0!r} is not a string".format(key)) + if first: + first = False + else: + yield item_separator + yield encoder(key) + yield key_separator + for chunk in self._iterencode(value, markers): + yield chunk + if newline_indent is not None: + self.current_indent_level -= 1 + yield self._newline_indent() + yield '}' + if markers is not None: + del markers[markerid] + + def _iterencode(self, o, markers=None): + if isinstance(o, basestring): + if self.ensure_ascii: + encoder = encode_basestring_ascii + else: + encoder = encode_basestring + _encoding = self.encoding + if (_encoding is not None and isinstance(o, str) + and not (_encoding == 'utf-8')): + o = o.decode(_encoding) + yield encoder(o) + elif o is None: + yield 'null' + elif o is True: + yield 'true' + elif o is False: + yield 'false' + elif isinstance(o, (int, long)): + yield str(o) + elif isinstance(o, float): + yield floatstr(o, self.allow_nan) + elif isinstance(o, (list, tuple)): + for chunk in self._iterencode_list(o, markers): + yield chunk + elif isinstance(o, dict): + for chunk in self._iterencode_dict(o, markers): + yield chunk + else: + if markers is not None: + markerid = id(o) + if markerid in markers: + raise ValueError("Circular reference detected") + 
markers[markerid] = o + for chunk in self._iterencode_default(o, markers): + yield chunk + if markers is not None: + del markers[markerid] + + def _iterencode_default(self, o, markers=None): + newobj = self.default(o) + return self._iterencode(newobj, markers) + + def default(self, o): + """Implement this method in a subclass such that it returns a serializable + object for ``o``, or calls the base implementation (to raise a + ``TypeError``). + + For example, to support arbitrary iterators, you could implement + default like this:: + + def default(self, o): + try: + iterable = iter(o) + except TypeError: + pass + else: + return list(iterable) + return JSONEncoder.default(self, o) + + """ + raise TypeError(repr(o) + " is not JSON serializable") + + def encode(self, o): + """Return a JSON string representation of a Python data structure. + + >>> JSONEncoder().encode({"foo": ["bar", "baz"]}) + '{"foo": ["bar", "baz"]}' + + """ + # This is for extremely simple cases and benchmarks. + if isinstance(o, basestring): + if isinstance(o, str): + _encoding = self.encoding + if (_encoding is not None + and not (_encoding == 'utf-8')): + o = o.decode(_encoding) + if self.ensure_ascii: + return encode_basestring_ascii(o) + else: + return encode_basestring(o) + # This doesn't pass the iterator directly to ''.join() because the + # exceptions aren't as detailed. The list call should be roughly + # equivalent to the PySequence_Fast that ''.join() would do. + chunks = list(self.iterencode(o)) + return ''.join(chunks) + + def iterencode(self, o): + """Encode the given object and yield each string representation as + available. + + For example:: + + for chunk in JSONEncoder().iterencode(bigobject): + mysocket.write(chunk) + + """ + if self.check_circular: + markers = {} + else: + markers = None + return self._iterencode(o, markers)
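The `py_encode_basestring_ascii` fallback added above escapes code points beyond U+FFFF as UTF-16 surrogate pairs, since JSON string escapes only carry four hex digits. A minimal Python 3 sketch of the same arithmetic (the function name is illustrative, not part of the extension):

```python
def escape_non_bmp(ch):
    """Escape one character the way py_encode_basestring_ascii does."""
    n = ord(ch)
    if n < 0x10000:
        return '\\u{0:04x}'.format(n)
    # Code points above U+FFFF become a UTF-16 high/low surrogate pair.
    n -= 0x10000
    s1 = 0xd800 | ((n >> 10) & 0x3ff)  # high surrogate
    s2 = 0xdc00 | (n & 0x3ff)          # low surrogate
    return '\\u{0:04x}\\u{1:04x}'.format(s1, s2)

print(escape_non_bmp('\U0001d11e'))  # prints \ud834\udd1e (musical G clef)
```

This split is exactly how UTF-16 itself encodes astral code points, which is why the output round-trips through any compliant JSON parser.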
_custom/json/scanner.py
This file's diff was not loaded because this changeset is very large.
Change 1 of 1: _custom/json/tool.py
@@ -1,37 +1,37 @@
-r"""Command-line tool to validate and pretty-print JSON - -Usage:: - - $ echo '{"json":"obj"}' | python -mjson.tool - { - "json": "obj" - } - $ echo '{ 1.2:3.4}' | python -mjson.tool - Expecting property name: line 1 column 2 (char 2) - -""" -import sys -import json - -def main(): - if len(sys.argv) == 1: - infile = sys.stdin - outfile = sys.stdout - elif len(sys.argv) == 2: - infile = open(sys.argv[1], 'rb') - outfile = sys.stdout - elif len(sys.argv) == 3: - infile = open(sys.argv[1], 'rb') - outfile = open(sys.argv[2], 'wb') - else: - raise SystemExit("{0} [infile [outfile]]".format(sys.argv[0])) - try: - obj = json.load(infile) - except ValueError, e: - raise SystemExit(e) - json.dump(obj, outfile, sort_keys=True, indent=4) - outfile.write('\n') - - -if __name__ == '__main__': - main() +r"""Command-line tool to validate and pretty-print JSON + +Usage:: + + $ echo '{"json":"obj"}' | python -mjson.tool + { + "json": "obj" + } + $ echo '{ 1.2:3.4}' | python -mjson.tool + Expecting property name: line 1 column 2 (char 2) + +""" +import sys +import json + +def main(): + if len(sys.argv) == 1: + infile = sys.stdin + outfile = sys.stdout + elif len(sys.argv) == 2: + infile = open(sys.argv[1], 'rb') + outfile = sys.stdout + elif len(sys.argv) == 3: + infile = open(sys.argv[1], 'rb') + outfile = open(sys.argv[2], 'wb') + else: + raise SystemExit("{0} [infile [outfile]]".format(sys.argv[0])) + try: + obj = json.load(infile) + except ValueError, e: + raise SystemExit(e) + json.dump(obj, outfile, sort_keys=True, indent=4) + outfile.write('\n') + + +if __name__ == '__main__': + main()
Change 1 of 2: big-push.py
@@ -19,13 +19,21 @@
 from mercurial import cmdutil, commands, hg, extensions
 from mercurial.i18n import _
 
+try:
+    from mercurial import discovery
+except ImportError:
+    pass
+
 max_push_size = 1000
 
 def findoutgoing(repo, other):
     try:
-        # Mercurial 1.6 and higher
-        from mercurial import discovery
+        # Mercurial 1.6 through 1.8
         return discovery.findoutgoing(repo, other, force=False)
+    except AttributeError:
+        # Mercurial 1.9 and higher
+        common, _anyinc, _heads = discovery.findcommonincoming(repo, other, force=False)
+        return repo.changelog.findmissing(common)
     except ImportError:
         # Mercurial 1.5 and lower
         return repo.findoutgoing(other, force=False)
@@ -51,7 +59,8 @@
     '''Pushes this repository to a target repository.
 
     If this repository is small, behaves as the native push command.
-    For large, remote repositories, the repository is pushed in chunks of 1000 changesets.'''
+    For large, remote repositories, the repository is pushed in chunks
+    of size optimized for performance on the network.'''
     if not opts.get('chunked'):
         return push_fn(ui, repo, dest, **opts)
 
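big-push's `findoutgoing` stays compatible with three generations of Mercurial by probing APIs at call time: `ImportError` means the `discovery` module never loaded (pre-1.6), while `AttributeError` means 1.9+, where `discovery.findoutgoing` was replaced by `findcommonincoming`. A sketch of that dispatch pattern using stand-in stubs (the stubs are hypothetical; the real calls are Mercurial internals, and the pre-1.6 `ImportError` path is omitted here for brevity):

```python
def find_outgoing(discovery, repo, other):
    """Probe for the older discovery API first, then fall back to the new one."""
    try:
        # Mercurial 1.6 through 1.8
        return discovery.findoutgoing(repo, other, force=False)
    except AttributeError:
        # Mercurial 1.9 and higher: compute the common base, then ask the
        # changelog which changesets the remote is missing.
        common, _anyinc, _heads = discovery.findcommonincoming(repo, other, force=False)
        return repo.changelog.findmissing(common)

# Stand-in stubs exercising both paths.
class OldDiscovery:
    @staticmethod
    def findoutgoing(repo, other, force=False):
        return ['rev1', 'rev2']

class NewDiscovery:  # deliberately has no findoutgoing attribute
    @staticmethod
    def findcommonincoming(repo, other, force=False):
        return (['base'], False, [])

class Repo:
    class changelog:
        @staticmethod
        def findmissing(common):
            return ['rev3']

print(find_outgoing(OldDiscovery, Repo(), None))  # ['rev1', 'rev2']
print(find_outgoing(NewDiscovery, Repo(), None))  # ['rev3']
```

Probing with try/except rather than comparing version strings is the idiomatic Mercurial-extension approach, since it keeps working when point releases move code between modules.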
caseguard.py
This file's diff was not loaded because this changeset is very large.
Change 1 of 1: kiln.py
@@ -1,598 +1,599 @@
-# Copyright (C) 2011 Fog Creek Software. All rights reserved. -# -# To enable the "kiln" extension put these lines in your ~/.hgrc: -# [extensions] -# kiln = /path/to/kiln.py -# -# For help on the usage of "hg kiln" use: -# hg help kiln -# -# This program is free software; you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation; either version 2 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program; if not, write to the Free Software -# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. - -'''provides command-line support for working with Kiln - -This extension allows you to directly open up the Kiln page for your -repository, including the annotation, file view, outgoing, and other -pages. Additionally, it will attempt to guess which remote Kiln -repository you wish push to and pull from based on its related repositories. - -This extension will also notify you when a Kiln server you access has an -updated version of the Kiln Client and Tools available. 
-To disable the check for a version 'X.Y.Z' and all lower versions, add the -following line in the [kiln] section of your hgrc: - ignoreversion = X.Y.Z -''' -import os -import re -import urllib -import urllib2 -import sys - -from cookielib import MozillaCookieJar -from hashlib import md5 -from mercurial import commands, demandimport, extensions, hg, httprepo, \ - localrepo, match, util -from mercurial import ui as hgui -from mercurial import url as hgurl -from mercurial.error import RepoError -from mercurial.i18n import _ -from mercurial.node import nullrev - -try: - from mercurial import scmutil -except ImportError: - pass - -demandimport.disable() -try: - import json -except ImportError: - sys.path.append(os.path.join(os.path.abspath(os.path.dirname(__file__)), '_custom')) - import json - -try: - import webbrowser - def browse(url): - webbrowser.open(escape_reserved(url)) -except ImportError: - if os.name == 'nt': - import win32api - def browse(url): - win32api.ShellExecute(0, 'open', escape_reserved(url), None, None, 0) -demandimport.enable() - -_did_version_check = False - -class APIError(Exception): - def __init__(self, obj): - '''takes a json object for debugging - - Inspect self.errors to see the API errors thrown. 
- ''' - self.errors = [] - for error in obj['errors']: - data = error['codeError'], error['sError'] - self.errors.append('%s: %s' % data) - - def __str__(self): - return '\n'.join(self.errors) - -def urljoin(*components): - url = components[0] - for next in components[1:]: - if not url.endswith('/'): - url += '/' - if next.startswith('/'): - next = next[1:] - url += next - return url - -def _baseurl(ui, path): - remote = hg.repository(ui, path) - try: - # Mercurial >= 1.9 - url = util.removeauth(remote.url()) - except AttributeError: - # Mercurial <= 1.8 - url = hgurl.removeauth(remote.url()) - if url.lower().find('/kiln/') > 0 or url.lower().find('kilnhg.com/') > 0: - return url - else: - return None - -def escape_reserved(path): - reserved = re.compile( - r'^(((com[1-9]|lpt[1-9]|con|prn|aux)(\..*)?)|web\.config' + - r'|clock\$|app_data|app_code|app_browsers' + - r'|app_globalresources|app_localresources|app_themes' + - r'|app_webreferences|bin|.*\.(cs|vb)html?)$', re.IGNORECASE) - p = path.split('?') - path = p[0] - query = '?' + p[1] if len(p) > 1 else '' - return '/'.join('$' + part + '$' - if reserved.match(part) or part.startswith('$') or part.endswith('$') - else part - for part in path.split('/')) + query - -def normalize_name(s): - return s.lower().replace(' ', '-') - -def call_api(ui, baseurl, urlsuffix, params, post=False): - '''returns the json object for the url and the data dictionary - - Uses HTTP POST if the post parameter is True and HTTP GET - otherwise. Raises APIError on API errors. - ''' - url = baseurl + urlsuffix - data = urllib.urlencode(params, doseq=True) - try: - if post: - fd = urllib2.urlopen(url, data) - else: - fd = urllib2.urlopen(url + '?' + data) - obj = json.load(fd) - except: - raise util.Abort(_('Path guessing requires Fog Creek Kiln 2.0. 
If you' - ' are running Kiln 2.0 and continue to experience' - ' problems, please contact Fog Creek Software.')) - - if isinstance(obj, dict) and 'errors' in obj: - if 'token' in params and obj['errors'][0]['codeError'] == 'InvalidToken': - token = login(ui, baseurl) - add_kilnapi_token(ui, baseurl, token) - params['token'] = token - return call_api(ui, baseurl, urlsuffix, params, post) - raise APIError(obj) - return obj - -def login(ui, url): - ui.write(_('realm: %s\n') % url) - user = ui.prompt('username:') - pw = ui.getpass() - - token = call_api(ui, url, 'Api/1.0/Auth/Login', dict(sUser=user, sPassword=pw)) - - if token: - return token - raise util.Abort(_('authorization failed')) - -def get_domain(url): - temp = url[url.find('://') + len('://'):] - domain = temp[:temp.find('/')] - port = None - if ':' in domain: - domain, port = domain.split(':', 1) - if '.' not in domain: - domain += '.local' - - return domain - -def _get_path(path): - if os.name == 'nt': - ret = os.path.expanduser('~\\_' + path) - else: - ret = os.path.expanduser('~/.' + path) - # Cygwin's Python does not always expanduser() properly... 
- if re.match(r'^[A-Za-z]:', ret) is not None and re.match(r'[A-Za-z]:\\', ret) is None: - ret = re.sub(r'([A-Za-z]):', r'\1:\\', ret) - return ret - -def _upgradecheck(ui, repo): - global _did_version_check - if _did_version_check or not ui.configbool('kiln', 'autoupdate', True): - return - _did_version_check = True - _upgrade(ui, repo) - -def _upgrade(ui, repo): - ext_dir = os.path.dirname(os.path.abspath(__file__)) - ui.debug('kiln: checking for extensions upgrade for %s\n' % ext_dir) - - try: - r = localrepo.localrepository(hgui.ui(), ext_dir) - except RepoError: - commands.init(hgui.ui(), dest=ext_dir) - r = localrepo.localrepository(hgui.ui(), ext_dir) - - r.ui.setconfig('kiln', 'autoupdate', False) - r.ui.pushbuffer() - try: - source = 'https://developers.kilnhg.com/Repo/Kiln/Group/Kiln-Extensions' - if commands.incoming(r.ui, r, bundle=None, force=False, source=source) != 0: - # no incoming changesets, or an error. Don't try to upgrade. - ui.debug('kiln: no extensions upgrade available\n') - return - ui.write(_('updating Kiln Extensions at %s... 
') % ext_dir) - # pull and update return falsy values on success - if commands.pull(r.ui, r, source=source) or commands.update(r.ui, r, clean=True): - url = urljoin(repo.url()[:repo.url().lower().index('/repo')], 'Tools') - ui.write(_('unable to update\nvisit %s to download the newest extensions\n') % url) - else: - ui.write(_('complete\n')) - except Exception, e: - ui.debug(_('kiln: error updating Kiln Extensions: %s\n') % e) - -def is_dest_a_path(ui, dest): - paths = ui.configitems('paths') - for pathname, path in paths: - if pathname == dest: - return True - return False - -def is_dest_a_scheme(ui, dest): - destscheme = dest[:dest.find('://')] - if destscheme: - for scheme in hg.schemes: - if destscheme == scheme: - return True - return False - -def create_match_list(matchlist): - ret = '' - for m in matchlist: - ret += ' ' + m + '\n' - return ret - -def get_username(url): - url = re.sub(r'https?://', '', url) - url = re.sub(r'/.*', '', url) - if '@' in url: - # There should be some login info - # rfind in case it's an email address - username = url[:url.rfind('@')] - if ':' in username: - username = url[:url.find(':')] - return username - # Didn't find anything... 
- return '' - -def get_dest(ui): - from mercurial.dispatch import _parse - try: - cmd_info = _parse(ui, sys.argv[1:]) - cmd = cmd_info[0] - dest = cmd_info[2] - if dest: - dest = dest[0] - elif cmd in ['outgoing', 'push']: - dest = 'default-push' - else: - dest = 'default' - except: - dest = 'default' - return ui.expandpath(dest) - -def check_kilnapi_token(ui, url): - tokenpath = _get_path('hgkiln') - - if (not os.path.exists(tokenpath)) or os.path.isdir(tokenpath): - return '' - - domain = get_domain(url) - userhash = md5(get_username(get_dest(ui))).hexdigest() - - fp = open(tokenpath, 'r') - ret = "" - for line in fp: - try: - d, u, t = line.split(' ') - except: - raise util.Abort(_('Authentication file %s is malformed.') % tokenpath) - if d == domain and u == userhash: - # Get rid of that newline character... - ret = t[:-1] - - fp.close() - return ret - -def add_kilnapi_token(ui, url, fbToken): - if not fbToken: - return - tokenpath = _get_path('hgkiln') - if os.path.isdir(tokenpath): - raise util.Abort(_('Authentication file %s exists, but is a directory.') % tokenpath) - - domain = get_domain(url) - userhash = md5(get_username(get_dest(ui))).hexdigest() - - fp = open(tokenpath, 'a') - fp.write(domain + ' ' + userhash + ' ' + fbToken + '\n') - fp.close() - -def delete_kilnapi_tokens(): - # deletes the hgkiln file - tokenpath = _get_path('hgkiln') - if os.path.exists(tokenpath) and not os.path.isdir(tokenpath): - os.remove(tokenpath) - -def check_kilnauth_token(ui, url): - cookiepath = _get_path('hgcookies') - if (not os.path.exists(cookiepath)) or (not os.path.isdir(cookiepath)): - return '' - cookiepath = os.path.join(cookiepath, md5(get_username(get_dest(ui))).hexdigest()) - - try: - if not os.path.exists(cookiepath): - return '' - cj = MozillaCookieJar(cookiepath) - except IOError, e: - return '' - - domain = get_domain(url) - - cj.load(ignore_discard=True, ignore_expires=True) - for cookie in cj: - if domain == cookie.domain: - if cookie.name == 'fbToken': 
-                return cookie.value
-
-def remember_path(ui, repo, path, value):
-    '''appends the path to the working copy's hgrc and backs up the original'''
-
-    paths = dict(ui.configitems('paths'))
-    # This should never happen.
-    if path in paths: return
-    # ConfigParser only cares about these three characters.
-    if re.search(r'[:=\s]', path): return
-
-    try:
-        audit_path = scmutil.pathauditor(repo.root)
-    except NameError:
-        audit_path = getattr(repo.opener, 'audit_path', util.path_auditor)
-
-    audit_path('hgrc')
-    audit_path('hgrc.backup')
-    base = repo.opener.base
-    util.copyfile(os.path.join(base, 'hgrc'),
-                  os.path.join(base, 'hgrc.backup'))
-    ui.setconfig('paths', path, value)
-
-    try:
-        fp = repo.opener('hgrc', 'a', text=True)
-        # Mercurial assumes Unix newlines by default and so do we.
-        fp.write('\n[paths]\n%s = %s\n' % (path, value))
-        fp.close()
-    except IOError, e:
-        return
-
-def unremember_path(ui, repo):
-    '''restores the working copy's hgrc'''
-
-    try:
-        audit_path = scmutil.pathauditor(repo.root)
-    except NameError:
-        audit_path = getattr(repo.opener, 'audit_path', util.path_auditor)
-
-    audit_path('hgrc')
-    audit_path('hgrc.backup')
-    base = repo.opener.base
-    if os.path.exists(os.path.join(base, 'hgrc')):
-        util.copyfile(os.path.join(base, 'hgrc.backup'),
-                      os.path.join(base, 'hgrc'))
-
-def guess_kilnpath(orig, ui, repo, dest=None, **opts):
-    if not dest:
-        return orig(ui, repo, **opts)
-
-    if os.path.exists(dest) or is_dest_a_path(ui, dest) or is_dest_a_scheme(ui, dest):
-        return orig(ui, repo, dest, **opts)
-    else:
-        targets = get_targets(repo);
-        matches = []
-        prefixmatches = []
-
-        for target in targets:
-            url = '%s/%s/%s/%s' % (target[0], target[1], target[2], target[3])
-            ndest = normalize_name(dest)
-            ntarget = [normalize_name(t) for t in target[1:4]]
-            aliases = [normalize_name(s) for s in target[4]]
-
-            if ndest.count('/') == 0 and \
-               (ntarget[0] == ndest or \
-                ntarget[1] == ndest or \
-                ntarget[2] == ndest or \
-                ndest in aliases):
-                matches.append(url)
-            elif ndest.count('/') == 1 and \
-                 '/'.join(ntarget[0:2]) == ndest or \
-                 '/'.join(ntarget[1:3]) == ndest:
-                matches.append(url)
-            elif ndest.count('/') == 2 and \
-                 '/'.join(ntarget[0:3]) == ndest:
-                matches.append(url)
-
-            if (ntarget[0].startswith(ndest) or \
-                ntarget[1].startswith(ndest) or \
-                ntarget[2].startswith(ndest) or \
-                '/'.join(ntarget[0:2]).startswith(ndest) or \
-                '/'.join(ntarget[1:3]).startswith(ndest) or \
-                '/'.join(ntarget[0:3]).startswith(ndest)):
-                prefixmatches.append(url)
-
-        if len(matches) == 0:
-            if len(prefixmatches) == 0:
-                # if there are no matches at all, let's just let mercurial handle it.
-                return orig(ui, repo, dest, **opts)
-            else:
-                urllist = create_match_list(prefixmatches)
-                raise util.Abort(_('%s did not exactly match any part of the repository slug:\n\n%s') % (dest, urllist))
-        elif len(matches) > 1:
-            urllist = create_match_list(matches)
-            raise util.Abort(_('%s matches more than one Kiln repository:\n\n%s') % (dest, urllist))
-
-        # Unique match -- perform the operation
-        try:
-            remember_path(ui, repo, dest, matches[0])
-            return orig(ui, repo, matches[0], **opts)
-        finally:
-            unremember_path(ui, repo)
-
-def get_tails(repo):
-    tails = []
-    for rev in xrange(repo['tip'].rev() + 1):
-        ctx = repo[rev]
-        if ctx.p1().rev() == nullrev and ctx.p2().rev() == nullrev:
-            tails.append(ctx.hex())
-    if not len(tails):
-        raise util.Abort(_('Path guessing is only enabled for non-empty repositories.'))
-    return tails
-
-def get_targets(repo):
-    targets = []
-    kilnschemes = repo.ui.configitems('kiln_scheme')
-    for scheme in kilnschemes:
-        url = scheme[1]
-        if url.lower().find('/kiln/') != -1:
-            baseurl = url[:url.lower().find('/kiln/') + len("/kiln/")]
-        elif url.lower().find('kilnhg.com/') != -1:
-            baseurl = url[:url.lower().find('kilnhg.com/') + len("kilnhg.com/")]
-        else:
-            continue
-
-        tails = get_tails(repo)
-
-        token = check_kilnapi_token(repo.ui, baseurl)
-        if not token:
-            token = check_kilnauth_token(repo.ui, baseurl)
-            add_kilnapi_token(repo.ui, baseurl, token)
-        if not token:
-            token = login(repo.ui, baseurl)
-            add_kilnapi_token(repo.ui, baseurl, token)
-
-        # We have an token at this point
-        params = dict(revTails=tails, token=token)
-        related_repos = call_api(repo.ui, baseurl, 'Api/1.0/Repo/Related', params)
-        targets.extend([[url,
-                         related_repo['sProjectSlug'],
-                         related_repo['sGroupSlug'],
-                         related_repo['sSlug'],
-                         related_repo.get('rgAliases', [])] for related_repo in related_repos])
-    return targets
-
-def display_targets(repo):
-    targets = get_targets(repo)
-    repo.ui.write(_('The following Kiln targets are available for this repository:\n\n'))
-    for target in targets:
-        if target[4]:
-            alias_text = _(' (alias%s: %s)') % ('es' if len(target[4]) > 1 else '', ', '.join(target[4]))
-        else:
-            alias_text = ''
-        repo.ui.write(' %s/%s/%s/%s%s\n' % (target[0], target[1], target[2], target[3], alias_text))
-
-def dummy_command(ui, repo, dest=None, **opts):
-    '''dummy command to pass to guess_path() for hg kiln
-
-    Returns the repository URL if dest has been successfully path
-    guessed, None otherwise.
-    '''
-    return opts['path'] != dest and dest or None
-
-def kiln(ui, repo, *pats, **opts):
-    '''show the relevant page of the repository in Kiln
-
-    This command allows you to navigate straight the Kiln page for a
-    repository, including directly to settings, file annotation, and
-    file & changeset viewing.
-
-    Typing "hg kiln" by itself will take you directly to the
-    repository history in kiln. Specify any other options to override
-    this default. The --rev, --annotate, --file, and --filehistory options
-    can be used together.
-
-    To display a list of valid targets, type hg kiln --targets. To
-    push or pull from one of these targets, use any unique identifier
-    from this list as the parameter to the push/pull command.
-    '''
-
-    try:
-        url = _baseurl(ui, ui.expandpath(opts['path'] or 'default', opts['path'] or 'default-push'))
-    except RepoError:
-        url = guess_kilnpath(dummy_command, ui, repo, dest=opts['path'], **opts)
-        if not url:
-            raise
-
-    if not url:
-        raise util.Abort(_('this does not appear to be a Kiln-hosted repository\n'))
-    default = True
-
-    def files(key):
-        allpaths = []
-        for f in opts[key]:
-            paths = [path for path in repo['.'].manifest().iterkeys() if re.search(match._globre(f) + '$', path)]
-            if not paths:
-                ui.warn(_('cannot find %s') % f)
-            allpaths += paths
-        return allpaths
-
-    if opts['rev']:
-        default = False
-        for ctx in (repo[rev] for rev in opts['rev']):
-            browse(urljoin(url, 'History', ctx.hex()))
-
-    if opts['annotate']:
-        default = False
-        for f in files('annotate'):
-            browse(urljoin(url, 'File', f) + '?view=annotate')
-    if opts['file']:
-        default = False
-        for f in files('file'):
-            browse(urljoin(url, 'File', f))
-    if opts['filehistory']:
-        default = False
-        for f in files('filehistory'):
-            browse(urljoin(url, 'FileHistory', f) + '?rev=tip')
-
-    if opts['outgoing']:
-        default = False
-        browse(urljoin(url, 'Outgoing'))
-    if opts['settings']:
-        default = False
-        browse(urljoin(url, 'Settings'))
-
-    if opts['targets']:
-        default = False
-        display_targets(repo)
-    if opts['logout']:
-        default = False
-        delete_kilnapi_tokens()
-
-    if default or opts['changes']:
-        browse(url)
-
-def uisetup(ui):
-    extensions.wrapcommand(commands.table, 'outgoing', guess_kilnpath)
-    extensions.wrapcommand(commands.table, 'push', guess_kilnpath)
-    extensions.wrapcommand(commands.table, 'pull', guess_kilnpath)
-    extensions.wrapcommand(commands.table, 'incoming', guess_kilnpath)
-
-def reposetup(ui, repo):
-    if issubclass(repo.__class__, httprepo.httprepository):
-        _upgradecheck(ui, repo)
-
-cmdtable = {
-    'kiln':
-        (kiln,
-         [('a', 'annotate', [], _('annotate the file provided')),
-          ('c', 'changes', None, _('view the history of this repository; this is the default')),
-          ('f', 'file', [], _('view the file contents')),
-          ('l', 'filehistory', [], _('view the history of the file')),
-          ('o', 'outgoing', None, _('view the repository\'s outgoing tab')),
-          ('s', 'settings', None, _('view the repository\'s settings tab')),
-          ('p', 'path', '', _('select which Kiln branch of the repository to use')),
-          ('r', 'rev', [], _('view the specified changeset in Kiln')),
-          ('t', 'targets', None, _('view the repository\'s targets')),
-          ('', 'logout', None, _('log out of Kiln sessions'))],
-         _('hg kiln [-p url] [-r rev|-a file|-f file|-c|-o|-s|-t|--logout]'))
-    }
+# Copyright (C) 2011 Fog Creek Software. All rights reserved.
+#
+# To enable the "kiln" extension put these lines in your ~/.hgrc:
+#  [extensions]
+#  kiln = /path/to/kiln.py
+#
+# For help on the usage of "hg kiln" use:
+#  hg help kiln
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+'''provides command-line support for working with Kiln
+
+This extension allows you to directly open up the Kiln page for your
+repository, including the annotation, file view, outgoing, and other
+pages. Additionally, it will attempt to guess which remote Kiln
+repository you wish push to and pull from based on its related repositories.
+
+This extension will also notify you when a Kiln server you access has an
+updated version of the Kiln Client and Tools available.
+To disable the check for a version 'X.Y.Z' and all lower versions, add the
+following line in the [kiln] section of your hgrc:
+    ignoreversion = X.Y.Z
+'''
+import os
+import re
+import urllib
+import urllib2
+import sys
+
+from cookielib import MozillaCookieJar
+from hashlib import md5
+from mercurial import commands, demandimport, extensions, hg, httprepo, \
+    localrepo, match, util
+from mercurial import ui as hgui
+from mercurial import url as hgurl
+from mercurial.error import RepoError
+from mercurial.i18n import _
+from mercurial.node import nullrev
+
+try:
+    from mercurial import scmutil
+except ImportError:
+    pass
+
+demandimport.disable()
+try:
+    import json
+except ImportError:
+    sys.path.append(os.path.join(os.path.abspath(os.path.dirname(__file__)), '_custom'))
+    import json
+
+try:
+    import webbrowser
+    def browse(url):
+        webbrowser.open(escape_reserved(url))
+except ImportError:
+    if os.name == 'nt':
+        import win32api
+        def browse(url):
+            win32api.ShellExecute(0, 'open', escape_reserved(url), None, None, 0)
+demandimport.enable()
+
+_did_version_check = False
+
+class APIError(Exception):
+    def __init__(self, obj):
+        '''takes a json object for debugging
+
+        Inspect self.errors to see the API errors thrown.
+        '''
+        self.errors = []
+        for error in obj['errors']:
+            data = error['codeError'], error['sError']
+            self.errors.append('%s: %s' % data)
+
+    def __str__(self):
+        return '\n'.join(self.errors)
+
+def urljoin(*components):
+    url = components[0]
+    for next in components[1:]:
+        if not url.endswith('/'):
+            url += '/'
+        if next.startswith('/'):
+            next = next[1:]
+        url += next
+    return url
+
+def _baseurl(ui, path):
+    remote = hg.repository(ui, path)
+    try:
+        # Mercurial >= 1.9
+        url = util.removeauth(remote.url())
+    except AttributeError:
+        # Mercurial <= 1.8
+        url = hgurl.removeauth(remote.url())
+    if url.lower().find('/kiln/') > 0 or url.lower().find('kilnhg.com/') > 0:
+        return url
+    else:
+        return None
+
+def escape_reserved(path):
+    reserved = re.compile(
+        r'^(((com[1-9]|lpt[1-9]|con|prn|aux)(\..*)?)|web\.config' +
+        r'|clock\$|app_data|app_code|app_browsers' +
+        r'|app_globalresources|app_localresources|app_themes' +
+        r'|app_webreferences|bin|.*\.(cs|vb)html?)$', re.IGNORECASE)
+    p = path.split('?')
+    path = p[0]
+    query = '?' + p[1] if len(p) > 1 else ''
+    return '/'.join('$' + part + '$'
+                    if reserved.match(part) or part.startswith('$') or part.endswith('$')
+                    else part
+                    for part in path.split('/')) + query
+
+def normalize_name(s):
+    return s.lower().replace(' ', '-')
+
+def call_api(ui, baseurl, urlsuffix, params, post=False):
+    '''returns the json object for the url and the data dictionary
+
+    Uses HTTP POST if the post parameter is True and HTTP GET
+    otherwise. Raises APIError on API errors.
+    '''
+    url = baseurl + urlsuffix
+    data = urllib.urlencode(params, doseq=True)
+    try:
+        if post:
+            fd = urllib2.urlopen(url, data)
+        else:
+            fd = urllib2.urlopen(url + '?' + data)
+        obj = json.load(fd)
+    except:
+        raise util.Abort(_('Path guessing requires Fog Creek Kiln 2.0. If you'
+                           ' are running Kiln 2.0 and continue to experience'
+                           ' problems, please contact Fog Creek Software.'))
+
+    if isinstance(obj, dict) and 'errors' in obj:
+        if 'token' in params and obj['errors'][0]['codeError'] == 'InvalidToken':
+            token = login(ui, baseurl)
+            add_kilnapi_token(ui, baseurl, token)
+            params['token'] = token
+            return call_api(ui, baseurl, urlsuffix, params, post)
+        raise APIError(obj)
+    return obj
+
+def login(ui, url):
+    ui.write(_('realm: %s\n') % url)
+    user = ui.prompt('username:')
+    pw = ui.getpass()
+
+    token = call_api(ui, url, 'Api/1.0/Auth/Login', dict(sUser=user, sPassword=pw))
+
+    if token:
+        return token
+    raise util.Abort(_('authorization failed'))
+
+def get_domain(url):
+    temp = url[url.find('://') + len('://'):]
+    domain = temp[:temp.find('/')]
+    port = None
+    if ':' in domain:
+        domain, port = domain.split(':', 1)
+    if '.' not in domain:
+        domain += '.local'
+
+    return domain
+
+def _get_path(path):
+    if os.name == 'nt':
+        ret = os.path.expanduser('~\\_' + path)
+    else:
+        ret = os.path.expanduser('~/.' + path)
+    # Cygwin's Python does not always expanduser() properly...
+    if re.match(r'^[A-Za-z]:', ret) is not None and re.match(r'[A-Za-z]:\\', ret) is None:
+        ret = re.sub(r'([A-Za-z]):', r'\1:\\', ret)
+    return ret
+
+def _upgradecheck(ui, repo):
+    global _did_version_check
+    if _did_version_check or not ui.configbool('kiln', 'autoupdate', True):
+        return
+    _did_version_check = True
+    _upgrade(ui, repo)
+
+def _upgrade(ui, repo):
+    ext_dir = os.path.dirname(os.path.abspath(__file__))
+    ui.debug('kiln: checking for extensions upgrade for %s\n' % ext_dir)
+
+    try:
+        r = localrepo.localrepository(hgui.ui(), ext_dir)
+    except RepoError:
+        commands.init(hgui.ui(), dest=ext_dir)
+        r = localrepo.localrepository(hgui.ui(), ext_dir)
+
+    r.ui.setconfig('kiln', 'autoupdate', False)
+    r.ui.pushbuffer()
+    try:
+        source = 'https://developers.kilnhg.com/Repo/Kiln/Group/Kiln-Extensions'
+        if commands.incoming(r.ui, r, bundle=None, force=False, source=source) != 0:
+            # no incoming changesets, or an error. Don't try to upgrade.
+            ui.debug('kiln: no extensions upgrade available\n')
+            return
+        ui.write(_('updating Kiln Extensions at %s... ') % ext_dir)
+        # pull and update return falsy values on success
+        if commands.pull(r.ui, r, source=source) or commands.update(r.ui, r, clean=True):
+            url = urljoin(repo.url()[:repo.url().lower().index('/repo')], 'Tools')
+            ui.write(_('unable to update\nvisit %s to download the newest extensions\n') % url)
+        else:
+            ui.write(_('complete\n'))
+    except Exception, e:
+        ui.debug(_('kiln: error updating Kiln Extensions: %s\n') % e)
+
+def is_dest_a_path(ui, dest):
+    paths = ui.configitems('paths')
+    for pathname, path in paths:
+        if pathname == dest:
+            return True
+    return False
+
+def is_dest_a_scheme(ui, dest):
+    destscheme = dest[:dest.find('://')]
+    if destscheme:
+        for scheme in hg.schemes:
+            if destscheme == scheme:
+                return True
+    return False
+
+def create_match_list(matchlist):
+    ret = ''
+    for m in matchlist:
+        ret += ' ' + m + '\n'
+    return ret
+
+def get_username(url):
+    url = re.sub(r'https?://', '', url)
+    url = re.sub(r'/.*', '', url)
+    if '@' in url:
+        # There should be some login info
+        # rfind in case it's an email address
+        username = url[:url.rfind('@')]
+        if ':' in username:
+            username = url[:url.find(':')]
+        return username
+    # Didn't find anything...
+    return ''
+
+def get_dest(ui):
+    from mercurial.dispatch import _parse
+    try:
+        cmd_info = _parse(ui, sys.argv[1:])
+        cmd = cmd_info[0]
+        dest = cmd_info[2]
+        if dest:
+            dest = dest[0]
+        elif cmd in ['outgoing', 'push']:
+            dest = 'default-push'
+        else:
+            dest = 'default'
+    except:
+        dest = 'default'
+    return ui.expandpath(dest)
+
+def check_kilnapi_token(ui, url):
+    tokenpath = _get_path('hgkiln')
+
+    if (not os.path.exists(tokenpath)) or os.path.isdir(tokenpath):
+        return ''
+
+    domain = get_domain(url)
+    userhash = md5(get_username(get_dest(ui))).hexdigest()
+
+    fp = open(tokenpath, 'r')
+    ret = ""
+    for line in fp:
+        try:
+            d, u, t = line.split(' ')
+        except:
+            raise util.Abort(_('Authentication file %s is malformed.') % tokenpath)
+        if d == domain and u == userhash:
+            # Get rid of that newline character...
+            ret = t[:-1]
+
+    fp.close()
+    return ret
+
+def add_kilnapi_token(ui, url, fbToken):
+    if not fbToken:
+        return
+    tokenpath = _get_path('hgkiln')
+    if os.path.isdir(tokenpath):
+        raise util.Abort(_('Authentication file %s exists, but is a directory.') % tokenpath)
+
+    domain = get_domain(url)
+    userhash = md5(get_username(get_dest(ui))).hexdigest()
+
+    fp = open(tokenpath, 'a')
+    fp.write(domain + ' ' + userhash + ' ' + fbToken + '\n')
+    fp.close()
+
+def delete_kilnapi_tokens():
+    # deletes the hgkiln file
+    tokenpath = _get_path('hgkiln')
+    if os.path.exists(tokenpath) and not os.path.isdir(tokenpath):
+        os.remove(tokenpath)
+
+def check_kilnauth_token(ui, url):
+    cookiepath = _get_path('hgcookies')
+    if (not os.path.exists(cookiepath)) or (not os.path.isdir(cookiepath)):
+        return ''
+    cookiepath = os.path.join(cookiepath, md5(get_username(get_dest(ui))).hexdigest())
+
+    try:
+        if not os.path.exists(cookiepath):
+            return ''
+        cj = MozillaCookieJar(cookiepath)
+    except IOError, e:
+        return ''
+
+    domain = get_domain(url)
+
+    cj.load(ignore_discard=True, ignore_expires=True)
+    for cookie in cj:
+        if domain == cookie.domain:
+            if cookie.name == 'fbToken':
+                return cookie.value
+
+def remember_path(ui, repo, path, value):
+    '''appends the path to the working copy's hgrc and backs up the original'''
+
+    paths = dict(ui.configitems('paths'))
+    # This should never happen.
+    if path in paths: return
+    # ConfigParser only cares about these three characters.
+    if re.search(r'[:=\s]', path): return
+
+    try:
+        audit_path = scmutil.pathauditor(repo.root)
+    except ImportError:
+        audit_path = getattr(repo.opener, 'audit_path', util.path_auditor(repo.root))
+
+    audit_path('hgrc')
+    audit_path('hgrc.backup')
+    base = repo.opener.base
+    util.copyfile(os.path.join(base, 'hgrc'),
+                  os.path.join(base, 'hgrc.backup'))
+    ui.setconfig('paths', path, value)
+
+    try:
+        fp = repo.opener('hgrc', 'a', text=True)
+        # Mercurial assumes Unix newlines by default and so do we.
+        fp.write('\n[paths]\n%s = %s\n' % (path, value))
+        fp.close()
+    except IOError, e:
+        return
+
+def unremember_path(ui, repo):
+    '''restores the working copy's hgrc'''
+
+    try:
+        audit_path = scmutil.pathauditor(repo.root)
+    except ImportError:
+        audit_path = getattr(repo.opener, 'audit_path', util.path_auditor(repo.root))
+
+    audit_path('hgrc')
+    audit_path('hgrc.backup')
+    base = repo.opener.base
+    if os.path.exists(os.path.join(base, 'hgrc')):
+        util.copyfile(os.path.join(base, 'hgrc.backup'),
+                      os.path.join(base, 'hgrc'))
+
+def guess_kilnpath(orig, ui, repo, dest=None, **opts):
+    if not dest:
+        return orig(ui, repo, **opts)
+
+    if os.path.exists(dest) or is_dest_a_path(ui, dest) or is_dest_a_scheme(ui, dest):
+        return orig(ui, repo, dest, **opts)
+    else:
+        targets = get_targets(repo);
+        matches = []
+        prefixmatches = []
+
+        for target in targets:
+            url = '%s/%s/%s/%s' % (target[0], target[1], target[2], target[3])
+            ndest = normalize_name(dest)
+            ntarget = [normalize_name(t) for t in target[1:4]]
+            aliases = [normalize_name(s) for s in target[4]]
+
+            if ndest.count('/') == 0 and \
+               (ntarget[0] == ndest or \
+                ntarget[1] == ndest or \
+                ntarget[2] == ndest or \
+                ndest in aliases):
+                matches.append(url)
+            elif ndest.count('/') == 1 and \
+                 '/'.join(ntarget[0:2]) == ndest or \
+                 '/'.join(ntarget[1:3]) == ndest:
+                matches.append(url)
+            elif ndest.count('/') == 2 and \
+                 '/'.join(ntarget[0:3]) == ndest:
+                matches.append(url)
+
+            if (ntarget[0].startswith(ndest) or \
+                ntarget[1].startswith(ndest) or \
+                ntarget[2].startswith(ndest) or \
+                '/'.join(ntarget[0:2]).startswith(ndest) or \
+                '/'.join(ntarget[1:3]).startswith(ndest) or \
+                '/'.join(ntarget[0:3]).startswith(ndest)):
+                prefixmatches.append(url)
+
+        if len(matches) == 0:
+            if len(prefixmatches) == 0:
+                # if there are no matches at all, let's just let mercurial handle it.
+                return orig(ui, repo, dest, **opts)
+            else:
+                urllist = create_match_list(prefixmatches)
+                raise util.Abort(_('%s did not exactly match any part of the repository slug:\n\n%s') % (dest, urllist))
+        elif len(matches) > 1:
+            urllist = create_match_list(matches)
+            raise util.Abort(_('%s matches more than one Kiln repository:\n\n%s') % (dest, urllist))
+
+        # Unique match -- perform the operation
+        try:
+            remember_path(ui, repo, dest, matches[0])
+            return orig(ui, repo, matches[0], **opts)
+        finally:
+            unremember_path(ui, repo)
+
+def get_tails(repo):
+    tails = []
+    for rev in xrange(repo['tip'].rev() + 1):
+        ctx = repo[rev]
+        if ctx.p1().rev() == nullrev and ctx.p2().rev() == nullrev:
+            tails.append(ctx.hex())
+    if not len(tails):
+        raise util.Abort(_('Path guessing is only enabled for non-empty repositories.'))
+    return tails
+
+def get_targets(repo):
+    targets = []
+    kilnschemes = repo.ui.configitems('kiln_scheme')
+    for scheme in kilnschemes:
+        url = scheme[1]
+        if url.lower().find('/kiln/') != -1:
+            baseurl = url[:url.lower().find('/kiln/') + len("/kiln/")]
+        elif url.lower().find('kilnhg.com/') != -1:
+            baseurl = url[:url.lower().find('kilnhg.com/') + len("kilnhg.com/")]
+        else:
+            continue
+
+        tails = get_tails(repo)
+
+        token = check_kilnapi_token(repo.ui, baseurl)
+        if not token:
+            token = check_kilnauth_token(repo.ui, baseurl)
+            add_kilnapi_token(repo.ui, baseurl, token)
+        if not token:
+            token = login(repo.ui, baseurl)
+            add_kilnapi_token(repo.ui, baseurl, token)
+
+        # We have an token at this point
+        params = dict(revTails=tails, token=token)
+        related_repos = call_api(repo.ui, baseurl, 'Api/1.0/Repo/Related', params)
+        targets.extend([[url,
+                         related_repo['sProjectSlug'],
+                         related_repo['sGroupSlug'],
+                         related_repo['sSlug'],
+                         related_repo.get('rgAliases', [])] for related_repo in related_repos])
+    return targets
+
+def display_targets(repo):
+    targets = get_targets(repo)
+    repo.ui.write(_('The following Kiln targets are available for this repository:\n\n'))
+    for target in targets:
+        if target[4]:
+            alias_text = _(' (alias%s: %s)') % ('es' if len(target[4]) > 1 else '', ', '.join(target[4]))
+        else:
+            alias_text = ''
+        repo.ui.write(' %s/%s/%s/%s%s\n' % (target[0], target[1], target[2], target[3], alias_text))
+
+def dummy_command(ui, repo, dest=None, **opts):
+    '''dummy command to pass to guess_path() for hg kiln
+
+    Returns the repository URL if dest has been successfully path
+    guessed, None otherwise.
+    '''
+    return opts['path'] != dest and dest or None
+
+def kiln(ui, repo, *pats, **opts):
+    '''show the relevant page of the repository in Kiln
+
+    This command allows you to navigate straight the Kiln page for a
+    repository, including directly to settings, file annotation, and
+    file & changeset viewing.
+
+    Typing "hg kiln" by itself will take you directly to the
+    repository history in kiln. Specify any other options to override
+    this default. The --rev, --annotate, --file, and --filehistory options
+    can be used together.
+
+    To display a list of valid targets, type hg kiln --targets. To
+    push or pull from one of these targets, use any unique identifier
+    from this list as the parameter to the push/pull command.
+    '''
+
+    try:
+        url = _baseurl(ui, ui.expandpath(opts['path'] or 'default', opts['path'] or 'default-push'))
+    except RepoError:
+        url = guess_kilnpath(dummy_command, ui, repo, dest=opts['path'], **opts)
+        if not url:
+            raise
+
+    if not url:
+        raise util.Abort(_('this does not appear to be a Kiln-hosted repository\n'))
+    default = True
+
+    def files(key):
+        allpaths = []
+        for f in opts[key]:
+            paths = [path for path in repo['.'].manifest().iterkeys() if re.search(match._globre(f) + '$', path)]
+            paths = [re.sub(r'^\.kbf', '', path) for path in paths]
+            if not paths:
+                ui.warn(_('cannot find %s') % f)
+            allpaths += paths
+        return allpaths
+
+    if opts['rev']:
+        default = False
+        for ctx in (repo[rev] for rev in opts['rev']):
+            browse(urljoin(url, 'History', ctx.hex()))
+
+    if opts['annotate']:
+        default = False
+        for f in files('annotate'):
+            browse(urljoin(url, 'File', f) + '?view=annotate')
+    if opts['file']:
+        default = False
+        for f in files('file'):
+            browse(urljoin(url, 'File', f))
+    if opts['filehistory']:
+        default = False
+        for f in files('filehistory'):
+            browse(urljoin(url, 'FileHistory', f) + '?rev=tip')
+
+    if opts['outgoing']:
+        default = False
+        browse(urljoin(url, 'Outgoing'))
+    if opts['settings']:
+        default = False
+        browse(urljoin(url, 'Settings'))
+
+    if opts['targets']:
+        default = False
+        display_targets(repo)
+    if opts['logout']:
+        default = False
+        delete_kilnapi_tokens()
+
+    if default or opts['changes']:
+        browse(url)
+
+def uisetup(ui):
+    extensions.wrapcommand(commands.table, 'outgoing', guess_kilnpath)
+    extensions.wrapcommand(commands.table, 'push', guess_kilnpath)
+    extensions.wrapcommand(commands.table, 'pull', guess_kilnpath)
+    extensions.wrapcommand(commands.table, 'incoming', guess_kilnpath)
+
+def reposetup(ui, repo):
+    if issubclass(repo.__class__, httprepo.httprepository):
+        _upgradecheck(ui, repo)
+
+cmdtable = {
+    'kiln':
+        (kiln,
+         [('a', 'annotate', [], _('annotate the file provided')),
+          ('c', 'changes', None, _('view the history of this repository; this is the default')),
+          ('f', 'file', [], _('view the file contents')),
+          ('l', 'filehistory', [], _('view the history of the file')),
+          ('o', 'outgoing', None, _('view the repository\'s outgoing tab')),
+          ('s', 'settings', None, _('view the repository\'s settings tab')),
+          ('p', 'path', '', _('select which Kiln branch of the repository to use')),
+          ('r', 'rev', [], _('view the specified changeset in Kiln')),
+          ('t', 'targets', None, _('view the repository\'s targets')),
+          ('', 'logout', None, _('log out of Kiln sessions'))],
+         _('hg kiln [-p url] [-r rev|-a file|-f file|-c|-o|-s|-t|--logout]'))
+    }
kilnpath.py: diff not loaded (changeset too large)
tests/bfpath.py: diff not loaded (changeset too large)
tests/common.py (change 1 of 1):
@@ -0,0 +1,2 @@
+import bfpath
+from tests.common import *
tests/hgtest.py: diff not loaded (changeset too large)
tests/kilntest.py: diff not loaded (changeset too large)
tests/test-big-push.py: diff not loaded (changeset too large)
tests/test-big-push.py.out: diff not loaded (changeset too large)
tests/test-caseguard.py: diff not loaded (changeset too large)
tests/test-caseguard.py.out: diff not loaded (changeset too large)
tests/test-gestalt.py: diff not loaded (changeset too large)
tests/test-gestalt.py.out: diff not loaded (changeset too large)
tests/test-path.py: diff not loaded (changeset too large)
tests/test-path.py.out: diff not loaded (changeset too large)
tests/test-targets.py: diff not loaded (changeset too large)
tests/test-targets.py.out: diff not loaded (changeset too large)
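For reference, the header comments and module docstring of kiln.py in this changeset describe how to enable the extension and how to silence the upgrade notice. A minimal ~/.hgrc sketch combining both (the extension path is a placeholder; substitute your checkout location):

```ini
[extensions]
; path to the kiln.py from the Kiln Extensions bundle (placeholder path)
kiln = /path/to/kiln.py

[kiln]
; optional: per the module docstring, skip the upgrade notice for
; version X.Y.Z and all lower versions
ignoreversion = X.Y.Z
; optional: disable the automatic extensions-upgrade check entirely
autoupdate = False
```

The `ignoreversion` and `autoupdate` keys are the two [kiln] settings read by the code in this changeset (`_upgradecheck` consults `autoupdate`).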