Commit 4c3dd932 authored by Jerome Kieffer

Merge branch 'master', remote branch 'origin'

parents 80a383ca 5446ec29
...@@ -40,3 +40,12 @@ ef9d2c680684d0df7d81f529cda29e9e1741f575 cython-0.10.1
a6b9f0a6d02d23fc3d3a9d0587867faa3afb2fcd 0.14.rc0
15bf34c9387444e262acb1de594405444dd571a4 0.14
5320ddd8c3a60d00e0513f9f70d6846cd449409b 0.17.beta1
275fb550c1d802da3df35ebae41e04eadc60e49e 0.17.2
b0faba6967e74f652286a0de6d02d736845b0708 0.17.3
c1a18ab6b0808e87f68d2f9d914c01934510aef5 0.18b1
9a11631e0edb0394b17554cde8bec5d4784117e4 0.18rc1
76f33728e8534e698267e097cf603ea59ade6f30 0.18
4f782ac7b3fdf3b4408bbf9f2ed4c38f31e60920 0.19b1
52beb5b16df5b8a92bb6c8c47faf42370d73cb0f 0.19b2
4818f5b68eb4b4ea6ad7e415f6672b491e2461bc 0.19rc1
48407fa3f3c9da84ab3dc103a6a2b1ca4c1beb2a 0.19
language: python
python:
- 2.5
- 2.6
- 2.7
-# - 3.2
+- 3.2
- 3.3
- pypy
branches:
only:
- master
install: pip install .
script: CFLAGS=-O0 python runtests.py -vv
matrix:
allow_failures:
- python: pypy
...@@ -2,7 +2,251 @@
Cython Changelog
================
-0.17.2 (2012-11-??)
+0.20 (??)
===================
Features added
--------------
Bugs fixed
----------
* The automatic C switch statement generation behaves more safely for
heterogeneous value types (e.g. mixing enum and char), allowing for
a slightly wider application and reducing corner cases. It now always
generates a 'default' clause to avoid C compiler warnings about
unmatched enum values.
Other changes
-------------
0.19.1 (2013-05-11)
===================
Features added
--------------
* Completely empty C-API structs for extension type slots (protocols like
number/mapping/sequence) are no longer generated into the C code.
* Docstrings that directly follow a public/readonly attribute declaration
in a cdef class will be used as docstring of the auto-generated property.
This fixes ticket 206.
* The automatic signature documentation tries to preserve more semantics
of default arguments and argument types. Specifically, ``bint`` arguments
now appear as type ``bool``.
* A warning is emitted when negative literal indices are found inside of
a code section that disables ``wraparound`` handling. This helps with
fixing invalid code that might fail in the face of future compiler
optimisations.
* Constant folding for boolean expressions (and/or) was improved.
* Added a build_dir option to cythonize() which allows one to place
the generated .c files outside the source tree.
Bugs fixed
----------
* ``isinstance(X, type)`` failed to get optimised into a call to
``PyType_Check()``, as done for other builtin types.
* A spurious "from datetime cimport *" was removed from the "cpython"
declaration package. This means that the "datetime" declarations
(added in 0.19) are no longer available directly from the "cpython"
namespace, but only from "cpython.datetime". This is the correct
way of doing it because the declarations refer to a standard library
module, not the core CPython C-API itself.
* The C code for extension types is now generated in topological order
instead of source code order to avoid C compiler errors about missing
declarations for subtypes that are defined before their parent.
* The ``memoryview`` type name no longer shows up in the module dict of
modules that use memory views. This fixes trac ticket 775.
* Regression in 0.19 that rejected valid C expressions from being used
in C array size declarations.
* In C++ mode, the C99-only keyword ``restrict`` could accidentally be
seen by the GNU C++ compiler. It is now specially handled for both
GCC and MSVC.
* Testing large (> int) C integer values for their truth value could fail
due to integer wrap-around.
Other changes
-------------
0.19 (2013-04-19)
=================
Features added
--------------
* New directives ``c_string_type`` and ``c_string_encoding`` to more easily
and automatically convert between C strings and the different Python string
types.
* The extension type flag ``Py_TPFLAGS_HAVE_VERSION_TAG`` is enabled by default
on extension types and can be disabled using the ``type_version_tag`` compiler
directive.
* EXPERIMENTAL support for simple Cython code level line tracing. Enabled by
the "linetrace" compiler directive.
* Cython implemented functions make their argument and return type annotations
available through the ``__annotations__`` attribute (PEP 3107).
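A plain-Python illustration of the PEP 3107 annotations that become visible this way (this is not Cython-specific code, just the semantics being exposed):

```python
# PEP 3107 function annotations, collected into __annotations__.
def scale(x: int, factor: float = 2.0) -> float:
    return x * factor

assert scale.__annotations__ == {'x': int, 'factor': float, 'return': float}
assert scale(3) == 6.0
```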
* Access to non-cdef module globals and Python object attributes is faster.
* ``Py_UNICODE*`` coerces from and to Python unicode strings. This is
helpful when talking to Windows APIs, which use compatible wchar_t
arrays for strings. Note that the ``Py_UNICODE`` type is otherwise
deprecated as of CPython 3.3.
* ``isinstance(obj, basestring)`` is optimised. In Python 3 it only tests
for instances of ``str`` (i.e. Py2 ``unicode``).
* The ``basestring`` builtin is mapped to ``str`` (i.e. Py2 ``unicode``) when
compiling the generated C code under Python 3.
* Closures use freelists, which can speed up their creation quite substantially.
This is also visible for short running generator expressions, for example.
* A new class decorator ``@cython.freelist(N)`` creates a static freelist of N
instances for an extension type, thus avoiding the costly allocation step if
possible. This can speed up object instantiation by 20-30% in suitable
scenarios. Note that freelists are currently only supported for base types,
not for types that inherit from others.
* Fast extension type instantiation using the ``Type.__new__(Type)`` idiom has
gained support for passing arguments. It is also a bit faster for types defined
inside of the module.
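A plain-Python sketch of what the ``tp.__new__(tp)`` idiom means: it allocates an instance directly, bypassing ``__init__``.

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

# Allocate an instance without running __init__.
p = Point.__new__(Point)
assert isinstance(p, Point)
assert not hasattr(p, 'x')  # __init__ never ran
```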
* The Python2-only dict methods ``.iter*()`` and ``.view*()`` (requires Python 2.7)
are automatically mapped to the equivalent keys/values/items methods in Python 3
for typed dictionaries.
* Slicing unicode strings, lists and tuples is faster.
* list.append() is faster on average.
* ``raise Exception() from None`` suppresses the exception context in Py3.3.
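What context suppression looks like at the Python level (a minimal illustration; ``parse_int`` is a hypothetical example function):

```python
def parse_int(text):
    try:
        return int(text)
    except ValueError:
        # 'from None' suppresses the chained "During handling of the
        # above exception..." traceback in Python 3.
        raise RuntimeError("bad value: %r" % text) from None

try:
    parse_int("abc")
except RuntimeError as exc:
    assert exc.__suppress_context__ is True
    assert exc.__cause__ is None
```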
* Py3 compatible ``exec(tuple)`` syntax is supported in Py2 code.
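A sketch of the call form in question, shown under Python 3 where ``exec`` is already a function:

```python
# Py3-style exec(code, globals) call form; Cython accepts this
# spelling in Py2 code as well.
namespace = {}
exec("result = 6 * 7", namespace)
assert namespace["result"] == 42
```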
* Keyword arguments are supported for cdef functions.
* External C++ classes can be declared nogil. Patch by John Stumpo. This fixes
trac ticket 805.
Bugs fixed
----------
* 2-value slicing of unknown objects passes the correct slice when the ``getitem``
protocol is used instead of the ``getslice`` protocol (especially in Python 3),
i.e. ``None`` values for missing bounds instead of ``[0,maxsize]``. It is also
a bit faster in some cases, e.g. for constant bounds. This fixes trac ticket 636.
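The ``None``-bounds behaviour described above can be observed with a plain Python probe object:

```python
class SliceProbe:
    def __getitem__(self, key):
        return key  # just report what the slice protocol passed in

probe = SliceProbe()
# Missing bounds arrive as None via __getitem__, not as 0/maxsize.
assert probe[:5] == slice(None, 5, None)
assert probe[2:] == slice(2, None, None)
```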
* Cascaded assignments of None values to extension type variables failed with
a ``TypeError`` at runtime.
* The ``__defaults__`` attribute was not writable for Cython implemented
functions.
* Default values of keyword-only arguments showed up in ``__defaults__`` instead
of ``__kwdefaults__`` (which was not implemented). Both are available for
Cython implemented functions now, as specified in Python 3.x.
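The Python 3 semantics being matched, shown with a plain Python 3 function:

```python
def f(a, b=2, *args, c=3, d=4):
    return a + b + c + d

# Positional defaults and keyword-only defaults live in separate attributes.
assert f.__defaults__ == (2,)
assert f.__kwdefaults__ == {'c': 3, 'd': 4}
```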
* ``yield`` works inside of ``with gil`` sections. It previously led to a crash.
This fixes trac ticket 803.
* Static methods without explicitly named positional arguments (e.g. having only
``*args``) crashed when being called. This fixes trac ticket 804.
* ``dir()`` without arguments previously returned an unsorted list, which now
gets sorted as expected.
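A quick illustration of the expected CPython behaviour (``namespace_demo`` is a hypothetical example function):

```python
def namespace_demo():
    zebra = 1
    apple = 2
    # dir() without arguments returns the local names, sorted.
    return dir()

assert namespace_demo() == ['apple', 'zebra']
```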
* ``dict.items()``, ``dict.keys()`` and ``dict.values()`` no longer return lists
in Python 3.
* Exiting from an ``except-as`` clause now deletes the exception in Python 3 mode.
* The declarations of ``frexp()`` and ``ldexp()`` in ``math.pxd`` were incorrect.
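As a reminder of the behaviour those declarations describe, here is the Python-level equivalent from the ``math`` module:

```python
import math

# frexp decomposes x into mantissa * 2**exponent with 0.5 <= |mantissa| < 1;
# ldexp is the inverse operation.
mantissa, exponent = math.frexp(8.0)
assert (mantissa, exponent) == (0.5, 4)
assert math.ldexp(mantissa, exponent) == 8.0
```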
Other changes
-------------
0.18 (2013-01-28)
=================
Features added
--------------
* Named Unicode escapes ("\N{...}") are supported.
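For reference, named escapes select characters by their Unicode character name:

```python
# "\N{...}" escapes name characters by their official Unicode name.
assert "\N{GREEK SMALL LETTER PI}" == "\u03c0"
assert "caf\N{LATIN SMALL LETTER E WITH ACUTE}" == "café"
```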
* Python functions/classes provide the special attribute "__qualname__"
as defined by PEP 3155.
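What ``__qualname__`` adds over ``__name__``, in plain Python 3:

```python
class Outer:
    class Inner:
        def method(self):
            pass

# __qualname__ gives the dotted path within the module, unlike __name__.
assert Outer.Inner.method.__name__ == 'method'
assert Outer.Inner.method.__qualname__ == 'Outer.Inner.method'
```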
* Added a directive ``overflowcheck`` which raises an OverflowException when
arithmetic with C ints overflows. This has a modest performance penalty, but
is much faster than using Python ints.
* Calls to nested Python functions are resolved at compile time.
* Type inference works across nested functions.
* ``py_bytes_string.decode(...)`` is optimised.
* C ``const`` declarations are supported in the language.
Bugs fixed
----------
* Automatic C++ exception mapping didn't work in nogil functions (only in
"with nogil" blocks).
Other changes
-------------
0.17.4 (2013-01-03)
===================
Bugs fixed
----------
* Garbage collection triggered during deallocation of container classes could lead to a double-deallocation.
0.17.3 (2012-12-14)
===================
Features added
--------------
Bugs fixed
----------
* During final interpreter cleanup (with types cleanup enabled at compile time), extension types that inherit from base types over more than one level that were cimported from other modules could lead to a crash.
* Weak-reference support in extension types (with a ``cdef __weakref__`` attribute) generated incorrect deallocation code.
* In CPython 3.3, converting a Unicode character to the Py_UNICODE type could fail to raise an overflow for non-BMP characters that do not fit into a wchar_t on the current platform.
* Negative C integer constants lost their longness suffix in the generated C code.
Other changes
-------------
0.17.2 (2012-11-20)
===================
Features added
...@@ -13,6 +257,10 @@ Features added
Bugs fixed
----------
* Replacing an object reference with the value of one of its cdef attributes could generate incorrect C code that accessed the object after deleting its last reference.
* C-to-Python type coercions during cascaded comparisons could generate invalid C code, specifically when using the 'in' operator.
* "obj[1,]" passed a single integer into the item getter instead of a tuple.
* Cyclic imports at module init time did not work in Py3.
......
...@@ -18,13 +18,28 @@ try:
except ImportError:
import md5 as hashlib
try:
from os.path import relpath as _relpath
except ImportError:
# Py<2.6
def _relpath(path, start=os.path.curdir):
if not path:
raise ValueError("no path specified")
start_list = os.path.abspath(start).split(os.path.sep)
path_list = os.path.abspath(path).split(os.path.sep)
i = len(os.path.commonprefix([start_list, path_list]))
rel_list = [os.path.pardir] * (len(start_list)-i) + path_list[i:]
if not rel_list:
return os.path.curdir
return os.path.join(*rel_list)
from distutils.extension import Extension
from Cython import Utils
-from Cython.Utils import cached_function, cached_method, path_exists
+from Cython.Utils import cached_function, cached_method, path_exists, find_root_package_dir
from Cython.Compiler.Main import Context, CompilationOptions, default_options
join_path = cached_function(os.path.join)
if sys.version_info[0] < 3:
...@@ -53,7 +68,7 @@ def extended_iglob(pattern):
if path not in seen:
seen.add(path)
yield path
-for path in extended_iglob(join_path(root, '*', '**', rest)):
+for path in extended_iglob(join_path(root, '*', '**/' + rest)):
if path not in seen:
seen.add(path)
yield path
...@@ -208,7 +223,7 @@ def strip_string_literals(code, prefix='__Pyx_L'):
in_quote = False
hash_mark = single_q = double_q = -1
code_len = len(code)
while True:
if hash_mark < q:
hash_mark = code.find('#', q)
...@@ -288,12 +303,33 @@ def normalize_existing(base_path, rel_paths):
@cached_function
def normalize_existing0(base_dir, rel_paths):
-filtered = []
+normalized = []
for rel in rel_paths:
path = join_path(base_dir, rel)
-if os.path.exists(path):
-filtered.append(os.path.normpath(path))
-return filtered
+if path_exists(path):
+normalized.append(os.path.normpath(path))
+else:
normalized.append(rel)
return normalized
def resolve_depends(depends, include_dirs):
include_dirs = tuple(include_dirs)
resolved = []
for depend in depends:
path = resolve_depend(depend, include_dirs)
if path is not None:
resolved.append(path)
return resolved
@cached_function
def resolve_depend(depend, include_dirs):
if depend[0] == '<' and depend[-1] == '>':
return None
for dir in include_dirs:
path = join_path(dir, depend)
if path_exists(path):
return os.path.normpath(path)
return None
@cached_function
def parse_dependencies(source_filename):
...@@ -353,7 +389,7 @@ class DependencyTree(object):
elif not self.quiet:
print("Unable to locate '%s' referenced from '%s'" % (filename, include))
return all
@cached_method
def cimports_and_externs(self, filename):
# This is really ugly. Nested cimports are resolved with respect to the
...@@ -383,16 +419,29 @@ class DependencyTree(object):
module = os.path.splitext(os.path.basename(filename))[0]
return '.'.join(self.package(filename) + (module,))
@cached_method
def find_pxd(self, module, filename=None):
-if module[0] == '.':
+is_relative = module[0] == '.'
if is_relative and not filename:
raise NotImplementedError("New relative imports.")
if filename is not None:
-relative = '.'.join(self.package(filename) + tuple(module.split('.')))
+module_path = module.split('.')
if is_relative:
module_path.pop(0) # just explicitly relative
package_path = list(self.package(filename))
while module_path and not module_path[0]:
try:
package_path.pop()
except IndexError:
return None # FIXME: error?
module_path.pop(0)
relative = '.'.join(package_path + module_path)
pxd = self.context.find_pxd_file(relative, None)
if pxd:
return pxd
if is_relative:
return None # FIXME: error?
return self.context.find_pxd_file(module, None)
find_pxd = cached_method(find_pxd)
@cached_method
def cimported_files(self, filename):
...@@ -448,7 +497,7 @@ class DependencyTree(object):
externs = self.cimports_and_externs(filename)[1]
if externs:
if 'depends' in info.values:
-info.values['depends'] = list(set(info.values['depends']).union(set(externs)))
+info.values['depends'] = list(set(info.values['depends']).union(externs))
else:
info.values['depends'] = list(externs)
return info
...@@ -502,6 +551,8 @@ def create_dependency_tree(ctx=None, quiet=False):
# This may be useful for advanced users?
def create_extension_list(patterns, exclude=[], ctx=None, aliases=None, quiet=False, exclude_failures=False):
if not isinstance(patterns, list):
patterns = [patterns]
explicit_modules = set([m.name for m in patterns if isinstance(m, Extension)])
seen = set()
deps = create_dependency_tree(ctx, quiet=quiet)
...@@ -510,8 +561,6 @@ def create_extension_list(patterns, exclude=[], ctx=None, aliases=None, quiet=Fa
exclude = [exclude]
for pattern in exclude:
to_exclude.update(extended_iglob(pattern))
if not isinstance(patterns, list):
patterns = [patterns]
module_list = []
for pattern in patterns:
if isinstance(pattern, str):
...@@ -563,6 +612,12 @@ def create_extension_list(patterns, exclude=[], ctx=None, aliases=None, quiet=Fa
if source not in sources:
sources.append(source)
del kwds['sources']
if 'depends' in kwds:
depends = resolve_depends(kwds['depends'], (kwds.get('include_dirs') or []) + [find_root_package_dir(file)])
if template is not None:
# Always include everything from the template.
depends = list(set(template.depends).union(set(depends)))
kwds['depends'] = depends
module_list.append(exn_type(
name=module_name,
sources=sources,
...@@ -600,6 +655,7 @@ def cythonize(module_list, exclude=[], nthreads=0, aliases=None, quiet=False, fo
c_options = CompilationOptions(**options)
cpp_options = CompilationOptions(**options); cpp_options.cplus = True
ctx = c_options.create_context()
options = c_options
module_list = create_extension_list(
module_list,
exclude=exclude,
...@@ -608,9 +664,25 @@ def cythonize(module_list, exclude=[], nthreads=0, aliases=None, quiet=False, fo
exclude_failures=exclude_failures,
aliases=aliases)
deps = create_dependency_tree(ctx, quiet=quiet)
build_dir = getattr(options, 'build_dir', None)
modules_by_cfile = {}
to_compile = []
for m in module_list:
if build_dir:
root = os.path.realpath(os.path.abspath(find_root_package_dir(m.sources[0])))
def copy_to_build_dir(filepath, root=root):
filepath_abs = os.path.realpath(os.path.abspath(filepath))
if os.path.isabs(filepath):
filepath = filepath_abs
if filepath_abs.startswith(root):
mod_dir = os.path.join(build_dir,
os.path.dirname(_relpath(filepath, root)))
if not os.path.isdir(mod_dir):
os.makedirs(mod_dir)
shutil.copy(filepath, mod_dir)
for dep in m.depends:
copy_to_build_dir(dep)
new_sources = []
for source in m.sources:
base, ext = os.path.splitext(source)
...@@ -621,11 +693,19 @@ def cythonize(module_list, exclude=[], nthreads=0, aliases=None, quiet=False, fo
else:
c_file = base + '.c'
options = c_options
# setup for out of place build directory if enabled
if build_dir:
c_file = os.path.join(build_dir, c_file)
dir = os.path.dirname(c_file)
if not os.path.isdir(dir):
os.makedirs(dir)
if os.path.exists(c_file):
c_timestamp = os.path.getmtime(c_file)
else:
c_timestamp = -1
# Priority goes first to modified files, second to direct
# dependents, and finally to indirect dependents.
if c_timestamp < deps.timestamp(source):
...@@ -654,6 +734,8 @@ def cythonize(module_list, exclude=[], nthreads=0, aliases=None, quiet=False, fo
modules_by_cfile[c_file].append(m)
else:
new_sources.append(source)
if build_dir:
copy_to_build_dir(source)
m.sources = new_sources
if hasattr(options, 'cache'):
if not os.path.exists(options.cache):
...@@ -680,6 +762,9 @@ def cythonize(module_list, exclude=[], nthreads=0, aliases=None, quiet=False, fo
module_list.remove(module)
if hasattr(options, 'cache'):
cleanup_cache(options.cache, getattr(options, 'cache_size', 1024 * 1024 * 100))
# cythonize() is often followed by the (non-Python-buffered)
# compiler output, flush now to avoid interleaving output.
sys.stdout.flush()
return module_list
# TODO: Share context? Issue: pyx processing leaks into pxd module
......
...@@ -11,6 +11,40 @@ class EmbedSignature(CythonTransform):
self.class_name = None
self.class_node = None
unop_precedence = 11
binop_precedence = {
'or': 1,
'and': 2,
'not': 3,
'in': 4, 'not in': 4, 'is': 4, 'is not': 4, '<': 4, '<=': 4, '>': 4, '>=': 4, '!=': 4, '==': 4,
'|': 5,
'^': 6,
'&': 7,
'<<': 8, '>>': 8,
'+': 9, '-': 9,
'*': 10, '/': 10, '//': 10, '%': 10,
# unary: '+': 11, '-': 11, '~': 11
'**': 12}
def _fmt_expr_node(self, node, precedence=0):
if isinstance(node, ExprNodes.BinopNode) and not node.inplace:
new_prec = self.binop_precedence.get(node.operator, 0)
result = '%s %s %s' % (self._fmt_expr_node(node.operand1, new_prec),
node.operator,
self._fmt_expr_node(node.operand2, new_prec))
if precedence > new_prec:
result = '(%s)' % result
elif isinstance(node, ExprNodes.UnopNode):
result = '%s%s' % (node.operator,
self._fmt_expr_node(node.operand, self.unop_precedence))
if precedence > self.unop_precedence:
result = '(%s)' % result
elif isinstance(node, ExprNodes.AttributeNode):
result = '%s.%s' % (self._fmt_expr_node(node.obj), node.attribute)
else:
result = node.name
return result
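The added formatter above can be sketched in isolation: parenthesise a subexpression only when the surrounding operator binds tighter than its own. This is a simplified, hypothetical model using tuples in place of Cython AST nodes:

```python
# Simplified model of precedence-aware expression formatting; nodes are
# either leaf strings or (operator, left, right) tuples.
BINOP_PRECEDENCE = {'or': 1, 'and': 2, '+': 9, '-': 9, '*': 10, '/': 10, '**': 12}

def fmt(node, precedence=0):
    if isinstance(node, str):
        return node
    op, left, right = node
    prec = BINOP_PRECEDENCE[op]
    result = '%s %s %s' % (fmt(left, prec), op, fmt(right, prec))
    # Wrap in parentheses only if the context binds tighter than this operator.
    return '(%s)' % result if precedence > prec else result

assert fmt(('*', ('+', 'a', 'b'), 'c')) == '(a + b) * c'
assert fmt(('+', 'a', ('*', 'b', 'c'))) == 'a + b * c'
```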
def _fmt_arg_defv(self, arg):
default_val = arg.default
if not default_val:
...@@ -31,8 +65,8 @@ class EmbedSignature(CythonTransform):
return repr_val
except Exception:
try:
-return default_val.name # XXX
-except AttributeError:
+return self._fmt_expr_node(default_val)
+except AttributeError, e:
return '<???>'
def _fmt_arg(self, arg):
...@@ -93,7 +127,6 @@ class EmbedSignature(CythonTransform):
else:
return signature
def __call__(self, node):
if not Options.docstrings:
return node
...@@ -172,8 +205,25 @@ class EmbedSignature(CythonTransform):
old_doc = node.py_func.entry.doc
else:
old_doc = None
new_doc = self._embed_signature(signature, old_doc)
node.entry.doc = EncodedString(new_doc)
if hasattr(node, 'py_func') and node.py_func is not None:
node.py_func.entry.doc = EncodedString(new_doc)
return node
def visit_PropertyNode(self, node):
if not self.current_directives['embedsignature']:
return node
entry = node.entry
if entry.visibility == 'public':
# property synthesised from a cdef public attribute
type_name = entry.type.declaration_code("", for_display=1)
if not entry.type.is_pyobject:
type_name = "'%s'" % type_name
elif entry.type.is_extension_type:
type_name = entry.type.module_name + '.' + type_name
signature = '%s: %s' % (entry.name, type_name)
new_doc = self._embed_signature(signature, entry.doc)
entry.doc = EncodedString(new_doc)
return node
-from Visitor import CythonTransform
-from ModuleNode import ModuleNode
-from ExprNodes import *
-from Errors import CompileError
-from UtilityCode import CythonUtilityCode
-from Code import UtilityCode, TempitaUtilityCode
-import Interpreter
-import PyrexTypes
-import Naming
-import Symtab
+from Cython.Compiler.Visitor import CythonTransform
+from Cython.Compiler.ModuleNode import ModuleNode
+from Cython.Compiler.Errors import CompileError
+from Cython.Compiler.UtilityCode import CythonUtilityCode
+from Cython.Compiler.Code import UtilityCode, TempitaUtilityCode
+from Cython.Compiler import Options
+from Cython.Compiler import Interpreter
+from Cython.Compiler import PyrexTypes
+from Cython.Compiler import Naming
+from Cython.Compiler import Symtab
def dedent(text, reindent=0):
...@@ -600,10 +601,14 @@ class GetAndReleaseBufferUtilityCode(object):
find_buffer_types(env)
-proto, impl = TempitaUtilityCode.load_as_string(
+util_code = TempitaUtilityCode.load(
"GetAndReleaseBuffer", from_file="Buffer.c",
context=dict(types=types))
proto = util_code.format_code(util_code.proto)
impl = util_code.format_code(
util_code.inject_string_constants(util_code.impl, output))
proto_code.putln(proto)
code.putln(impl)
......
This diff is collapsed.
...@@ -43,10 +43,6 @@ Options:
-X, --directive <name>=<value>[,<name=value,...] Overrides a compiler directive
"""
# The following is broken http://trac.cython.org/cython_trac/ticket/379
# -r, --recursive Recursively find and compile dependencies (implies -t)
#The following experimental options are supported only on MacOSX:
# -C, --compile Compile generated .c file to .o file
# --link Link .o file to produce extension module (implies -C)
...@@ -98,8 +94,6 @@ def parse_command_line(args):
options.working_path = pop_arg()
elif option in ("-o", "--output-file"):
options.output_file = pop_arg()
elif option in ("-r", "--recursive"):
options.recursive = 1
elif option in ("-t", "--timestamps"):
options.timestamps = 1
elif option in ("-f", "--force"):
...@@ -137,8 +131,6 @@ def parse_command_line(args):
Options.warning_errors = True
elif option in ('-Wextra', '--warning-extra'):
options.compiler_directives.update(Options.extra_warnings)
elif option == "--disable-function-redefinition":
Options.disable_function_redefinition = True
elif option == "--old-style-globals":
Options.old_style_globals = True
elif option == "--directive" or option.startswith('-X'):
......
...@@ -30,6 +30,7 @@ cdef class FunctionState:
cdef public bint in_try_finally
cdef public object exc_vars
cdef public bint can_trace
cdef public list temps_allocated
cdef public dict temps_free
...@@ -39,6 +40,7 @@ cdef class FunctionState:
cdef public object closure_temps
cdef public bint should_declare_error_indicator
cdef public bint uses_error_indicator
@cython.locals(n=size_t)
cpdef new_label(self, name=*)
......
This diff is collapsed.
This diff is collapsed.
cimport cython
from Cython.Compiler.Visitor cimport CythonTransform, TreeVisitor
cdef class ControlBlock:
cdef public set children
cdef public set parents
...@@ -39,6 +41,10 @@ cdef class AssignmentList:
cdef public object mask
cdef public list stats
cdef class AssignmentCollector(TreeVisitor):
cdef list assignments
@cython.final
cdef class ControlFlow:
cdef public set blocks
cdef public set entries
...@@ -51,17 +57,20 @@ cdef class ControlFlow:
cdef public dict assmts
-cpdef newblock(self, parent=*)
-cpdef nextblock(self, parent=*)
+cpdef newblock(self, ControlBlock parent=*)
+cpdef nextblock(self, ControlBlock parent=*)
cpdef bint is_tracked(self, entry)
cpdef bint is_statically_assigned(self, entry)
cpdef mark_position(self, node)
cpdef mark_assignment(self, lhs, rhs, entry)
cpdef mark_argument(self, lhs, rhs, entry)
cpdef mark_deletion(self, node, entry)
cpdef mark_reference(self, node, entry)
@cython.locals(block=ControlBlock, parent=ControlBlock, unreachable=set)
cpdef normalize(self)
-@cython.locals(offset=object, assmts=AssignmentList,
+@cython.locals(bit=object, assmts=AssignmentList,
block=ControlBlock)
cpdef initialize(self)
...@@ -74,5 +83,22 @@ cdef class ControlFlow:
cdef class Uninitialized:
pass
-@cython.locals(dirty=bint, block=ControlBlock, parent=ControlBlock)
+cdef class Unknown:
pass
@cython.locals(dirty=bint, block=ControlBlock, parent=ControlBlock,
assmt=NameAssignment)
cdef check_definitions(ControlFlow flow, dict compiler_directives) cdef check_definitions(ControlFlow flow, dict compiler_directives)
@cython.final
cdef class ControlFlowAnalysis(CythonTransform):
cdef object gv_ctx
cdef set reductions
cdef list env_stack
cdef list stack
cdef object env
cdef ControlFlow flow
cdef bint in_inplace_assignment
cpdef mark_assignment(self, lhs, rhs=*)
cpdef mark_position(self, node)
@@ -673,22 +673,22 @@ class FusedCFuncDefNode(StatListNode):
             specialization_type.create_declaration_utility_code(env)
         if self.py_func:
-            self.__signatures__.analyse_expressions(env)
-            self.py_func.analyse_expressions(env)
-            self.resulting_fused_function.analyse_expressions(env)
-            self.fused_func_assignment.analyse_expressions(env)
+            self.__signatures__ = self.__signatures__.analyse_expressions(env)
+            self.py_func = self.py_func.analyse_expressions(env)
+            self.resulting_fused_function = self.resulting_fused_function.analyse_expressions(env)
+            self.fused_func_assignment = self.fused_func_assignment.analyse_expressions(env)
         self.defaults = defaults = []
         for arg in self.node.args:
             if arg.default:
-                arg.default.analyse_expressions(env)
+                arg.default = arg.default.analyse_expressions(env)
                 defaults.append(ProxyNode(arg.default))
             else:
                 defaults.append(None)
-        for stat in self.stats:
-            stat.analyse_expressions(env)
+        for i, stat in enumerate(self.stats):
+            stat = self.stats[i] = stat.analyse_expressions(env)
             if isinstance(stat, FuncDefNode):
                 for arg, default in zip(stat.args, defaults):
                     if default is not None:
@@ -697,7 +697,7 @@ class FusedCFuncDefNode(StatListNode):
         if self.py_func:
             args = [CloneNode(default) for default in defaults if default]
             self.defaults_tuple = TupleNode(self.pos, args=args)
-            self.defaults_tuple.analyse_types(env, skip_children=True)
+            self.defaults_tuple = self.defaults_tuple.analyse_types(env, skip_children=True)
             self.defaults_tuple = ProxyNode(self.defaults_tuple)
             self.code_object = ProxyNode(self.specialized_pycfuncs[0].code_object)
@@ -705,10 +705,11 @@ class FusedCFuncDefNode(StatListNode):
             fused_func.defaults_tuple = CloneNode(self.defaults_tuple)
             fused_func.code_object = CloneNode(self.code_object)
-        for pycfunc in self.specialized_pycfuncs:
+        for i, pycfunc in enumerate(self.specialized_pycfuncs):
             pycfunc.code_object = CloneNode(self.code_object)
-            pycfunc.analyse_types(env)
+            pycfunc = self.specialized_pycfuncs[i] = pycfunc.analyse_types(env)
             pycfunc.defaults_tuple = CloneNode(self.defaults_tuple)
+        return self
     def synthesize_defnodes(self):
         """
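The recurring pattern in this hunk, `stat = self.stats[i] = stat.analyse_expressions(env)`, follows the convention that `analyse_expressions()` returns the node to keep, which may be a replacement rather than the original. A hypothetical minimal model of that convention (these are illustrative classes, not Cython's real ones):

```python
class Node:
    def analyse_expressions(self, env):
        return self  # default: keep this node


class IntNode(Node):
    def __init__(self, value):
        self.value = value


class FoldedNode(Node):
    def __init__(self, value):
        self.value = value


class AddNode(Node):
    def __init__(self, left, right):
        self.left, self.right = left, right

    def analyse_expressions(self, env):
        # Analysis may replace this node entirely, e.g. by constant folding.
        if isinstance(self.left, IntNode) and isinstance(self.right, IntNode):
            return FoldedNode(self.left.value + self.right.value)
        return self


class StatList(Node):
    def __init__(self, stats):
        self.stats = stats

    def analyse_expressions(self, env):
        for i, stat in enumerate(self.stats):
            # Re-assigning is essential: the result may be a new object.
            self.stats[i] = stat.analyse_expressions(env)
        return self
```

Calling `stat.analyse_expressions(env)` without storing the result would silently discard the replacement node, which is exactly the bug this hunk fixes.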
@@ -66,6 +66,7 @@ def make_lexicon():
     two_hex = hexdigit + hexdigit
     four_hex = two_hex + two_hex
     escapeseq = Str("\\") + (two_oct | three_oct |
+                             Str('N{') + Rep(AnyBut('}')) + Str('}') |
                              Str('u') + four_hex | Str('x') + two_hex |
                              Str('U') + four_hex + four_hex | AnyChar)
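The added alternative teaches the lexicon to accept Python's `\N{...}` named-character escape sequences in string literals. In Python itself such escapes resolve through the Unicode character database:

```python
import unicodedata

# \N{...} names a character by its Unicode database name.
assert '\N{SNOWMAN}' == '\u2603'
assert unicodedata.name('\N{SNOWMAN}') == 'SNOWMAN'
```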
@@ -322,7 +322,7 @@ class Context(object):
                 "at top of source (cannot decode with encoding %r: %s)" % (encoding, msg))
         if Errors.num_errors > 0:
-            raise CompileError
+            raise CompileError()
         return tree
     def extract_module_name(self, path, options):
@@ -456,11 +456,8 @@ class CompilationOptions(object):
     capi_reexport_cincludes
                   boolean   Add cincluded headers to any auto-generated
                             header files.
-    recursive     boolean   Recursively find and compile dependencies
-    timestamps    boolean   Only compile changed source files. If None,
-                            defaults to true when recursive is true.
+    timestamps    boolean   Only compile changed source files.
     verbose       boolean   Always print source names being compiled
-    quiet         boolean   Don't print source names in recursive mode
     compiler_directives  dict  Overrides for pragma options (see Options.py)
     evaluate_tree_assertions  boolean  Test support: evaluate parse tree assertions
     language_level  integer  The Python language level: 2 or 3
@@ -562,11 +559,8 @@ def compile_multiple(sources, options):
     sources = [os.path.abspath(source) for source in sources]
     processed = set()
     results = CompilationResultSet()
-    recursive = options.recursive
     timestamps = options.timestamps
-    if timestamps is None:
-        timestamps = recursive
-    verbose = options.verbose or ((recursive or timestamps) and not options.quiet)
+    verbose = options.verbose
     context = None
     for source in sources:
         if source not in processed:
@@ -582,14 +576,6 @@ def compile_multiple(sources, options):
             # work properly yet.
             context = None
             processed.add(source)
-            if recursive:
-                for module_name in context.find_cimported_module_names(source):
-                    path = context.find_pyx_file(module_name, [source])
-                    if path:
-                        sources.append(path)
-                    else:
-                        sys.stderr.write(
-                            "Cannot find .pyx file for cimported module '%s'\n" % module_name)
     return results
 def compile(source, options = None, full_module_name = None, **kwds):
@@ -603,8 +589,7 @@ def compile(source, options = None, full_module_name = None, **kwds):
     CompilationResultSet is returned.
     """
     options = CompilationOptions(defaults = options, **kwds)
-    if isinstance(source, basestring) and not options.timestamps \
-            and not options.recursive:
+    if isinstance(source, basestring) and not options.timestamps:
         return compile_single(source, options, full_module_name)
     else:
         return compile_multiple(source, options)
@@ -659,7 +644,6 @@ default_options = dict(
     generate_pxi = 0,
     capi_reexport_cincludes = 0,
     working_path = "",
-    recursive = 0,
     timestamps = None,
     verbose = 0,
     quiet = 0,
@@ -671,4 +655,5 @@ default_options = dict(
     language_level = 2,
     gdb_debug = False,
     compile_time_env = None,
+    common_utility_include_dir = None,
 )
@@ -180,6 +180,9 @@ def valid_memslice_dtype(dtype, i=0):
     if dtype.is_complex and dtype.real_type.is_int:
         return False
+    if dtype is PyrexTypes.c_bint_type:
+        return False
     if dtype.is_struct and dtype.kind == 'struct':
         for member in dtype.scope.var_entries:
             if not valid_memslice_dtype(member.type):
@@ -273,7 +276,7 @@ class MemoryViewSliceBufferEntry(Buffer.BufferEntry):
         return bufp
     def generate_buffer_slice_code(self, code, indices, dst, have_gil,
-                                   have_slices):
+                                   have_slices, directives):
         """
         Slice a memoryviewslice.
@@ -359,6 +362,8 @@ class MemoryViewSliceBufferEntry(Buffer.BufferEntry):
                       "All preceding dimensions must be "
                       "indexed and not sliced")
+        wraparound = int(directives['wraparound'])
+        boundscheck = int(directives['boundscheck'])
         d = locals()
         code.put(load_slice_util("SliceIndex", d))
@@ -914,8 +919,7 @@ memviewslice_init_code = load_memview_c_utility(
     context=dict(context, BUF_MAX_NDIMS=Options.buffer_max_dims),
     requires=[memviewslice_declare_code,
               Buffer.acquire_utility_code,
-              atomic_utility,
-              Buffer.typeinfo_compare_code],
+              atomic_utility],
 )
 memviewslice_index_helpers = load_memview_c_utility("MemviewSliceIndex")
@@ -103,6 +103,8 @@ global_code_object_cache_find = pyrex_prefix + 'find_code_object'
 global_code_object_cache_insert = pyrex_prefix + 'insert_code_object'
 genexpr_id_ref = 'genexpr'
+freelist_name = 'freelist'
+freecount_name = 'freecount'
 line_c_macro = "__LINE__"
@@ -131,7 +133,7 @@ h_guard_prefix = "__PYX_HAVE__"
 api_guard_prefix = "__PYX_HAVE_API__"
 api_func_guard = "__PYX_HAVE_API_FUNC_"
-PYX_NAN = "__PYX_NAN"
+PYX_NAN = "__PYX_NAN()"
 def py_version_hex(major, minor=0, micro=0, release_level=0, release_serial=0):
     return (major << 24) | (minor << 16) | (micro << 8) | (release_level << 4) | (release_serial)
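`py_version_hex()` packs a version into the same layout as CPython's `PY_VERSION_HEX`: one byte each for major, minor and micro, then a nibble each for release level and serial. Reproducing the function standalone makes the packing easy to check:

```python
def py_version_hex(major, minor=0, micro=0, release_level=0, release_serial=0):
    # One byte each for major/minor/micro, one nibble each for level/serial.
    return (major << 24) | (minor << 16) | (micro << 8) | (release_level << 4) | release_serial

# 2.7.3 final (release level 0xF, serial 0) packs to 0x020703F0,
# the value CPython reports as PY_VERSION_HEX for that release.
assert py_version_hex(2, 7, 3, 0xF) == 0x020703F0
```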
@@ -34,6 +34,12 @@ warning_errors = False
 # you should disable this option and also 'cache_builtins'.
 error_on_unknown_names = True
+# Make uninitialized local variable reference a compile time error.
+# Python raises UnboundLocalError at runtime, whereas this option makes
+# them a compile time error. Note that this option affects only variables
+# of "python object" type.
+error_on_uninitialized = True
 # This will convert statements of the form "for i in range(...)"
 # to "for i from ..." when i is a cdef'd integer type, and the direction
 # (i.e. sign of step) can be determined.
@@ -55,10 +61,6 @@ lookup_module_cpdef = False
 # executes the body of this module.
 embed = None
-# Disables function redefinition, allowing all functions to be declared at
-# module creation time. For legacy code only, needed for some circular imports.
-disable_function_redefinition = False
 # In previous iterations of Cython, globals() gave the first non-Cython module
 # globals in the call stack. Sage relies on this behavior for variable injection.
 old_style_globals = False
@@ -71,6 +73,9 @@ cimport_from_pyx = False
 # slices are passed by value and involve a lot of copying
 buffer_max_dims = 8
+# Number of function closure instances to keep in a freelist (0: no freelists)
+closure_freelist_size = 8
 # Declare compiler directives
 directive_defaults = {
     'boundscheck' : True,
@@ -82,6 +87,7 @@ directive_defaults = {
     'cdivision': False, # was True before 0.12
     'cdivision_warnings': False,
     'overflowcheck': False,
+    'overflowcheck.fold': True,
     'always_allow_keywords': False,
     'allow_none_for_extension_args': True,
     'wraparound' : True,
@@ -90,6 +96,7 @@ directive_defaults = {
     'final' : False,
     'internal' : False,
     'profile': False,
+    'linetrace': False,
     'infer_types': None,
     'infer_types.verbose': False,
     'autotestdict': True,
@@ -98,6 +105,9 @@ directive_defaults = {
     'language_level': 2,
     'fast_getattr': False, # Undocumented until we come up with a better way to handle this everywhere.
     'py2_import': False, # For backward compatibility of Cython's source code in Py3 source mode
+    'c_string_type': 'bytes',
+    'c_string_encoding': '',
+    'type_version_tag': True,  # enables Py_TPFLAGS_HAVE_VERSION_TAG on extension types
     # set __file__ and/or __path__ to known source/target path at import time (instead of not having them available)
     'set_initial_path' : None,  # SOURCEFILE or "/full/path/to/module"
@@ -111,7 +121,7 @@ directive_defaults = {
     'warn.unused_result': False,
     # optimizations
-    'optimize.inline_defnode_calls': False,
+    'optimize.inline_defnode_calls': True,
     # remove unreachable code
     'remove_unreachable': True,
@@ -126,7 +136,8 @@ directive_defaults = {
     # experimental, subject to change
     'binding': None,
-    'experimental_cpp_class_def': False
+    'experimental_cpp_class_def': False,
+    'freelist': 0,
 }
 # Extra warning directives
@@ -136,6 +147,50 @@ extra_warnings = {
     'warn.unused': True,
 }
+def one_of(*args):
+    def validate(name, value):
+        if value not in args:
+            raise ValueError("%s directive must be one of %s, got '%s'" % (
+                name, args, value))
+        else:
+            return value
+    return validate
+
+def normalise_encoding_name(option_name, encoding):
+    """
+    >>> normalise_encoding_name('c_string_encoding', 'ascii')
+    'ascii'
+    >>> normalise_encoding_name('c_string_encoding', 'AsCIi')
+    'ascii'
+    >>> normalise_encoding_name('c_string_encoding', 'us-ascii')
+    'ascii'
+    >>> normalise_encoding_name('c_string_encoding', 'utF8')
+    'utf8'
+    >>> normalise_encoding_name('c_string_encoding', 'utF-8')
+    'utf8'
+    >>> normalise_encoding_name('c_string_encoding', 'deFAuLT')
+    'default'
+    >>> normalise_encoding_name('c_string_encoding', 'default')
+    'default'
+    >>> normalise_encoding_name('c_string_encoding', 'SeriousLyNoSuch--Encoding')
+    'SeriousLyNoSuch--Encoding'
+    """
+    if not encoding:
+        return ''
+    if encoding.lower() in ('default', 'ascii', 'utf8'):
+        return encoding.lower()
+    import codecs
+    try:
+        decoder = codecs.getdecoder(encoding)
+    except LookupError:
+        return encoding  # may exist at runtime ...
+    for name in ('ascii', 'utf8'):
+        if codecs.getdecoder(name) == decoder:
+            return name
+    return encoding
 # Override types possibilities above, if needed
 directive_types = {
     'final' : bool,  # final cdef classes and methods
@@ -147,7 +202,10 @@ directive_types = {
     'cclass' : None,
     'returns' : type,
     'set_initial_path': str,
-}
+    'freelist': int,
+    'c_string_type': one_of('bytes', 'str', 'unicode'),
+    'c_string_encoding': normalise_encoding_name,
+}
 for key, val in directive_defaults.items():
     if key not in directive_types:
@@ -163,6 +221,11 @@ directive_scopes = { # defaults to available everywhere
     'set_initial_path' : ('module',),
     'test_assert_path_exists' : ('function', 'class', 'cclass'),
     'test_fail_if_path_exists' : ('function', 'class', 'cclass'),
+    'freelist': ('cclass',),
+    # Avoid scope-specific to/from_py_functions for c_string.
+    'c_string_type': ('module',),
+    'c_string_encoding': ('module',),
+    'type_version_tag': ('module', 'cclass'),
 }
 def parse_directive_value(name, value, relaxed_bool=False):
@@ -179,6 +242,17 @@ def parse_directive_value(name, value, relaxed_bool=False):
     ...
     ValueError: boundscheck directive must be set to True or False, got 'true'
+    >>> parse_directive_value('c_string_encoding', 'us-ascii')
+    'ascii'
+    >>> parse_directive_value('c_string_type', 'str')
+    'str'
+    >>> parse_directive_value('c_string_type', 'bytes')
+    'bytes'
+    >>> parse_directive_value('c_string_type', 'unicode')
+    'unicode'
+    >>> parse_directive_value('c_string_type', 'unnicode')
+    Traceback (most recent call last):
+    ValueError: c_string_type directive must be one of ('bytes', 'str', 'unicode'), got 'unnicode'
     """
     type = directive_types.get(name)
     if not type: return None
@@ -201,6 +275,8 @@ def parse_directive_value(name, value, relaxed_bool=False):
             name, orig_value))
     elif type is str:
         return str(value)
+    elif callable(type):
+        return type(name, value)
     else:
         assert False
@@ -34,10 +34,10 @@ cdef map_starred_assignment(list lhs_targets, list starred_assignments, list lhs
 #class WithTransform(CythonTransform, SkipDeclarations):
 #class DecoratorTransform(CythonTransform, SkipDeclarations):
-#class AnalyseDeclarationsTransform(CythonTransform):
+#class AnalyseDeclarationsTransform(EnvTransform):
 cdef class AnalyseExpressionsTransform(CythonTransform):
-    cdef list env_stack
+    pass
 cdef class ExpandInplaceOperators(EnvTransform):
     pass
@@ -59,6 +59,7 @@ cdef p_atom(PyrexScanner s)
 @cython.locals(value=unicode)
 cdef p_int_literal(PyrexScanner s)
 cdef p_name(PyrexScanner s, name)
+cdef wrap_compile_time_constant(pos, value)
 cdef p_cat_string_literal(PyrexScanner s)
 cdef p_opt_string_literal(PyrexScanner s, required_type=*)
 cdef bint check_for_non_ascii_characters(unicode string)
@@ -144,6 +144,7 @@ def create_pipeline(context, mode, exclude_classes=()):
     from Optimize import InlineDefNodeCalls
     from Optimize import ConstantFolding, FinalOptimizePhase
     from Optimize import DropRefcountingTransform
+    from Optimize import ConsolidateOverflowCheck
     from Buffer import IntroduceBufferAuxiliaryVars
     from ModuleNode import check_c_declarations, check_c_declarations_pxd
@@ -196,7 +197,8 @@ def create_pipeline(context, mode, exclude_classes=()):
         CreateClosureClasses(context),  ## After all lookups and type inference
         ExpandInplaceOperators(context),
         OptimizeBuiltinCalls(context),  ## Necessary?
-        IterationTransform(),
+        ConsolidateOverflowCheck(context),
+        IterationTransform(context),
         SwitchTransform(),
         DropRefcountingTransform(),
         FinalOptimizePhase(context),
@@ -273,7 +275,8 @@ def create_pyx_as_pxd_pipeline(context, result):
             break
     def fake_pxd(root):
         for entry in root.scope.entries.values():
-            entry.defined_in_pxd = 1
+            if not entry.in_cinclude:
+                entry.defined_in_pxd = 1
         return StatListNode(root.pos, stats=[]), root.scope
     pipeline.append(fake_pxd)
     return pipeline
@@ -96,11 +96,15 @@ def initial_compile_time_env():
         import __builtin__ as builtins
     except ImportError:
         import builtins
     names = ('False', 'True',
-             'abs', 'bool', 'chr', 'cmp', 'complex', 'dict', 'divmod', 'enumerate',
-             'float', 'hash', 'hex', 'int', 'len', 'list', 'long', 'map', 'max', 'min',
-             'oct', 'ord', 'pow', 'range', 'reduce', 'repr', 'round', 'slice', 'str',
-             'sum', 'tuple', 'xrange', 'zip')
+             'abs', 'all', 'any', 'ascii', 'bin', 'bool', 'bytearray', 'bytes',
+             'chr', 'cmp', 'complex', 'dict', 'divmod', 'enumerate', 'filter',
+             'float', 'format', 'frozenset', 'hash', 'hex', 'int', 'len',
+             'list', 'long', 'map', 'max', 'min', 'oct', 'ord', 'pow', 'range',
+             'repr', 'reversed', 'round', 'set', 'slice', 'sorted', 'str',
+             'sum', 'tuple', 'xrange', 'zip')
     for name in names:
         try:
             benv.declare(name, getattr(builtins, name))
@@ -126,9 +126,28 @@ class EncodedString(_unicode):
         assert self.encoding is None
         return self.encode("UTF-8")
+    @property
     def is_unicode(self):
         return self.encoding is None
-    is_unicode = property(is_unicode)
+
+    def contains_surrogates(self):
+        return string_contains_surrogates(self)
+
+def string_contains_surrogates(ustring):
+    """
+    Check if the unicode string contains surrogate code points
+    on a CPython platform with wide (UCS-4) or narrow (UTF-16)
+    Unicode, i.e. characters that would be spelled as two
+    separate code units on a narrow platform.
+    """
+    for c in map(ord, ustring):
+        if c > 65535:  # can only happen on wide platforms
+            return True
+        if 0xD800 <= c <= 0xDFFF:
+            return True
+    return False
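The logic of the new `string_contains_surrogates()` can be exercised standalone; this sketch mirrors it (0xD800-0xDFFF is the UTF-16 surrogate range, and anything above 0xFFFF needs two code units on a narrow build):

```python
def contains_surrogates(ustring):
    # True for characters outside the BMP (spelled as two code units on a
    # narrow build) and for explicit lone surrogate code points.
    for c in map(ord, ustring):
        if c > 0xFFFF or 0xD800 <= c <= 0xDFFF:
            return True
    return False
```

On Python 3, an astral character like `'\U0001F600'` and a lone `'\ud800'` both report True, while plain BMP text reports False.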
 class BytesLiteral(_bytes):
     # bytes subclass that is compatible with EncodedString
@@ -155,6 +174,7 @@ class BytesLiteral(_bytes):
     is_unicode = False
+
 char_from_escape_sequence = {
     r'\a' : u'\a',
     r'\b' : u'\b',
@@ -165,6 +185,9 @@ char_from_escape_sequence = {
     r'\v' : u'\v',
 }.get
+
+_c_special = ('\\', '??', '"') + tuple(map(chr, range(32)))
+
 def _to_escape_sequence(s):
     if s in '\n\r\t':
         return repr(s)[1:-1]
@@ -176,19 +199,23 @@ def _to_escape_sequence(s):
     # within a character sequence, oct passes much better than hex
     return ''.join(['\\%03o' % ord(c) for c in s])
-_c_special = ('\\', '??', '"') + tuple(map(chr, range(32)))
-_c_special_replacements = [(orig.encode('ASCII'),
-                            _to_escape_sequence(orig).encode('ASCII'))
-                           for orig in _c_special ]
-def _build_specials_test():
+def _build_specials_replacer():
     subexps = []
+    replacements = {}
     for special in _c_special:
         regexp = ''.join(['[%s]' % c.replace('\\', '\\\\') for c in special])
         subexps.append(regexp)
-    return re.compile('|'.join(subexps).encode('ASCII')).search
+        replacements[special.encode('ASCII')] = _to_escape_sequence(special).encode('ASCII')
+    sub = re.compile(('(%s)' % '|'.join(subexps)).encode('ASCII')).sub
+    def replace_specials(m):
+        return replacements[m.group(1)]
+    def replace(s):
+        return sub(replace_specials, s)
+    return replace
+
+_replace_specials = _build_specials_replacer()
-_has_specials = _build_specials_test()
 def escape_char(c):
     if IS_PYTHON3:
@@ -210,10 +237,7 @@ def escape_byte_string(s):
     encoded as ISO-8859-1, will result in the correct byte sequence
     being written.
     """
-    if _has_specials(s):
-        for special, replacement in _c_special_replacements:
-            if special in s:
-                s = s.replace(special, replacement)
+    s = _replace_specials(s)
     try:
         return s.decode("ASCII")  # trial decoding: plain ASCII => done
     except UnicodeDecodeError:
@@ -258,3 +282,30 @@ def split_string_literal(s, limit=2000):
         chunks.append(s[start:end])
         start = end
     return '""'.join(chunks)
+def encode_pyunicode_string(s):
+    """Create Py_UNICODE[] representation of a given unicode string.
+    """
+    s = map(ord, s) + [0]
+
+    if sys.maxunicode >= 0x10000:  # Wide build or Py3.3
+        utf16, utf32 = [], s
+        for code_point in s:
+            if code_point >= 0x10000:  # outside of BMP
+                high, low = divmod(code_point - 0x10000, 1024)
+                utf16.append(high + 0xD800)
+                utf16.append(low + 0xDC00)
+            else:
+                utf16.append(code_point)
+    else:
+        utf16, utf32 = s, []
+        for code_unit in s:
+            if 0xDC00 <= code_unit <= 0xDFFF and utf32 and 0xD800 <= utf32[-1] <= 0xDBFF:
+                high, low = utf32[-1], code_unit
+                utf32[-1] = ((high & 0x3FF) << 10) + (low & 0x3FF) + 0x10000
+            else:
+                utf32.append(code_unit)
+
+    if utf16 == utf32:
+        utf16 = []
+    return ",".join(map(unicode, utf16)), ",".join(map(unicode, utf32))
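The wide-build branch above performs the standard UTF-32-to-UTF-16 conversion: code points beyond the BMP are split into a high/low surrogate pair. A standalone sketch of just that conversion:

```python
def to_utf16_units(code_points):
    # Code points above U+FFFF are encoded as a high/low surrogate pair;
    # everything in the BMP passes through as a single code unit.
    units = []
    for cp in code_points:
        if cp >= 0x10000:
            high, low = divmod(cp - 0x10000, 0x400)
            units.append(0xD800 + high)
            units.append(0xDC00 + low)
        else:
            units.append(cp)
    return units
```

For example U+1F600 becomes the pair (0xD83D, 0xDE00), matching what Python's UTF-16 codec produces for that character.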
from Cython.Compiler.ModuleNode import ModuleNode
from Cython.Compiler.Symtab import ModuleScope
from Cython.TestUtils import TransformTest
from Cython.Compiler.Visitor import MethodDispatcherTransform
from Cython.Compiler.ParseTreeTransforms import (
    NormalizeTree, AnalyseDeclarationsTransform,
    AnalyseExpressionsTransform, InterpretCompilerDirectives)


class TestMethodDispatcherTransform(TransformTest):
    _tree = None

    def _build_tree(self):
        if self._tree is None:
            context = None

            def fake_module(node):
                scope = ModuleScope('test', None, None)
                return ModuleNode(node.pos, doc=None, body=node,
                                  scope=scope, full_module_name='test',
                                  directive_comments={})
            pipeline = [
                fake_module,
                NormalizeTree(context),
                InterpretCompilerDirectives(context, {}),
                AnalyseDeclarationsTransform(context),
                AnalyseExpressionsTransform(context),
            ]
            self._tree = self.run_pipeline(pipeline, u"""
                cdef bytes s = b'asdfg'
                cdef dict d = {1:2}
                x = s * 3
                d.get('test')
            """)
        return self._tree

    def test_builtin_method(self):
        calls = [0]

        class Test(MethodDispatcherTransform):
            def _handle_simple_method_dict_get(self, node, func, args, unbound):
                calls[0] += 1
                return node

        tree = self._build_tree()
        Test(None)(tree)
        self.assertEqual(1, calls[0])

    def test_binop_method(self):
        calls = {'bytes': 0, 'object': 0}

        class Test(MethodDispatcherTransform):
            def _handle_simple_method_bytes___mul__(self, node, func, args, unbound):
                calls['bytes'] += 1
                return node

            def _handle_simple_method_object___mul__(self, node, func, args, unbound):
                calls['object'] += 1
                return node

        tree = self._build_tree()
        Test(None)(tree)
        self.assertEqual(1, calls['bytes'])
        self.assertEqual(0, calls['object'])
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755