Commit 53bbcee6 authored by Jerome Kieffer

Merge remote branch 'upstream/master'

parents 4c3dd932 16c9b6bc
@@ -7,6 +7,7 @@ __pycache__
Cython/Compiler/*.c
Cython/Plex/*.c
Cython/Runtime/refnanny.c
Cython/Tempita/*.c
Tools/*.elc
...
@@ -5,7 +5,8 @@ python:
 - 2.7
 - 3.2
 - 3.3
# - pypy
branches:
  only:
...
@@ -8,6 +8,43 @@ Cython Changelog
Features added
--------------
* Using ``cdef basestring stringvar`` and function arguments typed as
``basestring`` is now meaningful and allows assigning exactly
``str`` and ``unicode`` objects, but no subtypes of these types.
* Support for the ``__debug__`` builtin.
* Assertions in Cython compiled modules are disabled if the running
Python interpreter was started with the "-O" option.
* Some types that Cython provides internally, such as functions and
generators, are now shared across modules if more than one Cython
implemented module is imported.
* The type inference algorithm now works at a finer granularity by taking
  the results of the control flow analysis into account.
* A new script in ``bin/cythonize`` provides a command line frontend
to the cythonize() compilation function (including distutils build).
* The new extension type decorator ``@cython.no_gc_clear`` prevents
objects from being cleared during cyclic garbage collection, thus
making sure that object attributes are kept alive until deallocation.
* During cyclic garbage collection, attributes of extension types that
cannot create reference cycles due to their type (e.g. strings) are
no longer considered for traversal or clearing. This can reduce the
processing overhead when searching for or cleaning up reference cycles.
* Package compilation (i.e. ``__init__.py`` files) now works, starting
with Python 3.3.
* The cython-mode.el script for Emacs was updated. Patch by Ivan Andrus.
* An option ``common_utility_include_dir`` was added to ``cythonize()`` to save
  oft-used utility code once in a separate directory rather than as
  part of each generated file.
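The ``__debug__`` and assertion behaviour described above mirrors CPython's own semantics, which can be observed without Cython at all (a minimal sketch; both subprocesses run throwaway one-liners):

```python
import subprocess, sys

# Under normal startup, __debug__ is True and assert statements execute;
# with "-O", __debug__ is False and asserts are compiled away entirely.
code = "print(__debug__); assert False, 'boom'"

normal = subprocess.run([sys.executable, "-c", code],
                        capture_output=True, text=True)
optimized = subprocess.run([sys.executable, "-O", "-c", code],
                           capture_output=True, text=True)

print(normal.stdout.strip(), normal.returncode)     # True, non-zero (assert fired)
print(optimized.stdout.strip(), optimized.returncode)  # False, 0 (assert skipped)
```

Cython-compiled modules now follow the same rule: asserts vanish when the hosting interpreter was started with ``-O``.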
Bugs fixed
----------
@@ -20,6 +57,33 @@ Bugs fixed
Other changes
-------------
* In Py3.4+, the Cython generator type uses ``tp_finalize()`` for safer
  cleanup instead of ``tp_del()``.
0.19.2 (??)
===================
Features added
--------------
Bugs fixed
----------
* Calling the unbound method dict.keys/values/items() in dict subtypes could
  call the bound object method instead of the unbound supertype method.
* "yield" wasn't supported in "return" value expressions.
* Using the "bint" type in memory views led to unexpected results.
It is now an error.
* Assignments to global/closure variables could catch them in an illegal state
while deallocating the old value.
Other changes
-------------
0.19.1 (2013-05-11)
===================
@@ -312,13 +376,13 @@ Other changes
Features added
--------------
* Alpha quality support for compiling and running Cython generated extension modules in PyPy (through cpyext). Note that this requires at least PyPy 1.9 and in many cases also adaptations in user code, especially to avoid borrowed references when no owned reference is being held directly in C space (a reference in a Python list or dict is not enough, for example). See the documentation on porting Cython code to PyPy.
* "yield from" is supported (PEP 380) and a couple of minor problems with generators were fixed.
* C++ STL container classes automatically coerce from and to the equivalent Python container types on typed assignments and casts. Note that the data in the containers is copied during this conversion.
* C++ iterators can now be iterated over using "for x in cpp_container" whenever cpp_container has begin() and end() methods returning objects satisfying the iterator pattern (that is, it can be incremented, dereferenced, and compared (for non-equality)).
* cdef classes can now have C++ class members (provided a zero-argument constructor exists).
@@ -343,7 +407,7 @@ Bugs fixed
* Old-style Py2 imports did not work reliably in Python 3.x and were broken in Python 3.3. Regardless of this fix, it's generally best to be explicit about relative and global imports in Cython code because old-style imports have a higher overhead. To this end, "from __future__ import absolute_import" is supported in Python/Cython 2.x code now (previous versions of Cython already used it when compiling Python 3 code).
* Stricter constraints on the "inline" and "final" modifiers. If your code does not compile due to this change, chances are these modifiers were previously being ignored by the compiler and can be removed without any performance regression.
* Exceptions are always instantiated while raising them (as in Python), instead of risking instantiation in potentially unsafe situations when they need to be handled or otherwise processed.
...
#!/usr/bin/env python
import os
import shutil
import tempfile
from distutils.core import setup
from Cython.Build.Dependencies import cythonize, extended_iglob
from Cython.Utils import is_package_dir
from Cython.Compiler import Options
try:
import multiprocessing
parallel_compiles = int(multiprocessing.cpu_count() * 1.5)
except ImportError:
multiprocessing = None
parallel_compiles = 0
class _FakePool(object):
def map_async(self, func, args):
from itertools import imap
for _ in imap(func, args):
pass
def close(self): pass
def terminate(self): pass
def join(self): pass
def parse_directives(option, name, value, parser):
dest = option.dest
old_directives = dict(getattr(parser.values, dest,
Options.directive_defaults))
directives = Options.parse_directive_list(
value, relaxed_bool=True, current_settings=old_directives)
setattr(parser.values, dest, directives)
def parse_options(option, name, value, parser):
dest = option.dest
options = dict(getattr(parser.values, dest, {}))
for opt in value.split(','):
if '=' in opt:
n, v = opt.split('=', 1)
v = v.lower() not in ('false', 'f', '0', 'no')
else:
n, v = opt, True
options[n] = v
setattr(parser.values, dest, options)
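The callback above turns comma-separated ``NAME=VALUE`` pairs into a dict, treating any value other than a handful of "false" spellings as true. A standalone sketch of that parsing rule (the function name here is illustrative, not part of the script):

```python
def parse_option_string(value, options=None):
    """Parse 'a=true,b=0,c' into {'a': True, 'b': False, 'c': True}."""
    options = dict(options or {})
    for opt in value.split(','):
        if '=' in opt:
            n, v = opt.split('=', 1)
            # same rule as the script: only these spellings count as False
            v = v.lower() not in ('false', 'f', '0', 'no')
        else:
            n, v = opt, True   # a bare name means "enabled"
        options[n] = v
    return options

print(parse_option_string('annotate=yes,cache=0,inplace'))
# {'annotate': True, 'cache': False, 'inplace': True}
```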
def find_package_base(path):
base_dir, package_path = os.path.split(path)
while os.path.isfile(os.path.join(base_dir, '__init__.py')):
base_dir, parent = os.path.split(base_dir)
package_path = '%s/%s' % (parent, package_path)
return base_dir, package_path
def cython_compile(path_pattern, options):
pool = None
paths = map(os.path.abspath, extended_iglob(path_pattern))
try:
for path in paths:
if options.build_inplace:
base_dir = path
while not os.path.isdir(base_dir) or is_package_dir(base_dir):
base_dir = os.path.dirname(base_dir)
else:
base_dir = None
if os.path.isdir(path):
# recursively compiling a package
paths = [os.path.join(path, '**', '*.%s' % ext)
for ext in ('py', 'pyx')]
else:
# assume it's a file(-like thing)
paths = [path]
ext_modules = cythonize(
paths,
nthreads=options.parallel,
exclude_failures=options.keep_going,
exclude=options.excludes,
compiler_directives=options.directives,
force=options.force,
quiet=options.quiet,
**options.options)
if ext_modules and options.build:
if len(ext_modules) > 1 and options.parallel > 1:
if pool is None:
try:
pool = multiprocessing.Pool(options.parallel)
except OSError:
pool = _FakePool()
pool.map_async(run_distutils, [
(base_dir, [ext]) for ext in ext_modules])
else:
run_distutils((base_dir, ext_modules))
except:
if pool is not None:
pool.terminate()
raise
else:
if pool is not None:
pool.close()
pool.join()
def run_distutils(args):
base_dir, ext_modules = args
script_args = ['build_ext', '-i']
cwd = os.getcwd()
temp_dir = None
try:
if base_dir:
os.chdir(base_dir)
temp_dir = tempfile.mkdtemp(dir=base_dir)
script_args.extend(['--build-temp', temp_dir])
setup(
script_name='setup.py',
script_args=script_args,
ext_modules=ext_modules,
)
finally:
if base_dir:
os.chdir(cwd)
if temp_dir and os.path.isdir(temp_dir):
shutil.rmtree(temp_dir)
def parse_args(args):
from optparse import OptionParser
parser = OptionParser(usage='%prog [options] [sources and packages]+')
parser.add_option('-X', '--directive', metavar='NAME=VALUE,...', dest='directives',
type=str, action='callback', callback=parse_directives, default={},
help='set a compiler directive')
parser.add_option('-s', '--option', metavar='NAME=VALUE', dest='options',
type=str, action='callback', callback=parse_options, default={},
help='set a cythonize option')
parser.add_option('-3', dest='python3_mode', action='store_true',
help='use Python 3 syntax mode by default')
parser.add_option('-x', '--exclude', metavar='PATTERN', dest='excludes',
action='append', default=[],
help='exclude certain file patterns from the compilation')
parser.add_option('-b', '--build', dest='build', action='store_true',
help='build extension modules using distutils')
parser.add_option('-i', '--inplace', dest='build_inplace', action='store_true',
help='build extension modules in place using distutils (implies -b)')
parser.add_option('-j', '--parallel', dest='parallel', metavar='N',
type=int, default=parallel_compiles,
                      help=('run builds in N parallel jobs (default: %d)' %
                            (parallel_compiles or 1)))
parser.add_option('-f', '--force', dest='force', action='store_true',
help='force recompilation')
parser.add_option('-q', '--quiet', dest='quiet', action='store_true',
help='be less verbose during compilation')
parser.add_option('--lenient', dest='lenient', action='store_true',
help='increase Python compatibility by ignoring some compile time errors')
parser.add_option('-k', '--keep-going', dest='keep_going', action='store_true',
help='compile as much as possible, ignore compilation failures')
options, args = parser.parse_args(args)
if not args:
parser.error("no source files provided")
if options.build_inplace:
options.build = True
if multiprocessing is None:
options.parallel = 0
if options.python3_mode:
options.options['language_level'] = 3
return options, args
def main(args=None):
options, paths = parse_args(args)
if options.lenient:
# increase Python compatibility by ignoring compile time errors
Options.error_on_unknown_names = False
Options.error_on_uninitialized = False
for path in paths:
cython_compile(path, options)
if __name__ == '__main__':
main()
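The script leans on optparse callback actions to accumulate repeated ``-X name=value`` directives into a single dict on ``parser.values``. A minimal self-contained version of that pattern (the directive names below are made up for illustration):

```python
from optparse import OptionParser

def collect_pairs(option, opt_str, value, parser):
    # accumulate NAME=VALUE pairs from repeated or comma-separated uses
    dest = option.dest
    current = dict(getattr(parser.values, dest, None) or {})
    for item in value.split(','):
        name, _, val = item.partition('=')
        current[name] = val or True
    setattr(parser.values, dest, current)

parser = OptionParser()
parser.add_option('-X', '--directive', dest='directives', type=str,
                  action='callback', callback=collect_pairs, default={})
opts, args = parser.parse_args(['-X', 'boundscheck=False,profile=True'])
print(opts.directives)  # {'boundscheck': 'False', 'profile': 'True'}
```

The real script additionally routes the parsed pairs through ``Options.parse_directive_list`` so values are validated against Cython's known directives.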
import cython
from Cython import __version__

import re, os, sys, time
try:
    from glob import iglob
except ImportError:
    # Py2.4
    from glob import glob as iglob

try:
    import gzip
    gzip_open = gzip.open
@@ -18,6 +23,11 @@ try:
except ImportError:
    import md5 as hashlib

try:
    from io import open as io_open
except ImportError:
    from codecs import open as io_open

try:
    from os.path import relpath as _relpath
except ImportError:
@@ -60,7 +70,7 @@ def extended_iglob(pattern):
    seen = set()
    first, rest = pattern.split('**/', 1)
    if first:
        first = iglob(first+'/')
    else:
        first = ['']
    for root in first:
@@ -73,7 +83,7 @@ def extended_iglob(pattern):
            seen.add(path)
            yield path
    else:
        for path in iglob(pattern):
            yield path

@cached_function
@@ -331,6 +341,20 @@ def resolve_depend(depend, include_dirs):
        return os.path.normpath(path)
    return None
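``extended_iglob`` splits the pattern on the first ``**/`` and recurses over matching roots; modern Python's stdlib ``glob`` offers the same behaviour directly. A rough sketch of the effect (the directory tree is fabricated in a temp dir):

```python
import glob, os, tempfile

# build a small tree: pkg/a.py and pkg/sub/b.py
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'pkg', 'sub'))
for rel in ('pkg/a.py', 'pkg/sub/b.py'):
    open(os.path.join(root, *rel.split('/')), 'w').close()

# '**' matches any number of directory levels when recursive=True
found = sorted(os.path.relpath(p, root).replace(os.sep, '/')
               for p in glob.iglob(os.path.join(root, '**', '*.py'),
                                   recursive=True))
print(found)  # ['pkg/a.py', 'pkg/sub/b.py']
```

The hand-rolled version in this file exists because the code still had to run on Python versions whose ``glob`` lacked ``**`` support.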
@cached_function
def package(filename):
dir = os.path.dirname(os.path.abspath(str(filename)))
if dir != filename and path_exists(join_path(dir, '__init__.py')):
return package(dir) + (os.path.basename(dir),)
else:
return ()
@cached_function
def fully_qualified_name(filename):
module = os.path.splitext(os.path.basename(filename))[0]
return '.'.join(package(filename) + (module,))
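The two new module-level helpers derive a dotted module name by climbing up through directories that contain ``__init__.py``. A self-contained sketch of the same walk (function names here are illustrative, and the tree is created in a temp dir):

```python
import os, tempfile

def package_parts(filename):
    # climb while the parent directory is a package (has __init__.py)
    d = os.path.dirname(os.path.abspath(filename))
    if d != filename and os.path.exists(os.path.join(d, '__init__.py')):
        return package_parts(d) + (os.path.basename(d),)
    return ()

def dotted_name(filename):
    module = os.path.splitext(os.path.basename(filename))[0]
    return '.'.join(package_parts(filename) + (module,))

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'top', 'inner'))
for rel in ('top/__init__.py', 'top/inner/__init__.py', 'top/inner/mod.pyx'):
    open(os.path.join(root, *rel.split('/')), 'w').close()

print(dotted_name(os.path.join(root, 'top', 'inner', 'mod.pyx')))  # top.inner.mod
```

Hoisting these out of ``DependencyTree`` (as the diff below does) lets both the dependency tracker and the XML result recorder share one cached implementation.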
@cached_function
def parse_dependencies(source_filename):
    # Actual parsing is way too slow, so we use regular expressions.
@@ -406,18 +430,11 @@ class DependencyTree(object):
    def cimports(self, filename):
        return self.cimports_and_externs(filename)[0]

    def package(self, filename):
        return package(filename)

    def fully_qualified_name(self, filename):
        return fully_qualified_name(filename)

    @cached_method
    def find_pxd(self, module, filename=None):
@@ -560,7 +577,7 @@ def create_extension_list(patterns, exclude=[], ctx=None, aliases=None, quiet=Fa
    if not isinstance(exclude, list):
        exclude = [exclude]
    for pattern in exclude:
        to_exclude.update(map(os.path.abspath, extended_iglob(pattern)))
    module_list = []
    for pattern in patterns:
        if isinstance(pattern, str):
@@ -582,11 +599,11 @@ def create_extension_list(patterns, exclude=[], ctx=None, aliases=None, quiet=Fa
        else:
            raise TypeError(pattern)
        for file in extended_iglob(filepattern):
            if os.path.abspath(file) in to_exclude:
                continue
            pkg = deps.package(file)
            if '*' in name:
                module_name = deps.fully_qualified_name(file)
                if module_name in explicit_modules:
                    continue
            else:
@@ -652,6 +669,11 @@ def cythonize(module_list, exclude=[], nthreads=0, aliases=None, quiet=False, fo
    """
    if 'include_path' not in options:
        options['include_path'] = ['.']
    if 'common_utility_include_dir' in options:
        if 'cache' in options:
            raise NotImplementedError, "common_utility_include_dir does not yet work with caching"
        if not os.path.exists(options['common_utility_include_dir']):
            os.makedirs(options['common_utility_include_dir'])
    c_options = CompilationOptions(**options)
    cpp_options = CompilationOptions(**options); cpp_options.cplus = True
    ctx = c_options.create_context()
@@ -746,10 +768,11 @@ def cythonize(module_list, exclude=[], nthreads=0, aliases=None, quiet=False, fo
    try:
        import multiprocessing
        pool = multiprocessing.Pool(nthreads)
    except (ImportError, OSError):
        print("multiprocessing required for parallel cythonization")
        nthreads = 0
    else:
        pool.map(cythonize_one_helper, to_compile)
    if not nthreads:
        for args in to_compile:
            cythonize_one(*args[1:])
@@ -758,8 +781,19 @@ def cythonize(module_list, exclude=[], nthreads=0, aliases=None, quiet=False, fo
    for c_file, modules in modules_by_cfile.iteritems():
        if not os.path.exists(c_file):
            failed_modules.update(modules)
        elif os.path.getsize(c_file) < 200:
            f = io_open(c_file, 'r', encoding='iso8859-1')
            try:
                if f.read(len('#error ')) == '#error ':
                    # dead compilation result
                    failed_modules.update(modules)
            finally:
                f.close()
    if failed_modules:
        for module in failed_modules:
            module_list.remove(module)
        print("Failed compilations: %s" % ', '.join(sorted([
            module.name for module in failed_modules])))
    if hasattr(options, 'cache'):
        cleanup_cache(options.cache, getattr(options, 'cache_size', 1024 * 1024 * 100))
    # cythonize() is often followed by the (non-Python-buffered)
@@ -767,7 +801,43 @@ def cythonize(module_list, exclude=[], nthreads=0, aliases=None, quiet=False, fo
    sys.stdout.flush()
    return module_list
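The new ``#error`` check distinguishes a real compilation result from the small stub that Cython leaves behind when compilation fails. A standalone sketch of the same test (file names and the helper are illustrative):

```python
import os, tempfile

def is_dead_c_file(path, threshold=200):
    """A missing file, or a tiny file starting with '#error ', means a failed build."""
    if not os.path.exists(path):
        return True
    if os.path.getsize(path) < threshold:
        # same encoding trick as above: iso8859-1 maps all bytes to chars
        with open(path, 'r', encoding='iso8859-1') as f:
            return f.read(len('#error ')) == '#error '
    return False

d = tempfile.mkdtemp()
dead, ok = os.path.join(d, 'dead.c'), os.path.join(d, 'ok.c')
with open(dead, 'w') as f:
    f.write('#error Do not use this file, it is the result of a failed compilation.\n')
with open(ok, 'w') as f:
    f.write('/* Generated by Cython */\n' + 'int x;\n' * 50)

print(is_dead_c_file(dead), is_dead_c_file(ok))  # True False
```

The size threshold keeps the check cheap: any genuine generated C file is far larger than 200 bytes, so only suspiciously small files are opened at all.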
if os.environ.get('XML_RESULTS'):
compile_result_dir = os.environ['XML_RESULTS']
def record_results(func):
def with_record(*args):
t = time.time()
success = True
try:
try:
func(*args)
except:
success = False
finally:
t = time.time() - t
module = fully_qualified_name(args[0])
name = "cythonize." + module
failures = 1 - success
if success:
failure_item = ""
else:
failure_item = "failure"
output = open(os.path.join(compile_result_dir, name + ".xml"), "w")
output.write("""
<?xml version="1.0" ?>
<testsuite name="%(name)s" errors="0" failures="%(failures)s" tests="1" time="%(t)s">
<testcase classname="%(name)s" name="cythonize">
%(failure_item)s
</testcase>
</testsuite>
""".strip() % locals())
output.close()
return with_record
else:
record_results = lambda x: x
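The ``record_results`` decorator above times each compilation and writes a one-test JUnit-style XML file per module. A trimmed, self-contained version of that pattern (output directory and naming are simplified, and ``compile_step`` is a stand-in for ``cythonize_one``):

```python
import os, tempfile, time
import xml.etree.ElementTree as ET

result_dir = tempfile.mkdtemp()

def record_results(func):
    def with_record(name, *args):
        t = time.time()
        success = True
        try:
            func(name, *args)
        except Exception:
            success = False
        t = time.time() - t
        body = '<testcase classname="%s" name="run">%s</testcase>' % (
            name, '' if success else '<failure/>')
        xml = ('<testsuite name="%s" errors="0" failures="%d" tests="1" '
               'time="%s">%s</testsuite>'
               % (name, 0 if success else 1, t, body))
        with open(os.path.join(result_dir, name + '.xml'), 'w') as f:
            f.write(xml)
        return success
    return with_record

@record_results
def compile_step(name):
    if name == 'bad':
        raise ValueError(name)

compile_step('good')
compile_step('bad')
print(ET.parse(os.path.join(result_dir, 'bad.xml')).getroot().get('failures'))  # 1
```

Gating the decorator on an environment variable, as the real code does with ``XML_RESULTS``, makes it a no-op (``lambda x: x``) in normal runs.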
# TODO: Share context? Issue: pyx processing leaks into pxd module
@record_results
def cythonize_one(pyx_file, c_file, fingerprint, quiet, options=None, raise_on_failure=True):
    from Cython.Compiler.Main import compile, default_options
    from Cython.Compiler.Errors import CompileError, PyrexError
@@ -811,6 +881,9 @@ def cythonize_one(pyx_file, c_file, fingerprint, quiet, options=None, raise_on_f
    except (EnvironmentError, PyrexError), e:
        sys.stderr.write('%s\n' % e)
        any_failures = 1
        # XXX
        import traceback
        traceback.print_exc()
    except Exception:
        if raise_on_failure:
            raise
@@ -834,7 +907,12 @@ def cythonize_one(pyx_file, c_file, fingerprint, quiet, options=None, raise_on_f
    f.close()

def cythonize_one_helper(m):
    import traceback
    try:
        return cythonize_one(*m[1:])
    except Exception:
        traceback.print_exc()
        raise

def cleanup_cache(cache, target_size, ratio=.85):
    try:
...
@@ -11,9 +11,11 @@ from Code import CCodeWriter
from Cython import Utils

# need one-character substitutions (for now) so offsets aren't off
special_chars = [
    (u'&', u'\xF2', u'&amp;'),
    (u'<', u'\xF0', u'&lt;'),
    (u'>', u'\xF1', u'&gt;'),
]

line_pos_comment = re.compile(r'/\*.*?<<<<<<<<<<<<<<.*?\*/\n*', re.DOTALL)
@@ -57,8 +59,7 @@ class AnnotationCCodeWriter(CCodeWriter):
        self.mark_pos(None)
        f = Utils.open_source_file(source_filename)
        lines = f.readlines()
        for k, line in enumerate(lines):
            for c, cc, html in special_chars:
                line = line.replace(c, cc)
            lines[k] = line
@@ -75,8 +76,7 @@ class AnnotationCCodeWriter(CCodeWriter):
            else:
                all.append((pos, start+end))
        all.sort(reverse=True)
        for pos, item in all:
            _, line_no, col = pos
            line_no -= 1
@@ -86,6 +86,7 @@ class AnnotationCCodeWriter(CCodeWriter):
        html_filename = os.path.splitext(target_filename)[0] + ".html"
        f = codecs.open(html_filename, "w", encoding="UTF-8")
        f.write(u'<!DOCTYPE html>\n')
        f.write(u'<!-- Generated by Cython %s -->\n' % Version.watermark)
        f.write(u'<html>\n')
        f.write(u"""
@@ -120,7 +121,7 @@ body { font-family: courier; font-size: 12; }
<script>
function toggleDiv(id) {
    theDiv = document.getElementById(id);
    if (theDiv.style.display != 'block') theDiv.style.display = 'block';
    else theDiv.style.display = 'none';
}
</script>
@@ -147,8 +148,9 @@ function toggleDiv(id) {
                code = code_source_file[k]
            except KeyError:
                code = ''
            else:
                for c, cc, html in special_chars:
                    code = code.replace(c, html)
            code, py_c_api_calls = py_c_api.subn(ur"<span class='py_c_api'>\1</span>(", code)
            code, pyx_c_api_calls = pyx_c_api.subn(ur"<span class='pyx_c_api'>\1</span>(", code)
...
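The annotator escapes via one-character placeholders first so that column offsets stay stable until the final HTML entities go in; the diff above also reorders ``&`` first so it cannot double-escape the other entities. A sketch of the two-phase scheme (placeholder characters as in the module):

```python
# single-character placeholders keep string offsets identical until
# the final pass swaps in the multi-character HTML entities
special_chars = [
    (u'&', u'\xF2', u'&amp;'),   # '&' must come first to avoid double-escaping
    (u'<', u'\xF0', u'&lt;'),
    (u'>', u'\xF1', u'&gt;'),
]

def to_placeholders(line):
    for c, placeholder, _ in special_chars:
        line = line.replace(c, placeholder)
    return line

def to_html(line):
    for _, placeholder, entity in special_chars:
        line = line.replace(placeholder, entity)
    return line

line = 'if (a < b && b > c)'
masked = to_placeholders(line)
print(len(masked) == len(line))  # True: offsets unchanged
print(to_html(masked))           # if (a &lt; b &amp;&amp; b &gt; c)
```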
@@ -607,7 +607,7 @@ class GetAndReleaseBufferUtilityCode(object):
        proto = util_code.format_code(util_code.proto)
        impl = util_code.format_code(
            util_code.inject_string_constants(util_code.impl, output)[1])
        proto_code.putln(proto)
        code.putln(impl)
...
@@ -12,8 +12,8 @@ import Options

# C-level implementations of builtin types, functions and methods
iter_next_utility_code = UtilityCode.load("IterNext", "ObjectHandling.c")
getattr_utility_code = UtilityCode.load("GetAttr", "ObjectHandling.c")
getattr3_utility_code = UtilityCode.load("GetAttr3", "Builtins.c")
pyexec_utility_code = UtilityCode.load("PyExec", "Builtins.c")
pyexec_globals_utility_code = UtilityCode.load("PyExecGlobals", "Builtins.c")
globals_utility_code = UtilityCode.load("Globals", "Builtins.c")
@@ -287,6 +287,8 @@ builtin_types_table = [
    BuiltinMethod("reverse", "T", "r", "PyList_Reverse"),
    BuiltinMethod("append", "TO", "r", "__Pyx_PyList_Append",
                  utility_code=UtilityCode.load("ListAppend", "Optimize.c")),
    BuiltinMethod("extend", "TO", "r", "__Pyx_PyList_Extend",
                  utility_code=UtilityCode.load("ListExtend", "Optimize.c")),
]),
("dict", "PyDict_Type", [BuiltinMethod("__contains__", "TO", "b", "PyDict_Contains"),
@@ -331,14 +333,16 @@ builtin_types_table = [
    ("frozenset", "PyFrozenSet_Type", []),
]

types_that_construct_their_instance = set([
    # some builtin types do not always return an instance of
    # themselves - these do:
    'type', 'bool', 'long', 'float', 'complex',
    'bytes', 'unicode', 'bytearray',
    'tuple', 'list', 'dict', 'set', 'frozenset'
    # 'str',  # only in Py3.x
    # 'file', # only in Py2.x
])

builtin_structs_table = [
@@ -400,8 +404,11 @@ def init_builtins():
    init_builtin_structs()
    init_builtin_funcs()
    init_builtin_types()
    builtin_scope.declare_var(
        '__debug__', PyrexTypes.c_const_type(PyrexTypes.c_bint_type),
        pos=None, cname='(!Py_OptimizeFlag)', is_cdef=True)
    global list_type, tuple_type, dict_type, set_type, frozenset_type
    global bytes_type, str_type, unicode_type, basestring_type
    global float_type, bool_type, type_type, complex_type
    type_type = builtin_scope.lookup('type').type
    list_type = builtin_scope.lookup('list').type
@@ -412,6 +419,7 @@ def init_builtins():
    bytes_type = builtin_scope.lookup('bytes').type
    str_type = builtin_scope.lookup('str').type
    unicode_type = builtin_scope.lookup('unicode').type
    basestring_type = builtin_scope.lookup('basestring').type
    float_type = builtin_scope.lookup('float').type
    bool_type = builtin_scope.lookup('bool').type
    complex_type = builtin_scope.lookup('complex').type
...
@@ -28,6 +28,7 @@ Options:
  -w, --working <directory>  Sets the working directory for Cython (the directory modules
                             are searched from)
  --gdb                      Output debug information for cygdb
  --gdb-outdir <directory>   Specify gdb debug information output directory. Implies --gdb.
  -D, --no-docstrings        Strip docstrings from the compiled module.
  -a, --annotate             Produce a colorized HTML version of the source.
@@ -36,6 +37,8 @@ Options:
  --embed[=<method_name>]    Generate a main() function that embeds the Python interpreter.
  -2                         Compile based on Python-2 syntax and code semantics.
  -3                         Compile based on Python-3 syntax and code semantics.
  --lenient                  Change some compile time errors to runtime errors to
                             improve Python compatibility
  --capi-reexport-cincludes  Add cincluded headers to any auto-generated header files.
  --fast-fail                Abort the compilation on the first error
  --warning-errors, -Werror  Make all warnings into errors
@@ -119,6 +122,12 @@ def parse_command_line(args):
        elif option == "--gdb":
            options.gdb_debug = True
            options.output_dir = os.curdir
        elif option == "--gdb-outdir":
            options.gdb_debug = True
            options.output_dir = pop_arg()
        elif option == "--lenient":
            Options.error_on_unknown_names = False
            Options.error_on_uninitialized = False
        elif option == '-2':
            options.language_level = 2
        elif option == '-3':
...
...@@ -43,6 +43,7 @@ non_portable_builtins_map = {
'unicode' : ('PY_MAJOR_VERSION >= 3', 'str'),
'basestring' : ('PY_MAJOR_VERSION >= 3', 'str'),
'xrange' : ('PY_MAJOR_VERSION >= 3', 'range'),
'raw_input' : ('PY_MAJOR_VERSION >= 3', 'input'),
'BaseException' : ('PY_VERSION_HEX < 0x02050000', 'Exception'),
}
...@@ -384,12 +385,18 @@ class UtilityCode(UtilityCodeBase):
def inject_string_constants(self, impl, output):
"""Replace 'PYIDENT("xyz")' by a constant Python identifier cname.
"""
pystrings = re.findall('(PYIDENT\("([^"]+)"\))', impl)
for ref, name in pystrings:
py_const = output.get_interned_identifier(
StringEncoding.EncodedString(name))
impl = impl.replace(ref, py_const.cname)
return impl
replacements = {}
def externalise(matchobj):
name = matchobj.group(1)
try:
cname = replacements[name]
except KeyError:
cname = replacements[name] = output.get_interned_identifier(
StringEncoding.EncodedString(name)).cname
return cname
impl = re.sub('PYIDENT\("([^"]+)"\)', externalise, impl)
return bool(replacements), impl
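The rewritten `inject_string_constants` above swaps a findall-and-replace loop for a single `re.sub` pass whose callback memoizes each interned name. A standalone sketch of that technique, with a hypothetical `intern_name` standing in for `output.get_interned_identifier`:

```python
import re

def inject_identifiers(impl, intern_name):
    """Replace each PYIDENT("xyz") with a cached cname in one pass."""
    replacements = {}

    def externalise(matchobj):
        name = matchobj.group(1)
        try:
            return replacements[name]
        except KeyError:
            cname = replacements[name] = intern_name(name)
            return cname

    impl = re.sub(r'PYIDENT\("([^"]+)"\)', externalise, impl)
    # the boolean tells the caller whether the code became module-specific
    return bool(replacements), impl

# hypothetical interning function: stable cname per identifier
_cnames = {}
def intern_name(name):
    return _cnames.setdefault(name, '__pyx_n_s_%s' % name)
```

Repeated occurrences of the same identifier hit the `replacements` cache instead of interning twice, which is the point of the callback-based rewrite.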
def put_code(self, output):
if self.requires:
...@@ -400,10 +407,14 @@ class UtilityCode(UtilityCodeBase):
self.format_code(self.proto),
'%s_proto' % self.name)
if self.impl:
output['utility_code_def'].put_or_include(
self.format_code(
self.inject_string_constants(self.impl, output)),
'%s_impl' % self.name)
impl = self.format_code(self.impl)
is_specialised, impl = self.inject_string_constants(impl, output)
if not is_specialised:
# no module specific adaptations => can be reused
output['utility_code_def'].put_or_include(
impl, '%s_impl' % self.name)
else:
output['utility_code_def'].put(impl)
if self.init:
writer = output['init_globals']
writer.putln("/* %s.init */" % self.name)
...@@ -713,10 +724,10 @@ class PyObjectConst(object):
self.type = type
cython.declare(possible_unicode_identifier=object, possible_bytes_identifier=object,
nice_identifier=object, find_alphanums=object)
replace_identifier=object, find_alphanums=object)
possible_unicode_identifier = re.compile(ur"(?![0-9])\w+$", re.U).match
possible_bytes_identifier = re.compile(r"(?![0-9])\w+$".encode('ASCII')).match
nice_identifier = re.compile(r'\A[a-zA-Z0-9_]+\Z').match
replace_identifier = re.compile(r'[^a-zA-Z0-9_]+').sub
find_alphanums = re.compile('([a-zA-Z0-9]+)').findall
class StringConst(object):
...@@ -836,7 +847,7 @@ class GlobalState(object):
# In time, hopefully the literals etc. will be
# supplied directly instead.
#
# const_cname_counter int global counter for constant identifiers
# const_cnames_used dict global counter for unique constant identifiers
#
# parts {string:CCodeWriter}
...@@ -892,7 +903,7 @@ class GlobalState(object):
self.module_node = module_node # because some utility code generation needs it
# (generating backwards-compatible Get/ReleaseBuffer
self.const_cname_counter = 1
self.const_cnames_used = {}
self.string_const_index = {}
self.pyunicode_ptr_const_index = {}
self.int_const_index = {}
...@@ -1020,7 +1031,7 @@ class GlobalState(object):
# create a new Python object constant
const = self.new_py_const(type, prefix)
if cleanup_level is not None \
and cleanup_level <= Options.generate_cleanup_code:
cleanup_writer = self.parts['cleanup_globals']
cleanup_writer.putln('Py_CLEAR(%s);' % const.cname)
return const
...@@ -1082,17 +1093,10 @@ class GlobalState(object):
self.py_constants.append(c)
return c
def new_string_const_cname(self, bytes_value, intern=None):
def new_string_const_cname(self, bytes_value):
# Create a new globally-unique nice name for a C string constant.
try:
value = bytes_value.decode('ASCII')
except UnicodeError:
return self.new_const_cname()
if len(value) < 20 and nice_identifier(value):
return "%s_%s" % (Naming.const_prefix, value)
else:
return self.new_const_cname()
value = bytes_value.decode('ASCII', 'ignore')
return self.new_const_cname(value=value)
def new_int_const_cname(self, value, longness):
if longness:
...@@ -1101,10 +1105,15 @@ class GlobalState(object):
cname = cname.replace('-', 'neg_').replace('.','_')
return cname
def new_const_cname(self, prefix=''):
n = self.const_cname_counter
self.const_cname_counter += 1
return "%s%s%d" % (Naming.const_prefix, prefix, n)
def new_const_cname(self, prefix='', value=''):
value = replace_identifier('_', value)[:32].strip('_')
used = self.const_cnames_used
name_suffix = value
while name_suffix in used:
counter = used[value] = used[value] + 1
name_suffix = '%s_%d' % (value, counter)
used[name_suffix] = 1
return "%s%s%s" % (Naming.const_prefix, prefix, name_suffix)
def add_cached_builtin_decl(self, entry):
if entry.is_builtin and entry.is_const:
...@@ -1529,16 +1538,21 @@ class CCodeWriter(object):
self.bol = 0
def put_or_include(self, code, name):
if code:
if self.globalstate.common_utility_include_dir and len(code) > 1042:
include_file = "%s_%s.h" % (name, hashlib.md5(code).hexdigest())
path = os.path.join(self.globalstate.common_utility_include_dir, include_file)
if not os.path.exists(path):
tmp_path = '%s.tmp%s' % (path, os.getpid())
open(tmp_path, 'w').write(code)
os.rename(tmp_path, path)
code = '#include "%s"\n' % path
self.put(code)
include_dir = self.globalstate.common_utility_include_dir
if include_dir and len(code) > 1024:
include_file = "%s_%s.h" % (
name, hashlib.md5(code.encode('utf8')).hexdigest())
path = os.path.join(include_dir, include_file)
if not os.path.exists(path):
tmp_path = '%s.tmp%s' % (path, os.getpid())
f = Utils.open_new_file(tmp_path)
try:
f.write(code)
finally:
f.close()
os.rename(tmp_path, path)
code = '#include "%s"\n' % path
self.put(code)
def put(self, code):
fix_indent = False
...
...@@ -25,7 +25,7 @@ import Nodes
from Nodes import Node
import PyrexTypes
from PyrexTypes import py_object_type, c_long_type, typecast, error_type, \
unspecified_type, cython_memoryview_ptr_type
unspecified_type
import TypeSlots
from Builtin import list_type, tuple_type, set_type, dict_type, \
unicode_type, str_type, bytes_type, type_type
...@@ -67,7 +67,9 @@ coercion_error_dict = {
(Builtin.unicode_type, PyrexTypes.c_uchar_ptr_type) : "Unicode objects only support coercion to Py_UNICODE*.",
(Builtin.bytes_type, Builtin.unicode_type) : "Cannot convert 'bytes' object to unicode implicitly, decoding required",
(Builtin.bytes_type, Builtin.str_type) : "Cannot convert 'bytes' object to str implicitly. This is not portable to Py3.",
(Builtin.bytes_type, Builtin.basestring_type) : "Cannot convert 'bytes' object to basestring implicitly. This is not portable to Py3.",
(Builtin.bytes_type, PyrexTypes.c_py_unicode_ptr_type) : "Cannot convert 'bytes' object to Py_UNICODE*, use 'unicode'.",
(Builtin.basestring_type, Builtin.bytes_type) : "Cannot convert 'basestring' object to bytes implicitly. This is not portable.",
(Builtin.str_type, Builtin.unicode_type) : "str objects do not support coercion to unicode, use a unicode string literal instead (u'')",
(Builtin.str_type, Builtin.bytes_type) : "Cannot convert 'str' to 'bytes' implicitly. This is not portable.",
(Builtin.str_type, PyrexTypes.c_char_ptr_type) : "'str' objects do not support coercion to C types (use 'bytes'?).",
...@@ -76,6 +78,7 @@ coercion_error_dict = {
(PyrexTypes.c_char_ptr_type, Builtin.unicode_type) : "Cannot convert 'char*' to unicode implicitly, decoding required",
(PyrexTypes.c_uchar_ptr_type, Builtin.unicode_type) : "Cannot convert 'char*' to unicode implicitly, decoding required",
}
def find_coercion_error(type_tuple, default, env):
err = coercion_error_dict.get(type_tuple)
if err is None:
...@@ -1250,9 +1253,8 @@ class UnicodeNode(ConstNode):
"Unicode literals do not support coercion to C types other "
"than Py_UNICODE/Py_UCS4 (for characters) or Py_UNICODE* "
"(for strings).")
elif dst_type is not py_object_type:
if not self.check_for_coercion_error(dst_type, env):
self.fail_assignment(dst_type)
elif dst_type not in (py_object_type, Builtin.basestring_type):
self.check_for_coercion_error(dst_type, env, fail=True)
return self
def can_coerce_to_char_literal(self):
...@@ -1337,7 +1339,8 @@ class StringNode(PyConstNode):
# return BytesNode(self.pos, value=self.value)
if not dst_type.is_pyobject:
return BytesNode(self.pos, value=self.value).coerce_to(dst_type, env)
self.check_for_coercion_error(dst_type, env, fail=True)
if dst_type is not Builtin.basestring_type:
self.check_for_coercion_error(dst_type, env, fail=True)
return self
def can_coerce_to_char_literal(self):
...@@ -1476,6 +1479,7 @@ class NameNode(AtomicExprNode):
cf_is_null = False
allow_null = False
nogil = False
inferred_type = None
def as_cython_attribute(self):
return self.cython_attribute
...@@ -1484,7 +1488,7 @@ class NameNode(AtomicExprNode):
if self.entry is None:
self.entry = env.lookup(self.name)
if self.entry is not None and self.entry.type.is_unspecified:
return (self.entry,)
return (self,)
else:
return ()
...@@ -1492,6 +1496,8 @@ class NameNode(AtomicExprNode):
if self.entry is None:
self.entry = env.lookup(self.name)
if self.entry is None or self.entry.type is unspecified_type:
if self.inferred_type is not None:
return self.inferred_type
return py_object_type
elif (self.entry.type.is_extension_type or self.entry.type.is_builtin_type) and \
self.name == self.entry.type.name:
...@@ -1506,6 +1512,12 @@ class NameNode(AtomicExprNode):
# special case: referring to a C function must return its pointer
return PyrexTypes.CPtrType(self.entry.type)
else:
# If entry is inferred as pyobject it's safe to use local
# NameNode's inferred_type.
if self.entry.type.is_pyobject and self.inferred_type:
# Overflow may happen if integer
if not (self.inferred_type.is_int and self.entry.might_overflow):
return self.inferred_type
return self.entry.type
def compile_time_value(self, denv):
...@@ -2034,9 +2046,10 @@ class NameNode(AtomicExprNode):
if hasattr(self, 'is_called') and self.is_called:
pos = (self.pos[0], self.pos[1], self.pos[2] - len(self.name) - 1)
if self.type.is_pyobject:
code.annotate(pos, AnnotationItem('py_call', 'python function', size=len(self.name)))
style, text = 'py_call', 'python function (%s)'
else:
code.annotate(pos, AnnotationItem('c_call', 'c function', size=len(self.name)))
style, text = 'c_call', 'c function (%s)'
code.annotate(pos, AnnotationItem(style, text % self.type, size=len(self.name)))
class BackquoteNode(ExprNode):
# `expr`
...@@ -3010,6 +3023,7 @@ class IndexNode(ExprNode):
else:
self.is_temp = 1
self.index = self.index.coerce_to(PyrexTypes.c_py_ssize_t_type, env).coerce_to_simple(env)
self.original_index_type.create_to_py_utility_code(env)
else:
self.index = self.index.coerce_to_pyobject(env)
self.is_temp = 1
...@@ -3193,11 +3207,22 @@ class IndexNode(ExprNode):
return self.base.check_const_addr() and self.index.check_const()
def is_lvalue(self):
base_type = self.base.type
if self.type.is_ptr or self.type.is_array:
return not base_type.base_type.is_array
else:
return True
# NOTE: references currently have both is_reference and is_ptr
# set. Since pointers and references have different lvalue
# rules, we must be careful to separate the two.
if self.type.is_reference:
if self.type.ref_base_type.is_array:
# fixed-sized arrays aren't l-values
return False
elif self.type.is_ptr:
# non-const pointers can always be reassigned
return True
elif self.type.is_array:
# fixed-sized arrays aren't l-values
return False
# Just about everything else returned by the index operator
# can be an lvalue.
return True
def calculate_result_code(self):
if self.is_buffer_access:
...@@ -5369,9 +5394,10 @@ class AttributeNode(ExprNode):
def annotate(self, code):
if self.is_py_attr:
code.annotate(self.pos, AnnotationItem('py_attr', 'python attribute', size=len(self.attribute)))
style, text = 'py_attr', 'python attribute (%s)'
else:
code.annotate(self.pos, AnnotationItem('c_attr', 'c attribute', size=len(self.attribute)))
style, text = 'c_attr', 'c attribute (%s)'
code.annotate(self.pos, AnnotationItem(style, text % self.type, size=len(self.attribute)))
#-------------------------------------------------------------------
...@@ -7227,7 +7253,7 @@ class PyCFunctionNode(ExprNode, ModuleNameMixin):
flags = '0'
code.putln(
'%s = %s(&%s, %s, %s, %s, %s, %s); %s' % (
'%s = %s(&%s, %s, %s, %s, %s, %s, %s); %s' % (
self.result(),
constructor,
self.pymethdef_cname,
...@@ -7235,6 +7261,7 @@ class PyCFunctionNode(ExprNode, ModuleNameMixin):
self.get_py_qualified_name(code),
self.self_result_code(),
self.get_py_mod_name(code),
"PyModule_GetDict(%s)" % Naming.module_cname,
code_object_result,
code.error_goto_if_null(self.result(), self.pos)))
...@@ -10039,7 +10066,8 @@ class CoercionNode(ExprNode):
self.arg.annotate(code)
if self.arg.type != self.type:
file, line, col = self.pos
code.annotate((file, line, col-1), AnnotationItem(style='coerce', tag='coerce', text='[%s] to [%s]' % (self.arg.type, self.type)))
code.annotate((file, line, col-1), AnnotationItem(
style='coerce', tag='coerce', text='[%s] to [%s]' % (self.arg.type, self.type)))
class CoerceToMemViewSliceNode(CoercionNode):
"""
...
...@@ -35,6 +35,7 @@ cdef class NameAssignment:
cdef public object pos
cdef public set refs
cdef public object bit
cdef public object inferred_type
cdef class AssignmentList:
cdef public object bit
...
...@@ -157,6 +157,7 @@ class ControlFlow(object):
def is_statically_assigned(self, entry):
if (entry.is_local and entry.is_variable and
(entry.type.is_struct_or_union or
entry.type.is_complex or
entry.type.is_array or
entry.type.is_cpp_class)):
# stack allocated structured variable => never uninitialised
...@@ -319,15 +320,23 @@ class NameAssignment(object):
self.refs = set()
self.is_arg = False
self.is_deletion = False
self.inferred_type = None
def __repr__(self):
return '%s(entry=%r)' % (self.__class__.__name__, self.entry)
def infer_type(self, scope):
return self.rhs.infer_type(scope)
def type_dependencies(self, scope):
return self.rhs.type_dependencies(scope)
def infer_type(self):
self.inferred_type = self.rhs.infer_type(self.entry.scope)
return self.inferred_type
def type_dependencies(self):
return self.rhs.type_dependencies(self.entry.scope)
@property
def type(self):
if not self.entry.type.is_unspecified:
return self.entry.type
return self.inferred_type
class StaticAssignment(NameAssignment):
...@@ -341,11 +350,11 @@ class StaticAssignment(NameAssignment):
entry.type, may_be_none=may_be_none, pos=entry.pos)
super(StaticAssignment, self).__init__(lhs, lhs, entry)
def infer_type(self, scope):
def infer_type(self):
return self.entry.type
def type_dependencies(self, scope):
return []
def type_dependencies(self):
return ()
class Argument(NameAssignment):
...@@ -359,11 +368,12 @@ class NameDeletion(NameAssignment):
NameAssignment.__init__(self, lhs, lhs, entry)
self.is_deletion = True
def infer_type(self, scope):
inferred_type = self.rhs.infer_type(scope)
if (not inferred_type.is_pyobject and
inferred_type.can_coerce_to_pyobject(scope)):
return py_object_type
def infer_type(self):
inferred_type = self.rhs.infer_type(self.entry.scope)
if (not inferred_type.is_pyobject and
inferred_type.can_coerce_to_pyobject(self.entry.scope)):
return py_object_type
self.inferred_type = inferred_type
return inferred_type
...@@ -410,7 +420,9 @@ class ControlFlowState(list):
else:
if len(state) == 1:
self.is_single = True
super(ControlFlowState, self).__init__(state)
# XXX: Remove fake_rhs_expr
super(ControlFlowState, self).__init__(
[i for i in state if i.rhs is not fake_rhs_expr])
def one(self):
return self[0]
...
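The `NameAssignment` change above caches each inference result and adds a `type` property that prefers a declared entry type and only falls back to the cached inference. A minimal standalone sketch of that declared-over-inferred fallback (all class names here are illustrative stand-ins):

```python
class FakeType(object):
    """Stand-in for a Cython type object."""
    def __init__(self, name, is_unspecified=False):
        self.name = name
        self.is_unspecified = is_unspecified

class Entry(object):
    """Stand-in for a symbol-table entry with a declared type."""
    def __init__(self, type):
        self.type = type

class NameAssignment(object):
    def __init__(self, entry):
        self.entry = entry
        self.inferred_type = None  # filled in by type inference

    @property
    def type(self):
        # a declared (specified) type always wins over inference
        if not self.entry.type.is_unspecified:
            return self.entry.type
        return self.inferred_type
```

This lets control-flow analysis record a per-assignment inferred type without ever overriding an explicit `cdef` declaration.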
...@@ -290,6 +290,7 @@ class Context(object):
source_filename = source_desc.filename
scope.cpp = self.cpp
# Parse the given source file and return a parse tree.
num_errors = Errors.num_errors
try:
f = Utils.open_source_file(source_filename, "rU")
try:
...@@ -321,7 +322,7 @@ class Context(object):
"Decoding error, missing or incorrect coding=<encoding-name> "
"at top of source (cannot decode with encoding %r: %s)" % (encoding, msg))
if Errors.num_errors > 0:
if Errors.num_errors > num_errors:
raise CompileError()
return tree
...
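The `Main.py` fix above snapshots the global error count before parsing and compares against that snapshot afterwards, rather than against zero, so errors left over from an earlier source file no longer abort an unrelated parse. A minimal sketch of the pattern, with hypothetical `ErrorLog` and `do_parse` stand-ins for Cython's `Errors` module and parser:

```python
class ErrorLog(object):
    """Minimal stand-in for a global, cumulative error counter."""
    def __init__(self):
        self.num_errors = 0

def do_parse(source, errors):
    """Hypothetical parser: records an error for 'bad' sources."""
    if 'bad' in source:
        errors.num_errors += 1
    return ('tree', source)

def parse(source, errors):
    start_errors = errors.num_errors  # snapshot, not a comparison with 0
    tree = do_parse(source, errors)
    if errors.num_errors > start_errors:
        # only errors produced by THIS parse abort it
        raise ValueError('errors while parsing %r' % source)
    return tree
```

The delta comparison is the whole fix: a nonzero counter inherited from a previous file is harmless.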
...@@ -1175,6 +1175,12 @@ class CVarDefNode(StatNode):
visibility = self.visibility
for declarator in self.declarators:
if (len(self.declarators) > 1
and not isinstance(declarator, CNameDeclaratorNode)
and env.directives['warn.multiple_declarators']):
warning(declarator.pos, "Non-trivial type declarators in shared declaration.", 1)
if isinstance(declarator, CFuncDeclaratorNode):
name_declarator, type = declarator.analyse(base_type, env, directive_locals=self.directive_locals)
else:
...@@ -2788,7 +2794,7 @@ class DefNode(FuncDefNode):
return self.entry.signature.error_value
def caller_will_check_exceptions(self):
return 1
return self.entry.signature.exception_check
def generate_function_definitions(self, env, code):
if self.defaults_getter:
...@@ -5183,6 +5189,7 @@ class AssertStatNode(StatNode):
def generate_execution_code(self, code):
code.putln("#ifndef CYTHON_WITHOUT_ASSERTIONS")
code.putln("if (unlikely(!Py_OptimizeFlag)) {")
self.cond.generate_evaluation_code(code)
code.putln(
"if (unlikely(!%s)) {" %
...@@ -5203,6 +5210,8 @@ class AssertStatNode(StatNode):
"}")
self.cond.generate_disposal_code(code)
self.cond.free_temps(code)
code.putln(
"}")
code.putln("#endif")
def generate_function_definitions(self, env, code):
...
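The `AssertStatNode` change above wraps the generated assertion in `if (unlikely(!Py_OptimizeFlag)) { ... }`, mirroring CPython's behaviour of skipping `assert` statements when the interpreter is started with `-O`. The same condition is visible from Python as `sys.flags.optimize`; a sketch of the equivalent runtime gate (the function name is illustrative):

```python
import sys

def checked_sqrt(x):
    # mirrors what the generated C does: the assertion only runs when
    # the interpreter was not started with -O (Py_OptimizeFlag unset)
    if not sys.flags.optimize:
        assert x >= 0, "negative input"
    return x ** 0.5
```

Under `python -O`, both the explicit flag check and the `assert` itself are no-ops, so compiled and interpreted modules now agree on when assertions fire.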
...@@ -19,6 +19,7 @@ from StringEncoding import EncodedString, BytesLiteral
from Errors import error
from ParseTreeTransforms import SkipDeclarations
import copy
import codecs
try:
...@@ -3050,6 +3051,8 @@ class ConstantFolding(Visitor.VisitorTransform, SkipDeclarations):
def visit_UnopNode(self, node):
self._calculate_const(node)
if node.constant_result is ExprNodes.not_a_constant:
if node.operator == '!':
return self._handle_NotNode(node)
return node
if not node.operand.is_literal:
return node
...@@ -3066,6 +3069,23 @@ class ConstantFolding(Visitor.VisitorTransform, SkipDeclarations):
return self._handle_UnaryMinusNode(node)
return node
_negate_operator = {
'in': 'not_in',
'not_in': 'in',
'is': 'is_not',
'is_not': 'is'
}.get
def _handle_NotNode(self, node):
operand = node.operand
if isinstance(operand, ExprNodes.PrimaryCmpNode):
operator = self._negate_operator(operand.operator)
if operator:
node = copy.copy(operand)
node.operator = operator
node = self.visit_PrimaryCmpNode(node)
return node
def _handle_UnaryMinusNode(self, node):
def _negate(value):
if value.startswith('-'):
...
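The `_handle_NotNode` addition above folds `not (a in b)` into `a not in b` (and likewise for `is`/`is not`), but deliberately only for operators whose negation is always safe; rich comparisons like `<` are left alone. A standalone sketch of the fold on toy AST nodes (the `Cmp`/`Not` classes are illustrative stand-ins for Cython's `PrimaryCmpNode`/`NotNode`):

```python
import copy

_negate_operator = {
    'in': 'not_in',
    'not_in': 'in',
    'is': 'is_not',
    'is_not': 'is',
}.get

class Cmp(object):
    def __init__(self, operator, lhs, rhs):
        self.operator, self.lhs, self.rhs = operator, lhs, rhs

class Not(object):
    def __init__(self, operand):
        self.operand = operand

def fold_not(node):
    """Rewrite Not(Cmp(op)) as Cmp(negated-op) when negation is safe."""
    operand = node.operand
    if isinstance(operand, Cmp):
        operator = _negate_operator(operand.operator)
        if operator:
            folded = copy.copy(operand)
            folded.operator = operator
            return folded
    return node  # leave anything without a safe negation alone
```

Restricting the table to `in`/`is` pairs matters: `not (a < b)` is not `a >= b` for arbitrary Python objects with custom rich comparisons.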
...@@ -68,7 +68,6 @@ old_style_globals = False ...@@ -68,7 +68,6 @@ old_style_globals = False
# Allows cimporting from a pyx file without a pxd file. # Allows cimporting from a pyx file without a pxd file.
cimport_from_pyx = False cimport_from_pyx = False
# max # of dims for buffers -- set lower than number of dimensions in numpy, as # max # of dims for buffers -- set lower than number of dimensions in numpy, as
# slices are passed by value and involve a lot of copying # slices are passed by value and involve a lot of copying
buffer_max_dims = 8 buffer_max_dims = 8
...@@ -76,6 +75,10 @@ buffer_max_dims = 8 ...@@ -76,6 +75,10 @@ buffer_max_dims = 8
# Number of function closure instances to keep in a freelist (0: no freelists) # Number of function closure instances to keep in a freelist (0: no freelists)
closure_freelist_size = 8 closure_freelist_size = 8
# Should tp_clear() set object fields to None instead of clearing them to NULL?
clear_to_none = True
# Declare compiler directives # Declare compiler directives
directive_defaults = { directive_defaults = {
'boundscheck' : True, 'boundscheck' : True,
...@@ -96,6 +99,7 @@ directive_defaults = { ...@@ -96,6 +99,7 @@ directive_defaults = {
'final' : False, 'final' : False,
     'internal' : False,
     'profile': False,
+    'no_gc_clear': False,
     'linetrace': False,
     'infer_types': None,
     'infer_types.verbose': False,
@@ -119,6 +123,7 @@ directive_defaults = {
     'warn.unused': False,
     'warn.unused_arg': False,
     'warn.unused_result': False,
+    'warn.multiple_declarators': True,
     # optimizations
     'optimize.inline_defnode_calls': True,
@@ -214,6 +219,7 @@ for key, val in directive_defaults.items():
 directive_scopes = { # defaults to available everywhere
     # 'module', 'function', 'class', 'with statement'
     'final' : ('cclass', 'function'),
+    'no_gc_clear' : ('cclass',),
     'internal' : ('cclass',),
     'autotestdict' : ('module',),
     'autotestdict.all' : ('module',),
@@ -303,6 +309,11 @@ def parse_directive_list(s, relaxed_bool=False, ignore_unknown=False,
     Traceback (most recent call last):
     ...
     ValueError: Unknown option: "unknown"
+    >>> warnings = parse_directive_list('warn.all=True')
+    >>> len(warnings) > 1
+    True
+    >>> sum(warnings.values()) == len(warnings)  # all true.
+    True
     """
     if current_settings is None:
         result = {}
@@ -313,10 +324,18 @@ def parse_directive_list(s, relaxed_bool=False, ignore_unknown=False,
         if not item: continue
         if not '=' in item: raise ValueError('Expected "=" in option "%s"' % item)
         name, value = [ s.strip() for s in item.strip().split('=', 1) ]
-        parsed_value = parse_directive_value(name, value, relaxed_bool=relaxed_bool)
-        if parsed_value is None:
-            if not ignore_unknown:
+        if name not in directive_defaults:
+            found = False
+            if name.endswith('.all'):
+                prefix = name[:-3]
+                for directive in directive_defaults:
+                    if directive.startswith(prefix):
+                        found = True
+                        parsed_value = parse_directive_value(directive, value, relaxed_bool=relaxed_bool)
+                        result[directive] = parsed_value
+            if not found and not ignore_unknown:
                 raise ValueError('Unknown option: "%s"' % name)
         else:
+            parsed_value = parse_directive_value(name, value, relaxed_bool=relaxed_bool)
             result[name] = parsed_value
     return result
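The reworked `parse_directive_list` hunk above expands a `<prefix>.all` option onto every known directive sharing that prefix. A minimal, self-contained sketch of the same expansion — the `DEFAULTS` table and `parse_bool` helper are hypothetical stand-ins, not Cython's actual `directive_defaults` or `parse_directive_value`:

```python
DEFAULTS = {  # hypothetical directive table mirroring directive_defaults
    'warn.unused': False,
    'warn.unused_arg': False,
    'warn.unused_result': False,
    'boundscheck': True,
}

def parse_bool(value):
    # minimal stand-in for parse_directive_value
    return {'True': True, 'False': False}[value]

def parse_directives(s):
    result = {}
    for item in s.split(','):
        name, value = [part.strip() for part in item.split('=', 1)]
        if name not in DEFAULTS:
            found = False
            if name.endswith('.all'):
                prefix = name[:-3]  # strip 'all', keep the dot: 'warn.'
                for directive in DEFAULTS:
                    if directive.startswith(prefix):
                        found = True
                        result[directive] = parse_bool(value)
            if not found:
                raise ValueError('Unknown option: "%s"' % name)
        else:
            result[name] = parse_bool(value)
    return result
```

So `parse_directives('warn.all=True')` turns on all three `warn.*` directives at once, which is exactly what the new doctest in the hunk asserts.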
@@ -121,13 +121,20 @@ class NormalizeTree(CythonTransform):
     def visit_CStructOrUnionDefNode(self, node):
         return self.visit_StatNode(node, True)
-    # Eliminate PassStatNode
     def visit_PassStatNode(self, node):
+        """Eliminate PassStatNode"""
         if not self.is_in_statlist:
             return Nodes.StatListNode(pos=node.pos, stats=[])
         else:
             return []
+    def visit_ExprStatNode(self, node):
+        """Eliminate useless string literals"""
+        if node.expr.is_string_literal:
+            return self.visit_PassStatNode(node)
+        else:
+            return self.visit_StatNode(node)
     def visit_CDeclaratorNode(self, node):
         return node
@@ -889,8 +896,8 @@ class InterpretCompilerDirectives(CythonTransform, SkipDeclarations):
                 'The %s directive takes one compile-time integer argument' % optname)
             return (optname, int(args[0].value))
         elif directivetype is str:
-            if kwds is not None or len(args) != 1 or not isinstance(args[0], (ExprNodes.StringNode,
-                                                                              ExprNodes.UnicodeNode)):
+            if kwds is not None or len(args) != 1 or not isinstance(
+                    args[0], (ExprNodes.StringNode, ExprNodes.UnicodeNode)):
                 raise PostParseError(pos,
                     'The %s directive takes one compile-time string argument' % optname)
             return (optname, str(args[0].value))
@@ -909,6 +916,12 @@ class InterpretCompilerDirectives(CythonTransform, SkipDeclarations):
                 raise PostParseError(pos,
                     'The %s directive takes no keyword arguments' % optname)
             return optname, [ str(arg.value) for arg in args ]
+        elif callable(directivetype):
+            if kwds is not None or len(args) != 1 or not isinstance(
+                    args[0], (ExprNodes.StringNode, ExprNodes.UnicodeNode)):
+                raise PostParseError(pos,
+                    'The %s directive takes one compile-time string argument' % optname)
+            return (optname, directivetype(optname, str(args[0].value)))
         else:
             assert False
@@ -940,8 +953,10 @@ class InterpretCompilerDirectives(CythonTransform, SkipDeclarations):
             if name == 'locals':
                 node.directive_locals = value
             elif name != 'final':
-                self.context.nonfatal_error(PostParseError(dec.pos,
-                    "Cdef functions can only take cython.locals() or final decorators, got %s." % name))
+                self.context.nonfatal_error(PostParseError(
+                    node.pos,
+                    "Cdef functions can only take cython.locals() "
+                    "or final decorators, got %s." % name))
         body = Nodes.StatListNode(node.pos, stats=[node])
         return self.visit_with_directives(body, directives)
@@ -2096,13 +2111,14 @@ class YieldNodeCollector(TreeVisitor):
         self.has_return_value = False
     def visit_Node(self, node):
-        return self.visitchildren(node)
+        self.visitchildren(node)
     def visit_YieldExprNode(self, node):
         self.yields.append(node)
         self.visitchildren(node)
     def visit_ReturnStatNode(self, node):
+        self.visitchildren(node)
         if node.value:
             self.has_return_value = True
         self.returns.append(node)
@@ -2119,6 +2135,7 @@ class YieldNodeCollector(TreeVisitor):
     def visit_GeneratorExpressionNode(self, node):
         pass
+
 class MarkClosureVisitor(CythonTransform):
     def visit_ModuleNode(self, node):
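The new `visit_ExprStatNode` above normalizes away expression statements that are bare string literals. The same idea can be sketched against CPython's own `ast` module (a simplified stand-in — Cython's tree classes and `NormalizeTree` work differently in detail):

```python
import ast

class DropStringStatements(ast.NodeTransformer):
    """Drop expression statements that are bare string literals,
    analogous to eliminating useless string constants in a tree pass."""
    def visit_Expr(self, node):
        if isinstance(node.value, ast.Constant) and isinstance(node.value.value, str):
            return None  # returning None removes the statement from its list
        return self.generic_visit(node)

source = "x = 1\n'unused literal'\ny = x + 1\n"
tree = DropStringStatements().visit(ast.parse(source))
ast.fix_missing_locations(tree)
remaining = len(tree.body)  # the stray string statement is gone
```

The transform leaves the two assignments intact and silently discards the useless literal between them.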
@@ -3,7 +3,7 @@
 cimport cython
 from Cython.Compiler.Scanning cimport PyrexScanner
-ctypedef object (*p_sub_expr_func)(object)
+ctypedef object (*p_sub_expr_func)(PyrexScanner obj)
 # entry points
@@ -17,26 +17,26 @@ cdef p_ident_list(PyrexScanner s)
 cdef tuple p_binop_operator(PyrexScanner s)
 cdef p_binop_expr(PyrexScanner s, ops, p_sub_expr_func p_sub_expr)
-cpdef p_lambdef(PyrexScanner s, bint allow_conditional=*)
+cdef p_lambdef(PyrexScanner s, bint allow_conditional=*)
 cdef p_lambdef_nocond(PyrexScanner s)
 cdef p_test(PyrexScanner s)
 cdef p_test_nocond(PyrexScanner s)
 cdef p_or_test(PyrexScanner s)
-cdef p_rassoc_binop_expr(PyrexScanner s, ops, p_subexpr)
-cpdef p_and_test(PyrexScanner s)
-cpdef p_not_test(PyrexScanner s)
+cdef p_rassoc_binop_expr(PyrexScanner s, ops, p_sub_expr_func p_subexpr)
+cdef p_and_test(PyrexScanner s)
+cdef p_not_test(PyrexScanner s)
 cdef p_comparison(PyrexScanner s)
 cdef p_test_or_starred_expr(PyrexScanner s)
 cdef p_starred_expr(PyrexScanner s)
 cdef p_cascaded_cmp(PyrexScanner s)
 cdef p_cmp_op(PyrexScanner s)
 cdef p_bit_expr(PyrexScanner s)
-cdef p_xor_expr(s)
-cdef p_and_expr(s)
-cdef p_shift_expr(s)
-cdef p_arith_expr(s)
-cdef p_term(s)
-cdef p_factor(s)
+cdef p_xor_expr(PyrexScanner s)
+cdef p_and_expr(PyrexScanner s)
+cdef p_shift_expr(PyrexScanner s)
+cdef p_arith_expr(PyrexScanner s)
+cdef p_term(PyrexScanner s)
+cdef p_factor(PyrexScanner s)
 cdef _p_factor(PyrexScanner s)
 cdef p_typecast(PyrexScanner s)
 cdef p_sizeof(PyrexScanner s)
@@ -45,7 +45,7 @@ cdef p_yield_statement(PyrexScanner s)
 cdef p_power(PyrexScanner s)
 cdef p_new_expr(PyrexScanner s)
 cdef p_trailer(PyrexScanner s, node1)
-cpdef p_call_parse_args(PyrexScanner s, bint allow_genexp = *)
+cdef p_call_parse_args(PyrexScanner s, bint allow_genexp = *)
 cdef p_call_build_packed_args(pos, positional_args, keyword_args, star_arg, starstar_arg)
 cdef p_call(PyrexScanner s, function)
 cdef p_index(PyrexScanner s, base)
@@ -71,7 +71,7 @@ cdef p_comp_for(PyrexScanner s, body)
 cdef p_comp_if(PyrexScanner s, body)
 cdef p_dict_or_set_maker(PyrexScanner s)
 cdef p_backquote_expr(PyrexScanner s)
-cpdef p_simple_expr_list(PyrexScanner s, expr=*)
+cdef p_simple_expr_list(PyrexScanner s, expr=*)
 cdef p_test_or_starred_expr_list(PyrexScanner s, expr=*)
 cdef p_testlist(PyrexScanner s)
 cdef p_testlist_star_expr(PyrexScanner s)
@@ -90,15 +90,15 @@ cdef p_expression_or_assignment(PyrexScanner s)
 cdef p_print_statement(PyrexScanner s)
 cdef p_exec_statement(PyrexScanner s)
 cdef p_del_statement(PyrexScanner s)
-cpdef p_pass_statement(PyrexScanner s, bint with_newline = *)
+cdef p_pass_statement(PyrexScanner s, bint with_newline = *)
 cdef p_break_statement(PyrexScanner s)
 cdef p_continue_statement(PyrexScanner s)
 cdef p_return_statement(PyrexScanner s)
 cdef p_raise_statement(PyrexScanner s)
 cdef p_import_statement(PyrexScanner s)
-cpdef p_from_import_statement(PyrexScanner s, bint first_statement = *)
+cdef p_from_import_statement(PyrexScanner s, bint first_statement = *)
 cdef p_imported_name(PyrexScanner s, bint is_cimport)
-cpdef p_dotted_name(PyrexScanner s, bint as_allowed)
+cdef p_dotted_name(PyrexScanner s, bint as_allowed)
 cdef p_as_name(PyrexScanner s)
 cdef p_assert_statement(PyrexScanner s)
 cdef p_if_statement(PyrexScanner s)
@@ -106,7 +106,7 @@ cdef p_if_clause(PyrexScanner s)
 cdef p_else_clause(PyrexScanner s)
 cdef p_while_statement(PyrexScanner s)
 cdef p_for_statement(PyrexScanner s)
-cpdef dict p_for_bounds(PyrexScanner s, bint allow_testlist = *)
+cdef dict p_for_bounds(PyrexScanner s, bint allow_testlist = *)
 cdef p_for_from_relation(PyrexScanner s)
 cdef p_for_from_step(PyrexScanner s)
 cdef p_target(PyrexScanner s, terminator)
@@ -117,21 +117,25 @@ cdef p_except_clause(PyrexScanner s)
 cdef p_include_statement(PyrexScanner s, ctx)
 cdef p_with_statement(PyrexScanner s)
 cdef p_with_items(PyrexScanner s)
-cpdef p_simple_statement(PyrexScanner s, bint first_statement = *)
-cpdef p_simple_statement_list(PyrexScanner s, ctx, bint first_statement = *)
+cdef p_with_template(PyrexScanner s)
+cdef p_simple_statement(PyrexScanner s, bint first_statement = *)
+cdef p_simple_statement_list(PyrexScanner s, ctx, bint first_statement = *)
 cdef p_compile_time_expr(PyrexScanner s)
 cdef p_DEF_statement(PyrexScanner s)
 cdef p_IF_statement(PyrexScanner s, ctx)
-cpdef p_statement(PyrexScanner s, ctx, bint first_statement = *)
-cpdef p_statement_list(PyrexScanner s, ctx, bint first_statement = *)
-cpdef p_suite(PyrexScanner s, ctx = *, bint with_doc = *, bint with_pseudo_doc = *)
+cdef p_statement(PyrexScanner s, ctx, bint first_statement = *)
+cdef p_statement_list(PyrexScanner s, ctx, bint first_statement = *)
+cdef p_suite(PyrexScanner s, ctx = *)
+cdef tuple p_suite_with_docstring(PyrexScanner s, ctx, with_doc_only = *)
+cdef tuple _extract_docstring(node)
 cdef p_positional_and_keyword_args(PyrexScanner s, end_sy_set, templates = *)
 cpdef p_c_base_type(PyrexScanner s, bint self_flag = *, bint nonempty = *, templates = *)
 cdef p_calling_convention(PyrexScanner s)
 cdef p_c_complex_base_type(PyrexScanner s)
-cpdef p_c_simple_base_type(PyrexScanner s, bint self_flag, bint nonempty, templates = *)
+cdef p_c_simple_base_type(PyrexScanner s, bint self_flag, bint nonempty, templates = *)
 cdef p_buffer_or_template(PyrexScanner s, base_type_node, templates)
+cdef p_bracketed_base_type(PyrexScanner s, base_type_node, nonempty, empty)
 cdef is_memoryviewslice_access(PyrexScanner s)
 cdef p_memoryviewslice_access(PyrexScanner s, base_type_node)
 cdef bint looking_at_name(PyrexScanner s) except -2
@@ -154,7 +158,7 @@ cdef p_exception_value_clause(PyrexScanner s)
 cpdef p_c_arg_list(PyrexScanner s, ctx = *, bint in_pyfunc = *, bint cmethod_flag = *,
                    bint nonempty_declarators = *, bint kw_only = *, bint annotated = *)
 cdef p_optional_ellipsis(PyrexScanner s)
-cpdef p_c_arg_decl(PyrexScanner s, ctx, in_pyfunc, bint cmethod_flag = *, bint nonempty = *, bint kw_only = *, bint annotated = *)
+cdef p_c_arg_decl(PyrexScanner s, ctx, in_pyfunc, bint cmethod_flag = *, bint nonempty = *, bint kw_only = *, bint annotated = *)
 cdef p_api(PyrexScanner s)
 cdef p_cdef_statement(PyrexScanner s, ctx)
 cdef p_cdef_block(PyrexScanner s, ctx)
@@ -164,14 +168,15 @@ cdef p_c_enum_line(PyrexScanner s, ctx, list items)
 cdef p_c_enum_item(PyrexScanner s, ctx, list items)
 cdef p_c_struct_or_union_definition(PyrexScanner s, pos, ctx)
 cdef p_fused_definition(PyrexScanner s, pos, ctx)
+cdef p_struct_enum(PyrexScanner s, pos, ctx)
 cdef p_visibility(PyrexScanner s, prev_visibility)
 cdef p_c_modifiers(PyrexScanner s)
 cdef p_c_func_or_var_declaration(PyrexScanner s, pos, ctx)
 cdef p_ctypedef_statement(PyrexScanner s, ctx)
 cdef p_decorators(PyrexScanner s)
 cdef p_def_statement(PyrexScanner s, list decorators = *)
-cpdef p_varargslist(PyrexScanner s, terminator=*, bint annotated = *)
-cpdef p_py_arg_decl(PyrexScanner s, bint annotated = *)
+cdef p_varargslist(PyrexScanner s, terminator=*, bint annotated = *)
+cdef p_py_arg_decl(PyrexScanner s, bint annotated = *)
 cdef p_class_statement(PyrexScanner s, decorators)
 cdef p_c_class_definition(PyrexScanner s, pos, ctx)
 cdef p_c_class_options(PyrexScanner s)
@@ -42,6 +42,7 @@ class BufferAux(object):
     def __repr__(self):
         return "<BufferAux %r>" % self.__dict__
+
 class Entry(object):
     # A symbol table entry in a Scope or ModuleNamespace.
     #
@@ -832,23 +833,6 @@ class Scope(object):
     def add_include_file(self, filename):
         self.outer_scope.add_include_file(filename)
-    def get_refcounted_entries(self, include_weakref=False):
-        py_attrs = []
-        py_buffers = []
-        memoryview_slices = []
-        for entry in self.var_entries:
-            if entry.type.is_pyobject:
-                if include_weakref or entry.name != "__weakref__":
-                    py_attrs.append(entry)
-            elif entry.type == PyrexTypes.c_py_buffer_type:
-                py_buffers.append(entry)
-            elif entry.type.is_memoryviewslice:
-                memoryview_slices.append(entry)
-        have_entries = py_attrs or py_buffers or memoryview_slices
-        return have_entries, (py_attrs, py_buffers, memoryview_slices)
 class PreImportScope(Scope):
@@ -1769,6 +1753,7 @@ class CClassScope(ClassScope):
     #  method_table_cname    string
     #  getset_table_cname    string
     #  has_pyobject_attrs    boolean  Any PyObject attributes?
+    #  has_cyclic_pyobject_attrs    boolean  Any PyObject attributes that may need GC?
     #  property_entries      [Entry]
     #  defined               boolean  Defined in .pxd file
     #  implemented           boolean  Defined in .pyx file
@@ -1776,24 +1761,56 @@ class CClassScope(ClassScope):
     is_c_class_scope = 1
+    has_pyobject_attrs = False
+    has_cyclic_pyobject_attrs = False
+    defined = False
+    implemented = False
     def __init__(self, name, outer_scope, visibility):
         ClassScope.__init__(self, name, outer_scope)
         if visibility != 'extern':
             self.method_table_cname = outer_scope.mangle(Naming.methtab_prefix, name)
             self.getset_table_cname = outer_scope.mangle(Naming.gstab_prefix, name)
-        self.has_pyobject_attrs = 0
         self.property_entries = []
         self.inherited_var_entries = []
-        self.defined = 0
-        self.implemented = 0
     def needs_gc(self):
         # If the type or any of its base types have Python-valued
         # C attributes, then it needs to participate in GC.
-        return self.has_pyobject_attrs or \
-               (self.parent_type.base_type and
-                self.parent_type.base_type.scope is not None and
-                self.parent_type.base_type.scope.needs_gc())
+        if self.has_cyclic_pyobject_attrs:
+            return True
+        base_type = self.parent_type.base_type
+        if base_type and base_type.scope is not None:
+            return base_type.scope.needs_gc()
+        elif self.parent_type.is_builtin_type:
+            return not self.parent_type.is_gc_simple
+        return False
+
+    def needs_tp_clear(self):
+        """
+        Do we need to generate an implementation for the tp_clear slot? Can
+        be disabled to keep references for the __dealloc__ cleanup function.
+        """
+        return self.needs_gc() and not self.directives.get('no_gc_clear', False)
+
+    def get_refcounted_entries(self, include_weakref=False,
+                               include_gc_simple=True):
+        py_attrs = []
+        py_buffers = []
+        memoryview_slices = []
+        for entry in self.var_entries:
+            if entry.type.is_pyobject:
+                if include_weakref or entry.name != "__weakref__":
+                    if include_gc_simple or not entry.type.is_gc_simple:
+                        py_attrs.append(entry)
+            elif entry.type == PyrexTypes.c_py_buffer_type:
+                py_buffers.append(entry)
+            elif entry.type.is_memoryviewslice:
+                memoryview_slices.append(entry)
+        have_entries = py_attrs or py_buffers or memoryview_slices
+        return have_entries, (py_attrs, py_buffers, memoryview_slices)
     def declare_var(self, name, type, pos,
                     cname = None, visibility = 'private',
@@ -1819,7 +1836,10 @@ class CClassScope(ClassScope):
             entry.is_variable = 1
             self.var_entries.append(entry)
             if type.is_pyobject and name != '__weakref__':
-                self.has_pyobject_attrs = 1
+                self.has_pyobject_attrs = True
+                if (not type.is_builtin_type
+                        or not type.scope or type.scope.needs_gc()):
+                    self.has_cyclic_pyobject_attrs = True
         if visibility not in ('private', 'public', 'readonly'):
             error(pos,
                 "Attribute of extension type cannot be declared %s" % visibility)
@@ -1853,7 +1873,6 @@ class CClassScope(ClassScope):
             self.namespace_cname = "(PyObject *)%s" % self.parent_type.typeptr_cname
         return entry
-
     def declare_pyfunction(self, name, pos, allow_redefine=False):
         # Add an entry for a method.
         if name in ('__eq__', '__ne__', '__lt__', '__gt__', '__le__', '__ge__'):
@@ -1987,10 +2006,11 @@ class CClassScope(ClassScope):
         entries = base_scope.inherited_var_entries + base_scope.var_entries
         for base_entry in entries:
-            entry = self.declare(base_entry.name, adapt(base_entry.cname),
-                                 base_entry.type, None, 'private')
+            entry = self.declare(
+                base_entry.name, adapt(base_entry.cname),
+                base_entry.type, None, 'private')
             entry.is_variable = 1
             self.inherited_var_entries.append(entry)
         # If the class defined in a pxd, specific entries have not been added.
         # Ensure now that the parent (base) scope has specific entries
@@ -2013,7 +2033,7 @@ class CClassScope(ClassScope):
                 entry.is_final_cmethod = True
                 entry.is_inline_cmethod = base_entry.is_inline_cmethod
                 if (self.parent_scope == base_scope.parent_scope or
-                        entry.is_inline_cmethod):
+                    entry.is_inline_cmethod):
                     entry.final_func_cname = base_entry.final_func_cname
                 if is_builtin:
                     entry.is_builtin_cmethod = True
@@ -2021,6 +2041,7 @@ class CClassScope(ClassScope):
             if base_entry.utility_code:
                 entry.utility_code = base_entry.utility_code
+
 class CppClassScope(Scope):
     # Namespace of a C++ class.
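The rewritten `needs_gc()` above decides GC participation recursively: a type needs GC if it has attributes that can form reference cycles, if an ancestor needs it, or if it extends a builtin that is not "GC-simple" (e.g. not `str`). A toy model of that decision, using plain dicts as hypothetical scope stand-ins rather than the real Symtab API:

```python
def needs_gc(scope):
    # Toy mirror of CClassScope.needs_gc(): participate in cyclic GC if the
    # type has cycle-capable attributes, or inherits that property.
    if scope['has_cyclic_pyobject_attrs']:
        return True
    base = scope.get('base')
    if base is not None:
        return needs_gc(base)
    if scope.get('is_builtin_type'):
        return not scope.get('is_gc_simple', False)
    return False

# 'str' cannot be part of a reference cycle -> GC-simple builtin
str_scope = {'has_cyclic_pyobject_attrs': False,
             'is_builtin_type': True, 'is_gc_simple': True}
# a str subclass with no cycle-capable attributes of its own
str_subclass = {'has_cyclic_pyobject_attrs': False, 'base': str_scope}
# a type holding an arbitrary object attribute
holder = {'has_cyclic_pyobject_attrs': True}
```

This is the payoff mentioned in the changelog: types whose attributes cannot create cycles skip GC traversal entirely.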
@@ -339,8 +339,11 @@ class SimpleAssignmentTypeInferer(object):
     Note: in order to support cross-closure type inference, this must be
     applies to nested scopes in top-down order.
     """
-    # TODO: Implement a real type inference algorithm.
-    # (Something more powerful than just extending this one...)
+    def set_entry_type(self, entry, entry_type):
+        entry.type = entry_type
+        for e in entry.all_entries():
+            e.type = entry_type
+
     def infer_types(self, scope):
         enabled = scope.directives['infer_types']
         verbose = scope.directives['infer_types.verbose']
@@ -352,85 +355,126 @@ class SimpleAssignmentTypeInferer(object):
             else:
                 for entry in scope.entries.values():
                     if entry.type is unspecified_type:
-                        entry.type = py_object_type
+                        self.set_entry_type(entry, py_object_type)
             return
-        dependancies_by_entry = {} # entry -> dependancies
-        entries_by_dependancy = {} # dependancy -> entries
-        ready_to_infer = []
+        # Set of assignemnts
+        assignments = set([])
+        assmts_resolved = set([])
+        dependencies = {}
+        assmt_to_names = {}
         for name, entry in scope.entries.items():
+            for assmt in entry.cf_assignments:
+                names = assmt.type_dependencies()
+                assmt_to_names[assmt] = names
+                assmts = set()
+                for node in names:
+                    assmts.update(node.cf_state)
+                dependencies[assmt] = assmts
             if entry.type is unspecified_type:
-                all = set()
-                for assmt in entry.cf_assignments:
-                    all.update(assmt.type_dependencies(entry.scope))
-                if all:
-                    dependancies_by_entry[entry] = all
-                    for dep in all:
-                        if dep not in entries_by_dependancy:
-                            entries_by_dependancy[dep] = set([entry])
-                        else:
-                            entries_by_dependancy[dep].add(entry)
-                else:
-                    ready_to_infer.append(entry)
-
-        def resolve_dependancy(dep):
-            if dep in entries_by_dependancy:
-                for entry in entries_by_dependancy[dep]:
-                    entry_deps = dependancies_by_entry[entry]
-                    entry_deps.remove(dep)
-                    if not entry_deps and entry != dep:
-                        del dependancies_by_entry[entry]
-                        ready_to_infer.append(entry)
-
-        # Try to infer things in order...
+                assignments.update(entry.cf_assignments)
+            else:
+                assmts_resolved.update(entry.cf_assignments)
+
+        def infer_name_node_type(node):
+            types = [assmt.inferred_type for assmt in node.cf_state]
+            if not types:
+                node_type = py_object_type
+            else:
+                node_type = spanning_type(
+                    types, entry.might_overflow, entry.pos)
+            node.inferred_type = node_type
+
+        def infer_name_node_type_partial(node):
+            types = [assmt.inferred_type for assmt in node.cf_state
+                     if assmt.inferred_type is not None]
+            if not types:
+                return
+            return spanning_type(types, entry.might_overflow, entry.pos)
+
+        def resolve_assignments(assignments):
+            resolved = set()
+            for assmt in assignments:
+                deps = dependencies[assmt]
+                # All assignments are resolved
+                if assmts_resolved.issuperset(deps):
+                    for node in assmt_to_names[assmt]:
+                        infer_name_node_type(node)
+                    # Resolve assmt
+                    inferred_type = assmt.infer_type()
+                    done = False
+                    assmts_resolved.add(assmt)
+                    resolved.add(assmt)
+            assignments -= resolved
+            return resolved
+
+        def partial_infer(assmt):
+            partial_types = []
+            for node in assmt_to_names[assmt]:
+                partial_type = infer_name_node_type_partial(node)
+                if partial_type is None:
+                    return False
+                partial_types.append((node, partial_type))
+            for node, partial_type in partial_types:
+                node.inferred_type = partial_type
+            assmt.infer_type()
+            return True
+
+        partial_assmts = set()
+        def resolve_partial(assignments):
+            # try to handle circular references
+            partials = set()
+            for assmt in assignments:
+                partial_types = []
+                if assmt in partial_assmts:
+                    continue
+                for node in assmt_to_names[assmt]:
+                    if partial_infer(assmt):
+                        partials.add(assmt)
+                        assmts_resolved.add(assmt)
+            partial_assmts.update(partials)
+            return partials
+
+        # Infer assignments
         while True:
-            while ready_to_infer:
-                entry = ready_to_infer.pop()
-                types = [
-                    assmt.rhs.infer_type(scope)
-                    for assmt in entry.cf_assignments
-                ]
-                if types and Utils.all(types):
-                    entry_type = spanning_type(types, entry.might_overflow, entry.pos)
-                else:
-                    # FIXME: raise a warning?
-                    # print "No assignments", entry.pos, entry
-                    entry_type = py_object_type
-                # propagate entry type to all nested scopes
-                for e in entry.all_entries():
-                    if e.type is unspecified_type:
-                        e.type = entry_type
-                    else:
-                        # FIXME: can this actually happen?
-                        assert e.type == entry_type, (
-                            'unexpected type mismatch between closures for inferred type %s: %s vs. %s' %
-                            entry_type, e, entry)
-                if verbose:
-                    message(entry.pos, "inferred '%s' to be of type '%s'" % (entry.name, entry.type))
-                resolve_dependancy(entry)
-            # Deal with simple circular dependancies...
-            for entry, deps in dependancies_by_entry.items():
-                if len(deps) == 1 and deps == set([entry]):
-                    types = [assmt.infer_type(scope)
-                             for assmt in entry.cf_assignments
-                             if assmt.type_dependencies(scope) == ()]
-                    if types:
-                        entry.type = spanning_type(types, entry.might_overflow, entry.pos)
-                        types = [assmt.infer_type(scope)
-                                 for assmt in entry.cf_assignments]
-                        entry.type = spanning_type(types, entry.might_overflow, entry.pos) # might be wider...
-                        resolve_dependancy(entry)
-                        del dependancies_by_entry[entry]
-                        if ready_to_infer:
-                            break
-            if not ready_to_infer:
-                break
-
-        # We can't figure out the rest with this algorithm, let them be objects.
-        for entry in dependancies_by_entry:
-            entry.type = py_object_type
-            if verbose:
-                message(entry.pos, "inferred '%s' to be of type '%s' (default)" % (entry.name, entry.type))
+            if not resolve_assignments(assignments):
+                if not resolve_partial(assignments):
+                    break
+
+        inferred = set()
+        # First pass
+        for entry in scope.entries.values():
+            if entry.type is not unspecified_type:
+                continue
+            entry_type = py_object_type
+            if assmts_resolved.issuperset(entry.cf_assignments):
+                types = [assmt.inferred_type for assmt in entry.cf_assignments]
+                if types and Utils.all(types):
+                    entry_type = spanning_type(
+                        types, entry.might_overflow, entry.pos)
+                    inferred.add(entry)
+            self.set_entry_type(entry, entry_type)
+
+        def reinfer():
+            dirty = False
+            for entry in inferred:
+                types = [assmt.infer_type()
+                         for assmt in entry.cf_assignments]
+                new_type = spanning_type(types, entry.might_overflow, entry.pos)
+                if new_type != entry.type:
+                    self.set_entry_type(entry, new_type)
+                    dirty = True
+            return dirty
+
+        # types propagation
+        while reinfer():
+            pass
+
+        if verbose:
+            for entry in inferred:
+                message(entry.pos, "inferred '%s' to be of type '%s'" % (
+                    entry.name, entry.type))
 def find_spanning_type(type1, type2):
     if type1 is type2:
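The new inference pass above replaces the old entry-level dependency queue with an assignment-level fixpoint: resolve every assignment whose dependencies already have types, repeat until no progress, and fall back to a generic type on unresolved cycles. A simplified, runnable model of that loop — the type lattice (`int` widened with `float`, `object` as fallback) is an invented stand-in for Cython's `spanning_type`:

```python
def infer(assignments, dependencies, initial_types):
    """Fixpoint sketch: `dependencies` maps an assignment name to the
    names whose types it depends on; `initial_types` seeds known types."""
    types = dict(initial_types)
    pending = set(assignments)
    while pending:
        progressed = False
        for name in sorted(pending):
            deps = dependencies.get(name, ())
            if all(d in types for d in deps):
                # "spanning type" stand-in: widen int + float to float
                dep_types = {types[d] for d in deps} or {int}
                types[name] = float if float in dep_types else int
                pending.discard(name)
                progressed = True
        if not progressed:
            # unresolved circular dependency: fall back to a generic type
            for name in pending:
                types[name] = object
            break
    return types
```

Acyclic chains resolve in dependency order, while mutually dependent assignments (which the real pass retries via `resolve_partial`) degrade here to `object`.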
@@ -93,6 +93,7 @@ class Signature(object):
        self.fixed_arg_format = arg_format
        self.ret_format = ret_format
        self.error_value = self.error_value_map.get(ret_format, None)
+        self.exception_check = self.error_value is not None
        self.is_staticmethod = False

    def num_fixed_args(self):
@@ -135,7 +136,9 @@ class Signature(object):
        else:
            ret_type = self.return_type()
        exc_value = self.exception_value()
-        return PyrexTypes.CFuncType(ret_type, args, exception_value = exc_value)
+        return PyrexTypes.CFuncType(
+            ret_type, args, exception_value=exc_value,
+            exception_check=self.exception_check)

    def method_flags(self):
        if self.ret_format == "O":
@@ -319,10 +322,10 @@ class GCDependentSlot(InternalMethodSlot):
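The new `exception_check` attribute falls out directly from whether the signature reserves an error return value at all. A tiny stand-alone illustration (the format-to-error-value map below is a hypothetical subset of `Signature.error_value_map`):

```python
# hypothetical subset of the format -> error-value mapping
error_value_map = {
    'O': "NULL",   # object return: NULL signals an error
    'i': "-1",     # int return: -1 signals a (possible) error
}

def signature_flags(ret_format):
    error_value = error_value_map.get(ret_format, None)
    # an exception can only be detected if some return value is reserved
    exception_check = error_value is not None
    return error_value, exception_check
```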
    def slot_code(self, scope):
        if not scope.needs_gc():
            return "0"
-        if not scope.has_pyobject_attrs:
-            # if the type does not have object attributes, it can
-            # delegate GC methods to its parent - iff the parent
-            # functions are defined in the same module
+        if not scope.has_cyclic_pyobject_attrs:
+            # if the type does not have GC relevant object attributes, it can
+            # delegate GC methods to its parent - iff the parent functions
+            # are defined in the same module
            parent_type_scope = scope.parent_type.base_type.scope
            if scope.parent_scope is parent_type_scope.parent_scope:
                entry = scope.parent_scope.lookup_here(scope.parent_type.base_type.name)
@@ -331,6 +334,14 @@ class GCDependentSlot(InternalMethodSlot):
        return InternalMethodSlot.slot_code(self, scope)
+class GCClearReferencesSlot(GCDependentSlot):
+    def slot_code(self, scope):
+        if scope.needs_tp_clear():
+            return GCDependentSlot.slot_code(self, scope)
+        return "0"
class ConstructorSlot(InternalMethodSlot):
    # Descriptor for tp_new and tp_dealloc.
@@ -339,10 +350,10 @@ class ConstructorSlot(InternalMethodSlot):
        self.method = method

    def slot_code(self, scope):
-        if self.slot_name != 'tp_new' \
-                and scope.parent_type.base_type \
-                and not scope.has_pyobject_attrs \
-                and not scope.lookup_here(self.method):
+        if (self.slot_name != 'tp_new'
+                and scope.parent_type.base_type
+                and not scope.has_pyobject_attrs
+                and not scope.lookup_here(self.method)):
            # if the type does not have object attributes, it can
            # delegate GC methods to its parent - iff the parent
            # functions are defined in the same module
@@ -449,7 +460,10 @@ class MethodTableSlot(SlotDescriptor):
    # Slot descriptor for the method table.

    def slot_code(self, scope):
-        return scope.method_table_cname
+        if scope.pyfunc_entries:
+            return scope.method_table_cname
+        else:
+            return "0"
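The `tp_methods` change above can be read as: only emit a method table symbol when the type actually has Python-visible methods, otherwise put a plain `0` into the slot. A reduced Python model (the `Scope` stand-in is invented for illustration):

```python
class Scope:
    def __init__(self, pyfunc_entries, method_table_cname):
        self.pyfunc_entries = pyfunc_entries        # Python-level methods
        self.method_table_cname = method_table_cname

def method_table_slot_code(scope):
    # emit the table symbol only if there are Python-level methods,
    # otherwise the type's tp_methods slot gets a plain 0
    if scope.pyfunc_entries:
        return scope.method_table_cname
    return "0"
```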
class MemberTableSlot(SlotDescriptor):
@@ -753,7 +767,7 @@ slot_table = (
    DocStringSlot("tp_doc"),
    GCDependentSlot("tp_traverse"),
-    GCDependentSlot("tp_clear"),
+    GCClearReferencesSlot("tp_clear"),
    # Later -- synthesize a method to split into separate ops?
    MethodSlot(richcmpfunc, "tp_richcompare", "__richcmp__", inherited=False),  # Py3 checks for __hash__
@@ -788,6 +802,7 @@ slot_table = (
    EmptySlot("tp_weaklist"),
    EmptySlot("tp_del"),
    EmptySlot("tp_version_tag", ifdef="PY_VERSION_HEX >= 0x02060000"),
+    EmptySlot("tp_finalize", ifdef="PY_VERSION_HEX >= 0x030400a1"),
)

#------------------------------------------------------------------------------------------
......
@@ -4,15 +4,17 @@
#
#   Tree visitor and transform framework
#
import inspect

-import TypeSlots
-import Builtin
-import Nodes
-import ExprNodes
-import Errors
-import DebugFlags
+from Cython.Compiler import TypeSlots
+from Cython.Compiler import Builtin
+from Cython.Compiler import Nodes
+from Cython.Compiler import ExprNodes
+from Cython.Compiler import Errors
+from Cython.Compiler import DebugFlags

import cython


class TreeVisitor(object):
    """
    Base class for writing visitors for a Cython tree, contains utilities for
@@ -62,7 +64,7 @@ class TreeVisitor(object):
        self.access_path = []

    def dump_node(self, node, indent=0):
-        ignored = list(node.child_attrs) + [u'child_attrs', u'pos',
-                                            u'gil_message', u'cpp_message',
-                                            u'subexprs']
+        ignored = list(node.child_attrs or []) + [u'child_attrs', u'pos',
+                                                  u'gil_message', u'cpp_message',
+                                                  u'subexprs']
        values = []
......
@@ -5,9 +5,7 @@ Pyrex extension modules in setup scripts."""

__revision__ = "$Id:$"

-import os
import sys
-from types import *

import distutils.extension as _Extension

try:
@@ -15,57 +13,36 @@ try:
except ImportError:
    warnings = None
-class Extension(_Extension.Extension):
-    _Extension.Extension.__doc__ + \
-    """cython_include_dirs : [string]
-        list of directories to search for Pyrex header files (.pxd) (in
-        Unix form for portability)
-    cython_directives : {string:value}
-        dict of compiler directives
-    cython_create_listing_file : boolean
-        write pyrex error messages to a listing (.lis) file.
-    cython_line_directives : boolean
-        emit pyx line numbers for debugging/profiling
-    cython_cplus : boolean
-        use the C++ compiler for compiling and linking.
-    cython_c_in_temp : boolean
-        put generated C files in temp directory.
-    cython_gen_pxi : boolean
-        generate .pxi file for public declarations
-    cython_gdb : boolean
-        generate Cython debug information for this extension for cygdb
-    no_c_in_traceback : boolean
-        emit the c file and line number from the traceback for exceptions
-    """
class Extension(_Extension.Extension):
    # When adding arguments to this constructor, be sure to update
    # user_options.extend in build_ext.py.
    def __init__(self, name, sources,
-                 include_dirs = None,
-                 define_macros = None,
-                 undef_macros = None,
-                 library_dirs = None,
-                 libraries = None,
-                 runtime_library_dirs = None,
-                 extra_objects = None,
-                 extra_compile_args = None,
-                 extra_link_args = None,
-                 export_symbols = None,
-                 #swig_opts = None,
-                 depends = None,
-                 language = None,
-                 cython_include_dirs = None,
-                 cython_directives = None,
-                 cython_create_listing = 0,
-                 cython_line_directives = 0,
-                 cython_cplus = 0,
-                 cython_c_in_temp = 0,
-                 cython_gen_pxi = 0,
-                 cython_gdb = False,
-                 no_c_in_traceback = False,
-                 cython_compile_time_env = None,
+                 include_dirs=None,
+                 define_macros=None,
+                 undef_macros=None,
+                 library_dirs=None,
+                 libraries=None,
+                 runtime_library_dirs=None,
+                 extra_objects=None,
+                 extra_compile_args=None,
+                 extra_link_args=None,
+                 export_symbols=None,
+                 #swig_opts=None,
+                 depends=None,
+                 language=None,
+                 cython_include_dirs=None,
+                 cython_directives=None,
+                 cython_create_listing=False,
+                 cython_line_directives=False,
+                 cython_cplus=False,
+                 cython_c_in_temp=False,
+                 cython_gen_pxi=False,
+                 cython_gdb=False,
+                 no_c_in_traceback=False,
+                 cython_compile_time_env=None,
                 **kw):

        # Translate pyrex_X to cython_X for backwards compatibility.
        had_pyrex_options = False
        for key in kw.keys():
@@ -73,38 +50,40 @@ class Extension(_Extension.Extension):
            had_pyrex_options = True
            kw['cython' + key[5:]] = kw.pop(key)
        if had_pyrex_options:
-            Extension.__init__(self, name, sources,
-                include_dirs = include_dirs,
-                define_macros = define_macros,
-                undef_macros = undef_macros,
-                library_dirs = library_dirs,
-                libraries = libraries,
-                runtime_library_dirs = runtime_library_dirs,
-                extra_objects = extra_objects,
-                extra_compile_args = extra_compile_args,
-                extra_link_args = extra_link_args,
-                export_symbols = export_symbols,
-                #swig_opts = swig_opts,
-                depends = depends,
-                language = language,
-                no_c_in_traceback = no_c_in_traceback,
+            Extension.__init__(
+                self, name, sources,
+                include_dirs=include_dirs,
+                define_macros=define_macros,
+                undef_macros=undef_macros,
+                library_dirs=library_dirs,
+                libraries=libraries,
+                runtime_library_dirs=runtime_library_dirs,
+                extra_objects=extra_objects,
+                extra_compile_args=extra_compile_args,
+                extra_link_args=extra_link_args,
+                export_symbols=export_symbols,
+                #swig_opts=swig_opts,
+                depends=depends,
+                language=language,
+                no_c_in_traceback=no_c_in_traceback,
                **kw)
            return
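The backwards-compatibility shim in the constructor boils down to a keyword rename before delegating. Isolated for clarity (the helper name is invented):

```python
def translate_pyrex_options(kw):
    # rename legacy pyrex_* keyword arguments to their cython_* equivalents,
    # e.g. pyrex_gdb=True becomes cython_gdb=True
    had_pyrex_options = False
    for key in list(kw):
        if key.startswith('pyrex_'):
            had_pyrex_options = True
            kw['cython' + key[5:]] = kw.pop(key)
    return had_pyrex_options, kw
```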
-        _Extension.Extension.__init__(self, name, sources,
-            include_dirs = include_dirs,
-            define_macros = define_macros,
-            undef_macros = undef_macros,
-            library_dirs = library_dirs,
-            libraries = libraries,
-            runtime_library_dirs = runtime_library_dirs,
-            extra_objects = extra_objects,
-            extra_compile_args = extra_compile_args,
-            extra_link_args = extra_link_args,
-            export_symbols = export_symbols,
-            #swig_opts = swig_opts,
-            depends = depends,
-            language = language,
+        _Extension.Extension.__init__(
+            self, name, sources,
+            include_dirs=include_dirs,
+            define_macros=define_macros,
+            undef_macros=undef_macros,
+            library_dirs=library_dirs,
+            libraries=libraries,
+            runtime_library_dirs=runtime_library_dirs,
+            extra_objects=extra_objects,
+            extra_compile_args=extra_compile_args,
+            extra_link_args=extra_link_args,
+            export_symbols=export_symbols,
+            #swig_opts=swig_opts,
+            depends=depends,
+            language=language,
            **kw)

        self.cython_include_dirs = cython_include_dirs or []
@@ -121,3 +100,29 @@ class Extension(_Extension.Extension):

# class Extension

read_setup_file = _Extension.read_setup_file
+# reuse and extend original docstring from base class (if we can)
+if sys.version_info[0] < 3 and _Extension.Extension.__doc__:
+    # -OO discards docstrings
+    Extension.__doc__ = _Extension.Extension.__doc__ + """\
+        cython_include_dirs : [string]
+            list of directories to search for Pyrex header files (.pxd) (in
+            Unix form for portability)
+        cython_directives : {string:value}
+            dict of compiler directives
+        cython_create_listing_file : boolean
+            write pyrex error messages to a listing (.lis) file.
+        cython_line_directives : boolean
+            emit pyx line numbers for debugging/profiling
+        cython_cplus : boolean
+            use the C++ compiler for compiling and linking.
+        cython_c_in_temp : boolean
+            put generated C files in temp directory.
+        cython_gen_pxi : boolean
+            generate .pxi file for public declarations
+        cython_gdb : boolean
+            generate Cython debug information for this extension for cygdb
+        no_c_in_traceback : boolean
+            emit the c file and line number from the traceback for exceptions
+        """
cdef extern from "math.h" nogil:
    double M_E
    double M_LOG2E
    double M_LOG10E
@@ -14,6 +13,13 @@ cdef extern from "math.h" nogil:
    double M_SQRT2
    double M_SQRT1_2
    # C99 constants
    float INFINITY
    float NAN
    double HUGE_VAL
    float HUGE_VALF
    long double HUGE_VALL
    double acos(double x)
    double asin(double x)
    double atan(double x)
@@ -71,8 +77,15 @@ cdef extern from "math.h" nogil:
    long lround(double)

    double copysign(double, double)
    float copysignf(float, float)
    long double copysignl(long double, long double)

    double erf(double)
    float erff(float)
    long double erfl(long double)
    double erfc(double)
    float erfcf(float)
    long double erfcl(long double)

    double fdim(double x, double y)
    double fma(double x, double y)
@@ -81,5 +94,9 @@ cdef extern from "math.h" nogil:
    double scalbln(double x, long n)
    double scalbn(double x, int n)

-    double nan(char*)  # const char*
+    double nan(const char*)

    bint isfinite(long double)
    bint isnormal(long double)
    bint isnan(long double)
    bint isinf(long double)
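For reference, the C99 classification macros declared above behave like the corresponding functions in Python's `math` module, which is handy when checking Cython results from Python test code:

```python
import math

inf = float('inf')
nan = float('nan')

assert math.isinf(inf) and not math.isfinite(inf)
assert math.isnan(nan)
assert nan != nan                          # NAN compares unequal even to itself
assert math.isfinite(1.0) and not math.isnan(1.0)
assert math.copysign(1.0, -0.0) == -1.0    # signbit-style sign transfer
```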
@@ -48,7 +48,7 @@ cdef extern from "stdio.h" nogil:
    void rewind (FILE *stream)
    long int ftell (FILE *stream)

-    ctypedef long long int fpos_t
+    ctypedef struct fpos_t
    ctypedef const fpos_t const_fpos_t "const fpos_t"
    int fgetpos (FILE *stream, fpos_t *position)
    int fsetpos (FILE *stream, const fpos_t *position)
......
@@ -445,8 +445,8 @@ cdef extern from "numpy/arrayobject.h":
    bint PyArray_ISVARIABLE(ndarray)
    bint PyArray_SAFEALIGNEDCOPY(ndarray)
-    bint PyArray_ISNBO(ndarray)
-    bint PyArray_IsNativeByteOrder(ndarray)
+    bint PyArray_ISNBO(char)              # works on ndarray.byteorder
+    bint PyArray_IsNativeByteOrder(char)  # works on ndarray.byteorder
    bint PyArray_ISNOTSWAPPED(ndarray)
    bint PyArray_ISBYTESWAPPED(ndarray)
......
# NumPy math library
#
# This exports the functionality of the NumPy core math library, aka npymath,
# which provides implementations of C99 math functions and macros for systems
# with a C89 library (such as MSVC). npymath is available with NumPy >=1.3,
# although some functions will require later versions. The spacing function is
# not in C99, but comes from Fortran.
#
# On the Cython side, the npymath functions are available without the "npy_"
# prefix that they have in C, to make this a drop-in replacement for
# libc.math. The same is true for the constants, where possible.
#
# See the NumPy documentation for linking instructions.
#
# Complex number support and NumPy 2.0 half-precision functions are currently
# not exported.
#
# Author: Lars Buitinck

cdef extern from "numpy/npy_math.h":
    # Floating-point classification
    long double NAN "NPY_NAN"
    long double INFINITY "NPY_INFINITY"
    long double PZERO "NPY_PZERO"        # positive zero
    long double NZERO "NPY_NZERO"        # negative zero

    # These four are actually macros and work on any floating-point type.
    bint isfinite "npy_isfinite"(long double)
    bint isinf "npy_isinf"(long double)
    bint isnan "npy_isnan"(long double)
    bint signbit "npy_signbit"(long double)

    double copysign "npy_copysign"(double, double)

    # Math constants
    long double E "NPY_E"
    long double LOG2E "NPY_LOG2E"        # ln(e) / ln(2)
    long double LOG10E "NPY_LOG10E"      # ln(e) / ln(10)
    long double LOGE2 "NPY_LOGE2"        # ln(2)
    long double LOGE10 "NPY_LOGE10"      # ln(10)
    long double PI "NPY_PI"
    long double PI_2 "NPY_PI_2"          # pi / 2
    long double PI_4 "NPY_PI_4"          # pi / 4
    long double NPY_1_PI                 # 1 / pi; NPY_ because of ident syntax
    long double NPY_2_PI                 # 2 / pi
    long double EULER "NPY_EULER"        # Euler constant (gamma, 0.57721)

    # Low-level floating point manipulation (NumPy >=1.4)
    double nextafter "npy_nextafter"(double x, double y)
    double spacing "npy_spacing"(double x)
@@ -97,7 +97,7 @@ cclass = ccall = cfunc = _EmptyDecoratorAndManager()

returns = lambda type_arg: _EmptyDecoratorAndManager()

-final = internal = type_version_tag = _empty_decorator
+final = internal = type_version_tag = no_gc_clear = _empty_decorator

def inline(f, *args, **kwds):
    if isinstance(f, basestring):
@@ -144,7 +144,7 @@ def address(arg):
    return pointer(type(arg))([arg])

def declare(type=None, value=_Unspecified, **kwds):
-    if type is not None and hasattr(type, '__call__'):
+    if type not in (None, object) and hasattr(type, '__call__'):
        if value is not _Unspecified:
            return type(value)
        else:
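The `declare()` fix matters for pure-Python mode: declaring a variable as a plain `object` must hand back the value unchanged instead of calling `object(value)`. A reduced sketch of the corrected logic (mirroring Shadow.py's `_Unspecified` sentinel, simplified):

```python
_Unspecified = object()

def declare(type=None, value=_Unspecified, **kwds):
    # `object` is excluded on purpose: declaring something as a plain
    # Python object must not construct a new object from the value
    if type not in (None, object) and hasattr(type, '__call__'):
        if value is not _Unspecified:
            return type(value)
        return type()
    return value
```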
......
@@ -189,7 +189,8 @@ def unpack_source_tree(tree_file, dir=None):
        elif cur_file is not None:
            cur_file.write(line)
        elif line.strip() and not line.lstrip().startswith('#'):
-            header.append(line)
+            if line.strip() not in ('"""', "'''"):
+                header.append(line)
    if cur_file is not None:
        cur_file.close()
    return dir, ''.join(header)
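The fix above keeps bare docstring delimiters out of the collected header text. The filtering step in isolation (the helper name is invented):

```python
def collect_header(lines):
    header = []
    for line in lines:
        # keep non-empty, non-comment lines for the header...
        if line.strip() and not line.lstrip().startswith('#'):
            # ...but skip bare docstring delimiters so they don't leak in
            if line.strip() not in ('"""', "'''"):
                header.append(line)
    return ''.join(header)
```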
/*
* Special implementations of built-in functions and methods.
*
* Optional optimisations for builtins are in Optimize.c.
*
* General object operations and protocols are in ObjectHandling.c.
*/
//////////////////// Globals.proto ////////////////////
@@ -5,6 +12,7 @@ static PyObject* __Pyx_Globals(void); /*proto*/

//////////////////// Globals ////////////////////
//@substitute: naming
+//@requires: ObjectHandling.c::GetAttr

// This is a stub implementation until we have something more complete.
// Currently, we only handle the most common case of a read-only dict
@@ -33,7 +41,7 @@ static PyObject* __Pyx_Globals(void) {
        PyObject* name = PyList_GET_ITEM(names, i);
#endif
        if (!PyDict_Contains(globals, name)) {
-            PyObject* value = PyObject_GetAttr($module_cname, name);
+            PyObject* value = __Pyx_GetAttr($module_cname, name);
            if (!value) {
#if CYTHON_COMPILING_IN_PYPY
                Py_DECREF(name);
@@ -164,35 +172,16 @@ bad:
    return 0;
}
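In Python terms, the `__Pyx_Globals` stub above treats the module's attributes as part of the globals dict as long as they are not already present there. A rough read-only equivalent (simplified; the helper name is invented):

```python
import types

def module_globals(module, existing=None):
    # start from the existing globals dict and add every module attribute
    # that is not already present (the C stub uses PyDict_Contains)
    g = dict(existing or {})
    for name in dir(module):
        if name not in g:
            g[name] = getattr(module, name)
    return g
```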
//////////////////// GetAttr.proto ////////////////////

static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *, PyObject *); /*proto*/

//////////////////// GetAttr ////////////////////
//@requires: ObjectHandling.c::PyObjectGetAttrStr

static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *o, PyObject *n) {
#if CYTHON_COMPILING_IN_CPYTHON
#if PY_MAJOR_VERSION >= 3
    if (likely(PyUnicode_Check(n)))
#else
    if (likely(PyString_Check(n)))
#endif
        return __Pyx_PyObject_GetAttrStr(o, n);
#endif
    return PyObject_GetAttr(o, n);
}
//////////////////// GetAttr3.proto ////////////////////

static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *, PyObject *, PyObject *); /*proto*/

//////////////////// GetAttr3 ////////////////////
-//@requires: GetAttr
+//@requires: ObjectHandling.c::GetAttr

static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *o, PyObject *n, PyObject *d) {
    PyObject *r = __Pyx_GetAttr(o, n);
-    if (!r) {
+    if (unlikely(!r)) {
        if (!PyErr_ExceptionMatches(PyExc_AttributeError))
            goto bad;
        PyErr_Clear();
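`__Pyx_GetAttr3` is the C spelling of Python's three-argument `getattr`: only an `AttributeError` selects the default, any other exception propagates. The equivalent in plain Python:

```python
def getattr3(obj, name, default):
    try:
        return getattr(obj, name)
    except AttributeError:
        # only a missing attribute falls back to the default;
        # other exceptions (e.g. raised inside a property) propagate
        return default
```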
@@ -212,7 +201,7 @@ static PyObject* __Pyx_Intern(PyObject* s); /* proto */

static PyObject* __Pyx_Intern(PyObject* s) {
    if (!(likely(PyString_CheckExact(s)))) {
-        PyErr_Format(PyExc_TypeError, "Expected str, got %s", Py_TYPE(s)->tp_name);
+        PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "str", Py_TYPE(s)->tp_name);
        return 0;
    }
    Py_INCREF(s);
@@ -266,60 +255,55 @@ static CYTHON_INLINE unsigned PY_LONG_LONG __Pyx_abs_longlong(PY_LONG_LONG x) {
//////////////////// py_dict_keys.proto ////////////////////

-#if PY_MAJOR_VERSION >= 3
static CYTHON_INLINE PyObject* __Pyx_PyDict_Keys(PyObject* d); /*proto*/
-#else
-#define __Pyx_PyDict_Keys(d)  PyDict_Keys(d)
-#endif

//////////////////// py_dict_keys ////////////////////
+//@requires: ObjectHandling.c::PyObjectCallMethod

-#if PY_MAJOR_VERSION >= 3
static CYTHON_INLINE PyObject* __Pyx_PyDict_Keys(PyObject* d) {
-    return PyObject_CallMethodObjArgs(d, PYIDENT("keys"), NULL);
+    if (PY_MAJOR_VERSION >= 3)
+        return __Pyx_PyObject_CallMethod1((PyObject*)&PyDict_Type, PYIDENT("keys"), d);
+    else
+        return PyDict_Keys(d);
}
-#endif

//////////////////// py_dict_values.proto ////////////////////

-#if PY_MAJOR_VERSION >= 3
static CYTHON_INLINE PyObject* __Pyx_PyDict_Values(PyObject* d); /*proto*/
-#else
-#define __Pyx_PyDict_Values(d)  PyDict_Values(d)
-#endif

//////////////////// py_dict_values ////////////////////
+//@requires: ObjectHandling.c::PyObjectCallMethod

-#if PY_MAJOR_VERSION >= 3
static CYTHON_INLINE PyObject* __Pyx_PyDict_Values(PyObject* d) {
-    return PyObject_CallMethodObjArgs(d, PYIDENT("values"), NULL);
+    if (PY_MAJOR_VERSION >= 3)
+        return __Pyx_PyObject_CallMethod1((PyObject*)&PyDict_Type, PYIDENT("values"), d);
+    else
+        return PyDict_Values(d);
}
-#endif

//////////////////// py_dict_items.proto ////////////////////

-#if PY_MAJOR_VERSION >= 3
static CYTHON_INLINE PyObject* __Pyx_PyDict_Items(PyObject* d); /*proto*/
-#else
-#define __Pyx_PyDict_Items(d)  PyDict_Items(d)
-#endif

//////////////////// py_dict_items ////////////////////
+//@requires: ObjectHandling.c::PyObjectCallMethod

-#if PY_MAJOR_VERSION >= 3
static CYTHON_INLINE PyObject* __Pyx_PyDict_Items(PyObject* d) {
-    return PyObject_CallMethodObjArgs(d, PYIDENT("items"), NULL);
+    if (PY_MAJOR_VERSION >= 3)
+        return __Pyx_PyObject_CallMethod1((PyObject*)&PyDict_Type, PYIDENT("items"), d);
+    else
+        return PyDict_Items(d);
}
-#endif
//////////////////// py_dict_iterkeys.proto ////////////////////

static CYTHON_INLINE PyObject* __Pyx_PyDict_IterKeys(PyObject* d); /*proto*/

//////////////////// py_dict_iterkeys ////////////////////
+//@requires: ObjectHandling.c::PyObjectCallMethod

static CYTHON_INLINE PyObject* __Pyx_PyDict_IterKeys(PyObject* d) {
-    return PyObject_CallMethodObjArgs(d, (PY_MAJOR_VERSION >= 3) ? PYIDENT("keys") : PYIDENT("iterkeys"), NULL);
+    return __Pyx_PyObject_CallMethod0(d, (PY_MAJOR_VERSION >= 3) ? PYIDENT("keys") : PYIDENT("iterkeys"));
}

//////////////////// py_dict_itervalues.proto ////////////////////
@@ -327,9 +311,10 @@ static CYTHON_INLINE PyObject* __Pyx_PyDict_IterKeys(PyObject* d) {

static CYTHON_INLINE PyObject* __Pyx_PyDict_IterValues(PyObject* d); /*proto*/

//////////////////// py_dict_itervalues ////////////////////
+//@requires: ObjectHandling.c::PyObjectCallMethod

static CYTHON_INLINE PyObject* __Pyx_PyDict_IterValues(PyObject* d) {
-    return PyObject_CallMethodObjArgs(d, (PY_MAJOR_VERSION >= 3) ? PYIDENT("values") : PYIDENT("itervalues"), NULL);
+    return __Pyx_PyObject_CallMethod0(d, (PY_MAJOR_VERSION >= 3) ? PYIDENT("values") : PYIDENT("itervalues"));
}

//////////////////// py_dict_iteritems.proto ////////////////////
@@ -337,9 +322,10 @@ static CYTHON_INLINE PyObject* __Pyx_PyDict_IterValues(PyObject* d) {

static CYTHON_INLINE PyObject* __Pyx_PyDict_IterItems(PyObject* d); /*proto*/

//////////////////// py_dict_iteritems ////////////////////
+//@requires: ObjectHandling.c::PyObjectCallMethod

static CYTHON_INLINE PyObject* __Pyx_PyDict_IterItems(PyObject* d) {
-    return PyObject_CallMethodObjArgs(d, (PY_MAJOR_VERSION >= 3) ? PYIDENT("items") : PYIDENT("iteritems"), NULL);
+    return __Pyx_PyObject_CallMethod0(d, (PY_MAJOR_VERSION >= 3) ? PYIDENT("items") : PYIDENT("iteritems"));
}

//////////////////// py_dict_viewkeys.proto ////////////////////
@@ -350,9 +336,10 @@ static CYTHON_INLINE PyObject* __Pyx_PyDict_IterItems(PyObject* d) {

static CYTHON_INLINE PyObject* __Pyx_PyDict_ViewKeys(PyObject* d); /*proto*/

//////////////////// py_dict_viewkeys ////////////////////
+//@requires: ObjectHandling.c::PyObjectCallMethod

static CYTHON_INLINE PyObject* __Pyx_PyDict_ViewKeys(PyObject* d) {
-    return PyObject_CallMethodObjArgs(d, (PY_MAJOR_VERSION >= 3) ? PYIDENT("keys") : PYIDENT("viewkeys"), NULL);
+    return __Pyx_PyObject_CallMethod0(d, (PY_MAJOR_VERSION >= 3) ? PYIDENT("keys") : PYIDENT("viewkeys"));
}

//////////////////// py_dict_viewvalues.proto ////////////////////
@@ -363,9 +350,10 @@ static CYTHON_INLINE PyObject* __Pyx_PyDict_ViewKeys(PyObject* d) {

static CYTHON_INLINE PyObject* __Pyx_PyDict_ViewValues(PyObject* d); /*proto*/

//////////////////// py_dict_viewvalues ////////////////////
+//@requires: ObjectHandling.c::PyObjectCallMethod

static CYTHON_INLINE PyObject* __Pyx_PyDict_ViewValues(PyObject* d) {
-    return PyObject_CallMethodObjArgs(d, (PY_MAJOR_VERSION >= 3) ? PYIDENT("values") : PYIDENT("viewvalues"), NULL);
+    return __Pyx_PyObject_CallMethod0(d, (PY_MAJOR_VERSION >= 3) ? PYIDENT("values") : PYIDENT("viewvalues"));
}

//////////////////// py_dict_viewitems.proto ////////////////////
@@ -376,7 +364,8 @@ static CYTHON_INLINE PyObject* __Pyx_PyDict_ViewValues(PyObject* d) {

static CYTHON_INLINE PyObject* __Pyx_PyDict_ViewItems(PyObject* d); /*proto*/

//////////////////// py_dict_viewitems ////////////////////
+//@requires: ObjectHandling.c::PyObjectCallMethod

static CYTHON_INLINE PyObject* __Pyx_PyDict_ViewItems(PyObject* d) {
-    return PyObject_CallMethodObjArgs(d, (PY_MAJOR_VERSION >= 3) ? PYIDENT("items") : PYIDENT("viewitems"), NULL);
+    return __Pyx_PyObject_CallMethod0(d, (PY_MAJOR_VERSION >= 3) ? PYIDENT("items") : PYIDENT("viewitems"));
}
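On the Python side, the new helpers' detour through `PyDict_Type` corresponds to calling the method on `dict` itself rather than on the instance, which among other things cannot be redirected by a subclass override (the helpers are only applied where the operand is known to be a dict, so the visible behaviour is the same). A small demonstration of that distinction:

```python
class LoudDict(dict):
    def keys(self):
        # a subclass override that instance-level lookup would hit
        return ['hijacked']

d = LoudDict(a=1, b=2)

assert list(d.keys()) == ['hijacked']      # instance lookup sees the override
assert sorted(dict.keys(d)) == ['a', 'b']  # going through the type does not
```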
/////////////// FetchCommonType.proto ///////////////

static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type);

/////////////// FetchCommonType ///////////////

static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type) {
    static PyObject* fake_module = NULL;
    PyObject* args = NULL;
    PyTypeObject* cached_type = NULL;
    const char* cython_module = "_cython_" CYTHON_ABI;

    if (fake_module == NULL) {
        PyObject* sys_modules = PyImport_GetModuleDict();  // borrowed
        fake_module = PyDict_GetItemString(sys_modules, cython_module);  // borrowed
        if (fake_module != NULL) {
            Py_INCREF(fake_module);
        } else {
            PyObject* py_cython_module;
            args = PyTuple_New(1); if (args == NULL) goto bad;
#if PY_MAJOR_VERSION >= 3
            py_cython_module = PyUnicode_DecodeUTF8(cython_module, strlen(cython_module), NULL);
#else
            py_cython_module = PyBytes_FromString(cython_module);
#endif
            if (py_cython_module == NULL) goto bad;
            PyTuple_SET_ITEM(args, 0, py_cython_module);
            fake_module = PyObject_Call((PyObject*) &PyModule_Type, args, NULL);
            if (PyDict_SetItemString(sys_modules, cython_module, fake_module) < 0)
                goto bad;
        }
    }

    if (PyObject_HasAttrString(fake_module, type->tp_name)) {
        cached_type = (PyTypeObject*) PyObject_GetAttrString(fake_module, type->tp_name);
    } else {
        if (PyType_Ready(type) < 0) goto bad;
        if (PyObject_SetAttrString(fake_module, type->tp_name, (PyObject*) type) < 0)
            goto bad;
        cached_type = type;
    }

cleanup:
    Py_XDECREF(args);
    return cached_type;
bad:
    cached_type = NULL;
    goto cleanup;
}
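The caching idea above, restated in Python: park shared internal types in a synthetic module keyed by the ABI string in `sys.modules`, so the first Cython module to create a type registers it and later modules reuse the same type object. A sketch (the ABI tag and helper name are placeholders):

```python
import sys
import types

CYTHON_ABI = "_cython_0_20"   # placeholder ABI tag

def fetch_common_type(tp):
    fake_module = sys.modules.get(CYTHON_ABI)
    if fake_module is None:
        # create the fake module on first use and register it globally
        fake_module = types.ModuleType(CYTHON_ABI)
        sys.modules[CYTHON_ABI] = fake_module
    cached = getattr(fake_module, tp.__name__, None)
    if cached is not None:
        return cached         # another module already registered this type
    setattr(fake_module, tp.__name__, tp)
    return tp
```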
...@@ -27,6 +27,7 @@ typedef struct { ...@@ -27,6 +27,7 @@ typedef struct {
PyObject *func_name; PyObject *func_name;
PyObject *func_qualname; PyObject *func_qualname;
PyObject *func_doc; PyObject *func_doc;
PyObject *func_globals;
PyObject *func_code; PyObject *func_code;
PyObject *func_closure; PyObject *func_closure;
PyObject *func_classobj; /* No-args super() class cell */ PyObject *func_classobj; /* No-args super() class cell */
@@ -44,12 +45,13 @@ typedef struct {
 
 static PyTypeObject *__pyx_CyFunctionType = 0;
 
-#define __Pyx_CyFunction_NewEx(ml, flags, qualname, self, module, code) \
-    __Pyx_CyFunction_New(__pyx_CyFunctionType, ml, flags, qualname, self, module, code)
+#define __Pyx_CyFunction_NewEx(ml, flags, qualname, self, module, globals, code) \
+    __Pyx_CyFunction_New(__pyx_CyFunctionType, ml, flags, qualname, self, module, globals, code)
 
 static PyObject *__Pyx_CyFunction_New(PyTypeObject *, PyMethodDef *ml,
                                       int flags, PyObject* qualname,
-                                      PyObject *self, PyObject *module,
+                                      PyObject *self,
+                                      PyObject *module, PyObject *globals,
                                       PyObject* code);
 
 static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *m,
@@ -67,6 +69,7 @@ static int __Pyx_CyFunction_init(void);
 
 //////////////////// CythonFunction ////////////////////
 //@substitute: naming
+//@requires: CommonTypes.c::FetchCommonType
 
 static PyObject *
 __Pyx_CyFunction_get_doc(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *closure)
@@ -212,18 +215,11 @@ __Pyx_CyFunction_set_dict(__pyx_CyFunctionObject *op, PyObject *value)
     return 0;
 }
 
-// TODO: we implicitly use the global module to get func_globals.  This
-// will need to be passed into __Pyx_CyFunction_NewEx() if we share
-// this type across modules.  We currently avoid doing this to reduce
-// the overhead of creating a function object, and to avoid keeping a
-// reference to the module dict as long as we don't need to.
 static PyObject *
-__Pyx_CyFunction_get_globals(CYTHON_UNUSED __pyx_CyFunctionObject *op)
+__Pyx_CyFunction_get_globals(__pyx_CyFunctionObject *op)
 {
-    PyObject* dict = PyModule_GetDict(${module_cname});
-    Py_XINCREF(dict);
-    return dict;
+    Py_INCREF(op->func_globals);
+    return op->func_globals;
 }
 
 static PyObject *
@@ -399,7 +395,7 @@ static PyMethodDef __pyx_CyFunction_methods[] = {
 
 static PyObject *__Pyx_CyFunction_New(PyTypeObject *type, PyMethodDef *ml, int flags, PyObject* qualname,
-                                      PyObject *closure, PyObject *module, PyObject* code) {
+                                      PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) {
     __pyx_CyFunctionObject *op = PyObject_GC_New(__pyx_CyFunctionObject, type);
     if (op == NULL)
         return NULL;
@@ -417,6 +413,8 @@ static PyObject *__Pyx_CyFunction_New(PyTypeObject *type, PyMethodDef *ml, int flags, PyObject* qualname,
     op->func_qualname = qualname;
     op->func_doc = NULL;
     op->func_classobj = NULL;
+    op->func_globals = globals;
+    Py_INCREF(op->func_globals);
     Py_XINCREF(code);
     op->func_code = code;
     /* Dynamic Default args */
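Storing the globals dict on each function object matches what CPython itself does: a plain Python function carries a reference to its defining module's namespace in `__globals__`, independent of where the function *type* lives, which is exactly what the new `func_globals` field provides once the type is shared across modules. For reference:

```python
GREETING = "hello"

def greet():
    # GREETING is resolved through greet.__globals__ at call time
    return GREETING

# the function object owns a reference to its defining namespace
assert greet.__globals__ is globals()
assert greet.__globals__["GREETING"] == "hello"
```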
@@ -439,6 +437,7 @@ __Pyx_CyFunction_clear(__pyx_CyFunctionObject *m)
     Py_CLEAR(m->func_name);
     Py_CLEAR(m->func_qualname);
     Py_CLEAR(m->func_doc);
+    Py_CLEAR(m->func_globals);
     Py_CLEAR(m->func_code);
     Py_CLEAR(m->func_classobj);
     Py_CLEAR(m->defaults_tuple);
@@ -476,6 +475,7 @@ static int __Pyx_CyFunction_traverse(__pyx_CyFunctionObject *m, visitproc visit,
     Py_VISIT(m->func_name);
     Py_VISIT(m->func_qualname);
     Py_VISIT(m->func_doc);
+    Py_VISIT(m->func_globals);
     Py_VISIT(m->func_code);
     Py_VISIT(m->func_classobj);
     Py_VISIT(m->defaults_tuple);
@@ -634,6 +634,9 @@ static PyTypeObject __pyx_CyFunctionType_type = {
 #if PY_VERSION_HEX >= 0x02060000
     0, /*tp_version_tag*/
 #endif
+#if PY_VERSION_HEX >= 0x030400a1
+    0, /*tp_finalize*/
+#endif
 };
@@ -642,9 +645,10 @@ static int __Pyx_CyFunction_init(void) {
     // avoid a useless level of call indirection
     __pyx_CyFunctionType_type.tp_call = PyCFunction_Call;
 #endif
-    if (PyType_Ready(&__pyx_CyFunctionType_type) < 0)
+    __pyx_CyFunctionType = __Pyx_FetchCommonType(&__pyx_CyFunctionType_type);
+    if (__pyx_CyFunctionType == NULL) {
         return -1;
-    __pyx_CyFunctionType = &__pyx_CyFunctionType_type;
+    }
     return 0;
 }
@@ -703,11 +707,12 @@ typedef struct {
     PyObject *self;
 } __pyx_FusedFunctionObject;
 
-#define __pyx_FusedFunction_NewEx(ml, flags, qualname, self, module, code) \
-    __pyx_FusedFunction_New(__pyx_FusedFunctionType, ml, flags, qualname, self, module, code)
+#define __pyx_FusedFunction_NewEx(ml, flags, qualname, self, module, globals, code) \
+    __pyx_FusedFunction_New(__pyx_FusedFunctionType, ml, flags, qualname, self, module, globals, code)
 static PyObject *__pyx_FusedFunction_New(PyTypeObject *type,
                                          PyMethodDef *ml, int flags,
-                                         PyObject *qualname, PyObject *self, PyObject *module,
+                                         PyObject *qualname, PyObject *self,
+                                         PyObject *module, PyObject *globals,
                                          PyObject *code);
 
 static int __pyx_FusedFunction_clear(__pyx_FusedFunctionObject *self);
@@ -722,11 +727,12 @@ static int __pyx_FusedFunction_init(void);
 
 static PyObject *
 __pyx_FusedFunction_New(PyTypeObject *type, PyMethodDef *ml, int flags,
                         PyObject *qualname, PyObject *self,
-                        PyObject *module, PyObject *code)
+                        PyObject *module, PyObject *globals,
+                        PyObject *code)
 {
     __pyx_FusedFunctionObject *fusedfunc =
         (__pyx_FusedFunctionObject *) __Pyx_CyFunction_New(type, ml, flags, qualname,
-                                                           self, module, code);
+                                                           self, module, globals, code);
     if (!fusedfunc)
         return NULL;
@@ -784,6 +790,7 @@ __pyx_FusedFunction_descr_get(PyObject *self, PyObject *obj, PyObject *type)
                                         ((__pyx_CyFunctionObject *) func)->func_qualname,
                                         ((__pyx_CyFunctionObject *) func)->func_closure,
                                         ((PyCFunctionObject *) func)->m_module,
+                                        ((__pyx_CyFunctionObject *) func)->func_globals,
                                         ((__pyx_CyFunctionObject *) func)->func_code);
         if (!meth)
             return NULL;
@@ -1079,13 +1086,16 @@ static PyTypeObject __pyx_FusedFunctionType_type = {
 #if PY_VERSION_HEX >= 0x02060000
     0, /*tp_version_tag*/
 #endif
+#if PY_VERSION_HEX >= 0x030400a1
+    0, /*tp_finalize*/
+#endif
 };
 
 static int __pyx_FusedFunction_init(void) {
-    if (PyType_Ready(&__pyx_FusedFunctionType_type) < 0) {
+    __pyx_FusedFunctionType = __Pyx_FetchCommonType(&__pyx_FusedFunctionType_type);
+    if (__pyx_FusedFunctionType == NULL) {
         return -1;
     }
-    __pyx_FusedFunctionType = &__pyx_FusedFunctionType_type;
     return 0;
 }
@@ -14,7 +14,10 @@ static int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed,
     }
     if (none_allowed && obj == Py_None) return 1;
     else if (exact) {
-        if (Py_TYPE(obj) == type) return 1;
+        if (likely(Py_TYPE(obj) == type)) return 1;
+#if PY_MAJOR_VERSION == 2
+        else if ((type == &PyBaseString_Type) && __Pyx_PyBaseString_CheckExact(obj)) return 1;
+#endif
     }
     else {
         if (PyObject_TypeCheck(obj, type)) return 1;
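This is the runtime side of the `basestring` changelog entry above: an argument typed as `basestring` accepts exactly `str` and `unicode` instances on Python 2, but no subclasses of either. A Python sketch of the same exactness check, using Python 3's `str`/`bytes` as stand-ins for the Python 2 pair (since `basestring` no longer exists there):

```python
def arg_type_test_exact_basestring(obj):
    # "exact" match: the concrete type must be one of the two string
    # types; subclasses are rejected, mirroring the
    # __Pyx_PyBaseString_CheckExact() branch above
    return type(obj) in (str, bytes)

class MyStr(str):
    pass

assert arg_type_test_exact_basestring("abc")
assert arg_type_test_exact_basestring(b"abc")
assert not arg_type_test_exact_basestring(MyStr("abc"))
```

Rejecting subclasses is what makes the check cheap: `type(obj)` comparison, rather than a full `isinstance()` walk over the MRO.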
@@ -57,6 +57,7 @@ static int __Pyx_PyGen_FetchStopIterationValue(PyObject **pvalue);
 //@requires: Exceptions.c::SwapException
 //@requires: Exceptions.c::RaiseException
 //@requires: ObjectHandling.c::PyObjectCallMethod
+//@requires: CommonTypes.c::FetchCommonType
 
 static PyObject *__Pyx_Generator_Next(PyObject *self);
 static PyObject *__Pyx_Generator_Send(PyObject *self, PyObject *value);
@@ -460,16 +461,20 @@ static void __Pyx_Generator_dealloc(PyObject *self) {
     PyObject_GC_UnTrack(gen);
     if (gen->gi_weakreflist != NULL)
         PyObject_ClearWeakRefs(self);
-    PyObject_GC_Track(self);
 
     if (gen->resume_label > 0) {
         /* Generator is paused, so we need to close */
+        PyObject_GC_Track(self);
+#if PY_VERSION_HEX >= 0x030400a1
+        if (PyObject_CallFinalizerFromDealloc(self))
+#else
         Py_TYPE(gen)->tp_del(self);
         if (self->ob_refcnt > 0)
+#endif
             return; /* resurrected. :( */
+        PyObject_GC_UnTrack(self);
     }
 
-    PyObject_GC_UnTrack(self);
     __Pyx_Generator_clear(self);
     PyObject_GC_Del(gen);
 }
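The `PyObject_CallFinalizerFromDealloc()` branch relies on PEP 442 semantics (Python 3.4+): a finalizer runs at most once per object, even if it resurrects the object, which is what lets the old `tp_del` refcount-resurrection dance be compiled out. The same guarantee is observable from Python:

```python
import gc

class Resurrecting:
    finalizer_calls = 0
    graveyard = []

    def __del__(self):
        cls = type(self)
        cls.finalizer_calls += 1
        if not cls.graveyard:
            # resurrect once by creating a new strong reference
            cls.graveyard.append(self)

obj = Resurrecting()
del obj                         # finalizer runs, object is resurrected
Resurrecting.graveyard.clear()  # drop the last reference again
gc.collect()
# PEP 442: the finalizer is NOT run a second time on the real deallocation
assert Resurrecting.finalizer_calls == 1
```

Before 3.4, `tp_del` had to resurrect the object manually (`ob_refcnt = 1`) and undo it afterwards, exactly the code that the `#if PY_VERSION_HEX < 0x030400a1` guards below now make conditional.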
@@ -482,9 +487,11 @@ static void __Pyx_Generator_del(PyObject *self) {
     if (gen->resume_label <= 0)
         return ;
 
+#if PY_VERSION_HEX < 0x030400a1
     /* Temporarily resurrect the object. */
     assert(self->ob_refcnt == 0);
     self->ob_refcnt = 1;
+#endif
 
     /* Save the current exception, if any. */
     __Pyx_ErrFetch(&error_type, &error_value, &error_traceback);
@@ -499,6 +506,7 @@ static void __Pyx_Generator_del(PyObject *self) {
     /* Restore the saved exception. */
     __Pyx_ErrRestore(error_type, error_value, error_traceback);
 
+#if PY_VERSION_HEX < 0x030400a1
     /* Undo the temporary resurrection; can't use DECREF here, it would
      * cause a recursive call.
      */
@@ -532,6 +540,7 @@ static void __Pyx_Generator_del(PyObject *self) {
     --Py_TYPE(self)->tp_frees;
     --Py_TYPE(self)->tp_allocs;
 #endif
+#endif
 }
 
 static PyMemberDef __pyx_Generator_memberlist[] = {
@@ -578,7 +587,7 @@ static PyTypeObject __pyx_GeneratorType_type = {
     0, /*tp_getattro*/
     0, /*tp_setattro*/
     0, /*tp_as_buffer*/
-    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /*tp_flags*/
+    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_HAVE_FINALIZE, /*tp_flags*/
     0, /*tp_doc*/
     (traverseproc) __Pyx_Generator_traverse, /*tp_traverse*/
     0, /*tp_clear*/
@@ -604,10 +613,17 @@ static PyTypeObject __pyx_GeneratorType_type = {
     0, /*tp_cache*/
     0, /*tp_subclasses*/
     0, /*tp_weaklist*/
+#if PY_VERSION_HEX >= 0x030400a1
+    0, /*tp_del*/
+#else
     __Pyx_Generator_del, /*tp_del*/
+#endif
 #if PY_VERSION_HEX >= 0x02060000
     0, /*tp_version_tag*/
 #endif
+#if PY_VERSION_HEX >= 0x030400a1
+    __Pyx_Generator_del, /*tp_finalize*/
+#endif
 };
 
 static __pyx_GeneratorObject *__Pyx_Generator_New(__pyx_generator_body_t body,
@@ -638,9 +654,10 @@ static int __pyx_Generator_init(void) {
     /* on Windows, C-API functions can't be used in slots statically */
     __pyx_GeneratorType_type.tp_getattro = PyObject_GenericGetAttr;
     __pyx_GeneratorType_type.tp_iter = PyObject_SelfIter;
-    if (PyType_Ready(&__pyx_GeneratorType_type)) {
+
+    __pyx_GeneratorType = __Pyx_FetchCommonType(&__pyx_GeneratorType_type);
+    if (__pyx_GeneratorType == NULL) {
         return -1;
     }
-    __pyx_GeneratorType = &__pyx_GeneratorType_type;
     return 0;
 }
/*
* Optional optimisations of built-in functions and methods.
*
* Required replacements of builtins are in Builtins.c.
*
* General object operations and protocols are in ObjectHandling.c.
*/
/////////////// append.proto ///////////////

static CYTHON_INLINE PyObject* __Pyx_PyObject_Append(PyObject* L, PyObject* x); /*proto*/

@@ -52,6 +60,20 @@ static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) {
 #define __Pyx_ListComp_Append(L,x) PyList_Append(L,x)
 #endif
 
+//////////////////// ListExtend.proto ////////////////////
+
+static CYTHON_INLINE int __Pyx_PyList_Extend(PyObject* L, PyObject* v) {
+#if CYTHON_COMPILING_IN_CPYTHON
+    PyObject* none = _PyList_Extend((PyListObject*)L, v);
+    if (unlikely(!none))
+        return -1;
+    Py_DECREF(none);
+    return 0;
+#else
+    return PyList_SetSlice(L, PY_SSIZE_T_MAX, PY_SSIZE_T_MAX, v);
+#endif
+}
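The portable branch of `__Pyx_PyList_Extend()` calls `PyList_SetSlice(L, PY_SSIZE_T_MAX, PY_SSIZE_T_MAX, v)`, i.e. it assigns the new items to an empty slice at the very end of the list. The equivalent Python-level operation:

```python
def list_extend_via_slice(L, v):
    # assigning v to an empty slice past the end appends all of its items,
    # just like PyList_SetSlice(L, PY_SSIZE_T_MAX, PY_SSIZE_T_MAX, v)
    L[len(L):] = v
    return L

items = [1, 2]
list_extend_via_slice(items, (3, 4))
assert items == [1, 2, 3, 4]
```

On CPython the fast path uses the internal `_PyList_Extend()` instead and merely discards the `None` it returns.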
/////////////// pop.proto ///////////////

static CYTHON_INLINE PyObject* __Pyx_PyObject_Pop(PyObject* L); /*proto*/