Commit da85ee9d authored by Thomas Wouters

Merged revisions 53304-53433,53435-53450 via svnmerge from

svn+ssh://pythondev@svn.python.org/python/trunk

........
  r53304 | vinay.sajip | 2007-01-09 15:50:28 +0100 (Tue, 09 Jan 2007) | 1 line

  Bug #1627575: Added _open() method to FileHandler which can be used to reopen files. The FileHandler instance now saves the encoding (which can be None) in an attribute called "encoding".
........
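  A minimal usage sketch of the new behaviour (file name and logger name are
  illustrative):

      import logging

      # The handler now stores the encoding (which may be None) in an
      # "encoding" attribute, and _open() reuses it when reopening the file.
      handler = logging.FileHandler("app.log", mode="a", encoding="utf-8")
      handler.encoding        # -> 'utf-8'
      handler.baseFilename    # absolute path kept for reopening

      logger = logging.getLogger("demo")
      logger.addHandler(handler)
      logger.warning(u"caf\xe9")   # written through the stored encoding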
  r53305 | vinay.sajip | 2007-01-09 15:51:36 +0100 (Tue, 09 Jan 2007) | 1 line

  Added entry about addition of _open() method to logging.FileHandler.
........
  r53306 | vinay.sajip | 2007-01-09 15:54:56 +0100 (Tue, 09 Jan 2007) | 1 line

  Added a docstring
........
  r53316 | thomas.heller | 2007-01-09 20:19:33 +0100 (Tue, 09 Jan 2007) | 4 lines

  Verify the sizes of the basic ctypes data types against the struct
  module.

  Will backport to release25-maint.
........
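  A sketch of the cross-check idea (not the actual test code; the type/format
  pairs are just examples):

      import ctypes, struct

      # A mismatch here would point at a misconfigured libffi, which is what
      # the new checks guard against.
      for ctype, code in [(ctypes.c_short, "h"),
                          (ctypes.c_long, "l"),
                          (ctypes.c_double, "d")]:
          assert ctypes.sizeof(ctype) == struct.calcsize(code), ctype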
  r53340 | gustavo.niemeyer | 2007-01-10 17:13:40 +0100 (Wed, 10 Jan 2007) | 3 lines

  Mention in the int() docstring that a base zero has meaning, as
  stated in http://docs.python.org/lib/built-in-funcs.html as well.
........
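  For reference, base 0 makes int() infer the base from the string prefix,
  following the same rules as Python source literals:

      int("0x1f", 0)   # -> 31, hexadecimal prefix honoured
      int("010", 0)    # -> 8 on Python 2.x, where a leading zero means octal
      int("42", 0)     # -> 42, plain decimal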
  r53341 | gustavo.niemeyer | 2007-01-10 17:15:48 +0100 (Wed, 10 Jan 2007) | 2 lines

  Minor change in int() docstring for proper spacing.
........
  r53358 | thomas.heller | 2007-01-10 21:12:13 +0100 (Wed, 10 Jan 2007) | 1 line

  Change the ctypes version number to "1.1.0".
........
  r53361 | thomas.heller | 2007-01-10 21:51:19 +0100 (Wed, 10 Jan 2007) | 1 line

  Must change the version number in the _ctypes extension as well.
........
  r53362 | guido.van.rossum | 2007-01-11 00:12:56 +0100 (Thu, 11 Jan 2007) | 3 lines

  Fix the signature of log_error().  (A subclass that did the right thing
  was getting complaints from pychecker.)
........
  r53370 | matthias.klose | 2007-01-11 11:26:31 +0100 (Thu, 11 Jan 2007) | 2 lines

  - Make the documentation match the code and the docstring
........
  r53375 | matthias.klose | 2007-01-11 12:44:04 +0100 (Thu, 11 Jan 2007) | 2 lines

  - idle: Honor the "Cancel" action in the save dialog (Debian bug #299092).
........
  r53381 | raymond.hettinger | 2007-01-11 19:22:55 +0100 (Thu, 11 Jan 2007) | 1 line

  SF #1486663 -- Allow keyword args in subclasses of set() and frozenset().
........
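  A minimal sketch of the pattern SF #1486663 covers; before the fix, passing a
  keyword argument to such a subclass raised TypeError (the subclass name is
  made up):

      class TaggedSet(set):
          def __init__(self, iterable=(), tag=None):
              set.__init__(self, iterable)
              self.tag = tag

      s = TaggedSet([1, 2, 3], tag="demo")   # accepted now, TypeError before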
  r53388 | thomas.heller | 2007-01-11 22:18:56 +0100 (Thu, 11 Jan 2007) | 4 lines

  Fixes for 64-bit Windows: In ctypes.wintypes, correct the definitions
  of HANDLE, WPARAM, LPARAM data types.  Make parameterless foreign
  function calls work.
........
  r53390 | thomas.heller | 2007-01-11 22:23:12 +0100 (Thu, 11 Jan 2007) | 2 lines

  Correct the comments: the code is right.
........
  r53393 | brett.cannon | 2007-01-12 08:27:52 +0100 (Fri, 12 Jan 2007) | 3 lines

  Fix error where the end of a funcdesc environment was accidentally moved too
  far down.
........
  r53397 | anthony.baxter | 2007-01-12 10:35:56 +0100 (Fri, 12 Jan 2007) | 3 lines

  add parsetok.h as a dependency - previously, changing this file didn't
  cause the right files to be rebuilt.
........
  r53401 | thomas.heller | 2007-01-12 21:08:19 +0100 (Fri, 12 Jan 2007) | 3 lines

  Avoid warnings in the test suite because ctypes.wintypes cannot be
  imported on non-windows systems.
........
  r53402 | thomas.heller | 2007-01-12 21:17:34 +0100 (Fri, 12 Jan 2007) | 6 lines

  patch #1610795: BSD version of ctypes.util.find_library, by Martin
  Kammerhofer.

  release25-maint backport candidate, but the release manager has to
  decide.
........
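  Typical use of the API this patch extends to the BSDs (the library name and
  the returned soname are illustrative and platform-dependent):

      from ctypes.util import find_library
      from ctypes import CDLL

      name = find_library("m")   # e.g. "libm.so.5" on FreeBSD, "libm.so.6" on glibc
      if name is not None:       # find_library returns None when nothing is found
          libm = CDLL(name)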
  r53403 | thomas.heller | 2007-01-12 21:21:53 +0100 (Fri, 12 Jan 2007) | 3 lines

  patch #1610795: BSD version of ctypes.util.find_library, by Martin
  Kammerhofer.
........
  r53406 | brett.cannon | 2007-01-13 01:29:49 +0100 (Sat, 13 Jan 2007) | 2 lines

  Deprecate the sets module.
........
  r53407 | georg.brandl | 2007-01-13 13:31:51 +0100 (Sat, 13 Jan 2007) | 3 lines

  Fix typo.
........
  r53409 | marc-andre.lemburg | 2007-01-13 22:00:08 +0100 (Sat, 13 Jan 2007) | 16 lines

  Bump version number and change copyright year.

  Add new API linux_distribution() which supports reading the full distribution
  name and also knows how to parse LSB-style release files.

  Redirect the old dist() API to the new API (using the short distribution name
  taken from the release file filename).

  Add branch and revision to _sys_version().

  Add work-around for Cygwin to libc_ver().

  Add support for IronPython (thanks to Anthony Baxter) and make
  Jython support more robust.
........
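  A hedged sketch of the new call; the values returned depend entirely on the
  system's release files and may be empty strings:

      import platform

      # On an LSB-style system this might return something like
      # ('Ubuntu', '6.10', 'edgy').
      distname, version, distid = platform.linux_distribution()

      # The old API keeps working and is now a thin wrapper around the new one.
      platform.dist()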
  r53410 | neal.norwitz | 2007-01-13 22:22:37 +0100 (Sat, 13 Jan 2007) | 1 line

  Fix grammar in docstrings
........
  r53411 | marc-andre.lemburg | 2007-01-13 23:32:21 +0100 (Sat, 13 Jan 2007) | 9 lines

  Add parameter sys_version to _sys_version().

  Change the cache for _sys_version() to take the parameter into account.

  Add support for parsing the IronPython 1.0.1 sys.version value - even
  though it still returns '1.0.0'; the version string no longer includes
  the patch level.
........
  r53412 | peter.astrand | 2007-01-13 23:35:35 +0100 (Sat, 13 Jan 2007) | 1 line

  Fix for bug #1634343: allow specifying empty arguments on Windows
........
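  The fix lives in subprocess.list2cmdline(): an empty argument is now quoted
  so it survives the Windows command-line round trip instead of silently
  disappearing:

      import subprocess

      subprocess.list2cmdline(["ab", ""])     # -> 'ab ""'
      subprocess.list2cmdline(["a b", "c"])   # -> '"a b" c'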
  r53414 | marc-andre.lemburg | 2007-01-13 23:59:36 +0100 (Sat, 13 Jan 2007) | 14 lines

  Add Python implementation to the machine details.

  Pretty-print the Python version used for running PyBench.

  Let the user know when calibration has finished.

  [ 1563844 ] pybench support for IronPython:

  Simplify Unicode version detection.

  Make garbage collection and check interval settings optional if
  the Python implementation doesn't support these (e.g. IronPython).
........
  r53415 | marc-andre.lemburg | 2007-01-14 00:13:54 +0100 (Sun, 14 Jan 2007) | 5 lines

  Use defaults if sys.executable isn't set (e.g. on Jython).

  This change allows running PyBench under Jython.
........
  r53416 | marc-andre.lemburg | 2007-01-14 00:15:33 +0100 (Sun, 14 Jan 2007) | 3 lines

  Jython doesn't have sys.setcheckinterval() - ignore it in that case.
........
  r53420 | gerhard.haering | 2007-01-14 02:43:50 +0100 (Sun, 14 Jan 2007) | 29 lines

  Merged changes from standalone version 2.3.3. This should probably all be
  merged into the 2.5 maintenance branch:

  - self->statement was not checked while fetching data, which could
    lead to crashes if you used the pysqlite API in unusual ways.
    Closing the cursor and then continuing to fetch data was enough
    to trigger a crash.

  - Converters are stored in a converters dictionary. The converter name
    is uppercased first. The old upper-casing algorithm was wrong and has
    been replaced by a simple call to the Python string's upper() method.

  - Applied patch by Glyph Lefkowitz that fixes the problem with
    subsequent SQLITE_SCHEMA errors.

  - Improvement to the row type: rows can now be iterated over and have a
    keys() method. This considerably improves compatibility with both tuple
    and dict (see the sketch after this log entry).

  - A bugfix for the subsecond resolution in timestamps.

  - Corrected the way the flags PARSE_DECLTYPES and PARSE_COLNAMES are
    checked for. Now they work as documented.

  - gcc on Linux sucks. It exports all symbols by default in shared
    libraries, so if symbols are not unique it can lead to problems with
    symbol lookup.  pysqlite used to crash under Apache when mod_cache
    was enabled because both modules had the symbol cache_init. I fixed
    this by applying the prefix pysqlite_ almost everywhere. Sigh.
........
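  A sketch of the improved row type mentioned above (in-memory database,
  illustrative query):

      import sqlite3

      con = sqlite3.connect(":memory:")
      con.row_factory = sqlite3.Row
      row = con.execute("select 1 as a, 2 as b").fetchone()

      row.keys()          # -> ['a', 'b']
      tuple(row)          # -> (1, 2); rows are now iterable
      dict(row)           # -> {'a': 1, 'b': 2}; works because Row has keys()
      row["a"], row[0]    # lookup by name or position, as before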
  r53423 | guido.van.rossum | 2007-01-14 04:46:33 +0100 (Sun, 14 Jan 2007) | 2 lines

  Remove a dependency of this test on $COLUMNS.
........
  r53425 | ka-ping.yee | 2007-01-14 05:25:15 +0100 (Sun, 14 Jan 2007) | 3 lines

  Handle old-style instances more gracefully (display documentation on
  the relevant class instead of documentation on <type 'instance'>).
........
  r53440 | vinay.sajip | 2007-01-14 22:49:59 +0100 (Sun, 14 Jan 2007) | 1 line

  Added WatchedFileHandler (based on SF patch #1598415)
........
  r53441 | vinay.sajip | 2007-01-14 22:50:50 +0100 (Sun, 14 Jan 2007) | 1 line

  Added documentation for WatchedFileHandler (based on SF patch #1598415)
........
  r53442 | guido.van.rossum | 2007-01-15 01:02:35 +0100 (Mon, 15 Jan 2007) | 2 lines

  Doc patch matching r53434 (htonl etc. now always take/return positive ints).
........
parent 7c37353b
......@@ -989,10 +989,11 @@ The \class{FileHandler} class, located in the core \module{logging}
package, sends logging output to a disk file. It inherits the output
functionality from \class{StreamHandler}.
\begin{classdesc}{FileHandler}{filename\optional{, mode}}
\begin{classdesc}{FileHandler}{filename\optional{, mode\optional{, encoding}}}
Returns a new instance of the \class{FileHandler} class. The specified
file is opened and used as the stream for logging. If \var{mode} is
not specified, \constant{'a'} is used. By default, the file grows
not specified, \constant{'a'} is used. If \var{encoding} is not \var{None},
it is used to open the file with that encoding. By default, the file grows
indefinitely.
\end{classdesc}
......@@ -1004,6 +1005,41 @@ Closes the file.
Outputs the record to the file.
\end{methoddesc}
\subsubsection{WatchedFileHandler}
\versionadded{2.6}
The \class{WatchedFileHandler} class, located in the \module{logging.handlers}
module, is a \class{FileHandler} which watches the file it is logging to.
If the file changes, it is closed and reopened using the file name.
A file change can happen through the use of programs such as \var{newsyslog}
and \var{logrotate}, which perform log file rotation. This handler, intended
for use under Unix/Linux, watches the file to see if it has changed since the
last emit. (A file is deemed to have changed if its device or inode has
changed.) If the file has changed, the old file stream is closed and the file
is reopened to get a new stream.
This handler is not appropriate for use under Windows, because under Windows
open log files cannot be moved or renamed - logging opens the files with
exclusive locks - and so there is no need for such a handler. Furthermore,
\var{ST_INO} is not supported under Windows; \function{stat()} always returns
zero for this value.
\begin{classdesc}{WatchedFileHandler}{filename\optional{,mode\optional{,
encoding}}}
Returns a new instance of the \class{WatchedFileHandler} class. The specified
file is opened and used as the stream for logging. If \var{mode} is
not specified, \constant{'a'} is used. If \var{encoding} is not \var{None},
it is used to open the file with that encoding. By default, the file grows
indefinitely.
\end{classdesc}
\begin{methoddesc}{emit}{record}
Outputs the record to the file, but first checks to see if the file has
changed. If it has, the existing stream is flushed and closed and the file
opened again, before outputting the record to the file.
\end{methoddesc}
\subsubsection{RotatingFileHandler}
The \class{RotatingFileHandler} class, located in the \module{logging.handlers}
......
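A minimal usage sketch for the handler documented above (the log path is
hypothetical); rotation performed by an external tool such as logrotate is
picked up on the next emit:

    import logging
    import logging.handlers

    handler = logging.handlers.WatchedFileHandler("/var/log/myapp.log")
    logger = logging.getLogger("myapp")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)

    logger.info("written to the current file")
    # If logrotate renames the file between calls, the next emit notices the
    # changed device/inode, reopens /var/log/myapp.log and keeps logging.
    logger.info("written to the freshly created file after rotation")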
......@@ -185,7 +185,7 @@ or may raise the following exceptions:
The server didn't reply properly to the \samp{HELO} greeting.
\item[\exception{SMTPAuthenticationError}]
The server didn't accept the username/password combination.
\item[\exception{SMTPError}]
\item[\exception{SMTPException}]
No suitable authentication method was found.
\end{description}
\end{methoddesc}
......
......@@ -331,25 +331,25 @@ Availability: \UNIX.
\end{funcdesc}
\begin{funcdesc}{ntohl}{x}
Convert 32-bit integers from network to host byte order. On machines
Convert 32-bit positive integers from network to host byte order. On machines
where the host byte order is the same as network byte order, this is a
no-op; otherwise, it performs a 4-byte swap operation.
\end{funcdesc}
\begin{funcdesc}{ntohs}{x}
Convert 16-bit integers from network to host byte order. On machines
Convert 16-bit positive integers from network to host byte order. On machines
where the host byte order is the same as network byte order, this is a
no-op; otherwise, it performs a 2-byte swap operation.
\end{funcdesc}
\begin{funcdesc}{htonl}{x}
Convert 32-bit integers from host to network byte order. On machines
Convert 32-bit positive integers from host to network byte order. On machines
where the host byte order is the same as network byte order, this is a
no-op; otherwise, it performs a 4-byte swap operation.
\end{funcdesc}
\begin{funcdesc}{htons}{x}
Convert 16-bit integers from host to network byte order. On machines
Convert 16-bit positive integers from host to network byte order. On machines
where the host byte order is the same as network byte order, this is a
no-op; otherwise, it performs a 2-byte swap operation.
\end{funcdesc}
......
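These functions now consistently take and return positive integers; a round
trip leaves the value unchanged regardless of host byte order:

    import socket

    n = socket.htonl(0x12345678)            # positive 32-bit value, network order
    assert socket.ntohl(n) == 0x12345678
    assert socket.htons(socket.ntohs(0xABCD)) == 0xABCD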
......@@ -281,6 +281,7 @@ Execute the \class{unittest.TestSuite} instance \var{suite}.
The optional argument \var{testclass} accepts one of the test classes in the
suite so as to print out more detailed information on where the testing suite
originated from.
\end{funcdesc}
The \module{test.test_support} module defines the following classes:
......@@ -299,4 +300,3 @@ Temporarily set the environment variable \code{envvar} to the value of
Temporarily unset the environment variable \code{envvar}.
\end{methoddesc}
\end{funcdesc}
......@@ -396,7 +396,7 @@ class BaseHTTPRequestHandler(SocketServer.StreamRequestHandler):
self.log_message('"%s" %s %s',
self.requestline, str(code), str(size))
def log_error(self, *args):
def log_error(self, format, *args):
"""Log an error.
This is called when a request cannot be fulfilled. By
......@@ -408,7 +408,7 @@ class BaseHTTPRequestHandler(SocketServer.StreamRequestHandler):
"""
self.log_message(*args)
self.log_message(format, *args)
def log_message(self, format, *args):
"""Log an arbitrary message.
......
......@@ -5,7 +5,7 @@
import os as _os, sys as _sys
__version__ = "1.0.1"
__version__ = "1.1.0"
from _ctypes import Union, Structure, Array
from _ctypes import _Pointer
......@@ -133,6 +133,18 @@ elif _os.name == "posix":
from _ctypes import sizeof, byref, addressof, alignment, resize
from _ctypes import _SimpleCData
def _check_size(typ, typecode=None):
# Check sizeof(ctypes_type) against struct.calcsize. This
# should protect somewhat against a misconfigured libffi.
from struct import calcsize
if typecode is None:
# Most _type_ codes are the same as used in struct
typecode = typ._type_
actual, required = sizeof(typ), calcsize(typecode)
if actual != required:
raise SystemError("sizeof(%s) wrong: %d instead of %d" % \
(typ, actual, required))
class py_object(_SimpleCData):
_type_ = "O"
def __repr__(self):
......@@ -140,18 +152,23 @@ class py_object(_SimpleCData):
return super(py_object, self).__repr__()
except ValueError:
return "%s(<NULL>)" % type(self).__name__
_check_size(py_object, "P")
class c_short(_SimpleCData):
_type_ = "h"
_check_size(c_short)
class c_ushort(_SimpleCData):
_type_ = "H"
_check_size(c_ushort)
class c_long(_SimpleCData):
_type_ = "l"
_check_size(c_long)
class c_ulong(_SimpleCData):
_type_ = "L"
_check_size(c_ulong)
if _calcsize("i") == _calcsize("l"):
# if int and long have the same size, make c_int an alias for c_long
......@@ -160,15 +177,19 @@ if _calcsize("i") == _calcsize("l"):
else:
class c_int(_SimpleCData):
_type_ = "i"
_check_size(c_int)
class c_uint(_SimpleCData):
_type_ = "I"
_check_size(c_uint)
class c_float(_SimpleCData):
_type_ = "f"
_check_size(c_float)
class c_double(_SimpleCData):
_type_ = "d"
_check_size(c_double)
if _calcsize("l") == _calcsize("q"):
# if long and long long have the same size, make c_longlong an alias for c_long
......@@ -177,33 +198,40 @@ if _calcsize("l") == _calcsize("q"):
else:
class c_longlong(_SimpleCData):
_type_ = "q"
_check_size(c_longlong)
class c_ulonglong(_SimpleCData):
_type_ = "Q"
## def from_param(cls, val):
## return ('d', float(val), val)
## from_param = classmethod(from_param)
_check_size(c_ulonglong)
class c_ubyte(_SimpleCData):
_type_ = "B"
c_ubyte.__ctype_le__ = c_ubyte.__ctype_be__ = c_ubyte
# backward compatibility:
##c_uchar = c_ubyte
_check_size(c_ubyte)
class c_byte(_SimpleCData):
_type_ = "b"
c_byte.__ctype_le__ = c_byte.__ctype_be__ = c_byte
_check_size(c_byte)
class c_char(_SimpleCData):
_type_ = "c"
c_char.__ctype_le__ = c_char.__ctype_be__ = c_char
_check_size(c_char)
class c_char_p(_SimpleCData):
_type_ = "z"
_check_size(c_char_p, "P")
class c_void_p(_SimpleCData):
_type_ = "P"
c_voidp = c_void_p # backwards compatibility (to a bug)
_check_size(c_void_p)
# This cache maps types to pointers to them.
_pointer_type_cache = {}
......
......@@ -32,12 +32,32 @@ if sys.platform == "win32" and sizeof(c_void_p) == sizeof(c_int):
# or wrong calling convention
self.assertRaises(ValueError, IsWindow, None)
if sys.platform == "win32":
class FunctionCallTestCase(unittest.TestCase):
if is_resource_enabled("SEH"):
def test_SEH(self):
# Call functions with invalid arguments, and make sure that access violations
# are trapped and raise an exception.
# Call functions with invalid arguments, and make sure
# that access violations are trapped and raise an
# exception.
self.assertRaises(WindowsError, windll.kernel32.GetModuleHandleA, 32)
def test_noargs(self):
# This is a special case on win32 x64
windll.user32.GetDesktopWindow()
class TestWintypes(unittest.TestCase):
def test_HWND(self):
from ctypes import wintypes
self.failUnlessEqual(sizeof(wintypes.HWND), sizeof(c_void_p))
def test_PARAM(self):
from ctypes import wintypes
self.failUnlessEqual(sizeof(wintypes.WPARAM),
sizeof(c_void_p))
self.failUnlessEqual(sizeof(wintypes.LPARAM),
sizeof(c_void_p))
class Structures(unittest.TestCase):
def test_struct_by_value(self):
......
......@@ -46,23 +46,16 @@ elif os.name == "posix":
import re, tempfile, errno
def _findLib_gcc(name):
expr = '[^\(\)\s]*lib%s\.[^\(\)\s]*' % name
expr = r'[^\(\)\s]*lib%s\.[^\(\)\s]*' % re.escape(name)
fdout, ccout = tempfile.mkstemp()
os.close(fdout)
cmd = 'if type gcc &>/dev/null; then CC=gcc; else CC=cc; fi;' \
cmd = 'if type gcc >/dev/null 2>&1; then CC=gcc; else CC=cc; fi;' \
'$CC -Wl,-t -o ' + ccout + ' 2>&1 -l' + name
try:
fdout, outfile = tempfile.mkstemp()
os.close(fdout)
fd = os.popen(cmd)
trace = fd.read()
err = fd.close()
f = os.popen(cmd)
trace = f.read()
f.close()
finally:
try:
os.unlink(outfile)
except OSError as e:
if e.errno != errno.ENOENT:
raise
try:
os.unlink(ccout)
except OSError as e:
......@@ -73,29 +66,58 @@ elif os.name == "posix":
return None
return res.group(0)
def _findLib_ld(name):
expr = '/[^\(\)\s]*lib%s\.[^\(\)\s]*' % name
res = re.search(expr, os.popen('/sbin/ldconfig -p 2>/dev/null').read())
if not res:
# Hm, this works only for libs needed by the python executable.
cmd = 'ldd %s 2>/dev/null' % sys.executable
res = re.search(expr, os.popen(cmd).read())
if not res:
return None
return res.group(0)
def _get_soname(f):
# assuming GNU binutils / ELF
if not f:
return None
cmd = "objdump -p -j .dynamic 2>/dev/null " + f
res = re.search(r'\sSONAME\s+([^\s]+)', os.popen(cmd).read())
if not res:
return None
return res.group(1)
def find_library(name):
lib = _findLib_ld(name) or _findLib_gcc(name)
if not lib:
return None
return _get_soname(lib)
if (sys.platform.startswith("freebsd")
or sys.platform.startswith("openbsd")
or sys.platform.startswith("dragonfly")):
def _num_version(libname):
# "libxyz.so.MAJOR.MINOR" => [ MAJOR, MINOR ]
parts = libname.split(".")
nums = []
try:
while parts:
nums.insert(0, int(parts.pop()))
except ValueError:
pass
return nums or [ sys.maxint ]
def find_library(name):
ename = re.escape(name)
expr = r':-l%s\.\S+ => \S*/(lib%s\.\S+)' % (ename, ename)
res = re.findall(expr,
os.popen('/sbin/ldconfig -r 2>/dev/null').read())
if not res:
return _get_soname(_findLib_gcc(name))
res.sort(cmp= lambda x,y: cmp(_num_version(x), _num_version(y)))
return res[-1]
else:
def _findLib_ldconfig(name):
# XXX assuming GLIBC's ldconfig (with option -p)
expr = r'/[^\(\)\s]*lib%s\.[^\(\)\s]*' % re.escape(name)
res = re.search(expr,
os.popen('/sbin/ldconfig -p 2>/dev/null').read())
if not res:
# Hm, this works only for libs needed by the python executable.
cmd = 'ldd %s 2>/dev/null' % sys.executable
res = re.search(expr, os.popen(cmd).read())
if not res:
return None
return res.group(0)
def find_library(name):
return _get_soname(_findLib_ldconfig(name) or _findLib_gcc(name))
################################################################
# test code
......
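The BSD branch above picks the highest-versioned candidate by comparing the
numeric suffixes of the sonames. A standalone sketch of that comparison, using
sort(key=...) instead of the cmp-based sort in the patch and with made-up
library names:

    def _num_version(libname):
        # "libxyz.so.MAJOR.MINOR" -> [MAJOR, MINOR]; the real code falls back
        # to [sys.maxint] when there is no numeric suffix at all.
        parts = libname.split(".")
        nums = []
        try:
            while parts:
                nums.insert(0, int(parts.pop()))
        except ValueError:
            pass
        return nums

    candidates = ["libfoo.so.1.2", "libfoo.so.1.10", "libfoo.so.2"]
    candidates.sort(key=_num_version)
    candidates[-1]   # -> 'libfoo.so.2'; numeric sort, so 1.10 also beats 1.2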
......@@ -34,8 +34,14 @@ LPCOLESTR = LPOLESTR = OLESTR = c_wchar_p
LPCWSTR = LPWSTR = c_wchar_p
LPCSTR = LPSTR = c_char_p
WPARAM = c_uint
LPARAM = c_long
# WPARAM is defined as UINT_PTR (unsigned type)
# LPARAM is defined as LONG_PTR (signed type)
if sizeof(c_long) == sizeof(c_void_p):
WPARAM = c_ulong
LPARAM = c_long
elif sizeof(c_longlong) == sizeof(c_void_p):
WPARAM = c_ulonglong
LPARAM = c_longlong
ATOM = WORD
LANGID = WORD
......@@ -48,7 +54,7 @@ LCID = DWORD
################################################################
# HANDLE types
HANDLE = c_ulong # in the header files: void *
HANDLE = c_void_p # in the header files: void *
HACCEL = HANDLE
HBITMAP = HANDLE
......
......@@ -819,7 +819,7 @@ class EditorWindow(object):
def close(self):
reply = self.maybesave()
if reply != "cancel":
if str(reply) != "cancel":
self._close()
return reply
......
......@@ -41,8 +41,8 @@ except ImportError:
__author__ = "Vinay Sajip <vinay_sajip@red-dove.com>"
__status__ = "production"
__version__ = "0.5.0.0"
__date__ = "08 January 2007"
__version__ = "0.5.0.1"
__date__ = "09 January 2007"
#---------------------------------------------------------------------------
# Miscellaneous module data
......@@ -764,17 +764,15 @@ class FileHandler(StreamHandler):
"""
Open the specified file and use it as the stream for logging.
"""
if codecs is None:
encoding = None
if encoding is None:
stream = open(filename, mode)
else:
stream = codecs.open(filename, mode, encoding)
StreamHandler.__init__(self, stream)
#keep the absolute path, otherwise derived classes which use this
#may come a cropper when the current directory changes
if codecs is None:
encoding = None
self.baseFilename = os.path.abspath(filename)
self.mode = mode
self.encoding = encoding
stream = self._open()
StreamHandler.__init__(self, stream)
def close(self):
"""
......@@ -784,6 +782,17 @@ class FileHandler(StreamHandler):
self.stream.close()
StreamHandler.close(self)
def _open(self):
"""
Open the current base file with the (original) mode and encoding.
Return the resulting stream.
"""
if self.encoding is None:
stream = open(self.baseFilename, self.mode)
else:
stream = codecs.open(self.baseFilename, self.mode, self.encoding)
return stream
#---------------------------------------------------------------------------
# Manager classes and functions
#---------------------------------------------------------------------------
......
......@@ -32,6 +32,7 @@ try:
import cPickle as pickle
except ImportError:
import pickle
from stat import ST_DEV, ST_INO
try:
import codecs
......@@ -286,6 +287,54 @@ class TimedRotatingFileHandler(BaseRotatingHandler):
self.stream = open(self.baseFilename, 'w')
self.rolloverAt = self.rolloverAt + self.interval
class WatchedFileHandler(logging.FileHandler):
"""
A handler for logging to a file, which watches the file
to see if it has changed while in use. This can happen because of
usage of programs such as newsyslog and logrotate which perform
log file rotation. This handler, intended for use under Unix,
watches the file to see if it has changed since the last emit.
(A file has changed if its device or inode have changed.)
If it has changed, the old file stream is closed, and the file
opened to get a new stream.
This handler is not appropriate for use under Windows, because
under Windows open files cannot be moved or renamed - logging
opens the files with exclusive locks - and so there is no need
for such a handler. Furthermore, ST_INO is not supported under
Windows; stat always returns zero for this value.
This handler is based on a suggestion and patch by Chad J.
Schroeder.
"""
def __init__(self, filename, mode='a', encoding=None):
logging.FileHandler.__init__(self, filename, mode, encoding)
stat = os.stat(self.baseFilename)
self.dev, self.ino = stat[ST_DEV], stat[ST_INO]
def emit(self, record):
"""
Emit a record.
First check if the underlying file has changed, and if it
has, close the old stream and reopen the file to get the
current stream.
"""
if not os.path.exists(self.baseFilename):
stat = None
changed = 1
else:
stat = os.stat(self.baseFilename)
changed = (stat[ST_DEV] != self.dev) or (stat[ST_INO] != self.ino)
if changed:
self.stream.flush()
self.stream.close()
self.stream = self._open()
if stat is None:
stat = os.stat(self.baseFilename)
self.dev, self.ino = stat[ST_DEV], stat[ST_INO]
logging.FileHandler.emit(self, record)
class SocketHandler(logging.Handler):
"""
A handler class which writes logging records, in pickle format, to
......
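The change detection at the heart of WatchedFileHandler, reduced to a
standalone sketch (the path is hypothetical):

    import os
    from stat import ST_DEV, ST_INO

    path = "/tmp/watched.log"
    st = os.stat(path)
    dev, ino = st[ST_DEV], st[ST_INO]

    # ... later, just before writing the next record ...
    if not os.path.exists(path):
        changed = True    # the file was removed or renamed away
    else:
        st = os.stat(path)
        changed = (st[ST_DEV] != dev) or (st[ST_INO] != ino)
    # When changed is true, the handler closes its stream and calls _open()
    # to start writing to the current file again.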
......@@ -1448,6 +1448,9 @@ def locate(path, forceload=0):
text = TextDoc()
html = HTMLDoc()
class _OldStyleClass: pass
_OLD_INSTANCE_TYPE = type(_OldStyleClass())
def resolve(thing, forceload=0):
"""Given an object or a path to an object, get the object and its name."""
if isinstance(thing, str):
......@@ -1468,12 +1471,16 @@ def doc(thing, title='Python Library Documentation: %s', forceload=0):
desc += ' in ' + name[:name.rfind('.')]
elif module and module is not object:
desc += ' in module ' + module.__name__
if not (inspect.ismodule(object) or
inspect.isclass(object) or
inspect.isroutine(object) or
inspect.isgetsetdescriptor(object) or
inspect.ismemberdescriptor(object) or
isinstance(object, property)):
if type(object) is _OLD_INSTANCE_TYPE:
# If the passed object is an instance of an old-style class,
# document its available methods instead of its value.
object = object.__class__
elif not (inspect.ismodule(object) or
inspect.isclass(object) or
inspect.isroutine(object) or
inspect.isgetsetdescriptor(object) or
inspect.ismemberdescriptor(object) or
isinstance(object, property)):
# If the passed object is a piece of data or an instance,
# document its available methods instead of its value.
object = type(object)
......
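A sketch of the situation the pydoc change handles: for an old-style instance,
type() yields the generic instance type rather than the class, so documenting
type(object) would lose the class's own documentation:

    import types

    class Legacy:                        # old-style class, Python 2.x
        """Docs that pydoc should show."""

    obj = Legacy()
    type(obj) is types.InstanceType      # -> True for any old-style instance
    obj.__class__ is Legacy              # -> True; this is what pydoc documents now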
......@@ -68,7 +68,7 @@ def register_adapters_and_converters():
timepart_full = timepart.split(".")
hours, minutes, seconds = map(int, timepart_full[0].split(":"))
if len(timepart_full) == 2:
microseconds = int(float("0." + timepart_full[1]) * 1000000)
microseconds = int(timepart_full[1])
else:
microseconds = 0
......
......@@ -91,7 +91,7 @@ class RowFactoryTests(unittest.TestCase):
list),
"row is not instance of list")
def CheckSqliteRow(self):
def CheckSqliteRowIndex(self):
self.con.row_factory = sqlite.Row
row = self.con.execute("select 1 as a, 2 as b").fetchone()
self.failUnless(isinstance(row,
......@@ -110,6 +110,27 @@ class RowFactoryTests(unittest.TestCase):
self.failUnless(col1 == 1, "by index: wrong result for column 0")
self.failUnless(col2 == 2, "by index: wrong result for column 1")
def CheckSqliteRowIter(self):
"""Checks if the row object is iterable"""
self.con.row_factory = sqlite.Row
row = self.con.execute("select 1 as a, 2 as b").fetchone()
for col in row:
pass
def CheckSqliteRowAsTuple(self):
"""Checks if the row object can be converted to a tuple"""
self.con.row_factory = sqlite.Row
row = self.con.execute("select 1 as a, 2 as b").fetchone()
t = tuple(row)
def CheckSqliteRowAsDict(self):
"""Checks if the row object can be correctly converted to a dictionary"""
self.con.row_factory = sqlite.Row
row = self.con.execute("select 1 as a, 2 as b").fetchone()
d = dict(row)
self.failUnlessEqual(d["a"], row["a"])
self.failUnlessEqual(d["b"], row["b"])
def tearDown(self):
self.con.close()
......
......@@ -69,6 +69,16 @@ class RegressionTests(unittest.TestCase):
cur.execute('select 1 as "foo baz"')
self.failUnlessEqual(cur.description[0][0], "foo baz")
def CheckStatementAvailable(self):
# pysqlite up to 2.3.2 crashed on this, because the active statement handle was not checked
# before trying to fetch data from it. close() destroys the active statement ...
con = sqlite.connect(":memory:", detect_types=sqlite.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute("select 4 union select 5")
cur.close()
cur.fetchone()
cur.fetchone()
def suite():
regression_suite = unittest.makeSuite(RegressionTests, "Check")
return unittest.TestSuite((regression_suite,))
......
......@@ -112,6 +112,7 @@ class DeclTypesTests(unittest.TestCase):
# and implement two custom ones
sqlite.converters["BOOL"] = lambda x: bool(int(x))
sqlite.converters["FOO"] = DeclTypesTests.Foo
sqlite.converters["WRONG"] = lambda x: "WRONG"
def tearDown(self):
del sqlite.converters["FLOAT"]
......@@ -123,7 +124,7 @@ class DeclTypesTests(unittest.TestCase):
def CheckString(self):
# default
self.cur.execute("insert into test(s) values (?)", ("foo",))
self.cur.execute("select s from test")
self.cur.execute('select s as "s [WRONG]" from test')
row = self.cur.fetchone()
self.failUnlessEqual(row[0], "foo")
......@@ -210,26 +211,32 @@ class DeclTypesTests(unittest.TestCase):
class ColNamesTests(unittest.TestCase):
def setUp(self):
self.con = sqlite.connect(":memory:", detect_types=sqlite.PARSE_COLNAMES|sqlite.PARSE_DECLTYPES)
self.con = sqlite.connect(":memory:", detect_types=sqlite.PARSE_COLNAMES)
self.cur = self.con.cursor()
self.cur.execute("create table test(x foo)")
sqlite.converters["FOO"] = lambda x: "[%s]" % x
sqlite.converters["BAR"] = lambda x: "<%s>" % x
sqlite.converters["EXC"] = lambda x: 5/0
sqlite.converters["B1B1"] = lambda x: "MARKER"
def tearDown(self):
del sqlite.converters["FOO"]
del sqlite.converters["BAR"]
del sqlite.converters["EXC"]
del sqlite.converters["B1B1"]
self.cur.close()
self.con.close()
def CheckDeclType(self):
def CheckDeclTypeNotUsed(self):
"""
Assures that the declared type is not used when PARSE_DECLTYPES
is not set.
"""
self.cur.execute("insert into test(x) values (?)", ("xxx",))
self.cur.execute("select x from test")
val = self.cur.fetchone()[0]
self.failUnlessEqual(val, "[xxx]")
self.failUnlessEqual(val, "xxx")
def CheckNone(self):
self.cur.execute("insert into test(x) values (?)", (None,))
......@@ -247,6 +254,11 @@ class ColNamesTests(unittest.TestCase):
# whitespace should be stripped.
self.failUnlessEqual(self.cur.description[0][0], "x")
def CheckCaseInConverterName(self):
self.cur.execute("""select 'other' as "x [b1b1]\"""")
val = self.cur.fetchone()[0]
self.failUnlessEqual(val, "MARKER")
def CheckCursorDescriptionNoRow(self):
"""
cursor.description should at least provide the column name(s), even if
......@@ -340,6 +352,13 @@ class DateTimeTests(unittest.TestCase):
ts2 = self.cur.fetchone()[0]
self.failUnlessEqual(ts, ts2)
def CheckDateTimeSubSecondsFloatingPoint(self):
ts = sqlite.Timestamp(2004, 2, 14, 7, 15, 0, 510241)
self.cur.execute("insert into test(ts) values (?)", (ts,))
self.cur.execute("select ts from test")
ts2 = self.cur.fetchone()[0]
self.failUnlessEqual(ts, ts2)
def suite():
sqlite_type_suite = unittest.makeSuite(SqliteTypeTests, "Check")
decltypes_type_suite = unittest.makeSuite(DeclTypesTests, "Check")
......
......@@ -500,7 +500,7 @@ def list2cmdline(seq):
if result:
result.append(' ')
needquote = (" " in arg) or ("\t" in arg)
needquote = (" " in arg) or ("\t" in arg) or arg == ""
if needquote:
result.append('"')
......
......@@ -132,7 +132,6 @@ class AllTest(unittest.TestCase):
self.check_all("rlcompleter")
self.check_all("robotparser")
self.check_all("sched")
self.check_all("sets")
self.check_all("sgmllib")
self.check_all("shelve")
self.check_all("shlex")
......
......@@ -1500,8 +1500,16 @@ class TestHelp(BaseTest):
self.assertHelpEquals(_expected_help_long_opts_first)
def test_help_title_formatter(self):
self.parser.formatter = TitledHelpFormatter()
self.assertHelpEquals(_expected_help_title_formatter)
save = os.environ.get("COLUMNS")
try:
os.environ["COLUMNS"] = "80"
self.parser.formatter = TitledHelpFormatter()
self.assertHelpEquals(_expected_help_title_formatter)
finally:
if save is not None:
os.environ["COLUMNS"] = save
else:
del os.environ["COLUMNS"]
def test_wrap_columns(self):
# Ensure that wrapping respects $COLUMNS environment variable.
......
......@@ -476,6 +476,16 @@ class SetSubclass(set):
class TestSetSubclass(TestSet):
thetype = SetSubclass
class SetSubclassWithKeywordArgs(set):
def __init__(self, iterable=[], newarg=None):
set.__init__(self, iterable)
class TestSetSubclassWithKeywordArgs(TestSet):
def test_keywords_in_subclass(self):
'SF bug #1486663 -- this used to erroneously raise a TypeError'
SetSubclassWithKeywordArgs(newarg=1)
class TestFrozenSet(TestJointOps):
thetype = frozenset
......@@ -1454,6 +1464,7 @@ def test_main(verbose=None):
test_classes = (
TestSet,
TestSetSubclass,
TestSetSubclassWithKeywordArgs,
TestFrozenSet,
TestFrozenSetSubclass,
TestSetOfSets,
......
......@@ -430,6 +430,8 @@ class ProcessTestCase(unittest.TestCase):
'"a\\\\b c" d e')
self.assertEqual(subprocess.list2cmdline(['a\\\\b\\ c', 'd', 'e']),
'"a\\\\b\\ c" d e')
self.assertEqual(subprocess.list2cmdline(['ab', '']),
'ab ""')
def test_poll(self):
......
......@@ -484,6 +484,8 @@ Parser/metagrammar.o: $(srcdir)/Parser/metagrammar.c
Parser/tokenizer_pgen.o: $(srcdir)/Parser/tokenizer.c
Parser/pgenmain.o: $(srcdir)/Include/parsetok.h
$(AST_H): $(AST_ASDL) $(ASDLGEN_FILES)
$(ASDLGEN) -h $(AST_H_DIR) $(AST_ASDL)
......@@ -537,6 +539,7 @@ PYTHON_HEADERS= \
Include/moduleobject.h \
Include/object.h \
Include/objimpl.h \
Include/parsetok.h \
Include/patchlevel.h \
Include/pyarena.h \
Include/pydebug.h \
......
......@@ -4749,7 +4749,7 @@ init_ctypes(void)
#endif
PyModule_AddObject(m, "FUNCFLAG_CDECL", PyInt_FromLong(FUNCFLAG_CDECL));
PyModule_AddObject(m, "FUNCFLAG_PYTHONAPI", PyInt_FromLong(FUNCFLAG_PYTHONAPI));
PyModule_AddStringConstant(m, "__version__", "1.0.1");
PyModule_AddStringConstant(m, "__version__", "1.1.0");
PyModule_AddObject(m, "_memmove_addr", PyLong_FromVoidPtr(memmove));
PyModule_AddObject(m, "_memset_addr", PyLong_FromVoidPtr(memset));
......
......@@ -224,7 +224,8 @@ ffi_call(/*@dependent@*/ ffi_cif *cif,
#else
case FFI_SYSV:
/*@-usedef@*/
return ffi_call_AMD64(ffi_prep_args, &ecif, cif->bytes,
/* Function call needs at least 40 bytes stack size, on win64 AMD64 */
return ffi_call_AMD64(ffi_prep_args, &ecif, cif->bytes ? cif->bytes : 40,
cif->flags, ecif.rvalue, fn);
/*@=usedef@*/
break;
......
......@@ -25,11 +25,11 @@
#include <limits.h>
/* only used internally */
Node* new_node(PyObject* key, PyObject* data)
pysqlite_Node* pysqlite_new_node(PyObject* key, PyObject* data)
{
Node* node;
pysqlite_Node* node;
node = (Node*) (NodeType.tp_alloc(&NodeType, 0));
node = (pysqlite_Node*) (pysqlite_NodeType.tp_alloc(&pysqlite_NodeType, 0));
if (!node) {
return NULL;
}
......@@ -46,7 +46,7 @@ Node* new_node(PyObject* key, PyObject* data)
return node;
}
void node_dealloc(Node* self)
void pysqlite_node_dealloc(pysqlite_Node* self)
{
Py_DECREF(self->key);
Py_DECREF(self->data);
......@@ -54,7 +54,7 @@ void node_dealloc(Node* self)
self->ob_type->tp_free((PyObject*)self);
}
int cache_init(Cache* self, PyObject* args, PyObject* kwargs)
int pysqlite_cache_init(pysqlite_Cache* self, PyObject* args, PyObject* kwargs)
{
PyObject* factory;
int size = 10;
......@@ -86,10 +86,10 @@ int cache_init(Cache* self, PyObject* args, PyObject* kwargs)
return 0;
}
void cache_dealloc(Cache* self)
void pysqlite_cache_dealloc(pysqlite_Cache* self)
{
Node* node;
Node* delete_node;
pysqlite_Node* node;
pysqlite_Node* delete_node;
if (!self->factory) {
/* constructor failed, just get out of here */
......@@ -112,14 +112,14 @@ void cache_dealloc(Cache* self)
self->ob_type->tp_free((PyObject*)self);
}
PyObject* cache_get(Cache* self, PyObject* args)
PyObject* pysqlite_cache_get(pysqlite_Cache* self, PyObject* args)
{
PyObject* key = args;
Node* node;
Node* ptr;
pysqlite_Node* node;
pysqlite_Node* ptr;
PyObject* data;
node = (Node*)PyDict_GetItem(self->mapping, key);
node = (pysqlite_Node*)PyDict_GetItem(self->mapping, key);
if (node) {
/* an entry for this key already exists in the cache */
......@@ -186,7 +186,7 @@ PyObject* cache_get(Cache* self, PyObject* args)
return NULL;
}
node = new_node(key, data);
node = pysqlite_new_node(key, data);
if (!node) {
return NULL;
}
......@@ -211,9 +211,9 @@ PyObject* cache_get(Cache* self, PyObject* args)
return node->data;
}
PyObject* cache_display(Cache* self, PyObject* args)
PyObject* pysqlite_cache_display(pysqlite_Cache* self, PyObject* args)
{
Node* ptr;
pysqlite_Node* ptr;
PyObject* prevkey;
PyObject* nextkey;
PyObject* fmt_args;
......@@ -265,20 +265,20 @@ PyObject* cache_display(Cache* self, PyObject* args)
}
static PyMethodDef cache_methods[] = {
{"get", (PyCFunction)cache_get, METH_O,
{"get", (PyCFunction)pysqlite_cache_get, METH_O,
PyDoc_STR("Gets an entry from the cache or calls the factory function to produce one.")},
{"display", (PyCFunction)cache_display, METH_NOARGS,
{"display", (PyCFunction)pysqlite_cache_display, METH_NOARGS,
PyDoc_STR("For debugging only.")},
{NULL, NULL}
};
PyTypeObject NodeType = {
PyTypeObject pysqlite_NodeType = {
PyObject_HEAD_INIT(NULL)
0, /* ob_size */
MODULE_NAME "Node", /* tp_name */
sizeof(Node), /* tp_basicsize */
sizeof(pysqlite_Node), /* tp_basicsize */
0, /* tp_itemsize */
(destructor)node_dealloc, /* tp_dealloc */
(destructor)pysqlite_node_dealloc, /* tp_dealloc */
0, /* tp_print */
0, /* tp_getattr */
0, /* tp_setattr */
......@@ -315,13 +315,13 @@ PyTypeObject NodeType = {
0 /* tp_free */
};
PyTypeObject CacheType = {
PyTypeObject pysqlite_CacheType = {
PyObject_HEAD_INIT(NULL)
0, /* ob_size */
MODULE_NAME ".Cache", /* tp_name */
sizeof(Cache), /* tp_basicsize */
sizeof(pysqlite_Cache), /* tp_basicsize */
0, /* tp_itemsize */
(destructor)cache_dealloc, /* tp_dealloc */
(destructor)pysqlite_cache_dealloc, /* tp_dealloc */
0, /* tp_print */
0, /* tp_getattr */
0, /* tp_setattr */
......@@ -352,24 +352,24 @@ PyTypeObject CacheType = {
0, /* tp_descr_get */
0, /* tp_descr_set */
0, /* tp_dictoffset */
(initproc)cache_init, /* tp_init */
(initproc)pysqlite_cache_init, /* tp_init */
0, /* tp_alloc */
0, /* tp_new */
0 /* tp_free */
};
extern int cache_setup_types(void)
extern int pysqlite_cache_setup_types(void)
{
int rc;
NodeType.tp_new = PyType_GenericNew;
CacheType.tp_new = PyType_GenericNew;
pysqlite_NodeType.tp_new = PyType_GenericNew;
pysqlite_CacheType.tp_new = PyType_GenericNew;
rc = PyType_Ready(&NodeType);
rc = PyType_Ready(&pysqlite_NodeType);
if (rc < 0) {
return rc;
}
rc = PyType_Ready(&CacheType);
rc = PyType_Ready(&pysqlite_CacheType);
return rc;
}
......@@ -29,15 +29,15 @@
* dictionary. The list items are of type 'Node' and the dictionary has the
* nodes as values. */
typedef struct _Node
typedef struct _pysqlite_Node
{
PyObject_HEAD
PyObject* key;
PyObject* data;
long count;
struct _Node* prev;
struct _Node* next;
} Node;
struct _pysqlite_Node* prev;
struct _pysqlite_Node* next;
} pysqlite_Node;
typedef struct
{
......@@ -50,24 +50,24 @@ typedef struct
/* the factory callable */
PyObject* factory;
Node* first;
Node* last;
pysqlite_Node* first;
pysqlite_Node* last;
/* if set, decrement the factory function when the Cache is deallocated.
* this is almost always desirable, but not in the pysqlite context */
int decref_factory;
} Cache;
} pysqlite_Cache;
extern PyTypeObject NodeType;
extern PyTypeObject CacheType;
extern PyTypeObject pysqlite_NodeType;
extern PyTypeObject pysqlite_CacheType;
int node_init(Node* self, PyObject* args, PyObject* kwargs);
void node_dealloc(Node* self);
int pysqlite_node_init(pysqlite_Node* self, PyObject* args, PyObject* kwargs);
void pysqlite_node_dealloc(pysqlite_Node* self);
int cache_init(Cache* self, PyObject* args, PyObject* kwargs);
void cache_dealloc(Cache* self);
PyObject* cache_get(Cache* self, PyObject* args);
int pysqlite_cache_init(pysqlite_Cache* self, PyObject* args, PyObject* kwargs);
void pysqlite_cache_dealloc(pysqlite_Cache* self);
PyObject* pysqlite_cache_get(pysqlite_Cache* self, PyObject* args);
int cache_setup_types(void);
int pysqlite_cache_setup_types(void);
#endif
......@@ -66,7 +66,7 @@ typedef struct
/* thread identification of the thread the connection was created in */
long thread_ident;
Cache* statement_cache;
pysqlite_Cache* statement_cache;
/* A list of weak references to statements used within this connection */
PyObject* statements;
......@@ -106,24 +106,23 @@ typedef struct
PyObject* InternalError;
PyObject* ProgrammingError;
PyObject* NotSupportedError;
} Connection;
} pysqlite_Connection;
extern PyTypeObject ConnectionType;
extern PyTypeObject pysqlite_ConnectionType;
PyObject* connection_alloc(PyTypeObject* type, int aware);
void connection_dealloc(Connection* self);
PyObject* connection_cursor(Connection* self, PyObject* args, PyObject* kwargs);
PyObject* connection_close(Connection* self, PyObject* args);
PyObject* _connection_begin(Connection* self);
PyObject* connection_begin(Connection* self, PyObject* args);
PyObject* connection_commit(Connection* self, PyObject* args);
PyObject* connection_rollback(Connection* self, PyObject* args);
PyObject* connection_new(PyTypeObject* type, PyObject* args, PyObject* kw);
int connection_init(Connection* self, PyObject* args, PyObject* kwargs);
PyObject* pysqlite_connection_alloc(PyTypeObject* type, int aware);
void pysqlite_connection_dealloc(pysqlite_Connection* self);
PyObject* pysqlite_connection_cursor(pysqlite_Connection* self, PyObject* args, PyObject* kwargs);
PyObject* pysqlite_connection_close(pysqlite_Connection* self, PyObject* args);
PyObject* _pysqlite_connection_begin(pysqlite_Connection* self);
PyObject* pysqlite_connection_commit(pysqlite_Connection* self, PyObject* args);
PyObject* pysqlite_connection_rollback(pysqlite_Connection* self, PyObject* args);
PyObject* pysqlite_connection_new(PyTypeObject* type, PyObject* args, PyObject* kw);
int pysqlite_connection_init(pysqlite_Connection* self, PyObject* args, PyObject* kwargs);
int check_thread(Connection* self);
int check_connection(Connection* con);
int pysqlite_check_thread(pysqlite_Connection* self);
int pysqlite_check_connection(pysqlite_Connection* con);
int connection_setup_types(void);
int pysqlite_connection_setup_types(void);
#endif
......@@ -32,40 +32,40 @@
typedef struct
{
PyObject_HEAD
Connection* connection;
pysqlite_Connection* connection;
PyObject* description;
PyObject* row_cast_map;
int arraysize;
PyObject* lastrowid;
PyObject* rowcount;
PyObject* row_factory;
Statement* statement;
pysqlite_Statement* statement;
/* the next row to be returned, NULL if no next row available */
PyObject* next_row;
} Cursor;
} pysqlite_Cursor;
typedef enum {
STATEMENT_INVALID, STATEMENT_INSERT, STATEMENT_DELETE,
STATEMENT_UPDATE, STATEMENT_REPLACE, STATEMENT_SELECT,
STATEMENT_OTHER
} StatementKind;
} pysqlite_StatementKind;
extern PyTypeObject CursorType;
extern PyTypeObject pysqlite_CursorType;
int cursor_init(Cursor* self, PyObject* args, PyObject* kwargs);
void cursor_dealloc(Cursor* self);
PyObject* cursor_execute(Cursor* self, PyObject* args);
PyObject* cursor_executemany(Cursor* self, PyObject* args);
PyObject* cursor_getiter(Cursor *self);
PyObject* cursor_iternext(Cursor *self);
PyObject* cursor_fetchone(Cursor* self, PyObject* args);
PyObject* cursor_fetchmany(Cursor* self, PyObject* args);
PyObject* cursor_fetchall(Cursor* self, PyObject* args);
PyObject* pysqlite_noop(Connection* self, PyObject* args);
PyObject* cursor_close(Cursor* self, PyObject* args);
int pysqlite_cursor_init(pysqlite_Cursor* self, PyObject* args, PyObject* kwargs);
void pysqlite_cursor_dealloc(pysqlite_Cursor* self);
PyObject* pysqlite_cursor_execute(pysqlite_Cursor* self, PyObject* args);
PyObject* pysqlite_cursor_executemany(pysqlite_Cursor* self, PyObject* args);
PyObject* pysqlite_cursor_getiter(pysqlite_Cursor *self);
PyObject* pysqlite_cursor_iternext(pysqlite_Cursor *self);
PyObject* pysqlite_cursor_fetchone(pysqlite_Cursor* self, PyObject* args);
PyObject* pysqlite_cursor_fetchmany(pysqlite_Cursor* self, PyObject* args);
PyObject* pysqlite_cursor_fetchall(pysqlite_Cursor* self, PyObject* args);
PyObject* pysqlite_noop(pysqlite_Connection* self, PyObject* args);
PyObject* pysqlite_cursor_close(pysqlite_Cursor* self, PyObject* args);
int cursor_setup_types(void);
int pysqlite_cursor_setup_types(void);
#define UNKNOWN (-1)
#endif
......@@ -57,7 +57,7 @@ microprotocols_add(PyTypeObject *type, PyObject *proto, PyObject *cast)
PyObject* key;
int rc;
if (proto == NULL) proto = (PyObject*)&SQLitePrepareProtocolType;
if (proto == NULL) proto = (PyObject*)&pysqlite_PrepareProtocolType;
key = Py_BuildValue("(OO)", (PyObject*)type, proto);
if (!key) {
......@@ -78,7 +78,7 @@ microprotocols_adapt(PyObject *obj, PyObject *proto, PyObject *alt)
PyObject *adapter, *key;
/* we don't check for exact type conformance as specified in PEP 246
because the SQLitePrepareProtocolType type is abstract and there is no
because the pysqlite_PrepareProtocolType type is abstract and there is no
way to get a quotable object to be its instance */
/* look for an adapter in the registry */
......@@ -125,17 +125,17 @@ microprotocols_adapt(PyObject *obj, PyObject *proto, PyObject *alt)
}
/* else set the right exception and return NULL */
PyErr_SetString(ProgrammingError, "can't adapt");
PyErr_SetString(pysqlite_ProgrammingError, "can't adapt");
return NULL;
}
/** module-level functions **/
PyObject *
psyco_microprotocols_adapt(Cursor *self, PyObject *args)
psyco_microprotocols_adapt(pysqlite_Cursor *self, PyObject *args)
{
PyObject *obj, *alt = NULL;
PyObject *proto = (PyObject*)&SQLitePrepareProtocolType;
PyObject *proto = (PyObject*)&pysqlite_PrepareProtocolType;
if (!PyArg_ParseTuple(args, "O|OO", &obj, &proto, &alt)) return NULL;
return microprotocols_adapt(obj, proto, alt);
......
......@@ -52,7 +52,7 @@ extern PyObject *microprotocols_adapt(
PyObject *obj, PyObject *proto, PyObject *alt);
extern PyObject *
psyco_microprotocols_adapt(Cursor* self, PyObject *args);
psyco_microprotocols_adapt(pysqlite_Cursor* self, PyObject *args);
#define psyco_microprotocols_adapt_doc \
"adapt(obj, protocol, alternate) -> adapt obj to given protocol. Non-standard."
......
......@@ -35,9 +35,9 @@
/* static objects at module-level */
PyObject* Error, *Warning, *InterfaceError, *DatabaseError, *InternalError,
*OperationalError, *ProgrammingError, *IntegrityError, *DataError,
*NotSupportedError, *OptimizedUnicode;
PyObject* pysqlite_Error, *pysqlite_Warning, *pysqlite_InterfaceError, *pysqlite_DatabaseError,
*pysqlite_InternalError, *pysqlite_OperationalError, *pysqlite_ProgrammingError,
*pysqlite_IntegrityError, *pysqlite_DataError, *pysqlite_NotSupportedError, *pysqlite_OptimizedUnicode;
PyObject* converters;
int _enable_callback_tracebacks;
......@@ -67,7 +67,7 @@ static PyObject* module_connect(PyObject* self, PyObject* args, PyObject*
}
if (factory == NULL) {
factory = (PyObject*)&ConnectionType;
factory = (PyObject*)&pysqlite_ConnectionType;
}
result = PyObject_Call(factory, args, kwargs);
......@@ -115,7 +115,7 @@ static PyObject* module_enable_shared_cache(PyObject* self, PyObject* args, PyOb
rc = sqlite3_enable_shared_cache(do_enable);
if (rc != SQLITE_OK) {
PyErr_SetString(OperationalError, "Changing the shared_cache flag failed");
PyErr_SetString(pysqlite_OperationalError, "Changing the shared_cache flag failed");
return NULL;
} else {
Py_INCREF(Py_None);
......@@ -133,7 +133,7 @@ static PyObject* module_register_adapter(PyObject* self, PyObject* args, PyObjec
return NULL;
}
microprotocols_add(type, (PyObject*)&SQLitePrepareProtocolType, caster);
microprotocols_add(type, (PyObject*)&pysqlite_PrepareProtocolType, caster);
Py_INCREF(Py_None);
return Py_None;
......@@ -141,36 +141,29 @@ static PyObject* module_register_adapter(PyObject* self, PyObject* args, PyObjec
static PyObject* module_register_converter(PyObject* self, PyObject* args, PyObject* kwargs)
{
char* orig_name;
char* name = NULL;
char* c;
PyObject* orig_name;
PyObject* name = NULL;
PyObject* callable;
PyObject* retval = NULL;
if (!PyArg_ParseTuple(args, "sO", &orig_name, &callable)) {
if (!PyArg_ParseTuple(args, "SO", &orig_name, &callable)) {
return NULL;
}
/* convert the name to lowercase */
name = PyMem_Malloc(strlen(orig_name) + 2);
/* convert the name to upper case */
name = PyObject_CallMethod(orig_name, "upper", "");
if (!name) {
goto error;
}
strcpy(name, orig_name);
for (c = name; *c != (char)0; c++) {
*c = (*c) & 0xDF;
}
if (PyDict_SetItemString(converters, name, callable) != 0) {
if (PyDict_SetItem(converters, name, callable) != 0) {
goto error;
}
Py_INCREF(Py_None);
retval = Py_None;
error:
if (name) {
PyMem_Free(name);
}
Py_XDECREF(name);
return retval;
}
......@@ -184,7 +177,7 @@ static PyObject* enable_callback_tracebacks(PyObject* self, PyObject* args, PyOb
return Py_None;
}
void converters_init(PyObject* dict)
static void converters_init(PyObject* dict)
{
converters = PyDict_New();
if (!converters) {
......@@ -265,28 +258,28 @@ PyMODINIT_FUNC init_sqlite3(void)
module = Py_InitModule("_sqlite3", module_methods);
if (!module ||
(row_setup_types() < 0) ||
(cursor_setup_types() < 0) ||
(connection_setup_types() < 0) ||
(cache_setup_types() < 0) ||
(statement_setup_types() < 0) ||
(prepare_protocol_setup_types() < 0)
(pysqlite_row_setup_types() < 0) ||
(pysqlite_cursor_setup_types() < 0) ||
(pysqlite_connection_setup_types() < 0) ||
(pysqlite_cache_setup_types() < 0) ||
(pysqlite_statement_setup_types() < 0) ||
(pysqlite_prepare_protocol_setup_types() < 0)
) {
return;
}
Py_INCREF(&ConnectionType);
PyModule_AddObject(module, "Connection", (PyObject*) &ConnectionType);
Py_INCREF(&CursorType);
PyModule_AddObject(module, "Cursor", (PyObject*) &CursorType);
Py_INCREF(&CacheType);
PyModule_AddObject(module, "Statement", (PyObject*)&StatementType);
Py_INCREF(&StatementType);
PyModule_AddObject(module, "Cache", (PyObject*) &CacheType);
Py_INCREF(&SQLitePrepareProtocolType);
PyModule_AddObject(module, "PrepareProtocol", (PyObject*) &SQLitePrepareProtocolType);
Py_INCREF(&RowType);
PyModule_AddObject(module, "Row", (PyObject*) &RowType);
Py_INCREF(&pysqlite_ConnectionType);
PyModule_AddObject(module, "Connection", (PyObject*) &pysqlite_ConnectionType);
Py_INCREF(&pysqlite_CursorType);
PyModule_AddObject(module, "Cursor", (PyObject*) &pysqlite_CursorType);
Py_INCREF(&pysqlite_CacheType);
PyModule_AddObject(module, "Statement", (PyObject*)&pysqlite_StatementType);
Py_INCREF(&pysqlite_StatementType);
PyModule_AddObject(module, "Cache", (PyObject*) &pysqlite_CacheType);
Py_INCREF(&pysqlite_PrepareProtocolType);
PyModule_AddObject(module, "PrepareProtocol", (PyObject*) &pysqlite_PrepareProtocolType);
Py_INCREF(&pysqlite_RowType);
PyModule_AddObject(module, "Row", (PyObject*) &pysqlite_RowType);
if (!(dict = PyModule_GetDict(module))) {
goto error;
......@@ -294,67 +287,67 @@ PyMODINIT_FUNC init_sqlite3(void)
/*** Create DB-API Exception hierarchy */
if (!(Error = PyErr_NewException(MODULE_NAME ".Error", PyExc_StandardError, NULL))) {
if (!(pysqlite_Error = PyErr_NewException(MODULE_NAME ".Error", PyExc_StandardError, NULL))) {
goto error;
}
PyDict_SetItemString(dict, "Error", Error);
PyDict_SetItemString(dict, "Error", pysqlite_Error);
if (!(Warning = PyErr_NewException(MODULE_NAME ".Warning", PyExc_StandardError, NULL))) {
if (!(pysqlite_Warning = PyErr_NewException(MODULE_NAME ".Warning", PyExc_StandardError, NULL))) {
goto error;
}
PyDict_SetItemString(dict, "Warning", Warning);
PyDict_SetItemString(dict, "Warning", pysqlite_Warning);
/* Error subclasses */
if (!(InterfaceError = PyErr_NewException(MODULE_NAME ".InterfaceError", Error, NULL))) {
if (!(pysqlite_InterfaceError = PyErr_NewException(MODULE_NAME ".InterfaceError", pysqlite_Error, NULL))) {
goto error;
}
PyDict_SetItemString(dict, "InterfaceError", InterfaceError);
PyDict_SetItemString(dict, "InterfaceError", pysqlite_InterfaceError);
if (!(DatabaseError = PyErr_NewException(MODULE_NAME ".DatabaseError", Error, NULL))) {
if (!(pysqlite_DatabaseError = PyErr_NewException(MODULE_NAME ".DatabaseError", pysqlite_Error, NULL))) {
goto error;
}
PyDict_SetItemString(dict, "DatabaseError", DatabaseError);
PyDict_SetItemString(dict, "DatabaseError", pysqlite_DatabaseError);
/* DatabaseError subclasses */
/* pysqlite_DatabaseError subclasses */
if (!(InternalError = PyErr_NewException(MODULE_NAME ".InternalError", DatabaseError, NULL))) {
if (!(pysqlite_InternalError = PyErr_NewException(MODULE_NAME ".InternalError", pysqlite_DatabaseError, NULL))) {
goto error;
}
PyDict_SetItemString(dict, "InternalError", InternalError);
PyDict_SetItemString(dict, "InternalError", pysqlite_InternalError);
if (!(OperationalError = PyErr_NewException(MODULE_NAME ".OperationalError", DatabaseError, NULL))) {
if (!(pysqlite_OperationalError = PyErr_NewException(MODULE_NAME ".OperationalError", pysqlite_DatabaseError, NULL))) {
goto error;
}
PyDict_SetItemString(dict, "OperationalError", OperationalError);
PyDict_SetItemString(dict, "OperationalError", pysqlite_OperationalError);
if (!(ProgrammingError = PyErr_NewException(MODULE_NAME ".ProgrammingError", DatabaseError, NULL))) {
if (!(pysqlite_ProgrammingError = PyErr_NewException(MODULE_NAME ".ProgrammingError", pysqlite_DatabaseError, NULL))) {
goto error;
}
PyDict_SetItemString(dict, "ProgrammingError", ProgrammingError);
PyDict_SetItemString(dict, "ProgrammingError", pysqlite_ProgrammingError);
if (!(IntegrityError = PyErr_NewException(MODULE_NAME ".IntegrityError", DatabaseError,NULL))) {
if (!(pysqlite_IntegrityError = PyErr_NewException(MODULE_NAME ".IntegrityError", pysqlite_DatabaseError,NULL))) {
goto error;
}
PyDict_SetItemString(dict, "IntegrityError", IntegrityError);
PyDict_SetItemString(dict, "IntegrityError", pysqlite_IntegrityError);
if (!(DataError = PyErr_NewException(MODULE_NAME ".DataError", DatabaseError, NULL))) {
if (!(pysqlite_DataError = PyErr_NewException(MODULE_NAME ".DataError", pysqlite_DatabaseError, NULL))) {
goto error;
}
PyDict_SetItemString(dict, "DataError", DataError);
PyDict_SetItemString(dict, "DataError", pysqlite_DataError);
if (!(NotSupportedError = PyErr_NewException(MODULE_NAME ".NotSupportedError", DatabaseError, NULL))) {
if (!(pysqlite_NotSupportedError = PyErr_NewException(MODULE_NAME ".NotSupportedError", pysqlite_DatabaseError, NULL))) {
goto error;
}
PyDict_SetItemString(dict, "NotSupportedError", NotSupportedError);
PyDict_SetItemString(dict, "NotSupportedError", pysqlite_NotSupportedError);
/* We just need "something" unique for OptimizedUnicode. It does not really
/* We just need "something" unique for pysqlite_OptimizedUnicode. It does not really
* need to be a string subclass. Just anything that can act as a special
* marker for us. So I pulled PyCell_Type out of my magic hat.
*/
Py_INCREF((PyObject*)&PyCell_Type);
OptimizedUnicode = (PyObject*)&PyCell_Type;
PyDict_SetItemString(dict, "OptimizedUnicode", OptimizedUnicode);
pysqlite_OptimizedUnicode = (PyObject*)&PyCell_Type;
PyDict_SetItemString(dict, "OptimizedUnicode", pysqlite_OptimizedUnicode);
/* Set integer constants */
for (i = 0; _int_constants[i].constant_name != 0; i++) {
......
......@@ -25,20 +25,20 @@
#define PYSQLITE_MODULE_H
#include "Python.h"
#define PYSQLITE_VERSION "2.3.2"
extern PyObject* Error;
extern PyObject* Warning;
extern PyObject* InterfaceError;
extern PyObject* DatabaseError;
extern PyObject* InternalError;
extern PyObject* OperationalError;
extern PyObject* ProgrammingError;
extern PyObject* IntegrityError;
extern PyObject* DataError;
extern PyObject* NotSupportedError;
extern PyObject* OptimizedUnicode;
#define PYSQLITE_VERSION "2.3.3"
extern PyObject* pysqlite_Error;
extern PyObject* pysqlite_Warning;
extern PyObject* pysqlite_InterfaceError;
extern PyObject* pysqlite_DatabaseError;
extern PyObject* pysqlite_InternalError;
extern PyObject* pysqlite_OperationalError;
extern PyObject* pysqlite_ProgrammingError;
extern PyObject* pysqlite_IntegrityError;
extern PyObject* pysqlite_DataError;
extern PyObject* pysqlite_NotSupportedError;
extern PyObject* pysqlite_OptimizedUnicode;
/* the functions time.time() and time.sleep() */
extern PyObject* time_time;
......
......@@ -23,23 +23,23 @@
#include "prepare_protocol.h"
int prepare_protocol_init(SQLitePrepareProtocol* self, PyObject* args, PyObject* kwargs)
int pysqlite_prepare_protocol_init(pysqlite_PrepareProtocol* self, PyObject* args, PyObject* kwargs)
{
return 0;
}
void prepare_protocol_dealloc(SQLitePrepareProtocol* self)
void pysqlite_prepare_protocol_dealloc(pysqlite_PrepareProtocol* self)
{
self->ob_type->tp_free((PyObject*)self);
}
PyTypeObject SQLitePrepareProtocolType= {
PyTypeObject pysqlite_PrepareProtocolType= {
PyObject_HEAD_INIT(NULL)
0, /* ob_size */
MODULE_NAME ".PrepareProtocol", /* tp_name */
sizeof(SQLitePrepareProtocol), /* tp_basicsize */
sizeof(pysqlite_PrepareProtocol), /* tp_basicsize */
0, /* tp_itemsize */
(destructor)prepare_protocol_dealloc, /* tp_dealloc */
(destructor)pysqlite_prepare_protocol_dealloc, /* tp_dealloc */
0, /* tp_print */
0, /* tp_getattr */
0, /* tp_setattr */
@@ -70,15 +70,15 @@ PyTypeObject SQLitePrepareProtocolType= {
0, /* tp_descr_get */
0, /* tp_descr_set */
0, /* tp_dictoffset */
(initproc)prepare_protocol_init, /* tp_init */
(initproc)pysqlite_prepare_protocol_init, /* tp_init */
0, /* tp_alloc */
0, /* tp_new */
0 /* tp_free */
};
extern int prepare_protocol_setup_types(void)
extern int pysqlite_prepare_protocol_setup_types(void)
{
SQLitePrepareProtocolType.tp_new = PyType_GenericNew;
SQLitePrepareProtocolType.ob_type= &PyType_Type;
return PyType_Ready(&SQLitePrepareProtocolType);
pysqlite_PrepareProtocolType.tp_new = PyType_GenericNew;
pysqlite_PrepareProtocolType.ob_type= &PyType_Type;
return PyType_Ready(&pysqlite_PrepareProtocolType);
}
@@ -28,14 +28,14 @@
typedef struct
{
PyObject_HEAD
} SQLitePrepareProtocol;
} pysqlite_PrepareProtocol;
extern PyTypeObject SQLitePrepareProtocolType;
extern PyTypeObject pysqlite_PrepareProtocolType;
int prepare_protocol_init(SQLitePrepareProtocol* self, PyObject* args, PyObject* kwargs);
void prepare_protocol_dealloc(SQLitePrepareProtocol* self);
int pysqlite_prepare_protocol_init(pysqlite_PrepareProtocol* self, PyObject* args, PyObject* kwargs);
void pysqlite_prepare_protocol_dealloc(pysqlite_PrepareProtocol* self);
int prepare_protocol_setup_types(void);
int pysqlite_prepare_protocol_setup_types(void);
#define UNKNOWN (-1)
#endif
@@ -25,7 +25,7 @@
#include "cursor.h"
#include "sqlitecompat.h"
void row_dealloc(Row* self)
void pysqlite_row_dealloc(pysqlite_Row* self)
{
Py_XDECREF(self->data);
Py_XDECREF(self->description);
@@ -33,10 +33,10 @@ void row_dealloc(Row* self)
self->ob_type->tp_free((PyObject*)self);
}
int row_init(Row* self, PyObject* args, PyObject* kwargs)
int pysqlite_row_init(pysqlite_Row* self, PyObject* args, PyObject* kwargs)
{
PyObject* data;
Cursor* cursor;
pysqlite_Cursor* cursor;
self->data = 0;
self->description = 0;
@@ -45,7 +45,7 @@ int row_init(Row* self, PyObject* args, PyObject* kwargs)
return -1;
}
if (!PyObject_IsInstance((PyObject*)cursor, (PyObject*)&CursorType)) {
if (!PyObject_IsInstance((PyObject*)cursor, (PyObject*)&pysqlite_CursorType)) {
PyErr_SetString(PyExc_TypeError, "instance of cursor required for first argument");
return -1;
}
@@ -64,7 +64,7 @@ int row_init(Row* self, PyObject* args, PyObject* kwargs)
return 0;
}
PyObject* row_subscript(Row* self, PyObject* idx)
PyObject* pysqlite_row_subscript(pysqlite_Row* self, PyObject* idx)
{
long _idx;
char* key;
@@ -133,32 +133,63 @@ PyObject* row_subscript(Row* self, PyObject* idx)
}
}
Py_ssize_t row_length(Row* self, PyObject* args, PyObject* kwargs)
Py_ssize_t pysqlite_row_length(pysqlite_Row* self, PyObject* args, PyObject* kwargs)
{
return PyTuple_GET_SIZE(self->data);
}
static int row_print(Row* self, FILE *fp, int flags)
PyObject* pysqlite_row_keys(pysqlite_Row* self, PyObject* args, PyObject* kwargs)
{
PyObject* list;
int nitems, i;
list = PyList_New(0);
if (!list) {
return NULL;
}
nitems = PyTuple_Size(self->description);
for (i = 0; i < nitems; i++) {
if (PyList_Append(list, PyTuple_GET_ITEM(PyTuple_GET_ITEM(self->description, i), 0)) != 0) {
Py_DECREF(list);
return NULL;
}
}
return list;
}
static int pysqlite_row_print(pysqlite_Row* self, FILE *fp, int flags)
{
return (&PyTuple_Type)->tp_print(self->data, fp, flags);
}
static PyObject* pysqlite_iter(pysqlite_Row* self)
{
return PyObject_GetIter(self->data);
}
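Together with the keys() method added above, the new pysqlite_iter() slot lets Row objects be indexed by column name, asked for their column names, and iterated over. A usage sketch, assuming sqlite3.Row is installed as the row factory:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.row_factory = sqlite3.Row
    row = con.execute("select 1 as a, 2 as b").fetchone()

    print row.keys()          # ['a', 'b'], produced by pysqlite_row_keys()
    print row["a"], row[1]    # subscripting by column name or by index
    for value in row:         # iteration goes through the new tp_iter slot
        print value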
PyMappingMethods row_as_mapping = {
/* mp_length */ (lenfunc)row_length,
/* mp_subscript */ (binaryfunc)row_subscript,
PyMappingMethods pysqlite_row_as_mapping = {
/* mp_length */ (lenfunc)pysqlite_row_length,
/* mp_subscript */ (binaryfunc)pysqlite_row_subscript,
/* mp_ass_subscript */ (objobjargproc)0,
};
static PyMethodDef pysqlite_row_methods[] = {
{"keys", (PyCFunction)pysqlite_row_keys, METH_NOARGS,
PyDoc_STR("Returns the keys of the row.")},
{NULL, NULL}
};
PyTypeObject RowType = {
PyTypeObject pysqlite_RowType = {
PyObject_HEAD_INIT(NULL)
0, /* ob_size */
MODULE_NAME ".Row", /* tp_name */
sizeof(Row), /* tp_basicsize */
sizeof(pysqlite_Row), /* tp_basicsize */
0, /* tp_itemsize */
(destructor)row_dealloc, /* tp_dealloc */
(printfunc)row_print, /* tp_print */
(destructor)pysqlite_row_dealloc, /* tp_dealloc */
(printfunc)pysqlite_row_print, /* tp_print */
0, /* tp_getattr */
0, /* tp_setattr */
0, /* tp_compare */
@@ -174,13 +205,13 @@ PyTypeObject RowType = {
0, /* tp_as_buffer */
Py_TPFLAGS_DEFAULT|Py_TPFLAGS_BASETYPE, /* tp_flags */
0, /* tp_doc */
0, /* tp_traverse */
(traverseproc)0, /* tp_traverse */
0, /* tp_clear */
0, /* tp_richcompare */
0, /* tp_weaklistoffset */
0, /* tp_iter */
(getiterfunc)pysqlite_iter, /* tp_iter */
0, /* tp_iternext */
0, /* tp_methods */
pysqlite_row_methods, /* tp_methods */
0, /* tp_members */
0, /* tp_getset */
0, /* tp_base */
@@ -188,15 +219,15 @@ PyTypeObject RowType = {
0, /* tp_descr_get */
0, /* tp_descr_set */
0, /* tp_dictoffset */
(initproc)row_init, /* tp_init */
(initproc)pysqlite_row_init, /* tp_init */
0, /* tp_alloc */
0, /* tp_new */
0 /* tp_free */
};
extern int row_setup_types(void)
extern int pysqlite_row_setup_types(void)
{
RowType.tp_new = PyType_GenericNew;
RowType.tp_as_mapping = &row_as_mapping;
return PyType_Ready(&RowType);
pysqlite_RowType.tp_new = PyType_GenericNew;
pysqlite_RowType.tp_as_mapping = &pysqlite_row_as_mapping;
return PyType_Ready(&pysqlite_RowType);
}
@@ -30,10 +30,10 @@ typedef struct _Row
PyObject_HEAD
PyObject* data;
PyObject* description;
} Row;
} pysqlite_Row;
extern PyTypeObject RowType;
extern PyTypeObject pysqlite_RowType;
int row_setup_types(void);
int pysqlite_row_setup_types(void);
#endif
@@ -29,7 +29,7 @@
#include "sqlitecompat.h"
/* prototypes */
int check_remaining_sql(const char* tail);
static int pysqlite_check_remaining_sql(const char* tail);
typedef enum {
LINECOMMENT_1,
@@ -40,7 +40,7 @@ typedef enum {
NORMAL
} parse_remaining_sql_state;
int statement_create(Statement* self, Connection* connection, PyObject* sql)
int pysqlite_statement_create(pysqlite_Statement* self, pysqlite_Connection* connection, PyObject* sql)
{
const char* tail;
int rc;
@@ -77,7 +77,7 @@ int statement_create(Statement* self, Connection* connection, PyObject* sql)
self->db = connection->db;
if (rc == SQLITE_OK && check_remaining_sql(tail)) {
if (rc == SQLITE_OK && pysqlite_check_remaining_sql(tail)) {
(void)sqlite3_finalize(self->st);
self->st = NULL;
rc = PYSQLITE_TOO_MUCH_SQL;
@@ -86,7 +86,7 @@ int statement_create(Statement* self, Connection* connection, PyObject* sql)
return rc;
}
int statement_bind_parameter(Statement* self, int pos, PyObject* parameter)
int pysqlite_statement_bind_parameter(pysqlite_Statement* self, int pos, PyObject* parameter)
{
int rc = SQLITE_OK;
long longval;
@@ -133,7 +133,7 @@ int statement_bind_parameter(Statement* self, int pos, PyObject* parameter)
return rc;
}
void statement_bind_parameters(Statement* self, PyObject* parameters)
void pysqlite_statement_bind_parameters(pysqlite_Statement* self, PyObject* parameters)
{
PyObject* current_param;
PyObject* adapted;
@@ -154,19 +154,19 @@ void statement_bind_parameters(Statement* self, PyObject* parameters)
binding_name = sqlite3_bind_parameter_name(self->st, i);
Py_END_ALLOW_THREADS
if (!binding_name) {
PyErr_Format(ProgrammingError, "Binding %d has no name, but you supplied a dictionary (which has only names).", i);
PyErr_Format(pysqlite_ProgrammingError, "Binding %d has no name, but you supplied a dictionary (which has only names).", i);
return;
}
binding_name++; /* skip first char (the colon) */
current_param = PyDict_GetItemString(parameters, binding_name);
if (!current_param) {
PyErr_Format(ProgrammingError, "You did not supply a value for binding %d.", i);
PyErr_Format(pysqlite_ProgrammingError, "You did not supply a value for binding %d.", i);
return;
}
Py_INCREF(current_param);
adapted = microprotocols_adapt(current_param, (PyObject*)&SQLitePrepareProtocolType, NULL);
adapted = microprotocols_adapt(current_param, (PyObject*)&pysqlite_PrepareProtocolType, NULL);
if (adapted) {
Py_DECREF(current_param);
} else {
@@ -174,11 +174,11 @@ void statement_bind_parameters(Statement* self, PyObject* parameters)
adapted = current_param;
}
rc = statement_bind_parameter(self, i, adapted);
rc = pysqlite_statement_bind_parameter(self, i, adapted);
Py_DECREF(adapted);
if (rc != SQLITE_OK) {
PyErr_Format(InterfaceError, "Error binding parameter :%s - probably unsupported type.", binding_name);
PyErr_Format(pysqlite_InterfaceError, "Error binding parameter :%s - probably unsupported type.", binding_name);
return;
}
}
@@ -186,7 +186,7 @@ void statement_bind_parameters(Statement* self, PyObject* parameters)
/* parameters passed as sequence */
num_params = PySequence_Length(parameters);
if (num_params != num_params_needed) {
PyErr_Format(ProgrammingError, "Incorrect number of bindings supplied. The current statement uses %d, and there are %d supplied.",
PyErr_Format(pysqlite_ProgrammingError, "Incorrect number of bindings supplied. The current statement uses %d, and there are %d supplied.",
num_params_needed, num_params);
return;
}
@@ -195,7 +195,7 @@ void statement_bind_parameters(Statement* self, PyObject* parameters)
if (!current_param) {
return;
}
adapted = microprotocols_adapt(current_param, (PyObject*)&SQLitePrepareProtocolType, NULL);
adapted = microprotocols_adapt(current_param, (PyObject*)&pysqlite_PrepareProtocolType, NULL);
if (adapted) {
Py_DECREF(current_param);
@@ -204,18 +204,18 @@ void statement_bind_parameters(Statement* self, PyObject* parameters)
adapted = current_param;
}
rc = statement_bind_parameter(self, i + 1, adapted);
rc = pysqlite_statement_bind_parameter(self, i + 1, adapted);
Py_DECREF(adapted);
if (rc != SQLITE_OK) {
PyErr_Format(InterfaceError, "Error binding parameter %d - probably unsupported type.", i);
PyErr_Format(pysqlite_InterfaceError, "Error binding parameter %d - probably unsupported type.", i);
return;
}
}
}
}
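The two branches above correspond to the two DB-API parameter styles: a dictionary feeds the :name bindings, a sequence feeds the positional ? placeholders. A hedged Python-level illustration:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("create table t(x, y)")

    # sequence: each value is bound by position (i + 1 in the loop above)
    con.execute("insert into t(x, y) values (?, ?)", (1, 2))

    # dictionary: each :name binding is looked up by name, colon stripped
    con.execute("insert into t(x, y) values (:x, :y)", {"x": 3, "y": 4})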
int statement_recompile(Statement* self, PyObject* params)
int pysqlite_statement_recompile(pysqlite_Statement* self, PyObject* params)
{
const char* tail;
int rc;
@@ -250,7 +250,7 @@ int statement_recompile(Statement* self, PyObject* params)
return rc;
}
int statement_finalize(Statement* self)
int pysqlite_statement_finalize(pysqlite_Statement* self)
{
int rc;
@@ -267,7 +267,7 @@ int statement_finalize(Statement* self)
return rc;
}
int statement_reset(Statement* self)
int pysqlite_statement_reset(pysqlite_Statement* self)
{
int rc;
@@ -286,12 +286,12 @@ int statement_reset(Statement* self)
return rc;
}
void statement_mark_dirty(Statement* self)
void pysqlite_statement_mark_dirty(pysqlite_Statement* self)
{
self->in_use = 1;
}
void statement_dealloc(Statement* self)
void pysqlite_statement_dealloc(pysqlite_Statement* self)
{
int rc;
@@ -320,7 +320,7 @@ void statement_dealloc(Statement* self)
*
* Returns 1 if there is more left than should be. 0 if ok.
*/
int check_remaining_sql(const char* tail)
static int pysqlite_check_remaining_sql(const char* tail)
{
const char* pos = tail;
@@ -382,13 +382,13 @@ int check_remaining_sql(const char* tail)
return 0;
}
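pysqlite_check_remaining_sql() is what rejects trailing SQL after the first complete statement passed to execute(); multi-statement strings are expected to go through executescript() instead. A hedged sketch:

    import sqlite3

    con = sqlite3.connect(":memory:")
    # executescript() runs several statements at once, so the
    # "too much SQL" check above never applies to it
    con.executescript("""
        create table t(x);
        insert into t values (1);
    """)
    print con.execute("select x from t").fetchone()[0]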
PyTypeObject StatementType = {
PyTypeObject pysqlite_StatementType = {
PyObject_HEAD_INIT(NULL)
0, /* ob_size */
MODULE_NAME ".Statement", /* tp_name */
sizeof(Statement), /* tp_basicsize */
sizeof(pysqlite_Statement), /* tp_basicsize */
0, /* tp_itemsize */
(destructor)statement_dealloc, /* tp_dealloc */
(destructor)pysqlite_statement_dealloc, /* tp_dealloc */
0, /* tp_print */
0, /* tp_getattr */
0, /* tp_setattr */
@@ -408,7 +408,7 @@ PyTypeObject StatementType = {
0, /* tp_traverse */
0, /* tp_clear */
0, /* tp_richcompare */
offsetof(Statement, in_weakreflist), /* tp_weaklistoffset */
offsetof(pysqlite_Statement, in_weakreflist), /* tp_weaklistoffset */
0, /* tp_iter */
0, /* tp_iternext */
0, /* tp_methods */
@@ -425,8 +425,8 @@ PyTypeObject StatementType = {
0 /* tp_free */
};
extern int statement_setup_types(void)
extern int pysqlite_statement_setup_types(void)
{
StatementType.tp_new = PyType_GenericNew;
return PyType_Ready(&StatementType);
pysqlite_StatementType.tp_new = PyType_GenericNew;
return PyType_Ready(&pysqlite_StatementType);
}
@@ -39,21 +39,21 @@ typedef struct
PyObject* sql;
int in_use;
PyObject* in_weakreflist; /* List of weak references */
} Statement;
} pysqlite_Statement;
extern PyTypeObject StatementType;
extern PyTypeObject pysqlite_StatementType;
int statement_create(Statement* self, Connection* connection, PyObject* sql);
void statement_dealloc(Statement* self);
int pysqlite_statement_create(pysqlite_Statement* self, pysqlite_Connection* connection, PyObject* sql);
void pysqlite_statement_dealloc(pysqlite_Statement* self);
int statement_bind_parameter(Statement* self, int pos, PyObject* parameter);
void statement_bind_parameters(Statement* self, PyObject* parameters);
int pysqlite_statement_bind_parameter(pysqlite_Statement* self, int pos, PyObject* parameter);
void pysqlite_statement_bind_parameters(pysqlite_Statement* self, PyObject* parameters);
int statement_recompile(Statement* self, PyObject* parameters);
int statement_finalize(Statement* self);
int statement_reset(Statement* self);
void statement_mark_dirty(Statement* self);
int pysqlite_statement_recompile(pysqlite_Statement* self, PyObject* parameters);
int pysqlite_statement_finalize(pysqlite_Statement* self);
int pysqlite_statement_reset(pysqlite_Statement* self);
void pysqlite_statement_mark_dirty(pysqlite_Statement* self);
int statement_setup_types(void);
int pysqlite_statement_setup_types(void);
#endif
@@ -24,8 +24,7 @@
#include "module.h"
#include "connection.h"
int _sqlite_step_with_busyhandler(sqlite3_stmt* statement, Connection* connection
)
int _sqlite_step_with_busyhandler(sqlite3_stmt* statement, pysqlite_Connection* connection)
{
int rc;
@@ -40,7 +39,7 @@ int _sqlite_step_with_busyhandler(sqlite3_stmt* statement, Connection* connectio
* Checks the SQLite error code and sets the appropriate DB-API exception.
* Returns the error code (0 means no error occurred).
*/
int _seterror(sqlite3* db)
int _pysqlite_seterror(sqlite3* db)
{
int errorcode;
@@ -53,7 +52,7 @@ int _seterror(sqlite3* db)
break;
case SQLITE_INTERNAL:
case SQLITE_NOTFOUND:
PyErr_SetString(InternalError, sqlite3_errmsg(db));
PyErr_SetString(pysqlite_InternalError, sqlite3_errmsg(db));
break;
case SQLITE_NOMEM:
(void)PyErr_NoMemory();
@@ -71,23 +70,23 @@ int _seterror(sqlite3* db)
case SQLITE_PROTOCOL:
case SQLITE_EMPTY:
case SQLITE_SCHEMA:
PyErr_SetString(OperationalError, sqlite3_errmsg(db));
PyErr_SetString(pysqlite_OperationalError, sqlite3_errmsg(db));
break;
case SQLITE_CORRUPT:
PyErr_SetString(DatabaseError, sqlite3_errmsg(db));
PyErr_SetString(pysqlite_DatabaseError, sqlite3_errmsg(db));
break;
case SQLITE_TOOBIG:
PyErr_SetString(DataError, sqlite3_errmsg(db));
PyErr_SetString(pysqlite_DataError, sqlite3_errmsg(db));
break;
case SQLITE_CONSTRAINT:
case SQLITE_MISMATCH:
PyErr_SetString(IntegrityError, sqlite3_errmsg(db));
PyErr_SetString(pysqlite_IntegrityError, sqlite3_errmsg(db));
break;
case SQLITE_MISUSE:
PyErr_SetString(ProgrammingError, sqlite3_errmsg(db));
PyErr_SetString(pysqlite_ProgrammingError, sqlite3_errmsg(db));
break;
default:
PyErr_SetString(DatabaseError, sqlite3_errmsg(db));
PyErr_SetString(pysqlite_DatabaseError, sqlite3_errmsg(db));
break;
}
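The switch above maps SQLite result codes onto DB-API exceptions, so error handling on the Python side can be selective. A hedged sketch of the visible effect (the table names are made up):

    import sqlite3

    con = sqlite3.connect(":memory:")
    try:
        con.execute("select * from no_such_table")      # SQLITE_ERROR
    except sqlite3.OperationalError, exc:
        print "operational error:", exc

    con.execute("create table t(x integer primary key)")
    con.execute("insert into t values (1)")
    try:
        con.execute("insert into t values (1)")         # SQLITE_CONSTRAINT
    except sqlite3.IntegrityError, exc:
        print "integrity error:", exc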
@@ -28,11 +28,11 @@
#include "sqlite3.h"
#include "connection.h"
int _sqlite_step_with_busyhandler(sqlite3_stmt* statement, Connection* connection);
int _sqlite_step_with_busyhandler(sqlite3_stmt* statement, pysqlite_Connection* connection);
/**
* Checks the SQLite error code and sets the appropriate DB-API exception.
* Returns the error code (0 means no error occurred).
*/
int _seterror(sqlite3* db);
int _pysqlite_seterror(sqlite3* db);
#endif
@@ -1050,8 +1050,9 @@ Convert a string or number to an integer, if possible. A floating point\n\
argument will be truncated towards zero (this does not include a string\n\
representation of a floating point number!) When converting a string, use\n\
the optional base. It is an error to supply a base when converting a\n\
non-string. If the argument is outside the integer range a long object\n\
will be returned instead.");
non-string. If base is zero, the proper base is guessed based on the\n\
string content. If the argument is outside the integer range a\n\
long object will be returned instead.");
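A short illustration of the base-zero guessing the amended docstring describes (Python 2 prefix rules):

    print int('10', 0)     # 10 -- plain digits are read as decimal
    print int('0x10', 0)   # 16 -- the 0x prefix selects hexadecimal
    print int('010', 0)    # 8  -- a leading zero selects octal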
static PyNumberMethods int_as_number = {
(binaryfunc)int_add, /*nb_add*/
@@ -1024,7 +1024,7 @@ frozenset_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
PyObject *iterable = NULL, *result;
if (!_PyArg_NoKeywords("frozenset()", kwds))
if (type == &PyFrozenSet_Type && !_PyArg_NoKeywords("frozenset()", kwds))
return NULL;
if (!PyArg_UnpackTuple(args, type->tp_name, 0, 1, &iterable))
@@ -1068,7 +1068,7 @@ PySet_Fini(void)
static PyObject *
set_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
if (!_PyArg_NoKeywords("set()", kwds))
if (type == &PySet_Type && !_PyArg_NoKeywords("set()", kwds))
return NULL;
return make_new_set(type, NULL);
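Restricting the no-keyword check to the exact builtin types means that set and frozenset subclasses can now take keyword arguments; a minimal sketch of the pattern this allows (TaggedSet is a made-up example class):

    class TaggedSet(set):
        def __init__(self, iterable=(), tag=None):
            set.__init__(self, iterable)
            self.tag = tag

    s = TaggedSet([1, 2, 3], tag='demo')   # previously rejected in set.__new__
    print sorted(s), s.tag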
@@ -3126,7 +3126,7 @@ init_ast(void)
if (PyDict_SetItemString(d, "AST", (PyObject*)AST_type) < 0) return;
if (PyModule_AddIntConstant(m, "PyCF_ONLY_AST", PyCF_ONLY_AST) < 0)
return;
if (PyModule_AddStringConstant(m, "__version__", "53170") < 0)
if (PyModule_AddStringConstant(m, "__version__", "53349") < 0)
return;
if (PyDict_SetItemString(d, "mod", (PyObject*)mod_type) < 0) return;
if (PyDict_SetItemString(d, "Module", (PyObject*)Module_type) < 0)
@@ -34,7 +34,7 @@ NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
WITH THE USE OR PERFORMANCE OF THIS SOFTWARE !
"""
import sys, time, operator, string
import sys, time, operator, string, platform
from CommandLine import *
try:
@@ -102,27 +102,26 @@ def get_timer(timertype):
def get_machine_details():
import platform
if _debug:
print 'Getting machine details...'
buildno, builddate = platform.python_build()
python = platform.python_version()
if python > '2.0':
try:
unichr(100000)
except ValueError:
# UCS2 build (standard)
unicode = 'UCS2'
else:
# UCS4 build (most recent Linux distros)
unicode = 'UCS4'
else:
try:
unichr(100000)
except ValueError:
# UCS2 build (standard)
unicode = 'UCS2'
except NameError:
unicode = None
else:
# UCS4 build (most recent Linux distros)
unicode = 'UCS4'
bits, linkage = platform.architecture()
return {
'platform': platform.platform(),
'processor': platform.processor(),
'executable': sys.executable,
'implementation': platform.python_implementation(),
'python': platform.python_version(),
'compiler': platform.python_compiler(),
'buildno': buildno,
@@ -134,17 +133,18 @@ def get_machine_details():
def print_machine_details(d, indent=''):
l = ['Machine Details:',
' Platform ID: %s' % d.get('platform', 'n/a'),
' Processor: %s' % d.get('processor', 'n/a'),
' Platform ID: %s' % d.get('platform', 'n/a'),
' Processor: %s' % d.get('processor', 'n/a'),
'',
'Python:',
' Executable: %s' % d.get('executable', 'n/a'),
' Version: %s' % d.get('python', 'n/a'),
' Compiler: %s' % d.get('compiler', 'n/a'),
' Bits: %s' % d.get('bits', 'n/a'),
' Build: %s (#%s)' % (d.get('builddate', 'n/a'),
d.get('buildno', 'n/a')),
' Unicode: %s' % d.get('unicode', 'n/a'),
' Implementation: %s' % d.get('implementation', 'n/a'),
' Executable: %s' % d.get('executable', 'n/a'),
' Version: %s' % d.get('python', 'n/a'),
' Compiler: %s' % d.get('compiler', 'n/a'),
' Bits: %s' % d.get('bits', 'n/a'),
' Build: %s (#%s)' % (d.get('builddate', 'n/a'),
d.get('buildno', 'n/a')),
' Unicode: %s' % d.get('unicode', 'n/a'),
]
print indent + string.join(l, '\n' + indent) + '\n'
@@ -499,8 +499,9 @@ class Benchmark:
def calibrate(self):
print 'Calibrating tests. Please wait...'
print 'Calibrating tests. Please wait...',
if self.verbose:
print
print
print 'Test min max'
print '-' * LINE
@@ -514,6 +515,11 @@
(name,
min(test.overhead_times) * MILLI_SECONDS,
max(test.overhead_times) * MILLI_SECONDS)
if self.verbose:
print
print 'Done with the calibration.'
else:
print 'done.'
print
def run(self):
@@ -830,7 +836,9 @@ python pybench.py -s p25.pybench -c p21.pybench
print '-' * LINE
print 'PYBENCH %s' % __version__
print '-' * LINE
print '* using Python %s' % (string.split(sys.version)[0])
print '* using %s %s' % (
platform.python_implementation(),
string.join(string.split(sys.version), ' '))
# Switch off garbage collection
if not withgc:
@@ -839,15 +847,23 @@ python pybench.py -s p25.pybench -c p21.pybench
except ImportError:
print '* Python version doesn\'t support garbage collection'
else:
gc.disable()
print '* disabled garbage collection'
try:
gc.disable()
except NotImplementedError:
print '* Python version doesn\'t support gc.disable'
else:
print '* disabled garbage collection'
# "Disable" sys check interval
if not withsyscheck:
# Too bad the check interval uses an int instead of a long...
value = 2147483647
sys.setcheckinterval(value)
print '* system check interval set to maximum: %s' % value
try:
sys.setcheckinterval(value)
except (AttributeError, NotImplementedError):
print '* Python version doesn\'t support sys.setcheckinterval'
else:
print '* system check interval set to maximum: %s' % value
if timer == TIMER_SYSTIMES_PROCESSTIME:
import systimes