Commit 9a2d99e2 authored by Stefan Krah's avatar Stefan Krah

- Issue #10181: New memoryview implementation fixes multiple ownership

  and lifetime issues of dynamically allocated Py_buffer members (#9990)
  as well as crashes (#8305, #7433). Many new features have been added
  (See whatsnew/3.3), and the documentation has been updated extensively.
  The ndarray test object from _testbuffer.c implements all aspects of
  PEP-3118, so further development towards the complete implementation
  of the PEP can proceed in a test-driven manner.

  Thanks to Nick Coghlan, Antoine Pitrou and Pauli Virtanen for review
  and many ideas.

- Issue #12834: Fix incorrect results of memoryview.tobytes() for
  non-contiguous arrays.

- Issue #5231: Introduce memoryview.cast() method that allows changing
  format and shape without making a copy of the underlying memory.
parent 5a3d0462
......@@ -7,6 +7,7 @@ Buffer Protocol
.. sectionauthor:: Greg Stein <gstein@lyra.org>
.. sectionauthor:: Benjamin Peterson
.. sectionauthor:: Stefan Krah
.. index::
......@@ -20,7 +21,7 @@ as image processing or numeric analysis.
While each of these types have their own semantics, they share the common
characteristic of being backed by a possibly large memory buffer. It is
then desireable, in some situations, to access that buffer directly and
then desirable, in some situations, to access that buffer directly and
without intermediate copying.
Python provides such a facility at the C level in the form of the *buffer
......@@ -60,8 +61,10 @@ isn't needed anymore. Failure to do so could lead to various issues such as
resource leaks.
The buffer structure
====================
.. _buffer-structure:
Buffer structure
================
Buffer structures (or simply "buffers") are useful as a way to expose the
binary data from another object to the Python programmer. They can also be
......@@ -81,246 +84,400 @@ can be created.
.. c:type:: Py_buffer
.. c:member:: void *buf
.. c:member:: void \*obj
A new reference to the exporting object or *NULL*. The reference is owned
by the consumer and automatically decremented and set to *NULL* by
:c:func:`PyBuffer_Release`.
For temporary buffers that are wrapped by :c:func:`PyMemoryView_FromBuffer`
this field must be *NULL*.
A pointer to the start of the memory for the object.
.. c:member:: void \*buf
A pointer to the start of the logical structure described by the buffer
fields. This can be any location within the underlying physical memory
block of the exporter. For example, with negative :c:member:`~Py_buffer.strides`
the value may point to the end of the memory block.
For contiguous arrays, the value points to the beginning of the memory
block.
.. c:member:: Py_ssize_t len
:noindex:
The total length of the memory in bytes.
``product(shape) * itemsize``. For contiguous arrays, this is the length
of the underlying memory block. For non-contiguous arrays, it is the length
that the logical structure would have if it were copied to a contiguous
representation.
Accessing ``((char *)buf)[0] up to ((char *)buf)[len-1]`` is only valid
if the buffer has been obtained by a request that guarantees contiguity. In
most cases such a request will be :c:macro:`PyBUF_SIMPLE` or :c:macro:`PyBUF_WRITABLE`.
.. c:member:: int readonly
An indicator of whether the buffer is read only.
An indicator of whether the buffer is read-only. This field is controlled
by the :c:macro:`PyBUF_WRITABLE` flag.
.. c:member:: Py_ssize_t itemsize
Item size in bytes of a single element. Same as the value of :func:`struct.calcsize`
called on non-NULL :c:member:`~Py_buffer.format` values.
Important exception: If a consumer requests a buffer without the
:c:macro:`PyBUF_FORMAT` flag, :c:member:`~Py_buffer.format` will
be set to *NULL*, but :c:member:`~Py_buffer.itemsize` still has
the value for the original format.
If :c:member:`~Py_buffer.shape` is present, the equality
``product(shape) * itemsize == len`` still holds and the consumer
can use :c:member:`~Py_buffer.itemsize` to navigate the buffer.
If :c:member:`~Py_buffer.shape` is *NULL* as a result of a :c:macro:`PyBUF_SIMPLE`
or a :c:macro:`PyBUF_WRITABLE` request, the consumer must disregard
:c:member:`~Py_buffer.itemsize` and assume ``itemsize == 1``.
.. c:member:: const char *format
:noindex:
.. c:member:: const char \*format
A *NULL* terminated string in :mod:`struct` module style syntax giving
the contents of the elements available through the buffer. If this is
*NULL*, ``"B"`` (unsigned bytes) is assumed.
A *NUL* terminated string in :mod:`struct` module style syntax describing
the contents of a single item. If this is *NULL*, ``"B"`` (unsigned bytes)
is assumed.
This field is controlled by the :c:macro:`PyBUF_FORMAT` flag.
.. c:member:: int ndim
The number of dimensions the memory represents as a multi-dimensional
array. If it is 0, :c:data:`strides` and :c:data:`suboffsets` must be
*NULL*.
.. c:member:: Py_ssize_t *shape
An array of :c:type:`Py_ssize_t`\s the length of :c:data:`ndim` giving the
shape of the memory as a multi-dimensional array. Note that
``((*shape)[0] * ... * (*shape)[ndim-1])*itemsize`` should be equal to
:c:data:`len`.
.. c:member:: Py_ssize_t *strides
An array of :c:type:`Py_ssize_t`\s the length of :c:data:`ndim` giving the
number of bytes to skip to get to a new element in each dimension.
.. c:member:: Py_ssize_t *suboffsets
An array of :c:type:`Py_ssize_t`\s the length of :c:data:`ndim`. If these
suboffset numbers are greater than or equal to 0, then the value stored
along the indicated dimension is a pointer and the suboffset value
dictates how many bytes to add to the pointer after de-referencing. A
suboffset value that is negative indicates that no de-referencing should
occur (striding in a contiguous memory block).
Here is a function that returns a pointer to the element in an N-D array
pointed to by an N-dimensional index when there are both non-NULL strides
and suboffsets::
void *get_item_pointer(int ndim, void *buf, Py_ssize_t *strides,
                       Py_ssize_t *suboffsets, Py_ssize_t *indices) {
    char *pointer = (char*)buf;
    int i;
    for (i = 0; i < ndim; i++) {
        pointer += strides[i] * indices[i];
        if (suboffsets[i] >= 0) {
            pointer = *((char**)pointer) + suboffsets[i];
        }
    }
    return (void*)pointer;
}
The number of dimensions the memory represents as an n-dimensional array.
If it is 0, :c:member:`~Py_buffer.buf` points to a single item representing
a scalar. In this case, :c:member:`~Py_buffer.shape`, :c:member:`~Py_buffer.strides`
and :c:member:`~Py_buffer.suboffsets` MUST be *NULL*.
The macro :c:macro:`PyBUF_MAX_NDIM` limits the maximum number of dimensions
to 64. Exporters MUST respect this limit, consumers of multi-dimensional
buffers SHOULD be able to handle up to :c:macro:`PyBUF_MAX_NDIM` dimensions.
.. c:member:: Py_ssize_t itemsize
.. c:member:: Py_ssize_t \*shape
An array of :c:type:`Py_ssize_t` of length :c:member:`~Py_buffer.ndim`
indicating the shape of the memory as an n-dimensional array. Note that
``shape[0] * ... * shape[ndim-1] * itemsize`` MUST be equal to
:c:member:`~Py_buffer.len`.
Shape values are restricted to ``shape[n] >= 0``. The case
``shape[n] == 0`` requires special attention. See `complex arrays`_
for further information.
The shape array is read-only for the consumer.
.. c:member:: Py_ssize_t \*strides
An array of :c:type:`Py_ssize_t` of length :c:member:`~Py_buffer.ndim`
giving the number of bytes to skip to get to a new element in each
dimension.
Stride values can be any integer. For regular arrays, strides are
usually positive, but a consumer MUST be able to handle the case
``strides[n] <= 0``. See `complex arrays`_ for further information.
The strides array is read-only for the consumer.
.. c:member:: Py_ssize_t \*suboffsets
An array of :c:type:`Py_ssize_t` of length :c:member:`~Py_buffer.ndim`.
If ``suboffsets[n] >= 0``, the values stored along the nth dimension are
pointers and the suboffset value dictates how many bytes to add to each
pointer after de-referencing. A suboffset value that is negative
indicates that no de-referencing should occur (striding in a contiguous
memory block).
This is a storage for the itemsize (in bytes) of each element of the
shared memory. It is technically unnecessary as it can be obtained
using :c:func:`PyBuffer_SizeFromFormat`, however an exporter may know
this information without parsing the format string and it is necessary
to know the itemsize for proper interpretation of striding. Therefore,
storing it is more convenient and faster.
This type of array representation is used by the Python Imaging Library
(PIL). See `complex arrays`_ for further information how to access elements
of such an array.
.. c:member:: void *internal
The suboffsets array is read-only for the consumer.
.. c:member:: void \*internal
This is for use internally by the exporting object. For example, this
might be re-cast as an integer by the exporter and used to store flags
about whether or not the shape, strides, and suboffsets arrays must be
freed when the buffer is released. The consumer should never alter this
freed when the buffer is released. The consumer MUST NOT alter this
value.
.. _buffer-request-types:
Buffer-related functions
========================
Buffer request types
====================
Buffers are usually obtained by sending a buffer request to an exporting
object via :c:func:`PyObject_GetBuffer`. Since the complexity of the logical
structure of the memory can vary drastically, the consumer uses the *flags*
argument to specify the exact buffer type it can handle.
.. c:function:: int PyObject_CheckBuffer(PyObject *obj)
All :c:data:`Py_buffer` fields are unambiguously defined by the request
type.
request-independent fields
~~~~~~~~~~~~~~~~~~~~~~~~~~
The following fields are not influenced by *flags* and must always be filled in
with the correct values: :c:member:`~Py_buffer.obj`, :c:member:`~Py_buffer.buf`,
:c:member:`~Py_buffer.len`, :c:member:`~Py_buffer.itemsize`, :c:member:`~Py_buffer.ndim`.
Return 1 if *obj* supports the buffer interface otherwise 0. When 1 is
returned, it doesn't guarantee that :c:func:`PyObject_GetBuffer` will
succeed.
readonly, format
~~~~~~~~~~~~~~~~
.. c:function:: int PyObject_GetBuffer(PyObject *obj, Py_buffer *view, int flags)
.. c:macro:: PyBUF_WRITABLE
Export a view over some internal data from the target object *obj*.
*obj* must not be NULL, and *view* must point to an existing
:c:type:`Py_buffer` structure allocated by the caller (most uses of
this function will simply declare a local variable of type
:c:type:`Py_buffer`). The *flags* argument is a bit field indicating
what kind of buffer is requested. The buffer interface allows
for complicated memory layout possibilities; however, some callers
won't want to handle all the complexity and instead request a simple
view of the target object (using :c:macro:`PyBUF_SIMPLE` for a read-only
view and :c:macro:`PyBUF_WRITABLE` for a read-write view).
Controls the :c:member:`~Py_buffer.readonly` field. If set, the exporter
MUST provide a writable buffer or else report failure. Otherwise, the
exporter MAY provide either a read-only or writable buffer, but the choice
MUST be consistent for all consumers.
Some exporters may not be able to share memory in every possible way and
may need to raise errors to signal to some consumers that something is
just not possible. These errors should be a :exc:`BufferError` unless
there is another error that is actually causing the problem. The
exporter can use flags information to simplify how much of the
:c:data:`Py_buffer` structure is filled in with non-default values and/or
raise an error if the object can't support a simpler view of its memory.
.. c:macro:: PyBUF_FORMAT
On success, 0 is returned and the *view* structure is filled with useful
values. On error, -1 is returned and an exception is raised; the *view*
is left in an undefined state.
Controls the :c:member:`~Py_buffer.format` field. If set, this field MUST
be filled in correctly. Otherwise, this field MUST be *NULL*.
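For example, a consumer that needs to interpret the items could combine
:c:macro:`PyBUF_FORMAT` with a contiguity request and verify the format
string before touching the memory. This is only a sketch; the function name
``expect_doubles`` is illustrative and not part of any API::

static int
expect_doubles(PyObject *obj, Py_buffer *view)
{
    /* Ask for a C-contiguous (possibly read-only) buffer together with
       its format string. */
    if (PyObject_GetBuffer(obj, view, PyBUF_CONTIG_RO | PyBUF_FORMAT) < 0)
        return -1;
    /* With PyBUF_FORMAT the exporter must fill in view->format. */
    if (view->format == NULL || strcmp(view->format, "d") != 0) {
        PyErr_SetString(PyExc_BufferError, "expected an array of doubles");
        PyBuffer_Release(view);
        return -1;
    }
    return 0;
}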
The following are the possible values to the *flags* arguments.
.. c:macro:: PyBUF_SIMPLE
:c:macro:`PyBUF_WRITABLE` can be \|'d to any of the flags in the next section.
Since :c:macro:`PyBUF_SIMPLE` is defined as 0, :c:macro:`PyBUF_WRITABLE`
can be used as a stand-alone flag to request a simple writable buffer.
This is the default flag. The returned buffer exposes a read-only
memory area. The format of data is assumed to be raw unsigned bytes,
without any particular structure. This is a "stand-alone" flag
constant. It never needs to be '|'d to the others. The exporter will
raise an error if it cannot provide such a contiguous buffer of bytes.
:c:macro:`PyBUF_FORMAT` can be \|'d to any of the flags except :c:macro:`PyBUF_SIMPLE`.
The latter already implies format ``B`` (unsigned bytes).
.. c:macro:: PyBUF_WRITABLE
Like :c:macro:`PyBUF_SIMPLE`, but the returned buffer is writable. If
the exporter doesn't support writable buffers, an error is raised.
shape, strides, suboffsets
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. c:macro:: PyBUF_STRIDES
The flags that control the logical structure of the memory are listed
in decreasing order of complexity. Note that each flag contains all bits
of the flags below it.
This implies :c:macro:`PyBUF_ND`. The returned buffer must provide
strides information (i.e. the strides cannot be NULL). This would be
used when the consumer can handle strided, discontiguous arrays.
Handling strides automatically assumes you can handle shape. The
exporter can raise an error if a strided representation of the data is
not possible (i.e. without the suboffsets).
.. c:macro:: PyBUF_ND
+-----------------------------+-------+---------+------------+
| Request | shape | strides | suboffsets |
+=============================+=======+=========+============+
| .. c:macro:: PyBUF_INDIRECT | yes | yes | if needed |
+-----------------------------+-------+---------+------------+
| .. c:macro:: PyBUF_STRIDES | yes | yes | NULL |
+-----------------------------+-------+---------+------------+
| .. c:macro:: PyBUF_ND | yes | NULL | NULL |
+-----------------------------+-------+---------+------------+
| .. c:macro:: PyBUF_SIMPLE | NULL | NULL | NULL |
+-----------------------------+-------+---------+------------+
The returned buffer must provide shape information. The memory will be
assumed C-style contiguous (last dimension varies the fastest). The
exporter may raise an error if it cannot provide this kind of
contiguous buffer. If this is not given then shape will be *NULL*.
.. c:macro:: PyBUF_C_CONTIGUOUS
PyBUF_F_CONTIGUOUS
PyBUF_ANY_CONTIGUOUS
contiguity requests
~~~~~~~~~~~~~~~~~~~
These flags indicate that the returned buffer must be, respectively,
C-contiguous (last dimension varies the fastest), Fortran contiguous
(first dimension varies the fastest), or either one. All of
these flags imply :c:macro:`PyBUF_STRIDES` and guarantee that the
strides buffer info structure will be filled in correctly.
C or Fortran contiguity can be explicitly requested, with and without stride
information. Without stride information, the buffer must be C-contiguous.
.. c:macro:: PyBUF_INDIRECT
+-----------------------------------+-------+---------+------------+--------+
| Request | shape | strides | suboffsets | contig |
+===================================+=======+=========+============+========+
| .. c:macro:: PyBUF_C_CONTIGUOUS | yes | yes | NULL | C |
+-----------------------------------+-------+---------+------------+--------+
| .. c:macro:: PyBUF_F_CONTIGUOUS | yes | yes | NULL | F |
+-----------------------------------+-------+---------+------------+--------+
| .. c:macro:: PyBUF_ANY_CONTIGUOUS | yes | yes | NULL | C or F |
+-----------------------------------+-------+---------+------------+--------+
| .. c:macro:: PyBUF_ND | yes | NULL | NULL | C |
+-----------------------------------+-------+---------+------------+--------+
This flag indicates the returned buffer must have suboffsets
information (which can be NULL if no suboffsets are needed). This can
be used when the consumer can handle indirect array referencing implied
by these suboffsets. This implies :c:macro:`PyBUF_STRIDES`.
.. c:macro:: PyBUF_FORMAT
compound requests
~~~~~~~~~~~~~~~~~
The returned buffer must have true format information if this flag is
provided. This would be used when the consumer is going to be checking
for what 'kind' of data is actually stored. An exporter should always
be able to provide this information if requested. If format is not
explicitly requested then the format must be returned as *NULL* (which
means ``'B'``, or unsigned bytes).
All possible requests are fully defined by some combination of the flags in
the previous section. For convenience, the buffer protocol provides frequently
used combinations as single flags.
.. c:macro:: PyBUF_STRIDED
In the following table *U* stands for undefined contiguity. The consumer would
have to call :c:func:`PyBuffer_IsContiguous` to determine contiguity.
This is equivalent to ``(PyBUF_STRIDES | PyBUF_WRITABLE)``.
.. c:macro:: PyBUF_STRIDED_RO
This is equivalent to ``(PyBUF_STRIDES)``.
+-------------------------------+-------+---------+------------+--------+----------+--------+
| Request | shape | strides | suboffsets | contig | readonly | format |
+===============================+=======+=========+============+========+==========+========+
| .. c:macro:: PyBUF_FULL | yes | yes | if needed | U | 0 | yes |
+-------------------------------+-------+---------+------------+--------+----------+--------+
| .. c:macro:: PyBUF_FULL_RO | yes | yes | if needed | U | 1 or 0 | yes |
+-------------------------------+-------+---------+------------+--------+----------+--------+
| .. c:macro:: PyBUF_RECORDS | yes | yes | NULL | U | 0 | yes |
+-------------------------------+-------+---------+------------+--------+----------+--------+
| .. c:macro:: PyBUF_RECORDS_RO | yes | yes | NULL | U | 1 or 0 | yes |
+-------------------------------+-------+---------+------------+--------+----------+--------+
| .. c:macro:: PyBUF_STRIDED | yes | yes | NULL | U | 0 | NULL |
+-------------------------------+-------+---------+------------+--------+----------+--------+
| .. c:macro:: PyBUF_STRIDED_RO | yes | yes | NULL | U | 1 or 0 | NULL |
+-------------------------------+-------+---------+------------+--------+----------+--------+
| .. c:macro:: PyBUF_CONTIG | yes | NULL | NULL | C | 0 | NULL |
+-------------------------------+-------+---------+------------+--------+----------+--------+
| .. c:macro:: PyBUF_CONTIG_RO | yes | NULL | NULL | C | 1 or 0 | NULL |
+-------------------------------+-------+---------+------------+--------+----------+--------+
.. c:macro:: PyBUF_RECORDS
This is equivalent to ``(PyBUF_STRIDES | PyBUF_FORMAT |
PyBUF_WRITABLE)``.
Complex arrays
==============
.. c:macro:: PyBUF_RECORDS_RO
NumPy-style: shape and strides
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The logical structure of NumPy-style arrays is defined by :c:member:`~Py_buffer.itemsize`,
:c:member:`~Py_buffer.ndim`, :c:member:`~Py_buffer.shape` and :c:member:`~Py_buffer.strides`.
If ``ndim == 0``, the memory location pointed to by :c:member:`~Py_buffer.buf` is
interpreted as a scalar of size :c:member:`~Py_buffer.itemsize`. In that case,
both :c:member:`~Py_buffer.shape` and :c:member:`~Py_buffer.strides` are *NULL*.
If :c:member:`~Py_buffer.strides` is *NULL*, the array is interpreted as
a standard n-dimensional C-array. Otherwise, the consumer must access an
n-dimensional array as follows:
``ptr = (char *)buf + indices[0] * strides[0] + ... + indices[n-1] * strides[n-1]``
``item = *((typeof(item) *)ptr);``
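For instance, a consumer that has requested :c:macro:`PyBUF_STRIDES` could
locate a single item with a small helper along these lines (a sketch only;
``get_strided_item`` is illustrative)::

/* Return a pointer to the item at an n-dimensional index of a
   NumPy-style buffer (strides present, no suboffsets). */
static void *
get_strided_item(const Py_buffer *view, const Py_ssize_t *indices)
{
    char *ptr = (char *)view->buf;
    int i;
    for (i = 0; i < view->ndim; i++)
        ptr += view->strides[i] * indices[i];
    return ptr;
}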
As noted above, :c:member:`~Py_buffer.buf` can point to any location within
the actual memory block. An exporter can check the validity of a buffer with
this function:
.. code-block:: python
def verify_structure(memlen, itemsize, ndim, shape, strides, offset):
    """Verify that the parameters represent a valid array within
       the bounds of the allocated memory:
           char *mem: start of the physical memory block
           memlen: length of the physical memory block
           offset: (char *)buf - mem
    """
    if offset % itemsize:
        return False
    if offset < 0 or offset+itemsize > memlen:
        return False
    if any(v % itemsize for v in strides):
        return False

    if ndim <= 0:
        return ndim == 0 and not shape and not strides
    if 0 in shape:
        return True

    imin = sum(strides[j]*(shape[j]-1) for j in range(ndim)
               if strides[j] <= 0)
    imax = sum(strides[j]*(shape[j]-1) for j in range(ndim)
               if strides[j] > 0)

    return 0 <= offset+imin and offset+imax+itemsize <= memlen
PIL-style: shape, strides and suboffsets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In addition to the regular items, PIL-style arrays can contain pointers
that must be followed in order to get to the next element in a dimension.
For example, the regular three-dimensional C-array ``char v[2][2][3]`` can
also be viewed as an array of 2 pointers to 2 two-dimensional arrays:
``char (*v[2])[2][3]``. In suboffsets representation, those two pointers
can be embedded at the start of :c:member:`~Py_buffer.buf`, pointing
to two ``char x[2][3]`` arrays that can be located anywhere in memory.
Here is a function that returns a pointer to the element in an N-D array
pointed to by an N-dimensional index when there are both non-NULL strides
and suboffsets::
void *get_item_pointer(int ndim, void *buf, Py_ssize_t *strides,
                       Py_ssize_t *suboffsets, Py_ssize_t *indices) {
    char *pointer = (char*)buf;
    int i;
    for (i = 0; i < ndim; i++) {
        pointer += strides[i] * indices[i];
        if (suboffsets[i] >= 0) {
            pointer = *((char**)pointer) + suboffsets[i];
        }
    }
    return (void*)pointer;
}
This is equivalent to ``(PyBUF_STRIDES | PyBUF_FORMAT)``.
.. c:macro:: PyBUF_FULL
Buffer-related functions
========================
This is equivalent to ``(PyBUF_INDIRECT | PyBUF_FORMAT |
PyBUF_WRITABLE)``.
.. c:function:: int PyObject_CheckBuffer(PyObject *obj)
.. c:macro:: PyBUF_FULL_RO
Return 1 if *obj* supports the buffer interface otherwise 0. When 1 is
returned, it doesn't guarantee that :c:func:`PyObject_GetBuffer` will
succeed.
This is equivalent to ``(PyBUF_INDIRECT | PyBUF_FORMAT)``.
.. c:macro:: PyBUF_CONTIG
.. c:function:: int PyObject_GetBuffer(PyObject *exporter, Py_buffer *view, int flags)
This is equivalent to ``(PyBUF_ND | PyBUF_WRITABLE)``.
Send a request to *exporter* to fill in *view* as specified by *flags*.
If the exporter cannot provide a buffer of the exact type, it MUST raise
:c:data:`PyExc_BufferError`, set :c:member:`view->obj` to *NULL* and
return -1.
.. c:macro:: PyBUF_CONTIG_RO
On success, fill in *view*, set :c:member:`view->obj` to a new reference
to *exporter* and return 0.
This is equivalent to ``(PyBUF_ND)``.
Successful calls to :c:func:`PyObject_GetBuffer` must be paired with calls
to :c:func:`PyBuffer_Release`, similar to :c:func:`malloc` and :c:func:`free`.
Thus, after the consumer is done with the buffer, :c:func:`PyBuffer_Release`
must be called exactly once.
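A typical consumer therefore looks roughly like this (a minimal sketch;
``process_bytes`` stands for whatever the consumer actually does with the
memory)::

static int
consume(PyObject *obj)
{
    Py_buffer view;

    /* Request a contiguous chunk of unsigned bytes. */
    if (PyObject_GetBuffer(obj, &view, PyBUF_SIMPLE) < 0)
        return -1;

    process_bytes(view.buf, view.len);   /* hypothetical helper */

    /* Pair every successful PyObject_GetBuffer() with exactly one
       PyBuffer_Release(). */
    PyBuffer_Release(&view);
    return 0;
}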
.. c:function:: void PyBuffer_Release(Py_buffer *view)
Release the buffer *view*. This should be called when the buffer is no
longer being used as it may free memory from it.
Release the buffer *view* and decrement the reference count for
:c:member:`view->obj`. This function MUST be called when the buffer
is no longer being used, otherwise reference leaks may occur.
It is an error to call this function on a buffer that was not obtained via
:c:func:`PyObject_GetBuffer`.
.. c:function:: Py_ssize_t PyBuffer_SizeFromFormat(const char *)
Return the implied :c:data:`~Py_buffer.itemsize` from the struct-style
:c:data:`~Py_buffer.format`.
Return the implied :c:data:`~Py_buffer.itemsize` from :c:data:`~Py_buffer.format`.
This function is not yet implemented.
.. c:function:: int PyBuffer_IsContiguous(Py_buffer *view, char fortran)
.. c:function:: int PyBuffer_IsContiguous(Py_buffer *view, char order)
Return 1 if the memory defined by the *view* is C-style (*fortran* is
``'C'``) or Fortran-style (*fortran* is ``'F'``) contiguous or either one
(*fortran* is ``'A'``). Return 0 otherwise.
Return 1 if the memory defined by the *view* is C-style (*order* is
``'C'``) or Fortran-style (*order* is ``'F'``) contiguous or either one
(*order* is ``'A'``). Return 0 otherwise.
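For example, a consumer that wants to treat the memory as one flat block
could check contiguity first (a sketch; ``copy_flat`` and its arguments are
illustrative)::

static int
copy_flat(Py_buffer *view, char *dest, Py_ssize_t destlen)
{
    if (view->len > destlen)
        return -1;
    if (!PyBuffer_IsContiguous(view, 'C'))
        return -1;   /* fall back to element-wise (strided) access */
    memcpy(dest, view->buf, (size_t)view->len);
    return 0;
}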
.. c:function:: void PyBuffer_FillContiguousStrides(int ndim, Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t itemsize, char fortran)
.. c:function:: void PyBuffer_FillContiguousStrides(int ndim, Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t itemsize, char order)
Fill the *strides* array with byte-strides of a contiguous (C-style if
*fortran* is ``'C'`` or Fortran-style if *fortran* is ``'F'``) array of the
*order* is ``'C'`` or Fortran-style if *order* is ``'F'``) array of the
given shape with the given number of bytes per element.
.. c:function:: int PyBuffer_FillInfo(Py_buffer *view, PyObject *obj, void *buf, Py_ssize_t len, int readonly, int infoflags)
.. c:function:: int PyBuffer_FillInfo(Py_buffer *view, PyObject *exporter, void *buf, Py_ssize_t len, int readonly, int flags)
Handle buffer requests for an exporter that wants to expose *buf* of size *len*
with writability set according to *readonly*. *buf* is interpreted as a sequence
of unsigned bytes.
The *flags* argument indicates the request type. This function always fills in
*view* as specified by flags, unless *buf* has been designated as read-only
and :c:macro:`PyBUF_WRITABLE` is set in *flags*.
On success, set :c:member:`view->obj` to a new reference to *exporter* and
return 0. Otherwise, raise :c:data:`PyExc_BufferError`, set
:c:member:`view->obj` to *NULL* and return -1.
If this function is used as part of a :ref:`getbufferproc <buffer-structs>`,
*exporter* MUST be set to the exporting object. Otherwise, *exporter* MUST
be NULL.
Fill in a buffer-info structure, *view*, correctly for an exporter that can
only share a contiguous chunk of memory of "unsigned bytes" of the given
length. Return 0 on success and -1 (with raising an error) on error.
......@@ -17,16 +17,19 @@ any other object.
Create a memoryview object from an object that provides the buffer interface.
If *obj* supports writable buffer exports, the memoryview object will be
readable and writable, otherwise it will be read-only.
read/write, otherwise it may be either read-only or read/write at the
discretion of the exporter.
.. c:function:: PyObject *PyMemoryView_FromMemory(char *mem, Py_ssize_t size, int flags)
Create a memoryview object using *mem* as the underlying buffer.
*flags* can be one of :c:macro:`PyBUF_READ` or :c:macro:`PyBUF_WRITE`.
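For example, a static byte array could be exposed read-only as follows
(a sketch; ``make_view`` is illustrative)::

static PyObject *
make_view(void)
{
    static char data[] = "abcdef";

    /* The memory is not copied: *data* must stay alive for as long as
       the returned memoryview exists (here it is static, so it does). */
    return PyMemoryView_FromMemory(data, sizeof(data) - 1, PyBUF_READ);
}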
.. c:function:: PyObject *PyMemoryView_FromBuffer(Py_buffer *view)
Create a memoryview object wrapping the given buffer structure *view*.
The memoryview object then owns the buffer represented by *view*, which
means you shouldn't try to call :c:func:`PyBuffer_Release` yourself: it
will be done on deallocation of the memoryview object.
For simple byte buffers, :c:func:`PyMemoryView_FromMemory` is the preferred
function.
.. c:function:: PyObject *PyMemoryView_GetContiguous(PyObject *obj, int buffertype, char order)
......@@ -43,10 +46,16 @@ any other object.
currently allowed to create subclasses of :class:`memoryview`.
.. c:function:: Py_buffer *PyMemoryView_GET_BUFFER(PyObject *obj)
.. c:function:: Py_buffer *PyMemoryView_GET_BUFFER(PyObject *mview)
Return a pointer to the memoryview's private copy of the exporter's buffer.
*mview* **must** be a memoryview instance; this macro doesn't check its type,
you must do it yourself or you will risk crashes.
.. c:function:: PyObject *PyMemoryView_GET_BASE(PyObject *mview)
Return a pointer to the buffer structure wrapped by the given
memoryview object. The object **must** be a memoryview instance;
this macro doesn't check its type, you must do it yourself or you
will risk crashes.
Return either a pointer to the exporting object that the memoryview is based
on or *NULL* if the memoryview has been created by one of the functions
:c:func:`PyMemoryView_FromMemory` or :c:func:`PyMemoryView_FromBuffer`.
*mview* **must** be a memoryview instance.
......@@ -1198,46 +1198,74 @@ Buffer Object Structures
.. sectionauthor:: Greg J. Stein <greg@lyra.org>
.. sectionauthor:: Benjamin Peterson
.. sectionauthor:: Stefan Krah
.. c:type:: PyBufferProcs
The :ref:`buffer interface <bufferobjects>` exports a model where an object can expose its internal
data.
This structure holds pointers to the functions required by the
:ref:`Buffer protocol <bufferobjects>`. The protocol defines how
an exporter object can expose its internal data to consumer objects.
If an object does not export the buffer interface, then its :attr:`tp_as_buffer`
member in the :c:type:`PyTypeObject` structure should be *NULL*. Otherwise, the
:attr:`tp_as_buffer` will point to a :c:type:`PyBufferProcs` structure.
.. c:member:: getbufferproc PyBufferProcs.bf_getbuffer
The signature of this function is::
.. c:type:: PyBufferProcs
int (PyObject *exporter, Py_buffer *view, int flags);
Handle a request to *exporter* to fill in *view* as specified by *flags*.
A standard implementation of this function will take these steps:
- Check if the request can be met. If not, raise :c:data:`PyExc_BufferError`,
set :c:data:`view->obj` to *NULL* and return -1.
- Fill in the requested fields.
- Increment an internal counter for the number of exports.
- Set :c:data:`view->obj` to *exporter* and increment the reference count of *exporter*.
- Return 0.
The individual fields of *view* are described in section
:ref:`Buffer structure <buffer-structure>`, the rules how an exporter
must react to specific requests are in section
:ref:`Buffer request types <buffer-request-types>`.
All memory pointed to in the :c:type:`Py_buffer` structure belongs to
the exporter and must remain valid until there are no consumers left.
:c:member:`~Py_buffer.shape`, :c:member:`~Py_buffer.strides`,
:c:member:`~Py_buffer.suboffsets` and :c:member:`~Py_buffer.internal`
are read-only for the consumer.
:c:func:`PyBuffer_FillInfo` provides an easy way of exposing a simple
bytes buffer while dealing correctly with all request types.
:c:func:`PyObject_GetBuffer` is the interface for the consumer that
wraps this function.
.. c:member:: releasebufferproc PyBufferProcs.bf_releasebuffer
The signature of this function is::
void (PyObject *exporter, Py_buffer *view);
Structure used to hold the function pointers which define an implementation of
the buffer protocol.
Handle a request to release the resources of the buffer. If no resources
need to be released, this field may be *NULL*. A standard implementation
of this function will take these steps:
.. c:member:: getbufferproc bf_getbuffer
- Decrement an internal counter for the number of exports.
This should fill a :c:type:`Py_buffer` with the necessary data for
exporting the type. The signature of :data:`getbufferproc` is ``int
(PyObject *obj, Py_buffer *view, int flags)``. *obj* is the object to
export, *view* is the :c:type:`Py_buffer` struct to fill, and *flags* gives
the conditions the caller wants the memory under. (See
:c:func:`PyObject_GetBuffer` for all flags.) :c:member:`bf_getbuffer` is
responsible for filling *view* with the appropriate information.
(:c:func:`PyBuffer_FillInfo` can be used in simple cases.) See
:c:type:`Py_buffer`\s docs for what needs to be filled in.
- If the counter is 0, free all memory associated with *view*.
The exporter MUST use the :c:member:`~Py_buffer.internal` field to keep
track of buffer-specific resources (if present). This field is guaranteed
to remain constant, while a consumer MAY pass a copy of the original buffer
as the *view* argument.
.. c:member:: releasebufferproc bf_releasebuffer
This should release the resources of the buffer. The signature of
:c:data:`releasebufferproc` is ``void (PyObject *obj, Py_buffer *view)``.
If the :c:data:`bf_releasebuffer` function is not provided (i.e. it is
*NULL*), then it does not ever need to be called.
This function MUST NOT decrement :c:data:`view->obj`, since that is
done automatically in :c:func:`PyBuffer_Release`.
The exporter of the buffer interface must make sure that any memory
pointed to in the :c:type:`Py_buffer` structure remains valid until
releasebuffer is called. Exporters will need to define a
:c:data:`bf_releasebuffer` function if they can re-allocate their memory,
strides, shape, suboffsets, or format variables which they might share
through the struct bufferinfo.
See :c:func:`PyBuffer_Release`.
:c:func:`PyBuffer_Release` is the interface for the consumer that
wraps this function.
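Putting both slots together, a minimal exporter that owns a plain byte array
could be sketched as follows. ``MyBuffer``, its fields and the export counter
are illustrative assumptions, not part of any API::

typedef struct {
    PyObject_HEAD
    char *data;          /* heap-allocated byte array */
    Py_ssize_t size;     /* number of bytes in data */
    Py_ssize_t exports;  /* number of active buffer exports */
} MyBuffer;

static int
mybuffer_getbuffer(PyObject *self, Py_buffer *view, int flags)
{
    MyBuffer *obj = (MyBuffer *)self;

    /* PyBuffer_FillInfo() handles all request types for a contiguous
       chunk of unsigned bytes; on success it sets view->obj to a new
       reference to self, on failure it raises BufferError and sets
       view->obj to NULL. */
    if (PyBuffer_FillInfo(view, self, obj->data, obj->size, 0, flags) < 0)
        return -1;
    obj->exports++;
    return 0;
}

static void
mybuffer_releasebuffer(PyObject *self, Py_buffer *view)
{
    /* Do not decrement view->obj here; PyBuffer_Release() does that. */
    ((MyBuffer *)self)->exports--;
}

static PyBufferProcs mybuffer_as_buffer = {
    mybuffer_getbuffer,        /* bf_getbuffer */
    mybuffer_releasebuffer     /* bf_releasebuffer */
};

The type object of such an exporter would then set
``tp_as_buffer = &mybuffer_as_buffer``.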
......@@ -2377,7 +2377,7 @@ memoryview type
:class:`memoryview` objects allow Python code to access the internal data
of an object that supports the :ref:`buffer protocol <bufferobjects>` without
copying. Memory is generally interpreted as simple bytes.
copying.
.. class:: memoryview(obj)
......@@ -2391,52 +2391,88 @@ copying. Memory is generally interpreted as simple bytes.
is a single byte, but other types such as :class:`array.array` may have
bigger elements.
``len(view)`` returns the total number of elements in the memoryview,
*view*. The :class:`~memoryview.itemsize` attribute will give you the
``len(view)`` is equal to the length of :meth:`~memoryview.tolist`.
If ``view.ndim == 0``, the length is 1. If ``view.ndim == 1``, the length
is equal to the number of elements in the view. For higher dimensions,
the length is equal to the length of the nested list representation of
the view. The :attr:`~memoryview.itemsize` attribute will give you the
number of bytes in a single element.
A :class:`memoryview` supports slicing to expose its data. Taking a single
index will return a single element as a :class:`bytes` object. Full
slicing will result in a subview::
A :class:`memoryview` supports slicing to expose its data. If
:class:`~memoryview.format` is one of the native format specifiers
from the :mod:`struct` module, indexing will return a single element
with the correct type. Full slicing will result in a subview::
>>> v = memoryview(b'abcefg')
>>> v[1]
98
>>> v[-1]
103
>>> v[1:4]
<memory at 0x7f3ddc9f4350>
>>> bytes(v[1:4])
b'bce'
Other native formats::
>>> import array
>>> a = array.array('l', [-11111111, 22222222, -33333333, 44444444])
>>> a[0]
-11111111
>>> a[-1]
44444444
>>> a[2:3].tolist()
[-33333333]
>>> a[::2].tolist()
[-11111111, -33333333]
>>> a[::-1].tolist()
[44444444, -33333333, 22222222, -11111111]
>>> v = memoryview(b'abcefg')
>>> v[1]
b'b'
>>> v[-1]
b'g'
>>> v[1:4]
<memory at 0x77ab28>
>>> bytes(v[1:4])
b'bce'
If the object the memoryview is over supports changing its data, the
memoryview supports slice assignment::
.. versionadded:: 3.3
If the underlying object is writable, the memoryview supports slice
assignment. Resizing is not allowed::
>>> data = bytearray(b'abcefg')
>>> v = memoryview(data)
>>> v.readonly
False
>>> v[0] = b'z'
>>> v[0] = ord(b'z')
>>> data
bytearray(b'zbcefg')
>>> v[1:4] = b'123'
>>> data
bytearray(b'z123fg')
>>> v[2] = b'spam'
>>> v[2:3] = b'spam'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: cannot modify size of memoryview object
Notice how the size of the memoryview object cannot be changed.
File "<stdin>", line 1, in <module>
ValueError: memoryview assignment: lvalue and rvalue have different structures
>>> v[2:6] = b'spam'
>>> data
bytearray(b'z1spam')
Memoryviews of hashable (read-only) types are also hashable and their
hash value matches the corresponding bytes object::
Memoryviews of hashable (read-only) types are also hashable. The hash
is defined as ``hash(m) == hash(m.tobytes())``::
>>> v = memoryview(b'abcefg')
>>> hash(v) == hash(b'abcefg')
True
>>> hash(v[2:4]) == hash(b'ce')
True
>>> hash(v[::-2]) == hash(b'abcefg'[::-2])
True
Hashing of multi-dimensional objects is supported::
>>> buf = bytes(list(range(12)))
>>> x = memoryview(buf)
>>> y = x.cast('B', shape=[2,2,3])
>>> x.tolist()
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
>>> y.tolist()
[[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [9, 10, 11]]]
>>> hash(x) == hash(y) == hash(y.tobytes())
True
.. versionchanged:: 3.3
Memoryview objects are now hashable.
......@@ -2455,12 +2491,20 @@ copying. Memory is generally interpreted as simple bytes.
>>> bytes(m)
b'abc'
For non-contiguous arrays the result is equal to the flattened list
representation with all elements converted to bytes.
.. method:: tolist()
Return the data in the buffer as a list of integers. ::
Return the data in the buffer as a list of elements. ::
>>> memoryview(b'abc').tolist()
[97, 98, 99]
>>> import array
>>> a = array.array('d', [1.1, 2.2, 3.3])
>>> m = memoryview(a)
>>> m.tolist()
[1.1, 2.2, 3.3]
.. method:: release()
......@@ -2487,7 +2531,7 @@ copying. Memory is generally interpreted as simple bytes.
>>> with memoryview(b'abc') as m:
... m[0]
...
b'a'
97
>>> m[0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
......@@ -2495,45 +2539,219 @@ copying. Memory is generally interpreted as simple bytes.
.. versionadded:: 3.2
.. method:: cast(format[, shape])
Cast a memoryview to a new format or shape. *shape* defaults to
``[byte_length//new_itemsize]``, which means that the result view
will be one-dimensional. The return value is a new memoryview, but
the buffer itself is not copied. Supported casts are 1D -> C-contiguous
and C-contiguous -> 1D. One of the formats must be a byte format
('B', 'b' or 'c'). The byte length of the result must be the same
as the original length.
Cast 1D/long to 1D/unsigned bytes::
>>> import array
>>> a = array.array('l', [1,2,3])
>>> x = memoryview(a)
>>> x.format
'l'
>>> x.itemsize
8
>>> len(x)
3
>>> x.nbytes
24
>>> y = x.cast('B')
>>> y.format
'B'
>>> y.itemsize
1
>>> len(y)
24
>>> y.nbytes
24
Cast 1D/unsigned bytes to 1D/char::
>>> b = bytearray(b'zyz')
>>> x = memoryview(b)
>>> x[0] = b'a'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: memoryview: invalid value for format "B"
>>> y = x.cast('c')
>>> y[0] = b'a'
>>> b
bytearray(b'ayz')
Cast 1D/bytes to 3D/ints to 1D/signed char::
>>> import struct
>>> buf = struct.pack("i"*12, *list(range(12)))
>>> x = memoryview(buf)
>>> y = x.cast('i', shape=[2,2,3])
>>> y.tolist()
[[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [9, 10, 11]]]
>>> y.format
'i'
>>> y.itemsize
4
>>> len(y)
2
>>> y.nbytes
48
>>> z = y.cast('b')
>>> z.format
'b'
>>> z.itemsize
1
>>> len(z)
48
>>> z.nbytes
48
Cast 1D/unsigned char to 2D/unsigned long::
>>> buf = struct.pack("L"*6, *list(range(6)))
>>> x = memoryview(buf)
>>> y = x.cast('L', shape=[2,3])
>>> len(y)
2
>>> y.nbytes
48
>>> y.tolist()
[[0, 1, 2], [3, 4, 5]]
.. versionadded:: 3.3
There are also several readonly attributes available:
.. attribute:: obj
The underlying object of the memoryview::
>>> b = bytearray(b'xyz')
>>> m = memoryview(b)
>>> m.obj is b
True
.. versionadded:: 3.3
.. attribute:: nbytes
``nbytes == product(shape) * itemsize == len(m.tobytes())``. This is
the amount of space in bytes that the array would use in a contiguous
representation. It is not necessarily equal to ``len(m)``::
>>> import array
>>> a = array.array('i', [1,2,3,4,5])
>>> m = memoryview(a)
>>> len(m)
5
>>> m.nbytes
20
>>> y = m[::2]
>>> len(y)
3
>>> y.nbytes
12
>>> len(y.tobytes())
12
Multi-dimensional arrays::
>>> import struct
>>> buf = struct.pack("d"*12, *[1.5*x for x in range(12)])
>>> x = memoryview(buf)
>>> y = x.cast('d', shape=[3,4])
>>> y.tolist()
[[0.0, 1.5, 3.0, 4.5], [6.0, 7.5, 9.0, 10.5], [12.0, 13.5, 15.0, 16.5]]
>>> len(y)
3
>>> y.nbytes
96
.. versionadded:: 3.3
.. attribute:: readonly
A bool indicating whether the memory is read only.
.. attribute:: format
A string containing the format (in :mod:`struct` module style) for each
element in the view. This defaults to ``'B'``, a simple bytestring.
element in the view. A memoryview can be created from exporters with
arbitrary format strings, but some methods (e.g. :meth:`tolist`) are
restricted to native single element formats. Special care must be taken
when comparing memoryviews. Since comparisons are required to return a
value for ``==`` and ``!=``, two memoryviews referencing the same
exporter can compare as not-equal if the exporter's format is not
understood::
>>> from ctypes import BigEndianStructure, c_long
>>> class BEPoint(BigEndianStructure):
... _fields_ = [("x", c_long), ("y", c_long)]
...
>>> point = BEPoint(100, 200)
>>> a = memoryview(point)
>>> b = memoryview(point)
>>> a == b
False
>>> a.tolist()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NotImplementedError: memoryview: unsupported format T{>l:x:>l:y:}
.. attribute:: itemsize
The size in bytes of each element of the memoryview::
>>> m = memoryview(array.array('H', [1,2,3]))
>>> import array, struct
>>> m = memoryview(array.array('H', [32000, 32001, 32002]))
>>> m.itemsize
2
>>> m[0]
b'\x01\x00'
>>> len(m[0]) == m.itemsize
32000
>>> struct.calcsize('H') == m.itemsize
True
.. attribute:: shape
A tuple of integers the length of :attr:`ndim` giving the shape of the
memory as a N-dimensional array.
.. attribute:: ndim
An integer indicating how many dimensions of a multi-dimensional array the
memory represents.
.. attribute:: shape
A tuple of integers the length of :attr:`ndim` giving the shape of the
memory as a N-dimensional array.
.. attribute:: strides
A tuple of integers the length of :attr:`ndim` giving the size in bytes to
access each element for each dimension of the array.
.. attribute:: readonly
.. attribute:: suboffsets
A bool indicating whether the memory is read only.
Used internally for PIL-style arrays. The value is informational only.
.. attribute:: c_contiguous
A bool indicating whether the memory is C-contiguous.
.. versionadded:: 3.3
.. attribute:: f_contiguous
A bool indicating whether the memory is Fortran contiguous.
.. versionadded:: 3.3
.. attribute:: contiguous
A bool indicating whether the memory is contiguous.
.. memoryview.suboffsets isn't documented because it only seems useful for C
.. versionadded:: 3.3
.. _typecontextmanager:
......
......@@ -49,6 +49,62 @@
This article explains the new features in Python 3.3, compared to 3.2.
.. _pep-3118:
PEP 3118: New memoryview implementation and buffer protocol documentation
=========================================================================
:issue:`10181` - memoryview bug fixes and features.
Written by Stefan Krah.
The new memoryview implementation comprehensively fixes all ownership and
lifetime issues of dynamically allocated fields in the Py_buffer struct
that led to multiple crash reports. Additionally, several functions that
crashed or returned incorrect results for non-contiguous or multi-dimensional
input have been fixed.
The memoryview object now has a PEP-3118 compliant getbufferproc()
that checks the consumer's request type. Many new features have been
added, most of them work in full generality for non-contiguous arrays
and arrays with suboffsets.
The documentation has been updated, clearly spelling out responsibilities
for both exporters and consumers. Buffer request flags are grouped into
basic and compound flags. The memory layout of non-contiguous and
multi-dimensional NumPy-style arrays is explained.
Features
--------
* All native single character format specifiers in struct module syntax
(optionally prefixed with '@') are now supported.
* With some restrictions, the cast() method allows changing of format and
shape of C-contiguous arrays.
* Multi-dimensional list representations are supported for any array type.
* Multi-dimensional comparisons are supported for any array type.
* All array types are hashable if the exporting object is hashable
and the view is read-only.
* Arbitrary slicing of any 1-D array type is supported. For example, it
is now possible to reverse a memoryview in O(1) by using a negative step.
API changes
-----------
* The maximum number of dimensions is officially limited to 64.
* The representation of empty shape, strides and suboffsets is now
an empty tuple instead of None.
* Accessing a memoryview element with format 'B' (unsigned bytes)
now returns an integer (in accordance with the struct module syntax).
For returning a bytes object the view must be cast to 'c' first.
.. _pep-393:
PEP 393: Flexible String Representation
......
......@@ -559,7 +559,7 @@ xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx*/
/* Copy the data from the src buffer to the buffer of destination
*/
PyAPI_FUNC(int) PyBuffer_IsContiguous(Py_buffer *view, char fort);
PyAPI_FUNC(int) PyBuffer_IsContiguous(const Py_buffer *view, char fort);
PyAPI_FUNC(void) PyBuffer_FillContiguousStrides(int ndims,
......
......@@ -6,70 +6,64 @@
extern "C" {
#endif
#ifndef Py_LIMITED_API
PyAPI_DATA(PyTypeObject) _PyManagedBuffer_Type;
#endif
PyAPI_DATA(PyTypeObject) PyMemoryView_Type;
#define PyMemoryView_Check(op) (Py_TYPE(op) == &PyMemoryView_Type)
#ifndef Py_LIMITED_API
/* Get a pointer to the underlying Py_buffer of a memoryview object. */
/* Get a pointer to the memoryview's private copy of the exporter's buffer. */
#define PyMemoryView_GET_BUFFER(op) (&((PyMemoryViewObject *)(op))->view)
/* Get a pointer to the PyObject from which originates a memoryview object. */
/* Get a pointer to the exporting object (this may be NULL!). */
#define PyMemoryView_GET_BASE(op) (((PyMemoryViewObject *)(op))->view.obj)
#endif
PyAPI_FUNC(PyObject *) PyMemoryView_GetContiguous(PyObject *base,
int buffertype,
char fort);
/* Return a contiguous chunk of memory representing the buffer
from an object in a memory view object. If a copy is made then the
base object for the memory view will be a *new* bytes object.
Otherwise, the base-object will be the object itself and no
data-copying will be done.
The buffertype argument can be PyBUF_READ, PyBUF_WRITE,
PyBUF_SHADOW to determine whether the returned buffer
should be READONLY, WRITABLE, or set to update the
original buffer if a copy must be made. If buffertype is
PyBUF_WRITE and the buffer is not contiguous an error will
be raised. In this circumstance, the user can use
PyBUF_SHADOW to ensure that a writable temporary
contiguous buffer is returned. The contents of this
contiguous buffer will be copied back into the original
object after the memoryview object is deleted as long as
the original object is writable and allows setting an
exclusive write lock. If this is not allowed by the
original object, then a BufferError is raised.
If the object is multi-dimensional and if fortran is 'F',
the first dimension of the underlying array will vary the
fastest in the buffer. If fortran is 'C', then the last
dimension will vary the fastest (C-style contiguous). If
fortran is 'A', then it does not matter and you will get
whatever the object decides is more efficient.
A new reference is returned that must be DECREF'd when finished.
*/
PyAPI_FUNC(PyObject *) PyMemoryView_FromObject(PyObject *base);
PyAPI_FUNC(PyObject *) PyMemoryView_FromMemory(char *mem, Py_ssize_t size,
int flags);
#ifndef Py_LIMITED_API
PyAPI_FUNC(PyObject *) PyMemoryView_FromBuffer(Py_buffer *info);
/* create new if bufptr is NULL
will be a new bytesobject in base */
#endif
PyAPI_FUNC(PyObject *) PyMemoryView_GetContiguous(PyObject *base,
int buffertype,
char order);
/* The struct is declared here so that macros can work, but it shouldn't
be considered public. Don't access those fields directly, use the macros
/* The structs are declared here so that macros can work, but they shouldn't
be considered public. Don't access their fields directly, use the macros
and functions instead! */
#ifndef Py_LIMITED_API
#define _Py_MANAGED_BUFFER_RELEASED 0x001 /* access to exporter blocked */
#define _Py_MANAGED_BUFFER_FREE_FORMAT 0x002 /* free format */
typedef struct {
PyObject_HEAD
Py_buffer view;
Py_hash_t hash;
int flags; /* state flags */
Py_ssize_t exports; /* number of direct memoryview exports */
Py_buffer master; /* snapshot buffer obtained from the original exporter */
} _PyManagedBufferObject;
/* static storage used for casting between formats */
#define _Py_MEMORYVIEW_MAX_FORMAT 3 /* must be >= 3 */
/* memoryview state flags */
#define _Py_MEMORYVIEW_RELEASED 0x001 /* access to master buffer blocked */
#define _Py_MEMORYVIEW_C 0x002 /* C-contiguous layout */
#define _Py_MEMORYVIEW_FORTRAN 0x004 /* Fortran contiguous layout */
#define _Py_MEMORYVIEW_SCALAR 0x008 /* scalar: ndim = 0 */
#define _Py_MEMORYVIEW_PIL 0x010 /* PIL-style layout */
typedef struct {
PyObject_VAR_HEAD
_PyManagedBufferObject *mbuf; /* managed buffer */
Py_hash_t hash; /* hash value for read-only views */
int flags; /* state flags */
Py_ssize_t exports; /* number of buffer re-exports */
Py_buffer view; /* private copy of the exporter's view */
char format[_Py_MEMORYVIEW_MAX_FORMAT]; /* used for casting */
Py_ssize_t ob_array[1]; /* shape, strides, suboffsets */
} PyMemoryViewObject;
#endif
......
......@@ -186,15 +186,16 @@ typedef struct bufferinfo {
Py_ssize_t *shape;
Py_ssize_t *strides;
Py_ssize_t *suboffsets;
Py_ssize_t smalltable[2]; /* static store for shape and strides of
mono-dimensional buffers. */
void *internal;
} Py_buffer;
typedef int (*getbufferproc)(PyObject *, Py_buffer *, int);
typedef void (*releasebufferproc)(PyObject *, Py_buffer *);
/* Flags for getting buffers */
/* Maximum number of dimensions */
#define PyBUF_MAX_NDIM 64
/* Flags for getting buffers */
#define PyBUF_SIMPLE 0
#define PyBUF_WRITABLE 0x0001
/* we used to include an E, backwards compatible alias */
......
......@@ -25,14 +25,17 @@ class Test(unittest.TestCase):
v = memoryview(ob)
try:
self.assertEqual(normalize(v.format), normalize(fmt))
if shape is not None:
if shape:
self.assertEqual(len(v), shape[0])
else:
self.assertEqual(len(v) * sizeof(itemtp), sizeof(ob))
self.assertEqual(v.itemsize, sizeof(itemtp))
self.assertEqual(v.shape, shape)
# ctypes objects always have a non-strided memory block
self.assertEqual(v.strides, None)
# XXX Issue #12851: PyCData_NewGetBuffer() must provide strides
# if requested. memoryview currently reconstructs missing
# stride information, so this assert will fail.
# self.assertEqual(v.strides, ())
# they are always read/write
self.assertFalse(v.readonly)
......@@ -52,14 +55,15 @@ class Test(unittest.TestCase):
v = memoryview(ob)
try:
self.assertEqual(v.format, fmt)
if shape is not None:
if shape:
self.assertEqual(len(v), shape[0])
else:
self.assertEqual(len(v) * sizeof(itemtp), sizeof(ob))
self.assertEqual(v.itemsize, sizeof(itemtp))
self.assertEqual(v.shape, shape)
# ctypes objects always have a non-strided memory block
self.assertEqual(v.strides, None)
# XXX Issue #12851
# self.assertEqual(v.strides, ())
# they are always read/write
self.assertFalse(v.readonly)
......@@ -110,34 +114,34 @@ native_types = [
## simple types
(c_char, "<c", None, c_char),
(c_byte, "<b", None, c_byte),
(c_ubyte, "<B", None, c_ubyte),
(c_short, "<h", None, c_short),
(c_ushort, "<H", None, c_ushort),
(c_char, "<c", (), c_char),
(c_byte, "<b", (), c_byte),
(c_ubyte, "<B", (), c_ubyte),
(c_short, "<h", (), c_short),
(c_ushort, "<H", (), c_ushort),
# c_int and c_uint may be aliases to c_long
#(c_int, "<i", None, c_int),
#(c_uint, "<I", None, c_uint),
#(c_int, "<i", (), c_int),
#(c_uint, "<I", (), c_uint),
(c_long, "<l", None, c_long),
(c_ulong, "<L", None, c_ulong),
(c_long, "<l", (), c_long),
(c_ulong, "<L", (), c_ulong),
# c_longlong and c_ulonglong are aliases on 64-bit platforms
#(c_longlong, "<q", None, c_longlong),
#(c_ulonglong, "<Q", None, c_ulonglong),
(c_float, "<f", None, c_float),
(c_double, "<d", None, c_double),
(c_float, "<f", (), c_float),
(c_double, "<d", (), c_double),
# c_longdouble may be an alias to c_double
(c_bool, "<?", None, c_bool),
(py_object, "<O", None, py_object),
(c_bool, "<?", (), c_bool),
(py_object, "<O", (), py_object),
## pointers
(POINTER(c_byte), "&<b", None, POINTER(c_byte)),
(POINTER(POINTER(c_long)), "&&<l", None, POINTER(POINTER(c_long))),
(POINTER(c_byte), "&<b", (), POINTER(c_byte)),
(POINTER(POINTER(c_long)), "&&<l", (), POINTER(POINTER(c_long))),
## arrays and pointers
......@@ -145,32 +149,32 @@ native_types = [
(c_float * 4 * 3 * 2, "(2,3,4)<f", (2,3,4), c_float),
(POINTER(c_short) * 2, "(2)&<h", (2,), POINTER(c_short)),
(POINTER(c_short) * 2 * 3, "(3,2)&<h", (3,2,), POINTER(c_short)),
(POINTER(c_short * 2), "&(2)<h", None, POINTER(c_short)),
(POINTER(c_short * 2), "&(2)<h", (), POINTER(c_short)),
## structures and unions
(Point, "T{<l:x:<l:y:}", None, Point),
(Point, "T{<l:x:<l:y:}", (), Point),
# packed structures do not implement the pep
(PackedPoint, "B", None, PackedPoint),
(Point2, "T{<l:x:<l:y:}", None, Point2),
(EmptyStruct, "T{}", None, EmptyStruct),
(PackedPoint, "B", (), PackedPoint),
(Point2, "T{<l:x:<l:y:}", (), Point2),
(EmptyStruct, "T{}", (), EmptyStruct),
# the pep doesn't support unions
(aUnion, "B", None, aUnion),
(aUnion, "B", (), aUnion),
## pointer to incomplete structure
(Incomplete, "B", None, Incomplete),
(POINTER(Incomplete), "&B", None, POINTER(Incomplete)),
(Incomplete, "B", (), Incomplete),
(POINTER(Incomplete), "&B", (), POINTER(Incomplete)),
# 'Complete' is a structure that starts incomplete, but is completed after the
# pointer type to it has been created.
(Complete, "T{<l:a:}", None, Complete),
(Complete, "T{<l:a:}", (), Complete),
# Unfortunately the pointer format string is not fixed...
(POINTER(Complete), "&B", None, POINTER(Complete)),
(POINTER(Complete), "&B", (), POINTER(Complete)),
## other
# function signatures are not implemented
(CFUNCTYPE(None), "X{}", None, CFUNCTYPE(None)),
(CFUNCTYPE(None), "X{}", (), CFUNCTYPE(None)),
]
......@@ -186,10 +190,10 @@ class LEPoint(LittleEndianStructure):
# and little endian machines.
#
endian_types = [
(BEPoint, "T{>l:x:>l:y:}", None, BEPoint),
(LEPoint, "T{<l:x:<l:y:}", None, LEPoint),
(POINTER(BEPoint), "&T{>l:x:>l:y:}", None, POINTER(BEPoint)),
(POINTER(LEPoint), "&T{<l:x:<l:y:}", None, POINTER(LEPoint)),
(BEPoint, "T{>l:x:>l:y:}", (), BEPoint),
(LEPoint, "T{<l:x:<l:y:}", (), LEPoint),
(POINTER(BEPoint), "&T{>l:x:>l:y:}", (), POINTER(BEPoint)),
(POINTER(LEPoint), "&T{<l:x:<l:y:}", (), POINTER(LEPoint)),
]
if __name__ == "__main__":
......
......@@ -24,15 +24,14 @@ class AbstractMemoryTests:
return filter(None, [self.ro_type, self.rw_type])
def check_getitem_with_type(self, tp):
item = self.getitem_type
b = tp(self._source)
oldrefcount = sys.getrefcount(b)
m = self._view(b)
self.assertEqual(m[0], item(b"a"))
self.assertIsInstance(m[0], bytes)
self.assertEqual(m[5], item(b"f"))
self.assertEqual(m[-1], item(b"f"))
self.assertEqual(m[-6], item(b"a"))
self.assertEqual(m[0], ord(b"a"))
self.assertIsInstance(m[0], int)
self.assertEqual(m[5], ord(b"f"))
self.assertEqual(m[-1], ord(b"f"))
self.assertEqual(m[-6], ord(b"a"))
# Bounds checking
self.assertRaises(IndexError, lambda: m[6])
self.assertRaises(IndexError, lambda: m[-7])
......@@ -76,7 +75,9 @@ class AbstractMemoryTests:
b = self.rw_type(self._source)
oldrefcount = sys.getrefcount(b)
m = self._view(b)
m[0] = tp(b"0")
m[0] = ord(b'1')
self._check_contents(tp, b, b"1bcdef")
m[0:1] = tp(b"0")
self._check_contents(tp, b, b"0bcdef")
m[1:3] = tp(b"12")
self._check_contents(tp, b, b"012def")
......@@ -102,10 +103,17 @@ class AbstractMemoryTests:
# Wrong index/slice types
self.assertRaises(TypeError, setitem, 0.0, b"a")
self.assertRaises(TypeError, setitem, (0,), b"a")
self.assertRaises(TypeError, setitem, (slice(0,1,1), 0), b"a")
self.assertRaises(TypeError, setitem, (0, slice(0,1,1)), b"a")
self.assertRaises(TypeError, setitem, (0,), b"a")
self.assertRaises(TypeError, setitem, "a", b"a")
# Not implemented: multidimensional slices
slices = (slice(0,1,1), slice(0,1,2))
self.assertRaises(NotImplementedError, setitem, slices, b"a")
# Trying to resize the memory object
self.assertRaises(ValueError, setitem, 0, b"")
self.assertRaises(ValueError, setitem, 0, b"ab")
exc = ValueError if m.format == 'c' else TypeError
self.assertRaises(exc, setitem, 0, b"")
self.assertRaises(exc, setitem, 0, b"ab")
self.assertRaises(ValueError, setitem, slice(1,1), b"a")
self.assertRaises(ValueError, setitem, slice(0,2), b"a")
......@@ -175,7 +183,7 @@ class AbstractMemoryTests:
self.assertEqual(m.shape, (6,))
self.assertEqual(len(m), 6)
self.assertEqual(m.strides, (self.itemsize,))
self.assertEqual(m.suboffsets, None)
self.assertEqual(m.suboffsets, ())
return m
def test_attributes_readonly(self):
......@@ -209,12 +217,16 @@ class AbstractMemoryTests:
# If tp is a factory rather than a plain type, skip
continue
class MyView():
def __init__(self, base):
self.m = memoryview(base)
class MySource(tp):
pass
class MyObject:
pass
# Create a reference cycle through a memoryview object
# Create a reference cycle through a memoryview object.
# This exercises mbuf_clear().
b = MySource(tp(b'abc'))
m = self._view(b)
o = MyObject()
......@@ -226,6 +238,17 @@ class AbstractMemoryTests:
gc.collect()
self.assertTrue(wr() is None, wr())
# This exercises memory_clear().
m = MyView(tp(b'abc'))
o = MyObject()
m.x = m
m.o = o
wr = weakref.ref(o)
m = o = None
# The cycle must be broken
gc.collect()
self.assertTrue(wr() is None, wr())
def _check_released(self, m, tp):
check = self.assertRaisesRegex(ValueError, "released")
with check: bytes(m)
......@@ -283,9 +306,12 @@ class AbstractMemoryTests:
i = io.BytesIO(b'ZZZZ')
self.assertRaises(TypeError, i.readinto, m)
def test_getbuf_fail(self):
self.assertRaises(TypeError, self._view, {})
def test_hash(self):
# Memoryviews of readonly (hashable) types are hashable, and they
# hash as the corresponding object.
# hash as hash(obj.tobytes()).
tp = self.ro_type
if tp is None:
self.skipTest("no read-only type to test")
......
......@@ -773,8 +773,8 @@ class SizeofTest(unittest.TestCase):
check(int(PyLong_BASE), size(vh) + 2*self.longdigit)
check(int(PyLong_BASE**2-1), size(vh) + 2*self.longdigit)
check(int(PyLong_BASE**2), size(vh) + 3*self.longdigit)
# memory (Py_buffer + hash value)
check(memoryview(b''), size(h + 'PP2P2i7P' + 'P'))
# memoryview
check(memoryview(b''), size(h + 'PPiP4P2i5P3cP'))
# module
check(unittest, size(h + '3P'))
# None
......
......@@ -1041,6 +1041,7 @@ John Viega
Kannan Vijayan
Kurt Vile
Norman Vine
Pauli Virtanen
Frank Visser
Johannes Vogel
Sjoerd de Vries
......
......@@ -10,6 +10,23 @@ What's New in Python 3.3 Alpha 1?
Core and Builtins
-----------------
- Issue #10181: New memoryview implementation fixes multiple ownership
and lifetime issues of dynamically allocated Py_buffer members (#9990)
as well as crashes (#8305, #7433). Many new features have been added
(See whatsnew/3.3), and the documentation has been updated extensively.
The ndarray test object from _testbuffer.c implements all aspects of
PEP-3118, so further development towards the complete implementation
of the PEP can proceed in a test-driven manner.
Thanks to Nick Coghlan, Antoine Pitrou and Pauli Virtanen for review
and many ideas.
- Issue #12834: Fix incorrect results of memoryview.tobytes() for
non-contiguous arrays.
- Issue #5231: Introduce memoryview.cast() method that allows changing
format and shape without making a copy of the underlying memory.
- Issue #14084: Fix a file descriptor leak when importing a module with a
bad encoding.
......
......@@ -412,4 +412,15 @@
fun:SHA1_Update
}
{
test_buffer_non_debug
Memcheck:Addr4
fun:PyUnicodeUCS2_FSConverter
}
{
test_buffer_non_debug
Memcheck:Addr4
fun:PyUnicode_FSConverter
}
/* C Extension module to test all aspects of PEP-3118.
Written by Stefan Krah. */
#define PY_SSIZE_T_CLEAN
#include "Python.h"
/* struct module */
PyObject *structmodule = NULL;
PyObject *Struct = NULL;
PyObject *calcsize = NULL;
/* cache simple format string */
static const char *simple_fmt = "B";
PyObject *simple_format = NULL;
#define SIMPLE_FORMAT(fmt) (fmt == NULL || strcmp(fmt, "B") == 0)
/**************************************************************************/
/* NDArray Object */
/**************************************************************************/
static PyTypeObject NDArray_Type;
#define NDArray_Check(v) (Py_TYPE(v) == &NDArray_Type)
#define CHECK_LIST_OR_TUPLE(v) \
if (!PyList_Check(v) && !PyTuple_Check(v)) { \
PyErr_SetString(PyExc_TypeError, \
#v " must be a list or a tuple"); \
return NULL; \
} \
#define PyMem_XFree(v) \
do { if (v) PyMem_Free(v); } while (0)
/* Maximum number of dimensions. */
#define ND_MAX_NDIM (2 * PyBUF_MAX_NDIM)
/* Check for the presence of suboffsets in the first dimension. */
#define HAVE_PTR(suboffsets) (suboffsets && suboffsets[0] >= 0)
/* Adjust ptr if suboffsets are present. */
#define ADJUST_PTR(ptr, suboffsets) \
(HAVE_PTR(suboffsets) ? *((char**)ptr) + suboffsets[0] : ptr)
/* User configurable flags for the ndarray */
#define ND_VAREXPORT 0x001 /* change layout while buffers are exported */
/* User configurable flags for each base buffer */
#define ND_WRITABLE 0x002 /* mark base buffer as writable */
#define ND_FORTRAN 0x004 /* Fortran contiguous layout */
#define ND_SCALAR 0x008 /* scalar: ndim = 0 */
#define ND_PIL 0x010 /* convert to PIL-style array (suboffsets) */
#define ND_GETBUF_FAIL 0x020 /* test issue 7385 */
/* Default: NumPy style (strides), read-only, no var-export, C-style layout */
#define ND_DEFAULT 0x0
/* Internal flags for the base buffer */
#define ND_C 0x040 /* C contiguous layout (default) */
#define ND_OWN_ARRAYS 0x080 /* consumer owns arrays */
#define ND_UNUSED 0x100 /* initializer */
/* ndarray properties */
#define ND_IS_CONSUMER(nd) \
(((NDArrayObject *)nd)->head == &((NDArrayObject *)nd)->staticbuf)
/* ndbuf->flags properties */
#define ND_C_CONTIGUOUS(flags) (!!(flags&(ND_SCALAR|ND_C)))
#define ND_FORTRAN_CONTIGUOUS(flags) (!!(flags&(ND_SCALAR|ND_FORTRAN)))
#define ND_ANY_CONTIGUOUS(flags) (!!(flags&(ND_SCALAR|ND_C|ND_FORTRAN)))
/* getbuffer() requests */
#define REQ_INDIRECT(flags) ((flags&PyBUF_INDIRECT) == PyBUF_INDIRECT)
#define REQ_C_CONTIGUOUS(flags) ((flags&PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS)
#define REQ_F_CONTIGUOUS(flags) ((flags&PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS)
#define REQ_ANY_CONTIGUOUS(flags) ((flags&PyBUF_ANY_CONTIGUOUS) == PyBUF_ANY_CONTIGUOUS)
#define REQ_STRIDES(flags) ((flags&PyBUF_STRIDES) == PyBUF_STRIDES)
#define REQ_SHAPE(flags) ((flags&PyBUF_ND) == PyBUF_ND)
#define REQ_WRITABLE(flags) (flags&PyBUF_WRITABLE)
#define REQ_FORMAT(flags) (flags&PyBUF_FORMAT)
/* Single node of a list of base buffers. The list is needed to implement
changes in memory layout while exported buffers are active. */
static PyTypeObject NDArray_Type;
struct ndbuf;
typedef struct ndbuf {
struct ndbuf *next;
struct ndbuf *prev;
Py_ssize_t len; /* length of data */
Py_ssize_t offset; /* start of the array relative to data */
char *data; /* raw data */
int flags; /* capabilities of the base buffer */
Py_ssize_t exports; /* number of exports */
Py_buffer base; /* base buffer */
} ndbuf_t;
typedef struct {
PyObject_HEAD
int flags; /* ndarray flags */
ndbuf_t staticbuf; /* static buffer for re-exporting mode */
ndbuf_t *head; /* currently active base buffer */
} NDArrayObject;
static ndbuf_t *
ndbuf_new(Py_ssize_t nitems, Py_ssize_t itemsize, Py_ssize_t offset, int flags)
{
ndbuf_t *ndbuf;
Py_buffer *base;
Py_ssize_t len;
len = nitems * itemsize;
if (offset % itemsize) {
PyErr_SetString(PyExc_ValueError,
"offset must be a multiple of itemsize");
return NULL;
}
if (offset < 0 || offset+itemsize > len) {
PyErr_SetString(PyExc_ValueError, "offset out of bounds");
return NULL;
}
ndbuf = PyMem_Malloc(sizeof *ndbuf);
if (ndbuf == NULL) {
PyErr_NoMemory();
return NULL;
}
ndbuf->next = NULL;
ndbuf->prev = NULL;
ndbuf->len = len;
ndbuf->offset= offset;
ndbuf->data = PyMem_Malloc(len);
if (ndbuf->data == NULL) {
PyErr_NoMemory();
PyMem_Free(ndbuf);
return NULL;
}
ndbuf->flags = flags;
ndbuf->exports = 0;
base = &ndbuf->base;
base->obj = NULL;
base->buf = ndbuf->data;
base->len = len;
base->itemsize = 1;
base->readonly = 0;
base->format = NULL;
base->ndim = 1;
base->shape = NULL;
base->strides = NULL;
base->suboffsets = NULL;
base->internal = ndbuf;
return ndbuf;
}
static void
ndbuf_free(ndbuf_t *ndbuf)
{
Py_buffer *base = &ndbuf->base;
PyMem_XFree(ndbuf->data);
PyMem_XFree(base->format);
PyMem_XFree(base->shape);
PyMem_XFree(base->strides);
PyMem_XFree(base->suboffsets);
PyMem_Free(ndbuf);
}
static void
ndbuf_push(NDArrayObject *nd, ndbuf_t *elt)
{
elt->next = nd->head;
if (nd->head) nd->head->prev = elt;
nd->head = elt;
elt->prev = NULL;
}
static void
ndbuf_delete(NDArrayObject *nd, ndbuf_t *elt)
{
if (elt->prev)
elt->prev->next = elt->next;
else
nd->head = elt->next;
if (elt->next)
elt->next->prev = elt->prev;
ndbuf_free(elt);
}
static void
ndbuf_pop(NDArrayObject *nd)
{
ndbuf_delete(nd, nd->head);
}
static PyObject *
ndarray_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
NDArrayObject *nd;
nd = PyObject_New(NDArrayObject, &NDArray_Type);
if (nd == NULL)
return NULL;
nd->flags = 0;
nd->head = NULL;
return (PyObject *)nd;
}
static void
ndarray_dealloc(NDArrayObject *self)
{
if (self->head) {
if (ND_IS_CONSUMER(self)) {
Py_buffer *base = &self->head->base;
if (self->head->flags & ND_OWN_ARRAYS) {
PyMem_XFree(base->shape);
PyMem_XFree(base->strides);
PyMem_XFree(base->suboffsets);
}
PyBuffer_Release(base);
}
else {
while (self->head)
ndbuf_pop(self);
}
}
PyObject_Del(self);
}
static int
ndarray_init_staticbuf(PyObject *exporter, NDArrayObject *nd, int flags)
{
Py_buffer *base = &nd->staticbuf.base;
if (PyObject_GetBuffer(exporter, base, flags) < 0)
return -1;
nd->head = &nd->staticbuf;
nd->head->next = NULL;
nd->head->prev = NULL;
nd->head->len = -1;
nd->head->offset = -1;
nd->head->data = NULL;
nd->head->flags = base->readonly ? 0 : ND_WRITABLE;
nd->head->exports = 0;
return 0;
}
static void
init_flags(ndbuf_t *ndbuf)
{
if (ndbuf->base.ndim == 0)
ndbuf->flags |= ND_SCALAR;
if (ndbuf->base.suboffsets)
ndbuf->flags |= ND_PIL;
if (PyBuffer_IsContiguous(&ndbuf->base, 'C'))
ndbuf->flags |= ND_C;
if (PyBuffer_IsContiguous(&ndbuf->base, 'F'))
ndbuf->flags |= ND_FORTRAN;
}
/****************************************************************************/
/* Buffer/List conversions */
/****************************************************************************/
static Py_ssize_t *strides_from_shape(const ndbuf_t *, int flags);
/* Get number of members in a struct: see issue #12740 */
typedef struct {
PyObject_HEAD
Py_ssize_t s_size;
Py_ssize_t s_len;
} PyPartialStructObject;
static Py_ssize_t
get_nmemb(PyObject *s)
{
return ((PyPartialStructObject *)s)->s_len;
}
/* Pack all items into the buffer of 'obj'. The 'format' parameter must be
in struct module syntax. For standard C types, a single item is an integer.
For compound types, a single item is a tuple of integers. */
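/* For example: with format "B", 'items' may be a flat list such as
   [1, 2, 3]; with a two-member format such as "LQ", each item is a
   matching tuple, e.g. [(1, 2), (3, 4)]. */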
static int
pack_from_list(PyObject *obj, PyObject *items, PyObject *format,
Py_ssize_t itemsize)
{
PyObject *structobj, *pack_into;
PyObject *args, *offset;
PyObject *item, *tmp;
Py_ssize_t nitems; /* number of items */
Py_ssize_t nmemb; /* number of members in a single item */
Py_ssize_t i, j;
int ret = 0;
assert(PyObject_CheckBuffer(obj));
assert(PyList_Check(items) || PyTuple_Check(items));
structobj = PyObject_CallFunctionObjArgs(Struct, format, NULL);
if (structobj == NULL)
return -1;
nitems = PySequence_Fast_GET_SIZE(items);
nmemb = get_nmemb(structobj);
assert(nmemb >= 1);
pack_into = PyObject_GetAttrString(structobj, "pack_into");
if (pack_into == NULL) {
Py_DECREF(structobj);
return -1;
}
/* nmemb >= 1 */
args = PyTuple_New(2 + nmemb);
if (args == NULL) {
Py_DECREF(pack_into);
Py_DECREF(structobj);
return -1;
}
offset = NULL;
for (i = 0; i < nitems; i++) {
/* Loop invariant: args[j] are borrowed references or NULL. */
PyTuple_SET_ITEM(args, 0, obj);
for (j = 1; j < 2+nmemb; j++)
PyTuple_SET_ITEM(args, j, NULL);
Py_XDECREF(offset);
offset = PyLong_FromSsize_t(i*itemsize);
if (offset == NULL) {
ret = -1;
break;
}
PyTuple_SET_ITEM(args, 1, offset);
item = PySequence_Fast_GET_ITEM(items, i);
if ((PyBytes_Check(item) || PyLong_Check(item) ||
PyFloat_Check(item)) && nmemb == 1) {
PyTuple_SET_ITEM(args, 2, item);
}
else if ((PyList_Check(item) || PyTuple_Check(item)) &&
PySequence_Length(item) == nmemb) {
for (j = 0; j < nmemb; j++) {
tmp = PySequence_Fast_GET_ITEM(item, j);
PyTuple_SET_ITEM(args, 2+j, tmp);
}
}
else {
PyErr_SetString(PyExc_ValueError,
"mismatch between initializer element and format string");
ret = -1;
break;
}
tmp = PyObject_CallObject(pack_into, args);
if (tmp == NULL) {
ret = -1;
break;
}
Py_DECREF(tmp);
}
Py_INCREF(obj); /* args[0] */
/* args[1]: offset is either NULL or should be dealloc'd */
for (i = 2; i < 2+nmemb; i++) {
tmp = PyTuple_GET_ITEM(args, i);
Py_XINCREF(tmp);
}
Py_DECREF(args);
Py_DECREF(pack_into);
Py_DECREF(structobj);
return ret;
}
/* Pack single element */
static int
pack_single(char *ptr, PyObject *item, const char *fmt, Py_ssize_t itemsize)
{
PyObject *structobj = NULL, *pack_into = NULL, *args = NULL;
PyObject *format = NULL, *mview = NULL, *zero = NULL;
Py_ssize_t i, nmemb;
int ret = -1;
PyObject *x;
if (fmt == NULL) fmt = "B";
format = PyUnicode_FromString(fmt);
if (format == NULL)
goto out;
structobj = PyObject_CallFunctionObjArgs(Struct, format, NULL);
if (structobj == NULL)
goto out;
nmemb = get_nmemb(structobj);
assert(nmemb >= 1);
mview = PyMemoryView_FromMemory(ptr, itemsize, PyBUF_WRITE);
if (mview == NULL)
goto out;
zero = PyLong_FromLong(0);
if (zero == NULL)
goto out;
pack_into = PyObject_GetAttrString(structobj, "pack_into");
if (pack_into == NULL)
goto out;
args = PyTuple_New(2+nmemb);
if (args == NULL)
goto out;
PyTuple_SET_ITEM(args, 0, mview);
PyTuple_SET_ITEM(args, 1, zero);
if ((PyBytes_Check(item) || PyLong_Check(item) ||
PyFloat_Check(item)) && nmemb == 1) {
PyTuple_SET_ITEM(args, 2, item);
}
else if ((PyList_Check(item) || PyTuple_Check(item)) &&
PySequence_Length(item) == nmemb) {
for (i = 0; i < nmemb; i++) {
x = PySequence_Fast_GET_ITEM(item, i);
PyTuple_SET_ITEM(args, 2+i, x);
}
}
else {
PyErr_SetString(PyExc_ValueError,
"mismatch between initializer element and format string");
goto args_out;
}
x = PyObject_CallObject(pack_into, args);
if (x != NULL) {
Py_DECREF(x);
ret = 0;
}
args_out:
for (i = 0; i < 2+nmemb; i++)
Py_XINCREF(PyTuple_GET_ITEM(args, i));
Py_XDECREF(args);
out:
Py_XDECREF(pack_into);
Py_XDECREF(zero);
Py_XDECREF(mview);
Py_XDECREF(structobj);
Py_XDECREF(format);
return ret;
}
static void
copy_rec(const Py_ssize_t *shape, Py_ssize_t ndim, Py_ssize_t itemsize,
char *dptr, const Py_ssize_t *dstrides, const Py_ssize_t *dsuboffsets,
char *sptr, const Py_ssize_t *sstrides, const Py_ssize_t *ssuboffsets,
char *mem)
{
Py_ssize_t i;
assert(ndim >= 1);
if (ndim == 1) {
if (!HAVE_PTR(dsuboffsets) && !HAVE_PTR(ssuboffsets) &&
dstrides[0] == itemsize && sstrides[0] == itemsize) {
memmove(dptr, sptr, shape[0] * itemsize);
}
else {
char *p;
assert(mem != NULL);
for (i=0, p=mem; i<shape[0]; p+=itemsize, sptr+=sstrides[0], i++) {
char *xsptr = ADJUST_PTR(sptr, ssuboffsets);
memcpy(p, xsptr, itemsize);
}
for (i=0, p=mem; i<shape[0]; p+=itemsize, dptr+=dstrides[0], i++) {
char *xdptr = ADJUST_PTR(dptr, dsuboffsets);
memcpy(xdptr, p, itemsize);
}
}
return;
}
for (i = 0; i < shape[0]; dptr+=dstrides[0], sptr+=sstrides[0], i++) {
char *xdptr = ADJUST_PTR(dptr, dsuboffsets);
char *xsptr = ADJUST_PTR(sptr, ssuboffsets);
copy_rec(shape+1, ndim-1, itemsize,
xdptr, dstrides+1, dsuboffsets ? dsuboffsets+1 : NULL,
xsptr, sstrides+1, ssuboffsets ? ssuboffsets+1 : NULL,
mem);
}
}
static int
cmp_structure(Py_buffer *dest, Py_buffer *src)
{
Py_ssize_t i;
int same_fmt = ((dest->format == NULL && src->format == NULL) || \
(strcmp(dest->format, src->format) == 0));
if (!same_fmt ||
dest->itemsize != src->itemsize ||
dest->ndim != src->ndim)
return -1;
for (i = 0; i < dest->ndim; i++) {
if (dest->shape[i] != src->shape[i])
return -1;
if (dest->shape[i] == 0)
break;
}
return 0;
}
/* Copy src to dest. Both buffers must have the same format, itemsize,
ndim and shape. Copying is atomic, the function never fails with
a partial copy. */
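/* Sketch of the scratch-buffer case below: assigning a reversed
   one-dimensional view (last-dimension stride == -itemsize) back onto the
   array it was sliced from overlaps source and destination, so each row is
   staged in 'mem' before being written to its destination. */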
static int
copy_buffer(Py_buffer *dest, Py_buffer *src)
{
char *mem = NULL;
assert(dest->ndim > 0);
if (cmp_structure(dest, src) < 0) {
PyErr_SetString(PyExc_ValueError,
"ndarray assignment: lvalue and rvalue have different structures");
return -1;
}
if ((dest->suboffsets && dest->suboffsets[dest->ndim-1] >= 0) ||
(src->suboffsets && src->suboffsets[src->ndim-1] >= 0) ||
dest->strides[dest->ndim-1] != dest->itemsize ||
src->strides[src->ndim-1] != src->itemsize) {
mem = PyMem_Malloc(dest->shape[dest->ndim-1] * dest->itemsize);
if (mem == NULL) {
PyErr_NoMemory();
return -1;
}
}
copy_rec(dest->shape, dest->ndim, dest->itemsize,
dest->buf, dest->strides, dest->suboffsets,
src->buf, src->strides, src->suboffsets,
mem);
PyMem_XFree(mem);
return 0;
}
/* Unpack single element */
static PyObject *
unpack_single(char *ptr, const char *fmt, Py_ssize_t itemsize)
{
PyObject *x, *unpack_from, *mview;
if (fmt == NULL) {
fmt = "B";
itemsize = 1;
}
unpack_from = PyObject_GetAttrString(structmodule, "unpack_from");
if (unpack_from == NULL)
return NULL;
mview = PyMemoryView_FromMemory(ptr, itemsize, PyBUF_READ);
if (mview == NULL) {
Py_DECREF(unpack_from);
return NULL;
}
x = PyObject_CallFunction(unpack_from, "sO", fmt, mview);
Py_DECREF(unpack_from);
Py_DECREF(mview);
if (x == NULL)
return NULL;
if (PyTuple_GET_SIZE(x) == 1) {
PyObject *tmp = PyTuple_GET_ITEM(x, 0);
Py_INCREF(tmp);
Py_DECREF(x);
return tmp;
}
return x;
}
/* Unpack a multi-dimensional matrix into a nested list. Return a scalar
for ndim = 0. */
static PyObject *
unpack_rec(PyObject *unpack_from, char *ptr, PyObject *mview, char *item,
const Py_ssize_t *shape, const Py_ssize_t *strides,
const Py_ssize_t *suboffsets, Py_ssize_t ndim, Py_ssize_t itemsize)
{
PyObject *lst, *x;
Py_ssize_t i;
assert(ndim >= 0);
assert(shape != NULL);
assert(strides != NULL);
if (ndim == 0) {
memcpy(item, ptr, itemsize);
x = PyObject_CallFunctionObjArgs(unpack_from, mview, NULL);
if (x == NULL)
return NULL;
if (PyTuple_GET_SIZE(x) == 1) {
PyObject *tmp = PyTuple_GET_ITEM(x, 0);
Py_INCREF(tmp);
Py_DECREF(x);
return tmp;
}
return x;
}
lst = PyList_New(shape[0]);
if (lst == NULL)
return NULL;
for (i = 0; i < shape[0]; ptr+=strides[0], i++) {
char *nextptr = ADJUST_PTR(ptr, suboffsets);
x = unpack_rec(unpack_from, nextptr, mview, item,
shape+1, strides+1, suboffsets ? suboffsets+1 : NULL,
ndim-1, itemsize);
if (x == NULL) {
Py_DECREF(lst);
return NULL;
}
PyList_SET_ITEM(lst, i, x);
}
return lst;
}
static PyObject *
ndarray_as_list(NDArrayObject *nd)
{
PyObject *structobj = NULL, *unpack_from = NULL;
PyObject *lst = NULL, *mview = NULL;
Py_buffer *base = &nd->head->base;
Py_ssize_t *shape = base->shape;
Py_ssize_t *strides = base->strides;
Py_ssize_t simple_shape[1];
Py_ssize_t simple_strides[1];
char *item = NULL;
PyObject *format;
char *fmt = base->format;
base = &nd->head->base;
if (fmt == NULL) {
PyErr_SetString(PyExc_ValueError,
"ndarray: tolist() does not support format=NULL, use "
"tobytes()");
return NULL;
}
if (shape == NULL) {
assert(ND_C_CONTIGUOUS(nd->head->flags));
assert(base->strides == NULL);
assert(base->ndim <= 1);
shape = simple_shape;
shape[0] = base->len;
strides = simple_strides;
strides[0] = base->itemsize;
}
else if (strides == NULL) {
assert(ND_C_CONTIGUOUS(nd->head->flags));
strides = strides_from_shape(nd->head, 0);
if (strides == NULL)
return NULL;
}
format = PyUnicode_FromString(fmt);
if (format == NULL)
goto out;
structobj = PyObject_CallFunctionObjArgs(Struct, format, NULL);
Py_DECREF(format);
if (structobj == NULL)
goto out;
unpack_from = PyObject_GetAttrString(structobj, "unpack_from");
if (unpack_from == NULL)
goto out;
item = PyMem_Malloc(base->itemsize);
if (item == NULL) {
PyErr_NoMemory();
goto out;
}
mview = PyMemoryView_FromMemory(item, base->itemsize, PyBUF_WRITE);
if (mview == NULL)
goto out;
lst = unpack_rec(unpack_from, base->buf, mview, item,
shape, strides, base->suboffsets,
base->ndim, base->itemsize);
out:
Py_XDECREF(mview);
PyMem_XFree(item);
Py_XDECREF(unpack_from);
Py_XDECREF(structobj);
if (strides != base->strides && strides != simple_strides)
PyMem_XFree(strides);
return lst;
}
/****************************************************************************/
/* Initialize ndbuf */
/****************************************************************************/
/*
State of a new ndbuf during initialization. 'OK' means that initialization
is complete. 'PTR' means that a pointer has been initialized, but the
state of the memory is still undefined and ndbuf->offset is disregarded.
+-----------------+-----------+-------------+----------------+
| | ndbuf_new | init_simple | init_structure |
+-----------------+-----------+-------------+----------------+
| next | OK (NULL) | OK | OK |
+-----------------+-----------+-------------+----------------+
| prev | OK (NULL) | OK | OK |
+-----------------+-----------+-------------+----------------+
| len | OK | OK | OK |
+-----------------+-----------+-------------+----------------+
| offset | OK | OK | OK |
+-----------------+-----------+-------------+----------------+
| data | PTR | OK | OK |
+-----------------+-----------+-------------+----------------+
| flags | user | user | OK |
+-----------------+-----------+-------------+----------------+
| exports | OK (0) | OK | OK |
+-----------------+-----------+-------------+----------------+
| base.obj | OK (NULL) | OK | OK |
+-----------------+-----------+-------------+----------------+
| base.buf | PTR | PTR | OK |
+-----------------+-----------+-------------+----------------+
| base.len | len(data) | len(data) | OK |
+-----------------+-----------+-------------+----------------+
| base.itemsize | 1 | OK | OK |
+-----------------+-----------+-------------+----------------+
| base.readonly | 0 | OK | OK |
+-----------------+-----------+-------------+----------------+
| base.format | NULL | OK | OK |
+-----------------+-----------+-------------+----------------+
| base.ndim | 1 | 1 | OK |
+-----------------+-----------+-------------+----------------+
| base.shape | NULL | NULL | OK |
+-----------------+-----------+-------------+----------------+
| base.strides | NULL | NULL | OK |
+-----------------+-----------+-------------+----------------+
| base.suboffsets | NULL | NULL | OK |
+-----------------+-----------+-------------+----------------+
| base.internal | OK | OK | OK |
+-----------------+-----------+-------------+----------------+
*/
static Py_ssize_t
get_itemsize(PyObject *format)
{
PyObject *tmp;
Py_ssize_t itemsize;
tmp = PyObject_CallFunctionObjArgs(calcsize, format, NULL);
if (tmp == NULL)
return -1;
itemsize = PyLong_AsSsize_t(tmp);
Py_DECREF(tmp);
return itemsize;
}
static char *
get_format(PyObject *format)
{
PyObject *tmp;
char *fmt;
tmp = PyUnicode_AsASCIIString(format);
if (tmp == NULL)
return NULL;
fmt = PyMem_Malloc(PyBytes_GET_SIZE(tmp)+1);
if (fmt == NULL) {
PyErr_NoMemory();
Py_DECREF(tmp);
return NULL;
}
strcpy(fmt, PyBytes_AS_STRING(tmp));
Py_DECREF(tmp);
return fmt;
}
static int
init_simple(ndbuf_t *ndbuf, PyObject *items, PyObject *format,
Py_ssize_t itemsize)
{
PyObject *mview;
Py_buffer *base = &ndbuf->base;
int ret;
mview = PyMemoryView_FromBuffer(base);
if (mview == NULL)
return -1;
ret = pack_from_list(mview, items, format, itemsize);
Py_DECREF(mview);
if (ret < 0)
return -1;
base->readonly = !(ndbuf->flags & ND_WRITABLE);
base->itemsize = itemsize;
base->format = get_format(format);
if (base->format == NULL)
return -1;
return 0;
}
static Py_ssize_t *
seq_as_ssize_array(PyObject *seq, Py_ssize_t len, int is_shape)
{
Py_ssize_t *dest;
Py_ssize_t x, i;
dest = PyMem_Malloc(len * (sizeof *dest));
if (dest == NULL) {
PyErr_NoMemory();
return NULL;
}
for (i = 0; i < len; i++) {
PyObject *tmp = PySequence_Fast_GET_ITEM(seq, i);
if (!PyLong_Check(tmp)) {
PyErr_Format(PyExc_ValueError,
"elements of %s must be integers",
is_shape ? "shape" : "strides");
PyMem_Free(dest);
return NULL;
}
x = PyLong_AsSsize_t(tmp);
if (PyErr_Occurred()) {
PyMem_Free(dest);
return NULL;
}
if (is_shape && x < 0) {
PyErr_Format(PyExc_ValueError,
"elements of shape must be integers >= 0");
PyMem_Free(dest);
return NULL;
}
dest[i] = x;
}
return dest;
}
static Py_ssize_t *
strides_from_shape(const ndbuf_t *ndbuf, int flags)
{
const Py_buffer *base = &ndbuf->base;
Py_ssize_t *s, i;
s = PyMem_Malloc(base->ndim * (sizeof *s));
if (s == NULL) {
PyErr_NoMemory();
return NULL;
}
if (flags & ND_FORTRAN) {
s[0] = base->itemsize;
for (i = 1; i < base->ndim; i++)
s[i] = s[i-1] * base->shape[i-1];
}
else {
s[base->ndim-1] = base->itemsize;
for (i = base->ndim-2; i >= 0; i--)
s[i] = s[i+1] * base->shape[i+1];
}
return s;
}
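/* Example: for shape = {2, 3} and itemsize = 1 this yields
   strides = {3, 1} for the default C layout and strides = {1, 2}
   when ND_FORTRAN is set. */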
/* Bounds check:
len := complete length of allocated memory
offset := start of the array
A single array element is indexed by:
i = indices[0] * strides[0] + indices[1] * strides[1] + ...
imin is reached when all indices[n] combined with positive strides are 0
and all indices combined with negative strides are shape[n]-1, which is
the maximum index for the nth dimension.
imax is reached when all indices[n] combined with negative strides are 0
and all indices combined with positive strides are shape[n]-1.
*/
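/* Worked example: len = 3, itemsize = 1, offset = 2, shape = {3},
   strides = {-1}. Then imin = -2 and imax = 0, so imin + offset = 0 and
   imax + offset + itemsize = 3 <= len: the array walks backwards over
   bytes 2, 1 and 0 and stays in bounds. */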
static int
verify_structure(Py_ssize_t len, Py_ssize_t itemsize, Py_ssize_t offset,
const Py_ssize_t *shape, const Py_ssize_t *strides,
Py_ssize_t ndim)
{
Py_ssize_t imin, imax;
Py_ssize_t n;
assert(ndim >= 0);
if (ndim == 0 && (offset < 0 || offset+itemsize > len))
goto invalid_combination;
for (n = 0; n < ndim; n++)
if (strides[n] % itemsize) {
PyErr_SetString(PyExc_ValueError,
"strides must be a multiple of itemsize");
return -1;
}
for (n = 0; n < ndim; n++)
if (shape[n] == 0)
return 0;
imin = imax = 0;
for (n = 0; n < ndim; n++)
if (strides[n] <= 0)
imin += (shape[n]-1) * strides[n];
else
imax += (shape[n]-1) * strides[n];
if (imin + offset < 0 || imax + offset + itemsize > len)
goto invalid_combination;
return 0;
invalid_combination:
PyErr_SetString(PyExc_ValueError,
"invalid combination of buffer, shape and strides");
return -1;
}
/*
Convert a NumPy-style array to an array using suboffsets to stride in
the first dimension. Requirements: ndim > 0.
Contiguous example
==================
Input:
------
shape = {2, 2, 3};
strides = {6, 3, 1};
suboffsets = NULL;
data = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11};
buf = &data[0]
Output:
-------
shape = {2, 2, 3};
strides = {sizeof(char *), 3, 1};
suboffsets = {0, -1, -1};
data = {p1, p2, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11};
| | ^ ^
`---'---' |
| |
`---------------------'
buf = &data[0]
So, in the example the input resembles the three-dimensional array
char v[2][2][3], while the output resembles an array of two pointers
to two-dimensional arrays: char (*v[2])[2][3].
Non-contiguous example:
=======================
Input (with offset and negative strides):
-----------------------------------------
shape = {2, 2, 3};
strides = {-6, 3, -1};
offset = 8
suboffsets = NULL;
data = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11};
Output:
-------
shape = {2, 2, 3};
strides = {-sizeof(char *), 3, -1};
suboffsets = {2, -1, -1};
newdata = {p1, p2, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11};
| | ^ ^ ^ ^
`---'---' | | `- p2+suboffsets[0]
| `-----------|--- p1+suboffsets[0]
`---------------------'
buf = &newdata[1] # striding backwards over the pointers.
suboffsets[0] is the same as the offset that one would specify if
the two {2, 3} subarrays were created directly, hence the name.
*/
static int
init_suboffsets(ndbuf_t *ndbuf)
{
Py_buffer *base = &ndbuf->base;
Py_ssize_t start, step;
Py_ssize_t imin, suboffset0;
Py_ssize_t addsize;
Py_ssize_t n;
char *data;
assert(base->ndim > 0);
assert(base->suboffsets == NULL);
/* Allocate new data with additional space for shape[0] pointers. */
addsize = base->shape[0] * (sizeof (char *));
/* Align array start to a multiple of 8. */
addsize = 8 * ((addsize + 7) / 8);
data = PyMem_Malloc(ndbuf->len + addsize);
if (data == NULL) {
PyErr_NoMemory();
return -1;
}
memcpy(data + addsize, ndbuf->data, ndbuf->len);
PyMem_Free(ndbuf->data);
ndbuf->data = data;
ndbuf->len += addsize;
base->buf = ndbuf->data;
/* imin: minimum index of the input array relative to ndbuf->offset.
suboffset0: offset for each sub-array of the output. This is the
same as calculating -imin' for a sub-array of ndim-1. */
imin = suboffset0 = 0;
for (n = 0; n < base->ndim; n++) {
if (base->shape[n] == 0)
break;
if (base->strides[n] <= 0) {
Py_ssize_t x = (base->shape[n]-1) * base->strides[n];
imin += x;
suboffset0 += (n >= 1) ? -x : 0;
}
}
/* Initialize the array of pointers to the sub-arrays. */
start = addsize + ndbuf->offset + imin;
step = base->strides[0] < 0 ? -base->strides[0] : base->strides[0];
for (n = 0; n < base->shape[0]; n++)
((char **)base->buf)[n] = (char *)base->buf + start + n*step;
/* Initialize suboffsets. */
base->suboffsets = PyMem_Malloc(base->ndim * (sizeof *base->suboffsets));
if (base->suboffsets == NULL) {
PyErr_NoMemory();
return -1;
}
base->suboffsets[0] = suboffset0;
for (n = 1; n < base->ndim; n++)
base->suboffsets[n] = -1;
/* Adjust strides for the first (zeroth) dimension. */
if (base->strides[0] >= 0) {
base->strides[0] = sizeof(char *);
}
else {
/* Striding backwards. */
base->strides[0] = -(Py_ssize_t)sizeof(char *);
if (base->shape[0] > 0)
base->buf = (char *)base->buf + (base->shape[0]-1) * sizeof(char *);
}
ndbuf->flags &= ~(ND_C|ND_FORTRAN);
ndbuf->offset = 0;
return 0;
}
static void
init_len(Py_buffer *base)
{
Py_ssize_t i;
base->len = 1;
for (i = 0; i < base->ndim; i++)
base->len *= base->shape[i];
base->len *= base->itemsize;
}
static int
init_structure(ndbuf_t *ndbuf, PyObject *shape, PyObject *strides,
Py_ssize_t ndim)
{
Py_buffer *base = &ndbuf->base;
base->ndim = (int)ndim;
if (ndim == 0) {
if (ndbuf->flags & ND_PIL) {
PyErr_SetString(PyExc_TypeError,
"ndim = 0 cannot be used in conjunction with ND_PIL");
return -1;
}
ndbuf->flags |= (ND_SCALAR|ND_C|ND_FORTRAN);
return 0;
}
/* shape */
base->shape = seq_as_ssize_array(shape, ndim, 1);
if (base->shape == NULL)
return -1;
/* strides */
if (strides) {
base->strides = seq_as_ssize_array(strides, ndim, 0);
}
else {
base->strides = strides_from_shape(ndbuf, ndbuf->flags);
}
if (base->strides == NULL)
return -1;
if (verify_structure(base->len, base->itemsize, ndbuf->offset,
base->shape, base->strides, ndim) < 0)
return -1;
/* buf */
base->buf = ndbuf->data + ndbuf->offset;
/* len */
init_len(base);
/* ndbuf->flags */
if (PyBuffer_IsContiguous(base, 'C'))
ndbuf->flags |= ND_C;
if (PyBuffer_IsContiguous(base, 'F'))
ndbuf->flags |= ND_FORTRAN;
/* convert numpy array to suboffset representation */
if (ndbuf->flags & ND_PIL) {
/* modifies base->buf, base->strides and base->suboffsets */
return init_suboffsets(ndbuf);
}
return 0;
}
static ndbuf_t *
init_ndbuf(PyObject *items, PyObject *shape, PyObject *strides,
Py_ssize_t offset, PyObject *format, int flags)
{
ndbuf_t *ndbuf;
Py_ssize_t ndim;
Py_ssize_t nitems;
Py_ssize_t itemsize;
/* ndim = len(shape) */
CHECK_LIST_OR_TUPLE(shape)
ndim = PySequence_Fast_GET_SIZE(shape);
if (ndim > ND_MAX_NDIM) {
PyErr_Format(PyExc_ValueError,
"ndim must not exceed %d", ND_MAX_NDIM);
return NULL;
}
/* len(strides) = len(shape) */
if (strides) {
CHECK_LIST_OR_TUPLE(strides)
if (PySequence_Fast_GET_SIZE(strides) == 0)
strides = NULL;
else if (flags & ND_FORTRAN) {
PyErr_SetString(PyExc_TypeError,
"ND_FORTRAN cannot be used together with strides");
return NULL;
}
else if (PySequence_Fast_GET_SIZE(strides) != ndim) {
PyErr_SetString(PyExc_ValueError,
"len(shape) != len(strides)");
return NULL;
}
}
/* itemsize */
itemsize = get_itemsize(format);
if (itemsize <= 0) {
if (itemsize == 0) {
PyErr_SetString(PyExc_ValueError,
"itemsize must not be zero");
}
return NULL;
}
/* convert scalar to list */
if (ndim == 0) {
items = Py_BuildValue("(O)", items);
if (items == NULL)
return NULL;
}
else {
CHECK_LIST_OR_TUPLE(items)
Py_INCREF(items);
}
/* number of items */
nitems = PySequence_Fast_GET_SIZE(items);
if (nitems == 0) {
PyErr_SetString(PyExc_ValueError,
"initializer list or tuple must not be empty");
Py_DECREF(items);
return NULL;
}
ndbuf = ndbuf_new(nitems, itemsize, offset, flags);
if (ndbuf == NULL) {
Py_DECREF(items);
return NULL;
}
if (init_simple(ndbuf, items, format, itemsize) < 0)
goto error;
if (init_structure(ndbuf, shape, strides, ndim) < 0)
goto error;
Py_DECREF(items);
return ndbuf;
error:
Py_DECREF(items);
ndbuf_free(ndbuf);
return NULL;
}
/* initialize and push a new base onto the linked list */
static int
ndarray_push_base(NDArrayObject *nd, PyObject *items,
PyObject *shape, PyObject *strides,
Py_ssize_t offset, PyObject *format, int flags)
{
ndbuf_t *ndbuf;
ndbuf = init_ndbuf(items, shape, strides, offset, format, flags);
if (ndbuf == NULL)
return -1;
ndbuf_push(nd, ndbuf);
return 0;
}
#define PyBUF_UNUSED 0x10000
static int
ndarray_init(PyObject *self, PyObject *args, PyObject *kwds)
{
NDArrayObject *nd = (NDArrayObject *)self;
static char *kwlist[] = {
"obj", "shape", "strides", "offset", "format", "flags", "getbuf", NULL
};
PyObject *v = NULL; /* initializer: scalar, list, tuple or base object */
PyObject *shape = NULL; /* size of each dimension */
PyObject *strides = NULL; /* number of bytes to the next elt in each dim */
Py_ssize_t offset = 0; /* buffer offset */
PyObject *format = simple_format; /* struct module specifier: "B" */
int flags = ND_UNUSED; /* base buffer and ndarray flags */
int getbuf = PyBUF_UNUSED; /* re-exporter: getbuffer request flags */
if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|OOnOii", kwlist,
&v, &shape, &strides, &offset, &format, &flags, &getbuf))
return -1;
/* NDArrayObject is re-exporter */
if (PyObject_CheckBuffer(v) && shape == NULL) {
if (strides || offset || format != simple_format ||
flags != ND_UNUSED) {
PyErr_SetString(PyExc_TypeError,
"construction from exporter object only takes a single "
"additional getbuf argument");
return -1;
}
getbuf = (getbuf == PyBUF_UNUSED) ? PyBUF_FULL_RO : getbuf;
if (ndarray_init_staticbuf(v, nd, getbuf) < 0)
return -1;
init_flags(nd->head);
return 0;
}
/* NDArrayObject is the original base object. */
if (getbuf != PyBUF_UNUSED) {
PyErr_SetString(PyExc_TypeError,
"getbuf argument only valid for construction from exporter "
"object");
return -1;
}
if (shape == NULL) {
PyErr_SetString(PyExc_TypeError,
"shape is a required argument when constructing from "
"list, tuple or scalar");
return -1;
}
if (flags == ND_UNUSED)
flags = ND_DEFAULT;
if (flags & ND_VAREXPORT) {
nd->flags |= ND_VAREXPORT;
flags &= ~ND_VAREXPORT;
}
/* Initialize and push the first base buffer onto the linked list. */
return ndarray_push_base(nd, v, shape, strides, offset, format, flags);
}
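/* Construction sketch (hypothetical Python usage, assuming the ndarray
   type is exposed by the _testbuffer module):
       nd = ndarray([1, 2, 3, 4, 5, 6], shape=[2, 3])   # original base object
       nd = ndarray(bytearray(b'abc'))                   # re-exporter
   The first form packs the initializer using the default format "B"; the
   second wraps an existing exporter via ndarray_init_staticbuf(). */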
/* Push an additional base onto the linked list. */
static PyObject *
ndarray_push(PyObject *self, PyObject *args, PyObject *kwds)
{
NDArrayObject *nd = (NDArrayObject *)self;
static char *kwlist[] = {
"items", "shape", "strides", "offset", "format", "flags", NULL
};
PyObject *items = NULL; /* initializer: scalar, list or tuple */
PyObject *shape = NULL; /* size of each dimension */
PyObject *strides = NULL; /* number of bytes to the next elt in each dim */
PyObject *format = simple_format; /* struct module specifier: "B" */
Py_ssize_t offset = 0; /* buffer offset */
int flags = ND_UNUSED; /* base buffer flags */
if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO|OnOi", kwlist,
&items, &shape, &strides, &offset, &format, &flags))
return NULL;
if (flags & ND_VAREXPORT) {
PyErr_SetString(PyExc_ValueError,
"ND_VAREXPORT flag can only be used during object creation");
return NULL;
}
if (ND_IS_CONSUMER(nd)) {
PyErr_SetString(PyExc_BufferError,
"structure of re-exporting object is immutable");
return NULL;
}
if (!(nd->flags&ND_VAREXPORT) && nd->head->exports > 0) {
PyErr_Format(PyExc_BufferError,
"cannot change structure: %zd exported buffer%s",
nd->head->exports, nd->head->exports==1 ? "" : "s");
return NULL;
}
if (ndarray_push_base(nd, items, shape, strides,
offset, format, flags) < 0)
return NULL;
Py_RETURN_NONE;
}
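/* Usage sketch (hypothetical Python calls, assuming push/pop are exposed
   as methods and ND_VAREXPORT as a module constant):
       nd = ndarray([1], shape=[1], flags=ND_VAREXPORT)
       nd.push([1, 2, 3], shape=[3])   # new active base buffer
       nd.pop()                        # back to the previous layout
   Without ND_VAREXPORT, push() fails as soon as a buffer is exported. */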
/* Pop a base from the linked list (if possible). */
static PyObject *
ndarray_pop(PyObject *self, PyObject *dummy)
{
NDArrayObject *nd = (NDArrayObject *)self;
if (ND_IS_CONSUMER(nd)) {
PyErr_SetString(PyExc_BufferError,
"structure of re-exporting object is immutable");
return NULL;
}
if (nd->head->exports > 0) {
PyErr_Format(PyExc_BufferError,
"cannot change structure: %zd exported buffer%s",
nd->head->exports, nd->head->exports==1 ? "" : "s");
return NULL;
}
if (nd->head->next == NULL) {
PyErr_SetString(PyExc_BufferError,
"list only has a single base");
return NULL;
}
ndbuf_pop(nd);
Py_RETURN_NONE;
}
/**************************************************************************/
/* getbuffer */
/**************************************************************************/
static int
ndarray_getbuf(NDArrayObject *self, Py_buffer *view, int flags)
{
ndbuf_t *ndbuf = self->head;
Py_buffer *base = &ndbuf->base;
int baseflags = ndbuf->flags;
/* start with complete information */
*view = *base;
view->obj = NULL;
/* reconstruct format */
if (view->format == NULL)
view->format = "B";
if (base->ndim != 0 &&
((REQ_SHAPE(flags) && base->shape == NULL) ||
(REQ_STRIDES(flags) && base->strides == NULL))) {
/* The ndarray is a re-exporter that has been created without full
information for testing purposes. In this particular case the
ndarray is not a PEP-3118 compliant buffer provider. */
PyErr_SetString(PyExc_BufferError,
"re-exporter does not provide format, shape or strides");
return -1;
}
if (baseflags & ND_GETBUF_FAIL) {
PyErr_SetString(PyExc_BufferError,
"ND_GETBUF_FAIL: forced test exception");
return -1;
}
if (REQ_WRITABLE(flags) && base->readonly) {
PyErr_SetString(PyExc_BufferError,
"ndarray is not writable");
return -1;
}
if (!REQ_FORMAT(flags)) {
/* NULL indicates that the buffer's data type has been cast to 'B'.
view->itemsize is the _previous_ itemsize. If shape is present,
the equality product(shape) * itemsize = len still holds at this
point. The equality calcsize(format) = itemsize does _not_ hold
from here on! */
view->format = NULL;
}
if (REQ_C_CONTIGUOUS(flags) && !ND_C_CONTIGUOUS(baseflags)) {
PyErr_SetString(PyExc_BufferError,
"ndarray is not C-contiguous");
return -1;
}
if (REQ_F_CONTIGUOUS(flags) && !ND_FORTRAN_CONTIGUOUS(baseflags)) {
PyErr_SetString(PyExc_BufferError,
"ndarray is not Fortran contiguous");
return -1;
}
if (REQ_ANY_CONTIGUOUS(flags) && !ND_ANY_CONTIGUOUS(baseflags)) {
PyErr_SetString(PyExc_BufferError,
"ndarray is not contiguous");
return -1;
}
if (!REQ_INDIRECT(flags) && (baseflags & ND_PIL)) {
PyErr_SetString(PyExc_BufferError,
"ndarray cannot be represented without suboffsets");
return -1;
}
if (!REQ_STRIDES(flags)) {
if (!ND_C_CONTIGUOUS(baseflags)) {
PyErr_SetString(PyExc_BufferError,
"ndarray is not C-contiguous");
return -1;
}
view->strides = NULL;
}
if (!REQ_SHAPE(flags)) {
/* PyBUF_SIMPLE or PyBUF_WRITABLE: at this point buf is C-contiguous,
so base->buf = ndbuf->data. */
if (view->format != NULL) {
/* PyBUF_SIMPLE|PyBUF_FORMAT and PyBUF_WRITABLE|PyBUF_FORMAT do
not make sense. */
PyErr_Format(PyExc_BufferError,
"ndarray: cannot cast to unsigned bytes if the format flag "
"is present");
return -1;
}
/* product(shape) * itemsize = len and calcsize(format) = itemsize
do _not_ hold from here on! */
view->ndim = 1;
view->shape = NULL;
}
view->obj = (PyObject *)self;
Py_INCREF(view->obj);
self->head->exports++;
return 0;
}
static int
ndarray_releasebuf(NDArrayObject *self, Py_buffer *view)
{
if (!ND_IS_CONSUMER(self)) {
ndbuf_t *ndbuf = view->internal;
if (--ndbuf->exports == 0 && ndbuf != self->head)
ndbuf_delete(self, ndbuf);
}
return 0;
}
static PyBufferProcs ndarray_as_buffer = {
(getbufferproc)ndarray_getbuf, /* bf_getbuffer */
(releasebufferproc)ndarray_releasebuf /* bf_releasebuffer */
};
/**************************************************************************/
/* indexing/slicing */
/**************************************************************************/
static char *
ptr_from_index(Py_buffer *base, Py_ssize_t index)
{
char *ptr;
Py_ssize_t nitems; /* items in the first dimension */
if (base->shape)
nitems = base->shape[0];
else {
assert(base->ndim == 1 && SIMPLE_FORMAT(base->format));
nitems = base->len;
}
if (index < 0) {
index += nitems;
}
if (index < 0 || index >= nitems) {
PyErr_SetString(PyExc_IndexError, "index out of bounds");
return NULL;
}
ptr = (char *)base->buf;
if (base->strides == NULL)
ptr += base->itemsize * index;
else
ptr += base->strides[0] * index;
ptr = ADJUST_PTR(ptr, base->suboffsets);
return ptr;
}
static PyObject *
ndarray_item(NDArrayObject *self, Py_ssize_t index)
{
ndbuf_t *ndbuf = self->head;
Py_buffer *base = &ndbuf->base;
char *ptr;
if (base->ndim == 0) {
PyErr_SetString(PyExc_TypeError, "invalid indexing of scalar");
return NULL;
}
ptr = ptr_from_index(base, index);
if (ptr == NULL)
return NULL;
if (base->ndim == 1) {
return unpack_single(ptr, base->format, base->itemsize);
}
else {
NDArrayObject *nd;
Py_buffer *subview;
nd = (NDArrayObject *)ndarray_new(&NDArray_Type, NULL, NULL);
if (nd == NULL)
return NULL;
if (ndarray_init_staticbuf((PyObject *)self, nd, PyBUF_FULL_RO) < 0) {
Py_DECREF(nd);
return NULL;
}
subview = &nd->staticbuf.base;
subview->buf = ptr;
subview->len /= subview->shape[0];
subview->ndim--;
subview->shape++;
if (subview->strides) subview->strides++;
if (subview->suboffsets) subview->suboffsets++;
init_flags(&nd->staticbuf);
return (PyObject *)nd;
}
}
/*
For each dimension, we get valid (start, stop, step, slicelength) quadruples
from PySlice_GetIndicesEx().
Slicing NumPy arrays
====================
A pointer to an element in a NumPy array is defined by:
ptr = (char *)buf + indices[0] * strides[0] +
... +
indices[ndim-1] * strides[ndim-1]
Adjust buf:
-----------
Adding start[n] for each dimension effectively adds the constant:
c = start[0] * strides[0] + ... + start[ndim-1] * strides[ndim-1]
Therefore init_slice() adds all start[n] directly to buf.
Adjust shape:
-------------
Obviously shape[n] = slicelength[n]
Adjust strides:
---------------
In the original array, the next element in a dimension is reached
by adding strides[n] to the pointer. In the sliced array, elements
may be skipped, so the next element is reached by adding:
strides[n] * step[n]
Slicing PIL arrays
==================
Layout:
-------
In the first (zeroth) dimension, PIL arrays have an array of pointers
to sub-arrays of ndim-1. Striding in the first dimension is done by
getting the nth pointer, dereferencing it and then adding a
suboffset to it. The arrays pointed to can best be seen as regular
NumPy arrays.
Adjust buf:
-----------
In the original array, buf points to a location (usually the start)
in the array of pointers. For the sliced array, start[0] can be
added to buf in the same manner as for NumPy arrays.
Adjust suboffsets:
------------------
Due to the dereferencing step in the addressing scheme, it is not
possible to adjust buf for higher dimensions. Recall that the
sub-arrays pointed to are regular NumPy arrays, so for each of
those arrays adding start[n] effectively adds the constant:
c = start[1] * strides[1] + ... + start[ndim-1] * strides[ndim-1]
This constant is added to suboffsets[0]. suboffsets[0] in turn is
added to each pointer right after dereferencing.
Adjust shape and strides:
-------------------------
Shape and strides are not influenced by the dereferencing step, so
they are adjusted in the same manner as for NumPy arrays.
Multiple levels of suboffsets
=============================
For a construct like an array of pointers to arrays of pointers to
sub-arrays of ndim-2:
suboffsets[0] = start[1] * strides[1]
suboffsets[1] = start[2] * strides[2] + ...
*/
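/* Worked example: slicing a one-dimensional NumPy-style array with
   shape = {6}, strides = {1}, itemsize = 1 by slice(1, 5, 2) gives
   start = 1, step = 2, slicelength = 2, so init_slice() advances buf
   by 1 and sets shape[0] = 2, strides[0] = 2. */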
static int
init_slice(Py_buffer *base, PyObject *key, int dim)
{
Py_ssize_t start, stop, step, slicelength;
if (PySlice_GetIndicesEx(key, base->shape[dim],
&start, &stop, &step, &slicelength) < 0) {
return -1;
}
if (base->suboffsets == NULL || dim == 0) {
adjust_buf:
base->buf = (char *)base->buf + base->strides[dim] * start;
}
else {
Py_ssize_t n = dim-1;
while (n >= 0 && base->suboffsets[n] < 0)
n--;
if (n < 0)
goto adjust_buf; /* all suboffsets are negative */
base->suboffsets[n] = base->suboffsets[n] + base->strides[dim] * start;
}
base->shape[dim] = slicelength;
base->strides[dim] = base->strides[dim] * step;
return 0;
}
static int
copy_structure(Py_buffer *base)
{
Py_ssize_t *shape = NULL, *strides = NULL, *suboffsets = NULL;
Py_ssize_t i;
shape = PyMem_Malloc(base->ndim * (sizeof *shape));
strides = PyMem_Malloc(base->ndim * (sizeof *strides));
if (shape == NULL || strides == NULL)
goto err_nomem;
suboffsets = NULL;
if (base->suboffsets) {
suboffsets = PyMem_Malloc(base->ndim * (sizeof *suboffsets));
if (suboffsets == NULL)
goto err_nomem;
}
for (i = 0; i < base->ndim; i++) {
shape[i] = base->shape[i];
strides[i] = base->strides[i];
if (suboffsets)
suboffsets[i] = base->suboffsets[i];
}
base->shape = shape;
base->strides = strides;
base->suboffsets = suboffsets;
return 0;
err_nomem:
PyErr_NoMemory();
PyMem_XFree(shape);
PyMem_XFree(strides);
PyMem_XFree(suboffsets);
return -1;
}
static PyObject *
ndarray_subscript(NDArrayObject *self, PyObject *key)
{
NDArrayObject *nd;
ndbuf_t *ndbuf;
Py_buffer *base = &self->head->base;
if (base->ndim == 0) {
if (PyTuple_Check(key) && PyTuple_GET_SIZE(key) == 0) {
return unpack_single(base->buf, base->format, base->itemsize);
}
else if (key == Py_Ellipsis) {
Py_INCREF(self);
return (PyObject *)self;
}
else {
PyErr_SetString(PyExc_TypeError, "invalid indexing of scalar");
return NULL;
}
}
if (PyIndex_Check(key)) {
Py_ssize_t index = PyLong_AsSsize_t(key);
if (index == -1 && PyErr_Occurred())
return NULL;
return ndarray_item(self, index);
}
nd = (NDArrayObject *)ndarray_new(&NDArray_Type, NULL, NULL);
if (nd == NULL)
return NULL;
/* new ndarray is a consumer */
if (ndarray_init_staticbuf((PyObject *)self, nd, PyBUF_FULL_RO) < 0) {
Py_DECREF(nd);
return NULL;
}
/* copy shape, strides and suboffsets */
ndbuf = nd->head;
base = &ndbuf->base;
if (copy_structure(base) < 0) {
Py_DECREF(nd);
return NULL;
}
ndbuf->flags |= ND_OWN_ARRAYS;
if (PySlice_Check(key)) {
/* one-dimensional slice */
if (init_slice(base, key, 0) < 0)
goto err_occurred;
}
else if (PyTuple_Check(key)) {
/* multi-dimensional slice */
PyObject *tuple = key;
Py_ssize_t i, n;
n = PyTuple_GET_SIZE(tuple);
for (i = 0; i < n; i++) {
key = PyTuple_GET_ITEM(tuple, i);
if (!PySlice_Check(key))
goto type_error;
if (init_slice(base, key, (int)i) < 0)
goto err_occurred;
}
}
else {
goto type_error;
}
init_len(base);
init_flags(ndbuf);
return (PyObject *)nd;
type_error:
PyErr_Format(PyExc_TypeError,
"cannot index memory using \"%.200s\"",
key->ob_type->tp_name);
err_occurred:
Py_DECREF(nd);
return NULL;
}
static int
ndarray_ass_subscript(NDArrayObject *self, PyObject *key, PyObject *value)
{
NDArrayObject *nd;
Py_buffer *dest = &self->head->base;
Py_buffer src;
char *ptr;
Py_ssize_t index;
int ret = -1;
if (dest->readonly) {
PyErr_SetString(PyExc_TypeError, "ndarray is not writable");
return -1;
}
if (value == NULL) {
PyErr_SetString(PyExc_TypeError, "ndarray data cannot be deleted");
return -1;
}
if (dest->ndim == 0) {
if (key == Py_Ellipsis ||
(PyTuple_Check(key) && PyTuple_GET_SIZE(key) == 0)) {
ptr = (char *)dest->buf;
return pack_single(ptr, value, dest->format, dest->itemsize);
}
else {
PyErr_SetString(PyExc_TypeError, "invalid indexing of scalar");
return -1;
}
}
if (dest->ndim == 1 && PyIndex_Check(key)) {
/* rvalue must be a single item */
index = PyLong_AsSsize_t(key);
if (index == -1 && PyErr_Occurred())
return -1;
else {
ptr = ptr_from_index(dest, index);
if (ptr == NULL)
return -1;
}
return pack_single(ptr, value, dest->format, dest->itemsize);
}
/* rvalue must be an exporter */
if (PyObject_GetBuffer(value, &src, PyBUF_FULL_RO) == -1)
return -1;
nd = (NDArrayObject *)ndarray_subscript(self, key);
if (nd != NULL) {
dest = &nd->head->base;
ret = copy_buffer(dest, &src);
Py_DECREF(nd);
}
PyBuffer_Release(&src);
return ret;
}
static PyObject *
slice_indices(PyObject *self, PyObject *args)
{
PyObject *ret, *key, *tmp;
Py_ssize_t s[4]; /* start, stop, step, slicelength */
Py_ssize_t i, len;
if (!PyArg_ParseTuple(args, "On", &key, &len)) {
return NULL;
}
if (!PySlice_Check(key)) {
PyErr_SetString(PyExc_TypeError,
"first argument must be a slice object");
return NULL;
}
if (PySlice_GetIndicesEx(key, len, &s[0], &s[1], &s[2], &s[3]) < 0) {
return NULL;
}
ret = PyTuple_New(4);
if (ret == NULL)
return NULL;
for (i = 0; i < 4; i++) {
tmp = PyLong_FromSsize_t(s[i]);
if (tmp == NULL)
goto error;
PyTuple_SET_ITEM(ret, i, tmp);
}
return ret;
error:
Py_DECREF(ret);
return NULL;
}
static PyMappingMethods ndarray_as_mapping = {
NULL, /* mp_length */
(binaryfunc)ndarray_subscript, /* mp_subscript */
(objobjargproc)ndarray_ass_subscript /* mp_ass_subscript */
};
static PySequenceMethods ndarray_as_sequence = {
0, /* sq_length */
0, /* sq_concat */
0, /* sq_repeat */
(ssizeargfunc)ndarray_item, /* sq_item */
};
/**************************************************************************/
/* getters */
/**************************************************************************/
static PyObject *
ssize_array_as_tuple(Py_ssize_t *array, Py_ssize_t len)
{
PyObject *tuple, *x;
Py_ssize_t i;
if (array == NULL)
return PyTuple_New(0);
tuple = PyTuple_New(len);
if (tuple == NULL)
return NULL;
for (i = 0; i < len; i++) {
x = PyLong_FromSsize_t(array[i]);
if (x == NULL) {
Py_DECREF(tuple);
return NULL;
}
PyTuple_SET_ITEM(tuple, i, x);
}
return tuple;
}
static PyObject *
ndarray_get_flags(NDArrayObject *self, void *closure)
{
return PyLong_FromLong(self->head->flags);
}
static PyObject *
ndarray_get_offset(NDArrayObject *self, void *closure)
{
ndbuf_t *ndbuf = self->head;
return PyLong_FromSsize_t(ndbuf->offset);
}
static PyObject *
ndarray_get_obj(NDArrayObject *self, void *closure)
{
Py_buffer *base = &self->head->base;
if (base->obj == NULL) {
Py_RETURN_NONE;
}
Py_INCREF(base->obj);
return base->obj;
}
static PyObject *
ndarray_get_nbytes(NDArrayObject *self, void *closure)
{
Py_buffer *base = &self->head->base;
return PyLong_FromSsize_t(base->len);
}
static PyObject *
ndarray_get_readonly(NDArrayObject *self, void *closure)
{
Py_buffer *base = &self->head->base;
return PyLong_FromLong(base->readonly);
}
static PyObject *
ndarray_get_itemsize(NDArrayObject *self, void *closure)
{
Py_buffer *base = &self->head->base;
return PyLong_FromSsize_t(base->itemsize);
}
static PyObject *
ndarray_get_format(NDArrayObject *self, void *closure)
{
Py_buffer *base = &self->head->base;
char *fmt = base->format ? base->format : "";
return PyUnicode_FromString(fmt);
}
static PyObject *
ndarray_get_ndim(NDArrayObject *self, void *closure)
{
Py_buffer *base = &self->head->base;
return PyLong_FromSsize_t(base->ndim);
}
static PyObject *
ndarray_get_shape(NDArrayObject *self, void *closure)
{
Py_buffer *base = &self->head->base;
return ssize_array_as_tuple(base->shape, base->ndim);
}
static PyObject *
ndarray_get_strides(NDArrayObject *self, void *closure)
{
Py_buffer *base = &self->head->base;
return ssize_array_as_tuple(base->strides, base->ndim);
}
static PyObject *
ndarray_get_suboffsets(NDArrayObject *self, void *closure)
{
Py_buffer *base = &self->head->base;
return ssize_array_as_tuple(base->suboffsets, base->ndim);
}
static PyObject *
ndarray_c_contig(PyObject *self, PyObject *dummy)
{
NDArrayObject *nd = (NDArrayObject *)self;
int ret = PyBuffer_IsContiguous(&nd->head->base, 'C');
if (ret != ND_C_CONTIGUOUS(nd->head->flags)) {
PyErr_SetString(PyExc_RuntimeError,
"results from PyBuffer_IsContiguous() and flags differ");
return NULL;
}
return PyBool_FromLong(ret);
}
static PyObject *
ndarray_fortran_contig(PyObject *self, PyObject *dummy)
{
NDArrayObject *nd = (NDArrayObject *)self;
int ret = PyBuffer_IsContiguous(&nd->head->base, 'F');
if (ret != ND_FORTRAN_CONTIGUOUS(nd->head->flags)) {
PyErr_SetString(PyExc_RuntimeError,
"results from PyBuffer_IsContiguous() and flags differ");
return NULL;
}
return PyBool_FromLong(ret);
}
static PyObject *
ndarray_contig(PyObject *self, PyObject *dummy)
{
NDArrayObject *nd = (NDArrayObject *)self;
int ret = PyBuffer_IsContiguous(&nd->head->base, 'A');
if (ret != ND_ANY_CONTIGUOUS(nd->head->flags)) {
PyErr_SetString(PyExc_RuntimeError,
"results from PyBuffer_IsContiguous() and flags differ");
return NULL;
}
return PyBool_FromLong(ret);
}
static PyGetSetDef ndarray_getset [] =
{
/* ndbuf */
{ "flags", (getter)ndarray_get_flags, NULL, NULL, NULL},
{ "offset", (getter)ndarray_get_offset, NULL, NULL, NULL},
/* ndbuf.base */
{ "obj", (getter)ndarray_get_obj, NULL, NULL, NULL},
{ "nbytes", (getter)ndarray_get_nbytes, NULL, NULL, NULL},
{ "readonly", (getter)ndarray_get_readonly, NULL, NULL, NULL},
{ "itemsize", (getter)ndarray_get_itemsize, NULL, NULL, NULL},
{ "format", (getter)ndarray_get_format, NULL, NULL, NULL},
{ "ndim", (getter)ndarray_get_ndim, NULL, NULL, NULL},
{ "shape", (getter)ndarray_get_shape, NULL, NULL, NULL},
{ "strides", (getter)ndarray_get_strides, NULL, NULL, NULL},
{ "suboffsets", (getter)ndarray_get_suboffsets, NULL, NULL, NULL},
{ "c_contiguous", (getter)ndarray_c_contig, NULL, NULL, NULL},
{ "f_contiguous", (getter)ndarray_fortran_contig, NULL, NULL, NULL},
{ "contiguous", (getter)ndarray_contig, NULL, NULL, NULL},
{NULL}
};
static PyObject *
ndarray_tolist(PyObject *self, PyObject *dummy)
{
return ndarray_as_list((NDArrayObject *)self);
}
static PyObject *
ndarray_tobytes(PyObject *self, PyObject *dummy)
{
ndbuf_t *ndbuf = ((NDArrayObject *)self)->head;
Py_buffer *src = &ndbuf->base;
Py_buffer dest;
PyObject *ret = NULL;
char *mem;
if (ND_C_CONTIGUOUS(ndbuf->flags))
return PyBytes_FromStringAndSize(src->buf, src->len);
assert(src->shape != NULL);
assert(src->strides != NULL);
assert(src->ndim > 0);
mem = PyMem_Malloc(src->len);
if (mem == NULL) {
PyErr_NoMemory();
return NULL;
}
dest = *src;
dest.buf = mem;
dest.suboffsets = NULL;
dest.strides = strides_from_shape(ndbuf, 0);
if (dest.strides == NULL)
goto out;
if (copy_buffer(&dest, src) < 0)
goto out;
ret = PyBytes_FromStringAndSize(mem, src->len);
out:
PyMem_XFree(dest.strides);
PyMem_Free(mem);
return ret;
}
/* add redundant (negative) suboffsets for testing */
static PyObject *
ndarray_add_suboffsets(PyObject *self, PyObject *dummy)
{
NDArrayObject *nd = (NDArrayObject *)self;
Py_buffer *base = &nd->head->base;
Py_ssize_t i;
if (base->suboffsets != NULL) {
PyErr_SetString(PyExc_TypeError,
"cannot add suboffsets to PIL-style array");
return NULL;
}
if (base->strides == NULL) {
PyErr_SetString(PyExc_TypeError,
"cannot add suboffsets to array without strides");
return NULL;
}
base->suboffsets = PyMem_Malloc(base->ndim * (sizeof *base->suboffsets));
if (base->suboffsets == NULL) {
PyErr_NoMemory();
return NULL;
}
for (i = 0; i < base->ndim; i++)
base->suboffsets[i] = -1;
Py_RETURN_NONE;
}
/* Test PyMemoryView_FromBuffer(): return a memoryview from a static buffer.
Obviously this is fragile and only one such view may be active at any
time. Never use anything like this in real code! */
static char *infobuf = NULL;
static PyObject *
ndarray_memoryview_from_buffer(PyObject *self, PyObject *dummy)
{
const NDArrayObject *nd = (NDArrayObject *)self;
const Py_buffer *view = &nd->head->base;
const ndbuf_t *ndbuf;
static char format[ND_MAX_NDIM+1];
static Py_ssize_t shape[ND_MAX_NDIM];
static Py_ssize_t strides[ND_MAX_NDIM];
static Py_ssize_t suboffsets[ND_MAX_NDIM];
static Py_buffer info;
char *p;
if (!ND_IS_CONSUMER(nd))
ndbuf = nd->head; /* self is ndarray/original exporter */
else if (NDArray_Check(view->obj) && !ND_IS_CONSUMER(view->obj))
/* self is ndarray and consumer from ndarray/original exporter */
ndbuf = ((NDArrayObject *)view->obj)->head;
else {
PyErr_SetString(PyExc_TypeError,
"memoryview_from_buffer(): ndarray must be original exporter or "
"consumer from ndarray/original exporter");
return NULL;
}
info = *view;
p = PyMem_Realloc(infobuf, ndbuf->len);
if (p == NULL) {
PyMem_Free(infobuf);
PyErr_NoMemory();
infobuf = NULL;
return NULL;
}
else {
infobuf = p;
}
/* copy the complete raw data */
memcpy(infobuf, ndbuf->data, ndbuf->len);
info.buf = infobuf + ((char *)view->buf - ndbuf->data);
if (view->format) {
if (strlen(view->format) > ND_MAX_NDIM) {
PyErr_Format(PyExc_TypeError,
"memoryview_from_buffer: format is limited to %d characters",
ND_MAX_NDIM);
return NULL;
}
strcpy(format, view->format);
info.format = format;
}
if (view->ndim > ND_MAX_NDIM) {
PyErr_Format(PyExc_TypeError,
"memoryview_from_buffer: ndim is limited to %d", ND_MAX_NDIM);
return NULL;
}
if (view->shape) {
memcpy(shape, view->shape, view->ndim * sizeof(Py_ssize_t));
info.shape = shape;
}
if (view->strides) {
memcpy(strides, view->strides, view->ndim * sizeof(Py_ssize_t));
info.strides = strides;
}
if (view->suboffsets) {
memcpy(suboffsets, view->suboffsets, view->ndim * sizeof(Py_ssize_t));
info.suboffsets = suboffsets;
}
return PyMemoryView_FromBuffer(&info);
}
/* Get a single item from bufobj at the location specified by seq.
seq is a list or tuple of indices. The purpose of this function
is to check other functions against PyBuffer_GetPointer(). */
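/* Usage sketch (hypothetical Python call): for a two-dimensional ndarray
   nd, get_pointer(nd, (1, 2)) should return the same value as
   nd.tolist()[1][2], but obtained through PyBuffer_GetPointer(). */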
static PyObject *
get_pointer(PyObject *self, PyObject *args)
{
PyObject *ret = NULL, *bufobj, *seq;
Py_buffer view;
Py_ssize_t indices[ND_MAX_NDIM];
Py_ssize_t i;
void *ptr;
if (!PyArg_ParseTuple(args, "OO", &bufobj, &seq)) {
return NULL;
}
CHECK_LIST_OR_TUPLE(seq);
if (PyObject_GetBuffer(bufobj, &view, PyBUF_FULL_RO) < 0)
return NULL;
if (view.ndim > ND_MAX_NDIM) {
PyErr_Format(PyExc_ValueError,
"get_pointer(): ndim > %d", ND_MAX_NDIM);
goto out;
}
if (PySequence_Fast_GET_SIZE(seq) != view.ndim) {
PyErr_SetString(PyExc_ValueError,
"get_pointer(): len(indices) != ndim");
goto out;
}
for (i = 0; i < view.ndim; i++) {
PyObject *x = PySequence_Fast_GET_ITEM(seq, i);
indices[i] = PyLong_AsSsize_t(x);
if (PyErr_Occurred())
goto out;
if (indices[i] < 0 || indices[i] >= view.shape[i]) {
PyErr_Format(PyExc_ValueError,
"get_pointer(): invalid index %zd at position %zd",
indices[i], i);
goto out;
}
}
ptr = PyBuffer_GetPointer(&view, indices);
ret = unpack_single(ptr, view.format, view.itemsize);
out:
PyBuffer_Release(&view);
return ret;
}
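/* Illustrative sketch (not part of this commit): how a consumer could use
   PyBuffer_GetPointer() directly, which is the call that get_pointer()
   above is designed to cross-check.  'obj' is an assumed 2-D exporter of
   C doubles; error handling is abbreviated. */
#if 0
static double
read_element_2d(PyObject *obj, Py_ssize_t row, Py_ssize_t col)
{
    Py_buffer view;
    Py_ssize_t indices[2];
    double value;

    if (PyObject_GetBuffer(obj, &view, PyBUF_FULL_RO) < 0)
        return -1.0;                  /* error reported via exception */
    indices[0] = row;
    indices[1] = col;
    /* PyBuffer_GetPointer() applies strides (and suboffsets, if any). */
    value = *(double *)PyBuffer_GetPointer(&view, indices);
    PyBuffer_Release(&view);
    return value;
}
#endif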
static char
get_ascii_order(PyObject *order)
{
PyObject *ascii_order;
char ord;
if (!PyUnicode_Check(order)) {
PyErr_SetString(PyExc_TypeError,
"order must be a string");
return CHAR_MAX;
}
ascii_order = PyUnicode_AsASCIIString(order);
if (ascii_order == NULL) {
return CHAR_MAX;
}
ord = PyBytes_AS_STRING(ascii_order)[0];
Py_DECREF(ascii_order);
return ord;
}
/* Get a contiguous memoryview. */
static PyObject *
get_contiguous(PyObject *self, PyObject *args)
{
PyObject *obj;
PyObject *buffertype;
PyObject *order;
long type;
char ord;
if (!PyArg_ParseTuple(args, "OOO", &obj, &buffertype, &order)) {
return NULL;
}
if (!PyLong_Check(buffertype)) {
PyErr_SetString(PyExc_TypeError,
"buffertype must be PyBUF_READ or PyBUF_WRITE");
return NULL;
}
type = PyLong_AsLong(buffertype);
if (type == -1 && PyErr_Occurred()) {
return NULL;
}
ord = get_ascii_order(order);
if (ord == CHAR_MAX) {
return NULL;
}
return PyMemoryView_GetContiguous(obj, (int)type, ord);
}
static int
fmtcmp(const char *fmt1, const char *fmt2)
{
if (fmt1 == NULL) {
return fmt2 == NULL || strcmp(fmt2, "B") == 0;
}
if (fmt2 == NULL) {
return fmt1 == NULL || strcmp(fmt1, "B") == 0;
}
return strcmp(fmt1, fmt2) == 0;
}
static int
arraycmp(const Py_ssize_t *a1, const Py_ssize_t *a2, const Py_ssize_t *shape,
Py_ssize_t ndim)
{
Py_ssize_t i;
if (ndim == 1 && shape && shape[0] == 1) {
/* This is for comparing strides: For example, the array
[175], shape=[1], strides=[-5] is considered contiguous. */
return 1;
}
for (i = 0; i < ndim; i++) {
if (a1[i] != a2[i]) {
return 0;
}
}
return 1;
}
/* Compare two contiguous buffers for physical equality. */
static PyObject *
cmp_contig(PyObject *self, PyObject *args)
{
PyObject *b1, *b2; /* buffer objects */
Py_buffer v1, v2;
PyObject *ret;
int equal = 0;
if (!PyArg_ParseTuple(args, "OO", &b1, &b2)) {
return NULL;
}
if (PyObject_GetBuffer(b1, &v1, PyBUF_FULL_RO) < 0) {
PyErr_SetString(PyExc_TypeError,
"cmp_contig: first argument does not implement the buffer "
"protocol");
return NULL;
}
if (PyObject_GetBuffer(b2, &v2, PyBUF_FULL_RO) < 0) {
PyErr_SetString(PyExc_TypeError,
"cmp_contig: second argument does not implement the buffer "
"protocol");
PyBuffer_Release(&v1);
return NULL;
}
if (!(PyBuffer_IsContiguous(&v1, 'C')&&PyBuffer_IsContiguous(&v2, 'C')) &&
!(PyBuffer_IsContiguous(&v1, 'F')&&PyBuffer_IsContiguous(&v2, 'F'))) {
goto result;
}
/* readonly may differ if created from non-contiguous */
if (v1.len != v2.len ||
v1.itemsize != v2.itemsize ||
v1.ndim != v2.ndim ||
!fmtcmp(v1.format, v2.format) ||
!!v1.shape != !!v2.shape ||
!!v1.strides != !!v2.strides ||
!!v1.suboffsets != !!v2.suboffsets) {
goto result;
}
if ((v1.shape && !arraycmp(v1.shape, v2.shape, NULL, v1.ndim)) ||
(v1.strides && !arraycmp(v1.strides, v2.strides, v1.shape, v1.ndim)) ||
(v1.suboffsets && !arraycmp(v1.suboffsets, v2.suboffsets, NULL,
v1.ndim))) {
goto result;
}
if (memcmp((char *)v1.buf, (char *)v2.buf, v1.len) != 0) {
goto result;
}
equal = 1;
result:
PyBuffer_Release(&v1);
PyBuffer_Release(&v2);
ret = equal ? Py_True : Py_False;
Py_INCREF(ret);
return ret;
}
static PyObject *
is_contiguous(PyObject *self, PyObject *args)
{
PyObject *obj;
PyObject *order;
PyObject *ret = NULL;
Py_buffer view;
char ord;
if (!PyArg_ParseTuple(args, "OO", &obj, &order)) {
return NULL;
}
if (PyObject_GetBuffer(obj, &view, PyBUF_FULL_RO) < 0) {
PyErr_SetString(PyExc_TypeError,
"is_contiguous: object does not implement the buffer "
"protocol");
return NULL;
}
ord = get_ascii_order(order);
if (ord == CHAR_MAX) {
goto release;
}
ret = PyBuffer_IsContiguous(&view, ord) ? Py_True : Py_False;
Py_INCREF(ret);
release:
PyBuffer_Release(&view);
return ret;
}
static Py_hash_t
ndarray_hash(PyObject *self)
{
const NDArrayObject *nd = (NDArrayObject *)self;
const Py_buffer *view = &nd->head->base;
PyObject *bytes;
Py_hash_t hash;
if (!view->readonly) {
PyErr_SetString(PyExc_ValueError,
"cannot hash writable ndarray object");
return -1;
}
if (view->obj != NULL && PyObject_Hash(view->obj) == -1) {
return -1;
}
bytes = ndarray_tobytes(self, NULL);
if (bytes == NULL) {
return -1;
}
hash = PyObject_Hash(bytes);
Py_DECREF(bytes);
return hash;
}
static PyMethodDef ndarray_methods [] =
{
{ "tolist", ndarray_tolist, METH_NOARGS, NULL },
{ "tobytes", ndarray_tobytes, METH_NOARGS, NULL },
{ "push", (PyCFunction)ndarray_push, METH_VARARGS|METH_KEYWORDS, NULL },
{ "pop", ndarray_pop, METH_NOARGS, NULL },
{ "add_suboffsets", ndarray_add_suboffsets, METH_NOARGS, NULL },
{ "memoryview_from_buffer", ndarray_memoryview_from_buffer, METH_NOARGS, NULL },
{NULL}
};
static PyTypeObject NDArray_Type = {
PyVarObject_HEAD_INIT(NULL, 0)
"ndarray", /* Name of this type */
sizeof(NDArrayObject), /* Basic object size */
0, /* Item size for varobject */
(destructor)ndarray_dealloc, /* tp_dealloc */
0, /* tp_print */
0, /* tp_getattr */
0, /* tp_setattr */
0, /* tp_compare */
0, /* tp_repr */
0, /* tp_as_number */
&ndarray_as_sequence, /* tp_as_sequence */
&ndarray_as_mapping, /* tp_as_mapping */
(hashfunc)ndarray_hash, /* tp_hash */
0, /* tp_call */
0, /* tp_str */
PyObject_GenericGetAttr, /* tp_getattro */
0, /* tp_setattro */
&ndarray_as_buffer, /* tp_as_buffer */
Py_TPFLAGS_DEFAULT, /* tp_flags */
0, /* tp_doc */
0, /* tp_traverse */
0, /* tp_clear */
0, /* tp_richcompare */
0, /* tp_weaklistoffset */
0, /* tp_iter */
0, /* tp_iternext */
ndarray_methods, /* tp_methods */
0, /* tp_members */
ndarray_getset, /* tp_getset */
0, /* tp_base */
0, /* tp_dict */
0, /* tp_descr_get */
0, /* tp_descr_set */
0, /* tp_dictoffset */
ndarray_init, /* tp_init */
0, /* tp_alloc */
ndarray_new, /* tp_new */
};
static struct PyMethodDef _testbuffer_functions[] = {
{"slice_indices", slice_indices, METH_VARARGS, NULL},
{"get_pointer", get_pointer, METH_VARARGS, NULL},
{"get_contiguous", get_contiguous, METH_VARARGS, NULL},
{"is_contiguous", is_contiguous, METH_VARARGS, NULL},
{"cmp_contig", cmp_contig, METH_VARARGS, NULL},
{NULL, NULL}
};
static struct PyModuleDef _testbuffermodule = {
PyModuleDef_HEAD_INIT,
"_testbuffer",
NULL,
-1,
_testbuffer_functions,
NULL,
NULL,
NULL,
NULL
};
PyMODINIT_FUNC
PyInit__testbuffer(void)
{
PyObject *m;
m = PyModule_Create(&_testbuffermodule);
if (m == NULL)
return NULL;
Py_TYPE(&NDArray_Type)=&PyType_Type;
Py_INCREF(&NDArray_Type);
PyModule_AddObject(m, "ndarray", (PyObject *)&NDArray_Type);
structmodule = PyImport_ImportModule("struct");
if (structmodule == NULL)
return NULL;
Struct = PyObject_GetAttrString(structmodule, "Struct");
calcsize = PyObject_GetAttrString(structmodule, "calcsize");
if (Struct == NULL || calcsize == NULL)
return NULL;
simple_format = PyUnicode_FromString(simple_fmt);
if (simple_format == NULL)
return NULL;
PyModule_AddIntConstant(m, "ND_MAX_NDIM", ND_MAX_NDIM);
PyModule_AddIntConstant(m, "ND_VAREXPORT", ND_VAREXPORT);
PyModule_AddIntConstant(m, "ND_WRITABLE", ND_WRITABLE);
PyModule_AddIntConstant(m, "ND_FORTRAN", ND_FORTRAN);
PyModule_AddIntConstant(m, "ND_SCALAR", ND_SCALAR);
PyModule_AddIntConstant(m, "ND_PIL", ND_PIL);
PyModule_AddIntConstant(m, "ND_GETBUF_FAIL", ND_GETBUF_FAIL);
PyModule_AddIntConstant(m, "PyBUF_SIMPLE", PyBUF_SIMPLE);
PyModule_AddIntConstant(m, "PyBUF_WRITABLE", PyBUF_WRITABLE);
PyModule_AddIntConstant(m, "PyBUF_FORMAT", PyBUF_FORMAT);
PyModule_AddIntConstant(m, "PyBUF_ND", PyBUF_ND);
PyModule_AddIntConstant(m, "PyBUF_STRIDES", PyBUF_STRIDES);
PyModule_AddIntConstant(m, "PyBUF_INDIRECT", PyBUF_INDIRECT);
PyModule_AddIntConstant(m, "PyBUF_C_CONTIGUOUS", PyBUF_C_CONTIGUOUS);
PyModule_AddIntConstant(m, "PyBUF_F_CONTIGUOUS", PyBUF_F_CONTIGUOUS);
PyModule_AddIntConstant(m, "PyBUF_ANY_CONTIGUOUS", PyBUF_ANY_CONTIGUOUS);
PyModule_AddIntConstant(m, "PyBUF_FULL", PyBUF_FULL);
PyModule_AddIntConstant(m, "PyBUF_FULL_RO", PyBUF_FULL_RO);
PyModule_AddIntConstant(m, "PyBUF_RECORDS", PyBUF_RECORDS);
PyModule_AddIntConstant(m, "PyBUF_RECORDS_RO", PyBUF_RECORDS_RO);
PyModule_AddIntConstant(m, "PyBUF_STRIDED", PyBUF_STRIDED);
PyModule_AddIntConstant(m, "PyBUF_STRIDED_RO", PyBUF_STRIDED_RO);
PyModule_AddIntConstant(m, "PyBUF_CONTIG", PyBUF_CONTIG);
PyModule_AddIntConstant(m, "PyBUF_CONTIG_RO", PyBUF_CONTIG_RO);
PyModule_AddIntConstant(m, "PyBUF_READ", PyBUF_READ);
PyModule_AddIntConstant(m, "PyBUF_WRITE", PyBUF_WRITE);
return m;
}
......@@ -275,95 +275,6 @@ test_lazy_hash_inheritance(PyObject* self)
}
/* Issue #7385: Check that memoryview() does not crash
* when bf_getbuffer returns an error
*/
static int
broken_buffer_getbuffer(PyObject *self, Py_buffer *view, int flags)
{
PyErr_SetString(
TestError,
"test_broken_memoryview: expected error in bf_getbuffer");
return -1;
}
static PyBufferProcs memoryviewtester_as_buffer = {
(getbufferproc)broken_buffer_getbuffer, /* bf_getbuffer */
0, /* bf_releasebuffer */
};
static PyTypeObject _MemoryViewTester_Type = {
PyVarObject_HEAD_INIT(NULL, 0)
"memoryviewtester", /* Name of this type */
sizeof(PyObject), /* Basic object size */
0, /* Item size for varobject */
(destructor)PyObject_Del, /* tp_dealloc */
0, /* tp_print */
0, /* tp_getattr */
0, /* tp_setattr */
0, /* tp_compare */
0, /* tp_repr */
0, /* tp_as_number */
0, /* tp_as_sequence */
0, /* tp_as_mapping */
0, /* tp_hash */
0, /* tp_call */
0, /* tp_str */
PyObject_GenericGetAttr, /* tp_getattro */
0, /* tp_setattro */
&memoryviewtester_as_buffer, /* tp_as_buffer */
Py_TPFLAGS_DEFAULT, /* tp_flags */
0, /* tp_doc */
0, /* tp_traverse */
0, /* tp_clear */
0, /* tp_richcompare */
0, /* tp_weaklistoffset */
0, /* tp_iter */
0, /* tp_iternext */
0, /* tp_methods */
0, /* tp_members */
0, /* tp_getset */
0, /* tp_base */
0, /* tp_dict */
0, /* tp_descr_get */
0, /* tp_descr_set */
0, /* tp_dictoffset */
0, /* tp_init */
0, /* tp_alloc */
PyType_GenericNew, /* tp_new */
};
static PyObject*
test_broken_memoryview(PyObject* self)
{
PyObject *obj = PyObject_New(PyObject, &_MemoryViewTester_Type);
PyObject *res;
if (obj == NULL) {
PyErr_Clear();
PyErr_SetString(
TestError,
"test_broken_memoryview: failed to create object");
return NULL;
}
res = PyMemoryView_FromObject(obj);
if (res || !PyErr_Occurred()){
PyErr_SetString(
TestError,
"test_broken_memoryview: memoryview() didn't raise an Exception");
Py_XDECREF(res);
Py_DECREF(obj);
return NULL;
}
PyErr_Clear();
Py_DECREF(obj);
Py_RETURN_NONE;
}
/* Tests of PyLong_{As, From}{Unsigned,}Long(), and (#ifdef HAVE_LONG_LONG)
PyLong_{As, From}{Unsigned,}LongLong().
......@@ -2421,7 +2332,6 @@ static PyMethodDef TestMethods[] = {
{"test_list_api", (PyCFunction)test_list_api, METH_NOARGS},
{"test_dict_iteration", (PyCFunction)test_dict_iteration,METH_NOARGS},
{"test_lazy_hash_inheritance", (PyCFunction)test_lazy_hash_inheritance,METH_NOARGS},
{"test_broken_memoryview", (PyCFunction)test_broken_memoryview,METH_NOARGS},
{"test_long_api", (PyCFunction)test_long_api, METH_NOARGS},
{"test_long_and_overflow", (PyCFunction)test_long_and_overflow,
METH_NOARGS},
......@@ -2684,7 +2594,6 @@ PyInit__testcapi(void)
return NULL;
Py_TYPE(&_HashInheritanceTester_Type)=&PyType_Type;
Py_TYPE(&_MemoryViewTester_Type)=&PyType_Type;
Py_TYPE(&test_structmembersType)=&PyType_Type;
Py_INCREF(&test_structmembersType);
......
......@@ -340,7 +340,7 @@ PyObject_GetBuffer(PyObject *obj, Py_buffer *view, int flags)
}
static int
_IsFortranContiguous(Py_buffer *view)
_IsFortranContiguous(const Py_buffer *view)
{
Py_ssize_t sd, dim;
int i;
......@@ -361,7 +361,7 @@ _IsFortranContiguous(Py_buffer *view)
}
static int
_IsCContiguous(Py_buffer *view)
_IsCContiguous(const Py_buffer *view)
{
Py_ssize_t sd, dim;
int i;
......@@ -382,16 +382,16 @@ _IsCContiguous(Py_buffer *view)
}
int
PyBuffer_IsContiguous(Py_buffer *view, char fort)
PyBuffer_IsContiguous(const Py_buffer *view, char order)
{
if (view->suboffsets != NULL) return 0;
if (fort == 'C')
if (order == 'C')
return _IsCContiguous(view);
else if (fort == 'F')
else if (order == 'F')
return _IsFortranContiguous(view);
else if (fort == 'A')
else if (order == 'A')
return (_IsCContiguous(view) || _IsFortranContiguous(view));
return 0;
}
......@@ -651,7 +651,7 @@ int
PyBuffer_FillInfo(Py_buffer *view, PyObject *obj, void *buf, Py_ssize_t len,
int readonly, int flags)
{
if (view == NULL) return 0;
if (view == NULL) return 0; /* XXX why not -1? */
if (((flags & PyBUF_WRITABLE) == PyBUF_WRITABLE) &&
(readonly == 1)) {
PyErr_SetString(PyExc_BufferError,
......
/* Memoryview object implementation */
#include "Python.h"
#include <stddef.h>
/****************************************************************************/
/* ManagedBuffer Object */
/****************************************************************************/
/*
ManagedBuffer Object:
---------------------
The purpose of this object is to facilitate the handling of chained
memoryviews that have the same underlying exporting object. PEP-3118
allows the underlying object to change while a view is exported. This
could lead to unexpected results when constructing a new memoryview
from an existing memoryview.
Rather than repeatedly redirecting buffer requests to the original base
object, all chained memoryviews use a single buffer snapshot. This
snapshot is generated by the constructor _PyManagedBuffer_FromObject().
Ownership rules:
----------------
The master buffer inside a managed buffer is filled in by the original
base object. shape, strides, suboffsets and format are read-only for
all consumers.
A memoryview's buffer is a private copy of the exporter's buffer. shape,
strides and suboffsets belong to the memoryview and are thus writable.
If a memoryview itself exports several buffers via memory_getbuf(), all
buffer copies share shape, strides and suboffsets. In this case, the
arrays are NOT writable.
Reference count assumptions:
----------------------------
The 'obj' member of a Py_buffer must either be NULL or refer to the
exporting base object. In the Python codebase, all getbufferprocs
return a new reference to view.obj (example: bytes_buffer_getbuffer()).
PyBuffer_Release() decrements view.obj (if non-NULL), so the
releasebufferprocs must NOT decrement view.obj.
*/
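/* Illustrative sketch (not part of this commit): the chaining that the
   managed buffer exists for.  Both views below end up registered with the
   same _PyManagedBufferObject, so the exporter's getbufferproc runs
   exactly once.  'exporter' is an assumed buffer provider. */
#if 0
static PyObject *
example_chained_views(PyObject *exporter)
{
    PyObject *first, *second;

    first = PyMemoryView_FromObject(exporter);    /* snapshot taken */
    if (first == NULL)
        return NULL;
    second = PyMemoryView_FromObject(first);      /* reuses the snapshot */
    Py_DECREF(first);
    return second;
}
#endif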
#define IS_RELEASED(memobj) \
(((PyMemoryViewObject *) memobj)->view.buf == NULL)
#define XSTRINGIZE(v) #v
#define STRINGIZE(v) XSTRINGIZE(v)
#define CHECK_RELEASED(memobj) \
if (IS_RELEASED(memobj)) { \
PyErr_SetString(PyExc_ValueError, \
"operation forbidden on released memoryview object"); \
return NULL; \
#define CHECK_MBUF_RELEASED(mbuf) \
if (((_PyManagedBufferObject *)mbuf)->flags&_Py_MANAGED_BUFFER_RELEASED) { \
PyErr_SetString(PyExc_ValueError, \
"operation forbidden on released memoryview object"); \
return NULL; \
}
#define CHECK_RELEASED_INT(memobj) \
if (IS_RELEASED(memobj)) { \
PyErr_SetString(PyExc_ValueError, \
"operation forbidden on released memoryview object"); \
return -1; \
Py_LOCAL_INLINE(_PyManagedBufferObject *)
mbuf_alloc(void)
{
_PyManagedBufferObject *mbuf;
mbuf = (_PyManagedBufferObject *)
PyObject_GC_New(_PyManagedBufferObject, &_PyManagedBuffer_Type);
if (mbuf == NULL)
return NULL;
mbuf->flags = 0;
mbuf->exports = 0;
mbuf->master.obj = NULL;
_PyObject_GC_TRACK(mbuf);
return mbuf;
}
static PyObject *
_PyManagedBuffer_FromObject(PyObject *base)
{
_PyManagedBufferObject *mbuf;
mbuf = mbuf_alloc();
if (mbuf == NULL)
return NULL;
if (PyObject_GetBuffer(base, &mbuf->master, PyBUF_FULL_RO) < 0) {
/* mbuf->master.obj must be NULL. */
Py_DECREF(mbuf);
return NULL;
}
static Py_ssize_t
get_shape0(Py_buffer *buf)
{
if (buf->shape != NULL)
return buf->shape[0];
if (buf->ndim == 0)
return 1;
PyErr_SetString(PyExc_TypeError,
"exported buffer does not have any shape information associated "
"to it");
return -1;
/* Assume that master.obj is a new reference to base. */
assert(mbuf->master.obj == base);
return (PyObject *)mbuf;
}
static void
dup_buffer(Py_buffer *dest, Py_buffer *src)
mbuf_release(_PyManagedBufferObject *self)
{
*dest = *src;
if (src->ndim == 1 && src->shape != NULL) {
dest->shape = &(dest->smalltable[0]);
dest->shape[0] = get_shape0(src);
}
if (src->ndim == 1 && src->strides != NULL) {
dest->strides = &(dest->smalltable[1]);
dest->strides[0] = src->strides[0];
}
if (self->flags&_Py_MANAGED_BUFFER_RELEASED)
return;
/* NOTE: at this point self->exports can still be > 0 if this function
is called from mbuf_clear() to break up a reference cycle. */
self->flags |= _Py_MANAGED_BUFFER_RELEASED;
/* PyBuffer_Release() decrements master->obj and sets it to NULL. */
_PyObject_GC_UNTRACK(self);
PyBuffer_Release(&self->master);
}
static void
mbuf_dealloc(_PyManagedBufferObject *self)
{
assert(self->exports == 0);
mbuf_release(self);
if (self->flags&_Py_MANAGED_BUFFER_FREE_FORMAT)
PyMem_Free(self->master.format);
PyObject_GC_Del(self);
}
static int
memory_getbuf(PyMemoryViewObject *self, Py_buffer *view, int flags)
mbuf_traverse(_PyManagedBufferObject *self, visitproc visit, void *arg)
{
int res = 0;
CHECK_RELEASED_INT(self);
if (self->view.obj != NULL)
res = PyObject_GetBuffer(self->view.obj, view, flags);
if (view)
dup_buffer(view, &self->view);
return res;
Py_VISIT(self->master.obj);
return 0;
}
static void
memory_releasebuf(PyMemoryViewObject *self, Py_buffer *view)
static int
mbuf_clear(_PyManagedBufferObject *self)
{
PyBuffer_Release(view);
assert(self->exports >= 0);
mbuf_release(self);
return 0;
}
PyTypeObject _PyManagedBuffer_Type = {
PyVarObject_HEAD_INIT(&PyType_Type, 0)
"managedbuffer",
sizeof(_PyManagedBufferObject),
0,
(destructor)mbuf_dealloc, /* tp_dealloc */
0, /* tp_print */
0, /* tp_getattr */
0, /* tp_setattr */
0, /* tp_reserved */
0, /* tp_repr */
0, /* tp_as_number */
0, /* tp_as_sequence */
0, /* tp_as_mapping */
0, /* tp_hash */
0, /* tp_call */
0, /* tp_str */
PyObject_GenericGetAttr, /* tp_getattro */
0, /* tp_setattro */
0, /* tp_as_buffer */
Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /* tp_flags */
0, /* tp_doc */
(traverseproc)mbuf_traverse, /* tp_traverse */
(inquiry)mbuf_clear /* tp_clear */
};
/****************************************************************************/
/* MemoryView Object */
/****************************************************************************/
/* In the process of breaking reference cycles mbuf_release() can be
called before memory_release(). */
#define BASE_INACCESSIBLE(mv) \
(((PyMemoryViewObject *)mv)->flags&_Py_MEMORYVIEW_RELEASED || \
((PyMemoryViewObject *)mv)->mbuf->flags&_Py_MANAGED_BUFFER_RELEASED)
#define CHECK_RELEASED(mv) \
if (BASE_INACCESSIBLE(mv)) { \
PyErr_SetString(PyExc_ValueError, \
"operation forbidden on released memoryview object"); \
return NULL; \
}
#define CHECK_RELEASED_INT(mv) \
if (BASE_INACCESSIBLE(mv)) { \
PyErr_SetString(PyExc_ValueError, \
"operation forbidden on released memoryview object"); \
return -1; \
}
#define CHECK_LIST_OR_TUPLE(v) \
if (!PyList_Check(v) && !PyTuple_Check(v)) { \
PyErr_SetString(PyExc_TypeError, \
#v " must be a list or a tuple"); \
return NULL; \
}
#define VIEW_ADDR(mv) (&((PyMemoryViewObject *)mv)->view)
/* Check for the presence of suboffsets in the first dimension. */
#define HAVE_PTR(suboffsets) (suboffsets && suboffsets[0] >= 0)
/* Adjust ptr if suboffsets are present. */
#define ADJUST_PTR(ptr, suboffsets) \
(HAVE_PTR(suboffsets) ? *((char**)ptr) + suboffsets[0] : ptr)
/* Memoryview buffer properties */
#define MV_C_CONTIGUOUS(flags) (flags&(_Py_MEMORYVIEW_SCALAR|_Py_MEMORYVIEW_C))
#define MV_F_CONTIGUOUS(flags) \
(flags&(_Py_MEMORYVIEW_SCALAR|_Py_MEMORYVIEW_FORTRAN))
#define MV_ANY_CONTIGUOUS(flags) \
(flags&(_Py_MEMORYVIEW_SCALAR|_Py_MEMORYVIEW_C|_Py_MEMORYVIEW_FORTRAN))
/* Fast contiguity test. Caller must ensure suboffsets==NULL and ndim==1. */
#define MV_CONTIGUOUS_NDIM1(view) \
((view)->shape[0] == 1 || (view)->strides[0] == (view)->itemsize)
/* getbuffer() requests */
#define REQ_INDIRECT(flags) ((flags&PyBUF_INDIRECT) == PyBUF_INDIRECT)
#define REQ_C_CONTIGUOUS(flags) ((flags&PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS)
#define REQ_F_CONTIGUOUS(flags) ((flags&PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS)
#define REQ_ANY_CONTIGUOUS(flags) ((flags&PyBUF_ANY_CONTIGUOUS) == PyBUF_ANY_CONTIGUOUS)
#define REQ_STRIDES(flags) ((flags&PyBUF_STRIDES) == PyBUF_STRIDES)
#define REQ_SHAPE(flags) ((flags&PyBUF_ND) == PyBUF_ND)
#define REQ_WRITABLE(flags) (flags&PyBUF_WRITABLE)
#define REQ_FORMAT(flags) (flags&PyBUF_FORMAT)
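/* Illustrative note (not part of this commit): the REQ_* macros test
   compound flags.  Since PyBUF_C_CONTIGUOUS already contains the
   PyBUF_STRIDES and PyBUF_ND bits, a request such as the one below also
   satisfies REQ_STRIDES() and REQ_SHAPE().  'obj' is an assumption. */
#if 0
static int
example_request(PyObject *obj)
{
    Py_buffer view;

    if (PyObject_GetBuffer(obj, &view, PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) < 0)
        return -1;
    /* view.buf, view.shape, view.strides and view.format are all valid. */
    PyBuffer_Release(&view);
    return 0;
}
#endif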
PyDoc_STRVAR(memory_doc,
"memoryview(object)\n\
\n\
Create a new memoryview object which references the given object.");
PyObject *
PyMemoryView_FromBuffer(Py_buffer *info)
{
PyMemoryViewObject *mview;
if (info->buf == NULL) {
PyErr_SetString(PyExc_ValueError,
"cannot make memory view from a buffer with a NULL data pointer");
return NULL;
}
mview = (PyMemoryViewObject *)
PyObject_GC_New(PyMemoryViewObject, &PyMemoryView_Type);
if (mview == NULL)
return NULL;
mview->hash = -1;
dup_buffer(&mview->view, info);
/* NOTE: mview->view.obj should already have been incref'ed as
part of PyBuffer_FillInfo(). */
_PyObject_GC_TRACK(mview);
return (PyObject *)mview;
}
/**************************************************************************/
/* Copy memoryview buffers */
/**************************************************************************/
PyObject *
PyMemoryView_FromObject(PyObject *base)
{
PyMemoryViewObject *mview;
Py_buffer view;
/* The functions in this section take a source and a destination buffer
with the same logical structure: format, itemsize, ndim and shape
are identical, with ndim > 0.
if (!PyObject_CheckBuffer(base)) {
PyErr_SetString(PyExc_TypeError,
"cannot make memory view because object does "
"not have the buffer interface");
return NULL;
}
NOTE: All buffers are assumed to have PyBUF_FULL information, which
is the case for memoryviews! */
if (PyObject_GetBuffer(base, &view, PyBUF_FULL_RO) < 0)
return NULL;
mview = (PyMemoryViewObject *)PyMemoryView_FromBuffer(&view);
if (mview == NULL) {
PyBuffer_Release(&view);
return NULL;
}
/* Assumptions: ndim >= 1. The macro tests for a corner case that should
perhaps be explicitly forbidden in the PEP. */
#define HAVE_SUBOFFSETS_IN_LAST_DIM(view) \
(view->suboffsets && view->suboffsets[dest->ndim-1] >= 0)
return (PyObject *)mview;
Py_LOCAL_INLINE(int)
last_dim_is_contiguous(Py_buffer *dest, Py_buffer *src)
{
assert(dest->ndim > 0 && src->ndim > 0);
return (!HAVE_SUBOFFSETS_IN_LAST_DIM(dest) &&
!HAVE_SUBOFFSETS_IN_LAST_DIM(src) &&
dest->strides[dest->ndim-1] == dest->itemsize &&
src->strides[src->ndim-1] == src->itemsize);
}
static PyObject *
memory_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds)
/* Check that the logical structure of the destination and source buffers
is identical. */
static int
cmp_structure(Py_buffer *dest, Py_buffer *src)
{
PyObject *obj;
static char *kwlist[] = {"object", 0};
const char *dfmt, *sfmt;
int i;
if (!PyArg_ParseTupleAndKeywords(args, kwds, "O:memoryview", kwlist,
&obj)) {
return NULL;
assert(dest->format && src->format);
dfmt = dest->format[0] == '@' ? dest->format+1 : dest->format;
sfmt = src->format[0] == '@' ? src->format+1 : src->format;
if (strcmp(dfmt, sfmt) != 0 ||
dest->itemsize != src->itemsize ||
dest->ndim != src->ndim) {
goto value_error;
}
return PyMemoryView_FromObject(obj);
}
for (i = 0; i < dest->ndim; i++) {
if (dest->shape[i] != src->shape[i])
goto value_error;
if (dest->shape[i] == 0)
break;
}
return 0;
value_error:
PyErr_SetString(PyExc_ValueError,
"ndarray assignment: lvalue and rvalue have different structures");
return -1;
}
/* Base case for recursive multi-dimensional copying. Contiguous arrays are
copied with very little overhead. Assumptions: ndim == 1, mem == NULL or
sizeof(mem) == shape[0] * itemsize. */
static void
_strided_copy_nd(char *dest, char *src, int nd, Py_ssize_t *shape,
Py_ssize_t *strides, Py_ssize_t itemsize, char fort)
copy_base(const Py_ssize_t *shape, Py_ssize_t itemsize,
char *dptr, const Py_ssize_t *dstrides, const Py_ssize_t *dsuboffsets,
char *sptr, const Py_ssize_t *sstrides, const Py_ssize_t *ssuboffsets,
char *mem)
{
int k;
Py_ssize_t outstride;
if (nd==0) {
memcpy(dest, src, itemsize);
}
else if (nd == 1) {
for (k = 0; k<shape[0]; k++) {
memcpy(dest, src, itemsize);
dest += itemsize;
src += strides[0];
}
if (mem == NULL) { /* contiguous */
Py_ssize_t size = shape[0] * itemsize;
if (dptr + size < sptr || sptr + size < dptr)
memcpy(dptr, sptr, size); /* no overlapping */
else
memmove(dptr, sptr, size);
}
else {
if (fort == 'F') {
/* Copy first dimension first,
second dimension second, etc...
Set up the recursive loop backwards so that final
dimension is actually copied last.
*/
outstride = itemsize;
for (k=1; k<nd-1;k++) {
outstride *= shape[k];
}
for (k=0; k<shape[nd-1]; k++) {
_strided_copy_nd(dest, src, nd-1, shape,
strides, itemsize, fort);
dest += outstride;
src += strides[nd-1];
}
char *p;
Py_ssize_t i;
for (i=0, p=mem; i < shape[0]; p+=itemsize, sptr+=sstrides[0], i++) {
char *xsptr = ADJUST_PTR(sptr, ssuboffsets);
memcpy(p, xsptr, itemsize);
}
else {
/* Copy last dimension first,
second-to-last dimension second, etc.
Set up the recursion so that the
first dimension is copied last
*/
outstride = itemsize;
for (k=1; k < nd; k++) {
outstride *= shape[k];
}
for (k=0; k<shape[0]; k++) {
_strided_copy_nd(dest, src, nd-1, shape+1,
strides+1, itemsize,
fort);
dest += outstride;
src += strides[0];
}
for (i=0, p=mem; i < shape[0]; p+=itemsize, dptr+=dstrides[0], i++) {
char *xdptr = ADJUST_PTR(dptr, dsuboffsets);
memcpy(xdptr, p, itemsize);
}
}
return;
}
static int
_indirect_copy_nd(char *dest, Py_buffer *view, char fort)
/* Recursively copy a source buffer to a destination buffer. The two buffers
have the same ndim, shape and itemsize. */
static void
copy_rec(const Py_ssize_t *shape, Py_ssize_t ndim, Py_ssize_t itemsize,
char *dptr, const Py_ssize_t *dstrides, const Py_ssize_t *dsuboffsets,
char *sptr, const Py_ssize_t *sstrides, const Py_ssize_t *ssuboffsets,
char *mem)
{
Py_ssize_t *indices;
int k;
Py_ssize_t elements;
char *ptr;
void (*func)(int, Py_ssize_t *, const Py_ssize_t *);
Py_ssize_t i;
if (view->ndim > PY_SSIZE_T_MAX / sizeof(Py_ssize_t)) {
PyErr_NoMemory();
return -1;
}
assert(ndim >= 1);
indices = (Py_ssize_t *)PyMem_Malloc(sizeof(Py_ssize_t)*view->ndim);
if (indices == NULL) {
PyErr_NoMemory();
return -1;
}
for (k=0; k<view->ndim;k++) {
indices[k] = 0;
if (ndim == 1) {
copy_base(shape, itemsize,
dptr, dstrides, dsuboffsets,
sptr, sstrides, ssuboffsets,
mem);
return;
}
elements = 1;
for (k=0; k<view->ndim; k++) {
elements *= view->shape[k];
}
if (fort == 'F') {
func = _Py_add_one_to_index_F;
}
else {
func = _Py_add_one_to_index_C;
}
while (elements--) {
func(view->ndim, indices, view->shape);
ptr = PyBuffer_GetPointer(view, indices);
memcpy(dest, ptr, view->itemsize);
dest += view->itemsize;
}
for (i = 0; i < shape[0]; dptr+=dstrides[0], sptr+=sstrides[0], i++) {
char *xdptr = ADJUST_PTR(dptr, dsuboffsets);
char *xsptr = ADJUST_PTR(sptr, ssuboffsets);
PyMem_Free(indices);
return 0;
copy_rec(shape+1, ndim-1, itemsize,
xdptr, dstrides+1, dsuboffsets ? dsuboffsets+1 : NULL,
xsptr, sstrides+1, ssuboffsets ? ssuboffsets+1 : NULL,
mem);
}
}
/*
   Get the data from an object as a contiguous chunk of memory (in
either 'C' or 'F'ortran order) even if it means copying it into a
separate memory area.
Returns a new reference to a Memory view object. If no copy is needed,
the memory view object points to the original memory and holds a
lock on the original. If a copy is needed, then the memory view object
points to a brand-new Bytes object (and holds a memory lock on it).
buffertype
PyBUF_READ buffer only needs to be read-only
PyBUF_WRITE buffer needs to be writable (give error if not contiguous)
PyBUF_SHADOW buffer needs to be writable so shadow it with
a contiguous buffer if it is not. The view will point to
the shadow buffer which can be written to and then
will be copied back into the other buffer when the memory
view is de-allocated. While the shadow buffer is
being used, it will have an exclusive write lock on
the original buffer.
*/
PyObject *
PyMemoryView_GetContiguous(PyObject *obj, int buffertype, char fort)
/* Faster copying of one-dimensional arrays. */
static int
copy_single(Py_buffer *dest, Py_buffer *src)
{
PyMemoryViewObject *mem;
PyObject *bytes;
Py_buffer *view;
int flags;
char *dest;
if (!PyObject_CheckBuffer(obj)) {
PyErr_SetString(PyExc_TypeError,
"object does not support the buffer interface");
return NULL;
}
mem = PyObject_GC_New(PyMemoryViewObject, &PyMemoryView_Type);
if (mem == NULL)
return NULL;
char *mem = NULL;
view = &mem->view;
flags = PyBUF_FULL_RO;
switch(buffertype) {
case PyBUF_WRITE:
flags = PyBUF_FULL;
break;
}
assert(dest->ndim == 1);
if (PyObject_GetBuffer(obj, view, flags) != 0) {
Py_DECREF(mem);
return NULL;
}
if (cmp_structure(dest, src) < 0)
return -1;
if (PyBuffer_IsContiguous(view, fort)) {
/* no copy needed */
_PyObject_GC_TRACK(mem);
return (PyObject *)mem;
}
/* otherwise a copy is needed */
if (buffertype == PyBUF_WRITE) {
Py_DECREF(mem);
PyErr_SetString(PyExc_BufferError,
"writable contiguous buffer requested "
"for a non-contiguousobject.");
return NULL;
}
bytes = PyBytes_FromStringAndSize(NULL, view->len);
if (bytes == NULL) {
Py_DECREF(mem);
return NULL;
}
dest = PyBytes_AS_STRING(bytes);
/* different copying strategy depending on whether
or not any pointer de-referencing is needed
*/
/* strided or in-direct copy */
if (view->suboffsets==NULL) {
_strided_copy_nd(dest, view->buf, view->ndim, view->shape,
view->strides, view->itemsize, fort);
}
else {
if (_indirect_copy_nd(dest, view, fort) < 0) {
Py_DECREF(bytes);
Py_DECREF(mem);
return NULL;
if (!last_dim_is_contiguous(dest, src)) {
mem = PyMem_Malloc(dest->shape[0] * dest->itemsize);
if (mem == NULL) {
PyErr_NoMemory();
return -1;
}
PyBuffer_Release(view); /* XXX ? */
}
_PyObject_GC_TRACK(mem);
return (PyObject *)mem;
}
copy_base(dest->shape, dest->itemsize,
dest->buf, dest->strides, dest->suboffsets,
src->buf, src->strides, src->suboffsets,
mem);
static PyObject *
memory_format_get(PyMemoryViewObject *self)
{
CHECK_RELEASED(self);
return PyUnicode_FromString(self->view.format);
}
if (mem)
PyMem_Free(mem);
static PyObject *
memory_itemsize_get(PyMemoryViewObject *self)
{
CHECK_RELEASED(self);
return PyLong_FromSsize_t(self->view.itemsize);
return 0;
}
static PyObject *
_IntTupleFromSsizet(int len, Py_ssize_t *vals)
/* Recursively copy src to dest. Both buffers must have the same basic
structure. Copying is atomic, the function never fails with a partial
copy. */
static int
copy_buffer(Py_buffer *dest, Py_buffer *src)
{
int i;
PyObject *o;
PyObject *intTuple;
char *mem = NULL;
if (vals == NULL) {
Py_INCREF(Py_None);
return Py_None;
}
intTuple = PyTuple_New(len);
if (!intTuple)
return NULL;
for (i=0; i<len; i++) {
o = PyLong_FromSsize_t(vals[i]);
if (!o) {
Py_DECREF(intTuple);
return NULL;
assert(dest->ndim > 0);
if (cmp_structure(dest, src) < 0)
return -1;
if (!last_dim_is_contiguous(dest, src)) {
mem = PyMem_Malloc(dest->shape[dest->ndim-1] * dest->itemsize);
if (mem == NULL) {
PyErr_NoMemory();
return -1;
}
PyTuple_SET_ITEM(intTuple, i, o);
}
return intTuple;
}
static PyObject *
memory_shape_get(PyMemoryViewObject *self)
{
CHECK_RELEASED(self);
return _IntTupleFromSsizet(self->view.ndim, self->view.shape);
}
copy_rec(dest->shape, dest->ndim, dest->itemsize,
dest->buf, dest->strides, dest->suboffsets,
src->buf, src->strides, src->suboffsets,
mem);
static PyObject *
memory_strides_get(PyMemoryViewObject *self)
{
CHECK_RELEASED(self);
return _IntTupleFromSsizet(self->view.ndim, self->view.strides);
if (mem)
PyMem_Free(mem);
return 0;
}
static PyObject *
memory_suboffsets_get(PyMemoryViewObject *self)
/* Initialize strides for a C-contiguous array. */
Py_LOCAL_INLINE(void)
init_strides_from_shape(Py_buffer *view)
{
CHECK_RELEASED(self);
return _IntTupleFromSsizet(self->view.ndim, self->view.suboffsets);
Py_ssize_t i;
assert(view->ndim > 0);
view->strides[view->ndim-1] = view->itemsize;
for (i = view->ndim-2; i >= 0; i--)
view->strides[i] = view->strides[i+1] * view->shape[i+1];
}
static PyObject *
memory_readonly_get(PyMemoryViewObject *self)
/* Initialize strides for a Fortran-contiguous array. */
Py_LOCAL_INLINE(void)
init_fortran_strides_from_shape(Py_buffer *view)
{
CHECK_RELEASED(self);
return PyBool_FromLong(self->view.readonly);
Py_ssize_t i;
assert(view->ndim > 0);
view->strides[0] = view->itemsize;
for (i = 1; i < view->ndim; i++)
view->strides[i] = view->strides[i-1] * view->shape[i-1];
}
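/* Worked example (illustrative): for shape = {2, 3} and itemsize = 8,
   init_strides_from_shape() produces strides = {24, 8} (C order), while
   init_fortran_strides_from_shape() produces strides = {8, 16} (Fortran
   order).  In both cases product(shape) * itemsize == 48 == len. */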
static PyObject *
memory_ndim_get(PyMemoryViewObject *self)
/* Copy src to a C-contiguous representation. Assumptions:
len(mem) == src->len. */
static int
buffer_to_c_contiguous(char *mem, Py_buffer *src)
{
CHECK_RELEASED(self);
return PyLong_FromLong(self->view.ndim);
Py_buffer dest;
Py_ssize_t *strides;
int ret;
assert(src->shape != NULL);
assert(src->strides != NULL);
strides = PyMem_Malloc(src->ndim * (sizeof *src->strides));
if (strides == NULL) {
PyErr_NoMemory();
return -1;
}
/* initialize dest as a C-contiguous buffer */
dest = *src;
dest.buf = mem;
/* shape is constant and shared */
dest.strides = strides;
init_strides_from_shape(&dest);
dest.suboffsets = NULL;
ret = copy_buffer(&dest, src);
PyMem_Free(strides);
return ret;
}
static PyGetSetDef memory_getsetlist[] ={
{"format", (getter)memory_format_get, NULL, NULL},
{"itemsize", (getter)memory_itemsize_get, NULL, NULL},
{"shape", (getter)memory_shape_get, NULL, NULL},
{"strides", (getter)memory_strides_get, NULL, NULL},
{"suboffsets", (getter)memory_suboffsets_get, NULL, NULL},
{"readonly", (getter)memory_readonly_get, NULL, NULL},
{"ndim", (getter)memory_ndim_get, NULL, NULL},
{NULL, NULL, NULL, NULL},
};
/****************************************************************************/
/* Constructors */
/****************************************************************************/
static PyObject *
memory_tobytes(PyMemoryViewObject *mem, PyObject *noargs)
/* Initialize values that are shared with the managed buffer. */
Py_LOCAL_INLINE(void)
init_shared_values(Py_buffer *dest, const Py_buffer *src)
{
CHECK_RELEASED(mem);
return PyObject_CallFunctionObjArgs(
(PyObject *) &PyBytes_Type, mem, NULL);
dest->obj = src->obj;
dest->buf = src->buf;
dest->len = src->len;
dest->itemsize = src->itemsize;
dest->readonly = src->readonly;
dest->format = src->format ? src->format : "B";
dest->internal = src->internal;
}
/* TODO: rewrite this function using the struct module to unpack
each buffer item */
static PyObject *
memory_tolist(PyMemoryViewObject *mem, PyObject *noargs)
/* Copy shape and strides. Reconstruct missing values. */
static void
init_shape_strides(Py_buffer *dest, const Py_buffer *src)
{
Py_buffer *view = &(mem->view);
Py_ssize_t i;
PyObject *res, *item;
char *buf;
CHECK_RELEASED(mem);
if (strcmp(view->format, "B") || view->itemsize != 1) {
PyErr_SetString(PyExc_NotImplementedError,
"tolist() only supports byte views");
return NULL;
if (src->ndim == 0) {
dest->shape = NULL;
dest->strides = NULL;
return;
}
if (view->ndim != 1) {
PyErr_SetString(PyExc_NotImplementedError,
"tolist() only supports one-dimensional objects");
return NULL;
if (src->ndim == 1) {
dest->shape[0] = src->shape ? src->shape[0] : src->len / src->itemsize;
dest->strides[0] = src->strides ? src->strides[0] : src->itemsize;
return;
}
res = PyList_New(view->len);
if (res == NULL)
return NULL;
buf = view->buf;
for (i = 0; i < view->len; i++) {
item = PyLong_FromUnsignedLong((unsigned char) *buf);
if (item == NULL) {
Py_DECREF(res);
return NULL;
}
PyList_SET_ITEM(res, i, item);
buf++;
for (i = 0; i < src->ndim; i++)
dest->shape[i] = src->shape[i];
if (src->strides) {
for (i = 0; i < src->ndim; i++)
dest->strides[i] = src->strides[i];
}
else {
init_strides_from_shape(dest);
}
return res;
}
static void
do_release(PyMemoryViewObject *self)
Py_LOCAL_INLINE(void)
init_suboffsets(Py_buffer *dest, const Py_buffer *src)
{
if (self->view.obj != NULL) {
PyBuffer_Release(&(self->view));
Py_ssize_t i;
if (src->suboffsets == NULL) {
dest->suboffsets = NULL;
return;
}
self->view.obj = NULL;
self->view.buf = NULL;
for (i = 0; i < src->ndim; i++)
dest->suboffsets[i] = src->suboffsets[i];
}
/* len = product(shape) * itemsize */
Py_LOCAL_INLINE(void)
init_len(Py_buffer *view)
{
Py_ssize_t i, len;
len = 1;
for (i = 0; i < view->ndim; i++)
len *= view->shape[i];
len *= view->itemsize;
view->len = len;
}
/* Initialize memoryview buffer properties. */
static void
init_flags(PyMemoryViewObject *mv)
{
const Py_buffer *view = &mv->view;
int flags = 0;
switch (view->ndim) {
case 0:
flags |= (_Py_MEMORYVIEW_SCALAR|_Py_MEMORYVIEW_C|
_Py_MEMORYVIEW_FORTRAN);
break;
case 1:
if (MV_CONTIGUOUS_NDIM1(view))
flags |= (_Py_MEMORYVIEW_C|_Py_MEMORYVIEW_FORTRAN);
break;
default:
if (PyBuffer_IsContiguous(view, 'C'))
flags |= _Py_MEMORYVIEW_C;
if (PyBuffer_IsContiguous(view, 'F'))
flags |= _Py_MEMORYVIEW_FORTRAN;
break;
}
if (view->suboffsets) {
flags |= _Py_MEMORYVIEW_PIL;
flags &= ~(_Py_MEMORYVIEW_C|_Py_MEMORYVIEW_FORTRAN);
}
mv->flags = flags;
}
/* Allocate a new memoryview and perform basic initialization. New memoryviews
are exclusively created through the mbuf_add functions. */
Py_LOCAL_INLINE(PyMemoryViewObject *)
memory_alloc(int ndim)
{
PyMemoryViewObject *mv;
mv = (PyMemoryViewObject *)
PyObject_GC_NewVar(PyMemoryViewObject, &PyMemoryView_Type, 3*ndim);
if (mv == NULL)
return NULL;
mv->mbuf = NULL;
mv->hash = -1;
mv->flags = 0;
mv->exports = 0;
mv->view.ndim = ndim;
mv->view.shape = mv->ob_array;
mv->view.strides = mv->ob_array + ndim;
mv->view.suboffsets = mv->ob_array + 2 * ndim;
_PyObject_GC_TRACK(mv);
return mv;
}
/*
Return a new memoryview that is registered with mbuf. If src is NULL,
use mbuf->master as the underlying buffer. Otherwise, use src.
The new memoryview has full buffer information: shape and strides
are always present, suboffsets as needed. Arrays are copied to
the memoryview's ob_array field.
*/
static PyObject *
memory_enter(PyObject *self, PyObject *args)
mbuf_add_view(_PyManagedBufferObject *mbuf, const Py_buffer *src)
{
CHECK_RELEASED(self);
Py_INCREF(self);
return self;
PyMemoryViewObject *mv;
Py_buffer *dest;
if (src == NULL)
src = &mbuf->master;
if (src->ndim > PyBUF_MAX_NDIM) {
PyErr_SetString(PyExc_ValueError,
"memoryview: number of dimensions must not exceed "
STRINGIZE(PyBUF_MAX_NDIM));
return NULL;
}
mv = memory_alloc(src->ndim);
if (mv == NULL)
return NULL;
dest = &mv->view;
init_shared_values(dest, src);
init_shape_strides(dest, src);
init_suboffsets(dest, src);
init_flags(mv);
mv->mbuf = mbuf;
Py_INCREF(mbuf);
mbuf->exports++;
return (PyObject *)mv;
}
/* Register an incomplete view: shape, strides, suboffsets and flags still
need to be initialized. Use 'ndim' instead of src->ndim to determine the
size of the memoryview's ob_array.
Assumption: ndim <= PyBUF_MAX_NDIM. */
static PyObject *
memory_exit(PyObject *self, PyObject *args)
mbuf_add_incomplete_view(_PyManagedBufferObject *mbuf, const Py_buffer *src,
int ndim)
{
do_release((PyMemoryViewObject *) self);
Py_RETURN_NONE;
PyMemoryViewObject *mv;
Py_buffer *dest;
if (src == NULL)
src = &mbuf->master;
assert(ndim <= PyBUF_MAX_NDIM);
mv = memory_alloc(ndim);
if (mv == NULL)
return NULL;
dest = &mv->view;
init_shared_values(dest, src);
mv->mbuf = mbuf;
Py_INCREF(mbuf);
mbuf->exports++;
return (PyObject *)mv;
}
static PyMethodDef memory_methods[] = {
{"release", memory_exit, METH_NOARGS},
{"tobytes", (PyCFunction)memory_tobytes, METH_NOARGS, NULL},
{"tolist", (PyCFunction)memory_tolist, METH_NOARGS, NULL},
{"__enter__", memory_enter, METH_NOARGS},
{"__exit__", memory_exit, METH_VARARGS},
{NULL, NULL} /* sentinel */
};
/* Expose a raw memory area as a view of contiguous bytes. flags can be
PyBUF_READ or PyBUF_WRITE. view->format is set to "B" (unsigned bytes).
The memoryview has complete buffer information. */
PyObject *
PyMemoryView_FromMemory(char *mem, Py_ssize_t size, int flags)
{
_PyManagedBufferObject *mbuf;
PyObject *mv;
int readonly;
assert(mem != NULL);
assert(flags == PyBUF_READ || flags == PyBUF_WRITE);
mbuf = mbuf_alloc();
if (mbuf == NULL)
return NULL;
readonly = (flags == PyBUF_WRITE) ? 0 : 1;
(void)PyBuffer_FillInfo(&mbuf->master, NULL, mem, size, readonly,
PyBUF_FULL_RO);
mv = mbuf_add_view(mbuf, NULL);
Py_DECREF(mbuf);
return mv;
}
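/* Illustrative sketch (not part of this commit): exposing a raw C array
   as a read-only view of unsigned bytes.  The caller must keep 'data'
   alive for as long as the returned view exists. */
#if 0
static PyObject *
example_from_memory(void)
{
    static char data[256];
    /* The result has format "B", ndim 1, itemsize 1 and readonly set. */
    return PyMemoryView_FromMemory(data, sizeof(data), PyBUF_READ);
}
#endif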
/* Create a memoryview from a given Py_buffer. For simple byte views,
PyMemoryView_FromMemory() should be used instead.
This function is the only entry point that can create a master buffer
without full information. Because of this fact init_shape_strides()
must be able to reconstruct missing values. */
PyObject *
PyMemoryView_FromBuffer(Py_buffer *info)
{
_PyManagedBufferObject *mbuf;
PyObject *mv;
if (info->buf == NULL) {
PyErr_SetString(PyExc_ValueError,
"PyMemoryView_FromBuffer(): info->buf must not be NULL");
return NULL;
}
mbuf = mbuf_alloc();
if (mbuf == NULL)
return NULL;
/* info->obj is either NULL or a borrowed reference. This reference
should not be decremented in PyBuffer_Release(). */
mbuf->master = *info;
mbuf->master.obj = NULL;
mv = mbuf_add_view(mbuf, NULL);
Py_DECREF(mbuf);
return mv;
}
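/* Illustrative sketch (not part of this commit): the temporary-buffer
   use case that PyMemoryView_FromBuffer() is meant for.  info.obj stays
   NULL, so nothing is kept alive on behalf of the view; the caller must
   guarantee that 'data' outlives it.  'data' and 'n' are assumptions. */
#if 0
static PyObject *
example_from_buffer(char *data, Py_ssize_t n)
{
    Py_buffer info;

    if (PyBuffer_FillInfo(&info, NULL, data, n, 1 /* readonly */,
                          PyBUF_FULL_RO) < 0)
        return NULL;
    return PyMemoryView_FromBuffer(&info);
}
#endif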
/* Create a memoryview from an object that implements the buffer protocol.
If the object is a memoryview, the new memoryview must be registered
with the same managed buffer. Otherwise, a new managed buffer is created. */
PyObject *
PyMemoryView_FromObject(PyObject *v)
{
_PyManagedBufferObject *mbuf;
if (PyMemoryView_Check(v)) {
PyMemoryViewObject *mv = (PyMemoryViewObject *)v;
CHECK_RELEASED(mv);
return mbuf_add_view(mv->mbuf, &mv->view);
}
else if (PyObject_CheckBuffer(v)) {
PyObject *ret;
mbuf = (_PyManagedBufferObject *)_PyManagedBuffer_FromObject(v);
if (mbuf == NULL)
return NULL;
ret = mbuf_add_view(mbuf, NULL);
Py_DECREF(mbuf);
return ret;
}
PyErr_Format(PyExc_TypeError,
"memoryview: %.200s object does not have the buffer interface",
Py_TYPE(v)->tp_name);
return NULL;
}
/* Copy the format string from a base object that might vanish. */
static int
mbuf_copy_format(_PyManagedBufferObject *mbuf, const char *fmt)
{
if (fmt != NULL) {
char *cp = PyMem_Malloc(strlen(fmt)+1);
if (cp == NULL) {
PyErr_NoMemory();
return -1;
}
mbuf->master.format = strcpy(cp, fmt);
mbuf->flags |= _Py_MANAGED_BUFFER_FREE_FORMAT;
}
return 0;
}
/*
Return a memoryview that is based on a contiguous copy of src.
Assumptions: src has PyBUF_FULL_RO information, src->ndim > 0.
Ownership rules:
1) As usual, the returned memoryview has a private copy
of src->shape, src->strides and src->suboffsets.
2) src->format is copied to the master buffer and released
in mbuf_dealloc(). The releasebufferproc of the bytes
object is NULL, so it does not matter that mbuf_release()
passes the altered format pointer to PyBuffer_Release().
*/
static PyObject *
memory_from_contiguous_copy(Py_buffer *src, char order)
{
_PyManagedBufferObject *mbuf;
PyMemoryViewObject *mv;
PyObject *bytes;
Py_buffer *dest;
int i;
assert(src->ndim > 0);
assert(src->shape != NULL);
bytes = PyBytes_FromStringAndSize(NULL, src->len);
if (bytes == NULL)
return NULL;
mbuf = (_PyManagedBufferObject *)_PyManagedBuffer_FromObject(bytes);
Py_DECREF(bytes);
if (mbuf == NULL)
return NULL;
if (mbuf_copy_format(mbuf, src->format) < 0) {
Py_DECREF(mbuf);
return NULL;
}
mv = (PyMemoryViewObject *)mbuf_add_incomplete_view(mbuf, NULL, src->ndim);
Py_DECREF(mbuf);
if (mv == NULL)
return NULL;
dest = &mv->view;
/* shared values are initialized correctly except for itemsize */
dest->itemsize = src->itemsize;
/* shape and strides */
for (i = 0; i < src->ndim; i++) {
dest->shape[i] = src->shape[i];
}
if (order == 'C' || order == 'A') {
init_strides_from_shape(dest);
}
else {
init_fortran_strides_from_shape(dest);
}
/* suboffsets */
dest->suboffsets = NULL;
/* flags */
init_flags(mv);
if (copy_buffer(dest, src) < 0) {
Py_DECREF(mv);
return NULL;
}
return (PyObject *)mv;
}
/*
Return a new memoryview object based on a contiguous exporter with
buffertype={PyBUF_READ, PyBUF_WRITE} and order={'C', 'F'ortran, or 'A'ny}.
The logical structure of the input and output buffers is the same
(i.e. tolist(input) == tolist(output)), but the physical layout in
memory can be explicitly chosen.
As usual, if buffertype=PyBUF_WRITE, the exporter's buffer must be writable,
otherwise it may be writable or read-only.
If the exporter is already contiguous with the desired target order,
the memoryview will be directly based on the exporter.
Otherwise, if the buffertype is PyBUF_READ, the memoryview will be
based on a new bytes object. If order={'C', 'A'ny}, use 'C' order,
'F'ortran order otherwise.
*/
PyObject *
PyMemoryView_GetContiguous(PyObject *obj, int buffertype, char order)
{
PyMemoryViewObject *mv;
PyObject *ret;
Py_buffer *view;
assert(buffertype == PyBUF_READ || buffertype == PyBUF_WRITE);
assert(order == 'C' || order == 'F' || order == 'A');
mv = (PyMemoryViewObject *)PyMemoryView_FromObject(obj);
if (mv == NULL)
return NULL;
view = &mv->view;
if (buffertype == PyBUF_WRITE && view->readonly) {
PyErr_SetString(PyExc_BufferError,
"underlying buffer is not writable");
Py_DECREF(mv);
return NULL;
}
if (PyBuffer_IsContiguous(view, order))
return (PyObject *)mv;
if (buffertype == PyBUF_WRITE) {
PyErr_SetString(PyExc_BufferError,
"writable contiguous buffer requested "
"for a non-contiguous object.");
Py_DECREF(mv);
return NULL;
}
ret = memory_from_contiguous_copy(view, order);
Py_DECREF(mv);
return ret;
}
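/* Illustrative sketch (not part of this commit): requesting a read-only
   C-contiguous view of an arbitrary exporter.  If 'obj' is already
   C-contiguous, the result is based directly on obj; otherwise it is
   based on a bytes object holding a contiguous copy. */
#if 0
static PyObject *
example_get_contiguous(PyObject *obj)
{
    PyObject *mv = PyMemoryView_GetContiguous(obj, PyBUF_READ, 'C');
    if (mv == NULL)
        return NULL;
    /* PyMemoryView_GET_BUFFER(mv)->buf now points to contiguous data. */
    return mv;
}
#endif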
static PyObject *
memory_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds)
{
PyObject *obj;
static char *kwlist[] = {"object", NULL};
if (!PyArg_ParseTupleAndKeywords(args, kwds, "O:memoryview", kwlist,
&obj)) {
return NULL;
}
return PyMemoryView_FromObject(obj);
}
/****************************************************************************/
/* Release/GC management */
/****************************************************************************/
/* Inform the managed buffer that this particular memoryview will not access
the underlying buffer again. If no other memoryviews are registered with
the managed buffer, the underlying buffer is released instantly and
marked as inaccessible for both the memoryview and the managed buffer.
This function fails if the memoryview itself has exported buffers. */
static int
_memory_release(PyMemoryViewObject *self)
{
if (self->flags & _Py_MEMORYVIEW_RELEASED)
return 0;
if (self->exports == 0) {
self->flags |= _Py_MEMORYVIEW_RELEASED;
assert(self->mbuf->exports > 0);
if (--self->mbuf->exports == 0)
mbuf_release(self->mbuf);
return 0;
}
if (self->exports > 0) {
PyErr_Format(PyExc_BufferError,
"memoryview has %zd exported buffer%s", self->exports,
self->exports==1 ? "" : "s");
return -1;
}
Py_FatalError("_memory_release(): negative export count");
return -1;
}
static PyObject *
memory_release(PyMemoryViewObject *self)
{
if (_memory_release(self) < 0)
return NULL;
Py_RETURN_NONE;
}
static void
memory_dealloc(PyMemoryViewObject *self)
{
assert(self->exports == 0);
_PyObject_GC_UNTRACK(self);
do_release(self);
(void)_memory_release(self);
Py_CLEAR(self->mbuf);
PyObject_GC_Del(self);
}
static int
memory_traverse(PyMemoryViewObject *self, visitproc visit, void *arg)
{
Py_VISIT(self->mbuf);
return 0;
}
static int
memory_clear(PyMemoryViewObject *self)
{
(void)_memory_release(self);
Py_CLEAR(self->mbuf);
return 0;
}
static PyObject *
memory_repr(PyMemoryViewObject *self)
memory_enter(PyObject *self, PyObject *args)
{
if (IS_RELEASED(self))
return PyUnicode_FromFormat("<released memory at %p>", self);
else
return PyUnicode_FromFormat("<memory at %p>", self);
CHECK_RELEASED(self);
Py_INCREF(self);
return self;
}
static Py_hash_t
memory_hash(PyMemoryViewObject *self)
static PyObject *
memory_exit(PyObject *self, PyObject *args)
{
if (self->hash == -1) {
Py_buffer *view = &self->view;
CHECK_RELEASED_INT(self);
if (view->ndim > 1) {
PyErr_SetString(PyExc_NotImplementedError,
"can't hash multi-dimensional memoryview object");
return memory_release((PyMemoryViewObject *)self);
}
/****************************************************************************/
/* Casting format and shape */
/****************************************************************************/
#define IS_BYTE_FORMAT(f) (f == 'b' || f == 'B' || f == 'c')
Py_LOCAL_INLINE(Py_ssize_t)
get_native_fmtchar(char *result, const char *fmt)
{
Py_ssize_t size = -1;
if (fmt[0] == '@') fmt++;
switch (fmt[0]) {
case 'c': case 'b': case 'B': size = sizeof(char); break;
case 'h': case 'H': size = sizeof(short); break;
case 'i': case 'I': size = sizeof(int); break;
case 'l': case 'L': size = sizeof(long); break;
#ifdef HAVE_LONG_LONG
case 'q': case 'Q': size = sizeof(PY_LONG_LONG); break;
#endif
case 'n': case 'N': size = sizeof(Py_ssize_t); break;
case 'f': size = sizeof(float); break;
case 'd': size = sizeof(double); break;
#ifdef HAVE_C99_BOOL
case '?': size = sizeof(_Bool); break;
#else
case '?': size = sizeof(char); break;
#endif
case 'P': size = sizeof(void *); break;
}
if (size > 0 && fmt[1] == '\0') {
*result = fmt[0];
return size;
}
return -1;
}
/* Cast a memoryview's data type to 'format'. The input array must be
C-contiguous. At least one of input-format, output-format must have
byte size. The output array is 1-D, with the same byte length as the
input array. Thus, view->len must be a multiple of the new itemsize. */
static int
cast_to_1D(PyMemoryViewObject *mv, PyObject *format)
{
Py_buffer *view = &mv->view;
PyObject *asciifmt;
char srcchar, destchar;
Py_ssize_t itemsize;
int ret = -1;
assert(view->ndim >= 1);
assert(Py_SIZE(mv) == 3*view->ndim);
assert(view->shape == mv->ob_array);
assert(view->strides == mv->ob_array + view->ndim);
assert(view->suboffsets == mv->ob_array + 2*view->ndim);
if (get_native_fmtchar(&srcchar, view->format) < 0) {
PyErr_SetString(PyExc_ValueError,
"memoryview: source format must be a native single character "
"format prefixed with an optional '@'");
return ret;
}
asciifmt = PyUnicode_AsASCIIString(format);
if (asciifmt == NULL)
return ret;
itemsize = get_native_fmtchar(&destchar, PyBytes_AS_STRING(asciifmt));
if (itemsize < 0) {
PyErr_SetString(PyExc_ValueError,
"memoryview: destination format must be a native single "
"character format prefixed with an optional '@'");
goto out;
}
if (!IS_BYTE_FORMAT(srcchar) && !IS_BYTE_FORMAT(destchar)) {
PyErr_SetString(PyExc_TypeError,
"memoryview: cannot cast between two non-byte formats");
goto out;
}
if (view->len % itemsize) {
PyErr_SetString(PyExc_TypeError,
"memoryview: length is not a multiple of itemsize");
goto out;
}
strncpy(mv->format, PyBytes_AS_STRING(asciifmt),
_Py_MEMORYVIEW_MAX_FORMAT);
mv->format[_Py_MEMORYVIEW_MAX_FORMAT-1] = '\0';
view->format = mv->format;
view->itemsize = itemsize;
view->ndim = 1;
view->shape[0] = view->len / view->itemsize;
view->strides[0] = view->itemsize;
view->suboffsets = NULL;
init_flags(mv);
ret = 0;
out:
Py_DECREF(asciifmt);
return ret;
}
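/* Worked example (illustrative): a C-contiguous view with len == 12 and
   format 'B' can be cast to '@i' where sizeof(int) == 4: the result is
   1-D with itemsize 4, shape[0] == 3 and strides[0] == 4.  A view with
   len == 10 would be rejected because 10 % 4 != 0. */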
/* The memoryview must have space for 3*len(seq) elements. */
static Py_ssize_t
copy_shape(Py_ssize_t *shape, const PyObject *seq, Py_ssize_t ndim,
Py_ssize_t itemsize)
{
Py_ssize_t x, i;
Py_ssize_t len = itemsize;
for (i = 0; i < ndim; i++) {
PyObject *tmp = PySequence_Fast_GET_ITEM(seq, i);
if (!PyLong_Check(tmp)) {
PyErr_SetString(PyExc_TypeError,
"memoryview.cast(): elements of shape must be integers");
return -1;
}
if (view->strides && view->strides[0] != view->itemsize) {
PyErr_SetString(PyExc_NotImplementedError,
"can't hash strided memoryview object");
x = PyLong_AsSsize_t(tmp);
if (x == -1 && PyErr_Occurred()) {
return -1;
}
if (!view->readonly) {
PyErr_SetString(PyExc_ValueError,
"can't hash writable memoryview object");
if (x <= 0) {
/* In general elements of shape may be 0, but not for casting. */
PyErr_Format(PyExc_ValueError,
"memoryview.cast(): elements of shape must be integers > 0");
return -1;
}
if (view->obj != NULL && PyObject_Hash(view->obj) == -1) {
/* Keep the original error message */
if (x > PY_SSIZE_T_MAX / len) {
PyErr_Format(PyExc_ValueError,
"memoryview.cast(): product(shape) > SSIZE_MAX");
return -1;
}
/* Can't fail */
self->hash = _Py_HashBytes((unsigned char *) view->buf, view->len);
len *= x;
shape[i] = x;
}
return self->hash;
}
/* Sequence methods */
static Py_ssize_t
memory_length(PyMemoryViewObject *self)
{
CHECK_RELEASED_INT(self);
return get_shape0(&self->view);
return len;
}
/* Alternate version of memory_subscript that only accepts indices.
Used by PySeqIter_New().
*/
static PyObject *
memory_item(PyMemoryViewObject *self, Py_ssize_t result)
/* Cast a 1-D array to a new shape. The result array will be C-contiguous.
If the result array does not have exactly the same byte length as the
input array, raise ValueError. */
static int
cast_to_ND(PyMemoryViewObject *mv, const PyObject *shape, int ndim)
{
Py_buffer *view = &(self->view);
Py_buffer *view = &mv->view;
Py_ssize_t len;
CHECK_RELEASED(self);
assert(view->ndim == 1); /* ndim from cast_to_1D() */
assert(Py_SIZE(mv) == 3*(ndim==0?1:ndim)); /* ndim of result array */
assert(view->shape == mv->ob_array);
assert(view->strides == mv->ob_array + (ndim==0?1:ndim));
assert(view->suboffsets == NULL);
view->ndim = ndim;
if (view->ndim == 0) {
PyErr_SetString(PyExc_IndexError,
"invalid indexing of 0-dim memory");
view->shape = NULL;
view->strides = NULL;
len = view->itemsize;
}
else {
len = copy_shape(view->shape, shape, ndim, view->itemsize);
if (len < 0)
return -1;
init_strides_from_shape(view);
}
if (view->len != len) {
PyErr_SetString(PyExc_TypeError,
"memoryview: product(shape) * itemsize != buffer size");
return -1;
}
init_flags(mv);
return 0;
}
static int
zero_in_shape(PyMemoryViewObject *mv)
{
Py_buffer *view = &mv->view;
Py_ssize_t i;
for (i = 0; i < view->ndim; i++)
if (view->shape[i] == 0)
return 1;
return 0;
}
/*
Cast a copy of 'self' to a different view. The input view must
be C-contiguous. The function always casts the input view to a
1-D output according to 'format'. At least one of input-format,
output-format must have byte size.
If 'shape' is given, the 1-D view from the previous step will
be cast to a C-contiguous view with new shape and strides.
All casts must result in views that will have the exact byte
size of the original input. Otherwise, an error is raised.
*/
static PyObject *
memory_cast(PyMemoryViewObject *self, PyObject *args, PyObject *kwds)
{
static char *kwlist[] = {"format", "shape", NULL};
PyMemoryViewObject *mv = NULL;
PyObject *shape = NULL;
PyObject *format;
Py_ssize_t ndim = 1;
CHECK_RELEASED(self);
if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O", kwlist,
&format, &shape)) {
return NULL;
}
if (!PyUnicode_Check(format)) {
PyErr_SetString(PyExc_TypeError,
"memoryview: format argument must be a string");
return NULL;
}
if (!MV_C_CONTIGUOUS(self->flags)) {
PyErr_SetString(PyExc_TypeError,
"memoryview: casts are restricted to C-contiguous views");
return NULL;
}
if (zero_in_shape(self)) {
PyErr_SetString(PyExc_TypeError,
"memoryview: cannot cast view with zeros in shape or strides");
return NULL;
}
if (view->ndim == 1) {
/* Return a bytes object */
char *ptr;
ptr = (char *)view->buf;
if (result < 0) {
result += get_shape0(view);
}
if ((result < 0) || (result >= get_shape0(view))) {
PyErr_SetString(PyExc_IndexError,
"index out of bounds");
return NULL;
}
if (view->strides == NULL)
ptr += view->itemsize * result;
else
ptr += view->strides[0] * result;
if (view->suboffsets != NULL &&
view->suboffsets[0] >= 0) {
ptr = *((char **)ptr) + view->suboffsets[0];
}
return PyBytes_FromStringAndSize(ptr, view->itemsize);
} else {
/* Return a new memory-view object */
Py_buffer newview;
memset(&newview, 0, sizeof(newview));
/* XXX: This needs to be fixed so it actually returns a sub-view */
return PyMemoryView_FromBuffer(&newview);
if (shape) {
CHECK_LIST_OR_TUPLE(shape)
ndim = PySequence_Fast_GET_SIZE(shape);
if (ndim > PyBUF_MAX_NDIM) {
PyErr_SetString(PyExc_ValueError,
"memoryview: number of dimensions must not exceed "
STRINGIZE(PyBUF_MAX_NDIM));
return NULL;
}
if (self->view.ndim != 1 && ndim != 1) {
PyErr_SetString(PyExc_TypeError,
"memoryview: cast must be 1D -> ND or ND -> 1D");
return NULL;
}
}
mv = (PyMemoryViewObject *)
mbuf_add_incomplete_view(self->mbuf, &self->view, ndim==0 ? 1 : (int)ndim);
if (mv == NULL)
return NULL;
if (cast_to_1D(mv, format) < 0)
goto error;
if (shape && cast_to_ND(mv, shape, (int)ndim) < 0)
goto error;
return (PyObject *)mv;
error:
Py_DECREF(mv);
return NULL;
}
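/* Illustrative sketch (not part of this commit): driving a cast from C
   through the method defined above.  'mv' is an assumed C-contiguous
   memoryview whose byte length is a multiple of sizeof(int). */
#if 0
static PyObject *
example_cast(PyObject *mv)
{
    /* First cast to a flat view of native ints; a second call such as
       cast("i", (2, 3)) could then reshape the 1-D result. */
    return PyObject_CallMethod(mv, "cast", "s", "i");
}
#endif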
/**************************************************************************/
/* getbuffer */
/**************************************************************************/
static int
memory_getbuf(PyMemoryViewObject *self, Py_buffer *view, int flags)
{
Py_buffer *base = &self->view;
int baseflags = self->flags;
CHECK_RELEASED_INT(self);
/* start with complete information */
*view = *base;
view->obj = NULL;
if (REQ_WRITABLE(flags) && base->readonly) {
PyErr_SetString(PyExc_BufferError,
"memoryview: underlying buffer is not writable");
return -1;
}
if (!REQ_FORMAT(flags)) {
/* NULL indicates that the buffer's data type has been cast to 'B'.
view->itemsize is the _previous_ itemsize. If shape is present,
the equality product(shape) * itemsize = len still holds at this
point. The equality calcsize(format) = itemsize does _not_ hold
from here on! */
view->format = NULL;
}
if (REQ_C_CONTIGUOUS(flags) && !MV_C_CONTIGUOUS(baseflags)) {
PyErr_SetString(PyExc_BufferError,
"memoryview: underlying buffer is not C-contiguous");
return -1;
}
if (REQ_F_CONTIGUOUS(flags) && !MV_F_CONTIGUOUS(baseflags)) {
PyErr_SetString(PyExc_BufferError,
"memoryview: underlying buffer is not Fortran contiguous");
return -1;
}
if (REQ_ANY_CONTIGUOUS(flags) && !MV_ANY_CONTIGUOUS(baseflags)) {
PyErr_SetString(PyExc_BufferError,
"memoryview: underlying buffer is not contiguous");
return -1;
}
if (!REQ_INDIRECT(flags) && (baseflags & _Py_MEMORYVIEW_PIL)) {
PyErr_SetString(PyExc_BufferError,
"memoryview: underlying buffer requires suboffsets");
return -1;
}
if (!REQ_STRIDES(flags)) {
if (!MV_C_CONTIGUOUS(baseflags)) {
PyErr_SetString(PyExc_BufferError,
"memoryview: underlying buffer is not C-contiguous");
return -1;
}
view->strides = NULL;
}
if (!REQ_SHAPE(flags)) {
        /* PyBUF_SIMPLE or PyBUF_WRITABLE: at this point the buffer is
           C-contiguous, so view->buf points to the start of the underlying
           memory block. */
if (view->format != NULL) {
/* PyBUF_SIMPLE|PyBUF_FORMAT and PyBUF_WRITABLE|PyBUF_FORMAT do
not make sense. */
PyErr_Format(PyExc_BufferError,
"ndarray: cannot cast to unsigned bytes if the format flag "
"is present");
return -1;
}
/* product(shape) * itemsize = len and calcsize(format) = itemsize
do _not_ hold from here on! */
view->ndim = 1;
view->shape = NULL;
}
view->obj = (PyObject *)self;
Py_INCREF(view->obj);
self->exports++;
return 0;
}
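/* Illustrative sketch (Python level): a memoryview re-exports its buffer
   through memory_getbuf(), and the exports counter keeps nested views safe:

       >>> m1 = memoryview(bytearray(b'abc'))
       >>> m2 = memoryview(m1)          # re-export; m1 now has one export
       >>> m1.release()                 # raises BufferError while m2 exists
       >>> m2.release(); m1.release()   # releasing in this order succeeds
*/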
static void
memory_releasebuf(PyMemoryViewObject *self, Py_buffer *view)
{
self->exports--;
return;
/* PyBuffer_Release() decrements view->obj after this function returns. */
}
/* Buffer methods */
static PyBufferProcs memory_as_buffer = {
(getbufferproc)memory_getbuf, /* bf_getbuffer */
(releasebufferproc)memory_releasebuf, /* bf_releasebuffer */
};
/****************************************************************************/
/* Optimized pack/unpack for all native format specifiers */
/****************************************************************************/
/*
Fix exceptions:
1) Include format string in the error message.
2) OverflowError -> ValueError.
3) The error message from PyNumber_Index() is not ideal.
*/
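/* Illustrative sketch (Python level) of the normalized errors, assuming a
   one-byte 'B' view:

       >>> m = memoryview(bytearray(1))
       >>> m[0] = 'x'      # TypeError: memoryview: invalid type for format 'B'
       >>> m[0] = 256      # ValueError: memoryview: invalid value for format 'B'
*/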
static int
type_error_int(const char *fmt)
{
PyErr_Format(PyExc_TypeError,
"memoryview: invalid type for format '%s'", fmt);
return -1;
}
static int
value_error_int(const char *fmt)
{
PyErr_Format(PyExc_ValueError,
"memoryview: invalid value for format '%s'", fmt);
return -1;
}
static int
fix_error_int(const char *fmt)
{
assert(PyErr_Occurred());
if (PyErr_ExceptionMatches(PyExc_TypeError)) {
PyErr_Clear();
return type_error_int(fmt);
}
else if (PyErr_ExceptionMatches(PyExc_OverflowError) ||
PyErr_ExceptionMatches(PyExc_ValueError)) {
PyErr_Clear();
return value_error_int(fmt);
}
return -1;
}
/* Accept integer objects or objects with an __index__() method. */
static long
pylong_as_ld(PyObject *item)
{
PyObject *tmp;
long ld;
tmp = PyNumber_Index(item);
if (tmp == NULL)
return -1;
ld = PyLong_AsLong(tmp);
Py_DECREF(tmp);
return ld;
}
static unsigned long
pylong_as_lu(PyObject *item)
{
PyObject *tmp;
unsigned long lu;
tmp = PyNumber_Index(item);
if (tmp == NULL)
return (unsigned long)-1;
lu = PyLong_AsUnsignedLong(tmp);
Py_DECREF(tmp);
return lu;
}
#ifdef HAVE_LONG_LONG
static PY_LONG_LONG
pylong_as_lld(PyObject *item)
{
PyObject *tmp;
PY_LONG_LONG lld;
tmp = PyNumber_Index(item);
if (tmp == NULL)
return -1;
lld = PyLong_AsLongLong(tmp);
Py_DECREF(tmp);
return lld;
}
static unsigned PY_LONG_LONG
pylong_as_llu(PyObject *item)
{
PyObject *tmp;
unsigned PY_LONG_LONG llu;
tmp = PyNumber_Index(item);
if (tmp == NULL)
return (unsigned PY_LONG_LONG)-1;
llu = PyLong_AsUnsignedLongLong(tmp);
Py_DECREF(tmp);
return llu;
}
#endif
static Py_ssize_t
pylong_as_zd(PyObject *item)
{
PyObject *tmp;
Py_ssize_t zd;
tmp = PyNumber_Index(item);
if (tmp == NULL)
return -1;
zd = PyLong_AsSsize_t(tmp);
Py_DECREF(tmp);
return zd;
}
static size_t
pylong_as_zu(PyObject *item)
{
PyObject *tmp;
size_t zu;
tmp = PyNumber_Index(item);
if (tmp == NULL)
return (size_t)-1;
zu = PyLong_AsSize_t(tmp);
Py_DECREF(tmp);
return zu;
}
/* Timings with the ndarray from _testbuffer.c indicate that using the
struct module is around 15x slower than the two functions below. */
#define UNPACK_SINGLE(dest, ptr, type) \
do { \
type x; \
memcpy((char *)&x, ptr, sizeof x); \
dest = x; \
} while (0)
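/* For comparison, a rough struct-based equivalent of the two functions
   below (hypothetical Python-level helpers, single native format char):

       import struct

       def unpack_single_py(buf, fmt):
           return struct.unpack_from('@' + fmt, buf, 0)[0]

       def pack_single_py(buf, value, fmt):
           struct.pack_into('@' + fmt, buf, 0, value)
*/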
/* Unpack a single item. 'fmt' can be any native format character in struct
module syntax. This function is very sensitive to small changes. With this
layout gcc automatically generates a fast jump table. */
Py_LOCAL_INLINE(PyObject *)
unpack_single(const char *ptr, const char *fmt)
{
unsigned PY_LONG_LONG llu;
unsigned long lu;
size_t zu;
PY_LONG_LONG lld;
long ld;
Py_ssize_t zd;
double d;
unsigned char uc;
void *p;
switch (fmt[0]) {
/* signed integers and fast path for 'B' */
case 'B': uc = *((unsigned char *)ptr); goto convert_uc;
case 'b': ld = *((signed char *)ptr); goto convert_ld;
case 'h': UNPACK_SINGLE(ld, ptr, short); goto convert_ld;
case 'i': UNPACK_SINGLE(ld, ptr, int); goto convert_ld;
case 'l': UNPACK_SINGLE(ld, ptr, long); goto convert_ld;
/* boolean */
#ifdef HAVE_C99_BOOL
case '?': UNPACK_SINGLE(ld, ptr, _Bool); goto convert_bool;
#else
case '?': UNPACK_SINGLE(ld, ptr, char); goto convert_bool;
#endif
/* unsigned integers */
case 'H': UNPACK_SINGLE(lu, ptr, unsigned short); goto convert_lu;
case 'I': UNPACK_SINGLE(lu, ptr, unsigned int); goto convert_lu;
case 'L': UNPACK_SINGLE(lu, ptr, unsigned long); goto convert_lu;
/* native 64-bit */
#ifdef HAVE_LONG_LONG
case 'q': UNPACK_SINGLE(lld, ptr, PY_LONG_LONG); goto convert_lld;
case 'Q': UNPACK_SINGLE(llu, ptr, unsigned PY_LONG_LONG); goto convert_llu;
#endif
/* ssize_t and size_t */
case 'n': UNPACK_SINGLE(zd, ptr, Py_ssize_t); goto convert_zd;
case 'N': UNPACK_SINGLE(zu, ptr, size_t); goto convert_zu;
/* floats */
case 'f': UNPACK_SINGLE(d, ptr, float); goto convert_double;
case 'd': UNPACK_SINGLE(d, ptr, double); goto convert_double;
/* bytes object */
case 'c': goto convert_bytes;
/* pointer */
case 'P': UNPACK_SINGLE(p, ptr, void *); goto convert_pointer;
/* default */
default: goto err_format;
}
convert_uc:
/* PyLong_FromUnsignedLong() is slower */
return PyLong_FromLong(uc);
convert_ld:
return PyLong_FromLong(ld);
convert_lu:
return PyLong_FromUnsignedLong(lu);
convert_lld:
return PyLong_FromLongLong(lld);
convert_llu:
return PyLong_FromUnsignedLongLong(llu);
convert_zd:
return PyLong_FromSsize_t(zd);
convert_zu:
return PyLong_FromSize_t(zu);
convert_double:
return PyFloat_FromDouble(d);
convert_bool:
return PyBool_FromLong(ld);
convert_bytes:
return PyBytes_FromStringAndSize(ptr, 1);
convert_pointer:
return PyLong_FromVoidPtr(p);
err_format:
PyErr_Format(PyExc_NotImplementedError,
"memoryview: format %s not supported", fmt);
return NULL;
}
#define PACK_SINGLE(ptr, src, type) \
do { \
type x; \
x = (type)src; \
memcpy(ptr, (char *)&x, sizeof x); \
} while (0)
/* Pack a single item. 'fmt' can be any native format character in
struct module syntax. */
static int
pack_single(char *ptr, PyObject *item, const char *fmt)
{
unsigned PY_LONG_LONG llu;
unsigned long lu;
size_t zu;
PY_LONG_LONG lld;
long ld;
Py_ssize_t zd;
double d;
void *p;
switch (fmt[0]) {
/* signed integers */
case 'b': case 'h': case 'i': case 'l':
ld = pylong_as_ld(item);
if (ld == -1 && PyErr_Occurred())
goto err_occurred;
switch (fmt[0]) {
case 'b':
if (ld < SCHAR_MIN || ld > SCHAR_MAX) goto err_range;
*((signed char *)ptr) = (signed char)ld; break;
case 'h':
if (ld < SHRT_MIN || ld > SHRT_MAX) goto err_range;
PACK_SINGLE(ptr, ld, short); break;
case 'i':
if (ld < INT_MIN || ld > INT_MAX) goto err_range;
PACK_SINGLE(ptr, ld, int); break;
default: /* 'l' */
PACK_SINGLE(ptr, ld, long); break;
}
break;
/* unsigned integers */
case 'B': case 'H': case 'I': case 'L':
lu = pylong_as_lu(item);
if (lu == (unsigned long)-1 && PyErr_Occurred())
goto err_occurred;
switch (fmt[0]) {
case 'B':
if (lu > UCHAR_MAX) goto err_range;
*((unsigned char *)ptr) = (unsigned char)lu; break;
case 'H':
if (lu > USHRT_MAX) goto err_range;
PACK_SINGLE(ptr, lu, unsigned short); break;
case 'I':
if (lu > UINT_MAX) goto err_range;
PACK_SINGLE(ptr, lu, unsigned int); break;
default: /* 'L' */
PACK_SINGLE(ptr, lu, unsigned long); break;
}
break;
/* native 64-bit */
#ifdef HAVE_LONG_LONG
case 'q':
lld = pylong_as_lld(item);
if (lld == -1 && PyErr_Occurred())
goto err_occurred;
PACK_SINGLE(ptr, lld, PY_LONG_LONG);
break;
case 'Q':
llu = pylong_as_llu(item);
if (llu == (unsigned PY_LONG_LONG)-1 && PyErr_Occurred())
goto err_occurred;
PACK_SINGLE(ptr, llu, unsigned PY_LONG_LONG);
break;
#endif
/* ssize_t and size_t */
case 'n':
zd = pylong_as_zd(item);
if (zd == -1 && PyErr_Occurred())
goto err_occurred;
PACK_SINGLE(ptr, zd, Py_ssize_t);
break;
case 'N':
zu = pylong_as_zu(item);
if (zu == (size_t)-1 && PyErr_Occurred())
goto err_occurred;
PACK_SINGLE(ptr, zu, size_t);
break;
/* floats */
case 'f': case 'd':
d = PyFloat_AsDouble(item);
if (d == -1.0 && PyErr_Occurred())
goto err_occurred;
if (fmt[0] == 'f') {
PACK_SINGLE(ptr, d, float);
}
else {
PACK_SINGLE(ptr, d, double);
}
break;
/* bool */
case '?':
ld = PyObject_IsTrue(item);
if (ld < 0)
return -1; /* preserve original error */
#ifdef HAVE_C99_BOOL
PACK_SINGLE(ptr, ld, _Bool);
#else
PACK_SINGLE(ptr, ld, char);
#endif
break;
/* bytes object */
case 'c':
if (!PyBytes_Check(item))
return type_error_int(fmt);
if (PyBytes_GET_SIZE(item) != 1)
return value_error_int(fmt);
*ptr = PyBytes_AS_STRING(item)[0];
break;
/* pointer */
case 'P':
p = PyLong_AsVoidPtr(item);
if (p == NULL && PyErr_Occurred())
goto err_occurred;
PACK_SINGLE(ptr, p, void *);
break;
/* default */
default: goto err_format;
}
return 0;
err_occurred:
return fix_error_int(fmt);
err_range:
return value_error_int(fmt);
err_format:
PyErr_Format(PyExc_NotImplementedError,
"memoryview: format %s not supported", fmt);
return -1;
}
/****************************************************************************/
/* Representations */
/****************************************************************************/
/* allow explicit form of native format */
Py_LOCAL_INLINE(const char *)
adjust_fmt(const Py_buffer *view)
{
const char *fmt;
fmt = (view->format[0] == '@') ? view->format+1 : view->format;
if (fmt[0] && fmt[1] == '\0')
return fmt;
PyErr_Format(PyExc_NotImplementedError,
"memoryview: unsupported format %s", view->format);
return NULL;
}
/* Base case for multi-dimensional unpacking. Assumption: ndim == 1. */
static PyObject *
tolist_base(const char *ptr, const Py_ssize_t *shape,
const Py_ssize_t *strides, const Py_ssize_t *suboffsets,
const char *fmt)
{
PyObject *lst, *item;
Py_ssize_t i;
lst = PyList_New(shape[0]);
if (lst == NULL)
return NULL;
for (i = 0; i < shape[0]; ptr+=strides[0], i++) {
const char *xptr = ADJUST_PTR(ptr, suboffsets);
item = unpack_single(xptr, fmt);
if (item == NULL) {
Py_DECREF(lst);
return NULL;
}
PyList_SET_ITEM(lst, i, item);
}
return lst;
}
/* Unpack a multi-dimensional array into a nested list.
Assumption: ndim >= 1. */
static PyObject *
tolist_rec(const char *ptr, Py_ssize_t ndim, const Py_ssize_t *shape,
const Py_ssize_t *strides, const Py_ssize_t *suboffsets,
const char *fmt)
{
PyObject *lst, *item;
Py_ssize_t i;
assert(ndim >= 1);
assert(shape != NULL);
assert(strides != NULL);
if (ndim == 1)
return tolist_base(ptr, shape, strides, suboffsets, fmt);
lst = PyList_New(shape[0]);
if (lst == NULL)
return NULL;
for (i = 0; i < shape[0]; ptr+=strides[0], i++) {
const char *xptr = ADJUST_PTR(ptr, suboffsets);
item = tolist_rec(xptr, ndim-1, shape+1,
strides+1, suboffsets ? suboffsets+1 : NULL,
fmt);
if (item == NULL) {
Py_DECREF(lst);
return NULL;
}
PyList_SET_ITEM(lst, i, item);
}
return lst;
}
/* Return a list representation of the memoryview. Currently only buffers
with native format strings are supported. */
static PyObject *
memory_tolist(PyMemoryViewObject *mv, PyObject *noargs)
{
const Py_buffer *view = &(mv->view);
const char *fmt;
CHECK_RELEASED(mv);
fmt = adjust_fmt(view);
if (fmt == NULL)
return NULL;
if (view->ndim == 0) {
return unpack_single(view->buf, fmt);
}
else if (view->ndim == 1) {
return tolist_base(view->buf, view->shape,
view->strides, view->suboffsets,
fmt);
}
else {
return tolist_rec(view->buf, view->ndim, view->shape,
view->strides, view->suboffsets,
fmt);
}
}
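/* Illustrative sketch (Python level):

       >>> memoryview(b'\x00\x01\x02\x03').cast('B', shape=[2, 2]).tolist()
       [[0, 1], [2, 3]]
       >>> from array import array
       >>> memoryview(array('d', [1.5, 2.5])).tolist()
       [1.5, 2.5]
*/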
static PyObject *
memory_tobytes(PyMemoryViewObject *self, PyObject *dummy)
{
Py_buffer *src = VIEW_ADDR(self);
PyObject *bytes = NULL;
CHECK_RELEASED(self);
if (MV_C_CONTIGUOUS(self->flags)) {
return PyBytes_FromStringAndSize(src->buf, src->len);
}
bytes = PyBytes_FromStringAndSize(NULL, src->len);
if (bytes == NULL)
return NULL;
if (buffer_to_c_contiguous(PyBytes_AS_STRING(bytes), src) < 0) {
Py_DECREF(bytes);
return NULL;
}
return bytes;
}
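/* tobytes() always returns a C-contiguous copy, also for strided views
   (cf. issue #12834).  Illustrative sketch (Python level):

       >>> memoryview(b'0123456789')[::2].tobytes()
       b'02468'
*/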
static PyObject *
memory_repr(PyMemoryViewObject *self)
{
if (self->flags & _Py_MEMORYVIEW_RELEASED)
return PyUnicode_FromFormat("<released memory at %p>", self);
else
return PyUnicode_FromFormat("<memory at %p>", self);
}
/**************************************************************************/
/* Indexing and slicing */
/**************************************************************************/
/* Get the pointer to the item at index. */
static char *
ptr_from_index(Py_buffer *view, Py_ssize_t index)
{
char *ptr;
Py_ssize_t nitems; /* items in the first dimension */
assert(view->shape);
assert(view->strides);
nitems = view->shape[0];
if (index < 0) {
index += nitems;
}
if (index < 0 || index >= nitems) {
PyErr_SetString(PyExc_IndexError, "index out of bounds");
return NULL;
}
ptr = (char *)view->buf;
ptr += view->strides[0] * index;
ptr = ADJUST_PTR(ptr, view->suboffsets);
return ptr;
}
/* Return the item at index. In a one-dimensional view, this is an object
with the type specified by view->format. Otherwise, the item is a sub-view.
The function is used in memory_subscript() and memory_as_sequence. */
static PyObject *
memory_item(PyMemoryViewObject *self, Py_ssize_t index)
{
Py_buffer *view = &(self->view);
const char *fmt;
CHECK_RELEASED(self);
fmt = adjust_fmt(view);
if (fmt == NULL)
return NULL;
if (view->ndim == 0) {
PyErr_SetString(PyExc_TypeError, "invalid indexing of 0-dim memory");
return NULL;
}
if (view->ndim == 1) {
char *ptr = ptr_from_index(view, index);
if (ptr == NULL)
return NULL;
return unpack_single(ptr, fmt);
}
PyErr_SetString(PyExc_NotImplementedError,
"multi-dimensional sub-views are not implemented");
return NULL;
}
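/* Indexing unpacks according to view->format instead of returning raw
   bytes.  Illustrative sketch (Python level); the itemsize of 'i' is
   platform-dependent:

       >>> from array import array
       >>> m = memoryview(array('i', [10, 20, 30]))
       >>> m[1], m[-1]
       (20, 30)
*/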
Py_LOCAL_INLINE(int)
init_slice(Py_buffer *base, PyObject *key, int dim)
{
Py_ssize_t start, stop, step, slicelength;
if (PySlice_GetIndicesEx(key, base->shape[dim],
&start, &stop, &step, &slicelength) < 0) {
return -1;
}
if (base->suboffsets == NULL || dim == 0) {
adjust_buf:
base->buf = (char *)base->buf + base->strides[dim] * start;
}
else {
Py_ssize_t n = dim-1;
while (n >= 0 && base->suboffsets[n] < 0)
n--;
if (n < 0)
goto adjust_buf; /* all suboffsets are negative */
base->suboffsets[n] = base->suboffsets[n] + base->strides[dim] * start;
}
base->shape[dim] = slicelength;
base->strides[dim] = base->strides[dim] * step;
return 0;
}
static int
is_multislice(PyObject *key)
{
Py_ssize_t size, i;
if (!PyTuple_Check(key))
return 0;
size = PyTuple_GET_SIZE(key);
if (size == 0)
return 0;
for (i = 0; i < size; i++) {
PyObject *x = PyTuple_GET_ITEM(key, i);
if (!PySlice_Check(x))
return 0;
}
return 1;
}
/* mv[obj] returns an object holding the data for one element if obj
   fully indexes the memoryview or another memoryview object if it
   does not.

   0-d memoryview objects can be referenced using mv[...] or mv[()]
   but not with anything else. */
static PyObject *
memory_subscript(PyMemoryViewObject *self, PyObject *key)
{
......@@ -611,247 +2030,567 @@ memory_subscript(PyMemoryViewObject *self, PyObject *key)
view = &(self->view);
CHECK_RELEASED(self);
if (view->ndim == 0) {
        if (PyTuple_Check(key) && PyTuple_GET_SIZE(key) == 0) {
            const char *fmt = adjust_fmt(view);
            if (fmt == NULL)
                return NULL;
            return unpack_single(view->buf, fmt);
        }
        else if (key == Py_Ellipsis) {
            Py_INCREF(self);
            return (PyObject *)self;
        }
        else {
            PyErr_SetString(PyExc_TypeError,
                "invalid indexing of 0-dim memory");
            return NULL;
        }
    }

    if (PyIndex_Check(key)) {
        Py_ssize_t index;
        index = PyNumber_AsSsize_t(key, PyExc_IndexError);
        if (index == -1 && PyErr_Occurred())
            return NULL;
        return memory_item(self, index);
    }
    else if (PySlice_Check(key)) {
        PyMemoryViewObject *sliced;

        sliced = (PyMemoryViewObject *)mbuf_add_view(self->mbuf, view);
        if (sliced == NULL)
            return NULL;

        if (init_slice(&sliced->view, key, 0) < 0) {
            Py_DECREF(sliced);
            return NULL;
        }
        init_len(&sliced->view);
        init_flags(sliced);

        return (PyObject *)sliced;
    }
    else if (is_multislice(key)) {
        PyErr_SetString(PyExc_NotImplementedError,
            "multi-dimensional slicing is not implemented");
        return NULL;
    }

    PyErr_SetString(PyExc_TypeError, "memoryview: invalid slice key");
    return NULL;
}
static int
memory_ass_sub(PyMemoryViewObject *self, PyObject *key, PyObject *value)
{
    Py_buffer *view = &(self->view);
    Py_buffer src;
    const char *fmt;
    char *ptr;

    CHECK_RELEASED_INT(self);

    fmt = adjust_fmt(view);
    if (fmt == NULL)
        return -1;

    if (view->readonly) {
        PyErr_SetString(PyExc_TypeError, "cannot modify read-only memory");
        return -1;
    }
    if (value == NULL) {
        PyErr_SetString(PyExc_TypeError, "cannot delete memory");
        return -1;
    }
    if (view->ndim == 0) {
        if (key == Py_Ellipsis ||
            (PyTuple_Check(key) && PyTuple_GET_SIZE(key)==0)) {
            ptr = (char *)view->buf;
            return pack_single(ptr, value, fmt);
        }
        else {
            PyErr_SetString(PyExc_TypeError,
                "invalid indexing of 0-dim memory");
            return -1;
        }
    }
    if (view->ndim != 1) {
        PyErr_SetString(PyExc_NotImplementedError,
            "memoryview assignments are currently restricted to ndim = 1");
        return -1;
    }
    if (PyIndex_Check(key)) {
        Py_ssize_t index = PyNumber_AsSsize_t(key, PyExc_IndexError);
        if (index == -1 && PyErr_Occurred())
            return -1;
        ptr = ptr_from_index(view, index);
        if (ptr == NULL)
            return -1;
        return pack_single(ptr, value, fmt);
    }
    /* one-dimensional: fast path */
    if (PySlice_Check(key) && view->ndim == 1) {
        Py_buffer dest; /* sliced view */
        Py_ssize_t arrays[3];
        int ret = -1;

        /* rvalue must be an exporter */
        if (PyObject_GetBuffer(value, &src, PyBUF_FULL_RO) < 0)
            return ret;

        dest = *view;
        dest.shape = &arrays[0]; dest.shape[0] = view->shape[0];
        dest.strides = &arrays[1]; dest.strides[0] = view->strides[0];
        if (view->suboffsets) {
            dest.suboffsets = &arrays[2]; dest.suboffsets[0] = view->suboffsets[0];
        }

        if (init_slice(&dest, key, 0) < 0)
            goto end_block;
        dest.len = dest.shape[0] * dest.itemsize;

        ret = copy_single(&dest, &src);

    end_block:
        PyBuffer_Release(&src);
        return ret;
    }
    else if (PySlice_Check(key) || is_multislice(key)) {
        /* Call memory_subscript() to produce a sliced lvalue, then copy
           rvalue into lvalue. This is already implemented in _testbuffer.c. */
        PyErr_SetString(PyExc_NotImplementedError,
            "memoryview slice assignments are currently restricted "
            "to ndim = 1");
        return -1;
    }

    PyErr_SetString(PyExc_TypeError, "memoryview: invalid slice key");
    return -1;
}
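/* Illustrative sketch (Python level) of the ndim == 1 assignment paths
   (single item via pack_single(), slice via copy_single()):

       >>> b = bytearray(b'hello')
       >>> m = memoryview(b)
       >>> m[0] = ord('j')
       >>> m[1:3] = b'aw'
       >>> b
       bytearray(b'jawlo')

   A slice assignment whose right-hand side has a different structure
   (e.g. a different length) raises ValueError. */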
static Py_ssize_t
memory_length(PyMemoryViewObject *self)
{
CHECK_RELEASED_INT(self);
return self->view.ndim == 0 ? 1 : self->view.shape[0];
}
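/* len(mv) is the length of the first dimension, not the byte length; the
   latter is exposed as mv.nbytes.  Illustrative sketch (Python level),
   assuming a 4-byte C int:

       >>> from array import array
       >>> m = memoryview(array('i', [1, 2, 3]))
       >>> len(m), m.nbytes, m.itemsize
       (3, 12, 4)
*/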
/* As mapping */
static PyMappingMethods memory_as_mapping = {
(lenfunc)memory_length, /* mp_length */
(binaryfunc)memory_subscript, /* mp_subscript */
(objobjargproc)memory_ass_sub, /* mp_ass_subscript */
};
/* As sequence */
static PySequenceMethods memory_as_sequence = {
0, /* sq_length */
0, /* sq_concat */
0, /* sq_repeat */
(ssizeargfunc)memory_item, /* sq_item */
};
/**************************************************************************/
/* Comparisons */
/**************************************************************************/
#define CMP_SINGLE(p, q, type) \
do { \
type x; \
type y; \
memcpy((char *)&x, p, sizeof x); \
memcpy((char *)&y, q, sizeof y); \
equal = (x == y); \
} while (0)
Py_LOCAL_INLINE(int)
unpack_cmp(const char *p, const char *q, const char *fmt)
{
int equal;
switch (fmt[0]) {
/* signed integers and fast path for 'B' */
case 'B': return *((unsigned char *)p) == *((unsigned char *)q);
case 'b': return *((signed char *)p) == *((signed char *)q);
case 'h': CMP_SINGLE(p, q, short); return equal;
case 'i': CMP_SINGLE(p, q, int); return equal;
case 'l': CMP_SINGLE(p, q, long); return equal;
/* boolean */
#ifdef HAVE_C99_BOOL
case '?': CMP_SINGLE(p, q, _Bool); return equal;
#else
case '?': CMP_SINGLE(p, q, char); return equal;
#endif
/* unsigned integers */
case 'H': CMP_SINGLE(p, q, unsigned short); return equal;
case 'I': CMP_SINGLE(p, q, unsigned int); return equal;
case 'L': CMP_SINGLE(p, q, unsigned long); return equal;
/* native 64-bit */
#ifdef HAVE_LONG_LONG
case 'q': CMP_SINGLE(p, q, PY_LONG_LONG); return equal;
case 'Q': CMP_SINGLE(p, q, unsigned PY_LONG_LONG); return equal;
#endif
/* ssize_t and size_t */
case 'n': CMP_SINGLE(p, q, Py_ssize_t); return equal;
case 'N': CMP_SINGLE(p, q, size_t); return equal;
/* floats */
/* XXX DBL_EPSILON? */
case 'f': CMP_SINGLE(p, q, float); return equal;
case 'd': CMP_SINGLE(p, q, double); return equal;
/* bytes object */
case 'c': return *p == *q;
/* pointer */
case 'P': CMP_SINGLE(p, q, void *); return equal;
/* Py_NotImplemented */
default: return -1;
}
}
/* Base case for recursive array comparisons. Assumption: ndim == 1. */
static int
cmp_base(const char *p, const char *q, const Py_ssize_t *shape,
const Py_ssize_t *pstrides, const Py_ssize_t *psuboffsets,
const Py_ssize_t *qstrides, const Py_ssize_t *qsuboffsets,
const char *fmt)
{
Py_ssize_t i;
int equal;
for (i = 0; i < shape[0]; p+=pstrides[0], q+=qstrides[0], i++) {
const char *xp = ADJUST_PTR(p, psuboffsets);
const char *xq = ADJUST_PTR(q, qsuboffsets);
equal = unpack_cmp(xp, xq, fmt);
if (equal <= 0)
return equal;
}
return 1;
}
/* Recursively compare two multi-dimensional arrays that have the same
logical structure. Assumption: ndim >= 1. */
static int
cmp_rec(const char *p, const char *q,
Py_ssize_t ndim, const Py_ssize_t *shape,
const Py_ssize_t *pstrides, const Py_ssize_t *psuboffsets,
const Py_ssize_t *qstrides, const Py_ssize_t *qsuboffsets,
const char *fmt)
{
Py_ssize_t i;
int equal;
assert(ndim >= 1);
assert(shape != NULL);
assert(pstrides != NULL);
assert(qstrides != NULL);
if (ndim == 1) {
return cmp_base(p, q, shape,
pstrides, psuboffsets,
qstrides, qsuboffsets,
fmt);
}
for (i = 0; i < shape[0]; p+=pstrides[0], q+=qstrides[0], i++) {
const char *xp = ADJUST_PTR(p, psuboffsets);
const char *xq = ADJUST_PTR(q, qsuboffsets);
equal = cmp_rec(xp, xq, ndim-1, shape+1,
pstrides+1, psuboffsets ? psuboffsets+1 : NULL,
qstrides+1, qsuboffsets ? qsuboffsets+1 : NULL,
fmt);
if (equal <= 0)
return equal;
}
return 1;
}
static PyObject *
memory_richcompare(PyObject *v, PyObject *w, int op)
{
    PyObject *res;
    Py_buffer wbuf, *vv, *ww = NULL;
    const char *vfmt, *wfmt;
    int equal = -1; /* Py_NotImplemented */

    if (op != Py_EQ && op != Py_NE)
        goto result; /* Py_NotImplemented */

    assert(PyMemoryView_Check(v));
    if (BASE_INACCESSIBLE(v)) {
        equal = (v == w);
        goto result;
    }
    vv = VIEW_ADDR(v);

    if (PyMemoryView_Check(w)) {
        if (BASE_INACCESSIBLE(w)) {
            equal = (v == w);
            goto result;
        }
        ww = VIEW_ADDR(w);
    }
    else {
        if (PyObject_GetBuffer(w, &wbuf, PyBUF_FULL_RO) < 0) {
            PyErr_Clear();
            goto result; /* Py_NotImplemented */
        }
        ww = &wbuf;
    }

    vfmt = adjust_fmt(vv);
    wfmt = adjust_fmt(ww);
    if (vfmt == NULL || wfmt == NULL) {
        PyErr_Clear();
        goto result; /* Py_NotImplemented */
    }

    if (cmp_structure(vv, ww) < 0) {
        PyErr_Clear();
        equal = 0;
        goto result;
    }

    if (vv->ndim == 0) {
        equal = unpack_cmp(vv->buf, ww->buf, vfmt);
    }
    else if (vv->ndim == 1) {
        equal = cmp_base(vv->buf, ww->buf, vv->shape,
                         vv->strides, vv->suboffsets,
                         ww->strides, ww->suboffsets,
                         vfmt);
    }
    else {
        equal = cmp_rec(vv->buf, ww->buf, vv->ndim, vv->shape,
                        vv->strides, vv->suboffsets,
                        ww->strides, ww->suboffsets,
                        vfmt);
    }

result:
    if (equal < 0)
        res = Py_NotImplemented;
    else if ((equal && op == Py_EQ) || (!equal && op == Py_NE))
        res = Py_True;
    else
        res = Py_False;

    if (ww == &wbuf)
        PyBuffer_Release(ww);
    Py_INCREF(res);
    return res;
}
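/* Equality is format-aware and handles non-contiguous as well as
   multi-dimensional views; unsupported operands fall back to
   Py_NotImplemented.  Illustrative sketch (Python level):

       >>> from array import array
       >>> memoryview(array('h', [1, 2])) == array('h', [1, 2])
       True
       >>> memoryview(b'abc')[::2] == b'ac'
       True
*/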
/**************************************************************************/
/* Hash */
/**************************************************************************/
static Py_hash_t
memory_hash(PyMemoryViewObject *self)
{
if (self->hash == -1) {
Py_buffer *view = &self->view;
char *mem = view->buf;
CHECK_RELEASED_INT(self);
if (!view->readonly) {
PyErr_SetString(PyExc_ValueError,
"cannot hash writable memoryview object");
return -1;
}
if (view->obj != NULL && PyObject_Hash(view->obj) == -1) {
/* Keep the original error message */
return -1;
}
if (!MV_C_CONTIGUOUS(self->flags)) {
mem = PyMem_Malloc(view->len);
if (mem == NULL) {
PyErr_NoMemory();
return -1;
}
if (buffer_to_c_contiguous(mem, view) < 0) {
PyMem_Free(mem);
return -1;
}
}
/* Can't fail */
self->hash = _Py_HashBytes((unsigned char *)mem, view->len);
if (mem != view->buf)
PyMem_Free(mem);
}
return self->hash;
}
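/* Only read-only views are hashable, and the hash is computed over the
   underlying bytes, so it matches the corresponding bytes object.
   Illustrative sketch (Python level):

       >>> hash(memoryview(b'abc')) == hash(b'abc')
       True
       >>> hash(memoryview(bytearray(b'abc')))   # raises ValueError (writable view)
*/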
/**************************************************************************/
/* getters */
/**************************************************************************/
static PyObject *
_IntTupleFromSsizet(int len, Py_ssize_t *vals)
{
int i;
PyObject *o;
PyObject *intTuple;
if (vals == NULL)
return PyTuple_New(0);
intTuple = PyTuple_New(len);
if (!intTuple)
return NULL;
for (i=0; i<len; i++) {
o = PyLong_FromSsize_t(vals[i]);
if (!o) {
Py_DECREF(intTuple);
return NULL;
}
PyTuple_SET_ITEM(intTuple, i, o);
}
return intTuple;
}
static PyObject *
memory_obj_get(PyMemoryViewObject *self)
{
Py_buffer *view = &self->view;
CHECK_RELEASED(self);
if (view->obj == NULL) {
Py_RETURN_NONE;
}
Py_INCREF(view->obj);
return view->obj;
}
static PyObject *
memory_nbytes_get(PyMemoryViewObject *self)
{
CHECK_RELEASED(self);
return PyLong_FromSsize_t(self->view.len);
}
static PyObject *
memory_format_get(PyMemoryViewObject *self)
{
CHECK_RELEASED(self);
return PyUnicode_FromString(self->view.format);
}
static PyObject *
memory_itemsize_get(PyMemoryViewObject *self)
{
CHECK_RELEASED(self);
return PyLong_FromSsize_t(self->view.itemsize);
}
static PyObject *
memory_shape_get(PyMemoryViewObject *self)
{
CHECK_RELEASED(self);
return _IntTupleFromSsizet(self->view.ndim, self->view.shape);
}
static PyObject *
memory_strides_get(PyMemoryViewObject *self)
{
CHECK_RELEASED(self);
return _IntTupleFromSsizet(self->view.ndim, self->view.strides);
}
static PyObject *
memory_suboffsets_get(PyMemoryViewObject *self)
{
CHECK_RELEASED(self);
return _IntTupleFromSsizet(self->view.ndim, self->view.suboffsets);
}
static PyObject *
memory_readonly_get(PyMemoryViewObject *self)
{
CHECK_RELEASED(self);
return PyBool_FromLong(self->view.readonly);
}
static PyObject *
memory_ndim_get(PyMemoryViewObject *self)
{
CHECK_RELEASED(self);
return PyLong_FromLong(self->view.ndim);
}
static PyObject *
memory_c_contiguous(PyMemoryViewObject *self, PyObject *dummy)
{
CHECK_RELEASED(self);
return PyBool_FromLong(MV_C_CONTIGUOUS(self->flags));
}
static PyObject *
memory_f_contiguous(PyMemoryViewObject *self, PyObject *dummy)
{
CHECK_RELEASED(self);
return PyBool_FromLong(MV_F_CONTIGUOUS(self->flags));
}
static PyObject *
memory_contiguous(PyMemoryViewObject *self, PyObject *dummy)
{
CHECK_RELEASED(self);
return PyBool_FromLong(MV_ANY_CONTIGUOUS(self->flags));
}
static PyGetSetDef memory_getsetlist[] = {
{"obj", (getter)memory_obj_get, NULL, NULL},
{"nbytes", (getter)memory_nbytes_get, NULL, NULL},
{"readonly", (getter)memory_readonly_get, NULL, NULL},
{"itemsize", (getter)memory_itemsize_get, NULL, NULL},
{"format", (getter)memory_format_get, NULL, NULL},
{"ndim", (getter)memory_ndim_get, NULL, NULL},
{"shape", (getter)memory_shape_get, NULL, NULL},
{"strides", (getter)memory_strides_get, NULL, NULL},
{"suboffsets", (getter)memory_suboffsets_get, NULL, NULL},
{"c_contiguous", (getter)memory_c_contiguous, NULL, NULL},
{"f_contiguous", (getter)memory_f_contiguous, NULL, NULL},
{"contiguous", (getter)memory_contiguous, NULL, NULL},
{NULL, NULL, NULL, NULL},
};
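/* Illustrative sketch (Python level) of the introspection attributes on a
   cast 2-D view:

       >>> m = memoryview(b'abcdef').cast('B', shape=[2, 3])
       >>> m.ndim, m.shape, m.strides, m.itemsize, m.nbytes
       (2, (2, 3), (3, 1), 1, 6)
       >>> m.c_contiguous, m.f_contiguous, m.contiguous
       (True, False, True)
*/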
static PyMethodDef memory_methods[] = {
{"release", (PyCFunction)memory_release, METH_NOARGS},
{"tobytes", (PyCFunction)memory_tobytes, METH_NOARGS, NULL},
{"tolist", (PyCFunction)memory_tolist, METH_NOARGS, NULL},
{"cast", (PyCFunction)memory_cast, METH_VARARGS|METH_KEYWORDS, NULL},
{"__enter__", memory_enter, METH_NOARGS},
{"__exit__", memory_exit, METH_VARARGS},
{NULL, NULL}
};
PyTypeObject PyMemoryView_Type = {
PyVarObject_HEAD_INIT(&PyType_Type, 0)
"memoryview",
sizeof(PyMemoryViewObject),
0,
"memoryview", /* tp_name */
offsetof(PyMemoryViewObject, ob_array), /* tp_basicsize */
sizeof(Py_ssize_t), /* tp_itemsize */
(destructor)memory_dealloc, /* tp_dealloc */
0, /* tp_print */
0, /* tp_getattr */
......
......@@ -1650,6 +1650,9 @@ _Py_ReadyTypes(void)
if (PyType_Ready(&PyProperty_Type) < 0)
Py_FatalError("Can't initialize property type");
if (PyType_Ready(&_PyManagedBuffer_Type) < 0)
Py_FatalError("Can't initialize managed buffer type");
if (PyType_Ready(&PyMemoryView_Type) < 0)
Py_FatalError("Can't initialize memoryview type");
......
<?xml version="1.0" encoding="Windows-1252"?>
<VisualStudioProject
ProjectType="Visual C++"
Version="9.00"
Name="_testbuffer"
ProjectGUID="{A2697BD3-28C1-4AEC-9106-8B748639FD16}"
RootNamespace="_testbuffer"
Keyword="Win32Proj"
TargetFrameworkVersion="196613"
>
<Platforms>
<Platform
Name="Win32"
/>
<Platform
Name="x64"
/>
</Platforms>
<ToolFiles>
</ToolFiles>
<Configurations>
<Configuration
Name="Debug|Win32"
ConfigurationType="2"
InheritedPropertySheets=".\pyd_d.vsprops"
CharacterSet="0"
>
<Tool
Name="VCPreBuildEventTool"
/>
<Tool
Name="VCCustomBuildTool"
/>
<Tool
Name="VCXMLDataGeneratorTool"
/>
<Tool
Name="VCWebServiceProxyGeneratorTool"
/>
<Tool
Name="VCMIDLTool"
/>
<Tool
Name="VCCLCompilerTool"
/>
<Tool
Name="VCManagedResourceCompilerTool"
/>
<Tool
Name="VCResourceCompilerTool"
/>
<Tool
Name="VCPreLinkEventTool"
/>
<Tool
Name="VCLinkerTool"
BaseAddress="0x1e1F0000"
/>
<Tool
Name="VCALinkTool"
/>
<Tool
Name="VCManifestTool"
/>
<Tool
Name="VCXDCMakeTool"
/>
<Tool
Name="VCBscMakeTool"
/>
<Tool
Name="VCFxCopTool"
/>
<Tool
Name="VCAppVerifierTool"
/>
<Tool
Name="VCPostBuildEventTool"
/>
</Configuration>
<Configuration
Name="Debug|x64"
ConfigurationType="2"
InheritedPropertySheets=".\pyd_d.vsprops;.\x64.vsprops"
CharacterSet="0"
>
<Tool
Name="VCPreBuildEventTool"
/>
<Tool
Name="VCCustomBuildTool"
/>
<Tool
Name="VCXMLDataGeneratorTool"
/>
<Tool
Name="VCWebServiceProxyGeneratorTool"
/>
<Tool
Name="VCMIDLTool"
TargetEnvironment="3"
/>
<Tool
Name="VCCLCompilerTool"
/>
<Tool
Name="VCManagedResourceCompilerTool"
/>
<Tool
Name="VCResourceCompilerTool"
/>
<Tool
Name="VCPreLinkEventTool"
/>
<Tool
Name="VCLinkerTool"
BaseAddress="0x1e1F0000"
/>
<Tool
Name="VCALinkTool"
/>
<Tool
Name="VCManifestTool"
/>
<Tool
Name="VCXDCMakeTool"
/>
<Tool
Name="VCBscMakeTool"
/>
<Tool
Name="VCFxCopTool"
/>
<Tool
Name="VCAppVerifierTool"
/>
<Tool
Name="VCPostBuildEventTool"
/>
</Configuration>
<Configuration
Name="Release|Win32"
ConfigurationType="2"
InheritedPropertySheets=".\pyd.vsprops"
CharacterSet="0"
WholeProgramOptimization="1"
>
<Tool
Name="VCPreBuildEventTool"
/>
<Tool
Name="VCCustomBuildTool"
/>
<Tool
Name="VCXMLDataGeneratorTool"
/>
<Tool
Name="VCWebServiceProxyGeneratorTool"
/>
<Tool
Name="VCMIDLTool"
/>
<Tool
Name="VCCLCompilerTool"
/>
<Tool
Name="VCManagedResourceCompilerTool"
/>
<Tool
Name="VCResourceCompilerTool"
/>
<Tool
Name="VCPreLinkEventTool"
/>
<Tool
Name="VCLinkerTool"
BaseAddress="0x1e1F0000"
/>
<Tool
Name="VCALinkTool"
/>
<Tool
Name="VCManifestTool"
/>
<Tool
Name="VCXDCMakeTool"
/>
<Tool
Name="VCBscMakeTool"
/>
<Tool
Name="VCFxCopTool"
/>
<Tool
Name="VCAppVerifierTool"
/>
<Tool
Name="VCPostBuildEventTool"
/>
</Configuration>
<Configuration
Name="Release|x64"
ConfigurationType="2"
InheritedPropertySheets=".\pyd.vsprops;.\x64.vsprops"
CharacterSet="0"
WholeProgramOptimization="1"
>
<Tool
Name="VCPreBuildEventTool"
/>
<Tool
Name="VCCustomBuildTool"
/>
<Tool
Name="VCXMLDataGeneratorTool"
/>
<Tool
Name="VCWebServiceProxyGeneratorTool"
/>
<Tool
Name="VCMIDLTool"
TargetEnvironment="3"
/>
<Tool
Name="VCCLCompilerTool"
/>
<Tool
Name="VCManagedResourceCompilerTool"
/>
<Tool
Name="VCResourceCompilerTool"
/>
<Tool
Name="VCPreLinkEventTool"
/>
<Tool
Name="VCLinkerTool"
BaseAddress="0x1e1F0000"
/>
<Tool
Name="VCALinkTool"
/>
<Tool
Name="VCManifestTool"
/>
<Tool
Name="VCXDCMakeTool"
/>
<Tool
Name="VCBscMakeTool"
/>
<Tool
Name="VCFxCopTool"
/>
<Tool
Name="VCAppVerifierTool"
/>
<Tool
Name="VCPostBuildEventTool"
/>
</Configuration>
<Configuration
Name="PGInstrument|Win32"
ConfigurationType="2"
InheritedPropertySheets=".\pyd.vsprops;.\pginstrument.vsprops"
CharacterSet="0"
WholeProgramOptimization="1"
>
<Tool
Name="VCPreBuildEventTool"
/>
<Tool
Name="VCCustomBuildTool"
/>
<Tool
Name="VCXMLDataGeneratorTool"
/>
<Tool
Name="VCWebServiceProxyGeneratorTool"
/>
<Tool
Name="VCMIDLTool"
/>
<Tool
Name="VCCLCompilerTool"
/>
<Tool
Name="VCManagedResourceCompilerTool"
/>
<Tool
Name="VCResourceCompilerTool"
/>
<Tool
Name="VCPreLinkEventTool"
/>
<Tool
Name="VCLinkerTool"
BaseAddress="0x1e1F0000"
/>
<Tool
Name="VCALinkTool"
/>
<Tool
Name="VCManifestTool"
/>
<Tool
Name="VCXDCMakeTool"
/>
<Tool
Name="VCBscMakeTool"
/>
<Tool
Name="VCFxCopTool"
/>
<Tool
Name="VCAppVerifierTool"
/>
<Tool
Name="VCPostBuildEventTool"
/>
</Configuration>
<Configuration
Name="PGInstrument|x64"
ConfigurationType="2"
InheritedPropertySheets=".\pyd.vsprops;.\x64.vsprops;.\pginstrument.vsprops"
CharacterSet="0"
WholeProgramOptimization="1"
>
<Tool
Name="VCPreBuildEventTool"
/>
<Tool
Name="VCCustomBuildTool"
/>
<Tool
Name="VCXMLDataGeneratorTool"
/>
<Tool
Name="VCWebServiceProxyGeneratorTool"
/>
<Tool
Name="VCMIDLTool"
TargetEnvironment="3"
/>
<Tool
Name="VCCLCompilerTool"
/>
<Tool
Name="VCManagedResourceCompilerTool"
/>
<Tool
Name="VCResourceCompilerTool"
/>
<Tool
Name="VCPreLinkEventTool"
/>
<Tool
Name="VCLinkerTool"
BaseAddress="0x1e1F0000"
TargetMachine="17"
/>
<Tool
Name="VCALinkTool"
/>
<Tool
Name="VCManifestTool"
/>
<Tool
Name="VCXDCMakeTool"
/>
<Tool
Name="VCBscMakeTool"
/>
<Tool
Name="VCFxCopTool"
/>
<Tool
Name="VCAppVerifierTool"
/>
<Tool
Name="VCPostBuildEventTool"
/>
</Configuration>
<Configuration
Name="PGUpdate|Win32"
ConfigurationType="2"
InheritedPropertySheets=".\pyd.vsprops;.\pgupdate.vsprops"
CharacterSet="0"
WholeProgramOptimization="1"
>
<Tool
Name="VCPreBuildEventTool"
/>
<Tool
Name="VCCustomBuildTool"
/>
<Tool
Name="VCXMLDataGeneratorTool"
/>
<Tool
Name="VCWebServiceProxyGeneratorTool"
/>
<Tool
Name="VCMIDLTool"
/>
<Tool
Name="VCCLCompilerTool"
/>
<Tool
Name="VCManagedResourceCompilerTool"
/>
<Tool
Name="VCResourceCompilerTool"
/>
<Tool
Name="VCPreLinkEventTool"
/>
<Tool
Name="VCLinkerTool"
BaseAddress="0x1e1F0000"
/>
<Tool
Name="VCALinkTool"
/>
<Tool
Name="VCManifestTool"
/>
<Tool
Name="VCXDCMakeTool"
/>
<Tool
Name="VCBscMakeTool"
/>
<Tool
Name="VCFxCopTool"
/>
<Tool
Name="VCAppVerifierTool"
/>
<Tool
Name="VCPostBuildEventTool"
/>
</Configuration>
<Configuration
Name="PGUpdate|x64"
ConfigurationType="2"
InheritedPropertySheets=".\pyd.vsprops;.\x64.vsprops;.\pgupdate.vsprops"
CharacterSet="0"
WholeProgramOptimization="1"
>
<Tool
Name="VCPreBuildEventTool"
/>
<Tool
Name="VCCustomBuildTool"
/>
<Tool
Name="VCXMLDataGeneratorTool"
/>
<Tool
Name="VCWebServiceProxyGeneratorTool"
/>
<Tool
Name="VCMIDLTool"
TargetEnvironment="3"
/>
<Tool
Name="VCCLCompilerTool"
/>
<Tool
Name="VCManagedResourceCompilerTool"
/>
<Tool
Name="VCResourceCompilerTool"
/>
<Tool
Name="VCPreLinkEventTool"
/>
<Tool
Name="VCLinkerTool"
BaseAddress="0x1e1F0000"
TargetMachine="17"
/>
<Tool
Name="VCALinkTool"
/>
<Tool
Name="VCManifestTool"
/>
<Tool
Name="VCXDCMakeTool"
/>
<Tool
Name="VCBscMakeTool"
/>
<Tool
Name="VCFxCopTool"
/>
<Tool
Name="VCAppVerifierTool"
/>
<Tool
Name="VCPostBuildEventTool"
/>
</Configuration>
</Configurations>
<References>
</References>
<Files>
<Filter
Name="Source Files"
>
<File
RelativePath="..\Modules\_testbuffer.c"
>
</File>
</Filter>
</Files>
<Globals>
</Globals>
</VisualStudioProject>
......@@ -142,6 +142,11 @@ Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "python3dll", "python3dll.vc
EndProject
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "xxlimited", "xxlimited.vcproj", "{F749B822-B489-4CA5-A3AD-CE078F5F338A}"
EndProject
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "_testbuffer", "_testbuffer.vcproj", "{A2697BD3-28C1-4AEC-9106-8B748639FD16}"
ProjectSection(ProjectDependencies) = postProject
{CF7AC3D1-E2DF-41D2-BEA6-1E2556CDEA26} = {CF7AC3D1-E2DF-41D2-BEA6-1E2556CDEA26}
EndProjectSection
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Win32 = Debug|Win32
......@@ -609,6 +614,22 @@ Global
{F749B822-B489-4CA5-A3AD-CE078F5F338A}.Release|Win32.Build.0 = Release|Win32
{F749B822-B489-4CA5-A3AD-CE078F5F338A}.Release|x64.ActiveCfg = Release|x64
{F749B822-B489-4CA5-A3AD-CE078F5F338A}.Release|x64.Build.0 = Release|x64
{A2697BD3-28C1-4AEC-9106-8B748639FD16}.Debug|Win32.ActiveCfg = Debug|Win32
{A2697BD3-28C1-4AEC-9106-8B748639FD16}.Debug|Win32.Build.0 = Debug|Win32
{A2697BD3-28C1-4AEC-9106-8B748639FD16}.Debug|x64.ActiveCfg = Debug|x64
{A2697BD3-28C1-4AEC-9106-8B748639FD16}.Debug|x64.Build.0 = Debug|x64
{A2697BD3-28C1-4AEC-9106-8B748639FD16}.PGInstrument|Win32.ActiveCfg = PGInstrument|Win32
{A2697BD3-28C1-4AEC-9106-8B748639FD16}.PGInstrument|Win32.Build.0 = PGInstrument|Win32
{A2697BD3-28C1-4AEC-9106-8B748639FD16}.PGInstrument|x64.ActiveCfg = PGInstrument|x64
{A2697BD3-28C1-4AEC-9106-8B748639FD16}.PGInstrument|x64.Build.0 = PGInstrument|x64
{A2697BD3-28C1-4AEC-9106-8B748639FD16}.PGUpdate|Win32.ActiveCfg = PGUpdate|Win32
{A2697BD3-28C1-4AEC-9106-8B748639FD16}.PGUpdate|Win32.Build.0 = PGUpdate|Win32
{A2697BD3-28C1-4AEC-9106-8B748639FD16}.PGUpdate|x64.ActiveCfg = PGUpdate|x64
{A2697BD3-28C1-4AEC-9106-8B748639FD16}.PGUpdate|x64.Build.0 = PGUpdate|x64
{A2697BD3-28C1-4AEC-9106-8B748639FD16}.Release|Win32.ActiveCfg = Release|Win32
{A2697BD3-28C1-4AEC-9106-8B748639FD16}.Release|Win32.Build.0 = Release|Win32
{A2697BD3-28C1-4AEC-9106-8B748639FD16}.Release|x64.ActiveCfg = Release|x64
{A2697BD3-28C1-4AEC-9106-8B748639FD16}.Release|x64.Build.0 = Release|x64
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
......
......@@ -92,6 +92,9 @@ _socket
_testcapi
tests of the Python C API, run via Lib/test/test_capi.py, and
implemented by module Modules/_testcapimodule.c
_testbuffer
buffer protocol tests, run via Lib/test/test_buffer.py, and
implemented by module Modules/_testbuffer.c
pyexpat
Python wrapper for accelerated XML parsing, which incorporates stable
code from the Expat project: http://sourceforge.net/projects/expat/
......
......@@ -530,6 +530,8 @@ class PyBuildExt(build_ext):
# Python C API test module
exts.append( Extension('_testcapi', ['_testcapimodule.c'],
depends=['testcapi_long.h']) )
# Python PEP-3118 (buffer protocol) test module
exts.append( Extension('_testbuffer', ['_testbuffer.c']) )
# profiler (_lsprof is for cProfile.py)
exts.append( Extension('_lsprof', ['_lsprof.c', 'rotatingtree.c']) )
# static Unicode character database
......