Commit 3f788f04 authored by Yury V. Zaytsev

Introduce more consistent capitalization of Python, Cython and NumPy

parent 2328ce75
@@ -19,7 +19,7 @@ Fortran
=======
=====
-Numpy
+NumPy
=====
......
@@ -239,7 +239,7 @@ Parameters
* New references are returned
.. todo::
-link or label here the one ref count caveat for numpy.
+link or label here the one ref count caveat for NumPy.
* The name ``object`` can be used to explicitly declare something as a Python Object.
......
@@ -8,7 +8,7 @@ Profiling
This part describes the profiling abilities of Cython. If you are familiar
with profiling pure Python code, you can only read the first section
-(:ref:`profiling_basics`). If you are not familiar with python profiling you
+(:ref:`profiling_basics`). If you are not familiar with Python profiling you
should also read the tutorial (:ref:`profiling_tutorial`) which takes you
through a complete example step by step.
@@ -58,7 +58,7 @@ function only::
Profiling Tutorial
==================
-This will be a complete tutorial, start to finish, of profiling python code,
+This will be a complete tutorial, start to finish, of profiling Python code,
turning it into Cython code and continuing to profile until it is fast enough.
As a toy example, we would like to evaluate the summation of the reciprocals of
@@ -73,7 +73,7 @@ relation we want to use has been proven by Euler in 1735 and is known as the
\frac{1}{2^2} + \dots + \frac{1}{k^2} \big) \approx
6 \big( \frac{1}{1^2} + \frac{1}{2^2} + \dots + \frac{1}{n^2} \big)
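As a quick numeric sanity check of the truncated relation above (an editor's sketch, not part of the original tutorial), the sum can be evaluated directly in plain Python:

```python
import math

# Truncated Basel sum: sqrt(6 * sum_{k=1..n} 1/k^2) approaches pi as n grows.
n = 100000
partial_sum = sum(1.0 / k ** 2 for k in range(1, n + 1))
approx = math.sqrt(6 * partial_sum)

print(approx)  # close to math.pi for large n
```

The error of the truncated sum shrinks roughly like ``1/n``, so ``n = 100000`` already agrees with pi to several decimal places.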
-A simple python code for evaluating the truncated sum looks like this::
+A simple Python code for evaluating the truncated sum looks like this::
#!/usr/bin/env python
# encoding: utf-8
@@ -90,7 +90,7 @@ A simple Python code for evaluating the truncated sum looks like this::
On my box, this needs approximately 4 seconds to run the function with the
default n. The higher we choose n, the better will be the approximation for
-:math:`\pi`. An experienced python programmer will already see plenty of
+:math:`\pi`. An experienced Python programmer will already see plenty of
places to optimize this code. But remember the golden rule of optimization:
Never optimize without having profiled. Let me repeat this: **Never** optimize
without having profiled your code. Your thoughts about which part of your
@@ -130,7 +130,7 @@ Running this on my box gives the following output::
This contains the information that the code runs in 6.2 CPU seconds. Note that
the code got slower by 2 seconds because it ran inside the cProfile module. The
table contains the real valuable information. You might want to check the
-python `profiling documentation <http://docs.python.org/library/profile.html>`_
+Python `profiling documentation <http://docs.python.org/library/profile.html>`_
for the nitty-gritty details. The most important columns here are tottime (total
time spent in this function **not** counting functions that were called by this
function) and cumtime (total time spent in this function **also** counting the
@@ -140,7 +140,7 @@ in recip_square. Also half a second is spent in range ... of course we should
have used xrange for such a big iteration. And in fact, just changing range to
xrange makes the code run in 5.8 seconds.
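To see the tottime/cumtime columns in practice, here is a minimal self-contained sketch of driving cProfile programmatically (an editor's illustration in Python 3 syntax, unlike the tutorial's Python 2 snippets; the function names are borrowed from the tutorial):

```python
import cProfile
import io
import pstats

def recip_square(i):
    return 1.0 / i ** 2

def approx_pi(n=100000):
    val = 0.0
    for k in range(1, n + 1):
        val += recip_square(k)
    return (6 * val) ** 0.5

# runcall profiles a single call; pstats renders the tottime/cumtime table.
profiler = cProfile.Profile()
pi_estimate = profiler.runcall(approx_pi)

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("tottime").print_stats()
report = stream.getvalue()
print(report)
```

Sorting by ``tottime`` puts the function that burns the most time in its own body, here ``recip_square``, at the top of the table.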
-We could optimize a lot in the pure python version, but since we are interested
+We could optimize a lot in the pure Python version, but since we are interested
in Cython, let's move forward and bring this module to Cython. We would do this
anyway at some time to get the loop run faster. Here is our first Cython version::
@@ -294,7 +294,7 @@ approx_pi with a call to sqrt from the C stdlib; but this is not necessarily
faster than calling pow(x,0.5).
Even so, the result we achieved here is quite satisfactory: we came up with a
-solution that is much faster then our original python version while retaining
+solution that is much faster than our original Python version while retaining
functionality and readability.
@@ -99,9 +99,9 @@ would be interpreted as::
cpdef foo(self, int i):
print "Big" if i > 1000 else "Small"
-The special cython module can also be imported and used within the augmenting
-:file:`.pxd` file. This makes it possible to add types to a pure python file without
-changing the file itself. For example, the following python file
+The special Cython module can also be imported and used within the augmenting
+:file:`.pxd` file. This makes it possible to add types to a pure Python file without
+changing the file itself. For example, the following Python file
:file:`dostuff.py`::
def dostuff(n):
@@ -128,14 +128,14 @@ signature, for instance.
Types
-----
-There are numerous types built in to the cython module. One has all the
+There are numerous types built into the Cython module. One has all the
standard C types, namely ``char``, ``short``, ``int``, ``long``, ``longlong``
as well as their unsigned versions ``uchar``, ``ushort``, ``uint``, ``ulong``,
``ulonglong``. One also has ``bint`` and ``Py_ssize_t``. For each type, there
are pointer types ``p_int``, ``pp_int``, . . ., up to three levels deep in
interpreted mode, and infinitely deep in compiled mode. The Python types int,
long and bool are interpreted as C ``int``, ``long`` and ``bint``
-respectively. Also, the python types ``list``, ``dict``, ``tuple``, . . . may
+respectively. Also, the Python types ``list``, ``dict``, ``tuple``, . . . may
be used, as well as any user defined types.
Pointer types may be constructed with ``cython.pointer(cython.int)``, and
......
@@ -12,7 +12,7 @@ Memoryviews are similar to the current NumPy array buffer support
(``np.ndarray[np.float64_t, ndim=2]``), but
they have more features and cleaner syntax.
-Memoryviews are more general than the old numpy aray buffer support, because
+Memoryviews are more general than the old NumPy array buffer support, because
they can handle a wider variety of sources of array data. For example, they can
handle C arrays and the Cython array type (:ref:`view_cython_arrays`).
@@ -43,7 +43,7 @@ Quickstart
cdef int [:, :, :] cyarr_view = cyarr
# Show the sum of all the arrays before altering it
-print "Numpy sum of the Numpy array before assignments:", narr.sum()
+print "NumPy sum of the NumPy array before assignments:", narr.sum()
# We can copy the values from one memoryview into another using a single
# statement, by either indexing with ... or (NumPy-style) with a colon.
@@ -56,8 +56,8 @@ Quickstart
carr_view[0, 0, 0] = 100
cyarr_view[0, 0, 0] = 1000
-# Assigning into the memoryview on the Numpy array alters the latter
-print "Numpy sum of Numpy array after assignments:", narr.sum()
+# Assigning into the memoryview on the NumPy array alters the latter
+print "NumPy sum of NumPy array after assignments:", narr.sum()
# A function using a memoryview does not usually need the GIL
cpdef int sum3d(int[:, :, :] arr) nogil:
@@ -71,9 +71,9 @@ Quickstart
total += arr[i, j, k]
return total
-# A function accepting a memoryview knows how to use a Numpy array,
+# A function accepting a memoryview knows how to use a NumPy array,
# a C array, a Cython array...
-print "Memoryview sum of Numpy array is", sum3d(narr)
+print "Memoryview sum of NumPy array is", sum3d(narr)
print "Memoryview sum of C array is", sum3d(carr)
print "Memoryview sum of Cython array is", sum3d(cyarr)
# ... and of course, a memoryview.
@@ -81,9 +81,9 @@ Quickstart
This code should give the following output::
-Numpy sum of the Numpy array before assignments: 351
-Numpy sum of Numpy array after assignments: 81
-Memoryview sum of Numpy array is 81
+NumPy sum of the NumPy array before assignments: 351
+NumPy sum of NumPy array after assignments: 81
+Memoryview sum of NumPy array is 81
Memoryview sum of C array is 451
Memoryview sum of Cython array is 1351
Memoryview sum of C memoryview is 451
@@ -95,7 +95,7 @@ Indexing and Slicing
--------------------
Indexing and slicing can be done with or without the GIL. It basically works
-like Numpy. If indices are specified for every dimension you will get an element
+like NumPy. If indices are specified for every dimension you will get an element
of the base type (e.g. `int`), otherwise you will get a new view. An Ellipsis
means you get consecutive slices for every unspecified dimension::
@@ -130,7 +130,7 @@ Transposing
-----------
In most cases (see below), the memoryview can be transposed in the same way that
-Numpy slices can be transposed::
+NumPy slices can be transposed::
cdef int[:, ::1] c_contig = ...
cdef int[::1, :] f_contig = c_contig.T
@@ -144,7 +144,7 @@ See :ref:`view_general_layouts` for details.
Newaxis
-------
-As for Numpy, new axes can be introduced by indexing an array with ``None`` ::
+As for NumPy, new axes can be introduced by indexing an array with ``None`` ::
cdef double[:] myslice = np.linspace(0, 10, num=50)
@@ -201,7 +201,7 @@ Python buffer support
Cython memoryviews support nearly all objects exporting the interface of Python
`new style buffers`_. This is the buffer interface described in `PEP 3118`_.
-Numpy arrays support this interface, as do :ref:`view_cython_arrays`. The
+NumPy arrays support this interface, as do :ref:`view_cython_arrays`. The
"nearly all" is because the Python buffer interface allows the *elements* in the
data array to themselves be pointers; Cython memoryviews do not yet support
this.
@@ -225,7 +225,7 @@ means either direct (no pointer) or indirect (pointer). Data packing means your
data may be contiguous or not contiguous in memory, and may use *strides* to
identify the jumps in memory consecutive indices need to take for each dimension.
-Numpy arrays provide a good model of strided direct data access, so we'll use
+NumPy arrays provide a good model of strided direct data access, so we'll use
them for a refresher on the concepts of C and Fortran contiguous arrays, and
data strides.
@@ -233,10 +233,10 @@ Brief recap on C, Fortran and strided memory layouts
----------------------------------------------------
The simplest data layout might be a C contiguous array. This is the default
-layout in Numpy and Cython arrays. C contiguous means that the array data is
+layout in NumPy and Cython arrays. C contiguous means that the array data is
continuous in memory (see below) and that neighboring elements in the first
dimension of the array are furthest apart in memory, whereas neighboring
-elements in the last dimension are closest together. For example, in Numpy::
+elements in the last dimension are closest together. For example, in NumPy::
In [2]: arr = np.array([['0', '1', '2'], ['3', '4', '5']], dtype='S1')
@@ -275,7 +275,7 @@ An array can be contiguous without being C or Fortran order::
In [10]: c_contig.transpose((1, 0, 2)).strides
Out[10]: (4, 12, 1)
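The stride values shown in these sessions can be reproduced by hand. In the sketch below, ``c_strides`` is a helper name introduced here for illustration (not a Cython or NumPy API): for a C contiguous array, the last dimension strides by the itemsize and each earlier dimension strides by the product of the later extents and the itemsize.

```python
# Compute C-contiguous strides for a given shape and element size.
def c_strides(shape, itemsize):
    strides = []
    acc = itemsize
    for extent in reversed(shape):
        strides.append(acc)
        acc *= extent
    return tuple(reversed(strides))

# The 2x3 array of one-byte elements ('S1') from the example above:
print(c_strides((2, 3), 1))     # (3, 1)
print(c_strides((2, 3, 4), 1))  # (12, 4, 1)
```

Permuting a ``(12, 4, 1)`` stride tuple with axis order ``(1, 0, 2)`` gives exactly the ``(4, 12, 1)`` shown for the transposed array above.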
-Slicing an Numpy array can easily make it not contiguous::
+Slicing a NumPy array can easily make it not contiguous::
In [11]: sliced = c_contig[:,1,:]
In [12]: sliced.strides
@@ -309,7 +309,7 @@ memoryview will be on top of a 3D C contiguous layout, you could write::
cdef int[:, :, ::1] c_contiguous = c_contig
-where ``c_contig`` could be a C contiguous Numpy array. The ``::1`` at the 3rd
+where ``c_contig`` could be a C contiguous NumPy array. The ``::1`` at the 3rd
position means that the elements in this 3rd dimension will be one element apart
in memory. If you know you will have a 3D Fortran contiguous array::
......
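What the trailing ``::1`` guarantees can be spelled out as a small check. In this editor's sketch, ``is_c_contiguous`` is a hypothetical helper written for illustration (not part of Cython): the last dimension must be packed at ``itemsize`` steps, and each earlier stride must equal the product of the later extents and the itemsize.

```python
# Verify that (shape, strides, itemsize) describe a C contiguous buffer,
# i.e. the layout a cdef int[:, :, ::1] declaration would accept.
def is_c_contiguous(shape, strides, itemsize):
    expected = itemsize
    for extent, stride in zip(reversed(shape), reversed(strides)):
        if extent > 1 and stride != expected:
            return False
        expected *= extent
    return True

print(is_c_contiguous((2, 3, 4), (12, 4, 1), 1))  # True
print(is_c_contiguous((3, 2), (1, 3), 1))         # False: Fortran order
```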