Commit 5e03b2ed authored by gabrieldemarmiesse

Some rewording.

parent fc37e45b
@@ -2,6 +2,7 @@ import numpy as np
DTYPE = np.intc

# It is possible to declare types in the function declaration.
def naive_convolve(int [:,:] f, int [:,:] g):
    if g.shape[0] % 2 != 1 or g.shape[1] % 2 != 1:
        raise ValueError("Only odd dimensions on filter supported")
...
@@ -6,7 +6,7 @@ Working with NumPy
integration described here. They are easier to use than the buffer syntax
below, have less overhead, and can be passed around without requiring the GIL.
They should be preferred to the syntax presented in this page.
See :ref:`Cython for NumPy users <numpy_tutorial>`.

You can use NumPy from Cython exactly the same as in regular Python, but by
doing so you are losing potentially high speedups because Cython has support
...
@@ -207,22 +207,22 @@ After building this and continuing my (very informal) benchmarks, I get:
So in the end, does adding types make the Cython code slower?

What happened is that most of the time in this code is spent on line 54::

    value += g[smid - s, tmid - t] * f[v, w]

So what made this line so much slower than in the pure Python version?

``g`` and ``f`` are still NumPy arrays, and therefore Python objects, and they
expect Python integers as indexes. Here we give them C integers. So every time
Cython reaches this line, it has to convert all the C integers to Python
integers. Since this line is executed very often, this outweighs the speed
benefits of the pure C loops that were created from the ``range()`` earlier.

Furthermore, ``g[smid - s, tmid - t] * f[v, w]`` returns a Python integer
and ``value`` is a C integer, so Cython has to do a type conversion again.
In the end those type conversions add up and make our convolution really
slow. But this problem can be solved easily by using memoryviews.
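For a rough idea of the fix (a minimal sketch with made-up names such as
``f_view`` and ``g_view``, not the tutorial's exact code), rebinding the arrays
to typed memoryviews lets that line run with pure C indexing::

    cdef int [:, :] f_view = f   # typed views of the NumPy arrays, no copy
    cdef int [:, :] g_view = g
    ...
    value += g_view[smid - s, tmid - t] * f_view[v, w]   # all C integers, no conversions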
Efficient indexing with memoryviews
===================================
@@ -243,6 +243,13 @@ the NumPy array isn't contiguous in memory.
They can be indexed by C integers, thus allowing fast access to the
NumPy array data.

Here is how to declare a memoryview of integers::

    cdef int [:] foo         # 1D memoryview
    cdef int [:, :] foo      # 2D memoryview
    cdef int [:, :, :] foo   # 3D memoryview
    ...                      # You get the idea.
No data is copied from the NumPy array to the memoryview in our example.
As the name implies, it is only a "view" of the memory. So we can use
``h`` for efficient indexing and then return ``h_np``
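A minimal sketch of what that binding looks like (the shape and values below
are purely illustrative, not the tutorial's actual dimensions)::

    import numpy as np

    h_np = np.zeros([4, 4], dtype=np.intc)  # the NumPy array that owns the data
    cdef int [:, :] h = h_np                # ``h`` is a view of ``h_np``, nothing is copied

    h[0, 0] = 5
    print(h_np[0, 0])                       # prints 5: writes through the view are visible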
@@ -266,8 +273,8 @@ Note the importance of this change.
We're now 290 times faster than an interpreted version of Python.

Memoryviews can be used with slices too, or even
with Python arrays. Check out the :ref:`memoryview page <memoryviews>` to
see what they can do for you.
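As a small sketch of both ideas (the names and values here are invented for
the example)::

    import numpy as np
    from array import array

    narr = np.arange(10, dtype=np.intc)
    cdef int [:] middle = narr[3:7]               # view of a NumPy slice, still no copy
    cdef int [:] from_py = array('i', range(5))   # memoryview over a Python array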
Tuning indexing further
========================
@@ -319,18 +326,20 @@ Declaring the NumPy arrays as contiguous
For extra speed gains, if you know that the NumPy arrays you are
providing are contiguous in memory, you can declare the
memoryview as contiguous.

We give an example with an array that has 3 dimensions.
If you want to give Cython the information that the data is C-contiguous,
you have to declare the memoryview like this::

    cdef int [:,:,::1] a

If you want to give Cython the information that the data is Fortran-contiguous,
you have to declare the memoryview like this::

    cdef int [::1, :, :] a
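As a hedged illustration (the array below is invented for this example): a
freshly allocated NumPy array is C-contiguous and can be bound to such a view
directly, while a Fortran-ordered copy matches the second form::

    import numpy as np

    arr = np.zeros((4, 4, 4), dtype=np.intc)              # np.zeros returns a C-contiguous array
    cdef int [:, :, ::1] c_view = arr                     # accepted: the layout matches the declaration
    cdef int [::1, :, :] f_view = np.asfortranarray(arr)  # Fortran-ordered copy for the second form
    # cdef int [:, :, ::1] bad = arr[:, ::2, :]           # non-contiguous data would raise a ValueError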
If all this makes no sense to you, you can skip this part; the performance gains are
not that important. If you still want to understand what contiguous arrays are
all about, you can see `this answer on StackOverflow
<https://stackoverflow.com/questions/26998223/what-is-the-difference-between-contiguous-and-non-contiguous-arrays>`_.
@@ -354,7 +363,7 @@ the ``infer_types=True`` compiler directive at the top of the file.
It will save you quite a bit of typing.

Note that since type declarations must happen at the top indentation level,
Cython won't infer the type of variables declared for the first time
at other indentation levels. That would change the meaning of our code
too much. This is why we must still manually declare the type of the
``value`` variable.
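A minimal sketch of the directive in use (the function and variable names are
illustrative, not taken from the tutorial)::

    # cython: infer_types=True

    def compute(int n):
        x = 0.5 * n         # inferred as a C double
        i = n - 1           # inferred as a C integer
        cdef int value = 0  # like the tutorial's ``value``, declared manually
        for k in range(i):
            value += k
        return x + value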
...