cython · Commits
Commit 7a4fe204, authored Jun 15, 2018 by gabrieldemarmiesse
Moved the second piece of code in numpy.rst to the examples directory for testing.
Parent: ff577a2b
Showing 2 changed files with 76 additions and 70 deletions:

  docs/examples/tutorial/numpy/convolve2.pyx   +74  -0
  docs/src/tutorial/numpy.rst                  +2   -70
docs/examples/tutorial/numpy/convolve2.pyx (new file, mode 100644)
# tag: numpy
# You can ignore the previous line.
# It's for internal testing of the cython documentation.

from __future__ import division
import numpy as np
# "cimport" is used to import special compile-time information
# about the numpy module (this is stored in a file numpy.pxd which is
# currently part of the Cython distribution).
cimport numpy as np
# We now need to fix a datatype for our arrays. I've used the variable
# DTYPE for this, which is assigned to the usual NumPy runtime
# type info object.
DTYPE = np.int
# "ctypedef" assigns a corresponding compile-time type to DTYPE_t. For
# every type in the numpy module there's a corresponding compile-time
# type with a _t-suffix.
ctypedef np.int_t DTYPE_t
# "def" can type its arguments but not have a return type. The type of the
# arguments for a "def" function is checked at run-time when entering the
# function.
#
# The arrays f, g and h is typed as "np.ndarray" instances. The only effect
# this has is to a) insert checks that the function arguments really are
# NumPy arrays, and b) make some attribute access like f.shape[0] much
# more efficient. (In this example this doesn't matter though.)
def naive_convolve(np.ndarray f, np.ndarray g):
    if g.shape[0] % 2 != 1 or g.shape[1] % 2 != 1:
        raise ValueError("Only odd dimensions on filter supported")
    assert f.dtype == DTYPE and g.dtype == DTYPE
    # The "cdef" keyword is also used within functions to type variables. It
    # can only be used at the top indentation level (there are non-trivial
    # problems with allowing them in other places, though we'd love to see
    # good and thought out proposals for it).
    #
    # For the indices, the "int" type is used. This corresponds to a C int,
    # other C types (like "unsigned int") could have been used instead.
    # Purists could use "Py_ssize_t" which is the proper Python type for
    # array indices.
    cdef int vmax = f.shape[0]
    cdef int wmax = f.shape[1]
    cdef int smax = g.shape[0]
    cdef int tmax = g.shape[1]
    cdef int smid = smax // 2
    cdef int tmid = tmax // 2
    cdef int xmax = vmax + 2*smid
    cdef int ymax = wmax + 2*tmid
    cdef np.ndarray h = np.zeros([xmax, ymax], dtype=DTYPE)
    cdef int x, y, s, t, v, w
    # It is very important to type ALL your variables. You do not get any
    # warnings if not, only much slower code (they are implicitly typed as
    # Python objects).
    cdef int s_from, s_to, t_from, t_to
    # For the value variable, we want to use the same data type as is
    # stored in the array, so we use "DTYPE_t" as defined above.
    # NB! An important side-effect of this is that if "value" overflows its
    # datatype size, it will simply wrap around like in C, rather than raise
    # an error like in Python.
    cdef DTYPE_t value
    for x in range(xmax):
        for y in range(ymax):
            s_from = max(smid - x, -smid)
            s_to = min((xmax - x) - smid, smid + 1)
            t_from = max(tmid - y, -tmid)
            t_to = min((ymax - y) - tmid, tmid + 1)
            value = 0
            for s in range(s_from, s_to):
                for t in range(t_from, t_to):
                    v = x - smid + s
                    w = y - tmid + t
                    value += g[smid - s, tmid - t] * f[v, w]
            h[x, y] = value
    return h
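
For context only (not part of the commit): below is a minimal sketch of how this example module might be compiled on the fly and exercised, assuming pyximport is available and convolve2.pyx sits on the import path. The test arrays, their dtype (np.int_, chosen so the assert against DTYPE passes), and the printed shape are illustrative assumptions; note also that DTYPE = np.int only works with NumPy versions that still provide the np.int alias (it was removed in NumPy 1.24).

# Hypothetical quick check, not taken from this commit: compile convolve2.pyx
# with pyximport and run the convolution on small test arrays.
import numpy as np
import pyximport

# NumPy's include path is needed because the module does "cimport numpy".
pyximport.install(setup_args={"include_dirs": [np.get_include()]})

import convolve2  # builds convolve2.pyx if it is found on sys.path

f = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]], dtype=np.int_)   # "image"
g = np.array([[0, 1, 0],
              [1, 1, 1],
              [0, 1, 0]], dtype=np.int_)   # odd-sized filter

h = convolve2.naive_convolve(f, g)
print(h.shape)  # (5, 5)

The (5, 5) result reflects the full convolution: the output extends beyond the input by the filter half-width on each side, which is exactly what the xmax = vmax + 2*smid and ymax = wmax + 2*tmid lines above compute.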
docs/src/tutorial/numpy.rst
...
@@ -105,77 +105,9 @@ Adding types
=============

To add types we use custom Cython syntax, so we are now breaking Python source
compatibility. Consider this code (*read the comments!*) :

.. literalinclude:: ../../examples/tutorial/numpy/convolve2.pyx

(This commit removes the code listing that was previously inlined at this point
in numpy.rst, identical to docs/examples/tutorial/numpy/convolve2.pyx above, and
replaces it with the literalinclude directive shown here.)

After building this and continuing my (very informal) benchmarks, I get:
...
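
The benchmark output is truncated in the page capture. For completeness, building an example like this outside the documentation test suite is typically done with a small setup.py; the script below is a hypothetical sketch (module name and file path are assumptions), not something contained in this commit.

# setup.py -- hypothetical minimal build script for the convolve2.pyx example.
# Run with:  python setup.py build_ext --inplace
from setuptools import setup, Extension
from Cython.Build import cythonize
import numpy as np

extensions = [
    Extension(
        "convolve2",
        sources=["convolve2.pyx"],
        include_dirs=[np.get_include()],  # NumPy headers, needed for "cimport numpy"
    )
]

setup(
    name="convolve2",
    ext_modules=cythonize(extensions),
)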