Commit d57caed5 authored by Andrew MacIntyre

Performance of list([]) in 2.3 came up in a thread on comp.lang.python,
which can be reviewed via
http://coding.derkeiler.com/Archive/Python/comp.lang.python/2003-12/1011.html

Duncan Booth investigated, and discovered that an "optimisation" was
in fact a pessimisation for small numbers of elements in a source list,
compared to not having the optimisation, although with large numbers
of elements in the source list the optimisation was quite beneficial.

He posted his change to comp.lang.python (but not to SF).

Further research has confirmed his assessment that the optimisation only
becomes a net win when the source list has more than 100 elements.

I also found that the optimisation could apply to tuples as well, but
the gains only arrive with source tuples larger than about 320 elements
and are nowhere near as significant as the gains with lists (~95% gain
@ 10000 elements for lists vs ~20% gain @ 10000 elements for tuples),
so I haven't proceeded with this.
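For context, measurements like those above can be reproduced with a
quick timeit sketch along these lines (a minimal illustration, not the
script actually used in the thread; the sizes and repeat counts here
are arbitrary choices):

    import timeit

    # Time list(src) for list and tuple sources of increasing size.
    # The crossover near 100 elements for list sources is what motivates
    # the threshold below; tuple sources only start to benefit near 320.
    for n in (10, 100, 320, 1000, 10000):
        t_list = timeit.Timer('list(src)',
                              'src = list(range(%d))' % n).timeit(number=1000)
        t_tuple = timeit.Timer('list(src)',
                               'src = tuple(range(%d))' % n).timeit(number=1000)
        print('n=%6d  from list: %.4fs  from tuple: %.4fs'
              % (n, t_list, t_tuple))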

The code as it stood applied the optimisation to list subclasses as
well, and this also appears to be a net loss for all reasonably sized
sources (~80-100% slower for up to 100 elements, ~20% slower for more
than 500 elements; I tested up to 10000 elements).

Duncan also suggested special casing empty lists, which I've extended
to all empty sequences.
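The empty-sequence fast path is purely internal; at the Python level
every empty source still produces a fresh empty list (a quick sanity
check, with MyList as an arbitrary example subclass):

    class MyList(list):
        pass

    # An empty exact list, an empty tuple and an empty list subclass
    # should all take the "nothing to copy" short-circuit in list_fill().
    for src in ([], (), MyList()):
        dst = list(src)
        assert dst == [] and dst is not src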

On the basis that list_fill() is only ever called with a list for the
result argument, the test for the source being the destination now
happens before the source type tests; since the result is always a
list, v can only be the destination when it is itself a list, so the
earlier test is safe.
parent 4d046393
--- a/Misc/ACKS
+++ b/Misc/ACKS
@@ -62,6 +62,7 @@ Finn Bock
 Paul Boddie
 Matthew Boedicker
 David Bolen
+Duncan Booth
 Jurjen Bos
 Peter Bosch
 Eric Bouck
--- a/Misc/NEWS
+++ b/Misc/NEWS
@@ -118,6 +118,14 @@ Core and builtins
   same as split() except that it scans the string from the end
   working towards the beginning.  See SF feature request 801847.
 
+- In a thread on comp.lang.python, several people noted that list()
+  was much slower than in 2.1 and earlier versions of Python when used
+  to create new lists from existing lists.  Duncan Booth did some
+  research that uncovered an optimisation which, for lists of fewer
+  than about 100 elements, was actually slower than the normal case.
+  The special-case criteria have been tightened to rectify the
+  performance regression.
+
 Extension modules
 -----------------
--- a/Objects/listobject.c
+++ b/Objects/listobject.c
@@ -2234,6 +2234,13 @@ list_richcompare(PyObject *v, PyObject *w, int op)
 	return PyObject_RichCompare(vl->ob_item[i], wl->ob_item[i], op);
 }
 
+/* empirically determined threshold for activating an optimisation
+ * in list_fill() - 100 seems close to optimum for current CPUs and
+ * compilers, as of December '03.
+ * see also comment in list_fill().
+ */
+#define LISTFILL_OPT_THRESHOLD 100
+
 /* Adapted from newer code by Tim */
 static int
 list_fill(PyListObject *result, PyObject *v)
@@ -2242,14 +2249,29 @@ list_fill(PyListObject *result, PyObject *v)
 	int n;		/* guess for result list size */
 	int i;
 
+	/* if source is destination, we're done. */
+	if (v == (PyObject *)result)
+		return 0;
+
 	n = result->ob_size;
 
-	/* Special-case list(a_list), for speed. */
-	if (PyList_Check(v)) {
-		if (v == (PyObject *)result)
-			return 0;	/* source is destination, we're done */
-		return list_ass_slice(result, 0, n, v);
-	}
+	/* Special-case list(a_list), for speed:
+	 * - if the source has 0 elements, there's nothing to copy.
+	 * - if the source has more than a threshold number of elements,
+	 *   slice assignment is a faster way of filling the target
+	 *   (the exact threshold is subject to empirical determination).
+	 * Also special-case any other zero-length sequence, including
+	 * subclasses of list, there being nothing to copy.
+	 */
+	if (PyList_CheckExact(v)) {
+		i = ((PyListObject *)v)->ob_size;
+		if (i == 0)
+			return 0;
+		if (i > LISTFILL_OPT_THRESHOLD)
+			return list_ass_slice(result, 0, n, v);
+	} else
+	if (PySequence_Check(v) && PySequence_Size(v) == 0)
+		return 0;
+
 	/* Empty previous contents */
 	if (n != 0) {