Commit f0dc1892 authored by monty@hundin.mysql.fi

Fix ORDER BY ... DESC optimization

parent 545e596e
......@@ -410,6 +410,7 @@ MySQL Language Reference
* SET OPTION:: @code{SET OPTION} syntax
* SET TRANSACTION:: @code{SET TRANSACTION} syntax
* GRANT:: @code{GRANT} and @code{REVOKE} syntax
* HANDLER:: @code{HANDLER} syntax
* CREATE INDEX:: @code{CREATE INDEX} syntax
* DROP INDEX:: @code{DROP INDEX} syntax
* Comments:: Comment syntax
......@@ -664,6 +665,7 @@ Replication in MySQL
MySQL Full-text Search
* Fulltext Search::
* Fulltext Fine-tuning::
* Fulltext Features to Appear in MySQL 4.0::
* Fulltext TODO::
......@@ -14139,6 +14141,7 @@ to restart @code{mysqld} with @code{--skip-grant-tables} to run
* SET OPTION:: @code{SET OPTION} syntax
* SET TRANSACTION:: @code{SET TRANSACTION} syntax
* GRANT:: @code{GRANT} and @code{REVOKE} syntax
* HANDLER:: @code{HANDLER} syntax
* CREATE INDEX:: @code{CREATE INDEX} syntax
* DROP INDEX:: @code{DROP INDEX} syntax
* Comments:: Comment syntax
......@@ -19410,6 +19413,8 @@ alter_specification:
or DROP [COLUMN] col_name
or DROP PRIMARY KEY
or DROP INDEX index_name
or DISABLE KEYS
or ENABLE KEYS
or RENAME [TO] new_tbl_name
or ORDER BY col
or table_options
......@@ -19571,6 +19576,15 @@ If you use @code{ALTER TABLE} on a @code{MyISAM} table, all non-unique
indexes are created in a separate batch (like in @code{REPAIR}).
This should make @code{ALTER TABLE} much faster when you have many indexes.
@item
Since @strong{MySQL 4.0} this feature can be activated explicitly.
@code{ALTER TABLE ... DISABLE KEYS} makes @strong{MySQL} stop updating
non-unique indexes for a @code{MyISAM} table.
@code{ALTER TABLE ... ENABLE KEYS} should then be used to re-create the
missing indexes. As @strong{MySQL} does this with a special algorithm that is
much faster than inserting keys one by one, disabling keys can give a
considerable speedup on bulk inserts.
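A minimal sketch of the intended usage (the table @code{t1} and the inserted
values are hypothetical):

@example
ALTER TABLE t1 DISABLE KEYS;
INSERT INTO t1 VALUES (1,'a'),(2,'b'),(3,'c');
ALTER TABLE t1 ENABLE KEYS;
@end example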
@item
@findex mysql_info()
With the C API function @code{mysql_info()}, you can find out how many
......@@ -21880,6 +21894,9 @@ differ somewhat:
| delayed_queue_size | 1000 |
| flush | OFF |
| flush_time | 0 |
| ft_min_word_len | 4 |
| ft_max_word_len | 254 |
| ft_max_word_len_for_sort| 20 |
| have_bdb | YES |
| have_gemini | NO |
| have_innodb | YES |
......@@ -22063,6 +22080,31 @@ tables will be closed (to free up resources and sync things to disk). We
only recommend this option on Win95, Win98, or on systems where you have
very few resources.
@item @code{ft_min_word_len}
The minimum length of a word to be included in a @code{FULLTEXT} index.
@strong{Note: @code{FULLTEXT} indexes have to be rebuilt (e.g. with
@code{OPTIMIZE TABLE}) after changing this variable.}
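A minimal sketch of checking the current value and rebuilding the index
afterwards (the table @code{t1} is hypothetical):

@example
SHOW VARIABLES LIKE 'ft_min_word_len';
OPTIMIZE TABLE t1;
@end example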
@item @code{ft_max_word_len}
The maximum length of a word to be included in a @code{FULLTEXT} index.
@strong{Note: @code{FULLTEXT} indexes have to be rebuilt after changing
this variable.}
@item @code{ft_max_word_len_for_sort}
The maximum length of a word in a @code{FULLTEXT} index
that can be handled by the fast index re-creation method used in
@code{REPAIR}, @code{CREATE INDEX}, @code{OPTIMIZE TABLE}, or
@code{ALTER TABLE}. Longer words are inserted the slow way.
The rule of thumb is as follows: as @code{ft_max_word_len_for_sort}
increases, @strong{MySQL} will create bigger temporary files
(thus slowing the process down due to disk I/O) and will put
fewer keys in one sort block (again decreasing the efficiency).
If @code{ft_max_word_len_for_sort} is too small, on the other hand,
@strong{MySQL} will insert many words into the index the slow way,
but short words will be inserted very fast. This applies only to
index re-creation during @code{REPAIR}, @code{CREATE INDEX},
@code{OPTIMIZE TABLE}, or @code{ALTER TABLE}.
@item @code{have_bdb}
@code{YES} if @code{mysqld} supports Berkeley DB tables. @code{DISABLED}
if @code{--skip-bdb} is used.
......@@ -23156,7 +23198,7 @@ You can set the default isolation level for @code{mysqld} with
@findex GRANT
@findex REVOKE
@node GRANT, CREATE INDEX, SET TRANSACTION, Reference
@node GRANT, HANDLER, SET TRANSACTION, Reference
@section @code{GRANT} and @code{REVOKE} Syntax
@example
......@@ -23386,11 +23428,52 @@ dropped only with explicit @code{REVOKE} commands or by manipulating the
@strong{MySQL} grant tables.
@end itemize
@findex HANDLER
@node HANDLER, CREATE INDEX, GRANT, Reference
@section @code{HANDLER} Syntax
@example
HANDLER table OPEN [ AS alias ]
HANDLER table READ index @{ = | >= | <= | < @} (value1, value2, ... ) [ WHERE ... ] [LIMIT ... ]
HANDLER table READ index @{ FIRST | NEXT | PREV | LAST @} [ WHERE ... ] [LIMIT ... ]
HANDLER table READ @{ FIRST | NEXT @} [ WHERE ... ] [LIMIT ... ]
HANDLER table CLOSE
@end example
The @code{HANDLER} statement provides direct access to the @strong{MySQL}
table interface, bypassing the SQL optimizer. Thus, it is faster than
@code{SELECT}.

The first form of the @code{HANDLER} statement opens a table, making
it accessible via the subsequent @code{HANDLER ... READ} statements.

The second form fetches one row (or more, if specified by the @code{LIMIT}
clause) where the specified index satisfies the given condition and the
@code{WHERE} condition is met. If the index consists of several parts (spans
several columns), the values are specified as a comma-separated list;
providing values only for the first few columns is possible.

The third form fetches one row (or more, if specified by the @code{LIMIT}
clause) from the table in index order, matching the @code{WHERE} condition.

The fourth form (without an index specification) fetches one row (or more, if
specified by the @code{LIMIT} clause) from the table in natural row order (as
stored in the data file), matching the @code{WHERE} condition. It is faster
than @code{HANDLER table READ index} when a full table scan is desired.

The last form closes a table that was opened with @code{HANDLER ... OPEN}.

@code{HANDLER} is a somewhat low-level statement; for example, it does not
provide consistency. That is, @code{HANDLER ... OPEN} does @strong{not}
take a snapshot of the table and does @strong{not} lock the table. This
means that after @code{HANDLER ... OPEN} the table data can be
modified (by this or any other thread), and these modifications may appear
only partially in @code{HANDLER ... NEXT} or @code{HANDLER ... PREV} scans.
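As a rough sketch of a typical session (the table @code{t1}, its index name
@code{a}, and the column @code{b} below are hypothetical):

@example
HANDLER t1 OPEN;
HANDLER t1 READ a FIRST;                       # first row in index order
HANDLER t1 READ a NEXT LIMIT 5;                # next five rows in index order
HANDLER t1 READ a = (2) WHERE b > 0 LIMIT 3;   # rows matching the index value
HANDLER t1 READ FIRST;                         # natural (data file) row order
HANDLER t1 CLOSE;
@end example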
@cindex indexes
@cindex indexes, multi-part
@cindex multi-part index
@findex CREATE INDEX
@node CREATE INDEX, DROP INDEX, GRANT, Reference
@node CREATE INDEX, DROP INDEX, HANDLER, Reference
@section @code{CREATE INDEX} Syntax
@example
......@@ -30809,13 +30892,10 @@ unless you know what you are doing!
@itemize
@item
Minimal length of word to be indexed is defined in
@code{myisam/ftdefs.h} file by the line
@example
#define MIN_WORD_LEN 4
@end example
Change it to the value you prefer, recompile @strong{MySQL}, and rebuild
your @code{FULLTEXT} indexes.
The minimal length of a word to be indexed is defined by the @strong{MySQL}
variable @code{ft_min_word_len}. @xref{SHOW VARIABLES}.
Change it to the value you prefer, and rebuild
your @code{FULLTEXT} indexes (e.g. with @code{OPTIMIZE TABLE}).
@item
The stopword list is defined in @code{myisam/ft_static.c}
......@@ -30881,9 +30961,6 @@ index operations (querying/dumping/statistics).
@itemize @bullet
@item Make all operations with @code{FULLTEXT} index @strong{faster}.
@item Support for braces @code{()} in boolean full-text search.
@item Phrase search, proximity operators
@item Boolean search can work without @code{FULLTEXT} index
(yes, @strong{very} slow).
@item Support for "always-index words". They could be any strings
the user wants to treat as words, examples are "C++", "AS/400", "TCP/IP", etc.
@item Support for full-text search in @code{MERGE} tables.
......@@ -32543,8 +32620,12 @@ Execute a @code{FLUSH TABLES} statement or the shell command @code{mysqladmin
flush-tables}.
@end enumerate
This procedure will be built into @code{LOAD DATA INFILE} in some future
version of @strong{MySQL}.
Since @strong{MySQL 4.0}, you can also use
@code{ALTER TABLE tbl_name DISABLE KEYS} instead of
@code{myisamchk --keys-used=0 -rq /path/to/db/tbl_name} and
@code{ALTER TABLE tbl_name ENABLE KEYS} instead of
@code{myisamchk -r -q /path/to/db/tbl_name}. This way you can also skip the
@code{FLUSH TABLES} steps.
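A minimal sketch of this variant (the table @code{tbl_name} and the data file
name are hypothetical):

@example
ALTER TABLE tbl_name DISABLE KEYS;
LOAD DATA INFILE '/tmp/tbl_name.txt' INTO TABLE tbl_name;
ALTER TABLE tbl_name ENABLE KEYS;
@end example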
@item
You can speed up insertions by locking your tables:
......@@ -46004,6 +46085,8 @@ Responsible for @strong{MySQL} configure.
Full-text search.
@item
Added keys to the @code{MERGE} library.
@item
@code{HANDLER} command.
@end itemize
@item Jeremy Cole
......@@ -46344,6 +46427,11 @@ Our TODO section contains what we plan to have in 4.0. @xref{TODO MySQL 4.0}.
@itemize @bullet
@item
Added @code{ALTER TABLE table_name DISABLE KEYS} and
@code{ALTER TABLE table_name ENABLE KEYS} commands.
@item
@code{LOAD DATA FROM MASTER} "auto-magically" sets up a slave.
@item
Renamed @code{safe_mysqld} to @code{mysqld_safe}.
@item
Allow one to use @code{IN} instead of @code{FROM} in @code{SHOW} commands.
......@@ -46471,6 +46559,9 @@ not yet 100% confident in this code.
@appendixsubsec Changes in release 3.23.40
@itemize @bullet
@item
Added option @code{--warnings} to @code{mysqld}. Now @code{mysqld}
only prints the error @code{Aborted connection} if this option is used.
@item
Fixed parser to allow floats of type @code{1.0e1} (no sign after @code{e}).
@item
Option @code{--force} to @code{myisamchk} now also updates states.
......@@ -51899,6 +51990,8 @@ Multiple result sets.
Change the protocol to allow binary transfer of values. To do this
efficiently, we need to add an API to allow binding of variables.
@item
Add @code{PREPARE} of statements and sending of parameters to @code{mysqld}.
@item
Make it possible to specify @code{long_query_time} with a granularity
in microseconds.
@item
......@@ -112,58 +112,118 @@ member_id nickname voornaam
1
2
table type possible_keys key key_len ref rows Extra
t1 index NULL a 20 NULL 10 Using index
t1 range a a 20 NULL 2 where used; Using index
a b c
1 NULL NULL
1 NULL b
table type possible_keys key key_len ref rows Extra
t1 range a a 4 NULL 10 where used; Using index
a b c
2 3 c
2 2 b
2 2 a
2 1 b
2 1 a
1 3 b
1 1 b
1 1 b
1 1 NULL
1 NULL b
1 NULL NULL
table type possible_keys key key_len ref rows Extra
t1 ref a a 4 const 5 where used; Using index; Using filesort
a b c
1 3 b
1 1 NULL
1 1 b
1 1 b
2 0 a
2 0 b
1 NULL NULL
1 NULL b
table type possible_keys key key_len ref rows Extra
t1 ref a a 9 const,const 2 where used; Using index; Using filesort
a b c
1 NULL NULL
1 NULL b
table type possible_keys key key_len ref rows Extra
t1 range a a 9 NULL 8 where used; Using index; Using filesort
table type possible_keys key key_len ref rows Extra
t1 range a a 9 NULL 5 where used; Using index
table type possible_keys key key_len ref rows Extra
t1 ref a a 9 const,const 1 where used; Using index; Using filesort
table type possible_keys key key_len ref rows Extra
t1 range a a 9 NULL 6 where used; Using index
table type possible_keys key key_len ref rows Extra
t1 range a a 9 NULL 5 where used; Using index
table type possible_keys key key_len ref rows Extra
t1 range a a 9 NULL 2 where used; Using index; Using filesort
table type possible_keys key key_len ref rows Extra
t1 index NULL a 18 NULL 11 Using index
a b c
1 0
1 0 b
1 1
1 1 b
1 1 b
1 3 b
2 1 a
2 1 b
2 1 c
2 2 a
2 2 b
2 3 c
table type possible_keys key key_len ref rows Extra
t1 index NULL a 20 NULL 10 Using index
t1 index NULL a 18 NULL 11 Using index
a b c
2 1 c
2 3 c
2 2 b
2 2 a
2 1 b
2 1 a
2 0 b
2 0 a
1 3 b
1 1 b
1 1 b
1 1 NULL
1 NULL b
1 NULL NULL
1 1
1 0 b
1 0
table type possible_keys key key_len ref rows Extra
t1 range a a 20 NULL 2 where used; Using index
t1 range a a 18 NULL 3 where used; Using index
a b c
1 1 b
1 1 b
table type possible_keys key key_len ref rows Extra
t1 range a a 4 NULL 5 where used; Using index
t1 range a a 4 NULL 6 where used; Using index
a b c
1 1 b
1 1 b
1 1 NULL
1 1
1 0 b
1 0
count(*)
9
a b c
2 3 c
2 2 b
2 2 a
2 1 b
2 1 a
1 3 b
1 1 b
1 1 b
1 1
table type possible_keys key key_len ref rows Extra
t1 range a a 9 NULL 7 where used; Using index
t1 range a a 8 NULL 10 where used; Using index
a b c
2 1 c
2 1 b
2 1 a
2 0 b
2 0 a
1 1 b
1 1 b
1 1 NULL
1 1
1 0 b
1 0
table type possible_keys key key_len ref rows Extra
t1 range a a 4 NULL 4 where used; Using index
t1 range a a 4 NULL 5 where used; Using index
a b c
1 3 b
1 1 b
1 1 b
1 1 NULL
1 NULL b
1 NULL NULL
1 1
1 0 b
1 0
......@@ -168,8 +168,8 @@ drop table t1,t2,t3;
#bug reported by Wouter de Jong
drop table if exists members;
CREATE TABLE members (
drop table if exists t1;
CREATE TABLE t1 (
member_id int(11) NOT NULL auto_increment,
inschrijf_datum varchar(20) NOT NULL default '',
lastchange_datum varchar(20) NOT NULL default '',
......@@ -200,24 +200,51 @@ CREATE TABLE members (
PRIMARY KEY (member_id)
) TYPE=MyISAM PACK_KEYS=1;
insert into members (member_id) values (1),(2),(3);
select member_id, nickname, voornaam FROM members
insert into t1 (member_id) values (1),(2),(3);
select member_id, nickname, voornaam FROM t1
ORDER by lastchange_datum DESC LIMIT 2;
drop table members;
drop table t1;
#
# Test optimization of ORDER BY DESC
#
create table t1 (a int not null, b int, c varchar(10), key (a, b, c));
insert into t1 values (1, NULL, NULL), (1, NULL, 'b'), (1, 1, NULL), (1, 1, 'b'), (1, 1, 'b'), (2, 0, 'a'), (2, 0, 'b'), (2, 1, 'a'), (2, 1, 'b'), (2, 1, 'c');
insert into t1 values (1, NULL, NULL), (1, NULL, 'b'), (1, 1, NULL), (1, 1, 'b'), (1, 1, 'b'), (2, 1, 'a'), (2, 1, 'b'), (2, 2, 'a'), (2, 2, 'b'), (2, 3, 'c'),(1,3,'b');
explain select * from t1 where (a = 1 and b is null and c = 'b') or (a > 2) order by a desc;
select * from t1 where (a = 1 and b is null and c = 'b') or (a > 2) order by a desc;
explain select * from t1 where a >= 1 and a < 3 order by a desc;
select * from t1 where a >= 1 and a < 3 order by a desc;
explain select * from t1 where a = 1 order by a desc, b desc;
select * from t1 where a = 1 order by a desc, b desc;
explain select * from t1 where a = 1 and b is null order by a desc, b desc;
select * from t1 where a = 1 and b is null order by a desc, b desc;
explain select * from t1 where a >= 1 and a < 3 and b >0 order by a desc,b desc;
explain select * from t1 where a = 2 and b >0 order by a desc,b desc;
explain select * from t1 where a = 2 and b is null order by a desc,b desc;
explain select * from t1 where a = 2 and (b is null or b > 0) order by a
desc,b desc;
explain select * from t1 where a = 2 and b > 0 order by a desc,b desc;
explain select * from t1 where a = 2 and b < 2 order by a desc,b desc;
#
# Test things when we don't have NULL keys
#
alter table t1 modify b int not null, modify c varchar(10) not null;
explain select * from t1 order by a, b, c;
select * from t1 order by a, b, c;
explain select * from t1 order by a desc, b desc, c desc;
select * from t1 order by a desc, b desc, c desc;
# test multiple ranges, NO_MAX_RANGE and EQ_RANGE
explain select * from t1 where (a = 1 and b is null and c = 'b') or (a > 2) order by a desc;
explain select * from t1 where (a = 1 and b = 1 and c = 'b') or (a > 2) order by a desc;
select * from t1 where (a = 1 and b = 1 and c = 'b') or (a > 2) order by a desc;
# test NEAR_MAX, NO_MIN_RANGE
explain select * from t1 where a < 2 and b <= 1 order by a desc, b desc;
select * from t1 where a < 2 and b <= 1 order by a desc, b desc;
select count(*) from t1 where a < 5 and b > 0;
select * from t1 where a < 5 and b > 0 order by a desc,b desc;
# test HA_READ_AFTER_KEY (at the end of the file), NEAR_MIN
explain select * from t1 where a between 1 and 3 and b <= 1 order by a desc, b desc;
select * from t1 where a between 1 and 3 and b <= 1 order by a desc, b desc;
......@@ -226,4 +253,4 @@ explain select * from t1 where a between 0 and 1 order by a desc, b desc;
select * from t1 where a between 0 and 1 order by a desc, b desc;
drop table t1;
/* vim:set ft=sql sw=2 noet: */
......@@ -382,7 +382,7 @@ SQL_SELECT::~SQL_SELECT()
#undef index // Fix for Unixware 7
QUICK_SELECT::QUICK_SELECT(TABLE *table,uint key_nr,bool no_alloc)
:error(0),index(key_nr),max_used_key_length(0),head(table),
:dont_free(0),error(0),index(key_nr),max_used_key_length(0),head(table),
it(ranges),range(0)
{
if (!no_alloc)
......@@ -399,8 +399,11 @@ QUICK_SELECT::QUICK_SELECT(TABLE *table,uint key_nr,bool no_alloc)
QUICK_SELECT::~QUICK_SELECT()
{
if (!dont_free)
{
file->index_end();
free_root(&alloc,MYF(0));
}
}
int QUICK_SELECT::init()
......@@ -2516,6 +2519,7 @@ int QUICK_SELECT::cmp_next(QUICK_RANGE *range)
return (range->flag & NEAR_MAX) ? 1 : 0; // Exact match
}
/*
* This is a hack: we inherit from QUICK_SELECT so that we can use the
* get_next() interface, but we have to hold a pointer to the original
......@@ -2525,29 +2529,44 @@ int QUICK_SELECT::cmp_next(QUICK_RANGE *range)
* which handle the ranges and implement the get_next() function. But
* for now, this seems to work right at least.
*/
QUICK_SELECT_DESC::QUICK_SELECT_DESC(QUICK_SELECT *q)
: QUICK_SELECT(*q), quick(q), rev_it(rev_ranges)
QUICK_SELECT_DESC::QUICK_SELECT_DESC(QUICK_SELECT *q, uint used_key_parts)
: QUICK_SELECT(*q), rev_it(rev_ranges)
{
bool not_read_after_key = file->option_flag() & HA_NOT_READ_AFTER_KEY;
for (QUICK_RANGE *r = it++; r; r = it++)
{
rev_ranges.push_front(r);
if (not_read_after_key && range_reads_after_key(r))
if (not_read_after_key && range_reads_after_key(r) ||
test_if_null_range(r,used_key_parts))
{
it.rewind(); // Reset range
error = HA_ERR_UNSUPPORTED;
break;
dont_free=1; // Don't free memory from 'q'
return;
}
}
/* Remove EQ_RANGE flag for keys that are not using the full key */
for (QUICK_RANGE *r = rev_it++; r; r = rev_it++)
{
if ((r->flag & EQ_RANGE) &&
head->key_info[index].key_length != r->max_length)
r->flag&= ~EQ_RANGE;
}
rev_it.rewind();
q->dont_free=1; // Don't free shared mem
delete q;
}
int QUICK_SELECT_DESC::get_next()
{
DBUG_ENTER("QUICK_SELECT_DESC::get_next");
/* The max key is handled as follows:
* - if there is NO_MAX_RANGE, start at the end and move backwards
* - if it is an EQ_RANGE and max key covers the entire key, go directly
* to the key and read through it (sorting backwards is
* - if it is an EQ_RANGE, which means that max key covers the entire
* key, go directly to the key and read through it (sorting backwards is
* same as sorting forwards)
* - if it is NEAR_MAX, go to the key or next, step back once, and
* move backwards
......@@ -2560,8 +2579,8 @@ int QUICK_SELECT_DESC::get_next()
int result;
if (range)
{ // Already read through key
result = ((range->flag & EQ_RANGE) ?
file->index_next_same(record, (byte*) range->min_key,
result = ((range->flag & EQ_RANGE)
? file->index_next_same(record, (byte*) range->min_key,
range->min_length) :
file->index_prev(record));
if (!result)
......@@ -2587,8 +2606,7 @@ int QUICK_SELECT_DESC::get_next()
continue;
}
if (range->flag & EQ_RANGE &&
head->key_info[index].key_length == range->max_length)
if (range->flag & EQ_RANGE)
{
result = file->index_read(record, (byte*) range->max_key,
range->max_length, HA_READ_KEY_EXACT);
......@@ -2598,9 +2616,10 @@ int QUICK_SELECT_DESC::get_next()
dbug_assert(range->flag & NEAR_MAX || range_reads_after_key(range));
/* Note: even if max_key is only a prefix, HA_READ_AFTER_KEY will
* do the right thing - go past all keys which match the prefix */
file->index_read(record, (byte*) range->max_key, range->max_length,
result=file->index_read(record, (byte*) range->max_key,
range->max_length,
((range->flag & NEAR_MAX) ?
HA_READ_KEY_OR_PREV : HA_READ_AFTER_KEY));
HA_READ_KEY_EXACT : HA_READ_AFTER_KEY));
result = file->index_prev(record);
}
if (result)
......@@ -2629,7 +2648,7 @@ int QUICK_SELECT_DESC::cmp_prev(QUICK_RANGE *range)
if (range->flag & NO_MIN_RANGE)
return (0); /* key can't be too small */
KEY_PART *key_part = quick->key_parts;
KEY_PART *key_part = key_parts;
for (char *key = range->min_key, *end = key + range->min_length;
key < end;
key += key_part++->part_length)
......@@ -2659,15 +2678,64 @@ int QUICK_SELECT_DESC::cmp_prev(QUICK_RANGE *range)
/*
* True if this range will require using HA_READ_AFTER_KEY
See comment in get_next() about this
*/
bool QUICK_SELECT_DESC::range_reads_after_key(QUICK_RANGE *range)
{
// See comment in get_next()
return range->flag & (NO_MAX_RANGE | NEAR_MAX) ? 1 :
(range->flag & EQ_RANGE &&
head->key_info[index].key_length == range->max_length) ? 0 : 1;
return ((range->flag & (NO_MAX_RANGE | NEAR_MAX)) ||
!(range->flag & EQ_RANGE) ||
head->key_info[index].key_length != range->max_length) ? 1 : 0;
}
/* True if we are reading over a key that may have a NULL value */
bool QUICK_SELECT_DESC::test_if_null_range(QUICK_RANGE *range,
uint used_key_parts)
{
uint offset,end;
KEY_PART *key_part = key_parts,
*key_part_end= key_part+used_key_parts;
for (offset= 0, end = min(range->min_length, range->max_length) ;
offset < end && key_part != key_part_end ;
offset += key_part++->part_length)
{
uint null_length=test(key_part->null_bit);
if (!memcmp((char*) range->min_key+offset, (char*) range->max_key+offset,
key_part->part_length + null_length))
{
offset+=null_length;
continue;
}
if (null_length && range->min_key[offset])
return 1; // min_key is null and max_key isn't
// Range doesn't cover NULL. This is ok if there are no more null parts
break;
}
/*
If the next min_range is > NULL, then we can use this, even if
it's a NULL key
Example: SELECT * FROM t1 WHERE a = 2 AND b >0 ORDER BY a DESC,b DESC;
*/
if (key_part != key_part_end && key_part->null_bit)
{
if (offset >= range->min_length || range->min_key[offset])
return 1; // Could be null
key_part++;
}
/*
If any of the key parts used in the ORDER BY could be NULL, we can't
use the key to sort the data.
*/
for (; key_part != key_part_end ; key_part++)
if (key_part->null_bit)
return 1; // Covers null part
return 0;
}
/*****************************************************************************
** Print a quick range for debugging
** TODO:
......
......@@ -54,9 +54,10 @@ class QUICK_RANGE :public Sql_alloc {
{}
};
class QUICK_SELECT {
public:
bool next;
bool next,dont_free;
int error;
uint index,max_used_key_length;
TABLE *head;
......@@ -80,16 +81,17 @@ public:
bool unique_key_range();
};
class QUICK_SELECT_DESC: public QUICK_SELECT
{
public:
QUICK_SELECT_DESC(QUICK_SELECT *q);
QUICK_SELECT_DESC(QUICK_SELECT *q, uint used_key_parts);
int get_next();
private:
int cmp_prev(QUICK_RANGE *range);
bool range_reads_after_key(QUICK_RANGE *range);
QUICK_SELECT *quick;
bool test_if_null_range(QUICK_RANGE *range, uint used_key_parts);
void reset(void) { next=0; rev_it.rewind(); }
List<QUICK_RANGE> rev_ranges;
List_iterator<QUICK_RANGE> rev_it;
};
......
......@@ -290,11 +290,7 @@ int mysql_delete(THD *thd,
** delete multiple tables from join
***************************************************************************/
#ifndef DBUG_OFF
#define MEM_STRIP_BUF_SIZE 2048
#else
#define MEM_STRIP_BUF_SIZE sortbuffer_size
#endif
#define MEM_STRIP_BUF_SIZE sortbuff_size
#ifndef SINISAS_STRIP
int refposcmp2(void* arg, const void *a,const void *b)
......
......@@ -5153,9 +5153,11 @@ part_of_refkey(TABLE *table,Field *field)
** Returns: 1 if key is ok.
** 0 if key can't be used
** -1 if reverse key can be used
** used_key_parts is set to key parts used if length != 0
*****************************************************************************/
static int test_if_order_by_key(ORDER *order, TABLE *table, uint idx)
static int test_if_order_by_key(ORDER *order, TABLE *table, uint idx,
uint *used_key_parts)
{
KEY_PART_INFO *key_part,*key_part_end;
key_part=table->key_info[idx].key_part;
......@@ -5187,6 +5189,7 @@ static int test_if_order_by_key(ORDER *order, TABLE *table, uint idx)
reverse=flag; // Remember if reverse
key_part++;
}
*used_key_parts= (uint) (key_part - table->key_info[idx].key_part);
return reverse;
}
......@@ -5249,17 +5252,38 @@ test_if_skip_sort_order(JOIN_TAB *tab,ORDER *order,ha_rows select_limit,
if (ref_key >= 0)
{
int order_direction;
uint used_key_parts;
/* Check if we get the rows in requested sorted order by using the key */
if ((usable_keys & ((key_map) 1 << ref_key)) &&
(order_direction = test_if_order_by_key(order,table,ref_key)))
(order_direction = test_if_order_by_key(order,table,ref_key,
&used_key_parts)))
{
if (order_direction == -1 && select && select->quick)
if (order_direction == -1)
{
if (select && select->quick)
{
// ORDER BY ref_key DESC
select->quick = new QUICK_SELECT_DESC(select->quick);
if (select->quick->error)
QUICK_SELECT_DESC *tmp=new QUICK_SELECT_DESC(select->quick,
used_key_parts);
if (!tmp || tmp->error)
{
delete tmp;
DBUG_RETURN(0); // Reverse sort not supported
}
select->quick=tmp;
DBUG_RETURN(1);
}
if (tab->ref.key_parts < used_key_parts)
{
/*
SELECT * FROM t1 WHERE a=1 ORDER BY a DESC,b DESC
TODO:
Add a new traversal function to read last matching row and
traverse backwards.
*/
DBUG_RETURN(0);
}
}
DBUG_RETURN(1); /* No need to sort */
}
}
......@@ -5280,10 +5304,11 @@ test_if_skip_sort_order(JOIN_TAB *tab,ORDER *order,ha_rows select_limit,
for (nr=0; keys ; keys>>=1, nr++)
{
uint not_used;
if (keys & 1)
{
int flag;
if ((flag=test_if_order_by_key(order,table,nr)))
if ((flag=test_if_order_by_key(order, table, nr, &not_used)))
{
if (!no_changes)
{
......