Commit 6c127155 authored by unknown

Fixed a bug with SELECT DISTINCT and HAVING


Docs/manual.texi:
  Update AIX information
support-files/Makefile.am:
  Removed mysql-max spec
parent 64dcaea4
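
As an illustration of the query shape this fix concerns: it combines
SELECT DISTINCT with a HAVING clause but no GROUP BY. The table and data
below are hypothetical, chosen only to show the intended behavior:

    CREATE TABLE t1 (a INT, b INT);
    INSERT INTO t1 VALUES (1,1), (1,2), (2,1), (2,1);

    -- Duplicate removal should first reduce the result to the three
    -- distinct (a,b) pairs; HAVING should then discard only the rows
    -- with a <= 1, leaving (2,1). Before this fix, too many rows could
    -- be removed at this stage.
    SELECT DISTINCT a, b FROM t1 HAVING a > 1;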
--- a/Docs/manual.texi
+++ b/Docs/manual.texi
@@ -5939,12 +5939,15 @@ A reasonable @code{tar} to unpack the distribution. GNU @code{tar} is
 known to work. Sun @code{tar} is known to have problems.
 @item
-A working ANSI C++ compiler. @code{gcc} >= 2.8.1, @code{egcs} >=
-1.0.2, SGI C++, and SunPro C++ are some of the compilers that are known to
-work. @code{libg++} is not needed when using @code{gcc}. @code{gcc}
-2.7.x has a bug that makes it impossible to compile some perfectly legal
-C++ files, such as @file{sql/sql_base.cc}. If you only have @code{gcc} 2.7.x,
-you must upgrade your @code{gcc} to be able to compile @strong{MySQL}.
+A working ANSI C++ compiler. @code{gcc} >= 2.95.2, @code{egcs} >= 1.0.2
+or @code{egcs 2.91.66}, SGI C++, and SunPro C++ are some of the
+compilers that are known to work. @code{libg++} is not needed when
+using @code{gcc}. @code{gcc} 2.7.x has a bug that makes it impossible
+to compile some perfectly legal C++ files, such as
+@file{sql/sql_base.cc}. If you only have @code{gcc} 2.7.x, you must
+upgrade your @code{gcc} to be able to compile @strong{MySQL}. @code{gcc}
+2.8.1 is also known to have problems on some platforms and should be
+avoided if a newer compiler exists for the platform.
 @code{gcc} >= 2.95.2 is recommended when compiling @strong{MySQL}
 Version 3.23.x.
@@ -8536,8 +8539,8 @@ We recommend the following @code{configure} line with @code{egcs} and
 @code{gcc 2.95} on AIX:

 @example
-CC="gcc -pipe -mcpu=power2 -Wa,-many" \
-CXX="gcc -pipe -mcpu=power2 -Wa,-many" \
+CC="gcc -pipe -mcpu=power -Wa,-many" \
+CXX="gcc -pipe -mcpu=power -Wa,-many" \
 CXXFLAGS="-felide-constructors -fno-exceptions -fno-rtti" \
 ./configure --prefix=/usr/local/mysql --with-low-memory
 @end example
@@ -8549,6 +8552,21 @@ available. We don't know if the @code{-fno-exceptions} is required with
 option generates faster code, we recommend that you should always use this
 option with @code{egcs / gcc}.
+
+If you get a problem with assembler code, try changing the -mcpu=xxx
+option to match your CPU. Typically power2, power, or powerpc is
+needed; alternatively you might need to use 604 or 604e. I'm not
+positive, but I would think "power" is likely to be safe most of the
+time, even on a power2 machine.
+
+If you don't know what your CPU is, run "uname -m". It returns a
+string like "000514676700", in the format xxyyyyyymmss, where xx and
+ss are always zeros, yyyyyy is a unique system id, and mm is the id of
+the CPU planar (in the sample, xx=00, yyyyyy=051467, mm=67, ss=00). A
+chart of these values can be found at
+@uref{http://www.rs6000.ibm.com/doc_link/en_US/a_doc_lib/cmds/aixcmds5/uname.htm}.
+This gives you a machine type and model you can use to determine what
+type of CPU you have.
+
 If you have problems with signals (@strong{MySQL} dies unexpectedly
 under high load) you may have found an OS bug with threads and
 signals. In this case you can tell @strong{MySQL} not to use signals by
@@ -8569,6 +8587,29 @@ On some versions of AIX, linking with @code{libbind.a} makes
 @code{getservbyname} core dump. This is an AIX bug and should be reported
 to IBM.

+For AIX 4.2.1 and gcc you have to make the following changes.
+
+After configuring, edit @file{config.h} and @file{include/my_config.h}
+and change the line that says
+
+@example
+#define HAVE_SNPRINTF 1
+@end example
+
+to
+
+@example
+#undef HAVE_SNPRINTF
+@end example
+
+And finally, in @file{mysqld.cc} you need to add a prototype for
+initgroups.
+
+@example
+#ifdef _AIX41
+extern "C" int initgroups(const char *,int);
+#endif
+@end example
+
 @node HP-UX 10.20, HP-UX 11.x, IBM-AIX, Source install system issues
 @subsection HP-UX Version 10.20 Notes
@@ -23777,7 +23818,7 @@ is not signaled to the other servers.
 @section MERGE Tables

 @code{MERGE} tables are new in @strong{MySQL} Version 3.23.25. The code
-is still in beta, but should stabilize soon!
+is still in gamma, but should be reasonably stable.

 A @code{MERGE} table is a collection of identical @code{MyISAM} tables
 that can be used as one. You can only @code{SELECT}, @code{DELETE}, and
@@ -23790,8 +23831,8 @@ will only clear the mapping for the table, not delete everything in the
 mapped tables. (We plan to fix this in 4.0).

 With identical tables we mean that all tables are created with identical
-column information. You can't put a MERGE over tables where the columns
-are packed differently or doesn't have exactly the same columns.
+column and key information. You can't put a MERGE over tables where the
+columns are packed differently or don't have exactly the same columns.
 Some of the tables can however be compressed with @code{myisampack}.
 @xref{myisampack}.
@@ -23826,8 +23867,10 @@ More efficient repairs. It's easier to repair the individual files that
 are mapped to a @code{MERGE} file than trying to repair a real big file.
 @item
 Instant mapping of many files as one. A @code{MERGE} table uses the
-index of the individual tables. It doesn't need an index of its one.
-This makes @code{MERGE} table collections VERY fast to make or remap.
+index of the individual tables. It doesn't need to maintain an index of
+its own. This makes @code{MERGE} table collections VERY fast to make or
+remap. Note that you must specify the key definitions when you create
+a @code{MERGE} table!
 @item
 If you have a set of tables that you join to a big table on demand or
 batch, you should instead create a @code{MERGE} table on them on demand.
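
The warning added in the hunk above means the key definitions are not
inherited from the mapped tables. A minimal sketch, with hypothetical
table names, of the @code{CREATE TABLE ... TYPE=MERGE} syntax:

    CREATE TABLE t1 (a INT NOT NULL PRIMARY KEY, message CHAR(20));
    CREATE TABLE t2 (a INT NOT NULL PRIMARY KEY, message CHAR(20));

    -- The MERGE table repeats the column and key definitions
    -- explicitly; UNION= lists the MyISAM tables it maps:
    CREATE TABLE total (a INT NOT NULL, message CHAR(20), KEY(a))
           TYPE=MERGE UNION=(t1,t2);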
@@ -43032,8 +43075,8 @@ An open source client for exploring databases and executing SQL. Supports
 A query tool for @strong{MySQL} and PostgreSQL.
 @item @uref{http://dbman.linux.cz/,dbMan}
 A query tool written in Perl. Uses DBI and Tk.
-@item @uref{http://www.mysql.com/Downloads/Win32/Msc201.EXE, Mascon 2.1.15}
-@item @uref{http://www.mysql.com/Downloads/Win32/FrMsc201.EXE, Free Mascon 2.1.14}
+@item @uref{http://www.mysql.com/Downloads/Win32/Msc201.EXE, Mascon 202}
+@item @uref{http://www.mysql.com/Downloads/Win32/FrMsc202.EXE, Free Mascon 202}
 Mascon is a powerful Win32 GUI for the administering @strong{MySQL} server
 databases. Mascon's features include visual table design, connections to
 multiple servers, data and blob editing of tables, security setting, SQL
@@ -44050,6 +44093,9 @@ not yet 100% confident in this code.
 @appendixsubsec Changes in release 3.23.38
 @itemize @bullet
 @item
+Fixed bug where too many rows were removed when using
+@code{SELECT DISTINCT ... HAVING}.
+@item
 @code{SHOW CREATE TABLE} now returns @code{TEMPORARY} for temporary tables.
 @item
 Added @code{Rows_examined} to slow query log.
--- a/sql/sql_select.cc
+++ b/sql/sql_select.cc
@@ -36,7 +36,8 @@ const char *join_type_str[]={ "UNKNOWN","system","const","eq_ref","ref",
 static bool make_join_statistics(JOIN *join,TABLE_LIST *tables,COND *conds,
                                  DYNAMIC_ARRAY *keyuse,List<Item_func_match> &ftfuncs);
-static bool update_ref_and_keys(DYNAMIC_ARRAY *keyuse,JOIN_TAB *join_tab,
+static bool update_ref_and_keys(THD *thd, DYNAMIC_ARRAY *keyuse,
+                                JOIN_TAB *join_tab,
                                 uint tables,COND *conds,table_map table_map,
                                 List<Item_func_match> &ftfuncs);
 static int sort_keyuse(KEYUSE *a,KEYUSE *b);
@@ -106,12 +107,14 @@ static uint find_shortest_key(TABLE *table, key_map usable_keys);
 static bool test_if_skip_sort_order(JOIN_TAB *tab,ORDER *order,
                                     ha_rows select_limit);
 static int create_sort_index(JOIN_TAB *tab,ORDER *order,ha_rows select_limit);
-static int remove_duplicates(JOIN *join,TABLE *entry,List<Item> &fields);
+static bool fix_having(JOIN *join, Item **having);
+static int remove_duplicates(JOIN *join,TABLE *entry,List<Item> &fields,
+                             Item *having);
 static int remove_dup_with_compare(THD *thd, TABLE *entry, Field **field,
-                                   ulong offset);
+                                   ulong offset,Item *having);
 static int remove_dup_with_hash_index(THD *thd, TABLE *table,
                                       uint field_count, Field **first_field,
-                                      ulong key_length);
+                                      ulong key_length,Item *having);
 static int join_init_cache(THD *thd,JOIN_TAB *tables,uint table_count);
 static ulong used_blob_length(CACHE_FIELD **ptr);
 static bool store_record_in_cache(JOIN_CACHE *cache);
@@ -717,8 +720,11 @@ mysql_select(THD *thd,TABLE_LIST *tables,List<Item> &fields,COND *conds,
     if (select_distinct && ! group)
     {
       thd->proc_info="Removing duplicates";
-      if (remove_duplicates(&join,tmp_table,fields))
-        goto err;                               /* purecov: inspected */
+      if (having)
+        having->update_used_tables();
+      if (remove_duplicates(&join,tmp_table,fields, having))
+        goto err;                               /* purecov: inspected */
+      having=0;
       select_distinct=0;
     }
     tmp_table->reginfo.lock_type=TL_UNLOCK;
@@ -749,28 +755,8 @@ mysql_select(THD *thd,TABLE_LIST *tables,List<Item> &fields,COND *conds,
     /* If we have already done the group, add HAVING to sorted table */
     if (having && ! group && ! join.sort_and_group)
     {
-      having->update_used_tables();             // Some tables may have been const
-      JOIN_TAB *table=&join.join_tab[join.const_tables];
-      table_map used_tables= join.const_table_map | table->table->map;
-      Item* sort_table_cond=make_cond_for_table(having,used_tables,used_tables);
-      if (sort_table_cond)
-      {
-        if (!table->select)
-          if (!(table->select=new SQL_SELECT))
-            goto err;
-        if (!table->select->cond)
-          table->select->cond=sort_table_cond;
-        else                                    // This should never happen
-          if (!(table->select->cond=new Item_cond_and(table->select->cond,
-                                                      sort_table_cond)))
-            goto err;
-        table->select_cond=table->select->cond;
-        DBUG_EXECUTE("where",print_where(table->select->cond,
-                                         "select and having"););
-        having=make_cond_for_table(having,~ (table_map) 0,~used_tables);
-        DBUG_EXECUTE("where",print_where(conds,"having after sort"););
-      }
+      if (fix_having(&join,&having))
+        goto err;
     }
     if (create_sort_index(&join.join_tab[join.const_tables],
                           group ? group : order,
@@ -941,7 +927,7 @@ make_join_statistics(JOIN *join,TABLE_LIST *tables,COND *conds,
   }

   if (conds || outer_join)
-    if (update_ref_and_keys(keyuse_array,stat,join->tables,
+    if (update_ref_and_keys(join->thd,keyuse_array,stat,join->tables,
                             conds,~outer_join,ftfuncs))
       DBUG_RETURN(1);
@@ -1442,8 +1428,9 @@ sort_keyuse(KEYUSE *a,KEYUSE *b)
 */

 static bool
-update_ref_and_keys(DYNAMIC_ARRAY *keyuse,JOIN_TAB *join_tab,uint tables,
-                    COND *cond, table_map normal_tables,List<Item_func_match> &ftfuncs)
+update_ref_and_keys(THD *thd, DYNAMIC_ARRAY *keyuse,JOIN_TAB *join_tab,
+                    uint tables, COND *cond, table_map normal_tables,
+                    List<Item_func_match> &ftfuncs)
 {
   uint and_level,i,found_eq_constant;
@@ -1451,8 +1438,7 @@ update_ref_and_keys(DYNAMIC_ARRAY *keyuse,JOIN_TAB *join_tab,uint tables,
   KEY_FIELD *key_fields,*end;

   if (!(key_fields=(KEY_FIELD*)
-        my_malloc(sizeof(key_fields[0])*
-                  (current_thd->cond_count+1)*2,MYF(0))))
+        thd->alloc((sizeof(key_fields[0])*thd->cond_count+1)*2)))
     return TRUE;                                /* purecov: inspected */
   and_level=0; end=key_fields;
   if (cond)
@@ -1466,14 +1452,10 @@ update_ref_and_keys(DYNAMIC_ARRAY *keyuse,JOIN_TAB *join_tab,uint tables,
     }
   }
   if (init_dynamic_array(keyuse,sizeof(KEYUSE),20,64))
-  {
-    my_free((gptr) key_fields,MYF(0));
     return TRUE;
-  }
   /* fill keyuse with found key parts */
   for (KEY_FIELD *field=key_fields ; field != end ; field++)
     add_key_part(keyuse,field);
-  my_free((gptr) key_fields,MYF(0));
 }

 if (ftfuncs.elements)
@@ -1894,7 +1876,7 @@ cache_record_length(JOIN *join,uint idx)
 {
   uint length;
   JOIN_TAB **pos,**end;
-  THD *thd=current_thd;
+  THD *thd=join->thd;

   length=0;
   for (pos=join->best_ref+join->const_tables,end=join->best_ref+idx ;
@@ -2076,7 +2058,7 @@ get_best_combination(JOIN *join)
   }
   else
   {
-    THD *thd=current_thd;
+    THD *thd=join->thd;
     for (i=0 ; i < keyparts ; keyuse++,i++)
     {
       while (keyuse->keypart != i ||
@@ -4433,7 +4415,8 @@ join_init_read_record(JOIN_TAB *tab)
 {
   if (tab->select && tab->select->quick)
     tab->select->quick->reset();
-  init_read_record(&tab->read_record,current_thd, tab->table, tab->select,1,1);
+  init_read_record(&tab->read_record, tab->join->thd, tab->table,
+                   tab->select,1,1);
   return (*tab->read_record.read_record)(&tab->read_record);
 }
@@ -5265,6 +5248,38 @@ err:
 }

+/*
+** Add the HAVING criteria to table->select
+*/
+
+static bool fix_having(JOIN *join, Item **having)
+{
+  (*having)->update_used_tables();              // Some tables may have been const
+  JOIN_TAB *table=&join->join_tab[join->const_tables];
+  table_map used_tables= join->const_table_map | table->table->map;
+
+  Item* sort_table_cond=make_cond_for_table(*having,used_tables,used_tables);
+  if (sort_table_cond)
+  {
+    if (!table->select)
+      if (!(table->select=new SQL_SELECT))
+        return 1;
+    if (!table->select->cond)
+      table->select->cond=sort_table_cond;
+    else                                        // This should never happen
+      if (!(table->select->cond=new Item_cond_and(table->select->cond,
+                                                  sort_table_cond)))
+        return 1;
+    table->select_cond=table->select->cond;
+    DBUG_EXECUTE("where",print_where(table->select_cond,
+                                     "select and having"););
+    *having=make_cond_for_table(*having,~ (table_map) 0,~used_tables);
+    DBUG_EXECUTE("where",print_where(*having,"having after make_cond"););
+  }
+  return 0;
+}
+
 /*****************************************************************************
 ** Remove duplicates from tmp table
 ** This should be recoded to add a uniuqe index to the table and remove
@@ -5305,7 +5320,7 @@ static void free_blobs(Field **ptr)

 static int
-remove_duplicates(JOIN *join, TABLE *entry,List<Item> &fields)
+remove_duplicates(JOIN *join, TABLE *entry,List<Item> &fields, Item *having)
 {
   int error;
   ulong reclength,offset;
@@ -5342,9 +5357,10 @@ remove_duplicates(JOIN *join, TABLE *entry,List<Item> &fields)
                             sortbuff_size)))
     error=remove_dup_with_hash_index(join->thd, entry,
                                      field_count, first_field,
-                                     reclength);
+                                     reclength, having);
   else
-    error=remove_dup_with_compare(join->thd, entry, first_field, offset);
+    error=remove_dup_with_compare(join->thd, entry, first_field, offset,
+                                  having);

   free_blobs(first_field);
   DBUG_RETURN(error);
@@ -5352,19 +5368,19 @@ remove_duplicates(JOIN *join, TABLE *entry,List<Item> &fields)

 static int remove_dup_with_compare(THD *thd, TABLE *table, Field **first_field,
-                                   ulong offset)
+                                   ulong offset, Item *having)
 {
   handler *file=table->file;
-  char *org_record,*new_record;
+  char *org_record,*new_record, *record;
   int error;
   ulong reclength=table->reclength-offset;
   DBUG_ENTER("remove_dup_with_compare");

-  org_record=(char*) table->record[0]+offset;
+  org_record=(char*) (record=table->record[0])+offset;
   new_record=(char*) table->record[1]+offset;

   file->rnd_init();
-  error=file->rnd_next(table->record[0]);
+  error=file->rnd_next(record);
   for (;;)
   {
     if (thd->killed)
@@ -5381,6 +5397,12 @@ static int remove_dup_with_compare(THD *thd, TABLE *table, Field **first_field,
         break;
       goto err;
     }
+    if (having && !having->val_int())
+    {
+      if ((error=file->delete_row(record)))
+        goto err;
+      continue;
+    }
     if (copy_blobs(first_field))
     {
       my_error(ER_OUT_OF_SORTMEMORY,MYF(0));
@@ -5393,7 +5415,7 @@ static int remove_dup_with_compare(THD *thd, TABLE *table, Field **first_field,
     bool found=0;
     for (;;)
     {
-      if ((error=file->rnd_next(table->record[0])))
+      if ((error=file->rnd_next(record)))
       {
         if (error == HA_ERR_RECORD_DELETED)
           continue;
@@ -5403,19 +5425,19 @@ static int remove_dup_with_compare(THD *thd, TABLE *table, Field **first_field,
       }
       if (compare_record(table, first_field) == 0)
       {
-        if ((error=file->delete_row(table->record[0])))
+        if ((error=file->delete_row(record)))
          goto err;
       }
       else if (!found)
       {
         found=1;
-        file->position(table->record[0]);       // Remember position
+        file->position(record);                 // Remember position
       }
     }
     if (!found)
       break;                                    // End of file
     /* Restart search on next row */
-    error=file->restart_rnd_next(table->record[0],file->ref);
+    error=file->restart_rnd_next(record,file->ref);
   }

   file->extra(HA_EXTRA_NO_CACHE);
@@ -5436,7 +5458,8 @@ err:
 static int remove_dup_with_hash_index(THD *thd, TABLE *table,
                                       uint field_count,
                                       Field **first_field,
-                                      ulong key_length)
+                                      ulong key_length,
+                                      Item *having)
 {
   byte *key_buffer, *key_pos, *record=table->record[0];
   int error;
@@ -5484,6 +5507,12 @@ static int remove_dup_with_hash_index(THD *thd, TABLE *table,
         break;
       goto err;
     }
+    if (having && !having->val_int())
+    {
+      if ((error=file->delete_row(record)))
+        goto err;
+      continue;
+    }
     /* copy fields to key buffer */
     field_length=field_lengths;
@@ -5499,7 +5528,8 @@ static int remove_dup_with_hash_index(THD *thd, TABLE *table,
       if ((error=file->delete_row(record)))
         goto err;
     }
-    (void) hash_insert(&hash, key_pos-key_length);
+    else
+      (void) hash_insert(&hash, key_pos-key_length);
     key_pos+=extra_length;
   }
   my_free((char*) key_buffer,MYF(0));
--- a/support-files/Makefile.am
+++ b/support-files/Makefile.am
@@ -18,7 +18,6 @@
 ## Process this file with automake to create Makefile.in

 EXTRA_DIST = mysql.spec.sh \
-             mysql-max.spec.sh \
              my-small.cnf.sh \
              my-medium.cnf.sh \
              my-large.cnf.sh \
@@ -34,7 +33,6 @@ pkgdata_DATA = my-small.cnf \
                my-huge.cnf \
                mysql-log-rotate \
                mysql-@VERSION@.spec \
-               mysql-max-@VERSION@.spec \
                binary-configure

 pkgdata_SCRIPTS = mysql.server
@@ -44,7 +42,6 @@ CLEANFILES = my-small.cnf \
              my-large.cnf \
              my-huge.cnf \
              mysql.spec \
-             mysql-max-@VERSION@.spec \
             mysql-@VERSION@.spec \
             mysql-log-rotate \
             mysql.server \
@@ -55,10 +52,6 @@ mysql-@VERSION@.spec: mysql.spec
        rm -f $@
        cp mysql.spec $@

-mysql-max-@VERSION@.spec: mysql-max.spec
-       rm -f $@
-       cp mysql-max.spec $@
-
 SUFFIXES = .sh

 .sh: