- 05 Jul, 2006 3 commits
-
-
unknown authored
into mysql.com:/home/mydev/mysql-5.0-ateam
myisam/mi_key.c: Auto merged
mysql-test/r/gis-rtree.result: Auto merged
mysql-test/t/gis-rtree.test: Auto merged
myisam/mi_check.c: SCCS merged
-
unknown authored
into mysql.com:/home/mydev/mysql-5.0-ateam
libmysqld/libmysqld.c: Auto merged
myisam/mi_rkey.c: Auto merged
mysql-test/r/func_sapdb.result: Auto merged
mysql-test/r/symlink.result: Auto merged
mysql-test/t/func_sapdb.test: Auto merged
scripts/make_binary_distribution.sh: Auto merged
sql/item_geofunc.h: Auto merged
sql/item_timefunc.cc: Auto merged
sql/sql_class.cc: Auto merged
sql/sql_parse.cc: Auto merged
libmysqld/lib_sql.cc: Manual merge
mysql-test/r/func_time.result: Manual merge
mysql-test/r/gis.result: Manual merge
mysql-test/t/func_time.test: Manual merge
mysql-test/t/gis.test: Manual merge
sql-common/client.c: Manual merge
-
unknown authored
into mysql.com:/home/mydev/mysql-5.0-ateam
myisam/mi_create.c: Auto merged
mysql-test/r/ctype_utf8.result: Auto merged
mysql-test/r/key.result: Auto merged
mysql-test/r/myisam.result: Auto merged
mysql-test/t/ctype_utf8.test: Auto merged
mysql-test/t/key.test: Auto merged
mysql-test/t/myisam.test: Auto merged
sql/opt_sum.cc: Auto merged
sql/table.cc: Auto merged
support-files/mysql.spec.sh: Auto merged
sql/field.cc: Manual merge
-
- 04 Jul, 2006 1 commit
-
-
unknown authored
into mysql.com:/home/mydev/mysql-4.1-bug14400
myisam/mi_rkey.c: Bug#14400 - Query joins wrong rows from table which is subject of "concurrent insert". Manual merge.
sql/sql_class.cc: Bug#14400 - Query joins wrong rows from table which is subject of "concurrent insert". Manual merge.
-
- 29 Jun, 2006 1 commit
-
-
unknown authored
+ adopted signal to be as close as possible to 5.1...
-
- 28 Jun, 2006 2 commits
-
-
unknown authored
It was possible that fetching a record by an exact key value (including the record pointer) could return a record with a different key value. This happened only if a concurrent insert added a record with the searched key value after the fetching statement locked the table for read. The search succeeded on the key value, but the record was rejected because it was past the file length that was remembered at the start of the fetching statement. In other words, it was rejected as being a concurrently inserted record. The recovery action for this problem was to fetch the record pointed at by the next key of the index, repeated until a record below the file length was found. This loop is now avoided when an exact match is searched: if the match is beyond the file length, it is treated as "key not found", since there cannot be another key with the same record pointer.

myisam/mi_rkey.c: Bug#14400 - Query joins wrong rows from table which is subject of "concurrent insert". Added a check for an exact key match before searching for the next key that was not concurrently inserted. If an exact key match finds a concurrently inserted row, this must be treated as "key not found".
sql/sql_class.cc: Bug#14400 - Query joins wrong rows from table which is subject of "concurrent insert". Fixed some DBUG_ENTER strings.
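A minimal two-session sketch of the concurrent-insert setup the description refers to; the table, data and statements are invented for illustration, and the original bug needed a join fetching by exact key value plus record pointer, which this simplified example does not fully reproduce:

```sql
-- Session 1: lock the table with READ LOCAL, which still allows
-- concurrent inserts at the end of the MyISAM data file.
CREATE TABLE t1 (a INT, b INT, KEY (a)) ENGINE=MyISAM;
INSERT INTO t1 VALUES (1, 1), (2, 2);
LOCK TABLES t1 READ LOCAL;

-- Session 2: a concurrent insert appends a row past the file length
-- that session 1 remembered when it took the lock.
INSERT INTO t1 VALUES (1, 3);

-- Session 1: rows inserted after the lock must stay invisible. Before the
-- fix, an exact-key fetch that hit such a row could step to the next key
-- and return a record with a different key value; it is now treated as
-- "key not found" for the concurrently inserted match.
SELECT * FROM t1 WHERE a = 1;
UNLOCK TABLES;
```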
-
unknown authored
CHECK TABLE could complain about a fully intact spatial index. A wrong comparison operator was used for table checking. The result was that it checked for non-matching spatial keys. This succeeded if at least two different keys were present, but failed if only the matching key was present. I fixed the key comparison.

myisam/mi_check.c: Bug#17877 - Corrupted spatial index. Fixed the comparison operator for checking a spatial index. Using MBR_EQUAL | MBR_DATA to compare for equality and include the data pointer in the comparison. The latter finds the index entry that points to the current record. This is necessary for non-unique indexes. The old operator, SEARCH_SAME, is unknown to the rtree search functions and handled like MBR_DISJOINT.
myisam/mi_key.c: Bug#17877 - Corrupted spatial index. Added a missing DBUG_RETURN.
myisam/rt_index.c: Bug#17877 - Corrupted spatial index. Included the data pointer in the copy of the search key. This is necessary for searching the index entry that points to a specific record if the search_flag contains MBR_DATA.
myisam/rt_mbr.c: Bug#17877 - Corrupted spatial index. Extended the RT_CMP() macro with an assert for an unexpected comparison operator.
mysql-test/r/gis-rtree.result: Bug#17877 - Corrupted spatial index. The test result.
mysql-test/t/gis-rtree.test: Bug#17877 - Corrupted spatial index. The test case.
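A hypothetical illustration of the "only the matching key is present" case; the shipped test lives in mysql-test/t/gis-rtree.test and may differ:

```sql
CREATE TABLE t1 (g GEOMETRY NOT NULL, SPATIAL KEY (g)) ENGINE=MyISAM;
-- Only the matching key value exists in the rtree index.
INSERT INTO t1 VALUES (GeomFromText('POINT(1 1)'));
-- Before the fix, the wrong comparison operator made the check look for
-- non-matching keys and report this intact index as corrupted; it now passes.
CHECK TABLE t1 EXTENDED;
```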
-
- 27 Jun, 2006 6 commits
-
-
unknown authored
Produce a warning if DATA/INDEX DIRECTORY is specified in an ALTER TABLE statement. Ignoring these options is documented in the symbolic links section of the manual.

mysql-test/r/symlink.result: Modified test result according to fix for BUG#1662.
sql/sql_parse.cc: Produce a warning if DATA/INDEX DIRECTORY is specified in an ALTER TABLE statement.
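A hedged sketch of the new behaviour; the table and directory paths are made up for illustration:

```sql
CREATE TABLE t1 (a INT) ENGINE=MyISAM;
-- DATA/INDEX DIRECTORY are ignored by ALTER TABLE (as documented in the
-- symbolic links section); a warning is now produced instead of silence.
ALTER TABLE t1 DATA DIRECTORY = '/tmp/mysql_data' INDEX DIRECTORY = '/tmp/mysql_index';
SHOW WARNINGS;
```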
-
unknown authored
Dec. 31st, 9999 is still a valid date; only starting with Jan 1st, 10000 do things become invalid (Bug #12356).

mysql-test/r/func_sapdb.result: test cases for date range edge cases added
mysql-test/r/func_time.result: test cases for date range edge cases added
mysql-test/t/func_sapdb.test: test cases for date range edge cases added
mysql-test/t/func_time.test: test cases for date range edge cases added
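A few edge-case probes in the spirit of the added tests; these are illustrative guesses, not the actual contents of func_time.test or func_sapdb.test:

```sql
-- '9999-12-31' is the last valid date and must still work:
SELECT TO_DAYS('9999-12-31');
SELECT DATEDIFF('9999-12-31', '2000-01-01');
-- Crossing into the year 10000 is invalid and should yield NULL:
SELECT DATE_ADD('9999-12-31', INTERVAL 1 DAY);
```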
-
unknown authored
into mysql.com:/home/hf/work/mysql-4.1.clean
-
unknown authored
Very complex select statements can create temporary tables that are too big to be represented as a MyISAM table. This was not checked at table creation time, but only at open time. The result was an attempt to delete the "impossible" table. But if the server is built --with-raid, MyISAM tries to open the table before deleting the files: it needs to find out whether the table uses raid support and how many raid chunks there are. This is done with an open "for repair", which will almost always succeed. But in this case we have an "impossible" table, so the open failed and the files were not deleted. The error message was also a bit unspecific. An open error in this situation is now treated as the table having no raid support, so deletion of the normal data file is attempted. This may, however, leave existing raid chunks behind. I also added a check in mi_create() to prevent the creation of an "impossible" table, with a more descriptive error message in this case. No test case: the required select statement is far too large for the test suite, so a test script was attached to the bug report instead.

myisam/mi_create.c: Bug#11824 - internal /tmp/*.{MYD,MYI} files remain, causing subsequent queries to fail. Added a check to mi_create() that the table description header of the index file does not exceed 64KB. The header has only 16 bits to encode its length.
myisam/mi_delete_table.c: Bug#11824 - internal /tmp/*.{MYD,MYI} files remain, causing subsequent queries to fail. Interpret an error in table open as the table not having a raid configuration. Thus try to delete the normal data file, but leave behind raid chunks if they exist.
-
unknown authored
- correction of previous patch
-
unknown authored
- make sure to allocate just enough pages in the fragments by using the actual row count from the backup, to avoid over-allocation of pages to fragments, and thus avoid the bug (a hedged SQL sketch of the related table options follows the file list below)

All per-file annotations below refer to Bug #19852 - Restoring backup made from cluster with full data memory fails:

ndb/include/kernel/GlobalSignalNumbers.h: distribute fragment complete to all participants to update row count
ndb/include/kernel/signaldata/BackupContinueB.hpp: time slice writing of fragment info to ctl file
ndb/include/kernel/signaldata/BackupImpl.hpp: 32 -> 64 bit on bytes and records; new signal fragment complete to all participants
ndb/include/kernel/signaldata/BackupSignalData.hpp: 32 -> 64 bit on bytes and records
ndb/include/kernel/signaldata/DictTabInfo.hpp: add min and max rows to dict tab info
ndb/include/kernel/signaldata/LqhFrag.hpp: added min and max rows to add frag req
ndb/include/kernel/signaldata/TupFrag.hpp: added min and max rows to add frag req
ndb/include/ndbapi/NdbDictionary.hpp: added get/set of min and max rows
ndb/src/common/debugger/signaldata/BackupImpl.cpp: 32 -> 64 bit on bytes and records
ndb/src/common/debugger/signaldata/BackupSignalData.cpp: 32 -> 64 bit on bytes and records
ndb/src/common/debugger/signaldata/DictTabInfo.cpp: added min and max rows to dict tab info
ndb/src/common/debugger/signaldata/LqhFrag.cpp: added min and max rows to frag req
ndb/src/kernel/blocks/backup/Backup.cpp: new section in backup with per fragment info in ctl file; 32 -> 64 bit on bytes and records
ndb/src/kernel/blocks/backup/Backup.hpp: new section in backup with per fragment info in ctl file; 32 -> 64 bit on bytes and records
ndb/src/kernel/blocks/backup/BackupFormat.hpp: new section in backup with per fragment info in ctl file; 32 -> 64 bit on bytes and records
ndb/src/kernel/blocks/backup/BackupInit.cpp: new signal fragment complete to all participants
ndb/src/kernel/blocks/dbdict/Dbdict.cpp: added max and min rows to dict table object
ndb/src/kernel/blocks/dbdict/Dbdict.hpp: added max and min rows to dict table object
ndb/src/kernel/blocks/dblqh/Dblqh.hpp: added min and max rows to frag req
ndb/src/kernel/blocks/dblqh/DblqhMain.cpp: added min and max rows to frag req
ndb/src/kernel/blocks/dbtup/Dbtup.hpp: added min and max rows to frag req
ndb/src/kernel/blocks/dbtup/DbtupMeta.cpp: added min and max rows to frag req; move memory allocation for the fragment to after adding of attributes to get the correct headsize; allocate pages to fragments according to the min rows setting
ndb/src/kernel/blocks/dbtup/DbtupPageMap.cpp: grow page allocation starting from 2, irrespective of first page allocation
ndb/src/mgmsrv/MgmtSrvr.cpp: 32 -> 64 bits on bytes and records
ndb/src/mgmsrv/MgmtSrvr.hpp: 32 -> 64 bits on bytes and records
ndb/src/ndbapi/NdbDictionary.cpp: min and max rows in dict
ndb/src/ndbapi/NdbDictionaryImpl.cpp: min and max rows in dict
ndb/src/ndbapi/NdbDictionaryImpl.hpp: min and max rows in dict
ndb/tools/restore/Restore.cpp: add retrieval of fragment info
ndb/tools/restore/Restore.hpp: add retrieval of fragment info
ndb/tools/restore/consumer_restore.cpp: set min in restore to the actual row count (this is the actual bug fix)
sql/ha_ndbcluster.cc: set min and max rows according to sql definition
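A hedged SQL-level sketch of the MIN_ROWS/MAX_ROWS table options that sql/ha_ndbcluster.cc now passes on to the NDB dictionary; the table and values are made up, and during restore the minimum is set from the actual row count in the backup:

```sql
-- MIN_ROWS gives the kernel a hint for pre-allocating pages to fragments;
-- MAX_ROWS bounds the allocation, avoiding over-allocation on restore.
CREATE TABLE t1 (
  a INT PRIMARY KEY,
  b VARCHAR(32)
) ENGINE=NDBCLUSTER MIN_ROWS = 100000 MAX_ROWS = 1000000;
```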
-
- 26 Jun, 2006 4 commits
-
-
unknown authored
into mysql.com:/Users/kent/mysql/bk/mysql-4.1-new
scripts/make_binary_distribution.sh: Auto merged
scripts/make_sharedlib_distribution.sh: Auto merged
-
unknown authored
For compatibility, don't use {..,..} in pattern matching.

make_binary_distribution.sh: Added .dylib and .sl as shared library extensions
scripts/make_binary_distribution.sh: Added .dylib and .sl as shared library extensions
scripts/make_sharedlib_distribution.sh: For compatibility, don't use {..,..} in pattern matching
-
unknown authored
into mysql.com:/home/hf/work/mysql-4.1.clean
sql/sql_parse.cc: Auto merged
-
unknown authored
into mysql.com:/home/hf/work/mysql-4.1.clean
-
- 23 Jun, 2006 1 commit
-
-
unknown authored
A UNIQUE KEY consisting of NOT NULL columns was displayed as PRIMARY KEY in "DESC t1". According to the code, that was intentional behaviour, for reasons unknown to me. This code was written before bitkeeper time, so I cannot check who made the change and why. After a discussion on dev-public, a decision was made to remove this code.

mysql-test/r/key.result: Adding test case.
mysql-test/t/key.test: Adding test case.
sql/table.cc: Removing old wrong code.
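A hypothetical illustration; the shipped test is in mysql-test/t/key.test and may differ:

```sql
CREATE TABLE t1 (a INT NOT NULL, UNIQUE KEY (a));
-- Before this patch, the Key column for a was displayed as PRI even though
-- no PRIMARY KEY was declared; the patch removes that special case.
DESC t1;
```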
-
- 22 Jun, 2006 2 commits
-
-
unknown authored
Disable the simplistic auto dependency scan for test/bench (bug#20078)

support-files/mysql.spec.sh: Disable the simplistic auto dependency scan for test/bench (bug#20078)
-
unknown authored
The AsBinary function returns the VARCHAR data type with binary collation. This can cause problems for clients that treat that kind of data differently from the BLOB type. So now AsBinary returns a BLOB.

mysql-test/r/gis.result: result fixed
mysql-test/t/gis.test: test case added
sql/item_geofunc.h: Now we return MYSQL_TYPE_BLOB for the asBinary function
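A hedged sketch of the change; the actual test case is in mysql-test/t/gis.test:

```sql
CREATE TABLE t1 (g GEOMETRY);
INSERT INTO t1 VALUES (GeomFromText('POINT(1 1)'));
-- AsBinary() used to be typed as VARCHAR with binary collation;
-- with this patch its result is typed as a BLOB.
SELECT AsBinary(g) FROM t1;
CREATE TABLE t2 SELECT AsBinary(g) AS wkb FROM t1;
DESC t2;  -- wkb should now come out as a blob type
```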
-
- 21 Jun, 2006 10 commits
-
-
unknown authored
into moonbone.local:/work/tmp_merge-4.1-opt-mysql
-
unknown authored
This bug in Field_string::cmp resulted in a wrong comparison with keys in partial indexes over multi-byte character fields. For example, a field a declared as varchar(16) collate utf8_unicode_ci with INDEX(a(4)) gives such an index. Wrong key comparisons could lead to wrong result sets if the selected query execution plan used a range scan by a partial index over a utf8 character field. This also caused wrong results in many other cases.

mysql-test/t/ctype_utf8.test: Added test cases for bug #14896.
mysql-test/r/ctype_utf8.result: Added test cases for bug #14896.
sql/field.cc: Fixed bug #14896 (see the description above).
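A sketch built from the example in the description; the inserted values and the query are invented, and the actual cases are in mysql-test/t/ctype_utf8.test:

```sql
CREATE TABLE t1 (
  a VARCHAR(16) CHARACTER SET utf8 COLLATE utf8_unicode_ci,
  INDEX (a(4))
);
INSERT INTO t1 VALUES ('aaaa'), ('aaab'), ('bbbb');
-- A range scan over the 4-character prefix index could previously compare
-- keys incorrectly for this multi-byte character set and return wrong rows.
SELECT * FROM t1 WHERE a BETWEEN 'aaaa' AND 'aaab';
```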
-
unknown authored
into poseidon.ndb.mysql.com:/home/tomas/mysql-5.0-main
mysql-test/mysql-test-run.sh: Auto merged
-
unknown authored
into may.pils.ru:/home/svoj/devel/mysql/BUG20357/mysql-4.1
-
unknown authored
-
unknown authored
into may.pils.ru:/home/svoj/devel/mysql/BUG20357/mysql-4.1
sql/opt_sum.cc: Auto merged
mysql-test/r/myisam.result: SCCS merged
mysql-test/t/myisam.test: SCCS merged
-
unknown authored
functions in queries
Using MAX()/MIN() on a table with disabled indexes (by ALTER TABLE) results in error 124 (wrong index) from the storage engine. The problem was that the optimizer used a disabled index to optimize MAX()/MIN(). Normally it must skip disabled indexes and perform a table scan. This patch skips disabled indexes for min/max optimization.

mysql-test/r/myisam.result: Test case for BUG#20357.
mysql-test/t/myisam.test: Test case for BUG#20357.
sql/opt_sum.cc: Skip disabled/ignored indexes for min/max optimization.
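A hedged illustration of the failure mode; the shipped test case is in mysql-test/t/myisam.test:

```sql
CREATE TABLE t1 (a INT, KEY (a)) ENGINE=MyISAM;
INSERT INTO t1 VALUES (1), (2), (3);
-- DISABLE KEYS turns off the non-unique index on a.
ALTER TABLE t1 DISABLE KEYS;
-- Before the fix the optimizer still used the disabled index and the
-- storage engine returned error 124; it now falls back to a table scan.
SELECT MAX(a) FROM t1;
ALTER TABLE t1 ENABLE KEYS;
```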
-
unknown authored
into poseidon.ndb.mysql.com:/home/tomas/mysql-5.0-main
-
unknown authored
into poseidon.ndb.mysql.com:/home/tomas/mysql-5.0-main
-
unknown authored
into mysql.com:/usr/home/ram/work/mysql-5.0
mysql-test/r/func_str.result: Auto merged
mysql-test/t/func_str.test: Auto merged
mysql-test/t/func_time.test: Auto merged
sql/item_strfunc.cc: Auto merged
sql/item_strfunc.h: Auto merged
mysql-test/r/func_time.result: SCCS merged
-
- 20 Jun, 2006 10 commits
-
-
unknown authored
Additional fix for #16377 for bigendian platforms

sql_select.cc, select.result, select.test: After merge fix
mysql-test/t/select.test: After merge fix
mysql-test/r/select.result: After merge fix
sql/sql_select.cc: After merge fix
sql/field.h: Additional fix for #16377 for bigendian platforms
sql/field.cc: Additional fix for #16377 for bigendian platforms
-
unknown authored
into moonbone.local:/work/tmp_merge-5.0-opt-mysql
-
unknown authored
into moonbone.local:/work/tmp_merge-4.1-opt-mysql
-
unknown authored
mysql-test/t/select.test: Auto merged
sql/item_cmpfunc.cc: Auto merged
-
unknown authored
Added test case for bug#18759 Incorrect string to numeric conversion.

select.test: Added test case for bug#18759 Incorrect string to numeric conversion.
item_cmpfunc.cc: Cleanup after fix for bug#18360 removal
sql/item_cmpfunc.cc: Cleanup after fix for bug#18360 removal
mysql-test/t/select.test: Added test case for bug#18759 Incorrect string to numeric conversion.
mysql-test/r/select.result: Added test case for bug#18759 Incorrect string to numeric conversion.
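The log does not show the queries added for bug#18759, so the following is only a generic, hedged illustration of MySQL's string-to-numeric conversion in comparisons:

```sql
-- The string operand is converted to a number before comparing:
SELECT '1e1' = 10;   -- 1 (true), since '1e1' converts to 10
SELECT ' 1' = 1;     -- leading spaces are ignored by the conversion
SELECT '1x' + 0;     -- trailing garbage is dropped, with a truncation warning
```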
-
unknown authored
mysql-test/r/insert_select.result: Auto merged
mysql-test/t/insert_select.test: Auto merged
-
unknown authored
into mysql.com:/home/emurphy/mysql-5.0-heikki
-
unknown authored
Fixes bug#17264. For ALTER TABLE on win32, a TL_WRITE (=10) lock is used instead of TL_WRITE_ALLOW_READ (=6) for successful operation completion. However, in the InnoDB handler TL_WRITE is lifted to TL_WRITE_ALLOW_WRITE, which causes a race condition when several clients run ALTER TABLE simultaneously.

mysql-test/r/lock_multi.result: Test case for bug#17264.
mysql-test/t/lock_multi.test: Test case for bug#17264.
-
unknown authored
into poseidon.ndb.mysql.com:/home/tomas/mysql-5.0-main
ndb/src/ndbapi/ndberror.c: Auto merged
-
unknown authored
into poseidon.ndb.mysql.com:/home/tomas/mysql-5.0-main
ndb/src/kernel/blocks/dbdih/DbdihMain.cpp: Auto merged
ndb/src/ndbapi/ndberror.c: Auto merged
-