- 06 Jul, 2006 3 commits
-
-
unknown authored
into gbichot3.local:/home/mysql_src/mysql-5.0
-
unknown authored
into gbichot3.local:/home/mysql_src/mysql-5.0
sql/handler.cc: Auto merged
sql/handler.h: Auto merged
sql/sql_insert.cc: Auto merged
-
unknown authored
BUG#20524 "auto_increment_* not observed when inserting a too large value": the bug was that if MySQL generated a value for an auto_increment column, based on the auto_increment_* variables, and this value was bigger than the column's max possible value, then that max possible value was inserted (after issuing a warning). But this didn't honour the auto_increment_* variables (and so could cause conflicts in a master-master replication setup where one master is supposed to generate only even numbers and the other only odd numbers), so now we "round down" this max possible value to honour the auto_increment_* variables before inserting it (a sketch of the rounding is given after the per-file notes below).
mysql-test/r/rpl_auto_increment.result:
  Result update. Before the fix, the master inserted 127 into t1 (which didn't honour the auto_increment_* variables!), instead of failing with "duplicate key 125" like now.
mysql-test/t/rpl_auto_increment.test:
  Test for BUG#20524 "auto_increment_* not observed when inserting a too large value". We also check the pathological case (table t2) where it's impossible to "round down". The fixer of BUG#20573 will be able to use table t2 for testing his fix.
sql/handler.cc:
  If handler::update_auto_increment() generates a value larger than the field's max possible value, we used to simply insert this max possible value (after pushing a warning). Now we "round down" this max possible value to honour the auto_increment_* variables (if at all possible) before trying the insertion.
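For illustration only, here is a minimal C++ sketch of the "round down" idea, assuming generated values follow the sequence offset + k * increment; the helper name and signature are invented and are not part of the actual patch to sql/handler.cc.

    #include <stdint.h>

    // Hypothetical helper (not the real handler code): round max_value down to
    // the largest v <= max_value of the form offset + k * increment, i.e. a
    // value the auto_increment_offset/auto_increment_increment sequence could
    // actually generate.  Returns false in the pathological case where no such
    // value exists (max_value < offset).
    static bool round_down_to_sequence(uint64_t max_value, uint64_t offset,
                                       uint64_t increment, uint64_t *result)
    {
      if (increment == 0 || max_value < offset)
        return false;                          // impossible to "round down"
      uint64_t k = (max_value - offset) / increment;
      *result = offset + k * increment;        // e.g. max 127, offset 2, inc 2 -> 126
      return true;
    }

Even after rounding down, the insertion may of course still hit a duplicate-key error, which is exactly what the rpl_auto_increment test now expects.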
-
- 05 Jul, 2006 1 commit
-
-
unknown authored
BUG#20188 "REPLACE or ON DUPLICATE KEY UPDATE in auto_increment breaks binlog": if the slave's table had a higher auto_increment counter than the master's (even though all rows of the two tables were identical), then in some cases REPLACE and INSERT ON DUPLICATE KEY UPDATE failed to replicate statement-based (they inserted different values on the slave than on the master). write_record() contained a "thd->next_insert_id=0" to force an adjustment of thd->next_insert_id after the update or replacement. But it is this assignment that introduced indeterminism of the statement on the slave, and thus the bug. For ON DUPLICATE, we replace that assignment by a call to handler::adjust_next_insert_id_after_explicit_value(), which is deterministic (it does not depend on the slave table's autoinc counter); a sketch of this idea is given after the per-file notes below. For REPLACE, this assignment can simply be removed (as REPLACE can't insert a number larger than thd->next_insert_id). We also move a too-early restore_auto_increment() down to where we really know that we can restore the value.
mysql-test/r/rpl_insert_id.result:
  Result update; without the bugfix, the slave's "3 350" were "4 350".
mysql-test/t/rpl_insert_id.test:
  Test for BUG#20188 "REPLACE or ON DUPLICATE KEY UPDATE in auto_increment breaks binlog". There is, in this order:
  - a test of the bug for the case of REPLACE
  - a test of basic ON DUPLICATE KEY UPDATE functionality, which was not tested before
  - a test of the bug for the case of ON DUPLICATE KEY UPDATE
sql/handler.cc:
  The adjustment of next_insert_id when inserting a big explicit value is moved to a separate method, to be used elsewhere.
sql/handler.h:
  See handler.cc.
sql/sql_insert.cc:
  restore_auto_increment() means "I know I won't use this autogenerated autoincrement value, you are free to reuse it for the next row". But we were calling restore_auto_increment() in the case of REPLACE: if write_row() fails inserting the row, we don't know that we won't use the value, as we are going to try again by doing internally an UPDATE of the existing row, or a DELETE of the existing row and then an INSERT. So I move restore_auto_increment() further down, to where we know for sure we failed all possibilities for the row. Additionally, in case of REPLACE, we don't need to reset THD::next_insert_id: the value of thd->next_insert_id will be suitable for the next row. In case of ON DUPLICATE KEY UPDATE, resetting thd->next_insert_id is also wrong (it breaks statement-based binlog), but it cannot simply be removed, as thd->next_insert_id must be adjusted if the explicit value exceeds it. We now do the adjustment by calling handler::adjust_next_insert_id_after_explicit_value() (which, contrary to thd->next_insert_id=0, does not depend on the slave table's autoinc counter, and so is deterministic).
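The method name above comes from the commit itself; the body below is only a guessed sketch of why such an adjustment is deterministic, not the actual implementation in sql/handler.cc.

    #include <stdint.h>

    // Sketch: adjust the session's next auto-increment value after an explicit
    // value has been inserted.  It looks only at the statement's own data (the
    // explicit value and the current next_insert_id), never at the storage
    // engine's internal counter, so master and slave compute the same result
    // under statement-based replication.
    struct session_autoinc
    {
      uint64_t next_insert_id;

      void adjust_next_insert_id_after_explicit_value(uint64_t explicit_value)
      {
        // Only ever move forward; smaller explicit values need no adjustment.
        if (explicit_value >= next_insert_id)
          next_insert_id = explicit_value + 1;
      }
    };

Contrast this with the old "thd->next_insert_id=0", which forced the value to be re-derived from the table's auto-increment counter and therefore could differ between master and slave when their counters had diverged.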
-
- 03 Jul, 2006 4 commits
-
-
unknown authored
into mysql.com:/users/lthalmann/bkroot/mysql-5.0-rpl
-
unknown authored
Disabling 'rpl_openssl'. mysql-test/t/rpl_openssl.test: Disabling 'rpl_openssl'.
-
unknown authored
Enabling rpl_openssl.test for Windows to check that currently it still hangs (because I can't reproduce this on my machine). mysql-test/t/rpl_openssl.test: Enabling rpl_openssl.test for Windows
-
unknown authored
into mysql.com:/users/lthalmann/bk/MERGE/mysql-5.0-merge sql/ha_ndbcluster.cc: Auto merged
-
- 29 Jun, 2006 4 commits
- 28 Jun, 2006 8 commits
-
-
unknown authored
into mysql.com:/users/lthalmann/bk/MERGE/mysql-5.0-merge sql/ha_ndbcluster.cc: Auto merged
-
unknown authored
In the Windows build files, the "Max nt" configuration for some reason had the mysql_client_test project disabled. Enable it. VC++Files/mysql.sln: The "Max nt" configuration for some reason had the mysql_client_test project disabled. Enable it.
-
unknown authored
-
unknown authored
Improved definition of mysys configuration for -nt builds.
VC++Files/mysql.sln:
  Use the name 'nt' instead of 'Release' for configuration.
VC++Files/mysys/mysys.vcproj:
  Use the name 'nt' instead of 'Release' for configuration. Use separate output files for NT and non-NT configurations.
-
unknown authored
Make sure for the mysys project that __NT__ is defined in *nt solution configurations (but not in other configurations).
VC++Files/mysql.sln:
  Define __NT__ in mysys for *nt configurations.
VC++Files/mysys/mysys.vcproj:
  Add configurations with __NT__ defined.
mysql-test/mysql-test-run.pl:
  Also allow testing a "Max nt" build.
-
unknown authored
-
unknown authored
into mysql.com:/home/alexi/bugs/mysql-5.0-19208
-
unknown authored
and BUG#19208 "Test 'rpl000017' hangs on Windows". Both bugs are caused by attempting to delete an open file and to immediately create a new one with the same name. On Windows this can be supported only on NT platforms (by using the FILE_SHARE_DELETE mode and by renaming the file before deletion); a sketch of this rename-before-delete trick is given after the per-file notes below. Because deleting files that are not closed is not supported on all platforms (e.g. Win 98/ME), this is to be considered harmful and should be eliminated by a "code redesign".
VC++Files/mysys/mysys.vcproj:
  To be sure that __NT__ is defined for Win configurations. Temporary, to be changed in a more appropriate way.
include/my_sys.h:
  Adding my_delete_allow_opened, to be invoked to delete a (possibly) not closed file on Windows NT platforms.
mysys/my_delete.c:
  Adding the nt_share_delete() function, implementing deletion of a (possibly) not closed file on Windows NT.
sql/log.cc:
  MYSQL_LOG::reset_logs(): deleting binlog files that are usually not closed.
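A minimal sketch of the rename-before-delete idea described above; this is illustrative only, not the real nt_share_delete() code from mysys/my_delete.c, and the temporary-name suffix is invented.

    #include <windows.h>
    #include <cstdio>

    // On NT-class Windows, a file that was opened with FILE_SHARE_DELETE can
    // be renamed aside and then deleted, which frees its original name
    // immediately even while handles to it are still open.
    static bool delete_allow_opened_sketch(const char *name)
    {
      char tmp[MAX_PATH];
      std::snprintf(tmp, sizeof(tmp), "%s.to-be-deleted", name); // invented suffix
      if (!MoveFileA(name, tmp))      // rename the (possibly open) file aside
        return false;
      return DeleteFileA(tmp) != 0;   // removal completes when the last handle closes
    }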
-
- 27 Jun, 2006 2 commits
-
-
unknown authored
- correction of previous patch
-
unknown authored
Bug #19852 Restoring backup made from cluster with full data memory fails:
- make sure to allocate just enough pages in the fragments by using the actual row count from the backup, to avoid over-allocation of pages to fragments, and thus avoid the bug (a sketch of the idea is given after the per-file notes below)
ndb/include/kernel/GlobalSignalNumbers.h: distribute fragment complete to all participants to update row count
ndb/include/kernel/signaldata/BackupContinueB.hpp: time slice writing of fragment info to ctl file
ndb/include/kernel/signaldata/BackupImpl.hpp: 32 -> 64 bit on bytes and records; new signal fragment complete to all participants
ndb/include/kernel/signaldata/BackupSignalData.hpp: 32 -> 64 bit on bytes and records
ndb/include/kernel/signaldata/DictTabInfo.hpp: add min and max rows to dict tab info
ndb/include/kernel/signaldata/LqhFrag.hpp: added min and max rows to add frag req
ndb/include/kernel/signaldata/TupFrag.hpp: added min and max rows to add frag req
ndb/include/ndbapi/NdbDictionary.hpp: added get/set of min max rows
ndb/src/common/debugger/signaldata/BackupImpl.cpp: 32 -> 64 bit on bytes and records
ndb/src/common/debugger/signaldata/BackupSignalData.cpp: 32 -> 64 bit on bytes and records
ndb/src/common/debugger/signaldata/DictTabInfo.cpp: added min and max rows to dict tab info
ndb/src/common/debugger/signaldata/LqhFrag.cpp: added min and max rows to frag req
ndb/src/kernel/blocks/backup/Backup.cpp: new section in backup with per fragment info in ctl file; 32 -> 64 bit on bytes and records
ndb/src/kernel/blocks/backup/Backup.hpp: new section in backup with per fragment info in ctl file; 32 -> 64 bit on bytes and records
ndb/src/kernel/blocks/backup/BackupFormat.hpp: new section in backup with per fragment info in ctl file; 32 -> 64 bit on bytes and records
ndb/src/kernel/blocks/backup/BackupInit.cpp: new signal fragment complete to all participants
ndb/src/kernel/blocks/dbdict/Dbdict.cpp: added max and min rows to dict table object
ndb/src/kernel/blocks/dbdict/Dbdict.hpp: added max and min rows to dict table object
ndb/src/kernel/blocks/dblqh/Dblqh.hpp: added min and max rows to frag req
ndb/src/kernel/blocks/dblqh/DblqhMain.cpp: added min and max rows to frag req
ndb/src/kernel/blocks/dbtup/Dbtup.hpp: added min and max rows to frag req
ndb/src/kernel/blocks/dbtup/DbtupMeta.cpp: added min and max rows to frag req; move memory allocation to fragment to after adding of attributes to get correct headsize; allocate pages to fragments according to min rows setting
ndb/src/kernel/blocks/dbtup/DbtupPageMap.cpp: grow page allocation starting from 2 irrespective of first page allocation
ndb/src/mgmsrv/MgmtSrvr.cpp: 32 -> 64 bits on bytes and records
ndb/src/mgmsrv/MgmtSrvr.hpp: 32 -> 64 bits on bytes and records
ndb/src/ndbapi/NdbDictionary.cpp: min and max rows in dict
ndb/src/ndbapi/NdbDictionaryImpl.cpp: min and max rows in dict
ndb/src/ndbapi/NdbDictionaryImpl.hpp: min and max rows in dict
ndb/tools/restore/Restore.cpp: add retrieval of fragment info
ndb/tools/restore/Restore.hpp: add retrieval of fragment info
ndb/tools/restore/consumer_restore.cpp: set min in restore to the actual row count (this is the actual bug fix)
sql/ha_ndbcluster.cc: set min and max rows according to sql definition
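As a hedged illustration of where the actual fix lands (the row count from the backup feeding the dictionary's min-rows), here is a sketch using the get/set min/max-rows accessors mentioned above; the function name and the way the values are obtained are assumptions, not the real ndb_restore code.

    #include <NdbApi.hpp>

    // Sketch: when restore recreates a table, seed min rows with the actual
    // row count taken from the backup's per-fragment info so the kernel
    // allocates just enough pages per fragment, and carry over the max rows
    // that came from the SQL definition (MAX_ROWS) if there is one.
    static void apply_row_hints(NdbDictionary::Table &tab,
                                Uint64 rows_in_backup,     // hypothetical: from fragment info
                                Uint64 max_rows_from_sql)  // hypothetical: from the SQL definition
    {
      tab.setMinRows(rows_in_backup);
      if (max_rows_from_sql > 0)
        tab.setMaxRows(max_rows_from_sql);
    }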
-
- 26 Jun, 2006 1 commit
-
-
unknown authored
into mysql.com:/users/lthalmann/bk/MERGE/mysql-5.0-merge
-
- 24 Jun, 2006 1 commit
-
-
unknown authored
Sometimes the helper connection (that is watching for the main connection to time out) would itself time out first, causing the test to fail.
mysql-test/t/wait_timeout.test:
  Increase the connection timeout in connection wait_con so we will not lose the connection that is watching for the real wait_timeout to trigger.
-
- 21 Jun, 2006 7 commits
-
-
unknown authored
into poseidon.ndb.mysql.com:/home/tomas/mysql-5.0-main mysql-test/mysql-test-run.sh: Auto merged
-
unknown authored
-
unknown authored
into mysql.com:/users/lthalmann/bk/MERGE/mysql-5.0-merge sql/ha_ndbcluster.cc: Auto merged
-
unknown authored
into poseidon.ndb.mysql.com:/home/tomas/mysql-5.0-main
-
unknown authored
-
unknown authored
into poseidon.ndb.mysql.com:/home/tomas/mysql-5.0-main
-
unknown authored
into mysql.com:/usr/home/ram/work/mysql-5.0
mysql-test/r/func_str.result: Auto merged
mysql-test/t/func_str.test: Auto merged
mysql-test/t/func_time.test: Auto merged
sql/item_strfunc.cc: Auto merged
sql/item_strfunc.h: Auto merged
mysql-test/r/func_time.result: SCCS merged
-
- 20 Jun, 2006 9 commits
-
-
unknown authored
Additional fix for #16377 for bigendian platforms.
sql_select.cc, select.result, select.test: After merge fix.
mysql-test/t/select.test: After merge fix
mysql-test/r/select.result: After merge fix
sql/sql_select.cc: After merge fix
sql/field.h: Additional fix for #16377 for bigendian platforms
sql/field.cc: Additional fix for #16377 for bigendian platforms
-
unknown authored
into moonbone.local:/work/tmp_merge-5.0-opt-mysql
-
unknown authored
into moonbone.local:/work/tmp_merge-4.1-opt-mysql
-
unknown authored
mysql-test/t/select.test: Auto merged sql/item_cmpfunc.cc: Auto merged
-
unknown authored
Added test case for bug#18759 Incorrect string to numeric conversion.
select.test: Added test case for bug#18759 Incorrect string to numeric conversion.
item_cmpfunc.cc: Cleanup after fix for bug#18360 removal.
sql/item_cmpfunc.cc: Cleanup after fix for bug#18360 removal.
mysql-test/t/select.test: Added test case for bug#18759 Incorrect string to numeric conversion.
mysql-test/r/select.result: Added test case for bug#18759 Incorrect string to numeric conversion.
-
unknown authored
mysql-test/r/insert_select.result: Auto merged mysql-test/t/insert_select.test: Auto merged
-
unknown authored
into mysql.com:/home/emurphy/mysql-5.0-heikki
-
unknown authored
Fixes bug#17264: for ALTER TABLE on win32, a TL_WRITE (=10) lock is used instead of TL_WRITE_ALLOW_READ (=6) so the operation can complete successfully; however, here in the InnoDB handler TL_WRITE was lifted to TL_WRITE_ALLOW_WRITE, which caused a race condition when several clients ran ALTER TABLE simultaneously (a sketch of the downgrade decision is given after the per-file notes below).
mysql-test/r/lock_multi.result: Test case for bug#17264.
mysql-test/t/lock_multi.test: Test case for bug#17264.
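A hedged sketch of the downgrade decision described above; it is not the actual ha_innodb.cc store_lock() code, the enum lists only the values quoted in the message (plus an assumed value for TL_WRITE_ALLOW_WRITE), and it merely shows that the exclusive TL_WRITE must never be relaxed.

    #include <cassert>

    // Subset of thr_lock types, values as quoted in the commit message; the
    // value of TL_WRITE_ALLOW_WRITE is an assumption for illustration.
    enum lock_type {
      TL_WRITE_ALLOW_WRITE = 5,
      TL_WRITE_ALLOW_READ  = 6,
      TL_WRITE             = 10
    };

    // Relax weaker write locks to gain concurrency, but keep TL_WRITE intact
    // so a win32 ALTER TABLE keeps its exclusive lock.
    static lock_type store_lock_sketch(lock_type requested)
    {
      if (requested >= TL_WRITE_ALLOW_READ && requested < TL_WRITE)
        return TL_WRITE_ALLOW_WRITE;
      return requested;
    }

    int main()
    {
      assert(store_lock_sketch(TL_WRITE) == TL_WRITE);                        // ALTER TABLE stays exclusive
      assert(store_lock_sketch(TL_WRITE_ALLOW_READ) == TL_WRITE_ALLOW_WRITE); // others may be relaxed
      return 0;
    }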
-
unknown authored
into poseidon.ndb.mysql.com:/home/tomas/mysql-5.0-main ndb/src/ndbapi/ndberror.c: Auto merged
-