- 17 Dec, 2009 2 commits
-
-
Ramil Kalimullin authored
-
Ramil Kalimullin authored
Problem: when inserting a record, we did not set the unused null bits in the record buffer if no default field values were used. That may lead to a wrong live checksum calculation. Fix: set the unused null bits in the record buffer in such cases.
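To make the effect concrete, here is a minimal standalone C sketch (the record layout and checksum are invented for illustration, not MySQL's): if the byte holding the null bits is only partially written, leftover buffer contents leak into a checksum computed over the whole record, so writing the unused bits explicitly keeps the checksum stable.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define RECORD_SIZE 8   /* invented layout: byte 0 = null bits, bytes 1..7 = field data */

static uint32_t checksum_sketch(const unsigned char *rec, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = sum * 31u + rec[i];
    return sum;
}

int main(void)
{
    unsigned char rec_a[RECORD_SIZE], rec_b[RECORD_SIZE];

    /* The same logical row assembled in two buffers that hold different
     * leftover bytes, as happens when a record buffer is reused. */
    memset(rec_a, 0xAA, sizeof(rec_a));
    memset(rec_b, 0x55, sizeof(rec_b));

    /* Writing the whole null-bits byte (used bit = 0x01, unused bits = 0)
     * instead of OR-ing in just the used bit is what keeps the two
     * checksums identical. */
    rec_a[0] = 0x01;
    rec_b[0] = 0x01;
    memset(rec_a + 1, 7, RECORD_SIZE - 1);
    memset(rec_b + 1, 7, RECORD_SIZE - 1);

    printf("a=%u b=%u\n", (unsigned) checksum_sketch(rec_a, sizeof(rec_a)),
                          (unsigned) checksum_sketch(rec_b, sizeof(rec_b)));
    return 0;
}
```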
-
- 16 Dec, 2009 4 commits
-
-
Magne Mahre authored
The bug is caused by a race condition between the INSERT DELAYED thread and the client thread's FLUSH TABLE. Contrary to what the test case (wrongly) assumed, FLUSH TABLE does not guarantee that the INSERT DELAYED is ever executed, so the execution of the test case is not deterministic. The fix is to verify deterministically that both threads have completed by checking the content of the table.
-
Georgi Kodinov authored
-
This test case tests circular replication across four hosts: A--->B--->C--->D--->A. Replication is slow and needs time to propagate all data around the circle; sometimes this takes longer than wait_condition.inc is willing to wait for all data to be replicated. This causes sporadic failures of the test case. This patch uses sync_slave_with_master to ensure that all data is replicated successfully around the circle.
-
- 15 Dec, 2009 8 commits
-
-
Georgi Kodinov authored
int join_read_key(JOIN_TAB*) The eq_ref access method uses TABLE_REF (accessed through JOIN_TAB) to save state and to track whether this is the first row it finds. This state was not reset on subquery re-execution, causing an assert. Fixed by resetting the state before the subquery is re-executed.
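A hedged standalone sketch of the pattern (the structure and function names below are invented, not MySQL's TABLE_REF or join_read_key): a lookup helper caches whether it has already fetched a row, and that flag must be cleared before the enclosing (sub)query runs again.

```c
#include <stdio.h>
#include <stdbool.h>

struct eq_ref_cache_sketch {
    bool have_row;   /* set after the first fetch for the current key */
    int  row;
};

static int read_key_sketch(struct eq_ref_cache_sketch *c, int key)
{
    if (!c->have_row) {          /* only hit the "table" once per key */
        c->row = key * 10;
        c->have_row = true;
    }
    return c->row;
}

static void reexecute_subquery_sketch(struct eq_ref_cache_sketch *c, int key)
{
    c->have_row = false;         /* the fix: reset cached state before re-execution */
    printf("row = %d\n", read_key_sketch(c, key));
}

int main(void)
{
    struct eq_ref_cache_sketch c = { false, 0 };
    reexecute_subquery_sketch(&c, 1);
    reexecute_subquery_sketch(&c, 2);   /* without the reset this would print 10 again */
    return 0;
}
```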
-
Georgi Kodinov authored
-
Mattias Jonsson authored
-
Georgi Kodinov authored
int join_read_key(JOIN_TAB*) The eq_ref access method uses TABLE_REF (accessed through JOIN_TAB) to save state and to track whether this is the first row it finds. This state was not reset on subquery re-execution, causing an assert. Fixed by resetting the state before the subquery is re-executed.
-
Alexander Barkov authored
Problem: add_collation did not check that cs->number is smaller than the number of elements in the all_charsets[] array, so the server could crash when loading an Index.xml file with a collation ID greater than the number of elements (for example when downgrading from 5.5). Fix: add a check that cs->number is within the valid range.
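A minimal sketch of the kind of range check added, with invented names standing in for all_charsets[] and add_collation:

```c
#include <stdio.h>

#define MAX_CHARSETS_SKETCH 256   /* stand-in for the real array size */

struct charset_info_sketch { unsigned number; const char *name; };

static struct charset_info_sketch *all_charsets_sketch[MAX_CHARSETS_SKETCH];

static int add_collation_sketch(struct charset_info_sketch *cs)
{
    /* Without this check, an ID read from Index.xml could index past the
     * end of the array and corrupt memory or crash the server. */
    if (cs->number >= MAX_CHARSETS_SKETCH)
        return 1;  /* out of range: reject the entry */
    all_charsets_sketch[cs->number] = cs;
    return 0;
}

int main(void)
{
    struct charset_info_sketch bad = { 1000, "too_big" };
    struct charset_info_sketch ok  = { 33,   "utf8_general_ci" };
    printf("bad rejected: %d\n", add_collation_sketch(&bad));
    printf("ok  rejected: %d\n", add_collation_sketch(&ok));
    return 0;
}
```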
-
Jon Olav Hauglid authored
check_key_in_view() had one code branch which returned with "return TRUE" rather than "DBUG_RETURN(TRUE)". Only affected debug builds. No test case added.
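As an illustration of why the paired macro matters, here is a hedged sketch with invented TRACE_ENTER/TRACE_RETURN macros (not the real DBUG implementation): exiting via a plain return skips the bookkeeping the RETURN macro performs, leaving the tracer's nesting depth off by one.

```c
#include <stdio.h>

static int trace_depth = 0;

#define TRACE_ENTER(name) \
    do { printf("%*s> %s\n", trace_depth * 2, "", (name)); trace_depth++; } while (0)
#define TRACE_RETURN(val) \
    do { trace_depth--; \
         printf("%*s< (depth back to %d)\n", trace_depth * 2, "", trace_depth); \
         return (val); } while (0)

static int check_key_sketch(int key)
{
    TRACE_ENTER("check_key_sketch");
    if (key < 0)
        TRACE_RETURN(1);   /* a bare "return 1;" here would skip the depth-- */
    TRACE_RETURN(0);
}

int main(void)
{
    check_key_sketch(-1);
    check_key_sketch(5);
    printf("final depth: %d (should be 0)\n", trace_depth);
    return 0;
}
```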
-
He Zhenxing authored
Problem: the purge_logs implementation in ndb (ndbcluster_binlog_index_purge_file) calls mysql_parse (with (thd->options & OPTION_BIN_LOG) == 0), but MYSQL_BIN_LOG first takes LOCK_log and then checks thd->options. The solution in this patch is to change rotate_and_purge so that it does not hold LOCK_log when calling purge_logs_before_date. I think this is safe, as other purge functions (e.g. purge_master_logs) are called without holding LOCK_log.
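A standalone sketch of the general locking pattern the fix follows (the names are invented; this is not the server's rotate_and_purge): do the work that actually needs the log lock, release it, and only then call the purge routine, since the purge implementation may itself need to take other locks.

```c
/* Build with: cc sketch.c -lpthread */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_log_sketch = PTHREAD_MUTEX_INITIALIZER;

static void purge_logs_sketch(void)
{
    /* In the real server this path may parse SQL and take unrelated locks. */
    printf("purging old binary logs\n");
}

static void rotate_and_purge_sketch(void)
{
    pthread_mutex_lock(&lock_log_sketch);
    printf("rotating binary log\n");     /* work that really needs the lock */
    pthread_mutex_unlock(&lock_log_sketch);

    /* Purge runs without the log lock held, mirroring how the existing
     * purge path (e.g. purge_master_logs) is already called. */
    purge_logs_sketch();
}

int main(void)
{
    rotate_and_purge_sketch();
    return 0;
}
```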
-
A 'LOAD DATA CONCURRENT [LOCAL] INFILE ...' statement is binlogged only as 'LOAD DATA [LOCAL] INFILE ...' in SBR and MBR. As a result, if replication is on, queries on slaves can be blocked by the replication SQL thread. This patch writes 'CONCURRENT' into the log event when the 'CONCURRENT' option is present in the original statement in SBR and MBR.
-
- 14 Dec, 2009 9 commits
-
-
Alexey Kopytov authored
-
Alexey Kopytov authored
All tests in the parts suite that use partitioning on a timezone-dependent expression were commented out, since such expressions are no longer valid in partitioning functions.
-
Andrei Elkin authored
-
Andrei Elkin authored
-
Mattias Jonsson authored
Includes both the patch from bug#48737 (without its test, which should go to next-mr) and the test for bug#49028.
-
Andrei Elkin authored
-
Satya B authored
When applying InnoDB snapshot 1.0.6, the storage engine name for the InnoDB plugin under Windows was changed from INNODB_PLUGIN to INNOBASE. This is wrong; this patch changes the name back to INNODB_PLUGIN.
-
Alexey Kopytov authored
-
lars-erik.bjork@sun.com authored
-
- 13 Dec, 2009 3 commits
-
-
lars-erik.bjork@sun.com authored
5.0 buffer overflow for ER_UPDATE_INFO, or truncated info message in 5.1: 5.0.86 has a buffer overflow/crash, and 5.1.40 has a truncated message. errmsg.txt contains this: ER_UPDATE_INFO rum "Linii identificate (matched): %ld Schimbate: %ld Atentionari (warnings): %ld" When that is sprintf'd into a buffer of size STRING_BUFFER_USUAL_SIZE, a buffer overflow can happen. The solution is to use MYSQL_ERRMSG_SIZE for the buffer size instead of STRING_BUFFER_USUAL_SIZE, which allows longer strings. To avoid potential crashes, we also use my_snprintf instead of sprintf.
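A small standard-C sketch of the bounded-formatting idea (using plain snprintf and an invented buffer-size constant rather than my_snprintf and MYSQL_ERRMSG_SIZE):

```c
#include <stdio.h>

#define ERRMSG_SIZE_SKETCH 512   /* stand-in for MYSQL_ERRMSG_SIZE */

int main(void)
{
    const char *fmt = "Linii identificate (matched): %ld Schimbate: %ld "
                      "Atentionari (warnings): %ld";
    char msg[ERRMSG_SIZE_SKETCH];

    /* A bounded formatter never writes past the buffer: at worst the text
     * is truncated, which is the trade-off the fix accepts by switching
     * from sprintf to my_snprintf. Plain snprintf shows the same idea. */
    snprintf(msg, sizeof(msg), fmt, 10L, 5L, 0L);
    printf("%s\n", msg);
    return 0;
}
```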
-
Alexey Kopytov authored
-
Alexey Kopytov authored
timestamp primary key Since TIMESTAMP values are adjusted by the current time zone settings in both numeric and string contexts, using any expression involving TIMESTAMP values as a (sub)partitioning function leads to non-deterministic behavior of partitioned tables. The effect may vary depending on the storage engine: either incorrect data is retrieved or stored, or an assertion fails. The root cause is that the calculated partition ID may differ from a previously calculated ID for the same data due to timezone adjustments of the partitioning expression value. Fixed by disallowing expressions involving TIMESTAMP values in partitioning functions, with the following two exceptions: 1. Creating a partitioned table that violates the above rule, or altering a table into one, is not allowed, but opening such an existing table results in a warning rather than an error so that the table can be fixed. 2. UNIX_TIMESTAMP() is the only way to get a timezone-independent value from a TIMESTAMP column, because it returns the internal representation (a time_t value) of a TIMESTAMP argument verbatim. So UNIX_TIMESTAMP(timestamp_column) is allowed and should be used to fix existing tables if one wants to use TIMESTAMP columns with partitioning.
-
- 12 Dec, 2009 1 commit
-
-
Staale Smedseng authored
As documented in the bug report, the double-checked locking pattern has inherent issues and cannot guarantee correct initialization. This patch replaces the logic in init_available_charsets() with the use of pthread_once(3). A wrapper function, my_pthread_once(), is introduced and is used in lieu of direct calls to init_available_charsets(). Related defines MY_PTHREAD_ONCE_* are also introduced. For the Windows platform, the implementation in lp:sysbench is ported. For single-threaded use, a simple define calls the function and sets the pthread_once control variable. Charset initialization is modified to use my_pthread_once().
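A minimal standalone sketch of the pthread_once(3) pattern the patch adopts (the init function here is a stand-in, not init_available_charsets(), and the my_pthread_once() wrapper itself is not shown):

```c
/* Build with: cc sketch.c -lpthread */
#include <pthread.h>
#include <stdio.h>

static pthread_once_t charsets_once = PTHREAD_ONCE_INIT;
static int charsets_ready = 0;

static void init_charset_data(void)
{
    /* Runs exactly once, even if many threads race to call it. */
    charsets_ready = 1;
}

static void use_charsets(void)
{
    /* Every caller funnels through pthread_once(); no hand-rolled
     * "check flag, lock, check again" sequence is needed. */
    pthread_once(&charsets_once, init_charset_data);
    printf("charsets_ready = %d\n", charsets_ready);
}

int main(void)
{
    use_charsets();
    use_charsets();   /* second call skips initialization */
    return 0;
}
```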
-
- 11 Dec, 2009 13 commits
-
-
Kent Boortz authored
compatibility, not to change the -D_WIN32_WINNT=0x0501 in 5.1, XP compatibility.
-
Kent Boortz authored
Windows 2000. Visual Studio 2003 and 2005 require _WIN32_WINNT >= 0x0500 (Win2000) for TryEnterCriticalSection.
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Evgeny Potemkin authored
-
V Narayanan authored
-
V Narayanan authored
The fix inserts newline and comma characters as appropriate into the constraint reporting code to match the formatting required by SHOW CREATE TABLE. Additionally, an erroneously duplicated copy of check_if_incompatible_data() was removed from db2i_constraints.cc, since the correct version is already in ha_ibmdb2i.cc.
-
V Narayanan authored
This fix changes the character set that the IBMDB2I handler uses when hashing table names to information about open tables. Previously, tables whose names differed only in letter case would hash to the same data structure. This caused incorrect behavior or errors when two such tables were in use simultaneously.
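A hedged standalone sketch of the lookup problem (invented data structures, not the IBMDB2I handler's hash): with case-insensitive name comparison, "t1" and "T1" resolve to the same open-table entry, while a byte-for-byte comparison keeps them distinct.

```c
#include <stdio.h>
#include <string.h>
#include <strings.h>   /* strcasecmp */

struct open_table_sketch { const char *name; int handle; };

static struct open_table_sketch tables[] = { { "t1", 1 }, { "T1", 2 } };

static int find_case_insensitive(const char *name)
{
    for (size_t i = 0; i < sizeof(tables) / sizeof(tables[0]); i++)
        if (strcasecmp(tables[i].name, name) == 0)
            return tables[i].handle;   /* "T1" wrongly matches "t1" first */
    return -1;
}

static int find_case_sensitive(const char *name)
{
    for (size_t i = 0; i < sizeof(tables) / sizeof(tables[0]); i++)
        if (strcmp(tables[i].name, name) == 0)
            return tables[i].handle;
    return -1;
}

int main(void)
{
    printf("case-insensitive lookup of T1 -> handle %d (wrong table)\n",
           find_case_insensitive("T1"));
    printf("case-sensitive   lookup of T1 -> handle %d\n",
           find_case_sensitive("T1"));
    return 0;
}
```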
-
The help text for --init-slave=name was: "Command(s) that are executed when a slave connects to this master". This text indicates that the --init-slave option is set on a master server, and that the master server passes the option's argument to the slave that connects to it. This is wrong. Actually, the --init-slave option can only be set on a slave server, and the slave server then executes the argument each time the SQL thread starts. Correct the help text for the --init-slave option as follows: "Command(s) that are executed by a slave server each time the SQL thread starts."
-
The help text for --init-slave=name was: "Command(s) that are executed when a slave connects to this master". This text indicates that the --init-slave option is set on a master server, and that the master server passes the option's argument to the slave that connects to it. This is wrong. Actually, the --init-slave option can only be set on a slave server, and the slave server then executes the argument each time the SQL thread starts. Correct the help text for the --init-slave option as follows: "Command(s) that are executed by a slave server each time the SQL thread starts."
-