- 28 Aug, 2009 4 commits
-
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
Alfranio Correia authored
-
Alfranio Correia authored
-
- 27 Aug, 2009 6 commits
-
-
Alfranio Correia authored
When a connection is dropped, any remaining temporary tables are also automatically dropped, and the SQL statement for this operation is written to the binary log in order to drop such tables on the slave and keep the slave in sync. Specifically, the current code base creates the following type of statement: DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `db`.`table`; Unfortunately, appending the database to the table name in this manner circumvents the replicate-rewrite-db option (and any option that checks the current database). To solve the issue, we now write the statement to the binary log as follows: use `db`; DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `table`;
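A minimal sketch of the scenario, using hypothetical names (database db1, table t1, and a rewrite rule db1->db2); the binlog excerpts restate the two formats described above:

    -- Slave configuration (my.cnf), hypothetical rewrite rule:
    --   replicate-rewrite-db = db1->db2

    -- On the master:
    CREATE DATABASE db1;
    CREATE TEMPORARY TABLE db1.t1 (a INT);
    -- ... the connection is dropped ...

    -- Old binlog entry: the qualified table name bypasses the rewrite rule,
    -- so the slave tries to drop db1.t1 instead of db2.t1.
    --   DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `db1`.`t1`;

    -- New binlog entry: the current database is set first, so options that
    -- check the current database (such as replicate-rewrite-db) apply.
    --   use `db1`;
    --   DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `t1`;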
-
Alfranio Correia authored
Slave does not correctly handle "expected errors", leading to inconsistencies between the master and slave. Specifically, when a statement changes both transactional and non-transactional tables, the transactional changes are automatically rolled back on the master, but the slave ignores the error and does not roll them back, leading to inconsistencies. To fix the problem, we automatically roll back a statement that fails on the slave, but note that the transaction is not rolled back unless a "rollback" command is in the relay log file.
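A sketch of the kind of statement involved, with hypothetical table and trigger names: a single INSERT changes both a transactional and a non-transactional table through a trigger and then fails with an "expected" error.

    CREATE TABLE t_innodb (a INT PRIMARY KEY) ENGINE=InnoDB;
    CREATE TABLE t_myisam (a INT) ENGINE=MyISAM;
    CREATE TRIGGER trg BEFORE INSERT ON t_innodb
      FOR EACH ROW INSERT INTO t_myisam VALUES (NEW.a);

    INSERT INTO t_innodb VALUES (1);
    INSERT INTO t_innodb VALUES (1);  -- fails with a duplicate-key error

    -- Master: the failed statement's InnoDB changes are rolled back, but the
    -- row written to t_myisam by the trigger remains, so the statement is
    -- still replicated (with its expected error).
    -- Slave, before the fix: the error was ignored and the transactional part
    -- of the statement was not rolled back, so master and slave could diverge.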
-
Sergey Glukhov authored
-
Sergey Glukhov authored
The crash happens because a select_union object is used as the result set for queries that have derived tables. select_union uses a temporary table as data storage, and if the field count exceeds 10 (the number of values produced by PROCEDURE ANALYSE()), we get a crash in the fill_record() function.
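A sketch of the failing query shape, using a hypothetical table: a derived table with more than 10 columns passed to PROCEDURE ANALYSE().

    CREATE TABLE t1 (c1 INT, c2 INT, c3 INT, c4 INT, c5 INT, c6 INT,
                     c7 INT, c8 INT, c9 INT, c10 INT, c11 INT);
    INSERT INTO t1 VALUES (1,2,3,4,5,6,7,8,9,10,11);

    -- The derived table makes select_union the result set; with 11 fields
    -- (more than the 10 values produced by PROCEDURE ANALYSE()) this used to
    -- crash in fill_record().
    SELECT * FROM (SELECT * FROM t1) AS d PROCEDURE ANALYSE();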
-
Alfranio Correia authored
Updated main.mysqlbinlog_row_trans's result file as TRUNCATE statements are wrapped in BEGIN...COMMIT.
-
Georgi Kodinov authored
-
- 26 Aug, 2009 5 commits
-
-
Alfranio Correia authored
binlog

Mixing transactional (T) and non-transactional (N) tables on behalf of a transaction may lead to inconsistencies between masters and slaves in STATEMENT mode. The problem stems from the fact that although modifications done to non-transactional tables on behalf of a transaction become immediately visible to other connections, they do not immediately get into the binary log, and therefore consistency is broken.

Although there may be issues in mixing T and N tables in STATEMENT mode, there are safe combinations that clients find useful. In this bug, we fix the following issue: mixing N and T tables in multi-level statements (e.g. a statement that fires a trigger) or multi-table statements (e.g. UPDATE t1, t2 ...) was not handled correctly. In such cases, it was not possible to distinguish when a T table was updated if the sequence of changes was N and T. In a nutshell, the flag "modified_non_trans_table" alone was not enough to reflect that both an N and a T table were changed. To circumvent this issue, we check whether an engine is registered in the handler's list and has changed something, which means that a T table was modified.

Check WL 2687 for a full-fledged patch that will make the use of either the MIXED or ROW mode completely safe.
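A sketch (hypothetical table names) of one of the cases addressed here: a multi-table UPDATE that changes an N table before a T table in STATEMENT mode.

    SET SESSION binlog_format = 'STATEMENT';

    CREATE TABLE t_trans (a INT) ENGINE=InnoDB;  -- T table
    CREATE TABLE t_non (a INT) ENGINE=MyISAM;    -- N table
    INSERT INTO t_trans VALUES (1);
    INSERT INTO t_non VALUES (1);

    BEGIN;
    -- The N table is changed before the T table; before this fix, the
    -- "modified_non_trans_table" flag alone could not tell that a T table had
    -- also been changed, so the statement could be written to the binary log
    -- incorrectly relative to the transaction.
    UPDATE t_non, t_trans SET t_non.a = 2, t_trans.a = 2;
    COMMIT;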
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
Mattias Jonsson authored
Problem was that the partition containing NULL values was pruned away, since '2001-01-01' < '2001-02-00' but TO_DAYS('2001-02-00') is NULL. The fix adds the NULL partition to the set of partitions to be scanned for RANGE/LIST partitioning on the TO_DAYS() function. Also fixed a bug where ALLOW_INVALID_DATES was added to sql_mode (SELECT * FROM t WHERE date_col < '1999-99-99' on a RANGE/LIST partitioned table would add it).
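A sketch of the pruning scenario with hypothetical names, following the description above:

    CREATE TABLE t (d DATE)
    PARTITION BY RANGE (TO_DAYS(d)) (
      PARTITION p0 VALUES LESS THAN (TO_DAYS('2001-02-01')),
      PARTITION pmax VALUES LESS THAN MAXVALUE
    );
    INSERT INTO t VALUES ('2001-01-01'), ('2001-02-02');

    -- '2001-01-01' < '2001-02-00', so the first row should be returned, but
    -- TO_DAYS('2001-02-00') is NULL. Before the fix, pruning skipped the
    -- partition that holds NULL values of the partitioning function (here p0),
    -- so the matching row was missed; now that partition is scanned too.
    SELECT * FROM t WHERE d < '2001-02-00';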
-
Mattias Jonsson authored
There was a problem because pruning uses the field for comparison (while evaluate_join_record uses longlong), resulting in pruning failures when comparing DATE to DATETIME. The fix is to always compare DATE vs DATETIME as DATETIME, by appending ' 00:00:00' to the DATE string, and to add an optimization for comparisons with 23:59:59, so that DATETIME_col > '2001-02-03 23:59:59' becomes TO_DAYS(DATETIME_col) > TO_DAYS('2001-02-03 23:59:59') instead of '>='.
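A sketch (hypothetical names) of the two comparisons affected:

    CREATE TABLE t (dt DATETIME)
    PARTITION BY RANGE (TO_DAYS(dt)) (
      PARTITION p0 VALUES LESS THAN (TO_DAYS('2001-02-04')),
      PARTITION pmax VALUES LESS THAN MAXVALUE
    );

    -- The DATE literal is now compared as DATETIME ('2001-02-03 00:00:00'),
    -- so pruning and row evaluation agree.
    SELECT * FROM t WHERE dt < '2001-02-03';

    -- With the 23:59:59 optimization, pruning uses
    -- TO_DAYS(dt) > TO_DAYS('2001-02-03 23:59:59') instead of '>='.
    SELECT * FROM t WHERE dt > '2001-02-03 23:59:59';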
-
- 24 Aug, 2009 4 commits
-
-
Davi Arnaut authored
The problem was that creating a DECIMAL column from a decimal value could lead to a failed assertion, as decimal values can have a higher precision than those attached to a table. The assert could be triggered by creating a table from a decimal with a large (> 30) scale. There was also a problem in calculating the number of digits in the integral and fractional parts if both exceeded the maximum number of digits permitted by the new decimal type.

The solution is to ensure that the truncation procedure is executed when deducing a DECIMAL column from a decimal value of higher precision. If the integer part is equal to or bigger than the maximum precision for the DECIMAL type (65), the integer part is truncated to fit and the fractional part becomes zero. Otherwise, the fractional part is truncated to fit into the space left after the integer part is copied.

This patch borrows code and ideas from Martin Hansson's patch.
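A sketch of the trigger condition (hypothetical table name): deriving a DECIMAL column from a decimal literal whose scale exceeds the type's limits.

    -- The literal below has a scale well above DECIMAL's maximum of 30.
    -- Before the fix this could hit the assertion; now the value is truncated
    -- to fit within the column's maximum precision and scale.
    CREATE TABLE t1
      SELECT 1.00000000000000000000000000000000000000001 AS c1;
    SHOW CREATE TABLE t1;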
-
Georgi Kodinov authored
The code was using a special global buffer for the value of IS NULL ranges. This buffer was not always long enough to be copied with a regular memcpy, so read buffer overflows could occur. Fixed by setting the null byte to 1 and setting the rest of the field's disk image to NULL with bzero() (instead of relying on the buffer and memcpy()).
-
Alfranio Correia authored
-
Alfranio Correia authored
-
- 21 Aug, 2009 8 commits
-
-
Mattias Jonsson authored
-
Mattias Jonsson authored
INSERT ... SELECT ...

Problem was that when bulk insert is used on an empty table/partition, it disables the indexes for better performance, but in this specific case it also tries to read from that partition using an index, which is not possible since the index has been disabled. The solution was to allow index reads on disabled indexes if there are no records. Also reverted the patch for bug#38005, since that was a workaround in the partitioning engine instead of a fix in MyISAM.
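A sketch of the kind of statement involved, with hypothetical names; the exact repro depends on the optimizer choosing an index read on the empty partition.

    CREATE TABLE t1 (a INT, KEY (a))
      ENGINE=MyISAM
      PARTITION BY RANGE (a) (
        PARTITION p0 VALUES LESS THAN (10),
        PARTITION p1 VALUES LESS THAN (20)
      );
    INSERT INTO t1 VALUES (1), (2), (3);

    -- Bulk insert into the empty partition p1 disables its indexes, while the
    -- SELECT part's range also touches p1 through the index. Before the fix
    -- the read could fail; now index reads on a disabled index are allowed
    -- when the partition holds no records.
    INSERT INTO t1 SELECT a + 10 FROM t1 WHERE a BETWEEN 1 AND 15;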
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Martin Hansson authored
-
Martin Hansson authored
-
Ramil Kalimullin authored
(temporary) TABLE, crash

Problem: if one has an open "HANDLER t1", a subsequent "TRUNCATE t1" doesn't close the handler and leaves the handler table hash in an inconsistent state, which may lead to a server crash.

Fix: TRUNCATE should implicitly close all open handlers.

Doc. request: this behavior should be described in the manual accordingly.
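A sketch with hypothetical names: an open HANDLER followed by TRUNCATE on the same table.

    CREATE TABLE t1 (a INT);
    INSERT INTO t1 VALUES (1), (2);

    HANDLER t1 OPEN;
    HANDLER t1 READ FIRST;
    TRUNCATE TABLE t1;  -- with the fix, this implicitly closes the handler

    -- Before the fix, the handler table hash was left inconsistent here, and
    -- a subsequent HANDLER t1 READ or HANDLER t1 CLOSE could crash the server.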
-
- 20 Aug, 2009 3 commits
-
-
Georgi Kodinov authored
-
Martin Hansson authored
-
Martin Hansson authored
view manipulations

The bespoke flag was not properly reset after the last call to fill_record(). Fixed by resetting it in the caller, mysql_update().
-
- 19 Aug, 2009 4 commits
-
-
Alfranio Correia authored
If the SQL thread fails to execute an event due to a temporary error (e.g. ER_LOCK_DEADLOCK) and the option "--slave_transaction_retries" is set, the SQL thread should not be aborted; instead the transaction should be restarted from the beginning and re-executed. Unfortunately, a wrong interpretation of THD::is_fatal_error was preventing this behavior. In a nutshell, that variable is set to TRUE if the execution of a compound statement cannot continue; in particular, it is used to disable access to the CONTINUE or EXIT handlers of stored routines. So even temporary errors may have this variable set.

To fix the bug, the check on thd->is_fatal_error was removed from has_temporary_error():

      DBUG_ENTER("has_temporary_error");

    - if (thd->is_fatal_error)
    -   DBUG_RETURN(0);
    -
      DBUG_EXECUTE_IF("all_errors_are_temporary_errors",
                      if (thd->main_da.is_error()) {
-
Georgi Kodinov authored
The check for stack overflow was independent of the size of the structure stored in the heap. Fixed by adding sizeof(PARAM) to the requested free heap size.
-
Georgi Kodinov authored
view that has Group By

The table access rights checking function check_grant() assumed that no view is opened when it is called. This is not true with nested views, where the inner view needs materialization; in this case the view is already materialized when check_grant() is called for it. This caused check_grant() not to look for table-level grants on the materialized view table. Fixed by checking whether a view is already materialized and, if it is, checking table-level grants using the original table name (not that of the materialized temporary table).
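A sketch with hypothetical names: a nested view where the inner view has GROUP BY (and is therefore materialized), accessed by a user who only has table-level grants on the views.

    CREATE DATABASE db1;
    CREATE TABLE db1.t1 (a INT);
    CREATE VIEW db1.v_inner AS
      SELECT a, COUNT(*) AS cnt FROM db1.t1 GROUP BY a;
    CREATE VIEW db1.v_outer AS SELECT * FROM db1.v_inner;

    CREATE USER 'someuser'@'localhost';
    GRANT SELECT ON db1.v_outer TO 'someuser'@'localhost';
    GRANT SELECT ON db1.v_inner TO 'someuser'@'localhost';

    -- As 'someuser': the GROUP BY forces v_inner to be materialized before
    -- check_grant() runs for it. Before the fix, the grant lookup used the
    -- materialized temporary table's name and failed; now it uses the
    -- original view name, so the table-level grant above is found.
    SELECT * FROM db1.v_outer;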
-
Georgi Kodinov authored
-
- 17 Aug, 2009 3 commits
-
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
- 14 Aug, 2009 1 commit
-
-
Ramil Kalimullin authored
-
- 13 Aug, 2009 2 commits
-
-
Davi Arnaut authored
-
Davi Arnaut authored
Bug#45243: crash on win in sql thread clear_tables_to_lock() -> free()
Bug#45242: crash on win in mysql_close() -> free()
Bug#45238: rpl_slave_skip, rpl_change_master failed (lost connection) for STOP SLAVE
Bug#46030: rpl_truncate_3innodb causes server crash on windows
Bug#46014: rpl_stm_reset_slave crashes the server sporadically in pb2

When killing a user session on the server, it is necessary to interrupt (notify) the thread associated with the session that the connection is being killed, so that the thread is woken up if it is waiting for I/O. On a few platforms (Mac, Windows and HP-UX), where the SIGNAL_WITH_VIO_CLOSE flag is defined, this interruption procedure is to asynchronously close the underlying socket of the connection. In order to enable this scheme, each connection-serving thread registers its VIO (I/O interface) so that other threads can access it and close the connection. But only the owner thread of the VIO may delete it, so as to guarantee that other threads won't see freed memory (the thread unregisters the VIO before deleting it). A side note: closing the socket introduces a harmless race that might cause a thread to attempt to read from a closed socket, but this is deemed acceptable.

The problem is that this infrastructure was meant to be used only by server threads, but the slave I/O thread was registering the VIO of a mysql handle (a client API structure that represents a connection to another server instance) as an active connection of the thread. Under some circumstances, such as network failures, the client API might destroy the VIO associated with a handle at will, yet the VIO wouldn't be properly unregistered. This could lead to accesses to freed data if a thread attempted to kill a slave I/O thread whose connection was already broken. There was an attempt to work around this by checking whether the socket was being interrupted, but this hack didn't work as intended due to the aforementioned race: attempting to read from the socket would yield a "bad file descriptor" error.

The solution is to add a hook to the client API that is called from the client code before the VIO of a handle is deleted. This hook allows the slave I/O thread to detach the active VIO so that it does not point to freed memory.
-