- 26 Aug, 2009 2 commits
-
Mattias Jonsson authored
-
Mattias Jonsson authored
Bug#20577: Partitions: use of to_days() function leads to selection failures

Problem was that the partition containing NULL values was pruned away, since
'2001-01-01' < '2001-02-00' but TO_DAYS('2001-02-00') is NULL. Added the NULL
partition for RANGE/LIST partitioning on the TO_DAYS() function to be scanned
too.

Also fixed a bug that added ALLOW_INVALID_DATES to sql_mode
(SELECT * FROM t WHERE date_col < '1999-99-99' on a RANGE/LIST partitioned
table would add it).

mysql-test/include/partition_date_range.inc:
  Added include file to decrease test code duplication.
mysql-test/r/partition_pruning.result:
  Added test results.
mysql-test/r/partition_range.result:
  Updated test result. This fix adds the partition containing NULL values
  to the list of partitions to be scanned.
mysql-test/t/partition_pruning.test:
  Added test case.
sql/item.h:
  Added MONOTONIC_*INCREASE_NOT_NULL values to be used by TO_DAYS.
sql/item_timefunc.cc:
  Calculate the number of days as return value even for invalid dates, so
  that pruning can be used even for invalid dates.
sql/opt_range.cc:
  Fixed a bug that added ALLOW_INVALID_DATES to sql_mode (SELECT * FROM t
  WHERE date_col < '1999-99-99' on a RANGE/LIST partitioned table would
  add it).
sql/partition_info.h:
  Reset ret_null_part when a single partition is to be used, to avoid
  adding the NULL partition.
sql/sql_partition.cc:
  Always include the NULL partition if RANGE or LIST. Use the returned
  value of the function for pruning even if it is marked as NULL, so that
  even '2000-00-00' can be used for pruning although TO_DAYS('2000-00-00')
  is NULL. Changed == to >= in get_next_partition_id_list to avoid a crash
  if part_iter->part_nums is not correctly set up.
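A minimal SQL sketch of the pruning scenario described in this commit message;
table and partition names are illustrative, not taken from the actual test files:

  CREATE TABLE t (d DATE)
  PARTITION BY RANGE (TO_DAYS(d)) (
    PARTITION p0 VALUES LESS THAN (TO_DAYS('2001-02-01')),  -- lowest partition, also stores NULL values
    PARTITION p1 VALUES LESS THAN MAXVALUE
  );
  INSERT INTO t VALUES ('2001-01-01');
  -- '2001-02-00' is an invalid date, so TO_DAYS('2001-02-00') is NULL.
  -- Before the fix, pruning could discard p0 (the partition that also holds
  -- NULL values) and this query could miss the row even though
  -- '2001-01-01' < '2001-02-00'; it could also leave ALLOW_INVALID_DATES
  -- set in sql_mode as a side effect.
  SELECT * FROM t WHERE d < '2001-02-00';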
-
- 24 Aug, 2009 3 commits
-
Davi Arnaut authored
The problem was that creating a DECIMAL column from a decimal value could lead
to a failed assertion, as decimal values can have a higher precision than those
attached to a table. The assert could be triggered by creating a table from a
decimal with a large (> 30) scale. Also, there was a problem in calculating the
number of digits in the integral and fractional parts if both exceeded the
maximum number of digits permitted by the new decimal type.

The solution is to ensure that the truncation procedure is executed when
deducing a DECIMAL column from a decimal value of higher precision. If the
integer part is equal to or bigger than the maximum precision for the DECIMAL
type (65), the integer part is truncated to fit and the fractional part becomes
zero. Otherwise, the fractional part is truncated to fit into the space left
after the integer part is copied.

This patch borrows code and ideas from Martin Hansson's patch.

mysql-test/r/type_newdecimal.result:
  Add test case result for Bug#45261. Also, update test case to reflect
  that an additive operation increases the precision of the resulting type
  by 1.
mysql-test/t/type_newdecimal.test:
  Add test case for Bug#45261.
sql/field.cc:
  Added DBUG_ASSERT to ensure the object's invariant is maintained.
  Implement method to create a field to hold a decimal value from an item.
sql/field.h:
  Explain member variable. Add method to create a new decimal field.
sql/item.cc:
  The precision should only be capped when storing the value in a table.
  Also, this makes it impossible to calculate the integer part if
  Item::decimals (the scale) is larger than the precision.
sql/item.h:
  Simplify calculation of integer part.
sql/item_cmpfunc.cc:
  Do not limit the precision. It will be capped later.
sql/item_func.cc:
  Use new method for allocating a new decimal field. Add a specialized
  method for retrieving the precision of a user variable item.
sql/item_func.h:
  Add method to return the precision of a user variable.
sql/item_sum.cc:
  Use new method for allocating a new decimal field.
sql/my_decimal.h:
  The integer part could be improperly calculated for a decimal with 31
  digits in the fractional part.
sql/sql_select.cc:
  Use new method which truncates the integer or decimal parts as needed.
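A short SQL sketch of the kind of statement that could trip the assertion; the
table name is illustrative:

  -- A decimal literal with more than 30 fractional digits; before the fix,
  -- deriving a DECIMAL column for it could hit a debug assertion instead of
  -- truncating to the limits of the DECIMAL type (max precision 65).
  CREATE TABLE t1 SELECT 1.100000000000000000000000000000001 AS c;
  SHOW CREATE TABLE t1;  -- c should come out as a DECIMAL with capped precision/scale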
-
Alfranio Correia authored
-
Alfranio Correia authored
-
- 21 Aug, 2009 8 commits
-
Mattias Jonsson authored
-
Mattias Jonsson authored
Bug#46639: 1030 (HY000): Got error 124 from storage engine on
INSERT ... SELECT ...

Problem was that when bulk insert is used on an empty table/partition, it
disables the indexes for better performance, but in this specific case it also
tries to read from that partition using an index, which is not possible since
it has been disabled.

Solution was to allow index reads on disabled indexes if there are no records.

Also reverted the patch for bug#38005, since that was a workaround in the
partitioning engine instead of a fix in myisam.

mysql-test/r/partition.result:
  Updated result file.
mysql-test/t/partition.test:
  Added test case.
sql/ha_partition.cc:
  Reverted the patch for bug#38005, since that was a workaround around this
  problem, not needed after fixing it in myisam.
storage/myisam/mi_search.c:
  Return HA_ERR_END_OF_FILE instead of HA_ERR_WRONG_INDEX when there are no
  rows.
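One plausible SQL shape of the failing pattern, for illustration only (this is
not the actual regression test; all names are made up):

  CREATE TABLE t1 (a INT, KEY (a)) ENGINE=MyISAM
  PARTITION BY RANGE (a) (
    PARTITION p0 VALUES LESS THAN (10),
    PARTITION p1 VALUES LESS THAN (20)
  );
  INSERT INTO t1 VALUES (1);
  -- Bulk insert may disable the indexes of the empty target partition p1
  -- while the SELECT part reads t1 through the key on a; before the fix, an
  -- index read on a disabled index of an empty partition returned
  -- HA_ERR_WRONG_INDEX, surfacing as "Got error 124 from storage engine".
  INSERT INTO t1 SELECT a + 10 FROM t1 WHERE a >= 10;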
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Martin Hansson authored
-
Martin Hansson authored
-
Ramil Kalimullin authored
Fix for bug #46456 [Ver->Prg]: HANDLER OPEN + TRUNCATE + DROP
(temporary) TABLE, crash

Problem: if one has an open "HANDLER t1", a further "TRUNCATE t1" doesn't close
the handler and leaves the handler table hash in an inconsistent state, which
may lead to a server crash.

Fix: TRUNCATE should implicitly close all open handlers.

Doc. request: the fact should be described in the manual accordingly.

mysql-test/r/handler_myisam.result:
  Test result.
mysql-test/t/handler_myisam.test:
  Test case.
sql/sql_delete.cc:
  Remove all truncated tables from the HANDLER's hash.
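A sketch of the statement sequence in question (illustrative table name):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  HANDLER t1 OPEN;
  -- Before the fix, TRUNCATE left the open handler behind and the handler
  -- table hash in an inconsistent state; with the fix, TRUNCATE implicitly
  -- closes the handler, so it has to be opened again before further reads.
  TRUNCATE t1;
  HANDLER t1 READ FIRST;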
-
- 20 Aug, 2009 3 commits
-
Georgi Kodinov authored
-
Martin Hansson authored
mysql-test/r/auto_increment.result:
  Bug#46616: Test result.
mysql-test/t/auto_increment.test:
  Bug#46616: Test case.
sql/sql_update.cc:
  Bug#46616: Fix.
-
Martin Hansson authored
view manipulations

The bespoke flag was not properly reset after the last call to fill_record.
Fixed by resetting it in the caller, mysql_update.

mysql-test/r/auto_increment.result:
  Bug#46616: Test result.
mysql-test/t/auto_increment.test:
  Bug#46616: Test case.
sql/sql_update.cc:
  Bug#46616: Fix.
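The description above is terse; the following is a hypothetical sequence of the
general shape involved, not the actual test case (all names and statements are
assumptions):

  CREATE TABLE t1 (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, v INT);
  CREATE VIEW v1 AS SELECT * FROM t1;
  INSERT INTO t1 (v) VALUES (1);
  -- An UPDATE that explicitly assigns the AUTO_INCREMENT column goes through
  -- fill_record(), which sets a per-table flag; the bug was that the flag was
  -- not reset afterwards, so a later statement (e.g. an insert through the
  -- view) could trip a debug assertion.
  UPDATE t1 SET id = 10 WHERE v = 1;
  INSERT INTO v1 (v) VALUES (2);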
-
- 19 Aug, 2009 3 commits
-
Alfranio Correia authored
If the SQL Thread fails to execute an event due to a temporary error (e.g.
ER_LOCK_DEADLOCK) and the option "--slave_transaction_retries" is set, the SQL
Thread should not be aborted and the transaction should be restarted from the
beginning and re-executed. Unfortunately, a wrong interpretation of
THD::is_fatal_error was preventing this behavior. In a nutshell, this variable
is set to TRUE if an execution of a compound statement cannot continue; in
particular, it is used to disable access to the CONTINUE or EXIT handlers of
stored routines. So even temporary errors may have this variable set.

To fix the bug, we have done what follows:

    DBUG_ENTER("has_temporary_error");
  - if (thd->is_fatal_error)
  -   DBUG_RETURN(0);
  - DBUG_EXECUTE_IF("all_errors_are_temporary_errors", if (thd->main_da.is_error()) {
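For reference, a sketch of enabling the retry behavior discussed above on a
slave; the value is illustrative, and the variable is assumed to be settable at
runtime on this server version (it can also be given as
--slave_transaction_retries in the configuration file):

  SET GLOBAL slave_transaction_retries = 10;
  SHOW GLOBAL VARIABLES LIKE 'slave_transaction_retries';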
-
Georgi Kodinov authored
view that has Group By

Table access rights checking function check_grant() assumed that no view is
opened when it's called. This is not true with nested views, where the inner
view needs materialization. In this case the view is already materialized when
check_grant() is called for it. This caused check_grant() to not look for table
level grants on the materialized view table.

Fixed by checking if a view is already materialized and, if it is, checking
table level grants using the original table name (not the one of the
materialized temp table).
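A hedged sketch of the privilege setup described above; database, table, view,
and user names are illustrative:

  CREATE DATABASE db1;
  CREATE TABLE db1.t1 (a INT, b INT);
  -- The inner view has GROUP BY, so it is materialized when the outer view
  -- is resolved.
  CREATE VIEW db1.v_inner AS SELECT a, COUNT(*) AS cnt FROM db1.t1 GROUP BY a;
  CREATE VIEW db1.v_outer AS SELECT * FROM db1.v_inner;
  CREATE USER 'u1'@'localhost';
  GRANT SELECT ON db1.v_outer TO 'u1'@'localhost';
  GRANT SELECT ON db1.v_inner TO 'u1'@'localhost';
  -- Connected as u1: before the fix, check_grant() looked for grants under
  -- the name of the materialized temporary table instead of the view's own
  -- name, so this could be denied despite the grants above.
  SELECT * FROM db1.v_outer;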
-
Georgi Kodinov authored
-
- 17 Aug, 2009 3 commits
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
- 14 Aug, 2009 1 commit
-
Ramil Kalimullin authored
-
- 13 Aug, 2009 5 commits
-
Davi Arnaut authored
-
Davi Arnaut authored
Bug#45243: crash on win in sql thread clear_tables_to_lock() -> free()
Bug#45242: crash on win in mysql_close() -> free()
Bug#45238: rpl_slave_skip, rpl_change_master failed (lost connection) for STOP SLAVE
Bug#46030: rpl_truncate_3innodb causes server crash on windows
Bug#46014: rpl_stm_reset_slave crashes the server sporadically in pb2

When killing a user session on the server, it's necessary to interrupt (notify)
the thread associated with the session that the connection is being killed so
that the thread is woken up if waiting for I/O. On a few platforms (Mac,
Windows and HP-UX) where the SIGNAL_WITH_VIO_CLOSE flag is defined, this
interruption procedure is to asynchronously close the underlying socket of the
connection. In order to enable this scheme, each connection serving thread
registers its VIO (I/O interface) so that other threads can access it and close
the connection. But only the owner thread of the VIO may delete it, so as to
guarantee that other threads won't see freed memory (the thread unregisters the
VIO before deleting it). A side note: closing the socket introduces a harmless
race that might cause a thread to attempt to read from a closed socket, but
this is deemed acceptable.

The problem is that this infrastructure was meant to be used only by server
threads, but the slave I/O thread was registering the VIO of a mysql handle (a
client API structure that represents a connection to another server instance)
as an active connection of the thread. Under some circumstances, such as
network failures, the client API might destroy the VIO associated with a handle
at will, yet the VIO wouldn't be properly unregistered. This could lead to
accesses to freed data if a thread attempted to kill a slave I/O thread whose
connection was already broken. There was an attempt to work around this by
checking whether the socket was being interrupted, but this hack didn't work as
intended due to the aforementioned race -- attempting to read from the socket
would yield a "bad file descriptor" error.

The solution is to add a hook to the client API that is called from the client
code before the VIO of a handle is deleted. This hook allows the slave I/O
thread to detach the active VIO so it does not point to freed memory.

server-tools/instance-manager/mysql_connection.cc:
  Add stub method required for linking.
sql-common/client.c:
  Invoke hook.
sql/client_settings.h:
  Export hook.
sql/slave.cc:
  Introduce hook that clears the active VIO before it is freed by the
  client API.
-
Ramil Kalimullin authored
Fix for bug #46614: Assertion in show_create_trigger()
on SHOW CREATE TRIGGER + MERGE table

Problem: SHOW CREATE TRIGGER erroneously relies on the fact that a trigger has
only one underlying table (wrong for merge tables).

Fix: remove the erroneous assert().

mysql-test/r/merge.result:
  Test result.
mysql-test/t/merge.test:
  Test case.
sql/sql_show.cc:
  Unnecessary assert() removed, as we may have more than one table open,
  e.g. for a merge table.
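A sketch of the failing combination (illustrative names); a trigger defined on
a MERGE table opens the merge table together with its underlying tables:

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE t2 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1, t2);
  CREATE TRIGGER trg1 BEFORE INSERT ON m1 FOR EACH ROW SET @x = 1;
  -- More than one table is open here (m1 plus t1 and t2), which is what the
  -- removed assert() did not expect.
  SHOW CREATE TRIGGER trg1;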
-
Alfranio Correia authored
In STATEMENT based replication, a statement that failed on the master but that
updated non-transactional tables is written to the binary log with the error
code appended to it. On the slave, the statement is executed and the same error
is expected. However, when an "expected error" did not happen on the slave and
was either ignored or was related to a concurrency issue on the master, the
slave did not roll back the effects of the statement, and as such
inconsistencies might happen.

To fix the problem, we automatically roll back a statement that should have
failed on a slave but succeeded, and whose expected failure is either ignored
or stems from a concurrency issue on the master.
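A small sketch of the master-side situation described above (illustrative
names): a multi-row insert into a non-transactional table that fails part-way
cannot be undone, so it is binlogged together with its error code:

  CREATE TABLE nt (a INT PRIMARY KEY) ENGINE=MyISAM;  -- non-transactional
  INSERT INTO nt VALUES (1);
  -- Writes the row (2) and then fails on the duplicate key (1). Since MyISAM
  -- cannot roll the partial change back, the statement is written to the
  -- binary log with the error code appended; the slave is expected to fail
  -- the same way, and this fix handles the case where it unexpectedly
  -- succeeds instead.
  INSERT INTO nt VALUES (2), (1);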
-
unknown authored
There is an inconsistency between DROP DATABASE|TABLE|EVENT IF EXISTS and
CREATE DATABASE|TABLE|EVENT IF NOT EXISTS. DROP IF EXISTS statements are
binlogged even if the DB, TABLE or EVENT does not exist. In contrast, only
CREATE EVENT IF NOT EXISTS is binlogged when the EVENT already exists.

This patch fixes the following cases for all the replication formats:
CREATE DATABASE IF NOT EXISTS,
CREATE TABLE IF NOT EXISTS,
CREATE TABLE IF NOT EXISTS ... LIKE,
CREATE TABLE IF NOT EXISTS ... SELECT.

sql/sql_insert.cc:
  Part of the code was moved from create_table_from_items to
  select_create::prepare. When replication is row based,
  CREATE TABLE IF NOT EXISTS ... SELECT is binlogged if the table exists.
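The statements affected, sketched with illustrative names; after this patch
they are binlogged even when the object already exists, mirroring the behavior
of DROP ... IF EXISTS:

  CREATE DATABASE IF NOT EXISTS db1;
  CREATE TABLE IF NOT EXISTS t1 (a INT);
  CREATE TABLE IF NOT EXISTS t2 LIKE t1;
  CREATE TABLE IF NOT EXISTS t3 SELECT * FROM t1;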
-
- 12 Aug, 2009 11 commits
-
Alexander Nozdrin authored
-
Alexander Nozdrin authored
-
Mattias Jonsson authored
-
unknown authored
-
Konstantin Osipov authored
"CREATE TABLE TRANSACTIONAL PAGE_CHECKSUM ROW_FORMAT=PAGE accepted, does nothing". Put back stubs for members of structures that are shared between sql/ and pluggable storage engines. to not break ABI unnecessarily. To be NULL-merged into 5.4, where we do break the ABI already.
-
Sergey Vojtovich authored
not ready to run with innoplug-1.0.4.
-
Konstantin Osipov authored
PAGE_CHECKSUM ROW_FORMAT=PAGE accepted, does nothing"

Remove unused code that would lead to warnings when compiling sql_yacc.yy.

sql/handler.h:
  Remove unused defines.
sql/sql_yacc.yy:
  Remove unused grammar.
sql/table.h:
  Remove unused TABLE members.
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
unknown authored
-
unknown authored
Replication SQL thread does not set the database default charset to
thd->variables.collation_database properly when executing a LOAD DATA binlog
event. This bug can be repeated by using the "LOAD DATA" command in STATEMENT
mode.

This patch adds code to find the default character set of the current database
and then assign it to thd->db_charset when the slave server begins to execute a
relay log. The test for this bug is added to rpl_loaddata_charset.test.
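A sketch of the master-side setup this targets, assuming binlog_format=STATEMENT
and a configured slave; the database name, character set and file path are
illustrative:

  CREATE DATABASE db1 DEFAULT CHARACTER SET gbk;
  USE db1;
  CREATE TABLE t1 (c VARCHAR(20));
  -- Replicated as a LOAD DATA event; before the fix the slave SQL thread did
  -- not pick up db1's default character set (collation_database), so
  -- non-latin data could be mangled on the slave.
  LOAD DATA INFILE '/tmp/data.txt' INTO TABLE t1;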
-
- 11 Aug, 2009 1 commit
-
Davi Arnaut authored
-