- 09 Feb, 2010 3 commits
-
-
Sergey Vojtovich authored
-
Sergey Vojtovich authored
-
Sergey Vojtovich authored
Queries optimized with GROUP_MIN_MAX didn't clean up the KEYREAD optimization properly. As a result, subsequent queries could return incomplete rows (fields initialized to their default values).
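The missing step is essentially "switch KEYREAD back off when the scan is done". A minimal standalone model of why that matters (hypothetical names, not the GROUP_MIN_MAX or handler code):

```cpp
#include <cstdio>

// Toy model of index-only reads (hypothetical names, not the server's handler API).
struct Table {
  bool keyread = false;              // when set, only indexed columns are filled in
  int indexed_col = 0, other_col = 0;
  void read_row() {
    indexed_col = 42;
    if (!keyread) other_col = 7;     // under KEYREAD the non-indexed field keeps its default
  }
};

void group_min_max_query(Table &t, bool cleanup) {
  t.keyread = true;                  // index-only scan for MIN()/MAX()
  t.read_row();
  if (cleanup) t.keyread = false;    // the missing cleanup the fix adds
}

int main() {
  Table t;
  group_min_max_query(t, /*cleanup=*/false);   // buggy behaviour
  t.read_row();                                // subsequent query
  printf("without cleanup: other_col=%d (stuck at default)\n", t.other_col);

  Table t2;
  group_min_max_query(t2, /*cleanup=*/true);   // fixed behaviour
  t2.read_row();
  printf("with cleanup:    other_col=%d\n", t2.other_col);
  return 0;
}
```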
-
- 07 Feb, 2010 1 commit
-
-
Luis Soares authored
logging is disabled. Post-push fix: disabling the test when running mysqld in embedded mode.
-
- 06 Feb, 2010 1 commit
-
-
Gleb Shchepa authored
Grouping by a subquery in a query with a distinct aggregate function led to a wrong result (wrong and unordered grouping values). There are two related problems:

1) A query like
     SELECT (SELECT t1.a) aa, COUNT(DISTINCT b) c FROM t1 GROUP BY aa
returned a wrong result, because the outer reference "t1.a" in the subquery was substituted with an Item_ref item. The Item_ref item obtains its data from the result_field object, which is refreshed only once, at the end of each group. That data is not usable by filesort, since filesort() doesn't care about groups (and doesn't update result_field objects with copy_fields() and so on). Nor is it usable by the group separation algorithm: end_send_group() checks every record with test_if_group_changed(), which evaluates the Item_ref items, but those Item_ref items are refreshed only after the end of the group. This is a vicious circle, and the grouped column values in the output are shifted.
Fix: if a) we are grouping by a subquery and b) that subquery has outer references to the FROM list of the grouping query, then we substitute these outer references with Item_direct_ref, like references under aggregate functions: Item_direct_ref obtains its data directly from the current record.

2) A query with a non-trivial grouping expression, such as
     SELECT (SELECT t1.a) aa, COUNT(DISTINCT b) c FROM t1 GROUP BY aa+0
also returned a wrong result, since JOIN::exec() substitutes references to top-level aliases in the SELECT list with Item_copy caching items. Item_copy items have the same refreshing policy as Item_ref items, so the whole grouping expression with Item_copy inside returns a wrong result in filesort() and end_send_group().
Fix: include the aliased items in the GROUP BY item tree instead of Item_ref references to them.
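The behavioural difference between the two kinds of references can be modelled outside the server (hypothetical names below, not the actual Item classes): an Item_ref-style reference reads a value cached once per group, which is stale at the moment the group-change test runs, while an Item_direct_ref-style reference reads the current row.

```cpp
#include <cstdio>

// Toy model of the two reference kinds (hypothetical names, not the server classes).
struct Row { int a; };

struct GroupCachedRef {        // behaves like Item_ref reading from result_field
  int cached = 0;              // refreshed only when a group ends
  int val(const Row &) const { return cached; }
  void end_of_group(const Row &r) { cached = r.a; }
};

struct DirectRef {             // behaves like Item_direct_ref
  int val(const Row &r) const { return r.a; }
};

int main() {
  Row rows[] = {{2}, {2}, {1}};
  GroupCachedRef cached_ref;
  DirectRef direct_ref;
  for (const Row &r : rows) {
    // Group-change test as end_send_group()/test_if_group_changed() would do it:
    // the cached reference still holds the previous group's value, so the
    // comparison is made against stale data and groups come out shifted.
    printf("row a=%d  cached_ref=%d  direct_ref=%d\n",
           r.a, cached_ref.val(r), direct_ref.val(r));
    cached_ref.end_of_group(r);   // refresh happens only after the row was tested
  }
  return 0;
}
```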
-
- 05 Feb, 2010 3 commits
-
-
Luis Soares authored
logging is disabled. The server would hit an assertion because of a DBUG violation: a plain return was used where DBUG_RETURN was required. This patch replaces the return with DBUG_RETURN.
-
Luis Soares authored
into slow log. While processing a statement, down the mysql_parse execution stack, thd->enable_slow_log can be assigned the value of opt_log_slow_admin_statements, depending on whether one is executing administrative statements such as ALTER TABLE, OPTIMIZE, ANALYZE, etc. This can have an impact on slow logging for statements that are executed after an administrative statement has completed. When executing statements directly from the user this is fine, because thd->enable_slow_log is reset right at the beginning of the dispatch_command function, i.e. every time a new statement is set to execute. For the slave SQL thread (sql_thd), on the other hand, the story is a bit different. In SBR the sql_thd applies statements by calling mysql_parse; right after, it calls the log_slow_statement function to log them if they take too long. Calling mysql_parse directly is fine, but it also means that the dispatch_command function is bypassed. As a consequence, thd->enable_slow_log does not get a chance to be reset before the next statement is executed by the sql_thd. If the statement just executed by the sql_thd was an administrative statement and logging of admin statements was disabled, sql_thd->enable_slow_log will be set to 0 (disabled) from that moment on. End result: the sql_thd stops logging slow statements. We fix this by resetting sql_thd->enable_slow_log to the value of opt_log_slow_slave_statements right after log_slow_statement is called by the sql_thd.
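A minimal model of the fix (hypothetical names; not the actual replication apply code): the apply loop restores the slow-log switch after every statement, since nothing else resets it on the SQL-thread path.

```cpp
#include <cstdio>

// Toy model of the slave SQL thread apply loop (hypothetical names).
bool opt_log_slow_admin_statements = false;  // admin statements are not slow-logged
bool opt_log_slow_slave_statements = true;   // slave statements should be slow-logged

struct ThreadState { bool enable_slow_log = true; };

void apply_statement(ThreadState &thd, bool is_admin_stmt) {
  if (is_admin_stmt)
    thd.enable_slow_log = opt_log_slow_admin_statements;  // may switch logging off
  /* ... execute the statement, then log_slow_statement(&thd) ... */
  // The fix: restore the switch after every statement so one admin
  // statement cannot disable slow logging for everything that follows.
  thd.enable_slow_log = opt_log_slow_slave_statements;
}

int main() {
  ThreadState sql_thd;
  apply_statement(sql_thd, /*is_admin_stmt=*/true);
  apply_statement(sql_thd, /*is_admin_stmt=*/false);
  printf("slow logging still enabled: %s\n", sql_thd.enable_slow_log ? "yes" : "no");
  return 0;
}
```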
-
Luis Soares authored
To 5.x Release Notes. This is a backport of BUG#23300 into 5.1 GA. Original cset revid (in betony): luis.soares@sun.com-20090929140901-s4kjtl3iiyy4ls2h. Description: When using replication, the slave will not write to the slow query log any queries replicated from the master, even if the option "--log-slow-slave-statements" is set and they take more than long_query_time to execute. In order to log slow queries in the replication thread one needs to set --log-slow-slave-statements, so that the SQL thread is initialized with the correct switch. Although setting this flag correctly configures the slave thread option to log slow queries, there is an issue with the condition used to check whether to log the slow query or not. When replaying binlog events the statement contains a SET TIMESTAMP clause, which forces the slow-logging condition check to fail. Consequently, the slow query logging will not take place. This patch addresses the issue by removing the second condition from log_slow_statement, as it prevents slow queries from being binlogged and seems to be deprecated.
-
- 02 Feb, 2010 1 commit
-
-
Sergey Vojtovich authored
Performing a fulltext prefix search (a word with the truncation operator) may cause an infinite loop. The ft_min_word_len value actually doesn't matter. The problem was introduced along with the "smarter index merge" optimization.
-
- 29 Jan, 2010 1 commit
-
-
Georgi Kodinov authored
Fixed two problems: 1. test_if_order_by_key() was continuing on the primary key as if it had a primary key suffix (as the secondary keys do). This led to crashes in ORDER BY <pk>,<pk>. Fixed by not treating the primary key as a secondary one and not depending on it being clustered with a primary key. 2. The cost calculation was trying to read the records per key when operating on ORDER BYs that order on all of the secondary key plus some of the primary key. This led to crashes because of out-of-bounds array access. Fixed by assuming we'll find 1 record per key in such cases.
-
- 05 Feb, 2010 1 commit
-
-
Davi Arnaut authored
The problem was that the dbug facility was being used after the per-thread dbug state had already been finalized. The problem was present in a few functions which invoked decrement_handler_count, which in turn invokes my_thread_end on Windows. In my_thread_end, the per-thread dbug state is finalized. Any use after the state is finalized ends up creating a new state. The solution is to process the exit of the function before decrement_handler_count is called.
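A self-contained model of the ordering constraint (hypothetical names, not the server's dbug library or mysqld functions): using a per-thread trace facility after it has been torn down silently recreates the state, so the per-thread teardown has to come after the function's exit is traced.

```cpp
#include <cstdio>

// Toy per-thread trace state (models the per-thread dbug state).
struct TraceState { bool finalized = false; };
thread_local TraceState trace;

void trace_print(const char *msg) {
  if (trace.finalized) {
    // Using the facility after finalization recreates state (the bug's symptom).
    printf("WARNING: trace used after finalization, new state created\n");
    trace.finalized = false;
  }
  printf("trace: %s\n", msg);
}

void my_thread_end_model() { trace.finalized = true; }            // models my_thread_end()
void decrement_handler_count_model() { my_thread_end_model(); }   // as happens on Windows

void worker() {
  trace_print("function enter");
  // Wrong order (the bug): finalize first, then trace the function exit.
  // Fixed order: trace the exit first, then let the per-thread state be finalized.
  trace_print("function exit");
  decrement_handler_count_model();
}

int main() { worker(); return 0; }
```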
-
- 28 Jan, 2010 2 commits
-
-
Andrei Elkin authored
-
Andrei Elkin authored
-
- 27 Jan, 2010 9 commits
-
-
Andrei Elkin authored
Merging patches prepared for 5.0 into 5.1-bt; that caused a few changes in the test file.
-
Bjorn Munch authored
Define env. vars for both timeout settings. Also incorporated the 5.0 patch into mtr version 1.
-
Staale Smedseng authored
--extended-insert. Help message changed to match the 5.1 online documentation.
-
Bjorn Munch authored
Define env. vars for both timeout settings. This patch is for 5.0 (mtr v1) and should be replaced for 5.1 and up.
-
Andrei Elkin authored
improving comments
-
Magne Mahre authored
WL#5182 is a follow-up to WL#5154, deprecating a few more options and system variables.
-
Staale Smedseng authored
-
Staale Smedseng authored
printstack() being present. When Bug#47391 was fixed, no assumption was made that support for Solaris 8 was needed. Solaris 8 lacks printstack(), and the build breaks because of this. This patch adds a test for the presence of printstack() to configure.in for 5.0, and uses HAVE_PRINTSTACK to make decisions rather than the __sun define.
-
The 'rpl_get_master_version_and_clock' test verifies whether the slave I/O thread tries to reconnect to the master when it tries to get the values of UNIX_TIMESTAMP and SERVER_ID from the master under a network disconnection. The master server is therefore restarted to create the transient network disconnection, and during that period COM_REGISTER_SLAVE failures are produced in the server log file when the slave I/O thread tries to register on the master. To fix the problem, suppress the COM_REGISTER_SLAVE failures in the server log file with an mtr suppression, because they are expected.
-
- 26 Jan, 2010 3 commits
-
-
Davi Arnaut authored
MySQL's MD5 and SHA hash functions relied on the somewhat slow sprintf function to convert the digests to their hex representation. This patch replaces sprintf with a dedicated inline hex conversion function. Patch contributed by Jan Steemann.
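The kind of replacement described can be sketched as follows (illustrative only, not the contributed patch): a small lookup-based converter avoids the per-byte format-string parsing that sprintf does.

```cpp
#include <cstdio>
#include <cstring>

// Slow variant: one sprintf call per digest byte.
void digest_to_hex_sprintf(const unsigned char *digest, size_t len, char *out) {
  for (size_t i = 0; i < len; i++)
    sprintf(out + 2 * i, "%02x", digest[i]);   // also writes the trailing '\0'
}

// Fast variant: inline nibble lookup, no format-string parsing.
inline void digest_to_hex_inline(const unsigned char *digest, size_t len, char *out) {
  static const char hex[] = "0123456789abcdef";
  for (size_t i = 0; i < len; i++) {
    out[2 * i]     = hex[digest[i] >> 4];
    out[2 * i + 1] = hex[digest[i] & 0x0f];
  }
  out[2 * len] = '\0';
}

int main() {
  const unsigned char digest[16] = {0xd4, 0x1d, 0x8c, 0xd9, 0x8f, 0x00, 0xb2, 0x04,
                                    0xe9, 0x80, 0x09, 0x98, 0xec, 0xf8, 0x42, 0x7e};
  char hex1[33], hex2[33];
  digest_to_hex_sprintf(digest, sizeof(digest), hex1);
  digest_to_hex_inline(digest, sizeof(digest), hex2);
  printf("%s\n%s\nequal: %d\n", hex1, hex2, strcmp(hex1, hex2) == 0);
  return 0;
}
```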
-
Luis Soares authored
NOTE: added a TODO to the comments, as requested by the reviewer during this merge.
-
Georgi Kodinov authored
should be exited before destroying the thread local storage.
-
- 25 Jan, 2010 2 commits
-
-
Andrei Elkin authored
When replicating from a 4.1 master to a 5.0 slave, START SLAVE UNTIL can stop too late. The event length needed to calculate the beginning of an event did not correspond to the master's genuine information at the event's execution time: that piece of info was changed during relay-logging of the event due to the binlog_version<4 event conversion done by the I/O thread. Fixed by storing the master's genuine Query_log_event size in a new status variable when the event is relay-logged. The stored info is extracted at event execution time and used further to calculate the correct start position of the event in the until-pos stopping routine. The new status variable's algorithm is only active when the event comes from a master of version < 5.0 (binlog_version < 4).
-
- 24 Jan, 2010 1 commit
-
-
He Zhenxing authored
-
- 22 Jan, 2010 12 commits
-
-
Sergey Vojtovich authored
-
Sergey Vojtovich authored
Detailed revision comments: r6471 | calvin | 2010-01-16 01:43:27 +0200 (Sat, 16 Jan 2010) | 4 lines. branches/5.1: Fix bug#49396: main.innodb test fails in embedded mode. Change replace_result by using $MYSQLD_DATADIR. Tested in both embedded mode and normal server mode.
-
Sergey Glukhov authored
removed wrongly introduced strlen calls
-
Sergey Vojtovich authored
-
Sergey Vojtovich authored
Detailed revision comments: r6492 | sunny | 2010-01-21 09:38:35 +0200 (Thu, 21 Jan 2010) | 1 line branches/5.1: Add reference to bug#47621 in the comment.
-
Sergey Vojtovich authored
Detailed revision comments: r6489 | sunny | 2010-01-21 02:57:50 +0200 (Thu, 21 Jan 2010) | 2 lines. branches/5.1: Factor out the test for bug#44030 from innodb-autoinc.test into separate test/result files.
-
Sergey Vojtovich authored
Detailed revision comments: r6488 | sunny | 2010-01-21 02:55:08 +0200 (Thu, 21 Jan 2010) | 2 lines. branches/5.1: Factor out the test for bug#44030 from innodb-autoinc.test into separate test/result files.
-
Sergey Vojtovich authored
Detailed revision comments: r6424 | marko | 2010-01-12 12:22:19 +0200 (Tue, 12 Jan 2010) | 16 lines branches/5.1: In innobase_initialize_autoinc(), do not attempt to read the maximum auto-increment value from the table if innodb_force_recovery is set to at least 4, so that writes are disabled. (Bug #46193) innobase_get_int_col_max_value(): Move the function definition before ha_innobase::innobase_initialize_autoinc(), because that function now calls this function. ha_innobase::innobase_initialize_autoinc(): Change the return type to void. Do not attempt to read the maximum auto-increment value from the table if innodb_force_recovery is set to at least 4. Issue ER_AUTOINC_READ_FAILED to the client when the auto-increment value cannot be read. rb://144 by Sunny, revised by Marko
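A rough sketch of the guard described above (simplified, hypothetical names and signatures; not the actual InnoDB functions):

```cpp
#include <cstdio>

// Simplified model of the autoinc initialization guard (hypothetical names).
enum { FORCE_RECOVERY_NO_WRITES = 4 };   // writes are disabled at this level and above

struct AutoincInitResult { bool ok; unsigned long long value; };

AutoincInitResult initialize_autoinc(int innodb_force_recovery,
                                     unsigned long long stored_max) {
  if (innodb_force_recovery >= FORCE_RECOVERY_NO_WRITES) {
    // Do not read the maximum auto-increment value from the table;
    // the caller would report ER_AUTOINC_READ_FAILED to the client instead.
    return {false, 0};
  }
  return {true, stored_max + 1};
}

int main() {
  AutoincInitResult r = initialize_autoinc(/*innodb_force_recovery=*/4, /*stored_max=*/100);
  printf("readable=%d next_value=%llu\n", r.ok, r.value);
  return 0;
}
```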
-
Sergey Vojtovich authored
Detailed revision comments: r6422 | marko | 2010-01-12 11:34:27 +0200 (Tue, 12 Jan 2010) | 3 lines branches/5.1: Non-functional change: Make innobase_get_int_col_max_value() a static function. It does not access any fields of class ha_innobase.
-
Sergey Vojtovich authored
Detailed revision comments: r6421 | jyang | 2010-01-12 07:59:16 +0200 (Tue, 12 Jan 2010) | 8 lines branches/5.1: Fix bug #49238: Creating/Dropping a temporary table while at 1023 transactions will cause assert. Handle possible DB_TOO_MANY_CONCURRENT_TRXS when deleting metadata in row_drop_table_for_mysql(). rb://220, approved by Marko
-
In RBR, a DDL statement changes the binlog format to a non-row-based format before it is binlogged, but the binlog format was not restored afterwards, so a statement manipulating a temporary table could not correctly reset the binlog format back to row-based and was binlogged in statement-based format. To fix the problem, restore the binlog format state after the DDL statement is binlogged.
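The fix amounts to a save/restore pattern around the format switch; a minimal model (hypothetical names, not the server's THD API) looks like this:

```cpp
#include <cassert>

// Toy model of the binlog-format switch around DDL logging (hypothetical names).
enum BinlogFormat { STATEMENT_BASED, ROW_BASED };

struct Session { BinlogFormat stmt_binlog_format = ROW_BASED; };

void binlog_ddl(Session &thd) {
  BinlogFormat saved = thd.stmt_binlog_format;   // remember the current format
  thd.stmt_binlog_format = STATEMENT_BASED;      // DDL is logged as a statement
  /* ... write the DDL statement to the binary log ... */
  thd.stmt_binlog_format = saved;                // the fix: restore the previous format
}

int main() {
  Session thd;
  binlog_ddl(thd);
  // Later temporary-table manipulation still sees row-based format.
  assert(thd.stmt_binlog_format == ROW_BASED);
  return 0;
}
```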
-
Magne Mahre authored
The WL#5154 commit added a couple of warning messages that were not fixed in the result files for two RPL tests.
-