- 17 Feb, 2012 2 commits
-
-
Sergei Golubchik authored
common ICP callback in handler.cc. It can also increment status counters without making the engine dependent on the exact THD layout (which is different in the embedded server).
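A minimal sketch of this kind of split follows (all types and names below are invented for illustration, not the actual handler.cc code): the engine sees only an opaque context and a callback, and only the callback knows how the session-side status counters are laid out.

    // Minimal sketch, all names invented for illustration.
    #include <cstdint>

    enum class IcpResult { kMatch, kNoMatch };

    struct SessionCounters {            // stands in for the THD-side counters
      uint64_t icp_attempts = 0;
      uint64_t icp_matches  = 0;
    };

    struct IcpContext {                 // opaque handle handed to the engine
      SessionCounters* counters;
      bool (*evaluate_pushed_condition)(void* cond, const void* row);
      void* pushed_condition;           // condition compiled by the server
    };

    // Common callback: the engine never touches SessionCounters directly, so
    // their layout (different in the embedded server) never leaks into engines.
    IcpResult common_index_cond_check(IcpContext* ctx, const void* row) {
      ctx->counters->icp_attempts++;
      if (ctx->evaluate_pushed_condition(ctx->pushed_condition, row)) {
        ctx->counters->icp_matches++;
        return IcpResult::kMatch;
      }
      return IcpResult::kNoMatch;
    }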
-
Igor Babaev authored
This bug led to wrong values of the use_count fields in some SEL_ARG trees, which triggered complaints on the server side when executing the test case for LP bug 800184 with a debug build of the server. This was the result of the incomplete fix for bug 800184. To complete it, the following corrections had to be made:
- the copy constructor for SEL_TREE must call the new function incr_refs_all() instead of the function incr_refs(), because references to next key parts from any SEL_ARG tree belonging to the list of the first key part have to be adjusted;
- the method and_sel_tree of the class SEL_IMERGE must use the copy constructor of the SEL_TREE class to make a copy of its second argument before it ANDs it with any SEL_TREE tree from the processed SEL_IMERGE object.
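The underlying issue is the classic one of shallow-copying a structure that shares reference-counted subtrees. A toy sketch, with invented types rather than the real SEL_TREE/SEL_ARG classes, of why the copy constructor must also walk the next-key-part links:

    // Toy types, not the real SEL_TREE/SEL_ARG classes.
    #include <vector>

    struct IntervalNode {
      int use_count = 1;
      IntervalNode* next_key_part = nullptr;  // shared subtree for the next key part
    };

    struct IntervalTree {
      std::vector<IntervalNode*> roots;       // one shared tree per key part

      IntervalTree() = default;

      // Shallow copy: the nodes stay shared, so every shared node must get its
      // reference count bumped. Incrementing only the roots (the old
      // incr_refs-style behaviour) leaves the next_key_part subtrees
      // under-counted, so they can be freed while still referenced.
      IntervalTree(const IntervalTree& other) : roots(other.roots) {
        for (IntervalNode* n : roots)
          for (IntervalNode* p = n; p != nullptr; p = p->next_key_part)
            p->use_count++;                   // the incr_refs_all-style walk
      }
    };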
-
- 16 Feb, 2012 3 commits
-
-
Sergey Petrunya authored
timestamp: Thu 2011-12-01 15:12:10 +0100 Fix for Bug#13430436 PERFORMANCE DEGRADATION IN SYSBENCH ON INNODB DUE TO ICP. When running sysbench on InnoDB there is a performance degradation due to index condition pushdown (ICP). Several of the queries in sysbench have a WHERE condition that the optimizer uses to execute these queries as range scans. The upper and lower limits of the range scan ensure that the WHERE condition is fulfilled. Still, the WHERE condition is part of the query's condition, and if ICP is enabled the condition is pushed down to InnoDB as an index condition. Because the range scan's upper and lower limits already ensure that the WHERE condition is fulfilled, the pushed index condition does not filter out any records. As a result, the use of ICP for these queries causes a performance overhead for sysbench: resources are spent determining the part of the condition that can be pushed down to InnoDB, and InnoDB spends time executing the pushed index condition. With the default configuration for sysbench the range scans use the primary key, which is a clustered index in InnoDB. Using ICP on a clustered index provides the lowest performance benefit, since the entire record is part of the clustered index, and in InnoDB it has the highest relative overhead for executing the pushed index condition. The fix for removing the overhead ICP introduces when running sysbench is to disable the use of ICP when the index used by the query is a clustered index. When WL#6061 is implemented this change should be re-evaluated.
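As a rough illustration of the gating described above (hypothetical names, not the server's actual API):

    // Hypothetical API, for illustration only.
    struct IndexInfo {
      bool is_clustered;            // true for InnoDB's primary key index
    };

    struct Condition;               // opaque pushable WHERE-condition fragment

    // Returns the condition to hand to the engine, or nullptr to skip ICP.
    const Condition* condition_to_push(const IndexInfo& idx,
                                       const Condition* pushable_cond,
                                       bool icp_enabled) {
      if (!icp_enabled || pushable_cond == nullptr)
        return nullptr;
      if (idx.is_clustered)
        return nullptr;             // the whole row is already in the clustered
                                    // index, so pushing only adds per-row cost
      return pushable_cond;
    }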
-
Sergey Petrunya authored
-
unknown authored
-
- 14 Feb, 2012 5 commits
-
-
unknown authored
The problem was that we can now merge a derived table (a subquery in the FROM clause). Fix: if a conflict is detected and there is a derived table "over" the table that caused the conflict, try the materialization strategy.
-
Alexey Botchkov authored
tests for recovery of RTree keys.
-
Sergey Petrunya authored
-
Sergey Petrunya authored
- Make equality propagation work correctly when done inside the OR branches
-
Igor Babaev authored
If the flag 'optimize_join_buffer_size' is set to 'off' and the value of the system variable 'join_buffer_size' is greater than the value of the system variable 'join_buffer_space_limit', then no join cache can be employed to join the tables of the executed query. A bug in the function JOIN_CACHE::alloc_buffer allowed a join buffer to be used even in this case, while another bug in the function revise_cache_usage could crash the server in this case if the chosen execution plan for the query contained an outer join or a semi-join operation.
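A simplified sketch of the allocation guard implied by this fix (invented names; the real JOIN_CACHE::alloc_buffer is considerably more involved):

    // Invented names; illustrates the guard, not the real JOIN_CACHE code.
    #include <cstddef>
    #include <cstdlib>

    struct JoinBufferSettings {
      bool   optimize_buffer_size;  // models optimize_join_buffer_size
      size_t buffer_size;           // models join_buffer_size
      size_t space_limit;           // models join_buffer_space_limit
    };

    // Returns nullptr when no join buffer may be used; the caller must then
    // revise the plan instead of assuming a join cache exists.
    void* alloc_join_buffer(const JoinBufferSettings& s) {
      size_t size = s.buffer_size;
      if (size > s.space_limit) {
        if (!s.optimize_buffer_size)
          return nullptr;           // oversized and shrinking is disabled
        size = s.space_limit;       // shrinking allowed: clamp to the limit
      }
      return std::malloc(size);
    }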
-
- 13 Feb, 2012 1 commit
-
-
unknown authored
locked until we have finished cleaning up. Previously, the code released the lock without marking that the thread was running. This allowed a new slave thread to start while the old one was still in the middle of cleaning up, causing assertions and probably general mayhem.
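A stripped-down sketch of the locking pattern (hypothetical names, using std::mutex rather than the server's own primitives):

    // Simplified model; not the server's own synchronization primitives.
    #include <mutex>

    struct WorkerState {
      std::mutex lock;
      bool running = false;
    };

    // Starting refuses to proceed while the previous worker is still marked
    // running, i.e. still cleaning up.
    bool start_worker(WorkerState& st) {
      std::lock_guard<std::mutex> guard(st.lock);
      if (st.running) return false;
      st.running = true;
      return true;
    }

    // Stopping keeps the lock and the running flag until cleanup is finished;
    // releasing the lock first (the old behaviour) lets a new worker start in
    // the middle of the teardown.
    void stop_worker(WorkerState& st, void (*cleanup)()) {
      std::lock_guard<std::mutex> guard(st.lock);
      cleanup();
      st.running = false;
    }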
-
- 12 Feb, 2012 2 commits
-
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
-
- 11 Feb, 2012 4 commits
-
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
Protocol documentation (http://forge.mysql.com/wiki/MySQL_Internals_ClientServer_Protocol) says that the initial packet sent by a client that wants SSL consists of the client capability flags only (4 bytes or 2 bytes, depending on the protocol version). Some clients happen to send more in the initial SSL packet (the C client, the Python connector), while others (Java, .NET) follow the docs and send only the client capability flags. The change that broke the Java client was a newly introduced check that the first client packet has 32 or more bytes. This is generally wrong if the client capability flags contain CLIENT_SSL. Also fixed the code that read the max client packet size and the charset from the first packet prior to the SSL handshake. With SSL, clients do not have to send this info; they may send only the client flags. This is now fixed so that the max packet size and the charset are not read prior to the SSL handshake; in the SSL case they are read from the "complete" client authentication packet after SSL initialization.
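A sketch of the accepting logic, assuming the 4.1 protocol layout (4 bytes capabilities, 4 bytes max packet size, 1 byte charset, 23 bytes of filler before the user name); this is illustrative only, not the server's actual parser:

    // Illustrative only; assumes the 4.1 handshake-response layout.
    #include <cstdint>
    #include <cstring>
    #include <optional>

    constexpr uint32_t CLIENT_SSL = 0x00000800;

    struct FirstPacket {
      uint32_t capabilities = 0;
      uint32_t max_packet_size = 0;   // valid only when have_full_packet is true
      uint8_t  charset = 0;
      bool     ssl_requested = false;
      bool     have_full_packet = false;
    };

    std::optional<FirstPacket> parse_first_packet(const uint8_t* p, size_t len) {
      FirstPacket fp;
      if (len < 4) return std::nullopt;            // need at least the flags
      std::memcpy(&fp.capabilities, p, 4);
      fp.ssl_requested = (fp.capabilities & CLIENT_SSL) != 0;
      if (len < 32) {
        // Acceptable only as a flags-only SSL request; the rest of the
        // authentication data arrives after the TLS handshake.
        if (fp.ssl_requested) return fp;
        return std::nullopt;                       // otherwise it is malformed
      }
      std::memcpy(&fp.max_packet_size, p + 4, 4);
      fp.charset = p[8];
      fp.have_full_packet = true;
      return fp;
    }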
-
- 10 Feb, 2012 5 commits
-
-
unknown authored
-
Michael Widenius authored
-
Michael Widenius authored
-
unknown authored
-
unknown authored
The code accessed a pointer into a mem_root that might be freed by another concurrent thread. Fixed by moving the access so it is done while LOCK_thd_data is held, preventing the memory from being freed too early.
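A minimal sketch of the pattern (hypothetical names; LOCK_thd_data is modelled by a std::mutex): copy the data out while holding the lock that also guards freeing, rather than letting a raw pointer escape the critical section.

    // Hypothetical names; LOCK_thd_data is modelled by a std::mutex.
    #include <mutex>
    #include <string>

    struct Session {
      std::mutex lock_thd_data;
      std::string* query_text = nullptr;  // lives in memory another thread may free
    };

    // Safe: take a copy while holding the lock that also guards the free, so
    // no dangling pointer escapes.
    std::string snapshot_query(Session& s) {
      std::lock_guard<std::mutex> guard(s.lock_thd_data);
      return s.query_text ? *s.query_text : std::string();
    }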
-
- 09 Feb, 2012 1 commit
-
-
unknown authored
The bug itself is fixed by the patch for bug lp:908269.
-
- 03 Feb, 2012 9 commits
-
-
Igor Babaev authored
-
unknown authored
-
unknown authored
-
unknown authored
-
unknown authored
-
Igor Babaev authored
The comment for the fix commit says: "Due to the changes required by ICP, we first copy a row from the InnoDB format to the MySQL row buffer and then copy it to the pre-fetch queue. This was done for the non-ICP code path too. This change removes the double copy for the latter."
-
unknown authored
-
unknown authored
For single-table UPDATE/INSERT, added a deep check of single tables (single_table_updatable()). For INSERT into a multi-table view, added an additional check of the target table (check_view_single_update). Multi-table UPDATE was already correct. A test suite covering all cases was added.
-
Igor Babaev authored
a significant performance drop for high-concurrency benchmarks (bug #11765850 - 58854). Here's the comment of the patch commit: The bug is that the InnoDB pre-fetch cache was not being used in row_search_for_mysql(). Secondly, the changeset that planted the bug also introduced some inefficient code: it would read an extra row, convert it to the MySQL row format (for ICP==off), copy the row to the pre-fetch cache row buffer, then check for cache overflow and dequeue the row that was just pushed if there was a possibility of a cache overflow.
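An illustrative sketch of the queue discipline the fix restores (not InnoDB's actual code): check for free space before converting and pushing a row, so nothing ever has to be dequeued again.

    // Illustration only, not InnoDB's row_search_for_mysql().
    #include <cstddef>
    #include <deque>
    #include <vector>

    using Row = std::vector<unsigned char>;

    struct PrefetchQueue {
      std::deque<Row> rows;
      size_t capacity;

      explicit PrefetchQueue(size_t cap) : capacity(cap) {}

      // Check for space first; convert and copy exactly once on success.
      // Returns false when the caller should stop fetching and drain the queue
      // instead of pushing a row it would immediately have to dequeue again.
      bool try_push(const Row& engine_row,
                    Row (*convert_to_server_format)(const Row&)) {
        if (rows.size() >= capacity)
          return false;
        rows.push_back(convert_to_server_format(engine_row));
        return true;
      }
    };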
-
- 02 Feb, 2012 1 commit
-
-
Igor Babaev authored
-
- 01 Feb, 2012 2 commits
-
-
Igor Babaev authored
-
unknown authored
The problem was an attempt to check/use the Item_direct_ref of a derived view when the real Item_field underneath it had to be used.
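A toy sketch of the general pattern (invented classes standing in for Item_direct_ref and Item_field): unwrap reference items until the real underlying field is reached, instead of inspecting the wrapper itself.

    // Invented classes standing in for Item_direct_ref / Item_field.
    struct ItemBase {
      virtual ~ItemBase() = default;
      virtual ItemBase* real_item() { return this; }  // a field is its own real item
    };

    struct FieldItem : ItemBase {};

    struct RefItem : ItemBase {
      ItemBase* referenced;
      explicit RefItem(ItemBase* r) : referenced(r) {}
      ItemBase* real_item() override { return referenced->real_item(); }
    };

    // Code that needs the actual column must go through real_item():
    //   FieldItem f; RefItem r(&f);   r.real_item() == &f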
-
- 31 Jan, 2012 1 commit
-
-
Sergei Golubchik authored
-
- 30 Jan, 2012 2 commits
-
-
Sergey Petrunya authored
- If LooseScan is used with a quick select, require that the quick select produces data in key order (this disables the use of MRR, which can return data in arbitrary order).
-
Sergey Petrunya authored
-
- 28 Jan, 2012 2 commits
-
-
Igor Babaev authored
Applied the fix for bug #12546542 from the mysql-5.6 code line: JOIN_CACHE::join_records forgot to reset JOIN_TAB::first_unmatched in some cases.
-
Igor Babaev authored
-