- 06 Mar, 2009 1 commit
-
Matthias Leich authored
-
- 05 Mar, 2009 1 commit
-
Matthias Leich authored
Modifications according to the reviews are included.
-
- 03 Mar, 2009 10 commits
-
Matthias Leich authored
+ Fix for Bug#43114 wait_until_count_sessions too restrictive, random PB failures
+ Removal of a lot of other weaknesses found
+ Modifications according to review
-
Timothy Smith authored
-
Timothy Smith authored
Bug #42152: Race condition in lock_is_table_exclusive()
Detailed revision comments:
r4005 | marko | 2009-01-20 16:22:36 +0200 (Tue, 20 Jan 2009) | 8 lines
branches/5.1: lock_is_table_exclusive(): Acquire kernel_mutex before
accessing table->locks and release kernel_mutex before returning from the
function.
This fixes a potential race condition in the "commit every 10,000 rows"
in ALTER TABLE, CREATE INDEX, DROP INDEX, and OPTIMIZE TABLE. (Bug #42152)
rb://80 approved by Heikki Tuuri
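The pattern the fix restores, holding the protecting mutex across every access to the shared lock list and releasing it on every exit path, can be shown in a short standalone C sketch; the structures and the pthread mutex below are stand-ins, not the actual InnoDB code:

  #include <pthread.h>
  #include <stdbool.h>

  /* Stand-ins for the InnoDB structures; all names here are illustrative. */
  struct table_lock { int mode; struct table_lock *next; };
  struct dict_table { struct table_lock *locks; };

  static pthread_mutex_t kernel_mutex = PTHREAD_MUTEX_INITIALIZER;

  #define LOCK_MODE_X 1

  /* Report whether the only lock on the table is a single exclusive lock.
     The list is only touched while kernel_mutex is held, and the mutex is
     released on every return path. */
  static bool table_is_exclusively_locked(struct dict_table *table)
  {
      bool exclusive = false;

      pthread_mutex_lock(&kernel_mutex);    /* acquire before reading ->locks */

      const struct table_lock *lock = table->locks;
      if (lock != NULL && lock->next == NULL && lock->mode == LOCK_MODE_X) {
          exclusive = true;
      }

      pthread_mutex_unlock(&kernel_mutex);  /* release before returning */
      return exclusive;
  }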
-
Timothy Smith authored
Bug #41571: MySQL segfaults after innodb recovery
Detailed revision comments:
r4004 | marko | 2009-01-20 16:19:00 +0200 (Tue, 20 Jan 2009) | 12 lines
branches/5.1: Merge r4003 from branches/5.0:
rec_set_nth_field(): When the field already is SQL null, do nothing when
it is being changed to SQL null. (Bug #41571)
Normally, MySQL does not pass "do-nothing" updates to the storage engine.
When it does and a column of an InnoDB table that is in ROW_FORMAT=COMPACT
is being updated from NULL to NULL, the InnoDB buffer pool will be
corrupted without this fix.
rb://81 approved by Heikki Tuuri
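A minimal sketch of the guard described above, assuming an invented record layout rather than the real rec_set_nth_field() interface: when an SQL NULL field is set to SQL NULL again, return without touching the record.

  #include <stddef.h>

  /* Illustrative record layout; a NULL field pointer stands for SQL NULL. */
  struct record {
      const void *field[16];
      size_t      len[16];
  };

  /* Set field n to the given data, or to SQL NULL when data is NULL.  The
     guard the fix adds: if the field already is SQL NULL and is being set
     to SQL NULL again, do nothing at all. */
  static void record_set_field(struct record *rec, int n,
                               const void *data, size_t len)
  {
      if (data == NULL) {
          if (rec->field[n] == NULL) {
              return;            /* NULL -> NULL: leave the record untouched */
          }
          rec->field[n] = NULL;
          rec->len[n] = 0;
          return;
      }

      rec->field[n] = data;
      rec->len[n] = len;
  }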
-
Timothy Smith authored
Bug #42075: dict_load_indexes failure in dict_load_table will corrupt the dictionary cache
Detailed revision comments:
r3930 | marko | 2009-01-14 15:51:30 +0200 (Wed, 14 Jan 2009) | 4 lines
branches/5.1: dict_load_table(): If dict_load_indexes() fails, invoke
dict_table_remove_from_cache() instead of dict_mem_table_free(), so that
the data dictionary will not point to freed data.
(Bug #42075, Issue #153, rb://76 approved by Heikki Tuuri)
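The important part is which cleanup routine runs on the failure path. A self-contained sketch of that shape, with hypothetical names standing in for the dictionary cache:

  #include <stdio.h>
  #include <stdlib.h>

  /* Hypothetical stand-ins for the dictionary cache; names are invented. */
  struct cached_table { const char *name; };

  static struct cached_table *cache_load_table_def(const char *name)
  {
      struct cached_table *t = malloc(sizeof *t);
      if (t != NULL) {
          t->name = name;        /* pretend the table is now in the cache */
      }
      return t;
  }

  static int cache_load_table_indexes(struct cached_table *t)
  {
      (void) t;
      return -1;                 /* simulate the index-load failure */
  }

  static void cache_remove_table(struct cached_table *t)
  {
      free(t);                   /* detach from the cache, then free */
  }

  /* Shape of the fix: on index-load failure, remove the table through the
     cache instead of freeing the object directly, so the cache never keeps
     a pointer to freed memory. */
  static struct cached_table *cache_load_table(const char *name)
  {
      struct cached_table *t = cache_load_table_def(name);
      if (t == NULL) {
          return NULL;
      }
      if (cache_load_table_indexes(t) != 0) {
          cache_remove_table(t);
          return NULL;
      }
      return t;
  }

  int main(void)
  {
      if (cache_load_table("t1") == NULL) {
          puts("load failed, dictionary cache left consistent");
      }
      return 0;
  }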
-
Timothy Smith authored
Bug #38187: Error 153 when creating savepoints
Detailed revision comments:
r3911 | sunny | 2009-01-13 14:15:24 +0200 (Tue, 13 Jan 2009) | 13 lines
branches/5.1: Fix Bug#38187 Error 153 when creating savepoints
InnoDB previously treated savepoints as a stack, e.g.,
  SAVEPOINT a;
  SAVEPOINT b;
  SAVEPOINT c;
  SAVEPOINT b; <- This would delete b and c.
This fix changes the behavior to:
  SAVEPOINT a;
  SAVEPOINT b;
  SAVEPOINT c;
  SAVEPOINT b; <- Does not delete savepoint c
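A toy sketch of the new name-reuse behavior, using an invented savepoint table rather than InnoDB's real transaction structures: re-issuing an existing name drops only that savepoint and re-creates it at the current point, leaving later savepoints (like c above) in place.

  #include <stdio.h>
  #include <string.h>

  #define MAX_SAVEPOINTS 16

  struct savepoints {
      char names[MAX_SAVEPOINTS][32];
      int  count;
  };

  static void set_savepoint(struct savepoints *t, const char *name)
  {
      /* If the name already exists, drop only that entry; the old
         stack-like behavior also dropped every savepoint taken after it. */
      for (int i = 0; i < t->count; i++) {
          if (strcmp(t->names[i], name) == 0) {
              memmove(t->names[i], t->names[i + 1],
                      (size_t)(t->count - i - 1) * sizeof t->names[0]);
              t->count--;
              break;
          }
      }
      /* Establish the savepoint anew at the current point. */
      if (t->count < MAX_SAVEPOINTS) {
          snprintf(t->names[t->count], sizeof t->names[0], "%s", name);
          t->count++;
      }
  }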
-
Timothy Smith authored
Bug #41571: MySQL segfaults after innodb recovery
This 5.0 fix will not be pushed into 5.1; a separate fix (from
innodb-5.1-ss4007) will be pushed into 5.1+.
Detailed revision comments:
r4003 | marko | 2009-01-20 16:12:50 +0200 (Tue, 20 Jan 2009) | 10 lines
branches/5.0: rec_set_nth_field(): When the field already is SQL null, do
nothing when it is being changed to SQL null. (Bug #41571)
Normally, MySQL does not pass "do-nothing" updates to the storage engine.
When it does and a column of an InnoDB table that is in ROW_FORMAT=COMPACT
is being updated from NULL to NULL, the InnoDB buffer pool will be
corrupted without this fix.
rb://81 approved by Heikki Tuuri
-
Timothy Smith authored
Bug #18828: If InnoDB runs out of undo slots, it returns misleading 'table is full'
This is a backport of code already in 5.1+. The error message change
referred to in the detailed revision comments is still pending.
Detailed revision comments:
r3937 | calvin | 2009-01-15 03:11:56 +0200 (Thu, 15 Jan 2009) | 17 lines
branches/5.0: Backport the fix for Bug#18828. Return
DB_TOO_MANY_CONCURRENT_TRXS when we run out of UNDO slots in the rollback
segment. The backport is requested by MySQL under bug#41529 - Safe
handling of InnoDB running out of undo log slots.
This is a partial fix since the MySQL error code requested to properly
report the error condition back to the client has not yet materialized.
Currently we have #ifdef'd the error code translation in ha_innodb.cc.
This will have to be changed as and when MySQL adds the new requested
code or an equivalent code that we can then use.
Given the above, currently we will get the old behavior, not the "fixed"
and intended behavior.
Approved by: Heikki (on IM)
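A hedged sketch of what such an #ifdef'd translation can look like; every name and number below is a placeholder, not a real MySQL constant:

  /* All identifiers and values here are illustrative placeholders. */
  enum engine_err { ENGINE_OK = 0, ENGINE_TOO_MANY_CONCURRENT_TRXS = 47 };

  #define MY_ERR_TABLE_FULL        1001   /* placeholder for the old error */
  #ifdef HAVE_NEW_TRX_LIMIT_ERROR         /* defined once the server adds one */
  #define MY_ERR_TOO_MANY_TRXS     1002   /* placeholder for the new error */
  #endif

  static int translate_engine_error(enum engine_err err)
  {
      if (err == ENGINE_TOO_MANY_CONCURRENT_TRXS) {
  #ifdef HAVE_NEW_TRX_LIMIT_ERROR
          return MY_ERR_TOO_MANY_TRXS;
  #else
          /* Until a dedicated server error exists, clients still get the
             old misleading "table is full" report, as noted above. */
          return MY_ERR_TABLE_FULL;
  #endif
      }
      return 0;
  }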
-
Timothy Smith authored
Bug #39939: DROP TABLE/DISCARD TABLESPACE takes long time in buf_LRU_invalidate_tablespace()
This was already fixed in 5.1+; this is a backport to 5.0.
Detailed revision comments:
r2743 | inaam | 2008-10-08 22:18:12 +0300 (Wed, 08 Oct 2008) | 13 lines
branches/5.0: Backport of r2742 from branches/5.1:
Fix Bug#39939 DROP TABLE/DISCARD TABLESPACE takes long time in
buf_LRU_invalidate_tablespace()
Improve implementation of buf_LRU_invalidate_tablespace by attempting
hash index drop in batches instead of doing it one by one.
Reviewed by: Heikki, Sunny, Marko
Approved by: Heikki
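A loose, self-contained sketch of the batching idea, with invented structures rather than the real buf_LRU code; the point is only that pages belonging to the dropped tablespace are claimed and processed a batch at a time instead of one pass per page.

  #include <stddef.h>

  #define DROP_BATCH_SIZE 1024

  struct page_id { unsigned space; unsigned page_no; };

  /* Stub "buffer pool": pages that still carry adaptive hash entries. */
  static struct page_id pool[4096];
  static size_t pool_len;

  static size_t collect_pages_for_space(unsigned space,
                                        struct page_id *out, size_t max)
  {
      size_t n = 0, kept = 0;
      for (size_t i = 0; i < pool_len; i++) {
          if (pool[i].space == space && n < max) {
              out[n++] = pool[i];        /* claim this page for the batch */
          } else {
              pool[kept++] = pool[i];    /* keep everything else */
          }
      }
      pool_len = kept;
      return n;
  }

  static void drop_hash_entry(const struct page_id *id) { (void) id; }

  static void invalidate_tablespace_hash(unsigned space)
  {
      struct page_id batch[DROP_BATCH_SIZE];
      size_t n;

      /* Gather up to DROP_BATCH_SIZE matching pages at a time and drop
         their hash entries together. */
      while ((n = collect_pages_for_space(space, batch, DROP_BATCH_SIZE)) > 0) {
          for (size_t i = 0; i < n; i++) {
              drop_hash_entry(&batch[i]);
          }
      }
  }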
-
Timothy Smith authored
-
- 02 Mar, 2009 4 commits
-
Bernt M. Johnsen authored
-
Bernt M. Johnsen authored
-
Bernt M. Johnsen authored
-
Bernt M. Johnsen authored
-
- 27 Feb, 2009 16 commits
-
Staale Smedseng authored
-
Staale Smedseng authored
-
Georgi Kodinov authored
-
Staale Smedseng authored
-
Staale Smedseng authored
-
Georgi Kodinov authored
return no rows
The algorithm that determines the best key for a loose index scan loops
over the available indexes and selects the one with the best cost. It
retrieves the parameters of the current index into a set of variables.
If the cost of using the current index is lower than the best cost so far,
it copies these variables into another set of variables that hold the
information for the best index so far. After all indexes have been checked,
it uses these variables (outside of the index loop) to create the table
read plan object instance.
There was a single omission: the key_infix/key_infix_len variables were
used outside of the loop without being preserved inside the loop for the
best index so far, so they could be overwritten by the next index(es)
checked.
Fixed by adding variables to hold the data for the current index, passing
the new variables to the function that assigns values to them, and copying
the new variables into the existing ones when a new current best index is
selected. To avoid further such problems, the declarations of the variables
that keep information about the current index were moved inside the loop's
compound statement.
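A small C sketch of the current-vs-best variable pattern described above, with invented names rather than the actual optimizer code: everything known about the current candidate index lives inside the loop and is copied as a whole when that candidate wins, so nothing (such as the key infix in the original bug) can leak in from a later index.

  #include <stddef.h>

  struct candidate {
      double        cost;
      size_t        used_key_parts;
      unsigned char infix[64];
      size_t        infix_len;
  };

  static size_t pick_best_index(const struct candidate *cand, size_t n_cand,
                                struct candidate *best_out)
  {
      size_t best = (size_t) -1;

      for (size_t i = 0; i < n_cand; i++) {
          /* Per-index data is declared inside the loop body... */
          struct candidate cur = cand[i];

          if (best == (size_t) -1 || cur.cost < best_out->cost) {
              /* ...and copied wholesale when this index becomes the best,
                 including the infix fields that used to be forgotten. */
              *best_out = cur;
              best = i;
          }
      }
      return best;   /* (size_t) -1 when there were no candidates */
  }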
-
Patrick Crews authored
-
Ingo Struewing authored
-
Ingo Struewing authored
Some variable values were missing and Perl constructs failed. Initialized the variables and refactored the gcov functions.
-
Magnus Svensson authored
-
Patrick Crews authored
-
Patrick Crews authored
-
Patrick Crews authored
Added ORDER BY clause to I_S query to ensure consistent order. There were differences between 5.1 and 6.0 output. Correcting it in 5.1.
-
Georgi Kodinov authored
Fixed a warning in 5.1 caused by a missing type cast.
-
Patrick Crews authored
-
Georgi Kodinov authored
-
- 26 Feb, 2009 8 commits
-
Bernt M. Johnsen authored
-
Georgi Kodinov authored
-
Bernt M. Johnsen authored
-
Georgi Kodinov authored
of a view are selected by * wildcard
Backported a part of the fix for Bug#36086 to 5.0.
-
Patrick Crews authored
Fixed a typo in the bug fix patch.
-
Magnus Svensson authored
-
Ramil Kalimullin authored
-
Ramil Kalimullin authored
-