- 23 Nov, 2009 2 commits
-
-
Alexey Kopytov authored
-
Alexey Kopytov authored
-
- 22 Nov, 2009 1 commit
-
-
In RBR, no statements operating on temporary tables should be binlogged. Despite this, a 'TRUNCATE ...' executed on a temporary table was still logged even in row-based mode. This caused problems on the slave, where the table may not exist, so applying the statement failed and the slave reported an error and aborted. After this patch, a 'TRUNCATE ...' statement on a temporary table is no longer binlogged in RBR.
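A minimal sketch of the scenario, assuming binlog_format=ROW and a hypothetical temporary table name:

```sql
-- Illustrative sketch only; the table name t_tmp is hypothetical.
SET SESSION binlog_format = 'ROW';
CREATE TEMPORARY TABLE t_tmp (id INT);
-- Before this patch the statement below was still written to the binary
-- log even in RBR; a slave that never created t_tmp failed to apply it
-- and aborted. After the patch it is not binlogged in RBR.
TRUNCATE TABLE t_tmp;
```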
-
- 21 Nov, 2009 3 commits
-
-
Alfranio Correia authored
-
Davi Arnaut authored
-
Davi Arnaut authored
The problem is that the server could crash when attempting to access a non-conformant proc system table. One such case was a crash when invoking stored procedure-related statements on a 5.1 server with a proc system table in the 5.0 format. The solution is to validate the proc system table format before any attempt to access it is made. If the table is not in the format that the server expects, a message is written to the error log and the statement that caused the table to be accessed fails.
-
- 20 Nov, 2009 12 commits
-
-
Kristofer Pettersson authored
-
Kristofer Pettersson authored
-
Kristofer Pettersson authored
Not all my_hash_insert() calls were checked for their return value. This patch adds appropriate checks and failure handling where needed.
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Vladislav Vaintroub authored
-
Kristofer Pettersson authored
-
Kristofer Pettersson authored
This patch introduces a limit on how long the query cache lock can block SELECT statements. Other operations that change the table data will still block.
-
Georgi Kodinov authored
-
Vladislav Vaintroub authored
implement Davi's review suggestions (post-push fixes)
-
Georgi Kodinov authored
-
- 19 Nov, 2009 2 commits
-
-
Christopher Powers authored
Fixed crash caused by x64 int/long incompatibility introduced in Bug #29125.
-
Georgi Kodinov authored
When merging ranges while calculating the result of OR-ing two range sets, the current range may be obsoleted by the resulting merged range. The first overlapping range can be obsoleted as well. Fixed by moving the pointer to the first overlapping range to the pointer of the resulting union range. Added a few comments at key places in key_or().
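A hypothetical query of the affected shape (table and index invented): two overlapping ranges on an indexed column combined with OR, which key_or() must merge into a single range.

```sql
-- Hypothetical illustration: the ranges a < 20 and 10 <= a <= 30 overlap,
-- so computing the union of the two range sets merges them into one
-- range (a <= 30).
CREATE TABLE t1 (a INT, KEY (a));
SELECT * FROM t1 WHERE a < 20 OR a BETWEEN 10 AND 30;
```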
-
- 18 Nov, 2009 7 commits
-
-
Georgi Kodinov authored
Fixed two errors in the comp_err executable: 1. A wrong (off-by-one) length was passed to my_checksum(). 2. strmov() was used on overlapping strings, which is not legal according to the documentation of stpcpy(). The overlap-safe memmove() is used instead.
-
Sven Sandberg authored
Problem: Some system functions that could return different values on master and slave were not marked unsafe. In particular: GET_LOCK, IS_FREE_LOCK, IS_USED_LOCK, MASTER_POS_WAIT, RELEASE_LOCK, SLEEP, SYSDATE, and VERSION. Fix: Mark these functions unsafe.
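As a hedged example (table and lock names invented), any DML whose value depends on one of these functions can now be flagged as unsafe for statement-based replication:

```sql
-- Hypothetical example: GET_LOCK() may return different results on the
-- master and the slave, so a statement like this is now marked unsafe.
CREATE TABLE t1 (granted INT);
INSERT INTO t1 VALUES (GET_LOCK('my_lock', 0));
SELECT RELEASE_LOCK('my_lock');
```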
-
Jon Olav Hauglid authored
Fixed a problem with the test case when executed with ps-protocol. In that case the conflicting lock is noticed during prepare, not during execution of the insert, leading to a different (but equally appropriate) error message.
-
Mattias Jonsson authored
-
Magne Mahre authored
-
Magne Mahre authored
DELETE IGNORE: the ER_CANT_UPDATE_USED_TABLE_IN_SF_OR_TRG error was set in the diagnostics area when it happened, but the DELETE cleanup code never checked for a non-fatal error condition and thus tried to set the diagnostics area to "ok". This triggered an assert checking that the diagnostics area was empty. The fix is to test whether a non-fatal error condition exists (thd->is_error()) before ok'ing the operation.
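A hypothetical way to hit this error path (table and trigger names invented): a trigger that writes to the table being deleted from raises ER_CANT_UPDATE_USED_TABLE_IN_SF_OR_TRG, and with DELETE IGNORE that condition is non-fatal.

```sql
-- Hypothetical reproduction sketch; names are invented.
CREATE TABLE t1 (a INT);
CREATE TRIGGER trg_t1 BEFORE DELETE ON t1
  FOR EACH ROW INSERT INTO t1 VALUES (1);
INSERT INTO t1 VALUES (1);
-- The trigger cannot update t1 while the DELETE is using it; with IGNORE
-- the error is non-fatal, which the cleanup code now checks via
-- thd->is_error() before reporting "ok".
DELETE IGNORE FROM t1;
```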
-
Jon Olav Hauglid authored
The problem was a "self-deadlock" if the connection issuing INSERT DELAYED had both the global read lock (FLUSH TABLES WITH READ LOCK) and LOCK TABLES mode active. The table being inserted into had to be different from the table(s) locked by LOCK TABLES. For INSERT DELAYED, the connection thread waits until the handler thread has opened and locked its table before returning. But since the global read lock was active, the handler thread would be unable to lock and would wait for the global read lock to go away. So the handler thread would be waiting for the connection thread to release the global read lock while the connection thread was waiting for the handler thread to lock the table. This gave a "self-deadlock" (same connection, different threads). The deadlock would only happen if we also had LOCK TABLES mode since the INSERT otherwise will try to get protection against global read lock before starting the handler thread. It will then notice that the global read lock is owned by the same connection and report ER_CANT_UPDATE_WITH_READLOCK. This patch removes the deadlock by reporting ER_CANT_UPDATE_WITH_READLOCK also if we are inside LOCK TABLES mode. Test case added to delayed.test.
-
- 17 Nov, 2009 9 commits
-
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
hery.ramilison@sun.com authored
-
Kent Boortz authored
-
Kent Boortz authored
-
Mattias Jonsson authored
-
Alexey Kopytov authored
check_group_min_max() checks whether the loose index scan optimization is applicable to a given WHERE condition, that is, whether the MIN/MAX attribute participates only in range predicates comparing the corresponding field with constants. The problem was that it considered the whole predicate suitable for the loose index scan optimization as soon as it encountered a constant as a predicate argument. This is obviously wrong when a constant is the first argument of a predicate that does not satisfy the above condition. Fixed check_group_min_max() so that all arguments of the input predicate are considered when deciding whether it passes the test, even if a constant has already been encountered.
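A hypothetical illustration (table, columns, and predicate invented): the constant is the first argument of a predicate that does not compare the MIN/MAX field directly with a constant, yet the old check accepted it.

```sql
-- Hypothetical example; the essential shape is a constant-first predicate
-- on the MIN/MAX column that is not a plain field-vs-constant range.
CREATE TABLE t1 (a INT, b INT, KEY (a, b));
SELECT a, MIN(b) FROM t1 WHERE 3 > b + 1 GROUP BY a;
```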
-
Anurag Shekhar authored
-
- 13 Nov, 2009 1 commit
-
-
Jorgen Loland authored
init_read_record() (records.cc:274): Item_cond::used_tables_cache was accessed in records.cc#init_read_record() without being initialized. It had not been initialized because it was wrongly assumed that the Item's variables would not be accessed, and hence quick_fix_field() was used instead of fix_fields() to save a few CPU cycles at creation time. The fix is to properly initialize the Item by replacing quick_fix_field() with fix_fields().
-
- 12 Nov, 2009 3 commits
-
-
Alexey Kopytov authored
-
Alexey Kopytov authored
-
Alexey Kopytov authored
-