- 18 Nov, 2009 1 commit
-
-
Magne Mahre authored
DELETE IGNORE The ER_CANT_UPDATE_USED_TABLE_IN_SF_OR_TRG error was set in the diagnostics area when it happened, but the DELETE cleanup code never checked for a non-fatal error condition and tried to set the diagnostics area to "ok" anyway. This triggered an assert checking that the diagnostics area was empty. The fix is to test whether a non-fatal error condition exists (thd->is_error()) before ok'ing the operation.
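A minimal sketch of the guard pattern described above, assuming simplified stand-ins (Session, set_ok, finish_statement) rather than the real server internals:

```cpp
#include <cassert>
#include <string>

// Simplified stand-in for the server's per-session diagnostics area.
struct Session {
    bool error_set = false;          // set when a (possibly non-fatal) error was raised
    std::string message;

    bool is_error() const { return error_set; }
    void set_error(const std::string &msg) { error_set = true; message = msg; }
    void set_ok() {
        // Mirrors the assert that fired in the bug: "ok" may only be set
        // when no error is already recorded.
        assert(!error_set);
        message = "ok";
    }
};

// Statement cleanup: only report "ok" when no error condition
// (fatal or non-fatal) is already present.
void finish_statement(Session &s) {
    if (s.is_error())
        return;          // leave the recorded error in place
    s.set_ok();
}

int main() {
    Session s;
    s.set_error("ER_CANT_UPDATE_USED_TABLE_IN_SF_OR_TRG");
    finish_statement(s); // without the is_error() check this would hit the assert
}
```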
-
- 17 Nov, 2009 3 commits
-
-
Kent Boortz authored
-
Kent Boortz authored
-
Anurag Shekhar authored
-
- 13 Nov, 2009 1 commit
-
-
Jorgen Loland authored
init_read_record() - (records.cc:274) Item_cond::used_tables_cache was accessed in records.cc#init_read_record() without being initialized. It had not been initialized because it was wrongly assumed that the Item's variables would not be accessed, so quick_fix_field() was used instead of fix_fields() to save a few CPU cycles at creation time. The fix is to properly initialize the Item by replacing quick_fix_field() with fix_fields().
mysql-test/r/select.result: Add test for BUG#48052
mysql-test/t/select.test: Add test for BUG#48052
sql/sql_select.cc: Properly initialize Item_cond_and by calling fix_fields (instead of quick_fix_field) when the Item that "ANDs" WHERE clause conditions with HAVING clause conditions is created.
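An illustrative sketch of the "cheap init vs. full init" trade-off; the method names follow the commit message, but the bodies are simplified assumptions, not the server's actual implementation:

```cpp
#include <cstdint>
#include <iostream>

// Simplified illustration of an expression node with a cached bitmap of
// the tables it refers to.
struct Item {
    uint64_t used_tables_cache;   // left untouched by the cheap path
    bool fixed = false;

    // Cheap path from the commit message: marks the item usable but does
    // not compute cached metadata, so used_tables_cache stays undefined.
    void quick_fix_field() { fixed = true; }

    // Full path: computes everything later code may read.
    void fix_fields() {
        used_tables_cache = compute_used_tables();
        fixed = true;
    }

    uint64_t compute_used_tables() const { return 0x1; /* placeholder */ }
};

int main() {
    Item cond;
    cond.fix_fields();   // the fix: readers of used_tables_cache now see a defined value
    std::cout << cond.used_tables_cache << '\n';
}
```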
-
- 12 Nov, 2009 4 commits
-
-
Alexey Kopytov authored
-
Alexey Kopytov authored
-
Alexey Kopytov authored
-
Magne Mahre authored
deadlock was encountered The bug was caused by inconsistent handling of the IGNORE clause. A read from a const table caused a lock timeout (ER_LOCK_TIMEOUT) in InnoDB. Since the IGNORE clause was given, the timeout was converted into a warning instead of an error, thus not populating the diagnostics area. When InnoDB subsequently marked the transaction for rollback, the server asserted because the diagnostics area was empty. This patch consists only of a test case, as the bug itself was fixed by the patch for Bug #46539.
-
- 11 Nov, 2009 2 commits
-
-
Christopher Powers authored
-
Anurag Shekhar authored
on any access The Archive engine in 5.1 (and later) uses a modified version of zlib (azlib). The two versions are incompatible, so a proper upgrade is needed before tables created in 5.0 can be used reliably. This upgrade can be performed using repair, but due to lack of testing it is risky to allow the upgrade for now. This patch addresses only the crashing issue: any attempt to repair is blocked. Eventually repair can be allowed to run through (which will also upgrade the table from the older version to the newer one), but only after thorough testing.
mysql-test/r/archive.result: Updated result file for the test case for bug#47012
mysql-test/std_data/bug47012.ARM: part of an archive table (t1) created in mysql 5.0
mysql-test/std_data/bug47012.ARZ: part of an archive table (t1) created in mysql 5.0
mysql-test/std_data/bug47012.frm: part of an archive table (t1) created in mysql 5.0
mysql-test/t/archive.test: Added test case for bug#47012.
storage/archive/azio.c: Fixed a minor issue (the minor version overwriting the version in the stream structure). Removed the assertion when an older version is found; instead the correct version (2) is set in s->version. If an unknown version is found, the table is marked as corrupt.
storage/archive/ha_archive.cc: Detect the archive version in getShare and mark the table as needing an upgrade. Block open if the archive needs an upgrade. Open for repair could be allowed to upgrade the archive, but this needs to be tested.
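A hedged sketch of the "detect version, block open until upgraded" guard; all types, constants, and function names here are hypothetical, not the real azio/ha_archive code:

```cpp
#include <stdexcept>

// Hypothetical open mode and share structure.
enum class OpenMode { Normal, Repair };

struct ArchiveShare {
    int data_file_version;   // version read from the data file header
    bool needs_upgrade;
};

constexpr int kCurrentVersion = 2;

ArchiveShare get_share(int on_disk_version) {
    return ArchiveShare{on_disk_version, on_disk_version < kCurrentVersion};
}

// Block plain opens of old-format tables; only a (future, fully tested)
// repair path would be allowed to touch them.
void open_table(const ArchiveShare &share, OpenMode mode) {
    if (share.needs_upgrade && mode != OpenMode::Repair)
        throw std::runtime_error("table needs upgrade; run repair once supported");
    // ... normal open continues here ...
}

int main() {
    ArchiveShare old_table = get_share(1);          // older-format table
    try {
        open_table(old_table, OpenMode::Normal);    // blocked instead of crashing
    } catch (const std::exception &) {
        return 0;
    }
    return 1;
}
```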
-
- 10 Nov, 2009 1 commit
-
-
Christopher Powers authored
The crash occurs because SAFEMALLOC is defined for the MySQL server but not for the Archive or Federated engines, resulting in a parameter mismatch between the function prototype and definition for functions using the CALLER_INFO macro.
storage/archive/CMakeLists.txt: Set SAFEMALLOC by default to be consistent with the server.
storage/federated/CMakeLists.txt: Set SAFEMALLOC by default to be consistent with the server.
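A minimal illustration of the failure class, assuming a hypothetical TRACK_CALLERS flag and tracked_malloc function (not the real SAFEMALLOC machinery): when one translation unit is built with the flag and another without it, caller and callee disagree on the function's real signature.

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>

// If the server is built with TRACK_CALLERS and a plugin without it, the
// expansion of CALLER_INFO differs and the call is made with the wrong
// argument list, which is the kind of mismatch the commit message describes.
#ifdef TRACK_CALLERS
#  define CALLER_INFO , __FILE__, __LINE__
void *tracked_malloc(std::size_t size, const char *file, int line) {
    std::printf("alloc %zu bytes at %s:%d\n", size, file, line);
    return std::malloc(size);
}
#else
#  define CALLER_INFO
void *tracked_malloc(std::size_t size) { return std::malloc(size); }
#endif

int main() {
    void *p = tracked_malloc(16 CALLER_INFO);  // expands differently per build
    std::free(p);
}
```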
-
- 09 Nov, 2009 1 commit
-
-
Georgi Kodinov authored
memory The server was doing a bad class typecast, which set a wrong value for the maximum number of items in an internal structure used in equality propagation. Fixed by not doing the wrong typecast and by asserting the type of the Item where it should be done.
-
- 10 Nov, 2009 1 commit
-
-
Georgi Kodinov authored
values We should re-set the access method functions when changing the access method while switching to another index to avoid sorting. Fixed by a little re-engineering: encapsulating all the function assignments in a dedicated function and calling it when flipping the indexes.
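A hypothetical sketch of that refactoring idea: instead of assigning read/next function pointers in several places, one helper sets them all, so switching indexes cannot leave a stale pointer behind. Names and structure are assumptions for illustration only.

```cpp
#include <cstdio>

struct AccessMethods {
    int (*read_first)(int index);
    int (*read_next)(int index);
};

int index_read_first(int idx) { std::printf("first via index %d\n", idx); return 0; }
int index_read_next(int idx)  { std::printf("next via index %d\n", idx);  return 0; }

// The "dedicated function" from the commit message, simplified: every index
// switch goes through here, so all related pointers stay consistent.
void set_access_methods(AccessMethods &am) {
    am.read_first = index_read_first;
    am.read_next  = index_read_next;
}

int main() {
    AccessMethods am{};
    int chosen_index = 2;          // e.g. flipped to avoid a filesort
    set_access_methods(am);        // re-set methods whenever the index changes
    am.read_first(chosen_index);
    am.read_next(chosen_index);
}
```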
-
- 06 Nov, 2009 2 commits
-
-
Alexey Kopytov authored
-
Alexey Kopytov authored
only const tables The problem was caused by two shortcuts in the optimizer that are inapplicable in the ROLLUP case. Normally, when only const tables are involved in a query, the DISTINCT clause can be safely optimized away since the join can produce only one row. Similarly, no temporary table is needed to resolve DISTINCT/GROUP BY/ORDER BY. Both shortcuts are inapplicable when the WITH ROLLUP modifier is present. Fixed by disabling these optimizations for the WITH ROLLUP case.
mysql-test/r/olap.result: Added a test case for bug #48475.
mysql-test/t/olap.test: Added a test case for bug #48475.
sql/sql_select.cc: Disabled const-only table optimizations for the WITH ROLLUP case.
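A toy sketch of the gating idea, assuming a made-up QueryPlan structure rather than the real optimizer state: the const-table shortcuts are applied only when ROLLUP cannot add extra rows.

```cpp
struct QueryPlan {
    bool only_const_tables;
    bool with_rollup;
    bool distinct;
    bool need_tmp_table;
};

void apply_const_table_shortcuts(QueryPlan &plan) {
    // The shortcuts are only safe without WITH ROLLUP.
    if (plan.only_const_tables && !plan.with_rollup) {
        plan.distinct = false;       // at most one joined row, DISTINCT is a no-op
        plan.need_tmp_table = false; // no grouping buffer required
    }
}

int main() {
    QueryPlan p{true, true, true, true};
    apply_const_table_shortcuts(p);
    return (p.distinct && p.need_tmp_table) ? 0 : 1;  // shortcuts correctly skipped
}
```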
-
- 04 Nov, 2009 4 commits
-
-
Timothy Smith authored
-
Timothy Smith authored
Just change mysql_foo to mysql_cv_foo for one cache-id variable name. There was only one bad variable name, present in 5.0 and 5.1, but not in the -pe branch.
-
Timothy Smith authored
-
Georgi Kodinov authored
-
- 03 Nov, 2009 6 commits
-
-
Timothy Smith authored
-
Timothy Smith authored
special chars This script failed when the user chose passwords containing multiple spaces, \, # or ' characters. Now proper escaping and quoting is used in all contexts. The problem also occurred in the Perl version of this script, so it is fixed in both places.
-
Timothy Smith authored
Remove a bash-ism (if ! ...).
-
Davi Arnaut authored
-
Konstantin Osipov authored
Bug#41756 "Strange error messages about locks from InnoDB". In JT_EQ_REF (join_read_key()) access method, don't try to unlock rows in the handler, unless certain that a) they were locked b) they are not used. Unlocking of rows is done by the logic of the nested join loop, and is unaware of the possible caching that the access method may have. This could lead to double unlocking, when a row was unlocked first after reading into the cache, and then when taken from cache, as well as to unlocking of rows which were actually used (but taken from cache). Delegate part of the unlocking logic to the access method, and in JT_EQ_REF count how many times a record was actually used in the join. Unlock it only if it's usage count is 0. Implemented review comments. mysql-test/r/bug41756.result: Add result file (Bug#41756) mysql-test/t/bug41756-master.opt: Use --innodb-locks-unsafe-for-binlog, as in 5.0 just using read_committed isolation is not sufficient to reproduce the bug. mysql-test/t/bug41756.test: Add a test file (Bug#41756) sql/item_subselect.cc: Complete struct READ_RECORD initialization with a new member to unlock records. sql/records.cc: Extend READ_RECORD API with a method to unlock read records. sql/sql_select.cc: In JT_EQ_REF (join_read_key()) access method, don't try to unlock rows in the handler, unless certain that a) they were locked b) they are not used. sql/sql_select.h: Add members to TABLE_REF to count TABLE_REF buffer usage count. sql/structs.h: Update declarations.
-
unknown authored
When a session is closed, all temporary tables of the session are automatically dropped and the drops are binlogged. However, they were binlogged with wrong database names when a temporary table's database name was longer than the current database name, or when no current database was set. Query_log_event's db_len was not updated when Query_log_event's db was set. This patch sets db_len immediately after db is set.
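A simplified illustration of the bug class: a structure caches both a string and its length, and the two must always be updated together. The field names mirror the commit message; the struct itself is a hypothetical stand-in.

```cpp
#include <cstring>
#include <iostream>
#include <string>

struct QueryEvent {
    const char *db = nullptr;
    std::size_t db_len = 0;

    // Single setter so db_len can never go stale relative to db.
    void set_db(const char *new_db) {
        db = new_db;
        db_len = new_db ? std::strlen(new_db) : 0;
    }
};

int main() {
    QueryEvent ev;
    ev.set_db("tmp_schema");
    std::cout << std::string(ev.db, ev.db_len) << '\n';  // prints the full, correct name
}
```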
-
- 02 Nov, 2009 1 commit
-
-
Davi Arnaut authored
Backport an ndb patch: fix a bug with a crash during restart, where an mbyte could incorrectly be skipped, leading to "end of log wo/ finding gci".
-
- 30 Oct, 2009 6 commits
-
-
Timothy Smith authored
Term::ReadKey" Add the missing module import. Also, while here, fix a few glaring problems with the script, and ensure that it behaves properly. It seems this script may have never been working correctly (e.g., reading password didn't chomp() the result, so password was set with \n at the end; comparing the re-typed password to original was done with inverted test). Add END { cleanup(); } block to ensure the script removes temporary working files. Add SIG{INT} / SIG{QUIT} handler. Do a bit of reorganization to make the code easier to understand. Limit failed connection attempts to 3. Use ./bin/mysql if it exists, and then fall back on mysql in PATH (before it assumed 'mysql' in the path). Print a nicer error if 'mysql' can't be called. This has been tested on Windows (ActivePerl from cmd.exe, no cygwin needed) and Linux.
-
Alexey Kopytov authored
-
Alexey Kopytov authored
with temporary tables There were two problems the test case from this bug was triggering:
1. JOIN::rollup_init() was supposed to wrap all constant Items into another object for queries with the WITH ROLLUP modifier, to ensure they are never considered constants and therefore are written into temporary tables if the optimizer chooses to employ them for DISTINCT/GROUP BY handling. However, JOIN::rollup_init() was called before make_join_statistics(), so Items corresponding to fields in const tables could not be handled as intended, which caused all kinds of problems later in query execution. In particular, create_tmp_table() assumed all constant items except "hidden" ones to have been removed earlier by remove_const(), which led to improperly initialized Field objects for the temporary table being created. This is what was causing crashes and valgrind errors in storage engines.
2. Even with the above problem fixed, the query from the test case produced incorrect results due to DISTINCT/GROUP BY optimizations performed by the optimizer that are inapplicable in the WITH ROLLUP case.
Fixed by disabling the inapplicable DISTINCT/GROUP BY optimizations when the WITH ROLLUP modifier is present, and by splitting the const-wrapping part of JOIN::rollup_init() into a separate method which is now invoked after make_join_statistics(), when the const tables are already known.
mysql-test/r/olap.result: Added a test case for bug #48131.
mysql-test/t/olap.test: Added a test case for bug #48131.
sql/sql_select.cc: 1. Disabled inapplicable DISTINCT/GROUP BY optimizations when the WITH ROLLUP modifier is present. 2. Split the const-wrapping part of JOIN::rollup_init() into a separate method.
sql/sql_select.h: Added rollup_process_const_fields() declaration.
-
Georgi Kodinov authored
-
Georgi Kodinov authored
subquery returning multiple rows Error handling was missing when handling subqueries in WHERE and when assigning a SELECT result to a @variable. This caused crashes. Fixed by adding error handling code both to the WHERE condition evaluation and to the assignment to a @variable.
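A hedged sketch of the missing error propagation: evaluation of a scalar subquery can fail (for example, it returns more than one row), and the caller must check for that instead of using an undefined "result". All names here are illustrative, not server code.

```cpp
#include <optional>
#include <string>

// Returns no value when the subquery did not produce exactly one row.
std::optional<int> eval_scalar_subquery(int rows_returned) {
    if (rows_returned != 1)
        return std::nullopt;        // error: subquery returned 0 or multiple rows
    return 42;
}

bool assign_to_user_variable(const std::string & /*name*/, int rows_returned) {
    auto value = eval_scalar_subquery(rows_returned);
    if (!value)
        return false;               // propagate the error instead of crashing
    // ... store *value into the variable ...
    return true;
}

int main() {
    if (!assign_to_user_variable("@x", 3))
        return 1;                   // statement fails cleanly
}
```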
-
Georgi Kodinov authored
having clause... The fix for bug 46184 was incomplete: it did not cover views using temporary tables or multiple tables in a FROM clause. Fixed by reverting the fix for 46184 and adding a more general check that runs at the right execution stage and covers all of the unsupported cases. Now PROCEDURE ANALYZE on a non-top-level SELECT is also forbidden. Updated analyse.test and subselect.test accordingly.
-
- 29 Oct, 2009 1 commit
-
-
Georgi Kodinov authored
Queries with nested outer joins may lead to crashes or bad results because an internal data structure is not handled correctly. The optimizer uses bitmaps of nested JOINs to determine whether a certain table can be placed at a certain position in the JOIN order. It maintains a bitmap describing in which JOINs the last placed table is nested. When it places a table, it makes sure the bit of every JOIN that contains the table is set (because JOINs can be nested): it recursively sets the bit for the next enclosing JOIN when this is the first table in that JOIN, and recursively resets the bit if it is the last table in that JOIN. When it removes a table from the join order it should do the opposite: recursively unset the bit if it is the only remaining table in that JOIN, and recursively set the bit if it is removing the last table of a JOIN. There was an error in how the bits were set for the upper levels: when removing a table, the bit was set for all the enclosing nested JOINs even if there were more tables left in the current JOIN (which practically means that the upper nested JOINs were not affected). Fixed by stopping the recursion at the relevant level.
mysql-test/r/join.result: Bug #42116: test case
mysql-test/t/join.test: Bug #42116: test case
sql/sql_select.cc: Bug #41116: don't go up and set the bits if more tables remain at the current JOIN level
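A hypothetical sketch of this kind of nested-join bookkeeping, not the optimizer's actual code: each nested join has a bit, a parent, and a count of occupied members, and enclosing joins are only walked while the change actually propagates upward.

```cpp
#include <cstdint>
#include <vector>

struct NestedJoin {
    int parent;    // index of the enclosing join, -1 for the outermost level
    int counter;   // occupied members (tables or non-empty sub-joins)
};

void add_table(std::vector<NestedJoin> &joins, uint64_t &bitmap, int join_idx) {
    for (int j = join_idx; j != -1; j = joins[j].parent) {
        bitmap |= (uint64_t{1} << j);
        if (++joins[j].counter > 1)
            break;           // enclosing joins were already counted: stop here
    }
}

void remove_table(std::vector<NestedJoin> &joins, uint64_t &bitmap, int join_idx) {
    for (int j = join_idx; j != -1; j = joins[j].parent) {
        if (--joins[j].counter > 0)
            break;           // the fix's idea: don't touch the upper levels
        bitmap &= ~(uint64_t{1} << j);
    }
}

int main() {
    std::vector<NestedJoin> joins = {{-1, 0}, {0, 0}};  // join 1 nested inside join 0
    uint64_t bitmap = 0;
    add_table(joins, bitmap, 1);
    add_table(joins, bitmap, 1);
    remove_table(joins, bitmap, 1);   // join 1 still has a table: bitmap unchanged
    return bitmap == 0b11 ? 0 : 1;
}
```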
-
- 28 Oct, 2009 1 commit
-
-
Sergey Glukhov authored
test result fix
mysql-test/suite/funcs_1/r/is_columns_mysql.result: test result fix
mysql-test/suite/funcs_1/r/is_statistics.result: test result fix
-
- 27 Oct, 2009 2 commits
-
-
Georgi Kodinov authored
-
Georgi Kodinov authored
inside subquery Re-setting a fulltext index was a no-op if not all the matches of a search had been consumed. This prevented a joined table using a fulltext index in a subquery that requires only 1 row of output (e.g. EXISTS) from working correctly, because the second execution of the subquery found the fulltext index cursor in a wrong state and did not find results. Fixed by making the re-init code in _ftb_init_index_search() re-set open cursors in addition to depleted ones.
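A hypothetical sketch of the re-init idea: when a search is re-executed (for example, for each outer row of an EXISTS subquery), every cursor must be rewound, not only the ones that were read to exhaustion. Types and states are invented for illustration.

```cpp
#include <vector>

enum class CursorState { Closed, Open, Depleted };

struct FulltextCursor {
    CursorState state;
    std::size_t pos;
};

void reinit_search(std::vector<FulltextCursor> &cursors) {
    for (auto &c : cursors) {
        // Before the fix only Depleted cursors were reset; an Open cursor left
        // over from a partially consumed search kept its stale position.
        if (c.state == CursorState::Depleted || c.state == CursorState::Open) {
            c.pos = 0;
            c.state = CursorState::Open;
        }
    }
}

int main() {
    std::vector<FulltextCursor> cursors = {
        {CursorState::Open, 5},       // partially consumed search
        {CursorState::Depleted, 9}    // fully consumed search
    };
    reinit_search(cursors);
    return (cursors[0].pos == 0 && cursors[1].pos == 0) ? 0 : 1;
}
```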
-
- 10 Nov, 2009 3 commits
-
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-