- 20 Aug, 2010 1 commit
-
Georgi Kodinov authored
KILL_BAD_DATA is returned. Two problems were discovered in the LEAST()/GREATEST() functions: 1. The check for a null value must also happen after the second call to val_str() on an argument, because two subsequent calls to the same Item::val_str() may yield different results. Fixed by checking for NULL before dereferencing the string result. 2. While looping over the arguments and evaluating them, the loop should stop if an error has occurred so far or the statement was killed. Fixed by checking for both conditions and bailing out.
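A minimal standalone sketch of the corrected loop shape, assuming nothing about the real server classes; Arg, least(), and the error/killed flags are hypothetical stand-ins for Item::val_str(), null_value, the error state and THD::killed:

    #include <optional>
    #include <string>
    #include <utility>
    #include <vector>

    // Stand-in for an argument item: val_str() may return "no value"
    // (SQL NULL), and two subsequent calls may yield different results,
    // so every call's result must be checked before use.
    struct Arg {
        std::optional<std::string> value;
        std::optional<std::string> val_str() const { return value; }
    };

    // Fixed loop: NULL is checked after *each* val_str() call before
    // dereferencing, and evaluation bails out on error or kill.
    std::optional<std::string> least(const std::vector<Arg>& args,
                                     const bool& killed, const bool& error) {
        std::optional<std::string> best;
        for (const Arg& arg : args) {
            if (error || killed)          // stop looping as soon as the
                return std::nullopt;      // statement failed or was killed
            std::optional<std::string> v = arg.val_str();
            if (!v)                       // NULL before dereference:
                return std::nullopt;      // LEAST(..., NULL, ...) is NULL
            if (!best || *v < *best)
                best = std::move(v);
        }
        return best;
    }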
-
- 16 Aug, 2010 1 commit
-
Sunny Bains authored
-
- 13 Aug, 2010 1 commit
-
Georgi Kodinov authored
A user variable assignment expression that is evaluated in a logical expression context (Item::val_bool()) can be pre-calculated into a temporary table for GROUP BY. However, when the expression value was used after the temporary table had been created, it was re-evaluated instead of being read from the temporary table, because of a missing val_bool_result() method. Fixed by implementing the method.
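A hypothetical, much-simplified model of the result-field pattern the fix relies on; Field, ItemSketch, and the member names are illustrative stand-ins, not the server's real class hierarchy:

    // Plain val_*() methods evaluate the expression; the parallel
    // val_*_result() methods must read the value already materialized
    // in the GROUP BY temporary table. The bug, in this model, is a
    // missing val_bool_result() that let callers fall back to
    // re-evaluation.
    struct Field {
        double stored = 0.0;                 // temp-table column value
        double val_real() const { return stored; }
    };

    struct ItemSketch {
        Field* result_field = nullptr;       // set once materialized
        double current = 0.0;                // live expression value

        double val_real() { return current; }
        bool   val_bool() { return val_real() != 0.0; }

        double val_real_result() { return result_field->val_real(); }
        bool   val_bool_result() { return val_real_result() != 0.0; }
    };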
-
- 06 Aug, 2010 1 commit
-
Gleb Shchepa authored
The CONVERT_TZ function crashes the server when its timezone argument is an empty SET field value. 1) CONVERT_TZ looks the timezone string up in the tz_names hash. 2) The string representation of an empty SET is a String of zero length whose buffer pointer is NULL. 3) If the key argument length is zero, the hash functions compare using the length of the record being compared against. A zero-length String is therefore an invalid argument for the hash search functions, and when the String points to a NULL buffer, hashcmp() fails with a SEGV while accessing that memory. The my_tz_find function has been modified to treat an empty String as an invalid timezone value and skip the unnecessary hash search.
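A hypothetical sketch of the added guard; StringRef and tz_find() are illustrative stand-ins for the server's String and my_tz_find():

    #include <cstddef>

    // An empty SET value arrives as a zero-length String whose buffer
    // pointer may be NULL, so it must be rejected before the hash
    // search, where hashcmp() would otherwise read through the NULL
    // pointer.
    struct StringRef {
        const char* ptr;     // may be NULL for an empty SET value
        size_t      length;  // 0 for an empty SET value
    };

    const char* tz_find(const StringRef& name) {
        if (name.length == 0)   // invalid timezone value: skip the
            return nullptr;     // hash search entirely
        // ... lookup in the tz_names hash would go here ...
        return nullptr;
    }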
-
- 05 Aug, 2010 2 commits
-
Sunny Bains authored
Handle overflow when reading the value from SELECT MAX(C) FROM T. Call ha_innobase::info() after initializing the autoinc value in ha_innobase::open(). Fix for both the builtin and the plugin. rb://402
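A hedged sketch of the overflow idea, assuming an unsigned 64-bit counter; next_autoinc() is an illustrative name, not the actual InnoDB routine:

    #include <cstdint>
    #include <limits>

    // After reading MAX(C) from the index, "max + 1" may wrap to 0,
    // so the value is saturated instead of being allowed to wrap.
    uint64_t next_autoinc(uint64_t current_max) {
        if (current_max == std::numeric_limits<uint64_t>::max())
            return current_max;   // saturate rather than wrap around
        return current_max + 1;
    }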
-
Sunny Bains authored
The bug is a double delete of a BLOB: once via rollback -> btr_cur_pessimistic_delete(), and a second time via purge. The bug is in row_upd_clust_rec_by_insert(): there we relinquish ownership of the non-updated BLOB columns in btr_cur_mark_extern_inherited_fields() before building the row entry that will be inserted and whose contents will be logged in the UNDO log. However, we do not later mark the BLOB columns as INHERITED, which would prevent a possible rollback from freeing the original row's non-updated BLOB entries. This is because the condition that checks for it sits inside "if (node->upd_ext) {...}", and node->upd_ext is non-NULL only if a BLOB column was updated and that column is part of some key ordering (see row_upd_replace()). As a result, the non-updated BLOB columns were deleted during a rollback and subsequently by purge again. rb://413
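A highly simplified, hypothetical model of the fix; Col and mark_nonupdated_blobs_inherited() are illustrative only, as the real change lives in InnoDB's update/rollback path:

    #include <vector>

    // Every non-updated external (BLOB) column must be flagged as
    // inherited by the new row version unconditionally -- not only
    // when node->upd_ext is set -- so that rollback does not free
    // BLOBs that purge will also try to free later.
    struct Col { bool external; bool updated; bool inherited = false; };

    void mark_nonupdated_blobs_inherited(std::vector<Col>& row) {
        for (Col& c : row)
            if (c.external && !c.updated)
                c.inherited = true;  // rollback must leave this BLOB alone
    }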
-
- 04 Aug, 2010 2 commits
-
Jimmy Yang authored
rb://399 approved by Sunny Bains
-
Jimmy Yang authored
foreign keys at once. rb://391 approved by Heikki Z
-
- 03 Aug, 2010 2 commits
-
karen.langford@oracle.com authored
-
Alfranio Correia authored
-
- 02 Aug, 2010 4 commits
-
Alfranio Correia authored
A CREATE...SELECT that fails is written to the binary log if a non-transactional table is updated. If the logging format is ROW, the CREATE statement and the changes are written to the binary log as distinct events, and as a consequence the created table is not rolled back on the slave. In this patch we opted to let the slave go out of sync by not writing the CREATE statement to the binary log. We do this by simply resetting the binary log's cache.
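A hypothetical sketch of the chosen approach: discard the statement's buffered binlog events on failure. BinlogCache and end_create_select() are illustrative stand-ins for the server's binlog statement cache, not its API:

    #include <string>

    struct BinlogCache {
        std::string buffered_events;
        void write(const std::string& ev) { buffered_events += ev; }
        void reset() { buffered_events.clear(); }  // drop the statement
    };

    void end_create_select(BinlogCache& cache, bool failed) {
        if (failed)
            cache.reset();  // nothing from the failed statement is logged,
                            // at the cost of possible slave divergence
        // on success the cache would be flushed to the binary log here
    }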
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
- 01 Aug, 2010 1 commit
-
Gleb Shchepa authored
Bug #54461: crash with longblob and union or update with subquery. Queries may crash if 1) the GREATEST or LEAST function has a mixed list of numeric and LONGBLOB arguments, and 2) the result of such a function goes through an intermediate temporary table. An Item that references a LONGBLOB field has a max_length of UINT_MAX32 == (2^32 - 1). The current implementation of GREATEST/LEAST returns a REAL result for a mixed list of numeric and string arguments (this contradicts the current documentation; the contradiction was discussed and it was decided to update the documentation). The max_length of such a function call was calculated as the maximum of the argument max_length values, i.e. UINT_MAX32. That max_length value of UINT_MAX32 was used as the length of the intermediate temporary table's Field_double holding the GREATEST/LEAST function result. The Field_double::val_str() call on that field allocates a String value, and since a String allocation reserves an additional byte for zero-termination, the size of the String buffer was set to (UINT_MAX32 + 1), which caused an integer overflow: actually, an empty buffer of size 0 was allocated. Initializing the "first" byte of that zero-size buffer with '\0' caused a crash. Item_func_min_max::fix_length_and_dec() has been modified to calculate max_length for the REAL result the same way we do it for arithmetic operators.
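A standalone demonstration of the arithmetic at the heart of the crash (pure illustration; no MySQL code involved):

    #include <cstdint>
    #include <cstdio>

    // A 32-bit length of UINT_MAX32 plus one byte for the terminating
    // '\0' wraps around to zero, so the "successful" allocation is an
    // empty buffer, and writing the first byte scribbles past it.
    int main() {
        uint32_t max_length = UINT32_MAX;      // UINT_MAX32 == 2^32 - 1
        uint32_t alloc_size = max_length + 1;  // wraps to 0
        std::printf("requested %u bytes, asked the allocator for %u\n",
                    max_length, alloc_size);
        return 0;
    }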
-
- 30 Jul, 2010 9 commits
-
Davi Arnaut authored
Fix compiler warnings.
-
Luis Soares authored
-
Georgi Kodinov authored
-
Luis Soares authored
mostly because existing test result files were not updated.
-
Georgi Kodinov authored
In order to detect whether the set of grouping fields in a GROUP BY has changed (and thus a new group starts), the optimizer caches the current values of these fields in a set of Cached_item-derived objects. Cached_item_str, used for caching varchar and TEXT columns, is limited in length by the max_sort_length variable. A String buffer to store the value is allocated in Cached_item_str's constructor with a length of either the maximum length of the string or the value of max_sort_length, whichever is smaller. At compare time, the value being compared was then truncated to the alloced length of the string buffer inside Cached_item_str. This is all fine and valid, but only if the assigned values are not near or equal to the alloced length of this buffer: allocations get rounded up, so the next set of data would no longer match the group buffer, leading to wrong results because of the changed alloced_length. Fixed by preserving the original maximum length in Cached_item_str's constructor and using it, instead of alloced_length, to limit the string being compared. Test case added.
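A hypothetical sketch of the corrected comparison: keep the originally requested limit and truncate against it, never against whatever the allocator rounded up to. CachedStr is an illustrative stand-in for Cached_item_str:

    #include <cstddef>
    #include <string>
    #include <utility>

    struct CachedStr {
        explicit CachedStr(size_t max_sort_len) : max_len(max_sort_len) {}

        // Returns true when the value changed (a new group starts) and
        // caches the truncated value for the next comparison.
        bool cmp_and_cache(const std::string& current) {
            std::string truncated = current.substr(0, max_len);
            bool changed = (truncated != cached);  // never compare
            cached = std::move(truncated);         // against capacity()
            return changed;
        }

        size_t      max_len;  // preserved original limit
        std::string cached;
    };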
-
Davi Arnaut authored
-
Davi Arnaut authored
Fix a regression (due to a typo) which caused spurious incorrect argument errors for long data stream parameters if all forms of logging were disabled (binary, general and slow logs).
-
Davi Arnaut authored
Fix a regression (due to a typo) which caused spurious incorrect argument errors for long data stream parameters if all forms of logging were disabled (binary, general and slow logs).
-
With statement- or mixed-mode logging, "LOAD DATA INFILE" queries are written to the binlog using special types of log events. When mysqlbinlog reads such events, it re-creates the file in a temporary directory with a generated filename and outputs a "LOAD DATA INFILE" query in which the filename is replaced by the generated file. The temporary file is not deleted by mysqlbinlog after termination. To fix the problem, in mixed mode we now switch to row-based logging; in SBR, we document the behavior to remind users that the temporary file is left in a temporary directory.
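A hypothetical sketch of the decision the fix introduces; BinlogFormat and format_for_load_data() are illustrative names, not server symbols:

    // Under MIXED binlog format, LOAD DATA INFILE is logged as row
    // events (so there is no file for mysqlbinlog to re-create); under
    // STATEMENT it stays as-is and the leftover temporary file is a
    // documented limitation.
    enum class BinlogFormat { Statement, Mixed, Row };

    BinlogFormat format_for_load_data(BinlogFormat session_format) {
        if (session_format == BinlogFormat::Mixed)
            return BinlogFormat::Row;  // avoid the temp-file problem
        return session_format;
    }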
-
- 29 Jul, 2010 4 commits
-
Vasil Dimov authored
-
Vasil Dimov authored
-
Alexander Barkov authored
Problem: The original patch did not compile in debug_werror builds due to the wrong printf format ("%d") for size_t variables. Fix: Add a cast to (int).
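The warning class being fixed, in standalone form:

    #include <cstddef>
    #include <cstdio>

    // "%d" expects int, so passing a size_t is wrong on platforms
    // where the two types differ in size.
    int main() {
        size_t n = 42;
        std::printf("%d\n", (int)n);  // the fix: explicit cast to int
        std::printf("%zu\n", n);      // C99/C++11 alternative
        return 0;
    }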
-
/*![:version:] Query Code */, where [:version:] is a sequence of 5 digits representing the MySQL server version (e.g. /*!50200 ... */), is a special comment: the query inside it is executed only on servers whose version is not older than the version appearing in the comment. This leads to a security issue when the slave's version is higher than the master's: a malicious user can escalate his privileges on the slave, because the slave SQL thread runs with SUPER privileges and can therefore execute queries that he/she does not have privileges for on the master. This bug is fixed with the logic below: replace '!' with ' ' in the magic comments that were not applied on the master, so that they become ordinary comments and are not applied on the slave either. Example: INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /*!99999 ,(3)*/ will be binlogged as INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /* 99999 ,(3)*/.
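A hypothetical sketch of the neutralizing rewrite; neutralize_unapplied() is an illustrative stand-in for the server-side rewriting, and it assumes the version digits directly follow "/*!":

    #include <cctype>
    #include <cstddef>
    #include <string>

    // For a versioned comment the master did *not* apply (its version
    // is below the one in the comment), blank out the '!' so the slave
    // sees an ordinary comment and skips it too.
    std::string neutralize_unapplied(std::string query, long master_version) {
        for (size_t i = 0; i + 2 < query.size(); ++i) {
            if (query.compare(i, 3, "/*!") != 0) continue;
            size_t d = i + 3, start = d;
            while (d < query.size() &&
                   std::isdigit((unsigned char)query[d])) ++d;
            if (d == start) continue;            // no version digits
            long ver = std::stol(query.substr(start, d - start));
            if (master_version < ver)
                query[i + 2] = ' ';  // "/*!99999" becomes "/* 99999"
        }
        return query;
    }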
-
- 28 Jul, 2010 2 commits
-
Davi Arnaut authored
The problem is that the fix for Bug#29784 was mistakenly reverted when updating YaSSL to a newer version. The solution is to re-apply the fix, and this time actually add a meaningful test case so that possible regressions are caught.
-
Jimmy Yang authored
to mysql-5.1-innodb plugin.
-
- 24 Jul, 2010 1 commit
-
Davi Arnaut authored
Do not attempt to test the innodb plugin with the embedded server, it's not supported for now.
-
- 26 Jul, 2010 2 commits
-
Sven Sandberg authored
-
Alexander Barkov authored
Problem: The functions my_like_range_xxx() returned badly formed maximum strings for Asian character sets, which caused problems for storage engines. Fix: - Removed a number of my_like_range_xxx() implementations, which were in fact duplicate pieces of code, using the generic my_like_range_mb() instead. - Set the max_sort_char member properly for Asian character sets. - Added unittest/strings/strings-t.c to test that my_like_range_xxx() return well-formed min and max strings. Notes: No additional tests in mysql/t/ are needed; the old tests cover the affected code well enough.
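A hypothetical single-byte illustration of the generic prefix-range idea; the real my_like_range_mb() handles multi-byte sequences and wildcards, and like_range() here is only an illustrative stand-in:

    #include <string>

    // For LIKE 'ab%' the scan range is [ "ab" + min_char...,
    // "ab" + max_char... ]. max_sort_char must be the true maximum
    // weight of the charset, or the upper bound is malformed/too small
    // (the bug for Asian character sets).
    void like_range(const std::string& prefix, size_t key_len,
                    char min_sort_char, char max_sort_char,
                    std::string& min_str, std::string& max_str) {
        min_str = prefix.substr(0, key_len);
        max_str = min_str;
        min_str.append(key_len - min_str.size(), min_sort_char);
        max_str.append(key_len - max_str.size(), max_sort_char);
    }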
-
- 23 Jul, 2010 4 commits
-
Vasil Dimov authored
InnoDB Plugin 1.0.10 has been released with MySQL 5.1.49.
-
Alexey Kopytov authored
prepared statements. Using GROUP_CONCAT() together with the WITH ROLLUP modifier could crash the server. The reason was a combination of several facts:
1. The Item_func_group_concat class stores pointers to ORDER objects representing the columns in the ORDER BY clause of GROUP_CONCAT().
2. find_order_in_list(), called from Item_func_group_concat::setup(), modifies the ORDER objects so that their 'item' member points to the arguments list allocated in the Item_func_group_concat constructor.
3. In some cases (e.g. in JOIN::rollup_make_fields) a copy of the original Item_func_group_concat object could be created with the Item_func_group_concat::Item_func_group_concat(THD *thd, Item_func_group_concat *item) copy constructor. The latter essentially creates a shallow copy of the source object: memory for the arguments array is allocated on thd->mem_root, but the pointers for arguments and ORDER are copied verbatim.
What happens in the test case is that when the query is executed for the first time, after a copy of the original Item_func_group_concat object has been created by JOIN::rollup_make_fields(), find_order_in_list() is called for this new object. It resolves ORDER BY by modifying the ORDER objects so that they point to elements of the arguments array that is local to the cloned object. When thd->mem_root is freed upon completing the execution, the pointers in the ORDER objects become invalid. Those ORDER objects, however, are also shared with the original Item_func_group_concat object, which is preserved between executions of a prepared statement, so the first call to find_order_in_list() for the original object on the second execution dereferences an invalid pointer. The solution is to create copies of the ORDER objects when copying Item_func_group_concat, so as not to leave stale pointers in other instances with different lifecycles.
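A hypothetical model of the fix in miniature: the copy constructor clones the ORDER nodes and rebases their item pointers onto the copy's own argument array instead of copying the source's pointers verbatim. GroupConcatSketch and Order are illustrative stand-ins, not the server's classes:

    #include <cstddef>
    #include <utility>
    #include <vector>

    struct Order { int* item; };  // points into an args array

    struct GroupConcatSketch {
        std::vector<int>   args;   // argument slots (thd->mem_root in reality)
        std::vector<Order> order;  // ORDER BY columns, point into args

        GroupConcatSketch(std::vector<int> a, std::vector<size_t> order_idx)
            : args(std::move(a)) {
            for (size_t i : order_idx) order.push_back(Order{&args[i]});
        }

        // Deep copy: clone ORDER nodes and rebase them onto our own
        // args, so no instance holds pointers into another instance's
        // (possibly shorter-lived) memory.
        GroupConcatSketch(const GroupConcatSketch& src) : args(src.args) {
            for (const Order& o : src.order) {
                size_t idx = o.item - src.args.data();
                order.push_back(Order{&args[idx]});
            }
        }
    };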
-
Dmitry Shulga authored
-
Vasil Dimov authored
-
- 22 Jul, 2010 2 commits
-
kevin.lewis@oracle.com authored
-
kevin.lewis@oracle.com authored
Bug#49542 - Do as the comment suggests and downgrade directory errors from find_file() to a warning unless they happen during a SHOW command.
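A hypothetical sketch of the policy; Ctx and report_dir_error() are illustrative, not server symbols:

    #include <cstdio>

    // Failure to read a directory while scanning for files is an error
    // only when the user asked to list it directly (a SHOW command);
    // elsewhere it is downgraded to a warning.
    enum class Ctx { Show, Other };

    void report_dir_error(const char* dir, Ctx ctx) {
        std::fprintf(stderr, "%s: can't read dir %s\n",
                     ctx == Ctx::Show ? "ERROR" : "Warning", dir);
    }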
-
- 21 Jul, 2010 1 commit
-
Georgi Kodinov authored
-