- 03 Aug, 2010 1 commit
-
-
Alfranio Correia authored
-
- 02 Aug, 2010 4 commits
-
-
Alfranio Correia authored
A CREATE...SELECT that fails is written to the binary log if a non-transactional table is updated. If the logging format is ROW, the CREATE statement and the changes are written to the binary log as distinct events, and as a consequence the created table is not rolled back on the slave. In this patch, we opted to let the slave go out of sync by not writing the CREATE statement to the binary log. We do this by simply resetting the binary log's cache.
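A minimal sketch of the idea described above, assuming a simplified stand-in for the statement's binlog cache (StatementBinlogCache and its members are illustrative names, not the actual MySQL 5.1 types): when the CREATE...SELECT fails, the buffered events are discarded instead of being flushed to the binary log.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative stand-in for the per-statement binary log cache.
struct StatementBinlogCache {
  std::vector<uint8_t> buffered_events;  // events queued for the current statement

  void add_event(const uint8_t *data, std::size_t len) {
    buffered_events.insert(buffered_events.end(), data, data + len);
  }

  // On a failed CREATE...SELECT the cache is simply reset, so neither the
  // CREATE statement nor the row changes are ever written to the binary log.
  void reset() { buffered_events.clear(); }
};
```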
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
- 01 Aug, 2010 1 commit
-
-
Gleb Shchepa authored
Bug #54461: crash with longblob and union or update with subquery.

Queries may crash if 1) the GREATEST or LEAST function has a mixed list of numeric and LONGBLOB arguments and 2) the result of such a function goes through an intermediate temporary table.

An Item that references a LONGBLOB field has a max_length of UINT_MAX32 == (2^32 - 1). The current implementation of GREATEST/LEAST returns a REAL result for a mixed list of numeric and string arguments (this contradicts the current documentation; the contradiction was discussed and it was decided to update the documentation). The max_length of such a function call was calculated as the maximum of the arguments' max_length values (i.e. UINT_MAX32). That UINT_MAX32 value was then used as the length of the intermediate temporary table's Field_double holding the GREATEST/LEAST result. The Field_double::val_str() call on that field allocates a String value, and since a String allocation reserves an additional byte for zero-termination, the String buffer size was set to (UINT_MAX32 + 1). This caused an integer overflow: an empty buffer of size 0 was actually allocated, and initializing the "first" byte of that zero-size buffer with '\0' caused the crash.

Item_func_min_max::fix_length_and_dec() has been modified to calculate max_length for the REAL result the same way it is done for arithmetic operators.
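A standalone illustration of the overflow described above (ordinary C++, not the server's String class): reserving "length + 1" bytes for the trailing '\0' wraps around to zero when the length is already UINT_MAX32, which is why a zero-size buffer ends up being allocated.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  uint32_t max_length = 0xFFFFFFFFU;      // UINT_MAX32, the max_length of a LONGBLOB item
  uint32_t alloc_size = max_length + 1U;  // 32-bit wrap-around: the result is 0
  std::printf("requested %u value bytes, would allocate %u buffer bytes\n",
              (unsigned) max_length, (unsigned) alloc_size);
  return 0;
}
```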
-
- 30 Jul, 2010 9 commits
-
-
Davi Arnaut authored
Fix compiler warnings.
-
Luis Soares authored
-
Georgi Kodinov authored
-
Luis Soares authored
mostly because existing test result files were not updated.
-
Georgi Kodinov authored
In order to be able to check whether the set of grouping fields in a GROUP BY has changed (and thus a new group must be started), the optimizer caches the current values of these fields in a set of Cached_item derived objects. Cached_item_str, used for caching varchar and TEXT columns, is limited in length by the max_sort_length variable: a String buffer to store the value is allocated in Cached_item_str's constructor with an alloced length of either the maximum length of the string or the value of max_sort_length, whichever is smaller. At compare time, the value of the string being compared was then truncated to the alloced length of the string buffer inside Cached_item_str. This is fine and valid, but only as long as the assigned values are not near or equal to the alloced length of this buffer: for such values the alloced length is rounded up, and as a result the next set of data no longer matches the group buffer, leading to wrong results because of the changed alloced_length. Fixed by preserving the original maximum length in Cached_item_str's constructor and using it, instead of the alloced_length, to limit the string being compared. Test case added.
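A hedged sketch of the fix idea, using illustrative types rather than the real Cached_item_str: remember the intended limit at construction time and truncate against it, instead of against the buffer's (possibly rounded-up) allocated length.

```cpp
#include <cstddef>
#include <string>

// Illustrative stand-in for Cached_item_str.
class CachedStrItem {
  std::size_t value_max_length;  // limit fixed at construction (<= max_sort_length)
  std::string buffer;            // cached group value; capacity may be rounded up

public:
  explicit CachedStrItem(std::size_t max_len) : value_max_length(max_len) {
    buffer.reserve(max_len);  // allocated capacity may exceed max_len
  }

  // Returns true when the incoming value starts a new group, comparing only
  // the first value_max_length bytes rather than the allocated length.
  bool changed(const std::string &value) {
    std::string truncated = value.substr(0, value_max_length);
    if (truncated == buffer)
      return false;
    buffer = truncated;  // remember the new group's value
    return true;
  }
};
```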
-
Davi Arnaut authored
-
Davi Arnaut authored
Fix a regression (due to a typo) which caused spurious incorrect argument errors for long data stream parameters if all forms of logging were disabled (binary, general and slow logs).
-
Davi Arnaut authored
Fix a regression (due to a typo) which caused spurious incorrect argument errors for long data stream parameters if all forms of logging were disabled (binary, general and slow logs).
-
With statement- or mixed-mode logging, LOAD DATA INFILE queries are written to the binlog using special types of log events. When mysqlbinlog reads such events, it re-creates the file in a temporary directory with a generated filename and outputs a LOAD DATA INFILE query in which the filename is replaced by the generated file. The temporary file is not deleted by mysqlbinlog after termination. To fix the problem, mixed mode now switches to row-based logging for such statements. For statement-based replication, the behavior is documented to remind users that the temporary file is left behind in the temporary directory.
-
- 29 Jul, 2010 4 commits
-
-
Vasil Dimov authored
-
Vasil Dimov authored
-
Alexander Barkov authored
Problem: The original patch didn't compile in the debug_werror build due to using the "%d" printf format for size_t variables. Fix: add a cast to (int).
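A generic illustration of the warning and the chosen fix (not the actual source line): "%d" expects an int, so passing a size_t argument triggers a format warning, which becomes an error in a -Werror build; casting the value to (int) makes the argument match the specifier.

```cpp
#include <cstddef>
#include <cstdio>

int main() {
  std::size_t n = 42;
  // std::printf("size: %d\n", n);      // format/argument mismatch: warning, error with -Werror
  std::printf("size: %d\n", (int) n);   // the cast matches the %d specifier
  return 0;
}
```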
-
/*![:version:] Query Code */, where [:version:] is a sequence of 5 digits representing the MySQL server version (e.g. /*!50200 ... */), is a special comment: the query inside it is executed only on servers whose version is at least the version appearing in the comment. This leads to a security issue when the slave's version is higher than the master's: a malicious user can elevate his privileges on the slave, because the slave SQL thread runs with SUPER privileges and can therefore execute queries that he/she does not have privileges for on the master. This bug is fixed with the following logic: replace '!' with ' ' in the magic comments that were not applied on the master, so that they become ordinary comments and are not applied on the slave either. Example: 'INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /*!99999 ,(3)*/' will be binlogged as 'INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /* 99999 ,(3)*/'.
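A hedged sketch of the rewrite rule described above (self-contained code, not the server's actual parser or binlog writer): any /*!NNNNN ... */ comment whose 5-digit version is higher than the current server's version has its '!' replaced by ' ' before the statement is written to the binary log, turning it into an ordinary comment on the slave.

```cpp
#include <cctype>
#include <string>

// Returns the query with version comments that were not applied on this
// server neutralized into ordinary comments.
std::string neutralize_version_comments(std::string query, long server_version) {
  for (std::string::size_type pos = 0;
       (pos = query.find("/*!", pos)) != std::string::npos; pos += 3) {
    std::string::size_type digits = pos + 3, end = digits;
    while (end < query.size() && std::isdigit((unsigned char) query[end]))
      ++end;
    if (end - digits == 5 && std::stol(query.substr(digits, 5)) > server_version)
      query[pos + 2] = ' ';  // "/*!99999" becomes "/* 99999"
  }
  return query;
}
```

For example, with server_version = 50145 the statement from the commit message keeps /*!10000, (2)*/ intact while /*!99999 ,(3)*/ becomes /* 99999 ,(3)*/.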
-
- 28 Jul, 2010 2 commits
-
-
Davi Arnaut authored
The problem is that the fix for Bug#29784 was mistakenly reverted when updating YaSSL to a newer version. The solution is to re-apply the fix and this time add a meaningful test case so that possible regressions are caught.
-
Jimmy Yang authored
to mysql-5.1-innodb plugin.
-
- 24 Jul, 2010 1 commit
-
-
Davi Arnaut authored
Do not attempt to test the InnoDB plugin with the embedded server; it is not supported for now.
-
- 26 Jul, 2010 2 commits
-
-
Sven Sandberg authored
-
Alexander Barkov authored
Problem: The functions my_like_range_xxx() returned badly formed maximum strings for Asian character sets, which caused problems for storage engines.
Fix:
- Removed a number of my_like_range_xxx() implementations, which were in fact duplicate pieces of code.
- Using the generic my_like_range_mb() instead.
- Setting the max_sort_char member properly for Asian character sets.
- Adding unittest/strings/strings-t.c, to test that my_like_range_xxx() returns well-formed min and max strings.
Notes:
- No additional tests in mysql/t/; the old tests cover the affected code well enough. A sketch of what these functions compute follows below.
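A hedged, single-byte sketch of what a my_like_range_xxx() function is expected to produce (the real implementations are charset-aware and multi-byte safe): for a pattern such as 'abc%', the minimum key pads the literal prefix with the lowest sort value and the maximum key pads it with max_sort_char, and both strings must stay well formed so the storage engine can use them as index bounds.

```cpp
#include <string>

// Illustrative single-byte LIKE-range computation; escape handling omitted.
void like_range_simple(const std::string &pattern, std::string::size_type res_length,
                       char max_sort_char,
                       std::string *min_str, std::string *max_str) {
  std::string prefix;
  for (char c : pattern) {
    if (c == '%' || c == '_')  // wildcards end the literal prefix
      break;
    prefix += c;
  }
  if (prefix.size() > res_length)
    prefix.resize(res_length);
  *min_str = prefix + std::string(res_length - prefix.size(), '\0');           // lowest padding
  *max_str = prefix + std::string(res_length - prefix.size(), max_sort_char);  // highest padding
}
```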
-
- 23 Jul, 2010 4 commits
-
-
Vasil Dimov authored
InnoDB Plugin 1.0.10 has been released with MySQL 5.1.49.
-
Alexey Kopytov authored
Using GROUP_CONCAT() together with the WITH ROLLUP modifier in prepared statements could crash the server. The reason was a combination of several facts:
1. The Item_func_group_concat class stores pointers to ORDER objects representing the columns in the ORDER BY clause of GROUP_CONCAT().
2. find_order_in_list(), called from Item_func_group_concat::setup(), modifies the ORDER objects so that their 'item' member points into the arguments list allocated in the Item_func_group_concat constructor.
3. In some cases (e.g. in JOIN::rollup_make_fields) a copy of the original Item_func_group_concat object could be created by using the Item_func_group_concat::Item_func_group_concat(THD *thd, Item_func_group_concat *item) copy constructor. The latter essentially creates a shallow copy of the source object: memory for the arguments array is allocated on thd->mem_root, but the pointers for arguments and ORDER are copied verbatim.
What happens in the test case is that when executing the query for the first time, after a copy of the original Item_func_group_concat object has been created by JOIN::rollup_make_fields(), find_order_in_list() is called for this new object. It then resolves ORDER BY by modifying the ORDER objects so that they point to elements of the arguments array which is local to the cloned object. When thd->mem_root is freed upon completing the execution, the pointers in the ORDER objects become invalid. Those ORDER objects, however, are also shared with the original Item_func_group_concat object, which is preserved between executions of a prepared statement. So the first call to find_order_in_list() for the original object on the second execution tries to dereference an invalid pointer.
The solution is to create copies of the ORDER objects when copying an Item_func_group_concat, so as not to leave any stale pointers in other instances with different life cycles.
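A hedged sketch of the fix idea with illustrative types (not the real Item_func_group_concat or ORDER structures): the copy constructor clones the ORDER entries and repoints them into the copy's own arguments array, instead of sharing pointers into memory owned by, and freed with, another object.

```cpp
#include <cstddef>
#include <vector>

struct Arg   { int id; };     // stand-in for an argument Item
struct Order { Arg *item; };  // stand-in for an ORDER BY entry

struct GroupConcat {
  std::vector<Arg>   args;   // arguments array owned by this instance
  std::vector<Order> order;  // ORDER BY entries pointing into args

  explicit GroupConcat(std::size_t n) : args(n), order() {}

  // Deep-copy: clone the ORDER entries and repoint each one from the
  // source's arguments array into this object's own array, so no entry
  // is left dangling once the source's memory is freed.
  GroupConcat(const GroupConcat &src) : args(src.args), order(src.order) {
    for (std::size_t i = 0; i < order.size(); ++i) {
      std::ptrdiff_t offset = src.order[i].item - src.args.data();
      order[i].item = args.data() + offset;
    }
  }
};
```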
-
Dmitry Shulga authored
-
Vasil Dimov authored
-
- 22 Jul, 2010 2 commits
-
-
kevin.lewis@oracle.com authored
-
kevin.lewis@oracle.com authored
Bug#49542 - Do as the comment suggests and downgrade directory errors from find_file() to a warning unless they happen during a SHOW command.
-
- 21 Jul, 2010 10 commits
-
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
SHOW DATABASES LIKE ... was not converting the pattern to lowercase for the comparison, as the documentation suggests it should. Fixed it to behave similarly to SHOW TABLES LIKE ... and updated the lowercase_table2 test case that was failing on Mac OS X.
-
Alexey Kopytov authored
-
Joerg Bruehe authored
-