- 16 Jan, 2013 2 commits
-
-
Anirudh Mangipudi authored
Problem: When a view with a specific character set and collation is created on top of another view with a different character set and collation, restoring the dump results in an "illegal mix of collations" error.
Solution: To avoid this mix of collations, the datatype used in the generated CREATE TABLE is hardcoded as "tinyint NOT NULL". This does not matter because the table created this way is dropped at runtime; tinyint is used specifically to avoid hitting row-size limits.
-
Neeraj Bisht authored
Consider the following query:

    SELECT f_1,...,f_m, AGGREGATE_FN(C) FROM t1 WHERE ... GROUP BY ...

Loose index scan ("Using index for group-by") can be used for this query if there is an index 'i' covering all fields in the select list, and the GROUP BY clause makes up a prefix f1,...,fn of 'i'. Furthermore, according to rule NGA2 of get_best_group_min_max(), the WHERE clause must contain a conjunction of equality predicates for all fields fn+1,...,fm.

The problem in this bug was that a query with a WHERE clause that broke NGA2 was not detected and therefore used loose index scan. This led to wrong results. The query had an index covering (c1,c2) and had:

    WHERE (c1 = 1 AND c2 = 'a') OR (c1 = 2 AND c2 = 'b') GROUP BY c1
or
    WHERE (c1 = 1) OR (c1 = 2 AND c2 = 'b') GROUP BY c1

This WHERE clause cannot be transformed into a conjunction of equality predicates.

The solution is to introduce another rule, NGA3, that complements NGA2. NGA3 says that if a gap field (a field between those listed in GROUP BY and C in the index) has a predicate, then there can be only one range in the query. This requirement is stricter than it has to be in theory. BUG 15947433 will deal with that.
-
- 15 Jan, 2013 1 commit
-
-
Neeraj Bisht authored
Problem: In case of a blob data field, UNION ALL does not give the correct result.
Analysis: In a MyISAM table, when we do not want a particular key checked for distinctness, we clear its bit in key_map. While writing a record into a MyISAM table, we check for duplicates with the help of the keys, by checking whether each key is active in key_map before writing the record. In the case of a blob field, however, the distinct check goes through the unique constraint, where we do not check whether that unique key is active in key_map.
Solution: Before checking for distinctness, check whether the key is active in key_map.
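A rough standalone sketch of the idea behind the fix (not the actual MyISAM code): consult the active-key bitmap before using a unique key for the distinct check. The names KeyMap, is_key_active and should_check_unique are illustrative only.

    #include <bitset>
    #include <cstdio>

    // Illustrative stand-in for MyISAM's key_map: bit N set => key N is active.
    using KeyMap = std::bitset<64>;

    bool is_key_active(const KeyMap &key_map, unsigned keynr) {
        return keynr < key_map.size() && key_map.test(keynr);
    }

    // Sketch of the corrected write path: the unique (blob) constraint is
    // consulted only when its key is marked active, mirroring how regular
    // keys are already handled.
    bool should_check_unique(const KeyMap &key_map, unsigned unique_keynr) {
        return is_key_active(key_map, unique_keynr);
    }

    int main() {
        KeyMap key_map;                 // all keys disabled, as for UNION ALL
        std::printf("check unique? %d\n", should_check_unique(key_map, 0)); // 0
        key_map.set(0);                 // key 0 re-enabled
        std::printf("check unique? %d\n", should_check_unique(key_map, 0)); // 1
    }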
-
- 14 Jan, 2013 2 commits
-
-
Neeraj Bisht authored
MANY WILDCARDS CAUSES A SEGFAULT
Backport from 5.6 and trunk.
-
WITH AN ASSERTION
Recently we added a check to handle the kill-query signal for long-running queries. While the query interruption is reported, we must ensure the cursor is restored to a proper state for the HANDLER interface to work correctly. A normal SELECT query does not face this problem because, on receiving the interrupt, the query is aborted and a new SELECT query results in re-initialization (including the cursor). rb://1836. Approved by Marko.
-
- 12 Jan, 2013 1 commit
-
-
Nisha Gopalakrishnan authored
Analysis:
--------
The REPLACE operation produces incorrect output when a user variable is supplied as an argument and the operation is performed on multiple rows. Consider the example below:

    SET @var='(( 00000000 ++ 00000000 ))';
    SELECT REPLACE(@var, '00000000', table_name) AS a
      FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='mysql';

Invalid output:

    +---------------------------------------+
    | REPLACE(@var, '00000000', TABLE_NAME) |
    +---------------------------------------+
    | (( columns_priv ++ columns_priv ))    |
    | (( columns_priv ++ columns_priv ))    |
    ......
    ......
    | (( columns_priv ++ columns_priv ))    |
    | (( columns_priv ++ columns_priv ))    |
    | (( columns_priv ++ columns_priv ))    |
    +---------------------------------------+

The user argument supplied as the subject string of the REPLACE operation is overwritten after the first iteration with '(( columns_priv ++ columns_priv ))'. This overwritten string is then used for the subsequent REPLACE iterations. Since the pattern string is no longer found, the invalid output shown above is returned.

Fix:
---
If Alloced_length is zero, realloc() and create a copy of the string, which is then used for the REPLACE operation on every iteration.
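A minimal standalone sketch of the aliasing problem and the copy-before-modify idea (plain std::string instead of the server's String class; "realloc when Alloced_length is zero" is paraphrased here as "take a private copy"):

    #include <initializer_list>
    #include <iostream>
    #include <string>

    // Replace every occurrence of `from` with `to`. `subject` is taken by
    // value, i.e. a private copy -- the equivalent of reallocating the user
    // variable's string before operating on it, so later rows still see the
    // original pattern.
    std::string replace_all(std::string subject, const std::string &from,
                            const std::string &to) {
        for (std::string::size_type pos = 0;
             (pos = subject.find(from, pos)) != std::string::npos;
             pos += to.size())
            subject.replace(pos, from.size(), to);
        return subject;
    }

    int main() {
        const std::string var = "(( 00000000 ++ 00000000 ))";  // the "user variable"
        for (const char *table : {"columns_priv", "db", "user"})
            std::cout << replace_all(var, "00000000", table) << '\n';
        // Each row is substituted from the original pattern, not from the
        // previous row's result.
    }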
-
- 11 Jan, 2013 1 commit
-
-
Aditya A authored
INCLUDES FIRST PARTITION WHEN PRUNING
PROBLEM
-------
TO_DAYS()/TO_SECONDS() can return NULL for invalid dates, and such rows are stored in the first partition; therefore the first partition was always included in the scan when a range was specified.
FIX
---
The fix is a small optimization which prunes the scanning of the NULL/first partition if the dates specified in the range are valid and lie in the same year and month. The TO_SECONDS() function is not supported in 5.1, so it was removed from the fix and the test scripts for the mysql-5.1 version.
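An illustrative sketch of the guard described above (not the server's pruning code): skip the NULL-catching first partition only when both range endpoints are valid dates in the same year and month. The Date struct and is_valid_date() are assumptions made for this example, and the validity check is deliberately rough.

    #include <cstdio>

    struct Date { int year, month, day; };   // hypothetical plain date triple

    // Very rough validity check for the sketch (the real server code is stricter;
    // Feb 29 is accepted regardless of leap year here).
    bool is_valid_date(const Date &d) {
        static const int days_in[] = {31,28,31,30,31,30,31,31,30,31,30,31};
        return d.year > 0 && d.month >= 1 && d.month <= 12 &&
               d.day >= 1 && d.day <= days_in[d.month - 1] + (d.month == 2);
    }

    // Prune the first (NULL-catching) partition only when TO_DAYS() cannot
    // return NULL for either endpoint and both endpoints stay within one
    // year and month, per the commit message.
    bool can_prune_first_partition(const Date &lo, const Date &hi) {
        return is_valid_date(lo) && is_valid_date(hi) &&
               lo.year == hi.year && lo.month == hi.month;
    }

    int main() {
        std::printf("%d\n", can_prune_first_partition({2013, 1, 10}, {2013, 1, 20})); // 1
        std::printf("%d\n", can_prune_first_partition({2013, 1, 10}, {2013, 2, 1}));  // 0
        std::printf("%d\n", can_prune_first_partition({2013, 2, 30}, {2013, 2, 28})); // 0
    }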
-
- 10 Jan, 2013 2 commits
-
-
Chaithra Gopalareddy authored
INCORRECT RESULTS
This is a backport of the fix for Bug#13068506.
-
Praveenkumar Hulakund authored
AVAILABLE MEMORY IS TOO LOW
Analysis:
---------
In the function "mysql_make_view", "table->view" is initialized after parsing the view definition (using File_parser::parse). If the "::parse" function fails, control moves to the label "err:", where there is an assert (table->view == thd->lex). This assert fails when "::parse" fails, because table->view is not initialized yet. File_parser::parse fails if the data being parsed is incorrect/corrupted or when memory allocation fails. In this scenario it is failing because of a memory allocation failure.
Fix:
---------
In case of a failure in "File_parser::parse", moving to the label "err:" is incorrect. Modified the code to move to the label "end:" instead.
-
- 09 Jan, 2013 1 commit
-
-
Sunny Bains authored
Backport fix from mysql-5.6.
-
- 08 Jan, 2013 1 commit
-
-
hery.ramilison@oracle.com authored
-
- 07 Jan, 2013 2 commits
-
-
Satya Bodapati authored
DIAGNOSTICS_AREA::SET_OK_STATUS
Use DBUG_RETURN() instead of a plain return if DBUG_ENTER() is used in the function. This patch fixes the Windows pb2 failure on mysql-5.1. Approved by Marko. rb#1792
-
Nirbhay Choubey authored
I_MAIN.CTYPE_UTF8 FOR MACOSX10.6 FOR 5.1
Part 2: Fix for test failures on Windows.
-
- 04 Jan, 2013 2 commits
-
-
Satya Bodapati authored
DIAGNOSTICS_AREA::SET_OK_STATUS
The test fails on the 5.1 valgrind build because of a close(-1) system call. Fixed by adding extra checks for a valid file descriptor. Approved by Vasil (Calvin). rb#1792
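A tiny C-style sketch of the kind of guard described (variable names in the server differ): only close a descriptor that was actually opened, since close(-1) is flagged by tools such as Valgrind.

    #include <unistd.h>

    // Close the descriptor only if it was actually opened.
    static void close_if_open(int &fd) {
        if (fd >= 0) {
            close(fd);
            fd = -1;            // avoid a double close later
        }
    }

    int main() {
        int fd = -1;            // e.g. open() failed earlier
        close_if_open(fd);      // safe no-op instead of close(-1)
        return 0;
    }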
-
Nirbhay Choubey authored
I_MAIN.CTYPE_UTF8 FOR MACOSX10.6 FOR 5.1
While converting a directory name to a file name, a file separator (FN_LIBCHAR) might get appended to the resulting file name. This can result in an off-by-one error when the length of the input string is equal to FN_REFLEN; in that case the terminating '\0' gets written beyond the buffer allocated to store the result. Fixed by incrementing the dst buffer size by 1. As extra safety, switched to strnmov() and added a debug assert checking the length of the input file name. No test case is added, as the scenario is already covered by the test cases added for the bugs in the description.
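A standalone sketch of the off-by-one and its fix, using standard functions instead of the server's strnmov(); FN_REFLEN and FN_LIBCHAR are stand-ins for the real constants, and the exact buffer sizing here is chosen for this sketch rather than copied from the patch.

    #include <cassert>
    #include <cstdio>
    #include <cstring>

    enum { FN_REFLEN = 512 };      // stand-in for the server's path-length constant
    const char FN_LIBCHAR = '/';   // file separator

    // Convert a directory name into a file name, appending a trailing separator
    // when one is missing. A FN_REFLEN-byte input needs one extra byte for the
    // possible separator plus one more for '\0' -- sizing the buffer one byte
    // too small is exactly the off-by-one described above.
    void dir_to_filename(const char *dir, char (&dst)[FN_REFLEN + 2]) {
        size_t len = strlen(dir);
        assert(len <= FN_REFLEN);              // mirrors the added debug assert
        memcpy(dst, dir, len);
        if (len == 0 || dst[len - 1] != FN_LIBCHAR)
            dst[len++] = FN_LIBCHAR;           // worst case: len == FN_REFLEN + 1
        dst[len] = '\0';                       // still inside the FN_REFLEN + 2 buffer
    }

    int main() {
        char in[FN_REFLEN + 1];
        memset(in, 'a', FN_REFLEN);
        in[FN_REFLEN] = '\0';                  // input of exactly FN_REFLEN bytes
        char out[FN_REFLEN + 2];
        dir_to_filename(in, out);
        std::printf("%zu\n", strlen(out));     // FN_REFLEN + 1
    }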
-
- 02 Jan, 2013 1 commit
-
-
Venkatesh Duggirala authored
Problem: If the disk becomes full while writing to the binlog, the server instance hangs until someone frees up space. After the user frees up disk space, the server crashes with an assert (m_status != DA_EMPTY).
Analysis: wait_for_free_space is called in an infinite loop, i.e. the server instance hangs until someone frees up space, so there is no need to set a status bit in the diagnostics area.
Fix: Replace my_error/my_printf_error with sql_print_warning(), which prints the warning to the error log.
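A hedged sketch of the pattern (the real code lives around wait_for_free_space in mysys): a disk-full retry loop that only logs a warning instead of raising an error into the statement's diagnostics area. Plain fprintf stands in for sql_print_warning, ENOSPC detection is simplified, and partial writes are not handled.

    #include <cerrno>
    #include <cstdio>
    #include <unistd.h>

    // Retry a write that failed with "disk full" (ENOSPC). Because the loop may
    // spin until space is freed, it must not push an error into the diagnostics
    // area; it only prints a warning to the error log, which is the essence of
    // the fix. Partial writes are omitted for brevity.
    ssize_t write_with_retry(int fd, const void *buf, size_t len) {
        for (;;) {
            ssize_t n = write(fd, buf, len);
            if (n >= 0 || errno != ENOSPC)
                return n;                     // success, or an unrelated error
            fprintf(stderr,
                    "Warning: disk is full, waiting for someone to free space...\n");
            sleep(60);                        // then try again
        }
    }

    int main() {
        const char msg[] = "binlog event\n";
        return write_with_retry(STDOUT_FILENO, msg, sizeof(msg) - 1) < 0;
    }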
-
- 01 Jan, 2013 1 commit
-
-
Kent Boortz authored
-
- 29 Dec, 2012 1 commit
-
-
mysql-builder@oracle.com authored
No commit message
-
- 28 Dec, 2012 1 commit
-
-
Venkatesh Duggirala authored
Details of BUG#11746142: CALLING MYSQLD WHILE ANOTHER INSTANCE IS RUNNING, REMOVES PID FILE
Fix: Before removing the pid file, ensure it was created by the same process; leave it intact otherwise.
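A standalone sketch of the guard (standard C++ and POSIX calls; the server's own helpers differ, and the path used in main is hypothetical):

    #include <cstdio>
    #include <fstream>
    #include <unistd.h>

    // Remove the pid file only if it still contains our own pid, i.e. it was
    // created by this process. A second mysqld that failed to start must not
    // delete the pid file of the instance that is already running.
    void remove_pid_file(const char *path) {
        long pid_in_file = -1;
        std::ifstream in(path);
        if (in >> pid_in_file && pid_in_file == static_cast<long>(getpid()))
            std::remove(path);
        // otherwise: leave the file intact, it belongs to another process
    }

    int main() {
        remove_pid_file("/tmp/mysqld_sketch.pid");   // hypothetical path
    }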
-
- 27 Dec, 2012 2 commits
-
-
Nirbhay Choubey authored
Some shell interpreters do not support the '-e' test primary for constructing conditions.

    man test 1 (on S10)
    ...skip...
    -e file   True if file exists. (Not available in sh.)
    ...skip...

Hence, checking for the existence of a file using '-e' can result in a syntax error in such shells. Fixed by replacing it with '-f'.
-
Mattias Jonsson authored
-
- 26 Dec, 2012 2 commits
-
-
Chaithra Gopalareddy authored
DOS ATTACKS
Problem: For a detailed description, see Bug#42502. This bug is a duplicate of Bug#42502; the complete fix for Bug#42502 was not made as proposed, hence the bug still persists.
Fix: Make the changes as originally proposed for the fix of Bug#42502, which is to remove the memory allocation done before we actually check for any errors.
-
akhil.mohan@oracle.com authored
-
- 24 Dec, 2012 2 commits
-
-
Annamalai Gurusami authored
Fixing a pb2 issue: there is some difference between the EXPLAIN output on my local machine and on the pb2 machines.
-
Chaithra Gopalareddy authored
TO SIGNED
Problem: When we join the types of fields in a UNION, we usually upgrade the datatypes to the largest one present in the query. In the case of MEDIUMINT this is not happening.
Analysis: When joined with the types LONG and LONGLONG, MEDIUMINT should get upgraded to LONG and LONGLONG respectively. With respect to the given query, the constant '1' is created internally as a LONGLONG with the SIGNED flag enabled. As a result, while combining types for the field, LONGLONG together with MEDIUMINT first gets converted to LONG; LONG together with the MEDIUMINT of the third SELECT then gets converted to MEDIUMINT. The SIGNED flag is taken from the first field. As a result, the final result is a SIGNED MEDIUMINT.
Fix: While joining types, MEDIUMINT with LONGLONG and MEDIUMINT with LONG are now converted to LONGLONG and LONG respectively. Also made some changes for FLOAT and DOUBLE.
-
- 20 Dec, 2012 1 commit
-
-
Tor Didriksen authored
DBUG_ENTER and DBUG_LEAVE must *always* match, otherwise all subsequent DBUG_ENTER calls will be poking into undefined stack frames.
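For illustration, a self-contained analogue of the pairing rule using a simple stack; the real DBUG macros keep a per-thread stack of frames, and a mismatched enter/leave leaves that stack pointing at a frame that no longer exists. The TOY_ macros below are stand-ins, not the actual DBUG API.

    #include <cstdio>
    #include <vector>

    // Toy stand-ins for DBUG_ENTER / DBUG_RETURN: each ENTER pushes a frame
    // name, each RETURN pops it. Exiting with a plain `return` would leave the
    // frame on the stack, so later pops would refer to a function that has
    // already gone out of scope -- the "undefined stack frame" problem.
    static std::vector<const char*> dbug_stack;

    #define TOY_DBUG_ENTER(name)  dbug_stack.push_back(name)
    #define TOY_DBUG_RETURN(val)  do { dbug_stack.pop_back(); return (val); } while (0)

    int area(int w, int h) {
        TOY_DBUG_ENTER("area");
        if (w <= 0 || h <= 0)
            TOY_DBUG_RETURN(0);   // every exit path must use the RETURN macro
        TOY_DBUG_RETURN(w * h);
    }

    int main() {
        std::printf("%d %d\n", area(3, 4), area(-1, 4));
        std::printf("stack depth after calls: %zu\n", dbug_stack.size());  // 0
    }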
-
- 21 Dec, 2012 1 commit
-
-
prabakaran thirumalai authored
Analysis: When the thread cache is enabled, thd->start_utime is not properly initialized when a thread is picked from the thread cache. This breaks the quota management mechanism: THD::time_out_user_resource_limits() resets m_user_connect->conn_per_hour to 0 based on thd->start_utime.
Fix: Initialize start_utime when a cached thread is reused.
Notes: Re-enabled tests which had been disabled because of this issue.
-
- 18 Dec, 2012 3 commits
-
-
Vasil Dimov authored
This is a follow-up to the fix for Bug#14628410 ASSERTION `! IS_SET()' FAILED IN DIAGNOSTICS_AREA::SET_OK_STATUS (satya.bodapati@oracle.com-20121213132316-5joz4phltx9yhjs7). In innobase_mysql_tmpfile(): allocate/open the file after the return(-1); statement.
-
Ahmad Abdullateef authored
IN QUERY CACHE CODE
DESCRIPTION: MySQL Server crashes sporadically when query caching is on and the server has high contention among clients.
ANALYSIS: Scenario 1: In Query_cache::move_by_type(), when handling a RESULT block or its related blocks, a write lock is acquired on its parent Query block. However, the next and prev pointers are cached in local variables before the lock is acquired. Under extremely high contention there is a possibility that Query_cache::append_result_data() is operating on the same query block and, as a consequence, appends a new Result block to the end of the query's linked list of Result blocks. This manipulates the next and prev pointers of the block being processed in move_by_type(), while the local pointers still point to the previous nodes, thereby causing data corruption that leads to the crash.
FIX: Scenario 1: The next and prev pointers are now accessed only after the lock is acquired in Query_cache::move_by_type().
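A simplified standalone sketch of the locking pattern in the fix, using std::mutex and a toy doubly-linked node instead of the query cache's own structures: the neighbour pointers are read only after the lock is held, so a concurrent append cannot invalidate them.

    #include <mutex>

    struct Block {
        Block *next = nullptr;
        Block *prev = nullptr;
    };

    struct Query {
        std::mutex lock;              // stands in for the query block's write lock
        Block *result_head = nullptr;
    };

    // Unlink the first result block of `q`. The neighbour pointers are loaded
    // *after* acquiring the lock; caching them in locals beforehand is exactly
    // the race that corrupted the list under high contention.
    Block *detach_first_result(Query &q) {
        std::lock_guard<std::mutex> guard(q.lock);
        Block *b = q.result_head;
        if (!b)
            return nullptr;
        Block *next = b->next;        // safe: no concurrent append can run now
        if (next)
            next->prev = nullptr;
        q.result_head = next;
        b->next = b->prev = nullptr;
        return b;
    }

    int main() {
        Query q;
        Block b1, b2;
        b1.next = &b2;
        b2.prev = &b1;
        q.result_head = &b1;
        detach_first_result(q);       // q.result_head now points at b2
    }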
-
Vasil Dimov authored
SAME VERSION NUMBER 1.0.17
Now that InnoDB/InnoDB Plugin is no longer developed and distributed separately from the MySQL server, it does not need its own version number; use the MySQL version instead. Removing the version altogether is not feasible because the config variable 'innodb_version' cannot be removed in GA branches. Reviewed by: Marko (rb#1751)
-
- 14 Dec, 2012 2 commits
-
-
Ramil Kalimullin authored
Problem: the tag buffer can overflow. Fix: a bounds check was added.
-
Inaam Rana authored
BUF_PAGE_GET_GEN REDUNDANT?
When decompressing a compressed page that had already been accessed in the buffer pool, do not attempt to merge buffered changes. rb://1711, approved by: Marko Makela
-
- 13 Dec, 2012 3 commits
-
-
Ravinder Thakur authored
File names with a colon are being disallowed because of the Alternate Data Stream (ADS) feature of NTFS, which could be misused. ADS allows data to be written to alternate streams of a normal file. The data in alternate streams cannot be seen by normal tools on Windows (explorer, cmd.exe). As a result, someone can use this feature to hide large amounts of data in alternate streams, and admins will have no easy way of figuring out which files are using that disk space.

The fix also disallows ADS in the scenarios where a file name is passed as some dynamic variable. An important thing about the fix is that it DOES NOT disallow ADS file names if they are not dynamic (i.e. if the file is created using some option that needs local access to the MySQL server, for example the error log file). The reasoning is that if some file-related MySQL option requires access to the local machine (it is not dynamic), then the user can very well create data in ADS by some other means. This fix covers only those scenarios which could allow users to create data in ADS over the wire.

File names with a colon are disallowed only on Windows. UNIX (Linux in particular) supports NTFS, but it will not be a common scenario for someone to configure an NTFS file system to store MySQL data on Linux.

Changes in the file bug11761752-master.opt are needed due to bug number 15937938.
-
Satya Bodapati authored
The error codes returned from the merge file/temp file creation functions were ignored. Use the return codes of row_merge_file_create() and innobase_mysql_tmpfile() to return the error to the caller if file creation fails. Approved by Marko. rb#1618
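A generic sketch of the pattern, using POSIX mkstemp() rather than the InnoDB helpers named above: a failed file-creation result is propagated to the caller instead of being ignored.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    // Create a temporary merge file and report failure to the caller instead of
    // silently continuing with an invalid descriptor.
    int create_merge_file(int *fd_out) {
        char name[] = "/tmp/merge-XXXXXX";
        int fd = mkstemp(name);
        if (fd < 0) {
            perror("mkstemp");
            return -1;            // the caller must check this, not ignore it
        }
        unlink(name);             // keep it anonymous, as temp files usually are
        *fd_out = fd;
        return 0;
    }

    int main() {
        int fd;
        if (create_merge_file(&fd) != 0)
            return EXIT_FAILURE;  // error propagated, not swallowed
        close(fd);
        return 0;
    }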
-
Harin Vadodaria authored
DOPROCESSREPLY()
Description: The function DoProcessReply() calls decrypt_message() in a while loop without checking the available buffer space. This can cause a buffer overflow and crash the server. This patch is the fix provided by Sawtooth to resolve the issue.
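A minimal sketch of the missing check, with generic names: decrypt_chunk() below is a placeholder (here it just copies bytes), not the yaSSL API. The point is that the remaining space in the output buffer is verified before every iteration of the loop instead of writing unconditionally.

    #include <cstddef>
    #include <cstring>

    // Trivial stand-in "decryption" for the sketch: just copies the chunk.
    size_t decrypt_chunk(const unsigned char *in, size_t in_len,
                         unsigned char *out, size_t /*max_out*/) {
        std::memcpy(out, in, in_len);
        return in_len;
    }

    // Process a reply consisting of several encrypted chunks. Before every
    // chunk we check that the remaining space in `out` is large enough;
    // without that check a long reply overruns the buffer, which is the
    // crash described above.
    bool process_reply(const unsigned char *in, size_t in_len,
                       unsigned char *out, size_t out_cap,
                       size_t chunk_in, size_t chunk_out) {
        size_t used = 0;
        while (in_len >= chunk_in) {
            if (out_cap - used < chunk_out)
                return false;                 // would overflow: abort instead
            used += decrypt_chunk(in, chunk_in, out + used, chunk_out);
            in += chunk_in;
            in_len -= chunk_in;
        }
        return true;
    }

    int main() {
        unsigned char in[32] = {0}, out[16];
        // 32 bytes of input in 8-byte chunks, but only 16 bytes of output room:
        // the reply is rejected (exit code 1) instead of overflowing `out`.
        return process_reply(in, sizeof(in), out, sizeof(out), 8, 8) ? 0 : 1;
    }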
-
- 12 Dec, 2012 1 commit
-
-
sayantan.dutta@oracle.com authored
-
- 11 Dec, 2012 3 commits
-
-
Dmitry Lenev authored
ROBUST AGAINST BUGS IN CALLERS".
Both the MDL subsystem and the Table Definition Cache code assume that callers ensure the names of objects passed to them are no longer than NAME_LEN bytes. Unfortunately, due to bugs in callers, this assumption can be broken in some cases. As a result we get nasty bugs causing buffer overruns when we construct an MDL key or a TDC key from object names.

This patch makes the TDC code more robust against such bugs by always checking the size of the result buffer when constructing TDC keys. This does not free callers from ensuring that both db and table names are shorter than NAME_LEN bytes, but at least it prevents buffer overruns in case of a bug in a caller, replacing them with less harmful behavior.

This is the 5.1-only version of the patch. It introduces a new version of the create_table_def_key() helper function which constructs a TDC key without risk of overrunning the result buffer. Places in the code that construct TDC keys were changed to use this function. Also changed the rm_temporary_table() and open_new_frm() functions to avoid use of the "unsafe" strmov() and strxmov() functions and use the safer strnxmov() instead.
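A standalone sketch of a truncation-safe key builder in the spirit of the new helper (the actual patch uses strnxmov and the server's own constants; NAME_LEN below is a stand-in and the key layout "db\0table\0" is taken from the description): the result buffer bound is always honoured, so an over-long db or table name can only produce a truncated, non-matching key rather than a buffer overrun.

    #include <cstdio>
    #include <cstring>

    enum { NAME_LEN = 64 * 3 };                      // stand-in for the real limit
    enum { KEY_LEN = NAME_LEN * 2 + 2 };             // room for "db\0table\0"

    // Build a table-definition-cache key of the form "db\0table\0" into a
    // fixed-size buffer. Both components are copied with an explicit bound, so
    // even a caller that passes an over-long name cannot overrun `key`.
    size_t make_tdc_key(char (&key)[KEY_LEN], const char *db, const char *table) {
        size_t pos = 0;
        for (size_t i = 0; db[i] && i < NAME_LEN; ++i)
            key[pos++] = db[i];
        key[pos++] = '\0';
        for (size_t i = 0; table[i] && i < NAME_LEN; ++i)
            key[pos++] = table[i];
        key[pos++] = '\0';
        return pos;                                  // total key length
    }

    int main() {
        char key[KEY_LEN];
        size_t len = make_tdc_key(key, "test", "t1");
        std::printf("key length: %zu\n", len);       // 8: "test\0t1\0"
    }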
-
sayantan.dutta@oracle.com authored
-
Annamalai Gurusami authored
Problem: Before the ALTER TABLE statement, the array dict_index_t::stat_n_diff_key_vals had proper values calculated and updated, but after the ALTER TABLE statement all values in this array are 0. Because of this, the statistics returned by innodb_rec_per_key() differ before and after the ALTER TABLE statement. Running the ANALYZE TABLE command populates the statistics correctly.
Solution: After the ALTER TABLE statement, set the flag dict_table_t::stat_initialized correctly so that the table statistics will be recalculated properly when the table is next loaded. Note that we still don't choose loose index scans; this fix only ensures that an ALTER TABLE does not change the optimizer plan. rb://1639 approved by Marko and Jimmy.
-
- 09 Dec, 2012 1 commit
-
-
Shivji Kumar Jha authored
Patch to fix post-push failures in pb2.
BUG#15872504 - REMOVE MYSQL-TEST/INCLUDE/GET_BINLOG_DUMP_THREAD_ID.INC
=== Problem ===
The file "mysql-test/include/get_binlog_dump_thread_id.inc" is not used anywhere. In any case, this file does the wrong things in the wrong way:
1) It seems to assume there is only one dump thread, but there may be many.
2) You can get this information in a much easier way using the command:
   select thread_id from threads where processlist_command="Binlog Dump";
=== Fix ===
Removed the file 'mysql-test/include/get_binlog_dump_thread_id.inc'.
-