- 13 Dec, 2012 1 commit
-
-
Harin Vadodaria authored
DOPROCESSREPLY() Description: Function DoProcessReply() calls decrypt_message() in a while loop without checking the available buffer space. This can cause a buffer overflow and crash the server. This patch is a fix provided by Sawtooth to resolve the issue.
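A minimal sketch of the guard such a fix introduces, assuming a hypothetical decrypt_one() helper; this is illustrative only, not the actual yaSSL DoProcessReply()/decrypt_message() code:

    #include <cstddef>
    #include <cstring>

    // Hypothetical stand-in for decrypt_message(): decrypts one record into
    // 'out', never writing more than 'max_out' bytes; returns bytes produced,
    // or -1 when the remaining buffer space is too small.
    static int decrypt_one(const unsigned char *in, size_t in_len,
                           unsigned char *out, size_t max_out)
    {
      if (in_len > max_out)
        return -1;
      memcpy(out, in, in_len);          // stub "decryption" for the sketch
      return (int) in_len;
    }

    // The essence of the fix: the loop tracks how much of 'plain' is used and
    // fails cleanly instead of writing past the end of the buffer.
    static bool do_process_reply(const unsigned char *in, size_t in_len,
                                 unsigned char *plain, size_t plain_capacity)
    {
      size_t used = 0;
      while (in_len > 0) {
        int produced = decrypt_one(in, in_len, plain + used,
                                   plain_capacity - used);
        if (produced < 0)
          return false;                 // out of space: report an error, no crash
        used += (size_t) produced;
        in += produced;
        in_len -= (size_t) produced;
      }
      return true;
    }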
-
- 12 Dec, 2012 1 commit
-
-
sayantan.dutta@oracle.com authored
-
- 11 Dec, 2012 3 commits
-
-
Dmitry Lenev authored
ROBUST AGAINST BUGS IN CALLERS". Both the MDL subsystem and the Table Definition Cache code assume that callers ensure that the names of objects passed to them are no longer than NAME_LEN bytes. Unfortunately, due to bugs in callers, this assumption can be broken in some cases. As a result we get nasty bugs causing buffer overruns when we construct an MDL key or TDC key from object names. This patch makes the TDC code more robust against such bugs by ensuring that we always check the size of the result buffer when constructing TDC keys. This doesn't free callers from ensuring that both db and table names are shorter than NAME_LEN bytes, but at least this step prevents buffer overruns in case of a bug in the caller, replacing them with less harmful behavior. This is the 5.1-only version of the patch. It introduces a new version of the create_table_def_key() helper function which constructs the TDC key without risk of overrunning the result buffer. Places in the code that construct TDC keys were changed to use this function. The rm_temporary_table() and open_new_frm() functions were also changed to avoid the "unsafe" strmov() and strxmov() functions and use the safer strnxmov() instead.
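As an illustration of bounded key construction, here is a sketch only — not the actual create_table_def_key(); the MAX_DBKEY_LENGTH value and the "<db>\0<table>\0" key layout are assumptions:

    #include <cstddef>
    #include <cstring>

    static const size_t MAX_DBKEY_LENGTH = 512;   // assumed buffer size

    // Builds "<db>\0<table>\0" into 'key', never writing more than 'key_size'
    // bytes; overlong names are truncated instead of overrunning the buffer.
    static size_t make_tdc_key(char *key, size_t key_size,
                               const char *db, const char *table)
    {
      const char *parts[2] = { db, table };
      size_t pos = 0;
      for (int i = 0; i < 2; i++) {
        size_t room = key_size - pos;             // bytes left, including the NUL
        if (room == 0)
          break;
        size_t len = strlen(parts[i]);
        if (len >= room)                          // caller bug: name too long
          len = room - 1;                         // truncate, do not overrun
        memcpy(key + pos, parts[i], len);
        pos += len;
        key[pos++] = '\0';
      }
      return pos;                                 // total key length
    }

A caller would then build the key with a fixed destination: char key[MAX_DBKEY_LENGTH]; size_t len = make_tdc_key(key, sizeof(key), db, table);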
-
sayantan.dutta@oracle.com authored
-
Annamalai Gurusami authored
Problem: Before the ALTER TABLE statement, the array dict_index_t::stat_n_diff_key_vals had proper values calculated and updated. But after the ALTER TABLE statement, all the values of this array are 0. Because of this, the statistics returned by innodb_rec_per_key() differ before and after the ALTER TABLE statement. Running the ANALYZE TABLE command populates the statistics correctly. Solution: After the ALTER TABLE statement, set the flag dict_table_t::stat_initialized correctly so that the table statistics will be recalculated properly when the table is next loaded. Note that we still don't choose loose index scans; this fix only ensures that an ALTER TABLE does not change the optimizer plan. rb://1639 approved by Marko and Jimmy.
-
- 09 Dec, 2012 2 commits
-
-
Shivji Kumar Jha authored
Patch to fix post-push failures in pb2. BUG#15872504 - REMOVE MYSQL-TEST/INCLUDE/GET_BINLOG_DUMP_THREAD_ID.INC
=== Problem ===
The file "mysql-test/include/get_binlog_dump_thread_id.inc" is not used anywhere. In any case, this file does the wrong things in the wrong way:
1) The file seems to assume there is only one dump thread, but there may be many.
2) The same information can be obtained much more easily with the query: select thread_id from threads where processlist_command="Binlog Dump";
=== Fix ===
Removed the file 'mysql-test/include/get_binlog_dump_thread_id.inc'.
-
Shivji Kumar Jha authored
RPL_ROW_UNTIL TIMES OUT. Patch to fix post-push failures in pb2.
-
- 05 Dec, 2012 2 commits
-
-
Dmitry Lenev authored
Using too long table aliases in stored routines might have caused server crashes. The code in sp_head::merge_table_list(), which is responsible for collecting information about the tables used in a stored routine, was not aware of the fact that a table alias may have arbitrary length. I.e. it assumed that a table alias can't be longer than NAME_LEN bytes and allocated the buffer for the key identifying the table accordingly. This patch fixes the issue by ensuring that we use a dynamically allocated buffer for the table key when the table alias is too long. By default a stack-based buffer is used, in which NAME_LEN bytes are reserved for the table alias.
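A rough sketch of that pattern (illustrative names, key layout and NAME_LEN value are assumptions here, not the actual sp_head::merge_table_list() code): a stack buffer covers the common case, and a heap allocation takes over when the alias does not fit.

    #include <cstring>
    #include <memory>
    #include <string>

    static const size_t NAME_LEN = 192;             // assumed reserved alias space

    static std::string make_table_key(const char *db, const char *alias)
    {
      char stack_buf[NAME_LEN * 2 + 2];             // default: stack-based buffer
      size_t needed = strlen(db) + strlen(alias) + 2;   // two NUL separators
      std::unique_ptr<char[]> heap_buf;
      char *buf = stack_buf;
      if (needed > sizeof(stack_buf)) {             // alias too long: go to the heap
        heap_buf.reset(new char[needed]);
        buf = heap_buf.get();
      }
      size_t db_len = strlen(db);
      memcpy(buf, db, db_len + 1);                          // "<db>\0"
      memcpy(buf + db_len + 1, alias, strlen(alias) + 1);   // "<alias>\0"
      return std::string(buf, needed);
    }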
-
Shivji Kumar Jha authored
=== Problem ===
The test is dependent on binlog positions and checks whether the command 'START SLAVE' functions correctly with the 'UNTIL' clause added to it. The 'UNTIL' clause specifies that the slave should start and run until the SQL thread reaches a given point in the master binary log or in the slave relay log. The test uses hard-coded values for MASTER_LOG_POS and RELAY_LOG_POS instead of extracting them with the query_get_value() function. There is a test, 'rpl.rpl_row_until', which does a similar thing but uses query_get_value() to set the values of MASTER_LOG_POS/RELAY_LOG_POS. To be precise, rpl.rpl_row_until is a modified version of engines/funcs.rpl_row_until.test. The use of hard-coded values may lead the slave to stop at a position which differs from the expected position in the binlog file, one example being the failure of engines/funcs.rpl_row_until in mysql-5.1: "query 'select * from t2' failed. Table 'test.t2' doesn't exist". In this case the slave actually ran a couple of extra commands, as a result of which it first deleted the table and then ran a select query on it, leading to the above failure.
=== Fix ===
1) Fixed the code for the failure seen in rpl.rpl_row_until. This test was also failing, although the symptoms of failure were different.
2) Copied the contents from rpl.rpl_row_until into engines/funcs.rpl_row_until.
3) Updated engines/funcs.rpl_row_until.result accordingly.
-
- 01 Dec, 2012 2 commits
-
-
Mattias Jonsson authored
-
Libing Song authored
FORMAT_DESCRIPTION_LOG_EVENT::CALC_SERVER_VERSION_SPLIT Problem: When reading a Format_description_log_event, the code assumes the MySQL version is always valid and a DBUG_ASSERTION is used to check the version number. However, a user may give a wrong binlog offset, or even a faked binary event which includes an invalid MySQL version. This will cause a server crash. Fix: The assertions are removed and an error is reported if the MySQL version in the Format_description_log_event is invalid.
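A hedged sketch of the validation idea (parse_server_version() is a made-up helper, not the actual Format_description_log_event code): validate the "X.Y.Z" string and let the caller raise an error on malformed input instead of asserting.

    #include <cstdio>

    // Returns true and fills major/minor/patch if 'ver' looks like "5.5.28...",
    // false otherwise so the caller can report an error instead of crashing.
    static bool parse_server_version(const char *ver,
                                     int *major, int *minor, int *patch)
    {
      if (sscanf(ver, "%d.%d.%d", major, minor, patch) != 3)
        return false;
      if (*major < 0 || *minor < 0 || *patch < 0)
        return false;
      return true;
    }

On failure the event would be marked invalid and an error reported to the user, rather than hitting an assertion on a corrupt or faked binlog offset.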
-
- 30 Nov, 2012 3 commits
-
-
Mattias Jonsson authored
IN DEACTIVATE_DDL_LOG_ENTRY Update of comments according to the reviewer's request.
-
Inaam Rana authored
revid that is being reverted: marko.makela@oracle.com-20121128070024-hb56t41limja8edz
-
Shivji Kumar Jha authored
=== Problem ===
The test is dependent on binlog positions and checks whether the command 'START SLAVE' functions correctly with the 'UNTIL' clause added to it. The 'UNTIL' clause specifies that the slave should start and run until the SQL thread reaches a given point in the master binary log or in the slave relay log. The test uses hard-coded values for MASTER_LOG_POS and RELAY_LOG_POS instead of extracting them with the query_get_value() function. There is a test, 'rpl.rpl_row_until', which does a similar thing but uses query_get_value() to set the values of MASTER_LOG_POS/RELAY_LOG_POS. To be precise, rpl.rpl_row_until is a modified version of engines/funcs.rpl_row_until.test. The use of hard-coded values may lead the slave to stop at a position which differs from the expected position in the binlog file, one example being the failure of engines/funcs.rpl_row_until in mysql-5.1: "query 'select * from t2' failed. Table 'test.t2' doesn't exist". In this case the slave actually ran a couple of extra commands, as a result of which it first deleted the table and then ran a select query on it, leading to the above failure.
=== Fix ===
1) Fixed the code for the failure seen in rpl.rpl_row_until. This test was also failing, although the symptoms of failure were different.
2) Copied the contents from rpl.rpl_row_until into engines/funcs.rpl_row_until.
3) Updated engines/funcs.rpl_row_until.result accordingly.
-
- 29 Nov, 2012 1 commit
-
-
Harin Vadodaria authored
Description: A very large database name causes a buffer overflow in the functions acl_get() and check_grant_db() in sql_acl.cc. It happens due to an unguarded string copy operation. This patch puts the required sanity checks in place before copying the db string to the destination buffer.
-
- 28 Nov, 2012 2 commits
-
-
Yasufumi Kinoshita authored
The converted implicit lock should wait for the prior conflicting lock if found. rb://1437 approved by Marko
-
Marko Mäkelä authored
BUF_PAGE_GET_GEN REDUNDANT? buf_page_get_gen(): When decompressing a compressed page that had already been accessed in the buffer pool, do not attempt to merge buffered changes. rb:1602 approved by Inaam Rana
-
- 26 Nov, 2012 2 commits
-
-
sayantan.dutta@oracle.com authored
-
Yasufumi Kinoshita authored
trx_undo_prev_version_build() should confirm the existence of inherited (not-own) external pages. Bug #14676084 : ROW_UNDO_MOD_UPD_DEL_SEC() DOESN'T NEED UNDO_ROW AND UNDO_EXT INITIALIZED. An mtr script could hit the assertion error !bpage->file_page_was_freed via this path, so that is also fixed. rb://1337 approved by Marko Makela.
-
- 21 Nov, 2012 1 commit
-
-
Harin Vadodaria authored
Description: Updated yassl to version 2.2.2
-
- 16 Nov, 2012 1 commit
-
-
Inaam Rana authored
rb://1546 approved by: Sunny Bains and Marko Makela. Our handling of the buf_page_t::access_time flag is inaccurate:
* If LRU eviction has not started, we don't set access_time.
* If LRU eviction has started, we set it only if the block is not 'too old'.
* Not a correctness issue, but we hold buf_pool::mutex when setting the flag.
This patch fixes this by:
* Setting the flag unconditionally whenever the first page access happens.
* Using the buf_page_t mutex to protect writes to the flag.
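A small sketch of the new rule, using std::mutex as a stand-in for the per-block mutex (illustrative only, not the actual buf0buf code):

    #include <mutex>

    struct buf_page_sketch {
      std::mutex block_mutex;       // stand-in for the per-block mutex
      unsigned   access_time = 0;   // 0 means "never accessed"
    };

    static void page_accessed(buf_page_sketch *bpage, unsigned now_ms)
    {
      std::lock_guard<std::mutex> guard(bpage->block_mutex);
      if (bpage->access_time == 0)  // first access: always set, no LRU-state checks
        bpage->access_time = now_ms;
    }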
-
- 13 Nov, 2012 1 commit
-
-
Mattias Jonsson authored
The problem is related to the changes made in bug#13025132. get_partition_set() can do dynamic pruning, which limits the partitions to scan even further. This was not accounted for when setting the start of the preallocated record buffer used in the priority queue, leading to the wrong buffer being used (including a wrong preset partition id connected to that buffer). The solution is to fast-forward the buffer pointer so that it points to the correct partition record buffer.
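The pointer adjustment can be illustrated with a short sketch (hypothetical names, not the actual ha_partition code): each partition owns one fixed-size slot in the preallocated buffer, so the base pointer has to be fast-forwarded past the partitions pruned away.

    #include <cstddef>

    // Returns the record buffer slot belonging to 'partition_id' when the
    // slots were preallocated starting at 'first_used_partition'.
    static unsigned char *partition_rec_buf(unsigned char *prealloc_base,
                                            size_t rec_length,
                                            unsigned first_used_partition,
                                            unsigned partition_id)
    {
      // One fixed-size slot per used partition; skip the pruned ones.
      return prealloc_base +
             (size_t)(partition_id - first_used_partition) * rec_length;
    }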
-
- 16 Nov, 2012 1 commit
-
-
mysql-builder@oracle.com authored
No commit message
-
- 15 Nov, 2012 2 commits
-
-
Marko Mäkelä authored
Remove a bogus debug assertion.
-
Marko Mäkelä authored
CHAR(n) in ROW_FORMAT=REDUNDANT tables is always fixed-length (n*mbmaxlen bytes), but in the temporary file it is variable-length (n*mbminlen to n*mbmaxlen bytes) for variable-length character sets, such as UTF-8. The temporary file format used during index creation and online ALTER TABLE is based on ROW_FORMAT=COMPACT. Thus, it should use the variable-length encoding even if the base table is in ROW_FORMAT=REDUNDANT.
dtype_get_fixed_size_low(): Replace an assertion-like check with a debug assertion.
rec_init_offsets_comp_ordinary(), rec_convert_dtuple_to_rec_comp(): Make these inline functions. Replace 'ulint extra' with 'bool temp'.
rec_get_converted_size_comp_prefix_low(): Renamed from rec_get_converted_size_comp_prefix(), and made inline. Add the parameter 'bool temp'. If temp=true, do not add REC_N_NEW_EXTRA_BYTES.
rec_get_converted_size_comp_prefix(): Remove the comment about dict_table_is_comp(). This function is only to be called for records in formats other than ROW_FORMAT=REDUNDANT.
rec_get_converted_size_temp(): New function for computing the temporary file record size. Omit REC_N_NEW_EXTRA_BYTES from the sizes.
rec_init_offsets_temp(), rec_convert_dtuple_to_temp(): New functions for operating on temporary file records.
rb:1559 approved by Jimmy Yang
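A short worked example of the size difference described above (plain arithmetic, not InnoDB code; the CHAR(10)/utf8 figures are only an illustration):

    #include <cstdio>

    int main()
    {
      const unsigned n = 10, mbminlen = 1, mbmaxlen = 3;   // e.g. CHAR(10) in utf8
      unsigned redundant_size = n * mbmaxlen;              // always 30 bytes on disk
      unsigned temp_min = n * mbminlen, temp_max = n * mbmaxlen;  // 10..30 bytes
      printf("REDUNDANT: %u bytes, temp file: %u..%u bytes\n",
             redundant_size, temp_min, temp_max);
      return 0;
    }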
-
- 14 Nov, 2012 2 commits
-
-
mysql-builder@oracle.com authored
No commit message
-
mysql-builder@oracle.com authored
No commit message
-
- 12 Nov, 2012 1 commit
-
-
Yasufumi Kinoshita authored
btr_lift_page_up() writes a wrong page number (off by -1) for pages above the father page. But in almost all cases the father page is the root page, with no pages above it, so this is a very rare path. In addition, a leaf page should not be lifted unless the father page is the root, because branch pages should not become leaf pages. rb://1336 approved by Marko Makela.
-
- 09 Nov, 2012 2 commits
-
-
Annamalai Gurusami authored
When a new primary key is added to an InnoDB table, the InnoDB plugin takes the following steps:
. let t1 be the original table.
. a temporary table t1@00231 is created by cloning t1.
. all data is copied from t1 to t1@00231.
. rename t1 to t1@00232.
. rename t1@00231 to t1.
. drop t1@00232.
The rename and drop operations involve file operations, but file operations cannot be rolled back. So in row_merge_rename_tables(), just after doing the data dictionary update and before doing any file operations, generate redo logs for the file operations and commit the transaction. This ensures that after any crash following this commit, the table is still recoverable by moving the .ibd and .frm files. Manual recovery is required. During recovery, the rename file operation redo logs are processed; previously they were being ignored. rb://1460 approved by Marko Makela.
-
Anirudh Mangipudi authored
TABLE DATA IF DUMPS MYSQL DATABA Problem: If mysqldump is run without --events (or with --skip-events) it will not dump the mysql.event table's data. This behaviour is inconsistent with that of the --routines option, which does not affect the dumping of the mysql.proc table. According to the Manual, --events (--skip-events) defines whether the Event Scheduler events for the dumped databases should be included in the mysqldump output, and this has nothing to do with the mysql.event table itself. Solution: A warning has been added when mysqldump is used without --events (or with --skip-events), and a separate patch with the behavioral change will be prepared for 5.6/trunk.
-
- 08 Nov, 2012 2 commits
-
-
Aditya A authored
Analysis
---------
my_stat() calls stat(), and if the stat() call fails we try to set the variable my_errno, which is actually thread-specific data. We try to get the address of this thread-specific data using my_pthread_getspecific(), but for the purge thread we have not defined any thread-specific data, so it returns null, and dereferencing null gives a segmentation fault. init_available_charsets(), seen in the core stack, is invoked through pthread_once(), which is used for one-time initialization. Since free_charsets() is called before the innodb plugin shutdown, the purge thread calls init_available_charsets(), which leads to the crash.
Fix
---
Call free_charsets() after the innodb plugin shutdown, since the purge threads are still using the charsets.
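For illustration, a hedged sketch of the failing pattern and the null guard that would avoid the dereference (pthread_getspecific() is the underlying POSIX call; the key name and wrapper are hypothetical, and the actual fix reorders shutdown rather than adding this guard):

    #include <pthread.h>

    static pthread_key_t my_errno_key;   // assumed key, registered elsewhere

    static void set_my_errno(int err)
    {
      int *slot = static_cast<int *>(pthread_getspecific(my_errno_key));
      if (slot != nullptr)    // purge thread: no TLS slot, skip instead of crashing
        *slot = err;
    }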
-
Aditya A authored
Follow up patch to address the pb2 failures.
-
- 07 Nov, 2012 1 commit
-
-
Venkata Sidagam authored
Brief description: After inserting some rows into a MEMORY table with a HASH key, some rows can't be deleted in one step.
Problem analysis/solution: info->current_ptr holds the current hash pointer from which we can traverse the list to get all the remaining tuples. In hp_delete_key() we were updating info->current_ptr with last_pos based on the flag parameter (which indicates that the keydef and the last index are the same). As part of the fix we set it to zero only when the code flow reaches the end of hp_delete_key(). This means that the next record to be deleted will be at the start of the list, so the next call to read a record (heap_rnext()) will take the line 100 path instead of the line 102 path; see the code below from file hp_rnext.c, function heap_rnext():
99 else if (!info->current_ptr) /* Deleted or first call */
100 pos= hp_search(info, keyinfo, info->lastkey, 0);
101 else
102 pos= hp_search(info, keyinfo, info->lastkey, 1);
With that change, hp_search() will update info->current_ptr with the record which needs to be deleted.
-
- 06 Nov, 2012 1 commit
-
-
Aditya A authored
PROBLEM
-------
OPTIMIZE on a partition will recreate the whole table instead of just the partition.
ANALYSIS
--------
At present InnoDB doesn't support the optimize option, so we rebuild the whole table and then call analyze() on it. Currently, for any optimize() option (on a table or a partition) we display the following info to the user: "Table does not support optimize, doing recreate + analyze instead".
FIX
---
It was decided that for the GA versions (5.1 and 5.5), whenever the user tries to optimize a partition (or partitions) we will display the following info to the user: "Table does not support optimize on partitions. All partitions will be rebuilt and analyzed." Earlier, partitions were not analyzed; now all partitions will be analyzed. If the user wants to optimize the whole table, we display the previous message, i.e. "Table does not support optimize, doing recreate + analyze instead". For 5.6+ versions we will raise a new bug to support optimize() options in InnoDB.
-
- 05 Nov, 2012 1 commit
-
-
akhil.mohan@oracle.com authored
-
- 01 Nov, 2012 1 commit
-
-
Tor Didriksen authored
Add missing DBUG_RETURNs; otherwise the debug stack gets messed up, and _db_enter_ and _db_exit_ will access data outside the current stack frame.
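For reference, a small example of the pairing these macros require (DBUG_ENTER/DBUG_RETURN are the real dbug macros used throughout the MySQL source; the function itself is made up and the include path is an assumption): every exit path of a function that starts with DBUG_ENTER must leave through DBUG_RETURN or DBUG_VOID_RETURN, so the debug call stack stays in sync with the real one.

    #include <my_dbug.h>   // assumed include for the DBUG macros

    static int lookup_value(int key)
    {
      DBUG_ENTER("lookup_value");     // pushes a frame on the debug stack
      if (key < 0)
        DBUG_RETURN(-1);              // early exits must also use DBUG_RETURN
      DBUG_RETURN(key * 2);           // ... never a plain 'return'
    }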
-
- 30 Oct, 2012 2 commits
-
-
Anirudh Mangipudi authored
TO 'MYISAM_SORT_BUFFER_SIZE' Problem: 'myisam_sort_buffer_size' is a parameter used by the mysqld program only, whereas 'sort_buffer_size' is used by both the mysqld and myisamchk programs. But the error message printed when myisamchk is run with an insufficient buffer size is "myisam_sort_buffer_size is too small", which may mislead users into thinking it refers to the server parameter myisam_sort_buffer_size. Solution: A parameter 'myisam_sort_buffer_size' is added as an alias for 'sort_buffer_size', and the 'sort_buffer_size' parameter is marked as deprecated. So myisamchk also has both parameters with the same role.
-
Shivji Kumar Jha authored
main.mysqlbinlog_row_innodb are skipped by mtr
=== Problem ===
The following tests are wrongly placed in the main suite and as a result are not run with the proper binlog format combinations. Some are always skipped by mtr:
1) mysqlbinlog_row_myisam
2) mysqlbinlog_row_innodb
3) mysqlbinlog_row.test
4) mysqlbinlog_row_trans.test
5) mysqlbinlog-cp932
6) mysqlbinlog2
7) mysqlbinlog_base64
=== Background ===
mtr runs the tests placed in the main suite with binlog format=stmt. Tests that need to be run against binlog format=row or mixed, or against more than one binlog format, and that require only one mysql server, are placed in the binlog suite. mtr runs tests in the binlog suite with all three binlog formats (stmt, row and mixed).
=== Fix ===
1) Moved the tests listed in the problem section above to the binlog suite.
2) Added the prefix "binlog_" to the name of each test case moved. Renamed the corresponding result files and option files accordingly.
-
- 29 Oct, 2012 2 commits
-
-
mysql-builder@oracle.com authored
No commit message
-
Alexander Nozdrin authored
-