- 03 Sep, 2013 1 commit
-
-
Hery Ramilison authored
-
- 30 Aug, 2013 1 commit
-
-
Balasubramanian Kandasamy authored
-
- 29 Aug, 2013 1 commit
-
-
Balasubramanian Kandasamy authored
-
- 23 Aug, 2013 1 commit
-
-
Neeraj Bisht authored
Problem:
In a stored procedure, comparing the value of a SELECT query with an IN clause when the two sides have different collations causes an error on the first execution and an assertion failure on the second. The procedure contains a query like:
  set @x = ((select a from t1) in (select d from t2));   <-- in proc1; (select a from t1) is sel1, (select d from t2) is sel2

Analysis:
When we execute proc1 for the first time, resolving the fields of the user variable calls Item_in_subselect::fix_fields, which resolves sel2. There, in Item_in_subselect::select_transformer, we evaluate the left expression (sel1) and store it in an Item_cache_* object (to avoid re-evaluating it many times during subquery execution) by creating an Item_in_optimizer object. While evaluating the left expression we prepare sel1. After that, we inject a new condition into sel2 in Item_in_subselect::select_transformer() which compares t2.d with sel1 (cached in the Item_in_optimizer). Later, while checking collations in agg_item_collations(), we get an error and clean up the item. That cleanup also clears the value cached in the Item_in_optimizer object. When we execute the procedure a second time, the injected condition for sel2 is still there, but setup_cond() cannot find the referenced item because it was cleared during item cleanup, so the server hits an assertion.

Solution:
Do not clean up the cached value in the Item_in_optimizer object if the condition has already been injected into the subselect.
-
- 21 Aug, 2013 4 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
compressed pages

After loading a compressed-only page in buf_page_get_gen() we allocate a new block for decompression. The problem is that the compressed page is neither buffer-fixed nor I/O-fixed by the time we call buf_LRU_get_free_block(), so it may end up being evicted and returned as the new block.

buf_page_get_gen(): Temporarily buffer-fix the compressed-only block while allocating memory for an uncompressed page frame. This should prevent this form of the infinite loop, which is more likely with a small innodb_buffer_pool_size.

rb#2511 approved by Jimmy Yang, Sunny Bains
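The pinning idea generalizes beyond InnoDB's buffer pool. Below is a minimal C++ sketch, assuming a made-up block_t with a fix count (none of these names are the real buf0buf interfaces), of why the temporary buffer-fix stops the block from being recycled by the allocation call.

```cpp
#include <atomic>
#include <cassert>

// Illustrative block descriptor; names are not InnoDB's.
struct block_t {
    std::atomic<int> fix_count{0};   // "buffer-fix" reference count
};

// Stand-in for buf_LRU_get_free_block(): in the real server this may evict
// any block whose fix count is zero and hand it back for reuse.
block_t* allocate_free_block() { return new block_t(); }

// Sketch of the fix: keep the compressed-only block buffer-fixed while
// allocating the uncompressed frame, so it cannot be evicted and returned
// to us as the "new" block (the infinite-loop scenario).
block_t* get_uncompressed_frame(block_t& zip_block) {
    zip_block.fix_count.fetch_add(1);        // temporary buffer-fix
    block_t* frame = allocate_free_block();  // zip_block is pinned here
    zip_block.fix_count.fetch_sub(1);        // release the temporary fix
    assert(frame != &zip_block);             // the failure mode being prevented
    return frame;
}
```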
-
Praveenkumar Hulakund authored
"SHOW PROCESSLIST" Analysis: ---------- The problem here is, if one connection changes its default db and at the same time another connection executes "SHOW PROCESSLIST", when it wants to read db of the another connection then there is a chance of accessing the invalid memory. The db name stored in THD is not guarded while changing user DB and while reading the user DB in "SHOW PROCESSLIST". So, if THD.db is freed by thd "owner" thread and if another thread executing "SHOW PROCESSLIST" statement tries to read and copy THD.db at the same time then we may endup in the issue reported here. Fix: ---------- Used mutex "LOCK_thd_data" to guard THD.db while freeing it and while copying it to processlist.
-
- 16 Aug, 2013 1 commit
-
-
Marko Mäkelä authored
DICT_TABLE_GET_FORMAT(CLUST_INDEX->TABLE) >= 1

The function row_sel_sec_rec_is_for_clust_rec() was incorrectly preparing to compare a NULL column prefix in a secondary index with a non-NULL column in a clustered index. This can trigger an assertion failure in the 5.1 plugin and later. In the built-in InnoDB of MySQL 5.1 and earlier, we would apparently only do some extra work, by trimming the clustered index field for the comparison. The code might actually have worked properly apart from this debug assertion failure: it is merely doing some extra work in fetching a BLOB column and then comparing it to NULL (which returns the same result no matter what the BLOB contents are).

While the test case involves CHECK TABLE, this could theoretically occur during any read that uses a secondary index on a column prefix of a column that can be NULL.

rb#3101 approved by Mattias Jonsson
-
- 15 Aug, 2013 1 commit
-
-
Marko Mäkelä authored
There was a race condition in the rollback of TRX_UNDO_UPD_DEL_REC.

Once row_undo_mod_clust() has rolled back the changes by the rolling-back transaction, it attempts to purge the delete-marked record, if possible, in a separate mini-transaction. However, row_undo_mod_remove_clust_low() fails to check whether the DB_TRX_ID of the record it finds after repositioning the cursor is still the same. If it is not, the record was purged and another record was inserted in its place, so the rollback would have performed an incorrect purge, breaking the locking rules and causing corruption.

The problem was found by creating a table that contains a unique secondary index and a primary key, and two threads running REPLACE with only one value for the unique column, so that the uniqueness constraint would be violated all the time, leading to statement rollback.

This bug exists in all InnoDB versions (I checked MySQL 3.23.53). It has become easier to repeat in 5.5 and 5.6 thanks to scalability improvements and a dedicated purge thread.

rb#3085 approved by Jimmy Yang
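As an illustration only (the struct and function below are stand-ins, not the real row0umod.cc code), the recheck amounts to comparing the DB_TRX_ID found after repositioning with the one the rollback expects, and skipping the purge on a mismatch.

```cpp
#include <cstdint>

// Hypothetical clustered-index record view; fields are illustrative.
struct rec_t {
    uint64_t db_trx_id;     // DB_TRX_ID system column
    bool delete_marked;     // delete-mark flag
};

// Only remove the record if it is still the one this rollback delete-marked.
bool remove_clust_if_unchanged(const rec_t* found, uint64_t rolled_back_trx_id) {
    if (found == nullptr) return false;
    if (found->db_trx_id != rolled_back_trx_id)
        return false;       // purged and replaced by another insert: do nothing
    if (!found->delete_marked)
        return false;
    // ... safe to remove the delete-marked record here ...
    return true;
}
```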
-
- 14 Aug, 2013 1 commit
-
-
Marko Mäkelä authored
FAILED BLOB WRITE

btr_store_big_rec_extern_fields(): Relax a debug assertion so that some BLOB pointers may remain zero if an error occurs.

btr_free_externally_stored_field(), row_undo_ins(): Allow the BLOB pointer to be zero on any rollback.

rb#3059 approved by Jimmy Yang, Kevin Lewis
-
- 12 Aug, 2013 1 commit
-
-
Anirudh Mangipudi authored
Problem Description:
A mysqld_safe instance is started, and an InnoDB crash recovery begins, which takes a few seconds to complete. While this crash recovery is in progress, another mysqld_safe instance is started with the same server startup parameters. Since mysqld's pid file is absent during crash recovery, the second instance assumes there is no other process and tries to acquire a lock on the ibdata files in the datadir. This step fails, and the second instance keeps retrying 100 times, each with a delay of 1 second. After the 100 attempts the server goes down, but while going down it hits the mysqld_safe script's cleanup section and, without any check, blindly deletes the socket and pid files. Since no lock is placed on the socket file, it gets deleted.

Solution:
We create a mysqld_safe.pid file in the datadir, which protects the running server instance's resources by storing the mysqld_safe process id in it. We check whether a mysqld_safe.pid file already exists in the datadir. If it does, we check whether the pid it contains is an active pid. If it is, the script logs an error saying "A mysqld_safe instance is already running". Otherwise it logs the present mysqld_safe's pid into the mysqld_safe.pid file.
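The actual fix lives in the mysqld_safe shell script; the sketch below only restates its pid-file logic in C++ for clarity. The pid-file path is an assumption, and kill(pid, 0) is used as the liveness probe.

```cpp
#include <fstream>
#include <iostream>
#include <signal.h>   // kill()
#include <unistd.h>   // getpid()

// Return true if the pid recorded in the file belongs to a live process.
bool another_instance_running(const char* pid_file) {
    std::ifstream in(pid_file);
    pid_t pid = 0;
    if (in >> pid && pid > 0)
        return kill(pid, 0) == 0;   // signal 0: probe whether the pid is alive
    return false;                   // no file or unreadable pid: assume free
}

int main() {
    const char* pid_file = "/var/lib/mysql/mysqld_safe.pid";  // assumed datadir path
    if (another_instance_running(pid_file)) {
        std::cerr << "A mysqld_safe instance is already running\n";
        return 1;
    }
    std::ofstream(pid_file) << getpid() << '\n';  // record our own pid
    return 0;
}
```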
-
- 31 Jul, 2013 1 commit
-
-
Joao Gramacho authored
Problem:
=======
An incorrect behavior of the my_strtoll10 function was detected when converting strings with numbers in the following format: "184467440XXXXXXXXXYY", where XXXXXXXXX > 737095516 and YY <= 15. Samples of problematic numbers: "18446744073709551915", "18446744073709552001". Instead of returning the largest unsigned long long value and setting overflow in the returned error code, my_strtoll10 returned the lower 64 bits of the evaluated number and did not set overflow in the returned error code.

Analysis:
========
While trying to fix bug 16820156, I found this bug in the overflow check of my_strtoll10. When this function receives a string with an integer larger than 18446744073709551615 (the largest unsigned long long), it should return that largest value and set overflow in the returned error code. Because of a wrong overflow evaluation, the function did not catch the overflow cases where (i == cutoff) && (j > cutoff2) && (k <= cutoff3). When the overflow evaluation fails, the function returns the lower 64 bits of the evaluated number and does not set overflow in the returned error code.

Fix:
===
Corrected the overflow evaluation in my_strtoll10.
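A minimal sketch of a correct overflow check for decimal-to-uint64 conversion. This is not my_strtoll10 itself (its i/j/k cutoff variables operate on digit groups); it only shows the cutoff/cutlim comparison that must flag values just above 2^64-1, such as "18446744073709551915", as overflow.

```cpp
#include <cerrno>
#include <cstdint>
#include <limits>

// Convert a run of decimal digits to uint64_t, clamping and flagging overflow.
uint64_t to_u64(const char* s, int* error) {
    const uint64_t cutoff = std::numeric_limits<uint64_t>::max() / 10;  // 1844674407370955161
    const unsigned cutlim = std::numeric_limits<uint64_t>::max() % 10;  // 5
    uint64_t value = 0;
    *error = 0;
    for (; *s >= '0' && *s <= '9'; ++s) {
        unsigned digit = unsigned(*s - '0');
        // Overflow if value * 10 + digit would exceed 2^64 - 1.
        if (value > cutoff || (value == cutoff && digit > cutlim)) {
            *error = ERANGE;                              // overflow flagged
            return std::numeric_limits<uint64_t>::max();  // clamp to the maximum
        }
        value = value * 10 + digit;
    }
    return value;
}
```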
-
- 30 Jul, 2013 1 commit
-
-
prabakaran thirumalai authored
Description:
The original fix for Bug#11765744 changed a mutex to a read-write lock to avoid recursively acquiring the LOCK_status mutex. On Windows, locking a read-write lock recursively is not safe. Slim read-write locks, which MySQL uses when the Windows version supports them, do not support recursion according to their documentation. For our own read-write lock implementation, used when the Windows version does not support SRW locks, recursive locking can easily lead to a deadlock if there are concurrent lock requests.

Fix:
This patch reverts the previous fix for bug#11765744 that used read-write locks. Instead, the problem of recursive locking of the LOCK_status mutex is solved by tracking the recursion level with a counter in the THD object and acquiring the lock only once, when we enter the fill_status() function for the first time.
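A sketch of the recursion-counter idea, with std::mutex standing in for LOCK_status and a plain struct standing in for THD (both assumptions, not the server's real types): only the outermost fill_status() call takes the non-recursive lock.

```cpp
#include <mutex>

// Per-thread state; in the server the counter lives in the THD object.
struct ThreadState {
    int fill_status_recursion_level = 0;
};

std::mutex lock_status;                      // plays the role of LOCK_status

void fill_status(ThreadState& thd) {
    if (thd.fill_status_recursion_level++ == 0)
        lock_status.lock();                  // acquire only on the outermost call

    // ... build the STATUS rows; this may call fill_status() again recursively ...

    if (--thd.fill_status_recursion_level == 0)
        lock_status.unlock();                // release only when the outer call ends
}
```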
-
- 29 Jul, 2013 1 commit
-
-
Aditya A authored
SHUTDOWN IS IN PROGRESS

PROBLEM
-------
The background thread srv_master_thread() has a one-second delay loop that continuously monitors server activity. If the server is inactive (without any user activity) or in a shutdown state, we do some background activity such as flushing changes. In the current code we do not check whether the server is in a shutdown state before sleeping for one second.

FIX
---
If the server is in a shutdown state, do not go into the one-second sleep.
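A minimal sketch of the corrected loop shape, with std::atomic and std::this_thread standing in for the server's shutdown state and sleep primitive (names are illustrative): check the shutdown flag before entering the one-second sleep.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> shutdown_in_progress{false};  // set by the shutdown path

// Background master loop: do work, but only take the one-second delay
// when the server is not shutting down.
void master_thread_loop() {
    while (!shutdown_in_progress.load()) {
        // ... flush changes / other background activity ...
        if (shutdown_in_progress.load())
            break;                              // skip the sleep during shutdown
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}
```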
-
- 25 Jul, 2013 1 commit
-
-
Annamalai Gurusami authored
Problem: When the user-specified foreign key name contains "_ibfk_", InnoDB wrongly tries to rename it.

Solution: When a table is renamed, all its associated foreign keys are renamed as well, but only if the foreign key names were automatically generated. If a foreign key name was given by the user, it must not be renamed, even if it contains "_ibfk_".

rb#2935 approved by Jimmy, Krunal and Satya
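Assuming the usual "<table>_ibfk_<N>" convention for auto-generated InnoDB constraint names, this is a small sketch of the rename rule; the helper names are made up for illustration and are not dict0dict functions.

```cpp
#include <cctype>
#include <string>

// A name counts as auto-generated only if it is exactly "<table>_ibfk_<digits>".
bool is_auto_generated_fk(const std::string& fk_name, const std::string& table) {
    const std::string prefix = table + "_ibfk_";
    if (fk_name.compare(0, prefix.size(), prefix) != 0) return false;
    const std::string suffix = fk_name.substr(prefix.size());
    if (suffix.empty()) return false;
    for (char c : suffix)
        if (!std::isdigit(static_cast<unsigned char>(c))) return false;
    return true;                               // e.g. "t1_ibfk_2"
}

std::string fk_name_after_rename(const std::string& fk_name,
                                 const std::string& old_table,
                                 const std::string& new_table) {
    if (!is_auto_generated_fk(fk_name, old_table))
        return fk_name;                        // user-given name: keep it as-is
    return new_table + fk_name.substr(old_table.size());  // swap the table prefix
}
```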
-
- 23 Jul, 2013 1 commit
-
-
Astha Pareek authored
BUG#12535301 - SYS_VARS.RPL_INIT_SLAVE_FUNC MISMATCHES IN DAILY-5.5

Problem: The sys_vars.rpl_init_slave_func test was not re-recorded after its last edit. It was disabled on 5.1 after failures were seen for the above reason. There were no earlier failures because this suite never ran with pb2 on 5.1.

Fix: Added an assert condition after the wait-for checks, re-recorded the test and enabled it.
-
- 18 Jul, 2013 1 commit
-
-
Nisha Gopalakrishnan authored
TO DUMP DATA FROM MYSQL-5.6

Analysis
--------
Dumping mysql-5.6 data using the mysql-5.1/mysql-5.5 'mysqldump' utility fails with a syntax error. The server system variable 'sql_quote_show_create', which quotes identifiers, is set by the mysqldump utility. The mysqldump utility of mysql-5.1/mysql-5.5 uses the deprecated syntax 'SET OPTION' to set the 'sql_quote_show_create' option. Support for that syntax was removed in mysql-5.6, hence a syntax error is reported while taking the dump.

Fix:
---
Changed the 'mysqldump' code to use the syntax 'SET SQL_QUOTE_SHOW_CREATE' to set the 'sql_quote_show_create' option. That syntax is supported on mysql-5.1, mysql-5.5 and mysql-5.6.

NOTE: I have not added an mtr test case since it is difficult to simulate the condition. Also, the syntax may not be simplified further in the future.
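A hedged sketch of the syntax change from the client side, using the libmysqlclient mysql_query() call; mysqldump's own code is structured differently, so this only shows which statement is sent.

```cpp
#include <mysql.h>
#include <cstdio>

// Send the portable form accepted by 5.1, 5.5 and 5.6 instead of the
// removed "SET OPTION SQL_QUOTE_SHOW_CREATE=1" form.
int quote_show_create(MYSQL* conn) {
    if (mysql_query(conn, "SET SQL_QUOTE_SHOW_CREATE=1")) {
        std::fprintf(stderr, "SET failed: %s\n", mysql_error(conn));
        return 1;
    }
    return 0;
}
```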
-
- 17 Jul, 2013 1 commit
-
-
sayantan dutta authored
-
- 09 Jul, 2013 1 commit
-
-
unknown authored
-
- 01 Jul, 2013 1 commit
-
-
Tor Didriksen authored
Cleanup test case (left outfile in data dir)
-
- 19 Jun, 2013 1 commit
-
-
Aditya A authored
Analysis
--------
The pthread mutex commit_threads_m was initialized but never used.

Fix
---
Remove the commit_threads_m mutex from the code base.

[Approved by Marko rb#2475]
-
- 18 Jun, 2013 1 commit
-
-
unknown authored
No commit message
-
- 14 Jun, 2013 2 commits
-
-
unknown authored
No commit message
-
Aditya A authored
TO INCONSISTENCY

PROBLEM
--------
When we drop a partitioned table, we first gather the information about its partitions from the table_name.par file and store it in an internal data structure. Then we delete this file and the data in the table. If the server crashes after deleting the file, then after recovery we cannot access the table. We cannot even drop it, because the drop algorithm requires the .par file to read the partition information.

FIX
---
1. Delete the .par file only after all the table data has been deleted from the storage engine (see the sketch below).
2. If, during a drop operation, we detect that the .par file is missing, delete the .frm file, since there is no way of recovering without the .par file.

[Approved by Mattias rb#2576]
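A rough sketch of the reordered drop using plain file calls; the real server goes through the handler and DDL layers, and the paths and helper names here are assumptions for illustration only.

```cpp
#include <cstdio>
#include <string>

// File-existence helper for the sketch.
bool file_exists(const std::string& p) {
    if (std::FILE* f = std::fopen(p.c_str(), "rb")) { std::fclose(f); return true; }
    return false;
}

// Stand-in for the engine-level removal of the partitions' data.
bool drop_partition_data(const std::string& /*table*/) { return true; }

bool drop_partitioned_table(const std::string& table) {
    const std::string par = table + ".par";
    const std::string frm = table + ".frm";
    if (!file_exists(par)) {
        // Fix 2: the partition layout is unrecoverable, so remove the orphan
        // .frm file instead of leaving the table name permanently unusable.
        std::remove(frm.c_str());
        return true;
    }
    if (!drop_partition_data(table))   // Fix 1: delete the table data first...
        return false;
    std::remove(par.c_str());          // ...and only then delete the .par file,
    std::remove(frm.c_str());          // so a crash never strands the data.
    return true;
}
```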
-
- 10 Jun, 2013 1 commit
-
-
Murthy Narkedimilli authored
-
- 04 Jun, 2013 1 commit
-
-
unknown authored
-
- 24 May, 2013 1 commit
-
-
Venkatesh Duggirala authored
BY BINLOG_KILLED_SIMULATE.TEST

The 'mysqlbinlog' tool creates temporary files while preparing LOAD DATA queries. These files need to be deleted at the end of the test script; otherwise they are left behind on the daily-run machines, causing "no space on device" issues.

Fix: Delete them at the end of these test scripts.
1) Execute mysqlbinlog with the --local-load option to create these files in a specified tmpdir.
2) Delete the tmpdir at the end of the test script.
-
- 23 May, 2013 1 commit
-
-
Chaithra Gopalareddy authored
STRING CONVERSION FUNCTIONS

Problem:
While executing a prepared statement, a user variable is set to memory that is freed at the end of execution. If the statement is executed again, valgrind reports an error when this pointer is accessed.

Analysis:
1. The first time Item_func_set_user_var::check is called, memory is allocated for "value" to store the result (in the call to copy_if_not_alloced).
2. While sending the result, Item_func_set_user_var::check is called again, but this time with "use_result_field" set to true. As a result, we call result_field->val_str(&value).
3. Here the memory allocated for "value" gets freed, and "value" gets set to "result_field", with "str_length" being that of result_field's.
4. In the call to JOIN::cleanup, result_field's memory gets freed, as it is allocated in a chunk as part of the temporary table needed to execute the query.
5. The next time the same statement is executed, "value" points to memory that has already been freed. A valgrind error occurs because "str_length" is positive (set at step 3).
Note that the user variables list is stored as part of the Lex object in set_var_list, hence the persistence across executions.

Solution:
The patch for Bug#11764371, fixed in mysql-5.6+, fixes this problem as well, so it is backported here. In that solution we create another user_var object and repoint it to the temp table's field. As a result, when the allocated buffer is deleted in step 3, the cloned object does not own the buffer, so the deletion does not happen. At step 5, when the statement is executed a second time, the original object is used, and since the deletion did not happen valgrind does not complain about a dangling pointer.

sql/item_func.h: Add constructors.
sql/sql_select.cc: Change user variable assignment functions to read from fields after tables have been unlocked.
-
- 22 May, 2013 1 commit
-
-
Chaithra Gopalareddy authored
Bug#12608543: CRASHES WITH DECIMALS AND STATEMENT NEEDS TO BE REPREPARED ERRORS

Backporting these two fixes to 5.1. Added a unit test to check the my_decimal constructor and assignment operators.

sql/my_decimal.h: Added constructor and assignment operators for my_decimal.
unittest/my_decimal/my_decimal-t.cc: Added test to check constructor and assignment operators for my_decimal.
-
- 16 May, 2013 3 commits
-
-
sayantan dutta authored
-
Annamalai Gurusami authored
INSERT BUFFER MERGE

Problem: When a record is merged from the change buffer into the actual page, under a particular condition it is assumed that the deleted rec will be re-used by the inserted rec. With this assumption, the lock is restored on the pointer to the deleted rec itself, in the belief that it points to the newly inserted rec.

Solution: Just before restoring the lock, update the rec pointer to point to the newly inserted record. An assert has been added to verify this; it fails without the fix and passes with it.

rb#2449 in review by Marko and Jimmy
-
Jon Olav Hauglid authored
In order to keep error message numbers stable between GA releases, we cannot now add a new error message to 5.1/5.5, as such a message would get a number that is already used in 5.6. This patch enforces this by adding a 5.1/5.5-specific check when processing the error message file. If a new error message is added, the build will abort and report an error.
-
- 15 May, 2013 1 commit
-
-
Marko Mäkelä authored
When a record contains no user data bytes (such as when the PRIMARY KEY is an empty string and all secondary index fields are NULL or the empty string), page_zip_decompress() could fail to set the record heap_no correctly.

page_zip_decompress_node_ptrs(), page_zip_decompress_sec(), page_zip_decompress_clust(): Set heap_no also at the end of the compressed data stream.

rb#2448 approved by Jimmy Yang and Inaam Rana
-
- 13 May, 2013 3 commits
-
-
unknown authored
No commit message
-
Murthy Narkedimilli authored
-
unknown authored
No commit message
-
- 12 May, 2013 1 commit
-
-
Annamalai Gurusami authored
innobase_convert_to_filename_charset() was by mistake kept within the conditional compilation of UNIV_COMPILE_TEST_FUNCS. The function is now placed outside UNIV_COMPILE_TEST_FUNCS. Also removed an unnecessary log message (as in 5.6+).
-
- 10 May, 2013 2 commits
-
-
Chaithra Gopalareddy authored
Reverting fix for Bug#16119355 in 5.1 as this needs two patches from 5.5+ to work for a certain case
-
Murthy Narkedimilli authored
Description: Fixing a build issue. The function innobase_convert_to_system_charset() is included only in the built-in InnoDB and is missing from the InnoDB plugin. Adding this function to the InnoDB plugin as well.
-