1. 23 Mar, 2015 2 commits
    • Bug #20730220 : BACKPORT BUG#19880368 TO 5.1 · 044060fe
      Chaithra Gopalareddy authored
      Backport from mysql-5.5 to mysql-5.1
      
      Bug#19880368 : GROUP_CONCAT CRASHES AFTER DUMP_LEAF_KEY
      
      Problem:
      find_order_by_list does not update the address of order_item
      correctly after resolving.
      
      Solution:
      Change the ref_by address for an order_by field, if it is a
      SUM_FUNC_ITEM, to the address of the field present in
      all_fields.
    • Bug #20730129: BACKPORT BUG#19612819 TO 5.1 · a2cd622f
      Chaithra Gopalareddy authored
      Backport from mysql-5.5 to mysql-5.1
      
      Bug #19612819 :  FILESORT: ASSERTION FAILED: POS->FIELD != 0 || POS->ITEM != 0
      
      Problem:
      While getting the temp table field for a REF_ITEM,
      make_sortorder uses the real_item. As a result, the
      server later fails with an assertion.
      
      Solution:
      Do not use real_item to get the temp table field.
      Instead, use the REF_ITEM itself, as temp table fields
      are created for the REF_ITEM, not the real_item.
  2. 19 Mar, 2015 1 commit
    • Bug#20730053: BACKPORT BUG#19770858 TO 5.1 · c7581bb5
      Jon Olav Hauglid authored
      Backport from mysql-5.5 to mysql-5.1 of:
      
      Bug#19770858: MYSQLD CAN BE DRIVEN TO OOM WITH TWO SIMPLE SESSION VARS
      
      The problem was that the maximum value of the transaction_prealloc_size
      session system variable was ULONG_MAX which meant that it was possible
      to cause the server to allocate excessive amounts of memory.
      
      This patch fixes the problem by reducing the maximum value of
      transaction_prealloc_size and transaction_alloc_block_size
      to 128K.
      
      Note that transactions will still be able to allocate more than
      128K if needed; this patch only reduces the amount that can be
      preallocated, as well as the maximum size of the incremental
      allocation blocks.
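
      As a rough illustration (not taken from the patch itself): with the
      new 128K (131072 byte) maximum, an oversized value should now be
      truncated to the maximum with a warning rather than accepted:

      SET SESSION transaction_prealloc_size    = 1024 * 1024 * 1024;
      SET SESSION transaction_alloc_block_size = 1024 * 1024 * 1024;
      SHOW WARNINGS;
      SELECT @@session.transaction_prealloc_size,
             @@session.transaction_alloc_block_size;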
      
      (cherry picked from commit 540c9f7ebb428bbf9ec028feabe1f7f919fdefd9)
      
      Conflicts:
      	mysql-test/suite/sys_vars/r/transaction_alloc_block_size_basic.result
      	mysql-test/suite/sys_vars/r/transaction_alloc_block_size_basic_64.result
      	mysql-test/suite/sys_vars/t/disabled.def
      	mysql-test/suite/sys_vars/t/transaction_alloc_block_size_basic.test
      	sql/sys_vars.cc
  3. 03 Dec, 2013 1 commit
  4. 04 Nov, 2013 2 commits
  5. 01 Nov, 2013 1 commit
  6. 31 Oct, 2013 2 commits
    • No commit message · 7e1c78c8
      mysql-builder@oracle.com authored
    • Bug #12917164 DROP USER CAN'T DROP USERS WITH LEGACY · 46b617d2
      Venkata Sidagam authored
          UPPER CASE HOST NAME ANYMORE
      
      Description:
      It is not possible to drop users whose host names contain upper
      case letters; e.g. DROP USER 'root'@'Tmp_Host_Name'; fails with
      an error.
      
      Analysis: Since fix 11748570, lower case hostnames are the
      standard. However, the hostname created by the mysql_install_db
      script could still contain upper case letters. Such a hostname
      (e.g. Tmp_Host_Name) is stored as-is in the mysql.user table.
      In that case "DROP USER 'root'@'Tmp_Host_Name';" gives an
      error, because since the 11748570 fix we compare against the
      lower-cased hostname.
      
      Fix: We need to convert the hostname to lower case before storing
      it into the mysql.user table when we run the mysql_install_db
      script.
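
      An illustration of the affected scenario (assuming mysql.user
      already contains a row whose Host column was written with upper
      case letters, e.g. by an old mysql_install_db run):

      SELECT user, host FROM mysql.user WHERE host = 'Tmp_Host_Name';
      -- Before the fix this failed, because the server compared the
      -- given host against the lower-cased form only:
      DROP USER 'root'@'Tmp_Host_Name';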
  7. 30 Oct, 2013 1 commit
  8. 29 Oct, 2013 1 commit
  9. 18 Oct, 2013 1 commit
    • Bug#17559867 AFTER REBUILDING,A MYISAM PARTITION ENDS UP · df5018f2
      Aditya A authored
                   AS A INNODB PARTITTION.
      
      PROBLEM
      -------
      The correct engine_type was not being set during
      rebuild of a partition, so the handler was always
      created with the default engine (InnoDB for 5.5+).
      Therefore, even if the table was MyISAM, the partitions
      ended up as InnoDB partitions after rebuilding.
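
      For illustration only (hypothetical table, not the regression
      test), the kind of statement affected:

      CREATE TABLE t1 (a INT)
        ENGINE=MyISAM
        PARTITION BY HASH (a) PARTITIONS 2;

      -- Before the fix, the rebuilt partition's handler was created
      -- with the default engine, so it could end up as InnoDB even
      -- though the table is MyISAM:
      ALTER TABLE t1 REBUILD PARTITION p0;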
      
      FIX
      ---
      Set the correct engine type during rebuild.  
      
      [Approved by mattiasj #rb3599]
  10. 16 Oct, 2013 2 commits
    • Bug#17234370 LAST_INSERT_ID IS REPLICATED INCORRECTLY IF · 29e45f15
      Venkatesh Duggirala authored
      REPLICATION FILTERS ARE USED.
      
      Problem:
      When a filtered slave applies an Int_var_log_event and then
      tries to write the event to its own binlog, the LAST_INSERT_ID
      value is written incorrectly.
      
      Analysis:
      THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt
      is a variable which is set when LAST_INSERT_ID() is used by
      a statement. If it is set, first_successful_insert_id_in_
      prev_stmt_for_binlog will be stored in the statement-based
      binlog. This variable is CUMULATIVE along the execution of
      a stored function or trigger: if one substatement sets it
      to 1 it will stay 1 until the function/trigger ends,
      thus making sure that first_successful_insert_id_in_
      prev_stmt_for_binlog does not change anymore and is
      propagated to the caller for binlogging. This is achieved
      using the following code:
      if (!stmt_depends_on_first_successful_insert_id_in_prev_stmt)
      {
        /* It's the first time we read it */
        first_successful_insert_id_in_prev_stmt_for_binlog=
          first_successful_insert_id_in_prev_stmt;
        stmt_depends_on_first_successful_insert_id_in_prev_stmt= 1;
      }
      
      After receiving an Int_var_log_event from the master, the slave
      server sets
      stmt_depends_on_first_successful_insert_id_in_prev_stmt
      to true (*which is wrong*) without setting
      first_successful_insert_id_in_prev_stmt_for_binlog. Because
      of this, when the actual DML statement with
      LAST_INSERT_ID() is parsed by the slave SQL thread,
      first_successful_insert_id_in_prev_stmt_for_binlog is not
      set. Hence the default value zero is written to the
      slave's binlog.
      
      Why only a *filtered slave* is affected when the code is
      in a common place:
      -------------------------------------------------------
      In Query_log_event::do_apply_event,
      THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt
      is reset to zero at the end of the function. In the case of a
      normal slave (no filters), this variable is therefore reset.
      On a filtered slave, the slave SQL thread defers execution of
      all IRU events until the IRU's Query_log_event is received. Once
      it receives the Query_log_event it executes all pending IRU
      events and then executes the Query_log_event itself. Hence the
      variable does not get reset to 0, causing this bug.
      
      Fix: As described above, the root cause was setting
      THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt
      when an Int_var_log_event was executed by the slave SQL thread.
      Hence the problematic line is removed from the code.
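
      Purely as an illustration of the scenario (the filter option and
      statements are generic placeholders, not the actual test):

      -- Slave configured with a replication filter, e.g. in my.cnf:
      --   replicate-ignore-db = ignored_db
      -- On the master:
      CREATE DATABASE db1;
      CREATE TABLE db1.t1 (id INT AUTO_INCREMENT PRIMARY KEY);
      CREATE TABLE db1.t2 (v INT);
      INSERT INTO db1.t1 VALUES (NULL);             -- sets LAST_INSERT_ID()
      INSERT INTO db1.t2 VALUES (LAST_INSERT_ID()); -- preceded in the binlog
                                                    -- by an Int_var_log_event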
    • Bug#16900358 FIX FOR CVE-2012-5611 IS INCOMPLETE · 9fc51224
      Venkata Sidagam authored
      Description: Fix for bug CVE-2012-5611 (bug 67685) is
      incomplete. The ACL_KEY_LENGTH-sized buffers in acl_get() and
      check_grant_db() can be overflowed by up to two bytes. That's
      probably not enough to do anything more serious than crashing
      mysqld.
      Analysis: In acl_get(), "copy_length" is calculated by just
      adding the variable lengths. But when they are used with
      strmov(), +1 is added to each. This can lead to a three byte
      buffer overflow (i.e. two +1's at the strmov() calls and one
      byte for the null terminator added by strmov()). The same
      happens in check_grant_db().
      Fix: We need to add "+2" to "copy_length" in acl_get()
      and "+1" to "copy_length" in check_grant_db().
  11. 14 Oct, 2013 1 commit
    • WL#7266: Dump-thread additional concurrency tests ... · 3f587452
      Nuno Carvalho authored
      WL#7266: Dump-thread additional concurrency tests                                                                                                                           
      
      This worklog aims at testing the two following scenarios:
      
      1) Whenever the mysql_binlog_send method (dump thread)
      reached the end of file while reading events from the binlog,
      before checking whether it should wait for more events, there was
      a test to check if the file being read was still active, i.e.,
      that it was the last known binlog. However, it was possible that
      something was written to the binary log and a rotation then
      happened, after EOF was detected and before the check for an
      active file was performed. In that case, the end of the binary
      log would not be read by the dump thread, causing the slave to
      lose updates.
      This test verifies that the problem has been fixed. It waits
      during this window while forcing a rotation of the binlog.
      
      2) Verify that the dump thread can correctly send events in the
      active file after encountering an I/O error.
  12. 07 Oct, 2013 2 commits
  13. 04 Oct, 2013 1 commit
  14. 27 Sep, 2013 1 commit
  15. 20 Sep, 2013 1 commit
  16. 12 Sep, 2013 1 commit
  17. 11 Sep, 2013 1 commit
    • Bug#16752251 - INNODB DOESN'T REDO-LOG INSERT BUFFER MERGE OPERATION IF · f166ec71
      Satya Bodapati authored
      	       IT IS DONE IN-PLACE
      
      With the change buffer enabled, InnoDB doesn't write a
      transaction log record when it merges a record from the insert
      buffer to a secondary index page if the insertion is performed
      as an update-in-place.
      
      Fixed by logging the 'update-in-place' operation on secondary index
      pages.
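
      A hypothetical setup that exercises the affected path (change
      buffering enabled, inserts into a non-unique secondary index);
      names and values are illustrative only:

      SET GLOBAL innodb_change_buffering = 'inserts';

      CREATE TABLE t1 (
        id INT PRIMARY KEY,
        k  INT,
        KEY sec_idx (k)
      ) ENGINE=InnoDB;

      -- Inserts into sec_idx may be buffered and merged into the
      -- secondary index page later; when that merge was applied as an
      -- update-in-place, it was previously not redo-logged.
      INSERT INTO t1 VALUES (1, 10), (2, 20), (3, 30);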
      
      Approved by Marko. rb#2429
  18. 10 Sep, 2013 3 commits
    • Bug #16978278 : BUFFER OVERFLOW WHEN PRINTING A LARGE 64-BIT INTEGER · d88c01d3
      mithun authored
                      WITH MY_B_VPRINTF()
      Issue         : On an LP64 machine the maximum long value can be a
                      20-digit decimal value, but the intermediate buffer
                      used in my_b_vprintf() is only 17 bytes long. This
                      can lead to a buffer overflow.
      Solution      : Increased the buffer size from 17 to 32 bytes.
                      The code is backported from 5.6.
    • Bug#17402313 DUMP THREAD SENDS SOME EVENTS MORE THAN ONCE · 9e91f479
      Libing Song authored
      Post-fix: suppress the new warning generated by the bug's fix.
    • Bug#17402313 DUMP THREAD SENDS SOME EVENTS MORE THAN ONCE · d5fdf9ef
      Libing Song authored
      The dump thread may encounter an error when reading events from
      the active binlog file. However, the errors may be temporary, so
      the dump thread will try to read the event again. But the dump
      thread seeked to a wrong position, which caused some events to be
      sent twice.

      To fix the bug, prev_pos is defined outside the while loop and is
      set to the correct position after every event is read correctly.

      This patch also makes binlog_can_be_corrupted more accurate: only
      binlogs that were not closed normally are marked
      binlog_can_be_corrupted.

      Finally, two warnings are added for when dump threads encounter
      these temporary errors.
  19. 09 Sep, 2013 3 commits
  20. 03 Sep, 2013 1 commit
  21. 30 Aug, 2013 2 commits
  22. 29 Aug, 2013 1 commit
  23. 28 Aug, 2013 1 commit
    • BUG#17294150-POTENTIAL CRASH DUE TO BUFFER OVERRUN IN SSL · c53cad81
      Raghav Kapoor authored
                   ERROR HANDLING CODE 
      
      BACKGROUND:
      There can be a potential crash due to buffer overrun in 
      SSL error handling code due to missing comma in
      ssl_error_string[] array in viosslfactories.c.
      
      ANALYSIS:
      Found by code inspection.
      
      FIX:
      Added the missing comma in SSL error handling code
      in ssl_error_string[] array in viosslfactories.c.
  24. 26 Aug, 2013 1 commit
  25. 23 Aug, 2013 1 commit
    • Bug#17029399 - CRASH IN ITEM_REF::FIX_FIELDS WITH TRIGGER ERRORS · 4f0e7c03
      Neeraj Bisht authored
      Problem:-
      In a procedure, when the value of a select query is compared
      using an IN clause and the two sides have different collations,
      the first execution raises an error and the second execution
      hits an assertion.
      The procedure contains a query like
      set @x = ((select a from t1) in (select d from t2));  <--- proc1
      where (select a from t1) is sel1 and (select d from t2) is sel2.
      
      Analysis:-
      When we execute proc1 for the first time, while resolving the
      fields of the user variable we call Item_in_subselect::fix_fields,
      which resolves sel2. There, in
      Item_in_subselect::select_transformer, we evaluate the left
      expression (sel1) and store it in an Item_cache_* object
      (to avoid re-evaluating it many times during subquery execution)
      by creating an Item_in_optimizer object.
      While evaluating the left expression we prepare sel1.
      After that, in Item_in_subselect::select_transformer() we put a
      new condition into sel2 which compares
      t2.d with sel1 (which is cached in the Item_in_optimizer).

      Later, while checking the collations in agg_item_collations(),
      we get an error and clean up the item. During that cleanup we
      also cleaned the cached value in the Item_in_optimizer object.

      When we execute the procedure a second time, the condition for
      sel2 is already present, but in setup_cond() the reference item
      cannot be found because it was freed during that item cleanup.
      So it asserts.
      
      
      Solution:-
      We should not clean up the cached value in the Item_in_optimizer
      object if we have already put the condition into the subselect.
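
      An illustrative reproduction shape (collations and names are
      invented, not the original test case):

      CREATE TABLE t1 (a VARCHAR(10) COLLATE latin1_swedish_ci);
      CREATE TABLE t2 (d VARCHAR(10) COLLATE latin1_german2_ci);

      DELIMITER //
      CREATE PROCEDURE proc1()
      BEGIN
        SET @x = ((SELECT a FROM t1) IN (SELECT d FROM t2));
      END //
      DELIMITER ;

      CALL proc1();   -- first call: collation mismatch error
      CALL proc1();   -- second call: asserted before the fix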
      
  26. 21 Aug, 2013 4 commits
    • Merge working copy to mysql-5.1. · 36db646f
      Marko Mäkelä authored
    • Merge mysql-5.1 to working copy. · 2e7ef2cb
      Marko Mäkelä authored
    • Bug#12560151 61132: infinite loop in buf_page_get_gen() when handling · 6a3bb3c0
      Marko Mäkelä authored
      compressed pages
      
      After loading a compressed-only page in buf_page_get_gen() we allocate a new
      block for decompression. The problem is that the compressed page is neither
      buffer-fixed nor I/O-fixed by the time we call buf_LRU_get_free_block(),
      so it may end up being evicted and returned back as a new block.
      
      buf_page_get_gen(): Temporarily buffer-fix the compressed-only block
      while allocating memory for an uncompressed page frame.
      This should prevent this form of the infinite loop, which is more likely
      with a small innodb_buffer_pool_size.
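
      Illustrative conditions under which this was more likely to occur
      (small buffer pool plus compressed tables); values are arbitrary:

      -- my.cnf (hypothetical):
      --   innodb_buffer_pool_size = 16M
      --   innodb_file_per_table   = 1
      --   innodb_file_format      = Barracuda

      CREATE TABLE t1 (
        id INT PRIMARY KEY,
        payload VARCHAR(1000)
      ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;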
      
      rb#2511 approved by Jimmy Yang, Sunny Bains
    • Bug#11765252 - READ OF FREED MEMORY WHEN "USE DB" AND · 10a6aa25
      Praveenkumar Hulakund authored
                     "SHOW PROCESSLIST"
      
      Analysis:
      ----------
      The problem here is that if one connection changes its
      default db and, at the same time, another connection executes
      "SHOW PROCESSLIST" and tries to read that connection's db,
      there is a chance of accessing invalid
      memory.

      The db name stored in THD is not guarded while the user DB is
      being changed or while it is being read by "SHOW PROCESSLIST".
      So, if THD.db is freed by the THD "owner" thread while another
      thread executing a "SHOW PROCESSLIST" statement tries to read
      and copy THD.db at the same time, then we may end up with the
      issue reported here.
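
      An illustration of the race (two concurrent sessions; database
      names are arbitrary and assumed to exist):

      -- Session 1: keeps switching its default database, which frees
      -- and reallocates THD.db.
      USE db1;
      USE db2;

      -- Session 2, concurrently: reads every connection's db column,
      -- risking a read of the memory just freed by session 1.
      SHOW PROCESSLIST;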
      
      Fix:
      ----------
      Used mutex "LOCK_thd_data" to guard THD.db while freeing it
      and while copying it to processlist.
  27. 16 Aug, 2013 1 commit
    • Bug#17312846 CHECK TABLE ASSERTION FAILURE · 55129f67
      Marko Mäkelä authored
      DICT_TABLE_GET_FORMAT(CLUST_INDEX->TABLE) >= 1
      
      The function row_sel_sec_rec_is_for_clust_rec() was incorrectly
      preparing to compare a NULL column prefix in a secondary index with a
      non-NULL column in a clustered index.
      
      This can trigger an assertion failure in 5.1 plugin and later. In the
      built-in InnoDB of MySQL 5.1 and earlier, we would apparently only do
      some extra work, by trimming the clustered index field for the
      comparison.
      
      The code might actually have worked properly apart from this debug
      assertion failure. It is merely doing some extra work in fetching a
      BLOB column, and then comparing it to NULL (which would return the
      same result, no matter what the BLOB contents are).
      
      While the test case involves CHECK TABLE, this could theoretically
      occur during any read that uses a secondary index on a column prefix
      of a column that can be NULL.
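
      An illustrative table shape matching the description (a secondary
      index on a column prefix of a nullable column; not the actual
      test case):

      CREATE TABLE t1 (
        id INT PRIMARY KEY,
        b  BLOB NULL,
        KEY b_prefix (b(10))
      ) ENGINE=InnoDB;

      INSERT INTO t1 VALUES (1, NULL), (2, REPEAT('x', 100));
      CHECK TABLE t1;   -- could hit the debug assertion before the fix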
      
      rb#3101 approved by Mattias Jonsson