1. 05 Jul, 2012 1 commit
    • Georgi Kodinov's avatar
      Bug #13889741: HANDLE_FATAL_SIGNAL IN _DB_ENTER_ | · 42644a07
      Georgi Kodinov authored
      HANDLE_FATAL_SIGNAL IN STRNLEN
      
      Fixed the following bounds checking problems (sketched below):
      1. In check_if_legal_filename(), make sure the null-terminated
      string is long enough before accessing the bytes in it.
      Prevents a potential read past the end of the buffer.
      2. In my_wc_mb_filename() of the filename charset, check
      for the end of the destination buffer before sending single-
      byte characters into it.
      Prevents write-past-end-of-buffer errors (and the garbling of
      the stack in the cases reported here).
      
      Added test cases.
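      
      A standalone sketch of the two checks (all names and types here
      are stand-ins, not the actual mysys code):
      
        #include <stddef.h>
      
        /* Fix 1 (check_if_legal_filename): verify the name is long
           enough before comparing its bytes against a reserved name,
           so a short string is never read past its null terminator. */
        static int matches_reserved(const char *name, const char *reserved,
                                    size_t reserved_len)
        {
          for (size_t i= 0; i < reserved_len; i++)
            if (name[i] == '\0' || name[i] != reserved[i])
              return 0;                  /* too short, or a mismatch */
          return 1;
        }
      
        /* Fix 2 (my_wc_mb_filename): check for the end of the
           destination buffer before storing even a single-byte
           character. */
        static int wc_to_mb(int wc, unsigned char *dst,
                            unsigned char *dst_end)
        {
          if (dst >= dst_end)
            return -1;                   /* would write past the end */
          *dst= (unsigned char) wc;
          return 1;                      /* one byte written */
        }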
      42644a07
  2. 03 Jul, 2012 1 commit
    • Rohit Kalhans's avatar
      BUG#11762667:MYSQLBINLOG IGNORES ERRORS WHILE WRITING OUTPUT · 176d6b1d
      Rohit Kalhans authored
      This is a follow-up patch for the bug, enabling the test
      i_binlog.binlog_mysqlbinlog_file_write.test.
      The test was disabled in mysql trunk and mysql 5.5 because, in
      release builds, mysqlbinlog is not debug compiled whereas mysqld
      is. Since the have_debug.inc script checks only whether mysqld
      is debug compiled, the test was not being skipped on release
      builds.
      
      We resolve this problem by creating a new include file,
      mysqlbinlog_have_debug.inc, which checks exclusively whether
      mysqlbinlog is debug compiled. If not, it skips the test.
      176d6b1d
  3. 29 Jun, 2012 1 commit
  4. 28 Jun, 2012 1 commit
    • Georgi Kodinov's avatar
      Bug #13708485: malformed resultset packet crashes client · 428ff7f8
      Georgi Kodinov authored
      Several fixes :
      
      * sql-common/client.c
      Added a validity check of the field metadata packet sent
      by the server. libmysql now checks whether the length of the
      data sent by the server matches what the protocol expects
      before using the data (see the sketch below).
      
      * client/mysqltest.cc
      Fixed the error handling code in mysqltest to avoid sending
      new commands when reading the result set failed (and there
      is unread data in the pipe).
      
      * sql_common.h + libmysql/libmysql.c + sql-common/client.c
      unpack_fields() now generates a proper error when it fails.
      Added a new argument to this function to support the error 
      generation.
      
      * sql/protocol.cc
      Added a debug trigger to cause the server to send a NULL
      instead of the packet expected by the client, for testing
      purposes.
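      
      A standalone sketch of the kind of client-side validation added
      (the names below are stand-ins; the real check operates on the
      field metadata in sql-common/client.c):
      
        #include <stddef.h>
      
        /* Before consuming a length-prefixed value from a server
           packet, verify that the announced length actually fits in
           the bytes received. */
        static int read_len_prefixed(const unsigned char **pos,
                                     const unsigned char *packet_end,
                                     const unsigned char **val,
                                     size_t *len)
        {
          if (*pos >= packet_end)
            return -1;                   /* truncated packet */
          *len= **pos;                   /* 1-byte length, for brevity */
          (*pos)++;
          if ((size_t)(packet_end - *pos) < *len)
            return -1;                   /* length exceeds packet data */
          *val= *pos;
          *pos+= *len;
          return 0;
        }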
      428ff7f8
  5. 29 Jun, 2012 2 commits
  6. 28 Jun, 2012 1 commit
  7. 19 Jun, 2012 1 commit
  8. 18 Jun, 2012 1 commit
    • Norvald H. Ryeng's avatar
      Bug#13003736 CRASH IN ITEM_REF::WALK WITH SUBQUERIES · 5f61cc43
      Norvald H. Ryeng authored
      Problem: Some queries with subqueries and a HAVING clause that
      consists only of a column not in the select or grouping lists
      cause the server to crash.
      
      During parsing, an Item_ref is constructed for the HAVING column. The
      name of the column is resolved when JOIN::prepare calls fix_fields()
      on its having clause. Since the column is not mentioned in the select
      or grouping lists, a ref pointer is not found and a new Item_field is
      created instead. The Item_ref is replaced by the Item_field in the
      tree of HAVING clauses. Since the tree consists only of this item, the
      pointer that is updated is JOIN::having. However,
      st_select_lex::having still points to the Item_ref as the root of the
      tree of HAVING clauses.
      
      The bug is triggered when doing filesort for create_sort_index(). When
      find_all_keys() calls select->cond->walk() it eventually reaches
      Item_subselect::walk() where it continues to walk the having clauses
      from lex->having. This means that it finds the Item_ref instead of the
      new Item_field, and Item_ref::walk() tries to dereference the ref
      pointer, which is still null.
      
      The crash is reproducible only in 5.5, but the problem lies latent in
      5.1 and trunk as well.
      
      Fix: After calling fix_fields on the having clause in JOIN::prepare(),
      set select_lex::having to point to the same item as JOIN::having.
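      
      Schematically, the repair in JOIN::prepare() is one extra
      assignment after the HAVING clause is fixed (a sketch; the exact
      surrounding code differs between versions):
      
        if (having && having->fix_fields(thd, &having))
          DBUG_RETURN(-1);               /* error during resolution */
        /* fix_fields() may have substituted an Item_field for the
           Item_ref, so keep st_select_lex pointing at the same tree: */
        select_lex->having= having;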
      
      This patch also fixes a bug in 5.1 and 5.5 that is triggered if the
      query is executed as a prepared statement. The Item_field is created
      in the runtime arena when the query is prepared, and the pointer to
      the item is saved by st_select_lex::fix_prepare_information() and
      brought back as a dangling pointer when the query is executed, after
      the runtime arena has been reclaimed.
      
      Fix: Backport fix from trunk that switches to the permanent arena
      before calling Item_ref::fix_fields() in JOIN::prepare().
      5f61cc43
  9. 15 Jun, 2012 1 commit
  10. 14 Jun, 2012 1 commit
  11. 13 Jun, 2012 1 commit
    • Harin Vadodaria's avatar
      Bug#11753779: MAX_CONNECT_ERRORS WORKS ONLY WHEN 1ST · 3ec0a7eb
      Harin Vadodaria authored
                    INC_HOST_ERRORS() IS CALLED.
      
      Issue: The sequence in which inc_host_errors() and
      reset_host_errors() were called needed changes in order to
      maintain a correct connection error count.
      
      Solution: The call to reset_host_errors() is moved to a
      location after which no calls to inc_host_errors() are made.
      3ec0a7eb
  12. 12 Jun, 2012 1 commit
    • Manish Kumar's avatar
      BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET · 1211b5d5
      Manish Kumar authored
      Problem
      ========
                  
      Replication breaks when the event length exceeds the size of
      the master Dump thread's max_allowed_packet.
      
      The failure occurs because the event length plus the maximum
      event header length exceeds the Dump thread's
      max_allowed_packet, which causes the Dump thread to break
      replication and throw an error.
      
      This can happen, e.g., with row-based replication of an
      Update_rows event.
                  
      Fix
      ====
                
      The problem is fixed in two steps:
      
      1.) The Dump thread's limit for reading an event is raised to
          the upper limit, i.e. the Dump thread reads whatever gets
          logged in the binary log.
      
      2.) On the slave side, the max_allowed_packet of the slave's
          threads (IO/SQL) is raised to 1GB.
      
          This is done through the new server option
          slave_max_allowed_packet, which lets the DBA regulate the
          max_allowed_packet of the slave threads (IO/SQL) and
          facilitates sending large packets from the master to the
          slave. The slave can thus receive and apply large packets
          successfully.
      1211b5d5
  13. 05 Jun, 2012 1 commit
  14. 01 Jun, 2012 1 commit
    • Annamalai Gurusami's avatar
      Bug #13933132: [ERROR] GOT ERROR -1 WHEN READING TABLE APPEARED · 08f36070
      Annamalai Gurusami authored
      WHEN KILLING
      
      Suppose there is a query waiting for a lock.  If the user kills
      this query, then the "Got error -1 when reading table" error
      message must not be logged in the server log file.  Since this
      is a user-requested interruption, no spurious error message
      should be logged.  This patch removes the error message from
      the log.
      
      approved by joh and tatjana
      08f36070
  15. 31 May, 2012 2 commits
  16. 30 May, 2012 2 commits
  17. 29 May, 2012 1 commit
    • Rohit Kalhans's avatar
      Bug#11762667: MYSQLBINLOG IGNORES ERRORS WHILE WRITING OUTPUT · 35d4c18e
      Rohit Kalhans authored
      Problem: mysqlbinlog exits without any error code in case of a
      file write error. This is because the Log_event::print() method
      does not return a value, and thus any errors were being ignored.
      
      Resolution: We resolve this problem by checking for
      IO_CACHE::error == -1 after every call to Log_event::print()
      and terminating further execution, as sketched below.
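      
      Schematically (a sketch; apart from IO_CACHE::error and
      Log_event::print(), the names below are stand-ins for the
      actual mysqlbinlog code):
      
        ev->print(result_file, print_event_info);
        /* Log_event::print() returns void, so inspect the cache
           being written instead: */
        if (cache->error == -1)
          return ERROR_STOP;             /* stop further processing */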
      35d4c18e
  18. 24 May, 2012 1 commit
    • Inaam Rana's avatar
      Bug #14100254 65389: MVCC IS BROKEN WITH IMPLICIT LOCK · 0bb636b3
      Inaam Rana authored
      rb://1088
      approved by: Marko Makela
      
      This bug was introduced in the early stages of the plugin. We
      were not checking for an implicit lock on a secondary index
      record for the trx_id that is stamped on the current version of
      the clustered index record, in the case where the clustered
      index record has a previous delete-marked version.
      0bb636b3
  19. 21 May, 2012 2 commits
    • Annamalai Gurusami's avatar
      Bug #12752572 61579: REPLICATION FAILURE WHILE · 3fcd55f6
      Annamalai Gurusami authored
      INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER
      
      When an insert stmt like "insert into t values (1),(2),(3)" is
      executed, the autoincrement values assigned to these three rows are
      expected to be contiguous.  In the given lock mode
      (innodb_autoinc_lock_mode=1), the auto inc lock will be released
      before the end of the statement.  So to make the autoincrement
      contiguous for a given statement, we need to reserve the auto inc
      values at the beginning of the statement.  
      
      Modified the fix based on review comment by Svoj.  
      3fcd55f6
    • Manish Kumar's avatar
      BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET · 9aa79dc5
      Manish Kumar authored
      Problem
      ========
                  
      SQL statements close to the size of max_allowed_packet produce binary
      log events larger than max_allowed_packet.
                    
      This failure occurs because the event length is more than
      max_allowed_packet plus the maximum event header length. Since
      the event length exceeds this size, the master Dump thread is
      unable to send the packet on to the slave.
      
      This can happen, e.g., with row-based replication of an
      Update_rows event.
                  
      Fix
      ====
                
      The problem was fixed by raising the max_allowed_packet of the
      slave's threads (IO/SQL) to 1GB. This is done through the new
      server option, which regulates the max_allowed_packet of the
      slave threads (IO/SQL), allowing the slave to receive and apply
      large packets successfully.
      9aa79dc5
  20. 18 May, 2012 1 commit
    • Rohit Kalhans's avatar
      BUG#14005409 - 64624 · c64b88d6
      Rohit Kalhans authored
            
      Problem: After the fix for Bug#12589870, a new field that
      stores the length of the db name was added to the buffer that
      stores the query to be executed. Unlike for a plain user
      session, the replication path did not allocate the necessary
      chunk in the Query-event constructor. This caused an invalid
      read while accessing this field.
      
      Solution: We fix this problem by allocating the necessary chunk
      in the buffer created in Query_log_event::Query_log_event() and
      storing the length of the database name.
      c64b88d6
  21. 17 May, 2012 3 commits
    • Gopal Shankar's avatar
      Bug#12636001 : deadlock from thd_security_context · 047fea06
      Gopal Shankar authored
      PROBLEM:
      Threads end-up in deadlock due to locks acquired as described
      below,
      
      con1: Runs a query on a table.
        It is important that this SELECT backs off while trying to
        open t1 and enters wait_for_condition(). The SELECT is then
        blocked trying to lock mysys_var->mutex, which is held by
        con3. The very significant fact here is that
        mysys_var->current_mutex will still point to LOCK_open, even
        though LOCK_open is no longer held by con1 at this point.
      
      con2: Try dropping table used in con1 or query some table.
        It will hold LOCK_open and be blocked trying to lock
        kernel_mutex held by con4.
      
      con3: Try killing the query run by con1.
        It will hold THD::LOCK_thd_data belonging to con1 while
        trying to lock mysys_var->current_mutex belonging to con1.
        But current_mutex will point to LOCK_open which is held
        by con2.
      
      con4: Get innodb engine status
        It will hold kernel_mutex, trying to lock THD::LOCK_thd_data
        belonging to con1 which is held by con3.
      
      So while technically only con2, con3 and con4 participate in the
      deadlock, con1's mysys_var->current_mutex pointing to LOCK_open
      is a vital component of the deadlock.
      
      CYCLE = (THD::LOCK_thd_data -> LOCK_open ->
               kernel_mutex -> THD::LOCK_thd_data)
      
      FIX:
      LOCK_thd_data is responsible for protecting:
      1) thd->query, thd->query_length
      2) VIO
      3) thd->mysys_var (used by KILL statement and shutdown)
      4) THD during thread delete.
      
      Among these, 1), 2), and (3,4) are three independent groups of
      responsibility. If a different lock owns responsibility for
      (3,4), the above-mentioned deadlock cycle can be avoided. This
      fix introduces LOCK_thd_kill to handle responsibility (3,4),
      which eliminates the deadlock, as sketched below.
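      
      A minimal sketch of the split (pthreads for illustration;
      LOCK_thd_kill and mysys_var come from this fix, the rest of the
      structure is illustrative):
      
        #include <pthread.h>
      
        struct st_my_thread_var;              /* opaque here */
      
        struct THD
        {
          pthread_mutex_t LOCK_thd_data;  /* 1) query/query_length, 2) VIO */
          pthread_mutex_t LOCK_thd_kill;  /* new: 3) mysys_var, 4) delete */
          st_my_thread_var *mysys_var;
        };
      
        /* The KILL path now serializes on LOCK_thd_kill, so it no
           longer holds a lock that chains into LOCK_open ->
           kernel_mutex -> LOCK_thd_data and closes the cycle. */
        void awake(THD *victim)
        {
          pthread_mutex_lock(&victim->LOCK_thd_kill);
          /* signal victim->mysys_var->current_cond here */
          pthread_mutex_unlock(&victim->LOCK_thd_kill);
        }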
      
      Note: The problem is not found in 5.5. The introduction of the
      MDL subsystem moved metadata locking responsibility from the
      TDC/TC to the MDL subsystem, which reduced the responsibility
      of LOCK_open. As the use of LOCK_open was removed in
      open_table() and mysql_rm_table(), the above-mentioned CYCLE
      does not form.
      Revision ID for changes,
      open_table() = dlenev@mysql.com-20100727133458-m3ua9oslnx8fbbvz
      mysql_rm_table() = jon.hauglid@oracle.com-20101116100012-kxep9txz2fxy3nmw
      047fea06
    • mysql-builder@oracle.com's avatar
      No commit message · 4f5ada6d
      mysql-builder@oracle.com authored
      No commit message
      4f5ada6d
    • mysql-builder@oracle.com's avatar
      No commit message · 9551d7b6
      mysql-builder@oracle.com authored
      No commit message
      9551d7b6
  22. 16 May, 2012 3 commits
    • Annamalai Gurusami's avatar
      Bug #13943231: ALTER TABLE AFTER DISCARD MAY CRASH THE SERVER · 8ce4d100
      Annamalai Gurusami authored
      The following scenario crashes our mysql server:
      
      1.  set global innodb_file_per_table=1;
      2.  create table t1(c1 int) engine=innodb;
      3.  alter table t1 discard tablespace;
      4.  alter table t1 add unique index(c1);
      
      Step 4 crashes the server.  This patch introduces a check on discarded
      tablespace to avoid the crash.
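      
      A plausible shape for the added check (ibd_file_missing is
      InnoDB's flag for a discarded or missing tablespace; its exact
      placement in the ALTER path is an assumption):
      
        if (prebuilt->table->ibd_file_missing)
          DBUG_RETURN(HA_ERR_NO_SUCH_TABLE);  /* tablespace discarded */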
      
      rb://1041 approved by Marko Makela
      8ce4d100
    • Venkata Sidagam's avatar
      Bug #13955256: KEYCACHE CRASHES, CORRUPTIONS/HANGS WITH, · 4ff100e6
      Venkata Sidagam authored
                     FULLTEXT INDEX AND CONCURRENT DML.
      
      Problem Statement:
      ------------------
      1) Create a table with FT index.
      2) Enable concurrent inserts.
      3) In multiple threads, repeatedly do the operations below:
         a) truncate table
         b) insert into table ....
         c) select ... match .. against .. in non-boolean/boolean mode
      
      After some time, two different assertion core dumps could be
      observed:
      
      Analysis:
      --------
      1) Assert core dump at key_read_cache():
      Two select threads operate in parallel on the same key root
      block. The first select thread's block->status is set to
      BLOCK_ERROR because my_pread() in read_block() returns '0'.
      Truncating the table made the index file size 1024, and pread
      was asked to read a block of 1024 bytes from offset 1024, which
      it cannot do since that is the end of the file; it returns '0'
      and sets "my_errno= HA_ERR_FILE_TOO_SHORT" (key_file_length and
      key_root[0] are the same, i.e. 1024). Since the block status is
      BLOCK_ERROR, the first select thread enters free_block(), sets
      the status to BLOCK_REASSIGNED, and waits on the condition
      mutex in wait_on_readers(). The other select thread works on
      the same block, sees the status BLOCK_ERROR, enters
      free_block(), checks for BLOCK_REASSIGNED, and asserts.
      
      2) Assert core dump at key_write_cache():
      One select thread and one insert thread. The select thread
      unlocks 'keycache->cache_lock', which allows other threads to
      continue; it gets a pread() return value of '0' (see the
      explanation above) and then waits to reacquire the
      'keycache->cache_lock' mutex.
      The insert thread requests the block, which is assigned from
      the hash list; it sets the page status to 'PAGE_WAIT_TO_BE_READ'
      and goes into read_block(), waiting in the queue since other
      threads are performing reads on the same block.
      The select thread that was waiting for the 'keycache->cache_lock'
      mutex in read_block() continues after getting the my_pread()
      value of '0', sets the block status to BLOCK_ERROR, and goes
      into free_block() and then wait_for_readers().
      Now the insert thread wakes up, continues, finds block->status
      is not BLOCK_READ, and asserts.
      
      Fix:
      ---
      In the full-text code, multiple readers of the index file were
      not guarded against concurrent inserts. Hence the code below
      was added in _ft2_search() and walk_and_match().
      
      To lock the key_root, the following is used in _ft2_search():
      
        if (info->s->concurrent_insert)
          mysql_rwlock_rdlock(&share->key_root_lock[0]);
      
      and to unlock it:
      
        if (info->s->concurrent_insert)
          mysql_rwlock_unlock(&share->key_root_lock[0]);
      4ff100e6
    • Annamalai Gurusami's avatar
      Bug #12752572 61579: REPLICATION FAILURE WHILE · f23215ee
      Annamalai Gurusami authored
      INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER
      
      When an insert stmt like "insert into t values (1),(2),(3)" is
      executed, the autoincrement values assigned to these three rows are
      expected to be contiguous.  In the given lock mode
      (innodb_autoinc_lock_mode=1), the auto inc lock will be released
      before the end of the statement.  So to make the autoincrement
      contiguous for a given statement, we need to reserve the auto inc
      values at the beginning of the statement.  
      
      rb://1074 approved by Alexander Nozdrin
      f23215ee
  23. 15 May, 2012 4 commits
  24. 10 May, 2012 1 commit
    • Annamalai Gurusami's avatar
      Bug #14007649 65111: INNODB SOMETIMES FAILS TO UPDATE ROWS INSERTED · b76a59f5
      Annamalai Gurusami authored
      BY A CONCURRENT TRANSACTIO
      
      The member function QUICK_RANGE_SELECT::init_ror_merged_scan()
      performs a table handler clone. InnoDB did not provide a clone
      operation: there was no ha_innobase::clone(), and
      handler::clone() does not take care of
      ha_innobase->prebuilt->select_lock_type. Because of this, for
      one index we did a locking read while for the other index we
      did a non-locking (consistent) read. The patch introduces the
      ha_innobase::clone() member function, implemented similarly to
      ha_myisam::clone(): it calls the base class handler::clone()
      and then does any additional operations required, setting
      ha_innobase->prebuilt->select_lock_type correctly, as sketched
      below.
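      
      A sketch of the new member function (assuming the 5.5
      handler::clone() signature; prebuilt and select_lock_type are
      the fields named above):
      
        handler *ha_innobase::clone(const char *name, MEM_ROOT *mem_root)
        {
          ha_innobase *new_handler=
            static_cast<ha_innobase*>(handler::clone(name, mem_root));
          if (new_handler != NULL)
            new_handler->prebuilt->select_lock_type=
              prebuilt->select_lock_type;
          return new_handler;
        }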
      
      rb://1060 approved by Marko
      b76a59f5
  25. 08 May, 2012 1 commit
  26. 07 May, 2012 1 commit
    • Venkata Sidagam's avatar
      Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY · 14aa2c02
      Venkata Sidagam authored
                           CAUSES RESTORE PROBLEM
      Problem Statement:
      ------------------
      mysqldump does not emit dump statements for the general_log and
      slow_log tables; that is because of the fix for Bug#26121.
      Hence, after dropping the mysql database and applying the dump
      with logging enabled, "'general_log' table not found" errors
      are logged in the server log file.
      
      Analysis:
      ---------
      As part of the fix for Bug#26121, we skipped dumping the
      general_log and slow_log tables, because the data dump of those
      tables takes locks, which is not allowed for log tables.
      
      Fix:
      ----
      Instead of taking both the metadata and the data dump for those
      tables, we take only the metadata dump, which does not need
      locks. The algorithm before and after the fix:
      
      Design before the fix:
      1) The mysql database has tables like db, event, ... general_log,
         ... slow_log ...
      2) Skip general_log and slow_log while preparing the tables list.
      3) Take the TL_READ lock on the tables present in the table
         list and do 'show create table'.
      4) Release the lock.
      
      Design with the fix:
      1) The mysql database has tables like db, event, ... general_log,
         ... slow_log ...
      2) Skip general_log and slow_log while preparing the tables list.
      3) Explicitly call 'show create table' for general_log and
         slow_log.
      4) Take the TL_READ lock on the tables present in the table
         list and do 'show create table'.
      5) Release the lock.
      
      While taking the metadata dump for general_log and slow_log,
      "CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
      This is because we skip "DROP TABLE" for those tables:
      "DROP TABLE" fails for these tables when logging is enabled.
      The customer applies the dump with logging enabled, so if the
      dump had "DROP TABLE" it would fail. Hence the "DROP TABLE"
      stmts for those tables were removed.
      
      After the fix we may initially observe "Table 'mysql.general_log'
      doesn't exist" errors; in the customer scenario the mysql
      database is dropped with logging enabled, so those errors are
      expected. Once the dump taken before the "drop database mysql"
      is applied, the errors will not be there.
      14aa2c02
  27. 27 Apr, 2012 1 commit
  28. 26 Apr, 2012 1 commit
  29. 23 Apr, 2012 1 commit