1. 10 May, 2013 2 commits
  2. 07 May, 2013 2 commits
    • Chaithra Gopalareddy's avatar
      Bug #16119355: PREPARED STATEMENT: READ OF FREED MEMORY WITH · 266dd9c0
      Chaithra Gopalareddy authored
                                 STRING CONVERSION FUNCTIONS
                  
      Problem:
      While executing the prepared statement, the user variable is
      set to point to memory that is freed at the end of
      execution.
      If the statement is executed again, Valgrind reports an
      error when this pointer is accessed.
                  
      Analysis:
                  
      1. First time when Item_func_set_user_var::check is called,
      memory is allocated for "value" to store the result.
      (In the call to copy_if_not_alloced).
      2. While sending the result, Item_func_set_user_var::check
      is called again. But this time it is called with
      "use_result_field" set to true.
      As a result, we call result_field->val_str(&value).
      3. Here memory allocated for "value" gets freed. And "value"
      gets set to "result_field", with "str_length" being that of
      result_field's.
      4. In the call to JOIN::cleanup, result_field's memory gets
      freed as this is allocated in a chunk as part of the
      temporary table which is needed to execute the query.
      5. The next time the same statement is executed,
      "value" points to memory which has already been freed.
      A Valgrind error occurs because "str_length" is positive
      (set at Step 3).
                  
      Note that the user variable list is stored as part of the Lex object
      in set_var_list, hence the persistence across executions.
            
      Solution:
      The patch for Bug#11764371, fixed in mysql-5.6+, fixes this problem
      as well, so the same is backported here.
            
      In the solution for Bug#11764371, we create another user_var
      object and repoint it to the temporary table's field. As a result,
      when the allocated buffer would be deleted in Step 3, the cloned
      object does not own the buffer, so no deletion happens.
      Thus at Step 5, when we execute the statement a second time, the
      original object is used and, since nothing was freed, Valgrind
      does not complain about a dangling pointer.
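
      A minimal repro sketch of the pattern (hypothetical; table, column
      and variable names are illustrative and not taken from the original
      report): a string-conversion result assigned to a user variable
      inside a prepared statement that is executed twice.

        CREATE TABLE t1 (a VARCHAR(10)) CHARSET latin1;
        INSERT INTO t1 VALUES ('abc');
        PREPARE stmt FROM "SELECT @v := CONVERT(a USING utf8) FROM t1";
        EXECUTE stmt;   -- first execution allocates the buffer behind @v
        EXECUTE stmt;   -- second execution may read memory freed with the temp table
        DEALLOCATE PREPARE stmt;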
      
      
      sql/item_func.h:
        Add constructors.
      sql/sql_select.cc:
        Change user variable assignment functions to read from fields after
        tables have been unlocked.
      266dd9c0
    • Sergey Glukhov's avatar
      Bug#16095534 CRASH: PREPARED STATEMENT CRASHES IN ITEM_BOOL_FUNC2::FIX_LENGTH_AND_DEC · 2ec9dcf6
      Sergey Glukhov authored
      The problem happened due to a broken left expression in the Item_in_optimizer object.
      In the bug scenario the left expression is a runtime-created Item_outer_ref item which
      is deleted at the end of the statement, so one of the Item_in_optimizer arguments
      becomes invalid on re-execution. The fix is to use real_item() instead of the original
      left expression. Note: it feels a bit weird that after preparing, the field is
      directly part of the generated Item_func_eq, whereas in execution it is replaced
      with an Item_outer_ref wrapper object.
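
      A hypothetical shape that exercises this path (purely illustrative
      names, not the query from the original report): a prepared statement
      whose IN predicate has an outer reference as its left expression,
      executed more than once.

        PREPARE stmt FROM
          "SELECT 1 FROM t1
           WHERE EXISTS (SELECT 1 FROM t2 GROUP BY t2.a
                         HAVING t1.a IN (SELECT b FROM t3))";
        EXECUTE stmt;
        EXECUTE stmt;   -- before the fix, re-execution used the freed Item_outer_ref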
      
      
      sql/item_subselect.cc:
        Use left_expr->real_item() instead of the original left expression,
        because left_expr can be a runtime-created Ref item which is deleted
        at the end of the statement. Thus one of the 'substitution' arguments
        can be broken in the PS case.
      2ec9dcf6
  3. 06 May, 2013 2 commits
    • Annamalai Gurusami's avatar
      Bug #16722314 FOREIGN KEY ID MODIFIED DURING EXPORT · bf7325bb
      Annamalai Gurusami authored
      Bug #16754901 PARS_INFO_FREE NOT CALLED IN DICT_CREATE_ADD_FOREIGN_TO_DICTIONARY
      
      Problem:
      
      There are two situations here: the constraint name is either explicitly
      given by the user or automatically generated by InnoDB.  A generated
      constraint name is formed by adding the table name as a prefix, and
      table names are stored internally in my_charset_filename.  A constraint
      name explicitly given by the user is stored in UTF-8 as given.  So in
      some situations the constraint name is in UTF-8 and in others it is in
      my_charset_filename format.  Hence this problem.
      
      Solution:
      
      Always store the foreign key constraint name in UTF-8 even when
      automatically generated.
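
      A sketch of the two naming situations (a hypothetical illustration;
      table and constraint names are not from the original report):

        CREATE TABLE parent (a INT PRIMARY KEY) ENGINE=InnoDB;
        -- Explicit name: stored in UTF-8 exactly as given by the user.
        CREATE TABLE child1 (a INT, CONSTRAINT my_fk FOREIGN KEY (a)
                             REFERENCES parent (a)) ENGINE=InnoDB;
        -- Generated name: prefixed with the table name (e.g. child2_ibfk_1).
        -- With a non-ASCII table name, the prefix was taken in its
        -- my_charset_filename form before the fix, and in UTF-8 after it.
        CREATE TABLE child2 (a INT, FOREIGN KEY (a)
                             REFERENCES parent (a)) ENGINE=InnoDB;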
      
      Bug #16754901 PARS_INFO_FREE NOT CALLED IN DICT_CREATE_ADD_FOREIGN_TO_DICTIONARY
      
      Problem:
      
      There was a memory leak in the function dict_create_add_foreign_to_dictionary().
      The allocated pars_info_t object is not freed in the error code path.
      
      Solution:
      
      Allocate the pars_info_t object after the error checking.
      
      rb#2368 in review
      
      bf7325bb
    • unknown's avatar
      Raise version number after cloning 5.1.70 · 1a552530
      unknown authored
      1a552530
  4. 30 Apr, 2013 2 commits
    • unknown's avatar
      Bug#16405422 - RECOVERY FAILURE, ASSERT !RECV_NO_LOG_WRITE · 92989fde
      unknown authored
      Eliminate a race condition over recv_sys->n_addrs which might result in database corruption
      during recovery, without any recovery error being reported.
      
      recv_recover_page_func(): move the code segment that decrements recv_sys->n_addrs
        to the end of the function, after the call to mtr_commit()
      
      rb://2282 approved by Inaam
      92989fde
    • Neeraj Bisht's avatar
      BUG#16222245 - CRASH WITH EXPLAIN FOR A QUERY WITH LOOSE SCAN FOR · 0c9c76e9
      Neeraj Bisht authored
      GROUP BY, MYISAM 
      
      Problem:
      A query that uses the loose index scan optimization and
      contains MIN() causes a segmentation fault when the table row length
      is less than the key_length.
      
      Analysis:
      
      While using loose index scan for MIN(), we call key_copy() to copy
      the key data from the record.
      This function uses a temporary record buffer to store the key data
      from the record buffer. But when the key length is greater
      than the buffer length, this causes a segmentation fault.
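
      The query shape that drives this code path looks roughly like the
      following (a sketch; table and index names are illustrative, and the
      crash additionally required the row length to be shorter than the
      key length):

        CREATE TABLE t1 (a CHAR(10), b CHAR(10), KEY k1 (a, b)) ENGINE=MyISAM;
        -- Loose index scan over k1 evaluates MIN(b) per group:
        EXPLAIN SELECT a, MIN(b) FROM t1 GROUP BY a;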
      
      
      Solution:
      Give a properly sized buffer to store the key.
      
      
      sql/opt_range.cc:
        We can't use the record buffer to store key data, so give a properly sized buffer to store the key.
      0c9c76e9
  5. 24 Apr, 2013 2 commits
  6. 22 Apr, 2013 1 commit
  7. 20 Apr, 2013 1 commit
    • Neeraj Bisht's avatar
      Bug#16073689 : CRASH IN ITEM_FUNC_MATCH::INIT_SEARCH · 89b1b508
      Neeraj Bisht authored
      Problem:
      A query like
      select 1 from .. order by match .. against ...;
      causes a debug assert failure.
      
      Analysis:
      In union-type queries like
      
      (select * from .. order by a) order by b;
      or
      (select * from .. order by a) union (select * from .. order by b);
      
      we skip resolving "order by a" in the first query, and both
      "order by a" and "order by b" in the second query.
      
      
      This means that when such an order by contains an Item_func_match item,
      we skip resolving it.
      But we maintain a ft_func_list, and at optimization time, when we
      perform the FULLTEXT search before all regular searches on the basis of
      that list, we call Item_func_match::init_search(), which causes the debug
      assert because the item is not resolved.
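
      A hypothetical minimal shape (names illustrative, not the original
      test case): the MATCH item sits in the ORDER BY of a parenthesized
      query block, whose ORDER BY resolution is skipped, yet the item is
      still placed on ft_func_list:

        (SELECT 1 FROM t1 ORDER BY MATCH (a) AGAINST ('x')) ORDER BY 1;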
      
      
      Solution:
      We skip execution if the item is not fixed, and we do not fix the
      index (Item_func_match::fix_index()) for items on which
      Item_func_match::fix_fields() has not been called, so that later
      changes can rely on the item having been fixed.
      
      
      sql/item_func.cc:
        Skip execution if the item is not resolved.
      89b1b508
  8. 16 Apr, 2013 1 commit
  9. 14 Apr, 2013 1 commit
    • Chaithra Gopalareddy's avatar
      Bug#16347426:ASSERTION FAILED: (SELECT_INSERT && · 2d836633
      Chaithra Gopalareddy authored
                   !TABLES->NEXT_NAME_RESOLUTION_TABLE) || !TAB
            
      Problem:
      The context info of the select query gets corrupted when a
      group_concat with its own order by is present in the order by
      clause of the select query. As a result, the server crashes with
      an assert.
            
      Analysis:
      While parsing the order by for group_concat, it is presumed that
      it always appears before the actual order by of the
      select query.
      As a result, the parser uses select->order_list to populate the
      order by items of group_concat, creates a select->gorder_list
      into which select->order_list is copied, and once this is done
      empties select->order_list.
      In the case presented in the bug page, the select's order by is
      already parsed when group_concat's order by is encountered, so the
      parser presumes that it is a second order by of the select query
      and creates a fake_lex_unit, which results in the change of
      context info.
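
      A hypothetical query shape that triggers this (names illustrative):
      the select's own ORDER BY is being parsed first, and the parser then
      meets group_concat's ORDER BY inside it:

        SELECT a FROM t1 GROUP BY a
        ORDER BY GROUP_CONCAT(b ORDER BY b DESC);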
            
      Solution:
      Make group_concat's order by parsing independent of the select
      query's order by.
      
      
      sql/item_sum.cc:
        Change the argument, as select->gorder_list is not a pointer anymore.
      sql/item_sum.h:
        Change the argument, as select->gorder_list is not a pointer anymore.
      sql/mysql_priv.h:
        Parsing for group_concat's order by is made independent.
        As a result, add_order_to_list cannot be used anymore.
      sql/sql_lex.cc:
        Parsing for group_concat's order by is made independent.
        As a result, add_order_to_list cannot be used anymore.
      sql/sql_lex.h:
        Parsing for group_concat's order by is made independent.
        As a result, add_order_to_list cannot be used anymore.
      sql/sql_yacc.yy:
        Make group_concat's order by parsing independent of the select
        query's order by.
      2d836633
  10. 09 Apr, 2013 1 commit
  11. 08 Apr, 2013 2 commits
  12. 02 Apr, 2013 2 commits
  13. 01 Apr, 2013 1 commit
  14. 31 Mar, 2013 1 commit
    • Chaithra Gopalareddy's avatar
      · cfb3bbac
      Chaithra Gopalareddy authored
      Bug #16347343 : CRASH, GROUP_CONCAT, DERIVED TABLES
            
      Problem:
      A select query inside a group_concat function that has an
      outer reference results in a crash.
            
      Analysis:
      In the function Item_func_group_concat::add, we do not check whether
      the return value of get_tmp_table_field can be NULL for
      a non-const item. This can happen for a query with an
      outer reference.
      While resolving the outer reference in the query present
      inside the group_concat function, we set "const_item_cache"
      to false. As a result, the call to const_item() from
      Item_func_group_concat::add returns false, and we go on to
      access the field without a NULL check, resulting in the crash.
      get_tmp_table_field does not return NULL for items of type
      Item_field, Item_result_field and Item_ref.
      For all other items, it returns NULL.
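
      One hypothetical shape that exercises this (illustrative only, not
      the original test case): group_concat over a scalar subquery whose
      select list is an outer reference to a derived table's column:

        SELECT GROUP_CONCAT((SELECT dt.a))
        FROM (SELECT 1 AS a) AS dt;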
           
      Solution:
      Check for the return value of get_tmp_table_field before we 
      access field contents.
      
      sql/item_sum.cc:
        Check for the return value of get_tmp_table_field before accessing
        field contents.
      cfb3bbac
  15. 29 Mar, 2013 2 commits
  16. 28 Mar, 2013 4 commits
    • Georgi Kodinov's avatar
      Addendum #1 to the fix for bug #16451878 : GEOMETRY QUERY CRASHES SERVER · e927bda6
      Georgi Kodinov authored
      Fixed the get_data_size() methods for multi-point features to check properly for the end
      of their respective data arrays.
      Extended the point checking function to take an optional third argument so that cases where
      there's additional data in each array element (besides the point data itself) can be
      covered by the helper function.
      Fixed the 3 cases where such an offset was present to use the proper checking helper
      function.
      Test cases added.
      Fixed review comments.
      e927bda6
    • Nisha Gopalakrishnan's avatar
      BUG#11753852: IF() VALUES ARE EVALUATED DIFFERENTLY IN A · e85c90b9
      Nisha Gopalakrishnan authored
                    REGULAR SQL VS PREPARED STATEMENT
      
      Analysis:
      ---------
      
      When passing user variables as parameters to the
      prepared statements, the IF() function evaluation
      turns out to be incorrect.
      
      Consider the example:
      
      SET @var1='0.038687';
      SELECT @var1 , IF( @var1 = 0 , 1 ,@var1 ) AS sqlif ;
      +----------+----------+
      | @var1    | sqlif    |
      +----------+----------+
      | 0.038687 | 0.038687 |
      +----------+----------+
      
      Executing a prepared statement where the parameters are
      supplied:
      
      PREPARE fail_stmt FROM "SELECT ? ,
      IF( ? = 0 , 1 , ? ) AS ps_if_fail" ;
      EXECUTE fail_stmt USING @var1 ,@var1 , @var1 ;
      +----------+------------+
      | ?        | ps_if_fail |
      +----------+------------+
      | 0.038687 | 1          |
      +----------+------------+
      1 row in set (0.00 sec)
      
      In the regular statement, or while executing the prepared
      statement without passing parameters, the decimal
      precision is set for the user variable of string type.
      The comparison function used for evaluation takes
      the precision into account while comparing the values.
      
      But while executing the prepared statement with the
      parameters supplied, the decimal precision was not
      set. Thus a different comparison function was chosen,
      one which looked at the absolute values for comparison.
      
      Fix:
      ----
      
      The fix is to set the 'decimals' field of Item_param to the
      default value, which is the maximum number of
      decimals (NOT_FIXED_DEC). This is what is set for cases where
      strings are converted to numeric form within certain
      functions. Thus the value is not rounded off during
      comparison, ensuring correct evaluation.
      e85c90b9
    • Sujatha Sivakumar's avatar
      Bug#14324766:PARTIALLY WRITTEN INSERT STATEMENT IN BINLOG · d054027c
      Sujatha Sivakumar authored
      NO ERRORS REPORTED
            
      Problem:
      =======
      Errors from my_b_fill are ignored. The MYSQL_BIN_LOG::write_cache
      code assumes that a return value of 0 from my_b_fill always means
      end-of-cache, but that is incorrect: it can also mean an error,
      and that error is ignored. Other callers of my_b_fill don't
      check for errors either: my_b_copy_to_file, and possibly my_b_gets.
            
      Fix:
      ===
      An error handler is already present to check the "cache"
      error that is reported during the "MYSQL_BIN_LOG::write_cache"
      call. Hence error handlers are added for "my_b_copy_to_file"
      and "my_b_gets".
      During a my_b_fill() call, when the cache read fails,
      info->error= -1 is set. Hence a check of "info->error"
      is added in the above two callers upon their return.
      
      mysys/mf_iocache2.c:
        Added a check for "cache->error" and simulation of cache read failure
      mysys/my_read.c:
        Simulation of read failure
      sql/log_event.cc:
        Added debug simulation
      sql/sql_repl.cc:
        Added a check for cache error
      d054027c
    • Annamalai Gurusami's avatar
      Bug #16244691 SERVER GONE AWAY ERROR OCCURS DEPENDING ON THE NUMBER OF · f4b97d10
      Annamalai Gurusami authored
      TABLE/KEY RELATIONS
      
      Problem:
      
      When there are many tables, linked together through the foreign key
      constraints, then loading one table will recursively open other tables.  This
      can sometimes lead to thread stack overflow.  In such situations the server
      will exit.
      
      I see the stack overflow problem when the thread_stack is 196608 (the default
      value for 32-bit systems).  I don't see the problem when the thread_stack is
      set to 262144 (the default value for 64-bit systems).
      
      Solution:
      
      Currently, in InnoDB, there is a macro DICT_FK_MAX_RECURSIVE_LOAD which defines
      the maximum number of tables that will be loaded recursively because of foreign
      key relations.  This is currently set to 250.  We can reduce this number to 33
      (anything more than 33 does not solve the problem for the default value).  We
      can keep it small enough so that thread stack overflow does not happen for the
      default values.  Reducing the DICT_FK_MAX_RECURSIVE_LOAD will not affect the
      functionality of InnoDB.  The tables will eventually be loaded. 
      
      rb#2058 approved by Marko
      
      
      f4b97d10
  17. 27 Mar, 2013 3 commits
    • Georgi Kodinov's avatar
      Bug #16451878: GEOMETRY QUERY CRASHES SERVER · e7c48834
      Georgi Kodinov authored
      The GIS WKB reader was checking for the presence of
      enough data by first multiplying the element count read from
      the data (where the multiplication could overflow) and only then
      comparing the product to the number of bytes available.
      The overflow can effectively turn off the check: for example, in
      32-bit arithmetic a bogus count of 2^28 points times a 16-byte
      point size wraps around to 0, which trivially passes the
      "enough bytes" comparison.
      Fixed by:
      1. Introducing a new function that does division only, so
      no overflow is possible.
      2. Using the proper macros and parenthesizing them.
      3. Doing an in-line division check in the only place where
      the boundary check is done over a data structure other
      than a dense points array.
      e7c48834
    • Nuno Carvalho's avatar
      BUG#16541422: LOG-SLAVE-UPDATES + REPLICATE-WILD-IGNORE-TABLE FAILS FOR USER VARIABLES · 84bd6fec
      Nuno Carvalho authored
      Fixed possible uninitialized variable.
      84bd6fec
    • Sujatha Sivakumar's avatar
      Bug#11829838: ALTER TABLE NOT BINLOGGED WITH · 0e763f4d
      Sujatha Sivakumar authored
      --BINLOG-IGNORE-DB AND FULLY QUALIFIED TABLE
            
      Problem:
      =======
      An ALTER TABLE statement is not written to the binlog if the server
      is started with "--binlog-ignore-db <some database>" and a fully
      qualified table name is used in the ALTER TABLE statement,
      altering a table outside the current database context.
            
      Analysis:
      ========
      The above mentioned problem affects not only "ALTER TABLE"
      statements but all kinds of statements. Once the
      current default database becomes "NULL", none of the
      statements are binlogged.
            
      The current behaviour is such that if the user has specified
      restrictions on which databases need to be replicated and the
      default db is not specified, then we do not replicate.
      This means that "NULL" is considered to be equivalent to
      everything (default db = NULL implies: ignore, do not log the
      statement).
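
      A hypothetical illustration of the scenario (names are illustrative;
      assume the server was started with --binlog-ignore-db=db2): with no
      default database selected, the fully qualified statement was skipped
      from the binlog before the fix.

        -- connect without selecting a default database (no USE), then:
        ALTER TABLE db1.t1 ADD COLUMN c INT;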
            
      Fix:
      ===
      "NULL" should not be considered as equivalent to everything.
      Since the filtering criteria is not equal to "NULL" the
      statement should be logged into binlog.
      
      mysql-test/suite/rpl/r/rpl_loaddata_m.result:
        Earlier, when the default database was "NULL", DROP TABLE
        was not getting logged. After this fix it is logged,
        and the DROP fails on the slave because the table creation
        was skipped by the master due to --binlog-ignore-db=test.
      mysql-test/suite/rpl/t/rpl_loaddata_m.test:
        Earlier, when the default database was "NULL", DROP TABLE
        was not getting logged. After this fix it is logged,
        and the DROP fails on the slave because the table creation
        was skipped by the master due to --binlog-ignore-db=test.
      sql/rpl_filter.cc:
        Replaced DBUG_RETURN(0) with DBUG_RETURN(1).
      0e763f4d
  18. 26 Mar, 2013 3 commits
    • Andrei Elkin's avatar
      merge from 5.1 repo. · 1ea6eb14
      Andrei Elkin authored
      1ea6eb14
    • Andrei Elkin's avatar
      Bug#16541422 LOG-SLAVE-UPDATES + REPLICATE-WILD-IGNORE-TABLE FAILS FOR USER VARIABLES · 9eb64ec5
      Andrei Elkin authored
      When logging the first Query that refers to a user var, the slave failed to log the user var.
      It appears that at execution of a Uservar event the slave applier
      regarded the variable as already logged.
      The reason for the misjudgement is a coincidence of query ids: the one that the thread
      holds at Uservar execution and the one that the thread sees when applying the Query.
      While the two are naturally different in the regular execution branch (as the two computational
      events are executed as separate events), in the deferred applying case the Uservar execution
      effectively belongs to its Query's processing.
      
      Fixed by storing the query id from Uservar parsing time (where the decision to defer is taken)
      and temporarily substituting it for the actual query id at Uservar execution time
      (along with its query).
      Such manipulation mimics the behaviour of the regular applying branch.
      
      sql/log_event.cc:
        Store the Uservar parsing-time query id in a new member of the event,
        to temporarily substitute it for the actual query id at the Uservar
        execution time.
      sql/log_event.h:
        Storage for keeping the query id in the User-var instance is added.
      9eb64ec5
    • Tor Didriksen's avatar
      Bug#62856 Check for "stack overrun" doesn't work with gcc-4.6, server crashes · ecf834b9
      Tor Didriksen authored
      Bug#13243248 CHECK FOR "STACK OVERRUN" DOESN'T WORK WITH GCC-4.6, SERVER CRASHES
      
      The existing check for stack direction may give wrong results
      for new versions of gcc at high optimization levels.
      
      Solution: Backport the stack-direction check from 5.5
      ecf834b9
  19. 22 Mar, 2013 2 commits
  20. 21 Mar, 2013 1 commit
    • Nirbhay Choubey's avatar
      Bug#12671635 HELP-TABLEFORMAT DOESN'T MATCH HELP-FILES · 04caf341
      Nirbhay Choubey authored
      The current size limit of the 'url' field of the help_topic
      table is no longer sufficient for the contents of
      fill_help_tables-5.1.sql, so loading the contents
      into the table might result in a warning (or an error with
      stricter modes).
      
      Updated the type of the 'url' field of the help_topic as well
      as help_category tables from char(128) to text.
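
      The schema change amounts to roughly the following (a sketch; the
      actual fix edits the system-table creation and upgrade scripts
      rather than issuing ALTERs):

        ALTER TABLE mysql.help_topic    MODIFY url TEXT NOT NULL;
        ALTER TABLE mysql.help_category MODIFY url TEXT NOT NULL;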
      04caf341
  21. 20 Mar, 2013 1 commit
  22. 19 Mar, 2013 2 commits
  23. 18 Mar, 2013 1 commit
    • Sujatha Sivakumar's avatar
      Bug#14771299 OUT-OF-BOUND READS WRITE IN MYSQLBINLOG · b95d5cda
      Sujatha Sivakumar authored
      Problem:
      =======
      Found using AddressSanitizer testing.
      
      The mysqlbinlog utility may perform out-of-bounds heap
      buffer reads, and thus exhibit undefined behaviour, when processing
      RBR events in the old (pre-5.1 GA) format.
      
      The following code in process_event() would only be correct
      if Rows_log_event were the base class for the
      Write,Update,Delete_rows_log_event_old classes:
      
          case PRE_GA_WRITE_ROWS_EVENT:
          case PRE_GA_DELETE_ROWS_EVENT:
          case PRE_GA_UPDATE_ROWS_EVENT:
      ...
              Rows_log_event *e= (Rows_log_event*) ev;
              Table_map_log_event *ignored_map=
                print_event_info->m_table_map_ignored.get_table(e->get_table_id());
      ...
              if (e->get_flags(Rows_log_event::STMT_END_F))
              {
      ...
              }
      
      However, Rows_log_event is only the base class for the
      Write,Update,Delete_rows_log_event family of classes, not
      for their *_old counterparts. So the above typecasts are
      incorrect for the old-format RBR events and may result (and
      do result, according to AddressSanitizer reports) in reading
      memory outside of the previously allocated heap buffer.
      
      Fix:
      ===
      The above mentioned invalid type cast has been replaced with
      the appropriate old counterpart.
      
      Note: the above mentioned issue is present only in mysql-5.1 and
      5.5. It is fixed in mysql-5.6 and above as part of
      Bug#55790. Hence a few of the relevant changes of Bug#55790 are
      being back-ported to fix the current issue.
      
      client/mysqlbinlog.cc:
        The above mentioned invalid type cast, which used a new event
        object to read old events, has been replaced with
        the appropriate old counterpart.
        
        Note: the above mentioned issue is present only in mysql-5.1 and
        5.5. It is fixed in mysql-5.6 and above as part of
        Bug#55790. Hence a few of the relevant changes of Bug#55790 are
        being back-ported to fix the current issue.
      b95d5cda