1. 31 Jan, 2013 3 commits
    • Bug #11827369: ASSERTION FAILED: !THD->LEX->CONTEXT_ANALYSIS_ONLY · 2993c299
      Gleb Shchepa authored
      Some queries with the "SELECT ... FROM DUAL" nested subqueries
      failed with an assertion on debug builds.
      Non-debug builds were not affected.
      
      There were a few different issues with similar assertion
      failures on different queries:
      
      1. The first problem was related to the incomplete propagation
      of the "non-constant" item status from underlying subquery
      items to the outer item tree: in some cases non-constants were
      interpreted as constants and evaluated at the preparation stage
      (val_int() calls within fix_fields() etc).
      
      Specifically, the default implementation of Item_ref::const_item()
      inherited from the Item parent class didn't take into account the
      "const_item" status of the referenced item tree -- it used the
      insufficient "used_tables() == 0" check instead. This worked in
      most cases since our "non-constant" functions like RAND() and
      SLEEP() set the RAND_TABLE_BIT in the used table map, so Item_ref
      still sees them as non-constant. However, a "SELECT ... FROM DUAL"
      subquery may have an empty map of used tables, while at the same
      time subqueries are never "constant" at the context analysis stage
      (preparation, view creation etc). So the non-constantness of such
      subqueries was missed.
      
      Fix: the Item_ref::const_item() function has been overridden to
      take into account both the (*ref)->const_item() status and the
      tricky Item_ref::used_tables() return values, since the
      (*ref)->const_item() call alone is not enough there (a simplified
      model is sketched at the end of this message).
      
      2. In some cases instead of the const_item() call we check a
      value of the Item::with_subselect field to recognize items
      with nested subqueries. However, the Item_ref class didn't
      propagate this value from the referenced item tree.
      
      Fix: Item::has_subquery() and Item_ref::has_subquery()
      functions have been backported from 5.6. All direct
      references to the with_subselect fields of nested items have
      been replaced with the has_subquery() function call.
      
      3. The Item_func_regex class didn't propagate with_subselect
      either, since it overrides the Item_func::fix_fields()
      function with an insufficient fix_fields() implementation.
      
      Fix: the Item_func_regex::fix_fields() function has been
      modified to gather "constant" statuses from inner items.
      
      4. The Item_func_isnull::update_used_tables() function has
      a special branch for the underlying item where the maybe_null
      value is false: in this case it marks the Item_func_isnull
      as a "const_item" and sets the cached_value to false.
      However, Item_func_isnull::val_int() was not in sync with
      update_used_tables(): it took into account neither
      const_item_cache nor cached_value for the case of the
      "args[0]->maybe_null == false" optimization.
      Since such an Item_func_isnull has "const_item() == true",
      it's ok to call Item_func_isnull::val_int() etc from outer
      items at the preparation stage. In this case the server tried to
      call Item_func_isnull::args[0]->isnull(), and if the args[0]
      item contained a nested not-nullable subquery, it failed
      with an assertion.
      
      Fix: take the value of Item_func_isnull::const_item_cache into
      account in the val_int() function.
      
      5. The auxiliary Item_is_not_null_test class has a similar
      optimization in the update_used_tables() function as the
      Item_func_isnull class has, and the same issue in the val_int()
      function.
      In addition to that the Item_is_not_null_test::update_used_tables()
      doesn't update the const_item_cache value, so the "maybe_null"
      optimization is useless there. Thus, we missed some optimizations
      of cases like these (before and after the fix):
        <  <is_not_null_test>(a),
        ---
        >  <cache>(<is_not_null_test>(a)),
      or
        < having (<is_not_null_test>(a) and <is_not_null_test>(a))
        ---
        > having 1
      etc.
      
      Fix: update Item_is_not_null_test::const_item_cache in
      update_used_tables() and take it into account in val_int().
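      
      The const-ness propagation described in items 1 and 2 can be
      modelled with a small, self-contained C++ sketch; the class and
      member names below are simplified stand-ins for illustration,
      not the server's actual declarations.
      
        #include <cstdint>
        #include <iostream>
        
        using table_map = std::uint64_t;
        const table_map RAND_TABLE_BIT = 1ULL << 63;
        
        struct Item {
          virtual ~Item() {}
          virtual table_map used_tables() const { return 0; }
          // Pre-fix default rule: "uses no tables" was read as "constant".
          virtual bool const_item() const { return used_tables() == 0; }
        };
        
        // RAND()/SLEEP()-like item: it advertises non-constness through
        // RAND_TABLE_BIT, so even the old used_tables()-only check was fine.
        struct Item_func_rand_model : Item {
          table_map used_tables() const override { return RAND_TABLE_BIT; }
        };
        
        // "SELECT ... FROM DUAL" subquery stand-in: it references no tables,
        // yet must never be treated as constant during context analysis.
        struct Item_subselect_model : Item {
          bool const_item() const override { return false; }
        };
        
        // Item_ref stand-in after the fix: const-ness now also depends on the
        // referenced item, not only on the used-tables map.
        struct Item_ref_model : Item {
          Item *ref;
          explicit Item_ref_model(Item *r) : ref(r) {}
          table_map used_tables() const override { return ref->used_tables(); }
          bool const_item() const override {
            // Pre-fix body was just: return used_tables() == 0;
            return ref->const_item() && used_tables() == 0;
          }
        };
        
        int main() {
          Item_subselect_model subquery;
          Item_func_rand_model rand_func;
          Item_ref_model ref_to_subquery(&subquery), ref_to_rand(&rand_func);
          std::cout << ref_to_subquery.const_item() << '\n';  // 0: no longer "constant"
          std::cout << ref_to_rand.const_item() << '\n';      // 0: was already correct
        }
      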
      2993c299
    • Bug #16220051 : INNODB_BUG12400341 FAILS ON VALGRIND WITH TOO MANY ACTIVE CONCURRENT TRANSACTION · 5656b9dd
      Yasufumi Kinoshita authored
      innodb_bug12400341.test is disabled for the Valgrind daily test.
      It might be affected by undo slots left over from the previous
      test, because of slower execution.
      5656b9dd
    • Bug#14096619: UNABLE TO RESTORE DATABASE DUMP · 082ac987
      Chaithra Gopalareddy authored
      Backport of Bug#13581962
      082ac987
  2. 30 Jan, 2013 2 commits
    • No commit message · 08b0d549
      mysql-builder@oracle.com authored
      No commit message
      08b0d549
    • BUG#1608883: KILLING A QUERY INSIDE INNODB CAUSES IT TO EVENTUALLY CRASH · 1853fd95
        WITH AN ASSERTION
      
        Correcting the build failure that was caused because of changes
        checked in to the below-mentioned revision.
        (Changes: DEBUG_SYNC_C should be disabled for innodb_plugin under
         the Windows environment. Note: only for innodb_plugin.)
      
        revno: 3915
        revision-id: krunal.bauskar@oracle.com-20130114051951-ang92lkirop37431
        parent: nisha.gopalakrishnan@oracle.com-20130112054337-gk5pmzf30d2imuw7
        committer: Krunal Bauskar krunal.bauskar@oracle.com
        branch nick: mysql-5.1
        timestamp: Mon 2013-01-14 10:49:51 +0530
      
      1853fd95
  3. 29 Jan, 2013 1 commit
  4. 28 Jan, 2013 2 commits
    • BUG#16200555: EMPTY NAME FOR USER VARIABLE IS ALLOWED AND BREAKS STATEMENT BINARY LOGGING · e174bf73
      Nuno Carvalho authored
      In a previous fix, user variables with a zero-length name were
      incorrectly considered as event corruption, despite being allowed
      by the server.
      
      Fix this wrong assumption by allowing user variables with
      zero-length names in the binary log again.
      e174bf73
    • Bug#16084594 USER_VAR ITEM IN 'LOAD FILE QUERY' WAS NOT · 534b65a4
      Venkatesh Duggirala authored
      PROPERLY QUOTED IN BINLOG FILE
      Problem: In a LOAD DATA query, user variables are allowed
      inside "Into_list" and "Set_list". The user variables used
      inside these two lists are not properly guarded with backticks
      while the server is writing into the binlog. Hence user variable
      names like a` cannot be used in this context.
      
      Fix: Properly quote these variables while
      writing into the binlog.
      534b65a4
  5. 24 Jan, 2013 2 commits
    • BUG#11908153 CRASH AND/OR VALGRIND ERRORS IN FIELD_BLOB::GET_KEY_IMAGE · 5674d559
      Venkata Sidagam authored
      Backporting the bug patch from 5.5 to 5.1.
      This fix is applicable to BUG#14362617 as well.
      5674d559
    • Bug #11752803 SERVER CRASHES IF MAX_CONNECTIONS DECREASED BELOW · d0181929
      Venkata Sidagam authored
                     CERTAIN LEVEL
            
      Problem description: mysqld crashes when we update the max_connections
      variable to a lesser value than the number of currently open connections.
            
      Analysis: The "alarm_queue.max_elements" size will be decided at the 
      server start time and it will get modified if we change max_connections 
      value. In the current scenario the value of "alarm_queue.max_elements" 
      is decremented when the max_connections is set to 2. When updating the  
      "alarm_queue.max_elements" value we are not updating "max_used_alarms" 
      value. Hence, instead of getting the warning "thr_alarm queue is full" 
      it is ending up in asserting the server at the time of inserting new 
      elements in the queue.
            
      Fix: the fix is to dynamically increase the size of the alarm_queue.
      In order to do that, queue_insert_safe() should be used instead of
      queue_insert().
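      
      A stand-alone C++ sketch of the idea follows; it does not use the
      real mysys QUEUE API, the struct below is only an illustrative
      model of the grow-instead-of-assert behaviour.
      
        #include <cassert>
        #include <cstddef>
        #include <vector>
        
        // Illustrative queue; not the real mysys QUEUE structure.
        struct AlarmQueue {
          std::vector<int> elements;
          std::size_t max_elements;
        
          explicit AlarmQueue(std::size_t cap) : max_elements(cap) {}
        
          // Pre-fix path: overflowing the fixed capacity asserts, which is
          // what the reported crash looked like.
          void insert(int alarm) {
            assert(elements.size() < max_elements && "thr_alarm queue is full");
            elements.push_back(alarm);
          }
        
          // Post-fix path, in the spirit of queue_insert_safe(): grow the
          // queue dynamically when it is full instead of asserting.
          void insert_safe(int alarm) {
            if (elements.size() >= max_elements)
              max_elements = max_elements ? max_elements * 2 : 1;
            elements.push_back(alarm);
          }
        };
        
        int main() {
          AlarmQueue queue(2);   // e.g. max_connections lowered to a small value
          queue.insert_safe(1);
          queue.insert_safe(2);
          queue.insert_safe(3);  // would have hit the assertion with insert()
        }
      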
      d0181929
  6. 23 Jan, 2013 2 commits
    • Bug #16089381 : POSSIBLE NUMBER UNDERFLOW AROUND CALLING PAGE_ZIP_EMPTY_SIZE() · 6083ae52
      Yasufumi Kinoshita authored
      Some callers of page_zip_empty_size() ignored the possibility of it returning 0, which could cause underflow.
      
      rb#1837 approved by Marko
      6083ae52
    • Bug #11827369: ASSERTION FAILED: !THD->LEX->CONTEXT_ANALYSIS_ONLY · e53345f0
      Gleb Shchepa authored
      Some queries with the "SELECT ... FROM DUAL" nested subqueries
      failed with an assertion on debug builds.
      Non-debug builds were not affected.
      
      There were a few different issues with similar assertion
      failures on different queries:
      
      1. The first problem was related to the incomplete propagation
      of the "non-constant" item status from underlying subquery
      items to the outer item tree: in some cases non-constants were
      interpreted as constants and evaluated at the preparation stage
      (val_int() calls within fix_fields() etc).
      
      Specifically, the default implementation of Item_ref::const_item()
      inherited from the Item parent class didn't take into account the
      "const_item" status of the referenced item tree -- it used the
      insufficient "used_tables() == 0" check instead. This worked in
      most cases since our "non-constant" functions like RAND() and
      SLEEP() set the RAND_TABLE_BIT in the used table map, so Item_ref
      still sees them as non-constant. However, a "SELECT ... FROM DUAL"
      subquery may have an empty map of used tables, while at the same
      time subqueries are never "constant" at the context analysis stage
      (preparation, view creation etc). So the non-constantness of such
      subqueries was missed.
      
      Fix: the Item_ref::const_item() function has been overridden to
      take into account both the (*ref)->const_item() status and the
      tricky Item_ref::used_tables() return values, since the
      (*ref)->const_item() call alone is not enough there.
      
      2. In some cases instead of the const_item() call we check a
      value of the Item::with_subselect field to recognize items
      with nested subqueries. However, the Item_ref class didn't
      propagate this value from the referenced item tree.
      
      Fix: Item::has_subquery() and Item_ref::has_subquery()
      functions have been backported from 5.6. All direct
      references to the with_subselect fields of nested items have
      been replaced with the has_subquery() function call.
      
      3. The Item_func_regex class didn't propagate with_subselect
      either, since it overrides the Item_func::fix_fields()
      function with an insufficient fix_fields() implementation.
      
      Fix: the Item_func_regex::fix_fields() function has been
      modified to gather "constant" statuses from inner items.
      
      4. The Item_func_isnull::update_used_tables() function has
      a special branch for the underlying item where the maybe_null
      value is false: in this case it marks the Item_func_isnull
      as a "const_item" and sets the cached_value to false.
      However, Item_func_isnull::val_int() was not in sync with
      update_used_tables(): it took into account neither
      const_item_cache nor cached_value for the case of the
      "args[0]->maybe_null == false" optimization.
      Since such an Item_func_isnull has "const_item() == true",
      it's ok to call Item_func_isnull::val_int() etc from outer
      items at the preparation stage. In this case the server tried to
      call Item_func_isnull::args[0]->isnull(), and if the args[0]
      item contained a nested not-nullable subquery, it failed
      with an assertion.
      
      Fix: take the value of Item_func_isnull::const_item_cache into
      account in the val_int() function (a simplified model is sketched
      at the end of this message).
      
      5. The auxiliary Item_is_not_null_test class has a similar
      optimization in the update_used_tables() function as the
      Item_func_isnull class has, and the same issue in the val_int()
      function.
      In addition to that the Item_is_not_null_test::update_used_tables()
      doesn't update the const_item_cache value, so the "maybe_null"
      optimization is useless there. Thus, we missed some optimizations
      of cases like these (before and after the fix):
        <  <is_not_null_test>(a),
        ---
        >  <cache>(<is_not_null_test>(a)),
      or
        < having (<is_not_null_test>(a) and <is_not_null_test>(a))
        ---
        > having 1
      etc.
      
      Fix: update Item_is_not_null_test::const_item_cache in
      update_used_tables() and take it into account in val_int().
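      
      The caching behaviour described in items 4 and 5 can be modelled
      with a small, self-contained C++ sketch; the names below are
      illustrative stand-ins, not the server's actual declarations.
      
        #include <iostream>
        #include <stdexcept>
        
        // Argument stand-in; evaluating it during context analysis models the
        // debug assertion seen with a nested, not-yet-executed subquery.
        struct Arg {
          bool maybe_null;
          bool is_null() const {
            throw std::logic_error("evaluated during context analysis");
          }
        };
        
        // Simplified IS NULL item (illustrative, not the real Item_func_isnull).
        struct IsNullFunc {
          Arg *arg;
          bool const_item_cache;
          bool cached_value;
        
          explicit IsNullFunc(Arg *a)
              : arg(a), const_item_cache(false), cached_value(false) {}
        
          void update_used_tables() {
            if (!arg->maybe_null) {       // the argument can never be NULL
              const_item_cache = true;    // the result is known in advance
              cached_value = false;       // "arg IS NULL" is always false here
            }
          }
        
          long val_int() const {
            // Post-fix behaviour: honour the cached constant result instead of
            // touching the (possibly not yet executed) argument.
            if (const_item_cache) return cached_value ? 1 : 0;
            return arg->is_null() ? 1 : 0;
          }
        };
        
        int main() {
          Arg a = {false};                // maybe_null == false
          IsNullFunc isnull(&a);
          isnull.update_used_tables();
          std::cout << isnull.val_int() << '\n';  // 0, without evaluating the argument
        }
      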
      e53345f0
  7. 21 Jan, 2013 1 commit
    • Bug#16067973 DROP TABLE SLOW WHEN IT DECOMPRESS COMPRESSED-ONLY PAGES · f130ccc1
      Marko Mäkelä authored
      buf_page_get_gen(): Do not attempt to decompress a compressed-only
      page when mode == BUF_PEEK_IF_IN_POOL. This mode is only being used by
      btr_search_drop_page_hash_when_freed(). There cannot be any adaptive
      hash index pointing to a page that does not exist in uncompressed
      format in the buffer pool.
      
      innodb_buffer_pool_evict_update(): New function for debug builds, to handle
      SET GLOBAL innodb_buffer_pool_evict='uncompressed'
      by evicting all uncompressed page frames of compressed tablespaces
      from the buffer pool.
      
      rb#1873 approved by Jimmy Yang
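      
      A simplified, self-contained C++ illustration of the peek
      short-circuit described above; the enum and struct are stand-ins,
      not InnoDB's real buf0buf declarations.
      
        #include <iostream>
        
        enum class FetchMode { NORMAL, PEEK_IF_IN_POOL };
        
        // Illustrative page state; not InnoDB's buf_page_t.
        struct Page {
          bool in_pool_uncompressed;  // an uncompressed frame is resident
          bool in_pool_compressed;    // only the compressed copy is resident
        };
        
        // Returns true if the caller gets a usable uncompressed frame.
        bool get_page(const Page &p, FetchMode mode) {
          if (p.in_pool_uncompressed) return true;
          if (mode == FetchMode::PEEK_IF_IN_POOL)
            return false;             // post-fix: never decompress just to peek
          if (p.in_pool_compressed) {
            // NORMAL mode may decompress the page here (omitted).
            return true;
          }
          return false;               // not in the buffer pool at all
        }
        
        int main() {
          Page compressed_only = {false, true};
          std::cout << get_page(compressed_only, FetchMode::PEEK_IF_IN_POOL)
                    << '\n';          // 0: the peek does not trigger decompression
        }
      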
      f130ccc1
  8. 19 Jan, 2013 1 commit
    • Bug#11752707-SLAVE CRASHES IF RBR HAS AS DESTINATION A VIEW · d1f351bc
      Venkatesh Duggirala authored
      RATHER THAN A TABLE
      
      Problem: In RBR, if a table is converted into a view at the slave
      (i.e., "drop table 'object1'" & "create view 'object1'"), then any
      DML operations on the table at the master cause a crash at the slave.
      
      Analysis: The slave prepares the list of tables to be opened for
      DML when it receives Table_map_log_event(s), and the same list is
      sent to the open_table function. The open_table logic assumes that
      if the list contains a view object, it also contains the
      "select_lex" object of that view. In the above special case, the
      table object does not contain 'select_lex' as it is a base table
      at the master. Since it is a view at the slave, the open_table
      logic goes to the 'mysql_make_view()' function, which assumes that
      'select_lex' exists for the object.
      
      Fix: While preparing the 'tables to be opened' list, we should
      make sure that the required table type is 'base table'. If the
      object is not a base table when it is opened, mysql_make_view
      will throw an error similar to 'object is not a base table'.
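      
      A compact, self-contained C++ model of the check described in the
      fix; the types and function names are simplified stand-ins for
      the replication/open_tables code, not the actual server API.
      
        #include <iostream>
        #include <stdexcept>
        #include <string>
        
        enum class ObjectType { BASE_TABLE, VIEW };
        
        // Illustrative table reference; not the server's TABLE_LIST.
        struct TableRef {
          std::string name;
          ObjectType required_type;  // what the caller expects to open
        };
        
        ObjectType lookup_on_slave(const std::string &) {
          return ObjectType::VIEW;   // the object was re-created as a view
        }
        
        void open_for_row_event(const TableRef &t) {
          ObjectType actual = lookup_on_slave(t.name);
          if (t.required_type == ObjectType::BASE_TABLE &&
              actual == ObjectType::VIEW)
            throw std::runtime_error(t.name + " is not a base table");
          // ... open the base table and apply the row event ...
        }
        
        int main() {
          TableRef t = {"object1", ObjectType::BASE_TABLE};  // row events need a base table
          try {
            open_for_row_event(t);
          } catch (const std::exception &e) {
            std::cout << e.what() << '\n';   // an error instead of a crash
          }
        }
      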
      d1f351bc
  9. 16 Jan, 2013 2 commits
    • BUG#14117025: UNABLE TO RESTORE DUMP · cd3b2ac9
      Anirudh Mangipudi authored
      Problem: When a view with a specific character set and collation
      is created on another view with a different character set and
      collation, the dump restoration results in an illegal mix of
      collations error.
      Solution: To avoid this confusion of collations, the datatype used
      in the generated CREATE TABLE is hardcoded as "tinyint NOT NULL".
      This does not matter as the created table is dropped at runtime,
      and tinyint is specifically used to avoid hitting row size
      conflicts.
      cd3b2ac9
    • Bug#11751794 MYSQL GIVES THE WRONG RESULT WITH SOME SPECIAL USAGE · 0e787b24
      Neeraj Bisht authored
      Consider the following query:
      
      SELECT f_1,..,f_m, AGGREGATE_FN(C)
      FROM t1
      WHERE ...
      GROUP BY ...
      
      Loose index scan ("Using index for group-by") can be used for
      this query if there is an index 'i' covering all fields in the
      select list, and the GROUP BY clause makes up a prefix f1,...,fn
      of 'i'. Furthermore, according to rule NGA2 of
      get_best_group_min_max(), the WHERE clause must contain a
      conjunction of equality predicates for all fields fn+1,...,fm.
      
      The problem in this bug was that a query with a WHERE clause that
      broke NGA2 was not detected and therefore used loose index scan.
      This led to wrong results. The query had an index
      covering (c1,c2) and had:
        "WHERE (c1 = 1 AND c2 = 'a') OR (c1 = 2 AND c2 = 'b')
         GROUP BY c1"
      or 
        "WHERE (c1 = 1 ) OR (c1 = 2 AND c2 = 'b')
         GROUP BY c1"
      
      
      This WHERE clause cannot be transformed to a conjunction of
      equality predicates.
      
      The solution is to introduce another rule, NGA3, that complements
      NGA2. NGA3 says that if a gap field (field between those
      listed in GROUP BY and C in the index) has a predicate, then
      there can only be one range in the query. This requirement is
      more strict than it has to be in theory. BUG 15947433 will deal
      with that.
      0e787b24
  10. 15 Jan, 2013 1 commit
    • Bug#11758009 - UNION EXECUTION ORDER WRONG ? · d8d6f270
      Neeraj Bisht authored
      Problem:
      In the case of a blob data field, UNION ALL doesn't give the
      correct result.
      
      Analysis:
      In a MyISAM table, when we don't want to check distinctness for a
      particular key, we set the key_map to zero.
      
      While writing a record into a MyISAM table, we check distinctness
      with the help of keys, by checking whether the key is active in
      key_map before writing the record.
      
      In the case of a blob field, we check distinctness via a unique
      constraint, without checking whether that unique key is active in
      key_map.
      
      Solution:
      Before checking for distinctness, check whether the corresponding
      key is active in key_map.
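      
      A stand-alone C++ sketch of the guard described in the solution:
      the uniqueness check on a blob key is only performed when that key
      is active in key_map. The names are illustrative, not MyISAM's
      internal API.
      
        #include <bitset>
        #include <iostream>
        #include <set>
        #include <string>
        
        // Illustrative table state; not MyISAM's internal structures.
        struct Table {
          std::bitset<64> key_map;            // active keys; all zero for UNION ALL
          std::set<std::string> unique_blob;  // models the unique-constraint check
        };
        
        // Returns true if the row may be written.
        bool check_blob_duplicate(Table &t, unsigned key_nr, const std::string &v) {
          if (!t.key_map.test(key_nr))        // post-fix: inactive key, no check
            return true;
          return t.unique_blob.insert(v).second;  // active key: enforce distinct
        }
        
        int main() {
          Table t;                            // key_map cleared, as for UNION ALL
          std::cout << check_blob_duplicate(t, 0, "blob value") << '\n';  // 1
          std::cout << check_blob_duplicate(t, 0, "blob value") << '\n';  // 1: kept
        }
      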
      d8d6f270
  11. 14 Jan, 2013 2 commits
  12. 12 Jan, 2013 1 commit
    • BUG#11757250: REPLACE(...) INSIDE A STORED PROCEDURE. · 3d9d0e77
      Nisha Gopalakrishnan authored
      Analysis:
      --------
      
      The REPLACE operation provides incorrect output when a
      user variable is supplied as an argument and there
      are multiple rows on which the operation is performed.
      
      Consider the example below:
      
      SET @var='(( 00000000 ++ 00000000 ))';
      SELECT REPLACE(@var, '00000000', table_name) AS a FROM
      INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='mysql';
      
      Invalid output:
        +---------------------------------------+
        | REPLACE(@var, '00000000', TABLE_NAME) |
        +---------------------------------------+
        | (( columns_priv ++ columns_priv ))    |
        | (( columns_priv ++ columns_priv ))    |
            ......
            ......
        | (( columns_priv ++ columns_priv ))    |
        | (( columns_priv ++ columns_priv ))    |
        | (( columns_priv ++ columns_priv ))    |
        +---------------------------------------+
      
      The user-variable argument supplied as the string to the REPLACE
      operation is overwritten after the first iteration
      with '(( columns_priv ++ columns_priv ))'.
      This overwritten string is then used for the subsequent
      REPLACE iterations. Since the pattern string is no longer
      found, the invalid output shown above is returned.
      
      Fix:
      ---
      If the Alloced_length is zero, realloc() and create a
      copy of the string, which is then used for the REPLACE
      operation on every iteration.
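      
      A minimal C++ model of the copy-before-modify fix; "SimpleString"
      below is an illustrative stand-in for the server's String class,
      not its real interface.
      
        #include <cstring>
        #include <iostream>
        #include <string>
        
        // Stand-in for the server's String class: "alloced_length == 0" models a
        // String whose ptr aliases storage owned elsewhere (here, the user variable).
        struct SimpleString {
          const char *ptr;
          std::size_t length;
          std::size_t alloced_length;
          std::string own;   // backing store once this object owns the bytes
        
          SimpleString(const char *p, std::size_t len)
              : ptr(p), length(len), alloced_length(0) {}
        
          // The fix sketched: take a private copy before the first REPLACE round
          // so later rounds never read (or clobber) the caller's storage.
          void copy_if_not_alloced() {
            if (alloced_length == 0) {
              own.assign(ptr, length);
              ptr = own.data();
              alloced_length = own.size();
            }
          }
        };
        
        int main() {
          const char *user_var = "(( 00000000 ++ 00000000 ))";  // models @var
          SimpleString subject(user_var, std::strlen(user_var));
          subject.copy_if_not_alloced();
          std::cout << (subject.ptr != user_var) << '\n';  // 1: no longer aliases @var
        }
      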
      3d9d0e77
  13. 11 Jan, 2013 1 commit
    • Bug#15843818 PARTITIONING BY RANGE WITH TO_DAYS ALWAYS · 01094a4b
      Aditya A authored
                     INCLUDES FIRST PARTITION WHEN PRUNING
      
      
      PROBLEM
      -------
      
      TO_DAYS()/TO_SECONDS() can return NULL for invalid dates, which
      are stored in the first partition; therefore the first partition
      was always included in the scan when a range was specified.
      
      
      FIX
      ---
      
      The fix is a small optimization which prunes scanning of the
      NULL/first partition if the dates specified in the range are
      valid and in the same year and month. The TO_SECONDS() function
      is not supported in 5.1, so it has been removed from the fix and
      test scripts for the mysql-5.1 version.
      01094a4b
  14. 10 Jan, 2013 2 commits
    • Bug#11760726: LEFT JOIN OPTIMIZED INTO JOIN LEADS TO · b140c368
      Chaithra Gopalareddy authored
                    INCORRECT RESULTS
      
      This is a backport of fix for Bug#13068506.
      b140c368
    • Bug#11749556: DEBUG ASSERTION WHEN ACCESSING A VIEW AND · 79438d75
      Praveenkumar Hulakund authored
                    AVAILABLE MEMORY IS TOO LOW 
      
      Analysis:
      ---------
      In function "mysql_make_view", "table->view" is initialized
      after parsing(using File_parser::parse) the view definition.
      If "::parse" function fails then control is moved to label 
      "err:". Here we have assert (table->view == thd->lex). 
      This assert fails if "::parse" function fails, as 
      table->view is not initialized yet.
      
      File_parser::parse fails if data being parsed is incorrect/
      corrupted or when memory allocation fails. In this scenario
      its failing because of failure in memory allocation.
      
      Fix:
      ---------
      In case of failure in the function "File_parser::parse", moving
      to the label "err:" is incorrect. The code has been modified to
      move to the label "end:".
      79438d75
  15. 09 Jan, 2013 1 commit
  16. 08 Jan, 2013 1 commit
  17. 07 Jan, 2013 2 commits
  18. 04 Jan, 2013 3 commits
    • No commit message · ed8b6fc4
      mysql-builder@oracle.com authored
      No commit message
      ed8b6fc4
    • Post Fix to Bug#14628410 - ASSERTION `! IS_SET()' FAILED IN · 1de6ac5b
      Satya Bodapati authored
      			    DIAGNOSTICS_AREA::SET_OK_STATUS
      
      The test fails on the 5.1 Valgrind build. This is because of a
      close(-1) system call.
      
      Fixed by adding extra checks for a valid file descriptor.
      
      Approved by Vasil(Calvin). rb#1792
      1de6ac5b
    • Bug#16066243 PB2 FAILURES I_MAIN.BUG15912213 AND · 138217a2
      Nirbhay Choubey authored
          I_MAIN.CTYPE_UTF8 FOR MACOSX10.6 FOR 5.1
      
      While converting a directory name to a filename, a
      file separator (FN_LIBCHAR) might get appended
      to the resulting file name. This can result in an
      off-by-one error when the length of the input string
      is equal to FN_REFLEN. In this case, the terminating
      '\0' gets written beyond the buffer allocated to store
      the result.
      
      Fixed by incrementing the dst buffer size by 1. As
      extra safety, switched to strnmov() and added a debug
      assert to check the length of the input file name.
      
      No test case added as the scenario is already
      covered by the test cases added for bugs in
      the description.
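      
      A self-contained C++ sketch of the off-by-one scenario described
      above and the fix (a one-byte-larger destination plus a bounded
      copy); the constants and helper below are illustrative models,
      not the real mysys code.
      
        #include <cassert>
        #include <iostream>
        #include <string>
        
        static const std::size_t FN_REFLEN_MODEL = 512;   // stand-in for FN_REFLEN
        static const char FN_LIBCHAR_MODEL = '/';
        
        // Converts a directory name to a file name, appending a separator if
        // one is missing, into a destination that is one byte larger than the
        // length limit so the trailing '\0' always fits.
        std::string dir_to_filename(const std::string &dir) {
          char dst[FN_REFLEN_MODEL + 1];                  // the "+ 1" is the fix
          assert(dir.size() <= FN_REFLEN_MODEL);          // debug-style length check
          std::size_t n = dir.copy(dst, FN_REFLEN_MODEL); // bounded, strnmov-like copy
          if (n < FN_REFLEN_MODEL && (n == 0 || dst[n - 1] != FN_LIBCHAR_MODEL))
            dst[n++] = FN_LIBCHAR_MODEL;                  // result may reach the limit
          dst[n] = '\0';                                  // index <= limit: in bounds
          return std::string(dst, n);
        }
        
        int main() {
          std::string dir(FN_REFLEN_MODEL, 'a');          // input exactly at the limit
          dir[dir.size() - 1] = FN_LIBCHAR_MODEL;
          std::cout << dir_to_filename(dir).size() << '\n';  // 512, no overflow
        }
      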
      138217a2
  19. 02 Jan, 2013 1 commit
    • BUG#11753923-SQL THREAD CRASHES ON DISK FULL · c72f687f
      Venkatesh Duggirala authored
      Problem: If the disk becomes full while writing into the binlog,
      the server instance hangs until someone frees up space.
      After the user frees up the disk space, the mysql server crashes
      with an assert (m_status != DA_EMPTY).
      
      Analysis: wait_for_free_space is called in an
      infinite loop, i.e., the server instance will hang until
      someone frees up the space, so there is no need to
      set the status bit in the diagnostic area.
      
      Fix: Replace my_error/my_printf_error with
      sql_print_warning(), which prints the warning in the error log.
      c72f687f
  20. 01 Jan, 2013 1 commit
  21. 29 Dec, 2012 1 commit
  22. 28 Dec, 2012 1 commit
  23. 27 Dec, 2012 2 commits
  24. 26 Dec, 2012 2 commits
  25. 24 Dec, 2012 2 commits
    • Fixing a pb2 issue. There is some difference in the output in my local... · d1dcbfd2
      Annamalai Gurusami authored
      Fixing a PB2 issue. There is some difference in the EXPLAIN output between my local machine and the PB2 machines.
      d1dcbfd2
    • Bug#11757005: UNION CONVERTS UNSIGNED MEDIUMINT AND BIGINT · adc973d5
      Chaithra Gopalareddy authored
                    TO SIGNED
      Problem:
      When we are joining types (of fields) in the case of a union, we
      usually upgrade the datatypes to the largest type present in the
      query. In the case of mediumint, this is not happening.
      Analysis:
      When joined with types LONG and LONGLONG, mediumint should get
      upgraded to LONG and LONGLONG respectively.
      W.r.t. the given query, the constant '1' will be created as a
      LONGLONG internally and the SIGNED flag is enabled. As a result,
      while combining types for the field, LONGLONG along with MEDIUMINT
      first gets converted to LONG. LONG with MEDIUMINT (of the third
      select) gets converted to MEDIUMINT. The SIGNED flag would be that
      of the first field. As a result, the final result would be
      SIGNED MEDIUMINT.
      Fix:
      While joining types, MEDIUMINT with LONGLONG and MEDIUMINT with
      LONG are now converted to LONGLONG and LONG respectively. Also,
      made some changes for FLOAT and DOUBLE.
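      
      A small stand-alone C++ model of the merging rule described in the
      fix; the enum only loosely mirrors the field type names and is not
      the server's enum_field_types.
      
        #include <algorithm>
        #include <iostream>
        
        enum IntType { MEDIUMINT = 0, LONG_T = 1, LONGLONG_T = 2 };
        
        // Post-fix rule sketched here: merging always promotes to the wider of
        // the two integer types, so MEDIUMINT combined with LONG/LONGLONG is
        // upgraded rather than pulling the result back down to MEDIUMINT.
        IntType merge(IntType a, IntType b) { return std::max(a, b); }
        
        int main() {
          IntType t = MEDIUMINT;
          t = merge(t, LONGLONG_T);          // constant '1' created as LONGLONG
          t = merge(t, MEDIUMINT);           // third SELECT's mediumint column
          std::cout << (t == LONGLONG_T) << '\n';   // 1: stays LONGLONG after the fix
        }
      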
      adc973d5