1. 24 Jun, 2015 2 commits
    • Yashwant Sahu's avatar
    • Debarun Banerjee's avatar
      BUG#20310212 PARTITION DDL- CRASH AFTER THD::NOCHECK_REGISTER_ITEM_ · 0eadadad
      Debarun Banerjee authored
      Problem :
      ---------
      Issue-1: The root cause of the issue is that (col1 > 1) is not a
      valid partition function, and an error should have been raised at a
      much earlier stage [partition_info::check_partition_info]. We were
      not checking the sub-partition expression when the partition
      expression is NULL.
      
      Issue-2: A potential future issue exists if any partition function
      needs to change the item tree during open/fix_fields. Changed items,
      if any, should be released before calling closefrm when the
      partitioned table is opened during creation in create_table_impl.
      
      Solution :
      ----------
      1. check_partition_info() - Check the sub-partition expression even
      when there is no partition expression.
      [partition by ... columns(...) subpartition by hash(<expr>)]
      
      2. create_table_impl() - Assert that the change list is empty before
      calling closefrm for a partitioned table. Currently no supported
      partition function appears to change the item tree during open.
      Reviewed-by: Mattias Jonsson <mattias.jonsson@oracle.com>
      
      RB: 9345
      0eadadad
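      A minimal, hedged sketch of the check order described in the commit above; the names below are illustrative stand-ins, not the server's partition_info class or Item tree. The point is that the sub-partition expression is validated even when the partition expression is absent, as with PARTITION BY ... COLUMNS(...) SUBPARTITION BY HASH(<expr>).
      
        #include <iostream>
        #include <string>
        
        // Hypothetical stand-in for an expression that may or may not be a
        // valid partition function.
        struct Expr
        {
          std::string text;
          bool valid_partition_func;
        };
        
        static bool check_expr(const Expr *e, const char *what)
        {
          if (e != nullptr && !e->valid_partition_func)
          {
            std::cerr << "error: " << what << " '" << e->text
                      << "' is not a valid partition function\n";
            return false;
          }
          return true;
        }
        
        // Mirrors the fixed flow: the sub-partition expression is checked even
        // when part_expr is NULL (COLUMNS partitioning has no partition expression).
        static bool check_partition_info_sketch(const Expr *part_expr,
                                                const Expr *subpart_expr)
        {
          if (!check_expr(part_expr, "partition expression"))
            return false;
          return check_expr(subpart_expr, "sub-partition expression");
        }
        
        int main()
        {
          Expr bad= {"(col1 > 1)", false};
          // No partition expression, but the bad sub-partition expression must
          // still be rejected early instead of crashing during table creation.
          return check_partition_info_sketch(nullptr, &bad) ? 1 : 0;
        }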
  2. 23 Jun, 2015 3 commits
  3. 22 Jun, 2015 2 commits
  4. 19 Jun, 2015 2 commits
    • Annamalai Gurusami's avatar
      Bug #20762798 FK DDL: CRASH IN DICT_FOREIGN_REMOVE_FROM_CACHE · db2ed27e
      Annamalai Gurusami authored
      Problem:
      
      If we add a referential integrity constraint with a duplicate
      name, an error occurs, and the foreign key object is not added
      to the dictionary cache.  In the error path, there is an attempt
      to remove this foreign key object.  Since the object is not there,
      the search returns a NULL result, and dereferencing the NULL
      object results in this crash.
      
      Solution:
      
      If the search for the foreign key object fails, do not
      attempt to access it.
      
      rb#9309 approved by Marko.
      db2ed27e
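      A minimal sketch of the guard described above, using hypothetical names (a plain hash map rather than InnoDB's dictionary cache and dict_foreign_remove_from_cache()): if the lookup for the foreign key object finds nothing, the removal becomes a no-op instead of dereferencing NULL.
      
        #include <string>
        #include <unordered_map>
        
        struct ForeignKey { std::string id; };
        
        typedef std::unordered_map<std::string, ForeignKey *> ForeignCache;
        
        static void remove_foreign_from_cache(ForeignCache &cache,
                                              const std::string &id)
        {
          ForeignCache::iterator it= cache.find(id);
          if (it == cache.end())
            return;                 // never added (e.g. duplicate-name error path)
          delete it->second;
          cache.erase(it);
        }
        
        int main()
        {
          ForeignCache cache;
          // Error path: a constraint with a duplicate name was rejected before
          // being cached, so this lookup finds nothing and must be a no-op.
          remove_foreign_from_cache(cache, "fk_duplicate_name");
          return 0;
        }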
    • V S Murthy Sidagam's avatar
      Bug #21221862 NEWEST RHEL/CENTOS OPENSSL UPDATE BREAKS MYSQL DHE CIPHERS · dbbe747e
      V S Murthy Sidagam authored
      Description: The newest RHEL/CentOS/SL 6.6 openssl package
      (1.0.1e-30.el6_6.9; published around 6/4/2015) contains a fix for
      LogJam. RedHat's fix was to limit the use of any SSL DH key
      sizes to a minimum of 768 bits. This breaks all DHE SSL ciphers
      for MySQL clients as soon as the openssl update is installed,
      since in vio/viosslfactories.c the default DHPARAM is a 512-bit
      one. This cannot be changed in configuration or at runtime; it
      requires a recompile. Because of this, a client connection with
      --ssl-cipher=DHE-RSA-AES256-SHA is not able to connect to the
      server.
      
      Analysis: OpenSSL has changed the Diffie-Hellman key size from 512
      to 1024 bits (please see the details at
      http://openssl.org/news/secadv_20150611.txt). Because of this, a
      client using a DHE cipher fails to connect to the server. This
      change took effect from openssl-1.0.1n onwards.
      
      Fix: A similar fix was already pushed to mysql-5.7 under
      bug#18367167, so the same fix is backported to mysql-5.5 and
      mysql-5.6.
      dbbe747e
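      A hedged sketch, using only public OpenSSL APIs (DH_new(), DH_generate_parameters_ex(), SSL_CTX_set_tmp_dh()), of installing ephemeral DH parameters large enough to satisfy clients that now reject short DH keys. The actual fix replaces the built-in 512-bit DHPARAM in vio/viosslfactories.c with longer pre-generated parameters; generating them at startup, as done here, is for illustration only and is slow.
      
        #include <openssl/ssl.h>
        #include <openssl/dh.h>
        
        // Returns 1 on success, 0 on failure.
        static int install_long_dh_params(SSL_CTX *ctx)
        {
          DH *dh= DH_new();
          if (dh == NULL)
            return 0;
          // The server would normally embed pre-generated parameters instead of
          // generating them here; 2048 bits comfortably exceeds the new minimum.
          if (!DH_generate_parameters_ex(dh, 2048, DH_GENERATOR_2, NULL) ||
              SSL_CTX_set_tmp_dh(ctx, dh) != 1)
          {
            DH_free(dh);
            return 0;
          }
          DH_free(dh);              // SSL_CTX_set_tmp_dh() keeps its own copy
          return 1;
        }
        
        int main()
        {
          SSL_library_init();
          SSL_CTX *ctx= SSL_CTX_new(SSLv23_server_method());
          int ok= (ctx != NULL) && install_long_dh_params(ctx);
          if (ctx != NULL)
            SSL_CTX_free(ctx);
          return ok ? 0 : 1;
        }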
  5. 17 Jun, 2015 2 commits
  6. 16 Jun, 2015 2 commits
  7. 05 Jun, 2015 2 commits
  8. 04 Jun, 2015 2 commits
    • Arun Kuruvila's avatar
      Merge branch 'mysql-5.1' into mysql-5.5 · 95cb8c1d
      Arun Kuruvila authored
      95cb8c1d
    • Arun Kuruvila's avatar
      Bug #20605441 : BUFFER OVERFLOW IN MYSQLSLAP · 044e3b1d
      Arun Kuruvila authored
      Description:- mysqlslap is a diagnostic utility designed to
      emulate client load for a MySQL server and to report the
      timing of each stage. This utility crashes when invalid
      values are passed to the options 'num_int_cols_opt',
      'num_chars_cols_opt' or 'engine'.
      
      Analysis:- mysqlslap uses "parse_option()" to parse the
      values specified for the options 'num_int_cols_opt',
      'num_chars_cols_opt' and 'engine'. These options take
      comma-separated values. In "parse_option()", each
      comma-separated value is extracted and copied into a buffer
      without checking the length of the string to be copied. The
      size of the buffer is defined by the macro HUGE_STRING_LENGTH,
      whose value is 8196. So if the length of any of the
      comma-separated values exceeds HUGE_STRING_LENGTH, a buffer
      overflow results.
      
      Fix:- A check is introduced in "parse_option()" to test
      whether the length of the string to be copied exceeds
      HUGE_STRING_LENGTH. If it does, the error "Invalid value
      specified for the option 'xxx'" is reported.
      The option length was also calculated incorrectly for the last
      comma-separated value, so that is fixed as well.
      044e3b1d
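      A simplified sketch of the bounds check described above; this is not mysqlslap's actual parse_option(), only the copy-with-length-check idea. Each comma-separated value, including the last one, is measured before being copied into the fixed HUGE_STRING_LENGTH-sized buffer.
      
        #include <cstdio>
        #include <cstring>
        
        static const size_t HUGE_STRING_LENGTH= 8196;   // size quoted in the commit message
        
        static bool parse_option_sketch(const char *value, const char *option_name)
        {
          char buffer[HUGE_STRING_LENGTH];
          const char *start= value;
        
          for (;;)
          {
            const char *end= strchr(start, ',');
            // The last value (no trailing comma) gets the same length calculation.
            size_t len= end ? (size_t)(end - start) : strlen(start);
        
            if (len >= sizeof(buffer))
            {
              fprintf(stderr, "Invalid value specified for the option '%s'\n",
                      option_name);
              return false;
            }
            memcpy(buffer, start, len);
            buffer[len]= '\0';
            // ... use `buffer` here (e.g. record an engine name or column spec) ...
        
            if (end == NULL)
              return true;
            start= end + 1;
          }
        }
        
        int main()
        {
          return parse_option_sketch("innodb,myisam", "engine") ? 0 : 1;
        }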
  9. 03 Jun, 2015 2 commits
    • Debarun Banerjee's avatar
      BUG#21065746 RQG_PARTN_PRUNING_VALGRIND FAILED IN REM0REC.CC · e5991403
      Debarun Banerjee authored
      Problem :
      ---------
      This is a regression of Bug#19138298. In purge_node_t::validate_pcur
      we try to get offsets for all columns of the clustered index from
      the record stored in the persistent cursor. This fails when the
      stored record does not have all fields of the index, because the
      stored record keeps only the fields needed to uniquely identify
      the entry.
      
      Solution :
      ----------
      1. Use pcur.old_n_fields to get the fields that are actually stored.
      2. Add a comment noting the dependency between the stored fields in
      the purge node ref and the stored cursor.
      3. Return if the cursor record is not already stored, as it is not
      safe to access the cursor record directly without a latch.
      Reviewed-by: Marko Makela <marko.makela@oracle.com>
      
      RB: 9139
      e5991403
    • Debarun Banerjee's avatar
      BUG#21126772 VALGRIND FAILURE IN ENGINES/FUNCS SUITE · 4b8304a9
      Debarun Banerjee authored
      Problem :
      ---------
      This is a regression of Bug#19138298. During purge, if
      btr_pcur_restore_position() fails, we set found_clust to FALSE
      so that a possible clustered index record can be found in future
      calls for the same undo entry. This, however, overwrites
      old_rec_buf when pcur is initialized again on the next call,
      leaking the old buffer.
      
      The leak is reproducible in a local environment with the
      test provided along with Bug#19138298.
      
      Solution :
      ----------
      If btr_pcur_restore_position() fails, close the cursor.
      Reviewed-by: Marko Makela <Marko.Makela@oracle.com>
      Reviewed-by: Annamalai Gurusami <Annamalai.Gurusami@oracle.com>
      
      RB: 9074
      4b8304a9
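      A generic sketch of the cleanup rule described above, with hypothetical names rather than InnoDB's btr_pcur_* API: when restoring the stored cursor position fails, the cursor is closed so its record buffer is released, instead of being overwritten (and leaked) by the next initialization.
      
        #include <cstdlib>
        
        struct CursorSketch                      // stand-in for a persistent cursor
        {
          unsigned char *old_rec_buf;            // the buffer that was being leaked
        
          CursorSketch() : old_rec_buf(NULL) {}
          void open()  { old_rec_buf= static_cast<unsigned char *>(malloc(128)); }
          void close() { free(old_rec_buf); old_rec_buf= NULL; }
          bool restore_position() { return false; }   // pretend the restore failed
        };
        
        static void purge_step_sketch(CursorSketch &pcur, bool &found_clust)
        {
          if (!pcur.restore_position())
          {
            pcur.close();          // the fix: release old_rec_buf now instead of leaking it
            found_clust= false;    // a later call may look up the clustered record again
            return;
          }
          // ... use the restored position ...
        }
        
        int main()
        {
          CursorSketch pcur;
          bool found_clust= true;
          pcur.open();
          purge_step_sketch(pcur, found_clust);
          return 0;
        }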
  10. 29 May, 2015 2 commits
  11. 22 May, 2015 1 commit
  12. 21 May, 2015 1 commit
    • Bin Su's avatar
      Bug#21113036 - MYSQL/INNODB MIX BUFFERED AND DIRECT IO · b4daac21
      Bin Su authored
      As the man page of open(2) suggests, we should open the same file in
      the same mode to get better performance. For some data files, we first
      call os_file_create_simple_no_error_handling_func() to open them, and
      then call os_file_create_func() again. We have to make sure that, if
      direct I/O is specified, both functions open the file with O_DIRECT.
      Reviewed-by: Sunny Bains <sunny.bains@oracle.com>
      RB: 8981
      b4daac21
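      A minimal sketch of the rule in the commit above: when direct I/O is requested, every path that opens a data file should pass the same O_DIRECT flag, instead of mixing buffered and direct opens of the same file. The helper below is an illustrative stand-in, not InnoDB's os_file_create_* code, and O_DIRECT is assumed to be available (Linux).
      
        #include <fcntl.h>
        #include <unistd.h>
        
        // A single helper used by every open path, so the flags cannot diverge.
        static int open_data_file(const char *path, bool use_direct_io)
        {
          int flags= O_RDWR;
        #ifdef O_DIRECT
          if (use_direct_io)
            flags|= O_DIRECT;      // both the "simple" and the regular open must agree
        #endif
          return open(path, flags);
        }
        
        int main()
        {
          int fd= open_data_file("ibdata1", true);
          if (fd >= 0)
            close(fd);
          return 0;
        }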
  13. 18 May, 2015 1 commit
    • Tatiana Azundris Nuernberg's avatar
      Bug#20642505: HENRY SPENCER REGULAR EXPRESSIONS (REGEX) LIBRARY · dc45e408
      Tatiana Azundris Nuernberg authored
      The MySQL server uses Henry Spencer's library for regular
      expressions to support the REGEXP/RLIKE string operator.
      This changeset adapts a recent fix from upstream for
      better 32-bit compatibility. (Note that we cannot simply use
      the current upstream version as a drop-in replacement
      for the version used by the server, as the latter has
      been extended to understand MySQL charsets etc.)
      dc45e408
  14. 12 May, 2015 1 commit
  15. 11 May, 2015 1 commit
    • Ajo Robert's avatar
      Bug #18075170 SQL NODE RESTART REQUIRED TO · 515b2203
      Ajo Robert authored
      AVOID DEADLOCK AFTER RESTORE
      
      Analysis
      --------
      Accessing the restored NDB table in an active multi-statement
      transaction was resulting in a "deadlock found" error.
      
      MySQL Server needs to discover the metadata of an NDB table from
      the data nodes after the table is restored from backup. Metadata
      discovery happens on the first access to the restored table.
      The current code mandates that this statement be the first one
      in the transaction, because discovery needs an exclusive
      metadata lock on the table, and a lock upgrade at this point can
      lead to an MDL deadlock; the code was written at a time when the
      MDL deadlock detector did not exist. When discovery is attempted
      in a statement other than the first one in the transaction, an
      ER_LOCK_DEADLOCK error is reported pessimistically.
      
      Fix:
      ---
      Removed the constraint, as any potential deadlock will be
      handled by the deadlock detector. Also changed the discovery code
      to keep the metadata locks of the active transaction.
      
      The same issue was present in the table auto-repair scenario, so
      the same fix is applied in the repair path as well.
      515b2203
  16. 09 May, 2015 1 commit
    • Annamalai Gurusami's avatar
      Bug #19138298 RECORD IN INDEX WAS NOT FOUND ON ROLLBACK, TRYING TO INSERT · e7b6e814
      Annamalai Gurusami authored
      Scenario:
      
      1. The purge thread takes an undo log record and parses it and forms
         the record to be purged. We have the primary and secondary keys
         to locate the actual records.
      2. Using the secondary index key, we search in the secondary index.
         One record is found.
      3. Then it is checked if this record can be purged.  The answer is we
         can purge this record.  To determine this we look up the clustered
         index record.  Either there is no corresponding clustered index
         record, or the matching clustered index record is delete marked.
      4. Then we check whether the secondary index record is delete marked.
         We find that it is not delete marked.  We report a warning in an
         optimized build and assert in a debug build.
      
      Problem:
      
      In step 3, we report that the record is purgeable even though it is
      not delete marked.  This is because of an inconsistency between the
      following members of the purge_node_t structure: found_clust, ref
      and pcur.
      
      Solution:
      
      In row_purge_reposition_pcur(), if the persistent cursor restore
      fails, reset the purge_node_t->found_clust member.  This keeps the
      members of the purge_node_t structure in a consistent state.
      
      rb#8813 approved by Marko.
      e7b6e814
  17. 04 May, 2015 1 commit
  18. 29 Apr, 2015 1 commit
  19. 28 Apr, 2015 2 commits
    • Arun Kuruvila's avatar
      Merge branch 'mysql-5.1' into mysql-5.5 · c9a38e86
      Arun Kuruvila authored
      c9a38e86
    • Arun Kuruvila's avatar
      Bug #20181776 :- ACCESS CONTROL DOESN'T MATCH MOST SPECIFIC · fdae90dd
      Arun Kuruvila authored
                       HOST WHEN IT CONTAINS WILDCARD
      
      Description :- Incorrect access privileges are provided to a
      user due to wrong sorting of users when wildcard characters
      are present in the hostname.
      
      Analysis :- The function "get_sort()" is used to sort the
      strings of user name, hostname and database name. It is used
      to arrange the users in the access-privilege matching order.
      When a user connects, the server searches the sorted user
      access-privilege list and finds the matching entry for the
      user. The algorithm used in "get_sort()" sorts the strings
      inappropriately. As a result, when a user connects to the
      server, it is mapped to incorrect user access privileges.
      The algorithm used in "get_sort()" counts the number of
      characters before the first occurrence of any of the
      wildcard characters (single-wildcard character '_' or
      multi-wildcard character '%') and sorts in that order.
      As a result of this incorrect sorting it treats hostname "%" and
      "%.mysql.com" as equally specific values and therefore
      the order is indeterminate.
      
      Fix:- The "get_sort()" algorithm has been modified to treat
      "%" separately. Now "get_sort()" returns a number which, if
      sorted in descending order, puts strings in the following
      order:-
      * strings with no wildcards
      * strings containing wildcards and non-wildcard characters
      * the single multi-wildcard character ('%')
      * the empty string.
      fdae90dd
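      A simplified sketch of the ordering described above; it is not the server's get_sort(), just an illustration of the specificity ranking: hosts without wildcards match first, hosts mixing wildcards with literal characters (such as "%.mysql.com") next, a bare "%" after those, and an empty string last.
      
        #include <algorithm>
        #include <cstring>
        #include <iostream>
        #include <vector>
        
        // Lower rank = more specific = consulted first when matching a connecting user.
        static int host_rank(const char *host)
        {
          if (*host == '\0')
            return 3;                             // empty string: least specific
          if (strcmp(host, "%") == 0)
            return 2;                             // bare multi-wildcard
          if (strpbrk(host, "%_") != NULL)
            return 1;                             // wildcards mixed with literals
          return 0;                               // no wildcards: most specific
        }
        
        int main()
        {
          std::vector<const char *> hosts;
          hosts.push_back("%");
          hosts.push_back("");
          hosts.push_back("%.mysql.com");
          hosts.push_back("host1.mysql.com");
        
          std::sort(hosts.begin(), hosts.end(), [](const char *a, const char *b)
                    { return host_rank(a) < host_rank(b); });
        
          for (size_t i= 0; i < hosts.size(); i++)          // host1.mysql.com,
            std::cout << '"' << hosts[i] << "\"\n";         // %.mysql.com, %, ""
          return 0;
        }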
  20. 27 Apr, 2015 3 commits
    • V S Murthy Sidagam's avatar
      Bug #18592390 QUERY TO I_S.TABLES AND I_S.COLUMNS LEADS TO HUGE MEMORY USAGE · c3870e08
      V S Murthy Sidagam authored
      Description: On an example MySQL instance with 28k empty
      InnoDB tables, a specific query to information_schema.tables
      and information_schema.columns leads to memory consumption
      of over 38GB RSS.
      
      Analysis: In the get_all_tables() call, we fill the I_S tables
      from frm files and the storage engine. As part of that process
      we call make_table_name_list() and allocate memory for all
      28k frm file names in the THD mem_root through
      make_lex_string_root(). Since this is done around
      28k * 28k times, a huge amount of memory is hogged in the
      THD mem_root. This causes the RSS to grow to 38GB.
      
      Fix: As part of the fix we create a temporary mem_root
      in get_all_tables() and pass it to fill_fiels(). There we
      replace the THD mem_root with the temporary mem_root,
      allocate the file names in the temporary mem_root, free
      it once we have filled the I_S tables in get_all_tables(), and
      re-assign the original mem_root back to the THD mem_root.
      
      Note: Checked the massif output with the fix; the memory growth
      is now only around 580MB at peak.
      c3870e08
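      A minimal sketch of the allocation pattern described above, with hypothetical names rather than the server's MEM_ROOT API: names needed only while filling one batch of I_S rows come from a short-lived scratch arena that is released per database, instead of accumulating in the statement-lifetime arena.
      
        #include <string>
        #include <vector>
        
        struct ScratchArena                          // stand-in for a temporary MEM_ROOT
        {
          std::vector<std::string> names;
          void store(const std::string &name) { names.push_back(name); }
        };
        
        static void fill_schema_tables_sketch(const std::vector<std::string> &databases)
        {
          for (size_t i= 0; i < databases.size(); i++)
          {
            ScratchArena tmp;                        // lives only for this database
            // make_table_name_list()-style step: collect .frm names into the arena.
            tmp.store(databases[i] + "/t1.frm");
            tmp.store(databases[i] + "/t2.frm");
            // ... fill information_schema rows from tmp.names ...
          }                                          // arena released here, not kept
                                                     // for the whole statement
        }
        
        int main()
        {
          std::vector<std::string> dbs;
          dbs.push_back("db1");
          dbs.push_back("db2");
          fill_schema_tables_sketch(dbs);
          return 0;
        }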
    • V S Murthy Sidagam's avatar
      7797ef4d
    • V S Murthy Sidagam's avatar
      Bug #20683237 BACKPORT 19817663 TO 5.1 and 5.5 · c655515d
      V S Murthy Sidagam authored
      Restrict when user table hashes can be viewed. Require SUPER privileges.
      c655515d
  21. 24 Apr, 2015 2 commits
    • Arun Kuruvila's avatar
      Merge branch 'mysql-5.1' into mysql-5.5 · dbe6832c
      Arun Kuruvila authored
      dbe6832c
    • Arun Kuruvila's avatar
      Bug#20318154 : NEGATIVE ARRAY INDEX WRITE V2 · eb79ead4
      Arun Kuruvila authored
      Description:- There is a possibility of a negative array index
      write associated with the function "terminal_writec()". This
      is due to the possibility of a -1 return value from the
      function call "ct_visual_char()".
      
      Analysis:- The function "terminal_writec()" is called only
      from "em_delete_or_list()" and "vi_list_or_eof()", and both
      these functions deal with the "^D" (ctrl+D) signal. So the
      "size_t len" and "Char c" passed to "ct_visual_char()" (when
      called from "terminal_writec()") are always 8 (the macro
      VISUAL_WIDTH_MAX is passed, whose value is 8) and 4 (the ASCII
      value for "^D"/"ctrl+D") respectively.
      Since the value of "c" is 4, "ct_chr_class()" returns -1
      (the macro CHTYPE_ASCIICTL is associated with the value -1). And
      since the value of "len" is 8, "ct_visual_char()" will always
      return 2 when it is called from "terminal_writec()".
      So there is currently no case in which we encounter a negative
      array index write in "terminal_writec()". But since there is
      a rare possibility of using "terminal_writec()" in future
      enhancements, it is good to handle the error case as well.
      
      Fix:- A condition is added in "terminal_writec()" to check
      whether "ct_visual_char()" returns -1. If the return value
      is -1, then the value 0 is returned to the calling
      function, "em_delete_or_list()" or "vi_list_or_eof()", which
      in turn returns CC_ERROR.
      
      NOTE:- No test case is added since there is currently no
      scenario that triggers this error case.
      eb79ead4
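      A small sketch of the defensive check described above, with hypothetical stand-ins for libedit's terminal_writec()/ct_visual_char(): a conversion result that can be -1 is tested before it is used as a length or array index, and the caller gets 0 so it can return CC_ERROR.
      
        #include <cstdio>
        
        static const int VISUAL_WIDTH_MAX= 8;
        
        // Stand-in for ct_visual_char(): number of cells written, or -1 on error.
        static int visual_char_sketch(char *buf, int buflen, char c)
        {
          if (buflen < 2)
            return -1;
          buf[0]= '^';
          buf[1]= (char)(c + '@');                 // e.g. 4 ("ctrl+D") -> "^D"
          return 2;
        }
        
        // Stand-in for terminal_writec(): returns 1 on success, 0 on error.
        static int write_visual_sketch(char c)
        {
          char buf[VISUAL_WIDTH_MAX];
          int len= visual_char_sketch(buf, (int)sizeof(buf), c);
          if (len < 0)                             // the added guard: never use -1 as a length/index
            return 0;                              // caller maps this to CC_ERROR
          fwrite(buf, 1, (size_t)len, stdout);
          return 1;
        }
        
        int main()
        {
          return write_visual_sketch(4) ? 0 : 1;   // 4 == ASCII for ctrl+D
        }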
  22. 21 Apr, 2015 1 commit
  23. 20 Apr, 2015 2 commits
    • V S Murthy Sidagam's avatar
      Bug #16861371 SSL_OP_NO_COMPRESSION NOT DEFINED · f07d9957
      V S Murthy Sidagam authored
      Post-push change: the change was missed in mysql-5.5
      (fixing a compiler warning/error).
      f07d9957
    • V S Murthy Sidagam's avatar
      Bug #16861371 SSL_OP_NO_COMPRESSION NOT DEFINED · e7ad7f05
      V S Murthy Sidagam authored
      Description:
      The latest mysql-5.5 source can't be built with openssl 0.9.8e.
      
      Analysis:
      Older OpenSSL versions (prior to OpenSSL 1.0) don't have 'SSL_OP_NO_COMPRESSION' defined.
      Hence the build fails with SSL_OP_NO_COMPRESSION undeclared.
      
      Fix:
      Added conditional compilation around 'SSL_OP_NO_COMPRESSION':
      if 'SSL_OP_NO_COMPRESSION' is defined, make the SSL_set_options() call (OpenSSL 1.0 versions);
      otherwise make the sk_SSL_COMP_zero() call (OpenSSL 0.9.8).
      e7ad7f05
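      A sketch of the conditional compilation described above, using public OpenSSL calls: when SSL_OP_NO_COMPRESSION exists (OpenSSL 1.0+) it is set via SSL_set_options(); otherwise (OpenSSL 0.9.8) the global compression-method stack is emptied with sk_SSL_COMP_zero(). This mirrors the idea in the commit, not the exact vio code.
      
        #include <openssl/ssl.h>
        
        static void disable_ssl_compression(SSL *ssl)
        {
        #ifdef SSL_OP_NO_COMPRESSION
          SSL_set_options(ssl, SSL_OP_NO_COMPRESSION);           // OpenSSL 1.0 and later
        #else
          (void) ssl;
          // OpenSSL 0.9.8: empty the global stack of compression methods instead.
          sk_SSL_COMP_zero(SSL_COMP_get_compression_methods());
        #endif
        }
        
        int main()
        {
          SSL_library_init();
          SSL_CTX *ctx= SSL_CTX_new(SSLv23_method());
          SSL *ssl= (ctx != NULL) ? SSL_new(ctx) : NULL;
          if (ssl != NULL)
          {
            disable_ssl_compression(ssl);
            SSL_free(ssl);
          }
          if (ctx != NULL)
            SSL_CTX_free(ctx);
          return 0;
        }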
  24. 17 Apr, 2015 1 commit
    • Mauritz Sundell's avatar
      Bug#20814396 PB2 IS SECRET ABOUT WHAT UNIT TESTS IT RUN · 30c14893
      Mauritz Sundell authored
      One cannot see in PB2 test logs which unit tests have been run
      and passed.
      
      This patch adds an option, --unit-tests-report, to mtr which
      includes the ctest report in the mtr output.  It will also turn on
      unit testing if it is not explicitly turned off with --no-unit-tests
      or equivalent.
      
      In manual runs one can always look at the ctest.log file in the mtr
      vardir.
      
      --unit-tests is replaced with --unit-tests-report in files under
      mysql-test/collections/ to activate the report in PB2.
      30c14893