1. 02 Jul, 2007 1 commit
  2. 30 Jun, 2007 1 commit
  3. 29 Jun, 2007 7 commits
  4. 28 Jun, 2007 3 commits
    • Bug#25513 · dba70720
      unknown authored
        "Federared Transactions Failure"
        Bug occurs when the user performs an operation which inserts more than 
        one row into the federated table and the federated table references a 
        remote table stored within a transactional storage engine. When the
        insert operation for any one row in the statement fails due to 
        constraint violation, the federated engine is unable to perform 
        statement rollback and so the remote table contains a partial commit. 
        The user would expect a statement to perform the same so a statement 
        rollback is expected.
        This bug was fixed by implementing  bulk-insert handling into the
        federated storage engine. This will relieve the bug for most common
        situations by enabling the generation of a multi-row insert into the
        remote table and thus permitting the remote table to perform 
        statement rollback when neccessary.
        The multi-row insert is limited to the maximum packet size between 
        servers and should the size overflow, more than one insert statement 
        will be sent and this bug will reappear. Multi-row insert is disabled
        when an "INSERT...ON DUPLICATE KEY UPDATE" is being performed.
        The bulk-insert handling will offer a significant performance boost 
        when inserting a large number of small rows.
      This patch builds on Bug29019 and Bug25511
      
      
      sql/ha_federated.cc:
        bug25513
          new member methods:
            start_bulk_insert() - initializes memory for bulk insert
            end_bulk_insert() - sends any remaining bulk insert and frees memory
            append_stmt_insert() - creates the INSERT statement
      sql/ha_federated.h:
        bug25513
          new member value:
            bulk_insert
          new member methods:
            start_bulk_insert(), end_bulk_insert(), append_stmt_insert()
          make member methods private:
            read_next(), index_read_idx_with_result_set()
      mysql-test/r/federated_innodb.result:
        New BitKeeper file ``mysql-test/r/federated_innodb.result''
      mysql-test/t/federated_innodb-slave.opt:
        New BitKeeper file ``mysql-test/t/federated_innodb-slave.opt''
      mysql-test/t/federated_innodb.test:
        New BitKeeper file ``mysql-test/t/federated_innodb.test''
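      A minimal standalone sketch of the batching idea described above, assuming
      nothing about the real ha_federated internals: row tuples are appended to a
      single multi-row INSERT and the buffered statement is flushed before it
      would exceed the maximum packet size (printing stands in for sending the
      statement to the remote server).

        // Illustrative sketch only -- not the actual ha_federated code.
        #include <cstddef>
        #include <iostream>
        #include <string>
        #include <utility>

        class BulkInsertBuffer {
        public:
          BulkInsertBuffer(std::string head, std::size_t max_packet)
            : head_(std::move(head)), max_packet_(max_packet) {}

          // Append one "(v1,v2,...)" tuple; flush first if it would not fit.
          void add_row(const std::string &values_tuple) {
            std::size_t needed = values_tuple.size() + 1;   // tuple plus separator
            if (!stmt_.empty() && stmt_.size() + needed > max_packet_)
              flush();                       // overflow: more than one INSERT is sent
            if (stmt_.empty())
              stmt_ = head_;                 // "INSERT INTO `t` (...) VALUES "
            else
              stmt_ += ',';
            stmt_ += values_tuple;
          }

          // Send whatever is buffered as one statement (end_bulk_insert() analogue).
          void flush() {
            if (stmt_.empty()) return;
            std::cout << stmt_ << ";\n";     // stand-in for sending to the remote server
            stmt_.clear();
          }

        private:
          std::string head_;
          std::size_t max_packet_;
          std::string stmt_;
        };

        int main() {
          BulkInsertBuffer buf("INSERT INTO `t1` (`a`,`b`) VALUES ", 1024);
          for (int i = 0; i < 100; ++i)
            buf.add_row("(" + std::to_string(i) + ",'x')");
          buf.flush();
          return 0;
        }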
    • Bug#25511 · 94beb7cd
      unknown authored
        "Federated INSERT failures"
        Federated does not correctly handle "INSERT...ON DUPLICATE KEY UPDATE"
        However, implementing such support is not reasonably possible without
        increasing complexity of the storage engine: checking that constraints
        on remote server match local server and parsing error messages.
        This patch causes 'ON DUPLICATE KEY' to fail with ER_DUP_KEY message
        if a conflict occurs and not to fail silently.
      
      
      include/my_base.h:
        bug25511
          new storage engine hint: HA_EXTRA_INSERT_WITH_UPDATE
      mysql-test/r/federated.result:
        test for bug25511
      mysql-test/t/federated.test:
        test for bug25511
      sql/ha_federated.cc:
        bug25511
          implement support for handling HA_EXTRA_INSERT_WITH_UPDATE hint
      sql/ha_federated.h:
        bug25511
          new property: insert_dup_update
      sql/sql_insert.cc:
        bug25511
          implement support for HA_EXTRA_INSERT_WITH_UPDATE
          When checking duplicates flag, if it is DUP_UPDATE, send hint
          to the storage engine.
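      A toy sketch of the hint handling described above, under the assumption
      that the hint reaches the engine through its extra() method; the class,
      constant, and error code below are stand-ins, not the real
      HA_EXTRA_INSERT_WITH_UPDATE value or server error numbers.

        // Illustrative stand-in for the federated handler, not real server code.
        #include <cstdio>
        #include <set>

        enum { INSERT_WITH_UPDATE_HINT = 1 };   // stand-in for the HA_EXTRA hint
        enum { OK = 0, ERR_DUP_KEY = 1 };       // stand-in return codes

        class ToyFederatedHandler {
        public:
          int extra(int operation) {
            if (operation == INSERT_WITH_UPDATE_HINT)
              insert_dup_update_ = true;        // remember the hint for write_row()
            return OK;
          }

          int write_row(int key) {
            if (!remote_keys_.insert(key).second && insert_dup_update_) {
              // A local engine could resolve the conflict itself; a federated
              // engine cannot, so it reports a duplicate-key error instead of
              // letting the conflict pass silently.
              return ERR_DUP_KEY;
            }
            return OK;
          }

        private:
          bool insert_dup_update_ = false;
          std::set<int> remote_keys_;
        };

        int main() {
          ToyFederatedHandler h;
          h.extra(INSERT_WITH_UPDATE_HINT);     // sent when duplicates == DUP_UPDATE
          std::printf("%d\n", h.write_row(1));  // 0
          std::printf("%d\n", h.write_row(1));  // duplicate -> error, not silence
          return 0;
        }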
    • Bug#29019 · 0e5e884b
      unknown authored
        "REPLACE/INSERT IGNORE/UPDATE IGNORE doesn't work"
        Federated does not record neccessary HA_EXTRA flags in order to
        support REPLACE/INSERT IGNORE/UPDATE IGNORE.
        Implement ::extra() to capture flags neccessary for functionality.
      New function append_ident() to better escape identifiers consistantly.
      
      
      mysql-test/r/federated.result:
        test for bug29019
      mysql-test/t/federated.test:
        test for bug29019
      sql/ha_federated.cc:
        Bug29019
          Federated does not record the HA_EXTRA flags necessary to
          support REPLACE/INSERT IGNORE/UPDATE IGNORE.
          Implement ::extra() to capture the flags necessary for this
          functionality.
        New function append_ident() to escape identifiers more consistently.
      sql/ha_federated.h:
        bug29019
          add 2 member values to ha_federated class
            ignore_duplicates and replace_duplicates.
          add 1 member method to ha_federated class
            extra()
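      The commit message does not show append_ident()'s signature, so the
      following is a hypothetical standalone version of the usual escaping
      technique it describes: wrap the identifier in the quoting character and
      double any embedded quote characters.

        // Hypothetical sketch of consistent identifier escaping.
        #include <iostream>
        #include <string>

        static std::string quote_ident(const std::string &name, char q = '`') {
          std::string out;
          out.reserve(name.size() + 2);
          out += q;
          for (char c : name) {
            if (c == q) out += q;   // escape by doubling the quote character
            out += c;
          }
          out += q;
          return out;
        }

        int main() {
          std::cout << quote_ident("weird`name") << "\n";   // prints `weird``name`
          return 0;
        }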
  5. 27 Jun, 2007 2 commits
    • BUG#29299 - repeatable myisam fulltext index corruption · 030d98d3
      unknown authored
      A fulltext index may become corrupted by certain gbk characters.
      
      The problem was that when skipping leading non-true-word-characters,
      we assumed that these characters are always 1 byte long. This is not
      the case with the gbk character set, where non-true-word-characters
      may be 2 bytes long (see the sketch at the end of this entry).
      
      Affects 5.0 only.
      
      
      myisam/ft_parser.c:
        Leading non-true-word-characters may also be multi-byte (e.g. in
        the gbk character set).
      mysql-test/r/fulltext2.result:
        A test case for BUG#29299.
      mysql-test/t/fulltext2.test:
        A test case for BUG#29299.
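      A standalone sketch of the fix described above, with a hypothetical
      mb_char_len() helper standing in for the charset library's routine: the
      skip loop advances by the character's byte length instead of always by
      one byte, so the trailing byte of a 2-byte gbk character is never
      mistaken for the start of a word.

        // Illustrative sketch, not myisam/ft_parser.c.
        #include <cstddef>
        #include <cstdio>

        // Hypothetical stand-in for the charset routine returning the byte
        // length (>= 1) of the character starting at p; rough gbk lead-byte check.
        static std::size_t mb_char_len(const unsigned char *p, const unsigned char *end) {
          if (*p >= 0x81 && *p <= 0xFE && p + 1 < end)
            return 2;
          return 1;
        }

        // Toy definition of a "true word character" (ASCII alphanumerics only).
        static bool is_true_word_char(unsigned char c) {
          return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
                 (c >= '0' && c <= '9');
        }

        static const unsigned char *skip_leading_non_word(const unsigned char *p,
                                                          const unsigned char *end) {
          while (p < end && !is_true_word_char(*p))
            p += mb_char_len(p, end);   // the buggy code effectively did: ++p
          return p;
        }

        int main() {
          // 0x81 0x61 models a 2-byte non-word gbk character whose second byte
          // happens to be ASCII 'a'.
          const unsigned char doc[] = {0x81, 'a', 'w', 'o', 'r', 'd', 0};
          const unsigned char *w = skip_leading_non_word(doc, doc + sizeof(doc) - 1);
          std::printf("%s\n", (const char *) w);   // prints "word", not "aword"
          return 0;
        }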
    • BUG#29207 - archive table reported as corrupt by check table (P1) · a38b1ae7
      unknown authored
      CHECK TABLE against an ARCHIVE table may falsely report table
      corruption, or cause a server crash.
      
      Fixed by using the proper record buffer for CHECK TABLE (see the
      sketch at the end of this entry).
      
      Affects both 5.0 and 5.1.
      
      
      mysql-test/r/archive.result:
        A test case for BUG#28916.
      mysql-test/t/archive.test:
        A test case for BUG#28916.
      sql/ha_archive.cc:
        We call Field::get_length() from get_row(). Field::get_length() assumes
        that the row was read into the table->record[0] buffer, which is not
        the case when we check a table. As a result we get a wrongly
        initialized blob length.
        
        Use table->record[0] as the record buffer for CHECK TABLE instead.
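      A toy model of the buffer mismatch described above (not the real ARCHIVE
      code): the blob field reads its length from the buffer it was bound to
      (think table->record[0]), so unpacking the row into any other buffer
      leaves that length stale.

        // Illustrative sketch of why the record buffer matters.
        #include <cstddef>
        #include <cstdio>
        #include <cstring>

        struct ToyBlobField {
          const unsigned char *bound_buf;   // buffer the field object is bound to
          std::size_t offset;               // where its 1-byte length lives
          std::size_t get_length() const { return bound_buf[offset]; }
        };

        int main() {
          unsigned char record0[8] = {0};   // the buffer fields are bound to
          unsigned char other[8]   = {0};   // separate buffer, as in the old check path
          const ToyBlobField blob = {record0, 0};

          const unsigned char row[8] = {5, 'h', 'e', 'l', 'l', 'o', 0, 0};

          std::memcpy(other, row, sizeof(row));     // row lands in the wrong buffer
          std::printf("stale length:   %zu\n", blob.get_length());   // prints 0

          std::memcpy(record0, row, sizeof(row));   // the fix: use record[0] itself
          std::printf("correct length: %zu\n", blob.get_length());   // prints 5
          return 0;
        }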
  6. 25 Jun, 2007 3 commits
  7. 24 Jun, 2007 6 commits
    • Merge olga.mysql.com:/home/igor/mysql-5.0-opt · f30db309
      unknown authored
      into  olga.mysql.com:/home/igor/dev-opt/mysql-5.0-opt-bug25602
      
      
      sql/sql_select.cc:
        Auto merged
    • Merge chilla.local:/home/mydev/mysql-5.0-amain · b6ec51eb
      unknown authored
      into  chilla.local:/home/mydev/mysql-5.0-axmrg
      
      
    • Merge chilla.local:/home/mydev/mysql-5.0-ateam · 5b7e9882
      unknown authored
      into  chilla.local:/home/mydev/mysql-5.0-axmrg
      
      
    • BUG#15787 - MySQL crashes when archive table exceeds 2GB · b3b8d516
      unknown authored
      The max compressed file size was calculated incorrectly, causing a
      server crash on INSERT.
      
      With this patch we use the proper max file size provided by zlib
      (see the sketch at the end of this entry).
      
      Affects 5.0 only.
      
      
      sql/ha_archive.cc:
        When calculating the max compressed file size, use the real offset
        size provided by zlib instead of sizeof(z_off_t), which may differ
        from the actual offset size.
        
        When we are about to write and the data file is almost full, flush
        the gzio buffer to get an accurate real file size.
      mysql-test/r/archive-big.result:
        New BitKeeper file ``mysql-test/r/archive-big.result''
      mysql-test/t/archive-big.test:
        New BitKeeper file ``mysql-test/t/archive-big.test''
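      A hedged sketch of the size calculation described above, not the actual
      ha_archive.cc change: zlibCompileFlags() reports the z_off_t width zlib
      was built with (bits 6-7), which is used instead of sizeof(z_off_t) as
      seen by the server's own compilation.

        // Requires zlib; link with -lz.
        #include <zlib.h>
        #include <cstdio>

        static unsigned long long archive_max_file_size() {
          unsigned long flags = zlibCompileFlags();
          unsigned off_bits;
          switch ((flags >> 6) & 3) {       // 0 = 16-bit, 1 = 32-bit, 2 = 64-bit, 3 = other
            case 0:  off_bits = 16; break;
            case 1:  off_bits = 32; break;
            case 2:  off_bits = 64; break;
            default: off_bits = 32; break;  // conservative fallback
          }
          // z_off_t is signed, so the ceiling is 2^(bits-1) - 1: exactly the 2GB
          // limit where the reported crash happened on 32-bit-offset builds.
          return (1ULL << (off_bits - 1)) - 1ULL;
        }

        int main() {
          std::printf("max archive data file size: %llu bytes\n",
                      archive_max_file_size());
          // The commit also flushes the compressed stream when the file is almost
          // full, so the size check sees an accurate on-disk size.
          return 0;
        }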
    • Merge gleb.loc:/home/uchum/work/bk/5.0 · fec835f1
      unknown authored
      into  gleb.loc:/home/uchum/work/bk/5.0-opt
      
      
      sql/log_event.cc:
        Auto merged
    • Fixed bug #25602. A query with DISTINCT in the select list to which · e009b764
      unknown authored
      the loose scan optimization for grouping queries was applied returned 
      a wrong result set when the query was used with the SQL_BIG_RESULT
      option.
      
      The SQL_BIG_RESULT option forces the use of a sorting algorithm for
      grouping queries instead of employing a suitable index. The current
      loose scan optimization is applied only to single-table queries when
      the suitable index is covering. It does not make sense to use a sorting
      algorithm in this case. However, the create_sort_index function does not
      take into account the possible choice of the loose scan to implement the
      DISTINCT operator, which makes sorting unnecessary. Moreover, the current
      implementation of the loose scan for queries with DISTINCT assumes that
      sorting will never happen. Thus in this case create_sort_index should
      not call the filesort function.
      
      
      mysql-test/r/group_min_max.result:
        Added a test case for bug #25602.
      mysql-test/t/group_min_max.test:
        Added a test case for bug #25602.
  8. 23 Jun, 2007 3 commits
    • Merge gleb.loc:/home/uchum/work/bk/4.1-opt · b462e06e
      unknown authored
      into  gleb.loc:/home/uchum/work/bk/5.0-opt
      
      
    • Merge gleb.loc:/home/uchum/work/bk/5.0-opt-29095 · d37471b4
      unknown authored
      into  gleb.loc:/home/uchum/work/bk/5.0-opt
      
      
    • Fixed bug #29095. · 1bab1ddc
      unknown authored
      INSERT into a table from a SELECT on the same table
      with ORDER BY and LIMIT was inserting different data
      than the standalone SELECT ... ORDER BY ... LIMIT returns.
      
      One part of the patch for bug #9676 improperly pushed the
      LIMIT down to the temporary table in the presence of an
      ORDER BY clause.
      That part has been removed.
      
      
      sql/sql_select.cc:
        Fixed bug #29095.
        One part of the patch for bug #9676 improperly pushed the
        LIMIT down to the temporary table in the presence of an
        ORDER BY clause.
        That part has been removed.
      mysql-test/t/insert_select.test:
        Expanded the test case for bug #9676.
        Created a test case for bug #29095.
      mysql-test/r/insert_select.result:
        Expanded the test case for bug #9676.
        Created a test case for bug #29095.
  9. 22 Jun, 2007 9 commits
    • Merge trift2.:/MySQL/M50/mysql-5.0 · 8541b56c
      unknown authored
      into  trift2.:/MySQL/M50/push-5.0
      
      
    • Add the "nist" suite to the "test-bt" target, · 054201f4
      unknown authored
      to be run only if it is available on the machine.
      
      
    • Merge gkodinov@bk-internal.mysql.com:/home/bk/mysql-5.0-opt · f3940eba
      unknown authored
      into  magare.gmz:/home/kgeorge/mysql/autopush/B28400-5.0-opt
      
      
    • Bug #27383: Crash in test "mysql_client_test" · fe036d98
      unknown authored
      The C compiler's optimizer may decide that data accesses made
      through a pointer of a different type are not related to
      the original data (strict aliasing).
      This is what happens in fetch_long_with_conversion()
      when called as part of mysql_stmt_fetch(): it tries
      to check for truncation errors by first storing float
      (and other types of) data into a char * buffer and then
      accessing it through a float pointer.
      This is done to prevent the effects of excess precision
      when using FPU registers.
      However, the doublestore() macro converts a double pointer
      to a union pointer, which violates the strict aliasing rule.
      Fixed by making the intermediary variables volatile (so as
      not to re-introduce the excess precision bug) and using
      the intermediary value instead of the char * buffer; see the
      sketch at the end of this entry.
      Note that there can be loss of precision for both signed
      and unsigned 64-bit integers converted to double and back,
      so the check must stay (also for compatibility reasons).
      Based on the excellent analysis in bug 28400.
      
      
      libmysql/libmysql.c:
        Bug #27383: avoid pointer aliasing problems while
        not re-introducing the Intel FPU gcc bug.
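      A small sketch of the two ingredients described above, not the libmysql
      code: the truncation check compares against a volatile intermediary
      (which forces a real 64-bit store and discards excess FPU precision)
      instead of re-reading the value through a differently typed pointer into
      a char buffer, which is what runs afoul of strict aliasing.

        // Illustrative round-trip check: does a 64-bit integer survive double?
        #include <cstdio>

        static bool longlong_fits_in_double(long long value) {
          volatile double d = (double) value;   // volatile: spill out of FPU registers
          return (long long) d == value;        // compare via the intermediary itself
        }

        int main() {
          std::printf("%d\n", longlong_fits_in_double(1LL << 52));        // 1: exact
          std::printf("%d\n", longlong_fits_in_double((1LL << 62) + 1));  // 0: rounded
          return 0;
        }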
    • Merge trift2.:/MySQL/M50/mysql-5.0 · bdc32139
      unknown authored
      into  trift2.:/MySQL/M50/push-5.0
      
      
    • Merge bk-internal.mysql.com:/home/bk/mysql-5.0-maint · f0dbd310
      unknown authored
      into  maint1.mysql.com:/data/localhome/tsmith/bk/maint/50
      
      
    • Merge bk@192.168.21.1:mysql-5.0-opt · e434a5ca
      unknown authored
      into  mysql.com:/home/hf/work/28839/my50-28839
      
      
    • rpl_skip_error.test fixed · cb606a66
      unknown authored
      
      mysql-test/r/rpl_skip_error.result:
        test result fixed
      mysql-test/t/rpl_skip_error.test:
        inconsistent column results hidden
    • Bug #29138 'kill' fails in pushbuild · 37344c68
      unknown authored
      The reason "reap;" succeeds unexpectedly is that the query was completing (almost always) and the network buffer was big enough (sometimes) to store the query result on Windows, meaning the response was completely sent before the server thread could be killed.
      
      Therefore we use a much longer-running query that does not have a chance to complete fully before the reap happens, testing the kill properly.
      
      
      mysql-test/r/kill.result:
        We use a much longer-running query that doesn't have a chance to
        fully complete before the reap happens.
      mysql-test/t/kill.test:
        We use a much longer-running query that doesn't have a chance to
        fully complete before the reap happens.
  10. 21 Jun, 2007 5 commits