1. 04 Jul, 2007 1 commit
    • Bug#26827 - table->read_set is set incorrectly, causing update of a different column · dc82068c
      istruewing@chilla.local authored
      
      For efficiency, some storage engines do not read the complete record
      for an update, but only the columns required for selecting the rows.
      
      When updating a row of a partitioned table, if a modified column is
      part of the partition or subpartition expression, the row may need
      to move from one [sub]partition to another. This is done by
      inserting the new row into the target [sub]partition and deleting
      the old row from the originating one. The insert requires a
      complete record.
      
      If such an engine was used for a partitioned table, update_row()
      did not have a complete record, and the implicitly executed
      write_row() received an incomplete record.
      
      This is solved by instructing the engine to read a complete record
      if one of the columns of the partition or subpartition expression
      is to be updated (see the sketch below).
      
      No testcase. This can be reproduced with Falcon only; the engines
      contained in standard 5.1 always return complete records on
      update.
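      
      A minimal self-contained sketch of the idea, using std::bitset as a
      stand-in for the server's MY_BITMAP column sets (all names and bit
      patterns here are illustrative, not the actual fix):
      
        #include <bitset>
        #include <cstdio>
        
        int main() {
          std::bitset<8> read_set("00000110");    // columns read to select rows
          std::bitset<8> write_set("00000010");   // columns being updated
          std::bitset<8> part_fields("00000011"); // columns in the partition expression
        
          // If any updated column feeds the partition expression, the row may
          // move between [sub]partitions; request a complete record so the
          // implicit write_row() has every column value for the insert.
          if ((write_set & part_fields).any())
            read_set.set();  // read all columns
        
          printf("read_set = %s\n", read_set.to_string().c_str());
          return 0;
        }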
  2. 03 Jul, 2007 2 commits
  3. 02 Jul, 2007 3 commits
  4. 01 Jul, 2007 5 commits
  5. 30 Jun, 2007 5 commits
  6. 29 Jun, 2007 21 commits
  7. 28 Jun, 2007 3 commits
    • Bug#25513 "Federated Transactions Failure" · fc241de3
      antony@ppcg5.local authored
        The bug occurs when a statement inserts more than one row into a
        federated table and the federated table references a remote table
        stored within a transactional storage engine. When the insert of
        any one row in the statement fails due to a constraint violation,
        the federated engine is unable to perform a statement rollback,
        so the remote table is left with a partial commit. The user
        expects the statement to succeed or fail as a whole, so a
        statement rollback is expected.
        The bug was fixed by implementing bulk-insert handling in the
        federated storage engine. This relieves the problem in the most
        common situations by generating a single multi-row insert for the
        remote table, thus permitting the remote table to perform a
        statement rollback when necessary.
        The multi-row insert is limited to the maximum packet size
        between servers; should the size overflow, more than one insert
        statement will be sent and the bug will reappear (see the sketch
        below). Multi-row insert is disabled when an
        "INSERT...ON DUPLICATE KEY UPDATE" is being performed.
        The bulk-insert handling offers a significant performance boost
        when inserting a large number of small rows.
      This patch builds on Bug29019 and Bug25511
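        
        A minimal self-contained sketch of the batching idea, with
        hypothetical names (the actual ha_federated code differs): rows
        accumulate into one multi-row INSERT, and the statement is
        flushed before the packet limit would be exceeded.
        
          #include <cstdio>
          #include <string>
          
          class BulkInsertBuffer {
            std::string stmt_;   // accumulated "INSERT ... VALUES (...),(...)"
            size_t max_packet_;  // upper bound, analogous to max_allowed_packet
          public:
            explicit BulkInsertBuffer(size_t max_packet) : max_packet_(max_packet) {}
          
            // Append one row's VALUES clause; returns false if the statement
            // would exceed the packet limit, so the caller must flush first.
            bool append(const std::string &row_values) {
              if (stmt_.empty()) {
                stmt_ = "INSERT INTO remote_t VALUES " + row_values;
                return true;
              }
              if (stmt_.size() + 1 + row_values.size() > max_packet_)
                return false;
              stmt_ += "," + row_values;
              return true;
            }
          
            void flush() {  // stand-in for sending the statement to the remote server
              if (!stmt_.empty())
                printf("send: %s\n", stmt_.c_str());
              stmt_.clear();
            }
          };
          
          int main() {
            BulkInsertBuffer buf(48);  // tiny limit, to force a second statement
            const char *rows[] = {"(1,'a')", "(2,'b')", "(3,'c')", "(4,'d')"};
            for (const char *r : rows)
              if (!buf.append(r)) { buf.flush(); buf.append(r); }
            buf.flush();
            return 0;
          }
        
        Note that when a flush is forced, more than one statement goes to
        the remote server, which is exactly the residual case in which
        the bug can still reappear.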
    • Bug#25511 "Federated INSERT failures" · b0b0b0fb
      antony@ppcg5.local authored
        Federated does not correctly handle
        "INSERT...ON DUPLICATE KEY UPDATE". However, implementing such
        support is not reasonably possible without increasing the
        complexity of the storage engine: it would have to check that
        constraints on the remote server match those on the local server
        and parse error messages.
        This patch causes 'ON DUPLICATE KEY' to fail with an ER_DUP_KEY
        message if a conflict occurs, rather than failing silently (see
        the sketch below).
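        
        A minimal sketch of the changed behaviour, with stubbed names
        (the real engine returns a handler-level duplicate-key error,
        which the server then reports to the user):
        
          #include <cstdio>
          
          // Value of HA_ERR_FOUND_DUPP_KEY in MySQL's include/my_base.h.
          static const int HA_ERR_FOUND_DUPP_KEY = 121;
          
          // Stand-in for the federated write path during
          // INSERT ... ON DUPLICATE KEY UPDATE: a duplicate-key conflict
          // reported by the remote server now fails loudly.
          int write_row_stub(bool remote_duplicate) {
            return remote_duplicate ? HA_ERR_FOUND_DUPP_KEY : 0;
          }
          
          int main() {
            printf("result: %d\n", write_row_stub(true));  // prints 121
            return 0;
          }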
    • Bug #29157: UPDATE, changed rows incorrect · 71aaf52a
      gkodinov/kgeorge@magare.gmz authored
      Sometimes the number of actually updated rows (those with changed
      column values) cannot be determined at the server level alone,
      e.g. if the storage engine does not return enough column values to
      verify it. In such cases the only dependable way is to let the
      storage engine return that information where possible.
      Fixed the bug at the server level by providing a way for the
      storage engine to report whether it actually updated the row or
      the old and new column values are the same: it can do so by
      returning HA_ERR_RECORD_IS_THE_SAME from ha_update_row() (see the
      sketch below). Note that a storage engine may choose not to
      return this status code, so the behaviour remains storage engine
      specific.
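      
      A minimal sketch of the convention, with a stubbed update call
      (the error value below is a placeholder; the real
      HA_ERR_RECORD_IS_THE_SAME constant is defined in the server's
      include/my_base.h):
      
        #include <cstdio>
        
        static const int HA_ERR_RECORD_IS_THE_SAME = 169;  // placeholder value
        
        // Stand-in for handler::ha_update_row(): report "the same" when
        // the old and new column values are identical.
        int ha_update_row_stub(bool old_equals_new) {
          return old_equals_new ? HA_ERR_RECORD_IS_THE_SAME : 0;
        }
        
        int main() {
          long updated = 0;
          int error = ha_update_row_stub(true);
          if (error == HA_ERR_RECORD_IS_THE_SAME)
            error = 0;       // identical row: success, but not counted as updated
          else if (error == 0)
            updated++;       // column values actually changed
          printf("updated rows: %ld, error: %d\n", updated, error);
          return 0;
        }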