1. 20 Oct, 2011 1 commit
    • BUG#11757032 - 49030: OPTIMIZE TABLE BREAKS MYISAM TABLE WHEN
      USING MYISAM_USE_MMAP ON WINDOWS · 3e0491c7
      Sergey Vojtovich authored
      
      When OPTIMIZE TABLE or REPAIR TABLE switches to a new data file,
      the old data file is removed while the memory mapping is still
      active.
      
      With the 5.1 implementation of nt_share_delete() it is not
      permitted to remove a memory-mapped file.
      
      This fix disables memory mapping for mi_repair() operations.
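      
      The underlying constraint, sketched below for illustration only
      (POSIX calls and made-up names, not the actual MyISAM code): a file
      that is still memory-mapped cannot be deleted on Windows, so the
      mapping has to be torn down before the repair path swaps in the new
      data file.
      
        /* Illustrative sketch only; names are hypothetical.  The point of
         * the fix is ordering: unmap the old data file before replacing
         * it, because Windows refuses to remove a file that is mapped. */
        #include <stddef.h>
        #include <stdio.h>      /* rename() */
        #include <sys/mman.h>   /* munmap() */
        
        static int swap_in_repaired_file(const char *data_path,
                                         const char *tmp_path,
                                         void *map, size_t map_len)
        {
          if (map != NULL && munmap(map, map_len) != 0)
            return -1;                        /* unmap the old data file first */
          return rename(tmp_path, data_path); /* now the old file can go away */
        }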
  2. 18 Oct, 2011 1 commit
  3. 13 Oct, 2011 1 commit
  4. 12 Oct, 2011 2 commits
    • Bug#13006367 62487: innodb takes 3 minutes to clean up the adaptive
      hash index at shutdown · 41b97529
      Marko Mäkelä authored
      
      btr_search_disable(): Just drop the entire adaptive hash index,
      without dropping every record separately.
      
      buf_pool_clear_hash_index(): Renamed and simplified from
      buf_pool_drop_hash_index(). Set block->index = NULL for every block in
      the buffer pool. Do not release the btr_search_latch. The caller will
      have to adjust other data structures.
      
      Remove block->is_hashed. It is redundant; it should always be equal
      to block->index != NULL.
      
      Remove btr_search_fully_disabled, btr_search_enabled_mutex, and
      SYNC_SEARCH_SYS_CONF. We drop the AHI in one pass, without releasing
      the btr_search_latch in between.
      
      Replace void* with const rec_t* and add assertions on btr_search_latch
      and btr_search_enabled to ha0ha.h, ha0ha.ic, ha0ha.c.
      
      page_set_max_trx_id(): Ignore the adaptive hash index. I forgot to
      push this in rb:750.
      
      btr0sea.c: After acquiring btr_search_latch, always check for
      block->index == NULL or !btr_search_enabled. We can now set
      block->index=NULL while only holding btr_search_latch in exclusive
      mode. Always acquire btr_search_latch before reading block->index,
      except in shortcuts when testing for block->index == NULL.
      
      ha_clear(), ha_search(): Unused functions, remove.
      
      buf_page_peek_if_search_hashed(): Remove. This function may avoid
      latching a page at the cost of doing a duplicate buf_pool->page_hash
      lookup.
      
      rb:775 approved by Inaam Rana
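      
      A loose sketch of the shape of the change (made-up types and names;
      the real code operates on InnoDB's buf_pool and btr_search_latch):
      one exclusive-latched pass detaches every buffer-pool block from the
      adaptive hash index, instead of deleting hash records one at a time.
      
        /* Sketch only; not InnoDB's actual structures.  The caller holds
         * the search latch in exclusive mode for the whole pass and then
         * frees the hash table itself, so no per-record deletion and no
         * repeated latch release/reacquire is needed. */
        #include <stddef.h>
        
        struct block { const void *index; };  /* NULL = not in the AHI */
        
        static void clear_hash_index(struct block *blocks, size_t n_blocks)
        {
          for (size_t i = 0; i < n_blocks; i++)
            blocks[i].index = NULL;   /* detach block from the hash index */
        }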
    • bug#11766457 - adjusting the tests, which were failing if the system
      time zone is set differently. · c6120de6
      Vinay Fisrekar authored
  5. 05 Oct, 2011 4 commits
    • merge 5.1-mtr => 5.1 · ebaa6006
      Bjorn Munch authored
    • automerge · cff85ac1
      Sergey Glukhov authored
    • Bug#11747970 34660: CRASH WHEN FEDERATED TABLE LOSES CONNECTION DURING INSERT ... SELECT · fcd99c15
      Sergey Glukhov authored
      Problematic query:
      insert ignore into `t1_federated` (`c1`) select `c1` from  `t1_local` a
      where not exists (select 1 from `t1_federated` b where a.c1 = b.c1);
      When this query is killed from another connection, it can lead to a crash.
      The problem is the following:
      an attempt to obtain table statistics for the subselect table in the
      killed query fails with an error, so JOIN::optimize() for the subquery
      fails, but this does not prevent further subquery evaluation.
      At the first subquery execution JOIN::optimize() is called
      (see subselect_single_select_engine::exec()) and fails with
      an error. The 'executed' flag is set to TRUE, which prevents
      further subquery evaluation. On the second call
      JOIN::optimize() is not run because 'JOIN::optimized' is TRUE,
      and for an uncacheable subquery the 'executed' flag is reset
      to FALSE before subquery evaluation. So we lose the 'optimize stage'
      error indication (see subselect_single_select_engine::exec()).
      In other words, the 'executed' flag is used for two purposes: for
      error indication at the JOIN::optimize() stage and as an
      indication that the subquery has been executed. That is wrong,
      because the flag can be reset.
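      
      A hypothetical sketch of the flag problem described above (names do
      not match the server source): one boolean doubles as "subquery was
      executed" and "optimize failed", so resetting it for an uncacheable
      subquery silently discards the earlier optimize-stage error.  Keeping
      the error in its own flag, as sketched, avoids that.
      
        /* Sketch only, hypothetical names. */
        #include <stdbool.h>
        
        struct subselect_engine {
          bool optimized;       /* JOIN::optimize() has been attempted     */
          bool executed;        /* reset per execution if uncacheable      */
          bool optimize_error;  /* optimize-stage failure, kept separately */
        };
        
        static int do_optimize(void) { return 0; } /* stand-in for JOIN::optimize() */
        
        static int engine_exec(struct subselect_engine *e, bool uncacheable)
        {
          if (!e->optimized) {
            e->optimized = true;
            if (do_optimize() != 0)
              e->optimize_error = true;  /* remember the failure separately */
          }
          if (e->optimize_error)
            return 1;                    /* error survives even though ...  */
          if (uncacheable)
            e->executed = false;         /* ... 'executed' may be reset     */
          /* ... evaluate the subquery ... */
          e->executed = true;
          return 0;
        }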
    • Add InnoDB UNIV_SYNC_DEBUG assertions to rw-lock code. · 739c5296
      Marko Mäkelä authored
      rw_lock_x_lock_func(): Assert that the thread is not already holding
      the lock in a conflicting mode (RW_LOCK_SHARED).
      
      rw_lock_s_lock_func(): Assert that the thread is not already holding
      the lock in a conflicting mode (RW_LOCK_EX).
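      
      A rough sketch of the kind of check being added (pthread-based, with
      a single tracked holder for brevity; the real code uses InnoDB's
      rw_lock_t and its UNIV_SYNC_DEBUG bookkeeping): before taking the
      latch in exclusive mode, assert that the calling thread does not
      already hold it in shared mode, which would self-deadlock.
      
        /* Sketch only; debug bookkeeping is simplified to one holder. */
        #include <assert.h>
        #include <pthread.h>
        #include <stdbool.h>
        
        struct dbg_rw_lock {
          pthread_rwlock_t latch;
          pthread_t        s_holder;  /* debug: a thread holding it shared */
          bool             s_held;
        };
        
        static void dbg_rw_x_lock(struct dbg_rw_lock *l)
        {
          /* conflicting mode already held by this thread? */
          assert(!(l->s_held && pthread_equal(l->s_holder, pthread_self())));
          pthread_rwlock_wrlock(&l->latch);
        }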
  6. 04 Oct, 2011 5 commits
  7. 03 Oct, 2011 1 commit
  8. 28 Sep, 2011 1 commit
    • BUG#11758062 - 50206: ER_TOO_BIG_SELECT REFERS TO OUTMODED
      SYSTEM VARIABLE NAME SQL_MAX_JOIN_SIZE · 92d96d14
      Raghav Kapoor authored
      
      BACKGROUND:
      
      ER_TOO_BIG_SELECT refers to SQL_MAX_JOIN_SIZE, which is the
      old name for MAX_JOIN_SIZE.
      
      FIX:
      
      Support for the old name SQL_MAX_JOIN_SIZE is removed in MySQL 5.6;
      the variable is now named MAX_JOIN_SIZE. So errmsg.txt and mysql.cc
      have been updated, and the corresponding result files have been
      updated as well.
  9. 27 Sep, 2011 2 commits
  10. 26 Sep, 2011 2 commits
  11. 22 Sep, 2011 1 commit
  12. 21 Sep, 2011 1 commit
    • Bug 12963823 - Crash in Purge thread under unusual circumstances. · 8d036bcd
      kevin.lewis@oracle.com authored
      The problem occurred when indexes were added between the time that an
      UNDO record was created and the time that the purge thread came around
      to delete the old secondary index entries.  The purge thread would
      hit an assert when trying to build a secondary index entry for
      searching.  The problem was that the old values of those fields were not
      in the UNDO record, since they were not part of an index when the UPDATE
      occurred.
      A test case was added to innodb-index.test.
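      
      A loose sketch of the defensive shape of such a fix (hypothetical
      names, not the actual purge code): if a column needed by a newly
      added secondary index was never logged in the undo record, the old
      index entry cannot be reconstructed, so that purge step is skipped
      instead of asserting.
      
        /* Sketch only; InnoDB's undo records and entry building are far
         * richer than this. */
        #include <stdbool.h>
        #include <stddef.h>
        
        struct undo_rec { const void **old_vals; size_t n_fields; };
        
        static bool build_old_sec_entry(const struct undo_rec *undo,
                                        const size_t *field_nos, size_t n,
                                        const void **entry_out)
        {
          for (size_t i = 0; i < n; i++) {
            size_t f = field_nos[i];
            if (f >= undo->n_fields || undo->old_vals[f] == NULL)
              return false;       /* old value not logged: skip the purge */
            entry_out[i] = undo->old_vals[f];
          }
          return true;            /* caller purges only when this succeeds */
        }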
  13. 20 Sep, 2011 1 commit
  14. 19 Sep, 2011 1 commit
  15. 16 Sep, 2011 2 commits
  16. 15 Sep, 2011 2 commits
  17. 14 Sep, 2011 3 commits
  18. 13 Sep, 2011 2 commits
  19. 12 Sep, 2011 1 commit
    • Bug#12601439 CONSISTENT READ FAILURE IN COLUMN PREFIX INDEX · 607a3e83
      Marko Mäkelä authored
      When there is a secondary index on a column prefix of an externally
      stored column and an entry in the secondary index is shorter than the
      reserved prefix length, it should mean that the secondary index entry
      is holding the complete column value. When comparing this secondary
      index column value to the column in the clustered index row, we must
      compare the entire prefix that was fetched from the clustered
      index. The bug was that we would just compare that the column in the
      clustered index starts with the value found in the secondary index
      column.
      
      This bug affects only the InnoDB Barracuda formats (ROW_FORMAT=DYNAMIC
      and ROW_FORMAT=COMPRESSED), in which columns that are stored off-page
      in the clustered index do not contain any prefix in the clustered
      index record.
      
      row_sel_sec_rec_is_for_blob(): Add the parameter prefix_len, for
      ifield->prefix_len. Add some assertions.
      
      Sorry, I did not manage to produce a test case. This patch does
      produce correct results on the data set that Michael isolated on our
      test machine. That was with the purge and background rollback
      suspended, because they would make the bug go away.
      
      rb:760 approved by Sunny Bains
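      
      A simplified sketch of the corrected comparison (byte-wise only; the
      real row_sel_sec_rec_is_for_blob() also handles collations and
      fetching the externally stored prefix): when the secondary-index
      value is shorter than the reserved prefix length, it must equal the
      entire prefix fetched from the clustered index, not merely be a
      prefix of it.
      
        /* Sketch only; parameter names are illustrative. */
        #include <stdbool.h>
        #include <string.h>
        
        static bool sec_matches_clust_prefix(const unsigned char *clust_prefix,
                                             size_t clust_prefix_len,
                                             const unsigned char *sec_val,
                                             size_t sec_len,
                                             size_t prefix_len)
        {
          if (sec_len < prefix_len)
            /* secondary entry holds the complete column value: require an
             * exact match against the whole fetched prefix */
            return sec_len == clust_prefix_len
                && memcmp(sec_val, clust_prefix, sec_len) == 0;
        
          /* otherwise compare exactly prefix_len bytes */
          return clust_prefix_len >= prefix_len
              && memcmp(sec_val, clust_prefix, prefix_len) == 0;
        }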
  20. 09 Sep, 2011 1 commit
  21. 08 Sep, 2011 2 commits
  22. 07 Sep, 2011 1 commit
    • Use cursors for seeking records in SYS_FOREIGN and SYS_INDEXES from
      DROP_TABLE_PROC(). · 1ebfa44b
      Vasil Dimov authored
      
      With this change I observe a speedup from 6.2s to 0.1s when executing
      DROP_TABLE_PROC() during DROP TABLE with 512 foreign keys, as is done
      in innodb_bug56143.test.
      
      This fixes "Bug#11765460 DROP TABLE USES INEFFICIENT METHODS TO REMOVE
      FKS/INDEXES FROM INNODB SYS TABLES"
      
      Reviewed by:	Marko
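      
      A toy illustration of the cursor idea (an array stands in for
      SYS_FOREIGN and an index for a persistent cursor; the real change is
      inside the internal SQL of DROP_TABLE_PROC()): position once on the
      first matching row and step forward deleting, rather than re-seeking
      from the index root for every row.
      
        /* Sketch only; data structures are toys. */
        #include <stddef.h>
        #include <string.h>
        
        struct row { char for_table[64]; int live; };
        
        static void drop_rows_for_table(struct row *rows, size_t n,
                                        const char *name)
        {
          size_t i = 0;
          while (i < n && strcmp(rows[i].for_table, name) != 0)
            i++;                              /* single seek to first match */
          for (; i < n && strcmp(rows[i].for_table, name) == 0; i++)
            rows[i].live = 0;                 /* delete and step the cursor */
        }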
  23. 06 Sep, 2011 2 commits