1. 01 Dec, 2006 1 commit
    • BUG#23196 - MySQL server does not exit / shutdown when storage engine returns errno 12 · 5e733ee8
      svoj@mysql.com/april.(none) authored
      
      If there is not enough memory to store or update a blob record
      (while allocating the record buffer), MyISAM marks the table as
      crashed.
      
      With this fix, MyISAM attempts to roll the index back and
      returns an error instead of marking the table as crashed.
      
      Affects MyISAM tables with blobs only. No test case for this fix.
  2. 07 Nov, 2006 1 commit
  3. 01 Nov, 2006 1 commit
  4. 27 Oct, 2006 1 commit
  5. 26 Oct, 2006 1 commit
  6. 25 Oct, 2006 2 commits
    • Bug#22119 - Changing MI_KEY_BLOCK_LENGTH makes a wrong myisamchk · 7dc7af51
      istruewing@chilla.local authored
      When compiling with a default key block size greater than the
      smallest key block size used in a table, checking that table
      failed with bogus errors. The table was marked corrupt. This
      affected myisamchk and the server.
      
      The problem was that the default key block size was used in
      places where a size less than or equal to the block size of
      the index being checked was required.
      
      We now use the key block size of the particular index when
      checking.
      
      A test case is available for later versions only.
    • BUG#22053 - REPAIR table can crash server for some really damaged MyISAM tables · cdb83585
      svoj@mysql.com/april.(none) authored
      
      When unpacking a blob column from a broken row, a server crash
      could happen. This was most likely when trying to repair a
      table using either REPAIR TABLE or myisamchk, though it could
      also happen when accessing a broken row with other SQL
      statements like SELECT if the table is not marked as crashed.
      
      Fixed a ulong overflow when trying to extract a blob from a
      broken row.
      
      Affects MyISAM only.
  7. 20 Oct, 2006 1 commit
  8. 19 Oct, 2006 4 commits
  9. 18 Oct, 2006 1 commit
    • BUG#23175 - MYISAM crash/repair failed during repair · a2e0059f
      svoj@mysql.com/april.(none) authored
      REPAIR TABLE could crash the server if there is not sufficient
      memory (myisam_sort_buffer_size) to operate. This affects not
      only repair, but all statements that create an index by sort:
      repair by sort, parallel repair, and bulk insert.
      
      Return an error if there is not sufficient memory to store at
      least one key per BUFFPEK.
      
      Also fixed a memory leak when thr_find_all_keys returns an
      error.
  10. 17 Oct, 2006 1 commit
  11. 16 Oct, 2006 3 commits
  12. 13 Oct, 2006 2 commits
  13. 11 Oct, 2006 4 commits
  14. 10 Oct, 2006 1 commit
  15. 09 Oct, 2006 2 commits
    • Merge chilla.local:/home/mydev/mysql-4.1-bug8283 into chilla.local:/home/mydev/mysql-4.1-bug8283-one · 1daa6a71
      istruewing@chilla.local authored
    • Bug#8283 - OPTIMIZE TABLE causes data loss · 5f08a831
      istruewing@chilla.local authored
      OPTIMIZE TABLE with myisam_repair_threads > 1 performs a
      non-quick parallel repair. This means that it rebuilds not
      only all indexes, but also the data file.
      
      Non-quick parallel repair works with one thread per index.
      The first of these threads also rebuilds the data file.
      
      The problem was that all threads shared the read io cache on
      the old data file. If there were holes (deleted records) in
      the table, the first thread skipped them, writing only
      contiguous, non-deleted records to the new data file. It then
      built the new index so that its entries pointed to the correct
      record positions. The other threads, however, did not know the
      new record positions and put the positions from the old data
      file into the index.
      
      In the new design, a shared io cache is filled by the first
      thread (the data file writer) with the new contiguous records
      and read by the other threads, so they know the new record
      positions.
      
      Another problem was that for the parallel repair of compressed
      tables a common bit_buff and rec_buff were used. I changed
      this so that thread-specific buffers are used for parallel
      repair.
      
      A similar problem existed for checksum calculation. I made this
      multi-thread safe too.
  16. 08 Oct, 2006 1 commit
  17. 06 Oct, 2006 4 commits
  18. 05 Oct, 2006 3 commits
  19. 03 Oct, 2006 6 commits