1. 22 Nov, 2010 1 commit
    • tdb2: use expansion heuristics from tdb1 · 20defbbc
      Rusty Russell authored
      This reduces the amount of expansion we do.
      
      Before:
$ ./speed 1000000
      Adding 1000000 records:  23210 ns (59193360 bytes)
      Finding 1000000 records:  2387 ns (59193360 bytes)
      Traversing 1000000 records:  2150 ns (59193360 bytes)
      Deleting 1000000 records:  13392 ns (59193360 bytes)
      Re-adding 1000000 records:  11546 ns (59193360 bytes)
      Appending 1000000 records:  29327 ns (91193360 bytes)
      Churning 1000000 records:  33026 ns (91193360 bytes)
      
      After:
      $ ./speed 1000000
      Adding 1000000 records:  17480 ns (61472904 bytes)
      Finding 1000000 records:  2431 ns (61472904 bytes)
      Traversing 1000000 records:  2194 ns (61472904 bytes)
      Deleting 1000000 records:  10948 ns (61472904 bytes)
      Re-adding 1000000 records:  11247 ns (61472904 bytes)
      Appending 1000000 records:  21826 ns (96051424 bytes)
      Churning 1000000 records:  27242 ns (96051424 bytes)
  2. 01 Dec, 2010 1 commit
    • tdb2: shrink free header from 32 to 24 bytes. · 5e30abc6
      Rusty Russell authored
This reduces our minimum key+data length to 8 bytes; we do this by packing
the prev pointer where we used to put the flist pointer, and storing the
flist as an 8-bit index (meaning we can have at most 256 free tables).
      
Note that this has a perverse effect on the size of the database: our
4-byte key and 4-byte data now fit perfectly in a minimal record, so
appending causes us to allocate new records which are 50% larger,
since we detect growing.
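The packing described above might be sketched like this in C; the field layout, shift, and names are illustrative assumptions, not tdb2's actual on-disk format:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical 24-byte free-record header: prev takes the slot the
 * flist pointer used, and the 8-bit flist index is packed into the
 * word that also holds the record length. */
struct free_hdr {
    uint64_t flist_and_len;  /* 8-bit flist index | record length */
    uint64_t prev;           /* previous free record (was flist ptr) */
    uint64_t next;           /* next free record in the list */
};

#define FLIST_SHIFT 48

static uint64_t pack_meta(uint8_t flist, uint64_t len)
{
    return ((uint64_t)flist << FLIST_SHIFT)
           | (len & ((1ULL << FLIST_SHIFT) - 1));
}

static uint8_t meta_flist(uint64_t meta)
{
    return meta >> FLIST_SHIFT;
}

static uint64_t meta_len(uint64_t meta)
{
    return meta & ((1ULL << FLIST_SHIFT) - 1);
}
```

Shrinking the header is what lets an 8-byte key+data payload fit in a minimal record, which in turn triggers the growth heuristic on append.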
      
      Current results of speed test:
      $ ./speed 1000000
      Adding 1000000 records:  23210 ns (59193360 bytes)
      Finding 1000000 records:  2387 ns (59193360 bytes)
      Traversing 1000000 records:  2150 ns (59193360 bytes)
      Deleting 1000000 records:  13392 ns (59193360 bytes)
      Re-adding 1000000 records:  11546 ns (59193360 bytes)
      Appending 1000000 records:  29327 ns (91193360 bytes)
      Churning 1000000 records:  33026 ns (91193360 bytes)
      
      Previous:
      $ ./speed 1000000
      Adding 1000000 records:  28324 ns (67232528 bytes)
      Finding 1000000 records:  2468 ns (67232528 bytes)
      Traversing 1000000 records:  2200 ns (67232528 bytes)
      Deleting 1000000 records:  13083 ns (67232528 bytes)
      Re-adding 1000000 records:  16433 ns (67232528 bytes)
      Appending 1000000 records:  2511 ns (67232528 bytes)
      Churning 1000000 records:  31068 ns (67570448 bytes)
  3. 23 Nov, 2010 1 commit
  4. 01 Dec, 2010 5 commits
    • tdb2: Add speed test to tdb and tdb2 · 076c398e
      Rusty Russell authored
      Current results of speed test:
      $ ./speed 1000000
      Adding 1000000 records:  14726 ns (67244816 bytes)
      Finding 1000000 records:  2844 ns (67244816 bytes)
      Missing 1000000 records:  2528 ns (67244816 bytes)
      Traversing 1000000 records:  2572 ns (67244816 bytes)
      Deleting 1000000 records:  5358 ns (67244816 bytes)
      Re-adding 1000000 records:  9176 ns (67244816 bytes)
      Appending 1000000 records:  3035 ns (67244816 bytes)
      Churning 1000000 records:  18139 ns (67565840 bytes)
      $ ./speed 100000
      Adding 100000 records:  13270 ns (14349584 bytes)
      Finding 100000 records:  2769 ns (14349584 bytes)
      Missing 100000 records:  2422 ns (14349584 bytes)
      Traversing 100000 records:  2595 ns (14349584 bytes)
      Deleting 100000 records:  5331 ns (14349584 bytes)
      Re-adding 100000 records:  5875 ns (14349584 bytes)
      Appending 100000 records:  2751 ns (14349584 bytes)
      Churning 100000 records:  20666 ns (25771280 bytes)
      
      vs tdb1 (with hashsize 100003):
      $ ./speed 1000000
      Adding 1000000 records:  8547 ns (44306432 bytes)
      Finding 1000000 records:  5595 ns (44306432 bytes)
      Missing 1000000 records:  3469 ns (44306432 bytes)
      Traversing 1000000 records:  4571 ns (44306432 bytes)
      Deleting 1000000 records:  12115 ns (44306432 bytes)
      Re-adding 1000000 records:  10505 ns (44306432 bytes)
      Appending 1000000 records:  10610 ns (44306432 bytes)
      Churning 1000000 records:  28697 ns (44306432 bytes)
      $ ./speed 100000
      Adding 100000 records:  6030 ns (4751360 bytes)
      Finding 100000 records:  3141 ns (4751360 bytes)
      Missing 100000 records:  3143 ns (4751360 bytes)
      Traversing 100000 records:  4659 ns (4751360 bytes)
      Deleting 100000 records:  7891 ns (4751360 bytes)
      Re-adding 100000 records:  5913 ns (4751360 bytes)
      Appending 100000 records:  4242 ns (4751360 bytes)
      Churning 100000 records:  15300 ns (4751360 bytes)
    • tdb2: fix tdb_check() return when free table entries missing. · dbbde019
      Rusty Russell authored
      It mistakenly returned -1 meaning "success".
    • c84c6577
    • tdb2: make summary command handle recovery "dead zone" · ec88af5d
      Rusty Russell authored
We can now run summary with a recovery area or a dead zone present.
    • tdb2: cancel transactions on tdb_close · d95645d5
      Rusty Russell authored
      Otherwise we leak memory.
  5. 23 Nov, 2010 6 commits
  6. 17 Nov, 2010 5 commits
    • tdb2: handle chains of free tables · ef9dec60
      Rusty Russell authored
This adds chains of free tables: we choose one at random at the start,
and iterate through the others when the current one is exhausted.
      
      Currently there is no code to actually add a new free table, but we test
      that we can handle it if we add one in future.
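A minimal sketch of that selection strategy; the table array and space counters are toy stand-ins for tdb2's internals:

```c
#include <assert.h>
#include <stdlib.h>

#define NUM_FTABLES 4

/* Toy model: free space remaining in each free table. */
static unsigned ftable_space[NUM_FTABLES];

/* Start at a randomly chosen table, then iterate through the chain
 * when the current one is exhausted; -1 if all are empty. */
static int pick_ftable(void)
{
    int start = rand() % NUM_FTABLES;
    int i;

    for (i = 0; i < NUM_FTABLES; i++) {
        int t = (start + i) % NUM_FTABLES;
        if (ftable_space[t] > 0)
            return t;
    }
    return -1;
}
```

Random placement spreads contention between writers, while the circular walk guarantees that any table with space is eventually found.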
    • tdb2: get rid of zones · d70577b6
      Rusty Russell authored
Zones were a bad idea.  They meant we couldn't simply add records to the
end of the file (which transactions relied upon), and there was no good
heuristic for how to size them.
      
      This patch reverts us to a single free table.
    • tdb2: fix bucket search · 2ecf943a
      Rusty Russell authored
      We were previously jumping straight from the first bucket to the end.
    • tdb2: only adjust size once when growing · e984ef66
      Rusty Russell authored
      We were adding 50% to datalen twice, so move it out of adjust_size and
      make the callers do it.
      
      We also add a test that the heuristic is working at all.
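The fix can be illustrated as follows; adjust_size here mirrors the description above, while MIN_DATA_LEN and alloc_size are assumed names for illustration:

```c
#include <assert.h>
#include <stddef.h>

#define MIN_DATA_LEN 8  /* assumed minimum key+data length */

/* Add to size before the min-length check; no growth factor here. */
static size_t adjust_size(size_t keylen, size_t datalen)
{
    size_t size = keylen + datalen;

    if (size < MIN_DATA_LEN)
        size = MIN_DATA_LEN;
    return size;
}

/* The caller applies the 50% growth heuristic exactly once. */
static size_t alloc_size(size_t keylen, size_t datalen, int want_extra)
{
    if (want_extra)
        datalen += datalen / 2;
    return adjust_size(keylen, datalen);
}
```

With the 50% factor hoisted into the caller, there is a single place where growth room is added, which is what the new heuristic test can verify.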
    • tdb2: remove tailer · d1383862
      Rusty Russell authored
      We don't actually need it.
  7. 15 Nov, 2010 9 commits
    • tdb2: fix tdb_chainlock · c5e3f07a
      Rusty Russell authored
      We can't enlarge the lock without risking deadlock, so tdb_chainlock() can't
      simply grab a one-byte lock; it needs to grab the lock we'd need to protect
      the hash.
      
      In theory, tdb_chainlock_read() could use a one-byte lock though.
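One way to picture this: the chainlock computes the same byte range a hash-protecting lock would use, rather than a single byte it might later need to enlarge. The offsets, group size, and names below are illustrative assumptions, not tdb2's actual lock layout:

```c
#include <assert.h>
#include <stdint.h>

#define HASH_LOCK_START 64  /* assumed start of the hash-lock area */
#define GROUP_BITS 5        /* each lock covers a group of buckets */

struct lock_range {
    uint64_t off;
    uint64_t len;
};

/* Range tdb_chainlock() would take: the whole group protecting this
 * hash chain, not one byte, so it never needs enlarging later. */
static struct lock_range chain_lock_range(uint64_t hash)
{
    struct lock_range r;

    r.off = HASH_LOCK_START + ((hash >> GROUP_BITS) << GROUP_BITS);
    r.len = 1ULL << GROUP_BITS;
    return r;
}
```

Taking the full range up front avoids the classic upgrade deadlock where two holders of overlapping small locks each wait to widen theirs.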
    • tdb2: fix coalesce race #3 · 06a5b1a8
      Rusty Russell authored
      When we're coalescing, we need to drop the lock on the current free list, as
      we've enlarged the block and it may now belong in a different list.
      
Unfortunately (as shown by repeated tdbtorture -n 8), another run can do
the coalescing while we've dropped the lock.  So for this case, we mark
the record with TDB_COALESCING_MAGIC so it doesn't look free.
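The idea, in sketch form (the magic values and struct layout are illustrative, not tdb2's):

```c
#include <assert.h>
#include <stdint.h>

#define FREE_MAGIC       0xFE
#define COALESCING_MAGIC 0xC0

struct rec {
    uint8_t magic;
    uint64_t len;
};

/* A record mid-coalesce must not look free to other coalescers. */
static int looks_free(const struct rec *r)
{
    return r->magic == FREE_MAGIC;
}

/* Stamp the record while still holding the free-list lock, so that
 * once the lock is dropped no concurrent run will touch it. */
static void begin_coalesce(struct rec *r)
{
    r->magic = COALESCING_MAGIC;
}
```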
    • tdb2: add TDB_COALESCING_MAGIC to solve coalescing race. · 56ea2c52
      Rusty Russell authored
      A new special record marker to indicate coalescing is in progress.
    • tdb2: fix coalesce race #2 · d2a4d6b4
      Rusty Russell authored
      When we find a free block, we need to mark it as used before we drop the
      free lock, even though we've removed it from the list.  Otherwise the
      coalescing code can find it.
      
      This means handing the information we need down to lock_and_alloc, which
      also means we know when we're grabbing a "growing" entry, and can relax
      the heuristics about finding a good-sized block in that case.
    • tdb2: coalescing race fix #1 · b5479009
      Rusty Russell authored
      When coalescing, we check the adjacent entry then lock its free list: we
      need to *recheck* after locking, to make sure it's still in that free list.
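The check/lock/recheck pattern might look like this; the per-record list array and lock helpers are toy stand-ins for tdb2's free-list machinery:

```c
#include <assert.h>

#define NUM_RECS 4

static int rec_flist[NUM_RECS];  /* which free list each record is on */
static int locked_list = -1;

static void lock_flist(int l)   { locked_list = l; }
static void unlock_flist(int l) { (void)l; locked_list = -1; }

/* Check the neighbour's list, lock that list, then *recheck* before
 * trusting it; returns 0 if the record moved and we must retry. */
static int coalesce_neighbour(int r)
{
    int list = rec_flist[r];     /* unlocked peek */

    lock_flist(list);
    if (rec_flist[r] != list) {  /* recheck after locking */
        unlock_flist(list);
        return 0;
    }
    /* ... safe to unlink r from `list` here ... */
    unlock_flist(list);
    return 1;
}
```

Between the peek and the lock another process may move the record, so only the state observed after locking can be trusted.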
    • tdb2: minor optimization for set_header · 8afb9681
      Rusty Russell authored
      We actually only need the bottom 5 bits of the hash value, so don't waste
      8 bytes passing it.
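In other words, something like this (the masking helper is an assumed name):

```c
#include <assert.h>
#include <stdint.h>

/* Only the bottom 5 bits of the hash are needed, so a single byte
 * can be passed instead of the full 64-bit value. */
static uint8_t hash_low_bits(uint64_t hash)
{
    return hash & ((1 << 5) - 1);  /* values 0..31 */
}
```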
    • tdb2: hoist adjust_size · 590eee6f
      Rusty Russell authored
      We're going to want it in get_free() in the next patch, so move it upwards.
      Trivial changes, too: add to size before min length check, and rename growing
      to want_extra.
    • tdb2: clean up makefile for tools · fdba839b
      Rusty Russell authored
    • tdb2: extra debugging checks · b371060f
      Rusty Russell authored
  8. 17 Nov, 2010 3 commits
  9. 15 Nov, 2010 6 commits
  10. 14 Nov, 2010 1 commit
  11. 15 Nov, 2010 1 commit
    • ccanlint: fix idempotent handler · f1c96e9d
      Rusty Russell authored
The test for inserting the idempotent header was the wrong way around,
and we now check all headers at once rather than finishing after the
first one.

Also, turn '-' into '_' rather than removing it.
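The '-' to '_' mangling could be sketched as below; make_guard is a hypothetical helper, not ccanlint's actual code:

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Build a header-guard identifier from a filename, turning '-' (and
 * '.') into '_' rather than dropping them. */
static void make_guard(const char *name, char *out, size_t outlen)
{
    size_t i;

    for (i = 0; name[i] && i + 1 < outlen; i++) {
        char c = name[i];
        out[i] = (c == '-' || c == '.') ? '_' : toupper((unsigned char)c);
    }
    out[i] = '\0';
}
```

Keeping the character as '_' preserves word boundaries, so distinct filenames can't collapse into the same guard identifier.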
  12. 14 Nov, 2010 1 commit