1. 17 Jun, 2011 3 commits
  2. 16 Jun, 2011 1 commit
  3. 15 Jun, 2011 1 commit
  4. 11 Jun, 2011 1 commit
  5. 08 Jun, 2011 1 commit
    • hash: remove VALGRIND #ifdef - always run clean. · 23319007
      Rusty Russell authored
      My simple test program on my laptop showed that with modern 32 bit Intel
      CPUs and modern GCC, there's no measurable penalty for the clean version.
      
      Andrew Bartlett complained that the valgrind noise was grating.  Agreed.
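
      A minimal sketch of the "clean" idea (illustrative FNV-style code, not
      the actual ccan/hash implementation): mix the trailing partial word in
      byte-by-byte, so the hash never reads past the end of the key and
      valgrind only ever sees initialized bytes.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative only, not the ccan/hash code: whole words are read via
 * memcpy (alignment-safe), and the tail is mixed in one byte at a time,
 * strictly within [0, len). */
static uint32_t hash_clean(const void *key, size_t len)
{
	const uint8_t *p = key;
	uint32_t h = 2166136261u;	/* FNV-style offset basis */
	size_t i;

	for (i = 0; i + 4 <= len; i += 4) {
		uint32_t w;
		memcpy(&w, p + i, 4);	/* whole-word read, fully in bounds */
		h = (h ^ w) * 16777619u;
	}
	for (; i < len; i++)		/* tail: never past the buffer */
		h = (h ^ p[i]) * 16777619u;
	return h;
}
```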
  6. 05 Jun, 2011 1 commit
  7. 31 May, 2011 2 commits
  8. 30 May, 2011 1 commit
  9. 20 May, 2011 4 commits
  10. 10 May, 2011 3 commits
  11. 27 Apr, 2011 5 commits
    • tdb2: fix msync() arg · 18fe5ef0
      Rusty Russell authored
      PAGESIZE used to be defined to getpagesize(); we changed it to a
      constant in b556ef1f, which broke the msync() call.
    • tdb2: use direct access functions when creating recovery blob · 71d8cfb6
      Rusty Russell authored
      We don't need to copy into a buffer to examine the old data: in the
      common case, it's already mmapped.  It's made a bit trickier because
      the tdb_access_read() function uses the current I/O methods, so we
      need to restore those temporarily.
      
      The performance difference was in the noise, however (the sync no
      doubt dominates).
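
      The shape of the change, as a hedged sketch (struct and function names
      invented here, not the tdb2 API): hand back a pointer straight into the
      mmap when the range is mapped, and only fall back to a heap copy
      otherwise.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct db {
	char *map;	/* mmap of the file contents, or NULL */
	size_t maplen;
};

/* Zero-copy read in the common case; the caller frees only when told to. */
static const char *db_read(struct db *db, size_t off, size_t len,
			   int *need_free)
{
	if (db->map && off + len <= db->maplen) {
		*need_free = 0;
		return db->map + off;	/* direct access into the mmap */
	}
	*need_free = 1;
	return calloc(len, 1);	/* stand-in for a pread() into a buffer */
}
```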
      
      Before:
      $ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
      real	0m45.021s
      user	0m16.261s
      sys	0m2.432s
      -rw------- 1 rusty rusty 364469344 2011-04-27 22:55 /tmp/growtdb.tdb
      testing with 3 processes, 5000 loops, seed=0
      OK
      
      real	1m10.144s
      user	0m0.480s
      sys	0m0.460s
      -rw------- 1 rusty rusty 391992 2011-04-27 22:56 torture.tdb
      Adding 2000000 records:  863 ns (110601144 bytes)
      Finding 2000000 records:  565 ns (110601144 bytes)
      Missing 2000000 records:  383 ns (110601144 bytes)
      Traversing 2000000 records:  409 ns (110601144 bytes)
      Deleting 2000000 records:  676 ns (225354680 bytes)
      Re-adding 2000000 records:  784 ns (225354680 bytes)
      Appending 2000000 records:  1191 ns (247890168 bytes)
      Churning 2000000 records:  2166 ns (423133432 bytes)
      
      After:
      real	0m47.141s
      user	0m16.073s
      sys	0m2.460s
      -rw------- 1 rusty rusty 364469344 2011-04-27 22:58 /tmp/growtdb.tdb
      testing with 3 processes, 5000 loops, seed=0
      OK
      
      real	1m4.207s
      user	0m0.416s
      sys	0m0.504s
      -rw------- 1 rusty rusty 313576 2011-04-27 22:59 torture.tdb
      Adding 2000000 records:  874 ns (110601144 bytes)
      Finding 2000000 records:  565 ns (110601144 bytes)
      Missing 2000000 records:  393 ns (110601144 bytes)
      Traversing 2000000 records:  404 ns (110601144 bytes)
      Deleting 2000000 records:  684 ns (225354680 bytes)
      Re-adding 2000000 records:  792 ns (225354680 bytes)
      Appending 2000000 records:  1212 ns (247890168 bytes)
      Churning 2000000 records:  2191 ns (423133432 bytes)
      
    • tdb2: enlarge transaction pagesize to 64k · 0753972a
      Rusty Russell authored
      We don't need to use 4k for our transaction pages; we can use any
      value.  For the tools/speed benchmark, any value between about 4k and
      64M makes no difference, but that's probably because the entire
      database is touched in each transaction.
      
      So instead, I looked at tdbtorture to try to find an optimum value, as
      it uses smaller transactions.  4k and 64k were equivalent.  16M was
      almost three times slower, 1M was 5-10% slower.  1024 was also 5-10%
      slower.
      
      There's a slight advantage to having larger pages, both for allowing
      direct access to the database (if it's all in one page we can sometimes
      grant direct access even inside a transaction) and for the compactness
      of our recovery area (since our code is naive and won't combine a run
      that crosses pages).
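
      The direct-access point can be made concrete with a tiny hypothetical
      check (not the tdb2 code): a range qualifies only when it sits entirely
      inside one transaction page, which larger pages make more likely.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define XACT_PAGESIZE (64 * 1024)	/* the new transaction page size */

/* True iff [off, off+len) stays within a single transaction page. */
static bool fits_one_page(size_t off, size_t len)
{
	return off / XACT_PAGESIZE == (off + len - 1) / XACT_PAGESIZE;
}
```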
      
      Before:
      $ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
      real	0m47.127s
      user	0m17.125s
      sys	0m2.456s
      -rw------- 1 rusty rusty 366680288 2011-04-27 21:34 /tmp/growtdb.tdb
      testing with 3 processes, 5000 loops, seed=0
      OK
      
      real	1m16.049s
      user	0m0.300s
      sys	0m0.492s
      -rw------- 1 rusty rusty 244472 2011-04-27 21:35 torture.tdb
      Adding 2000000 records:  894 ns (110551992 bytes)
      Finding 2000000 records:  564 ns (110551992 bytes)
      Missing 2000000 records:  398 ns (110551992 bytes)
      Traversing 2000000 records:  399 ns (110551992 bytes)
      Deleting 2000000 records:  711 ns (225633208 bytes)
      Re-adding 2000000 records:  819 ns (225633208 bytes)
      Appending 2000000 records:  1252 ns (248196544 bytes)
      Churning 2000000 records:  2319 ns (424005056 bytes)
      
      After:
      $ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
      real	0m45.021s
      user	0m16.261s
      sys	0m2.432s
      -rw------- 1 rusty rusty 364469344 2011-04-27 22:55 /tmp/growtdb.tdb
      testing with 3 processes, 5000 loops, seed=0
      OK
      
      real	1m10.144s
      user	0m0.480s
      sys	0m0.460s
      -rw------- 1 rusty rusty 391992 2011-04-27 22:56 torture.tdb
      Adding 2000000 records:  863 ns (110601144 bytes)
      Finding 2000000 records:  565 ns (110601144 bytes)
      Missing 2000000 records:  383 ns (110601144 bytes)
      Traversing 2000000 records:  409 ns (110601144 bytes)
      Deleting 2000000 records:  676 ns (225354680 bytes)
      Re-adding 2000000 records:  784 ns (225354680 bytes)
      Appending 2000000 records:  1191 ns (247890168 bytes)
      Churning 2000000 records:  2166 ns (423133432 bytes)
    • tdb2: try to fit transactions in existing space before we expand. · a9428621
      Rusty Russell authored
      Currently we use the worst-case-possible size for the recovery area.
      Instead, prepare the recovery data, then see whether it's too large.
      
      Note that this currently works out to make the database *larger* on
      our speed benchmark, since we now happen to need to enlarge the
      recovery area at the wrong time, rather than in the old case, where
      it was already hugely oversized.
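
      The approach can be sketched as follows (illustrative names, not the
      tdb2 functions): build the recovery blob first, then grow the file only
      by the shortfall, if any.

```c
#include <assert.h>
#include <stddef.h>

/* Returns nonzero when the file must be expanded; *grow_by says how much. */
static int place_recovery(size_t blob_len, size_t area_len, size_t *grow_by)
{
	if (blob_len <= area_len) {
		*grow_by = 0;		/* prepared data fits: reuse the area */
		return 0;
	}
	*grow_by = blob_len - area_len;	/* append this much to the file */
	return 1;
}
```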
      
      Before:
      $ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
      real	0m50.366s
      user	0m17.109s
      sys	0m2.468s
      -rw------- 1 rusty rusty 564215952 2011-04-27 21:31 /tmp/growtdb.tdb
      testing with 3 processes, 5000 loops, seed=0
      OK
      
      real	1m23.818s
      user	0m0.304s
      sys	0m0.508s
      -rw------- 1 rusty rusty 669856 2011-04-27 21:32 torture.tdb
      Adding 2000000 records:  887 ns (110556088 bytes)
      Finding 2000000 records:  556 ns (110556088 bytes)
      Missing 2000000 records:  385 ns (110556088 bytes)
      Traversing 2000000 records:  401 ns (110556088 bytes)
      Deleting 2000000 records:  710 ns (244003768 bytes)
      Re-adding 2000000 records:  825 ns (244003768 bytes)
      Appending 2000000 records:  1255 ns (268404160 bytes)
      Churning 2000000 records:  2299 ns (268404160 bytes)
      
      After:
      $ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
      real	0m47.127s
      user	0m17.125s
      sys	0m2.456s
      -rw------- 1 rusty rusty 366680288 2011-04-27 21:34 /tmp/growtdb.tdb
      testing with 3 processes, 5000 loops, seed=0
      OK
      
      real	1m16.049s
      user	0m0.300s
      sys	0m0.492s
      -rw------- 1 rusty rusty 244472 2011-04-27 21:35 torture.tdb
      Adding 2000000 records:  894 ns (110551992 bytes)
      Finding 2000000 records:  564 ns (110551992 bytes)
      Missing 2000000 records:  398 ns (110551992 bytes)
      Traversing 2000000 records:  399 ns (110551992 bytes)
      Deleting 2000000 records:  711 ns (225633208 bytes)
      Re-adding 2000000 records:  819 ns (225633208 bytes)
      Appending 2000000 records:  1252 ns (248196544 bytes)
      Churning 2000000 records:  2319 ns (424005056 bytes)
    • tdb2: reduce transaction before writing to recovery area. · cfc7d301
      Rusty Russell authored
      We don't need to write the whole page to the recovery area if it
      hasn't all changed.  Simply skipping the start and end of the pages
      which are similar saves us about 20% on growtdb-bench 250000, and 45%
      on tdbtorture.  The more thorough examination of page differences
      gives us a saving of 90% on growtdb-bench and 98% on tdbtorture!
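
      The "skip similar start and end" trick looks roughly like this
      (illustrative code, not the tdb2 implementation): find the common
      prefix and suffix of the old and new copies of a page, and record only
      the differing run [*start, *end).

```c
#include <assert.h>
#include <stddef.h>

static void changed_run(const char *a, const char *b, size_t len,
			size_t *start, size_t *end)
{
	size_t s = 0, e = len;

	while (s < len && a[s] == b[s])
		s++;				/* skip common prefix */
	while (e > s && a[e - 1] == b[e - 1])
		e--;				/* skip common suffix */
	*start = s;				/* run of differing bytes */
	*end = e;
}
```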
      
      And we do win a bit on timings for transaction commit:
      
      Before:
      $ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
      real	1m4.844s
      user	0m15.537s
      sys	0m3.796s
      -rw------- 1 rusty rusty 626693096 2011-04-27 21:28 /tmp/growtdb.tdb
      testing with 3 processes, 5000 loops, seed=0
      OK
      
      real	1m17.021s
      user	0m0.272s
      sys	0m0.540s
      -rw------- 1 rusty rusty 458800 2011-04-27 21:29 torture.tdb
      Adding 2000000 records:  894 ns (110556088 bytes)
      Finding 2000000 records:  569 ns (110556088 bytes)
      Missing 2000000 records:  390 ns (110556088 bytes)
      Traversing 2000000 records:  403 ns (110556088 bytes)
      Deleting 2000000 records:  710 ns (244003768 bytes)
      Re-adding 2000000 records:  825 ns (244003768 bytes)
      Appending 2000000 records:  1262 ns (268404160 bytes)
      Churning 2000000 records:  2311 ns (268404160 bytes)
      
      
      After:
      $ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
      real	0m50.366s
      user	0m17.109s
      sys	0m2.468s
      -rw------- 1 rusty rusty 564215952 2011-04-27 21:31 /tmp/growtdb.tdb
      testing with 3 processes, 5000 loops, seed=0
      OK
      
      real	1m23.818s
      user	0m0.304s
      sys	0m0.508s
      -rw------- 1 rusty rusty 669856 2011-04-27 21:32 torture.tdb
      Adding 2000000 records:  887 ns (110556088 bytes)
      Finding 2000000 records:  556 ns (110556088 bytes)
      Missing 2000000 records:  385 ns (110556088 bytes)
      Traversing 2000000 records:  401 ns (110556088 bytes)
      Deleting 2000000 records:  710 ns (244003768 bytes)
      Re-adding 2000000 records:  825 ns (244003768 bytes)
      Appending 2000000 records:  1255 ns (268404160 bytes)
      Churning 2000000 records:  2299 ns (268404160 bytes)
  12. 21 Apr, 2011 2 commits
  13. 27 Apr, 2011 4 commits
    • tdb2: limit coalescing based on how successful we are. · 6b3c079f
      Rusty Russell authored
      Instead of walking the entire free list, walk 8 entries, or more if we
      are successful: the reward is scaled by the size coalesced.
      
      We also move previously-examined records to the end of the list.
      
      This reduces file size with very little speed penalty.
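
      A hypothetical rendering of the heuristic (the exact reward scaling
      below is invented for illustration; only the "8 entries, more when
      successful" shape comes from the commit message):

```c
#include <assert.h>
#include <stddef.h>

#define BASE_EXAMINE 8	/* free-list entries we always look at */

/* Budget grows with the bytes recovered so far, scaled by a typical
 * record size: successful coalescing buys more searching. */
static size_t examine_budget(size_t bytes_coalesced, size_t avg_record)
{
	return BASE_EXAMINE + bytes_coalesced / avg_record;
}
```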
      
      Before:
      $ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
      real	1m17.022s
      user	0m27.206s
      sys	0m3.920s
      -rw------- 1 rusty rusty 570130576 2011-04-27 21:17 /tmp/growtdb.tdb
      testing with 3 processes, 5000 loops, seed=0
      OK
      
      real	1m27.355s
      user	0m0.296s
      sys	0m0.516s
      -rw------- 1 rusty rusty 617352 2011-04-27 21:18 torture.tdb
      Adding 2000000 records:  890 ns (110556088 bytes)
      Finding 2000000 records:  565 ns (110556088 bytes)
      Missing 2000000 records:  390 ns (110556088 bytes)
      Traversing 2000000 records:  410 ns (110556088 bytes)
      Deleting 2000000 records:  8623 ns (244003768 bytes)
      Re-adding 2000000 records:  7089 ns (244003768 bytes)
      Appending 2000000 records:  33708 ns (244003768 bytes)
      Churning 2000000 records:  2029 ns (268404160 bytes)
      
      After:
      $ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
      real	1m7.096s
      user	0m15.637s
      sys	0m3.812s
      -rw------- 1 rusty rusty 561270928 2011-04-27 21:22 /tmp/growtdb.tdb
      testing with 3 processes, 5000 loops, seed=0
      OK
      
      real	1m13.850s
      user	0m0.268s
      sys	0m0.492s
      -rw------- 1 rusty rusty 429768 2011-04-27 21:23 torture.tdb
      Adding 2000000 records:  892 ns (110556088 bytes)
      Finding 2000000 records:  570 ns (110556088 bytes)
      Missing 2000000 records:  390 ns (110556088 bytes)
      Traversing 2000000 records:  407 ns (110556088 bytes)
      Deleting 2000000 records:  706 ns (244003768 bytes)
      Re-adding 2000000 records:  822 ns (244003768 bytes)
      Appending 2000000 records:  1262 ns (268404160 bytes)
      Churning 2000000 records:  2320 ns (268404160 bytes)
    • tdb2: use counters to decide when to coalesce records. · 024a5647
      Rusty Russell authored
      This simply uses a 7-bit counter which gets incremented on each addition
      to the list (but not decremented on removals).  When it wraps, we walk
      the entire list looking for things to coalesce.
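
      The counter mechanism as a hedged sketch (names invented, not the tdb2
      code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bump a 7-bit counter on each free-list insertion; when it wraps back
 * to zero, signal that it's time to walk the list and coalesce. */
static bool bump_and_check(uint8_t *counter)
{
	*counter = (*counter + 1) & 0x7f;	/* 7 bits */
	return *counter == 0;			/* wrapped: coalesce now */
}
```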
      
      This causes performance problems, especially when appending records, so
      we limit it in the next patch:
      
      Before:
      $ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
      real	0m59.687s
      user	0m11.593s
      sys	0m4.100s
      -rw------- 1 rusty rusty 752004064 2011-04-27 21:14 /tmp/growtdb.tdb
      testing with 3 processes, 5000 loops, seed=0
      OK
      
      real	1m17.738s
      user	0m0.348s
      sys	0m0.580s
      -rw------- 1 rusty rusty 663360 2011-04-27 21:15 torture.tdb
      Adding 2000000 records:  926 ns (110556088 bytes)
      Finding 2000000 records:  592 ns (110556088 bytes)
      Missing 2000000 records:  416 ns (110556088 bytes)
      Traversing 2000000 records:  422 ns (110556088 bytes)
      Deleting 2000000 records:  741 ns (244003768 bytes)
      Re-adding 2000000 records:  799 ns (244003768 bytes)
      Appending 2000000 records:  1147 ns (295244592 bytes)
      Churning 2000000 records:  1827 ns (568411440 bytes)
      
      After:
      $ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
      real	1m17.022s
      user	0m27.206s
      sys	0m3.920s
      -rw------- 1 rusty rusty 570130576 2011-04-27 21:17 /tmp/growtdb.tdb
      testing with 3 processes, 5000 loops, seed=0
      OK
      
      real	1m27.355s
      user	0m0.296s
      sys	0m0.516s
      -rw------- 1 rusty rusty 617352 2011-04-27 21:18 torture.tdb
      Adding 2000000 records:  890 ns (110556088 bytes)
      Finding 2000000 records:  565 ns (110556088 bytes)
      Missing 2000000 records:  390 ns (110556088 bytes)
      Traversing 2000000 records:  410 ns (110556088 bytes)
      Deleting 2000000 records:  8623 ns (244003768 bytes)
      Re-adding 2000000 records:  7089 ns (244003768 bytes)
      Appending 2000000 records:  33708 ns (244003768 bytes)
      Churning 2000000 records:  2029 ns (268404160 bytes)
    • tdb2: overallocate the recovery area. · a8b30ad4
      Rusty Russell authored
      I noticed a counter-intuitive phenomenon as I tweaked the coalescing
      code: the more coalescing we did, the larger the tdb grew!  This was
      measured using "growtdb-bench 250000 10".
      
      The cause: more coalescing means larger transactions, and every time
      we do a larger transaction, we need to allocate a larger recovery
      area.  The only way to do this is to append to the file, so the file
      keeps growing, even though it's mainly unused!
      
      Overallocating by 25% seems reasonable, and gives better results in
      such benchmarks.
      
      The real fix is to reduce the transaction to a run-length based format
      rather than the naive block system used now.
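
      The 25% heuristic itself is one line (illustrative helper name, not
      the tdb2 function):

```c
#include <assert.h>
#include <stddef.h>

/* When the recovery area must grow, overallocate by 25% so the next
 * slightly-larger transaction still fits without appending again. */
static size_t recovery_alloc(size_t needed)
{
	return needed + needed / 4;
}
```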
      
      Before:
      $ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
      real	0m57.403s
      user	0m11.361s
      sys	0m4.056s
      -rw------- 1 rusty rusty 689536976 2011-04-27 21:10 /tmp/growtdb.tdb
      testing with 3 processes, 5000 loops, seed=0
      OK
      
      real	1m24.901s
      user	0m0.380s
      sys	0m0.512s
      -rw------- 1 rusty rusty 655368 2011-04-27 21:12 torture.tdb
      Adding 2000000 records:  941 ns (110551992 bytes)
      Finding 2000000 records:  603 ns (110551992 bytes)
      Missing 2000000 records:  428 ns (110551992 bytes)
      Traversing 2000000 records:  416 ns (110551992 bytes)
      Deleting 2000000 records:  741 ns (199517112 bytes)
      Re-adding 2000000 records:  819 ns (199517112 bytes)
      Appending 2000000 records:  1228 ns (376542552 bytes)
      Churning 2000000 records:  2042 ns (553641304 bytes)
      
      After:
      $ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
      real	0m59.687s
      user	0m11.593s
      sys	0m4.100s
      -rw------- 1 rusty rusty 752004064 2011-04-27 21:14 /tmp/growtdb.tdb
      testing with 3 processes, 5000 loops, seed=0
      OK
      
      real	1m17.738s
      user	0m0.348s
      sys	0m0.580s
      -rw------- 1 rusty rusty 663360 2011-04-27 21:15 torture.tdb
      Adding 2000000 records:  926 ns (110556088 bytes)
      Finding 2000000 records:  592 ns (110556088 bytes)
      Missing 2000000 records:  416 ns (110556088 bytes)
      Traversing 2000000 records:  422 ns (110556088 bytes)
      Deleting 2000000 records:  741 ns (244003768 bytes)
      Re-adding 2000000 records:  799 ns (244003768 bytes)
      Appending 2000000 records:  1147 ns (295244592 bytes)
      Churning 2000000 records:  1827 ns (568411440 bytes)
    • tdb2: don't start again when we coalesce a record. · 5c4a21ab
      Rusty Russell authored
      We currently start walking the free list again when we coalesce any record;
      this is overzealous, as we only care about the next record being blatted,
      or the record we currently consider "best".
      
      We can also opportunistically try to add the coalesced record into the
      new free list: if it fails, we go back to the old "mark record,
      unlock, re-lock" code.
      
      Before:
      $ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
      real	1m0.243s
      user	0m13.677s
      sys	0m4.336s
      -rw------- 1 rusty rusty 683302864 2011-04-27 21:03 /tmp/growtdb.tdb
      testing with 3 processes, 5000 loops, seed=0
      OK
      
      real	1m24.074s
      user	0m0.344s
      sys	0m0.468s
      -rw------- 1 rusty rusty 836040 2011-04-27 21:04 torture.tdb
      Adding 2000000 records:  1015 ns (110551992 bytes)
      Finding 2000000 records:  641 ns (110551992 bytes)
      Missing 2000000 records:  445 ns (110551992 bytes)
      Traversing 2000000 records:  439 ns (110551992 bytes)
      Deleting 2000000 records:  807 ns (199517112 bytes)
      Re-adding 2000000 records:  851 ns (199517112 bytes)
      Appending 2000000 records:  1301 ns (376542552 bytes)
      Churning 2000000 records:  2423 ns (553641304 bytes)
      
      After:
      $ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
      real	0m57.403s
      user	0m11.361s
      sys	0m4.056s
      -rw------- 1 rusty rusty 689536976 2011-04-27 21:10 /tmp/growtdb.tdb
      testing with 3 processes, 5000 loops, seed=0
      OK
      
      real	1m24.901s
      user	0m0.380s
      sys	0m0.512s
      -rw------- 1 rusty rusty 655368 2011-04-27 21:12 torture.tdb
      Adding 2000000 records:  941 ns (110551992 bytes)
      Finding 2000000 records:  603 ns (110551992 bytes)
      Missing 2000000 records:  428 ns (110551992 bytes)
      Traversing 2000000 records:  416 ns (110551992 bytes)
      Deleting 2000000 records:  741 ns (199517112 bytes)
      Re-adding 2000000 records:  819 ns (199517112 bytes)
      Appending 2000000 records:  1228 ns (376542552 bytes)
      Churning 2000000 records:  2042 ns (553641304 bytes)
      
  14. 25 Mar, 2011 1 commit
  15. 27 Apr, 2011 1 commit
    • tdb2: expand more slowly. · 48241893
      Rusty Russell authored
      We took the original expansion heuristic from TDB1, and they have
      since fixed theirs, so we copy that.
      
      Before:
      
      After:
      time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
      growtdb-bench.c: In function ‘main’:
      growtdb-bench.c:74:8: warning: ignoring return value of ‘system’, declared with attribute warn_unused_result
      growtdb-bench.c:108:9: warning: ignoring return value of ‘system’, declared with attribute warn_unused_result
      
      real	1m0.243s
      user	0m13.677s
      sys	0m4.336s
      -rw------- 1 rusty rusty 683302864 2011-04-27 21:03 /tmp/growtdb.tdb
      testing with 3 processes, 5000 loops, seed=0
      OK
      
      real	1m24.074s
      user	0m0.344s
      sys	0m0.468s
      -rw------- 1 rusty rusty 836040 2011-04-27 21:04 torture.tdb
      Adding 2000000 records:  1015 ns (110551992 bytes)
      Finding 2000000 records:  641 ns (110551992 bytes)
      Missing 2000000 records:  445 ns (110551992 bytes)
      Traversing 2000000 records:  439 ns (110551992 bytes)
      Deleting 2000000 records:  807 ns (199517112 bytes)
      Re-adding 2000000 records:  851 ns (199517112 bytes)
      Appending 2000000 records:  1301 ns (376542552 bytes)
      Churning 2000000 records:  2423 ns (553641304 bytes)
  16. 19 Apr, 2011 1 commit
  17. 21 Apr, 2011 1 commit
  18. 07 Apr, 2011 1 commit
    • tdb2: allow transaction to nest. · 72e974b2
      Rusty Russell authored
      This is definitely a bad idea in general, but SAMBA uses nested transactions
      in many and varied ways (some of them probably reflect real bugs) and it's
      far easier to support them inside tdb2 with a flag.
      
      We already have part of the TDB1 infrastructure in place, so this patch
      just completes it and fixes one place where I'd messed it up.
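
      The flag-gated nesting can be sketched like this (hypothetical names;
      tdb2's real API differs): inner begin/commit pairs only adjust a depth
      counter, and just the outermost commit writes anything out.

```c
#include <assert.h>
#include <stdbool.h>

struct txn {
	int depth;
	bool allow_nest;	/* the opt-in flag */
};

/* False when nesting is attempted without the flag. */
static bool txn_begin(struct txn *t)
{
	if (t->depth > 0 && !t->allow_nest)
		return false;
	t->depth++;
	return true;
}

/* True means this was the outermost commit: write it out for real. */
static bool txn_commit(struct txn *t)
{
	return --t->depth == 0;
}
```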
  19. 27 Apr, 2011 2 commits
    • tdb2: allow multiple chain locks. · dc9da1e3
      Rusty Russell authored
      It's probably not a good idea, because it's a recipe for deadlocks if
      anyone else grabs any *other* two chainlocks, or the allrecord lock,
      but SAMBA definitely does it, so allow it as TDB1 does.
    • tdb2: TDB_ATTRIBUTE_STATS access via tdb_get_attribute. · 8cca0397
      Rusty Russell authored
      Now that we have tdb_get_attribute, it makes sense to make that the
      method of accessing statistics.  That way they are always available,
      and the direct increment is probably cheaper than even the
      unlikely() branch.
  20. 07 Apr, 2011 4 commits