- 19 Jul, 2011 2 commits
-
-
Rusty Russell authored
GPL versions 2 and 3 both specifically mention "any later version" as the phrase which allows the user to choose to upgrade the license. Make sure we use that phrase, and make the format consistent across modules.
-
Rusty Russell authored
This improves on the current ad-hoc methods, and also fixes a bug where we mapped "GPLv2" to the GPLv3 symlink.
-
- 06 Jul, 2011 2 commits
-
-
Rusty Russell authored
-
Rusty Russell authored
Turns out it's not standard (thanks, Samba build farm!). And the previous test had a hole in it anyway; this one is more conservative.
-
- 04 Jul, 2011 1 commit
-
-
Rusty Russell authored
I'm not sure that a "pthread-safe" tap library is very useful; how many people have multiple threads calling ok()? Kirill Shutemov noted that it gives a warning with -Wundef; indeed, we should ask in this case whether they want pthread support, not whether the system has pthread support to offer.
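For illustration, a minimal configuration sketch of the distinction being made (macro names are hypothetical, not ccan/tap's actual symbols): test an explicit opt-in with #ifdef instead of probing a possibly-undefined HAVE_* symbol with #if, which is what trips -Wundef.

```c
/* Hypothetical sketch: #ifdef on a deliberate opt-in symbol instead of
 * "#if HAVE_PTHREAD", which warns under -Wundef when the symbol was
 * never defined.  Locking only happens when the user asked for it. */
#ifdef WANT_PTHREAD
#include <pthread.h>
static pthread_mutex_t tap_mutex = PTHREAD_MUTEX_INITIALIZER;
#define tap_lock()   pthread_mutex_lock(&tap_mutex)
#define tap_unlock() pthread_mutex_unlock(&tap_mutex)
#else
#define tap_lock()   ((void)0)
#define tap_unlock() ((void)0)
#endif
```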
-
- 02 Jul, 2011 1 commit
-
-
Joey Adams authored
This file contains my private ramblings about the JSON module, and was not meant to be included in the public release.
-
- 01 Jul, 2011 1 commit
-
-
Joey Adams authored
-
- 21 Jun, 2011 1 commit
-
-
Rusty Russell authored
POSIX says ssize_t is in sys/types.h; on Linux, stdlib.h is enough.
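A minimal example of the portable spelling (the wrapper is only there to give the type a use):

```c
#include <sys/types.h>  /* POSIX location of ssize_t */
#include <unistd.h>     /* read() */

/* Relying on <stdlib.h> to provide ssize_t happens to work on
 * Linux/glibc, but POSIX only guarantees it via <sys/types.h>. */
static ssize_t read_some(int fd, void *buf, size_t len)
{
        return read(fd, buf, len);
}
```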
-
- 19 Jun, 2011 1 commit
-
-
Russell Steicke authored
I've been using the antithread arabella example to generate some "arty" portraits for decoration. I've made a few changes to it (triangle sizes and number of generations before giving up), and may send those as patches later. Because some of the images I'm generating have taken quite a while (many days), I've needed to restart the run after rebooting machines for other reasons, and noticed that arabella restarted the generation count from zero. I wanted to continue the generation count, so here's a patch to do just that.
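Assuming this refers to glibc's <byteswap.h>, which is the conventional home of bswap_64(), a tiny usage example:

```c
#include <stdint.h>
#include <stdio.h>
#include <byteswap.h>   /* glibc's declaration of bswap_64() */

int main(void)
{
        uint64_t v = 0x0102030405060708ULL;

        /* bswap_64() reverses the byte order of a 64-bit value. */
        printf("%016llx\n", (unsigned long long)bswap_64(v));
        return 0;
}
```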
-
- 17 Jun, 2011 3 commits
-
-
Rusty Russell authored
Simple port from the TDB1 versions. Also, change to "tdb2.h" includes so they can be built even in other directories in future.
-
Rusty Russell authored
This means they can be installed in parallel with tdb1's tools.
-
Rusty Russell authored
This is where we should be getting bswap_64 from.
-
- 16 Jun, 2011 1 commit
-
-
Rusty Russell authored
Take some care to preserve formatting, even with mixed tabs and spaces.
-
- 15 Jun, 2011 1 commit
-
-
Joey Adams authored
-
- 11 Jun, 2011 1 commit
-
-
Joey Adams authored
* utf8_read_char
* utf8_write_char
* from_surrogate_pair
* to_surrogate_pair
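For reference, the standard UTF-16 surrogate-pair arithmetic the last two helpers implement; the signatures here are illustrative, not the module's actual API.

```c
#include <stdint.h>

/* Combine a high (0xD800..0xDBFF) and low (0xDC00..0xDFFF) surrogate
 * into a code point in U+10000..U+10FFFF. */
static uint32_t surrogates_to_codepoint(uint16_t hi, uint16_t lo)
{
        return 0x10000 + (((uint32_t)(hi - 0xD800) << 10) | (uint32_t)(lo - 0xDC00));
}

/* Split a supplementary code point back into its surrogate pair. */
static void codepoint_to_surrogates(uint32_t cp, uint16_t *hi, uint16_t *lo)
{
        cp -= 0x10000;
        *hi = (uint16_t)(0xD800 + (cp >> 10));
        *lo = (uint16_t)(0xDC00 + (cp & 0x3FF));
}
```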
-
- 08 Jun, 2011 1 commit
-
-
Rusty Russell authored
My simple test program on my laptop showed that with modern 32-bit Intel CPUs and modern GCC, there's no measurable penalty for the clean version. Andrew Bartlett complained that the valgrind noise was grating. Agreed.
-
- 05 Jun, 2011 1 commit
-
-
Rusty Russell authored
-
- 31 May, 2011 2 commits
-
-
Rusty Russell authored
-
Rusty Russell authored
-
- 30 May, 2011 1 commit
-
-
Joey Adams authored
-
- 20 May, 2011 4 commits
-
-
Rusty Russell authored
-
Rusty Russell authored
We tried to get a F_WRLCK on the open lock; we shouldn't do that for a read-only tdb. (TDB1 gets away with it because a read-only open skips all locking). We also avoid leaking the fd in two tdb_open() failure paths revealed by this extra testing.
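A hedged sketch of the rule (helper and structure invented, not tdb2's open-lock code): choose the fcntl() lock type from the open mode, so a read-only handle never asks for F_WRLCK on an O_RDONLY descriptor.

```c
#include <fcntl.h>
#include <stdbool.h>
#include <unistd.h>

/* Hypothetical helper: take the "open lock" with a lock type matching
 * how the file was opened.  Requesting F_WRLCK on a descriptor opened
 * O_RDONLY fails, which is the bug described above. */
static int grab_open_lock(int fd, bool read_only, off_t off)
{
        struct flock fl = {
                .l_type   = read_only ? F_RDLCK : F_WRLCK,
                .l_whence = SEEK_SET,
                .l_start  = off,
                .l_len    = 1,
        };

        return fcntl(fd, F_SETLKW, &fl);
}
```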
-
Rusty Russell authored
Allows tests to explicitly avoid continuing when a failure has been injected.
-
Rusty Russell authored
TDB2 tracks locks using getpid(), and gets upset when we fork behind its back.
-
- 10 May, 2011 3 commits
-
-
Rusty Russell authored
This crept in; it should be the same as the tests in typesafe_cb.h.
-
Rusty Russell authored
More recording of interesting events. As we don't have an ABI yet, we don't need to put these at the end.
-
Rusty Russell authored
The original code assumed that unlocking would fail if we didn't have a lock; this isn't true (at least, on my machine). So we have to always check the pid before unlocking.
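A minimal sketch of that guard (structure and field names invented): remember which pid took the lock and compare against getpid() before releasing, since fcntl() unlocking does not reliably fail when no lock is held.

```c
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical bookkeeping: refuse to release a lock we did not take.
 * The pid comparison is the only dependable check, and it also keeps a
 * child after fork() from dropping its parent's lock. */
struct lock_owner {
        pid_t locker_pid;       /* pid that acquired the lock, or 0 */
};

static int release_if_owner(struct lock_owner *l, int (*do_unlock)(void))
{
        if (l->locker_pid != getpid())
                return 0;       /* not ours: leave it alone */
        l->locker_pid = 0;
        return do_unlock();
}
```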
-
- 27 Apr, 2011 5 commits
-
-
Rusty Russell authored
PAGESIZE used to be defined to getpagesize(); we changed it to a constant in b556ef1f, which broke the msync() call.
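For context, a hedged sketch of why the constant matters (not the tdb1 code): msync() wants a page-aligned start address, so the alignment has to come from the page size the kernel actually reports rather than a hard-coded value.

```c
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical sketch: flush a dirty byte range of a mapped file.
 * The start address passed to msync() must be aligned to the runtime
 * page size (getpagesize()/sysconf(_SC_PAGESIZE)); a compile-time
 * PAGESIZE constant may not match it. */
static int sync_range(char *map_base, size_t off, size_t len)
{
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t aligned = off & ~(page - 1);

        return msync(map_base + aligned, len + (off - aligned), MS_SYNC);
}
```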
-
Rusty Russell authored
We don't need to copy into a buffer to examine the old data: in the common case, it's mmaped already. It's made a bit trickier because the tdb_access_read() function uses the current I/O methods, so we need to restore that temporarily. The difference was in the noise, however (the sync no-doubt dominates).

Before:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real 0m45.021s
user 0m16.261s
sys 0m2.432s
-rw------- 1 rusty rusty 364469344 2011-04-27 22:55 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m10.144s
user 0m0.480s
sys 0m0.460s
-rw------- 1 rusty rusty 391992 2011-04-27 22:56 torture.tdb
Adding 2000000 records: 863 ns (110601144 bytes)
Finding 2000000 records: 565 ns (110601144 bytes)
Missing 2000000 records: 383 ns (110601144 bytes)
Traversing 2000000 records: 409 ns (110601144 bytes)
Deleting 2000000 records: 676 ns (225354680 bytes)
Re-adding 2000000 records: 784 ns (225354680 bytes)
Appending 2000000 records: 1191 ns (247890168 bytes)
Churning 2000000 records: 2166 ns (423133432 bytes)

After:
real 0m47.141s
user 0m16.073s
sys 0m2.460s
-rw------- 1 rusty rusty 364469344 2011-04-27 22:58 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m4.207s
user 0m0.416s
sys 0m0.504s
-rw------- 1 rusty rusty 313576 2011-04-27 22:59 torture.tdb
Adding 2000000 records: 874 ns (110601144 bytes)
Finding 2000000 records: 565 ns (110601144 bytes)
Missing 2000000 records: 393 ns (110601144 bytes)
Traversing 2000000 records: 404 ns (110601144 bytes)
Deleting 2000000 records: 684 ns (225354680 bytes)
Re-adding 2000000 records: 792 ns (225354680 bytes)
Appending 2000000 records: 1212 ns (247890168 bytes)
Churning 2000000 records: 2191 ns (423133432 bytes)
-
Rusty Russell authored
We don't need to use 4k for our transaction pages; we can use any value. For the tools/speed benchmark, any value between about 4k and 64M makes no difference, but that's probably because the entire database is touched in each transaction. So instead, I looked at tdbtorture to try to find an optimum value, as it uses smaller transactions. 4k and 64k were equivalent. 16M was almost three times slower, 1M was 5-10% slower. 1024 was also 5-10% slower.

There's a slight advantage of having larger pages, both for allowing direct access to the database (if it's all in one page we can sometimes grant direct access even inside a transaction) and for the compactness of our recovery area (since our code is naive and won't combine one run across pages).

Before:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real 0m47.127s
user 0m17.125s
sys 0m2.456s
-rw------- 1 rusty rusty 366680288 2011-04-27 21:34 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m16.049s
user 0m0.300s
sys 0m0.492s
-rw------- 1 rusty rusty 244472 2011-04-27 21:35 torture.tdb
Adding 2000000 records: 894 ns (110551992 bytes)
Finding 2000000 records: 564 ns (110551992 bytes)
Missing 2000000 records: 398 ns (110551992 bytes)
Traversing 2000000 records: 399 ns (110551992 bytes)
Deleting 2000000 records: 711 ns (225633208 bytes)
Re-adding 2000000 records: 819 ns (225633208 bytes)
Appending 2000000 records: 1252 ns (248196544 bytes)
Churning 2000000 records: 2319 ns (424005056 bytes)

After:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real 0m45.021s
user 0m16.261s
sys 0m2.432s
-rw------- 1 rusty rusty 364469344 2011-04-27 22:55 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m10.144s
user 0m0.480s
sys 0m0.460s
-rw------- 1 rusty rusty 391992 2011-04-27 22:56 torture.tdb
Adding 2000000 records: 863 ns (110601144 bytes)
Finding 2000000 records: 565 ns (110601144 bytes)
Missing 2000000 records: 383 ns (110601144 bytes)
Traversing 2000000 records: 409 ns (110601144 bytes)
Deleting 2000000 records: 676 ns (225354680 bytes)
Re-adding 2000000 records: 784 ns (225354680 bytes)
Appending 2000000 records: 1191 ns (247890168 bytes)
Churning 2000000 records: 2166 ns (423133432 bytes)
-
Rusty Russell authored
Currently we use the worst-case-possible size for the recovery area. Instead, prepare the recovery data, then see whether it's too large. Note that this currently works out to make the database *larger* on our speed benchmark, since we happen to need to enlarge the recovery area at the wrong time now, rather than the old case where it's already hugely oversized.

Before:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real 0m50.366s
user 0m17.109s
sys 0m2.468s
-rw------- 1 rusty rusty 564215952 2011-04-27 21:31 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m23.818s
user 0m0.304s
sys 0m0.508s
-rw------- 1 rusty rusty 669856 2011-04-27 21:32 torture.tdb
Adding 2000000 records: 887 ns (110556088 bytes)
Finding 2000000 records: 556 ns (110556088 bytes)
Missing 2000000 records: 385 ns (110556088 bytes)
Traversing 2000000 records: 401 ns (110556088 bytes)
Deleting 2000000 records: 710 ns (244003768 bytes)
Re-adding 2000000 records: 825 ns (244003768 bytes)
Appending 2000000 records: 1255 ns (268404160 bytes)
Churning 2000000 records: 2299 ns (268404160 bytes)

After:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real 0m47.127s
user 0m17.125s
sys 0m2.456s
-rw------- 1 rusty rusty 366680288 2011-04-27 21:34 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m16.049s
user 0m0.300s
sys 0m0.492s
-rw------- 1 rusty rusty 244472 2011-04-27 21:35 torture.tdb
Adding 2000000 records: 894 ns (110551992 bytes)
Finding 2000000 records: 564 ns (110551992 bytes)
Missing 2000000 records: 398 ns (110551992 bytes)
Traversing 2000000 records: 399 ns (110551992 bytes)
Deleting 2000000 records: 711 ns (225633208 bytes)
Re-adding 2000000 records: 819 ns (225633208 bytes)
Appending 2000000 records: 1252 ns (248196544 bytes)
Churning 2000000 records: 2319 ns (424005056 bytes)
-
Rusty Russell authored
We don't need to write the whole page to the recovery area if it hasn't all changed. Simply skipping the start and end of the pages which are similar saves us about 20% on growtdb-bench 250000, and 45% on tdbtorture. The more thorough examination of page differences gives us a saving of 90% on growtdb-bench and 98% on tdbtorture! And we do win a bit on timings for transaction commit:

Before:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real 1m4.844s
user 0m15.537s
sys 0m3.796s
-rw------- 1 rusty rusty 626693096 2011-04-27 21:28 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m17.021s
user 0m0.272s
sys 0m0.540s
-rw------- 1 rusty rusty 458800 2011-04-27 21:29 torture.tdb
Adding 2000000 records: 894 ns (110556088 bytes)
Finding 2000000 records: 569 ns (110556088 bytes)
Missing 2000000 records: 390 ns (110556088 bytes)
Traversing 2000000 records: 403 ns (110556088 bytes)
Deleting 2000000 records: 710 ns (244003768 bytes)
Re-adding 2000000 records: 825 ns (244003768 bytes)
Appending 2000000 records: 1262 ns (268404160 bytes)
Churning 2000000 records: 2311 ns (268404160 bytes)

After:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real 0m50.366s
user 0m17.109s
sys 0m2.468s
-rw------- 1 rusty rusty 564215952 2011-04-27 21:31 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m23.818s
user 0m0.304s
sys 0m0.508s
-rw------- 1 rusty rusty 669856 2011-04-27 21:32 torture.tdb
Adding 2000000 records: 887 ns (110556088 bytes)
Finding 2000000 records: 556 ns (110556088 bytes)
Missing 2000000 records: 385 ns (110556088 bytes)
Traversing 2000000 records: 401 ns (110556088 bytes)
Deleting 2000000 records: 710 ns (244003768 bytes)
Re-adding 2000000 records: 825 ns (244003768 bytes)
Appending 2000000 records: 1255 ns (268404160 bytes)
Churning 2000000 records: 2299 ns (268404160 bytes)
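For illustration, a simplified sketch (not the actual tdb2 transaction code) of the "skip the identical start and end" idea: find the smallest differing run between the old and new copy of a page, so only that run has to be copied into the recovery area.

```c
#include <stddef.h>

/* Given the old and new contents of one transaction page, return the
 * length of the smallest run starting at *start that actually differs;
 * only that run needs to go into the recovery area. */
static size_t changed_run(const unsigned char *old, const unsigned char *new,
                          size_t len, size_t *start)
{
        size_t s = 0, e = len;

        while (s < len && old[s] == new[s])
                s++;
        if (s == len) {
                *start = 0;
                return 0;       /* identical page: nothing to save */
        }
        while (e > s && old[e - 1] == new[e - 1])
                e--;
        *start = s;
        return e - s;
}
```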
-
- 21 Apr, 2011 2 commits
-
-
Rusty Russell authored
tdb1 always makes the tdb a multiple of the transaction page size, tdb2 doesn't. This means that if a transaction hits the exact end of the file, we might need to save off a partial page. So that we don't have to rewrite tdb_recovery_size() too, we simply do a short read and memset the unused section to 0 (to keep valgrind happy).
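A small sketch of the short-read-and-zero approach (helper invented, not the tdb2 code):

```c
#include <string.h>
#include <unistd.h>

/* Read one transaction page at `off`; if the file ends mid-page we get
 * a short read, so zero-fill the tail instead of leaving uninitialised
 * bytes for valgrind to complain about. */
static int read_page(int fd, off_t off, void *buf, size_t pagesize)
{
        ssize_t r = pread(fd, buf, pagesize, off);

        if (r < 0)
                return -1;
        if ((size_t)r < pagesize)
                memset((char *)buf + r, 0, pagesize - r);
        return 0;
}
```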
-
Rusty Russell authored
We don't have tailers in tdb2, so it's just 8 bytes of confusing wastage.
-
- 27 Apr, 2011 4 commits
-
-
Rusty Russell authored
Instead of walking the entire free list, walk 8 entries, or more if we are successful: the reward is scaled by the size coalesced. We also move previously-examined records to the end of the list. This reduces file size with very little speed penalty.

Before:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real 1m17.022s
user 0m27.206s
sys 0m3.920s
-rw------- 1 rusty rusty 570130576 2011-04-27 21:17 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m27.355s
user 0m0.296s
sys 0m0.516s
-rw------- 1 rusty rusty 617352 2011-04-27 21:18 torture.tdb
Adding 2000000 records: 890 ns (110556088 bytes)
Finding 2000000 records: 565 ns (110556088 bytes)
Missing 2000000 records: 390 ns (110556088 bytes)
Traversing 2000000 records: 410 ns (110556088 bytes)
Deleting 2000000 records: 8623 ns (244003768 bytes)
Re-adding 2000000 records: 7089 ns (244003768 bytes)
Appending 2000000 records: 33708 ns (244003768 bytes)
Churning 2000000 records: 2029 ns (268404160 bytes)

After:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real 1m7.096s
user 0m15.637s
sys 0m3.812s
-rw------- 1 rusty rusty 561270928 2011-04-27 21:22 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m13.850s
user 0m0.268s
sys 0m0.492s
-rw------- 1 rusty rusty 429768 2011-04-27 21:23 torture.tdb
Adding 2000000 records: 892 ns (110556088 bytes)
Finding 2000000 records: 570 ns (110556088 bytes)
Missing 2000000 records: 390 ns (110556088 bytes)
Traversing 2000000 records: 407 ns (110556088 bytes)
Deleting 2000000 records: 706 ns (244003768 bytes)
Re-adding 2000000 records: 822 ns (244003768 bytes)
Appending 2000000 records: 1262 ns (268404160 bytes)
Churning 2000000 records: 2320 ns (268404160 bytes)
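A rough sketch of the bounded-walk policy described above, reduced to the budget arithmetic; the budget constant and the scaling divisor are illustrative, not the values tdb2 uses, and the "move examined records to the tail" step is omitted. Here `gains[i]` stands in for the bytes recovered by coalescing entry i (0 means nothing merged).

```c
#include <stddef.h>

/* Walk at most 8 free-list candidates, but let every successful
 * coalesce extend the budget in proportion to the space it recovered. */
static size_t bounded_coalesce(const size_t *gains, size_t n)
{
        size_t budget = 8, total = 0;

        for (size_t i = 0; i < n && budget > 0; i++, budget--) {
                if (gains[i]) {
                        total += gains[i];
                        budget += gains[i] / 4096;  /* reward scaled by size */
                }
        }
        return total;
}
```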
-
Rusty Russell authored
This simply uses a 7-bit counter which gets incremented on each addition to the list (but not decremented on removals). When it wraps, we walk the entire list looking for things to coalesce. This causes performance problems, especially when appending records, so we limit it in the next patch:

Before:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real 0m59.687s
user 0m11.593s
sys 0m4.100s
-rw------- 1 rusty rusty 752004064 2011-04-27 21:14 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m17.738s
user 0m0.348s
sys 0m0.580s
-rw------- 1 rusty rusty 663360 2011-04-27 21:15 torture.tdb
Adding 2000000 records: 926 ns (110556088 bytes)
Finding 2000000 records: 592 ns (110556088 bytes)
Missing 2000000 records: 416 ns (110556088 bytes)
Traversing 2000000 records: 422 ns (110556088 bytes)
Deleting 2000000 records: 741 ns (244003768 bytes)
Re-adding 2000000 records: 799 ns (244003768 bytes)
Appending 2000000 records: 1147 ns (295244592 bytes)
Churning 2000000 records: 1827 ns (568411440 bytes)

After:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real 1m17.022s
user 0m27.206s
sys 0m3.920s
-rw------- 1 rusty rusty 570130576 2011-04-27 21:17 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m27.355s
user 0m0.296s
sys 0m0.516s
-rw------- 1 rusty rusty 617352 2011-04-27 21:18 torture.tdb
Adding 2000000 records: 890 ns (110556088 bytes)
Finding 2000000 records: 565 ns (110556088 bytes)
Missing 2000000 records: 390 ns (110556088 bytes)
Traversing 2000000 records: 410 ns (110556088 bytes)
Deleting 2000000 records: 8623 ns (244003768 bytes)
Re-adding 2000000 records: 7089 ns (244003768 bytes)
Appending 2000000 records: 33708 ns (244003768 bytes)
Churning 2000000 records: 2029 ns (268404160 bytes)
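A hypothetical sketch of such a wrap-around trigger (structure and field names invented): a 7-bit count of additions which, on wrapping back to zero, tells the caller to do a full-list coalesce pass.

```c
#include <stdbool.h>

struct flist_head {
        unsigned int coalesce_count : 7;        /* wraps every 128 adds */
};

/* Called on each addition to the free list; removals never touch the
 * counter.  Returns true when it wraps, i.e. time to walk the list. */
static bool note_addition(struct flist_head *h)
{
        h->coalesce_count++;
        return h->coalesce_count == 0;
}
```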
-
Rusty Russell authored
I noticed a counter-intuitive phenomenon as I tweaked the coalescing code: the more coalescing we did, the larger the tdb grew! This was measured using "growtdb-bench 250000 10". The cause: more coalescing means larger transactions, and every time we do a larger transaction, we need to allocate a larger recovery area. The only way to do this is to append to the file, so the file keeps growing, even though it's mainly unused! Overallocating by 25% seems reasonable, and gives better results in such benchmarks. The real fix is to reduce the transaction to a run-length based format rather than the naive block system used now.

Before:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real 0m57.403s
user 0m11.361s
sys 0m4.056s
-rw------- 1 rusty rusty 689536976 2011-04-27 21:10 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m24.901s
user 0m0.380s
sys 0m0.512s
-rw------- 1 rusty rusty 655368 2011-04-27 21:12 torture.tdb
Adding 2000000 records: 941 ns (110551992 bytes)
Finding 2000000 records: 603 ns (110551992 bytes)
Missing 2000000 records: 428 ns (110551992 bytes)
Traversing 2000000 records: 416 ns (110551992 bytes)
Deleting 2000000 records: 741 ns (199517112 bytes)
Re-adding 2000000 records: 819 ns (199517112 bytes)
Appending 2000000 records: 1228 ns (376542552 bytes)
Churning 2000000 records: 2042 ns (553641304 bytes)

After:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real 0m59.687s
user 0m11.593s
sys 0m4.100s
-rw------- 1 rusty rusty 752004064 2011-04-27 21:14 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m17.738s
user 0m0.348s
sys 0m0.580s
-rw------- 1 rusty rusty 663360 2011-04-27 21:15 torture.tdb
Adding 2000000 records: 926 ns (110556088 bytes)
Finding 2000000 records: 592 ns (110556088 bytes)
Missing 2000000 records: 416 ns (110556088 bytes)
Traversing 2000000 records: 422 ns (110556088 bytes)
Deleting 2000000 records: 741 ns (244003768 bytes)
Re-adding 2000000 records: 799 ns (244003768 bytes)
Appending 2000000 records: 1147 ns (295244592 bytes)
Churning 2000000 records: 1827 ns (568411440 bytes)
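The 25% headroom is simple arithmetic; a hypothetical helper just to make the figure concrete:

```c
#include <stddef.h>

/* When the recovery area has to grow, reserve a quarter more than the
 * current transaction strictly needs, so the next slightly larger
 * transaction does not force yet another append to the file. */
static size_t recovery_area_size(size_t needed)
{
        return needed + needed / 4;
}
```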
-
Rusty Russell authored
We currently start walking the free list again when we coalesce any record; this is overzealous, as we only care about the next record being blatted, or the record we currently consider "best". We can also opportunistically try to add the coalesced record into the new free list: if it fails, we go back to the old "mark record, unlock, re-lock" code.

Before:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real 1m0.243s
user 0m13.677s
sys 0m4.336s
-rw------- 1 rusty rusty 683302864 2011-04-27 21:03 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m24.074s
user 0m0.344s
sys 0m0.468s
-rw------- 1 rusty rusty 836040 2011-04-27 21:04 torture.tdb
Adding 2000000 records: 1015 ns (110551992 bytes)
Finding 2000000 records: 641 ns (110551992 bytes)
Missing 2000000 records: 445 ns (110551992 bytes)
Traversing 2000000 records: 439 ns (110551992 bytes)
Deleting 2000000 records: 807 ns (199517112 bytes)
Re-adding 2000000 records: 851 ns (199517112 bytes)
Appending 2000000 records: 1301 ns (376542552 bytes)
Churning 2000000 records: 2423 ns (553641304 bytes)

After:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real 0m57.403s
user 0m11.361s
sys 0m4.056s
-rw------- 1 rusty rusty 689536976 2011-04-27 21:10 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m24.901s
user 0m0.380s
sys 0m0.512s
-rw------- 1 rusty rusty 655368 2011-04-27 21:12 torture.tdb
Adding 2000000 records: 941 ns (110551992 bytes)
Finding 2000000 records: 603 ns (110551992 bytes)
Missing 2000000 records: 428 ns (110551992 bytes)
Traversing 2000000 records: 416 ns (110551992 bytes)
Deleting 2000000 records: 741 ns (199517112 bytes)
Re-adding 2000000 records: 819 ns (199517112 bytes)
Appending 2000000 records: 1228 ns (376542552 bytes)
Churning 2000000 records: 2042 ns (553641304 bytes)
-
- 25 Mar, 2011 1 commit
-
-
Rusty Russell authored
This makes life easier for the next patch.
-
- 27 Apr, 2011 1 commit
-
-
Rusty Russell authored
We took the original expansion heuristic from TDB1, and they just fixed theirs, so copy that.

Before:

After:
time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
growtdb-bench.c: In function ‘main’:
growtdb-bench.c:74:8: warning: ignoring return value of ‘system’, declared with attribute warn_unused_result
growtdb-bench.c:108:9: warning: ignoring return value of ‘system’, declared with attribute warn_unused_result
real 1m0.243s
user 0m13.677s
sys 0m4.336s
-rw------- 1 rusty rusty 683302864 2011-04-27 21:03 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m24.074s
user 0m0.344s
sys 0m0.468s
-rw------- 1 rusty rusty 836040 2011-04-27 21:04 torture.tdb
Adding 2000000 records: 1015 ns (110551992 bytes)
Finding 2000000 records: 641 ns (110551992 bytes)
Missing 2000000 records: 445 ns (110551992 bytes)
Traversing 2000000 records: 439 ns (110551992 bytes)
Deleting 2000000 records: 807 ns (199517112 bytes)
Re-adding 2000000 records: 851 ns (199517112 bytes)
Appending 2000000 records: 1301 ns (376542552 bytes)
Churning 2000000 records: 2423 ns (553641304 bytes)
-