- 21 Jul, 2011 7 commits
-
-
Rusty Russell authored
After discussion with various developers (particularly the Samba team), there's a consensus that a reference to the license in each source file is useful. Since CCAN modules are designed to be cut and paste, this helps avoid any confusion should the LICENSE file go missing. We also detect standard boilerplates, in which case a one-line summary isn't necessary.
-
Rusty Russell authored
We really want everyone to be using these; establishing conventions helps all code, so make it the most liberal license possible. It's all my code, so I can do this unilaterally.
-
Rusty Russell authored
Trivial code, all mine.
-
Rusty Russell authored
Trivial code, all mine.
-
Rusty Russell authored
We really want everyone to be using these; establishing conventions helps all code, so make it the most liberal license possible. It's all my code, so I can do this unilaterally.
-
Rusty Russell authored
We really want everyone to be using these; establishing conventions helps all code, so make it the most liberal license possible. It's all my code, so I can do this unilaterally.
-
Rusty Russell authored
Now we've made GPL wording uniform, use it everywhere. There's no point allowing variants which might be unclear. We still have some non-conformant licenses in the tree (eg. just "BSD"), so we only warn on unknown license strings for now.
-
- 19 Jul, 2011 2 commits
-
-
Rusty Russell authored
GPL versions 2 and 3 both specifically mention "any later version" as the phrase which allows the user to choose to upgrade the license. Make sure we use that phrase, and make the format consistent across modules.
-
Rusty Russell authored
This improves on the current ad-hoc methods, and also fixes a bug where we mapped "GPLv2" to the GPLv3 symlink.
-
- 06 Jul, 2011 2 commits
-
-
Rusty Russell authored
-
Rusty Russell authored
Turns out it's not standard (thanks, Samba build farm!), and the previous test had a hole in it anyway. This one is more conservative.
-
- 04 Jul, 2011 1 commit
-
-
Rusty Russell authored
I'm not sure that a "pthread-safe" tap library is very useful; how many people have multiple threads calling ok()? Kirill Shutemov noted that it gives a warning with -Wundef; indeed, we should ask in this case whether they want pthread support, not whether the system has pthread support to offer.
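The -Wundef point can be illustrated with a small sketch. The macro and function names here are hypothetical, not the module's actual code; the idea is that `#if HAVE_PTHREAD` warns under -Wundef whenever the macro is undefined, while an opt-in `#ifdef` asks whether the user *wants* pthread support and never warns:

```c
/* Illustrative only: WANT_PTHREAD is a hypothetical opt-in macro.
 * "#ifdef" never triggers -Wundef, unlike "#if HAVE_PTHREAD". */
#ifdef WANT_PTHREAD
#include <pthread.h>
static pthread_mutex_t tap_mutex = PTHREAD_MUTEX_INITIALIZER;
#define tap_lock()   pthread_mutex_lock(&tap_mutex)
#define tap_unlock() pthread_mutex_unlock(&tap_mutex)
#else
/* Single-threaded build: locking compiles away entirely. */
#define tap_lock()   ((void)0)
#define tap_unlock() ((void)0)
#endif

static int ok_count;

/* Toy stand-in for tap's ok(): counts results under the lock. */
static int ok(int pass)
{
        tap_lock();
        ok_count++;
        tap_unlock();
        return pass;
}
```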
-
- 02 Jul, 2011 1 commit
-
-
Joey Adams authored
This file contains my private ramblings about the JSON module, and was not meant to be included in the public release.
-
- 01 Jul, 2011 1 commit
-
-
Joey Adams authored
-
- 21 Jun, 2011 1 commit
-
-
Rusty Russell authored
POSIX says ssize_t is in sys/types.h; on Linux, stdlib.h happens to be enough.
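A minimal illustration of the portable include (the helper function is hypothetical; the point is only that <sys/types.h> is the POSIX home of ssize_t, so code shouldn't rely on <stdlib.h> dragging it in):

```c
/* POSIX specifies <sys/types.h> as the header that defines ssize_t;
 * getting it via <stdlib.h> is a glibc side effect, not a guarantee. */
#include <sys/types.h>

/* Hypothetical helper: clamp a possibly-negative length to -1. */
static ssize_t checked_length(ssize_t n)
{
        return n < 0 ? -1 : n;
}
```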
-
- 19 Jun, 2011 1 commit
-
-
Russell Steicke authored
I've been using the antithread arabella example to generate some "arty" portraits for decoration. I've made a few changes to it (triangle sizes and number of generations before giving up), and may send those as patches later. Because some of the images I'm generating have taken quite a while (many days) I've needed to restart the run after rebooting machines for other reasons, and noticed that arabella restarted the generation count from zero. I wanted to continue the generation count, so here's a patch to do just that.
-
- 17 Jun, 2011 3 commits
-
-
Rusty Russell authored
Simple port from the TDB1 versions. Also, change to "tdb2.h" includes so they can be built even in other directories in future.
-
Rusty Russell authored
This means they can be installed in parallel with tdb1's tools.
-
Rusty Russell authored
This is where we should be getting bswap_64 from.
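The commit doesn't name the header explicitly, but on glibc, bswap_64 is declared in <byteswap.h>. For illustration, an equivalent portable fallback (not the project's code):

```c
/* Illustrative fallback with the same behaviour as glibc's bswap_64:
 * reverse the byte order of a 64-bit value by swapping successively
 * smaller halves. */
#include <stdint.h>

static inline uint64_t bswap_64_fallback(uint64_t v)
{
        v = (v >> 32) | (v << 32);                               /* 32-bit halves */
        v = ((v & 0xFFFF0000FFFF0000ULL) >> 16) |
            ((v & 0x0000FFFF0000FFFFULL) << 16);                 /* 16-bit pairs  */
        v = ((v & 0xFF00FF00FF00FF00ULL) >> 8) |
            ((v & 0x00FF00FF00FF00FFULL) << 8);                  /* single bytes  */
        return v;
}
```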
-
- 16 Jun, 2011 1 commit
-
-
Rusty Russell authored
Take some care to preserve formatting, even with mixed tabs and spaces.
-
- 15 Jun, 2011 1 commit
-
-
Joey Adams authored
-
- 11 Jun, 2011 1 commit
-
-
Joey Adams authored
* utf8_read_char
* utf8_write_char
* from_surrogate_pair
* to_surrogate_pair
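The surrogate-pair halves can be sketched with standard UTF-16 arithmetic. Function names mirror the commit's list, but the bodies are illustrative, not the module's code:

```c
/* UTF-16 surrogate-pair arithmetic per the Unicode standard:
 * code points above U+FFFF are encoded as a high surrogate in
 * 0xD800-0xDBFF followed by a low surrogate in 0xDC00-0xDFFF. */
#include <stdint.h>

static void to_surrogate_pair(uint32_t unicode, uint16_t *hi, uint16_t *lo)
{
        uint32_t v = unicode - 0x10000;   /* 20 bits remain */
        *hi = 0xD800 | (v >> 10);         /* top 10 bits    */
        *lo = 0xDC00 | (v & 0x3FF);       /* bottom 10 bits */
}

static uint32_t from_surrogate_pair(uint16_t hi, uint16_t lo)
{
        return 0x10000 + (((uint32_t)(hi & 0x3FF) << 10) | (lo & 0x3FF));
}
```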
-
- 08 Jun, 2011 1 commit
-
-
Rusty Russell authored
My simple test program on my laptop showed that with modern 32 bit Intel CPUs and modern GCC, there's no measurable penalty for the clean version. Andrew Bartlett complained that the valgrind noise was grating. Agreed.
-
- 05 Jun, 2011 1 commit
-
-
Rusty Russell authored
-
- 31 May, 2011 2 commits
-
-
Rusty Russell authored
-
Rusty Russell authored
-
- 30 May, 2011 1 commit
-
-
Joey Adams authored
-
- 20 May, 2011 4 commits
-
-
Rusty Russell authored
-
Rusty Russell authored
We tried to get a F_WRLCK on the open lock; we shouldn't do that for a read-only tdb. (TDB1 gets away with it because a read-only open skips all locking). We also avoid leaking the fd in two tdb_open() failure paths revealed by this extra testing.
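A minimal sketch of the shape of the fix, assuming a hypothetical take_open_lock() helper and lock offset (not TDB2's actual layout): ask for a lock type matching how the file was opened, since F_WRLCK on a read-only descriptor fails with EBADF.

```c
/* Illustrative: take an "open lock" whose type matches the open mode. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int take_open_lock(int fd, int read_only)
{
        struct flock fl = {
                .l_type   = read_only ? F_RDLCK : F_WRLCK,
                .l_whence = SEEK_SET,
                .l_start  = 0,   /* hypothetical open-lock offset */
                .l_len    = 1,
        };
        return fcntl(fd, F_SETLKW, &fl);
}

/* Demo: lock a scratch file opened read-write. */
static int demo(void)
{
        FILE *f = tmpfile();
        int r;

        if (!f)
                return -1;
        r = take_open_lock(fileno(f), 0);
        fclose(f);
        return r;
}
```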
-
Rusty Russell authored
Allows tests to explicitly avoid continuing when a failure has been injected.
-
Rusty Russell authored
TDB2 tracks locks using getpid(), and gets upset when we fork behind its back.
-
- 10 May, 2011 3 commits
-
-
Rusty Russell authored
This crept in; it should be the same as the tests in typesafe_cb.h.
-
Rusty Russell authored
More recording of interesting events. As we don't have an ABI yet, we don't need to put these at the end.
-
Rusty Russell authored
The original code assumed that unlocking would fail if we didn't have a lock; this isn't true (at least, on my machine). So we have to always check the pid before unlocking.
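A sketch of the pid guard, with hypothetical names: since POSIX advisory unlocks succeed even when no lock is held, the unlock path must compare the recorded owner pid against getpid() rather than relying on the unlock failing.

```c
/* Illustrative: only the process that took the lock may release it;
 * a forked child sees a different getpid() and must not unlock. */
#include <sys/types.h>
#include <unistd.h>

struct lock_owner {
        pid_t pid;   /* pid recorded when the lock was taken */
        int count;   /* lock nesting count */
};

static int should_unlock(const struct lock_owner *o)
{
        return o->count > 0 && o->pid == getpid();
}
```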
-
- 27 Apr, 2011 5 commits
-
-
Rusty Russell authored
PAGESIZE used to be defined to getpagesize(); we changed it to a constant in b556ef1f, which broke the msync() call.
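The breakage can be sketched as an alignment problem (helper names are hypothetical): msync() requires a page-aligned address, so the alignment must come from the runtime page size, not a compile-time constant that may disagree with it.

```c
/* Illustrative: align an msync() range down to the real page size. */
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

static size_t page_align_down(size_t off, size_t page)
{
        return off & ~(page - 1);
}

static int sync_range(char *map, size_t off, size_t len)
{
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t start = page_align_down(off, page);

        /* Widen the range so the start is page-aligned, as msync()
         * demands. */
        return msync(map + start, len + (off - start), MS_SYNC);
}
```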
-
Rusty Russell authored
We don't need to copy into a buffer to examine the old data: in the common case, it's mmaped already. It's made a bit trickier because the tdb_access_read() function uses the current I/O methods, so we need to restore that temporarily. The difference was in the noise, however (the sync no doubt dominates).

Before:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real    0m45.021s
user    0m16.261s
sys     0m2.432s
-rw------- 1 rusty rusty 364469344 2011-04-27 22:55 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real    1m10.144s
user    0m0.480s
sys     0m0.460s
-rw------- 1 rusty rusty 391992 2011-04-27 22:56 torture.tdb
Adding 2000000 records: 863 ns (110601144 bytes)
Finding 2000000 records: 565 ns (110601144 bytes)
Missing 2000000 records: 383 ns (110601144 bytes)
Traversing 2000000 records: 409 ns (110601144 bytes)
Deleting 2000000 records: 676 ns (225354680 bytes)
Re-adding 2000000 records: 784 ns (225354680 bytes)
Appending 2000000 records: 1191 ns (247890168 bytes)
Churning 2000000 records: 2166 ns (423133432 bytes)

After:
real    0m47.141s
user    0m16.073s
sys     0m2.460s
-rw------- 1 rusty rusty 364469344 2011-04-27 22:58 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real    1m4.207s
user    0m0.416s
sys     0m0.504s
-rw------- 1 rusty rusty 313576 2011-04-27 22:59 torture.tdb
Adding 2000000 records: 874 ns (110601144 bytes)
Finding 2000000 records: 565 ns (110601144 bytes)
Missing 2000000 records: 393 ns (110601144 bytes)
Traversing 2000000 records: 404 ns (110601144 bytes)
Deleting 2000000 records: 684 ns (225354680 bytes)
Re-adding 2000000 records: 792 ns (225354680 bytes)
Appending 2000000 records: 1212 ns (247890168 bytes)
Churning 2000000 records: 2191 ns (423133432 bytes)
-
Rusty Russell authored
We don't need to use 4k for our transaction pages; we can use any value. For the tools/speed benchmark, any value between about 4k and 64M makes no difference, but that's probably because the entire database is touched in each transaction. So instead, I looked at tdbtorture to try to find an optimum value, as it uses smaller transactions. 4k and 64k were equivalent; 16M was almost three times slower, and 1M and 1024 were both 5-10% slower. There's a slight advantage to having larger pages, both for allowing direct access to the database (if it's all in one page we can sometimes grant direct access even inside a transaction) and for the compactness of our recovery area (since our code is naive and won't combine one run across pages).

Before:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real    0m47.127s
user    0m17.125s
sys     0m2.456s
-rw------- 1 rusty rusty 366680288 2011-04-27 21:34 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real    1m16.049s
user    0m0.300s
sys     0m0.492s
-rw------- 1 rusty rusty 244472 2011-04-27 21:35 torture.tdb
Adding 2000000 records: 894 ns (110551992 bytes)
Finding 2000000 records: 564 ns (110551992 bytes)
Missing 2000000 records: 398 ns (110551992 bytes)
Traversing 2000000 records: 399 ns (110551992 bytes)
Deleting 2000000 records: 711 ns (225633208 bytes)
Re-adding 2000000 records: 819 ns (225633208 bytes)
Appending 2000000 records: 1252 ns (248196544 bytes)
Churning 2000000 records: 2319 ns (424005056 bytes)

After:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real    0m45.021s
user    0m16.261s
sys     0m2.432s
-rw------- 1 rusty rusty 364469344 2011-04-27 22:55 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real    1m10.144s
user    0m0.480s
sys     0m0.460s
-rw------- 1 rusty rusty 391992 2011-04-27 22:56 torture.tdb
Adding 2000000 records: 863 ns (110601144 bytes)
Finding 2000000 records: 565 ns (110601144 bytes)
Missing 2000000 records: 383 ns (110601144 bytes)
Traversing 2000000 records: 409 ns (110601144 bytes)
Deleting 2000000 records: 676 ns (225354680 bytes)
Re-adding 2000000 records: 784 ns (225354680 bytes)
Appending 2000000 records: 1191 ns (247890168 bytes)
Churning 2000000 records: 2166 ns (423133432 bytes)
-
Rusty Russell authored
Currently we use the worst-case-possible size for the recovery area. Instead, prepare the recovery data, then see whether it's too large. Note that this currently works out to make the database *larger* on our speed benchmark, since we happen to need to enlarge the recovery area at the wrong time now, rather than the old case where it's already hugely oversized.

Before:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real    0m50.366s
user    0m17.109s
sys     0m2.468s
-rw------- 1 rusty rusty 564215952 2011-04-27 21:31 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real    1m23.818s
user    0m0.304s
sys     0m0.508s
-rw------- 1 rusty rusty 669856 2011-04-27 21:32 torture.tdb
Adding 2000000 records: 887 ns (110556088 bytes)
Finding 2000000 records: 556 ns (110556088 bytes)
Missing 2000000 records: 385 ns (110556088 bytes)
Traversing 2000000 records: 401 ns (110556088 bytes)
Deleting 2000000 records: 710 ns (244003768 bytes)
Re-adding 2000000 records: 825 ns (244003768 bytes)
Appending 2000000 records: 1255 ns (268404160 bytes)
Churning 2000000 records: 2299 ns (268404160 bytes)

After:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real    0m47.127s
user    0m17.125s
sys     0m2.456s
-rw------- 1 rusty rusty 366680288 2011-04-27 21:34 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real    1m16.049s
user    0m0.300s
sys     0m0.492s
-rw------- 1 rusty rusty 244472 2011-04-27 21:35 torture.tdb
Adding 2000000 records: 894 ns (110551992 bytes)
Finding 2000000 records: 564 ns (110551992 bytes)
Missing 2000000 records: 398 ns (110551992 bytes)
Traversing 2000000 records: 399 ns (110551992 bytes)
Deleting 2000000 records: 711 ns (225633208 bytes)
Re-adding 2000000 records: 819 ns (225633208 bytes)
Appending 2000000 records: 1252 ns (248196544 bytes)
Churning 2000000 records: 2319 ns (424005056 bytes)
-
Rusty Russell authored
We don't need to write the whole page to the recovery area if it hasn't all changed. Simply skipping the start and end of the pages which are similar saves us about 20% on growtdb-bench 250000, and 45% on tdbtorture. The more thorough examination of page differences gives us a saving of 90% on growtdb-bench and 98% on tdbtorture! And we do win a bit on timings for transaction commit:

Before:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real    1m4.844s
user    0m15.537s
sys     0m3.796s
-rw------- 1 rusty rusty 626693096 2011-04-27 21:28 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real    1m17.021s
user    0m0.272s
sys     0m0.540s
-rw------- 1 rusty rusty 458800 2011-04-27 21:29 torture.tdb
Adding 2000000 records: 894 ns (110556088 bytes)
Finding 2000000 records: 569 ns (110556088 bytes)
Missing 2000000 records: 390 ns (110556088 bytes)
Traversing 2000000 records: 403 ns (110556088 bytes)
Deleting 2000000 records: 710 ns (244003768 bytes)
Re-adding 2000000 records: 825 ns (244003768 bytes)
Appending 2000000 records: 1262 ns (268404160 bytes)
Churning 2000000 records: 2311 ns (268404160 bytes)

After:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real    0m50.366s
user    0m17.109s
sys     0m2.468s
-rw------- 1 rusty rusty 564215952 2011-04-27 21:31 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real    1m23.818s
user    0m0.304s
sys     0m0.508s
-rw------- 1 rusty rusty 669856 2011-04-27 21:32 torture.tdb
Adding 2000000 records: 887 ns (110556088 bytes)
Finding 2000000 records: 556 ns (110556088 bytes)
Missing 2000000 records: 385 ns (110556088 bytes)
Traversing 2000000 records: 401 ns (110556088 bytes)
Deleting 2000000 records: 710 ns (244003768 bytes)
Re-adding 2000000 records: 825 ns (244003768 bytes)
Appending 2000000 records: 1255 ns (268404160 bytes)
Churning 2000000 records: 2299 ns (268404160 bytes)
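The start-and-end skipping can be sketched as a range scan; this is illustrative, not TDB2's code:

```c
/* Illustrative: find the smallest [start, start+len) span where two
 * transaction pages differ, so only that span need be written to the
 * recovery area. */
#include <stddef.h>

static size_t diff_range(const unsigned char *old, const unsigned char *new,
                         size_t len, size_t *start)
{
        size_t s = 0, e = len;

        while (s < len && old[s] == new[s])     /* skip matching prefix */
                s++;
        while (e > s && old[e-1] == new[e-1])   /* skip matching suffix */
                e--;
        *start = s;
        return e - s;   /* 0 means the page is unchanged */
}
```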
-
- 21 Apr, 2011 1 commit
-
-
Rusty Russell authored
tdb1 always makes the tdb a multiple of the transaction page size, tdb2 doesn't. This means that if a transaction hits the exact end of the file, we might need to save off a partial page. So that we don't have to rewrite tdb_recovery_size() too, we simply do a short read and memset the unused section to 0 (to keep valgrind happy).
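A sketch of the short-read-and-zero approach, with hypothetical names: read whatever is left of the file into a full page buffer and memset the tail, so later size calculations can treat it as a whole page.

```c
/* Illustrative: read a possibly-partial trailing page and zero the
 * unused section (keeping valgrind happy, as the commit notes). */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

static ssize_t read_page_padded(int fd, off_t off, void *buf, size_t pagesize)
{
        ssize_t n = pread(fd, buf, pagesize, off);

        if (n >= 0 && (size_t)n < pagesize)
                memset((char *)buf + n, 0, pagesize - n);
        return n;
}

/* Demo: a 3-byte file read as one 8-byte "page". */
static ssize_t demo(char *buf)
{
        FILE *f = tmpfile();
        ssize_t n;

        if (!f)
                return -1;
        fwrite("abc", 1, 3, f);
        fflush(f);
        n = read_page_padded(fileno(f), 0, buf, 8);
        fclose(f);
        return n;
}
```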
-