- 09 Jun, 2010 3 commits
Rusty Russell authored
Rusty Russell authored
Rusty Russell authored
- 08 Jun, 2010 1 commit
Rusty Russell authored
This version has limitations: pools must be at least 1MB, and allocations are restricted to 1/1024 of the total pool size.
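A back-of-the-envelope illustration of those limits; the helper and constant below are hypothetical, not part of the module's API:

```c
#include <stdbool.h>
#include <stddef.h>

#define MIN_POOL_SIZE (1024 * 1024)    /* pools must be at least 1MB */

/* Hypothetical helper: does a request respect the documented limits?
 * The pool must be at least 1MB, and a single allocation may not
 * exceed 1/1024 of the total pool size. */
bool request_within_limits(size_t pool_size, size_t alloc_size)
{
    if (pool_size < MIN_POOL_SIZE)
        return false;
    return alloc_size <= pool_size / 1024;
}
```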
- 07 Jun, 2010 5 commits
Rusty Russell authored
Rusty Russell authored
Particularly useful for building tests standalone.
Rusty Russell authored
Rusty Russell authored
Rusty Russell authored
- 24 May, 2010 2 commits
Rusty Russell authored
To do this, we have to lose the ability for preargs and postargs to allow const and volatile argument signatures.
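For background on why that is a real loss, here is a minimal illustration (mine, not the module's code): GCC's `__builtin_types_compatible_p`, the usual basis for this kind of compile-time signature check, treats a const-qualified pointed-to argument as a distinct type, so each qualified variant of a signature would need to be matched separately:

```c
/* Illustration only: the const qualifier sits behind the pointer, so GCC
 * considers these two callback types incompatible. */
typedef void (*cb_fn)(char *arg);
typedef void (*const_cb_fn)(const char *arg);

_Static_assert(!__builtin_types_compatible_p(cb_fn, const_cb_fn),
               "a const-qualified argument gives a distinct callback type");
```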
Rusty Russell authored
- 23 May, 2010 2 commits
Rusty Russell authored
Rusty Russell authored
hashtable: make traverse callback typesafe.
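A hedged sketch of the general technique (not the module's actual macro): the wrapper only casts the callback to the generic void * form when its context parameter matches the pointer the caller passes, so a mismatched callback becomes a compile-time error instead of silent misuse:

```c
#include <stdbool.h>

struct hashtable;               /* stand-in for the table type */
struct counter { int n; };      /* example caller context */

/* The raw traverse takes the usual void * callback and context. */
bool traverse_raw(struct hashtable *ht,
                  bool (*cb)(void *entry, void *ctx), void *ctx);

/* Hypothetical typesafe wrapper: cast the callback to the generic form only
 * if its context parameter matches typeof(ctx); otherwise pass it through
 * unchanged so the compiler rejects the call. Assumes cb is a function name. */
#define traverse(ht, cb, ctx)                                           \
    traverse_raw((ht),                                                  \
        __builtin_choose_expr(                                          \
            __builtin_types_compatible_p(typeof(&(cb)),                 \
                bool (*)(void *, typeof(ctx))),                         \
            (bool (*)(void *, void *))(&(cb)),                          \
            (&(cb))),                                                   \
        (ctx))

/* Example callback taking the caller's own context type, not void *: */
bool count_entry(void *entry, struct counter *c)
{
    (void)entry;
    c->n++;
    return true;
}
```

With this, `traverse(ht, count_entry, &my_counter)` compiles, while a callback whose context parameter does not match the pointer being passed is rejected by the compiler.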
- 20 May, 2010 1 commit
Rusty Russell authored
- 11 May, 2010 3 commits
Rusty Russell authored
Rusty Russell authored
Rusty Russell authored
- 04 May, 2010 3 commits
Rusty Russell authored
Rusty Russell authored
Rusty Russell authored
- 09 Apr, 2010 11 commits
Rusty Russell authored
Rusty Russell authored
This takes my "make fastcheck" from about 57 seconds to about 43 seconds.
Rusty Russell authored
Rusty Russell authored
Rusty Russell authored
Rusty Russell authored
Rusty Russell authored
Rusty Russell authored
Rusty Russell authored
The ccanlint patch is rather intrusive. First, it adds a new field, "key", to all the ccanlint tests. The key is a shorter, still-unique description of the test (e.g. "valgrind"). The names I chose as keys for all the tests are somewhat arbitrary and often don't reflect the name of the .c source file (because some of those names are just too darn long).

Second, it adds two new options to ccanlint:
- -l: list the tests ccanlint performs
- -x: exclude tests (e.g. -x trailing_whitespace,valgrind)

It also adds a consistency check making sure all tests have unique keys and names.

The primary goal of the ccanlint patch was so I could exclude the valgrind test, which takes a really long time for some modules (I think btree takes the longest, at around 2 minutes). I'm not sure I did it 100% correctly, so you'll want to review it first.
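As a rough illustration (the struct below is a sketch, not ccanlint's actual definition), each test now carries a short unique key alongside its long name, and an invocation like `ccanlint -x trailing_whitespace,valgrind` refers to those keys:

```c
/* Illustrative only: a lint test described by both a long name and a
 * short, unique key that the -l and -x options can refer to. */
struct lint_test_sketch {
    const char *name;   /* long description shown to the user */
    const char *key;    /* short unique handle, e.g. "valgrind" */
    void (*check)(const char *module_dir);  /* hypothetical hook */
};

const struct lint_test_sketch example_test = {
    .name = "Module's code is clean under valgrind",
    .key = "valgrind",
    .check = NULL,      /* omitted in this sketch */
};
```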
Rusty Russell authored
The btree patch gives the btree module an intuitive frontend (btree_insert, btree_remove, btree_lookup) and a built-in ordering function for strings. Together, these make it easy to use the btree module as a dynamic string map.
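A hedged usage sketch of that frontend as a string map. Only btree_insert, btree_remove and btree_lookup are named above; the constructor (btree_new) and the built-in string ordering (btree_strcmp) used here are assumptions, so check ccan/btree/btree.h for the exact names and prototypes:

```c
#include <assert.h>
#include <ccan/btree/btree.h>

int main(void)
{
    /* Assumed constructor and ordering-function names; see btree.h. */
    struct btree *map = btree_new(btree_strcmp);

    btree_insert(map, "red");
    btree_insert(map, "green");

    assert(btree_lookup(map, "green") != NULL);
    assert(btree_lookup(map, "blue") == NULL);

    btree_remove(map, "red");
    assert(btree_lookup(map, "red") == NULL);

    return 0;   /* cleanup omitted for brevity */
}
```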
Rusty Russell authored
The charset patch makes utf8_validate reject the invalid codepoints U+FFFE and U+FFFF. Hopefully it's fully UTF-8 compliant now.
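A minimal sketch of that check (illustrative only, not the module's code), applied once a code point has been decoded from its UTF-8 bytes:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative check: reject the noncharacters U+FFFE and U+FFFF, along
 * with surrogates and anything beyond U+10FFFF. */
bool codepoint_allowed(uint32_t cp)
{
    if (cp == 0xFFFE || cp == 0xFFFF)
        return false;
    if (cp >= 0xD800 && cp <= 0xDFFF)
        return false;
    return cp <= 0x10FFFF;
}
```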
- 31 Mar, 2010 2 commits
Rusty Russell authored
Joseph Adams authored
- 24 Feb, 2010 5 commits
Rusty Russell authored
Rusty Russell authored
Rusty Russell authored
tdb transactions were designed to be robust against the machine powering off, but interestingly were never designed to handle the case where an administrator kill -9's a process during commit. Because recovery is only done on tdb_open, processes with the tdb already mapped will simply use it despite it being corrupt and needing recovery.

The solution to this is to check for recovery every time we grab a data lock: we could have gained the lock because a process just died. This has no measurable cost; here are the times for tdbtorture -s 0 -n 1 -l 10000:

Before: 2.75 2.50 2.81 3.19 2.91 2.53 2.72 2.50 2.78 2.77 = Avg 2.75
After:  2.81 2.57 3.42 2.49 3.02 2.49 2.84 2.48 2.80 2.43 = Avg 2.74

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
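A hedged sketch of the approach described above; the helper names are hypothetical stand-ins, not tdb's internal API:

```c
struct tdb_context;     /* opaque; stands in for tdb's context type */

/* Hypothetical helpers standing in for tdb internals. */
int acquire_list_lock(struct tdb_context *tdb, int list);
void release_list_lock(struct tdb_context *tdb, int list);
int recovery_needed(struct tdb_context *tdb);
int run_recovery(struct tdb_context *tdb);

/* Sketch of the change: every data-lock acquisition also checks whether
 * recovery is pending, because gaining the lock may mean the previous
 * holder died in the middle of a commit. */
int grab_data_lock_checked(struct tdb_context *tdb, int list)
{
    int ret = acquire_list_lock(tdb, list);
    if (ret != 0)
        return ret;

    if (recovery_needed(tdb)) {
        ret = run_recovery(tdb);
        if (ret != 0) {
            release_list_lock(tdb, list);
            return ret;
        }
    }
    return 0;
}
```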
Rusty Russell authored
Reduces code duplication, and also gives us a central point for the next patch, which wants to cover all list locks.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rusty Russell authored
Now that the transaction code uses the standard allrecord lock, which stops us from trying to grab any per-record locks anyway, we don't need special no-op lock ops for transactions. This is a nice simplification: if you see brlock, you know it's really going to grab a lock.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
- 23 Feb, 2010 2 commits
Rusty Russell authored
Rusty Russell authored
Records themselves get (read) locked by the traversal code against delete. Interestingly, this locking isn't done when the allrecord lock has been taken, though the allrecord lock until recently didn't cover the actual records (it now goes to end of file).

The write record lock, grabbed by the delete code, is not suppressed by the allrecord lock, which causes us to punch a hole in that lock when we release the write record lock.

Make this consistent: *no* record locks of any kind when the allrecord lock is taken.
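A hedged sketch of that rule (hypothetical helper names, not tdb's code): the per-record lock and unlock paths simply do nothing while the allrecord lock is held:

```c
#include <stddef.h>

struct tdb_context;     /* opaque; stands in for tdb's context type */

/* Hypothetical helpers, not tdb's real internals. */
int allrecord_lock_held(struct tdb_context *tdb);
int take_record_read_lock(struct tdb_context *tdb, size_t off);
int drop_record_read_lock(struct tdb_context *tdb, size_t off);

/* When the allrecord lock is held, taking or dropping a per-record lock is
 * a no-op, so releasing a record lock can never punch a hole in it. */
int lock_record_for_traverse(struct tdb_context *tdb, size_t off)
{
    if (allrecord_lock_held(tdb))
        return 0;   /* already covered by the allrecord lock */
    return take_record_read_lock(tdb, off);
}

int unlock_record_for_traverse(struct tdb_context *tdb, size_t off)
{
    if (allrecord_lock_held(tdb))
        return 0;   /* nothing was taken, nothing to drop */
    return drop_record_read_lock(tdb, off);
}
```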