- 11 May, 2010 3 commits

Rusty Russell authored

Rusty Russell authored

Rusty Russell authored

- 04 May, 2010 3 commits

Rusty Russell authored

Rusty Russell authored

Rusty Russell authored

- 09 Apr, 2010 11 commits

Rusty Russell authored

Rusty Russell authored
This takes my "make fastcheck" from about 57 seconds to about 43 seconds.

Rusty Russell authored

Rusty Russell authored

Rusty Russell authored

Rusty Russell authored

Rusty Russell authored

Rusty Russell authored

Rusty Russell authored
The ccanlint patch is rather intrusive. First, it adds a new field, "key", to all the ccanlint tests. The key is a shorter, still-unique description of the test (e.g. "valgrind"). The names I chose as keys are somewhat arbitrary and often don't reflect the name of the .c source file (because some of those names are just too darn long).

Second, it adds two new options to ccanlint:

  -l: list the tests ccanlint performs
  -x: exclude tests (e.g. -x trailing_whitespace,valgrind)

It also adds a consistency check making sure all tests have unique keys and names.

The primary goal of the ccanlint patch was to let me exclude the valgrind test, which takes a really long time for some modules (I think btree takes the longest, at around 2 minutes). I'm not sure I did it 100% correctly, so you'll want to review it first.
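For illustration, a minimal sketch of the new per-test key plus the uniqueness check, assuming a struct-per-test layout; the struct and field names here are guesses, not the actual ccanlint definitions.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical shape of a ccanlint test with the new "key" field. */
struct ccanlint_test {
	const char *name;  /* long, human-readable description */
	const char *key;   /* short, unique key, e.g. "valgrind" */
};

/* Consistency check: no two tests may share a key or a name. */
static void check_unique(const struct ccanlint_test *tests, size_t n)
{
	for (size_t i = 0; i < n; i++)
		for (size_t j = i + 1; j < n; j++)
			if (!strcmp(tests[i].key, tests[j].key) ||
			    !strcmp(tests[i].name, tests[j].name))
				abort();
}
```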
Rusty Russell authored
The btree patch gives the btree module an intuitive frontend (btree_insert, btree_remove, btree_lookup) and a built-in ordering function for strings. Together, these make it easy to use the btree module as a dynamic string map.
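A hedged usage sketch of the new frontend as a string map. Only btree_insert, btree_remove, and btree_lookup are named above; the constructor, the ordering callback, and all signatures below are assumptions about the surrounding API.

```c
#include <stdio.h>

/* Assumed API shapes, declared here so the sketch is self-contained. */
struct btree;
struct btree *btree_new(int (*cmp)(const void *, const void *));
int  order_by_string(const void *a, const void *b);  /* assumed built-in string order */
void btree_insert(struct btree *bt, const void *item);
void *btree_lookup(struct btree *bt, const void *key);
int  btree_remove(struct btree *bt, const void *key);

int main(void)
{
	struct btree *bt = btree_new(order_by_string);

	btree_insert(bt, "hello");
	if (btree_lookup(bt, "hello"))
		printf("found it\n");
	btree_remove(bt, "hello");
	return 0;
}
```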
Rusty Russell authored
The charset patch makes utf8_validate reject the invalid codepoints U+FFFE and U+FFFF. Hopefully it's fully UTF-8 compliant now.
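A sketch of the per-codepoint check, assuming utf8_validate decodes each sequence to a codepoint cp before validating it; the sequence-level checks (overlong forms, stray continuation bytes) happen elsewhere.

```c
#include <stdbool.h>
#include <stdint.h>

static bool codepoint_valid(uint32_t cp)
{
	if (cp > 0x10FFFF)                 /* outside the Unicode range */
		return false;
	if (cp >= 0xD800 && cp <= 0xDFFF)  /* UTF-16 surrogates */
		return false;
	if (cp == 0xFFFE || cp == 0xFFFF)  /* newly rejected by this patch */
		return false;
	return true;
}
```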
- 31 Mar, 2010 2 commits

Rusty Russell authored

Joseph Adams authored

- 24 Feb, 2010 5 commits

Rusty Russell authored

Rusty Russell authored

Rusty Russell authored
tdb transactions were designed to be robust against the machine powering off, but interestingly were never designed to handle the case where an administrator kill -9's a process during commit. Because recovery is only done on tdb_open, processes with the tdb already mapped will simply use it despite it being corrupt and needing recovery.

The solution to this is to check for recovery every time we grab a data lock: we could have gained the lock because a process just died. This has no measurable cost: here is the time for tdbtorture -s 0 -n 1 -l 10000:

Before: 2.75 2.50 2.81 3.19 2.91 2.53 2.72 2.50 2.78 2.77 = Avg 2.75
After:  2.81 2.57 3.42 2.49 3.02 2.49 2.84 2.48 2.80 2.43 = Avg 2.74

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
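A sketch of the shape of the fix, under assumed internal names (the real tdb helpers differ): the recovery check piggybacks on every data-lock acquisition.

```c
struct tdb_context;

/* Assumed internal helpers, not the real tdb functions. */
int tdb_brlock_chain(struct tdb_context *tdb, unsigned int offset, int ltype);
int tdb_needs_recovery(struct tdb_context *tdb);
int tdb_run_recovery(struct tdb_context *tdb);

int tdb_data_lock(struct tdb_context *tdb, unsigned int offset, int ltype)
{
	if (tdb_brlock_chain(tdb, offset, ltype) != 0)
		return -1;

	/* We may have gained this lock because its previous holder
	 * was kill -9'd mid-commit: recover before touching data. */
	if (tdb_needs_recovery(tdb))
		return tdb_run_recovery(tdb);

	return 0;
}
```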
Rusty Russell authored
Reduce code duplication; this also gives us a central point for the next patch, which wants to cover all list locks.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

Rusty Russell authored
Now that the transaction code uses the standard allrecord lock, that lock stops us from trying to grab any per-record locks anyway. We don't need special noop lock ops for transactions. This is a nice simplification: if you see brlock, you know it's really going to grab a lock.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

- 23 Feb, 2010 3 commits

Rusty Russell authored

Rusty Russell authored
Records themselves get (read) locked by the traversal code against delete. Interestingly, this locking isn't done when the allrecord lock has been taken, even though, until recently, the allrecord lock didn't cover the actual records (it now goes to end of file). The write record lock, grabbed by the delete code, is not suppressed by the allrecord lock, which causes us to punch a hole in that lock when we release the write record lock.

Make this consistent: *no* record locks of any kind when the allrecord lock is taken.

Rusty Russell authored

- 22 Feb, 2010 13 commits

Rusty Russell authored

Rusty Russell authored
There's little point in ever shrinking the file, and it definitely breaks in the case where a process has died during a transaction commit and other processes have the tdb mapped.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

Rusty Russell authored
commit b37b452cb8c1f56b37b04abe7bffdede371ca361
Author: Rusty Russell <rusty@rustcorp.com.au>
Date:   Thu Feb 4 23:59:54 2010 +1030

tdb: fix recovery reuse after crash

If a process (or the machine) dies just after writing the recovery head (pointing at the end of file), the recovery record will be filled with 0x42. This will not invoke a recovery on open, since rec.magic != TDB_RECOVERY_MAGIC. Unfortunately, the first transaction commit will happily reuse that area: tdb_recovery_allocate() doesn't check the magic. The recovery record has length 0x42424242, and it writes that back into the now-valid-looking transaction header for the next comer (which happens to be tdb_wipe_all in my tests).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
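A sketch of the corresponding guard; the magic values and record layout below are assumptions for illustration.

```c
#include <stdint.h>

#define TDB_RECOVERY_MAGIC          0xf53bc0e7U  /* assumed value */
#define TDB_RECOVERY_INVALID_MAGIC  0x0U         /* assumed value */

struct tdb_record { uint32_t magic; uint32_t rec_len; };

/* Only reuse the recovery area if it actually looks like a recovery
 * record; a crash can leave arbitrary bytes (the 0x42 fill above)
 * with a garbage length such as 0x42424242. */
static int recovery_area_trustworthy(const struct tdb_record *rec)
{
	return rec->magic == TDB_RECOVERY_MAGIC ||
	       rec->magic == TDB_RECOVERY_INVALID_MAGIC;
}
```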
Rusty Russell authored
Since we now kill children, CLEAR_IF_FIRST can happen quite a bit.

Rusty Russell authored
Now that the transaction allrecord lock is the standard one, and thus is cleaned up in tdb_release_extra_locks(), _tdb_transaction_cancel() doesn't need to know what type it is.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

Rusty Russell authored
Centralize locking of all chains of the tdb: rename _tdb_lockall to tdb_allrecord_lock, _tdb_unlockall to tdb_allrecord_unlock, and tdb_brlock_upgrade to tdb_allrecord_upgrade. Then we use this in the transaction code.

Unfortunately, if the transaction code records that it has grabbed the allrecord lock read-only, write locks will fail, so we treat this upgradable lock as a write lock and mark it as upgradable using the otherwise-unused offset field.

One subtlety: now that the transaction code is using the allrecord lock, the tdb_release_extra_locks() function drops it for us, so we no longer need to do it manually in _tdb_transaction_cancel.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
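A sketch of the upgradable-lock bookkeeping described above; the entry layout and the marker value are assumptions, not the real tdb structures.

```c
#include <fcntl.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical lock-list entry; field names are assumptions. */
struct tdb_lock_entry {
	uint32_t off;  /* otherwise unused for the allrecord lock */
	int ltype;     /* F_RDLCK or F_WRLCK */
};

static void record_allrecord_lock(struct tdb_lock_entry *e, bool upgradable)
{
	/* Record it as a write lock, so our own later write-lock
	 * requests don't fail against a recorded read lock... */
	e->ltype = F_WRLCK;
	/* ...and stash the "may still be upgraded" marker in the
	 * offset field, which the allrecord lock never uses. */
	e->off = upgradable ? 1 : 0;
}
```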
Rusty Russell authored
We were previously inconsistent with our "global" lock: the transaction code grabbed it from FREELIST_TOP to end of file, and the rest of the code grabbed it from FREELIST_TOP to end of the hash chains. Change it to always grab to end of file for simplicity.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
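POSIX record locks make the "to end of file" choice easy: a length of zero in struct flock means the lock extends to EOF, however much the file grows. A sketch, where FREELIST_TOP stands for the tdb's first lockable offset:

```c
#include <fcntl.h>
#include <unistd.h>

static int lock_to_eof(int fd, off_t freelist_top, int ltype)
{
	struct flock fl = {
		.l_type   = ltype,         /* F_RDLCK or F_WRLCK */
		.l_whence = SEEK_SET,
		.l_start  = freelist_top,
		.l_len    = 0,             /* 0 = until end of file */
	};
	return fcntl(fd, F_SETLKW, &fl);
}
```

A side benefit of l_len == 0 is that a file which grows later stays covered without re-locking.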
Rusty Russell authored
This was redundant before this patch series: it mirrored num_lockrecs exactly. It still does. Also, skip the useless branch when locks == 1: an unconditional assignment is cheaper anyway.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

Rusty Russell authored
This is pure overhead, but it centralizes the locking. Realloc (especially as most implementations are lazy) is fast compared to the fcntl anyway.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
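A sketch of the centralized bookkeeping, with assumed structure names: every held lock goes into one realloc-grown array, giving a single place to inspect and release locks.

```c
#include <stdint.h>
#include <stdlib.h>

struct tdb_lock_record { uint32_t off; int ltype; };

struct lock_list {
	struct tdb_lock_record *locks;
	unsigned int num;
};

/* Append one held lock; one realloc per lock is cheap next to the
 * fcntl() call that preceded it. */
static int lock_list_append(struct lock_list *ll, uint32_t off, int ltype)
{
	struct tdb_lock_record *new = realloc(ll->locks,
					      (ll->num + 1) * sizeof(*new));
	if (!new)
		return -1;
	new[ll->num].off = off;
	new[ll->num].ltype = ltype;
	ll->locks = new;
	ll->num++;
	return 0;
}
```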
Rusty Russell authored
Rather than a boutique lock and a separate nest count, use our newly-generic nested lock tracking for the active lock. Note that the tdb_have_extra_locks() and tdb_release_extra_locks() functions have to skip over this lock now that it is tracked.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

Rusty Russell authored
This never nests, so it's overkill, but it centralizes the locking into lock.c and removes the ugly flag the transaction code used to track whether it held the lock. Note that we have a temporary hack so this places a real lock, despite the fact that we are in a transaction.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

Rusty Russell authored
Rather than a boutique lock and a separate nest count, use our newly-generic nested lock tracking for the transaction lock. Note that the tdb_have_extra_locks() and tdb_release_extra_locks() functions have to skip over this lock now that it is tracked.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
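A sketch of what generic nesting buys: re-taking a lock we already hold just bumps a count, and only the outermost take and the final release reach fcntl(). All names below are illustrative.

```c
#include <stdint.h>

/* Hypothetical entry in the generic lock list: a nest count
 * replaces the boutique per-lock counters. */
struct tdb_nest_lock {
	uint32_t off;        /* e.g. the transaction lock's offset */
	int ltype;           /* F_RDLCK or F_WRLCK */
	unsigned int count;  /* nesting depth */
};

static struct tdb_nest_lock transaction_lock;  /* for illustration */

static int take_lock(int (*real_lock)(void))
{
	if (transaction_lock.count > 0) {
		transaction_lock.count++;  /* already held: just nest */
		return 0;
	}
	if (real_lock() != 0)              /* outermost take: real fcntl lock */
		return -1;
	transaction_lock.count = 1;
	return 0;
}

static int release_lock(int (*real_unlock)(void))
{
	if (--transaction_lock.count > 0)
		return 0;                  /* still nested */
	return real_unlock();              /* final release */
}
```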
Rusty Russell authored
Factor out two loops which find locks; we are going to introduce a couple more, so a helper makes sense.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
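A sketch of the factored-out helper, with field names matching the lock-list sketches above (the real struct tdb_context differs): both "which of our held locks covers this offset?" loops collapse into one search.

```c
#include <stddef.h>
#include <stdint.h>

struct tdb_nest_lock { uint32_t off; int ltype; unsigned int count; };

struct tdb_context_sketch {        /* stand-in for struct tdb_context */
	struct tdb_nest_lock *lockrecs;
	unsigned int num_lockrecs;
};

static struct tdb_nest_lock *find_nestlock(struct tdb_context_sketch *tdb,
					   uint32_t offset)
{
	for (unsigned int i = 0; i < tdb->num_lockrecs; i++) {
		if (tdb->lockrecs[i].off == offset)
			return &tdb->lockrecs[i];
	}
	return NULL;
}
```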