Commit a5b66d70 authored by Rusty Russell

tdb2: relax locking to allow two free list locks at once

As long as they are taken in descending order.  This avoids the common unlock/relock pattern of:

1) Grab lock for bucket.
2) Remove entry from bucket.
3) Drop lock for bucket.
4) Grab lock for bucket for leftover.
5) Add leftover entry to bucket.
6) Drop lock for leftover bucket.

In particular it's quite common for the leftover bucket to be the same
as the entry bucket (when it's the largest bucket); if it's not, we are
no worse than before.
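The deadlock-freedom argument above can be sketched in C. This is a minimal illustration, not tdb2's actual implementation: `held_bucket`, `lock_bucket` and `lock_bucket_allowed` are hypothetical names, and only the ordering rule is modelled (no real fcntl locking). A nested bucket lock is permitted only if its offset is <= the bucket already held, so every process acquires locks in the same global order:

```c
#include <stdbool.h>
#include <stdint.h>

#define NO_LOCK UINT64_MAX

/* Innermost free-list bucket lock this process holds (sketch only). */
static uint64_t held_bucket = NO_LOCK;

/* Would taking b_off now respect the descending-order rule? */
static bool lock_bucket_allowed(uint64_t b_off)
{
	return held_bucket == NO_LOCK || b_off <= held_bucket;
}

/* Take the lock if the ordering rule permits it; we only track the
 * ordering here, no actual locking is performed. */
static bool lock_bucket(uint64_t b_off)
{
	if (!lock_bucket_allowed(b_off))
		return false;
	held_bucket = b_off;
	return true;
}
```

Because the leftover's bucket never sorts above the entry's bucket, the allocator can keep the entry bucket locked while adding the leftover record, removing one drop/regrab pair in the common case.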

Current results of speed test:
$ ./speed 1000000
Adding 1000000 records:  13194 ns (60083192 bytes)
Finding 1000000 records:  2438 ns (60083192 bytes)
Traversing 1000000 records:  2167 ns (60083192 bytes)
Deleting 1000000 records:  9265 ns (60083192 bytes)
Re-adding 1000000 records:  10241 ns (60083192 bytes)
Appending 1000000 records:  17692 ns (93879992 bytes)
Churning 1000000 records:  26077 ns (93879992 bytes)

Previous:
$ ./speed 1000000
Adding 1000000 records:  23210 ns (59193360 bytes)
Finding 1000000 records:  2387 ns (59193360 bytes)
Traversing 1000000 records:  2150 ns (59193360 bytes)
Deleting 1000000 records:  13392 ns (59193360 bytes)
Re-adding 1000000 records:  11546 ns (59193360 bytes)
Appending 1000000 records:  29327 ns (91193360 bytes)
Churning 1000000 records:  33026 ns (91193360 bytes)
parent 20defbbc
@@ -450,15 +450,17 @@ again:
 	if (tdb_write_convert(tdb, best_off, &rec, sizeof(rec)) != 0)
 		goto unlock_err;
-	tdb_unlock_free_bucket(tdb, b_off);
-
+	/* Bucket of leftover will be <= current bucket, so nested
+	 * locking is allowed. */
 	if (leftover) {
 		if (add_free_record(tdb,
 				    best_off + sizeof(rec)
 				    + frec_len(&best) - leftover,
 				    leftover))
-			return TDB_OFF_ERR;
+			best_off = TDB_OFF_ERR;
 	}
+	tdb_unlock_free_bucket(tdb, b_off);
 	return best_off;
 }