- 13 Dec, 2010 2 commits
-
Rusty Russell authored
-
Rusty Russell authored
It requires that we build the objects first.
-
- 08 Dec, 2010 4 commits
-
Rusty Russell authored
-
Ronnie Sahlberg authored
trbt_delete32() was broken and caused a SEGV as soon as you tried to delete an object from a tree. Rework trbt_delete32() to just call talloc_free() instead of calling delete_node() directly. This makes the "from_destructor" argument to delete_node() redundant, so that parameter is removed too.

Signed-off-by: Ronnie Sahlberg <sahlberg@lenovo-laptop.(none)>
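A minimal sketch of the resulting pattern, using made-up node and helper names rather than the real ccan/rbtree internals: the unlink work hangs off a talloc destructor, so deleting an entry is nothing more than talloc_free() on the node.

#include <stdint.h>
#include <ccan/talloc/talloc.h>

struct example_node {
        struct example_node *parent, *left, *right;
        uint32_t key;
        void *data;
};

/* Stand-in for the tree's real unlink/rebalance logic. */
static void unlink_node(struct example_node *node)
{
        /* ... detach from parent, rebalance, etc. (elided) ... */
        (void)node;
}

/* Runs automatically when the node is talloc_free()d, so the unlink
 * happens exactly once and no "from_destructor" flag is needed. */
static int example_node_destructor(struct example_node *node)
{
        unlink_node(node);
        return 0;
}

static struct example_node *example_node_new(void *ctx, uint32_t key)
{
        struct example_node *n = talloc_zero(ctx, struct example_node);

        if (n) {
                n->key = key;
                talloc_set_destructor(n, example_node_destructor);
        }
        return n;
}

/* Deleting by key then reduces to freeing the node that was found. */
static void example_delete(struct example_node *node)
{
        talloc_free(node);
}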
-
Rusty Russell authored
Unfortunately, gcc only warns if it sees an unknown attribute (in this case, gcc 4.1 vs "cold").
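One common way to cope, shown here as a sketch (ccan's real compiler.h may gate this differently, e.g. via a configure-time test), is to emit the attribute only when the compiler is known to understand it: gcc learned "cold" in 4.3, so older versions such as 4.1 merely warn about it, and -Werror then turns that warning fatal.

/* Illustrative guard only: the real ccan header may use a configure
 * check instead of a version check. */
#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3))
#define COLD __attribute__((cold))
#else
#define COLD            /* older gcc (e.g. 4.1): expand to nothing */
#endif

/* Example use: tell the optimizer this error path is rarely taken. */
void COLD report_fatal_error(const char *msg);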
-
Rusty Russell authored
cc -g -Wall -Wstrict-prototypes -Wold-style-definition -Wmissing-prototypes -Wmissing-declarations -I. -MD -Werror -c -o tools/ccanlint/tests/examples_run.o tools/ccanlint/tests/examples_run.c
cc1: warnings being treated as errors
tools/ccanlint/tests/examples_run.c: In function ‘scan_forv’:
tools/ccanlint/tests/examples_run.c:37: warning: passing argument 2 of ‘__builtin_va_copy’ discards qualifiers from pointer target type
tools/ccanlint/tests/examples_run.c:43: warning: passing argument 4 of ‘scan_forv’ from incompatible pointer type
tools/ccanlint/tests/examples_run.c:52: warning: passing argument 4 of ‘scan_forv’ from incompatible pointer type
tools/ccanlint/tests/examples_run.c:60: warning: passing argument 4 of ‘scan_forv’ from incompatible pointer type
tools/ccanlint/tests/examples_run.c: In function ‘scan_for’:
tools/ccanlint/tests/examples_run.c:78: warning: passing argument 4 of ‘scan_forv’ from incompatible pointer type
make: *** [tools/ccanlint/tests/examples_run.o] Error 1

It really doesn't like constifying a va_arg, so remove the const declaration.
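The underlying problem, reduced to a standalone example (this is not the ccanlint code): gcc objects to va_copy() being handed a const-qualified va_list, and under -Werror that kills the build, so the va_list parameter simply stays unqualified.

#include <stdarg.h>
#include <stdio.h>

/* If this parameter were "const va_list ap", gcc would warn that
 * va_copy() discards qualifiers from the pointer target type, and
 * -Werror would turn that into the failure shown above.  Leaving the
 * va_list unqualified is the fix. */
static void print_twice(const char *fmt, va_list ap)
{
        va_list copy;

        va_copy(copy, ap);      /* fine: ap is not const-qualified */
        vprintf(fmt, ap);
        vprintf(fmt, copy);
        va_end(copy);
}

static void print_twice_wrapper(const char *fmt, ...)
{
        va_list ap;

        va_start(ap, fmt);
        print_twice(fmt, ap);
        va_end(ap);
}

int main(void)
{
        print_twice_wrapper("hello %d\n", 42);
        return 0;
}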
-
- 07 Dec, 2010 1 commit
-
Rusty Russell authored
-
- 06 Dec, 2010 8 commits
-
Rusty Russell authored
Chris Cowan tracked down a SEGV in sub_alloc: idp->level can actually be equal to 7 (MAX_LEVEL) there, as it can be in sub_remove.
-
Rusty Russell authored
(Imported from SAMBA commit 2db1987f5a3a)

Right-shifting signed integers is undefined; indeed it seems that on AIX with their compiler, doing a 30-bit shift on (INT_MAX-200) gives 0, not 1 as we might expect (THIS WAS WRONG, REAL FIX LATER).

The obvious fix is to make id and oid unsigned: l (level count) is also logically unsigned. (Note: Samba doesn't generally get to ids > 1 billion, but ctdb does.)

Reported-by: Chris Cowan <cc@us.ibm.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
(Imported from b53f8c187de8)

Author: Rusty Russell <rusty@rustorp.com.au>
Date: Thu Jun 10 13:27:51 2010 -0700

Since idtree assigns sequentially, it rarely reaches high numbers. But such numbers can be forced with idr_get_new_above(), and that reveals two bugs:
1) Crash in sub_remove() caused by pa array being too short.
2) Shift by more than 32 in _idr_find(), which is undefined, causing the "outside the current tree" optimization to misfire and return NULL.
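Bug 2 is easy to demonstrate in isolation (this is a generic illustration, not the idtree code): shifting a 32-bit value by 32 or more bits is undefined, and on common hardware the shift count is reduced modulo 32, so a "this id is outside the current tree" test built on such a shift can silently pass.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint32_t id = 2000000000u;      /* a deliberately huge id */
        unsigned int shift = 35;        /* >= the width of uint32_t */

        /* Undefined behaviour: shifting a 32-bit value by 32 or more.
         * On x86 the count is typically taken mod 32, so this tends to
         * print 250000000 (i.e. id >> 3) rather than the 0 the caller
         * was relying on, which is how the optimization misfired. */
        printf("%u\n", (unsigned int)(id >> shift));
        return 0;
}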
-
Rusty Russell authored
This causes a SEGV on my laptop.
-
Rusty Russell authored
We insert comments when we massage or combine examples; don't let these throw off our analysis (as happened for idtree.h).
-
Rusty Russell authored
-
Rusty Russell authored
We might as well use the compiled .o rather than all the little .o files.
-
Rusty Russell authored
An out-by-one error had us using the character prior to the declaration, e.g. in "static int *foo" we use "*foo". This seems to compile, but is weird.
-
- 03 Dec, 2010 1 commit
-
Rusty Russell authored
Commit da72623a added a typo; ccanlint caught it, but doesn't consider a failing test compile to be fatal (it should!).
-
- 01 Dec, 2010 13 commits
-
Rusty Russell authored
-
Rusty Russell authored
Specifically the linked free tables, and reflect on the status of each point of the design document.
-
Rusty Russell authored
Rather than overloading TDB_USED_MAGIC and the hash value as we do now. We also rename "free list" to the more-accurate "free table" everywhere.
-
Rusty Russell authored
Currently we fall back to copying data during a transaction, but we don't need to in many cases. Grant direct access in those cases.

Before:
$ ./speed --transaction 1000000
Adding 1000000 records: 2409 ns (59916680 bytes)
Finding 1000000 records: 1156 ns (59916680 bytes)
Missing 1000000 records: 604 ns (59916680 bytes)
Missing 1000000 records: 604 ns (59916680 bytes)
Traversing 1000000 records: 1226 ns (59916680 bytes)
Deleting 1000000 records: 1556 ns (119361928 bytes)
Re-adding 1000000 records: 2326 ns (119361928 bytes)
Appending 1000000 records: 3269 ns (246656880 bytes)
Churning 1000000 records: 5613 ns (338235248 bytes)

After:
$ ./speed --transaction 1000000
Adding 1000000 records: 1902 ns (59916680 bytes)
Finding 1000000 records: 1032 ns (59916680 bytes)
Missing 1000000 records: 606 ns (59916680 bytes)
Missing 1000000 records: 606 ns (59916680 bytes)
Traversing 1000000 records: 741 ns (59916680 bytes)
Deleting 1000000 records: 1347 ns (119361928 bytes)
Re-adding 1000000 records: 1727 ns (119361928 bytes)
Appending 1000000 records: 2561 ns (246656880 bytes)
Churning 1000000 records: 4403 ns (338235248 bytes)
-
Rusty Russell authored
This is a precursor to direct access during transactions: they care about whether we are going to read or write to the file.
-
Rusty Russell authored
If we get enough hash collisions, we can run out of hash bits; this almost certainly is caused by a deliberate attempt to break the tdb (hash bombing). Implement chained records for this case; they're slow but will keep the rest of the database functioning.
-
Rusty Russell authored
We have to unlock during coalescing, so we mark records specially to indicate to tdb_check that they're not on any list, and to prevent other coalescers from grabbing them. Use a special free list number, rather than a new magic.
-
Rusty Russell authored
We already have 10 hash bits encoded in the offset itself; we only get here incorrectly about 1 time in 1000, so it's a pretty minor optimization at best. Nonetheless, we have the information, so let's check it before accessing the key. This reduces the probability of a false keycmp by another factor of 2000.
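The idea, sketched with made-up field and parameter names (not tdb2's actual record layout): compare the hash bits stored in the bucket entry against the corresponding bits of the lookup hash before ever reading the key.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical: the entry stores `nbits` bits of the full 64-bit hash,
 * starting above the `used_bits` already implied by the bucket/offset.
 * If they don't match, we can reject the entry without touching the
 * key data at all. */
static bool entry_hash_matches(uint64_t stored_bits, uint64_t full_hash,
                               unsigned int used_bits, unsigned int nbits)
{
        uint64_t mask = ((uint64_t)1 << nbits) - 1;

        return stored_bits == ((full_hash >> used_bits) & mask);
}

Storing 11 extra bits, for example, would let a false match survive to the full key comparison only about one time in 2048, which is the kind of factor-of-2000 reduction described above.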
-
Rusty Russell authored
-
Rusty Russell authored
Code that logs an error should always set tdb->ecode before the log function is called, and there's little reason to have a sprintf-style logging function since we can do the formatting internally. Change the tdb_log attribute to just take a "const char *", and create a new tdb_logerr() helper which sets ecode and calls it. As a bonus, mark it COLD so the compiler can optimize appropriately, knowing that it's unlikely to be invoked.
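Roughly the shape being described, as a hedged sketch with simplified stand-in types (not tdb2's real context or callback signature): the registered log function only ever receives a finished string, while a cold helper records the error code and does the formatting.

#include <stdarg.h>
#include <stdio.h>

#define COLD __attribute__((cold))      /* assumes a gcc that knows "cold" */

/* Simplified stand-ins for the real tdb context and log hook. */
struct ctx {
        int ecode;
        void (*logfn)(struct ctx *ctx, const char *message, void *priv);
        void *log_private;
};

/* Marked cold: only error paths call this, so the optimizer can keep
 * it out of the hot code's way. */
static void COLD logerr(struct ctx *ctx, int ecode, const char *fmt, ...)
{
        char msg[256];
        va_list ap;

        ctx->ecode = ecode;             /* ecode is always set here */

        va_start(ap, fmt);
        vsnprintf(msg, sizeof(msg), fmt, ap);
        va_end(ap);

        if (ctx->logfn)
                ctx->logfn(ctx, msg, ctx->log_private);  /* plain string */
}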
-
Rusty Russell authored
There was an idea that we would use a bit to indicate that we didn't have the full hash value; this would allow us to move records down when we expanded a hash without rehashing them. There's little evidence that rehashing in this case is particularly expensive, so remove the bit. We use that bit simply to indicate that an offset refers to a subhash instead.
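In other words the offset word carries a one-bit tag. A sketch of that encoding with illustrative names and an assumed bit position (not tdb2's actual layout):

#include <stdbool.h>
#include <stdint.h>

/* Illustrative: reserve one high bit of a 64-bit offset to mean "this
 * points at a subhash, not a record".  Real file offsets never come
 * close to needing that bit. */
#define SUBHASH_BIT     ((uint64_t)1 << 63)

static uint64_t mark_subhash(uint64_t off)
{
        return off | SUBHASH_BIT;
}

static bool is_subhash(uint64_t off)
{
        return (off & SUBHASH_BIT) != 0;
}

static uint64_t real_offset(uint64_t off)
{
        return off & ~SUBHASH_BIT;      /* strip the tag before seeking */
}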
-
Rusty Russell authored
This is one case where getting rid of tdb_get() cost us. Also, we add more read-only checks.

Before we removed tdb_get:
Adding 1000000 records: 6480 ns (59900296 bytes)
Finding 1000000 records: 2839 ns (59900296 bytes)
Missing 1000000 records: 2485 ns (59900296 bytes)
Traversing 1000000 records: 2598 ns (59900296 bytes)
Deleting 1000000 records: 5342 ns (59900296 bytes)
Re-adding 1000000 records: 5613 ns (59900296 bytes)
Appending 1000000 records: 12194 ns (93594224 bytes)
Churning 1000000 records: 14549 ns (93594224 bytes)

Now:
Adding 1000000 records: 6307 ns (59900296 bytes)
Finding 1000000 records: 2801 ns (59900296 bytes)
Missing 1000000 records: 2515 ns (59900296 bytes)
Traversing 1000000 records: 2579 ns (59900296 bytes)
Deleting 1000000 records: 5225 ns (59900296 bytes)
Re-adding 1000000 records: 5878 ns (59900296 bytes)
Appending 1000000 records: 12665 ns (93594224 bytes)
Churning 1000000 records: 16090 ns (93594224 bytes)
-
Rusty Russell authored
We have four internal helpers for reading data from the database:
1) tdb_read_convert() - read (and convert) into a buffer.
2) tdb_read_off() - read (and convert) an offset.
3) tdb_access_read() - malloc or direct access to the database.
4) tdb_get() - copy into a buffer or direct access to the database.

The last one doesn't really buy us anything, so remove it (except for tdb_read_off/tdb_write_off, see next patch).

Before:
Adding 1000000 records: 6480 ns (59900296 bytes)
Finding 1000000 records: 2839 ns (59900296 bytes)
Missing 1000000 records: 2485 ns (59900296 bytes)
Traversing 1000000 records: 2598 ns (59900296 bytes)
Deleting 1000000 records: 5342 ns (59900296 bytes)
Re-adding 1000000 records: 5613 ns (59900296 bytes)
Appending 1000000 records: 12194 ns (93594224 bytes)
Churning 1000000 records: 14549 ns (93594224 bytes)

After:
Adding 1000000 records: 6497 ns (59900296 bytes)
Finding 1000000 records: 2854 ns (59900296 bytes)
Missing 1000000 records: 2563 ns (59900296 bytes)
Traversing 1000000 records: 2735 ns (59900296 bytes)
Deleting 1000000 records: 11357 ns (59900296 bytes)
Re-adding 1000000 records: 8145 ns (59900296 bytes)
Appending 1000000 records: 10939 ns (93594224 bytes)
Churning 1000000 records: 18479 ns (93594224 bytes)
-
- 23 Nov, 2010 2 commits
-
Rusty Russell authored
Now it will build copies of other ccan deps if it can't find them.
-
Rusty Russell authored
We currently only have one, so shortcut the case where we want our current one.
-
- 01 Dec, 2010 1 commit
-
Rusty Russell authored
This is good for deep debugging.
-
- 23 Nov, 2010 6 commits
-
Rusty Russell authored
-
Rusty Russell authored
It's problematic for transaction commit to get the expansion lock, but in fact we always grab a hash lock before the transaction lock, so it doesn't really need it (the transaction locks the entire database). Assert that this is true, and fix up a few lowlevel tests where it wasn't.
-
Rusty Russell authored
I left much tdb1 code in various files for inspiration, and in case I needed it later. Now that we have all the major features implemented, remove it.
-
Rusty Russell authored
This adds transactions to tdb2; the code is taken from tdb1 with minimal modifications, as are the unit tests.
-
Rusty Russell authored
If we have a write lock and ask for a read lock, that's OK, but not the other way around. tdb_nest_lock() allowed both, tdb_allrecord_lock() allowed neither.
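The rule, reduced to a sketch built on the fcntl lock types tdb uses (the helper name is made up): a write lock we already hold satisfies a read request, but a held read lock must never be treated as satisfying a write request.

#include <fcntl.h>
#include <stdbool.h>

/* Hypothetical helper: does a lock we already hold cover a new request? */
static bool held_lock_covers(int held_type, int want_type)
{
        if (held_type == F_WRLCK)
                return true;    /* a write lock covers read and write */
        return held_type == F_RDLCK && want_type == F_RDLCK;
}

Measured against that rule, tdb_nest_lock() was too permissive and tdb_allrecord_lock() too strict.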
-
Rusty Russell authored
This wasn't fixed when we converted to ccan/opt in 8d706678. Unfortunately, unistd.h defines optarg, so the compiler didn't catch this.
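Why the compiler stayed quiet, in miniature: POSIX's <unistd.h> declares the getopt globals, so a leftover reference to optarg still compiles cleanly even after a program has moved to ccan/opt.

#include <stdio.h>
#include <unistd.h>     /* declares the getopt globals, including optarg */

int main(void)
{
        /* Compiles without complaint even though getopt() is never
         * called here; a stale use of optarg after switching to
         * ccan/opt is therefore easy to miss. */
        printf("optarg lives at %p\n", (void *)&optarg);
        return 0;
}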
-
- 22 Nov, 2010 2 commits
-
Rusty Russell authored
As long as they are in descending order. This prevents the common case of:
1) Grab lock for bucket.
2) Remove entry from bucket.
3) Drop lock for bucket.
4) Grab lock for bucket for leftover.
5) Add leftover entry to bucket.
6) Drop lock for leftover bucket.

In particular it's quite common for the leftover bucket to be the same as the entry bucket (when it's the largest bucket); if it's not, we are no worse than before.

Current results of speed test:
$ ./speed 1000000
Adding 1000000 records: 13194 ns (60083192 bytes)
Finding 1000000 records: 2438 ns (60083192 bytes)
Traversing 1000000 records: 2167 ns (60083192 bytes)
Deleting 1000000 records: 9265 ns (60083192 bytes)
Re-adding 1000000 records: 10241 ns (60083192 bytes)
Appending 1000000 records: 17692 ns (93879992 bytes)
Churning 1000000 records: 26077 ns (93879992 bytes)

Previous:
$ ./speed 1000000
Adding 1000000 records: 23210 ns (59193360 bytes)
Finding 1000000 records: 2387 ns (59193360 bytes)
Traversing 1000000 records: 2150 ns (59193360 bytes)
Deleting 1000000 records: 13392 ns (59193360 bytes)
Re-adding 1000000 records: 11546 ns (59193360 bytes)
Appending 1000000 records: 29327 ns (91193360 bytes)
Churning 1000000 records: 33026 ns (91193360 bytes)
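A sketch of the ordering rule with illustrative names (the real locking helpers are not shown): while one bucket lock is held, a second bucket may only be locked if its number is strictly lower, and the common same-bucket case needs no extra lock at all.

#include <stdbool.h>

/* Returns whether taking a lock on wanted_bucket is permitted while
 * holding held_bucket, and whether a new lock is needed at all. */
static bool may_lock_second_bucket(unsigned int held_bucket,
                                   unsigned int wanted_bucket,
                                   bool *need_new_lock)
{
        if (wanted_bucket == held_bucket) {
                *need_new_lock = false; /* common case: already covered */
                return true;
        }
        *need_new_lock = true;
        /* Descending order only: no two lockers can each be waiting for
         * a bucket the other holds, so there is no deadlock. */
        return wanted_bucket < held_bucket;
}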
-
Rusty Russell authored
This reduces the amount of expansion we do.

Before:
./speed 1000000
Adding 1000000 records: 23210 ns (59193360 bytes)
Finding 1000000 records: 2387 ns (59193360 bytes)
Traversing 1000000 records: 2150 ns (59193360 bytes)
Deleting 1000000 records: 13392 ns (59193360 bytes)
Re-adding 1000000 records: 11546 ns (59193360 bytes)
Appending 1000000 records: 29327 ns (91193360 bytes)
Churning 1000000 records: 33026 ns (91193360 bytes)

After:
$ ./speed 1000000
Adding 1000000 records: 17480 ns (61472904 bytes)
Finding 1000000 records: 2431 ns (61472904 bytes)
Traversing 1000000 records: 2194 ns (61472904 bytes)
Deleting 1000000 records: 10948 ns (61472904 bytes)
Re-adding 1000000 records: 11247 ns (61472904 bytes)
Appending 1000000 records: 21826 ns (96051424 bytes)
Churning 1000000 records: 27242 ns (96051424 bytes)
-