Commit 521d3ae5 authored by Kirill Smelkov

X zodb/cache: Also free OCE entries on GC

Until now we were only GC'ing RCE entries, but OCE entries (top-level
cache entry for an OID) were never GC'ed thus allowing cache size to
grow infinitely.
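
Roughly, the idea is as in the sketch below (simplified stand-in types and a
hypothetical helper name; the real change is in the diff that follows): after
GC evicts the last remaining RCE of an OID, the now-empty OCE is dropped from
Cache.entryMap.

package cache

import "sync"

type Oid uint64

// revCacheEntry is a stand-in for the real per-revision cache entry.
type revCacheEntry struct{}

// oidCacheEntry is the top-level cache entry for one OID.
type oidCacheEntry struct {
	oid Oid
	sync.Mutex
	rcev []*revCacheEntry // cached revisions
}

// Cache is a stand-in holding only the fields relevant to this sketch.
type Cache struct {
	mu       sync.Mutex
	entryMap map[Oid]*oidCacheEntry
}

// freeOCEIfEmpty drops oce from c.entryMap if it no longer holds any revision
// entries. GC calls it after evicting oce's last RCE; it must be called
// without c.mu and oce locked.
func (c *Cache) freeOCEIfEmpty(oce *oidCacheEntry) {
	c.mu.Lock()
	oce.Lock()
	// recheck under both locks: oce could have been looked up and
	// repopulated again while we were not holding its lock.
	if len(oce.rcev) == 0 {
		delete(c.entryMap, oce.oid)
	}
	oce.Unlock()
	c.mu.Unlock()
}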

Even though the average number of allocations stays the same, this change
lowers the pressure on the amount of memory allocated and thus helps avoid
some GC work, which speeds things up (see attachment).

Notes:

1. The Hit*/size=0 increase is probably due to Cache.entryMap constantly going
   back and forth between empty and 1 element (see the sketch after these
   notes). size=0 is not going to happen in practice, so we can accept the
   regression here.

   ( still, NoHit*/size=0 works faster ).

2. The HitProc/size=* regression should be within noise.
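
   To illustrate note 1, here is a hypothetical micro-benchmark (not part of
   this commit; name and shape are made up) of the entryMap churn: with an
   effectively zero-sized cache each load allocates a fresh top-level entry,
   inserts it into the map, and GC deletes it right away.

package cache

import "testing"

// BenchmarkEntryMapChurn illustrates the entryMap oscillating between empty
// and 1 element: every iteration allocates a new OCE-like entry, puts it into
// the map and immediately removes it again.
func BenchmarkEntryMapChurn(b *testing.B) {
	type oce struct{ oid uint64 }
	m := map[uint64]*oce{}
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		m[1] = &oce{oid: 1} // entry (re)created on lookup
		delete(m, 1)        // ... and dropped again right away by GC
	}
}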

name                        old time/op    new time/op    delta
NoopStorage-4                 56.6ns ± 1%    56.2ns ± 0%       ~     (p=0.079 n=5+5)
CacheStartup-4                1.24µs ± 6%    1.28µs ± 9%       ~     (p=0.595 n=5+5)
CacheNoHit/size=0-4           1.44µs ± 4%    0.93µs ± 2%    -35.26%  (p=0.008 n=5+5)
CacheNoHit/size=16-4          1.43µs ± 5%    0.92µs ± 1%    -35.19%  (p=0.008 n=5+5)
CacheNoHit/size=128-4         1.45µs ± 3%    0.94µs ± 1%    -35.03%  (p=0.008 n=5+5)
CacheNoHit/size=512-4         1.44µs ± 4%    0.97µs ± 2%    -32.64%  (p=0.008 n=5+5)
CacheNoHit/size=4096-4        1.45µs ± 2%    1.07µs ± 0%    -25.85%  (p=0.008 n=5+5)
CacheHit/size=0-4              131ns ± 2%     276ns ±22%   +110.99%  (p=0.008 n=5+5)
CacheHit/size=16-4             122ns ± 1%     121ns ± 1%       ~     (p=0.079 n=5+5)
CacheHit/size=128-4            126ns ± 2%     125ns ± 1%       ~     (p=0.563 n=5+5)
CacheHit/size=512-4            127ns ± 1%     126ns ± 0%       ~     (p=0.095 n=5+4)
CacheHit/size=4096-4           128ns ± 0%     128ns ± 0%       ~     (p=0.556 n=5+4)
NoopStoragePar-4              30.6ns ± 4%    31.2ns ±10%       ~     (p=0.690 n=5+5)
CacheStartupPar-4             1.44µs ± 5%    1.43µs ± 3%       ~     (p=0.690 n=5+5)
CacheNoHitPar/size=0-4        1.62µs ± 4%    1.04µs ± 1%    -35.76%  (p=0.008 n=5+5)
CacheNoHitPar/size=16-4       1.65µs ± 4%    1.05µs ± 1%    -36.39%  (p=0.008 n=5+5)
CacheNoHitPar/size=128-4      1.64µs ± 5%    1.05µs ± 1%    -35.84%  (p=0.008 n=5+5)
CacheNoHitPar/size=512-4      1.62µs ± 3%    1.08µs ± 1%    -33.10%  (p=0.008 n=5+5)
CacheNoHitPar/size=4096-4     1.68µs ± 1%    1.18µs ± 0%    -29.44%  (p=0.008 n=5+5)
CacheHitPar/size=0-4           215ns ± 0%     383ns ± 2%    +78.23%  (p=0.008 n=5+5)
CacheHitPar/size=16-4          214ns ± 2%     214ns ± 0%       ~     (p=0.786 n=5+5)
CacheHitPar/size=128-4         210ns ± 0%     209ns ± 0%       ~     (p=0.079 n=5+5)
CacheHitPar/size=512-4         207ns ± 0%     206ns ± 0%     -0.48%  (p=0.008 n=5+5)
CacheHitPar/size=4096-4        204ns ± 0%     202ns ± 0%     -0.98%  (p=0.000 n=5+4)
NoopStorageProc-4             31.4ns ± 7%    33.7ns ± 5%       ~     (p=0.151 n=5+5)
CacheStartupProc-4            1.13µs ± 5%    1.12µs ± 3%       ~     (p=0.690 n=5+5)
CacheNoHitProc/size=0-4       1.12µs ± 5%    0.62µs ± 1%    -44.52%  (p=0.008 n=5+5)
CacheNoHitProc/size=16-4      1.14µs ± 6%    0.63µs ± 1%    -45.14%  (p=0.008 n=5+5)
CacheNoHitProc/size=128-4     1.06µs ± 5%    0.64µs ± 2%    -40.12%  (p=0.008 n=5+5)
CacheNoHitProc/size=512-4     1.14µs ±11%    0.69µs ± 4%    -39.87%  (p=0.008 n=5+5)
CacheNoHitProc/size=4096-4    1.14µs ± 9%    0.68µs ± 2%    -40.21%  (p=0.008 n=5+5)
CacheHitProc/size=0-4         56.5ns ± 7%    84.6ns ±14%    +49.66%  (p=0.008 n=5+5)
CacheHitProc/size=16-4        55.8ns ± 0%    62.0ns ± 6%    +11.03%  (p=0.008 n=5+5)
CacheHitProc/size=128-4       56.6ns ± 0%    60.9ns ± 4%     +7.63%  (p=0.008 n=5+5)
CacheHitProc/size=512-4       57.3ns ± 0%    64.1ns ± 7%    +11.83%  (p=0.016 n=4+5)
CacheHitProc/size=4096-4      61.6ns ± 1%    69.7ns ± 5%    +13.29%  (p=0.008 n=5+5)

name                        old alloc/op   new alloc/op   delta
NoopStorage-4                  0.00B          0.00B            ~     (all equal)
CacheStartup-4                  269B ± 0%      285B ± 0%     +5.95%  (p=0.008 n=5+5)
CacheNoHit/size=0-4             225B ± 0%      153B ± 0%    -32.12%  (p=0.008 n=5+5)
CacheNoHit/size=16-4            225B ± 0%      153B ± 0%    -32.00%  (p=0.029 n=4+4)
CacheNoHit/size=128-4           225B ± 1%      153B ± 0%    -31.76%  (p=0.008 n=5+5)
CacheNoHit/size=512-4           225B ± 1%      154B ± 0%    -31.50%  (p=0.008 n=5+5)
CacheNoHit/size=4096-4          224B ± 0%      155B ± 0%    -30.80%  (p=0.008 n=5+5)
CacheHit/size=0-4              0.00B         13.40B ±42%      +Inf%  (p=0.008 n=5+5)
CacheHit/size=16-4             0.00B          0.00B            ~     (all equal)
CacheHit/size=128-4            0.00B          0.00B            ~     (all equal)
CacheHit/size=512-4            0.00B          0.00B            ~     (all equal)
CacheHit/size=4096-4           0.00B          0.00B            ~     (all equal)
NoopStoragePar-4               0.00B          0.00B            ~     (all equal)
CacheStartupPar-4               267B ± 0%      282B ± 1%     +5.67%  (p=0.016 n=4+5)
CacheNoHitPar/size=0-4          232B ± 1%      162B ± 1%    -30.11%  (p=0.008 n=5+5)
CacheNoHitPar/size=16-4         228B ± 1%      161B ± 0%    -29.21%  (p=0.008 n=5+5)
CacheNoHitPar/size=128-4        229B ± 1%      162B ± 0%    -29.43%  (p=0.008 n=5+5)
CacheNoHitPar/size=512-4        228B ± 1%      162B ± 1%    -28.86%  (p=0.008 n=5+5)
CacheNoHitPar/size=4096-4       224B ± 0%      166B ± 0%    -26.02%  (p=0.000 n=5+4)
CacheHitPar/size=0-4           1.00B ± 0%    13.60B ± 4%  +1260.00%  (p=0.008 n=5+5)
CacheHitPar/size=16-4          0.00B          0.00B            ~     (all equal)
CacheHitPar/size=128-4         0.00B          0.00B            ~     (all equal)
CacheHitPar/size=512-4         0.00B          0.00B            ~     (all equal)
CacheHitPar/size=4096-4        0.00B          0.00B            ~     (all equal)
NoopStorageProc-4              0.00B          0.00B            ~     (all equal)
CacheStartupProc-4              269B ± 0%      285B ± 0%     +5.95%  (p=0.008 n=5+5)
CacheNoHitProc/size=0-4         240B ± 0%      194B ± 0%    -19.17%  (p=0.000 n=5+4)
CacheNoHitProc/size=16-4        240B ± 2%      194B ± 1%    -19.38%  (p=0.008 n=5+5)
CacheNoHitProc/size=128-4       241B ± 0%      193B ± 1%    -20.00%  (p=0.016 n=4+5)
CacheNoHitProc/size=512-4       241B ± 1%      188B ± 2%    -22.06%  (p=0.008 n=5+5)
CacheNoHitProc/size=4096-4      240B ± 1%      179B ± 0%    -25.52%  (p=0.008 n=5+5)
CacheHitProc/size=0-4          0.00B          3.60B ±17%      +Inf%  (p=0.008 n=5+5)
CacheHitProc/size=16-4         0.00B          0.00B            ~     (all equal)
CacheHitProc/size=128-4        0.00B          0.00B            ~     (all equal)
CacheHitProc/size=512-4        0.00B          0.00B            ~     (all equal)
CacheHitProc/size=4096-4       0.00B          0.00B            ~     (all equal)

name                        old allocs/op  new allocs/op  delta
NoopStorage-4                   0.00           0.00            ~     (all equal)
CacheStartup-4                  5.00 ± 0%      5.00 ± 0%       ~     (all equal)
CacheNoHit/size=0-4             3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHit/size=16-4            3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHit/size=128-4           3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHit/size=512-4           3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHit/size=4096-4          3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheHit/size=0-4               0.00           0.00            ~     (all equal)
CacheHit/size=16-4              0.00           0.00            ~     (all equal)
CacheHit/size=128-4             0.00           0.00            ~     (all equal)
CacheHit/size=512-4             0.00           0.00            ~     (all equal)
CacheHit/size=4096-4            0.00           0.00            ~     (all equal)
NoopStoragePar-4                0.00           0.00            ~     (all equal)
CacheStartupPar-4               4.00 ± 0%      4.00 ± 0%       ~     (all equal)
CacheNoHitPar/size=0-4          3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHitPar/size=16-4         3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHitPar/size=128-4        3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHitPar/size=512-4        3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHitPar/size=4096-4       3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheHitPar/size=0-4            0.00           0.00            ~     (all equal)
CacheHitPar/size=16-4           0.00           0.00            ~     (all equal)
CacheHitPar/size=128-4          0.00           0.00            ~     (all equal)
CacheHitPar/size=512-4          0.00           0.00            ~     (all equal)
CacheHitPar/size=4096-4         0.00           0.00            ~     (all equal)
NoopStorageProc-4               0.00           0.00            ~     (all equal)
CacheStartupProc-4              5.00 ± 0%      5.00 ± 0%       ~     (all equal)
CacheNoHitProc/size=0-4         3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHitProc/size=16-4        3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHitProc/size=128-4       3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHitProc/size=512-4       3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHitProc/size=4096-4      3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheHitProc/size=0-4           0.00           0.00            ~     (all equal)
CacheHitProc/size=16-4          0.00           0.00            ~     (all equal)
CacheHitProc/size=128-4         0.00           0.00            ~     (all equal)
CacheHitProc/size=512-4         0.00           0.00            ~     (all equal)
CacheHitProc/size=4096-4        0.00           0.00            ~     (all equal)
parent 1f07e51a
@@ -59,6 +59,8 @@ type Cache struct {
 // oidCacheEntry maintains cached revisions for 1 oid
 type oidCacheEntry struct {
+	oid Oid
 	sync.Mutex
 	// cached revisions in ascending order
@@ -174,7 +176,7 @@ func (c *Cache) Load(ctx context.Context, xid Xid) (buf *Buf, serial Tid, err er
 	} else {
 		// XXX use connection poll
 		// XXX or it should be cared by loader?
-		c.loadRCE(ctx, rce, xid.Oid)
+		c.loadRCE(ctx, rce)
 	}
 	if rce.err != nil {
@@ -200,7 +202,7 @@ func (c *Cache) Prefetch(ctx context.Context, xid Xid) {
 	// spawn loading in the background if rce was not yet loaded
 	if rceNew {
 		// XXX use connection poll
-		go c.loadRCE(ctx, rce, xid.Oid)
+		go c.loadRCE(ctx, rce)
 	}
 }
@@ -237,7 +239,7 @@ func (c *Cache) lookupRCE(xid Xid, wantBufRef int) (rce *revCacheEntry, rceNew b
 	c.mu.Lock()
 	oce = c.entryMap[xid.Oid]
 	if oce == nil {
-		oce = &oidCacheEntry{}
+		oce = &oidCacheEntry{oid: xid.Oid}
 		c.entryMap[xid.Oid] = oce
 	}
 	cacheHead = c.head // reload c.head because we relocked the cache
@@ -306,9 +308,9 @@ func (c *Cache) lookupRCE(xid Xid, wantBufRef int) (rce *revCacheEntry, rceNew b
 //
 // rce must be new just created by lookupRCE() with returned rceNew=true.
 // loading completion is signalled by marking rce.ready done.
-func (c *Cache) loadRCE(ctx context.Context, rce *revCacheEntry, oid Oid) {
+func (c *Cache) loadRCE(ctx context.Context, rce *revCacheEntry) {
 	oce := rce.parent
-	buf, serial, err := c.loader.Load(ctx, Xid{At: rce.head, Oid: oid})
+	buf, serial, err := c.loader.Load(ctx, Xid{At: rce.head, Oid: oce.oid})
 	// normalize buf/serial if it was error
 	if err != nil {
@@ -322,7 +324,7 @@ func (c *Cache) loadRCE(ctx context.Context, rce *revCacheEntry, oid Oid) {
 	rce.err = err
 	// verify db gives serial <= head
 	if rce.serial > rce.head {
-		rce.markAsDBError(oid, "load(@%v) -> %v", rce.head, serial)
+		rce.markAsDBError("load(@%v) -> %v", rce.head, serial)
 	}
 	δsize := rce.buf.Len()
@@ -365,7 +367,7 @@ func (c *Cache) loadRCE(ctx context.Context, rce *revCacheEntry, oid Oid) {
 	rceDropped := false
 	if i + 1 < len(oce.rcev) {
 		rceNext := oce.rcev[i+1]
-		if rceNext.loaded() && tryMerge(rce, rceNext, rce, oid) {
+		if rceNext.loaded() && tryMerge(rce, rceNext, rce) {
			// not δsize -= len(rce.buf.Data)
			// tryMerge can change rce.buf if consistency is broken
			δsize = 0
@@ -380,7 +382,7 @@ func (c *Cache) loadRCE(ctx context.Context, rce *revCacheEntry, oid Oid) {
 	var rcePrevDropped *revCacheEntry
 	if i > 0 {
 		rcePrev := oce.rcev[i-1]
-		if rcePrev.loaded() && tryMerge(rcePrev, rce, rce, oid) {
+		if rcePrev.loaded() && tryMerge(rcePrev, rce, rce) {
			rcePrevDropped = rcePrev
			if rcePrev.accounted {
				δsize -= rcePrev.buf.Len()
@@ -433,9 +435,7 @@ func (c *Cache) loadRCE(ctx context.Context, rce *revCacheEntry, oid Oid) {
 // return: true if merging done and thus prev was dropped from parent
 //
 // must be called with .parent locked
-//
-// XXX move oid from args to revCacheEntry?
-func tryMerge(prev, next, cur *revCacheEntry, oid Oid) bool {
+func tryMerge(prev, next, cur *revCacheEntry) bool {
 	// can merge if consistent if
 	// (if merging)
@@ -458,12 +458,12 @@ func tryMerge(prev, next, cur *revCacheEntry, oid Oid) bool {
 	// check consistency
 	switch {
 	case prev.err == nil && prev.serial != next.serial:
-		cur.markAsDBError(oid, "load(@%v) -> %v; load(@%v) -> %v",
+		cur.markAsDBError("load(@%v) -> %v; load(@%v) -> %v",
 			prev.head, prev.serial, next.head, next.serial)
 	case prev.err != nil && !isErrNoData(prev.err):
 		if cur.err == nil {
-			cur.markAsDBError(oid, "load(@%v) -> %v; load(@%v) -> %v",
+			cur.markAsDBError("load(@%v) -> %v; load(@%v) -> %v",
 				prev.head, prev.err, next.head, next.serial)
 		}
 	}
@@ -539,11 +539,16 @@ func (c *Cache) gc() {
 		rce := h.rceFromInLRU()
 		oce := rce.parent
+		oceFree := false // whether to GC whole rce.parent OCE cache entry
 		oce.Lock()
 		i := oce.find(rce)
 		if i != -1 { // rce could be already deleted by e.g. merge
 			oce.deli(i)
+			if len(oce.rcev) == 0 {
+				oceFree = true
+			}
 			c.size -= rce.buf.Len()
 			//fmt.Printf("gc: free %d bytes\n", rce.buf.Len()))
@@ -558,6 +563,18 @@ func (c *Cache) gc() {
 		h.Delete()
 		c.gcMu.Unlock()
+		if oceFree {
+			c.mu.Lock()
+			oce.Lock()
+			// recheck once again oce is still not used
+			// (it could be looked up again in the meantime we were not holding its lock)
+			if len(oce.rcev) == 0 {
+				delete(c.entryMap, oce.oid)
+			}
+			oce.Unlock()
+			c.mu.Unlock()
+		}
 	}
 }
@@ -620,7 +637,7 @@ func (oce *oidCacheEntry) deli(i int) {
 	oce.rcev = oce.rcev[:n]
 }
-// del delets rce from .rcev.
+// del deletes rce from .rcev.
 // it panics if rce is not there.
 //
 // oce must be locked.
@@ -684,9 +701,9 @@ func errDB(oid Oid, format string, argv ...interface{}) error {
 //
 // Caller must be the only one to access rce.
 // In practice this means rce was just loaded but neither yet signalled to be
-// ready to waiter, nor yet made visible to GC (via adding to Cacle.lru list).
-func (rce *revCacheEntry) markAsDBError(oid Oid, format string, argv ...interface{}) {
-	rce.err = errDB(oid, format, argv...)
+// ready to waiter, nor yet made visible to GC (via adding to Cache.lru list).
+func (rce *revCacheEntry) markAsDBError(format string, argv ...interface{}) {
+	rce.err = errDB(rce.parent.oid, format, argv...)
 	rce.buf.XRelease()
 	rce.buf = nil
 	rce.serial = 0
@@ -170,6 +170,23 @@ func TestCache(t *testing.T) {
 		}
 	}
+	checkOIDV := func(oidvOk ...Oid) {
+		t.Helper()
+		var oidv []Oid
+		for oid := range c.entryMap {
+			oidv = append(oidv, oid)
+		}
+		sort.Slice(oidv, func(i, j int) bool {
+			return oidv[i] < oidv[j]
+		})
+		if !reflect.DeepEqual(oidv, oidvOk) {
+			t.Fatalf("oidv: %s", pretty.Compare(oidvOk, oidv))
+		}
+	}
 	checkRCE := func(rce *revCacheEntry, head, serial Tid, buf *Buf, err error) {
 		t.Helper()
 		bad := &bytes.Buffer{}
@@ -193,7 +210,10 @@ func TestCache(t *testing.T) {
 	checkOCE := func(oid Oid, rcev ...*revCacheEntry) {
 		t.Helper()
-		oce := c.entryMap[oid]
+		oce, ok := c.entryMap[oid]
+		if !ok {
+			t.Fatalf("oce(%v): not present in cache", oid)
+		}
 		oceRcev := oce.rcev
 		if len(oceRcev) == 0 {
 			oceRcev = nil // nil != []{}
@@ -233,10 +253,12 @@ func TestCache(t *testing.T) {
 	// ---- verify cache behaviour for must be loaded/merged entries ----
 	// (this exercises mostly loadRCE/tryMerge)
+	checkOIDV()
 	checkMRU(0)
 	// load @2 -> new rce entry
 	checkLoad(xidat(1,2), nil, 0, &ErrXidMissing{xidat(1,2)})
+	checkOIDV(1)
 	oce1 := c.entryMap[1]
 	ok1(len(oce1.rcev) == 1)
 	rce1_h2 := oce1.rcev[0]
@@ -245,6 +267,7 @@ func TestCache(t *testing.T) {
 	// load @3 -> 2] merged with 3]
 	checkLoad(xidat(1,3), nil, 0, &ErrXidMissing{xidat(1,3)})
+	checkOIDV(1)
 	ok1(len(oce1.rcev) == 1)
 	rce1_h3 := oce1.rcev[0]
 	ok1(rce1_h3 != rce1_h2) // rce1_h2 was merged into rce1_h3
@@ -253,6 +276,7 @@ func TestCache(t *testing.T) {
 	// load @1 -> 1] merged with 3]
 	checkLoad(xidat(1,1), nil, 0, &ErrXidMissing{xidat(1,1)})
+	checkOIDV(1)
 	ok1(len(oce1.rcev) == 1)
 	ok1(oce1.rcev[0] == rce1_h3)
 	checkRCE(rce1_h3, 3, 0, nil, &ErrXidMissing{xidat(1,3)})
@@ -260,6 +284,7 @@ func TestCache(t *testing.T) {
 	// load @5 -> new rce entry with data
 	checkLoad(xidat(1,5), b(hello), 4, nil)
+	checkOIDV(1)
 	ok1(len(oce1.rcev) == 2)
 	rce1_h5 := oce1.rcev[1]
 	checkRCE(rce1_h5, 5, 4, b(hello), nil)
@@ -268,11 +293,13 @@ func TestCache(t *testing.T) {
 	// load @4 -> 4] merged with 5]
 	checkLoad(xidat(1,4), b(hello), 4, nil)
+	checkOIDV(1)
 	checkOCE(1, rce1_h3, rce1_h5)
 	checkMRU(5, rce1_h5, rce1_h3)
 	// load @6 -> 5] merged with 6]
 	checkLoad(xidat(1,6), b(hello), 4, nil)
+	checkOIDV(1)
 	ok1(len(oce1.rcev) == 2)
 	rce1_h6 := oce1.rcev[1]
 	ok1(rce1_h6 != rce1_h5)
@@ -282,6 +309,7 @@ func TestCache(t *testing.T) {
 	// load @7 -> ioerr + new rce
 	checkLoad(xidat(1,7), nil, 0, ioerr)
+	checkOIDV(1)
 	ok1(len(oce1.rcev) == 3)
 	rce1_h7 := oce1.rcev[2]
 	checkRCE(rce1_h7, 7, 0, nil, ioerr)
@@ -290,6 +318,7 @@ func TestCache(t *testing.T) {
 	// load @9 -> ioerr + new rce (IO errors are not merged)
 	checkLoad(xidat(1,9), nil, 0, ioerr)
+	checkOIDV(1)
 	ok1(len(oce1.rcev) == 4)
 	rce1_h9 := oce1.rcev[3]
 	checkRCE(rce1_h9, 9, 0, nil, ioerr)
@@ -298,6 +327,7 @@ func TestCache(t *testing.T) {
 	// load @10 -> new data rce, not merged with ioerr at 9]
 	checkLoad(xidat(1,10), b(world), 10, nil)
+	checkOIDV(1)
 	ok1(len(oce1.rcev) == 5)
 	rce1_h10 := oce1.rcev[4]
 	checkRCE(rce1_h10, 10, 10, b(world), nil)
@@ -306,6 +336,7 @@ func TestCache(t *testing.T) {
 	// load @11 -> 10] merged with 11]
 	checkLoad(xidat(1,11), b(world), 10, nil)
+	checkOIDV(1)
 	ok1(len(oce1.rcev) == 5)
 	rce1_h11 := oce1.rcev[4]
 	ok1(rce1_h11 != rce1_h10)
@@ -323,6 +354,7 @@ func TestCache(t *testing.T) {
 	rce1_h15.serial = 10
 	rce1_h15.buf = mkbuf(world)
 	// here: first half of loadRCE(15]) before close(15].ready)
+	checkOIDV(1)
 	checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h9, rce1_h11, rce1_h15)
 	ok1(!rce1_h15.loaded())
 	checkMRU(12, rce1_h11, rce1_h9, rce1_h7, rce1_h6, rce1_h3) // no 15] yet
@@ -331,6 +363,7 @@ func TestCache(t *testing.T) {
 	// automatically at lookup phase)
 	rce1_h13, new13 := c.lookupRCE(xidat(1,13), +0)
 	ok1(new13)
+	checkOIDV(1)
 	checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h9, rce1_h11, rce1_h13, rce1_h15)
 	checkMRU(12, rce1_h11, rce1_h9, rce1_h7, rce1_h6, rce1_h3) // no <14 and <16 yet
@@ -338,13 +371,15 @@ func TestCache(t *testing.T) {
 	rce1_h15.waitBufRef = -1
 	rce1_h15.ready.Done()
 	ok1(rce1_h15.loaded())
+	checkOIDV(1)
 	checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h9, rce1_h11, rce1_h13, rce1_h15)
 	checkMRU(12, rce1_h11, rce1_h9, rce1_h7, rce1_h6, rce1_h3) // no 13] and 15] yet
 	// (13] also becomes ready and takes oce lock first, merging 11] and 13] into 15].
 	// 15] did not yet took oce lock so c.size is temporarily reduced and
 	// 15] is not yet on LRU list)
-	c.loadRCE(ctx, rce1_h13, 1)
+	c.loadRCE(ctx, rce1_h13)
+	checkOIDV(1)
 	checkRCE(rce1_h13, 13, 10, b(world), nil)
 	checkRCE(rce1_h15, 15, 10, b(world), nil)
 	checkRCE(rce1_h11, 11, 10, b(world), nil)
@@ -353,7 +388,8 @@ func TestCache(t *testing.T) {
 	// (15] takes oce lock and updates c.size and LRU list)
 	rce1_h15.ready.Add(1) // so loadRCE could run
-	c.loadRCE(ctx, rce1_h15, 1)
+	c.loadRCE(ctx, rce1_h15)
+	checkOIDV(1)
 	checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h9, rce1_h15)
 	checkMRU(12, rce1_h15, rce1_h9, rce1_h7, rce1_h6, rce1_h3)
@@ -370,11 +406,13 @@ func TestCache(t *testing.T) {
 	rce1_h16.waitBufRef = -1
 	rce1_h16.ready.Done()
 	ok1(rce1_h16.loaded())
+	checkOIDV(1)
 	checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h9, rce1_h15, rce1_h16, rce1_h17)
 	checkMRU(12, rce1_h15, rce1_h9, rce1_h7, rce1_h6, rce1_h3) // no 16] and 17] yet
 	// (17] loads and takes oce lock first - merge 16] with 17])
-	c.loadRCE(ctx, rce1_h17, 1)
+	c.loadRCE(ctx, rce1_h17)
+	checkOIDV(1)
 	checkRCE(rce1_h17, 17, 16, b(zz), nil)
 	checkRCE(rce1_h16, 16, 16, b(zz), nil)
 	checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h9, rce1_h15, rce1_h17)
@@ -385,6 +423,7 @@ func TestCache(t *testing.T) {
 	ok1(len(oce1.rcev) == 6)
 	rce1_h19 := oce1.rcev[5]
 	ok1(rce1_h19 != rce1_h17)
+	checkOIDV(1)
 	checkRCE(rce1_h19, 19, 16, b(zz), nil)
 	checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h9, rce1_h15, rce1_h19)
 	checkMRU(14, rce1_h19, rce1_h15, rce1_h9, rce1_h7, rce1_h6, rce1_h3)
@@ -393,6 +432,7 @@ func TestCache(t *testing.T) {
 	checkLoad(xidat(1,20), b(www), 20, nil)
 	ok1(len(oce1.rcev) == 7)
 	rce1_h20 := oce1.rcev[6]
+	checkOIDV(1)
 	checkRCE(rce1_h20, 20, 20, b(www), nil)
 	checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h9, rce1_h15, rce1_h19, rce1_h20)
 	checkMRU(17, rce1_h20, rce1_h19, rce1_h15, rce1_h9, rce1_h7, rce1_h6, rce1_h3)
@@ -402,6 +442,7 @@ func TestCache(t *testing.T) {
 	ok1(len(oce1.rcev) == 7)
 	rce1_h21 := oce1.rcev[6]
 	ok1(rce1_h21 != rce1_h20)
+	checkOIDV(1)
 	checkRCE(rce1_h21, 21, 20, b(www), nil)
 	checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h9, rce1_h15, rce1_h19, rce1_h21)
 	checkMRU(17, rce1_h21, rce1_h19, rce1_h15, rce1_h9, rce1_h7, rce1_h6, rce1_h3)
@@ -439,7 +480,8 @@ func TestCache(t *testing.T) {
 	// 8] must be separate from 7] and 9] because it is IO error there
 	rce1_h8, new8 := c.lookupRCE(xidat(1,8), +0)
 	ok1(new8)
-	c.loadRCE(ctx, rce1_h8, 1)
+	c.loadRCE(ctx, rce1_h8)
+	checkOIDV(1)
 	checkRCE(rce1_h8, 8, 0, nil, ioerr)
 	checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h8, rce1_h9, rce1_h15, rce1_h19, rce1_h21)
 	checkMRU(17, rce1_h8, rce1_h21, rce1_h19, rce1_h15, rce1_h9, rce1_h7, rce1_h6, rce1_h3)
@@ -488,6 +530,7 @@ func TestCache(t *testing.T) {
 	gcstart := &evCacheGCStart{c}
 	gcfinish := &evCacheGCFinish{c}
+	checkOIDV(1)
 	checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h8, rce1_h9, rce1_h15, rce1_h19, rce1_h21)
 	checkMRU(17, rce1_h15, rce1_h6, rce1_h8, rce1_h21, rce1_h19, rce1_h9, rce1_h7, rce1_h3)
@@ -499,6 +542,7 @@ func TestCache(t *testing.T) {
 	// - 7] (lru.2, ioerr, size=0)
 	// - 9] (lru.3, ioerr, size=0)
 	// - 19] (lru.4, zz, size=2)
+	checkOIDV(1)
 	checkOCE(1, rce1_h6, rce1_h8, rce1_h15, rce1_h21)
 	checkMRU(15, rce1_h15, rce1_h6, rce1_h8, rce1_h21)
@@ -511,6 +555,7 @@ func TestCache(t *testing.T) {
 	ok1(len(oce1.rcev) == 4)
 	rce1_h19_2 := oce1.rcev[3]
 	ok1(rce1_h19_2 != rce1_h19)
+	checkOIDV(1)
 	checkRCE(rce1_h19_2, 19, 16, b(zz), nil)
 	checkOCE(1, rce1_h6, rce1_h8, rce1_h15, rce1_h19_2)
 	checkMRU(14, rce1_h19_2, rce1_h15, rce1_h6, rce1_h8)
@@ -525,6 +570,7 @@ func TestCache(t *testing.T) {
 	// - loaded 77] (big, size=10)
 	ok1(len(oce1.rcev) == 2)
 	rce1_h77 := oce1.rcev[1]
+	checkOIDV(1)
 	checkRCE(rce1_h77, 77, 77, b(big), nil)
 	checkOCE(1, rce1_h19_2, rce1_h77)
 	checkMRU(12, rce1_h77, rce1_h19_2)
@@ -532,7 +578,7 @@ func TestCache(t *testing.T) {
 	// sizeMax=0 evicts everything from cache
 	go c.SetSizeMax(0)
 	tc.Expect(gcstart, gcfinish)
-	checkOCE(1)
+	checkOIDV()
 	checkMRU(0)
 	// and still loading works (because even if though rce's are evicted
@@ -540,7 +586,7 @@ func TestCache(t *testing.T) {
 	checkLoad(xidat(1,4), b(hello), 4, nil)
 	tc.Expect(gcstart, gcfinish)
-	checkOCE(1)
+	checkOIDV()
 	checkMRU(0)
 	// ---- Load vs concurrent GC ----