Commit 521d3ae5 authored by Kirill Smelkov

X zodb/cache: Also free OCE entries on GC

Until now we were only GC'ing RCE entries (per-revision cache entries), but
OCE entries (the top-level cache entry for an OID) were never GC'ed, thus
allowing cache size to grow without bound.
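
For orientation, a trimmed sketch of the two cache levels involved, with only
the fields this commit relies on (the real types also carry LRU linkage, load
synchronisation, etc.; Oid/Tid/Buf are simplified stand-ins):

    // sketch only: simplified model of the cache structure touched by this commit
    package cache

    import "sync"

    type Oid uint64
    type Tid uint64
    type Buf struct{ Data []byte }

    // revCacheEntry (RCE) caches one revision of an object.
    type revCacheEntry struct {
            parent       *oidCacheEntry // OCE this revision belongs to
            head, serial Tid
            buf          *Buf
            err          error
    }

    // oidCacheEntry (OCE) is the top-level cache entry for one OID.
    type oidCacheEntry struct {
            oid Oid
            sync.Mutex
            rcev []*revCacheEntry // cached revisions, ascending
    }

    type Cache struct {
            mu       sync.Mutex
            entryMap map[Oid]*oidCacheEntry // before this commit GC only trimmed .rcev,
                                            // so empty OCEs accumulated here forever
    }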

Even though the average N(allocations) stays the same, this change lowers
the amount of memory kept allocated, which helps avoid some GC work and
thus speeds things up (see attachment).

Notes:

1. The Hit*/size=0 increase is probably due to Cache.entryMap constantly going
   back and forth between empty and 1 element. size=0 is not going to happen
   in practice, so we can accept the regression here.

   (still, NoHit*/size=0 works faster).

2. The HitProc/size=* regression should be within noise.

name                        old time/op    new time/op    delta
NoopStorage-4                 56.6ns ± 1%    56.2ns ± 0%       ~     (p=0.079 n=5+5)
CacheStartup-4                1.24µs ± 6%    1.28µs ± 9%       ~     (p=0.595 n=5+5)
CacheNoHit/size=0-4           1.44µs ± 4%    0.93µs ± 2%    -35.26%  (p=0.008 n=5+5)
CacheNoHit/size=16-4          1.43µs ± 5%    0.92µs ± 1%    -35.19%  (p=0.008 n=5+5)
CacheNoHit/size=128-4         1.45µs ± 3%    0.94µs ± 1%    -35.03%  (p=0.008 n=5+5)
CacheNoHit/size=512-4         1.44µs ± 4%    0.97µs ± 2%    -32.64%  (p=0.008 n=5+5)
CacheNoHit/size=4096-4        1.45µs ± 2%    1.07µs ± 0%    -25.85%  (p=0.008 n=5+5)
CacheHit/size=0-4              131ns ± 2%     276ns ±22%   +110.99%  (p=0.008 n=5+5)
CacheHit/size=16-4             122ns ± 1%     121ns ± 1%       ~     (p=0.079 n=5+5)
CacheHit/size=128-4            126ns ± 2%     125ns ± 1%       ~     (p=0.563 n=5+5)
CacheHit/size=512-4            127ns ± 1%     126ns ± 0%       ~     (p=0.095 n=5+4)
CacheHit/size=4096-4           128ns ± 0%     128ns ± 0%       ~     (p=0.556 n=5+4)
NoopStoragePar-4              30.6ns ± 4%    31.2ns ±10%       ~     (p=0.690 n=5+5)
CacheStartupPar-4             1.44µs ± 5%    1.43µs ± 3%       ~     (p=0.690 n=5+5)
CacheNoHitPar/size=0-4        1.62µs ± 4%    1.04µs ± 1%    -35.76%  (p=0.008 n=5+5)
CacheNoHitPar/size=16-4       1.65µs ± 4%    1.05µs ± 1%    -36.39%  (p=0.008 n=5+5)
CacheNoHitPar/size=128-4      1.64µs ± 5%    1.05µs ± 1%    -35.84%  (p=0.008 n=5+5)
CacheNoHitPar/size=512-4      1.62µs ± 3%    1.08µs ± 1%    -33.10%  (p=0.008 n=5+5)
CacheNoHitPar/size=4096-4     1.68µs ± 1%    1.18µs ± 0%    -29.44%  (p=0.008 n=5+5)
CacheHitPar/size=0-4           215ns ± 0%     383ns ± 2%    +78.23%  (p=0.008 n=5+5)
CacheHitPar/size=16-4          214ns ± 2%     214ns ± 0%       ~     (p=0.786 n=5+5)
CacheHitPar/size=128-4         210ns ± 0%     209ns ± 0%       ~     (p=0.079 n=5+5)
CacheHitPar/size=512-4         207ns ± 0%     206ns ± 0%     -0.48%  (p=0.008 n=5+5)
CacheHitPar/size=4096-4        204ns ± 0%     202ns ± 0%     -0.98%  (p=0.000 n=5+4)
NoopStorageProc-4             31.4ns ± 7%    33.7ns ± 5%       ~     (p=0.151 n=5+5)
CacheStartupProc-4            1.13µs ± 5%    1.12µs ± 3%       ~     (p=0.690 n=5+5)
CacheNoHitProc/size=0-4       1.12µs ± 5%    0.62µs ± 1%    -44.52%  (p=0.008 n=5+5)
CacheNoHitProc/size=16-4      1.14µs ± 6%    0.63µs ± 1%    -45.14%  (p=0.008 n=5+5)
CacheNoHitProc/size=128-4     1.06µs ± 5%    0.64µs ± 2%    -40.12%  (p=0.008 n=5+5)
CacheNoHitProc/size=512-4     1.14µs ±11%    0.69µs ± 4%    -39.87%  (p=0.008 n=5+5)
CacheNoHitProc/size=4096-4    1.14µs ± 9%    0.68µs ± 2%    -40.21%  (p=0.008 n=5+5)
CacheHitProc/size=0-4         56.5ns ± 7%    84.6ns ±14%    +49.66%  (p=0.008 n=5+5)
CacheHitProc/size=16-4        55.8ns ± 0%    62.0ns ± 6%    +11.03%  (p=0.008 n=5+5)
CacheHitProc/size=128-4       56.6ns ± 0%    60.9ns ± 4%     +7.63%  (p=0.008 n=5+5)
CacheHitProc/size=512-4       57.3ns ± 0%    64.1ns ± 7%    +11.83%  (p=0.016 n=4+5)
CacheHitProc/size=4096-4      61.6ns ± 1%    69.7ns ± 5%    +13.29%  (p=0.008 n=5+5)

name                        old alloc/op   new alloc/op   delta
NoopStorage-4                  0.00B          0.00B            ~     (all equal)
CacheStartup-4                  269B ± 0%      285B ± 0%     +5.95%  (p=0.008 n=5+5)
CacheNoHit/size=0-4             225B ± 0%      153B ± 0%    -32.12%  (p=0.008 n=5+5)
CacheNoHit/size=16-4            225B ± 0%      153B ± 0%    -32.00%  (p=0.029 n=4+4)
CacheNoHit/size=128-4           225B ± 1%      153B ± 0%    -31.76%  (p=0.008 n=5+5)
CacheNoHit/size=512-4           225B ± 1%      154B ± 0%    -31.50%  (p=0.008 n=5+5)
CacheNoHit/size=4096-4          224B ± 0%      155B ± 0%    -30.80%  (p=0.008 n=5+5)
CacheHit/size=0-4              0.00B         13.40B ±42%      +Inf%  (p=0.008 n=5+5)
CacheHit/size=16-4             0.00B          0.00B            ~     (all equal)
CacheHit/size=128-4            0.00B          0.00B            ~     (all equal)
CacheHit/size=512-4            0.00B          0.00B            ~     (all equal)
CacheHit/size=4096-4           0.00B          0.00B            ~     (all equal)
NoopStoragePar-4               0.00B          0.00B            ~     (all equal)
CacheStartupPar-4               267B ± 0%      282B ± 1%     +5.67%  (p=0.016 n=4+5)
CacheNoHitPar/size=0-4          232B ± 1%      162B ± 1%    -30.11%  (p=0.008 n=5+5)
CacheNoHitPar/size=16-4         228B ± 1%      161B ± 0%    -29.21%  (p=0.008 n=5+5)
CacheNoHitPar/size=128-4        229B ± 1%      162B ± 0%    -29.43%  (p=0.008 n=5+5)
CacheNoHitPar/size=512-4        228B ± 1%      162B ± 1%    -28.86%  (p=0.008 n=5+5)
CacheNoHitPar/size=4096-4       224B ± 0%      166B ± 0%    -26.02%  (p=0.000 n=5+4)
CacheHitPar/size=0-4           1.00B ± 0%    13.60B ± 4%  +1260.00%  (p=0.008 n=5+5)
CacheHitPar/size=16-4          0.00B          0.00B            ~     (all equal)
CacheHitPar/size=128-4         0.00B          0.00B            ~     (all equal)
CacheHitPar/size=512-4         0.00B          0.00B            ~     (all equal)
CacheHitPar/size=4096-4        0.00B          0.00B            ~     (all equal)
NoopStorageProc-4              0.00B          0.00B            ~     (all equal)
CacheStartupProc-4              269B ± 0%      285B ± 0%     +5.95%  (p=0.008 n=5+5)
CacheNoHitProc/size=0-4         240B ± 0%      194B ± 0%    -19.17%  (p=0.000 n=5+4)
CacheNoHitProc/size=16-4        240B ± 2%      194B ± 1%    -19.38%  (p=0.008 n=5+5)
CacheNoHitProc/size=128-4       241B ± 0%      193B ± 1%    -20.00%  (p=0.016 n=4+5)
CacheNoHitProc/size=512-4       241B ± 1%      188B ± 2%    -22.06%  (p=0.008 n=5+5)
CacheNoHitProc/size=4096-4      240B ± 1%      179B ± 0%    -25.52%  (p=0.008 n=5+5)
CacheHitProc/size=0-4          0.00B          3.60B ±17%      +Inf%  (p=0.008 n=5+5)
CacheHitProc/size=16-4         0.00B          0.00B            ~     (all equal)
CacheHitProc/size=128-4        0.00B          0.00B            ~     (all equal)
CacheHitProc/size=512-4        0.00B          0.00B            ~     (all equal)
CacheHitProc/size=4096-4       0.00B          0.00B            ~     (all equal)

name                        old allocs/op  new allocs/op  delta
NoopStorage-4                   0.00           0.00            ~     (all equal)
CacheStartup-4                  5.00 ± 0%      5.00 ± 0%       ~     (all equal)
CacheNoHit/size=0-4             3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHit/size=16-4            3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHit/size=128-4           3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHit/size=512-4           3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHit/size=4096-4          3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheHit/size=0-4               0.00           0.00            ~     (all equal)
CacheHit/size=16-4              0.00           0.00            ~     (all equal)
CacheHit/size=128-4             0.00           0.00            ~     (all equal)
CacheHit/size=512-4             0.00           0.00            ~     (all equal)
CacheHit/size=4096-4            0.00           0.00            ~     (all equal)
NoopStoragePar-4                0.00           0.00            ~     (all equal)
CacheStartupPar-4               4.00 ± 0%      4.00 ± 0%       ~     (all equal)
CacheNoHitPar/size=0-4          3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHitPar/size=16-4         3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHitPar/size=128-4        3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHitPar/size=512-4        3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHitPar/size=4096-4       3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheHitPar/size=0-4            0.00           0.00            ~     (all equal)
CacheHitPar/size=16-4           0.00           0.00            ~     (all equal)
CacheHitPar/size=128-4          0.00           0.00            ~     (all equal)
CacheHitPar/size=512-4          0.00           0.00            ~     (all equal)
CacheHitPar/size=4096-4         0.00           0.00            ~     (all equal)
NoopStorageProc-4               0.00           0.00            ~     (all equal)
CacheStartupProc-4              5.00 ± 0%      5.00 ± 0%       ~     (all equal)
CacheNoHitProc/size=0-4         3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHitProc/size=16-4        3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHitProc/size=128-4       3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHitProc/size=512-4       3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheNoHitProc/size=4096-4      3.00 ± 0%      3.00 ± 0%       ~     (all equal)
CacheHitProc/size=0-4           0.00           0.00            ~     (all equal)
CacheHitProc/size=16-4          0.00           0.00            ~     (all equal)
CacheHitProc/size=128-4         0.00           0.00            ~     (all equal)
CacheHitProc/size=512-4         0.00           0.00            ~     (all equal)
CacheHitProc/size=4096-4        0.00           0.00            ~     (all equal)
parent 1f07e51a
@@ -59,6 +59,8 @@ type Cache struct {
// oidCacheEntry maintains cached revisions for 1 oid
type oidCacheEntry struct {
oid Oid
sync.Mutex
// cached revisions in ascending order
@@ -174,7 +176,7 @@ func (c *Cache) Load(ctx context.Context, xid Xid) (buf *Buf, serial Tid, err er
} else {
// XXX use connection poll
// XXX or it should be cared by loader?
c.loadRCE(ctx, rce, xid.Oid)
c.loadRCE(ctx, rce)
}
if rce.err != nil {
@@ -200,7 +202,7 @@ func (c *Cache) Prefetch(ctx context.Context, xid Xid) {
// spawn loading in the background if rce was not yet loaded
if rceNew {
// XXX use connection poll
go c.loadRCE(ctx, rce, xid.Oid)
go c.loadRCE(ctx, rce)
}
}
@@ -237,7 +239,7 @@ func (c *Cache) lookupRCE(xid Xid, wantBufRef int) (rce *revCacheEntry, rceNew b
c.mu.Lock()
oce = c.entryMap[xid.Oid]
if oce == nil {
oce = &oidCacheEntry{}
oce = &oidCacheEntry{oid: xid.Oid}
c.entryMap[xid.Oid] = oce
}
cacheHead = c.head // reload c.head because we relocked the cache
@@ -306,9 +308,9 @@ func (c *Cache) lookupRCE(xid Xid, wantBufRef int) (rce *revCacheEntry, rceNew b
//
// rce must be new just created by lookupRCE() with returned rceNew=true.
// loading completion is signalled by marking rce.ready done.
func (c *Cache) loadRCE(ctx context.Context, rce *revCacheEntry, oid Oid) {
func (c *Cache) loadRCE(ctx context.Context, rce *revCacheEntry) {
oce := rce.parent
buf, serial, err := c.loader.Load(ctx, Xid{At: rce.head, Oid: oid})
buf, serial, err := c.loader.Load(ctx, Xid{At: rce.head, Oid: oce.oid})
// normalize buf/serial if it was error
if err != nil {
@@ -322,7 +324,7 @@ func (c *Cache) loadRCE(ctx context.Context, rce *revCacheEntry, oid Oid) {
rce.err = err
// verify db gives serial <= head
if rce.serial > rce.head {
rce.markAsDBError(oid, "load(@%v) -> %v", rce.head, serial)
rce.markAsDBError("load(@%v) -> %v", rce.head, serial)
}
δsize := rce.buf.Len()
@@ -365,7 +367,7 @@ func (c *Cache) loadRCE(ctx context.Context, rce *revCacheEntry, oid Oid) {
rceDropped := false
if i + 1 < len(oce.rcev) {
rceNext := oce.rcev[i+1]
if rceNext.loaded() && tryMerge(rce, rceNext, rce, oid) {
if rceNext.loaded() && tryMerge(rce, rceNext, rce) {
// not δsize -= len(rce.buf.Data)
// tryMerge can change rce.buf if consistency is broken
δsize = 0
@@ -380,7 +382,7 @@ func (c *Cache) loadRCE(ctx context.Context, rce *revCacheEntry, oid Oid) {
var rcePrevDropped *revCacheEntry
if i > 0 {
rcePrev := oce.rcev[i-1]
if rcePrev.loaded() && tryMerge(rcePrev, rce, rce, oid) {
if rcePrev.loaded() && tryMerge(rcePrev, rce, rce) {
rcePrevDropped = rcePrev
if rcePrev.accounted {
δsize -= rcePrev.buf.Len()
@@ -433,9 +435,7 @@ func (c *Cache) loadRCE(ctx context.Context, rce *revCacheEntry, oid Oid) {
// return: true if merging done and thus prev was dropped from parent
//
// must be called with .parent locked
//
// XXX move oid from args to revCacheEntry?
func tryMerge(prev, next, cur *revCacheEntry, oid Oid) bool {
func tryMerge(prev, next, cur *revCacheEntry) bool {
// can merge if consistent if
// (if merging)
@@ -458,12 +458,12 @@ func tryMerge(prev, next, cur *revCacheEntry, oid Oid) bool {
// check consistency
switch {
case prev.err == nil && prev.serial != next.serial:
cur.markAsDBError(oid, "load(@%v) -> %v; load(@%v) -> %v",
cur.markAsDBError("load(@%v) -> %v; load(@%v) -> %v",
prev.head, prev.serial, next.head, next.serial)
case prev.err != nil && !isErrNoData(prev.err):
if cur.err == nil {
cur.markAsDBError(oid, "load(@%v) -> %v; load(@%v) -> %v",
cur.markAsDBError("load(@%v) -> %v; load(@%v) -> %v",
prev.head, prev.err, next.head, next.serial)
}
}
@@ -539,11 +539,16 @@ func (c *Cache) gc() {
rce := h.rceFromInLRU()
oce := rce.parent
oceFree := false // whether to GC whole rce.parent OCE cache entry
oce.Lock()
i := oce.find(rce)
if i != -1 { // rce could be already deleted by e.g. merge
oce.deli(i)
if len(oce.rcev) == 0 {
oceFree = true
}
c.size -= rce.buf.Len()
//fmt.Printf("gc: free %d bytes\n", rce.buf.Len()))
@@ -558,6 +563,18 @@ func (c *Cache) gc() {
h.Delete()
c.gcMu.Unlock()
if oceFree {
c.mu.Lock()
oce.Lock()
// recheck once again oce is still not used
// (it could be looked up again in the meantime we were not holding its lock)
if len(oce.rcev) == 0 {
delete(c.entryMap, oce.oid)
}
oce.Unlock()
c.mu.Unlock()
}
}
}
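
The hunk above frees a now-empty OCE only after re-taking both the cache lock
and the OCE lock, since a concurrent lookup could have started using that OCE
again while gc held neither lock. A standalone sketch of that recheck pattern,
reusing the trimmed types sketched near the top of this message (freeEmptyOCE
is a hypothetical helper name, not a function from this commit):

    // freeEmptyOCE drops oce from the cache map, but only if it is still empty.
    // The caller must not hold any locks: the cache lock is taken first, then the
    // OCE lock, and emptiness is rechecked under both.
    func (c *Cache) freeEmptyOCE(oce *oidCacheEntry) {
            c.mu.Lock()
            oce.Lock()
            // recheck: the OCE could have been looked up and repopulated
            // while no locks were held
            if len(oce.rcev) == 0 {
                    delete(c.entryMap, oce.oid)
            }
            oce.Unlock()
            c.mu.Unlock()
    }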
@@ -620,7 +637,7 @@ func (oce *oidCacheEntry) deli(i int) {
oce.rcev = oce.rcev[:n]
}
// del delets rce from .rcev.
// del deletes rce from .rcev.
// it panics if rce is not there.
//
// oce must be locked.
@@ -684,9 +701,9 @@ func errDB(oid Oid, format string, argv ...interface{}) error {
//
// Caller must be the only one to access rce.
// In practice this means rce was just loaded but neither yet signalled to be
// ready to waiter, nor yet made visible to GC (via adding to Cacle.lru list).
func (rce *revCacheEntry) markAsDBError(oid Oid, format string, argv ...interface{}) {
rce.err = errDB(oid, format, argv...)
// ready to waiter, nor yet made visible to GC (via adding to Cache.lru list).
func (rce *revCacheEntry) markAsDBError(format string, argv ...interface{}) {
rce.err = errDB(rce.parent.oid, format, argv...)
rce.buf.XRelease()
rce.buf = nil
rce.serial = 0
@@ -170,6 +170,23 @@ func TestCache(t *testing.T) {
}
}
checkOIDV := func(oidvOk ...Oid) {
t.Helper()
var oidv []Oid
for oid := range c.entryMap {
oidv = append(oidv, oid)
}
sort.Slice(oidv, func(i, j int) bool {
return oidv[i] < oidv[j]
})
if !reflect.DeepEqual(oidv, oidvOk) {
t.Fatalf("oidv: %s", pretty.Compare(oidvOk, oidv))
}
}
checkRCE := func(rce *revCacheEntry, head, serial Tid, buf *Buf, err error) {
t.Helper()
bad := &bytes.Buffer{}
@@ -193,7 +210,10 @@ func TestCache(t *testing.T) {
checkOCE := func(oid Oid, rcev ...*revCacheEntry) {
t.Helper()
oce := c.entryMap[oid]
oce, ok := c.entryMap[oid]
if !ok {
t.Fatalf("oce(%v): not present in cache", oid)
}
oceRcev := oce.rcev
if len(oceRcev) == 0 {
oceRcev = nil // nil != []{}
@@ -233,10 +253,12 @@ func TestCache(t *testing.T) {
// ---- verify cache behaviour for must be loaded/merged entries ----
// (this exercises mostly loadRCE/tryMerge)
checkOIDV()
checkMRU(0)
// load @2 -> new rce entry
checkLoad(xidat(1,2), nil, 0, &ErrXidMissing{xidat(1,2)})
checkOIDV(1)
oce1 := c.entryMap[1]
ok1(len(oce1.rcev) == 1)
rce1_h2 := oce1.rcev[0]
@@ -245,6 +267,7 @@ func TestCache(t *testing.T) {
// load @3 -> 2] merged with 3]
checkLoad(xidat(1,3), nil, 0, &ErrXidMissing{xidat(1,3)})
checkOIDV(1)
ok1(len(oce1.rcev) == 1)
rce1_h3 := oce1.rcev[0]
ok1(rce1_h3 != rce1_h2) // rce1_h2 was merged into rce1_h3
@@ -253,6 +276,7 @@ func TestCache(t *testing.T) {
// load @1 -> 1] merged with 3]
checkLoad(xidat(1,1), nil, 0, &ErrXidMissing{xidat(1,1)})
checkOIDV(1)
ok1(len(oce1.rcev) == 1)
ok1(oce1.rcev[0] == rce1_h3)
checkRCE(rce1_h3, 3, 0, nil, &ErrXidMissing{xidat(1,3)})
@@ -260,6 +284,7 @@ func TestCache(t *testing.T) {
// load @5 -> new rce entry with data
checkLoad(xidat(1,5), b(hello), 4, nil)
checkOIDV(1)
ok1(len(oce1.rcev) == 2)
rce1_h5 := oce1.rcev[1]
checkRCE(rce1_h5, 5, 4, b(hello), nil)
@@ -268,11 +293,13 @@ func TestCache(t *testing.T) {
// load @4 -> 4] merged with 5]
checkLoad(xidat(1,4), b(hello), 4, nil)
checkOIDV(1)
checkOCE(1, rce1_h3, rce1_h5)
checkMRU(5, rce1_h5, rce1_h3)
// load @6 -> 5] merged with 6]
checkLoad(xidat(1,6), b(hello), 4, nil)
checkOIDV(1)
ok1(len(oce1.rcev) == 2)
rce1_h6 := oce1.rcev[1]
ok1(rce1_h6 != rce1_h5)
@@ -282,6 +309,7 @@ func TestCache(t *testing.T) {
// load @7 -> ioerr + new rce
checkLoad(xidat(1,7), nil, 0, ioerr)
checkOIDV(1)
ok1(len(oce1.rcev) == 3)
rce1_h7 := oce1.rcev[2]
checkRCE(rce1_h7, 7, 0, nil, ioerr)
@@ -290,6 +318,7 @@ func TestCache(t *testing.T) {
// load @9 -> ioerr + new rce (IO errors are not merged)
checkLoad(xidat(1,9), nil, 0, ioerr)
checkOIDV(1)
ok1(len(oce1.rcev) == 4)
rce1_h9 := oce1.rcev[3]
checkRCE(rce1_h9, 9, 0, nil, ioerr)
@@ -298,6 +327,7 @@ func TestCache(t *testing.T) {
// load @10 -> new data rce, not merged with ioerr at 9]
checkLoad(xidat(1,10), b(world), 10, nil)
checkOIDV(1)
ok1(len(oce1.rcev) == 5)
rce1_h10 := oce1.rcev[4]
checkRCE(rce1_h10, 10, 10, b(world), nil)
@@ -306,6 +336,7 @@ func TestCache(t *testing.T) {
// load @11 -> 10] merged with 11]
checkLoad(xidat(1,11), b(world), 10, nil)
checkOIDV(1)
ok1(len(oce1.rcev) == 5)
rce1_h11 := oce1.rcev[4]
ok1(rce1_h11 != rce1_h10)
@@ -323,6 +354,7 @@ func TestCache(t *testing.T) {
rce1_h15.serial = 10
rce1_h15.buf = mkbuf(world)
// here: first half of loadRCE(15]) before close(15].ready)
checkOIDV(1)
checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h9, rce1_h11, rce1_h15)
ok1(!rce1_h15.loaded())
checkMRU(12, rce1_h11, rce1_h9, rce1_h7, rce1_h6, rce1_h3) // no 15] yet
@@ -331,6 +363,7 @@ func TestCache(t *testing.T) {
// automatically at lookup phase)
rce1_h13, new13 := c.lookupRCE(xidat(1,13), +0)
ok1(new13)
checkOIDV(1)
checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h9, rce1_h11, rce1_h13, rce1_h15)
checkMRU(12, rce1_h11, rce1_h9, rce1_h7, rce1_h6, rce1_h3) // no <14 and <16 yet
@@ -338,13 +371,15 @@ func TestCache(t *testing.T) {
rce1_h15.waitBufRef = -1
rce1_h15.ready.Done()
ok1(rce1_h15.loaded())
checkOIDV(1)
checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h9, rce1_h11, rce1_h13, rce1_h15)
checkMRU(12, rce1_h11, rce1_h9, rce1_h7, rce1_h6, rce1_h3) // no 13] and 15] yet
// (13] also becomes ready and takes oce lock first, merging 11] and 13] into 15].
// 15] did not yet took oce lock so c.size is temporarily reduced and
// 15] is not yet on LRU list)
c.loadRCE(ctx, rce1_h13, 1)
c.loadRCE(ctx, rce1_h13)
checkOIDV(1)
checkRCE(rce1_h13, 13, 10, b(world), nil)
checkRCE(rce1_h15, 15, 10, b(world), nil)
checkRCE(rce1_h11, 11, 10, b(world), nil)
@@ -353,7 +388,8 @@ func TestCache(t *testing.T) {
// (15] takes oce lock and updates c.size and LRU list)
rce1_h15.ready.Add(1) // so loadRCE could run
c.loadRCE(ctx, rce1_h15, 1)
c.loadRCE(ctx, rce1_h15)
checkOIDV(1)
checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h9, rce1_h15)
checkMRU(12, rce1_h15, rce1_h9, rce1_h7, rce1_h6, rce1_h3)
@@ -370,11 +406,13 @@ func TestCache(t *testing.T) {
rce1_h16.waitBufRef = -1
rce1_h16.ready.Done()
ok1(rce1_h16.loaded())
checkOIDV(1)
checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h9, rce1_h15, rce1_h16, rce1_h17)
checkMRU(12, rce1_h15, rce1_h9, rce1_h7, rce1_h6, rce1_h3) // no 16] and 17] yet
// (17] loads and takes oce lock first - merge 16] with 17])
c.loadRCE(ctx, rce1_h17, 1)
c.loadRCE(ctx, rce1_h17)
checkOIDV(1)
checkRCE(rce1_h17, 17, 16, b(zz), nil)
checkRCE(rce1_h16, 16, 16, b(zz), nil)
checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h9, rce1_h15, rce1_h17)
@@ -385,6 +423,7 @@ func TestCache(t *testing.T) {
ok1(len(oce1.rcev) == 6)
rce1_h19 := oce1.rcev[5]
ok1(rce1_h19 != rce1_h17)
checkOIDV(1)
checkRCE(rce1_h19, 19, 16, b(zz), nil)
checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h9, rce1_h15, rce1_h19)
checkMRU(14, rce1_h19, rce1_h15, rce1_h9, rce1_h7, rce1_h6, rce1_h3)
@@ -393,6 +432,7 @@ func TestCache(t *testing.T) {
checkLoad(xidat(1,20), b(www), 20, nil)
ok1(len(oce1.rcev) == 7)
rce1_h20 := oce1.rcev[6]
checkOIDV(1)
checkRCE(rce1_h20, 20, 20, b(www), nil)
checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h9, rce1_h15, rce1_h19, rce1_h20)
checkMRU(17, rce1_h20, rce1_h19, rce1_h15, rce1_h9, rce1_h7, rce1_h6, rce1_h3)
@@ -402,6 +442,7 @@ func TestCache(t *testing.T) {
ok1(len(oce1.rcev) == 7)
rce1_h21 := oce1.rcev[6]
ok1(rce1_h21 != rce1_h20)
checkOIDV(1)
checkRCE(rce1_h21, 21, 20, b(www), nil)
checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h9, rce1_h15, rce1_h19, rce1_h21)
checkMRU(17, rce1_h21, rce1_h19, rce1_h15, rce1_h9, rce1_h7, rce1_h6, rce1_h3)
@@ -439,7 +480,8 @@ func TestCache(t *testing.T) {
// 8] must be separate from 7] and 9] because it is IO error there
rce1_h8, new8 := c.lookupRCE(xidat(1,8), +0)
ok1(new8)
c.loadRCE(ctx, rce1_h8, 1)
c.loadRCE(ctx, rce1_h8)
checkOIDV(1)
checkRCE(rce1_h8, 8, 0, nil, ioerr)
checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h8, rce1_h9, rce1_h15, rce1_h19, rce1_h21)
checkMRU(17, rce1_h8, rce1_h21, rce1_h19, rce1_h15, rce1_h9, rce1_h7, rce1_h6, rce1_h3)
@@ -488,6 +530,7 @@ func TestCache(t *testing.T) {
gcstart := &evCacheGCStart{c}
gcfinish := &evCacheGCFinish{c}
checkOIDV(1)
checkOCE(1, rce1_h3, rce1_h6, rce1_h7, rce1_h8, rce1_h9, rce1_h15, rce1_h19, rce1_h21)
checkMRU(17, rce1_h15, rce1_h6, rce1_h8, rce1_h21, rce1_h19, rce1_h9, rce1_h7, rce1_h3)
@@ -499,6 +542,7 @@ func TestCache(t *testing.T) {
// - 7] (lru.2, ioerr, size=0)
// - 9] (lru.3, ioerr, size=0)
// - 19] (lru.4, zz, size=2)
checkOIDV(1)
checkOCE(1, rce1_h6, rce1_h8, rce1_h15, rce1_h21)
checkMRU(15, rce1_h15, rce1_h6, rce1_h8, rce1_h21)
@@ -511,6 +555,7 @@ func TestCache(t *testing.T) {
ok1(len(oce1.rcev) == 4)
rce1_h19_2 := oce1.rcev[3]
ok1(rce1_h19_2 != rce1_h19)
checkOIDV(1)
checkRCE(rce1_h19_2, 19, 16, b(zz), nil)
checkOCE(1, rce1_h6, rce1_h8, rce1_h15, rce1_h19_2)
checkMRU(14, rce1_h19_2, rce1_h15, rce1_h6, rce1_h8)
@@ -525,6 +570,7 @@ func TestCache(t *testing.T) {
// - loaded 77] (big, size=10)
ok1(len(oce1.rcev) == 2)
rce1_h77 := oce1.rcev[1]
checkOIDV(1)
checkRCE(rce1_h77, 77, 77, b(big), nil)
checkOCE(1, rce1_h19_2, rce1_h77)
checkMRU(12, rce1_h77, rce1_h19_2)
@@ -532,7 +578,7 @@ func TestCache(t *testing.T) {
// sizeMax=0 evicts everything from cache
go c.SetSizeMax(0)
tc.Expect(gcstart, gcfinish)
checkOCE(1)
checkOIDV()
checkMRU(0)
// and still loading works (because even if though rce's are evicted
@@ -540,7 +586,7 @@ func TestCache(t *testing.T) {
checkLoad(xidat(1,4), b(hello), 4, nil)
tc.Expect(gcstart, gcfinish)
checkOCE(1)
checkOIDV()
checkMRU(0)
// ---- Load vs concurrent GC ----