Commit fe89563b authored by Kirill Smelkov's avatar Kirill Smelkov

.

parent 862c2cad
......@@ -45,7 +45,7 @@
// Wcfs client implements pins handling in a so-called "pinner" thread(+). The
// pinner thread receives pin requests from the wcfs server via a watchlink
// handle opened through wcfs/head/watch. For every pin request the pinner finds
// corresponding mappings and injects wcfs/@revX/f parts into the mappings
// corresponding Mappings and injects wcfs/@revX/f parts into the Mappings
// appropriately.
//
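The pinner flow described above can be sketched as follows; `FileH`, `Mapping` and `handle_pin` here are toy stand-ins for illustration, not the real wcfs client API:

```python
# Toy sketch of pin handling: for a pin request (blk, rev) the pinner
# injects the wcfs/@rev/f part into every Mapping of the file and records
# the pin in fileh._pinned. Names are illustrative, not the real classes.

class Mapping:
    def __init__(self):
        self.remapped = {}          # blk -> rev injected into this mapping
    def remmap_blk(self, blk, rev):
        self.remapped[blk] = rev    # inject wcfs/@rev/f part for blk

class FileH:
    def __init__(self):
        self._pinned = {}           # currently pinned blk -> rev
        self.mmaps = [Mapping()]

def handle_pin(fileh, blk, rev):
    """Handle one pin request: remmap blk in all Mappings of fileh."""
    for m in fileh.mmaps:
        m.remmap_blk(blk, rev)
    fileh._pinned[blk] = rev

fh = FileH()
handle_pin(fh, blk=2, rev="@0abc")
assert fh._pinned == {2: "@0abc"}
assert fh.mmaps[0].remapped == {2: "@0abc"}
```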
// The same watchlink handle is used to send client-originated requests to wcfs
......@@ -55,9 +55,9 @@
// points like Conn.open, Conn.resync and FileH.close.
//
// Every FileH maintains fileh._pinned {} with currently pinned blk -> rev. The
// dict is updated by pinner driven by pin messages, and is used used when
// either new fileh Mapping is created (FileH.mmap) or refreshed due to request
// from virtmem (Mapping.remmap_blk, see below).
// dict is updated by pinner driven by pin messages, and is used when either
// new fileh Mapping is created (FileH.mmap) or refreshed due to request from
// virtmem (Mapping.remmap_blk, see below).
//
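One way the `fileh._pinned` dict is used, per the paragraph above, is when a new Mapping is created: the fresh mapping must start out consistent with all currently pinned blocks. A toy sketch, with illustrative class and method names that are not the real wcfs client API:

```python
# Toy sketch: FileH.mmap replays all current pins into a fresh Mapping so
# the new mapping is consistent with pin messages received so far.

class Mapping:
    def __init__(self):
        self.remapped = {}            # blk -> rev injected into this mapping
    def remmap_blk(self, blk, rev):
        self.remapped[blk] = rev

class FileH:
    def __init__(self):
        self._pinned = {}             # currently pinned blk -> rev
        self.mmaps = []
    def mmap(self):
        m = Mapping()
        # a new Mapping must start out consistent with current pins
        for blk, rev in self._pinned.items():
            m.remmap_blk(blk, rev)
        self.mmaps.append(m)
        return m

fh = FileH()
fh._pinned = {1: "@a", 7: "@b"}       # pins accumulated from pin messages
m = fh.mmap()
assert m.remapped == {1: "@a", 7: "@b"}
```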
// In wendelin.core a bigfile has the semantic that it is infinite in size and
// reads as all zeros beyond the region initialized with data. Memory-mapping of
......@@ -73,21 +73,22 @@
//
// Integration with wendelin.core virtmem layer
//
// Wcfs integrates with virtmem layer to support virtmem to handle dirtying
// pages of read-only base-layer that wcfs client provides via isolated
// Mapping. For wcfs-backed bigfiles every VMA is interlinked with Mapping:
// Wcfs client integrates with the virtmem layer so that virtmem can handle
// dirtying pages of the read-only base layer that wcfs client provides via
// isolated Mapping. For wcfs-backed bigfiles every VMA is interlinked with
// Mapping:
//
// VMA -> BigFileH -> ZBigFile --.
// ↑↓ ZODB
// Mapping -> FileH -> wcfs server --'
// VMA -> BigFileH -> ZBigFile -----> Z
// ↑↓ O
// Mapping -> FileH -> wcfs server --> DB
//
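The ↑↓ link in the diagrams above can be sketched as a pair of mutual references; these are toy stand-ins, not the real wendelin.core classes:

```python
# Toy sketch of the interlinking: every VMA of a wcfs-backed bigfile is
# linked with the Mapping serving it, and vice versa.

class VMA:
    def __init__(self):
        self.mmap = None        # Mapping serving this VMA (set on interlink)

class Mapping:
    def __init__(self, vma, fileh):
        self.vma = vma          # VMA this Mapping is mmapped into
        self.fileh = fileh      # FileH -> wcfs server side
        vma.mmap = self         # back-link: VMA -> Mapping

vma = VMA()
m = Mapping(vma, fileh="fileh-stub")
assert vma.mmap is m and m.vma is vma   # interlinked both ways
```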
// When a page is write-accessed, virtmem mmaps in a page of RAM in place of
// accessed virtual memory, copies base-layer content provided by Mapping into
// there, and marks that page as read-write accessed.
// there, and marks that page as read-write.
//
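The write-access path above can be modeled as a tiny copy-on-write step; the page size, `Page` class and `write_access` helper are illustrative assumptions, not the real virtmem implementation:

```python
# Toy model of the write-access path: on first write, a private RAM page
# replaces the read-only base-layer page, seeded with the base-layer
# content provided by the Mapping, and is marked read-write.

PAGE = 8  # toy page size

class Page:
    def __init__(self, data, writable):
        self.data = bytearray(data)
        self.writable = writable

def write_access(vma, pgoff, base_layer):
    """Simulate virtmem handling a write fault at page pgoff of vma."""
    if pgoff not in vma or not vma[pgoff].writable:
        # mmap a RAM page in place, copy base-layer content, mark it RW
        vma[pgoff] = Page(base_layer(pgoff), writable=True)
    return vma[pgoff]

base = lambda pgoff: bytes([pgoff]) * PAGE   # base layer read via Mapping
vma = {}
p = write_access(vma, 3, base)
p.data[0] = 0xff                             # now safely writable
assert vma[3].writable
assert bytes(vma[3].data) == b'\xff' + bytes([3]) * (PAGE - 1)
```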
// Upon receiving a pin message, the pinner consults virtmem on whether the
// corresponding page was already dirtied in virtmem's BigFileH; if it was,
// the pinner does not remmap virtual memory to wcfs/@revX/f and just leaves
// the pinner does not remmap Mapping part to wcfs/@revX/f and just leaves
// dirty page in its place, remembering pin information in fileh._pinned.
//
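The pin decision above amounts to a dirty check before remmapping; a minimal sketch, with illustrative names that are not the real wcfs client API:

```python
# Toy sketch of the pin decision: if virtmem already dirtied the page, the
# pinner leaves the dirty RAM page in place and only records the pin;
# otherwise it remmaps the Mapping part to wcfs/@rev/f.

def handle_pin(pinned, dirty_blks, remmap, blk, rev):
    """Handle one pin message for (blk, rev)."""
    if blk not in dirty_blks:
        remmap(blk, rev)          # switch mapping part to wcfs/@rev/f
    # in either case remember the pin in fileh._pinned
    pinned[blk] = rev

remapped = {}
pinned = {}
remmap = lambda blk, rev: remapped.__setitem__(blk, rev)

handle_pin(pinned, {5}, remmap, blk=5, rev="@1")  # blk 5 is dirty
handle_pin(pinned, {5}, remmap, blk=6, rev="@1")  # blk 6 is clean

assert pinned == {5: "@1", 6: "@1"}
assert remapped == {6: "@1"}     # dirty blk 5 was not remmapped
```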
// Once dirty pages are no longer needed (either after discard/abort or
......@@ -97,9 +98,10 @@
// The scheme outlined above does not need to split Mapping upon dirtying an
// inner page.
//
// See bigfile_ops (wendelin/bigfile/file.h) for interface related to
// base-layer overlaying from virtmem point of view. For wcfs, this interface
// is provided by small wcfs client wrapper in bigfile/file_zodb.cpp .
// See bigfile_ops (wendelin/bigfile/file.h) for the virtmem interface related
// to base-layer overlaying, which explains the process from the virtmem point
// of view. For wcfs this interface is provided by a small wcfs client wrapper
// in bigfile/file_zodb.cpp.
//
// --------
//
......@@ -107,7 +109,6 @@
// (+) currently, for simplicity, there is one pinner for each connection. In
// the future, for efficiency, it might be reworked to be one pinner thread
// that serves all connections simultaneously.
// XXX (wcfs client-level file handle - see package overview) ?
// XXX locking -> explain atMu + slaves and refer to "Locking" in wcfs.go
......