1. 18 Sep, 2024 3 commits
    • go/zodb/internal/weak: Disable support for weak references · ee23551d
      Kirill Smelkov authored
      Experience shows that we cannot provide reliably working weak-reference
      support without crashing the GC, because there needs to be coordination
      between object resurrection and at least the GC mark phase [1,2], with
      [3] and [4] giving an idea of what kind of coordination is needed.
      
      And since support for weak references is upcoming in the standard library,
      hopefully with Go 1.24 [5], it does not make sense to keep pushing hard to
      make weak references work on our standalone side.
      
      By disabling support for custom weak references we avoid the crashes, but
      we start to leak unused objects in the zodb.Connection live cache. This
      should be relatively ok for now, as WCFS in wendelin.core is currently
      the only known ZODB/go user and it takes care to drop ZBlk* state after
      loading the data. The leak should thus be modest and, hopefully, not
      create problems in practice.
      
      And once the std weak package is out there we will switch to it, restoring
      automatic LiveCache cleanup.
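
      For illustration, here is a minimal sketch of how a live-cache entry could
      use the std weak API, assuming it lands roughly as proposed in [5]
      (weak.Make / Pointer.Value); the Object type is illustrative only, not the
      actual LiveCache code:

          package main

          import (
              "fmt"
              "runtime"
              "weak"
          )

          // Object stands in for a persistent object kept in the live cache.
          type Object struct{ oid uint64 }

          func main() {
              obj := &Object{oid: 1}
              wref := weak.Make(obj) // weak reference: does not keep obj alive

              if p := wref.Value(); p != nil {
                  fmt.Println("still alive:", p.oid)
              }

              // once obj becomes unreachable and the GC runs, Value starts
              // returning nil and the cache entry can be evicted
              runtime.KeepAlive(obj)
          }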
      
      [1] https://github.com/golang/go/issues/41303
      [2] wendelin.core@9b44fc23
      [3] https://github.com/golang/go/commit/dfc86e922cd0
      [4] https://github.com/golang/go/commit/79fd633632cd
      [5] https://github.com/golang/go/issues/67552
      
      /reviewed-by @levin.zimmermann
      /reviewed-on !11
    • go/zodb/internal/weak: Assert that the object was not moved in the finalizer · c48f7008
      Kirill Smelkov authored
      This is good to do since we are relying on the non-moving property of the
      current GC. The assertion never triggered, but it removed one uncertainty
      while debugging the GC crash issue. Good to have it in place.
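
      For reference, a minimal sketch of the kind of assertion meant here
      (illustrative only; the helper and field names are not the actual weak
      package code):

          package main

          import (
              "runtime"
              "unsafe"
          )

          // ref remembers the address the object had when the weak reference
          // was created.
          type ref struct {
              addr uintptr
          }

          // newRef installs a finalizer that asserts the object still lives at
          // the recorded address, i.e. that the (non-moving) GC did not move it.
          func newRef[T any](obj *T) *ref {
              r := &ref{addr: uintptr(unsafe.Pointer(obj))}
              runtime.SetFinalizer(obj, func(p *T) {
                  if uintptr(unsafe.Pointer(p)) != r.addr {
                      panic("weak: object was moved by GC")
                  }
              })
              return r
          }

          func main() {
              _ = newRef(new(int))
              runtime.GC()
          }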
      
      /reviewed-by @levin.zimmermann
      /reviewed-on !11
    • go/zodb/internal/weak: Try to fix GC crashes via reworking Ref to keep only one word instead of two · 55491547
      Kirill Smelkov authored
      We are observing garbage-collector crashes caused by the weak package
      under load(*), with the GC crashing e.g. as
      
          runtime: full=0xc0001f10000005 next=205 jobs=204 nDataRoots=1 nBSSRoots=1 nSpanRoots=16 nStackRoots=184
          panic: non-empty mark queue after concurrent mark
          fatal error: panic on system stack
      
          runtime stack:
          runtime.throw({0x5c60fe?, 0x601d70?})
                  /home/kirr/src/tools/go/go1.21/src/runtime/panic.go:1077 +0x5c fp=0xc000051e88 sp=0xc000051e58 pc=0x436efc
          panic({0x585100?, 0x601d70?})
                  /home/kirr/src/tools/go/go1.21/src/runtime/panic.go:840 +0x6ea fp=0xc000051f38 sp=0xc000051e88 pc=0x436e0a
          runtime.gcMark(0x118946?)
                  /home/kirr/src/tools/go/go1.21/src/runtime/mgc.go:1464 +0x40c fp=0xc000051fb0 sp=0xc000051f38 pc=0x41bd6c
          runtime.gcMarkTermination.func1()
                  /home/kirr/src/tools/go/go1.21/src/runtime/mgc.go:962 +0x17 fp=0xc000051fc8 sp=0xc000051fb0 pc=0x41b277
          runtime.systemstack()
                  /home/kirr/src/tools/go/go1.21/src/runtime/asm_amd64.s:509 +0x4a fp=0xc000051fd8 sp=0xc000051fc8 pc=0x468eea
      
      One problem in the current implementation is that weak.Ref keeps two words
      for the copy of the original interface object and recreates that interface
      object on Ref.Get from those two words _nonatomically_. This is explicitly
      documented by the Go memory model as something that can lead to corrupted
      memory and crashes. From https://go.dev/ref/mem#restrictions:
      
          This means that races on multiword data structures can lead to
          inconsistent values not corresponding to a single write. When the values
          depend on the consistency of internal (pointer, length) or (pointer,
          type) pairs, as can be the case for interface values, maps, slices, and
          strings in most Go implementations, such races can in turn lead to
          arbitrary memory corruption.
      
      We can avoid doing multiword writes during object resurrection by using a
      concrete pointer type *T instead of interface{}.
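
      To illustrate the idea (a sketch only, not the actual weak.Ref code): an
      interface value is a two-word (type, data) pair, while a single typed
      pointer is one word that can be loaded and stored atomically:

          package main

          import (
              "sync"
              "sync/atomic"
          )

          // Two-word variant: obj is an interface, i.e. a (type, data) pair.
          // Republishing it from the finalizer while Get reads it is exactly
          // the kind of nonatomic multiword race the memory model warns about.
          type refIface struct {
              mu  sync.Mutex
              obj interface{}
          }

          // One-word variant: a single pointer word, published atomically.
          type refPtr[T any] struct {
              obj atomic.Pointer[T]
          }

          func (r *refPtr[T]) Get() *T        { return r.obj.Load() }
          func (r *refPtr[T]) resurrect(p *T) { r.obj.Store(p) }

          func main() {
              var r refPtr[int]
              r.resurrect(new(int))
              _ = r.Get()
          }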
      
      Unfortunately, as wendelin.core@9b44fc23
      shows, this does not help with the above issue and the GC continues to
      crash with the same "panic: non-empty mark queue after concurrent mark"
      message. This is because a weak.Ref implementation needs tight integration
      with the concurrent GC that Go does, and in practice we are unable to
      achieve that from outside of the Go runtime internals. See e.g.
      https://github.com/golang/go/commit/dfc86e922cd0 and
      https://github.com/golang/go/commit/79fd633632cd to get an idea of what
      kind of integration is needed.
      
      Anyway, before disabling weak-reference support I wanted to commit this
      change to show that the one-word approach was tried and does not work.
      This leaves a trace in the history.
      
      On the good side, we will hopefully be able to use the standard weak
      package in the future (https://github.com/golang/go/issues/67552). That
      needs to wait for at least Go 1.24 though.
      
      (*) see https://github.com/golang/go/issues/41303 for details
      
      /reviewed-by @levin.zimmermann
      /reviewed-on !11
  2. 20 Aug, 2024 1 commit
  3. 19 Aug, 2024 2 commits
    • go/neo/proto: Fix KnownMasterList nesting · dc844465
      Levin Zimmermann authored
      Before this patch, the 'KnownMasterList' field of the 'NotPrimaryMaster'
      packet was expected to be structured in the following way:
      
      	ArrayHeader (KnownMasterList)
      
      	    ArrayHeader (KnownMaster)
      
      		ArrayHeader (Address)
      
      		    Host (string)
      		    Port (uint16)
      
      However NEO/py sends the following structure:
      
      	ArrayHeader (KnownMasterList)
      
      	    ArrayHeader (Address)
      
      		Host (string)
      		Port (uint16)
      
      This also makes sense, as 'KnownMaster' doesn't need to add another
      nesting level, because it only includes the address.
      
      This patch amends the NEO/go protocol definition to transparently
      represent the nesting as it is sent by NEO/py. See also
      levin.zimmermann/neoppod@18287612 for a similar issue.
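
      A sketch of the kind of change this means for the protocol definition
      (the actual field and type names live in go/neo/proto/proto.go and may
      differ in detail):

          package proto // sketch only; see go/neo/proto/proto.go for the real code

          type Address struct {
              Host string
              Port uint16
          }

          // before: an extra wrapper level around each address
          // (KnownMasterList encoded as array -> array -> array(Host, Port))
          type notPrimaryMasterOld struct {
              KnownMasterList []struct{ Address }
          }

          // after: one nesting level less, matching what NEO/py puts on the wire
          // (KnownMasterList encoded as array -> array(Host, Port))
          type notPrimaryMasterNew struct {
              KnownMasterList []Address
          }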
      
      ----
      kirr: tests currently live only on t branch.
      
      /reviewed-by @kirr
      /reviewed-on !6
    • go/neo/proto/RowInfo: Fix representation on the wire · 7f96ad77
      Levin Zimmermann authored
      Some NEO protocol packets have the field 'RowList'. This field contains
      information about each row of a partition table. In NEO/go the information
      of each row is represented with the 'RowInfo' type [1]. This type is defined
      as a struct with the field 'CellList'. 'CellList' is defined as a list of
      'CellInfo' [1] (i.e. an entry for each cell).
      
      NEO/go {en,de}codes struct types with 'genStructHead' (structures in Go
      are encoded as arrays in msgpack) [2]. From the 'RowList' definition, the
      msgpack decoder currently expects the following msgpack array structure:
      
          ArrayHeader (RowList)
      
              ArrayHeader (RowInfo)
      
                  ArrayHeader (CellList)
      
                      ArrayHeader (CellInfo)
      
                          int32 (NID)
                          enum (State)
      
      However NEO/py actually sends:
      
          ArrayHeader (RowList)
      
              ArrayHeader (CellList)
      
                  ArrayHeader (CellInfo)
      
                      int32 (NID)
                      enum (State)
      
      In other words, the NEO/go msgpack codec currently expects one more
      nesting level, which NEO/py doesn't provide (and which also doesn't seem
      to be necessary, as the outer nesting would always contain only one
      element). We could adjust the msgpack {en,de}coder to introduce an
      exception for the 'RowInfo' type; however, as the protocol definition in
      'proto.go' aims to transparently reflect the structure of the packets on
      the wire, it seems more appropriate to fix this directly in the protocol
      definition. This is also less error-prone, as we don't have to patch all
      the different places in the encoder, decoder & sizer, and it is less code
      (particularly if 'RowInfo' doesn't stay the only case with such an
      issue).
      
      [1] https://lab.nexedi.com/kirr/neo/-/blob/1ad088c8/go/neo/proto/proto.go#L391-394
      [2] https://lab.nexedi.com/kirr/neo/-/blob/1ad088c8/go/neo/proto/protogen.go#L1770-1775
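
      A sketch of the resulting change in the protocol definition (simplified;
      the real definitions are at [1] and the enum encoding details are
      omitted):

          package proto // sketch only; see [1] for the real proto.go

          type CellState int8 // enum, simplified here

          type CellInfo struct {
              NID   int32
              State CellState
          }

          // before: RowInfo wrapped the cell list in an extra struct, which the
          // msgpack codec turned into an extra array level NEO/py does not send
          //
          //     type RowInfo struct {
          //             CellList []CellInfo
          //     }

          // after: RowInfo is the cell list itself, matching the wire format
          type RowInfo []CellInfo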
      
      --------
      
      kirr: I've applied the following interdiff to the original patch of
      levin.zimmermann/neoppod@c93d5dbc :
      
          --- a/go/neo/neo_test.go
          +++ b/go/neo/neo_test.go
          @@ -100,7 +100,7 @@ func _TestMasterStorage(t0 *tEnv) {
                          PTid:           1,
                          NumReplicas:    0,
                          RowList:        []proto.RowInfo{
          -                       proto.RowInfo{proto.CellInfo{proto.NID(proto.STORAGE, 1), proto.UP_TO_DATE}},
          +                       {proto.CellInfo{proto.NID(proto.STORAGE, 1), proto.UP_TO_DATE}},
                          },
                  }))
      
          @@ -173,7 +173,7 @@ func _TestMasterStorage(t0 *tEnv) {
                          PTid:           1,
                          NumReplicas:    0,
                          RowList:        []proto.RowInfo{
          -                       proto.RowInfo{proto.CellInfo{proto.NID(proto.STORAGE, 1), proto.UP_TO_DATE}},
          +                       {proto.CellInfo{proto.NID(proto.STORAGE, 1), proto.UP_TO_DATE}},
                          },
                  }))
      
          --- a/go/neo/proto/proto_test.go
          +++ b/go/neo/proto/proto_test.go
          @@ -210,9 +210,9 @@ func TestMsgMarshal(t *testing.T) {
                                  PTid:        0x0102030405060708,
                                  NumReplicas: 34,
                                  RowList: []RowInfo{
          -                               {CellInfo{11, UP_TO_DATE}, CellInfo{17, OUT_OF_DATE}},
          -                               {CellInfo{11, FEEDING}},
          -                               {CellInfo{11, CORRUPTED}, CellInfo{15, DISCARDED}, CellInfo{23, UP_TO_DATE}},
          +                               {{11, UP_TO_DATE}, {17, OUT_OF_DATE}},
          +                               {{11, FEEDING}},
          +                               {{11, CORRUPTED}, {15, DISCARDED}, {23, UP_TO_DATE}},
                                  },
                                  },
      
          @@ -229,9 +229,9 @@ func TestMsgMarshal(t *testing.T) {
                                  hex("cf0102030405060708") +
                                  hex("22") +
                                  hex("93") +
          -                               hex("92"+"920bd40001"+"9211d40000") +
          -                               hex("91"+"920bd40002") +
          -                               hex("93"+"920bd40003"+"920fd40004"+"9217d40001"),
          +                               hex("92"+"920bd40401"+"9211d40400") +
          +                               hex("91"+"920bd40402") +
          +                               hex("93"+"920bd40403"+"920fd40404"+"9217d40401"),
                          },
      
                          // map[Oid]struct {Tid,Tid,bool}
      
      for cosmetics and because the tests were failing as
      
          --- FAIL: TestMsgMarshal (0.00s)
              proto_test.go:106: M/proto.AnswerPartitionTable: encode result unexpected:
              proto_test.go:107:  have: 93cf0102030405060708229392920bd404019211d4040091920bd4040293920bd40403920fd404049217d40401
              proto_test.go:108:  want: 93cf0102030405060708229392920bd400019211d4000091920bd4000293920bd40003920fd400049217d40001
      
      /reviewed-by @kirr
      /reviewed-on !6
  4. 04 Aug, 2024 2 commits
    • go/neo/neonet: DialLink: Fix SIGSEGV in case client handshake fails · d75f4ac2
      Levin Zimmermann authored
      If the last 'handshakeClient' call returns an error, 'DialLink' returns
      'link = nil, err = nil'. Callers of 'DialLink' then don't notice that
      'link' is 'nil', as the convention is to only check whether 'err' is
      'nil', which leads to a 'segmentation violation' as soon as subsequent
      code tries to access fields of 'link':
      
      ```
      panic: runtime error: invalid memory address or nil pointer dereference
      [signal SIGSEGV: segmentation violation code=0x1 addr=0x14 pc=0x7087ae]
      
      goroutine 5 [running]:
      lab.nexedi.com/kirr/neo/go/neo/neonet.(*NodeLink).NewConn(0x0)
      	/srv/slapgrid/slappart82/srv/runner/instance/slappart6/software_release/parts/wendelin.core/wcfs/neo/go/neo/neonet/connection.go:404 +0x4e
      lab.nexedi.com/kirr/neo/go/neo/xneo.Dial.func1()
      	/srv/slapgrid/slappart82/srv/runner/instance/slappart6/software_release/parts/wendelin.core/wcfs/neo/go/neo/xneo/connect.go:138 +0x52
      lab.nexedi.com/kirr/neo/go/internal/xio.WithCloseOnErrCancel.func2()
      	/srv/slapgrid/slappart82/srv/runner/instance/slappart6/software_release/parts/wendelin.core/wcfs/neo/go/internal/xio/xio.go:114 +0x6a
      created by lab.nexedi.com/kirr/neo/go/internal/xio.WithCloseOnErrCancel in goroutine 21
      	/srv/slapgrid/slappart82/srv/runner/instance/slappart6/software_release/parts/wendelin.core/wcfs/neo/go/internal/xio/xio.go:109 +0x1ad
      ```
      
      This patch fixes the issue so that 'err' and 'link' are never both 'nil'
      again.
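
      A sketch of the invariant the fix establishes (simplified; 'dialOnce' and
      'NodeLink' here are stand-ins, not the actual neonet code):

          package main

          import "errors"

          type NodeLink struct{}

          // dialLink sketches the "link == nil implies err != nil" invariant:
          // the error of the last failed attempt is propagated instead of lost.
          func dialLink(addrs []string, dialOnce func(addr string) (*NodeLink, error)) (*NodeLink, error) {
              var lastErr error
              for _, addr := range addrs {
                  link, err := dialOnce(addr)
                  if err == nil {
                      return link, nil
                  }
                  lastErr = err // remember the failure instead of dropping it
              }
              if lastErr == nil {
                  lastErr = errors.New("dial: no address to try")
              }
              return nil, lastErr
          }

          func main() {
              _, err := dialLink([]string{"a", "b"}, func(string) (*NodeLink, error) {
                  return nil, errors.New("handshake rejected")
              })
              if err == nil {
                  panic("must not return nil link together with nil error")
              }
          }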
      
      /reviewed-by @kirr
      /reviewed-on !10
    • go/neo/neonet: Demonstrate DialLink misbehaviour when all handshake attempts fail · 03db1d8a
      Kirill Smelkov authored
      Levin found that when all handshake attempts fail, DialLink returns both
      link=nil and err=nil, which breaks what callers expect and leads to a
      segmentation fault when accessing that nil link.
      
      -> Add test to demonstrate the problem.
      
      With xfail removed that test currently fails as
      
          --- FAIL: TestDialLink_AllHandshakeErr (0.00s)
          panic: lab.nexedi.com/kirr/neo/go/neo/neonet.TestDialLink_AllHandshakeErr.gox.func4.1: lab.nexedi.com/kirr/neo/go/neo/neonet.TestDialLink_AllHandshakeErr.func2: DialLink to handshake-rejecting server:
          have: link=<nil> err=<nil>
          want: link=<nil> err=client:1 - server:2: handshake (client): unexpected EOF [recovered]
      
      We will fix the problem in the next patch.
      
      /reported-by @levin.zimmermann
      /reported-at !10
  5. 02 Feb, 2024 1 commit
  6. 17 Jul, 2023 3 commits
    • go/neo/neonet: Fix client handshake not to accept server encoding if it is different from what client indicated · 052856ce
      Levin Zimmermann authored
      
      If the peer's encoding is different from our encoding, two different
      scenarios can happen, because the handshake order is undefined (i.e.
      we don't know whether our handshake is received before the peer sends its
      handshake):
      
      1. Our handshake is received before the peer sends its handshake; NEO/py
      closes the connection if it sees an unexpected magic, version, etc.
      
      2. The peer already sends its handshake before it processes ours. In this
      case it initially sends us its version, from which we can extract its
      encoding, and only later, once it has processed our handshake with the
      bad encoding, does it close the connection.
      
      Before this patch, case (2) wasn't handled correctly by the automatic
      encoding detection of 'DialLink'. 'DialLink' simply accepted the
      different-than-expected encoding, but once the peer processed our node's
      handshake it closed the connection, and the initially established and
      returned link was immediately closed again. Due to this it was a matter
      of luck whether connecting to a peer with an encoding different from the
      expected one worked or not (it depended on which handshake was faster).
      Now 'DialLink' should reliably find the correct encoding and return a
      stable link.
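
      A sketch of the resulting retry logic (simplified; 'handshakeClient' here
      is a stand-in for the actual neonet handshake code):

          package main

          import "fmt"

          type NodeLink struct{ enc byte }

          func (l *NodeLink) Close() {}

          // dialLink tries the encodings in order and only accepts a link whose
          // peer announced the same encoding we sent; otherwise the peer would
          // drop the link as soon as it processes our hello.
          func dialLink(addr string, encTryOrder []byte,
              handshakeClient func(addr string, enc byte) (link *NodeLink, peerEnc byte, err error)) (*NodeLink, error) {

              var lastErr error
              for _, enc := range encTryOrder {
                  link, peerEnc, err := handshakeClient(addr, enc)
                  if err != nil {
                      lastErr = err
                      continue
                  }
                  if peerEnc != enc {
                      link.Close() // mismatch: close and retry with another encoding
                      lastErr = fmt.Errorf("handshake: encoding mismatch: peer=%c ours=%c", peerEnc, enc)
                      continue
                  }
                  return link, nil // stable link: both sides agreed on enc
              }
              return nil, lastErr
          }

          func main() {
              _, _ = dialLink("127.0.0.1:0", []byte{'N', 'M'},
                  func(addr string, enc byte) (*NodeLink, byte, error) {
                      return &NodeLink{enc: 'M'}, 'M', nil
                  })
          }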
      
      --------
      
      kirr: this is based on the following original patch by Levin: f6b59772
      
      I updated documentation throughout correspondingly and also added
      corresponding handshake-specific test in the previous patch.
      
      See kirr/neo!5 and b2da69e2
      (go/neo/neonet: Demonstrate problem in handshake with NEO/py) for more
      in-depth description of the problem.
    • go/neo/neonet: Demonstrate problem in handshake with NEO/py · b2da69e2
      Kirill Smelkov authored
      Levin Zimmermann discovered that sometimes NEO/py accepts our handshake
      hello with encoding 'M', then replies with its own handshake hello with
      encoding 'N', and then terminates the connection. In other words it looks
      like the handshake was successful, but it actually was not, and NEO/py
      terminates the link after some time.
      
      This manifests itself e.g. as infrequent TestLoad failures on t branch
      with the following output:
      
      === RUN   TestLoad/py/!ssl
      I: runneo.py: /tmp/neo776618506/1 !ssl: started master(s): 127.0.0.1:21151
      === RUN   TestLoad/py/!ssl/enc=N(dialTryOrder=N,M)
          client_test.go:598: skip: does not excercise client redial
      === RUN   TestLoad/py/!ssl/enc=N(dialTryOrder=M,N)
          xtesting.go:330: load 0285cbac258bf266:0000000000000000: returned err unexpected:
              have: neo://127.0.0.1:21151/1: load 0285cbac258bf266:0000000000000000: dial S1: dial 127.0.0.1:40345 (STORAGE): 127.0.0.1:56678 - 127.0.0.1:40345: request identification: 127.0.0.1:56678 - 127.0.0.1:40345 .1: recv: EOF
              want: nil
          xtesting.go:330: load 0285cbac258bf266:0000000000000000: returned tid unexpected: 0000000000000000  ; want: 0285cbac258bf266
          xtesting.go:330: load 0285cbac258bf266:0000000000000000: returned buf = nil
          xtesting.go:339: load 0285cbac258bf265:0000000000000000: returned err unexpected:
              have: neo://127.0.0.1:21151/1: load 0285cbac258bf265:0000000000000000: dial S1: dial 127.0.0.1:40345 (STORAGE): 127.0.0.1:56688 - 127.0.0.1:40345: request identification: 127.0.0.1:56688 - 127.0.0.1:40345 .1: recv: EOF
              want: neo://127.0.0.1:21151/1: load 0285cbac258bf265:0000000000000000: 0000000000000000: object was not yet created
          ...
          client_test.go:588: NEO log tail:
      
              log file 'storage_0.log' tail:
      
              2023-07-17 17:21:57.1519 DEBUG     S1         connection completed for <ServerConnection(nid=None, address=127.0.0.1:51230, handler=IdentificationHandler, fd=20, server) at 7f3583fd4f50> (from 127.0.0.1:40345)
              2023-07-17 17:21:57.1537 WARNING   S1         Protocol version mismatch with <ServerConnection(nid=None, address=127.0.0.1:51230, handler=IdentificationHandler, fd=20, server) at 7f3583fd4f50>
              2023-07-17 17:21:57.1548 DEBUG     S1         connection closed for <ServerConnection(nid=None, address=127.0.0.1:51230, handler=IdentificationHandler, closed, server) at 7f3583fd4f50>
              2023-07-17 17:21:57.1551 WARNING   S1         A connection was lost during identification
              2023-07-17 17:22:00.1582 DEBUG     S1         accepted a connection from 127.0.0.1:51236
              2023-07-17 17:22:00.1585 DEBUG     S1         connection completed for <ServerConnection(nid=None, address=127.0.0.1:51236, handler=IdentificationHandler, fd=20, server) at 7f3583fd4e90> (from 127.0.0.1:40345)
              2023-07-17 17:22:00.1604 WARNING   S1         Protocol version mismatch with <ServerConnection(nid=None, address=127.0.0.1:51236, handler=IdentificationHandler, fd=20, server) at 7f3583fd4e90>
              2023-07-17 17:22:00.1622 DEBUG     S1         connection closed for <ServerConnection(nid=None, address=127.0.0.1:51236, handler=IdentificationHandler, closed, server) at 7f3583fd4e90>
              2023-07-17 17:22:00.1625 WARNING   S1         A connection was lost during identification
              2023-07-17 17:22:03.1663 DEBUG     S1         accepted a connection from 127.0.0.1:51238
              2023-07-17 17:22:03.1666 DEBUG     S1         connection completed for <ServerConnection(nid=None, address=127.0.0.1:51238, handler=IdentificationHandler, fd=20, server) at 7f3583fd4d10> (from 127.0.0.1:40345)
              2023-07-17 17:22:03.1674 WARNING   S1         Protocol version mismatch with <ServerConnection(nid=None, address=127.0.0.1:51238, handler=IdentificationHandler, fd=20, server) at 7f3583fd4d10>
              2023-07-17 17:22:03.1688 DEBUG     S1         connection closed for <ServerConnection(nid=None, address=127.0.0.1:51238, handler=IdentificationHandler, closed, server) at 7f3583fd4d10>
              2023-07-17 17:22:03.1691 WARNING   S1         A connection was lost during identification
              2023-07-17 17:22:06.1714 DEBUG     S1         accepted a connection from 127.0.0.1:57072
              2023-07-17 17:22:06.1719 DEBUG     S1         connection completed for <ServerConnection(nid=None, address=127.0.0.1:57072, handler=IdentificationHandler, fd=20, server) at 7f3583fd4b50> (from 127.0.0.1:40345)
              2023-07-17 17:22:06.1727 WARNING   S1         Protocol version mismatch with <ServerConnection(nid=None, address=127.0.0.1:57072, handler=IdentificationHandler, fd=20, server) at 7f3583fd4b50>
              2023-07-17 17:22:06.1738 DEBUG     S1         connection closed for <ServerConnection(nid=None, address=127.0.0.1:57072, handler=IdentificationHandler, closed, server) at 7f3583fd4b50>
              2023-07-17 17:22:06.1738 WARNING   S1         A connection was lost during identification
      
              log file 'master_0.log' tail:
      
              2023-07-17 17:21:21.0799 PACKET    M1         #0x0012 NotifyNodeInformation          > A1 (127.0.0.1:37906)
              2023-07-17 17:21:21.0799 PACKET    M1          ! C0 | CLIENT |  | RUNNING | 2023-07-17 14:21:21.079838
              2023-07-17 17:21:21.0800 PACKET    M1         #0x0102 NotifyNodeInformation          > S1 (127.0.0.1:37918)
              2023-07-17 17:21:21.0800 PACKET    M1          ! C0 | CLIENT |  | RUNNING | 2023-07-17 14:21:21.079838
              2023-07-17 17:21:21.0801 DEBUG     M1         Handler changed on <ServerConnection(nid=None, address=127.0.0.1:37966, handler=ClientServiceHandler, fd=18, server) at 7f3584245910>
              2023-07-17 17:21:21.0802 PACKET    M1         #0x0001 AnswerRequestIdentification    > C0 (127.0.0.1:37966)
              2023-07-17 17:21:21.0804 PACKET    M1         #0x0000 NotifyNodeInformation          > C0 (127.0.0.1:37966)
              2023-07-17 17:21:21.0804 PACKET    M1          ! C0 | CLIENT  |                 | RUNNING | 2023-07-17 14:21:21.079838
              2023-07-17 17:21:21.0804 PACKET    M1          ! M1 | MASTER  | 127.0.0.1:21151 | RUNNING | None
              2023-07-17 17:21:21.0804 PACKET    M1          ! S1 | STORAGE | 127.0.0.1:40345 | RUNNING | 2023-07-17 14:21:18.737469
              2023-07-17 17:21:21.0805 PACKET    M1         #0x0002 NotifyPartitionTable           > C0 (127.0.0.1:37966)
              2023-07-17 17:21:21.0810 PACKET    M1         #0x0003 LastTransaction                < C0 (127.0.0.1:37966)
              2023-07-17 17:21:21.0811 PACKET    M1         #0x0003 AnswerLastTransaction          > C0 (127.0.0.1:37966)
              2023-07-17 17:22:06.2053 DEBUG     M1         <SocketConnectorIPv4 at 0x7f3584252d10 fileno 18 ('127.0.0.1', 21151), opened from ('127.0.0.1', 37966)> closed in recv
              2023-07-17 17:22:06.2056 DEBUG     M1         connection closed for <ServerConnection(nid=C0, address=127.0.0.1:37966, handler=ClientServiceHandler, closed, server) at 7f3584245910>
              2023-07-17 17:22:06.2058 PACKET    M1         #0x0014 NotifyNodeInformation          > A1 (127.0.0.1:37906)
              2023-07-17 17:22:06.2058 PACKET    M1          ! C0 | CLIENT |  | UNKNOWN | 2023-07-17 14:21:21.079838
              2023-07-17 17:22:06.2059 PACKET    M1         #0x0104 NotifyNodeInformation          > S1 (127.0.0.1:37918)
              2023-07-17 17:22:06.2059 PACKET    M1          ! C0 | CLIENT |  | UNKNOWN | 2023-07-17 14:21:21.079838
      
      The problem is that my analysis from e407f725 (go/neo/neonet:
      Rework handshake to differentiate client and server parts) turned out to
      be incorrect. Quoting that patch:
      
          -> Rework handshake so that client always sends its hello first, and
          only then the server side replies. This matches actual NEO/py behaviour:
      
          https://lab.nexedi.com/nexedi/neoppod/blob/v1.12-67-g261dd4b4/neo/lib/connector.py#L293-294
      
          even though the "NEO protocol" states that
      
          	Handshake transmissions are not ordered with respect to each other and can go in parallel.
      
          	( https://neo.nexedi.com/P-NEO-Protocol.Specification.2019?portal_skin=CI_slideshow#/9/2 )
      
          If I recall correctly that sentence was authored by me in 2018 based on
          previous understanding of should-be full symmetry in-between client and
          server.
      
      so here "This matches actual NEO/py behaviour" was wrong: even though
      https://lab.nexedi.com/nexedi/neoppod/blob/v1.12-67-g261dd4b4/neo/lib/connector.py#L293-294
      indeed says that
      
          #       The NEO protocol is such that a client connection is always the
          #       first to send a packet, as soon as the connection is established,
      
      in reality it is not the case, as SocketConnector always queues its
      handshake hello upon creation, before receiving anything from the remote
      side:
      https://lab.nexedi.com/nexedi/neoppod/blob/v1.12-93-gfd87e153/neo/lib/connector.py#L77-78 .
      In practice this means that in the non-SSL case the NEO/py server might be
      fast enough to send its prepared hello before receiving the hello from us.
      Levin also explains at kirr/neo!5 (comment 187429):
      
          I think what happens is this:
      
          the NEO protocol doesn't specify in which order handshakes happen after
          initial dial. If the peer sends a handshake before receiving our
          handshake and if this peer's handshake is received by us, 'DialLink'
          assumes everything is fine (no err is returned), it breaks the loop and
          returns the link. But then, very little time later, when the peer
          finally receives our handshake, this looks strange for the peer and it
          closes the connection.
      
          So in my understanding this should be fixed by explicitly comparing the
          encodings between our expected one and what the peer provided us. If
          encodings don't match we should retry with a new encoding in order to
          prevent the peer from closing the connection.
      
          For me this also explains why sometimes the tests passed and sometimes
          didn't: it depended on which node was faster ('race condition').
      
      -> In this patch we add a corresponding handshake test that demonstrates
      this problem. It currently fails as
      
      --- FAIL: TestHandshake (0.01s)
          --- FAIL: TestHandshake/enc=N (0.00s)
              newlink_test.go:154: handshake encoding mismatch: client: unexpected error:
                  have: <nil>
                       "<nil>"
                  want: &neonet._HandshakeError{LocalRole:1, LocalAddr:net.pipeAddr{}, RemoteAddr:net.pipeAddr{}, Err:(*neonet._EncodingMismatchError)(0xc0000a4190)}
                       "pipe - pipe: handshake (client): protocol encoding mismatch: peer = 'M'  ; our side = 'N'"
          --- FAIL: TestHandshake/enc=M (0.00s)
              newlink_test.go:154: handshake encoding mismatch: client: unexpected error:
                  have: <nil>
                       "<nil>"
                  want: &neonet._HandshakeError{LocalRole:1, LocalAddr:net.pipeAddr{}, RemoteAddr:net.pipeAddr{}, Err:(*neonet._EncodingMismatchError)(0xc0001a22cc)}
                       "pipe - pipe: handshake (client): protocol encoding mismatch: peer = 'N'  ; our side = 'M'"
      
      We will fix it in the next patch.
      
      /reported-by @levin.zimmermann
      /reported-on kirr/neo!5
    • go/neo/neonet: Dedicate an error type to indicate "protocol version mismatch" as handshake failure cause · c100a0c9
      Kirill Smelkov authored
      
      We will soon need to detect whether a handshake failure was due to a
      mismatch of protocol encodings, and that will require the introduction of
      a dedicated error type for that cause. As a preparatory step, first
      refactor the "version mismatch" cause to follow the same style for
      symmetry.
  7. 18 Jan, 2023 2 commits
  8. 18 May, 2022 2 commits
    • go/neo/t/neotest: Use python -c 'print ...' in a way that works on both py2 and py3 · 789c9ed9
      Kirill Smelkov authored
      Without parentheses it was failing on py3:
      
      	(neo) (py3.venv) (g.env) kirr@deca:~/src/neo/src/lab.nexedi.com/kirr/neo/go/neo/t$ ./neotest info-local
      	date:   Wed, 18 May 2022 11:05:50 +0300
      	xnode:  kirr@deca.navytux.spb.ru (2401:5180:0:af::1 192.168.0.3 (+ 1·ipv4))
      	uname:  Linux deca 5.10.0-13-amd64 #1 SMP Debian 5.10.106-1 (2022-03-17) x86_64 GNU/Linux
      	cpu:    Intel(R) Core(TM) i7-7600U CPU @ 2.80GHz
      	  File "<string>", line 1
      	    print '%.2fGHz' % (400000 / 1E6)
      	          ^
      	SyntaxError: invalid syntax
    • go/neo/t/tzodb.py: Fix zhash for Python3 · 30329f5a
      Kirill Smelkov authored
      On py3 dict.keys() returns a view object instead of a list:
      
          $ ./tzodb.py zhash
          Traceback (most recent call last):
            File "/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/go/neo/t/./tzodb.py", line 141, in <module>
              main()
            File "/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/go/neo/t/./tzodb.py", line 138, in main
              zhash()
            File "/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/go/neo/t/./tzodb.py", line 59, in zhash
              optv, argv = getopt(sys.argv[2:], "h", ["help", "check=", "bench="] + hashRegistry.keys())
          TypeError: can only concatenate list (not "dict_keys") to list
  9. 25 Nov, 2021 1 commit
  10. 04 Oct, 2021 5 commits
  11. 08 Sep, 2021 4 commits
    • go/internal/xtesting: Add missing X · e7bea476
      Kirill Smelkov authored
      staticcheck says:
      
      	xtesting.go:386:2: this value of err is never used (SA4006)
    • go/zodb/{fs1,zeo}: ~staticcheck · 3ff4eb14
      Kirill Smelkov authored
      fs1/format.go:204:2: only the first constant in this group has an explicit type (SA9004)
      zeo/proto.go:56:2: only the first constant in this group has an explicit type (SA9004)
    • go/zodb/btree: Fix missing return on data-consistency error · ca630163
      Kirill Smelkov authored
      staticcheck reports
      
          ziobtree.go:606:4: Errorf is a pure function but its return value is ignored (SA4017)
          ziobtree.go:626:4: Errorf is a pure function but its return value is ignored (SA4017)
          zlobtree.go:606:4: Errorf is a pure function but its return value is ignored (SA4017)
          zlobtree.go:626:4: Errorf is a pure function but its return value is ignored (SA4017)
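
      The warnings point at the classic pattern where an error is constructed
      but never returned; a sketch of the kind of fix (illustrative, not the
      actual ziobtree.go/zlobtree.go code):

          package main

          import "fmt"

          func checkRange(lo, hi, key int64) error {
              if key < lo || key > hi {
                  // before: fmt.Errorf(...) was called but its result was
                  // dropped, so the caller never saw the consistency error
                  //
                  //     fmt.Errorf("btree: key %v out of range [%v, %v]", key, lo, hi)

                  // after: actually return the error
                  return fmt.Errorf("btree: key %v out of range [%v, %v]", key, lo, hi)
              }
              return nil
          }

          func main() {
              if err := checkRange(0, 10, 42); err != nil {
                  fmt.Println(err)
              }
          }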
    • go/zodb, go/zodb/btree: Fix go generate after rename on zodbtools side · 3d27ed5d
      Kirill Smelkov authored
      run_with_zodb3py2_compat was renamed to run_with_zodb4py2_compat in
      nexedi/zodbtools@c59a54ca .
      
      Without the fix go generate was failing as
      
          (neo) (z-dev) (g.env) kirr@deca:~/src/neo/src/lab.nexedi.com/kirr/neo/go/zodb/btree$ go generate
          Traceback (most recent call last):
            File "./testdata/gen-testdata", line 26, in <module>
              from zodbtools.test.gen_testdata import run_with_zodb3py2_compat
          ImportError: cannot import name run_with_zodb3py2_compat
      
      and
      
          (neo) (z-dev) (g.env) kirr@deca:~/src/neo/src/lab.nexedi.com/kirr/neo/go/zodb$ go generate
          Traceback (most recent call last):
            File "./py/pydata-gen-testdata", line 24, in <module>
              from zodbtools.test.gen_testdata import run_with_zodb3py2_compat
          ImportError: cannot import name run_with_zodb3py2_compat
      
      This amends commit fc69e00d (go/zodb/fs1: Fix Python database generator
      to work with recent zodbtools).
  12. 20 Jul, 2021 1 commit
  13. 24 May, 2021 6 commits
    • go/zodb: Prevent newly-created not-yet committed objects to go to ghost on deactivate · bb618ce1
      Kirill Smelkov authored
      When an object has just been created, it is not yet assigned an OID, but
      it can already be reachable from other objects. The code that processes
      the transaction can reach that new object and try to
      PActivate/PDeactivate it, and currently PDeactivate will drop the object
      state completely.
      
      Another example of an object without an OID is a Bucket embedded into a
      Tree object. There, the code that scans the tree can reach that bucket
      and try to activate/deactivate it, leading, again, to dropping the state
      of that bucket.
      
      -> Fix it.
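
      A sketch of the kind of guard this implies (simplified; field and
      constant names are illustrative, not the actual zodb.Persistent code):

          package main

          const (
              GHOST = iota
              UPTODATE
          )

          const InvalidOid uint64 = 0 // stand-in for "no OID assigned yet"

          type object struct {
              oid    uint64
              refcnt int32
              state  int
              data   []byte // in-RAM object state
          }

          func (obj *object) PDeactivate() {
              obj.refcnt--
              if obj.refcnt != 0 {
                  return
              }
              // a newly-created, not-yet-committed object has no OID assigned;
              // its state exists only in RAM and cannot be reloaded, so do not
              // turn it into a ghost
              if obj.oid == InvalidOid {
                  return
              }
              obj.state = GHOST
              obj.data = nil // drop the state; it can be reloaded by OID later
          }

          func main() {
              newObj := &object{oid: InvalidOid, refcnt: 1, state: UPTODATE, data: []byte("x")}
              newObj.PDeactivate()
              if newObj.data == nil {
                  panic("state of a not-yet-committed object must not be dropped")
              }
          }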
    • go/zodb: Fix PActivate not to panic after an error · 67f0d2cd
      Kirill Smelkov authored
      Persistent.PActivate used to panic when called a second time if the first
      call hit an error. WCFS hit this in practice via an object that was
      previously accessed and pinned into the cache, but was later deleted in
      the storage.
      
      -> Fix PActivate to reset .loading on an error, so that the next time
      PActivate is called, it tries to trigger the load again instead of
      panicking. Change the doload criteria from
      
      	state==GHOST && refcnt==1
      
      to
      
      	state==GHOST && loading==nil
      
      because now, after a failed PActivate, refcnt can be != 0 if there are
      several other PActivate calls that were waiting for the failed PActivate
      but have not yet woken up.
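
      A sketch of where the reset happens (heavily simplified; loadState and
      the helper shapes are illustrative, not the actual zodb.Persistent code):

          package main

          import (
              "errors"
              "sync"
          )

          const (
              GHOST = iota
              UPTODATE
          )

          type loadState struct {
              ready chan struct{}
              err   error
          }

          type Persistent struct {
              mu      sync.Mutex
              state   int
              refcnt  int32
              loading *loadState
          }

          func (obj *Persistent) PActivate(load func() error) error {
              obj.mu.Lock()
              obj.refcnt++
              doload := obj.state == GHOST && obj.loading == nil // new criteria
              if doload {
                  obj.loading = &loadState{ready: make(chan struct{})}
              }
              loading := obj.loading
              obj.mu.Unlock()

              if !doload {
                  if loading == nil {
                      return nil // already loaded
                  }
                  <-loading.ready // wait for the load started by another PActivate
                  return loading.err
              }

              err := load()
              obj.mu.Lock()
              if err != nil {
                  obj.loading = nil // reset, so the next PActivate retries the load
              } else {
                  obj.state = UPTODATE
              }
              obj.mu.Unlock()

              loading.err = err
              close(loading.ready) // wake up the waiters
              return err
          }

          func main() {
              obj := &Persistent{state: GHOST}
              _ = obj.PActivate(func() error { return errors.New("object was deleted") })
              // a second call retries the load instead of panicking:
              _ = obj.PActivate(func() error { return nil })
          }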
      
      Here is how added test fails without the fix:
      
          --- FAIL: TestActivateAfterDelete (1.65s)
          panic: t.zodb.MyObject(0000000000000065): activate: need to load, but .loading != nil [recovered]
                  panic: t.zodb.MyObject(0000000000000065): activate: need to load, but .loading != nil
      
          goroutine 10085 [running]:
          testing.tRunner.func1.2(0x649020, 0xc000520660)
                  /home/kirr/src/tools/go/go/src/testing/testing.go:1143 +0x332
          testing.tRunner.func1(0xc0001cb080)
                  /home/kirr/src/tools/go/go/src/testing/testing.go:1146 +0x4b6
          panic(0x649020, 0xc000520660)
                  /home/kirr/src/tools/go/go/src/runtime/panic.go:965 +0x1b9
          lab.nexedi.com/kirr/neo/go/zodb.(*Persistent).PActivate(0xc0001184d0, 0x6e8360, 0xc00019ac90, 0x0, 0x0)
                  /home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/go/zodb/persistent.go:191 +0xce5
          lab.nexedi.com/kirr/neo/go/zodb.TestActivateAfterDelete(0xc0001cb080)
                  /home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/go/zodb/persistent_test.go:786 +0x72c
    • go/zodb/fs1tools: Verify · 5a26fb31
      Kirill Smelkov authored
      Add a utility to verify FileStorage data for consistency. To verify we
      just need to iterate through all records, because the FileStorage driver
      performs all consistency checks by itself.
      
      Mimic the normal output to be the same as in fstest from ZODB/py.
      Example runs of fstest.py and `fs1 verify` on a broken file:
      
          $ python ~/src/wendelin/z/ZODB/src/ZODB/scripts/fstest.py -v 1.fs
                   4: transaction tid 0x03e044f6448c8022 #0
                 213: transaction tid 0x03e044f646e044bb #1
          1.fs has data records that extend beyond the transaction record; end at 466
      
          $ fs1 verify -v 1.fs
                   4: transaction tid 0x03e044f6448c8022 #0
                 213: transaction tid 0x03e044f646e044bb #1
          2021/05/24 12:43:37 fsverify: 1.fs: 1.fs: transaction record @355: -> (iter data): 1.fs: data record @416: check: data record [..., 466) overlaps txn boundary [..., 458)
      
      As can be seen, in the fs1 case the error contains more details: the
      [start, end) of both mismatching transaction and data records.
      
      In addition to fstest-like verbosity, add a progress mode, where % of
      total completion is printed in a style similar to the one used by
      `fs1 verify-index`.
      
      The Go-based implementation is also faster, even when the data is on HDD.
      For example, on a 73GB database provided by @jerome [1], fsrefs.py takes
      ~15 minutes to run and occupies ~70-100% of CPU, while `fs1 verify` takes
      ~7 minutes to run and occupies ~20-30% of CPU.
      
      Tests pending.
      
      [1] nexedi/zodbtools!19 (comment 129480)
    • go/zodb/fs1tools: Dumper += DumpEndOK · c66dc12d
      Kirill Smelkov authored
      Some dumpers might want to print something at the end of their dump.
      We will need this functionality for Verify (see next patch).
    • 7d896243
      Kirill Smelkov authored
    • go/zodb: tests: Teach tDB.Commit to return committed TID · c68c0994
      Kirill Smelkov authored
      And use that in the callers.
  14. 19 May, 2021 1 commit
  15. 10 May, 2021 1 commit
  16. 26 Mar, 2021 5 commits