1. 19 Oct, 2015 3 commits
  2. 16 Oct, 2015 1 commit
  3. 13 Oct, 2015 1 commit
  4. 12 Oct, 2015 1 commit
  5. 05 Oct, 2015 3 commits
  6. 02 Oct, 2015 2 commits
  7. 01 Oct, 2015 1 commit
      Review API between connections and connectors · 57481c35
      Julien Muchembled authored
      - Review error handling. Only 2 exceptions remain in connector.py:
      
        - Drop useless exception handling for EAGAIN since it should not happen
          if the kernel says the socket is ready.
        - Do not distinguish other socket errors. Just close and log in a generic way.
        - No need to raise a specific exception for EOF.
        - Make 'connect' return a boolean instead of raising an exception.
        - Raise appropriate exception when answer/ask/notify is called on a closed
          non-MT connection.
      
      - Add support for more complex connectors, which may need to write for a read
        operation, or to read when there's pending data to send. This will be
        required for SSL support (more exactly, the handshake will be done in
        a transparent way):
      
        - Move write buffer to connector.
        - Make 'receive' fill the read buffer, instead of returning the read data.
        - Make 'receive' & 'send' return a boolean to switch polling for writing.
        - Tolerate that sockets return 0 as number of bytes sent.
      
      - In testConnection, simply delete all failing tests, as announced
        in commit 71e30fb9.
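
      To make the new contract concrete, here is a minimal sketch of a connector-style
      class following the description above; the class, attribute and buffer names are
      illustrative assumptions, not NEO's actual connector.py API:

      import errno
      import socket

      class SketchConnector(object):
          """Illustrative connector: 'connect' returns a boolean, 'receive'
          fills an internal read buffer, and both 'receive' and 'send' tell
          the caller whether it should also poll for writability."""

          def __init__(self, addr):
              self.addr = addr  # (host, port) tuple
              self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
              self.socket.setblocking(0)
              self.read_buf = []   # filled by receive()
              self.write_buf = []  # drained by send()

          def connect(self):
              # Return a boolean instead of raising: True once connected,
              # False while the non-blocking connect is still in progress.
              err = self.socket.connect_ex(self.addr)
              return err in (0, errno.EISCONN)

          def receive(self):
              # The kernel said the socket is readable, so EAGAIN is not
              # handled specially; other socket errors would just mean
              # "close and log", without a dedicated exception type.
              data = self.socket.recv(4096)
              if data:
                  self.read_buf.append(data)
              else:
                  self.socket.close()  # EOF: close, no specific exception
              # Return True if a write poll is also wanted (e.g. during a
              # transparent SSL handshake); plain TCP never needs it here.
              return False

          def send(self):
              if self.write_buf:
                  n = self.socket.send(self.write_buf[0])
                  # 0 bytes sent is tolerated: simply retry later.
                  if n:
                      chunk = self.write_buf[0][n:]
                      if chunk:
                          self.write_buf[0] = chunk
                      else:
                          del self.write_buf[0]
              # Keep polling for writing while data remains to be sent.
              return bool(self.write_buf)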
  8. 30 Sep, 2015 1 commit
  9. 24 Sep, 2015 4 commits
  10. 23 Sep, 2015 3 commits
  11. 15 Sep, 2015 8 commits
  12. 14 Sep, 2015 1 commit
  13. 07 Sep, 2015 1 commit
  14. 28 Aug, 2015 6 commits
      client: drop now useless wrapper to log safely in poll thread during shutdown · 9531c9cb
      Julien Muchembled authored
      Recent Python already catches exceptions due to garbage collection on exit.
      storage: fix history() not waiting for oid to be unlocked · e27358d1
      Julien Muchembled authored
      This fixes a random failure in testClientReconnection:
      
      Traceback (most recent call last):
        File "neo/tests/threaded/test.py", line 754, in testClientReconnection
          self.assertTrue(cluster.client.history(x1._p_oid))
      failureException: None is not true
      Fix random failure in testRecycledClientUUID · 79be7787
      Julien Muchembled authored
      Traceback (most recent call last):
        File "neo/tests/threaded/test.py", line 838, in testRecycledClientUUID
          x = client.load(ZERO_TID)
        [...]
        File "neo/tests/threaded/test.py", line 822, in notReady
          m2s.remove(delayNotifyInformation)
        File "neo/tests/threaded/__init__.py", line 482, in remove
          del self.filter_dict[filter]
      KeyError: <function delayNotifyInformation at 0x7f511063a578>
      Fix several random failures in tests that didn't wait for transaction to be unlocked · c4ac45a8
      Julien Muchembled authored
      NEOCluster.tic() gets a new 'slave' parameter that must be True when a client
      node is in 'master' mode (i.e. setPoll(True)). In this case, tic() waits until
      all nodes have finished their work and the client polls with a non-zero timeout.
      
      Here, tic(slave=1) is used to wait for the storage to process the
      NotifyUnlockInformation notification from the master.
      
      Traceback (most recent call last):
        File "neo/tests/threaded/test.py", line 80, in testBasicStore
          self.assertEqual(data_info, cluster.storage.getDataLockInfo())
        File "neo/tests/__init__.py", line 170, in assertEqual
          return super(NeoTestBase, self).assertEqual(first, second, msg=msg)
      failureException: {('\x0b\xee\xc7\xb5\xea?\x0f\xdb\xc9]\r\xd4\x7f<[\xc2u\xda\x8a3', 0): 0} != {('\x0b\xee\xc7\xb5\xea?\x0f\xdb\xc9]\r\xd4\x7f<[\xc2u\xda\x8a3', 0): 1}
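
      As a usage sketch of the new parameter: after committing, tic(slave=1) lets the
      storage process NotifyUnlockInformation before the lock state is compared. The
      snippet below only illustrates the pattern; 't', 'cluster' and 'data_info' are
      assumed to come from the test's setup and are not copied from NEO.

      # Hedged sketch of the pattern, not NEO's actual testBasicStore code.
      t.commit()
      cluster.tic(slave=1)  # wait until NotifyUnlockInformation is processed
      self.assertEqual(data_info, cluster.storage.getDataLockInfo())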
      Several improvements to verbose locks · 5dc1f06c
      Julien Muchembled authored
      All these changes were useful to debug deadlocks in threaded tests:
      - New verbose Semaphore.
      - Logs with a numerical 'ident' were too annoying to read, so revert to the
        thread name (as before commit 5b69d553), with an exception for threaded
        tests. There remains one case where the result is not unique: when several
        client apps are instantiated.
      - Make deadlock detection optional.
      - Make it possible to name locks.
      - Make output more compact.
      - Remove useless 'debug_lock' option.
      - Add timing information.
      - Make exception more verbose when an un-acquired lock is released.
      
      Here is how I used 'locking':
      
      --- a/neo/tests/threaded/__init__.py
      +++ b/neo/tests/threaded/__init__.py
      @@ -37,0 +38 @@
      +from neo.lib.locking import VerboseSemaphore
      @@ -71 +72,2 @@ def init(cls):
      -        cls._global_lock = threading.Semaphore(0)
      +        cls._global_lock = VerboseSemaphore(0, check_owner=False,
      +                                            name="Serialized._global_lock")
      @@ -265 +267,2 @@ def start(self):
      -        self.em._lock = l = threading.Semaphore(0)
      +        self.em._lock = l = VerboseSemaphore(0, check_owner=False,
      +                                             name=self.node_name)
      @@ -346 +349,2 @@ def __init__(self, master_nodes, name, **kw):
      -        self.em._lock = threading.Semaphore(0)
      +        self.em._lock = VerboseSemaphore(0, check_owner=False,
      +                                         name=repr(self))
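
      For reference, here is a minimal sketch of what a nameable, verbose semaphore
      could look like; the 'name' and 'check_owner' parameters follow the diff above,
      but their exact semantics here are assumptions, not neo.lib.locking's actual
      VerboseSemaphore implementation.

      import threading
      import time
      import traceback

      class SketchVerboseSemaphore(object):
          """Illustrative nameable semaphore that logs acquire/release with
          timing information (counters are not thread-safe: sketch only)."""

          def __init__(self, value=1, check_owner=True, name=None):
              self._sem = threading.Semaphore(value)
              self._name = name or hex(id(self))
              self._value = value
              self._check_owner = check_owner
              self._owner = None

          def acquire(self, blocking=True):
              start = time.time()
              acquired = self._sem.acquire(blocking)
              if acquired:
                  self._value -= 1
                  self._owner = threading.current_thread().name
                  print('%s acquired by %s after %.3fs'
                        % (self._name, self._owner, time.time() - start))
              return acquired

          def release(self):
              if self._check_owner and self._owner is None and self._value > 0:
                  # Be verbose when an un-acquired lock is released.
                  raise AssertionError('%s released but never acquired:\n%s'
                      % (self._name, ''.join(traceback.format_stack())))
              self._value += 1
              self._owner = None
              print('%s released by %s'
                    % (self._name, threading.current_thread().name))
              self._sem.release()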
      Fix occasional deadlocks in threaded tests · 0b93b1fb
      Julien Muchembled authored
      Deadlocks mainly happened while stopping a cluster, hence the complete review
      of NEOCluster.stop().
      
      A major change is to make the client node handle its lock like other nodes
      (i.e. in the polling thread itself) to better know when to call
      Serialized.background() (there was a race condition with the test of
      'self.poll_thread.isAlive()' in ClientApplication.close).
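
      A generic, runnable illustration of that kind of check-then-act race (plain
      threading, not NEO code): a decision based on Thread.is_alive() may already be
      stale by the time the dependent action runs, which is why the lock handling was
      moved into the polling thread itself.

      import threading

      def worker(event):
          event.wait()  # stands in for the polling loop; exits on demand

      stop = threading.Event()
      t = threading.Thread(target=worker, args=(stop,))
      t.start()

      alive_before = t.is_alive()  # check...
      stop.set()                   # ...meanwhile the thread is told to exit
      t.join()
      # 'alive_before' is still True although the thread is now gone, so any
      # cleanup branched on that earlier check would act on stale information.
      assert alive_before and not t.is_alive()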
  15. 14 Aug, 2015 2 commits
  16. 12 Aug, 2015 2 commits
      Remove useless testEvent · 71e30fb9
      Julien Muchembled authored
      This kind of test has never helped to detect regressions, and any bug in
      EpollEventManager would be quickly reported by other tests.
      
      testConnection may go the same way if it keeps annoying me too much.
      client: do not wait for the remote to close the connection if it's not ready · f9df31be
      Julien Muchembled authored
      This is currently not an issue because the 'time.sleep(1)' in iterateForObject
      (storage) and _connectToPrimaryNode (master) leaves enough time. What could
      happen is a new connection attempt for a node that already has a connection
      (causing an assertion failure in Node.setConnection).