1. 23 Mar, 2010 14 commits
    • ceph: make write_begin wait propagate ERESTARTSYS · 8f883c24
      Sage Weil authored
      Currently, if wait_event_interruptible() is interrupted, we
      return EAGAIN unconditionally and loop, so in practice we are not
      interruptible at all.  Propagate ERESTARTSYS instead when we get it.
      Signed-off-by: Sage Weil <sage@newdream.net>
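      As a rough sketch of the control flow described above (the names
      here are placeholders, not the actual fs/ceph functions):

      #include <linux/errno.h>

      struct ceph_ctx;                              /* placeholder state */
      int prepare_write_page(struct ceph_ctx *c);   /* hypothetical: returns
                                                     * 0, -EAGAIN, or
                                                     * -ERESTARTSYS */

      static int write_begin_loop(struct ceph_ctx *c)
      {
              int ret;

              do {
                      ret = prepare_write_page(c);
                      /* Before the fix, -ERESTARTSYS was folded into
                       * -EAGAIN here, so the loop was effectively
                       * uninterruptible. */
              } while (ret == -EAGAIN);

              return ret;   /* 0, or -ERESTARTSYS propagated up */
      }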
    • ceph: fix snap rebuild condition · ec4318bc
      Sage Weil authored
      We were rebuilding the snap context when it was not necessary
      (i.e. when the realm seq hadn't changed _and_ the parent seq
      was still older), which caused page snapc pointers to not match
      the realm's snapc pointer (even though the snap context itself
      was identical).  This confused write_begin and put it into an
      endless loop.
      
      The correct logic is: rebuild snapc if _my_ realm seq changed, or
      if my parent realm's seq is newer than mine (and thus mine needs
      to be rebuilt too).
      Signed-off-by: Sage Weil <sage@newdream.net>
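      The condition can be sketched roughly as follows; the field names
      are assumptions based on the description above, not the real
      ceph_snap_realm layout:

      #include <linux/types.h>

      struct snap_realm {                   /* simplified stand-in */
              u64 seq;                      /* latest seq for this realm */
              u64 cached_seq;               /* seq our snapc was built at */
              struct snap_realm *parent;
      };

      static bool snapc_needs_rebuild(struct snap_realm *realm)
      {
              /* Rebuild only if our own seq moved ... */
              if (realm->seq > realm->cached_seq)
                      return true;
              /* ... or if the parent's context is newer than what ours
               * was built against (so ours must be rebuilt too). */
              if (realm->parent &&
                  realm->parent->cached_seq > realm->cached_seq)
                      return true;
              return false;
      }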
    • ceph: avoid reopening osd connections when address hasn't changed · 87b315a5
      Sage Weil authored
      We get a fault callback on _every_ tcp connection fault.  Normally, we
      want to reopen the connection when that happens.  If the address we have
      is bad, however, and connection attempts always result in a connection
      refused or similar error, explicitly closing and reopening the msgr
      connection just prevents the messenger's backoff logic from kicking in.
      The result can be a console full of
      
      [ 3974.417106] ceph: osd11 10.3.14.138:6800 connection failed
      [ 3974.423295] ceph: osd11 10.3.14.138:6800 connection failed
      [ 3974.429709] ceph: osd11 10.3.14.138:6800 connection failed
      
      Instead, if we get a fault, and have outstanding requests, but the osd
      address hasn't changed and the connection never successfully connected in
      the first place, do nothing to the osd connection.  The messenger layer
      will back off and retry periodically, because we never connected and thus
      the lossy bit is not set.
      
      Instead, touch each request's r_stamp so that handle_timeout can tell the
      request is still alive and kicking.
      Signed-off-by: Sage Weil <sage@newdream.net>
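      Roughly, the fault callback now behaves like this (a simplified
      sketch; the struct and helpers are stand-ins for the real
      osd_client code):

      #include <linux/list.h>
      #include <linux/types.h>

      struct osd_session {                  /* simplified stand-in */
              struct list_head requests;    /* outstanding requests */
              bool addr_changed;            /* osdmap address differs? */
              bool ever_connected;          /* tcp session ever came up? */
      };

      void touch_request_stamps(struct osd_session *osd);   /* hypothetical */
      void reset_osd_connection(struct osd_session *osd);   /* hypothetical */

      static void osd_fault(struct osd_session *osd)
      {
              if (!list_empty(&osd->requests) &&
                  !osd->addr_changed && !osd->ever_connected) {
                      /*
                       * Never connected and the address hasn't changed:
                       * leave the msgr connection alone so its own
                       * backoff applies, and just refresh each request's
                       * r_stamp so handle_timeout sees them as alive.
                       */
                      touch_request_stamps(osd);
                      return;
              }
              /* Otherwise close and reopen the connection as before. */
              reset_osd_connection(osd);
      }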
    • ceph: rename r_sent_stamp r_stamp · 3dd72fc0
      Sage Weil authored
      Make variable name slightly more generic, since it will (soon)
      reflect either the time the request was sent OR the time it was
      last determined to be still retrying.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: fix connection fault con_work reentrancy problem · 3c3f2e32
      Sage Weil authored
      The messenger fault path was clearing the BUSY bit, for reasons unclear.
      This made it possible for the con->ops->fault function to reopen the
      connection and requeue work in the workqueue, even though the current
      thread was already in con_work.
      
      This avoids a problem where the client busy loops with connection failures
      on an unreachable OSD, but doesn't address the root cause of that problem.
      Signed-off-by: Sage Weil <sage@newdream.net>
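      A minimal sketch of the reentrancy guard (the bit name and struct
      are simplified, not the real messenger definitions):

      #include <linux/bitops.h>

      #define BUSY 0                        /* bit in con->state, simplified */

      struct connection {
              unsigned long state;
      };

      static void con_work(struct connection *con)
      {
              if (test_and_set_bit(BUSY, &con->state))
                      return;       /* another pass is already running */

              /*
               * ... do the work; on a tcp error the fault path runs
               * here.  It no longer clears BUSY, so ops->fault cannot
               * requeue con_work underneath us.
               */

              clear_bit(BUSY, &con->state); /* only cleared when done */
      }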
    • ceph: prevent dup stale messages to console for restarting mds · e4cb4cb8
      Sage Weil authored
      Prevent duplicate 'mds0 caps stale' messages from spamming the console every
      few seconds while the MDS restarts.  Set s_renew_requested earlier, so that
      we only print the message once, even if we don't send an actual request.
      Signed-off-by: Sage Weil <sage@newdream.net>
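      The idea, roughly (s_renew_requested is the session field mentioned
      above; the other names here are placeholders):

      #include <linux/jiffies.h>
      #include <linux/kernel.h>
      #include <linux/types.h>

      struct mds_session {                  /* simplified stand-in */
              int s_mds;
              unsigned long s_renew_requested;   /* when we last tried */
      };

      bool caps_went_stale_since(struct mds_session *s,
                                 unsigned long when);      /* hypothetical */
      bool mds_is_up(struct mds_session *s);               /* hypothetical */

      static void send_renew_caps(struct mds_session *s)
      {
              /* Only warn if the caps went stale after our last attempt. */
              if (caps_went_stale_since(s, s->s_renew_requested))
                      pr_info("ceph: mds%d caps stale\n", s->s_mds);

              /*
               * Record the attempt *before* any early return, so the
               * message above is not repeated every few seconds while
               * the MDS is still restarting.
               */
              s->s_renew_requested = jiffies;

              if (!mds_is_up(s))
                      return;       /* nothing to send yet; try again later */

              /* ... build and send the actual RENEWCAPS request ... */
      }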
    • ceph: fix pg pool decoding from incremental osdmap update · efd7576b
      Sage Weil authored
      The incremental map decoding of pg pool updates wasn't skipping
      the snaps and removed_snaps vectors.  This caused osd requests
      to stall when pool snapshots were created or fs snapshots were
      deleted.  Use a common helper for full and incremental map
      decoders that decodes pools properly.
      Signed-off-by: Sage Weil <sage@newdream.net>
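      Schematically, the shared helper looks something like this (the
      decode helpers and pool layout here are placeholders, not the real
      osdmap code):

      struct pool_info;                                     /* stand-in */

      int decode_pool_body(void **p, void *end,
                           struct pool_info *pi);           /* hypothetical */
      int skip_snaps(void **p, void *end);                  /* hypothetical */
      int skip_removed_snaps(void **p, void *end);          /* hypothetical */

      /* Used by both the full-map and incremental-map decoders. */
      static int decode_pool(void **p, void *end, struct pool_info *pi)
      {
              int err = decode_pool_body(p, end, pi);  /* fixed pool fields */

              if (err)
                      return err;
              /*
               * The incremental decoder used to stop here, leaving *p in
               * the middle of the pool encoding whenever snapshots
               * existed.  Both paths now consume the trailing vectors.
               */
              err = skip_snaps(p, end);
              if (err)
                      return err;
              return skip_removed_snaps(p, end);
      }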
    • ceph: fix mds sync() race with completing requests · 80fc7314
      Sage Weil authored
      The wait_unsafe_requests() helper dropped the mdsc mutex to wait
      for each request to complete, and then examined r_node to get the
      next request after retaking the lock.  But completing a request
      removes it from the tree, so by that point r_node was stale.
      Because the window is small, the stale pointer usually still led to
      a valid request, but not always, and the result was an occasional
      crash in rb_next() while dereferencing node->rb_left.
      
      Fix this by clearing the rb_node when removing the request from
      the request tree, and not walking off into the weeds when we
      are done waiting for a request.  Since the request we waited on
      will _always_ be out of the request tree by then, take a ref on the
      next request beforehand, in the hope that it will still be in the
      tree when we wake up.  If it has completed too, that's ok: we just
      start over from the beginning (and traverse over older read
      requests again).
      Signed-off-by: Sage Weil <sage@newdream.net>
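      In outline, using the kernel rbtree helpers (the surrounding
      structures are simplified stand-ins):

      #include <linux/rbtree.h>

      struct mds_request {                  /* simplified stand-in */
              struct rb_node r_node;
      };

      /* On completion, unlink the request and mark its node empty so a
       * waiter can tell it is no longer in the tree. */
      static void unregister_request(struct rb_root *tree,
                                     struct mds_request *req)
      {
              rb_erase(&req->r_node, tree);
              RB_CLEAR_NODE(&req->r_node);
      }

      /* After waiting, decide where to continue from.  'next' was
       * referenced before the mdsc mutex was dropped. */
      static struct rb_node *resume_point(struct rb_root *tree,
                                          struct mds_request *next)
      {
              if (next && !RB_EMPTY_NODE(&next->r_node))
                      return &next->r_node;  /* still linked: continue */
              return rb_first(tree);         /* completed too: start over */
      }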
    • ceph: only release unused caps with mds requests · 916623da
      Sage Weil authored
      We were releasing used caps (e.g. FILE_CACHE) from encode_inode_release
      with MDS requests (e.g. setattr).  We don't carry refs on most caps, so
      this code worked most of the time, but for setattr (utimes) we try to
      drop Fscr.
      
      This causes cap state to get slightly out of sync with reality, and may
      result in subsequent mds revoke messages getting ignored.
      
      Fix by only releasing unused caps.
      Signed-off-by: Sage Weil <sage@newdream.net>
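      The fix amounts to masking the to-be-released caps against what is
      currently in use, roughly (an illustration, not the real
      encode_inode_release signature):

      /*
       * 'issued' is what the MDS granted us, 'used' is what the client
       * is actively using right now, and 'drop' is what the caller
       * would like to release along with the request.
       */
      static int caps_to_release(int issued, int used, int drop)
      {
              drop &= ~used;          /* never release something in use */
              return drop & issued;   /* can only release what we hold */
      }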
    • ceph: clean up handle_cap_grant, handle_caps wrt session mutex · 15637c8b
      Sage Weil authored
      Drop session mutex unconditionally in handle_cap_grant, and do the
      check_caps from the handle_cap_grant helper.  This avoids using a magic
      return value.
      
      Also avoid using a flag variable in the IMPORT case and call
      check_caps at the appropriate point.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: fix session locking in handle_caps, ceph_check_caps · cdc2ce05
      Sage Weil authored
      Passing a session pointer to ceph_check_caps() used to mean it would leave
      the session mutex locked.  That wasn't always possible if it wasn't passed
      CHECK_CAPS_AUTHONLY.  It could unlock the passed session and lock a
      different session mutex, which was clearly wrong, and also emitted a
      warning when a racing CPU retook it and we did an unlock from the wrong
      context.
      
      This was only a problem when there was more than one MDS.
      
      First, make ceph_check_caps unconditionally drop the session mutex, so that
      it is free to lock other sessions as needed.  Then adjust the one caller
      that passes in a session (handle_cap_grant) accordingly.
      Signed-off-by: Sage Weil <sage@newdream.net>
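      In outline, the new contract looks like this (heavily simplified;
      the real functions take more arguments):

      #include <linux/mutex.h>

      struct mds_session {                  /* simplified stand-in */
              struct mutex s_mutex;
      };
      struct inode_info;                    /* stand-in */

      /*
       * check_caps() may need to lock several different session mutexes,
       * so if the caller passes one in already locked, it is dropped up
       * front and the function returns with no session mutex held.
       */
      static void check_caps(struct inode_info *ci, struct mds_session *locked)
      {
              if (locked)
                      mutex_unlock(&locked->s_mutex);

              /* ... lock/unlock whichever sessions the cap state needs ... */
      }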
    • ceph: drop unnecessary WARN_ON in caps migration · 4ea0043a
      Sage Weil authored
      If we don't have the exported cap it's because we already released it. No
      need to WARN.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: fix null pointer deref of r_osd in debug output · 12eadc19
      Sage Weil authored
      This causes an oops when debug output is enabled and we kick
      an osd request with no current r_osd (sometime after an osd
      failure).  Check the pointer before dereferencing.
      Signed-off-by: Sage Weil <sage@newdream.net>
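      The pattern is just a guarded debug print, along these lines (the
      structures are simplified stand-ins):

      #include <linux/kernel.h>
      #include <linux/types.h>

      struct osd {                          /* simplified stand-in */
              int o_osd;
      };
      struct osd_request {
              u64 r_tid;
              struct osd *r_osd;            /* may be NULL after an osd
                                             * failure */
      };

      static void debug_kick(struct osd_request *req)
      {
              /* Guard the dereference: r_osd can legitimately be NULL. */
              pr_debug("kicking tid %llu osd%d\n",
                       (unsigned long long)req->r_tid,
                       req->r_osd ? req->r_osd->o_osd : -1);
      }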
    • ceph: clean up service ticket decoding · 0a990e70
      Sage Weil authored
      Previously we would decode state directly into our current ticket_handler.
      This is problematic if for some reason we fail to decode, because we end
      up with half new state and half old state.
      
      We are probably already in bad shape if we get an update we can't decode,
      but we may as well be tidy anyway.  Decode into new_* temporaries and
      update the ticket_handler only on success.
      Signed-off-by: Sage Weil <sage@newdream.net>
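      The shape of the change, roughly (the new_* temporaries mirror the
      description above; the types and decode calls are placeholders):

      #include <linux/types.h>

      struct key_blob {                     /* stand-in for the session key */
              u8 data[64];
              u32 len;
      };

      struct ticket_handler {               /* simplified stand-in */
              struct key_blob session_key;
              unsigned long expires;
              unsigned long renew_after;
      };

      int decode_key(void **p, void *end, struct key_blob *k);   /* hypothetical */
      int decode_times(void **p, void *end, unsigned long *expires,
                       unsigned long *renew_after);              /* hypothetical */

      static int update_ticket(struct ticket_handler *th, void **p, void *end)
      {
              struct key_blob new_session_key;
              unsigned long new_expires, new_renew_after;
              int err;

              /* Decode into temporaries first ... */
              err = decode_key(p, end, &new_session_key);
              if (err)
                      return err;
              err = decode_times(p, end, &new_expires, &new_renew_after);
              if (err)
                      return err;

              /* ... and only update the ticket_handler once everything
               * decoded cleanly, so a bad update cannot leave it half
               * old, half new. */
              th->session_key = new_session_key;
              th->expires = new_expires;
              th->renew_after = new_renew_after;
              return 0;
      }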
  2. 21 Mar, 2010 6 commits
  3. 20 Mar, 2010 3 commits
  4. 19 Mar, 2010 17 commits