1. 27 Oct, 2005 1 commit
    • [IB] mthca: first pass at catastrophic error reporting · 3d155f8c
      Roland Dreier authored
      Add some initial support for detecting and reporting catastrophic
      errors reported by Mellanox HCAs.  We start a periodic timer which
      polls the catastrophic error reporting buffer in device memory.  If an
      error is detected, we dump the contents of the buffer for post-mortem
      debugging, and report a fatal asynchronous error to higher levels.
      
      In the future we can try to recover from these errors by resetting the
      device, but this will require some work in higher-level code as well.
      Let's get this in now, so that we at least get catastrophic errors
      reported in logs.
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
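
      A rough sketch of the polling pattern described above, using the
      current kernel timer API; the structure, field names, and the
      one-second interval are invented for illustration and are not
      taken from the real mthca code:

          #include <linux/timer.h>
          #include <linux/jiffies.h>
          #include <linux/io.h>
          #include <linux/printk.h>

          struct catas_dev {
                  void __iomem *catas_buf;   /* error buffer in device memory */
                  size_t catas_words;        /* buffer size in 32-bit words   */
                  struct timer_list catas_timer;
          };

          static void poll_catas(struct timer_list *t)
          {
                  struct catas_dev *dev = from_timer(dev, t, catas_timer);
                  size_t i;

                  if (readl(dev->catas_buf)) {
                          /* Dump the buffer for post-mortem debugging. */
                          for (i = 0; i < dev->catas_words; ++i)
                                  pr_err("catas[%zu]: 0x%08x\n", i,
                                         readl(dev->catas_buf + 4 * i));
                          /* The real driver reports a fatal asynchronous
                           * event to higher levels here instead of
                           * rearming the timer. */
                          return;
                  }

                  /* No error yet: poll again in about a second. */
                  mod_timer(&dev->catas_timer, jiffies + HZ);
          }

          static void start_catas_poll(struct catas_dev *dev)
          {
                  timer_setup(&dev->catas_timer, poll_catas, 0);
                  mod_timer(&dev->catas_timer, jiffies + HZ);
          }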
  2. 25 Oct, 2005 3 commits
    • [IB] simplify mad_rmpp.c:alloc_response_msg() · 7cc656ef
      Roland Dreier authored
      Change alloc_response_msg() in mad_rmpp.c to return the struct
      it allocates directly (or an error code a la ERR_PTR), rather than
      returning a status and passing the struct back in a pointer param.
      This simplifies the code and gets rid of warnings like
      
          drivers/infiniband/core/mad_rmpp.c: In function nack_recv:
          drivers/infiniband/core/mad_rmpp.c:192: warning: msg may be used uninitialized in this function
      
      with newer versions of gcc.
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
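
      A minimal illustration of the ERR_PTR() convention the change
      adopts; the function and structure names below are simplified
      stand-ins, not the actual mad_rmpp.c code:

          #include <linux/err.h>
          #include <linux/errno.h>
          #include <linux/slab.h>

          struct response_msg {
                  void *payload;
          };

          /* Return the allocated struct directly, encoding failure in the
           * pointer itself instead of a separate status + out-param. */
          static struct response_msg *alloc_response(gfp_t gfp)
          {
                  struct response_msg *msg = kzalloc(sizeof(*msg), gfp);

                  if (!msg)
                          return ERR_PTR(-ENOMEM);
                  return msg;
          }

          static int use_response(void)
          {
                  struct response_msg *msg = alloc_response(GFP_KERNEL);

                  /* The caller checks IS_ERR()/PTR_ERR(), so msg can never
                   * be read while uninitialized. */
                  if (IS_ERR(msg))
                          return PTR_ERR(msg);

                  kfree(msg);
                  return 0;
          }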
    • [IB] mthca: correct modify QP attribute masks for UC · 547e3090
      Roland Dreier authored
      The UC transport does not support RDMA reads or atomic operations, so
      we shouldn't require or even allow the consumer to set attributes
      relating to these operations for UC QPs.
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
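
      For illustration, a consumer-side sketch of moving a UC QP to RTR
      after this change, written with current rdma/ib_verbs.h names (not
      the 2005-era ones); the MTU and other values are placeholders.
      The point is only that the RDMA read/atomic related attribute
      IB_QP_MAX_DEST_RD_ATOMIC stays out of the attribute mask for UC:

          #include <rdma/ib_verbs.h>

          /* Transition a UC QP to RTR.  Unlike an RC QP, no RDMA read or
           * atomic attribute is supplied, since UC supports neither. */
          static int uc_qp_to_rtr(struct ib_qp *qp, u32 dest_qpn, u32 rq_psn,
                                  struct rdma_ah_attr *ah)
          {
                  struct ib_qp_attr attr = {
                          .qp_state    = IB_QPS_RTR,
                          .path_mtu    = IB_MTU_1024,    /* placeholder */
                          .dest_qp_num = dest_qpn,
                          .rq_psn      = rq_psn,
                          .ah_attr     = *ah,
                  };

                  return ib_modify_qp(qp, &attr,
                                      IB_QP_STATE | IB_QP_AV | IB_QP_PATH_MTU |
                                      IB_QP_DEST_QPN | IB_QP_RQ_PSN);
          }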
    • [IB] Fix MAD layer DMA mappings to avoid touching data buffer once mapped · 34816ad9
      Sean Hefty authored
      The MAD layer was violating the DMA API by touching data buffers used
      for sends after the DMA mapping was done.  This causes problems on
      non-cache-coherent architectures, because the device doing DMA won't
      see updates to the payload buffers that exist only in the CPU cache.
      
      Fix this by having all MAD consumers use ib_create_send_mad() to
      allocate their send buffers, and moving the DMA mapping into the MAD
      layer so it can be done just before calling send (and after any
      modifications of the send buffer by the MAD layer).
      
      Tested on a non-cache-coherent PowerPC 440SPe system.
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
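
      A condensed sketch of the ordering rule this fix enforces; the
      function and its parameters are invented, but the ib_dma_* helpers
      are the real kernel API.  All CPU writes to the payload come
      first, and the mapping is created only just before the send is
      posted:

          #include <rdma/ib_verbs.h>
          #include <linux/errno.h>

          static int map_and_post(struct ib_device *device, void *mad_buf,
                                  size_t len)
          {
                  u64 dma_addr;

                  /* 1. Finish every CPU-side write to mad_buf here.  On a
                   *    non-cache-coherent machine, anything written after
                   *    the mapping may sit in the CPU cache where the HCA
                   *    cannot see it. */

                  /* 2. Map just before posting the send. */
                  dma_addr = ib_dma_map_single(device, mad_buf, len,
                                               DMA_TO_DEVICE);
                  if (ib_dma_mapping_error(device, dma_addr))
                          return -ENOMEM;

                  /* 3. Post the work request with dma_addr as the sge
                   *    address (posting and the unmap on completion are
                   *    omitted from this sketch). */
                  return 0;
          }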
  3. 24 Oct, 2005 5 commits
  4. 23 Oct, 2005 14 commits
  5. 22 Oct, 2005 4 commits
  6. 21 Oct, 2005 11 commits
  7. 20 Oct, 2005 2 commits