1. 13 Nov, 2017 (38 commits)
  2. 10 Nov, 2017 (2 commits)
      IB/mlx5: Add PCI write end padding support · b1383aa6
      Noa Osherovich authored
      Add the PCI write end padding flag to the device_cap_flags enum and set
      it during mlx5_ib_query_device so it is reported to user space.
      
      During WQ/QP creation, set that capability for the WQ/QP if the user
      requested it and the HW supports it.
      
      Modifying PCI write end padding is not supported for now. There is no
      such modify flag for a QP, but for a WQ, create and modify use the same
      flag, so return an error if the PCI write end padding flag is set during
      modify_wq.
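
      A minimal sketch of the create/modify handling described above, using
      illustrative C identifiers rather than the exact mlx5 flag and struct
      names:

          #include <errno.h>
          #include <stdbool.h>

          /* Illustrative bit definitions, not the real mlx5/IB flag values. */
          #define DEV_CAP_PCI_WRITE_END_PADDING  (1u << 0) /* from query_device */
          #define CREATE_FLAG_END_PADDING        (1u << 0) /* requested by user */

          struct dev_caps { unsigned int cap_flags; };
          struct wq_attr  { unsigned int create_flags; };

          /* WQ/QP creation: honour the flag only when the HW reports the cap. */
          static int set_end_padding(const struct dev_caps *caps,
                                     const struct wq_attr *attr, bool *enable)
          {
                  if (!(attr->create_flags & CREATE_FLAG_END_PADDING)) {
                          *enable = false;
                          return 0;
                  }
                  if (!(caps->cap_flags & DEV_CAP_PCI_WRITE_END_PADDING))
                          return -EOPNOTSUPP; /* requested but not supported */
                  *enable = true;
                  return 0;
          }

          /* modify_wq: end padding cannot be modified, so reject the flag. */
          static int check_modify_flags(unsigned int flags)
          {
                  return (flags & CREATE_FLAG_END_PADDING) ? -EOPNOTSUPP : 0;
          }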
      Signed-off-by: Noa Osherovich <noaos@mellanox.com>
      Reviewed-by: Majd Dibbiny <majd@mellanox.com>
      Signed-off-by: Leon Romanovsky <leon@kernel.org>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      IB/core: Add PCI write end padding flags for WQ and QP · e1d2e887
      Noa Osherovich authored
      There are root complexes that are able to optimize their performance
      when incoming data consists of multiple full cache lines.
      
      PCI write end padding is the device's ability to pad the end of
      incoming packets (scatter) to a full cache line, so that the last
      upstream write generated by an incoming packet is a full cache line.
      
      Add a relevant entry to ib_device_cap_flags to report this capability
      of an RDMA device.
      
      Add the QP and WQ create flags:
       * A QP/WQ created with a scatter end padding flag will cause the HW to
         pad the last upstream write generated by a packet to a full cache line.
      
      Users should consider several factors before activating this feature:
      - Under high CPU memory load (which may in turn cause PCI back
        pressure), if a large percentage of the writes are partial cache
        lines, this feature should be evaluated as a possible optimization.
      - This feature might reduce performance if most packets are between one
        and two cache lines and PCIe throughput has already reached its
        maximum capacity. E.g. a 65B packet from the network port will lead
        to a 128B write on PCIe, which may push PCIe traffic toward its limit
        (see the sketch after this list).
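
      A small, self-contained sketch of the capability check and the padding
      arithmetic behind the 65B example above; the cap flag name follows this
      commit, but its bit value here is an assumption for illustration:

          #include <stdio.h>

          /* Assumed bit value for illustration; the real definition lives in
           * ib_device_cap_flags (include/rdma/ib_verbs.h). */
          #define IB_DEVICE_PCI_WRITE_END_PADDING (1ULL << 36)
          #define CACHE_LINE 64u

          /* Round the final upstream write up to a full cache line. */
          static unsigned int padded_len(unsigned int pkt_len)
          {
                  return ((pkt_len + CACHE_LINE - 1) / CACHE_LINE) * CACHE_LINE;
          }

          int main(void)
          {
                  /* As if reported by the device's query_device verb. */
                  unsigned long long device_cap_flags =
                          IB_DEVICE_PCI_WRITE_END_PADDING;

                  if (device_cap_flags & IB_DEVICE_PCI_WRITE_END_PADDING)
                          printf("PCI write end padding supported\n");

                  /* 65B packet from the wire -> 128B written on PCIe. */
                  printf("65B packet -> %uB upstream write\n", padded_len(65));
                  return 0;
          }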
      Signed-off-by: Noa Osherovich <noaos@mellanox.com>
      Reviewed-by: Majd Dibbiny <majd@mellanox.com>
      Signed-off-by: Leon Romanovsky <leon@kernel.org>
      Signed-off-by: Doug Ledford <dledford@redhat.com>