Commit 19fd08b8 authored by Linus Torvalds

Merge tag 'for-linus-unmerged' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma

Pull rdma updates from Jason Gunthorpe:
 "Doug and I are at a conference next week so if another PR is sent I
  expect it to only be bug fixes. Parav noted yesterday that there are
  some fringe case behavior changes in his work that he would like to
  fix, and I see that Intel has a number of rc looking patches for HFI1
  they posted yesterday.

  Parav is again the biggest contributor by patch count, with his
  ongoing work to enable container support in the RDMA stack, followed
  by Leon doing syzkaller-inspired cleanups, though most of the actual
  fixing went to RC.

  There is one uncomfortable series here fixing the user ABI to actually
  work as intended in 32 bit mode. There are lots of notes in the commit
  messages, but the basic summary is we don't think there is an actual
  32 bit kernel user of drivers/infiniband for several good reasons.

  However, we are seeing people who want to use a 32 bit userspace with
  a 64 bit kernel, which until now didn't completely work. So in fixing
  it we required 32 bit rxe users to upgrade their userspace. rxe users
  are already quite rare, and we think a 32 bit one is non-existent.

   - Fix RDMA uapi headers to actually compile in userspace and be more
     complete

   - Three pull requests shared with netdev, from Mellanox:

      * 7 patches, mostly to net, with 1 IB-related one at the end.
        This series addresses an IRQ performance issue (patch 1),
        cleanups related to the fix for the IRQ performance problem
        (patches 2-6), and then extends the fragmented completion queue
        support that already exists in the net side of the driver to the
        ib side of the driver (patch 7).

      * Mostly IB, with 5 patches to net that are needed to support the
        remaining 10 patches to the IB subsystem. This series extends
        the current 'representor' framework when the mlx5 driver is in
        switchdev mode from being a netdev only construct to being a
        netdev/IB dev construct. The IB dev is limited to raw Eth queue
        pairs only, but by having an IB dev of this type attached to the
        representor for a switchdev port, it enables DPDK to work on the
        switchdev device.

      * All net related, but needed as infrastructure for the rdma
        driver

   - Updates for the hns, i40iw, bnxt_re, cxgb3, and cxgb4 drivers

   - SRP performance updates

   - IB uverbs write path cleanup patch series from Leon

   - Add RDMA_CM support to ib_srpt. This is disabled by default. Users
     need to set the port for ib_srpt to listen on in configfs in order
     for it to be enabled
     (/sys/kernel/config/target/srpt/discovery_auth/rdma_cm_port)

   - TSO and Scatter FCS support in mlx4

   - Refactor of modify_qp routine to resolve problems seen while
     working on new code that is forthcoming

   - More refactoring and updates of RDMA CM for containers support from
     Parav

   - mlx5 'fine grained packet pacing', 'ipsec offload' and 'device
     memory' user API features

   - Infrastructure updates for the new IOCTL interface, based on
     increased usage

   - ABI compatibility bug fixes to fully support 32 bit userspace on 64
     bit kernel as was originally intended. See the commit messages for
     extensive details

   - Syzkaller bugs and code cleanups motivated by them"

* tag 'for-linus-unmerged' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma: (199 commits)
  IB/rxe: Fix for oops in rxe_register_device on ppc64le arch
  IB/mlx5: Device memory mr registration support
  net/mlx5: Mkey creation command adjustments
  IB/mlx5: Device memory support in mlx5_ib
  net/mlx5: Query device memory capabilities
  IB/uverbs: Add device memory registration ioctl support
  IB/uverbs: Add alloc/free dm uverbs ioctl support
  IB/uverbs: Add device memory capabilities reporting
  IB/uverbs: Expose device memory capabilities to user
  RDMA/qedr: Fix wmb usage in qedr
  IB/rxe: Removed GID add/del dummy routines
  RDMA/qedr: Zero stack memory before copying to user space
  IB/mlx5: Add ability to hash by IPSEC_SPI when creating a TIR
  IB/mlx5: Add information for querying IPsec capabilities
  IB/mlx5: Add IPsec support for egress and ingress
  {net,IB}/mlx5: Add ipsec helper
  IB/mlx5: Add modify_flow_action_esp verb
  IB/mlx5: Add implementation for create and destroy action_xfrm
  IB/uverbs: Introduce ESP steering match filter
  IB/uverbs: Add modify ESP flow_action
  ...
parents 28da7be5 efc365e7
...@@ -102,6 +102,8 @@ Koushik <raghavendra.koushik@neterion.com>
 Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski@samsung.com>
 Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski.k@gmail.com>
 Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
+Leon Romanovsky <leon@kernel.org> <leon@leon.nu>
+Leon Romanovsky <leon@kernel.org> <leonro@mellanox.com>
 Leonid I Ananiev <leonid.i.ananiev@intel.com>
 Linas Vepstas <linas@austin.ibm.com>
 Linus Lüssing <linus.luessing@c0d3.blue> <linus.luessing@web.de>
......
...@@ -7214,6 +7214,7 @@ M:	Shiraz Saleem <shiraz.saleem@intel.com>
 L:	linux-rdma@vger.kernel.org
 S:	Supported
 F:	drivers/infiniband/hw/i40iw/
+F:	include/uapi/rdma/i40iw-abi.h
 
 INTEL SHA MULTIBUFFER DRIVER
 M:	Megha Dey <megha.dey@linux.intel.com>
......
...@@ -35,14 +35,13 @@ config INFINIBAND_USER_ACCESS
 	  libibverbs, libibcm and a hardware driver library from
 	  rdma-core <https://github.com/linux-rdma/rdma-core>.
 
-config INFINIBAND_EXP_USER_ACCESS
-	bool "Enable the full uverbs ioctl interface (EXPERIMENTAL)"
+config INFINIBAND_EXP_LEGACY_VERBS_NEW_UAPI
+	bool "Allow experimental legacy verbs in new ioctl uAPI (EXPERIMENTAL)"
 	depends on INFINIBAND_USER_ACCESS
 	---help---
-	  IOCTL based ABI support for Infiniband. This allows userspace
-	  to invoke the experimental IOCTL based ABI.
-	  These commands are parsed via per-device parsing tree and
-	  enables per-device features.
+	  IOCTL based uAPI support for Infiniband is enabled by default for
+	  new verbs only. This allows userspace to invoke the IOCTL based uAPI
+	  for current legacy verbs too.
 
 config INFINIBAND_USER_MEM
 	bool
......
...@@ -34,4 +34,6 @@ ib_ucm-y :=			ucm.o
 
 ib_uverbs-y :=			uverbs_main.o uverbs_cmd.o uverbs_marshall.o \
 				rdma_core.o uverbs_std_types.o uverbs_ioctl.o \
-				uverbs_ioctl_merge.o
+				uverbs_ioctl_merge.o uverbs_std_types_cq.o \
+				uverbs_std_types_flow_action.o uverbs_std_types_dm.o \
+				uverbs_std_types_mr.o
...@@ -329,7 +329,8 @@ static void queue_req(struct addr_req *req)
 	mutex_unlock(&lock);
 }
 
-static int ib_nl_fetch_ha(struct dst_entry *dst, struct rdma_dev_addr *dev_addr,
+static int ib_nl_fetch_ha(const struct dst_entry *dst,
+			  struct rdma_dev_addr *dev_addr,
 			  const void *daddr, u32 seq, u16 family)
 {
 	if (rdma_nl_chk_listeners(RDMA_NL_GROUP_LS))
...@@ -340,7 +341,8 @@ static int ib_nl_fetch_ha(struct dst_entry *dst, struct rdma_dev_addr *dev_addr,
 	return ib_nl_ip_send_msg(dev_addr, daddr, seq, family);
 }
 
-static int dst_fetch_ha(struct dst_entry *dst, struct rdma_dev_addr *dev_addr,
+static int dst_fetch_ha(const struct dst_entry *dst,
+			struct rdma_dev_addr *dev_addr,
 			const void *daddr)
 {
 	struct neighbour *n;
...@@ -364,7 +366,7 @@ static int dst_fetch_ha(struct dst_entry *dst, struct rdma_dev_addr *dev_addr,
 	return ret;
 }
 
-static bool has_gateway(struct dst_entry *dst, sa_family_t family)
+static bool has_gateway(const struct dst_entry *dst, sa_family_t family)
 {
 	struct rtable *rt;
 	struct rt6_info *rt6;
...@@ -378,7 +380,7 @@ static bool has_gateway(struct dst_entry *dst, sa_family_t family)
 	return rt6->rt6i_flags & RTF_GATEWAY;
 }
 
-static int fetch_ha(struct dst_entry *dst, struct rdma_dev_addr *dev_addr,
+static int fetch_ha(const struct dst_entry *dst, struct rdma_dev_addr *dev_addr,
 		    const struct sockaddr *dst_in, u32 seq)
 {
 	const struct sockaddr_in *dst_in4 =
...@@ -482,7 +484,7 @@ static int addr6_resolve(struct sockaddr_in6 *src_in,
 }
 #endif
 
-static int addr_resolve_neigh(struct dst_entry *dst,
+static int addr_resolve_neigh(const struct dst_entry *dst,
 			      const struct sockaddr *dst_in,
 			      struct rdma_dev_addr *addr,
 			      u32 seq)
...@@ -736,7 +738,6 @@ int rdma_resolve_ip_route(struct sockaddr *src_addr,
 
 	return addr_resolve(src_in, dst_addr, addr, false, 0);
 }
-EXPORT_SYMBOL(rdma_resolve_ip_route);
 
 void rdma_addr_cancel(struct rdma_dev_addr *addr)
 {
......
...@@ -462,13 +462,31 @@ static int cm_init_av_for_response(struct cm_port *port, struct ib_wc *wc, ...@@ -462,13 +462,31 @@ static int cm_init_av_for_response(struct cm_port *port, struct ib_wc *wc,
grh, &av->ah_attr); grh, &av->ah_attr);
} }
static int cm_init_av_by_path(struct sa_path_rec *path, struct cm_av *av, static int add_cm_id_to_port_list(struct cm_id_private *cm_id_priv,
struct cm_id_private *cm_id_priv) struct cm_av *av,
struct cm_port *port)
{
unsigned long flags;
int ret = 0;
spin_lock_irqsave(&cm.lock, flags);
if (&cm_id_priv->av == av)
list_add_tail(&cm_id_priv->prim_list, &port->cm_priv_prim_list);
else if (&cm_id_priv->alt_av == av)
list_add_tail(&cm_id_priv->altr_list, &port->cm_priv_altr_list);
else
ret = -EINVAL;
spin_unlock_irqrestore(&cm.lock, flags);
return ret;
}
static struct cm_port *get_cm_port_from_path(struct sa_path_rec *path)
{ {
struct cm_device *cm_dev; struct cm_device *cm_dev;
struct cm_port *port = NULL; struct cm_port *port = NULL;
unsigned long flags; unsigned long flags;
int ret;
u8 p; u8 p;
struct net_device *ndev = ib_get_ndev_from_path(path); struct net_device *ndev = ib_get_ndev_from_path(path);
...@@ -477,7 +495,7 @@ static int cm_init_av_by_path(struct sa_path_rec *path, struct cm_av *av, ...@@ -477,7 +495,7 @@ static int cm_init_av_by_path(struct sa_path_rec *path, struct cm_av *av,
if (!ib_find_cached_gid(cm_dev->ib_device, &path->sgid, if (!ib_find_cached_gid(cm_dev->ib_device, &path->sgid,
sa_conv_pathrec_to_gid_type(path), sa_conv_pathrec_to_gid_type(path),
ndev, &p, NULL)) { ndev, &p, NULL)) {
port = cm_dev->port[p-1]; port = cm_dev->port[p - 1];
break; break;
} }
} }
...@@ -485,9 +503,20 @@ static int cm_init_av_by_path(struct sa_path_rec *path, struct cm_av *av, ...@@ -485,9 +503,20 @@ static int cm_init_av_by_path(struct sa_path_rec *path, struct cm_av *av,
if (ndev) if (ndev)
dev_put(ndev); dev_put(ndev);
return port;
}
static int cm_init_av_by_path(struct sa_path_rec *path, struct cm_av *av,
struct cm_id_private *cm_id_priv)
{
struct cm_device *cm_dev;
struct cm_port *port;
int ret;
port = get_cm_port_from_path(path);
if (!port) if (!port)
return -EINVAL; return -EINVAL;
cm_dev = port->cm_dev;
ret = ib_find_cached_pkey(cm_dev->ib_device, port->port_num, ret = ib_find_cached_pkey(cm_dev->ib_device, port->port_num,
be16_to_cpu(path->pkey), &av->pkey_index); be16_to_cpu(path->pkey), &av->pkey_index);
...@@ -502,16 +531,7 @@ static int cm_init_av_by_path(struct sa_path_rec *path, struct cm_av *av, ...@@ -502,16 +531,7 @@ static int cm_init_av_by_path(struct sa_path_rec *path, struct cm_av *av,
av->timeout = path->packet_life_time + 1; av->timeout = path->packet_life_time + 1;
spin_lock_irqsave(&cm.lock, flags); ret = add_cm_id_to_port_list(cm_id_priv, av, port);
if (&cm_id_priv->av == av)
list_add_tail(&cm_id_priv->prim_list, &port->cm_priv_prim_list);
else if (&cm_id_priv->alt_av == av)
list_add_tail(&cm_id_priv->altr_list, &port->cm_priv_altr_list);
else
ret = -EINVAL;
spin_unlock_irqrestore(&cm.lock, flags);
return ret; return ret;
} }
...@@ -1523,6 +1543,8 @@ static void cm_format_paths_from_req(struct cm_req_msg *req_msg, ...@@ -1523,6 +1543,8 @@ static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
cm_req_get_primary_local_ack_timeout(req_msg); cm_req_get_primary_local_ack_timeout(req_msg);
primary_path->packet_life_time -= (primary_path->packet_life_time > 0); primary_path->packet_life_time -= (primary_path->packet_life_time > 0);
primary_path->service_id = req_msg->service_id; primary_path->service_id = req_msg->service_id;
if (sa_path_is_roce(primary_path))
primary_path->roce.route_resolved = false;
if (cm_req_has_alt_path(req_msg)) { if (cm_req_has_alt_path(req_msg)) {
alt_path->dgid = req_msg->alt_local_gid; alt_path->dgid = req_msg->alt_local_gid;
...@@ -1542,6 +1564,9 @@ static void cm_format_paths_from_req(struct cm_req_msg *req_msg, ...@@ -1542,6 +1564,9 @@ static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
cm_req_get_alt_local_ack_timeout(req_msg); cm_req_get_alt_local_ack_timeout(req_msg);
alt_path->packet_life_time -= (alt_path->packet_life_time > 0); alt_path->packet_life_time -= (alt_path->packet_life_time > 0);
alt_path->service_id = req_msg->service_id; alt_path->service_id = req_msg->service_id;
if (sa_path_is_roce(alt_path))
alt_path->roce.route_resolved = false;
} }
cm_format_path_lid_from_req(req_msg, primary_path, alt_path); cm_format_path_lid_from_req(req_msg, primary_path, alt_path);
} }
...@@ -3150,6 +3175,13 @@ static int cm_lap_handler(struct cm_work *work) ...@@ -3150,6 +3175,13 @@ static int cm_lap_handler(struct cm_work *work)
struct ib_mad_send_buf *msg = NULL; struct ib_mad_send_buf *msg = NULL;
int ret; int ret;
/* Currently Alternate path messages are not supported for
* RoCE link layer.
*/
if (rdma_protocol_roce(work->port->cm_dev->ib_device,
work->port->port_num))
return -EINVAL;
/* todo: verify LAP request and send reject APR if invalid. */ /* todo: verify LAP request and send reject APR if invalid. */
lap_msg = (struct cm_lap_msg *)work->mad_recv_wc->recv_buf.mad; lap_msg = (struct cm_lap_msg *)work->mad_recv_wc->recv_buf.mad;
cm_id_priv = cm_acquire_id(lap_msg->remote_comm_id, cm_id_priv = cm_acquire_id(lap_msg->remote_comm_id,
...@@ -3299,6 +3331,13 @@ static int cm_apr_handler(struct cm_work *work) ...@@ -3299,6 +3331,13 @@ static int cm_apr_handler(struct cm_work *work)
struct cm_apr_msg *apr_msg; struct cm_apr_msg *apr_msg;
int ret; int ret;
/* Currently Alternate path messages are not supported for
* RoCE link layer.
*/
if (rdma_protocol_roce(work->port->cm_dev->ib_device,
work->port->port_num))
return -EINVAL;
apr_msg = (struct cm_apr_msg *)work->mad_recv_wc->recv_buf.mad; apr_msg = (struct cm_apr_msg *)work->mad_recv_wc->recv_buf.mad;
cm_id_priv = cm_acquire_id(apr_msg->remote_comm_id, cm_id_priv = cm_acquire_id(apr_msg->remote_comm_id,
apr_msg->local_comm_id); apr_msg->local_comm_id);
......
/*
* Copyright (c) 2005 Voltaire Inc. All rights reserved.
* Copyright (c) 2002-2005, Network Appliance, Inc. All rights reserved.
* Copyright (c) 1999-2005, Mellanox Technologies, Inc. All rights reserved.
* Copyright (c) 2005-2006 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef _CMA_PRIV_H
#define _CMA_PRIV_H
enum rdma_cm_state {
RDMA_CM_IDLE,
RDMA_CM_ADDR_QUERY,
RDMA_CM_ADDR_RESOLVED,
RDMA_CM_ROUTE_QUERY,
RDMA_CM_ROUTE_RESOLVED,
RDMA_CM_CONNECT,
RDMA_CM_DISCONNECT,
RDMA_CM_ADDR_BOUND,
RDMA_CM_LISTEN,
RDMA_CM_DEVICE_REMOVAL,
RDMA_CM_DESTROYING
};
struct rdma_id_private {
struct rdma_cm_id id;
struct rdma_bind_list *bind_list;
struct hlist_node node;
struct list_head list; /* listen_any_list or cma_device.list */
struct list_head listen_list; /* per device listens */
struct cma_device *cma_dev;
struct list_head mc_list;
int internal_id;
enum rdma_cm_state state;
spinlock_t lock;
struct mutex qp_mutex;
struct completion comp;
atomic_t refcount;
struct mutex handler_mutex;
int backlog;
int timeout_ms;
struct ib_sa_query *query;
int query_id;
union {
struct ib_cm_id *ib;
struct iw_cm_id *iw;
} cm_id;
u32 seq_num;
u32 qkey;
u32 qp_num;
u32 options;
u8 srq;
u8 tos;
bool tos_set;
u8 reuseaddr;
u8 afonly;
enum ib_gid_type gid_type;
/*
* Internal to RDMA/core, don't use in the drivers
*/
struct rdma_restrack_entry res;
};
#endif /* _CMA_PRIV_H */
...@@ -333,4 +333,15 @@ static inline struct ib_qp *_ib_create_qp(struct ib_device *dev,
 	return qp;
 }
 
+struct rdma_dev_addr;
+
+int rdma_resolve_ip_route(struct sockaddr *src_addr,
+			  const struct sockaddr *dst_addr,
+			  struct rdma_dev_addr *addr);
+
+int rdma_addr_find_l2_eth_by_grh(const union ib_gid *sgid,
+				 const union ib_gid *dgid,
+				 u8 *dmac, const struct net_device *ndev,
+				 int *hoplimit);
+
 #endif /* _CORE_PRIV_H */
...@@ -103,7 +103,6 @@ static int ib_device_check_mandatory(struct ib_device *device) ...@@ -103,7 +103,6 @@ static int ib_device_check_mandatory(struct ib_device *device)
IB_MANDATORY_FUNC(query_device), IB_MANDATORY_FUNC(query_device),
IB_MANDATORY_FUNC(query_port), IB_MANDATORY_FUNC(query_port),
IB_MANDATORY_FUNC(query_pkey), IB_MANDATORY_FUNC(query_pkey),
IB_MANDATORY_FUNC(query_gid),
IB_MANDATORY_FUNC(alloc_pd), IB_MANDATORY_FUNC(alloc_pd),
IB_MANDATORY_FUNC(dealloc_pd), IB_MANDATORY_FUNC(dealloc_pd),
IB_MANDATORY_FUNC(create_ah), IB_MANDATORY_FUNC(create_ah),
...@@ -853,7 +852,7 @@ int ib_query_port(struct ib_device *device, ...@@ -853,7 +852,7 @@ int ib_query_port(struct ib_device *device,
if (rdma_port_get_link_layer(device, port_num) != IB_LINK_LAYER_INFINIBAND) if (rdma_port_get_link_layer(device, port_num) != IB_LINK_LAYER_INFINIBAND)
return 0; return 0;
err = ib_query_gid(device, port_num, 0, &gid, NULL); err = device->query_gid(device, port_num, 0, &gid);
if (err) if (err)
return err; return err;
...@@ -871,19 +870,13 @@ EXPORT_SYMBOL(ib_query_port); ...@@ -871,19 +870,13 @@ EXPORT_SYMBOL(ib_query_port);
* @attr: Returned GID attributes related to this GID index (only in RoCE). * @attr: Returned GID attributes related to this GID index (only in RoCE).
* NULL means ignore. * NULL means ignore.
* *
* ib_query_gid() fetches the specified GID table entry. * ib_query_gid() fetches the specified GID table entry from the cache.
*/ */
int ib_query_gid(struct ib_device *device, int ib_query_gid(struct ib_device *device,
u8 port_num, int index, union ib_gid *gid, u8 port_num, int index, union ib_gid *gid,
struct ib_gid_attr *attr) struct ib_gid_attr *attr)
{ {
if (rdma_cap_roce_gid_table(device, port_num)) return ib_get_cached_gid(device, port_num, index, gid, attr);
return ib_get_cached_gid(device, port_num, index, gid, attr);
if (attr)
return -EINVAL;
return device->query_gid(device, port_num, index, gid);
} }
EXPORT_SYMBOL(ib_query_gid); EXPORT_SYMBOL(ib_query_gid);
...@@ -1049,19 +1042,18 @@ EXPORT_SYMBOL(ib_modify_port); ...@@ -1049,19 +1042,18 @@ EXPORT_SYMBOL(ib_modify_port);
* a specified GID value occurs. Its searches only for IB link layer. * a specified GID value occurs. Its searches only for IB link layer.
* @device: The device to query. * @device: The device to query.
* @gid: The GID value to search for. * @gid: The GID value to search for.
* @ndev: The ndev related to the GID to search for.
* @port_num: The port number of the device where the GID value was found. * @port_num: The port number of the device where the GID value was found.
* @index: The index into the GID table where the GID was found. This * @index: The index into the GID table where the GID was found. This
* parameter may be NULL. * parameter may be NULL.
*/ */
int ib_find_gid(struct ib_device *device, union ib_gid *gid, int ib_find_gid(struct ib_device *device, union ib_gid *gid,
struct net_device *ndev, u8 *port_num, u16 *index) u8 *port_num, u16 *index)
{ {
union ib_gid tmp_gid; union ib_gid tmp_gid;
int ret, port, i; int ret, port, i;
for (port = rdma_start_port(device); port <= rdma_end_port(device); ++port) { for (port = rdma_start_port(device); port <= rdma_end_port(device); ++port) {
if (rdma_cap_roce_gid_table(device, port)) if (!rdma_protocol_ib(device, port))
continue; continue;
for (i = 0; i < device->port_immutable[port].gid_tbl_len; ++i) { for (i = 0; i < device->port_immutable[port].gid_tbl_len; ++i) {
......
...@@ -439,10 +439,9 @@ struct sk_buff *iwpm_create_nlmsg(u32 nl_op, struct nlmsghdr **nlh,
 	struct sk_buff *skb = NULL;
 
 	skb = dev_alloc_skb(IWPM_MSG_SIZE);
-	if (!skb) {
-		pr_err("%s Unable to allocate skb\n", __func__);
+	if (!skb)
 		goto create_nlmsg_exit;
-	}
+
 	if (!(ibnl_put_msg(skb, nlh, 0, 0, nl_client, nl_op,
 			   NLM_F_REQUEST))) {
 		pr_warn("%s: Unable to put the nlmsg header\n", __func__);
......
...@@ -724,21 +724,19 @@ int ib_init_ah_from_mcmember(struct ib_device *device, u8 port_num,
 {
 	int ret;
 	u16 gid_index;
-	u8 p;
-
-	if (rdma_protocol_roce(device, port_num)) {
-		ret = ib_find_cached_gid_by_port(device, &rec->port_gid,
-						 gid_type, port_num,
-						 ndev,
-						 &gid_index);
-	} else if (rdma_protocol_ib(device, port_num)) {
-		ret = ib_find_cached_gid(device, &rec->port_gid,
-					 IB_GID_TYPE_IB, NULL, &p,
-					 &gid_index);
-	} else {
-		ret = -EINVAL;
-	}
 
+	/* GID table is not based on the netdevice for IB link layer,
+	 * so ignore ndev during search.
+	 */
+	if (rdma_protocol_ib(device, port_num))
+		ndev = NULL;
+	else if (!rdma_protocol_roce(device, port_num))
+		return -EINVAL;
+
+	ret = ib_find_cached_gid_by_port(device, &rec->port_gid,
+					 gid_type, port_num,
+					 ndev,
+					 &gid_index);
 	if (ret)
 		return ret;
......
...@@ -350,13 +350,6 @@ struct ib_uobject *rdma_alloc_begin_uobject(const struct uverbs_obj_type *type, ...@@ -350,13 +350,6 @@ struct ib_uobject *rdma_alloc_begin_uobject(const struct uverbs_obj_type *type,
return type->type_class->alloc_begin(type, ucontext); return type->type_class->alloc_begin(type, ucontext);
} }
static void uverbs_uobject_add(struct ib_uobject *uobject)
{
mutex_lock(&uobject->context->uobjects_lock);
list_add(&uobject->list, &uobject->context->uobjects);
mutex_unlock(&uobject->context->uobjects_lock);
}
static int __must_check remove_commit_idr_uobject(struct ib_uobject *uobj, static int __must_check remove_commit_idr_uobject(struct ib_uobject *uobj,
enum rdma_remove_reason why) enum rdma_remove_reason why)
{ {
...@@ -502,7 +495,6 @@ int rdma_explicit_destroy(struct ib_uobject *uobject) ...@@ -502,7 +495,6 @@ int rdma_explicit_destroy(struct ib_uobject *uobject)
static void alloc_commit_idr_uobject(struct ib_uobject *uobj) static void alloc_commit_idr_uobject(struct ib_uobject *uobj)
{ {
uverbs_uobject_add(uobj);
spin_lock(&uobj->context->ufile->idr_lock); spin_lock(&uobj->context->ufile->idr_lock);
/* /*
* We already allocated this IDR with a NULL object, so * We already allocated this IDR with a NULL object, so
...@@ -518,7 +510,6 @@ static void alloc_commit_fd_uobject(struct ib_uobject *uobj) ...@@ -518,7 +510,6 @@ static void alloc_commit_fd_uobject(struct ib_uobject *uobj)
struct ib_uobject_file *uobj_file = struct ib_uobject_file *uobj_file =
container_of(uobj, struct ib_uobject_file, uobj); container_of(uobj, struct ib_uobject_file, uobj);
uverbs_uobject_add(&uobj_file->uobj);
fd_install(uobj_file->uobj.id, uobj->object); fd_install(uobj_file->uobj.id, uobj->object);
/* This shouldn't be used anymore. Use the file object instead */ /* This shouldn't be used anymore. Use the file object instead */
uobj_file->uobj.id = 0; uobj_file->uobj.id = 0;
...@@ -545,6 +536,10 @@ int rdma_alloc_commit_uobject(struct ib_uobject *uobj) ...@@ -545,6 +536,10 @@ int rdma_alloc_commit_uobject(struct ib_uobject *uobj)
assert_uverbs_usecnt(uobj, true); assert_uverbs_usecnt(uobj, true);
atomic_set(&uobj->usecnt, 0); atomic_set(&uobj->usecnt, 0);
mutex_lock(&uobj->context->uobjects_lock);
list_add(&uobj->list, &uobj->context->uobjects);
mutex_unlock(&uobj->context->uobjects_lock);
uobj->type->type_class->alloc_commit(uobj); uobj->type->type_class->alloc_commit(uobj);
up_read(&uobj->context->cleanup_rwsem); up_read(&uobj->context->cleanup_rwsem);
......
...@@ -3,20 +3,66 @@ ...@@ -3,20 +3,66 @@
* Copyright (c) 2017-2018 Mellanox Technologies. All rights reserved. * Copyright (c) 2017-2018 Mellanox Technologies. All rights reserved.
*/ */
#include <rdma/rdma_cm.h>
#include <rdma/ib_verbs.h> #include <rdma/ib_verbs.h>
#include <rdma/restrack.h> #include <rdma/restrack.h>
#include <linux/mutex.h> #include <linux/mutex.h>
#include <linux/sched/task.h> #include <linux/sched/task.h>
#include <linux/pid_namespace.h> #include <linux/pid_namespace.h>
#include "cma_priv.h"
void rdma_restrack_init(struct rdma_restrack_root *res) void rdma_restrack_init(struct rdma_restrack_root *res)
{ {
init_rwsem(&res->rwsem); init_rwsem(&res->rwsem);
} }
static const char *type2str(enum rdma_restrack_type type)
{
static const char * const names[RDMA_RESTRACK_MAX] = {
[RDMA_RESTRACK_PD] = "PD",
[RDMA_RESTRACK_CQ] = "CQ",
[RDMA_RESTRACK_QP] = "QP",
[RDMA_RESTRACK_CM_ID] = "CM_ID",
[RDMA_RESTRACK_MR] = "MR",
};
return names[type];
};
void rdma_restrack_clean(struct rdma_restrack_root *res) void rdma_restrack_clean(struct rdma_restrack_root *res)
{ {
WARN_ON_ONCE(!hash_empty(res->hash)); struct rdma_restrack_entry *e;
char buf[TASK_COMM_LEN];
struct ib_device *dev;
const char *owner;
int bkt;
if (hash_empty(res->hash))
return;
dev = container_of(res, struct ib_device, res);
pr_err("restrack: %s", CUT_HERE);
pr_err("restrack: BUG: RESTRACK detected leak of resources on %s\n",
dev->name);
hash_for_each(res->hash, bkt, e, node) {
if (rdma_is_kernel_res(e)) {
owner = e->kern_name;
} else {
/*
* There is no need to call get_task_struct here,
* because we can be here only if there are more
* get_task_struct() call than put_task_struct().
*/
get_task_comm(buf, e->task);
owner = buf;
}
pr_err("restrack: %s %s object allocated by %s is not freed\n",
rdma_is_kernel_res(e) ? "Kernel" : "User",
type2str(e->type), owner);
}
pr_err("restrack: %s", CUT_HERE);
} }
int rdma_restrack_count(struct rdma_restrack_root *res, int rdma_restrack_count(struct rdma_restrack_root *res,
...@@ -40,51 +86,48 @@ EXPORT_SYMBOL(rdma_restrack_count); ...@@ -40,51 +86,48 @@ EXPORT_SYMBOL(rdma_restrack_count);
static void set_kern_name(struct rdma_restrack_entry *res) static void set_kern_name(struct rdma_restrack_entry *res)
{ {
enum rdma_restrack_type type = res->type; struct ib_pd *pd;
struct ib_qp *qp;
if (type != RDMA_RESTRACK_QP)
/* PD and CQ types already have this name embedded in */
return;
qp = container_of(res, struct ib_qp, res); switch (res->type) {
if (!qp->pd) { case RDMA_RESTRACK_QP:
WARN_ONCE(true, "XRC QPs are not supported\n"); pd = container_of(res, struct ib_qp, res)->pd;
/* Survive, despite the programmer's error */ if (!pd) {
res->kern_name = " "; WARN_ONCE(true, "XRC QPs are not supported\n");
return; /* Survive, despite the programmer's error */
res->kern_name = " ";
}
break;
case RDMA_RESTRACK_MR:
pd = container_of(res, struct ib_mr, res)->pd;
break;
default:
/* Other types set kern_name directly */
pd = NULL;
break;
} }
res->kern_name = qp->pd->res.kern_name; if (pd)
res->kern_name = pd->res.kern_name;
} }
static struct ib_device *res_to_dev(struct rdma_restrack_entry *res) static struct ib_device *res_to_dev(struct rdma_restrack_entry *res)
{ {
enum rdma_restrack_type type = res->type; switch (res->type) {
struct ib_device *dev;
struct ib_pd *pd;
struct ib_cq *cq;
struct ib_qp *qp;
switch (type) {
case RDMA_RESTRACK_PD: case RDMA_RESTRACK_PD:
pd = container_of(res, struct ib_pd, res); return container_of(res, struct ib_pd, res)->device;
dev = pd->device;
break;
case RDMA_RESTRACK_CQ: case RDMA_RESTRACK_CQ:
cq = container_of(res, struct ib_cq, res); return container_of(res, struct ib_cq, res)->device;
dev = cq->device;
break;
case RDMA_RESTRACK_QP: case RDMA_RESTRACK_QP:
qp = container_of(res, struct ib_qp, res); return container_of(res, struct ib_qp, res)->device;
dev = qp->device; case RDMA_RESTRACK_CM_ID:
break; return container_of(res, struct rdma_id_private,
res)->id.device;
case RDMA_RESTRACK_MR:
return container_of(res, struct ib_mr, res)->device;
default: default:
WARN_ONCE(true, "Wrong resource tracking type %u\n", type); WARN_ONCE(true, "Wrong resource tracking type %u\n", res->type);
return NULL; return NULL;
} }
return dev;
} }
static bool res_is_user(struct rdma_restrack_entry *res) static bool res_is_user(struct rdma_restrack_entry *res)
...@@ -96,6 +139,10 @@ static bool res_is_user(struct rdma_restrack_entry *res) ...@@ -96,6 +139,10 @@ static bool res_is_user(struct rdma_restrack_entry *res)
return container_of(res, struct ib_cq, res)->uobject; return container_of(res, struct ib_cq, res)->uobject;
case RDMA_RESTRACK_QP: case RDMA_RESTRACK_QP:
return container_of(res, struct ib_qp, res)->uobject; return container_of(res, struct ib_qp, res)->uobject;
case RDMA_RESTRACK_CM_ID:
return !res->kern_name;
case RDMA_RESTRACK_MR:
return container_of(res, struct ib_mr, res)->pd->uobject;
default: default:
WARN_ONCE(true, "Wrong resource tracking type %u\n", res->type); WARN_ONCE(true, "Wrong resource tracking type %u\n", res->type);
return false; return false;
...@@ -109,13 +156,15 @@ void rdma_restrack_add(struct rdma_restrack_entry *res) ...@@ -109,13 +156,15 @@ void rdma_restrack_add(struct rdma_restrack_entry *res)
if (!dev) if (!dev)
return; return;
if (res->type != RDMA_RESTRACK_CM_ID || !res_is_user(res))
res->task = NULL;
if (res_is_user(res)) { if (res_is_user(res)) {
get_task_struct(current); if (!res->task)
res->task = current; rdma_restrack_set_task(res, current);
res->kern_name = NULL; res->kern_name = NULL;
} else { } else {
set_kern_name(res); set_kern_name(res);
res->task = NULL;
} }
kref_init(&res->kref); kref_init(&res->kref);
......
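The restrack changes above lean entirely on container_of() to get from the embedded struct rdma_restrack_entry back to the owning object (ib_pd, ib_cq, ib_qp, ib_mr or rdma_id_private), as in the refactored res_to_dev() and set_kern_name(). Below is a minimal userspace sketch of that pattern; the structure and function names (restrack_entry, toy_qp, qpn_of) are made up for illustration and are not the kernel types.

#include <stddef.h>
#include <stdio.h>

/* Stand-in for the kernel's container_of(): recover a pointer to the
 * enclosing structure from a pointer to one of its members. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct restrack_entry { int type; };

struct toy_qp {
	int qpn;
	struct restrack_entry res;	/* embedded tracking entry */
};

/* Like res_to_dev()/set_kern_name(): given only the embedded entry,
 * find the owning object via pointer arithmetic. */
static int qpn_of(struct restrack_entry *res)
{
	struct toy_qp *qp = container_of(res, struct toy_qp, res);

	return qp->qpn;
}

int main(void)
{
	struct toy_qp qp = { .qpn = 42 };

	printf("qpn=%d\n", qpn_of(&qp.res));	/* prints qpn=42 */
	return 0;
}

Because the entry is embedded rather than pointed to, no extra allocation or back-pointer is needed; the offset arithmetic recovers the parent object for free, which is why adding CM_ID and MR tracking only required new container_of() cases.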
...@@ -1227,118 +1227,130 @@ static u8 get_src_path_mask(struct ib_device *device, u8 port_num) ...@@ -1227,118 +1227,130 @@ static u8 get_src_path_mask(struct ib_device *device, u8 port_num)
return src_path_mask; return src_path_mask;
} }
int ib_init_ah_attr_from_path(struct ib_device *device, u8 port_num, static int
struct sa_path_rec *rec, roce_resolve_route_from_path(struct ib_device *device, u8 port_num,
struct rdma_ah_attr *ah_attr) struct sa_path_rec *rec)
{ {
struct net_device *resolved_dev;
struct net_device *ndev;
struct net_device *idev;
struct rdma_dev_addr dev_addr = {
.bound_dev_if = ((sa_path_get_ifindex(rec) >= 0) ?
sa_path_get_ifindex(rec) : 0),
.net = sa_path_get_ndev(rec) ?
sa_path_get_ndev(rec) :
&init_net
};
union {
struct sockaddr _sockaddr;
struct sockaddr_in _sockaddr_in;
struct sockaddr_in6 _sockaddr_in6;
} sgid_addr, dgid_addr;
int ret; int ret;
u16 gid_index;
int use_roce;
struct net_device *ndev = NULL;
memset(ah_attr, 0, sizeof *ah_attr); if (rec->roce.route_resolved)
ah_attr->type = rdma_ah_find_type(device, port_num); return 0;
rdma_ah_set_dlid(ah_attr, be32_to_cpu(sa_path_get_dlid(rec))); if (!device->get_netdev)
return -EOPNOTSUPP;
if ((ah_attr->type == RDMA_AH_ATTR_TYPE_OPA) && rdma_gid2ip(&sgid_addr._sockaddr, &rec->sgid);
(rdma_ah_get_dlid(ah_attr) == be16_to_cpu(IB_LID_PERMISSIVE))) rdma_gid2ip(&dgid_addr._sockaddr, &rec->dgid);
rdma_ah_set_make_grd(ah_attr, true);
rdma_ah_set_sl(ah_attr, rec->sl); /* validate the route */
rdma_ah_set_path_bits(ah_attr, be32_to_cpu(sa_path_get_slid(rec)) & ret = rdma_resolve_ip_route(&sgid_addr._sockaddr,
get_src_path_mask(device, port_num)); &dgid_addr._sockaddr, &dev_addr);
rdma_ah_set_port_num(ah_attr, port_num); if (ret)
rdma_ah_set_static_rate(ah_attr, rec->rate); return ret;
use_roce = rdma_cap_eth_ah(device, port_num);
if (use_roce) {
struct net_device *idev;
struct net_device *resolved_dev;
struct rdma_dev_addr dev_addr = {
.bound_dev_if = ((sa_path_get_ifindex(rec) >= 0) ?
sa_path_get_ifindex(rec) : 0),
.net = sa_path_get_ndev(rec) ?
sa_path_get_ndev(rec) :
&init_net
};
union {
struct sockaddr _sockaddr;
struct sockaddr_in _sockaddr_in;
struct sockaddr_in6 _sockaddr_in6;
} sgid_addr, dgid_addr;
if (!device->get_netdev)
return -EOPNOTSUPP;
rdma_gid2ip(&sgid_addr._sockaddr, &rec->sgid);
rdma_gid2ip(&dgid_addr._sockaddr, &rec->dgid);
/* validate the route */
ret = rdma_resolve_ip_route(&sgid_addr._sockaddr,
&dgid_addr._sockaddr, &dev_addr);
if (ret)
return ret;
if ((dev_addr.network == RDMA_NETWORK_IPV4 || if ((dev_addr.network == RDMA_NETWORK_IPV4 ||
dev_addr.network == RDMA_NETWORK_IPV6) && dev_addr.network == RDMA_NETWORK_IPV6) &&
rec->rec_type != SA_PATH_REC_TYPE_ROCE_V2) rec->rec_type != SA_PATH_REC_TYPE_ROCE_V2)
return -EINVAL; return -EINVAL;
idev = device->get_netdev(device, port_num); idev = device->get_netdev(device, port_num);
if (!idev) if (!idev)
return -ENODEV; return -ENODEV;
resolved_dev = dev_get_by_index(dev_addr.net, resolved_dev = dev_get_by_index(dev_addr.net,
dev_addr.bound_dev_if); dev_addr.bound_dev_if);
if (!resolved_dev) { if (!resolved_dev) {
dev_put(idev); ret = -ENODEV;
return -ENODEV; goto done;
}
ndev = ib_get_ndev_from_path(rec);
rcu_read_lock();
if ((ndev && ndev != resolved_dev) ||
(resolved_dev != idev &&
!rdma_is_upper_dev_rcu(idev, resolved_dev)))
ret = -EHOSTUNREACH;
rcu_read_unlock();
dev_put(idev);
dev_put(resolved_dev);
if (ret) {
if (ndev)
dev_put(ndev);
return ret;
}
} }
ndev = ib_get_ndev_from_path(rec);
rcu_read_lock();
if ((ndev && ndev != resolved_dev) ||
(resolved_dev != idev &&
!rdma_is_upper_dev_rcu(idev, resolved_dev)))
ret = -EHOSTUNREACH;
rcu_read_unlock();
dev_put(resolved_dev);
if (ndev)
dev_put(ndev);
done:
dev_put(idev);
if (!ret)
rec->roce.route_resolved = true;
return ret;
}
if (rec->hop_limit > 0 || use_roce) { static int init_ah_attr_grh_fields(struct ib_device *device, u8 port_num,
enum ib_gid_type type = sa_conv_pathrec_to_gid_type(rec); struct sa_path_rec *rec,
struct rdma_ah_attr *ah_attr)
{
enum ib_gid_type type = sa_conv_pathrec_to_gid_type(rec);
struct net_device *ndev;
u16 gid_index;
int ret;
ret = ib_find_cached_gid_by_port(device, &rec->sgid, type, ndev = ib_get_ndev_from_path(rec);
port_num, ndev, &gid_index); ret = ib_find_cached_gid_by_port(device, &rec->sgid, type,
if (ret) { port_num, ndev, &gid_index);
if (ndev) if (ndev)
dev_put(ndev); dev_put(ndev);
return ret; if (ret)
} return ret;
rdma_ah_set_grh(ah_attr, &rec->dgid, rdma_ah_set_grh(ah_attr, &rec->dgid,
be32_to_cpu(rec->flow_label), be32_to_cpu(rec->flow_label),
gid_index, rec->hop_limit, gid_index, rec->hop_limit,
rec->traffic_class); rec->traffic_class);
if (ndev) return 0;
dev_put(ndev); }
}
if (use_roce) { int ib_init_ah_attr_from_path(struct ib_device *device, u8 port_num,
u8 *dmac = sa_path_get_dmac(rec); struct sa_path_rec *rec,
struct rdma_ah_attr *ah_attr)
{
int ret = 0;
memset(ah_attr, 0, sizeof(*ah_attr));
ah_attr->type = rdma_ah_find_type(device, port_num);
rdma_ah_set_sl(ah_attr, rec->sl);
rdma_ah_set_port_num(ah_attr, port_num);
rdma_ah_set_static_rate(ah_attr, rec->rate);
if (!dmac) if (sa_path_is_roce(rec)) {
return -EINVAL; ret = roce_resolve_route_from_path(device, port_num, rec);
memcpy(ah_attr->roce.dmac, dmac, ETH_ALEN); if (ret)
return ret;
memcpy(ah_attr->roce.dmac, sa_path_get_dmac(rec), ETH_ALEN);
} else {
rdma_ah_set_dlid(ah_attr, be32_to_cpu(sa_path_get_dlid(rec)));
if (sa_path_is_opa(rec) &&
rdma_ah_get_dlid(ah_attr) == be16_to_cpu(IB_LID_PERMISSIVE))
rdma_ah_set_make_grd(ah_attr, true);
rdma_ah_set_path_bits(ah_attr,
be32_to_cpu(sa_path_get_slid(rec)) &
get_src_path_mask(device, port_num));
} }
return 0; if (rec->hop_limit > 0 || sa_path_is_roce(rec))
ret = init_ah_attr_grh_fields(device, port_num, rec, ah_attr);
return ret;
} }
EXPORT_SYMBOL(ib_init_ah_attr_from_path); EXPORT_SYMBOL(ib_init_ah_attr_from_path);
......
...@@ -273,6 +273,7 @@ static ssize_t rate_show(struct ib_port *p, struct port_attribute *unused, ...@@ -273,6 +273,7 @@ static ssize_t rate_show(struct ib_port *p, struct port_attribute *unused,
break; break;
case IB_SPEED_SDR: case IB_SPEED_SDR:
default: /* default to SDR for invalid rates */ default: /* default to SDR for invalid rates */
speed = " SDR";
rate = 25; rate = 25;
break; break;
} }
...@@ -388,14 +389,26 @@ static ssize_t show_port_gid(struct ib_port *p, struct port_attribute *attr, ...@@ -388,14 +389,26 @@ static ssize_t show_port_gid(struct ib_port *p, struct port_attribute *attr,
{ {
struct port_table_attribute *tab_attr = struct port_table_attribute *tab_attr =
container_of(attr, struct port_table_attribute, attr); container_of(attr, struct port_table_attribute, attr);
union ib_gid *pgid;
union ib_gid gid; union ib_gid gid;
ssize_t ret; ssize_t ret;
ret = ib_query_gid(p->ibdev, p->port_num, tab_attr->index, &gid, NULL); ret = ib_query_gid(p->ibdev, p->port_num, tab_attr->index, &gid, NULL);
if (ret)
return ret;
return sprintf(buf, "%pI6\n", gid.raw); /* If reading GID fails, it is likely due to GID entry being empty
* (invalid) or reserved GID in the table.
* User space expects to read GID table entries as long as it given
* index is within GID table size.
* Administrative/debugging tool fails to query rest of the GID entries
* if it hits error while querying a GID of the given index.
* To avoid user space throwing such error on fail to read gid, return
* zero GID as before. This maintains backward compatibility.
*/
if (ret)
pgid = &zgid;
else
pgid = &gid;
return sprintf(buf, "%pI6\n", pgid->raw);
} }
static ssize_t show_port_gid_attr_ndev(struct ib_port *p, static ssize_t show_port_gid_attr_ndev(struct ib_port *p,
...@@ -810,10 +823,15 @@ static ssize_t show_hw_stats(struct kobject *kobj, struct attribute *attr, ...@@ -810,10 +823,15 @@ static ssize_t show_hw_stats(struct kobject *kobj, struct attribute *attr,
dev = port->ibdev; dev = port->ibdev;
stats = port->hw_stats; stats = port->hw_stats;
} }
mutex_lock(&stats->lock);
ret = update_hw_stats(dev, stats, hsa->port_num, hsa->index); ret = update_hw_stats(dev, stats, hsa->port_num, hsa->index);
if (ret) if (ret)
return ret; goto unlock;
return print_hw_stat(stats, hsa->index, buf); ret = print_hw_stat(stats, hsa->index, buf);
unlock:
mutex_unlock(&stats->lock);
return ret;
} }
static ssize_t show_stats_lifespan(struct kobject *kobj, static ssize_t show_stats_lifespan(struct kobject *kobj,
...@@ -821,17 +839,25 @@ static ssize_t show_stats_lifespan(struct kobject *kobj, ...@@ -821,17 +839,25 @@ static ssize_t show_stats_lifespan(struct kobject *kobj,
char *buf) char *buf)
{ {
struct hw_stats_attribute *hsa; struct hw_stats_attribute *hsa;
struct rdma_hw_stats *stats;
int msecs; int msecs;
hsa = container_of(attr, struct hw_stats_attribute, attr); hsa = container_of(attr, struct hw_stats_attribute, attr);
if (!hsa->port_num) { if (!hsa->port_num) {
struct ib_device *dev = container_of((struct device *)kobj, struct ib_device *dev = container_of((struct device *)kobj,
struct ib_device, dev); struct ib_device, dev);
msecs = jiffies_to_msecs(dev->hw_stats->lifespan);
stats = dev->hw_stats;
} else { } else {
struct ib_port *p = container_of(kobj, struct ib_port, kobj); struct ib_port *p = container_of(kobj, struct ib_port, kobj);
msecs = jiffies_to_msecs(p->hw_stats->lifespan);
stats = p->hw_stats;
} }
mutex_lock(&stats->lock);
msecs = jiffies_to_msecs(stats->lifespan);
mutex_unlock(&stats->lock);
return sprintf(buf, "%d\n", msecs); return sprintf(buf, "%d\n", msecs);
} }
...@@ -840,6 +866,7 @@ static ssize_t set_stats_lifespan(struct kobject *kobj, ...@@ -840,6 +866,7 @@ static ssize_t set_stats_lifespan(struct kobject *kobj,
const char *buf, size_t count) const char *buf, size_t count)
{ {
struct hw_stats_attribute *hsa; struct hw_stats_attribute *hsa;
struct rdma_hw_stats *stats;
int msecs; int msecs;
int jiffies; int jiffies;
int ret; int ret;
...@@ -854,11 +881,18 @@ static ssize_t set_stats_lifespan(struct kobject *kobj, ...@@ -854,11 +881,18 @@ static ssize_t set_stats_lifespan(struct kobject *kobj,
if (!hsa->port_num) { if (!hsa->port_num) {
struct ib_device *dev = container_of((struct device *)kobj, struct ib_device *dev = container_of((struct device *)kobj,
struct ib_device, dev); struct ib_device, dev);
dev->hw_stats->lifespan = jiffies;
stats = dev->hw_stats;
} else { } else {
struct ib_port *p = container_of(kobj, struct ib_port, kobj); struct ib_port *p = container_of(kobj, struct ib_port, kobj);
p->hw_stats->lifespan = jiffies;
stats = p->hw_stats;
} }
mutex_lock(&stats->lock);
stats->lifespan = jiffies;
mutex_unlock(&stats->lock);
return count; return count;
} }
...@@ -951,6 +985,7 @@ static void setup_hw_stats(struct ib_device *device, struct ib_port *port, ...@@ -951,6 +985,7 @@ static void setup_hw_stats(struct ib_device *device, struct ib_port *port,
sysfs_attr_init(hsag->attrs[i]); sysfs_attr_init(hsag->attrs[i]);
} }
mutex_init(&stats->lock);
/* treat an error here as non-fatal */ /* treat an error here as non-fatal */
hsag->attrs[i] = alloc_hsa_lifespan("lifespan", port_num); hsag->attrs[i] = alloc_hsa_lifespan("lifespan", port_num);
if (hsag->attrs[i]) if (hsag->attrs[i])
......
...@@ -430,7 +430,7 @@ static ssize_t ib_ucm_event(struct ib_ucm_file *file, ...@@ -430,7 +430,7 @@ static ssize_t ib_ucm_event(struct ib_ucm_file *file,
uevent->resp.id = ctx->id; uevent->resp.id = ctx->id;
} }
if (copy_to_user((void __user *)(unsigned long)cmd.response, if (copy_to_user(u64_to_user_ptr(cmd.response),
&uevent->resp, sizeof(uevent->resp))) { &uevent->resp, sizeof(uevent->resp))) {
result = -EFAULT; result = -EFAULT;
goto done; goto done;
...@@ -441,7 +441,7 @@ static ssize_t ib_ucm_event(struct ib_ucm_file *file, ...@@ -441,7 +441,7 @@ static ssize_t ib_ucm_event(struct ib_ucm_file *file,
result = -ENOMEM; result = -ENOMEM;
goto done; goto done;
} }
if (copy_to_user((void __user *)(unsigned long)cmd.data, if (copy_to_user(u64_to_user_ptr(cmd.data),
uevent->data, uevent->data_len)) { uevent->data, uevent->data_len)) {
result = -EFAULT; result = -EFAULT;
goto done; goto done;
...@@ -453,7 +453,7 @@ static ssize_t ib_ucm_event(struct ib_ucm_file *file, ...@@ -453,7 +453,7 @@ static ssize_t ib_ucm_event(struct ib_ucm_file *file,
result = -ENOMEM; result = -ENOMEM;
goto done; goto done;
} }
if (copy_to_user((void __user *)(unsigned long)cmd.info, if (copy_to_user(u64_to_user_ptr(cmd.info),
uevent->info, uevent->info_len)) { uevent->info, uevent->info_len)) {
result = -EFAULT; result = -EFAULT;
goto done; goto done;
...@@ -502,7 +502,7 @@ static ssize_t ib_ucm_create_id(struct ib_ucm_file *file, ...@@ -502,7 +502,7 @@ static ssize_t ib_ucm_create_id(struct ib_ucm_file *file,
} }
resp.id = ctx->id; resp.id = ctx->id;
if (copy_to_user((void __user *)(unsigned long)cmd.response, if (copy_to_user(u64_to_user_ptr(cmd.response),
&resp, sizeof(resp))) { &resp, sizeof(resp))) {
result = -EFAULT; result = -EFAULT;
goto err2; goto err2;
...@@ -556,7 +556,7 @@ static ssize_t ib_ucm_destroy_id(struct ib_ucm_file *file, ...@@ -556,7 +556,7 @@ static ssize_t ib_ucm_destroy_id(struct ib_ucm_file *file,
ib_ucm_cleanup_events(ctx); ib_ucm_cleanup_events(ctx);
resp.events_reported = ctx->events_reported; resp.events_reported = ctx->events_reported;
if (copy_to_user((void __user *)(unsigned long)cmd.response, if (copy_to_user(u64_to_user_ptr(cmd.response),
&resp, sizeof(resp))) &resp, sizeof(resp)))
result = -EFAULT; result = -EFAULT;
...@@ -588,7 +588,7 @@ static ssize_t ib_ucm_attr_id(struct ib_ucm_file *file, ...@@ -588,7 +588,7 @@ static ssize_t ib_ucm_attr_id(struct ib_ucm_file *file,
resp.local_id = ctx->cm_id->local_id; resp.local_id = ctx->cm_id->local_id;
resp.remote_id = ctx->cm_id->remote_id; resp.remote_id = ctx->cm_id->remote_id;
if (copy_to_user((void __user *)(unsigned long)cmd.response, if (copy_to_user(u64_to_user_ptr(cmd.response),
&resp, sizeof(resp))) &resp, sizeof(resp)))
result = -EFAULT; result = -EFAULT;
...@@ -625,7 +625,7 @@ static ssize_t ib_ucm_init_qp_attr(struct ib_ucm_file *file, ...@@ -625,7 +625,7 @@ static ssize_t ib_ucm_init_qp_attr(struct ib_ucm_file *file,
ib_copy_qp_attr_to_user(ctx->cm_id->device, &resp, &qp_attr); ib_copy_qp_attr_to_user(ctx->cm_id->device, &resp, &qp_attr);
if (copy_to_user((void __user *)(unsigned long)cmd.response, if (copy_to_user(u64_to_user_ptr(cmd.response),
&resp, sizeof(resp))) &resp, sizeof(resp)))
result = -EFAULT; result = -EFAULT;
...@@ -699,7 +699,7 @@ static int ib_ucm_alloc_data(const void **dest, u64 src, u32 len) ...@@ -699,7 +699,7 @@ static int ib_ucm_alloc_data(const void **dest, u64 src, u32 len)
if (!len) if (!len)
return 0; return 0;
data = memdup_user((void __user *)(unsigned long)src, len); data = memdup_user(u64_to_user_ptr(src), len);
if (IS_ERR(data)) if (IS_ERR(data))
return PTR_ERR(data); return PTR_ERR(data);
...@@ -721,7 +721,7 @@ static int ib_ucm_path_get(struct sa_path_rec **path, u64 src) ...@@ -721,7 +721,7 @@ static int ib_ucm_path_get(struct sa_path_rec **path, u64 src)
if (!sa_path) if (!sa_path)
return -ENOMEM; return -ENOMEM;
if (copy_from_user(&upath, (void __user *)(unsigned long)src, if (copy_from_user(&upath, u64_to_user_ptr(src),
sizeof(upath))) { sizeof(upath))) {
kfree(sa_path); kfree(sa_path);
......
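The ib_ucm changes above are a mechanical conversion from the open-coded (void __user *)(unsigned long) double cast to the kernel's u64_to_user_ptr() helper (from linux/kernel.h, which additionally type-checks that its argument really is a u64). User pointers are carried in the uAPI structs as 64-bit integers precisely so the command layout is identical for 32 bit and 64 bit callers; the helper converts the value back to a pointer in one well-defined place. A rough userspace analogue of the pattern, with made-up names (toy_cmd), is sketched below.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Userspace model of the kernel helper: convert a u64 carried in a
 * uAPI struct back into a pointer without an open-coded double cast. */
#define u64_to_user_ptr(x) ((void *)(uintptr_t)(x))

struct toy_cmd {
	uint64_t response;	/* user pointer, carried as a u64 */
};

int main(void)
{
	char reply[16];
	struct toy_cmd cmd = { .response = (uintptr_t)reply };

	/* analogue of copy_to_user(u64_to_user_ptr(cmd.response), ...) */
	memcpy(u64_to_user_ptr(cmd.response), "ok", 3);
	puts(reply);
	return 0;
}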
...@@ -382,7 +382,11 @@ static ssize_t ucma_get_event(struct ucma_file *file, const char __user *inbuf, ...@@ -382,7 +382,11 @@ static ssize_t ucma_get_event(struct ucma_file *file, const char __user *inbuf,
struct ucma_event *uevent; struct ucma_event *uevent;
int ret = 0; int ret = 0;
if (out_len < sizeof uevent->resp) /*
* Old 32 bit user space does not send the 4 byte padding in the
* reserved field. We don't care, allow it to keep working.
*/
if (out_len < sizeof(uevent->resp) - sizeof(uevent->resp.reserved))
return -ENOSPC; return -ENOSPC;
if (copy_from_user(&cmd, inbuf, sizeof(cmd))) if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
...@@ -416,8 +420,9 @@ static ssize_t ucma_get_event(struct ucma_file *file, const char __user *inbuf, ...@@ -416,8 +420,9 @@ static ssize_t ucma_get_event(struct ucma_file *file, const char __user *inbuf,
uevent->resp.id = ctx->id; uevent->resp.id = ctx->id;
} }
if (copy_to_user((void __user *)(unsigned long)cmd.response, if (copy_to_user(u64_to_user_ptr(cmd.response),
&uevent->resp, sizeof uevent->resp)) { &uevent->resp,
min_t(size_t, out_len, sizeof(uevent->resp)))) {
ret = -EFAULT; ret = -EFAULT;
goto done; goto done;
} }
...@@ -477,15 +482,15 @@ static ssize_t ucma_create_id(struct ucma_file *file, const char __user *inbuf, ...@@ -477,15 +482,15 @@ static ssize_t ucma_create_id(struct ucma_file *file, const char __user *inbuf,
return -ENOMEM; return -ENOMEM;
ctx->uid = cmd.uid; ctx->uid = cmd.uid;
cm_id = rdma_create_id(current->nsproxy->net_ns, cm_id = __rdma_create_id(current->nsproxy->net_ns,
ucma_event_handler, ctx, cmd.ps, qp_type); ucma_event_handler, ctx, cmd.ps, qp_type, NULL);
if (IS_ERR(cm_id)) { if (IS_ERR(cm_id)) {
ret = PTR_ERR(cm_id); ret = PTR_ERR(cm_id);
goto err1; goto err1;
} }
resp.id = ctx->id; resp.id = ctx->id;
if (copy_to_user((void __user *)(unsigned long)cmd.response, if (copy_to_user(u64_to_user_ptr(cmd.response),
&resp, sizeof(resp))) { &resp, sizeof(resp))) {
ret = -EFAULT; ret = -EFAULT;
goto err2; goto err2;
...@@ -615,7 +620,7 @@ static ssize_t ucma_destroy_id(struct ucma_file *file, const char __user *inbuf, ...@@ -615,7 +620,7 @@ static ssize_t ucma_destroy_id(struct ucma_file *file, const char __user *inbuf,
} }
resp.events_reported = ucma_free_ctx(ctx); resp.events_reported = ucma_free_ctx(ctx);
if (copy_to_user((void __user *)(unsigned long)cmd.response, if (copy_to_user(u64_to_user_ptr(cmd.response),
&resp, sizeof(resp))) &resp, sizeof(resp)))
ret = -EFAULT; ret = -EFAULT;
...@@ -845,7 +850,7 @@ static ssize_t ucma_query_route(struct ucma_file *file, ...@@ -845,7 +850,7 @@ static ssize_t ucma_query_route(struct ucma_file *file,
ucma_copy_iw_route(&resp, &ctx->cm_id->route); ucma_copy_iw_route(&resp, &ctx->cm_id->route);
out: out:
if (copy_to_user((void __user *)(unsigned long)cmd.response, if (copy_to_user(u64_to_user_ptr(cmd.response),
&resp, sizeof(resp))) &resp, sizeof(resp)))
ret = -EFAULT; ret = -EFAULT;
...@@ -991,7 +996,7 @@ static ssize_t ucma_query(struct ucma_file *file, ...@@ -991,7 +996,7 @@ static ssize_t ucma_query(struct ucma_file *file,
if (copy_from_user(&cmd, inbuf, sizeof(cmd))) if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
return -EFAULT; return -EFAULT;
response = (void __user *)(unsigned long) cmd.response; response = u64_to_user_ptr(cmd.response);
ctx = ucma_get_ctx(file, cmd.id); ctx = ucma_get_ctx(file, cmd.id);
if (IS_ERR(ctx)) if (IS_ERR(ctx))
return PTR_ERR(ctx); return PTR_ERR(ctx);
...@@ -1094,12 +1099,12 @@ static ssize_t ucma_accept(struct ucma_file *file, const char __user *inbuf, ...@@ -1094,12 +1099,12 @@ static ssize_t ucma_accept(struct ucma_file *file, const char __user *inbuf,
if (cmd.conn_param.valid) { if (cmd.conn_param.valid) {
ucma_copy_conn_param(ctx->cm_id, &conn_param, &cmd.conn_param); ucma_copy_conn_param(ctx->cm_id, &conn_param, &cmd.conn_param);
mutex_lock(&file->mut); mutex_lock(&file->mut);
ret = rdma_accept(ctx->cm_id, &conn_param); ret = __rdma_accept(ctx->cm_id, &conn_param, NULL);
if (!ret) if (!ret)
ctx->uid = cmd.uid; ctx->uid = cmd.uid;
mutex_unlock(&file->mut); mutex_unlock(&file->mut);
} else } else
ret = rdma_accept(ctx->cm_id, NULL); ret = __rdma_accept(ctx->cm_id, NULL, NULL);
ucma_put_ctx(ctx); ucma_put_ctx(ctx);
return ret; return ret;
...@@ -1179,7 +1184,7 @@ static ssize_t ucma_init_qp_attr(struct ucma_file *file, ...@@ -1179,7 +1184,7 @@ static ssize_t ucma_init_qp_attr(struct ucma_file *file,
goto out; goto out;
ib_copy_qp_attr_to_user(ctx->cm_id->device, &resp, &qp_attr); ib_copy_qp_attr_to_user(ctx->cm_id->device, &resp, &qp_attr);
if (copy_to_user((void __user *)(unsigned long)cmd.response, if (copy_to_user(u64_to_user_ptr(cmd.response),
&resp, sizeof(resp))) &resp, sizeof(resp)))
ret = -EFAULT; ret = -EFAULT;
...@@ -1241,6 +1246,9 @@ static int ucma_set_ib_path(struct ucma_context *ctx, ...@@ -1241,6 +1246,9 @@ static int ucma_set_ib_path(struct ucma_context *ctx,
if (!optlen) if (!optlen)
return -EINVAL; return -EINVAL;
if (!ctx->cm_id->device)
return -EINVAL;
memset(&sa_path, 0, sizeof(sa_path)); memset(&sa_path, 0, sizeof(sa_path));
sa_path.rec_type = SA_PATH_REC_TYPE_IB; sa_path.rec_type = SA_PATH_REC_TYPE_IB;
...@@ -1315,7 +1323,7 @@ static ssize_t ucma_set_option(struct ucma_file *file, const char __user *inbuf, ...@@ -1315,7 +1323,7 @@ static ssize_t ucma_set_option(struct ucma_file *file, const char __user *inbuf,
if (unlikely(cmd.optlen > KMALLOC_MAX_SIZE)) if (unlikely(cmd.optlen > KMALLOC_MAX_SIZE))
return -EINVAL; return -EINVAL;
optval = memdup_user((void __user *) (unsigned long) cmd.optval, optval = memdup_user(u64_to_user_ptr(cmd.optval),
cmd.optlen); cmd.optlen);
if (IS_ERR(optval)) { if (IS_ERR(optval)) {
ret = PTR_ERR(optval); ret = PTR_ERR(optval);
...@@ -1395,7 +1403,7 @@ static ssize_t ucma_process_join(struct ucma_file *file, ...@@ -1395,7 +1403,7 @@ static ssize_t ucma_process_join(struct ucma_file *file,
goto err2; goto err2;
resp.id = mc->id; resp.id = mc->id;
if (copy_to_user((void __user *)(unsigned long) cmd->response, if (copy_to_user(u64_to_user_ptr(cmd->response),
&resp, sizeof(resp))) { &resp, sizeof(resp))) {
ret = -EFAULT; ret = -EFAULT;
goto err3; goto err3;
...@@ -1500,7 +1508,7 @@ static ssize_t ucma_leave_multicast(struct ucma_file *file, ...@@ -1500,7 +1508,7 @@ static ssize_t ucma_leave_multicast(struct ucma_file *file,
resp.events_reported = mc->events_reported; resp.events_reported = mc->events_reported;
kfree(mc); kfree(mc);
if (copy_to_user((void __user *)(unsigned long)cmd.response, if (copy_to_user(u64_to_user_ptr(cmd.response),
&resp, sizeof(resp))) &resp, sizeof(resp)))
ret = -EFAULT; ret = -EFAULT;
out: out:
...@@ -1587,7 +1595,7 @@ static ssize_t ucma_migrate_id(struct ucma_file *new_file, ...@@ -1587,7 +1595,7 @@ static ssize_t ucma_migrate_id(struct ucma_file *new_file,
ucma_unlock_files(cur_file, new_file); ucma_unlock_files(cur_file, new_file);
response: response:
if (copy_to_user((void __user *)(unsigned long)cmd.response, if (copy_to_user(u64_to_user_ptr(cmd.response),
&resp, sizeof(resp))) &resp, sizeof(resp)))
ret = -EFAULT; ret = -EFAULT;
......
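The ucma_get_event() change above is one of the 32 bit ABI fixes called out in the merge text: a struct containing a 64-bit member gets tail padding on 64 bit builds that a classic i386 build does not have, so an old 32 bit binary can legitimately pass a response buffer 4 bytes shorter than the kernel's sizeof(). The snippet below is only an illustration of that padding effect with a hypothetical struct (toy_event_resp), not the real rdma_ucm_event_resp layout.

#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical response struct: the uint64_t forces 8-byte struct
 * alignment on typical 64-bit ABIs, so 4 bytes of tail padding follow
 * the last field and sizeof() is 16.  On the classic i386 ABI,
 * uint64_t is only 4-byte aligned and sizeof() is 12.
 */
struct toy_event_resp {
	uint64_t uid;
	uint32_t status;	/* tail padding after this on 64-bit ABIs */
};

int main(void)
{
	printf("sizeof(struct toy_event_resp) = %zu\n",
	       sizeof(struct toy_event_resp));
	return 0;
}

That 4-byte size difference between the two ABIs is the kind of mismatch the relaxed out_len check and the min_t()-bounded copy_to_user() above are meant to tolerate.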
@@ -46,6 +46,10 @@
 #include <rdma/ib_verbs.h>
 #include <rdma/ib_umem.h>
 #include <rdma/ib_user_verbs.h>
+#include <rdma/uverbs_std_types.h>
+#define UVERBS_MODULE_NAME ib_uverbs
+#include <rdma/uverbs_named_ioctl.h>
 static inline void
 ib_uverbs_init_udata(struct ib_udata *udata,
@@ -199,11 +203,18 @@ struct ib_ucq_object {
 u32 async_events_reported;
 };
+struct ib_uflow_resources;
+struct ib_uflow_object {
+struct ib_uobject uobject;
+struct ib_uflow_resources *resources;
+};
 extern const struct file_operations uverbs_event_fops;
 void ib_uverbs_init_event_queue(struct ib_uverbs_event_queue *ev_queue);
 struct file *ib_uverbs_alloc_async_event_file(struct ib_uverbs_file *uverbs_file,
 struct ib_device *ib_dev);
 void ib_uverbs_free_async_event_file(struct ib_uverbs_file *uverbs_file);
+void ib_uverbs_flow_resources_free(struct ib_uflow_resources *uflow_res);
 void ib_uverbs_release_ucq(struct ib_uverbs_file *file,
 struct ib_uverbs_completion_event_file *ev_file,
@@ -226,7 +237,13 @@ int uverbs_dealloc_mw(struct ib_mw *mw);
 void ib_uverbs_detach_umcast(struct ib_qp *qp,
 struct ib_uqp_object *uobj);
+void create_udata(struct uverbs_attr_bundle *ctx, struct ib_udata *udata);
+extern const struct uverbs_attr_def uverbs_uhw_compat_in;
+extern const struct uverbs_attr_def uverbs_uhw_compat_out;
 long ib_uverbs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg);
+int uverbs_destroy_def_handler(struct ib_device *ib_dev,
+struct ib_uverbs_file *file,
+struct uverbs_attr_bundle *attrs);
 struct ib_uverbs_flow_spec {
 union {
@@ -240,13 +257,37 @@ struct ib_uverbs_flow_spec {
 };
 struct ib_uverbs_flow_spec_eth eth;
 struct ib_uverbs_flow_spec_ipv4 ipv4;
+struct ib_uverbs_flow_spec_esp esp;
 struct ib_uverbs_flow_spec_tcp_udp tcp_udp;
 struct ib_uverbs_flow_spec_ipv6 ipv6;
 struct ib_uverbs_flow_spec_action_tag flow_tag;
 struct ib_uverbs_flow_spec_action_drop drop;
+struct ib_uverbs_flow_spec_action_handle action;
 };
 };
+int ib_uverbs_kern_spec_to_ib_spec_filter(enum ib_flow_spec_type type,
+const void *kern_spec_mask,
+const void *kern_spec_val,
+size_t kern_filter_sz,
+union ib_flow_spec *ib_spec);
+extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_DEVICE);
+extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_PD);
+extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_MR);
+extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_COMP_CHANNEL);
+extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_CQ);
+extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_QP);
+extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_AH);
+extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_MW);
+extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_SRQ);
+extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_FLOW);
+extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_WQ);
+extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_RWQ_IND_TBL);
+extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_XRCD);
+extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_FLOW_ACTION);
+extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_DM);
 #define IB_UVERBS_DECLARE_CMD(name) \
 ssize_t ib_uverbs_##name(struct ib_uverbs_file *file, \
 struct ib_device *ib_dev, \
...
@@ -35,6 +35,17 @@
 #include "rdma_core.h"
 #include "uverbs.h"
+static bool uverbs_is_attr_cleared(const struct ib_uverbs_attr *uattr,
+u16 len)
+{
+if (uattr->len > sizeof(((struct ib_uverbs_attr *)0)->data))
+return ib_is_buffer_cleared(u64_to_user_ptr(uattr->data) + len,
+uattr->len - len);
+return !memchr_inv((const void *)&uattr->data + len,
+0, uattr->len - len);
+}
 static int uverbs_process_attr(struct ib_device *ibdev,
 struct ib_ucontext *ucontext,
 const struct ib_uverbs_attr *uattr,
@@ -44,14 +55,12 @@ static int uverbs_process_attr(struct ib_device *ibdev,
 struct ib_uverbs_attr __user *uattr_ptr)
 {
 const struct uverbs_attr_spec *spec;
+const struct uverbs_attr_spec *val_spec;
 struct uverbs_attr *e;
 const struct uverbs_object_spec *object;
 struct uverbs_obj_attr *o_attr;
 struct uverbs_attr *elements = attr_bundle_h->attrs;
-if (uattr->reserved)
-return -EINVAL;
 if (attr_id >= attr_spec_bucket->num_attrs) {
 if (uattr->flags & UVERBS_ATTR_F_MANDATORY)
 return -EINVAL;
@@ -63,15 +72,46 @@ static int uverbs_process_attr(struct ib_device *ibdev,
 return -EINVAL;
 spec = &attr_spec_bucket->attrs[attr_id];
+val_spec = spec;
 e = &elements[attr_id];
 e->uattr = uattr_ptr;
 switch (spec->type) {
+case UVERBS_ATTR_TYPE_ENUM_IN:
+if (uattr->attr_data.enum_data.elem_id >= spec->enum_def.num_elems)
+return -EOPNOTSUPP;
+if (uattr->attr_data.enum_data.reserved)
+return -EINVAL;
+val_spec = &spec->enum_def.ids[uattr->attr_data.enum_data.elem_id];
+/* Currently we only support PTR_IN based enums */
+if (val_spec->type != UVERBS_ATTR_TYPE_PTR_IN)
+return -EOPNOTSUPP;
+e->ptr_attr.enum_id = uattr->attr_data.enum_data.elem_id;
+/* fall through */
 case UVERBS_ATTR_TYPE_PTR_IN:
+/* Ensure that any data provided by userspace beyond the known
+ * struct is zero. Userspace that knows how to use some future
+ * longer struct will fail here if used with an old kernel and
+ * non-zero content, making ABI compat/discovery simpler.
+ */
+if (uattr->len > val_spec->ptr.len &&
+val_spec->flags & UVERBS_ATTR_SPEC_F_MIN_SZ_OR_ZERO &&
+!uverbs_is_attr_cleared(uattr, val_spec->ptr.len))
+return -EOPNOTSUPP;
+/* fall through */
 case UVERBS_ATTR_TYPE_PTR_OUT:
-if (uattr->len < spec->len ||
-(!(spec->flags & UVERBS_ATTR_SPEC_F_MIN_SZ) &&
-uattr->len > spec->len))
+if (uattr->len < val_spec->ptr.min_len ||
+(!(val_spec->flags & UVERBS_ATTR_SPEC_F_MIN_SZ_OR_ZERO) &&
+uattr->len > val_spec->ptr.len))
+return -EINVAL;
+if (spec->type != UVERBS_ATTR_TYPE_ENUM_IN &&
+uattr->attr_data.reserved)
 return -EINVAL;
 e->ptr_attr.data = uattr->data;
@@ -84,6 +124,9 @@ static int uverbs_process_attr(struct ib_device *ibdev,
 return -EINVAL;
 /* fall through */
 case UVERBS_ATTR_TYPE_FD:
+if (uattr->attr_data.reserved)
+return -EINVAL;
 if (uattr->len != 0 || !ucontext || uattr->data > INT_MAX)
 return -EINVAL;
@@ -246,6 +289,9 @@ static long ib_uverbs_cmd_verbs(struct ib_device *ib_dev,
 size_t ctx_size;
 uintptr_t data[UVERBS_OPTIMIZE_USING_STACK_SZ / sizeof(uintptr_t)];
+if (hdr->driver_id != ib_dev->driver_id)
+return -EINVAL;
 object_spec = uverbs_get_object(ib_dev, hdr->object_id);
 if (!object_spec)
 return -EPROTONOSUPPORT;
@@ -350,7 +396,7 @@ long ib_uverbs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 goto out;
 }
-if (hdr.reserved) {
+if (hdr.reserved1 || hdr.reserved2) {
 err = -EPROTONOSUPPORT;
 goto out;
 }
...
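The uverbs_is_attr_cleared() helper added above implements a common ABI-extension pattern: a longer-than-expected input struct is accepted only if every byte past the part the kernel understands is zero. A small self-contained C sketch of that check follows; it is plain userspace code with illustrative names, not the kernel implementation.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Return true if all 'len' bytes at 'p' are zero (analogous to a
 * !memchr_inv(p, 0, len) test in the kernel). */
static bool buffer_is_zeroed(const void *p, size_t len)
{
        const unsigned char *b = p;
        size_t i;

        for (i = 0; i < len; i++)
                if (b[i])
                        return false;
        return true;
}

/* Accept a caller-supplied struct of 'user_len' bytes when only the first
 * 'known_len' bytes are understood, provided the trailing bytes are zero. */
static int check_extended_struct(const void *user_buf, size_t user_len,
                                 size_t known_len)
{
        if (user_len <= known_len)
                return 0;       /* old or equal layout: nothing extra to verify */
        if (!buffer_is_zeroed((const char *)user_buf + known_len,
                              user_len - known_len))
                return -1;      /* unknown non-zero fields: reject */
        return 0;
}

int main(void)
{
        unsigned char newer[16] = {0};

        newer[3] = 1;           /* field the "kernel" understands */
        printf("%d\n", check_extended_struct(newer, sizeof(newer), 8)); /* 0: ok */
        newer[12] = 1;          /* field beyond the known struct */
        printf("%d\n", check_extended_struct(newer, sizeof(newer), 8)); /* -1: reject */
        return 0;
}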
@@ -379,7 +379,7 @@ static struct uverbs_method_spec *build_method_with_attrs(const struct uverbs_me
 "ib_uverbs: Tried to merge attr (%d) but it's an object with new/destroy access but isn't mandatory\n",
 min_id) ||
 WARN(IS_ATTR_OBJECT(attr) &&
-attr->flags & UVERBS_ATTR_SPEC_F_MIN_SZ,
+attr->flags & UVERBS_ATTR_SPEC_F_MIN_SZ_OR_ZERO,
 "ib_uverbs: Tried to merge attr (%d) but it's an object with min_sz flag\n",
 min_id)) {
 res = -EINVAL;
...
@@ -468,7 +468,7 @@ void ib_uverbs_comp_handler(struct ib_cq *cq, void *cq_context)
 return;
 }
-entry = kmalloc(sizeof *entry, GFP_ATOMIC);
+entry = kmalloc(sizeof(*entry), GFP_ATOMIC);
 if (!entry) {
 spin_unlock_irqrestore(&ev_queue->lock, flags);
 return;
@@ -501,7 +501,7 @@ static void ib_uverbs_async_handler(struct ib_uverbs_file *file,
 return;
 }
-entry = kmalloc(sizeof *entry, GFP_ATOMIC);
+entry = kmalloc(sizeof(*entry), GFP_ATOMIC);
 if (!entry) {
 spin_unlock_irqrestore(&file->async_file->ev_queue.lock, flags);
 return;
@@ -635,39 +635,87 @@ struct file *ib_uverbs_alloc_async_event_file(struct ib_uverbs_file *uverbs_file
 return filp;
 }
-static int verify_command_mask(struct ib_device *ib_dev, __u32 command)
+static bool verify_command_mask(struct ib_device *ib_dev,
+u32 command, bool extended)
 {
-u64 mask;
-if (command <= IB_USER_VERBS_CMD_OPEN_QP)
-mask = ib_dev->uverbs_cmd_mask;
-else
-mask = ib_dev->uverbs_ex_cmd_mask;
-if (mask & ((u64)1 << command))
-return 0;
-return -1;
+if (!extended)
+return ib_dev->uverbs_cmd_mask & BIT_ULL(command);
+return ib_dev->uverbs_ex_cmd_mask & BIT_ULL(command);
 }
 static bool verify_command_idx(u32 command, bool extended)
 {
 if (extended)
-return command < ARRAY_SIZE(uverbs_ex_cmd_table);
-return command < ARRAY_SIZE(uverbs_cmd_table);
+return command < ARRAY_SIZE(uverbs_ex_cmd_table) &&
+uverbs_ex_cmd_table[command];
+return command < ARRAY_SIZE(uverbs_cmd_table) &&
+uverbs_cmd_table[command];
+}
+static ssize_t process_hdr(struct ib_uverbs_cmd_hdr *hdr,
+u32 *command, bool *extended)
+{
+if (hdr->command & ~(u32)(IB_USER_VERBS_CMD_FLAG_EXTENDED |
+IB_USER_VERBS_CMD_COMMAND_MASK))
+return -EINVAL;
+*command = hdr->command & IB_USER_VERBS_CMD_COMMAND_MASK;
+*extended = hdr->command & IB_USER_VERBS_CMD_FLAG_EXTENDED;
+if (!verify_command_idx(*command, *extended))
+return -EOPNOTSUPP;
+return 0;
+}
+static ssize_t verify_hdr(struct ib_uverbs_cmd_hdr *hdr,
+struct ib_uverbs_ex_cmd_hdr *ex_hdr,
+size_t count, bool extended)
+{
+if (extended) {
+count -= sizeof(*hdr) + sizeof(*ex_hdr);
+if ((hdr->in_words + ex_hdr->provider_in_words) * 8 != count)
+return -EINVAL;
+if (ex_hdr->cmd_hdr_reserved)
+return -EINVAL;
+if (ex_hdr->response) {
+if (!hdr->out_words && !ex_hdr->provider_out_words)
+return -EINVAL;
+if (!access_ok(VERIFY_WRITE,
+u64_to_user_ptr(ex_hdr->response),
+(hdr->out_words + ex_hdr->provider_out_words) * 8))
+return -EFAULT;
+} else {
+if (hdr->out_words || ex_hdr->provider_out_words)
+return -EINVAL;
+}
+return 0;
+}
+/* not extended command */
+if (hdr->in_words * 4 != count)
+return -EINVAL;
+return 0;
 }
 static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf,
 size_t count, loff_t *pos)
 {
 struct ib_uverbs_file *file = filp->private_data;
+struct ib_uverbs_ex_cmd_hdr ex_hdr;
 struct ib_device *ib_dev;
 struct ib_uverbs_cmd_hdr hdr;
-bool extended_command;
-__u32 command;
-__u32 flags;
+bool extended;
 int srcu_key;
+u32 command;
 ssize_t ret;
 if (!ib_safe_file_access(filp)) {
@@ -676,12 +724,31 @@ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf,
 return -EACCES;
 }
-if (count < sizeof hdr)
+if (count < sizeof(hdr))
 return -EINVAL;
-if (copy_from_user(&hdr, buf, sizeof hdr))
+if (copy_from_user(&hdr, buf, sizeof(hdr)))
 return -EFAULT;
+ret = process_hdr(&hdr, &command, &extended);
+if (ret)
+return ret;
+if (!file->ucontext &&
+(command != IB_USER_VERBS_CMD_GET_CONTEXT || extended))
+return -EINVAL;
+if (extended) {
+if (count < (sizeof(hdr) + sizeof(ex_hdr)))
+return -EINVAL;
+if (copy_from_user(&ex_hdr, buf + sizeof(hdr), sizeof(ex_hdr)))
+return -EFAULT;
+}
+ret = verify_hdr(&hdr, &ex_hdr, count, extended);
+if (ret)
+return ret;
 srcu_key = srcu_read_lock(&file->device->disassociate_srcu);
 ib_dev = srcu_dereference(file->device->ib_dev,
 &file->device->disassociate_srcu);
@@ -690,106 +757,22 @@ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf,
 goto out;
 }
-if (hdr.command & ~(__u32)(IB_USER_VERBS_CMD_FLAGS_MASK |
-IB_USER_VERBS_CMD_COMMAND_MASK)) {
-ret = -EINVAL;
-goto out;
-}
-command = hdr.command & IB_USER_VERBS_CMD_COMMAND_MASK;
-flags = (hdr.command &
-IB_USER_VERBS_CMD_FLAGS_MASK) >> IB_USER_VERBS_CMD_FLAGS_SHIFT;
-extended_command = flags & IB_USER_VERBS_CMD_FLAG_EXTENDED;
-if (!verify_command_idx(command, extended_command)) {
-ret = -EINVAL;
-goto out;
-}
-if (verify_command_mask(ib_dev, command)) {
+if (!verify_command_mask(ib_dev, command, extended)) {
 ret = -EOPNOTSUPP;
 goto out;
 }
-if (!file->ucontext &&
-command != IB_USER_VERBS_CMD_GET_CONTEXT) {
-ret = -EINVAL;
-goto out;
-}
-if (!flags) {
-if (!uverbs_cmd_table[command]) {
-ret = -EINVAL;
-goto out;
-}
-if (hdr.in_words * 4 != count) {
-ret = -EINVAL;
-goto out;
-}
+buf += sizeof(hdr);
-ret = uverbs_cmd_table[command](file, ib_dev,
-buf + sizeof(hdr),
+if (!extended) {
+ret = uverbs_cmd_table[command](file, ib_dev, buf,
 hdr.in_words * 4,
 hdr.out_words * 4);
-} else if (flags == IB_USER_VERBS_CMD_FLAG_EXTENDED) {
-struct ib_uverbs_ex_cmd_hdr ex_hdr;
+} else {
 struct ib_udata ucore;
 struct ib_udata uhw;
-size_t written_count = count;
-if (!uverbs_ex_cmd_table[command]) {
-ret = -ENOSYS;
-goto out;
-}
-if (!file->ucontext) {
-ret = -EINVAL;
-goto out;
-}
-if (count < (sizeof(hdr) + sizeof(ex_hdr))) {
-ret = -EINVAL;
-goto out;
-}
-if (copy_from_user(&ex_hdr, buf + sizeof(hdr), sizeof(ex_hdr))) {
-ret = -EFAULT;
-goto out;
-}
-count -= sizeof(hdr) + sizeof(ex_hdr);
-buf += sizeof(hdr) + sizeof(ex_hdr);
+buf += sizeof(ex_hdr);
-if ((hdr.in_words + ex_hdr.provider_in_words) * 8 != count) {
-ret = -EINVAL;
-goto out;
-}
-if (ex_hdr.cmd_hdr_reserved) {
-ret = -EINVAL;
-goto out;
-}
-if (ex_hdr.response) {
-if (!hdr.out_words && !ex_hdr.provider_out_words) {
-ret = -EINVAL;
-goto out;
-}
-if (!access_ok(VERIFY_WRITE,
-u64_to_user_ptr(ex_hdr.response),
-(hdr.out_words + ex_hdr.provider_out_words) * 8)) {
-ret = -EFAULT;
-goto out;
-}
-} else {
-if (hdr.out_words || ex_hdr.provider_out_words) {
-ret = -EINVAL;
-goto out;
-}
-}
 ib_uverbs_init_udata_buf_or_null(&ucore, buf,
 u64_to_user_ptr(ex_hdr.response),
@@ -802,10 +785,7 @@ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf,
 ex_hdr.provider_out_words * 8);
 ret = uverbs_ex_cmd_table[command](file, ib_dev, &ucore, &uhw);
-if (!ret)
-ret = written_count;
-} else {
-ret = -ENOSYS;
+ret = (ret) ? : count;
 }
 out:
@@ -953,10 +933,8 @@ static const struct file_operations uverbs_fops = {
 .open = ib_uverbs_open,
 .release = ib_uverbs_close,
 .llseek = no_llseek,
-#if IS_ENABLED(CONFIG_INFINIBAND_EXP_USER_ACCESS)
 .unlocked_ioctl = ib_uverbs_ioctl,
 .compat_ioctl = ib_uverbs_ioctl,
-#endif
 };
 static const struct file_operations uverbs_mmap_fops = {
@@ -966,10 +944,8 @@ static const struct file_operations uverbs_mmap_fops = {
 .open = ib_uverbs_open,
 .release = ib_uverbs_close,
 .llseek = no_llseek,
-#if IS_ENABLED(CONFIG_INFINIBAND_EXP_USER_ACCESS)
 .unlocked_ioctl = ib_uverbs_ioctl,
 .compat_ioctl = ib_uverbs_ioctl,
-#endif
 };
 static struct ib_client uverbs_client = {
@@ -1032,7 +1008,7 @@ static void ib_uverbs_add_one(struct ib_device *device)
 if (!device->alloc_ucontext)
 return;
-uverbs_dev = kzalloc(sizeof *uverbs_dev, GFP_KERNEL);
+uverbs_dev = kzalloc(sizeof(*uverbs_dev), GFP_KERNEL);
 if (!uverbs_dev)
 return;
...
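The reworked verify_command_mask() above tests the per-device capability mask with BIT_ULL() instead of the open-coded (u64)1 << command shift. A minimal self-contained sketch of that bitmask check is below; it is plain C with an illustrative mask and command numbers, not the kernel's tables.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Same idea as the kernel's BIT_ULL(): a 64-bit-safe single-bit mask.
 * Using 1ULL avoids undefined behaviour when the bit index is >= 32. */
#define BIT_ULL(n)      (1ULL << (n))

/* Hypothetical capability mask with commands 0, 2 and 40 enabled. */
static const uint64_t cmd_mask = BIT_ULL(0) | BIT_ULL(2) | BIT_ULL(40);

static bool command_supported(uint32_t command)
{
        return cmd_mask & BIT_ULL(command);
}

int main(void)
{
        printf("cmd 2:  %s\n", command_supported(2) ? "supported" : "unsupported");
        printf("cmd 40: %s\n", command_supported(40) ? "supported" : "unsupported");
        printf("cmd 3:  %s\n", command_supported(3) ? "supported" : "unsupported");
        return 0;
}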
/*
* Copyright (c) 2017, Mellanox Technologies inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <rdma/uverbs_std_types.h>
#include "rdma_core.h"
#include "uverbs.h"
static int uverbs_free_cq(struct ib_uobject *uobject,
enum rdma_remove_reason why)
{
struct ib_cq *cq = uobject->object;
struct ib_uverbs_event_queue *ev_queue = cq->cq_context;
struct ib_ucq_object *ucq =
container_of(uobject, struct ib_ucq_object, uobject);
int ret;
ret = ib_destroy_cq(cq);
if (!ret || why != RDMA_REMOVE_DESTROY)
ib_uverbs_release_ucq(uobject->context->ufile, ev_queue ?
container_of(ev_queue,
struct ib_uverbs_completion_event_file,
ev_queue) : NULL,
ucq);
return ret;
}
static int UVERBS_HANDLER(UVERBS_METHOD_CQ_CREATE)(struct ib_device *ib_dev,
struct ib_uverbs_file *file,
struct uverbs_attr_bundle *attrs)
{
struct ib_ucontext *ucontext = file->ucontext;
struct ib_ucq_object *obj;
struct ib_udata uhw;
int ret;
u64 user_handle;
struct ib_cq_init_attr attr = {};
struct ib_cq *cq;
struct ib_uverbs_completion_event_file *ev_file = NULL;
const struct uverbs_attr *ev_file_attr;
struct ib_uobject *ev_file_uobj;
if (!(ib_dev->uverbs_cmd_mask & 1ULL << IB_USER_VERBS_CMD_CREATE_CQ))
return -EOPNOTSUPP;
ret = uverbs_copy_from(&attr.comp_vector, attrs,
UVERBS_ATTR_CREATE_CQ_COMP_VECTOR);
if (!ret)
ret = uverbs_copy_from(&attr.cqe, attrs,
UVERBS_ATTR_CREATE_CQ_CQE);
if (!ret)
ret = uverbs_copy_from(&user_handle, attrs,
UVERBS_ATTR_CREATE_CQ_USER_HANDLE);
if (ret)
return ret;
/* Optional param, if it doesn't exist, we get -ENOENT and skip it */
if (IS_UVERBS_COPY_ERR(uverbs_copy_from(&attr.flags, attrs,
UVERBS_ATTR_CREATE_CQ_FLAGS)))
return -EFAULT;
ev_file_attr = uverbs_attr_get(attrs, UVERBS_ATTR_CREATE_CQ_COMP_CHANNEL);
if (!IS_ERR(ev_file_attr)) {
ev_file_uobj = ev_file_attr->obj_attr.uobject;
ev_file = container_of(ev_file_uobj,
struct ib_uverbs_completion_event_file,
uobj_file.uobj);
uverbs_uobject_get(ev_file_uobj);
}
if (attr.comp_vector >= ucontext->ufile->device->num_comp_vectors) {
ret = -EINVAL;
goto err_event_file;
}
obj = container_of(uverbs_attr_get(attrs,
UVERBS_ATTR_CREATE_CQ_HANDLE)->obj_attr.uobject,
typeof(*obj), uobject);
obj->uverbs_file = ucontext->ufile;
obj->comp_events_reported = 0;
obj->async_events_reported = 0;
INIT_LIST_HEAD(&obj->comp_list);
INIT_LIST_HEAD(&obj->async_list);
/* Temporary, only until drivers get the new uverbs_attr_bundle */
create_udata(attrs, &uhw);
cq = ib_dev->create_cq(ib_dev, &attr, ucontext, &uhw);
if (IS_ERR(cq)) {
ret = PTR_ERR(cq);
goto err_event_file;
}
cq->device = ib_dev;
cq->uobject = &obj->uobject;
cq->comp_handler = ib_uverbs_comp_handler;
cq->event_handler = ib_uverbs_cq_event_handler;
cq->cq_context = ev_file ? &ev_file->ev_queue : NULL;
obj->uobject.object = cq;
obj->uobject.user_handle = user_handle;
atomic_set(&cq->usecnt, 0);
cq->res.type = RDMA_RESTRACK_CQ;
rdma_restrack_add(&cq->res);
ret = uverbs_copy_to(attrs, UVERBS_ATTR_CREATE_CQ_RESP_CQE, &cq->cqe,
sizeof(cq->cqe));
if (ret)
goto err_cq;
return 0;
err_cq:
ib_destroy_cq(cq);
err_event_file:
if (ev_file)
uverbs_uobject_put(ev_file_uobj);
return ret;
};
static DECLARE_UVERBS_NAMED_METHOD(UVERBS_METHOD_CQ_CREATE,
&UVERBS_ATTR_IDR(UVERBS_ATTR_CREATE_CQ_HANDLE, UVERBS_OBJECT_CQ,
UVERBS_ACCESS_NEW,
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)),
&UVERBS_ATTR_PTR_IN(UVERBS_ATTR_CREATE_CQ_CQE,
UVERBS_ATTR_TYPE(u32),
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)),
&UVERBS_ATTR_PTR_IN(UVERBS_ATTR_CREATE_CQ_USER_HANDLE,
UVERBS_ATTR_TYPE(u64),
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)),
&UVERBS_ATTR_FD(UVERBS_ATTR_CREATE_CQ_COMP_CHANNEL,
UVERBS_OBJECT_COMP_CHANNEL,
UVERBS_ACCESS_READ),
&UVERBS_ATTR_PTR_IN(UVERBS_ATTR_CREATE_CQ_COMP_VECTOR, UVERBS_ATTR_TYPE(u32),
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)),
&UVERBS_ATTR_PTR_IN(UVERBS_ATTR_CREATE_CQ_FLAGS, UVERBS_ATTR_TYPE(u32)),
&UVERBS_ATTR_PTR_OUT(UVERBS_ATTR_CREATE_CQ_RESP_CQE, UVERBS_ATTR_TYPE(u32),
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)),
&uverbs_uhw_compat_in, &uverbs_uhw_compat_out);
static int UVERBS_HANDLER(UVERBS_METHOD_CQ_DESTROY)(struct ib_device *ib_dev,
struct ib_uverbs_file *file,
struct uverbs_attr_bundle *attrs)
{
struct ib_uverbs_destroy_cq_resp resp;
struct ib_uobject *uobj =
uverbs_attr_get(attrs, UVERBS_ATTR_DESTROY_CQ_HANDLE)->obj_attr.uobject;
struct ib_ucq_object *obj = container_of(uobj, struct ib_ucq_object,
uobject);
int ret;
if (!(ib_dev->uverbs_cmd_mask & 1ULL << IB_USER_VERBS_CMD_DESTROY_CQ))
return -EOPNOTSUPP;
ret = rdma_explicit_destroy(uobj);
if (ret)
return ret;
resp.comp_events_reported = obj->comp_events_reported;
resp.async_events_reported = obj->async_events_reported;
return uverbs_copy_to(attrs, UVERBS_ATTR_DESTROY_CQ_RESP, &resp,
sizeof(resp));
}
static DECLARE_UVERBS_NAMED_METHOD(UVERBS_METHOD_CQ_DESTROY,
&UVERBS_ATTR_IDR(UVERBS_ATTR_DESTROY_CQ_HANDLE, UVERBS_OBJECT_CQ,
UVERBS_ACCESS_DESTROY,
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)),
&UVERBS_ATTR_PTR_OUT(UVERBS_ATTR_DESTROY_CQ_RESP,
UVERBS_ATTR_TYPE(struct ib_uverbs_destroy_cq_resp),
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)));
DECLARE_UVERBS_NAMED_OBJECT(UVERBS_OBJECT_CQ,
&UVERBS_TYPE_ALLOC_IDR_SZ(sizeof(struct ib_ucq_object), 0,
uverbs_free_cq),
#if IS_ENABLED(CONFIG_INFINIBAND_EXP_LEGACY_VERBS_NEW_UAPI)
&UVERBS_METHOD(UVERBS_METHOD_CQ_CREATE),
&UVERBS_METHOD(UVERBS_METHOD_CQ_DESTROY)
#endif
);
/*
* Copyright (c) 2018, Mellanox Technologies inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include "uverbs.h"
#include <rdma/uverbs_std_types.h>
static int uverbs_free_dm(struct ib_uobject *uobject,
enum rdma_remove_reason why)
{
struct ib_dm *dm = uobject->object;
if (why == RDMA_REMOVE_DESTROY && atomic_read(&dm->usecnt))
return -EBUSY;
return dm->device->dealloc_dm(dm);
}
static int UVERBS_HANDLER(UVERBS_METHOD_DM_ALLOC)(struct ib_device *ib_dev,
struct ib_uverbs_file *file,
struct uverbs_attr_bundle *attrs)
{
struct ib_ucontext *ucontext = file->ucontext;
struct ib_dm_alloc_attr attr = {};
struct ib_uobject *uobj;
struct ib_dm *dm;
int ret;
if (!ib_dev->alloc_dm)
return -EOPNOTSUPP;
ret = uverbs_copy_from(&attr.length, attrs,
UVERBS_ATTR_ALLOC_DM_LENGTH);
if (ret)
return ret;
ret = uverbs_copy_from(&attr.alignment, attrs,
UVERBS_ATTR_ALLOC_DM_ALIGNMENT);
if (ret)
return ret;
uobj = uverbs_attr_get(attrs, UVERBS_ATTR_ALLOC_DM_HANDLE)->obj_attr.uobject;
dm = ib_dev->alloc_dm(ib_dev, ucontext, &attr, attrs);
if (IS_ERR(dm))
return PTR_ERR(dm);
dm->device = ib_dev;
dm->length = attr.length;
dm->uobject = uobj;
atomic_set(&dm->usecnt, 0);
uobj->object = dm;
return 0;
}
static DECLARE_UVERBS_NAMED_METHOD(UVERBS_METHOD_DM_ALLOC,
&UVERBS_ATTR_IDR(UVERBS_ATTR_ALLOC_DM_HANDLE, UVERBS_OBJECT_DM,
UVERBS_ACCESS_NEW,
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)),
&UVERBS_ATTR_PTR_IN(UVERBS_ATTR_ALLOC_DM_LENGTH,
UVERBS_ATTR_TYPE(u64),
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)),
&UVERBS_ATTR_PTR_IN(UVERBS_ATTR_ALLOC_DM_ALIGNMENT,
UVERBS_ATTR_TYPE(u32),
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)));
static DECLARE_UVERBS_NAMED_METHOD_WITH_HANDLER(UVERBS_METHOD_DM_FREE,
uverbs_destroy_def_handler,
&UVERBS_ATTR_IDR(UVERBS_ATTR_FREE_DM_HANDLE,
UVERBS_OBJECT_DM,
UVERBS_ACCESS_DESTROY,
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)));
DECLARE_UVERBS_NAMED_OBJECT(UVERBS_OBJECT_DM,
/* 1 is used in order to free the DM after MRs */
&UVERBS_TYPE_ALLOC_IDR(1, uverbs_free_dm),
&UVERBS_METHOD(UVERBS_METHOD_DM_ALLOC),
&UVERBS_METHOD(UVERBS_METHOD_DM_FREE));
/*
* Copyright (c) 2018, Mellanox Technologies inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include "uverbs.h"
#include <rdma/uverbs_std_types.h>
static int uverbs_free_mr(struct ib_uobject *uobject,
enum rdma_remove_reason why)
{
return ib_dereg_mr((struct ib_mr *)uobject->object);
}
static int UVERBS_HANDLER(UVERBS_METHOD_DM_MR_REG)(struct ib_device *ib_dev,
struct ib_uverbs_file *file,
struct uverbs_attr_bundle *attrs)
{
struct ib_dm_mr_attr attr = {};
struct ib_uobject *uobj;
struct ib_dm *dm;
struct ib_pd *pd;
struct ib_mr *mr;
int ret;
if (!ib_dev->reg_dm_mr)
return -EOPNOTSUPP;
ret = uverbs_copy_from(&attr.offset, attrs, UVERBS_ATTR_REG_DM_MR_OFFSET);
if (ret)
return ret;
ret = uverbs_copy_from(&attr.length, attrs,
UVERBS_ATTR_REG_DM_MR_LENGTH);
if (ret)
return ret;
ret = uverbs_copy_from(&attr.access_flags, attrs,
UVERBS_ATTR_REG_DM_MR_ACCESS_FLAGS);
if (ret)
return ret;
if (!(attr.access_flags & IB_ZERO_BASED))
return -EINVAL;
ret = ib_check_mr_access(attr.access_flags);
if (ret)
return ret;
pd = uverbs_attr_get_obj(attrs, UVERBS_ATTR_REG_DM_MR_PD_HANDLE);
dm = uverbs_attr_get_obj(attrs, UVERBS_ATTR_REG_DM_MR_DM_HANDLE);
uobj = uverbs_attr_get(attrs, UVERBS_ATTR_REG_DM_MR_HANDLE)->obj_attr.uobject;
if (attr.offset > dm->length || attr.length > dm->length ||
attr.length > dm->length - attr.offset)
return -EINVAL;
mr = pd->device->reg_dm_mr(pd, dm, &attr, attrs);
if (IS_ERR(mr))
return PTR_ERR(mr);
mr->device = pd->device;
mr->pd = pd;
mr->dm = dm;
mr->uobject = uobj;
atomic_inc(&pd->usecnt);
atomic_inc(&dm->usecnt);
uobj->object = mr;
ret = uverbs_copy_to(attrs, UVERBS_ATTR_REG_DM_MR_RESP_LKEY, &mr->lkey,
sizeof(mr->lkey));
if (ret)
goto err_dereg;
ret = uverbs_copy_to(attrs, UVERBS_ATTR_REG_DM_MR_RESP_RKEY,
&mr->rkey, sizeof(mr->rkey));
if (ret)
goto err_dereg;
return 0;
err_dereg:
ib_dereg_mr(mr);
return ret;
}
static DECLARE_UVERBS_NAMED_METHOD(UVERBS_METHOD_DM_MR_REG,
&UVERBS_ATTR_IDR(UVERBS_ATTR_REG_DM_MR_HANDLE, UVERBS_OBJECT_MR,
UVERBS_ACCESS_NEW,
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)),
&UVERBS_ATTR_PTR_IN(UVERBS_ATTR_REG_DM_MR_OFFSET,
UVERBS_ATTR_TYPE(u64),
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)),
&UVERBS_ATTR_PTR_IN(UVERBS_ATTR_REG_DM_MR_LENGTH,
UVERBS_ATTR_TYPE(u64),
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)),
&UVERBS_ATTR_IDR(UVERBS_ATTR_REG_DM_MR_PD_HANDLE, UVERBS_OBJECT_PD,
UVERBS_ACCESS_READ,
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)),
&UVERBS_ATTR_PTR_IN(UVERBS_ATTR_REG_DM_MR_ACCESS_FLAGS,
UVERBS_ATTR_TYPE(u32),
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)),
&UVERBS_ATTR_IDR(UVERBS_ATTR_REG_DM_MR_DM_HANDLE, UVERBS_OBJECT_DM,
UVERBS_ACCESS_READ,
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)),
&UVERBS_ATTR_PTR_OUT(UVERBS_ATTR_REG_DM_MR_RESP_LKEY,
UVERBS_ATTR_TYPE(u32),
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)),
&UVERBS_ATTR_PTR_OUT(UVERBS_ATTR_REG_DM_MR_RESP_RKEY,
UVERBS_ATTR_TYPE(u32),
UA_FLAGS(UVERBS_ATTR_SPEC_F_MANDATORY)));
DECLARE_UVERBS_NAMED_OBJECT(UVERBS_OBJECT_MR,
/* 1 is used in order to free the MR after all the MWs */
&UVERBS_TYPE_ALLOC_IDR(1, uverbs_free_mr),
&UVERBS_METHOD(UVERBS_METHOD_DM_MR_REG));
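The range check in the DM MR registration handler above is written to avoid integer overflow: offset and length are each rejected against dm->length before the final length > dm->length - offset test, so the subtraction can never wrap. A small standalone C illustration of the same pattern (function and parameter names are illustrative):

#include <stdint.h>
#include <stdio.h>

/* Validate that [offset, offset + length) lies inside a region of
 * 'region_len' bytes without ever computing offset + length, which
 * could overflow a 64-bit value. */
static int range_ok(uint64_t offset, uint64_t length, uint64_t region_len)
{
        if (offset > region_len || length > region_len)
                return 0;
        /* Safe: region_len - offset cannot underflow after the check above. */
        if (length > region_len - offset)
                return 0;
        return 1;
}

int main(void)
{
        printf("%d\n", range_ok(16, 32, 64));         /* 1: fits */
        printf("%d\n", range_ok(48, 32, 64));         /* 0: runs past the end */
        printf("%d\n", range_ok(UINT64_MAX, 1, 64));  /* 0: rejected, no overflow */
        return 0;
}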
@@ -314,12 +314,11 @@ int bnxt_re_query_gid(struct ib_device *ibdev, u8 port_num,
 return rc;
 }
-int bnxt_re_del_gid(struct ib_device *ibdev, u8 port_num,
-unsigned int index, void **context)
+int bnxt_re_del_gid(const struct ib_gid_attr *attr, void **context)
 {
 int rc = 0;
 struct bnxt_re_gid_ctx *ctx, **ctx_tbl;
-struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+struct bnxt_re_dev *rdev = to_bnxt_re_dev(attr->device, ibdev);
 struct bnxt_qplib_sgid_tbl *sgid_tbl = &rdev->qplib_res.sgid_tbl;
 struct bnxt_qplib_gid *gid_to_del;
@@ -365,15 +364,14 @@ int bnxt_re_del_gid(struct ib_device *ibdev, u8 port_num,
 return rc;
 }
-int bnxt_re_add_gid(struct ib_device *ibdev, u8 port_num,
-unsigned int index, const union ib_gid *gid,
+int bnxt_re_add_gid(const union ib_gid *gid,
 const struct ib_gid_attr *attr, void **context)
 {
 int rc;
 u32 tbl_idx = 0;
 u16 vlan_id = 0xFFFF;
 struct bnxt_re_gid_ctx *ctx, **ctx_tbl;
-struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+struct bnxt_re_dev *rdev = to_bnxt_re_dev(attr->device, ibdev);
 struct bnxt_qplib_sgid_tbl *sgid_tbl = &rdev->qplib_res.sgid_tbl;
 if ((attr->ndev) && is_vlan_dev(attr->ndev))
@@ -718,8 +716,7 @@ struct ib_ah *bnxt_re_create_ah(struct ib_pd *ib_pd,
 grh->sgid_index);
 goto fail;
 }
-if (sgid_attr.ndev)
-dev_put(sgid_attr.ndev);
+dev_put(sgid_attr.ndev);
 /* Get network header type for this GID */
 nw_type = ib_gid_to_network_type(sgid_attr.gid_type, &sgid);
 switch (nw_type) {
@@ -1540,14 +1537,13 @@ int bnxt_re_post_srq_recv(struct ib_srq *ib_srq, struct ib_recv_wr *wr,
 ib_srq);
 struct bnxt_qplib_swqe wqe;
 unsigned long flags;
-int rc = 0, payload_sz = 0;
+int rc = 0;
 spin_lock_irqsave(&srq->lock, flags);
 while (wr) {
 /* Transcribe each ib_recv_wr to qplib_swqe */
 wqe.num_sge = wr->num_sge;
-payload_sz = bnxt_re_build_sgl(wr->sg_list, wqe.sg_list,
-wr->num_sge);
+bnxt_re_build_sgl(wr->sg_list, wqe.sg_list, wr->num_sge);
 wqe.wr_id = wr->wr_id;
 wqe.type = BNXT_QPLIB_SWQE_TYPE_RECV;
@@ -1698,7 +1694,7 @@ int bnxt_re_modify_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
 status = ib_get_cached_gid(&rdev->ibdev, 1,
 grh->sgid_index,
 &sgid, &sgid_attr);
-if (!status && sgid_attr.ndev) {
+if (!status) {
 memcpy(qp->qplib_qp.smac, sgid_attr.ndev->dev_addr,
 ETH_ALEN);
 dev_put(sgid_attr.ndev);
...
@@ -157,10 +157,8 @@ int bnxt_re_get_port_immutable(struct ib_device *ibdev, u8 port_num,
 void bnxt_re_query_fw_str(struct ib_device *ibdev, char *str);
 int bnxt_re_query_pkey(struct ib_device *ibdev, u8 port_num,
 u16 index, u16 *pkey);
-int bnxt_re_del_gid(struct ib_device *ibdev, u8 port_num,
-unsigned int index, void **context);
-int bnxt_re_add_gid(struct ib_device *ibdev, u8 port_num,
-unsigned int index, const union ib_gid *gid,
+int bnxt_re_del_gid(const struct ib_gid_attr *attr, void **context);
+int bnxt_re_add_gid(const union ib_gid *gid,
 const struct ib_gid_attr *attr, void **context);
 int bnxt_re_query_gid(struct ib_device *ibdev, u8 port_num,
 int index, union ib_gid *gid);
...
@@ -574,7 +574,6 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
 ibdev->get_port_immutable = bnxt_re_get_port_immutable;
 ibdev->get_dev_fw_str = bnxt_re_query_fw_str;
 ibdev->query_pkey = bnxt_re_query_pkey;
-ibdev->query_gid = bnxt_re_query_gid;
 ibdev->get_netdev = bnxt_re_get_netdev;
 ibdev->add_gid = bnxt_re_add_gid;
 ibdev->del_gid = bnxt_re_del_gid;
@@ -619,6 +618,7 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
 ibdev->get_hw_stats = bnxt_re_ib_get_hw_stats;
 ibdev->alloc_hw_stats = bnxt_re_ib_alloc_hw_stats;
+ibdev->driver_id = RDMA_DRIVER_BNXT_RE;
 return ib_register_device(ibdev, NULL);
 }
...
@@ -154,7 +154,7 @@ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
 attr->tqm_alloc_reqs[i * 4 + 3] = *(++tqm_alloc);
 }
-attr->is_atomic = 0;
+attr->is_atomic = false;
 bail:
 bnxt_qplib_rcfw_free_sbuf(rcfw, sbuf);
 return rc;
...
@@ -16,12 +16,3 @@ config INFINIBAND_CXGB3
 To compile this driver as a module, choose M here: the module
 will be called iw_cxgb3.
-config INFINIBAND_CXGB3_DEBUG
-bool "Verbose debugging output"
-depends on INFINIBAND_CXGB3
-default n
----help---
-This option causes the Chelsio RDMA driver to produce copious
-amounts of debug messages. Select this if you are developing
-the driver or trying to diagnose a problem.