- 01 Jun, 2018 6 commits
-
-
Leon Romanovsky authored
There is no need to check the existence of mad_queue, because we already did a pointer dereference before the call to dequeue_mad(). Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Leon Romanovsky authored
There is no need to bring down the whole machine just because an unknown event was received. It is better to ignore it silently. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Leon Romanovsky authored
If the CM work list is empty, the work pointer will be NULL, so instead of crashing the kernel it is better to abort processing of the work items. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
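As a rough illustration (hypothetical structure and function names, not the actual ib_cm code), the fix amounts to fetching the first work item with a NULL-safe helper and bailing out when the list is empty:

    #include <linux/list.h>

    /* Sketch only: illustrative names, not the real ib_cm structures. */
    struct cm_work_item {
            struct list_head list;
            /* ... request payload ... */
    };

    static void process_pending_work(struct list_head *work_list)
    {
            struct cm_work_item *work;

            work = list_first_entry_or_null(work_list, struct cm_work_item, list);
            if (!work)
                    return; /* empty list: abort instead of dereferencing NULL */

            /* ... process and free the work item ... */
    }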
-
Leon Romanovsky authored
The cxgb3 driver properly handles errors returned by the IDR, so there is no need for a special case (a kernel crash) just because the IDR is full. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Leon Romanovsky authored
There is no need to crash the machine if an unknown work request was received in an SQP MAD. Cc: <stable@vger.kernel.org> # 3.6 Fixes: 37bfc7c1 ("IB/mlx4: SR-IOV multiplex and demultiplex MADs") Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Leon Romanovsky authored
Any steering QP is supposed to be above steering_qp_base (see mlx4_ib_steer_qp_alloc()); however, in case of a misalignment between SW and FW, this qp_base can be wrong. Use WARN() to catch such a situation without killing the machine. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
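In kernel code this pattern is typically expressed with WARN(), which logs a backtrace but keeps the system running; a minimal sketch with assumed variable names (qpn, steer_qp_base), not the exact mlx4 check:

    #include <linux/types.h>
    #include <linux/bug.h>

    /* Sketch: report a SW/FW mismatch loudly, but do not kill the machine. */
    static int check_steer_qp(u32 qpn, u32 steer_qp_base)
    {
            if (WARN(qpn < steer_qp_base,
                     "steering QP 0x%x is below base 0x%x (SW/FW mismatch)\n",
                     qpn, steer_qp_base))
                    return -EINVAL; /* previously a BUG() here */
            return 0;
    }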
-
- 31 May, 2018 7 commits
-
-
Colin Ian King authored
Trivial fix to spelling mistake in DP_ERR error message Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Sergey Gorenko authored
The iSER driver reduces max_sectors. For example, if you load the ib_iser module with max_sectors=1024, you will see that /sys/class/block/<bdev>/queue/max_hw_sectors_kb is 508. It is an incorrect value. The expected value is (max_sectors * sector_size) / 1024 = 512. Reducing max_sectors can cause performance degradation due to unnecessary splitting of IO requests. The number of pages per MR has been fixed here, so there is no longer any need to reduce max_sectors. Fixes: 9c674815 ("IB/iser: Fix max_sectors calculation") Signed-off-by: Sergey Gorenko <sergeygo@mellanox.com> Reviewed-by: Israel Rukshin <israelr@mellanox.com> Reviewed-by: Max Gurtovoy <maxg@mellanox.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Sagi Grimberg <sagi@grimberg.me> Acked-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Doug Ledford <dledford@redhat.com>
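For reference, with 512-byte sectors the expected sysfs value works out as max_hw_sectors_kb = (max_sectors * sector_size) / 1024 = (1024 * 512) / 1024 = 512, whereas the unpatched driver reports 508 because it shrinks max_sectors internally.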
-
Doug Ledford authored
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Wei Hu(Xavier) authored
This patch implemented the IB core disassociate_ucontext API. Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Wei Hu(Xavier) authored
This patch hoisted the common processing of the disassociate_ucontext callback into the IB core code; this code is common to every ib_device driver. Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com> Acked-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Wei Hu(Xavier) authored
This patch fixed the potential illegal operation when using the extend sge buffer cross page in post send operation. The bug will cause the calltrace as below. [ 3302.922107] Unable to handle kernel paging request at virtual address ffff00003b3a0004 [ 3302.930009] Mem abort info: [ 3302.932790] Exception class = DABT (current EL), IL = 32 bits [ 3302.938695] SET = 0, FnV = 0 [ 3302.941735] EA = 0, S1PTW = 0 [ 3302.944863] Data abort info: [ 3302.947729] ISV = 0, ISS = 0x00000047 [ 3302.951551] CM = 0, WnR = 1 [ 3302.954506] swapper pgtable: 4k pages, 48-bit VAs, pgd = ffff000009ea5000 [ 3302.961279] [ffff00003b3a0004] *pgd=00000023dfffe003, *pud=00000023dfffd003, *pmd=00000022dc84c003, *pte=0000000000000000 [ 3302.972224] Internal error: Oops: 96000047 [#1] SMP [ 3302.999509] CPU: 9 PID: 19628 Comm: roce_test_main Tainted: G OE 4.14.10 #1 [ 3303.007498] task: ffff80234df78000 task.stack: ffff00000f640000 [ 3303.013412] PC is at hns_roce_v2_post_send+0x690/0xe20 [hns_roce_pci] [ 3303.019843] LR is at hns_roce_v2_post_send+0x658/0xe20 [hns_roce_pci] [ 3303.026269] pc : [<ffff0000020694f8>] lr : [<ffff0000020694c0>] pstate: 804001c9 [ 3303.033649] sp : ffff00000f643870 [ 3303.036951] x29: ffff00000f643870 x28: ffff80232bfa9c00 [ 3303.042250] x27: ffff80234d909380 x26: ffff00003b37f0c0 [ 3303.047549] x25: 0000000000000000 x24: 0000000000000003 [ 3303.052848] x23: 0000000000000000 x22: 0000000000000000 [ 3303.058148] x21: 0000000000000101 x20: 0000000000000001 [ 3303.063447] x19: ffff80236163f800 x18: 0000000000000000 [ 3303.068746] x17: 0000ffff86b76fc8 x16: ffff000008301600 [ 3303.074045] x15: 000020a51c000000 x14: 3128726464615f65 [ 3303.079344] x13: 746f6d6572202c29 x12: 303035312879656b [ 3303.084643] x11: 723a6f666e692072 x10: 573a6f666e693a5d [ 3303.089943] x9 : 0000000000000004 x8 : ffff8023ce38b000 [ 3303.095242] x7 : ffff8023ce38b320 x6 : 0000000000000418 [ 3303.100541] x5 : ffff80232bfa9cc8 x4 : 0000000000000030 [ 3303.105839] x3 : 0000000000000100 x2 : 0000000000000200 [ 3303.111138] x1 : 0000000000000320 x0 : ffff00003b3a0000 [ 3303.116438] Process roce_test_main (pid: 19628, stack limit = 0xffff00000f640000) [ 3303.123906] Call trace: [ 3303.126339] Exception stack(0xffff00000f643730 to 0xffff00000f643870) [ 3303.215790] [<ffff0000020694f8>] hns_roce_v2_post_send+0x690/0xe20 [hns_roce_pci] [ 3303.223293] [<ffff0000021c3750>] rt_ktest_post_send+0x5d0/0x8b8 [rdma_test] [ 3303.230261] [<ffff0000021b3234>] exec_send_cmd+0x664/0x1350 [rdma_test] [ 3303.236881] [<ffff0000021b8b30>] rt_ktest_dispatch_cmd_3+0x1510/0x3790 [rdma_test] [ 3303.244455] [<ffff0000021bae54>] rt_ktest_dispatch_cmd_2+0xa4/0x118 [rdma_test] [ 3303.251770] [<ffff0000021bafec>] rt_ktest_dispatch_cmd+0x124/0xaa8 [rdma_test] [ 3303.258997] [<ffff0000021bbc3c>] rt_ktest_dev_write+0x2cc/0x568 [rdma_test] [ 3303.265947] [<ffff0000082ad688>] __vfs_write+0x60/0x18c [ 3303.271158] [<ffff0000082ad998>] vfs_write+0xa8/0x198 [ 3303.276196] [<ffff0000082adc7c>] SyS_write+0x6c/0xd4 [ 3303.281147] Exception stack(0xffff00000f643ec0 to 0xffff00000f644000) [ 3303.287573] 3ec0: 0000000000000003 0000fffffc85faa8 0000000000004e60 0000000000000000 [ 3303.295388] 3ee0: 0000000021fb2000 000000000000ffff eff0e3efe4e58080 0000fffffcc724fe [ 3303.303204] 3f00: 0000000000000040 1999999999999999 0101010101010101 0000000000000038 [ 3303.311019] 3f20: 0000000000000005 ffffffffffffffff 0d73757461747320 ffffffffffffffff [ 3303.318835] 3f40: 0000000000000000 0000000000459b00 0000fffffc85e360 000000000043d788 [ 3303.326650] 3f60: 0000000000000000 
0000000000000000 0000000000000000 0000000000000000 [ 3303.334465] 3f80: 0000000000000000 0000000000000000 0000000000000000 0000000000000000 [ 3303.342281] 3fa0: 0000000000000000 0000fffffc85e570 0000000000438804 0000fffffc85e570 [ 3303.350096] 3fc0: 0000ffff8553f618 0000000080000000 0000000000000003 0000000000000040 [ 3303.357911] 3fe0: 0000000000000000 0000000000000000 0000000000000000 0000000000000000 [ 3303.365729] [<ffff000008083808>] __sys_trace_return+0x0/0x4 [ 3303.371288] Code: b94008e9 34000129 b9400ce2 110006b5 (b9000402) [ 3303.377377] ---[ end trace fd5ab98b3325cf9a ]--- Reported-by: Jie Chen <chenjie103@huawei.com> Reported-by: Xiping Zhang (Francis) <zhangxiping3@huawei.com> Fixes: b1c15835("RDMA/hns: Get rid of virt_to_page and vmap calls after dma_alloc_coherent") Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Wei Hu(Xavier) authored
This patch added a reset process for RoCE in hip08. Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 29 May, 2018 5 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
Jason Gunthorpe authored
Leon Romanovsky says: ==================== Introduce a new mlx5-internal CQE format - the mini-CQE. It is a CQE in compressed form that holds the data needed to extract a single full CQE: a stride index, byte count and packet checksum. ==================== * mini_cqe: IB/mlx5: Introduce a new mini-CQE format IB/mlx5: Refactor CQE compression response net/mlx5: Exposing a new mini-CQE format Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
This once might have made sense when cache.c was in a different module from device.c, but today it is just obfuscation. Get rid of the wrappers and call roce_gid_mgmt_init()/cleanup() directly. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
-
Yonatan Cohen authored
The new mini-CQE format includes the stride index, byte count and packet checksum. Stride index is needed for striding WQ feature. This patch exposes this capability and enables its setting via mlx5 UHW data as part of query device and cq creation. Reviewed-by: Yishai Hadas <yishaih@mellanox.com> Reviewed-by: Guy Levi <guyle@mellanox.com> Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
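Purely as an illustration (the field names and layout below are assumptions, not the verified mlx5 hardware definition), a compressed entry carrying these three pieces of data could look like:

    #include <linux/types.h>

    /* Illustrative sketch of a compressed mini-CQE; not the exact mlx5 layout. */
    struct mini_cqe_example {
            __be16 checksum;   /* packet checksum */
            __be16 stride_idx; /* stride index for the striding WQ */
            __be32 byte_cnt;   /* byte count of the received packet */
    };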
-
Yonatan Cohen authored
Refactor the CQE compression response to be fully set only when it's really supported. There is no change from the user's perspective because resp.cqe_comp_caps.max_num was set to zero anyway. Reviewed-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Yonatan Cohen authored
The new mini-CQE format includes byte-count, checksum and stride index. Reviewed-by: Yishai Hadas <yishaih@mellanox.com> Reviewed-by: Guy Levi <guyle@mellanox.com> Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
- 28 May, 2018 4 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
Jason Gunthorpe authored
Update mlx4 to support user MR creation against read-only memory, previously it required the memory to be writable. Based on rdma for-rc due to dependencies. * mr_fix: (2 commits) IB/mlx4: Mark user MR as writable if actual virtual memory is writable IB/core: Make testing MR flags for writability a static inline function
-
Jack Morgenstein authored
To allow rereg_user_mr to modify the MR from read-only to writable without using get_user_pages again, we needed to define the initial MR as writable. However, this was originally done unconditionally, without taking into account the writability of the underlying virtual memory. As a result, any attempt to register a read-only MR over read-only virtual memory failed. To fix this, do not add the writable flag bit when the user virtual memory is not writable (e.g. const memory). However, when the underlying memory is NOT writable (and we therefore do not define the initial MR as writable), the IB core adds a "force writable" flag to its user-pages request. If this succeeds, the reg_user_mr caller gets a writable copy of the original pages. If the user-space caller then does a rereg_user_mr operation to enable writability, this will succeed. This should not be allowed, since the original virtual memory was not writable. Cc: <stable@vger.kernel.org> Fixes: 9376932d ("IB/mlx4_ib: Add support for user MR re-registration") Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Jack Morgenstein authored
Make the MR writability flags check, which is performed in umem.c, a static inline function in the file ib_verbs.h. This allows the function to be used by low-level InfiniBand drivers. Cc: <stable@vger.kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
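A hedged sketch of what such a static inline helper can look like (the helper name here is illustrative and the exact set of flags checked in ib_verbs.h may differ):

    #include <rdma/ib_verbs.h>

    /* Sketch: true if any access flag implies the MR must map writable pages. */
    static inline bool mr_access_writable(int access_flags)
    {
            return access_flags &
                   (IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE |
                    IB_ACCESS_REMOTE_ATOMIC | IB_ACCESS_MW_BIND);
    }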
-
Zhu Yanjun authored
The function rxe_remove_all() is only used in this module. No other module calls this function, so it is not necessary to export it. Signed-off-by: Zhu Yanjun <yanjun.zhu@oracle.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 24 May, 2018 18 commits
-
-
Wei Hu(Xavier) authored
This patch increases the timeout value for checking the CMQ status and uses the same value as the NIC driver to avoid the timeout being too short. Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Wei Hu(Xavier) authored
This patch modified the UAR allocation algorithm in the hns_roce_uar_alloc() function to avoid bitmap exhaustion. Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
Merge tag 'mlx5-updates-2018-05-17' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux into for-next mlx5-updates-2018-05-17 mlx5 core driver updates for both net-next and rdma-next branches. From Christophe JAILLET, the first three patches use kvfree where needed. From: Or Gerlitz <ogerlitz@mellanox.com> The next six patches from Roi and Co add support for a merged SR-IOV e-switch, which serves cases where both PFs, the VFs set on them, and both uplinks are to be used in a single v-switch SW model. When merged e-switch is supported, the per-port e-switch is logically merged into one e-switch that spans both physical ports and all the VFs. This model allows offloading TC eswitch rules between VFs belonging to different PFs (and hence having different eswitch affinity); it also sets some of the foundations needed for uplink LAG support. * tag 'mlx5-updates-2018-05-17' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux: net/mlx5e: Explicitly set source e-switch in offloaded TC rules net/mlx5: Add source e-switch owner net/mlx5e: Explicitly set destination e-switch in FDB rules net/mlx5: Add destination e-switch owner net/mlx5: Properly handle a vport destination when setting FTE net/mlx5: Add merged e-switch cap IB/mlx5: Use 'kvfree()' for memory allocated by 'kvzalloc()' net/mlx5: Eswitch, Use 'kvfree()' for memory allocated by 'kvzalloc()' net/mlx5: Vport, Use 'kvfree()' for memory allocated by 'kvzalloc()'
-
Parav Pandit authored
There are several places where the GID table is accessed. Add a tiny helper function, rdma_gid_table(), to avoid code duplication in such places. Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Parav Pandit authored
Instead of open coding memcmp() to check whether a given GID is zero or not, use a helper function to do so, and replace instances of memcpy(z,&zgid) with memset. Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
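A minimal sketch of such a helper, assuming the IB core's exported all-zero GID constant zgid (the in-tree helper name may differ):

    #include <linux/string.h>
    #include <rdma/ib_verbs.h>

    /* Sketch: true if the given GID is the all-zero GID. */
    static inline bool gid_is_zero(const union ib_gid *gid)
    {
            return !memcmp(gid, &zgid, sizeof(*gid));
    }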
-
Erez Shitrit authored
On fatal error the driver simulates CQE's for ULPs that rely on completion of all their posted work requests. For the GSI traffic, mlx5 has its own mechanism that sends the completions via software CQE's directly to the relevant CQ. This should be kept in the fatal error flow too, so the driver should simulate such CQE's with the specified error state in order to complete GSI QP work requests. Without the fix, the following deadlock might appear: schedule_timeout+0x274/0x350 wait_for_common+0xec/0x240 mcast_remove_one+0xd0/0x120 [ib_core] ib_unregister_device+0x12c/0x230 [ib_core] mlx5_ib_remove+0xc4/0x270 [mlx5_ib] mlx5_detach_device+0x184/0x1a0 [mlx5_core] mlx5_unload_one+0x308/0x340 [mlx5_core] mlx5_pci_err_detected+0x74/0xe0 [mlx5_core] Cc: <stable@vger.kernel.org> # 4.7 Fixes: 89ea94a7 ("IB/mlx5: Reset flow support for IB kernel ULPs") Signed-off-by: Erez Shitrit <erezsh@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Leon Romanovsky authored
In commit 357d23c811a7 ("Remove the obsolete libibcm library") in rdma-core [1], we removed the obsolete library which used the /dev/infiniband/ucmX interface. Following multiple syzkaller reports about non-sanitized user input in the UCMA module, a short audit reveals the same issues in the UCM module too. It is better to disable this interface in the kernel before the syzkaller team invests time and energy to harden this unused interface. [1] https://github.com/linux-rdma/rdma-core/pull/279 Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Parav Pandit authored
Remove duplicate declaration of gid_cache_wq. Fixes: d4186194 ("IB/core: Add generic function to extract IB speed from netdev") Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Leon Romanovsky authored
Remove various prints of VMA pointers. Reported-by: Michal Kalderon <Michal.Kalderon@cavium.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Reviewed-by: Michal Kalderon <michal.kalderon@cavium.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
oulijun authored
The lower 15 bits of the parameter of the db structure have different meanings when the db type is sq, rq or srq. Signed-off-by: Lijun Ou <oulijun@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Mike Marciniszyn authored
The following error occurs in a debug build when running MPI PSM: [ 307.415911] WARNING: CPU: 4 PID: 23867 at lib/dma-debug.c:1158 check_unmap+0x4ee/0xa20 [ 307.455661] ib_qib 0000:05:00.0: DMA-API: device driver failed to check map error[device address=0x00000000df82b000] [size=4096 bytes] [mapped as page] [ 307.517494] Modules linked in: [ 307.531584] ib_isert iscsi_target_mod ib_srpt target_core_mod rpcrdma sunrpc ib_srp scsi_transport_srp scsi_tgt ib_iser libiscsi ib_ipoib scsi_transport_iscsi rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm ib_qib intel_powerclamp coretemp rdmavt intel_rapl iosf_mbi kvm_intel kvm irqbypass crc32_pclmul ghash_clmulni_intel ipmi_ssif ib_core aesni_intel sg ipmi_si lrw gf128mul dca glue_helper ipmi_devintf iTCO_wdt gpio_ich hpwdt iTCO_vendor_support ablk_helper hpilo acpi_power_meter cryptd ipmi_msghandler ie31200_edac shpchp pcc_cpufreq lpc_ich pcspkr ip_tables xfs libcrc32c sd_mod crc_t10dif crct10dif_generic mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm ahci crct10dif_pclmul crct10dif_common drm crc32c_intel libahci tg3 libata serio_raw ptp i2c_core [ 307.846113] pps_core dm_mirror dm_region_hash dm_log dm_mod [ 307.866505] CPU: 4 PID: 23867 Comm: mpitests-IMB-MP Kdump: loaded Not tainted 3.10.0-862.el7.x86_64.debug #1 [ 307.911178] Hardware name: HP ProLiant DL320e Gen8, BIOS J05 11/09/2013 [ 307.944206] Call Trace: [ 307.956973] [<ffffffffbd9e915b>] dump_stack+0x19/0x1b [ 307.982201] [<ffffffffbd2a2f58>] __warn+0xd8/0x100 [ 308.005999] [<ffffffffbd2a2fdf>] warn_slowpath_fmt+0x5f/0x80 [ 308.034260] [<ffffffffbd5f667e>] check_unmap+0x4ee/0xa20 [ 308.060801] [<ffffffffbd41acaa>] ? page_add_file_rmap+0x2a/0x1d0 [ 308.090689] [<ffffffffbd5f6c4d>] debug_dma_unmap_page+0x9d/0xb0 [ 308.120155] [<ffffffffbd4082e0>] ? might_fault+0xa0/0xb0 [ 308.146656] [<ffffffffc07761a5>] qib_tid_free.isra.14+0x215/0x2a0 [ib_qib] [ 308.180739] [<ffffffffc0776bf4>] qib_write+0x894/0x1280 [ib_qib] [ 308.210733] [<ffffffffbd540b00>] ? __inode_security_revalidate+0x70/0x80 [ 308.244837] [<ffffffffbd53c2b7>] ? security_file_permission+0x27/0xb0 [ 308.266025] qib_ib0.8006: multicast join failed for ff12:401b:8006:0000:0000:0000:ffff:ffff, status -22 [ 308.323421] [<ffffffffbd46f5d3>] vfs_write+0xc3/0x1f0 [ 308.347077] [<ffffffffbd492a5c>] ? fget_light+0xfc/0x510 [ 308.372533] [<ffffffffbd47045a>] SyS_write+0x8a/0x100 [ 308.396456] [<ffffffffbd9ff355>] system_call_fastpath+0x1c/0x21 The code calls a qib_map_page() which has never correctly tested for a mapping error. Fix by testing for pci_dma_mapping_error() in all cases and properly handling the failure in the caller. Additionally, streamline qib_map_page() arguments to satisfy just the single caller. Cc: <stable@vger.kernel.org> Reviewed-by: Alex Estrin <alex.estrin@intel.com> Tested-by: Don Dutile <ddutile@redhat.com> Reviewed-by: Don Dutile <ddutile@redhat.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
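The general pattern being applied (a sketch, not qib's exact code) is to check pci_dma_mapping_error() right after the mapping and report the failure to the caller instead of ignoring it:

    #include <linux/pci.h>
    #include <linux/mm.h>
    #include <linux/errno.h>

    /* Sketch: map one page for the device and surface mapping failures. */
    static int map_one_page(struct pci_dev *hwdev, struct page *page,
                            dma_addr_t *daddr)
    {
            dma_addr_t addr;

            addr = pci_map_page(hwdev, page, 0, PAGE_SIZE, PCI_DMA_FROMDEVICE);
            if (pci_dma_mapping_error(hwdev, addr))
                    return -ENOMEM; /* caller must unwind, not use the bogus address */

            *daddr = addr;
            return 0;
    }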
-
Alex Estrin authored
The following error message occurs on a target host in a debug build during session login: [ 3524.411874] WARNING: CPU: 5 PID: 12063 at lib/dma-debug.c:1207 check_sync+0x4ec/0x5b0 [ 3524.421057] infiniband hfi1_0: DMA-API: device driver tries to sync DMA memory it has not allocated [device address=0x0000000000000000] [size=76 bytes] ......snip ..... [ 3524.535846] CPU: 5 PID: 12063 Comm: iscsi_np Kdump: loaded Not tainted 3.10.0-862.el7.x86_64.debug #1 [ 3524.546764] Hardware name: Dell Inc. PowerEdge R430/03XKDV, BIOS 1.2.6 06/08/2015 [ 3524.555740] Call Trace: [ 3524.559102] [<ffffffffa5fe915b>] dump_stack+0x19/0x1b [ 3524.565477] [<ffffffffa58a2f58>] __warn+0xd8/0x100 [ 3524.571557] [<ffffffffa58a2fdf>] warn_slowpath_fmt+0x5f/0x80 [ 3524.578610] [<ffffffffa5bf5b8c>] check_sync+0x4ec/0x5b0 [ 3524.585177] [<ffffffffa58efc3f>] ? set_cpus_allowed_ptr+0x5f/0x1c0 [ 3524.592812] [<ffffffffa5bf5cd0>] debug_dma_sync_single_for_cpu+0x80/0x90 [ 3524.601029] [<ffffffffa586add3>] ? x2apic_send_IPI_mask+0x13/0x20 [ 3524.608574] [<ffffffffa585ee1b>] ? native_smp_send_reschedule+0x5b/0x80 [ 3524.616699] [<ffffffffa58e9b76>] ? resched_curr+0xf6/0x140 [ 3524.623567] [<ffffffffc0879af0>] isert_create_send_desc.isra.26+0xe0/0x110 [ib_isert] [ 3524.633060] [<ffffffffc087af95>] isert_put_login_tx+0x55/0x8b0 [ib_isert] [ 3524.641383] [<ffffffffa58ef114>] ? try_to_wake_up+0x1a4/0x430 [ 3524.648561] [<ffffffffc098cfed>] iscsi_target_do_tx_login_io+0xdd/0x230 [iscsi_target_mod] [ 3524.658557] [<ffffffffc098d827>] iscsi_target_do_login+0x1a7/0x600 [iscsi_target_mod] [ 3524.668084] [<ffffffffa59f9bc9>] ? kstrdup+0x49/0x60 [ 3524.674420] [<ffffffffc098e976>] iscsi_target_start_negotiation+0x56/0xc0 [iscsi_target_mod] [ 3524.684656] [<ffffffffc098c2ee>] __iscsi_target_login_thread+0x90e/0x1070 [iscsi_target_mod] [ 3524.694901] [<ffffffffc098ca50>] ? __iscsi_target_login_thread+0x1070/0x1070 [iscsi_target_mod] [ 3524.705446] [<ffffffffc098ca50>] ? __iscsi_target_login_thread+0x1070/0x1070 [iscsi_target_mod] [ 3524.715976] [<ffffffffc098ca78>] iscsi_target_login_thread+0x28/0x60 [iscsi_target_mod] [ 3524.725739] [<ffffffffa58d60ff>] kthread+0xef/0x100 [ 3524.732007] [<ffffffffa58d6010>] ? insert_kthread_work+0x80/0x80 [ 3524.739540] [<ffffffffa5fff1b7>] ret_from_fork_nospec_begin+0x21/0x21 [ 3524.747558] [<ffffffffa58d6010>] ? insert_kthread_work+0x80/0x80 [ 3524.755088] ---[ end trace 23f8bf9238bd1ed8 ]--- [ 3595.510822] iSCSI/iqn.1994-05.com.redhat:537fa56299: Unsupported SCSI Opcode 0xa3, sending CHECK_CONDITION. The code calls dma_sync on login_tx_desc->dma_addr prior to initializing it with dma-mapped address. login_tx_desc is a part of iser_conn structure and is used only once during login negotiation, so the issue is fixed by eliminating dma_sync call for this buffer using a special case routine. Cc: <stable@vger.kernel.org> Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Reviewed-by: Don Dutile <ddutile@redhat.com> Signed-off-by: Alex Estrin <alex.estrin@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Mike Marciniszyn authored
Given we are dealing with nano-second level timers, when the timer pops, ensure it happens on the CPU which caused the timer to be set in the first place. This avoids excessive jitter from the desired expiration time by avoiding the cost of switching our context to another CPU that is cache cold for this given timer. Reviewed-by: Kaike Wan <kaike.wan@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
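One hedged way to express this with the hrtimer API (a sketch; the hfi1/rdmavt code may pin its timers differently) is to arm the timer in pinned mode so it expires on the CPU that armed it:

    #include <linux/hrtimer.h>
    #include <linux/ktime.h>

    /* Sketch: arm a nanosecond-resolution timer that fires on the arming CPU. */
    static void arm_timer_on_this_cpu(struct hrtimer *timer, u64 delay_ns)
    {
            hrtimer_start(timer, ns_to_ktime(delay_ns), HRTIMER_MODE_REL_PINNED);
    }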
-
Michael J. Ruhl authored
For errorinfo MAD requests, the response has a 0 port number left over from a memset. Instead we should always set the port number in the response. Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Mike Marciniszyn authored
The knowledge of the internal workings of the expect receive is too distributed. Fix by: - right size several rcd fields associated with expect receive - making an init entrance to init all the lists - consolidate all the allocations into an array anchored in the rcd Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Reviewed-by: Kaike Wan <kaike.wan@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Don Hiatt authored
Add trace support for 16B Management Packets. Reviewed-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Don Hiatt <don.hiatt@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Don Hiatt authored
16B Management Packets (L4=0x08) replace the BTH and DETH of normal MAD packets with a header containing the source and destination queue pair numbers; fields that were originally retrieved from the BTH/DETH are now populated from this header as well as from the 16B LRH (e.g. pkey). 16B Management Packets are used as an optimized management format on 16B fabrics. These management packets have an opcode of IB_OPCODE_UD_SEND_ONLY, a fixed 3-byte pad, and a header length of 24 bytes. The decision as to when we send a management packet is based upon either the source or destination queue pair number being 0 or 1. Reviewed-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Don Hiatt <don.hiatt@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
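The send-side decision described above reduces to a simple check (a sketch with hypothetical names, not hfi1's actual code):

    #include <linux/types.h>

    /* Sketch: use the 16B management format when either end is QP0 or QP1. */
    static bool use_16b_mgmt_format(u32 src_qpn, u32 dst_qpn)
    {
            return src_qpn <= 1 || dst_qpn <= 1;
    }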
-
Don Hiatt authored
Add 16B Management Packet definition. This optimized packet format replaces the ib_other_headers and BTH with a source and destination QP number. To support these packets we introduce struct opa_16b_mgmt into the struct hfi1_16b_header. This packet format is only used for MAD packets using the IB_OPCODE_UD_SEND_ONLY opcode on QP0/1. The original 16B implementation failed to use 16B management packets so now we add their definition. Reviewed-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Don Hiatt <don.hiatt@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
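A hedged sketch of such a header definition (field names here are assumptions; the actual struct opa_16b_mgmt in the hfi1 driver may differ), carrying only the destination and source QP numbers in place of the BTH/DETH:

    #include <linux/types.h>

    /* Sketch: 16B management header body with just the two QP numbers. */
    struct opa_16b_mgmt_example {
            __be32 dest_qpn; /* destination queue pair number */
            __be32 src_qpn;  /* source queue pair number */
    };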
-