  1. 01 Jul, 2024 1 commit
    • selftests/sigaltstack: Fix ppc64 GCC build · 17c743b9
      Michael Ellerman authored
      Building the sigaltstack test with GCC on 64-bit powerpc errors with:
      
        gcc -Wall     sas.c  -o /home/michael/linux/.build/kselftest/sigaltstack/sas
        In file included from sas.c:23:
        current_stack_pointer.h:22:2: error: #error "implement current_stack_pointer equivalent"
           22 | #error "implement current_stack_pointer equivalent"
              |  ^~~~~
        sas.c: In function ‘my_usr1’:
        sas.c:50:13: error: ‘sp’ undeclared (first use in this function); did you mean ‘p’?
           50 |         if (sp < (unsigned long)sstack ||
              |             ^~
      
      This happens because GCC doesn't define __ppc__ for 64-bit builds, only
      for 32-bit builds. Instead use __powerpc__ to detect powerpc builds; it
      is defined by both clang and GCC for 64-bit and 32-bit builds.
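
      A minimal sketch of the resulting arch guard in the selftest's
      current_stack_pointer.h, assuming the powerpc case simply binds the r1
      stack pointer register (the other arch cases are omitted here):

        /* current_stack_pointer.h (simplified sketch) */
        #if __powerpc__                       /* was __ppc__, which GCC defines only for 32-bit */
        register unsigned long sp asm("r1");  /* r1 is the stack pointer on powerpc */
        #else
        #error "implement current_stack_pointer equivalent"
        #endif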
      
      Fixes: 05107edc ("selftests: sigaltstack: fix -Wuninitialized")
      Cc: stable@vger.kernel.org # v6.3+
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://msgid.link/20240520062647.688667-1-mpe@ellerman.id.au
  2. 28 Jun, 2024 16 commits
  3. 17 Jun, 2024 3 commits
  4. 11 Jun, 2024 1 commit
  5. 04 Jun, 2024 4 commits
    • powerpc/mm/drmem: Silence drmem_init() early return · 11e6e6d8
      Nathan Lynch authored
      It's not an error or noteworthy condition if the
      "ibm,dynamic-reconfiguration-memory" node isn't present.
      
      Drop the needless message.
      Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://msgid.link/20240603-silence-drmem_init-v1-1-e9d71646bc3d@linux.ibm.com
    • powerpc/pseries/vas: Use usleep_range() to support HCALL delay · 43ac9f5c
      Haren Myneni authored
      The VAS allocate, modify and deallocate HCALLs return
      H_LONG_BUSY_ORDER_1_MSEC or H_LONG_BUSY_ORDER_10_MSEC when busy and
      expect the OS to reissue the HCALL after that delay. But msleep()
      will often sleep at least 20 msecs even though the hypervisor
      suggests reissuing these HCALLs after 1 or 10 msecs.

      The open and close VAS window functions hold a mutex and then issue
      these HCALLs, so the operations can take longer than necessary when
      multiple threads issue open or close window APIs simultaneously.
      This especially hurts performance when open/close APIs are repeated
      for each compression request.

      The number of tasks that can open or close VAS windows at the same
      time depends on the available VAS credits. For example, a 240-core
      system provides 4800 VAS credits, which means 4800 tasks can issue
      open VAS window HCALLs while contending for the mutex. Since each
      msleep() will often sleep more than 20 msecs, some tasks wait more
      than 120 secs to acquire the mutex, which can produce hung task
      traces in dmesg due to the mutex contention around the open/close
      HCALLs.

      Instead of msleep(), use usleep_range() to sleep close to the
      expected delay before reissuing the HCALL. Since each task then
      sleeps at most 10 msecs, more tasks can issue open/close VAS calls
      without triggering hung task traces in dmesg.
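
      A minimal sketch of the retry pattern, with a hypothetical
      issue_vas_hcall() wrapper standing in for the real allocate/modify/
      deallocate HCALL plumbing; H_IS_LONG_BUSY() and get_longbusy_msecs()
      are the existing hvcall helpers:

        /* Sketch only: the point is the sleep, not the HCALL details. */
        static long vas_hcall_with_delay(u64 domain_id, u64 win_id)
        {
                long rc;

                do {
                        rc = issue_vas_hcall(domain_id, win_id); /* hypothetical */
                        if (H_IS_LONG_BUSY(rc)) {
                                unsigned int delay_ms = get_longbusy_msecs(rc);

                                /* Sleep close to the hypervisor's 1ms/10ms
                                 * hint rather than the ~20ms minimum that
                                 * msleep() often rounds up to. */
                                usleep_range(delay_ms * 1000,
                                             delay_ms * 1000 + 1000);
                        }
                } while (H_IS_LONG_BUSY(rc));

                return rc;
        }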
      Signed-off-by: Haren Myneni <haren@linux.ibm.com>
      Suggested-by: Nathan Lynch <nathanl@linux.ibm.com>
      Reviewed-by: Nathan Lynch <nathanl@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://msgid.link/20240116055910.421605-1-haren@linux.ibm.com
    • powerpc/numa: Online a node if PHB is attached. · 11981816
      Nilay Shroff authored
      In the current design, a NUMA node is made online only if that node is
      attached to CPUs or memory. With this design, if any PCI/IO device is
      found to be attached to a NUMA node which is not online, then the NUMA
      node id of the corresponding PCI/IO device is set to NUMA_NO_NODE (-1).
      This may negatively impact the performance of a PCIe device whose
      assigned NUMA node is -1, because in that case we may not be able to
      accurately calculate the distance between two nodes.

      A multi-controller NVMe PCIe disk has an issue calculating the node
      distance if the PCIe NVMe controller is attached to a PCI host bridge
      whose NUMA node id is NUMA_NO_NODE. This patch helps fix that by
      ensuring a CPU/memory-less NUMA node is made online if it is attached
      to a PCI host bridge.
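
      A minimal sketch of the idea, using a hypothetical helper name; the
      intent is to online the (possibly CPU/memory-less) node reported by
      the PHB's device-tree node instead of falling back to NUMA_NO_NODE:

        /* Hypothetical helper: map a PHB device-tree node to an online node. */
        static int phb_to_online_nid(struct device_node *dn)
        {
                int nid = of_node_to_nid(dn);

                if (nid < 0 || !node_possible(nid))
                        return NUMA_NO_NODE;

                /* Online the node even if it has no CPUs or memory, so node
                 * distances for devices behind this PHB stay meaningful. */
                if (!node_online(nid))
                        node_set_online(nid);

                return nid;
        }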
      Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
      Reviewed-by: Srikar Dronamraju <srikar@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://msgid.link/20240517142531.3273464-3-nilay@linux.ibm.com
    • powerpc/pseries/iommu: Split Dynamic DMA Window to be used in Hybrid mode · ff5163bb
      Gaurav Batra authored
      Dynamic DMA Window (DDW) supports TCEs that are backed by 2MB page
      size. In most configurations, DDW is big enough to pre-map all of LPAR
      memory for IO. Pre-mapping of memory for DMA results in improvements in
      IO performance.
      
      Persistent memory, vPMEM, can be assigned to an LPAR as well. vPMEM is
      not contiguous with LPAR memory and is usually assigned at high memory
      addresses. This makes it impossible to pre-map both vPMEM and LPAR
      memory in the same DDW.
      
      For a dedicated adapter this limitation is not an issue. Dedicated
      adapters can have both a default DMA window, backed by 4K page size
      TCEs, and a DDW backed by 2MB page size TCEs. In this scenario, LPAR
      memory is pre-mapped in the DDW. Any DMA going to the vPMEM is routed
      via dynamically allocated TCEs in the default window.
      
      The issue arises with SR-IOV adapters. There is only one DMA window -
      either the default window or a DDW. If an LPAR has vPMEM assigned,
      memory is not pre-mapped in the DDW since TCEs need to be allocated
      for vPMEM as well. In this case, a DDW is created and TCEs are
      dynamically allocated for both vPMEM and LPAR memory.
      
      Today, a DDW is used in only a single mode - either direct mapped TCEs
      or dynamically mapped TCEs. This enhancement breaks a single DDW into
      2 regions -
      
      	1. First region to pre-map LPAR memory
      	2. Second region to dynamically allocate TCEs for IO to vPMEM
      
      The DDW is split only if it is big enough to pre-map the complete LPAR
      memory and still have some space left to dynamically map vPMEM. The
      maximum possible size DDW is created, as permitted by the hypervisor.
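
      A minimal sketch of the split decision, with hypothetical names and a
      2MB TCE page size assumed:

        /* Hypothetical layout: region 1 direct-maps LPAR memory with 2MB
         * TCEs, region 2 is left for dynamically mapped vPMEM TCEs. */
        struct ddw_split {
                u64 direct_map_size;    /* region 1: pre-mapped LPAR memory */
                u64 dynamic_size;       /* region 2: dynamic vPMEM mappings */
        };

        static bool ddw_split_window(u64 ddw_size, u64 lpar_mem_size,
                                     struct ddw_split *split)
        {
                /* Round the direct-mapped region up to the 2MB TCE size. */
                u64 direct = ALIGN(lpar_mem_size, SZ_2M);

                if (ddw_size <= direct)
                        return false;   /* not big enough: keep single mode */

                split->direct_map_size = direct;
                split->dynamic_size = ddw_size - direct;
                return true;
        }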
      Signed-off-by: Gaurav Batra <gbatra@linux.ibm.com>
      Reviewed-by: Brian King <brking@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://msgid.link/20240514014608.35537-1-gbatra@linux.ibm.com
  6. 03 Jun, 2024 1 commit
  7. 02 Jun, 2024 8 commits
  8. 01 Jun, 2024 6 commits