- 07 Sep, 2019 1 commit
-
-
Daniel Vetter authored
This is a similar idea to the fs_reclaim fake lockdep lock. It's fairly easy to provoke a specific notifier to be run on a specific range: just prep it, and then munmap() it. A bit harder, but still doable, is to provoke the mmu notifiers for all the various callchains that might lead to them. But both at the same time is really hard to reliably hit, especially when you want to exercise paths like direct reclaim or compaction, where it's not easy to control what exactly will be unmapped.

By introducing a lockdep map to tie them all together we allow lockdep to see a lot more dependencies, without having to actually hit them in a single callchain while testing.

On Jason's suggestion this is rolled out for both invalidate_range_start and invalidate_range_end. They both have the same calling context, hence we can share the same lockdep map. Note that the annotation for invalidate_range_start is outside of the mm_has_notifiers(), to make sure lockdep is informed about all paths leading to this context irrespective of whether mmu notifiers are present for a given context. We don't do that on the invalidate_range_end side to avoid paying the overhead twice; there the lockdep annotation is pushed down behind the mm_has_notifiers() check.

Link: https://lore.kernel.org/r/20190826201425.17547-2-daniel.vetter@ffwll.ch
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
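A rough sketch of the annotation pattern described above, modeled on the existing fs_reclaim fake lock; the symbol names here are illustrative, not necessarily the ones used by the patch:

    /* Illustrative only: a fake lockdep map tying all invalidate paths together. */
    static struct lockdep_map mn_invalidate_map =
            STATIC_LOCKDEP_MAP_INIT("mmu_notifier_invalidate_range_start",
                                    &mn_invalidate_map);

    static inline void
    mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
    {
            lock_map_acquire(&mn_invalidate_map);   /* outside mm_has_notifiers() */
            if (mm_has_notifiers(range->mm))
                    __mmu_notifier_invalidate_range_start(range);
            lock_map_release(&mn_invalidate_map);
    }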
-
- 28 Aug, 2019 1 commit
-
-
Christoph Hellwig authored
No modular code uses these, which makes a lot of sense given the wrappers around them are only called by core mm code.
Link: https://lore.kernel.org/r/20190828142109.29012-1-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 27 Aug, 2019 2 commits
-
-
Ralph Campbell authored
Normally, callers to handle_mm_fault() are supposed to check the vma->vm_flags first. hmm_range_fault() checks for VM_READ but doesn't check for VM_WRITE if the caller requests a page to be faulted in with write permission (via the hmm_range.pfns[] value). If the vma is write protected, this can result in an infinite loop:

    hmm_range_fault()
      walk_page_range()
        ...
          hmm_vma_walk_hole()
            hmm_vma_walk_hole_()
              hmm_vma_do_fault()
                handle_mm_fault(FAULT_FLAG_WRITE)
                /* returns VM_FAULT_WRITE */
              /* returns -EBUSY */
            /* returns -EBUSY */
          /* returns -EBUSY */
      /* loops on -EBUSY and range->valid */

Prevent this by checking for vma->vm_flags & VM_WRITE before calling handle_mm_fault().
Link: https://lore.kernel.org/r/20190823221753.2514-3-rcampbell@nvidia.com
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
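A minimal sketch of the guard described above; where exactly it sits in the hmm fault path and the exact error code are assumptions here:

    /* Sketch: check writability before asking handle_mm_fault() for a write. */
    if (write_fault && !(vma->vm_flags & VM_WRITE))
            return -EPERM;
    ret = handle_mm_fault(vma, addr, flags);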
-
Ralph Campbell authored
Although hmm_range_fault() calls find_vma() to make sure that a vma exists before calling walk_page_range(), hmm_vma_walk_hole() can still be called with walk->vma == NULL if the start and end address are not contained within the vma range.

    hmm_range_fault()          /* calls find_vma() but no range check */
      walk_page_range()        /* calls find_vma(), sets walk->vma = NULL */
        __walk_page_range()
          walk_pgd_range()
            walk_p4d_range()
              walk_pud_range()
                hmm_vma_walk_hole()
                  hmm_vma_walk_hole_()
                    hmm_vma_do_fault()
                      handle_mm_fault(vma=0)

Link: https://lore.kernel.org/r/20190823221753.2514-2-rcampbell@nvidia.com
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
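The corresponding guard is tiny; a sketch, assuming it lives at the top of the hole handler:

    /* Sketch: the page walk found a hole outside any vma; don't try to fault it. */
    if (!walk->vma)
            return -EFAULT;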
-
- 23 Aug, 2019 1 commit
-
-
Yang, Philip authored
hmm_range_fault() may return NULL pages because some of the pfns are equal to HMM_PFN_NONE. This happens randomly under memory pressure. The reason is that during the swapped-out page pte path, hmm_vma_handle_pte() doesn't update the fault variable from cpu_flags, so it fails to call hmm_vma_do_fault() to swap the page in. The fix is to call hmm_pte_need_fault() to update the fault variable.
Fixes: 74eee180 ("mm/hmm/mirror: device page fault handler")
Link: https://lore.kernel.org/r/20190815205227.7949-1-Philip.Yang@amd.com
Signed-off-by: Philip Yang <Philip.Yang@amd.com>
Reviewed-by: "Jérôme Glisse" <jglisse@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
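A hedged sketch of the swapped-out-pte branch after such a fix; the helper arguments are approximated from the surrounding hmm code, not quoted from the patch:

    /* Sketch: recompute the fault decision instead of trusting the stale value. */
    if (!non_swap_entry(entry)) {
            cpu_flags = pte_to_hmm_pfn_flags(range, pte);
            hmm_pte_need_fault(hmm_vma_walk, orig_pfn, cpu_flags,
                               &fault, &write_fault);
            if (fault || write_fault)
                    goto fault;
            return 0;
    }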
-
- 21 Aug, 2019 16 commits
-
-
Jason Gunthorpe authored
mmu_notifier_unregister_no_release() and mmu_notifier_call_srcu() no longer have any users, they have all been converted to use mmu_notifier_put(). So delete this difficult to use interface.
Link: https://lore.kernel.org/r/20190806231548.25242-12-jgg@ziepe.ca
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
At this point the ucontext is only being stored to access the ib_device, so just store the ib_device directly instead. This is more natural and logical as the umem has nothing to do with the ucontext.
Link: https://lore.kernel.org/r/20190806231548.25242-8-jgg@ziepe.ca
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
This is a significant simplification, no extra list is kept per FD, and the interval tree is now shared between all the ucontexts, reducing overhead if there are multiple ucontexts active.
Link: https://lore.kernel.org/r/20190806231548.25242-7-jgg@ziepe.ca
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
From rdma.git Jason Gunthorpe says:

====================
This is a collection of general cleanups for ODP to clarify some of the flows around umem creation and use of the interval tree.
====================

The branch is based on v5.3-rc5 due to dependencies, and is being taken into hmm.git due to dependencies in the next patches.

* odp_fixes:
  RDMA/mlx5: Use odp instead of mr->umem in pagefault_mr
  RDMA/mlx5: Use ib_umem_start instead of umem.address
  RDMA/core: Make invalidate_range a device operation
  RDMA/odp: Use kvcalloc for the dma_list and page_list
  RDMA/odp: Check for overflow when computing the umem_odp end
  RDMA/odp: Provide ib_umem_odp_release() to undo the allocs
  RDMA/odp: Split creating a umem_odp from ib_umem_get
  RDMA/odp: Make the three ways to create a umem_odp clear
  RMDA/odp: Consolidate umem_odp initialization
  RDMA/odp: Make it clearer when a umem is an implicit ODP umem
  RDMA/odp: Iterate over the whole rbtree directly
  RDMA/odp: Use the common interval tree library instead of generic
  RDMA/mlx5: Fix MR npages calculation for IB_ACCESS_HUGETLB

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
These are the same thing since mr always comes from odp->private. It is confusing to reference the same memory via two names.
Link: https://lore.kernel.org/r/20190819111710.18440-13-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
These are subtly different, the address is the original VA requested during umem_get, while ib_umem_start() is the version that is rounded to the proper page size, ie is the true start of the umem's dma map.
Link: https://lore.kernel.org/r/20190819111710.18440-12-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
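An illustration of the distinction, as a sketch (the rounding expression is paraphrased, not quoted from the tree):

    /* umem.address keeps the VA the caller passed to umem_get();        */
    /* ib_umem_start() is that VA rounded down to the umem's page size.  */
    start = ALIGN_DOWN(umem->address, 1UL << page_shift);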
-
Moni Shoua authored
The callback function 'invalidate_range' is implemented in a driver so the place for it is in the ib_device_ops structure and not in ib_ucontext.
Signed-off-by: Moni Shoua <monis@mellanox.com>
Reviewed-by: Guy Levi <guyle@mellanox.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Link: https://lore.kernel.org/r/20190819111710.18440-11-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
There is no specific need for these to be in the valloc space, let the system decide automatically how to do the allocation.
Link: https://lore.kernel.org/r/20190819111710.18440-10-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
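A sketch of the allocation after the change, assuming the usual ib_umem_odp field names and an illustrative npfns count; kvcalloc() only falls back to vmalloc when a contiguous allocation is not available:

    /* Sketch: let the allocator pick kmalloc vs. vmalloc per request. */
    umem_odp->page_list = kvcalloc(npfns, sizeof(*umem_odp->page_list),
                                   GFP_KERNEL);
    umem_odp->dma_list  = kvcalloc(npfns, sizeof(*umem_odp->dma_list),
                                   GFP_KERNEL);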
-
Jason Gunthorpe authored
Since the page size can be extended in the ODP case by IB_ACCESS_HUGETLB the existing overflow checks done by ib_umem_get() are not sufficient. Check for overflow again. Further, remove the unchecked math from the inlines and just use the precomputed value stored in the interval_tree_node.
Link: https://lore.kernel.org/r/20190819111710.18440-9-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
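The shape of such a recheck, sketched with the generic overflow helpers (variable names are illustrative):

    /* Sketch: redo the addition with overflow checking before rounding the
     * end up to the (possibly huge) ODP page size. */
    unsigned long end;

    if (check_add_overflow(umem_odp->umem.address,
                           (unsigned long)umem_odp->umem.length, &end))
            return -EOVERFLOW;
    end = ALIGN(end, page_size);
    if (unlikely(end < page_size))
            return -EOVERFLOW;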
-
Jason Gunthorpe authored
Now that there are allocator APIs that return the ib_umem_odp directly it should be freed through a umem_odp free'er as well.
Link: https://lore.kernel.org/r/20190819111710.18440-8-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
This is the last creation API that is overloaded for both cases; there is very little code sharing, and a driver has to be specifically ready for a umem_odp to be created in order to use the odp version.
Link: https://lore.kernel.org/r/20190819111710.18440-7-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
The three paths to build the umem_odps are kind of muddled, they are:

- As a normal ib_mr umem
- As a child in an implicit ODP umem tree
- As the root of an implicit ODP umem tree

Only the first two are actually umem's, the last is an abuse. The implicit case can only be triggered by explicit driver request, it should never be co-mingled with the normal case. While we are here, make sensible function names and add some comments to make this clearer.
Link: https://lore.kernel.org/r/20190819111710.18440-6-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
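The split ends up looking roughly like three dedicated entry points; the prototypes below are a sketch with approximate signatures:

    /* Sketch: one creation helper per case, instead of one overloaded path. */
    struct ib_umem_odp *ib_umem_odp_get(struct ib_udata *udata,
                                        unsigned long addr, size_t size,
                                        int access);            /* normal MR umem */
    struct ib_umem_odp *ib_umem_odp_alloc_implicit(struct ib_udata *udata,
                                                   int access); /* implicit root  */
    struct ib_umem_odp *ib_umem_odp_alloc_child(struct ib_umem_odp *root,
                                                unsigned long addr,
                                                size_t size);   /* implicit child */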
-
Jason Gunthorpe authored
This is done in two different places, consolidate all the post-allocation initialization into a single function.
Link: https://lore.kernel.org/r/20190819111710.18440-5-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
Implicit ODP umems are special, they don't have any page lists, they don't exist in the interval tree and they are never DMA mapped. Instead of trying to guess this based on a zero length use an explicit flag. Further, do not allow non-implicit umems to be 0 size.
Link: https://lore.kernel.org/r/20190819111710.18440-4-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
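A sketch of what the explicit marker looks like in use; treat the field name as an assumption:

    /* Sketch: mark the implicit root explicitly instead of testing length == 0. */
    struct ib_umem_odp {
            /* ... */
            bool is_implicit_odp;
    };

    if (umem_odp->is_implicit_odp)
            return;   /* no page list, never in the interval tree, never DMA mapped */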
-
Jason Gunthorpe authored
Instead of intersecting a full interval, just iterate over every element directly. This is faster and clearer.
Link: https://lore.kernel.org/r/20190819111710.18440-3-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
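For illustration, a direct rbtree walk of this kind looks like the following sketch (container and field names are assumptions):

    /* Sketch: visit every umem_odp in the tree rather than querying the
     * interval tree over the whole address range. */
    for (node = rb_first_cached(&per_mm->umem_tree); node; node = rb_next(node)) {
            struct ib_umem_odp *umem_odp =
                    rb_entry(node, struct ib_umem_odp, interval_tree.rb);
            /* ... act on umem_odp ... */
    }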
-
Jason Gunthorpe authored
ODP is working with userspace VA's in the interval tree which always fit into an unsigned long, so we can use the common code. This comes at a cost of a 16 byte increase in ib_umem_odp struct size due to storing the interval tree start/last in addition to the umem addr/length. However these values were computed and are performance critical for the interval lookup, so this seems like a worthwhile trade off.
Removes 2k of .text from the kernel.
Link: https://lore.kernel.org/r/20190819111710.18440-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
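With the common library the lookup side reduces to plain interval_tree calls; a sketch, with field names approximated:

    /* Sketch: start/last live in the interval_tree_node, so range queries are
     * just the generic helpers. */
    for (node = interval_tree_iter_first(&per_mm->umem_tree, start, last);
         node; node = interval_tree_iter_next(node, start, last)) {
            struct ib_umem_odp *umem_odp =
                    container_of(node, struct ib_umem_odp, interval_tree);
            /* ... invalidate the overlapping range ... */
    }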
-
- 20 Aug, 2019 19 commits
-
-
Jason Gunthorpe authored
When ODP is enabled with IB_ACCESS_HUGETLB then the required pages should be calculated based on the extent of the MR, which is rounded to the nearest huge page alignment.
Fixes: d2183c6f ("RDMA/umem: Move page_shift from ib_umem to ib_odp_umem")
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Link: https://lore.kernel.org/r/20190815083834.9245-5-leon@kernel.org
Signed-off-by: Doug Ledford <dledford@redhat.com>
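The intended arithmetic, sketched generically (not quoting the mlx5 code):

    /* Sketch: count system pages over the MR extent after rounding it out
     * to the ODP page size in use (huge-page sized with IB_ACCESS_HUGETLB). */
    npages = (ALIGN(addr + length, 1UL << page_shift) -
              ALIGN_DOWN(addr, 1UL << page_shift)) >> PAGE_SHIFT;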
-
Christoph Hellwig authored
The kvmppc ultravisor code wants a device private memory pool that is system wide and not attached to a device. Instead of faking one up, provide a low-level memremap_pages for it.
Link: https://lore.kernel.org/r/20190818090557.17853-5-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Tested-by: Bharata B Rao <bharata@linux.ibm.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
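Roughly how a caller without a struct device would use the non-devm entry point; the pgmap setup details here are assumptions for illustration:

    /* Sketch: a system-wide device-private pool, no struct device involved. */
    pgmap->type = MEMORY_DEVICE_PRIVATE;
    pgmap->res = *res;                 /* region reserved beforehand */
    addr = memremap_pages(pgmap, numa_node_id());
    if (IS_ERR(addr))
            return PTR_ERR(addr);
    /* ... tear down later with the matching non-devm release path ... */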
-
Christoph Hellwig authored
Just clean up for early failures and then piggyback on devm_memremap_pages_release. This helps with a pending non-device-managed version of devm_memremap_pages.
Link: https://lore.kernel.org/r/20190818090557.17853-4-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Tested-by: Bharata B Rao <bharata@linux.ibm.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Christoph Hellwig authored
The dev field in struct dev_pagemap is only used to print dev_name in two places, which are at best nice to have. Just remove the field and thus the name in those two messages.
Link: https://lore.kernel.org/r/20190818090557.17853-3-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Tested-by: Bharata B Rao <bharata@linux.ibm.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Christoph Hellwig authored
Factor out the guts of devm_request_free_mem_region so that we can implement both a device-managed and a manually released version as tiny wrappers around it.
Link: https://lore.kernel.org/r/20190818090557.17853-2-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Tested-by: Bharata B Rao <bharata@linux.ibm.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
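The wrapper pattern this describes, as a sketch; the internal helper name and exact prototypes are assumptions:

    /* Sketch: both public variants are thin wrappers over one worker. */
    static struct resource *__request_free_mem_region(struct device *dev,
                    struct resource *base, unsigned long size,
                    const char *name);  /* finds and requests a free range */

    struct resource *devm_request_free_mem_region(struct device *dev,
                    struct resource *base, unsigned long size)
    {
            return __request_free_mem_region(dev, base, size, dev_name(dev));
    }

    struct resource *request_free_mem_region(struct resource *base,
                    unsigned long size, const char *name)
    {
            return __request_free_mem_region(NULL, base, size, name);
    }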
-
Daniel Vetter authored
Just a bit of paranoia, since if we start pushing this deep into callchains it's hard to spot all places where an mmu notifier implementation might fail when it's not allowed to.

Inspired by some confusion we had discussing i915 mmu notifiers and whether we could use the newly-introduced return value to handle some corner cases. Until we realized that these are only for when a task has been killed by the oom reaper.

An alternative approach would be to split the callback into two versions, one with the int return value, and the other with void return value like in older kernels. But that's a lot more churn for fairly little gain I think.

Summary from the m-l discussion on why we want something at warning level: This allows automated tooling in CI to catch bugs without humans having to look at everything. If we just upgrade the existing pr_info to a pr_warn, then we'll have false positives. And as-is, no one will ever spot the problem since it's lost in the massive amounts of overall dmesg noise.

Link: https://lore.kernel.org/r/20190814202027.18735-2-daniel.vetter@ffwll.ch
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
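The kind of check being added, sketched around the notifier dispatch loop (placement and exact condition are approximations):

    /* Sketch: a callback may only fail when the range is non-blockable
     * (the oom reaper path), and then only with -EAGAIN. */
    int _ret = mn->ops->invalidate_range_start(mn, range);

    if (_ret) {
            WARN_ON(mmu_notifier_range_blockable(range) || _ret != -EAGAIN);
            ret = _ret;
    }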
-
Christoph Hellwig authored
CONFIG_MIGRATE_VMA_HELPER guards helpers that are required for proper device private memory support. Remove the option and just check for CONFIG_DEVICE_PRIVATE instead.
Link: https://lore.kernel.org/r/20190814075928.23766-11-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Christoph Hellwig authored
No one ever checks this flag, and we could easily get that information from the page if needed.
Link: https://lore.kernel.org/r/20190814075928.23766-10-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Christoph Hellwig authored
Now that we can rely on errors in the normal control flow there is no need for this flag, remove it.
Link: https://lore.kernel.org/r/20190814075928.23766-9-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Christoph Hellwig authored
Factor the main copy page to vram routine out into a helper that acts on a single page and which doesn't require the nouveau_dmem_migrate structure for argument passing. As an added benefit the new version only allocates the dma address array once and reuses it for each subsequent chunk of work.
Link: https://lore.kernel.org/r/20190814075928.23766-8-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Christoph Hellwig authored
Factor the main copy page to ram routine out into a helper that acts on a single page and which doesn't require the nouveau_dmem_fault structure for argument passing. Also remove the loop over multiple pages as we only handle one at the moment, although the structure of the main worker function makes it relatively easy to add multi page support back if needed in the future. But at least for now this avoids the need to dynamically allocate memory for the dma addresses in what is essentially the page fault path.
Link: https://lore.kernel.org/r/20190814075928.23766-7-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Christoph Hellwig authored
nouveau_dmem_migrate_vma and nouveau_dmem_convert_pfn are only called when CONFIG_DRM_NOUVEAU_SVM is enabled, so there is no need to provide !CONFIG_DRM_NOUVEAU_SVM stubs for them.
Link: https://lore.kernel.org/r/20190814075928.23766-6-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Christoph Hellwig authored
Factor out the end of fencing logic from the two migration routines.
Link: https://lore.kernel.org/r/20190814075928.23766-5-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Christoph Hellwig authored
Factor out the repeated device memory address calculation into a helper.
Link: https://lore.kernel.org/r/20190814075928.23766-4-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Christoph Hellwig authored
When we start a new batch of dma_map operations we need to reset dma_nr, as we start filling a newly allocated array.
Link: https://lore.kernel.org/r/20190814075928.23766-3-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Christoph Hellwig authored
There isn't any good reason to pass callbacks to migrate_vma. Instead we can just export the three steps done by this function to drivers and let them sequence the operation without callbacks. This removes a lot of boilerplate code as-is, and will allow the drivers to drastically improve code flow and error handling further on.
Link: https://lore.kernel.org/r/20190814075928.23766-2-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
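The resulting driver-side sequence looks roughly like the sketch below; the struct migrate_vma field layout is approximated here:

    /* Sketch: the driver sequences the three exported steps itself. */
    struct migrate_vma args = {
            .vma   = vma,
            .start = start,
            .end   = end,
            .src   = src_pfns,
            .dst   = dst_pfns,
    };

    ret = migrate_vma_setup(&args);   /* collect and isolate source pages */
    if (ret)
            return ret;
    /* allocate destination pages and copy the data here */
    migrate_vma_pages(&args);         /* commit pages whose copy succeeded */
    migrate_vma_finalize(&args);      /* restore what didn't migrate, drop refs */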
-
Jason Gunthorpe authored
Jason Gunthorpe says:

====================
Add mmu_notifier_get/put for managing mmu notifier registrations

This series introduces a new registration flow for mmu_notifiers based on the idea that the user would like to get a single refcounted piece of memory for a mm, keyed to its use. For instance many users of mmu_notifiers use an interval tree or similar to dispatch notifications to some object. There are many objects but only one notifier subscription per mm holding the tree.

Of the 12 places that call mmu_notifier_register:

- 7 are maintaining some kind of obvious mapping of mm_struct to mmu_notifier registration, ie in some linked list or hash table. Of the 7 this series converts 4 (gru, hmm, RDMA, radeon)

- 3 (hfi1, gntdev, vhost) are registering multiple notifiers, but each one immediately does some VA range filtering, ie with an interval tree. These would be better with a global subsystem-wide range filter and could convert to this API.

- 2 (kvm, amd_iommu) are deliberately using a single mm at a time, and really can't use this API. One of the intel-svm's modes is also in this list

The 3/7 unconverted drivers are:

- intel-svm
  This driver tracks mm's in a global linked list 'global_svm_list' and would benefit from this API. Its flow is a bit complex, since it also wants a set of non-shared notifiers.

- i915_gem_usrptr
  This driver tracks mm's in a per-device hash table (dev_priv->mm_structs), but only has an optional use of mmu_notifiers. Since it still seems to need the hash table it is difficult to convert.

- amdkfd/kfd_process
  This driver is using a global SRCU hash table to track mm's. The control flow here is very complicated and the driver is relying on this hash table to be fast on the ioctl syscall path. It would definitely benefit, but only if the ioctl path didn't need to do the search so often.
====================

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
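For context, a sketch of how a converted user obtains and releases the shared, refcounted registration; the ops member names follow the series description, and the helpers marked hypothetical are placeholders:

    /* Sketch: one shared notifier per mm, keyed by the ops pointer. */
    static const struct mmu_notifier_ops my_ops = {
            .alloc_notifier = my_alloc_notifier,   /* hypothetical helpers */
            .free_notifier  = my_free_notifier,
            /* ... invalidate callbacks ... */
    };

    struct mmu_notifier *mn = mmu_notifier_get(&my_ops, current->mm);

    if (IS_ERR(mn))
            return PTR_ERR(mn);
    /* ... use container_of(mn, struct my_object, mn) ... */
    mmu_notifier_put(mn);   /* released via free_notifier after SRCU grace period */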
-
Jason Gunthorpe authored
The sequence of mmu_notifier_unregister_no_release(), mmu_notifier_call_srcu() is identical to mmu_notifier_put() with the free_notifier callback. As this is the last user of those APIs, converting it means we can drop them.
Link: https://lore.kernel.org/r/20190806231548.25242-11-jgg@ziepe.ca
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
When using mmu_notifier_unregister_no_release() the caller must ensure there is a SRCU synchronize before the mn memory is freed, otherwise use after free races are possible, for instance:

    CPU0                                          CPU1
                                                  invalidate_range_start
                                                    hlist_for_each_entry_rcu(..)
    mmu_notifier_unregister_no_release(&p->mn)
    kfree(mn)
                                                  if (mn->ops->invalidate_range_end)

The error unwind in amdkfd misses the SRCU synchronization.

amdkfd keeps the kfd_process around until the mm is released, so split the flow to fully initialize the kfd_process and register it for find_process, and with the notifier. Past this point the kfd_process does not need to be cleaned up as it is fully ready.

The final failable step does a vm_mmap() and does not seem to impact the kfd_process global state. Since it also cannot be undone (and already has problems with undo if it internally fails), it has to be last. This way we don't have to try to unwind the mmu_notifier_register() and avoid the problem with the SRCU.

Along the way this also fixes various other error unwind bugs in the flow.

Fixes: 45102048 ("amdkfd: Add process queue manager module")
Link: https://lore.kernel.org/r/20190806231548.25242-10-jgg@ziepe.ca
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-