- 05 Jul, 2019 5 commits
-
-
Steffen Maier authored
Signed-off-by: Steffen Maier <maier@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Message-Id: <1562149189-1417-4-git-send-email-maier@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Steffen Maier authored
For non-static-inlines, debug.c already had non-compliant function header docs. So move the pure prototype kdocs of commit ("s390: include/asm/debug.h add kerneldoc markups") from debug.h to debug.c and merge them with the old function docs. Kerneldoc also typically lives at the implementation in the compile unit rather than at the prototype in the header file. While at it, update the short kdoc descriptions to distinguish the different functions, and apply a few more consistency cleanups. Add a new kdoc for debug_set_critical(), since debug.h comments it as part of the API. Signed-off-by: Steffen Maier <maier@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Message-Id: <1562149189-1417-3-git-send-email-maier@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
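For readers who don't have the kernel-doc convention in mind, here is a minimal sketch of the markup style placed at the function definition in the .c file; the function name, parameters and body are illustrative, not the actual debug.c content:

```
/**
 * debug_example() - short one-line description of the function
 * @id:    handle of the debug log to operate on
 * @level: debug level of the event
 *
 * An optional longer description of the behavior can follow the
 * parameter list.
 *
 * Return: 0 on success, negative error code otherwise.
 */
static int debug_example(debug_info_t *id, int level)
{
	/* illustrative body only */
	return 0;
}
```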
-
Steffen Maier authored
Complements the previous commit ("s390: include/asm/debug.h add kerneldoc markups"), which seems to have dropped important non-kdoc parts such as the user space interface (level, size, flush) as well as the views and the caution regarding strings in the sprintf view. Signed-off-by: Steffen Maier <maier@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Message-Id: <1562149189-1417-2-git-send-email-maier@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
Merge tag 'vfio-ccw-20190705' of https://git.kernel.org/pub/scm/linux/kernel/git/kvms390/vfio-ccw into features Fix a bug introduced in the refactoring. * tag 'vfio-ccw-20190705' of https://git.kernel.org/pub/scm/linux/kernel/git/kvms390/vfio-ccw: vfio-ccw: Fix the conversion of Format-0 CCWs to Format-1 Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Eric Farman authored
When processing Format-0 CCWs, we use the "len" variable as the number of CCWs to convert to Format-1. But that variable contains zero here, and is not a meaningful CCW count until ccwchain_calc_length() returns. Since that routine requires and expects Format-1 CCWs to identify the chaining behavior, the format conversion must be done first. Convert the 2KB we copied even if it's more than we need. Fixes: 7f8e89a8 ("vfio-ccw: Factor out the ccw0-to-ccw1 transition") Reported-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190702180928.18113-1-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
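A hedged sketch of the resulting ordering: convert the whole copied area to Format-1 first, and only then let ccwchain_calc_length() count the chain. The helper and constant names below follow the surrounding refactoring but should be treated as illustrative, and the fragment is not the complete handler:

```
	/*
	 * Convert any Format-0 CCWs to Format-1 before counting them,
	 * since ccwchain_calc_length() relies on Format-1 chaining flags.
	 */
	if (!cp->orb.cmd.fmt)
		convert_ccw0_to_ccw1(cp->guest_cp, CCWCHAIN_LEN_MAX);

	/* Only now is "len" a meaningful CCW count. */
	len = ccwchain_calc_length(cda, cp);
```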
-
- 04 Jul, 2019 2 commits
-
-
Sebastian Ott authored
Do not issue CLP_SET_ENABLE_MIO after opting out of MIO instruction usage. This should not fix a bug but reduce overhead within firmware. Signed-off-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Sebastian Ott authored
Unfortunately we have to handle a class of devices that don't support the new MIO instructions. Adjust resource assignment and mapping accordingly. Signed-off-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
- 02 Jul, 2019 10 commits
-
-
Pierre Morel authored
The AP Queue Interruption Control (AQIC) facility gives the guest the ability to control interruptions for the Cryptographic Adjunct Processor queues. Signed-off-by: Pierre Morel <pmorel@linux.ibm.com> Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com> Acked-by: Harald Freudenberger <freude@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Halil Pasic <pasic@linux.ibm.com> [ Modified while picking: we may not expose STFLE facility 65 unconditionally because AIV is a pre-requirement.] Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Pierre Morel authored
We register an AP PQAP instruction hook when the mediated device is opened and unregister it on release. During the probe of the AP device, we allocate a vfio_ap_queue structure to keep track of the information we need for PQAP/AQIC instruction interception. In the AP PQAP instruction hook, if we receive a request to enable IRQs:
- we retrieve the vfio_ap_queue based on the APQN we receive in REG1,
- we retrieve the page of the guest address (NIB) from register REG2,
- we retrieve the mediated device and use the VFIO pinning infrastructure to pin the page of the guest address,
- we retrieve the pointer to KVM to register the guest ISC and retrieve the host ISC,
- finally we activate GISA.
If we receive a request to disable IRQs:
- we deactivate GISA,
- unregister from the GIB,
- unpin the NIB.
When the AP device is removed from the driver, the device is reset; this process unregisters the GISA from the GIB and unpins the NIB address, and then we free the vfio_ap_queue structure. Signed-off-by: Pierre Morel <pmorel@linux.ibm.com> Acked-by: Tony Krowiak <akrowiak@linux.ibm.com> Acked-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Pierre Morel authored
To be able to use the VFIO interface to facilitate the mediated device memory pinning/unpinning, we need to register a notifier for IOMMU. While we will start to pin one guest page for the interrupt indicator byte, this is still OK with ballooning, as this page will never be used by the guest virtio-balloon driver, so the pinned page will never be freed. And even if a broken guest did so, that would not impact the host, as the original page is still under the control of vfio. Signed-off-by: Pierre Morel <pmorel@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com> Acked-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
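A hedged sketch of the registration pattern described above, using the VFIO notifier API as it existed at the time; the callback and function names are illustrative, not the vfio_ap code itself:

```
static int example_iommu_notifier(struct notifier_block *nb,
				  unsigned long action, void *data)
{
	if (action == VFIO_IOMMU_NOTIFY_DMA_UNMAP) {
		/* unpin any pinned guest page that falls into the unmapped range */
	}
	return NOTIFY_OK;
}

static int example_register_iommu_notifier(struct mdev_device *mdev,
					   struct notifier_block *nb)
{
	unsigned long events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;

	nb->notifier_call = example_iommu_notifier;
	return vfio_register_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
				      &events, nb);
}
```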
-
Pierre Morel authored
We prepare the interception of the PQAP/AQIC instruction for the case where the AQIC facility is enabled in the guest. First of all, we do not want to change the existing behavior when intercepting AP instructions without the SIE allowing the guest to use AP instructions. In this patch we only handle the AQIC interception allowed by facility 65, which will be enabled once the complete interception infrastructure is present. We add a callback inside the KVM arch structure for s390 so that a VFIO driver can handle a specific response to the PQAP instruction with the AQIC command, and only this command. But we want to be able to return a correct answer to the guest even if there is no VFIO AP driver in the kernel. Therefore, we inject the correct exceptions from inside KVM for the case where the callback is not initialized, which happens when the vfio_ap driver is not loaded. We consider it the responsibility of the driver to always initialize the PQAP callback if it defines queues by initializing the CRYCB for a guest. If the callback has been set up, we call it; if not, we set up an answer indicating that no queue is available for the guest. Signed-off-by: Pierre Morel <pmorel@linux.ibm.com> Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com> Acked-by: Harald Freudenberger <freude@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
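A much-simplified, hedged sketch of the dispatch logic described above; the real handler first validates the facility bits and the function code, and the response-code value and register layout here are only illustrative:

```
static int handle_pqap_sketch(struct kvm_vcpu *vcpu)
{
	struct ap_queue_status status = {};

	if (!vcpu->kvm->arch.crypto.pqap_hook) {
		/*
		 * No vfio_ap driver registered a hook: report that no
		 * queue is available, so the guest still gets a defined
		 * answer instead of an error.
		 */
		status.response_code = 0x01;	/* illustrative value */
		memcpy(&vcpu->run->s.regs.gprs[1], &status, sizeof(status));
		return 0;
	}
	/* Otherwise let the registered driver callback handle PQAP/AQIC. */
	return vcpu->kvm->arch.crypto.pqap_hook->hook(vcpu);
}
```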
-
Vasily Gorbik authored
KASAN instrumentation of backchain unwinder stack reads is now disabled completely and the unwinder simply uses READ_ONCE_NOCHECK. The READ_ONCE_TASK_STACK macro is unused and can be removed. Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
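A hedged sketch of what such an uninstrumented stack read looks like; the s390 stack frame does start with the backchain slot, but the struct and helper here are illustrative rather than the actual unwinder code:

```
struct stack_frame_sketch {
	unsigned long back_chain;
	/* remaining save-area fields omitted */
};

static unsigned long read_backchain(unsigned long sp)
{
	struct stack_frame_sketch *sf = (struct stack_frame_sketch *)sp;

	/* bypass KASAN checks: the slot may legitimately be unwritten */
	return READ_ONCE_NOCHECK(sf->back_chain);
}
```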
-
Vasily Gorbik authored
Avoid a KASAN false positive when the current task is interrupted between the stack frame allocation and backchain write instructions, leaving the new stack frame's backchain invalid. In particular, if the backchain is 0 the unwinder tries to read pt_regs from the stack and might hit KASAN-poisoned bytes, leading to a KASAN "stack-out-of-bounds" report. Disable KASAN instrumentation of unwinder stack reads, since this limitation cannot be handled otherwise with the current backchain unwinder implementation. Fixes: 78c98f90 ("s390/unwind: introduce stack unwind API") Reported-by: Julian Wiedmann <jwi@linux.ibm.com> Tested-by: Benjamin Block <bblock@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Julian Wiedmann authored
The current code sets the dsci to 0x00000080, which doesn't make any sense, as the indicator area is located in the _left-most_ byte. Worse: if the dsci is the _shared_ indicator, this potentially clears the indication of activity for a _different_ device. tiqdio_thinint_handler() will then have no reason to call that device's IRQ handler, and the device ends up stalling. Fixes: d0c9d4a8 ("[S390] qdio: set correct bit in dsci") Cc: <stable@vger.kernel.org> Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Julian Wiedmann authored
When tiqdio_remove_input_queues() removes a queue from the tiq_list as part of qdio_shutdown(), it doesn't re-initialize the queue's list entry and the prev/next pointers go stale. If a subsequent qdio_establish() fails while sending the ESTABLISH cmd, it calls qdio_shutdown() again in QDIO_IRQ_STATE_ERR state and tiqdio_remove_input_queues() will attempt to remove the queue entry a second time. This dereferences the stale pointers, and bad things ensue. Fix this by re-initializing the list entry after removing it from the list. For good practice also initialize the list entry when the queue is first allocated, and remove the quirky checks that papered over this omission. Note that prior to commit e5218134 ("s390/qdio: fix access to uninitialized qdio_q fields"), these checks were bogus anyway. setup_queues_misc() clears the whole queue struct, and thus needs to re-init the prev/next pointers as well. Fixes: 779e6e1c ("[S390] qdio: new qdio driver.") Cc: <stable@vger.kernel.org> Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
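A hedged sketch of the general pattern (not the exact qdio code): re-initializing the entry on removal, and initializing it at allocation time, makes a repeated removal harmless:

```
#include <linux/list.h>

struct example_queue {
	struct list_head entry;
};

static void example_queue_init(struct example_queue *q)
{
	/* start with a self-pointing entry so list_empty() is meaningful */
	INIT_LIST_HEAD(&q->entry);
}

static void example_queue_remove(struct example_queue *q)
{
	/* unlike list_del(), this leaves no stale prev/next pointers behind */
	list_del_init(&q->entry);
}
```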
-
Dan Carpenter authored
The "len" variable is the length of the option up to the next option or to the end of the string which ever first. We want to print the invalid option so we want precision "%.*s" but the format is width "%*s" so it prints up to the end of the string. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Tested-by: Stefan Haberland <sth@linux.ibm.com> Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Cornelia Huck authored
Sometimes, we want to control which of the matching drivers binds to a subchannel device (e.g. for subchannels we want to handle via vfio-ccw). For pci devices, a mechanism to do so has been introduced in 782a985d ("PCI: Introduce new device binding path using pci_dev.driver_override"). It makes sense to introduce the driver_override attribute for subchannel devices as well, so that we can easily extend the 'driverctl' tool (which makes use of the driver_override attribute for pci). Note that unlike pci we still require a driver override to match the subchannel type; matching more than one subchannel type is probably not useful anyway. Signed-off-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
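A hedged sketch of how a bus match routine can honor such an attribute, modeled on the pci behavior referenced above; the device type, accessor and fallback match below are hypothetical, and the real css code additionally keeps its subchannel-type check:

```
static int example_bus_match(struct device *dev, struct device_driver *drv)
{
	struct example_device *edev = to_example_device(dev);	/* hypothetical */

	/* an override binds the device to exactly one driver, by name */
	if (edev->driver_override)
		return !strcmp(edev->driver_override, drv->name);

	return example_id_match(edev, drv);	/* hypothetical default matching */
}
```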
-
- 24 Jun, 2019 2 commits
-
-
Cornelia Huck authored
Reported by sparse. Fixes: 7f8e89a8 ("vfio-ccw: Factor out the ccw0-to-ccw1 transition") Signed-off-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190624090721.16241-1-cohuck@redhat.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
Merge tag 'vfio-ccw-20190621' of https://git.kernel.org/pub/scm/linux/kernel/git/kvms390/vfio-ccw into features
Refactoring of the vfio-ccw cp handling, simplifying the code and avoiding unneeded allocating/copying.
* tag 'vfio-ccw-20190621' of https://git.kernel.org/pub/scm/linux/kernel/git/kvms390/vfio-ccw:
  vfio-ccw: Remove copy_ccw_from_iova()
  vfio-ccw: Factor out the ccw0-to-ccw1 transition
  vfio-ccw: Copy CCW data outside length calculation
  vfio-ccw: Skip second copy of guest cp to host
  vfio-ccw: Move guest_cp storage into common struct
  s390/cio: Combine direct and indirect CCW paths
  vfio-ccw: Rearrange IDAL allocation in direct CCW
  vfio-ccw: Remove pfn_array_table
  vfio-ccw: Adjust the first IDAW outside of the nested loops
  vfio-ccw: Rearrange pfn_array and pfn_array_table arrays
  s390/cio: Use generalized CCW handler in cp_init()
  s390/cio: Generalize the TIC handler
  s390/cio: Refactor the routine that handles TIC CCWs
  s390/cio: Squash cp_free() and cp_unpin_free()
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
- 21 Jun, 2019 5 commits
-
-
Eric Farman authored
Just to keep things tidy. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190618202352.39702-6-farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
-
Eric Farman authored
This is a really useful function, but it's buried in the copy_ccw_from_iova() routine so that ccwchain_calc_length() can just work with Format-1 CCWs while doing its counting. But it means we're translating a full 2K of "CCWs" to Format-1, when in reality there are probably far fewer in that space. Let's factor it out, so maybe we can do something with it later. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190618202352.39702-5-farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
-
Eric Farman authored
It doesn't make much sense to "hide" the copy to the channel_program struct inside a routine that calculates the length of the chain. Let's move it to the calling routine, which will later copy from channel_program to the memory it allocated itself. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190618202352.39702-4-farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
-
Eric Farman authored
We already pinned/copied/unpinned 2K (256 CCWs) of guest memory to the host space anchored off vfio_ccw_private. There's no need to do that again once we have the length calculated, when we could just copy the section we need to the "permanent" space for the I/O. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190618202352.39702-3-farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
-
Eric Farman authored
Rather than allocating/freeing a piece of memory every time we try to figure out how long a CCW chain is, let's use a piece of memory allocated for each device. The io_mutex added with commit 4f766173 ("vfio-ccw: protect the I/O region") is held for the duration of the VFIO_CCW_EVENT_IO_REQ event that accesses/uses this space, so there should be no race concerns with another CPU attempting an (unexpected) SSCH for the same device. Suggested-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190618202352.39702-2-farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
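A hedged sketch of the idea: allocate the 2K scratch area (256 Format-1 CCWs of 8 bytes each) once when the device is set up, rather than on every request. The field placement and constant name are taken from the commit descriptions and should be treated as illustrative:

```
static int example_alloc_guest_cp(struct vfio_ccw_private *private)
{
	/* one 2K buffer per device, reused for every channel program */
	private->cp.guest_cp = kcalloc(CCWCHAIN_LEN_MAX, sizeof(struct ccw1),
				       GFP_KERNEL);
	if (!private->cp.guest_cp)
		return -ENOMEM;
	return 0;
}
```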
-
- 19 Jun, 2019 4 commits
-
-
Julian Wiedmann authored
This allows device drivers (e.g. qeth) to use the struct when processing information retrieved via RCD. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Acked-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
The stfle inline assembly returns the number of double words written (condition code 0) or the number of double words it would have written (condition code 3) if the memory array it got as a parameter had been large enough. The current stfle implementation assumes that the array is always large enough and clears those parts of the array that have not been written to with a subsequent memset call. If, however, the array is not large enough, memset gets a negative length parameter, which means that memset clears memory until it gets an exception and the kernel crashes. To fix this, simply limit the maximum length. Also move the inline assembly to an extra function to avoid clobbering of register 0, which might happen because of the added min_t invocation together with code instrumentation. The bug was introduced with commit 14375bc4 ("[S390] cleanup facility list handling") but was rather harmless, since it would only write to a rather large array. It became a potential problem with commit 3ab121ab ("[S390] kernel: Add z/VM LGR detection"). Since then it writes to an array with only four double words, while some machines already deliver three double words. As soon as machines have a facility bit within the fifth double word, a crash on IPL would happen. Fixes: 14375bc4 ("[S390] cleanup facility list handling") Cc: <stable@vger.kernel.org> # v2.6.37+ Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
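A plain-C sketch of the length handling being fixed: the number of double words the machine reports may exceed the array size, so it has to be clamped before the trailing memset (the real code uses min_t for this; the function and variable names here are illustrative):

```
#include <string.h>

static void clear_unwritten_tail(unsigned long *fac_list, int size_dw,
				 int reported_dw)
{
	int written_dw = reported_dw;

	/* the fix: never trust the reported count beyond the array size */
	if (written_dw > size_dw)
		written_dw = size_dw;

	/* clear only the part of the array that was not written */
	memset((char *)fac_list + written_dw * 8, 0,
	       (size_dw - written_dw) * 8);
}
```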
-
Heiko Carstens authored
This feature has never been used, so remove it. Acked-by: Vasily Gorbik <gor@linux.ibm.com> Acked-by: Hendrik Brueckner <brueckner@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
Replace defconfig with performance_defconfig. defconfig had some more or less random debug options enabled, where nobody knows why anymore. Just remove the old defconfig and replace it with performance_defconfig, which reduces the number of configs to maintain. A config with debugging options enabled is debug_defconfig, which is supposed to be rather close to performance_defconfig except that it has debug options enabled. Acked-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
- 17 Jun, 2019 9 commits
-
-
Eric Farman authored
With both the direct-addressed and indirect-addressed CCW paths simplified to this point, the amount of shared code between them is (hopefully) more easily visible. Move the processing of IDA-specific bits into the direct-addressed path, and add some useful commentary of what the individual pieces are doing. This allows us to remove the entire ccwchain_fetch_idal() routine and maintain a single function for any non-TIC CCW. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-10-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
-
Eric Farman authored
This is purely deck furniture, to help understand the merge of the direct and indirect handlers. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-9-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
-
Eric Farman authored
Now that both CCW codepaths build this nested array: ccwchain->pfn_array_table[1]->pfn_array[#idaws/#pages] We can collapse this into simply: ccwchain->pfn_array[#idaws/#pages] Let's do that, so that we don't have to continually navigate two nested arrays when the first array always has a count of one. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-8-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
-
Eric Farman authored
Now that pfn_array_table[] is always an array of 1, it seems silly to check for the very first entry in an array in the middle of two nested loops, since we know it'll only ever happen once. Let's move this outside the loops to simplify things, even though the "k" variable is still necessary. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-7-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
-
Eric Farman authored
While processing a channel program, we currently have two nested arrays that carry a slightly different structure. The direct CCW path creates this: ccwchain->pfn_array_table[1]->pfn_array[#pages] while an IDA CCW creates: ccwchain->pfn_array_table[#idaws]->pfn_array[1] The distinction appears to state that each pfn_array_table entry points to an array of contiguous pages, represented by a pfn_array, um, array. Since the direct-addressed scenario can ONLY represent contiguous pages, it makes the intermediate array necessary but difficult to recognize. Meanwhile, since an IDAL can contain non-contiguous pages and there is no logic in vfio-ccw to detect adjacent IDAWs, it is the second array that is necessary but appearing to be superfluous. I am not aware of any documentation that states the pfn_array[] needs to be of contiguous pages; it is just what the code does today. I don't see any reason for this either, let's just flip the IDA codepath around so that it generates: ch_pat->pfn_array_table[1]->pfn_array[#idaws] This will bring it in line with the direct-addressed codepath, so that we can understand the behavior of this memory regardless of what type of CCW is being processed. And it means the casual observer does not need to know/care whether the pfn_array[] represents contiguous pages or not. NB: The existing vfio-ccw code only supports 4K-block Format-2 IDAs, so that "#pages" == "#idaws" in this area. This means that we will have difficulty with this overlap in terminology if support for Format-1 or 2K-block Format-2 IDAs is ever added. I don't think that this patch changes our ability to make that distinction. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-6-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
-
Eric Farman authored
It is now pretty apparent that ccwchain_handle_ccw() (nee ccwchain_handle_tic()) does everything that cp_init() wants to do. Let's remove that duplicated code from cp_init() and let ccwchain_handle_ccw() handle it itself. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-5-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
-
Eric Farman authored
Refactor ccwchain_handle_tic() into a routine that handles a channel program address (which itself is a CCW pointer), rather than a CCW pointer that is only a TIC CCW. This will make it easier to reuse this code for other CCW commands. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-4-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
-
Eric Farman authored
Extract the "does the target of this TIC already exist?" check from ccwchain_handle_tic(), so that it's easier to refactor that function into one that cp_init() is able to use. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-3-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
-
Eric Farman authored
The routine cp_free() does nothing but call cp_unpin_free(), and while most places call cp_free() there is one caller of cp_unpin_free() used when the cp is guaranteed to have not been marked initialized. This seems like a dubious way to make a distinction, so let's combine these routines and make cp_free() do all the work. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-2-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
-
- 15 Jun, 2019 3 commits
-
-
Martin Schwidefsky authored
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Heiko Carstens authored
stop_machine is the only user left of cpu_relax_yield. Given that it now has special semantics which are tied to stop_machine, introduce a weak stop_machine_yield function which architectures can override, and get rid of the generic cpu_relax_yield implementation. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
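A hedged sketch of the weak-default pattern this describes: generic code provides a fallback that an architecture can replace simply by defining a non-weak function with the same name:

```
/* generic fallback, e.g. in the stop_machine core code */
void __weak stop_machine_yield(const struct cpumask *cpumask)
{
	cpu_relax();
}
```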
-
Martin Schwidefsky authored
The stop_machine loop to advance the state machine and to wait for all affected CPUs to check in calls cpu_relax_yield in a tight loop until the last missing CPUs have acknowledged the state transition. On a virtual system where not all logical CPUs are backed by real CPUs all the time, it can take a while for all CPUs to check in. With the current definition of cpu_relax_yield, a diagnose 0x44 is done, which tells the hypervisor to schedule *some* other CPU. That can be any CPU and not necessarily one of the CPUs that need to run in order to advance the state machine. This can lead to a pretty bad diagnose 0x44 storm until the last missing CPU finally checks in. Replace the undirected cpu_relax_yield based on diagnose 0x44 with a directed yield. Each CPU in the wait loop will pick up the next CPU in the cpumask of stop_machine. The diagnose 0x9c is used to tell the hypervisor to run this next CPU instead of the current one. If there is only a limited number of real CPUs backing the virtual CPUs, we end up with the real CPUs passed around in a round-robin fashion. [heiko.carstens@de.ibm.com]: Use cpumask_next_wrap as suggested by Peter Zijlstra. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
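A hedged, simplified sketch of what such a directed yield can look like on s390. A real implementation would likely remember the last CPU it yielded to so that successive calls walk the mask round-robin; smp_yield_cpu() is the existing s390 helper that issues diagnose 0x9c:

```
void stop_machine_yield(const struct cpumask *cpumask)
{
	int this_cpu = smp_processor_id();
	int cpu;

	/* pick the next CPU in the stop_machine mask, wrapping around */
	cpu = cpumask_next_wrap(this_cpu, cpumask, this_cpu, false);
	if (cpu < nr_cpu_ids)
		smp_yield_cpu(cpu);	/* diagnose 0x9c: run that specific CPU */
}
```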
-