- 31 Aug, 2016 17 commits
-
-
Chunyan Zhang authored
The CoreSight STM device allows direct mapping of the channel regions to userspace for zero-copy writing. To support this ability, the STM framework provides a hook, 'mmio_addr'; this patch implements that hook for CoreSight STM. It also adds a field to 'channel_space' to save the physical base address of the channel region, which the mmap operation needs to know. Signed-off-by: Chunyan Zhang <zhang.chunyan@linaro.org> Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
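A minimal sketch of such an mmio_addr hook, assuming the CoreSight STM driver keeps the channel region's physical base in drvdata->chs.phys and uses a fixed per-channel stride (both names are assumptions here, not quoted from the patch):

    static phys_addr_t stm_mmio_addr(struct stm_data *stm_data, unsigned int master,
                                     unsigned int channel, unsigned int nr_chans)
    {
            struct stm_drvdata *drvdata = container_of(stm_data,
                                                       struct stm_drvdata, stm);
            phys_addr_t addr = drvdata->chs.phys + channel * BYTES_PER_CHANNEL;

            /* mmap() can only hand out page-aligned, page-sized regions. */
            if (offset_in_page(addr) || offset_in_page(nr_chans * BYTES_PER_CHANNEL))
                    return 0;

            return addr;
    }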
-
Sudeep Holla authored
If the addition of the coresight devices gets deferred, then there's a window before child_name is populated by of_get_coresight_platform_data from the respective component driver's probe, and an attempt to access it from coresight_orphan_match results in a kernel NULL pointer dereference as below:

Unable to handle kernel NULL pointer dereference at virtual address 0x0
Internal error: Oops: 96000004 [#1] PREEMPT SMP
Modules linked in:
CPU: 0 PID: 1038 Comm: kworker/0:1 Not tainted 4.7.0-rc3 #124
Hardware name: ARM Juno development board (r2) (DT)
Workqueue: events amba_deferred_retry_func
PC is at strcmp+0x1c/0x160
LR is at coresight_orphan_match+0x7c/0xd0
Call trace:
 strcmp+0x1c/0x160
 bus_for_each_dev+0x60/0xa0
 coresight_register+0x264/0x2e0
 tmc_probe+0x130/0x310
 amba_probe+0xd4/0x1c8
 driver_probe_device+0x22c/0x418
 __device_attach_driver+0xbc/0x158
 bus_for_each_drv+0x58/0x98
 __device_attach+0xc4/0x160
 device_initial_probe+0x10/0x18
 bus_probe_device+0x94/0xa0
 device_add+0x344/0x580
 amba_device_try_add+0x194/0x238
 amba_deferred_retry_func+0x48/0xd0
 process_one_work+0x118/0x378
 worker_thread+0x48/0x498
 kthread+0xd0/0xe8
 ret_from_fork+0x10/0x40

This patch adds a check for a non-NULL conn->child_name before accessing it. Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
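The essence of the fix, as a simplified sketch (the helper below is hypothetical; only the NULL guard reflects the change):

    /* child_name stays NULL while the child device's probe is deferred. */
    static bool conn_matches_dev(struct coresight_connection *conn,
                                 struct device *dev)
    {
            return conn->child_name && !strcmp(dev_name(dev), conn->child_name);
    }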
-
Alex Ng authored
Reports for available memory should use the si_mem_available() value. The freeram value used previously does not include available page cache memory. Signed-off-by: Alex Ng <alexng@messages.microsoft.com> Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
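For reference, a hedged sketch of the difference (the helper name is hypothetical):

    #include <linux/mm.h>

    /* si_mem_available() counts free pages plus easily reclaimable page
     * cache, unlike sysinfo.freeram which counts only free pages. */
    static long balloon_available_pages(void)
    {
            return si_mem_available();
    }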
-
Vitaly Kuznetsov authored
lockdep reports a possible circular locking dependency when udev is used for memory onlining:

systemd-udevd/3996 is trying to acquire lock:
 ((memory_chain).rwsem){++++.+}, at: [<ffffffff810d137e>] __blocking_notifier_call_chain+0x4e/0xc0
but task is already holding lock:
 (&dm_device.ha_region_mutex){+.+.+.}, at: [<ffffffffa015382e>] hv_memory_notifier+0x5e/0xc0 [hv_balloon]
...

which is probably a false positive because we take and release ha_region_mutex from the memory notifier chain depending on the arg. No real deadlocks have been reported so far (though I'm not really sure about preemptible kernels...) but we don't really need to hold the mutex for so long. We use it to protect ha_region_list (and its members) and the num_pages_onlined counter. None of these operations require us to sleep and nothing is slow, so switch to a spinlock with interrupts disabled. While at it, replace list_for_each with list_for_each_entry, as we actually need the entries in all these cases, and drop the meaningless list_empty() checks. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
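A hedged sketch of the locking pattern after the change (structure and field names are assumptions):

    static void ha_walk_regions(struct hv_dynmem_device *dm)
    {
            struct hv_hotadd_state *has;
            unsigned long flags;

            spin_lock_irqsave(&dm->ha_lock, flags);
            list_for_each_entry(has, &dm->ha_region_list, list) {
                    /* inspect/update the region; nothing here may sleep */
            }
            spin_unlock_irqrestore(&dm->ha_lock, flags);
    }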
-
Vitaly Kuznetsov authored
With the recently introduced in-kernel memory onlining (MEMORY_HOTPLUG_DEFAULT_ONLINE) there is no point in waiting for pages to come online in the driver, and we can get rid of the waiting. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Vitaly Kuznetsov authored
I'm observing the following hot add requests from the WS2012 host:

hot_add_req: start_pfn = 0x108200 count = 330752
hot_add_req: start_pfn = 0x158e00 count = 193536
hot_add_req: start_pfn = 0x188400 count = 239616

As the host doesn't specify hot add regions, we try to create a 128Mb-aligned region covering the first request: we create the 0x108000 - 0x160000 region and add 0x108000 - 0x158e00 memory. The second request passes the pfn_covered() check, so we enlarge the region to 0x108000 - 0x190000 and add 0x158e00 - 0x188200 memory. The problem emerges with the third request, as it starts at 0x188400, so there is a 0x200 gap which is not covered. As the end of our region is now 0x190000 it again passes the pfn_covered() check, where we just adjust covered_end_pfn and make it 0x188400 instead of 0x188200, which means that we'll try to online the 0x188200-0x188400 pages, but these pages were never assigned to us and we crash. We can't react to such requests by creating new hot add regions, as it may happen that the whole suggested range falls into the previously identified 128Mb-aligned area, so we'd end up adding nothing or creating intersecting regions, and our current logic doesn't allow that. Instead, create a list of such 'gaps' and check for them in the page online callback. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
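A hedged sketch of what such a gap list can look like (structure and field names are assumptions, not quoted from the patch):

    struct hv_hotadd_gap {
            struct list_head list;
            unsigned long start_pfn;    /* first pfn not backed by memory */
            unsigned long end_pfn;      /* one past the last non-backed pfn */
    };

    /* In the page-online callback, skip pfns that fall inside a recorded gap. */
    static bool pfn_in_gap(struct hv_hotadd_state *has, unsigned long pfn)
    {
            struct hv_hotadd_gap *gap;

            list_for_each_entry(gap, &has->gap_list, list)
                    if (pfn >= gap->start_pfn && pfn < gap->end_pfn)
                            return true;
            return false;
    }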
-
Vitaly Kuznetsov authored
Windows 2012 (non-R2) does not specify a hot add region in hot add requests, and the logic in hot_add_req() tries to find a 128Mb-aligned region covering the request. It may also happen that the host's requests are not 128Mb aligned and the created ha_region will start before the first specified PFN. We can't online these non-present pages, but we don't remember the real start of the region. This is a regression introduced by commit 5abbbb75 ("Drivers: hv: hv_balloon: don't lose memory when onlining order is not natural"). While the idea of keeping the 'moving window' was wrong (as there is no guarantee that hot add requests come ordered), we should still keep track of covered_start_pfn. This is not a revert; the logic is different. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Vitaly Kuznetsov authored
The KVP daemon does fork()/exec() (with popen()), so we need to close our fds to avoid sharing them with child processes. The immediate implication of not doing so that I see is SELinux complaining about 'ip' trying to access '/dev/vmbus/hv_kvp'. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
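One way to keep the fd from leaking into popen()'d children is to mark it close-on-exec; the snippet below is a generic userspace sketch, not necessarily the exact form used by the patch:

    #include <fcntl.h>

    /* Mark fd close-on-exec so helpers such as 'ip' never inherit it. */
    static int set_cloexec(int fd)
    {
            int flags = fcntl(fd, F_GETFD);

            if (flags < 0)
                    return -1;
            return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
    }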
-
K. Y. Srinivasan authored
On Hyper-V, performance critical channels use the monitor mechanism to signal the host when the guest posts messages for the host. This mechanism minimizes hypervisor intercepts and also makes the host more efficient in that each time the host is woken up, it processes a batch of messages as opposed to just one. The goal here is to improve throughput, and this comes at the expense of increased latency. Implement a mechanism to let the client driver decide if latency is important. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
K. Y. Srinivasan authored
The current delay between retries is unnecessarily high and is negatively affecting the time it takes to boot the system. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
K. Y. Srinivasan authored
For synthetic NIC channels, enable explicit signaling policy as netvsc wants to explicitly control when the host is to be signaled. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dexuan Cui authored
There is a rare race when we remove an entry from the global list hv_context.percpu_list[cpu] in hv_process_channel_removal() -> percpu_channel_deq() -> list_del(): at this time, if vmbus_on_event() -> process_chn_event() -> pcpu_relid2channel() is trying to query the list, we can get a kernel fault. Similarly, we also have the issue in the code path vmbus_process_offer() -> percpu_channel_enq(). We can resolve the issue by disabling the tasklet when updating the list. The patch also moves vmbus_release_relid() to a later place, where the channel has been removed from the per-cpu and global lists. Reported-by: Rolf Neugebauer <rolf.neugebauer@docker.com> Signed-off-by: Dexuan Cui <decui@microsoft.com> Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
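A hedged sketch of the tasklet-quiescing pattern (field names such as event_dpc and percpu_list are assumptions):

    static void percpu_channel_deq_quiesced(struct vmbus_channel *channel)
    {
            struct tasklet_struct *t = hv_context.event_dpc[channel->target_cpu];

            tasklet_disable(t);             /* the event tasklet cannot run now */
            list_del(&channel->percpu_list);
            tasklet_enable(t);
    }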
-
Vitaly Kuznetsov authored
Background: the userspace daemon registration protocol for Hyper-V utilities drivers has two steps: 1) the daemon writes its own version to the kernel; 2) the kernel reads it and replies with the module version. At this point we consider the handshake procedure complete and we do hv_poll_channel(), transitioning the utility device to the HVUTIL_READY state; we are then ready to handle messages from the kernel.

When hvutil_transport is in HVUTIL_TRANSPORT_CHARDEV mode we have a single buffer for the outgoing message. hvutil_transport_send() puts the message into this buffer and, until the buffer is cleared by hvt_op_read(), returns -EFAULT to all subsequent calls. The host<->guest protocol guarantees there is no more than one request at a time, and we will not get a new request until we reply to the previous one, so this single message buffer is enough.

Now to the race. When we finish the negotiation procedure and send the kernel module version to userspace with hvutil_transport_send(), it goes into the above-mentioned buffer, and if the daemon is slow enough to read it from there we can get a collision when a request from the host comes in: we won't be able to put anything into the buffer, so the request will be lost. To solve the issue we need to know when the negotiation is really done (when the version message is read by the daemon) and transition to the HVUTIL_READY state after that happens. Implement a callback on read to support this. Old-style netlink communication is not affected by the change; we don't really know when those messages are delivered, but we don't have a single message buffer there. Reported-by: Barry Davis <barry_davis@stormagic.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Vitaly Kuznetsov authored
vmbus_teardown_gpadl() can result in an infinite wait when it is called on the 5-second timeout path in vmbus_open(). The issue is caused by the fact that the gpadl teardown operation won't ever succeed for an opened channel, and the timeout isn't always enough. As a guest, we can always trust the host to respond to our request (and there is nothing we can do if it doesn't). Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
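The direction of the change, as a hedged fragment (message and field names are assumptions): post the teardown request and then wait without an arbitrary timeout.

    ret = vmbus_post_msg(msg, sizeof(struct vmbus_channel_gpadl_teardown));
    if (ret)
            goto post_msg_err;

    /* No timeout: as a guest we trust the host to eventually respond. */
    wait_for_completion(&info->waitevent);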
-
Vitaly Kuznetsov authored
In some cases create_gpadl_header() allocates submessages but we never free them. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
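A hedged sketch of the error-path cleanup (list and field names assumed from the vmbus channel message structures):

    /* Release every submessage create_gpadl_header() chained onto the list. */
    static void free_gpadl_submessages(struct vmbus_channel_msginfo *msginfo)
    {
            struct vmbus_channel_msginfo *submsginfo, *tmp;

            list_for_each_entry_safe(submsginfo, tmp, &msginfo->submsglist,
                                     msglistentry) {
                    list_del(&submsginfo->msglistentry);
                    kfree(submsginfo);
            }
    }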
-
Vitaly Kuznetsov authored
We use messagecount only once in vmbus_establish_gpadl() to check if it is safe to iterate through the submsglist. We can just initialize the list header in all cases in create_gpadl_header() instead. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Vitaly Kuznetsov authored
When we crash from NMI context (e.g. after NMI injection from the host when 'sysctl -w kernel.unknown_nmi_panic=1' is set) we hit "kernel BUG at mm/vmalloc.c:1530!" as vfree() is denied. While the issue could be solved with an in_nmi() check instead, I opted for skipping vfree on all sorts of crashes to reduce the amount of work that can cause consequent crashes. We don't really need to free anything on crash. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
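A hedged sketch of the idea (the helper is hypothetical):

    /* Free buffers only on a clean teardown; on a crash path vfree() may not
     * be safe (e.g. NMI context) and nothing needs to be reclaimed anyway. */
    static void hv_free_pages(void *msg_page, void *event_page, bool crash)
    {
            if (crash)
                    return;
            vfree(msg_page);
            vfree(event_page);
    }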
-
- 30 Aug, 2016 23 commits
-
-
Chris Metcalf authored
Joe Perches points out [1] that this pattern isn't currently safe. This driver doesn't really need the zeroing semantic anyway; by restructuring the code slightly we can initialize all the fields of the structure up front instead. [1] https://lkml.kernel.org/r/1469729491.3998.58.camel@perches.com Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Muhammad Falak R Wani authored
Replace the explicit computation of the vma page count with a call to vma_pages(). Signed-off-by: Muhammad Falak R Wani <falakreyaz@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
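For reference (the wrapper name is hypothetical):

    #include <linux/mm.h>

    /* vma_pages() is the helper form of the open-coded
     * (vma->vm_end - vma->vm_start) >> PAGE_SHIFT computation. */
    static unsigned long nr_pages_in_vma(struct vm_area_struct *vma)
    {
            return vma_pages(vma);
    }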
-
Alexandre Belloni authored
v3.20 doesn't exist. heartbeat_enable was actually added in v4.4. Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Arnd Bergmann authored
When building with W=1, the __scif_rma_destroy_tcw function causes a harmless warning about an argument variable that is modified but not used: drivers/misc/mic/scif/scif_dma.c: In function ‘__scif_rma_destroy_tcw’: drivers/misc/mic/scif/scif_dma.c:118:27: error: parameter ‘ep’ set but not used [-Werror=unused-but-set-parameter] In this case, we can just remove the argument, since all callers are in the same file. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Usyskin authored
The device lock was unnecessarily obtained in the bus rescan work before the amthif client search. That causes incorrect lock ordering and a task hang:

...
[88004.613213] INFO: task kworker/1:14:21832 blocked for more than 120 seconds.
...
[88004.645934] Workqueue: events mei_cl_bus_rescan_work
...

The correct lock order is:
 cl_bus_lock
 device_lock
 me_clients_rwsem

Move device_lock into the amthif init function, which is called after me_clients_rwsem is released. This fixes a regression introduced by commit 025fb792 ("mei: split amthif client init from end of clients enumeration"). Cc: <stable@vger.kernel.org> # 4.6+ Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Usyskin authored
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Usyskin authored
mei_amthif_read has only one difference from mei_read: it does not call mei_read_start(). Make mei_read_start return immediately for the amthif client and drop the special mei_amthif_read function. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Usyskin authored
The FW supports only one pending read per host client; in order to support issuing consecutive reads, the driver queues read requests internally and sends them to the firmware after the pending one has completed. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tomas Winkler authored
Enclose the boilerplate code of allocating a control/HBM command cb and enqueueing it onto the ctrl_wr.list in a convenient wrapper, mei_cl_enqueue_ctrl_wr_cb(). This is a preparatory patch for enabling consecutive reads. Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
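Roughly what such a wrapper can look like, as a hedged sketch (the exact signature and queue layout are assumptions, not quoted from the patch):

    struct mei_cl_cb *mei_cl_enqueue_ctrl_wr_cb(struct mei_cl *cl, size_t length,
                                                enum mei_cb_file_ops fop_type,
                                                const struct file *fp)
    {
            struct mei_cl_cb *cb;

            cb = mei_cl_alloc_cb(cl, length, fop_type, fp);
            if (!cb)
                    return NULL;

            /* Queue the control/HBM command for the write path to pick up. */
            list_add_tail(&cb->list, &cl->dev->ctrl_wr_list.list);
            return cb;
    }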
-
Tomas Winkler authored
With the introduction of the receive flow control credits prefixed with rx_, we add a tx_ prefix to the variables and functions used for tracking the transmit flow control credits. Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Usyskin authored
Use an RX flow control counter in the host client structure to track the number of simultaneous outstanding reads. This eliminates searching the queues and lays the groundwork for enabling parallel reads. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Usyskin authored
The read callbacks for the fixed address clients, which don't have flow control, are now built on the receive path. In order to have a single allocation place, we remove the allocation from the read request. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Usyskin authored
The read callback is always prepared with an MTU-sized buffer, and the FW can't send more than the MTU in one message. Checking for buffer existence and using krealloc to increase the receive buffer size are redundant and may be safely discarded. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Usyskin authored
Open-code mei_clear_lists into its only caller, mei_amthif_release, and drop the unused parameter 'dev' from the mei_clear_list function. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Usyskin authored
The fixed address clients do not work with flow control, and the packet RX callback was allocated upon TX in anticipation of a following RX. This won't work for clients with unsolicited RX. Rather than preparing a read callback upon a write, we allocate one directly on the receive path if one doesn't exist. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
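A hedged fragment of the receive-path idea (function and field names are assumptions): if no read cb is queued for the fixed-address client, allocate one on the spot before copying the incoming message.

    cb = mei_cl_read_cb(cl, fp);
    if (!cb) {
            cb = mei_cl_alloc_cb(cl, mei_cl_mtu(cl), MEI_FOP_READ, fp);
            if (!cb)
                    return -ENOMEM;
            list_add_tail(&cb->list, &cl->rd_pending);
    }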
-
Alexander Usyskin authored
Store the file associated with a client in the host client structure. This enables dropping the special amthif client file pointer from struct mei_device, and it is also a preparation for changing the way RX packets are allocated for fixed_address clients. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Usyskin authored
Move the read cb to the completion queue if a read finds out that the client is not connected. This expedites waking the user space reader on an error condition. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tomas Winkler authored
The correct errno on client disconnection is -ENODEV, not -EBUSY. Cc: <stable@vger.kernel.org> #4.3+ Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Usyskin authored
In the course of the read flow we want to wait for read completion only if the read queue is empty. However, calling list_empty(&cl->rd_completed) is a duplication, as the same check is performed by mei_cl_read_cb() and the wait is skipped if it returns non-NULL. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tomas Winkler authored
In mei_hbm_cl_hdr, the buf argument was not described. Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Usyskin authored
Schedule a link reset if we fail to perform runtime suspend or resume. Set an active runtime PM state on link reset to clear a runtime PM error, if present. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Usyskin authored
mei_io_cb_alloc_buf has a single caller: mei_cl_alloc_cb. After amthif stopped using it, the code can be integrated into the caller and the function can be dropped. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Usyskin authored
Use the mei_cl_alloc_cb wrapper instead of open-coded steps. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-