Commit 8eee93e2 authored by Linus Torvalds

Merge tag 'char-misc-4.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc updates from Greg KH:
 "Here is the big char/misc driver update for 4.6-rc1.

  The majority of the patches here is hwtracing and some new mic
  drivers, but there's a lot of other driver updates as well.  Full
  details in the shortlog.

  All have been in linux-next for a while with no reported issues"

* tag 'char-misc-4.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (238 commits)
  goldfish: Fix build error of missing ioremap on UM
  nvmem: mediatek: Fix later provider initialization
  nvmem: imx-ocotp: Fix return value of imx_ocotp_read
  nvmem: Fix dependencies for !HAS_IOMEM archs
  char: genrtc: replace blacklist with whitelist
  drivers/hwtracing: make coresight-etm-perf.c explicitly non-modular
  drivers: char: mem: fix IS_ERROR_VALUE usage
  char: xillybus: Fix internal data structure initialization
  pch_phub: return -ENODATA if ROM can't be mapped
  Drivers: hv: vmbus: Support kexec on ws2012 r2 and above
  Drivers: hv: vmbus: Support handling messages on multiple CPUs
  Drivers: hv: utils: Remove util transport handler from list if registration fails
  Drivers: hv: util: Pass the channel information during the init call
  Drivers: hv: vmbus: avoid unneeded compiler optimizations in vmbus_wait_for_unload()
  Drivers: hv: vmbus: remove code duplication in message handling
  Drivers: hv: vmbus: avoid wait_for_completion() on crash
  Drivers: hv: vmbus: don't loose HVMSG_TIMER_EXPIRED messages
  misc: at24: replace memory_accessor with nvmem_device_read
  eeprom: 93xx46: extend driver to plug into the NVMEM framework
  eeprom: at25: extend driver to plug into the NVMEM framework
  ...
parents 1a4ab084 16617535
@@ -27,3 +27,17 @@ Description: The mapping of which primary/sub channels are bound to which
Virtual Processors.
Format: <channel's child_relid:the bound cpu's number>
Users: tools/hv/lsvmbus
What: /sys/bus/vmbus/devices/vmbus_*/device
Date: Dec. 2015
KernelVersion: 4.5
Contact: K. Y. Srinivasan <kys@microsoft.com>
Description: The 16 bit device ID of the device
Users: tools/hv/lsvmbus and user level RDMA libraries
What: /sys/bus/vmbus/devices/vmbus_*/vendor
Date: Dec. 2015
KernelVersion: 4.5
Contact: K. Y. Srinivasan <kys@microsoft.com>
Description: The 16 bit vendor ID of the device
Users: tools/hv/lsvmbus and user level RDMA libraries
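For illustration, a minimal user-space sketch of how a tool could read these two attributes; the vmbus_0 path is a placeholder, and real consumers such as tools/hv/lsvmbus enumerate the device directories first:

#include <stdio.h>

/* Sketch only: print the 16 bit vendor/device IDs of one (placeholder) device. */
int main(void)
{
	const char *attrs[] = { "/sys/bus/vmbus/devices/vmbus_0/vendor",
				"/sys/bus/vmbus/devices/vmbus_0/device" };
	unsigned int val;
	FILE *f;
	int i;

	for (i = 0; i < 2; i++) {
		f = fopen(attrs[i], "r");
		if (!f)
			return 1;
		if (fscanf(f, "%x", &val) == 1)	/* values are printed as hex */
			printf("%s: 0x%04x\n", attrs[i], val);
		fclose(f);
	}
	return 0;
}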
Android Goldfish QEMU Pipe
Android pipe virtual device generated by the Android emulator.
Required properties:
- compatible : should contain "google,android-pipe" to match emulator
- reg : <registers mapping>
- interrupts : <interrupt mapping>
Example:
android_pipe@a010000 {
compatible = "google,android-pipe";
reg = <0xff018000 0x2000>;
interrupts = <0x12>;
};
EEPROMs (SPI) compatible with Microchip Technology 93xx46 family.
Required properties:
- compatible : shall be one of:
"atmel,at93c46d"
"eeprom-93xx46"
- data-size : number of data bits per word (either 8 or 16)
Optional properties:
- read-only : parameter-less property which disables writes to the EEPROM
- select-gpios : if present, specifies the GPIO that will be asserted prior to
each access to the EEPROM (e.g. for SPI bus multiplexing)
Property rules described in Documentation/devicetree/bindings/spi/spi-bus.txt
apply. In particular, "reg" and "spi-max-frequency" properties must be given.
Example:
eeprom@0 {
compatible = "eeprom-93xx46";
reg = <0>;
spi-max-frequency = <1000000>;
spi-cs-high;
data-size = <8>;
select-gpios = <&gpio4 4 GPIO_ACTIVE_HIGH>;
};
* NXP LPC18xx EEPROM memory NVMEM driver
Required properties:
- compatible: Should be "nxp,lpc1857-eeprom"
- reg: Must contain an entry with the physical base address and length
for each entry in reg-names.
- reg-names: Must include the following entries.
- reg: EEPROM registers.
- mem: EEPROM address space.
- clocks: Must contain an entry for each entry in clock-names.
- clock-names: Must include the following entries.
- eeprom: EEPROM operating clock.
- resets: Should contain a reference to the reset controller asserting
the EEPROM in reset.
- interrupts: Should contain EEPROM interrupt.
Example:
eeprom: eeprom@4000e000 {
compatible = "nxp,lpc1857-eeprom";
reg = <0x4000e000 0x1000>,
<0x20040000 0x4000>;
reg-names = "reg", "mem";
clocks = <&ccu1 CLK_CPU_EEPROM>;
clock-names = "eeprom";
resets = <&rgu 27>;
interrupts = <4>;
};
= Mediatek MTK-EFUSE device tree bindings =
This binding is intended to represent MTK-EFUSE which is found in most Mediatek SOCs.
Required properties:
- compatible: should be "mediatek,mt8173-efuse" or "mediatek,efuse"
- reg: Should contain registers location and length
= Data cells =
These are child nodes of MTK-EFUSE, with bindings as described in
bindings/nvmem/nvmem.txt
Example:
efuse: efuse@10206000 {
compatible = "mediatek,mt8173-efuse";
reg = <0 0x10206000 0 0x1000>;
#address-cells = <1>;
#size-cells = <1>;
/* Data cells */
thermal_calibration: calib@528 {
reg = <0x528 0xc>;
};
};
= Data consumers =
These are device nodes which consume nvmem data cells.
For example:
thermal {
...
nvmem-cells = <&thermal_calibration>;
nvmem-cell-names = "calibration";
};
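For context, a hedged kernel-side sketch of the consumer end of this binding (the function is illustrative and not part of this series); it reads the "calibration" cell above through the nvmem consumer API:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/nvmem-consumer.h>
#include <linux/slab.h>

/* Sketch: read the calibration bytes referenced via nvmem-cells above. */
static int thermal_read_calibration(struct device *dev)
{
	struct nvmem_cell *cell;
	size_t len;
	void *buf;

	cell = devm_nvmem_cell_get(dev, "calibration");
	if (IS_ERR(cell))
		return PTR_ERR(cell);

	buf = nvmem_cell_read(cell, &len);	/* kmalloc'ed copy of the cell */
	if (IS_ERR(buf))
		return PTR_ERR(buf);

	dev_info(dev, "read %zu calibration bytes\n", len);
	kfree(buf);
	return 0;
}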
@@ -12,10 +12,19 @@ for the X100 devices.
Since it is a PCIe card, it does not have the ability to host hardware
devices for networking, storage and console. We provide these devices
on X100 coprocessors thus enabling a self-bootable equivalent
environment for applications. A key benefit of our solution is that it
leverages the standard virtio framework for network, disk and console
devices, though in our case the virtio framework is used across a PCIe
bus. A Virtio Over PCIe (VOP) driver allows creating user space
backends or devices on the host which are used to probe virtio drivers
for these devices on the MIC card. The existing VRINGH infrastructure
in the kernel is used to access virtio rings from the host. The card
VOP driver allows card virtio drivers to communicate with their user
space backends on the host via a device page. Ring 3 apps on the host
can add, remove and configure virtio devices. A thin MIC specific
virtio_config_ops is implemented which is borrowed heavily from
previous similar implementations in lguest and s390.
MIC PCIe card has a dma controller with 8 channels. These channels are
shared between the host s/w and the card s/w. 0 to 3 are used by host
@@ -38,7 +47,6 @@ single threaded performance for the host compared to MIC, the ability of
the host to initiate DMA's to/from the card using the MIC DMA engine and
the fact that the virtio block storage backend can only be on the host.
@@ -47,27 +55,25 @@ the fact that the virtio block storage backend can only be on the host.
[ASCII block diagram: the Card OS and Host OS software stacks, connected over PCIe.
Card side: Virtio Net/Console/Block drivers over the VOP, SCIF and MIC DMA drivers
and their HW buses. Host side: user-space Virtio net/console/block backends entering
the kernel through the Virtio over PCIe IOCTLs, over the SCIF, COSM, VOP and MIC DMA
drivers and buses. The Intel MIC Card Driver and Intel MIC Host Driver sit at the
bottom of the respective stacks.]
...
@@ -35,7 +35,7 @@
exec=/usr/sbin/mpssd
sysfs="/sys/class/mic"
-mic_modules="mic_host mic_x100_dma scif"
+mic_modules="mic_host mic_x100_dma scif vop"
start()
{
...
@@ -926,7 +926,7 @@ add_virtio_device(struct mic_info *mic, struct mic_device_desc *dd)
char path[PATH_MAX];
int fd, err;
-snprintf(path, PATH_MAX, "/dev/mic%d", mic->id);
+snprintf(path, PATH_MAX, "/dev/vop_virtio%d", mic->id);
fd = open(path, O_RDWR);
if (fd < 0) {
mpsslog("Could not open %s %s\n", path, strerror(errno));
...
@@ -231,15 +231,15 @@ IT knows when a platform crashes even when there is a hard failure on the host.
The Intel AMT Watchdog is composed of two parts:
1) Firmware feature - receives the heartbeats
and sends an event when the heartbeats stop.
-2) Intel MEI driver - connects to the watchdog feature, configures the
-watchdog and sends the heartbeats.
+2) Intel MEI iAMT watchdog driver - connects to the watchdog feature,
+configures the watchdog and sends the heartbeats.
-The Intel MEI driver uses the kernel watchdog API to configure the Intel AMT
-Watchdog and to send heartbeats to it. The default timeout of the
+The Intel iAMT watchdog MEI driver uses the kernel watchdog API to configure
+the Intel AMT Watchdog and to send heartbeats to it. The default timeout of the
watchdog is 120 seconds.
-If the Intel AMT Watchdog feature does not exist (i.e. the connection failed),
-the Intel MEI driver will disable the sending of heartbeats.
+If the Intel AMT is not enabled in the firmware then the watchdog client won't enumerate
+on the me client bus and watchdog devices won't be exposed.
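As a rough illustration of the kernel watchdog API mentioned above (a generic user-space sketch, not code from the MEI driver), a daemon keeps the watchdog from firing by sending periodic heartbeats:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/watchdog.h>

int main(void)
{
	int timeout = 120;	/* matches the driver's default timeout */
	int fd = open("/dev/watchdog", O_WRONLY);

	if (fd < 0)
		return 1;
	ioctl(fd, WDIOC_SETTIMEOUT, &timeout);	/* optional reconfigure */
	for (;;) {
		ioctl(fd, WDIOC_KEEPALIVE, 0);	/* send a heartbeat */
		sleep(timeout / 2);
	}
}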
Supported Chipsets
...
@@ -5765,6 +5765,7 @@ S: Supported
F: include/uapi/linux/mei.h
F: include/linux/mei_cl_bus.h
F: drivers/misc/mei/*
F: drivers/watchdog/mei_wdt.c
F: Documentation/misc-devices/mei/*
INTEL MIC DRIVERS (mic)
@@ -6598,6 +6599,11 @@ F: samples/livepatch/
L: live-patching@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching.git
LINUX KERNEL DUMP TEST MODULE (LKDTM)
M: Kees Cook <keescook@chromium.org>
S: Maintained
F: drivers/misc/lkdtm.c
LLC (802.2)
M: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
S: Maintained
...
@@ -562,8 +562,7 @@ tps659038_gpio: tps659038_gpio {
extcon_usb2: tps659038_usb {
compatible = "ti,palmas-usb-vid";
ti,enable-vbus-detection;
-ti,enable-id-detection;
-id-gpios = <&gpio7 24 GPIO_ACTIVE_HIGH>;
+vbus-gpio = <&gpio4 21 GPIO_ACTIVE_HIGH>;
};
};
...
@@ -115,13 +115,14 @@ static void mityomapl138_cpufreq_init(const char *partnum)
static void mityomapl138_cpufreq_init(const char *partnum) { }
#endif
-static void read_factory_config(struct memory_accessor *a, void *context)
+static void read_factory_config(struct nvmem_device *nvmem, void *context)
{
int ret;
const char *partnum = NULL;
struct davinci_soc_info *soc_info = &davinci_soc_info;
-ret = a->read(a, (char *)&factory_config, 0, sizeof(factory_config));
+ret = nvmem_device_read(nvmem, 0, sizeof(factory_config),
+&factory_config);
if (ret != sizeof(struct factory_config)) {
pr_warn("Read Factory Config Failed: %d\n", ret);
goto bad_config;
...
@@ -28,13 +28,13 @@ EXPORT_SYMBOL(davinci_soc_info);
void __iomem *davinci_intc_base;
int davinci_intc_type;
-void davinci_get_mac_addr(struct memory_accessor *mem_acc, void *context)
+void davinci_get_mac_addr(struct nvmem_device *nvmem, void *context)
{
char *mac_addr = davinci_soc_info.emac_pdata->mac_addr;
off_t offset = (off_t)context;
/* Read MAC addr from EEPROM */
-if (mem_acc->read(mem_acc, mac_addr, offset, ETH_ALEN) == ETH_ALEN)
+if (nvmem_device_read(nvmem, offset, ETH_ALEN, mac_addr) == ETH_ALEN)
pr_info("Read MAC addr from EEPROM: %pM\n", mac_addr);
}
...
@@ -1321,6 +1321,7 @@ static void binder_transaction(struct binder_proc *proc,
struct binder_transaction *t;
struct binder_work *tcomplete;
binder_size_t *offp, *off_end;
binder_size_t off_min;
struct binder_proc *target_proc;
struct binder_thread *target_thread = NULL;
struct binder_node *target_node = NULL;
@@ -1522,18 +1523,24 @@ static void binder_transaction(struct binder_proc *proc,
goto err_bad_offset;
}
off_end = (void *)offp + tr->offsets_size;
off_min = 0;
for (; offp < off_end; offp++) {
struct flat_binder_object *fp;
if (*offp > t->buffer->data_size - sizeof(*fp) ||
*offp < off_min ||
t->buffer->data_size < sizeof(*fp) ||
!IS_ALIGNED(*offp, sizeof(u32))) {
-binder_user_error("%d:%d got transaction with invalid offset, %lld\n",
-proc->pid, thread->pid, (u64)*offp);
+binder_user_error("%d:%d got transaction with invalid offset, %lld (min %lld, max %lld)\n",
+proc->pid, thread->pid, (u64)*offp,
+(u64)off_min,
+(u64)(t->buffer->data_size -
+sizeof(*fp)));
return_error = BR_FAILED_REPLY;
goto err_bad_offset;
}
fp = (struct flat_binder_object *)(t->buffer->data + *offp);
off_min = *offp + sizeof(struct flat_binder_object);
switch (fp->type) {
case BINDER_TYPE_BINDER:
case BINDER_TYPE_WEAK_BINDER: {
@@ -3593,13 +3600,24 @@ static int binder_transactions_show(struct seq_file *m, void *unused)
static int binder_proc_show(struct seq_file *m, void *unused)
{
struct binder_proc *itr;
struct binder_proc *proc = m->private;
int do_lock = !binder_debug_no_lock;
bool valid_proc = false;
if (do_lock)
binder_lock(__func__);
-seq_puts(m, "binder proc state:\n");
-print_binder_proc(m, proc, 1);
+hlist_for_each_entry(itr, &binder_procs, proc_node) {
+if (itr == proc) {
+valid_proc = true;
+break;
+}
+}
+if (valid_proc) {
+seq_puts(m, "binder proc state:\n");
+print_binder_proc(m, proc, 1);
+}
if (do_lock)
binder_unlock(__func__);
return 0;
...
@@ -258,7 +258,7 @@ static void __fw_free_buf(struct kref *ref)
vunmap(buf->data);
for (i = 0; i < buf->nr_pages; i++)
__free_page(buf->pages[i]);
-kfree(buf->pages);
+vfree(buf->pages);
} else
#endif
vfree(buf->data);
@@ -635,7 +635,7 @@ static ssize_t firmware_loading_store(struct device *dev,
if (!test_bit(FW_STATUS_DONE, &fw_buf->status)) {
for (i = 0; i < fw_buf->nr_pages; i++)
__free_page(fw_buf->pages[i]);
-kfree(fw_buf->pages);
+vfree(fw_buf->pages);
fw_buf->pages = NULL;
fw_buf->page_array_size = 0;
fw_buf->nr_pages = 0;
@@ -746,8 +746,7 @@ static int fw_realloc_buffer(struct firmware_priv *fw_priv, int min_size)
buf->page_array_size * 2);
struct page **new_pages;
-new_pages = kmalloc(new_array_size * sizeof(void *),
-GFP_KERNEL);
+new_pages = vmalloc(new_array_size * sizeof(void *));
if (!new_pages) {
fw_load_abort(fw_priv);
return -ENOMEM;
@@ -756,7 +755,7 @@ static int fw_realloc_buffer(struct firmware_priv *fw_priv, int min_size)
buf->page_array_size * sizeof(void *));
memset(&new_pages[buf->page_array_size], 0, sizeof(void *) *
(new_array_size - buf->page_array_size));
-kfree(buf->pages);
+vfree(buf->pages);
buf->pages = new_pages;
buf->page_array_size = new_array_size;
}
...
@@ -328,7 +328,8 @@ config JS_RTC
config GEN_RTC
tristate "Generic /dev/rtc emulation"
-depends on RTC!=y && !IA64 && !ARM && !M32R && !MIPS && !SPARC && !FRV && !S390 && !SUPERH && !AVR32 && !BLACKFIN && !UML
+depends on RTC!=y
+depends on ALPHA || M68K || MN10300 || PARISC || PPC || X86
---help---
If you say Y here and create a character special file /dev/rtc with
major number 10 and minor number 135 using mknod ("man mknod"), you
...
@@ -695,7 +695,7 @@ static loff_t memory_lseek(struct file *file, loff_t offset, int orig)
offset += file->f_pos;
case SEEK_SET:
/* to avoid userland mistaking f_pos=-9 as -EBADF=-9 */
-if (IS_ERR_VALUE((unsigned long long)offset)) {
+if ((unsigned long long)offset >= -MAX_ERRNO) {
ret = -EOVERFLOW;
break;
}
...
@@ -496,12 +496,12 @@ static void pc_set_checksum(void)
#ifdef CONFIG_PROC_FS
-static char *floppy_types[] = {
+static const char * const floppy_types[] = {
"none", "5.25'' 360k", "5.25'' 1.2M", "3.5'' 720k", "3.5'' 1.44M",
"3.5'' 2.88M", "3.5'' 2.88M"
};
-static char *gfx_types[] = {
+static const char * const gfx_types[] = {
"EGA, VGA, ... (with BIOS)",
"CGA (40 cols)",
"CGA (80 cols)",
@@ -602,7 +602,7 @@ static void atari_set_checksum(void)
static struct {
unsigned char val;
-char *name;
+const char *name;
} boot_prefs[] = {
{ 0x80, "TOS" },
{ 0x40, "ASV" },
@@ -611,7 +611,7 @@ static struct {
{ 0x00, "unspecified" }
};
-static char *languages[] = {
+static const char * const languages[] = {
"English (US)",
"German",
"French",
@@ -623,7 +623,7 @@ static char *languages[] = {
"Swiss (German)"
};
-static char *dateformat[] = {
+static const char * const dateformat[] = {
"MM%cDD%cYY",
"DD%cMM%cYY",
"YY%cMM%cDD",
@@ -634,7 +634,7 @@ static char *dateformat[] = {
"7 (undefined)"
};
-static char *colors[] = {
+static const char * const colors[] = {
"2", "4", "16", "256", "65536", "??", "??", "??"
};
...
@@ -129,10 +129,9 @@ static void button_consume_callbacks (int bpcount)
static void button_sequence_finished (unsigned long parameters)
{
-#ifdef CONFIG_NWBUTTON_REBOOT /* Reboot using button is enabled */
-if (button_press_count == reboot_count)
+if (IS_ENABLED(CONFIG_NWBUTTON_REBOOT) &&
+button_press_count == reboot_count)
kill_cad_pid(SIGINT, 1); /* Ask init to reboot us */
-#endif /* CONFIG_NWBUTTON_REBOOT */
button_consume_callbacks (button_press_count);
bcount = sprintf (button_output_buffer, "%d\n", button_press_count);
button_press_count = 0; /* Reset the button press counter */
...
@@ -334,10 +334,8 @@ static int __init raw_init(void)
cdev_init(&raw_cdev, &raw_fops);
ret = cdev_add(&raw_cdev, dev, max_raw_minors);
-if (ret) {
+if (ret)
goto error_region;
-}
raw_class = class_create(THIS_MODULE, "raw");
if (IS_ERR(raw_class)) {
printk(KERN_ERR "Error creating raw class.\n");
...
@@ -509,7 +509,7 @@ static int xilly_setupchannels(struct xilly_endpoint *ep,
channel->log2_element_size = ((format > 2) ?
2 : format);
-bytebufsize = channel->rd_buf_size = bufsize *
+bytebufsize = bufsize *
(1 << channel->log2_element_size);
buffers = devm_kcalloc(dev, bufnum,
@@ -523,6 +523,7 @@ static int xilly_setupchannels(struct xilly_endpoint *ep,
if (!is_writebuf) {
channel->num_rd_buffers = bufnum;
channel->rd_buf_size = bytebufsize;
channel->rd_allow_partial = allowpartial;
channel->rd_synchronous = synchronous;
channel->rd_exclusive_open = exclusive_open;
@@ -533,6 +534,7 @@ static int xilly_setupchannels(struct xilly_endpoint *ep,
bufnum, bytebufsize);
} else if (channelnum > 0) {
channel->num_wr_buffers = bufnum;
channel->wr_buf_size = bytebufsize;
channel->seekable = seekable;
channel->wr_supports_nonempty = supports_nonempty;
...
@@ -185,7 +185,7 @@ static void arizona_extcon_hp_clamp(struct arizona_extcon_info *info,
break;
};
-mutex_lock(&arizona->dapm->card->dapm_mutex);
+snd_soc_dapm_mutex_lock(arizona->dapm);
arizona->hpdet_clamp = clamp;
@@ -227,7 +227,7 @@ static void arizona_extcon_hp_clamp(struct arizona_extcon_info *info,
ret);
}
-mutex_unlock(&arizona->dapm->card->dapm_mutex);
+snd_soc_dapm_mutex_unlock(arizona->dapm);
}
static void arizona_extcon_set_mode(struct arizona_extcon_info *info, int mode)
...
@@ -126,7 +126,7 @@ static int gpio_extcon_probe(struct platform_device *pdev)
INIT_DELAYED_WORK(&data->work, gpio_extcon_work);
/*
-* Request the interrput of gpio to detect whether external connector
+* Request the interrupt of gpio to detect whether external connector
* is attached or detached.
*/
ret = devm_request_any_context_irq(&pdev->dev, data->irq,
...
@@ -150,6 +150,7 @@ enum max14577_muic_acc_type {
static const unsigned int max14577_extcon_cable[] = {
EXTCON_USB,
EXTCON_CHG_USB_SDP,
EXTCON_CHG_USB_DCP,
EXTCON_CHG_USB_FAST,
EXTCON_CHG_USB_SLOW,
@@ -454,6 +455,8 @@ static int max14577_muic_chg_handler(struct max14577_muic_info *info)
return ret;
extcon_set_cable_state_(info->edev, EXTCON_USB, attached);
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_SDP,
attached);
break;
case MAX14577_CHARGER_TYPE_DEDICATED_CHG:
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_DCP,
...
...@@ -204,6 +204,7 @@ enum max77693_muic_acc_type { ...@@ -204,6 +204,7 @@ enum max77693_muic_acc_type {
static const unsigned int max77693_extcon_cable[] = { static const unsigned int max77693_extcon_cable[] = {
EXTCON_USB, EXTCON_USB,
EXTCON_USB_HOST, EXTCON_USB_HOST,
EXTCON_CHG_USB_SDP,
EXTCON_CHG_USB_DCP, EXTCON_CHG_USB_DCP,
EXTCON_CHG_USB_FAST, EXTCON_CHG_USB_FAST,
EXTCON_CHG_USB_SLOW, EXTCON_CHG_USB_SLOW,
...@@ -512,8 +513,11 @@ static int max77693_muic_dock_handler(struct max77693_muic_info *info, ...@@ -512,8 +513,11 @@ static int max77693_muic_dock_handler(struct max77693_muic_info *info,
break; break;
case MAX77693_MUIC_ADC_AV_CABLE_NOLOAD: /* Dock-Audio */ case MAX77693_MUIC_ADC_AV_CABLE_NOLOAD: /* Dock-Audio */
dock_id = EXTCON_DOCK; dock_id = EXTCON_DOCK;
if (!attached) if (!attached) {
extcon_set_cable_state_(info->edev, EXTCON_USB, false); extcon_set_cable_state_(info->edev, EXTCON_USB, false);
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_SDP,
false);
}
break; break;
default: default:
dev_err(info->dev, "failed to detect %s dock device\n", dev_err(info->dev, "failed to detect %s dock device\n",
...@@ -601,6 +605,8 @@ static int max77693_muic_adc_ground_handler(struct max77693_muic_info *info) ...@@ -601,6 +605,8 @@ static int max77693_muic_adc_ground_handler(struct max77693_muic_info *info)
if (ret < 0) if (ret < 0)
return ret; return ret;
extcon_set_cable_state_(info->edev, EXTCON_USB, attached); extcon_set_cable_state_(info->edev, EXTCON_USB, attached);
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_SDP,
attached);
break; break;
case MAX77693_MUIC_GND_MHL: case MAX77693_MUIC_GND_MHL:
case MAX77693_MUIC_GND_MHL_VB: case MAX77693_MUIC_GND_MHL_VB:
...@@ -830,6 +836,8 @@ static int max77693_muic_chg_handler(struct max77693_muic_info *info) ...@@ -830,6 +836,8 @@ static int max77693_muic_chg_handler(struct max77693_muic_info *info)
*/ */
extcon_set_cable_state_(info->edev, EXTCON_USB, extcon_set_cable_state_(info->edev, EXTCON_USB,
attached); attached);
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_SDP,
attached);
if (!cable_attached) if (!cable_attached)
extcon_set_cable_state_(info->edev, EXTCON_DOCK, extcon_set_cable_state_(info->edev, EXTCON_DOCK,
...@@ -899,6 +907,8 @@ static int max77693_muic_chg_handler(struct max77693_muic_info *info) ...@@ -899,6 +907,8 @@ static int max77693_muic_chg_handler(struct max77693_muic_info *info)
extcon_set_cable_state_(info->edev, EXTCON_USB, extcon_set_cable_state_(info->edev, EXTCON_USB,
attached); attached);
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_SDP,
attached);
break; break;
case MAX77693_CHARGER_TYPE_DEDICATED_CHG: case MAX77693_CHARGER_TYPE_DEDICATED_CHG:
/* Only TA cable */ /* Only TA cable */
...
...@@ -122,6 +122,7 @@ enum max77843_muic_charger_type { ...@@ -122,6 +122,7 @@ enum max77843_muic_charger_type {
static const unsigned int max77843_extcon_cable[] = { static const unsigned int max77843_extcon_cable[] = {
EXTCON_USB, EXTCON_USB,
EXTCON_USB_HOST, EXTCON_USB_HOST,
EXTCON_CHG_USB_SDP,
EXTCON_CHG_USB_DCP, EXTCON_CHG_USB_DCP,
EXTCON_CHG_USB_CDP, EXTCON_CHG_USB_CDP,
EXTCON_CHG_USB_FAST, EXTCON_CHG_USB_FAST,
...@@ -486,6 +487,8 @@ static int max77843_muic_chg_handler(struct max77843_muic_info *info) ...@@ -486,6 +487,8 @@ static int max77843_muic_chg_handler(struct max77843_muic_info *info)
return ret; return ret;
extcon_set_cable_state_(info->edev, EXTCON_USB, attached); extcon_set_cable_state_(info->edev, EXTCON_USB, attached);
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_SDP,
attached);
break; break;
case MAX77843_MUIC_CHG_DOWNSTREAM: case MAX77843_MUIC_CHG_DOWNSTREAM:
ret = max77843_muic_set_path(info, ret = max77843_muic_set_path(info,
...@@ -803,7 +806,7 @@ static int max77843_muic_probe(struct platform_device *pdev) ...@@ -803,7 +806,7 @@ static int max77843_muic_probe(struct platform_device *pdev)
/* Clear IRQ bits before request IRQs */ /* Clear IRQ bits before request IRQs */
ret = regmap_bulk_read(max77843->regmap_muic, ret = regmap_bulk_read(max77843->regmap_muic,
MAX77843_MUIC_REG_INT1, info->status, MAX77843_MUIC_REG_INT1, info->status,
MAX77843_MUIC_IRQ_NUM); MAX77843_MUIC_STATUS_NUM);
if (ret) { if (ret) {
dev_err(&pdev->dev, "Failed to Clear IRQ bits\n"); dev_err(&pdev->dev, "Failed to Clear IRQ bits\n");
goto err_muic_irq; goto err_muic_irq;
...
...@@ -148,6 +148,7 @@ struct max8997_muic_info { ...@@ -148,6 +148,7 @@ struct max8997_muic_info {
static const unsigned int max8997_extcon_cable[] = { static const unsigned int max8997_extcon_cable[] = {
EXTCON_USB, EXTCON_USB,
EXTCON_USB_HOST, EXTCON_USB_HOST,
EXTCON_CHG_USB_SDP,
EXTCON_CHG_USB_DCP, EXTCON_CHG_USB_DCP,
EXTCON_CHG_USB_FAST, EXTCON_CHG_USB_FAST,
EXTCON_CHG_USB_SLOW, EXTCON_CHG_USB_SLOW,
...@@ -334,6 +335,8 @@ static int max8997_muic_handle_usb(struct max8997_muic_info *info, ...@@ -334,6 +335,8 @@ static int max8997_muic_handle_usb(struct max8997_muic_info *info,
break; break;
case MAX8997_USB_DEVICE: case MAX8997_USB_DEVICE:
extcon_set_cable_state_(info->edev, EXTCON_USB, attached); extcon_set_cable_state_(info->edev, EXTCON_USB, attached);
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_SDP,
attached);
break; break;
default: default:
dev_err(info->dev, "failed to detect %s usb cable\n", dev_err(info->dev, "failed to detect %s usb cable\n",
...
...@@ -216,11 +216,23 @@ static int palmas_usb_probe(struct platform_device *pdev) ...@@ -216,11 +216,23 @@ static int palmas_usb_probe(struct platform_device *pdev)
return PTR_ERR(palmas_usb->id_gpiod); return PTR_ERR(palmas_usb->id_gpiod);
} }
palmas_usb->vbus_gpiod = devm_gpiod_get_optional(&pdev->dev, "vbus",
GPIOD_IN);
if (IS_ERR(palmas_usb->vbus_gpiod)) {
dev_err(&pdev->dev, "failed to get vbus gpio\n");
return PTR_ERR(palmas_usb->vbus_gpiod);
}
if (palmas_usb->enable_id_detection && palmas_usb->id_gpiod) { if (palmas_usb->enable_id_detection && palmas_usb->id_gpiod) {
palmas_usb->enable_id_detection = false; palmas_usb->enable_id_detection = false;
palmas_usb->enable_gpio_id_detection = true; palmas_usb->enable_gpio_id_detection = true;
} }
if (palmas_usb->enable_vbus_detection && palmas_usb->vbus_gpiod) {
palmas_usb->enable_vbus_detection = false;
palmas_usb->enable_gpio_vbus_detection = true;
}
if (palmas_usb->enable_gpio_id_detection) { if (palmas_usb->enable_gpio_id_detection) {
u32 debounce; u32 debounce;
...@@ -266,7 +278,7 @@ static int palmas_usb_probe(struct platform_device *pdev) ...@@ -266,7 +278,7 @@ static int palmas_usb_probe(struct platform_device *pdev)
palmas_usb->id_irq, palmas_usb->id_irq,
NULL, palmas_id_irq_handler, NULL, palmas_id_irq_handler,
IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING |
IRQF_ONESHOT | IRQF_EARLY_RESUME, IRQF_ONESHOT,
"palmas_usb_id", palmas_usb); "palmas_usb_id", palmas_usb);
if (status < 0) { if (status < 0) {
dev_err(&pdev->dev, "can't get IRQ %d, err %d\n", dev_err(&pdev->dev, "can't get IRQ %d, err %d\n",
...@@ -304,13 +316,47 @@ static int palmas_usb_probe(struct platform_device *pdev) ...@@ -304,13 +316,47 @@ static int palmas_usb_probe(struct platform_device *pdev)
palmas_usb->vbus_irq, NULL, palmas_usb->vbus_irq, NULL,
palmas_vbus_irq_handler, palmas_vbus_irq_handler,
IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING |
IRQF_ONESHOT | IRQF_EARLY_RESUME, IRQF_ONESHOT,
"palmas_usb_vbus", palmas_usb); "palmas_usb_vbus", palmas_usb);
if (status < 0) { if (status < 0) {
dev_err(&pdev->dev, "can't get IRQ %d, err %d\n", dev_err(&pdev->dev, "can't get IRQ %d, err %d\n",
palmas_usb->vbus_irq, status); palmas_usb->vbus_irq, status);
return status; return status;
} }
} else if (palmas_usb->enable_gpio_vbus_detection) {
/* remux GPIO_1 as VBUSDET */
status = palmas_update_bits(palmas,
PALMAS_PU_PD_OD_BASE,
PALMAS_PRIMARY_SECONDARY_PAD1,
PALMAS_PRIMARY_SECONDARY_PAD1_GPIO_1_MASK,
(1 << PALMAS_PRIMARY_SECONDARY_PAD1_GPIO_1_SHIFT));
if (status < 0) {
dev_err(&pdev->dev, "can't remux GPIO1\n");
return status;
}
palmas_usb->vbus_otg_irq = regmap_irq_get_virq(palmas->irq_data,
PALMAS_VBUS_OTG_IRQ);
palmas_usb->gpio_vbus_irq = gpiod_to_irq(palmas_usb->vbus_gpiod);
if (palmas_usb->gpio_vbus_irq < 0) {
dev_err(&pdev->dev, "failed to get vbus irq\n");
return palmas_usb->gpio_vbus_irq;
}
status = devm_request_threaded_irq(&pdev->dev,
palmas_usb->gpio_vbus_irq,
NULL,
palmas_vbus_irq_handler,
IRQF_TRIGGER_FALLING |
IRQF_TRIGGER_RISING |
IRQF_ONESHOT |
IRQF_EARLY_RESUME,
"palmas_usb_vbus",
palmas_usb);
if (status < 0) {
dev_err(&pdev->dev,
"failed to request handler for vbus irq\n");
return status;
}
} }
palmas_enable_irq(palmas_usb); palmas_enable_irq(palmas_usb);
...@@ -337,6 +383,8 @@ static int palmas_usb_suspend(struct device *dev) ...@@ -337,6 +383,8 @@ static int palmas_usb_suspend(struct device *dev)
if (device_may_wakeup(dev)) { if (device_may_wakeup(dev)) {
if (palmas_usb->enable_vbus_detection) if (palmas_usb->enable_vbus_detection)
enable_irq_wake(palmas_usb->vbus_irq); enable_irq_wake(palmas_usb->vbus_irq);
if (palmas_usb->enable_gpio_vbus_detection)
enable_irq_wake(palmas_usb->gpio_vbus_irq);
if (palmas_usb->enable_id_detection) if (palmas_usb->enable_id_detection)
enable_irq_wake(palmas_usb->id_irq); enable_irq_wake(palmas_usb->id_irq);
if (palmas_usb->enable_gpio_id_detection) if (palmas_usb->enable_gpio_id_detection)
...@@ -352,6 +400,8 @@ static int palmas_usb_resume(struct device *dev) ...@@ -352,6 +400,8 @@ static int palmas_usb_resume(struct device *dev)
if (device_may_wakeup(dev)) { if (device_may_wakeup(dev)) {
if (palmas_usb->enable_vbus_detection) if (palmas_usb->enable_vbus_detection)
disable_irq_wake(palmas_usb->vbus_irq); disable_irq_wake(palmas_usb->vbus_irq);
if (palmas_usb->enable_gpio_vbus_detection)
disable_irq_wake(palmas_usb->gpio_vbus_irq);
if (palmas_usb->enable_id_detection) if (palmas_usb->enable_id_detection)
disable_irq_wake(palmas_usb->id_irq); disable_irq_wake(palmas_usb->id_irq);
if (palmas_usb->enable_gpio_id_detection) if (palmas_usb->enable_gpio_id_detection)
...
...@@ -93,6 +93,7 @@ static struct reg_data rt8973a_reg_data[] = { ...@@ -93,6 +93,7 @@ static struct reg_data rt8973a_reg_data[] = {
static const unsigned int rt8973a_extcon_cable[] = { static const unsigned int rt8973a_extcon_cable[] = {
EXTCON_USB, EXTCON_USB,
EXTCON_USB_HOST, EXTCON_USB_HOST,
EXTCON_CHG_USB_SDP,
EXTCON_CHG_USB_DCP, EXTCON_CHG_USB_DCP,
EXTCON_JIG, EXTCON_JIG,
EXTCON_NONE, EXTCON_NONE,
...@@ -398,6 +399,9 @@ static int rt8973a_muic_cable_handler(struct rt8973a_muic_info *info, ...@@ -398,6 +399,9 @@ static int rt8973a_muic_cable_handler(struct rt8973a_muic_info *info,
/* Change the state of external accessory */ /* Change the state of external accessory */
extcon_set_cable_state_(info->edev, id, attached); extcon_set_cable_state_(info->edev, id, attached);
if (id == EXTCON_USB)
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_SDP,
attached);
return 0; return 0;
} }
...@@ -663,7 +667,7 @@ MODULE_DEVICE_TABLE(of, rt8973a_dt_match); ...@@ -663,7 +667,7 @@ MODULE_DEVICE_TABLE(of, rt8973a_dt_match);
#ifdef CONFIG_PM_SLEEP #ifdef CONFIG_PM_SLEEP
static int rt8973a_muic_suspend(struct device *dev) static int rt8973a_muic_suspend(struct device *dev)
{ {
struct i2c_client *i2c = container_of(dev, struct i2c_client, dev); struct i2c_client *i2c = to_i2c_client(dev);
struct rt8973a_muic_info *info = i2c_get_clientdata(i2c); struct rt8973a_muic_info *info = i2c_get_clientdata(i2c);
enable_irq_wake(info->irq); enable_irq_wake(info->irq);
...@@ -673,7 +677,7 @@ static int rt8973a_muic_suspend(struct device *dev) ...@@ -673,7 +677,7 @@ static int rt8973a_muic_suspend(struct device *dev)
static int rt8973a_muic_resume(struct device *dev) static int rt8973a_muic_resume(struct device *dev)
{ {
struct i2c_client *i2c = container_of(dev, struct i2c_client, dev); struct i2c_client *i2c = to_i2c_client(dev);
struct rt8973a_muic_info *info = i2c_get_clientdata(i2c); struct rt8973a_muic_info *info = i2c_get_clientdata(i2c);
disable_irq_wake(info->irq); disable_irq_wake(info->irq);
...
...@@ -95,6 +95,7 @@ static struct reg_data sm5502_reg_data[] = { ...@@ -95,6 +95,7 @@ static struct reg_data sm5502_reg_data[] = {
static const unsigned int sm5502_extcon_cable[] = { static const unsigned int sm5502_extcon_cable[] = {
EXTCON_USB, EXTCON_USB,
EXTCON_USB_HOST, EXTCON_USB_HOST,
EXTCON_CHG_USB_SDP,
EXTCON_CHG_USB_DCP, EXTCON_CHG_USB_DCP,
EXTCON_NONE, EXTCON_NONE,
}; };
...@@ -411,6 +412,9 @@ static int sm5502_muic_cable_handler(struct sm5502_muic_info *info, ...@@ -411,6 +412,9 @@ static int sm5502_muic_cable_handler(struct sm5502_muic_info *info,
/* Change the state of external accessory */ /* Change the state of external accessory */
extcon_set_cable_state_(info->edev, id, attached); extcon_set_cable_state_(info->edev, id, attached);
if (id == EXTCON_USB)
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_SDP,
attached);
return 0; return 0;
} }
...@@ -655,7 +659,7 @@ MODULE_DEVICE_TABLE(of, sm5502_dt_match); ...@@ -655,7 +659,7 @@ MODULE_DEVICE_TABLE(of, sm5502_dt_match);
#ifdef CONFIG_PM_SLEEP #ifdef CONFIG_PM_SLEEP
static int sm5502_muic_suspend(struct device *dev) static int sm5502_muic_suspend(struct device *dev)
{ {
struct i2c_client *i2c = container_of(dev, struct i2c_client, dev); struct i2c_client *i2c = to_i2c_client(dev);
struct sm5502_muic_info *info = i2c_get_clientdata(i2c); struct sm5502_muic_info *info = i2c_get_clientdata(i2c);
enable_irq_wake(info->irq); enable_irq_wake(info->irq);
...@@ -665,7 +669,7 @@ static int sm5502_muic_suspend(struct device *dev) ...@@ -665,7 +669,7 @@ static int sm5502_muic_suspend(struct device *dev)
static int sm5502_muic_resume(struct device *dev) static int sm5502_muic_resume(struct device *dev)
{ {
struct i2c_client *i2c = container_of(dev, struct i2c_client, dev); struct i2c_client *i2c = to_i2c_client(dev);
struct sm5502_muic_info *info = i2c_get_clientdata(i2c); struct sm5502_muic_info *info = i2c_get_clientdata(i2c);
disable_irq_wake(info->irq); disable_irq_wake(info->irq);
...
@@ -219,6 +219,21 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
}
EXPORT_SYMBOL_GPL(vmbus_open);
/* Used for Hyper-V Socket: a guest client's connect() to the host */
int vmbus_send_tl_connect_request(const uuid_le *shv_guest_servie_id,
const uuid_le *shv_host_servie_id)
{
struct vmbus_channel_tl_connect_request conn_msg;
memset(&conn_msg, 0, sizeof(conn_msg));
conn_msg.header.msgtype = CHANNELMSG_TL_CONNECT_REQUEST;
conn_msg.guest_endpoint_id = *shv_guest_servie_id;
conn_msg.host_service_id = *shv_host_servie_id;
return vmbus_post_msg(&conn_msg, sizeof(conn_msg));
}
EXPORT_SYMBOL_GPL(vmbus_send_tl_connect_request);
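For context, a hedged sketch of a caller of this new export; the GUIDs below are placeholders, and a real Hyper-V socket implementation supplies the actual endpoint and service IDs:

#include <linux/hyperv.h>
#include <linux/uuid.h>

/* Hypothetical caller: ask the host to connect to a given service. */
static int example_hvsock_connect(void)
{
	uuid_le guest_ep = NULL_UUID_LE;	/* placeholder endpoint id */
	uuid_le host_svc = NULL_UUID_LE;	/* placeholder service id */

	return vmbus_send_tl_connect_request(&guest_ep, &host_svc);
}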
/*
 * create_gpadl_header - Creates a gpadl for the specified buffer
 */
...@@ -624,6 +639,7 @@ int vmbus_sendpacket_ctl(struct vmbus_channel *channel, void *buffer, ...@@ -624,6 +639,7 @@ int vmbus_sendpacket_ctl(struct vmbus_channel *channel, void *buffer,
u64 aligned_data = 0; u64 aligned_data = 0;
int ret; int ret;
bool signal = false; bool signal = false;
bool lock = channel->acquire_ring_lock;
int num_vecs = ((bufferlen != 0) ? 3 : 1); int num_vecs = ((bufferlen != 0) ? 3 : 1);
...@@ -643,7 +659,7 @@ int vmbus_sendpacket_ctl(struct vmbus_channel *channel, void *buffer, ...@@ -643,7 +659,7 @@ int vmbus_sendpacket_ctl(struct vmbus_channel *channel, void *buffer,
bufferlist[2].iov_len = (packetlen_aligned - packetlen); bufferlist[2].iov_len = (packetlen_aligned - packetlen);
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, num_vecs, ret = hv_ringbuffer_write(&channel->outbound, bufferlist, num_vecs,
&signal); &signal, lock);
/* /*
* Signalling the host is conditional on many factors: * Signalling the host is conditional on many factors:
...@@ -659,6 +675,9 @@ int vmbus_sendpacket_ctl(struct vmbus_channel *channel, void *buffer, ...@@ -659,6 +675,9 @@ int vmbus_sendpacket_ctl(struct vmbus_channel *channel, void *buffer,
* If we cannot write to the ring-buffer; signal the host * If we cannot write to the ring-buffer; signal the host
* even if we may not have written anything. This is a rare * even if we may not have written anything. This is a rare
* enough condition that it should not matter. * enough condition that it should not matter.
* NOTE: in this case, the hvsock channel is an exception, because
* it looks the host side's hvsock implementation has a throttling
* mechanism which can hurt the performance otherwise.
*/ */
if (channel->signal_policy) if (channel->signal_policy)
...@@ -666,7 +685,8 @@ int vmbus_sendpacket_ctl(struct vmbus_channel *channel, void *buffer, ...@@ -666,7 +685,8 @@ int vmbus_sendpacket_ctl(struct vmbus_channel *channel, void *buffer,
else else
kick_q = true; kick_q = true;
if (((ret == 0) && kick_q && signal) || (ret)) if (((ret == 0) && kick_q && signal) ||
(ret && !is_hvsock_channel(channel)))
vmbus_setevent(channel); vmbus_setevent(channel);
return ret; return ret;
...@@ -719,6 +739,7 @@ int vmbus_sendpacket_pagebuffer_ctl(struct vmbus_channel *channel, ...@@ -719,6 +739,7 @@ int vmbus_sendpacket_pagebuffer_ctl(struct vmbus_channel *channel,
struct kvec bufferlist[3]; struct kvec bufferlist[3];
u64 aligned_data = 0; u64 aligned_data = 0;
bool signal = false; bool signal = false;
bool lock = channel->acquire_ring_lock;
if (pagecount > MAX_PAGE_BUFFER_COUNT) if (pagecount > MAX_PAGE_BUFFER_COUNT)
return -EINVAL; return -EINVAL;
...@@ -755,7 +776,8 @@ int vmbus_sendpacket_pagebuffer_ctl(struct vmbus_channel *channel, ...@@ -755,7 +776,8 @@ int vmbus_sendpacket_pagebuffer_ctl(struct vmbus_channel *channel,
bufferlist[2].iov_base = &aligned_data; bufferlist[2].iov_base = &aligned_data;
bufferlist[2].iov_len = (packetlen_aligned - packetlen); bufferlist[2].iov_len = (packetlen_aligned - packetlen);
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3, &signal); ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3,
&signal, lock);
/* /*
* Signalling the host is conditional on many factors: * Signalling the host is conditional on many factors:
...@@ -818,6 +840,7 @@ int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel, ...@@ -818,6 +840,7 @@ int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel,
struct kvec bufferlist[3]; struct kvec bufferlist[3];
u64 aligned_data = 0; u64 aligned_data = 0;
bool signal = false; bool signal = false;
bool lock = channel->acquire_ring_lock;
packetlen = desc_size + bufferlen; packetlen = desc_size + bufferlen;
packetlen_aligned = ALIGN(packetlen, sizeof(u64)); packetlen_aligned = ALIGN(packetlen, sizeof(u64));
...@@ -837,7 +860,8 @@ int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel, ...@@ -837,7 +860,8 @@ int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel,
bufferlist[2].iov_base = &aligned_data; bufferlist[2].iov_base = &aligned_data;
bufferlist[2].iov_len = (packetlen_aligned - packetlen); bufferlist[2].iov_len = (packetlen_aligned - packetlen);
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3, &signal); ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3,
&signal, lock);
if (ret == 0 && signal) if (ret == 0 && signal)
vmbus_setevent(channel); vmbus_setevent(channel);
...@@ -862,6 +886,7 @@ int vmbus_sendpacket_multipagebuffer(struct vmbus_channel *channel, ...@@ -862,6 +886,7 @@ int vmbus_sendpacket_multipagebuffer(struct vmbus_channel *channel,
struct kvec bufferlist[3]; struct kvec bufferlist[3];
u64 aligned_data = 0; u64 aligned_data = 0;
bool signal = false; bool signal = false;
bool lock = channel->acquire_ring_lock;
u32 pfncount = NUM_PAGES_SPANNED(multi_pagebuffer->offset, u32 pfncount = NUM_PAGES_SPANNED(multi_pagebuffer->offset,
multi_pagebuffer->len); multi_pagebuffer->len);
...@@ -900,7 +925,8 @@ int vmbus_sendpacket_multipagebuffer(struct vmbus_channel *channel, ...@@ -900,7 +925,8 @@ int vmbus_sendpacket_multipagebuffer(struct vmbus_channel *channel,
bufferlist[2].iov_base = &aligned_data; bufferlist[2].iov_base = &aligned_data;
bufferlist[2].iov_len = (packetlen_aligned - packetlen); bufferlist[2].iov_len = (packetlen_aligned - packetlen);
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3, &signal); ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3,
&signal, lock);
if (ret == 0 && signal) if (ret == 0 && signal)
vmbus_setevent(channel); vmbus_setevent(channel);
...
...@@ -88,8 +88,16 @@ static int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo, ...@@ -88,8 +88,16 @@ static int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo,
* This has been the behavior pre-win8. This is not * This has been the behavior pre-win8. This is not
* perf issue and having all channel messages delivered on CPU 0 * perf issue and having all channel messages delivered on CPU 0
* would be ok. * would be ok.
* For post win8 hosts, we support receiving channel messagges on
* all the CPUs. This is needed for kexec to work correctly where
* the CPU attempting to connect may not be CPU 0.
*/ */
msg->target_vcpu = 0; if (version >= VERSION_WIN8_1) {
msg->target_vcpu = hv_context.vp_index[get_cpu()];
put_cpu();
} else {
msg->target_vcpu = 0;
}
/* /*
* Add to list before we send the request since we may * Add to list before we send the request since we may
...@@ -236,7 +244,7 @@ void vmbus_disconnect(void) ...@@ -236,7 +244,7 @@ void vmbus_disconnect(void)
/* /*
* First send the unload request to the host. * First send the unload request to the host.
*/ */
vmbus_initiate_unload(); vmbus_initiate_unload(false);
if (vmbus_connection.work_queue) { if (vmbus_connection.work_queue) {
drain_workqueue(vmbus_connection.work_queue); drain_workqueue(vmbus_connection.work_queue);
...@@ -288,7 +296,8 @@ struct vmbus_channel *relid2channel(u32 relid) ...@@ -288,7 +296,8 @@ struct vmbus_channel *relid2channel(u32 relid)
struct list_head *cur, *tmp; struct list_head *cur, *tmp;
struct vmbus_channel *cur_sc; struct vmbus_channel *cur_sc;
mutex_lock(&vmbus_connection.channel_mutex); BUG_ON(!mutex_is_locked(&vmbus_connection.channel_mutex));
list_for_each_entry(channel, &vmbus_connection.chn_list, listentry) { list_for_each_entry(channel, &vmbus_connection.chn_list, listentry) {
if (channel->offermsg.child_relid == relid) { if (channel->offermsg.child_relid == relid) {
found_channel = channel; found_channel = channel;
...@@ -307,7 +316,6 @@ struct vmbus_channel *relid2channel(u32 relid) ...@@ -307,7 +316,6 @@ struct vmbus_channel *relid2channel(u32 relid)
} }
} }
} }
mutex_unlock(&vmbus_connection.channel_mutex);
return found_channel; return found_channel;
} }
...@@ -474,7 +482,7 @@ int vmbus_post_msg(void *buffer, size_t buflen) ...@@ -474,7 +482,7 @@ int vmbus_post_msg(void *buffer, size_t buflen)
/* /*
* vmbus_set_event - Send an event notification to the parent * vmbus_set_event - Send an event notification to the parent
*/ */
int vmbus_set_event(struct vmbus_channel *channel) void vmbus_set_event(struct vmbus_channel *channel)
{ {
u32 child_relid = channel->offermsg.child_relid; u32 child_relid = channel->offermsg.child_relid;
...@@ -485,5 +493,5 @@ int vmbus_set_event(struct vmbus_channel *channel) ...@@ -485,5 +493,5 @@ int vmbus_set_event(struct vmbus_channel *channel)
(child_relid >> 5)); (child_relid >> 5));
} }
return hv_signal_event(channel->sig_event); hv_do_hypercall(HVCALL_SIGNAL_EVENT, channel->sig_event, NULL);
} }
...@@ -204,6 +204,8 @@ int hv_init(void) ...@@ -204,6 +204,8 @@ int hv_init(void)
sizeof(int) * NR_CPUS); sizeof(int) * NR_CPUS);
memset(hv_context.event_dpc, 0, memset(hv_context.event_dpc, 0,
sizeof(void *) * NR_CPUS); sizeof(void *) * NR_CPUS);
memset(hv_context.msg_dpc, 0,
sizeof(void *) * NR_CPUS);
memset(hv_context.clk_evt, 0, memset(hv_context.clk_evt, 0,
sizeof(void *) * NR_CPUS); sizeof(void *) * NR_CPUS);
...@@ -295,8 +297,14 @@ void hv_cleanup(void) ...@@ -295,8 +297,14 @@ void hv_cleanup(void)
* Cleanup the TSC page based CS. * Cleanup the TSC page based CS.
*/ */
if (ms_hyperv.features & HV_X64_MSR_REFERENCE_TSC_AVAILABLE) { if (ms_hyperv.features & HV_X64_MSR_REFERENCE_TSC_AVAILABLE) {
clocksource_change_rating(&hyperv_cs_tsc, 10); /*
clocksource_unregister(&hyperv_cs_tsc); * Crash can happen in an interrupt context and unregistering
* a clocksource is impossible and redundant in this case.
*/
if (!oops_in_progress) {
clocksource_change_rating(&hyperv_cs_tsc, 10);
clocksource_unregister(&hyperv_cs_tsc);
}
hypercall_msr.as_uint64 = 0; hypercall_msr.as_uint64 = 0;
wrmsrl(HV_X64_MSR_REFERENCE_TSC, hypercall_msr.as_uint64); wrmsrl(HV_X64_MSR_REFERENCE_TSC, hypercall_msr.as_uint64);
...@@ -337,22 +345,6 @@ int hv_post_message(union hv_connection_id connection_id, ...@@ -337,22 +345,6 @@ int hv_post_message(union hv_connection_id connection_id,
return status & 0xFFFF; return status & 0xFFFF;
} }
/*
* hv_signal_event -
* Signal an event on the specified connection using the hypervisor event IPC.
*
* This involves a hypercall.
*/
int hv_signal_event(void *con_id)
{
u64 status;
status = hv_do_hypercall(HVCALL_SIGNAL_EVENT, con_id, NULL);
return status & 0xFFFF;
}
static int hv_ce_set_next_event(unsigned long delta, static int hv_ce_set_next_event(unsigned long delta,
struct clock_event_device *evt) struct clock_event_device *evt)
{ {
...@@ -425,6 +417,13 @@ int hv_synic_alloc(void) ...@@ -425,6 +417,13 @@ int hv_synic_alloc(void)
} }
tasklet_init(hv_context.event_dpc[cpu], vmbus_on_event, cpu); tasklet_init(hv_context.event_dpc[cpu], vmbus_on_event, cpu);
hv_context.msg_dpc[cpu] = kmalloc(size, GFP_ATOMIC);
if (hv_context.msg_dpc[cpu] == NULL) {
pr_err("Unable to allocate event dpc\n");
goto err;
}
tasklet_init(hv_context.msg_dpc[cpu], vmbus_on_msg_dpc, cpu);
hv_context.clk_evt[cpu] = kzalloc(ced_size, GFP_ATOMIC); hv_context.clk_evt[cpu] = kzalloc(ced_size, GFP_ATOMIC);
if (hv_context.clk_evt[cpu] == NULL) { if (hv_context.clk_evt[cpu] == NULL) {
pr_err("Unable to allocate clock event device\n"); pr_err("Unable to allocate clock event device\n");
...@@ -466,6 +465,7 @@ int hv_synic_alloc(void) ...@@ -466,6 +465,7 @@ int hv_synic_alloc(void)
static void hv_synic_free_cpu(int cpu) static void hv_synic_free_cpu(int cpu)
{ {
kfree(hv_context.event_dpc[cpu]); kfree(hv_context.event_dpc[cpu]);
kfree(hv_context.msg_dpc[cpu]);
kfree(hv_context.clk_evt[cpu]); kfree(hv_context.clk_evt[cpu]);
if (hv_context.synic_event_page[cpu]) if (hv_context.synic_event_page[cpu])
free_page((unsigned long)hv_context.synic_event_page[cpu]); free_page((unsigned long)hv_context.synic_event_page[cpu]);
...
...@@ -251,7 +251,6 @@ void hv_fcopy_onchannelcallback(void *context) ...@@ -251,7 +251,6 @@ void hv_fcopy_onchannelcallback(void *context)
*/ */
fcopy_transaction.recv_len = recvlen; fcopy_transaction.recv_len = recvlen;
fcopy_transaction.recv_channel = channel;
fcopy_transaction.recv_req_id = requestid; fcopy_transaction.recv_req_id = requestid;
fcopy_transaction.fcopy_msg = fcopy_msg; fcopy_transaction.fcopy_msg = fcopy_msg;
...@@ -317,6 +316,7 @@ static void fcopy_on_reset(void) ...@@ -317,6 +316,7 @@ static void fcopy_on_reset(void)
int hv_fcopy_init(struct hv_util_service *srv) int hv_fcopy_init(struct hv_util_service *srv)
{ {
recv_buffer = srv->recv_buffer; recv_buffer = srv->recv_buffer;
fcopy_transaction.recv_channel = srv->channel;
/* /*
* When this driver loads, the user level daemon that * When this driver loads, the user level daemon that
......
...@@ -639,7 +639,6 @@ void hv_kvp_onchannelcallback(void *context) ...@@ -639,7 +639,6 @@ void hv_kvp_onchannelcallback(void *context)
*/ */
kvp_transaction.recv_len = recvlen; kvp_transaction.recv_len = recvlen;
kvp_transaction.recv_channel = channel;
kvp_transaction.recv_req_id = requestid; kvp_transaction.recv_req_id = requestid;
kvp_transaction.kvp_msg = kvp_msg; kvp_transaction.kvp_msg = kvp_msg;
...@@ -688,6 +687,7 @@ int ...@@ -688,6 +687,7 @@ int
hv_kvp_init(struct hv_util_service *srv) hv_kvp_init(struct hv_util_service *srv)
{ {
recv_buffer = srv->recv_buffer; recv_buffer = srv->recv_buffer;
kvp_transaction.recv_channel = srv->channel;
/* /*
* When this driver loads, the user level daemon that * When this driver loads, the user level daemon that
......
...@@ -263,7 +263,6 @@ void hv_vss_onchannelcallback(void *context) ...@@ -263,7 +263,6 @@ void hv_vss_onchannelcallback(void *context)
*/ */
vss_transaction.recv_len = recvlen; vss_transaction.recv_len = recvlen;
vss_transaction.recv_channel = channel;
vss_transaction.recv_req_id = requestid; vss_transaction.recv_req_id = requestid;
vss_transaction.msg = (struct hv_vss_msg *)vss_msg; vss_transaction.msg = (struct hv_vss_msg *)vss_msg;
...@@ -337,6 +336,7 @@ hv_vss_init(struct hv_util_service *srv) ...@@ -337,6 +336,7 @@ hv_vss_init(struct hv_util_service *srv)
return -ENOTSUPP; return -ENOTSUPP;
} }
recv_buffer = srv->recv_buffer; recv_buffer = srv->recv_buffer;
vss_transaction.recv_channel = srv->channel;
/* /*
* When this driver loads, the user level daemon that * When this driver loads, the user level daemon that
......
...@@ -322,6 +322,7 @@ static int util_probe(struct hv_device *dev, ...@@ -322,6 +322,7 @@ static int util_probe(struct hv_device *dev,
srv->recv_buffer = kmalloc(PAGE_SIZE * 4, GFP_KERNEL); srv->recv_buffer = kmalloc(PAGE_SIZE * 4, GFP_KERNEL);
if (!srv->recv_buffer) if (!srv->recv_buffer)
return -ENOMEM; return -ENOMEM;
srv->channel = dev->channel;
if (srv->util_init) { if (srv->util_init) {
ret = srv->util_init(srv); ret = srv->util_init(srv);
if (ret) { if (ret) {
......
...@@ -310,6 +310,9 @@ struct hvutil_transport *hvutil_transport_init(const char *name, ...@@ -310,6 +310,9 @@ struct hvutil_transport *hvutil_transport_init(const char *name,
return hvt; return hvt;
err_free_hvt: err_free_hvt:
spin_lock(&hvt_list_lock);
list_del(&hvt->list);
spin_unlock(&hvt_list_lock);
kfree(hvt); kfree(hvt);
return NULL; return NULL;
} }
......
...@@ -443,10 +443,11 @@ struct hv_context { ...@@ -443,10 +443,11 @@ struct hv_context {
u32 vp_index[NR_CPUS]; u32 vp_index[NR_CPUS];
/* /*
* Starting with win8, we can take channel interrupts on any CPU; * Starting with win8, we can take channel interrupts on any CPU;
* we will manage the tasklet that handles events on a per CPU * we will manage the tasklet that handles events messages on a per CPU
* basis. * basis.
*/ */
struct tasklet_struct *event_dpc[NR_CPUS]; struct tasklet_struct *event_dpc[NR_CPUS];
struct tasklet_struct *msg_dpc[NR_CPUS];
/* /*
* To optimize the mapping of relid to channel, maintain * To optimize the mapping of relid to channel, maintain
* per-cpu list of the channels based on their CPU affinity. * per-cpu list of the channels based on their CPU affinity.
...@@ -495,8 +496,6 @@ extern int hv_post_message(union hv_connection_id connection_id, ...@@ -495,8 +496,6 @@ extern int hv_post_message(union hv_connection_id connection_id,
enum hv_message_type message_type, enum hv_message_type message_type,
void *payload, size_t payload_size); void *payload, size_t payload_size);
extern int hv_signal_event(void *con_id);
extern int hv_synic_alloc(void); extern int hv_synic_alloc(void);
extern void hv_synic_free(void); extern void hv_synic_free(void);
...@@ -525,7 +524,7 @@ void hv_ringbuffer_cleanup(struct hv_ring_buffer_info *ring_info); ...@@ -525,7 +524,7 @@ void hv_ringbuffer_cleanup(struct hv_ring_buffer_info *ring_info);
int hv_ringbuffer_write(struct hv_ring_buffer_info *ring_info, int hv_ringbuffer_write(struct hv_ring_buffer_info *ring_info,
struct kvec *kv_list, struct kvec *kv_list,
u32 kv_count, bool *signal); u32 kv_count, bool *signal, bool lock);
int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info, int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
void *buffer, u32 buflen, u32 *buffer_actual_len, void *buffer, u32 buflen, u32 *buffer_actual_len,
...@@ -620,6 +619,30 @@ struct vmbus_channel_message_table_entry { ...@@ -620,6 +619,30 @@ struct vmbus_channel_message_table_entry {
extern struct vmbus_channel_message_table_entry extern struct vmbus_channel_message_table_entry
channel_message_table[CHANNELMSG_COUNT]; channel_message_table[CHANNELMSG_COUNT];
/* Free the message slot and signal end-of-message if required */
static inline void vmbus_signal_eom(struct hv_message *msg)
{
msg->header.message_type = HVMSG_NONE;
/*
* Make sure the write to MessageType (ie set to
* HVMSG_NONE) happens before we read the
* MessagePending and EOMing. Otherwise, the EOMing
* will not deliver any more messages since there is
* no empty slot
*/
mb();
if (msg->header.message_flags.msg_pending) {
/*
* This will cause message queue rescan to
* possibly deliver another msg from the
* hypervisor
*/
wrmsrl(HV_X64_MSR_EOM, 0);
}
}
/* General vmbus interface */ /* General vmbus interface */
struct hv_device *vmbus_device_create(const uuid_le *type, struct hv_device *vmbus_device_create(const uuid_le *type,
...@@ -644,9 +667,10 @@ void vmbus_disconnect(void); ...@@ -644,9 +667,10 @@ void vmbus_disconnect(void);
int vmbus_post_msg(void *buffer, size_t buflen); int vmbus_post_msg(void *buffer, size_t buflen);
int vmbus_set_event(struct vmbus_channel *channel); void vmbus_set_event(struct vmbus_channel *channel);
void vmbus_on_event(unsigned long data); void vmbus_on_event(unsigned long data);
void vmbus_on_msg_dpc(unsigned long data);
int hv_kvp_init(struct hv_util_service *); int hv_kvp_init(struct hv_util_service *);
void hv_kvp_deinit(void); void hv_kvp_deinit(void);
...@@ -659,7 +683,7 @@ void hv_vss_onchannelcallback(void *); ...@@ -659,7 +683,7 @@ void hv_vss_onchannelcallback(void *);
int hv_fcopy_init(struct hv_util_service *); int hv_fcopy_init(struct hv_util_service *);
void hv_fcopy_deinit(void); void hv_fcopy_deinit(void);
void hv_fcopy_onchannelcallback(void *); void hv_fcopy_onchannelcallback(void *);
void vmbus_initiate_unload(void); void vmbus_initiate_unload(bool crash);
static inline void hv_poll_channel(struct vmbus_channel *channel, static inline void hv_poll_channel(struct vmbus_channel *channel,
void (*cb)(void *)) void (*cb)(void *))
......
...@@ -314,7 +314,7 @@ void hv_ringbuffer_cleanup(struct hv_ring_buffer_info *ring_info) ...@@ -314,7 +314,7 @@ void hv_ringbuffer_cleanup(struct hv_ring_buffer_info *ring_info)
/* Write to the ring buffer. */ /* Write to the ring buffer. */
int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info, int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
struct kvec *kv_list, u32 kv_count, bool *signal) struct kvec *kv_list, u32 kv_count, bool *signal, bool lock)
{ {
int i = 0; int i = 0;
u32 bytes_avail_towrite; u32 bytes_avail_towrite;
...@@ -324,14 +324,15 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info, ...@@ -324,14 +324,15 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
u32 next_write_location; u32 next_write_location;
u32 old_write; u32 old_write;
u64 prev_indices = 0; u64 prev_indices = 0;
unsigned long flags; unsigned long flags = 0;
for (i = 0; i < kv_count; i++) for (i = 0; i < kv_count; i++)
totalbytes_towrite += kv_list[i].iov_len; totalbytes_towrite += kv_list[i].iov_len;
totalbytes_towrite += sizeof(u64); totalbytes_towrite += sizeof(u64);
spin_lock_irqsave(&outring_info->ring_lock, flags); if (lock)
spin_lock_irqsave(&outring_info->ring_lock, flags);
hv_get_ringbuffer_availbytes(outring_info, hv_get_ringbuffer_availbytes(outring_info,
&bytes_avail_toread, &bytes_avail_toread,
...@@ -343,7 +344,8 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info, ...@@ -343,7 +344,8 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
* is empty since the read index == write index. * is empty since the read index == write index.
*/ */
if (bytes_avail_towrite <= totalbytes_towrite) { if (bytes_avail_towrite <= totalbytes_towrite) {
spin_unlock_irqrestore(&outring_info->ring_lock, flags); if (lock)
spin_unlock_irqrestore(&outring_info->ring_lock, flags);
return -EAGAIN; return -EAGAIN;
} }
...@@ -374,7 +376,8 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info, ...@@ -374,7 +376,8 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
hv_set_next_write_location(outring_info, next_write_location); hv_set_next_write_location(outring_info, next_write_location);
spin_unlock_irqrestore(&outring_info->ring_lock, flags); if (lock)
spin_unlock_irqrestore(&outring_info->ring_lock, flags);
*signal = hv_need_to_signal(old_write, outring_info); *signal = hv_need_to_signal(old_write, outring_info);
return 0; return 0;
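For reference, a hedged sketch of the caller side of the new 'lock' parameter: channels whose writes are already serialized can skip the ring spinlock, all others keep the old behaviour. The acquire_ring_lock policy flag is an assumption about struct vmbus_channel; only hv_ringbuffer_write()'s new signature comes from this diff:

static int example_send(struct vmbus_channel *channel,
			struct kvec *kv_list, u32 kv_count)
{
	bool signal = false;
	int ret;

	ret = hv_ringbuffer_write(&channel->outbound, kv_list, kv_count,
				  &signal, channel->acquire_ring_lock);
	if (ret == 0 && signal)
		vmbus_set_event(channel);	/* host needs a kick */
	return ret;
}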
...@@ -388,7 +391,6 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info, ...@@ -388,7 +391,6 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
u32 bytes_avail_toread; u32 bytes_avail_toread;
u32 next_read_location = 0; u32 next_read_location = 0;
u64 prev_indices = 0; u64 prev_indices = 0;
unsigned long flags;
struct vmpacket_descriptor desc; struct vmpacket_descriptor desc;
u32 offset; u32 offset;
u32 packetlen; u32 packetlen;
...@@ -397,7 +399,6 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info, ...@@ -397,7 +399,6 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
if (buflen <= 0) if (buflen <= 0)
return -EINVAL; return -EINVAL;
spin_lock_irqsave(&inring_info->ring_lock, flags);
*buffer_actual_len = 0; *buffer_actual_len = 0;
*requestid = 0; *requestid = 0;
...@@ -412,7 +413,7 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info, ...@@ -412,7 +413,7 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
* No error is set when there is even no header, drivers are * No error is set when there is even no header, drivers are
* supposed to analyze buffer_actual_len. * supposed to analyze buffer_actual_len.
*/ */
goto out_unlock; return ret;
} }
next_read_location = hv_get_next_read_location(inring_info); next_read_location = hv_get_next_read_location(inring_info);
...@@ -425,15 +426,11 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info, ...@@ -425,15 +426,11 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
*buffer_actual_len = packetlen; *buffer_actual_len = packetlen;
*requestid = desc.trans_id; *requestid = desc.trans_id;
if (bytes_avail_toread < packetlen + offset) { if (bytes_avail_toread < packetlen + offset)
ret = -EAGAIN; return -EAGAIN;
goto out_unlock;
}
if (packetlen > buflen) { if (packetlen > buflen)
ret = -ENOBUFS; return -ENOBUFS;
goto out_unlock;
}
next_read_location = next_read_location =
hv_get_next_readlocation_withoffset(inring_info, offset); hv_get_next_readlocation_withoffset(inring_info, offset);
...@@ -460,7 +457,5 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info, ...@@ -460,7 +457,5 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
*signal = hv_need_to_signal_on_read(bytes_avail_towrite, inring_info); *signal = hv_need_to_signal_on_read(bytes_avail_towrite, inring_info);
out_unlock:
spin_unlock_irqrestore(&inring_info->ring_lock, flags);
return ret; return ret;
} }
...@@ -45,7 +45,6 @@ ...@@ -45,7 +45,6 @@
static struct acpi_device *hv_acpi_dev; static struct acpi_device *hv_acpi_dev;
static struct tasklet_struct msg_dpc;
static struct completion probe_event; static struct completion probe_event;
...@@ -477,6 +476,24 @@ static ssize_t channel_vp_mapping_show(struct device *dev, ...@@ -477,6 +476,24 @@ static ssize_t channel_vp_mapping_show(struct device *dev,
} }
static DEVICE_ATTR_RO(channel_vp_mapping); static DEVICE_ATTR_RO(channel_vp_mapping);
static ssize_t vendor_show(struct device *dev,
struct device_attribute *dev_attr,
char *buf)
{
struct hv_device *hv_dev = device_to_hv_device(dev);
return sprintf(buf, "0x%x\n", hv_dev->vendor_id);
}
static DEVICE_ATTR_RO(vendor);
static ssize_t device_show(struct device *dev,
struct device_attribute *dev_attr,
char *buf)
{
struct hv_device *hv_dev = device_to_hv_device(dev);
return sprintf(buf, "0x%x\n", hv_dev->device_id);
}
static DEVICE_ATTR_RO(device);
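The two DEVICE_ATTR_RO() entries above surface the IDs that tools/hv/lsvmbus and user-level RDMA libraries consume. A minimal user-space sketch of reading them back (the device directory name vmbus_0 is illustrative):

#include <stdio.h>

int main(void)
{
	unsigned int vendor = 0;
	FILE *f = fopen("/sys/bus/vmbus/devices/vmbus_0/vendor", "r");

	if (!f)
		return 1;
	if (fscanf(f, "0x%x", &vendor) == 1)
		printf("vendor id: 0x%x\n", vendor);	/* 0x1414 for Microsoft devices */
	fclose(f);
	return 0;
}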
/* Set up per device attributes in /sys/bus/vmbus/devices/<bus device> */ /* Set up per device attributes in /sys/bus/vmbus/devices/<bus device> */
static struct attribute *vmbus_attrs[] = { static struct attribute *vmbus_attrs[] = {
&dev_attr_id.attr, &dev_attr_id.attr,
...@@ -502,6 +519,8 @@ static struct attribute *vmbus_attrs[] = { ...@@ -502,6 +519,8 @@ static struct attribute *vmbus_attrs[] = {
&dev_attr_in_read_bytes_avail.attr, &dev_attr_in_read_bytes_avail.attr,
&dev_attr_in_write_bytes_avail.attr, &dev_attr_in_write_bytes_avail.attr,
&dev_attr_channel_vp_mapping.attr, &dev_attr_channel_vp_mapping.attr,
&dev_attr_vendor.attr,
&dev_attr_device.attr,
NULL, NULL,
}; };
ATTRIBUTE_GROUPS(vmbus); ATTRIBUTE_GROUPS(vmbus);
...@@ -562,6 +581,10 @@ static int vmbus_match(struct device *device, struct device_driver *driver) ...@@ -562,6 +581,10 @@ static int vmbus_match(struct device *device, struct device_driver *driver)
struct hv_driver *drv = drv_to_hv_drv(driver); struct hv_driver *drv = drv_to_hv_drv(driver);
struct hv_device *hv_dev = device_to_hv_device(device); struct hv_device *hv_dev = device_to_hv_device(device);
/* The hv_sock driver handles all hv_sock offers. */
if (is_hvsock_channel(hv_dev->channel))
return drv->hvsock;
if (hv_vmbus_get_id(drv->id_table, &hv_dev->dev_type)) if (hv_vmbus_get_id(drv->id_table, &hv_dev->dev_type))
return 1; return 1;
...@@ -685,28 +708,10 @@ static void hv_process_timer_expiration(struct hv_message *msg, int cpu) ...@@ -685,28 +708,10 @@ static void hv_process_timer_expiration(struct hv_message *msg, int cpu)
if (dev->event_handler) if (dev->event_handler)
dev->event_handler(dev); dev->event_handler(dev);
msg->header.message_type = HVMSG_NONE; vmbus_signal_eom(msg);
/*
* Make sure the write to MessageType (ie set to
* HVMSG_NONE) happens before we read the
* MessagePending and EOMing. Otherwise, the EOMing
* will not deliver any more messages since there is
* no empty slot
*/
mb();
if (msg->header.message_flags.msg_pending) {
/*
* This will cause message queue rescan to
* possibly deliver another msg from the
* hypervisor
*/
wrmsrl(HV_X64_MSR_EOM, 0);
}
} }
static void vmbus_on_msg_dpc(unsigned long data) void vmbus_on_msg_dpc(unsigned long data)
{ {
int cpu = smp_processor_id(); int cpu = smp_processor_id();
void *page_addr = hv_context.synic_message_page[cpu]; void *page_addr = hv_context.synic_message_page[cpu];
...@@ -716,52 +721,32 @@ static void vmbus_on_msg_dpc(unsigned long data) ...@@ -716,52 +721,32 @@ static void vmbus_on_msg_dpc(unsigned long data)
struct vmbus_channel_message_table_entry *entry; struct vmbus_channel_message_table_entry *entry;
struct onmessage_work_context *ctx; struct onmessage_work_context *ctx;
while (1) { if (msg->header.message_type == HVMSG_NONE)
if (msg->header.message_type == HVMSG_NONE) /* no msg */
/* no msg */ return;
break;
hdr = (struct vmbus_channel_message_header *)msg->u.payload; hdr = (struct vmbus_channel_message_header *)msg->u.payload;
if (hdr->msgtype >= CHANNELMSG_COUNT) { if (hdr->msgtype >= CHANNELMSG_COUNT) {
WARN_ONCE(1, "unknown msgtype=%d\n", hdr->msgtype); WARN_ONCE(1, "unknown msgtype=%d\n", hdr->msgtype);
goto msg_handled; goto msg_handled;
} }
entry = &channel_message_table[hdr->msgtype]; entry = &channel_message_table[hdr->msgtype];
if (entry->handler_type == VMHT_BLOCKING) { if (entry->handler_type == VMHT_BLOCKING) {
ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC); ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
if (ctx == NULL) if (ctx == NULL)
continue; return;
INIT_WORK(&ctx->work, vmbus_onmessage_work); INIT_WORK(&ctx->work, vmbus_onmessage_work);
memcpy(&ctx->msg, msg, sizeof(*msg)); memcpy(&ctx->msg, msg, sizeof(*msg));
queue_work(vmbus_connection.work_queue, &ctx->work); queue_work(vmbus_connection.work_queue, &ctx->work);
} else } else
entry->message_handler(hdr); entry->message_handler(hdr);
msg_handled: msg_handled:
msg->header.message_type = HVMSG_NONE; vmbus_signal_eom(msg);
/*
* Make sure the write to MessageType (ie set to
* HVMSG_NONE) happens before we read the
* MessagePending and EOMing. Otherwise, the EOMing
* will not deliver any more messages since there is
* no empty slot
*/
mb();
if (msg->header.message_flags.msg_pending) {
/*
* This will cause message queue rescan to
* possibly deliver another msg from the
* hypervisor
*/
wrmsrl(HV_X64_MSR_EOM, 0);
}
}
} }
static void vmbus_isr(void) static void vmbus_isr(void)
...@@ -814,7 +799,7 @@ static void vmbus_isr(void) ...@@ -814,7 +799,7 @@ static void vmbus_isr(void)
if (msg->header.message_type == HVMSG_TIMER_EXPIRED) if (msg->header.message_type == HVMSG_TIMER_EXPIRED)
hv_process_timer_expiration(msg, cpu); hv_process_timer_expiration(msg, cpu);
else else
tasklet_schedule(&msg_dpc); tasklet_schedule(hv_context.msg_dpc[cpu]);
} }
} }
...@@ -838,8 +823,6 @@ static int vmbus_bus_init(void) ...@@ -838,8 +823,6 @@ static int vmbus_bus_init(void)
return ret; return ret;
} }
tasklet_init(&msg_dpc, vmbus_on_msg_dpc, 0);
ret = bus_register(&hv_bus); ret = bus_register(&hv_bus);
if (ret) if (ret)
goto err_cleanup; goto err_cleanup;
...@@ -957,6 +940,7 @@ struct hv_device *vmbus_device_create(const uuid_le *type, ...@@ -957,6 +940,7 @@ struct hv_device *vmbus_device_create(const uuid_le *type,
memcpy(&child_device_obj->dev_type, type, sizeof(uuid_le)); memcpy(&child_device_obj->dev_type, type, sizeof(uuid_le));
memcpy(&child_device_obj->dev_instance, instance, memcpy(&child_device_obj->dev_instance, instance,
sizeof(uuid_le)); sizeof(uuid_le));
child_device_obj->vendor_id = 0x1414; /* MSFT vendor ID */
return child_device_obj; return child_device_obj;
...@@ -1268,7 +1252,7 @@ static void hv_kexec_handler(void) ...@@ -1268,7 +1252,7 @@ static void hv_kexec_handler(void)
int cpu; int cpu;
hv_synic_clockevents_cleanup(); hv_synic_clockevents_cleanup();
vmbus_initiate_unload(); vmbus_initiate_unload(false);
for_each_online_cpu(cpu) for_each_online_cpu(cpu)
smp_call_function_single(cpu, hv_synic_cleanup, NULL, 1); smp_call_function_single(cpu, hv_synic_cleanup, NULL, 1);
hv_cleanup(); hv_cleanup();
...@@ -1276,7 +1260,7 @@ static void hv_kexec_handler(void) ...@@ -1276,7 +1260,7 @@ static void hv_kexec_handler(void)
static void hv_crash_handler(struct pt_regs *regs) static void hv_crash_handler(struct pt_regs *regs)
{ {
vmbus_initiate_unload(); vmbus_initiate_unload(true);
/* /*
* In crash handler we can't schedule synic cleanup for all CPUs, * In crash handler we can't schedule synic cleanup for all CPUs,
* doing the cleanup for current CPU only. This should be sufficient * doing the cleanup for current CPU only. This should be sufficient
...@@ -1334,7 +1318,8 @@ static void __exit vmbus_exit(void) ...@@ -1334,7 +1318,8 @@ static void __exit vmbus_exit(void)
hv_synic_clockevents_cleanup(); hv_synic_clockevents_cleanup();
vmbus_disconnect(); vmbus_disconnect();
hv_remove_vmbus_irq(); hv_remove_vmbus_irq();
tasklet_kill(&msg_dpc); for_each_online_cpu(cpu)
tasklet_kill(hv_context.msg_dpc[cpu]);
vmbus_free_channels(); vmbus_free_channels();
if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE) { if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE) {
unregister_die_notifier(&hyperv_die_block); unregister_die_notifier(&hyperv_die_block);
......
...@@ -4,6 +4,7 @@ ...@@ -4,6 +4,7 @@
menuconfig CORESIGHT menuconfig CORESIGHT
bool "CoreSight Tracing Support" bool "CoreSight Tracing Support"
select ARM_AMBA select ARM_AMBA
select PERF_EVENTS
help help
This framework provides a kernel interface for the CoreSight debug This framework provides a kernel interface for the CoreSight debug
and trace drivers to register themselves with. It's intended to build and trace drivers to register themselves with. It's intended to build
......
...@@ -8,6 +8,8 @@ obj-$(CONFIG_CORESIGHT_SINK_TPIU) += coresight-tpiu.o ...@@ -8,6 +8,8 @@ obj-$(CONFIG_CORESIGHT_SINK_TPIU) += coresight-tpiu.o
obj-$(CONFIG_CORESIGHT_SINK_ETBV10) += coresight-etb10.o obj-$(CONFIG_CORESIGHT_SINK_ETBV10) += coresight-etb10.o
obj-$(CONFIG_CORESIGHT_LINKS_AND_SINKS) += coresight-funnel.o \ obj-$(CONFIG_CORESIGHT_LINKS_AND_SINKS) += coresight-funnel.o \
coresight-replicator.o coresight-replicator.o
obj-$(CONFIG_CORESIGHT_SOURCE_ETM3X) += coresight-etm3x.o coresight-etm-cp14.o obj-$(CONFIG_CORESIGHT_SOURCE_ETM3X) += coresight-etm3x.o coresight-etm-cp14.o \
coresight-etm3x-sysfs.o \
coresight-etm-perf.o
obj-$(CONFIG_CORESIGHT_SOURCE_ETM4X) += coresight-etm4x.o obj-$(CONFIG_CORESIGHT_SOURCE_ETM4X) += coresight-etm4x.o
obj-$(CONFIG_CORESIGHT_QCOM_REPLICATOR) += coresight-replicator-qcom.o obj-$(CONFIG_CORESIGHT_QCOM_REPLICATOR) += coresight-replicator-qcom.o
/*
* Copyright(C) 2015 Linaro Limited. All rights reserved.
* Author: Mathieu Poirier <mathieu.poirier@linaro.org>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published by
* the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef _CORESIGHT_ETM_PERF_H
#define _CORESIGHT_ETM_PERF_H
struct coresight_device;
#ifdef CONFIG_CORESIGHT
int etm_perf_symlink(struct coresight_device *csdev, bool link);
#else
static inline int etm_perf_symlink(struct coresight_device *csdev, bool link)
{ return -EINVAL; }
#endif /* CONFIG_CORESIGHT */
#endif
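coresight-etm-perf.h gives tracer drivers a single entry point for publishing themselves to the perf framework, with a stub when CoreSight is not built in. A hedged sketch of how a driver's probe path might use it; the call site shown is illustrative, not copied from the collapsed etm3x changes:

static int example_register_with_perf(struct coresight_device *csdev)
{
	int ret = etm_perf_symlink(csdev, true);	/* link this ETM to the perf PMU */

	if (ret)
		return ret;	/* -EINVAL when CONFIG_CORESIGHT is off */

	/* ... continue with the rest of probe ... */
	return 0;
}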
...@@ -13,6 +13,7 @@ ...@@ -13,6 +13,7 @@
#ifndef _CORESIGHT_CORESIGHT_ETM_H #ifndef _CORESIGHT_CORESIGHT_ETM_H
#define _CORESIGHT_CORESIGHT_ETM_H #define _CORESIGHT_CORESIGHT_ETM_H
#include <asm/local.h>
#include <linux/spinlock.h> #include <linux/spinlock.h>
#include "coresight-priv.h" #include "coresight-priv.h"
...@@ -109,7 +110,10 @@ ...@@ -109,7 +110,10 @@
#define ETM_MODE_STALL BIT(2) #define ETM_MODE_STALL BIT(2)
#define ETM_MODE_TIMESTAMP BIT(3) #define ETM_MODE_TIMESTAMP BIT(3)
#define ETM_MODE_CTXID BIT(4) #define ETM_MODE_CTXID BIT(4)
#define ETM_MODE_ALL 0x1f #define ETM_MODE_ALL (ETM_MODE_EXCLUDE | ETM_MODE_CYCACC | \
ETM_MODE_STALL | ETM_MODE_TIMESTAMP | \
ETM_MODE_CTXID | ETM_MODE_EXCL_KERN | \
ETM_MODE_EXCL_USER)
#define ETM_SQR_MASK 0x3 #define ETM_SQR_MASK 0x3
#define ETM_TRACEID_MASK 0x3f #define ETM_TRACEID_MASK 0x3f
...@@ -136,35 +140,16 @@ ...@@ -136,35 +140,16 @@
#define ETM_DEFAULT_EVENT_VAL (ETM_HARD_WIRE_RES_A | \ #define ETM_DEFAULT_EVENT_VAL (ETM_HARD_WIRE_RES_A | \
ETM_ADD_COMP_0 | \ ETM_ADD_COMP_0 | \
ETM_EVENT_NOT_A) ETM_EVENT_NOT_A)
/** /**
* struct etm_drvdata - specifics associated to an ETM component * struct etm_config - configuration information related to an ETM
* @base: memory mapped base address for this component.
* @dev: the device entity associated to this component.
* @atclk: optional clock for the core parts of the ETM.
* @csdev: component vitals needed by the framework.
* @spinlock: only one at a time pls.
* @cpu: the cpu this component is affined to.
* @port_size: port size as reported by ETMCR bit 4-6 and 21.
* @arch: ETM/PTM version number.
* @use_cpu14: true if management registers need to be accessed via CP14.
* @enable: is this ETM/PTM currently tracing.
* @sticky_enable: true if ETM base configuration has been done.
* @boot_enable:true if we should start tracing at boot time.
* @os_unlock: true if access to management registers is allowed.
* @nr_addr_cmp:Number of pairs of address comparators as found in ETMCCR.
* @nr_cntr: Number of counters as found in ETMCCR bit 13-15.
* @nr_ext_inp: Number of external input as found in ETMCCR bit 17-19.
* @nr_ext_out: Number of external output as found in ETMCCR bit 20-22.
* @nr_ctxid_cmp: Number of contextID comparators as found in ETMCCR bit 24-25.
* @etmccr: value of register ETMCCR.
* @etmccer: value of register ETMCCER.
* @traceid: value of the current ID for this component.
* @mode: controls various modes supported by this ETM/PTM. * @mode: controls various modes supported by this ETM/PTM.
* @ctrl: used in conjunction with @mode. * @ctrl: used in conjunction with @mode.
* @trigger_event: setting for register ETMTRIGGER. * @trigger_event: setting for register ETMTRIGGER.
* @startstop_ctrl: setting for register ETMTSSCR. * @startstop_ctrl: setting for register ETMTSSCR.
* @enable_event: setting for register ETMTEEVR. * @enable_event: setting for register ETMTEEVR.
* @enable_ctrl1: setting for register ETMTECR1. * @enable_ctrl1: setting for register ETMTECR1.
* @enable_ctrl2: setting for register ETMTECR2.
* @fifofull_level: setting for register ETMFFLR. * @fifofull_level: setting for register ETMFFLR.
* @addr_idx: index for the address comparator selection. * @addr_idx: index for the address comparator selection.
* @addr_val: value for address comparator register. * @addr_val: value for address comparator register.
...@@ -189,36 +174,16 @@ ...@@ -189,36 +174,16 @@
* @ctxid_mask: mask applicable to all the context IDs. * @ctxid_mask: mask applicable to all the context IDs.
* @sync_freq: Synchronisation frequency. * @sync_freq: Synchronisation frequency.
* @timestamp_event: Defines an event that requests the insertion * @timestamp_event: Defines an event that requests the insertion
of a timestamp into the trace stream. * of a timestamp into the trace stream.
*/ */
struct etm_drvdata { struct etm_config {
void __iomem *base;
struct device *dev;
struct clk *atclk;
struct coresight_device *csdev;
spinlock_t spinlock;
int cpu;
int port_size;
u8 arch;
bool use_cp14;
bool enable;
bool sticky_enable;
bool boot_enable;
bool os_unlock;
u8 nr_addr_cmp;
u8 nr_cntr;
u8 nr_ext_inp;
u8 nr_ext_out;
u8 nr_ctxid_cmp;
u32 etmccr;
u32 etmccer;
u32 traceid;
u32 mode; u32 mode;
u32 ctrl; u32 ctrl;
u32 trigger_event; u32 trigger_event;
u32 startstop_ctrl; u32 startstop_ctrl;
u32 enable_event; u32 enable_event;
u32 enable_ctrl1; u32 enable_ctrl1;
u32 enable_ctrl2;
u32 fifofull_level; u32 fifofull_level;
u8 addr_idx; u8 addr_idx;
u32 addr_val[ETM_MAX_ADDR_CMP]; u32 addr_val[ETM_MAX_ADDR_CMP];
...@@ -244,6 +209,56 @@ struct etm_drvdata { ...@@ -244,6 +209,56 @@ struct etm_drvdata {
u32 timestamp_event; u32 timestamp_event;
}; };
/**
* struct etm_drvdata - specifics associated to an ETM component
* @base: memory mapped base address for this component.
* @dev: the device entity associated to this component.
* @atclk: optional clock for the core parts of the ETM.
* @csdev: component vitals needed by the framework.
* @spinlock: only one at a time pls.
* @cpu: the cpu this component is affined to.
* @port_size: port size as reported by ETMCR bit 4-6 and 21.
* @arch: ETM/PTM version number.
 * @use_cp14: true if management registers need to be accessed via CP14.
 * @mode: this tracer's mode, i.e. sysFS, Perf or disabled.
* @sticky_enable: true if ETM base configuration has been done.
* @boot_enable:true if we should start tracing at boot time.
* @os_unlock: true if access to management registers is allowed.
* @nr_addr_cmp:Number of pairs of address comparators as found in ETMCCR.
* @nr_cntr: Number of counters as found in ETMCCR bit 13-15.
* @nr_ext_inp: Number of external input as found in ETMCCR bit 17-19.
* @nr_ext_out: Number of external output as found in ETMCCR bit 20-22.
* @nr_ctxid_cmp: Number of contextID comparators as found in ETMCCR bit 24-25.
* @etmccr: value of register ETMCCR.
* @etmccer: value of register ETMCCER.
* @traceid: value of the current ID for this component.
* @config: structure holding configuration parameters.
*/
struct etm_drvdata {
void __iomem *base;
struct device *dev;
struct clk *atclk;
struct coresight_device *csdev;
spinlock_t spinlock;
int cpu;
int port_size;
u8 arch;
bool use_cp14;
local_t mode;
bool sticky_enable;
bool boot_enable;
bool os_unlock;
u8 nr_addr_cmp;
u8 nr_cntr;
u8 nr_ext_inp;
u8 nr_ext_out;
u8 nr_ctxid_cmp;
u32 etmccr;
u32 etmccer;
u32 traceid;
struct etm_config config;
};
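The @mode field becoming a local_t matters because the tracer can now be claimed from two paths (sysFS and perf). A hedged sketch of the atomic claim this enables; CS_MODE_DISABLED/CS_MODE_PERF are assumed to come from the CoreSight core headers and the helper name is illustrative:

static int example_claim_for_perf(struct etm_drvdata *drvdata)
{
	/* Atomically take ownership; fail if sysFS (or another perf session)
	 * already enabled this ETM. */
	if (local_cmpxchg(&drvdata->mode, CS_MODE_DISABLED, CS_MODE_PERF)
	    != CS_MODE_DISABLED)
		return -EBUSY;

	/* ... program the hardware from drvdata->config ... */
	return 0;
}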
enum etm_addr_type { enum etm_addr_type {
ETM_ADDR_TYPE_NONE, ETM_ADDR_TYPE_NONE,
ETM_ADDR_TYPE_SINGLE, ETM_ADDR_TYPE_SINGLE,
...@@ -251,4 +266,39 @@ enum etm_addr_type { ...@@ -251,4 +266,39 @@ enum etm_addr_type {
ETM_ADDR_TYPE_START, ETM_ADDR_TYPE_START,
ETM_ADDR_TYPE_STOP, ETM_ADDR_TYPE_STOP,
}; };
static inline void etm_writel(struct etm_drvdata *drvdata,
u32 val, u32 off)
{
if (drvdata->use_cp14) {
if (etm_writel_cp14(off, val)) {
dev_err(drvdata->dev,
"invalid CP14 access to ETM reg: %#x", off);
}
} else {
writel_relaxed(val, drvdata->base + off);
}
}
static inline unsigned int etm_readl(struct etm_drvdata *drvdata, u32 off)
{
u32 val;
if (drvdata->use_cp14) {
if (etm_readl_cp14(off, &val)) {
dev_err(drvdata->dev,
"invalid CP14 access to ETM reg: %#x", off);
}
} else {
val = readl_relaxed(drvdata->base + off);
}
return val;
}
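etm_writel()/etm_readl() hide whether the management registers are reached over CP14 or memory-mapped I/O. A hedged usage sketch of the usual access pattern; ETMCR, ETMCR_PWD_DWN and the CS_UNLOCK()/CS_LOCK() helpers are assumed to come from the existing CoreSight headers:

static void example_set_powerdown(struct etm_drvdata *drvdata)
{
	u32 etmcr;

	CS_UNLOCK(drvdata->base);		/* open the software lock window */

	etmcr = etm_readl(drvdata, ETMCR);	/* CP14 or MMIO, as configured */
	etmcr |= ETMCR_PWD_DWN;
	etm_writel(drvdata, etmcr, ETMCR);

	CS_LOCK(drvdata->base);
}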
extern const struct attribute_group *coresight_etm_groups[];
int etm_get_trace_id(struct etm_drvdata *drvdata);
void etm_set_default(struct etm_config *config);
void etm_config_trace_mode(struct etm_config *config);
struct etm_config *get_etm_config(struct etm_drvdata *drvdata);
#endif #endif
/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. /* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved.
*
* Description: CoreSight Funnel driver
* *
* This program is free software; you can redistribute it and/or modify * This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and * it under the terms of the GNU General Public License version 2 and
...@@ -11,7 +13,6 @@ ...@@ -11,7 +13,6 @@
*/ */
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/types.h> #include <linux/types.h>
#include <linux/device.h> #include <linux/device.h>
...@@ -69,7 +70,6 @@ static int funnel_enable(struct coresight_device *csdev, int inport, ...@@ -69,7 +70,6 @@ static int funnel_enable(struct coresight_device *csdev, int inport,
{ {
struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
pm_runtime_get_sync(drvdata->dev);
funnel_enable_hw(drvdata, inport); funnel_enable_hw(drvdata, inport);
dev_info(drvdata->dev, "FUNNEL inport %d enabled\n", inport); dev_info(drvdata->dev, "FUNNEL inport %d enabled\n", inport);
...@@ -95,7 +95,6 @@ static void funnel_disable(struct coresight_device *csdev, int inport, ...@@ -95,7 +95,6 @@ static void funnel_disable(struct coresight_device *csdev, int inport,
struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
funnel_disable_hw(drvdata, inport); funnel_disable_hw(drvdata, inport);
pm_runtime_put(drvdata->dev);
dev_info(drvdata->dev, "FUNNEL inport %d disabled\n", inport); dev_info(drvdata->dev, "FUNNEL inport %d disabled\n", inport);
} }
...@@ -226,14 +225,6 @@ static int funnel_probe(struct amba_device *adev, const struct amba_id *id) ...@@ -226,14 +225,6 @@ static int funnel_probe(struct amba_device *adev, const struct amba_id *id)
return 0; return 0;
} }
static int funnel_remove(struct amba_device *adev)
{
struct funnel_drvdata *drvdata = amba_get_drvdata(adev);
coresight_unregister(drvdata->csdev);
return 0;
}
#ifdef CONFIG_PM #ifdef CONFIG_PM
static int funnel_runtime_suspend(struct device *dev) static int funnel_runtime_suspend(struct device *dev)
{ {
...@@ -273,13 +264,9 @@ static struct amba_driver funnel_driver = { ...@@ -273,13 +264,9 @@ static struct amba_driver funnel_driver = {
.name = "coresight-funnel", .name = "coresight-funnel",
.owner = THIS_MODULE, .owner = THIS_MODULE,
.pm = &funnel_dev_pm_ops, .pm = &funnel_dev_pm_ops,
.suppress_bind_attrs = true,
}, },
.probe = funnel_probe, .probe = funnel_probe,
.remove = funnel_remove,
.id_table = funnel_ids, .id_table = funnel_ids,
}; };
builtin_amba_driver(funnel_driver);
module_amba_driver(funnel_driver);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("CoreSight Funnel driver");
...@@ -10,7 +10,6 @@ ...@@ -10,7 +10,6 @@
* GNU General Public License for more details. * GNU General Public License for more details.
*/ */
#include <linux/module.h>
#include <linux/types.h> #include <linux/types.h>
#include <linux/err.h> #include <linux/err.h>
#include <linux/slab.h> #include <linux/slab.h>
...@@ -86,7 +85,7 @@ static int of_coresight_alloc_memory(struct device *dev, ...@@ -86,7 +85,7 @@ static int of_coresight_alloc_memory(struct device *dev,
return -ENOMEM; return -ENOMEM;
/* Children connected to this component via @outports */ /* Children connected to this component via @outports */
pdata->child_names = devm_kzalloc(dev, pdata->nr_outport * pdata->child_names = devm_kzalloc(dev, pdata->nr_outport *
sizeof(*pdata->child_names), sizeof(*pdata->child_names),
GFP_KERNEL); GFP_KERNEL);
if (!pdata->child_names) if (!pdata->child_names)
......