Commit 6f23ee1f authored by Pete Zaitcev, committed by Greg Kroah-Hartman

USB: add binary API to usbmon

This patch adds a new, "binary" API in addition to the old text API usbmon
had before. The new API uses less CPU and can capture all of a packet's data,
where the old API captured at most 32 bytes. There are some limitations and
conditions to this; for example, if someone constructs a URB with 1GB of data,
it is not likely to be captured, because even the huge buffers of the new
reader are finite. Nonetheless, I expect this new capability to capture all
data in all real-life scenarios.

The downside is that a special user-mode application is required where cat(1)
worked before. I have sample code at http://people.redhat.com/zaitcev/linux/
and Paolo Abeni is working on patching libpcap.

This patch was initially written by Paolo, then I tweaked it, and we had a
little back-and-forth. So this is a jointly authored patch, but since I am
submitting it, I am responsible for the bugs.
Signed-off-by: Paolo Abeni <paolo.abeni@email.it>
Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
parent a8ef36bc
@@ -77,7 +77,7 @@ that the file size is not excessive for your favourite editor.
The '1t' type data consists of a stream of events, such as URB submission,
URB callback, submission error. Every event is a text line, which consists
of whitespace separated words. The number or position of words may depend
on the event type, but there is a set of words, common for all types.
Here is the list of words, from left to right:
@@ -170,4 +170,152 @@ dd65f0e8 4128379808 C Bo:005:02 0 31 >
* Raw binary format and API
The overall architecture of the API is about the same as the one above,
only the events are delivered in binary format. Each event is sent in
the following structure (its name is made up, so that we can refer to it):
struct usbmon_packet {
u64 id; /* 0: URB ID - from submission to callback */
unsigned char type; /* 8: Same as text; extensible. */
unsigned char xfer_type; /* ISO (0), Intr, Control, Bulk (3) */
unsigned char epnum; /* Endpoint number and transfer direction */
unsigned char devnum; /* Device address */
u16 busnum; /* 12: Bus number */
char flag_setup; /* 14: Same as text */
char flag_data; /* 15: Same as text; Binary zero is OK. */
s64 ts_sec; /* 16: gettimeofday */
s32 ts_usec; /* 24: gettimeofday */
int status; /* 28: */
unsigned int length; /* 32: Length of data (submitted or actual) */
unsigned int len_cap; /* 36: Delivered length */
unsigned char setup[8]; /* 40: Only for Control 'S' */
}; /* 48 bytes total */
These events can be received from a character device by reading with read(2),
with an ioctl(2), or by accessing the buffer with mmap.
The character device is usually called /dev/usbmonN, where N is the USB bus
number. Number zero (/dev/usbmon0) is special and means "all buses".
However, this feature is not implemented yet. Note that specific naming
policy is set by your Linux distribution.
If you create /dev/usbmon0 by hand, make sure that it is owned by root
and has mode 0600. Otherwise, unprivileged users will be able to snoop
keyboard traffic.
The following ioctl calls are available, with MON_IOC_MAGIC 0x92:
MON_IOCQ_URB_LEN, defined as _IO(MON_IOC_MAGIC, 1)
This call returns the length of data in the next event. Note that the majority
of events contain no data, so if this call returns zero, it does not mean that
no events are available.
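For example, a monitoring program might open a per-bus file and query the
length of the next event's data roughly as follows (a minimal sketch: the
device path, the replicated ioctl definition, and the error handling are
choices of the example, not part of the kernel API):

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

#define MON_IOC_MAGIC 0x92
#define MON_IOCQ_URB_LEN _IO(MON_IOC_MAGIC, 1)

int main(void)
{
	int fd, len;

	fd = open("/dev/usbmon1", O_RDONLY);	/* bus 1; adjust as needed */
	if (fd < 0) {
		perror("open");
		return 1;
	}
	len = ioctl(fd, MON_IOCQ_URB_LEN, 0);	/* 0 also means "no data queued" */
	if (len < 0)
		perror("MON_IOCQ_URB_LEN");
	else
		printf("next event carries %d bytes of data\n", len);
	close(fd);
	return 0;
}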
MON_IOCG_STATS, defined as _IOR(MON_IOC_MAGIC, 3, struct mon_bin_stats)
The argument is a pointer to the following structure:
struct mon_bin_stats {
u32 queued;
u32 dropped;
};
The member "queued" refers to the number of events currently queued in the
buffer (and not to the number of events processed since the last reset).
The member "dropped" is the number of events lost since the last call
to MON_IOCG_STATS.
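For example (a sketch; the structure and the ioctl number are assumed to be
replicated by the application from the definitions above):

#include <stdio.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

#define MON_IOC_MAGIC 0x92

struct mon_bin_stats {
	uint32_t queued;
	uint32_t dropped;
};
#define MON_IOCG_STATS _IOR(MON_IOC_MAGIC, 3, struct mon_bin_stats)

/* Report ring occupancy and drops; fd is an open /dev/usbmonN descriptor. */
static void report_stats(int fd)
{
	struct mon_bin_stats st;

	if (ioctl(fd, MON_IOCG_STATS, &st) < 0) {
		perror("MON_IOCG_STATS");
		return;
	}
	printf("queued=%u dropped=%u\n", st.queued, st.dropped);
	if (st.dropped)
		fprintf(stderr, "lost %u events since the last check\n", st.dropped);
}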
MON_IOCT_RING_SIZE, defined as _IO(MON_IOC_MAGIC, 4)
This call sets the buffer size. The argument is the size in bytes.
The size may be rounded down to the next chunk (or page). If the requested
size is out of [unspecified] bounds for this kernel, the call fails with
-EINVAL.
MON_IOCQ_RING_SIZE, defined as _IO(MON_IOC_MAGIC, 5)
This call returns the current size of the buffer in bytes.
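For example, to request a larger buffer and then see what is in effect
(a sketch; the 1 MB figure is an arbitrary value that happens to lie within
this kernel's bounds):

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

#define MON_IOC_MAGIC 0x92
#define MON_IOCT_RING_SIZE _IO(MON_IOC_MAGIC, 4)
#define MON_IOCQ_RING_SIZE _IO(MON_IOC_MAGIC, 5)

/* Returns the current buffer size in bytes, or -1 on error. */
static int grow_ring(int fd)
{
	if (ioctl(fd, MON_IOCT_RING_SIZE, 1024 * 1024) < 0) {
		perror("MON_IOCT_RING_SIZE");
		return -1;
	}
	return ioctl(fd, MON_IOCQ_RING_SIZE, 0);
}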
MON_IOCX_GET, defined as _IOW(MON_IOC_MAGIC, 6, struct mon_get_arg)
This call waits for events to arrive if none were in the kernel buffer,
then returns the first event. Its argument is a pointer to the following
structure:
struct mon_get_arg {
struct usbmon_packet *hdr;
void *data;
size_t alloc; /* Length of data (can be zero) */
};
Before the call, hdr, data, and alloc should be filled. Upon return, the area
pointed to by hdr contains the next event structure, and the data buffer
contains the data, if any. The event is removed from the kernel buffer.
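A simple capture loop built on this call could look like the sketch below
(the structures are declared by the application itself, copied from this
document; the 64 KB data limit is an arbitrary choice of the example):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

#define MON_IOC_MAGIC 0x92

struct usbmon_packet {
	uint64_t id;
	unsigned char type;
	unsigned char xfer_type;
	unsigned char epnum;
	unsigned char devnum;
	uint16_t busnum;
	char flag_setup;
	char flag_data;
	int64_t ts_sec;
	int32_t ts_usec;
	int32_t status;
	uint32_t length;
	uint32_t len_cap;
	unsigned char setup[8];
};					/* 48 bytes total */

struct mon_get_arg {
	struct usbmon_packet *hdr;
	void *data;
	size_t alloc;
};
#define MON_IOCX_GET _IOW(MON_IOC_MAGIC, 6, struct mon_get_arg)

/* Fetch and print events forever; fd is an open /dev/usbmonN descriptor. */
static void capture(int fd)
{
	struct usbmon_packet hdr;
	static unsigned char data[64 * 1024];
	struct mon_get_arg arg = { &hdr, data, sizeof(data) };

	for (;;) {
		if (ioctl(fd, MON_IOCX_GET, &arg) < 0) {	/* blocks for an event */
			perror("MON_IOCX_GET");
			break;
		}
		printf("%c bus %u dev %u ep 0x%02x %u/%u bytes\n",
		    hdr.type, hdr.busnum, hdr.devnum, hdr.epnum,
		    hdr.len_cap, hdr.length);
	}
}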
MON_IOCX_MFETCH, defined as _IOWR(MON_IOC_MAGIC, 7, struct mon_mfetch_arg)
This ioctl is primarily used when the application accesses the buffer
with mmap(2). Its argument is a pointer to the following structure:
struct mon_mfetch_arg {
uint32_t *offvec; /* Vector of events fetched */
uint32_t nfetch; /* Number of events to fetch (out: fetched) */
uint32_t nflush; /* Number of events to flush */
};
The ioctl operates in 3 stages.
First, it removes and discards up to nflush events from the kernel buffer.
The actual number of events discarded is returned in nflush.
Second, it waits for an event to be present in the buffer, unless the pseudo-
device is open with O_NONBLOCK.
Third, it extracts up to nfetch offsets into the mmap buffer, and stores
them into the offvec. The actual number of event offsets is stored into
the nfetch.
MON_IOCH_MFLUSH, defined as _IO(MON_IOC_MAGIC, 8)
This call removes a number of events from the kernel buffer. Its argument
is the number of events to remove. If the buffer contains fewer events
than requested, all events present are removed, and no error is reported.
This also works when no events are available.
FIONBIO
The ioctl FIONBIO may be implemented in the future, if there's a need.
In addition to ioctl(2) and read(2), the special file of the binary API can
be polled with select(2) and poll(2). However, lseek(2) does not work.
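For instance, an application that multiplexes usbmon with other descriptors
can sleep in poll(2) and only fetch when data is announced (a sketch; the
one-second timeout is arbitrary):

#include <stdio.h>
#include <poll.h>

/* Returns 1 if events are readable on the usbmon descriptor, 0 on timeout,
 * -1 on error. */
static int wait_for_events(int fd)
{
	struct pollfd pfd = { .fd = fd, .events = POLLIN };
	int rc = poll(&pfd, 1, 1000);

	if (rc < 0) {
		perror("poll");
		return -1;
	}
	return (rc > 0 && (pfd.revents & POLLIN)) ? 1 : 0;
}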
* Memory-mapped access of the kernel buffer for the binary API
The basic idea is simple:
To prepare, map the buffer by getting the current size, then using mmap(2).
Then, execute a loop similar to the one written in pseudo-code below:
struct mon_mfetch_arg fetch;
struct usbmon_packet *hdr;
int nflush = 0;
for (;;) {
fetch.offvec = vec; // Has N 32-bit words
fetch.nfetch = N; // Or less than N
fetch.nflush = nflush;
ioctl(fd, MON_IOCX_MFETCH, &fetch); // Process errors, too
nflush = fetch.nfetch; // This many packets to flush when done
for (i = 0; i < nflush; i++) {
hdr = (struct usbmon_packet *) &mmap_area[vec[i]];
if (hdr->type == '@') // Filler packet
continue;
caddr_t data = &mmap_area[vec[i]] + 64;
process_packet(hdr, data);
}
}
Thus, the main idea is to execute only one ioctl per N events.
Although the buffer is circular, the returned headers and data do not cross
the end of the buffer, so the above pseudo-code does not need any gathering.
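In practice, the preparation step can be done with MON_IOCQ_RING_SIZE and a
plain mmap(2) call, roughly as in this sketch (error handling and names are
choices of the example):

#include <stdio.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/ioctl.h>

#define MON_IOC_MAGIC 0x92
#define MON_IOCQ_RING_SIZE _IO(MON_IOC_MAGIC, 5)

/* Map the whole kernel ring read-only; fd is an open /dev/usbmonN descriptor. */
static unsigned char *map_ring(int fd, size_t *sizep)
{
	int size;
	unsigned char *area;

	size = ioctl(fd, MON_IOCQ_RING_SIZE, 0);
	if (size <= 0) {
		perror("MON_IOCQ_RING_SIZE");
		return NULL;
	}
	area = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0);
	if (area == MAP_FAILED) {
		perror("mmap");
		return NULL;
	}
	*sizep = size;
	return area;		/* this becomes mmap_area[] in the loop above */
}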
@@ -2,7 +2,7 @@
# Makefile for USB Core files and filesystem
#
usbmon-objs := mon_main.o mon_stat.o mon_text.o mon_bin.o mon_dma.o
# This does not use CONFIG_USB_MON because we want this to use a tristate.
obj-$(CONFIG_USB) += usbmon.o
/*
* The USB Monitor, inspired by Dave Harding's USBMon.
*
* This is a binary format reader.
*
* Copyright (C) 2006 Paolo Abeni (paolo.abeni@email.it)
* Copyright (C) 2006 Pete Zaitcev (zaitcev@redhat.com)
*/
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/fs.h>
#include <linux/cdev.h>
#include <linux/usb.h>
#include <linux/poll.h>
#include <linux/compat.h>
#include <linux/mm.h>
#include <asm/uaccess.h>
#include "usb_mon.h"
/*
* Defined by USB 2.0 clause 9.3, table 9.2.
*/
#define SETUP_LEN 8
/* ioctl macros */
#define MON_IOC_MAGIC 0x92
#define MON_IOCQ_URB_LEN _IO(MON_IOC_MAGIC, 1)
/* #2 used to be MON_IOCX_URB, removed before it got into Linus tree */
#define MON_IOCG_STATS _IOR(MON_IOC_MAGIC, 3, struct mon_bin_stats)
#define MON_IOCT_RING_SIZE _IO(MON_IOC_MAGIC, 4)
#define MON_IOCQ_RING_SIZE _IO(MON_IOC_MAGIC, 5)
#define MON_IOCX_GET _IOW(MON_IOC_MAGIC, 6, struct mon_bin_get)
#define MON_IOCX_MFETCH _IOWR(MON_IOC_MAGIC, 7, struct mon_bin_mfetch)
#define MON_IOCH_MFLUSH _IO(MON_IOC_MAGIC, 8)
#ifdef CONFIG_COMPAT
#define MON_IOCX_GET32 _IOW(MON_IOC_MAGIC, 6, struct mon_bin_get32)
#define MON_IOCX_MFETCH32 _IOWR(MON_IOC_MAGIC, 7, struct mon_bin_mfetch32)
#endif
/*
* Some architectures have enormous basic pages (16KB for ia64, 64KB for ppc).
* But it's all right. Just use a simple way to make sure the chunk is never
* smaller than a page.
*
* N.B. An application does not know our chunk size.
*
* Woops, get_zeroed_page() returns a single page. I guess we're stuck with
* page-sized chunks for the time being.
*/
#define CHUNK_SIZE PAGE_SIZE
#define CHUNK_ALIGN(x) (((x)+CHUNK_SIZE-1) & ~(CHUNK_SIZE-1))
/*
* The magic limit was calculated so that it allows the monitoring
* application to pick data once in two ticks. This way, another application,
* which presumably drives the bus, gets to hog CPU, yet we collect our data.
* If HZ is 100, a 480 mbit/s bus drives 614 KB every jiffy. USB has an
* enormous overhead built into the bus protocol, so we need about 1000 KB.
*
* This is still too much for most cases, where we just snoop a few
* descriptor fetches for enumeration. So, the default is a "reasonable"
* amount for systems with HZ=250 and incomplete bus saturation.
*
* XXX What about multi-megabyte URBs which take minutes to transfer?
*/
#define BUFF_MAX CHUNK_ALIGN(1200*1024)
#define BUFF_DFL CHUNK_ALIGN(300*1024)
#define BUFF_MIN CHUNK_ALIGN(8*1024)
/*
* The per-event API header (2 per URB).
*
* This structure is seen in userland as defined by the documentation.
*/
struct mon_bin_hdr {
u64 id; /* URB ID - from submission to callback */
unsigned char type; /* Same as in text API; extensible. */
unsigned char xfer_type; /* ISO, Intr, Control, Bulk */
unsigned char epnum; /* Endpoint number and transfer direction */
unsigned char devnum; /* Device address */
unsigned short busnum; /* Bus number */
char flag_setup;
char flag_data;
s64 ts_sec; /* gettimeofday */
s32 ts_usec; /* gettimeofday */
int status;
unsigned int len_urb; /* Length of data (submitted or actual) */
unsigned int len_cap; /* Delivered length */
unsigned char setup[SETUP_LEN]; /* Only for Control S-type */
};
/* per file statistic */
struct mon_bin_stats {
u32 queued;
u32 dropped;
};
struct mon_bin_get {
struct mon_bin_hdr __user *hdr; /* Only 48 bytes, not 64. */
void __user *data;
size_t alloc; /* Length of data (can be zero) */
};
struct mon_bin_mfetch {
u32 __user *offvec; /* Vector of events fetched */
u32 nfetch; /* Number of events to fetch (out: fetched) */
u32 nflush; /* Number of events to flush */
};
#ifdef CONFIG_COMPAT
struct mon_bin_get32 {
u32 hdr32;
u32 data32;
u32 alloc32;
};
struct mon_bin_mfetch32 {
u32 offvec32;
u32 nfetch32;
u32 nflush32;
};
#endif
/* Having these two values the same prevents wrapping of the mon_bin_hdr */
#define PKT_ALIGN 64
#define PKT_SIZE 64
/* max number of USB buses supported */
#define MON_BIN_MAX_MINOR 128
/*
* The buffer: map of used pages.
*/
struct mon_pgmap {
struct page *pg;
unsigned char *ptr; /* XXX just use page_to_virt everywhere? */
};
/*
* This gets associated with an open file struct.
*/
struct mon_reader_bin {
/* The buffer: one per open. */
spinlock_t b_lock; /* Protect b_cnt, b_in */
unsigned int b_size; /* Current size of the buffer - bytes */
unsigned int b_cnt; /* Bytes used */
unsigned int b_in, b_out; /* Offsets into buffer - bytes */
unsigned int b_read; /* Amount of read data in curr. pkt. */
struct mon_pgmap *b_vec; /* The map array */
wait_queue_head_t b_wait; /* Wait for data here */
struct mutex fetch_lock; /* Protect b_read, b_out */
int mmap_active;
/* A list of these is needed for "bus 0". Some time later. */
struct mon_reader r;
/* Stats */
unsigned int cnt_lost;
};
static inline struct mon_bin_hdr *MON_OFF2HDR(const struct mon_reader_bin *rp,
unsigned int offset)
{
return (struct mon_bin_hdr *)
(rp->b_vec[offset / CHUNK_SIZE].ptr + offset % CHUNK_SIZE);
}
#define MON_RING_EMPTY(rp) ((rp)->b_cnt == 0)
static dev_t mon_bin_dev0;
static struct cdev mon_bin_cdev;
static void mon_buff_area_fill(const struct mon_reader_bin *rp,
unsigned int offset, unsigned int size);
static int mon_bin_wait_event(struct file *file, struct mon_reader_bin *rp);
static int mon_alloc_buff(struct mon_pgmap *map, int npages);
static void mon_free_buff(struct mon_pgmap *map, int npages);
/*
* This is a "chunked memcpy". It does not manipulate any counters.
* But it returns the new offset for repeated application.
*/
unsigned int mon_copy_to_buff(const struct mon_reader_bin *this,
unsigned int off, const unsigned char *from, unsigned int length)
{
unsigned int step_len;
unsigned char *buf;
unsigned int in_page;
while (length) {
/*
* Determine step_len.
*/
step_len = length;
in_page = CHUNK_SIZE - (off & (CHUNK_SIZE-1));
if (in_page < step_len)
step_len = in_page;
/*
* Copy data and advance pointers.
*/
buf = this->b_vec[off / CHUNK_SIZE].ptr + off % CHUNK_SIZE;
memcpy(buf, from, step_len);
if ((off += step_len) >= this->b_size) off = 0;
from += step_len;
length -= step_len;
}
return off;
}
/*
* This is a little worse than the above because it's "chunked copy_to_user".
* The return value is an error code, not an offset.
*/
static int copy_from_buf(const struct mon_reader_bin *this, unsigned int off,
char __user *to, int length)
{
unsigned int step_len;
unsigned char *buf;
unsigned int in_page;
while (length) {
/*
* Determine step_len.
*/
step_len = length;
in_page = CHUNK_SIZE - (off & (CHUNK_SIZE-1));
if (in_page < step_len)
step_len = in_page;
/*
* Copy data and advance pointers.
*/
buf = this->b_vec[off / CHUNK_SIZE].ptr + off % CHUNK_SIZE;
if (copy_to_user(to, buf, step_len))
return -EINVAL;
if ((off += step_len) >= this->b_size) off = 0;
to += step_len;
length -= step_len;
}
return 0;
}
/*
* Allocate an (aligned) area in the buffer.
* This is called under b_lock.
* Returns ~0 on failure.
*/
static unsigned int mon_buff_area_alloc(struct mon_reader_bin *rp,
unsigned int size)
{
unsigned int offset;
size = (size + PKT_ALIGN-1) & ~(PKT_ALIGN-1);
if (rp->b_cnt + size > rp->b_size)
return ~0;
offset = rp->b_in;
rp->b_cnt += size;
if ((rp->b_in += size) >= rp->b_size)
rp->b_in -= rp->b_size;
return offset;
}
/*
* This is the same thing as mon_buff_area_alloc, only it does not allow
* buffers to wrap. This is needed by applications which pass references
* into mmap-ed buffers up their stacks (libpcap can do that).
*
* Currently, we always have the header stuck with the data, although
* it is not strictly speaking necessary.
*
* When a buffer would wrap, we place a filler packet to mark the space.
*/
static unsigned int mon_buff_area_alloc_contiguous(struct mon_reader_bin *rp,
unsigned int size)
{
unsigned int offset;
unsigned int fill_size;
size = (size + PKT_ALIGN-1) & ~(PKT_ALIGN-1);
if (rp->b_cnt + size > rp->b_size)
return ~0;
if (rp->b_in + size > rp->b_size) {
/*
* This would wrap. Find if we still have space after
* skipping to the end of the buffer. If we do, place
* a filler packet and allocate a new packet.
*/
fill_size = rp->b_size - rp->b_in;
if (rp->b_cnt + size + fill_size > rp->b_size)
return ~0;
mon_buff_area_fill(rp, rp->b_in, fill_size);
offset = 0;
rp->b_in = size;
rp->b_cnt += size + fill_size;
} else if (rp->b_in + size == rp->b_size) {
offset = rp->b_in;
rp->b_in = 0;
rp->b_cnt += size;
} else {
offset = rp->b_in;
rp->b_in += size;
rp->b_cnt += size;
}
return offset;
}
/*
* Return a few (kilo-)bytes to the head of the buffer.
* This is used if a DMA fetch fails.
*/
static void mon_buff_area_shrink(struct mon_reader_bin *rp, unsigned int size)
{
size = (size + PKT_ALIGN-1) & ~(PKT_ALIGN-1);
rp->b_cnt -= size;
if (rp->b_in < size)
rp->b_in += rp->b_size;
rp->b_in -= size;
}
/*
* This has to be called under both b_lock and fetch_lock, because
* it accesses both b_cnt and b_out.
*/
static void mon_buff_area_free(struct mon_reader_bin *rp, unsigned int size)
{
size = (size + PKT_ALIGN-1) & ~(PKT_ALIGN-1);
rp->b_cnt -= size;
if ((rp->b_out += size) >= rp->b_size)
rp->b_out -= rp->b_size;
}
static void mon_buff_area_fill(const struct mon_reader_bin *rp,
unsigned int offset, unsigned int size)
{
struct mon_bin_hdr *ep;
ep = MON_OFF2HDR(rp, offset);
memset(ep, 0, PKT_SIZE);
ep->type = '@';
ep->len_cap = size - PKT_SIZE;
}
static inline char mon_bin_get_setup(unsigned char *setupb,
const struct urb *urb, char ev_type)
{
if (!usb_pipecontrol(urb->pipe) || ev_type != 'S')
return '-';
if (urb->transfer_flags & URB_NO_SETUP_DMA_MAP)
return mon_dmapeek(setupb, urb->setup_dma, SETUP_LEN);
if (urb->setup_packet == NULL)
return 'Z';
memcpy(setupb, urb->setup_packet, SETUP_LEN);
return 0;
}
static char mon_bin_get_data(const struct mon_reader_bin *rp,
unsigned int offset, struct urb *urb, unsigned int length)
{
if (urb->transfer_flags & URB_NO_TRANSFER_DMA_MAP) {
mon_dmapeek_vec(rp, offset, urb->transfer_dma, length);
return 0;
}
if (urb->transfer_buffer == NULL)
return 'Z';
mon_copy_to_buff(rp, offset, urb->transfer_buffer, length);
return 0;
}
static void mon_bin_event(struct mon_reader_bin *rp, struct urb *urb,
char ev_type)
{
unsigned long flags;
struct timeval ts;
unsigned int urb_length;
unsigned int offset;
unsigned int length;
struct mon_bin_hdr *ep;
char data_tag = 0;
do_gettimeofday(&ts);
spin_lock_irqsave(&rp->b_lock, flags);
/*
* Find the maximum allowable length, then allocate space.
*/
urb_length = (ev_type == 'S') ?
urb->transfer_buffer_length : urb->actual_length;
length = urb_length;
if (length >= rp->b_size/5)
length = rp->b_size/5;
if (usb_pipein(urb->pipe)) {
if (ev_type == 'S') {
length = 0;
data_tag = '<';
}
} else {
if (ev_type == 'C') {
length = 0;
data_tag = '>';
}
}
if (rp->mmap_active)
offset = mon_buff_area_alloc_contiguous(rp, length + PKT_SIZE);
else
offset = mon_buff_area_alloc(rp, length + PKT_SIZE);
if (offset == ~0) {
rp->cnt_lost++;
spin_unlock_irqrestore(&rp->b_lock, flags);
return;
}
ep = MON_OFF2HDR(rp, offset);
if ((offset += PKT_SIZE) >= rp->b_size) offset = 0;
/*
* Fill the allocated area.
*/
memset(ep, 0, PKT_SIZE);
ep->type = ev_type;
ep->xfer_type = usb_pipetype(urb->pipe);
/* We use the fact that usb_pipein() returns 0x80 */
ep->epnum = usb_pipeendpoint(urb->pipe) | usb_pipein(urb->pipe);
ep->devnum = usb_pipedevice(urb->pipe);
ep->busnum = rp->r.m_bus->u_bus->busnum;
ep->id = (unsigned long) urb;
ep->ts_sec = ts.tv_sec;
ep->ts_usec = ts.tv_usec;
ep->status = urb->status;
ep->len_urb = urb_length;
ep->len_cap = length;
ep->flag_setup = mon_bin_get_setup(ep->setup, urb, ev_type);
if (length != 0) {
ep->flag_data = mon_bin_get_data(rp, offset, urb, length);
if (ep->flag_data != 0) { /* Yes, it's 0x00, not '0' */
ep->len_cap = 0;
mon_buff_area_shrink(rp, length);
}
} else {
ep->flag_data = data_tag;
}
spin_unlock_irqrestore(&rp->b_lock, flags);
wake_up(&rp->b_wait);
}
static void mon_bin_submit(void *data, struct urb *urb)
{
struct mon_reader_bin *rp = data;
mon_bin_event(rp, urb, 'S');
}
static void mon_bin_complete(void *data, struct urb *urb)
{
struct mon_reader_bin *rp = data;
mon_bin_event(rp, urb, 'C');
}
static void mon_bin_error(void *data, struct urb *urb, int error)
{
struct mon_reader_bin *rp = data;
unsigned long flags;
unsigned int offset;
struct mon_bin_hdr *ep;
spin_lock_irqsave(&rp->b_lock, flags);
offset = mon_buff_area_alloc(rp, PKT_SIZE);
if (offset == ~0) {
/* Not incrementing cnt_lost. Just because. */
spin_unlock_irqrestore(&rp->b_lock, flags);
return;
}
ep = MON_OFF2HDR(rp, offset);
memset(ep, 0, PKT_SIZE);
ep->type = 'E';
ep->xfer_type = usb_pipetype(urb->pipe);
/* We use the fact that usb_pipein() returns 0x80 */
ep->epnum = usb_pipeendpoint(urb->pipe) | usb_pipein(urb->pipe);
ep->devnum = usb_pipedevice(urb->pipe);
ep->busnum = rp->r.m_bus->u_bus->busnum;
ep->id = (unsigned long) urb;
ep->status = error;
ep->flag_setup = '-';
ep->flag_data = 'E';
spin_unlock_irqrestore(&rp->b_lock, flags);
wake_up(&rp->b_wait);
}
static int mon_bin_open(struct inode *inode, struct file *file)
{
struct mon_bus *mbus;
struct usb_bus *ubus;
struct mon_reader_bin *rp;
size_t size;
int rc;
mutex_lock(&mon_lock);
if ((mbus = mon_bus_lookup(iminor(inode))) == NULL) {
mutex_unlock(&mon_lock);
return -ENODEV;
}
if ((ubus = mbus->u_bus) == NULL) {
printk(KERN_ERR TAG ": consistency error on open\n");
mutex_unlock(&mon_lock);
return -ENODEV;
}
rp = kzalloc(sizeof(struct mon_reader_bin), GFP_KERNEL);
if (rp == NULL) {
rc = -ENOMEM;
goto err_alloc;
}
spin_lock_init(&rp->b_lock);
init_waitqueue_head(&rp->b_wait);
mutex_init(&rp->fetch_lock);
rp->b_size = BUFF_DFL;
size = sizeof(struct mon_pgmap) * (rp->b_size/CHUNK_SIZE);
if ((rp->b_vec = kzalloc(size, GFP_KERNEL)) == NULL) {
rc = -ENOMEM;
goto err_allocvec;
}
if ((rc = mon_alloc_buff(rp->b_vec, rp->b_size/CHUNK_SIZE)) < 0)
goto err_allocbuff;
rp->r.m_bus = mbus;
rp->r.r_data = rp;
rp->r.rnf_submit = mon_bin_submit;
rp->r.rnf_error = mon_bin_error;
rp->r.rnf_complete = mon_bin_complete;
mon_reader_add(mbus, &rp->r);
file->private_data = rp;
mutex_unlock(&mon_lock);
return 0;
err_allocbuff:
kfree(rp->b_vec);
err_allocvec:
kfree(rp);
err_alloc:
mutex_unlock(&mon_lock);
return rc;
}
/*
* Extract an event from buffer and copy it to user space.
* Wait if there is no event ready.
* Returns zero or error.
*/
static int mon_bin_get_event(struct file *file, struct mon_reader_bin *rp,
struct mon_bin_hdr __user *hdr, void __user *data, unsigned int nbytes)
{
unsigned long flags;
struct mon_bin_hdr *ep;
size_t step_len;
unsigned int offset;
int rc;
mutex_lock(&rp->fetch_lock);
if ((rc = mon_bin_wait_event(file, rp)) < 0) {
mutex_unlock(&rp->fetch_lock);
return rc;
}
ep = MON_OFF2HDR(rp, rp->b_out);
if (copy_to_user(hdr, ep, sizeof(struct mon_bin_hdr))) {
mutex_unlock(&rp->fetch_lock);
return -EFAULT;
}
step_len = min(ep->len_cap, nbytes);
if ((offset = rp->b_out + PKT_SIZE) >= rp->b_size) offset = 0;
if (copy_from_buf(rp, offset, data, step_len)) {
mutex_unlock(&rp->fetch_lock);
return -EFAULT;
}
spin_lock_irqsave(&rp->b_lock, flags);
mon_buff_area_free(rp, PKT_SIZE + ep->len_cap);
spin_unlock_irqrestore(&rp->b_lock, flags);
rp->b_read = 0;
mutex_unlock(&rp->fetch_lock);
return 0;
}
static int mon_bin_release(struct inode *inode, struct file *file)
{
struct mon_reader_bin *rp = file->private_data;
struct mon_bus* mbus = rp->r.m_bus;
mutex_lock(&mon_lock);
if (mbus->nreaders <= 0) {
printk(KERN_ERR TAG ": consistency error on close\n");
mutex_unlock(&mon_lock);
return 0;
}
mon_reader_del(mbus, &rp->r);
mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
kfree(rp->b_vec);
kfree(rp);
mutex_unlock(&mon_lock);
return 0;
}
static ssize_t mon_bin_read(struct file *file, char __user *buf,
size_t nbytes, loff_t *ppos)
{
struct mon_reader_bin *rp = file->private_data;
unsigned long flags;
struct mon_bin_hdr *ep;
unsigned int offset;
size_t step_len;
char *ptr;
ssize_t done = 0;
int rc;
mutex_lock(&rp->fetch_lock);
if ((rc = mon_bin_wait_event(file, rp)) < 0) {
mutex_unlock(&rp->fetch_lock);
return rc;
}
ep = MON_OFF2HDR(rp, rp->b_out);
if (rp->b_read < sizeof(struct mon_bin_hdr)) {
step_len = min(nbytes, sizeof(struct mon_bin_hdr) - rp->b_read);
ptr = ((char *)ep) + rp->b_read;
if (step_len && copy_to_user(buf, ptr, step_len)) {
mutex_unlock(&rp->fetch_lock);
return -EFAULT;
}
nbytes -= step_len;
buf += step_len;
rp->b_read += step_len;
done += step_len;
}
if (rp->b_read >= sizeof(struct mon_bin_hdr)) {
step_len = min(nbytes, (size_t)ep->len_cap);
offset = rp->b_out + PKT_SIZE;
offset += rp->b_read - sizeof(struct mon_bin_hdr);
if (offset >= rp->b_size)
offset -= rp->b_size;
if (copy_from_buf(rp, offset, buf, step_len)) {
mutex_unlock(&rp->fetch_lock);
return -EFAULT;
}
nbytes -= step_len;
buf += step_len;
rp->b_read += step_len;
done += step_len;
}
/*
* Check if whole packet was read, and if so, jump to the next one.
*/
if (rp->b_read >= sizeof(struct mon_bin_hdr) + ep->len_cap) {
spin_lock_irqsave(&rp->b_lock, flags);
mon_buff_area_free(rp, PKT_SIZE + ep->len_cap);
spin_unlock_irqrestore(&rp->b_lock, flags);
rp->b_read = 0;
}
mutex_unlock(&rp->fetch_lock);
return done;
}
/*
* Remove at most nevents from chunked buffer.
* Returns the number of removed events.
*/
static int mon_bin_flush(struct mon_reader_bin *rp, unsigned nevents)
{
unsigned long flags;
struct mon_bin_hdr *ep;
int i;
mutex_lock(&rp->fetch_lock);
spin_lock_irqsave(&rp->b_lock, flags);
for (i = 0; i < nevents; ++i) {
if (MON_RING_EMPTY(rp))
break;
ep = MON_OFF2HDR(rp, rp->b_out);
mon_buff_area_free(rp, PKT_SIZE + ep->len_cap);
}
spin_unlock_irqrestore(&rp->b_lock, flags);
rp->b_read = 0;
mutex_unlock(&rp->fetch_lock);
return i;
}
/*
* Fetch at most max event offsets into the buffer and put them into vec.
* The events are usually freed later with mon_bin_flush.
* Return the effective number of events fetched.
*/
static int mon_bin_fetch(struct file *file, struct mon_reader_bin *rp,
u32 __user *vec, unsigned int max)
{
unsigned int cur_out;
unsigned int bytes, avail;
unsigned int size;
unsigned int nevents;
struct mon_bin_hdr *ep;
unsigned long flags;
int rc;
mutex_lock(&rp->fetch_lock);
if ((rc = mon_bin_wait_event(file, rp)) < 0) {
mutex_unlock(&rp->fetch_lock);
return rc;
}
spin_lock_irqsave(&rp->b_lock, flags);
avail = rp->b_cnt;
spin_unlock_irqrestore(&rp->b_lock, flags);
cur_out = rp->b_out;
nevents = 0;
bytes = 0;
while (bytes < avail) {
if (nevents >= max)
break;
ep = MON_OFF2HDR(rp, cur_out);
if (put_user(cur_out, &vec[nevents])) {
mutex_unlock(&rp->fetch_lock);
return -EFAULT;
}
nevents++;
size = ep->len_cap + PKT_SIZE;
size = (size + PKT_ALIGN-1) & ~(PKT_ALIGN-1);
if ((cur_out += size) >= rp->b_size)
cur_out -= rp->b_size;
bytes += size;
}
mutex_unlock(&rp->fetch_lock);
return nevents;
}
/*
* Count events. This is almost the same as the above mon_bin_fetch,
* only we do not store offsets into user vector, and we have no limit.
*/
static int mon_bin_queued(struct mon_reader_bin *rp)
{
unsigned int cur_out;
unsigned int bytes, avail;
unsigned int size;
unsigned int nevents;
struct mon_bin_hdr *ep;
unsigned long flags;
mutex_lock(&rp->fetch_lock);
spin_lock_irqsave(&rp->b_lock, flags);
avail = rp->b_cnt;
spin_unlock_irqrestore(&rp->b_lock, flags);
cur_out = rp->b_out;
nevents = 0;
bytes = 0;
while (bytes < avail) {
ep = MON_OFF2HDR(rp, cur_out);
nevents++;
size = ep->len_cap + PKT_SIZE;
size = (size + PKT_ALIGN-1) & ~(PKT_ALIGN-1);
if ((cur_out += size) >= rp->b_size)
cur_out -= rp->b_size;
bytes += size;
}
mutex_unlock(&rp->fetch_lock);
return nevents;
}
/*
*/
static int mon_bin_ioctl(struct inode *inode, struct file *file,
unsigned int cmd, unsigned long arg)
{
struct mon_reader_bin *rp = file->private_data;
// struct mon_bus* mbus = rp->r.m_bus;
int ret = 0;
struct mon_bin_hdr *ep;
unsigned long flags;
switch (cmd) {
case MON_IOCQ_URB_LEN:
/*
* N.B. This only returns the size of data, without the header.
*/
spin_lock_irqsave(&rp->b_lock, flags);
if (!MON_RING_EMPTY(rp)) {
ep = MON_OFF2HDR(rp, rp->b_out);
ret = ep->len_cap;
}
spin_unlock_irqrestore(&rp->b_lock, flags);
break;
case MON_IOCQ_RING_SIZE:
ret = rp->b_size;
break;
case MON_IOCT_RING_SIZE:
/*
* Changing the buffer size will flush its contents; the new
* buffer is allocated before releasing the old one, to be sure
* the device stays functional even in case of memory pressure.
*/
{
int size;
struct mon_pgmap *vec;
if (arg < BUFF_MIN || arg > BUFF_MAX)
return -EINVAL;
size = CHUNK_ALIGN(arg);
if ((vec = kzalloc(sizeof(struct mon_pgmap) * (size/CHUNK_SIZE),
GFP_KERNEL)) == NULL) {
ret = -ENOMEM;
break;
}
ret = mon_alloc_buff(vec, size/CHUNK_SIZE);
if (ret < 0) {
kfree(vec);
break;
}
mutex_lock(&rp->fetch_lock);
spin_lock_irqsave(&rp->b_lock, flags);
mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
kfree(rp->b_vec);
rp->b_vec = vec;
rp->b_size = size;
rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
rp->cnt_lost = 0;
spin_unlock_irqrestore(&rp->b_lock, flags);
mutex_unlock(&rp->fetch_lock);
}
break;
case MON_IOCH_MFLUSH:
ret = mon_bin_flush(rp, arg);
break;
case MON_IOCX_GET:
{
struct mon_bin_get getb;
if (copy_from_user(&getb, (void __user *)arg,
sizeof(struct mon_bin_get)))
return -EFAULT;
if (getb.alloc > 0x10000000) /* Want to cast to u32 */
return -EINVAL;
ret = mon_bin_get_event(file, rp,
getb.hdr, getb.data, (unsigned int)getb.alloc);
}
break;
#ifdef CONFIG_COMPAT
case MON_IOCX_GET32: {
struct mon_bin_get32 getb;
if (copy_from_user(&getb, (void __user *)arg,
sizeof(struct mon_bin_get32)))
return -EFAULT;
ret = mon_bin_get_event(file, rp,
compat_ptr(getb.hdr32), compat_ptr(getb.data32),
getb.alloc32);
}
break;
#endif
case MON_IOCX_MFETCH:
{
struct mon_bin_mfetch mfetch;
struct mon_bin_mfetch __user *uptr;
uptr = (struct mon_bin_mfetch __user *)arg;
if (copy_from_user(&mfetch, uptr, sizeof(mfetch)))
return -EFAULT;
if (mfetch.nflush) {
ret = mon_bin_flush(rp, mfetch.nflush);
if (ret < 0)
return ret;
if (put_user(ret, &uptr->nflush))
return -EFAULT;
}
ret = mon_bin_fetch(file, rp, mfetch.offvec, mfetch.nfetch);
if (ret < 0)
return ret;
if (put_user(ret, &uptr->nfetch))
return -EFAULT;
ret = 0;
}
break;
#ifdef CONFIG_COMPAT
case MON_IOCX_MFETCH32:
{
struct mon_bin_mfetch32 mfetch;
struct mon_bin_mfetch32 __user *uptr;
uptr = (struct mon_bin_mfetch32 __user *) compat_ptr(arg);
if (copy_from_user(&mfetch, uptr, sizeof(mfetch)))
return -EFAULT;
if (mfetch.nflush32) {
ret = mon_bin_flush(rp, mfetch.nflush32);
if (ret < 0)
return ret;
if (put_user(ret, &uptr->nflush32))
return -EFAULT;
}
ret = mon_bin_fetch(file, rp, compat_ptr(mfetch.offvec32),
mfetch.nfetch32);
if (ret < 0)
return ret;
if (put_user(ret, &uptr->nfetch32))
return -EFAULT;
ret = 0;
}
break;
#endif
case MON_IOCG_STATS: {
struct mon_bin_stats __user *sp;
unsigned int nevents;
unsigned int ndropped;
spin_lock_irqsave(&rp->b_lock, flags);
ndropped = rp->cnt_lost;
rp->cnt_lost = 0;
spin_unlock_irqrestore(&rp->b_lock, flags);
nevents = mon_bin_queued(rp);
sp = (struct mon_bin_stats __user *)arg;
if (put_user(ndropped, &sp->dropped))
return -EFAULT;
if (put_user(nevents, &sp->queued))
return -EFAULT;
}
break;
default:
return -ENOTTY;
}
return ret;
}
static unsigned int
mon_bin_poll(struct file *file, struct poll_table_struct *wait)
{
struct mon_reader_bin *rp = file->private_data;
unsigned int mask = 0;
unsigned long flags;
if (file->f_mode & FMODE_READ)
poll_wait(file, &rp->b_wait, wait);
spin_lock_irqsave(&rp->b_lock, flags);
if (!MON_RING_EMPTY(rp))
mask |= POLLIN | POLLRDNORM; /* readable */
spin_unlock_irqrestore(&rp->b_lock, flags);
return mask;
}
/*
* open and close: just keep track of how many times the device is
* mapped, to use the proper memory allocation function.
*/
static void mon_bin_vma_open(struct vm_area_struct *vma)
{
struct mon_reader_bin *rp = vma->vm_private_data;
rp->mmap_active++;
}
static void mon_bin_vma_close(struct vm_area_struct *vma)
{
struct mon_reader_bin *rp = vma->vm_private_data;
rp->mmap_active--;
}
/*
* Map ring pages to user space.
*/
struct page *mon_bin_vma_nopage(struct vm_area_struct *vma,
unsigned long address, int *type)
{
struct mon_reader_bin *rp = vma->vm_private_data;
unsigned long offset, chunk_idx;
struct page *pageptr;
offset = (address - vma->vm_start) + (vma->vm_pgoff << PAGE_SHIFT);
if (offset >= rp->b_size)
return NOPAGE_SIGBUS;
chunk_idx = offset / CHUNK_SIZE;
pageptr = rp->b_vec[chunk_idx].pg;
get_page(pageptr);
if (type)
*type = VM_FAULT_MINOR;
return pageptr;
}
struct vm_operations_struct mon_bin_vm_ops = {
.open = mon_bin_vma_open,
.close = mon_bin_vma_close,
.nopage = mon_bin_vma_nopage,
};
int mon_bin_mmap(struct file *filp, struct vm_area_struct *vma)
{
/* don't do anything here: "nopage" will set up page table entries */
vma->vm_ops = &mon_bin_vm_ops;
vma->vm_flags |= VM_RESERVED;
vma->vm_private_data = filp->private_data;
mon_bin_vma_open(vma);
return 0;
}
struct file_operations mon_fops_binary = {
.owner = THIS_MODULE,
.open = mon_bin_open,
.llseek = no_llseek,
.read = mon_bin_read,
/* .write = mon_text_write, */
.poll = mon_bin_poll,
.ioctl = mon_bin_ioctl,
.release = mon_bin_release,
};
static int mon_bin_wait_event(struct file *file, struct mon_reader_bin *rp)
{
DECLARE_WAITQUEUE(waita, current);
unsigned long flags;
add_wait_queue(&rp->b_wait, &waita);
set_current_state(TASK_INTERRUPTIBLE);
spin_lock_irqsave(&rp->b_lock, flags);
while (MON_RING_EMPTY(rp)) {
spin_unlock_irqrestore(&rp->b_lock, flags);
if (file->f_flags & O_NONBLOCK) {
set_current_state(TASK_RUNNING);
remove_wait_queue(&rp->b_wait, &waita);
return -EWOULDBLOCK; /* Same as EAGAIN in Linux */
}
schedule();
if (signal_pending(current)) {
remove_wait_queue(&rp->b_wait, &waita);
return -EINTR;
}
set_current_state(TASK_INTERRUPTIBLE);
spin_lock_irqsave(&rp->b_lock, flags);
}
spin_unlock_irqrestore(&rp->b_lock, flags);
set_current_state(TASK_RUNNING);
remove_wait_queue(&rp->b_wait, &waita);
return 0;
}
static int mon_alloc_buff(struct mon_pgmap *map, int npages)
{
int n;
unsigned long vaddr;
for (n = 0; n < npages; n++) {
vaddr = get_zeroed_page(GFP_KERNEL);
if (vaddr == 0) {
while (n-- != 0)
free_page((unsigned long) map[n].ptr);
return -ENOMEM;
}
map[n].ptr = (unsigned char *) vaddr;
map[n].pg = virt_to_page(vaddr);
}
return 0;
}
static void mon_free_buff(struct mon_pgmap *map, int npages)
{
int n;
for (n = 0; n < npages; n++)
free_page((unsigned long) map[n].ptr);
}
int __init mon_bin_init(void)
{
int rc;
rc = alloc_chrdev_region(&mon_bin_dev0, 0, MON_BIN_MAX_MINOR, "usbmon");
if (rc < 0)
goto err_dev;
cdev_init(&mon_bin_cdev, &mon_fops_binary);
mon_bin_cdev.owner = THIS_MODULE;
rc = cdev_add(&mon_bin_cdev, mon_bin_dev0, MON_BIN_MAX_MINOR);
if (rc < 0)
goto err_add;
return 0;
err_add:
unregister_chrdev_region(mon_bin_dev0, MON_BIN_MAX_MINOR);
err_dev:
return rc;
}
void __exit mon_bin_exit(void)
{
cdev_del(&mon_bin_cdev);
unregister_chrdev_region(mon_bin_dev0, MON_BIN_MAX_MINOR);
}
@@ -48,6 +48,36 @@ char mon_dmapeek(unsigned char *dst, dma_addr_t dma_addr, int len)
local_irq_restore(flags);
return 0;
}
void mon_dmapeek_vec(const struct mon_reader_bin *rp,
unsigned int offset, dma_addr_t dma_addr, unsigned int length)
{
unsigned long flags;
unsigned int step_len;
struct page *pg;
unsigned char *map;
unsigned long page_off, page_len;
local_irq_save(flags);
while (length) {
/* compute number of bytes we are going to copy in this page */
step_len = length;
page_off = dma_addr & (PAGE_SIZE-1);
page_len = PAGE_SIZE - page_off;
if (page_len < step_len)
step_len = page_len;
/* copy data and advance pointers */
pg = phys_to_page(dma_addr);
map = kmap_atomic(pg, KM_IRQ0);
offset = mon_copy_to_buff(rp, offset, map + page_off, step_len);
kunmap_atomic(map, KM_IRQ0);
dma_addr += step_len;
length -= step_len;
}
local_irq_restore(flags);
}
#endif /* __i386__ */
#ifndef MON_HAS_UNMAP
@@ -55,4 +85,11 @@ char mon_dmapeek(unsigned char *dst, dma_addr_t dma_addr, int len)
{
return 'D';
}
#endif
void mon_dmapeek_vec(const struct mon_reader_bin *rp,
unsigned int offset, dma_addr_t dma_addr, unsigned int length)
{
;
}
#endif /* MON_HAS_UNMAP */
@@ -9,7 +9,6 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/usb.h>
#include <linux/debugfs.h>
#include <linux/smp_lock.h>
#include <linux/notifier.h>
#include <linux/mutex.h>
@@ -22,11 +21,10 @@ static void mon_complete(struct usb_bus *ubus, struct urb *urb);
static void mon_stop(struct mon_bus *mbus);
static void mon_dissolve(struct mon_bus *mbus, struct usb_bus *ubus);
static void mon_bus_drop(struct kref *r);
static void mon_bus_init(struct usb_bus *ubus);
DEFINE_MUTEX(mon_lock);
static struct dentry *mon_dir; /* /dbg/usbmon */
static LIST_HEAD(mon_buses); /* All buses we know: struct mon_bus */
/*
@@ -200,7 +198,7 @@ static void mon_stop(struct mon_bus *mbus)
*/
static void mon_bus_add(struct usb_bus *ubus)
{
mon_bus_init(ubus);
}
/*
@@ -212,8 +210,8 @@ static void mon_bus_remove(struct usb_bus *ubus)
mutex_lock(&mon_lock);
list_del(&mbus->bus_link);
if (mbus->text_inited)
mon_text_del(mbus);
mon_dissolve(mbus, ubus);
kref_put(&mbus->ref, mon_bus_drop);
@@ -281,13 +279,9 @@ static void mon_bus_drop(struct kref *r)
* - refcount USB bus struct
* - link
*/
static void mon_bus_init(struct usb_bus *ubus)
{
struct dentry *d;
struct mon_bus *mbus;
enum { NAMESZ = 10 };
char name[NAMESZ];
int rc;
if ((mbus = kzalloc(sizeof(struct mon_bus), GFP_KERNEL)) == NULL)
goto err_alloc;
@@ -303,57 +297,54 @@ static void mon_bus_init(struct dentry *mondir, struct usb_bus *ubus)
ubus->mon_bus = mbus;
mbus->uses_dma = ubus->uses_dma;
mbus->text_inited = mon_text_add(mbus, ubus);
// mon_bin_add(...)
goto err_print_t;
d = debugfs_create_file(name, 0600, mondir, mbus, &mon_fops_text);
if (d == NULL)
goto err_create_t;
mbus->dent_t = d;
rc = snprintf(name, NAMESZ, "%ds", ubus->busnum);
if (rc <= 0 || rc >= NAMESZ)
goto err_print_s;
d = debugfs_create_file(name, 0600, mondir, mbus, &mon_fops_stat);
if (d == NULL)
goto err_create_s;
mbus->dent_s = d;
mutex_lock(&mon_lock);
list_add_tail(&mbus->bus_link, &mon_buses);
mutex_unlock(&mon_lock);
return;
err_create_s:
err_print_s:
debugfs_remove(mbus->dent_t);
err_create_t:
err_print_t:
kfree(mbus);
err_alloc:
return;
}
/*
* Search a USB bus by number. Notice that USB bus numbers start from one,
* which we may later use to identify "all" with zero.
*
* This function must be called with mon_lock held.
*
* This is obviously inefficient and may be revised in the future.
*/
struct mon_bus *mon_bus_lookup(unsigned int num)
{
struct list_head *p;
struct mon_bus *mbus;
list_for_each (p, &mon_buses) {
mbus = list_entry(p, struct mon_bus, bus_link);
if (mbus->u_bus->busnum == num) {
return mbus;
}
}
return NULL;
}
static int __init mon_init(void)
{
struct usb_bus *ubus;
int rc;
if ((rc = mon_text_init()) != 0)
goto err_text;
if ((rc = mon_bin_init()) != 0)
goto err_bin;
}
if (mondir == NULL) {
printk(KERN_NOTICE TAG ": unable to create usbmon directory\n");
return -ENODEV;
}
mon_dir = mondir;
if (usb_mon_register(&mon_ops_0) != 0) {
printk(KERN_NOTICE TAG ": unable to register with the core\n");
rc = -ENODEV;
goto err_reg;
}
// MOD_INC_USE_COUNT(which_module?);
@@ -361,10 +352,17 @@ static int __init mon_init(void)
mutex_lock(&usb_bus_list_lock);
list_for_each_entry (ubus, &usb_bus_list, bus_list) {
mon_bus_init(ubus);
}
mutex_unlock(&usb_bus_list_lock);
return 0;
err_reg:
mon_bin_exit();
err_bin:
mon_text_exit();
err_text:
return rc;
}
static void __exit mon_exit(void)
@@ -381,8 +379,8 @@ static void __exit mon_exit(void)
mbus = list_entry(p, struct mon_bus, bus_link);
list_del(p);
if (mbus->text_inited)
mon_text_del(mbus);
/*
* This never happens, because the open/close paths in
@@ -401,7 +399,8 @@
}
mutex_unlock(&mon_lock);
mon_text_exit();
mon_bin_exit();
}
module_init(mon_init);
@@ -9,6 +9,7 @@
#include <linux/usb.h>
#include <linux/time.h>
#include <linux/mutex.h>
#include <linux/debugfs.h>
#include <asm/uaccess.h>
#include "usb_mon.h"
@@ -63,6 +64,8 @@ struct mon_reader_text {
char slab_name[SLAB_NAME_SZ];
};
static struct dentry *mon_dir; /* Usually /sys/kernel/debug/usbmon */
static void mon_text_ctor(void *, struct kmem_cache *, unsigned long);
/*
@@ -436,7 +439,7 @@ static int mon_text_release(struct inode *inode, struct file *file)
return 0;
}
static const struct file_operations mon_fops_text = {
.owner = THIS_MODULE,
.open = mon_text_open,
.llseek = no_llseek,
@@ -447,6 +450,47 @@ const struct file_operations mon_fops_text = {
.release = mon_text_release,
};
int mon_text_add(struct mon_bus *mbus, const struct usb_bus *ubus)
{
struct dentry *d;
enum { NAMESZ = 10 };
char name[NAMESZ];
int rc;
rc = snprintf(name, NAMESZ, "%dt", ubus->busnum);
if (rc <= 0 || rc >= NAMESZ)
goto err_print_t;
d = debugfs_create_file(name, 0600, mon_dir, mbus, &mon_fops_text);
if (d == NULL)
goto err_create_t;
mbus->dent_t = d;
/* XXX The stats do not belong to here (text API), but oh well... */
rc = snprintf(name, NAMESZ, "%ds", ubus->busnum);
if (rc <= 0 || rc >= NAMESZ)
goto err_print_s;
d = debugfs_create_file(name, 0600, mon_dir, mbus, &mon_fops_stat);
if (d == NULL)
goto err_create_s;
mbus->dent_s = d;
return 1;
err_create_s:
err_print_s:
debugfs_remove(mbus->dent_t);
mbus->dent_t = NULL;
err_create_t:
err_print_t:
return 0;
}
void mon_text_del(struct mon_bus *mbus)
{
debugfs_remove(mbus->dent_t);
debugfs_remove(mbus->dent_s);
}
/*
* Slab interface: constructor.
*/
@@ -459,3 +503,24 @@ static void mon_text_ctor(void *mem, struct kmem_cache *slab, unsigned long sfla
memset(mem, 0xe5, sizeof(struct mon_event_text));
}
int __init mon_text_init(void)
{
struct dentry *mondir;
mondir = debugfs_create_dir("usbmon", NULL);
if (IS_ERR(mondir)) {
printk(KERN_NOTICE TAG ": debugfs is not available\n");
return -ENODEV;
}
if (mondir == NULL) {
printk(KERN_NOTICE TAG ": unable to create usbmon directory\n");
return -ENODEV;
}
mon_dir = mondir;
return 0;
}
void __exit mon_text_exit(void)
{
debugfs_remove(mon_dir);
}
@@ -17,9 +17,11 @@
struct mon_bus {
struct list_head bus_link;
spinlock_t lock;
struct usb_bus *u_bus;
int text_inited;
struct dentry *dent_s; /* Debugging file */
struct dentry *dent_t; /* Text interface file */
struct usb_bus *u_bus;
int uses_dma;
/* Ref */
@@ -48,13 +50,35 @@ struct mon_reader {
void mon_reader_add(struct mon_bus *mbus, struct mon_reader *r);
void mon_reader_del(struct mon_bus *mbus, struct mon_reader *r);
struct mon_bus *mon_bus_lookup(unsigned int num);
int /*bool*/ mon_text_add(struct mon_bus *mbus, const struct usb_bus *ubus);
void mon_text_del(struct mon_bus *mbus);
// void mon_bin_add(struct mon_bus *);
int __init mon_text_init(void);
void __exit mon_text_exit(void);
int __init mon_bin_init(void);
void __exit mon_bin_exit(void);
/*
* DMA interface.
*
* XXX The vectored side needs a serious re-thinking. Abstracting vectors,
* like in Paolo's original patch, produces a double pkmap. We need an idea.
*/
extern char mon_dmapeek(unsigned char *dst, dma_addr_t dma_addr, int len);
struct mon_reader_bin;
extern void mon_dmapeek_vec(const struct mon_reader_bin *rp,
unsigned int offset, dma_addr_t dma_addr, unsigned int len);
extern unsigned int mon_copy_to_buff(const struct mon_reader_bin *rp,
unsigned int offset, const unsigned char *from, unsigned int len);
/*
*/
extern struct mutex mon_lock; extern struct mutex mon_lock;
extern const struct file_operations mon_fops_text;
extern const struct file_operations mon_fops_stat;
#endif /* __USB_MON_H */