Commit fe097651 authored by Linus Torvalds

v2.5.0.10 -> v2.5.0.11

- Jeff Garzik: no longer support old cards in tulip driver
  (see separate driver for old tulip chips)
- Pat Mochel: driverfs/device model documentation
- Ballabio Dario: update eata driver to new IO locking
- Ingo Molnar: raid resync with new bio structures (much more efficient)
  and mempool_resize()
- Jens Axboe: bio queue locking
parent 80044607
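
For reference, the "bio queue locking" change shows up throughout the diff below as a new
blk_init_queue() calling convention: each block driver now supplies the spinlock that
protects its request queue, instead of relying on a per-queue lock initialised by the
block layer. A minimal sketch of a converted driver initialisation, where the driver name,
its MAJOR_NR and its request function are purely hypothetical:

static spinlock_t mydev_lock = SPIN_LOCK_UNLOCKED;	/* hypothetical per-driver lock */

static int __init mydev_init(void)
{
	request_queue_t *q = BLK_DEFAULT_QUEUE(MAJOR_NR);

	/* new in this release: the third argument is the queue lock */
	blk_init_queue(q, do_mydev_request, &mydev_lock);
	return 0;
}

Drivers that set their queue up elsewhere can instead use the newly exported
blk_queue_assign_lock() helper, which initialises the lock and attaches it to the queue.
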
driverfs - The Device Driver Filesystem
Patrick Mochel <mochel@osdl.org>
3 December 2001

What it is:
~~~~~~~~~~~

driverfs is a unified means for device drivers to export interfaces to
userspace.

Some drivers have a need for exporting interfaces for things like
setting device-specific parameters, or tuning the device performance.
For example, wireless networking cards export a file in procfs to set
their SSID.

Other times, the bus on which a device resides may export other
information about the device. For example, PCI and USB both export
device information via procfs or usbdevfs.

In these cases, the files or directories are in nearly random places
in /proc. One benefit of driverfs is that it can consolidate all of
these interfaces to one standard location.

Why it's better than procfs:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This of course can't happen without changing every single driver that
exports a procfs interface, and having some coordination between all
of them as to what the proper place for their files is. Or can it?

driverfs was developed in conjunction with the new driver model for
the 2.5 kernel. In that model, the system has one unified tree of all
the devices that are present in the system. It follows naturally that
this tree can be exported to userspace in the same order.

So, every bus and every device gets a directory in the filesystem.
This directory is created when the device is registered in the tree,
before the driver itself is actually initialised. The dentry for this
directory is stored in the struct device for that device, so the
driver has access to it.

Now, every driver has one standard place to export its files.

Granted, the location of the file is not as intuitive as it may have
been under procfs. But, I argue that with the exception of
/proc/bus/pci, none of the files had intuitive locations. I also argue
that the development of userspace tools can help cope with these
changes and inconsistencies in locations.

Why we're not just using procfs:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When developing the new driver model, it was initially implemented
with a procfs tree. In explaining the concept to Linus, he said "Don't
use proc."

I was a little shocked (especially considering I had already
implemented it using procfs). "What do you mean 'don't use proc'?"

His argument was that too many things use proc that shouldn't. And
even more things misuse proc that shouldn't. On top of that, procfs
was written before the VFS layer was written, so it doesn't use the
dcache. It reimplements many of the same features that the dcache
does, and is, in general, crufty.

So, he told me to write my own. Soon after, he pointed me at ramfs,
the simplest filesystem known to man.

Consequently, we have a virtual filesystem based heavily on ramfs, and
borrowing some conceptual functionality from procfs.

It may suck, but it does what it was designed to. At least so far.

How it works:
~~~~~~~~~~~~~

Directories are encapsulated like this:

struct driver_dir_entry {
	char			* name;
	struct dentry		* dentry;
	mode_t			mode;
	struct list_head	files;
};

name:
	Name of the directory.
dentry:
	Dentry for the directory.
mode:
	Permissions of the directory.
files:
	Linked list of driver_file_entry's that are in the directory.

To create a directory, one first calls

struct driver_dir_entry *
driverfs_create_dir_entry(const char * name, mode_t mode);

which allocates and initialises a struct driver_dir_entry. Then to actually
create the directory:

int driverfs_create_dir(struct driver_dir_entry *, struct driver_dir_entry *);

To remove a directory:

void driverfs_remove_dir(struct driver_dir_entry * entry);
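
As a purely illustrative sketch, a driver could hang its own directory off a parent
directory it already has access to (for instance the one associated with its struct
device). The name "mydriver", the parent_dir argument and the mode bits are assumptions,
as is the error handling (a NULL return is taken to mean allocation failure):

static struct driver_dir_entry * mydriver_dir;

static int mydriver_make_dir(struct driver_dir_entry * parent_dir)
{
	/* allocate and initialise the directory descriptor */
	mydriver_dir = driverfs_create_dir_entry("mydriver", S_IFDIR | S_IRWXU);
	if (!mydriver_dir)
		return -ENOMEM;

	/* actually create the directory beneath its parent */
	return driverfs_create_dir(mydriver_dir, parent_dir);
}

static void mydriver_remove_dir(void)
{
	driverfs_remove_dir(mydriver_dir);
}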

Files are encapsulated like this:

struct driver_file_entry {
	struct driver_dir_entry	* parent;
	struct list_head	node;
	char			* name;
	mode_t			mode;
	struct dentry		* dentry;
	void			* data;
	struct driverfs_operations	* ops;
};

struct driverfs_operations {
	ssize_t (*read) (char *, size_t, loff_t, void *);
	ssize_t (*write)(const char *, size_t, loff_t, void *);
};

parent:
	The directory in which the file resides.
node:
	Node in its parent directory's list of files.
name:
	The name of the file.
dentry:
	The dentry for the file.
data:
	Caller-specific data that is passed to the callbacks when they
	are called.
ops:
	Operations for the file. Currently, this only contains read() and
	write() callbacks for the file.

To create a file, one first calls

struct driver_file_entry *
driverfs_create_entry (const char * name, mode_t mode,
		       struct driverfs_operations * ops, void * data);

That allocates and initialises a struct driver_file_entry. Then, to actually
create a file, one calls

int driverfs_create_file(struct driver_file_entry * entry,
			 struct driver_dir_entry * parent);

To remove a file, one calls

void driverfs_remove_file(struct driver_dir_entry *, const char * name);

The callback functionality is similar to the way procfs works. When a
user performs a read(2) or write(2) on the file, it first calls a
driverfs function. This function then checks for a non-NULL pointer in
the file->private_data field, which it assumes to be a pointer to a
struct driver_file_entry. It then checks for the appropriate callback
and calls it.
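
Putting the pieces together, a driver might export a small read-only "status" file.
This is only a sketch: the names and mode bits are made up, and it assumes the read()
callback may fill the buffer it is handed directly and return the number of bytes
produced (staying well under the one-page limit noted below); the exact calling
convention is defined by the driverfs code itself, not by this example.

/* hypothetical per-device state, handed to the callback via the data pointer */
struct mydriver_state {
	int	irq;
};

static ssize_t mydriver_status_read(char * buf, size_t count, loff_t off, void * data)
{
	struct mydriver_state * state = data;

	/* everything fits in a single read at offset 0 */
	if (off)
		return 0;
	return snprintf(buf, count, "irq: %d\n", state->irq);
}

static struct driverfs_operations mydriver_status_ops = {
	read:	mydriver_status_read,
};

static int mydriver_export_status(struct driver_dir_entry * dir,
				  struct mydriver_state * state)
{
	struct driver_file_entry * entry;

	entry = driverfs_create_entry("status", S_IRUGO, &mydriver_status_ops, state);
	if (!entry)
		return -ENOMEM;

	return driverfs_create_file(entry, dir);
}

Tearing the file down again is simply driverfs_remove_file(dir, "status").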

What driverfs is not:
~~~~~~~~~~~~~~~~~~~~~

It is not a replacement for either devfs or procfs.

It does not handle device nodes, like devfs is intended to do. I think
this functionality is possible, and I do think that integration of the
device nodes and control files should be done. Whether driverfs,
devfs, or something else is the place to do it, I don't know.

It is not intended to be a replacement for all of the procfs
functionality. I think that many of the driver files should be moved
out of /proc (and maybe a few other things as well ;).

Limitations:
~~~~~~~~~~~~

The driverfs functions assume that at most a page is being either read
or written each time.

Possible bugs:
~~~~~~~~~~~~~~

It may not deal with offsets and/or seeks very well, especially if
they cross a page boundary.

There may be locking issues when dynamically adding/removing files and
directories rapidly (like if you have a hot-plug device).

There are some people who believe that filesystems which add
files/directories dynamically based on the presence of devices are
inherently flawed. Though I am not as technically versed in this area as
some of those people, I like to believe that they can be made to work,
with the right guidance.
VERSION = 2 VERSION = 2
PATCHLEVEL = 5 PATCHLEVEL = 5
SUBLEVEL = 1 SUBLEVEL = 1
EXTRAVERSION =-pre10 EXTRAVERSION =-pre11
KERNELRELEASE=$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION) KERNELRELEASE=$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)
......
...@@ -9,11 +9,3 @@ void * __io_virt_debug(unsigned long x, const char *file, int line) ...@@ -9,11 +9,3 @@ void * __io_virt_debug(unsigned long x, const char *file, int line)
return (void *)x; return (void *)x;
} }
unsigned long __io_phys_debug(unsigned long x, const char *file, int line)
{
if (x < PAGE_OFFSET) {
printk("io mapaddr 0x%05lx not valid at %s:%d!\n", x, file, line);
return x;
}
return __pa(x);
}
...@@ -1237,7 +1237,7 @@ static void do_cciss_request(request_queue_t *q) ...@@ -1237,7 +1237,7 @@ static void do_cciss_request(request_queue_t *q)
blkdev_dequeue_request(creq); blkdev_dequeue_request(creq);
spin_unlock_irq(&q->queue_lock); spin_unlock_irq(q->queue_lock);
c->cmd_type = CMD_RWREQ; c->cmd_type = CMD_RWREQ;
c->rq = creq; c->rq = creq;
...@@ -1298,7 +1298,7 @@ static void do_cciss_request(request_queue_t *q) ...@@ -1298,7 +1298,7 @@ static void do_cciss_request(request_queue_t *q)
c->Request.CDB[8]= creq->nr_sectors & 0xff; c->Request.CDB[8]= creq->nr_sectors & 0xff;
c->Request.CDB[9] = c->Request.CDB[11] = c->Request.CDB[12] = 0; c->Request.CDB[9] = c->Request.CDB[11] = c->Request.CDB[12] = 0;
spin_lock_irq(&q->queue_lock); spin_lock_irq(q->queue_lock);
addQ(&(h->reqQ),c); addQ(&(h->reqQ),c);
h->Qdepth++; h->Qdepth++;
...@@ -1866,7 +1866,7 @@ static int __init cciss_init_one(struct pci_dev *pdev, ...@@ -1866,7 +1866,7 @@ static int __init cciss_init_one(struct pci_dev *pdev,
q = BLK_DEFAULT_QUEUE(MAJOR_NR + i); q = BLK_DEFAULT_QUEUE(MAJOR_NR + i);
q->queuedata = hba[i]; q->queuedata = hba[i];
blk_init_queue(q, do_cciss_request); blk_init_queue(q, do_cciss_request, &hba[i]->lock);
blk_queue_bounce_limit(q, hba[i]->pdev->dma_mask); blk_queue_bounce_limit(q, hba[i]->pdev->dma_mask);
blk_queue_max_segments(q, MAXSGENTRIES); blk_queue_max_segments(q, MAXSGENTRIES);
blk_queue_max_sectors(q, 512); blk_queue_max_sectors(q, 512);
......
...@@ -66,6 +66,7 @@ struct ctlr_info ...@@ -66,6 +66,7 @@ struct ctlr_info
unsigned int Qdepth; unsigned int Qdepth;
unsigned int maxQsinceinit; unsigned int maxQsinceinit;
unsigned int maxSG; unsigned int maxSG;
spinlock_t lock;
//* pointers to command and error info pool */ //* pointers to command and error info pool */
CommandList_struct *cmd_pool; CommandList_struct *cmd_pool;
...@@ -242,7 +243,7 @@ struct board_type { ...@@ -242,7 +243,7 @@ struct board_type {
struct access_method *access; struct access_method *access;
}; };
#define CCISS_LOCK(i) (&((BLK_DEFAULT_QUEUE(MAJOR_NR + i))->queue_lock)) #define CCISS_LOCK(i) ((BLK_DEFAULT_QUEUE(MAJOR_NR + i))->queue_lock)
#endif /* CCISS_H */ #endif /* CCISS_H */
...@@ -467,7 +467,7 @@ int __init cpqarray_init(void) ...@@ -467,7 +467,7 @@ int __init cpqarray_init(void)
q = BLK_DEFAULT_QUEUE(MAJOR_NR + i); q = BLK_DEFAULT_QUEUE(MAJOR_NR + i);
q->queuedata = hba[i]; q->queuedata = hba[i];
blk_init_queue(q, do_ida_request); blk_init_queue(q, do_ida_request, &hba[i]->lock);
blk_queue_bounce_limit(q, hba[i]->pci_dev->dma_mask); blk_queue_bounce_limit(q, hba[i]->pci_dev->dma_mask);
blk_queue_max_segments(q, SG_MAX); blk_queue_max_segments(q, SG_MAX);
blksize_size[MAJOR_NR+i] = ida_blocksizes + (i*256); blksize_size[MAJOR_NR+i] = ida_blocksizes + (i*256);
...@@ -882,7 +882,7 @@ static void do_ida_request(request_queue_t *q) ...@@ -882,7 +882,7 @@ static void do_ida_request(request_queue_t *q)
blkdev_dequeue_request(creq); blkdev_dequeue_request(creq);
spin_unlock_irq(&q->queue_lock); spin_unlock_irq(q->queue_lock);
c->ctlr = h->ctlr; c->ctlr = h->ctlr;
c->hdr.unit = MINOR(creq->rq_dev) >> NWD_SHIFT; c->hdr.unit = MINOR(creq->rq_dev) >> NWD_SHIFT;
...@@ -915,7 +915,7 @@ DBGPX( printk("Submitting %d sectors in %d segments\n", creq->nr_sectors, seg); ...@@ -915,7 +915,7 @@ DBGPX( printk("Submitting %d sectors in %d segments\n", creq->nr_sectors, seg);
c->req.hdr.cmd = (rq_data_dir(creq) == READ) ? IDA_READ : IDA_WRITE; c->req.hdr.cmd = (rq_data_dir(creq) == READ) ? IDA_READ : IDA_WRITE;
c->type = CMD_RWREQ; c->type = CMD_RWREQ;
spin_lock_irq(&q->queue_lock); spin_lock_irq(q->queue_lock);
/* Put the request on the tail of the request queue */ /* Put the request on the tail of the request queue */
addQ(&h->reqQ, c); addQ(&h->reqQ, c);
......
...@@ -106,6 +106,7 @@ struct ctlr_info { ...@@ -106,6 +106,7 @@ struct ctlr_info {
cmdlist_t *cmd_pool; cmdlist_t *cmd_pool;
dma_addr_t cmd_pool_dhandle; dma_addr_t cmd_pool_dhandle;
__u32 *cmd_pool_bits; __u32 *cmd_pool_bits;
spinlock_t lock;
unsigned int Qdepth; unsigned int Qdepth;
unsigned int maxQsinceinit; unsigned int maxQsinceinit;
...@@ -117,7 +118,7 @@ struct ctlr_info { ...@@ -117,7 +118,7 @@ struct ctlr_info {
unsigned int misc_tflags; unsigned int misc_tflags;
}; };
#define IDA_LOCK(i) (&((BLK_DEFAULT_QUEUE(MAJOR_NR + i))->queue_lock)) #define IDA_LOCK(i) ((BLK_DEFAULT_QUEUE(MAJOR_NR + i))->queue_lock)
#endif #endif
......
...@@ -204,6 +204,8 @@ static int use_virtual_dma; ...@@ -204,6 +204,8 @@ static int use_virtual_dma;
* record each buffers capabilities * record each buffers capabilities
*/ */
static spinlock_t floppy_lock;
static unsigned short virtual_dma_port=0x3f0; static unsigned short virtual_dma_port=0x3f0;
void floppy_interrupt(int irq, void *dev_id, struct pt_regs * regs); void floppy_interrupt(int irq, void *dev_id, struct pt_regs * regs);
static int set_dor(int fdc, char mask, char data); static int set_dor(int fdc, char mask, char data);
...@@ -2296,7 +2298,7 @@ static void request_done(int uptodate) ...@@ -2296,7 +2298,7 @@ static void request_done(int uptodate)
DRS->maxtrack = 1; DRS->maxtrack = 1;
/* unlock chained buffers */ /* unlock chained buffers */
spin_lock_irqsave(&QUEUE->queue_lock, flags); spin_lock_irqsave(QUEUE->queue_lock, flags);
while (current_count_sectors && !QUEUE_EMPTY && while (current_count_sectors && !QUEUE_EMPTY &&
current_count_sectors >= CURRENT->current_nr_sectors){ current_count_sectors >= CURRENT->current_nr_sectors){
current_count_sectors -= CURRENT->current_nr_sectors; current_count_sectors -= CURRENT->current_nr_sectors;
...@@ -2304,7 +2306,7 @@ static void request_done(int uptodate) ...@@ -2304,7 +2306,7 @@ static void request_done(int uptodate)
CURRENT->sector += CURRENT->current_nr_sectors; CURRENT->sector += CURRENT->current_nr_sectors;
end_request(1); end_request(1);
} }
spin_unlock_irqrestore(&QUEUE->queue_lock, flags); spin_unlock_irqrestore(QUEUE->queue_lock, flags);
if (current_count_sectors && !QUEUE_EMPTY){ if (current_count_sectors && !QUEUE_EMPTY){
/* "unlock" last subsector */ /* "unlock" last subsector */
...@@ -2329,9 +2331,9 @@ static void request_done(int uptodate) ...@@ -2329,9 +2331,9 @@ static void request_done(int uptodate)
DRWE->last_error_sector = CURRENT->sector; DRWE->last_error_sector = CURRENT->sector;
DRWE->last_error_generation = DRS->generation; DRWE->last_error_generation = DRS->generation;
} }
spin_lock_irqsave(&QUEUE->queue_lock, flags); spin_lock_irqsave(QUEUE->queue_lock, flags);
end_request(0); end_request(0);
spin_unlock_irqrestore(&QUEUE->queue_lock, flags); spin_unlock_irqrestore(QUEUE->queue_lock, flags);
} }
} }
...@@ -2433,17 +2435,20 @@ static void rw_interrupt(void) ...@@ -2433,17 +2435,20 @@ static void rw_interrupt(void)
static int buffer_chain_size(void) static int buffer_chain_size(void)
{ {
struct bio *bio; struct bio *bio;
int size; struct bio_vec *bv;
int size, i;
char *base; char *base;
base = CURRENT->buffer; base = bio_data(CURRENT->bio);
size = 0; size = 0;
rq_for_each_bio(bio, CURRENT) { rq_for_each_bio(bio, CURRENT) {
if (bio_data(bio) != base + size) bio_for_each_segment(bv, bio, i) {
if (page_address(bv->bv_page) + bv->bv_offset != base + size)
break; break;
size += bio->bi_size; size += bv->bv_len;
}
} }
return size >> 9; return size >> 9;
...@@ -2469,9 +2474,10 @@ static int transfer_size(int ssize, int max_sector, int max_size) ...@@ -2469,9 +2474,10 @@ static int transfer_size(int ssize, int max_sector, int max_size)
static void copy_buffer(int ssize, int max_sector, int max_sector_2) static void copy_buffer(int ssize, int max_sector, int max_sector_2)
{ {
int remaining; /* number of transferred 512-byte sectors */ int remaining; /* number of transferred 512-byte sectors */
struct bio_vec *bv;
struct bio *bio; struct bio *bio;
char *buffer, *dma_buffer; char *buffer, *dma_buffer;
int size; int size, i;
max_sector = transfer_size(ssize, max_sector = transfer_size(ssize,
minimum(max_sector, max_sector_2), minimum(max_sector, max_sector_2),
...@@ -2501,12 +2507,17 @@ static void copy_buffer(int ssize, int max_sector, int max_sector_2) ...@@ -2501,12 +2507,17 @@ static void copy_buffer(int ssize, int max_sector, int max_sector_2)
dma_buffer = floppy_track_buffer + ((fsector_t - buffer_min) << 9); dma_buffer = floppy_track_buffer + ((fsector_t - buffer_min) << 9);
bio = CURRENT->bio;
size = CURRENT->current_nr_sectors << 9; size = CURRENT->current_nr_sectors << 9;
buffer = CURRENT->buffer;
while (remaining > 0){ rq_for_each_bio(bio, CURRENT) {
bio_for_each_segment(bv, bio, i) {
if (!remaining)
break;
size = bv->bv_len;
SUPBOUND(size, remaining); SUPBOUND(size, remaining);
buffer = page_address(bv->bv_page) + bv->bv_offset;
#ifdef FLOPPY_SANITY_CHECK #ifdef FLOPPY_SANITY_CHECK
if (dma_buffer + size > if (dma_buffer + size >
floppy_track_buffer + (max_buffer_sectors << 10) || floppy_track_buffer + (max_buffer_sectors << 10) ||
...@@ -2530,20 +2541,10 @@ static void copy_buffer(int ssize, int max_sector, int max_sector_2) ...@@ -2530,20 +2541,10 @@ static void copy_buffer(int ssize, int max_sector, int max_sector_2)
memcpy(buffer, dma_buffer, size); memcpy(buffer, dma_buffer, size);
else else
memcpy(dma_buffer, buffer, size); memcpy(dma_buffer, buffer, size);
remaining -= size;
if (!remaining)
break;
remaining -= size;
dma_buffer += size; dma_buffer += size;
bio = bio->bi_next;
#ifdef FLOPPY_SANITY_CHECK
if (!bio){
DPRINT("bh=null in copy buffer after copy\n");
break;
} }
#endif
size = bio->bi_size;
buffer = bio_data(bio);
} }
#ifdef FLOPPY_SANITY_CHECK #ifdef FLOPPY_SANITY_CHECK
if (remaining){ if (remaining){
...@@ -4169,7 +4170,7 @@ int __init floppy_init(void) ...@@ -4169,7 +4170,7 @@ int __init floppy_init(void)
blk_size[MAJOR_NR] = floppy_sizes; blk_size[MAJOR_NR] = floppy_sizes;
blksize_size[MAJOR_NR] = floppy_blocksizes; blksize_size[MAJOR_NR] = floppy_blocksizes;
blk_init_queue(BLK_DEFAULT_QUEUE(MAJOR_NR), DEVICE_REQUEST); blk_init_queue(BLK_DEFAULT_QUEUE(MAJOR_NR), DEVICE_REQUEST, &floppy_lock);
reschedule_timeout(MAXTIMEOUT, "floppy init", MAXTIMEOUT); reschedule_timeout(MAXTIMEOUT, "floppy init", MAXTIMEOUT);
config_types(); config_types();
...@@ -4477,6 +4478,7 @@ MODULE_LICENSE("GPL"); ...@@ -4477,6 +4478,7 @@ MODULE_LICENSE("GPL");
#else #else
__setup ("floppy=", floppy_setup); __setup ("floppy=", floppy_setup);
module_init(floppy_init)
/* eject the boot floppy (if we need the drive for a different root floppy) */ /* eject the boot floppy (if we need the drive for a different root floppy) */
/* This should only be called at boot time when we're sure that there's no /* This should only be called at boot time when we're sure that there's no
......
...@@ -254,6 +254,12 @@ void blk_queue_segment_boundary(request_queue_t *q, unsigned long mask) ...@@ -254,6 +254,12 @@ void blk_queue_segment_boundary(request_queue_t *q, unsigned long mask)
q->seg_boundary_mask = mask; q->seg_boundary_mask = mask;
} }
void blk_queue_assign_lock(request_queue_t *q, spinlock_t *lock)
{
spin_lock_init(lock);
q->queue_lock = lock;
}
static char *rq_flags[] = { "REQ_RW", "REQ_RW_AHEAD", "REQ_BARRIER", static char *rq_flags[] = { "REQ_RW", "REQ_RW_AHEAD", "REQ_BARRIER",
"REQ_CMD", "REQ_NOMERGE", "REQ_STARTED", "REQ_CMD", "REQ_NOMERGE", "REQ_STARTED",
"REQ_DONTPREP", "REQ_DRIVE_CMD", "REQ_DRIVE_TASK", "REQ_DONTPREP", "REQ_DRIVE_CMD", "REQ_DRIVE_TASK",
...@@ -536,9 +542,9 @@ void generic_unplug_device(void *data) ...@@ -536,9 +542,9 @@ void generic_unplug_device(void *data)
request_queue_t *q = (request_queue_t *) data; request_queue_t *q = (request_queue_t *) data;
unsigned long flags; unsigned long flags;
spin_lock_irqsave(&q->queue_lock, flags); spin_lock_irqsave(q->queue_lock, flags);
__generic_unplug_device(q); __generic_unplug_device(q);
spin_unlock_irqrestore(&q->queue_lock, flags); spin_unlock_irqrestore(q->queue_lock, flags);
} }
static int __blk_cleanup_queue(struct request_list *list) static int __blk_cleanup_queue(struct request_list *list)
...@@ -624,7 +630,6 @@ static int blk_init_free_list(request_queue_t *q) ...@@ -624,7 +630,6 @@ static int blk_init_free_list(request_queue_t *q)
init_waitqueue_head(&q->rq[READ].wait); init_waitqueue_head(&q->rq[READ].wait);
init_waitqueue_head(&q->rq[WRITE].wait); init_waitqueue_head(&q->rq[WRITE].wait);
spin_lock_init(&q->queue_lock);
return 0; return 0;
nomem: nomem:
blk_cleanup_queue(q); blk_cleanup_queue(q);
...@@ -661,7 +666,7 @@ static int __make_request(request_queue_t *, struct bio *); ...@@ -661,7 +666,7 @@ static int __make_request(request_queue_t *, struct bio *);
* blk_init_queue() must be paired with a blk_cleanup_queue() call * blk_init_queue() must be paired with a blk_cleanup_queue() call
* when the block device is deactivated (such as at module unload). * when the block device is deactivated (such as at module unload).
**/ **/
int blk_init_queue(request_queue_t *q, request_fn_proc *rfn) int blk_init_queue(request_queue_t *q, request_fn_proc *rfn, spinlock_t *lock)
{ {
int ret; int ret;
...@@ -682,6 +687,7 @@ int blk_init_queue(request_queue_t *q, request_fn_proc *rfn) ...@@ -682,6 +687,7 @@ int blk_init_queue(request_queue_t *q, request_fn_proc *rfn)
q->plug_tq.routine = &generic_unplug_device; q->plug_tq.routine = &generic_unplug_device;
q->plug_tq.data = q; q->plug_tq.data = q;
q->queue_flags = (1 << QUEUE_FLAG_CLUSTER); q->queue_flags = (1 << QUEUE_FLAG_CLUSTER);
q->queue_lock = lock;
/* /*
* by default assume old behaviour and bounce for any highmem page * by default assume old behaviour and bounce for any highmem page
...@@ -728,7 +734,7 @@ static struct request *get_request_wait(request_queue_t *q, int rw) ...@@ -728,7 +734,7 @@ static struct request *get_request_wait(request_queue_t *q, int rw)
struct request_list *rl = &q->rq[rw]; struct request_list *rl = &q->rq[rw];
struct request *rq; struct request *rq;
spin_lock_prefetch(&q->queue_lock); spin_lock_prefetch(q->queue_lock);
generic_unplug_device(q); generic_unplug_device(q);
add_wait_queue(&rl->wait, &wait); add_wait_queue(&rl->wait, &wait);
...@@ -736,9 +742,9 @@ static struct request *get_request_wait(request_queue_t *q, int rw) ...@@ -736,9 +742,9 @@ static struct request *get_request_wait(request_queue_t *q, int rw)
set_current_state(TASK_UNINTERRUPTIBLE); set_current_state(TASK_UNINTERRUPTIBLE);
if (rl->count < batch_requests) if (rl->count < batch_requests)
schedule(); schedule();
spin_lock_irq(&q->queue_lock); spin_lock_irq(q->queue_lock);
rq = get_request(q, rw); rq = get_request(q, rw);
spin_unlock_irq(&q->queue_lock); spin_unlock_irq(q->queue_lock);
} while (rq == NULL); } while (rq == NULL);
remove_wait_queue(&rl->wait, &wait); remove_wait_queue(&rl->wait, &wait);
current->state = TASK_RUNNING; current->state = TASK_RUNNING;
...@@ -949,9 +955,9 @@ void blk_attempt_remerge(request_queue_t *q, struct request *rq) ...@@ -949,9 +955,9 @@ void blk_attempt_remerge(request_queue_t *q, struct request *rq)
{ {
unsigned long flags; unsigned long flags;
spin_lock_irqsave(&q->queue_lock, flags); spin_lock_irqsave(q->queue_lock, flags);
__blk_attempt_remerge(q, rq); __blk_attempt_remerge(q, rq);
spin_unlock_irqrestore(&q->queue_lock, flags); spin_unlock_irqrestore(q->queue_lock, flags);
} }
static int __make_request(request_queue_t *q, struct bio *bio) static int __make_request(request_queue_t *q, struct bio *bio)
...@@ -974,7 +980,7 @@ static int __make_request(request_queue_t *q, struct bio *bio) ...@@ -974,7 +980,7 @@ static int __make_request(request_queue_t *q, struct bio *bio)
*/ */
blk_queue_bounce(q, &bio); blk_queue_bounce(q, &bio);
spin_lock_prefetch(&q->queue_lock); spin_lock_prefetch(q->queue_lock);
latency = elevator_request_latency(elevator, rw); latency = elevator_request_latency(elevator, rw);
barrier = test_bit(BIO_RW_BARRIER, &bio->bi_rw); barrier = test_bit(BIO_RW_BARRIER, &bio->bi_rw);
...@@ -983,7 +989,7 @@ static int __make_request(request_queue_t *q, struct bio *bio) ...@@ -983,7 +989,7 @@ static int __make_request(request_queue_t *q, struct bio *bio)
req = NULL; req = NULL;
head = &q->queue_head; head = &q->queue_head;
spin_lock_irq(&q->queue_lock); spin_lock_irq(q->queue_lock);
insert_here = head->prev; insert_here = head->prev;
if (blk_queue_empty(q) || barrier) { if (blk_queue_empty(q) || barrier) {
...@@ -1066,7 +1072,7 @@ static int __make_request(request_queue_t *q, struct bio *bio) ...@@ -1066,7 +1072,7 @@ static int __make_request(request_queue_t *q, struct bio *bio)
freereq = NULL; freereq = NULL;
} else if ((req = get_request(q, rw)) == NULL) { } else if ((req = get_request(q, rw)) == NULL) {
spin_unlock_irq(&q->queue_lock); spin_unlock_irq(q->queue_lock);
/* /*
* READA bit set * READA bit set
...@@ -1111,7 +1117,7 @@ static int __make_request(request_queue_t *q, struct bio *bio) ...@@ -1111,7 +1117,7 @@ static int __make_request(request_queue_t *q, struct bio *bio)
out: out:
if (freereq) if (freereq)
blkdev_release_request(freereq); blkdev_release_request(freereq);
spin_unlock_irq(&q->queue_lock); spin_unlock_irq(q->queue_lock);
return 0; return 0;
end_io: end_io:
...@@ -1608,3 +1614,4 @@ EXPORT_SYMBOL(blk_nohighio); ...@@ -1608,3 +1614,4 @@ EXPORT_SYMBOL(blk_nohighio);
EXPORT_SYMBOL(blk_dump_rq_flags); EXPORT_SYMBOL(blk_dump_rq_flags);
EXPORT_SYMBOL(submit_bio); EXPORT_SYMBOL(submit_bio);
EXPORT_SYMBOL(blk_contig_segment); EXPORT_SYMBOL(blk_contig_segment);
EXPORT_SYMBOL(blk_queue_assign_lock);
...@@ -62,6 +62,8 @@ static u64 nbd_bytesizes[MAX_NBD]; ...@@ -62,6 +62,8 @@ static u64 nbd_bytesizes[MAX_NBD];
static struct nbd_device nbd_dev[MAX_NBD]; static struct nbd_device nbd_dev[MAX_NBD];
static devfs_handle_t devfs_handle; static devfs_handle_t devfs_handle;
static spinlock_t nbd_lock;
#define DEBUG( s ) #define DEBUG( s )
/* #define DEBUG( s ) printk( s ) /* #define DEBUG( s ) printk( s )
*/ */
...@@ -347,22 +349,22 @@ static void do_nbd_request(request_queue_t * q) ...@@ -347,22 +349,22 @@ static void do_nbd_request(request_queue_t * q)
#endif #endif
req->errors = 0; req->errors = 0;
blkdev_dequeue_request(req); blkdev_dequeue_request(req);
spin_unlock_irq(&q->queue_lock); spin_unlock_irq(q->queue_lock);
down (&lo->queue_lock); down (&lo->queue_lock);
list_add(&req->queuelist, &lo->queue_head); list_add(&req->queuelist, &lo->queue_head);
nbd_send_req(lo->sock, req); /* Why does this block? */ nbd_send_req(lo->sock, req); /* Why does this block? */
up (&lo->queue_lock); up (&lo->queue_lock);
spin_lock_irq(&q->queue_lock); spin_lock_irq(q->queue_lock);
continue; continue;
error_out: error_out:
req->errors++; req->errors++;
blkdev_dequeue_request(req); blkdev_dequeue_request(req);
spin_unlock(&q->queue_lock); spin_unlock(q->queue_lock);
nbd_end_request(req); nbd_end_request(req);
spin_lock(&q->queue_lock); spin_lock(q->queue_lock);
} }
return; return;
} }
...@@ -515,7 +517,7 @@ static int __init nbd_init(void) ...@@ -515,7 +517,7 @@ static int __init nbd_init(void)
#endif #endif
blksize_size[MAJOR_NR] = nbd_blksizes; blksize_size[MAJOR_NR] = nbd_blksizes;
blk_size[MAJOR_NR] = nbd_sizes; blk_size[MAJOR_NR] = nbd_sizes;
blk_init_queue(BLK_DEFAULT_QUEUE(MAJOR_NR), do_nbd_request); blk_init_queue(BLK_DEFAULT_QUEUE(MAJOR_NR), do_nbd_request, &nbd_lock);
for (i = 0; i < MAX_NBD; i++) { for (i = 0; i < MAX_NBD; i++) {
nbd_dev[i].refcnt = 0; nbd_dev[i].refcnt = 0;
nbd_dev[i].file = NULL; nbd_dev[i].file = NULL;
......
...@@ -146,6 +146,8 @@ static int pcd_drive_count; ...@@ -146,6 +146,8 @@ static int pcd_drive_count;
#include <asm/uaccess.h> #include <asm/uaccess.h>
static spinlock_t pcd_lock;
#ifndef MODULE #ifndef MODULE
#include "setup.h" #include "setup.h"
...@@ -355,7 +357,7 @@ int pcd_init (void) /* preliminary initialisation */ ...@@ -355,7 +357,7 @@ int pcd_init (void) /* preliminary initialisation */
} }
} }
blk_init_queue(BLK_DEFAULT_QUEUE(MAJOR_NR), DEVICE_REQUEST); blk_init_queue(BLK_DEFAULT_QUEUE(MAJOR_NR), DEVICE_REQUEST, &pcd_lock);
read_ahead[MAJOR_NR] = 8; /* 8 sector (4kB) read ahead */ read_ahead[MAJOR_NR] = 8; /* 8 sector (4kB) read ahead */
for (i=0;i<PCD_UNITS;i++) pcd_blocksizes[i] = 1024; for (i=0;i<PCD_UNITS;i++) pcd_blocksizes[i] = 1024;
...@@ -821,11 +823,11 @@ static void pcd_start( void ) ...@@ -821,11 +823,11 @@ static void pcd_start( void )
if (pcd_command(unit,rd_cmd,2048,"read block")) { if (pcd_command(unit,rd_cmd,2048,"read block")) {
pcd_bufblk = -1; pcd_bufblk = -1;
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags); spin_lock_irqsave(&pcd_lock,saved_flags);
pcd_busy = 0; pcd_busy = 0;
end_request(0); end_request(0);
do_pcd_request(NULL); do_pcd_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags); spin_unlock_irqrestore(&pcd_lock,saved_flags);
return; return;
} }
...@@ -845,11 +847,11 @@ static void do_pcd_read( void ) ...@@ -845,11 +847,11 @@ static void do_pcd_read( void )
pcd_retries = 0; pcd_retries = 0;
pcd_transfer(); pcd_transfer();
if (!pcd_count) { if (!pcd_count) {
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags); spin_lock_irqsave(&pcd_lock,saved_flags);
end_request(1); end_request(1);
pcd_busy = 0; pcd_busy = 0;
do_pcd_request(NULL); do_pcd_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags); spin_unlock_irqrestore(&pcd_lock,saved_flags);
return; return;
} }
...@@ -868,19 +870,19 @@ static void do_pcd_read_drq( void ) ...@@ -868,19 +870,19 @@ static void do_pcd_read_drq( void )
pi_do_claimed(PI,pcd_start); pi_do_claimed(PI,pcd_start);
return; return;
} }
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags); spin_lock_irqsave(&pcd_lock,saved_flags);
pcd_busy = 0; pcd_busy = 0;
pcd_bufblk = -1; pcd_bufblk = -1;
end_request(0); end_request(0);
do_pcd_request(NULL); do_pcd_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags); spin_unlock_irqrestore(&pcd_lock,saved_flags);
return; return;
} }
do_pcd_read(); do_pcd_read();
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags); spin_lock_irqsave(&pcd_lock,saved_flags);
do_pcd_request(NULL); do_pcd_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags); spin_unlock_irqrestore(&pcd_lock,saved_flags);
} }
/* the audio_ioctl stuff is adapted from sr_ioctl.c */ /* the audio_ioctl stuff is adapted from sr_ioctl.c */
......
...@@ -164,6 +164,8 @@ static int pf_drive_count; ...@@ -164,6 +164,8 @@ static int pf_drive_count;
#include <asm/uaccess.h> #include <asm/uaccess.h>
static spinlock_t pf_spin_lock;
#ifndef MODULE #ifndef MODULE
#include "setup.h" #include "setup.h"
...@@ -358,7 +360,7 @@ int pf_init (void) /* preliminary initialisation */ ...@@ -358,7 +360,7 @@ int pf_init (void) /* preliminary initialisation */
return -1; return -1;
} }
q = BLK_DEFAULT_QUEUE(MAJOR_NR); q = BLK_DEFAULT_QUEUE(MAJOR_NR);
blk_init_queue(q, DEVICE_REQUEST); blk_init_queue(q, DEVICE_REQUEST, &pf_spin_lock);
blk_queue_max_segments(q, cluster); blk_queue_max_segments(q, cluster);
read_ahead[MAJOR_NR] = 8; /* 8 sector (4kB) read ahead */ read_ahead[MAJOR_NR] = 8; /* 8 sector (4kB) read ahead */
...@@ -876,9 +878,9 @@ static void pf_next_buf( int unit ) ...@@ -876,9 +878,9 @@ static void pf_next_buf( int unit )
{ long saved_flags; { long saved_flags;
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags); spin_lock_irqsave(&pf_spin_lock,saved_flags);
end_request(1); end_request(1);
if (!pf_run) { spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags); if (!pf_run) { spin_unlock_irqrestore(&pf_spin_lock,saved_flags);
return; return;
} }
...@@ -894,7 +896,7 @@ static void pf_next_buf( int unit ) ...@@ -894,7 +896,7 @@ static void pf_next_buf( int unit )
pf_count = CURRENT->current_nr_sectors; pf_count = CURRENT->current_nr_sectors;
pf_buf = CURRENT->buffer; pf_buf = CURRENT->buffer;
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags); spin_unlock_irqrestore(&pf_spin_lock,saved_flags);
} }
static void do_pf_read( void ) static void do_pf_read( void )
...@@ -918,11 +920,11 @@ static void do_pf_read_start( void ) ...@@ -918,11 +920,11 @@ static void do_pf_read_start( void )
pi_do_claimed(PI,do_pf_read_start); pi_do_claimed(PI,do_pf_read_start);
return; return;
} }
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags); spin_lock_irqsave(&pf_spin_lock,saved_flags);
end_request(0); end_request(0);
pf_busy = 0; pf_busy = 0;
do_pf_request(NULL); do_pf_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags); spin_unlock_irqrestore(&pf_spin_lock,saved_flags);
return; return;
} }
pf_mask = STAT_DRQ; pf_mask = STAT_DRQ;
...@@ -944,11 +946,11 @@ static void do_pf_read_drq( void ) ...@@ -944,11 +946,11 @@ static void do_pf_read_drq( void )
pi_do_claimed(PI,do_pf_read_start); pi_do_claimed(PI,do_pf_read_start);
return; return;
} }
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags); spin_lock_irqsave(&pf_spin_lock,saved_flags);
end_request(0); end_request(0);
pf_busy = 0; pf_busy = 0;
do_pf_request(NULL); do_pf_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags); spin_unlock_irqrestore(&pf_spin_lock,saved_flags);
return; return;
} }
pi_read_block(PI,pf_buf,512); pi_read_block(PI,pf_buf,512);
...@@ -959,11 +961,11 @@ static void do_pf_read_drq( void ) ...@@ -959,11 +961,11 @@ static void do_pf_read_drq( void )
if (!pf_count) pf_next_buf(unit); if (!pf_count) pf_next_buf(unit);
} }
pi_disconnect(PI); pi_disconnect(PI);
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags); spin_lock_irqsave(&pf_spin_lock,saved_flags);
end_request(1); end_request(1);
pf_busy = 0; pf_busy = 0;
do_pf_request(NULL); do_pf_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags); spin_unlock_irqrestore(&pf_spin_lock,saved_flags);
} }
static void do_pf_write( void ) static void do_pf_write( void )
...@@ -985,11 +987,11 @@ static void do_pf_write_start( void ) ...@@ -985,11 +987,11 @@ static void do_pf_write_start( void )
pi_do_claimed(PI,do_pf_write_start); pi_do_claimed(PI,do_pf_write_start);
return; return;
} }
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags); spin_lock_irqsave(&pf_spin_lock,saved_flags);
end_request(0); end_request(0);
pf_busy = 0; pf_busy = 0;
do_pf_request(NULL); do_pf_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags); spin_unlock_irqrestore(&pf_spin_lock,saved_flags);
return; return;
} }
...@@ -1002,11 +1004,11 @@ static void do_pf_write_start( void ) ...@@ -1002,11 +1004,11 @@ static void do_pf_write_start( void )
pi_do_claimed(PI,do_pf_write_start); pi_do_claimed(PI,do_pf_write_start);
return; return;
} }
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags); spin_lock_irqsave(&pf_spin_lock,saved_flags);
end_request(0); end_request(0);
pf_busy = 0; pf_busy = 0;
do_pf_request(NULL); do_pf_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags); spin_unlock_irqrestore(&pf_spin_lock,saved_flags);
return; return;
} }
pi_write_block(PI,pf_buf,512); pi_write_block(PI,pf_buf,512);
...@@ -1032,19 +1034,19 @@ static void do_pf_write_done( void ) ...@@ -1032,19 +1034,19 @@ static void do_pf_write_done( void )
pi_do_claimed(PI,do_pf_write_start); pi_do_claimed(PI,do_pf_write_start);
return; return;
} }
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags); spin_lock_irqsave(&pf_spin_lock,saved_flags);
end_request(0); end_request(0);
pf_busy = 0; pf_busy = 0;
do_pf_request(NULL); do_pf_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags); spin_unlock_irqrestore(&pf_spin_lock,saved_flags);
return; return;
} }
pi_disconnect(PI); pi_disconnect(PI);
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags); spin_lock_irqsave(&pf_spin_lock,saved_flags);
end_request(1); end_request(1);
pf_busy = 0; pf_busy = 0;
do_pf_request(NULL); do_pf_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags); spin_unlock_irqrestore(&pf_spin_lock,saved_flags);
} }
/* end of pf.c */ /* end of pf.c */
......
...@@ -189,6 +189,8 @@ int __init ps2esdi_init(void) ...@@ -189,6 +189,8 @@ int __init ps2esdi_init(void)
return 0; return 0;
} /* ps2esdi_init */ } /* ps2esdi_init */
module_init(ps2esdi_init);
#ifdef MODULE #ifdef MODULE
static int cyl[MAX_HD] = {-1,-1}; static int cyl[MAX_HD] = {-1,-1};
......
...@@ -597,7 +597,7 @@ static void ide_init_queue(ide_drive_t *drive) ...@@ -597,7 +597,7 @@ static void ide_init_queue(ide_drive_t *drive)
int max_sectors; int max_sectors;
q->queuedata = HWGROUP(drive); q->queuedata = HWGROUP(drive);
blk_init_queue(q, do_ide_request); blk_init_queue(q, do_ide_request, &ide_lock);
blk_queue_segment_boundary(q, 0xffff); blk_queue_segment_boundary(q, 0xffff);
/* IDE can do up to 128K per request, pdc4030 needs smaller limit */ /* IDE can do up to 128K per request, pdc4030 needs smaller limit */
......
...@@ -177,8 +177,6 @@ static int initializing; /* set while initializing built-in drivers */ ...@@ -177,8 +177,6 @@ static int initializing; /* set while initializing built-in drivers */
/* /*
* protects global structures etc, we want to split this into per-hwgroup * protects global structures etc, we want to split this into per-hwgroup
* instead. * instead.
*
* anti-deadlock ordering: ide_lock -> DRIVE_LOCK
*/ */
spinlock_t ide_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; spinlock_t ide_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED;
...@@ -583,11 +581,9 @@ inline int __ide_end_request(ide_hwgroup_t *hwgroup, int uptodate, int nr_secs) ...@@ -583,11 +581,9 @@ inline int __ide_end_request(ide_hwgroup_t *hwgroup, int uptodate, int nr_secs)
if (!end_that_request_first(rq, uptodate, nr_secs)) { if (!end_that_request_first(rq, uptodate, nr_secs)) {
add_blkdev_randomness(MAJOR(rq->rq_dev)); add_blkdev_randomness(MAJOR(rq->rq_dev));
spin_lock(DRIVE_LOCK(drive));
blkdev_dequeue_request(rq); blkdev_dequeue_request(rq);
hwgroup->rq = NULL; hwgroup->rq = NULL;
end_that_request_last(rq); end_that_request_last(rq);
spin_unlock(DRIVE_LOCK(drive));
ret = 0; ret = 0;
} }
...@@ -900,11 +896,9 @@ void ide_end_drive_cmd (ide_drive_t *drive, byte stat, byte err) ...@@ -900,11 +896,9 @@ void ide_end_drive_cmd (ide_drive_t *drive, byte stat, byte err)
} }
} }
spin_lock(DRIVE_LOCK(drive));
blkdev_dequeue_request(rq); blkdev_dequeue_request(rq);
HWGROUP(drive)->rq = NULL; HWGROUP(drive)->rq = NULL;
end_that_request_last(rq); end_that_request_last(rq);
spin_unlock(DRIVE_LOCK(drive));
spin_unlock_irqrestore(&ide_lock, flags); spin_unlock_irqrestore(&ide_lock, flags);
} }
...@@ -1368,7 +1362,7 @@ static inline ide_drive_t *choose_drive (ide_hwgroup_t *hwgroup) ...@@ -1368,7 +1362,7 @@ static inline ide_drive_t *choose_drive (ide_hwgroup_t *hwgroup)
/* /*
* Issue a new request to a drive from hwgroup * Issue a new request to a drive from hwgroup
* Caller must have already done spin_lock_irqsave(DRIVE_LOCK(drive), ...) * Caller must have already done spin_lock_irqsave(&ide_lock, ...)
* *
* A hwgroup is a serialized group of IDE interfaces. Usually there is * A hwgroup is a serialized group of IDE interfaces. Usually there is
* exactly one hwif (interface) per hwgroup, but buggy controllers (eg. CMD640) * exactly one hwif (interface) per hwgroup, but buggy controllers (eg. CMD640)
...@@ -1456,9 +1450,7 @@ static void ide_do_request(ide_hwgroup_t *hwgroup, int masked_irq) ...@@ -1456,9 +1450,7 @@ static void ide_do_request(ide_hwgroup_t *hwgroup, int masked_irq)
/* /*
* just continuing an interrupted request maybe * just continuing an interrupted request maybe
*/ */
spin_lock(DRIVE_LOCK(drive));
rq = hwgroup->rq = elv_next_request(&drive->queue); rq = hwgroup->rq = elv_next_request(&drive->queue);
spin_unlock(DRIVE_LOCK(drive));
/* /*
* Some systems have trouble with IDE IRQs arriving while * Some systems have trouble with IDE IRQs arriving while
...@@ -1496,19 +1488,7 @@ request_queue_t *ide_get_queue (kdev_t dev) ...@@ -1496,19 +1488,7 @@ request_queue_t *ide_get_queue (kdev_t dev)
*/ */
void do_ide_request(request_queue_t *q) void do_ide_request(request_queue_t *q)
{ {
unsigned long flags;
/*
* release queue lock, grab IDE global lock and restore when
* we leave...
*/
spin_unlock(&q->queue_lock);
spin_lock_irqsave(&ide_lock, flags);
ide_do_request(q->queuedata, 0); ide_do_request(q->queuedata, 0);
spin_unlock_irqrestore(&ide_lock, flags);
spin_lock(&q->queue_lock);
} }
/* /*
...@@ -1875,7 +1855,6 @@ int ide_do_drive_cmd (ide_drive_t *drive, struct request *rq, ide_action_t actio ...@@ -1875,7 +1855,6 @@ int ide_do_drive_cmd (ide_drive_t *drive, struct request *rq, ide_action_t actio
if (action == ide_wait) if (action == ide_wait)
rq->waiting = &wait; rq->waiting = &wait;
spin_lock_irqsave(&ide_lock, flags); spin_lock_irqsave(&ide_lock, flags);
spin_lock(DRIVE_LOCK(drive));
if (blk_queue_empty(&drive->queue) || action == ide_preempt) { if (blk_queue_empty(&drive->queue) || action == ide_preempt) {
if (action == ide_preempt) if (action == ide_preempt)
hwgroup->rq = NULL; hwgroup->rq = NULL;
...@@ -1886,7 +1865,6 @@ int ide_do_drive_cmd (ide_drive_t *drive, struct request *rq, ide_action_t actio ...@@ -1886,7 +1865,6 @@ int ide_do_drive_cmd (ide_drive_t *drive, struct request *rq, ide_action_t actio
queue_head = queue_head->next; queue_head = queue_head->next;
} }
q->elevator.elevator_add_req_fn(q, rq, queue_head); q->elevator.elevator_add_req_fn(q, rq, queue_head);
spin_unlock(DRIVE_LOCK(drive));
ide_do_request(hwgroup, 0); ide_do_request(hwgroup, 0);
spin_unlock_irqrestore(&ide_lock, flags); spin_unlock_irqrestore(&ide_lock, flags);
if (action == ide_wait) { if (action == ide_wait) {
......
...@@ -189,7 +189,7 @@ static mdk_personality_t linear_personality= ...@@ -189,7 +189,7 @@ static mdk_personality_t linear_personality=
status: linear_status, status: linear_status,
}; };
static int md__init linear_init (void) static int __init linear_init (void)
{ {
return register_md_personality (LINEAR, &linear_personality); return register_md_personality (LINEAR, &linear_personality);
} }
......
...@@ -334,7 +334,7 @@ static mdk_personality_t raid0_personality= ...@@ -334,7 +334,7 @@ static mdk_personality_t raid0_personality=
status: raid0_status, status: raid0_status,
}; };
static int md__init raid0_init (void) static int __init raid0_init (void)
{ {
return register_md_personality (RAID0, &raid0_personality); return register_md_personality (RAID0, &raid0_personality);
} }
......
2001-12-11 Jeff Garzik <jgarzik@mandrakesoft.com>
* eeprom.c, timer.c, media.c, tulip_core.c:
Remove 21040 and 21041 chip support.
2001-11-13 David S. Miller <davem@redhat.com> 2001-11-13 David S. Miller <davem@redhat.com>
* tulip_core.c (tulip_mwi_config): Kill unused label early_out. * tulip_core.c (tulip_mwi_config): Kill unused label early_out.
......
...@@ -136,23 +136,6 @@ void __devinit tulip_parse_eeprom(struct net_device *dev) ...@@ -136,23 +136,6 @@ void __devinit tulip_parse_eeprom(struct net_device *dev)
subsequent_board: subsequent_board:
if (ee_data[27] == 0) { /* No valid media table. */ if (ee_data[27] == 0) { /* No valid media table. */
} else if (tp->chip_id == DC21041) {
unsigned char *p = (void *)ee_data + ee_data[27 + controller_index*3];
int media = get_u16(p);
int count = p[2];
p += 3;
printk(KERN_INFO "%s: 21041 Media table, default media %4.4x (%s).\n",
dev->name, media,
media & 0x0800 ? "Autosense" : medianame[media & MEDIA_MASK]);
for (i = 0; i < count; i++) {
unsigned char media_block = *p++;
int media_code = media_block & MEDIA_MASK;
if (media_block & 0x40)
p += 6;
printk(KERN_INFO "%s: 21041 media #%d, %s.\n",
dev->name, media_code, medianame[media_code]);
}
} else { } else {
unsigned char *p = (void *)ee_data + ee_data[27]; unsigned char *p = (void *)ee_data + ee_data[27];
unsigned char csr12dir = 0; unsigned char csr12dir = 0;
......
...@@ -21,12 +21,6 @@ ...@@ -21,12 +21,6 @@
#include "tulip.h" #include "tulip.h"
/* This is a mysterious value that can be written to CSR11 in the 21040 (only)
to support a pre-NWay full-duplex signaling mechanism using short frames.
No one knows what it should be, but if left at its default value some
10base2(!) packets trigger a full-duplex-request interrupt. */
#define FULL_DUPLEX_MAGIC 0x6969
/* The maximum data clock rate is 2.5 Mhz. The minimum timing is usually /* The maximum data clock rate is 2.5 Mhz. The minimum timing is usually
met by back-to-back PCI I/O cycles, but we insert a delay to avoid met by back-to-back PCI I/O cycles, but we insert a delay to avoid
"overclocking" issues or future 66Mhz PCI. */ "overclocking" issues or future 66Mhz PCI. */
...@@ -326,17 +320,6 @@ void tulip_select_media(struct net_device *dev, int startup) ...@@ -326,17 +320,6 @@ void tulip_select_media(struct net_device *dev, int startup)
printk(KERN_DEBUG "%s: Using media type %s, CSR12 is %2.2x.\n", printk(KERN_DEBUG "%s: Using media type %s, CSR12 is %2.2x.\n",
dev->name, medianame[dev->if_port], dev->name, medianame[dev->if_port],
inl(ioaddr + CSR12) & 0xff); inl(ioaddr + CSR12) & 0xff);
} else if (tp->chip_id == DC21041) {
int port = dev->if_port <= 4 ? dev->if_port : 0;
if (tulip_debug > 1)
printk(KERN_DEBUG "%s: 21041 using media %s, CSR12 is %4.4x.\n",
dev->name, medianame[port == 3 ? 12: port],
inl(ioaddr + CSR12));
outl(0x00000000, ioaddr + CSR13); /* Reset the serial interface */
outl(t21041_csr14[port], ioaddr + CSR14);
outl(t21041_csr15[port], ioaddr + CSR15);
outl(t21041_csr13[port], ioaddr + CSR13);
new_csr6 = 0x80020000;
} else if (tp->chip_id == LC82C168) { } else if (tp->chip_id == LC82C168) {
if (startup && ! tp->medialock) if (startup && ! tp->medialock)
dev->if_port = tp->mii_cnt ? 11 : 0; dev->if_port = tp->mii_cnt ? 11 : 0;
...@@ -363,26 +346,6 @@ void tulip_select_media(struct net_device *dev, int startup) ...@@ -363,26 +346,6 @@ void tulip_select_media(struct net_device *dev, int startup)
new_csr6 = 0x00420000; new_csr6 = 0x00420000;
outl(0x1F078, ioaddr + 0xB8); outl(0x1F078, ioaddr + 0xB8);
} }
} else if (tp->chip_id == DC21040) { /* 21040 */
/* Turn on the xcvr interface. */
int csr12 = inl(ioaddr + CSR12);
if (tulip_debug > 1)
printk(KERN_DEBUG "%s: 21040 media type is %s, CSR12 is %2.2x.\n",
dev->name, medianame[dev->if_port], csr12);
if (tulip_media_cap[dev->if_port] & MediaAlwaysFD)
tp->full_duplex = 1;
new_csr6 = 0x20000;
/* Set the full duplux match frame. */
outl(FULL_DUPLEX_MAGIC, ioaddr + CSR11);
outl(0x00000000, ioaddr + CSR13); /* Reset the serial interface */
if (t21040_csr13[dev->if_port] & 8) {
outl(0x0705, ioaddr + CSR14);
outl(0x0006, ioaddr + CSR15);
} else {
outl(0xffff, ioaddr + CSR14);
outl(0x0000, ioaddr + CSR15);
}
outl(0x8f01 | t21040_csr13[dev->if_port], ioaddr + CSR13);
} else { /* Unknown chip type with no media table. */ } else { /* Unknown chip type with no media table. */
if (tp->default_port == 0) if (tp->default_port == 0)
dev->if_port = tp->mii_cnt ? 11 : 3; dev->if_port = tp->mii_cnt ? 11 : 3;
......
...@@ -33,60 +33,6 @@ void tulip_timer(unsigned long data) ...@@ -33,60 +33,6 @@ void tulip_timer(unsigned long data)
inl(ioaddr + CSR14), inl(ioaddr + CSR15)); inl(ioaddr + CSR14), inl(ioaddr + CSR15));
} }
switch (tp->chip_id) { switch (tp->chip_id) {
case DC21040:
if (!tp->medialock && csr12 & 0x0002) { /* Network error */
printk(KERN_INFO "%s: No link beat found.\n",
dev->name);
dev->if_port = (dev->if_port == 2 ? 0 : 2);
tulip_select_media(dev, 0);
dev->trans_start = jiffies;
}
break;
case DC21041:
if (tulip_debug > 2)
printk(KERN_DEBUG "%s: 21041 media tick CSR12 %8.8x.\n",
dev->name, csr12);
if (tp->medialock) break;
switch (dev->if_port) {
case 0: case 3: case 4:
if (csr12 & 0x0004) { /*LnkFail */
/* 10baseT is dead. Check for activity on alternate port. */
tp->mediasense = 1;
if (csr12 & 0x0200)
dev->if_port = 2;
else
dev->if_port = 1;
printk(KERN_INFO "%s: No 21041 10baseT link beat, Media switched to %s.\n",
dev->name, medianame[dev->if_port]);
outl(0, ioaddr + CSR13); /* Reset */
outl(t21041_csr14[dev->if_port], ioaddr + CSR14);
outl(t21041_csr15[dev->if_port], ioaddr + CSR15);
outl(t21041_csr13[dev->if_port], ioaddr + CSR13);
next_tick = 10*HZ; /* 2.4 sec. */
} else
next_tick = 30*HZ;
break;
case 1: /* 10base2 */
case 2: /* AUI */
if (csr12 & 0x0100) {
next_tick = (30*HZ); /* 30 sec. */
tp->mediasense = 0;
} else if ((csr12 & 0x0004) == 0) {
printk(KERN_INFO "%s: 21041 media switched to 10baseT.\n",
dev->name);
dev->if_port = 0;
tulip_select_media(dev, 0);
next_tick = (24*HZ)/10; /* 2.4 sec. */
} else if (tp->mediasense || (csr12 & 0x0002)) {
dev->if_port = 3 - dev->if_port; /* Swap ports. */
tulip_select_media(dev, 0);
next_tick = 20*HZ;
} else {
next_tick = 20*HZ;
}
break;
}
break;
case DC21140: case DC21140:
case DC21142: case DC21142:
case MX98713: case MX98713:
......
...@@ -15,8 +15,8 @@ ...@@ -15,8 +15,8 @@
*/ */
#define DRV_NAME "tulip" #define DRV_NAME "tulip"
#define DRV_VERSION "0.9.15-pre9" #define DRV_VERSION "1.1.0"
#define DRV_RELDATE "Nov 6, 2001" #define DRV_RELDATE "Dec 11, 2001"
#include <linux/config.h> #include <linux/config.h>
#include <linux/module.h> #include <linux/module.h>
...@@ -130,12 +130,8 @@ int tulip_debug = 1; ...@@ -130,12 +130,8 @@ int tulip_debug = 1;
*/ */
struct tulip_chip_table tulip_tbl[] = { struct tulip_chip_table tulip_tbl[] = {
/* DC21040 */ { }, /* placeholder for array, slot unused currently */
{ "Digital DC21040 Tulip", 128, 0x0001ebef, 0, tulip_timer }, { }, /* placeholder for array, slot unused currently */
/* DC21041 */
{ "Digital DC21041 Tulip", 128, 0x0001ebef,
HAS_MEDIA_TABLE | HAS_NWAY, tulip_timer },
/* DC21140 */ /* DC21140 */
{ "Digital DS21140 Tulip", 128, 0x0001ebef, { "Digital DS21140 Tulip", 128, 0x0001ebef,
...@@ -192,8 +188,6 @@ struct tulip_chip_table tulip_tbl[] = { ...@@ -192,8 +188,6 @@ struct tulip_chip_table tulip_tbl[] = {
static struct pci_device_id tulip_pci_tbl[] __devinitdata = { static struct pci_device_id tulip_pci_tbl[] __devinitdata = {
{ 0x1011, 0x0002, PCI_ANY_ID, PCI_ANY_ID, 0, 0, DC21040 },
{ 0x1011, 0x0014, PCI_ANY_ID, PCI_ANY_ID, 0, 0, DC21041 },
{ 0x1011, 0x0009, PCI_ANY_ID, PCI_ANY_ID, 0, 0, DC21140 }, { 0x1011, 0x0009, PCI_ANY_ID, PCI_ANY_ID, 0, 0, DC21140 },
{ 0x1011, 0x0019, PCI_ANY_ID, PCI_ANY_ID, 0, 0, DC21143 }, { 0x1011, 0x0019, PCI_ANY_ID, PCI_ANY_ID, 0, 0, DC21143 },
{ 0x11AD, 0x0002, PCI_ANY_ID, PCI_ANY_ID, 0, 0, LC82C168 }, { 0x11AD, 0x0002, PCI_ANY_ID, PCI_ANY_ID, 0, 0, LC82C168 },
...@@ -224,19 +218,6 @@ MODULE_DEVICE_TABLE(pci, tulip_pci_tbl); ...@@ -224,19 +218,6 @@ MODULE_DEVICE_TABLE(pci, tulip_pci_tbl);
/* A full-duplex map for media types. */ /* A full-duplex map for media types. */
const char tulip_media_cap[32] = const char tulip_media_cap[32] =
{0,0,0,16, 3,19,16,24, 27,4,7,5, 0,20,23,20, 28,31,0,0, }; {0,0,0,16, 3,19,16,24, 27,4,7,5, 0,20,23,20, 28,31,0,0, };
u8 t21040_csr13[] = {2,0x0C,8,4, 4,0,0,0, 0,0,0,0, 4,0,0,0};
/* 21041 transceiver register settings: 10-T, 10-2, AUI, 10-T, 10T-FD*/
u16 t21041_csr13[] = {
csr13_mask_10bt, /* 10-T */
csr13_mask_auibnc, /* 10-2 */
csr13_mask_auibnc, /* AUI */
csr13_mask_10bt, /* 10-T */
csr13_mask_10bt, /* 10T-FD */
};
u16 t21041_csr14[] = { 0xFFFF, 0xF7FD, 0xF7FD, 0x7F3F, 0x7F3D, };
u16 t21041_csr15[] = { 0x0008, 0x0006, 0x000E, 0x0008, 0x0008, };
static void tulip_tx_timeout(struct net_device *dev); static void tulip_tx_timeout(struct net_device *dev);
static void tulip_init_ring(struct net_device *dev); static void tulip_init_ring(struct net_device *dev);
...@@ -388,19 +369,6 @@ static void tulip_up(struct net_device *dev) ...@@ -388,19 +369,6 @@ static void tulip_up(struct net_device *dev)
outl(0x0008, ioaddr + CSR15); outl(0x0008, ioaddr + CSR15);
} }
tulip_select_media(dev, 1); tulip_select_media(dev, 1);
} else if (tp->chip_id == DC21041) {
dev->if_port = 0;
tp->nway = tp->mediasense = 1;
tp->nwayset = tp->lpar = 0;
outl(0x00000000, ioaddr + CSR13);
outl(0xFFFFFFFF, ioaddr + CSR14);
outl(0x00000008, ioaddr + CSR15); /* Listen on AUI also. */
tp->csr6 = 0x80020000;
if (tp->sym_advertise & 0x0040)
tp->csr6 |= FullDuplex;
outl(tp->csr6, ioaddr + CSR6);
outl(0x0000EF01, ioaddr + CSR13);
} else if (tp->chip_id == DC21142) { } else if (tp->chip_id == DC21142) {
if (tp->mii_cnt) { if (tp->mii_cnt) {
tulip_select_media(dev, 1); tulip_select_media(dev, 1);
...@@ -538,33 +506,6 @@ static void tulip_tx_timeout(struct net_device *dev) ...@@ -538,33 +506,6 @@ static void tulip_tx_timeout(struct net_device *dev)
if (tulip_debug > 1) if (tulip_debug > 1)
printk(KERN_WARNING "%s: Transmit timeout using MII device.\n", printk(KERN_WARNING "%s: Transmit timeout using MII device.\n",
dev->name); dev->name);
} else if (tp->chip_id == DC21040) {
if ( !tp->medialock && inl(ioaddr + CSR12) & 0x0002) {
dev->if_port = (dev->if_port == 2 ? 0 : 2);
printk(KERN_INFO "%s: 21040 transmit timed out, switching to "
"%s.\n",
dev->name, medianame[dev->if_port]);
tulip_select_media(dev, 0);
}
goto out;
} else if (tp->chip_id == DC21041) {
int csr12 = inl(ioaddr + CSR12);
printk(KERN_WARNING "%s: 21041 transmit timed out, status %8.8x, "
"CSR12 %8.8x, CSR13 %8.8x, CSR14 %8.8x, resetting...\n",
dev->name, inl(ioaddr + CSR5), csr12,
inl(ioaddr + CSR13), inl(ioaddr + CSR14));
tp->mediasense = 1;
if ( ! tp->medialock) {
if (dev->if_port == 1 || dev->if_port == 2)
if (csr12 & 0x0004) {
dev->if_port = 2 - dev->if_port;
} else
dev->if_port = 0;
else
dev->if_port = 1;
tulip_select_media(dev, 0);
}
} else if (tp->chip_id == DC21140 || tp->chip_id == DC21142 } else if (tp->chip_id == DC21140 || tp->chip_id == DC21142
|| tp->chip_id == MX98713 || tp->chip_id == COMPEX9881 || tp->chip_id == MX98713 || tp->chip_id == COMPEX9881
|| tp->chip_id == DM910X) { || tp->chip_id == DM910X) {
...@@ -636,7 +577,6 @@ static void tulip_tx_timeout(struct net_device *dev) ...@@ -636,7 +577,6 @@ static void tulip_tx_timeout(struct net_device *dev)
tp->stats.tx_errors++; tp->stats.tx_errors++;
out:
spin_unlock_irqrestore (&tp->lock, flags); spin_unlock_irqrestore (&tp->lock, flags);
dev->trans_start = jiffies; dev->trans_start = jiffies;
netif_wake_queue (dev); netif_wake_queue (dev);
...@@ -802,10 +742,6 @@ static void tulip_down (struct net_device *dev) ...@@ -802,10 +742,6 @@ static void tulip_down (struct net_device *dev)
/* release any unconsumed transmit buffers */ /* release any unconsumed transmit buffers */
tulip_clean_tx_ring(tp); tulip_clean_tx_ring(tp);
/* 21040 -- Leave the card in 10baseT state. */
if (tp->chip_id == DC21040)
outl (0x00000004, ioaddr + CSR13);
if (inl (ioaddr + CSR6) != 0xffffffff) if (inl (ioaddr + CSR6) != 0xffffffff)
tp->stats.rx_missed_errors += inl (ioaddr + CSR8) & 0xffff; tp->stats.rx_missed_errors += inl (ioaddr + CSR8) & 0xffff;
...@@ -966,7 +902,6 @@ static int private_ioctl (struct net_device *dev, struct ifreq *rq, int cmd) ...@@ -966,7 +902,6 @@ static int private_ioctl (struct net_device *dev, struct ifreq *rq, int cmd)
0x1848 + 0x1848 +
((csr12&0x7000) == 0x5000 ? 0x20 : 0) + ((csr12&0x7000) == 0x5000 ? 0x20 : 0) +
((csr12&0x06) == 6 ? 0 : 4); ((csr12&0x06) == 6 ? 0 : 4);
if (tp->chip_id != DC21041)
data->val_out |= 0x6048; data->val_out |= 0x6048;
break; break;
case 4: case 4:
...@@ -974,7 +909,6 @@ static int private_ioctl (struct net_device *dev, struct ifreq *rq, int cmd) ...@@ -974,7 +909,6 @@ static int private_ioctl (struct net_device *dev, struct ifreq *rq, int cmd)
data->val_out = data->val_out =
((inl(ioaddr + CSR6) >> 3) & 0x0040) + ((inl(ioaddr + CSR6) >> 3) & 0x0040) +
((csr14 >> 1) & 0x20) + 1; ((csr14 >> 1) & 0x20) + 1;
if (tp->chip_id != DC21041)
data->val_out |= ((csr14 >> 9) & 0x03C0); data->val_out |= ((csr14 >> 9) & 0x03C0);
break; break;
case 5: data->val_out = tp->lpar; break; case 5: data->val_out = tp->lpar; break;
...@@ -1358,7 +1292,6 @@ static int __devinit tulip_init_one (struct pci_dev *pdev, ...@@ -1358,7 +1292,6 @@ static int __devinit tulip_init_one (struct pci_dev *pdev,
long ioaddr; long ioaddr;
static int board_idx = -1; static int board_idx = -1;
int chip_idx = ent->driver_data; int chip_idx = ent->driver_data;
unsigned int t2104x_mode = 0;
unsigned int eeprom_missing = 0; unsigned int eeprom_missing = 0;
unsigned int force_csr0 = 0; unsigned int force_csr0 = 0;
...@@ -1527,31 +1460,12 @@ static int __devinit tulip_init_one (struct pci_dev *pdev, ...@@ -1527,31 +1460,12 @@ static int __devinit tulip_init_one (struct pci_dev *pdev,
/* Clear the missed-packet counter. */ /* Clear the missed-packet counter. */
inl(ioaddr + CSR8); inl(ioaddr + CSR8);
if (chip_idx == DC21041) {
if (inl(ioaddr + CSR9) & 0x8000) {
chip_idx = DC21040;
t2104x_mode = 1;
} else {
t2104x_mode = 2;
}
}
/* The station address ROM is read byte serially. The register must /* The station address ROM is read byte serially. The register must
be polled, waiting for the value to be read bit serially from the be polled, waiting for the value to be read bit serially from the
EEPROM. EEPROM.
*/ */
sum = 0; sum = 0;
if (chip_idx == DC21040) { if (chip_idx == LC82C168) {
outl(0, ioaddr + CSR9); /* Reset the pointer with a dummy write. */
for (i = 0; i < 6; i++) {
int value, boguscnt = 100000;
do
value = inl(ioaddr + CSR9);
while (value < 0 && --boguscnt > 0);
dev->dev_addr[i] = value;
sum += value & 0xff;
}
} else if (chip_idx == LC82C168) {
for (i = 0; i < 3; i++) { for (i = 0; i < 3; i++) {
int value, boguscnt = 100000; int value, boguscnt = 100000;
outl(0x600 | i, ioaddr + 0x98); outl(0x600 | i, ioaddr + 0x98);
...@@ -1719,10 +1633,6 @@ static int __devinit tulip_init_one (struct pci_dev *pdev, ...@@ -1719,10 +1633,6 @@ static int __devinit tulip_init_one (struct pci_dev *pdev,
dev->name, tulip_tbl[chip_idx].chip_name, chip_rev, ioaddr); dev->name, tulip_tbl[chip_idx].chip_name, chip_rev, ioaddr);
pci_set_drvdata(pdev, dev); pci_set_drvdata(pdev, dev);
if (t2104x_mode == 1)
printk(" 21040 compatible mode,");
else if (t2104x_mode == 2)
printk(" 21041 mode,");
if (eeprom_missing) if (eeprom_missing)
printk(" EEPROM not present,"); printk(" EEPROM not present,");
for (i = 0; i < 6; i++) for (i = 0; i < 6; i++)
...@@ -1731,26 +1641,13 @@ static int __devinit tulip_init_one (struct pci_dev *pdev, ...@@ -1731,26 +1641,13 @@ static int __devinit tulip_init_one (struct pci_dev *pdev,
if (tp->chip_id == PNIC2) if (tp->chip_id == PNIC2)
tp->link_change = pnic2_lnk_change; tp->link_change = pnic2_lnk_change;
else if ((tp->flags & HAS_NWAY) || tp->chip_id == DC21041) else if (tp->flags & HAS_NWAY)
tp->link_change = t21142_lnk_change; tp->link_change = t21142_lnk_change;
else if (tp->flags & HAS_PNICNWAY) else if (tp->flags & HAS_PNICNWAY)
tp->link_change = pnic_lnk_change; tp->link_change = pnic_lnk_change;
/* Reset the xcvr interface and turn on heartbeat. */ /* Reset the xcvr interface and turn on heartbeat. */
switch (chip_idx) { switch (chip_idx) {
case DC21041:
if (tp->sym_advertise == 0)
tp->sym_advertise = 0x0061;
outl(0x00000000, ioaddr + CSR13);
outl(0xFFFFFFFF, ioaddr + CSR14);
outl(0x00000008, ioaddr + CSR15); /* Listen on AUI also. */
outl(inl(ioaddr + CSR6) | csr6_fd, ioaddr + CSR6);
outl(0x0000EF01, ioaddr + CSR13);
break;
case DC21040:
outl(0x00000000, ioaddr + CSR13);
outl(0x00000004, ioaddr + CSR13);
break;
case DC21140: case DC21140:
case DM910X: case DM910X:
default: default:
......
...@@ -13,7 +13,7 @@ int eata2x_abort(Scsi_Cmnd *); ...@@ -13,7 +13,7 @@ int eata2x_abort(Scsi_Cmnd *);
int eata2x_reset(Scsi_Cmnd *); int eata2x_reset(Scsi_Cmnd *);
int eata2x_biosparam(Disk *, kdev_t, int *); int eata2x_biosparam(Disk *, kdev_t, int *);
#define EATA_VERSION "6.05.00" #define EATA_VERSION "7.00.00"
#define EATA { \ #define EATA { \
name: "EATA/DMA 2.0x rev. " EATA_VERSION " ", \ name: "EATA/DMA 2.0x rev. " EATA_VERSION " ", \
......
...@@ -183,7 +183,7 @@ void scsi_initialize_queue(Scsi_Device * SDpnt, struct Scsi_Host * SHpnt) ...@@ -183,7 +183,7 @@ void scsi_initialize_queue(Scsi_Device * SDpnt, struct Scsi_Host * SHpnt)
request_queue_t *q = &SDpnt->request_queue; request_queue_t *q = &SDpnt->request_queue;
int max_segments = SHpnt->sg_tablesize; int max_segments = SHpnt->sg_tablesize;
blk_init_queue(q, scsi_request_fn); blk_init_queue(q, scsi_request_fn, &SHpnt->host_lock);
q->queuedata = (void *) SDpnt; q->queuedata = (void *) SDpnt;
#ifdef DMA_CHUNK_SIZE #ifdef DMA_CHUNK_SIZE
......
...@@ -1254,9 +1254,7 @@ STATIC void scsi_restart_operations(struct Scsi_Host *host) ...@@ -1254,9 +1254,7 @@ STATIC void scsi_restart_operations(struct Scsi_Host *host)
break; break;
} }
spin_lock(&q->queue_lock);
q->request_fn(q); q->request_fn(q);
spin_unlock(&q->queue_lock);
} }
spin_unlock_irqrestore(&host->host_lock, flags); spin_unlock_irqrestore(&host->host_lock, flags);
} }
......