Commit 3d0affc8 authored by James Bottomley, committed by James Bottomley

i2o rewrite

From: Markus Lidel <Markus.Lidel@shadowconnect.com>

generic:
- split i2o_core into several files, grouped by same function
- I2O devices are now registered as devices and show up in sysfs
- the various I2O OSM's (e.g. i2o_scsi) now register in the I2O core
   and also use the 2.6 driver mechanism.
- I2O messages will be created in the message frame instead of creating
   it in local memory and copying it over later on.
- context list for 64 pointer to 32 context conversion now uses a
   double linked list

PCI:
- driver now registers as a PCI device driver and uses probe function to
   get the possible controllers. (needed for hotplugging)
- converted DMA handling from pci_* to generic dma_* functions

Block OSM:
- use one request queue per I2O block device instead of one per
   controller
- I2O block devices and queues are allocated dynamically and therefore
   no more limit of block devices

SCSI OSM:
- corrected bug in SCSI reply function which caused the memory to be
   freed before the done function was called.
- one I2O controller registers as one scsi host instead of one scsi host
   per channel
- no more ch,id,lun => tid mapping table

Config OSM:
- added ioctl32 for passthru and getiops.
- removed ioctl_html

Documentation:
- removed TODO entries from README
- moved docs under Documentation/i2o
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
parent 8ef4b795
Linux I2O Support (c) Copyright 1999 Red Hat Software
and others.
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either version
2 of the License, or (at your option) any later version.
AUTHORS (so far)
Alan Cox, Building Number Three Ltd.
Core code, SCSI and Block OSMs
Steve Ralston, LSI Logic Corp.
Debugging SCSI and Block OSM
Deepak Saxena, Intel Corp.
Various core/block extensions
/proc interface, bug fixes
Ioctl interfaces for control
Debugging LAN OSM
Philip Rumpf
Fixed assorted dumb SMP locking bugs
Juha Sievanen, University of Helsinki Finland
LAN OSM code
/proc interface to LAN class
Bug fixes
Core code extensions
Auvo Häkkinen, University of Helsinki Finland
LAN OSM code
/proc interface to LAN class
Bug fixes
Core code extensions
Taneli Vähäkangas, University of Helsinki Finland
Fixes to i2o_config
CREDITS
This work was made possible by
Red Hat Software
Funding for the Building #3 part of the project
Symbios Logic (Now LSI)
Host adapters, hints, known to work platforms when I hit
compatibility problems
BoxHill Corporation
Loan of initial FibreChannel disk array used for development work.
European Commission
Funding the work done by the University of Helsinki
SysKonnect
Loan of FDDI and Gigabit Ethernet cards
ASUSTeK
Loan of I2O motherboard
Linux I2O User Space Interface
rev 0.3 - 04/20/99
=============================================================================
Originally written by Deepak Saxena(deepak@plexity.net)
Currently maintained by Deepak Saxena(deepak@plexity.net)
=============================================================================
I. Introduction
The Linux I2O subsystem provides a set of ioctl() commands that can be
utilized by user space applications to communicate with IOPs and devices
on individual IOPs. This document defines the specific ioctl() commands
that are available to the user and provides examples of their uses.
This document assumes the reader is familiar with or has access to the
I2O specification as no I2O message parameters are outlined. For information
on the specification, see http://www.i2osig.org
This document and the I2O user space interface are currently maintained
by Deepak Saxena. Please send all comments, errata, and bug fixes to
deepak@csociety.purdue.edu
II. IOP Access
Access to the I2O subsystem is provided through the device file named
/dev/i2o/ctl. This file is a character file with major number 10 and minor
number 166. It can be created through the following command:
mknod /dev/i2o/ctl c 10 166
III. Determining the IOP Count
SYNOPSIS
ioctl(fd, I2OGETIOPS, int *count);
u8 count[MAX_I2O_CONTROLLERS];
DESCRIPTION
This function returns the system's active IOP table. count should
point to a buffer containing MAX_I2O_CONTROLLERS entries. Upon
returning, each entry will contain a non-zero value if the given
IOP unit is active, and zero if it is inactive or non-existent.
RETURN VALUE
Returns 0 if no errors occur, and -1 otherwise. If an error occurs,
errno is set appropriately:
EFAULT Invalid user space pointer was passed
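As a hedged sketch of consuming the table this ioctl fills in (the MAX_I2O_CONTROLLERS value here is assumed for illustration, and the real call additionally needs an open /dev/i2o/ctl descriptor):

```c
#include <stddef.h>

#define MAX_I2O_CONTROLLERS 32	/* assumed value, for illustration only */

/* Count the active IOPs in the table filled in by I2OGETIOPS:
 * each non-zero entry marks an active IOP unit. */
static int count_active_iops(const unsigned char *table, size_t n)
{
	size_t i;
	int active = 0;

	for (i = 0; i < n; i++)
		if (table[i])
			active++;
	return active;
}
```

The non-zero entries also give the IOP unit numbers to pass as the iop field in the ioctls below.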
IV. Getting Hardware Resource Table
SYNOPSIS
ioctl(fd, I2OHRTGET, struct i2o_cmd_hrtlct *hrt);
struct i2o_cmd_hrtlct
{
u32 iop; /* IOP unit number */
void *resbuf; /* Buffer for result */
u32 *reslen; /* Buffer length in bytes */
};
DESCRIPTION
This function returns the Hardware Resource Table of the IOP specified
by hrt->iop in the buffer pointed to by hrt->resbuf. The actual size of
the data is written into *(hrt->reslen).
RETURNS
This function returns 0 if no errors occur. If an error occurs, -1
is returned and errno is set appropriately:
EFAULT Invalid user space pointer was passed
ENXIO Invalid IOP number
ENOBUFS Buffer not large enough. If this occurs, the required
buffer length is written into *(hrt->reslen)
V. Getting Logical Configuration Table
SYNOPSIS
ioctl(fd, I2OLCTGET, struct i2o_cmd_hrtlct *lct);
struct i2o_cmd_hrtlct
{
u32 iop; /* IOP unit number */
void *resbuf; /* Buffer for result */
u32 *reslen; /* Buffer length in bytes */
};
DESCRIPTION
This function returns the Logical Configuration Table of the IOP specified
by lct->iop in the buffer pointed to by lct->resbuf. The actual size of
the data is written into *(lct->reslen).
RETURNS
This function returns 0 if no errors occur. If an error occurs, -1
is returned and errno is set appropriately:
EFAULT Invalid user space pointer was passed
ENXIO Invalid IOP number
ENOBUFS Buffer not large enough. If this occurs, the required
buffer length is written into *(lct->reslen)
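Both I2OHRTGET and I2OLCTGET share this ENOBUFS convention, which suggests a two-call pattern: try with an initial buffer and, on ENOBUFS, grow the buffer to the length the kernel wrote back and retry. A sketch of that pattern, with the real ioctl() call abstracted behind a caller-supplied query function (a hypothetical signature, used here only so the pattern itself is visible and testable):

```c
#include <errno.h>
#include <stdlib.h>

/* Generic two-call pattern for I2OHRTGET/I2OLCTGET: if the first query
 * fails with ENOBUFS, the required length has been written into *reslen,
 * so grow the buffer and retry once.  'query' stands in for the real
 * ioctl() call in this sketch. */
static int query_with_retry(int (*query)(void *buf, unsigned int *reslen),
			    void **buf, unsigned int *reslen)
{
	int rc = query(*buf, reslen);

	if (rc == -1 && errno == ENOBUFS) {
		void *bigger = realloc(*buf, *reslen);

		if (!bigger)
			return -1;
		*buf = bigger;
		rc = query(*buf, reslen);
	}
	return rc;
}
```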
VI. Setting Parameters
SYNOPSIS
ioctl(fd, I2OPARMSET, struct i2o_cmd_psetget *ops);
struct i2o_cmd_psetget
{
u32 iop; /* IOP unit number */
u32 tid; /* Target device TID */
void *opbuf; /* Operation List buffer */
u32 oplen; /* Operation List buffer length in bytes */
void *resbuf; /* Result List buffer */
u32 *reslen; /* Result List buffer length in bytes */
};
DESCRIPTION
This function posts a UtilParamsSet message to the device identified
by ops->iop and ops->tid. The operation list for the message is
sent through the ops->opbuf buffer, and the result list is written
into the buffer pointed to by ops->resbuf. The number of bytes
written is placed into *(ops->reslen).
RETURNS
The return value is the size in bytes of the data written into
ops->resbuf if no errors occur. If an error occurs, -1 is returned
and errno is set appropriately:
EFAULT Invalid user space pointer was passed
ENXIO Invalid IOP number
ENOBUFS Buffer not large enough. If this occurs, the required
buffer length is written into *(ops->reslen)
ETIMEDOUT Timeout waiting for reply message
ENOMEM Kernel memory allocation error
A return value of 0 does not mean that the value was actually
changed properly on the IOP. The user should check the result
list to determine the specific status of the transaction.
VII. Getting Parameters
SYNOPSIS
ioctl(fd, I2OPARMGET, struct i2o_cmd_psetget *ops);
struct i2o_cmd_psetget
{
u32 iop; /* IOP unit number */
u32 tid; /* Target device TID */
void *opbuf; /* Operation List buffer */
u32 oplen; /* Operation List buffer length in bytes */
void *resbuf; /* Result List buffer */
u32 *reslen; /* Result List buffer length in bytes */
};
DESCRIPTION
This function posts a UtilParamsGet message to the device identified
by ops->iop and ops->tid. The operation list for the message is
sent through the ops->opbuf buffer, and the result list is written
into the buffer pointed to by ops->resbuf. The actual size of data
written is placed into *(ops->reslen).
RETURNS
This function returns 0 if no errors occur. If an error occurs, -1
is returned and errno is set appropriately:
EFAULT Invalid user space pointer was passed
ENXIO Invalid IOP number
ENOBUFS Buffer not large enough. If this occurs, the required
buffer length is written into *(ops->reslen)
ETIMEDOUT Timeout waiting for reply message
ENOMEM Kernel memory allocation error
A return value of 0 does not mean that the value was actually
properly retrieved. The user should check the result list
to determine the specific status of the transaction.
VIII. Downloading Software
SYNOPSIS
ioctl(fd, I2OSWDL, struct i2o_sw_xfer *sw);
struct i2o_sw_xfer
{
u32 iop; /* IOP unit number */
u8 flags; /* DownloadFlags field */
u8 sw_type; /* Software type */
u32 sw_id; /* Software ID */
void *buf; /* Pointer to software buffer */
u32 *swlen; /* Length of software buffer */
u32 *maxfrag; /* Number of fragments */
u32 *curfrag; /* Current fragment number */
};
DESCRIPTION
This function downloads a software fragment pointed to by sw->buf
to the IOP identified by sw->iop. The DownloadFlags, SwID, SwType
and SwSize fields of the ExecSwDownload message are filled in with
the values of sw->flags, sw->sw_id, sw->sw_type and *(sw->swlen).
The fragments _must_ be sent in order and be 8K in size. The last
fragment _may_ be shorter, however. The kernel will compute its
size based on information in the sw->swlen field.
Please note that SW transfers can take a long time.
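Since fragments must be exactly 8K and only the last one may be shorter, the fragment count and the size of the final fragment follow directly from the total length in *(sw->swlen). A sketch of that arithmetic (the I2O_SW_FRAG_SIZE name is made up for this example):

```c
#define I2O_SW_FRAG_SIZE 8192	/* 8K fragments, per the text above */

/* Number of fragments needed for a software image of swlen bytes. */
static unsigned int sw_frag_count(unsigned int swlen)
{
	return (swlen + I2O_SW_FRAG_SIZE - 1) / I2O_SW_FRAG_SIZE;
}

/* Size of the final (possibly shorter) fragment. */
static unsigned int sw_last_frag_size(unsigned int swlen)
{
	unsigned int rem = swlen % I2O_SW_FRAG_SIZE;

	return rem ? rem : I2O_SW_FRAG_SIZE;
}
```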
RETURNS
This function returns 0 if no errors occur. If an error occurs, -1
is returned and errno is set appropriately:
EFAULT Invalid user space pointer was passed
ENXIO Invalid IOP number
ETIMEDOUT Timeout waiting for reply message
ENOMEM Kernel memory allocation error
IX. Uploading Software
SYNOPSIS
ioctl(fd, I2OSWUL, struct i2o_sw_xfer *sw);
struct i2o_sw_xfer
{
u32 iop; /* IOP unit number */
u8 flags; /* UploadFlags */
u8 sw_type; /* Software type */
u32 sw_id; /* Software ID */
void *buf; /* Pointer to software buffer */
u32 *swlen; /* Length of software buffer */
u32 *maxfrag; /* Number of fragments */
u32 *curfrag; /* Current fragment number */
};
DESCRIPTION
This function uploads a software fragment from the IOP identified
by sw->iop, sw->sw_type, sw->sw_id and optionally sw->swlen fields.
The UploadFlags, SwID, SwType and SwSize fields of the ExecSwUpload
message are filled in with the values of sw->flags, sw->sw_id,
sw->sw_type and *(sw->swlen).
The fragments _must_ be requested in order and be 8K in size. The
user is responsible for allocating the memory pointed to by sw->buf.
The last fragment _may_ be shorter.
Please note that SW transfers can take a long time.
RETURNS
This function returns 0 if no errors occur. If an error occurs, -1
is returned and errno is set appropriately:
EFAULT Invalid user space pointer was passed
ENXIO Invalid IOP number
ETIMEDOUT Timeout waiting for reply message
ENOMEM Kernel memory allocation error
X. Removing Software
SYNOPSIS
ioctl(fd, I2OSWDEL, struct i2o_sw_xfer *sw);
struct i2o_sw_xfer
{
u32 iop; /* IOP unit number */
u8 flags; /* RemoveFlags */
u8 sw_type; /* Software type */
u32 sw_id; /* Software ID */
void *buf; /* Unused */
u32 *swlen; /* Length of the software data */
u32 *maxfrag; /* Unused */
u32 *curfrag; /* Unused */
};
DESCRIPTION
This function removes software from the IOP identified by sw->iop.
The RemoveFlags, SwID, SwType and SwSize fields of the ExecSwRemove message
are filled in with the values of sw->flags, sw->sw_id, sw->sw_type and
*(sw->swlen). Give zero in *(sw->swlen) if the value is unknown. The IOP
uses the *(sw->swlen) value to verify correct identification of the module
to remove. The actual size of the module is written into *(sw->swlen).
RETURNS
This function returns 0 if no errors occur. If an error occurs, -1
is returned and errno is set appropriately:
EFAULT Invalid user space pointer was passed
ENXIO Invalid IOP number
ETIMEDOUT Timeout waiting for reply message
ENOMEM Kernel memory allocation error
XI. Validating Configuration
SYNOPSIS
ioctl(fd, I2OVALIDATE, int *iop);
u32 iop;
DESCRIPTION
This function posts an ExecConfigValidate message to the controller
identified by iop. This message indicates that the current
configuration is accepted. The IOP changes the status of suspect drivers
to valid and may delete old drivers from its store.
RETURNS
This function returns 0 if no errors occur. If an error occurs, -1 is
returned and errno is set appropriately:
ETIMEDOUT Timeout waiting for reply message
ENXIO Invalid IOP number
XII. Configuration Dialog
SYNOPSIS
ioctl(fd, I2OHTML, struct i2o_html *htquery);
struct i2o_html
{
u32 iop; /* IOP unit number */
u32 tid; /* Target device ID */
u32 page; /* HTML page */
void *resbuf; /* Buffer for reply HTML page */
u32 *reslen; /* Length in bytes of reply buffer */
void *qbuf; /* Pointer to HTTP query string */
u32 qlen; /* Length in bytes of query string buffer */
};
DESCRIPTION
This function posts a UtilConfigDialog message to the device identified
by htquery->iop and htquery->tid. The requested HTML page number is
provided by the htquery->page field, and the resultant data is stored
in the buffer pointed to by htquery->resbuf. If there is an HTTP query
string that is to be sent to the device, it should be sent in the buffer
pointed to by htquery->qbuf. If there is no query string, this field
should be set to NULL. The actual size of the reply received is written
into *(htquery->reslen).
RETURNS
This function returns 0 if no errors occur. If an error occurs, -1
is returned and errno is set appropriately:
EFAULT Invalid user space pointer was passed
ENXIO Invalid IOP number
ENOBUFS Buffer not large enough. If this occurs, the required
buffer length is written into *(htquery->reslen)
ETIMEDOUT Timeout waiting for reply message
ENOMEM Kernel memory allocation error
XIII. Events
The exact interface is still being determined. The current idea is to use
the select() interface to allow user apps to periodically poll
the /dev/i2o/ctl device for events. When select() notifies the user
that an event is available, the user would call read() to retrieve
a list of all the events that are pending for the specific device.
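The proposed flow can be sketched with standard select()/read() calls; since the event interface itself does not exist yet, a pipe stands in for /dev/i2o/ctl in this sketch:

```c
#include <sys/select.h>
#include <unistd.h>

/* Block until fd is readable, then read up to len bytes of pending
 * event data.  Returns the byte count read, or -1 on error. */
static ssize_t wait_and_read_events(int fd, char *buf, size_t len)
{
	fd_set rfds;

	FD_ZERO(&rfds);
	FD_SET(fd, &rfds);
	if (select(fd + 1, &rfds, NULL, NULL, NULL) <= 0)
		return -1;
	return read(fd, buf, len);
}
```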
=============================================================================
Revision History
=============================================================================
Rev 0.1 - 04/01/99
- Initial revision
Rev 0.2 - 04/06/99
- Changed return values to match UNIX ioctl() standard. Only return values
are 0 and -1. All errors are reported through errno.
- Added summary of proposed possible event interfaces
Rev 0.3 - 04/20/99
- Changed all ioctls() to use pointers to user data instead of actual data
- Updated error values to match the code
@@ -5,6 +5,7 @@
# In the future, some of these should be built conditionally.
#
i2o_core-y += iop.o driver.o device.o debug.o pci.o exec-osm.o
obj-$(CONFIG_I2O) += i2o_core.o
obj-$(CONFIG_I2O_CONFIG)+= i2o_config.o
obj-$(CONFIG_I2O_BLOCK) += i2o_block.o
...
#include <linux/config.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/i2o.h>
static int verbose;
extern struct i2o_driver **i2o_drivers;
extern unsigned int i2o_max_drivers;
static void i2o_report_util_cmd(u8 cmd);
static void i2o_report_exec_cmd(u8 cmd);
void i2o_report_fail_status(u8 req_status, u32 * msg);
void i2o_report_common_status(u8 req_status);
static void i2o_report_common_dsc(u16 detailed_status);
void i2o_dump_status_block(i2o_status_block * sb)
{
pr_debug("Organization ID: %d\n", sb->org_id);
pr_debug("IOP ID: %d\n", sb->iop_id);
pr_debug("Host Unit ID: %d\n", sb->host_unit_id);
pr_debug("Segment Number: %d\n", sb->segment_number);
pr_debug("I2O Version: %d\n", sb->i2o_version);
pr_debug("IOP State: %d\n", sb->iop_state);
pr_debug("Messenger Type: %d\n", sb->msg_type);
pr_debug("Inbound Frame Size: %d\n", sb->inbound_frame_size);
pr_debug("Init Code: %d\n", sb->init_code);
pr_debug("Max Inbound MFrames: %d\n", sb->max_inbound_frames);
pr_debug("Current Inbound MFrames: %d\n", sb->cur_inbound_frames);
pr_debug("Max Outbound MFrames: %d\n", sb->max_outbound_frames);
pr_debug("Product ID String: %s\n", sb->product_id);
pr_debug("Expected LCT Size: %d\n", sb->expected_lct_size);
pr_debug("IOP Capabilities: %d\n", sb->iop_capabilities);
pr_debug("Desired Private MemSize: %d\n", sb->desired_mem_size);
pr_debug("Current Private MemSize: %d\n", sb->current_mem_size);
pr_debug("Current Private MemBase: %d\n", sb->current_mem_base);
pr_debug("Desired Private IO Size: %d\n", sb->desired_io_size);
pr_debug("Current Private IO Size: %d\n", sb->current_io_size);
pr_debug("Current Private IO Base: %d\n", sb->current_io_base);
};
/*
* Used for error reporting/debugging purposes.
* Report Cmd name, Request status, Detailed Status.
*/
void i2o_report_status(const char *severity, const char *str,
struct i2o_message *m)
{
u32 *msg = (u32 *) m;
u8 cmd = (msg[1] >> 24) & 0xFF;
u8 req_status = (msg[4] >> 24) & 0xFF;
u16 detailed_status = msg[4] & 0xFFFF;
//struct i2o_driver *h = i2o_drivers[msg[2] & (i2o_max_drivers-1)];
if (cmd == I2O_CMD_UTIL_EVT_REGISTER)
return; // No status in this reply
printk("%s%s: ", severity, str);
if (cmd < 0x1F) // Utility cmd
i2o_report_util_cmd(cmd);
else if (cmd >= 0xA0 && cmd <= 0xEF) // Executive cmd
i2o_report_exec_cmd(cmd);
else
printk("Cmd = %0#2x, ", cmd); // Other cmds
if (msg[0] & MSG_FAIL) {
i2o_report_fail_status(req_status, msg);
return;
}
i2o_report_common_status(req_status);
if (cmd < 0x1F || (cmd >= 0xA0 && cmd <= 0xEF))
i2o_report_common_dsc(detailed_status);
else
printk(" / DetailedStatus = %0#4x.\n", detailed_status);
}
/* Used to dump a message to syslog during debugging */
void i2o_dump_message(struct i2o_message *m)
{
#ifdef DEBUG
u32 *msg = (u32 *) m;
int i;
printk(KERN_INFO "Dumping I2O message size %d @ %p\n",
msg[0] >> 16 & 0xffff, msg);
for (i = 0; i < ((msg[0] >> 16) & 0xffff); i++)
printk(KERN_INFO " msg[%d] = %0#10x\n", i, msg[i]);
#endif
}
/**
* i2o_report_controller_unit - print information about a tid
* @c: controller
* @d: device
*
* Dump an information block associated with a given unit (TID). The
* tables are read and a block of text is output to printk that is
* formatted for the user.
*/
void i2o_report_controller_unit(struct i2o_controller *c, struct i2o_device *d)
{
char buf[64];
char str[22];
int ret;
if (verbose == 0)
return;
printk(KERN_INFO "Target ID %03x.\n", d->lct_data.tid);
if ((ret = i2o_parm_field_get(d, 0xF100, 3, buf, 16)) >= 0) {
buf[16] = 0;
printk(KERN_INFO " Vendor: %s\n", buf);
}
if ((ret = i2o_parm_field_get(d, 0xF100, 4, buf, 16)) >= 0) {
buf[16] = 0;
printk(KERN_INFO " Device: %s\n", buf);
}
if (i2o_parm_field_get(d, 0xF100, 5, buf, 16) >= 0) {
buf[16] = 0;
printk(KERN_INFO " Description: %s\n", buf);
}
if ((ret = i2o_parm_field_get(d, 0xF100, 6, buf, 8)) >= 0) {
buf[8] = 0;
printk(KERN_INFO " Rev: %s\n", buf);
}
/* The class-name lookup is commented out above, so print the raw
 * class id rather than the uninitialized str buffer. */
printk(KERN_INFO " Class: 0x%03X\n", d->lct_data.class_id);
printk(KERN_INFO " Subclass: 0x%04X\n", d->lct_data.sub_class);
printk(KERN_INFO " Flags: ");
if (d->lct_data.device_flags & (1 << 0))
printk("C"); // ConfigDialog requested
if (d->lct_data.device_flags & (1 << 1))
printk("U"); // Multi-user capable
if (!(d->lct_data.device_flags & (1 << 4)))
printk("P"); // Peer service enabled!
if (!(d->lct_data.device_flags & (1 << 5)))
printk("M"); // Mgmt service enabled!
printk("\n");
}
/*
MODULE_PARM(verbose, "i");
MODULE_PARM_DESC(verbose, "Verbose diagnostics");
*/
/*
* Used for error reporting/debugging purposes.
* Following fail status are common to all classes.
* The preserved message must be handled in the reply handler.
*/
void i2o_report_fail_status(u8 req_status, u32 * msg)
{
static char *FAIL_STATUS[] = {
"0x80", /* not used */
"SERVICE_SUSPENDED", /* 0x81 */
"SERVICE_TERMINATED", /* 0x82 */
"CONGESTION",
"FAILURE",
"STATE_ERROR",
"TIME_OUT",
"ROUTING_FAILURE",
"INVALID_VERSION",
"INVALID_OFFSET",
"INVALID_MSG_FLAGS",
"FRAME_TOO_SMALL",
"FRAME_TOO_LARGE",
"INVALID_TARGET_ID",
"INVALID_INITIATOR_ID",
"INVALID_INITIATOR_CONTEX", /* 0x8F */
"UNKNOWN_FAILURE" /* 0xFF */
};
if (req_status == I2O_FSC_TRANSPORT_UNKNOWN_FAILURE)
printk("TRANSPORT_UNKNOWN_FAILURE (%0#2x).\n", req_status);
else
printk("TRANSPORT_%s.\n", FAIL_STATUS[req_status & 0x0F]);
/* Dump some details */
printk(KERN_ERR " InitiatorId = %d, TargetId = %d\n",
(msg[1] >> 12) & 0xFFF, msg[1] & 0xFFF);
printk(KERN_ERR " LowestVersion = 0x%02X, HighestVersion = 0x%02X\n",
(msg[4] >> 8) & 0xFF, msg[4] & 0xFF);
printk(KERN_ERR " FailingHostUnit = 0x%04X, FailingIOP = 0x%03X\n",
msg[5] >> 16, msg[5] & 0xFFF);
printk(KERN_ERR " Severity: 0x%02X ", (msg[4] >> 16) & 0xFF);
if (msg[4] & (1 << 16))
printk("(FormatError), "
"this msg can never be delivered/processed.\n");
if (msg[4] & (1 << 17))
printk("(PathError), "
"this msg can no longer be delivered/processed.\n");
if (msg[4] & (1 << 18))
printk("(PathState), "
"the system state does not allow delivery.\n");
if (msg[4] & (1 << 19))
printk("(Congestion), resources temporarily not available;"
"do not retry immediately.\n");
}
/*
* Used for error reporting/debugging purposes.
* Following reply status are common to all classes.
*/
void i2o_report_common_status(u8 req_status)
{
static char *REPLY_STATUS[] = {
"SUCCESS",
"ABORT_DIRTY",
"ABORT_NO_DATA_TRANSFER",
"ABORT_PARTIAL_TRANSFER",
"ERROR_DIRTY",
"ERROR_NO_DATA_TRANSFER",
"ERROR_PARTIAL_TRANSFER",
"PROCESS_ABORT_DIRTY",
"PROCESS_ABORT_NO_DATA_TRANSFER",
"PROCESS_ABORT_PARTIAL_TRANSFER",
"TRANSACTION_ERROR",
"PROGRESS_REPORT"
};
if (req_status >= ARRAY_SIZE(REPLY_STATUS))
printk("RequestStatus = %0#2x", req_status);
else
printk("%s", REPLY_STATUS[req_status]);
}
/*
* Used for error reporting/debugging purposes.
* Following detailed status are valid for executive class,
* utility class, DDM class and for transaction error replies.
*/
static void i2o_report_common_dsc(u16 detailed_status)
{
static char *COMMON_DSC[] = {
"SUCCESS",
"0x01", // not used
"BAD_KEY",
"TCL_ERROR",
"REPLY_BUFFER_FULL",
"NO_SUCH_PAGE",
"INSUFFICIENT_RESOURCE_SOFT",
"INSUFFICIENT_RESOURCE_HARD",
"0x08", // not used
"CHAIN_BUFFER_TOO_LARGE",
"UNSUPPORTED_FUNCTION",
"DEVICE_LOCKED",
"DEVICE_RESET",
"INAPPROPRIATE_FUNCTION",
"INVALID_INITIATOR_ADDRESS",
"INVALID_MESSAGE_FLAGS",
"INVALID_OFFSET",
"INVALID_PARAMETER",
"INVALID_REQUEST",
"INVALID_TARGET_ADDRESS",
"MESSAGE_TOO_LARGE",
"MESSAGE_TOO_SMALL",
"MISSING_PARAMETER",
"TIMEOUT",
"UNKNOWN_ERROR",
"UNKNOWN_FUNCTION",
"UNSUPPORTED_VERSION",
"DEVICE_BUSY",
"DEVICE_NOT_AVAILABLE"
};
if (detailed_status > I2O_DSC_DEVICE_NOT_AVAILABLE)
printk(" / DetailedStatus = %0#4x.\n", detailed_status);
else
printk(" / %s.\n", COMMON_DSC[detailed_status]);
}
/*
* Used for error reporting/debugging purposes
*/
static void i2o_report_util_cmd(u8 cmd)
{
switch (cmd) {
case I2O_CMD_UTIL_NOP:
printk("UTIL_NOP, ");
break;
case I2O_CMD_UTIL_ABORT:
printk("UTIL_ABORT, ");
break;
case I2O_CMD_UTIL_CLAIM:
printk("UTIL_CLAIM, ");
break;
case I2O_CMD_UTIL_RELEASE:
printk("UTIL_CLAIM_RELEASE, ");
break;
case I2O_CMD_UTIL_CONFIG_DIALOG:
printk("UTIL_CONFIG_DIALOG, ");
break;
case I2O_CMD_UTIL_DEVICE_RESERVE:
printk("UTIL_DEVICE_RESERVE, ");
break;
case I2O_CMD_UTIL_DEVICE_RELEASE:
printk("UTIL_DEVICE_RELEASE, ");
break;
case I2O_CMD_UTIL_EVT_ACK:
printk("UTIL_EVENT_ACKNOWLEDGE, ");
break;
case I2O_CMD_UTIL_EVT_REGISTER:
printk("UTIL_EVENT_REGISTER, ");
break;
case I2O_CMD_UTIL_LOCK:
printk("UTIL_LOCK, ");
break;
case I2O_CMD_UTIL_LOCK_RELEASE:
printk("UTIL_LOCK_RELEASE, ");
break;
case I2O_CMD_UTIL_PARAMS_GET:
printk("UTIL_PARAMS_GET, ");
break;
case I2O_CMD_UTIL_PARAMS_SET:
printk("UTIL_PARAMS_SET, ");
break;
case I2O_CMD_UTIL_REPLY_FAULT_NOTIFY:
printk("UTIL_REPLY_FAULT_NOTIFY, ");
break;
default:
printk("Cmd = %0#2x, ", cmd);
}
}
/*
* Used for error reporting/debugging purposes
*/
static void i2o_report_exec_cmd(u8 cmd)
{
switch (cmd) {
case I2O_CMD_ADAPTER_ASSIGN:
printk("EXEC_ADAPTER_ASSIGN, ");
break;
case I2O_CMD_ADAPTER_READ:
printk("EXEC_ADAPTER_READ, ");
break;
case I2O_CMD_ADAPTER_RELEASE:
printk("EXEC_ADAPTER_RELEASE, ");
break;
case I2O_CMD_BIOS_INFO_SET:
printk("EXEC_BIOS_INFO_SET, ");
break;
case I2O_CMD_BOOT_DEVICE_SET:
printk("EXEC_BOOT_DEVICE_SET, ");
break;
case I2O_CMD_CONFIG_VALIDATE:
printk("EXEC_CONFIG_VALIDATE, ");
break;
case I2O_CMD_CONN_SETUP:
printk("EXEC_CONN_SETUP, ");
break;
case I2O_CMD_DDM_DESTROY:
printk("EXEC_DDM_DESTROY, ");
break;
case I2O_CMD_DDM_ENABLE:
printk("EXEC_DDM_ENABLE, ");
break;
case I2O_CMD_DDM_QUIESCE:
printk("EXEC_DDM_QUIESCE, ");
break;
case I2O_CMD_DDM_RESET:
printk("EXEC_DDM_RESET, ");
break;
case I2O_CMD_DDM_SUSPEND:
printk("EXEC_DDM_SUSPEND, ");
break;
case I2O_CMD_DEVICE_ASSIGN:
printk("EXEC_DEVICE_ASSIGN, ");
break;
case I2O_CMD_DEVICE_RELEASE:
printk("EXEC_DEVICE_RELEASE, ");
break;
case I2O_CMD_HRT_GET:
printk("EXEC_HRT_GET, ");
break;
case I2O_CMD_ADAPTER_CLEAR:
printk("EXEC_IOP_CLEAR, ");
break;
case I2O_CMD_ADAPTER_CONNECT:
printk("EXEC_IOP_CONNECT, ");
break;
case I2O_CMD_ADAPTER_RESET:
printk("EXEC_IOP_RESET, ");
break;
case I2O_CMD_LCT_NOTIFY:
printk("EXEC_LCT_NOTIFY, ");
break;
case I2O_CMD_OUTBOUND_INIT:
printk("EXEC_OUTBOUND_INIT, ");
break;
case I2O_CMD_PATH_ENABLE:
printk("EXEC_PATH_ENABLE, ");
break;
case I2O_CMD_PATH_QUIESCE:
printk("EXEC_PATH_QUIESCE, ");
break;
case I2O_CMD_PATH_RESET:
printk("EXEC_PATH_RESET, ");
break;
case I2O_CMD_STATIC_MF_CREATE:
printk("EXEC_STATIC_MF_CREATE, ");
break;
case I2O_CMD_STATIC_MF_RELEASE:
printk("EXEC_STATIC_MF_RELEASE, ");
break;
case I2O_CMD_STATUS_GET:
printk("EXEC_STATUS_GET, ");
break;
case I2O_CMD_SW_DOWNLOAD:
printk("EXEC_SW_DOWNLOAD, ");
break;
case I2O_CMD_SW_UPLOAD:
printk("EXEC_SW_UPLOAD, ");
break;
case I2O_CMD_SW_REMOVE:
printk("EXEC_SW_REMOVE, ");
break;
case I2O_CMD_SYS_ENABLE:
printk("EXEC_SYS_ENABLE, ");
break;
case I2O_CMD_SYS_MODIFY:
printk("EXEC_SYS_MODIFY, ");
break;
case I2O_CMD_SYS_QUIESCE:
printk("EXEC_SYS_QUIESCE, ");
break;
case I2O_CMD_SYS_TAB_SET:
printk("EXEC_SYS_TAB_SET, ");
break;
default:
printk("Cmd = %#02x, ", cmd);
}
}
void i2o_debug_state(struct i2o_controller *c)
{
printk(KERN_INFO "%s: State = ", c->name);
switch (((i2o_status_block *) c->status_block.virt)->iop_state) {
case 0x01:
printk("INIT\n");
break;
case 0x02:
printk("RESET\n");
break;
case 0x04:
printk("HOLD\n");
break;
case 0x05:
printk("READY\n");
break;
case 0x08:
printk("OPERATIONAL\n");
break;
case 0x10:
printk("FAILED\n");
break;
case 0x11:
printk("FAULTED\n");
break;
default:
printk("%x (unknown !!)\n",
((i2o_status_block *) c->status_block.virt)->iop_state);
}
};
void i2o_systab_debug(struct i2o_sys_tbl *sys_tbl)
{
u32 *table;
int count;
u32 size;
table = (u32 *) sys_tbl;
size = sizeof(struct i2o_sys_tbl) + sys_tbl->num_entries
* sizeof(struct i2o_sys_tbl_entry);
for (count = 0; count < (size >> 2); count++)
printk(KERN_INFO "sys_tbl[%d] = %0#10x\n", count, table[count]);
}
void i2o_dump_hrt(struct i2o_controller *c)
{
u32 *rows = (u32 *) c->hrt.virt;
u8 *p = (u8 *) c->hrt.virt;
u8 *d;
int count;
int length;
int i;
int state;
if (p[3] != 0) {
printk(KERN_ERR
"%s: HRT table for controller is too new a version.\n",
c->name);
return;
}
count = p[0] | (p[1] << 8);
length = p[2];
printk(KERN_INFO "%s: HRT has %d entries of %d bytes each.\n",
c->name, count, length << 2);
rows += 2;
for (i = 0; i < count; i++) {
printk(KERN_INFO "Adapter %08X: ", rows[0]);
p = (u8 *) (rows + 1);
d = (u8 *) (rows + 2);
state = p[1] << 8 | p[0];
printk("TID %04X:[", state & 0xFFF);
state >>= 12;
if (state & (1 << 0))
printk("H"); /* Hidden */
if (state & (1 << 2)) {
printk("P"); /* Present */
if (state & (1 << 1))
printk("C"); /* Controlled */
}
if (state > 9)
printk("*"); /* Hard */
printk("]:");
switch (p[3] & 0xFFFF) {
case 0:
/* Adapter private bus - easy */
printk("Local bus %d: I/O at 0x%04X Mem 0x%08X",
p[2], d[1] << 8 | d[0], *(u32 *) (d + 4));
break;
case 1:
/* ISA bus */
printk("ISA %d: CSN %d I/O at 0x%04X Mem 0x%08X",
p[2], d[2], d[1] << 8 | d[0], *(u32 *) (d + 4));
break;
case 2: /* EISA bus */
printk("EISA %d: Slot %d I/O at 0x%04X Mem 0x%08X",
p[2], d[3], d[1] << 8 | d[0], *(u32 *) (d + 4));
break;
case 3: /* MCA bus */
printk("MCA %d: Slot %d I/O at 0x%04X Mem 0x%08X",
p[2], d[3], d[1] << 8 | d[0], *(u32 *) (d + 4));
break;
case 4: /* PCI bus */
printk("PCI %d: Bus %d Device %d Function %d",
p[2], d[2], d[1], d[0]);
break;
case 0x80: /* Other */
default:
printk("Unsupported bus type.");
break;
}
printk("\n");
rows += length;
}
}
EXPORT_SYMBOL(i2o_dump_status_block);
EXPORT_SYMBOL(i2o_dump_message);
/*
* Functions to handle I2O devices
*
* Copyright (C) 2004 Markus Lidel <Markus.Lidel@shadowconnect.com>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
* Fixes/additions:
* Markus Lidel <Markus.Lidel@shadowconnect.com>
* initial version.
*/
#include <linux/module.h>
#include <linux/i2o.h>
/* Exec OSM functions */
extern struct bus_type i2o_bus_type;
/**
* i2o_device_issue_claim - claim or release a device
* @dev: I2O device to claim or release
* @cmd: claim or release command
* @type: type of claim
*
* Issue I2O UTIL_CLAIM or UTIL_RELEASE messages. The message to be sent
* is set by cmd. dev is the I2O device which should be claimed or
* released and type is the claim type (see the I2O spec).
*
* Returns 0 on success or negative error code on failure.
*/
static inline int i2o_device_issue_claim(struct i2o_device *dev, u32 cmd,
u32 type)
{
struct i2o_message *msg;
u32 m;
m = i2o_msg_get_wait(dev->iop, &msg, I2O_TIMEOUT_MESSAGE_GET);
if (m == I2O_QUEUE_EMPTY)
return -ETIMEDOUT;
writel(FIVE_WORD_MSG_SIZE | SGL_OFFSET_0, &msg->u.head[0]);
writel(cmd << 24 | HOST_TID << 12 | dev->lct_data.tid, &msg->u.head[1]);
writel(type, &msg->body[0]);
return i2o_msg_post_wait(dev->iop, m, 60);
};
/**
* i2o_device_claim - claim a device for use by an OSM
* @dev: I2O device to claim
* @drv: I2O driver which wants to claim the device
*
* Do the leg work to assign a device to a given OSM. If the claim
* succeeds, the OSM becomes the primary owner of the device. If the
* attempt fails a negative errno code is returned. On success zero
* is returned.
*/
int i2o_device_claim(struct i2o_device *dev)
{
int rc = 0;
down(&dev->lock);
rc = i2o_device_issue_claim(dev, I2O_CMD_UTIL_CLAIM, I2O_CLAIM_PRIMARY);
if (!rc)
pr_debug("claim of device %d succeeded\n", dev->lct_data.tid);
else
pr_debug("claim of device %d failed %d\n", dev->lct_data.tid,
rc);
up(&dev->lock);
return rc;
};
/**
* i2o_device_claim_release - release a device that the OSM is using
* @dev: device to release
* @drv: driver which claimed the device
*
* Drop a claim by an OSM on a given I2O device.
*
* AC - some devices seem to want to refuse an unclaim until they have
* finished internal processing. It makes sense since you don't want a
* new device to go reconfiguring the entire system until you are done.
* Thus we are prepared to wait briefly.
*
* Returns 0 on success or negative error code on failure.
*/
int i2o_device_claim_release(struct i2o_device *dev)
{
int tries;
int rc = 0;
down(&dev->lock);
/*
* If the controller takes a nonblocking approach to
* releases we have to sleep/poll for a few times.
*/
for (tries = 0; tries < 10; tries++) {
rc = i2o_device_issue_claim(dev, I2O_CMD_UTIL_RELEASE,
I2O_CLAIM_PRIMARY);
if (!rc)
break;
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_timeout(HZ);
}
if (!rc)
pr_debug("claim release of device %d succeeded\n",
dev->lct_data.tid);
else
pr_debug("claim release of device %d failed %d\n",
dev->lct_data.tid, rc);
up(&dev->lock);
return rc;
};
/**
* i2o_device_release - release the memory for a I2O device
* @dev: I2O device which should be released
*
* Release the allocated memory. This function is called automatically when
* the refcount of the device reaches 0.
*/
static void i2o_device_release(struct device *dev)
{
struct i2o_device *i2o_dev = to_i2o_device(dev);
pr_debug("Release I2O device %s\n", dev->bus_id);
kfree(i2o_dev);
};
/**
* i2o_device_class_release - Remove I2O device attributes
* @cd: I2O class device which is added to the I2O device class
*
* Removes the attributes from the I2O device. Also searches each device
* on the controller for I2O devices which refer to this device as parent
* or user and removes those links as well.
*/
static void i2o_device_class_release(struct class_device *cd)
{
struct i2o_device *i2o_dev, *tmp;
struct i2o_controller *c;
i2o_dev = to_i2o_device(cd->dev);
c = i2o_dev->iop;
sysfs_remove_link(&i2o_dev->device.kobj, "parent");
sysfs_remove_link(&i2o_dev->device.kobj, "user");
list_for_each_entry(tmp, &c->devices, list) {
if (tmp->lct_data.parent_tid == i2o_dev->lct_data.tid)
sysfs_remove_link(&tmp->device.kobj, "parent");
if (tmp->lct_data.user_tid == i2o_dev->lct_data.tid)
sysfs_remove_link(&tmp->device.kobj, "user");
}
};
/* I2O device class */
static struct class i2o_device_class = {
.name = "i2o_device",
.release = i2o_device_class_release
};
/**
* i2o_device_alloc - Allocate an I2O device and initialize it
*
* Allocate the memory for an I2O device and initialize locks and lists
*
* Returns the allocated I2O device or a negative error code if the device
* could not be allocated.
*/
static struct i2o_device *i2o_device_alloc(void)
{
struct i2o_device *dev;
dev = kmalloc(sizeof(*dev), GFP_KERNEL);
if (!dev)
return ERR_PTR(-ENOMEM);
memset(dev, 0, sizeof(*dev));
INIT_LIST_HEAD(&dev->list);
init_MUTEX(&dev->lock);
dev->device.bus = &i2o_bus_type;
dev->device.release = &i2o_device_release;
dev->classdev.class = &i2o_device_class;
dev->classdev.dev = &dev->device;
return dev;
};
/**
* i2o_device_add - allocate a new I2O device and add it to the IOP
* @c: I2O controller the device is on
* @entry: LCT entry of the I2O device
*
* Allocate a new I2O device and initialize it with the LCT entry. The
* device is appended to the device list of the controller.
*
* Returns a pointer to the I2O device on success or negative error code
* on failure.
*/
struct i2o_device *i2o_device_add(struct i2o_controller *c,
i2o_lct_entry * entry)
{
struct i2o_device *dev;
dev = i2o_device_alloc();
if (IS_ERR(dev)) {
printk(KERN_ERR "i2o: unable to allocate i2o device\n");
return dev;
}
dev->lct_data = *entry;
snprintf(dev->device.bus_id, BUS_ID_SIZE, "%d:%03x", c->unit,
dev->lct_data.tid);
snprintf(dev->classdev.class_id, BUS_ID_SIZE, "%d:%03x", c->unit,
dev->lct_data.tid);
dev->iop = c;
dev->device.parent = &c->device;
device_register(&dev->device);
list_add_tail(&dev->list, &c->devices);
class_device_register(&dev->classdev);
pr_debug("I2O device %s added\n", dev->device.bus_id);
return dev;
};
/**
* i2o_device_remove - remove an I2O device from the I2O core
* @dev: I2O device which should be released
*
* Used on I2O controller removal or LCT modification, when the device is
* removed from the system. Note that the device could still hang around
* until its refcount reaches 0.
*/
void i2o_device_remove(struct i2o_device *i2o_dev)
{
class_device_unregister(&i2o_dev->classdev);
list_del(&i2o_dev->list);
device_unregister(&i2o_dev->device);
};
/**
* i2o_device_parse_lct - Parse a previously fetched LCT and create devices
* @c: I2O controller from which the LCT should be parsed.
*
* The Logical Configuration Table tells us what we can talk to on the
* board. For every entry we create an I2O device, which is registered in
* the I2O core.
*
* Returns 0 on success or negative error code on failure.
*/
int i2o_device_parse_lct(struct i2o_controller *c)
{
struct i2o_device *dev, *tmp;
i2o_lct *lct;
int i;
int max;
down(&c->lct_lock);
if (c->lct)
kfree(c->lct);
lct = c->dlct.virt;
c->lct = kmalloc(lct->table_size * 4, GFP_KERNEL);
if (!c->lct) {
up(&c->lct_lock);
return -ENOMEM;
}
if (lct->table_size * 4 > c->dlct.len) {
memcpy_fromio(c->lct, c->dlct.virt, c->dlct.len);
up(&c->lct_lock);
return -EAGAIN;
}
memcpy_fromio(c->lct, c->dlct.virt, lct->table_size * 4);
lct = c->lct;
max = (lct->table_size - 3) / 9;
pr_debug("LCT has %d entries (LCT size: %d)\n", max, lct->table_size);
/* remove devices, which are not in the LCT anymore */
list_for_each_entry_safe(dev, tmp, &c->devices, list) {
int found = 0;
for (i = 0; i < max; i++) {
if (lct->lct_entry[i].tid == dev->lct_data.tid) {
found = 1;
break;
}
}
if (!found)
i2o_device_remove(dev);
}
/* add new devices, which are new in the LCT */
for (i = 0; i < max; i++) {
int found = 0;
list_for_each_entry_safe(dev, tmp, &c->devices, list) {
if (lct->lct_entry[i].tid == dev->lct_data.tid) {
found = 1;
break;
}
}
if (!found)
i2o_device_add(c, &lct->lct_entry[i]);
}
up(&c->lct_lock);
return 0;
};
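As a side note on the arithmetic in i2o_device_parse_lct() above: the LCT header occupies 3 32-bit words and each entry 9 words, which is where `max = (lct->table_size - 3) / 9` comes from. A minimal userspace sketch (hypothetical helper name, plain C types standing in for the kernel's):

```c
#include <assert.h>

/* Entry count of an LCT, mirroring i2o_device_parse_lct(): the table
 * header is 3 32-bit words, each LCT entry is 9 words, and table_size
 * is given in words. Hypothetical helper for illustration only. */
static int lct_entry_count(int table_size)
{
	return (table_size - 3) / 9;
}
```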
/**
* i2o_device_class_show_class_id - Displays class id of I2O device
* @cd: class device of which the class id should be displayed
* @buf: buffer into which the class id should be printed
*
* Returns the number of bytes which are printed into the buffer.
*/
static ssize_t i2o_device_class_show_class_id(struct class_device *cd,
char *buf)
{
struct i2o_device *dev = to_i2o_device(cd->dev);
sprintf(buf, "%03x\n", dev->lct_data.class_id);
return strlen(buf) + 1;
};
/**
* i2o_device_class_show_tid - Displays TID of I2O device
* @cd: class device of which the TID should be displayed
* @buf: buffer into which the class id should be printed
*
* Returns the number of bytes which are printed into the buffer.
*/
static ssize_t i2o_device_class_show_tid(struct class_device *cd, char *buf)
{
struct i2o_device *dev = to_i2o_device(cd->dev);
sprintf(buf, "%03x\n", dev->lct_data.tid);
return strlen(buf) + 1;
};
/* I2O device class attributes */
static CLASS_DEVICE_ATTR(class_id, S_IRUGO, i2o_device_class_show_class_id,
NULL);
static CLASS_DEVICE_ATTR(tid, S_IRUGO, i2o_device_class_show_tid, NULL);
/**
* i2o_device_class_add - Adds attributes to the I2O device
* @cd: I2O class device which is added to the I2O device class
*
* This function gets called when an I2O device is added to the class. It
* creates the attributes for each device and creates the user/parent
* symlinks if necessary.
*
* Returns 0 on success or negative error code on failure.
*/
static int i2o_device_class_add(struct class_device *cd)
{
struct i2o_device *i2o_dev, *tmp;
struct i2o_controller *c;
i2o_dev = to_i2o_device(cd->dev);
c = i2o_dev->iop;
class_device_create_file(cd, &class_device_attr_class_id);
class_device_create_file(cd, &class_device_attr_tid);
/* create user entries for this device */
tmp = i2o_iop_find_device(i2o_dev->iop, i2o_dev->lct_data.user_tid);
if (tmp)
sysfs_create_link(&i2o_dev->device.kobj, &tmp->device.kobj,
"user");
/* create user entries referring to this device */
list_for_each_entry(tmp, &c->devices, list)
if (tmp->lct_data.user_tid == i2o_dev->lct_data.tid)
sysfs_create_link(&tmp->device.kobj,
&i2o_dev->device.kobj, "user");
/* create parent entries for this device */
tmp = i2o_iop_find_device(i2o_dev->iop, i2o_dev->lct_data.parent_tid);
if (tmp)
sysfs_create_link(&i2o_dev->device.kobj, &tmp->device.kobj,
"parent");
/* create parent entries referring to this device */
list_for_each_entry(tmp, &c->devices, list)
if (tmp->lct_data.parent_tid == i2o_dev->lct_data.tid)
sysfs_create_link(&tmp->device.kobj,
&i2o_dev->device.kobj, "parent");
return 0;
};
/* I2O device class interface */
static struct class_interface i2o_device_class_interface = {
.class = &i2o_device_class,
.add = i2o_device_class_add
};
/*
* Run time support routines
*/
/* Issue UTIL_PARAMS_GET or UTIL_PARAMS_SET
*
* This function can be used for all UtilParamsGet/Set operations.
* The OperationList is given in oplist-buffer,
* and results are returned in reslist-buffer.
* Note that the minimum sized reslist is 8 bytes and contains
* ResultCount, ErrorInfoSize, BlockStatus and BlockSize.
*/
int i2o_parm_issue(struct i2o_device *i2o_dev, int cmd, void *oplist,
int oplen, void *reslist, int reslen)
{
struct i2o_message *msg;
u32 m;
u32 *res32 = (u32 *) reslist;
u32 *restmp = (u32 *) reslist;
int len = 0;
int i = 0;
int rc;
struct i2o_dma res;
struct i2o_controller *c = i2o_dev->iop;
struct device *dev = &c->pdev->dev;
res.virt = NULL;
if (i2o_dma_alloc(dev, &res, reslen, GFP_KERNEL))
return -ENOMEM;
m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
if (m == I2O_QUEUE_EMPTY) {
i2o_dma_free(dev, &res);
return -ETIMEDOUT;
}
i = 0;
writel(cmd << 24 | HOST_TID << 12 | i2o_dev->lct_data.tid,
&msg->u.head[1]);
writel(0, &msg->body[i++]);
writel(0x4C000000 | oplen, &msg->body[i++]); /* OperationList */
memcpy_toio(&msg->body[i], oplist, oplen);
i += (oplen / 4 + (oplen % 4 ? 1 : 0));
writel(0xD0000000 | res.len, &msg->body[i++]); /* ResultList */
writel(res.phys, &msg->body[i++]);
writel(I2O_MESSAGE_SIZE(i + sizeof(struct i2o_message) / 4) |
SGL_OFFSET_5, &msg->u.head[0]);
rc = i2o_msg_post_wait_mem(c, m, 10, &res);
/* This only looks like a memory leak - don't "fix" it. */
if (rc == -ETIMEDOUT)
return rc;
memcpy_fromio(reslist, res.virt, res.len);
i2o_dma_free(dev, &res);
/* Query failed */
if (rc)
return rc;
/*
* Calculate number of bytes of Result LIST
* We need to loop through each Result BLOCK and grab the length
*/
restmp = res32 + 1;
len = 1;
for (i = 0; i < (res32[0] & 0X0000FFFF); i++) {
if (restmp[0] & 0x00FF0000) { /* BlockStatus != SUCCESS */
printk(KERN_WARNING
"%s - Error:\n ErrorInfoSize = 0x%02x, "
"BlockStatus = 0x%02x, BlockSize = 0x%04x\n",
(cmd ==
I2O_CMD_UTIL_PARAMS_SET) ? "PARAMS_SET" :
"PARAMS_GET", res32[1] >> 24,
(res32[1] >> 16) & 0xFF, res32[1] & 0xFFFF);
/*
* If this is the only request, then we return an error
*/
if ((res32[0] & 0x0000FFFF) == 1) {
return -((res32[1] >> 16) & 0xFF); /* -BlockStatus */
}
}
len += restmp[0] & 0x0000FFFF; /* Length of res BLOCK */
restmp += restmp[0] & 0x0000FFFF; /* Skip to next BLOCK */
}
return (len << 2); /* bytes used by result list */
}
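The length calculation at the tail of i2o_parm_issue() can be isolated into a small userspace sketch (hypothetical helper, plain C types standing in for the kernel's; the BlockStatus error handling is omitted, only the length walk is shown):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t u32;

/* Walk a UtilParamsGet result list and return the number of bytes used:
 * word 0 holds the ResultCount in its low 16 bits, and each Result BLOCK
 * starts with its own length (in 32-bit words) in the low 16 bits of its
 * first word. */
static int result_list_len(const u32 *res32)
{
	const u32 *restmp = res32 + 1;
	int len = 1;
	int i;

	for (i = 0; i < (int)(res32[0] & 0x0000FFFF); i++) {
		len += restmp[0] & 0x0000FFFF;	/* length of this BLOCK */
		restmp += restmp[0] & 0x0000FFFF;	/* skip to next BLOCK */
	}
	return len << 2;	/* bytes used by the result list */
}

/* two result blocks of 3 words each -> 1 + 3 + 3 = 7 words = 28 bytes */
static const u32 sample[] = { 2, 3, 0xaa, 0xbb, 3, 0xcc, 0xdd };
```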
/*
* Query one field group value or a whole scalar group.
*/
int i2o_parm_field_get(struct i2o_device *i2o_dev, int group, int field,
void *buf, int buflen)
{
u16 opblk[] = { 1, 0, I2O_PARAMS_FIELD_GET, group, 1, field };
u8 resblk[8 + buflen]; /* 8 bytes for header */
int size;
if (field == -1) /* whole group */
opblk[4] = -1;
size = i2o_parm_issue(i2o_dev, I2O_CMD_UTIL_PARAMS_GET, opblk,
sizeof(opblk), resblk, sizeof(resblk));
memcpy(buf, resblk + 8, buflen); /* cut off header */
if (size > buflen)
return buflen;
return size;
}
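The six-word operation block that i2o_parm_field_get() builds on its stack can be sketched in userspace as follows (hypothetical helper; the opcode value is an assumption standing in for the kernel's I2O_PARAMS_FIELD_GET constant):

```c
#include <assert.h>
#include <stdint.h>

typedef uint16_t u16;

/* Stand-in for the kernel's I2O_PARAMS_FIELD_GET opcode (assumed value). */
#define I2O_PARAMS_FIELD_GET 0x0001

/* Build the operation block sent for a UtilParamsGet: operation count,
 * pad, opcode, group number, field count and field index; field == -1
 * requests the whole scalar group. Hypothetical helper for illustration
 * only. */
static void build_field_get_opblk(u16 opblk[6], u16 group, int field)
{
	opblk[0] = 1;			/* operation count */
	opblk[1] = 0;			/* pad */
	opblk[2] = I2O_PARAMS_FIELD_GET;
	opblk[3] = group;
	opblk[4] = (field == -1) ? (u16)-1 : 1;	/* field count */
	opblk[5] = (u16)field;
}
```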
/*
* Set a scalar group value or a whole group.
*/
int i2o_parm_field_set(struct i2o_device *i2o_dev, int group, int field,
void *buf, int buflen)
{
u16 *opblk;
u8 resblk[8 + buflen]; /* 8 bytes for header */
int size;
opblk = kmalloc(buflen + 64, GFP_KERNEL);
if (opblk == NULL) {
printk(KERN_ERR "i2o: no memory for operation buffer.\n");
return -ENOMEM;
}
opblk[0] = 1; /* operation count */
opblk[1] = 0; /* pad */
opblk[2] = I2O_PARAMS_FIELD_SET;
opblk[3] = group;
if (field == -1) { /* whole group */
opblk[4] = -1;
memcpy(opblk + 5, buf, buflen);
} else { /* single field */
opblk[4] = 1;
opblk[5] = field;
memcpy(opblk + 6, buf, buflen);
}
size = i2o_parm_issue(i2o_dev, I2O_CMD_UTIL_PARAMS_SET, opblk,
12 + buflen, resblk, sizeof(resblk));
kfree(opblk);
if (size > buflen)
return buflen;
return size;
}
/*
* if oper == I2O_PARAMS_TABLE_GET, get from all rows
* if fieldcount == -1 return all fields
* ibuf and ibuflen are unused (use NULL, 0)
* else return specific fields
* ibuf contains fieldindexes
*
* if oper == I2O_PARAMS_LIST_GET, get from specific rows
* if fieldcount == -1 return all fields
* ibuf contains rowcount, keyvalues
* else return specific fields
* fieldcount is # of fieldindexes
* ibuf contains fieldindexes, rowcount, keyvalues
*
* You could also call i2o_parm_issue() directly.
*/
int i2o_parm_table_get(struct i2o_device *dev, int oper, int group,
int fieldcount, void *ibuf, int ibuflen, void *resblk,
int reslen)
{
u16 *opblk;
int size;
size = 10 + ibuflen;
if (size % 4)
size += 4 - size % 4;
opblk = kmalloc(size, GFP_KERNEL);
if (opblk == NULL) {
printk(KERN_ERR "i2o: no memory for query buffer.\n");
return -ENOMEM;
}
opblk[0] = 1; /* operation count */
opblk[1] = 0; /* pad */
opblk[2] = oper;
opblk[3] = group;
opblk[4] = fieldcount;
memcpy(opblk + 5, ibuf, ibuflen); /* other params */
size = i2o_parm_issue(dev, I2O_CMD_UTIL_PARAMS_GET, opblk,
size, resblk, reslen);
kfree(opblk);
if (size > reslen)
return reslen;
return size;
}
/**
* i2o_device_init - Initialize I2O devices
*
* Registers the I2O device class.
*
* Returns 0 on success or negative error code on failure.
*/
int i2o_device_init(void)
{
int rc;
rc = class_register(&i2o_device_class);
if (rc)
return rc;
return class_interface_register(&i2o_device_class_interface);
};
/**
* i2o_device_exit - I2O devices exit function
*
* Unregisters the I2O device class.
*/
void i2o_device_exit(void)
{
class_interface_unregister(&i2o_device_class_interface);
class_unregister(&i2o_device_class);
};
EXPORT_SYMBOL(i2o_device_claim);
EXPORT_SYMBOL(i2o_device_claim_release);
EXPORT_SYMBOL(i2o_parm_field_get);
EXPORT_SYMBOL(i2o_parm_field_set);
EXPORT_SYMBOL(i2o_parm_table_get);
EXPORT_SYMBOL(i2o_parm_issue);
/*
* Functions to handle I2O drivers (OSMs) and I2O bus type for sysfs
*
* Copyright (C) 2004 Markus Lidel <Markus.Lidel@shadowconnect.com>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
* Fixes/additions:
* Markus Lidel <Markus.Lidel@shadowconnect.com>
* initial version.
*/
#include <linux/device.h>
#include <linux/module.h>
#include <linux/rwsem.h>
#include <linux/i2o.h>
/* max_drivers - Maximum I2O drivers (OSMs) which could be registered */
unsigned int i2o_max_drivers = I2O_MAX_DRIVERS;
module_param_named(max_drivers, i2o_max_drivers, uint, 0);
MODULE_PARM_DESC(max_drivers, "maximum number of OSM's to support");
/* I2O drivers lock and array */
static spinlock_t i2o_drivers_lock = SPIN_LOCK_UNLOCKED;
static struct i2o_driver **i2o_drivers;
/**
* i2o_bus_match - Tell if the class id of an I2O device matches the class
* ids of an I2O driver (OSM)
*
* @dev: device which should be verified
* @drv: the driver to match against
*
* Used by the bus to check if the driver wants to handle the device.
*
* Returns 1 if the class ids of the driver match the class id of the
* device, otherwise 0.
*/
static int i2o_bus_match(struct device *dev, struct device_driver *drv)
{
struct i2o_device *i2o_dev = to_i2o_device(dev);
struct i2o_driver *i2o_drv = to_i2o_driver(drv);
struct i2o_class_id *ids = i2o_drv->classes;
if (ids)
while (ids->class_id != I2O_CLASS_END) {
if (ids->class_id == i2o_dev->lct_data.class_id)
return 1;
ids++;
}
return 0;
};
/* I2O bus type */
struct bus_type i2o_bus_type = {
.name = "i2o",
.match = i2o_bus_match,
};
/**
* i2o_driver_register - Register an I2O driver (OSM) in the I2O core
* @drv: I2O driver which should be registered
*
* Registers the OSM drv in the I2O core and creates an event queue if
* necessary.
*
* Returns 0 on success or negative error code on failure.
*/
int i2o_driver_register(struct i2o_driver *drv)
{
int i;
int rc = 0;
unsigned long flags;
pr_debug("Register driver %s\n", drv->name);
if (drv->event) {
drv->event_queue = create_workqueue(drv->name);
if (!drv->event_queue) {
printk(KERN_ERR "i2o: Could not initialize event queue "
"for driver %s\n", drv->name);
return -EFAULT;
}
pr_debug("Event queue initialized for driver %s\n", drv->name);
} else
drv->event_queue = NULL;
drv->driver.name = drv->name;
drv->driver.bus = &i2o_bus_type;
spin_lock_irqsave(&i2o_drivers_lock, flags);
for (i = 0; i2o_drivers[i]; i++)
if (i >= i2o_max_drivers) {
printk(KERN_ERR "i2o: too many drivers registered, "
"increase max_drivers\n");
spin_unlock_irqrestore(&i2o_drivers_lock, flags);
return -EFAULT;
}
drv->context = i;
i2o_drivers[i] = drv;
spin_unlock_irqrestore(&i2o_drivers_lock, flags);
pr_debug("driver %s gets context id %d\n", drv->name, drv->context);
rc = driver_register(&drv->driver);
if (rc)
destroy_workqueue(drv->event_queue);
return rc;
};
/**
* i2o_driver_unregister - Unregister an I2O driver (OSM) from the I2O core
* @drv: I2O driver which should be unregistered
*
* Unregisters the OSM drv from the I2O core and cleans up the event queue
* if necessary.
*/
void i2o_driver_unregister(struct i2o_driver *drv)
{
unsigned long flags;
pr_debug("unregister driver %s\n", drv->name);
driver_unregister(&drv->driver);
spin_lock_irqsave(&i2o_drivers_lock, flags);
i2o_drivers[drv->context] = NULL;
spin_unlock_irqrestore(&i2o_drivers_lock, flags);
if (drv->event_queue) {
destroy_workqueue(drv->event_queue);
drv->event_queue = NULL;
pr_debug("event queue removed for %s\n", drv->name);
}
};
/**
* i2o_driver_dispatch - dispatch an I2O reply message
* @c: I2O controller of the message
* @m: I2O message number
* @msg: I2O message to be delivered
*
* The reply is delivered to the driver from which the original message
* was sent. This function is only called from interrupt context.
*
* Returns 0 on success and the message should not be flushed. Returns > 0
* on success and if the message should be flushed afterwards. Returns
* negative error code on failure (the message will be flushed too).
*/
int i2o_driver_dispatch(struct i2o_controller *c, u32 m,
struct i2o_message *msg)
{
struct i2o_driver *drv;
u32 context = readl(&msg->u.s.icntxt);
if (likely(context < i2o_max_drivers)) {
spin_lock(&i2o_drivers_lock);
drv = i2o_drivers[context];
spin_unlock(&i2o_drivers_lock);
if (unlikely(!drv)) {
printk(KERN_WARNING "i2o: Spurious reply to unknown "
"driver %d\n", context);
return -EIO;
}
if ((readl(&msg->u.head[1]) >> 24) == I2O_CMD_UTIL_EVT_REGISTER) {
struct i2o_device *dev, *tmp;
struct i2o_event *evt;
u16 size;
u16 tid;
tid = readl(&msg->u.head[1]) & 0x1fff;
pr_debug("%s: event received from device %d\n", c->name,
tid);
/* cut off the header from the message size (in 32-bit words) */
size = (readl(&msg->u.head[0]) >> 16) - 5;
evt = kmalloc(size * 4 + sizeof(*evt), GFP_ATOMIC);
if (!evt)
return -ENOMEM;
memset(evt, 0, size * 4 + sizeof(*evt));
evt->size = size;
memcpy_fromio(&evt->tcntxt, &msg->u.s.tcntxt,
(size + 2) * 4);
list_for_each_entry_safe(dev, tmp, &c->devices, list)
if (dev->lct_data.tid == tid) {
evt->i2o_dev = dev;
break;
}
INIT_WORK(&evt->work, (void (*)(void *))drv->event,
evt);
queue_work(drv->event_queue, &evt->work);
return 1;
}
if (likely(drv->reply))
return drv->reply(c, m, msg);
else
pr_debug("%s: Reply to driver %s, but no reply function"
" defined!\n", c->name, drv->name);
return -EIO;
} else
printk(KERN_WARNING "i2o: Spurious reply to unknown driver "
"%d\n", readl(&msg->u.s.icntxt));
return -EIO;
}
/**
* i2o_driver_init - initialize I2O drivers (OSMs)
*
* Registers the I2O bus and allocates memory for the array of OSMs.
*
* Returns 0 on success or negative error code on failure.
*/
int __init i2o_driver_init(void)
{
int rc = 0;
if ((i2o_max_drivers < 2) || (i2o_max_drivers > 64) ||
((i2o_max_drivers ^ (i2o_max_drivers - 1)) !=
(2 * i2o_max_drivers - 1))) {
printk(KERN_WARNING "i2o: max_drivers set to %d, but must be "
">=2 and <= 64 and a power of 2\n", i2o_max_drivers);
i2o_max_drivers = I2O_MAX_DRIVERS;
}
printk(KERN_INFO "i2o: max_drivers=%d\n", i2o_max_drivers);
i2o_drivers =
kmalloc(i2o_max_drivers * sizeof(*i2o_drivers), GFP_KERNEL);
if (!i2o_drivers)
return -ENOMEM;
memset(i2o_drivers, 0, i2o_max_drivers * sizeof(*i2o_drivers));
rc = bus_register(&i2o_bus_type);
if (rc < 0)
kfree(i2o_drivers);
return rc;
};
/**
* i2o_driver_exit - clean up I2O drivers (OSMs)
*
* Unregisters the I2O bus and frees the driver array.
*/
void __exit i2o_driver_exit(void)
{
bus_unregister(&i2o_bus_type);
kfree(i2o_drivers);
};
EXPORT_SYMBOL(i2o_driver_register);
EXPORT_SYMBOL(i2o_driver_unregister);
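The max_drivers sanity check in i2o_driver_init() relies on a bit-twiddling identity that is worth spelling out: for a power of two n, n ^ (n - 1) sets every bit up to and including n's highest bit, which equals 2 * n - 1; any other value fails the identity. A userspace sketch (hypothetical helper name):

```c
#include <assert.h>

/* Mirror of the range and power-of-two test applied to max_drivers in
 * i2o_driver_init(). Hypothetical helper for illustration only. */
static int max_drivers_valid(unsigned int n)
{
	return n >= 2 && n <= 64 && (n ^ (n - 1)) == 2 * n - 1;
}
```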
/*
* Executive OSM
*
* Copyright (C) 1999-2002 Red Hat Software
*
* Written by Alan Cox, Building Number Three Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
* A lot of the I2O message side code from this is taken from the Red
* Creek RCPCI45 adapter driver by Red Creek Communications
*
* Fixes/additions:
* Philipp Rumpf
* Juha Sievänen <Juha.Sievanen@cs.Helsinki.FI>
* Auvo Häkkinen <Auvo.Hakkinen@cs.Helsinki.FI>
* Deepak Saxena <deepak@plexity.net>
* Boji T Kannanthanam <boji.t.kannanthanam@intel.com>
* Alan Cox <alan@redhat.com>:
* Ported to Linux 2.5.
* Markus Lidel <Markus.Lidel@shadowconnect.com>:
* Minor fixes for 2.6.
* Markus Lidel <Markus.Lidel@shadowconnect.com>:
* Support for sysfs included.
*/
#include <linux/module.h>
#include <linux/i2o.h>
struct i2o_driver i2o_exec_driver;
/* Module internal functions from other sources */
extern int i2o_device_parse_lct(struct i2o_controller *);
/* global wait list for POST WAIT */
static LIST_HEAD(i2o_exec_wait_list);
/* Wait struct needed for POST WAIT */
struct i2o_exec_wait {
wait_queue_head_t *wq; /* Pointer to Wait queue */
struct i2o_dma dma; /* DMA buffers to free on failure */
u32 tcntxt; /* transaction context from reply */
int complete; /* 1 if reply received otherwise 0 */
u32 m; /* message id */
struct i2o_message *msg; /* pointer to the reply message */
struct list_head list; /* node in global wait list */
};
/* Exec OSM class handling definition */
static struct i2o_class_id i2o_exec_class_id[] = {
{I2O_CLASS_EXECUTIVE},
{I2O_CLASS_END}
};
/**
* i2o_exec_wait_alloc - Allocate an i2o_exec_wait struct and initialize it
*
* Allocate the i2o_exec_wait struct and initialize the wait.
*
* Returns i2o_exec_wait pointer on success or negative error code on
* failure.
*/
static struct i2o_exec_wait *i2o_exec_wait_alloc(void)
{
struct i2o_exec_wait *wait;
wait = kmalloc(sizeof(*wait), GFP_KERNEL);
if (!wait)
return ERR_PTR(-ENOMEM);
memset(wait, 0, sizeof(*wait));
INIT_LIST_HEAD(&wait->list);
return wait;
};
/**
* i2o_exec_wait_free - Free a i2o_exec_wait struct
* @wait: I2O wait data which should be cleaned up
*/
static void i2o_exec_wait_free(struct i2o_exec_wait *wait)
{
kfree(wait);
};
/**
* i2o_msg_post_wait_mem - Post and wait a message with DMA buffers
* @c: controller
* @m: message to post
* @timeout: time in seconds to wait
* @dma: i2o_dma struct of the DMA buffer to free on failure
*
* This API allows an OSM to post a message and then be told whether or
* not the system received a successful reply. If the message times out
* then the value '-ETIMEDOUT' is returned. This is a special case. In
* this situation the message may (should) complete at an indefinite time
* in the future. When it completes it will use the memory buffer
* attached to the request. If -ETIMEDOUT is returned then the memory
* buffer must not be freed. Instead the event completion will free them
* for you. In all other cases the buffers are your problem.
*
* Returns 0 on success or negative error code on failure.
*/
int i2o_msg_post_wait_mem(struct i2o_controller *c, u32 m, unsigned long
timeout, struct i2o_dma *dma)
{
DECLARE_WAIT_QUEUE_HEAD(wq);
DEFINE_WAIT(wait);
struct i2o_exec_wait *iwait;
static u32 tcntxt = 0x80000000;
struct i2o_message *msg = c->in_queue.virt + m;
int rc = 0;
iwait = i2o_exec_wait_alloc();
if (IS_ERR(iwait))
return PTR_ERR(iwait);
if (tcntxt == 0xffffffff)
tcntxt = 0x80000000;
if (dma)
iwait->dma = *dma;
/*
* Fill in the message initiator context and transaction context.
* We will only use transaction contexts >= 0x80000000 for POST WAIT,
* so we could find a POST WAIT reply easier in the reply handler.
*/
writel(i2o_exec_driver.context, &msg->u.s.icntxt);
iwait->tcntxt = tcntxt++;
writel(iwait->tcntxt, &msg->u.s.tcntxt);
/*
* Post the message to the controller. At some point later it will
* return. If we time out before it returns then complete will be zero.
*/
i2o_msg_post(c, m);
if (!iwait->complete) {
iwait->wq = &wq;
/*
* we add elements at the head, because if an entry in the list
* is never removed, we would have to iterate over it every time
*/
list_add(&iwait->list, &i2o_exec_wait_list);
prepare_to_wait(&wq, &wait, TASK_INTERRUPTIBLE);
if (!iwait->complete)
schedule_timeout(timeout * HZ);
finish_wait(&wq, &wait);
iwait->wq = NULL;
}
barrier();
if (iwait->complete) {
if (readl(&iwait->msg->body[0]) >> 24)
rc = readl(&iwait->msg->body[0]) & 0xff;
i2o_flush_reply(c, iwait->m);
i2o_exec_wait_free(iwait);
} else {
/*
* We cannot remove it now. This is important. When it does
* terminate (which it must do if the controller has not
* died...) then it will otherwise scribble on stuff.
*
* FIXME: try abort message
*/
if (dma)
dma->virt = NULL;
rc = -ETIMEDOUT;
}
return rc;
};
/**
* i2o_msg_post_wait_complete - Reply to a i2o_msg_post request from IOP
* @c: I2O controller which answers
* @m: message id
* @msg: pointer to the I2O reply message
*
* This function is called in interrupt context only. If the reply arrives
* before the timeout, the i2o_exec_wait struct is filled with the message
* and the task will be woken up. The task is now responsible for returning
* the message m back to the controller! If the message reaches us after
* the timeout, the i2o_exec_wait struct (including the allocated DMA
* buffer) is cleaned up.
*
* Return 0 on success and if the message m should not be given back to the
* I2O controller, or >0 on success and if the message should be given back
* afterwards. Returns negative error code on failure. In this case the
* message must also be given back to the controller.
*/
static int i2o_msg_post_wait_complete(struct i2o_controller *c, u32 m,
struct i2o_message *msg)
{
struct i2o_exec_wait *wait, *tmp;
static spinlock_t lock = SPIN_LOCK_UNLOCKED;
int rc = 1;
u32 context;
context = readl(&msg->u.s.tcntxt);
/*
* We need to search through the i2o_exec_wait_list to see if the given
* message is still outstanding. If not, it means that the IOP took
* longer to respond to the message than we had allowed and timer has
* already expired. Not much we can do about that except log it for
* debug purposes, increase timeout, and recompile.
*/
spin_lock(&lock);
list_for_each_entry_safe(wait, tmp, &i2o_exec_wait_list, list) {
if (wait->tcntxt == context) {
list_del(&wait->list);
wait->m = m;
wait->msg = msg;
wait->complete = 1;
barrier();
if (wait->wq) {
wake_up_interruptible(wait->wq);
rc = 0;
} else {
struct device *dev;
dev = &c->pdev->dev;
pr_debug("timedout reply received!\n");
i2o_dma_free(dev, &wait->dma);
i2o_exec_wait_free(wait);
rc = -1;
}
spin_unlock(&lock);
return rc;
}
}
spin_unlock(&lock);
pr_debug("i2o: Bogus reply in POST WAIT (tr-context: %08x)!\n",
context);
return -1;
};
/**
* i2o_exec_probe - Called if a new I2O device (executive class) appears
* @dev: I2O device which should be probed
*
* Registers event notification for every event from the Executive device.
* The return value is always 0, because we want all devices of class
* Executive.
*
* Returns 0 on success.
*/
static int i2o_exec_probe(struct device *dev)
{
struct i2o_device *i2o_dev = to_i2o_device(dev);
i2o_event_register(i2o_dev, &i2o_exec_driver, 0, 0xffffffff);
i2o_dev->iop->exec = i2o_dev;
return 0;
};
/**
* i2o_exec_remove - Called on I2O device removal
* @dev: I2O device which was removed
*
* Unregisters event notification from Executive I2O device.
*
* Returns 0 on success.
*/
static int i2o_exec_remove(struct device *dev)
{
i2o_event_register(to_i2o_device(dev), &i2o_exec_driver, 0, 0);
return 0;
};
/**
* i2o_exec_lct_modified - Called on LCT NOTIFY reply
* @c: I2O controller on which the LCT has been modified
*
* This function handles asynchronous LCT NOTIFY replies. It parses the
* new LCT and, if the buffer for the LCT was too small, sends an LCT
* NOTIFY again.
*/
static void i2o_exec_lct_modified(struct i2o_controller *c)
{
if (i2o_device_parse_lct(c) == -EAGAIN)
i2o_exec_lct_notify(c, 0);
};
/**
* i2o_exec_reply - I2O Executive reply handler
* @c: I2O controller from which the reply comes
* @m: message id
* @msg: pointer to the I2O reply message
*
* This function is always called from interrupt context. If a POST WAIT
* reply was received, pass it to the complete function. If an LCT NOTIFY
* reply was received, a new event is created to handle the update.
*
* Returns 0 on success and if the reply should not be flushed or > 0
* on success and if the reply should be flushed. Returns negative error
* code on failure and if the reply should be flushed.
*/
static int i2o_exec_reply(struct i2o_controller *c, u32 m,
struct i2o_message *msg)
{
if (readl(&msg->u.head[0]) & MSG_FAIL) { // Fail bit is set
struct i2o_message *pmsg; /* preserved message */
u32 pm;
pm = readl(&msg->body[3]);
pmsg = c->in_queue.virt + pm;
i2o_report_status(KERN_INFO, "i2o_core", msg);
/* Release the preserved msg by resubmitting it as a NOP */
i2o_msg_nop(c, pm);
/* If reply to i2o_post_wait failed, return causes a timeout */
return -1;
}
if (readl(&msg->u.s.tcntxt) & 0x80000000)
return i2o_msg_post_wait_complete(c, m, msg);
if ((readl(&msg->u.head[1]) >> 24) == I2O_CMD_LCT_NOTIFY) {
struct work_struct *work;
pr_debug("%s: LCT notify received\n", c->name);
work = kmalloc(sizeof(*work), GFP_ATOMIC);
if (!work)
return -ENOMEM;
INIT_WORK(work, (void (*)(void *))i2o_exec_lct_modified, c);
queue_work(i2o_exec_driver.event_queue, work);
return 1;
}
/*
* If this happens, we want to dump the message to the syslog so
* it can be sent back to the card manufacturer by the end user
* to aid in debugging.
*
*/
printk(KERN_WARNING "%s: Unsolicited message reply sent to core! "
"Message dumped to syslog\n", c->name);
i2o_dump_message(msg);
return -EFAULT;
}
/**
* i2o_exec_event - Event handling function
* @evt: Event which occurs
*
* Handles events sent by the Executive device. At the moment it does not
* do anything useful.
*/
static void i2o_exec_event(struct i2o_event *evt)
{
printk(KERN_INFO "Event received from device: %d\n",
evt->i2o_dev->lct_data.tid);
kfree(evt);
};
/**
* i2o_exec_lct_get - Get the IOP's Logical Configuration Table
* @c: I2O controller from which the LCT should be fetched
*
* Send an LCT NOTIFY request to the controller, and wait up to
* I2O_TIMEOUT_LCT_GET seconds for the response to arrive. If the LCT is
* too large, retry it.
*
* Returns 0 on success or negative error code on failure.
*/
int i2o_exec_lct_get(struct i2o_controller *c)
{
struct i2o_message *msg;
u32 m;
int i = 0;
int rc = -EAGAIN;
for (i = 1; i <= I2O_LCT_GET_TRIES; i++) {
m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
if (m == I2O_QUEUE_EMPTY)
return -ETIMEDOUT;
writel(EIGHT_WORD_MSG_SIZE | SGL_OFFSET_6, &msg->u.head[0]);
writel(I2O_CMD_LCT_NOTIFY << 24 | HOST_TID << 12 | ADAPTER_TID,
&msg->u.head[1]);
writel(0xffffffff, &msg->body[0]);
writel(0x00000000, &msg->body[1]);
writel(0xd0000000 | c->dlct.len, &msg->body[2]);
writel(c->dlct.phys, &msg->body[3]);
rc = i2o_msg_post_wait(c, m, I2O_TIMEOUT_LCT_GET);
if (rc < 0)
break;
rc = i2o_device_parse_lct(c);
if (rc != -EAGAIN)
break;
}
return rc;
}
/**
* i2o_exec_lct_notify - Send an asynchronous LCT NOTIFY request
* @c: I2O controller to which the request should be sent
* @change_ind: change indicator
*
* This function sends an LCT NOTIFY request to the I2O controller with
* the change indicator change_ind. If change_ind == 0 the controller
* replies immediately after the request. If change_ind > 0 the reply is
* sent after the change indicator of the LCT becomes greater than
* change_ind.
*/
int i2o_exec_lct_notify(struct i2o_controller *c, u32 change_ind)
{
i2o_status_block *sb = c->status_block.virt;
struct device *dev;
struct i2o_message *msg;
u32 m;
dev = &c->pdev->dev;
if (i2o_dma_realloc(dev, &c->dlct, sb->expected_lct_size, GFP_KERNEL))
return -ENOMEM;
m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
if (m == I2O_QUEUE_EMPTY)
return -ETIMEDOUT;
writel(EIGHT_WORD_MSG_SIZE | SGL_OFFSET_6, &msg->u.head[0]);
writel(I2O_CMD_LCT_NOTIFY << 24 | HOST_TID << 12 | ADAPTER_TID,
&msg->u.head[1]);
writel(i2o_exec_driver.context, &msg->u.s.icntxt);
writel(0, &msg->u.s.tcntxt); /* FIXME */
writel(0xffffffff, &msg->body[0]);
writel(change_ind, &msg->body[1]);
writel(0xd0000000 | c->dlct.len, &msg->body[2]);
writel(c->dlct.phys, &msg->body[3]);
i2o_msg_post(c, m);
return 0;
};
/* Exec OSM driver struct */
struct i2o_driver i2o_exec_driver = {
.name = "exec-osm",
.reply = i2o_exec_reply,
.event = i2o_exec_event,
.classes = i2o_exec_class_id,
.driver = {
.probe = i2o_exec_probe,
.remove = i2o_exec_remove,
},
};
/**
* i2o_exec_init - Registers the Exec OSM
*
* Registers the Exec OSM in the I2O core.
*
* Returns 0 on success or negative error code on failure.
*/
int __init i2o_exec_init(void)
{
return i2o_driver_register(&i2o_exec_driver);
};
/**
* i2o_exec_exit - Removes the Exec OSM
*
* Unregisters the Exec OSM from the I2O core.
*/
void __exit i2o_exec_exit(void)
{
i2o_driver_unregister(&i2o_exec_driver);
};
EXPORT_SYMBOL(i2o_msg_post_wait_mem);
EXPORT_SYMBOL(i2o_exec_lct_get);
EXPORT_SYMBOL(i2o_exec_lct_notify);
/*
 * Block OSM
 *
 * Copyright (C) 1999-2002 Red Hat Software
 *
 * Written by Alan Cox, Building Number Three Ltd
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License as published by the
 * Free Software Foundation; either version 2 of the License, or (at your
 * option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 *
 * For the purpose of avoiding doubt the preferred form of the work
 * for making modifications shall be a standards compliant form such
 * gzipped tar and not one requiring a proprietary or patent encumbered
 * tool to unpack.
 *
 * Fixes/additions:
 *	Steve Ralston:
 *		Multiple device handling error fixes,
 *		Added a queue depth.
 *	Alan Cox:
 *		FC920 has an rmw bug. Dont or in the end marker.
 *		Removed queue walk, fixed for 64bitness.
 *		Rewrote much of the code over time
 *		Added indirect block lists
 *		Handle 64K limits on many controllers
 *		Don't use indirects on the Promise (breaks)
 *		Heavily chop down the queue depths
 *	Deepak Saxena:
 *		Independent queues per IOP
 *		Support for dynamic device creation/deletion
 *		Code cleanup
 *		Support for larger I/Os through merge* functions
 *		(taken from DAC960 driver)
 *	Boji T Kannanthanam:
 *		Set the I2O Block devices to be detected in increasing
 *		order of TIDs during boot.
 *		Search and set the I2O block device that we boot off
 *		from as the first device to be claimed (as /dev/i2o/hda)
 *		Properly attach/detach I2O gendisk structure from the
 *		system gendisk list. The I2O block devices now appear in
 *		/proc/partitions.
 *	Markus Lidel <Markus.Lidel@shadowconnect.com>:
 *		Minor bugfixes for 2.6.
 */
#include <linux/module.h>
#include <linux/i2o.h>

#include <linux/mempool.h>

#include <linux/genhd.h>
#include <linux/blkdev.h>
#include <linux/hdreg.h>

#include "i2o_block.h"

static struct i2o_driver i2o_block_driver;

/* global Block OSM request mempool */
static struct i2o_block_mempool i2o_blk_req_pool;
/* Block OSM class handling definition */
static struct i2o_class_id i2o_block_class_id[] = {
	{I2O_CLASS_RANDOM_BLOCK_STORAGE},
	{I2O_CLASS_END}
};

/**
 * i2o_block_device_free - free the memory of the I2O Block device
 * @dev: I2O Block device, which should be cleaned up
 *
 * Frees the request queue, gendisk and the i2o_block_device structure.
 */
static void i2o_block_device_free(struct i2o_block_device *dev)
{
	blk_cleanup_queue(dev->gd->queue);

	put_disk(dev->gd);

	kfree(dev);
};
/**
 * i2o_block_remove - remove the I2O Block device from the system again
 * @dev: I2O Block device which should be removed
 *
 * Remove gendisk from system and free all allocated memory.
 *
 * Always returns 0.
 */
static int i2o_block_remove(struct device *dev)
{
	struct i2o_device *i2o_dev = to_i2o_device(dev);
	struct i2o_block_device *i2o_blk_dev = dev_get_drvdata(dev);

	printk(KERN_INFO "block-osm: Device removed %s\n",
	       i2o_blk_dev->gd->disk_name);

	i2o_event_register(i2o_dev, &i2o_block_driver, 0, 0);

	del_gendisk(i2o_blk_dev->gd);

	dev_set_drvdata(dev, NULL);

	i2o_device_claim_release(i2o_dev);

	i2o_block_device_free(i2o_blk_dev);

	return 0;
};
/**
 * i2o_block_device_flush - Flush all dirty data of I2O device dev
 * @dev: I2O device which should be flushed
 *
 * Flushes all dirty data on device dev.
 *
 * Returns 0 on success or negative error code on failure.
 */
static int i2o_block_device_flush(struct i2o_device *dev)
{
	struct i2o_message *msg;
	u32 m;

	m = i2o_msg_get_wait(dev->iop, &msg, I2O_TIMEOUT_MESSAGE_GET);
	if (m == I2O_QUEUE_EMPTY)
		return -ETIMEDOUT;

	writel(FIVE_WORD_MSG_SIZE | SGL_OFFSET_0, &msg->u.head[0]);
	writel(I2O_CMD_BLOCK_CFLUSH << 24 | HOST_TID << 12 | dev->lct_data.tid,
	       &msg->u.head[1]);
	writel(60 << 16, &msg->body[0]);
	pr_debug("Flushing...\n");

	return i2o_msg_post_wait(dev->iop, m, 60);
};
/**
 * i2o_block_device_mount - Mount (load) the media of device dev
 * @dev: I2O device which should receive the mount request
 * @media_id: Media Identifier
 *
 * Load a media into drive. Identifier should be set to -1, because the
 * spec does not support any other value.
 *
 * Returns 0 on success or negative error code on failure.
 */
static int i2o_block_device_mount(struct i2o_device *dev, u32 media_id)
{
	struct i2o_message *msg;
	u32 m;

	m = i2o_msg_get_wait(dev->iop, &msg, I2O_TIMEOUT_MESSAGE_GET);
	if (m == I2O_QUEUE_EMPTY)
		return -ETIMEDOUT;

	writel(FIVE_WORD_MSG_SIZE | SGL_OFFSET_0, &msg->u.head[0]);
	writel(I2O_CMD_BLOCK_MMOUNT << 24 | HOST_TID << 12 | dev->lct_data.tid,
	       &msg->u.head[1]);
	writel(-1, &msg->body[0]);
	writel(0, &msg->body[1]);
	pr_debug("Mounting...\n");

	return i2o_msg_post_wait(dev->iop, m, 2);
};
/**
 * i2o_block_device_lock - Locks the media of device dev
 * @dev: I2O device which should receive the lock request
 * @media_id: Media Identifier
 *
 * Lock media of device dev to prevent removal. The media identifier
 * should be set to -1, because the spec does not support any other value.
 *
 * Returns 0 on success or negative error code on failure.
 */
static int i2o_block_device_lock(struct i2o_device *dev, u32 media_id)
{
	struct i2o_message *msg;
	u32 m;

	m = i2o_msg_get_wait(dev->iop, &msg, I2O_TIMEOUT_MESSAGE_GET);
	if (m == I2O_QUEUE_EMPTY)
		return -ETIMEDOUT;

	writel(FIVE_WORD_MSG_SIZE | SGL_OFFSET_0, &msg->u.head[0]);
	writel(I2O_CMD_BLOCK_MLOCK << 24 | HOST_TID << 12 | dev->lct_data.tid,
	       &msg->u.head[1]);
	writel(-1, &msg->body[0]);
	pr_debug("Locking...\n");

	return i2o_msg_post_wait(dev->iop, m, 2);
};
/**
 * i2o_block_device_unlock - Unlocks the media of device dev
 * @dev: I2O device which should receive the unlocked request
 * @media_id: Media Identifier
 *
 * Unlocks the media in device dev. The media identifier should be set to
 * -1, because the spec does not support any other value.
 *
 * Returns 0 on success or negative error code on failure.
 */
static int i2o_block_device_unlock(struct i2o_device *dev, u32 media_id)
{
	struct i2o_message *msg;
	u32 m;

	m = i2o_msg_get_wait(dev->iop, &msg, I2O_TIMEOUT_MESSAGE_GET);
	if (m == I2O_QUEUE_EMPTY)
		return -ETIMEDOUT;

	writel(FIVE_WORD_MSG_SIZE | SGL_OFFSET_0, &msg->u.head[0]);
	writel(I2O_CMD_BLOCK_MUNLOCK << 24 | HOST_TID << 12 | dev->lct_data.tid,
	       &msg->u.head[1]);
	writel(media_id, &msg->body[0]);
	pr_debug("Unlocking...\n");

	return i2o_msg_post_wait(dev->iop, m, 2);
};
/**
 * i2o_block_device_power - Power management for device dev
 * @dev: I2O device which should receive the power management request
 * @op: Operation which should be sent
 *
 * Send a power management request to the device dev.
 *
 * Returns 0 on success or negative error code on failure.
 */
static int i2o_block_device_power(struct i2o_block_device *dev, u8 op)
{
	struct i2o_device *i2o_dev = dev->i2o_dev;
	struct i2o_controller *c = i2o_dev->iop;
	struct i2o_message *msg;
	u32 m;
	int rc;

	m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
	if (m == I2O_QUEUE_EMPTY)
		return -ETIMEDOUT;

	writel(FOUR_WORD_MSG_SIZE | SGL_OFFSET_0, &msg->u.head[0]);
	writel(I2O_CMD_BLOCK_POWER << 24 | HOST_TID << 12 |
	       i2o_dev->lct_data.tid, &msg->u.head[1]);
	writel(op << 24, &msg->body[0]);
	pr_debug("Power...\n");

	rc = i2o_msg_post_wait(c, m, 60);
	if (!rc)
		dev->power = op;

	return rc;
};
/**
 * i2o_block_request_alloc - Allocate an I2O block request struct
 *
 * Allocates an I2O block request struct and initialize the list.
 *
 * Returns a i2o_block_request pointer on success or negative error code
 * on failure.
 */
static inline struct i2o_block_request *i2o_block_request_alloc(void)
{
	struct i2o_block_request *ireq;

	ireq = mempool_alloc(i2o_blk_req_pool.pool, GFP_ATOMIC);
	if (!ireq)
		return ERR_PTR(-ENOMEM);

	INIT_LIST_HEAD(&ireq->queue);

	return ireq;
};
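On failure, the allocator above returns `ERR_PTR(-ENOMEM)` rather than NULL, so one pointer return carries either a valid object or an errno. A userspace re-creation of that kernel idiom (the helper names and the MAX_ERRNO cutoff mirror the kernel convention, but this is an illustration, not the kernel's implementation):

```c
#include <errno.h>
#include <stdlib.h>

/* errno values fit in the top, never-mapped page of the address space,
 * so a pointer in that range is recognizable as an encoded error */
#define MAX_ERRNO 4095

static void *err_ptr(long error)
{
	return (void *)error;
}

static long ptr_err(const void *ptr)
{
	return (long)ptr;
}

static int is_err(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* toy allocator in the style of i2o_block_request_alloc() */
static void *alloc_or_err(int fail)
{
	return fail ? err_ptr(-ENOMEM) : malloc(16);
}
```

Callers then test with `is_err()` before touching the object, exactly as i2o_block_prep_req_fn() does with `IS_ERR(ireq)`.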
/**
 * i2o_block_request_free - Frees a I2O block request
 * @ireq: I2O block request which should be freed
 *
 * Frees the allocated memory (give it back to the request mempool).
 */
static inline void i2o_block_request_free(struct i2o_block_request *ireq)
{
	mempool_free(ireq, i2o_blk_req_pool.pool);
};
/**
 * i2o_block_sglist_alloc - Allocate the SG list and map it
 * @ireq: I2O block request
 *
 * Builds the SG list and maps it so it is accessible by the controller.
 *
 * Returns the number of elements in the SG list or 0 on failure.
 */
static inline int i2o_block_sglist_alloc(struct i2o_block_request *ireq)
{
	struct device *dev = &ireq->i2o_blk_dev->i2o_dev->iop->pdev->dev;
	int nents;

	nents = blk_rq_map_sg(ireq->req->q, ireq->req, ireq->sg_table);

	if (rq_data_dir(ireq->req) == READ)
		ireq->sg_dma_direction = PCI_DMA_FROMDEVICE;
	else
		ireq->sg_dma_direction = PCI_DMA_TODEVICE;

	ireq->sg_nents = dma_map_sg(dev, ireq->sg_table, nents,
				    ireq->sg_dma_direction);

	return ireq->sg_nents;
};
/**
 * i2o_block_sglist_free - Frees the SG list
 * @ireq: I2O block request from which the SG should be freed
 *
 * Frees the SG list from the I2O block request.
 */
static inline void i2o_block_sglist_free(struct i2o_block_request *ireq)
{
	struct device *dev = &ireq->i2o_blk_dev->i2o_dev->iop->pdev->dev;

	dma_unmap_sg(dev, ireq->sg_table, ireq->sg_nents,
		     ireq->sg_dma_direction);
};
/**
 * i2o_block_prep_req_fn - Allocates I2O block device specific struct
 * @q: request queue for the request
 * @req: the request to prepare
 *
 * Allocate the necessary i2o_block_request struct and connect it to
 * the request. This is needed so that we do not lose the SG list later on.
 *
 * Returns BLKPREP_OK on success or BLKPREP_DEFER on failure.
 */
static int i2o_block_prep_req_fn(struct request_queue *q, struct request *req)
{
	struct i2o_block_device *i2o_blk_dev = q->queuedata;
	struct i2o_block_request *ireq;

	/* request is already processed by us, so return */
	if (req->flags & REQ_SPECIAL) {
		pr_debug("REQ_SPECIAL already set!\n");
		req->flags |= REQ_DONTPREP;
		return BLKPREP_OK;
	}

	/* connect the i2o_block_request to the request */
	if (!req->special) {
		ireq = i2o_block_request_alloc();
		if (unlikely(IS_ERR(ireq))) {
			pr_debug("unable to allocate i2o_block_request!\n");
			return BLKPREP_DEFER;
		}

		ireq->i2o_blk_dev = i2o_blk_dev;
		req->special = ireq;
		ireq->req = req;
	} else
		ireq = req->special;

	/* do not come back here */
	req->flags |= REQ_DONTPREP | REQ_SPECIAL;

	return BLKPREP_OK;
};
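The prepare function uses two flag bits to make preparation idempotent: the first pass allocates per-request state and sets a marker, later passes see the marker and skip the work. A userspace sketch of that pattern (the flag values here are arbitrary placeholders, not the block layer's REQ_* bits):

```c
#include <stdint.h>

#define F_SPECIAL  (1u << 0)	/* per-request state already attached */
#define F_DONTPREP (1u << 1)	/* skip further prepare passes */

static int prep_request(uint32_t *flags, int *alloc_count)
{
	if (*flags & F_SPECIAL) {
		/* already prepared earlier: just renew the no-prep mark */
		*flags |= F_DONTPREP;
		return 0;
	}

	++*alloc_count;		/* stands in for the one-time allocation */
	*flags |= F_DONTPREP | F_SPECIAL;
	return 0;
}
```

However often the queue re-runs the prepare step, the allocation happens exactly once.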
/**
 * i2o_block_delayed_request_fn - delayed request queue function
 * @delayed_request: the delayed request with the queue to start
 *
 * If the request queue is stopped for a disk, and there is no open
 * request, a new event is created, which calls this function to start
 * the queue after I2O_BLOCK_REQUEST_TIME. Otherwise the queue will never
 * be started again.
 */
static void i2o_block_delayed_request_fn(void *delayed_request)
{
	struct i2o_block_delayed_request *dreq = delayed_request;
	struct request_queue *q = dreq->queue;
	unsigned long flags;

	spin_lock_irqsave(q->queue_lock, flags);
	blk_start_queue(q);
	spin_unlock_irqrestore(q->queue_lock, flags);
	kfree(dreq);
};
/**
 * i2o_block_reply - Block OSM reply handler.
 * @c: I2O controller from which the message arrives
 * @m: message id of reply
 * @msg: the actual I2O message reply
 *
 * This function gets all the message replies.
 */
static int i2o_block_reply(struct i2o_controller *c, u32 m,
			   struct i2o_message *msg)
{
	struct i2o_block_request *ireq;
	struct request *req;
	struct i2o_block_device *dev;
	struct request_queue *q;
	u8 st;
	unsigned long flags;

	/* FAILed message */
	if (unlikely(readl(&msg->u.head[0]) & (1 << 13))) {
		struct i2o_message *pmsg;
		u32 pm;

		printk(KERN_WARNING "FAIL");
		/*
		 * FAILed message from controller
		 * We increment the error count and abort it
		 *
		 * In theory this will never happen.  The I2O block class
		 * specification states that block devices never return
		 * FAILs but instead use the REQ status field...but
		 * better be on the safe side since no one really follows
		 * the spec to the book :)
		 */
		pm = readl(&msg->body[3]);
		pmsg = c->in_queue.virt + pm;

		req = i2o_cntxt_list_get(c, readl(&pmsg->u.s.tcntxt));
		if (unlikely(!req)) {
			printk(KERN_ERR "block-osm: NULL reply received!\n");
			return -1;
		}

		ireq = req->special;
		dev = ireq->i2o_blk_dev;
		q = dev->gd->queue;

		req->errors++;

		spin_lock_irqsave(q->queue_lock, flags);

		while (end_that_request_chunk(req, !req->errors,
					      readl(&pmsg->body[1]))) ;
		end_that_request_last(req);

		dev->open_queue_depth--;
		list_del(&ireq->queue);
		blk_start_queue(q);

		spin_unlock_irqrestore(q->queue_lock, flags);

		/* Now flush the message by making it a NOP */
		i2o_msg_nop(c, pm);

		return -1;
	}

	req = i2o_cntxt_list_get(c, readl(&msg->u.s.tcntxt));
	if (unlikely(!req)) {
		printk(KERN_ERR "block-osm: NULL reply received!\n");
		return -1;
	}

	ireq = req->special;
	dev = ireq->i2o_blk_dev;
	q = dev->gd->queue;

	if (unlikely(!dev->i2o_dev)) {
		/*
		 * This is HACK, but Intel Integrated RAID allows user
		 * to delete a volume that is claimed, locked, and in use
		 * by the OS. We have to check for a reply from a
		 * non-existent device and flag it as an error or the system
		 * goes kaput...
		 */
		req->errors++;
		printk(KERN_WARNING
		       "I2O Block: Data transfer to deleted device!\n");
		spin_lock_irqsave(q->queue_lock, flags);
		while (end_that_request_chunk
		       (req, !req->errors, readl(&msg->body[1]))) ;
		end_that_request_last(req);

		dev->open_queue_depth--;
		list_del(&ireq->queue);
		blk_start_queue(q);

		spin_unlock_irqrestore(q->queue_lock, flags);
		return -1;
	}

	/*
	 * Lets see what is cooking. We stuffed the
	 * request in the context.
	 */

	st = readl(&msg->body[0]) >> 24;

	if (st != 0) {
		int err;
		char *bsa_errors[] = {
			"Success",
			"Media Error",
			"Failure communicating to device",
			"Device Failure",
			"Device is not ready",
			"Media not present",
			"Media is locked by another user",
			"Media has failed",
			"Failure communicating to device",
			"Device bus failure",
			"Device is locked by another user",
			"Device is write protected",
			"Device has reset",
			"Volume has changed, waiting for acknowledgement"
		};

		err = readl(&msg->body[0]) & 0xffff;

		/*
		 * Device not ready means two things. One is that the
		 * the thing went offline (but not a removal media)
		 *
		 * The second is that you have a SuperTrak 100 and the
		 * firmware got constipated. Unlike standard i2o card
		 * setups the supertrak returns an error rather than
		 * blocking for the timeout in these cases.
		 *
		 * Don't stick a supertrak100 into cache aggressive modes
		 */

		printk(KERN_ERR "\n/dev/%s error: %s", dev->gd->disk_name,
		       bsa_errors[readl(&msg->body[0]) & 0xffff]);
		if (readl(&msg->body[0]) & 0x00ff0000)
			printk(" - DDM attempted %d retries",
			       (readl(&msg->body[0]) >> 16) & 0x00ff);
		printk(".\n");
		req->errors++;
	} else
		req->errors = 0;

	if (!end_that_request_chunk(req, !req->errors, readl(&msg->body[1]))) {
		add_disk_randomness(req->rq_disk);
		spin_lock_irqsave(q->queue_lock, flags);

		end_that_request_last(req);

		dev->open_queue_depth--;
		list_del(&ireq->queue);
		blk_start_queue(q);

		spin_unlock_irqrestore(q->queue_lock, flags);

		i2o_block_sglist_free(ireq);
		i2o_block_request_free(ireq);
	} else
		printk(KERN_ERR "still remaining chunks\n");

	return 1;
};
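The reply handler reads three fields out of a single BSA status word: the status code from the top byte, the DDM retry count from the next byte, and the detailed error code from the low 16 bits. A userspace sketch of that decoding (helper names are illustrative; field positions match the shifts and masks used in the reply path):

```c
#include <stdint.h>

/* BSA reply body[0] layout: status[31:24], retries[23:16], error[15:0] */
static uint8_t bsa_status(uint32_t body0)
{
	return (uint8_t)(body0 >> 24);
}

static uint8_t bsa_retries(uint32_t body0)
{
	return (body0 >> 16) & 0xff;
}

static uint16_t bsa_error(uint32_t body0)
{
	return body0 & 0xffff;
}
```

A zero status word means success; anything else bumps req->errors and logs the decoded error string and retry count.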
static void i2o_block_event(struct i2o_event *evt)
{
printk(KERN_INFO "block-osm: event received\n");
};
#if 0
static int i2o_block_event(void *dummy)
{ {
unsigned int evt; unsigned int evt;
unsigned long flags; unsigned long flags;
struct i2ob_device *dev; struct i2o_block_device *dev;
int unit; int unit;
//The only event that has data is the SCSI_SMART event. //The only event that has data is the SCSI_SMART event.
struct i2o_reply { struct i2o_reply {
...@@ -604,24 +588,22 @@ static int i2ob_evt(void *dummy) ...@@ -604,24 +588,22 @@ static int i2ob_evt(void *dummy)
u8 ASCQ; u8 ASCQ;
u16 pad; u16 pad;
u8 data[16]; u8 data[16];
} *evt_local; } *evt_local;
daemonize("i2oblock"); daemonize("i2oblock");
allow_signal(SIGKILL); allow_signal(SIGKILL);
evt_running = 1; evt_running = 1;
while(1) while (1) {
{ if (down_interruptible(&i2ob_evt_sem)) {
if(down_interruptible(&i2ob_evt_sem))
{
evt_running = 0; evt_running = 0;
printk("exiting..."); printk("exiting...");
break; break;
} }
/* /*
* Keep another CPU/interrupt from overwriting the * Keep another CPU/interrupt from overwriting the
* message while we're reading it * message while we're reading it
* *
* We stuffed the unit in the TxContext and grab the event mask * We stuffed the unit in the TxContext and grab the event mask
...@@ -634,20 +616,19 @@ static int i2ob_evt(void *dummy) ...@@ -634,20 +616,19 @@ static int i2ob_evt(void *dummy)
		unit = le32_to_cpu(evt_local->header[3]);
		evt = le32_to_cpu(evt_local->evt_indicator);

		dev = &i2o_blk_dev[unit];
		switch (evt) {
			/*
			 * New volume loaded on same TID, so we just re-install.
			 * The TID/controller don't change as it is the same
			 * I2O device. It's just new media that we have to
			 * rescan.
			 */
		case I2O_EVT_IND_BSA_VOLUME_LOAD:
			{
				i2ob_install_device(dev->i2o_device->iop,
						    dev->i2o_device, unit);
				add_disk(dev->gendisk);
				break;
			}
@@ -657,144 +638,108 @@ static int i2ob_evt(void *dummy)
			 * have media, so we don't want to clear the controller or
			 * device pointer.
			 */
		case I2O_EVT_IND_BSA_VOLUME_UNLOAD:
			{
				struct gendisk *p = dev->gendisk;
				blk_queue_max_sectors(dev->gendisk->queue, 0);
				del_gendisk(p);
				put_disk(p);
				dev->gendisk = NULL;
				dev->media_change_flag = 1;
				break;
			}

		case I2O_EVT_IND_BSA_VOLUME_UNLOAD_REQ:
			printk(KERN_WARNING
			       "%s: Attempt to eject locked media\n",
			       dev->i2o_device->dev_name);
			break;

			/*
			 * The capacity has changed and we are going to be
			 * updating the max_sectors and other information
			 * about this disk. We try a revalidate first. If
			 * the block device is in use, we don't want to
			 * do that as there may be I/Os bound for the disk
			 * at the moment. In that case we read the size
			 * from the device and update the information ourselves
			 * and the user can later force a partition table
			 * update through an ioctl.
			 */
		case I2O_EVT_IND_BSA_CAPACITY_CHANGE:
			{
				u64 size;

				if (i2ob_query_device(dev, 0x0004, 0, &size, 8)
				    != 0)
					i2ob_query_device(dev, 0x0000, 4, &size,
							  8);

				spin_lock_irqsave(dev->req_queue->queue_lock,
						  flags);
				set_capacity(dev->gendisk, size >> 9);
				spin_unlock_irqrestore(dev->req_queue->
						       queue_lock, flags);
				break;
			}

			/*
			 * We got a SCSI SMART event, we just log the relevant
			 * information and let the user decide what they want
			 * to do with the information.
			 */
		case I2O_EVT_IND_BSA_SCSI_SMART:
			{
				char buf[16];
				printk(KERN_INFO
				       "I2O Block: %s received a SCSI SMART Event\n",
				       dev->i2o_device->dev_name);
				evt_local->data[16] = '\0';
				sprintf(buf, "%s", &evt_local->data[0]);
				printk(KERN_INFO "      Disk Serial#:%s\n",
				       buf);
				printk(KERN_INFO "      ASC 0x%02x \n",
				       evt_local->ASC);
				printk(KERN_INFO "      ASCQ 0x%02x \n",
				       evt_local->ASCQ);
				break;
			}

			/*
			 * Non event
			 */
		case 0:
			break;

			/*
			 * An event we didn't ask for. Call the card manufacturer
			 * and tell them to fix their firmware :)
			 */
		case 0x20:
			/*
			 * If a promise card reports 0x20 event then the brown stuff
			 * hit the fan big time. The card seems to recover but loses
			 * the pending writes. Deeply ungood except for testing fsck
			 */
			if (dev->i2o_device->iop->promise)
				panic
				    ("I2O controller firmware failed. Reboot and force a filesystem check.\n");

		default:
			printk(KERN_INFO
			       "%s: Received event 0x%X we didn't register for\n"
			       KERN_INFO
			       "   Blame the I2O card manufacturer 8)\n",
			       dev->i2o_device->dev_name, evt);
			break;
		}
	};

	complete_and_exit(&i2ob_thread_dead, 0);
	return 0;
}
#endif
/*
 * SCSI-CAM for ioctl geometry mapping
@@ -803,8 +748,8 @@ static void i2ob_request(request_queue_t *q)
 *
 * LBA -> CHS mapping table taken from:
 *
 * "Incorporating the I2O Architecture into BIOS for Intel Architecture
 *  Platforms"
 *
 * This is an I2O document that is only available to I2O members,
 * not developers.
@@ -825,865 +770,647 @@ static void i2ob_request(request_queue_t *q)
#define BLOCK_SIZE_42G		8806400
#define BLOCK_SIZE_84G		17612800
static void i2o_block_biosparam(unsigned long capacity, unsigned short *cyls,
				unsigned char *hds, unsigned char *secs)
{
	unsigned long heads, sectors, cylinders;

	sectors = 63L;		/* Maximize sectors per track */
	if (capacity <= BLOCK_SIZE_528M)
		heads = 16;
	else if (capacity <= BLOCK_SIZE_1G)
		heads = 32;
	else if (capacity <= BLOCK_SIZE_21G)
		heads = 64;
	else if (capacity <= BLOCK_SIZE_42G)
		heads = 128;
	else
		heads = 255;

	cylinders = (unsigned long)capacity / (heads * sectors);

	*cyls = (unsigned short)cylinders;	/* Stuff return values */
	*secs = (unsigned char)sectors;
	*hds = (unsigned char)heads;
}
/**
 * i2o_block_open - Open the block device
 *
 * Power up the device, mount and lock the media. This function is called,
 * if the block device is opened for access.
 *
 * Returns 0 on success or negative error code on failure.
 */
static int i2o_block_open(struct inode *inode, struct file *file)
{
	struct i2o_block_device *dev = inode->i_bdev->bd_disk->private_data;

	if (!dev->i2o_dev)
		return -ENODEV;

	if (dev->power > 0x1f)
		i2o_block_device_power(dev, 0x02);

	i2o_block_device_mount(dev->i2o_dev, -1);

	i2o_block_device_lock(dev->i2o_dev, -1);

	pr_debug("Ready.\n");

	return 0;
};
/**
* i2o_block_release - Release the I2O block device
*
* Unlock and unmount the media, and power down the device. Gets called if
* the block device is closed.
*
* Returns 0 on success or negative error code on failure.
*/ */
static int i2o_block_release(struct inode *inode, struct file *file)
{
	struct gendisk *disk = inode->i_bdev->bd_disk;
	struct i2o_block_device *dev = disk->private_data;
	u8 operation;

	/*
	 * This is to deal with the case of an application
	 * opening a device and then the device disappears while
	 * it's in use, and then the application tries to release
	 * it.  ex: Unmounting a deleted RAID volume at reboot.
	 * If we send messages, it will just cause FAILs since
	 * the TID no longer exists.
	 */
	if (!dev->i2o_dev)
		return 0;
	i2o_block_device_flush(dev->i2o_dev);

	i2o_block_device_unlock(dev->i2o_dev, -1);

	if (dev->flags & (1 << 3 | 1 << 4))	/* Removable */
		operation = 0x21;
	else
		operation = 0x24;

	i2o_block_device_power(dev, operation);

	return 0;
}
/**
 * i2o_block_ioctl - Issue device specific ioctl calls.
 * @cmd: ioctl command
 * @arg: arg
 *
 * Handles ioctl request for the block device.
 *
 * Return 0 on success or negative error on failure.
 */
static int i2o_block_ioctl(struct inode *inode, struct file *file,
			   unsigned int cmd, unsigned long arg)
{
	struct gendisk *disk = inode->i_bdev->bd_disk;
	struct i2o_block_device *dev = disk->private_data;
	void __user *argp = (void __user *)arg;

	/* Anyone capable of this syscall can do *real bad* things */
	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	switch (cmd) {
	case HDIO_GETGEO:
		{
			struct hd_geometry g;
			i2o_block_biosparam(get_capacity(disk),
					    &g.cylinders, &g.heads, &g.sectors);
			g.start = get_start_sect(inode->i_bdev);
			return copy_to_user(argp, &g, sizeof(g)) ? -EFAULT : 0;
		}

	case BLKI2OGRSTRAT:
		return put_user(dev->rcache, (int __user *)arg);
	case BLKI2OGWSTRAT:
		return put_user(dev->wcache, (int __user *)arg);
	case BLKI2OSRSTRAT:
		if (arg < 0 || arg > CACHE_SMARTFETCH)
			return -EINVAL;
		dev->rcache = arg;
		break;
	case BLKI2OSWSTRAT:
		if (arg != 0
		    && (arg < CACHE_WRITETHROUGH || arg > CACHE_SMARTBACK))
			return -EINVAL;
		dev->wcache = arg;
		break;
	}

	return -ENOTTY;
};
/**
 * i2o_block_media_changed - Have we seen a media change?
 * @disk: gendisk which should be verified
 *
 * Verifies if the media has changed.
 *
 * Returns 1 if the media was changed or 0 otherwise.
 */
static int i2o_block_media_changed(struct gendisk *disk)
{
	struct i2o_block_device *p = disk->private_data;

	if (p->media_change_flag) {
		p->media_change_flag = 0;
		return 1;
	}
	return 0;
}
/**
 * i2o_block_transfer - Transfer a request to/from the I2O controller
 * @req: the request which should be transferred
 *
 * This function converts the request into an I2O message. The necessary
 * DMA buffers are allocated and after everything is set up the message is
 * posted to the I2O controller. No cleanup is done by this function. It is
 * done on the interrupt side when the reply arrives.
 *
 * Return 0 on success or negative error code on failure.
 */
static int i2o_block_transfer(struct request *req)
{
	struct i2o_block_device *dev = req->rq_disk->private_data;
	struct i2o_controller *c = dev->i2o_dev->iop;
	int tid = dev->i2o_dev->lct_data.tid;
	struct i2o_message *msg;
	void *mptr;
	struct i2o_block_request *ireq = req->special;
	struct scatterlist *sg;
	int sgnum;
	int i;
	u32 m;
	u32 tcntxt;
	u32 sg_flags;
	int rc;

	m = i2o_msg_get(c, &msg);
	if (m == I2O_QUEUE_EMPTY) {
		rc = -EBUSY;
		goto exit;
	}

	tcntxt = i2o_cntxt_list_add(c, req);
	if (!tcntxt) {
		rc = -ENOMEM;
		goto nop_msg;
	}

	if ((sgnum = i2o_block_sglist_alloc(ireq)) <= 0) {
		rc = -ENOMEM;
		goto context_remove;
	}

	/* Build the message based on the request. */
	writel(i2o_block_driver.context, &msg->u.s.icntxt);
	writel(tcntxt, &msg->u.s.tcntxt);
	writel(req->nr_sectors << 9, &msg->body[1]);

	writel((((u64) req->sector) << 9) & 0xffffffff, &msg->body[2]);
	writel(req->sector >> 23, &msg->body[3]);

	mptr = &msg->body[4];

	sg = ireq->sg_table;

	if (rq_data_dir(req) == READ) {
		writel(I2O_CMD_BLOCK_READ << 24 | HOST_TID << 12 | tid,
		       &msg->u.head[1]);
		sg_flags = 0x10000000;
		switch (dev->rcache) {
		case CACHE_NULL:
			writel(0, &msg->body[0]);
			break;
		case CACHE_PREFETCH:
			writel(0x201F0008, &msg->body[0]);
			break;
		case CACHE_SMARTFETCH:
			if (req->nr_sectors > 16)
				writel(0x201F0008, &msg->body[0]);
			else
				writel(0x001F0000, &msg->body[0]);
			break;
		}
	} else {
		writel(I2O_CMD_BLOCK_WRITE << 24 | HOST_TID << 12 | tid,
		       &msg->u.head[1]);
		sg_flags = 0x14000000;
		switch (dev->wcache) {
		case CACHE_NULL:
			writel(0, &msg->body[0]);
			break;
		case CACHE_WRITETHROUGH:
			writel(0x001F0008, &msg->body[0]);
			break;
		case CACHE_WRITEBACK:
			writel(0x001F0010, &msg->body[0]);
			break;
		case CACHE_SMARTBACK:
			if (req->nr_sectors > 16)
				writel(0x001F0004, &msg->body[0]);
			else
				writel(0x001F0010, &msg->body[0]);
			break;
		case CACHE_SMARTTHROUGH:
			if (req->nr_sectors > 16)
				writel(0x001F0004, &msg->body[0]);
			else
				writel(0x001F0010, &msg->body[0]);
		}
	}

	for (i = sgnum; i > 0; i--) {
		if (i == 1)
			sg_flags |= 0x80000000;
		writel(sg_flags | sg_dma_len(sg), mptr);
		writel(sg_dma_address(sg), mptr + 4);
		mptr += 8;
		sg++;
	}

	writel(I2O_MESSAGE_SIZE
	       (((unsigned long)mptr -
		 (unsigned long)&msg->u.head[0]) >> 2) | SGL_OFFSET_8,
	       &msg->u.head[0]);

	i2o_msg_post(c, m);

	list_add_tail(&ireq->queue, &dev->open_queue);
	dev->open_queue_depth++;

	return 0;

      context_remove:
	i2o_cntxt_list_remove(c, req);

      nop_msg:
	i2o_msg_nop(c, m);

      exit:
	return rc;
};
/**
 * i2o_block_request_fn - request queue handling function
 * @q: request queue from which the request could be fetched
 *
 * Takes the next request from the queue, transfers it and if no error
 * occurs dequeues it from the queue. On arrival of the reply the message
 * will be processed further. If an error occurs the request is requeued.
 */
static void i2o_block_request_fn(struct request_queue *q)
{
	struct request *req;

	while (!blk_queue_plugged(q)) {
		req = elv_next_request(q);
		if (!req)
			break;

		if (blk_fs_request(req)) {
			struct i2o_block_delayed_request *dreq;
			struct i2o_block_request *ireq = req->special;
			unsigned int queue_depth;

			queue_depth = ireq->i2o_blk_dev->open_queue_depth;

			if (queue_depth < I2O_BLOCK_MAX_OPEN_REQUESTS)
				if (!i2o_block_transfer(req)) {
					blkdev_dequeue_request(req);
					continue;
				}

			if (queue_depth)
				break;

			/* stop the queue and retry later */
			dreq = kmalloc(sizeof(*dreq), GFP_ATOMIC);
			if (!dreq)
				continue;

			dreq->queue = q;
			INIT_WORK(&dreq->work, i2o_block_delayed_request_fn,
				  dreq);

			printk(KERN_INFO "block-osm: transfer error\n");
			if (!queue_delayed_work(i2o_block_driver.event_queue,
						&dreq->work,
						I2O_BLOCK_RETRY_TIME))
				kfree(dreq);
			else {
				blk_stop_queue(q);
				break;
			}
		} else
			end_request(req, 0);
	}
};
/* I2O Block device operations definition */
static struct block_device_operations i2o_block_fops = {
.owner = THIS_MODULE,
.open = i2o_block_open,
.release = i2o_block_release,
.ioctl = i2o_block_ioctl,
.media_changed = i2o_block_media_changed
};
/**
* i2o_block_device_alloc - Allocate memory for a I2O Block device
*
* Allocate memory for the i2o_block_device struct, gendisk and request
* queue and initialize them as far as no additional information is needed.
*
 * Returns a pointer to the allocated I2O Block device on success or a
* negative error code on failure.
*/
static struct i2o_block_device *i2o_block_device_alloc(void)
{ {
/* struct i2o_block_device *dev;
* Some overhead/redundancy involved here, while trying to struct gendisk *gd;
* claim the first boot volume encountered as /dev/i2o/hda struct request_queue *queue;
* everytime. All the i2o_controllers are searched and the int rc;
* first i2o block device marked as bootable is claimed
* If an I2O block device was booted off , the bios sets dev = kmalloc(sizeof(*dev), GFP_KERNEL);
* its bios_info field to 0x80, this what we search for. if (!dev) {
* Assuming that the bootable volume is /dev/i2o/hda printk(KERN_ERR "block-osm: Insufficient memory to allocate "
* everytime will prevent any kernel panic while mounting "I2O Block disk.\n");
* root partition rc = -ENOMEM;
*/ goto exit;
}
memset(dev, 0, sizeof(*dev));
INIT_LIST_HEAD(&dev->open_queue);
spin_lock_init(&dev->lock);
dev->rcache = CACHE_PREFETCH;
dev->wcache = CACHE_WRITEBACK;
/* allocate a gendisk with 16 partitions */
gd = alloc_disk(16);
if (!gd) {
printk(KERN_ERR "block-osm: Insufficient memory to allocate "
"gendisk.\n");
rc = -ENOMEM;
goto cleanup_dev;
}
printk(KERN_INFO "i2o_block: Checking for Boot device...\n"); /* initialize the request queue */
i2ob_scan(1); queue = blk_init_queue(i2o_block_request_fn, &dev->lock);
if (!queue) {
printk(KERN_ERR "block-osm: Insufficient memory to allocate "
"request queue.\n");
rc = -ENOMEM;
goto cleanup_queue;
}
/* blk_queue_prep_rq(queue, i2o_block_prep_req_fn);
* Now the remainder.
*/
printk(KERN_INFO "i2o_block: Checking for I2O Block devices...\n");
i2ob_scan(0);
}
gd->major = I2O_MAJOR;
gd->queue = queue;
gd->fops = &i2o_block_fops;
gd->private_data = dev;
/* dev->gd = gd;
* New device notification handler. Called whenever a new
* I2O block storage device is added to the system. return dev;
*
* Should we spin lock around this to keep multiple devs from cleanup_queue:
* getting updated at the same time? put_disk(gd);
*
cleanup_dev:
kfree(dev);
exit:
return ERR_PTR(rc);
};
/**
* i2o_block_probe - verify if dev is a I2O Block device and install it
* @dev: device to verify if it is a I2O Block device
*
* We only verify if the user_tid of the device is 0xfff and then install
 * the device. Otherwise it is used by some other device (e.g. RAID).
*
* Returns 0 on success or negative error code on failure.
 */
static int i2o_block_probe(struct device *dev)
{
	struct i2o_device *i2o_dev = to_i2o_device(dev);
	struct i2o_block_device *i2o_blk_dev;
	struct i2o_controller *c = i2o_dev->iop;
	struct gendisk *gd;
	struct request_queue *queue;
	static int unit = 0;
	int rc;
	u64 size;
	u32 blocksize;
	u16 power;
	u32 flags, status;
	int segments;

	/* skip devices which are used by IOP */
	if (i2o_dev->lct_data.user_tid != 0xfff) {
		pr_debug("skipping used device %03x\n", i2o_dev->lct_data.tid);
		return -ENODEV;
	}

	printk(KERN_INFO "block-osm: New device detected (TID: %03x)\n",
	       i2o_dev->lct_data.tid);

	if (i2o_device_claim(i2o_dev)) {
		printk(KERN_WARNING "block-osm: Unable to claim device. "
		       "Installation aborted\n");
		rc = -EFAULT;
		goto exit;
	}

	i2o_blk_dev = i2o_block_device_alloc();
	if (IS_ERR(i2o_blk_dev)) {
		printk(KERN_ERR "block-osm: could not alloc a new I2O block "
		       "device");
		rc = PTR_ERR(i2o_blk_dev);
		goto claim_release;
	}

	i2o_blk_dev->i2o_dev = i2o_dev;
	dev_set_drvdata(dev, i2o_blk_dev);

	/* setup gendisk */
	gd = i2o_blk_dev->gd;
	gd->first_minor = unit << 4;
	sprintf(gd->disk_name, "i2o/hd%c", 'a' + unit);
	sprintf(gd->devfs_name, "i2o/hd%c", 'a' + unit);
	gd->driverfs_dev = &i2o_dev->device;

	/* setup request queue */
	queue = gd->queue;
	queue->queuedata = i2o_blk_dev;

	blk_queue_max_phys_segments(queue, I2O_MAX_SEGMENTS);
	blk_queue_max_sectors(queue, I2O_MAX_SECTORS);

	if (c->short_req)
		segments = 8;
	else {
		i2o_status_block *sb;

		sb = c->status_block.virt;

		segments = (sb->inbound_frame_size -
			    sizeof(struct i2o_message) / 4 - 4) / 2;
	}

	blk_queue_max_hw_segments(queue, segments);

	pr_debug("max sectors:   %d\n", I2O_MAX_SECTORS);
	pr_debug("phys segments: %d\n", I2O_MAX_SEGMENTS);
	pr_debug("hw segments:   %d\n", segments);

	/*
	 * Ask for the current media data. If that isn't supported
	 * then we ask for the device capacity data
	 */
	if (i2o_parm_field_get(i2o_dev, 0x0004, 1, &blocksize, 4) != 0
	    || i2o_parm_field_get(i2o_dev, 0x0004, 0, &size, 8) != 0) {
		i2o_parm_field_get(i2o_dev, 0x0000, 3, &blocksize, 4);
		i2o_parm_field_get(i2o_dev, 0x0000, 4, &size, 8);
	}

	pr_debug("blocksize:     %d\n", blocksize);

	if (i2o_parm_field_get(i2o_dev, 0x0000, 2, &power, 2))
		power = 0;
	i2o_parm_field_get(i2o_dev, 0x0000, 5, &flags, 4);
	i2o_parm_field_get(i2o_dev, 0x0000, 6, &status, 4);
	set_capacity(gd, size >> 9);

	i2o_event_register(i2o_dev, &i2o_block_driver, 0, 0xffffffff);

	add_disk(gd);

	unit++;

	return 0;
static int i2ob_revalidate(struct gendisk *disk) claim_release:
{ i2o_device_claim_release(i2o_dev);
struct i2ob_device *p = disk->private_data;
return i2ob_install_device(p->controller, p->i2odev, p->index);
}
/* exit:
* Reboot notifier. This is called by i2o_core when the system return rc;
* shuts down. };
*/
static void i2ob_reboot_event(void)
{
int i;
for(i=0;i<MAX_I2OB;i++)
{
struct i2ob_device *dev=&i2ob_dev[i];
if(dev->refcnt!=0)
{
/*
* Flush the onboard cache
*/
u32 msg[5];
int *query_done = &dev->done_flag;
msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
msg[1] = I2O_CMD_BLOCK_CFLUSH<<24|HOST_TID<<12|dev->tid;
msg[2] = i2ob_context|0x40000000;
msg[3] = (u32)query_done;
msg[4] = 60<<16;
DEBUG("Flushing...");
i2o_post_wait(dev->controller, msg, 20, 60);
DEBUG("Unlocking...");
/*
* Unlock the media
*/
msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
msg[1] = I2O_CMD_BLOCK_MUNLOCK<<24|HOST_TID<<12|dev->tid;
msg[2] = i2ob_context|0x40000000;
msg[3] = (u32)query_done;
msg[4] = -1;
i2o_post_wait(dev->controller, msg, 20, 2);
DEBUG("Unlocked.\n");
}
}
}
static struct block_device_operations i2ob_fops = /* Block OSM driver struct */
{ static struct i2o_driver i2o_block_driver = {
.owner = THIS_MODULE, .name = "block-osm",
.open = i2ob_open, .event = i2o_block_event,
.release = i2ob_release, .reply = i2o_block_reply,
.ioctl = i2ob_ioctl, .classes = i2o_block_class_id,
.media_changed = i2ob_media_change, .driver = {
.revalidate_disk= i2ob_revalidate, .probe = i2o_block_probe,
.remove = i2o_block_remove,
},
}; };
/* /**
* And here should be modules and kernel interface * i2o_block_init - Block OSM initialization function
* (Just smiley confuses emacs :-) *
* Allocate the slab and mempool for request structs, registers i2o_block
* block device and finally register the Block OSM in the I2O core.
*
* Returns 0 on success or negative error code on failure.
*/ */
static int __init i2o_block_init(void)
static int i2o_block_init(void)
{ {
int i; int rc;
int size;
printk(KERN_INFO "I2O Block Storage OSM v0.9\n"); printk(KERN_INFO "I2O Block Storage OSM v0.9\n");
printk(KERN_INFO " (c) Copyright 1999-2001 Red Hat Software.\n"); printk(KERN_INFO " (c) Copyright 1999-2001 Red Hat Software.\n");
/*
* Register the block device interfaces
*/
if (register_blkdev(MAJOR_NR, "i2o_block"))
return -EIO;
#ifdef MODULE
printk(KERN_INFO "i2o_block: registered device at major %d\n", MAJOR_NR);
#endif
/* /* Allocate request mempool and slab */
* Set up the queue size = sizeof(struct i2o_block_request);
*/ i2o_blk_req_pool.slab = kmem_cache_create("i2o_block_req", size, 0,
for(i = 0; i < MAX_I2O_CONTROLLERS; i++) SLAB_HWCACHE_ALIGN, NULL,
i2ob_queues[i] = NULL; NULL);
if (!i2o_blk_req_pool.slab) {
/* printk(KERN_ERR "block-osm: can't init request slab\n");
* Now fill in the boiler plate rc = -ENOMEM;
*/ goto exit;
for (i = 0; i < MAX_I2OB; i++) {
struct i2ob_device *dev = &i2ob_dev[i];
dev->index = i;
dev->refcnt = 0;
dev->flags = 0;
dev->controller = NULL;
dev->i2odev = NULL;
dev->tid = 0;
dev->head = NULL;
dev->tail = NULL;
dev->depth = MAX_I2OB_DEPTH;
dev->max_sectors = 2;
dev->gd = NULL;
} }
/*
* Register the OSM handler as we will need this to probe for
* drives, geometry and other goodies.
*/
if(i2o_install_handler(&i2o_block_handler)<0) i2o_blk_req_pool.pool = mempool_create(I2O_REQ_MEMPOOL_SIZE,
{ mempool_alloc_slab,
unregister_blkdev(MAJOR_NR, "i2o_block"); mempool_free_slab,
printk(KERN_ERR "i2o_block: unable to register OSM.\n"); i2o_blk_req_pool.slab);
return -EINVAL; if (!i2o_blk_req_pool.pool) {
printk(KERN_ERR "block-osm: can't init request mempool\n");
rc = -ENOMEM;
goto free_slab;
} }
i2ob_context = i2o_block_handler.context;
/* /* Register the block device interfaces */
* Initialize event handling thread rc = register_blkdev(I2O_MAJOR, "i2o_block");
*/ if (rc) {
init_MUTEX_LOCKED(&i2ob_evt_sem); printk(KERN_ERR "block-osm: unable to register block device\n");
evt_pid = kernel_thread(i2ob_evt, NULL, CLONE_SIGHAND); goto free_mempool;
if(evt_pid < 0)
{
printk(KERN_ERR "i2o_block: Could not initialize event thread. Aborting\n");
i2o_remove_handler(&i2o_block_handler);
return 0;
} }
#ifdef MODULE
printk(KERN_INFO "block-osm: registered device at major %d\n",
I2O_MAJOR);
#endif
i2ob_probe(); /* Register Block OSM into I2O core */
rc = i2o_driver_register(&i2o_block_driver);
if (rc) {
printk(KERN_ERR "block-osm: Could not register Block driver\n");
goto unregister_blkdev;
}
return 0; return 0;
unregister_blkdev(MAJOR_NR, "i2o_block"); unregister_blkdev:
return -ENOMEM; unregister_blkdev(I2O_MAJOR, "i2o_block");
}
free_mempool:
mempool_destroy(i2o_blk_req_pool.pool);
static void i2o_block_exit(void) free_slab:
{ kmem_cache_destroy(i2o_blk_req_pool.slab);
int i;
if(evt_running) {
printk(KERN_INFO "Killing I2O block threads...");
i = kill_proc(evt_pid, SIGKILL, 1);
if(!i) {
printk("waiting...\n");
}
/* Be sure it died */
wait_for_completion(&i2ob_thread_dead);
printk("done.\n");
}
/* exit:
* Unregister for updates from any devices..otherwise we still return rc;
* get them and the core jumps to random memory :O };
*/
if(i2ob_dev_count) {
struct i2o_device *d;
for(i = 0; i < MAX_I2OB; i++)
if((d = i2ob_dev[i].i2odev))
i2ob_del_device(d->controller, d);
}
/*
* We may get further callbacks for ourself. The i2o_core
* code handles this case reasonably sanely. The problem here
* is we shouldn't get them .. but a couple of cards feel
* obliged to tell us stuff we don't care about.
*
* This isnt ideal at all but will do for now.
*/
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_timeout(HZ);
/*
* Flush the OSM
*/
i2o_remove_handler(&i2o_block_handler); /**
* i2o_block_exit - Block OSM exit function
*
* Unregisters Block OSM from I2O core, unregisters i2o_block block device
* and frees the mempool and slab.
*/
static void __exit i2o_block_exit(void)
{
/* Unregister I2O Block OSM from I2O core */
i2o_driver_unregister(&i2o_block_driver);
/* /* Unregister block device */
* Return the block device unregister_blkdev(I2O_MAJOR, "i2o_block");
*/
if (unregister_blkdev(MAJOR_NR, "i2o_block") != 0)
printk("i2o_block: cleanup_module failed\n");
/* /* Free request mempool and slab */
* release request queue mempool_destroy(i2o_blk_req_pool.pool);
*/ kmem_cache_destroy(i2o_blk_req_pool.slab);
for (i = 0; i < MAX_I2O_CONTROLLERS; i ++) };
if(i2ob_queues[i]) {
blk_cleanup_queue(i2ob_queues[i]->req_queue);
kfree(i2ob_queues[i]);
}
}
MODULE_AUTHOR("Red Hat"); MODULE_AUTHOR("Red Hat");
MODULE_DESCRIPTION("I2O Block Device OSM"); MODULE_DESCRIPTION("I2O Block Device OSM");
......
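The new i2o_block_init() above creates a slab cache for request structs and then a mempool on top of it, so a small reserve of requests survives memory pressure. As a rough userspace analogue of that mempool pattern (all names below are illustrative, not from the driver), the idea can be sketched as:

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal userspace analogue of mempool_create()/mempool_alloc():
 * keep a small reserve of preallocated objects so allocation can
 * still succeed for a while when the normal allocator fails. */
struct pool {
	size_t obj_size;
	int nr_free;	/* objects currently sitting in the reserve */
	int min_nr;	/* reserve capacity */
	void **reserve;
};

static struct pool *pool_create(int min_nr, size_t obj_size)
{
	struct pool *p = malloc(sizeof(*p));
	int i;

	if (!p)
		return NULL;
	p->obj_size = obj_size;
	p->min_nr = min_nr;
	p->nr_free = 0;
	p->reserve = malloc(min_nr * sizeof(void *));
	for (i = 0; i < min_nr; i++) {
		void *obj = malloc(obj_size);
		if (obj)
			p->reserve[p->nr_free++] = obj;
	}
	return p;
}

static void *pool_alloc(struct pool *p)
{
	void *obj = malloc(p->obj_size);	/* normal path first */

	if (!obj && p->nr_free)			/* fall back to reserve */
		obj = p->reserve[--p->nr_free];
	return obj;
}

static void pool_free(struct pool *p, void *obj)
{
	if (p->nr_free < p->min_nr)		/* refill the reserve first */
		p->reserve[p->nr_free++] = obj;
	else
		free(obj);
}
```

The kernel's mempool guarantees forward progress for I/O request allocation the same way: the reserve is only consumed when the slab allocator cannot satisfy the request.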
/*
* Block OSM structures/API
*
* Copyright (C) 1999-2002 Red Hat Software
*
* Written by Alan Cox, Building Number Three Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* For the purpose of avoiding doubt the preferred form of the work
* for making modifications shall be a standards compliant form such
* gzipped tar and not one requiring a proprietary or patent encumbered
* tool to unpack.
*
* Fixes/additions:
* Steve Ralston:
* Multiple device handling error fixes,
* Added a queue depth.
* Alan Cox:
 *		FC920 has an rmw bug. Don't or in the end marker.
* Removed queue walk, fixed for 64bitness.
* Rewrote much of the code over time
* Added indirect block lists
* Handle 64K limits on many controllers
* Don't use indirects on the Promise (breaks)
* Heavily chop down the queue depths
* Deepak Saxena:
* Independent queues per IOP
* Support for dynamic device creation/deletion
* Code cleanup
* Support for larger I/Os through merge* functions
* (taken from DAC960 driver)
* Boji T Kannanthanam:
* Set the I2O Block devices to be detected in increasing
* order of TIDs during boot.
* Search and set the I2O block device that we boot off
* from as the first device to be claimed (as /dev/i2o/hda)
* Properly attach/detach I2O gendisk structure from the
* system gendisk list. The I2O block devices now appear in
* /proc/partitions.
* Markus Lidel <Markus.Lidel@shadowconnect.com>:
* Minor bugfixes for 2.6.
*/
#ifndef I2O_BLOCK_OSM_H
#define I2O_BLOCK_OSM_H
#define I2O_BLOCK_RETRY_TIME HZ/4
#define I2O_BLOCK_MAX_OPEN_REQUESTS 50
/* I2O Block OSM mempool struct */
struct i2o_block_mempool {
kmem_cache_t *slab;
mempool_t *pool;
};
/* I2O Block device descriptor */
struct i2o_block_device {
struct i2o_device *i2o_dev; /* pointer to I2O device */
struct gendisk *gd;
spinlock_t lock; /* queue lock */
	struct list_head open_queue;	/* list of transferred, but unfinished
					   requests */
unsigned int open_queue_depth; /* number of requests in the queue */
int rcache; /* read cache flags */
int wcache; /* write cache flags */
int flags;
int power; /* power state */
int media_change_flag; /* media changed flag */
};
/* I2O Block device request */
struct i2o_block_request
{
struct list_head queue;
struct request *req; /* corresponding request */
struct i2o_block_device *i2o_blk_dev; /* I2O block device */
int sg_dma_direction; /* direction of DMA buffer read/write */
int sg_nents; /* number of SG elements */
struct scatterlist sg_table[I2O_MAX_SEGMENTS]; /* SG table */
};
/* I2O Block device delayed request */
struct i2o_block_delayed_request
{
struct work_struct work;
struct request_queue *queue;
};
#endif
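The open_queue/open_queue_depth pair declared above tracks requests that have been handed to the controller but not yet completed, bounded by I2O_BLOCK_MAX_OPEN_REQUESTS. A minimal userspace sketch of that bookkeeping (hypothetical names, singly linked for brevity rather than the kernel's list_head):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_OPEN_REQUESTS 50	/* mirrors I2O_BLOCK_MAX_OPEN_REQUESTS */

struct req {
	struct req *next;
};

struct blk_dev {
	struct req *open_head;		/* in-flight requests */
	unsigned int open_depth;	/* how many are outstanding */
};

/* returns 0 on success, -1 if the device already has too many
 * outstanding requests (caller would then requeue/delay) */
static int start_request(struct blk_dev *d, struct req *r)
{
	if (d->open_depth >= MAX_OPEN_REQUESTS)
		return -1;
	r->next = d->open_head;
	d->open_head = r;
	d->open_depth++;
	return 0;
}

/* remove a completed request from the open list */
static void end_request(struct blk_dev *d, struct req *r)
{
	struct req **pp;

	for (pp = &d->open_head; *pp; pp = &(*pp)->next)
		if (*pp == r) {
			*pp = r->next;
			d->open_depth--;
			return;
		}
}
```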
@@ -2,7 +2,7 @@
 * I2O Configuration Interface Driver
 *
 * (C) Copyright 1999-2002 Red Hat
 *
 * Written by Alan Cox, Building Number Three Ltd
 *
 * Fixes/additions:
@@ -41,63 +41,52 @@
#include <linux/mm.h>
#include <linux/spinlock.h>
#include <linux/smp_lock.h>
#include <linux/ioctl32.h>
#include <linux/syscalls.h>

#include <asm/uaccess.h>
#include <asm/io.h>
static int i2o_cfg_context = -1;
static void *page_buf;

extern int i2o_parm_issue(struct i2o_device *, int, void *, int, void *, int);

static spinlock_t i2o_config_lock = SPIN_LOCK_UNLOCKED;
struct wait_queue *i2o_wait_queue;

#define MODINC(x,y) ((x) = ((x) + 1) % (y))

struct sg_simple_element {
	u32 flag_count;
	u32 addr_bus;
};

struct i2o_cfg_info {
	struct file *fp;
	struct fasync_struct *fasync;
	struct i2o_evt_info event_q[I2O_EVT_Q_LEN];
	u16 q_in;		// Queue head index
	u16 q_out;		// Queue tail index
	u16 q_len;		// Queue length
	u16 q_lost;		// Number of lost events
	ulong q_id;		// Event queue ID...used as tx_context
	struct i2o_cfg_info *next;
};
static struct i2o_cfg_info *open_files = NULL;
static ulong i2o_cfg_info_id = 0;

static int ioctl_getiops(unsigned long);
static int ioctl_gethrt(unsigned long);
static int ioctl_getlct(unsigned long);
static int ioctl_parms(unsigned long, unsigned int);
static int ioctl_html(unsigned long);
static int ioctl_swdl(unsigned long);
static int ioctl_swul(unsigned long);
static int ioctl_swdel(unsigned long);
static int ioctl_validate(unsigned long);
static int ioctl_evt_reg(unsigned long, struct file *);
static int ioctl_evt_get(unsigned long, struct file *);
static int ioctl_passthru(unsigned long);
static int cfg_fasync(int, struct file *, int);
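The q_in/q_out/q_len fields of struct i2o_cfg_info above implement a fixed-size ring of pending events, advanced with MODINC; when the ring is full the oldest event is overwritten and q_lost counts the drop. A self-contained sketch of that indexing (illustrative struct, not the kernel one):

```c
#include <assert.h>

#define Q_LEN 8
#define MODINC(x, y) ((x) = ((x) + 1) % (y))

/* Ring buffer with the same head/tail/length discipline as the
 * i2o_config event queue: full means drop-oldest, not reject-new. */
struct evt_q {
	int data[Q_LEN];
	unsigned short q_in, q_out, q_len, q_lost;
};

static void evt_push(struct evt_q *q, int v)
{
	q->data[q->q_in] = v;
	MODINC(q->q_in, Q_LEN);
	if (q->q_len == Q_LEN) {
		MODINC(q->q_out, Q_LEN);	/* overwrite oldest */
		q->q_lost++;
	} else {
		q->q_len++;
	}
}

static int evt_pop(struct evt_q *q)
{
	int v = q->data[q->q_out];

	MODINC(q->q_out, Q_LEN);
	q->q_len--;
	return v;
}
```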
#if 0
/*
 * This is the callback for any message we have posted. The message itself
 * will be returned to the message pool when we return from the IRQ
 *
 * This runs in irq context so be short and sweet.
 */
static void i2o_cfg_reply(struct i2o_handler *h, struct i2o_controller *c,
			  struct i2o_message *m)
{
	u32 *msg = (u32 *) m;

	if (msg[0] & MSG_FAIL) {
		u32 *preserved_msg = (u32 *) (c->msg_virt + msg[7]);

		printk(KERN_ERR "i2o_config: IOP failed to process the msg.\n");
@@ -109,26 +98,25 @@
		i2o_post_message(c, msg[7]);
	}

	if (msg[4] >> 24)	// ReqStatus != SUCCESS
		i2o_report_status(KERN_INFO, "i2o_config", msg);

	if (m->function == I2O_CMD_UTIL_EVT_REGISTER) {
		struct i2o_cfg_info *inf;

		for (inf = open_files; inf; inf = inf->next)
			if (inf->q_id == i2o_cntxt_list_get(c, msg[3]))
				break;

		//
		// If this is the case, it means that we're getting
		// events for a file descriptor that's been close()'d
		// w/o the user unregistering for events first.
		// The code currently assumes that the user will
		// take care of unregistering for events before closing
		// a file.
		//
		// TODO:
		// Should we track event registration and deregister
		// for events when a file is close()'d so this doesn't
		// happen? That would get rid of the search through
@@ -137,8 +125,8 @@
		// it would mean having all sorts of tables to track
		// what each file is registered for...I think the
		// current method is simpler. - DS
		//
		if (!inf)
			return;

		inf->event_q[inf->q_in].id.iop = c->unit;
@@ -149,278 +137,167 @@
		// Data size = msg size - reply header
		//
		inf->event_q[inf->q_in].data_size = (m->size - 5) * 4;
		if (inf->event_q[inf->q_in].data_size)
			memcpy(inf->event_q[inf->q_in].evt_data,
			       (unsigned char *)(msg + 5),
			       inf->event_q[inf->q_in].data_size);

		spin_lock(&i2o_config_lock);
		MODINC(inf->q_in, I2O_EVT_Q_LEN);
		if (inf->q_len == I2O_EVT_Q_LEN) {
			MODINC(inf->q_out, I2O_EVT_Q_LEN);
			inf->q_lost++;
		} else {
			// Keep I2OEVTGET on another CPU from touching this
			inf->q_len++;
		}
		spin_unlock(&i2o_config_lock);

//		printk(KERN_INFO "File %p w/id %d has %d events\n",
//			inf->fp, inf->q_id, inf->q_len);
		kill_fasync(&inf->fasync, SIGIO, POLL_IN);
	}

	return;
}
#endif
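The i2o_cfg_reply() handler above resolves the 32-bit context carried in the message back to a queue id via i2o_cntxt_list_get(). On 64-bit machines a pointer no longer fits in the frame's 32-bit context field, so the core keeps a list mapping small handles to pointers (per the changelog a doubly linked list; the sketch below uses a singly linked one and invented names):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Userspace sketch of the 64-bit pointer <-> 32-bit context mapping:
 * hardware message frames carry only 32 bits, so pointers are parked
 * in a list and a small handle is sent on the wire instead. */
struct ctx_entry {
	struct ctx_entry *next;
	uint32_t context;	/* fits in a 32-bit message field */
	void *ptr;
};

static struct ctx_entry *ctx_head;
static uint32_t next_context = 1;

static uint32_t ctx_list_add(void *ptr)
{
	struct ctx_entry *e = malloc(sizeof(*e));

	e->context = next_context++;
	e->ptr = ptr;
	e->next = ctx_head;
	ctx_head = e;
	return e->context;
}

static void *ctx_list_get(uint32_t context)
{
	struct ctx_entry *e;

	for (e = ctx_head; e; e = e->next)
		if (e->context == context)
			return e->ptr;
	return NULL;		/* stale or unknown handle */
}
```

On 32-bit hosts the real core can skip the indirection entirely and put the pointer straight into the context field.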
/*
 * Each of these describes an i2o message handler. They are
 * multiplexed by the i2o_core code
 */
struct i2o_handler cfg_handler=
{
i2o_cfg_reply,
NULL,
NULL,
NULL,
"Configuration",
0,
0xffffffff // All classes
};
static ssize_t cfg_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos)
{
printk(KERN_INFO "i2o_config write not yet supported\n");
return 0;
}
struct i2o_driver i2o_config_driver = {
.name = "Config-OSM"
};
static ssize_t cfg_read(struct file *file, char __user *buf, size_t count, loff_t *ptr)
{
	return 0;
}

/*
 * IOCTL Handler
 */
static int cfg_ioctl(struct inode *inode, struct file *fp, unsigned int cmd,
		     unsigned long arg)
{
	int ret;

	switch(cmd)
	{
		case I2OGETIOPS:
			ret = ioctl_getiops(arg);
			break;

		case I2OHRTGET:
			ret = ioctl_gethrt(arg);
			break;

		case I2OLCTGET:
			ret = ioctl_getlct(arg);
			break;

		case I2OPARMSET:
			ret = ioctl_parms(arg, I2OPARMSET);
			break;

		case I2OPARMGET:
			ret = ioctl_parms(arg, I2OPARMGET);
			break;

		case I2OSWDL:
			ret = ioctl_swdl(arg);
			break;

		case I2OSWUL:
			ret = ioctl_swul(arg);
			break;

		case I2OSWDEL:
			ret = ioctl_swdel(arg);
			break;

		case I2OVALIDATE:
			ret = ioctl_validate(arg);
			break;

		case I2OHTML:
			ret = ioctl_html(arg);
			break;

		case I2OEVTREG:
			ret = ioctl_evt_reg(arg, fp);
			break;

		case I2OEVTGET:
			ret = ioctl_evt_get(arg, fp);
			break;

		case I2OPASSTHRU:
			ret = ioctl_passthru(arg);
			break;

		default:
			ret = -EINVAL;
	}

	return ret;
}

int ioctl_getiops(unsigned long arg)
{
	u8 __user *user_iop_table = (void __user *)arg;
	struct i2o_controller *c = NULL;
	int i;
	u8 foo[MAX_I2O_CONTROLLERS];

	if(!access_ok(VERIFY_WRITE, user_iop_table, MAX_I2O_CONTROLLERS))
		return -EFAULT;

	for(i = 0; i < MAX_I2O_CONTROLLERS; i++)
	{
		c = i2o_find_controller(i);
		if(c)
		{
			foo[i] = 1;
			if(pci_set_dma_mask(c->pdev, 0xffffffff))
			{
				printk(KERN_WARNING "i2o_config : No suitable DMA available on controller %d\n", i);
				i2o_unlock_controller(c);
				continue;
			}
			i2o_unlock_controller(c);
		}
		else
		{
			foo[i] = 0;
		}
	}
	__copy_to_user(user_iop_table, foo, MAX_I2O_CONTROLLERS);
	return 0;
}

static int i2o_cfg_getiops(unsigned long arg)
{
	struct i2o_controller *c;
	u8 __user *user_iop_table = (void __user *)arg;
	u8 tmp[MAX_I2O_CONTROLLERS];

	memset(tmp, 0, MAX_I2O_CONTROLLERS);

	if (!access_ok(VERIFY_WRITE, user_iop_table, MAX_I2O_CONTROLLERS))
		return -EFAULT;

	list_for_each_entry(c, &i2o_controllers, list)
	    tmp[c->unit] = 1;

	__copy_to_user(user_iop_table, tmp, MAX_I2O_CONTROLLERS);

	return 0;
};
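The new i2o_cfg_getiops() above walks the controller list once and marks a byte per present unit, instead of probing every possible controller index as the old ioctl_getiops() did. A userspace sketch of that table filling (hypothetical list type standing in for the core's i2o_controllers list):

```c
#include <assert.h>
#include <string.h>

#define MAX_CTRL 32	/* stand-in for MAX_I2O_CONTROLLERS */

struct ctrl {
	struct ctrl *next;
	int unit;
};

/* Fill a presence table: tmp[unit] == 1 for every registered
 * controller, 0 everywhere else. */
static void fill_iop_table(const struct ctrl *head, unsigned char *tmp)
{
	memset(tmp, 0, MAX_CTRL);
	for (; head; head = head->next)
		tmp[head->unit] = 1;
}
```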
static int i2o_cfg_gethrt(unsigned long arg)
{
	struct i2o_controller *c;
	struct i2o_cmd_hrtlct __user *cmd = (struct i2o_cmd_hrtlct __user *)arg;
	struct i2o_cmd_hrtlct kcmd;
	i2o_hrt *hrt;
	int len;
	u32 reslen;
	int ret = 0;

	if (copy_from_user(&kcmd, cmd, sizeof(struct i2o_cmd_hrtlct)))
		return -EFAULT;

	if (get_user(reslen, kcmd.reslen) < 0)
		return -EFAULT;

	if (kcmd.resbuf == NULL)
		return -EFAULT;

	c = i2o_find_iop(kcmd.iop);
	if (!c)
		return -ENXIO;

	hrt = (i2o_hrt *) c->hrt.virt;

	len = 8 + ((hrt->entry_len * hrt->num_entries) << 2);

	/* We did a get user...so assuming mem is ok...is this bad? */
	put_user(len, kcmd.reslen);
	if (len > reslen)
		ret = -ENOBUFS;
	if (copy_to_user(kcmd.resbuf, (void *)hrt, len))
		ret = -EFAULT;

	return ret;
};
static int i2o_cfg_getlct(unsigned long arg)
{
	struct i2o_controller *c;
	struct i2o_cmd_hrtlct __user *cmd = (struct i2o_cmd_hrtlct __user *)arg;
	struct i2o_cmd_hrtlct kcmd;
	i2o_lct *lct;
	int len;
	int ret = 0;
	u32 reslen;

	if (copy_from_user(&kcmd, cmd, sizeof(struct i2o_cmd_hrtlct)))
		return -EFAULT;

	if (get_user(reslen, kcmd.reslen) < 0)
		return -EFAULT;

	if (kcmd.resbuf == NULL)
		return -EFAULT;

	c = i2o_find_iop(kcmd.iop);
	if (!c)
		return -ENXIO;

	lct = (i2o_lct *) c->lct;

	len = (unsigned int)lct->table_size << 2;
	put_user(len, kcmd.reslen);
	if (len > reslen)
		ret = -ENOBUFS;
	else if (copy_to_user(kcmd.resbuf, lct, len))
		ret = -EFAULT;

	return ret;
};
static int i2o_cfg_parms(unsigned long arg, unsigned int type)
{
	int ret = 0;
	struct i2o_controller *c;
	struct i2o_device *dev;
	struct i2o_cmd_psetget __user *cmd =
	    (struct i2o_cmd_psetget __user *)arg;
	struct i2o_cmd_psetget kcmd;
	u32 reslen;
	u8 *ops;
	u8 *res;
	int len = 0;

	u32 i2o_cmd = (type == I2OPARMGET ?
		       I2O_CMD_UTIL_PARAMS_GET : I2O_CMD_UTIL_PARAMS_SET);

	if (copy_from_user(&kcmd, cmd, sizeof(struct i2o_cmd_psetget)))
		return -EFAULT;

	if (get_user(reslen, kcmd.reslen))
		return -EFAULT;

	c = i2o_find_iop(kcmd.iop);
	if (!c)
		return -ENXIO;

	dev = i2o_iop_find_device(c, kcmd.tid);
	if (!dev)
		return -ENXIO;

	ops = (u8 *) kmalloc(kcmd.oplen, GFP_KERNEL);
	if (!ops)
		return -ENOMEM;

	if (copy_from_user(ops, kcmd.opbuf, kcmd.oplen)) {
		kfree(ops);
		return -EFAULT;
	}

@@ -429,404 +306,309 @@
	/*
	 * It's possible to have a _very_ large table
	 * and that the user asks for all of it at once...
	 */
	res = (u8 *) kmalloc(65536, GFP_KERNEL);
	if (!res) {
		kfree(ops);
		return -ENOMEM;
	}

	len = i2o_parm_issue(dev, i2o_cmd, ops, kcmd.oplen, res, 65536);

	kfree(ops);

	if (len < 0) {
		kfree(res);
		return -EAGAIN;
	}

	put_user(len, kcmd.reslen);
	if (len > reslen)
		ret = -ENOBUFS;
	else if (copy_to_user(kcmd.resbuf, res, len))
		ret = -EFAULT;

	kfree(res);

	return ret;
};
int ioctl_html(unsigned long arg)
{
struct i2o_html __user *cmd = (void __user *)arg;
struct i2o_html kcmd;
struct i2o_controller *c;
u8 *res = NULL;
void *query = NULL;
dma_addr_t query_phys, res_phys;
int ret = 0;
int token;
u32 len;
u32 reslen;
u32 msg[MSG_FRAME_SIZE];
if(copy_from_user(&kcmd, cmd, sizeof(struct i2o_html)))
{
printk(KERN_INFO "i2o_config: can't copy html cmd\n");
return -EFAULT;
}
if(get_user(reslen, kcmd.reslen) < 0)
{
printk(KERN_INFO "i2o_config: can't copy html reslen\n");
return -EFAULT;
}
if(!kcmd.resbuf)
{
printk(KERN_INFO "i2o_config: NULL html buffer\n");
return -EFAULT;
}
c = i2o_find_controller(kcmd.iop);
if(!c)
return -ENXIO;
if(kcmd.qlen) /* Check for post data */
{
query = pci_alloc_consistent(c->pdev, kcmd.qlen, &query_phys);
if(!query)
{
i2o_unlock_controller(c);
return -ENOMEM;
}
if(copy_from_user(query, kcmd.qbuf, kcmd.qlen))
{
i2o_unlock_controller(c);
printk(KERN_INFO "i2o_config: could not get query\n");
pci_free_consistent(c->pdev, kcmd.qlen, query, query_phys);
return -EFAULT;
}
}
res = pci_alloc_consistent(c->pdev, 65536, &res_phys);
if(!res)
{
i2o_unlock_controller(c);
pci_free_consistent(c->pdev, kcmd.qlen, query, query_phys);
return -ENOMEM;
}
msg[1] = (I2O_CMD_UTIL_CONFIG_DIALOG << 24)|HOST_TID<<12|kcmd.tid;
msg[2] = i2o_cfg_context;
msg[3] = 0;
msg[4] = kcmd.page;
msg[5] = 0xD0000000|65536;
msg[6] = res_phys;
if(!kcmd.qlen) /* Check for post data */
msg[0] = SEVEN_WORD_MSG_SIZE|SGL_OFFSET_5;
else
{
msg[0] = NINE_WORD_MSG_SIZE|SGL_OFFSET_5;
msg[5] = 0x50000000|65536;
msg[7] = 0xD4000000|(kcmd.qlen);
msg[8] = query_phys;
}
/*
Wait for a considerable time till the Controller
does its job before timing out. The controller might
take more time to process this request if there are
many devices connected to it.
*/
token = i2o_post_wait_mem(c, msg, 9*4, 400, query, res, query_phys, res_phys, kcmd.qlen, 65536);
if(token < 0)
{
printk(KERN_DEBUG "token = %#10x\n", token);
i2o_unlock_controller(c);
if(token != -ETIMEDOUT)
{
pci_free_consistent(c->pdev, 65536, res, res_phys);
if(kcmd.qlen)
pci_free_consistent(c->pdev, kcmd.qlen, query, query_phys);
}
return token;
}
i2o_unlock_controller(c);
len = strnlen(res, 65536);
put_user(len, kcmd.reslen);
if(len > reslen)
ret = -ENOMEM;
if(copy_to_user(kcmd.resbuf, res, len))
ret = -EFAULT;
pci_free_consistent(c->pdev, 65536, res, res_phys);
if(kcmd.qlen)
pci_free_consistent(c->pdev, kcmd.qlen, query, query_phys);
	return ret;
}

int ioctl_swdl(unsigned long arg)
{
	struct i2o_sw_xfer kxfer;
	struct i2o_sw_xfer __user *pxfer = (void __user *)arg;
	unsigned char maxfrag = 0, curfrag = 1;
	unsigned char *buffer;
	u32 msg[9];
	unsigned int status = 0, swlen = 0, fragsize = 8192;
	struct i2o_controller *c;
	dma_addr_t buffer_phys;

	if(copy_from_user(&kxfer, pxfer, sizeof(struct i2o_sw_xfer)))
		return -EFAULT;

	if(get_user(swlen, kxfer.swlen) < 0)
		return -EFAULT;

	if(get_user(maxfrag, kxfer.maxfrag) < 0)
		return -EFAULT;

	if(get_user(curfrag, kxfer.curfrag) < 0)
		return -EFAULT;

	if(curfrag==maxfrag) fragsize = swlen-(maxfrag-1)*8192;

	if(!kxfer.buf || !access_ok(VERIFY_READ, kxfer.buf, fragsize))
		return -EFAULT;

	c = i2o_find_controller(kxfer.iop);
	if(!c)
		return -ENXIO;

	buffer=pci_alloc_consistent(c->pdev, fragsize, &buffer_phys);
	if (buffer==NULL)
	{
		i2o_unlock_controller(c);
		return -ENOMEM;
	}
	__copy_from_user(buffer, kxfer.buf, fragsize);

	msg[0]= NINE_WORD_MSG_SIZE | SGL_OFFSET_7;
	msg[1]= I2O_CMD_SW_DOWNLOAD<<24 | HOST_TID<<12 | ADAPTER_TID;
	msg[2]= (u32)cfg_handler.context;
	msg[3]= 0;
	msg[4]= (((u32)kxfer.flags)<<24) | (((u32)kxfer.sw_type)<<16) |
		(((u32)maxfrag)<<8) | (((u32)curfrag));
	msg[5]= swlen;
	msg[6]= kxfer.sw_id;
	msg[7]= (0xD0000000 | fragsize);
	msg[8]= buffer_phys;

//	printk("i2o_config: swdl frag %d/%d (size %d)\n", curfrag, maxfrag, fragsize);
	status = i2o_post_wait_mem(c, msg, sizeof(msg), 60, buffer, NULL, buffer_phys, 0, fragsize, 0);

	i2o_unlock_controller(c);
	if(status != -ETIMEDOUT)
		pci_free_consistent(c->pdev, fragsize, buffer, buffer_phys);

	if (status != I2O_POST_WAIT_OK)
	{
		// it fails if you try and send frags out of order
		// and for some yet unknown reasons too
		printk(KERN_INFO "i2o_config: swdl failed, DetailedStatus = %d\n", status);
		return status;
	}

	return 0;
}

static int i2o_cfg_swdl(unsigned long arg)
{
	struct i2o_sw_xfer kxfer;
	struct i2o_sw_xfer __user *pxfer = (struct i2o_sw_xfer __user *)arg;
	unsigned char maxfrag = 0, curfrag = 1;
	struct i2o_dma buffer;
	struct i2o_message *msg;
	u32 m;
	unsigned int status = 0, swlen = 0, fragsize = 8192;
	struct i2o_controller *c;

	if (copy_from_user(&kxfer, pxfer, sizeof(struct i2o_sw_xfer)))
		return -EFAULT;

	if (get_user(swlen, kxfer.swlen) < 0)
		return -EFAULT;

	if (get_user(maxfrag, kxfer.maxfrag) < 0)
		return -EFAULT;

	if (get_user(curfrag, kxfer.curfrag) < 0)
		return -EFAULT;

	if (curfrag == maxfrag)
		fragsize = swlen - (maxfrag - 1) * 8192;

	if (!kxfer.buf || !access_ok(VERIFY_READ, kxfer.buf, fragsize))
		return -EFAULT;

	c = i2o_find_iop(kxfer.iop);
	if (!c)
		return -ENXIO;

	m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
	if (m == I2O_QUEUE_EMPTY)
		return -EBUSY;

	if (i2o_dma_alloc(&c->pdev->dev, &buffer, fragsize, GFP_KERNEL)) {
		i2o_msg_nop(c, m);
		return -ENOMEM;
	}

	__copy_from_user(buffer.virt, kxfer.buf, fragsize);

	writel(NINE_WORD_MSG_SIZE | SGL_OFFSET_7, &msg->u.head[0]);
	writel(I2O_CMD_SW_DOWNLOAD << 24 | HOST_TID << 12 | ADAPTER_TID,
	       &msg->u.head[1]);
	writel(i2o_config_driver.context, &msg->u.head[2]);
	writel(0, &msg->u.head[3]);
	writel((((u32) kxfer.flags) << 24) | (((u32) kxfer.sw_type) << 16) |
	       (((u32) maxfrag) << 8) | (((u32) curfrag)), &msg->body[0]);
	writel(swlen, &msg->body[1]);
	writel(kxfer.sw_id, &msg->body[2]);
	writel(0xD0000000 | fragsize, &msg->body[3]);
	writel(buffer.phys, &msg->body[4]);

//	printk("i2o_config: swdl frag %d/%d (size %d)\n", curfrag, maxfrag, fragsize);
	status = i2o_msg_post_wait_mem(c, m, 60, &buffer);

	if (status != -ETIMEDOUT)
		i2o_dma_free(&c->pdev->dev, &buffer);

	if (status != I2O_POST_WAIT_OK) {
		// it fails if you try and send frags out of order
		// and for some yet unknown reasons too
		printk(KERN_INFO
		       "i2o_config: swdl failed, DetailedStatus = %d\n",
		       status);
		return status;
	}

	return 0;
};
static int i2o_cfg_swul(unsigned long arg)
{
	struct i2o_sw_xfer kxfer;
	struct i2o_sw_xfer __user *pxfer = (struct i2o_sw_xfer __user *)arg;
	unsigned char maxfrag = 0, curfrag = 1;
	struct i2o_dma buffer;
	struct i2o_message *msg;
	u32 m;
	unsigned int status = 0, swlen = 0, fragsize = 8192;
	struct i2o_controller *c;

	if (copy_from_user(&kxfer, pxfer, sizeof(struct i2o_sw_xfer)))
		return -EFAULT;

	if (get_user(swlen, kxfer.swlen) < 0)
		return -EFAULT;
	if (get_user(maxfrag, kxfer.maxfrag) < 0)
		return -EFAULT;
	if (get_user(curfrag, kxfer.curfrag) < 0)
		return -EFAULT;

	if (curfrag == maxfrag)
		fragsize = swlen - (maxfrag - 1) * 8192;

	if (!kxfer.buf || !access_ok(VERIFY_WRITE, kxfer.buf, fragsize))
		return -EFAULT;

	c = i2o_find_iop(kxfer.iop);
	if (!c)
		return -ENXIO;

	m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
	if (m == I2O_QUEUE_EMPTY)
		return -EBUSY;

	if (i2o_dma_alloc(&c->pdev->dev, &buffer, fragsize, GFP_KERNEL)) {
		i2o_msg_nop(c, m);
		return -ENOMEM;
	}

	writel(NINE_WORD_MSG_SIZE | SGL_OFFSET_7, &msg->u.head[0]);
	writel(I2O_CMD_SW_UPLOAD << 24 | HOST_TID << 12 | ADAPTER_TID,
	       &msg->u.head[1]);
	writel(i2o_config_driver.context, &msg->u.head[2]);
	writel(0, &msg->u.head[3]);
	writel((u32) kxfer.flags << 24 | (u32) kxfer.sw_type << 16 |
	       (u32) maxfrag << 8 | (u32) curfrag, &msg->body[0]);
	writel(swlen, &msg->body[1]);
	writel(kxfer.sw_id, &msg->body[2]);
	writel(0xD0000000 | fragsize, &msg->body[3]);
	writel(buffer.phys, &msg->body[4]);

	// printk("i2o_config: swul frag %d/%d (size %d)\n", curfrag, maxfrag, fragsize);
	status = i2o_msg_post_wait_mem(c, m, 60, &buffer);

	if (status != I2O_POST_WAIT_OK) {
		if (status != -ETIMEDOUT)
			i2o_dma_free(&c->pdev->dev, &buffer);
		printk(KERN_INFO
		       "i2o_config: swul failed, DetailedStatus = %d\n",
		       status);
		return status;
	}

	__copy_to_user(kxfer.buf, buffer.virt, fragsize);
	i2o_dma_free(&c->pdev->dev, &buffer);

	return 0;
};
static int i2o_cfg_swdel(unsigned long arg)
{
	struct i2o_controller *c;
	struct i2o_sw_xfer kxfer;
	struct i2o_sw_xfer __user *pxfer = (struct i2o_sw_xfer __user *)arg;
	struct i2o_message *msg;
	u32 m;
	unsigned int swlen;
	int token;

	if (copy_from_user(&kxfer, pxfer, sizeof(struct i2o_sw_xfer)))
		return -EFAULT;

	if (get_user(swlen, kxfer.swlen) < 0)
		return -EFAULT;

	c = i2o_find_iop(kxfer.iop);
	if (!c)
		return -ENXIO;

	m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
	if (m == I2O_QUEUE_EMPTY)
		return -EBUSY;

	writel(SEVEN_WORD_MSG_SIZE | SGL_OFFSET_0, &msg->u.head[0]);
	writel(I2O_CMD_SW_REMOVE << 24 | HOST_TID << 12 | ADAPTER_TID,
	       &msg->u.head[1]);
	writel(i2o_config_driver.context, &msg->u.head[2]);
	writel(0, &msg->u.head[3]);
	writel((u32) kxfer.flags << 24 | (u32) kxfer.sw_type << 16,
	       &msg->body[0]);
	writel(swlen, &msg->body[1]);
	writel(kxfer.sw_id, &msg->body[2]);

	token = i2o_msg_post_wait(c, m, 10);

	if (token != I2O_POST_WAIT_OK) {
		printk(KERN_INFO
		       "i2o_config: swdel failed, DetailedStatus = %d\n",
		       token);
		return -ETIMEDOUT;
	}

	return 0;
};
static int i2o_cfg_validate(unsigned long arg)
{
	int token;
	int iop = (int)arg;
	struct i2o_message *msg;
	u32 m;
	struct i2o_controller *c;

	c = i2o_find_iop(iop);
	if (!c)
		return -ENXIO;

	m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
	if (m == I2O_QUEUE_EMPTY)
		return -EBUSY;

	writel(FOUR_WORD_MSG_SIZE | SGL_OFFSET_0, &msg->u.head[0]);
	writel(I2O_CMD_CONFIG_VALIDATE << 24 | HOST_TID << 12 | iop,
	       &msg->u.head[1]);
	writel(i2o_config_driver.context, &msg->u.head[2]);
	writel(0, &msg->u.head[3]);

	token = i2o_msg_post_wait(c, m, 10);

	if (token != I2O_POST_WAIT_OK) {
		printk(KERN_INFO "Can't validate configuration, ErrorStatus = "
		       "%d\n", token);
		return -ETIMEDOUT;
	}

	return 0;
};
static int i2o_cfg_evt_reg(unsigned long arg, struct file *fp)
{
	struct i2o_message *msg;
	u32 m;
	struct i2o_evt_id __user *pdesc = (struct i2o_evt_id __user *)arg;
	struct i2o_evt_id kdesc;
	struct i2o_controller *c;
	struct i2o_device *d;

	if (copy_from_user(&kdesc, pdesc, sizeof(struct i2o_evt_id)))
		return -EFAULT;

	/* IOP exists? */
	c = i2o_find_iop(kdesc.iop);
	if (!c)
		return -ENXIO;

	/* Device exists? */
	d = i2o_iop_find_device(c, kdesc.tid);
	if (!d)
		return -ENODEV;

	m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
	if (m == I2O_QUEUE_EMPTY)
		return -EBUSY;

	writel(FOUR_WORD_MSG_SIZE | SGL_OFFSET_0, &msg->u.head[0]);
	writel(I2O_CMD_UTIL_EVT_REGISTER << 24 | HOST_TID << 12 | kdesc.tid,
	       &msg->u.head[1]);
	writel(i2o_config_driver.context, &msg->u.head[2]);
	writel(i2o_cntxt_list_add(c, fp->private_data), &msg->u.head[3]);
	writel(kdesc.evt_mask, &msg->body[0]);

	i2o_msg_post(c, m);

	return 0;
}
static int i2o_cfg_evt_get(unsigned long arg, struct file *fp)
{
	struct i2o_cfg_info *p = NULL;
	struct i2o_evt_get __user *uget = (struct i2o_evt_get __user *)arg;
	struct i2o_evt_get kget;
	unsigned long flags;

	for (p = open_files; p; p = p->next)
		if (p->q_id == (ulong) fp->private_data)
			break;

	if (!p->q_len)
		return -ENOENT;

	memcpy(&kget.info, &p->event_q[p->q_out], sizeof(struct i2o_evt_info));
	MODINC(p->q_out, I2O_EVT_Q_LEN);
@@ -836,16 +618,234 @@ static int ioctl_evt_get(unsigned long arg, struct file *fp)
	kget.lost = p->q_lost;
	spin_unlock_irqrestore(&i2o_config_lock, flags);

	if (copy_to_user(uget, &kget, sizeof(struct i2o_evt_get)))
		return -EFAULT;

	return 0;
}
#if BITS_PER_LONG == 64
static int i2o_cfg_passthru32(unsigned fd, unsigned cmnd, unsigned long arg,
			      struct file *file)
{
	struct i2o_cmd_passthru32 __user *cmd;
struct i2o_controller *c;
u32 *user_msg;
u32 *reply = NULL;
u32 *user_reply = NULL;
u32 size = 0;
u32 reply_size = 0;
u32 rcode = 0;
struct i2o_dma sg_list[SG_TABLESIZE];
u32 sg_offset = 0;
u32 sg_count = 0;
u32 i = 0;
i2o_status_block *sb;
struct i2o_message *msg;
u32 m;
unsigned int iop;
cmd = (struct i2o_cmd_passthru32 __user *)arg;
if (get_user(iop, &cmd->iop) || get_user(user_msg, &cmd->msg))
return -EFAULT;
c = i2o_find_iop(iop);
if (!c) {
pr_debug("controller %d not found\n", iop);
return -ENXIO;
}
m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
sb = c->status_block.virt;
if (get_user(size, &user_msg[0])) {
printk(KERN_WARNING "unable to get size!\n");
return -EFAULT;
}
size = size >> 16;
if (size > sb->inbound_frame_size) {
pr_debug("size of message > inbound_frame_size");
return -EFAULT;
}
user_reply = &user_msg[size];
size <<= 2; // Convert to bytes
/* Copy in the user's I2O command */
if (copy_from_user(msg, user_msg, size)) {
printk(KERN_WARNING "unable to copy user message\n");
return -EFAULT;
}
i2o_dump_message(msg);
if (get_user(reply_size, &user_reply[0]) < 0)
return -EFAULT;
reply_size >>= 16;
reply_size <<= 2;
reply = kmalloc(reply_size, GFP_KERNEL);
if (!reply) {
printk(KERN_WARNING "%s: Could not allocate reply buffer\n",
c->name);
return -ENOMEM;
}
memset(reply, 0, reply_size);
sg_offset = (msg->u.head[0] >> 4) & 0x0f;
writel(i2o_config_driver.context, &msg->u.s.icntxt);
writel(i2o_cntxt_list_add(c, reply), &msg->u.s.tcntxt);
memset(sg_list, 0, sizeof(sg_list[0]) * SG_TABLESIZE);
if (sg_offset) {
struct sg_simple_element *sg;
if (sg_offset * 4 >= size) {
rcode = -EFAULT;
goto cleanup;
}
// TODO 64bit fix
sg = (struct sg_simple_element *)((&msg->u.head[0]) +
sg_offset);
sg_count =
(size - sg_offset * 4) / sizeof(struct sg_simple_element);
if (sg_count > SG_TABLESIZE) {
printk(KERN_DEBUG "%s:IOCTL SG List too large (%u)\n",
c->name, sg_count);
kfree(reply);
return -EINVAL;
}
for (i = 0; i < sg_count; i++) {
int sg_size;
struct i2o_dma *p;
if (!(sg[i].flag_count & 0x10000000
/*I2O_SGL_FLAGS_SIMPLE_ADDRESS_ELEMENT */ )) {
printk(KERN_DEBUG
"%s:Bad SG element %d - not simple (%x)\n",
c->name, i, sg[i].flag_count);
rcode = -EINVAL;
goto cleanup;
}
sg_size = sg[i].flag_count & 0xffffff;
p = &(sg_list[i]);
/* Allocate memory for the transfer */
if (i2o_dma_alloc
(&c->pdev->dev, p, sg_size,
PCI_DMA_BIDIRECTIONAL)) {
printk(KERN_DEBUG
"%s: Could not allocate SG buffer - size = %d buffer number %d of %d\n",
c->name, sg_size, i, sg_count);
rcode = -ENOMEM;
goto cleanup;
}
/* Copy in the user's SG buffer if necessary */
if (sg[i].
flag_count & 0x04000000 /*I2O_SGL_FLAGS_DIR */ ) {
// TODO 64bit fix
if (copy_from_user
(p->virt, (void *)(u64) sg[i].addr_bus,
sg_size)) {
printk(KERN_DEBUG
"%s: Could not copy SG buf %d FROM user\n",
c->name, i);
rcode = -EFAULT;
goto cleanup;
}
}
//TODO 64bit fix
sg[i].addr_bus = (u32) p->phys;
}
}
rcode = i2o_msg_post_wait(c, m, 60);
if (rcode)
goto cleanup;
if (sg_offset) {
u32 msg[128];
/* Copy back the Scatter Gather buffers back to user space */
u32 j;
// TODO 64bit fix
struct sg_simple_element *sg;
int sg_size;
printk(KERN_INFO "sg_offset\n");
// re-acquire the original message to handle correctly the sg copy operation
memset(&msg, 0, MSG_FRAME_SIZE * 4);
// get user msg size in u32s
if (get_user(size, &user_msg[0])) {
rcode = -EFAULT;
goto cleanup;
}
size = size >> 16;
size *= 4;
/* Copy in the user's I2O command */
if (copy_from_user(msg, user_msg, size)) {
rcode = -EFAULT;
goto cleanup;
}
sg_count =
(size - sg_offset * 4) / sizeof(struct sg_simple_element);
// TODO 64bit fix
sg = (struct sg_simple_element *)(msg + sg_offset);
for (j = 0; j < sg_count; j++) {
/* Copy out the SG list to user's buffer if necessary */
if (!
(sg[j].
flag_count & 0x4000000 /*I2O_SGL_FLAGS_DIR */ )) {
sg_size = sg[j].flag_count & 0xffffff;
// TODO 64bit fix
if (copy_to_user
((void __user *)(u64) sg[j].addr_bus,
sg_list[j].virt, sg_size)) {
printk(KERN_WARNING
"%s: Could not copy %p TO user %x\n",
c->name, sg_list[j].virt,
sg[j].addr_bus);
rcode = -EFAULT;
goto cleanup;
}
}
}
}
/* Copy back the reply to user space */
if (reply_size) {
// we wrote our own values for context - now restore the user supplied ones
printk(KERN_INFO "reply_size\n");
if (copy_from_user(reply + 2, user_msg + 2, sizeof(u32) * 2)) {
printk(KERN_WARNING
"%s: Could not copy message context FROM user\n",
c->name);
rcode = -EFAULT;
}
if (copy_to_user(user_reply, reply, reply_size)) {
printk(KERN_WARNING
"%s: Could not copy reply TO user\n", c->name);
rcode = -EFAULT;
}
}
cleanup:
kfree(reply);
printk(KERN_INFO "rcode: %d\n", rcode);
return rcode;
}
#else
static int i2o_cfg_passthru(unsigned long arg)
{
struct i2o_cmd_passthru __user *cmd =
(struct i2o_cmd_passthru __user *)arg;
	struct i2o_controller *c;
	u32 __user *user_msg;
	u32 *reply = NULL;
	u32 __user *user_reply = NULL;
@@ -858,64 +858,88 @@ static int ioctl_passthru(unsigned long arg)
	int sg_index = 0;
	u32 i = 0;
	void *p = NULL;
	i2o_status_block *sb;
	struct i2o_message *msg;
	u32 m;
	unsigned int iop;
	if (get_user(iop, &cmd->iop) || get_user(user_msg, &cmd->msg))
		return -EFAULT;

	c = i2o_find_iop(iop);
	if (!c) {
		pr_debug("controller %d not found\n", iop);
		return -ENXIO;
	}

	m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);

	sb = c->status_block.virt;

	if (get_user(size, &user_msg[0]))
		return -EFAULT;
	size = size >> 16;

	if (size > sb->inbound_frame_size) {
		pr_debug("size of message > inbound_frame_size");
		return -EFAULT;
	}

	user_reply = &user_msg[size];

	size <<= 2;		// Convert to bytes

	/* Copy in the user's I2O command */
	if (copy_from_user(msg, user_msg, size))
		return -EFAULT;

	if (get_user(reply_size, &user_reply[0]) < 0)
		return -EFAULT;

	reply_size >>= 16;
	reply_size <<= 2;

	reply = kmalloc(reply_size, GFP_KERNEL);
	if (!reply) {
		printk(KERN_WARNING "%s: Could not allocate reply buffer\n",
		       c->name);
		return -ENOMEM;
	}
	memset(reply, 0, reply_size);
	sg_offset = (msg->u.head[0] >> 4) & 0x0f;

	writel(i2o_config_driver.context, &msg->u.s.icntxt);
	writel(i2o_cntxt_list_add(c, reply), &msg->u.s.tcntxt);

	memset(sg_list, 0, sizeof(sg_list[0]) * SG_TABLESIZE);
	if (sg_offset) {
		struct sg_simple_element *sg;

		if (sg_offset * 4 >= size) {
			rcode = -EFAULT;
			goto cleanup;
		}
		// TODO 64bit fix
		sg = (struct sg_simple_element *)((&msg->u.head[0]) +
						  sg_offset);
		sg_count =
		    (size - sg_offset * 4) / sizeof(struct sg_simple_element);
		if (sg_count > SG_TABLESIZE) {
			printk(KERN_DEBUG "%s:IOCTL SG List too large (%u)\n",
			       c->name, sg_count);
			kfree(reply);
			return -EINVAL;
		}

		for (i = 0; i < sg_count; i++) {
			int sg_size;

			if (!(sg[i].flag_count & 0x10000000
			      /*I2O_SGL_FLAGS_SIMPLE_ADDRESS_ELEMENT */ )) {
				printk(KERN_DEBUG
				       "%s:Bad SG element %d - not simple (%x)\n",
				       c->name, i, sg[i].flag_count);
				rcode = -EINVAL;
				goto cleanup;
			}
@@ -923,61 +947,78 @@ static int ioctl_passthru(unsigned long arg)
			/* Allocate memory for the transfer */
			p = kmalloc(sg_size, GFP_KERNEL);
			if (!p) {
				printk(KERN_DEBUG
				       "%s: Could not allocate SG buffer - size = %d buffer number %d of %d\n",
				       c->name, sg_size, i, sg_count);
				rcode = -ENOMEM;
				goto cleanup;
			}
			sg_list[sg_index++] = p;	// sglist indexed with input frame, not our internal frame.

			/* Copy in the user's SG buffer if necessary */
			if (sg[i].flag_count & 0x04000000 /*I2O_SGL_FLAGS_DIR */ ) {
				// TODO 64bit fix
				if (copy_from_user(p, (void __user *)sg[i].addr_bus,
						   sg_size)) {
					printk(KERN_DEBUG
					       "%s: Could not copy SG buf %d FROM user\n",
					       c->name, i);
					rcode = -EFAULT;
					goto cleanup;
				}
			}
			//TODO 64bit fix
			sg[i].addr_bus = virt_to_bus(p);
		}
	}
	rcode = i2o_msg_post_wait(c, m, 60);
	if (rcode)
		goto cleanup;

	if (sg_offset) {
		u32 msg[128];
		/* Copy back the Scatter Gather buffers back to user space */
		u32 j;
		// TODO 64bit fix
		struct sg_simple_element *sg;
		int sg_size;
		printk(KERN_INFO "sg_offset\n");

		// re-acquire the original message to handle correctly the sg copy operation
		memset(&msg, 0, MSG_FRAME_SIZE * 4);
		// get user msg size in u32s
		if (get_user(size, &user_msg[0])) {
			rcode = -EFAULT;
			goto cleanup;
		}
		size = size >> 16;
		size *= 4;
		/* Copy in the user's I2O command */
		if (copy_from_user(msg, user_msg, size)) {
			rcode = -EFAULT;
			goto cleanup;
		}
		sg_count =
		    (size - sg_offset * 4) / sizeof(struct sg_simple_element);

		// TODO 64bit fix
		sg = (struct sg_simple_element *)(msg + sg_offset);
		for (j = 0; j < sg_count; j++) {
			/* Copy out the SG list to user's buffer if necessary */
			if (!(sg[j].flag_count & 0x4000000 /*I2O_SGL_FLAGS_DIR */ )) {
				sg_size = sg[j].flag_count & 0xffffff;
				// TODO 64bit fix
				if (copy_to_user((void __user *)sg[j].addr_bus,
						 sg_list[j], sg_size)) {
					printk(KERN_WARNING
					       "%s: Could not copy %p TO user %x\n",
					       c->name, sg_list[j],
					       sg[j].addr_bus);
					rcode = -EFAULT;
					goto cleanup;
				}
@@ -986,37 +1027,109 @@ static int ioctl_passthru(unsigned long arg)
	}

	/* Copy back the reply to user space */
	if (reply_size) {
		// we wrote our own values for context - now restore the user supplied ones
		printk(KERN_INFO "reply_size\n");
		if (copy_from_user(reply + 2, user_msg + 2, sizeof(u32) * 2)) {
			printk(KERN_WARNING
			       "%s: Could not copy message context FROM user\n",
			       c->name);
			rcode = -EFAULT;
		}
		if (copy_to_user(user_reply, reply, reply_size)) {
			printk(KERN_WARNING
			       "%s: Could not copy reply TO user\n", c->name);
			rcode = -EFAULT;
		}
	}

      cleanup:
	kfree(reply);
	return rcode;
}
#endif
/*
* IOCTL Handler
*/
static int i2o_cfg_ioctl(struct inode *inode, struct file *fp, unsigned int cmd,
unsigned long arg)
{
int ret;
switch (cmd) {
case I2OGETIOPS:
ret = i2o_cfg_getiops(arg);
break;
case I2OHRTGET:
ret = i2o_cfg_gethrt(arg);
break;
case I2OLCTGET:
ret = i2o_cfg_getlct(arg);
break;
case I2OPARMSET:
ret = i2o_cfg_parms(arg, I2OPARMSET);
break;
case I2OPARMGET:
ret = i2o_cfg_parms(arg, I2OPARMGET);
break;
case I2OSWDL:
ret = i2o_cfg_swdl(arg);
break;
case I2OSWUL:
ret = i2o_cfg_swul(arg);
break;
case I2OSWDEL:
ret = i2o_cfg_swdel(arg);
break;
case I2OVALIDATE:
ret = i2o_cfg_validate(arg);
break;
case I2OEVTREG:
ret = i2o_cfg_evt_reg(arg, fp);
break;
case I2OEVTGET:
ret = i2o_cfg_evt_get(arg, fp);
break;
#if BITS_PER_LONG != 64
case I2OPASSTHRU:
ret = i2o_cfg_passthru(arg);
break;
#endif
default:
pr_debug("i2o_config: unknown ioctl called!\n");
ret = -EINVAL;
}
return ret;
}
static int cfg_open(struct inode *inode, struct file *file)
{
	struct i2o_cfg_info *tmp =
	    (struct i2o_cfg_info *)kmalloc(sizeof(struct i2o_cfg_info),
					   GFP_KERNEL);
	unsigned long flags;

	if (!tmp)
		return -ENOMEM;

	file->private_data = (void *)(i2o_cfg_info_id++);
	tmp->fp = file;
	tmp->fasync = NULL;
	tmp->q_id = (ulong) file->private_data;
	tmp->q_len = 0;
	tmp->q_in = 0;
	tmp->q_out = 0;
@@ -1026,13 +1139,28 @@ static int cfg_open(struct inode *inode, struct file *file)
	spin_lock_irqsave(&i2o_config_lock, flags);
	open_files = tmp;
	spin_unlock_irqrestore(&i2o_config_lock, flags);

	return 0;
}
static int cfg_fasync(int fd, struct file *fp, int on)
{
ulong id = (ulong) fp->private_data;
struct i2o_cfg_info *p;
for (p = open_files; p; p = p->next)
if (p->q_id == id)
break;
if (!p)
return -EBADF;
return fasync_helper(fd, fp, on, &p->fasync);
}
static int cfg_release(struct inode *inode, struct file *file)
{
	ulong id = (ulong) file->private_data;
	struct i2o_cfg_info *p1, *p2;
	unsigned long flags;
@@ -1040,14 +1168,12 @@ static int cfg_release(struct inode *inode, struct file *file)
	p1 = p2 = NULL;

	spin_lock_irqsave(&i2o_config_lock, flags);
	for (p1 = open_files; p1;) {
		if (p1->q_id == id) {
			if (p1->fasync)
				cfg_fasync(-1, file, 0);
			if (p2)
				p2->next = p1->next;
			else
				open_files = p1->next;
@@ -1064,83 +1190,55 @@ static int cfg_release(struct inode *inode, struct file *file)
	return 0;
}
static struct file_operations config_fops = {
	.owner = THIS_MODULE,
	.llseek = no_llseek,
	.ioctl = i2o_cfg_ioctl,
	.open = cfg_open,
	.release = cfg_release,
	.fasync = cfg_fasync,
};

static struct miscdevice i2o_miscdev = {
	I2O_MINOR,
	"i2octl",
	&config_fops
};
static int __init i2o_config_init(void)
{
	printk(KERN_INFO "I2O configuration manager v 0.04.\n");
	printk(KERN_INFO "  (C) Copyright 1999 Red Hat Software\n");

	if (misc_register(&i2o_miscdev) < 0) {
		printk(KERN_ERR "i2o_config: can't register device.\n");
		return -EBUSY;
	}
	/*
	 * Install our handler
	 */
	if (i2o_driver_register(&i2o_config_driver)) {
		printk(KERN_ERR "i2o_config: handler register failed.\n");
		misc_deregister(&i2o_miscdev);
		return -EBUSY;
	}
#if BITS_PER_LONG == 64
	register_ioctl32_conversion(I2OPASSTHRU32, i2o_cfg_passthru32);
	register_ioctl32_conversion(I2OGETIOPS, (void *)sys_ioctl);
#endif

	return 0;
}
static void i2o_config_exit(void)
{
#if BITS_PER_LONG == 64
	unregister_ioctl32_conversion(I2OPASSTHRU32);
	unregister_ioctl32_conversion(I2OGETIOPS);
#endif
	misc_deregister(&i2o_miscdev);
	i2o_driver_unregister(&i2o_config_driver);
}
MODULE_AUTHOR("Red Hat Software");
MODULE_DESCRIPTION("I2O Configuration");
MODULE_LICENSE("GPL");
/*
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License as published by the
 * Free Software Foundation; either version 2, or (at your option) any
@@ -19,13 +19,13 @@
 *
 *	o Each (bus,lun) is a logical device in I2O. We keep a map
 *	  table. We spoof failed selection for unmapped units
 *	o Request sense buffers can come back for free.
 *	o Scatter gather is a bit dynamic. We have to investigate at
 *	  setup time.
 *	o Some of our resources are dynamically shared. The i2o core
 *	  needs a message reservation protocol to avoid swap v net
 *	  deadlocking. We need to back off queue requests.
 *
 * In general the firmware wants to help. Where its help isn't performance
 * useful we just ignore the aid. Its not worth the code in truth.
 *
@@ -40,7 +40,6 @@
 * Fix the resource management problems.
 */

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/types.h>
@@ -53,79 +52,272 @@
#include <linux/proc_fs.h>
#include <linux/prefetch.h>
#include <linux/pci.h>
#include <linux/blkdev.h>
#include <linux/i2o.h>

#include <asm/dma.h>
#include <asm/system.h>
#include <asm/io.h>
#include <asm/atomic.h>

#include <scsi/scsi.h>
#include <scsi/scsi_host.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_cmnd.h>
#define VERSION_STRING	"Version 0.1.2"

static int i2o_scsi_max_id = 16;
static int i2o_scsi_max_lun = 8;

static LIST_HEAD(i2o_scsi_hosts);

struct i2o_scsi_host {
	struct list_head list;		/* node in i2o_scsi_hosts */
	struct Scsi_Host *scsi_host;	/* pointer to the SCSI host */
	struct i2o_controller *iop;	/* pointer to the I2O controller */
	struct i2o_device *channel[0];	/* channel->i2o_dev mapping table */
};

static struct scsi_host_template i2o_scsi_host_template;

/*
 * This is only needed because we can only set the hostdata after the device
 * is added to the scsi core. So we need this little workaround.
 */
static DECLARE_MUTEX(i2o_scsi_probe_lock);
static struct i2o_device *i2o_scsi_probe_dev = NULL;

static int i2o_scsi_slave_alloc(struct scsi_device *sdp)
{
	sdp->hostdata = i2o_scsi_probe_dev;
	return 0;
};

#define I2O_SCSI_CAN_QUEUE	4

/* SCSI OSM class handling definition */
static struct i2o_class_id i2o_scsi_class_id[] = {
	{I2O_CLASS_SCSI_PERIPHERAL},
	{I2O_CLASS_END}
};
static struct i2o_scsi_host *i2o_scsi_host_alloc(struct i2o_controller *c)
{
	struct i2o_scsi_host *i2o_shost;
	struct i2o_device *i2o_dev;
	struct Scsi_Host *scsi_host;
	int max_channel = 0;
	u8 type;
	int i;
	size_t size;
	i2o_status_block *sb;

	list_for_each_entry(i2o_dev, &c->devices, list)
	    if (i2o_dev->lct_data.class_id == I2O_CLASS_BUS_ADAPTER_PORT) {
		if (i2o_parm_field_get(i2o_dev, 0x0000, 0, &type, 1) || (type == 1))	/* SCSI bus */
			max_channel++;
	}

	if (!max_channel) {
		printk(KERN_WARNING "scsi-osm: no channels found on %s\n",
		       c->name);
		return ERR_PTR(-EFAULT);
	}

	size = max_channel * sizeof(struct i2o_device *)
	    + sizeof(struct i2o_scsi_host);

	scsi_host = scsi_host_alloc(&i2o_scsi_host_template, size);
	if (!scsi_host) {
		printk(KERN_WARNING "scsi-osm: Could not allocate SCSI host\n");
		return ERR_PTR(-ENOMEM);
	}

	scsi_host->max_channel = max_channel - 1;
	scsi_host->max_id = i2o_scsi_max_id;
	scsi_host->max_lun = i2o_scsi_max_lun;
	scsi_host->this_id = c->unit;

	sb = c->status_block.virt;

	scsi_host->sg_tablesize = (sb->inbound_frame_size -
				   sizeof(struct i2o_message) / 4 - 6) / 2;

	i2o_shost = (struct i2o_scsi_host *)scsi_host->hostdata;
	i2o_shost->scsi_host = scsi_host;
	i2o_shost->iop = c;

	i = 0;
	list_for_each_entry(i2o_dev, &c->devices, list)
	    if (i2o_dev->lct_data.class_id == I2O_CLASS_BUS_ADAPTER_PORT) {
		if (i2o_parm_field_get(i2o_dev, 0x0000, 0, &type, 1) || (type == 1))	/* only SCSI bus */
			i2o_shost->channel[i++] = i2o_dev;

		if (i >= max_channel)
			break;
	}

	return i2o_shost;
};
/**
 *	i2o_scsi_get_host - Get an I2O SCSI host
 *	@c: I2O controller for which to get the SCSI host
 *
 *	If the I2O controller already exists as SCSI host, the SCSI host
 *	is returned, otherwise the I2O controller is added to the SCSI
 *	core.
 *
 *	Returns pointer to the I2O SCSI host on success or negative error code
 *	on failure.
 */
static struct i2o_scsi_host *i2o_scsi_get_host(struct i2o_controller *c)
{
	struct i2o_scsi_host *i2o_shost;
	int rc;

	/* skip if already registered as I2O SCSI host */
	list_for_each_entry(i2o_shost, &i2o_scsi_hosts, list)
	    if (i2o_shost->iop == c)
		return i2o_shost;

	i2o_shost = i2o_scsi_host_alloc(c);
	if (IS_ERR(i2o_shost)) {
		printk(KERN_ERR "scsi-osm: Could not initialize SCSI host\n");
		return i2o_shost;
	}

	rc = scsi_add_host(i2o_shost->scsi_host, &c->device);
	if (rc) {
		printk(KERN_ERR "scsi-osm: Could not add SCSI host\n");
		scsi_host_put(i2o_shost->scsi_host);
		return ERR_PTR(rc);
	}

	list_add(&i2o_shost->list, &i2o_scsi_hosts);

	pr_debug("new I2O SCSI host added\n");

	return i2o_shost;
};

/**
 *	i2o_scsi_remove - Remove I2O device from SCSI core
 *	@dev: device which should be removed
 *
 *	Removes the I2O device from the SCSI core again.
 *
 *	Returns 0 on success.
 */
static int i2o_scsi_remove(struct device *dev)
{
	struct i2o_device *i2o_dev = to_i2o_device(dev);
	struct i2o_controller *c = i2o_dev->iop;
	struct i2o_scsi_host *i2o_shost;
	struct scsi_device *scsi_dev;

	i2o_shost = i2o_scsi_get_host(c);

	shost_for_each_device(scsi_dev, i2o_shost->scsi_host)
	    if (scsi_dev->hostdata == i2o_dev) {
		scsi_remove_device(scsi_dev);
		scsi_device_put(scsi_dev);
		break;
	}

	return 0;
};
/**
 *	i2o_scsi_probe - verify if dev is a I2O SCSI device and install it
 *	@dev: device to verify if it is a I2O SCSI device
 *
 *	Retrieve channel, id and lun for I2O device. If everything goes well
 *	register the I2O device as SCSI device on the I2O SCSI controller.
 *
 *	Returns 0 on success or negative error code on failure.
 */
static int i2o_scsi_probe(struct device *dev)
{
	struct i2o_device *i2o_dev = to_i2o_device(dev);
	struct i2o_controller *c = i2o_dev->iop;
	struct i2o_scsi_host *i2o_shost;
	struct Scsi_Host *scsi_host;
	struct i2o_device *parent;
	struct scsi_device *scsi_dev;
	u32 id;
	u64 lun;
	int channel = -1;
	int i;

	i2o_shost = i2o_scsi_get_host(c);
	if (IS_ERR(i2o_shost))
		return PTR_ERR(i2o_shost);

	scsi_host = i2o_shost->scsi_host;

	if (i2o_parm_field_get(i2o_dev, 0, 3, &id, 4) < 0)
		return -EFAULT;

	if (id >= scsi_host->max_id) {
		printk(KERN_WARNING "scsi-osm: SCSI device id (%d) >= max_id "
		       "of I2O host (%d)", id, scsi_host->max_id);
		return -EFAULT;
	}

	if (i2o_parm_field_get(i2o_dev, 0, 4, &lun, 8) < 0)
		return -EFAULT;
	if (lun >= scsi_host->max_lun) {
		printk(KERN_WARNING "scsi-osm: SCSI device lun (%d) >= max_lun "
		       "of I2O host (%d)", (unsigned int)lun,
		       scsi_host->max_lun);
		return -EFAULT;
	}

	parent = i2o_iop_find_device(c, i2o_dev->lct_data.parent_tid);
	if (!parent) {
		printk(KERN_WARNING "scsi-osm: can not find parent of device "
		       "%03x\n", i2o_dev->lct_data.tid);
		return -EFAULT;
	}

	for (i = 0; i <= i2o_shost->scsi_host->max_channel; i++)
		if (i2o_shost->channel[i] == parent)
			channel = i;

	if (channel == -1) {
		printk(KERN_WARNING "scsi-osm: can not find channel of device "
		       "%03x\n", i2o_dev->lct_data.tid);
		return -EFAULT;
	}

	down_interruptible(&i2o_scsi_probe_lock);
	i2o_scsi_probe_dev = i2o_dev;
	scsi_dev = scsi_add_device(i2o_shost->scsi_host, channel, id, lun);
	i2o_scsi_probe_dev = NULL;
	up(&i2o_scsi_probe_lock);

	if (!scsi_dev) {
		printk(KERN_WARNING "scsi-osm: can not add SCSI device "
		       "%03x\n", i2o_dev->lct_data.tid);
		return -EFAULT;
	}

	pr_debug("Added new SCSI device %03x (channel: %d, id: %d, lun: %d)\n",
		 i2o_dev->lct_data.tid, channel, id, (unsigned int)lun);

	return 0;
};

static const char *i2o_scsi_info(struct Scsi_Host *SChost)
{
	struct i2o_scsi_host *hostdata;
	hostdata = (struct i2o_scsi_host *)SChost->hostdata;
	return hostdata->iop->name;
}
#if 0
/**
 *	i2o_retry_run - retry on timeout
 *	@f: unused
@@ -136,16 +328,16 @@
 *	and its default handler should be this in the core, and this
 *	call a 2nd "I give up" handler in the OSM ?
 */
static void i2o_retry_run(unsigned long f)
{
	int i;
	unsigned long flags;

	spin_lock_irqsave(&retry_lock, flags);
	for (i = 0; i < retry_ct; i++)
		i2o_post_message(retry_ctrl[i], virt_to_bus(retry[i]));
	retry_ct = 0;
	spin_unlock_irqrestore(&retry_lock, flags);
}

@@ -155,740 +347,408 @@
 *	Turn each of the pending commands into a NOP and post it back
 *	to the controller to clear it.
 */
static void flush_pending(void)
{
	int i;
	unsigned long flags;

	spin_lock_irqsave(&retry_lock, flags);
	for (i = 0; i < retry_ct; i++) {
		retry[i][0] &= ~0xFFFFFF;
		retry[i][0] |= I2O_CMD_UTIL_NOP << 24;
		i2o_post_message(retry_ctrl[i], virt_to_bus(retry[i]));
	}
	retry_ct = 0;
	spin_unlock_irqrestore(&retry_lock, flags);
}
#endif
/**
 *	i2o_scsi_reply - SCSI OSM message reply handler
 *	@c: controller issuing the reply
 *	@m: message id for flushing
 *	@msg: the message from the controller
 *
 *	Process reply messages (interrupts in normal scsi controller think).
 *	We can get a variety of messages to process. The normal path is
 *	scsi command completions. We must also deal with IOP failures,
 *	the reply to a bus reset and the reply to a LUN query.
 *
 *	Returns 0 on success and if the reply should not be flushed or > 0
 *	on success and if the reply should be flushed. Returns negative error
 *	code on failure and if the reply should be flushed.
 */
static int i2o_scsi_reply(struct i2o_controller *c, u32 m,
			  struct i2o_message *msg)
{
	struct scsi_cmnd *cmd;
	struct device *dev;
	u8 as, ds, st;

	cmd = i2o_cntxt_list_get(c, readl(&msg->u.s.tcntxt));

	if (msg->u.head[0] & (1 << 13)) {
		struct i2o_message *pmsg;	/* preserved message */
		u32 pm;

		pm = readl(&msg->body[3]);

		pmsg = c->in_queue.virt + pm;

		printk("IOP fail.\n");
		printk("From %d To %d Cmd %d.\n",
		       (msg->u.head[1] >> 12) & 0xFFF,
		       msg->u.head[1] & 0xFFF, msg->u.head[1] >> 24);
		printk("Failure Code %d.\n", msg->body[0] >> 24);
		if (msg->body[0] & (1 << 16))
			printk("Format error.\n");
		if (msg->body[0] & (1 << 17))
			printk("Path error.\n");
		if (msg->body[0] & (1 << 18))
			printk("Path State.\n");
		if (msg->body[0] & (1 << 18))
			printk("Congestion.\n");

		printk("Failing message is %p.\n", pmsg);

		cmd = i2o_cntxt_list_get(c, readl(&pmsg->u.s.tcntxt));
		if (!cmd)
			return 1;

		printk("Aborted %ld\n", cmd->serial_number);
		cmd->result = DID_ERROR << 16;
		cmd->scsi_done(cmd);

		/* Now flush the message by making it a NOP */
		i2o_msg_nop(c, pm);

		return 1;
	}

	/*
	 *	Low byte is device status, next is adapter status,
	 *	(then one byte reserved), then request status.
	 */
	ds = (u8) readl(&msg->body[0]);
	as = (u8) (readl(&msg->body[0]) >> 8);
	st = (u8) (readl(&msg->body[0]) >> 24);

	/*
	 *	Is this a control request coming back - eg an abort ?
	 */
	if (!cmd) {
		if (st)
			printk(KERN_WARNING "SCSI abort: %08X",
			       readl(&msg->body[0]));
		printk(KERN_INFO "SCSI abort completed.\n");
		return -EFAULT;
	}

	pr_debug("Completed %ld\n", cmd->serial_number);

	if (st) {
		u32 count, error;
		/* An error has occurred */

		switch (st) {
		case 0x06:
			count = readl(&msg->body[1]);
			if (count < cmd->underflow) {
				int i;
				printk(KERN_ERR "SCSI: underflow 0x%08X 0x%08X"
				       "\n", count, cmd->underflow);
				printk("Cmd: ");
				for (i = 0; i < 15; i++)
					printk("%02X ", cmd->cmnd[i]);
				printk(".\n");
				cmd->result = (DID_ERROR << 16);
			}
			break;

		default:
			error = readl(&msg->body[0]);

			printk(KERN_ERR "scsi-osm: SCSI error %08x\n", error);

			if ((error & 0xff) == 0x02 /*CHECK_CONDITION */ ) {
				int i;
				u32 len = sizeof(cmd->sense_buffer);
				len = (len > 40) ? 40 : len;
				// Copy over the sense data
				memcpy(cmd->sense_buffer, (void *)&msg->body[3],
				       len);

				for (i = 0; i <= len; i++)
					printk(KERN_INFO "%02x\n",
					       cmd->sense_buffer[i]);

				if (cmd->sense_buffer[0] == 0x70
				    && cmd->sense_buffer[2] == DATA_PROTECT) {
					/* This is to handle an array failed */
					cmd->result = (DID_TIME_OUT << 16);
					printk(KERN_WARNING "%s: SCSI Data "
					       "Protect-Device (%d,%d,%d) "
					       "hba_status=0x%x, dev_status="
					       "0x%x, cmd=0x%x\n", c->name,
					       (u32) cmd->device->channel,
					       (u32) cmd->device->id,
					       (u32) cmd->device->lun,
					       (error >> 8) & 0xff,
					       error & 0xff, cmd->cmnd[0]);
				} else
					cmd->result = (DID_ERROR << 16);

				break;
			}

			switch (as) {
			case 0x0E:
				/* SCSI Reset */
				cmd->result = DID_RESET << 16;
				break;

			case 0x0F:
				cmd->result = DID_PARITY << 16;
				break;

			default:
				cmd->result = DID_ERROR << 16;
				break;
			}

			break;
		}

		cmd->scsi_done(cmd);
		return 1;
	}

	cmd->result = DID_OK << 16 | ds;

	cmd->scsi_done(cmd);

	dev = &c->pdev->dev;
	if (cmd->use_sg)
		dma_unmap_sg(dev, (struct scatterlist *)cmd->buffer,
			     cmd->use_sg, cmd->sc_data_direction);
	else if (cmd->request_bufflen)
		dma_unmap_single(dev, (dma_addr_t) ((long)cmd->SCp.ptr),
				 cmd->request_bufflen, cmd->sc_data_direction);

	return 1;
}

/* SCSI OSM driver struct */
static struct i2o_driver i2o_scsi_driver = {
	.name = "scsi-osm",
	.reply = i2o_scsi_reply,
	.classes = i2o_scsi_class_id,
	.driver = {
		   .probe = i2o_scsi_probe,
		   .remove = i2o_scsi_remove,
		   },
};
/**
 *	i2o_scsi_queuecommand - queue a SCSI command
 *	@SCpnt: scsi command pointer
 *	@done: callback for completion
 *
 *	Issue a scsi command asynchronously. Return 0 on success or 1 if
 *	we hit an error (normally message queue congestion). The only
 *	minor complication here is that I2O deals with the device addressing
 *	so we have to map the bus/dev/lun back to an I2O handle as well
 *	as faking absent devices ourself.
 *
 *	Locks: takes the controller lock on error path only
 */
static int i2o_scsi_queuecommand(struct scsi_cmnd *SCpnt,
				 void (*done) (struct scsi_cmnd *))
{
	struct i2o_controller *c;
	struct Scsi_Host *host;
	struct i2o_device *i2o_dev;
	struct device *dev;
	int tid;
	struct i2o_message *msg;
	u32 m;
	u32 scsi_flags, sg_flags;
	u32 *mptr, *lenptr;
	u32 len, reqlen;
	int i;

	/*
	 *	Do the incoming paperwork
	 */

	i2o_dev = SCpnt->device->hostdata;
	host = SCpnt->device->host;
	c = i2o_dev->iop;
	dev = &c->pdev->dev;

	SCpnt->scsi_done = done;

	if (unlikely(!i2o_dev)) {
		printk(KERN_WARNING "scsi-osm: no I2O device in request\n");
		SCpnt->result = DID_NO_CONNECT << 16;
		done(SCpnt);
		return 0;
	}

	tid = i2o_dev->lct_data.tid;

	pr_debug("qcmd: Tid = %03x\n", tid);
	pr_debug("Real scsi messages.\n");

	/*
	 *	Obtain an I2O message. If there are none free then
	 *	throw it back to the scsi layer
	 */

	m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
	if (m == I2O_QUEUE_EMPTY)
		return SCSI_MLQUEUE_HOST_BUSY;

	/*
	 *	Put together a scsi execscb message
	 */

	len = SCpnt->request_bufflen;

	switch (SCpnt->sc_data_direction) {
	case PCI_DMA_NONE:
		scsi_flags = 0x00000000;	// DATA NO XFER
		sg_flags = 0x00000000;
		break;

	case PCI_DMA_TODEVICE:
		scsi_flags = 0x80000000;	// DATA OUT (iop-->dev)
		sg_flags = 0x14000000;
		break;

	case PCI_DMA_FROMDEVICE:
		scsi_flags = 0x40000000;	// DATA IN (iop<--dev)
		sg_flags = 0x10000000;
		break;

	default:
		/* Unknown - kill the command */
		SCpnt->result = DID_NO_CONNECT << 16;
		done(SCpnt);
		return 0;
	}

	writel(I2O_CMD_SCSI_EXEC << 24 | HOST_TID << 12 | tid, &msg->u.head[1]);
	writel(i2o_scsi_driver.context, &msg->u.s.icntxt);

	/* We want the SCSI control block back */
	writel(i2o_cntxt_list_add(c, SCpnt), &msg->u.s.tcntxt);

	/* LSI_920_PCI_QUIRK
	 *
	 *	Intermittent observations of msg frame word data corruption
	 *	observed on msg[4] after:
	 *	  WRITE, READ-MODIFY-WRITE
	 *	operations.  19990606 -sralston
	 *
	 *	(Hence we build this word via tag. Its good practice anyway
	 *	 we don't want fetches over PCI needlessly)
	 */

	/* Attach tags to the devices */
	/*
	   if(SCpnt->device->tagged_supported) {
	   if(SCpnt->tag == HEAD_OF_QUEUE_TAG)
	   scsi_flags |= 0x01000000;
	   else if(SCpnt->tag == ORDERED_QUEUE_TAG)
	   scsi_flags |= 0x01800000;
	   }
	 */

	/* Direction, disconnect ok, tag, CDBLen */
	writel(scsi_flags | 0x20200000 | SCpnt->cmd_len, &msg->body[0]);

	mptr = &msg->body[1];

	/* Write SCSI command into the message - always 16 byte block */
	memcpy_toio(mptr, SCpnt->cmnd, 16);
	mptr += 4;
	lenptr = mptr++;	/* Remember me - fill in when we know */

	reqlen = 12;		// SINGLE SGE

	/* Now fill in the SGList and command */
	if (SCpnt->use_sg) {
		struct scatterlist *sg;
		int sg_count;

		sg = SCpnt->request_buffer;
		len = 0;

		sg_count = dma_map_sg(dev, sg, SCpnt->use_sg,
				      SCpnt->sc_data_direction);

		if (unlikely(sg_count <= 0))
			return -ENOMEM;

		for (i = SCpnt->use_sg; i > 0; i--) {
			if (i == 1)
				sg_flags |= 0xC0000000;
			writel(sg_flags | sg_dma_len(sg), mptr++);
			writel(sg_dma_address(sg), mptr++);
			len += sg_dma_len(sg);
			sg++;
		}

		reqlen = mptr - &msg->u.head[0];
		writel(len, lenptr);
	} else {
		len = SCpnt->request_bufflen;

		writel(len, lenptr);

		if (len > 0) {
			dma_addr_t dma_addr;

			dma_addr = dma_map_single(dev, SCpnt->request_buffer,
						  SCpnt->request_bufflen,
						  SCpnt->sc_data_direction);
			if (!dma_addr)
				return -ENOMEM;
SCpnt->SCp.ptr = (char *)(unsigned long) dma_addr;
i2o_raw_writel(0xD0000000|direction|SCpnt->request_bufflen, mptr++); SCpnt->SCp.ptr = (void *)(unsigned long)dma_addr;
i2o_raw_writel(dma_addr, mptr++); sg_flags |= 0xC0000000;
} writel(sg_flags | SCpnt->request_bufflen, mptr++);
writel(dma_addr, mptr++);
} else
reqlen = 9;
} }
/*
* Stick the headers on
*/
i2o_raw_writel(reqlen<<16 | SGL_OFFSET_10, msg); /* Stick the headers on */
writel(reqlen << 16 | SGL_OFFSET_10, &msg->u.head[0]);
/* Queue the message */ /* Queue the message */
i2o_post_message(c,m); i2o_msg_post(c, m);
atomic_inc(&queue_depth); pr_debug("Issued %ld\n", SCpnt->serial_number);
if(atomic_read(&queue_depth)> max_qd)
{
max_qd=atomic_read(&queue_depth);
printk("Queue depth now %d.\n", max_qd);
}
mb();
dprintk(KERN_INFO "Issued %ld\n", current_command->serial_number);
return 0; return 0;
} };
#if 0
FIXME
/**
 *	i2o_scsi_abort - abort a running command
 *	@SCpnt: command to abort
 *
 *	Ask the I2O controller to abort a command. This is an asynchronous
 *	process and our callback handler will see the command complete with
 *	an aborted message if it succeeds.
 *
 *	Locks: no locks are held or needed
 */
static int i2o_scsi_abort(struct scsi_cmnd *SCpnt)
{
	struct i2o_controller *c;
	struct Scsi_Host *host;
...@@ -896,119 +756,48 @@ static int i2o_scsi_abort(struct scsi_cmnd * SCpnt)
	u32 msg[5];
	int tid;
	int status = FAILED;

	printk(KERN_WARNING "i2o_scsi: Aborting command block.\n");

	host = SCpnt->device->host;
	hostdata = (struct i2o_scsi_host *)host->hostdata;
	tid = hostdata->task[SCpnt->device->id][SCpnt->device->lun];
	if (tid == -1) {
		printk(KERN_ERR "i2o_scsi: Impossible command to abort!\n");
		return status;
	}
	c = hostdata->controller;

	spin_unlock_irq(host->host_lock);

	msg[0] = FIVE_WORD_MSG_SIZE;
	msg[1] = I2O_CMD_SCSI_ABORT << 24 | HOST_TID << 12 | tid;
	msg[2] = scsi_context;
	msg[3] = 0;
	msg[4] = i2o_context_list_remove(SCpnt, c);
	if (i2o_post_wait(c, msg, sizeof(msg), 240))
		status = SUCCESS;

	spin_lock_irq(host->host_lock);
	return status;
}
#endif
/**
 *	i2o_scsi_bios_param - Invent disk geometry
 *	@sdev: scsi device
 *	@dev: block layer device
 *	@capacity: size in sectors
 *	@ip: geometry array
 *
 *	This is anyone's guess quite frankly. We use the same rules everyone
 *	else appears to and hope. It seems to work.
 */
static int i2o_scsi_bios_param(struct scsi_device *sdev,
			       struct block_device *dev, sector_t capacity,
			       int *ip)
{
	int size;
...@@ -1023,25 +812,75 @@ static int i2o_scsi_bios_param(struct scsi_device * sdev,
	return 0;
}
static struct scsi_host_template i2o_scsi_host_template = {
	.proc_name = "SCSI-OSM",
	.name = "I2O SCSI Peripheral OSM",
	.info = i2o_scsi_info,
	.queuecommand = i2o_scsi_queuecommand,
/*
	.eh_abort_handler = i2o_scsi_abort,
*/
	.bios_param = i2o_scsi_bios_param,
	.can_queue = I2O_SCSI_CAN_QUEUE,
	.sg_tablesize = 8,
	.cmd_per_lun = 6,
	.use_clustering = ENABLE_CLUSTERING,
	.slave_alloc = i2o_scsi_slave_alloc,
};

/*
int
i2o_scsi_queuecommand(struct scsi_cmnd * cmd, void (*done) (struct scsi_cmnd *))
{
	printk(KERN_INFO "queuecommand\n");
	return SCSI_MLQUEUE_HOST_BUSY;
};
*/

/**
 *	i2o_scsi_init - SCSI OSM initialization function
 *
 *	Register SCSI OSM into I2O core.
 *
 *	Returns 0 on success or negative error code on failure.
 */
static int __init i2o_scsi_init(void)
{
	int rc;

	printk(KERN_INFO "I2O SCSI Peripheral OSM\n");

	/* Register SCSI OSM into I2O core */
	rc = i2o_driver_register(&i2o_scsi_driver);
	if (rc) {
		printk(KERN_ERR "scsi-osm: Could not register SCSI driver\n");
		return rc;
	}

	return 0;
};

/**
* i2o_scsi_exit - SCSI OSM exit function
*
* Unregisters SCSI OSM from I2O core.
*/
static void __exit i2o_scsi_exit(void)
{
struct i2o_scsi_host *i2o_shost, *tmp;
/* Remove I2O SCSI hosts */
list_for_each_entry_safe(i2o_shost, tmp, &i2o_scsi_hosts, list) {
scsi_remove_host(i2o_shost->scsi_host);
scsi_host_put(i2o_shost->scsi_host);
}
/* Unregister I2O SCSI OSM from I2O core */
i2o_driver_unregister(&i2o_scsi_driver);
};
MODULE_AUTHOR("Red Hat Software");
MODULE_LICENSE("GPL");
module_init(i2o_scsi_init);
module_exit(i2o_scsi_exit);
/*
* Functions to handle I2O controllers and I2O message handling
*
* Copyright (C) 1999-2002 Red Hat Software
*
* Written by Alan Cox, Building Number Three Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
* A lot of the I2O message side code from this is taken from the
* Red Creek RCPCI45 adapter driver by Red Creek Communications
*
* Fixes/additions:
* Philipp Rumpf
 * Juha Sievänen <Juha.Sievanen@cs.Helsinki.FI>
 * Auvo Häkkinen <Auvo.Hakkinen@cs.Helsinki.FI>
* Deepak Saxena <deepak@plexity.net>
* Boji T Kannanthanam <boji.t.kannanthanam@intel.com>
* Alan Cox <alan@redhat.com>:
* Ported to Linux 2.5.
* Markus Lidel <Markus.Lidel@shadowconnect.com>:
* Minor fixes for 2.6.
*/
#include <linux/module.h>
#include <linux/i2o.h>
/* global I2O controller list */
LIST_HEAD(i2o_controllers);
/*
 * global I2O System Table. Contains information about all the IOPs in the
 * system. Used to inform IOPs about each other's existence.
*/
static struct i2o_dma i2o_systab;
/* Module internal functions from other sources */
extern struct i2o_driver i2o_exec_driver;
extern int i2o_exec_lct_get(struct i2o_controller *);
extern void i2o_device_remove(struct i2o_device *);
extern int __init i2o_driver_init(void);
extern void __exit i2o_driver_exit(void);
extern int __init i2o_exec_init(void);
extern void __exit i2o_exec_exit(void);
extern int __init i2o_pci_init(void);
extern void __exit i2o_pci_exit(void);
extern int i2o_device_init(void);
extern void i2o_device_exit(void);
/**
* i2o_msg_nop - Returns a message which is not used
* @c: I2O controller from which the message was created
* @m: message which should be returned
*
* If you fetch a message via i2o_msg_get, and can't use it, you must
* return the message with this function. Otherwise the message frame
* is lost.
*/
void i2o_msg_nop(struct i2o_controller *c, u32 m)
{
struct i2o_message *msg = c->in_queue.virt + m;
writel(THREE_WORD_MSG_SIZE | SGL_OFFSET_0, &msg->u.head[0]);
writel(I2O_CMD_UTIL_NOP << 24 | HOST_TID << 12 | ADAPTER_TID,
&msg->u.head[1]);
writel(0, &msg->u.head[2]);
writel(0, &msg->u.head[3]);
i2o_msg_post(c, m);
};
/**
* i2o_msg_get_wait - obtain an I2O message from the IOP
* @c: I2O controller
* @msg: pointer to a I2O message pointer
* @wait: how long to wait until timeout
*
* This function waits up to wait seconds for a message slot to be
* available.
*
* On a success the message is returned and the pointer to the message is
* set in msg. The returned message is the physical page frame offset
* address from the read port (see the i2o spec). If no message is
 * available, I2O_QUEUE_EMPTY is returned and msg is left untouched.
*/
u32 i2o_msg_get_wait(struct i2o_controller *c, struct i2o_message **msg,
int wait)
{
unsigned long timeout = jiffies + wait * HZ;
u32 m;
while ((m = i2o_msg_get(c, msg)) == I2O_QUEUE_EMPTY) {
if (time_after(jiffies, timeout)) {
pr_debug("%s: Timeout waiting for message frame.\n",
c->name);
return I2O_QUEUE_EMPTY;
}
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_timeout(1);
}
return m;
};
#if BITS_PER_LONG == 64
/**
* i2o_cntxt_list_add - Append a pointer to context list and return a id
* @ptr: pointer to add to the context list
 * @c: controller to which the context list belongs
*
 * Because the context field in I2O is only 32 bits wide, on 64-bit the
 * pointer is too large to fit in the context field. The i2o_cntxt_list
* functions therefore map pointers to context fields.
*
* Returns context id > 0 on success or 0 on failure.
*/
u32 i2o_cntxt_list_add(struct i2o_controller * c, void *ptr)
{
struct i2o_context_list_element *entry;
unsigned long flags;
if (!ptr)
printk(KERN_ERR "NULL pointer found!\n");
entry = kmalloc(sizeof(*entry), GFP_ATOMIC);
if (!entry) {
printk(KERN_ERR "i2o: Could not allocate memory for context "
"list element\n");
return 0;
}
entry->ptr = ptr;
entry->timestamp = jiffies;
INIT_LIST_HEAD(&entry->list);
spin_lock_irqsave(&c->context_list_lock, flags);
if (unlikely(atomic_inc_and_test(&c->context_list_counter)))
atomic_inc(&c->context_list_counter);
entry->context = atomic_read(&c->context_list_counter);
list_add(&entry->list, &c->context_list);
spin_unlock_irqrestore(&c->context_list_lock, flags);
pr_debug("Add context to list %p -> %d\n", ptr, entry->context);
return entry->context;
};
/**
* i2o_cntxt_list_remove - Remove a pointer from the context list
* @ptr: pointer which should be removed from the context list
 * @c: controller to which the context list belongs
*
* Removes a previously added pointer from the context list and returns
* the matching context id.
*
 * Returns context id on success or 0 on failure.
*/
u32 i2o_cntxt_list_remove(struct i2o_controller * c, void *ptr)
{
struct i2o_context_list_element *entry;
u32 context = 0;
unsigned long flags;
spin_lock_irqsave(&c->context_list_lock, flags);
list_for_each_entry(entry, &c->context_list, list)
if (entry->ptr == ptr) {
list_del(&entry->list);
context = entry->context;
kfree(entry);
break;
}
spin_unlock_irqrestore(&c->context_list_lock, flags);
if (!context)
printk(KERN_WARNING "i2o: Could not remove nonexistent ptr "
"%p\n", ptr);
pr_debug("remove ptr from context list %d -> %p\n", context, ptr);
return context;
};
/**
* i2o_cntxt_list_get - Get a pointer from the context list and remove it
 * @context: context id to which the pointer belongs
 * @c: controller to which the context list belongs
 *
 * Returns a pointer matching the context id, or NULL if not found.
*/
void *i2o_cntxt_list_get(struct i2o_controller *c, u32 context)
{
struct i2o_context_list_element *entry;
unsigned long flags;
void *ptr = NULL;
spin_lock_irqsave(&c->context_list_lock, flags);
list_for_each_entry(entry, &c->context_list, list)
if (entry->context == context) {
list_del(&entry->list);
ptr = entry->ptr;
kfree(entry);
break;
}
spin_unlock_irqrestore(&c->context_list_lock, flags);
if (!ptr)
printk(KERN_WARNING "i2o: context id %d not found\n", context);
pr_debug("get ptr from context list %d -> %p\n", context, ptr);
return ptr;
};
#endif
/**
 * i2o_find_iop - Find an I2O controller by id
* @unit: unit number of the I2O controller to search for
*
* Lookup the I2O controller on the controller list.
*
* Returns pointer to the I2O controller on success or NULL if not found.
*/
struct i2o_controller *i2o_find_iop(int unit)
{
struct i2o_controller *c;
list_for_each_entry(c, &i2o_controllers, list) {
if (c->unit == unit)
return c;
}
return NULL;
};
/**
* i2o_iop_find_device - Find a I2O device on an I2O controller
* @c: I2O controller where the I2O device hangs on
* @tid: TID of the I2O device to search for
*
* Searches the devices of the I2O controller for a device with TID tid and
* returns it.
*
* Returns a pointer to the I2O device if found, otherwise NULL.
*/
struct i2o_device *i2o_iop_find_device(struct i2o_controller *c, u16 tid)
{
struct i2o_device *dev;
list_for_each_entry(dev, &c->devices, list)
if (dev->lct_data.tid == tid)
return dev;
return NULL;
};
/**
 * i2o_iop_quiesce - quiesce controller
* @c: controller
*
* Quiesce an IOP. Causes IOP to make external operation quiescent
* (i2o 'READY' state). Internal operation of the IOP continues normally.
*
* Returns 0 on success or negative error code on failure.
*/
static int i2o_iop_quiesce(struct i2o_controller *c)
{
struct i2o_message *msg;
u32 m;
i2o_status_block *sb = c->status_block.virt;
int rc;
i2o_status_get(c);
/* SysQuiesce discarded if IOP not in READY or OPERATIONAL state */
if ((sb->iop_state != ADAPTER_STATE_READY) &&
(sb->iop_state != ADAPTER_STATE_OPERATIONAL))
return 0;
m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
if (m == I2O_QUEUE_EMPTY)
return -ETIMEDOUT;
writel(FOUR_WORD_MSG_SIZE | SGL_OFFSET_0, &msg->u.head[0]);
writel(I2O_CMD_SYS_QUIESCE << 24 | HOST_TID << 12 | ADAPTER_TID,
&msg->u.head[1]);
/* Long timeout needed for quiesce if lots of devices */
if ((rc = i2o_msg_post_wait(c, m, 240)))
printk(KERN_INFO "%s: Unable to quiesce (status=%#x).\n",
c->name, -rc);
else
pr_debug("%s: Quiesced.\n", c->name);
i2o_status_get(c); // Entered READY state
return rc;
};
/**
* i2o_iop_enable - move controller from ready to OPERATIONAL
* @c: I2O controller
*
* Enable IOP. This allows the IOP to resume external operations and
* reverses the effect of a quiesce. Returns zero or an error code if
* an error occurs.
*/
static int i2o_iop_enable(struct i2o_controller *c)
{
struct i2o_message *msg;
u32 m;
i2o_status_block *sb = c->status_block.virt;
int rc;
i2o_status_get(c);
/* Enable only allowed on READY state */
if (sb->iop_state != ADAPTER_STATE_READY)
return -EINVAL;
m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
if (m == I2O_QUEUE_EMPTY)
return -ETIMEDOUT;
writel(FOUR_WORD_MSG_SIZE | SGL_OFFSET_0, &msg->u.head[0]);
writel(I2O_CMD_SYS_ENABLE << 24 | HOST_TID << 12 | ADAPTER_TID,
&msg->u.head[1]);
/* How long of a timeout do we need? */
if ((rc = i2o_msg_post_wait(c, m, 240)))
printk(KERN_ERR "%s: Could not enable (status=%#x).\n",
c->name, -rc);
else
pr_debug("%s: Enabled.\n", c->name);
i2o_status_get(c); // entered OPERATIONAL state
return rc;
};
/**
* i2o_iop_quiesce_all - Quiesce all I2O controllers on the system
*
* Quiesce all I2O controllers which are connected to the system.
*/
static inline void i2o_iop_quiesce_all(void)
{
struct i2o_controller *c, *tmp;
list_for_each_entry_safe(c, tmp, &i2o_controllers, list) {
if (!c->no_quiesce)
i2o_iop_quiesce(c);
}
};
/**
* i2o_iop_enable_all - Enables all controllers on the system
*
* Enables all I2O controllers which are connected to the system.
*/
static inline void i2o_iop_enable_all(void)
{
struct i2o_controller *c, *tmp;
list_for_each_entry_safe(c, tmp, &i2o_controllers, list)
i2o_iop_enable(c);
};
/**
 * i2o_iop_clear - Bring I2O controller into HOLD state
* @c: controller
*
* Clear an IOP to HOLD state, ie. terminate external operations, clear all
* input queues and prepare for a system restart. IOP's internal operation
* continues normally and the outbound queue is alive. The IOP is not
* expected to rebuild its LCT.
*
* Returns 0 on success or negative error code on failure.
*/
static int i2o_iop_clear(struct i2o_controller *c)
{
struct i2o_message *msg;
u32 m;
int rc;
m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
if (m == I2O_QUEUE_EMPTY)
return -ETIMEDOUT;
/* Quiesce all IOPs first */
i2o_iop_quiesce_all();
writel(FOUR_WORD_MSG_SIZE | SGL_OFFSET_0, &msg->u.head[0]);
writel(I2O_CMD_ADAPTER_CLEAR << 24 | HOST_TID << 12 | ADAPTER_TID,
&msg->u.head[1]);
if ((rc = i2o_msg_post_wait(c, m, 30)))
printk(KERN_INFO "%s: Unable to clear (status=%#x).\n",
c->name, -rc);
else
pr_debug("%s: Cleared.\n", c->name);
/* Enable all IOPs */
i2o_iop_enable_all();
i2o_status_get(c);
return rc;
}
/**
* i2o_iop_reset - reset an I2O controller
* @c: controller to reset
*
* Reset the IOP into INIT state and wait until IOP gets into RESET state.
* Terminate all external operations, clear IOP's inbound and outbound
* queues, terminate all DDMs, and reload the IOP's operating environment
* and all local DDMs. The IOP rebuilds its LCT.
*/
static int i2o_iop_reset(struct i2o_controller *c)
{
u8 *status = c->status.virt;
struct i2o_message *msg;
u32 m;
unsigned long timeout;
i2o_status_block *sb = c->status_block.virt;
int rc = 0;
pr_debug("Resetting controller\n");
m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
if (m == I2O_QUEUE_EMPTY)
return -ETIMEDOUT;
memset(status, 0, 4);
/* Quiesce all IOPs first */
i2o_iop_quiesce_all();
writel(EIGHT_WORD_MSG_SIZE | SGL_OFFSET_0, &msg->u.head[0]);
writel(I2O_CMD_ADAPTER_RESET << 24 | HOST_TID << 12 | ADAPTER_TID,
&msg->u.head[1]);
writel(i2o_exec_driver.context, &msg->u.s.icntxt);
writel(0, &msg->u.s.tcntxt); //FIXME: use reasonable transaction context
writel(0, &msg->body[0]);
writel(0, &msg->body[1]);
writel(i2o_ptr_low((void *)c->status.phys), &msg->body[2]);
writel(i2o_ptr_high((void *)c->status.phys), &msg->body[3]);
i2o_msg_post(c, m);
/* Wait for a reply */
timeout = jiffies + I2O_TIMEOUT_RESET * HZ;
while (!*status) {
if (time_after(jiffies, timeout)) {
printk(KERN_ERR "IOP reset timeout.\n");
rc = -ETIMEDOUT;
goto exit;
}
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_timeout(1);
rmb();
}
if (*status == I2O_CMD_IN_PROGRESS) {
/*
* Once the reset is sent, the IOP goes into the INIT state
* which is indeterminate. We need to wait until the IOP
* has rebooted before we can let the system talk to
* it. We read the inbound Free_List until a message is
 * available. If we can't read one in the given amount of
* time, we assume the IOP could not reboot properly.
*/
pr_debug("%s: Reset in progress, waiting for reboot...\n",
c->name);
m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_RESET);
while (m == I2O_QUEUE_EMPTY) {
if (time_after(jiffies, timeout)) {
printk(KERN_ERR "IOP reset timeout.\n");
rc = -ETIMEDOUT;
goto exit;
}
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_timeout(1);
m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_RESET);
}
i2o_msg_nop(c, m);
}
/* from here all quiesce commands are safe */
c->no_quiesce = 0;
/* If IopReset was rejected or didn't perform reset, try IopClear */
i2o_status_get(c);
if (*status == I2O_CMD_REJECTED || sb->iop_state != ADAPTER_STATE_RESET) {
printk(KERN_WARNING "%s: Reset rejected, trying to clear\n",
c->name);
i2o_iop_clear(c);
} else
pr_debug("%s: Reset completed.\n", c->name);
exit:
/* Enable all IOPs */
i2o_iop_enable_all();
return rc;
};
/**
* i2o_iop_init_outbound_queue - setup the outbound message queue
* @c: I2O controller
*
* Clear and (re)initialize IOP's outbound queue and post the message
* frames to the IOP.
*
* Returns 0 on success or a negative errno code on failure.
*/
int i2o_iop_init_outbound_queue(struct i2o_controller *c)
{
u8 *status = c->status.virt;
u32 m;
struct i2o_message *msg;
ulong timeout;
int i;
pr_debug("%s: Initializing Outbound Queue...\n", c->name);
memset(status, 0, 4);
m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
if (m == I2O_QUEUE_EMPTY)
return -ETIMEDOUT;
writel(EIGHT_WORD_MSG_SIZE | TRL_OFFSET_6, &msg->u.head[0]);
writel(I2O_CMD_OUTBOUND_INIT << 24 | HOST_TID << 12 | ADAPTER_TID,
&msg->u.head[1]);
writel(i2o_exec_driver.context, &msg->u.s.icntxt);
writel(0x0106, &msg->u.s.tcntxt); /* FIXME: why 0x0106, maybe in
Spec? */
writel(PAGE_SIZE, &msg->body[0]);
writel(MSG_FRAME_SIZE << 16 | 0x80, &msg->body[1]); /* Outbound msg frame
size in words and Initcode */
writel(0xd0000004, &msg->body[2]);
writel(i2o_ptr_low((void *)c->status.phys), &msg->body[3]);
writel(i2o_ptr_high((void *)c->status.phys), &msg->body[4]);
i2o_msg_post(c, m);
timeout = jiffies + I2O_TIMEOUT_INIT_OUTBOUND_QUEUE * HZ;
while (*status <= I2O_CMD_IN_PROGRESS) {
if (time_after(jiffies, timeout)) {
printk(KERN_WARNING "%s: Timeout Initializing\n",
c->name);
return -ETIMEDOUT;
}
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_timeout(1);
rmb();
}
m = c->out_queue.phys;
/* Post frames */
for (i = 0; i < NMBR_MSG_FRAMES; i++) {
i2o_flush_reply(c, m);
m += MSG_FRAME_SIZE * 4;
}
return 0;
}
/**
* i2o_iop_activate - Bring controller up to HOLD
* @c: controller
*
* This function brings an I2O controller into HOLD state. The adapter
* is reset if necessary and then the queues and resource table are read.
*
* Returns 0 on success or negative error code on failure.
*/
static int i2o_iop_activate(struct i2o_controller *c)
{
i2o_status_block *sb = c->status_block.virt;
int rc;
/* In INIT state, Wait Inbound Q to initialize (in i2o_status_get) */
/* In READY state, Get status */
rc = i2o_status_get(c);
if (rc) {
printk(KERN_INFO "Unable to obtain status of %s, "
"attempting a reset.\n", c->name);
if (i2o_iop_reset(c))
return rc;
}
if (sb->i2o_version > I2OVER15) {
printk(KERN_ERR "%s: Not running version 1.5 of the I2O "
"Specification.\n", c->name);
return -ENODEV;
}
switch (sb->iop_state) {
case ADAPTER_STATE_FAULTED:
printk(KERN_CRIT "%s: hardware fault\n", c->name);
return -ENODEV;
case ADAPTER_STATE_READY:
case ADAPTER_STATE_OPERATIONAL:
case ADAPTER_STATE_HOLD:
case ADAPTER_STATE_FAILED:
pr_debug("already running, trying to reset...\n");
if (i2o_iop_reset(c))
return -ENODEV;
}
rc = i2o_iop_init_outbound_queue(c);
if (rc)
return rc;
/* In HOLD state */
rc = i2o_hrt_get(c);
if (rc)
return rc;
return 0;
};
/**
* i2o_iop_systab_set - Set the I2O System Table of the specified IOP
 * @c: I2O controller to which the system table should be sent
 *
 * Before the systab can be set, i2o_systab_build() must be called.
*
* Returns 0 on success or negative error code on failure.
*/
static int i2o_iop_systab_set(struct i2o_controller *c)
{
struct i2o_message *msg;
u32 m;
i2o_status_block *sb = c->status_block.virt;
struct device *dev = &c->pdev->dev;
struct resource *root;
int rc;
if (sb->current_mem_size < sb->desired_mem_size) {
struct resource *res = &c->mem_resource;
res->name = c->pdev->bus->name;
res->flags = IORESOURCE_MEM;
res->start = 0;
res->end = 0;
printk("%s: requires private memory resources.\n", c->name);
root = pci_find_parent_resource(c->pdev, res);
if (root == NULL)
printk("Can't find parent resource!\n");
if (root && allocate_resource(root, res, sb->desired_mem_size,
		sb->desired_mem_size, sb->desired_mem_size,
		1 << 20, /* Unspecified, so use 1Mb and play safe */
		NULL, NULL) >= 0) {
c->mem_alloc = 1;
sb->current_mem_size = 1 + res->end - res->start;
sb->current_mem_base = res->start;
printk(KERN_INFO
"%s: allocated %ld bytes of PCI memory at 0x%08lX.\n",
c->name, 1 + res->end - res->start, res->start);
}
}
if (sb->current_io_size < sb->desired_io_size) {
struct resource *res = &c->io_resource;
res->name = c->pdev->bus->name;
res->flags = IORESOURCE_IO;
res->start = 0;
res->end = 0;
printk("%s: requires private I/O resources.\n", c->name);
root = pci_find_parent_resource(c->pdev, res);
if (root == NULL)
printk("Can't find parent resource!\n");
if (root && allocate_resource(root, res, sb->desired_io_size,
		sb->desired_io_size, sb->desired_io_size,
		1 << 20, /* Unspecified, so use 1Mb and play safe */
		NULL, NULL) >= 0) {
c->io_alloc = 1;
sb->current_io_size = 1 + res->end - res->start;
sb->current_io_base = res->start;
printk(KERN_INFO
"%s: allocated %ld bytes of PCI I/O at 0x%08lX.\n",
c->name, 1 + res->end - res->start, res->start);
}
}
m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
if (m == I2O_QUEUE_EMPTY)
return -ETIMEDOUT;
i2o_systab.phys = dma_map_single(dev, i2o_systab.virt, i2o_systab.len,
DMA_TO_DEVICE);
if (!i2o_systab.phys) {
i2o_msg_nop(c, m);
return -ENOMEM;
}
writel(I2O_MESSAGE_SIZE(12) | SGL_OFFSET_6, &msg->u.head[0]);
writel(I2O_CMD_SYS_TAB_SET << 24 | HOST_TID << 12 | ADAPTER_TID,
&msg->u.head[1]);
/*
* Provide three SGL-elements:
* System table (SysTab), Private memory space declaration and
* Private i/o space declaration
*
* FIXME: is this still true?
* Nasty one here. We can't use dma_alloc_coherent to send the
* same table to everyone. We have to go remap it for them all
*/
writel(c->unit + 2, &msg->body[0]);
writel(0, &msg->body[1]);
writel(0x54000000 | i2o_systab.phys, &msg->body[2]);
writel(i2o_systab.phys, &msg->body[3]);
writel(0x54000000 | sb->current_mem_size, &msg->body[4]);
writel(sb->current_mem_base, &msg->body[5]);
writel(0xd4000000 | sb->current_io_size, &msg->body[6]);
writel(sb->current_io_base, &msg->body[7]);
rc = i2o_msg_post_wait(c, m, 120);
dma_unmap_single(dev, i2o_systab.phys, i2o_systab.len,
DMA_TO_DEVICE);
if (rc < 0)
printk(KERN_ERR "%s: Unable to set SysTab (status=%#x).\n",
c->name, -rc);
else
pr_debug("%s: SysTab set.\n", c->name);
i2o_status_get(c); // Entered READY state
return rc;
}
/**
* i2o_iop_online - Bring a controller online into OPERATIONAL state.
* @c: I2O controller
*
* Send the system table and enable the I2O controller.
*
 * Returns 0 on success or negative error code on failure.
*/
static int i2o_iop_online(struct i2o_controller *c)
{
int rc;
rc = i2o_iop_systab_set(c);
if (rc)
return rc;
/* In READY state */
pr_debug("%s: Attempting to enable...\n", c->name);
rc = i2o_iop_enable(c);
if (rc)
return rc;
return 0;
};
/**
* i2o_iop_remove - Remove the I2O controller from the I2O core
* @c: I2O controller
*
* Remove the I2O controller from the I2O core. If devices are attached to
* the controller remove these also and finally reset the controller.
*/
void i2o_iop_remove(struct i2o_controller *c)
{
struct i2o_device *dev, *tmp;
pr_debug("Deleting controller %s\n", c->name);
list_del(&c->list);
list_for_each_entry_safe(dev, tmp, &c->devices, list)
i2o_device_remove(dev);
/* Ask the IOP to switch to RESET state */
i2o_iop_reset(c);
}
/**
* i2o_systab_build - Build system table
*
* The system table contains information about all the IOPs in the system
* (duh) and is used by the Executives on the IOPs to establish peer2peer
* connections. We're not supporting peer2peer at the moment, but this
* will be needed down the road for things like lan2lan forwarding.
*
* Returns 0 on success or negative error code on failure.
*/
static int i2o_systab_build(void)
{
struct i2o_controller *c, *tmp;
int num_controllers = 0;
u32 change_ind = 0;
int count = 0;
struct i2o_sys_tbl *systab = i2o_systab.virt;
list_for_each_entry_safe(c, tmp, &i2o_controllers, list)
num_controllers++;
if (systab) {
change_ind = systab->change_ind;
kfree(i2o_systab.virt);
}
/* Header + IOPs */
i2o_systab.len = sizeof(struct i2o_sys_tbl) + num_controllers *
sizeof(struct i2o_sys_tbl_entry);
systab = i2o_systab.virt = kmalloc(i2o_systab.len, GFP_KERNEL);
if (!systab) {
printk(KERN_ERR "i2o: unable to allocate memory for System "
"Table\n");
return -ENOMEM;
}
memset(systab, 0, i2o_systab.len);
systab->version = I2OVERSION;
systab->change_ind = change_ind + 1;
list_for_each_entry_safe(c, tmp, &i2o_controllers, list) {
i2o_status_block *sb;
if (count >= num_controllers) {
printk(KERN_ERR "i2o: controller added while building "
"system table\n");
break;
}
sb = c->status_block.virt;
/*
* Get updated IOP state so we have the latest information
*
* We should delete the controller at this point if it
* doesn't respond, since if it's not on the system table
* it is technically not part of the I2O subsystem...
*/
if (unlikely(i2o_status_get(c))) {
printk(KERN_ERR "%s: Deleting b/c could not get status"
" while attempting to build system table\n",
c->name);
i2o_iop_remove(c);
continue; // try the next one
}
systab->iops[count].org_id = sb->org_id;
systab->iops[count].iop_id = c->unit + 2;
systab->iops[count].seg_num = 0;
systab->iops[count].i2o_version = sb->i2o_version;
systab->iops[count].iop_state = sb->iop_state;
systab->iops[count].msg_type = sb->msg_type;
systab->iops[count].frame_size = sb->inbound_frame_size;
systab->iops[count].last_changed = change_ind;
systab->iops[count].iop_capabilities = sb->iop_capabilities;
systab->iops[count].inbound_low = i2o_ptr_low(c->post_port);
systab->iops[count].inbound_high = i2o_ptr_high(c->post_port);
count++;
}
systab->num_entries = count;
return 0;
};
/**
* i2o_parse_hrt - Parse the hardware resource table.
* @c: I2O controller
*
* We don't do anything with it except dumping it (in debug mode).
*
* Returns 0.
*/
static int i2o_parse_hrt(struct i2o_controller *c)
{
i2o_dump_hrt(c);
return 0;
};
/**
* i2o_status_get - Get the status block from the I2O controller
* @c: I2O controller
*
* Issue a status query on the controller. This updates the attached
* status block. The status block could then be accessed through
* c->status_block.
*
* Returns 0 on success or negative error code on failure.
*/
int i2o_status_get(struct i2o_controller *c)
{
struct i2o_message *msg;
u32 m;
u8 *status_block;
unsigned long timeout;
status_block = (u8 *) c->status_block.virt;
memset(status_block, 0, sizeof(i2o_status_block));
m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
if (m == I2O_QUEUE_EMPTY)
return -ETIMEDOUT;
writel(NINE_WORD_MSG_SIZE | SGL_OFFSET_0, &msg->u.head[0]);
writel(I2O_CMD_STATUS_GET << 24 | HOST_TID << 12 | ADAPTER_TID,
&msg->u.head[1]);
writel(i2o_exec_driver.context, &msg->u.s.icntxt);
writel(0, &msg->u.s.tcntxt); // FIXME: use reasonable transaction context
writel(0, &msg->body[0]);
writel(0, &msg->body[1]);
writel(i2o_ptr_low((void *)c->status_block.phys), &msg->body[2]);
writel(i2o_ptr_high((void *)c->status_block.phys), &msg->body[3]);
writel(sizeof(i2o_status_block), &msg->body[4]); /* always 88 bytes */
i2o_msg_post(c, m);
/* Wait for a reply */
timeout = jiffies + I2O_TIMEOUT_STATUS_GET * HZ;
while (status_block[87] != 0xFF) {
if (time_after(jiffies, timeout)) {
printk(KERN_ERR "%s: Get status timeout.\n", c->name);
return -ETIMEDOUT;
}
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_timeout(1);
rmb();
}
#if DEBUG
i2o_debug_state(c);
#endif
return 0;
}
/**
* i2o_hrt_get - Get the Hardware Resource Table from the I2O controller
* @c: I2O controller from which the HRT should be fetched
*
* The HRT contains information about possible hidden devices but is
* mostly useless to us.
*
* Returns 0 on success or negative error code on failure.
*/
int i2o_hrt_get(struct i2o_controller *c)
{
int rc;
int i;
i2o_hrt *hrt = c->hrt.virt;
u32 size = sizeof(i2o_hrt);
struct device *dev = &c->pdev->dev;
for (i = 0; i < I2O_HRT_GET_TRIES; i++) {
struct i2o_message *msg;
u32 m;
m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
if (m == I2O_QUEUE_EMPTY)
return -ETIMEDOUT;
writel(SIX_WORD_MSG_SIZE | SGL_OFFSET_4, &msg->u.head[0]);
writel(I2O_CMD_HRT_GET << 24 | HOST_TID << 12 | ADAPTER_TID,
&msg->u.head[1]);
writel(0xd0000000 | c->hrt.len, &msg->body[0]);
writel(c->hrt.phys, &msg->body[1]);
rc = i2o_msg_post_wait_mem(c, m, 20, &c->hrt);
if (rc < 0) {
printk(KERN_ERR "%s: Unable to get HRT (status=%#x)\n",
c->name, -rc);
return rc;
}
size = hrt->num_entries * hrt->entry_len << 2;
if (size > c->hrt.len) {
if (i2o_dma_realloc(dev, &c->hrt, size, GFP_KERNEL))
return -ENOMEM;
else
hrt = c->hrt.virt;
} else
return i2o_parse_hrt(c);
}
printk(KERN_ERR "%s: Unable to get HRT after %d tries, giving up\n",
c->name, I2O_HRT_GET_TRIES);
return -EBUSY;
}
/**
* i2o_iop_alloc - Allocate and initialize an i2o_controller struct
*
* Allocate the necessary memory for an i2o_controller struct and
* initialize the lists.
*
* Returns a pointer to the I2O controller or a negative error code on
* failure.
*/
struct i2o_controller *i2o_iop_alloc(void)
{
static int unit = 0; /* 0 and 1 are NULL IOP and Local Host */
struct i2o_controller *c;
c = kmalloc(sizeof(*c), GFP_KERNEL);
if (!c) {
printk(KERN_ERR "i2o: Insufficient memory to allocate the "
"controller.\n");
return ERR_PTR(-ENOMEM);
}
memset(c, 0, sizeof(*c));
INIT_LIST_HEAD(&c->devices);
c->lock = SPIN_LOCK_UNLOCKED;
init_MUTEX(&c->lct_lock);
c->unit = unit++;
sprintf(c->name, "iop%d", c->unit);
#if BITS_PER_LONG == 64
c->context_list_lock = SPIN_LOCK_UNLOCKED;
atomic_set(&c->context_list_counter, 0);
INIT_LIST_HEAD(&c->context_list);
#endif
return c;
};
/**
* i2o_iop_free - Free the i2o_controller struct
* @c: I2O controller to free
*/
void i2o_iop_free(struct i2o_controller *c)
{
kfree(c);
};
/**
* i2o_iop_add - Initialize the I2O controller and add it to the I2O core
* @c: controller
*
* Initialize the I2O controller and, if no error occurs, add it to the
* I2O core.
*
* Returns 0 on success or negative error code on failure.
*/
int i2o_iop_add(struct i2o_controller *c)
{
int rc;
printk(KERN_INFO "%s: Activating I2O controller...\n", c->name);
printk(KERN_INFO "%s: This may take a few minutes if there are many "
"devices\n", c->name);
if ((rc = i2o_iop_activate(c))) {
printk(KERN_ERR "%s: controller could not be activated\n",
c->name);
i2o_iop_reset(c);
return rc;
}
pr_debug("building sys table %s...\n", c->name);
if ((rc = i2o_systab_build())) {
i2o_iop_reset(c);
return rc;
}
pr_debug("online controller %s...\n", c->name);
if ((rc = i2o_iop_online(c))) {
i2o_iop_reset(c);
return rc;
}
pr_debug("getting LCT %s...\n", c->name);
if ((rc = i2o_exec_lct_get(c))) {
i2o_iop_reset(c);
return rc;
}
list_add(&c->list, &i2o_controllers);
printk(KERN_INFO "%s: Controller added\n", c->name);
return 0;
};
/**
* i2o_event_register - Turn on/off event notification for an I2O device
* @dev: I2O device which should receive the event registration request
* @drv: driver which wants to get notified
* @tcntxt: transaction context to use with this notifier
* @evt_mask: mask of events
*
* Creates and posts an event registration message to the task. No reply
* is waited for, or expected. If you do not want further notifications,
* call i2o_event_register again with an evt_mask of 0.
*
* Returns 0 on success or -ETIMEDOUT if no message could be fetched for
* sending the request.
*/
int i2o_event_register(struct i2o_device *dev, struct i2o_driver *drv,
int tcntxt, u32 evt_mask)
{
struct i2o_controller *c = dev->iop;
struct i2o_message *msg;
u32 m;
m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
if (m == I2O_QUEUE_EMPTY)
return -ETIMEDOUT;
writel(FIVE_WORD_MSG_SIZE | SGL_OFFSET_0, &msg->u.head[0]);
writel(I2O_CMD_UTIL_EVT_REGISTER << 24 | HOST_TID << 12 | dev->lct_data.
tid, &msg->u.head[1]);
writel(drv->context, &msg->u.s.icntxt);
writel(tcntxt, &msg->u.s.tcntxt);
writel(evt_mask, &msg->body[0]);
i2o_msg_post(c, m);
return 0;
};
/**
* i2o_iop_init - I2O main initialization function
*
* Initialize the I2O drivers (OSM) functions, register the Executive OSM,
* initialize the I2O PCI part and finally initialize I2O device stuff.
*
* Returns 0 on success or negative error code on failure.
*/
static int __init i2o_iop_init(void)
{
int rc = 0;
printk(KERN_INFO "I2O Core - (C) Copyright 1999 Red Hat Software\n");
rc = i2o_device_init();
if (rc)
goto exit;
rc = i2o_driver_init();
if (rc)
goto device_exit;
rc = i2o_exec_init();
if (rc)
goto driver_exit;
rc = i2o_pci_init();
if (rc < 0)
goto exec_exit;
return 0;
exec_exit:
i2o_exec_exit();
driver_exit:
i2o_driver_exit();
device_exit:
i2o_device_exit();
exit:
return rc;
}
/**
* i2o_iop_exit - I2O main exit function
*
* Removes I2O controllers from PCI subsystem and shut down OSMs.
*/
static void __exit i2o_iop_exit(void)
{
i2o_pci_exit();
i2o_exec_exit();
i2o_driver_exit();
i2o_device_exit();
};
module_init(i2o_iop_init);
module_exit(i2o_iop_exit);
MODULE_AUTHOR("Red Hat Software");
MODULE_DESCRIPTION("I2O Core");
MODULE_LICENSE("GPL");
#if BITS_PER_LONG == 64
EXPORT_SYMBOL(i2o_cntxt_list_add);
EXPORT_SYMBOL(i2o_cntxt_list_get);
EXPORT_SYMBOL(i2o_cntxt_list_remove);
#endif
EXPORT_SYMBOL(i2o_msg_get_wait);
EXPORT_SYMBOL(i2o_msg_nop);
EXPORT_SYMBOL(i2o_find_iop);
EXPORT_SYMBOL(i2o_iop_find_device);
EXPORT_SYMBOL(i2o_event_register);
EXPORT_SYMBOL(i2o_status_get);
EXPORT_SYMBOL(i2o_hrt_get);
EXPORT_SYMBOL(i2o_controllers);
/*
* PCI handling of I2O controller
*
* Copyright (C) 1999-2002 Red Hat Software
*
* Written by Alan Cox, Building Number Three Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
* A lot of the I2O message side code from this is taken from the Red
* Creek RCPCI45 adapter driver by Red Creek Communications
*
* Fixes/additions:
* Philipp Rumpf
* Juha Sievnen <Juha.Sievanen@cs.Helsinki.FI>
* Auvo Hkkinen <Auvo.Hakkinen@cs.Helsinki.FI>
* Deepak Saxena <deepak@plexity.net>
* Boji T Kannanthanam <boji.t.kannanthanam@intel.com>
* Alan Cox <alan@redhat.com>:
* Ported to Linux 2.5.
* Markus Lidel <Markus.Lidel@shadowconnect.com>:
* Minor fixes for 2.6.
* Markus Lidel <Markus.Lidel@shadowconnect.com>:
* Support for sysfs included.
*/
#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/i2o.h>
#ifdef CONFIG_MTRR
#include <asm/mtrr.h>
#endif // CONFIG_MTRR
/* Module internal functions from other sources */
extern struct i2o_controller *i2o_iop_alloc(void);
extern void i2o_iop_free(struct i2o_controller *);
extern int i2o_iop_add(struct i2o_controller *);
extern void i2o_iop_remove(struct i2o_controller *);
extern int i2o_driver_dispatch(struct i2o_controller *, u32,
struct i2o_message *);
/* PCI device id table for all I2O controllers */
static struct pci_device_id __devinitdata i2o_pci_ids[] = {
{PCI_DEVICE_CLASS(PCI_CLASS_INTELLIGENT_I2O << 8, 0xffff00)},
{PCI_DEVICE(PCI_VENDOR_ID_DPT, 0xa511)},
{0}
};
/**
* i2o_dma_realloc - Realloc DMA memory
* @dev: struct device pointer to the PCI device of the I2O controller
* @addr: pointer to a i2o_dma struct DMA buffer
* @len: new length of memory
* @gfp_mask: GFP mask
*
* If there was something allocated in addr, free it first. If len > 0
* then try to allocate it and write the addresses back to the addr
* structure. If len == 0, set the virtual address to NULL.
*
* Returns 0 on success or negative error code on failure.
*/
int i2o_dma_realloc(struct device *dev, struct i2o_dma *addr, size_t len,
unsigned int gfp_mask)
{
i2o_dma_free(dev, addr);
if (len)
return i2o_dma_alloc(dev, addr, len, gfp_mask);
return 0;
};
/**
* i2o_pci_free - Frees the DMA memory for the I2O controller
* @c: I2O controller to free
*
* Remove all allocated DMA memory and unmap memory IO regions. If MTRR
* regions were added, remove them again.
*/
static void __devexit i2o_pci_free(struct i2o_controller *c)
{
struct device *dev;
dev = &c->pdev->dev;
i2o_dma_free(dev, &c->out_queue);
i2o_dma_free(dev, &c->status_block);
if (c->lct)
kfree(c->lct);
i2o_dma_free(dev, &c->dlct);
i2o_dma_free(dev, &c->hrt);
i2o_dma_free(dev, &c->status);
#ifdef CONFIG_MTRR
if (c->mtrr_reg0 >= 0)
mtrr_del(c->mtrr_reg0, 0, 0);
if (c->mtrr_reg1 >= 0)
mtrr_del(c->mtrr_reg1, 0, 0);
#endif
if (c->raptor && c->in_queue.virt)
iounmap(c->in_queue.virt);
if (c->base.virt)
iounmap(c->base.virt);
}
/**
* i2o_pci_alloc - Allocate DMA memory, map IO memory for I2O controller
* @c: I2O controller
*
* Allocate DMA memory for a PCI (or in theory AGP) I2O controller. All
* IO mappings are also done here. If MTRR is enabled, the MTRR memory
* regions are also added here.
*
* Returns 0 on success or negative error code on failure.
*/
static int __devinit i2o_pci_alloc(struct i2o_controller *c)
{
struct pci_dev *pdev = c->pdev;
struct device *dev = &pdev->dev;
int i;
for (i = 0; i < 6; i++) {
/* Skip I/O spaces */
if (!(pci_resource_flags(pdev, i) & IORESOURCE_IO)) {
if (!c->base.phys) {
c->base.phys = pci_resource_start(pdev, i);
c->base.len = pci_resource_len(pdev, i);
if (!c->raptor)
break;
} else {
c->in_queue.phys = pci_resource_start(pdev, i);
c->in_queue.len = pci_resource_len(pdev, i);
break;
}
}
}
if (i == 6) {
printk(KERN_ERR "i2o: I2O controller has no memory regions"
" defined.\n");
i2o_pci_free(c);
return -EINVAL;
}
/* Map the I2O controller */
if (c->raptor) {
printk(KERN_INFO "i2o: PCI I2O controller\n");
printk(KERN_INFO " BAR0 at 0x%08lX size=%ld\n",
(unsigned long)c->base.phys, (unsigned long)c->base.len);
printk(KERN_INFO " BAR1 at 0x%08lX size=%ld\n",
(unsigned long)c->in_queue.phys,
(unsigned long)c->in_queue.len);
} else
printk(KERN_INFO "i2o: PCI I2O controller at %08lX size=%ld\n",
(unsigned long)c->base.phys, (unsigned long)c->base.len);
c->base.virt = ioremap(c->base.phys, c->base.len);
if (!c->base.virt) {
printk(KERN_ERR "i2o: Unable to map controller.\n");
return -ENOMEM;
}
if (c->raptor) {
c->in_queue.virt = ioremap(c->in_queue.phys, c->in_queue.len);
if (!c->in_queue.virt) {
printk(KERN_ERR "i2o: Unable to map controller.\n");
i2o_pci_free(c);
return -ENOMEM;
}
} else
c->in_queue = c->base;
c->irq_mask = c->base.virt + 0x34;
c->post_port = c->base.virt + 0x40;
c->reply_port = c->base.virt + 0x44;
#ifdef CONFIG_MTRR
/* Enable Write Combining MTRR for IOP's memory region */
c->mtrr_reg0 = mtrr_add(c->in_queue.phys, c->in_queue.len,
MTRR_TYPE_WRCOMB, 1);
c->mtrr_reg1 = -1;
if (c->mtrr_reg0 < 0)
printk(KERN_WARNING "i2o: could not enable write combining "
"MTRR\n");
else
printk(KERN_INFO "i2o: using write combining MTRR\n");
/*
* If it is an INTEL i960 I/O processor then set the first 64K to
* Uncacheable since the region contains the messaging unit which
* shouldn't be cached.
*/
if ((pdev->vendor == PCI_VENDOR_ID_INTEL ||
pdev->vendor == PCI_VENDOR_ID_DPT) && !c->raptor) {
printk(KERN_INFO "i2o: MTRR workaround for Intel i960 processor"
"\n");
c->mtrr_reg1 = mtrr_add(c->base.phys, 0x10000,
MTRR_TYPE_UNCACHABLE, 1);
if (c->mtrr_reg1 < 0) {
printk(KERN_WARNING "i2o_pci: Error in setting "
"MTRR_TYPE_UNCACHABLE\n");
mtrr_del(c->mtrr_reg0, c->in_queue.phys,
c->in_queue.len);
c->mtrr_reg0 = -1;
}
}
#endif
if (i2o_dma_alloc(dev, &c->status, 4, GFP_KERNEL)) {
i2o_pci_free(c);
return -ENOMEM;
}
if (i2o_dma_alloc(dev, &c->hrt, sizeof(i2o_hrt), GFP_KERNEL)) {
i2o_pci_free(c);
return -ENOMEM;
}
if (i2o_dma_alloc(dev, &c->dlct, 8192, GFP_KERNEL)) {
i2o_pci_free(c);
return -ENOMEM;
}
if (i2o_dma_alloc(dev, &c->status_block, sizeof(i2o_status_block),
GFP_KERNEL)) {
i2o_pci_free(c);
return -ENOMEM;
}
if (i2o_dma_alloc(dev, &c->out_queue, MSG_POOL_SIZE, GFP_KERNEL)) {
i2o_pci_free(c);
return -ENOMEM;
}
pci_set_drvdata(pdev, c);
return 0;
}
/**
* i2o_pci_interrupt - Interrupt handler for I2O controller
* @irq: interrupt line
* @dev_id: pointer to the I2O controller
* @r: pointer to registers
*
* Handle an interrupt from a PCI based I2O controller. This turns out
* to be rather simple. We keep the controller pointer in the cookie.
*/
static irqreturn_t i2o_pci_interrupt(int irq, void *dev_id, struct pt_regs *r)
{
struct i2o_controller *c = dev_id;
struct device *dev = &c->pdev->dev;
struct i2o_message *m;
u32 mv;
u32 *msg;
/*
* Old 960 steppings had a bug in the I2O unit that caused
* the queue to appear empty when it wasn't.
*/
mv = I2O_REPLY_READ32(c);
if (mv == I2O_QUEUE_EMPTY) {
mv = I2O_REPLY_READ32(c);
if (unlikely(mv == I2O_QUEUE_EMPTY)) {
return IRQ_NONE;
} else
pr_debug("960 bug detected\n");
}
while (mv != I2O_QUEUE_EMPTY) {
/*
* Map the message from the page frame map to kernel virtual.
* Because bus_to_virt is deprecated, we have to calculate the
* location ourselves!
*/
m = (struct i2o_message *)(mv -
(unsigned long)c->out_queue.phys +
(unsigned long)c->out_queue.virt);
msg = (u32 *) m;
/*
* Ensure this message is seen coherently but cacheably by
* the processor
*/
dma_sync_single_for_cpu(dev, c->out_queue.phys, MSG_FRAME_SIZE,
PCI_DMA_FROMDEVICE);
/* dispatch it */
if (i2o_driver_dispatch(c, mv, m))
/* flush it if result != 0 */
i2o_flush_reply(c, mv);
/*
* That 960 bug again...
*/
mv = I2O_REPLY_READ32(c);
if (mv == I2O_QUEUE_EMPTY)
mv = I2O_REPLY_READ32(c);
}
return IRQ_HANDLED;
}
/**
* i2o_pci_irq_enable - Allocate interrupt for I2O controller
* @c: I2O controller
*
* Allocate an interrupt for the I2O controller, and activate interrupts
* on the I2O controller.
*
* Returns 0 on success or negative error code on failure.
*/
static int i2o_pci_irq_enable(struct i2o_controller *c)
{
struct pci_dev *pdev = c->pdev;
int rc;
I2O_IRQ_WRITE32(c, 0xffffffff);
if (pdev->irq) {
rc = request_irq(pdev->irq, i2o_pci_interrupt, SA_SHIRQ,
c->name, c);
if (rc < 0) {
printk(KERN_ERR "%s: unable to allocate interrupt %d."
"\n", c->name, pdev->irq);
return rc;
}
}
I2O_IRQ_WRITE32(c, 0x00000000);
printk(KERN_INFO "%s: Installed at IRQ %d\n", c->name, pdev->irq);
return 0;
}
/**
* i2o_pci_irq_disable - Free interrupt for I2O controller
* @c: I2O controller
*
* Disable interrupts in I2O controller and then free interrupt.
*/
static void i2o_pci_irq_disable(struct i2o_controller *c)
{
I2O_IRQ_WRITE32(c, 0xffffffff);
if (c->pdev->irq > 0)
free_irq(c->pdev->irq, c);
}
/**
* i2o_pci_probe - Probe the PCI device for an I2O controller
* @dev: PCI device to test
* @id: id which matched with the PCI device id table
*
* Probe the PCI device for any device which is a member of the
* Intelligent I/O (I2O) class or an Adaptec Zero Channel Controller. We
* attempt to set up each such device and register it with the core.
*
* Returns 0 on success or negative error code on failure.
*/
static int __devinit i2o_pci_probe(struct pci_dev *pdev,
const struct pci_device_id *id)
{
struct i2o_controller *c;
int rc;
printk(KERN_INFO "i2o: Checking for PCI I2O controllers...\n");
if ((pdev->class & 0xff) > 1) {
printk(KERN_WARNING "i2o: I2O controller found but does not "
"support I2O 1.5 (skipping).\n");
return -ENODEV;
}
if ((rc = pci_enable_device(pdev))) {
printk(KERN_WARNING "i2o: I2O controller found but could not be"
" enabled.\n");
return rc;
}
printk(KERN_INFO "i2o: I2O controller found on bus %d at %d.\n",
pdev->bus->number, pdev->devfn);
if (pci_set_dma_mask(pdev, DMA_32BIT_MASK)) {
printk(KERN_WARNING "i2o: I2O controller on bus %d at %d: No "
"suitable DMA available!\n", pdev->bus->number,
pdev->devfn);
rc = -ENODEV;
goto disable;
}
pci_set_master(pdev);
c = i2o_iop_alloc();
if (IS_ERR(c)) {
printk(KERN_ERR "i2o: memory for I2O controller could not be "
"allocated\n");
rc = PTR_ERR(c);
goto disable;
}
c->pdev = pdev;
c->device = pdev->dev;
/* Cards that fall apart if you hit them with large I/O loads... */
if (pdev->vendor == PCI_VENDOR_ID_NCR && pdev->device == 0x0630) {
c->short_req = 1;
printk(KERN_INFO "i2o: Symbios FC920 workarounds activated.\n");
}
if (pdev->subsystem_vendor == PCI_VENDOR_ID_PROMISE) {
c->promise = 1;
printk(KERN_INFO "i2o: Promise workarounds activated.\n");
}
/* Cards that go bananas if you quiesce them before you reset them. */
if (pdev->vendor == PCI_VENDOR_ID_DPT) {
c->no_quiesce = 1;
if (pdev->device == 0xa511)
c->raptor = 1;
}
if ((rc = i2o_pci_alloc(c))) {
printk(KERN_ERR "i2o: DMA / IO allocation for I2O controller "
"failed\n");
goto free_controller;
}
if ((rc = i2o_pci_irq_enable(c))) {
printk(KERN_ERR "i2o: unable to enable interrupts for I2O "
"controller\n");
goto free_pci;
}
if ((rc = i2o_iop_add(c)))
goto uninstall;
return 0;
uninstall:
i2o_pci_irq_disable(c);
free_pci:
i2o_pci_free(c);
free_controller:
i2o_iop_free(c);
disable:
pci_disable_device(pdev);
return rc;
}
/**
* i2o_pci_remove - Removes an I2O controller from the system
* @pdev: I2O controller which should be removed
*
* Reset the I2O controller, disable interrupts and remove all allocated
* resources.
*/
static void __devexit i2o_pci_remove(struct pci_dev *pdev)
{
struct i2o_controller *c;
c = pci_get_drvdata(pdev);
i2o_iop_remove(c);
i2o_pci_irq_disable(c);
i2o_pci_free(c);
printk(KERN_INFO "%s: Controller removed.\n", c->name);
i2o_iop_free(c);
pci_disable_device(pdev);
};
/* PCI driver for I2O controller */
static struct pci_driver i2o_pci_driver = {
.name = "I2O controller",
.id_table = i2o_pci_ids,
.probe = i2o_pci_probe,
.remove = __devexit_p(i2o_pci_remove),
};
/**
* i2o_pci_init - registers I2O PCI driver in PCI subsystem
*
* Returns > 0 on success or negative error code on failure.
*/
int __init i2o_pci_init(void)
{
return pci_register_driver(&i2o_pci_driver);
};
/**
* i2o_pci_exit - unregisters I2O PCI driver from PCI subsystem
*/
void __exit i2o_pci_exit(void)
{
pci_unregister_driver(&i2o_pci_driver);
};
EXPORT_SYMBOL(i2o_dma_realloc);
/*
 * I2O user space accessible structures/APIs
 *
 * (c) Copyright 1999, 2000 Red Hat Software
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version
 * 2 of the License, or (at your option) any later version.
 *
 *************************************************************************
 *
 * This header file defines the I2O APIs that are available to both
@@ -23,7 +23,7 @@
 /* How many controllers are we allowing */
 #define MAX_I2O_CONTROLLERS	32

-#include <linux/ioctl.h>
+//#include <linux/ioctl.h>

 /*
  * I2O Control IOCTLs and structures
@@ -42,17 +42,24 @@
 #define I2OEVTREG	_IOW(I2O_MAGIC_NUMBER,10,struct i2o_evt_id)
 #define I2OEVTGET	_IOR(I2O_MAGIC_NUMBER,11,struct i2o_evt_info)
 #define I2OPASSTHRU	_IOR(I2O_MAGIC_NUMBER,12,struct i2o_cmd_passthru)
+#define I2OPASSTHRU32	_IOR(I2O_MAGIC_NUMBER,12,struct i2o_cmd_passthru32)
+
+struct i2o_cmd_passthru32
+{
+	unsigned int iop;	/* IOP unit number */
+	u32 msg;		/* message */
+};

 struct i2o_cmd_passthru
 {
 	unsigned int iop;	/* IOP unit number */
 	void __user *msg;	/* message */
 };

 struct i2o_cmd_hrtlct
 {
 	unsigned int iop;	/* IOP unit number */
 	void __user *resbuf;	/* Buffer for result */
 	unsigned int __user *reslen;	/* Buffer length in bytes */
 };
@@ -351,14 +358,15 @@ typedef struct _i2o_status_block
 #define I2O_CLASS_BUS_ADAPTER_PORT	0x080
 #define I2O_CLASS_PEER_TRANSPORT_AGENT	0x090
 #define I2O_CLASS_PEER_TRANSPORT	0x091
+#define I2O_CLASS_END			0xfff

 /*
  * Rest of 0x092 - 0x09f reserved for peer-to-peer classes
  */
 #define I2O_CLASS_MATCH_ANYCLASS	0xffffffff

 /*
  * Subclasses
  */
@@ -380,7 +388,7 @@ typedef struct _i2o_status_block
 #define I2O_PARAMS_TABLE_CLEAR		0x000A

 /*
  * I2O serial number conventions / formats
  * (circa v1.5)
  */
@@ -391,7 +399,7 @@ typedef struct _i2o_status_block
 #define I2O_SNFORMAT_LAN48_MAC		4
 #define I2O_SNFORMAT_WAN		5

 /*
  * Plus new in v2.0 (Yellowstone pdf doc)
  */
@@ -402,7 +410,7 @@ typedef struct _i2o_status_block
 #define I2O_SNFORMAT_UNKNOWN2		0xff

 /*
  * I2O Get Status State values
  */
 #define ADAPTER_STATE_INITIALIZING	0x01
...
/*
 * I2O kernel space accessible structures/APIs
 *
 * (c) Copyright 1999, 2000 Red Hat Software
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version
 * 2 of the License, or (at your option) any later version.
 *
 *************************************************************************
 *
 * This header file defines the I2O APIs/structures for use by
 * the I2O kernel modules.
 *
 */
@@ -23,66 +23,116 @@
 #include <linux/i2o-dev.h>

 /* How many different OSM's are we allowing */
-#define MAX_I2O_MODULES		4
+#define I2O_MAX_DRIVERS		4

-/* How many OSMs can register themselves for device status updates? */
-#define I2O_MAX_MANAGERS	4
+#include <asm/io.h>
 #include <asm/semaphore.h>	/* Needed for MUTEX init macros */
-#include <linux/config.h>
-#include <linux/notifier.h>
-#include <asm/atomic.h>
+#include <linux/pci.h>
+#include <asm/dma-mapping.h>
+
+/* message queue empty */
+#define I2O_QUEUE_EMPTY		0xffffffff

 /*
  * Message structures
  */
 struct i2o_message
 {
-	u8 version_offset;
-	u8 flags;
-	u16 size;
-	u32 target_tid:12;
-	u32 init_tid:12;
-	u32 function:8;
-	u32 initiator_context;
+	union {
+		struct {
+			u8 version_offset;
+			u8 flags;
+			u16 size;
+			u32 target_tid:12;
+			u32 init_tid:12;
+			u32 function:8;
+			u32 icntxt;	/* initiator context */
+			u32 tcntxt;	/* transaction context */
+		} s;
+		u32 head[4];
+	} u;
 	/* List follows */
+	u32 body[0];
 };
 /*
- * Each I2O device entity has one or more of these. There is one
- * per device.
+ * Each I2O device entity has one of these. There is one per device.
  */
 struct i2o_device
 {
 	i2o_lct_entry lct_data;	/* Device LCT information */
-	u32 flags;
-	int i2oversion;		/* I2O version supported. Actually
-				 * there should be high and low
-				 * version */

-	struct proc_dir_entry *proc_entry;	/* /proc dir */
+	struct i2o_controller *iop;	/* Controlling IOP */
+	struct list_head list;	/* node in IOP devices list */
+
+	struct device device;
+
+	struct semaphore lock;	/* device lock */
+
+	struct class_device classdev;	/* i2o device class */
+};
+
+/*
+ * Event structure provided to the event handling function
+ */
+struct i2o_event {
+	struct work_struct work;
+	struct i2o_device *i2o_dev;	/* I2O device pointer from which the
+					   event reply was initiated */
+	u16 size;			/* Size of data in 32-bit words */
+	u32 tcntxt;			/* Transaction context used at
+					   registration */
+	u32 event_indicator;		/* Event indicator from reply */
+	u32 data[0];			/* Event data from reply */
+};
+
+/*
+ * I2O classes which could be handled by the OSM
+ */
+struct i2o_class_id {
+	u16 class_id:12;
+};
+
+/*
+ * I2O driver structure for OSMs
+ */
+struct i2o_driver {
+	char *name;		/* OSM name */
+	int context;		/* Low 8 bits of the transaction info */
+	struct i2o_class_id *classes;	/* I2O classes that this OSM handles */

-	/* Primary user */
-	struct i2o_handler *owner;
+	/* Message reply handler */
+	int (*reply)(struct i2o_controller *, u32, struct i2o_message *);
+
+	/* Event handler */
+	void (*event)(struct i2o_event *);
+
+	struct workqueue_struct *event_queue;	/* Event queue */

-	/* Management users */
-	struct i2o_handler *managers[I2O_MAX_MANAGERS];
-	int num_managers;
+	struct device_driver driver;

-	struct i2o_controller *controller;	/* Controlling IOP */
-	struct i2o_device *next;	/* Chain */
-	struct i2o_device *prev;
-	char dev_name[8];	/* linux /dev name if available */
+	struct semaphore lock;
 };

 /*
- * context queue entry, used for 32-bit context on 64-bit systems
+ * Contains all information which are necessary for DMA operations
+ */
+struct i2o_dma {
+	void *virt;
+	dma_addr_t phys;
+	u32 len;
+};
+
+/*
+ * Context queue entry, used for 32-bit context on 64-bit systems
  */
 struct i2o_context_list_element {
-	struct i2o_context_list_element *next;
+	struct list_head list;
 	u32 context;
 	void *ptr;
-	unsigned int flags;
+	unsigned long timestamp;
 };
 /*
@@ -93,47 +143,42 @@ struct i2o_controller
 	char name[16];
 	int unit;
 	int type;
-	int enabled;

 	struct pci_dev *pdev;	/* PCI device */

-	int irq;
-	int short_req:1;	/* Use small block sizes */
-	int dpt:1;		/* Don't quiesce */
+	int short_req:1;	/* use small block sizes */
+	int no_quiesce:1;	/* dont quiesce before reset */
 	int raptor:1;		/* split bar */
 	int promise:1;		/* Promise controller */

 #ifdef CONFIG_MTRR
 	int mtrr_reg0;
 	int mtrr_reg1;
 #endif

+	struct list_head devices;	/* list of I2O devices */
+
 	struct notifier_block *event_notifer;	/* Events */
 	atomic_t users;
-	struct i2o_device *devices;	/* I2O device chain */
-	struct i2o_controller *next;	/* Controller chain */
+	struct list_head list;	/* Controller list */
 	void *post_port;	/* Inbound port address */
 	void *reply_port;	/* Outbound port address */
 	void *irq_mask;		/* Interrupt register address */

 	/* Dynamic LCT related data */
-	struct semaphore lct_sem;
-	int lct_pid;
-	int lct_running;
+	struct i2o_dma status;	/* status of IOP */

-	i2o_status_block *status_block;	/* IOP status block */
-	dma_addr_t status_block_phys;
-	i2o_lct *lct;		/* Logical Config Table */
-	dma_addr_t lct_phys;
-	i2o_lct *dlct;		/* Temp LCT */
-	dma_addr_t dlct_phys;
+	struct i2o_dma hrt;	/* HW Resource Table */
+	i2o_lct *lct;		/* Logical Config Table */
+	struct i2o_dma dlct;	/* Temp LCT */
+	struct semaphore lct_lock;	/* Lock for LCT updates */
+	struct i2o_dma status_block;	/* IOP status block */

-	i2o_hrt *hrt;		/* HW Resource Table */
-	dma_addr_t hrt_phys;
-	u32 hrt_len;
+	struct i2o_dma base;	/* controller messaging unit */
+	struct i2o_dma in_queue;	/* inbound message queue Host->IOP */
+	struct i2o_dma out_queue;	/* outbound message queue IOP->Host */

-	void *base_virt;	/* base virtual address */
-	unsigned long base_phys;	/* base physical address */
-	void *msg_virt;		/* messages virtual address */
-	unsigned long msg_phys;	/* messages physical address */

 	int battery:1;		/* Has a battery backup */
 	int io_alloc:1;		/* An I/O resource was allocated */
@@ -145,67 +190,19 @@ struct i2o_controller
 	struct proc_dir_entry *proc_entry;	/* /proc dir */

-	void *page_frame;	/* Message buffers */
-	dma_addr_t page_frame_map;	/* Cache map */
+	struct list_head bus_list;	/* list of busses on IOP */
+	struct device device;
+	struct i2o_device *exec;	/* Executive */
 #if BITS_PER_LONG == 64
 	spinlock_t context_list_lock;	/* lock for context_list */
-	struct i2o_context_list_element *context_list;	/* list of context id's
+	atomic_t context_list_counter;	/* needed for unique contexts */
+	struct list_head context_list;	/* list of context id's
 					   and pointers */
 #endif
+	spinlock_t lock;	/* lock for controller
+				   configuration */
 };
-/*
- * OSM registration block
- *
- * Each OSM creates at least one of these and registers it with the
- * I2O core through i2o_register_handler. An OSM may want to
- * register more than one if it wants a fast path to a reply
- * handler by having a separate initiator context for each
- * class function.
- */
-struct i2o_handler
-{
-	/* Message reply handler */
-	void (*reply)(struct i2o_handler *, struct i2o_controller *,
-		      struct i2o_message *);
-	/* New device notification handler */
-	void (*new_dev_notify)(struct i2o_controller *, struct i2o_device *);
-	/* Device deletion handler */
-	void (*dev_del_notify)(struct i2o_controller *, struct i2o_device *);
-	/* Reboot notification handler */
-	void (*reboot_notify)(void);
-
-	char *name;		/* OSM name */
-	int context;		/* Low 8 bits of the transaction info */
-	u32 class;		/* I2O classes that this driver handles */
-	/* User data follows */
-};
-
-#ifdef MODULE
-/*
- * Used by bus specific modules to communicate with the core
- *
- * This is needed because the bus modules cannot make direct
- * calls to the core as this results in the i2o_bus_specific_module
- * being dependent on the core, not the other way around.
- * In that case, a 'modprobe i2o_lan' loads i2o_core & i2o_lan,
- * but _not_ i2o_pci... which makes the whole thing pretty useless :)
- */
-struct i2o_core_func_table
-{
-	int (*install)(struct i2o_controller *);
-	int (*activate)(struct i2o_controller *);
-	struct i2o_controller *(*find)(int);
-	void (*unlock)(struct i2o_controller *);
-	void (*run_queue)(struct i2o_controller *c);
-	int (*delete)(struct i2o_controller *);
-};
-#endif				/* MODULE */
/*
 * I2O System table entry
 *
@@ -242,85 +239,305 @@ struct i2o_sys_tbl
	struct i2o_sys_tbl_entry iops[0];
};
+extern struct list_head i2o_controllers;
+
+/* Message functions */
+static inline u32 i2o_msg_get(struct i2o_controller *, struct i2o_message **);
+extern u32 i2o_msg_get_wait(struct i2o_controller *, struct i2o_message **, int);
+static inline void i2o_msg_post(struct i2o_controller *, u32);
+static inline int i2o_msg_post_wait(struct i2o_controller *, u32, unsigned long);
+extern int i2o_msg_post_wait_mem(struct i2o_controller *, u32, unsigned long,
+				 struct i2o_dma *);
+extern void i2o_msg_nop(struct i2o_controller *, u32);
+static inline void i2o_flush_reply(struct i2o_controller *, u32);
+
+/* DMA handling functions */
+static inline int i2o_dma_alloc(struct device *, struct i2o_dma *, size_t,
+				unsigned int);
+static inline void i2o_dma_free(struct device *, struct i2o_dma *);
+int i2o_dma_realloc(struct device *, struct i2o_dma *, size_t, unsigned int);
+static inline int i2o_dma_map(struct device *, struct i2o_dma *);
+static inline void i2o_dma_unmap(struct device *, struct i2o_dma *);
+
+/* IOP functions */
+extern int i2o_status_get(struct i2o_controller *);
+extern int i2o_hrt_get(struct i2o_controller *);
+extern int i2o_event_register(struct i2o_device *, struct i2o_driver *, int, u32);
+extern struct i2o_device *i2o_iop_find_device(struct i2o_controller *, u16);
+extern struct i2o_controller *i2o_find_iop(int);
+
+/* Functions needed for handling 64-bit pointers in 32-bit context */
+#if BITS_PER_LONG == 64
+extern u32 i2o_cntxt_list_add(struct i2o_controller *, void *);
+extern void *i2o_cntxt_list_get(struct i2o_controller *, u32);
+extern u32 i2o_cntxt_list_remove(struct i2o_controller *, void *);
+
+static inline u32 i2o_ptr_low(void *ptr)
+{
+	return (u32)(u64)ptr;
+};
+
+static inline u32 i2o_ptr_high(void *ptr)
+{
+	return (u32)((u64)ptr >> 32);
+};
+#else
+static inline u32 i2o_cntxt_list_add(struct i2o_controller *c, void *ptr)
+{
+	return (u32)ptr;
+};
+
+static inline void *i2o_cntxt_list_get(struct i2o_controller *c, u32 context)
+{
+	return (void *)context;
+};
+
+static inline u32 i2o_cntxt_list_remove(struct i2o_controller *c, void *ptr)
+{
+	return (u32)ptr;
+};
+
+static inline u32 i2o_ptr_low(void *ptr)
+{
+	return (u32)ptr;
+};
+
+static inline u32 i2o_ptr_high(void *ptr)
+{
+	return 0;
+};
+#endif
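The idea behind the `i2o_cntxt_list_*` split can be modelled outside the kernel: on 32-bit builds the pointer itself fits in the 32-bit transaction context, while on 64-bit builds the core must hand out generated 32-bit ids and keep a list mapping them back to pointers. The following is a minimal userspace sketch of that mapping (names such as `ctx_list_add` are hypothetical, not the kernel API; a plain singly linked list stands in for the kernel's locked `list_head` list):

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical userspace model of the 64-bit context list: each entry
 * pairs a generated 32-bit context id with the original pointer. */
struct ctx_entry {
	uint32_t context;
	void *ptr;
	struct ctx_entry *next;
};

static struct ctx_entry *ctx_head;
static uint32_t ctx_counter;	/* counter makes each context unique */

/* Mirrors the idea of i2o_cntxt_list_add(): store the pointer and
 * hand back a unique u32 that fits in the message frame. */
static uint32_t ctx_list_add(void *ptr)
{
	struct ctx_entry *e = malloc(sizeof(*e));

	if (!e)
		return 0;	/* 0 is never issued, so it can flag failure */
	e->context = ++ctx_counter;
	e->ptr = ptr;
	e->next = ctx_head;
	ctx_head = e;
	return e->context;
}

/* Mirrors i2o_cntxt_list_get(): translate the u32 back to the pointer. */
static void *ctx_list_get(uint32_t context)
{
	for (struct ctx_entry *e = ctx_head; e; e = e->next)
		if (e->context == context)
			return e->ptr;
	return NULL;
}
```

The counter-based id is what makes contexts unique across adds; the real code additionally takes the controller's `context_list_lock` around list walks.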
+/* I2O driver (OSM) functions */
+extern int i2o_driver_register(struct i2o_driver *);
+extern void i2o_driver_unregister(struct i2o_driver *);
+
+/* I2O device functions */
+extern int i2o_device_claim(struct i2o_device *);
+extern int i2o_device_claim_release(struct i2o_device *);
+
+/* Exec OSM functions */
+extern int i2o_exec_lct_get(struct i2o_controller *);
+extern int i2o_exec_lct_notify(struct i2o_controller *, u32);
+
+/* device to i2o_device and driver to i2o_driver conversion functions */
+#define to_i2o_driver(drv)	container_of(drv, struct i2o_driver, driver)
+#define to_i2o_device(dev)	container_of(dev, struct i2o_device, device)
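The `to_i2o_device`/`to_i2o_driver` macros rely on the kernel's `container_of` pattern: a generic `struct device` is embedded inside the driver-specific struct, and pointer arithmetic recovers the outer struct from a pointer to the embedded member. A self-contained userspace sketch (the miniature `struct device` and `struct i2o_device_like` here are stand-ins, not the kernel types):

```c
#include <stddef.h>

/* Miniature stand-in for the generic driver-model device. */
struct device {
	const char *name;
};

/* Driver-specific struct with the generic device embedded in it,
 * as struct i2o_device embeds its struct device. */
struct i2o_device_like {
	int tid;
	struct device device;	/* embedded generic device */
};

/* container_of as defined in the kernel (simplified): subtract the
 * member's offset from the member pointer to get the outer struct. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

#define to_i2o_device(dev) container_of(dev, struct i2o_device_like, device)
```

This is why the rewrite can hand plain `struct device *` pointers to the driver core and still get its own `struct i2o_device` back in callbacks.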
/*
 * Messenger inlines
 */
static inline u32 I2O_POST_READ32(struct i2o_controller *c)
{
+	rmb();
	return readl(c->post_port);
-}
+};

static inline void I2O_POST_WRITE32(struct i2o_controller *c, u32 val)
{
+	wmb();
	writel(val, c->post_port);
-}
+};

static inline u32 I2O_REPLY_READ32(struct i2o_controller *c)
{
+	rmb();
	return readl(c->reply_port);
-}
+};

static inline void I2O_REPLY_WRITE32(struct i2o_controller *c, u32 val)
{
+	wmb();
	writel(val, c->reply_port);
-}
+};

static inline u32 I2O_IRQ_READ32(struct i2o_controller *c)
{
+	rmb();
	return readl(c->irq_mask);
-}
+};

static inline void I2O_IRQ_WRITE32(struct i2o_controller *c, u32 val)
{
+	wmb();
	writel(val, c->irq_mask);
+	wmb();
-}
+};
+/**
+ *	i2o_msg_get - obtain an I2O message from the IOP
+ *	@c: I2O controller
+ *	@msg: pointer to an I2O message pointer
+ *
+ *	This function tries to get a message slot. If no message slot is
+ *	available, it does not wait until one becomes available (see also
+ *	i2o_msg_get_wait).
+ *
+ *	On success the message is returned and the pointer to the message is
+ *	set in msg. The returned message is the physical page frame offset
+ *	address from the read port (see the I2O spec). If no message is
+ *	available, I2O_QUEUE_EMPTY is returned and msg is left untouched.
+ */
+static inline u32 i2o_msg_get(struct i2o_controller *c,
+			      struct i2o_message **msg)
+{
+	u32 m;
+
+	if ((m = I2O_POST_READ32(c)) != I2O_QUEUE_EMPTY)
+		*msg = c->in_queue.virt + m;
+
+	return m;
+};
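The addressing trick here is worth spelling out: the read port returns a byte offset of a free message frame inside the inbound queue's DMA region, and adding that offset to the queue's virtual base (`c->in_queue.virt + m`) yields the frame's CPU-visible address. A runnable userspace model of that translation, with a plain struct standing in for the controller and its mapped queue (all names here are hypothetical mocks, not the kernel API):

```c
#include <stdint.h>
#include <stddef.h>

#define QUEUE_EMPTY 0xffffffffu		/* stand-in for I2O_QUEUE_EMPTY */

/* Hypothetical model: the inbound queue is one mapped region; the
 * read port hands out byte offsets of free message frames within it. */
struct mock_queue {
	unsigned char virt[4096];	/* mapped queue memory */
	uint32_t next_free;		/* what the read port would return */
};

/* Mimics i2o_msg_get(): translate the offset from the "read port"
 * into a usable virtual address inside the queue mapping. On an empty
 * queue, *msg is left untouched, as the kernel-doc above specifies. */
static uint32_t mock_msg_get(struct mock_queue *q, void **msg)
{
	uint32_t m = q->next_free;	/* readl(post_port) stand-in */

	if (m != QUEUE_EMPTY)
		*msg = q->virt + m;	/* virt base + frame offset */
	return m;
}
```

This mirrors why the rewrite builds messages directly in the frame: the caller writes into `*msg`, which already points at the shared queue memory, instead of composing locally and copying.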
-static inline void i2o_post_message(struct i2o_controller *c, u32 m)
-{
-	/* The second line isn't spurious - that's forcing PCI posting */
-	I2O_POST_WRITE32(c, m);
-	(void) I2O_IRQ_READ32(c);
-}
+/**
+ *	i2o_msg_post - Post an I2O message to the I2O controller
+ *	@c: I2O controller to which the message should be sent
+ *	@m: the message identifier
+ *
+ *	Post the message to the I2O controller.
+ */
+static inline void i2o_msg_post(struct i2o_controller *c, u32 m)
+{
+	I2O_POST_WRITE32(c, m);
+};
+/**
+ *	i2o_msg_post_wait - Post a message and wait until a reply is returned
+ *	@c: controller
+ *	@m: message to post
+ *	@timeout: time in seconds to wait
+ *
+ *	This API allows an OSM to post a message and then be told whether or
+ *	not the system received a successful reply. If the message times out,
+ *	-ETIMEDOUT is returned.
+ *
+ *	Returns 0 on success or negative error code on failure.
+ */
+static inline int i2o_msg_post_wait(struct i2o_controller *c, u32 m,
+				    unsigned long timeout)
+{
+	return i2o_msg_post_wait_mem(c, m, timeout, NULL);
+};
+
+/**
+ *	i2o_flush_reply - Flush a reply from the I2O controller
+ *	@c: I2O controller
+ *	@m: the message identifier
+ *
+ *	The I2O controller must be informed that the reply message is no
+ *	longer needed. If you forget to flush the reply, the message frame
+ *	can't be used by the controller anymore and is therefore lost.
+ *
+ *	FIXME: is there a timeout after which the controller reuses the
+ *	message?
+ */
static inline void i2o_flush_reply(struct i2o_controller *c, u32 m)
{
	I2O_REPLY_WRITE32(c, m);
-}
+};
+/**
+ *	i2o_dma_alloc - Allocate DMA memory
+ *	@dev: struct device pointer to the PCI device of the I2O controller
+ *	@addr: i2o_dma struct which should get the DMA buffer
+ *	@len: length of the new DMA memory
+ *	@gfp_mask: GFP mask
+ *
+ *	Allocate coherent DMA memory and write the pointers into addr.
+ *
+ *	Returns 0 on success or -ENOMEM on failure.
+ */
+static inline int i2o_dma_alloc(struct device *dev, struct i2o_dma *addr,
+				size_t len, unsigned int gfp_mask)
+{
+	addr->virt = dma_alloc_coherent(dev, len, &addr->phys, gfp_mask);
+	if (!addr->virt)
+		return -ENOMEM;
+
+	memset(addr->virt, 0, len);
+	addr->len = len;
+
+	return 0;
+};
+
+/**
+ *	i2o_dma_free - Free DMA memory
+ *	@dev: struct device pointer to the PCI device of the I2O controller
+ *	@addr: i2o_dma struct which contains the DMA buffer
+ *
+ *	Free coherent DMA memory and set the virtual address of addr to NULL.
+ */
+static inline void i2o_dma_free(struct device *dev, struct i2o_dma *addr)
+{
+	if (addr->virt) {
+		if (addr->phys)
+			dma_free_coherent(dev, addr->len, addr->virt,
+					  addr->phys);
+		else
+			kfree(addr->virt);
+		addr->virt = NULL;
+	}
+};
+
+/**
+ *	i2o_dma_map - Map the memory to DMA
+ *	@dev: struct device pointer to the PCI device of the I2O controller
+ *	@addr: i2o_dma struct which should be mapped
+ *
+ *	Map the memory in addr->virt to DMA memory and write the physical
+ *	address into addr->phys.
+ *
+ *	Returns 0 on success or -ENOMEM on failure.
+ */
+static inline int i2o_dma_map(struct device *dev, struct i2o_dma *addr)
+{
+	if (!addr->virt)
+		return -EFAULT;
+
+	if (!addr->phys)
+		addr->phys = dma_map_single(dev, addr->virt, addr->len,
+					    DMA_BIDIRECTIONAL);
+	if (!addr->phys)
+		return -ENOMEM;
+
+	return 0;
+};
+
+/**
+ *	i2o_dma_unmap - Unmap the DMA memory
+ *	@dev: struct device pointer to the PCI device of the I2O controller
+ *	@addr: i2o_dma struct which should be unmapped
+ *
+ *	Unmap the memory in addr->virt from DMA memory.
+ */
+static inline void i2o_dma_unmap(struct device *dev, struct i2o_dma *addr)
+{
+	if (!addr->virt)
+		return;
+
+	if (addr->phys) {
+		dma_unmap_single(dev, addr->phys, addr->len,
+				 DMA_BIDIRECTIONAL);
+		addr->phys = 0;
+	}
+};
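The `struct i2o_dma` triple (virt/phys/len) that these helpers manage can be modelled in userspace: one allocation tracked under both a CPU pointer and a bus address, zeroed on allocation and NULLed on free so that a second free is harmless. A minimal sketch, with `malloc` standing in for `dma_alloc_coherent()` and the pointer value standing in for the bus address (all `mock_*` names are hypothetical):

```c
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

/* Hypothetical userspace model of struct i2o_dma: the kernel version
 * pairs a CPU-visible pointer with the bus address of the same buffer. */
struct mock_i2o_dma {
	void *virt;		/* CPU virtual address */
	uintptr_t phys;		/* bus/DMA address (here: the pointer value) */
	size_t len;
};

/* Mimics i2o_dma_alloc(): malloc stands in for dma_alloc_coherent().
 * The buffer is zeroed and the length recorded, as in the real helper. */
static int mock_dma_alloc(struct mock_i2o_dma *addr, size_t len)
{
	addr->virt = malloc(len);
	if (!addr->virt)
		return -1;	/* -ENOMEM in the kernel version */
	memset(addr->virt, 0, len);
	addr->phys = (uintptr_t)addr->virt;
	addr->len = len;
	return 0;
}

/* Mimics i2o_dma_free(): release the buffer and NULL the pointer so a
 * repeated free becomes a no-op rather than a double free. */
static void mock_dma_free(struct mock_i2o_dma *addr)
{
	if (addr->virt) {
		free(addr->virt);
		addr->virt = NULL;
		addr->phys = 0;
	}
}
```

Recording `len` inside the struct is what lets `i2o_dma_free()` and `i2o_dma_unmap()` work without the caller re-supplying the size.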
/*
 * Endian handling wrapped into the macro - keeps the core code
 * cleaner.
 */

#define i2o_raw_writel(val, mem) __raw_writel(cpu_to_le32(val), mem)

-extern struct i2o_controller *i2o_find_controller(int);
-extern void i2o_unlock_controller(struct i2o_controller *);
-extern struct i2o_controller *i2o_controller_chain;
-extern int i2o_num_controllers;
-extern int i2o_status_get(struct i2o_controller *);
-
-extern int i2o_install_handler(struct i2o_handler *);
-extern int i2o_remove_handler(struct i2o_handler *);
-
-extern int i2o_claim_device(struct i2o_device *, struct i2o_handler *);
-extern int i2o_release_device(struct i2o_device *, struct i2o_handler *);
-extern int i2o_device_notify_on(struct i2o_device *, struct i2o_handler *);
-extern int i2o_device_notify_off(struct i2o_device *,
-				 struct i2o_handler *);
-
-extern int i2o_post_this(struct i2o_controller *, u32 *, int);
-extern int i2o_post_wait(struct i2o_controller *, u32 *, int, int);
-extern int i2o_post_wait_mem(struct i2o_controller *, u32 *, int, int,
-			     void *, void *, dma_addr_t, dma_addr_t, int, int);
-
-extern int i2o_query_scalar(struct i2o_controller *, int, int, int, void *,
-			    int);
-extern int i2o_set_scalar(struct i2o_controller *, int, int, int, void *,
-			  int);
+extern int i2o_parm_field_get(struct i2o_device *, int, int, void *, int);
+extern int i2o_parm_field_set(struct i2o_device *, int, int, void *, int);
+extern int i2o_parm_table_get(struct i2o_device *, int, int, int, void *, int,
+			      void *, int);
+
+/* FIXME: remove
extern int i2o_query_table(int, struct i2o_controller *, int, int, int,
			   void *, int, void *, int);
extern int i2o_clear_table(struct i2o_controller *, int, int);
@@ -328,51 +545,27 @@ extern int i2o_row_add_table(struct i2o_controller *, int, int, int,
			     void *, int);
extern int i2o_issue_params(int, struct i2o_controller *, int, void *, int,
			    void *, int);
+*/
-
-extern int i2o_event_register(struct i2o_controller *, u32, u32, u32, u32);
-extern int i2o_event_ack(struct i2o_controller *, u32 *);
-
-extern void i2o_report_status(const char *, const char *, u32 *);
-extern void i2o_dump_message(u32 *);
-extern const char *i2o_get_class_name(int);
+
+/* debugging functions */
+extern void i2o_report_status(const char *, const char *,
+			      struct i2o_message *);
+extern void i2o_dump_message(struct i2o_message *);
+extern void i2o_dump_hrt(struct i2o_controller *c);
+extern void i2o_debug_state(struct i2o_controller *c);
-
-extern int i2o_install_controller(struct i2o_controller *);
-extern int i2o_activate_controller(struct i2o_controller *);
-extern void i2o_run_queue(struct i2o_controller *);
-extern int i2o_delete_controller(struct i2o_controller *);
-#if BITS_PER_LONG == 64
-extern u32 i2o_context_list_add(void *, struct i2o_controller *);
-extern void *i2o_context_list_get(u32, struct i2o_controller *);
-extern u32 i2o_context_list_remove(void *, struct i2o_controller *);
-#else
-static inline u32 i2o_context_list_add(void *ptr, struct i2o_controller *c)
-{
-	return (u32)ptr;
-}
-
-static inline void *i2o_context_list_get(u32 context, struct i2o_controller *c)
-{
-	return (void *)context;
-}
-
-static inline u32 i2o_context_list_remove(void *ptr, struct i2o_controller *c)
-{
-	return (u32)ptr;
-}
-#endif
/*
 * Cache strategies
 */

/* The NULL strategy leaves everything up to the controller. This tends to be a
 * pessimal but functional choice.
 */
#define CACHE_NULL		0

/* Prefetch data when reading. We continually attempt to load the next 32
 * sectors into the controller cache.
 */
#define CACHE_PREFETCH		1

/* Prefetch data when reading. We sometimes attempt to load the next 32 sectors
@@ -406,14 +599,12 @@ static inline u32 i2o_context_list_remove(void *ptr, struct i2o_controller *c)

/*
 * Ioctl structures
 */

#define BLKI2OGRSTRAT	_IOR('2', 1, int)
#define BLKI2OGWSTRAT	_IOR('2', 2, int)
#define BLKI2OSRSTRAT	_IOW('2', 3, int)
#define BLKI2OSWSTRAT	_IOW('2', 4, int)
/*
@@ -679,7 +870,7 @@ static inline u32 i2o_context_list_remove(void *ptr, struct i2o_controller *c)

#define ADAPTER_TID		0
#define HOST_TID		1

-#define MSG_FRAME_SIZE		64	/* i2o_scsi assumes >= 32 */
+#define MSG_FRAME_SIZE		128	/* i2o_scsi assumes >= 32 */
#define REPLY_FRAME_SIZE	17
#define SG_TABLESIZE		30
#define NMBR_MSG_FRAMES		128

@@ -693,5 +884,22 @@ static inline u32 i2o_context_list_remove(void *ptr, struct i2o_controller *c)

#define I2O_CONTEXT_LIST_USED		0x01
#define I2O_CONTEXT_LIST_DELETED	0x02
+/* timeouts */
+#define I2O_TIMEOUT_INIT_OUTBOUND_QUEUE	15
+#define I2O_TIMEOUT_MESSAGE_GET		5
+#define I2O_TIMEOUT_RESET		30
+#define I2O_TIMEOUT_STATUS_GET		5
+#define I2O_TIMEOUT_LCT_GET		20
+
+/* retries */
+#define I2O_HRT_GET_TRIES		3
+#define I2O_LCT_GET_TRIES		3
+
+/* request queue sizes */
+#define I2O_MAX_SECTORS			1024
+#define I2O_MAX_SEGMENTS		128
+#define I2O_REQ_MEMPOOL_SIZE		32
#endif				/* __KERNEL__ */
#endif				/* _I2O_H */