Commit be457375 authored by Andrew Morton, committed by Linus Torvalds

[PATCH] s390: common i/o layer.

From: Martin Schwidefsky <schwidefsky@de.ibm.com>

Common i/o layer fixes:
 - Add atomic onoff variable to ccw devices and ccw-group devices to
   avoid races during online/offline.
 - Fix pr_debug calls.
 - A lot of path fixes:
   + Set device to disconnected state after no path event.
   + Fix chpid vary on/off for single path devices.
   + Make logical vary on/off consistent with physical vary on/off.
   + Don't update subchannel schib if the device is gone (dnv not set).
   + Add code to recover lost chpids after machine checks.
   + Avoid processing link incidents, resource accessibility events and
     chpid machine checks for logically offline chpids.
   + Recover disconnected devices after chsc machine checks.
   + Delay de-registering of no path devices to avoid deadlocks.
   + Don't redo ssd for known subchannels - the info is static.
   + Introduce a second, "slow" machine check handler thread for new devices.
     The "fast" machine check handler only recovers disconnected devices.
 - Deregister subchannel rather than ccw device on not oper events.
 - Fix calling sequence of notify function vs. path verification.
 - Reset timeout for disconnected devices.
 - Fix problem with debug feature and %s arguments.
 - Fix __get_subchannel_by_stsch to deal with "zombie" subchannels.
 - Avoid "zombie" subchannels if device is not operational during sense id.
 - Handle call to the io_subchannel remove function if the ccw device
   is not registered yet.
 - Add availability attribute for ccw devices: "good", "no device",
   "no path", "boxed".
 - Export ccw_device_work for qdio as module.
 - Retry sense id for tape devices which present intervention required.
 - Don't check the activity control to decide whether the device driver's
   interrupt handler needs to be called; use the bits in status control instead.
 - Fix race in ccw_device_stlck.
 - Accumulate deferred condition code.
 - Fix setting_up_sema locking.
 - Call qdio_shutdown instead of qdio_cleanup on failed establish.
 - Fix problem when 64 FCP adapters are initialized simultaneously.
 - Fix problem with >64 adapter interrupt capable devices.
 - Reduce stack usage in qdio.
parent b5f520b7
......@@ -8,23 +8,26 @@ Command line parameters
Determines whether information on found devices and sensed device
characteristics should be shown during startup, i.e. messages of the types
"Detected device 4711 on subchannel 42" and "SenseID: Device 4711 reports: ...".
"Detected device 0.0.4711 on subchannel 0.0.0042" and "SenseID: Device
0.0.4711 reports: ...".
Default is off.
* cio_notoper_msg = yes | no
Determines whether messages of the type "Device 4711 became 'not operational'"
should be shown during startup; after startup, they will always be shown.
Determines whether messages of the type "Device 0.0.4711 became 'not
operational'" should be shown during startup; after startup, they will always
be shown.
Default is on.
* cio_ignore = <device number> | <range of device numbers>,
<device number> | <range of device numbers>, ...
* cio_ignore = {all} |
{<device> | <range of devices>} |
{!<device> | !<range of devices>}
The given device numbers will be ignored by the common I/O-layer; no detection
The given devices will be ignored by the common I/O-layer; no detection
and device sensing will be done on any of those devices. The subchannel to
which the device in question is attached will be treated as if no device was
attached.
......@@ -32,12 +35,19 @@ Command line parameters
An ignored device can be un-ignored later; see the "/proc entries"-section for
details.
The device numbers must be given hexadecimal.
The devices must be given either as bus ids (0.0.abcd) or as hexadecimal
device numbers (0xabcd or abcd, for 2.4 backward compatibility).
You can use the 'all' keyword to ignore all devices.
The '!' operator will cause the I/O-layer to _not_ ignore a device.
The order on the command line is not important.
For example,
cio_ignore=0x23-0x42,0x4711
will ignore all devices with device numbers ranging from 23 to 42 and the
device with device number 4711, if detected.
cio_ignore=0.0.0023-0.0.0042,0.0.4711
will ignore all devices ranging from 0.0.0023 to 0.0.0042 and the device
0.0.4711, if detected.
As another example,
cio_ignore=all,!0.0.4711,!0.0.fd00-0.0.fd02
will ignore all devices but 0.0.4711, 0.0.fd00, 0.0.fd01, 0.0.fd02.
By default, no devices are ignored.
......@@ -47,17 +57,19 @@ Command line parameters
* /proc/cio_ignore
Lists the ranges of device numbers which are ignored by common I/O.
Lists the ranges of devices (by bus id) which are ignored by common I/O.
You can un-ignore certain or all devices by piping to /proc/cio_ignore.
"free all" will un-ignore all ignored devices,
"free <devnorange>, <devnorange>, ..." will un-ignore the specified devices.
For example, if devices 23 to 42 and 4711 are ignored,
- echo free 0x30-0x32 > /proc/cio_ignore
will un-ignore devices 30 to 32 and will leave devices 23 to 2F, 33 to 42
and 4711 ignored;
- echo free 0x41 > /proc/cio_ignore will furthermore un-ignore device 41;
"free <device range>, <device range>, ..." will un-ignore the specified
devices.
For example, if devices 0.0.0023 to 0.0.0042 and 0.0.4711 are ignored,
- echo free 0.0.0030-0.0.0032 > /proc/cio_ignore
will un-ignore devices 0.0.0030 to 0.0.0032 and will leave devices 0.0.0023
to 0.0.002f, 0.0.0033 to 0.0.0042 and 0.0.4711 ignored;
- echo free 0.0.0041 > /proc/cio_ignore will furthermore un-ignore device
0.0.0041;
- echo free all > /proc/cio_ignore will un-ignore all remaining ignored
devices.
......@@ -66,15 +78,19 @@ Command line parameters
available to the system.
You can also add ranges of devices to be ignored by piping to
/proc/cio_ignore; "add <devnorange>, <devnorange>, ..." will ignore the
/proc/cio_ignore; "add <device range>, <device range>, ..." will ignore the
specified devices.
Note: Already known devices cannot be ignored; this also applies to devices
which are gone after a machine check.
Note: Already known devices cannot be ignored.
For example, if device 0.0.abcd is already known and all other devices
0.0.a000-0.0.afff are not known,
"echo add 0.0.a000-0.0.accc, 0.0.af00-0.0.afff > /proc/cio_ignore"
will add 0.0.a000-0.0.abcc, 0.0.abce-0.0.accc and 0.0.af00-0.0.afff to the
list of ignored devices and skip 0.0.abcd.
For example, if device abcd is already known and all other devices a000-afff
are not known, "echo add 0xa000-0xaccc, 0xaf00-0xafff > /proc/cio_ignore"
will add af00-afff to the list of ignored devices and skip a000-accc.
The devices can be specified either by bus id (0.0.abcd) or, for 2.4 backward
compatibility, by the device number in hexadecimal (0xabcd or abcd).
* /proc/s390dbf/cio_*/ (S/390 debug feature)
......
......@@ -216,9 +216,17 @@ mind that most drivers will need to implement both a ccwgroup and a ccw driver
Channel paths show up, like subchannels, under the channel subsystem root (css0)
and are called 'chp0.<chpid>'. They have no driver and do not belong to any bus.
Please note that, unlike /proc/chpids in 2.4, the channel path objects reflect
only the logical state and not the physical state, since we cannot track the
latter consistently due to lacking machine support (and we don't need to be
aware of it anyway).
status - Can be 'online', 'logically offline' or 'n/a'.
status - Can be 'online' or 'offline'.
Piping 'on' or 'off' sets the chpid logically online/offline.
Piping 'on' to an online chpid triggers path reprobing for all devices
the chpid connects to. This can be used to force the kernel to re-use
a channel path the user knows to be online, but the machine hasn't
created a machine check for.
3. System devices
......
/*
* drivers/s390/cio/blacklist.c
* S/390 common I/O routines -- blacklisting of specific devices
* $Revision: 1.27 $
* $Revision: 1.29 $
*
* Copyright (C) 1999-2002 IBM Deutschland Entwicklung GmbH,
* IBM Corporation
......@@ -18,9 +18,11 @@
#include <linux/ctype.h>
#include <linux/device.h>
#include <asm/cio.h>
#include <asm/uaccess.h>
#include "blacklist.h"
#include "cio.h"
#include "cio_debug.h"
#include "css.h"
......@@ -199,8 +201,6 @@ is_blacklisted (int devno)
}
#ifdef CONFIG_PROC_FS
extern void css_reiterate_subchannels(void);
/*
* Function: s390_redo_validation
* Look for no longer blacklisted devices
......@@ -208,9 +208,29 @@ extern void css_reiterate_subchannels(void);
static inline void
s390_redo_validation (void)
{
CIO_TRACE_EVENT (0, "redoval");
unsigned int irq;
css_reiterate_subchannels();
CIO_TRACE_EVENT (0, "redoval");
for (irq = 0; irq <= __MAX_SUBCHANNELS; irq++) {
int ret;
struct subchannel *sch;
sch = get_subchannel_by_schid(irq);
if (sch) {
/* Already known. */
put_device(&sch->dev);
continue;
}
ret = css_probe_device(irq);
if (ret == -ENXIO)
break; /* We're through. */
if (ret == -ENOMEM)
/*
* Stop validation for now. Bad, but no need for a
* panic.
*/
break;
}
}
/*
......
/*
* drivers/s390/cio/ccwgroup.c
* bus driver for ccwgroup
* $Revision: 1.19 $
* $Revision: 1.23 $
*
* Copyright (C) 2002 IBM Deutschland Entwicklung GmbH,
* IBM Corporation
......@@ -164,6 +164,7 @@ ccwgroup_create(struct device *root,
return -ENOMEM;
memset(gdev, 0, sizeof(*gdev) + argc*sizeof(gdev->cdev[0]));
atomic_set(&gdev->onoff, 0);
for (i = 0; i < argc; i++) {
gdev->cdev[i] = get_ccwdev_by_busid(cdrv, argv[i]);
......@@ -242,18 +243,24 @@ ccwgroup_set_online(struct ccwgroup_device *gdev)
struct ccwgroup_driver *gdrv;
int ret;
if (gdev->state == CCWGROUP_ONLINE)
return 0;
if (!gdev->dev.driver)
return -EINVAL;
if (atomic_compare_and_swap(0, 1, &gdev->onoff))
return -EAGAIN;
if (gdev->state == CCWGROUP_ONLINE) {
ret = 0;
goto out;
}
if (!gdev->dev.driver) {
ret = -EINVAL;
goto out;
}
gdrv = to_ccwgroupdrv (gdev->dev.driver);
if ((ret = gdrv->set_online(gdev)))
return ret;
goto out;
gdev->state = CCWGROUP_ONLINE;
return 0;
out:
atomic_set(&gdev->onoff, 0);
return ret;
}
static int
......@@ -262,18 +269,24 @@ ccwgroup_set_offline(struct ccwgroup_device *gdev)
struct ccwgroup_driver *gdrv;
int ret;
if (gdev->state == CCWGROUP_OFFLINE)
return 0;
if (!gdev->dev.driver)
return -EINVAL;
if (atomic_compare_and_swap(0, 1, &gdev->onoff))
return -EAGAIN;
if (gdev->state == CCWGROUP_OFFLINE) {
ret = 0;
goto out;
}
if (!gdev->dev.driver) {
ret = -EINVAL;
goto out;
}
gdrv = to_ccwgroupdrv (gdev->dev.driver);
if ((ret = gdrv->set_offline(gdev)))
return ret;
goto out;
gdev->state = CCWGROUP_OFFLINE;
return 0;
out:
atomic_set(&gdev->onoff, 0);
return ret;
}
static ssize_t
......@@ -324,7 +337,7 @@ ccwgroup_probe (struct device *dev)
if ((ret = device_create_file(dev, &dev_attr_online)))
return ret;
pr_debug("%s: device %s\n", __func__, gdev->dev.name);
pr_debug("%s: device %s\n", __func__, gdev->dev.bus_id);
ret = gdrv->probe ? gdrv->probe(gdev) : -ENODEV;
if (ret)
device_remove_file(dev, &dev_attr_online);
......@@ -341,7 +354,7 @@ ccwgroup_remove (struct device *dev)
gdev = to_ccwgroupdev(dev);
gdrv = to_ccwgroupdrv(dev->driver);
pr_debug("%s: device %s\n", __func__, gdev->dev.name);
pr_debug("%s: device %s\n", __func__, gdev->dev.bus_id);
device_remove_file(dev, &dev_attr_online);
......
/*
* drivers/s390/cio/chsc.c
* S/390 common I/O routines -- channel subsystem call
* $Revision: 1.92 $
* $Revision: 1.105 $
*
* Copyright (C) 1999-2002 IBM Deutschland Entwicklung GmbH,
* IBM Corporation
......@@ -27,30 +27,20 @@
#define CHPID_LONGS (256 / (8 * sizeof(long))) /* 256 chpids */
static struct channel_path *chps[NR_CHPIDS];
static int new_channel_path(int chpid, int status);
static void *sei_page;
static inline void
set_chp_online(int chp, int onoff)
{
chps[chp]->state.online = onoff;
}
static int new_channel_path(int chpid);
static inline void
set_chp_logically_online(int chp, int onoff)
{
chps[chp]->state.logically_online = onoff;
chps[chp]->state = onoff;
}
static int
get_chp_status(int chp)
{
int ret;
if (!chps[chp])
return 0;
ret = chps[chp]->state.online ? CHP_ONLINE : CHP_STANDBY;
return (chps[chp]->state.logically_online ?
ret : ret | CHP_LOGICALLY_OFFLINE);
return (chps[chp] ? chps[chp]->state : -ENODEV);
}
void
......@@ -60,13 +50,25 @@ chsc_validate_chpids(struct subchannel *sch)
for (chp = 0; chp <= 7; chp++) {
mask = 0x80 >> chp;
if (get_chp_status(sch->schib.pmcw.chpid[chp])
& CHP_LOGICALLY_OFFLINE)
if (!get_chp_status(sch->schib.pmcw.chpid[chp]))
/* disable using this path */
sch->opm &= ~mask;
}
}
void
chpid_is_actually_online(int chp)
{
int state;
state = get_chp_status(chp);
if (state < 0)
new_channel_path(chp);
else
WARN_ON(!state);
/* FIXME: should notify other subchannels here */
}
/* FIXME: this is _always_ called for every subchannel. shouldn't we
* process more than one at a time? */
static int
......@@ -204,8 +206,8 @@ css_get_ssd_info(struct subchannel *sch)
/* Allocate channel path structures, if needed. */
for (j = 0; j < 8; j++) {
chpid = sch->ssd_info.chpid[j];
if (chpid && !get_chp_status(chpid))
new_channel_path(chpid, CHP_ONLINE);
if (chpid && (get_chp_status(chpid) < 0))
new_channel_path(chpid);
}
}
return ret;
......@@ -218,6 +220,7 @@ s390_subchannel_remove_chpid(struct device *dev, void *data)
int mask;
struct subchannel *sch;
__u8 *chpid;
struct schib schib;
sch = to_subchannel(dev);
chpid = data;
......@@ -230,7 +233,13 @@ s390_subchannel_remove_chpid(struct device *dev, void *data)
mask = 0x80 >> j;
spin_lock(&sch->lock);
stsch(sch->irq, &sch->schib);
stsch(sch->irq, &schib);
if (!schib.pmcw.dnv)
goto out_unreg;
memcpy(&sch->schib, &schib, sizeof(struct schib));
/* Check for single path devices. */
if (sch->schib.pmcw.pim == 0x80)
goto out_unreg;
if (sch->vpm == mask)
goto out_unreg;
......@@ -275,12 +284,9 @@ s390_subchannel_remove_chpid(struct device *dev, void *data)
return 0;
out_unreg:
spin_unlock(&sch->lock);
if (sch->driver && sch->driver->notify &&
sch->driver->notify(&sch->dev, CIO_NO_PATH))
return 0;
device_unregister(&sch->dev);
sch->schib.pmcw.intparm = 0;
cio_modify(sch);
sch->lpm = 0;
/* We can't block here. */
device_call_nopath_notify(sch);
return 0;
}
......@@ -292,10 +298,8 @@ s390_set_chpid_offline( __u8 chpid)
sprintf(dbf_txt, "chpr%x", chpid);
CIO_TRACE_EVENT(2, dbf_txt);
if (!get_chp_status(chpid))
return; /* we didn't know the chpid anyway */
set_chp_online(chpid, 0);
if (get_chp_status(chpid) <= 0)
return;
bus_for_each_dev(&css_bus_type, NULL, &chpid,
s390_subchannel_remove_chpid);
......@@ -303,16 +307,12 @@ s390_set_chpid_offline( __u8 chpid)
static int
s390_process_res_acc_sch(u8 chpid, __u16 fla, u32 fla_mask,
struct subchannel *sch, void *page)
struct subchannel *sch)
{
int found;
int chp;
int ccode;
/* Update our ssd_info */
if (chsc_get_sch_desc_irq(sch, page))
return 0;
found = 0;
for (chp = 0; chp <= 7; chp++)
/*
......@@ -340,14 +340,12 @@ s390_process_res_acc_sch(u8 chpid, __u16 fla, u32 fla_mask,
return 0x80 >> chp;
}
static void
static int
s390_process_res_acc (u8 chpid, __u16 fla, u32 fla_mask)
{
struct subchannel *sch;
int irq;
int ret;
int irq, rc;
char dbf_txt[15];
void *page;
sprintf(dbf_txt, "accpr%x", chpid);
CIO_TRACE_EVENT( 2, dbf_txt);
......@@ -364,18 +362,17 @@ s390_process_res_acc (u8 chpid, __u16 fla, u32 fla_mask)
* will we have to do.
*/
if (get_chp_status(chpid) & CHP_LOGICALLY_OFFLINE)
return; /* no need to do the rest */
page = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA);
if (!page)
return;
if (!get_chp_status(chpid))
return 0; /* no need to do the rest */
rc = 0;
for (irq = 0; irq < __MAX_SUBCHANNELS; irq++) {
int chp_mask;
int chp_mask, old_lpm;
sch = get_subchannel_by_schid(irq);
if (!sch) {
struct schib schib;
int ret;
/*
* We don't know the device yet, but since a path
* may be available now to the device we'll have
......@@ -384,18 +381,29 @@ s390_process_res_acc (u8 chpid, __u16 fla, u32 fla_mask)
* that beast may be on we'll have to do a stsch
* on all devices, grr...
*/
ret = css_probe_device(irq);
if (ret == -ENXIO)
if (stsch(irq, &schib)) {
/* We're through */
if (need_rescan)
rc = -EAGAIN;
break;
}
if (need_rescan) {
rc = -EAGAIN;
continue;
}
/* Put it on the slow path. */
ret = css_enqueue_subchannel_slow(irq);
if (ret) {
css_clear_subchannel_slow_list();
need_rescan = 1;
}
rc = -EAGAIN;
continue;
}
spin_lock_irq(&sch->lock);
chp_mask = s390_process_res_acc_sch(chpid, fla, fla_mask,
sch, page);
clear_page(page);
chp_mask = s390_process_res_acc_sch(chpid, fla, fla_mask, sch);
if (chp_mask == 0) {
......@@ -406,21 +414,22 @@ s390_process_res_acc (u8 chpid, __u16 fla, u32 fla_mask)
else
continue;
}
old_lpm = sch->lpm;
sch->lpm = ((sch->schib.pmcw.pim &
sch->schib.pmcw.pam &
sch->schib.pmcw.pom)
| chp_mask) & sch->opm;
if (sch->driver && sch->driver->verify)
spin_unlock_irq(&sch->lock);
if (!old_lpm && sch->lpm)
device_trigger_reprobe(sch);
else if (sch->driver && sch->driver->verify)
sch->driver->verify(&sch->dev);
spin_unlock_irq(&sch->lock);
put_device(&sch->dev);
if (fla_mask != 0)
break;
}
free_page((unsigned long)page);
return rc;
}
static int
......@@ -453,10 +462,10 @@ __get_chpid_from_lir(void *data)
return (u16) (lir->indesc[0]&0x000000ff);
}
void
int
chsc_process_crw(void)
{
int chpid;
int chpid, ret;
struct {
struct chsc_header request;
u32 reserved1;
......@@ -476,21 +485,20 @@ chsc_process_crw(void)
/* ccdf has to be big enough for a link-incident record */
} *sei_area;
if (!sei_page)
return 0;
/*
* build the chsc request block for store event information
* and do the call
* This function is only called by the machine check handler thread,
* so we don't need locking for the sei_page.
*/
sei_area = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA);
if (!sei_area) {
CIO_CRW_EVENT(0, "No memory for sei area!\n");
return;
}
sei_area = sei_page;
CIO_TRACE_EVENT( 2, "prcss");
ret = 0;
do {
int ccode;
int ccode, status;
memset(sei_area, 0, sizeof(*sei_area));
sei_area->request = (struct chsc_header) {
......@@ -500,7 +508,7 @@ chsc_process_crw(void)
ccode = chsc(sei_area);
if (ccode > 0)
goto out;
return 0;
switch (sei_area->response.code) {
/* for debug purposes, check for problems */
......@@ -511,19 +519,19 @@ chsc_process_crw(void)
case 0x0002:
CIO_CRW_EVENT(2,
"chsc_process_crw: invalid command!\n");
goto out;
return 0;
case 0x0003:
CIO_CRW_EVENT(2, "chsc_process_crw: error in chsc "
"request block!\n");
goto out;
return 0;
case 0x0005:
CIO_CRW_EVENT(2, "chsc_process_crw: no event "
"information stored\n");
goto out;
return 0;
default:
CIO_CRW_EVENT(2, "chsc_process_crw: chsc response %d\n",
sei_area->response.code);
goto out;
return 0;
}
/* Check if we might have lost some information. */
......@@ -561,24 +569,27 @@ chsc_process_crw(void)
pr_debug("Validity flags: %x\n", sei_area->vf);
/* allocate a new channel path structure, if needed */
if (chps[sei_area->rsid] == NULL)
new_channel_path(sei_area->rsid, CHP_ONLINE);
else
set_chp_online(sei_area->rsid, 1);
status = get_chp_status(sei_area->rsid);
if (status < 0)
new_channel_path(sei_area->rsid);
else if (!status)
return 0;
if ((sei_area->vf & 0x80) == 0) {
pr_debug("chpid: %x\n", sei_area->rsid);
s390_process_res_acc(sei_area->rsid, 0, 0);
ret = s390_process_res_acc(sei_area->rsid,
0, 0);
} else if ((sei_area->vf & 0xc0) == 0x80) {
pr_debug("chpid: %x link addr: %x\n",
sei_area->rsid, sei_area->fla);
s390_process_res_acc(sei_area->rsid,
sei_area->fla, 0xff00);
ret = s390_process_res_acc(sei_area->rsid,
sei_area->fla,
0xff00);
} else if ((sei_area->vf & 0xc0) == 0xc0) {
pr_debug("chpid: %x full link addr: %x\n",
sei_area->rsid, sei_area->fla);
s390_process_res_acc(sei_area->rsid,
sei_area->fla, 0xffff);
ret = s390_process_res_acc(sei_area->rsid,
sei_area->fla,
0xffff);
}
pr_debug("\n");
......@@ -590,33 +601,47 @@ chsc_process_crw(void)
break;
}
} while (sei_area->flags & 0x80);
out:
free_page((unsigned long)sei_area);
return ret;
}
static void
static int
chp_add(int chpid)
{
struct subchannel *sch;
int irq, ret;
int irq, ret, rc;
char dbf_txt[15];
if (get_chp_status(chpid) & CHP_LOGICALLY_OFFLINE)
return; /* no need to do the rest */
if (!get_chp_status(chpid))
return 0; /* no need to do the rest */
sprintf(dbf_txt, "cadd%x", chpid);
CIO_TRACE_EVENT(2, dbf_txt);
rc = 0;
for (irq = 0; irq < __MAX_SUBCHANNELS; irq++) {
int i;
sch = get_subchannel_by_schid(irq);
if (!sch) {
ret = css_probe_device(irq);
if (ret == -ENXIO)
struct schib schib;
if (stsch(irq, &schib)) {
/* We're through */
return;
if (need_rescan)
rc = -EAGAIN;
break;
}
if (need_rescan) {
rc = -EAGAIN;
continue;
}
/* Put it on the slow path. */
ret = css_enqueue_subchannel_slow(irq);
if (ret) {
css_clear_subchannel_slow_list();
need_rescan = 1;
}
rc = -EAGAIN;
continue;
}
......@@ -626,13 +651,13 @@ chp_add(int chpid)
if (stsch(sch->irq, &sch->schib) != 0) {
/* Endgame. */
spin_unlock(&sch->lock);
return;
return rc;
}
break;
}
if (i==8) {
spin_unlock(&sch->lock);
return;
return rc;
}
sch->lpm = ((sch->schib.pmcw.pim &
sch->schib.pmcw.pam &
......@@ -645,70 +670,80 @@ chp_add(int chpid)
spin_unlock(&sch->lock);
put_device(&sch->dev);
}
return rc;
}
/*
* Handling of crw machine checks with channel path source.
*/
void
int
chp_process_crw(int chpid, int on)
{
if (on == 0) {
/* Path has gone. We use the link incident routine.*/
s390_set_chpid_offline(chpid);
} else {
/*
* Path has come. Allocate a new channel path structure,
* if needed.
*/
if (chps[chpid] == NULL)
new_channel_path(chpid, CHP_ONLINE);
else
set_chp_online(chpid, 1);
/* Avoid the extra overhead in process_rec_acc. */
chp_add(chpid);
return 0; /* De-register is async anyway. */
}
/*
* Path has come. Allocate a new channel path structure,
* if needed.
*/
if (get_chp_status(chpid) < 0)
new_channel_path(chpid);
/* Avoid the extra overhead in process_rec_acc. */
return chp_add(chpid);
}
static inline void
static inline int
__check_for_io_and_kill(struct subchannel *sch, int index)
{
int cc;
cc = stsch(sch->irq, &sch->schib);
if (cc)
return;
if (sch->schib.scsw.actl && sch->schib.pmcw.lpum == (0x80 >> index))
return 0;
if (sch->schib.scsw.actl && sch->schib.pmcw.lpum == (0x80 >> index)) {
device_set_waiting(sch);
return 1;
}
return 0;
}
static inline void
__s390_subchannel_vary_chpid(struct subchannel *sch, __u8 chpid, int on)
{
int chp;
int chp, old_lpm;
if (!sch->ssd_info.valid)
return;
old_lpm = sch->lpm;
for (chp = 0; chp < 8; chp++) {
if (sch->ssd_info.chpid[chp] == chpid) {
if (on) {
sch->opm |= (0x80 >> chp);
sch->lpm |= (0x80 >> chp);
} else {
sch->opm &= ~(0x80 >> chp);
sch->lpm &= ~(0x80 >> chp);
/*
* Give running I/O a grace period in which it
* can successfully terminate, even using the
* just varied off path. Then kill it.
*/
__check_for_io_and_kill(sch, chp);
}
if (sch->driver && sch->driver->verify)
if (sch->ssd_info.chpid[chp] != chpid)
continue;
if (on) {
sch->opm |= (0x80 >> chp);
sch->lpm |= (0x80 >> chp);
if (!old_lpm)
device_trigger_reprobe(sch);
else if (sch->driver && sch->driver->verify)
sch->driver->verify(&sch->dev);
} else {
sch->opm &= ~(0x80 >> chp);
sch->lpm &= ~(0x80 >> chp);
/*
* Give running I/O a grace period in which it
* can successfully terminate, even using the
* just varied off path. Then kill it.
*/
if (!__check_for_io_and_kill(sch, chp) && !sch->lpm)
/* Get over with it now. */
device_call_nopath_notify(sch);
else if (sch->driver && sch->driver->verify)
sch->driver->verify(&sch->dev);
break;
}
break;
}
}
......@@ -738,6 +773,11 @@ s390_subchannel_vary_chpid_on(struct device *dev, void *data)
return 0;
}
extern void css_trigger_slow_path(void);
typedef void (*workfunc)(void *);
static DECLARE_WORK(varyonoff_work, (workfunc)css_trigger_slow_path,
NULL);
/*
* Function: s390_vary_chpid
* Varies the specified chpid online or offline
......@@ -746,21 +786,20 @@ static int
s390_vary_chpid( __u8 chpid, int on)
{
char dbf_text[15];
int status;
int status, irq, ret;
struct subchannel *sch;
sprintf(dbf_text, on?"varyon%x":"varyoff%x", chpid);
CIO_TRACE_EVENT( 2, dbf_text);
status = get_chp_status(chpid);
if (!status) {
if (status < 0) {
printk(KERN_ERR "Can't vary unknown chpid %02X\n", chpid);
return -EINVAL;
}
if ((on && !(status & CHP_LOGICALLY_OFFLINE)) ||
(!on && (status & CHP_LOGICALLY_OFFLINE))) {
printk(KERN_ERR "chpid %x is "
"already %sline\n", chpid, on ? "on" : "off");
if (!on && !status) {
printk(KERN_ERR "chpid %x is already offline\n", chpid);
return -EINVAL;
}
......@@ -773,6 +812,30 @@ s390_vary_chpid( __u8 chpid, int on)
bus_for_each_dev(&css_bus_type, NULL, &chpid, on ?
s390_subchannel_vary_chpid_on :
s390_subchannel_vary_chpid_off);
if (!on)
return 0;
/* Scan for new devices on varied on path. */
for (irq = 0; irq < __MAX_SUBCHANNELS; irq++) {
struct schib schib;
sch = get_subchannel_by_schid(irq);
if (sch)
continue;
if (stsch(irq, &schib))
/* We're through */
break;
if (need_rescan)
continue;
/* Put it on the slow path. */
ret = css_enqueue_subchannel_slow(irq);
if (ret) {
css_clear_subchannel_slow_list();
need_rescan = 1;
}
continue;
}
if (need_rescan || css_slow_subchannels_exist())
schedule_work(&varyonoff_work);
return 0;
}
......@@ -783,16 +846,11 @@ static ssize_t
chp_status_show(struct device *dev, char *buf)
{
struct channel_path *chp = container_of(dev, struct channel_path, dev);
int state;
if (!chp)
return 0;
state = get_chp_status(chp->id);
if (state & CHP_STANDBY)
return sprintf(buf, "n/a\n");
return (state & CHP_LOGICALLY_OFFLINE) ?
sprintf(buf, "logically offline\n") :
sprintf(buf, "online\n");
return (get_chp_status(chp->id) ? sprintf(buf, "online\n") :
sprintf(buf, "offline\n"));
}
static ssize_t
......@@ -835,7 +893,7 @@ chp_release(struct device *dev)
* This replaces /proc/chpids.
*/
static int
new_channel_path(int chpid, int status)
new_channel_path(int chpid)
{
struct channel_path *chp;
int ret;
......@@ -849,19 +907,7 @@ new_channel_path(int chpid, int status)
/* fill in status, etc. */
chp->id = chpid;
switch (status) {
case CHP_STANDBY:
chp->state.online = 0;
chp->state.logically_online = 1;
break;
case CHP_LOGICALLY_OFFLINE:
chp->state.logically_online = 0;
chp->state.online = 1;
break;
case CHP_ONLINE:
chp->state.online = 1;
chp->state.logically_online = 1;
}
chp->state = 1;
chp->dev = (struct device) {
.parent = &css_bus_device,
.release = chp_release,
......@@ -882,3 +928,14 @@ new_channel_path(int chpid, int status)
return ret;
}
static int __init
chsc_alloc_sei_area(void)
{
sei_page = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA);
if (!sei_page)
printk(KERN_WARNING"Can't allocate page for processing of " \
"chsc machine checks!\n");
return (sei_page ? 0 : -ENOMEM);
}
subsys_initcall(chsc_alloc_sei_area);
......@@ -3,10 +3,6 @@
#define NR_CHPIDS 256
#define CHP_STANDBY 1
#define CHP_LOGICALLY_OFFLINE 2
#define CHP_ONLINE 4
#define CHSC_SEI_ACC_CHPID 1
#define CHSC_SEI_ACC_LINKADDR 2
#define CHSC_SEI_ACC_FULLLINKADDR 3
......@@ -18,10 +14,7 @@ struct chsc_header {
struct channel_path {
int id;
struct {
unsigned int online:1;
unsigned int logically_online:1;
}__attribute__((packed)) state;
int state;
struct device dev;
};
......@@ -29,4 +22,6 @@ extern struct channel_path *chps[];
extern void s390_process_css( void );
extern void chsc_validate_chpids(struct subchannel *);
extern void chpid_is_actually_online(int);
extern int is_chpid_online(int);
#endif
/*
* drivers/s390/cio/cio.c
* S/390 common I/O routines -- low level i/o calls
* $Revision: 1.114 $
* $Revision: 1.117 $
*
* Copyright (C) 1999-2002 IBM Deutschland Entwicklung GmbH,
* IBM Corporation
......@@ -173,7 +173,7 @@ cio_start_handle_notoper(struct subchannel *sch, __u8 lpm)
stsch (sch->irq, &sch->schib);
CIO_MSG_EVENT(0, "cio_start: 'not oper' status for "
"subchannel %s!\n", sch->dev.bus_id);
"subchannel %04x!\n", sch->irq);
sprintf(dbf_text, "no%s", sch->dev.bus_id);
CIO_TRACE_EVENT(0, dbf_text);
CIO_HEX_EVENT(0, &sch->schib, sizeof (struct schib));
......@@ -572,9 +572,9 @@ cio_validate_subchannel (struct subchannel *sch, unsigned int irq)
sch->opm;
CIO_DEBUG(KERN_INFO, 0,
"Detected device %04X on subchannel %s"
"Detected device %04X on subchannel %04X"
" - PIM = %02X, PAM = %02X, POM = %02X\n",
sch->schib.pmcw.dev, sch->dev.bus_id, sch->schib.pmcw.pim,
sch->schib.pmcw.dev, sch->irq, sch->schib.pmcw.pim,
sch->schib.pmcw.pam, sch->schib.pmcw.pom);
/*
......
/*
* drivers/s390/cio/css.c
* driver for channel subsystem
* $Revision: 1.65 $
* $Revision: 1.69 $
*
* Copyright (C) 2002 IBM Deutschland Entwicklung GmbH,
* IBM Corporation
......@@ -13,6 +13,7 @@
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/list.h>
#include "css.h"
#include "cio.h"
......@@ -20,6 +21,7 @@
#include "ioasm.h"
unsigned int highest_subchannel;
int need_rescan = 0;
int css_init_done = 0;
struct device css_bus_device = {
......@@ -130,7 +132,7 @@ __get_subchannel_by_stsch(int irq)
struct schib schib;
cc = stsch(irq, &schib);
if (cc)
if (cc || !schib.pmcw.dnv)
return NULL;
sch = (struct subchannel *)(unsigned long)schib.pmcw.intparm;
if (!sch)
......@@ -154,15 +156,18 @@ get_subchannel_by_schid(int irq)
sch = __get_subchannel_by_stsch(irq);
if (sch)
goto out;
if (!get_driver(&io_subchannel_driver.drv))
goto out;
down_read(&css_bus_type.subsys.rwsem);
list_for_each(entry, &io_subchannel_driver.drv.devices) {
list_for_each(entry, &css_bus_type.devices.list) {
dev = get_device(container_of(entry,
struct device, driver_list));
struct device, bus_list));
if (!dev)
continue;
/* Skip channel paths. */
if (dev->release != &css_subchannel_release) {
put_device(dev);
continue;
}
sch = to_subchannel(dev);
if (sch->irq == irq)
break;
......@@ -170,7 +175,6 @@ get_subchannel_by_schid(int irq)
sch = NULL;
}
up_read(&css_bus_type.subsys.rwsem);
put_driver(&io_subchannel_driver.drv);
out:
put_bus(&css_bus_type);
......@@ -188,19 +192,24 @@ css_get_subchannel_status(struct subchannel *sch, int schid)
return CIO_GONE;
if (!schib.pmcw.dnv)
return CIO_GONE;
if (sch && (schib.pmcw.dev != sch->schib.pmcw.dev))
if (sch && sch->schib.pmcw.dnv &&
(schib.pmcw.dev != sch->schib.pmcw.dev))
return CIO_REVALIDATE;
return CIO_OPER;
}
static inline int
css_evaluate_subchannel(int irq)
css_evaluate_subchannel(int irq, int slow)
{
int event, ret, disc;
struct subchannel *sch;
sch = get_subchannel_by_schid(irq);
disc = sch ? device_is_disconnected(sch) : 0;
if (disc && slow)
return 0; /* Already processed. */
if (!disc && !slow)
return -EAGAIN; /* Will be done on the slow path. */
event = css_get_subchannel_status(sch, irq);
switch (event) {
case CIO_GONE:
......@@ -252,18 +261,13 @@ css_evaluate_subchannel(int irq)
return ret;
}
/*
* Rescan for new devices. FIXME: This is slow.
* This function is called when we have lost CRWs due to overflows and we have
* to do subchannel housekeeping.
*/
void
css_reiterate_subchannels(void)
static void
css_rescan_devices(void)
{
int irq, ret;
for (irq = 0; irq <= __MAX_SUBCHANNELS; irq++) {
ret = css_evaluate_subchannel(irq);
ret = css_evaluate_subchannel(irq, 1);
/* No more memory. It doesn't make sense to continue. No
* panic because this can happen in midflight and just
* because we can't use a new device is no reason to crash
......@@ -276,20 +280,61 @@ css_reiterate_subchannels(void)
}
}
static void
css_evaluate_slow_subchannel(unsigned long schid)
{
css_evaluate_subchannel(schid, 1);
}
void
css_trigger_slow_path(void)
{
if (need_rescan) {
need_rescan = 0;
css_rescan_devices();
return;
}
css_walk_subchannel_slow_list(css_evaluate_slow_subchannel);
}
/*
* Called from the machine check handler for subchannel report words.
* Rescan for new devices. FIXME: This is slow.
* This function is called when we have lost CRWs due to overflows and we have
* to do subchannel housekeeping.
*/
void
css_reiterate_subchannels(void)
{
css_clear_subchannel_slow_list();
need_rescan = 1;
}
/*
* Called from the machine check handler for subchannel report words.
*/
int
css_process_crw(int irq)
{
int ret;
CIO_CRW_EVENT(2, "source is subchannel %04X\n", irq);
if (need_rescan)
/* We need to iterate all subchannels anyway. */
return -EAGAIN;
/*
* Since we are always presented with IPI in the CRW, we have to
* use stsch() to find out if the subchannel in question has come
* or gone.
*/
css_evaluate_subchannel(irq);
ret = css_evaluate_subchannel(irq, 0);
if (ret == -EAGAIN) {
if (css_enqueue_subchannel_slow(irq)) {
css_clear_subchannel_slow_list();
need_rescan = 1;
}
}
return ret;
}
/*
@@ -412,6 +457,73 @@ s390_root_dev_unregister(struct device *dev)
device_unregister(dev);
}
struct slow_subchannel {
struct list_head slow_list;
unsigned long schid;
};
static LIST_HEAD(slow_subchannels_head);
static spinlock_t slow_subchannel_lock = SPIN_LOCK_UNLOCKED;
int
css_enqueue_subchannel_slow(unsigned long schid)
{
struct slow_subchannel *new_slow_sch;
unsigned long flags;
new_slow_sch = kmalloc(sizeof(struct slow_subchannel), GFP_ATOMIC);
if (!new_slow_sch)
return -ENOMEM;
new_slow_sch->schid = schid;
spin_lock_irqsave(&slow_subchannel_lock, flags);
list_add_tail(&new_slow_sch->slow_list, &slow_subchannels_head);
spin_unlock_irqrestore(&slow_subchannel_lock, flags);
return 0;
}
void
css_clear_subchannel_slow_list(void)
{
unsigned long flags;
spin_lock_irqsave(&slow_subchannel_lock, flags);
while (!list_empty(&slow_subchannels_head)) {
struct slow_subchannel *slow_sch =
list_entry(slow_subchannels_head.next,
struct slow_subchannel, slow_list);
list_del_init(slow_subchannels_head.next);
kfree(slow_sch);
}
spin_unlock_irqrestore(&slow_subchannel_lock, flags);
}
void
css_walk_subchannel_slow_list(void (*fn)(unsigned long))
{
unsigned long flags;
spin_lock_irqsave(&slow_subchannel_lock, flags);
while (!list_empty(&slow_subchannels_head)) {
struct slow_subchannel *slow_sch =
list_entry(slow_subchannels_head.next,
struct slow_subchannel, slow_list);
list_del_init(slow_subchannels_head.next);
spin_unlock_irqrestore(&slow_subchannel_lock, flags);
fn(slow_sch->schid);
spin_lock_irqsave(&slow_subchannel_lock, flags);
kfree(slow_sch);
}
spin_unlock_irqrestore(&slow_subchannel_lock, flags);
}
int
css_slow_subchannels_exist(void)
{
return (!list_empty(&slow_subchannels_head));
}
MODULE_LICENSE("GPL");
EXPORT_SYMBOL(css_bus_type);
EXPORT_SYMBOL(s390_root_dev_register);
......
@@ -65,6 +65,7 @@ struct senseid {
struct ccw_device_private {
int state; /* device state */
atomic_t onoff;
__u16 devno; /* device number */
__u16 irq; /* subchannel number */
__u8 imask; /* lpm mask for SNID/SID/SPGID */
@@ -127,6 +128,14 @@ int device_is_disconnected(struct subchannel *);
void device_set_disconnected(struct subchannel *);
void device_trigger_reprobe(struct subchannel *);
/* Helper function for vary on/off. */
/* Helper functions for vary on/off. */
void device_set_waiting(struct subchannel *);
void device_call_nopath_notify(struct subchannel *);
/* Helper functions to build lists for the slow path. */
int css_enqueue_subchannel_slow(unsigned long schid);
void css_walk_subchannel_slow_list(void (*fn)(unsigned long));
void css_clear_subchannel_slow_list(void);
int css_slow_subchannels_exist(void);
extern int need_rescan;
#endif
/*
* drivers/s390/cio/device.c
* bus driver for ccw devices
* $Revision: 1.85 $
* $Revision: 1.103 $
*
* Copyright (C) 2002 IBM Deutschland Entwicklung GmbH,
* IBM Corporation
@@ -240,6 +240,22 @@ online_show (struct device *dev, char *buf)
return sprintf(buf, cdev->online ? "1\n" : "0\n");
}
static void
ccw_device_remove_disconnected(struct ccw_device *cdev)
{
struct subchannel *sch;
/*
* Forced offline in disconnected state means
* 'throw away device'.
*/
sch = to_subchannel(cdev->dev.parent);
device_unregister(&sch->dev);
/* Reset intparm to zeroes. */
sch->schib.pmcw.intparm = 0;
cio_modify(sch);
put_device(&sch->dev);
}
int
ccw_device_set_offline(struct ccw_device *cdev)
{
@@ -250,20 +266,6 @@ ccw_device_set_offline(struct ccw_device *cdev)
if (!cdev->online || !cdev->drv)
return -EINVAL;
if (cdev->private->state == DEV_STATE_DISCONNECTED) {
struct subchannel *sch;
/*
* Forced offline in disconnected state means
* 'throw away device'.
*/
sch = to_subchannel(cdev->dev.parent);
device_unregister(&sch->dev);
/* Reset intparm to zeroes. */
sch->schib.pmcw.intparm = 0;
cio_modify(sch);
put_device(&sch->dev);
return 0;
}
if (cdev->drv->set_offline) {
ret = cdev->drv->set_offline(cdev);
if (ret != 0)
@@ -280,7 +282,7 @@ ccw_device_set_offline(struct ccw_device *cdev)
ret, cdev->dev.bus_id);
cdev->online = 1;
}
return ret;
return ret;
}
int
@@ -329,15 +331,19 @@ online_store (struct device *dev, const char *buf, size_t count)
if (!cdev->drv)
return count;
if (atomic_compare_and_swap(0, 1, &cdev->private->onoff))
return -EAGAIN;
i = simple_strtoul(buf, &tmp, 16);
if (i == 1 && cdev->drv->set_online)
ccw_device_set_online(cdev);
else if (i == 0 && cdev->drv->set_offline)
ccw_device_set_offline(cdev);
else
return -EINVAL;
else if (i == 0 && cdev->drv->set_offline) {
if (cdev->private->state == DEV_STATE_DISCONNECTED)
ccw_device_remove_disconnected(cdev);
else
ccw_device_set_offline(cdev);
}
atomic_set(&cdev->private->onoff, 0);
return count;
}
@@ -369,12 +375,36 @@ stlck_store(struct device *dev, const char *buf, size_t count)
return count;
}
static ssize_t
available_show (struct device *dev, char *buf)
{
struct ccw_device *cdev = to_ccwdev(dev);
struct subchannel *sch;
switch (cdev->private->state) {
case DEV_STATE_BOXED:
return sprintf(buf, "boxed\n");
case DEV_STATE_DISCONNECTED:
case DEV_STATE_DISCONNECTED_SENSE_ID:
case DEV_STATE_NOT_OPER:
sch = to_subchannel(dev->parent);
if (!sch->lpm)
return sprintf(buf, "no path\n");
else
return sprintf(buf, "no device\n");
default:
/* All other states considered fine. */
return sprintf(buf, "good\n");
}
}
static DEVICE_ATTR(chpids, 0444, chpids_show, NULL);
static DEVICE_ATTR(pimpampom, 0444, pimpampom_show, NULL);
static DEVICE_ATTR(devtype, 0444, devtype_show, NULL);
static DEVICE_ATTR(cutype, 0444, cutype_show, NULL);
static DEVICE_ATTR(online, 0644, online_show, online_store);
static DEVICE_ATTR(steal_lock, 0200, NULL, stlck_store);
static DEVICE_ATTR(availability, 0444, available_show, NULL);
/* A device has been unboxed. Start device recognition. */
static void
@@ -419,6 +449,7 @@ static struct attribute * ccwdev_attrs[] = {
&dev_attr_devtype.attr,
&dev_attr_cutype.attr,
&dev_attr_online.attr,
&dev_attr_availability.attr,
NULL,
};
@@ -468,7 +499,7 @@ ccw_device_register(struct ccw_device *cdev)
return ret;
if ((ret = device_add_files(dev)))
device_unregister(dev);
device_del(dev);
return ret;
}
@@ -521,6 +552,7 @@ io_subchannel_register(void *data)
if (ret) {
printk (KERN_WARNING "%s: could not register %s\n",
__func__, cdev->dev.bus_id);
put_device(&cdev->dev);
sch->dev.driver_data = 0;
kfree (cdev->private);
kfree (cdev);
@@ -533,10 +565,25 @@ io_subchannel_register(void *data)
__func__, sch->dev.bus_id);
if (cdev->private->state == DEV_STATE_BOXED)
device_create_file(&cdev->dev, &dev_attr_steal_lock);
put_device(&cdev->dev);
out:
put_device(&sch->dev);
}
static void
device_call_sch_unregister(void *data)
{
struct ccw_device *cdev = data;
struct subchannel *sch;
sch = to_subchannel(cdev->dev.parent);
device_unregister(&sch->dev);
/* Reset intparm to zeroes. */
sch->schib.pmcw.intparm = 0;
cio_modify(sch);
put_device(&cdev->dev);
}
/*
* subchannel recognition done. Called from the state machine.
*/
@@ -550,11 +597,12 @@ io_subchannel_recog_done(struct ccw_device *cdev)
switch (cdev->private->state) {
case DEV_STATE_NOT_OPER:
/* Remove device found not operational. */
if (!get_device(&cdev->dev))
break;
sch = to_subchannel(cdev->dev.parent);
sch->dev.driver_data = 0;
put_device(&sch->dev);
if (cdev->dev.release)
cdev->dev.release(&cdev->dev);
INIT_WORK(&cdev->private->kick_work,
device_call_sch_unregister, (void *) cdev);
queue_work(ccw_device_work, &cdev->private->kick_work);
break;
case DEV_STATE_BOXED:
/* Device did not respond in time. */
@@ -563,6 +611,8 @@ io_subchannel_recog_done(struct ccw_device *cdev)
* We can't register the device in interrupt context so
* we schedule a work item.
*/
if (!get_device(&cdev->dev))
break;
INIT_WORK(&cdev->private->kick_work,
io_subchannel_register, (void *) cdev);
queue_work(ccw_device_work, &cdev->private->kick_work);
@@ -647,6 +697,7 @@ io_subchannel_probe (struct device *pdev)
return -ENOMEM;
}
memset(cdev->private, 0, sizeof(struct ccw_device_private));
atomic_set(&cdev->private->onoff, 0);
cdev->dev = (struct device) {
.parent = pdev,
.release = ccw_device_release,
@@ -657,18 +708,17 @@
if (!get_device(&sch->dev)) {
if (cdev->dev.release)
cdev->dev.release(&cdev->dev);
return 0;
return -ENODEV;
}
rc = io_subchannel_recog(cdev, to_subchannel(pdev));
if (rc) {
sch->dev.driver_data = 0;
put_device(&sch->dev);
if (cdev->dev.release)
cdev->dev.release(&cdev->dev);
}
return 0;
return rc;
}
static int
@@ -680,8 +730,14 @@ io_subchannel_remove (struct device *dev)
return 0;
cdev = dev->driver_data;
/* Set ccw device to not operational and drop reference. */
dev_fsm_event(cdev, DEV_EVENT_NOTOPER);
put_device(&cdev->dev);
cdev->private->state = DEV_STATE_NOT_OPER;
/*
* Careful here. Our ccw device might be yet unregistered when
* de-registering its subchannel (machine check during device
* recognition). Better look if the subchannel has children.
*/
if (!list_empty(&dev->children))
device_unregister(&cdev->dev);
dev->driver_data = NULL;
return 0;
}
@@ -860,10 +916,7 @@ ccw_device_remove (struct device *dev)
struct ccw_driver *cdrv = cdev->drv;
int ret;
pr_debug("removing device %s, sch %d, devno %x\n",
cdev->dev.bus_id,
cdev->private->irq,
cdev->private->devno);
pr_debug("removing device %s\n", cdev->dev.bus_id);
if (cdrv->remove)
cdrv->remove(cdev);
if (cdev->online) {
@@ -879,6 +932,7 @@ ccw_device_remove (struct device *dev)
pr_debug("ccw_device_offline returned %d, device %s\n",
ret, cdev->dev.bus_id);
}
cdev->drv = 0;
return 0;
}
......
@@ -19,6 +19,7 @@
#include "cio_debug.h"
#include "css.h"
#include "device.h"
#include "chsc.h"
#include "ioasm.h"
#include "qdio.h"
@@ -42,6 +43,7 @@ device_set_disconnected(struct subchannel *sch)
if (!sch->dev.driver_data)
return;
cdev = sch->dev.driver_data;
ccw_device_set_timeout(cdev, 0);
cdev->private->state = DEV_STATE_DISCONNECTED;
}
@@ -78,8 +80,7 @@ void
ccw_device_set_timeout(struct ccw_device *cdev, int expires)
{
if (expires == 0) {
if (timer_pending(&cdev->private->timer))
del_timer(&cdev->private->timer);
del_timer(&cdev->private->timer);
return;
}
if (timer_pending(&cdev->private->timer)) {
@@ -166,6 +167,26 @@ ccw_device_handle_oper(struct ccw_device *cdev)
ccw_device_online(cdev);
}
/*
* The machine won't give us any notification by machine check if a chpid has
* been varied online on the SE so we have to find out by magic (i.e. driving
* the channel subsystem to device selection and updating our path masks).
*/
static inline void
__recover_lost_chpids(struct subchannel *sch, int old_lpm)
{
int mask, i;
for (i = 0; i<8; i++) {
mask = 0x80 >> i;
if (!(sch->lpm & mask))
continue;
if (old_lpm & mask)
continue;
chpid_is_actually_online(sch->schib.pmcw.chpid[i]);
}
}
/*
* Stop device recognition.
*/
@@ -173,12 +194,27 @@ static void
ccw_device_recog_done(struct ccw_device *cdev, int state)
{
struct subchannel *sch;
int notify;
int notify, old_lpm;
sch = to_subchannel(cdev->dev.parent);
ccw_device_set_timeout(cdev, 0);
cio_disable_subchannel(sch);
/*
* Now that we tried recognition, we have performed device selection
* through ssch() and the path information is up to date.
*/
old_lpm = sch->lpm;
stsch(sch->irq, &sch->schib);
sch->lpm = sch->schib.pmcw.pim &
sch->schib.pmcw.pam &
sch->schib.pmcw.pom &
sch->opm;
if (cdev->private->state == DEV_STATE_DISCONNECTED_SENSE_ID)
/* Force reprobe on all chpids. */
old_lpm = 0;
if (sch->lpm != old_lpm)
__recover_lost_chpids(sch, old_lpm);
if (cdev->private->state == DEV_STATE_DISCONNECTED_SENSE_ID) {
if (state == DEV_STATE_NOT_OPER) {
cdev->private->state = DEV_STATE_DISCONNECTED;
@@ -190,8 +226,8 @@ ccw_device_recog_done(struct ccw_device *cdev, int state)
switch (state) {
case DEV_STATE_NOT_OPER:
CIO_DEBUG(KERN_WARNING, 2,
"SenseID : unknown device %s on subchannel %s\n",
cdev->dev.bus_id, sch->dev.bus_id);
"SenseID : unknown device %04x on subchannel %04x\n",
cdev->private->devno, sch->irq);
break;
case DEV_STATE_OFFLINE:
if (cdev->private->state == DEV_STATE_DISCONNECTED_SENSE_ID)
@@ -204,16 +240,16 @@ ccw_device_recog_done(struct ccw_device *cdev, int state)
.dev_model = cdev->private->senseid.dev_model,
};
/* Issue device info message. */
CIO_DEBUG(KERN_INFO, 2, "SenseID : device %s reports: "
CIO_DEBUG(KERN_INFO, 2, "SenseID : device %04x reports: "
"CU Type/Mod = %04X/%02X, Dev Type/Mod = "
"%04X/%02X\n", cdev->dev.bus_id,
"%04X/%02X\n", cdev->private->devno,
cdev->id.cu_type, cdev->id.cu_model,
cdev->id.dev_type, cdev->id.dev_model);
break;
case DEV_STATE_BOXED:
CIO_DEBUG(KERN_WARNING, 2,
"SenseID : boxed device %s on subchannel %s\n",
cdev->dev.bus_id, sch->dev.bus_id);
"SenseID : boxed device %04x on subchannel %04x\n",
cdev->private->devno, sch->irq);
break;
}
cdev->private->state = state;
@@ -283,8 +319,8 @@ ccw_device_done(struct ccw_device *cdev, int state)
if (state == DEV_STATE_BOXED) {
CIO_DEBUG(KERN_WARNING, 2,
"Boxed device %s on subchannel %s\n",
cdev->dev.bus_id, sch->dev.bus_id);
"Boxed device %04x on subchannel %04x\n",
cdev->private->devno, sch->irq);
INIT_WORK(&cdev->private->kick_work,
ccw_device_add_stlck, (void *) cdev);
queue_work(ccw_device_work, &cdev->private->kick_work);
@@ -387,6 +423,46 @@ ccw_device_recog_timeout(struct ccw_device *cdev, enum dev_event dev_event)
}
static void
ccw_device_nopath_notify(void *data)
{
struct ccw_device *cdev;
struct subchannel *sch;
int ret;
cdev = (struct ccw_device *)data;
sch = to_subchannel(cdev->dev.parent);
/* Extra sanity. */
if (sch->lpm)
return;
ret = (sch->driver && sch->driver->notify) ?
sch->driver->notify(&sch->dev, CIO_NO_PATH) : 0;
if (!ret) {
/* Driver doesn't want to keep device. */
device_unregister(&sch->dev);
sch->schib.pmcw.intparm = 0;
cio_modify(sch);
} else {
ccw_device_set_timeout(cdev, 0);
cdev->private->state = DEV_STATE_DISCONNECTED;
wake_up(&cdev->private->wait_q);
}
}
void
device_call_nopath_notify(struct subchannel *sch)
{
struct ccw_device *cdev;
if (!sch->dev.driver_data)
return;
cdev = sch->dev.driver_data;
PREPARE_WORK(&cdev->private->kick_work,
ccw_device_nopath_notify, (void *)cdev);
queue_work(ccw_device_work, &cdev->private->kick_work);
}
void
ccw_device_verify_done(struct ccw_device *cdev, int err)
{
@@ -399,6 +475,9 @@ ccw_device_verify_done(struct ccw_device *cdev, int err)
ccw_device_done(cdev, DEV_STATE_BOXED);
break;
default:
PREPARE_WORK(&cdev->private->kick_work,
ccw_device_nopath_notify, (void *)cdev);
queue_work(ccw_device_work, &cdev->private->kick_work);
ccw_device_done(cdev, DEV_STATE_NOT_OPER);
break;
}
@@ -508,10 +587,7 @@ ccw_device_onoff_timeout(struct ccw_device *cdev, enum dev_event dev_event)
static void
ccw_device_recog_notoper(struct ccw_device *cdev, enum dev_event dev_event)
{
if (cdev->private->state == DEV_STATE_DISCONNECTED_SENSE_ID)
cdev->private->state = DEV_STATE_DISCONNECTED;
else
ccw_device_recog_done(cdev, DEV_STATE_NOT_OPER);
ccw_device_recog_done(cdev, DEV_STATE_NOT_OPER);
}
/*
@@ -520,8 +596,13 @@ ccw_device_recog_notoper(struct ccw_device *cdev, enum dev_event dev_event)
static void
ccw_device_offline_notoper(struct ccw_device *cdev, enum dev_event dev_event)
{
struct subchannel *sch;
cdev->private->state = DEV_STATE_NOT_OPER;
device_unregister(&cdev->dev);
sch = to_subchannel(cdev->dev.parent);
device_unregister(&sch->dev);
sch->schib.pmcw.intparm = 0;
cio_modify(sch);
wake_up(&cdev->private->wait_q);
}
@@ -540,7 +621,9 @@ ccw_device_online_notoper(struct ccw_device *cdev, enum dev_event dev_event)
// FIXME: not-oper indication to device driver ?
ccw_device_call_handler(cdev);
}
device_unregister(&cdev->dev);
device_unregister(&sch->dev);
sch->schib.pmcw.intparm = 0;
cio_modify(sch);
wake_up(&cdev->private->wait_q);
}
@@ -553,7 +636,9 @@ ccw_device_disconnected_notoper(struct ccw_device *cdev,
sch = to_subchannel(cdev->dev.parent);
cdev->private->state = DEV_STATE_NOT_OPER;
cio_disable_subchannel(sch);
device_unregister(&cdev->dev);
device_unregister(&sch->dev);
sch->schib.pmcw.intparm = 0;
cio_modify(sch);
wake_up(&cdev->private->wait_q);
}
@@ -692,11 +777,21 @@ ccw_device_clear_verify(struct ccw_device *cdev, enum dev_event dev_event)
static void
ccw_device_killing_irq(struct ccw_device *cdev, enum dev_event dev_event)
{
struct subchannel *sch;
sch = to_subchannel(cdev->dev.parent);
/* OK, i/o is dead now. Call interrupt handler. */
cdev->private->state = DEV_STATE_ONLINE;
if (cdev->handler)
cdev->handler(cdev, cdev->private->intparm,
ERR_PTR(-ETIMEDOUT));
if (!sch->lpm) {
PREPARE_WORK(&cdev->private->kick_work,
ccw_device_nopath_notify, (void *)cdev);
queue_work(ccw_device_work, &cdev->private->kick_work);
} else if (cdev->private->flags.doverify)
/* Start delayed path verification. */
ccw_device_online_verify(cdev, 0);
}
static void
@@ -710,6 +805,14 @@ ccw_device_killing_timeout(struct ccw_device *cdev, enum dev_event dev_event)
return;
}
if (ret == -ENODEV) {
struct subchannel *sch;
sch = to_subchannel(cdev->dev.parent);
if (!sch->lpm) {
PREPARE_WORK(&cdev->private->kick_work,
ccw_device_nopath_notify, (void *)cdev);
queue_work(ccw_device_work, &cdev->private->kick_work);
}
dev_fsm_event(cdev, DEV_EVENT_NOTOPER);
return;
}
@@ -751,7 +854,12 @@ ccw_device_wait4io_irq(struct ccw_device *cdev, enum dev_event dev_event)
if (sch->schib.scsw.actl == 0)
ccw_device_set_timeout(cdev, 0);
/* Call the handler. */
if (ccw_device_call_handler(cdev) && cdev->private->flags.doverify)
ccw_device_call_handler(cdev);
if (!sch->lpm) {
PREPARE_WORK(&cdev->private->kick_work,
ccw_device_nopath_notify, (void *)cdev);
queue_work(ccw_device_work, &cdev->private->kick_work);
} else if (cdev->private->flags.doverify)
ccw_device_online_verify(cdev, 0);
}
@@ -759,6 +867,7 @@ static void
ccw_device_wait4io_timeout(struct ccw_device *cdev, enum dev_event dev_event)
{
int ret;
struct subchannel *sch;
ccw_device_set_timeout(cdev, 0);
ret = ccw_device_cancel_halt_clear(cdev);
@@ -767,11 +876,24 @@ ccw_device_wait4io_timeout(struct ccw_device *cdev, enum dev_event dev_event)
cdev->private->state = DEV_STATE_TIMEOUT_KILL;
return;
}
if (ret == -ENODEV)
if (ret == -ENODEV) {
PREPARE_WORK(&cdev->private->kick_work,
ccw_device_nopath_notify, (void *)cdev);
queue_work(ccw_device_work, &cdev->private->kick_work);
dev_fsm_event(cdev, DEV_EVENT_NOTOPER);
else if (cdev->handler)
return;
}
if (cdev->handler)
cdev->handler(cdev, cdev->private->intparm,
ERR_PTR(-ETIMEDOUT));
sch = to_subchannel(cdev->dev.parent);
if (!sch->lpm) {
PREPARE_WORK(&cdev->private->kick_work,
ccw_device_nopath_notify, (void *)cdev);
queue_work(ccw_device_work, &cdev->private->kick_work);
} else if (cdev->private->flags.doverify)
/* Start delayed path verification. */
ccw_device_online_verify(cdev, 0);
}
static void
@@ -831,9 +953,24 @@ device_trigger_reprobe(struct subchannel *sch)
if (!sch->dev.driver_data)
return;
cdev = sch->dev.driver_data;
if (cdev->private->state != DEV_STATE_DISCONNECTED)
return;
spin_lock_irqsave(&sch->lock, flags);
if (cdev->private->state != DEV_STATE_DISCONNECTED) {
spin_unlock_irqrestore(&sch->lock, flags);
return;
}
/* Update some values. */
if (stsch(sch->irq, &sch->schib)) {
spin_unlock_irqrestore(&sch->lock, flags);
return;
}
/*
* The pim, pam, pom values may not be accurate, but they are the best
* we have before performing device selection :/
*/
sch->lpm = sch->schib.pmcw.pim &
sch->schib.pmcw.pam &
sch->schib.pmcw.pom &
sch->opm;
/* Re-set some bits in the pmcw that were lost. */
sch->schib.pmcw.isc = 3;
sch->schib.pmcw.csense = 1;
......
@@ -195,7 +195,7 @@ __ccw_device_sense_id_start(struct ccw_device *cdev)
/* Try on every path. */
ret = -ENODEV;
while (cdev->private->imask != 0) {
if ((sch->lpm & cdev->private->imask) != 0 &&
if ((sch->opm & cdev->private->imask) != 0 &&
cdev->private->iretry > 0) {
cdev->private->iretry--;
ret = cio_start (sch, cdev->private->iccws,
@@ -246,22 +246,26 @@ ccw_device_check_sense_id(struct ccw_device *cdev)
/* Check the error cases. */
if (irb->scsw.fctl & (SCSW_FCTL_HALT_FUNC | SCSW_FCTL_CLEAR_FUNC))
return -ETIME;
if (irb->esw.esw0.erw.cons &&
(irb->ecw[0] & (SNS0_CMD_REJECT | SNS0_INTERVENTION_REQ))) {
if (irb->esw.esw0.erw.cons && (irb->ecw[0] & SNS0_CMD_REJECT)) {
/*
* if the device doesn't support the SenseID
* command further retries wouldn't help ...
* NB: We don't check here for intervention required like we
* did before, because tape devices with no tape inserted
* may present this status *in conjunction with* the
* sense id information. So, for intervention required,
* we use the "whack it until it talks" strategy...
*/
CIO_MSG_EVENT(2, "SenseID : device %s on Subchannel %s "
"reports cmd reject or intervention required\n",
cdev->dev.bus_id, sch->dev.bus_id);
CIO_MSG_EVENT(2, "SenseID : device %04x on Subchannel %04x "
"reports cmd reject\n",
cdev->private->devno, sch->irq);
return -EOPNOTSUPP;
}
if (irb->esw.esw0.erw.cons) {
CIO_MSG_EVENT(2, "SenseID : UC on dev %s, "
CIO_MSG_EVENT(2, "SenseID : UC on dev %04x, "
"lpum %02X, cnt %02d, sns :"
" %02X%02X%02X%02X %02X%02X%02X%02X ...\n",
cdev->dev.bus_id,
cdev->private->devno,
irb->esw.esw0.sublog.lpum,
irb->esw.esw0.erw.scnt,
irb->ecw[0], irb->ecw[1],
@@ -271,15 +275,18 @@ ccw_device_check_sense_id(struct ccw_device *cdev)
return -EAGAIN;
}
if (irb->scsw.cc == 3) {
CIO_MSG_EVENT(2, "SenseID : path %02X for device %s on "
"subchannel %s is 'not operational'\n",
sch->orb.lpm, cdev->dev.bus_id, sch->dev.bus_id);
if ((sch->orb.lpm &
sch->schib.pmcw.pim & sch->schib.pmcw.pam) != 0)
CIO_MSG_EVENT(2, "SenseID : path %02X for device %04x on"
" subchannel %04x is 'not operational'\n",
sch->orb.lpm, cdev->private->devno,
sch->irq);
return -EACCES;
}
/* Hmm, whatever happened, try again. */
CIO_MSG_EVENT(2, "SenseID : start_IO() for device %s on "
"subchannel %s returns status %02X%02X\n",
cdev->dev.bus_id, sch->dev.bus_id,
CIO_MSG_EVENT(2, "SenseID : start_IO() for device %04x on "
"subchannel %04x returns status %02X%02X\n",
cdev->private->devno, sch->irq,
irb->scsw.dstat, irb->scsw.cstat);
return -EAGAIN;
}
......
@@ -154,6 +154,7 @@ ccw_device_call_handler(struct ccw_device *cdev)
{
struct subchannel *sch;
unsigned int stctl;
int ending_status;
sch = to_subchannel(cdev->dev.parent);
@@ -166,7 +167,10 @@ ccw_device_call_handler(struct ccw_device *cdev)
* - unsolicited interrupts
*/
stctl = cdev->private->irb.scsw.stctl;
if (sch->schib.scsw.actl != 0 &&
ending_status = (stctl & SCSW_STCTL_SEC_STATUS) ||
(stctl == (SCSW_STCTL_ALERT_STATUS | SCSW_STCTL_STATUS_PEND)) ||
(stctl == SCSW_STCTL_STATUS_PEND);
if (!ending_status &&
!cdev->private->options.repall &&
!(stctl & SCSW_STCTL_INTER_STATUS) &&
!(cdev->private->options.fast &&
@@ -469,6 +473,7 @@ ccw_device_stlck(struct ccw_device *cdev)
cio_disable_subchannel(sch); //FIXME: return code?
goto out_unlock;
}
cdev->private->irb.scsw.actl |= SCSW_ACTL_START_PEND;
spin_unlock_irqrestore(&sch->lock, flags);
wait_event(cdev->private->wait_q, cdev->private->irb.scsw.actl == 0);
spin_lock_irqsave(&sch->lock, flags);
......
@@ -55,10 +55,10 @@ __ccw_device_sense_pgid_start(struct ccw_device *cdev)
/* ret is 0, -EBUSY, -EACCES or -ENODEV */
if (ret != -EACCES)
return ret;
CIO_MSG_EVENT(2, "SNID - Device %s on Subchannel "
"%s, lpm %02X, became 'not "
CIO_MSG_EVENT(2, "SNID - Device %04x on Subchannel "
"%04x, lpm %02X, became 'not "
"operational'\n",
cdev->dev.bus_id, sch->dev.bus_id,
cdev->private->devno, sch->irq,
cdev->private->imask);
}
@@ -105,10 +105,10 @@ __ccw_device_check_sense_pgid(struct ccw_device *cdev)
return -EOPNOTSUPP;
}
if (irb->esw.esw0.erw.cons) {
CIO_MSG_EVENT(2, "SNID - device %s, unit check, "
CIO_MSG_EVENT(2, "SNID - device %04x, unit check, "
"lpum %02X, cnt %02d, sns : "
"%02X%02X%02X%02X %02X%02X%02X%02X ...\n",
cdev->dev.bus_id,
cdev->private->devno,
irb->esw.esw0.sublog.lpum,
irb->esw.esw0.erw.scnt,
irb->ecw[0], irb->ecw[1],
@@ -118,15 +118,15 @@ __ccw_device_check_sense_pgid(struct ccw_device *cdev)
return -EAGAIN;
}
if (irb->scsw.cc == 3) {
CIO_MSG_EVENT(2, "SNID - Device %s on Subchannel "
"%s, lpm %02X, became 'not operational'\n",
cdev->dev.bus_id, sch->dev.bus_id, sch->orb.lpm);
CIO_MSG_EVENT(2, "SNID - Device %04x on Subchannel "
"%04x, lpm %02X, became 'not operational'\n",
cdev->private->devno, sch->irq, sch->orb.lpm);
return -EACCES;
}
if (cdev->private->pgid.inf.ps.state2 == SNID_STATE2_RESVD_ELSE) {
CIO_MSG_EVENT(2, "SNID - Device %s on Subchannel %s "
CIO_MSG_EVENT(2, "SNID - Device %04x on Subchannel %04x "
"is reserved by someone else\n",
cdev->dev.bus_id, sch->dev.bus_id);
cdev->private->devno, sch->irq);
return -EUSERS;
}
return 0;
@@ -233,9 +233,9 @@ __ccw_device_do_pgid(struct ccw_device *cdev, __u8 func)
/* PGID command failed on this path. Switch it off. */
sch->lpm &= ~cdev->private->imask;
sch->vpm &= ~cdev->private->imask;
CIO_MSG_EVENT(2, "SPID - Device %s on Subchannel "
"%s, lpm %02X, became 'not operational'\n",
cdev->dev.bus_id, sch->dev.bus_id, cdev->private->imask);
CIO_MSG_EVENT(2, "SPID - Device %04x on Subchannel "
"%04x, lpm %02X, became 'not operational'\n",
cdev->private->devno, sch->irq, cdev->private->imask);
return ret;
}
@@ -257,9 +257,9 @@ __ccw_device_check_pgid(struct ccw_device *cdev)
if (irb->ecw[0] & SNS0_CMD_REJECT)
return -EOPNOTSUPP;
/* Hmm, whatever happened, try again. */
CIO_MSG_EVENT(2, "SPID - device %s, unit check, cnt %02d, "
CIO_MSG_EVENT(2, "SPID - device %04x, unit check, cnt %02d, "
"sns : %02X%02X%02X%02X %02X%02X%02X%02X ...\n",
cdev->dev.bus_id, irb->esw.esw0.erw.scnt,
cdev->private->devno, irb->esw.esw0.erw.scnt,
irb->ecw[0], irb->ecw[1],
irb->ecw[2], irb->ecw[3],
irb->ecw[4], irb->ecw[5],
@@ -267,9 +267,9 @@ __ccw_device_check_pgid(struct ccw_device *cdev)
return -EAGAIN;
}
if (irb->scsw.cc == 3) {
CIO_MSG_EVENT(2, "SPID - Device %s on Subchannel "
"%s, lpm %02X, became 'not operational'\n",
cdev->dev.bus_id, sch->dev.bus_id,
CIO_MSG_EVENT(2, "SPID - Device %04x on Subchannel "
"%04x, lpm %02X, became 'not operational'\n",
cdev->private->devno, sch->irq,
cdev->private->imask);
return -EACCES;
}
......
@@ -62,8 +62,8 @@ ccw_device_path_notoper(struct ccw_device *cdev)
sch = to_subchannel(cdev->dev.parent);
stsch (sch->irq, &sch->schib);
CIO_MSG_EVENT(0, "%s(%s) - path(s) %02x are "
"not operational \n", __FUNCTION__, sch->dev.bus_id,
CIO_MSG_EVENT(0, "%s(%04x) - path(s) %02x are "
"not operational \n", __FUNCTION__, sch->irq,
sch->schib.pmcw.pnom);
sch->lpm &= ~sch->schib.pmcw.pnom;
@@ -228,8 +228,8 @@ ccw_device_accumulate_irb(struct ccw_device *cdev, struct irb *irb)
cdev_irb->scsw.key = irb->scsw.key;
/* Copy suspend control bit. */
cdev_irb->scsw.sctl = irb->scsw.sctl;
/* Copy deferred condition code. */
cdev_irb->scsw.cc = irb->scsw.cc;
/* Accumulate deferred condition code. */
cdev_irb->scsw.cc |= irb->scsw.cc;
/* Copy ccw format bit. */
cdev_irb->scsw.fmt = irb->scsw.fmt;
/* Copy prefetch bit. */
......
@@ -6,8 +6,8 @@
* version 2
*
* Copyright 2000,2002 IBM Corporation
* Author(s): Utz Bacher <utz.bacher@de.ibm.com>
* Cornelia Huck <cohuck@de.ibm.com>
* Author(s): Utz Bacher <utz.bacher@de.ibm.com>
* 2.6 cio integration by Cornelia Huck <cohuck@de.ibm.com>
*
* Restriction: only 63 iqdio subchannels can have their own indicator;
* after that, subsequent subchannels share one indicator
@@ -56,7 +56,7 @@
#include "ioasm.h"
#include "chsc.h"
#define VERSION_QDIO_C "$Revision: 1.67 $"
#define VERSION_QDIO_C "$Revision: 1.74 $"
/****************** MODULE PARAMETER VARIABLES ********************/
MODULE_AUTHOR("Utz Bacher <utz.bacher@de.ibm.com>");
@@ -76,6 +76,7 @@ static struct qdio_perf_stats perf_stats;
#endif /* QDIO_PERFORMANCE_STATS */
static int hydra_thinints;
static int omit_svs;
static int indicator_used[INDICATORS_PER_CACHELINE];
static __u32 * volatile indicators;
@@ -114,7 +115,7 @@ qdio_min(int a,int b)
static inline volatile __u64
qdio_get_micros(void)
{
return (get_clock() >> 12); /* time>>12 is microseconds */
return (get_clock() >> 10); /* time>>12 is microseconds */
}
/*
@@ -530,7 +531,6 @@ qdio_has_outbound_q_moved(struct qdio_q *q)
if ( (i!=GET_SAVED_FRONTIER(q)) ||
(q->error_status_flags&QDIO_STATUS_LOOK_FOR_ERROR) ) {
SAVE_FRONTIER(q,i);
SAVE_TIMESTAMP(q);
QDIO_DBF_TEXT4(0,trace,"oqhasmvd");
QDIO_DBF_HEX4(0,trace,&q,sizeof(void*));
return 1;
@@ -596,8 +596,8 @@ qdio_kick_outbound_handler(struct qdio_q *q)
q->error_status_flags=0;
}
static void
qdio_outbound_processing(struct qdio_q *q)
static inline void
__qdio_outbound_processing(struct qdio_q *q)
{
QDIO_DBF_TEXT4(0,trace,"qoutproc");
QDIO_DBF_HEX4(0,trace,&q,sizeof(void*));
@@ -639,6 +639,12 @@ qdio_outbound_processing(struct qdio_q *q)
qdio_release_q(q);
}
static void
qdio_outbound_processing(struct qdio_q *q)
{
__qdio_outbound_processing(q);
}
/************************* INBOUND ROUTINES *******************************/
@@ -997,7 +1003,7 @@ __tiqdio_inbound_processing(struct qdio_q *q, int spare_ind_was_set)
perf_stats.tl_runs--;
#endif /* QDIO_PERFORMANCE_STATS */
if (!qdio_is_outbound_q_done(oq))
qdio_outbound_processing(oq);
__qdio_outbound_processing(oq);
}
}
@@ -1024,8 +1030,8 @@ tiqdio_inbound_processing(struct qdio_q *q)
__tiqdio_inbound_processing(q, atomic_read(&spare_indicator_usecount));
}
static void
qdio_inbound_processing(struct qdio_q *q)
static inline void
__qdio_inbound_processing(struct qdio_q *q)
{
int q_laps=0;
@@ -1067,6 +1073,12 @@ qdio_inbound_processing(struct qdio_q *q)
qdio_release_q(q);
}
static void
qdio_inbound_processing(struct qdio_q *q)
{
__qdio_inbound_processing(q);
}
/************************* MAIN ROUTINES *******************************/
#ifdef QDIO_USE_PROCESSING_STATE
@@ -1211,8 +1223,7 @@ qdio_release_irq_memory(struct qdio_irq *irq_ptr)
kfree(irq_ptr->output_qs[i]);
}
if (irq_ptr->qdr)
kfree(irq_ptr->qdr);
kfree(irq_ptr->qdr);
kfree(irq_ptr);
}
@@ -1493,8 +1504,11 @@ tiqdio_thinint_handler(void)
perf_stats.thinints++;
perf_stats.start_time_inbound=NOW;
#endif /* QDIO_PERFORMANCE_STATS */
/* VM will do the SVS for us */
if (!MACHINE_IS_VM)
/* SVS only when needed:
* issue SVS to benefit from iqdio interrupt avoidance
* (SVS clears AISOI)*/
if (!omit_svs)
tiqdio_clear_global_summary();
tiqdio_inbound_checks();
@@ -1554,7 +1568,7 @@ qdio_handle_pci(struct qdio_irq *irq_ptr)
#ifdef QDIO_PERFORMANCE_STATS
perf_stats.tl_runs--;
#endif /* QDIO_PERFORMANCE_STATS */
qdio_inbound_processing(q);
__qdio_inbound_processing(q);
}
}
if (!irq_ptr->hydra_gives_outbound_pcis)
@@ -1568,7 +1582,7 @@ qdio_handle_pci(struct qdio_irq *irq_ptr)
continue;
if (!irq_ptr->sync_done_on_outb_pcis)
SYNC_MEMORY;
qdio_outbound_processing(q);
__qdio_outbound_processing(q);
}
}
@@ -1700,7 +1714,6 @@ qdio_handler(struct ccw_device *cdev, unsigned long intparm, struct irb *irb)
case -EIO:
QDIO_PRINT_ERR("i/o error on device %s\n",
cdev->dev.bus_id);
//FIXME: hm?
return;
case -ETIMEDOUT:
qdio_timeout_handler(cdev);
@@ -1817,12 +1830,13 @@ qdio_check_siga_needs(int sch)
u8 ocnt;
} *ssqd_area;
/* FIXME make this GFP_KERNEL */
ssqd_area = (void *)get_zeroed_page(GFP_ATOMIC | GFP_DMA);
ssqd_area = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA);
if (!ssqd_area) {
QDIO_PRINT_WARN("Could not get memory for chsc. Using all " \
"SIGAs for sch x%x.\n", sch);
return -1; /* all flags set */
return CHSC_FLAG_SIGA_INPUT_NECESSARY |
CHSC_FLAG_SIGA_OUTPUT_NECESSARY |
CHSC_FLAG_SIGA_SYNC_NECESSARY; /* all flags set */
}
ssqd_area->request = (struct chsc_header) {
.length = 0x0010,
@@ -1838,7 +1852,9 @@ qdio_check_siga_needs(int sch)
QDIO_PRINT_WARN("CHSC returned cc %i. Using all " \
"SIGAs for sch x%x.\n",
result,sch);
qdioac = -1; /* all flags set */
qdioac = CHSC_FLAG_SIGA_INPUT_NECESSARY |
CHSC_FLAG_SIGA_OUTPUT_NECESSARY |
CHSC_FLAG_SIGA_SYNC_NECESSARY; /* all flags set */
goto out;
}
......@@ -1846,7 +1862,9 @@ qdio_check_siga_needs(int sch)
QDIO_PRINT_WARN("response upon checking SIGA needs " \
"is 0x%x. Using all SIGAs for sch x%x.\n",
ssqd_area->response.code, sch);
qdioac = -1; /* all flags set */
qdioac = CHSC_FLAG_SIGA_INPUT_NECESSARY |
CHSC_FLAG_SIGA_OUTPUT_NECESSARY |
CHSC_FLAG_SIGA_SYNC_NECESSARY; /* all flags set */
goto out;
}
if (!(ssqd_area->flags & CHSC_FLAG_QDIO_CAPABILITY) ||
......@@ -1930,6 +1948,13 @@ tiqdio_check_chsc_availability(void)
== 0x10000000);
sprintf(dbf_text,"hydrati%1x", hydra_thinints);
QDIO_DBF_TEXT0(0,setup,dbf_text);
/* Check for the AIF time delay disablement facility (bit 56).
* If installed, omit SVS even under LPAR (good point by rick again) */
omit_svs = ((scsc_area->general_char[1] & 0x00000080)
== 0x00000080);
sprintf(dbf_text,"omitsvs%1x", omit_svs);
QDIO_DBF_TEXT0(0,setup,dbf_text);
exit:
free_page ((unsigned long) scsc_area);
return result;
......@@ -2122,7 +2147,7 @@ qdio_shutdown(struct ccw_device *cdev, int how)
int result = 0;
unsigned long flags;
int timeout;
char dbf_text[15]="12345678";
char dbf_text[15];
irq_ptr = cdev->private->qdio_data;
if (!irq_ptr)
......@@ -2152,13 +2177,6 @@ qdio_shutdown(struct ccw_device *cdev, int how)
use_count),
QDIO_NO_USE_COUNT_TIMEOUT);
if (atomic_read(&irq_ptr->input_qs[i]->use_count))
/*
* FIXME:
* nobody cares about such retval,
* does a timeout make sense at all?
* can this case be eliminated?
* mutex should be released anyway, shouldn't it?
*/
result=-EINPROGRESS;
}
......@@ -2170,13 +2188,6 @@ qdio_shutdown(struct ccw_device *cdev, int how)
use_count),
QDIO_NO_USE_COUNT_TIMEOUT);
if (atomic_read(&irq_ptr->output_qs[i]->use_count))
/*
* FIXME:
* nobody cares about such retval,
* does a timeout make sense at all?
* can this case be eliminated?
* mutex should be released anyway, shouldn't it?
*/
result=-EINPROGRESS;
}
......@@ -2260,11 +2271,10 @@ qdio_free(struct ccw_device *cdev)
static inline void
qdio_allocate_do_dbf(struct qdio_initialize *init_data)
{
char dbf_text[20]; /* if a printf would print out more than 8 chars */
char dbf_text[20]; /* if a printf printed out more than 8 chars */
sprintf(dbf_text,"qfmt:%x",init_data->q_format);
QDIO_DBF_TEXT0(0,setup,dbf_text);
QDIO_DBF_TEXT0(0,setup,init_data->adapter_name);
QDIO_DBF_HEX0(0,setup,init_data->adapter_name,8);
sprintf(dbf_text,"qpff%4x",init_data->qib_param_field_format);
QDIO_DBF_TEXT0(0,setup,dbf_text);
......@@ -2510,7 +2520,6 @@ qdio_allocate(struct qdio_initialize *init_data)
irq_ptr->qdr=kmalloc(sizeof(struct qdr), GFP_KERNEL | GFP_DMA);
if (!(irq_ptr->qdr)) {
kfree(irq_ptr->qdr);
kfree(irq_ptr);
QDIO_PRINT_ERR("kmalloc of irq_ptr->qdr failed!\n");
return -ENOMEM;
......@@ -2660,8 +2669,6 @@ int qdio_fill_irq(struct qdio_initialize *init_data)
irq_ptr->original_int_handler = init_data->cdev->handler;
init_data->cdev->handler = qdio_handler;
up(&irq_ptr->setting_up_sema);
return 0;
}
......@@ -2692,7 +2699,7 @@ qdio_establish(struct qdio_initialize *init_data)
result = tiqdio_set_subchannel_ind(irq_ptr,0);
if (result) {
up(&irq_ptr->setting_up_sema);
qdio_cleanup(cdev, QDIO_FLAG_CLEANUP_USING_CLEAR);
qdio_shutdown(cdev, QDIO_FLAG_CLEANUP_USING_CLEAR);
return result;
}
tiqdio_set_delay_target(irq_ptr,TIQDIO_DELAY_TARGET);
......@@ -2740,23 +2747,23 @@ qdio_establish(struct qdio_initialize *init_data)
return result;
}
/* FIXME: don't wait forever if hardware is broken */
wait_event(cdev->private->wait_q,
irq_ptr->state == QDIO_IRQ_STATE_ESTABLISHED ||
irq_ptr->state == QDIO_IRQ_STATE_ERR);
wait_event_interruptible_timeout(cdev->private->wait_q,
irq_ptr->state == QDIO_IRQ_STATE_ESTABLISHED ||
irq_ptr->state == QDIO_IRQ_STATE_ERR,
QDIO_ESTABLISH_TIMEOUT);
if (irq_ptr->state == QDIO_IRQ_STATE_ESTABLISHED)
result = 0;
else {
up(&irq_ptr->setting_up_sema);
qdio_shutdown(cdev, QDIO_FLAG_CLEANUP_USING_CLEAR);
result = -EIO;
return -EIO;
}
if (MACHINE_IS_VM)
irq_ptr->qdioac=qdio_check_siga_needs(irq_ptr->irq);
else
irq_ptr->qdioac=CHSC_FLAG_SIGA_INPUT_NECESSARY
| CHSC_FLAG_SIGA_OUTPUT_NECESSARY;
irq_ptr->qdioac=qdio_check_siga_needs(irq_ptr->irq);
/* if this gets set once, we're running under VM and can omit SVSes */
if (irq_ptr->qdioac&CHSC_FLAG_SIGA_SYNC_NECESSARY)
omit_svs=1;
sprintf(dbf_text,"qdioac%2x",irq_ptr->qdioac);
QDIO_DBF_TEXT2(0,setup,dbf_text);
......@@ -2864,7 +2871,9 @@ qdio_activate(struct ccw_device *cdev, int flags)
switch (irq_ptr->state) {
case QDIO_IRQ_STATE_STOPPED:
case QDIO_IRQ_STATE_ERR:
up(&irq_ptr->setting_up_sema);
qdio_shutdown(cdev, QDIO_FLAG_CLEANUP_USING_CLEAR);
down(&irq_ptr->setting_up_sema);
result = -EIO;
break;
default:
......@@ -2878,7 +2887,7 @@ qdio_activate(struct ccw_device *cdev, int flags)
}
/* buffers filled forwards again to make Rick happy */
static void
static inline void
qdio_do_qdio_fill_input(struct qdio_q *q, unsigned int qidx,
unsigned int count, struct qdio_buffer *buffers)
{
......@@ -2972,7 +2981,7 @@ do_qdio_handle_outbound(struct qdio_q *q, unsigned int callflags,
while (count--)
qdio_kick_outbound_q(q);
qdio_outbound_processing(q);
__qdio_outbound_processing(q);
} else {
/* under VM, we do a SIGA sync unconditionally */
SYNC_MEMORY;
......@@ -2998,7 +3007,7 @@ do_qdio_handle_outbound(struct qdio_q *q, unsigned int callflags,
* the upper layer module could do a lot of
* traffic in that time
*/
qdio_outbound_processing(q);
__qdio_outbound_processing(q);
}
#ifdef QDIO_PERFORMANCE_STATS
......
......@@ -11,6 +11,7 @@
#include <linux/config.h>
#include <linux/init.h>
#include <linux/sched.h>
#include <linux/errno.h>
#include <asm/lowcore.h>
......@@ -19,12 +20,14 @@
#define DBG printk
// #define DBG(args,...) do {} while (0);
static struct semaphore m_sem;
static struct semaphore s_sem;
extern void css_process_crw(int);
extern void chsc_process_crw(void);
extern void chp_process_crw(int, int);
extern int css_process_crw(int);
extern int chsc_process_crw(void);
extern int chp_process_crw(int, int);
extern void css_reiterate_subchannels(void);
extern void css_trigger_slow_path(void);
static void
s390_handle_damage(char *msg)
......@@ -36,6 +39,21 @@ s390_handle_damage(char *msg)
disabled_wait((unsigned long) __builtin_return_address(0));
}
static int
s390_mchk_slow_path(void *param)
{
struct semaphore *sem;
sem = (struct semaphore *)param;
/* Set a nice name. */
daemonize("kslowcrw");
repeat:
down_interruptible(sem);
css_trigger_slow_path();
goto repeat;
return 0;
}
/*
* Retrieve CRWs and call function to handle event.
*
......@@ -45,15 +63,15 @@ static int
s390_collect_crw_info(void *param)
{
struct crw crw;
int ccode;
int ccode, ret, slow;
struct semaphore *sem;
sem = (struct semaphore *)param;
/* Set a nice name. */
daemonize("kmcheck");
repeat:
down_interruptible(sem);
slow = 0;
while (1) {
ccode = stcrw(&crw);
if (ccode != 0)
......@@ -66,12 +84,15 @@ s390_collect_crw_info(void *param)
if (crw.oflw) {
pr_debug("%s: crw overflow detected!\n", __FUNCTION__);
css_reiterate_subchannels();
slow = 1;
continue;
}
switch (crw.rsc) {
case CRW_RSC_SCH:
pr_debug("source is subchannel %04X\n", crw.rsid);
css_process_crw (crw.rsid);
ret = css_process_crw (crw.rsid);
if (ret == -EAGAIN)
slow = 1;
break;
case CRW_RSC_MONITOR:
pr_debug("source is monitoring facility\n");
......@@ -80,28 +101,36 @@ s390_collect_crw_info(void *param)
pr_debug("source is channel path %02X\n", crw.rsid);
switch (crw.erc) {
case CRW_ERC_IPARM: /* Path has come. */
chp_process_crw(crw.rsid, 1);
ret = chp_process_crw(crw.rsid, 1);
break;
case CRW_ERC_PERRI: /* Path has gone. */
chp_process_crw(crw.rsid, 0);
case CRW_ERC_PERRN:
ret = chp_process_crw(crw.rsid, 0);
break;
default:
pr_debug("Don't know how to handle erc=%x\n",
crw.erc);
ret = 0;
}
if (ret == -EAGAIN)
slow = 1;
break;
case CRW_RSC_CONFIG:
pr_debug("source is configuration-alert facility\n");
break;
case CRW_RSC_CSS:
pr_debug("source is channel subsystem\n");
chsc_process_crw();
ret = chsc_process_crw();
if (ret == -EAGAIN)
slow = 1;
break;
default:
pr_debug("unknown source\n");
break;
}
}
if (slow)
up(&s_sem);
goto repeat;
return 0;
}
......@@ -140,7 +169,7 @@ s390_do_machine_check(void)
"check\n");
if (mci->cp) /* channel report word pending */
up(&s_sem);
up(&m_sem);
#ifdef CONFIG_MACHCHK_WARNING
/*
......@@ -172,6 +201,7 @@ s390_do_machine_check(void)
static int
machine_check_init(void)
{
init_MUTEX_LOCKED(&m_sem);
init_MUTEX_LOCKED( &s_sem );
ctl_clear_bit(14, 25); /* disable damage MCH */
ctl_set_bit(14, 26); /* enable degradation MCH */
......@@ -195,7 +225,8 @@ arch_initcall(machine_check_init);
static int __init
machine_check_crw_init (void)
{
kernel_thread(s390_collect_crw_info, &s_sem, CLONE_FS|CLONE_FILES);
kernel_thread(s390_collect_crw_info, &m_sem, CLONE_FS|CLONE_FILES);
kernel_thread(s390_mchk_slow_path, &s_sem, CLONE_FS|CLONE_FILES);
ctl_set_bit(14, 28); /* enable channel report MCH */
return 0;
}
......
......@@ -10,6 +10,7 @@ struct ccwgroup_device {
CCWGROUP_OFFLINE,
CCWGROUP_ONLINE,
} state;
atomic_t onoff;
unsigned int count; /* number of attached slave devices */
struct device dev; /* master device */
struct ccw_device *cdev[0]; /* variable number, allocate as needed */
......
......@@ -17,6 +17,7 @@
#include <linux/interrupt.h>
#include <asm/cio.h>
#include <asm/ccwdev.h>
#define QDIO_NAME "qdio "
......