Commit d1b858cb authored by Linus Torvalds

Merge http://lia64.bkbits.net/to-linus-2.5

into home.osdl.org:/home/torvalds/v2.5/linux
parents 3483a9c3 b60485d6
@@ -828,8 +828,8 @@ W: http://www.xenotime.net/linux/linux.html
W: http://www.linux-usb.org
D: Linux-USB subsystem, USB core/UHCI/printer/storage drivers
D: x86 SMP, ACPI, bootflag hacking
S: 15275 SW Koll Parkway, Suite H
S: Beaverton, Oregon 97006
S: 12725 SW Millikan Way, Suite 400
S: Beaverton, Oregon 97005
S: USA
N: Bob Dunlop
@@ -2228,8 +2228,8 @@ N: Patrick Mochel
E: mochel@osdl.org
E: mochelp@infinity.powertie.org
D: PCI Power Management, ACPI work
S: 15275 SW Koll Parkway, Suite H
S: Beaverton, OR 97006
S: 12725 SW Millikan Way, Suite 400
S: Beaverton, Oregon 97005
S: USA
N: Eberhard Moenkeberg
@@ -3046,12 +3046,10 @@ S: Sevilla 41005
S: Spain
N: Linus Torvalds
E: torvalds@transmeta.com
W: http://www.cs.helsinki.fi/Linus.Torvalds
P: 1024/A86B35C5 96 54 50 29 EC 11 44 7A BE 67 3C 24 03 13 62 C8
E: torvalds@osdl.org
D: Original kernel hacker
S: 3990 Freedom Circle
S: Santa Clara, California 95054
S: 12725 SW Millikan Way, Suite 400
S: Beaverton, Oregon 97005
S: USA
N: Marcelo W. Tosatti
@@ -41,7 +41,7 @@ Linux 2.4:
Linux 2.5:
The same rules apply as 2.4 except that you should follow linux-kernel
to track changes in API's. The final contact point for Linux 2.5
submissions is Linus Torvalds <torvalds@transmeta.com>.
submissions is Linus Torvalds <torvalds@osdl.org>.
What Criteria Determine Acceptance
----------------------------------
@@ -105,7 +105,7 @@ linux-kernel@vger.kernel.org. Most kernel developers monitor this
e-mail list, and can comment on your changes.
Linus Torvalds is the final arbiter of all changes accepted into the
Linux kernel. His e-mail address is torvalds@transmeta.com. He gets
Linux kernel. His e-mail address is <torvalds@osdl.org>. He gets
a lot of e-mail, so typically you should do your best to -avoid- sending
him e-mail.
@@ -14,8 +14,8 @@ for further details. This implementation is on-disk compatible
with the IRIX version of XFS.
Options
=======
Mount Options
=============
When mounting an XFS filesystem, the following options are accepted.
@@ -107,3 +107,66 @@ When mounting an XFS filesystem, the following options are accepted.
Don't check for double mounted file systems using the file system uuid.
This is useful to mount LVM snapshot volumes.
sysctls
=======
The following sysctls are available for the XFS filesystem:
fs.xfs.stats_clear (Min: 0 Default: 0 Max: 1)
Setting this to "1" clears accumulated XFS statistics
in /proc/fs/xfs/stat. It is then immediately reset to "0".
fs.xfs.sync_interval (Min: HZ Default: 30*HZ Max: 60*HZ)
The interval at which the xfssyncd thread for xfs filesystems
flushes metadata out to disk. This thread will flush log
activity out, and do some processing on unlinked inodes.
fs.xfs.error_level (Min: 0 Default: 3 Max: 11)
A volume knob for error reporting when internal errors occur.
This will generate detailed messages & backtraces for filesystem
shutdowns, for example. Current threshold values are:
XFS_ERRLEVEL_OFF: 0
XFS_ERRLEVEL_LOW: 1
XFS_ERRLEVEL_HIGH: 5
fs.xfs.panic_mask (Min: 0 Default: 0 Max: 127)
Causes certain error conditions to call BUG(). Value is a bitmask;
OR together the tags which represent errors which should cause panics:
XFS_NO_PTAG 0LL
XFS_PTAG_IFLUSH 0x0000000000000001LL
XFS_PTAG_LOGRES 0x0000000000000002LL
XFS_PTAG_AILDELETE 0x0000000000000004LL
XFS_PTAG_ERROR_REPORT 0x0000000000000008LL
XFS_PTAG_SHUTDOWN_CORRUPT 0x0000000000000010LL
XFS_PTAG_SHUTDOWN_IOERROR 0x0000000000000020LL
XFS_PTAG_SHUTDOWN_LOGERROR 0x0000000000000040LL
This option is intended for debugging only.
fs.xfs.irix_symlink_mode (Min: 0 Default: 0 Max: 1)
Controls whether symlinks are created with mode 0777 (default)
or whether their mode is affected by the umask (irix mode).
fs.xfs.irix_sgid_inherit (Min: 0 Default: 0 Max: 1)
Controls files created in SGID directories.
If the group ID of the new file does not match the effective group
ID or one of the supplementary group IDs of the parent dir, the
ISGID bit is cleared if the irix_sgid_inherit compatibility sysctl
is set.
fs.xfs.restrict_chown (Min: 0 Default: 1 Max: 1)
Controls whether unprivileged users can use chown to "give away"
a file to another user.
vm.pagebuf.stats_clear (Min: 0 Default: 0 Max: 1)
Setting this to "1" clears accumulated pagebuf statistics
in /proc/fs/pagebuf/stat. It is then immediately reset to "0".
vm.pagebuf.flush_age (Min: 1*HZ Default: 15*HZ Max: 300*HZ)
The age at which dirty metadata buffers are flushed to disk.
vm.pagebuf.flush_int (Min: HZ/2 Default: HZ Max: 30*HZ)
The interval at which the list of dirty metadata buffers is
scanned.
@@ -52,7 +52,7 @@ latter, man ksymoops for details.
Full Information
----------------
From: Linus Torvalds <torvalds@transmeta.com>
From: Linus Torvalds <torvalds@osdl.org>
How to track down an Oops.. [originally a mail to linux-kernel]
@@ -21,7 +21,7 @@ difficult to maintain, add yourself with a patch if desired.
Bill Ryder <bryder@sgi.com>
Thomas Sailer <sailer@ife.ee.ethz.ch>
Gregory P. Smith <greg@electricrain.com>
Linus Torvalds <torvalds@transmeta.com>
Linus Torvalds <torvalds@osdl.org>
Roman Weissgaerber <weissg@vienna.at>
<Kazuki.Yasumatsu@fujixerox.co.jp>
@@ -225,10 +225,8 @@ IF SOMETHING GOES WRONG:
the file MAINTAINERS to see if there is a particular person associated
with the part of the kernel that you are having trouble with. If there
isn't anyone listed there, then the second best thing is to mail
them to me (torvalds@transmeta.com), and possibly to any other
relevant mailing-list or to the newsgroup. The mailing-lists are
useful especially for SCSI and networking problems, as I can't test
either of those personally anyway.
them to me (torvalds@osdl.org), and possibly to any other relevant
mailing-list or to the newsgroup.
- In all bug-reports, *please* tell what kernel you are talking about,
how to duplicate the problem, and what your setup is (use your common
@@ -385,11 +385,21 @@ static int try_to_identify (ide_drive_t *drive, u8 cmd)
int autoprobe = 0;
unsigned long cookie = 0;
if (IDE_CONTROL_REG && !hwif->irq) {
autoprobe = 1;
cookie = probe_irq_on();
/* enable device irq */
hwif->OUTB(drive->ctl, IDE_CONTROL_REG);
/*
* Disable device irq unless we need to
* probe for it. Otherwise we'll get spurious
* interrupts during the identify-phase that
* the irq handler isn't expecting.
*/
if (IDE_CONTROL_REG) {
u8 ctl = drive->ctl | 2;
if (!hwif->irq) {
autoprobe = 1;
cookie = probe_irq_on();
/* enable device irq */
ctl &= ~2;
}
hwif->OUTB(ctl, IDE_CONTROL_REG);
}
retval = actual_try_to_identify(drive, cmd);
@@ -844,7 +844,7 @@ ppp_start_xmit(struct sk_buff *skb, struct net_device *dev)
if (ns == 0)
goto outf;
skb_reserve(ns, dev->hard_header_len);
memcpy(skb_put(ns, skb->len), skb->data, skb->len);
skb_copy_bits(skb, 0, skb_put(ns, skb->len), skb->len);
kfree_skb(skb);
skb = ns;
}
@@ -1455,7 +1455,7 @@ ppp_receive_nonmp_frame(struct ppp *ppp, struct sk_buff *skb)
goto err;
}
skb_reserve(ns, 2);
memcpy(skb_put(ns, skb->len), skb->data, skb->len);
skb_copy_bits(skb, 0, skb_put(ns, skb->len), skb->len);
kfree_skb(skb);
skb = ns;
}
@@ -1826,7 +1826,7 @@ ppp_mp_reconstruct(struct ppp *ppp)
if (head != tail)
/* copy to a single skb */
for (p = head; p != tail->next; p = p->next)
memcpy(skb_put(skb, p->len), p->data, p->len);
skb_copy_bits(p, 0, skb_put(skb, p->len), p->len);
ppp->nextseq = tail->sequence + 1;
head = tail->next;
}
@@ -1521,6 +1521,12 @@ static int happy_meal_init(struct happy_meal *hp)
hme_write32(hp, bregs + BMAC_IGAP1, DEFAULT_IPG1);
hme_write32(hp, bregs + BMAC_IGAP2, DEFAULT_IPG2);
/* Make sure we can handle VLAN frames. */
hme_write32(hp, bregs + BMAC_TXMAX,
ETH_DATA_LEN + ETH_HLEN + 8);
hme_write32(hp, bregs + BMAC_RXMAX,
ETH_DATA_LEN + ETH_HLEN + 8);
/* Load up the MAC address and random seed. */
HMD(("rseed/macaddr, "));
@@ -38,6 +38,7 @@
#include "xfs.h"
#include "xfs_bmap_btree.h"
#include "xfs_bit.h"
#include "xfs_rw.h"
/*
* System memory size - used to scale certain data structures in XFS.
@@ -48,7 +49,18 @@ unsigned long xfs_physmem;
* Tunable XFS parameters. xfs_params is required even when CONFIG_SYSCTL=n,
* other XFS code uses these values.
*/
xfs_param_t xfs_params = { 0, 1, 0, 0, 3, 30 * HZ, 0 };
xfs_param_t xfs_params = {
/* MIN DFLT MAX */
restrict_chown: { 0, 1, 1 },
sgid_inherit: { 0, 0, 1 },
symlink_mode: { 0, 0, 1 },
panic_mask: { 0, 0, 127 },
error_level: { 0, 3, 11 },
sync_interval: { HZ, 30*HZ, 60*HZ },
stats_clear: { 0, 0, 1 },
};
/*
* Global system credential structure.
@@ -87,11 +87,15 @@ static inline void set_buffer_unwritten_io(struct buffer_head *bh)
bh->b_end_io = linvfs_unwritten_done;
}
#define restricted_chown xfs_params.restrict_chown
#define irix_sgid_inherit xfs_params.sgid_inherit
#define irix_symlink_mode xfs_params.symlink_mode
#define xfs_panic_mask xfs_params.panic_mask
#define xfs_error_level xfs_params.error_level
#define xfs_refcache_size xfs_params.refcache_size.val
#define xfs_refcache_purge_count xfs_params.refcache_purge.val
#define restricted_chown xfs_params.restrict_chown.val
#define irix_sgid_inherit xfs_params.sgid_inherit.val
#define irix_symlink_mode xfs_params.symlink_mode.val
#define xfs_panic_mask xfs_params.panic_mask.val
#define xfs_error_level xfs_params.error_level.val
#define xfs_syncd_interval xfs_params.sync_interval.val
#define xfs_stats_clear xfs_params.stats_clear.val
#define NBPP PAGE_SIZE
#define DPPSHFT (PAGE_SHIFT - 9)
@@ -71,6 +71,7 @@
#include <linux/namei.h>
#include <linux/init.h>
#include <linux/mount.h>
#include <linux/suspend.h>
STATIC struct quotactl_ops linvfs_qops;
STATIC struct super_operations linvfs_sops;
@@ -400,7 +401,10 @@ syncd(void *arg)
for (;;) {
set_current_state(TASK_INTERRUPTIBLE);
schedule_timeout(xfs_params.sync_interval);
schedule_timeout(xfs_syncd_interval);
/* swsusp */
if (current->flags & PF_FREEZE)
refrigerator(PF_IOTHREAD);
if (vfsp->vfs_flag & VFS_UMOUNT)
break;
if (vfsp->vfs_flag & VFS_RDONLY)
/*
* Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of version 2 of the GNU General Public License as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it would be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
*
* Further, this software is distributed without any warranty that it is
* free of the rightful claim of any third person regarding infringement
* or the like. Any license provided herein, whether implied or
* otherwise, applies only to this software file. Patent licenses, if
* any, provided herein do not apply to combinations of this program with
* other software, or any other product whatsoever.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write the Free Software Foundation, Inc., 59
* Temple Place - Suite 330, Boston MA 02111-1307, USA.
*
* Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
* Mountain View, CA 94043, or:
*
* http://www.sgi.com
*
* For further information regarding this notice, see:
*
* http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
*/
#include <xfs.h>
#define SYNCD_FLAGS (SYNC_FSDATA|SYNC_BDFLUSH|SYNC_ATTR)
int syncd(void *arg)
{
vfs_t *vfsp = (vfs_t *) arg;
int error;
daemonize("xfs_syncd");
vfsp->vfs_sync_task = current;
wmb();
wake_up(&vfsp->vfs_wait_sync_task);
for (;;) {
set_current_state(TASK_INTERRUPTIBLE);
schedule_timeout(xfs_params.sync_interval);
if (vfsp->vfs_flag & VFS_UMOUNT)
break;
if (vfsp->vfs_flag & VFS_RDONLY)
continue;
VFS_SYNC(vfsp, SYNCD_FLAGS, NULL, error);
}
vfsp->vfs_sync_task = NULL;
wmb();
wake_up(&vfsp->vfs_wait_sync_task);
return 0;
}
int
linvfs_start_syncd(vfs_t *vfsp)
{
int pid;
pid = kernel_thread(syncd, (void *) vfsp,
CLONE_VM | CLONE_FS | CLONE_FILES);
if (pid < 0)
return pid;
wait_event(vfsp->vfs_wait_sync_task, vfsp->vfs_sync_task);
return 0;
}
void
linvfs_stop_syncd(vfs_t *vfsp)
{
vfsp->vfs_flag |= VFS_UMOUNT;
wmb();
wake_up_process(vfsp->vfs_sync_task);
wait_event(vfsp->vfs_wait_sync_task, !vfsp->vfs_sync_task);
}
@@ -36,9 +36,6 @@
#include <linux/proc_fs.h>
STATIC ulong xfs_min[XFS_PARAM] = { 0, 0, 0, 0, 0, HZ, 0 };
STATIC ulong xfs_max[XFS_PARAM] = { 1, 1, 1, 127, 3, HZ * 60, 1 };
static struct ctl_table_header *xfs_table_header;
@@ -62,7 +59,7 @@ xfs_stats_clear_proc_handler(
vn_active = xfsstats.vn_active;
memset(&xfsstats, 0, sizeof(xfsstats));
xfsstats.vn_active = vn_active;
xfs_params.stats_clear = 0;
xfs_stats_clear = 0;
}
return ret;
@@ -70,35 +67,42 @@ xfs_stats_clear_proc_handler(
#endif /* CONFIG_PROC_FS */
STATIC ctl_table xfs_table[] = {
{XFS_RESTRICT_CHOWN, "restrict_chown", &xfs_params.restrict_chown,
{XFS_RESTRICT_CHOWN, "restrict_chown", &xfs_params.restrict_chown.val,
sizeof(ulong), 0644, NULL, &proc_doulongvec_minmax,
&sysctl_intvec, NULL, &xfs_min[0], &xfs_max[0]},
&sysctl_intvec, NULL,
&xfs_params.restrict_chown.min, &xfs_params.restrict_chown.max},
{XFS_SGID_INHERIT, "irix_sgid_inherit", &xfs_params.sgid_inherit,
{XFS_SGID_INHERIT, "irix_sgid_inherit", &xfs_params.sgid_inherit.val,
sizeof(ulong), 0644, NULL, &proc_doulongvec_minmax,
&sysctl_intvec, NULL, &xfs_min[1], &xfs_max[1]},
&sysctl_intvec, NULL,
&xfs_params.sgid_inherit.min, &xfs_params.sgid_inherit.max},
{XFS_SYMLINK_MODE, "irix_symlink_mode", &xfs_params.symlink_mode,
{XFS_SYMLINK_MODE, "irix_symlink_mode", &xfs_params.symlink_mode.val,
sizeof(ulong), 0644, NULL, &proc_doulongvec_minmax,
&sysctl_intvec, NULL, &xfs_min[2], &xfs_max[2]},
&sysctl_intvec, NULL,
&xfs_params.symlink_mode.min, &xfs_params.symlink_mode.max},
{XFS_PANIC_MASK, "panic_mask", &xfs_params.panic_mask,
{XFS_PANIC_MASK, "panic_mask", &xfs_params.panic_mask.val,
sizeof(ulong), 0644, NULL, &proc_doulongvec_minmax,
&sysctl_intvec, NULL, &xfs_min[3], &xfs_max[3]},
&sysctl_intvec, NULL,
&xfs_params.panic_mask.min, &xfs_params.panic_mask.max},
{XFS_ERRLEVEL, "error_level", &xfs_params.error_level,
{XFS_ERRLEVEL, "error_level", &xfs_params.error_level.val,
sizeof(ulong), 0644, NULL, &proc_doulongvec_minmax,
&sysctl_intvec, NULL, &xfs_min[4], &xfs_max[4]},
&sysctl_intvec, NULL,
&xfs_params.error_level.min, &xfs_params.error_level.max},
{XFS_SYNC_INTERVAL, "sync_interval", &xfs_params.sync_interval,
{XFS_SYNC_INTERVAL, "sync_interval", &xfs_params.sync_interval.val,
sizeof(ulong), 0644, NULL, &proc_doulongvec_minmax,
&sysctl_intvec, NULL, &xfs_min[5], &xfs_max[5]},
&sysctl_intvec, NULL,
&xfs_params.sync_interval.min, &xfs_params.sync_interval.max},
/* please keep this the last entry */
#ifdef CONFIG_PROC_FS
{XFS_STATS_CLEAR, "stats_clear", &xfs_params.stats_clear,
{XFS_STATS_CLEAR, "stats_clear", &xfs_params.stats_clear.val,
sizeof(ulong), 0644, NULL, &xfs_stats_clear_proc_handler,
&sysctl_intvec, NULL, &xfs_min[6], &xfs_max[6]},
&sysctl_intvec, NULL,
&xfs_params.stats_clear.min, &xfs_params.stats_clear.max},
#endif /* CONFIG_PROC_FS */
{0}
@@ -39,17 +39,22 @@
* Tunable xfs parameters
*/
#define XFS_PARAM (sizeof(struct xfs_param) / sizeof(ulong))
typedef struct xfs_sysctl_val {
ulong min;
ulong val;
ulong max;
} xfs_sysctl_val_t;
typedef struct xfs_param {
ulong restrict_chown; /* Root/non-root can give away files. */
ulong sgid_inherit; /* Inherit ISGID bit if process' GID is */
/* not a member of the parent dir GID. */
ulong symlink_mode; /* Symlink creat mode affected by umask. */
ulong panic_mask; /* bitmask to specify panics on errors. */
ulong error_level; /* Degree of reporting for internal probs*/
ulong sync_interval; /* time between sync calls */
ulong stats_clear; /* Reset all XFS statistics to zero. */
xfs_sysctl_val_t restrict_chown;/* Root/non-root can give away files.*/
xfs_sysctl_val_t sgid_inherit; /* Inherit ISGID bit if process' GID
* is not a member of the parent dir
* GID */
xfs_sysctl_val_t symlink_mode; /* Link creat mode affected by umask */
xfs_sysctl_val_t panic_mask; /* bitmask to cause panic on errors. */
xfs_sysctl_val_t error_level; /* Degree of reporting for problems */
xfs_sysctl_val_t sync_interval; /* time between sync calls */
xfs_sysctl_val_t stats_clear; /* Reset all XFS statistics to zero. */
} xfs_param_t;
/*
@@ -92,7 +92,7 @@ pb_trace_func(
int j;
unsigned long flags;
if (!pb_params.p_un.debug) return;
if (!pb_params.debug.val) return;
if (ra == NULL) ra = (void *)__builtin_return_address(0);
@@ -129,10 +129,13 @@ STATIC struct workqueue_struct *pagebuf_dataio_workqueue;
* /proc/sys/vm/pagebuf
*/
unsigned long pagebuf_min[P_PARAM] = { HZ/2, 1*HZ, 0, 0 };
unsigned long pagebuf_max[P_PARAM] = { HZ*30, HZ*300, 1, 1 };
pagebuf_param_t pb_params = {{ HZ, 15 * HZ, 0, 0 }};
pagebuf_param_t pb_params = {
/* MIN DFLT MAX */
flush_interval: { HZ/2, HZ, 30*HZ },
age_buffer: { 1*HZ, 15*HZ, 300*HZ },
stats_clear: { 0, 0, 1 },
debug: { 0, 0, 1 },
};
/*
* Pagebuf statistics variables
@@ -1556,7 +1559,7 @@ pagebuf_delwri_queue(
}
list_add_tail(&pb->pb_list, &pbd_delwrite_queue);
pb->pb_flushtime = jiffies + pb_params.p_un.age_buffer;
pb->pb_flushtime = jiffies + pb_params.age_buffer.val;
spin_unlock(&pbd_delwrite_lock);
if (unlock && (pb->pb_flags & _PBF_LOCKABLE)) {
@@ -1621,7 +1624,7 @@ pagebuf_daemon(
if (pbd_active == 1) {
mod_timer(&pb_daemon_timer,
jiffies + pb_params.p_un.flush_interval);
jiffies + pb_params.flush_interval.val);
interruptible_sleep_on(&pbd_waitq);
}
@@ -1824,7 +1827,7 @@ pb_stats_clear_handler(
if (!ret && write && *valp) {
printk("XFS Clearing pbstats\n");
memset(&pbstats, 0, sizeof(pbstats));
pb_params.p_un.stats_clear = 0;
pb_params.stats_clear.val = 0;
}
return ret;
@@ -1833,22 +1836,26 @@ pb_stats_clear_handler(
STATIC struct ctl_table_header *pagebuf_table_header;
STATIC ctl_table pagebuf_table[] = {
{PB_FLUSH_INT, "flush_int", &pb_params.data[0],
{PB_FLUSH_INT, "flush_int", &pb_params.flush_interval.val,
sizeof(ulong), 0644, NULL, &proc_doulongvec_ms_jiffies_minmax,
&sysctl_intvec, NULL, &pagebuf_min[0], &pagebuf_max[0]},
&sysctl_intvec, NULL,
&pb_params.flush_interval.min, &pb_params.flush_interval.max},
{PB_FLUSH_AGE, "flush_age", &pb_params.data[1],
{PB_FLUSH_AGE, "flush_age", &pb_params.age_buffer.val,
sizeof(ulong), 0644, NULL, &proc_doulongvec_ms_jiffies_minmax,
&sysctl_intvec, NULL, &pagebuf_min[1], &pagebuf_max[1]},
&sysctl_intvec, NULL,
&pb_params.age_buffer.min, &pb_params.age_buffer.max},
{PB_STATS_CLEAR, "stats_clear", &pb_params.data[2],
{PB_STATS_CLEAR, "stats_clear", &pb_params.stats_clear.val,
sizeof(ulong), 0644, NULL, &pb_stats_clear_handler,
&sysctl_intvec, NULL, &pagebuf_min[2], &pagebuf_max[2]},
&sysctl_intvec, NULL,
&pb_params.stats_clear.min, &pb_params.stats_clear.max},
#ifdef PAGEBUF_TRACE
{PB_DEBUG, "debug", &pb_params.data[3],
{PB_DEBUG, "debug", &pb_params.debug.val,
sizeof(ulong), 0644, NULL, &proc_doulongvec_minmax,
&sysctl_intvec, NULL, &pagebuf_min[3], &pagebuf_max[3]},
&sysctl_intvec, NULL,
&pb_params.debug.min, &pb_params.debug.max},
#endif
{0}
};
@@ -85,18 +85,19 @@ struct pagebuf_trace_buf {
* Tunable pagebuf parameters
*/
#define P_PARAM 4
typedef union pagebuf_param {
struct {
ulong flush_interval; /* interval between runs of the
typedef struct pb_sysctl_val {
ulong min;
ulong val;
ulong max;
} pb_sysctl_val_t;
typedef struct pagebuf_param {
pb_sysctl_val_t flush_interval; /* interval between runs of the
* delwri flush daemon. */
ulong age_buffer; /* time for buffer to age before
pb_sysctl_val_t age_buffer; /* time for buffer to age before
* we flush it. */
ulong debug; /* debug tracing on or off */
ulong stats_clear; /* clear the pagebuf stats */
} p_un;
ulong data[P_PARAM];
pb_sysctl_val_t stats_clear; /* clear the pagebuf stats */
pb_sysctl_val_t debug; /* debug tracing on or off */
} pagebuf_param_t;
enum {
@@ -96,9 +96,6 @@ xfs_init(void)
#endif /* DEBUG */
#ifdef XFS_DABUF_DEBUG
extern lock_t xfs_dabuf_global_lock;
#endif
#ifdef XFS_DABUF_DEBUG
spinlock_init(&xfs_dabuf_global_lock, "xfsda");
#endif
@@ -68,7 +68,7 @@
* The attribute `pure' is not implemented in GCC versions earlier
* than 2.96.
*/
#if (__GNUC__ == 2 && __GNUC_MINOR >= 96) || __GNUC__ > 2
#if (__GNUC__ == 2 && __GNUC_MINOR__ >= 96) || __GNUC__ > 2
#define __attribute_pure__ __attribute__((pure))
#else
#define __attribute_pure__ /* unimplemented */
@@ -276,8 +276,10 @@ static inline void ipv6_addr_prefix(struct in6_addr *pfx,
b = plen & 0x7;
memcpy(pfx->s6_addr, addr, o);
if (b != 0)
if (b != 0) {
pfx->s6_addr[o] = addr->s6_addr[o] & (0xff00 >> b);
o++;
}
if (o < 16)
memset(pfx->s6_addr + o, 0, 16 - o);
}
@@ -628,13 +628,6 @@ static inline void tcp_openreq_free(struct open_request *req)
/*
* Pointers to address related TCP functions
* (i.e. things that depend on the address family)
*
* BUGGG_FUTURE: all the idea behind this struct is wrong.
* It mixes socket frontend with transport function.
* With port sharing between IPv6/v4 it gives the only advantage,
* only poor IPv6 needs to permanently recheck, that it
* is still IPv6 8)8) It must be cleaned up as soon as possible.
* --ANK (980802)
*/
struct tcp_func {
@@ -692,7 +692,6 @@ static int suspend_prepare_image(void)
printk(KERN_CRIT "%sCouldn't get enough free pages, on %d pages short\n",
name_suspend, nr_needed_pages-nr_free_pages());
root_swap = 0xFFFF;
spin_unlock_irq(&suspend_pagedir_lock);
return 1;
}
si_swapinfo(&i); /* FIXME: si_swapinfo(&i) returns all swap devices information.
@@ -700,7 +699,6 @@ static int suspend_prepare_image(void)
if (i.freeswap < nr_needed_pages) {
printk(KERN_CRIT "%sThere's not enough swap space available, on %ld pages short\n",
name_suspend, nr_needed_pages-i.freeswap);
spin_unlock_irq(&suspend_pagedir_lock);
return 1;
}
@@ -710,7 +708,6 @@ static int suspend_prepare_image(void)
/* Shouldn't happen */
printk(KERN_CRIT "%sCouldn't allocate enough pages\n",name_suspend);
panic("Really should not happen");
spin_unlock_irq(&suspend_pagedir_lock);
return 1;
}
nr_copy_pages_check = nr_copy_pages;
@@ -724,12 +721,9 @@ static int suspend_prepare_image(void)
* End of critical section. From now on, we can write to memory,
* but we should not touch disk. This specially means we must _not_
* touch swap space! Except we must write out our image of course.
*
* Following line enforces not writing to disk until we choose.
*/
printk( "critical section: done (%d pages copied)\n", nr_copy_pages );
spin_unlock_irq(&suspend_pagedir_lock);
return 0;
}
@@ -808,6 +802,24 @@ void do_magic_resume_2(void)
#endif
}
/* do_magic() is implemented in arch/?/kernel/suspend_asm.S, and basically does:
if (!resume) {
do_magic_suspend_1();
save_processor_state();
SAVE_REGISTERS
do_magic_suspend_2();
return;
}
GO_TO_SWAPPER_PAGE_TABLES
do_magic_resume_1();
COPY_PAGES_BACK
RESTORE_REGISTERS
restore_processor_state();
do_magic_resume_2();
*/
void do_magic_suspend_1(void)
{
mb();
@@ -818,8 +830,13 @@ void do_magic_suspend_1(void)
void do_magic_suspend_2(void)
{
int is_problem;
read_swapfiles();
if (!suspend_prepare_image()) { /* suspend_save_image releases suspend_pagedir_lock */
is_problem = suspend_prepare_image();
spin_unlock_irq(&suspend_pagedir_lock);
if (!is_problem) {
kernel_fpu_end(); /* save_processor_state() does kernel_fpu_begin, and we need to revert it in order to pass in_atomic() checks */
BUG_ON(in_atomic());
suspend_save_image();
suspend_power_down(); /* FIXME: if suspend_power_down is commented out, console is lost after few suspends ?! */
}
@@ -1224,13 +1241,13 @@ void software_resume(void)
orig_loglevel = console_loglevel;
console_loglevel = new_loglevel;
if(!resume_file[0] && resume_status == RESUME_SPECIFIED) {
if (!resume_file[0] && resume_status == RESUME_SPECIFIED) {
printk( "suspension device unspecified\n" );
return;
}
printk( "resuming from %s\n", resume_file);
if(read_suspend_image(resume_file, 0))
if (read_suspend_image(resume_file, 0))
goto read_failure;
do_magic(1);
panic("This never returns");
@@ -1153,7 +1153,9 @@ static int ipgre_tunnel_init(struct net_device *dev)
tunnel = (struct ip_tunnel*)dev->priv;
iph = &tunnel->parms.iph;
tunnel->dev = dev;
tunnel->dev = dev;
strcpy(tunnel->parms.name, dev->name);
memcpy(dev->dev_addr, &tunnel->parms.iph.saddr, 4);
memcpy(dev->broadcast, &tunnel->parms.iph.daddr, 4);
@@ -1215,6 +1217,9 @@ int __init ipgre_fb_tunnel_init(struct net_device *dev)
struct ip_tunnel *tunnel = (struct ip_tunnel*)dev->priv;
struct iphdr *iph = &tunnel->parms.iph;
tunnel->dev = dev;
strcpy(tunnel->parms.name, dev->name);
iph->version = 4;
iph->protocol = IPPROTO_GRE;
iph->ihl = 5;
@@ -805,7 +805,10 @@ static int ipip_tunnel_init(struct net_device *dev)
tunnel = (struct ip_tunnel*)dev->priv;
iph = &tunnel->parms.iph;
tunnel->dev = dev;
strcpy(tunnel->parms.name, dev->name);
memcpy(dev->dev_addr, &tunnel->parms.iph.saddr, 4);
memcpy(dev->broadcast, &tunnel->parms.iph.daddr, 4);
@@ -841,6 +844,9 @@ static int __init ipip_fb_tunnel_init(struct net_device *dev)
struct ip_tunnel *tunnel = dev->priv;
struct iphdr *iph = &tunnel->parms.iph;
tunnel->dev = dev;
strcpy(tunnel->parms.name, dev->name);
iph->version = 4;
iph->protocol = IPPROTO_IPIP;
iph->ihl = 5;
@@ -3693,7 +3693,17 @@ static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb,
tcp_sync_mss(sk, tp->pmtu_cookie);
tcp_initialize_rcv_mss(sk);
/* Make sure socket is routed, for correct metrics. */
tp->af_specific->rebuild_header(sk);
tcp_init_metrics(sk);
/* Prevent spurious tcp_cwnd_restart() on first data
* packet.
*/
tp->lsndtime = tcp_time_stamp;
tcp_init_buffer_space(sk);
if (sock_flag(sk, SOCK_KEEPOPEN))
@@ -3959,7 +3969,18 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
if (tp->tstamp_ok)
tp->advmss -= TCPOLEN_TSTAMP_ALIGNED;
/* Make sure socket is routed, for
* correct metrics.
*/
tp->af_specific->rebuild_header(sk);
tcp_init_metrics(sk);
/* Prevent spurious tcp_cwnd_restart() on
* first data packet.
*/
tp->lsndtime = tcp_time_stamp;
tcp_initialize_rcv_mss(sk);
tcp_init_buffer_space(sk);
tcp_fast_path_on(tp);
@@ -87,18 +87,11 @@ static void ip6_xmit_unlock(void)
static int ip6ip6_fb_tnl_dev_init(struct net_device *dev);
static int ip6ip6_tnl_dev_init(struct net_device *dev);
static void ip6ip6_tnl_dev_setup(struct net_device *dev);
/* the IPv6 tunnel fallback device */
static struct net_device ip6ip6_fb_tnl_dev = {
.name = "ip6tnl0",
.init = ip6ip6_fb_tnl_dev_init
};
static struct net_device *ip6ip6_fb_tnl_dev;
/* the IPv6 fallback tunnel */
static struct ip6_tnl ip6ip6_fb_tnl = {
.dev = &ip6ip6_fb_tnl_dev,
.parms ={.name = "ip6tnl0", .proto = IPPROTO_IPV6}
};
/* lists for storing tunnels in use */
static struct ip6_tnl *tnls_r_l[HASH_SIZE];
@@ -216,59 +209,39 @@ static int
ip6_tnl_create(struct ip6_tnl_parm *p, struct ip6_tnl **pt)
{
struct net_device *dev;
int err = -ENOBUFS;
struct ip6_tnl *t;
char name[IFNAMSIZ];
int err;
dev = kmalloc(sizeof (*dev) + sizeof (*t), GFP_KERNEL);
if (!dev)
return err;
memset(dev, 0, sizeof (*dev) + sizeof (*t));
dev->priv = (void *) (dev + 1);
t = (struct ip6_tnl *) dev->priv;
t->dev = dev;
dev->init = ip6ip6_tnl_dev_init;
memcpy(&t->parms, p, sizeof (*p));
t->parms.name[IFNAMSIZ - 1] = '\0';
strcpy(dev->name, t->parms.name);
if (!dev->name[0]) {
int i = 0;
int exists = 0;
do {
sprintf(dev->name, "ip6tnl%d", ++i);
exists = (__dev_get_by_name(dev->name) != NULL);
} while (i < IP6_TNL_MAX && exists);
if (i == IP6_TNL_MAX) {
goto failed;
if (p->name[0]) {
strlcpy(name, p->name, IFNAMSIZ);
} else {
int i;
for (i = 1; i < IP6_TNL_MAX; i++) {
sprintf(name, "ip6tnl%d", i);
if (__dev_get_by_name(name) == NULL)
break;
}
memcpy(t->parms.name, dev->name, IFNAMSIZ);
if (i == IP6_TNL_MAX)
return -ENOBUFS;
}
SET_MODULE_OWNER(dev);
dev = alloc_netdev(sizeof (*t), name, ip6ip6_tnl_dev_setup);
if (dev == NULL)
return -ENOMEM;
t = dev->priv;
dev->init = ip6ip6_tnl_dev_init;
t->parms = *p;
if ((err = register_netdevice(dev)) < 0) {
goto failed;
kfree(dev);
return err;
}
dev_hold(dev);
ip6ip6_tnl_link(t);
*pt = t;
return 0;
failed:
kfree(dev);
return err;
}
/**
* ip6_tnl_destroy() - destroy old tunnel
* @t: tunnel to be destroyed
*
* Return:
* whatever unregister_netdevice() returns
**/
static inline int
ip6_tnl_destroy(struct ip6_tnl *t)
{
return unregister_netdevice(t->dev);
}
/**
@@ -304,23 +277,12 @@ ip6ip6_tnl_locate(struct ip6_tnl_parm *p, struct ip6_tnl **pt, int create)
return (create ? -EEXIST : 0);
}
}
if (!create) {
if (!create)
return -ENODEV;
}
return ip6_tnl_create(p, pt);
}
/**
* ip6ip6_tnl_dev_destructor - tunnel device destructor
* @dev: the device to be destroyed
**/
static void
ip6ip6_tnl_dev_destructor(struct net_device *dev)
{
kfree(dev);
}
/**
* ip6ip6_tnl_dev_uninit - tunnel device uninitializer
* @dev: the device to be destroyed
@@ -332,14 +294,14 @@ ip6ip6_tnl_dev_destructor(struct net_device *dev)
static void
ip6ip6_tnl_dev_uninit(struct net_device *dev)
{
if (dev == &ip6ip6_fb_tnl_dev) {
if (dev == ip6ip6_fb_tnl_dev) {
write_lock_bh(&ip6ip6_lock);
tnls_wc[0] = NULL;
write_unlock_bh(&ip6ip6_lock);
} else {
struct ip6_tnl *t = (struct ip6_tnl *) dev->priv;
ip6ip6_tnl_unlink(t);
ip6ip6_tnl_unlink((struct ip6_tnl *) dev->priv);
}
dev_put(dev);
}
/**
@@ -506,7 +468,7 @@ void ip6ip6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
icmpv6_send(skb2, rel_type, rel_code, rel_info, skb2->dev);
if (rt)
dst_free(&rt->u.dst);
dst_release(&rt->u.dst);
kfree_skb(skb2);
}
@@ -878,7 +840,6 @@ static void ip6_tnl_set_cap(struct ip6_tnl *t)
}
}
static void ip6ip6_tnl_link_config(struct ip6_tnl *t)
{
struct net_device *dev = t->dev;
@@ -906,31 +867,25 @@ static void ip6ip6_tnl_link_config(struct ip6_tnl *t)
if (p->flags & IP6_TNL_F_CAP_XMIT) {
struct rt6_info *rt = rt6_lookup(&p->raddr, &p->laddr,
p->link, 0);
if (rt) {
struct net_device *rtdev;
if (!(rtdev = rt->rt6i_dev) ||
rtdev->type == ARPHRD_TUNNEL6) {
/* as long as tunnels use the same socket
for transmission, locally nested tunnels
won't work */
dst_release(&rt->u.dst);
goto no_link;
} else {
dev->iflink = rtdev->ifindex;
dev->hard_header_len = rtdev->hard_header_len +
sizeof (struct ipv6hdr);
dev->mtu = rtdev->mtu - sizeof (struct ipv6hdr);
if (dev->mtu < IPV6_MIN_MTU)
dev->mtu = IPV6_MIN_MTU;
dst_release(&rt->u.dst);
}
if (rt == NULL)
return;
/* as long as tunnels use the same socket for transmission,
locally nested tunnels won't work */
if (rt->rt6i_dev && rt->rt6i_dev->type != ARPHRD_TUNNEL6) {
dev->iflink = rt->rt6i_dev->ifindex;
dev->hard_header_len = rt->rt6i_dev->hard_header_len +
sizeof (struct ipv6hdr);
dev->mtu = rt->rt6i_dev->mtu - sizeof (struct ipv6hdr);
if (dev->mtu < IPV6_MIN_MTU)
dev->mtu = IPV6_MIN_MTU;
}
} else {
no_link:
dev->iflink = 0;
dev->hard_header_len = LL_MAX_HEADER + sizeof (struct ipv6hdr);
dev->mtu = ETH_DATA_LEN - sizeof (struct ipv6hdr);
dst_release(&rt->u.dst);
}
}
@@ -995,7 +950,7 @@ ip6ip6_tnl_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
switch (cmd) {
case SIOCGETTUNNEL:
if (dev == &ip6ip6_fb_tnl_dev) {
if (dev == ip6ip6_fb_tnl_dev) {
if (copy_from_user(&p,
ifr->ifr_ifru.ifru_data,
sizeof (p))) {
@@ -1024,7 +979,7 @@ ip6ip6_tnl_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
err = -EFAULT;
break;
}
if (!create && dev != &ip6ip6_fb_tnl_dev) {
if (!create && dev != ip6ip6_fb_tnl_dev) {
t = (struct ip6_tnl *) dev->priv;
}
if (!t && (err = ip6ip6_tnl_locate(&p, &t, create))) {
@@ -1052,7 +1007,7 @@ ip6ip6_tnl_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
if (!capable(CAP_NET_ADMIN))
break;
if (dev == &ip6ip6_fb_tnl_dev) {
if (dev == ip6ip6_fb_tnl_dev) {
if (copy_from_user(&p, ifr->ifr_ifru.ifru_data,
sizeof (p))) {
err = -EFAULT;
@@ -1061,14 +1016,14 @@ ip6ip6_tnl_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
err = ip6ip6_tnl_locate(&p, &t, 0);
if (err)
break;
if (t == &ip6ip6_fb_tnl) {
if (t == ip6ip6_fb_tnl_dev->priv) {
err = -EPERM;
break;
}
} else {
t = (struct ip6_tnl *) dev->priv;
}
err = ip6_tnl_destroy(t);
err = unregister_netdevice(t->dev);
break;
default:
err = -EINVAL;
@@ -1110,40 +1065,49 @@ ip6ip6_tnl_change_mtu(struct net_device *dev, int new_mtu)
}
/**
* ip6ip6_tnl_dev_init_gen - general initializer for all tunnel devices
* ip6ip6_tnl_dev_setup - setup virtual tunnel device
* @dev: virtual device associated with tunnel
*
* Description:
* Set function pointers and initialize the &struct flowi template used
* by the tunnel.
* Initialize function pointers and device parameters
**/
static void
ip6ip6_tnl_dev_init_gen(struct net_device *dev)
static void ip6ip6_tnl_dev_setup(struct net_device *dev)
{
struct ip6_tnl *t = (struct ip6_tnl *) dev->priv;
struct flowi *fl = &t->fl;
memset(fl, 0, sizeof (*fl));
fl->proto = IPPROTO_IPV6;
dev->destructor = ip6ip6_tnl_dev_destructor;
SET_MODULE_OWNER(dev);
dev->uninit = ip6ip6_tnl_dev_uninit;
dev->destructor = (void (*)(struct net_device *))kfree;
dev->hard_start_xmit = ip6ip6_tnl_xmit;
dev->get_stats = ip6ip6_tnl_get_stats;
dev->do_ioctl = ip6ip6_tnl_ioctl;
dev->change_mtu = ip6ip6_tnl_change_mtu;
dev->type = ARPHRD_TUNNEL6;
dev->hard_header_len = LL_MAX_HEADER + sizeof (struct ipv6hdr);
dev->mtu = ETH_DATA_LEN - sizeof (struct ipv6hdr);
dev->flags |= IFF_NOARP;
if (ipv6_addr_type(&t->parms.raddr) & IPV6_ADDR_UNICAST &&
ipv6_addr_type(&t->parms.laddr) & IPV6_ADDR_UNICAST)
dev->flags |= IFF_POINTOPOINT;
/* Hmm... MAX_ADDR_LEN is 8, so the ipv6 addresses can't be
dev->iflink = 0;
/* Hmm... MAX_ADDR_LEN is 8, so the ipv6 addresses can't be
copied to dev->dev_addr and dev->broadcast, like the ipv4
addresses were in ipip.c, ip_gre.c and sit.c. */
dev->addr_len = 0;
}
/**
* ip6ip6_tnl_dev_init_gen - general initializer for all tunnel devices
* @dev: virtual device associated with tunnel
**/
static inline void
ip6ip6_tnl_dev_init_gen(struct net_device *dev)
{
struct ip6_tnl *t = (struct ip6_tnl *) dev->priv;
t->fl.proto = IPPROTO_IPV6;
t->dev = dev;
strcpy(t->parms.name, dev->name);
}
/**
* ip6ip6_tnl_dev_init - initializer for all non fallback tunnel devices
* @dev: virtual device associated with tunnel
@@ -1167,8 +1131,10 @@ ip6ip6_tnl_dev_init(struct net_device *dev)
int ip6ip6_fb_tnl_dev_init(struct net_device *dev)
{
ip6ip6_tnl_dev_init_gen(dev);
tnls_wc[0] = &ip6ip6_fb_tnl;
struct ip6_tnl *t = dev->priv;
ip6ip6_tnl_dev_init_gen(dev);
dev_hold(dev);
tnls_wc[0] = t;
return 0;
}
@@ -1190,8 +1156,6 @@ int __init ip6_tunnel_init(void)
struct sock *sk;
struct ipv6_pinfo *np;
ip6ip6_fb_tnl_dev.priv = (void *) &ip6ip6_fb_tnl;
for (i = 0; i < NR_CPUS; i++) {
if (!cpu_possible(i))
continue;
@@ -1219,10 +1183,23 @@ int __init ip6_tunnel_init(void)
goto fail;
}
SET_MODULE_OWNER(&ip6ip6_fb_tnl_dev);
register_netdev(&ip6ip6_fb_tnl_dev);
ip6ip6_fb_tnl_dev = alloc_netdev(sizeof(struct ip6_tnl), "ip6tnl0",
ip6ip6_tnl_dev_setup);
if (!ip6ip6_fb_tnl_dev) {
err = -ENOMEM;
goto tnl_fail;
}
ip6ip6_fb_tnl_dev->init = ip6ip6_fb_tnl_dev_init;
if ((err = register_netdev(ip6ip6_fb_tnl_dev))) {
kfree(ip6ip6_fb_tnl_dev);
goto tnl_fail;
}
return 0;
tnl_fail:
inet6_del_protocol(&ip6ip6_protocol, IPPROTO_IPV6);
fail:
for (j = 0; j < i; j++) {
if (!cpu_possible(j))
@@ -1241,7 +1218,7 @@ void ip6_tunnel_cleanup(void)
{
int i;
unregister_netdev(&ip6ip6_fb_tnl_dev);
unregister_netdev(ip6ip6_fb_tnl_dev);
inet6_del_protocol(&ip6ip6_protocol, IPPROTO_IPV6);
......
@@ -743,7 +743,10 @@ static int ipip6_tunnel_init(struct net_device *dev)
tunnel = (struct ip_tunnel*)dev->priv;
iph = &tunnel->parms.iph;
tunnel->dev = dev;
strcpy(tunnel->parms.name, dev->name);
memcpy(dev->dev_addr, &tunnel->parms.iph.saddr, 4);
memcpy(dev->broadcast, &tunnel->parms.iph.daddr, 4);
@@ -780,6 +783,9 @@ int __init ipip6_fb_tunnel_init(struct net_device *dev)
struct ip_tunnel *tunnel = dev->priv;
struct iphdr *iph = &tunnel->parms.iph;
tunnel->dev = dev;
strcpy(tunnel->parms.name, dev->name);
iph->version = 4;
iph->protocol = IPPROTO_IPV6;
iph->ihl = 5;
......
@@ -544,7 +544,6 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
struct ipv6_pinfo *np = inet6_sk(sk);
struct tcp_opt *tp = tcp_sk(sk);
struct in6_addr *saddr = NULL;
struct in6_addr saddr_buf;
struct flowi fl;
struct dst_entry *dst;
int addr_type;
@@ -671,23 +670,24 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
goto failure;
}
ip6_dst_store(sk, dst, NULL);
sk->sk_route_caps = dst->dev->features &
~(NETIF_F_IP_CSUM | NETIF_F_TSO);
if (saddr == NULL) {
err = ipv6_get_saddr(dst, &np->daddr, &saddr_buf);
if (err)
err = ipv6_get_saddr(dst, &np->daddr, &fl.fl6_src);
if (err) {
dst_release(dst);
goto failure;
saddr = &saddr_buf;
}
saddr = &fl.fl6_src;
ipv6_addr_copy(&np->rcv_saddr, saddr);
}
/* set the source address */
ipv6_addr_copy(&np->rcv_saddr, saddr);
ipv6_addr_copy(&np->saddr, saddr);
inet->rcv_saddr = LOOPBACK4_IPV6;
ip6_dst_store(sk, dst, NULL);
sk->sk_route_caps = dst->dev->features &
~(NETIF_F_IP_CSUM | NETIF_F_TSO);
tp->ext_header_len = 0;
if (np->opt)
tp->ext_header_len = np->opt->opt_flen + np->opt->opt_nflen;
@@ -714,8 +714,8 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
late_failure:
tcp_set_state(sk, TCP_CLOSE);
failure:
__sk_dst_reset(sk);
failure:
inet->dport = 0;
sk->sk_route_caps = 0;
return err;
......
@@ -1614,7 +1614,7 @@ asmlinkage long sys_sendmsg(int fd, struct msghdr __user *msg, unsigned flags)
goto out;
/* do not move before msg_sys is valid */
err = -EINVAL;
err = -EMSGSIZE;
if (msg_sys.msg_iovlen > UIO_MAXIOV)
goto out_put;
@@ -1713,7 +1713,7 @@ asmlinkage long sys_recvmsg(int fd, struct msghdr __user *msg, unsigned int flag
if (!sock)
goto out;
err = -EINVAL;
err = -EMSGSIZE;
if (msg_sys.msg_iovlen > UIO_MAXIOV)
goto out_put;
......
@@ -922,6 +922,8 @@ EXPORT_SYMBOL(wanrouter_type_trans);
EXPORT_SYMBOL(lock_adapter_irq);
EXPORT_SYMBOL(unlock_adapter_irq);
MODULE_LICENSE("GPL");
/*
* End
*/
@@ -790,8 +790,10 @@ int xfrm_lookup(struct dst_entry **dst_p, struct flowi *fl,
goto error;
}
if (err == -EAGAIN ||
genid != atomic_read(&flow_cache_genid))
genid != atomic_read(&flow_cache_genid)) {
xfrm_pol_put(policy);
goto restart;
}
}
if (err)
goto error;
......