- 11 May, 2009 3 commits
-
-
Ryusuke Konishi authored
Although some nilfs2 ioctls exchange data in the form of an indirectly referenced array, some of them lack a size check on the array elements. This inserts the missing checks and rejects requests whose ioctl data does not have a valid format. We usually don't have to check the size of structures associated with ioctl commands, because the size is tested implicitly when the ioctl command is identified; the checks this patch adds are for the cases where that implicit check does not apply. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
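A minimal sketch of the kind of element-size check described, assuming a nilfs_argv-style descriptor whose v_size field carries the caller's per-element size; the structure compared against is purely illustrative:

    /* Hedged sketch, not the actual nilfs2 patch: reject the request
     * before copying anything if the caller's element size does not
     * match the structure the kernel expects for this command. */
    if (argv.v_size != sizeof(struct nilfs_illustrative_item))
            return -EINVAL;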
-
Ryusuke Konishi authored
This is a companion patch to ("nilfs2: fix possible circular locking for get information ioctls"). It corrects a lock order reversal between mm->mmap_sem and nilfs->ns_segctor_sem in nilfs_clean_segments(), which was detected by a lockdep check:

 =======================================================
 [ INFO: possible circular locking dependency detected ]
 2.6.30-rc3-nilfs-00003-g360bdc1 #7
 -------------------------------------------------------
 mmap/5294 is trying to acquire lock:
  (&nilfs->ns_segctor_sem){++++.+}, at: [<d0d0e846>] nilfs_transaction_begin+0xb6/0x10c [nilfs2]
 but task is already holding lock:
  (&mm->mmap_sem){++++++}, at: [<c043700a>] do_page_fault+0x1d8/0x30a
 which lock already depends on the new lock.

 the existing dependency chain (in reverse order) is:

 -> #1 (&mm->mmap_sem){++++++}:
        [<c01470a5>] __lock_acquire+0x1066/0x13b0
        [<c01474a9>] lock_acquire+0xba/0xdd
        [<c01836bc>] might_fault+0x68/0x88
        [<c023c61d>] copy_from_user+0x2a/0x111
        [<d0d120d0>] nilfs_ioctl_prepare_clean_segments+0x1d/0xf1 [nilfs2]
        [<d0d0e2aa>] nilfs_clean_segments+0x6d/0x1b9 [nilfs2]
        [<d0d11f68>] nilfs_ioctl+0x2ad/0x318 [nilfs2]
        [<c01a3be7>] vfs_ioctl+0x22/0x69
        [<c01a408e>] do_vfs_ioctl+0x460/0x499
        [<c01a4107>] sys_ioctl+0x40/0x5a
        [<c01031a4>] sysenter_do_call+0x12/0x38
        [<ffffffff>] 0xffffffff

 -> #0 (&nilfs->ns_segctor_sem){++++.+}:
        [<c0146e0b>] __lock_acquire+0xdcc/0x13b0
        [<c01474a9>] lock_acquire+0xba/0xdd
        [<c0433f1d>] down_read+0x2a/0x3e
        [<d0d0e846>] nilfs_transaction_begin+0xb6/0x10c [nilfs2]
        [<d0cfe0e5>] nilfs_page_mkwrite+0xe7/0x154 [nilfs2]
        [<c0183b0b>] __do_fault+0x165/0x376
        [<c01855cd>] handle_mm_fault+0x287/0x5d1
        [<c043712d>] do_page_fault+0x2fb/0x30a
        [<c0435462>] error_code+0x72/0x78
        [<ffffffff>] 0xffffffff

Here nilfs_clean_segments() holds:

 nilfs->ns_segctor_sem -> copy_from_user() --> page fault -> mm->mmap_sem

and the page fault path may hold:

 page fault -> mm->mmap_sem --> nilfs_page_mkwrite() -> nilfs->ns_segctor_sem

Even though nilfs_clean_segments() does not perform write access on the given user pages, it may cause a deadlock because nilfs->ns_segctor_sem is shared per device and mm->mmap_sem can be shared with other tasks. To avoid this problem, this patch moves all calls to copy_from_user() outside the nilfs->ns_segctor_sem lock in the ioctl. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
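A hedged sketch of the reordering this describes; apart from copy_from_user() and ns_segctor_sem, the names are illustrative:

    /* Do all user copies first: a page fault here may take mm->mmap_sem,
     * which must not happen while ns_segctor_sem is held. */
    if (copy_from_user(&argv, (void __user *)arg, sizeof(argv)))
            return -EFAULT;

    down_read(&nilfs->ns_segctor_sem);              /* taken only after the copies */
    ret = nilfs_do_clean_segments(nilfs, &argv);    /* illustrative helper */
    up_read(&nilfs->ns_segctor_sem);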
-
Ryusuke Konishi authored
This is one of two patches that correct possible circular locking between mm->mmap_sem and nilfs->ns_segctor_sem. The problem was detected by a lockdep check as follows:

 =======================================================
 [ INFO: possible circular locking dependency detected ]
 2.6.30-rc3-nilfs-00002-g3552613 #6
 -------------------------------------------------------
 mmap/5418 is trying to acquire lock:
  (&nilfs->ns_segctor_sem){++++.+}, at: [<d0d0e852>] nilfs_transaction_begin+0xb6/0x10c [nilfs2]
 but task is already holding lock:
  (&mm->mmap_sem){++++++}, at: [<c043700a>] do_page_fault+0x1d8/0x30a
 which lock already depends on the new lock.

 the existing dependency chain (in reverse order) is:

 -> #1 (&mm->mmap_sem){++++++}:
        [<c01470a5>] __lock_acquire+0x1066/0x13b0
        [<c01474a9>] lock_acquire+0xba/0xdd
        [<c01836bc>] might_fault+0x68/0x88
        [<c023c730>] copy_to_user+0x2c/0xfc
        [<d0d11b4f>] nilfs_ioctl_wrap_copy+0x103/0x160 [nilfs2]
        [<d0d11fa9>] nilfs_ioctl+0x30a/0x3b0 [nilfs2]
        [<c01a3be7>] vfs_ioctl+0x22/0x69
        [<c01a408e>] do_vfs_ioctl+0x460/0x499
        [<c01a4107>] sys_ioctl+0x40/0x5a
        [<c01031a4>] sysenter_do_call+0x12/0x38
        [<ffffffff>] 0xffffffff

 -> #0 (&nilfs->ns_segctor_sem){++++.+}:
        [<c0146e0b>] __lock_acquire+0xdcc/0x13b0
        [<c01474a9>] lock_acquire+0xba/0xdd
        [<c0433f1d>] down_read+0x2a/0x3e
        [<d0d0e852>] nilfs_transaction_begin+0xb6/0x10c [nilfs2]
        [<d0cfe0e5>] nilfs_page_mkwrite+0xe7/0x154 [nilfs2]
        [<c0183b0b>] __do_fault+0x165/0x376
        [<c01855cd>] handle_mm_fault+0x287/0x5d1
        [<c043712d>] do_page_fault+0x2fb/0x30a
        [<c0435462>] error_code+0x72/0x78
        [<ffffffff>] 0xffffffff

 other info that might help us debug this:

 1 lock held by mmap/5418:
  #0: (&mm->mmap_sem){++++++}, at: [<c043700a>] do_page_fault+0x1d8/0x30a

 stack backtrace:
 Pid: 5418, comm: mmap Not tainted 2.6.30-rc3-nilfs-00002-g3552613 #6
 Call Trace:
  [<c0432145>] ? printk+0xf/0x12
  [<c0145c48>] print_circular_bug_tail+0xaa/0xb5
  [<c0146e0b>] __lock_acquire+0xdcc/0x13b0
  [<d0d10149>] ? nilfs_sufile_get_stat+0x1e/0x105 [nilfs2]
  [<c013b59a>] ? up_read+0x16/0x2c
  [<d0d10225>] ? nilfs_sufile_get_stat+0xfa/0x105 [nilfs2]
  [<c01474a9>] lock_acquire+0xba/0xdd
  [<d0d0e852>] ? nilfs_transaction_begin+0xb6/0x10c [nilfs2]
  [<c0433f1d>] down_read+0x2a/0x3e
  [<d0d0e852>] ? nilfs_transaction_begin+0xb6/0x10c [nilfs2]
  [<d0d0e852>] nilfs_transaction_begin+0xb6/0x10c [nilfs2]
  [<d0cfe0e5>] nilfs_page_mkwrite+0xe7/0x154 [nilfs2]
  [<c0183b0b>] __do_fault+0x165/0x376
  [<c01855cd>] handle_mm_fault+0x287/0x5d1
  [<c043700a>] ? do_page_fault+0x1d8/0x30a
  [<c013b54f>] ? down_read_trylock+0x39/0x43
  [<c043712d>] do_page_fault+0x2fb/0x30a
  [<c0436e32>] ? do_page_fault+0x0/0x30a
  [<c0435462>] error_code+0x72/0x78
  [<c0436e32>] ? do_page_fault+0x0/0x30a

This makes the lock granularity of nilfs->ns_segctor_sem finer than that of the mmap semaphore for all ioctl commands except nilfs_clean_segments(). The follow-up patch ("nilfs2: fix lock order reversal in nilfs_clean_segments ioctl") is required to fully resolve the problem. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
-
- 10 May, 2009 1 commit
-
-
Ryusuke Konishi authored
This fixes the following failure during GC:

 nilfs_cpfile_delete_checkpoints: cannot delete block
 NILFS: GC failed during preparation: cannot delete checkpoints: err=-2

The problem was caused by a break in state consistency between the page cache and the btree: the block above was removed from the btree, but the page buffering the block remained in the page cache in a dirty state. This resolves the inconsistency by ensuring that the dirty state of the page buffering the deleted block is cleared. Reported-by: David Arendt <admin@prnet.org> Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
-
- 09 May, 2009 4 commits
-
-
Ryusuke Konishi authored
This fixes the following circular locking dependency problem:

 =======================================================
 [ INFO: possible circular locking dependency detected ]
 2.6.30-rc3 #5
 -------------------------------------------------------
 segctord/3895 is trying to acquire lock:
  (&nilfs->ns_writer_mutex){+.+...}, at: [<d0d02172>] nilfs_mdt_get_block+0x89/0x20f [nilfs2]
 but task is already holding lock:
  (&bmap->b_sem){++++..}, at: [<d0d02d99>] nilfs_bmap_propagate+0x14/0x2e [nilfs2]
 which lock already depends on the new lock.

The fix replaces the call sites of nilfs_get_writer() that are never reached from a read-only context with direct dereferencing of the pointer to a writable FS-instance. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
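A hedged sketch of the substitution; the ns_writer field name is an assumption inferred from the lock name above, not taken from the patch itself:

    /* Before (can deadlock): nilfs_get_writer() takes ns_writer_mutex
     * while bmap->b_sem is already held.
     * After: this path only runs with a writable FS-instance attached,
     * so the pointer can be read directly without the mutex. */
    struct nilfs_sb_info *writer = nilfs->ns_writer;    /* field name assumed */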
-
Ryusuke Konishi authored
Some function calls in nilfs_prepare_segment_for_recovery() may fail because they can create blocks on metadata files without a writable FS-instance being configured. Concretely, the nilfs_mdt_create_block() routine of the metadata files will fail in that case. This fixes the problem by temporarily attaching a writable FS-instance while the function runs. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
-
Linus Torvalds authored
-
git://git.infradead.org/mtd-2.6
Linus Torvalds authored
* git://git.infradead.org/mtd-2.6:
  mtd: fix timeout in M25P80 driver
  mtd: Bug in m25p80.c during whole-chip erase
  mtd: expose subpage size via sysfs
  mtd: mtd in mtd_release is unused without CONFIG_MTD_CHAR
-
- 08 May, 2009 14 commits
-
-
Linus Torvalds authored
Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: MCE: make cmci_discover_lock irq-safe
  x86: xen, i386: reserve Xen pagetables
  x86, kexec: fix crashdump panic with CONFIG_KEXEC_JUMP
  x86-64: finish cleanup_highmaps()'s job wrt. _brk_end
  x86: fix boot hang in early_reserve_e820()
  x86: Fix a typo in a printk message
  x86, srat: do not register nodes beyond e820 map
-
git://jdelvare.pck.nerim.net/jdelvare-2.6
Linus Torvalds authored
* 'hwmon-for-linus' of git://jdelvare.pck.nerim.net/jdelvare-2.6:
  hwmon: (w83781d) Fix W83782D support (NULL pointer dereference)
  hwmon: (asus_atk0110) Fix compiler warning
-
git://git.monstr.eu/linux-2.6-microblaze
Linus Torvalds authored
* 'fixes-for-linus' of git://git.monstr.eu/linux-2.6-microblaze:
  microblaze: Fix return value for sys_ipc
  microblaze: Storage class should be before const qualifier
-
Masami Hiramatsu authored
Fix kprobes to lock text_mutex around the arch_arm/disarm_kprobe() calls that were newly added by commit de5bd88d. Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com> Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca> Cc: Jim Keniston <jkenisto@us.ibm.com> Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
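A minimal sketch of the locking pattern described (error handling omitted):

    /* Serialize the code patching done by kprobe arming with other
     * kernel text modifiers by holding text_mutex across it. */
    mutex_lock(&text_mutex);
    arch_arm_kprobe(p);
    mutex_unlock(&text_mutex);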
-
Jean Delvare authored
Commit 360782dd (hwmon: (w83781d) Stop abusing struct i2c_client for ISA devices) broke W83782D support for devices connected on the ISA bus. You will hit a NULL pointer dereference as soon as you read any device attribute. Other devices, and W83782D devices on the SMBus, aren't affected. Reported-by: Michel Abraham Signed-off-by: Jean Delvare <khali@linux-fr.org> Tested-by: Michel Abraham
-
Luca Tettamanti authored
atk_sensor_type is only used when DEBUG is defined. Signed-off-by: Luca Tettamanti <kronos.it@gmail.com> Signed-off-by: Jean Delvare <khali@linux-fr.org>
-
Peter Horton authored
Extend the erase timeout in the M25P80 SPI flash driver. The M25P80 driver fails to erase sectors on an M25P128 because the ready-wait timeout is too short. Change the timeout from a simple loop count to a suitable number of seconds. Signed-off-by: Peter Horton <zero@colonel-panic.org> Tested-by: Martin Michlmayr <tbm@cyrius.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
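A hedged sketch of the wall-clock-bounded wait; MAX_READY_WAIT_JIFFIES, read_sr() and SR_WIP are assumed to be the driver's names and may differ from the merged code:

    /* Poll the flash status register until the write-in-progress bit
     * clears, bounded by elapsed time instead of an iteration count. */
    unsigned long deadline = jiffies + MAX_READY_WAIT_JIFFIES;

    do {
            if (!(read_sr(flash) & SR_WIP))
                    return 0;               /* device is ready */
            cond_resched();
    } while (!time_after_eq(jiffies, deadline));

    return 1;                               /* timed out */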
-
Hidetoshi Seto authored
Lockdep reports the warning below when Li tries to offline one cpu:

 [ 110.835487] =================================
 [ 110.835616] [ INFO: inconsistent lock state ]
 [ 110.835688] 2.6.30-rc4-00336-g8c9ed899 #52
 [ 110.835757] ---------------------------------
 [ 110.835828] inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
 [ 110.835908] swapper/0 [HC1[1]:SC0[0]:HE0:SE1] takes:
 [ 110.835982] (cmci_discover_lock){?.+...}, at: [<ffffffff80236dc0>] cmci_clear+0x30/0x9b

cmci_clear() can be called via smp_call_function_single(). It is better to disable interrupts while holding cmci_discover_lock, turning it into an irq-safe lock - we can deadlock otherwise.

[ Impact: fix possible deadlock in the MCE code ]

Reported-by: Shaohua Li <shaohua.li@intel.com> Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Andrew Morton <akpm@linux-foundation.org> LKML-Reference: <4A03ED38.8000700@jp.fujitsu.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
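A minimal sketch of the irq-safe locking form described:

    unsigned long flags;

    /* Disable local interrupts while the lock is held so a CMCI
     * arriving on this CPU cannot re-enter and deadlock on it. */
    spin_lock_irqsave(&cmci_discover_lock, flags);
    /* ... walk and clear the CMCI-owned banks ... */
    spin_unlock_irqrestore(&cmci_discover_lock, flags);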
-
Jeremy Fitzhardinge authored
The Xen pagetables are no longer implicitly reserved as part of the other i386_start_kernel reservations, so make sure we explicitly reserve them. This prevents them from being released into the general kernel free page pool and reused. [ Impact: fix Xen guest crash ] Also-Bisected-by: Bryan Donlan <bdonlan@gmail.com> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: Xen-devel <xen-devel@lists.xensource.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> LKML-Reference: <4A032EEC.30509@goop.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Huang Ying authored
Tim Starling reported that crashdump will panic with a kernel compiled with CONFIG_KEXEC_JUMP, due to a null pointer dereference in machine_kexec_32.c:machine_kexec() when dereferencing kexec_image. Referring to: http://bugzilla.kernel.org/show_bug.cgi?id=13265 This patch fixes the BUG by replacing the global variable reference kexec_image in machine_kexec() with the local variable reference image, which is more appropriate and will not be null. The same BUG exists in machine_kexec_64.c, so it is fixed there in the same way. [ Impact: fix crash on kexec ] Reported-by: Tim Starling <tstarling@wikimedia.org> Signed-off-by: Huang Ying <ying.huang@intel.com> LKML-Reference: <1241751101.6259.85.camel@yhuang-dev.sh.intel.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
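A hedged sketch of the substitution; the surrounding function is heavily simplified and only the choice of variable is the point:

    void machine_kexec(struct kimage *image)
    {
    #ifdef CONFIG_KEXEC_JUMP
            /* Use the 'image' argument, which is always valid here,
             * rather than the global 'kexec_image', which may be NULL
             * on the crashdump path. */
            if (image->preserve_context)
                    save_processor_state();
    #endif
            /* ... the rest of the relocation sequence ... */
    }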
-
Jan Beulich authored
With the introduction of the .brk section, special care must be taken that no unused page table entries remain if _brk_end and _end are separated by a 2M page boundary. cleanup_highmap() runs very early and hence cannot take care of that, hence potential entries needing to be removed past _brk_end must be cleared once the brk allocator has done its job. [ Impact: avoids undesirable TLB aliases ] Signed-off-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Jan Beulich authored
If the first non-reserved (sub-)range doesn't fit the size requested, an endless loop will be entered. If a range returned from find_e820_area_size() turns out insufficient in size, the range must be skipped before calling the function again. [ Impact: fixes boot hang on some platforms ] Signed-off-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
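A hedged sketch of the corrected retry loop; the helper name and failure convention are assumptions, not the exact arch/x86 code:

    u64 addr, size;

    for (addr = start; ; addr += size) {
            addr = find_free_e820_area(addr, &size, align); /* assumed helper */
            if (addr == -1ULL)
                    return 0;       /* no non-reserved range left */
            if (size >= needed)
                    break;          /* this range is big enough */
            /* otherwise the loop skips past the short range and retries */
    }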
-
git://git.kernel.org/pub/scm/linux/kernel/git/sfrench/cifs-2.6
Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/sfrench/cifs-2.6: (32 commits)
  [CIFS] Fix double list addition in cifs posix open code
  [CIFS] Allow raw ntlmssp code to be enabled with sec=ntlmssp
  [CIFS] Fix SMB uid in NTLMSSP authenticate request
  [CIFS] NTLMSSP reenabled after move from connect.c to sess.c
  [CIFS] Remove sparse warning
  [CIFS] remove checkpatch warning
  [CIFS] Fix final user of old string conversion code
  [CIFS] remove cifs_strfromUCS_le
  [CIFS] NTLMSSP support moving into new file, old dead code removed
  [CIFS] Fix endian conversion of vcnum field
  [CIFS] Remove trailing whitespace
  [CIFS] Remove sparse endian warnings
  [CIFS] Add remaining ntlmssp flags and standardize field names
  [CIFS] Fix build warning
  cifs: fix length handling in cifs_get_name_from_search_buf
  [CIFS] Remove unneeded QuerySymlink call and fix mapping for unmapped status
  [CIFS] rename cifs_strndup to cifs_strndup_from_ucs
  Added loop check when mounting DFS tree.
  Enable dfs submounts to handle remote referrals.
  [CIFS] Remove older session setup implementation
  ...
-
Steve French authored
Remove adding the open file entry twice to the lists in the file. Do not fill in the file info twice in the case of posix opens and creates. Signed-off-by: Shirish Pargaonkar <shirishp@us.ibm.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
- 07 May, 2009 13 commits
-
-
David Howells authored
Don't check vm_region::vm_start is page aligned in add_nommu_region() because the region may reflect some non-page-aligned mapped file, such as could be obtained from RomFS XIP. Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Greg Ungerer <gerg@uclinux.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
git://neil.brown.name/md
Linus Torvalds authored
* 'for-linus' of git://neil.brown.name/md:
  md: remove rd%d links immediately after stopping an array.
  md: remove ability to explicit set an inactive array to 'clean'.
  md: constify VFTs
  md: tidy up status_resync to handle large arrays.
  md: fix some (more) errors with bitmaps on devices larger than 2TB.
  md/raid10: don't clear bitmap during recovery if array will still be degraded.
  md: fix loading of out-of-date bitmap.
-
Linus Torvalds authored
It's a really simple patch that basically just open-codes the current "secure_ip_id()" call, but when open-coding it we now use a _static_ hashing area, so that it gets updated every time.

And to make sure somebody can't just start from the same original seed of all-zeroes, and then do the "half_md4_transform()" over and over until they get the same sequence as the kernel has, each iteration also mixes in the same old "current->pid + jiffies" we used - so we should now have a regular strong pseudo-random number generator, but we also have one that doesn't have a single seed.

Note: the "pid + jiffies" is just meant to be a tiny tiny bit of noise. It has no real meaning. It could be anything. I just picked the previous seed; what matters is that now we keep the state in between calls, that state feeds into the next result, and that should make all the difference.

I made that hash be per-cpu data just to avoid cache-line ping-pong: having multiple CPUs write to the same data would be fine for randomness, and add yet another layer of chaos to it, but since get_random_int() is supposed to be a fast interface I did it that way instead. I considered using "__raw_get_cpu_var()" to avoid any preemption overhead while still getting the hash to be _mostly_ ping-pong free, but in the end good taste won out.

Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
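A hedged sketch of the scheme described, simplified from the description above; half_md4_transform() is the existing library helper, while 'secret' stands in for keyed material and is an assumption:

    static DEFINE_PER_CPU(__u32 [4], get_random_int_hash);

    unsigned int get_random_int(void)
    {
            __u32 *hash = get_cpu_var(get_random_int_hash);
            unsigned int ret;

            /* Mix a little changing input into the persistent per-cpu
             * state, then stir it with the keyed half-MD4 transform;
             * the state carries over into the next call. */
            hash[0] += current->pid + jiffies + get_cycles();
            ret = half_md4_transform(hash, secret);   /* 'secret': assumed */

            put_cpu_var(get_random_int_hash);
            return ret;
    }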
-
Linus Torvalds authored
* master.kernel.org:/home/rmk/linux-2.6-arm:
  [ARM] 5507/1: support R_ARM_MOVW_ABS_NC and MOVT_ABS relocation types
  [ARM] 5506/1: davinci: DMA_32BIT_MASK --> DMA_BIT_MASK(32)
  i.MX31: Disable CPU_32v6K in mx3_defconfig.
  mx3fb: Fix compilation with CONFIG_PM
  mx27ads: move PBC mapping out of vmalloc space
  MXC: remove BUG_ON in interrupt handler
  mx31: remove mx31moboard_defconfig
  ARM: ARCH_MXC should select HAVE_CLK
  mxc : BUG in imx_dma_request
  mxc : Clean up properly when imx_dma_free() used without imx_dma_disable()
  [ARM] mv78xx0: update defconfig
  [ARM] orion5x: update defconfig
  [ARM] Kirkwood: update defconfig
  [ARM] Kconfig typo fix: "PXA930" -> "CPU_PXA930".
  [ARM] S3C2412: Add missing cache flush in suspend code
  [ARM] S3C: Add UDIVSLOT support for newer UARTS
  [ARM] S3C64XX: Add S3C64XX_PA_IIS{0,1} to <mach/map.h>
-
Paul Gortmaker authored
From: Bruce Ashfield <bruce.ashfield@windriver.com> To fully support the armv7-a instruction set/optimizations, support for the R_ARM_MOVW_ABS_NC and R_ARM_MOVT_ABS relocation types is required. The MOVW and MOVT are both load-immediate instructions, MOVW loads 16 bits into the bottom half of a register, and MOVT loads 16 bits into the top half of a register. The relocation information for these instructions has a full 32 bit value, plus an addend which is stored in the 16 immediate bits in the instruction itself. The immediate bits in the instruction are not contiguous (the register # splits it into a 4 bit and 12 bit value), so the addend has to be extracted accordingly and added to the value. The value is then split and put into the instruction; a MOVW uses the bottom 16 bits of the value, and a MOVT uses the top 16 bits. Signed-off-by: David Borman <david.borman@windriver.com> Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
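A hedged sketch of the immediate handling described, written as a module-loader relocation case; it follows the description above but is not guaranteed to match the merged code byte for byte:

    case R_ARM_MOVW_ABS_NC:
    case R_ARM_MOVT_ABS:
            offset = *(u32 *)loc;
            /* Reassemble the addend from the split immediate field:
             * bits [19:16] and [11:0] of the instruction. */
            offset = ((offset & 0xf0000) >> 4) | (offset & 0xfff);
            offset = (offset ^ 0x8000) - 0x8000;    /* sign-extend */

            offset += sym->st_value;
            if (ELF32_R_TYPE(rel->r_info) == R_ARM_MOVT_ABS)
                    offset >>= 16;                  /* MOVT takes the top half */

            /* Write the selected 16 bits back into the split field. */
            *(u32 *)loc &= 0xfff0f000;
            *(u32 *)loc |= ((offset & 0xf000) << 4) | (offset & 0x0fff);
            break;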
-
Kevin Hilman authored
As per commit 284901a9, use DMA_BIT_MASK(n) Signed-off-by: Kevin Hilman <khilman@deeprootsystems.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
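The two spellings are equivalent; a quick illustration (the dev pointer is illustrative):

    /* DMA_32BIT_MASK and DMA_BIT_MASK(32) both evaluate to
     * 0x00000000ffffffffULL; the macro form states the width directly. */
    ret = dma_set_mask(dev, DMA_BIT_MASK(32));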
-
NeilBrown authored
md maintains links in /sys/mdXX/md/ to identify which device has which role in the array. e.g. rd2 -> dev-sda indicates that the device with role '2' in the array is sda. These links are only present when the array is active. They are created immediately after ->run is called, and so should be removed immediately after ->stop is called. However they are currently removed a little bit later, and it is possible for ->run to be called again, thus adding these links, before they are removed. So move the removal earlier so they are consistently only present when the array is active. Signed-off-by: NeilBrown <neilb@suse.de>
-
NeilBrown authored
Being able to write 'clean' to the 'array_state' of an inactive array to activate it in 'clean' mode is both unnecessary and inconvenient. It is unnecessary because the same can be achieved by writing 'active'. This activates an array, but it still remains 'clean' until the first write. It is inconvenient because writing 'clean' is more often used to cause an 'active' array to revert to 'clean' mode (thus blocking any writes until a 'write-pending' is promoted to 'active'). Allowing 'clean' to both activate an array and mark an active array as clean can lead to races: one program writes 'clean' to mark the active array as clean at the same time as another program writes 'inactive' to deactivate (stop) an active array. Depending on which writes first, the array could be deactivated and immediately reactivated, which isn't what was desired. So just disable the use of 'clean' to activate an array. This avoids a race that can be triggered with mdadm-3.0 and external metadata, so it is suitable for -stable. Reported-by: Rafal Marszewski <rafal.marszewski@intel.com> Acked-by: Dan Williams <dan.j.williams@intel.com> Cc: <stable@kernel.org> Signed-off-by: NeilBrown <neilb@suse.de>
-
Jan Engelhardt authored
Signed-off-by: Jan Engelhardt <jengelh@medozas.de> Signed-off-by: NeilBrown <neilb@suse.de>
-
NeilBrown authored
Two problems in status_resync:

 1/ It still used kilobytes as the basic block unit, while most code now uses sectors uniformly.
 2/ It doesn't allow for the possibility that max_sectors exceeds the range of "unsigned long".

So:
 - change "max_blocks" to "max_sectors", and store sector numbers in there and in 'resync'
 - make 'rt' a 'sector_t' so it can temporarily hold the number of remaining sectors
 - use sector_div rather than normal division
 - change the magic '100' used to preserve precision to '32':
   + making it a power of 2 makes division easier
   + it doesn't need to be as large as before; the old value was chosen when we averaged speed over the entire run, whereas now we average speed over the last 30 seconds or so

Reported-by: "Mario 'BitKoenig' Holbe" <Mario.Holbe@TU-Ilmenau.DE> Signed-off-by: NeilBrown <neilb@suse.de>
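A hedged illustration of the sector_div() usage with the power-of-two scale of 32; variable names follow the description above, not necessarily the final code:

    /* sector_div(x, y) divides the sector_t x in place by the 32-bit
     * value y and returns the remainder, so it works even when x
     * exceeds the range of unsigned long on 32-bit machines. */
    sector_t rt = max_sectors - resync;     /* remaining sectors */
    sector_div(rt, db / 32 + 1);            /* db: sectors done recently */
    rt *= dt;                               /* dt: seconds elapsed */
    rt >>= 5;                               /* undo the scale of 32 */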
-
NeilBrown authored
If a write-intent bitmap covers more than 2TB, we sometimes work with values beyond 32 bits, so these need to be sector_t. This patch adds the required casts to some unsigned longs that are being shifted up. This will affect any raid10 larger than 2TB, or any raid1/4/5/6 with member devices that are larger than 2TB. Signed-off-by: NeilBrown <neilb@suse.de> Reported-by: "Mario 'BitKoenig' Holbe" <Mario.Holbe@TU-Ilmenau.DE> Cc: stable@kernel.org
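A minimal illustration of the kind of cast being added; the chunk-shift macro name is an assumption about the bitmap code, used here only for illustration:

    /* On 32-bit, 'chunk << shift' is computed in unsigned long and
     * overflows past 2TB; widen to the 64-bit sector_t first. */
    sector_t block = (sector_t)chunk << CHUNK_BLOCK_SHIFT(bitmap);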
-
NeilBrown authored
If we have a raid10 with multiple missing devices, and we recover just one of these to a spare, then we risk (depending on the bitmap and array chunk size) clearing bits of the bitmap for which recovery isn't complete (because a device is still missing). This can lead to a subsequent "re-add" being recovered without any IO happening, which would result in loss of data. This patch takes the safe approach of not clearing bitmap bits if the array will still be degraded. This patch is suitable for all active -stable kernels. Cc: stable@kernel.org Signed-off-by: NeilBrown <neilb@suse.de>
-
NeilBrown authored
When md is loading a bitmap which it knows is out of date, it fills each page with 1s and writes it back out again. However the write_page call makes use of bitmap->file_pages and bitmap->last_page_size, which haven't been set correctly yet, so this can sometimes fail. Move the setting of file_pages and last_page_size to before the call to write_page. This bug can cause the assembly of an array to fail, thus making the data inaccessible. Hence I think it is a suitable candidate for -stable. Cc: stable@kernel.org Reported-by: Vojtech Pavlik <vojtech@suse.cz> Signed-off-by: NeilBrown <neilb@suse.de>
-
- 06 May, 2009 5 commits
-
-
Andrew Morton authored
Fix zillions of -mm x86_64 allmodconfig build errors - the file uses EXPORT_SYMBOL() and kmalloc but misses the needed includes. Cc: Greg Kroah-Hartman <gregkh@suse.de> Cc: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Eric Piel authored
With the removal of the duplicate unpack_to_rootfs() (commit df52092f), the messages displayed do not actually correspond to what the kernel is doing. In addition, depending on whether ramdisks are supported or not, the messages are not at all the same. So keep the messages more in sync with what the kernel is really doing, and only display a second message in case of failure. This also ensures that the printk message cannot be split by other printk's. Signed-off-by: Eric Piel <eric.piel@tremplin-utc.net> Acked-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
David Howells authored
NOMMU mmap() has an option controlled by a sysctl variable that determines whether the allocations made by do_mmap_private() should have the excess space trimmed off and returned to the allocator. Make the initial setting of this variable a Kconfig configuration option.

The reason there can be excess space is that the allocator only allocates in power-of-2 size chunks, but mmap()'s can be made in sizes that aren't a power of 2.

There are two alternatives:

 (1) Keep the excess as dead space. The dead space then remains unused for the lifetime of the mapping. Mappings of shared objects such as libc, ld.so or busybox's text segment may retain their dead space forever.

 (2) Return the excess to the allocator. This means that the dead space is limited to less than a page per mapping, but it means that for a transient process, there's more chance of fragmentation as the excess space may be reused fairly quickly.

During the boot process, a lot of transient processes are created, and this can cause a lot of fragmentation as the pagecache and various slabs grow greatly during this time. By turning off the trimming of excess space during boot and disabling batching of frees, Coldfire can manage to boot.

A better way of doing things might be to have /sbin/init turn this option off. By that point libc, ld.so and init - which are all long-duration processes - have all been loaded and trimmed.

Reported-by: Lanttor Guo <lanttor.guo@freescale.com> Signed-off-by: David Howells <dhowells@redhat.com> Tested-by: Lanttor Guo <lanttor.guo@freescale.com> Cc: Greg Ungerer <gerg@snapgear.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
David Howells authored
Clamp zone_batchsize() to 0 under NOMMU conditions to stop free_hot_cold_page() from queueing and batching frees. The problem is that under NOMMU conditions it is really important to be able to allocate large contiguous chunks of memory, but when munmap() or exit_mmap() releases big stretches of memory, return of these to the buddy allocator can be deferred, and when it does finally happen, it can be in small chunks. Whilst the fragmentation this incurs isn't so much of a problem under MMU conditions as userspace VM is glued together from individual pages with the aid of the MMU, it is a real problem if there isn't an MMU. By clamping the page freeing queue size to 0, pages are returned to the allocator immediately, and the buddy detector is more likely to be able to glue them together into large chunks immediately, and fragmentation is less likely to occur. By disabling batching of frees, and by turning off the trimming of excess space during boot, Coldfire can manage to boot. Reported-by: Lanttor Guo <lanttor.guo@freescale.com> Signed-off-by: David Howells <dhowells@redhat.com> Tested-by: Lanttor Guo <lanttor.guo@freescale.com> Cc: Greg Ungerer <gerg@snapgear.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
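A hedged, simplified sketch of the resulting shape of zone_batchsize(); the MMU-side sizing is abbreviated and should not be read as the exact merged code:

    static int zone_batchsize(struct zone *zone)
    {
    #ifdef CONFIG_MMU
            int batch;

            /* Size the per-cpu free queue from the zone size (sizing
             * details trimmed for brevity). */
            batch = zone->present_pages / 1024;
            if (batch * PAGE_SIZE > 512 * 1024)
                    batch = (512 * 1024) / PAGE_SIZE;
            batch /= 4;
            if (batch < 1)
                    batch = 1;
            return rounddown_pow_of_two(batch + batch / 2) - 1;
    #else
            /* NOMMU: don't queue frees at all, so released pages reach
             * the buddy allocator immediately and can coalesce into the
             * large contiguous chunks NOMMU userspace depends on. */
            return 0;
    #endif
    }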
-
David Howells authored
Use rounddown_pow_of_two(N) in zone_batchsize() rather than (1 << (fls(N)-1)), as they are equivalent, and with the former it is easier to see what is going on. Signed-off-by: David Howells <dhowells@redhat.com> Tested-by: Lanttor Guo <lanttor.guo@freescale.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
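A quick worked check of the equivalence for a nonzero value:

    unsigned int n = 1000;          /* binary 1111101000 */

    /* fls(1000) = 10, so (1 << (fls(n) - 1)) = 512, and
     * rounddown_pow_of_two(n) is likewise 512: both round a nonzero
     * value down to the nearest power of two. */
    WARN_ON(rounddown_pow_of_two(n) != (1U << (fls(n) - 1)));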
-