- 11 Aug, 2008 7 commits
-
Peter Zijlstra authored
The nesting is correct because mmap_sem is held; use the new annotation to express this. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Peter Zijlstra authored
Expose the new lock protection lock. This can be used to annotate places where we take multiple locks of the same class and avoid deadlocks by always taking another (top-level) lock first. NOTE: we're still bound to the MAX_LOCK_DEPTH (48) limit. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Peter Zijlstra authored
On Fri, 2008-08-01 at 16:26 -0700, Linus Torvalds wrote:

> On Fri, 1 Aug 2008, David Miller wrote:
> >
> > Taking more than a few locks of the same class at once is bad
> > news and it's better to find an alternative method.
>
> It's not always wrong.
>
> If you can guarantee that anybody that takes more than one lock of a
> particular class will always take a single top-level lock _first_, then
> that's all good. You can obviously screw up and take the same lock _twice_
> (which will deadlock), but at least you cannot get into ABBA situations.
>
> So maybe the right thing to do is to just teach lockdep about "lock
> protection locks". That would have solved the multi-queue issues for
> networking too - all the actual network drivers would still have taken
> just their single queue lock, but the one case that needs to take all of
> them would have taken a separate top-level lock first.
>
> Never mind that the multi-queue locks were always taken in the same order:
> it's never wrong to just have some top-level serialization, and anybody
> who needs to take <n> locks might as well do <n+1>, because they sure as
> hell aren't going to be on _any_ fastpaths.
>
> So the simplest solution really sounds like just teaching lockdep about
> that one special case. It's not "nesting" exactly, although it's obviously
> related to it.

Do as Linus suggested. The lock protection lock is called nest_lock.

Note that we still have the MAX_LOCK_DEPTH (48) limit to consider, so anything that spills over that is still up shit creek.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
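A minimal sketch of the resulting annotation pattern, assuming the spin_lock_nest_lock() wrapper introduced alongside this change (the object and lock names below are hypothetical):

	struct my_obj {
		spinlock_t lock;
		/* ... */
	};

	/* One top-level "lock protection lock" guards taking many same-class locks. */
	static spinlock_t everything_lock;

	static void lock_all_objects(struct my_obj *objs, int nr)
	{
		int i;

		spin_lock(&everything_lock);
		for (i = 0; i < nr; i++)
			/* nesting is made safe by holding everything_lock first */
			spin_lock_nest_lock(&objs[i].lock, &everything_lock);
	}

The matching unlock side would drop every object lock before releasing everything_lock; note that each held lock still counts against MAX_LOCK_DEPTH.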
-
Peter Zijlstra authored
Most of the free-standing lock_acquire() usages look remarkably similar; sweep them into a new helper. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
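An illustrative shape for such a helper, using the post-series lock_acquire() signature (the names and argument values here are assumptions, not necessarily those in the patch):

	#define my_map_acquire(l)	lock_acquire(l, 0, 0, 0, 2, NULL, _THIS_IP_)
	#define my_map_release(l)	lock_release(l, 1, _THIS_IP_)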
-
Dave Jones authored
Before:

struct held_lock {
	u64			prev_chain_key;		/*  0     8 */
	struct lock_class *	class;			/*  8     8 */
	long unsigned int	acquire_ip;		/* 16     8 */
	struct lockdep_map *	instance;		/* 24     8 */
	int			irq_context;		/* 32     4 */
	int			trylock;		/* 36     4 */
	int			read;			/* 40     4 */
	int			check;			/* 44     4 */
	int			hardirqs_off;		/* 48     4 */

	/* size: 56, cachelines: 1 */
	/* padding: 4 */
	/* last cacheline: 56 bytes */
};

After:

struct held_lock {
	u64			prev_chain_key;		/*  0     8 */
	long unsigned int	acquire_ip;		/*  8     8 */
	struct lockdep_map *	instance;		/* 16     8 */
	unsigned int		class_idx:11;		/* 24:21  4 */
	unsigned int		irq_context:2;		/* 24:19  4 */
	unsigned int		trylock:1;		/* 24:18  4 */
	unsigned int		read:2;			/* 24:16  4 */
	unsigned int		check:2;		/* 24:14  4 */
	unsigned int		hardirqs_off:1;		/* 24:13  4 */

	/* size: 32, cachelines: 1 */
	/* padding: 4 */
	/* bit_padding: 13 bits */
	/* last cacheline: 32 bytes */
};

[mingo@elte.hu: shrunk hlock->class too]
[peterz@infradead.org: fixup bit sizes]
Signed-off-by: Dave Jones <davej@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
-
Peter Zijlstra authored
Instead of using a per-rq lock class, use the regular nesting operations.

However, take extra care with double_lock_balance() as it can release the already held rq->lock (and therefore change its nesting class). So what can happen is:

	spin_lock(rq->lock);		// this rq, subclass 0
	double_lock_balance(rq, other_rq);
		// release rq->lock
		// acquire other_rq->lock, subclass 0
		// acquire rq->lock, subclass 1
	spin_unlock(other_rq->lock);

leaving you with rq->lock in subclass 1, so a subsequent double_lock_balance() call can try to nest a subclass 1 lock while already holding a subclass 1 lock.

Fix this by introducing double_unlock_balance(), which releases the other rq's lock but also resets the subclass for this rq's lock to 0.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
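A sketch of the shape of that fix, reusing the lock_set_subclass() helper from this series (details may differ from the merged code):

	static void double_unlock_balance(struct rq *this_rq, struct rq *busiest)
		__releases(busiest->lock)
	{
		spin_unlock(&busiest->lock);
		/* rq->lock may have been re-acquired as subclass 1; make it 0 again */
		lock_set_subclass(&this_rq->lock.dep_map, 0, _RET_IP_);
	}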
-
Peter Zijlstra authored
This can be used to reset a held lock's subclass, for arbitrary-depth iterated data structures such as trees or lists which have per-node locks. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
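Assuming this is the lock_set_subclass() helper, a hypothetical hand-over-hand list walk illustrates the intended use: each new node's lock nests under the one already held, and its subclass is then reset so the next step can nest again.

	struct node {
		spinlock_t lock;
		struct node *next;
	};

	static void walk(struct node *head)
	{
		struct node *cur = head, *next;

		spin_lock(&cur->lock);
		while ((next = cur->next) != NULL) {
			/* nest the next node's lock under the one we hold */
			spin_lock_nested(&next->lock, SINGLE_DEPTH_NESTING);
			spin_unlock(&cur->lock);
			/* only next->lock is held now; reset it to subclass 0 */
			lock_set_subclass(&next->lock.dep_map, 0, _THIS_IP_);
			cur = next;
		}
		spin_unlock(&cur->lock);
	}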
-
- 01 Aug, 2008 2 commits
-
Peter Zijlstra authored
While thinking about David's graph walk lockdep patch it _finally_ dawned on me that there is no reason we have a lock class per cpu ... Sorry for being dense :-/ The below changes the annotation from a lock class per cpu to a single nested lock, as the scheduler never holds more than 2 rq locks at a time anyway. If there were code requiring all rq locks to be held at once this would not work and the original annotation would be the only option, but since that is not the case, this is a much lighter one. Compiles and boots on a 2-way x86_64. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: David Miller <davem@davemloft.net> Signed-off-by: Ingo Molnar <mingo@elte.hu>
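In practice, "a single nested lock" means the second runqueue lock is taken with spin_lock_nested(); a sketch, assuming address ordering to avoid ABBA (not necessarily the exact scheduler code):

	static void double_rq_lock(struct rq *rq1, struct rq *rq2)
	{
		if (rq1 == rq2) {
			spin_lock(&rq1->lock);
		} else if (rq1 < rq2) {
			spin_lock(&rq1->lock);
			spin_lock_nested(&rq2->lock, SINGLE_DEPTH_NESTING);
		} else {
			spin_lock(&rq2->lock);
			spin_lock_nested(&rq1->lock, SINGLE_DEPTH_NESTING);
		}
	}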
-
David Miller authored
Otherwise lock debugging messages on runqueue locks can deadlock the system due to the wakeups performed by printk(). Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 31 Jul, 2008 1 commit
-
David Miller authored
When we traverse the graph, either forwards or backwards, we are interested in whether a certain property exists somewhere in a node reachable in the graph. Therefore it is never necessary to traverse through a node more than once to get a correct answer to the given query.

Take advantage of this property using a global ID counter so that we need not clear all the markers in all the lock_class entries before doing a traversal. A new ID is chosen when we start to traverse, and we continue through a lock_class only if its ID hasn't been marked with the new value yet.

This short-circuiting is essential especially for high CPU count systems. The scheduler has a runqueue per cpu, and needs to take two runqueue locks at a time, which leads to long chains of backwards and forwards subgraphs from these runqueue lock nodes. Without the short-circuit implemented here, a graph traversal on a runqueue lock can take up to (1 << (N - 1)) checks on a system with N cpus. For anything more than 16 cpus or so, lockdep will eventually bring the machine to a complete standstill.

Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
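An illustrative version of the marking scheme (the field and function names here are hypothetical, not lockdep's actual ones):

	static unsigned int traversal_id;	/* bumped once per graph query */

	struct lock_class {
		unsigned int dep_gen_id;	/* last traversal that visited this class */
		/* ... */
	};

	static int already_visited(struct lock_class *class)
	{
		if (class->dep_gen_id == traversal_id)
			return 1;		/* seen in this traversal: short-circuit */
		class->dep_gen_id = traversal_id;
		return 0;
	}

	static void graph_walk_begin(void)
	{
		/* a new ID invalidates every old mark without clearing anything */
		traversal_id++;
	}

Each node is therefore expanded at most once per query, so the walk is bounded by the number of classes and dependency edges rather than growing exponentially with chain length.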
-
- 29 Jul, 2008 6 commits
-
Linus Torvalds authored
-
Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus:
  lguest: turn Waker into a thread, not a process
  lguest: Enlarge virtio rings
  lguest: Use GSO/IFF_VNET_HDR extensions on tun/tap
  lguest: Remove 'network: no dma buffer!' warning
  lguest: Adaptive timeout
  lguest: Tell Guest net not to notify us on every packet xmit
  lguest: net block unneeded receive queue update notifications
  lguest: wrap last_avail accesses.
  lguest: use cpu capability accessors
  lguest: virtio-rng support
  lguest: Support assigning a MAC address
  lguest: Don't leak /dev/zero fd
  lguest: fix verbose printing of device features.
  lguest: fix switcher_page leak on unload
  lguest: Guest int3 fix
  lguest: set max_pfn_mapped, growl loudly at Yinghai Lu
-
Linus Torvalds authored
* 'for-linus' of git://git.o-hand.com/linux-mfd:
  mfd: accept pure device as a parent, not only platform_device
  mfd: add platform_data to mfd_cell
  mfd: Coding style fixes
  mfd: Use to_platform_device instead of container_of
-
Linus Torvalds authored
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jbarnes/pci-2.6: (21 commits)
  x86/PCI: use dev_printk when possible
  PCI: add D3 power state avoidance quirk
  PCI: fix bogus "'device' may be used uninitialized" warning in pci_slot
  PCI: add an option to allow ASPM enabled forcibly
  PCI: disable ASPM on pre-1.1 PCIe devices
  PCI: disable ASPM per ACPI FADT setting
  PCI MSI: Don't disable MSIs if the mask bit isn't supported
  PCI: handle 64-bit resources better on 32-bit machines
  PCI: rewrite PCI BAR reading code
  PCI: document pci_target_state
  PCI hotplug: fix typo in pcie hotplug output
  x86 gart: replace to_pages macro with iommu_num_pages
  x86, AMD IOMMU: replace to_pages macro with iommu_num_pages
  iommu: add iommu_num_pages helper function
  dma-coherent: add documentation to new interfaces
  Cris: convert to using generic dma-coherent mem allocator
  Sh: use generic per-device coherent dma allocator
  ARM: support generic per-device coherent dma mem
  Generic dma-coherent: fix DMA_MEMORY_EXCLUSIVE
  x86: use generic per-device dma coherent allocator
  ...
-
Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-rc-fixes-2.6:
  [SCSI] qla2xxx: fix msleep compile error
-
Linus Torvalds authored
Alexey Dobriyan reported trouble with LTP with the new fast-gup code, and Johannes Weiner debugged it to non-page-aligned addresses, where the new get_user_pages_fast() code would do all the wrong things, including just traversing past the end of the requested area due to 'addr' never matching 'end' exactly. This is not a pretty fix, and we may actually want to move the alignment into generic code, leaving just the core code per-arch, but Alexey verified that the vmsplice01 LTP test doesn't crash with this. Reported-and-tested-by: Alexey Dobriyan <adobriyan@gmail.com> Debugged-by: Johannes Weiner <hannes@saeurebad.de> Cc: Nick Piggin <npiggin@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
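The alignment added at the top of get_user_pages_fast() looks roughly like this (a sketch; variable names may differ from the x86 code):

	unsigned long addr, len, end;

	start &= PAGE_MASK;			/* page-align the requested start */
	addr = start;
	len = (unsigned long)nr_pages << PAGE_SHIFT;
	end = start + len;			/* addr now advances to end exactly */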
-
- 28 Jul, 2008 24 commits
-
Rusty Russell authored
lguest uses a Waker process to break it out of the kernel (i.e. actually running the guest) when a file descriptor needs attention. Changing this from a process to a thread somewhat simplifies things: it can directly access the fd_set of things to watch. More importantly, it means that the Waker can see Guest memory correctly, so /dev/vring file descriptors will work as anticipated (the alternative is to actually mmap MAP_SHARED, but you can't do that with /dev/zero). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
With big packets, 128 entries is a little small.

Guest -> Host 1GB TCP:
Before: 8.43625 seconds  xmit 95640  recv 198266  timeout 49771  usec 1252
After:  8.01099 seconds  xmit 49200  recv 102263  timeout 26014  usec 2118

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
Guest -> Host 1GB TCP:
Before: 20.1974 seconds  xmit 214510  recv 5       timeout 214491  usec 278
After:   8.43625 seconds  xmit 95640  recv 198266  timeout 49771   usec 1252

Host -> Guest 1GB TCP:
Before: 9.98854 seconds  xmit 172166  recv 5344  timeout 172157  usec 251
After:  5.72803 seconds  xmit 244322  recv 9919  timeout 244302  usec 156

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
This warning can happen a lot under load, and it should be warnx, not warn, anyway. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
Since the correct timeout value varies, use a heuristic which adjusts the timeout depending on how many packets we've seen. This gives slightly worse results, but doesn't need tweaking when GSO is introduced.

500 usec:       19.1887  xmit 561141  recv 1  timeout 559657
Dynamic (278):  20.1974  xmit 214510  recv 5  timeout 214491  usec 278

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
virtio_ring has the ability to suppress notifications. This prevents a guest exit for every packet, but we need to set a timer on packet receipt to re-check if there were any remaining packets.

Here are the times for 1G TCP Guest->Host with different timeout settings (it matters because the TCP window doesn't grow big enough to fill the entire buffer):

Timeout value    Seconds    Xmit/Recv/Timeout
None (before)    25.3784    xmit 7750233  recv 1
2500 usec        62.5119    xmit 207020   recv 2  timeout 207020
1000 usec        34.5379    xmit 207003   recv 2  timeout 207003
 750 usec        29.2305    xmit 207002   recv 1  timeout 207002
 500 usec        19.1887    xmit 561141   recv 1  timeout 559657
 250 usec        20.0465    xmit 214128   recv 2  timeout 214110
 100 usec        19.2583    xmit 561621   recv 1  timeout 560153

(Note that these values are sensitive to the GSO patches which come later, and probably other traffic-related variables, so take them with a large grain of salt.)

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
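Roughly, the launcher-side idea is to set the standard virtio_ring suppression flag on the queue and rely on a short timer to catch stragglers; the helper below is a hedged sketch (the function name and timer plumbing are illustrative, only VRING_USED_F_NO_NOTIFY comes from the virtio_ring ABI):

	#include <sys/time.h>
	#include <linux/virtio_ring.h>

	static void quiet_xmit_ring(struct vring *vring, long timeout_usec)
	{
		/* ask the Guest not to exit to us for every packet it queues */
		vring->used->flags |= VRING_USED_F_NO_NOTIFY;

		/* ...but re-check the ring after a short timeout */
		struct itimerval it = {
			.it_value = { .tv_usec = timeout_usec },
		};
		setitimer(ITIMER_REAL, &it, NULL);
	}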
-
Rusty Russell authored
Number of exits transmitting 10GB Guest->Host:

Before: network xmit 7858610  recv 118136
After:  network xmit 7750233  recv 1

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
To simplify the transition to when we publish indices in the ring (and make shuffling my patch queue easier), wrap them in a lg_last_avail() macro. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Andrew Morton authored
To support my little make-x86-bitops-use-proper-typechecking projectlet. Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Andrea Arcangeli <andrea@qumranet.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
This is a simple patch to add support for the virtio "hardware random generator" to lguest. It gets about 1.2 MB/sec reading from /dev/hwrng in the guest. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Mark McLoughlin authored
If you've got a nice DHCP configuration which maps MAC addresses to specific IP addresses, then you're going to want to start your guest with one of those MAC addresses. Also, in Fedora, we have persistent network interface naming based on the MAC address, so with randomly assigned addresses you're soon going to hit eth13. Who knows what will happen then!

Allow assigning a MAC address to the network interface with e.g.

  --tunnet=bridge:eth0:00:FF:95:6B:DA:3D

or:

  --tunnet=192.168.121.1:00:FF:95:6B:DA:3D

which is pretty unintelligible, but ...

(includes Rusty's minor rework)

Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Mark McLoughlin authored
Signed-off-by: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
%02x is more appropriate for bytes than %08x. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Johannes Weiner authored
map_switcher allocates the array, unmap_switcher has to free it accordingly. Signed-off-by: Johannes Weiner <hannes@saeurebad.de> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
Ron Minnich noticed that guest userspace gets a GPF when it tries to int3: we need to copy the privilege level from the guest-supplied IDT to the real IDT. int3 is the only common case where guest userspace expects to invoke an interrupt, so that's the symptom of failing to do this. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
6af61a76 'x86: clean up max_pfn_mapped usage - 32-bit' makes the following comment: XEN PV and lguest may need to assign max_pfn_mapped too. But no CC. Yinghai, wasting fellow developers' time is a VERY bad habit. If you do it again, I will hunt you down and try to extract the three hours of my life I just lost :) Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: Yinghai Lu <yhlu.kernel@gmail.com>
-
Dmitry Baryshkov authored
Signed-off-by: Dmitry Baryshkov <dbaryshkov@gmail.com> Signed-off-by: Samuel Ortiz <sameo@openedhand.com>
-
Andrew Morton authored
arch/x86/mm/pgtable.c: In function 'pgd_mop_up_pmds':
arch/x86/mm/pgtable.c:194: warning: unused variable 'pmd'

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Manuel Lauss authored
The computed color value is never actually written to the hardware colormap register. Signed-off-by: Manuel Lauss <mano@roarinelk.homelinux.net> Cc: Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com> Cc: Munakata Hisao <munakata.hisao@renesas.com> Cc: Paul Mundt <lethal@linux-sh.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Eric Sandeen authored
With SLUB debugging turned on in 2.6.26, I was getting memory corruption when testing eCryptfs. The root cause turned out to be that eCryptfs was doing kmalloc(PAGE_CACHE_SIZE) followed by virt_to_page(), and treating the result as a nice page-aligned chunk of memory. But at least with SLUB debugging on, this is not always true, and the page we get from virt_to_page() does not necessarily match the PAGE_CACHE_SIZE worth of memory we got from kmalloc. My simple testcase was 2 loops doing "rm -f fileX; cp /tmp/fileX ." for 2 different multi-megabyte files. With this change I no longer see the corruption. Signed-off-by: Eric Sandeen <sandeen@redhat.com> Acked-by: Michael Halcrow <mhalcrow@us.ibm.com> Acked-by: Rik van Riel <riel@redhat.com> Cc: <stable@kernel.org> [2.6.25.x, 2.6.26.x] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
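A sketch of the problematic pattern and the safer allocation it is replaced with (hedged; not necessarily the exact hunk that was merged):

	struct page *page;
	char *virt;

	/*
	 * Fragile: with slab debugging the returned buffer need not be
	 * page-aligned, so virt_to_page() can land on a page that does not
	 * contain the whole PAGE_CACHE_SIZE buffer.
	 */
	virt = kmalloc(PAGE_CACHE_SIZE, GFP_KERNEL);
	page = virt_to_page(virt);

	/* Safer: allocate an actual page and derive the address from it. */
	page = alloc_page(GFP_KERNEL);
	virt = page_address(page);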
-
Atsushi Nemoto authored
If CONFIG_GENERIC_GPIO=y && CONFIG_GPIO_SYSFS=n, gpio_export() in asm-generic/gpio.h references -ENOSYS and causes a build error. Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp> Acked-by: David Brownell <dbrownell@users.sourceforge.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Yoichi Yuasa authored
I got a section mismatch message about bio_integrity_init_slab():

WARNING: fs/built-in.o(__ksymtab+0xb60): Section mismatch in reference from the variable __ksymtab_bio_integrity_init_slab to the function .init.text:bio_integrity_init_slab()

The symbol bio_integrity_init_slab is exported and annotated __init. Fix this by removing either the __init annotation or the export. Since it is only called from init_bio(), the EXPORT_SYMBOL() can be removed.

Signed-off-by: Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Hisashi Hifumi authored
When we read some part of a file through the pagecache, if there is a pagecache page for the corresponding index but it is not uptodate, read IO is issued and the page is brought uptodate. This is fine for a pagesize == blocksize environment, but there is room for improvement when pagesize != blocksize. In that case a page can have multiple buffers, and even if the page is not uptodate, some of its buffers can be.

So when all buffers which correspond to the part of the file we want to read are uptodate, use the pagecache and copy the data from it to the user buffer even if the page itself is not uptodate. This can reduce read IO and improve system throughput.

I wrote a benchmark program and got result numbers with it. The benchmark does:

 1: mount and open a test file.
 2: create a 512MB file.
 3: close the file and umount.
 4: mount and open the test file again.
 5: pwrite randomly 300000 times on the test file; offsets are aligned to the IO size (1024 bytes).
 6: measure the time of preading randomly 100000 times on the test file.

The result was:

	2.6.26          330 sec
	2.6.26-patched  226 sec

Arch: i386, Filesystem: ext3, Blocksize: 1024 bytes, Memory: 1GB

On ext3/4, a file is written through buffers/blocks, so random read/write mixed workloads, or random reads after random writes, are optimized with this patch under a pagesize != blocksize environment. This test result showed that.

The benchmark program is as follows:

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mount.h>

#define LEN 1024
#define LOOP 1024*512 /* 512MB */

main(void)
{
	unsigned long i, offset, filesize;
	int fd;
	char buf[LEN];
	time_t t1, t2;

	if (mount("/dev/sda1", "/root/test1/", "ext3", 0, 0) < 0) {
		perror("cannot mount\n");
		exit(1);
	}
	memset(buf, 0, LEN);
	fd = open("/root/test1/testfile", O_CREAT|O_RDWR|O_TRUNC);
	if (fd < 0) {
		perror("cannot open file\n");
		exit(1);
	}
	for (i = 0; i < LOOP; i++)
		write(fd, buf, LEN);
	close(fd);
	if (umount("/root/test1/") < 0) {
		perror("cannot umount\n");
		exit(1);
	}
	if (mount("/dev/sda1", "/root/test1/", "ext3", 0, 0) < 0) {
		perror("cannot mount\n");
		exit(1);
	}
	fd = open("/root/test1/testfile", O_RDWR);
	if (fd < 0) {
		perror("cannot open file\n");
		exit(1);
	}
	filesize = LEN * LOOP;
	for (i = 0; i < 300000; i++) {
		offset = (random() % filesize) & (~(LEN - 1));
		pwrite(fd, buf, LEN, offset);
	}
	printf("start test\n");
	time(&t1);
	for (i = 0; i < 100000; i++) {
		offset = (random() % filesize) & (~(LEN - 1));
		pread(fd, buf, LEN, offset);
	}
	time(&t2);
	printf("%ld sec\n", t2-t1);
	close(fd);
	if (umount("/root/test1/") < 0) {
		perror("cannot umount\n");
		exit(1);
	}
}

Signed-off-by: Hisashi Hifumi <hifumi.hisashi@oss.ntt.co.jp>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Jan Kara <jack@ucw.cz>
Cc: <linux-ext4@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
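In outline, the check walks the buffers backing the requested byte range and only short-circuits the read when every one of them is uptodate; the helper below is an illustrative sketch, not the merged code.

	static int buffers_in_range_uptodate(struct page *page,
					     unsigned from, unsigned count)
	{
		struct buffer_head *bh, *head;
		unsigned block_start = 0, to = from + count;

		if (!page_has_buffers(page))
			return 0;

		bh = head = page_buffers(page);
		do {
			unsigned block_end = block_start + bh->b_size;

			/* any not-uptodate buffer overlapping the range fails */
			if (block_end > from && block_start < to &&
			    !buffer_uptodate(bh))
				return 0;
			block_start = block_end;
			bh = bh->b_this_page;
		} while (bh != head);

		return 1;
	}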
-
Simon Horman authored
Signed-off-by: Simon Horman <horms@verge.net.au> Acked-by: Vivek Goyal <vgoyal@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-