- 16 Sep, 2014 7 commits
-
-
Alexander Gordeev authored
As a result of the deprecation of the MSI-X/MSI enablement functions pci_enable_msix() and pci_enable_msi_block(), all drivers using these two interfaces need to be updated to use the new pci_enable_msi_range() or pci_enable_msi_exact() and pci_enable_msix_range() or pci_enable_msix_exact() interfaces.
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Reviewed-by: Tomas Henzl <thenzl@redhat.com>
Cc: Nagalakshmi Nandigama <Nagalakshmi.Nandigama@lsi.com>
Cc: Sreekanth Reddy <Sreekanth.Reddy@lsi.com>
Cc: support@lsi.com
Cc: DL-MPTFusionLinux@lsi.com
Cc: linux-scsi@vger.kernel.org
Cc: linux-pci@vger.kernel.org
Signed-off-by: Christoph Hellwig <hch@lst.de>
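For illustration, a minimal sketch of the conversion pattern, assuming a driver that previously retried pci_enable_msix() by hand; the function and variable names are placeholders, not the actual mpt2sas/mpt3sas code:

```c
#include <linux/pci.h>

/* Hedged sketch of the old-vs-new pattern; names are illustrative only. */
static int example_enable_msix(struct pci_dev *pdev,
			       struct msix_entry *entries, int nvec)
{
	int rc;

	/*
	 * Old style (deprecated): pci_enable_msix() could return a positive
	 * number of vectors that *would* fit, forcing callers to retry:
	 *
	 *	rc = pci_enable_msix(pdev, entries, nvec);
	 *	if (rc > 0)
	 *		rc = pci_enable_msix(pdev, entries, rc);
	 */

	/* New style: request anything between 1 and nvec vectors at once. */
	rc = pci_enable_msix_range(pdev, entries, 1, nvec);
	if (rc < 0)
		return rc;	/* negative errno: MSI-X not available */

	return rc;		/* number of vectors actually enabled */
}
```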
-
Alexander Gordeev authored
As a result of the deprecation of the MSI-X/MSI enablement functions pci_enable_msix() and pci_enable_msi_block(), all drivers using these two interfaces need to be updated to use the new pci_enable_msi_range() or pci_enable_msi_exact() and pci_enable_msix_range() or pci_enable_msix_exact() interfaces.
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Reviewed-by: Tomas Henzl <thenzl@redhat.com>
Cc: Nagalakshmi Nandigama <Nagalakshmi.Nandigama@lsi.com>
Cc: Sreekanth Reddy <Sreekanth.Reddy@lsi.com>
Cc: support@lsi.com
Cc: DL-MPTFusionLinux@lsi.com
Cc: linux-scsi@vger.kernel.org
Cc: linux-pci@vger.kernel.org
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Alexander Gordeev authored
As a result of the deprecation of the MSI-X/MSI enablement functions pci_enable_msix() and pci_enable_msi_block(), all drivers using these two interfaces need to be updated to use the new pci_enable_msi_range() or pci_enable_msi_exact() and pci_enable_msix_range() or pci_enable_msix_exact() interfaces.
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Acked-by: Kashyap Desai <Kashyap.desai@avagotech.com>
Cc: Neela Syam Kolli <megaraidlinux@lsi.com>
Cc: linux-scsi@vger.kernel.org
Cc: linux-pci@vger.kernel.org
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Alexander Gordeev authored
Currently the driver fails to analyze the MSI-X re-enablement status on resume and always assumes success. This update checks the MSI-X initialization result and fails the resume if MSI-X re-enablement failed.
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Acked-by: Kashyap Desai <Kashyap.desai@avagotech.com>
Cc: Neela Syam Kolli <megaraidlinux@lsi.com>
Cc: linux-scsi@vger.kernel.org
Cc: linux-pci@vger.kernel.org
Signed-off-by: Christoph Hellwig <hch@lst.de>
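A hypothetical sketch of the resume-path change described above; the adapter structure and field names (msix_vectors, msixentry) are illustrative, not the driver's own:

```c
#include <linux/pci.h>

struct example_adapter {		/* illustrative stand-in */
	struct pci_dev *pdev;
	struct msix_entry *msixentry;
	int msix_vectors;
};

static int example_adapter_resume(struct example_adapter *a)
{
	if (a->msix_vectors) {
		int rc = pci_enable_msix_exact(a->pdev, a->msixentry,
					       a->msix_vectors);
		/* Previously the result was ignored and resume "succeeded". */
		if (rc)
			return rc;	/* now the resume fails instead */
	}
	return 0;
}
```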
-
Alexander Gordeev authored
As a result of the deprecation of the MSI-X/MSI enablement functions pci_enable_msix() and pci_enable_msi_block(), all drivers using these two interfaces need to be updated to use the new pci_enable_msi_range() or pci_enable_msi_exact() and pci_enable_msix_range() or pci_enable_msix_exact() interfaces.
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Acked-by: "Stephen M. Cameron" <scameron@beardog.cce.hp.com>
Cc: iss_storagedev@hp.com
Cc: linux-scsi@vger.kernel.org
Cc: linux-pci@vger.kernel.org
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Alexander Gordeev authored
Currently the driver falls back to INTx mode when MSI-X initialization fails. This is suboptimal behaviour for chips that also support MSI. This update changes that behaviour so the driver falls back to MSI mode when MSI-X initialization fails.
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Acked-by: "Stephen M. Cameron" <scameron@beardog.cce.hp.com>
Cc: iss_storagedev@hp.com
Cc: linux-scsi@vger.kernel.org
Cc: linux-pci@vger.kernel.org
Signed-off-by: Christoph Hellwig <hch@lst.de>
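A hedged sketch of the new fallback order (MSI-X, then MSI, then legacy INTx), assuming a controller that exposes both MSI-X and MSI capabilities; this is not the hpsa code itself:

```c
#include <linux/pci.h>

/* Returns the number of MSI-X vectors, 1 for MSI, or 0 for legacy INTx. */
static int example_setup_interrupts(struct pci_dev *pdev,
				    struct msix_entry *entries, int nvec)
{
	int rc;

	rc = pci_enable_msix_range(pdev, entries, 1, nvec);
	if (rc > 0)
		return rc;		/* MSI-X mode */

	if (!pci_enable_msi(pdev))	/* fall back to single-vector MSI */
		return 1;

	return 0;			/* last resort: INTx */
}
```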
-
Christoph Hellwig authored
port_detect is only called from the module_init routine and thus implicitly serialized, so remove the driver lock which was held over potentially sleeping function calls.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Arthur Marsh <arthur.marsh@internode.on.net>
Tested-by: Arthur Marsh <arthur.marsh@internode.on.net>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
-
- 15 Sep, 2014 13 commits
-
-
Subhash Jadavani authored
SCSI well-known logical units generally don't have any SCSI driver associated with them, which means no one will call scsi_autopm_put_device() on these WLUN SCSI devices; as a result the corresponding SCSI device is kept active forever (and hence the LLD can't be suspended either). The same problem can be seen for any other SCSI device representing a normal logical unit whose driver is yet to be loaded. This patch fixes the problem with this approach:
- make the scsi_autopm_put_device() call at the end of scsi_sysfs_add_sdev() to balance out the get earlier in the function.
- let drivers do paired get/put calls in their probe methods.
Signed-off-by: Subhash Jadavani <subhashj@codeaurora.org>
Signed-off-by: Dolev Raviv <draviv@codeaurora.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
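The paired get/put pattern a driver probe method would follow after this change, as a sketch; the probe function name is hypothetical, and it is assumed the scsi_autopm_get_device()/scsi_autopm_put_device() helpers are visible to the driver:

```c
#include <scsi/scsi_device.h>

static int example_probe(struct device *dev)	/* hypothetical driver probe */
{
	struct scsi_device *sdev = to_scsi_device(dev);
	int error;

	/* Balance the put now done at the end of scsi_sysfs_add_sdev(). */
	error = scsi_autopm_get_device(sdev);
	if (error)
		return error;

	/* ... normal probe work while the device is guaranteed active ... */

	scsi_autopm_put_device(sdev);
	return 0;
}
```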
-
Alan Stern authored
The SCSI specification requires that the second Command Data Byte should contain the LUN value in its high-order bits if the recipient device reports SCSI level 2 or below. Nevertheless, some USB mass-storage devices use those bits for other purposes in vendor-specific commands. Currently Linux has no way to send such commands, because the SCSI stack always overwrites the LUN bits.

Testing shows that Windows 7 and XP do not store the LUN bits in the CDB when sending commands to a USB device. This doesn't matter if the device uses the Bulk-Only or UAS transports (which virtually all modern USB mass-storage devices do), as these have a separate mechanism for sending the LUN value.

Therefore this patch introduces a flag in the Scsi_Host structure to inform the SCSI midlayer that a transport does not require the LUN bits to be stored in the CDB, and it makes usb-storage set this flag for all devices using the Bulk-Only transport. (UAS is handled by a separate driver, but it doesn't really matter because no SCSI-2 or lower device is at all likely to use UAS.)

The patch also cleans up the code responsible for storing the LUN value by adding a bitflag to the scsi_device structure. The test for whether to stick the LUN value in the CDB can be made when the device is probed, and stored for future use rather than being made over and over in the fast path.

Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Reported-by: Tiziano Bacocco <tiziano.bacocco@gmail.com>
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
Acked-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
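For context, a simplified sketch of the historic fast-path behaviour that this patch turns into a one-time, per-device decision; this is illustrative, not the exact midlayer code:

```c
#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>

/* SCSI-2 and older targets expect the LUN in bits 5-7 of CDB byte 1. */
static void example_store_lun_in_cdb(struct scsi_cmnd *cmd)
{
	struct scsi_device *sdev = cmd->device;

	if (sdev->scsi_level <= SCSI_2 && sdev->scsi_level != SCSI_UNKNOWN)
		cmd->cmnd[1] = (cmd->cmnd[1] & 0x1f) |
			       ((sdev->lun << 5) & 0xe0);
}
```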
-
Kashyap.Desai@avagotech.com authored
Add a use_cmd_list flag in struct Scsi_Host to request keeping track of all outstanding commands per device. The default behaviour is not to keep track of the cmd_list per sdev, as this may introduce lock contention (the overhead is higher on multi-node NUMA systems); only enable it on the two drivers that need it.
Signed-off-by: Kashyap Desai <kashyap.desai@avagotech.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
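A sketch of the gating this introduces in the command submission path; the cmd_list/list_lock handling shown here is a simplified assumption rather than the exact scsi_lib.c change:

```c
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

static void example_track_command(struct scsi_cmnd *cmd)
{
	struct scsi_device *sdev = cmd->device;

	/* Default: skip the per-device list and its lock entirely. */
	if (!sdev->host->use_cmd_list)
		return;

	spin_lock_irq(&sdev->list_lock);
	list_add_tail(&cmd->list, &sdev->cmd_list);
	spin_unlock_irq(&sdev->list_lock);
}
```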
-
Sujit Reddy Thumma authored
The SYNCHRONIZE_CACHE command is a medium write command and hence can fail when the device is write protected. Avoid sending such commands by making sure that write-cache-enable is disabled even though the device claims to support it.
Signed-off-by: Sujit Reddy Thumma <sthumma@codeaurora.org>
Signed-off-by: Dolev Raviv <draviv@codeaurora.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Venkatesh Srinivas <venkateshs@google.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Randy Dunlap authored
Convert spaces to tabs in kernel-doc notation. Correct duplicated (copy-paste) kernel-doc comments that are incorrect. Fix kernel-doc warning:
Warning(..//drivers/scsi/scsi_error.c:1647): No description found for parameter 'shost'
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reviewed-by: Ewan D. Milne <emilne@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
Looks like the commit broke TCQ for all drivers using block-level tagging; this is the fix.
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Mike Christie authored
This patch fixes a potential buffer overrun in __iscsi_conn_send_pdu. This function is used by iSCSI drivers and userspace to send iSCSI PDUs/commands. For login commands, we have a set buffer size. For all other commands we do not support data buffers. This was reported by Dan Carpenter here:
http://www.spinics.net/lists/linux-scsi/msg66838.html
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Tony Luck authored
Cut & paste typo from the line above.
Reported-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Linus Torvalds authored
Commit 9226b5b4 ("vfs: avoid non-forwarding large load after small store in path lookup") made link_path_walk() always access the "hash_len" field as a single 64-bit entity, in order to avoid mixed size accesses to the members.

However, what I didn't notice was that that effectively means that the whole "struct qstr this" is now basically redundant. We already explicitly track the "const char *name", and if we just use "u64 hash_len" instead of "long len", there is nothing else left of the "struct qstr".

We do end up wanting the "struct qstr" if we have a filesystem with a "d_hash()" function, but that's a rare case, and we might as well then just squirrel away the name and hash_len at that point.

End result: fewer live variables in the loop, a smaller stack frame, and better code generation. And we don't need to pass pointer variables to helper functions any more, because the return value contains all the relevant information. So this removes more lines than it adds, and the source code is clearer too.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Linus Torvalds authored
Pull crypto fixes from Herbert Xu:
 "This fixes the newly added drbg generator so that it actually works on 32-bit machines. Previously the code was only tested on 64-bit and on 32-bit it overflowed and simply doesn't work"

* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: drbg - remove check for uninitialized DRBG handle
  crypto: drbg - backport "fix maximum value checks on 32 bit systems"
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Linus Torvalds authored
Pull vfs fixes from Al Viro:
 "double iput() on failure exit in lustre, racy removal of spliced dentries from ->s_anon in __d_materialise_dentry() plus a bunch of assorted RCU pathwalk fixes"

The RCU pathwalk fixes end up fixing a couple of cases where we incorrectly dropped out of RCU walking, due to incorrect initialization and testing of the sequence locks in some corner cases. Since dropping out of RCU walk mode forces the slow locked accesses, those corner cases slowed down quite dramatically.

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  be careful with nd->inode in path_init() and follow_dotdot_rcu()
  don't bugger nd->seq on set_root_rcu() from follow_dotdot_rcu()
  fix bogus read_seqretry() checks introduced in b37199e6
  move the call of __d_drop(anon) into __d_materialise_unique(dentry, anon)
  [fix] lustre: d_make_root() does iput() on dentry allocation failure
-
Linus Torvalds authored
The performance regression that Josef Bacik reported in the pathname lookup (see commit 99d263d4 "vfs: fix bad hashing of dentries") made me look at performance stability of the dcache code, just to verify that the problem was actually fixed. That turned up a few other problems in this area.

There are a few cases where we exit RCU lookup mode and go to the slow serializing case when we shouldn't; Al has fixed those and they'll come in with the next VFS pull.

But my performance verification also shows that link_path_walk() turns out to have a very unfortunate 32-bit store of the length and hash of the name we look up, followed by a 64-bit read of the combined hash_len field. That screws up the processor store to load forwarding, causing an unnecessary hiccup in this critical routine.

It's caused by the ugly calling convention for the "hash_name()" function, and easily fixed by just making hash_name() fill in the whole 'struct qstr' rather than passing it a pointer to just the hash value. With that, the profile for this function looks much smoother.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
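The underlying trick, shown as a small illustrative helper (the names are made up; the kernel keeps the packed value in struct qstr's hash_len union member): store length and hash as one 64-bit value, so the later 64-bit load is fed by a same-width store and store-to-load forwarding works.

```c
#include <linux/types.h>

/* Illustrative packing helper: length in the high 32 bits, hash in the low. */
static inline u64 example_pack_hash_len(u32 hash, u32 len)
{
	return ((u64)len << 32) | hash;
}

static inline u32 example_unpack_hash(u64 hash_len) { return (u32)hash_len; }
static inline u32 example_unpack_len(u64 hash_len)  { return hash_len >> 32; }
```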
-
- 14 Sep, 2014 12 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux
Linus Torvalds authored
Pull parisc updates from Helge Deller:
 "The most important patch is a new Light Weight Syscall (LWS) for 8, 16, 32 and 64 bit atomic CAS operations which is required in order to be able to implement the atomic gcc builtins on our platform. Other than that, we wire up the seccomp, getrandom and memfd_create syscalls, fix a minor off-by-one bug and a wrong printk string"

* 'parisc-3.17-1' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
  parisc: Implement new LWS CAS supporting 64 bit operations.
  parisc: Wire up seccomp, getrandom and memfd_create syscalls
  parisc: dino: fix %d confusingly prefixed with 0x in format string
  parisc: sys_hpux: NUL terminator is one past the end
-
Al Viro authored
in the former we simply check if dentry is still valid after picking its ->d_inode; in the latter we fetch ->d_inode in the same places where we fetch dentry and its ->d_seq, under the same checks.
Cc: stable@vger.kernel.org # 2.6.38+
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Al Viro authored
return the value instead, and have path_init() do the assignment. Broken by "vfs: Fix absolute RCU path walk failures due to uninitialized seq number", which was Cc'd to stable with 2.6.38+ as destination; this fix should go to the same trees.

To avoid a dummy value being returned in the case when root is already set (it would do no harm, actually, since the only caller that doesn't ignore the return value is guaranteed to have nd->root *not* set, but it's more obvious that way), lift the check into the callers. And do the same to set_root(), to keep them in sync.

Cc: stable@vger.kernel.org # 2.6.38+
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
git://github.com/jonmason/ntb
Linus Torvalds authored
Pull ntb driver bugfixes from Jon Mason:
 "NTB driver fixes for queue spread and buffer alignment. Also, update to MAINTAINERS to reflect new e-mail address"

* tag 'ntb-3.17' of git://github.com/jonmason/ntb:
  ntb: Add alignment check to meet hardware requirement
  MAINTAINERS: update NTB info
  NTB: correct the spread of queues over mw's
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull ARM irq chip fixes from Thomas Gleixner:
 "Another pile of ARM specific irq chip fixlets:
   - off by one bugs in the crossbar driver
   - missing annotations
   - a bunch of "make it compile" updates
  I pulled the lot today from Jason, but it has been in -next for at least a week"

* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip: gic-v3: Declare rdist as __percpu pointer to __iomem pointer
  irqchip: gic: Make gic_default_routable_irq_domain_ops static
  irqchip: exynos-combiner: Fix compilation error on ARM64
  irqchip: crossbar: Off by one bugs in init
  irqchip: gic-v3: Tag all low level accessors __maybe_unused
  irqchip: gic-v3: Only define gic_peek_irq() when building SMP
-
git://git.infradead.org/users/jcooper/linux
Thomas Gleixner authored
irqchip fixes for v3.17 from Jason Cooper:
 - GIC/GICV3: Various fixlets
 - crossbar: Fix off-by-one bug
 - exynos-combiner: Fix arm64 build error
-
Dave Jiang authored
The value programmed into the NTB translate register must be BAR-size aligned. This alignment check makes sure that the DMA memory allocated has the proper alignment. Another requirement for NTB to function properly with a memory window BAR size greater than or equal to 4M is to use the CMA feature in the 3.16 kernel with the appropriate CONFIG_CMA_ALIGNMENT and CONFIG_CMA_SIZE_MBYTES set.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
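A hedged sketch of such an alignment check; the function and parameter names are placeholders, and the driver's actual error handling may differ:

```c
#include <linux/errno.h>
#include <linux/kernel.h>	/* IS_ALIGNED() */
#include <linux/types.h>

/*
 * The address programmed into the translate register must be aligned to
 * the memory-window (BAR) size, or the hardware cannot map it.
 */
static int example_check_mw_alignment(dma_addr_t dma_addr,
				      resource_size_t mw_size)
{
	if (!IS_ALIGNED(dma_addr, mw_size))
		return -ENOMEM;	/* caller should free the buffer and bail */
	return 0;
}
```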
-
Jon Mason authored
Update my contact info to my personal email address and add Dave Jiang.
Signed-off-by: Jon Mason <jon.mason@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
-
Jon Mason authored
The detection of an uneven number of queues on the given memory windows was not correct: mw_num is zero-based, and the mod should be a division in order to spread the queues evenly over the mw's.
Signed-off-by: Jon Mason <jon.mason@intel.com>
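A generic sketch of the intended distribution (not the driver's exact code): with a zero-based mw_num, each window gets the integer-division share of the queues, and the first qp_count % mw_count windows take one extra.

```c
/* How many queue pairs land on memory window mw_num (zero-based)? */
static unsigned int example_qps_for_mw(unsigned int qp_count,
				       unsigned int mw_count,
				       unsigned int mw_num)
{
	unsigned int num_qps = qp_count / mw_count;

	if (mw_num < qp_count % mw_count)
		num_qps++;	/* remainder spread over the first windows */

	return num_qps;
}
```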
-
Al Viro authored
read_seqretry() returns true on mismatch, not on match...
Cc: stable@vger.kernel.org # 3.15+
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
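For reference, the canonical seqlock reader pattern the fix restores; read_seqretry() returning non-zero means the snapshot raced with a writer and must be retried:

```c
#include <linux/seqlock.h>

static unsigned long example_read_stable(seqlock_t *lock,
					 const unsigned long *valp)
{
	unsigned int seq;
	unsigned long val;

	do {
		seq = read_seqbegin(lock);
		val = *valp;
	} while (read_seqretry(lock, seq));	/* retry on mismatch */

	return val;
}
```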
-
Al Viro authored
and lock the right list there
Cc: stable@vger.kernel.org
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Al Viro authored
double-free is a bad thing
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 13 Sep, 2014 8 commits
-
-
Linus Torvalds authored
Merge branches 'locking-urgent-for-linus' and 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull futex and timer fixes from Thomas Gleixner:
 "A one-liner bugfix for the jinxed futex code:
   - Drop the hash bucket lock in the error exit path. I really could slap myself for introducing that bug while fixing all the other horror in that code three months ago ...
  and the timer department is not too proud about the following fixes:
   - Deal with a long standing rounding bug in the timeval to jiffies conversion. It's a real issue and this fix fell through the cracks for quite some time.
   - Another round of alarmtimer fixes. Finally this code gets used more widely and the subtle issues hidden for quite some time are noticed and fixed. Nothing really exciting, just the itty bitty details which bite the serious users here and there"

* 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  futex: Unlock hb->lock in futex_wait_requeue_pi() error path

* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  alarmtimer: Lock k_itimer during timer callback
  alarmtimer: Do not signal SIGEV_NONE timers
  alarmtimer: Return relative times in timer_gettime
  jiffies: Fix timeval conversion to jiffies
-
Guy Martin authored
The current LWS CAS only works correctly for 32-bit operations. The new LWS allows for CAS operations of variable size.
Signed-off-by: Guy Martin <gmsoft@tuxicoman.be>
Cc: <stable@vger.kernel.org> # 3.13+
Signed-off-by: Helge Deller <deller@gmx.de>
-
Linus Torvalds authored
Josef Bacik found a performance regression between 3.2 and 3.10 and narrowed it down to commit bfcfaa77 ("vfs: use 'unsigned long' accesses for dcache name comparison and hashing"). He reports:

 "The test case is essentially
      for (i = 0; i < 1000000; i++)
          mkdir("a$i");
  On xfs on a fio card this goes at about 20k dir/sec with 3.2, and 12k dir/sec with 3.10. This is because we spend waaaaay more time in __d_lookup on 3.10 than in 3.2.
  The new hashing function for strings is suboptimal for < sizeof(unsigned long) string names (and hell even > sizeof(unsigned long) string names that I've tested). I broke out the old hashing function and the new one into a userspace helper to get real numbers and this is what I'm getting:
      Old hash table had 1000000 entries, 0 dupes, 0 max dupes
      New hash table had 12628 entries, 987372 dupes, 900 max dupes
      We had 11400 buckets with a p50 of 30 dupes, p90 of 240 dupes, p99 of 567 dupes for the new hash
  My test does the hash, and then does the d_hash into an integer pointer array the same size as the dentry hash table on my system, and then just increments the value at the address we got to see how many entries we overlap with.
  As you can see the old hash function ended up with all 1 million entries in their own bucket, whereas the new one they are only distributed among ~12.5k buckets, which is why we're using so much more CPU in __d_lookup".

The reason for this hash regression is two-fold:

 - On 64-bit architectures the down-mixing of the original 64-bit word-at-a-time hash into the final 32-bit hash value is very simplistic and suboptimal, and just adds the two 32-bit parts together. In particular, because there is no bit shuffling and the mixing boundary is also a byte boundary, similar character patterns in the low and high word easily end up just canceling each other out.

 - the old byte-at-a-time hash mixed each byte into the final hash as it hashed the path component name, resulting in the low bits of the hash generally being a good source of hash data. That is not true for the word-at-a-time case, and the hash data is distributed among all the bits.

The fix is the same in both cases: do a better job of mixing the bits up and using as much of the hash data as possible. We already have the "hash_32|64()" functions to do that.

Reported-by: Josef Bacik <jbacik@fb.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Chris Mason <clm@fb.com>
Cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
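A hedged sketch of the 64-bit half of the fix: rather than adding the two 32-bit halves of the word-at-a-time hash, fold it down with hash_64() so all bits contribute. The helper name mirrors the one used in fs/namei.c, but the snippet is simplified:

```c
#include <linux/hash.h>

static inline unsigned int example_fold_hash(unsigned long hash)
{
	/* hash_64() mixes the whole 64-bit value down to 32 useful bits. */
	return hash_64(hash, 32);
}
```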
-
Linus Torvalds authored
The hash_64() function historically does the multiply by the GOLDEN_RATIO_PRIME_64 number with explicit shifts and adds, because unlike the 32-bit case, gcc seems unable to turn the constant multiply into the more appropriate shift and adds when required.

However, that means that we generate those shifts and adds even when the architecture has a fast multiplier, and could just do it better in hardware.

Use the now-cleaned-up CONFIG_ARCH_HAS_FAST_MULTIPLIER (together with "is it a 64-bit architecture") to decide whether to use an integer multiply or the explicit sequence of shift/add instructions.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
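Roughly the shape of hash_64() after this change, reconstructed here as a sketch; the exact shift/add decomposition of GOLDEN_RATIO_PRIME_64 may differ in detail from the kernel's:

```c
#include <asm/bitsperlong.h>
#include <linux/types.h>

#define EXAMPLE_GOLDEN_RATIO_PRIME_64 0x9e37fffffffc0001ULL

static inline u64 example_hash_64(u64 val, unsigned int bits)
{
	u64 hash = val;

#if defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64
	/* Fast hardware multiplier: let the compiler emit a single multiply. */
	hash = hash * EXAMPLE_GOLDEN_RATIO_PRIME_64;
#else
	/* Open-coded shift-and-add equivalent of the multiply above. */
	u64 n = hash;
	n <<= 18; hash -= n;
	n <<= 33; hash -= n;
	n <<= 3;  hash += n;
	n <<= 3;  hash -= n;
	n <<= 4;  hash += n;
	n <<= 2;  hash += n;
#endif

	/* The high bits of the product are the best mixed; keep those. */
	return hash >> (64 - bits);
}
```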
-
Linus Torvalds authored
It used to be an ad-hoc hack defined by the x86 version of <asm/bitops.h> that enabled a couple of library routines to know whether an integer multiply is faster than repeated shifts and additions. This just makes it use the real Kconfig system instead, and makes x86 (which was the only architecture that did this) select the option.

NOTE! Even for x86, this really is kind of wrong. If we cared, we would probably not enable this for builds optimized for netburst (P4), where shifts-and-adds are generally faster than multiplies. This patch does *not* change that kind of logic, though, it is purely a syntactic change with no code changes.

This was triggered by the fact that we have other places that really want to know "do I want to expand multiplies by constants by hand or not", particularly the hash generation code.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
Linus Torvalds authored
Pull device mapper fix from Mike Snitzer:
 "Fix a race in the DM cache target that caused dirty blocks to be marked as clean. This could cause no writeback to occur or spurious dirty block counts"

* tag 'dm-3.17-fix2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
  dm cache: fix race causing dirty blocks to be marked as clean
-
git://git.kernel.dk/linux-block
Linus Torvalds authored
Pull block fixes from Jens Axboe:
 "A small collection of fixes for the current rc series. This contains:
   - Two small blk-mq patches from Rob Elliott, cleaning up error case at init time.
   - A fix from Ming Lei, fixing SG merging for blk-mq where QUEUE_FLAG_SG_NO_MERGE is the default.
   - A dev_t minor lifetime fix from Keith, fixing an issue where a minor might be reused before all references to it were gone.
   - Fix from Alan Stern where an unbalanced queue bypass caused SCSI some headaches when it does a series of add/del on devices without fully registering the queue.
   - A fix from me for improving the scaling of tag depth in blk-mq if we are short on memory"

* 'for-linus' of git://git.kernel.dk/linux-block:
  blk-mq: scale depth and rq map appropriate if low on memory
  Block: fix unbalanced bypass-disable in blk_register_queue
  block: Fix dev_t minor allocation lifetime
  blk-mq: cleanup after blk_mq_init_rq_map failures
  blk-mq: pass along blk_mq_alloc_tag_set return values
  blk-merge: fix blk_recount_segments
-
Linus Torvalds authored
Merge tag 'stable/for-linus-3.17-b-rc4-arm-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip

Pull Xen ARM bugfix from Stefano Stabellini:
 "The patches fix the "xen_add_mach_to_phys_entry: cannot add" bug that has been affecting xen on arm and arm64 guests since 3.16. They require a few hypervisor side changes that just went in xen-unstable. A couple of days ago David sent out a pull request with a few other Xen fixes (it is already in master). Sorry we didn't synchronize better among ourselves"

* tag 'stable/for-linus-3.17-b-rc4-arm-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
  xen/arm: remove mach_to_phys rbtree
  xen/arm: reimplement xen_dma_unmap_page & friends
  xen/arm: introduce XENFEAT_grant_map_identity
-