- 28 Jan, 2014 1 commit
-
Michael Ellerman authored
This commit adds the architecture support required to enable the optimised implementation of lockrefs. That's as simple as defining arch_spin_value_unlocked() and selecting the Kconfig option. We also define cmpxchg64_relaxed(), because the lockref code does not need the cmpxchg to have barrier semantics. Using Linus' test case[1] on one system I see a 4x improvement for the basic enablement, and a further 1.3x for cmpxchg64_relaxed(), for a total of 5.3x vs the baseline. On another system I see more like 2x improvement.
[1]: http://marc.info/?l=linux-fsdevel&m=137782380714721&w=4
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
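In sketch form, the enablement amounts to something like the following (the Kconfig select is ARCH_USE_CMPXCHG_LOCKREF; the exact powerpc definitions may differ slightly from this illustration):

  /* Report "unlocked" from a plain spinlock value, without touching memory. */
  static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
  {
          return lock.slock == 0;
  }

  /* lockref only needs the cmpxchg to be atomic, not to order surrounding
   * accesses, so a barrier-free (local) variant suffices. */
  #define cmpxchg64_relaxed(ptr, o, n)            \
  ({                                              \
          BUILD_BUG_ON(sizeof(*(ptr)) != 8);      \
          cmpxchg_local((ptr), (o), (n));         \
  })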
-
- 15 Jan, 2014 24 commits
-
Vasant Hegde authored
It's possible that OPAL may be writing to host memory during kexec (such as in a dump-retrieve scenario). In this situation we might end up corrupting host memory. This patch makes an OPAL sync call to make sure OPAL stops writing to host memory before kexec'ing. Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
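A sketch of the shutdown-path sync this describes, using the standard OPAL busy-wait convention (the exact placement and loop details are assumptions):

  /* Sketch: sync with OPAL before kexec so firmware stops writing to host
   * memory (e.g. an in-flight dump retrieval). Assumed shape only. */
  void opal_shutdown(void)
  {
          long rc = OPAL_BUSY;

          while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
                  rc = opal_sync_host_reboot();
                  if (rc == OPAL_BUSY_EVENT)
                          opal_poll_events(NULL); /* let OPAL make progress */
                  else if (rc == OPAL_BUSY)
                          mdelay(10);
          }
  }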
-
Gavin Shan authored
Sometimes, especially in the scenario of loading another kernel with kdump, we get an EEH error on a non-existing PE. That means the PEEV / PEST in the corresponding PHB would be messy and we can't handle that case. The patch escalates the error to a fenced PHB so that the PHB can be reset in order to recover the errors on non-existing PEs. Reported-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Tested-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
For one PCI-error-relevant OPAL event, we can possibly have multiple EEH errors; for example, multiple frozen PEs detected on different PHBs. Unfortunately, we didn't cover that case. The patch enumerates the return values from eeh_ops::next_error() and changes eeh_handle_special_event() and eeh_ops::next_error() to handle all existing EEH errors. As Ben pointed out, we needn't use list_for_each_entry_safe() since we are not deleting any PHB from the hose_list, and the EEH serialized lock should be held while purging EEH events. The patch covers those suggestions as well. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Benjamin Herrenschmidt authored
Freescale updates from Scott: << Highlights include 32-bit booke relocatable support, e6500 hardware tablewalk support, various e500 SPE fixes, some new/revived boards, and e6500 deeper idle and altivec powerdown modes. >>
-
Paul Mackerras authored
Currently, if a process starts a transaction and then takes an exception because the FPU, VMX or VSX unit is unavailable to it, we end up corrupting any FP/VMX/VSX state that was valid before the interrupt. For example, if the process starts a transaction with the FPU available to it but VMX unavailable, and then does a VMX instruction inside the transaction, the FP state gets corrupted. Loading up the desired state generally involves doing a reclaim and a recheckpoint. To avoid corrupting already-valid state, we have to be careful not to reload that state from the thread_struct between the reclaim and the recheckpoint (since the thread_struct values are stale by now), and we have to reload that state from the transact_fp/vr arrays after the recheckpoint to get back the current transactional values saved there by the reclaim. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
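The required ordering, sketched (helper names are from the powerpc TM code; arguments and surrounding context are elided, so treat this as pseudocode):

  tm_reclaim_thread(&thr, ti, cause);   /* transactional FP/VMX values are
                                         * now saved in transact_fp/vr */
  /* Between reclaim and recheckpoint, do NOT reload FP/VMX state from
   * thread_struct -- those values are stale by this point. */
  tm_recheckpoint(&thr, orig_msr);      /* checkpointed state back in CPU */
  /* Only now reload the current transactional values the reclaim saved: */
  load_fp_state(&thr.transact_fp);
  load_vr_state(&thr.transact_vr);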
-
Paul Mackerras authored
Currently, when we have a process using the transactional memory facilities on POWER8 (that is, the processor is in transactional or suspended state), and the process enters the kernel and the kernel then uses the floating-point or vector (VMX/Altivec) facility, we end up corrupting the user-visible FP/VMX/VSX state. This happens, for example, if a page fault causes a copy-on-write operation, because the copy_page function will use VMX to do the copy on POWER8. The test program below demonstrates the bug.

The bug happens because when FP/VMX state for a transactional process is stored in the thread_struct, we store the checkpointed state in .fp_state/.vr_state and the transactional (current) state in .transact_fp/.transact_vr. However, when the kernel wants to use FP/VMX, it calls enable_kernel_fp() or enable_kernel_altivec(), which saves the current state in .fp_state/.vr_state. Furthermore, when we return to the user process we return with FP/VMX/VSX disabled. The next time the process uses FP/VMX/VSX, we don't know which set of state (the current register values, .fp_state/.vr_state, or .transact_fp/.transact_vr) we should be using, since we have no way to tell if we are still in the same transaction, and if not, whether the previous transaction succeeded or failed.

Thus it is necessary to strictly adhere to the rule that if FP has been enabled at any point in a transaction, we must keep FP enabled for the user process with the current transactional state in the FP registers, until we detect that it is no longer in a transaction. Similarly for VMX; once enabled it must stay enabled until the process is no longer transactional.

In order to keep this rule, we add a new thread_info flag which we test when returning from the kernel to userspace, called TIF_RESTORE_TM. This flag indicates that there is FP/VMX/VSX state to be restored before entering userspace, and when it is set the .tm_orig_msr field in the thread_struct indicates what state needs to be restored. The restoration is done by restore_tm_state(). The TIF_RESTORE_TM bit is set by new giveup_fpu/altivec_maybe_transactional helpers, which are called from enable_kernel_fp/altivec, giveup_vsx, and flush_fp/altivec_to_thread instead of giveup_fpu/altivec.

The other thing to be done is to get the transactional FP/VMX/VSX state from .fp_state/.vr_state when doing reclaim, if that state has been saved there by giveup_fpu/altivec_maybe_transactional. Having done this, we set the FP/VMX bit in the thread's MSR after reclaim to indicate that that part of the state is now valid (having been reclaimed from the processor's checkpointed state).

Finally, in the signal handling code, we move the clearing of the transactional state bits in the thread's MSR a bit earlier, before calling flush_fp_to_thread(), so that we don't unnecessarily set the TIF_RESTORE_TM bit.

This is the test program:

  /* Michael Neuling 4/12/2013
   *
   * See if the altivec state is leaked out of an aborted transaction due to
   * kernel vmx copy loops.
   *
   *   gcc -m64 htm_vmxcopy.c -o htm_vmxcopy
   */

  /* We don't use all of these, but for reference: */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <stdint.h>
  #include <unistd.h>
  #include <assert.h>
  #include <fcntl.h>
  #include <sys/mman.h>
  #include <sys/types.h>

  /* TM opcodes encoded directly; values assumed from the powerpc selftests: */
  #define TBEGIN   ".long 0x7C00051D ;"
  #define TEND     ".long 0x7C00055D ;"
  #define TSUSPEND ".long 0x7C0005DD ;"
  #define TRESUME  ".long 0x7C2005DD ;"
  #define TABORT   ".long 0x7C00071D ;"

  int main(int argc, char *argv[])
  {
          long double vecin = 1.3;
          long double vecout;
          unsigned long pgsize = getpagesize();
          int i;
          int fd;
          int size = pgsize*16;
          char tmpfile[] = "/tmp/page_faultXXXXXX";
          char buf[pgsize];
          char *a;
          uint64_t aborted = 0;

          fd = mkstemp(tmpfile);
          assert(fd >= 0);

          memset(buf, 0, pgsize);
          for (i = 0; i < size; i += pgsize)
                  assert(write(fd, buf, pgsize) == pgsize);

          unlink(tmpfile);

          a = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
          assert(a != MAP_FAILED);

          asm __volatile__(
                  "lxvd2x 40,0,%[vecinptr] ; " // set 40 to initial value
                  TBEGIN
                  "beq 3f ;"
                  TSUSPEND
                  "xxlxor 40,40,40 ; "         // set 40 to 0
                  "std 5, 0(%[map]) ;"         // cause kernel vmx copy page
                  TABORT
                  TRESUME
                  TEND
                  "li %[res], 0 ;"
                  "b 5f ;"
                  "3: ;"                       // Abort handler
                  "li %[res], 1 ;"
                  "5: ;"
                  "stxvd2x 40,0,%[vecoutptr] ; "
                  : [res]"=r"(aborted)
                  : [vecinptr]"r"(&vecin),
                    [vecoutptr]"r"(&vecout),
                    [map]"r"(a)
                  : "memory", "r0", "r3", "r4", "r5", "r6", "r7");

          if (aborted && (vecin != vecout)) {
                  printf("FAILED: vector state leaked on abort %f != %f\n",
                         (double)vecin, (double)vecout);
                  exit(1);
          }

          munmap(a, size);
          close(fd);

          printf("PASSED!\n");
          return 0;
  }

Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Paul Mackerras authored
TIF_PERFMON_WORK and TIF_PERFMON_CTXSW are completely unused. They appear to be related to the old perfmon2 code, which has been superseded by the perf_event infrastructure. This removes their definitions so that the bits can be used for other purposes. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Benjamin Herrenschmidt authored
If we set irq_work on a processor and then, immediately afterward, before the irq work has a chance to be processed, we change the decrementer value, we can seriously delay the handling of that irq_work. Fix it by checking for pending irq work in a few places: before changing the decrementer in decrementer_set_next_event(), and after changing it both in that function and in timer_interrupt(). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
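Sketched against the description above (illustrative, not the literal patch):

  static int decrementer_set_next_event(unsigned long evt,
                                        struct clock_event_device *dev)
  {
          /* Don't adjust the decrementer while irq work is pending */
          if (test_irq_work_pending())
                  return 0;

          __get_cpu_var(decrementers_next_tb) = get_tb_or_rtc() + evt;
          set_dec(evt);

          /* We may have raced with new irq work being queued */
          if (test_irq_work_pending())
                  set_dec(1);     /* fire almost immediately */

          return 0;
  }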
-
Mahesh Salgaonkar authored
Hugh Dickins reported an issue that b5ff4211 "powerpc/book3s: Queue up and process delayed MCE events" breaks the PowerMac G5 boot. This patch fixes it by moving the MCE event processing away from syscall exit, which was the wrong place to do it in the first place, and using the irq work framework to delay processing of the MCE event. Reported-by: Hugh Dickins <hughd@google.com> Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
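The irq_work deferral shape, sketched (the MCE-side handler and queueing details are illustrative):

  #include <linux/irq_work.h>

  static void machine_check_process_queued_event(struct irq_work *work);

  static struct irq_work mce_event_process_work = {
          .func = machine_check_process_queued_event,
  };

  void machine_check_queue_event(void)
  {
          /* ... save the MCE details to a per-cpu queue ... */
          irq_work_queue(&mce_event_process_work); /* processed later, at a
                                                    * safe interrupt level */
  }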
-
Preeti U Murthy authored
Commit fbd7740f ("powerpc: Simplify pSeries idle loop") switched pseries cpu idle handling from complete idle loops to ppc_md.powersave functions. Prior to this switch, ppc64_runlatch_off() had to be called in each of the idle routines. But after the switch, this call is handled in arch_cpu_idle(), just before the call to ppc_md.powersave, where platform-specific idle routines are called. As a consequence, the call to ppc64_runlatch_off() got duplicated in the arch_cpu_idle() routine as well as in some of the idle routines in pseries, and commit fbd7740f failed to get rid of these redundant calls. These calls were carried over through subsequent enhancements to the pseries cpuidle routines. Although multiple calls to ppc64_runlatch_off() are harmless, there is still some overhead due to them. Besides that, these calls could also make way for a misunderstanding that it is *necessary* to call ppc64_runlatch_off() multiple times, when that is not the case. Hence this patch takes care of eliminating this redundancy. Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Geert Uytterhoeven authored
add_system_ram_resources() is a subsys_initcall. Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Olof Johansson authored
This makes ppc64_defconfig bootable without initrd on pasemi systems, most of which have MV SATA controllers. Some have SIL24, but that driver is already enabled. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Vasant Hegde authored
At present we assume the candidate image is <= 256MB. But on P8, the candidate image size can go up to 750MB. Hence we increase the maximum candidate image size to 1GB. Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Srivatsa S. Bhat authored
There have been some weird bugs in the past where the kernel tried to associate threads of the same core to different NUMA nodes, and things went haywire after that point (as expected). But unfortunately, root-causing such issues has been quite challenging, due to the lack of appropriate debug checks in the kernel. These bugs usually lead to some odd soft-lockups in the scheduler's build-sched-domain code in the CPU hotplug path, which makes it very hard to trace them back to the incorrect cpu-to-node mappings. So add appropriate debug checks to catch such invalid cpu-to-node mappings as early as possible. Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
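A hypothetical example of such a debug check (the helper name and message are illustrative, not the literal patch):

  /* Hypothetical check: all threads of a core must map to one node. */
  static void check_sibling_node_mapping(int cpu)
  {
          int first = cpu_first_thread_sibling(cpu);

          WARN(numa_cpu_lookup_table[cpu] != numa_cpu_lookup_table[first],
               "CPU %d on node %d, but sibling CPU %d is on node %d\n",
               cpu, numa_cpu_lookup_table[cpu],
               first, numa_cpu_lookup_table[first]);
  }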
-
Srivatsa S. Bhat authored
On POWER platforms, the hypervisor can notify the guest kernel about dynamic changes in the cpu-numa associativity (VPHN topology update). Hence the cpu-to-node mappings that we got from the firmware during boot may no longer be valid after such updates. This is handled using the arch_update_cpu_topology() hook in the scheduler, and the sched-domains are rebuilt according to the new mappings. But unfortunately, at the moment, CPU hotplug ignores these updated mappings and instead queries the firmware for the cpu-to-numa relationships and uses them during CPU online. So the kernel can end up assigning wrong NUMA nodes to CPUs during subsequent CPU hotplug online operations (after booting). Further, a particularly problematic scenario can result from this bug: On POWER platforms, the SMT mode can be switched between 1, 2, 4 (and even 8) threads per core. The switch to Single-Threaded (ST) mode is performed by offlining all except the first CPU thread in each core. Switching back to SMT mode involves onlining those other threads back, in each core. Now consider this scenario:

1. During boot, the kernel gets the cpu-to-node mappings from the firmware and assigns the CPUs to NUMA nodes appropriately, during CPU online.

2. Later on, the hypervisor updates the cpu-to-node mappings dynamically and communicates this update to the kernel. The kernel in turn updates its cpu-to-node associations and rebuilds its sched domains. Everything is fine so far.

3. Now, the user switches the machine from SMT to ST mode (say, by running ppc64_cpu --smt=1). This involves offlining all except 1 thread in each core.

4. The user then tries to switch back from ST to SMT mode (say, by running ppc64_cpu --smt=4), and this involves onlining those threads back. Since CPU hotplug ignores the new mappings, it queries the firmware and tries to associate the newly onlined sibling threads to the old NUMA nodes. This results in sibling threads within the same core getting associated with different NUMA nodes, which is incorrect.

The scheduler's build-sched-domains code gets thoroughly confused with this and enters an infinite loop and causes soft-lockups, as explained in detail in commit 3be7db6a (powerpc: VPHN topology change updates all siblings). So to fix this, use the numa_cpu_lookup_table to remember the updated cpu-to-node mappings, and use them during CPU hotplug online operations. Further, we also need to ensure that all threads in a core are assigned to a common NUMA node, irrespective of whether all those threads were online during the topology update. To achieve this, we take care not to use cpu_sibling_mask() since it is not hotplug invariant. Instead, we use cpu_first_sibling_thread() and set up the mappings manually using the 'threads_per_core' value for that particular platform, as shown in the sketch below. This helps us ensure that we don't hit this bug with any combination of CPU hotplug and SMT mode switching. Cc: stable@vger.kernel.org Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
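The core-wide update could look roughly like this (a sketch; the function name is hypothetical, the helpers come from asm/cputhreads.h):

  /* Sketch: update the lookup table for every thread of the core, online
   * or not, so later hotplug onlines see a consistent node. */
  static void update_core_numa_mapping(int cpu, int new_nid)
  {
          int first = cpu_first_thread_sibling(cpu);
          int i;

          for (i = first; i < first + threads_per_core; i++)
                  numa_cpu_lookup_table[i] = new_nid;
  }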
-
Gavin Shan authored
Some devices, for example a PCI root port, don't have an IOMMU table and group. We needn't detach them from their IOMMU group. Otherwise, it potentially incurs a kernel crash because of referring to a NULL IOMMU group, as the following backtrace indicates:

  .iommu_group_remove_device+0x74/0x1b0
  .iommu_bus_notifier+0x94/0xb4
  .notifier_call_chain+0x78/0xe8
  .__blocking_notifier_call_chain+0x7c/0xbc
  .blocking_notifier_call_chain+0x38/0x48
  .device_del+0x50/0x234
  .pci_remove_bus_device+0x88/0x138
  .pci_stop_and_remove_bus_device+0x2c/0x40
  .pcibios_remove_pci_devices+0xcc/0xfc
  .pcibios_remove_pci_devices+0x3c/0xfc

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
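The guard is essentially a NULL check in the bus notifier (sketch; the surrounding notifier code is elided):

  case BUS_NOTIFY_DEL_DEVICE:
          /* Devices such as PCI root ports have no IOMMU table/group;
           * don't try to detach them, or we dereference a NULL group. */
          if (dev->iommu_group)
                  iommu_group_remove_device(dev);
          break;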
-
Gavin Shan authored
When an EEH error comes to one specific PCI device before its driver is loaded, we will apply hotplug to recover the error. During the plug time, the PCI device will be probed and its driver loaded. Then we wrongly call the error handlers if the driver supports EEH explicitly. The patch fixes this by introducing the flag EEH_DEV_NO_HANDLER and setting it before we remove the PCI device. In turn, we can avoid wrongly calling the error handlers of the PCI device after its driver is loaded. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The patch implements the EEH operation backend restore_config() for PowerNV platform. That relies on OPAL API opal_pci_reinit() where we reinitialize the error reporting properly after PE or PHB reset. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
After a reset on the specific PE or PHB, we never configure AER correctly on the PowerNV platform. We needn't care about it on the pSeries platform. The patch introduces the additional EEH operation eeh_ops::restore_config() so that we have a chance to configure AER correctly for the PowerNV platform. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
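Structurally this is one new callback in the eeh_ops table (sketch; the exact signature is an assumption):

  struct eeh_ops {
          /* ... existing callbacks ... */
          int (*restore_config)(struct device_node *dn); /* assumed signature:
                                                          * reconfigure error
                                                          * reporting (e.g. AER)
                                                          * after PE/PHB reset */
  };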
-
Gavin Shan authored
We don't have IO ports on PHB3 and the assignment of variable "iomap_off" on PHB3 is meaningless. The patch just removes the unnecessary assignment to the variable. The code change should have been part of commit c35d2a8c ("powerpc/powernv: Needn't IO segment map for PHB3"). Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Nishanth Aravamudan authored
After reverting 25ebc45b ("powerpc/pseries/iommu: remove default window before attempting DDW manipulation"), we no longer remove the base window in enable_ddw. Therefore, we no longer need to reset the DMA window state in find_existing_ddw_windows(). We can instead go back to what was done before, which simply reuses the previous configuration, if any. Further, this removes the final caller of the reset-pe-dma-windows call, so remove those functions. This fixes an EEH on kdump with the ipr driver. The EEH occurs, because the initcall removes the DDW configuration (64-bit DMA window), but doesn't ensure the ops are via the IOMMU -- a DMA operation occurs during probe (still investigating this) and we EEH. This reverts commit 14b6f00f. Signed-off-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Nishanth Aravamudan authored
Ben rightfully pointed out that there is a race in the "newer" DDW code. Presuming we are running on recent enough firmware that supports the "reset" DDW manipulation call, we currently always remove the base 32-bit DMA window in order to maximize the resources for Phyp when creating the 64-bit window. However, this can be problematic for the case where multiple functions are in the same PE (partitionable endpoint), where some functions might be 32-bit DMA only. All of a sudden, the only functional DMA window for such functions is gone. We will have serious errors in such situations. The best solution is simply to revert the extension to the DDW code where we ever remove the base DMA window. This reverts commit 25ebc45b. Signed-off-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Paul Gortmaker authored
None of these files are actually using any __init type directives and hence don't need to include <linux/init.h>. Most are just a left over from __devinit and __cpuinit removal, or simply due to code getting copied from one driver to the next. The one instance where we add an include for init.h covers off a case where that file was implicitly getting it from another header which itself didn't need it. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Andreas Schwab authored
GCC 4.8 now generates out-of-line vr save/restore functions when optimizing for size. They are needed for the raid6 altivec support. Signed-off-by: Andreas Schwab <schwab@linux-m68k.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 10 Jan, 2014 7 commits
-
Shengzhou Liu authored
There are many PCI compatible strings with version numbers on existing platforms. To stop putting version numbers in the device tree going forward, we add a generic compatible 'fsl,qoriq-pcie'. The version number is readable directly from a register. Signed-off-by: Shengzhou Liu <Shengzhou.Liu@freescale.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
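On the driver side, matching then needs only the generic string (an illustrative of_device_id table; the table name and the particular versioned entries shown are assumptions):

  static const struct of_device_id fsl_pci_ids[] = {
          { .compatible = "fsl,qoriq-pcie", },    /* generic, version-less */
          /* older, versioned strings kept for existing device trees: */
          { .compatible = "fsl,qoriq-pcie-v2.2", },
          { .compatible = "fsl,qoriq-pcie-v2.4", },
          { .compatible = "fsl,qoriq-pcie-v3.0", },
          {}
  };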
-
Shengzhou Liu authored
Add elo3-dma-2.dtsi to support the third DMA controller. This is used on T2080, T4240, B4860, etc. FSL MPIC v4.3 adds a new discontiguous address range for internal interrupts; e.g., internal interrupt 0 is at offset 0x200, and thus its interrupt number in the device tree is 0x200 >> 5 = 16. DMA controller 3 channel 0 internal interrupt 240 is at offset 0x3a00, and thus the corresponding interrupt number is 0x3a00 >> 5 = 464; it's similar for the other 7 interrupt numbers of the DMA 3 channels. Signed-off-by: Shengzhou Liu <Shengzhou.Liu@freescale.com> Signed-off-by: Hongbo Zhang <hongbo.zhang@freescale.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Diana Craciun authored
On Freescale e6500 cores, EPCR[DGTMI] controls whether guest supervisor state can execute TLB management instructions. If EPCR[DGTMI]=0, tlbwe and tlbilx are allowed to execute normally in the guest state. A hypervisor may choose to virtualize TLB1, and for this purpose it may use IPROT to protect the entries from being invalidated by the guest. However, because tlbwe and tlbilx execution in the guest state is controlled by the same bit, it is not possible to have a scenario where tlbwe is allowed to be executed in guest state while tlbilx traps. When guest TLB management instructions are allowed to be executed in guest state, the guest cannot use tlbilx to invalidate TLB1 guest entries. Linux uses tlbilx in the boot code to invalidate the temporary entries it creates when initializing the MMU. The patch replaces the use of tlbilx in the initialization code with tlbwe with the VALID bit cleared. Linux also uses tlbilx in other contexts (like huge pages or indirect entries), but removing tlbilx from the initialization code makes it possible to run scenarios under a hypervisor which do not use huge pages or indirect entries. Signed-off-by: Diana Craciun <Diana.Craciun@freescale.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Scott Wood authored
It was branching to the cleanup part of the non-bolted handler, which would have been bad if there were any chips with tlbsrx. that use the bolted handler. Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Paul Gortmaker authored
As of commit b81f18e5 ("powerpc/boot: Only build board support files when required.") the two defconfigs ep88xc_defconfig and storcenter_defconfig would fail final link as follows:

  WRAP    arch/powerpc/boot/dtbImage.ep88xc
  arch/powerpc/boot/wrapper.a(mpc8xx.o): In function `mpc885_get_clock':
  arch/powerpc/boot/mpc8xx.c:30: undefined reference to `fsl_get_immr'
  make[1]: *** [arch/powerpc/boot/dtbImage.ep88xc] Error 1

...and...

  WRAP    arch/powerpc/boot/cuImage.storcenter
  arch/powerpc/boot/cuboot-pq2.o: In function `pq2_platform_fixups':
  cuboot-pq2.c:(.text+0x324): undefined reference to `fsl_get_immr'
  make[1]: *** [arch/powerpc/boot/cuImage.storcenter] Error 1

We need the fsl-soc board files built for these two platforms. Cc: Tony Breeds <tony@bakeyournoodle.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Fixes: b81f18e5 ("powerpc/boot: Only build board support files when required.") Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Shaohui Xie authored
On P1020, P1021, P1022, and P1023, eLBC event interrupts are routed to internal interrupt 3 while eLBC error interrupts are routed to internal interrupt 0. We need to call request_irq for each. Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com> Signed-off-by: Wang Dongsheng <dongsheng.wang@freescale.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org> [scottwood@freescale.com: reworded commit message and fixed author] Signed-off-by: Scott Wood <scottwood@freescale.com>
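Sketched probe-path handling (field names such as irq2 follow the fsl_lbc driver shape but are assumptions):

  /* Sketch: request the event interrupt and, where present, the
   * separate error interrupt. */
  ret = request_irq(fsl_lbc_ctrl_dev->irq, fsl_lbc_ctrl_irq, 0,
                    "fsl-lbc", fsl_lbc_ctrl_dev);
  if (ret)
          goto err;

  /* P1020/P1021/P1022/P1023 route eLBC errors to internal IRQ0,
   * separate from the event interrupt on internal IRQ3: */
  if (fsl_lbc_ctrl_dev->irq2) {
          ret = request_irq(fsl_lbc_ctrl_dev->irq2, fsl_lbc_ctrl_irq, 0,
                            "fsl-lbc-err", fsl_lbc_ctrl_dev);
          if (ret)
                  goto err;
  }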
-
Wang Dongsheng authored
On P1020, P1021, P1022, and P1023, when the lbc gets an error, the error interrupt is triggered. The corresponding interrupt is internal IRQ0, so the system has to process the lbc IRQ0 interrupt. The corresponding lbc general interrupt is internal IRQ3. Signed-off-by: Wang Dongsheng <dongsheng.wang@freescale.com> [scottwood@freescale.com: bracketed individual list elements] Signed-off-by: Scott Wood <scottwood@freescale.com>
-
- 09 Jan, 2014 8 commits
-
Stephen Chivers authored
Add support for the Motorola/Emerson MVME5100 Single Board Computer. The MVME5100 is a 6U form factor VME64 computer with:

- A single MPC7410 or MPC750 CPU
- A HAWK Processor Host Bridge (CPU to PCI) and MultiProcessor Interrupt Controller (MPIC)
- Up to 500Mb of onboard memory
- A M48T37 Real Time Clock (RTC) and Non-Volatile Memory chip
- Two 16550 compatible UARTS
- Two Intel E100 Fast Ethernets
- Two PCI Mezzanine Card (PMC) Slots
- PPCBug Firmware

The HAWK PHB/MPIC is compatible with the MPC10x devices. There is no onboard disk support; this is usually provided by installing a PMC in the first PMC slot. This patch revives the board support; it was present in early 2.6 series kernels. The board support in those days was by Matt Porter of MontaVista Software. CSC Australia has around 31 of these boards in service. The kernel in use for the boards is based on 2.6.31. The boards are operated without disks from a file server. This patch is based on linux-3.13-rc2 and has been boot tested. Only boards with 512 Mb of memory are known to work. Signed-off-by: Stephen Chivers <schivers@csc.com> Tested-by: Alessio Igor Bogani <alessio.bogani@elettra.eu> Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Scott Wood authored
This keeps usage coordinated for hugetlb and indirect entries, which should make entry selection more predictable and probably improve overall performance when mixing the two. Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Scott Wood authored
There are a few things that make the existing hw tablewalk handlers unsuitable for e6500:

- Indirect entries go in TLB1 (though the resulting direct entries go in TLB0).
- It has threads, but no "tlbsrx." -- so we need a spinlock and a normal "tlbsx". Because we need this lock, hardware tablewalk is mandatory on e6500 unless we want to add spinlock+tlbsx to the normal bolted TLB miss handler.
- TLB1 has no HES (nor next-victim hint) so we need software round robin (TODO: integrate this round robin data with hugetlb/KVM)
- The existing tablewalk handlers map half of a page table at a time, because IBM hardware has a fixed 1MiB indirect page size. e6500 has variable size indirect entries, with a minimum of 2MiB. So we can't do the half-page indirect mapping, and even if we could it would be less efficient than mapping the full page.
- Like on e5500, the linear mapping is bolted, so we don't need the overhead of supporting nested tlb misses.

Note that hardware tablewalk does not work in rev1 of e6500. We do not expect to support e6500 rev1 in mainline Linux. Signed-off-by: Scott Wood <scottwood@freescale.com> Cc: Mihai Caraman <mihai.caraman@freescale.com>
-
Scott Wood authored
There is no barrier between something like ioremap() writing to a PTE, and returning the value to a caller that may then store the pointer in a place that is visible to other CPUs. Such callers generally don't perform barriers of their own. Even if callers of ioremap() and similar things did use barriers, the most logical choice would be smp_wmb(), which is not architecturally sufficient when BookE hardware tablewalk is used. A full sync is specified by the architecture. For userspace mappings, OTOH, we generally already have an lwsync due to locking, and if we occasionally take a spurious fault due to not having a full sync with hardware tablewalk, it will not be fatal because we will retry rather than oops. Signed-off-by: Scott Wood <scottwood@freescale.com>
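The fix amounts to a heavier barrier after writing a kernel PTE (a sketch; the placement in map_kernel_page and the config guard are assumptions):

  int map_kernel_page(unsigned long ea, unsigned long pa, int flags)
  {
          /* ... set the PTE for ea ... */

  #ifdef CONFIG_PPC_BOOK3E_64
          /* A hardware tablewalk may fetch the PTE with no lock held and
           * no chance to retry a spurious fault on a kernel mapping, so a
           * full sync is needed before publishing the mapping. */
          mb();
  #else
          smp_wmb();      /* sufficient where software walks the tables */
  #endif
          return 0;
  }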
-
Kevin Hao authored
RELOCATABLE is more flexible and has no alignment restriction, and it is a superset of DYNAMIC_MEMSTART. So use it by default for a kdump kernel. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Kevin Hao authored
When booting above 64M for a secondary cpu, we also face the same issue as the boot cpu: PAGE_OFFSET maps to two different physical addresses for the init tlb and the final map. So we have to use switch_to_as1/restore_to_as0 between the conversion of these two maps. When restoring to as0 for a secondary cpu, we only need to return to the caller, so add a new parameter to the function restore_to_as0 for this purpose. Use LOAD_REG_ADDR_PIC to get the address of variables which may be used before we set the final map in cams for the secondary cpu. Move the setting of cams a bit earlier in order to avoid the unnecessary use of LOAD_REG_ADDR_PIC. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Kevin Hao authored
This is always true for a non-relocatable kernel; otherwise the kernel would get stuck. But for a relocatable kernel, it is a little more complicated. When booting a relocatable kernel, we just align the kernel start address to 64M and map PAGE_OFFSET from there. The relocation will be based on this virtual address. But if this address is not the same as memstart_addr, we will have to change the map of PAGE_OFFSET to the real memstart_addr and do another relocation again. Signed-off-by: Kevin Hao <haokexin@gmail.com> [scottwood@freescale.com: make offset long and non-negative in simple case] Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Kevin Hao authored
Introduce this function so we can set both the physical and virtual address for the map in cams. This will be used by the relocation code. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
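The new helper's shape, sketched (the signature and body are assumed from the surrounding FSL Book-E MMU code):

  /* Sketch: like map_mem_in_cams(), but the caller chooses the virtual
   * base address, which the relocation code needs. */
  unsigned long map_mem_in_cams_addr(phys_addr_t phys, unsigned long virt,
                                     unsigned long ram, int max_cam_idx)
  {
          int i;
          unsigned long amount_mapped = 0;

          /* Program each CAM entry with matching phys/virt bases. */
          for (i = 0; ram && i < max_cam_idx; i++) {
                  unsigned long cam_sz = calc_cam_sz(ram, virt, phys);

                  settlbcam(i, virt, phys, cam_sz,
                            pgprot_val(PAGE_KERNEL_X), 0);
                  ram -= cam_sz;
                  amount_mapped += cam_sz;
                  virt += cam_sz;
                  phys += cam_sz;
          }
          return amount_mapped;
  }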
-