- 16 Sep, 2011 1 commit
-
-
- 14 Sep, 2011 7 commits
-
-
-
Nicolas Pitre authored
The rule to copy this file doesn't have to be forced. However, lib1funcs.[So] have to be listed amongst the targets. This prevents zImage from being recreated needlessly.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Dave Martin <dave.martin@linaro.org>
Tested-by: Thomas Abraham <thomas.abraham@linaro.org>
-
Nicolas Pitre authored
Some old bootloaders can't be updated to a device tree capable one, yet they provide ATAGs with memory configuration, the ramdisk address, the kernel cmdline string, etc. To allow a device tree enabled kernel to be used with such bootloaders, it is necessary to convert those ATAGs into FDT properties and fold them into the DTB appended to zImage. Currently the following ATAGs are converted:

  ATAG_CMDLINE
  ATAG_MEM
  ATAG_INITRD2

If the corresponding information already exists in the appended DTB, it is replaced, otherwise the required node is created to hold it. The code looks for ATAGs at the location pointed to by the value of r2 upon entry into the zImage code. If no ATAGs are found there, an attempt at finding ATAGs at the typical 0x100 offset from start of RAM is made. Otherwise the DTB is left unchanged.

This started from an older patch from John Bonesio <bones@secretlab.ca>, with contributions from David Brown <davidb@codeaurora.org>.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Dave Martin <dave.martin@linaro.org>
Tested-by: Thomas Abraham <thomas.abraham@linaro.org>
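As a rough illustration of the idea (not the exact code added by this patch), folding ATAG_CMDLINE into the /chosen node of an appended DTB with libfdt could look like the sketch below. The ATAG header layout and tag value follow the standard ATAG list format; the helper names are hypothetical.

```c
#include <libfdt.h>

#define ATAG_NONE     0x00000000
#define ATAG_CMDLINE  0x54410009        /* standard ATAG tag value */

struct atag_header {
        unsigned int size;              /* size of this tag, in words */
        unsigned int tag;
};

struct atag {
        struct atag_header hdr;
        union {
                char cmdline[1];
        } u;
};

#define atag_next(t) \
        ((struct atag *)((unsigned int *)(t) + (t)->hdr.size))

/* Fold ATAG_CMDLINE into the /chosen node of an appended DTB. */
static int merge_atags_into_fdt(void *fdt, struct atag *tags)
{
        struct atag *t;
        int chosen;

        chosen = fdt_path_offset(fdt, "/chosen");
        if (chosen < 0)
                chosen = fdt_add_subnode(fdt, 0, "chosen");
        if (chosen < 0)
                return chosen;

        for (t = tags; t->hdr.tag != ATAG_NONE; t = atag_next(t)) {
                if (t->hdr.tag == ATAG_CMDLINE)
                        /* replaces an existing bootargs property, or creates one */
                        fdt_setprop_string(fdt, chosen, "bootargs", t->u.cmdline);
                /* ATAG_MEM and ATAG_INITRD2 would be handled along the same lines */
        }
        return 0;
}
```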
-
Nicolas Pitre authored
This is a small subset of string functions needed by commits to come. Except for memcpy() which is unchanged from its original location, their implementation is meant to be small, and -Os is enforced to prevent gcc from doing pointless loop unrolling.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Dave Martin <dave.martin@linaro.org>
Tested-by: Thomas Abraham <thomas.abraham@linaro.org>
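For a sense of what such size-optimised routines look like in a freestanding environment such as the decompressor, here is a generic sketch (not the exact code introduced here) of two typical members of the set:

```c
#include <stddef.h>

size_t strlen(const char *s)
{
        const char *p = s;

        while (*p)
                p++;
        return p - s;
}

int memcmp(const void *cs, const void *ct, size_t count)
{
        const unsigned char *p1 = cs, *p2 = ct;

        for (; count; count--, p1++, p2++)
                if (*p1 != *p2)
                        return *p1 < *p2 ? -1 : 1;
        return 0;
}
```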
-
Nicolas Pitre authored
The appended DTB gets relocated with the decompressor code to get out of the way of the decompressed kernel. However the kernel's .bss section may be larger than the relocated code and data, and then the DTB gets overwritten. Let's make sure the relocation takes care of moving zImage far enough so no such conflict with .bss occurs. Thanks to Tony Lindgren <tony@atomide.com> for figuring out this issue. While at it, let's clean up the code a bit so that the wont_overwrite symbol is used while determining if a conflict exists, making the above change more precise as well as eliminating some ARM/THUMB alternates.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Dave Martin <dave.martin@linaro.org>
Tested-by: Thomas Abraham <thomas.abraham@linaro.org>
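Expressed in C rather than the actual assembly, the relocation decision is essentially the following overlap test; all symbol names here are illustrative:

```c
/*
 * Decide whether zImage (decompressor code, data and appended DTB) must
 * relocate itself before decompression. The decompressed kernel will
 * occupy [dest, dest + image_size + bss_size).
 */
static int must_relocate(unsigned long zimage_start, unsigned long zimage_end,
                         unsigned long dest, unsigned long image_size,
                         unsigned long bss_size)
{
        unsigned long kernel_end = dest + image_size + bss_size;

        /* No conflict if zImage lies entirely above the kernel plus .bss... */
        if (zimage_start >= kernel_end)
                return 0;
        /* ...or entirely below the decompression destination. */
        if (zimage_end <= dest)
                return 0;
        return 1;       /* overlap: move zImage out of the way first */
}
```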
-
John Bonesio authored
This patch provides the ability to boot using a device tree that is appended to the raw binary zImage (e.g. cat zImage <filename>.dtb > zImage_w_dtb).

Signed-off-by: John Bonesio <bones@secretlab.ca>
[nico: ported to latest zImage changes plus additional cleanups/improvements]
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Tony Lindgren <tony@atomide.com>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Dave Martin <dave.martin@linaro.org>
Tested-by: Thomas Abraham <thomas.abraham@linaro.org>
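The appended blob can be recognised by the flattened-device-tree magic value (0xd00dfeed, big-endian) immediately following the zImage payload; a rough C equivalent of that check (function name hypothetical):

```c
#include <stdint.h>

#define FDT_MAGIC 0xd00dfeedU   /* first word of a flattened device tree */

/* 'end' would point just past the end of the zImage's own data. */
static const void *find_appended_dtb(const uint8_t *end)
{
        uint32_t magic = ((uint32_t)end[0] << 24) | ((uint32_t)end[1] << 16) |
                         ((uint32_t)end[2] << 8)  |  (uint32_t)end[3];

        return magic == FDT_MAGIC ? end : NULL;
}
```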
-
Nicolas Pitre authored
This is needed for proper alignment when the DTB appending feature is used.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Dave Martin <dave.martin@linaro.org>
Tested-by: Thomas Abraham <thomas.abraham@linaro.org>
-
- 31 Aug, 2011 32 commits
-
-
Will Deacon authored
-
Mark Rutland authored
Currently, armpmu_enable iterates through the events for a given counter set, calling armpmu->enable on each before calling armpmu->start to start the PMU's counters. As armpmu->enable is called when each event is added, each event is already configured in hardware. Due to this, calling armpmu->enable in armpmu_enable is unnecessary and confusing. This patch removes the unnecessary calls to armpmu->enable.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
Currently, struct arm_pmu and related functions are only visible to {,arch/arm/}/kernel/perf_event.c. This prevents new drivers from using the framework. This patch moves declarations to asm/pmu.h, allowing new PMU drivers to use the framework.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
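An abridged sketch of the sort of interface this exposes in asm/pmu.h; the field list below is trimmed and illustrative rather than a verbatim copy of the header:

```c
#include <linux/types.h>
#include <linux/interrupt.h>
#include <linux/perf_event.h>

struct pmu_hw_events;

/* Abridged; the real structure carries more state. */
struct arm_pmu {
        struct pmu      pmu;
        const char      *name;
        irqreturn_t     (*handle_irq)(int irq_num, void *dev);
        void            (*enable)(struct hw_perf_event *evt, int idx);
        void            (*disable)(struct hw_perf_event *evt, int idx);
        int             (*get_event_idx)(struct pmu_hw_events *hw_events,
                                         struct hw_perf_event *hwc);
        u32             (*read_counter)(int idx);
        void            (*write_counter)(int idx, u32 val);
        void            (*start)(void);
        void            (*stop)(void);
        int             num_events;
};
```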
-
Mark Rutland authored
Currently struct cpu_hw_events stores data on events running on a PMU associated with a CPU. As this data is general enough to be used for system PMUs, this name is a misnomer, and may cause confusion when it is used for system PMUs. Additionally, 'armpmu' is commonly used as a parameter name for an instance of struct arm_pmu. The name is also used for a global instance which represents the CPU's PMU. As cpu_hw_events is now not tied to CPU PMUs, it is renamed to pmu_hw_events, with instances of it renamed similarly. As the global 'armpmu' is CPU-specific, it is renamed to cpu_pmu. This should make it clearer which code is generic, and which is coupled with the CPU.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
Currently the event accounting data in pmu_hw_events is stored in fixed-sized arrays within the structure. This patch refactors the accounting data to allow any number of events to be managed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
Currently, a single static instance of struct pmu is used when registering an ARM PMU with the main perf subsystem. This limits the ARM perf code to supporting a single PMU. This patch replaces the static struct pmu instance with a member variable on struct arm_pmu. This provides bidirectional mapping between the two structs, and therefore allows for support of multiple PMUs. The function 'to_arm_pmu' is provided for convenience. PMU-generic functions are also updated to use the new mapping, and PMU-generic initialisation of the member variables is moved into a new function: armpmu_init.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
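The embedded struct pmu lets the perf core's pointer be mapped back to its containing arm_pmu with container_of(); a minimal sketch of the pattern described above:

```c
#include <linux/kernel.h>
#include <linux/perf_event.h>

struct arm_pmu {
        struct pmu pmu;         /* registered with the perf core */
        /* ... hardware-specific fields ... */
};

#define to_arm_pmu(p) (container_of(p, struct arm_pmu, pmu))

/* e.g. inside a struct pmu callback such as pmu_enable: */
static void armpmu_enable(struct pmu *pmu)
{
        struct arm_pmu *armpmu = to_arm_pmu(pmu);

        /* ... operate on this particular PMU instance ... */
        (void)armpmu;
}
```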
-
Mark Rutland authored
Currently mapping an event type to a hardware configuration value depends on the data being pointed to from struct arm_pmu. These fields (cache_map, event_map, raw_event_mask) are currently specific to CPU PMUs, and do not serve the general case well. This patch replaces the event map pointers on struct arm_pmu with a new 'map_event' function pointer. Small shim functions are used to reuse the existing common code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
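The shims in question are tiny wrappers around a shared helper, along these lines; the helper's exact signature and the table names are illustrative, not the literal code:

```c
#include <linux/types.h>
#include <linux/perf_event.h>

extern const unsigned armv7_a9_perf_map[PERF_COUNT_HW_MAX];
extern const unsigned armv7_a9_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
                                             [PERF_COUNT_HW_CACHE_OP_MAX]
                                             [PERF_COUNT_HW_CACHE_RESULT_MAX];

/* Common helper shared by all backends (signature illustrative). */
int armpmu_map_event(struct perf_event *event,
                     const unsigned (*event_map)[PERF_COUNT_HW_MAX],
                     const unsigned (*cache_map)[PERF_COUNT_HW_CACHE_MAX]
                                                [PERF_COUNT_HW_CACHE_OP_MAX]
                                                [PERF_COUNT_HW_CACHE_RESULT_MAX],
                     u32 raw_event_mask);

/* Per-PMU shim installed via arm_pmu::map_event. */
static int armv7_a9_map_event(struct perf_event *event)
{
        return armpmu_map_event(event, &armv7_a9_perf_map,
                                &armv7_a9_perf_cache_map, 0xFF);
}
```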
-
Mark Rutland authored
Currently, the ARM perf code assumes all PMUs it will handle are CPU PMUs, having ARM_PMU_DEVICE_CPU hardcoded when reserving or releasing hardware. This means that currently, the ARM perf code can't support system PMUs. This patch adds a 'type' field to struct arm_pmu, which allows the code to reserve & release the hardware regardless of the PMU type.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
Currently, a single lock serialises access to CPU PMU registers. This global locking is unnecessary as PMU registers are local to the CPU they monitor. This patch replaces the global lock with a per-CPU lock. As the lock is in struct cpu_hw_events, PMUs providing a single cpu_hw_events instance can be locked globally.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
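Conceptually, the locking ends up looking like this sketch (unrelated fields omitted, names not guaranteed to match the patch exactly):

```c
#include <linux/percpu.h>
#include <linux/spinlock.h>

struct cpu_hw_events {
        /* ... per-CPU event bookkeeping ... */
        raw_spinlock_t pmu_lock;        /* protects this CPU's PMU registers */
};

static DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events);

/* Each CPU takes only its own lock when touching its own PMU registers. */
static void example_write_pmu_reg(void)
{
        struct cpu_hw_events *events = this_cpu_ptr(&cpu_hw_events);
        unsigned long flags;

        raw_spin_lock_irqsave(&events->pmu_lock, flags);
        /* ... read-modify-write a PMU control register ... */
        raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
}
```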
-
Mark Rutland authored
As armpmu_disable will call armpmu->stop when the last event has been removed, the extra call to armpmu->stop is pointless and simply adds to the noise when debugging. Additionally, because this call occurs in a preemptible context, it is problematic for per-CPU locking of PMU registers (where we would attempt to access a per-CPU spinlock for use with raw_spin_lock_irqsave). This patch removes the call to armpmu->stop.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
Currently, cpu_hw_events is a global per-CPU variable. To enable support for multiple PMUs, there needs to be a mapping from an instance of arm_pmu to its cpu_hw_events. Additionally, as system PMUs are not CPU-affine, they should not have this stored per-CPU. This patch moves access to the hardware events data behind an accessor function (arm_pmu::get_hw_events). This allows each instance to have its own hardware event data, which can be stored per-CPU or globally as required.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
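A sketch of the accessor indirection: the CPU PMU hands back its per-CPU data, while a system PMU could return one shared instance. Names and fields here are illustrative:

```c
#include <linux/percpu.h>
#include <linux/perf_event.h>
#include <linux/spinlock.h>

struct pmu_hw_events {
        struct perf_event **events;     /* events[i] is the event on counter i */
        unsigned long *used_mask;       /* which counters are allocated */
        raw_spinlock_t pmu_lock;
};

static DEFINE_PER_CPU(struct pmu_hw_events, cpu_hw_events);

/* Installed as the CPU PMU's get_hw_events callback. */
static struct pmu_hw_events *armpmu_get_cpu_events(void)
{
        return this_cpu_ptr(&cpu_hw_events);
}

/* A system (uncore) PMU would instead return a single, global instance. */
static struct pmu_hw_events sys_hw_events;

static struct pmu_hw_events *syspmu_get_hw_events(void)
{
        return &sys_hw_events;
}
```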
-
Mark Rutland authored
Currently the ARM perf code supports having a single struct platform_device to supply IRQ numbers, limiting it to supporting a single PMU. This patch makes the platform_device an instance variable on struct arm_pmu. This should allow for multiple PMUs to be supported in future.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
This patch moves the active_events counter into struct arm_pmu, in preparation for supporting multiple PMUs. This also moves pmu_reserve_mutex, as it is used to guard accesses to active_events.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
Currently, pmu_hw_events::active_mask is used to keep track of which events are active in hardware. As we can stop counters and their interrupts, this is unnecessary.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
Currently, event group validation compares each event's 'pmu' pointer against the static 'pmu' pointer. This limits the code to supporting only 1 PMU. This patch changes the behaviour to consider an event's group leader's 'pmu' pointer as canonical for validation. This should ease later generalisation of the code to support multiple PMUs at once.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
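Schematically, the check becomes a comparison against the group leader's pmu rather than a file-static pointer. This is a simplified sketch; the real validation also has to confirm that a hardware counter can actually be allocated for each event:

```c
#include <linux/perf_event.h>

/* Can 'event' be scheduled alongside a group led by an event on 'leader_pmu'? */
static int validate_event(struct pmu *leader_pmu, struct perf_event *event)
{
        /* Software events can always be scheduled with a hardware group. */
        if (is_software_event(event))
                return 1;

        /* Compare against the group leader's pmu, not a global one. */
        return event->pmu == leader_pmu;
}

/*
 * The group validation then applies this to the leader, to each existing
 * sibling, and finally to the event being added, using
 * event->group_leader->pmu as the reference.
 */
```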
-
Mark Rutland authored
Currently, an "empty" struct pmu is registered as the CPU PMU, regardless of whether there is a physical PMU. This burdens the accessor functions with checks to see whether a PMU is actually present. This patch changes initialisation to register a PMU only if there is a supported PMU present, and removes the checks that this change makes redundant. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Jamie Iles <jamie@jamieiles.com> Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
Currently, an event's 'pmu' field is set after pmu::event_init() is called. This means that pmu::event_init() must figure out which struct pmu the event was initialised from. This makes it difficult to consolidate common event initialisation code for similar PMUs, and very difficult to implement drivers for PMUs which can have multiple instances (e.g. a USB controller PMU, a GPU PMU, etc). This patch sets the 'pmu' field before initialising the event, allowing event init code to identify the struct pmu instance easily. In the event of failure to initialise an event, the event is destroyed via kfree() without calling perf_event::destroy(), so this shouldn't result in bad behaviour even if the destroy field was set before failure to initialise was noted.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1313062280-19123-1-git-send-email-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Will Deacon authored
The ARM hw_breakpoint backend is currently a bit too noisy when things start to go awry. This patch removes a couple of over-zealous WARN_ONCE invocations and replaces them with pr_warnings instead.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
The ARM debug registers can only be accessed if the DBGSWENABLE signal to the core is driven HIGH by the DAP. The architecture does not provide a way to detect the value of this signal, so the best we can do is register an undef_hook to trap debug register co-processor accesses and then fail if the trap is taken.

Signed-off-by: Will Deacon <will.deacon@arm.com>
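The probing technique is roughly: install an undef hook matching the co-processor access, attempt a debug register read, and treat a trap as "debug registers inaccessible". A sketch of the shape of that code; the instruction mask/value shown are placeholders, not the precise encodings used by the backend:

```c
#include <linux/ptrace.h>
#include <asm/traps.h>

static int debug_reg_access_failed;

static int debug_reg_trap(struct pt_regs *regs, unsigned int instr)
{
        /* The CP14 access undef'd: skip the instruction and note the failure. */
        debug_reg_access_failed = 1;
        regs->ARM_pc += 4;
        return 0;
}

static struct undef_hook debug_reg_hook = {
        .instr_mask = 0x0,      /* placeholder: should match only the CP14 access */
        .instr_val  = 0x0,
        .fn         = debug_reg_trap,
};

static int debug_regs_accessible(void)
{
        register_undef_hook(&debug_reg_hook);
        /* ... attempt a CP14 debug register read here ... */
        unregister_undef_hook(&debug_reg_hook);

        return !debug_reg_access_failed;
}
```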
-
Will Deacon authored
ARM debug architecture 7.1 mandates that the DFAR is updated on a watchpoint debug exception to contain the faulting virtual address of the memory access. This allows us to determine which watchpoints have fired and therefore report useful information to userspace. This patch adds support for using the DFAR in the watchpoint handler, which allows us to support multiple watchpoints on CPUs implementing the new debug architecture.

Signed-off-by: Will Deacon <will.deacon@arm.com>
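Reading the faulting address amounts to a single CP15 access (DFAR is c6, c0, 0), for example:

```c
#include <linux/types.h>

/* Read the Data Fault Address Register after a watchpoint debug exception. */
static inline u32 read_dfar(void)
{
        u32 dfar;

        asm volatile("mrc p15, 0, %0, c6, c0, 0" : "=r" (dfar));
        return dfar;
}

/*
 * The watchpoint handler can then compare this address against each
 * programmed watchpoint's address range to decide which one(s) fired.
 */
```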
-
Will Deacon authored
The current hw_breakpoint code on ARM reserves 1 breakpoint for each watchpoint that is available. Since debug architectures prior to 7.1 are restricted to 1 watchpoint anyway, only one breakpoint was ever reserved. This patch changes the reservation strategy so that a single breakpoint is reserved, regardless of the number of watchpoints. This is in preparation for multiple-watchpoint support on debug architectures from 7.1 onwards.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
This patch adds initial support for Cortex-A15 (debug architecture v7.1) to the hw_breakpoint ARM backend.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
The Cortex-A15 PMU implements the PMUv2 specification and therefore has support for some mode exclusion. This patch adds support for excluding user, kernel and hypervisor counts from a given event.

Acked-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Modern PMUs allow for mode exclusion, so we no longer wish to return -EPERM if it is requested. This patch provides a hook in the armpmu structure for implementing mode exclusion. The hw_perf_event initialisation is slightly delayed so that the backend code can update the structure if required.

Acked-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
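The hook is of roughly this shape; the exclusion bit positions below are placeholders rather than the real PMUv2 event-type encoding:

```c
#include <linux/bitops.h>
#include <linux/errno.h>
#include <linux/perf_event.h>

/* Illustrative exclusion bits in the per-counter event-type register. */
#define EXCL_USER       BIT(30)
#define EXCL_KERNEL     BIT(31)

/*
 * Backend hook: translate attr->exclude_* into hardware config bits,
 * or return an error if the hardware can't honour the request.
 */
static int pmu_v2_set_event_filter(struct hw_perf_event *hwc,
                                   struct perf_event_attr *attr)
{
        unsigned long config = 0;

        if (attr->exclude_idle)
                return -EPERM;          /* no hardware support for this */
        if (attr->exclude_user)
                config |= EXCL_USER;
        if (attr->exclude_kernel)
                config |= EXCL_KERNEL;

        hwc->config_base |= config;
        return 0;
}
```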
-
Will Deacon authored
ARM PMU code used to use 1-based indices for PMU registers. This caused several data structures (pmu_hw_events::{active_events, used_mask, events}) to have an unused element at index zero. ARMPMU_MAX_HWEVENTS still takes this indexing into account, and currently equates to 33. This patch updates the core ARM perf code to use the 0th index again.

Acked-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Now that the ARMv7 PMU backend indexes event counters from zero, follow suit and do the same for ARMv6 and Xscale.

Acked-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
The current ARMv7 PMU backend indexes event counters from two, with index zero being reserved and index one being used to represent the cycle counter. This patch tidies up the code by indexing from one instead (with zero for the cycle counter). This allows us to remove many of the accessor macros along with the counter enumeration and makes the code much more readable.

Acked-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
This patch ensures that integers are used to represent event indices in the ARMv7 PMU backend. This ensures consistency between functions and also with the arm_pmu structure.

Acked-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
The ARMv7 perf backend mixes up u32 and unsigned long, which is rather ugly. This patch makes the ARMv7 PMU code consistently use the u32 type instead.

Acked-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Commit 5dfc54e0 ("ARM: GIC: avoid routing interrupts to offline CPUs") prevents the GIC from setting the affinity of an IRQ to a CPU with id >= nr_cpu_ids. This was previously abused by perf on some platforms where more IRQs were registered than possible CPUs. This patch fixes the problem by using a cpumask_t to keep track of the active (requested) interrupts in perf. The same effect could be achieved by limiting the number of IRQs to the number of CPUs, but using a mask instead will be useful for adding extended CPU hotplug support in the future.

Acked-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
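The bookkeeping is just a cpumask recording which per-CPU IRQs were successfully requested, for example (helper names hypothetical):

```c
#include <linux/cpumask.h>

static cpumask_t active_irqs;   /* CPUs whose PMU IRQ we actually requested */

/* On reservation: only remember IRQs that were successfully requested. */
static void mark_irq_active(int cpu)
{
        cpumask_set_cpu(cpu, &active_irqs);
}

/* On release: only free IRQs we previously requested. */
static int irq_was_active(int cpu)
{
        return cpumask_test_and_clear_cpu(cpu, &active_irqs);
}
```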
-
Will Deacon authored
Once upon a time, OProfile and Perf fought hard over who could play with the PMU. To stop all hell from breaking loose, pmu.c offered an internal reserve/release API and took care of parsing PMU platform data passed in from board support code. Now that Perf has ingested OProfile, let's move the platform device handling into the Perf driver and out of the PMU locking code. Unfortunately, the lock has to remain to prevent Perf being bitten by out-of-tree modules such as LTTng, which still claim a right to the PMU when Perf isn't looking.

Acked-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
This patch removes const qualifiers from instances of struct arm_pmu, and functions initialising them, in preparation for generalising arm_pmu usage to system (AKA uncore) PMUs. This will allow for dynamically modifiable structures (locks, struct pmu) to be added as members of struct arm_pmu.

Acked-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-