- 01 Dec, 2015 1 commit
-
-
Ville Syrjälä authored
Rather than using drm_match_cea_mode() to see if the EDID detailed timings are supposed to represent one of the CEA/HDMI modes, add a special version of that function that takes in an explicit clock tolerance value (in kHz). When looking at the detailed timings, specify the tolerance as 5 kHz due to the 10 kHz clock resolution limit inherent in detailed timings. drm_match_cea_mode() uses the normal KHZ2PICOS() matching of clocks, which only allows smaller errors for lower clocks (e.g. for 25200 it won't allow any error) and a bigger error for higher clocks (e.g. for 297000 it actually matches 296913-297000). So it doesn't really match what we want for the fixup. Using the explicit +-5 kHz is much better for this use case. Not sure if we should change the normal mode matching to also use something else besides KHZ2PICOS(), since it allows a different proportion of error depending on the clock. I believe VESA CVT allows a maximum deviation of .5%, so using that for normal mode matching might be a good idea? Cc: Adam Jackson <ajax@redhat.com> Tested-by: nathan.d.ciobanu@linux.intel.com Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=92217 Fixes: fa3a7340 ("drm/edid: Fix up clock for CEA/HDMI modes specified via detailed timings") Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Adam Jackson <ajax@redhat.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
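A minimal sketch of the matching rule described above, assuming clocks in kHz; the function name is illustrative, not the actual drm_edid.c helper:

    /* Compare two pixel clocks (kHz) against an explicit tolerance
     * rather than KHZ2PICOS() rounding. Detailed timings store the
     * clock with 10 kHz resolution, so +-5 kHz covers exactly the
     * rounding error they can introduce. */
    static bool clock_within_tolerance(int clock1, int clock2,
                                       unsigned int tolerance_khz)
    {
            return abs(clock1 - clock2) <= (int)tolerance_khz;
    }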
-
- 30 Nov, 2015 1 commit
-
-
Jyri Sarha authored
Add drm_atomic_helper_disable_planes_on_crtc() for disabling all planes associated with the given CRTC. This can be used for instance in the CRTC helper disable callback to disable all planes before shutting down the display pipeline. v2: - Address Daniel's review comments [1] - Do atomic_begin() and atomic_flush() always if they are defined and the atomic knob is set - update kerneldoc - Put drm_atomic_helper_disable_planes_on_crtc() after drm_atomic_helper_commit_planes_on_crtc() in drm_atomic_helper.c to have the functions in the same order as in drm_atomic_helper.h Signed-off-by: Jyri Sarha <jsarha@ti.com> Link: http://patchwork.freedesktop.org/patch/msgid/1448633641-6486-1-git-send-email-jsarha@ti.com Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
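A hedged sketch of the intended use; the (crtc, atomic) signature is assumed from the description above, and my_crtc_disable is a hypothetical driver callback:

    static void my_crtc_disable(struct drm_crtc *crtc)
    {
            /* Disable every plane on this CRTC first; with 'atomic'
             * set, the helper wraps the plane updates in
             * atomic_begin()/atomic_flush() where those are defined. */
            drm_atomic_helper_disable_planes_on_crtc(crtc, true);

            /* ... then shut down the rest of the display pipeline. */
    }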
-
- 26 Nov, 2015 2 commits
-
-
Chris Wilson authored
The previous patch reintroduced a race condition whereby a failure in one reader may allow a second reader to see out-of-order events. Introduce a mutex to serialise readers so that an event is completed in its entirety before another reader may process an event. The two readers may race against each other, but the events each retrieves are in the correct order. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Thomas Hellstrom <thellstrom@vmware.com> Cc: Takashi Iwai <tiwai@suse.de> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/1448462343-2072-2-git-send-email-chris@chris-wilson.co.uk Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
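A minimal sketch of the serialisation scheme, with hypothetical names (read_lock, dequeue_event, requeue_event, copy_event_to_user); the point is only the lock ordering, not the real drm_read() code:

    static ssize_t event_read(struct reader *r, char __user *buf)
    {
            struct event *e;
            ssize_t ret;

            mutex_lock(&r->read_lock);      /* serialise readers */

            spin_lock_irq(&r->event_lock);
            e = dequeue_event(r);           /* claim the next event */
            spin_unlock_irq(&r->event_lock);

            /* copy_to_user() may fault, so no spinlock is held here */
            ret = copy_event_to_user(buf, e);
            if (ret < 0) {
                    spin_lock_irq(&r->event_lock);
                    requeue_event(r, e);    /* put it back at the head */
                    spin_unlock_irq(&r->event_lock);
            }

            mutex_unlock(&r->read_lock);    /* event fully completed */
            return ret;
    }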
-
Chris Wilson authored
In commit cdd1cf79 Author: Chris Wilson <chris@chris-wilson.co.uk> Date: Thu Dec 4 21:03:25 2014 +0000 drm: Make drm_read() more robust against multithreaded races I fixed the races by serialising the use of the event by extending the dev->event_lock. However, as Thomas pointed out, the copy_to_user() may fault (even the __copy_to_user_inatomic() variant used here) and calling into the driver backend with the spinlock held is bad news. Therefore we have to drop the spinlock before the copy, but that exposes us to the old race whereby a second reader could see an out-of-order event (as the first reader may claim the first request but fail to copy it back to userspace and so on returning it to the event list it will be behind the current event being copied by the second reader). Reported-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Thomas Hellstrom <thellstrom@vmware.com> Cc: Takashi Iwai <tiwai@suse.de> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/1448462343-2072-1-git-send-email-chris@chris-wilson.co.uk Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 25 Nov, 2015 3 commits
-
-
Geliang Tang authored
To make the intention clearer, use list_next_entry instead of list_entry. Signed-off-by: Geliang Tang <geliangtang@163.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
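For illustration, with a hypothetical struct foo; both expressions yield the same element, but the second reads as "the next entry":

    struct foo {
            int data;
            struct list_head list;
    };

    /* Before: open-coded successor lookup */
    next = list_entry(pos->list.next, struct foo, list);

    /* After: the intent is explicit */
    next = list_next_entry(pos, list);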
-
Jani Nikula authored
We have serious dangling else bugs waiting to happen in our for_each_ style macros with ifs. Consider, for example, #define for_each_power_domain(domain, mask) \ for ((domain) = 0; (domain) < POWER_DOMAIN_NUM; (domain)++) \ if ((1 << (domain)) & (mask)) If this is used in context: if (condition) for_each_power_domain(domain, mask); else foo(); foo() will be called for each domain *not* in mask, if condition holds, and not at all if condition doesn't hold. Fix this by reversing the conditions in the macros, and adding an else branch for the "for each" block, so that other if/else blocks can't interfere. Provide a "for_each_if" helper macro to make it easier to get this right. v2: move for_each_if to drmP.h in a separate patch. Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1448392916-2281-2-git-send-email-jani.nikula@intel.com Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
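The helper and the fixed macro then look like this (POWER_DOMAIN_NUM is the enum bound from the example above):

    /* Invert the condition and supply our own else branch, so a
     * dangling else in the caller can no longer bind to the macro's if. */
    #define for_each_if(condition) if (!(condition)) {} else

    #define for_each_power_domain(domain, mask)                          \
            for ((domain) = 0; (domain) < POWER_DOMAIN_NUM; (domain)++)  \
                    for_each_if((1 << (domain)) & (mask))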
-
Jani Nikula authored
We have serious dangling else bugs waiting to happen in our for_each_ style macros with ifs. Consider, for example, #define drm_for_each_plane_mask(plane, dev, plane_mask) \ list_for_each_entry((plane), &(dev)->mode_config.plane_list, head) \ if ((plane_mask) & (1 << drm_plane_index(plane))) If this is used in context: if (condition) drm_for_each_plane_mask(plane, dev, plane_mask); else foo(); foo() will be called for each plane *not* in plane_mask, if condition holds, and not at all if condition doesn't hold. Fix this by reversing the conditions in the macros, and adding an else branch for the "for each" block, so that other if/else blocks can't interfere. Provide a "for_each_if" helper macro to make it easier to get this right. Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1448392916-2281-1-git-send-email-jani.nikula@intel.com Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 24 Nov, 2015 17 commits
-
-
Daniel Vetter authored
To avoid even more code duplication punt this all to the probe worker, which needs some slight adjustment to also generate a uevent when the status has changed due to connector->force. v2: Instead of running the output_poll_work (which is kinda the wrong thing and a layering violation since it's an internal of the probe helpers), or calling ->detect (which is again a layering violation since it's used only by probe helpers) just call the official ->fill_modes function, like a GET_CONNECTOR ioctl call. v3: Restore the accidentally removed forced-probe for echo "detect" > force. Cc: Chris Wilson <chris@chris-wilson.co.uk> Reported-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1447951610-12622-22-git-send-email-daniel.vetter@ffwll.ch Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Maarten Lankhorst authored
Use the correct function name for drm_atomic_clean_old_fb docs. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Geliang Tang authored
When backwards is 0, __drm_mm_for_each_hole is the same as drm_mm_for_each_hole. So rewrite drm_mm_for_each_hole in terms of __drm_mm_for_each_hole. Signed-off-by: Geliang Tang <geliangtang@163.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
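A sketch of the consolidation, assuming the generic iterator's last parameter is the backwards flag:

    /* The forward-only iterator becomes a thin wrapper that passes
     * backwards == 0 to the generic version. */
    #define drm_mm_for_each_hole(entry, mm, hole_start, hole_end) \
            __drm_mm_for_each_hole(entry, mm, hole_start, hole_end, 0)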
-
Daniel Vetter authored
We chase pointers/lists without taking the locks protecting them, which isn't that good. Fix it. v2: Actually unlock properly, spotted by Julia. v3: Put the label _before_ the mutex_unlock (Emil) Cc: Emil Velikov <emil.l.velikov@gmail.com> Cc: Julia Lawall <julia.lawall@lip6.fr> Link: http://patchwork.freedesktop.org/patch/msgid/1443783662-23066-1-git-send-email-daniel.vetter@ffwll.ch Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ville Syrjälä authored
To aid in debugging failures, print the src,dst,clip rectangles when drm_plane_helper_check_update() fails. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ville Syrjälä authored
Allow the caller to specify a "prefix" string to drm_rect_debug_print() to make it easier to see which drm_rect is being printed. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ville Syrjälä authored
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ville Syrjälä authored
Drivers shouldn't clobber the passed in addfb ioctl parameters. i915 was doing just that. To prevent it from happening again, pass the struct around as const, starting all the way from internal_framebuffer_create(). Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
LABBE Corentin authored
The simple_strtoul function is marked as obsolete. This patch replaces it with kstrtouint. Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
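The typical replacement pattern; unlike simple_strtoul(), kstrtouint() rejects overflow and trailing garbage (buf here is assumed to be a NUL-terminated input string):

    unsigned int val;
    int ret;

    ret = kstrtouint(buf, 10, &val);   /* base 10; returns 0 on success */
    if (ret)
            return ret;                /* -EINVAL or -ERANGE */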
-
Robert Fekete authored
Adds clarification of the rotation property bits, i.e. that rotation is counter-clockwise and that reflects are applied before any rotation. v2: Refer to the define names instead of the property values. Signed-off-by: Robert Fekete <robert.fekete@linux.intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Lukas Wunner authored
I noticed that intel_fbdev->our_mode is unused. Introduced by 79e53945 ("DRM: i915: add mode setting support"). Then I noticed that intel_fbdev->fbdev_list is unused as well. Introduced by 38651674 ("drm/fb: fix fbdev object model + cleanup properly.") in i915, nouveau and radeon. Subsequently cargo culted to amdgpu, ast, cirrus, qxl, udl, virtio and mgag200. Already removed from the latter with cc59487a ("drm/mgag200: 'fbdev_list' in 'struct mga_fbdev' is not used"). Remove it from the others. Signed-off-by: Lukas Wunner <lukas@wunner.de> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Markus Elfring authored
The drm_property_unreference_blob() function tests whether its argument is NULL and then returns immediately. Thus the tests around the calls are not needed. This issue was detected by using the Coccinelle software. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Link: http://patchwork.freedesktop.org/patch/msgid/563C8B3E.405@users.sourceforge.net Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
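The Coccinelle-detected pattern, for illustration:

    /* Before: redundant NULL test around a NULL-tolerant call */
    if (blob)
            drm_property_unreference_blob(blob);

    /* After: the function itself returns immediately for NULL */
    drm_property_unreference_blob(blob);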
-
Jani Nikula authored
Cc: Yetunde Adebisi <yetundex.adebisi@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Reviewed-by: Ander Conselvan de Oliveira <conselvan2@gmail.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Archit Taneja authored
DRM_TEGRA_FBDEV config is currently used to enable/disable legacy fbdev emulation for the tegra kms driver. Remove this local config option and use the top level DRM_FBDEV_EMULATION config option instead. Signed-off-by: Archit Taneja <architt@codeaurora.org> Link: http://patchwork.freedesktop.org/patch/msgid/1445933459-5249-4-git-send-email-architt@codeaurora.org Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Archit Taneja authored
DRM_IMX_FB_HELPER config is currently used to enable/disable fbdev emulation for the imx kms driver. Remove this local config option and use the top level DRM_FBDEV_EMULATION config option where applicable. Using this config lets us also prevent wrapping around drm_fb_helper_* calls with #ifdefs in certain places. Tested-by: Philipp Zabel <p.zabel@pengutronix.de> Signed-off-by: Archit Taneja <architt@codeaurora.org> Link: http://patchwork.freedesktop.org/patch/msgid/1445933459-5249-2-git-send-email-architt@codeaurora.org Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Daniel Vetter authored
A bunch of things have been removed meanwhile and the docs were not fully brought up to speed. And a few gaps closed where I noticed missing kerneldoc while reading through the overview sections. Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1445533889-7661-3-git-send-email-daniel.vetter@ffwll.ch Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Daniel Vetter authored
I just realized that I've forgotten to update all the gem refcounting docs. For penance also add pretty docs for the overall drm_gem_object structure, with a few links thrown in for good measure. As usual we need to make sure the kerneldoc reference is at most a sect2, for otherwise it won't be listed. Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1445533889-7661-1-git-send-email-daniel.vetter@ffwll.ch Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 23 Nov, 2015 1 commit
-
-
Linus Torvalds authored
-
- 22 Nov, 2015 15 commits
-
-
Linus Torvalds authored
Merge slub bulk allocator updates from Andrew Morton:
"This missed the merge window because I was waiting for some repairs to come in. Nothing actually uses the bulk allocator yet and the changes to other code paths are pretty small. And the net guys are waiting for this so they can start merging the client code"

More comments from Jesper Dangaard Brouer:
"The kmem_cache_alloc_bulk() call, in mm/slub.c, were included in previous kernel. The present version contains a bug. Vladimir Davydov noticed it contained a bug, when kernel is compiled with CONFIG_MEMCG_KMEM (see commit 03ec0ed5: "slub: fix kmem cgroup bug in kmem_cache_alloc_bulk"). Plus the mem cgroup counterpart in kmem_cache_free_bulk() were missing (see commit 03374518 "slub: add missing kmem cgroup support to kmem_cache_free_bulk"). I don't consider the fix stable-material because there are no in-tree users of the API. But with known bugs (for memcg) I cannot start using the API in the net-tree"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
  slab/slub: adjust kmem_cache_alloc_bulk API
  slub: add missing kmem cgroup support to kmem_cache_free_bulk
  slub: fix kmem cgroup bug in kmem_cache_alloc_bulk
  slub: optimize bulk slowpath free by detached freelist
  slub: support for bulk free with SLUB freelists
-
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty
Linus Torvalds authored
Pull tty/serial fixes from Greg KH:
"Here are a few small tty/serial driver fixes for 4.4-rc2 that resolve some reported problems. All have been in linux-next, full details are in the shortlog below"

* tag 'tty-4.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty:
  serial: export fsl8250_handle_irq
  serial: 8250_mid: Add missing dependency
  tty: audit: Fix audit source
  serial: etraxfs-uart: Fix crash
  serial: fsl_lpuart: Fix earlycon support
  bcm63xx_uart: Use the device name when registering an interrupt
  tty: Fix direct use of tty buffer work
  tty: Fix tty_send_xchar() lock order inversion
-
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging
Linus Torvalds authored
Pull staging/IIO fixes from Greg KH:
"Here are some staging and iio driver fixes for 4.4-rc2. All of these are in response to issues that have been reported and have been in linux-next for a while"

* tag 'staging-4.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging:
  Revert "Staging: wilc1000: coreconfigurator: Drop unneeded wrapper functions"
  iio: adc: xilinx: Fix VREFN scale
  iio: si7020: Swap data byte order
  iio: adc: vf610_adc: Fix division by zero error
  iio:ad7793: Fix ad7785 product ID
  iio: ad5064: Fix ad5629/ad5669 shift
  iio:ad5064: Make sure ad5064_i2c_write() returns 0 on success
  iio: lpc32xx_adc: fix warnings caused by enabling unprepared clock
  staging: iio: select IRQ_WORK for IIO_DUMMY_EVGEN
  vf610_adc: Fix internal temperature calculation
-
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb
Linus Torvalds authored
Pull USB fixes from Greg KH:
"Here are a number of USB fixes and new device ids for 4.4-rc2. All have been in linux-next and the details are in the shortlog"

* tag 'usb-4.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (28 commits)
  usblp: do not set TASK_INTERRUPTIBLE before lock
  USB: MAINTAINERS: cxacru
  usb: kconfig: fix warning of select USB_OTG
  USB: option: add XS Stick W100-2 from 4G Systems
  xhci: Fix a race in usb2 LPM resume, blocking U3 for usb2 devices
  usb: xhci: fix checking ep busy for CFC
  xhci: Workaround to get Intel xHCI reset working more reliably
  usb: chipidea: imx: fix a possible NULL dereference
  usb: chipidea: usbmisc_imx: fix a possible NULL dereference
  usb: chipidea: otg: gadget module load and unload support
  usb: chipidea: debug: disable usb irq while role switch
  ARM: dts: imx27.dtsi: change the clock information for usb
  usb: chipidea: imx: refine clock operations to adapt for all platforms
  usb: gadget: atmel_usba_udc: Expose correct device speed
  usb: musb: enable usb_dma parameter
  usb: phy: phy-mxs-usb: fix a possible NULL dereference
  usb: dwc3: gadget: let us set lower max_speed
  usb: musb: fix tx fifo flush handling
  usb: gadget: f_loopback: fix the warning during the enumeration
  usb: dwc2: host: Fix remote wakeup when not in DWC2_L2
  ...
-
git://git.linux-mips.org/pub/scm/ralf/upstream-linus
Linus Torvalds authored
Pull MIPS fixes from Ralf Baechle:

 - Fix a flood of annoying build warnings

 - A number of fixes for Atheros 79xx platforms

* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
  MIPS: ath79: Add a machine entry for booting OF machines
  MIPS: ath79: Fix the size of the MISC INTC registers in ar9132.dtsi
  MIPS: ath79: Fix the DDR control initialization on ar71xx and ar934x
  MIPS: Fix flood of warnings about comparsion being always true.
-
git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux
Linus Torvalds authored
Pull parisc update from Helge Deller:
"This patchset adds Huge Page and HUGETLBFS support for parisc"

Honestly, the hugepage support should have gone through in the merge window, and is not really an rc-time fix. But it only touches arch/parisc, and I cannot find it in myself to care. If one of the three parisc users notices a breakage, I will point at Helge and make rude farting noises.

* 'parisc-4.4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
  parisc: Map kernel text and data on huge pages
  parisc: Add Huge Page and HUGETLBFS support
  parisc: Use long branch to do_syscall_trace_exit
  parisc: Increase initial kernel mapping to 32MB on 64bit kernel
  parisc: Initialize the fault vector earlier in the boot process.
  parisc: Add defines for Huge page support
  parisc: Drop unused MADV_xxxK_PAGES flags from asm/mman.h
  parisc: Drop definition of start_thread_som for HP-UX SOM binaries
  parisc: Fix wrong comment regarding first pmd entry flags
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull perf tool fixes from Thomas Gleixner:
"A couple of fixes for perf tools:

 - Build system updates

 - Plug a memory leak in an error path of perf probe

 - Tear down probes correctly when adding fails

 - Fixes to the perf symbol handling

 - Fix ordering of event processing in buildid-list

 - Fix per DSO filtering in the histogram browser"

* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf probe: Clear probe_trace_event when add_probe_trace_event() fails
  perf probe: Fix memory leaking on failure by clearing all probe_trace_events
  perf inject: Also re-pipe lost_samples event
  perf buildid-list: Requires ordered events
  perf symbols: Fix dso lookup by long name and missing buildids
  perf symbols: Allow forcing reading of non-root owned files by root
  perf hists browser: The dso can be obtained from popup_action->ms.map->dso
  perf hists browser: Fix 'd' hotkey action to filter by DSO
  perf symbols: Rebuild rbtree when adjusting symbols for kcore
  tools: Add a "make all" rule
  tools: Actually install tmon in the install rule
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull x86 fixes from Thomas Gleixner:
"This update contains:

 - MPX updates for handling 32bit processes

 - A fix for a long standing bug in 32bit signal frame handling related to FPU/XSAVE state

 - Handle get_xsave_addr() correctly in KVM

 - Fix SMAP check under paravirtualization

 - Add a comment to the static function trace entry to avoid further confusion about the difference to dynamic tracing"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/cpu: Fix SMAP check in PVOPS environments
  x86/ftrace: Add comment on static function tracing
  x86/fpu: Fix get_xsave_addr() behavior under virtualization
  x86/fpu: Fix 32-bit signal frame handling
  x86/mpx: Fix 32-bit address space calculation
  x86/mpx: Do proper get_user() when running 32-bit binaries on 64-bit kernels
-
Jesper Dangaard Brouer authored
Adjust kmem_cache_alloc_bulk API before we have any real users. Adjust API to return type 'int' instead of previously type 'bool'. This is done to allow future extension of the bulk alloc API. A future extension could be to allow SLUB to stop at a page boundary, when specified by a flag, and then return the number of objects. The advantage of this approach, would make it easier to make bulk alloc run without local IRQs disabled. With an approach of cmpxchg "stealing" the entire c->freelist or page->freelist. To avoid overshooting we would stop processing at a slab-page boundary. Else we always end up returning some objects at the cost of another cmpxchg. To keep compatible with future users of this API linking against an older kernel when using the new flag, we need to return the number of allocated objects with this API change. Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Cc: Vladimir Davydov <vdavydov@virtuozzo.com> Acked-by: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
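The adjusted prototype and a sketch of caller usage under the semantics described above (the return value is the number of allocated objects, 0 on failure; 'cache' is a placeholder):

    int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
                              size_t size, void **p);

    void *objs[16];

    if (kmem_cache_alloc_bulk(cache, GFP_KERNEL, ARRAY_SIZE(objs), objs) == 0)
            return -ENOMEM;    /* nothing was allocated */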
-
Jesper Dangaard Brouer authored
Initial implementation missed support for kmem cgroup support in kmem_cache_free_bulk() call, add this. If CONFIG_MEMCG_KMEM is not enabled, the compiler should be smart enough to not add any asm code. Incoming bulk free objects can belong to different kmem cgroups, and object free call can happen at a later point outside memcg context. Thus, we need to keep the orig kmem_cache, to correctly verify if a memcg object match against its "root_cache" (s->memcg_params.root_cache). Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Reviewed-by: Vladimir Davydov <vdavydov@virtuozzo.com> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jesper Dangaard Brouer authored
The call slab_pre_alloc_hook() interacts with kmemcg and is not allowed to be called several times inside the bulk alloc for loop, due to the call to memcg_kmem_get_cache(). This would result in hitting the VM_BUG_ON in __memcg_kmem_get_cache. As suggested by Vladimir Davydov, change slab_post_alloc_hook() to be able to handle an array of objects. A subtle detail is, loop iterator "i" in slab_post_alloc_hook() must have the same type (size_t) as the size argument. This helps the compiler to more easily realize that it can remove the loop, when all debug statements inside the loop evaluate to nothing. Note, this is only an issue because the kernel is compiled with the GCC option -fno-strict-overflow. In slab_alloc_node() the compiler inlines and optimizes the invocation of slab_post_alloc_hook(s, flags, 1, &object) by removing the loop and accessing the object directly. Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Reported-by: Vladimir Davydov <vdavydov@virtuozzo.com> Suggested-by: Vladimir Davydov <vdavydov@virtuozzo.com> Reviewed-by: Vladimir Davydov <vdavydov@virtuozzo.com> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
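A sketch of the array-aware hook per the description above; the per-object body is elided and debug_hooks() is a stand-in name:

    static inline void slab_post_alloc_hook(struct kmem_cache *s,
                                            gfp_t flags, size_t size,
                                            void **p)
    {
            size_t i;   /* must match the type of 'size' so the compiler
                         * can remove the loop when the debug calls
                         * compile away */

            for (i = 0; i < size; i++)
                    debug_hooks(s, flags, p[i]);
    }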
-
Jesper Dangaard Brouer authored
This change focuses on improving the speed of object freeing in the "slowpath" of kmem_cache_free_bulk. The calls slab_free (fastpath) and __slab_free (slowpath) have been extended with support for bulk free, which amortizes the overhead of the (locked) cmpxchg_double. To use the new bulking feature, we build what I call a detached freelist. The detached freelist takes advantage of three properties: 1) the free function call owns the object that is about to be freed, thus writing into this memory is synchronization-free. 2) many freelists can co-exist side-by-side in the same slab-page each with a separate head pointer. 3) it is the visibility of the head pointer that needs synchronization. Given these properties, the brilliant part is that the detached freelist can be constructed without any need for synchronization. The freelist is constructed directly in the page objects, without any synchronization needed. The detached freelist is allocated on the stack of the function call kmem_cache_free_bulk. Thus, the freelist head pointer is not visible to other CPUs. All objects in a SLUB freelist must belong to the same slab-page. Thus, constructing the detached freelist is about matching objects that belong to the same slab-page. The bulk free array is scanned in a progressive manner with a limited look-ahead facility. Kmem debug support is handled in the call of slab_free(). Notice kmem_cache_free_bulk no longer needs to disable IRQs. This only slowed down single free bulk with approx 3 cycles.

Performance data: Benchmarked[1] obj size 256 bytes on CPU i7-4790K @ 4.00GHz. SLUB fastpath single object quick reuse: 47 cycles(tsc) 11.931 ns. To get stable and comparable numbers, the kernel has been booted with "slab_nomerge" (this also improves performance for larger bulk sizes).

Performance data, compared against fallback bulking:

bulk - fallback bulk            - improvement with this patch
  1 -  62 cycles(tsc) 15.662 ns -  49 cycles(tsc) 12.407 ns - improved 21.0%
  2 -  55 cycles(tsc) 13.935 ns -  30 cycles(tsc)  7.506 ns - improved 45.5%
  3 -  53 cycles(tsc) 13.341 ns -  23 cycles(tsc)  5.865 ns - improved 56.6%
  4 -  52 cycles(tsc) 13.081 ns -  20 cycles(tsc)  5.048 ns - improved 61.5%
  8 -  50 cycles(tsc) 12.627 ns -  18 cycles(tsc)  4.659 ns - improved 64.0%
 16 -  49 cycles(tsc) 12.412 ns -  17 cycles(tsc)  4.495 ns - improved 65.3%
 30 -  49 cycles(tsc) 12.484 ns -  18 cycles(tsc)  4.533 ns - improved 63.3%
 32 -  50 cycles(tsc) 12.627 ns -  18 cycles(tsc)  4.707 ns - improved 64.0%
 34 -  96 cycles(tsc) 24.243 ns -  23 cycles(tsc)  5.976 ns - improved 76.0%
 48 -  83 cycles(tsc) 20.818 ns -  21 cycles(tsc)  5.329 ns - improved 74.7%
 64 -  74 cycles(tsc) 18.700 ns -  20 cycles(tsc)  5.127 ns - improved 73.0%
128 -  90 cycles(tsc) 22.734 ns -  27 cycles(tsc)  6.833 ns - improved 70.0%
158 -  99 cycles(tsc) 24.776 ns -  30 cycles(tsc)  7.583 ns - improved 69.7%
250 - 104 cycles(tsc) 26.089 ns -  37 cycles(tsc)  9.280 ns - improved 64.4%

Performance data, compared current in-kernel bulking:

bulk - curr in-kernel  - improvement with this patch
  1 -  46 cycles(tsc) -  49 cycles(tsc) - improved (cycles:-3) -6.5%
  2 -  27 cycles(tsc) -  30 cycles(tsc) - improved (cycles:-3) -11.1%
  3 -  21 cycles(tsc) -  23 cycles(tsc) - improved (cycles:-2) -9.5%
  4 -  18 cycles(tsc) -  20 cycles(tsc) - improved (cycles:-2) -11.1%
  8 -  17 cycles(tsc) -  18 cycles(tsc) - improved (cycles:-1) -5.9%
 16 -  18 cycles(tsc) -  17 cycles(tsc) - improved (cycles: 1) 5.6%
 30 -  18 cycles(tsc) -  18 cycles(tsc) - improved (cycles: 0) 0.0%
 32 -  18 cycles(tsc) -  18 cycles(tsc) - improved (cycles: 0) 0.0%
 34 -  78 cycles(tsc) -  23 cycles(tsc) - improved (cycles:55) 70.5%
 48 -  60 cycles(tsc) -  21 cycles(tsc) - improved (cycles:39) 65.0%
 64 -  49 cycles(tsc) -  20 cycles(tsc) - improved (cycles:29) 59.2%
128 -  69 cycles(tsc) -  27 cycles(tsc) - improved (cycles:42) 60.9%
158 -  79 cycles(tsc) -  30 cycles(tsc) - improved (cycles:49) 62.0%
250 -  86 cycles(tsc) -  37 cycles(tsc) - improved (cycles:49) 57.0%

Performance with normal SLUB merging is significantly slower for larger bulking. This is believed to (primarily) be an effect of not having to share the per-CPU data-structures, as tuning per-CPU size can achieve similar performance.

bulk - slab_nomerge    - normal SLUB merge
  1 -  49 cycles(tsc) -  49 cycles(tsc) - merge slower with cycles:0
  2 -  30 cycles(tsc) -  30 cycles(tsc) - merge slower with cycles:0
  3 -  23 cycles(tsc) -  23 cycles(tsc) - merge slower with cycles:0
  4 -  20 cycles(tsc) -  20 cycles(tsc) - merge slower with cycles:0
  8 -  18 cycles(tsc) -  18 cycles(tsc) - merge slower with cycles:0
 16 -  17 cycles(tsc) -  17 cycles(tsc) - merge slower with cycles:0
 30 -  18 cycles(tsc) -  23 cycles(tsc) - merge slower with cycles:5
 32 -  18 cycles(tsc) -  22 cycles(tsc) - merge slower with cycles:4
 34 -  23 cycles(tsc) -  22 cycles(tsc) - merge slower with cycles:-1
 48 -  21 cycles(tsc) -  22 cycles(tsc) - merge slower with cycles:1
 64 -  20 cycles(tsc) -  48 cycles(tsc) - merge slower with cycles:28
128 -  27 cycles(tsc) -  57 cycles(tsc) - merge slower with cycles:30
158 -  30 cycles(tsc) -  59 cycles(tsc) - merge slower with cycles:29
250 -  37 cycles(tsc) -  56 cycles(tsc) - merge slower with cycles:19

Joint work with Alexander Duyck.

[1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/mm/slab_bulk_test01.c

[akpm@linux-foundation.org: BUG_ON -> WARN_ON;return] Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Acked-by: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
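A conceptual sketch of the detached freelist described above; field names approximate the description rather than quoting mm/slub.c:

    struct detached_freelist {
            struct page *page;  /* slab page all objects belong to */
            void *tail;         /* last object, for splicing */
            void *freelist;     /* head; lives on the caller's stack,
                                 * so invisible to other CPUs */
            int cnt;            /* number of objects linked */
    };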
-
Jesper Dangaard Brouer authored
Make it possible to free a freelist with several objects by adjusting API of slab_free() and __slab_free() to have head, tail and an objects counter (cnt). Tail being NULL indicate single object free of head object. This allow compiler inline constant propagation in slab_free() and slab_free_freelist_hook() to avoid adding any overhead in case of single object free. This allows a freelist with several objects (all within the same slab-page) to be free'ed using a single locked cmpxchg_double in __slab_free() and with an unlocked cmpxchg_double in slab_free(). Object debugging on the free path is also extended to handle these freelists. When CONFIG_SLUB_DEBUG is enabled it will also detect if objects don't belong to the same slab-page. These changes are needed for the next patch to bulk free the detached freelists it introduces and constructs. Micro benchmarking showed no performance reduction due to this change, when debugging is turned off (compiled with CONFIG_SLUB_DEBUG). Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Acked-by: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
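A sketch of the extended signature as described above; tail == NULL marks a single-object free, so constant propagation strips the freelist handling from that path:

    static __always_inline void slab_free(struct kmem_cache *s,
                                          struct page *page,
                                          void *head, void *tail,
                                          int cnt, unsigned long addr);

    /* Single-object free: head is the object, no tail, count of one. */
    slab_free(s, page, object, NULL, 1, _RET_IP_);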
-
Helge Deller authored
Adjust the linker script and map_pages() to map kernel text and data on physical 1MB huge/large pages. Signed-off-by: Helge Deller <deller@gmx.de>
-
Helge Deller authored
This patch adds huge page support to allow userspace to allocate huge pages and to use hugetlbfs filesystem on 32- and 64-bit Linux kernels. A later patch will add kernel support to map kernel text and data on huge pages. The only requirement is, that the kernel needs to be compiled for a PA8X00 CPU (PA2.0 architecture). Older PA1.X CPUs do not support variable page sizes. 64bit Kernels are compiled for PA2.0 by default. Technically on parisc multiple physical huge pages may be needed to emulate standard 2MB huge pages. Signed-off-by: Helge Deller <deller@gmx.de>
-