1. 19 Aug, 2016 12 commits
    • iio:ad7266: Fix support for optional regulators · 801044f8
      Mark Brown authored
      commit e5511c81 upstream.
      
      The ad7266 driver attempts to support deciding between the use of internal
      and external power supplies by checking to see if an error is returned when
      requesting the regulator. This doesn't work with the current code since the
      driver uses a normal regulator_get() which is for non-optional supplies
      and so assumes that if a regulator is not provided by the platform then
      this is a bug in the platform integration and so substitutes a dummy
      regulator. Use regulator_get_optional() instead, which indicates to the
      framework that the regulator may genuinely be absent and returns an error
      rather than a dummy regulator, letting the driver fall back to the
      internal reference.
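      A minimal sketch of the optional-supply pattern (the "vref" name and the
      state struct are illustrative, not the driver's exact code):

        #include <linux/err.h>
        #include <linux/regulator/consumer.h>

        struct example_state {
        	struct regulator *reg;	/* NULL means: use the internal reference */
        };

        static int example_get_vref(struct device *dev, struct example_state *st)
        {
        	st->reg = regulator_get_optional(dev, "vref");
        	if (!IS_ERR(st->reg))
        		return regulator_enable(st->reg);	/* external supply */

        	/* Propagate real errors (e.g. -EPROBE_DEFER); an absent supply is
        	 * typically reported as -ENODEV. */
        	if (PTR_ERR(st->reg) != -ENODEV)
        		return PTR_ERR(st->reg);

        	st->reg = NULL;
        	return 0;
        }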
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Jonathan Cameron <jic23@kernel.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      801044f8
    • iio:ad7266: Fix broken regulator error handling · bd2f349d
      Mark Brown authored
      commit 6b7f4e25 upstream.
      
      All regulator_get() variants return either a pointer to a regulator or an
      ERR_PTR() so testing for NULL makes no sense and may lead to bugs if we
      use NULL as a valid regulator. Fix this by using IS_ERR() as expected.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Jonathan Cameron <jic23@kernel.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      bd2f349d
    • iio: accel: kxsd9: fix the usage of spi_w8r8() · 95ac1169
      Linus Walleij authored
      commit 0c1f91b9 upstream.
      
      These two spi_w8r8() calls return a value which is used by the code
      following the error check. The dubious use was caused by a cleanup
      patch.
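      The contract being relied on, as a sketch (the register and mask below
      are placeholders, not the kxsd9 definitions):

        #include <linux/spi/spi.h>

        static int example_read_scale(struct spi_device *spi, u8 reg, u8 *scale)
        {
        	int ret = spi_w8r8(spi, reg);	/* negative = error, else data byte */

        	if (ret < 0)
        		return ret;

        	*scale = ret & 0x03;		/* placeholder mask */
        	return 0;
        }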
      
      Fixes: d34dbee8 ("staging:iio:accel:kxsd9 cleanup and conversion to iio_chan_spec.")
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Jonathan Cameron <jic23@kernel.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      95ac1169
    • staging: iio: accel: fix error check · 14a450ef
      Luis de Bethencourt authored
      commit ef3149eb upstream.
      
      sca3000_read_ctrl_reg() returns a negative number on failure, check for
      this instead of zero.
      Signed-off-by: Luis de Bethencourt <luisbg@osg.samsung.com>
      Signed-off-by: Jonathan Cameron <jic23@kernel.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      14a450ef
    • iio: Fix error handling in iio_trigger_attach_poll_func · 4521af12
      Crestez Dan Leonard authored
      commit 99543823 upstream.
      
      When attaching a pollfunc iio_trigger_attach_poll_func will allocate a
      virtual irq and call the driver's set_trigger_state function. Fix error
      handling to undo previous steps if any fails.
      
      In particular this fixes handling errors from a driver's
      set_trigger_state function. When using triggered buffers a failure to
      enable the trigger used to make the buffer unusable.
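      The shape of the corrected error path, as a sketch with stand-in helpers
      (not the real IIO symbols): each completed step gains a matching unwind
      label.

        #include <linux/types.h>

        struct ex_trigger;
        struct ex_pollfunc;
        int ex_get_virq(struct ex_trigger *t, struct ex_pollfunc *p);
        int ex_request_irq(struct ex_pollfunc *p);
        int ex_set_trigger_state(struct ex_trigger *t, bool on);
        void ex_free_irq(struct ex_pollfunc *p);
        void ex_put_virq(struct ex_trigger *t, struct ex_pollfunc *p);

        static int ex_attach_poll_func(struct ex_trigger *trig, struct ex_pollfunc *pf)
        {
        	int ret;

        	ret = ex_get_virq(trig, pf);		/* step 1: allocate virq */
        	if (ret < 0)
        		return ret;

        	ret = ex_request_irq(pf);		/* step 2: request the irq */
        	if (ret < 0)
        		goto out_put_virq;

        	ret = ex_set_trigger_state(trig, true);	/* step 3: enable trigger */
        	if (ret < 0)
        		goto out_free_irq;

        	return 0;

        out_free_irq:
        	ex_free_irq(pf);
        out_put_virq:
        	ex_put_virq(trig, pf);
        	return ret;
        }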
      Signed-off-by: Crestez Dan Leonard <leonard.crestez@intel.com>
      Signed-off-by: Jonathan Cameron <jic23@kernel.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      4521af12
    • drm/i915/ilk: Don't disable SSC source if it's in use · fb3bb94d
      Lyude authored
      commit 476490a9 upstream.
      
      Thanks to Ville Syrjälä for pointing me towards the cause of this issue.
      
      Unfortunately one of the side effects of having the refclk for a DPLL set
      to SSC is that as long as it's set to SSC, the GPU will prevent us from
      powering down any of the pipes or transcoders using it. A couple of
      BIOSes enable SSC in both PCH_DREF_CONTROL and in the DPLL
      configurations. This causes issues on the first modeset, since we don't
      expect SSC to be left on and as a result, can't successfully power down
      the pipes or the transcoders using it. Here's an example from this Dell
      OptiPlex 990:
      
      [drm:intel_modeset_init] SSC enabled by BIOS, overriding VBT which says disabled
      [drm:intel_modeset_init] 2 display pipes available.
      [drm:intel_update_cdclk] Current CD clock rate: 400000 kHz
      [drm:intel_update_max_cdclk] Max CD clock rate: 400000 kHz
      [drm:intel_update_max_cdclk] Max dotclock rate: 360000 kHz
      vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem
      [drm:intel_crt_reset] crt adpa set to 0xf40000
      [drm:intel_dp_init_connector] Adding DP connector on port C
      [drm:intel_dp_aux_init] registering DPDDC-C bus for card0-DP-1
      [drm:ironlake_init_pch_refclk] has_panel 0 has_lvds 0 has_ck505 0
      [drm:ironlake_init_pch_refclk] Disabling SSC entirely
      … later we try committing the first modeset …
      [drm:intel_dump_pipe_config] [CRTC:26][modeset] config ffff88041b02e800 for pipe A
      [drm:intel_dump_pipe_config] cpu_transcoder: A
      …
      [drm:intel_dump_pipe_config] dpll_hw_state: dpll: 0xc4016001, dpll_md: 0x0, fp0: 0x20e08, fp1: 0x30d07
      [drm:intel_dump_pipe_config] planes on this crtc
      [drm:intel_dump_pipe_config] STANDARD PLANE:23 plane: 0.0 idx: 0 enabled
      [drm:intel_dump_pipe_config]     FB:42, fb = 800x600 format = 0x34325258
      [drm:intel_dump_pipe_config]     scaler:0 src (0, 0) 800x600 dst (0, 0) 800x600
      [drm:intel_dump_pipe_config] CURSOR PLANE:25 plane: 0.1 idx: 1 disabled, scaler_id = 0
      [drm:intel_dump_pipe_config] STANDARD PLANE:27 plane: 0.1 idx: 2 disabled, scaler_id = 0
      [drm:intel_get_shared_dpll] CRTC:26 allocated PCH DPLL A
      [drm:intel_get_shared_dpll] using PCH DPLL A for pipe A
      [drm:ilk_audio_codec_disable] Disable audio codec on port C, pipe A
      [drm:intel_disable_pipe] disabling pipe A
      ------------[ cut here ]------------
      WARNING: CPU: 1 PID: 130 at drivers/gpu/drm/i915/intel_display.c:1146 intel_disable_pipe+0x297/0x2d0 [i915]
      pipe_off wait timed out
      …
      ---[ end trace 94fc8aa03ae139e8 ]---
      [drm:intel_dp_link_down]
      [drm:ironlake_crtc_disable [i915]] *ERROR* failed to disable transcoder A
      
      Later modesets succeed since they reset the DPLL's configuration anyway,
      but this is enough to get stuck with a big fat warning in dmesg.
      
      A better solution would be to add refcounts for the SSC source, but for
      now leaving the source clock on should suffice.
      
      Changes since v4:
       - Fix calculation of final for systems with LVDS panels (fixes BUG() on
         CI test suite)
      Changes since v3:
       - Move temp variable into loop
       - Move checks for using_ssc_source to after we've figured out has_ck505
       - Add using_ssc_source to debug output
      Changes since v2:
       - Fix debug output for when we disable the CPU source
      Changes since v1:
       - Leave the SSC source clock on instead of just shutting it off on all
         of the DPLL configurations.
      Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Signed-off-by: Lyude <cpaul@redhat.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Link: http://patchwork.freedesktop.org/patch/msgid/1465916649-10228-1-git-send-email-cpaul@redhat.com
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      fb3bb94d
    • drm/radeon: fix asic initialization for virtualized environments · a5591555
      Alex Deucher authored
      commit 05082b8b upstream.
      
      When executing in a PCI passthrough based virtualization environment, the
      hypervisor will usually attempt to send a PCIe bus reset signal to the
      ASIC when the VM reboots. In this scenario, the card is not correctly
      initialized, but we still consider it to be posted. Therefore, in a
      passthrough based environment we should always post the card to guarantee
      it is in a good state for driver initialization.
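      A sketch of the detection idea only (the helper name is made up; on x86
      the CPUID hypervisor bit is what gives the guest away):

        #include <linux/types.h>
        #include <asm/cpufeature.h>	/* boot_cpu_has(), X86_FEATURE_HYPERVISOR */

        /* Illustrative only: when this returns true, the "is the card already
         * posted?" check is skipped so the driver always (re)posts the ASIC. */
        static bool example_running_under_hypervisor(void)
        {
        #ifdef CONFIG_X86
        	return boot_cpu_has(X86_FEATURE_HYPERVISOR);
        #else
        	return false;
        #endif
        }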
      
      Ported from amdgpu commit:
      amdgpu: fix asic initialization for virtualized environments
      
      Cc: Andres Rodriguez <andres.rodriguez@amd.com>
      Cc: Alex Williamson <alex.williamson@redhat.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      a5591555
    • tracing: Handle NULL formats in hold_module_trace_bprintk_format() · b16e8932
      Steven Rostedt (Red Hat) authored
      commit 70c8217a upstream.
      
      If a task uses a non-constant string for the format parameter in
      trace_printk(), then the trace_printk_fmt variable is set to NULL. This
      variable is then saved in the __trace_printk_fmt section.
      
      The function hold_module_trace_bprintk_format() checks to see if duplicate
      formats are used by modules, and reuses them if so (saves them to the list
      if it is new). But this function calls lookup_format() that does a strcmp()
      to the value (which is now NULL) and can cause a kernel oops.
      
      This wasn't an issue till 3debb0a9 ("tracing: Fix trace_printk() to print
      when not using bprintk()") which added "__used" to the trace_printk_fmt
      variable, and before that, the kernel simply optimized it out (no NULL value
      was saved).
      
      The fix is simply to handle the NULL pointer in lookup_format() and have the
      caller ignore the value if it was NULL.
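      Roughly the shape of that lookup after the fix, with simplified names;
      the caller treats an ERR_PTR() result as "leave this entry alone":

        #include <linux/err.h>
        #include <linux/list.h>
        #include <linux/string.h>

        struct example_bprintk_fmt {
        	struct list_head list;
        	char *fmt;
        };
        static LIST_HEAD(example_fmt_list);

        static struct example_bprintk_fmt *example_lookup_format(const char *fmt)
        {
        	struct example_bprintk_fmt *pos;

        	if (!fmt)	/* non-constant trace_printk() format: nothing to match */
        		return ERR_PTR(-EINVAL);

        	list_for_each_entry(pos, &example_fmt_list, list)
        		if (!strcmp(pos->fmt, fmt))
        			return pos;

        	return NULL;
        }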
      
      Link: http://lkml.kernel.org/r/1464769870-18344-1-git-send-email-zhengjun.xing@intel.com
      Reported-by: xingzhen <zhengjun.xing@intel.com>
      Acked-by: Namhyung Kim <namhyung@kernel.org>
      Fixes: 3debb0a9 ("tracing: Fix trace_printk() to print when not using bprintk()")
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      b16e8932
    • kvm: Fix irq route entries exceeding KVM_MAX_IRQ_ROUTES · 08ad57e2
      Xiubo Li authored
      commit caf1ff26 upstream.
      
      Recently, we experienced a guest crash with 8 cores and 3 disks,
      with qemu error logs as below:
      
      qemu-system-x86_64: /build/qemu-2.0.0/kvm-all.c:984:
      kvm_irqchip_commit_routes: Assertion `ret == 0' failed.
      
      We then found a patch (bdf026317d) in the qemu tree which was said to
      fix this bug.
      
      Executing the following script will reproduce the BUG quickly:
      
      irq_affinity.sh
      ========================================================================
      
      vda_irq_num=25
      vdb_irq_num=27
      while [ 1 ]
      do
          for irq in {1,2,4,8,10,20,40,80}
              do
                  echo $irq > /proc/irq/$vda_irq_num/smp_affinity
                  echo $irq > /proc/irq/$vdb_irq_num/smp_affinity
                  dd if=/dev/vda of=/dev/zero bs=4K count=100 iflag=direct
                  dd if=/dev/vdb of=/dev/zero bs=4K count=100 iflag=direct
              done
      done
      ========================================================================
      
      The following qemu log was added to the qemu code and is displayed when
      this bug is reproduced:
      
      kvm_irqchip_commit_routes: max gsi: 1008, nr_allocated_irq_routes: 1024,
      irq_routes->nr: 1024, gsi_count: 1024.
      
      That is to say, when irq_routes->nr == 1024 there are 1024 routing
      entries, but the kernel code returns -EINVAL as soon as routes->nr >= 1024.

      nr is the number of routing entries, which lies in [1 ~ KVM_MAX_IRQ_ROUTES],
      not an index in [0 ~ KVM_MAX_IRQ_ROUTES - 1].

      This patch fixes the BUG above.
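      The off-by-one in a nutshell (the constant below merely stands in for
      KVM_MAX_IRQ_ROUTES):

        #include <linux/errno.h>

        #define EXAMPLE_MAX_IRQ_ROUTES	1024	/* stands in for KVM_MAX_IRQ_ROUTES */

        static int example_set_irq_routing(unsigned int nr)
        {
        	/* nr counts entries (1..MAX); only values above the limit are bad.
        	 * The old ">=" check wrongly rejected a full table of 1024 entries. */
        	if (nr > EXAMPLE_MAX_IRQ_ROUTES)
        		return -EINVAL;

        	/* ... commit the nr routing entries ... */
        	return 0;
        }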
      Signed-off-by: Xiubo Li <lixiubo@cmss.chinamobile.com>
      Signed-off-by: Wei Tang <tangwei@cmss.chinamobile.com>
      Signed-off-by: Zhang Zhuoyu <zhangzhuoyu@cmss.chinamobile.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      08ad57e2
    • libceph: apply new_state before new_up_client on incrementals · bbc3aa6b
      Ilya Dryomov authored
      commit 930c5328 upstream.
      
      Currently, osd_weight and osd_state fields are updated in the encoding
      order.  This is wrong, because an incremental map may look like e.g.
      
          new_up_client: { osd=6, addr=... } # set osd_state and addr
          new_state: { osd=6, xorstate=EXISTS } # clear osd_state
      
      Suppose osd6's current osd_state is EXISTS (i.e. osd6 is down).  After
      applying new_up_client, osd_state is changed to EXISTS | UP.  Carrying
      on with the new_state update, we flip EXISTS and leave osd6 in a weird
      "!EXISTS but UP" state.  A non-existent OSD is considered down by the
      mapping code
      
      2087    for (i = 0; i < pg->pg_temp.len; i++) {
      2088            if (ceph_osd_is_down(osdmap, pg->pg_temp.osds[i])) {
      2089                    if (ceph_can_shift_osds(pi))
      2090                            continue;
      2091
      2092                    temp->osds[temp->size++] = CRUSH_ITEM_NONE;
      
      and so requests get directed to the second OSD in the set instead of
      the first, resulting in OSD-side errors like:
      
      [WRN] : client.4239 192.168.122.21:0/2444980242 misdirected client.4239.1:2827 pg 2.5df899f2 to osd.4 not [1,4,6] in e680/680
      
      and hung rbds on the client:
      
      [  493.566367] rbd: rbd0: write 400000 at 11cc00000 (0)
      [  493.566805] rbd: rbd0:   result -6 xferred 400000
      [  493.567011] blk_update_request: I/O error, dev rbd0, sector 9330688
      
      The fix is to decouple application from the decoding and:
      - apply new_weight first
      - apply new_state before new_up_client
      - twiddle osd_state flags if marking in
      - clear out some of the state if osd is destroyed
      
      Fixes: http://tracker.ceph.com/issues/14901
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
      Reviewed-by: Josh Durgin <jdurgin@redhat.com>
      [idryomov@gmail.com: backport to 3.10-3.14: strip primary-affinity]
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      bbc3aa6b
    • libceph: set 'exists' flag for newly up osd · 2ffb16a7
      Yan, Zheng authored
      commit 6dd74e44 upstream.
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
      Reviewed-by: Sage Weil <sage@redhat.com>
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
      Cc: Ilya Dryomov <idryomov@gmail.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      2ffb16a7
    • netfilter: x_tables: speed up jump target validation · 26bb96ed
      Florian Westphal authored
      commit f4dc7771 upstream.
      
      The dummy ruleset I used to test the original validation change was broken,
      most rules were unreachable and were not tested by mark_source_chains().
      
      In some cases rulesets that used to load in a few seconds now require
      several minutes.
      
      sample ruleset that shows the behaviour:
      
      echo "*filter"
      for i in $(seq 0 100000);do
              printf ":chain_%06x - [0:0]\n" $i
      done
      for i in $(seq 0 100000);do
         printf -- "-A INPUT -j chain_%06x\n" $i
         printf -- "-A INPUT -j chain_%06x\n" $i
         printf -- "-A INPUT -j chain_%06x\n" $i
      done
      echo COMMIT
      
      [ pipe result into iptables-restore ]
      
      This ruleset will be about 74 MB in size, with ~500k searches through
      all 500k[1] rule entries. iptables-restore will take forever (I gave up
      after 10 minutes).
      
      Instead of always searching the entire blob for a match, fill an
      array with the start offsets of every single ipt_entry struct,
      then do a binary search to check if the jump target is present or not.
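      A sketch of that lookup (offsets are collected in blob order, so the
      array is already sorted; the names are illustrative):

        #include <linux/bsearch.h>
        #include <linux/types.h>

        static int example_cmp_u32(const void *a, const void *b)
        {
        	const u32 x = *(const u32 *)a, y = *(const u32 *)b;

        	return x < y ? -1 : x > y;
        }

        static bool example_jump_target_valid(const u32 *offsets, unsigned int count,
        				      u32 target)
        {
        	return bsearch(&target, offsets, count, sizeof(*offsets),
        		       example_cmp_u32) != NULL;
        }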
      
      After this change ruleset restore times get again close to what one
      gets when reverting 36472341 (~3 seconds on my workstation).
      
      [1] every user-defined rule gets an implicit RETURN, so we get
      300k jumps + 100k userchains + 100k returns -> 500k rule entries
      
      Fixes: 36472341 ("netfilter: x_tables: validate targets of jumps")
      Reported-by: Jeff Wu <wujiafu@gmail.com>
      Tested-by: Jeff Wu <wujiafu@gmail.com>
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      26bb96ed
  2. 22 Jul, 2016 3 commits
  3. 21 Jul, 2016 25 commits
    • um: Stop abusing __KERNEL__ · ad6896a7
      Richard Weinberger authored
      commit 298e20ba upstream.
      
      Currently UML is abusing __KERNEL__ to distinguish between
      kernel and host code (os-Linux). It is better to use a custom
      define such that existing users of __KERNEL__ don't get confused.
      Signed-off-by: Richard Weinberger <richard@nod.at>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      ad6896a7
    • printk: do cond_resched() between lines while outputting to consoles · d47e078e
      Tejun Heo authored
      commit 8d91f8b1 upstream.
      
      @console_may_schedule tracks whether console_sem was acquired through
      lock or trylock.  If the former, we're inside a sleepable context and
      console_conditional_schedule() performs cond_resched().  This allows
      console drivers which use console_lock for synchronization to yield
      while performing time-consuming operations such as scrolling.
      
      However, the actual console outputting is performed while holding
      irq-safe logbuf_lock, so console_unlock() clears @console_may_schedule
      before starting outputting lines.  Also, only a few drivers call
      console_conditional_schedule() to begin with.  This means that when a
      lot of lines need to be output by console_unlock(), for example on a
      console registration, the task doing console_unlock() may not yield for
      a long time on a non-preemptible kernel.
      
      If this happens with a slow console device, for example a serial
      console, the outputting task may occupy the CPU for a very long time.
      Long enough to trigger softlockup and/or RCU stall warnings, which in
      turn pile more messages, sometimes enough to trigger the next cycle of
      warnings incapacitating the system.
      
      Fix it by making console_unlock() insert cond_resched() between lines if
      @console_may_schedule.
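      Heavily simplified, the loop now looks like this (the helpers are
      stand-ins for the real logbuf/console machinery):

        #include <linux/sched.h>
        #include <linux/types.h>

        bool example_more_log_lines(void);
        void example_emit_one_line(void);	/* does the irq-safe printing */

        static void example_flush_consoles(bool may_schedule)
        {
        	while (example_more_log_lines()) {
        		example_emit_one_line();

        		if (may_schedule)	/* console_sem taken via lock, not trylock */
        			cond_resched();
        	}
        }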
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Calvin Owens <calvinowens@fb.com>
      Acked-by: Jan Kara <jack@suse.com>
      Cc: Dave Jones <davej@codemonkey.org.uk>
      Cc: Kyle McMartin <kyle@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Charles (Chas) Williams <ciwillia@brocade.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      d47e078e
    • panic: release stale console lock to always get the logbuf printed out · 35e182c9
      Vitaly Kuznetsov authored
      commit 08d78658 upstream.
      
      In some cases we may end up killing the CPU holding the console lock
      while still having valuable data in logbuf. E.g. I'm observing the
      following:
      
      - A crash is happening on one CPU and console_unlock() is being called on
        some other.
      
      - console_unlock() tries to print out the buffer before releasing the lock
        and on slow console it takes time.
      
      - in the meanwhile the crashing CPU does lots of printk()s with valuable
        data (which go to the logbuf) and sends IPIs to all other CPUs.
      
      - console_unlock() finishes printing the previous chunk and enables
        interrupts before trying to print out the rest; the CPU catches the IPI
        and never releases the console lock.
      
      This is not the only possible case: in VT/fb subsystems we have many other
      console_lock()/console_unlock() users.  Non-masked interrupts (or
      receiving NMI in case of extreme slowness) will have the same result.
      Getting the whole console buffer printed out on crash should be top
      priority.
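      The essence of the change, as a sketch of what would run near the end of
      panic() (simplified, not the exact upstream hunk):

        #include <linux/console.h>

        static void example_flush_logbuf_on_panic(void)
        {
        	/* Try to take console_sem; if the previous owner was stopped while
        	 * holding it, this fails, but with every other CPU halted it is
        	 * safe to release the stale lock anyway. console_unlock() then
        	 * prints whatever is still queued in logbuf. */
        	console_trylock();
        	console_unlock();
        }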
      
      [akpm@linux-foundation.org: tweak comment text]
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Xie XiuQi <xiexiuqi@huawei.com>
      Cc: Seth Jennings <sjenning@redhat.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      35e182c9
    • mm: migrate dirty page without clear_page_dirty_for_io etc · 2c789028
      Hugh Dickins authored
      commit 42cb14b1 upstream.
      
      clear_page_dirty_for_io() has accumulated writeback and memcg subtleties
      since v2.6.16 first introduced page migration; and the set_page_dirty()
      which completed its migration of PageDirty, later had to be moderated to
      __set_page_dirty_nobuffers(); then PageSwapBacked had to skip that too.
      
      No actual problems seen with this procedure recently, but if you look into
      what the clear_page_dirty_for_io(page)+set_page_dirty(newpage) is actually
      achieving, it turns out to be nothing more than moving the PageDirty flag,
      and its NR_FILE_DIRTY stat from one zone to another.
      
      It would be good to avoid a pile of irrelevant decrementations and
      incrementations, and improper event counting, and unnecessary descent of
      the radix_tree under tree_lock (to set the PAGECACHE_TAG_DIRTY which
      radix_tree_replace_slot() left in place anyway).
      
      Do the NR_FILE_DIRTY movement, like the other stats movements, while
      interrupts still disabled in migrate_page_move_mapping(); and don't even
      bother if the zone is the same.  Do the PageDirty movement there under
      tree_lock too, where old page is frozen and newpage not yet visible:
      bearing in mind that as soon as newpage becomes visible in radix_tree, an
      un-page-locked set_page_dirty() might interfere (or perhaps that's just
      not possible: anything doing so should already hold an additional
      reference to the old page, preventing its migration; but play safe).
      
      But we do still need to transfer PageDirty in migrate_page_copy(), for
      those who don't go the mapping route through migrate_page_move_mapping().
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Charles (Chas) Williams <ciwillia@brocade.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      2c789028
    • x86/mm: Add barriers and document switch_mm()-vs-flush synchronization · aa8f21d0
      Andy Lutomirski authored
      commit 71b3c126 upstream.
      
      When switch_mm() activates a new PGD, it also sets a bit that
      tells other CPUs that the PGD is in use so that TLB flush IPIs
      will be sent.  In order for that to work correctly, the bit
      needs to be visible prior to loading the PGD and therefore
      starting to fill the local TLB.
      
      Document all the barriers that make this work correctly and add
      a couple that were missing.
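      A simplified sketch of the ordering being documented (the real code leans
      on the barrier implied by the atomic cpumask update and only adds explicit
      barriers where that does not hold):

        #include <linux/cpumask.h>
        #include <linux/mm_types.h>
        #include <asm/barrier.h>
        #include <asm/processor.h>	/* load_cr3() on x86 */

        static void example_activate_mm(struct mm_struct *next, int cpu)
        {
        	cpumask_set_cpu(cpu, mm_cpumask(next));	/* 1: announce this CPU */
        	smp_mb();				/* 2: bit visible before PGD load */
        	load_cr3(next->pgd);			/* 3: start filling the TLB */
        }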
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mm@kvack.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: Charles (Chas) Williams <ciwillia@brocade.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      aa8f21d0
    • Linux 3.12.62 · a656195a
      Jiri Slaby authored
      a656195a
    • signal: remove warning about using SI_TKILL in rt_[tg]sigqueueinfo · 2b012f59
      Vladimir Davydov authored
      commit 69828dce upstream.
      
      Sending SI_TKILL from rt_[tg]sigqueueinfo was deprecated, so now we issue
      a warning on the first attempt of doing it.  We use WARN_ON_ONCE, which is
      not informative and, what is worse, taints the kernel, making the trinity
      syscall fuzzer complain false-positively from time to time.
      
      It does not look like we need this warning at all, because the behaviour
      changed quite a long time ago (2.6.39), and if an application relies on
      the old API, it gets EPERM anyway and can issue a warning by itself.
      
      So let us zap the warning in kernel.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Acked-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      2b012f59
    • MIPS: KVM: Fix modular KVM under QEMU · 948546f8
      James Hogan authored
      commit 797179bc upstream.
      
      Copy __kvm_mips_vcpu_run() into unmapped memory, so that we can never
      get a TLB refill exception in it when KVM is built as a module.
      
      This was observed to happen with the host MIPS kernel running under
      QEMU, due to a not entirely transparent optimisation in the QEMU TLB
      handling where TLB entries replaced with TLBWR are copied to a separate
      part of the TLB array. Code in those pages continues to be executable,
      but those mappings persist only until the next ASID switch, even if they
      are marked global.
      
      An ASID switch happens in __kvm_mips_vcpu_run() at exception level after
      switching to the guest exception base. Subsequent TLB mapped kernel
      instructions just prior to switching to the guest trigger a TLB refill
      exception, which enters the guest exception handlers without updating
      EPC. This appears as a guest triggered TLB refill on a host kernel
      mapped (host KSeg2) address, which is not handled correctly as user
      (guest) mode accesses to kernel (host) segments always generate address
      error exceptions.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: kvm@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      [james.hogan@imgtec.com: backported for stable 3.14]
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      948546f8
    • cdc_ncm: workaround for EM7455 "silent" data interface · 2cb8ebaa
      Bjørn Mork authored
      [ Upstream commit c086e709 ]
      
      Several Lenovo users have reported problems with their Sierra
      Wireless EM7455 modem. The driver has loaded successfully and
      the MBIM management channel has appeared to work, including
      establishing a connection to the mobile network. But no frames
      have been received over the data interface.
      
      The problem affects all EM7455 and MC7455, and is assumed to
      affect other modems based on the same Qualcomm chipset and
      baseband firmware.
      
      Testing narrowed the problem down to what seems to be a
      firmware timing bug during initialization. Adding a short sleep
      while probing is sufficient to make the problem disappear.
      Experiments have shown that 1-2 ms is too little to have any
      effect, while 10-20 ms is enough to reliably succeed.
      Reported-by: Stefan Armbruster <ml001@armbruster-it.de>
      Reported-by: Ralph Plawetzki <ralph@purejava.org>
      Reported-by: Andreas Fett <andreas.fett@secunet.com>
      Reported-by: Rasmus Lerdorf <rasmus@lerdorf.com>
      Reported-by: Samo Ratnik <samo.ratnik@gmail.com>
      Reported-and-tested-by: Aleksander Morgado <aleksander@aleksander.es>
      Signed-off-by: Bjørn Mork <bjorn@mork.no>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      2cb8ebaa
    • HID: elo: kill not flush the work · 5925ce0a
      Oliver Neukum authored
      commit ed596a4a upstream.
      
      Flushing a work that reschedules itself is not a sensible operation. It needs
      to be killed. Failure to do so leads to a kernel panic in the timer code.
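      The general pattern, as a sketch (the work item here is illustrative):

        #include <linux/workqueue.h>

        static void example_stop_polling(struct delayed_work *work)
        {
        	/* Flushing only waits for one run and lets the handler re-arm
        	 * itself; cancel_delayed_work_sync() waits and prevents re-arming. */
        	cancel_delayed_work_sync(work);
        }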
      Signed-off-by: Oliver Neukum <ONeukum@suse.com>
      Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      5925ce0a
    • ALSA: compress: fix an integer overflow check · 9deea4dd
      Dan Carpenter authored
      commit 6217e5ed upstream.
      
      I previously added an integer overflow check here but looking at it now,
      it's still buggy.
      
      The bug happens in snd_compr_allocate_buffer().  We multiply
      ".fragments" and ".fragment_size" and that doesn't overflow but then we
      save it in an unsigned int so it truncates the high bits away and we
      allocate a smaller than expected size.
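      A sketch of an overflow check sized to the destination type (illustrative,
      not the exact ALSA code): since the product ends up in an unsigned int,
      the bound has to be UINT_MAX rather than SIZE_MAX.

        #include <linux/errno.h>
        #include <linux/kernel.h>	/* UINT_MAX */
        #include <linux/types.h>

        static int example_check_fragments(u32 fragments, u32 fragment_size)
        {
        	if (fragment_size == 0 ||
        	    fragments > UINT_MAX / fragment_size)
        		return -EINVAL;

        	return 0;	/* fragments * fragment_size fits in an unsigned int */
        }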
      
      Fixes: b35cc822 ('ALSA: compress_core: integer overflow in snd_compr_allocate_buffer()')
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      9deea4dd
    • HID: hiddev: validate num_values for HIDIOCGUSAGES, HIDIOCSUSAGES commands · 5b900329
      Scott Bauer authored
      commit 93a2001b upstream.
      
      This patch validates the num_values parameter from userland during the
      HIDIOCGUSAGES and HIDIOCSUSAGES commands. Previously, if the report id was set
      to HID_REPORT_ID_UNKNOWN, we would fail to validate the num_values parameter
      leading to a heap overflow.
      Signed-off-by: Scott Bauer <sbauer@plzdonthack.me>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      5b900329
    • mm/swap.c: flush lru pvecs on compound page arrival · 93257ab3
      Lukasz Odzioba authored
      commit 8f182270 upstream.
      
      Currently we can have compound pages held on per-cpu pagevecs, which
      leads to a lot of memory unavailable for reclaim when needed.  On systems
      with hundreds of processors it can be GBs of memory.

      One way of reproducing the problem is to not call munmap explicitly on
      all mapped regions (i.e.  after receiving SIGTERM).  After that some
      pages (with THP enabled, also huge pages) may end up on lru_add_pvec;
      example below.
      
        #include <string.h>
        #include <sys/mman.h>

        int main(void) {
        #pragma omp parallel
        {
      	size_t size = 55 * 1000 * 1000; // smaller than MEM/CPUS
      	void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
      		MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      	if (p != MAP_FAILED)
      		memset(p, 0, size);
      	//munmap(p, size); // uncomment to make the problem go away
        }
      	return 0;
        }
      
      When we run it with THP enabled it will leave a significant amount of
      memory on lru_add_pvec.  This memory will not be reclaimed if we hit
      OOM, so when we run the above program in a loop:
      
      	for i in `seq 100`; do ./a.out; done
      
      many processes (95% in my case) will be killed by OOM.
      
      The primary point of the LRU add cache is to save the zone lru_lock
      contention with a hope that more pages will belong to the same zone and
      so their addition can be batched.  The huge page is already a form of
      batched addition (it will add 512 pages' worth of memory in one go) so
      skipping the batching seems like a safer option when compared to a
      potential excess in the caching which can be quite large and much harder
      to fix because lru_add_drain_all is way too expensive and it is not
      really clear what would be a good moment to call it.
      
      Similarly we can reproduce the problem on lru_deactivate_pvec by adding:
      madvise(p, size, MADV_FREE); after memset.
      
      This patch flushes lru pvecs on compound page arrival, making the problem
      less severe - after applying it the kill rate of the above example drops
      to 0%, due to reducing the maximum amount of memory held on a pvec from
      28MB (with THP) to 56kB per CPU.
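      The shape of the fix, simplified (assuming the pagevec helpers of that
      era):

        #include <linux/mm.h>
        #include <linux/page-flags.h>
        #include <linux/pagevec.h>

        static void example_lru_cache_add(struct pagevec *pvec, struct page *page)
        {
        	/* Drain immediately when the pagevec fills up *or* when a compound
        	 * page arrives, so huge pages never linger on the per-cpu cache. */
        	if (!pagevec_add(pvec, page) || PageCompound(page))
        		__pagevec_lru_add(pvec);
        }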
      Suggested-by: Michal Hocko <mhocko@suse.com>
      Link: http://lkml.kernel.org/r/1466180198-18854-1-git-send-email-lukasz.odzioba@intel.com
      Signed-off-by: Lukasz Odzioba <lukasz.odzioba@intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Kirill Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Ming Li <mingli199x@qq.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      93257ab3
    • KVM: x86: expose invariant tsc cpuid bit (v2) · a992f642
      Marcelo Tosatti authored
      commit e4c9a5a1 upstream.
      
      Invariant TSC is a property of TSC, no additional
      support code necessary.
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      a992f642
    • base: make module_create_drivers_dir race-free · 8617fe9e
      Jiri Slaby authored
      commit 7e1b1fc4 upstream.
      
      Modules which register drivers via standard path (driver_register) in
      parallel can cause a warning:
      WARNING: CPU: 2 PID: 3492 at ../fs/sysfs/dir.c:31 sysfs_warn_dup+0x62/0x80
      sysfs: cannot create duplicate filename '/module/saa7146/drivers'
      Modules linked in: hexium_gemini(+) mxb(+) ...
      ...
      Call Trace:
      ...
       [<ffffffff812e63a2>] sysfs_warn_dup+0x62/0x80
       [<ffffffff812e6487>] sysfs_create_dir_ns+0x77/0x90
       [<ffffffff8140f2c4>] kobject_add_internal+0xb4/0x340
       [<ffffffff8140f5b8>] kobject_add+0x68/0xb0
       [<ffffffff8140f631>] kobject_create_and_add+0x31/0x70
       [<ffffffff8157a703>] module_add_driver+0xc3/0xd0
       [<ffffffff8155e5d4>] bus_add_driver+0x154/0x280
       [<ffffffff815604c0>] driver_register+0x60/0xe0
       [<ffffffff8145bed0>] __pci_register_driver+0x60/0x70
       [<ffffffffa0273e14>] saa7146_register_extension+0x64/0x90 [saa7146]
       [<ffffffffa0033011>] hexium_init_module+0x11/0x1000 [hexium_gemini]
      ...
      
      As can be (mostly) seen, driver_register causes this call sequence:
        -> bus_add_driver
          -> module_add_driver
            -> module_create_drivers_dir
      The last one creates "drivers" directory in /sys/module/<...>. When
      this is done in parallel, the directory is attempted to be created
      twice at the same time.
      
      This can be easily reproduced by loading mxb and hexium_gemini in
      parallel:
      while :; do
        modprobe mxb &
        modprobe hexium_gemini
        wait
        rmmod mxb hexium_gemini saa7146_vv saa7146
      done
      
      saa7146 calls pci_register_driver for both mxb and hexium_gemini,
      which means /sys/module/saa7146/drivers is to be created for both of
      them.
      
      Fix this by a new mutex in module_create_drivers_dir which makes the
      test-and-create "drivers" dir atomic.
      
      I inverted the condition and removed 'return' to avoid multiple
      unlocks or a goto.
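      A sketch of the race-free test-and-create (the module kobject is
      simplified into an illustrative struct):

        #include <linux/kobject.h>
        #include <linux/mutex.h>

        struct example_module_kobject {
        	struct kobject kobj;
        	struct kobject *drivers_dir;
        };

        static DEFINE_MUTEX(example_drivers_dir_mutex);

        static void example_create_drivers_dir(struct example_module_kobject *mk)
        {
        	mutex_lock(&example_drivers_dir_mutex);
        	if (mk && !mk->drivers_dir)	/* check and create under one lock */
        		mk->drivers_dir = kobject_create_and_add("drivers", &mk->kobj);
        	mutex_unlock(&example_drivers_dir_mutex);
        }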
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Fixes: fe480a26 (Modules: only add drivers/ direcory if needed)
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      8617fe9e
    • KEYS: potential uninitialized variable · 8c903c05
      Dan Carpenter authored
      commit 38327424 upstream.
      
      If __key_link_begin() failed then "edit" would be uninitialized.  I've
      added a check to fix that.
      
      This allows a random user to crash the kernel, though it's quite
      difficult to achieve.  There are three ways it can be done as the user
      would have to cause an error to occur in __key_link():
      
       (1) Cause the kernel to run out of memory.  In practice, this is difficult
           to achieve without ENOMEM cropping up elsewhere and aborting the
           attempt.
      
       (2) Revoke the destination keyring between the keyring ID being looked up
           and it being tested for revocation.  In practice, this is difficult to
           time correctly because the KEYCTL_REJECT function can only be used
           from the request-key upcall process.  Further, users can only make use
           of what's in /sbin/request-key.conf, though this does including a
           rejection debugging test - which means that the destination keyring
           has to be the caller's session keyring in practice.
      
       (3) Have just enough key quota available to create a key, a new session
           keyring for the upcall and a link in the session keyring, but not then
           sufficient quota to create a link in the nominated destination keyring
           so that it fails with EDQUOT.
      
      The bug can be triggered using option (3) above using something like the
      following:
      
      	echo 80 >/proc/sys/kernel/keys/root_maxbytes
      	keyctl request2 user debug:fred negate @t
      
      The above sets the quota to something much lower (80) to make the bug
      easier to trigger, but this is dependent on the system.  Note also that
      the name of the keyring created contains a random number that may be
      between 1 and 10 characters in size, so may throw the test off by
      changing the amount of quota used.
      
      Assuming the failure occurs, something like the following will be seen:
      
      	kfree_debugcheck: out of range ptr 6b6b6b6b6b6b6b68h
      	------------[ cut here ]------------
      	kernel BUG at ../mm/slab.c:2821!
      	...
      	RIP: 0010:[<ffffffff811600f9>] kfree_debugcheck+0x20/0x25
      	RSP: 0018:ffff8804014a7de8  EFLAGS: 00010092
      	RAX: 0000000000000034 RBX: 6b6b6b6b6b6b6b68 RCX: 0000000000000000
      	RDX: 0000000000040001 RSI: 00000000000000f6 RDI: 0000000000000300
      	RBP: ffff8804014a7df0 R08: 0000000000000001 R09: 0000000000000000
      	R10: ffff8804014a7e68 R11: 0000000000000054 R12: 0000000000000202
      	R13: ffffffff81318a66 R14: 0000000000000000 R15: 0000000000000001
      	...
      	Call Trace:
      	  kfree+0xde/0x1bc
      	  assoc_array_cancel_edit+0x1f/0x36
      	  __key_link_end+0x55/0x63
      	  key_reject_and_link+0x124/0x155
      	  keyctl_reject_key+0xb6/0xe0
      	  keyctl_negate_key+0x10/0x12
      	  SyS_keyctl+0x9f/0xe7
      	  do_syscall_64+0x63/0x13a
      	  entry_SYSCALL64_slow_path+0x25/0x25
      
      Fixes: f70e2e06 ('KEYS: Do preallocation for __key_link()')
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      8c903c05
    • SCSI: Increase REPORT_LUNS timeout · e0856a10
      Brian King authored
      commit b39c9a66 upstream.
      
      This patch fixes an issue seen with an IBM 2145 (SVC) where, following
      an error injection test which took paths offline, the paths would time
      out the REPORT_LUNS issued during the scan when they came back online.
      This timeout situation continued until retries were expired, resulting
      in falling back to a sequential LUN scan. Then, since the target responds
      with PQ=1, PDT=0 for all possible LUNs, due to the way the sequential LUN
      scan code works, we end up adding 512 LUNs for each target, when there is
      really only a small handful of LUNs actually present.

      This patch increases the timeout used for the REPORT_LUNS to 30 seconds,
      which solves the issue of 512 non-existent LUNs showing up after this
      event.
      Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      e0856a10
    • EDAC: Remove arbitrary limit on number of channels · 97f3455a
      Tony Luck authored
      commit c44696ff upstream.
      
      Currently set to "6", but the reset of the code will dynamically
      allocate as needed.  We need to go to "8" today, but drop the check
      completely to save doing this again when we need even larger numbers.
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      Acked-by: Aristeu Rozanski <aris@redhat.com>
      Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      97f3455a
    • rds: fix an infoleak in rds_inc_info_copy · 3360c517
      Kangjie Lu authored
      commit 4116def2 upstream.
      
      The last field "flags" of object "minfo" is not initialized.
      Copying this object out may leak kernel stack data.
      Assign 0 to it to avoid the leak.
      Signed-off-by: Kangjie Lu <kjlu@gatech.edu>
      Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      3360c517
    • net/qlge: Avoids recursive EEH error · f3c9e9b1
      Gavin Shan authored
      commit 3275c0c6 upstream.
      
      One timer, whose handler keeps reading an MMIO register so that the EEH
      core can detect errors in time, is started when the PCI device driver is
      loaded. MMIO registers can't be accessed during PE reset in EEH recovery;
      otherwise, an unexpected recursive error is triggered. The timer isn't
      closed at that point if the interface isn't brought up, so the unexpected
      recursive error is seen during EEH recovery when the interface is down.
      
      This avoids the unexpected recursive EEH error by closing the timer
      in qlge_io_error_detected() before EEH PE reset unconditionally. The
      timer is started unconditionally after EEH PE reset in qlge_io_resume().
      Also, the timer should be closed unconditionally when the device is
      removed from the system permanently in qlge_io_error_detected().
      Reported-by: Shriya R. Kulkarni <shriyakul@in.ibm.com>
      Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      f3c9e9b1
    • ALSA: timer: Fix leak in events via snd_timer_user_tinterrupt · bdb8bc5f
      Kangjie Lu authored
      commit e4ec8cc8 upstream.
      
      The stack object “r1” has a total size of 32 bytes. Its fields
      “event” and “val” both contain 4 bytes of padding. These 8 padding
      bytes are sent to userspace without being initialized.
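      The generic cure for this class of leak, sketched with an illustrative
      struct (not the ALSA timer ABI): zero the whole object before filling it
      so padding never carries stale stack bytes. The same pattern covers the
      two sibling fixes below.

        #include <linux/string.h>

        struct example_tread {
        	int event;		/* 4 bytes of padding may follow */
        	long tstamp_sec;
        	unsigned int val;	/* trailing padding is possible too */
        };

        static void example_fill(struct example_tread *r, int event, unsigned int val)
        {
        	memset(r, 0, sizeof(*r));	/* clears the padding as well */
        	r->event = event;
        	r->val = val;
        }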
      Signed-off-by: Kangjie Lu <kjlu@gatech.edu>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      bdb8bc5f
    • ALSA: timer: Fix leak in events via snd_timer_user_ccallback · 640b1f79
      Kangjie Lu authored
      commit 9a47e9cf upstream.
      
      The stack object “r1” has a total size of 32 bytes. Its fields
      “event” and “val” both contain 4 bytes of padding. These 8 padding
      bytes are sent to userspace without being initialized.
      Signed-off-by: Kangjie Lu <kjlu@gatech.edu>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      640b1f79
    • ALSA: timer: Fix leak in SNDRV_TIMER_IOCTL_PARAMS · 16e5f4c6
      Kangjie Lu authored
      commit cec8f96e upstream.
      
      The stack object “tread” has a total size of 32 bytes. Its fields
      “event” and “val” both contain 4 bytes of padding. These 8 padding
      bytes are sent to userspace without being initialized.
      Signed-off-by: Kangjie Lu <kjlu@gatech.edu>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      16e5f4c6
    • ALSA: hrtimer: Handle start/stop more properly · 6210a912
      Takashi Iwai authored
      commit d2c5cf88 upstream.
      
      This patch tries to address the still remaining issues in ALSA hrtimer
      driver:
      - Spurious use-after-free was detected in hrtimer callback
      - Incorrect rescheduling due to delayed start
      - WARN_ON() is triggered in hrtimer_forward() invoked in hrtimer
        callback
      
      The first issue happens only when the new timer is scheduled even
      while hrtimer is being closed.  It's related with the second and third
      items; since ALSA timer core invokes hw.start callback during hrtimer
      interrupt, this may result in the explicit call of hrtimer_start().
      
      Also, the similar problem is seen for the stop; ALSA timer core
      invokes hw.stop callback even in the hrtimer handler, too.  Since we
      must not call the synced hrtimer_cancel() in such a context, it's just
      a hrtimer_try_to_cancel() call that doesn't properly work.
      
      Another culprit of the second and third items is the call of
      hrtimer_forward_now() before snd_timer_interrupt().  The timer->stick
      value may change during snd_timer_interrupt() call, but this
      possibility is ignored completely.
      
      For covering these subtle and messy issues, the following changes have
      been done in this patch:
      - A new flag, in_callback, is introduced in the private data to
        indicate that the hrtimer handler is being processed.
      - Both start and stop callbacks skip when called from (during)
        in_callback flag.
      - The hrtimer handler returns properly HRTIMER_RESTART and NORESTART
        depending on the running state now.
      - The hrtimer handler reprograms the expiry properly after
        snd_timer_interrupt() call, instead of before.
      - The close callback clears running flag and sets in_callback flag
        to block any further start/stop calls.
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      6210a912
    • ktime: export ktime_divns · a882416d
      Jiri Slaby authored
      ktime_divns was exported in upstream as a side-effect of commit
      166afb64 (ktime: Sanitize
      ktime_to_us/ms conversion). But we do not want that commit, given ktime
      is not nanoseconds in 3.12 yet.
      
      So we only export the function here as it is needed by upstream commit
      d2c5cf88 (ALSA: hrtimer: Handle
      start/stop more properly):
      ERROR: "ktime_divns" [sound/core/snd-hrtimer.ko] undefined!
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Cc: Takashi Iwai <tiwai@suse.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Stultz <john.stultz@linaro.org>
      a882416d