1. 26 Mar, 2015 7 commits
    • net: sysctl_net_core: check SNDBUF and RCVBUF for min length · e2f572a9
      Alexey Kodanev authored
      [ Upstream commit b1cb59cf ]
      
      sysctl has sysctl.net.core.rmem_*/wmem_* parameters which can be
      set to incorrect values. Given that 'struct sk_buff' allocates from
      rcvbuf, an incorrectly set buffer length can result in memory
      allocation failures. For example, set them as follows:
      
          # sysctl net.core.rmem_default=64
            net.core.rmem_default = 64
          # sysctl net.core.wmem_default=64
            net.core.wmem_default = 64
          # ping localhost -s 1024 -i 0 > /dev/null
      
      This can result in the following failure:
      
      skbuff: skb_over_panic: text:ffffffff81628db4 len:-32 put:-32
      head:ffff88003a1cc200 data:ffff88003a1cc200 tail:0xffffffe0 end:0xc0 dev:<NULL>
      kernel BUG at net/core/skbuff.c:102!
      invalid opcode: 0000 [#1] SMP
      ...
      task: ffff88003b7f5550 ti: ffff88003ae88000 task.ti: ffff88003ae88000
      RIP: 0010:[<ffffffff8155fbd1>]  [<ffffffff8155fbd1>] skb_put+0xa1/0xb0
      RSP: 0018:ffff88003ae8bc68  EFLAGS: 00010296
      RAX: 000000000000008d RBX: 00000000ffffffe0 RCX: 0000000000000000
      RDX: ffff88003fdcf598 RSI: ffff88003fdcd9c8 RDI: ffff88003fdcd9c8
      RBP: ffff88003ae8bc88 R08: 0000000000000001 R09: 0000000000000000
      R10: 0000000000000001 R11: 00000000000002b2 R12: 0000000000000000
      R13: 0000000000000000 R14: ffff88003d3f7300 R15: ffff88000012a900
      FS:  00007fa0e2b4a840(0000) GS:ffff88003fc00000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 0000000000d0f7e0 CR3: 000000003b8fb000 CR4: 00000000000006f0
      Stack:
       ffff88003a1cc200 00000000ffffffe0 00000000000000c0 ffffffff818cab1d
       ffff88003ae8bd68 ffffffff81628db4 ffff88003ae8bd48 ffff88003b7f5550
       ffff880031a09408 ffff88003b7f5550 ffff88000012aa48 ffff88000012ab00
      Call Trace:
       [<ffffffff81628db4>] unix_stream_sendmsg+0x2c4/0x470
       [<ffffffff81556f56>] sock_write_iter+0x146/0x160
       [<ffffffff811d9612>] new_sync_write+0x92/0xd0
       [<ffffffff811d9cd6>] vfs_write+0xd6/0x180
       [<ffffffff811da499>] SyS_write+0x59/0xd0
       [<ffffffff81651532>] system_call_fastpath+0x12/0x17
      Code: 00 00 48 89 44 24 10 8b 87 c8 00 00 00 48 89 44 24 08 48 8b 87 d8 00
            00 00 48 c7 c7 30 db 91 81 48 89 04 24 31 c0 e8 4f a8 0e 00 <0f> 0b
            eb fe 66 66 2e 0f 1f 84 00 00 00 00 00 55 48 89 e5 48 83
      RIP  [<ffffffff8155fbd1>] skb_put+0xa1/0xb0
      RSP <ffff88003ae8bc68>
      Kernel panic - not syncing: Fatal exception
      
      Moreover, the allowed minimum is 1, which can trigger another kernel panic:
      ...
      BUG: unable to handle kernel paging request at ffff88013caee5c0
      IP: [<ffffffff815604cf>] __alloc_skb+0x12f/0x1f0
      ...
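      
      The fix bounds these sysctls from below. A minimal sketch of the
      approach, assuming the usual ctl_table plumbing in
      net/core/sysctl_net_core.c (the entry shown is illustrative, not
      the literal hunk):
      
          /* Sketch: clamp the writable socket-buffer sysctls so they can
           * never drop below the socket layer's minimum buffer sizes.
           * proc_dointvec_minmax rejects writes outside [extra1, extra2]. */
          static int min_sndbuf = SOCK_MIN_SNDBUF;
          static int min_rcvbuf = SOCK_MIN_RCVBUF;
      
          static struct ctl_table net_core_table[] = {
                  {
                          .procname     = "rmem_default",
                          .data         = &sysctl_rmem_default,
                          .maxlen       = sizeof(int),
                          .mode         = 0644,
                          .proc_handler = proc_dointvec_minmax,
                          .extra1       = &min_rcvbuf, /* floor the value */
                  },
                  /* wmem_default, rmem_max and wmem_max are bounded the
                   * same way, with min_sndbuf/min_rcvbuf as appropriate. */
          };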
      Signed-off-by: Alexey Kodanev <alexey.kodanev@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • sparc64: Fix several bugs in memmove(). · 1ca28933
      David S. Miller authored
      [ Upstream commit 2077cef4 ]
      
      Firstly, handle zero length calls properly.  Believe it or not there
      are a few of these happening during early boot.
      
      Next, we can't just drop to a memcpy() call in the forward copy case
      where dst <= src.  The reason is that the cache initializing stores
      used in the Niagara memcpy() implementations can end up clearing out
      cache lines before we've sourced their original contents completely.
      
      For example, considering NG4memcpy, the main unrolled loop begins like
      this:
      
           load   src + 0x00
           load   src + 0x08
           load   src + 0x10
           load   src + 0x18
           load   src + 0x20
           store  dst + 0x00
      
      Assume dst is 64 byte aligned and let's say that dst is src - 8 for
      this memcpy() call.  That store at the end is to the first word in
      the cache line; the cache-initializing store clears the whole line,
      clobbering "src + 0x28" before it even gets loaded.
      
      To avoid this, just fall through to a simple copy only mildly
      optimized for the case where src and dst are 8 byte aligned and the
      length is a multiple of 8 as well.  We could get fancy and call
      GENmemcpy() but this is good enough for how this thing is actually
      used.
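      
      The actual fix is sparc64 assembly in arch/sparc/lib/memmove.S; the
      C sketch below only mirrors the logic described above (zero-length
      handling plus a plain copy with no cache-initializing stores):
      
          #include <stddef.h>
          #include <stdint.h>
      
          /* Illustrative C analogue of the safe copy: ordinary stores only,
           * so an overlapping source can never be clobbered. */
          void *safe_memmove(void *dst, const void *src, size_t len)
          {
                  unsigned char *d = dst;
                  const unsigned char *s = src;
      
                  if (!len)               /* zero-length calls really happen */
                          return dst;
      
                  if (d <= s) {
                          /* Forward copy, 8 bytes at a time when both
                           * pointers and the length are 8-byte multiples. */
                          if ((((uintptr_t)d | (uintptr_t)s | len) & 7) == 0) {
                                  while (len) {
                                          *(uint64_t *)d = *(const uint64_t *)s;
                                          d += 8; s += 8; len -= 8;
                                  }
                          } else {
                                  while (len--)
                                          *d++ = *s++;
                          }
                  } else {
                          /* Backward copy for the dst > src overlap case. */
                          d += len; s += len;
                          while (len--)
                                  *--d = *--s;
                  }
                  return dst;
          }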
      Reported-by: David Ahern <david.ahern@oracle.com>
      Reported-by: Bob Picco <bpicco@meloft.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • sparc: Touch NMI watchdog when walking cpus and calling printk · 146c982f
      David Ahern authored
      [ Upstream commit 31aaa98c ]
      
      With the increase in the number of CPUs, calls to functions that
      dump output to the console (e.g., arch_trigger_all_cpu_backtrace)
      can take a long time to complete. If IRQs are disabled, the NMI
      watchdog eventually kicks in and creates more havoc. Avoid this by
      telling the NMI watchdog that everything is OK.
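      
      touch_nmi_watchdog() is the stock kernel interface for this; a
      minimal sketch of the pattern (the loop body is illustrative, not
      the exact hunk):
      
          int cpu;
      
          /* Pet the NMI watchdog on every iteration of a long, IRQs-off
           * walk that printk()s per-cpu state. */
          for_each_present_cpu(cpu) {
                  touch_nmi_watchdog();
                  printk(KERN_INFO "CPU[%d]: ...\n", cpu); /* per-cpu dump */
          }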
      Signed-off-by: David Ahern <david.ahern@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • sparc: perf: Make counting mode actually work · d42f5dff
      David Ahern authored
      [ Upstream commit d51291cb ]
      
      Currently perf-stat (aka, counting mode) does not work:
      
      $ perf stat ls
      ...
       Performance counter stats for 'ls':
      
                1.585665      task-clock (msec)         #    0.580 CPUs utilized
                      24      context-switches          #    0.015 M/sec
                       0      cpu-migrations            #    0.000 K/sec
                      86      page-faults               #    0.054 M/sec
         <not supported>      cycles
         <not supported>      stalled-cycles-frontend
         <not supported>      stalled-cycles-backend
         <not supported>      instructions
         <not supported>      branches
         <not supported>      branch-misses
      
             0.002735100 seconds time elapsed
      
      The reason is that the event state is never reset (it stays with
      PERF_HES_UPTODATE set). Add a call to sparc_pmu_enable_event during
      the added_event handling. Clean up the encoding since pmu_start
      calls sparc_pmu_enable_event, which does the same. Passing
      PERF_EF_RELOAD to sparc_pmu_start means the call to
      sparc_perf_event_set_period can be removed as well; a sketch of the
      resulting start path follows.
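      
      Hedged sketch of the start path after the change (the names match
      arch/sparc/kernel/perf_event.c; the body is a reconstruction, not
      the literal diff):
      
          static void sparc_pmu_start(struct perf_event *event, int flags)
          {
                  struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
                  int idx = active_event_index(cpuc, event);
      
                  /* PERF_EF_RELOAD: reprogram the period here, so callers
                   * need not call sparc_perf_event_set_period themselves. */
                  if (flags & PERF_EF_RELOAD)
                          sparc_perf_event_set_period(event, &event->hw, idx);
      
                  event->hw.state = 0; /* clear PERF_HES_UPTODATE|STOPPED */
      
                  sparc_pmu_enable_event(cpuc, &event->hw, idx);
          }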
      
      With this patch:
      
      $ perf stat ls
      ...
       Performance counter stats for 'ls':
      
                1.552890      task-clock (msec)         #    0.552 CPUs utilized
                      24      context-switches          #    0.015 M/sec
                       0      cpu-migrations            #    0.000 K/sec
                      86      page-faults               #    0.055 M/sec
               5,748,997      cycles                    #    3.702 GHz
         <not supported>      stalled-cycles-frontend:HG
         <not supported>      stalled-cycles-backend:HG
               1,684,362      instructions:HG           #    0.29  insns per cycle
                 295,133      branches:HG               #  190.054 M/sec
                  28,007      branch-misses:HG          #    9.49% of all branches
      
             0.002815665 seconds time elapsed
      Signed-off-by: David Ahern <david.ahern@oracle.com>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • sparc: perf: Remove redundant perf_pmu_{en|dis}able calls · 5cea2eae
      David Ahern authored
      [ Upstream commit 5b0d4b55 ]
      
      perf_pmu_disable is called by core perf code before pmu->del, and
      the enable function is called by core perf code afterwards. There is
      no need to call them again within sparc_pmu_del.
      
      Ditto for pmu->add and sparc_pmu_add.
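      
      For context, the core already brackets these callbacks, roughly as
      follows (a simplified sketch of kernel/events/core.c, not the exact
      code):
      
          /* Core perf code disables the PMU around the driver callback,
           * so a second perf_pmu_disable()/perf_pmu_enable() pair inside
           * sparc_pmu_del()/sparc_pmu_add() is redundant. */
          perf_pmu_disable(event->pmu);
          event->pmu->del(event, 0);   /* ends up in sparc_pmu_del() */
          perf_pmu_enable(event->pmu);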
      Signed-off-by: David Ahern <david.ahern@oracle.com>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • sparc: semtimedop() unreachable due to comparison error · 4b9529f3
      Rob Gardner authored
      [ Upstream commit 53eb2516 ]
      
      A bug was reported that the semtimedop() system call was always
      failing with ENOSYS.
      
      Since SEMCTL is defined as 3, and SEMTIMEDOP is defined as 4,
      the comparison "call <= SEMCTL" will always prevent SEMTIMEDOP
      from getting through to the semaphore ops switch statement.
      
      This is corrected by changing the comparison to "call <= SEMTIMEDOP".
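      
      In code, the dispatch guard becomes (abbreviated sketch of the
      sys_sparc_ipc() switch; SEMCTL and SEMTIMEDOP values are as stated
      above):
      
          /* was: if (call <= SEMCTL), so SEMTIMEDOP (4) never reached the
           * switch and the syscall returned -ENOSYS. */
          if (call <= SEMTIMEDOP) {
                  switch (call) {
                  case SEMOP:
                          /* ... */
                          break;
                  case SEMTIMEDOP:
                          /* now reachable */
                          break;
                  /* SEMGET, SEMCTL, ... */
                  }
          }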
      
      Orabug: 20633375
      Signed-off-by: Rob Gardner <rob.gardner@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • sparc32: destroy_context() and switch_mm() needs to disable interrupts. · 243bd1fa
      Andreas Larsson authored
      [ Upstream commit 66d0f7ec ]
      
      Load balancing can be triggered in the critical sections protected
      by srmmu_context_spinlock in destroy_context() and switch_mm(), and
      can hang the cpu waiting for the rq lock of another cpu that in
      turn has called switch_mm() and is hanging on
      srmmu_context_spinlock, leading to deadlock.
      
      So, disable interrupts while taking srmmu_context_spinlock in
      destroy_context() and switch_mm() so we don't deadlock.
      
      See also commit 77b838fa ("[SPARC64]: destroy_context() needs to disable
      interrupts.")
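      
      The shape of the fix, as a hedged sketch (the critical-section body
      is illustrative; the point is the irqsave variant of the lock):
      
          unsigned long flags;
      
          /* was: spin_lock(&srmmu_context_spinlock); an interrupt taken
           * while holding the lock could trigger load balancing and
           * deadlock against another cpu's rq lock. */
          spin_lock_irqsave(&srmmu_context_spinlock, flags);
          free_context(mm->context);   /* critical section */
          spin_unlock_irqrestore(&srmmu_context_spinlock, flags);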
      Signed-off-by: Andreas Larsson <andreas@gaisler.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. 18 Mar, 2015 33 commits