1. 24 Jan, 2016 2 commits
    • MIPS: Fix some missing CONFIG_CPU_MIPSR6 #ifdefs · 4f33f6c5
      Huacai Chen authored
      Commit be0c37c9 (MIPS: Rearrange PTE bits into fixed positions.)
      defines fixed PTE bits for MIPS R2. Then, commit d7b63141
      (MIPS: pgtable-bits: Fix XPA damage to R6 definitions.) adds the MIPS
      R6 definitions in the same way as for MIPS R2. However, some R6 #ifdefs
      are missing from the latter commit, so this patch adds them (a hedged
      sketch of the guard pattern follows this entry).
      Signed-off-by: Huacai Chen <chenhc@lemote.com>
      Cc: Aurelien Jarno <aurelien@aurel32.net>
      Cc: Steven J. Hill <Steven.Hill@imgtec.com>
      Cc: Fuxin Zhang <zhangfx@lemote.com>
      Cc: Zhangjin Wu <wuzhangjin@gmail.com>
      Cc: linux-mips@linux-mips.org
      Cc: stable@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/12164/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      4f33f6c5
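      The shape of the fix, as I read it, is extending the existing R2-only
      preprocessor guards so they also cover R6. A minimal, hypothetical
      illustration of the guard pattern (EXAMPLE_BIT is a placeholder, not a
      real PTE bit; the real layout lives in
      arch/mips/include/asm/pgtable-bits.h):

        /*
         * Hypothetical guard pattern only: EXAMPLE_BIT is a placeholder, not
         * a real PTE bit.  The point is that every guard checking
         * CONFIG_CPU_MIPSR2 must also accept CONFIG_CPU_MIPSR6, because R6
         * uses the same fixed PTE bit positions.
         */
        #if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
        # define EXAMPLE_BIT_SHIFT 1   /* fixed position on R2/R6 */
        #else
        # define EXAMPLE_BIT_SHIFT 2   /* legacy position on older ISAs */
        #endif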
    • MIPS: sync-r4k: reduce skew while synchronization · db0dbd57
      Huacai Chen authored
      During synchronization, the count register goes backwards for the
      master. If synchronise_count_master() runs before
      synchronise_count_slave(), the skew becomes even larger. This skew is
      very harmful for CPU hotplug (CPU0 synchronizes with CPU1, then CPU0
      synchronizes with CPU2 and CPU0's count goes backwards, so it ends up
      out of sync with CPU1).
      
      Since commit cf9bfe55 (MIPS: Synchronize MIPS count one
      CPU at a time), we no longer need to evaluate count_reference at the
      beginning of synchronise_count_master(). Thus, we evaluate initcount
      (count_reference appears to be redundant) in the second loop. Since we
      write the count register in the last loop, we don't need additional
      barriers (the existing memory barriers are enough).
      
      Moreover, looping 3 times is enough to prime the instruction cache,
      and it also yields less skew than looping 5 times.
      
      Comments are also updated in this patch. A simplified sketch of the
      loop structure follows this entry.
      Signed-off-by: Huacai Chen <chenhc@lemote.com>
      Cc: Aurelien Jarno <aurelien@aurel32.net>
      Cc: Steven J. Hill <Steven.Hill@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Cc: Fuxin Zhang <zhangfx@lemote.com>
      Cc: Zhangjin Wu <wuzhangjin@gmail.com>
      Patchwork: https://patchwork.linux-mips.org/patch/12163/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      db0dbd57
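      A hedged, simplified sketch of that loop structure (the cross-CPU
      handshake is elided and the function name is made up; the real code is
      in arch/mips/kernel/sync-r4k.c):

        #include <asm/mipsregs.h>   /* read_c0_count() / write_c0_count() */

        #define NR_LOOPS 3  /* a few iterations are enough to prime the I-cache */

        static void sync_counter_sketch(void)
        {
            unsigned int initcount = 0;
            int i;

            for (i = 0; i < NR_LOOPS; i++) {
                /* ... the barrier/handshake with the peer CPU is elided ... */

                /*
                 * Sample the reference count in the second pass; in the real
                 * code the master samples it and the slave writes it.
                 */
                if (i == NR_LOOPS - 2)
                    initcount = read_c0_count();

                /*
                 * Write the CP0 count only in the last pass, so no barriers
                 * beyond the handshake's existing ones are needed.
                 */
                if (i == NR_LOOPS - 1)
                    write_c0_count(initcount);
            }
        }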
  2. 22 Jan, 2016 4 commits
    • MIPS: hpet: Choose a safe value for the ETIME check · 5610b125
      Huacai Chen authored
      This patch borrows the approach of the x86 hpet driver, explained below:
      
      Due to the overly intelligent design of HPETs, we need to work around
      the problem that the compare value we write may already be behind the
      actual counter value by the time it reaches the real compare register.
      This happens for two reasons:
      
      1) We read out the counter, add the delta and write the result to the
         compare register. If an NMI hits between the read-out and the write,
         the counter can already be ahead of the event.
      
      2) The write to the compare register is delayed by up to two HPET
         cycles in AMD chipsets.
      
      We can work around this by reading back the compare register to make
      sure that the written value has reached the hardware. But that is bad
      for performance in the normal case where the event is far enough in
      the future.
      
      Since we already know that the write can be delayed by up to two
      cycles, we can avoid reading back the compare register entirely if we
      decide whether the delta has already elapsed based on the following
      calculation:
      
        cmp = event - actual_count;
      
      If cmp is less than 64 HPET clock cycles, we decide that the event has
      already happened and return -ETIME. That covers problems #1 and #2
      above, which would otherwise cause a wait for an HPET wraparound
      (~306 seconds). A hedged sketch of this check follows this entry.
      Signed-off-by: Huacai Chen <chenhc@lemote.com>
      Cc: Aurelien Jarno <aurelien@aurel32.net>
      Cc: Steven J. Hill <Steven.Hill@imgtec.com>
      Cc: Fuxin Zhang <zhangfx@lemote.com>
      Cc: Zhangjin Wu <wuzhangjin@gmail.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: linux-mips@linux-mips.org
      Cc: stable@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/12162/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      5610b125
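      A hedged sketch of the check described above (the accessors, register
      names and function name are placeholders, not the driver's real API;
      the real driver lives under arch/mips/loongson64/):

        #include <linux/errno.h>    /* ETIME */
        #include <linux/types.h>    /* u32, s32 */

        #define HPET_MIN_CYCLES 64  /* safety window from the commit message */

        /*
         * Illustrative only: hpet_read()/hpet_write() and the register names
         * are placeholders, not the driver's real accessors.
         */
        static int hpet_next_event_sketch(unsigned long delta)
        {
            u32 cnt, cmp;

            cnt = hpet_read(HPET_COUNTER);
            cmp = cnt + (u32)delta;
            hpet_write(HPET_COMPARE, cmp);  /* may reach the hardware a bit late */

            /*
             * Instead of reading the compare register back, re-read the
             * counter and check how far the programmed event still lies in
             * the future.  If it is within HPET_MIN_CYCLES, or already in
             * the past, report -ETIME instead of waiting ~306 seconds for
             * the 32-bit counter to wrap around.
             */
            return ((s32)(cmp - hpet_read(HPET_COUNTER)) > HPET_MIN_CYCLES) ? 0 : -ETIME;
        }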
    • MIPS: Loongson-3: Fix SMP_ASK_C0COUNT IPI handler · 57548432
      Huacai Chen authored
      When Core-0 handles the SMP_ASK_C0COUNT IPI, we should make the other
      cores see the result as soon as possible (especially when the
      Store-Fill-Buffer is enabled). Otherwise, C0_Count synchronization
      makes no sense.
      
      Besides, an array is more suitable than a per-cpu variable for this
      synchronization, and there is a corner case to avoid: Core-0's
      C0_Count can genuinely be 0. A hedged sketch of the handler's idea
      follows this entry.
      Signed-off-by: Huacai Chen <chenhc@lemote.com>
      Cc: Aurelien Jarno <aurelien@aurel32.net>
      Cc: Steven J. Hill <Steven.Hill@imgtec.com>
      Cc: Fuxin Zhang <zhangfx@lemote.com>
      Cc: Zhangjin Wu <wuzhangjin@gmail.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: linux-mips@linux-mips.org
      Cc: <stable@vger.kernel.org>
      Patchwork: https://patchwork.linux-mips.org/patch/12160/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      57548432
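      A hedged sketch of the Core-0 side described above (core0_c0count[],
      the spin-until-non-zero convention and the __wbflush() placement
      reflect my reading of the patch; treat the details as illustrative):

        #include <linux/smp.h>      /* num_possible_cpus() */
        #include <linux/types.h>
        #include <asm/mipsregs.h>   /* read_c0_count() */

        static u64 core0_c0count[NR_CPUS];  /* one slot per CPU, written by Core-0 */

        /* Core-0 side of the SMP_ASK_C0COUNT exchange (simplified). */
        static void handle_ask_c0count_sketch(void)
        {
            unsigned int i;
            u64 c0count = read_c0_count();

            /*
             * The waiting cores spin until their slot becomes non-zero, so a
             * count that is genuinely 0 must be nudged to 1.
             */
            c0count = c0count ? c0count : 1;

            for (i = 1; i < num_possible_cpus(); i++)
                core0_c0count[i] = c0count;

            /* Flush the Store-Fill-Buffer so the other cores see the stores now. */
            __wbflush();
        }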
    • MIPS: Loongson-3: Improve -march option and move it to Platform · 5188129b
      Huacai Chen authored
      If GCC >= 4.9 and Binutils >= 2.25, we use -march=loongson3a;
      otherwise we use -march=mips64r2. This can slightly improve
      performance. Besides, arch/mips/loongson64/Platform is a better
      location for these flags than arch/mips/Makefile.
      Signed-off-by: Huacai Chen <chenhc@lemote.com>
      Cc: Aurelien Jarno <aurelien@aurel32.net>
      Cc: Steven J. Hill <Steven.Hill@imgtec.com>
      Cc: Fuxin Zhang <zhangfx@lemote.com>
      Cc: Zhangjin Wu <wuzhangjin@gmail.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/12161/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      5188129b
    • MIPS: Cleanup the unused __arch_local_irq_restore() function · 6e526844
      Huacai Chen authored
      Historically, __arch_local_irq_restore() was only used by SMTC.
      However, SMTC support was removed in 3.16, so this patch removes the
      now-unused function.
      Signed-off-by: Huacai Chen <chenhc@lemote.com>
      Cc: Aurelien Jarno <aurelien@aurel32.net>
      Cc: Steven J. Hill <Steven.Hill@imgtec.com>
      Cc: Fuxin Zhang <zhangfx@lemote.com>
      Cc: Zhangjin Wu <wuzhangjin@gmail.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/12159/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      6e526844
  3. 19 Jan, 2016 20 commits
  4. 04 Jan, 2016 9 commits
  5. 03 Jan, 2016 3 commits
  6. 31 Dec, 2015 2 commits