1. 27 Oct, 2023 1 commit
  2. 17 Oct, 2023 1 commit
  3. 16 Oct, 2023 1 commit
  4. 14 Oct, 2023 1 commit
    • locking/seqlock: Propagate 'const' pointers within read-only methods, remove forced type casts · 886ee55e
      Ingo Molnar authored
      Currently __seqprop_ptr() is an inline function that must choose to
      either use 'const' or non-const seqcount-related pointers - but this
      results in the undesirable loss of 'const' propagation, via a forced
      type cast.
      
      The easiest solution would be to turn the pointer wrappers into macros
      that pass through whatever type is passed to them - but the clever maze
      of seqlock API instantiation macros relies on the CPP '##' token-pasting
      operator, which isn't recursive, so inline functions must be used here.
      
      So create two wrapper variants instead: 'ptr' and 'const_ptr', and pick the
      right one for the codepaths that are const: read_seqcount_begin() and
      read_seqcount_retry().
      
      This cleans up type handling and allows the removal of all type forcing.
      
      No change in functionality.
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Waiman Long <longman@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Paul E. McKenney <paulmck@kernel.org>
      886ee55e
  5. 12 Oct, 2023 3 commits
    • locking/lockdep: Fix string sizing bug that triggers a format-truncation compiler-warning · ac8b60be
      Lucy Mielke authored
      On an allyesconfig, with "treat warnings as errors" unset, GCC emits
      these warnings:
      
	kernel/locking/lockdep_proc.c:438:32: warning: '%lld' directive output
		may be truncated writing between 1 and 17 bytes into a region
		of size 15 [-Wformat-truncation=]

	kernel/locking/lockdep_proc.c:438:31: note: directive argument is
		in the range [-9223372036854775, 9223372036854775]

	kernel/locking/lockdep_proc.c:438:9: note: 'snprintf' output
		between 5 and 22 bytes into a destination of size 15
      
      In seq_time(), the longest s64 is "-9223372036854775808"-ish, which,
      converted to the fixed-point float format, is "-9223372036854775.80":
      21 bytes, plus another byte for the terminator: 22. A buffer size of
      22 is therefore needed here - not 15. The code was still safe, because
      snprintf() truncates its output to the given size.
      
      Fix it.
      Signed-off-by: Lucy Mielke <lucymielke@icloud.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Link: https://lore.kernel.org/r/ZSfOEHRkZAWaQr3U@fedora.fritz.box
      ac8b60be
    • locking/seqlock: Change __seqprop() to return the function pointer · e6115c6f
      Oleg Nesterov authored
      This simplifies the macro and makes it easy to add new seqprops
      with 2 or more arguments.
      
      Plus, this way we do not lose the type info: the (void *) type cast
      is no longer needed.
      
      And the latter reveals a problem: a lot of seqcount_t helpers pass
      a "const seqcount_t *s" argument to __seqprop_ptr(seqcount_t *s),
      but (before this patch) the "(void *)(s)" cast masked the mismatch.
      
      So this patch changes __seqprop_ptr() and __seqprop_##lockname##_ptr()
      to accept a "const LOCKNAME *s" argument. This is not nice either -
      they need to drop the constness on return, because these helpers are
      used by both the readers and the writers - but at least it is clear
      what's going on.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Waiman Long <longman@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@kernel.org>
      Link: https://lore.kernel.org/r/20231012143227.GA16143@redhat.com
      e6115c6f
    • locking/seqlock: Simplify SEQCOUNT_LOCKNAME() · f995443f
      Oleg Nesterov authored
      1. Kill the "lockmember" argument. It is always s->lock, and
         __seqprop_##lockname##_sequence() already uses s->lock directly,
         ignoring "lockmember".
      
      2. Kill the "lock_acquire" argument. __seqprop_##lockname##_sequence()
         can derive both _lock and _unlock from the same "lockbase" prefix.
      
      Apart from line numbers, gcc -E outputs the same code.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Waiman Long <longman@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@kernel.org>
      Link: https://lore.kernel.org/r/20231012143158.GA16133@redhat.com
      f995443f
  6. 10 Oct, 2023 1 commit
  7. 09 Oct, 2023 4 commits
    • locking/atomic, xen: Use sync_try_cmpxchg() instead of sync_cmpxchg() · ad0a2e4c
      Uros Bizjak authored
      Use sync_try_cmpxchg() instead of sync_cmpxchg(ptr, old, new) == old
      in clear_masked_cond(), clear_linked() and
      gnttab_end_foreign_access_ref_v1(). The x86 CMPXCHG instruction
      reports success in the ZF flag, so this change saves a compare after
      the cmpxchg (and the related move instruction in front of it),
      improving the cmpxchg loop in gnttab_end_foreign_access_ref_v1() from:
      
           174:	eb 0e                	jmp    184 <...>
           176:	89 d0                	mov    %edx,%eax
           178:	f0 66 0f b1 31       	lock cmpxchg %si,(%rcx)
           17d:	66 39 c2             	cmp    %ax,%dx
           180:	74 11                	je     193 <...>
           182:	89 c2                	mov    %eax,%edx
           184:	89 d6                	mov    %edx,%esi
           186:	66 83 e6 18          	and    $0x18,%si
           18a:	74 ea                	je     176 <...>
      
      to:
      
           614:	89 c1                	mov    %eax,%ecx
           616:	66 83 e1 18          	and    $0x18,%cx
           61a:	75 11                	jne    62d <...>
           61c:	f0 66 0f b1 0a       	lock cmpxchg %cx,(%rdx)
           621:	75 f1                	jne    614 <...>
      
      No functional change intended.
      Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Juergen Gross <jgross@suse.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stefano Stabellini <sstabellini@kernel.org>
      Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: linux-kernel@vger.kernel.org
      ad0a2e4c
    • locking/atomic/x86: Introduce arch_sync_try_cmpxchg() · 636d6a8b
      Uros Bizjak authored
      Introduce the arch_sync_try_cmpxchg() macro to improve code using the
      sync_try_cmpxchg() locking primitive. The new definitions reuse the
      existing __raw_try_cmpxchg() macros, but with their own "lock; "
      prefix.
      
      The new macros improve the assembly of the cmpxchg loop in
      evtchn_fifo_unmask() from drivers/xen/events/events_fifo.c from:
      
       57a:	85 c0                	test   %eax,%eax
       57c:	78 52                	js     5d0 <...>
       57e:	89 c1                	mov    %eax,%ecx
       580:	25 ff ff ff af       	and    $0xafffffff,%eax
       585:	c7 04 24 00 00 00 00 	movl   $0x0,(%rsp)
       58c:	81 e1 ff ff ff ef    	and    $0xefffffff,%ecx
       592:	89 4c 24 04          	mov    %ecx,0x4(%rsp)
       596:	89 44 24 08          	mov    %eax,0x8(%rsp)
       59a:	8b 74 24 08          	mov    0x8(%rsp),%esi
       59e:	8b 44 24 04          	mov    0x4(%rsp),%eax
       5a2:	f0 0f b1 32          	lock cmpxchg %esi,(%rdx)
       5a6:	89 04 24             	mov    %eax,(%rsp)
       5a9:	8b 04 24             	mov    (%rsp),%eax
       5ac:	39 c1                	cmp    %eax,%ecx
       5ae:	74 07                	je     5b7 <...>
       5b0:	a9 00 00 00 40       	test   $0x40000000,%eax
       5b5:	75 c3                	jne    57a <...>
       <...>
      
      to:
      
       578:	a9 00 00 00 40       	test   $0x40000000,%eax
       57d:	74 2b                	je     5aa <...>
       57f:	85 c0                	test   %eax,%eax
       581:	78 40                	js     5c3 <...>
       583:	89 c1                	mov    %eax,%ecx
       585:	25 ff ff ff af       	and    $0xafffffff,%eax
       58a:	81 e1 ff ff ff ef    	and    $0xefffffff,%ecx
       590:	89 4c 24 04          	mov    %ecx,0x4(%rsp)
       594:	89 44 24 08          	mov    %eax,0x8(%rsp)
       598:	8b 4c 24 08          	mov    0x8(%rsp),%ecx
       59c:	8b 44 24 04          	mov    0x4(%rsp),%eax
       5a0:	f0 0f b1 0a          	lock cmpxchg %ecx,(%rdx)
       5a4:	89 44 24 04          	mov    %eax,0x4(%rsp)
       5a8:	75 30                	jne    5da <...>
       <...>
       5da:	8b 44 24 04          	mov    0x4(%rsp),%eax
       5de:	eb 98                	jmp    578 <...>
      
      The new code removes the move instructions at 585:, 5a6: and 5a9:,
      and the compare at 5ac:. Additionally, the compiler now assumes that
      cmpxchg success is the more probable outcome and optimizes the code
      flow accordingly.
      Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: linux-kernel@vger.kernel.org
      636d6a8b
    • locking/atomic: Add generic support for sync_try_cmpxchg() and its fallback · e01cc1e8
      Uros Bizjak authored
      Provide the generic sync_try_cmpxchg() function from the
      raw_-prefixed version, also adding explicit instrumentation.
      
      The patch amends the existing scripts to generate the
      sync_try_cmpxchg() locking primitive and its raw_sync_try_cmpxchg()
      fallback, while leaving the existing macros of the try_cmpxchg()
      family unchanged.
      
      The target can define its own arch_sync_try_cmpxchg() to override the
      generic version of raw_sync_try_cmpxchg(). This allows the target
      to generate more optimal assembly than the generic version.
      
      Additionally, the patch renames two scripts to better reflect
      what they really do.
      Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: linux-kernel@vger.kernel.org
      e01cc1e8
    • fdb8b7a1
      Ingo Molnar authored
  8. 08 Oct, 2023 4 commits
  9. 07 Oct, 2023 10 commits
  10. 06 Oct, 2023 14 commits