    sched/fair: Fix cfs_rq avg tracking underflow · 43b1bfec
    Peter Zijlstra authored
    commit 89741892 upstream.
    
    As per commit:
    
      b7fa30c9 ("sched/fair: Fix post_init_entity_util_avg() serialization")
    
    > the code generated from update_cfs_rq_load_avg():
    >
    > 	if (atomic_long_read(&cfs_rq->removed_load_avg)) {
    > 		s64 r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
    > 		sa->load_avg = max_t(long, sa->load_avg - r, 0);
    > 		sa->load_sum = max_t(s64, sa->load_sum - r * LOAD_AVG_MAX, 0);
    > 		removed_load = 1;
    > 	}
    >
    > turns into:
    >
    > ffffffff81087064:       49 8b 85 98 00 00 00    mov    0x98(%r13),%rax
    > ffffffff8108706b:       48 85 c0                test   %rax,%rax
    > ffffffff8108706e:       74 40                   je     ffffffff810870b0 <update_blocked_averages+0xc0>
    > ffffffff81087070:       4c 89 f8                mov    %r15,%rax
    > ffffffff81087073:       49 87 85 98 00 00 00    xchg   %rax,0x98(%r13)
    > ffffffff8108707a:       49 29 45 70             sub    %rax,0x70(%r13)
    > ffffffff8108707e:       4c 89 f9                mov    %r15,%rcx
    > ffffffff81087081:       bb 01 00 00 00          mov    $0x1,%ebx
    > ffffffff81087086:       49 83 7d 70 00          cmpq   $0x0,0x70(%r13)
    > ffffffff8108708b:       49 0f 49 4d 70          cmovns 0x70(%r13),%rcx
    >
    > Which you'll note ends up with sa->load_avg -= r in memory at
    > ffffffff8108707a.
    
    So I _should_ have looked at other unserialized users of ->load_avg,
    but alas. Luckily nikbor reported a similar divide-by-zero (/0) from
    task_h_load(), which instantly triggered recollection of this very
    problem.
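
    (An assumed reconstruction, not quoted from the report: task_h_load()
    divides by cfs_rq_load_avg() + 1 precisely to avoid a zero divisor,
    but that guard is defeated when an unserialized reader observes the
    in-memory intermediate as (unsigned long)-1. With hypothetical values:)

        unsigned long load_avg = 5;   /* current average                */
        long r = 6;                   /* removed load, r > load_avg     */

        /* the intermediate hits memory before the cmovns clamps it: */
        unsigned long transient = load_avg - r;  /* == (unsigned long)-1 */

        /* an unserialized reader a la task_h_load(): */
        unsigned long divisor = transient + 1;   /* wraps to 0 -> /0 */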
    
    Aside from the intermediate value hitting memory and causing problems,
    there's another problem: the underflow detection relies on the sign
    bit. This reduces the effective width of the variables; IOW, it is
    effectively the same as making these variables signed.
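
    To make the width argument concrete (hypothetical values, not from the
    patch): under the max_t(long, ...) scheme quoted above, any value with
    the top bit set is indistinguishable from an underflow, so only half
    of the unsigned range is usable:

        unsigned long sa_load = (unsigned long)LONG_MAX + 2; /* top bit set */
        long r = 1;

        /* as in the quoted code: */
        sa_load = max_t(long, sa_load - r, 0);   /* yields 0, value lost */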
    
    This patch switches to a different means of unsigned underflow
    detection, one that does not rely on the sign bit. This allows the
    variables to use the 'full' unsigned range. And it does so with an
    explicit LOAD - STORE to ensure any intermediate value will never be
    visible in memory, allowing these unserialized loads.
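
    In sketch form, the pattern looks like the sub_positive() helper the
    upstream patch adds (paraphrased here; see the diff for the exact
    version):

        /*
         * Unsigned subtract and clamp on underflow.
         *
         * Explicit load-store so the intermediate value never hits
         * memory; lockless observers can never see a 'negative' value.
         */
        #define sub_positive(_ptr, _val) do {                   \
                typeof(_ptr) ptr = (_ptr);                      \
                typeof(*ptr) val = (_val);                      \
                typeof(*ptr) res, var = READ_ONCE(*ptr);        \
                res = var - val;                                \
                if (res > var)  /* wrapped around: underflow */ \
                        res = 0;                                \
                WRITE_ONCE(*ptr, res);                          \
        } while (0)

        /* usage, replacing the max_t() lines quoted above: */
        sub_positive(&sa->load_avg, r);
        sub_positive(&sa->load_sum, r * LOAD_AVG_MAX);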
    
    Note: GCC generates crap code for this; it might warrant a look later.
    
    Note2: I say 'full' above; if we end up at U*_MAX we'll still explode,
           so maybe we should do clamping on add too.
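
    (A purely hypothetical sketch of such clamping; add_clamped() is an
    invented name and the patch does not do this:)

        /* saturate at the type's maximum instead of wrapping */
        #define add_clamped(_ptr, _val) do {                    \
                typeof(_ptr) ptr = (_ptr);                      \
                typeof(*ptr) val = (_val);                      \
                typeof(*ptr) res, var = READ_ONCE(*ptr);        \
                res = var + val;                                \
                if (res < var)  /* wrapped past U*_MAX */       \
                        res = ~(typeof(*ptr))0;                 \
                WRITE_ONCE(*ptr, res);                          \
        } while (0)
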
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
    Cc: Chris Wilson <chris@chris-wilson.co.uk>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Yuyang Du <yuyang.du@intel.com>
    Cc: bsegall@google.com
    Cc: kernel@kyup.com
    Cc: morten.rasmussen@arm.com
    Cc: pjt@google.com
    Cc: steve.muckle@linaro.org
    Fixes: 9d89c257 ("sched/fair: Rewrite runnable load and utilization average tracking")
    Link: http://lkml.kernel.org/r/20160617091948.GJ30927@twins.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    