1. 28 Jan, 2020 7 commits
    • x86, sched: Add support for frequency invariance · 1567c3e3
      Giovanni Gherdovich authored
      Implement arch_scale_freq_capacity() for 'modern' x86. This function
      is used by the scheduler to correctly account usage in the face of
      DVFS.
      
      The present patch addresses Intel processors specifically and has positive
      performance and performance-per-watt implications for the schedutil cpufreq
      governor, bringing it closer to, if not on par with, the powersave governor
      from the intel_pstate driver/framework.
      
      Large performance gains are obtained when the machine is lightly loaded and
      no regressions are observed at saturation. The benchmarks with the largest
      gains are kernel compilation, tbench (the networking version of dbench) and
      shell-intensive workloads.
      
      1. FREQUENCY INVARIANCE: MOTIVATION
         * Without it, a task looks larger if the CPU runs slower
      
      2. PECULIARITIES OF X86
         * freq invariance accounting requires knowing the ratio freq_curr/freq_max
         2.1 CURRENT FREQUENCY
             * Use delta_APERF / delta_MPERF * freq_base (a.k.a "BusyMHz")
         2.2 MAX FREQUENCY
             * It varies with time (turbo). As an approximation, we set it to a
               constant, i.e. the 4-core (4C) turbo frequency.
      
      3. EFFECTS ON THE SCHEDUTIL FREQUENCY GOVERNOR
         * The invariant schedutil's formula has no feedback loop and reacts faster
           to utilization changes
      
      4. KNOWN LIMITATIONS
         * In some cases tasks can't reach max util no matter how hard they try
      
      5. PERFORMANCE TESTING
         5.1 MACHINES
             * Skylake, Broadwell, Haswell
         5.2 SETUP
             * baseline Linux v5.2 w/ non-invariant schedutil. Tested freq_max = 1-2-3-4-8-12
               active cores turbo w/ invariant schedutil, and intel_pstate/powersave
         5.3 BENCHMARK RESULTS
             5.3.1 NEUTRAL BENCHMARKS
                   * NAS Parallel Benchmark (HPC), hackbench
             5.3.2 NON-NEUTRAL BENCHMARKS
                   * tbench (10-30% better), kernbench (10-15% better),
                     shell-intensive-scripts (30-50% better)
                   * no regressions
             5.3.3 SELECTION OF DETAILED RESULTS
             5.3.4 POWER CONSUMPTION, PERFORMANCE-PER-WATT
                   * dbench (5% worse on one machine), kernbench (3% worse),
                     tbench (5-10% better), shell-intensive-scripts (10-40% better)
      
      6. MICROARCH'ES ADDRESSED HERE
         * Xeon Core before the Scalable Performance processors line (Xeon Gold/Platinum
           etc have different MSR semantics for querying turbo levels)
      
      7. REFERENCES
         * MMTests performance testing framework, github.com/gormanm/mmtests
      
       +-------------------------------------------------------------------------+
       | 1. FREQUENCY INVARIANCE: MOTIVATION
       +-------------------------------------------------------------------------+
      
      For example, suppose a CPU has two frequencies: 500 and 1000 MHz. When
      running a task that would consume 1/3rd of a CPU at 1000 MHz, it would
      appear to consume 2/3rd (or 66.6%) when running at 500 MHz, giving the
      false impression this CPU is almost at capacity, even though it can go
      faster [*]. In a nutshell, without frequency scale-invariance tasks look
      larger just because the CPU is running slower.
      
      [*] (footnote: this assumes a linear frequency/performance relation, which
      everybody knows to be false, but given realities it's the best approximation
      we can make.)
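
      As a toy illustration of the scaling (standalone C; the names are ours,
      not the kernel's):

          #include <stdio.h>

          /* Scale a raw (time-based) utilization by freq_curr/freq_max,
           * which is what frequency invariance does to PELT signals. */
          static double scale_util(double util_raw, double freq_curr,
                                   double freq_max)
          {
                  return util_raw * freq_curr / freq_max;
          }

          int main(void)
          {
                  /* The task above: 1/3 of a CPU at 1000 MHz, but its raw
                   * utilization doubles to 2/3 when running at 500 MHz. */
                  double util_raw = 2.0 / 3.0;

                  /* Invariant accounting recovers the task's true size. */
                  printf("invariant util = %.3f\n",
                         scale_util(util_raw, 500.0, 1000.0)); /* 0.333 */
                  return 0;
          }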
      
       +-------------------------------------------------------------------------+
       | 2. PECULIARITIES OF X86
       +-------------------------------------------------------------------------+
      
      Accounting for frequency changes in PELT signals requires the computation of
      the ratio freq_curr / freq_max. On x86 neither of those terms is readily
      available.
      
      2.1 CURRENT FREQUENCY
      =====================
      
      Since modern x86 has hardware control over the actual frequency we run
      at (because amongst other things, Turbo-Mode), we cannot simply use
      the frequency as requested through cpufreq.
      
      Instead we use the APERF/MPERF MSRs to compute the effective frequency
      over the recent past. Also, because reading MSRs is expensive, we don't
      do so every time we need the value, but amortize the cost by doing it
      once every tick.
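
      A minimal sketch of the arithmetic (standalone C with illustrative
      names; the MSR reads themselves, done with rdmsrl() at tick time, are
      not shown):

          #include <stdint.h>
          #include <stdio.h>

          /* Effective frequency over the last tick, a.k.a. "BusyMHz":
           * freq_base scaled by how much APERF advanced relative to MPERF. */
          static uint64_t busy_mhz(uint64_t delta_aperf, uint64_t delta_mperf,
                                   uint64_t freq_base_mhz)
          {
                  if (!delta_mperf)
                          return freq_base_mhz; /* no data this interval */
                  return freq_base_mhz * delta_aperf / delta_mperf;
          }

          int main(void)
          {
                  /* e.g. APERF advanced 8% more than MPERF on a 3500 MHz
                   * part: the core effectively ran at 3780 MHz */
                  printf("%llu MHz\n", (unsigned long long)
                         busy_mhz(108, 100, 3500));
                  return 0;
          }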
      
      2.2 MAX FREQUENCY
      =================
      
      Obtaining freq_max is also non-trivial because at any time the hardware can
      provide a frequency boost to a selected subset of cores if the package has
      enough power to spare (eg: Turbo Boost). This means that the maximum frequency
      available to a given core changes with time.
      
      The approach taken in this change is to arbitrarily set freq_max to a constant
      value at boot. The value chosen is the "4-cores (4C) turbo frequency" on most
      microarchitectures, after evaluating the following candidates:
      
          * 1-core (1C) turbo frequency (the fastest turbo state available)
          * around base frequency (a.k.a. max P-state)
          * something in between, such as 4C turbo
      
      To interpret these options, consider that this is the denominator in
      freq_curr/freq_max, and that ratio will be used to scale PELT signals such as
      util_avg and load_avg. A large denominator will undershoot (util_avg looks a
      bit smaller than it really is); vice versa, with a smaller denominator PELT
      signals will tend to overshoot. Given that PELT drives frequency selection
      in the schedutil governor, we will have:
      
          freq_max set to     | effect on DVFS
          --------------------+------------------
          1C turbo            | power efficiency (lower freq choices)
          base freq           | performance (higher util_avg, higher freq requests)
          4C turbo            | a bit of both
      
      4C turbo proves to be a good compromise in a number of benchmarks (see below).
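
      Putting 2.1 and 2.2 together, the per-tick scale factor could be
      sketched as follows (standalone C, illustrative names; the kernel works
      in fixed point with SCHED_CAPACITY_SCALE = 1024, and the clamp here is
      ours for illustration):

          #include <stdint.h>
          #include <stdio.h>

          #define SCHED_CAPACITY_SCALE 1024ULL

          /* freq_curr/freq_max in fixed point: APERF/MPERF gives
           * freq_curr/freq_base, so dividing by freq_max/freq_base (the
           * turbo ratio fixed at boot) yields the wanted ratio. */
          static uint64_t freq_scale(uint64_t delta_aperf, uint64_t delta_mperf,
                                     uint64_t freq_base, uint64_t freq_max)
          {
                  uint64_t scale = SCHED_CAPACITY_SCALE * delta_aperf * freq_base
                                   / (delta_mperf * freq_max);
                  return scale > SCHED_CAPACITY_SCALE ? SCHED_CAPACITY_SCALE
                                                      : scale;
          }

          int main(void)
          {
                  /* 8x-SKYLAKE-UMA: base 3500 MHz, 4C turbo 3700 MHz; a CPU
                   * busy at base frequency scales to 1024 * 3500/3700 = 968 */
                  printf("%llu\n", (unsigned long long)
                         freq_scale(100, 100, 3500, 3700));
                  return 0;
          }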
      
       +-------------------------------------------------------------------------+
       | 3. EFFECTS ON THE SCHEDUTIL FREQUENCY GOVERNOR
       +-------------------------------------------------------------------------+
      
      Once an architecture implements a frequency scale-invariant utilization (the
      PELT signal util_avg), schedutil switches its frequency selection formula from
      
          freq_next = 1.25 * freq_curr * util            [non-invariant util signal]
      
      to
      
          freq_next = 1.25 * freq_max * util             [invariant util signal]
      
      where, in the second formula, freq_max is set to the 1C turbo frequency (max
      turbo). The advantage of the second formula, whose usage we unlock with this
      patch, is that freq_next doesn't depend on the current frequency in an
      iterative fashion, but can jump to any frequency in a single update. This
      absence of feedback in the formula makes it quicker to react to utilization
      changes and more robust against pathological instabilities.
      
      Compare it to the update formula of intel_pstate/powersave:
      
          freq_next = 1.25 * freq_max * Busy%
      
      where again freq_max is 1C turbo and Busy% is the percentage of time not spent
      idling (calculated with delta_MPERF / delta_TSC); this is essentially the same
      as invariant schedutil, and largely responsible for intel_pstate/powersave's
      good reputation. The non-invariant schedutil formula is derived from the
      invariant one by approximating util_inv with util_raw * freq_curr / freq_max,
      but this approximation has limitations.
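
      As a sketch of both selection formulas (standalone C; the 1.25 margin
      is implemented as freq + freq/4, mirroring schedutil's headroom
      convention, and util is in the kernel's 0..1024 capacity scale):

          #include <stdint.h>
          #include <stdio.h>

          #define SCHED_CAPACITY_SCALE 1024ULL

          /* Invariant form: freq_curr does not appear, so the governor can
           * jump straight to the target frequency in a single update. */
          static uint64_t next_freq_inv(uint64_t freq_max, uint64_t util)
          {
                  return (freq_max + (freq_max >> 2)) * util
                         / SCHED_CAPACITY_SCALE;
          }

          /* Non-invariant form: util is time-based, freq_curr appears, and
           * the selection only converges over repeated updates. */
          static uint64_t next_freq_raw(uint64_t freq_curr, uint64_t util_raw)
          {
                  return (freq_curr + (freq_curr >> 2)) * util_raw
                         / SCHED_CAPACITY_SCALE;
          }

          int main(void)
          {
                  /* Half-utilized CPU, 1C turbo at 3900 MHz: one step to
                   * ~2437 MHz regardless of the current frequency. */
                  printf("%llu MHz\n", (unsigned long long)
                         next_freq_inv(3900, 512));

                  /* Non-invariant from 2000 MHz: the first step only
                   * reaches 1250 MHz; further updates must settle it. */
                  printf("%llu MHz\n", (unsigned long long)
                         next_freq_raw(2000, 512));
                  return 0;
          }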
      
      Testing shows improved performance due to better frequency selections when
      the machine is lightly loaded, and essentially no change in behaviour at
      saturation / overutilization.
      
       +-------------------------------------------------------------------------+
       | 4. KNOWN LIMITATIONS
       +-------------------------------------------------------------------------+
      
      It's been shown that it is possible to create pathological scenarios where a
      CPU-bound task cannot reach max utilization, if the normalizing factor
      freq_max is fixed to a constant value (see [Lelli-2018]).
      
      If freq_max is set to 4C turbo as we do here, one needs to peg at least 5
      cores in a package doing some busywork, and observe that none of those tasks
      will ever reach max util (1024) because they're all running at less than the
      4C turbo frequency.
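
      For a concrete number: on 80x-BROADWELL-NUMA (see section 5.1), five pegged
      cores run at most at the 5C turbo of 3200 MHz while freq_max is the 4C
      turbo of 3300 MHz, so their invariant utilization saturates around
      1024 * 3200 / 3300 ~= 993, short of the 1024 maximum.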
      
      While this concern still applies, we believe the performance benefit of
      frequency scale-invariant PELT signals outweighs the cost of this limitation.
      
       [Lelli-2018]
       https://lore.kernel.org/lkml/20180517150418.GF22493@localhost.localdomain/
      
       +-------------------------------------------------------------------------+
       | 5. PERFORMANCE TESTING
       +-------------------------------------------------------------------------+
      
      5.1 MACHINES
      ============
      
      We tested the patch on three machines, with Skylake, Broadwell and Haswell
      CPUs. The details are below, together with the available turbo ratios as
      reported by the appropriate MSRs.
      
      * 8x-SKYLAKE-UMA:
        Single socket E3-1240 v5, Skylake 4 cores/8 threads
        Max EFFiciency, BASE frequency and available turbo levels (MHz):
      
          EFFIC    800 |********
          BASE    3500 |***********************************
          4C      3700 |*************************************
          3C      3800 |**************************************
          2C      3900 |***************************************
          1C      3900 |***************************************
      
      * 80x-BROADWELL-NUMA:
        Two sockets E5-2698 v4, 2x Broadwell 20 cores/40 threads
        Max EFFiciency, BASE frequency and available turbo levels (MHz):
      
          EFFIC   1200 |************
          BASE    2200 |**********************
          8C      2900 |*****************************
          7C      3000 |******************************
          6C      3100 |*******************************
          5C      3200 |********************************
          4C      3300 |*********************************
          3C      3400 |**********************************
          2C      3600 |************************************
          1C      3600 |************************************
      
      * 48x-HASWELL-NUMA
        Two sockets E5-2670 v3, 2x Haswell 12 cores/24 threads
        Max EFFiciency, BASE frequency and available turbo levels (MHz):
      
          EFFIC   1200 |************
          BASE    2300 |***********************
          12C     2600 |**************************
          11C     2600 |**************************
          10C     2600 |**************************
          9C      2600 |**************************
          8C      2600 |**************************
          7C      2600 |**************************
          6C      2600 |**************************
          5C      2700 |***************************
          4C      2800 |****************************
          3C      2900 |*****************************
          2C      3100 |*******************************
          1C      3100 |*******************************
      
      5.2 SETUP
      =========
      
      * The baseline is Linux v5.2 with schedutil (non-invariant) and the intel_pstate
        driver in passive mode.
      * The rationale for choosing the various freq_max values to test has been to
        try all the 1-2-3-4C turbo levels (note that 1C and 2C turbo are identical
        on all machines), plus one more value closer to base_freq but still in the
        turbo range (8C turbo for both 80x-BROADWELL-NUMA and 48x-HASWELL-NUMA).
      * In addition we've run all tests with intel_pstate/powersave for comparison.
      * The filesystem is always XFS, the userspace is openSUSE Leap 15.1.
      * 8x-SKYLAKE-UMA is capable of HWP (Hardware-Managed P-States), so the runs
        with active intel_pstate on this machine use that.
      
      This gives, in terms of combinations tested on each machine:
      
      * 8x-SKYLAKE-UMA
        * Baseline: Linux v5.2, non-invariant schedutil, intel_pstate passive
        * intel_pstate active + powersave + HWP
        * invariant schedutil, freq_max = 1C turbo
        * invariant schedutil, freq_max = 3C turbo
        * invariant schedutil, freq_max = 4C turbo
      
      * both 80x-BROADWELL-NUMA and 48x-HASWELL-NUMA
        * [same as 8x-SKYLAKE-UMA, but not HWP-capable]
        * invariant schedutil, freq_max = 8C turbo
          (which on 48x-HASWELL-NUMA is the same as 12C turbo, or "all cores turbo")
      
      5.3 BENCHMARK RESULTS
      =====================
      
      5.3.1 NEUTRAL BENCHMARKS
      ------------------------
      
      Tests that didn't show any measurable difference in performance on any of the
      test machines between non-invariant schedutil and our patch are:
      
      * NAS Parallel Benchmarks (NPB) using either MPI or openMP for IPC, any
        computational kernel
      * flexible I/O (FIO)
      * hackbench (using threads or processes, and using pipes or sockets)
      
      5.3.2 NON-NEUTRAL BENCHMARKS
      ----------------------------
      
      What follows are summary tables where each benchmark result is given a score.

      * A tilde (~) means a neutral result, i.e. no difference from baseline.
      * Scores are computed as the ratio result_new / result_baseline, so a tilde
        means a score of 1.00.
      * The two terms of the score ratio are geometric means of the results of
        running the benchmark with different parameters (eg: for kernbench: using
        1, 2, 4, ... processes; for pgbench: varying the number of clients, and so
        on); see the sketch after this list for the arithmetic.
      * The first three tables show higher-is-better tests (i.e. measured in
        operations/second), the subsequent three show lower-is-better tests
        (i.e. the workload is fixed and we measure elapsed time, think kernbench).
      * "gitsource" is a name we made up for a test consisting of running the
        entire unit test suite of the Git SCM and measuring how long it takes. We
        take it as a typical example of a shell-intensive serialized workload.
      * The "I_PSTATE" column holds the results for intel_pstate/powersave. Other
        columns show invariant schedutil for different values of freq_max; 4C turbo
        is circled as it's the value we've chosen for the final implementation.
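
      For reference, the scoring arithmetic as standalone C (the data is made
      up for illustration, not taken from the runs below; link with -lm):

          #include <math.h>
          #include <stdio.h>

          /* Geometric mean of n results: exp(mean(log(x))). */
          static double geomean(const double *x, int n)
          {
                  double acc = 0.0;
                  for (int i = 0; i < n; i++)
                          acc += log(x[i]);
                  return exp(acc / n);
          }

          int main(void)
          {
                  /* Illustrative throughputs over three client counts */
                  double base[] = { 100.0, 200.0, 400.0 };
                  double patched[] = { 120.0, 220.0, 410.0 };

                  /* Score > 1.00 means improvement for higher-is-better
                   * benchmarks; ~1.00 is a tilde in the tables (prints 1.11). */
                  printf("score = %.2f\n",
                         geomean(patched, 3) / geomean(base, 3));
                  return 0;
          }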
      
      80x-BROADWELL-NUMA (comparison ratio; higher is better)
                                               +------+
                       I_PSTATE   1C     3C    | 4C   |  8C
      pgbench-ro           1.14   ~      ~     | 1.11 |  1.14
      pgbench-rw           ~      ~      ~     | ~    |  ~
      netperf-udp          1.06   ~      1.06  | 1.05 |  1.07
      netperf-tcp          ~      1.03   ~     | 1.01 |  1.02
      tbench4              1.57   1.18   1.22  | 1.30 |  1.56
                                               +------+
      
      8x-SKYLAKE-UMA (comparison ratio; higher is better)
                                               +------+
                   I_PSTATE/HWP   1C     3C    | 4C   |
      pgbench-ro           ~      ~      ~     | ~    |
      pgbench-rw           ~      ~      ~     | ~    |
      netperf-udp          ~      ~      ~     | ~    |
      netperf-tcp          ~      ~      ~     | ~    |
      tbench4              1.30   1.14   1.14  | 1.16 |
                                               +------+
      
      48x-HASWELL-NUMA (comparison ratio; higher is better)
                                               +------+
                       I_PSTATE   1C     3C    | 4C   |  12C
      pgbench-ro           1.15   ~      ~     | 1.06 |  1.16
      pgbench-rw           ~      ~      ~     | ~    |  ~
      netperf-udp          1.05   0.97   1.04  | 1.04 |  1.02
      netperf-tcp          0.96   1.01   1.01  | 1.01 |  1.01
      tbench4              1.50   1.05   1.13  | 1.13 |  1.25
                                               +------+
      
      In the table above we see that active intel_pstate is slightly better than our
      4C-turbo patch (both in reference to the baseline non-invariant schedutil) on
      read-only pgbench and much better on tbench. Both cases are also notable in
      that they show how lowering our freq_max (to 8C turbo and 12C turbo on
      80x-BROADWELL-NUMA and 48x-HASWELL-NUMA respectively) helps invariant
      schedutil get closer.
      
      If we ignore active intel_pstate and focus on the comparison with baseline
      alone, there are several instances of double-digit performance improvement.
      
      80x-BROADWELL-NUMA (comparison ratio; lower is better)
                                               +------+
                       I_PSTATE   1C     3C    | 4C   |  8C
      dbench4              1.23   0.95   0.95  | 0.95 |  0.95
      kernbench            0.93   0.83   0.83  | 0.83 |  0.82
      gitsource            0.98   0.49   0.49  | 0.49 |  0.48
                                               +------+
      
      8x-SKYLAKE-UMA (comparison ratio; lower is better)
                                               +------+
                   I_PSTATE/HWP   1C     3C    | 4C   |
      dbench4              ~      ~      ~     | ~    |
      kernbench            ~      ~      ~     | ~    |
      gitsource            0.92   0.55   0.55  | 0.55 |
                                               +------+
      
      48x-HASWELL-NUMA (comparison ratio; lower is better)
                                               +------+
                       I_PSTATE   1C     3C    | 4C   |  8C
      dbench4              ~      ~      ~     | ~    |  ~
      kernbench            0.94   0.90   0.89  | 0.90 |  0.90
      gitsource            0.97   0.69   0.69  | 0.69 |  0.69
                                               +------+
      
      dbench is not very remarkable here, unless we notice how poorly active
      intel_pstate is performing on 80x-BROADWELL-NUMA: a 23% regression versus
      non-invariant schedutil. We repeated that run, getting consistent results. Out
      of scope for the patch at hand, but deserving future investigation. Other than
      that, we previously ran this campaign with Linux v5.0 and saw the patch doing
      better on dbench at the time. We haven't checked closely and can only
      speculate at this point.
      
      On the NUMA boxes kernbench gets 10-15% improvements on average; we'll see in
      the detailed tables that the gains concentrate on low process counts (lightly
      loaded machines).
      
      The test we call "gitsource" (running the git unit test suite, a long-running
      single-threaded shell script) appears rather spectacular in this table (gains
      of 30-50% depending on the machine). It is to be noted, however, that
      gitsource has no adjustable parameters (such as the number of jobs in
      kernbench, which we average over in order to get a single-number summary
      score) and is exactly the kind of low-parallelism workload that benefits the
      most from this patch. When looking at the detailed tables of kernbench or
      tbench4, at low process or client counts one can see similar numbers.
      
      5.3.3 SELECTION OF DETAILED RESULTS
      -----------------------------------
      
      Machine            : 48x-HASWELL-NUMA
      Benchmark          : tbench4 (i.e. dbench4 over the network, actually loopback)
      Varying parameter  : number of clients
      Unit               : MB/sec (higher is better)
      
                         5.2.0 vanilla (BASELINE)               5.2.0 intel_pstate                   5.2.0 1C-turbo
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Hmean  1        126.73  +- 0.31% (        )      315.91  +- 0.66% ( 149.28%)      125.03  +- 0.76% (  -1.34%)
      Hmean  2        258.04  +- 0.62% (        )      614.16  +- 0.51% ( 138.01%)      269.58  +- 1.45% (   4.47%)
      Hmean  4        514.30  +- 0.67% (        )     1146.58  +- 0.54% ( 122.94%)      533.84  +- 1.99% (   3.80%)
      Hmean  8       1111.38  +- 2.52% (        )     2159.78  +- 0.38% (  94.33%)     1359.92  +- 1.56% (  22.36%)
      Hmean  16      2286.47  +- 1.36% (        )     3338.29  +- 0.21% (  46.00%)     2720.20  +- 0.52% (  18.97%)
      Hmean  32      4704.84  +- 0.35% (        )     4759.03  +- 0.43% (   1.15%)     4774.48  +- 0.30% (   1.48%)
      Hmean  64      7578.04  +- 0.27% (        )     7533.70  +- 0.43% (  -0.59%)     7462.17  +- 0.65% (  -1.53%)
      Hmean  128     6998.52  +- 0.16% (        )     6987.59  +- 0.12% (  -0.16%)     6909.17  +- 0.14% (  -1.28%)
      Hmean  192     6901.35  +- 0.25% (        )     6913.16  +- 0.10% (   0.17%)     6855.47  +- 0.21% (  -0.66%)
      
                                   5.2.0 3C-turbo                   5.2.0 4C-turbo                  5.2.0 12C-turbo
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Hmean  1        128.43  +- 0.28% (   1.34%)      130.64  +- 3.81% (   3.09%)      153.71  +- 5.89% (  21.30%)
      Hmean  2        311.70  +- 6.15% (  20.79%)      281.66  +- 3.40% (   9.15%)      305.08  +- 5.70% (  18.23%)
      Hmean  4        641.98  +- 2.32% (  24.83%)      623.88  +- 5.28% (  21.31%)      906.84  +- 4.65% (  76.32%)
      Hmean  8       1633.31  +- 1.56% (  46.96%)     1714.16  +- 0.93% (  54.24%)     2095.74  +- 0.47% (  88.57%)
      Hmean  16      3047.24  +- 0.42% (  33.27%)     3155.02  +- 0.30% (  37.99%)     3634.58  +- 0.15% (  58.96%)
      Hmean  32      4734.31  +- 0.60% (   0.63%)     4804.38  +- 0.23% (   2.12%)     4674.62  +- 0.27% (  -0.64%)
      Hmean  64      7699.74  +- 0.35% (   1.61%)     7499.72  +- 0.34% (  -1.03%)     7659.03  +- 0.25% (   1.07%)
      Hmean  128     6935.18  +- 0.15% (  -0.91%)     6942.54  +- 0.10% (  -0.80%)     7004.85  +- 0.12% (   0.09%)
      Hmean  192     6901.62  +- 0.12% (   0.00%)     6856.93  +- 0.10% (  -0.64%)     6978.74  +- 0.10% (   1.12%)
      
      This is one of the cases where the patch still can't surpass active
      intel_pstate, not even when freq_max is as low as 12C-turbo. Otherwise, gains are
      visible up to 16 clients and the saturated scenario is the same as baseline.
      
      The scores in the summary table from the previous section are ratios of
      geometric means of the results over different client counts, as seen in this
      table.
      
      Machine            : 80x-BROADWELL-NUMA
      Benchmark          : kernbench (kernel compilation)
      Varying parameter  : number of jobs
      Unit               : seconds (lower is better)
      
                         5.2.0 vanilla (BASELINE)               5.2.0 intel_pstate                   5.2.0 1C-turbo
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Amean  2        379.68  +- 0.06% (        )      330.20  +- 0.43% (  13.03%)      285.93  +- 0.07% (  24.69%)
      Amean  4        200.15  +- 0.24% (        )      175.89  +- 0.22% (  12.12%)      153.78  +- 0.25% (  23.17%)
      Amean  8        106.20  +- 0.31% (        )       95.54  +- 0.23% (  10.03%)       86.74  +- 0.10% (  18.32%)
      Amean  16        56.96  +- 1.31% (        )       53.25  +- 1.22% (   6.50%)       48.34  +- 1.73% (  15.13%)
      Amean  32        34.80  +- 2.46% (        )       33.81  +- 0.77% (   2.83%)       30.28  +- 1.59% (  12.99%)
      Amean  64        26.11  +- 1.63% (        )       25.04  +- 1.07% (   4.10%)       22.41  +- 2.37% (  14.16%)
      Amean  128       24.80  +- 1.36% (        )       23.57  +- 1.23% (   4.93%)       21.44  +- 1.37% (  13.55%)
      Amean  160       24.85  +- 0.56% (        )       23.85  +- 1.17% (   4.06%)       21.25  +- 1.12% (  14.49%)
      
                                   5.2.0 3C-turbo                   5.2.0 4C-turbo                   5.2.0 8C-turbo
      - - - - - - - -  - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Amean  2        284.08  +- 0.13% (  25.18%)      283.96  +- 0.51% (  25.21%)      285.05  +- 0.21% (  24.92%)
      Amean  4        153.18  +- 0.22% (  23.47%)      154.70  +- 1.64% (  22.71%)      153.64  +- 0.30% (  23.24%)
      Amean  8         87.06  +- 0.28% (  18.02%)       86.77  +- 0.46% (  18.29%)       86.78  +- 0.22% (  18.28%)
      Amean  16        48.03  +- 0.93% (  15.68%)       47.75  +- 1.99% (  16.17%)       47.52  +- 1.61% (  16.57%)
      Amean  32        30.23  +- 1.20% (  13.14%)       30.08  +- 1.67% (  13.57%)       30.07  +- 1.67% (  13.60%)
      Amean  64        22.59  +- 2.02% (  13.50%)       22.63  +- 0.81% (  13.32%)       22.42  +- 0.76% (  14.12%)
      Amean  128       21.37  +- 0.67% (  13.82%)       21.31  +- 1.15% (  14.07%)       21.17  +- 1.93% (  14.63%)
      Amean  160       21.68  +- 0.57% (  12.76%)       21.18  +- 1.74% (  14.77%)       21.22  +- 1.00% (  14.61%)
      
      The patch outperforms active intel_pstate (and baseline) by a considerable
      margin; the summary table from the previous section says 4C turbo and active
      intel_pstate are 0.83 and 0.93 against baseline respectively, so 4C turbo is
      0.83/0.93=0.89 against intel_pstate (~10% better on average). There is no
      noticeable difference with regard to the value of freq_max.
      
      Machine            : 8x-SKYLAKE-UMA
      Benchmark          : gitsource (time to run the git unit test suite)
      Varying parameter  : none
      Unit               : seconds (lower is better)
      
                                  5.2.0 vanilla           5.2.0 intel_pstate/hwp         5.2.0 1C-turbo
      - - - - - - - -  - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Amean         858.85  +- 1.16% (        )      791.94  +- 0.21% (   7.79%)      474.95 (  44.70%)
      
                                 5.2.0 3C-turbo                   5.2.0 4C-turbo
      - - - - - - - -  - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Amean         475.26  +- 0.20% (  44.66%)      474.34  +- 0.13% (  44.77%)
      
      In this test, which is of interest as representing shell-intensive
      (i.e. fork-intensive) serialized workloads, invariant schedutil outperforms
      intel_pstate/powersave by a whopping 40% margin.
      
      5.3.4 POWER CONSUMPTION, PERFORMANCE-PER-WATT
      ---------------------------------------------
      
      The following table shows the average power consumption in watts for each
      benchmark. Data comes from turbostat (package average), which in turn reads it
      from the RAPL interface on the CPU. We know the patch affects CPU frequencies,
      so it's reasonable to ignore other power consumers (such as memory or I/O).
      Also, we don't have a power meter available in the lab, so RAPL is the best
      we have.
      
      turbostat sampled average power every 10 seconds for the entire duration of
      each benchmark. We took all those values and averaged them (i.e. we don't
      have detail at per-parameter granularity, only for whole benchmarks).
      
      80x-BROADWELL-NUMA (power consumption, watts)
                                                          +--------+
                     BASELINE I_PSTATE       1C       3C  |     4C |      8C
      pgbench-ro       130.01   142.77   131.11   132.45  | 134.65 |  136.84
      pgbench-rw        68.30    60.83    71.45    71.70  |  71.65 |   72.54
      dbench4           90.25    59.06   101.43    99.89  | 101.10 |  102.94
      netperf-udp       65.70    69.81    66.02    68.03  |  68.27 |   68.95
      netperf-tcp       88.08    87.96    88.97    88.89  |  88.85 |   88.20
      tbench4          142.32   176.73   153.02   163.91  | 165.58 |  176.07
      kernbench         92.94   101.95   114.91   115.47  | 115.52 |  115.10
      gitsource         40.92    41.87    75.14    75.20  |  75.40 |   75.70
                                                          +--------+
      8x-SKYLAKE-UMA (power consumption, watts)
                                                          +--------+
                    BASELINE I_PSTATE/HWP    1C       3C  |     4C |
      pgbench-ro        46.49    46.68    46.56    46.59  |  46.52 |
      pgbench-rw        29.34    31.38    30.98    31.00  |  31.00 |
      dbench4           27.28    27.37    27.49    27.41  |  27.38 |
      netperf-udp       22.33    22.41    22.36    22.35  |  22.36 |
      netperf-tcp       27.29    27.29    27.30    27.31  |  27.33 |
      tbench4           41.13    45.61    43.10    43.33  |  43.56 |
      kernbench         42.56    42.63    43.01    43.01  |  43.01 |
      gitsource         13.32    13.69    17.33    17.30  |  17.35 |
                                                          +--------+
      48x-HASWELL-NUMA (power consumption, watts)
                                                          +--------+
                     BASELINE I_PSTATE       1C       3C  |     4C |     12C
      pgbench-ro       128.84   136.04   129.87   132.43  | 132.30 |  134.86
      pgbench-rw        37.68    37.92    37.17    37.74  |  37.73 |   37.31
      dbench4           28.56    28.73    28.60    28.73  |  28.70 |   28.79
      netperf-udp       56.70    60.44    56.79    57.42  |  57.54 |   57.52
      netperf-tcp       75.49    75.27    75.87    76.02  |  76.01 |   75.95
      tbench4          115.44   139.51   119.53   123.07  | 123.97 |  130.22
      kernbench         83.23    91.55    95.58    95.69  |  95.72 |   96.04
      gitsource         36.79    36.99    39.99    40.34  |  40.35 |   40.23
                                                          +--------+
      
      A lower power consumption isn't necessarily better; it depends on what is
      done with that energy. Below are tables with the performance-per-watt ratio
      for each machine and benchmark. Higher is always better; a tilde (~) means a
      neutral ratio (i.e. 1.00).
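
      The ratio for a lower-is-better benchmark can be computed as follows
      (standalone C; performance taken as 1/time, with the gitsource numbers
      for 8x-SKYLAKE-UMA at 4C turbo from the tables in this section):

          #include <stdio.h>

          /* Performance-per-watt against baseline: above 1.00 means the
           * extra performance costs proportionally less power. */
          static double ppw_ratio(double perf_new, double watt_new,
                                  double perf_base, double watt_base)
          {
                  return (perf_new / watt_new) / (perf_base / watt_base);
          }

          int main(void)
          {
                  /* gitsource: 474.34 s at 17.35 W vs 858.85 s at 13.32 W
                   * baseline -> ~1.39, in line with the 1.40 in the table
                   * below (averaging and rounding differences aside). */
                  printf("%.2f\n", ppw_ratio(1.0 / 474.34, 17.35,
                                             1.0 / 858.85, 13.32));
                  return 0;
          }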
      
      80x-BROADWELL-NUMA (performance-per-watt ratios; higher is better)
                                           +------+
                   I_PSTATE     1C     3C  |   4C |    8C
      pgbench-ro       1.04   1.06   0.94  | 1.07 |  1.08
      pgbench-rw       1.10   0.97   0.96  | 0.96 |  0.97
      dbench4          1.24   0.94   0.95  | 0.94 |  0.92
      netperf-udp      ~      1.02   1.02  | ~    |  1.02
      netperf-tcp      ~      1.02   ~     | ~    |  1.02
      tbench4          1.26   1.10   1.06  | 1.12 |  1.26
      kernbench        0.98   0.97   0.97  | 0.97 |  0.98
      gitsource        ~      1.11   1.11  | 1.11 |  1.13
                                           +------+
      
      8x-SKYLAKE-UMA (performance-per-watt ratios; higher is better)
                                           +------+
               I_PSTATE/HWP     1C     3C  |   4C |
      pgbench-ro       ~      ~      ~     | ~    |
      pgbench-rw       0.95   0.97   0.96  | 0.96 |
      dbench4          ~      ~      ~     | ~    |
      netperf-udp      ~      ~      ~     | ~    |
      netperf-tcp      ~      ~      ~     | ~    |
      tbench4          1.17   1.09   1.08  | 1.10 |
      kernbench        ~      ~      ~     | ~    |
      gitsource        1.06   1.40   1.40  | 1.40 |
                                           +------+
      
      48x-HASWELL-NUMA  (performance-per-watt ratios; higher is better)
                                           +------+
                   I_PSTATE     1C     3C  |   4C |   12C
      pgbench-ro       1.09   ~      1.09  | 1.03 |  1.11
      pgbench-rw       ~      0.86   ~     | ~    |  0.86
      dbench4          ~      1.02   1.02  | 1.02 |  ~
      netperf-udp      ~      0.97   1.03  | 1.02 |  ~
      netperf-tcp      0.96   ~      ~     | ~    |  ~
      tbench4          1.24   ~      1.06  | 1.05 |  1.11
      kernbench        0.97   0.97   0.98  | 0.97 |  0.96
      gitsource        1.03   1.33   1.32  | 1.32 |  1.33
                                           +------+
      
      These results are overall pleasing: in plenty of cases we observe
      performance-per-watt improvements. The few regressions (read/write pgbench and
      dbench on the Broadwell machine) are of small magnitude. kernbench loses a few
      percentage points (it has a 10-15% performance improvement, but apparently the
      increase in power consumption is larger than that). tbench4 and gitsource,
      which benefit the most from the patch, keep a positive score in this table,
      which is a welcome surprise; it suggests that in those particular workloads
      the non-invariant schedutil (and active intel_pstate, too) makes some rather
      suboptimal frequency selections.
      
      +-------------------------------------------------------------------------+
      | 6. MICROARCH'ES ADDRESSED HERE
      +-------------------------------------------------------------------------+
      
      The patch addresses Xeon Core processors that use MSR_PLATFORM_INFO and
      MSR_TURBO_RATIO_LIMIT to advertise their base frequency and turbo frequencies
      respectively. This excludes the recent Xeon Scalable Performance processors
      line (Xeon Gold, Platinum etc) whose MSRs have to be parsed differently.
      
      Subsequent patches will address:
      
      * Xeon Scalable Performance processors and Atom Goldmont/Goldmont Plus
      * Xeon Phi (Knights Landing, Knights Mill)
      * Atom Silvermont
      
      +-------------------------------------------------------------------------+
      | 7. REFERENCES
      +-------------------------------------------------------------------------+
      
      Tests have been run with the help of the MMTests performance testing
      framework, see github.com/gormanm/mmtests. The configuration file names for
      the benchmarks used are:
      
          db-pgbench-timed-ro-small-xfs
          db-pgbench-timed-rw-small-xfs
          io-dbench4-async-xfs
          network-netperf-unbound
          network-tbench
          scheduler-unbound
          workload-kerndevel-xfs
          workload-shellscripts-xfs
          hpc-nas-c-class-mpi-full-xfs
          hpc-nas-c-class-omp-full
      
      All those benchmarks are generally available on the web:
      
      pgbench: https://www.postgresql.org/docs/10/pgbench.html
      netperf: https://hewlettpackard.github.io/netperf/
      dbench/tbench: https://dbench.samba.org/
      gitsource: git unit test suite, github.com/git/git
      NAS Parallel Benchmarks: https://www.nas.nasa.gov/publications/npb.html
      hackbench: https://people.redhat.com/mingo/cfs-scheduler/tools/hackbench.c
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Giovanni Gherdovich <ggherdovich@suse.cz>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Doug Smythies <dsmythies@telus.net>
      Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Link: https://lkml.kernel.org/r/20200122151617.531-2-ggherdovich@suse.cz
      1567c3e3
    • sched/fair: Prevent unlimited runtime on throttled group · 2a4b03ff
      Vincent Guittot authored
      When a running task is moved to a throttled task group and there is no
      other task enqueued on the CPU, the task can keep running using 100% CPU,
      whatever the bandwidth allocated to the group, and even though its cfs_rq is
      throttled. Furthermore, the group entity of the cfs_rq and its parents are
      not enqueued but only set as curr on their respective cfs_rqs.
      
      We have the following sequence:
      
      sched_move_task
        -dequeue_task: dequeue task and group_entities.
        -put_prev_task: put task and group entities.
        -sched_change_group: move task to new group.
        -enqueue_task: enqueue only task but not group entities because cfs_rq is
          throttled.
        -set_next_task: set task and group_entities as current sched_entity of
          their cfs_rq.
      
      Another impact is that the runnable_load_avg of the root cfs_rq stays
      null because the group entities are not enqueued. This situation will persist
      until an "external" event triggers a reschedule. Let's trigger one
      immediately instead.
      Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Ben Segall <bsegall@google.com>
      Link: https://lkml.kernel.org/r/1579011236-31256-1-git-send-email-vincent.guittot@linaro.org
      2a4b03ff
    • sched/nohz: Optimize get_nohz_timer_target() · e938b9c9
      Wanpeng Li authored
      On a machine where CPU 0 does housekeeping and the other 39 CPUs in the
      same socket are in nohz_full mode, ftrace shows a huge amount of time burned
      in the loop searching for the nearest busy housekeeping CPU:
      
        2)               |                        get_nohz_timer_target() {
        2)   0.240 us    |                          housekeeping_test_cpu();
        2)   0.458 us    |                          housekeeping_test_cpu();
      
        ...
      
        2)   0.292 us    |                          housekeeping_test_cpu();
        2)   0.240 us    |                          housekeeping_test_cpu();
        2)   0.227 us    |                          housekeeping_any_cpu();
        2) + 43.460 us   |                        }
      
      This patch optimizes the search by looking for the nearest busy housekeeping
      CPU directly in the housekeeping cpumask; in my testing this reduces the
      worst-case search time from ~44us to under 10us. In addition, with the old
      logic the last busy housekeeper iterated over was an essentially random
      candidate, whereas the current CPU is a better fallback whenever it is
      itself a housekeeping CPU.
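
      The reworked search looks roughly like this (a sketch using the
      v5.5-era housekeeping helpers; see the patch itself for the
      authoritative version):

        int get_nohz_timer_target(void)
        {
                int i, cpu = smp_processor_id(), default_cpu = -1;
                struct sched_domain *sd;

                /* Prefer the current CPU outright when it is a busy
                 * housekeeper; remember it as fallback when it is idle. */
                if (housekeeping_cpu(cpu, HK_FLAG_TIMER)) {
                        if (!idle_cpu(cpu))
                                return cpu;
                        default_cpu = cpu;
                }

                /* Walk outward through the sched domains, iterating only
                 * over housekeeping CPUs instead of testing every CPU. */
                rcu_read_lock();
                for_each_domain(cpu, sd) {
                        for_each_cpu_and(i, sched_domain_span(sd),
                                         housekeeping_cpumask(HK_FLAG_TIMER)) {
                                if (cpu == i)
                                        continue;
                                if (!idle_cpu(i)) {
                                        cpu = i;
                                        goto unlock;
                                }
                        }
                }

                if (default_cpu == -1)
                        default_cpu = housekeeping_any_cpu(HK_FLAG_TIMER);
                cpu = default_cpu;
        unlock:
                rcu_read_unlock();
                return cpu;
        }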
      Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
      Link: https://lkml.kernel.org/r/1578876627-11938-1-git-send-email-wanpengli@tencent.com
      e938b9c9
    • sched/uclamp: Reject negative values in cpu_uclamp_write() · b562d140
      Qais Yousef authored
      The check to ensure that the new value written to cpu.uclamp.{min,max}
      is within range, [0:100], wasn't working because of the signed
      comparison:

              if (req.percent > UCLAMP_PERCENT_SCALE) {
                      req.ret = -ERANGE;
                      return req;
              }
      
      	# echo -1 > cpu.uclamp.min
      	# cat cpu.uclamp.min
      	42949671.96
      
      Cast req.percent into u64 to force the comparison to be unsigned and
      work as intended in capacity_from_percent().
      
      	# echo -1 > cpu.uclamp.min
      	sh: write error: Numerical result out of range
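
      A self-contained demonstration of the failure mode (userspace C, with
      the kernel's UCLAMP_PERCENT_SCALE value of 10000 assumed):

          #include <stdio.h>

          #define UCLAMP_PERCENT_SCALE 10000 /* 100.00%, two decimals */

          int main(void)
          {
                  /* "echo -1" parses to -1.00%, i.e. -100 in fixed point;
                   * read back as a u32 percentage it is 4294967196, which
                   * is the bogus "42949671.96" above. */
                  long long percent = -100;

                  /* Signed comparison: -100 > 10000 is false, so the bad
                   * value slips through the range check (the bug). */
                  if (!(percent > UCLAMP_PERCENT_SCALE))
                          puts("accepted (signed) -- the bug");

                  /* Cast to u64 first: -100 wraps to a huge number, and
                   * the same check now rejects it with -ERANGE (the fix). */
                  if ((unsigned long long)percent > UCLAMP_PERCENT_SCALE)
                          puts("rejected (unsigned) -- the fix");
                  return 0;
          }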
      
      Fixes: 2480c093 ("sched/uclamp: Extend CPU's cgroup controller")
      Signed-off-by: Qais Yousef <qais.yousef@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Link: https://lkml.kernel.org/r/20200114210947.14083-1-qais.yousef@arm.com
      b562d140
    • sched/fair: Allow a small load imbalance between low utilisation SD_NUMA domains · b396f523
      Mel Gorman authored
      The CPU load balancer balances between different domains to spread load
      and strives to have equal balance everywhere. Communicating tasks can
      migrate so they are topologically close to each other but these decisions
      are independent. On a lightly loaded NUMA machine, two communicating tasks
      pulled together at wakeup time can be pushed apart by the load balancer.
      In isolation, the load balancer decision is fine but it ignores the tasks'
      data locality and the wakeup/LB paths continually conflict. NUMA balancing
      is also a factor, but it too simply conflicts with the load balancer.
      
      This patch allows a fixed degree of imbalance of two tasks to exist
      between NUMA domains regardless of utilisation levels. In many cases,
      this prevents communicating tasks being pulled apart. It was evaluated
      whether the imbalance should be scaled to the domain size. However, no
      additional benefit was measured across a range of workloads and machines
      and scaling adds the risk that lower domains have to be rebalanced. While
      this could change again in the future, such a change should specify the
      use case and benefit.
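
      The core of the change is a small carve-out in the load balancer's
      imbalance calculation, roughly of this shape (a simplified sketch, not
      the literal diff):

        /* Consider allowing a small imbalance between NUMA groups */
        if (env->sd->flags & SD_NUMA) {
                unsigned int imbalance_min = 2;

                /*
                 * Allow an imbalance of up to two tasks -- a simple pair
                 * of communicating tasks -- to remain where they are when
                 * the source domain is almost idle.
                 */
                if (busiest->sum_nr_running <= imbalance_min)
                        env->imbalance = 0;
        }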
      
      The most obvious impact is on netperf TCP_STREAM -- two simple
      communicating tasks with some softirq offload depending on the
      transmission rate.
      
       2-socket Haswell machine 48 core, HT enabled
       netperf-tcp -- mmtests config config-network-netperf-unbound
      			      baseline              lbnuma-v3
       Hmean     64         568.73 (   0.00%)      577.56 *   1.55%*
       Hmean     128       1089.98 (   0.00%)     1128.06 *   3.49%*
       Hmean     256       2061.72 (   0.00%)     2104.39 *   2.07%*
       Hmean     1024      7254.27 (   0.00%)     7557.52 *   4.18%*
       Hmean     2048     11729.20 (   0.00%)    13350.67 *  13.82%*
       Hmean     3312     15309.08 (   0.00%)    18058.95 *  17.96%*
       Hmean     4096     17338.75 (   0.00%)    20483.66 *  18.14%*
       Hmean     8192     25047.12 (   0.00%)    27806.84 *  11.02%*
       Hmean     16384    27359.55 (   0.00%)    33071.88 *  20.88%*
       Stddev    64           2.16 (   0.00%)        2.02 (   6.53%)
       Stddev    128          2.31 (   0.00%)        2.19 (   5.05%)
       Stddev    256         11.88 (   0.00%)        3.22 (  72.88%)
       Stddev    1024        23.68 (   0.00%)        7.24 (  69.43%)
       Stddev    2048        79.46 (   0.00%)       71.49 (  10.03%)
       Stddev    3312        26.71 (   0.00%)       57.80 (-116.41%)
       Stddev    4096       185.57 (   0.00%)       96.15 (  48.19%)
       Stddev    8192       245.80 (   0.00%)      100.73 (  59.02%)
       Stddev    16384      207.31 (   0.00%)      141.65 (  31.67%)
      
      In this case, there was a sizable improvement to performance and
      a general reduction in variance. However, this is not universal.
      For most machines, the impact was roughly a 3% performance gain.
      
       Ops NUMA base-page range updates       19796.00         292.00
       Ops NUMA PTE updates                   19796.00         292.00
       Ops NUMA PMD updates                       0.00           0.00
       Ops NUMA hint faults                   16113.00         143.00
       Ops NUMA hint local faults %            8407.00         142.00
       Ops NUMA hint local percent               52.18          99.30
       Ops NUMA pages migrated                 4244.00           1.00
      
      Without the patch, only 52.18% of sampled accesses are local.  In an
      earlier changelog, 100% of sampled accesses were local and indeed on
      most machines, this was still the case. In this specific case, the
      local sampled rate was 99.3% but note the "base-page range updates"
      and "PTE updates".  The activity with the patch is negligible, as were
      the number of faults. The small number of pages migrated were related to
      shared libraries.  A 2-socket Broadwell showed better results on average
      but is not presented for brevity, as the performance was similar except
      that it showed 100% of the sampled NUMA hints were local. The patch holds
      up for a 4-socket Haswell, an AMD EPYC and an AMD EPYC 2 machine.
      
      For dbench, the impact depends on the filesystem used and the number of
      clients. On XFS, there is little difference as the clients typically
      communicate with workqueues which have a separate class of scheduler
      problem at the moment. For ext4, performance is generally better,
      particularly for small numbers of clients as NUMA balancing activity is
      negligible with the patch applied.
      
      A more interesting example is the Facebook schbench which uses a
      number of messaging threads to communicate with worker threads. In this
      configuration, one messaging thread is used per NUMA node and the number of
      worker threads is varied. The 50, 75, 90, 95, 99, 99.5 and 99.9 percentiles
      of response latency are then reported.
      
       Lat 50.00th-qrtle-1        44.00 (   0.00%)       37.00 (  15.91%)
       Lat 75.00th-qrtle-1        53.00 (   0.00%)       41.00 (  22.64%)
       Lat 90.00th-qrtle-1        57.00 (   0.00%)       42.00 (  26.32%)
       Lat 95.00th-qrtle-1        63.00 (   0.00%)       43.00 (  31.75%)
       Lat 99.00th-qrtle-1        76.00 (   0.00%)       51.00 (  32.89%)
       Lat 99.50th-qrtle-1        89.00 (   0.00%)       52.00 (  41.57%)
       Lat 99.90th-qrtle-1        98.00 (   0.00%)       55.00 (  43.88%)
       Lat 50.00th-qrtle-2        42.00 (   0.00%)       42.00 (   0.00%)
       Lat 75.00th-qrtle-2        48.00 (   0.00%)       47.00 (   2.08%)
       Lat 90.00th-qrtle-2        53.00 (   0.00%)       52.00 (   1.89%)
       Lat 95.00th-qrtle-2        55.00 (   0.00%)       53.00 (   3.64%)
       Lat 99.00th-qrtle-2        62.00 (   0.00%)       60.00 (   3.23%)
       Lat 99.50th-qrtle-2        63.00 (   0.00%)       63.00 (   0.00%)
       Lat 99.90th-qrtle-2        68.00 (   0.00%)       66.00 (   2.94%)
      
      For higher numbers of worker threads, the differences become negligible, but
      it's interesting to note the difference in wakeup latency at low utilisation;
      mpstat confirms that activity was almost all on one node until the number of
      worker threads increased.
      
      Hackbench generally showed neutral results across a range of machines.
      This is different to earlier versions of the patch which allowed imbalances
      for higher degrees of utilisation. perf bench pipe showed negligible
      differences in overall performance as the differences are very close to
      the noise.
      
      An earlier prototype of the patch showed major regressions for NAS C-class
      when running with only half of the available CPUs -- 20-30% performance
      hits were measured at the time. With this version of the patch, the impact
      is negligible, with small gains/losses within the noise. This is
      because the number of threads far exceeds the small imbalance the patch
      cares about. Similarly, there were reports of regressions for the autonuma
      benchmark against earlier versions but again, normal load balancing now
      applies for that workload.
      
      In general, the patch simply seeks to avoid unnecessary cross-node
      migrations in the basic case where imbalances are very small.  For low
      utilisation communicating workloads, this patch generally behaves better
      with less NUMA balancing activity. For high utilisation, there is no
      change in behaviour.
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
      Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
      Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Acked-by: Phil Auld <pauld@redhat.com>
      Tested-by: Phil Auld <pauld@redhat.com>
      Link: https://lkml.kernel.org/r/20200114101319.GO3466@techsingularity.net
      b396f523
    • timers/nohz: Update NOHZ load in remote tick · ebc0f83c
      Peter Zijlstra (Intel) authored
      The way loadavg is tracked during nohz only pays attention to the load
      upon entering nohz.  This can be particularly noticeable if full nohz is
      entered while non-idle, and then the cpu goes idle and stays that way for
      a long time.
      
      Use the remote tick to ensure that full nohz cpus report their deltas
      within a reasonable time.
      
      [ swood: Added changelog and removed recheck of stopped tick. ]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Scott Wood <swood@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Link: https://lkml.kernel.org/r/1578736419-14628-3-git-send-email-swood@redhat.com
      ebc0f83c
    • sched/core: Don't skip remote tick for idle CPUs · 488603b8
      Scott Wood authored
      This will be used in the next patch to get a loadavg update from
      nohz cpus.  The delta check is skipped because idle_sched_class
      doesn't update se.exec_start.
      Signed-off-by: Scott Wood <swood@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Link: https://lkml.kernel.org/r/1578736419-14628-2-git-send-email-swood@redhat.com
      488603b8
  2. 20 Jan, 2020 1 commit
  3. 17 Jan, 2020 14 commits
  4. 25 Dec, 2019 9 commits
  5. 23 Dec, 2019 2 commits
  6. 22 Dec, 2019 7 commits
    • Merge tag 'xfs-5.5-fixes-2' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux · c6017471
      Linus Torvalds authored
      Pull xfs fixes from Darrick Wong:
       "Fix a few bugs that could lead to corrupt files, fsck complaints, and
        filesystem crashes:
      
         - Minor documentation fixes
      
         - Fix a file corruption due to read racing with an insert range
           operation.
      
         - Fix log reservation overflows when allocating large rt extents
      
         - Fix a buffer log item flags check
      
         - Don't allow administrators to mount with sunit= options that will
           cause later xfs_repair complaints about the root directory being
           suspicious because the fs geometry appeared inconsistent
      
         - Fix a non-static helper that should have been static"
      
      * tag 'xfs-5.5-fixes-2' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:
        xfs: Make the symbol 'xfs_rtalloc_log_count' static
        xfs: don't commit sunit/swidth updates to disk if that would cause repair failures
        xfs: split the sunit parameter update into two parts
        xfs: refactor agfl length computation function
        libxfs: resync with the userspace libxfs
        xfs: use bitops interface for buf log item AIL flag check
        xfs: fix log reservation overflows when allocating large rt extents
        xfs: stabilize insert range start boundary to avoid COW writeback race
        xfs: fix Sphinx documentation warning
      c6017471
    • Merge tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4 · a3965607
      Linus Torvalds authored
      Pull ext4 bug fixes from Ted Ts'o:
       "Ext4 bug fixes, including a regression fix"
      
      * tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
        ext4: clarify impact of 'commit' mount option
        ext4: fix unused-but-set-variable warning in ext4_add_entry()
        jbd2: fix kernel-doc notation warning
        ext4: use RCU API in debug_print_tree
        ext4: validate the debug_want_extra_isize mount option at parse time
        ext4: reserve revoke credits in __ext4_new_inode
        ext4: unlock on error in ext4_expand_extra_isize()
        ext4: optimize __ext4_check_dir_entry()
        ext4: check for directory entries too close to block end
        ext4: fix ext4_empty_dir() for directories with holes
      a3965607
    • Merge tag 'block-5.5-20191221' of git://git.kernel.dk/linux-block · 44579f35
      Linus Torvalds authored
      Pull block fixes from Jens Axboe:
       "Let's try this one again, this time without the compat_ioctl changes.
        We've got those fixed up, but that can go out next week.
      
        This contains:
      
         - block queue flush lockdep annotation (Bart)
      
         - Type fix for bsg_queue_rq() (Bart)
      
         - Three dasd fixes (Stefan, Jan)
      
         - nbd deadlock fix (Mike)
      
         - Error handling bio user map fix (Yang)
      
         - iocost fix (Tejun)
      
         - sbitmap waitqueue addition fix that affects the kyber IO scheduler
           (David)"
      
      * tag 'block-5.5-20191221' of git://git.kernel.dk/linux-block:
        sbitmap: only queue kyber's wait callback if not already active
        block: fix memleak when __blk_rq_map_user_iov() is failed
        s390/dasd: fix typo in copyright statement
        s390/dasd: fix memleak in path handling error case
        s390/dasd/cio: Interpret ccw_device_get_mdc return value correctly
        block: Fix a lockdep complaint triggered by request queue flushing
        block: Fix the type of 'sts' in bsg_queue_rq()
        block: end bio with BLK_STS_AGAIN in case of non-mq devs and REQ_NOWAIT
        nbd: fix shutdown and recv work deadlock v2
        iocost: over-budget forced IOs should schedule async delay
      44579f35
    • Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm · a313c8e0
      Linus Torvalds authored
      Pull KVM fixes from Paolo Bonzini:
       "PPC:
         - Fix a bug where we try to do an ultracall on a system without an
           ultravisor
      
        KVM:
         - Fix uninitialised sysreg accessor
         - Fix handling of demand-paged device mappings
         - Stop spamming the console on IMPDEF sysregs
         - Relax mappings of writable memslots
         - Assorted cleanups
      
        MIPS:
         - Now orphan, James Hogan is stepping down
      
        x86:
         - MAINTAINERS change, so long Radim and thanks for all the fish
         - supported CPUID fixes for AMD machines without SPEC_CTRL"
      
      * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
        MAINTAINERS: remove Radim from KVM maintainers
        MAINTAINERS: Orphan KVM for MIPS
        kvm: x86: Host feature SSBD doesn't imply guest feature AMD_SSBD
        kvm: x86: Host feature SSBD doesn't imply guest feature SPEC_CTRL_SSBD
        KVM: PPC: Book3S HV: Don't do ultravisor calls on systems without ultravisor
        KVM: arm/arm64: Properly handle faulting of device mappings
        KVM: arm64: Ensure 'params' is initialised when looking up sys register
        KVM: arm/arm64: Remove excessive permission check in kvm_arch_prepare_memory_region
        KVM: arm64: Don't log IMP DEF sysreg traps
        KVM: arm64: Sanely ratelimit sysreg messages
        KVM: arm/arm64: vgic: Use wrapper function to lock/unlock all vcpus in kvm_vgic_create()
        KVM: arm/arm64: vgic: Fix potential double free dist->spis in __kvm_vgic_destroy()
        KVM: arm/arm64: Get rid of unused arg in cpu_init_hyp_mode()
      a313c8e0
    • Merge tag 'riscv/for-v5.5-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux · 7214618c
      Linus Torvalds authored
      Pull RISC-V fixes from Paul Walmsley:
       "Several fixes, and one cleanup, for RISC-V.
      
        Fixes:
      
         - Fix an error in a Kconfig file that resulted in an undefined
           Kconfig option "CONFIG_CONFIG_MMU"
      
         - Fix scratch register clearing in M-mode (affects nommu users)
      
         - Fix a mismerge on my part that broke the build for
           CONFIG_SPARSEMEM_VMEMMAP users
      
        Cleanup:
      
         - Move SiFive L2 cache-related code to drivers/soc, per request"
      
      * tag 'riscv/for-v5.5-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
        riscv: move sifive_l2_cache.c to drivers/soc
        riscv: define vmemmap before pfn_to_page calls
        riscv: fix scratch register clearing in M-mode.
        riscv: Fix use of undefined config option CONFIG_CONFIG_MMU
      7214618c
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net · 78bac77b
      Linus Torvalds authored
      Pull networking fixes from David Miller:
      
       1) Several nf_flow_table_offload fixes from Pablo Neira Ayuso,
          including adding a missing ipv6 match description.
      
       2) Several heap overflow fixes in mwifiex from qize wang and Ganapathi
          Bhat.
      
       3) Fix uninit value in bond_neigh_init(), from Eric Dumazet.
      
       4) Fix non-ACPI probing of nxp-nci, from Stephan Gerhold.
      
       5) Fix use after free in tipc_disc_rcv(), from Tuong Lien.
      
       6) Enforce limit of 33 tail calls in mips and riscv JIT, from Paul
          Chaignon.
      
       7) Multicast MAC limit test is off by one in qede, from Manish Chopra.
      
       8) Fix established socket lookup race when socket goes from
          TCP_ESTABLISHED to TCP_LISTEN, because there lacks an intervening
          RCU grace period. From Eric Dumazet.
      
       9) Don't send empty SKBs from tcp_write_xmit(), also from Eric Dumazet.
      
      10) Fix active backup transition after link failure in bonding, from
          Mahesh Bandewar.
      
      11) Avoid zero sized hash table in gtp driver, from Taehee Yoo.
      
      12) Fix wrong interface passed to ->mac_link_up(), from Russell King.
      
      13) Fix DSA egress flooding settings in b53, from Florian Fainelli.
      
      14) Memory leak in gmac_setup_txqs(), from Navid Emamdoost.
      
      15) Fix double free in dpaa2-ptp code, from Ioana Ciornei.
      
      16) Reject invalid MTU values in stmmac, from Jose Abreu.
      
      17) Fix refcount leak in error path of u32 classifier, from Davide
          Caratti.
      
      18) Fix regression causing iwlwifi firmware crashes on boot, from Anders
          Kaseorg.
      
      19) Fix inverted return value logic in llc2 code, from Chan Shu Tak.
      
       20) Disable hardware GRO when XDP is attached to qede, from Manish
          Chopra.
      
      21) Since we encode state in the low pointer bits, dst metrics must be
          at least 4 byte aligned, which is not necessarily true on m68k. Add
          annotations to fix this, from Geert Uytterhoeven.
      
      * git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (160 commits)
        sfc: Include XDP packet headroom in buffer step size.
        sfc: fix channel allocation with brute force
        net: dst: Force 4-byte alignment of dst_metrics
        selftests: pmtu: fix init mtu value in description
        hv_netvsc: Fix unwanted rx_table reset
        net: phy: ensure that phy IDs are correctly typed
        mod_devicetable: fix PHY module format
        qede: Disable hardware gro when xdp prog is installed
        net: ena: fix issues in setting interrupt moderation params in ethtool
        net: ena: fix default tx interrupt moderation interval
        net/smc: unregister ib devices in reboot_event
        net: stmmac: platform: Fix MDIO init for platforms without PHY
        llc2: Fix return statement of llc_stat_ev_rx_null_dsap_xid_c (and _test_c)
        net: hisilicon: Fix a BUG trigered by wrong bytes_compl
        net: dsa: ksz: use common define for tag len
        s390/qeth: don't return -ENOTSUPP to userspace
        s390/qeth: fix promiscuous mode after reset
        s390/qeth: handle error due to unsupported transport mode
        cxgb4: fix refcount init for TC-MQPRIO offload
        tc-testing: initial tdc selftests for cls_u32
        ...
      78bac77b
    • pipe: fix empty pipe check in pipe_write() · 0dd1e377
      Jan Stancek authored
      The LTP pipeio_1 test hangs with v5.5-rc2-385-gb8e382a1: the read side
      observes an empty pipe and sleeps, while the write side runs out of
      space and then sleeps as well. In this scenario there are 5 writers
      and 1 reader.

      The problem is that after pipe_write() reacquires the pipe lock, it
      re-checks for an empty pipe with a potentially stale 'head' and so
      doesn't wake up the read side anymore. pipe->tail can advance
      beyond the stale 'head', because there are multiple writers.
      
      Use pipe->head for the empty-pipe check after reacquiring the lock, so
      that the current state is observed.
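
      Sketched against pipe_write() (pipe_empty() is the head == tail helper
      from linux/pipe_fs_i.h; surrounding code elided):

          /* before: 'head' was sampled prior to dropping the lock and
           * may be stale by the time the lock is retaken */
          was_empty = pipe_empty(head, pipe->tail);

          /* after: evaluate emptiness against the current head */
          was_empty = pipe_empty(pipe->head, pipe->tail);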
      
      Testing: With patch, LTP pipeio_1 ran successfully in loop for 1 hour.
               Without patch it hanged within a minute.
      
      Fixes: 1b6b26ae ("pipe: fix and clarify pipe write wakeup logic")
      Reported-by: Rachel Sibley <rasibley@redhat.com>
      Signed-off-by: Jan Stancek <jstancek@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0dd1e377