1. 22 Apr, 2014 1 commit
    • Fixed the problem of mdev-5947. · 3e0f63c1
      Igor Babaev authored
      Back-ported from the mysql 5.6 code line the patch with
      the following comment:
      
        Fix for Bug#11757108 CHANGE IN EXECUTION PLAN FOR COUNT_DISTINCT_GROUP_ON_KEY
                             CAUSES PERFORMANCE REGRESSION
      
        The cause of the performance regression is that the access strategy for the
        GROUP BY query changed from using "index scan" in mysql-5.1 to using "loose
        index scan" in mysql-5.5. The index used for group by is unique, and thus each
        "loose scan" group will only contain one record. Since loose scan needs to
        re-position on each "loose scan" group, this query will do a re-position for
        each index entry. Compared to just reading the next index entry as a normal
        index scan does, the use of loose scan for this query becomes more expensive.
      
        The reason loose scan is selected for this query is that, in the current
        code, when the size of the "loose scan" group is one, its cost estimate
        becomes almost identical to the cost of using a normal index scan.
        Differences in the use of integer versus floating-point arithmetic can then
        cause one or the other access strategy to be selected.
      
        The main issue with the formula for estimating the cost of using loose scan is
        that it does not take into account that it is more costly to do a re-position
        for each "loose scan" group compared to just reading the next index entry.
        Both index scan and loose scan estimate the CPU cost as:

          "number of entries needed to read/scan" * ROW_EVALUATE_COST
      
        The results from testing with the query in this bug indicate that the real
        cost of doing a re-position is four to eight times higher than just reading the
        next index entry. Thus, the CPU cost estimate for loose scan should be increased.
        To account for the extra work to re-position in the index we increase the
        cost for loose index scan to include the cost of navigating the index.
        This is modelled as a function of the height of the b-tree:
      
          navigation cost= ceil(log(records in table)/log(indexes per block))
                         * ROWID_COMPARE_COST;
      
        This will avoid loose index scan being used for indexes where the "loose scan"
        group contains very few index entries.
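      To make the shape of this cost term concrete, here is a minimal standalone
      sketch (not the server's actual code; the constant values and the way the
      penalty is applied per group are assumptions for illustration only):

        #include <cmath>
        #include <cstdio>

        // Illustrative constants only; the server defines its own cost constants.
        const double ROW_EVALUATE_COST  = 0.20;
        const double ROWID_COMPARE_COST = 0.10;

        // Extra CPU cost of re-positioning within the index, modelled as the
        // height of the B-tree: ceil(log(records) / log(keys per block)).
        double navigation_cost(double records_in_table, double keys_per_block)
        {
          double btree_height= std::ceil(std::log(records_in_table) /
                                         std::log(keys_per_block));
          return btree_height * ROWID_COMPARE_COST;
        }

        int main()
        {
          double records= 1000000, keys_per_block= 128;
          double groups=  1000000;   // unique index: one "loose scan" group per entry

          double index_scan_cpu= records * ROW_EVALUATE_COST;
          double loose_scan_cpu= groups * (ROW_EVALUATE_COST +
                                           navigation_cost(records, keys_per_block));

          std::printf("index scan: %.0f  loose scan: %.0f\n",
                      index_scan_cpu, loose_scan_cpu);
          return 0;
        }

      With one entry per group the navigation penalty is paid for every index entry,
      which is exactly the case where loose index scan should no longer be chosen.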
  2. 16 Apr, 2014 1 commit
  3. 15 Apr, 2014 1 commit
  4. 11 Apr, 2014 1 commit
    • MDEV-6081: ORDER BY+ref(const): selectivity is very incorrect (MySQL Bug#14338686) · 244d4b53
      Sergey Petrunya authored
      Add a testcase and backport this fix:
      
      Bug#14338686: MYSQL IS GENERATING DIFFERENT AND SLOWER
                    (IN NEWER VERSIONS) EXECUTION PLAN
      PROBLEM:
      While checking for an index that can deliver the ORDER BY order
      for this query
      "SELECT datestamp FROM contractStatusHistory WHERE
      contract_id = contracts.id ORDER BY datestamp asc limit 1;"

      we do not calculate the number of rows to be examined correctly.
      As a result we choose index 'idx_contractStatusHistory_datestamp',
      defined on the 'datestamp' field, rather than index
      'contract_id', hence the lower performance.
      
      ANALYSIS:
      While checking whether an index is present that gives the records in
      sorted order (datestamp), we consider the selectivity of the
      'ref_key' ('contract_id' here) using 'table->quick_condition_rows'.
      'ref_key' here can be an index from 'REF_ACCESS' or from 'RANGE'.

      As this is a 'REF_ACCESS', 'table->quick_condition_rows' is not
      set to the actual value, which is 2. Instead it is set to the number
      of tuples present in the table, indicating that every selected row
      would satisfy the condition present in the query.

      Hence, the selectivity becomes 1 even when we choose the index
      on the ORDER BY column instead of the join condition.

      But in reality, as only 2 rows satisfy the condition, we need to
      examine half of the entire data set to get one tuple when we
      choose the index on the ORDER BY column.
      Had we chosen the 'REF_ACCESS' we would have examined only 2 tuples.
      Hence the delay in executing the specified query.
        
      FIX:
      While calculating the selectivity of the ref_key:
      For REF_ACCESS, consider quick_rows[ref_key] if the range
      optimizer has an estimate for this key; otherwise consider
      the 'rec_per_key' statistic.
      For RANGE ACCESS, consider 'table->quick_condition_rows'.
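      A minimal standalone sketch of this decision (the struct and functions
      here are illustrative stand-ins, not the server's actual data structures
      or API; quick_rows, rec_per_key and quick_condition_rows are passed in as
      plain inputs):

        #include <map>

        // Hypothetical container for the optimizer statistics referred to above.
        struct TableStats
        {
          double rows_in_table;
          double quick_condition_rows;       // rows estimate for RANGE access
          std::map<int, double> quick_rows;  // per-key range estimates, if any
          std::map<int, double> rec_per_key; // index statistics
        };

        // Estimated number of matching rows for the access path on ref_key.
        double ref_key_rows_estimate(const TableStats &t, int ref_key,
                                     bool is_ref_access)
        {
          if (is_ref_access)
          {
            // REF access: prefer the range optimizer's estimate for this key...
            auto it= t.quick_rows.find(ref_key);
            if (it != t.quick_rows.end())
              return it->second;
            // ...otherwise fall back to the rec_per_key statistic.
            return t.rec_per_key.at(ref_key);
          }
          // RANGE access: use the overall quick_condition_rows estimate.
          return t.quick_condition_rows;
        }

        // Selectivity used when judging an index for the ORDER BY clause.
        double ref_key_selectivity(const TableStats &t, int ref_key,
                                   bool is_ref_access)
        {
          return ref_key_rows_estimate(t, ref_key, is_ref_access) /
                 t.rows_in_table;
        }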
  5. 07 Apr, 2014 1 commit
  6. 10 Apr, 2014 4 commits
    • MDEV-6068 Upgrade removes all changes to 'mysql' database · a7962ea5
      Elena Stepanova authored
      The 10.0 variation of the problem was that system tables were altered
      during the mysql_upgrade process using the old (smaller) column lengths.
      At the end the tables were altered again, so the structure was restored,
      but if there were long values before the upgrade, they had already been truncated.
      Fixed by using the correct column lengths in the ALTER statements.
    • Fixing compilation problem on AIX. · 5fffa449
      Alexander Barkov authored
    • MDEV-5401: Wrong result (missing row) on a 2nd execution of PS with... · 39afdcdd
      unknown authored
      MDEV-5401: Wrong result (missing row) on a 2nd execution of PS with exists_to_in=on, MERGE view or a SELECT SQ
      
      The problem was that the view substitutes its fields on prepare and reverts
      the change after execution. During optimization after prepare, the exists-to-IN
      (exists2in) conversion substituted the arguments of '=' with the constant '1',
      but then one of the arguments of '=' was reverted back to the view field
      reference. This led to an incorrect WHERE condition on the second execution.

      To fix the problem, we replace the whole '=' item with '1' permanently.
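      The difference between the two substitutions can be pictured with a toy
      expression tree (hypothetical classes for illustration only, not the
      server's Item hierarchy):

        #include <cstdio>
        #include <memory>
        #include <string>

        // Toy condition nodes, only to illustrate the substitution.
        struct Node
        {
          virtual ~Node()= default;
          virtual std::string text() const= 0;
        };
        struct Const : Node                       // a constant such as 1
        {
          int v;
          explicit Const(int v) : v(v) {}
          std::string text() const override { return std::to_string(v); }
        };
        struct Eq : Node                          // "left = right"
        {
          std::string left, right;                // operand names, e.g. fields
          Eq(std::string l, std::string r) : left(std::move(l)), right(std::move(r)) {}
          std::string text() const override { return left + " = " + right; }
        };

        int main()
        {
          // WHERE condition after the view substituted its field on prepare.
          std::unique_ptr<Node> where= std::make_unique<Eq>("view_field", "outer_field");

          // Buggy behaviour: only the arguments of '=' were replaced with '1';
          // when the view later reverted its field substitution, one argument
          // reappeared, leaving a wrong condition for the second execution.
          //
          // Fixed behaviour: the whole '=' node is replaced with the constant 1,
          // so the view's revert step has nothing left to restore.
          where= std::make_unique<Const>(1);

          std::printf("WHERE %s\n", where->text().c_str());
          return 0;
        }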
    • MDEV-6040: MariaDB hangs if terminated quickly after start · 584c2d0a
      unknown authored
      We need to use mysql_cond_broadcast() rather than _signal for
      COND_thread_count, as there can be multiple waiters.
      
      Thanks to Pavel Ivanov for reporting both the problem and the
      solution.
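      The general pattern, as a standalone sketch using std::condition_variable
      instead of the server's mysql_cond_* wrappers (the variable names mirror
      the ones above but the code is illustrative only): with more than one
      thread blocked on the same condition, a broadcast (notify_all) wakes all
      of them, whereas a single signal may leave some waiters blocked forever.

        #include <condition_variable>
        #include <cstdio>
        #include <mutex>
        #include <thread>
        #include <vector>

        std::mutex LOCK_thread_count;
        std::condition_variable COND_thread_count;
        int thread_count= 3;

        void worker(int id)
        {
          std::unique_lock<std::mutex> lock(LOCK_thread_count);
          --thread_count;
          // Broadcast: every thread waiting for thread_count to drop must be
          // woken; notify_one() could wake only one of several waiters.
          COND_thread_count.notify_all();
          std::printf("worker %d done, %d remaining\n", id, thread_count);
        }

        void waiter(const char *name)
        {
          std::unique_lock<std::mutex> lock(LOCK_thread_count);
          COND_thread_count.wait(lock, [] { return thread_count == 0; });
          std::printf("%s: all workers finished\n", name);
        }

        int main()
        {
          // Two shutdown-style waiters both wait on the same condition.
          std::thread w1(waiter, "waiter-1"), w2(waiter, "waiter-2");

          std::vector<std::thread> workers;
          for (int i= 0; i < 3; i++)
            workers.emplace_back(worker, i);

          for (auto &t : workers) t.join();
          w1.join(); w2.join();
          return 0;
        }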
  7. 09 Apr, 2014 1 commit
  8. 02 Apr, 2014 1 commit
  9. 01 Apr, 2014 1 commit
    • MDEV-5992: EITS: Selectivity of non-indexed condition is counted twice in table's fanout · 26a3d567
      Sergey Petrunya authored
      MDEV-5984: EITS: Incorrect filtered% value for single-table select with range access
      - Fix calculate_cond_selectivity_for_table() to work correctly with range accesses 
        over multi-component keys:
        = First, take the selectivity of all possible range scans into account. Remember
          which fields were used by the range scans.
        = Then, calculate the selectivity produced by sargable predicates on fields. If a
          field was used in a possible range access, assume its selectivity is already
          taken into account (see the sketch after this list).
      - Fix table_cond_selectivity(): when a quick select is used, the selectivity of
        COND(table) is already taken into account in matching_candidates_in_table(), so
        table_cond_selectivity() should not apply it a second time.
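      A rough sketch of the first fix (standalone and illustrative only; the
      inputs below are plain stand-ins for the statistics that
      calculate_cond_selectivity_for_table() works with):

        #include <set>
        #include <string>
        #include <vector>

        struct RangeScan
        {
          double selectivity;                    // rows(range) / rows(table)
          std::vector<std::string> fields;       // key parts the scan covers
        };

        struct SargablePredicate
        {
          std::string field;
          double selectivity;                    // from column statistics (EITS)
        };

        // Combine the selectivities, making sure a field's selectivity is not
        // counted both via a range scan and via its sargable predicate.
        double table_selectivity(const std::vector<RangeScan> &ranges,
                                 const std::vector<SargablePredicate> &preds)
        {
          double sel= 1.0;
          std::set<std::string> covered_fields;

          // First, account for all possible range scans and remember which
          // fields they used.
          for (const RangeScan &r : ranges)
          {
            sel*= r.selectivity;
            covered_fields.insert(r.fields.begin(), r.fields.end());
          }

          // Then apply sargable predicates only on fields that no range scan
          // already covered.
          for (const SargablePredicate &p : preds)
          {
            if (covered_fields.find(p.field) == covered_fields.end())
              sel*= p.selectivity;
          }
          return sel;
        }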
  10. 31 Mar, 2014 2 commits
  11. 29 Mar, 2014 6 commits
  12. 28 Mar, 2014 8 commits
  13. 27 Mar, 2014 12 commits