- 01 Oct, 2008 1 commit
Mattias Jonsson authored
- 30 Sep, 2008 1 commit
Davi Arnaut authored
Post-merge bug fix: lock_type is an enumeration type, not a bit mask.
sql/sql_cache.cc: Check the lock type explicitly. Also err on the safe side and invalidate the query cache for any write lock.
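A minimal, self-contained C++ sketch of the kind of check described above; the enum subset and the helper name are illustrative stand-ins, not the actual sql_cache.cc code:

    #include <iostream>

    // Illustrative subset of thr_lock_type (not the full server enum); the real
    // enum is ordered so that every write lock compares >= TL_WRITE_ALLOW_WRITE.
    enum thr_lock_type { TL_READ, TL_READ_NO_INSERT, TL_WRITE_ALLOW_WRITE, TL_WRITE };

    // Hypothetical helper: compare the lock type explicitly as an enumeration
    // value (not as a bit mask) and err on the safe side by invalidating the
    // query cache for any write lock.
    bool should_invalidate_query_cache(thr_lock_type lock_type)
    {
      return lock_type >= TL_WRITE_ALLOW_WRITE;
    }

    int main()
    {
      std::cout << should_invalidate_query_cache(TL_READ) << ' '    // 0
                << should_invalidate_query_cache(TL_WRITE) << '\n'; // 1
    }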
- 29 Sep, 2008 2 commits
Davi Arnaut authored
Davi Arnaut authored
The problem is that when statement-based replication is enabled, statements such as INSERT INTO .. SELECT FROM .. and CREATE TABLE .. SELECT FROM need to grab a read lock on the source table that does not permit concurrent inserts, which would in turn be denied if the source table is a log table, because log tables can't be locked exclusively.

The solution is to not take such a lock when the source table is a log table, as it is unsafe to replicate log tables under statement-based replication. Furthermore, the read lock that does not permit concurrent inserts is now only taken if statement-based replication is enabled and the source table is not a log table.

include/thr_lock.h: Introduce yet another lock type that may get upgraded depending on the binary log format. This is not an optimal solution but can be easily improved later.
mysql-test/r/log_tables.result: Add test case result for Bug#34306.
mysql-test/suite/binlog/r/binlog_stm_row.result: Add test case result for Bug#34306.
mysql-test/suite/binlog/t/binlog_stm_row.test: Add test case for Bug#34306.
mysql-test/t/log_tables.test: Add test case for Bug#34306.
sql/lock.cc: Assert that TL_READ_DEFAULT is not a real lock type.
sql/mysql_priv.h: Export new function.
sql/mysqld.cc: Remove using_update_log.
sql/sql_base.cc: Introduce a function that returns the appropriate read lock type depending on how the statement is going to be replicated. It will only take a TL_READ_NO_INSERT lock if the binary log is enabled, the binary log format is statement-based, and the table is not a log table.
sql/sql_parse.cc: Remove using_update_log.
sql/sql_update.cc: Use the new function to choose the read lock type.
sql/sql_yacc.yy: The lock type is now decided at open_tables time. The old behavior was actually misleading, as the binary log format can be switched dynamically, and the lock type would not change for statements that had already been parsed when the binary log format changed (i.e. prepared statements).
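A hedged sketch of the lock-type decision described above; the function signature and the boolean parameters are invented for illustration and do not reproduce the real sql_base.cc code:

    #include <iostream>

    // Illustrative lock types; in the server TL_READ_DEFAULT is the placeholder
    // that gets resolved to a concrete type at open_tables() time.
    enum thr_lock_type { TL_READ, TL_READ_NO_INSERT };

    // Hypothetical sketch of the decision: only take the read lock that blocks
    // concurrent inserts when the binary log is enabled, the format is
    // statement-based, and the source table is not a log table.
    thr_lock_type read_lock_type_for_source_table(bool binlog_enabled,
                                                  bool statement_based_format,
                                                  bool is_log_table)
    {
      if (binlog_enabled && statement_based_format && !is_log_table)
        return TL_READ_NO_INSERT;
      return TL_READ;
    }

    int main()
    {
      // A log table never gets TL_READ_NO_INSERT, even under statement-based format.
      std::cout << (read_lock_type_for_source_table(true, true, true) == TL_READ) << '\n';  // 1
    }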
- 26 Sep, 2008 1 commit
He Zhenxing authored
To improve performance when replicating to partitioned MyISAM tables with row-based format, the number of rows in the current rows log event is estimated and used to set up the storage engine for bulk inserts.
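A rough, hypothetical illustration of the estimate described above; the engine class, method names, and the estimation formula are assumptions, not the real replication or handler API:

    #include <cstddef>
    #include <iostream>

    // Hypothetical engine stand-in: a real engine would size caches and buffers
    // from the row-count estimate it is given.
    struct StorageEngineSketch {
      void start_bulk_insert(size_t estimated_rows)
      {
        std::cout << "bulk insert prepared for ~" << estimated_rows << " rows\n";
      }
    };

    // Guess the row count of a rows log event from the event data length and
    // the length of its first decoded row (illustrative formula).
    size_t estimate_row_count(size_t event_data_length, size_t first_row_length)
    {
      return first_row_length ? event_data_length / first_row_length : 0;
    }

    int main()
    {
      StorageEngineSketch engine;
      engine.start_bulk_insert(estimate_row_count(64 * 1024, 128));  // ~512 rows
    }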
- 20 Sep, 2008 1 commit
Mattias Jonsson authored
- 19 Sep, 2008 3 commits
Georgi Kodinov authored
Georgi Kodinov authored
Georgi Kodinov authored
- 18 Sep, 2008 4 commits
Mattias Jonsson authored
Bug#30573: Ordered range scan over partitioned tables returns some rows twice
and Bug#33555: Group By Query does not correctly aggregate partitions

Backport of bug#33257, which is the same bug. read_range_*() calls were not passed to the partition handlers, but were translated to index_read/next family calls, resulting in duplicate rows and wrong aggregations.

mysql-test/r/partition_range.result: Bug#30573: Ordered range scan over partitioned tables returns some rows twice. Updated result file.
mysql-test/t/partition_range.test: Bug#30573: Ordered range scan over partitioned tables returns some rows twice. Re-enabled the test.
sql/ha_partition.cc: Bug#30573: Ordered range scan over partitioned tables returns some rows twice. Backport of bug#33257, correct handling of read_range_* calls, without converting them to index_read/next calls.
sql/ha_partition.h: Bug#30573: Ordered range scan over partitioned tables returns some rows twice. Backport of bug#33257, correct handling of read_range_* calls, without converting them to index_read/next calls.
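A drastically simplified, hypothetical sketch of the idea behind the fix: the partition handler forwards read_range_first() to its underlying handlers instead of rewriting the call into index_read()/index_next(). The types and the flat forwarding loop are stand-ins; the real ha_partition code scans only the partitions in use and merges ordered results:

    #include <vector>

    // Illustrative stand-ins for the handler API (not the real signatures).
    struct key_range { int dummy; };

    struct handler {
      virtual int read_range_first(const key_range *start, const key_range *end,
                                   bool eq_range, bool sorted) = 0;
      virtual ~handler() {}
    };

    // Hypothetical partition handler: pass the range call through to every
    // underlying partition handler rather than translating it, which is what
    // produced duplicate rows in the ordered range scan.
    struct ha_partition_sketch : handler {
      std::vector<handler*> parts;
      int read_range_first(const key_range *start, const key_range *end,
                           bool eq_range, bool sorted) override
      {
        for (handler *part : parts)
          if (int err = part->read_range_first(start, end, eq_range, sorted))
            return err;
        return 0;
      }
    };

    int main() { return 0; }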
Georgi Kodinov authored
The fix for bug 31887 was incomplete: it assumed that all the field types covered by the IS_NUM macro are descendants of Field_num and tried to zero-fill the values before doing constant substitution with such fields when they are compared to constant string values. The only exception to this is Field_timestamp: it is covered by the IS_NUM macro, but is not a descendant of Field_num.

Fixed by excluding timestamp fields (Field_timestamp) from zero-filling when converting the constant to compare with into a string. Note that this does not exclude timestamp columns from const propagation.

mysql-test/r/compare.result: Bug #39353: test case
mysql-test/t/compare.test: Bug #39353: test case
sql/item.cc: Bug #39353: don't zero-fill timestamp fields when const-propagating to a string: they'll be converted to a string in a date/time format and not as an integer.
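A small, hypothetical sketch of the exclusion rule described above; the field-type tags and helper names are invented and do not mirror the real Field class hierarchy:

    #include <iostream>

    // Illustrative field-type tags; in the server IS_NUM() also covers
    // timestamps even though Field_timestamp is not a Field_num descendant.
    enum field_type { FIELD_INT, FIELD_DECIMAL, FIELD_TIMESTAMP, FIELD_VARCHAR };

    bool is_num(field_type t)
    {
      return t == FIELD_INT || t == FIELD_DECIMAL || t == FIELD_TIMESTAMP;
    }

    // Hypothetical rule: zero-fill a constant compared against a numeric
    // column, but skip timestamp columns, whose constants end up as date/time
    // strings rather than integers.
    bool should_zero_fill(field_type t)
    {
      return is_num(t) && t != FIELD_TIMESTAMP;
    }

    int main()
    {
      std::cout << should_zero_fill(FIELD_INT) << ' '         // 1
                << should_zero_fill(FIELD_TIMESTAMP) << '\n'; // 0
    }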
Gleb Shchepa authored
--ps-protocol problem has been fixed.
sql/item_func.cc: Added update of Item_func_set_user_var::entry->update_query_id for every PS execution.
Gleb Shchepa authored
columns data types

The "SELECT @lastId, @lastId := Id FROM t" query returns different result sets depending on the type of the Id column (INT or BIGINT). Note: this fix doesn't cover the case where a select query references a user variable and a stored function that updates the value of that variable; in that case the result is indeterminate.

The server used an incorrect assumption about the constantness of a user variable value used as a select list item: the server caches the last query number in which that variable was changed and compares this number with the current query number. If these numbers differ, the server assumes that the variable is not updated in the current query, so the respective select list item is a constant. However, in some common cases the server updates the cached query number too late.

The server has been modified to memorize user variable assignments during the parse phase and take them into account in the next (query preparation) phase, independently of the order of user variable references/assignments in the select item list.

mysql-test/r/user_var.result: Added test case for bug #26020.
mysql-test/t/user_var.test: Added test case for bug #26020.
sql/item_func.cc: The update of the entry and update_query_id variables has been moved from Item_func_set_user_var::fix_fields() to a separate method, Item_func_set_user_var::set_entry().
sql/item_func.h: 1. The Item_func_set_user_var::set_entry() method has been added to update Item_func_set_user_var::entry. 2. The Item_func_set_user_var::entry_thd field has been added to update Item_func_set_user_var::entry only when needed.
sql/sql_base.cc: Fix: setup_fields() calls Item_func_set_user_var::set_entry() for all items from thd->lex->set_var_list before the first call of ::fix_fields().
sql/sql_lex.cc: The lex_start function has been modified to reset the st_lex::set_var_list list.
sql/sql_lex.h: The new st_lex::set_var_list field has been added to memorize all user variable assignments in the current select query.
sql/sql_yacc.yy: The variable_aux rule has been modified to memorize in-query user variable assignments in the st_lex::set_var_list list.
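A heavily simplified, hypothetical model of the approach above; every name is a stand-in for the corresponding server structure, and the real binding happens inside setup_fields():

    #include <map>
    #include <string>
    #include <vector>
    #include <iostream>

    // The parser records every "@v := expr" item in a per-statement list, and
    // query preparation binds each of them to its variable entry before the
    // select list is fixed, independently of the order of references.
    struct user_var_entry { long long value; };

    struct Item_func_set_user_var_sketch {
      std::string name;
      user_var_entry *entry;
      void set_entry(std::map<std::string, user_var_entry> &vars) { entry = &vars[name]; }
    };

    struct LEX_sketch {                                          // stands in for st_lex
      std::vector<Item_func_set_user_var_sketch*> set_var_list;  // filled at parse time
    };

    // Done before the first fix_fields() call during query preparation.
    void bind_set_var_entries(LEX_sketch &lex, std::map<std::string, user_var_entry> &vars)
    {
      for (Item_func_set_user_var_sketch *item : lex.set_var_list)
        item->set_entry(vars);
    }

    int main()
    {
      std::map<std::string, user_var_entry> session_vars;
      Item_func_set_user_var_sketch assign;
      assign.name = "lastId";
      assign.entry = 0;

      LEX_sketch lex;
      lex.set_var_list.push_back(&assign);          // recorded while parsing
      bind_set_var_entries(lex, session_vars);      // before fix_fields()
      std::cout << (assign.entry != 0) << '\n';     // 1: entry known up front
    }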
- 16 Sep, 2008 2 commits
Tatiana A. Nurnberg authored
Tatiana A. Nurnberg authored
- 11 Sep, 2008 3 commits
Tatiana A. Nurnberg authored
If [NOT] PRESERVE was not given, the parser always defaulted to NOT PRESERVE, making it impossible for the "not given = no change" rule to work in ALTER EVENT. Leaving out the PRESERVE-clause now defaults to NOT PRESERVE on CREATE, and to "no change" in ALTER.

mysql-test/r/events_2.result: Show that giving no PRESERVE-clause to ALTER EVENT results in no change. Show that giving no PRESERVE-clause to CREATE EVENT defaults to NOT PRESERVE as per the docs. Show specifically that this is also handled correctly when trying to ALTER EVENTs into the past.
mysql-test/t/events_2.test: Show that giving no PRESERVE-clause to ALTER EVENT results in no change. Show that giving no PRESERVE-clause to CREATE EVENT defaults to NOT PRESERVE as per the docs. Show specifically that this is also handled correctly when trying to ALTER EVENTs into the past.
sql/event_db_repository.cc: If ALTER EVENT was given no PRESERVE-clause (meaning "no change"), we don't know the previous PRESERVE-setting by the time we check the parse-data. If ALTER EVENT was given dates that are in the past, we don't know how to react, lacking the PRESERVE-setting. Heal this by running the check later, when we have actually read the previous EVENT-data.
sql/event_parse_data.cc: Change the default for ON COMPLETION to indicate "not specified". Also defer throwing errors when ALTER EVENT is given dates in the past but no PRESERVE-clause until we know the previous PRESERVE-value.
sql/event_parse_data.h: Add a third state for ON COMPLETION [NOT] PRESERVE (preserve, don't, not specified). Make check_dates() public so we can defer this check until deeper in the call stack, where we have all the required data.
sql/sql_yacc.yy: If CREATE EVENT is not given ON COMPLETION [NOT] PRESERVE, we default to NOT, as per the docs.
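A hypothetical sketch of the tri-state default handling described above; the enum constants and helper functions are invented for illustration:

    #include <iostream>

    // Tri-state for ON COMPLETION [NOT] PRESERVE, mirroring the
    // "preserve / don't preserve / not specified" states (names invented).
    enum on_completion { ON_COMPLETION_DROP, ON_COMPLETION_PRESERVE, ON_COMPLETION_DEFAULT };

    // CREATE EVENT: an omitted clause defaults to NOT PRESERVE, as per the docs.
    on_completion resolve_for_create(on_completion parsed)
    {
      return parsed == ON_COMPLETION_DEFAULT ? ON_COMPLETION_DROP : parsed;
    }

    // ALTER EVENT: an omitted clause means "no change", so keep the stored value.
    on_completion resolve_for_alter(on_completion parsed, on_completion stored)
    {
      return parsed == ON_COMPLETION_DEFAULT ? stored : parsed;
    }

    int main()
    {
      std::cout << (resolve_for_create(ON_COMPLETION_DEFAULT) == ON_COMPLETION_DROP) << ' '  // 1
                << (resolve_for_alter(ON_COMPLETION_DEFAULT, ON_COMPLETION_PRESERVE)
                    == ON_COMPLETION_PRESERVE) << '\n';                                      // 1
    }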
Tatiana A. Nurnberg authored
mysqldump creates stand-in tables before dumping the actual view. Those tables were of the default type; if the view had more columns than that type supports (a pathological case, arguably), loading the dump would fail. We now make the temporary stand-ins MyISAM tables to prevent this.

client/mysqldump.c: When creating a stand-in table, specify its type to avoid defaulting to a type with a column-number limit (like Inno). The type is always MyISAM, as we know that to be available.
mysql-test/r/mysqldump-max.result: Add test results for 31434.
mysql-test/r/mysqldump.result: mysqldump now sets the engine type (MyISAM) for stand-in tables for views. Update test results.
mysql-test/t/mysqldump-max.test: Show that mysqldump's stand-in tables for views explicitly set the engine type to MyISAM to avoid falling back on an engine that might support fewer columns than the final view requires (here's lookin' at you, inno). Also show that this actually has the desired effect by dumping and reloading a view that has more columns than inno supports.
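A hypothetical sketch of the generated DDL change; the function and the exact column definition are illustrative, the point being the explicit ENGINE=MyISAM clause on the stand-in table:

    #include <iostream>
    #include <string>
    #include <vector>

    // Build the temporary stand-in table for a view with an explicit
    // ENGINE=MyISAM clause, so it never falls back to an engine with a lower
    // column limit than the view needs (SQL text is illustrative).
    std::string stand_in_table_ddl(const std::string &view,
                                   const std::vector<std::string> &columns)
    {
      std::string ddl = "CREATE TABLE `" + view + "` (";
      for (size_t i = 0; i < columns.size(); ++i)
        ddl += (i ? ", `" : "`") + columns[i] + "` tinyint NOT NULL";
      ddl += ") ENGINE=MyISAM";
      return ddl;
    }

    int main()
    {
      std::cout << stand_in_table_ddl("v1", {"a", "b"}) << '\n';
    }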
Tatiana A. Nurnberg authored
mysqldump creates stand-in tables before dumping the actual view. Those tables were of the default type; if the view had more columns than that type supports (a pathological case, arguably), loading the dump would fail. We now make the temporary stand-ins MyISAM tables to prevent this.

client/mysqldump.c: When creating a stand-in table, specify its type to avoid defaulting to a type with a column-number limit (like Inno). The type is always MyISAM, as we know that to be available.
mysql-test/r/mysqldump.result: mysqldump now sets the engine type (MyISAM) for stand-in tables for views. Update test results.
- 10 Sep, 2008 4 commits
Georgi Kodinov authored
Georgi Kodinov authored
Georgi Kodinov authored
Evgeny Potemkin authored
- 09 Sep, 2008 7 commits
Ramil Kalimullin authored
Ramil Kalimullin authored
Martin Hansson authored
Ramil Kalimullin authored
Problem: <=> operator may return wrong results comparing NULL and a DATE/DATETIME/TIME value.
Fix: properly check NULLs.

mysql-test/r/type_datetime.result: Fix for bug#37526: asymertic operator <=> in trigger - test result.
mysql-test/t/type_datetime.test: Fix for bug#37526: asymertic operator <=> in trigger - test case.
sql/item_cmpfunc.cc: Fix for bug#37526: asymertic operator <=> in trigger - if is_nulls_eq is TRUE, Arg_comparator::compare_datetime() should return 1 only if both arguments are NULL.
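A hypothetical sketch of the corrected NULL handling for the NULL-safe operator; the signature and the -1 convention for "unknown" are illustrative simplifications of Arg_comparator::compare_datetime():

    #include <iostream>

    // For <=> (is_nulls_eq) two NULLs compare equal and a single NULL makes
    // the comparison false; for a plain comparison any NULL yields "unknown",
    // modelled here as -1.
    int compare_datetime_sketch(bool is_nulls_eq, bool a_is_null, bool b_is_null,
                                long long a, long long b)
    {
      if (a_is_null || b_is_null)
      {
        if (is_nulls_eq)
          return (a_is_null && b_is_null) ? 1 : 0;  // equal only if both are NULL
        return -1;                                  // unknown
      }
      return (a == b) ? 1 : 0;
    }

    int main()
    {
      // NULL <=> '2008-09-09' must not be true.
      std::cout << compare_datetime_sketch(true, true, false, 0, 20080909LL) << '\n';  // 0
    }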
Mats Kindahl authored
Martin Hansson authored
statement/stored procedure

View privileges are properly checked after the fix for bug no. 36086, so the method TABLE_LIST::get_db_name() must be used instead of the field TABLE_LIST::db, as the latter only works for tables. The bug appears when accessing views in prepared statements.

mysql-test/r/view_grant.result: Bug#35600: Extended existing test result.
mysql-test/t/view_grant.test: Bug#35600: Extended existing test case.
sql/sql_parse.cc: Bug#35600: Using the method to retrieve the database name instead of the field.
Mats Kindahl authored
SUPER is not required to change binlog format for session

A user without SUPER privileges can change the value of the session variable BINLOG_FORMAT, causing problems for a DBA. This changeset requires a user to have SUPER privileges to change the value of the session variable BINLOG_FORMAT, and not only the global variable BINLOG_FORMAT.

mysql-test/suite/binlog/t/binlog_grant.test: Adding a test for the grants needed for SQL_LOG_BIN and BINLOG_FORMAT.
sql/set_var.cc: Adding code to check that the user has the SUPER privilege needed to change the value of BINLOG_FORMAT.
sql/set_var.h: Adding function sys_var_thd_binlog_format::check().
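A minimal, hypothetical sketch of the added privilege check; the names are illustrative and the real check lives in sys_var_thd_binlog_format::check():

    #include <iostream>

    // Changing BINLOG_FORMAT now requires the SUPER privilege for the session
    // scope as well as for the global scope.
    enum var_scope { OPT_GLOBAL, OPT_SESSION };

    bool binlog_format_change_allowed(bool has_super_priv, var_scope scope)
    {
      (void) scope;              // SUPER is required for either scope now
      return has_super_priv;     // false would raise an access-denied error
    }

    int main()
    {
      std::cout << binlog_format_change_allowed(false, OPT_SESSION) << '\n';  // 0: denied
    }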
- 08 Sep, 2008 5 commits
Mattias Jonsson authored
Problem was a mutex added in bug#27405 for solving a problem with auto_increment in partitioned InnoDB tables (in ha_partition::write_row over partitions file->ha_write_row). Solution is to use the patch for bug#33479, which refines the usage of mutexes for auto_increment.

Backport of bug#33479 from 6.0:

Bug#33479: auto_increment failures in partitioning. Several problems with auto_increment in partitioning (with MyISAM and InnoDB: locking issues, not handling multi-row INSERTs properly, etc.).

Changed the auto_increment handling for partitioning: added an ha_data variable in table_share for storage engine specific data, such as auto_increment value handling in partitioning (also see WL 4305), and used the ha_data->mutex to lock around read + update.

The idea is this: store the table's reserved auto_increment value in the TABLE_SHARE and use a mutex to lock it, read and update it, and unlock it in one block. All partitions are accessed only when the value is not yet initialized. Also allow reservations of ranges, and if no one has made a reservation afterwards, lower the reservation to what was actually used after the statement is done (via release_auto_increment from WL 3146). The lock is kept from the first reservation if it is statement-based replication and a multi-row INSERT statement where the number of candidate rows to insert is not known in advance (like INSERT SELECT or LOAD DATA, unlike INSERT VALUES (row1), (row2), ..., (rowN)). This should also lead to better concurrency (no need to have a mutex protection around write_row in all cases) and work with any local storage engine.

mysql-test/suite/parts/inc/partition_auto_increment.inc: Bug#38804: Query deadlock causes all tables to be inaccessible. Backporting from 6.0 of Bug#33479: auto_increment failures in partitioning. Test source file for testing auto_increment.
mysql-test/suite/parts/r/partition_auto_increment_archive.result: Bug#38804: Query deadlock causes all tables to be inaccessible. Backporting from 6.0 of Bug#33479: auto_increment failures in partitioning. Result file for testing auto_increment.
mysql-test/suite/parts/r/partition_auto_increment_blackhole.result: Bug#38804: Query deadlock causes all tables to be inaccessible. Backporting from 6.0 of Bug#33479: auto_increment failures in partitioning. Result file for testing auto_increment.
mysql-test/suite/parts/r/partition_auto_increment_innodb.result: Bug#38804: Query deadlock causes all tables to be inaccessible. Backporting from 6.0 of Bug#33479: auto_increment failures in partitioning. Result file for testing auto_increment.
mysql-test/suite/parts/r/partition_auto_increment_memory.result: Bug#38804: Query deadlock causes all tables to be inaccessible. Backporting from 6.0 of Bug#33479: auto_increment failures in partitioning. Result file for testing auto_increment.
mysql-test/suite/parts/r/partition_auto_increment_myisam.result: Bug#38804: Query deadlock causes all tables to be inaccessible. Backporting from 6.0 of Bug#33479: auto_increment failures in partitioning. Result file for testing auto_increment.
mysql-test/suite/parts/r/partition_auto_increment_ndb.result: Bug#38804: Query deadlock causes all tables to be inaccessible. Backporting from 6.0 of Bug#33479: auto_increment failures in partitioning. Result file for testing auto_increment.
mysql-test/suite/parts/t/partition_auto_increment_archive.test: Bug#38804: Query deadlock causes all tables to be inaccessible. Backporting from 6.0 of Bug#33479: auto_increment failures in partitioning. Test file for testing auto_increment.
mysql-test/suite/parts/t/partition_auto_increment_blackhole.test: Bug#38804: Query deadlock causes all tables to be inaccessible. Backporting from 6.0 of Bug#33479: auto_increment failures in partitioning. Test file for testing auto_increment.
mysql-test/suite/parts/t/partition_auto_increment_innodb.test: Bug#38804: Query deadlock causes all tables to be inaccessible. Backporting from 6.0 of Bug#33479: auto_increment failures in partitioning. Test file for testing auto_increment.
mysql-test/suite/parts/t/partition_auto_increment_memory.test: Bug#38804: Query deadlock causes all tables to be inaccessible. Backporting from 6.0 of Bug#33479: auto_increment failures in partitioning. Test file for testing auto_increment.
mysql-test/suite/parts/t/partition_auto_increment_myisam.test: Bug#38804: Query deadlock causes all tables to be inaccessible. Backporting from 6.0 of Bug#33479: auto_increment failures in partitioning. Test file for testing auto_increment.
mysql-test/suite/parts/t/partition_auto_increment_ndb.test: Bug#38804: Query deadlock causes all tables to be inaccessible. Backporting from 6.0 of Bug#33479: auto_increment failures in partitioning. Test file for testing auto_increment.
sql/ha_partition.cc: Bug#38804: Query deadlock causes all tables to be inaccessible. Backporting from 6.0 of Bug#33479: Failures using auto_increment and partitioning. Changed ha_partition::get_auto_increment from file->get_auto_increment to file->info(HA_AUTO_STATUS), since it works better with InnoDB (InnoDB can have issues with partitioning and auto_increment, where get_auto_increment sometimes can return a non-updated value). Using the new table_share->ha_data for keeping the auto_increment value, shared by all instances of the same table; it is read and updated while holding an auto_increment-specific mutex. Also added release_auto_increment to decrease gaps if possible, and a lock for multi-row INSERT statements where the number of candidate rows to insert is not known in advance (like INSERT SELECT or LOAD DATA, unlike INSERT INTO (row1), (row2), ..., (rowN)). Fixed a small bug: copied++ to (*copied)++, and the same for deleted. Changed from current_thd to ha_thd().
sql/ha_partition.h: Bug#38804: Query deadlock causes all tables to be inaccessible. Backporting from 6.0 of Bug#33479: Failures using auto_increment and partitioning. Added a new struct HA_DATA_PARTITION to be used in table_share->ha_data. Added a private function to set auto_increment values if needed. Removed restore_auto_increment (the handler version is better). Added lock/unlock functions for auto_increment handling. Changed copied/deleted to const.
sql/handler.h: Bug#38804: Query deadlock causes all tables to be inaccessible. Backporting from 6.0 of Bug#33479: auto_increment failures in partitioning. Added const for changed_partitions. Added comments about SQLCOM_TRUNCATE for delete_all_rows.
sql/table.h: Bug#38804: Query deadlock causes all tables to be inaccessible. Backporting from 6.0 of Bug#33479: Failures using auto_increment and partitioning. Added a variable in table_share, ha_data, for storage of storage-engine-specific data (such as auto_increment handling in partitioning).
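A heavily simplified, hypothetical sketch of the reservation scheme described above; the struct, the field names, and the range arithmetic are stand-ins for the real TABLE_SHARE ha_data handling:

    #include <cstdint>
    #include <mutex>

    // The reserved auto_increment value lives in per-table-share data, and
    // every reservation is a read + update under one mutex, so all partitions
    // hand out non-overlapping ranges.
    struct HA_DATA_PARTITION_sketch {
      std::mutex lock;
      uint64_t next_auto_inc;
      bool initialized;
      HA_DATA_PARTITION_sketch() : next_auto_inc(1), initialized(false) {}
    };

    uint64_t reserve_auto_increment(HA_DATA_PARTITION_sketch &share_data, uint64_t rows)
    {
      std::lock_guard<std::mutex> guard(share_data.lock);
      if (!share_data.initialized)
      {
        // First use: the real code would probe every partition for its maximum.
        share_data.initialized = true;
      }
      uint64_t first = share_data.next_auto_inc;
      share_data.next_auto_inc += rows;     // reserve a whole range in one step
      return first;
    }

    int main()
    {
      HA_DATA_PARTITION_sketch share_data;
      reserve_auto_increment(share_data, 10);                    // rows 1..10
      return reserve_auto_increment(share_data, 1) == 11 ? 0 : 1;
    }

Reserving a whole range per statement keeps the mutex hold time short, and unused values can be handed back afterwards via release_auto_increment, as the commit message describes.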
Georgi Kodinov authored
Ramil Kalimullin authored
Martin Hansson authored
Ramil Kalimullin authored
- 05 Sep, 2008 6 commits
Georgi Kodinov authored
SET col

When reporting a duplicate key error, the server was making incorrect assumptions about the state of the value string included in the error message. Fixed by accessing the data in this string in a "safe" way (without relying on it having a terminating 0). A similar problem in reporting foreign key duplicate errors was detected by code analysis and fixed as well.

mysql-test/r/type_set.result: Bug #38701: test case
mysql-test/t/type_set.test: Bug #38701: test case
sql/handler.cc: Bug #38701: don't rely on the presence of a terminating 0 in the string.
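A small, hypothetical sketch of the safer formatting; the function and buffer sizes are invented, the point being the explicit (pointer, length) handling with no reliance on a terminating 0:

    #include <cstddef>
    #include <cstdio>
    #include <cstring>

    // Copy the key value with an explicit bound instead of assuming the
    // buffer ends with a terminating '\0'.
    void format_dup_key_value(char *out, size_t out_size,
                              const char *val, size_t val_len)
    {
      size_t len = (val_len < out_size - 1) ? val_len : out_size - 1;
      memcpy(out, val, len);     // never read past val_len, never expect a '\0'
      out[len] = '\0';
    }

    int main()
    {
      const char raw[3] = {'a', 'b', 'c'};   // no terminating 0, like a SET value buffer
      char msg[16];
      format_dup_key_value(msg, sizeof(msg), raw, sizeof(raw));
      printf("Duplicate entry '%s'\n", msg); // Duplicate entry 'abc'
    }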
Narayanan V authored
configure.in: change server version number to 5.1.29
Narayanan V authored
Added a rule that uses gcc to generate preprocessor output (gcc -E) that can be compared to an already generated output using the diff utility. icheck has been removed and replaced by gcc -E because icheck does not support C++.

Makefile.am: Added a rule for checking that the ABI/API has not changed. The rule in Makefile.am follows these steps:
1) Generate preprocessor output for the files that need to be tested for ABI/API changes. Use -nostdinc to prevent generation of preprocessor output for system headers; this results in messages on stderr saying that these headers were not found, so redirect stderr to /dev/null to suppress them.
2) sed the output to:
2.1) remove blank lines and lines that begin with "# ";
2.2) remove the OS-specific text that the preprocessor inserts on the Mac OS and Solaris SPARC platforms, which would otherwise show up as a difference between the .pp and .out files.
3) diff the generated file against the canons (the .pp files already in the repository).
4) Delete the generated .out file. If the diff fails, the generated file is not removed; this is useful for analysis of ABI differences (e.g. using a visual diff tool). An ABI change that causes a build to fail will always be accompanied by new canons (.out files), and the .out files that are not removed become the new .pp files. For example, if include/mysql/plugin.h has an ABI change, this rule leaves a <build directory>/abi_check.out file; a developer with a justified API change will then do a mv <build directory>/abi_check.out include/mysql/plugin.pp to replace the old canons with the new ones.
configure.in: 1) Removed the part of the file that was icheck related. 2) Added an entry for the configure variable DIFF. 3) Ensured that the abi_check rule is run only if gcc is available.
include/Makefile.am: Removed the icheck related entries.
include/mysql.h.pp: The preprocessor output canon file for include/mysql.h.
include/mysql/plugin.h.pp: The preprocessor output canon file for include/mysql/plugin.h.
include/mysql_h.ic: Removed the canon file related to icheck.
sql/mysql_priv.h.pp: The preprocessor output canon file for sql/mysql_priv.h.
Georgi Kodinov authored
Evgeny Potemkin authored
The check_table_access function initializes per-table grant info and performs the access rights check. It wasn't called for the SHOW STATUS statement, which left the grant info uninitialized. In some cases this led to a server crash; in other cases it allowed a user to check for the presence/absence of arbitrary values in any table. Now the check_table_access function is called prior to statement processing.

mysql-test/r/status.result: Added a test case for bug#37908.
mysql-test/t/status.test: Added a test case for bug#37908.
sql/sql_parse.cc: Bug#37908: Skipped access right check caused server crash. Now the check_table_access function is called when the SHOW STATUS statement uses any table except information.STATUS.
sql/sql_yacc.yy: Bug#37908: Skipped access right check caused server crash. For SHOW PROCEDURE/FUNCTION STATUS, the 'mysql.proc' table isn't added to the table list anymore, as there is no need.
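A minimal, hypothetical sketch of the ordering fix; the structures and function names are invented stand-ins for check_table_access() and the SHOW STATUS code path:

    #include <iostream>

    // Grant info for the referenced table is initialized and checked before
    // the statement is processed, so execution never sees uninitialized grants.
    struct TableRef { bool grant_initialized; bool access_allowed; };

    bool check_table_access_sketch(TableRef &table)
    {
      table.grant_initialized = true;          // fill in per-table grant info
      return table.access_allowed;             // and verify the access rights
    }

    bool execute_show_status(TableRef &table)
    {
      if (!check_table_access_sketch(table))   // called prior to statement processing
        return false;                          // access denied, nothing executed
      return table.grant_initialized;          // safe to rely on grant info here
    }

    int main()
    {
      TableRef t = { false, true };
      std::cout << execute_show_status(t) << '\n';  // 1
    }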
Ramil Kalimullin authored