- 08 Feb, 2011 3 commits
-
-
John H. Embretsen authored
The test failed on a certain Linux platform in the automated environment. It turns out that this platform has old versions of the Perl modules DBI and DBD::mysql installed, and the OS itself is relatively old. Allowing error code 11 to be returned from mysqlhotcopy on an expected error seems harmless and also makes the test pass with older libraries.
-
Anitha Gopi authored
Removed the collections for mysql-5.1-bugteam. Removed 1st from weekly; it is part of the default suites.
-
Anitha Gopi authored
Removed the collections for mysql-5.1-bugteam. Removed 1st from weekly; it is part of the default suites.
-
- 07 Feb, 2011 4 commits
-
-
Bjorn Munch authored
-
Bjorn Munch authored
Added --debug-server and use $opt_debug_server where appropriate. Let --debug imply --debug-server. When merging to 5.5, must adapt fix for 59148. Oops, set debug => debug-server too late; fixed.
-
Ole John Aske authored
Also fixes Bug#59110: memory leak of QUICK_SELECT_I allocated memory. Includes Jørgen Løland's review comments.

The root cause of these bugs is that test_if_skip_sort_order() decided to revert the 'skip_sort_order' decision (and use filesort) after the query plan had already been updated to reflect a 'skip' of the sort order. This might happen in 'check_reverse_order:' if we have a select->quick which could not be made descending by appending a QUICK_SELECT_DESC. The original 'save_quick' was then restored after the QEP had been modified, which caused:
- an incorrect 'precomputed_group_by= TRUE' may have been set, and not reverted, as part of the already modified QEP (Bug#59308);
- a 'select->quick' might have been created which we fail to delete (Bug#59110).

This fix is a refactoring of test_if_skip_sort_order() in which all logic related to modification of the QEP (controlled by the argument 'bool no_changes') is moved to the end of test_if_skip_sort_order() and performed only after *all* 'test_if_skip' checks have been done, including the 'check_reverse_order:' checks. The refactoring contains no intentional changes to the logic which has been moved to the end of the function.

Furthermore, a smaller part of the fix addresses the handling of the select->quick objects which may already exist when we call test_if_skip_sort_order() (save_quick) and of new select->quick's created during test_if_skip_sort_order():
- Before a new select->quick may be created by calling ::test_quick_select(), we set 'select->quick= 0' to prevent ::test_quick_select() from prematurely deleting the save_quick. (After this call we may have both a 'save_quick' and a 'select->quick'.)
- All returns from ::test_if_skip_sort_order() where we may have both a 'save_quick' and a 'select->quick' have been changed to goto's to the exit points 'skiped_sort_order:' or 'need_filesort:', where we decide which of the QUICK_SELECT's to keep and delete the other.
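The cleanup at the two exit points can be pictured with a small stand-alone sketch. The types, function, and label names below are invented for illustration and are not the server's classes; only the keep-one/delete-the-other pattern at the 'skip' and 'filesort' exit points mirrors the description above.

```cpp
// Minimal sketch (stand-in types, not the MySQL sources) of the exit-point
// pattern: keep both the previously saved quick select and any newly created
// one alive until a single exit point, then decide which one to keep and
// delete the other, instead of deleting/restoring at scattered returns.
struct QuickSelect {                  // stand-in for QUICK_SELECT_I
    virtual ~QuickSelect() {}
};

struct Select {                       // stand-in for SQL_SELECT
    QuickSelect *quick;
    Select() : quick(0) {}
};

// Returns true if the sort order can be skipped.
bool test_if_skip_sort_order_sketch(Select *select, bool can_skip)
{
    QuickSelect *save_quick = select->quick;   // may already exist
    select->quick = 0;                         // keep range analysis from deleting it

    // ... range analysis may create a new select->quick here ...
    select->quick = new QuickSelect();

    if (can_skip)
        goto skipped_sort_order;
    goto need_filesort;

skipped_sort_order:
    delete save_quick;                 // keep the new quick, drop the saved one
    return true;

need_filesort:
    delete select->quick;              // drop the new quick, restore the saved one
    select->quick = save_quick;
    return false;
}
```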
-
Vinay Fisrekar authored
Correcting the cleanup command at the start of the test.
-
- 05 Feb, 2011 1 commit
-
-
Dmitry Shulga authored
if the standard input is a directory.

The problem is that the mysql monitor tries to read from stdin without checking the input source type. The solution is to stop reading data from standard input if a call to read(2) fails. A new test case was added to mysql.test.

client/my_readline.h: Data members 'error' and 'truncated' were added to the LINE_BUFFER structure. These data members are used instead of out parameters in the functions batch_readline and intern_read_line.
client/mysql.cc: read_and_execute() was modified: set status.exit_status to 1 when an error occurs while reading the next command line in non-interactive mode. Also, the value of the 'truncated' attribute of the LINE_BUFFER structure is taken into account only in non-interactive mode.
client/readline.cc: intern_read_line() was modified: cancel reading from input if fill_buffer() returns -1, e.g. if the call to read failed. batch_readline was modified: set the 'error' data member of the LINE_BUFFER structure to the value of my_errno when a system error happens during a call to my_read/my_realloc.
mysql-test/t/mysql.test: Test for bug#57450 was added.
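A hedged sketch of the core idea, with names invented for the example (this is not the client sources): stop consuming standard input as soon as read(2) reports an error, such as EISDIR when the process was started with a directory as stdin, and exit with a non-zero status.

```cpp
#include <unistd.h>
#include <cerrno>
#include <cstdio>
#include <cstring>

// Returns the number of bytes read, 0 on EOF, or -1 on a read error.
static int fill_buffer_sketch(char *buf, size_t size)
{
    ssize_t n = read(STDIN_FILENO, buf, size);
    if (n < 0) {
        std::fprintf(stderr, "reading from stdin failed: %s\n",
                     std::strerror(errno));
        return -1;
    }
    return (int)n;
}

int main()
{
    char buf[4096];
    for (;;) {
        int n = fill_buffer_sketch(buf, sizeof(buf));
        if (n < 0)
            return 1;          // error => non-zero exit status, instead of looping forever
        if (n == 0)
            return 0;          // clean EOF
        // ... hand the buffered bytes to the statement parser here ...
    }
}
```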
-
- 04 Feb, 2011 4 commits
-
-
Luis Soares authored
-
Bjorn Munch authored
-
Bjorn Munch authored
Replace --debug with --loose-debug to prevent failure exit. Update: added workaround for 50627; skip all debugging of mysqlbinlog.
-
Dmitry Shulga authored
handling.

The problem was that parsing of nested regular expressions involved recursive calls. Such recursion didn't take into account the amount of available stack space, which ended up leading to stack-overflow crashes.

mysql-test/t/not_embedded_server.test: Added test for bug#58026.
regex/my_regex.h: Added a pointer to a function as the last argument of my_regex_init(); it is used to check for enough stack space.
regex/regcomp.c: p_ere() was modified: added a call to the function that checks for enough stack space. The function that checks available stack space is specified by the global variable my_regex_enough_mem_in_stack. This variable is set to NULL for embedded mysqld and to a pointer to the function check_enough_stack_size otherwise.
regex/reginit.c: my_regex_init was modified: pass a pointer to a function that checks for enough stack space. Reset this pointer to NULL in my_regex_end.
sql/mysqld.cc: Added the function check_enough_stack_size() to check for enough stack space. Passed this function as the second argument to my_regex_init. For embedded mysqld, NULL is passed as the second argument.
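A sketch of the mechanism under stated assumptions (the names and grammar below are invented for the example, not the regex library's API): a recursive-descent parser consults an optional "is there enough stack left?" callback before each recursion and fails the compile instead of overflowing the stack.

```cpp
typedef int (*enough_stack_fn)(void);

// NULL means "no check", which is what an embedded build would install.
static enough_stack_fn check_enough_stack = 0;

// Parses a trivial nested-parentheses grammar; returns 0 on success, -1 on error.
static int parse_group(const char **p)
{
    if (check_enough_stack && !check_enough_stack())
        return -1;                     // report an error instead of recursing further
    if (**p == '(') {
        ++*p;
        if (parse_group(p) < 0 || **p != ')')
            return -1;
        ++*p;
    } else if (**p) {
        ++*p;                          // consume one literal character
    }
    return 0;
}

int main()
{
    const char *expr = "((a))";
    return parse_group(&expr) == 0 ? 0 : 1;
}
```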
-
- 03 Feb, 2011 1 commit
-
-
Luis Soares authored
There is one part of the test case that needs to break and re-establish the circular topology. For this, the test stops the slave threads on a couple of servers and restarts them with START SLAVE. However, no check is done on the status of the IO or SQL threads before proceeding with the subsequent commands. Because rpl_only_running_threads is set to 1, this can lead to silently not syncing all slave threads as expected, ultimately resulting in unexpected results (and consequently a failing test run). We fix this by replacing the START SLAVE instructions with calls to --source include/start_slave.inc, which waits for the slave threads to be running (showing 'Yes' in the Slave_IO_Running/Slave_SQL_Running fields of SHOW SLAVE STATUS) before proceeding. Additionally, we change rpl_sync.inc to make the IO thread report that it is running when its running status is anything other than 'No'.
-
- 02 Feb, 2011 5 commits
-
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
Bug #55755: Date STD variable signedness breaks server on FreeBSD and OpenBSD
* Added a check to configure on the size of time_t.
* Created a macro to check for a valid time_t that is safe to use with datetime functions and store in TIMESTAMP columns.
* Used the macro consistently instead of the ad-hoc checks introduced by 52315.
* Fixed compilation warnings on platforms where the size of time_t is smaller than the size of a long (e.g. OpenBSD 4.8 64 amd64).

Bug #52315: utc_date() crashes when system time > year 2037
* Added a correct check for the timestamp range instead of just a variable size check to SET TIMESTAMP.
* Added overflow checking before converting to time_t.
* Used a correct localized error message in this case instead of the generic error.
* Added a test suite.
* Fixed the checks so that they check for unsigned time_t as well. Used the checks consistently across the source code.
* Fixed the original test case to expect the new error code.
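An illustrative sketch of such a validity check; the helper name and the exact bounds policy here are assumptions, not the server's actual macro. The point is to validate a value before it is converted to time_t or stored in a TIMESTAMP column, so that out-of-range values are rejected with an error instead of overflowing.

```cpp
#include <ctime>

static inline bool is_valid_for_timestamp(long long v)
{
    // Upper bound: 2038-01-19 03:14:07 UTC when time_t is 32 bits,
    // otherwise 9999-12-31 23:59:59 UTC as a generous cap.
    const long long max_ts = (sizeof(time_t) <= 4) ? 0x7FFFFFFFLL
                                                   : 253402300799LL;
    return v >= 0 && v <= max_ts;
}
```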
-
Dmitry Lenev authored
primary_key_no == 0". Attempt to create InnoDB table with non-nullable column of geometry type having an unique key with length 12 on it and with some other candidate key led to server crash due to assertion failure in both non-debug and debug builds. The problem was that such a non-candidate key could have been sorted as the first key in table/.FRM, before any legit candidate keys. This resulted in assertion failure in InnoDB engine which assumes that primary key should either be the first key in table/.FRM or should not exist at all. The reason behind such an incorrect sorting was an wrong value of Create_field::key_length member for geometry field (which was set to its pack_length == 12) which confused code in mysql_prepare_create_table(), so it would skip marking such key as a key with partial segments. This patch fixes the problem by ensuring that this member gets the same value of Create_field::key_length member as for other blob fields (from which geometry field class is inherited), and as result unique keys on geometry fields are correctly marked as having partial segments. mysql-test/include/gis_keys.inc: Added test case for bug #58650 "Failing assertion: primary_key_no == -1 || primary_key_no == 0". mysql-test/r/gis.result: Added test case for bug #58650 "Failing assertion: primary_key_no == -1 || primary_key_no == 0". mysql-test/suite/innodb/r/innodb_gis.result: Added test case for bug #58650 "Failing assertion: primary_key_no == -1 || primary_key_no == 0". mysql-test/suite/innodb_plugin/r/innodb_gis.result: Added test case for bug #58650 "Failing assertion: primary_key_no == -1 || primary_key_no == 0". sql/field.cc: Changed Create_field::create_length_to_internal_length() to correctly set Create_field::key_length member for geometry fields. Similar to the blob types key_length for such fields should be the same as length and not field's packed length (which is always 12 for geometry). As result of this change code handling table creation now always correctly identifies btree/unique keys on geometry fields as partial keys, so such keys can't be erroneously treated as candidate keys and sorted in keys array in .FRM before legit candidate keys. This fixes bug #58650 "Failing assertion: primary_key_no == -1 || primary_key_no == 0" in which incorrect candidate key sorting led to assertion failure in InnoDB code.
-
- 01 Feb, 2011 1 commit
-
-
Ole John Aske authored
The root cause of this bug is that the optimizer tries to detect and optimize the special case '<field> BETWEEN c1 AND c1' and handle it as the condition '<field> = c1'. This was implemented inside add_key_field(.. *field, *value[]...), which assumed 'field' to refer to a key Field and value[] to refer to a [low...high] constant pair; value[0] and value[1] were then compared for equality.

In a 'normal' BETWEEN condition of the form '<field> BETWEEN val1 AND val2', the BETWEEN operation is represented with an argument list containing the values [<field>, val1, val2]; add_key_field() is then called with parameters field=<field>, *value=val1.

However, if the BETWEEN predicate specified:

1) '<const1> BETWEEN <const2> AND <field>'

the 'field' and 'value' arguments to add_key_field() had to be swapped. This was implemented by trying to cheat add_key_field() into handling it like:

2) '<const1> GE <const2> AND <const1> LE <field>'

As we didn't really replace the BETWEEN operation with 'ge' and 'le', add_key_field() still handled it as a 'BETWEEN' and compared the (swapped) arguments <const1> and <const2> for equality. If they were equal, condition 1) was incorrectly 'optimized' to:

3) '<field> EQ <const1>'

This fix moves the optimization of '<field> BETWEEN c1 AND c1' into add_key_fields(), which then calls add_key_equal_fields() to collect key equality / comparison for the key fields in the BETWEEN condition.
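A hedged sketch of the rewrite that the fix moves into add_key_fields(); the types and function below are invented for illustration and are not the optimizer's Item classes. Only when the BETWEEN arguments are [field, const, const] and the two constant bounds are identical may the predicate be treated as an equality on the field.

```cpp
#include <string>

struct ItemSketch {
    bool is_field;
    std::string text;     // field name or constant literal
};

// args[0] BETWEEN args[1] AND args[2]
static bool between_collapses_to_equality(const ItemSketch args[3])
{
    return args[0].is_field &&
           !args[1].is_field && !args[2].is_field &&
           args[1].text == args[2].text;   // both bounds are the same constant
}
```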
-
- 31 Jan, 2011 4 commits
-
-
Alfranio Correia authored
-
Alfranio Correia authored
-
Alfranio Correia authored
-
Sandeep Doddaballapur authored
-
- 30 Jan, 2011 1 commit
-
-
Vasil Dimov authored
-
- 29 Jan, 2011 2 commits
-
-
Bjorn Munch authored
-
John H. Embretsen authored
Third updated patch - this version also includes a copyright notice in the added Perl script.

This patch implements a check for such modules at runtime. If the modules are not found or cannot be loaded, the test is skipped with the following message:

[ skipped ] Test needs Perl modules DBI and DBD::mysql

Checks are done via a helper Perl script which looks for the modules in a runtime environment that is as similar to that of the mysqlhotcopy script as possible (thus not intended for Windows environments at this time). The helper script tells mysql-test about the result by writing information to a temporary file that is later read by mysql-test. See the comments in the added files (have_dbi_dbd-mysql.inc and checkDBI_DBD-mysql.pl) for details.

The patch also removes the mysqlhotcopy tests from the list of disabled tests.
-
- 28 Jan, 2011 4 commits
-
-
Mattias Jonsson authored
-
Alfranio Correia authored
In SBR, if a statement does not fail, it is always written to the binary log, regardless of whether rows are changed or not. If there is a failure, a statement is only written to the binary log if a non-transactional (e.g. MyISAM) engine is updated. INSERT ON DUPLICATE KEY UPDATE and INSERT IGNORE were not following the rule above and were not written to the binary log if the engine was InnoDB.

mysql-test/extra/rpl_tests/rpl_insert_duplicate.test: Added test case.
mysql-test/extra/rpl_tests/rpl_insert_ignore.test: Updated test case.
mysql-test/include/commit.inc: Updated test case, as the calls to the binary log have changed for INSERT ON DUPLICATE and INSERT IGNORE.
mysql-test/r/commit_1innodb.result: Updated result file.
mysql-test/suite/rpl/r/rpl_insert_duplicate.result: Added test case.
mysql-test/suite/rpl/r/rpl_insert_ignore.result: Updated result file.
mysql-test/suite/rpl/t/rpl_insert_duplicate.test: Added test case.
mysql-test/suite/rpl/t/rpl_insert_ignore.test: Improved test case.
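The rule above can be condensed into a tiny sketch (a hedged illustration, not the server's actual decision function): a statement goes to the binary log when it succeeded, or when it failed but already changed a non-transactional table.

```cpp
// Hedged sketch of the SBR logging rule described above: log on success, and
// on failure only if a non-transactional engine (e.g. MyISAM) was modified,
// since such changes cannot be rolled back on the master.
static bool write_stmt_to_binlog(bool stmt_failed, bool modified_non_transactional)
{
    if (!stmt_failed)
        return true;                       // successful statements are always logged
    return modified_non_transactional;     // failed ones only if they left visible changes
}
```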
-
Jimmy Yang authored
for external_size. rb://581, approved by Marko.
-
Alfranio Correia authored
There are two calls to read_log_event() on the master in mysql_binlog_send(). Each call reads 19 bytes in this test case, and the error of the second read_log_event() is reported to the slave. The second read_log_event() starts from position 94 (75 + 19) to 113 (75 + 19 + 19).

Usually, there are two events in the binary log:
. 0 - 3     - Header
. 4 - 105   - Format Descriptor Event
. 106 - 304 - Query Event

and both reads fail because the operations are reading from invalid positions, as expected.

However, mysql_binlog_send() does not use the same IO_CACHE that is used to write into the binary log (i.e. mysql_bin_log.log_file) for the hot binary log. It opens the binary log file directly by calling open_binlog() and creates a separate IO_CACHE. So there is a possibility that after a master has flushed the binary log file, the content has been cached by the filesystem and the disk file has not been updated. If this happens, then a slave will only see part of the file, and thus the second read_log_event() will report an event-truncated error.

To fix the problem, if the first read_log_event() has failed, we ensure that the second one will try to read from the same position.
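A sketch of the idea behind the fix; the names are assumed and this is not mysql_binlog_send() itself. The read position is advanced only after a successful read, so a failed first read leaves the second attempt at the same offset instead of an already-advanced, bogus one.

```cpp
#include <cstdio>

// Reads one 19-byte common event header at *pos; on success advances *pos by
// the event length stored in bytes 9..12 of the header (little endian).
static bool read_event_header(std::FILE *binlog, long *pos)
{
    unsigned char header[19];
    if (std::fseek(binlog, *pos, SEEK_SET) != 0)
        return false;
    if (std::fread(header, 1, sizeof(header), binlog) != sizeof(header))
        return false;                  // short read: file not fully flushed to disk yet
    unsigned long event_len = (unsigned long)header[9]
                            | ((unsigned long)header[10] << 8)
                            | ((unsigned long)header[11] << 16)
                            | ((unsigned long)header[12] << 24);
    *pos += (long)event_len;           // advance only on success
    return true;
}
```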
-
- 27 Jan, 2011 7 commits
-
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
John H. Embretsen authored
-
Marko Mäkelä authored
trx_get_trx_by_xid(): Invalidate trx->xid after a successful lookup, so that subsequent callers will not find the same transaction. The only callers of trx_get_trx_by_xid() will be invoking innobase_commit_low() or innobase_rollback_trx(), and those code paths should not depend on trx->xid. rb://584 approved by Jimmy Yang
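A minimal sketch of the idea; the structures below are stand-ins, not InnoDB's trx_t/trx_sys lists. Once a prepared transaction has been handed out by an XID lookup, its stored XID is cleared so a second lookup with the same XID cannot return the same transaction again.

```cpp
#include <cstring>
#include <list>

struct XidSketch { unsigned len; unsigned char data[128]; };

struct TrxSketch {
    bool prepared;
    XidSketch xid;
};

static std::list<TrxSketch> trx_list;

static TrxSketch *get_trx_by_xid_sketch(const XidSketch *xid)
{
    for (std::list<TrxSketch>::iterator it = trx_list.begin();
         it != trx_list.end(); ++it) {
        if (it->prepared && it->xid.len == xid->len &&
            std::memcmp(it->xid.data, xid->data, xid->len) == 0) {
            it->xid.len = 0;           // invalidate: later lookups will not match
            return &*it;
        }
    }
    return 0;                          // not found (or already handed out)
}
```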
-
Horst.Hunger authored
-
Horst.Hunger authored
-
Sandeep Doddaballapur authored
No commit message
-
- 26 Jan, 2011 3 commits
-
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
Ramil Kalimullin authored
-