- 10 Mar, 2008 1 commit
-
tnurnberg@white.intern.koehntopp.de authored
into mysql.com:/misc/mysql/29645/50-29645
-
- 08 Mar, 2008 1 commit
-
kaa@kaamos.(none) authored
into kaamos.(none):/data/src/opt/mysql-5.0-opt
-
- 07 Mar, 2008 2 commits
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B34909-5.0-opt
-
gkodinov/kgeorge@magare.gmz authored
--master-data. No error code was returned by mysqldump when it detected that binary logging was not enabled on the server. Fixed by returning an error code.
-
- 06 Mar, 2008 1 commit
-
sergefp@pslp.mylan authored
into mysql.com:/home/psergey/mysql-5.0-bug34945
-
- 05 Mar, 2008 1 commit
-
kaa@kaamos.(none) authored
sporadically. Under some circumstances, mysql_insert_id() could return a wrong value after INSERT ... SELECT. This could happen when the last INSERT ... SELECT did not involve an AUTO_INCREMENT column, but the value of mysql_insert_id() had been changed by some previous statements. Fixed by checking the value of thd->insert_id_used in select_insert::send_eof() and returning 0 for mysql_insert_id() if it is not set.
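A minimal client-side sketch of the behavior this fix guarantees; the table names (t1, t2) and the payload column are hypothetical:

    #include <mysql.h>
    #include <stdio.h>

    // Sketch: after an INSERT ... SELECT that involves no AUTO_INCREMENT
    // column, mysql_insert_id() should now report 0 instead of a stale
    // value left over from an earlier statement.
    void check_insert_id(MYSQL *conn)
    {
        // t1 has an AUTO_INCREMENT key: this sets a non-zero insert id.
        mysql_query(conn, "INSERT INTO t1 (payload) VALUES ('a')");
        printf("after autoinc insert: %llu\n",
               (unsigned long long) mysql_insert_id(conn));

        // No AUTO_INCREMENT column involved: with the fix this prints 0
        // rather than the id from the previous statement.
        mysql_query(conn, "INSERT INTO t2 (payload) SELECT payload FROM t1");
        printf("after INSERT ... SELECT: %llu\n",
               (unsigned long long) mysql_insert_id(conn));
    }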
-
- 03 Mar, 2008 4 commits
-
sergefp@mysql.com authored
- Apply Eric Bergen's patch: in join_read_always_key(), move the ha_index_init() call to before the late NULLs filtering code.
- Backport function comments from 6.0.
-
kaa@kaamos.(none) authored
into kaamos.(none):/data/src/opt/mysql-5.0-opt
-
kaa@kaamos.(none) authored
with errno 17. my_create() did not perform any checks for the case when a file is successfully created by a call to open(), but the subsequent call to my_register_filename() fails because the number of open files has exceeded the my_open_files limit. This can happen on platforms which do not have getrlimit(), and hence we do not know the real limit for open files. In such a case an error was returned to the caller although the file had actually been created. Since callers assume that my_create() returns an error only when it failed to create a file, they did not perform any cleanup, leaving an 'orphaned' file on the file system. Fixed by adding a check for the above case to my_create() and ensuring that the newly created file is deleted before returning an error. Creating a deterministic test case in the test suite is impossible, because the exact steps required to reproduce the above situation depend on the platform and/or environment (OS per-user limits, queries executed by previous tests, startup parameters). The patch was manually tested on Windows using examples posted in the bug report.
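A hedged sketch of the cleanup pattern described above, using plain POSIX calls in place of the real my_create()/my_register_filename() internals; register_filename() here is a hypothetical stand-in:

    #include <sys/types.h>
    #include <fcntl.h>
    #include <unistd.h>

    // Hypothetical stand-in for my_register_filename(), which can fail
    // once the my_open_files limit is exceeded.
    static bool register_filename(int fd)
    {
        return fd < 1024;   // placeholder limit, for illustration only
    }

    // Sketch of the fixed behavior: if registration fails after open()
    // succeeded, delete the just-created file so no orphan is left.
    int create_file(const char *name, mode_t mode)
    {
        int fd = open(name, O_CREAT | O_WRONLY | O_EXCL, mode);
        if (fd < 0)
            return -1;              // creation itself failed
        if (!register_filename(fd))
        {
            close(fd);
            unlink(name);           // the fix: remove the orphaned file
            return -1;              // still report the error to the caller
        }
        return fd;
    }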
-
gluh@mysql.com/mgluh.(none) authored
-
- 01 Mar, 2008 1 commit
-
gshchepa/uchum@host.loc authored
into host.loc:/home/uchum/work/5.0-opt
-
- 29 Feb, 2008 7 commits
-
gluh@mysql.com/eagle.(none) authored
-
gluh@eagle.(none) authored
into mysql.com:/home/gluh/MySQL/Merge/5.0-opt
-
gluh@mysql.com/eagle.(none) authored
-
gshchepa/uchum@host.loc authored
and Item_direct_ref constructor calls. The order of the ref->field_name and ref->table_name arguments in the Item_ref and Item_direct_ref constructor calls in the fix_inner_refs function was inverted.
-
gluh@mysql.com/eagle.(none) authored
-
gluh@eagle.(none) authored
into mysql.com:/home/gluh/MySQL/Merge/5.0-opt
-
gluh@mysql.com/eagle.(none) authored
Added a new function test_if_data_home_dir() which checks that a path does not contain the mysql data home directory. Use of the mysql data home directory in DATA DIRECTORY & INDEX DIRECTORY is disallowed.
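A rough sketch of the idea behind such a check; the real test_if_data_home_dir() also normalizes paths and accounts for filesystem case sensitivity, which this simplified version omits:

    #include <string>

    // Simplified check: reject a DATA/INDEX DIRECTORY path that lies
    // inside the server's data home directory.
    bool path_inside_data_home(const std::string &path,
                               const std::string &data_home)
    {
        return path.compare(0, data_home.size(), data_home) == 0;
    }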
-
- 28 Feb, 2008 4 commits
-
gshchepa/uchum@host.loc authored
into host.loc:/home/uchum/work/5.0-opt
-
gshchepa/uchum@host.loc authored
Assertion `0' failed. If a ROW item is part of an expression that also has aggregate function calls (COUNT/SUM/AVG...), a "splitting" with the Item::split_sum_func2 function is applied to that ROW item. The current implementation of Item::split_sum_func2 replaces this Item_row with a newly created Item_aggregate_ref reference to it. Then the row cache tries to work with the Item_aggregate_ref object as with the Item_row object: the row cache calls row-emulation methods such as cols and element_index. Item_aggregate_ref (like its parent Item_ref) inherits dummy implementations of those methods from the hierarchy root Item, and calls to them lead to failed assertions and wrong data output. The row-emulation virtual functions (cols, element_index, addr, check_cols, null_inside and bring_value) of Item_ref have been overloaded to forward calls to the underlying item reference.
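A schematic illustration of the forwarding fix, using simplified stand-ins rather than the real Item hierarchy:

    // Simplified stand-ins for the Item hierarchy; not the real classes.
    struct Item
    {
        // Dummy row-emulation defaults, as on the hierarchy root.
        virtual unsigned cols() { return 1; }
        virtual Item *element_index(unsigned) { return this; }
        virtual ~Item() {}
    };

    struct Item_ref : Item
    {
        Item **ref;                 // the underlying item reference
        explicit Item_ref(Item **r) : ref(r) {}

        // The fix, schematically: forward row-emulation calls to the
        // referenced item instead of inheriting the dummy versions.
        unsigned cols() override { return (*ref)->cols(); }
        Item *element_index(unsigned i) override
        { return (*ref)->element_index(i); }
    };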
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B34747-5.0-opt
-
gkodinov/kgeorge@magare.gmz authored
There was a double free of the Unique member of Item_func_group_concat. It was not causing a crash because Unique is a descendant of Sql_alloc. Fixed by freeing the Unique only if it was allocated for the Item_func_group_concat instance that references it.
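In ownership terms the fix boils down to freeing the member only in the instance that allocated it; a generic sketch with hypothetical names:

    struct Resource {};             // stands in for Unique

    struct Holder                   // stands in for Item_func_group_concat
    {
        Resource *res;
        bool owns_res;              // true only in the allocating instance

        Holder(Resource *r, bool owns) : res(r), owns_res(owns) {}

        // Copies that merely reference the Resource must not free it.
        ~Holder() { if (owns_res) delete res; }
    };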
-
- 27 Feb, 2008 4 commits
-
kaa@kaamos.(none) authored
into kaamos.(none):/data/src/opt/mysql-5.0-opt
-
kaa@kaamos.(none) authored
the patch for bug #33834.
-
holyfoot/hf@hfmain.(none) authored
into mysql.com:/home/hf/work/25097/my50-25097
-
holyfoot/hf@mysql.com/hfmain.(none) authored
There was no way to return an error from the client library if no MYSQL connection was established. So here I added variables to store that kind of error and made functions like mysql_error(NULL) return these.
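A short usage sketch of what the change enables; with no valid handle available, the client passes NULL to mysql_error():

    #include <mysql.h>
    #include <stdio.h>

    int main()
    {
        MYSQL *conn = mysql_init(NULL);
        if (conn == NULL)
        {
            // Before the fix there was no handle to ask for the error;
            // now mysql_error(NULL) returns the stored error text.
            fprintf(stderr, "init failed: %s\n", mysql_error(NULL));
            return 1;
        }
        mysql_close(conn);
        return 0;
    }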
-
- 26 Feb, 2008 1 commit
-
kaa@kaamos.(none) authored
into kaamos.(none):/data/src/opt/mysql-5.0-opt
-
- 25 Feb, 2008 2 commits
-
kaa@kaamos.(none) authored
documentation. While the manual mentions FRAC_SECOND only for the TIMESTAMPADD() function, it was also possible to use FRAC_SECOND with DATE_ADD(), DATE_SUB() and +/- INTERVAL. Fixed the parser to match the manual, i.e. using FRAC_SECOND for anything other than TIMESTAMPADD()/TIMESTAMPDIFF() now produces a syntax error. Additionally, the patch allows MICROSECOND to be used in TIMESTAMPADD()/TIMESTAMPDIFF() and marks FRAC_SECOND as deprecated.
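A hedged sketch of which forms are accepted and rejected after this change, issued through the C API:

    #include <mysql.h>

    void frac_second_examples(MYSQL *conn)
    {
        // Still accepted: FRAC_SECOND (now deprecated) and MICROSECOND
        // inside TIMESTAMPADD()/TIMESTAMPDIFF().
        mysql_query(conn, "SELECT TIMESTAMPADD(FRAC_SECOND, 1, NOW())");
        mysql_query(conn, "SELECT TIMESTAMPADD(MICROSECOND, 1, NOW())");

        // Now a syntax error: FRAC_SECOND outside those two functions.
        mysql_query(conn, "SELECT DATE_ADD(NOW(), INTERVAL 1 FRAC_SECOND)");

        // The equivalent that remains valid.
        mysql_query(conn, "SELECT DATE_ADD(NOW(), INTERVAL 1 MICROSECOND)");
    }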
-
mysql_config --cflags gave a flag that forced the HP-UX C++ compiler into C mode; as a result, C++ sources could not be compiled correctly. We now filter out the offending flag (as we do for Sun) so that --cflags works for both C and C++.
-
- 22 Feb, 2008 5 commits
-
kaa@kaamos.(none) authored
into kaamos.(none):/data/src/opt/mysql-5.0-opt
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B30604-5.0-opt
-
gluh@mgluh.(none) authored
into mysql.com:/home/gluh/MySQL/mysql-5.0-opt
-
kaa@kaamos.(none) authored
suite). Under some circumstances a combination of aggregate functions and GROUP BY in a SELECT query over a VIEW could lead to incorrect calculation of the result type of the aggregate function. This in turn could lead to wrong results, or to assertion failures on debug builds. Fixed by changing the logic in Item_sum_hybrid::fix_fields() so that the argument's item is dereferenced before calling its type() method.
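Schematically, the fix replaces a type() call on the reference wrapper with one on the dereferenced item; simplified stand-ins, not the real code:

    // A reference wrapper must be looked through before asking for the
    // type, or the wrapper's own type is reported instead of the
    // underlying field's.
    struct Item
    {
        virtual Item *real_item() { return this; }  // dereference helper
        virtual int type() { return 0; }
        virtual ~Item() {}
    };

    int result_type_of(Item *arg)
    {
        return arg->real_item()->type();   // dereference first, then ask
    }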
-
gluh@mysql.com/mgluh.(none) authored
skip lock_type update for temporary tables
-
- 20 Feb, 2008 2 commits
-
evgen@moonbone.local authored
into moonbone.local:/work/33266-bug-5.0-opt-mysql
-
evgen@moonbone.local authored
The test case for bug#31048 checks that there is no crash on stack overrun. But due to different stack sizes on different platforms it failed on some of them. The new test case checks that a query with at least 4 levels of subquery nesting works without a stack overrun, and that other levels of nesting don't cause a crash.
-
- 19 Feb, 2008 1 commit
-
gkodinov/kgeorge@magare.gmz authored
and ps-protocol. Finding a routine should be a transparent operation as far as the binary log is concerned. But it was influencing the binary log because of the TIMESTAMP column in the proc table. Fixed by preserving and restoring the time_zone usage flag when searching for a stored routine in the proc table.
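The save/restore pattern the fix describes, in schematic form with hypothetical names:

    // Preserve and restore a "time zone used" flag around a lookup that
    // reads a TIMESTAMP column, so the lookup leaves no trace in the
    // state that influences the binary log.
    struct ThreadState { bool time_zone_used; };

    void find_routine(ThreadState *thd /*, routine name ... */)
    {
        bool saved = thd->time_zone_used;   // preserve the caller's flag
        // ... read the proc table; touching its TIMESTAMP column would
        // otherwise set thd->time_zone_used as a side effect ...
        thd->time_zone_used = saved;        // restore: lookup is transparent
    }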
-
- 18 Feb, 2008 1 commit
-
holyfoot/hf@hfmain.(none) authored
into mysql.com:/home/hf/work/32942/my50-32942
-
- 17 Feb, 2008 2 commits
-
kaa@kaamos.(none) authored
ssh://bk-internal.mysql.com//home/bk/mysql-5.0-opt
into kaamos.(none):/data/src/opt/mysql-5.0-opt
-
holyfoot/hf@mysql.com/hfmain.(none) authored
The problem is not about intervals and doesn't actually cause a 'full table scan'. We have an optimization for DISTINCT: when we have 'DISTINCT field_from_first_join_table' we don't need to read all the rows from the JOIN-ed table if we found one conforming row. It stopped working in 5.0 because we return NESTED_LOOP_OK if we come upon that case in evaluate_join_record(), and that doesn't break the record-reading loop in sub_select(). Fixed by returning NESTED_LOOP_NO_MORE_ROWS in this case.
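The control-flow idea, schematically:

    // A distinct sentinel return value is needed so that one matching
    // row can break the record-reading loop for DISTINCT.
    enum nested_loop_state
    {
        NESTED_LOOP_OK,             // keep reading rows
        NESTED_LOOP_NO_MORE_ROWS    // stop: further rows cannot matter
    };

    nested_loop_state evaluate_record(bool distinct_satisfied)
    {
        // Before the fix this path returned NESTED_LOOP_OK, so the loop
        // in sub_select() kept scanning the joined table needlessly.
        return distinct_satisfied ? NESTED_LOOP_NO_MORE_ROWS
                                  : NESTED_LOOP_OK;
    }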
-