- 30 Jun, 2007 1 commit
-
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B29157-5.1-opt
-
- 29 Jun, 2007 3 commits
-
-
holyfoot/hf@hfmain.(none) authored
into mysql.com:/home/hf/work/29247/my51-29247
-
holyfoot/hf@hfmain.(none) authored
into mysql.com:/home/hf/work/29247/my51-29247
-
holyfoot/hf@hfmain.(none) authored
into mysql.com:/home/hf/work/29247/my51-29247
-
- 28 Jun, 2007 3 commits
-
-
gkodinov/kgeorge@magare.gmz authored
Sometimes the number of rows that were really updated (i.e. whose column values changed) cannot be determined at the server level alone, e.g. when the storage engine does not return enough column values to verify it. In such cases the only dependable way is to let the storage engine report that information if it can. Fixed the bug at the server level by providing a way for the storage engine to report whether it actually updated the row or the old and the new column values are the same: it can do so by returning HA_ERR_RECORD_IS_THE_SAME from ha_update_row(). Note that each storage engine may choose not to return this status code, so this behaviour remains storage-engine specific.
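A minimal sketch of that contract, using invented types rather than the real handler API (the error value is hard-coded here purely for illustration): an engine that still has both row images can report a no-op update by returning HA_ERR_RECORD_IS_THE_SAME from its update_row() method.

#include <cstddef>
#include <cstring>

// Hypothetical stand-ins for this sketch; the real constant and handler class
// live in my_base.h and handler.h.
static const int HA_ERR_RECORD_IS_THE_SAME = 169;

struct example_engine
{
  size_t row_length;   // fixed-size row image, to keep the comparison trivial

  int update_row(const unsigned char *old_data, const unsigned char *new_data)
  {
    // The engine sees both row images, so it can tell the server that nothing
    // actually changed instead of leaving the server to guess.
    if (memcmp(old_data, new_data, row_length) == 0)
      return HA_ERR_RECORD_IS_THE_SAME;

    // ... persist new_data here ...
    return 0;          // the row was really updated
  }
};

int main()
{
  example_engine h = { 4 };
  unsigned char a[4] = {1, 2, 3, 4}, b[4] = {1, 2, 3, 4};
  return h.update_row(a, b) == HA_ERR_RECORD_IS_THE_SAME ? 0 : 1;
}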
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B26564-5.1-opt
-
holyfoot/hf@mysql.com/hfmain.(none) authored
which caused some consecutive test failures
-
- 27 Jun, 2007 3 commits
-
-
holyfoot/hf@hfmain.(none) authored
into mysql.com:/home/hf/work/29156/my51-29156
-
holyfoot/hf@mysql.com/hfmain.(none) authored
don't free thd->lex->sphead if we didn't do lex_start(), as we can have garbage there
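A tiny illustration of that guard, with made-up types standing in for THD/LEX/sp_head: only tear down sphead once lex_start() is known to have run, since before that the field may hold garbage.

// Invented minimal types; the real THD/LEX/sp_head are far richer.
struct sp_head_model {};

struct lex_model
{
  bool started;            // has lex_start() run on this LEX?
  sp_head_model *sphead;
};

static void lex_start(lex_model *lex)
{
  lex->started = true;
  lex->sphead = nullptr;   // initialization gives the field a defined value
}

static void cleanup_lex(lex_model *lex)
{
  if (!lex->started)
    return;                // lex_start() never ran: sphead may hold garbage
  delete lex->sphead;
  lex->sphead = nullptr;
}

int main()
{
  lex_model raw = { false, nullptr };  // never passed through lex_start()
  cleanup_lex(&raw);                   // safe: nothing is freed on this path

  lex_model ready = { false, nullptr };
  lex_start(&ready);
  ready.sphead = new sp_head_model();  // a parsed stored routine
  cleanup_lex(&ready);                 // initialized path: sphead is freed
  return 0;
}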
-
holyfoot/hf@mysql.com/hfmain.(none) authored
-
- 26 Jun, 2007 3 commits
-
-
holyfoot/hf@hfmain.(none) authored
into mysql.com:/home/hf/work/28430/my51-28430
-
holyfoot/hf@mysql.com/hfmain.(none) authored
In ha_partition::position() we do not calculate the number of the partition the record belongs to; we use m_last_part_value instead, relying on it having been set elsewhere, e.g. by previous calls to ::write_row(). In replication we do neither of these calls before ::position(): Delete_row_log_event::do_exec_row calls find_and_fetch_row(), where we used position() & rnd_pos() calls to find the record for the PARTITION/INNODB table because it possesses the InnoDB table flags. Fixed by removing the HA_PRIMARY_KEY_REQUIRED_FOR_POSITION flag from PARTITION.
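A simplified model of that ordering dependency (invented names, not the real ha_partition class): position() only copies the partition id cached by a preceding write_row()/rnd_next(), so calling it "cold" would record a stale or undefined partition for the later rnd_pos() to look in.

#include <cassert>
#include <cstdio>

struct partition_handler_model
{
  int m_last_part;   // cached by write_row()/rnd_next(), consumed by position()

  partition_handler_model() : m_last_part(-1) {}

  void write_row(int part_id) { m_last_part = part_id; }
  void rnd_next(int part_id)  { m_last_part = part_id; }

  // position() does not recompute which partition the current record lives in;
  // it trusts the cached value, which is why it must not be called cold.
  int position() const
  {
    assert(m_last_part >= 0 && "position() called before write_row()/rnd_next()");
    return m_last_part;
  }
};

int main()
{
  partition_handler_model h;
  h.rnd_next(3);                                  // a scan caches the partition
  printf("row reference -> partition %d\n", h.position());
  return 0;
}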
-
mhansson@dl145s.mysql.com authored
into dl145s.mysql.com:/dev/shm/mhansson/my51-bug28677
-
- 25 Jun, 2007 15 commits
-
-
gkodinov/kgeorge@magare.gmz authored
MySQL uses _beginthread()/_endthread() instead of _beginthreadex()/_endthreadex() to create/end its threads on Windows. According to MSDN, _endthread() closes the thread handle, so there is no need to close the handle explicitly. Besides, the WaitForSingleObject(, INFINITE) != WAIT_OBJECT_0 condition cannot hold in any practical case, as the other two possible return codes (according to MSDN) cannot happen here, which made the CloseHandle() actually dead code. Fixed by removing the CloseHandle() call. No test case added because it's not possible to test for the absence of dead code.
-
gshchepa/uchum@gleb.loc authored
into gleb.loc:/home/uchum/work/bk/5.1-opt
-
gshchepa/uchum@gleb.loc authored
into gleb.loc:/home/uchum/work/bk/5.1-opt
-
-
holyfoot/hf@mysql.com/hfmain.(none) authored
If one sets the MYSQL_READ_DEFAULTS_FILE and MYSQL_READ_DEFAULT_GROUP options after mysql_real_connect() has been called with that MYSQL instance, these options will affect the next mysql_reconnect. As we use a copy of the original MYSQL object inside mysql_reconnect, and mysql_real_connect frees the options.my_cnf_file and _group strings, we will free these twice when we execute mysql_reconnect with the same MYSQL for the second time. I don't think we should ever read defaults files while handling mysql_reconnect, so I just set them to 0 for the temporary MYSQL object there.
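A self-contained sketch of that fix in plain C++ (struct and function names here are invented, not the client library's): the temporary connection object used for the reconnect gets its defaults-file/group pointers cleared, so the strings set on the original object are never freed a second time.

#include <cstdlib>
#include <cstring>

struct conn_options { char *my_cnf_file; char *my_cnf_group; };
struct conn         { conn_options options; };

static char *dup_str(const char *s)
{
  char *p = (char *) malloc(strlen(s) + 1);
  strcpy(p, s);
  return p;
}

// Mirrors mysql_real_connect() consuming and freeing the defaults-file strings.
static void real_connect(conn *c)
{
  free(c->options.my_cnf_file);
  free(c->options.my_cnf_group);
  c->options.my_cnf_file = c->options.my_cnf_group = nullptr;
}

static void reconnect(conn *c)
{
  conn tmp = *c;                       // shallow copy, as mysql_reconnect() makes
  // The fix: defaults files are not re-read during a reconnect, so drop the
  // copied pointers instead of letting real_connect() free the originals again.
  tmp.options.my_cnf_file = tmp.options.my_cnf_group = nullptr;
  real_connect(&tmp);
}

int main()
{
  conn c = { { nullptr, nullptr } };
  real_connect(&c);                            // initial connect
  c.options.my_cnf_file  = dup_str("my.cnf");  // options set after connecting
  c.options.my_cnf_group = dup_str("client");
  reconnect(&c);
  reconnect(&c);                               // second reconnect: no double free
  free(c.options.my_cnf_file);
  free(c.options.my_cnf_group);
  return 0;
}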
-
mhansson@dl145s.mysql.com authored
into dl145s.mysql.com:/dev/shm/mhansson/my51-bug28677
-
holyfoot/hf@hfmain.(none) authored
into mysql.com:/home/hf/work/27084/my51-27084
-
gshchepa/uchum@gleb.loc authored
into gleb.loc:/home/uchum/work/bk/5.1-opt
-
gshchepa/uchum@gleb.loc authored
into gleb.loc:/home/uchum/work/bk/5.0-opt
-
gshchepa/uchum@gleb.loc authored
into gleb.loc:/home/uchum/work/bk/5.0-opt
-
gshchepa/uchum@gleb.loc authored
into gleb.loc:/home/uchum/work/bk/5.1-opt
-
gshchepa/uchum@gleb.loc authored
into gleb.loc:/home/uchum/work/bk/4.1-opt
-
holyfoot/hf@mysql.com/hfmain.(none) authored
When index_init() or rnd_init() returns an error, we still set handler->inited to INDEX or RND in ha_index_init()/ha_rnd_init(). As the caller doesn't call ha_*_end() in this case, we hit a failed DBUG_ASSERT later.
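A sketch of the fix with invented names (not the real handler class): the initialized state is recorded only when the underlying init call succeeds, so a caller that skips ha_index_end()/ha_rnd_end() after a failure no longer trips the "still initialized" assertion.

#include <cassert>

struct handler_model
{
  enum init_state { NONE, INDEX, RND };
  init_state inited;

  handler_model() : inited(NONE) {}
  ~handler_model() { assert(inited == NONE && "missing ha_index_end()/ha_rnd_end()"); }

  int index_init_impl(unsigned /*idx*/) { return 1; /* pretend the engine fails */ }

  int ha_index_init(unsigned idx)
  {
    int error = index_init_impl(idx);
    if (!error)
      inited = INDEX;          // record the state only on success
    return error;
  }
};

int main()
{
  handler_model h;
  (void) h.ha_index_init(0);   // fails; the caller never calls ha_index_end()
  return 0;                    // destructor's assertion stays quiet with the fix
}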
-
igor@olga.mysql.com authored
into olga.mysql.com:/home/igor/mysql-5.1-opt
-
gshchepa/uchum@gleb.loc authored
Merge with 5.1.
-
- 24 Jun, 2007 4 commits
-
-
gshchepa/uchum@gleb.loc authored
into gleb.loc:/home/uchum/work/bk/5.1-opt
-
igor@olga.mysql.com authored
into olga.mysql.com:/home/igor/dev-opt/mysql-5.0-opt-bug25602
-
gshchepa/uchum@gleb.loc authored
into gleb.loc:/home/uchum/work/bk/5.0-opt
-
igor@olga.mysql.com authored
Queries to which the loose scan optimization for grouping queries was applied returned a wrong result set when they were used with the SQL_BIG_RESULT option. The SQL_BIG_RESULT option forces use of the sorting algorithm for grouping queries instead of employing a suitable index. The loose scan optimization is currently applied only to single-table queries where the suitable index is covering, and it makes no sense to use the sort algorithm in that case. However, the create_sort_index function did not take into account that the loose scan may have been chosen to implement the DISTINCT operator, which makes sorting unnecessary. Moreover, the current implementation of the loose scan for queries with DISTINCT assumes that sorting will never happen. Thus in this case create_sort_index should not call the filesort function.
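A heavily simplified sketch of that guard (invented types; the real logic lives in create_sort_index() and the JOIN structures): when the loose index scan already produces the distinct groups, the sort step is skipped rather than layered on top of it.

#include <cstdio>

struct join_tab_model
{
  bool loose_scan_for_distinct;   // loose index scan chosen to implement DISTINCT
};

static void filesort(join_tab_model &) { printf("filesort\n"); }

static void create_sort_index(join_tab_model &tab)
{
  // Rows already come out grouped by the covering-index loose scan, and the
  // loose-scan code assumes no sort runs on top of it, so skip the filesort.
  if (tab.loose_scan_for_distinct)
    return;
  filesort(tab);
}

int main()
{
  join_tab_model tab = { true };  // SQL_BIG_RESULT asked for a sort, loose scan won
  create_sort_index(tab);         // with the fix: no filesort, correct result set
  return 0;
}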
-
- 23 Jun, 2007 5 commits
-
-
gshchepa/uchum@gleb.loc authored
into gleb.loc:/home/uchum/work/bk/5.1-opt
-
gshchepa/uchum@gleb.loc authored
into gleb.loc:/home/uchum/work/bk/5.0-opt
-
gshchepa/uchum@gleb.loc authored
into gleb.loc:/home/uchum/work/bk/5.1-opt
-
gshchepa/uchum@gleb.loc authored
into gleb.loc:/home/uchum/work/bk/5.0-opt
-
gshchepa/uchum@gleb.loc authored
INSERT into a table from a SELECT on the same table with ORDER BY and LIMIT was inserting different data than the standalone SELECT ... ORDER BY ... LIMIT returns. One part of the patch for bug #9676 improperly pushed the LIMIT down to the temporary table in the presence of the ORDER BY clause. That part has been removed.
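A self-contained illustration (plain C++, not the server code) of why pushing the LIMIT below the ORDER BY changes the inserted rows: limiting first keeps an arbitrary prefix of the rows, while the statement's semantics require sorting first and limiting afterwards.

#include <algorithm>
#include <cstdio>
#include <vector>

int main()
{
  std::vector<int> rows = {42, 7, 19, 3, 88};
  const int limit = 2;

  // Correct: ORDER BY ... LIMIT 2 -> {3, 7}
  std::vector<int> correct = rows;
  std::sort(correct.begin(), correct.end());
  correct.resize(limit);

  // Wrong: LIMIT pushed to the temporary table before sorting -> {7, 42}
  std::vector<int> wrong(rows.begin(), rows.begin() + limit);
  std::sort(wrong.begin(), wrong.end());

  printf("correct: %d %d\n", correct[0], correct[1]);
  printf("wrong:   %d %d\n", wrong[0], wrong[1]);
  return 0;
}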
-
- 22 Jun, 2007 3 commits
-
-
joerg@trift2. authored
into trift2.:/MySQL/M51/push-5.1
-
joerg@trift2. authored
into trift2.:/MySQL/M51/push-5.1
-
joerg@trift2. authored
into trift2.:/MySQL/M50/push-5.0
-