- 13 Nov, 2006 2 commits
-
-
gkodinov/kgeorge@rakia.gmz authored
into rakia.gmz:/home/kgeorge/mysql/autopush/B19216-4.1-opt
-
gkodinov/kgeorge@macbook.gmz authored
The server sends the number of columns to the client using a limited "fast" function instead of the general one. This fast function cannot send numbers wider than 2 bytes, which causes the client to expect a smaller number of columns and, as a result, to write outside of its allocated memory buffer. Fixed the server to use the general function to send the column count, and fixed the client to check the column count before writing column data.
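The "general" function here corresponds to the protocol's length-encoded integer, which is not limited to two bytes. A minimal standalone sketch of such an encoder (illustrative C++, not the server's actual routine):

    #include <cstdint>

    // Length-encoded integer as used by the MySQL client/server protocol:
    // values below 251 take one byte; larger values get a 0xfc/0xfd/0xfe
    // prefix followed by 2, 3 or 8 little-endian bytes respectively.
    static unsigned char *store_length_encoded(unsigned char *pos, uint64_t value)
    {
      if (value < 251)
      {
        *pos++ = static_cast<unsigned char>(value);            // 1-byte form
      }
      else if (value < 65536)
      {
        *pos++ = 0xfc;                                         // 2 bytes follow
        *pos++ = static_cast<unsigned char>(value & 0xff);
        *pos++ = static_cast<unsigned char>((value >> 8) & 0xff);
      }
      else if (value < 16777216)
      {
        *pos++ = 0xfd;                                         // 3 bytes follow
        *pos++ = static_cast<unsigned char>(value & 0xff);
        *pos++ = static_cast<unsigned char>((value >> 8) & 0xff);
        *pos++ = static_cast<unsigned char>((value >> 16) & 0xff);
      }
      else
      {
        *pos++ = 0xfe;                                         // 8 bytes follow
        for (int i = 0; i < 8; i++)
          *pos++ = static_cast<unsigned char>((value >> (8 * i)) & 0xff);
      }
      return pos;
    }

On the client side, the decoded count should be compared against the size of the buffer allocated for column metadata before any column data is written.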
-
- 09 Nov, 2006 1 commit
-
-
gkodinov/kgeorge@macbook.gmz authored
into macbook.gmz:/Users/kgeorge/mysql/work/mem-test-4.1-opt
-
- 08 Nov, 2006 1 commit
-
-
gkodinov/kgeorge@macbook.gmz authored
-
- 07 Nov, 2006 1 commit
-
-
gkodinov/kgeorge@macbook.gmz authored
When returning metadata for scalar subqueries, the actual type of the column was calculated from the value's result type, which limits a scalar subselect to the set of (currently) three basic types: integer, double precision, or string. This is the reason that columns of types other than the basic ones (e.g. date/time) were reported as being of the corresponding basic type. Fixed by storing and returning the column type in addition to the result type.
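A minimal sketch of the idea behind the fix, using simplified stand-in enums rather than the server's actual Item_result/enum_field_types definitions:

    // Coarse result type (how the value is computed) vs. column type
    // (what the client should see in the metadata).  Names are illustrative.
    enum ResultType { INT_RESULT, REAL_RESULT, STRING_RESULT };
    enum FieldType  { TYPE_LONG, TYPE_DOUBLE, TYPE_VARCHAR, TYPE_DATE, TYPE_DATETIME };

    struct ScalarSubqueryMeta
    {
      ResultType result_type;   // one of the three basic types
      FieldType  field_type;    // kept alongside it by the fix
    };

    // Deriving metadata from result_type alone can only yield a basic type;
    // keeping the original field_type lets a DATE column be reported as DATE.
    static FieldType reported_type(const ScalarSubqueryMeta &m)
    {
      return m.field_type;
    }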
-
- 03 Nov, 2006 1 commit
-
-
gkodinov/kgeorge@macbook.gmz authored
The parser allocates an Item_field for references by name in ORDER BY expressions. Such a reference, however, may point not only to an Item_field in the select list (or to a table column) but also to an arbitrary Item. This causes Item_field::fix_fields to throw an error about a missing column. The fix substitutes an Item_ref for the Item_field when the reference does not point to an Item_field.
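A rough sketch of the substitution, with simplified stand-ins for the server's Item classes:

    // Illustrative hierarchy only, not the real Item classes.
    struct Item              { virtual ~Item() {} };
    struct Item_field : Item {};                           // a real column reference
    struct Item_ref   : Item { Item *real;
                               explicit Item_ref(Item *r) : real(r) {} };

    // If an ORDER BY name resolves to an arbitrary select-list expression,
    // wrap it in a reference instead of keeping the parser's Item_field
    // placeholder, which would otherwise demand an existing column.
    static Item *resolve_order_by_name(Item *matched)
    {
      if (dynamic_cast<Item_field *>(matched))
        return matched;                                    // plain column: keep as is
      return new Item_ref(matched);                        // expression: refer to it
    }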
-
- 24 Oct, 2006 5 commits
-
-
into mysql.com:/usersnfs/abotchkov/mysql-4.1-opt1
-
holyfoot/hf@mysql.com/deer.(none) authored
into mysql.com:/home/hf/work/0current_stmt/my41-current_stmt
-
holyfoot/hf@mysql.com/deer.(none) authored
The incompatibility was caused by the current_stmt member added to the MYSQL structure. It is possible to move it to the THD structure instead, which preserves the ABI.
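A toy illustration of why the extra member breaks the client ABI (simplified layouts, not the real MYSQL declaration):

    #include <cstdio>

    // Client programs are compiled against the old size/layout of the public
    // struct, so a library that enlarges it cannot be dropped in.
    struct ClientStructV1 { int fd; unsigned long affected_rows; };
    struct ClientStructV2 { int fd; unsigned long affected_rows; void *current_stmt; };

    int main()
    {
      std::printf("v1=%zu bytes, v2=%zu bytes\n",
                  sizeof(ClientStructV1), sizeof(ClientStructV2));
      // Moving the new member into a server-internal structure (THD) keeps
      // the public client struct unchanged and the ABI intact.
      return 0;
    }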
-
holyfoot/hf@mysql.com/deer.(none) authored
into mysql.com:/home/hf/work/w3475/my41-w3475
-
holyfoot/hf@mysql.com/deer.(none) authored
-
- 23 Oct, 2006 1 commit
-
-
holyfoot/hf@mysql.com/deer.(none) authored
The necessary code was added to mysqltest.c; disabled tests are now available.
-
- 20 Oct, 2006 3 commits
-
-
igor@rurik.mysql.com authored
into rurik.mysql.com:/home/igor/mysql-4.1-opt
-
gkodinov@dl145s.mysql.com authored
into dl145s.mysql.com:/data/bk/team_tree_merge/MERGE/mysql-4.1-opt
-
igor@rurik.mysql.com authored
If elements of a non-top-level IN subquery were accessed by an index and the subquery result set included a NULL value, the quantified predicate that contained the subquery evaluated to NULL when it should have returned a non-NULL value.
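For reference, the expected three-valued semantics of x IN (subquery) over a materialized result set can be sketched as follows (illustrative code, not the server's evaluation path):

    #include <optional>
    #include <vector>

    // Three-valued predicate result: true, false, or unknown (SQL NULL).
    using Tri = std::optional<bool>;

    // - a match                 -> TRUE, even if the set also contains NULLs
    // - no match, no NULLs      -> FALSE
    // - no match, NULLs present -> NULL (unknown)
    // - x itself NULL           -> NULL, unless the set is empty (then FALSE)
    static Tri sql_in(const std::optional<long> &x,
                      const std::vector<std::optional<long>> &set)
    {
      if (set.empty())
        return false;
      if (!x.has_value())
        return std::nullopt;
      bool saw_null = false;
      for (const auto &v : set)
      {
        if (!v.has_value()) { saw_null = true; continue; }
        if (*v == *x)       return true;   // a NULL row must not hide a match
      }
      return saw_null ? Tri(std::nullopt) : Tri(false);
    }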
-
- 19 Oct, 2006 4 commits
-
-
svoj@mysql.com/april.(none) authored
into mysql.com:/home/svoj/devel/mysql/engines/mysql-4.1-engines
-
gkodinov@dl145s.mysql.com authored
into dl145s.mysql.com:/data/bk/team_tree_merge/MERGE/mysql-4.1-opt
-
svoj@mysql.com/april.(none) authored
into mysql.com:/home/svoj/devel/mysql/engines/mysql-4.1-engines
-
istruewing@chilla.local authored
into chilla.local:/home/mydev/mysql-4.1-merge
-
- 18 Oct, 2006 1 commit
-
-
svoj@mysql.com/april.(none) authored
REPAIR TABLE could crash the server if there was not sufficient memory (myisam_sort_buffer_size) to operate. This affects not only repair, but all statements that create indexes by sort: repair by sort, parallel repair, and bulk insert. Now an error is returned if there is not enough memory to store at least one key per BUFFPEK. Also fixed a memory leak when thr_find_all_keys returns an error.
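A minimal sketch of the described guard, with illustrative parameter names:

    #include <cstddef>

    // With sort_buffer_length bytes split across `buffpeks` merge chunks,
    // each chunk must be able to hold at least one key of key_length bytes;
    // otherwise fail up front instead of corrupting memory later.
    static bool sort_buffer_big_enough(std::size_t sort_buffer_length,
                                       std::size_t buffpeks,
                                       std::size_t key_length)
    {
      if (buffpeks == 0 || key_length == 0)
        return false;
      return sort_buffer_length / buffpeks >= key_length;   // >= 1 key per BUFFPEK
    }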
-
- 17 Oct, 2006 1 commit
-
-
istruewing@chilla.local authored
into chilla.local:/home/mydev/mysql-4.1-bug12240
-
- 16 Oct, 2006 3 commits
-
-
gkodinov/kgeorge@rakia.(none) authored
into rakia.(none):/home/kgeorge/mysql/autopush/B14019-4.1-opt
-
istruewing@chilla.local authored
into chilla.local:/home/mydev/mysql-4.1-bug12240
-
gkodinov/kgeorge@macbook.gmz authored
When resolving unqualified name references, MySQL was not checking the item type of the reference. Thus, e.g., a string literal item (which by convention has a name equal to its string value) would also work as a reference to a SELECT list item or a table field. Fixed by allowing only an Item_ref or an Item_field to be referenced by (unqualified) name.
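A rough sketch of the added check, again with simplified stand-ins for the Item classes:

    // Illustrative hierarchy only.
    struct Item               { virtual ~Item() {} };
    struct Item_field  : Item {};
    struct Item_ref    : Item {};
    struct Item_string : Item {};   // literal whose "name" equals its value

    // An unqualified name may only match a field or a reference; a literal
    // that happens to carry the same name must not be accepted as a match.
    static bool acceptable_name_match(const Item *candidate)
    {
      return dynamic_cast<const Item_field *>(candidate) != nullptr ||
             dynamic_cast<const Item_ref *>(candidate)   != nullptr;
    }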
-
- 13 Oct, 2006 2 commits
-
-
kroki/tomash@moonlight.intranet authored
into moonlight.intranet:/home/tomash/src/mysql_ab/mysql-4.1-bug9678
-
kroki/tomash@moonlight.intranet authored
into moonlight.intranet:/home/tomash/src/mysql_ab/mysql-4.1-bug9678
-
- 11 Oct, 2006 4 commits
-
-
istruewing@chilla.local authored
into chilla.local:/home/mydev/mysql-4.1-bug8283-one
-
svoj@mysql.com/april.(none) authored
REPAIR TABLE ... USE_FRM hangs on Linux. If REPAIR TABLE ... USE_FRM is issued for a table located in a database other than the default one, a server crash could happen. In reopen_name_locked_table, take the database name from table_list (user-specified or default database) instead of from thd (default database). Affects 4.1 only.
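A minimal sketch of the intent of the change (simplified structures, not the server's):

    // When reopening a name-locked table, the database must come from the
    // statement's table list entry, not from the connection's current
    // (default) database.
    struct TableListEntry { const char *db; const char *table_name; };
    struct ThreadCtx      { const char *current_db; };

    static const char *database_for_reopen(const ThreadCtx &,
                                            const TableListEntry &tl)
    {
      return tl.db;   // user-specified or default db, resolved at parse time
    }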
-
istruewing@chilla.local authored
Examined rows are counted for every join part. The per-join-part counter was incremented over all iterations, while the result variable was overwritten at the end of every iteration. The final result was therefore the number of rows examined by the join part that finished its execution last; the numbers of the other join parts were lost. Now we reset the per-join-part counter before every iteration and add it to the result variable at the end of the iteration. That way we get the sum over all iterations of all join parts. No test case: testing this needs a look into the slow query log, and I don't know of a way to do that portably with the test suite.
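A small sketch of the corrected counting scheme (illustrative types and names):

    #include <cstdint>

    struct JoinPart
    {
      uint64_t examined_rows;   // per-join-part, per-iteration counter
    };

    // Old behaviour: the counter grew across all iterations and the total was
    // overwritten each time, so only the last join part's number survived.
    // New behaviour: reset before the iteration, accumulate after it.
    static void account_iteration(JoinPart &part, uint64_t rows_read_this_pass,
                                  uint64_t &examined_total)
    {
      part.examined_rows = 0;                      // reset before the iteration
      part.examined_rows += rows_read_this_pass;   // counted while scanning
      examined_total += part.examined_rows;        // add, do not overwrite
    }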
-
msvensson@neptunus.(none) authored
into neptunus.(none):/home/msvensson/mysql/mysql-4.1
-
- 10 Oct, 2006 1 commit
-
-
lars@mysql.com/black.(none) authored
into mysql.com:/home/bk/MERGE/mysql-4.1-merge
-
- 09 Oct, 2006 2 commits
-
-
istruewing@chilla.local authored
into chilla.local:/home/mydev/mysql-4.1-bug8283-one
-
istruewing@chilla.local authored
OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick parallel repair. This means that it rebuilds not only all indexes but also the data file. Non-quick parallel repair uses one thread per index, and the first of these threads also rebuilds the data file. The problem was that all threads shared the read IO cache on the old data file. If there were holes (deleted records) in the table, the first thread skipped them, writing only contiguous, non-deleted records to the new data file, and then built its index so that its entries pointed to the correct record positions. The other threads, however, did not know the new record positions and put the positions from the old data file into their indexes. The new design uses a shared IO cache that is filled by the first thread (the data file writer) with the new contiguous records and read by the other threads, so they now know the new record positions. Another problem was that the parallel repair of compressed tables used a common bit_buff and rec_buff; I changed it so that thread-specific buffers are used for parallel repair. A similar problem existed for checksum calculation; I made this multi-thread safe too.
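A greatly reduced sketch of the shared-cache idea, using a toy publish/consume structure in place of the server's IO_CACHE (all names illustrative):

    #include <condition_variable>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <mutex>
    #include <thread>
    #include <vector>

    // The first thread rebuilds the data file and publishes the NEW position
    // of every record it writes; every index-builder thread reads the whole
    // stream of new positions (each with its own cursor) so its index entries
    // point into the rebuilt, hole-free data file.
    struct NewRecordLog
    {
      std::mutex m;
      std::condition_variable cv;
      std::vector<uint64_t> new_pos;
      bool writer_done = false;

      void publish(uint64_t pos)
      {
        { std::lock_guard<std::mutex> lk(m); new_pos.push_back(pos); }
        cv.notify_all();
      }
      void finish()
      {
        { std::lock_guard<std::mutex> lk(m); writer_done = true; }
        cv.notify_all();
      }
      // Blocks until record `idx` is available; false once the writer is done
      // and no such record exists.
      bool get(std::size_t idx, uint64_t &pos)
      {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return new_pos.size() > idx || writer_done; });
        if (new_pos.size() <= idx) return false;
        pos = new_pos[idx];
        return true;
      }
    };

    int main()
    {
      NewRecordLog log;

      // Data-file writer: deleted records are skipped, so new positions are
      // contiguous in the new file.
      std::thread writer([&] {
        uint64_t pos = 0;
        for (int rec = 0; rec < 5; rec++) { log.publish(pos); pos += 100; }
        log.finish();
      });

      // Two "index builder" threads; each consumes ALL new positions.
      auto index_builder = [&](int index_no) {
        uint64_t pos;
        for (std::size_t i = 0; log.get(i, pos); i++)
          std::printf("index %d: record %zu -> offset %llu\n", index_no, i,
                      static_cast<unsigned long long>(pos));
      };
      std::thread t1(index_builder, 1), t2(index_builder, 2);

      writer.join(); t1.join(); t2.join();
      return 0;
    }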
-
- 08 Oct, 2006 1 commit
-
-
svoj@may.pils.ru authored
into may.pils.ru:/home/svoj/devel/bk/mysql-4.1-engines
-
- 06 Oct, 2006 4 commits
-
-
svoj@mysql.com/april.(none) authored
into mysql.com:/home/svoj/devel/mysql/BUG22937/mysql-4.1-engines
-
svoj@mysql.com/april.(none) authored
This is an addition to the fix for bug21617. Valgrind reports an error when opening a merge table that has underlying tables with fewer indexes than the merge table itself. Copy at most min(file->keys, table->key_parts) elements from the rec_per_key array. This fixes problems when a merge table and its subtables have different numbers of keys.
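A small sketch of the bounded copy, with illustrative parameter names:

    #include <algorithm>
    #include <cstddef>
    #include <cstring>

    // A MERGE table may declare more key statistics slots than some
    // underlying table provides, so copy only as many rec_per_key entries
    // as both sides actually have.
    static void copy_rec_per_key(unsigned long *dst, std::size_t merge_slots,
                                 const unsigned long *src, std::size_t child_slots)
    {
      std::size_t n = std::min(merge_slots, child_slots);
      std::memcpy(dst, src, n * sizeof(unsigned long));
    }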
-
svoj@mysql.com/april.(none) authored
-
svoj@mysql.com/april.(none) authored
-
- 05 Oct, 2006 2 commits
-
-
svoj@mysql.com/april.(none) authored
into mysql.com:/home/svoj/devel/mysql/BUG21381/mysql-4.1-engines
-
svoj@mysql.com/april.(none) authored
Though this is not a storage-engine-specific problem, I was able to repeat it only with the BDB and NDB engines; that was the reason to add a test case to ndb_update.test. As a result of the bug, different bad things could happen: BDB removed duplicate rows, which is not expected, and NDB returned an error. For multi-table UPDATE, notify the storage engine about UPDATE IGNORE as is done in single-table UPDATE.
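A minimal sketch of the notification, using simplified stand-ins for the handler interface (the real server signals this through its storage engine handler hints):

    #include <cstddef>

    // Simplified stand-in for a storage engine handler.
    struct EngineHandler
    {
      bool ignore_duplicates = false;
      void set_ignore_duplicates(bool on) { ignore_duplicates = on; }
    };

    // Single-table UPDATE already told the engine about IGNORE; the fix makes
    // multi-table UPDATE do the same for every updated table before rows are
    // written.
    static void prepare_multi_update(EngineHandler *handlers, std::size_t n,
                                     bool ignore_option)
    {
      for (std::size_t i = 0; i < n; i++)
        handlers[i].set_ignore_duplicates(ignore_option);
    }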
-