- 03 Sep, 2007 3 commits
-
-
unknown authored
into adventure.(none):/home/thek/Development/cpp/mysql-5.0-runtime
sql/sql_cache.cc: Auto merged
-
unknown authored
into adventure.(none):/home/thek/Development/cpp/mysql-5.0-runtime
sql/sql_cache.cc: Auto merged
-
unknown authored
Invalidating a subset of a sufficiently large query cache can take a long time. During this time the server is effectively frozen and no other operation can be executed. This patch addresses the problem by setting a time limit on how long a dictionary access request may take before the attempt is given up. The patch does not cover query cache invalidations issued by DROP, ALTER or RENAME TABLE operations.
sql/sql_cache.cc: Changed mutex locking to a timed spin lock. If access to the query cache dictionary takes "a long time" (currently more than 0.1 seconds), the system falls back to ordinary statement execution.
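The pattern described above (try to take the cache lock, give up after roughly 0.1 seconds and bypass the cache) can be sketched as follows; this is only an illustration with invented names, not the actual sql_cache.cc code:

    // Sketch only: spin on try_lock() until success or until the deadline
    // (0.1 s, matching the commit message) expires.
    #include <chrono>
    #include <mutex>
    #include <thread>

    static bool try_lock_with_timeout(std::mutex &m,
                                      std::chrono::milliseconds limit)
    {
      const auto deadline = std::chrono::steady_clock::now() + limit;
      while (!m.try_lock())
      {
        if (std::chrono::steady_clock::now() >= deadline)
          return false;                  // give up: caller bypasses the cache
        std::this_thread::yield();       // brief back-off between attempts
      }
      return true;                       // lock is now held by the caller
    }

    int main()
    {
      std::mutex cache_guard;            // stands in for the query cache lock
      if (try_lock_with_timeout(cache_guard, std::chrono::milliseconds(100)))
      {
        // ... invalidate entries / look up the query cache dictionary ...
        cache_guard.unlock();
      }
      else
      {
        // Cache busy for too long: fall back to ordinary statement execution.
      }
      return 0;
    }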
-
- 30 Aug, 2007 5 commits
-
-
unknown authored
into weblab.(none):/home/marcsql/TREE/mysql-5.0-runtime
sql/item_cmpfunc.h: Auto merged
sql/sql_lex.cc: Auto merged
-
unknown authored
The problem is that a SELECT on one thread is blocked by an INSERT ... ON DUPLICATE KEY UPDATE on another thread even when low_priority_updates is activated. The solution is to possibly downgrade the lock type to the low_priority_updates setting if the INSERT cannot be concurrent.
sql/sql_insert.cc: Possibly downgrade the lock type to the low_priority_updates setting if the INSERT cannot be concurrent.
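A minimal sketch of the downgrade decision described above; the helper function is invented and the real sql_insert.cc logic is more involved, but the TL_* names are MySQL's thr_lock lock types:

    #include <iostream>

    enum LockType { TL_WRITE_CONCURRENT_INSERT, TL_WRITE_LOW_PRIORITY, TL_WRITE };

    LockType insert_lock_type(bool can_be_concurrent, bool low_priority_updates)
    {
      if (can_be_concurrent)
        return TL_WRITE_CONCURRENT_INSERT;   // concurrent insert: SELECTs not blocked
      // Not concurrent (e.g. INSERT ... ON DUPLICATE KEY UPDATE): honor the
      // low_priority_updates setting so waiting SELECTs are not starved.
      return low_priority_updates ? TL_WRITE_LOW_PRIORITY : TL_WRITE;
    }

    int main()
    {
      // With low_priority_updates on and a non-concurrent INSERT, the lock is
      // downgraded to TL_WRITE_LOW_PRIORITY (printed as its enum value).
      std::cout << insert_lock_type(false, true) << "\n";
      return 0;
    }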
-
unknown authored
into weblab.(none):/home/marcsql/TREE/mysql-5.0-28779-b
-
unknown authored
Use double quotes instead of single ones, which make the test fail on Windows. This is for bug #30164.
mysql-test/t/mysql.test: Use double quotes instead of single ones, which make the test fail on Windows.
-
unknown authored
Problem: When a client-side macro appears inside a server-side comment, the add_line() function in mysql.cc discarded all characters until the next delimiter to remove the macro arguments from the query string. This resulted in broken queries being sent to the server when the next delimiter character appeared past the comment's boundaries, because the comment-closing sequence ('*/') was discarded.
Fix: If a client-side macro appears inside a server-side comment, discard all characters in the comment after the macro (that is, until the end of the comment rather than the next delimiter). This is a minimal fix to allow only the simple cases used by the mysqlbinlog utility.
Limitations that are worth documenting:
- Nested server-side and/or client-side comments are not supported by mysql.cc
- Using client-side macros in multi-line server-side comments is not supported
- All characters after a client-side macro in a server-side comment will be omitted from the query string (and thus will not be sent to the server).
client/mysql.cc: If a client-side macro appears inside a server-side comment, discard all characters in the comment after the macro.
mysql-test/r/mysql.result: Added a test case for bug #30164.
mysql-test/t/mysql.test: Added a test case for bug #30164.
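The idea of the fix — skip only to the end of the server-side comment rather than to the next delimiter, so the closing "*/" is preserved — can be illustrated roughly as below; the helper is hypothetical and far simpler than the real add_line():

    #include <iostream>
    #include <string>

    std::string strip_macro_in_comment(const std::string &line, size_t macro_pos)
    {
      size_t end = line.find("*/", macro_pos);     // comment-closing sequence
      if (end == std::string::npos)
        return line.substr(0, macro_pos);          // unterminated: drop the rest
      // Keep everything before the macro plus the closing "*/" so the comment
      // stays balanced in the query sent to the server.
      return line.substr(0, macro_pos) + line.substr(end);
    }

    int main()
    {
      std::string in = "/*!\\C utf8 */ SELECT 1;";  // "\C" is the charset macro
      std::cout << strip_macro_in_comment(in, in.find("\\C")) << "\n";
      // Prints: /*!*/ SELECT 1;   (comment balanced, macro arguments dropped)
      return 0;
    }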
-
- 29 Aug, 2007 2 commits
-
-
unknown authored
comments)
Before this fix, the server would accept queries that contained comments even when the comments were not properly closed with a '*' '/' marker. For example,
  select 1 /* + 2 <EOF>
would be accepted as
  select 1 /* + 2 */ <EOF>
and executed as
  select 1
With this fix, the server now rejects queries with unclosed comments as syntax errors. Both regular comments ('/' '*') and special comments ('/' '*' '!') must be closed with '*' '/' to be parsed correctly.
mysql-test/r/comments.result: Unbalanced comments are a syntax error.
mysql-test/t/comments.test: Unbalanced comments are a syntax error.
sql/sql_lex.cc: Unbalanced comments are a syntax error.
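A toy balance check illustrating the new rule; it is not the server lexer, just a sketch of the "unclosed comment is a syntax error" test:

    #include <iostream>
    #include <string>

    bool comments_balanced(const std::string &query)
    {
      size_t pos = 0;
      while ((pos = query.find("/*", pos)) != std::string::npos)
      {
        size_t end = query.find("*/", pos + 2);
        if (end == std::string::npos)
          return false;                  // unclosed comment -> reject the query
        pos = end + 2;
      }
      return true;
    }

    int main()
    {
      std::cout << comments_balanced("select 1 /* + 2 */") << "\n";  // 1: accepted
      std::cout << comments_balanced("select 1 /* + 2") << "\n";     // 0: syntax error
      return 0;
    }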
-
unknown authored
seems to be converted as varbinary.
The bug has already been fixed; this changeset just adds a test case for it.
mysql-test/r/sp.result: Update result file.
mysql-test/t/sp.test: Test case for BUG#13675.
-
- 28 Aug, 2007 2 commits
-
-
unknown authored
into moksha.local:/Users/davi/mysql/push/mysql-5.0-runtime
-
unknown authored
This is a performance bug, affecting in particular the bison-generated code for the parser.
Prior to this fix, the grammar used a long chain of reduces to parse an expression, like:
  bit_expr -> bit_term
  bit_term -> bit_factor
  bit_factor -> value_expr
  value_expr -> term
  term -> factor
  etc.
This chain of reduces causes the internal state automaton in the generated parser to execute more state transitions and more reduces, so that the generated MySQLParse() function would spend a lot of time looping to execute all the grammar reductions.
With this patch, the grammar has been reorganized so that rules are more "flat", limiting the depth of reduces needed to parse <expr>. Tests have been written to enforce that relative priorities and properties of operators have not changed while changing the grammar. See the bug report for performance data.
mysql-test/r/parser_precedence.result: Improved test coverage for operator precedence
mysql-test/t/parser_precedence.test: Improved test coverage for operator precedence
sql/sql_yacc.yy: Simplified the grammar to improve performance
-
- 27 Aug, 2007 4 commits
-
-
unknown authored
If, after the tables are locked, one of the conditions to read from a HANDLER table is not met, the handler code wrongly jumps to an error path that does not unlock the tables. The user-visible effect is that after an error in a handler read command, all subsequent handler operations on the same table will hang. The fix is simply to correct the code to jump to the (same) error path that unlocks the tables.
mysql-test/r/handler.result: Bug#30632 test case result
mysql-test/t/handler.test: Bug#30632 test case
sql/sql_handler.cc: Always unlock the internal and external table-level locks if any of the conditions (including errors) to read from a HANDLER table are not met.
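Schematically, the control-flow fix looks like the sketch below (invented labels and helpers, not the real sql_handler.cc code): every failed pre-read check must exit through the path that releases the locks.

    #include <iostream>

    static bool locks_held = false;
    static void lock_tables()   { locks_held = true;  }
    static void unlock_tables() { locks_held = false; }

    bool handler_read(bool conditions_ok)
    {
      lock_tables();
      if (!conditions_ok)
        goto err_unlock;     // fix: jump to the path that unlocks (the old code did not)
      // ... perform the HANDLER read ...
      unlock_tables();
      return true;

    err_unlock:
      unlock_tables();       // shared error path: always release the locks
      return false;
    }

    int main()
    {
      handler_read(false);
      std::cout << (locks_held ? "locks leaked" : "locks released") << "\n";
      return 0;
    }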
-
unknown authored
The problem from a user's perspective: the user creates table A, and then tries to CREATE TABLE a SELECT from A - and this causes a deadlock error, a hang, or fails with a debug assert, but only if the storage engine is InnoDB.
The origin of the problem: InnoDB uses a case-insensitive collation (system_charset_info) when looking up the internal table share, thus returning the same share for 'a' and 'A'.
Cause of the user-visible behavior: since the same share is returned to the SQL locking subsystem, it assumes that the same table is first locked (within the same session) for WRITE, and then for READ, and returns a deadlock error. However, the code is wrong in not properly cleaning up upon an error, leaving external locks in place, which leads to assertion failures and hangs.
Fix that has been implemented: the SQL layer should properly propagate the deadlock error, cleaning up and freeing all resources.
Further work towards a more complete solution: InnoDB should not use a case-insensitive collation for the table share hash if table names on disk honor the case.
mysql-test/r/innodb-deadlock.result: Bug#25164 test case result
mysql-test/t/innodb-deadlock.test: Bug#25164 test case. The CREATE TABLE may fail depending on the character set of the system and filesystem, but it should never hang.
sql/lock.cc: Unlock the storage engine "external" table-level locks if the MySQL thr_lock locking subsystem detects a deadlock error.
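A rough sketch of the cleanup rule added to sql/lock.cc, using invented types and helpers: when thr_lock reports a deadlock, the "external" storage-engine locks already taken are released before the error is returned to the caller.

    #include <vector>

    enum LockResult { LOCK_OK, LOCK_DEADLOCK };

    struct Table { bool external_lock_taken; };

    static void external_unlock(Table &t) { t.external_lock_taken = false; }

    LockResult finish_locking(std::vector<Table> &tables, LockResult thr_lock_result)
    {
      if (thr_lock_result == LOCK_DEADLOCK)
      {
        for (Table &t : tables)          // clean up instead of leaking locks
          if (t.external_lock_taken)
            external_unlock(t);
      }
      return thr_lock_result;            // the error is propagated to the caller
    }

    int main()
    {
      std::vector<Table> tables{{true}, {true}};
      return finish_locking(tables, LOCK_DEADLOCK) == LOCK_DEADLOCK ? 0 : 1;
    }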
-
unknown authored
configure.in: adjust version number after 5.0.48 clone-off
-
unknown authored
into mysql.com:/data0/mysqldev/my/build-200708231546-5.0.48/mysql-5.0-release
-
- 25 Aug, 2007 1 commit
-
-
unknown authored
into a88-113-38-195.elisa-laajakaista.fi:/home/my/bk/mysql-5.0-marvel
-
- 24 Aug, 2007 9 commits
-
-
unknown authored
into trift2.:/MySQL/M50/push-5.0
netware/Makefile.am: Auto merged
-
unknown authored
1) Ensure "init_db.sql" and "test_db-sql" really get built. 2) Ensure the "*.def" files with NetWare linker options get distributed to the proper directories. netware/BUILD/compile-netware-END: Ensure the "*.def" files are built for NetWare. This is a backport of a 5.1 fix which may not be needed in 5.0 but cannot do any harm: the general "link_sources" step might fall victim to a cleanup which would be fatal just for NetWare, because of problems in the ordering of SUBDIR entries. netware/Makefile.am: 1) The scripts "init_db.sql" and "test_db.sql" must be built in the NetWare phase. 2) Use "basename", not sed.
-
unknown authored
into mysql.com:/data0/mysqldev/my/build-200708231546-5.0.48/mysql-5.0-release
-
unknown authored
into a88-113-38-195.elisa-laajakaista.fi:/home/my/bk/mysql-5.0-marvel
-
unknown authored
into mysql.com:/data0/mysqldev/my/build-200708231546-5.0.48/mysql-5.0-release
sql/sql_base.cc: Auto merged
sql/sql_cache.cc: Auto merged
-
unknown authored
into trift2.:/MySQL/M50/push-5.0
-
unknown authored
into trift2.:/MySQL/M50/push-5.0
-
unknown authored
-
unknown authored
into pippilotta.erinye.com:/shared/home/df/mysql/build/mysql-5.0-build
-
- 23 Aug, 2007 5 commits
-
-
unknown authored
into bk-internal.mysql.com:/users/gshchepa/mysql-5.0-opt
sql/sql_base.cc: Auto merged
sql/sql_cache.cc: Auto merged
-
unknown authored
into trift2.:/MySQL/M50/push-5.0
-
unknown authored
into pippilotta.erinye.com:/shared/home/df/mysql/build/mysql-5.0.48
sql/sql_base.cc: Auto merged
sql/sql_cache.cc: Auto merged
-
unknown authored
into pippilotta.erinye.com:/shared/home/df/mysql/build/mysql-5.0.48
-
unknown authored
since this flag was explicitly removed in pushbuild for GCOV builds:
  BUILD_CMD => ['sh', '-c', 'perl -i.bak -pe "s/ \\\\\$static_link//" ' . 'BUILD/compile-pentium-gcov; BUILD/compile-pentium-gcov'],
Moving $static_link to SETUP.sh broke this, and it is now fixed. Should this flag be needed on some platform, the proper location is compile-<platform>-gcov. Tested that the amd64 and pentium64 builds are fine without it and can run NDB tests.
BUILD/SETUP.sh: Removed $static_link from GCOV builds.
-
- 22 Aug, 2007 8 commits
-
-
unknown authored
1) We do not provide the "isam" table handler in 5.0 and up (different from "myisam"!), so we do not need the ".def" files for the "isam"-specific tools.
2) Use "basename" to get the base name of a file, not a harder-to-read sed expression.
BitKeeper/deleted/.del-isamchk.def: Delete: netware/isamchk.def
BitKeeper/deleted/.del-isamlog.def: Delete: netware/isamlog.def
BitKeeper/deleted/.del-pack_isam.def: Delete: netware/pack_isam.def
netware/Makefile.am: Use a plain "basename" showing the purpose, not a sed command which is harder to read.
-
unknown authored
into weblab.(none):/home/marcsql/TREE/mysql-5.0-30237
-
unknown authored
into gleb.loc:/home/uchum/work/bk/5.0-opt
-
unknown authored
into weblab.(none):/home/marcsql/TREE/mysql-5.0-23062
-
unknown authored
into weblab.(none):/home/marcsql/TREE/mysql-5.0-30237
sql/sql_yacc.yy: Auto merged
-
unknown authored
This is a performance bug, related to the parsing of 'OR' and 'AND' boolean expressions. Let N be the number of expressions involved in an OR (respectively AND).

When N=1
For example, "select 1" involves only 1 term: there is no OR operator. In 4.0 and 4.1, parsing expressions not involving OR had no overhead. In 5.0, parsing adds some overhead, with Select->expr_list. With this patch, the overhead introduced in 5.0 has been removed, so that performance for N=1 should be identical to the 4.0 performance, which is optimal (there is no code executed at all). The overhead in 5.0 was in fact affecting some operations significantly. For example, loading 1 million rows into a table with INSERTs, for a table that has 100 columns, leads to parsing 100 million expressions, which means that the overhead related to Select->expr_list is executed 100 million times. Considering that N=1 is by far the most probable expression, this case should be optimal.

When N=2
For example, "select a OR b" involves 2 terms in the OR operator. In 4.0 and 4.1, parsing expressions involving 2 terms created 1 Item_cond_or node, which is the expected result. In 5.0, parsing these expressions also produced 1 node, but with some extra overhead related to Select->expr_list: creating 1 list in Select->expr_list and another in Item_cond::list is inefficient. With this patch, the overhead introduced in 5.0 has been removed, so that performance for N=2 should be identical to the 4.0 performance. Note that the memory allocation uses the new (thd->mem_root) syntax directly. The cost of "is_cond_or" is estimated to be negligible: the real performance degradation comes from unneeded memory allocations.

When N>=3
For example, "select a OR b OR c ...", which involves 3 or more terms. In 4.0 and 4.1, the parser had no significant cost overhead, but produced an Item tree which is difficult to evaluate / optimize during runtime. In 5.0, the parser produces a better Item tree, using the Item_cond constructor that accepts a list of children directly, but at an extra cost related to Select->expr_list. With this patch, the code takes the best of the two implementations:
- there is no overhead with Select->expr_list
- the Item tree generated is optimized and flattened.
This is achieved by adding child nodes into the Item tree directly, with Item_cond::add(), which avoids the need for temporary lists and memory allocation.
Note that this patch also provides an extra optimization that the previous code in 5.0 did not: expressions are flattened in the Item tree based on what the already-parsed expression is, not on the order in which rules are reduced. For example, "(a OR b) OR c" and "a OR (b OR c)" would both have been represented with 2 Item_cond_or nodes before this patch, and are represented with only 1 node with this patch. The logic used is based on the mathematical properties of the OR operator (it is associative), and produces a simpler tree.
sql/item_cmpfunc.h: Improved performance for parsing boolean expressions
sql/sql_yacc.yy: Improved performance for parsing boolean expressions
mysql-test/r/parser_precedence.result: Added test cases to cover boolean operator precedence
mysql-test/t/parser_precedence.test: Added test cases to cover boolean operator precedence
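The associativity-based flattening can be sketched as follows; this is a simplified stand-in for Item_cond_or / Item_cond::add(), not the actual implementation:

    #include <memory>
    #include <string>
    #include <vector>

    struct Expr
    {
      std::string op;                    // "OR" for a condition node, "" for a leaf
      std::string name;                  // leaf name
      std::vector<std::unique_ptr<Expr>> children;
    };

    std::unique_ptr<Expr> make_leaf(const std::string &n)
    {
      auto e = std::make_unique<Expr>();
      e->name = n;
      return e;
    }

    // Build "left OR right", reusing an existing OR node on the left when
    // possible, so "a OR b OR c" becomes one N-ary node, not a binary chain.
    std::unique_ptr<Expr> make_or(std::unique_ptr<Expr> left,
                                  std::unique_ptr<Expr> right)
    {
      if (left->op == "OR")
      {
        left->children.push_back(std::move(right));   // flatten: add one child
        return left;
      }
      auto node = std::make_unique<Expr>();
      node->op = "OR";
      node->children.push_back(std::move(left));
      node->children.push_back(std::move(right));
      return node;
    }

    int main()
    {
      // (a OR b) OR c -> a single OR node with three children.
      auto tree = make_or(make_or(make_leaf("a"), make_leaf("b")), make_leaf("c"));
      return tree->children.size() == 3 ? 0 : 1;
    }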
-
unknown authored
into hynda.mysql.fi:/home/my/mysql-5.0-marvel
-
unknown authored
Killing a SELECT query with KILL QUERY or KILL CONNECTION causes a server crash if the query cache is enabled.
Normal evaluation of a query may be interrupted by a KILL QUERY/CONNECTION statement; in this case the mysql_execute_command function returns TRUE and the thd->killed flag is set. The result of the query may then be cached incompletely (the call to query_cache_insert inside the net_real_write function is omitted), and the next call to query_cache_end_of_result may lead to a server crash.
Thus, the query_cache_end_of_result function has been modified to abort the query cache in the case of a killed thread.
sql/sql_cache.cc: Fixed bug #30201. The query_cache_end_of_result function has been modified to abort the query cache in the case of query execution failure. It has also been modified to remove the incomplete query block.
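The guard described above amounts to the following sketch (placeholder types and helpers, not the real sql_cache.cc interfaces):

    #include <iostream>

    struct Thd { bool killed; };

    static void query_cache_abort(Thd &)   { /* drop the incomplete query block */ }
    static void store_cached_result(Thd &) { /* link the completed result block */ }

    void query_cache_end_of_result(Thd &thd)
    {
      if (thd.killed)
      {
        query_cache_abort(thd);     // result is incomplete: remove it from the cache
        return;
      }
      store_cached_result(thd);     // normal path: publish the cached result
    }

    int main()
    {
      Thd thd{true};                // query was killed mid-execution
      query_cache_end_of_result(thd);
      std::cout << "cache entry aborted, no crash\n";
      return 0;
    }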
-
- 21 Aug, 2007 1 commit
-
-
unknown authored
into trift2.:/MySQL/M50/push-5.0
-