- 30 May, 2007 1 commit
-
-
anozdrin/alik@ibm. authored
2. Use multibyte-safe constant.
-
- 29 May, 2007 2 commits
-
-
malff/marcsql@weblab.(none) authored
-
anozdrin/alik@ibm. authored
1. Refactor sp_show_create_function() and sp_show_create_procedure() into show_create_routine().
2. Code cleanup: eliminate proxy functions.
-
- 25 May, 2007 1 commit
-
-
malff/marcsql@weblab.(none) authored
The root cause of this bug is related to the function skip_rear_comments, in sql_lex.cc. Recent code changes in skip_rear_comments changed the prototype from "const uchar*" to "const char*", which had an unforeseen impact on this test: (endp[-1] < ' '). With unsigned characters, this code filters bytes of value [0x00 - 0x20]. With *signed* characters, it also filters bytes of value [0x80 - 0xFF]. This caused the regression reported: cyrillic characters in the parameter name were considered whitespace and truncated. Note that the regression is present in both 5.0 and 5.1.

With this fix:
- bytes in [0x80 - 0xFF] are no longer considered whitespace. This alone fixes the regression;
- in addition, filtering [0x00 - 0x20] was found bogus and abusive, so the code now uses my_isspace when looking for whitespace.

Note that this fix only addresses the regression affecting UTF-8 in general, but does not address a more fundamental problem with skip_rear_comments: parsing a string *backwards*, starting at end[-1], is not safe with multi-byte characters, so end[-1] can confuse the last byte of a multi-byte character with a character to filter out. The only known impact of this remaining issue affects objects that meet all the conditions below:
- the object is a FUNCTION / PROCEDURE / TRIGGER / EVENT / VIEW,
- the body consists of only *one* instruction and does *not* contain a BEGIN-END block,
- the instruction ends, lexically, with <ident> <whitespace>* ';'? For example, "select <ident>;" or "return <ident>;",
- the last character of <ident> is a multi-byte character,
- the last byte of this character is ';', '*', '/' or whitespace.
In this case, the body of the object will be truncated after parsing and stored in an invalid format.

This last issue has not been fixed in this patch, since the real fix will be implemented by Bug 25411 (trigger code truncated), which is caused by the very same code. The real problem is that the function skip_rear_comments is only a work-around, and should be removed entirely: see the proposed patch for bug 25411 for details.
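To illustrate the signed-versus-unsigned comparison at the heart of this regression, here is a minimal, self-contained C++ sketch. The helper names are invented for illustration and are not the server's code; whether plain char is signed is platform-dependent, which is exactly why the original test misbehaved on common platforms.

```cpp
#include <cctype>
#include <iostream>

// With a plain (signed) char, bytes >= 0x80 become negative values,
// so a test like (c < ' ') wrongly treats them as "junk to strip".
static bool is_junk_signed(char c) { return c < ' '; }

// Casting to unsigned char (or using an isspace-style check) avoids that.
static bool is_space_safe(char c)
{
  return std::isspace(static_cast<unsigned char>(c)) != 0;
}

int main()
{
  // 0xD0 is the first byte of many UTF-8 encoded Cyrillic characters.
  const char cyrillic_byte = static_cast<char>(0xD0);

  std::cout << std::boolalpha
            << "signed test treats 0xD0 as junk: "
            << is_junk_signed(cyrillic_byte) << '\n'   // true where char is signed
            << "isspace-based test treats 0xD0 as space: "
            << is_space_safe(cyrillic_byte) << '\n';   // false
  return 0;
}
```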
-
- 23 May, 2007 1 commit
-
-
evgen@moonbone.local authored
If a stored function or a trigger was killed, it aborted but no error was thrown, which allowed the calling statement to continue without notice. This may lead to wrong data being inserted, updated or deleted, as in such cases the correct result of a stored function isn't guaranteed. In the case of triggers, it allowed the calling statement to ignore the kill signal and to waste time re-evaluating triggers that will always fail because the thd->killed flag is still on. Now the Item_func_sp::execute() and sp_head::execute_trigger() functions check whether the function or trigger was killed during execution and throw an appropriate error if so. The fill_record() function now stops filling the record if an error was reported through thd->net.report_error.
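As a rough sketch of the control flow described above; all types and names here are hypothetical stand-ins, not the server's THD, Item_func_sp or fill_record() interfaces.

```cpp
// Hypothetical session object standing in for THD.
struct Session {
  bool killed = false;          // set asynchronously by KILL
  bool error_reported = false;  // set when an error has been raised
};

// After running a function/trigger body, surface an error if the session
// was killed meanwhile instead of silently returning "success".
bool execute_routine_body(Session &session)
{
  // ... run the routine instructions here ...
  if (session.killed) {
    session.error_reported = true;  // the calling statement must now stop
    return false;
  }
  return true;
}

// Callers such as a record-filling loop then stop as soon as an error
// has been flagged, rather than re-evaluating routines that cannot succeed.
void fill_records(Session &session, int record_count)
{
  for (int i = 0; i < record_count && !session.error_reported; ++i) {
    if (!execute_routine_body(session))
      break;
  }
}
```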
-
- 16 May, 2007 1 commit
-
-
kostja@vajra.(none) authored
Bug#21483 "Server abort or deadlock on INSERT DELAYED with another implicit insert" Also fixes and adds test cases for bugs: 20497 "Trigger with INSERT DELAYED causes Error 1165" 21714 "Wrong NEW.value and server abort on INSERT DELAYED to a table with a trigger". Post-review fixes. Problem: In MySQL INSERT DELAYED is a way to pipe all inserts into a given table through a dedicated thread. This is necessary for simplistic storage engines like MyISAM, which do not have internal concurrency control or threading and thus can not achieve efficient INSERT throughput without support from SQL layer. DELAYED INSERT works as follows: For every distinct table, which can accept DELAYED inserts and has pending data to insert, a dedicated thread is created to write data to disk. All user connection threads that attempt to delayed-insert into this table interact with the dedicated thread in producer/consumer fashion: all records to-be inserted are pushed into a queue of the dedicated thread, which fetches the records and writes them. In this design, client connection threads never open or lock the delayed insert table. This functionality was introduced in version 3.23 and does not take into account existence of triggers, views, or pre-locking. E.g. if INSERT DELAYED is called from a stored function, which, in turn, is called from another stored function that uses the delayed table, a deadlock can occur, because delayed locking by-passes pre-locking. Besides: * the delayed thread works directly with the subject table through the storage engine API and does not invoke triggers * even if it was patched to invoke triggers, if triggers, in turn, used other tables, the delayed thread would have to open and lock involved tables (use pre-locking). * even if it was patched to use pre-locking, without deadlock detection the delayed thread could easily lock out user connection threads in case when the same table is used both in a trigger and on the right side of the insert query: the delayed thread would not release locks until all inserts are complete, and user connection can not complete inserts without having locks on the tables used on the right side of the query. Solution: These considerations suggest two general alternatives for the future of INSERT DELAYED: * it is considered a full-fledged alternative to normal INSERT * it is regarded as an optimisation that is only relevant for simplistic engines. Since we missed our chance to provide complete support of new features when 5.0 was in development, the first alternative currently renders infeasible. However, even the second alternative, which is to detect new features and convert DELAYED insert into a normal insert, is not easy to implement. The catch-22 is that we don't know if the subject table has triggers or is a view before we open it, and we only open it in the delayed thread. We don't know if the query involves pre-locking until we have opened all tables, and we always first create the delayed thread, and only then open the remaining tables. This patch detects the problematic scenarios and converts DELAYED INSERT to a normal INSERT using the following approach: * if the statement is executed under pre-locking (e.g. from within a stored function or trigger) or the right side may require pre-locking, we detect the situation before creating a delayed insert thread and convert the statement to a conventional INSERT. * if the subject table is a view or has triggers, we shutdown the delayed thread and convert the statement to a conventional INSERT.
-
- 14 May, 2007 1 commit
-
-
mats@romeo.kindahl.net authored
Replaced binlog_row_based_if_mixed with the variable binlog_stmt_flags, holding several flags, and added member functions to manipulate the flags. Added code to generate a warning when an attempt is made to log an unsafe statement to the binary log. The warning is both pushed to SHOW WARNINGS and written to the error log. To prevent flooding the error log, the warning is only written to the error log once per open session.
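A small illustrative sketch of the flag-word-plus-warn-once idea; the names and output are hypothetical and do not mirror the server's binlog_stmt_flags implementation.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical per-session state: a flag word with accessors, plus a
// "warn only once per session in the error log" guard.
class SessionBinlogState {
public:
  enum : std::uint32_t {
    BINLOG_STMT_UNSAFE    = 1u << 0,
    BINLOG_STMT_ROW_BASED = 1u << 1,
  };

  void set_flag(std::uint32_t f)       { flags_ |= f; }
  void clear_flag(std::uint32_t f)     { flags_ &= ~f; }
  bool has_flag(std::uint32_t f) const { return (flags_ & f) != 0; }

  // Push the warning to the client every time, but write to the error log
  // only for the first unsafe statement of this session.
  void warn_unsafe_statement()
  {
    std::cout << "Warning: statement is unsafe for statement-based logging\n";
    if (!unsafe_warning_logged_) {
      std::cerr << "[error log] unsafe statement written to the binary log\n";
      unsafe_warning_logged_ = true;
    }
  }

private:
  std::uint32_t flags_ = 0;
  bool unsafe_warning_logged_ = false;
};
```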
-
- 11 May, 2007 1 commit
-
-
dlenev@mockturtle.local authored
-
- 10 May, 2007 1 commit
-
-
monty@mysql.com/narttu.mysql.fi authored
The following type conversions were done:
- Changed byte to uchar
- Changed gptr to uchar*
- Changed my_string to char *
- Changed my_size_t to size_t
- Changed size_s to size_t
Removed the declarations of byte, gptr, my_string, my_size_t and size_s.

The following function parameter changes were done:
- All string functions in mysys/strings were changed to use size_t instead of uint for string lengths.
- All read()/write() functions changed to use size_t (including vio).
- All protocol functions changed to use size_t instead of uint.
- Functions that used a pointer to a string length were changed to use size_t*.
- Changed malloc(), free() and related functions from using gptr to use void *, as this requires fewer casts in the code and is more in line with how the standard functions work.
- Added an extra length argument to dirname_part() to return the length of the created string.
- Changed (at least) the following functions to take uchar* as argument: db_dump(), my_net_write(), net_write_command(), net_store_data(), DBUG_DUMP(), decimal2bin() & bin2decimal().
- Changed my_compress() and my_uncompress() to use size_t. Changed one argument to my_uncompress() from a pointer to a value, as we only return one value (makes the function easier to use).
- Changed the type of the 'pack_data' argument to packfrm() to avoid casts.
- Changed in readfrm() and writefrm(), ha_discover() and handler::discover() the type of the 'frmdata' argument to uchar** to avoid casts.
- Changed most Field functions to use uchar* instead of char* (reduced a lot of casts).
- Changed field->val_xxx(xxx, new_ptr) to take const pointers.

Other changes:
- Removed a lot of unneeded casts.
- Added a few new casts required by other changes.
- Added some casts to my_multi_malloc() arguments for safety (as string lengths need to be uint, not size_t).
- Fixed all calls to hash-get-key functions to use size_t*. (This needed to be done explicitly, as the conflict was often hidden by casting the function to hash_get_key.)
- Changed some buffers and memory regions to uchar* to avoid casts.
- Changed some string lengths from uint to size_t.
- Changed field->ptr to be uchar* instead of char*. This allowed us to get rid of a lot of casts.
- Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar.
- Included zlib.h in some files, as we needed the declaration of crc32().
- Changed MY_FILE_ERROR to be (size_t) -1.
- Changed many variables that hold the result of my_read() / my_write() to be size_t. This was needed to properly detect errors (which are returned as (size_t) -1).
- Removed some very old VMS code.
- Changed packfrm()/unpackfrm() to not depend on uint size (portability fix).
- Removed Windows-specific code to restore the cursor position, as this causes a slowdown on Windows and we should not mix read() and pread() calls anyway, since that is not thread safe. Updated the function comment to reflect this. Changed the function that depended on the original behaviour of my_pwrite() to restore the cursor position itself (one such case).
- Added some missing checking of the return value of malloc().
- Changed the definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow.
- Changed the type of table_def::m_size from my_size_t to ulong to reflect that m_size is the number of elements in the array, not a string/memory length.
- Moved THD::max_row_length() to table.cc (as it does not depend on THD). Inlined max_row_length_blob() into this function.
- More function comments.
- Fixed some compiler warnings when compiled without partitions.
- Removed setting of LEX_STRING() arguments in declarations (portability fix).
- Some trivial indentation/variable name changes.
- Some trivial code simplifications:
  - Replaced some calls to alloc_root + memcpy with strmake_root()/strdup_root().
  - Changed some calls from memdup() to strmake() (safety fix).
  - Simpler loops in client-simple.c.
-
- 07 May, 2007 1 commit
-
-
thek@adventure.(none) authored
- In some cases, flow control optimization implemented in sp::optimize removes hreturn instructions, causing SQL exception handlers to:
  * never return
  * execute wrong logic
- This patch overrides the default short cut optimization on hreturn instructions to avoid this problem.
-
- 26 Apr, 2007 1 commit
-
-
malff/marcsql@weblab.(none) authored
-
- 24 Apr, 2007 1 commit
-
-
malff/marcsql@weblab.(none) authored
The issue found with bug 25411 is due to the function skip_rear_comments(), which damages the source code while implementing a work-around. The root cause of the problem is in the lexical analyser, which does not process special comments properly. For special comments like:
[1] aaa /*!50000 bbb */ ccc
since 5.0 is a version older than the current code, the parser is inlining the content of the special comment, so that the query to process is:
[2] aaa bbb ccc
However, the text of the query captured when processing a stored procedure, stored function or trigger (or event in 5.1) can be, after rebuilding it:
[3] aaa bbb */ ccc
which is wrong.

To fix bug 25411 properly, the lexical analyser needs to return [2] when inlining special comments. In order to implement this, some preliminary cleanup is required in the code, which is implemented by this patch. Before this change, the structure named LEX (or st_lex) contained attributes that belong to lexical analysis, as well as attributes that represent the abstract syntax tree (AST) of a statement. Creating a new LEX structure for each statement (which makes sense for the AST part) also re-initialized the lexical analysis phase each time, which is conceptually wrong.

With this patch, the previous st_lex structure has been split in two:
- st_lex represents the Abstract Syntax Tree for a statement. The name "lex" has not been changed, to avoid a bigger impact on the code base.
- class lex_input_stream represents the internal state of the lexical analyser, which by definition should *not* be reinitialized when parsing multiple statements from the same input stream.

This change is a pre-requisite for bug 25411, since the implementation of lex_input_stream will later be improved to deal properly with special comments, and this processing cannot be done with the current implementation of sp_head::reset_lex and sp_head::restore_lex, which interfere with the lexer. This change set alone does not fix bug 25411.
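A schematic sketch of the split, with invented class names standing in for st_lex and lex_input_stream: the per-statement object is recreated for every statement, while the input-stream object keeps its scanner position across statements.

```cpp
#include <cstddef>
#include <string>
#include <utility>

// Plays the role of st_lex / the AST: one instance per parsed statement.
struct StatementTree {
  // ... parse-tree members for one statement ...
  void reset_for_new_statement() { /* reinitialise per-statement state */ }
};

// Plays the role of the lexer input state: lives as long as the input.
class InputStream {
public:
  explicit InputStream(std::string text) : buffer_(std::move(text)) {}
  bool eof() const { return pos_ >= buffer_.size(); }
  char next() { return buffer_[pos_++]; }   // scanner position persists
private:
  std::string buffer_;
  std::size_t pos_ = 0;                     // NOT reset when a new statement begins
};

void parse_all(InputStream &in)
{
  while (!in.eof()) {
    StatementTree tree;                     // fresh tree for every statement
    tree.reset_for_new_statement();
    // ... drive the parser with `in` and fill `tree`, leaving the stream
    //     position wherever the statement ended ...
    (void)in.next();                        // placeholder so the sketch runs and terminates
  }
}
```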
-
- 13 Apr, 2007 1 commit
-
-
kostja@vajra.(none) authored
Streamline the event worker thread workflow and try to eliminate possibilities for memory corruption that might have been lurking in the previous (complicated) code. This patch:
* removes Event_job_data::compile, which was never used;
* cleans up Event_job_data::execute to minimize juggling with thread context and eliminate unneeded code paths;
* implements Security_context::change/restore_security_context so that these methods can be re-used in all stored programs.
This is also an attempt to fix Bug#27733 "Valgrind failures in remove_table_from_cache". Review comments applied.
-
- 06 Apr, 2007 1 commit
-
-
anozdrin/alik@ibm. authored
-
- 05 Apr, 2007 2 commits
-
-
kostja@vajra.(none) authored
-
kostja@vajra.(none) authored
when there are no up-to-date system tables to support it:
- initialize the scheduler before reporting "Ready for connections". This ensures that warnings, if any, are printed before "Ready for connections", and that this message is not mangled;
- do not abort the scheduler if there are no system tables;
- check the tables once at startup, remember the status and disable the scheduler if the tables are not up to date. If one attempts to use the scheduler with bad tables, issue an error message;
- clean up the behaviour of the module under LOCK TABLES and pre-locking mode;
- make sure implicit commit of Events DDL works as expected;
- add more tests.
Collateral clean-ups in the events code. This patch fixes Bug#23631 "Events: SHOW VARIABLES doesn't work when mysql.event is damaged".
-
- 03 Apr, 2007 1 commit
-
-
gluh@mysql.com/eagle.(none) authored
-
- 27 Mar, 2007 2 commits
-
-
anozdrin/alik@alik.opbmk authored
execution breaks replication. When a stored routine is executed, we switch the current database to the database in which the routine was created. When the stored routine finishes, we switch back to the original database. The problem was that if the original database did not exist (anymore) after routine execution, we raised an error. The fix is to report a warning and switch to the NULL database.
-
kostja@bodhi.local authored
the lexer API which internally uses unsigned char variables to address its state map. The implementation of the lexer should be internal to the lexer, and not influence the rest of the code.
-
- 23 Mar, 2007 1 commit
-
-
aelkin/elkin@andrepl.(none) authored
thd->options' OPTION_STATUS_NO_TRANS_UPDATE bit was not restored at the end of SF() invocation, where SF() modified non-ta table. As the result of this artifact it was not possible to detect whether there were any side-effects when top-level query ends. If the top level query table was not modified and the bit is lost there would be no binlogging. Fixed with preserving the bit inside of thd->no_trans_update struct. The struct agregates two bool flags telling whether the current query and the current transaction modified any non-ta table. The flags stmt, all are dropped at the end of the query and the transaction.
-
- 15 Mar, 2007 1 commit
-
-
dlenev@mockturtle.local authored
TABLE ... WRITE". Memory and CPU hogging occured when connection which had to wait for table lock was serviced by thread which previously serviced connection that was killed (note that connections can reuse threads if thread cache is enabled). One possible scenario which exposed this problem was when thread which provided binlog dump to replication slave was implicitly/automatically killed when the same slave reconnected and started pulling data through different thread/connection. The problem also occured when one killed particular query in connection (using KILL QUERY) and later this connection had to wait for some table lock. This problem was caused by the fact that thread-specific mysys_var::abort variable, which indicates that waiting operations on mysys layer should be aborted (this includes waiting for table locks), was set by kill operation but was never reset back. So this value was "inherited" by the following statements or even other connections (which reused the same physical thread). Such discrepancy between this variable and THD::killed flag broke logic on SQL-layer and caused CPU and memory hogging. This patch tries to fix this problem by properly resetting this member. There is no test-case associated with this patch since it is hard to test for memory/CPU hogging conditions in our test-suite.
-
- 14 Mar, 2007 1 commit
-
-
malff/marcsql@weblab.(none) authored
Before this fix, the parser would accept illegal code in SQL exception handlers, which later caused the runtime to crash when executing the code, due to memory violations in the exception handler stack. The root cause of the problem is instructions within an exception handler that jump to code located outside of the handler. This is illegal according to the SQL 2003 standard, since labels located outside the handler are not supposed to be visible (they are "out of scope"), so any instruction that jumps to these labels, like ITERATE or LEAVE, should not parse.

The section of the standard that is relevant for this is SQL:2003 SQL/PSM (ISO/IEC 9075-4:2003), section 13.1 <compound statement>, syntax rule 4:
<quote>
The scope of the <beginning label> is CS excluding every <SQL schema statement> contained in CS and excluding every <local handler declaration list> contained in CS. <beginning label> shall not be equivalent to any other <beginning label>s within that scope.
</quote>

With this fix, the C++ class sp_pcontext, which represents the "parsing context" tree (a.k.a. symbol table) of a stored procedure, has been changed as follows:
- constructors have been cleaned up, so that only building a root node for the tree is public; building nodes inside a tree is not public,
- a new member, m_label_scope, indicates whether a given syntactic context belongs to a DECLARE HANDLER block,
- label resolution, in the method find_label(), has been changed to implement the restriction of scope regarding labels used in a compound statement.

The actions in the parser, when parsing the body of a SQL exception handler, have been changed as follows:
- the implementation of an exception handler (DECLARE HANDLER) now explicitly creates a new sp_pcontext, to isolate the code inside the handler from the containing compound statement context,
- registering exception handlers as a result occurs in the parent context, see the rule sp_hcond_element,
- the code in sp_hcond_list has been cleaned up, to avoid code duplication.

In addition, the flags IN_SIMPLE_CASE and IN_HANDLER, declared in sp_head.h, have been removed, since they are unused and broken by design (as seen with Bug 19194, "Right recursion in parser for CASE causes excessive stack usage, limitation"): representing a stack in a single flag is not possible.

Tests in sp-error have been added to show that illegal constructs are now rejected. Tests in sp have been added for code coverage, to show that ITERATE or LEAVE statements are legal when jumping to a label in scope, inside the body of an exception handler.
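The scope restriction on labels can be illustrated with a small self-contained sketch of a context tree whose handler nodes act as lookup boundaries; the classes here are hypothetical, not sp_pcontext itself.

```cpp
#include <string>
#include <vector>

// Hypothetical scoped label resolution: contexts form a tree, and a context
// opened for a DECLARE ... HANDLER body acts as a scope boundary, so labels
// of the enclosing compound statement are not visible from inside it.
struct ParseContext {
  ParseContext *parent = nullptr;
  bool is_handler_scope = false;        // true for DECLARE HANDLER bodies
  std::vector<std::string> labels;      // labels declared in this context

  const std::string *find_label(const std::string &name) const
  {
    for (const ParseContext *ctx = this; ctx != nullptr; ctx = ctx->parent) {
      for (const std::string &label : ctx->labels)
        if (label == name)
          return &label;                // found in a visible scope
      if (ctx->is_handler_scope)
        return nullptr;                 // do not look past the handler boundary
    }
    return nullptr;                     // unknown label: reject at parse time
  }
};
```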
-
- 07 Mar, 2007 1 commit
-
-
malff/marcsql@weblab.(none) authored
Bug 8407, post-review cleanup: use instr::get_cont_dest() to get the instruction continuation destination, for CONTINUE exception handlers.
-
- 06 Mar, 2007 1 commit
-
-
malff/marcsql@weblab.(none) authored
Bug 18914 (Calling certain SPs from triggers fail)
Bug 20713 (Functions will not continue for SQLSTATE VALUE '42S02')
Bug 21825 (Incorrect message error deleting records in a table with a trigger for inserting)
Bug 22580 (DROP TABLE in nested stored procedure causes strange dependency error)
Bug 25345 (Cursors from Functions)

This fix resolves a long-standing issue originally reported with bug 8407, which affects the behavior of Stored Procedures, Stored Functions and Triggers in many different ways, causing the symptoms reported by all the bugs listed. In all cases, the root cause of the problem traces back to 8407 and how the server locks tables involved with sub-statements.

Prior to this fix, the implementation of stored routines would:
- compute the transitive closure of all the tables referenced by a top-level statement,
- open and lock all the tables involved,
- execute the top-level statement.
"Transitive closure of tables" means collecting:
- all the tables,
- all the stored functions,
- all the views,
- all the table triggers,
- all the stored procedures
involved, and recursively inspecting these objects' definitions to find more references to more objects, until the list of every object referenced does not grow any more. This mechanism is known as "pre-locking" tables before execution. The motivation for locking all the tables (possibly) used at once is to prevent deadlocks.

One problem with this approach is that, if the execution path the code really takes during runtime does not use a given table, and if the table is missing, the server would not execute the statement. This in particular has a major impact on triggers, since a missing table referenced by an update/delete trigger would prevent an insert trigger from running. Another problem is that stored routines might define SQL exception handlers to deal with missing tables, but the server implementation would never give user code a chance to execute this logic, since the routine is never executed when a missing table causes the pre-locking code to fail.

With this fix, the internal implementation of the pre-locking code has been relaxed of some constraints, so that failure to open a table does not necessarily prevent execution of a stored routine. In particular, the pre-locking mechanism now behaves as follows:
1) The first step, to compute the transitive closure of all the tables possibly referenced by a statement, is unchanged.
2) The next step, which is to open all the tables involved, only attempts to open the tables added by the pre-locking code, but silently fails without reporting any error or invoking any exception handler if the table is not present. This is achieved by trapping internal errors with Prelock_error_handler.
3) The locking step only locks tables that were successfully opened.
4) When executing sub-statements, the list of tables used by each statement is evaluated as before. The tables needed by the sub-statement are expected to be already opened and locked. Statements referencing tables that were not opened in step 2) will fail to find the table in the open list, and only at this point will execution of the user code fail.
5) When a runtime exception is raised at 4), the instruction continuation destination (the next instruction to execute in case of SQL continue handlers) is evaluated. This is achieved with sp_instr::exec_open_and_lock_tables().
6) If a user exception handler is present in the stored routine, that handler is invoked as usual, so that ER_NO_SUCH_TABLE exceptions can be trapped by stored routines. If no handler exists, then the runtime execution will fail as expected.

With all these changes, a side effect is that view security is impacted, in two different ways. First, a view defined as "select stored_function()", where the stored function references a table that may not exist, is considered valid. The rationale is that, because the stored function might trap exceptions during execution and still return a valid result, there is no way to decide, when the view is created, whether a missing table really causes the view to be invalid. Secondly, testing for the existence of tables is now done later during execution. View security, which consists of trapping errors and returning a generic ER_VIEW_INVALID (to prevent disclosing information), was only implemented at very specific phases covering *opening* tables, but not covering runtime execution. Because of this existing limitation, errors that were previously trapped and converted into ER_VIEW_INVALID are no longer trapped, causing table names to be reported to the user. This change exposes an existing problem, which is independent and will be resolved separately.
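A self-contained sketch of the error-trapping idea used in step 2): the class below only mirrors the role of Prelock_error_handler, it is not the server's interface, and the table lookup is faked for illustration.

```cpp
#include <string>
#include <vector>

constexpr int ER_NO_SUCH_TABLE = 1146;  // "Table ... doesn't exist"

// Hypothetical error handler: swallows "no such table" while the pre-locked
// set is being opened, and remembers which tables were missing.
class PrelockErrorHandler {
public:
  // Return true to say "this error is handled, do not report it".
  bool handle_error(int sql_errno, const std::string &table_name)
  {
    if (sql_errno == ER_NO_SUCH_TABLE) {
      missing_tables_.push_back(table_name);  // remember, but keep going
      return true;
    }
    return false;                             // anything else is still fatal
  }

  bool safely_trapped_errors() const { return !missing_tables_.empty(); }

private:
  std::vector<std::string> missing_tables_;
};

// Opening the pre-locked table set: a missing table added by pre-locking is
// silently skipped; only the tables that actually opened get locked later.
bool open_prelocked_tables(const std::vector<std::string> &tables,
                           PrelockErrorHandler &handler,
                           std::vector<std::string> &opened)
{
  for (const std::string &name : tables) {
    bool exists = !name.empty();              // stand-in for a dictionary lookup
    if (exists)
      opened.push_back(name);
    else if (!handler.handle_error(ER_NO_SUCH_TABLE, name))
      return false;                           // unhandled error: abort opening
  }
  return true;
}
```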
-
- 27 Feb, 2007 2 commits
-
-
cbell/Chuck@mysql_cab_desk. authored
SF/Triggers in SBR mode." BUG#14914 "SP: Uses of session variables in routines are not always replicated" BUG#25167 "Dupl. usage of user-variables in trigger/function is not replicated correctly" This patch corrects a minor error in the previous patch for BUG#20141. This patch corrects an errant code change to sp_head.cc. The comments for the first patch follow: User-defined variables used inside of stored functions/triggers in statements which did not update tables directly were not replicated. We also had problems with replication of user-defined variables which were used in triggers (or stored functions called from table-updating statements) more than once. This patch addresses the first issue by enabling logging of all references to user-defined variables in triggers/stored functions and not only references from table-updating statements. The second issue stemmed from the fact that for user-defined variables used from triggers or stored functions called from table-updating statements we were writing binlog events for each reference instead of only one event for the first reference. This problem is already solved for stored functions called from non-updating statements with help of "event unioning" mechanism. So the patch simply extends this mechanism to the case affected. It also fixes small problem in this mechanism which caused wrong logging of references to user-variables in cases when non-updating statement called several stored functions which used the same variable and some of these function calls were omitted from binlog as they were not updating any tables.
-
cbell/Chuck@mysql_cab_desk. authored
SF/Triggers in SBR mode." BUG#14914 "SP: Uses of session variables in routines are not always replicated" BUG#25167 "Dupl. usage of user-variables in trigger/function is not replicated correctly" This patch corrects a minor error in the previous patch for BUG#20141. This patch corrects an errant code change to sp_head.cc. The comments for the first patch follow: User-defined variables used inside of stored functions/triggers in statements which did not update tables directly were not replicated. We also had problems with replication of user-defined variables which were used in triggers (or stored functions called from table-updating statements) more than once. This patch addresses the first issue by enabling logging of all references to user-defined variables in triggers/stored functions and not only references from table-updating statements. The second issue stemmed from the fact that for user-defined variables used from triggers or stored functions called from table-updating statements we were writing binlog events for each reference instead of only one event for the first reference. This problem is already solved for stored functions called from non-updating statements with help of "event unioning" mechanism. So the patch simply extends this mechanism to the case affected. It also fixes small problem in this mechanism which caused wrong logging of references to user-variables in cases when non-updating statement called several stored functions which used the same variable and some of these function calls were omitted from binlog as they were not updating any tables.
-
- 26 Feb, 2007 1 commit
-
-
cbell/Chuck@mysql_cab_desk. authored
SF/Triggers in SBR mode." BUG#14914 "SP: Uses of session variables in routines are not always replicated" BUG#25167 "Dupl. usage of user-variables in trigger/function is not replicated correctly" User-defined variables used inside of stored functions/triggers in statements which did not update tables directly were not replicated. We also had problems with replication of user-defined variables which were used in triggers (or stored functions called from table-updating statements) more than once. This patch addresses the first issue by enabling logging of all references to user-defined variables in triggers/stored functions and not only references from table-updating statements. The second issue stemmed from the fact that for user-defined variables used from triggers or stored functions called from table-updating statements we were writing binlog events for each reference instead of only one event for the first reference. This problem is already solved for stored functions called from non-updating statements with help of "event unioning" mechanism. So the patch simply extends this mechanism to the case affected. It also fixes small problem in this mechanism which caused wrong logging of references to user-variables in cases when non-updating statement called several stored functions which used the same variable and some of these function calls were omitted from binlog as they were not updating any tables.
-
- 23 Feb, 2007 1 commit
-
-
cbell/Chuck@mysql_cab_desk. authored
Triggers in SBR mode." BUG#14914 "SP: Uses of session variables in routines are not always replicated" BUG#25167 "Dupl. usage of user-variables in trigger/function is not replicated correctly" User-defined variables used inside of stored functions/triggers in statements which did not update tables directly were not replicated. We also had problems with replication of user-defined variables which were used in triggers (or stored functions called from table-updating statements) more than once. This patch addresses the first issue by enabling logging of all references to user-defined variables in triggers/stored functions and not only references from table-updating statements. The second issue stemmed from the fact that for user-defined variables used from triggers or stored functions called from table-updating statements we were writing binlog events for each reference instead of only one event for the first reference. This problem is already solved for stored functions called from non-updating statements with help of "event unioning" mechanism. So the patch simply extends this mechanism to the case affected. It also fixes small problem in this mechanism which caused wrong logging of references to user-variables in cases when non-updating statement called several stored functions which used the same variable and some of these function calls were omitted from binlog as they were not updating any tables.
-
- 06 Feb, 2007 1 commit
-
-
malff/marcsql@weblab.(none) authored
Before this change, local variables in stored procedures, stored functions or triggers, when declared with a type of bit(N), would not evaluate their value properly. The problem was that the data was incorrectly typed as a string, causing for example bit b'1', implemented as a byte 0x01, to be interpreted as a string starting with the character 0x01. This later caused implicit conversions to integers or booleans to fail. The root cause of this problem was an incorrect translation between field types, like bit(N), and the internal types used when representing values in Item objects. Also, before this change, the function HEX() would sometimes print extra "0" characters when invoked with bit(N) values.

With this fix, the type translation (sp_map_result_type, sp_map_item_type) has been changed so that bit(N) fields are represented with integer values. A consequence is that, for the function HEX(), when called with a stored procedure local variable of type bit(N) as argument, HEX() is provided with an integer instead of a string, and therefore does not print "0" padding. A test case for Bug 12976 was present in the test suite, and has been updated.
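Conceptually, the fix is a change in a type-mapping table; the sketch below is hypothetical and does not reproduce the actual sp_map_result_type code.

```cpp
// Hypothetical type-mapping: when a routine variable is declared as BIT(N),
// map it to an integer item/result type rather than a string, so comparisons
// and HEX() behave as they do for numbers.
enum class FieldType  { String, Bit, Int, Decimal, Real };
enum class ResultType { StringResult, IntResult, DecimalResult, RealResult };

ResultType map_result_type(FieldType field_type)
{
  switch (field_type) {
    case FieldType::Bit:                   // before the fix this effectively
    case FieldType::Int:                   // fell back to StringResult
      return ResultType::IntResult;
    case FieldType::Decimal:
      return ResultType::DecimalResult;
    case FieldType::Real:
      return ResultType::RealResult;
    case FieldType::String:
    default:
      return ResultType::StringResult;
  }
}
```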
-
- 28 Jan, 2007 1 commit
-
-
monty@mysql.com/narttu.mysql.fi authored
Removed a lot of compiler warnings
Removed not used variables, functions and labels
Initialized some variables that could be used uninitialized (fatal bugs)
%ll -> %l
-
- 08 Jan, 2007 1 commit
-
-
guilhem@gbichot3.local authored
correctly in some cases". In short, calls to a stored function located in another database than the default database, may fail to replicate if the call was made by SET, SELECT, or DO. Longer: when a stored function is called from a statement which does not go to binlog ("SET @A=somedb.myfunc()", "SELECT somedb.myfunc()", "DO somedb.myfunc()"), this crafted statement is binlogged: "SELECT myfunc();" (accompanied with a mention of the default database if there is one). So, if "somedb" is not the default database, the slave would fail to find myfunc(). The fix is to specify the function's database name in the crafted binlogged statement, like this: "SELECT somedb.myfunc();". Test added in rpl_sp.test.
-
- 23 Dec, 2006 1 commit
-
-
kent@mysql.com/kent-amd64.(none) authored
Changed header to GPL version 2 only
-
- 14 Dec, 2006 1 commit
-
-
monty@mysql.com/narttu.mysql.fi authored
- Removed not used variables and functions
- Added #ifdef around code that is not used
- Renamed variables and functions to avoid conflicts
- Removed some not used arguments
Fixed some class/struct warnings in ndb
Added define IS_LONGDATA() to simplify code in libmysql.c
I ran gcov on the changes and added 'purecov' comments on almost all lines that were not just variable name changes.
-
- 07 Dec, 2006 1 commit
-
-
cbell/Chuck@suse.vabb.com authored
Please see worklog for details on files changed.
-
- 30 Nov, 2006 2 commits
-
-
monty@mysql.com/narttu.mysql.fi authored
Fixed compiler warnings (detected by VC++):
- Removed not used variables
- Added casts
- Fixed wrong assignments to bool
- Fixed wrong calls with bool arguments
- Added missing argument to store(longlong), which caused wrong store method to be called.
-
monty@mysql.com/narttu.mysql.fi authored
- Removed not used variables
- Changed some ulong parameters/variables to ulonglong (possible serious bug)
- Added casts to get rid of safe assignment from longlong to long (and similar)
- Added casts to function parameters
- Fixed signed/unsigned compares
- Added some constructors to structures
- Removed some not portable constructs
Better fix for Bug#21428 "skipped 9 bytes from file: socket (3)" on "mysqladmin shutdown" (added a new parameter to net_clear() to define when we want the communication buffer to be emptied).
-
- 26 Nov, 2006 1 commit
-
-
monty@mysql.com/nosik.monty.fi authored
Added missing DBUG_RETURN statements (in mysqldump.c)
Added missing enums
Fixed a lot of wrong DBUG_PRINT() statements, some of which could cause crashes
Removed usage of %lld and %p in printf strings, as these are not portable or produce different results on different systems.
-
- 17 Nov, 2006 1 commit
-
-
malff/marcsql@weblab.(none) authored
limitation)

Note to the reviewer
====================
Warning: reviewing this patch is somewhat involved. Due to the nature of several issues all affecting the same area, fixing each issue separately is not practical, since each fix cannot be implemented and tested independently. In particular, the issues with
- rule recursion,
- nested case statements,
- forward jump resolution (backpatch list)
are tightly coupled (see below).

Definitions
===========
The expression CASE expr WHEN expr THEN expr WHEN expr THEN expr ... END is a "Simple Case Expression".
The expression CASE WHEN expr THEN expr WHEN expr THEN expr ... END is a "Searched Case Expression".
The statement CASE expr WHEN expr THEN stmts WHEN expr THEN stmts ... END CASE is a "Simple Case Statement".
The statement CASE WHEN expr THEN stmts WHEN expr THEN stmts ... END CASE is a "Searched Case Statement".
A "Left Recursive" rule is like: list: element | list element ;
A "Right Recursive" rule is like: list: element | element list ;
Left and right recursion produce the same language; the difference only affects the *order* in which the text is parsed. In a recursive-descent parser (usually written manually), right recursion works very well, and is typically implemented with a while loop. In a bottom-up parser (yacc/bison), left recursion works very well, and is implemented naturally by the parser stack. In both cases, using the wrong type of recursion is very bad and should be avoided, as it causes technical issues with the parser implementation.

Before this change
==================
The "Simple Case Expression" and "Searched Case Expression" were both implemented by the "when_list" and "when_list2" rules, which are left recursive (ok). These rules, however, used lex->when_list instead of using the parser stack, which is more complex than necessary, and potentially dangerous because of other rules using THD::reset_lex.

The "Simple Case Statement" and "Searched Case Statement" were implemented by the "sp_case", "sp_whens" and in part by the "sp_proc_stmt" rules. Both cases were right recursive (bad). The grammar involved was convoluted, and is assumed to be the result of tweaks to get the code generation to work, but is not what someone would naturally write. In addition, using a common rule for both "Simple" and "Searched" case statements was implemented with sp_head::m_flags |= IN_SIMPLE_CASE, which is a flag and not a stack, and therefore does not take into account *nested* case statements. This leads to incorrect generated code, and either a server crash or an incorrect result.

With regard to the backpatch mechanism, a *different* backpatch list was created for each jump from "WHEN expr THEN stmt" to "END CASE", which relied on the grammar being right recursive. This is a mis-use of the backpatch list, since such a list can resolve multiple references to the same target at once.

The optimizer algorithm used to detect dead code in the "assembly" SQL instructions, implemented by sp_head::opt_mark(uint ip), was recursive in some cases (a conditional jump pointing forward to another conditional jump). In the case of specially crafted code, like
- a long list of "IF expr THEN stmt END IF",
- a long CASE statement,
this would actually cause a server crash with a stack overflow. In general, having a stack that grows proportionally with user data (the SQL code given by the client in a CREATE PROCEDURE) is to be avoided.

In debug builds only, creating a SP / SF / Trigger which had a significant amount of code would spend --literally-- several minutes in sp_head::create, because of the debug code involved with DBUG_PRINT("info", ("Code %s ... There are several issues with this code:
- in a CASE with 5 000 WHEN, there are 15 000 instructions generated, which create a string representation of the code that is 500 000 bytes long,
- using a String instead of an io stream causes performance to degrade to a total server freeze, as time is spent doing realloc of a buffer that is always too short,
- printing a 500 000 character string in the debug log is too verbose,
- generating this string even when DBUG_PRINT is off is useless,
- having code that can potentially affect server behavior, guarded with #ifdef / #endif, is useful in some cases, but is also a bad practice.

After this change
=================
"Case Expressions" (both simple and searched) have been simplified to not use LEX::when_list, which has been removed.

Considering all the issues affecting case statements, the grammar for these has been totally rewritten. The existing actions, used to generate "assembly" sp_instr* code, have been preserved but moved into the new grammar, with the following changes:
a) Bison rules are no longer shared between "Simple" and "Searched" case statements, because a stack instead of a flag is required to handle them. Nested statements are handled naturally by the parser stack, which by definition uses the correct rule in the correct context. Nested statements of the opposite type (simple vs searched) work correctly. The flag sp_head::IN_SIMPLE_CASE is no longer used. This is a step towards the resolution of WL#2999, which correctly identified that temporary parsing flags do not belong in sp_head. The code in the actions is shared by means of the case_stmt_action_xxx() helpers.
b) The backpatch mechanism, used to resolve forward jumps in the generated code, has been changed to:
- create a label for the instruction following 'END CASE',
- register each jump at the end of a "WHEN expr THEN stmt" in a *unique* backpatch list associated with the 'END CASE' label,
- resolve all the forward jumps for this label at once.
In addition, the code involving backpatch has been commented, so that a reader can now understand, by reading matching "Registering" and "Resolving" comments, how the forward jumps are resolved and what target they resolve to, as this is far from evident when reading the code alone.

The implementation of sp_head::opt_mark() has been revised to avoid recursive calls from jump instructions, and instead adds the jump location to the list of paths to explore during the flow analysis of the instruction graph, with a call to sp_head::add_mark_lead(). In addition, the flow analysis will stop if an instruction has already been marked as reachable, which the previous code failed to do in the recursive case. sp_head::opt_mark() is now private, to prevent new calls to this method from being introduced.

The debug code present in sp_head::create() has been removed. Considering that SHOW PROCEDURE CODE is also available in debug builds, and can be used anytime regardless of the trace level, as opposed to "CREATE PROCEDURE" time and only if the trace was on, removing the code actually makes debugging easier (usable trace).

Tests have been written to cover the parser overflow (big CASE), and to cover nested CASE statements.
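The non-recursive flow analysis can be illustrated with a small self-contained sketch that replaces recursion with an explicit worklist of "leads", in the spirit of sp_head::add_mark_lead(); all types here are invented for illustration.

```cpp
#include <vector>

// Hypothetical dead-code marking: instead of recursing from jump to jump
// (which can overflow the C stack on a very long generated instruction list),
// keep an explicit worklist of positions still to explore, and stop when an
// instruction has already been marked reachable.
struct Instruction {
  int  jump_target = -1;    // -1: falls through only; otherwise may jump there
  bool unconditional = false;
  bool marked = false;      // reachable?
};

void mark_reachable(std::vector<Instruction> &code, int entry)
{
  std::vector<int> leads{entry};            // positions still to explore
  while (!leads.empty()) {
    int ip = leads.back();
    leads.pop_back();
    while (ip >= 0 && ip < static_cast<int>(code.size()) && !code[ip].marked) {
      Instruction &ins = code[ip];
      ins.marked = true;
      if (ins.jump_target >= 0)
        leads.push_back(ins.jump_target);   // explore the branch later, not recursively
      if (ins.unconditional)
        break;                              // nothing falls through an unconditional jump
      ++ip;                                 // fall through to the next instruction
    }
  }
}
```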
-
- 09 Nov, 2006 1 commit
-
-
Problem: when embedding a character string with an introducer of charset X into an SQL query which is generally in character set Y, the string constants were escaped according to their own character set (i.e. X); then, after reading such a "mixed" query from the binlog, the string constants were unescaped using the character set of the query (i.e. Y) instead of X, which gave wrong results or even syntax errors with tricky charsets (e.g. sjis).
Fix: when embedding a string constant of charset X into a query of charset Y, the string constant is now escaped according to character set Y, instead of its own character set X.
-
- 01 Nov, 2006 1 commit
-
-
monty@mysql.com/nosik.monty.fi authored
Fixed some possible fatal wrong arguments to printf() style functions
Initialized some uninitialized variables
Fixed a bug in stored procedures and continue handlers (fixes Bug#22150)
-