- 15 Feb, 2010 1 commit
-
-
Georgi Kodinov authored
Fixed a compilation warning.
-
- 13 Feb, 2010 1 commit
-
-
Davi Arnaut authored
This bug is just one facet of stored routines not being able to detect changes in meta-data (WL#4179). This particular problem can be triggered within a single session due to the improper management of the pre-locking list if the view is expanded after the pre-locking list is calculated. Since the overall solution for the meta-data change detection issue is planned for a later release, a workaround is used for now to fix this particular aspect, which only involves a single session. The workaround is to flush the thread-local stored routine cache every time a view is created or modified, causing locally cached routines to be re-evaluated upon invocation.
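A minimal sketch of the single-session scenario this workaround targets; the table, view, and procedure names are illustrative, not taken from the original bug report:

    CREATE TABLE t1 (a INT);
    CREATE VIEW v1 AS SELECT a FROM t1;
    CREATE PROCEDURE p1() SELECT * FROM v1;
    CALL p1();  -- caches p1 against the current definition of v1
    ALTER VIEW v1 AS SELECT a + 1 AS a FROM t1;
    CALL p1();  -- the flushed routine cache forces re-evaluation, so the
                -- new view definition is picked up within the same session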
-
- 28 Jan, 2010 1 commit
-
-
Davi Arnaut authored
The problem was that a DROP TRIGGER statement inside a stored procedure could cause a crash in subsequent invocations. This was due to the addition, on the first execution, of a temporary table reference to the stored procedure query table list. In a subsequent invocation, there would be an attempt to reinitialize the temporary table reference, which by then was already gone. The solution is to back up and reset the query table list each time a trigger needs to be dropped. This ensures that any temporary changes to the query table list are discarded. It is safe to do so at this time because DROP TRIGGER is restricted from more complicated scenarios (i.e., it is not allowed within stored functions, etc.).
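A hedged sketch of the crash scenario described above; object names are hypothetical:

    CREATE TABLE t1 (a INT);
    CREATE TRIGGER trg1 BEFORE INSERT ON t1 FOR EACH ROW SET @x = 1;
    CREATE PROCEDURE p1() DROP TRIGGER trg1;
    CALL p1();  -- first execution adds a temporary table reference
                -- to the stored procedure query table list
    CREATE TRIGGER trg1 BEFORE INSERT ON t1 FOR EACH ROW SET @x = 1;
    CALL p1();  -- before the fix, reinitializing the stale reference could crash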
-
- 12 Feb, 2010 13 commits
-
-
Joerg Bruehe authored
-
Joerg Bruehe authored
-
Joerg Bruehe authored
-
Joerg Bruehe authored
-
Sergey Vojtovich authored
-
Sergey Vojtovich authored
-
Sergey Vojtovich authored
-
Sergey Vojtovich authored
Server crashes when accessing an ARCHIVE table with a missing .ARZ file. When opening a table, ARCHIVE didn't properly pass through the error code from the lower-level azopen() to the higher-level open() method.
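A sketch of the failure mode, assuming a table t1 in schema test; the data file is removed at the filesystem level between the two statements:

    CREATE TABLE t1 (a INT) ENGINE=ARCHIVE;
    -- at the OS level: remove <datadir>/test/t1.ARZ
    SELECT * FROM t1;  -- crashed before the fix; now returns a proper error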
-
Sergey Vojtovich authored
Bulk REPLACE or bulk INSERT ... ON DUPLICATE KEY UPDATE may break a dynamic-record MyISAM table. The problem is limited to bulk REPLACE and INSERT ... ON DUPLICATE KEY UPDATE, because only these operations may internally be done via UPDATE and may request the write cache. When flushing the write cache, MyISAM may write the remaining cached data at a wrong position. Fixed by requesting the write cache to seek to the correct position.
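A hedged sketch of the kind of statement affected; the VARCHAR column gives the table the dynamic record format:

    CREATE TABLE t1 (a INT PRIMARY KEY, b VARCHAR(100)) ENGINE=MyISAM;
    INSERT INTO t1 VALUES (1, 'one'), (2, 'two');
    -- bulk REPLACE: rows hitting duplicate keys may internally be done
    -- via UPDATE, which may request the write cache
    REPLACE INTO t1 VALUES (1, 'ONE'), (3, 'three'), (2, 'TWO');
    CHECK TABLE t1;  -- could report corruption before the fix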
-
Sergey Vojtovich authored
table and view... Invalid memory reads after a query referencing a MyISAM table multiple times with a write lock. Invalid memory reads may lead to a server crash, valgrind warnings, or incorrect values in INFORMATION_SCHEMA.TABLES.{TABLE_ROWS, DATA_LENGTH, INDEX_LENGTH, ...}. This may happen when one of the table instances gets closed after a query, e.g. when the open tables cache runs out of slots. UNION, MERGE and VIEW are irrelevant. The problem was that MyISAM didn't restore the state info pointer to its default value.
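One shape such a query can take; this is only a sketch, since triggering the bug also depends on one table instance being evicted from the open tables cache:

    CREATE TABLE t1 (a INT) ENGINE=MyISAM;
    INSERT INTO t1 VALUES (1), (2);
    -- the table is referenced twice and taken with a write lock
    UPDATE t1 AS w, t1 AS r SET w.a = r.a + 1;
    SELECT TABLE_ROWS, DATA_LENGTH FROM INFORMATION_SCHEMA.TABLES
     WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 't1';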
-
Sergey Glukhov authored
-
Sergey Glukhov authored
In the case of 'CREATE VIEW', the subselect transformation does not happen (see JOIN::prepare). During fix_fields, Item_row may call the is_null() method for its arguments, which leads to item evaluation (of the wrong subselect in our case, as the transformation did not happen beforehand). This is_null() call does not make sense for 'CREATE VIEW'. Note: only Item_row is affected, because other items don't call is_null() for their arguments during fix_fields().
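A hedged sketch of a statement shape that exercises this path: an Item_row (a ROW() value) with a subquery argument inside CREATE VIEW, where the subquery is still untransformed:

    CREATE TABLE t1 (a INT, b INT);
    CREATE VIEW v1 AS
      SELECT 1 FROM t1
       WHERE ROW(t1.a, (SELECT MAX(b) FROM t1)) = ROW(1, 1);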
-
Davi Arnaut authored
related bits.
-
- 11 Feb, 2010 2 commits
-
-
Joerg Bruehe authored
this includes a major whitespace (formatting) alignment and sequence changes to better agree with other spec files. Further changes:
- All features are controlled by "%define" settings, set from call options or built in.
- "bundled zlib" is on by default.
- "with libgcc" is controlled by runtime detection of gcc.
- Handling of "CFLAGS" and "CXXFLAGS" is more concentrated.
- Several missing man pages were added.
-
Georgi Kodinov authored
when converting to an enumerated type.
-
- 10 Feb, 2010 1 commit
-
-
Davi Arnaut authored
SHOW CREATE TABLE on a view (v1) that contains a function whose statement uses another view (v2) could trigger an infinite loop if the view referenced within the function causes a warning to be raised while opening said view (v2). The problem was an infinite loop over the stack of internal error handlers. The loop would be triggered if the stack contained two or more handlers and the first two handlers didn't handle the raised condition. In this case, the loop variable would always point to the second handler in the stack. The solution is to correct the loop variable assignment so that the loop is able to iterate over all handlers in the stack.
-
- 07 Feb, 2010 1 commit
-
-
Luis Soares authored
logging is disabled. Post-push fix: disable the test when running mysqld in embedded mode.
-
- 06 Feb, 2010 1 commit
-
-
Gleb Shchepa authored
Grouping by a subquery in a query with a distinct aggregate function led to a wrong result (wrong and unordered grouping values). There are two related problems:
1) A query like this:
   SELECT (SELECT t1.a) aa, COUNT(DISTINCT b) c FROM t1 GROUP BY aa
returned a wrong result, because the outer reference "t1.a" in the subquery was substituted with an Item_ref item. The Item_ref item obtains data from the result_field object, which is refreshed once after the end of each group. This data is not applicable to filesort, since filesort() doesn't care about groups (and doesn't update result_field objects with copy_fields() and so on). That data is also not applicable to the group separation algorithm: end_send_group() checks every record with test_if_group_changed(), which evaluates Item_ref items, but those Item_ref items are refreshed only after the end of the group. That is a vicious circle, and the grouped column values in the output are shifted. Fix: if a) we are grouping by a subquery and b) that subquery has outer references to the FROM list of the grouping query, then we substitute these outer references with Item_direct_ref, like references under aggregate functions: Item_direct_ref obtains data directly from the current record.
2) A query with a non-trivial grouping expression, like:
   SELECT (SELECT t1.a) aa, COUNT(DISTINCT b) c FROM t1 GROUP BY aa+0
also returned a wrong result, since JOIN::exec() substitutes references to top-level aliases in the SELECT list with Item_copy caching items. Item_copy items have the same refreshing policy as Item_ref items, so the whole grouping expression with Item_copy inside returns a wrong result in filesort() and end_send_group(). Fix: include the aliased items in the GROUP BY item tree instead of Item_ref references to them.
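The two queries from the description, wrapped in a minimal self-contained script; the table definition and data are assumptions for illustration:

    CREATE TABLE t1 (a INT, b INT);
    INSERT INTO t1 VALUES (1, 1), (1, 2), (2, 1), (2, 2);
    -- problem 1: outer reference inside the grouping subquery
    SELECT (SELECT t1.a) aa, COUNT(DISTINCT b) c FROM t1 GROUP BY aa;
    -- problem 2: non-trivial grouping expression over the alias
    SELECT (SELECT t1.a) aa, COUNT(DISTINCT b) c FROM t1 GROUP BY aa + 0;
    -- expected after the fix: two groups (aa = 1, aa = 2), each with c = 2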
-
- 05 Feb, 2010 3 commits
-
-
Luis Soares authored
logging is disabled. The server would hit an assertion because of a DBUG violation: there was a missing DBUG_RETURN, and a plain return was used instead. This patch replaces the return with DBUG_RETURN.
-
Luis Soares authored
into slow log. While processing a statement, down the mysql_parse execution stack, thd->enable_slow_log can be assigned from opt_log_slow_admin_statements, depending on whether one is executing administrative statements such as ALTER TABLE, OPTIMIZE, ANALYZE, etc. This can have an impact on slow logging for statements that are executed after an administrative statement's execution completes. When executing statements directly from the user this is fine, because thd->enable_slow_log is reset right at the beginning of the dispatch_command function, i.e., every time a new statement is set to execute. On the other hand, for the slave SQL thread (sql_thd) the story is a bit different. In SBR, the sql_thd applies statements by calling mysql_parse. Right after, it calls the log_slow_statement function to log them if they take too long. Calling mysql_parse directly is fine, but it also means that the dispatch_command function is bypassed. As a consequence, thd->enable_slow_log does not get a chance to be reset before the next statement is executed by the sql_thd. If the statement just executed by the sql_thd was an administrative statement and logging of admin statements was disabled, sql_thd->enable_slow_log will be set to 0 (disabled) from that moment on. End result: the sql_thd stops logging slow statements. We fix this by resetting the value of sql_thd->enable_slow_log to the value of opt_log_slow_slave_statements right after log_slow_statement is called by the sql_thd.
-
Luis Soares authored
To 5.x Release Notes
=====
This is a backport of BUG#23300 into 5.1 GA. Original cset revid (in betony): luis.soares@sun.com-20090929140901-s4kjtl3iiyy4ls2h

Description
===========
When using replication, the slave will not log any slow queries replicated from the master, even if the option "--log-slow-slave-statements" is set and these take more than "long_query_time" to execute. In order to log slow queries in the replication thread one needs to set --log-slow-slave-statements, so that the SQL thread is initialized with the correct switch. Although setting this flag correctly configures the slave thread option to log slow queries, there is an issue with the condition that is used to check whether to log the slow query or not. When replaying binlog events, the statement contains a SET TIMESTAMP clause, which forces the slow logging condition check to fail. Consequently, the slow query logging will not take place. This patch addresses this issue by removing the second condition from log_slow_statements, as it prevents slow queries from being logged and seems to be deprecated.
-
- 03 Feb, 2010 1 commit
-
-
Georgi Kodinov authored
-
- 02 Feb, 2010 2 commits
-
-
Joerg Bruehe authored
Get rid of trailing blanks.
-
Joerg Bruehe authored
Cleanup and formatting improvements; the vendor is now Sun (since MySQL AB was bought). Backport the change so that RPM doesn't magically create a dependency on "Perl-DBI".
-
- 01 Feb, 2010 1 commit
-
-
Georgi Kodinov authored
-
- 29 Jan, 2010 1 commit
-
-
Georgi Kodinov authored
Fixed two problems:
1. test_if_order_by_key() was continuing on the primary key as if it had a primary key suffix (as the secondary keys do). This led to crashes in ORDER BY <pk>,<pk>. Fixed by not treating the primary key as a secondary one and not depending on it being clustered with a primary key.
2. The cost calculation was trying to read the records-per-key estimate when operating on ORDER BYs that order on all of the secondary key plus some of the primary key. This led to crashes because of out-of-bounds array access. Fixed by assuming we'll find 1 record per key in such cases.
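A hedged sketch of the two crashing shapes, assuming an InnoDB table with a two-part secondary key:

    CREATE TABLE t1 (
      pk INT PRIMARY KEY,
      a INT,
      b INT,
      KEY k1 (a, b)
    ) ENGINE=InnoDB;
    -- problem 1: ordering on the primary key repeated
    SELECT pk FROM t1 ORDER BY pk, pk;
    -- problem 2: ordering on all of the secondary key plus part of the primary key
    SELECT a, b, pk FROM t1 WHERE a = 1 ORDER BY a, b, pk;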
-
- 05 Feb, 2010 1 commit
-
-
Davi Arnaut authored
The problem was that the dbug facility was being used after the per-thread dbug state had already been finalized. The problem was present in a few functions that invoked decrement_handler_count, which in turn invokes my_thread_end on Windows. In my_thread_end, the per-thread dbug state is finalized, and any use after that point ends up creating a new state. The solution is to process the exit of a function before decrement_handler_count is called.
-
- 29 Jan, 2010 2 commits
-
-
Georgi Kodinov authored
Updated the certs to expire in 2015. Made sure they work with both yassl and openssl.
-
Ramil Kalimullin authored
column is used for ORDER BY. Problem: filesort isn't meant for zero-length sort data (e.g. CHAR(0)), which leads to a server crash. Fix: disregard the sort order if the sort data record length is 0 (nothing to sort).
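A minimal repro sketch for the zero-length sort key:

    CREATE TABLE t1 (a CHAR(0));
    INSERT INTO t1 VALUES (''), (NULL);
    SELECT a FROM t1 ORDER BY a;  -- crashed before the fix; the sort is
                                  -- simply skipped after it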
-
- 27 Jan, 2010 2 commits
-
-
Bjorn Munch authored
Define environment variables for both timeout settings. This patch is for 5.0 (mtr v1) and should be replaced for 5.1 and up.
-
Davi Arnaut authored
The problem was that a failure to open a view wasn't being properly handled. When opening a view with an unknown definer, the open procedure would be treated as successful and would later crash when attempting to lock the view (which wasn't opened to begin with). The solution is to skip further processing when opening a table if it fails with a fatal error.
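A sketch of the scenario, with a hypothetical nonexistent definer account:

    CREATE TABLE t1 (a INT);
    CREATE DEFINER = 'no_such_user'@'localhost' SQL SECURITY DEFINER
      VIEW v1 AS SELECT a FROM t1;
    -- opening v1 fails with a fatal error (unknown definer); before the fix
    -- the failure was treated as success and the later lock attempt crashed
    SELECT * FROM v1;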
-
- 11 Feb, 2010 2 commits
-
-
Staale Smedseng authored
being logged to slow query log. The problem is that the execution time for a multi-statement stored procedure as a whole may not be accurate, and thus the procedure may not be entered into the slow query log even if the total time exceeds long_query_time. The reason is that THD::utime_after_lock, which is used for the time calculation, may be reset at the start of each new statement, possibly leaving the total SP execution time equal to the time spent executing the last statement in the SP. This patch stores the utime at the start of SP execution and restores it on exit from SP execution. A test is added.
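A hedged illustration of the timing issue: each statement stays under long_query_time, but the procedure as a whole does not:

    SET SESSION long_query_time = 1;
    DELIMITER //
    CREATE PROCEDURE p1()
    BEGIN
      SELECT SLEEP(0.6);  -- each statement individually is under the threshold
      SELECT SLEEP(0.6);
    END//
    DELIMITER ;
    CALL p1();  -- total is about 1.2s; with the fix, the whole call is
                -- measured against long_query_time and logged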
-
Martin Hansson authored
-
- 10 Feb, 2010 3 commits
-
-
Luis Soares authored
-
Sergey Glukhov authored
The problem becomes apparent only if HAVE_purify is undefined. It is related to the part of the code in the open_table_from_share() function where we initialize the record buffer only if HAVE_purify is enabled. So with HAVE_purify=OFF the record buffer is not initialized at the table open stage. Next we read a key, find a NULL value and update the appropriate null bit, but do not update the record buffer. After that the record is stored in the join cache (store_record_in_cache). For CHAR fields we strip trailing spaces, and in our case this procedure uses the uninitialized record buffer. The fix is to skip the space-stripping procedure in the case of NULL values for CHAR fields (partially based on the 6.0 JOIN_CACHE implementation).
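A sketch of a query shape that goes through the join cache with NULL CHAR values; column types and data are illustrative:

    CREATE TABLE t1 (a CHAR(10));
    CREATE TABLE t2 (a CHAR(10));
    INSERT INTO t1 VALUES (NULL), ('x');
    INSERT INTO t2 VALUES (NULL), ('y');
    -- a join with no usable index goes through the join cache, which stores
    -- rows (including the NULL CHAR values) via store_record_in_cache
    SELECT * FROM t1, t2 WHERE t1.a <> t2.a;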
-
Martin Hansson authored
error causes debug assertion. The IGNORE option of the multiple-table UPDATE command was not intended to suppress errors caused by the sql_safe_updates mode. This mode raises an error if the execution of UPDATE does not use a key for row retrieval, and it should continue to do so regardless of the IGNORE option. However, the implementation of IGNORE does not support exceptions to the rule; it always converts errors to warnings and cannot be extended. The Internal_error_handler interface offers the infrastructure to handle individual errors, making sure that the error raised by sql_safe_updates is not silenced. Fixed by implementing an Internal_error_handler and using it for UPDATE IGNORE commands.
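A sketch of the statement class this applies to, under the safe-updates mode:

    SET SESSION sql_safe_updates = 1;
    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (a INT);
    -- no key is used for row retrieval, so safe-updates mode must still
    -- raise an error even though IGNORE is present
    UPDATE IGNORE t1, t2 SET t1.a = 1 WHERE t1.a = t2.a;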
-
- 09 Feb, 2010 1 commit
-
-
Sergey Vojtovich authored
-