Commit 7617d198 authored by monty@donna.mysql.com

Lots of fixes for BDB tables

Change DROP TABLE to first drop the data, then the .frm file
parent c475a988
......@@ -425,3 +425,4 @@ mysql-test/var/slave-data/mysql-bin.012
mysql-test/var/slave-data/mysql-bin.013
mysql-test/var/slave-data/mysql-bin.014
mysql-test/var/slave-data/mysql-bin.index
scripts/mysqld_multi
jani@prima.mysql.com
sasha@mysql.sashanet.com
sasha@work.mysql.com
serg@serg.mysql.com
jani@prima.mysql.fi
monty@donna.mysql.com
......@@ -31688,9 +31688,9 @@ for a similar query to get the correct row count.
@cindex Borland Builder 4 program
@item Borland Builder 4
When you start a query you can use the property @code{Active} or use the
method @code{Open}. Note that @code{Active} will start by automatically issuing
a @code{SELECT * FROM ...} query that may not be a good thing if your tables
are big!
method @code{Open}. Note that @code{Active} will start by automatically
issuing a @code{SELECT * FROM ...} query that may not be a good thing if
your tables are big!
@item ColdFusion (On Unix)
The following information is taken from the ColdFusion documentation:
......@@ -31702,11 +31702,16 @@ newer version should also work.) You can download @strong{MyODBC} at
@uref{http://www.mysql.com/downloads/api-myodbc.html}
@cindex ColdFusion program
ColdFusion Version 4.5.1 allows you to use the ColdFusion Administrator to add
the @strong{MySQL} data source. However, the driver is not included with
ColdFusion Version 4.5.1. Before the @strong{MySQL} driver will appear in the ODBC
datasources drop-down list, you must build and copy the @strong{MyODBC} driver
to @file{/opt/coldfusion/lib/libmyodbc.so}.
ColdFusion Version 4.5.1 allows you to use the ColdFusion Administrator
to add the @strong{MySQL} data source. However, the driver is not
included with ColdFusion Version 4.5.1. Before the @strong{MySQL} driver
will appear in the ODBC datasources drop-down list, you must build and
copy the @strong{MyODBC} driver to
@file{/opt/coldfusion/lib/libmyodbc.so}.
The Contrib directory contains the program mydsn-xxx.zip, which allows
you to build and remove the DSN registry file for the MyODBC driver
in ColdFusion applications.
@cindex DataJunction
@item DataJunction
......@@ -38643,13 +38648,18 @@ databases. By Hal Roberts.
Interface for Stk. Stk is the Tk widgets with Scheme underneath instead of Tcl.
By Terry Jones.
@item @uref{http://www.mysql.com/Downloads/Contrib/eiffel-wrapper-1.0.tar.gz,eiffel-wrapper-1.0.tar.gz}.
@item @uref{http://www.mysql.com/Downloads/Contrib/eiffel-wrapper-1.0.tar.gz,eiffel-wrapper-1.0.tar.gz}
Eiffel wrapper by Michael Ravits.
@item @uref{http://www.mysql.com/Downloads/Contrib/SQLmy0.06.tgz,SQLmy0.06.tgz}.
@item @uref{http://www.mysql.com/Downloads/Contrib/SQLmy0.06.tgz,SQLmy0.06.tgz}
FlagShip Replaceable Database Driver (RDD) for MySQL. By Alejandro
Fernandez Herrero.
@uref{http://www.fship.com/rdds.html, Flagship RDD home page}
@item @uref{http://www.mysql.com/Downloads/Contrib/mydsn-1.0.zip,mydsn-1.0.zip}
Binary and source for @code{mydsn.dll}. mydsn should be used to build
and remove the DSN registry file for the MyODBC driver in ColdFusion
applications. By Miguel Angel Solórzano.
@end itemize
@appendixsec Clients
......@@ -39603,36 +39613,49 @@ though, so Version 3.23 is not released as a stable version yet.
@appendixsubsec Changes in release 3.23.29
@itemize @bullet
@item
Changed @code{DROP TABLE} to first drop the table data and then the @code{.frm} file.
@item
Fixed a bug in the hostname cache which caused @code{mysqld} to report the
hostname as '' in some error messages.
@item
Fixed a bug with @code{HEAP} type tables; the variable
@code{max_heap_table_size} wasn't used. Now either @code{MAX_ROWS} or
@code{max_heap_table_size} can be used to limit the size of a @code{HEAP}
type table.
@item
Renamed variable @code{bdb_lock_max} to @code{bdb_max_lock}.
@item
Changed the default server-id to 1 for masters and 2 for slaves
to make it easier to use the binary log.
@item
Added @code{CHECK}, @code{ANALYZE} and @code{OPTIMIZE} of BDB tables.
Renamed variable @code{bdb_lock_max} to @code{bdb_max_lock}.
@item
Added support for @code{auto_increment} on sub fields for BDB tables.
@item
Added @code{ANALYZE} of BDB tables.
@item
Store the number of rows in BDB tables; this helps to optimize queries
when we need an approximation of the number of rows.
@item
@code{DROP TABLE}, @code{RENAME TABLE}, @code{CREATE INDEX} and
@code{DROP INDEX} are now transaction endpoints.
If we get an error in a multi-row statement, we now only roll back the
last statement, not the entire transaction.
@item
If you do a @code{ROLLBACK} when you have updated a non-transactional table,
you will get an error as a warning.
@item
Added option @code{--bdb-shared-data} to @code{mysqld}.
@item
Added status variable @code{Slave_open_temp_tables}.
@item
Added variables @code{binlog_cache_size} and @code{max_binlog_cache_size} to
@code{mysqld}.
@item
@code{DROP TABLE}, @code{RENAME TABLE}, @code{CREATE INDEX} and
@code{DROP INDEX} are now transaction endpoints.
@item
If you do a @code{DROP DATABASE} on a symbolically linked database, both
the link and the original database are deleted.
@item
Fixed @code{DROP DATABASE} so that it works on OS/2.
@item
New client @code{mysqld_multi}. @xref{mysqld_multi}.
@item
Fixed a bug when doing a @code{SELECT DISTINCT ... table1 LEFT JOIN
table2 ...} when table2 was empty.
@item
......@@ -39640,13 +39663,13 @@ Added @code{--abort-slave-event-count} and
@code{--disconnect-slave-event-count} options to @code{mysqld} for
debugging and testing of replication.
@item
Added @code{Slave_open_temp_tables} status variable.
@item
Fixed replication of temporary tables. Handles everything except
slave server restart.
@item
@code{SHOW KEYS} now shows whether or not a key is @code{FULLTEXT}.
@item
New script @code{mysqld_multi}. @xref{mysqld_multi}.
@item
Added new script, @file{mysql-multi.server.sh}. Thanks to
Tim Bunce @email{Tim.Bunce@@ig.co.uk} for modifying @file{mysql.server} to
easily handle hosts running many @code{mysqld} processes.
......@@ -39682,12 +39705,6 @@ with FrontBase.
Allow @code{RESTRICT} and @code{CASCADE} after @code{DROP TABLE} to make
porting easier.
@item
If we get an error we now only roll back the statement (for BDB tables),
not the entire transaction.
@item
If you do a @code{ROLLBACK} when you have updated a non-transactional table,
you will get an error as a warning.
@item
Reset a status variable that could cause problems if one used @code{--slow-log}.
@item
Added variable @code{connect_timeout} to @code{mysql} and @code{mysqladmin}.
......@@ -44053,6 +44070,32 @@ Fixed @code{DISTINCT} with calculated columns.
@node Bugs, TODO, News, Top
@appendix Known errors and design deficiencies in MySQL
The following problems are known, and fixing them is a very high priority:
@itemize @bullet
@item
@code{ANALYZE TABLE} on a BDB table may in some cases make the table
unusable until @code{mysqld} has been restarted. When this happens, you
will see errors like the following in the @strong{MySQL} error file:
@example
001207 22:07:56 bdb: log_flush: LSN past current end-of-log
@end example
@item
Don't execute @code{ALTER TABLE} on a @code{BDB} table on which you are
running multi-statement transactions that have not been completed. (The
transaction will probably be ignored; see the example after this list.)
@item
Doing a @code{LOCK TABLE ..} and @code{FLUSH TABLES ..} doesn't
guarantee that there isn't a half-finished transaction in progress on the
table.
@end itemize
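One way to avoid the @code{ALTER TABLE} problem above is to make sure any
pending multi-statement transaction has been ended before altering the
table. A minimal sketch (the table name @code{t1} and the added column are
examples only, not part of this release):
@example
BEGIN;
INSERT INTO t1 VALUES (1,'one');
COMMIT;                           # End the transaction first
ALTER TABLE t1 ADD extra_col INT;
@end example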
The following problems are known and will be fixed in due time:
@itemize @bullet
@item
@code{mysqldump} on a @code{MERGE} table doesn't include the current
......@@ -44120,7 +44163,7 @@ you a nice speed increase as it allows @strong{MySQL} to do some
optimizations that otherwise would be very hard to do.
If you set a column to a wrong value, @strong{MySQL} will, instead of doing
a rollback, store the @code{best possible value} in the column.
a rollback, store the @code{best possible value} in the column:
@itemize @bullet
@item
......@@ -44144,6 +44187,7 @@ If the date is totally wrong, @strong{MySQL} will store the special
If you set an @code{enum} to an unsupported value, it will be set to
the error value 'empty string', with numeric value 0.
@end itemize
@item
If you execute a @code{PROCEDURE} on a query that returns an empty set,
in some cases the @code{PROCEDURE} will not transform the columns.
......@@ -51,7 +51,7 @@ my_global.h: global.h
# These files should not be included in distributions since they are
# generated by configure from the .h.in files
dist-hook:
rm -f $(distdir)/mysql_version.h $(distdir)/my_config.h
$(RM) -f $(distdir)/mysql_version.h $(distdir)/my_config.h
# Don't update the files from bitkeeper
%::SCCS/s.%
......@@ -32,20 +32,19 @@
void my_b_seek(IO_CACHE *info,my_off_t pos)
{
if(info->type == READ_CACHE)
{
info->rc_pos=info->rc_end=info->buffer;
}
else if(info->type == WRITE_CACHE)
{
byte* try_rc_pos;
try_rc_pos = info->rc_pos + (pos - info->pos_in_file);
if(try_rc_pos >= info->buffer && try_rc_pos <= info->rc_end)
info->rc_pos = try_rc_pos;
else
flush_io_cache(info);
}
if (info->type == READ_CACHE)
{
info->rc_pos=info->rc_end=info->buffer;
}
else if (info->type == WRITE_CACHE)
{
byte* try_rc_pos;
try_rc_pos = info->rc_pos + (pos - info->pos_in_file);
if (try_rc_pos >= info->buffer && try_rc_pos <= info->rc_end)
info->rc_pos = try_rc_pos;
else
flush_io_cache(info);
}
info->pos_in_file=pos;
info->seek_not_done=1;
}
......
......@@ -37,10 +37,12 @@ WARNING: THIS IS VERY MUCH A FIRST-CUT ALPHA. Comments/patches welcome.
# Documentation continued at end of file
my $VERSION = "1.9";
my $opt_tmpdir= $main::env{TMPDIR};
my $opt_tmpdir= $main::ENV{TMPDIR};
my $OPTIONS = <<"_OPTIONS";
$0 Ver $VERSION
Usage: $0 db_name [new_db_name | directory]
-?, --help display this helpscreen and exit
......@@ -115,6 +117,8 @@ GetOptions( \%opt,
my @db_desc = ();
my $tgt_name = undef;
usage("") if ($opt{help});
if ( $opt{regexp} || $opt{suffix} || @ARGV > 2 ) {
$tgt_name = pop @ARGV unless ( exists $opt{suffix} );
@db_desc = map { s{^([^\.]+)\./(.+)/$}{$1}; { 'src' => $_, 't_regex' => ( $2 ? $2 : '.*' ) } } @ARGV;
......@@ -133,10 +137,9 @@ else {
}
}
my $mysqld_help;
my %mysqld_vars;
my $start_time = time;
my $opt_tmpdir= $opt{tempdir} ? $opt{tmpdir} : $main::env{TMPDIR};
my $opt_tmpdir= $opt{tmpdir} ? $opt{tmpdir} : $main::ENV{TMPDIR};
$0 = $1 if $0 =~ m:/([^/]+)$:;
$opt{quiet} = 0 if $opt{debug};
$opt{allowold} = 1 if $opt{keepold};
......@@ -310,15 +313,19 @@ print Dumper( \@db_desc ) if ( $opt{debug} );
die "No tables to hot-copy" unless ( length $hc_locks );
# --- create target directories ---
# --- create target directories if we are using 'cp' ---
my @existing = ();
foreach my $rdb ( @db_desc ) {
if ($opt{method} =~ /^cp\b/)
{
foreach my $rdb ( @db_desc ) {
push @existing, $rdb->{target} if ( -d $rdb->{target} );
}
}
die "Can't hotcopy to '", join( "','", @existing ), "' because they already exist and the --allowold option was not given.\n"
if ( @existing && !$opt{allowold} );
die "Can't hotcopy to '", join( "','", @existing ), "' because they already exist and the --allowold option was not given.\n"
if ( @existing && !$opt{allowold} );
}
retire_directory( @existing ) if ( @existing );
......@@ -385,54 +392,11 @@ foreach my $rdb ( @db_desc )
push @failed, "$rdb->{src} -> $rdb->{target} failed: $@"
if ( $@ );
@files = map { "$datadir/$rdb->{src}/$_" } @{$rdb->{index}};
@files = @{$rdb->{index}};
if ($rdb->{index})
{
#
# Copy only the header of the index file
#
my $tmpfile="$opt_tmpdir/mysqlhotcopy$$";
foreach my $file ($rdb->{index})
{
my $from="$datadir/$rdb->{src}/$file";
my $to="$rdb->{target}/$file";
my $buff;
open(INPUT, $from) || die "Can't open file $from: $!\n";
my $length=read INPUT, $buff, 2048;
die "Can't read index header from $from\n" if ($length <= 1024);
close INPUT;
if ( $opt{dryrun} )
{
print '$opt{method}-header $from $to\n';
}
elsif ($opt{method} eq 'cp')
{
!open(OUTPUT,$to) || die "Can\'t create file $to: $!\n";
if (write(OUTPUT,$buff) != length($buff))
{
die "Error when writing data to $to: $!\n";
}
close OUTPUT || die "Error on close of $to: $!\n";
}
elsif ($opt{method} eq 'scp')
{
my $tmp=$tmpfile;
open(OUTPUT,"$tmp") || die "Can\'t create file $tmp: $!\n";
if (write(OUTPUT,$buff) != length($buff))
{
die "Error when writing data to $tmp: $!\n";
}
close OUTPUT || die "Error on close of $tmp: $!\n";
safe_system('scp $tmp $to');
}
else
{
die "Can't use unsupported method '$opt{method}'\n";
}
}
unlink "$opt_tmpdir/mysqlhotcopy$$";
copy_index($opt{method}, \@files,
"$datadir/$rdb->{src}", $rdb->{target} );
}
if ( $opt{checkpoint} ) {
......@@ -534,9 +498,62 @@ sub copy_files {
safe_system (@cmd);
}
#
# Copy only the header of the index file
#
sub copy_index
{
my ($method, $files, $source, $target) = @_;
my $tmpfile="$opt_tmpdir/mysqlhotcopy$$";
print "Copying indices for ".@$files." files...\n" unless $opt{quiet};
foreach my $file (@$files)
{
my $from="$source/$file";
my $to="$target/$file";
my $buff;
open(INPUT, "<$from") || die "Can't open file $from: $!\n";
my $length=read INPUT, $buff, 2048;
die "Can't read index header from $from\n" if ($length < 1024);
close INPUT;
if ( $opt{dryrun} )
{
print "$opt{method}-header $from $to\n";
}
elsif ($opt{method} eq 'cp')
{
open(OUTPUT,">$to") || die "Can\'t create file $to: $!\n";
if (syswrite(OUTPUT,$buff) != length($buff))
{
die "Error when writing data to $to: $!\n";
}
close OUTPUT || die "Error on close of $to: $!\n";
}
elsif ($opt{method} eq 'scp')
{
my $tmp=$tmpfile;
open(OUTPUT,">$tmp") || die "Can\'t create file $tmp: $!\n";
if (syswrite(OUTPUT,$buff) != length($buff))
{
die "Error when writing data to $tmp: $!\n";
}
close OUTPUT || die "Error on close of $tmp: $!\n";
safe_system("scp $tmp $to");
}
else
{
die "Can't use unsupported method '$opt{method}'\n";
}
}
unlink "$tmpfile" if ($opt{method} eq 'scp');
}
sub safe_system
{
my @cmd=shift;
my @cmd= @_;
if ( $opt{dryrun} )
{
......@@ -546,7 +563,7 @@ sub safe_system
## for some reason system fails but backticks works ok for scp...
print "Executing '@cmd'\n" if $opt{debug};
my $cp_status = system @cmd;
my $cp_status = system "@cmd > /dev/null";
if ($cp_status != 0) {
warn "Burp ('scuse me). Trying backtick execution...\n" if $opt{debug}; #'
## try something else
......@@ -680,7 +697,9 @@ UNIX domain socket to use when connecting to local server
=item --noindices
don't include index files in copy
Don\'t include index files in the copy. Only up to the first 2048 bytes
are copied; you can restore the indexes with isamchk -r or myisamchk -r
on the backup.
=item --method=#
......@@ -689,9 +708,10 @@ method for copy (only "cp" currently supported). Alpha support for
will vary with your ability to understand how scp works. 'man scp'
and 'man ssh' are your friends.
The destination directory _must exist_ on the target machine using
the scp method. Liberal use of the --debug option will help you figure
out what's really going on when you do an scp.
The destination directory _must exist_ on the target machine when using
the scp method. --keepold and --allowold are meaningless with scp.
Liberal use of the --debug option will help you figure out what\'s
really going on when you do an scp.
Note that using scp will lock your tables for a _long_ time unless
your network connection is _fast_. If this is unacceptable to you,
......@@ -755,3 +775,4 @@ Ralph Corderoy - added synonyms for commands
Scott Wiersdorf - added table regex and scp support
Monty - working --noindex (copy only first 2048 bytes of index file)
Fixes for --method=scp
Testing server 'PostgreSQL version 7.0.2' at 2000-08-15 16:58:55
Testing server 'PostgreSQL version ???' at 2000-12-05 5:18:45
ATIS table test
Creating tables
Time for create_table (28): 1 wallclock secs ( 0.02 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for create_table (28): 0 wallclock secs ( 0.02 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Inserting data
Time to insert (9768): 9 wallclock secs ( 2.71 usr 0.43 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time to insert (9768): 9 wallclock secs ( 2.88 usr 0.35 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Retrieving data
Time for select_simple_join (500): 3 wallclock secs ( 0.76 usr 0.04 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_join (200): 13 wallclock secs ( 4.80 usr 0.22 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_distinct (800): 17 wallclock secs ( 2.10 usr 0.03 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_group (2500): 44 wallclock secs ( 1.57 usr 0.13 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_simple_join (500): 3 wallclock secs ( 0.69 usr 0.04 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_join (200): 14 wallclock secs ( 5.18 usr 0.20 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_distinct (800): 17 wallclock secs ( 2.21 usr 0.07 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_group (2600): 45 wallclock secs ( 1.73 usr 0.10 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Removing tables
Time to drop_table (28): 1 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Total time: 88 wallclock secs (11.97 usr 0.85 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time to drop_table (28): 0 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Total time: 89 wallclock secs (12.72 usr 0.77 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing server 'PostgreSQL version 7.0.2' at 2000-08-16 1:58:36
Testing server 'PostgreSQL version ???' at 2000-12-05 5:20:15
Testing of ALTER TABLE
Testing with 1000 columns and 1000 rows in 20 steps
Insert data into the table
Time for insert (1000) 1 wallclock secs ( 0.35 usr 0.06 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for insert (1000) 0 wallclock secs ( 0.28 usr 0.06 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for alter_table_add (992): 46 wallclock secs ( 0.32 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for alter_table_add (992): 28 wallclock secs ( 0.41 usr 0.03 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for create_index (8): 1 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for drop_index (8): 0 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Total time: 50 wallclock secs ( 0.67 usr 0.07 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Total time: 29 wallclock secs ( 0.71 usr 0.09 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing server 'PostgreSQL version 7.0.2' at 2000-08-16 1:59:26
Testing server 'PostgreSQL version ???' at 2000-12-05 5:20:45
Testing of some unusual tables
All tests are done 1000 times with 1000 fields
Testing table with 1000 fields
Testing select * from table with 1 record
Time to select_many_fields(1000): 389 wallclock secs ( 3.71 usr 0.29 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time to select_many_fields(1000): 402 wallclock secs ( 3.75 usr 0.32 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing select all_fields from table with 1 record
Time to select_many_fields(1000): 497 wallclock secs ( 4.04 usr 0.23 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time to select_many_fields(1000): 489 wallclock secs ( 4.32 usr 0.34 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing insert VALUES()
Time to insert_many_fields(1000): 143 wallclock secs ( 0.43 usr 0.07 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time to insert_many_fields(1000): 144 wallclock secs ( 0.38 usr 0.08 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing insert (all_fields) VALUES()
Time to insert_many_fields(1000): 214 wallclock secs ( 0.57 usr 0.10 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time to insert_many_fields(1000): 213 wallclock secs ( 0.80 usr 0.05 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Total time: 1244 wallclock secs ( 8.76 usr 0.69 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Total time: 1248 wallclock secs ( 9.27 usr 0.79 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing server 'PostgreSQL version 7.0.2' at 2000-08-15 17:01:48
Testing server 'PostgreSQL version ???' at 2000-12-05 5:41:34
Testing the speed of connecting to the server and sending of data
All tests are done 10000 times
Testing connection/disconnect
Time to connect (10000): 129 wallclock secs ( 8.57 usr 4.58 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time to connect (10000): 125 wallclock secs ( 9.11 usr 3.79 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Test connect/simple select/disconnect
Time for connect+select_simple (10000): 142 wallclock secs (11.34 usr 5.77 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for connect+select_simple (10000): 140 wallclock secs (12.15 usr 5.74 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Test simple select
Time for select_simple (10000): 5 wallclock secs ( 2.71 usr 0.49 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_simple (10000): 4 wallclock secs ( 2.96 usr 0.45 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing connect/select 1 row from table/disconnect
Time to connect+select_1_row (10000): 176 wallclock secs (11.82 usr 5.48 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time to connect+select_1_row (10000): 173 wallclock secs (12.56 usr 5.56 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing select 1 row from table
Time to select_1_row (10000): 7 wallclock secs ( 2.56 usr 0.42 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time to select_1_row (10000): 7 wallclock secs ( 3.10 usr 0.50 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing select 2 rows from table
Time to select_2_rows (10000): 7 wallclock secs ( 2.76 usr 0.42 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time to select_2_rows (10000): 6 wallclock secs ( 2.75 usr 0.54 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Test select with aritmetic (+)
Time for select_column+column (10000): 8 wallclock secs ( 2.28 usr 0.49 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_column+column (10000): 9 wallclock secs ( 2.41 usr 0.31 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing retrieval of big records (7000 bytes)
Time to select_big (10000): 8 wallclock secs ( 3.76 usr 0.68 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time to select_big (10000): 8 wallclock secs ( 3.74 usr 0.88 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Total time: 482 wallclock secs (45.81 usr 18.33 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Total time: 472 wallclock secs (48.80 usr 17.77 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing server 'PostgreSQL version 7.0.2' at 2000-08-15 17:09:50
Testing server 'PostgreSQL version ???' at 2000-12-05 5:49:26
Testing the speed of creating and droping tables
Testing with 10000 tables and 10000 loop count
Testing create of tables
Time for create_MANY_tables (10000): 455 wallclock secs ( 8.09 usr 1.12 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for create_MANY_tables (10000): 448 wallclock secs ( 7.42 usr 0.95 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Accessing tables
Time to select_group_when_MANY_tables (10000): 188 wallclock secs ( 3.03 usr 0.46 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time to select_group_when_MANY_tables (10000): 187 wallclock secs ( 2.71 usr 0.68 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing drop
Time for drop_table_when_MANY_tables (10000): 1328 wallclock secs ( 2.91 usr 0.56 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for drop_table_when_MANY_tables (10000): 1324 wallclock secs ( 3.41 usr 0.51 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing create+drop
Time for create+drop (10000): 3022 wallclock secs (10.18 usr 1.71 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for create_key+drop (10000): 3752 wallclock secs ( 8.40 usr 1.09 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Total time: 8745 wallclock secs (32.62 usr 4.94 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for create+drop (10000): 2954 wallclock secs (11.24 usr 1.81 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for create_key+drop (10000): 4055 wallclock secs (10.98 usr 1.30 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Total time: 8968 wallclock secs (35.76 usr 5.26 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing server 'PostgreSQL version 7.0.2' at 2000-08-16 2:20:11
Testing server 'PostgreSQL version ???' at 2000-12-05 8:18:54
Testing the speed of inserting data into 1 table and do some selects on it.
The tests are done with a table that has 100000 rows.
......@@ -8,73 +8,91 @@ Creating tables
Inserting 100000 rows in order
Inserting 100000 rows in reverse order
Inserting 100000 rows in random order
Time for insert (300000): 7729 wallclock secs (94.80 usr 16.69 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for insert (300000): 7486 wallclock secs (94.98 usr 16.58 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing insert of duplicates
Time for insert_duplicates (300000): 55 wallclock secs (29.54 usr 3.69 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for insert_duplicates (100000): 3055 wallclock secs (60.75 usr 8.53 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Retrieving data from the table
Time for select_big (10:3000000): 53 wallclock secs (22.20 usr 0.75 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for order_by_key (10:3000000): 118 wallclock secs (22.03 usr 0.69 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for order_by (10:3000000): 103 wallclock secs (22.05 usr 0.77 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_diff_key (500:1000): 13 wallclock secs ( 0.17 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_big (10:3000000): 54 wallclock secs (21.95 usr 0.77 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for order_by_big_key (10:3000000): 115 wallclock secs (22.06 usr 0.67 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for order_by_big_key_desc (10:3000000): 116 wallclock secs (22.15 usr 0.66 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for order_by_big_key2 (10:3000000): 118 wallclock secs (22.07 usr 0.53 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for order_by_big_key_diff (10:3000000): 126 wallclock secs (22.20 usr 0.79 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for order_by_big (10:3000000): 121 wallclock secs (21.92 usr 0.67 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for order_by_range (500:125750): 16 wallclock secs ( 1.21 usr 0.02 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for order_by_key (500:125750): 15 wallclock secs ( 1.09 usr 0.06 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for order_by_key2_diff (500:250500): 19 wallclock secs ( 2.00 usr 0.06 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_diff_key (500:1000): 13 wallclock secs ( 0.24 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Note: Query took longer then time-limit: 600
Estimating end time based on:
165 queries in 165 loops of 5000 loops took 605 seconds
Estimated time for select_range_prefix (5000:1386): 18333 wallclock secs ( 3.03 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
180 queries in 180 loops of 5000 loops took 653 seconds
Estimated time for select_range_prefix (5000:1512): 18138 wallclock secs ( 5.00 usr 0.28 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Note: Query took longer then time-limit: 600
Estimating end time based on:
165 queries in 165 loops of 5000 loops took 603 seconds
Estimated time for select_range (5000:1386): 18272 wallclock secs ( 5.45 usr 0.91 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
165 queries in 165 loops of 5000 loops took 614 seconds
Estimated time for select_range_key2 (5000:1386): 18606 wallclock secs ( 3.03 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Note: Query took longer then time-limit: 600
Estimating end time based on:
23746 queries in 11873 loops of 100000 loops took 601 seconds
Estimated time for select_key_prefix (200000): 5061 wallclock secs (67.04 usr 11.03 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
24340 queries in 12170 loops of 100000 loops took 601 seconds
Estimated time for select_key_prefix (200000): 4938 wallclock secs (67.63 usr 10.85 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Note: Query took longer then time-limit: 600
Estimating end time based on:
23796 queries in 11898 loops of 100000 loops took 601 seconds
Estimated time for select_key (200000): 5051 wallclock secs (66.15 usr 11.60 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
24198 queries in 12099 loops of 100000 loops took 601 seconds
Estimated time for select_key (200000): 4967 wallclock secs (68.44 usr 12.65 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Note: Query took longer then time-limit: 600
Estimating end time based on:
24362 queries in 12181 loops of 100000 loops took 601 seconds
Estimated time for select_key2 (200000): 4933 wallclock secs (67.48 usr 11.08 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Test of compares with simple ranges
Note: Query took longer then time-limit: 600
Estimating end time based on:
2000 queries in 50 loops of 500 loops took 605 seconds
Estimated time for select_range_prefix (20000:4350): 6050 wallclock secs ( 3.50 usr 0.60 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
1920 queries in 48 loops of 500 loops took 603 seconds
Estimated time for select_range_prefix (20000:4176): 6281 wallclock secs ( 4.69 usr 0.52 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Note: Query took longer then time-limit: 600
Estimating end time based on:
2000 queries in 50 loops of 500 loops took 603 seconds
Estimated time for select_range (20000:4350): 6030 wallclock secs ( 4.30 usr 0.30 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_group (111): 233 wallclock secs ( 0.02 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
1480 queries in 37 loops of 500 loops took 611 seconds
Estimated time for select_range_key2 (20000:3219): 8256 wallclock secs ( 4.59 usr 1.08 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_group (111): 240 wallclock secs ( 0.03 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Note: Query took longer then time-limit: 600
Estimating end time based on:
1362 queries in 227 loops of 2500 loops took 601 seconds
Estimated time for min_max_on_key (15000): 6618 wallclock secs ( 5.40 usr 0.33 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for min_max (60): 55 wallclock secs ( 0.01 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_on_key (100): 116 wallclock secs ( 0.04 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count (100): 121 wallclock secs ( 0.03 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_distinct_big (20): 139 wallclock secs ( 0.02 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
1314 queries in 219 loops of 2500 loops took 603 seconds
Estimated time for min_max_on_key (15000): 6883 wallclock secs ( 4.00 usr 0.46 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for min_max (60): 58 wallclock secs ( 0.02 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_on_key (100): 120 wallclock secs ( 0.03 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count (100): 130 wallclock secs ( 0.01 usr 0.03 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_distinct_big (20): 143 wallclock secs ( 0.02 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing update of keys with functions
Time for update_of_key (500): 2520 wallclock secs (13.97 usr 2.44 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for update_of_key_big (501): 249 wallclock secs ( 0.12 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for update_of_key (50000): 2460 wallclock secs (15.33 usr 3.09 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for update_of_key_big (501): 444 wallclock secs ( 0.20 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing update with key
Time for update_with_key (100000): 15050 wallclock secs (85.10 usr 15.69 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for update_with_key (300000): 14806 wallclock secs (89.73 usr 16.29 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing update of all rows
Time for update_big (500): 2330 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for update_big (10): 1894 wallclock secs ( 0.02 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing INSERT INTO ... SELECT
Time for insert_select_1_key (1): 49 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for insert_select_2_keys (1): 43 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for drop table(2): 20 wallclock secs ( 0.01 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing delete
Time for delete_key (10000): 256 wallclock secs ( 3.10 usr 0.66 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for delete_big (12): 1914 wallclock secs ( 0.00 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for delete_key (10000): 283 wallclock secs ( 2.91 usr 0.52 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for delete_all (12): 341 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Insert into table with 16 keys and with a primary key with 16 parts
Time for insert_key (100000): 3825 wallclock secs (33.55 usr 6.09 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for insert_key (100000): 3693 wallclock secs (33.29 usr 5.64 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing update of keys
Time for update_of_key (256): 2218 wallclock secs ( 0.12 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for update_of_key (256): 1164 wallclock secs ( 0.08 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Deleting rows from the table
Time for delete_big_many_keys (128): 30 wallclock secs ( 0.07 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Deleting everything from table
Time for delete_big_many_keys (2): 10 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for delete_all_many_keys (1): 31 wallclock secs ( 0.07 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Estimated total time: 102579 wallclock secs (481.81 usr 72.29 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Estimated total time: 110214 wallclock secs (659.27 usr 91.88 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing server 'PostgreSQL version 7.0.2' at 2000-08-16 13:49:53
Testing server 'PostgreSQL version ???' at 2000-12-05 20:00:31
Testing the speed of selecting on keys that consist of many parts
The test-table has 10000 rows and the test is done with 12 ranges.
Creating table
Inserting 10000 rows
Time to insert (10000): 254 wallclock secs ( 3.38 usr 0.46 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time to insert (10000): 254 wallclock secs ( 3.11 usr 0.60 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing big selects on the table
Time for select_big (70:17207): 3 wallclock secs ( 0.14 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_range (410:75949): 34 wallclock secs ( 0.85 usr 0.02 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_big (70:17207): 2 wallclock secs ( 0.17 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_range (410:75949): 35 wallclock secs ( 0.87 usr 0.02 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Note: Query took longer then time-limit: 600
Estimating end time based on:
10094 queries in 1442 loops of 10000 loops took 601 seconds
Estimated time for min_max_on_key (70000): 4167 wallclock secs (20.87 usr 4.65 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
9807 queries in 1401 loops of 10000 loops took 601 seconds
Estimated time for min_max_on_key (70000): 4289 wallclock secs (20.56 usr 3.14 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Note: Query took longer then time-limit: 600
Estimating end time based on:
12580 queries in 2516 loops of 10000 loops took 601 seconds
Estimated time for count_on_key (50000): 2388 wallclock secs (13.00 usr 3.06 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
12395 queries in 2479 loops of 10000 loops took 601 seconds
Estimated time for count_on_key (50000): 2424 wallclock secs (16.70 usr 2.42 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_group_on_key_parts (1000:0): 238 wallclock secs ( 1.01 usr 0.03 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_group_on_key_parts (1000:100000): 242 wallclock secs ( 1.19 usr 0.05 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing count(distinct) on the table
Time for count_distinct (1000:2000): 232 wallclock secs ( 0.39 usr 0.08 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_distinct_group_on_key (1000:6000): 169 wallclock secs ( 0.37 usr 0.07 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_distinct_group_on_key_parts (1000:100000): 267 wallclock secs ( 1.11 usr 0.10 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_distinct_group (1000:100000): 268 wallclock secs ( 1.09 usr 0.06 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_distinct_big (1000:10000000): 552 wallclock secs (82.22 usr 2.83 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Estimated total time: 8574 wallclock secs (124.45 usr 11.39 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_distinct (2000:2000): 235 wallclock secs ( 0.76 usr 0.12 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_distinct_group_on_key (1000:6000): 174 wallclock secs ( 0.44 usr 0.11 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_distinct_group_on_key_parts (1000:100000): 270 wallclock secs ( 1.43 usr 0.07 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_distinct_group (1000:100000): 271 wallclock secs ( 1.27 usr 0.10 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_distinct_big (100:1000000): 57 wallclock secs ( 8.24 usr 0.30 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Estimated total time: 8255 wallclock secs (54.76 usr 6.93 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing server 'PostgreSQL version 7.0.2' at 2000-08-16 14:43:33
Testing server 'PostgreSQL version ???' at 2000-12-05 20:46:15
Wisconsin benchmark test
Time for create_table (3): 0 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for create_table (3): 1 wallclock secs ( 0.01 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Inserting data
Time to insert (31000): 791 wallclock secs ( 9.20 usr 1.66 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time to delete_big (1): 1 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time to insert (31000): 793 wallclock secs ( 8.99 usr 1.89 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time to delete_big (1): 0 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Running actual benchmark
Time for wisc_benchmark (114): 16 wallclock secs ( 3.11 usr 0.27 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for wisc_benchmark (114): 18 wallclock secs ( 3.04 usr 0.25 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Total time: 810 wallclock secs (12.32 usr 1.94 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Total time: 813 wallclock secs (12.05 usr 2.14 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
#This file is automaticly generated by crash-me 1.45
#This file is automaticly generated by crash-me 1.54
NEG=yes # update of column= -column
Need_cast_for_null=no # Need to cast NULL for arithmetic
......@@ -18,40 +18,44 @@ alter_drop_unique=no # Alter table drop unique
alter_modify_col=no # Alter table modify column
alter_rename_table=yes # Alter table rename table
atomic_updates=no # atomic updates
automatic_rowid=no # Automatic rowid
automatic_rowid=no # Automatic row id
binary_numbers=no # binary numbers (0b1001)
binary_strings=yes # binary strings (b'0110')
case_insensitive_strings=no # case insensitive compare
case_insensitive_strings=no # Case insensitive compare
char_is_space_filled=yes # char are space filled
column_alias=yes # Column alias
columns_in_group_by=+64 # number of columns in group by
columns_in_order_by=+64 # number of columns in order by
comment_#=no # # as comment
comment_--=yes # -- as comment
comment_--=yes # -- as comment (ANSI)
comment_/**/=yes # /* */ as comment
comment_//=no # // as comment
comment_//=no # // as comment (ANSI)
compute=no # Compute
connections=32 # Simultaneous connections (installation default)
constraint_check=yes # Column constraints
constraint_check_table=yes # Table constraints
constraint_null=yes # NULL constraint (SyBase style)
crash_me_safe=yes # crash me safe
crash_me_version=1.45 # crash me version
crash_me_version=1.54 # crash me version
create_default=yes # default value for column
create_default_func=no # default value function for column
create_if_not_exists=no # create table if not exists
create_index=yes # create index
create_schema=no # Create SCHEMA
create_table_select=no # create table from select
create_table_select=with AS # create table from select
cross_join=yes # cross join (same as from a,b)
date_infinity=no # Supports 'infinity dates
date_last=yes # Supports 9999-12-31 dates
date_one=yes # Supports 0001-01-01 dates
date_with_YY=yes # Supports YY-MM-DD 2000 compilant dates
date_zero=no # Supports 0000-00-00 dates
domains=no # Domains (ANSI SQL)
dont_require_cast_to_float=no # No need to cast from integer to float
double_quotes=yes # Double '' as ' in strings
drop_if_exists=no # drop table if exists
drop_index=yes # drop index
drop_requires_cascade=no # drop table require cascade/restrict
drop_restrict=no # drop table with cascade/restrict
end_colon=yes # allows end ';'
except=yes # except
except_all=no # except all
......@@ -158,6 +162,7 @@ func_extra_version=yes # Function VERSION
func_extra_weekday=no # Function WEEKDAY
func_extra_|=no # Function | (bitwise or)
func_extra_||=no # Function OR as '||'
func_extra_~*=yes # Function ~* (case insensitive compare)
func_odbc_abs=yes # Function ABS
func_odbc_acos=yes # Function ACOS
func_odbc_ascii=yes # Function ASCII
......@@ -178,7 +183,7 @@ func_odbc_dayofweek=no # Function DAYOFWEEK
func_odbc_dayofyear=no # Function DAYOFYEAR
func_odbc_degrees=yes # Function DEGREES
func_odbc_difference=no # Function DIFFERENCE()
func_odbc_exp=no # Function EXP
func_odbc_exp=yes # Function EXP
func_odbc_floor=yes # Function FLOOR
func_odbc_fn_left=no # Function ODBC syntax LEFT & RIGHT
func_odbc_hour=no # Function HOUR
......@@ -240,7 +245,8 @@ func_sql_extract_sql=yes # Function EXTRACT
func_sql_localtime=no # Function LOCALTIME
func_sql_localtimestamp=no # Function LOCALTIMESTAMP
func_sql_lower=yes # Function LOWER
func_sql_nullif=no # Function NULLIF
func_sql_nullif_num=yes # Function NULLIF with numbers
func_sql_nullif_string=no # Function NULLIF with strings
func_sql_octet_length=no # Function OCTET_LENGTH
func_sql_position=yes # Function POSITION
func_sql_searched_case=yes # Function searched CASE
......@@ -270,7 +276,7 @@ func_where_unique=no # Function UNIQUE
functions=yes # Functions
group_by=yes # Group by
group_by_alias=yes # Group by alias
group_by_null=yes # group on column with null values
group_by_null=yes # Group on column with null values
group_by_position=yes # Group by position
group_distinct_functions=yes # Group functions with distinct
group_func_extra_bit_and=no # Group function BIT_AND
......@@ -279,28 +285,33 @@ group_func_extra_count_distinct_list=no # Group function COUNT(DISTINCT expr,exp
group_func_extra_std=no # Group function STD
group_func_extra_stddev=no # Group function STDDEV
group_func_extra_variance=no # Group function VARIANCE
group_func_sql_any=no # Group function ANY
group_func_sql_avg=yes # Group function AVG
group_func_sql_count_*=yes # Group function COUNT (*)
group_func_sql_count_column=yes # Group function COUNT column name
group_func_sql_count_distinct=yes # Group function COUNT(DISTINCT expr)
group_func_sql_every=no # Group function EVERY
group_func_sql_max=yes # Group function MAX on numbers
group_func_sql_max_str=yes # Group function MAX on strings
group_func_sql_min=yes # Group function MIN on numbers
group_func_sql_min_str=yes # Group function MIN on strings
group_func_sql_some=no # Group function SOME
group_func_sql_sum=yes # Group function SUM
group_functions=yes # Group functions
group_on_unused=yes # Group on unused column
has_true_false=yes # TRUE and FALSE
having=yes # Having
having_with_alias=no # Having on alias
having_with_group=yes # Having with group function
hex_numbers=no # hex numbers (0x41)
hex_strings=yes # hex strings (x'1ace')
ignore_end_space=yes # ignore end space in compare
ignore_end_space=yes # Ignore end space in compare
index_in_create=no # index in create table
index_namespace=no # different namespace for index
index_parts=no # index on column part (extension)
inner_join=no # inner join
inner_join=yes # inner join
insert_empty_string=yes # insert empty string
insert_multi_value=no # INSERT with Value lists
insert_select=yes # insert INTO ... SELECT ...
insert_with_set=no # INSERT with set syntax
intersect=yes # intersect
......@@ -343,7 +354,6 @@ multi_null_in_unique=yes # null in unique index
multi_strings=yes # Multiple line strings
multi_table_delete=no # DELETE FROM table1,table2...
multi_table_update=no # Update with many tables
insert_multi_value=no # Value lists in INSERT
natural_join=yes # natural join
natural_join_incompat=yes # natural join (incompatible lists)
natural_left_outer_join=no # natural left outer join
......@@ -352,6 +362,7 @@ null_concat_expr=yes # Is 'a' || NULL = NULL
null_in_index=yes # null in index
null_in_unique=yes # null in unique index
null_num_expr=yes # Is 1+NULL = NULL
nulls_in_unique=yes # null combination in unique index
odbc_left_outer_join=no # left outer join odbc style
operating_system=Linux 2.2.14-5.0 i686 # crash-me tested on
order_by=yes # Order by
......@@ -359,6 +370,7 @@ order_by_alias=yes # Order by alias
order_by_function=yes # Order by function
order_by_position=yes # Order by position
order_by_remember_desc=no # Order by DESC is remembered
order_on_unused=yes # Order by on unused column
primary_key_in_create=yes # primary key in create table
psm_functions=no # PSM functions (ANSI SQL)
psm_modules=no # PSM modules (ANSI SQL)
......@@ -372,6 +384,7 @@ quote_with_"=no # Allows ' and " as string markers
recursive_subqueries=+64 # recursive subqueries
remember_end_space=no # Remembers end space in char()
remember_end_space_varchar=yes # Remembers end space in varchar()
rename_table=no # rename table
repeat_string_size=+8000000 # return string size from function
right_outer_join=no # right outer join
rowid=oid # Type for row id
......@@ -381,15 +394,16 @@ select_limit2=yes # SELECT with LIMIT #,#
select_string_size=16777207 # constant string size in SELECT
select_table_update=yes # Update with sub select
select_without_from=yes # SELECT without FROM
server_version=PostgreSQL 7.0 # server version
server_version=PostgreSQL version 7.0.2 # server version
simple_joins=yes # ANSI SQL simple joins
storage_of_float=round # Storage of float values
subqueries=yes # subqueries
table_alias=yes # Table alias
table_name_case=yes # case independent table names
table_wildcard=yes # Select table_name.*
tempoary_table=yes # temporary tables
temporary_table=yes # temporary tables
transactions=yes # transactions
truncate_table=yes # truncate
type_extra_abstime=yes # Type abstime
type_extra_bfile=no # Type bfile
type_extra_blob=no # Type blob
......@@ -397,6 +411,7 @@ type_extra_bool=yes # Type bool
type_extra_box=yes # Type box
type_extra_byte=no # Type byte
type_extra_char(1_arg)_binary=no # Type char(1 arg) binary
type_extra_cidr=yes # Type cidr
type_extra_circle=yes # Type circle
type_extra_clob=no # Type clob
type_extra_datetime=yes # Type datetime
......@@ -406,6 +421,7 @@ type_extra_float(2_arg)=no # Type float(2 arg)
type_extra_float4=yes # Type float4
type_extra_float8=yes # Type float8
type_extra_image=no # Type image
type_extra_inet=yes # Type inet
type_extra_int(1_arg)_zerofill=no # Type int(1 arg) zerofill
type_extra_int1=no # Type int1
type_extra_int2=yes # Type int2
......@@ -422,6 +438,7 @@ type_extra_long_raw=no # Type long raw
type_extra_long_varbinary=no # Type long varbinary
type_extra_long_varchar(1_arg)=no # Type long varchar(1 arg)
type_extra_lseg=yes # Type lseg
type_extra_macaddr=yes # Type macaddr
type_extra_mediumint=no # Type mediumint
type_extra_mediumtext=no # Type mediumtext
type_extra_middleint=no # Type middleint
......@@ -457,6 +474,7 @@ type_odbc_varbinary(1_arg)=no # Type varbinary(1 arg)
type_sql_bit=yes # Type bit
type_sql_bit(1_arg)=yes # Type bit(1 arg)
type_sql_bit_varying(1_arg)=yes # Type bit varying(1 arg)
type_sql_boolean=yes # Type boolean
type_sql_char(1_arg)=yes # Type char(1 arg)
type_sql_char_varying(1_arg)=yes # Type char varying(1 arg)
type_sql_character(1_arg)=yes # Type character(1 arg)
......
......@@ -581,7 +581,7 @@ sub new
$limits{'table_wildcard'} = 1;
$limits{'max_column_name'} = 32; # Is this true
$limits{'max_columns'} = 1000; # 500 crashes pg 6.3
$limits{'max_tables'} = 65000; # Should be big enough
$limits{'max_tables'} = 5000; # 10000 crashes pg 7.0.2
$limits{'max_conditions'} = 30; # This makes Pg real slow
$limits{'max_index'} = 64; # Is this true ?
$limits{'max_index_parts'} = 16; # Is this true ?
......
/* Copyright (C) 2000 MySQL AB & MySQL Finland AB & TCX DataKonsult AB
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */
......@@ -54,7 +54,7 @@ class ha_berkeley: public handler
ulong changed_rows;
uint primary_key,last_dup_key, hidden_primary_key, version;
bool fixed_length_row, fixed_length_primary_key, key_read;
bool fix_rec_buff_for_blob(ulong length);
bool fix_rec_buff_for_blob(ulong length);
byte current_ident[BDB_HIDDEN_PRIMARY_KEY_LENGTH];
ulong max_row_length(const byte *buf);
......@@ -82,7 +82,7 @@ class ha_berkeley: public handler
HA_REC_NOT_IN_SEQ |
HA_KEYPOS_TO_RNDPOS | HA_READ_ORDER | HA_LASTKEY_ORDER |
HA_LONGLONG_KEYS | HA_NULL_KEY | HA_HAVE_KEY_READ_ONLY |
HA_BLOB_KEY | HA_NOT_EXACT_COUNT |
HA_BLOB_KEY | HA_NOT_EXACT_COUNT |
HA_PRIMARY_KEY_IN_READ_INDEX | HA_DROP_BEFORE_CREATE |
HA_AUTO_PART_KEY),
last_dup_key((uint) -1),version(0)
......@@ -93,8 +93,8 @@ class ha_berkeley: public handler
const char **bas_ext() const;
ulong option_flag() const { return int_option_flag; }
uint max_record_length() const { return HA_MAX_REC_LENGTH; }
uint max_keys() const { return MAX_KEY-1; }
uint max_key_parts() const { return MAX_REF_PARTS; }
uint max_keys() const { return MAX_KEY-1; }
uint max_key_parts() const { return MAX_REF_PARTS; }
uint max_key_length() const { return MAX_KEY_LENGTH; }
uint extra_rec_buf_length() { return BDB_HIDDEN_PRIMARY_KEY_LENGTH; }
ha_rows estimate_number_of_rows();
......
......@@ -297,12 +297,16 @@ bool ha_flush_logs()
return result;
}
/*
This should return ENOENT if the file doesn't exist.
The .frm file will be deleted only if we return 0 or ENOENT
*/
int ha_delete_table(enum db_type table_type, const char *path)
{
handler *file=get_new_handler((TABLE*) 0, table_type);
if (!file)
return -1;
return ENOENT;
int error=file->delete_table(path);
delete file;
return error;
......@@ -620,12 +624,16 @@ uint handler::get_dup_key(int error)
int handler::delete_table(const char *name)
{
int error=0;
for (const char **ext=bas_ext(); *ext ; ext++)
{
if (delete_file(name,*ext,2))
return my_errno;
{
if ((error=errno) != ENOENT)
break;
}
}
return 0;
return error;
}
......
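The rewritten handler::delete_table() above treats a file that is already
gone (ENOENT) as harmless for any extension and only reports other errors,
which is what later lets the caller go on and remove the .frm file. A
stand-alone C sketch of that contract; the extension list and the helper
name delete_by_ext are made up for illustration, not MySQL functions:
#include <errno.h>
#include <stdio.h>
/* Hypothetical extension list; the real server asks each handler via bas_ext() */
static const char *ext_list[]= { ".MYD", ".MYI", 0 };
/* Try to delete the data files for every extension.  A missing file
   (ENOENT) is not an error; any other failure stops the loop and is
   returned.  Returning 0 or ENOENT tells the caller it is safe to go on
   and delete the .frm file. */
int delete_by_ext(const char *base)
{
  const char **ext;
  char path[512];
  int error= 0;
  for (ext= ext_list; *ext; ext++)
  {
    snprintf(path, sizeof(path), "%s%s", base, *ext);
    if (remove(path))                 /* remove() sets errno on failure */
    {
      if ((error= errno) != ENOENT)
        break;                        /* real error: stop and report it */
    }
  }
  return error;
}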
......@@ -81,10 +81,12 @@ static void add_hostname(struct in_addr *in,const char *name)
if ((entry=(host_entry*) malloc(sizeof(host_entry)+length+1)))
{
char *new_name= (char *) (entry+1);
char *new_name;
memcpy_fixed(&entry->ip, &in->s_addr, sizeof(in->s_addr));
memcpy(new_name, name, length); // Should work even if name == NULL
new_name[length]=0; // End of string
if (length)
memcpy(new_name= (char *) (entry+1), name, length+1);
else
new_name=0;
entry->hostname=new_name;
entry->errors=0;
(void) hostname_cache->add(entry);
......
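The hostname cache change above stops the code from copying out of a NULL
name (the old comment's claim that the memcpy "should work even if name ==
NULL" was wrong, and is what produced the '' hostnames mentioned in the
changelog). A self-contained C sketch of the corrected logic, using made-up
struct and function names rather than the real hostname.cc types:
#include <string.h>
#include <stddef.h>
struct cache_entry { char *hostname; };      /* illustrative only */
/* Copy the resolved name into the storage that follows the entry, or
   record a NULL hostname when nothing was resolved, instead of calling
   memcpy() with a NULL source pointer. */
void set_cached_hostname(struct cache_entry *entry, char *storage,
                         const char *name, size_t length)
{
  char *new_name;
  if (length)
    memcpy(new_name= storage, name, length + 1);  /* copy trailing '\0' too */
  else
    new_name= 0;
  entry->hostname= new_name;
}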
......@@ -686,10 +686,9 @@ bool MYSQL_LOG::write(IO_CACHE *cache)
uint length;
my_off_t start_pos=my_b_tell(&log_file);
if (reinit_io_cache(cache, WRITE_CACHE, 0, 0, 0))
if (reinit_io_cache(cache, READ_CACHE, 0, 0, 0))
{
if (!write_error)
sql_print_error(ER(ER_ERROR_ON_WRITE), cache->file_name, errno);
sql_print_error(ER(ER_ERROR_ON_WRITE), cache->file_name, errno);
goto err;
}
while ((length=my_b_fill(cache)))
......@@ -710,8 +709,7 @@ bool MYSQL_LOG::write(IO_CACHE *cache)
}
if (cache->error) // Error on read
{
if (!write_error)
sql_print_error(ER(ER_ERROR_ON_READ), cache->file_name, errno);
sql_print_error(ER(ER_ERROR_ON_READ), cache->file_name, errno);
goto err;
}
}
......
......@@ -198,5 +198,4 @@
"Tabell '%-.64s' är crashad och bör repareras med REPAIR TABLE",
"Tabell '%-.64s' är crashad och senast (automatiska?) reparation misslyckades",
"Warning: Några icke transaktionella tabeller kunde inte återställas vid ROLLBACK",
#ER_TRANS_CACHE_FULL
"Transaktionen krävde mera än 'max_binlog_cache_size' minne. Utöka denna mysqld variabel och försök på nytt",
......@@ -215,7 +215,7 @@ int mysql_delete(THD *thd,TABLE_LIST *table_list,COND *conds,ha_rows limit,
if (options & OPTION_QUICK)
(void) table->file->extra(HA_EXTRA_NORMAL);
using_transactions=table->file->has_transactions();
if (deleted && (error == 0 || !using_transactions))
if (deleted && (error <= 0 || !using_transactions))
{
mysql_update_log.write(thd,thd->query, thd->query_length);
if (mysql_bin_log.is_open())
......
......@@ -256,7 +256,7 @@ int mysql_insert(THD *thd,TABLE_LIST *table_list, List<Item> &fields,
else if (table->next_number_field)
id=table->next_number_field->val_int(); // Return auto_increment value
using_transactions=table->file->has_transactions();
if ((info.copied || info.deleted) && (error == 0 || !using_transactions))
if ((info.copied || info.deleted) && (error <= 0 || !using_transactions))
{
mysql_update_log.write(thd, thd->query, thd->query_length);
if (mysql_bin_log.is_open())
......
......@@ -863,7 +863,8 @@ make_join_statistics(JOIN *join,TABLE_LIST *tables,COND *conds,
else
s->dependent=(table_map) 0;
s->key_dependent=(table_map) 0;
if ((table->system || table->file->records <= 1L) && ! s->dependent)
if ((table->system || table->file->records <= 1) && ! s->dependent &&
!(table->file->option_flag() & HA_NOT_EXACT_COUNT))
{
s->type=JT_SYSTEM;
const_table_map|=table->map;
......@@ -924,7 +925,8 @@ make_join_statistics(JOIN *join,TABLE_LIST *tables,COND *conds,
{
if (s->dependent & ~(const_table_map)) // All dep. must be constants
continue;
if (s->table->file->records <= 1L)
if (s->table->file->records <= 1L &&
!(s->table->file->option_flag() & HA_NOT_EXACT_COUNT))
{ // system table
s->type=JT_SYSTEM;
const_table_map|=s->table->map;
......
......@@ -110,24 +110,25 @@ int mysql_rm_table(THD *thd,TABLE_LIST *tables, my_bool if_exists)
table_type=get_table_type(path);
if (my_delete(path,MYF(0))) /* Delete the table definition file */
if (access(path,F_OK))
{
if (errno != ENOENT || !if_exists)
{
if (!if_exists)
error=1;
if (errno != ENOENT)
{
my_error(ER_CANT_DELETE_FILE,MYF(0),path,errno);
}
}
}
else
{
some_tables_deleted=1;
*fn_ext(path)=0; // Remove extension;
char *end;
*(end=fn_ext(path))=0; // Remove extension
error=ha_delete_table(table_type, path);
if (error == ENOENT && if_exists)
error = 0;
if (!error || error == ENOENT)
{
/* Delete the table definition file */
strmov(end,reg_ext);
if (!(error=my_delete(path,MYF(MY_WME))))
some_tables_deleted=1;
}
}
if (error)
{
......@@ -1427,17 +1428,6 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name,
thd->count_cuted_fields=0; /* Don`t calc cuted fields */
new_table->time_stamp=save_time_stamp;
#if defined( __WIN__) || defined( __EMX__)
/*
We must do the COMMIT here so that we can close and rename the
temporary table (as windows can't rename open tables)
*/
if (ha_commit_stmt(thd))
error=1;
if (ha_commit(thd))
error=1;
#endif
if (table->tmp_table)
{
/* We changed a temporary table */
......@@ -1556,7 +1546,6 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name,
}
}
#if !(defined( __WIN__) || defined( __EMX__))
/* The ALTER TABLE is always in its own transaction */
error = ha_commit_stmt(thd);
if (ha_commit(thd))
......@@ -1567,7 +1556,6 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name,
VOID(pthread_mutex_unlock(&LOCK_open));
goto err;
}
#endif
thd->proc_info="end";
mysql_update_log.write(thd, thd->query,thd->query_length);
......
......@@ -238,7 +238,7 @@ int mysql_update(THD *thd,TABLE_LIST *table_list,List<Item> &fields,
VOID(table->file->extra(HA_EXTRA_READCHECK));
table->time_stamp=save_time_stamp; // Restore auto timestamp pointer
using_transactions=table->file->has_transactions();
if (updated && (error == 0 || !using_transactions))
if (updated && (error <= 0 || !using_transactions))
{
mysql_update_log.write(thd,thd->query,thd->query_length);
if (mysql_bin_log.is_open())
......
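The same one-character relaxation of the error test appears in
sql_delete.cc, sql_insert.cc and sql_update.cc above. Condensed into a
stand-alone helper (the name should_write_log is made up for illustration),
the new condition for writing the update and binary logs is:
/* Log the statement when rows were changed and either no hard error
   occurred (in these functions a positive value is a real handler error,
   so error <= 0 is treated as success) or the table has no transaction
   support. */
int should_write_log(int error, int rows_changed, int using_transactions)
{
  return rows_changed && (error <= 0 || !using_transactions);
}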