Commit 73e34999 authored by unknown

Fixes for innobase usage

Fixed bug when using TEXT columns with BDB tables
Allow LOAD DATA INFILE to use numbers with ENUM and SET columns


BUILD/compile-pentium:
  Added --with-innobase-db
Docs/manual.texi:
  Added more documentation for Innobase and KILL
client/mysqladmin.c:
  Quote database names for CREATE and DROP
mysql-test/install_test_db.sh:
  Don't use innobase, bdb or gemini when installing privilege tables
mysql-test/mysql-test-run.sh:
  Added testing of innobase tables
mysql-test/r/bdb.result:
  Added test of TEXT column bug
mysql-test/t/bdb.test:
  Added test of TEXT column bug
mysql-test/t/innobase.test:
  Cleanup innobase tests
scripts/mysql_install_db.sh:
  Don't use innobase, bdb or gemini when installing privilege tables
sql/field.cc:
  Allow LOAD DATA INFILE to use numbers with ENUM and SET columns
sql/filesort.cc:
  Fixed typo
sql/ha_berkeley.cc:
  Fixed problem with TEXT columns in BDB tables
sql/mysqld.cc:
  Always support the --innobase-data-file-path option
sql/share/swedish/errmsg.OLD:
  Added Swedish error messages
sql/share/swedish/errmsg.txt:
  Added Swedish error messages
sql/sql_base.cc:
  Reset tables after usage (to fix problem with BDB and TEXT columns)
sql/sql_delete.cc:
  Use the generate-table optimization for DELETE if --skip-innobase is used
parent 3ab17885
......@@ -12,5 +12,6 @@ if test -d /usr/local/BerkeleyDB-opt/
then
extra_configs="$extra_configs --with-berkeley-db=/usr/local/BerkeleyDB-opt/"
fi
extra_configs="$extra_configs --with-innobase-db"
. "$path/FINISH.sh"
......@@ -487,7 +487,7 @@ MySQL Table Types
* ISAM:: ISAM tables
* HEAP:: HEAP tables
* BDB:: BDB or Berkeley_db tables
* INNOBASE:: Innobase tables
* INNOBASE::
MyISAM Tables
......@@ -509,6 +509,11 @@ BDB or Berkeley_db Tables
* BDB TODO::
* BDB errors::
INNOBASE Tables
* INNOBASE overview::
* Innobase restrictions::
MySQL Tutorial
* Connecting-disconnecting:: Connecting to and disconnecting from the server
......@@ -575,7 +580,7 @@ Replication in MySQL
* Replication Options:: Replication Options in my.cnf
* Replication SQL:: SQL Commands related to replication
* Replication FAQ:: Frequently Asked Questions about replication
* Troubleshooting Replication:: Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication.
* Troubleshooting Replication:: Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication.
Getting Maximum Performance from MySQL
......@@ -11896,7 +11901,7 @@ The @code{processlist} command displays information about the threads
executing within the server. The @code{kill} command kills server threads.
You can always display or kill your own threads, but you need the
@strong{process} privilege to display or kill threads initiated by other
users.
users. @xref{KILL}.
It is a good idea in general to grant privileges only to those users who need
them, but you should exercise particular caution in granting certain
......@@ -13545,8 +13550,8 @@ in ANSI mode. @xref{ANSI mode}.
@item Alias @tab 255 @tab All characters.
@end multitable
Note that in addition to the above, you can't have ASCII(0) or ASCII(255) in
an identifier.
Note that in addition to the above, you can't have ASCII(0) or ASCII(255) or
the quoting character in an identifier.
Note that if the identifier is a restricted word or contains special characters
you must always quote it with @code{`} when you use it:
......@@ -20421,6 +20426,39 @@ Otherwise, you can see and kill only your own threads.
You can also use the @code{mysqladmin processlist} and @code{mysqladmin kill}
commands to examine and kill threads.
When you do a @code{KILL}, a thread-specific @code{kill flag} is set for
the thread.
In most cases it may take some time for the thread to die, as the kill
flag is only checked at specific intervals:
@itemize @bullet
@item
In @code{SELECT}, @code{ORDER BY} and @code{GROUP BY} loops, the flag is
checked after reading a block of rows. If the kill flag is set, the
statement is aborted.
@item
When doing an @code{ALTER TABLE}, the kill flag is checked before each block
of rows is read from the original table. If the kill flag is set, the command
is aborted and the temporary table is deleted.
@item
When doing an @code{UPDATE} or @code{DELETE}, the kill flag is checked after
each block read and after each updated or deleted row. If the kill flag is
set, the statement is aborted. Note that if you are not using transactions,
the changes will not be rolled back!
@item
@code{GET_LOCK()} will abort and return @code{NULL}.
@item
An @code{INSERT DELAYED} thread will quickly flush all rows it has in
memory and die.
@item
If the thread is in the table lock handler (state: @code{Locked}),
the table lock will be quickly aborted.
@item
If the thread is waiting for free disk space in a @code{write} call, the
write is aborted with a disk-full error message.
@end itemize
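
The polling described above can be pictured with a small sketch (illustrative only, not the actual @strong{MySQL} server code; @code{THD}, @code{read_block()} and @code{handle_block()} are made-up names):

// Sketch: a statement loop that honours a per-thread kill flag.
struct THD { volatile bool killed; };     // flag set by KILL thread_id

int  read_block();                        // hypothetical: rows read, 0 at end
void handle_block(int rows);              // hypothetical: process the rows

static int process_rows(THD *thd)
{
  for (;;)
  {
    int rows= read_block();               // read the next block of rows
    if (rows <= 0)
      break;                              // no more data
    if (thd->killed)
      return -1;                          // kill flag set: abort the statement
    handle_block(rows);
  }
  return 0;                               // statement completed normally
}
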
@findex SHOW DATABASES
@findex SHOW TABLES
@findex SHOW COLUMNS
......@@ -23412,6 +23450,14 @@ not trivial).
@node INNOBASE, , BDB, Table types
@section INNOBASE Tables
@menu
* INNOBASE overview::
* Innobase restrictions::
@end menu
@node INNOBASE overview, Innobase restrictions, INNOBASE, INNOBASE
@subsection INNOBASE Tables overview
Innobase is included in the @strong{MySQL} source distribution starting
from 3.23.34 and will be activated in the @strong{MySQL}-max binary.
......@@ -23591,6 +23637,17 @@ P.O.Box 800
Finland
@end example
@node Innobase restrictions, , INNOBASE overview, INNOBASE
@subsection Some restrictions on @code{Innobase} tables
@itemize @bullet
@item
You can't have a key on a @code{BLOB} or @code{TEXT} column.
@item
@code{DELETE FROM table_name} doesn't re-generate the table but instead
deletes all rows, one by one, which isn't that fast.
@end itemize
@cindex tutorial
@cindex terminal monitor, defined
@cindex monitor, terminal
......@@ -26325,7 +26382,7 @@ tables}.
* Replication Options:: Replication Options in my.cnf
* Replication SQL:: SQL Commands related to replication
* Replication FAQ:: Frequently Asked Questions about replication
* Troubleshooting Replication:: Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication.
* Troubleshooting Replication:: Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication. Troubleshooting Replication.
@end menu
@node Replication Intro, Replication Implementation, Replication, Replication
......@@ -34106,7 +34163,15 @@ option.
@node Communication errors, Full table, Packet too large, Common errors
@subsection Communication Errors / Aborted Connection
The server variable @code{Aborted_clients} is incremented when:
If you find errors like the following in your error log:
@example
010301 14:38:23 Aborted connection 854 to db: 'users' user: 'josh'
@end example
@xref{Error log}.
This means that one of the following has happened:
@itemize @bullet
@item
......@@ -34119,8 +34184,8 @@ VARIABLES}.
The client program ended abruptly in the middle of the transfer.
@end itemize
When the above happens, the mysqld will write a note about an
@code{Aborted connection} in the @code{hostname.err} file. @xref{Error log}.
When the above happens, the server variable @code{Aborted_clients} is
incremented.
The server variable @code{Aborted_connects} is incremented when:
......@@ -41690,6 +41755,9 @@ not yet 100 % confident in this code.
@appendixsubsec Changes in release 3.23.34
@itemize @bullet
@item
Fixed so that @code{LOAD DATA INFILE} can read numeric values into
@code{ENUM} and @code{SET} columns.
@item
Improved error diagnostic for slave thread exit
@item
Fixed bug in @code{ALTER TABLE ... ORDER BY}.
......@@ -41716,6 +41784,11 @@ to the @strong{MySQL} source distribution.
Fixed bug in @code{BDB} tables when using index on multi-part key where a
key part may be @code{NULL}.
@item
Fixed a problem with 'garbage results' when using @code{BDB} tables with
@code{BLOB} or @code{TEXT} fields and joining many tables.
@item
Fixed a problem with @code{BDB} tables and @code{TEXT} columns.
@item
Fixed that @code{mysqlbinlog} writes the timestamp value for each query.
This ensures that one gets the same values for date functions like @code{NOW()}
when using @code{mysqlbinlog} to pipe the queries to another server.
......@@ -28,7 +28,7 @@
#include <my_pthread.h> /* because of signal() */
#endif
#define ADMIN_VERSION "8.16"
#define ADMIN_VERSION "8.17"
#define MAX_MYSQL_VAR 64
#define MAX_TIME_TO_WAIT 3600 /* Wait for shutdown */
#define MAX_TRUNC_LENGTH 3
......@@ -402,32 +402,32 @@ static my_bool execute_commands(MYSQL *mysql,int argc, char **argv)
my_printf_error(0,"Too few arguments to create",MYF(ME_BELL));
return 1;
}
sprintf(buff,"create database %.*s",FN_REFLEN,argv[1]);
sprintf(buff,"create database `%.*s`",FN_REFLEN,argv[1]);
if (mysql_query(mysql,buff))
{
my_printf_error(0,"Create failed; error: '%-.200s'",MYF(ME_BELL),
mysql_error(mysql));
my_printf_error(0,"CREATE DATABASE failed; error: '%-.200s'",
MYF(ME_BELL), mysql_error(mysql));
return 1;
}
else
{
argc--; argv++;
}
argc--; argv++;
break;
}
case ADMIN_DROP:
{
char buff[FN_REFLEN+20];
if (argc < 2)
{
my_printf_error(0,"Too few arguments to drop",MYF(ME_BELL));
return 1;
}
if (drop_db(mysql,argv[1]))
return 1;
else
sprintf(buff,"drop database `%.*s`",FN_REFLEN,argv[1]);
if (mysql_query(mysql,buff))
{
argc--; argv++;
my_printf_error(0,"DROP DATABASE failed; error: '%-.200s'",
MYF(ME_BELL), mysql_error(mysql));
return 1;
}
argc--; argv++;
break;
}
case ADMIN_SHUTDOWN:
......
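The backtick quoting added above lets mysqladmin create/drop work with database names that are reserved words or contain unusual characters. A name that itself contains a backtick would still break the generated statement; a minimal sketch of a more defensive helper (hypothetical, not part of this commit) that doubles embedded backticks:

/* Sketch only: backtick-quote an identifier, doubling embedded backticks.
   Assumes the destination buffer has room for 2*strlen(name) + 3 bytes. */
static void quote_identifier(char *to, const char *name)
{
  *to++= '`';
  for (; *name; name++)
  {
    if (*name == '`')
      *to++= '`';                       /* write the backtick twice */
    *to++= *name;
  }
  *to++= '`';
  *to= '\0';
}

With such a helper the name would be quoted into a separate buffer first and the statement then built with a plain %s format.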
......@@ -193,7 +193,7 @@ then
fi
if $execdir/mysqld --no-defaults --bootstrap --skip-grant-tables \
--basedir=$basedir --datadir=$ldata << END_OF_DATA
--basedir=$basedir --datadir=$ldata --skip-innobase --skip-bdb --skip-gemini << END_OF_DATA
use mysql;
$c_d
$i_d
......
......@@ -358,6 +358,7 @@ start_master()
--core \
--tmpdir=$MYSQL_TMP_DIR \
--language=english \
--innobase_data_file_path=ibdata1:50M \
$SMALL_SERVER \
$EXTRA_MASTER_OPT $EXTRA_MASTER_MYSQLD_OPT"
if [ x$DO_DDD = x1 ]
......
......@@ -481,3 +481,5 @@ i j
1 2
i j
1 2
build_path
current
......@@ -429,4 +429,302 @@ create index ax1 on t1 (i,j);
select * from t1 where i=1 and j=2;
drop table t1;
#
# Test with CONST tables and TEXT columns
# This gave a wrong result because the row information was freed too early
#
drop table if exists t1, t2, t3, t4, t5, t6, t7;
create table t1
(
branch_id int auto_increment primary key,
branch_name varchar(255) not null,
branch_active int not null default 1,
unique branch_name(branch_name),
index branch_active(branch_active)
) type=bdb;
drop table if exists t2 ;
create table t2
(
target_id int auto_increment primary key,
target_name varchar(255) not null,
target_active int not null default 1,
unique target_name(target_name),
index target_active(target_active)
) type=bdb;
drop table if exists t3 ;
create table t3
(
platform_id int auto_increment primary key,
platform_name varchar(255) not null,
platform_active int not null default 1,
unique platform_name(platform_name),
index platform_active(platform_active)
) type=bdb;
drop table if exists t4 ;
create table t4
(
product_id int auto_increment primary key,
product_name varchar(255) not null,
version_file varchar(255) not null,
product_active int not null default 1,
unique product_name(product_name),
index product_active(product_active)
) type=bdb;
drop table if exists t5 ;
create table t5
(
product_file_id int auto_increment primary key,
product_id int not null,
file_name varchar(255) not null,
/* cvs module used to find the file version */
module_name varchar(255) not null,
/* flag whether the file is still included in the product */
file_included int not null default 1,
unique product_file(product_id,file_name),
index file_included(file_included)
) type=bdb;
drop table if exists t6 ;
create table t6
(
file_platform_id int auto_increment primary key,
product_file_id int not null,
platform_id int not null,
branch_id int not null,
/* filename in the build system */
build_filename varchar(255) not null,
/* default filename in the build archive */
archive_filename varchar(255) not null,
unique file_platform(product_file_id,platform_id,branch_id)
) type=bdb;
drop table if exists ba_archive ;
create table ba_archive
(
archive_id int auto_increment primary key,
branch_id int not null,
target_id int not null,
platform_id int not null,
product_id int not null,
status_id int not null default 1,
unique archive(branch_id,target_id,platform_id,product_id),
index status_id(status_id)
) type=bdb;
drop table if exists t7 ;
create table t7
(
build_id int auto_increment primary key,
branch_id int not null,
target_id int not null,
build_number int not null,
build_date date not null,
/* build system tag, e.g. 'rmanight-022301-1779' */
build_tag varchar(255) not null,
/* path relative to the build archive root, e.g. 'current' */
build_path text not null,
unique build(branch_id,target_id,build_number)
) type=bdb;
drop table if exists t4_build ;
create table t4_build
(
product_build_id int auto_increment primary key,
build_id int not null,
product_id int not null,
platform_id int not null,
/* flag whether this is a released build */
product_release int not null default 0,
/* user-defined tag, e.g. 'RealPlayer 8.0' */
release_tag varchar(255) not null,
unique product_build(build_id,product_id,platform_id),
index product_release(product_release),
index release_tag(release_tag)
) type=bdb;
drop table if exists t7_file ;
create table t7_file
(
build_file_id int auto_increment primary key,
product_build_id int not null,
product_file_id int not null,
/* actual filename in the build archive */
filename text not null,
/* actual path in the build archive */
file_path text not null,
/* file version string, e.g. '8.0.1.368' */
file_version varchar(255) not null,
unique build_file(product_build_id,product_file_id),
index file_version(file_version)
) type=bdb;
drop table if exists ba_status ;
create table ba_status
(
status_id int auto_increment primary key,
status_name varchar(255) not null,
status_desc text not null
) type=bdb;
insert into ba_status
(status_name, status_desc)
values
('new', 'This item has been newly added.'),
('archived', 'This item is currently archived.'),
('not archived', 'This item is currently not archived.'),
('obsolete', 'This item is obsolete.'),
('unknown', 'The status of this item is unknown.') ;
insert into t1 (branch_name)
values ('RealMedia');
insert into t1 (branch_name)
values ('RP8REV');
insert into t1 (branch_name)
values ('SERVER_8_0_GOLD');
insert into t2 (target_name)
values ('rmanight');
insert into t2 (target_name)
values ('playerall');
insert into t2 (target_name)
values ('servproxyall');
insert into t3 (platform_name)
values ('linux-2.0-libc6-i386');
insert into t3 (platform_name)
values ('win32-i386');
insert into t4 (product_name, version_file)
values ('realserver', 'servinst');
insert into t4 (product_name, version_file)
values ('realproxy', 'prxyinst');
insert into t4 (product_name, version_file)
values ('realplayer', 'playinst');
insert into t4 (product_name, version_file)
values ('plusplayer', 'plusinst');
create temporary table tmp1
select branch_id, target_id, platform_id, product_id
from t1, t2, t3, t4 ;
create temporary table tmp2
select tmp1.branch_id, tmp1.target_id, tmp1.platform_id, tmp1.product_id
from tmp1 left join ba_archive
using (branch_id,target_id,platform_id,product_id)
where ba_archive.archive_id is null ;
insert into ba_archive
(branch_id, target_id, platform_id, product_id, status_id)
select branch_id, target_id, platform_id, product_id, 1
from tmp2 ;
drop table tmp1 ;
drop table tmp2 ;
insert into t5 (product_id, file_name, module_name)
values (1, 'servinst', 'server');
insert into t5 (product_id, file_name, module_name)
values (2, 'prxyinst', 'server');
insert into t5 (product_id, file_name, module_name)
values (3, 'playinst', 'rpapp');
insert into t5 (product_id, file_name, module_name)
values (4, 'plusinst', 'rpapp');
insert into t6
(product_file_id,platform_id,branch_id,build_filename,archive_filename)
values (1, 2, 3, 'servinst.exe', 'win32-servinst.exe');
insert into t6
(product_file_id,platform_id,branch_id,build_filename,archive_filename)
values (1, 1, 3, 'v80_linux-2.0-libc6-i386_servinst.bin', 'linux2-servinst.exe');
insert into t6
(product_file_id,platform_id,branch_id,build_filename,archive_filename)
values (3, 2, 2, 'playinst.exe', 'win32-playinst.exe');
insert into t6
(product_file_id,platform_id,branch_id,build_filename,archive_filename)
values (4, 2, 2, 'playinst.exe', 'win32-playinst.exe');
insert into t7
(branch_id,target_id,build_number,build_tag,build_date,build_path)
values (2, 2, 1071, 'playerall-022101-1071', '2001-02-21', 'current');
insert into t7
(branch_id,target_id,build_number,build_tag,build_date,build_path)
values (2, 2, 1072, 'playerall-022201-1072', '2001-02-22', 'current');
insert into t7
(branch_id,target_id,build_number,build_tag,build_date,build_path)
values (3, 3, 388, 'servproxyall-022201-388', '2001-02-22', 'current');
insert into t7
(branch_id,target_id,build_number,build_tag,build_date,build_path)
values (3, 3, 389, 'servproxyall-022301-389', '2001-02-23', 'current');
insert into t7
(branch_id,target_id,build_number,build_tag,build_date,build_path)
values (4, 4, 100, 'foo target-010101-100', '2001-01-01', 'current');
insert into t4_build
(build_id, product_id, platform_id)
values (1, 3, 2);
insert into t4_build
(build_id, product_id, platform_id)
values (2, 3, 2);
insert into t4_build
(build_id, product_id, platform_id)
values (3, 1, 2);
insert into t4_build
(build_id, product_id, platform_id)
values (4, 1, 2);
insert into t4_build
(build_id, product_id, platform_id)
values (5, 5, 3);
insert into t7_file
(product_build_id, product_file_id, filename, file_path, file_version)
values (1, 3, 'win32-playinst.exe', 'RP8REV/current/playerall-022101-1071/win32-i386', '8.0.3.263');
insert into t7_file
(product_build_id, product_file_id, filename, file_path, file_version)
values (5, 5, 'file1.exe', 'foo branch/current/foo target-022101-1071/foo platform', 'version 1');
insert into t7_file
(product_build_id, product_file_id, filename, file_path, file_version)
values (5, 6, 'file2.exe', 'foo branch/current/foo target-022101-1071/foo platform', 'version 2');
update ba_archive
set status_id=2
where branch_id=2 and target_id=2 and platform_id=2 and product_id=1;
select t7.build_path
from
t1,
t7,
t2,
t3,
t4,
t5,
t6
where
t7.branch_id = t1.branch_id and
t7.target_id = t2.target_id and
t5.product_id = t4.product_id and
t6.product_file_id = t5.product_file_id and
t6.platform_id = t3.platform_id and
t6.branch_id = t6.branch_id and
t7.build_id = 1 and
t4.product_id = 3 and
t5.file_name = 'playinst' and
t3.platform_id = 2;
drop table t1, t2, t3, t4, t5, t6,t7;
......@@ -33,7 +33,7 @@ INSERT INTO t1 VALUES (1,0,0),(3,1,1),(4,1,1),(8,2,2),(9,2,2),(17,3,2),(22,4,2),
update t1 set parent_id=parent_id+100;
select * from t1 where parent_id=102;
update t1 set id=id+1000;
-- error 1062
-- error 1062,1022
update t1 set id=1024 where id=1009;
select * from t1;
update ignore t1 set id=id+1; # This will change all rows
......@@ -95,22 +95,6 @@ insert into t1 values (1,""), (2,"testing");
select * from t1 where a = 1;
drop table t1;
#
# Test auto_increment on sub key
#
create table t1 (a char(10) not null, b int not null auto_increment, primary key(a,b)) type=innobase;
insert into t1 values ("a",1),("b",2),("a",2),("c",1);
insert into t1 values ("a",NULL),("b",NULL),("c",NULL),("e",NULL);
insert into t1 (a) values ("a"),("b"),("c"),("d");
insert into t1 (a) values ('k'),('d');
insert into t1 (a) values ("a");
insert into t1 values ("d",last_insert_id());
select * from t1;
flush tables;
select count(*) from t1;
drop table t1;
#
# Test rollback
#
......@@ -391,20 +375,6 @@ update t1 set a=5 where a=1;
select a from t1;
drop table t1;
#
# Test key on blob with null values
#
create table t1 (b blob, i int, key (b(100)), key (i), key (i, b(20))) type=innobase;
insert into t1 values ('this is a blob', 1), (null, -1), (null, null),("",1),("",2),("",3);
select b from t1 where b = 'this is a blob';
select * from t1 where b like 't%';
select b, i from t1 where b is not null;
select * from t1 where b is null and i > 0;
select * from t1 where i is NULL;
update t1 set b='updated' where i=1;
select * from t1;
drop table t1;
#
# Test with variable length primary key
#
......
......@@ -282,7 +282,7 @@ fi
echo "Installing all prepared tables"
if eval "$execdir/mysqld $defaults --bootstrap --skip-grant-tables \
--basedir=$basedir --datadir=$ldata $args" << END_OF_DATA
--basedir=$basedir --datadir=$ldata --skip-innobase --skip-gemini --skip-bdb $args" << END_OF_DATA
use mysql;
$c_d
$i_d
......
......@@ -4254,15 +4254,30 @@ uint find_enum(TYPELIB *lib,const char *x, uint length)
void Field_enum::store(const char *from,uint length)
{
uint tmp=find_enum(typelib,from,length);
if (!tmp)
{
if (!tmp)
if (length < 6) // Can't be more than 99999 enums
{
current_thd->cuted_fields++;
Field_enum::store_type((longlong) 0);
/* This is for reading numbers with LOAD DATA INFILE */
char buff[7], *end;
const char *conv=from;
if (from[length])
{
strmake(buff, from, length);
conv=buff;
}
my_errno=0;
tmp=strtoul(conv,&end,10);
if (my_errno || end != conv+length || tmp > typelib->count)
{
tmp=0;
current_thd->cuted_fields++;
}
}
else
store_type((ulonglong) tmp);
current_thd->cuted_fields++;
}
store_type((ulonglong) tmp);
}
......@@ -4430,7 +4445,26 @@ ulonglong find_set(TYPELIB *lib,const char *x,uint length)
void Field_set::store(const char *from,uint length)
{
store_type(find_set(typelib,from,length));
ulonglong tmp=find_set(typelib,from,length);
if (!tmp && length && length < 22)
{
/* This is for reading numbers with LOAD DATA INFILE */
char buff[22], *end;
const char *conv=from;
if (from[length])
{
strmake(buff, from, length);
conv=buff;
}
my_errno=0;
tmp=strtoull(conv,&end,10);
if (my_errno || end != conv+length ||
tmp > (ulonglong) (((longlong) 1 << typelib->count) - (longlong) 1))
tmp=0;
else
current_thd->cuted_fields--; // Remove warning from find_set
}
store_type(tmp);
}
......
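Both Field_enum::store() and Field_set::store() above use the same fallback: if the token read by LOAD DATA INFILE is not a member name, try to parse it as a number and check it against the number of members. A standalone sketch of the ENUM case (the function name and error handling are illustrative, not the patch itself):

#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

/* Returns the 1-based ENUM index encoded in 'from', or 0 (the ENUM error
   value) if the token is not a valid number in the range 1..count. */
static unsigned long enum_value_from_number(const char *from, size_t length,
                                            unsigned long count)
{
  char buff[7];                         /* an ENUM has at most 65535 members */
  if (length == 0 || length >= sizeof(buff))
    return 0;
  memcpy(buff, from, length);           /* the token may not be 0-terminated */
  buff[length]= '\0';

  char *end;
  errno= 0;
  unsigned long val= strtoul(buff, &end, 10);
  if (errno || end != buff + length || val > count)
    return 0;                           /* not a number, junk, or out of range */
  return val;
}

In the patch itself a failed conversion also increments cuted_fields so the client gets a warning, and the SET case uses strtoull with a bitmask limit instead of a simple member count.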
......@@ -137,7 +137,8 @@ ha_rows filesort(TABLE **table, SORT_FIELD *sortorder, uint s_length,
#ifdef CAN_TRUST_RANGE
else if (select && select->quick && select->quick->records > 0L)
{
VOID(ha_info(&table[0]->form,0)); /* Get record-count */
/* Get record-count */
table[0]->file->info(HA_STATUS_VARIABLE | HA_STATUS_NO_LOCK);
records=min((ha_rows) (select->quick->records*2+EXTRA_RECORDS*2),
table[0]->file->records)+EXTRA_RECORDS;
selected_records_file=0;
......
......@@ -21,7 +21,7 @@
- Don't automatically pack all string keys (To do this we need to modify
CREATE TABLE so that one can use the pack_keys argument per key).
- An argument to pack_key that we don't want compression.
- DB_DBT_USERMEN should be used for fixed length tables
- DB_DBT_USERMEM should be used for fixed length tables
We will need an updated Berkeley DB version for this.
- Killing threads that has got a 'deadlock'
- SHOW TABLE STATUS should give more information about the table.
......@@ -585,6 +585,7 @@ int ha_berkeley::close(void)
my_free(rec_buff,MYF(MY_ALLOW_ZERO_PTR));
my_free(alloc_ptr,MYF(MY_ALLOW_ZERO_PTR));
ha_berkeley::extra(HA_EXTRA_RESET); // current_row buffer
DBUG_RETURN(free_share(share,table, hidden_primary_key,0));
}
......@@ -1587,6 +1588,15 @@ int ha_berkeley::extra(enum ha_extra_function operation)
case HA_EXTRA_RESET_STATE:
key_read=0;
using_ignore=0;
if (current_row.flags & (DB_DBT_MALLOC | DB_DBT_REALLOC))
{
current_row.flags=0;
if (current_row.data)
{
free(current_row.data);
current_row.data=0;
}
}
break;
case HA_EXTRA_KEYREAD:
key_read=1; // Query satisfied with key
......@@ -1662,17 +1672,7 @@ int ha_berkeley::external_lock(THD *thd, int lock_type)
else
{
lock.type=TL_UNLOCK; // Unlocked
if (current_row.flags & (DB_DBT_MALLOC | DB_DBT_REALLOC))
{
current_row.flags=0;
if (current_row.data)
{
free(current_row.data);
current_row.data=0;
}
}
thread_safe_add(share->rows, changed_rows, &share->mutex);
current_row.data=0;
if (!--thd->transaction.bdb_lock_count)
{
if (thd->transaction.stmt.bdb_tid)
......
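The two ha_berkeley.cc hunks above move the freeing of current_row from external_lock() into extra(HA_EXTRA_RESET), so the buffer is also released when a table is merely reset between statements (the BDB + TEXT fix). The underlying contract comes from Berkeley DB: when a DBT has DB_DBT_MALLOC or DB_DBT_REALLOC set, the library returns row data in malloc()ed memory that the caller must free. A sketch of that cleanup (simplified names, assumes Berkeley DB's <db.h>):

#include <db.h>
#include <stdlib.h>

/* Sketch only: release a row buffer whose data was allocated by Berkeley DB. */
static void reset_row_buffer(DBT *row)
{
  if (row->flags & (DB_DBT_MALLOC | DB_DBT_REALLOC))
  {
    row->flags= 0;
    if (row->data)
    {
      free(row->data);                  /* memory was malloc()ed by the library */
      row->data= 0;
    }
  }
}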
......@@ -2458,11 +2458,13 @@ static struct option long_options[] = {
{"enable-locking", no_argument, 0, (int) OPT_ENABLE_LOCK},
{"exit-info", optional_argument, 0, 'T'},
{"flush", no_argument, 0, (int) OPT_FLUSH},
/* We must always support this option to make scripts like mysqltest easier
to write */
{"innobase_data_file_path", required_argument, 0,
OPT_INNOBASE_DATA_FILE_PATH},
#ifdef HAVE_INNOBASE_DB
{"innobase_data_home_dir", required_argument, 0,
OPT_INNOBASE_DATA_HOME_DIR},
{"innobase_data_file_path", required_argument, 0,
OPT_INNOBASE_DATA_FILE_PATH},
{"innobase_log_group_home_dir", required_argument, 0,
OPT_INNOBASE_LOG_GROUP_HOME_DIR},
{"innobase_log_arch_dir", required_argument, 0,
......@@ -3487,15 +3489,17 @@ static void get_options(int argc,char **argv)
#ifdef HAVE_INNOBASE_DB
innobase_skip=1;
have_innobase=SHOW_OPTION_DISABLED;
#endif
break;
case OPT_INNOBASE_DATA_FILE_PATH:
#ifdef HAVE_INNOBASE_DB
innobase_data_file_path=optarg;
#endif
break;
#ifdef HAVE_INNOBASE_DB
case OPT_INNOBASE_DATA_HOME_DIR:
innobase_data_home_dir=optarg;
break;
case OPT_INNOBASE_DATA_FILE_PATH:
innobase_data_file_path=optarg;
break;
case OPT_INNOBASE_LOG_GROUP_HOME_DIR:
innobase_log_group_home_dir=optarg;
break;
......
......@@ -205,3 +205,4 @@
"Kunde inte initializera replications-strukturerna. Kontrollera privilegerna för 'master.info'",
"Kunde inte starta en tråd för replikering",
"Användare '%-.64s' har redan 'max_user_connections' aktiva inloggningar",
"Du kan endast använda konstant-uttryck med SET",
......@@ -205,4 +205,4 @@
"Kunde inte initializera replications-strukturerna. Kontrollera privilegerna för 'master.info'",
"Kunde inte starta en tråd för replikering",
"Användare '%-.64s' har redan 'max_user_connections' aktiva inloggningar",
"You may only use constant expressions with SET",
"Du kan endast använda konstant-uttryck med SET",
......@@ -446,6 +446,11 @@ void close_thread_tables(THD *thd, bool locked)
table->flush_version=flush_version;
table->file->extra(HA_EXTRA_FLUSH);
}
else
{
// Free memory and reset for next loop
table->file->extra(HA_EXTRA_RESET);
}
table->in_use=0;
if (unused_tables)
{
......
......@@ -18,6 +18,7 @@
/* Delete of records */
#include "mysql_priv.h"
#include "ha_innobase.h"
/*
Optimize delete of all rows by doing a full generate of the table
......@@ -142,9 +143,10 @@ int mysql_delete(THD *thd,TABLE_LIST *table_list,COND *conds,ha_rows limit,
(SPECIAL_NO_NEW_FUNC | SPECIAL_SAFE_MODE)) &&
!(thd->options &
(OPTION_NOT_AUTO_COMMIT | OPTION_BEGIN)));
/* We need to add code to not generate table based on the table type */
#ifdef HAVE_INNOBASE_DB
use_generate_table=0;
/* We need to add code to not generate table based on the table type */
if (!innobase_skip)
use_generate_table=0; // Innobase can't use re-generate table
#endif
if (use_generate_table && ! thd->open_tables)
{
......
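The sql_delete.cc change matches the commit-message note about the generate-table path: DELETE without a WHERE clause can be optimized by re-creating (generating) the table, but that shortcut must stay off while Innobase is active, since Innobase tables cannot be re-generated this way. Before the patch the optimization was disabled whenever Innobase was compiled in; now --skip-innobase turns it back on. A simplified sketch of the decision (hypothetical helper, not the real code, which also checks SQL options and open tables):

// Sketch only: may DELETE (no WHERE, no LIMIT) re-create the table instead
// of deleting rows one by one?
static bool may_use_generate_table(bool innobase_compiled_in,
                                   bool innobase_skipped)
{
  // Innobase keeps its own row bookkeeping, so the shortcut is only safe
  // when Innobase is not active in this server.
  if (innobase_compiled_in && !innobase_skipped)
    return false;
  return true;
}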