Xaver Biton said:
> hi,
> Sorry if I didn't explain the concept clearly enough. What I mean is, for
> example, that the program has to insert 30,000 records into a table; if
> record 25,550 causes an error, then the whole process breaks. I want to
> avoid this.
Assuming that you are using RaiseError, you could do something like:
# $dbh1 and $dbh2 are DBI handles connected with RaiseError => 1
my $get    = $dbh1->prepare('select blah1, blah2 from table1');
my $insert = $dbh2->prepare('insert into table1a (blah1, blah2) values (?, ?)');
my $backup = $dbh2->prepare('insert into table1a_error (blah1, blah2) values (?, ?)');

$get->execute();
while ( my $stuff = $get->fetch() ) {
    # trap a failure on any single row so the loop can keep going
    eval { $insert->execute(@$stuff) };
    if ($@) {
        warn $@;
        $backup->execute(@$stuff);   # park the bad row in the error table
    }
}
Since the insert and the backup go into the same DB, I am assuming that if the
failure is due to a database crash or something similar, then the "backup"
insert will also fail, causing your script to die. If I didn't assume that, you
would want to check that the error in $@ is actually a data-integrity error
rather than some other (unrecoverable) kind of error.
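If you did want to make that distinction, here is a rough sketch of one way to
do it with DBD::mysql, where $insert->err returns the server's numeric error
code after a failed execute. The specific codes treated as recoverable below
are only examples, not a complete list:

# Example only: a few MySQL error codes that usually mean "bad row",
# not "broken connection".  Adjust to your own situation.
my %recoverable = (
    1062 => 'duplicate key',
    1048 => 'column cannot be null',
    1452 => 'foreign key constraint fails',
);

eval { $insert->execute(@$stuff) };
if ($@) {
    if ( $recoverable{ $insert->err || 0 } ) {
        warn "skipping bad row: $@";
        $backup->execute(@$stuff);
    }
    else {
        die $@;    # lost connection, disk full, etc. -- don't keep going
    }
}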
> I would like to insert the records which fail into a separate table and let
> the program finish the work.
>
> The biggest problem is that I must check the referential integrity of the
> data in the new database.
I didn't know that MySQL supported referential integrity checks.
> The old database has no referential integrity, and many records have no
> reference. That is also why I must transfer about 20 GB of data, and I
> can't stop and restart with such a volume.
I'd be tempted to implement this checking in Perl proper, not MySQL. That is,
assuming all the keys of the "parent" table can fit into a hash at one time.
Have Perl write two data dumps, one of known-good rows and the other of
known-bad, and then use MySQL's bulk loading tools to load the good ones.
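As a rough sketch of that approach -- the connection parameters, table names
(parent_table, child_table), and file names here are made up, and it assumes
the whole parent key set fits in memory and that the data contains no embedded
tabs or newlines:

use strict;
use warnings;
use DBI;

my $old = DBI->connect('dbi:mysql:olddb', 'user', 'pass', { RaiseError => 1 });

# 1. Pull every key of the "parent" table into a hash.
my %parent;
my $keys = $old->prepare('select id from parent_table');
$keys->execute();
while ( my ($id) = $keys->fetchrow_array ) {
    $parent{$id} = 1;
}

# 2. Split the child rows into known-good and known-bad dump files.
open my $good, '>', 'good.tsv' or die $!;
open my $bad,  '>', 'bad.tsv'  or die $!;

my $rows = $old->prepare('select parent_id, blah1, blah2 from child_table');
$rows->execute();
while ( my $row = $rows->fetch ) {
    my $fh = ( defined $row->[0] && $parent{ $row->[0] } ) ? $good : $bad;
    print {$fh} join( "\t", map { defined $_ ? $_ : '\N' } @$row ), "\n";
}
close $good;
close $bad;

# 3. Bulk-load only the good file, e.g. with mysqlimport or
#    LOAD DATA INFILE 'good.tsv' INTO TABLE child_table;

At 20 GB, a bulk load of the pre-screened file should also be much faster than
row-by-row inserts.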
Xho