Hi David,
thanks for your very helpful answer.
On Fri, 7 Jan 2005, David Smith wrote:
> For problem (1) the tables can be analyzed with:
>
> analyze table acls;
> analyze table events;
> analyze table jobs;
> analyze table long_fields;
> analyze table server_state;
> analyze table short_fields;
> analyze table states;
> analyze table status_tags;
> analyze table users;
Okay, that worked without any problems, although it took several hours to
finish.
Should this be done regularly? I.e. should I make a crontab entry?
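If so, something like this weekly entry is what I have in mind (the
database name "bookkeeping" and the credentials file are just
placeholders for whatever the real setup uses):

```
# Re-analyze all tables every Sunday at 03:00.
# "bookkeeping" and the .my.cnf path are placeholders.
0 3 * * 0  mysql --defaults-extra-file=/root/.my.cnf bookkeeping -e "ANALYZE TABLE acls, events, jobs, long_fields, server_state, short_fields, states, status_tags, users;"
```

Since ANALYZE TABLE locks MyISAM tables while it runs, an off-peak hour
seems sensible.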
> For (2), setting an explicit table size (in terms of a maximum number of
> rows) removes the limitation. For example:
>
> ALTER TABLE short_fields MAX_ROWS=1000000000;
> ALTER TABLE long_fields MAX_ROWS=55000000;
> ALTER TABLE states MAX_ROWS=9500000;
> ALTER TABLE events MAX_ROWS=175000000;
>
> which should allow the database to hold up to about 9.5M events. (At which
> time it will be about 220Gb in size). With the default limit the critical
> number of jobs is ~550,000.
Do you really suggest letting the DB grow to 220 gigabytes? Wouldn't this
slow down MySQL even further?
My RB doesn't have such a large HDD, although it would be no problem to
install one. Is this really needed / recommended?
Thanks again and
best regards,
Torsten
<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>
<> <>
<> Torsten Harenberg [log in to unmask] <>
<> Bergische Universitaet <>
<> FB C - Physik Tel.: +49 (0)202 439-3521 <>
<> Gaussstr. 20 Fax : +49 (0)202 439-2811 <>
<> 42097 Wuppertal <>
<> <>
<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>