Hi, as far as I understand, with the InnoDB engine there should be no
problem with the table size, but I'm not sure about that.
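If you want to check the actual on-disk size anyway, and your MySQL is
recent enough to have information_schema (5.0 or later; this query is
just a sketch under that assumption), something like this shows both
databases:
# assumes MySQL >= 5.0 (information_schema); db names as in your setup
mysql -u lbserver -e "select table_schema, table_name, round((data_length+index_length)/1024/1024) as size_mb from information_schema.tables where table_schema in ('lbproxy','lbserver20')"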
In any case, if you installed the machine with yaim there should be a
cron job (glite-lb-purger) in /etc/cron.d/.
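You can inspect what the cron job actually runs with (the exact file
name is my assumption; list /etc/cron.d/ if it differs on your node):
cat /etc/cron.d/glite-lb-purger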
Documentation about this purger is available here:
http://egee.cesnet.cz/mediawiki/index.php/LBServerPurge
Note that if you did not enable the cron purger from the beginning and
you try to run it on a big db, the purger can crash, but I think this
is explained on that page.
For the LBProxy, on the other hand, there is no purger available, but
since the LBProxy is a local cache for the WMS it is self-purging, in
the sense that job events are purged as soon as the WMS no longer needs
them (i.e. when the job is in a final status).
The problem is that the Done status is not considered final enough to
trigger the LBProxy purging. So if the VOs using your WMS tend not to
retrieve their job outputs (which would trigger the Cleared status),
your LBProxy will keep growing. (This is what happened to us on the
ATLAS LBs.)
In theory, if you use glite-wms-purger to clear old sandboxes, it
should log a Clear event to the LB with reason code = TIMEOUT, allowing
the LB purger to remove those jobs from the LBProxy, but I never
understood whether this works correctly, because I purge old sandboxes
with home-made scripts.
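For what it's worth, such a home-made script can start from something
as simple as listing the sandbox directories that have not been touched
for a while; /var/glite/SandboxDir below is an assumed default (check
your WMS configuration) and I only print the candidates rather than
deleting them:
# path is an assumption; -mtime +30 = untouched for more than 30 days
find /var/glite/SandboxDir -mindepth 2 -maxdepth 2 -type d -mtime +30 -print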
Anyway if you run:
mysql -u lbserver lbproxy -e "select count(*) from states where status>6
and status <10"
you get the number of jobs in a final status in the LBProxy db
(remember that final means Aborted, Canceled and Cleared).
This should be zero.
mysql -u lbserver lbproxy -e "select count(*) from states where status=6"
you get the number of jobs in Done status (both ok and failed)
mysql -u lbserver lbproxy -e "select count(*) from states"
you get the total number of jobs in the LBProxy server
If the number of Done jobs is close to the total number of jobs, it
means that something in the LBProxy purging chain is not working.
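As a quick alternative to the separate counts, you can also get the
whole breakdown in one go (statuses are printed as their numeric codes):
mysql -u lbserver lbproxy -e "select status, count(*) from states group by status"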
If this is creating problems, I think that only the developers at
CESNET can help you.
These are the numbers for an LBProxy at CNAF:
mysql> select count(*) from states;
+----------+
| count(*) |
+----------+
|   785931 |
+----------+
1 row in set (0.00 sec)
Most of them are Done jobs and we do not observe any problems in the
WMS functionality.
Hope this helps.
Daniele.
Condurache, C (Catalin) wrote:
> Hi,
>
> Could someone give some advice here regarding MySQL database
> maintenance on a glite-WMSLB? It is about the lbproxy and lbserver20
> databases, which are created in InnoDB format.
>
> On an lcg-RB system, every time the lbserver20 database (MyISAM
> format) gets to nearly 4GB (in fact the long_fields.MYD file), I
> physically move the database directory /var/lib/mysql/lbserver20
> aside (for archiving purposes) and recreate a new schema.
>
> What would be a proper procedure on a glite-WMSLB system?
>
> Many thanks,
> Catalin Condurache
> RAL Tier1 Grid Services Team
>