I don't know how much the audit-viewer tool is used by folks with
substantial amounts of data, but in my experience it is nearly
unusable on our system. I appreciate that it does a lot really well;
however, it takes minutes to load our data on startup, and adding
more filter tabs seems to hurt performance as well.
This particular machine exists only to collect and process
audit/prelude data; nothing but prelude- and audit-related processes
run on it. It is an HP DL380 with two quad-core processors, 12GB RAM,
and an internal RAID, running F10.
As it is now, I have to move the audit data out of the
/var/log/audit directory daily, or there is no chance the
audit-viewer will complete its load. Sometimes it never recovers and
we have to kill and restart it. The directory we currently load holds
around 1.5GB of data, and the viewer takes about 5 minutes (give or
take) to load it and become usable.
Once the data is loaded, a large filter operation takes maybe a
minute or so to yield results.
While the data is loading there is no feedback, so after a minute or
two uncertainty always sets in about whether it is ever going to
return with any data.
What is the plan for this tool? As I said, feature-wise I think it is
very nice in general, but in practice it isn't living up to
expectations.
I can try to help, but it will take me a while to become
Python-proficient. Or is the trouble in the parse library?
I do not have scientific data yet, but I recently loaded one 100MB
audit file from the store. It took around 3 minutes to load. Then I
changed the source, and that one took longer; when it had finally
loaded, the process size was over 2GB.
I can run some better tests and try to gather some data if that
would be helpful.
Is there a way I can exercise the parse library outside the GUI on
these same files, to help me know what to look for? Or any other
ideas I can try?
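For instance, here is roughly the kind of standalone test I had in
mind: a minimal sketch that walks one audit log with the auparse
Python bindings and reports wall-clock time, event/record counts, and
resident memory. I am assuming the viewer goes through these same
bindings; the default log path and the /proc VmRSS check are just
placeholders for my setup.

#!/usr/bin/env python
# Minimal sketch: time how long auparse takes to walk one audit log,
# outside the GUI. Assumes the python auparse bindings are installed;
# the default log path is just an example from my setup.
import sys
import time
import auparse

log_file = sys.argv[1] if len(sys.argv) > 1 else "/var/log/audit/audit.log"

start = time.time()
parser = auparse.AuParser(auparse.AUSOURCE_FILE, log_file)

events = 0
records = 0
while parser.parse_next_event():
    events += 1
    records += parser.get_num_records()

elapsed = time.time() - start
print("%s: %d events, %d records in %.1f seconds"
      % (log_file, events, records, elapsed))

# Rough idea of how much memory the parse alone holds on to
# (Linux-only, reads this process's own status file).
for line in open("/proc/self/status"):
    if line.startswith("VmRSS"):
        print(line.strip())
        break

If that loop alone already takes minutes or balloons in memory on the
1.5GB directory, at least we would know the GUI isn't the main
culprit.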
Thanks,
LCB.
--
LC (Lenny) Bruzenak