Hi,
I just talked the audit/sqlite stuff over with Richard Hipp.
One approach would be to have the audisp application put the data
into a TEMP table, then dump those tables to the main disk tables at
some interval. After the data is copied to the permanent tables, the
TEMP table is cleared.
The TEMP tables reside in memory and are private to the application,
so there can be no contention problems putting the data into those tables.
If copying the TEMP table to the main table fails because the database
is locked, the TEMP table doesn't get cleared, and there is just that
much more to be dumped the next time the disk is available.
My experience with SQLite is that dumping multiple records is nearly
as fast as dumping one. The cost of a dump is roughly K + N*x, where N
is the number of records, K is a per-dump "constant", and x is much
smaller than K. I think the main contributor to K is disk latency,
while the main contributor to x is memory access time.
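To put made-up numbers into that model (these figures are illustrative
guesses, not measurements):

```python
# Hypothetical timings: K is dominated by disk latency per dump,
# x by per-record memory work.  Both numbers are invented.
K = 10.0    # ms per dump ("constant" overhead)
x = 0.01    # ms per record

def dump_cost_ms(n_records):
    # Cost model from the text: K + N*x
    return K + n_records * x

# 1000 records dumped one at a time vs. in one batch:
one_at_a_time = 1000 * dump_cost_ms(1)   # ~10010 ms
one_batch = dump_cost_ms(1000)           # ~20 ms
```

The batch is roughly 500x cheaper here, which is why letting records
accumulate in the TEMP table before dumping pays off.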
When it comes to someone gathering so much audit data that the
logging routines get swamped, he agrees that dumping to a flat file is
the fastest way to get the data out of memory and onto the disk.
He thinks that my current approach of scanning the flat files and
building a one-time database may be the best way to avoid dragging down
the kernel.
The compromise might be to keep a permanent database and update it
with just the new records when the reporter is run, or from a cron job,
rather than rebuilding it from all records since the beginning of time
(or from whenever you want the report to start).
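A minimal sketch of that incremental update, assuming a
one-record-per-line flat log; the file layout, table names, and the
offset bookkeeping are all my invention, not the actual reporter's:

```python
import sqlite3

def import_new_records(db, log_path):
    # Hypothetical schema: one text column per audit line, plus a
    # meta table remembering how far into the flat file we have read.
    db.execute("CREATE TABLE IF NOT EXISTS audit (line TEXT)")
    db.execute("CREATE TABLE IF NOT EXISTS meta (k TEXT PRIMARY KEY, v INTEGER)")
    row = db.execute("SELECT v FROM meta WHERE k = 'offset'").fetchone()
    offset = row[0] if row else 0

    with open(log_path) as f:
        f.seek(offset)            # skip records already imported
        lines = f.readlines()
        new_offset = f.tell()

    with db:                      # one transaction per run
        db.executemany("INSERT INTO audit VALUES (?)",
                       [(ln.rstrip("\n"),) for ln in lines])
        db.execute("INSERT OR REPLACE INTO meta VALUES ('offset', ?)",
                   (new_offset,))
```

Each run only pays for the records added since the last run, so a cron
job can keep the permanent database current cheaply.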
Write/read contention is handled by having the write block for up to
N (configurable) milliseconds; it then either succeeds, or fails and
returns a failure message.
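That timeout is SQLite's busy handler. A small demonstration with
Python's sqlite3 module (the 50 ms value is arbitrary):

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# `timeout` is the N above: how long a blocked statement keeps
# retrying before giving up with "database is locked".
a = sqlite3.connect(path, timeout=0.05, isolation_level=None)
b = sqlite3.connect(path, timeout=0.05, isolation_level=None)

a.execute("CREATE TABLE audit (msg TEXT)")
a.execute("BEGIN IMMEDIATE")          # a now holds the write lock
a.execute("INSERT INTO audit VALUES ('held')")

try:
    b.execute("INSERT INTO audit VALUES ('blocked')")
    failed = False
except sqlite3.OperationalError:      # raised after ~50 ms of retries
    failed = True

a.execute("COMMIT")                   # release the lock
b.execute("INSERT INTO audit VALUES ('now ok')")
```

Once the lock is released, the previously blocked writer succeeds on
its next attempt, which is exactly the retry behavior the flush loop
above relies on.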
Clif
--
.... Clif Flynt ...
http://www.cflynt.com ... clif(a)cflynt.com ...
.. Tcl/Tk: A Developer's Guide (2nd edition) - Morgan Kaufmann ..
..13th Annual Tcl/Tk Conference: Oct 9-13, 2006, Chicago, IL ..
.............
http://www.tcl.tk/community/tcl2006/ ............