LC Bruzenak wrote:
Thank you; I get it now.
This is about to get interesting! :)
Not certain why I didn't get it the first time, but for some reason I
had not considered sending the events into the auditd loop.
I was thinking of just aggregating the logfiles. Now it makes sense.
My one auditd machine gets very busy occasionally - I sometimes drop
events (I chose dropping over aborting, since it's a development
machine) even after ratcheting the event queue up to 8K. Often this is
due to an error I've introduced with too-general rules, so my
experience isn't definitive.
Now the question is: what happens if the network hiccups and I cannot
send the events from a client? I could still write the events to the
local disk, but then getting them onto the intended aggregator is
tricky, right? Will the sender keep track of the last event sent and
recover once the connection is restored?
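
The usual answer is some form of store-and-forward. Purely as an
illustration (this is not the actual audit remote-logging code), here
is a minimal Python sketch, assuming a hypothetical spool directory
and aggregator address: events always land in a local spool file
first, and a persisted cursor records how far delivery has
progressed, so the sender resumes where it left off after an outage.

    import os
    import socket
    import time

    SPOOL = "/var/spool/audit-forward/events.log"    # hypothetical path
    CURSOR = "/var/spool/audit-forward/events.pos"   # hypothetical path
    AGGREGATOR = ("logs.example.com", 60514)         # hypothetical endpoint

    def spool_event(line):
        # Land the event on local disk first; delivery happens later.
        with open(SPOOL, "ab") as f:
            f.write(line.rstrip(b"\n") + b"\n")

    def read_cursor():
        try:
            with open(CURSOR) as f:
                return int(f.read().strip() or 0)
        except FileNotFoundError:
            return 0

    def write_cursor(offset):
        # Persist progress so a crash or hiccup never restarts from zero.
        with open(CURSOR, "w") as f:
            f.write(str(offset))

    def flush_spool():
        # Replay everything past the cursor. The cursor only advances
        # after sendall() returns, i.e. after the kernel accepted the
        # bytes; a real protocol would wait for an application-level
        # ack from the aggregator before advancing.
        offset = read_cursor()
        with open(SPOOL, "rb") as f:
            f.seek(offset)
            try:
                with socket.create_connection(AGGREGATOR, timeout=10) as s:
                    while True:
                        line = f.readline()
                        if not line:
                            break
                        s.sendall(line)
                        write_cursor(f.tell())
            except OSError:
                pass  # network hiccup: the cursor marks where to resume

    if __name__ == "__main__":
        spool_event(b"type=SYSCALL msg=audit(1234.000:1): example event")
        while read_cursor() < os.path.getsize(SPOOL):
            flush_spool()
            time.sleep(5)  # keep retrying until the aggregator is back

The point of writing locally first is that a network outage costs you
latency rather than events, for as long as the local disk holds out.
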
I'm not disputing the approach, just trying to look down the road,
knowing the problems I've experienced myself. I also see some definite
benefits to this approach - the log files are now "blended", and you
don't need any special directory hierarchy to accommodate the other
events, for one.
This is why the IPA project has selected AMQP (www.amqp.org) as the
transport for centralized logging (including audit logs). AMQP queues
messages that do not reach their destination in persistent storage
until delivery is assured. The thinking was that AMQP is too
heavyweight a dependency for a simple centralized audit log, but it
makes sense in an enterprise deployment such as IPA.
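
For what it's worth, here is roughly what that pattern looks like from
a client, sketched with the Python pika library against an AMQP broker
such as RabbitMQ; the broker host, queue name, and message are made up
for the example. A durable queue plus persistent messages keeps
undelivered events on the broker's disk, and publisher confirms make a
failed send raise instead of silently dropping the event.

    import pika

    # Hypothetical broker host.
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="broker.example.com"))
    channel = connection.channel()

    # Durable queue: survives a broker restart.
    channel.queue_declare(queue="audit-events", durable=True)

    # Publisher confirms: basic_publish raises if the broker does not
    # take responsibility for the message.
    channel.confirm_delivery()

    channel.basic_publish(
        exchange="",
        routing_key="audit-events",
        body=b"type=SYSCALL msg=audit(1234.000:1): example event",
        properties=pika.BasicProperties(delivery_mode=2))  # persist to disk

    connection.close()
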
--
John Dennis <jdennis@redhat.com>