On Fri, 2010-08-13 at 11:06 -0400, rshaw1(a)umbc.edu wrote:
> (Technology preview or no, I'm very happy to have audisp; certain other
> systems aren't so lucky.)
I agree.
> Well, I can't run aureport --summary; it pegs the CPU for hours and hours.
> That's not really a big deal for me, though. I have a script that runs
> shortly after the logs are rotated, generating a report based on the
> previous day's data. It's using 3 aureports and one ausearch (piped
> through a bunch of stuff). Usually takes less than 15 minutes to run. At
> the moment, this is the main way we're using the data, though I'm hoping
> to do more in the future. I've glanced at the audit+Prelude HOWTO, since
> Prelude can do a few other things that appeal to me.
I use Prelude. Works pretty well.
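FWIW, the daily-report part sounds like it could be roughly this shape --
untested sketch, and the report flags, times, and paths here are just
stand-ins, not what you're actually running:

  #!/bin/sh
  # Hypothetical nightly report, run from cron right after log rotation.
  # Flags below are examples only; see aureport(8) and ausearch(8).
  OUT=/var/log/audit-reports/$(date +%F).txt
  {
    aureport -ts yesterday -te today -l --summary -i   # logins
    aureport -ts yesterday -te today -u --summary -i   # users
    aureport -ts yesterday -te today -x --summary -i   # executables
    ausearch -ts yesterday -te today -m ANOM_ABEND -i  # anomalies
  } > "$OUT" 2>&1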
> (The ausearch used to be an aureport, but aureport --anomaly -i doesn't
> seem to get the node/host names from the logs, which is why I ended up
> writing my own thing. Interestingly, --anomaly isn't even in the man page
> for aureport; I've no idea where I found it. I don't know if any of this
> is different in more recent versions.)
That's a doc bug I guess. I have never heard of it.
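If it helps, this is roughly the ausearch shape I'd expect to keep the
node= fields (untested; the message-type list is just an example, and the
node field only shows up if the aggregating auditd is inserting it, e.g.
via name_format in auditd.conf):

  # Hypothetical stand-in for "aureport --anomaly -i":
  ausearch -ts yesterday -te today -m ANOM_ABEND,ANOM_PROMISCUOUS -i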
> Hrm. This is what I have:
> network_retry_time = 30
> max_tries_per_record = 60
> max_time_per_record = 5
> network_failure_action = syslog (looks like I'll be changing that)
> ...
> remote_ending_action = reconnect
> Are you using the heartbeat_timeout stuff? I haven't been.
Me:
network_retry_time = 1
max_tries_per_record = 10
max_time_per_record = 10
heartbeat_timeout = 30
...
remote_ending_action = reconnect
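For anyone else following along, this is roughly what those knobs mean as
I read audisp-remote.conf(5) -- worth double-checking against the man page
for your version:

  # network_retry_time   - seconds to wait before retrying a failed connection
  # max_tries_per_record - send attempts per record before
  #                        network_failure_action is taken
  # max_time_per_record  - seconds to keep trying one record before giving up
  # heartbeat_timeout    - send a keepalive every N seconds so dead
  #                        connections get noticed; 0 disables it
  # remote_ending_action - what to do when the remote end closes the
  #                        connection (e.g. reconnect)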
> > Also - I have a big ugly system involving timestamps and reconnect
> > logic.
> Yeah, I think I might come up with something like that, and use the "exec"
> option for network_failure_action combined with cron stuff to keep
> retrying.
That is what I do. It gets a little tricky, but it works.
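In case a sketch helps anyone, the shape of it is something like this --
hypothetical paths and host name, all untested (60 should be the default
audisp-remote port, but check your config):

  # audisp-remote.conf: run a script when delivery fails
  network_failure_action = exec /usr/local/bin/audit-net-down.sh

  #!/bin/sh
  # /usr/local/bin/audit-net-down.sh (hypothetical): drop a flag file
  # for cron to find and note the failure in syslog.
  touch /var/run/audit-net-down
  logger -t audisp-remote "remote audit delivery failed"

  #!/bin/sh
  # /usr/local/bin/audit-reconnect.sh (hypothetical), run every minute
  # from cron ("* * * * * root /usr/local/bin/audit-reconnect.sh"):
  # once the collector answers again, restart auditd so audisp-remote
  # reconnects, then clear the flag.
  [ -f /var/run/audit-net-down ] || exit 0
  nc -z collector.example.com 60 || exit 0
  service auditd restart && rm -f /var/run/audit-net-down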
LCB.
--
LC (Lenny) Bruzenak
lenny(a)magitekltd.com