On Saturday, November 01, 2014 10:49:24 PM Wouter van Verre wrote:
Hi all,
I am trying to set up logging using the audit framework, but I have some
questions about how the system works and how the components fit together.
This presentation is a pretty good overview, see slide 5:
http://people.redhat.com/sgrubb/audit/audit_ids_2011.pdf
My use case is as follows:
* I would like to have one or more servers on my network capturing data,
including TTY sessions.
* I would then like to have these servers (the 'client servers') submit the
data to another server on the network (the 'central server').
* This central server would then write the incoming data to disk, and do
some processing on the data as well.
My current idea on how to implement this is to:
* Run auditd + audisp + audisp-remote on every client server.
* Use pam_tty_audit.so on every client server for the TTY logging.
* Run auditd on the central server to receive the data and write it to disk.
* Either implement my processing tool such that it can be used instead of
the dispatcher, or implement it as a plugin for audisp?
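The plan above can be sketched with the relevant configuration fragments. These are illustrative only; the file locations, option names, and the hostname central.example.com are assumptions that should be checked against the versions shipped with your distribution:

```
# /etc/audisp/plugins.d/au-remote.conf (each client server)
# enables the audisp-remote plugin so events are forwarded
active = yes
direction = out
path = /sbin/audisp-remote
type = always
format = string

# /etc/audisp/audisp-remote.conf (each client server)
remote_server = central.example.com   # hypothetical hostname
port = 60

# /etc/audit/auditd.conf (central server)
tcp_listen_port = 60

# PAM session config, e.g. /etc/pam.d/system-auth (each client server)
session  required  pam_tty_audit.so enable=*
```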
Sure. If necessary, in realtime. That same presentation referenced above also
gives an introduction to the auparse library.
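For a realtime processing tool, the auparse library is the right interface, since it groups records into events and decodes encoded fields. As a rough illustration of what such a tool does at the lowest level, here is a stdlib-only Python sketch (not auparse itself) that splits a raw audit record line into its type, timestamp, serial number, and key=value fields:

```python
import re

# Simplified stand-in for auparse: split one raw audit record into its
# type, timestamp/serial, and key=value fields. A real tool should use
# auparse, which additionally groups related records into events and
# interprets encoded values; this regex sketch is only illustrative.
RECORD_RE = re.compile(r"type=(\S+) msg=audit\((\d+)\.(\d+):(\d+)\): (.*)")

def parse_record(line):
    """Parse one raw audit record line into a dict, or return None."""
    m = RECORD_RE.match(line.strip())
    if not m:
        return None
    rtype, secs, millis, serial, rest = m.groups()
    fields = dict(kv.split("=", 1) for kv in rest.split() if "=" in kv)
    return {"type": rtype,
            "time": float(f"{secs}.{millis}"),
            "serial": int(serial),
            "fields": fields}

sample = "type=USER_TTY msg=audit(1414878564.123:456): pid=1234 uid=0 data=6C73"
rec = parse_record(sample)
print(rec["type"], rec["serial"], rec["fields"]["pid"])  # USER_TTY 456 1234
```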
I'd love some feedback on whether this set up makes sense. In particular,
is receiving the data with auditd on the central server the best way to go?
And which option is recommended for implementing the processing tool? I
would think that a custom plugin for audisp would be best? If so, is there
any documentation on how to go about implementing a plugin for audisp that
I could read?
I have already experimented with this set up a bit, and have come to the
conclusion that I am not sure how things work... I have implemented a
single client running auditd + audisp + audisp-remote with logging of TTY
sessions (using pam_tty_audit.so), and a central server running auditd
(configured to listen on port 60).
This seems to work to an extent:
* On the client server all the data is logged to /var/log/audit/audit.log
and I can see it there.
* On the client server I can run "aureport --tty" and I will see the TTY
session data represented more easily.
* When I am on the central server I can run "aureport --tty" and see the TTY
session data for sessions on the client server. My conclusion based on this
is that the central server must be receiving and storing data properly?
Yes, that sounds right. I'd also mention that if you are doing central
logging, you need to tell audispd or auditd that you want the node name
prepended to the event so that at the aggregating server you can tell the
difference.
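Concretely, the node name is controlled by the name_format option. On the audit 2.x series this lives in /etc/audisp/audispd.conf (later releases folded it into auditd.conf), so the exact file should be checked against your version:

```
# /etc/audisp/audispd.conf (each client server)
name_format = hostname   # or: fqdn, numeric, user
# name = myhost          # only consulted when name_format = user
```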
* However, when I look at /var/log/audit/audit.log on the central server I
can only see audit data for that server.
My first guess is that you don't have the client adding node information. That
makes it a lot clearer. You should be able to search using --node to locate
the records from the client.
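For example, once the clients are adding node information, records from a particular client can be pulled out on the aggregating server with something like the following ("client1" is a hypothetical node name):

```
ausearch --node client1 --start today
aureport --tty --node client1
```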
So my question is, where does the audit data from the client server get
stored?
In the aggregating server's directory.
* When I connect a very simple program to the auditd daemon (instead of the
default dispatcher) it doesn't seem to receive any input at the moment, even
though "aureport --tty" is showing that the daemon has been receiving data
in the meantime...
The preferred way of adding analytical applications is to make them audispd
plugins. You could make yours a dispatcher if you want, but the interface is
a bit different. The audit tarball should have an example program of both
kinds.
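The example programs in the audit source tree are the authoritative reference; as a rough sketch of the plugin side, an audispd plugin (registered via a file in /etc/audisp/plugins.d/, with format = string) is simply a program that audispd starts and feeds one record per line on stdin. The count_tty_records() helper below is a hypothetical example of "processing", not anything from the audit tree:

```python
import sys

# Sketch of an audispd plugin: with format = string, audispd writes one
# audit record per line to the plugin's stdin, so a plugin is just a
# program that loops over stdin until EOF (which signals shutdown).
def count_tty_records(lines):
    """Count records whose audit type field mentions TTY."""
    count = 0
    for line in lines:
        # the first token of a record looks like "type=USER_TTY"
        if line.startswith("type=") and "TTY" in line.split()[0]:
            count += 1
    return count

if __name__ == "__main__":
    # In a real deployment, audispd launches this program and streams
    # records to it for the lifetime of the daemon.
    print(count_tty_records(sys.stdin))
```

A real plugin would of course do more per record (e.g. feed each line to auparse) rather than merely count, but the stdin loop is the whole of the plumbing.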
-Steve