On Monday, August 20, 2018 5:56:04 AM EDT Frederik Bosch wrote:
> As I have not found a place to ask anywhere else on the web, I am sending my
> question to this list. I have an Ubuntu 18.04 machine with auditd, and it
> acts as a Docker host machine. I have hardened the system via this package:
> https://github.com/konstruktoid/hardening which installs auditd with the
> configuration found here:
> https://github.com/konstruktoid/hardening/blob/master/misc/audit.rules.
These rules could be improved by condensing the sections

# File deletions
# Capture all unauthorized file accesses
# Capture all failures to access on critical elements
# Permissions

down to 2 rules each (4 for the second one); a sketch of what that might look
like is below. That, however, is not the actual problem. My guess is that they
are capturing far more information than is necessary.
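
As a rough, untested sketch (syscall lists and field values assumed from that
rules file; adjust the arch and auid filters for your setup), the deletion
section could collapse to one rule per arch:

# File deletions: one rule per arch instead of one per syscall (sketch)
-a always,exit -F arch=b64 -S unlink,unlinkat,rename,renameat,rmdir -F auid>=1000 -F auid!=4294967295 -k delete
-a always,exit -F arch=b32 -S unlink,unlinkat,rename,renameat,rmdir -F auid>=1000 -F auid!=4294967295 -k delete

and the unauthorized-access section to four rules, one per failure code per arch:

# Failed file access: EACCES and EPERM, b64 shown; repeat both with arch=b32 (sketch)
-a always,exit -F arch=b64 -S open,openat,creat,truncate,ftruncate -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -k access
-a always,exit -F arch=b64 -S open,openat,creat,truncate,ftruncate -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -k access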
> The problems I have are related to the directives -f and -b. The
> hardening package uses -b 8192 and -f 2. That very quickly results in a
> kernel panic because the audit backlog limit is exceeded, and that causes
> a reboot of the system. Now I wonder what a good configuration would be.
> I started reading on the subject and read that -f 2 is probably the best
> for security reasons. However, I do not want a system that panics and
> reboots so quickly.
I'd say that you need to run:

aureport --start today --key --summary

and see which rule is triggering all the events. Do you really want all
deletes? Or just deletes in a specific directory? Do you really want to know
that a user changed permissions on a file in their home directory?
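
Once you know which key dominates, something like this (a sketch, using the
delete key from the example rules above; substitute whatever key the report
shows) will pull up the individual events behind it:

# list the records logged under that key, with fields interpreted
ausearch --start today -k delete -i

From there you can decide whether the rule should be narrowed to specific
directories or dropped entirely.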
> Should I simply increase the backlog to much higher numbers? Or should I
> change -f to not cause a kernel panic? Or am I missing something and
> should I change some other configuration? Thanks for your help.
For the moment, change -f so that it does not cause a kernel panic. I think the
rules are probably too aggressive.
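
Concretely, that would be something like the following in audit.rules (a
sketch; -f 1 reports failures via printk instead of panicking, and the backlog
size is only a stopgap until the rules are trimmed):

# failure mode: 0 = silent, 1 = printk, 2 = panic
-f 1
# keep (or modestly raise) the backlog while the rule set is reduced
-b 8192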
-Steve