Hi Steve,
Thank you very much for your reply and your suggestion. I appreciate
that. The summary looks as follows.
Key Summary Report
===========================
total key
===========================
63164 tmp
16060 docker
7206 delete
6007 admin_user_home
2760 auditlog
1595 specialfiles
675 perm_mod
69 systemd
54 systemd_tools
36 init
15 sshd
12 cron
5 login
5 actions
4 access
3 privileged
1 audit_rules_networkconfig_modification
Now I wonder why /tmp and /var/tmp are watched at all; they seem to cause
most of the entries in the logs. Can you think of any reason why that
would be? I have also asked the owner of the package this question. I
will restrict the delete rules to specific locations and disable the
watches on /home, as they seem inappropriate for my use case.
Regards,
Frederik
On 20-08-18 19:48, Steve Grubb wrote:
On Monday, August 20, 2018 5:56:04 AM EDT Frederik Bosch wrote:
> As I have not found a location anywhere else on the web, I am sending my
> question to this list. I have an Ubuntu 18.04 machine with auditd and it
> acts as a Docker Host machine. I have hardened the system via this
> package:
> https://github.com/konstruktoid/hardening which installs auditd
> with the configuration to be found here:
>
> https://github.com/konstruktoid/hardening/blob/master/misc/audit.rules.
These rules could be improved by condensing each of the following groups:
# File deletions
# Capture all unauthorized file accesses
# Capture all failures to access on critical elements
# Permissions
down to two rules each (four for the second group). That, however, is not the
actual problem. My guess is that the rule set is capturing far more information
than is necessary.
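For example, the deletions group can usually be collapsed to one rule per
arch by listing the syscalls together, and the access group to one rule per
arch and errno. Roughly like this (untested; keys and syscall lists would
need to be adjusted to match the original file):

  # File deletions: one rule per arch instead of one per syscall
  -a always,exit -F arch=b64 -S rmdir,unlink,unlinkat,rename,renameat -F auid>=1000 -F auid!=4294967295 -k delete
  -a always,exit -F arch=b32 -S rmdir,unlink,unlinkat,rename,renameat -F auid>=1000 -F auid!=4294967295 -k delete
  # Unauthorized accesses: one rule for EACCES, one for EPERM (plus the b32 equivalents, 4 total)
  -a always,exit -F arch=b64 -S open,openat,creat,truncate,ftruncate -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -k access
  -a always,exit -F arch=b64 -S open,openat,creat,truncate,ftruncate -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -k access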
> The problems I have are related to the directives -f and -b. The
> hardening package uses -b 8192 and -f 2. That results in a kernel panic
> very quickly because of audit backlog limit exceeded, and that causes a
> reboot of the system. Now I wonder what a good configuration would be. I
> started reading on the subject and read that -f 2 is probably the best
> for security reasons. However, I do not want to have a system that
> panics very quickly and reboots.
I'd say that you need to run:
aureport --start today --key --summary
and see what rule is triggering all the events. Do you really want all
deletes? Or just deletes in a specific directory? Do you really want to know
that a user changed permissions on a file in their home directory?
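If you want to drill into one noisy key, piping raw search output into
aureport should work with a reasonably recent audit package (the key name
here is just an example):

  ausearch --start today -k tmp --raw | aureport --file --summary -i

That shows which files are behind the events carrying that key.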
> Should I simply increase the backlog to much higher numbers? Or should I
> change -f to not cause a kernel panic? Or am I missing something and
> should I change some other configuration? Thanks for your help.
For the moment, change -f so that it does not cause a kernel panic. I think the
rules are probably too aggressive.
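Concretely, at the top of the rules file something like the following is a
less aggressive starting point (values are only a suggestion to tune for
your workload):

  # Raise the backlog and log failures instead of panicking
  -b 65536
  -f 1

With -f 1 the kernel reports audit failures via printk rather than panicking,
and a larger -b gives bursts of events more headroom before the backlog limit
is hit.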
-Steve