On Mon, Feb 29, 2016 at 7:45 AM, Maupertuis Philippe
<philippe.maupertuis@worldline.com> wrote:
Hi list,
One cluster fenced the passive node around two hours after auditd was
started.
We found that iowait had increased since auditd was started and was
unusually high.
Auditd wasn't generating many messages, and there was no noticeable
added activity on the disk where the audit and syslog files were
written.
Besides watches, the only general rules were:
# creation
-a exit,always -F arch=b32 -S creat -S mkdir -S mknod -S link -S symlink -S mkdirat -S mknodat -S linkat -S symlinkat -F uid=root -F success=1 -k creation
-a exit,always -F arch=b64 -S creat -S mkdir -S mknod -S link -S symlink -S mkdirat -S mknodat -S linkat -S symlinkat -F uid=root -F success=1 -k creation
# deletion
-a exit,always -F arch=b32 -S rmdir -S unlink -S unlinkat -F uid=root -F success=1 -k deletion
-a exit,always -F arch=b64 -S rmdir -S unlink -S unlinkat -F uid=root -F success=1 -k deletion
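For reference, per-key hit counts can be checked with ausearch along
these lines (standard audit userspace tooling; the time window is
illustrative):

  # count events carrying a given rule key; ausearch separates
  # events with "----" lines, so counting those counts events
  ausearch -k creation -ts today | grep -c '^----'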
After the reboot we deleted all the rules and didn't notice any extra
iowait anymore.
Could these rules be the cause of the additional iowait even though
they generated very few events (around 20 in two hours)?
Is there any other auditd mechanism that could explain this phenomenon ?
I would appreciate any hints.
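One auditd mechanism worth ruling out, independent of the rules
themselves, is the log flush policy in /etc/audit/auditd.conf: per
auditd.conf(5), flush = DATA or flush = SYNC forces a disk sync for
every record written, which can raise iowait even at a low event rate.
The excerpt below is illustrative only; the values are assumptions,
not our actual configuration:

  log_file = /var/log/audit/audit.log
  # DATA/SYNC sync the log on every record; INCREMENTAL syncs once
  # every <freq> records instead
  flush = INCREMENTAL
  freq = 20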
Hi Philippe,
First, as this is an RH cluster product, I would suggest contacting RH
support with your question if you haven't already; this list is
primarily for upstream development and support.
If you are able to experiment with the system, or have a test
environment, I would suggest trying to narrow down the list of audit
rules/watches to see which rules/watches have the greatest effect on
the iowait times (a rough sketch of one way to do this follows below).
You've listed four rules, but you didn't list the watches you have
configured. Also, what kernel version are you using?
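As a rough sketch of that bisection, assuming the standard auditctl
and iostat tools (the rule re-added below is just one of those you
posted, chosen for illustration):

  # clear every loaded rule, then confirm the kernel's list is empty
  auditctl -D
  auditctl -l
  # re-add one rule (or watch) at a time ...
  auditctl -a exit,always -F arch=b64 -S rmdir -S unlink -S unlinkat \
      -F uid=root -F success=1 -k deletion
  # ... and watch iowait for a while before adding the next one
  iostat -x 5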
--
paul moore
www.paul-moore.com