I had configured the system to audit only the open system call, instead of all the
system calls our loader program executes, and that is why I saw the massive improvement
in performance.
My fix is not causing any change in performance. I wrongly communicated that the fix
was causing the performance improvement. Sorry for that.
As per the perf data, format_decode is the function where most of the time is spent,
i.e. formatting the record into the buffer before delivering the data to user space.
We need to eliminate record formatting to increase performance.
Any idea why we need to format the record, and whether we can add an option (RAW) to
deliver the record to user space without formatting?
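To make the RAW idea concrete, here is a purely hypothetical sketch (none of these names
exist in the kernel today): instead of rendering each field as text through
audit_log_format(), which is what shows up as format_decode/vsnprintf in the perf
profile, the kernel would copy one fixed binary structure into the record and leave all
formatting to user space.

    struct audit_raw_syscall {         /* hypothetical layout, illustration only */
        unsigned int  arch;            /* audit architecture, e.g. AUDIT_ARCH_X86_64 */
        int           syscall;         /* syscall number */
        long          exit;            /* return value */
        unsigned int  pid, uid, gid;
        unsigned long args[4];         /* first four syscall arguments */
    };

    /* Delivery would then be a single copy into the record payload,
     *     memcpy(payload, &raw, sizeof(raw));
     * rather than a dozen printf-style conversions per record. */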
-----Original Message-----
From: Richard Guy Briggs [mailto:rgb@redhat.com]
Sent: Thursday, February 12, 2015 11:55 PM
To: Viswanath, Logeswari P (MCOU OSTL)
Cc: Richard Guy Briggs; linux-audit(a)redhat.com
Subject: Re: Linux audit performance impact
On 15/02/12, Viswanath, Logeswari P (MCOU OSTL) wrote:
Hi all,
We did profiling of the kernel (using the perf tool) during our performance test, and
below were the top 4 functions by overhead.
11.33% loader1 [kernel.kallsyms] [k] format_decode
10.40% loader1 [kernel.kallsyms] [k] memcpy
7.46% loader1 [kernel.kallsyms] [k] number.isra.1
6.99% loader1 [kernel.kallsyms] [k] vsnprintf
I was unable to attach the entire profiling data of the kernel because it exceeds the
limit of 80KB.
From the perf data, we believed the overhead is caused by invoking the audit_log_format
function multiple times.
We changed the code to reduce the number of times this function is called.
With this change the performance degradation is 20% now compared to the performance
without auditing.
Without this change the performance degradation is 200% compared to the performance
without auditing.
Those numbers are not insignificant! I am a bit surprised you were able to get that much
of an improvement with just this class of change.
We can publish the code change tomorrow.
Please let me know your feedback on this idea.
Regards,
Logeswari.
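As a rough illustration of the class of change described above (the actual patch has not
been posted yet, so this is only a sketch): each audit_log_format() call runs its own
vsnprintf() pass over its format string plus its own audit-buffer space check, so
batching several fields into one call reduces that per-call overhead.

    /* before: one formatting pass per field */
    audit_log_format(ab, " ppid=%d", ppid);
    audit_log_format(ab, " pid=%d",  pid);
    audit_log_format(ab, " uid=%u",  uid);
    audit_log_format(ab, " gid=%u",  gid);

    /* after: one formatting pass for the whole group of fields */
    audit_log_format(ab, " ppid=%d pid=%d uid=%u gid=%u", ppid, pid, uid, gid);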
-----Original Message-----
From: Richard Guy Briggs [mailto:rgb@redhat.com]
Sent: Wednesday, February 11, 2015 10:21 PM
To: Viswanath, Logeswari P (MCOU OSTL)
Cc: linux-audit(a)redhat.com
Subject: Re: Linux audit performance impact
On 15/02/06, Viswanath, Logeswari P (MCOU OSTL) wrote:
> Hi all,
>
> Please find the below the details of the performance test we ran.
> It would be great if we could get help identifying the reason behind the degradation
> and ways of improving it.
>
> Kernel Version:
> root > uname -r
> 3.13.0-36-generic
>
> OS Version:
> Ubuntu 14.04.1
>
> No. of CPUs:
> root > nproc
> 24
>
> Audit Status:
> root > auditctl -s
> AUDIT_STATUS: enabled=1 flag=1 pid=0 rate_limit=0 backlog_limit=320
> lost=57190353 backlog=0
>
> Rules Configured:
> root > auditctl -l
> LIST_RULES: exit,always arch=3221225534 (0xc000003e) syscall=all
>
> Attached is the program used to load the system.
>
> Results:
>
> Without enabling audit 12.29
> With auditing enabled and no rules configured 12.31
> With auditing enabled, 1 rule configured but auditd not running - kauditd logs
> audit records to syslog via printk 41.02
This would be more meaningful if you hacked the kernel to drain the queue figuratively to
/dev/null to eliminate the effect of auditd draining it, or syslog covering for a missing
auditd. This stat doesn't tell us that much since the I/O act can vary significantly
per installation. That one rule you chose is pretty unnaturally abusive and needs to be
carefully thought out to avoid self-measurement.
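For what it's worth, a minimal sketch of that kind of throwaway test hack, loosely based
on the 3.13-era kernel/audit.c (the real kauditd_thread() has more logic around queue
limits and wakeups, so treat this only as an illustration): free each dequeued record
instead of unicasting it to auditd or passing it to printk, so neither netlink delivery
nor syslog I/O is measured.

    /* testing hack only -- not for production kernels */
    static int kauditd_thread(void *dummy)
    {
        while (!kthread_should_stop()) {
            struct sk_buff *skb = skb_dequeue(&audit_skb_queue);

            if (skb) {
                kfree_skb(skb);  /* was: kauditd_send_skb(skb) / audit_printk_skb(skb) */
                continue;
            }
            /* ... original wait-for-work logic unchanged ... */
        }
        return 0;
    }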
> The degradation is around 200%
>
> Regards,
> Logeswari.
>
> -----Original Message-----
> From: Richard Guy Briggs [mailto:rgb@redhat.com]
> Sent: Wednesday, February 04, 2015 9:46 PM
> To: Viswanath, Logeswari P (MCOU OSTL)
> Cc: Satish Chandra Kilaru; Steve Grubb; linux-audit(a)redhat.com
> Subject: Re: Linux audit performance impact
>
> On 15/02/04, Viswanath, Logeswari P (MCOU OSTL) wrote:
> > The intent is to calculate the performance impact of the auditing
> > components, such as
> >
> > 1) impact because of kauditd without auditd - but kauditd writes to syslog, so
> > we are unable to determine the impact just because of kauditd - It is fine even
> > if the audit record is dropped by kauditd. Is there any way to do this?
>
> Not yet.  That is a mode that has not been useful to anyone yet.  You are welcome
> to hack a custom kernel to disable klog for doing testing instrumentation.
>
> > 2) impact because of running auditd - log format NOLOG
> > 3) impact because of running audispd - a small plugin is written which will just
> > read the audit records and doesn't process them.
> >
> > -----Original Message-----
> > From: Richard Guy Briggs [mailto:rgb@redhat.com]
> > Sent: Tuesday, February 03, 2015 10:33 PM
> > To: Satish Chandra Kilaru
> > Cc: Viswanath, Logeswari P (MCOU OSTL); Steve Grubb;
> > linux-audit(a)redhat.com
> > Subject: Re: Linux audit performance impact
> >
> > On 15/02/03, Satish Chandra Kilaru wrote:
> > > Thanks for the info. But my question was rhetorical... I meant
> > > to say that it would not be much... She is trying to bombard the
> > > system with open calls ... So lots and lots of events will be
> > > generated and the kernel has to write down the events somewhere
> > > or discard them...
> >
> > Exactly.  It is of little practical use.  You have to do I/O at some point,
> > either to the same disk or another, or to a network interface or serial port,
> > otherwise, just chuck it out.  You could do a performance measurement on a
> > short burst, then drain the queue, but what will that actually tell us?
> >
> > > On Tuesday, February 3, 2015, Richard Guy Briggs <rgb(a)redhat.com> wrote:
> > >
> > > > On 15/02/03, Satish Chandra Kilaru wrote:
> > > > > How many events can the kernel accumulate without I/O?
> > > >
> > > > The kernel default is 64 *buffers*, but I think Fedora and
> > > > RHEL set it to 320.  It is now possible to set it to "0" which
> > > > means limited only by system resources.  See "man auditctl",
> > > > "-b" option.  An event can be made up of several buffers.
> > > >
> > > > Of course, how long a system lasts before the queue blows up
> > > > depends on your rule set...
> > > >
> > > > However, at the moment, it will still write out to klog if
> > > > auditd isn't running.
> > > >
> > > > > On Tuesday, February 3, 2015, Viswanath, Logeswari P (MCOU
> > > > > OSTL) <logeswari.pv(a)hp.com> wrote:
> > > > >
> > > > > > I don't want to disable auditing (i.e. disable audit record
> > > > > > collection), but I just do not want the records to be delivered
> > > > > > to user space, since I want to remove the I/O overhead while
> > > > > > running the performance test.
> > > > > > Is there any option for this?
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: Richard Guy Briggs [mailto:rgb@redhat.com]
> > > > > > Sent: Thursday, January 29, 2015 10:23 PM
> > > > > > To: Viswanath, Logeswari P (MCOU OSTL)
> > > > > > Cc: Satish Chandra Kilaru; Steve Grubb; linux-audit(a)redhat.com
> > > > > > Subject: Re: Linux audit performance impact
> > > > > >
> > > > > > On 15/01/29, Viswanath, Logeswari P (MCOU OSTL) wrote:
> > > > > > > Please read my question as "Is there any option to
> > > > > > > configure kaudit not to log audit records to syslog when
> > > > > > > auditd is not running?"
> > > > > >
> > > > > > Yeah, remove audit=1 from the kernel command line, or set
> > > > > > audit=0 in its place.  This will stop all but AVCs and if
> > > > > > auditd has ever run since boot.  If audit=0 is on the kernel
> > > > > > boot line, it will be impossible to run auditd.
> > > > > >
> > > > > > There is a feature request that is likely coming soon that
> > > > > > could be useful:
> > > > > >
> > > > > > https://bugzilla.redhat.com/show_bug.cgi?id=1160046
> > > > > > "If no audit daemon is running, but an audit multicast
> > > > > > subscriber is around, then the kernel shouldn't forward
> > > > > > audit data to kmsg"
> > > > > >
> > > > > > > From: Viswanath, Logeswari P (MCOU OSTL)
> > > > > > > Sent: Thursday, January 29, 2015 11:49 AM
> > > > > > > To: 'Satish Chandra Kilaru'; Steve Grubb
> > > > > > > Cc: linux-audit(a)redhat.com
> > > > > > > Subject: RE: Linux audit performance impact
> > > > > > >
> > > > > > > Is there any option to configure kaudit not to log audit
> > > > > > > records to syslog when auditd is running?
> > > > > > > This way we can assess the impact of enabling audit
> > > > > > > without involving disk I/O overhead.
> > > > > > >
> > > > > > > From: Satish Chandra Kilaru [mailto:iam.kilaru@gmail.com]
> > > > > > > Sent: Thursday, January 29, 2015 9:12 AM
> > > > > > > To: Steve Grubb
> > > > > > > Cc: linux-audit(a)redhat.com; Viswanath, Logeswari P (MCOU OSTL)
> > > > > > > Subject: Re: Linux audit performance impact
> > > > > > >
> > > > > > > I agree with you... but writing to disk can trigger
> > > > > > > further events, leading to a spiral of events...
> > > > > > > I brought down my server a few times with stupid rules...
> > > > > > >
> > > > > > > On Wed, Jan 28, 2015 at 10:39 PM, Steve Grubb
> > > > > > > <sgrubb(a)redhat.com> wrote:
> > > > > > > On Wednesday, January 28, 2015 10:18:47 AM Satish
> > > > > > > Chandra Kilaru wrote:
> > > > > > > > Write your own program to receive audit events
> > > > > > > > directly without using auditd...
> > > > > > > > That should be faster ....
> > > > > > > > Auditd will log the events to disk, causing more
> > > > > > > > I/O than you need...
> > > > > > >
> > > > > > > But even that is configurable in many ways. You can decide if
> > > > > > > you want logging to disk or not, what kind of assurance that it
> > > > > > > made it to disk, and the priority of the audit daemon. Then you
> > > > > > > also have all the normal tuning knobs for disk throughput that
> > > > > > > you would use for any disk-performance-critical system.
> > > > > > >
> > > > > > > -Steve
> > > > > > >
> > > > > > > > On Wednesday, January 28, 2015, Viswanath, Logeswari P
> > > > > > > > (MCOU OSTL) <logeswari.pv(a)hp.com> wrote:
> > > > > > > > > Hi Steve,
> > > > > > > > >
> > > > > > > > > I am Logeswari working for HP.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > We want to know the audit performance impact on RHEL and
> > > > > > > > > SUSE Linux, to help us evaluate Linux audit as a data source
> > > > > > > > > for our host-based IDS.
> > > > > > > > >
> > > > > > > > > When we ran our own performance test with a test audispd
> > > > > > > > > plugin, we found that if a system can perform 200000
> > > > > > > > > open/close system calls per second without auditing, it can
> > > > > > > > > perform only 3000 open/close system calls per second when
> > > > > > > > > auditing is enabled for the open/close system calls, which
> > > > > > > > > is a HUGE impact on system performance.
> > > > > > > > > It would be great if anyone can help us answer the
> > > > > > > > > following questions.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > 1) Is this performance impact expected? If yes, what is the
> > > > > > > > > reason behind it and can we fix it?
> > > > > > > > >
> > > > > > > > > 2) Has anyone done any benchmarking for performance impact?
> > > > > > > > > If yes, can you please share the numbers and also the
> > > > > > > > > steps/programs used to run the same.
> > > > > > > > >
> > > > > > > > > 3) Help us validate the performance test we have done in our
> > > > > > > > > test setup using the steps mentioned, along with the results
> > > > > > > > > attached.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Attached is the test program (loader.c) used to invoke the
> > > > > > > > > open and close system calls.
> > > > > > > > >
> > > > > > > > > Attached idskerndsp is the audispd plugin program.
> > > > > > > > >
> > > > > > > > > We used the time command to determine how much time the
> > > > > > > > > system took to complete 50000 open/close system calls
> > > > > > > > > without auditing (results attached as Without-auditing)
> > > > > > > > > and with auditing enabled on the system
> > > > > > > > > (With-auditing-NOLOG-audispd-plugin and With-auditing-RAW).
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > System details:
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > 1 CPU machine
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > *OS Version*
> > > > > > > > >
> > > > > > > > > RHEL 6.5
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > *Kernel Version*
> > > > > > > > >
> > > > > > > > > uname -r
> > > > > > > > >
> > > > > > > > > 2.6.32-431.el6.x86_64
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Note: auditd was occupying 35% of CPU and was sleeping
> > > > > > > > > for most of the time whereas kauditd was occupying 20%
> > > > > > > > > of the CPU.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Thanks & Regards,
> > > > > > > > >
> > > > > > > > > Logeswari.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > Please Donate to www.wikipedia.org
> > > > > >
> > > > > > > --
> > > > > > > Linux-audit mailing list
> > > > > > > Linux-audit(a)redhat.com
> > > > > > > https://www.redhat.com/mailman/listinfo/linux-audit
> > > > > >
> > > > > >
> > > > > > - RGB
> > > > > >
> > > > > > --
> > > > > > Richard Guy Briggs <rbriggs(a)redhat.com>
> > > > > > Senior Software Engineer, Kernel Security,
> > > > > > AMER ENG Base Operating Systems, Red Hat Remote, Ottawa,
> > > > > > Canada
> > > > > > Voice: +1.647.777.2635, Internal: (81) 32635, Alt:
> > > > > > +1.613.693.0684x3545
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Please Donate to www.wikipedia.org
> > > >
> > > > - RGB
> > > >
> > > > --
> > > > Richard Guy Briggs <rbriggs(a)redhat.com> Senior
> > > > Software Engineer, Kernel Security, AMER ENG Base Operating
> > > > Systems, Red Hat Remote, Ottawa, Canada
> > > > Voice: +1.647.777.2635, Internal: (81) 32635, Alt:
> > > > +1.613.693.0684x3545
> > > >
> > >
> > >
> > > --
> > > Please Donate to www.wikipedia.org
> >
> > - RGB
> >
> > --
> > Richard Guy Briggs <rbriggs(a)redhat.com> Senior Software Engineer,
> > Kernel Security, AMER ENG Base Operating Systems, Red Hat Remote,
> > Ottawa, Canada
> > Voice: +1.647.777.2635, Internal: (81) 32635, Alt:
> > +1.613.693.0684x3545
>
> - RGB
>
> --
> Richard Guy Briggs <rbriggs(a)redhat.com> Senior Software Engineer,
> Kernel Security, AMER ENG Base Operating Systems, Red Hat Remote,
> Ottawa, Canada
> Voice: +1.647.777.2635, Internal: (81) 32635, Alt:
> +1.613.693.0684x3545
> #include <stdio.h>
> #include <stdlib.h>
> #include <sys/stat.h>
> #include <fcntl.h>
> #include <unistd.h>
> #include <errno.h>
> #include <string.h>  /* for strlen() and strerror() */
>
> void create_load(int iters);
> void cleanup();
>
> int high_rate = 0;
> int num_iters = 100000;
> int fd1;
> char file1[50];
> char file2[50];
> char dir1[50];
> char symlink1[50];
>
> /* Purpose: To create system load by invoking system calls used by templates.
> *
> * Note: The unlink(2) of a file can be an expensive operation (i.e., event
> * rate goes way down).
> */
>
> int main(int argc, char **argv) {
>
> int num_children=1;
> int iters;
> int i;
> int c;  /* getopt() returns int, and may return -1 */
>
> while ((c = getopt(argc, argv, "hi:")) != -1) {
> switch (c) {
> case 'h':
> /*
> * Desire "high" event rate
> */
> high_rate = 1;
> argc--;
> break;
> case 'i':
> /*
> * Desire a specified number of iterations
> */
> num_iters = atoi(optarg);
> argc--;
> break;
> default:
> fprintf(stderr,"Unknown option: %c\n",optarg);
> exit(1);
> }
> }
>
>
> /*
> if(argv[optind] != NULL) {
> num_children = atoi(argv[optind]);
> } else {
> num_children = 0;
> }
> */
>
> /* Register cleanup routine */
> fprintf(stderr,"Registering cleanup routine...\n");
> if (atexit(cleanup) == -1) {
> fprintf(stderr,"Error calling atexit(), errno=%d(%s)\n",
> errno,strerror(errno));
> exit(1);
> }
>
>
> /* fork child processes, if any requested */
> for(i=1; i < num_children; i++) {
> if(fork() == 0) {
>
> printf("child pid: %d\n",getpid());
>
> /* Setup file names based on child's pid */
> sprintf(file1,"./file1_%d",getpid());
> sprintf(file2,"./file2_%d",getpid());
> sprintf(dir1,"./dir1_%d",getpid());
> sprintf(symlink1,"./file1symlink_%d",getpid());
>
> /* each child creates load */
> iters=0;
> if (num_iters == -1) {
> while(1) {
> create_load(iters);
> iters++;
> if( (iters % 1000) == 0) {
> printf("pid %d iteration %d\n",getpid(),iters);
> }
> }
> } else {
> while(iters < num_iters) {
> create_load(iters);
> iters++;
> if( (iters % 1000) == 0) {
> printf("pid %d iteration %d\n",getpid(),iters);
> }
> }
> }
> }
> }
>
> /* Parent creates load also */
> printf("parent pid: %d\n",getpid());
>
> /* Setup file names based on parent's pid */
> sprintf(file1,"./file1_%d",getpid());
> sprintf(file2,"./file2_%d",getpid());
> sprintf(dir1,"./dir1_%d",getpid());
> sprintf(symlink1,"./file1symlink_%d",getpid());
>
> iters=0;
> if (num_iters == -1) {
> while(1) {
> create_load(iters);
> iters++;
> if( (iters % 1000) == 0) {
> printf("pid %d iteration %d\n",getpid(),iters);
> }
> }
> } else {
> while(iters < num_iters) {
> create_load(iters);
> iters++;
> if( (iters % 1000) == 0) {
> printf("pid %d iteration %d\n",getpid(),iters);
> }
> }
> }
>
> } /* main */
>
>
> void create_load(int iters) {
>
> int pid;
> char *args[2];
> struct stat stat_buf;
>
> fd1 = creat(file1,0644);  /* mode is octal 0644, not hex */
> if (fd1 == -1) {
> fprintf(stderr,"pid %d: creat() returned error for file %s,
errno=%d(%s)\n",
> getpid(),file1,errno,strerror(errno));
> exit(1);
> }
> if (close(fd1) == -1) {
> fprintf(stderr,"pid %d: close() returned error, errno=%d(%s)\n",
> getpid(),errno,strerror(errno));
> exit(1);
> }
> fd1 = open(file1, O_RDWR, 0777);
> if (fd1 == -1) {
> fprintf(stderr,"pid %d: open() returned error, errno=%d(%s)\n",
> getpid(),errno,strerror(errno));
> exit(1);
> }
>
> /* Chown this file to root instead of user ids so that we don't generate a
> * non-owned alert when the file is truncated when invoking creat() again
> * as root on an existing file owned by another user.
> */
> if (chown(file1,0,0) == -1) {
> fprintf(stderr,"pid %d: chown(%d,%d) returned error, errno=%d(%s)\n",
> getpid(),0,0,errno,strerror(errno));
> exit(1);
> }
>
> if (fchown(fd1,0,0) == -1) {
> fprintf(stderr,"pid %d: fchown(%d,%d) returned error,
errno=%d(%s)\n",
> getpid(),0,0,errno,strerror(errno));
> exit(1);
> }
>
> if (chmod(file1, S_IRUSR|S_IWUSR|S_IRGRP|S_IROTH) == -1) {
> fprintf(stderr,"pid %d: chmod(S_IRUSR|S_IWUSR|S_IRGRP|S_IROTH) returned
error, errno=%d(%s)\n",
> getpid(),errno,strerror(errno));
> exit(1);
> }
> if (fchmod(fd1, S_IXUSR|S_IXGRP|S_IXOTH) == -1) {
> fprintf(stderr,"pid %d: fchmod(S_IXUSR|S_IXGRP|S_IXOTH) returned error,
errno=%d(%s)\n",
> getpid(),errno,strerror(errno));
> exit(1);
> }
>
>
> if (write(fd1,"Some stuff",strlen("Some stuff")) == -1) {
> fprintf(stderr,"pid %d: write() returned error, errno=%d(%s)\n",
> getpid(),errno,strerror(errno));
> exit(1);
> }
> if (ftruncate(fd1,7) == -1) {
> fprintf(stderr,"pid %d: ftruncate() returned error, errno=%d(%s)\n",
> getpid(),errno,strerror(errno));
> exit(1);
> }
> if (close(fd1) == -1) {
> fprintf(stderr,"pid %d: close() returned error, errno=%d(%s)\n",
> getpid(),errno,strerror(errno));
> exit(1);
> }
>
> if (truncate(file1,3) == -1) {
> fprintf(stderr,"pid %d: truncate() returned error, errno=%d(%s)\n",
> getpid(),errno,strerror(errno));
> exit(1);
> }
> if (rename(file1,file2) == -1) {
> fprintf(stderr,"pid %d: rename(%s,%s) returned error,
errno=%d(%s)\n",
> getpid(),file1,file2,errno,strerror(errno));
> exit(1);
> }
> if (rename(file2,file1) == -1) {
> fprintf(stderr,"pid %d: rename(%s,%s) returned error,
errno=%d(%s)\n",
> getpid(),file2,file1,errno,strerror(errno));
> exit(1);
> }
> if (link(file1,file2) == -1) {
> fprintf(stderr,"pid %d: link(%s,%s) returned error, errno=%d(%s)\n",
> getpid(),file1,file2,errno,strerror(errno));
> exit(1);
> }
> if (symlink(file1,symlink1) == -1) {
> fprintf(stderr,"pid %d: symlink(%s,%s) returned error,
errno=%d(%s)\n",
> getpid(),file1,symlink1,errno,strerror(errno));
> exit(1);
> }
> if (lchown(symlink1,0,0) == -1) {
> fprintf(stderr,"pid %d: lchown(%s,%d,%d) returned error,
errno=%d(%s)\n",
> getpid(),symlink1,0,0,errno,strerror(errno));
> exit(1);
> }
>
> if (lstat(symlink1,&stat_buf) == -1) {
> fprintf(stderr,"pid %d: lstat(%s) returned error, errno=%d(%s)\n",
> getpid(),symlink1,errno,strerror(errno));
> exit(1);
> }
> if (stat(file1,&stat_buf) == -1) {
> fprintf(stderr,"pid %d: stat(%s) returned error, errno=%d(%s)\n",
> getpid(),file1,errno,strerror(errno));
> exit(1);
> }
> if (unlink(file1) == -1) {
> fprintf(stderr,"pid %d: unlink(%s) returned error, errno=%d(%s)\n",
> getpid(),file1,errno,strerror(errno));
> exit(1);
> }
> if (unlink(file2) == -1) {
> fprintf(stderr,"pid %d: unlink(%s) returned error, errno=%d(%s)\n",
> getpid(),file2,errno,strerror(errno));
> exit(1);
> }
> if (unlink(symlink1) == -1) {
> fprintf(stderr,"pid %d: unlink(%s) returned error, errno=%d(%s)\n",
> getpid(),symlink1,errno,strerror(errno));
> exit(1);
> }
> if (mkdir(dir1,S_IRUSR|S_IWUSR|S_IXUSR|S_IRGRP|S_IXGRP) == -1) {
> fprintf(stderr,"pid %d: mkdir() returned error, errno=%d(%s)\n",
> getpid(),errno,strerror(errno));
> exit(1);
> }
> if (rmdir(dir1) == -1) {
> fprintf(stderr,"pid %d: rmdir() returned error, errno=%d(%s)\n",
> getpid(),errno,strerror(errno));
> exit(1);
> }
>
> /* Fork every 10000 iterations to not use up process resources too quickly */
> if ( (iters % 10000) == 0) {
> pid = fork();
> if(pid == 0) {
> fprintf(stderr,"child pid %d: fork!\n",getpid());
> // child
> args[0] = "/bin/ls";
> args[1] = NULL;
> close(1);
> close(2);
> execve(args[0], args, NULL);
> fprintf(stderr,"pid %d: execve(%s) returned error, errno=%d(%s)\n",
> getpid(),args[0],errno,strerror(errno));
> _exit(1);
> } else if (pid < 0) {
> fprintf(stderr,"pid %d: fork() returned error, errno=%d(%s)\n",
> getpid(),errno,strerror(errno));
> exit(1);
> } else {
> fprintf(stderr,"parent pid %d, child pid: %d:
fork!\n",getpid(),pid);
> }
>
> pid = vfork();
> if(pid == 0) {
> args[0] = "/bin/pwd";
> args[1] = NULL;
> close(1);
> close(2);
> execv(args[0], args);
> fprintf(stderr,"pid %d: execve(%s) returned error, errno=%d(%s)\n",
> getpid(),args[0],errno,strerror(errno));
> _exit(1);
> } else if (pid < 0) {
> fprintf(stderr,"pid %d: vfork() returned error, errno=%d(%s)\n",
> getpid(),errno,strerror(errno));
> exit(1);
> }
> }
>
> /* Make sure everything is cleaned up and deleted before returning */
> cleanup();
>
> } /* create_load() */
>
> void cleanup() {
> close(fd1);
> unlink(file1);
> unlink(file2);
> unlink(symlink1);
> rmdir(dir1);  /* dir1 is a directory, so rmdir() rather than unlink() */
> return;
> }
> --
> Linux-audit mailing list
> Linux-audit(a)redhat.com
> https://www.redhat.com/mailman/listinfo/linux-audit
- RGB
--
Richard Guy Briggs <rbriggs(a)redhat.com> Senior Software Engineer,
Kernel Security, AMER ENG Base Operating Systems, Red Hat Remote,
Ottawa, Canada
Voice: +1.647.777.2635, Internal: (81) 32635, Alt:
+1.613.693.0684x3545
--
Linux-audit mailing list
Linux-audit(a)redhat.com
https://www.redhat.com/mailman/listinfo/linux-audit
- RGB
--
Richard Guy Briggs <rbriggs(a)redhat.com>
Senior Software Engineer, Kernel Security, AMER ENG Base Operating Systems, Red Hat
Remote, Ottawa, Canada
Voice: +1.647.777.2635, Internal: (81) 32635, Alt: +1.613.693.0684x3545