RE: "Too many open files" under heavy use

From: Kuntzelman Brad MSgt AFIT/ENG <Brad.Kuntzelman_at_afit.edu>
Date: Mon, 15 Dec 2003 11:42:49 -0500

Grrrr... that fixed the error message, but now I get no logs!

I have instrumented each of the scripts to write to a logfile, creating one logfile for every client IP it sees, so at most 90 log files.

I really wish I understood Linux better than I do... :(
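
For illustration, here is a minimal C sketch of a logging pattern that avoids descriptor exhaustion: open the per-client logfile in append mode, write, and close it immediately, so a handler never holds more than one log descriptor at a time. (The helper name log_client and the path layout are made up for the example; the real handlers are perl/sh scripts.)

---------sketch: log_client.c----------
#include <stdio.h>

/* Append one line per event to a per-client logfile, then close it
 * right away so at most one log descriptor is open at any moment. */
static int
log_client(const char *client_ip, const char *msg)
{
        char path[256];
        FILE *fp;

        snprintf(path, sizeof(path), "%s.log", client_ip);
        if ((fp = fopen(path, "a")) == NULL)    /* "a" creates if missing */
                return (-1);
        fprintf(fp, "%s\n", msg);
        fclose(fp);                             /* release the fd now */
        return (0);
}

int
main(void)
{
        return (log_client("10.10.0.17", "connection seen") == -1);
}
-----------------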


-----Original Message-----
From:   Laurent OUDOT [mailto:oudot@rstack.org]
Sent:   Mon 2003-12-15 02:10 AM
To:     Kuntzelman Brad MSgt AFIT/ENG
Cc:     honeypots@securityfocus.com
Subject:        Re: "Too many open files" under heavy use


Kuntzelman Brad MSgt AFIT/ENG wrote:
> Hi again all,
>
> I'm having trouble (what's new?). I'm breaking arpd and honeyd with "too
> many open files"... both end up crashing.
>
> I'm using honeyd and my own "custom" Java synthetic traffic generation
> suite to simulate a live LAN. The synthetic traffic generator is a very
> naive implementation, but I had to get something running... but I digress...
>
> Using a stripped-down version of the UDP-based protocol handler in the
> snmp.pl script, I made a functional UDP echo server (thanks, Lance).
> However, it still sometimes blocks on read (the same problem I had
> before), but only about a third of the time.
>
> Next, I run my traffic generator (I have about 38 various servers
> configured in honeyd and 90 simulated clients), which kicks off quite
> a number of simultaneous connections. So I'm guessing that with all
> the TCP and UDP connections I'm making at once, combined with all the
> scripts that need to be spawned, all the logging, and my blocking UDP
> handlers, my honeyd box is choking... somehow...
>
> Upon starting the traffic, I immediately begin getting the "too many
> open files" message when honeyd is trying to fork the shell/perl scripts
> to handle the connections, as below (from /var/log/syslog).
>
> Dec 14 16:23:59 harry honeyd[6601]: E(10.10.0.17:45509 - 10.2.0.14:110):
> honeyd: cmd_fork: execv(sh): Too many open files
> Dec 14 16:23:59 harry honeyd[6601]: E(10.10.0.19:45510 - 10.2.0.15:110):
> honeyd: cmd_fork: execv(sh): Too many open files
> Dec 14 16:23:59 harry honeyd[6601]: E(10.10.0.85:49265 - 10.3.0.15:137):
> honeyd: cmd_fork: execv(scripts/ns-handler.pl): Too many open files
> Dec 14 16:23:59 harry honeyd[6601]: E(10.10.0.17:45511 - 10.2.0.14:110):
> honeyd: cmd_fork: execv(sh): Too many open files
> Dec 14 16:23:59 harry honeyd[6601]: E(10.10.0.19:45512 - 10.2.0.15:25):
> honeyd: cmd_fork: execv(sh): Too many open files
> Dec 14 16:23:59 harry honeyd[6601]: E(10.10.0.17:45513 - 10.2.0.14:25):
> honeyd: cmd_fork: execv(sh): Too many open files
> Dec 14 16:23:59 harry honeyd[6601]: E(10.10.0.16:45515 - 10.2.0.14:110):
> honeyd: cmd_fork: execv(sh): Too many open files
> Dec 14 16:23:59 harry honeyd[6601]: E(10.10.0.19:45516 - 10.2.0.15:110):
> honeyd: cmd_fork: execv(sh): Too many open files
> Dec 14 16:23:59 harry honeyd[6601]: E(10.10.0.19:49266 - 10.3.0.15:137):
> honeyd: cmd_fork: execv(scripts/ns-handler.pl): Too many open files
>
> Now, I've checked the following locations in /proc after a run:
> /proc/sys/fs/file-max (52403)
> /proc/sys/fs/file-nr (2069 242 52403)
> /proc/sys/fs/inode-nr (2011 120)
> /proc/sys/fs/inode-state (2011 120 0 0 0 0 0)
>
> and ulimit (bash) says:
> core file size        (blocks, -c) 0
> data seg size         (kbytes, -d) unlimited
> file size             (blocks, -f) unlimited
> max locked memory     (kbytes, -l) unlimited
> max memory size       (kbytes, -m) unlimited
> open files                    (-n) 52000
> pipe size          (512 bytes, -p) 8
> stack size            (kbytes, -s) 8192
> cpu time             (seconds, -t) unlimited
> max user processes            (-u) unlimited
> virtual memory        (kbytes, -v) unlimited
>
> Okay, now, I'm guessing that the initial honeyd process has the correct
> ulimit, but when it forks off the daemonized child, the child receives
> its own ulimit settings. Is there a way to force the transfer of these
> settings to the child process? Note: running honeyd with the '-d'
> switch doesn't help the problem...
>
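
Two observations on the numbers above. First, /proc/sys/fs/file-nr shows
only about 2069 descriptors allocated system-wide against a file-max of
52403, so the kernel's global file table is nowhere near full; the limit
being hit must be per-process. Second, the inheritance guess is easy to
test: on Linux, resource limits pass unchanged across fork() and survive
execve(), so a child only sees a lower RLIMIT_NOFILE if something between
the fork and the exec lowers it deliberately. A small standalone C sketch
(not part of honeyd) that shows the inheritance:

---------sketch: rlimit-inherit.c----------
#include <stdio.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int
main(void)
{
        struct rlimit rl;

        getrlimit(RLIMIT_NOFILE, &rl);
        printf("parent: soft=%ld hard=%ld\n",
            (long)rl.rlim_cur, (long)rl.rlim_max);

        if (fork() == 0) {
                /* the child sees exactly the same limits */
                getrlimit(RLIMIT_NOFILE, &rl);
                printf("child:  soft=%ld hard=%ld\n",
                    (long)rl.rlim_cur, (long)rl.rlim_max);
                _exit(0);
        }
        wait(NULL);
        return (0);
}
-----------------

Both lines print the shell's limit (52000 here), which points away from
inheritance and toward something inside honeyd lowering the limit before
it execs the scripts; the reply below shows exactly where.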

Hi,

There is a limit hard-coded in Honeyd.

Feel free to play, at your own risk, with the line "rl.rlim_cur =
rl.rlim_max = 24;" in command.c in the honeyd package; raising that
value increases the limit.

Just look at this C code and you'll easily see what's going on with
your problem (RLIMIT_NOFILE is the per-process limit on open files):

---------honeyd: command.c----------
int
cmd_setpriv(struct template *tmpl)
{
(...)
        struct rlimit rl;

(...)
        /* Raising file descriptor limits */
        rl.rlim_cur = rl.rlim_max = 24;
        if (setrlimit(RLIMIT_NOFILE, &rl) == -1)
                err(1, "setrlimit");

        return (0);
}
-----------------
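
Note that despite the "Raising file descriptor limits" comment, this
code actually caps both the soft and hard limit at 24 descriptors for
each forked service process, which is what makes cmd_fork's execv()
fail under load. If you do patch it, one possible variant (a sketch,
not from the honeyd tree; raise_fd_limit is a name made up for the
example) queries the limits the process already has and lifts the soft
limit to the hard cap instead of hard-coding 24:

---------sketch: raise_fd_limit.c----------
#include <err.h>
#include <sys/resource.h>

static void
raise_fd_limit(void)
{
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) == -1)
                err(1, "getrlimit");
        rl.rlim_cur = rl.rlim_max;      /* soft limit up to the hard cap */
        if (setrlimit(RLIMIT_NOFILE, &rl) == -1)
                err(1, "setrlimit");
}

int
main(void)
{
        raise_fd_limit();
        return (0);
}
-----------------

The hard-coded 24 is presumably there to contain each forked service
script, so raising it trades that containment for capacity; either way,
per-client logfiles in the handler scripts still need to be closed
promptly.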

Good luck,

laurent



Received on Mon Dec 15 2003 - 12:02:57 PST