"Too many open files" under heavy use

From: Kuntzelman Brad MSgt AFIT/ENG <Brad.Kuntzelman_at_afit.edu>
Date: Sun, 14 Dec 2003 16:38:38 -0500

Hi again all,

I'm having trouble (what's new?). I'm breaking arpd and honeyd with "too many open files"; both end up crashing.

I'm using honeyd and my own "custom" Java synthetic traffic generation suite to simulate a live LAN. The traffic generator is a very naive implementation, but I had to get something running... but I digress.

Using a stripped-down version of the UDP-based protocol handler in the snmp.pl script, I made a functional UDP echo server (thanks, Lance). However, it still sometimes blocks on read (the same problem I had before), but only about a third of the time.
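
To keep the handler from hanging on read forever, I've been experimenting with wrapping the read in a select() timeout, roughly like this (sketch only; as I understand it, honeyd hands the UDP payload to the script on STDIN and relays whatever the script prints to STDOUT back to the client, and the 10-second timeout is an arbitrary number I picked):

#!/usr/bin/perl
# Trimmed-down UDP echo handler sketch.
# Assumes honeyd delivers the datagram payload on STDIN and sends STDOUT
# back to the client; the 10-second timeout is arbitrary.
use strict;
use warnings;
use IO::Select;

$| = 1;                                   # unbuffer STDOUT so the reply goes out immediately
my $sel = IO::Select->new(\*STDIN);

# Wait at most 10 seconds for data instead of blocking on read forever.
if ($sel->can_read(10)) {
    my $buf = '';
    my $n   = sysread(STDIN, $buf, 65535);
    print $buf if defined $n && $n > 0;   # echo the payload back
}
exit 0;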

Next, I run my traffic generator (I have about 38 assorted servers configured in honeyd and 90 simulated clients), which kicks off quite a number of connections simultaneously. So I'm guessing that with all the TCP and UDP connections I'm making at once, combined with all the scripts that need to be opened, all the logging, and my blocking UDP handlers, my honeyd box is choking... somehow.

Upon starting the traffic, I immediately begin getting the "too many open files" message when honeyd is trying to fork the shell/perl scripts to handle the connections, as below (from /var/log/syslog).

Dec 14 16:23:59 harry honeyd[6601]: E(10.10.0.17:45509 - 10.2.0.14:110): honeyd: cmd_fork: execv(sh): Too many open files
Dec 14 16:23:59 harry honeyd[6601]: E(10.10.0.19:45510 - 10.2.0.15:110): honeyd: cmd_fork: execv(sh): Too many open files
Dec 14 16:23:59 harry honeyd[6601]: E(10.10.0.85:49265 - 10.3.0.15:137): honeyd: cmd_fork: execv(scripts/ns-handler.pl): Too many open files
Dec 14 16:23:59 harry honeyd[6601]: E(10.10.0.17:45511 - 10.2.0.14:110): honeyd: cmd_fork: execv(sh): Too many open files
Dec 14 16:23:59 harry honeyd[6601]: E(10.10.0.19:45512 - 10.2.0.15:25): honeyd: cmd_fork: execv(sh): Too many open files
Dec 14 16:23:59 harry honeyd[6601]: E(10.10.0.17:45513 - 10.2.0.14:25): honeyd: cmd_fork: execv(sh): Too many open files
Dec 14 16:23:59 harry honeyd[6601]: E(10.10.0.16:45515 - 10.2.0.14:110): honeyd: cmd_fork: execv(sh): Too many open files
Dec 14 16:23:59 harry honeyd[6601]: E(10.10.0.19:45516 - 10.2.0.15:110): honeyd: cmd_fork: execv(sh): Too many open files
Dec 14 16:23:59 harry honeyd[6601]: E(10.10.0.19:49266 - 10.3.0.15:137): honeyd: cmd_fork: execv(scripts/ns-handler.pl): Too many open files

Now, I've checked the following locations in proc after a run:
/proc/sys/fs/file-max (52403)
/proc/sys/fs/file-nr (2069 242 52403)
/proc/sys/fs/inode-nr (2011 120)
/proc/sys/fs/inode-state (2011 120 0 0 0 0 0)

and ulimit (bash) says:
core file size        (blocks, -c) 0
data seg size         (kbytes, -d) unlimited
file size             (blocks, -f) unlimited
max locked memory     (kbytes, -l) unlimited
max memory size       (kbytes, -m) unlimited
open files                    (-n) 52000
pipe size          (512 bytes, -p) 8
stack size            (kbytes, -s) 8192
cpu time             (seconds, -t) unlimited
max user processes            (-u) unlimited
virtual memory        (kbytes, -v) unlimited
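
Those are all system-wide or shell-level numbers, though, so something like this quick hack should show the per-process descriptor count for the honeyd or arpd process itself (sketch only; Linux /proc, with the pid supplied by hand from ps):

#!/usr/bin/perl
# Quick hack: count open descriptors for one process via Linux /proc.
use strict;
use warnings;

my $pid = shift or die "usage: $0 <pid>\n";
opendir(my $dh, "/proc/$pid/fd") or die "can't read /proc/$pid/fd: $!\n";
my @fds = grep { /^\d+$/ } readdir($dh);   # each entry is one open descriptor
closedir($dh);
print scalar(@fds), " descriptors open in pid $pid\n";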

Okay, now I'm guessing that the initial honeyd process has the correct ulimit, but that when it forks off the daemonized child, the child gets its own ulimit settings. Is there a way to force these settings to carry over to the child process? Note: running honeyd with the '-d' switch doesn't help the problem...
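
One thing I've been considering is a small wrapper that bumps RLIMIT_NOFILE explicitly and then execs honeyd, so the limit doesn't depend on whatever the shell does or doesn't pass along (sketch only; it assumes the BSD::Resource module from CPAN, and the honeyd path and arguments below are just placeholders):

#!/usr/bin/perl
# Wrapper sketch: raise the open-file limit, then exec honeyd so the daemon
# (and anything it forks) inherits the raised limit.
# Assumes the BSD::Resource CPAN module is installed.
use strict;
use warnings;
use BSD::Resource qw(getrlimit setrlimit RLIMIT_NOFILE);

my ($soft, $hard) = getrlimit(RLIMIT_NOFILE);
print "current RLIMIT_NOFILE: soft=$soft hard=$hard\n";

# Raise the soft limit up to the hard limit before exec'ing honeyd.
setrlimit(RLIMIT_NOFILE, $hard, $hard)
    or die "setrlimit failed: $!\n";

# Path and arguments are placeholders; pass the real honeyd command line.
exec '/usr/local/bin/honeyd', @ARGV
    or die "exec honeyd failed: $!\n";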

Thanks for any help!

Brad Kuntzelman


Received on Sun Dec 14 2003 - 18:56:08 PST