MOO-cows Mailing List Archive

MOO-1.7.9p2 Memory leak in match() still?



We still seem to be seeing a memory leak at MOOtiny, probably in match(), even
after applying the patch to regexpr.c that Pavel announced about a month back.

Using FUP, I have set up a @hoststats verb which calls a csh script that
returns the size of the moo processes, along with some other stats; a sketch
of the verb side follows the snapshots below.  This lets all the Wizzen watch
what's happening, and what we see is a growth of about 1K per minute in the
main process size.  That is a lower bound.  The following two snapshots show
a 73K increase in the main process over 16 minutes:

Current Host stats:
Size of moo (pid 273) = 28319744 Bytes.
Size of moo (pid 275) = 1441792 Bytes.
Size of moo (pid 2797) = 1441792 Bytes.
Total size:           = 31203328 Bytes.
Swap available: 126436k Bytes
System: 10:15am up 1 day(s), 2:14, 3 users, load average: 1.09, 1.04, 1.05

Current Host stats:
Size of moo (pid 273) = 28393472 Bytes.
Size of moo (pid 275) = 1441792 Bytes.
Size of moo (pid 2797) = 1593344 Bytes.
Total size:           = 31428608 Bytes.
Swap available: 126308k Bytes
System: 10:31am up 1 day(s), 2:30, 3 users, load average: 1.06, 1.04, 1.05
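
(For anyone wanting to set up something similar, the verb side can be as
simple as the sketch below.  This is a sketch only, not our actual code: it
assumes FUP's fileread() builtin, which returns a file's lines as a list of
strings, and the filename is made up.  Our csh script just collects the ps
output for the moo processes and leaves it somewhere the verb can read.)

    @program me:@hoststats
    "Sketch: report the stats the csh script wrote out.";
    "Assumes FUP's fileread() => list of strings; the filename is hypothetical.";
    lines = fileread("hoststats.txt");
    player:tell("Current Host stats:");
    for line in (lines)
      player:tell(line);
    endfor
    .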

Now, I have run our server on my own Sun, running our initial WOOM1.0 DB
release, and observed the following.  It ran overnight with no connections and
showed no increase in size.  Big surprise.  I then connected and tried a few
commands, measuring the size after each one.  Just blinking added a
substantial amount.  Now, due to our Web extensions (nearly all of which are
in MOO-code), "just blinking" actually does a small number of match()'s and
rmatch()'s, because the announce is made visible to any Web watchers.  This is
why I believe the leak comes from there.
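
(If anyone wants to try to reproduce this without our DB, something along
these lines should show it: a throwaway verb that does nothing but match() and
rmatch() in a loop, so any per-call leak turns up as steady growth in the
process size.  The verb name, strings, and loop counts are all arbitrary; the
suspend(0) is just there to stay inside the tick limit.)

    @program me:leaktest
    "Hammer match()/rmatch() so a per-call leak shows in the process size.";
    for i in [1..100]
      for j in [1..100]
        match("the quick brown fox", "q[a-z]+k");
        rmatch("the quick brown fox", "[a-z]+");
      endfor
      suspend(0);
    endfor
    player:tell("leaktest done.");
    .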

I have tried running the moo inside the SparcWorks debugger with memory-leak
checking on, but for some reason, under those conditions, the MOO immediately
requests far more memory than I can supply and panics.  So I can't trace it
any further than this.
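
(One thing that may work where the debugger doesn't: if your server was
compiled with GNU malloc, the memory_usage() builtin reports the allocator's
per-blocksize statistics, so you can bracket a match() loop like the one above
and diff the numbers entirely in-MOO.  A sketch of such a verb body follows;
note that memory_usage() returns an empty list on servers built without GNU
malloc.)

    before = memory_usage();
    for i in [1..1000]
      match("the quick brown fox", "q[a-z]+k");
    endfor
    after = memory_usage();
    "Each element is a {blocksize, nused, nfree} list; a leak shows as nused rising.";
    for i in [1..length(before)]
      player:tell(before[i][1], ": ", before[i][2], " -> ", after[i][2]);
    endfor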

However, since we've grown from 22M to 28M in 24 hours, I can't believe that
we are the only people seeing this problem.  Have the rest of you noticed
anything similar?  Is there a second patch I've missed?  Any ideas as to how
we can plug this leak, or do we indeed need to drop back to a version of
1.7.8p4 patched to read our current DB?  Help?

Thanks for any comments,

Moredhel/TC.  MOOtiny@spsyc.nott.ac.uk:8888

