MOO-cows Mailing List Archive
Re: RAM usage limits
On Sun, 21 Apr 1996, Justin C Harris wrote:
> First off, we could use Martian's built-in, proc_size() I think it was, and
> set up something like:
As Martian noted, his proc_size() isn't portable. It works on Linux and
SunOS, but not on, say, HP-UX or Ultrix. Writing a routine that figures out
total memory use is one of those things that tends to be different on
every system--the kind of thing that gives programmers trying to write
portable code headaches. However, it's probably fair to assume that, with
some amount of effort, we can find a way to get the job done on most
systems.
[Note: there's a compilation option to use the GNU malloc package, which
makes memory usage statistics available to the database in the form of the
memory_usage() built-in function. However, options.h specifically
recommends that it not be used, for several reasons...]
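For what it's worth, if I recall the format correctly, memory_usage()
returns a list of {block-size, nused, nfree} triples, so a rough
in-database estimate of the memory the malloc package has carved into
blocks might look like this (my sketch, untested):

  total = 0;
  for block in (memory_usage())
    total = total + block[1] * (block[2] + block[3]);
  endfor
  "total now holds roughly the bytes in allocated-plus-free malloc blocks";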
> while (1)
>   size = proc_size();
>   if (size > $server_options.max_ram_usage)
>     [call a built-in or something to find the task, and kill it]
I don't see what this accomplishes, or how. How do we find the guilty
task? How do we know that it's a runaway task at fault, and not natural
database growth or gradual memory usage increase as a result of memory
fragmentation? What if it's more than one task?
Furthermore, since this is a MOO task, it can only run when other
tasks aren't running. We accomplish a heck of a lot more by checking the
process size at regular intervals in the MOO-code executor. (For
example, we can decide to stop executing a task if the process size
increases by 20 megs while inside that task, which is probably a sign
that Bad Things (TM) are happening. This is where I thought things were
going.)
> > Running this code under WinMOO for one minute (on a Pentium/100) I
> > produced a 2.7 meg object. (And note that the code is potentially wasting
> > a bit of time; running out of ticks usually takes less than a second.
> > With a minor code change I got 4 megs in a minute.) Not amazingly fast,
> > but fast enough. The properties it generates are less than 50K apiece;
> > they won't trigger a memory-increase-per-task limit or possibly a
> > max-prop-size limit, nor does it ever tick-out or second-out. Note also
> > that it produces an increase in *db* size as well as process size, which
> > people seem to be ignoring.
> Exactly. The process size is what we'd be monitoring.
Exactly what? If we're monitoring total process size, what happens when
we exceed the limit because the database is up to 50 megs? If we're
monitoring process size increase inside a task, then that code is
specifically designed not to trigger it. [This code is also designed to
get around the two things I propose below.]
> > LambdaMOO will not be hacked into an idiot-proof, hostile-programmer-safe
> > system. It is simply not designed for it.
> We'll (or at least I will) try to make it hostile-programmer-safe. Anyone
> that has put a lot of work into a MOO would want to do the same.
Yes, of course I want this. I'd love it if it were impossible for
programmers to crash the MOOs I wiz on. It is not going to become that.
Is it possible to make it more difficult to crash the server? Yes.
Is it possible to give an alert wiz more time to avoid a crash? Yes.
Should we attempt to do these things? Heck yes!
Is it possible to make it impossible for a programmer intent on crashing a
MOO from doing so, while retaining the basic design of the LambdaMOO
server (not the design of the language or the database--the basic
program design of the server code itself) and enough speed to make it
usable? No, it is not.
Please understand, I am not arguing that the ideas you propose are bad, or
not worth implementing or considering. Nor do I suggest that we should
throw away ideas because they're not bulletproof; if it's impossible to
build a fifty-foot-high electric fence around a prison, a ten-foot
barbed-wire fence and some guard towers are probably still a worthwhile
investment. I simply believe that we need to think very carefully about
exactly what proposals along this line are expected to accomplish, and at
what cost.
There are two things that I've thought of in this area. They're designed
to be relatively easy to implement (not to require broad changes to the
source code), and they're aimed more at the simpler things.
1) Place a maximum size on the environment (variables in all frames) that
a task can have when it suspends.
To protect against the simple case:
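A sketch of the sort of thing meant (my own illustration, not necessarily
the exact code)--a loop that doubles a variable and then suspends with it
still in scope:

  x = "xxxxxxxxxxxxxxxx";
  while (1)
    x = x + x;
    suspend(0);
  endwhile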
which is quite effective, and is also easy enough to do by mistake.
Difficulty of implementation:
Need to be able to determine the environment size of a task, which isn't
too difficult, especially since just measuring the size of variables
should suffice. Need to check task size before suspending a task,
which happens in only a few places. Also need to add support for "task
ran out of memory" or whatever.
Cost:
Low. Only need to do anything when a task suspends, and the time
required should be linearly related to the environment size.
2) Place a maximum size on the amount of data a single property can hold.
To protect against the analogous and arguably more dangerous case:
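Again a sketch of my own (not the original example): the same doubling
loop, but aimed at a property, so the bloat lands in the database:

  while (1)
    this.data = this.data + this.data;
    suspend(0);
  endwhile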
which is as effective as the above, but the damage is saved in the
database.
Difficulty of implementation:
Relatively easy. Property sets happen only in a very small number of
functions. Of course, now property sets have a new way to fail,
presumably with E_QUOTA--not a small complication.
Cost:
High. Property sets occur very frequently.
(Note that the code I gave in my earlier message avoids both these
measures: it stores only a certain amount of data in each property and
never suspends with any big variables.)
I suppose there's no need to explicitly solicit comments on these ideas;
they'll come in any case... ;>
ResComp Network Support Technician, Bursley Hall
"Invisibility is in the eye of the beholder."
Home Page: http://www-personal.engin.umich.edu/~cunkel/