MOO-cows Mailing List Archive


Re: Task RAM-usage limits...

On Sun, 21 Apr 1996, Tom Ritchford wrote:

>     I just can't believe this is that inefficient.  Certainly,
> keeping track of the amount of memory used by an entire task
> is impossible.  However, all the memory management funnels through
> the single mymalloc routine.  If you made sure that mymalloc "failed"
> gracefully for large allocations whose currently executing task
> didn't have wiz perms, then this would catch a lot but not all of
> the problem code (you could still create, for example, a list
> with two elements, each of which was a list of two elements,
> each of which...).
> I can't see why a single comparison before each malloc is going to
> bring the house down.  Unix malloc isn't particularly zippy.

For strings, you're right.  For lists, it's not that simple.  Consider 
the list:

{"", {"", {"", {"", {"", {"", {"", {} } } } } } } }

which has a length of 2.  Creation of this list never allocates a large 
amount of memory at a time.  To figure out how "big" this thing really 
is, we need value_bytes() or something like it, which is much slower than 
a simple integer comparison.  (I might also add that value_bytes() is 
recursive for lists containing lists, which adds problems of its own.)
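To make the cost concrete, here's a minimal sketch of such a recursive
byte-count in C.  The types and the function name are illustrative, not
the server's actual Var representation or value_bytes(); the point is
that measuring a nested list means walking every element, which is far
more work than a single integer comparison before a malloc.

```c
#include <stddef.h>
#include <string.h>

/* Illustrative tagged value -- NOT the server's actual Var type. */
typedef enum { V_STR, V_LIST } vtype;

typedef struct value {
    vtype type;
    union {
        const char *str;
        struct { struct value *items; size_t len; } list;
    } u;
} value;

/* Recursive size estimate, analogous in spirit to value_bytes():
 * a list's cost is its own header plus the cost of every element,
 * so deeply nested lists force a full tree walk. */
size_t value_bytes_sketch(const value *v)
{
    size_t total = sizeof(value);
    if (v->type == V_STR) {
        total += strlen(v->u.str) + 1;
    } else {
        size_t i;
        for (i = 0; i < v->u.list.len; i++)
            total += value_bytes_sketch(&v->u.list.items[i]);
    }
    return total;
}
```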

But the real problem is that whenever we modify something that is 
potentially an element of a list, and that modification increases the 
size of the structure, we need to find the lowest-level list that it's 
part of and figure out if that list is now too big.  This is difficult 
and potentially slow.  (For example, if, given the above list in variable 
x, I do x[2][2][2][2][2][2][2]={"", {}}, I need to know that the 
outermost list contains that list and check to see if it's now too big.)
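A rough sketch of why that's slow, under a deliberately simplified model
(a chain of nested lists, with all names invented for illustration): a
mutation at the bottom of the chain forces a re-measure from the root
before any limit can be enforced, so every deep assignment pays a walk
over the whole structure.

```c
#include <stddef.h>

/* Simplified stand-in for a chain of nested lists. */
typedef struct node {
    struct node *child;   /* nested list element, NULL at the bottom */
    size_t own_bytes;     /* bytes attributable to this level */
} node;

/* The total size must be recomputed from the root after any mutation. */
size_t tree_bytes(const node *n)
{
    size_t total = 0;
    for (; n; n = n->child)
        total += n->own_bytes;
    return total;
}

/* Grow the deepest level, then re-check against a root-level ceiling:
 * the check cannot be done locally, because only the outermost list
 * knows the overall size. */
int grow_deepest(node *root, size_t extra, size_t limit)
{
    node *n = root;
    while (n->child)
        n = n->child;
    n->own_bytes += extra;
    if (tree_bytes(root) > limit) {   /* walk from the root per mutation */
        n->own_bytes -= extra;        /* roll back, signal failure */
        return 0;
    }
    return 1;
}
```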

> The real problem is having mymalloc "fail."  Right now, there is
> no provision in the code for having it fail, so you'd have to
> rewrite everything that called *that* to fail and return an
> appropriate error code.
> But it's not inconceivably vast.  There are about 100 places
> where this occurs, though.

mymalloc's not the level to do it at, but you're essentially right.  Too 
many things that shouldn't fail except in dire emergency call mymalloc; 
the place to deal with this is in the string- and list-handling 
routines.  Not too many things actually have the potential to create 
arbitrarily long lists or strings, although in the case of lists, things 
are still rather complicated.
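As a sketch of what handling this in the string routines might look
like (the function name and the limit are made up, not the server's):
the routine itself refuses to build an oversized result and returns a
clean failure that the caller can turn into an error value, instead of
letting the allocation through and having mymalloc be the thing that
fails.

```c
#include <stdlib.h>
#include <string.h>

#define MAX_STR_BYTES 256  /* illustrative ceiling, not the server's */

/* Enforce the limit in the string routine, not in mymalloc.  Returns
 * NULL as a clean, catchable failure when the result would be too big;
 * the caller can then raise E_QUOTA or similar. */
char *checked_concat(const char *a, const char *b)
{
    size_t need = strlen(a) + strlen(b) + 1;
    char *r;
    if (need > MAX_STR_BYTES)
        return NULL;
    r = malloc(need);
    if (!r)
        return NULL;
    strcpy(r, a);
    strcat(r, b);
    return r;
}
```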

I think that the same basic thing can be accomplished more efficiently, 
and more simply, with what I proposed earlier: a limit on the size of 
value that a property can store, and a limit on the total size of the 
variables a task holds when it suspends.  A task that accumulates a very 
large variable will either run out of seconds as the machine swaps 
heavily dealing with it, or will fail to suspend because its execution 
environment is too large.
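A minimal sketch of that suspend-time check, with hypothetical names and
an invented quota (the server has no such function; the per-variable
sizes are assumed to be computed elsewhere): sum the sizes of the task's
variables and refuse to suspend past a ceiling.

```c
#include <stddef.h>

#define SUSPEND_BYTE_LIMIT 4096  /* illustrative quota */

typedef struct {
    size_t var_bytes[8]; /* precomputed size of each variable */
    size_t n_vars;
} task_env;

/* Returns 1 if the task may suspend, 0 if its environment is too big. */
int may_suspend(const task_env *env)
{
    size_t total = 0, i;
    for (i = 0; i < env->n_vars; i++)
        total += env->var_bytes[i];
    return total <= SUSPEND_BYTE_LIMIT;
}
```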

This still doesn't protect against rapid growth of database (and 
therefore process) size through creation of a lot of properties each 
just under the size limit, of course, but I don't think anything 
proposed on this thread does.  Possibly it could be handled with byte 
quotas enforced by bf_whatever wrappers around the appropriate 
built-ins, except that a programmer could just create a lot of 
properties and *then* set them to very large values.

    ResComp Network Support Technician, Bursley Hall
    "Invisibility is in the eye of the beholder."
    Home Page:

