MOO-cows Mailing List Archive


Re: Task RAM-usage limits...



cunkel@us.itd.umich.edu saith:
>Because, like I say above, the values in variables change so often that 
>the overhead is astonishing.  Even if we only take variables, and not 
>other values floating around (only have to deal with a few of the 
>instructions, plus environment initialization) we're doing a lot of slow 
>things a lot.

    I just can't believe this is that inefficient.  Certainly,
keeping track of the amount of memory used by an entire task
is impossible.  However, all the memory management funnels through
the single mymalloc routine.  If you made sure that mymalloc "failed"
gracefully for large allocations when the currently executing task
didn't have wiz perms, then this would catch a lot, but not all, of
the problem code (you could still create, for example, a list
with two elements, each of which was a list of two elements,
each of which...).
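
To make the idea concrete, here's a rough sketch.  The names
current_task_is_wiz() and TASK_ALLOC_LIMIT are ones I'm making up,
and I'm ignoring whatever extra arguments the real mymalloc takes;
this is just the shape of the check:

    #include <stdlib.h>

    /* Hypothetical per-allocation cap for non-wiz tasks; not a
     * real server constant. */
    #define TASK_ALLOC_LIMIT (64 * 1024)

    /* Assumed hook into the task machinery; stand-in name. */
    extern int current_task_is_wiz(void);

    void *
    mymalloc(size_t size)
    {
        /* The single comparison: refuse one big allocation
         * unless the running task has wiz perms. */
        if (size > TASK_ALLOC_LIMIT && !current_task_is_wiz())
            return NULL;    /* the "graceful" failure */

        /* The loophole from above: many small allocations, as in
         * the nested-list trick, each pass this test on their own. */
        return malloc(size);
    }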

I can't see why a single comparison before each malloc is going to
bring the house down.  Unix malloc isn't particularly zippy to begin
with, so one extra test would be lost in the noise.

The real problem is having mymalloc "fail."  Right now, there is
no provision in the code for it to fail, so you'd have to rewrite
everything that calls it to notice the failure and return an
appropriate error code.

But it's not an inconceivably vast job.  There are about 100 call
sites that would need the check, though.
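
Each rewritten call site would look roughly like the sketch below.
Var, E_QUOTA, and alloc_vector are stand-ins for whatever the
surrounding code actually uses; the NULL check is the new part:

    #include <stddef.h>

    typedef int Var;                  /* stand-in for the server's value type */
    enum error { E_NONE = 0, E_QUOTA };   /* stand-in MOO-style error codes */

    extern void *mymalloc(size_t size);

    /* One rewritten call site: instead of assuming the allocation
     * succeeded, check for NULL and pass an error code back up to
     * the interpreter. */
    enum error
    alloc_vector(size_t n, Var **out)
    {
        Var *v = mymalloc(n * sizeof(Var));
        if (v == NULL)
            return E_QUOTA;   /* allocation was refused; raise it */
        *out = v;
        return E_NONE;
    }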

    /t

Tom Ritchford     tom@mvision.com, tom@weirdos.com

Verge's "Little Idiot" -- Music for the mentally peculiar.
1-800-WEIRDOS           http://www.fly.net/~verge

