MOO-cows Mailing List Archive

DB crasher (fwd)





---------- Forwarded message ----------
Date: Tue, 23 Apr 1996 09:52:36 -0700 (PDT)
From: Judy Anderson <yduj@CS.Stanford.EDU>
To: phantom@baymoo.sfsu.edu
Subject: DB crasher

   Date: Mon, 22 Apr 1996 23:36:18 -0700 (PDT)
   From: Rich Connamacher <phantom@baymoo.sfsu.edu>

   (Note that, if there really were a true fork bomb attack on that moo, 
   unless the clock cycles so frequently as to lag the entire moo, the fork 
   bomb could grow so fast that the clock will choke and die trying to 
   keep up.  If they don't believe me, tell them to give me a prog bit on 
   their moo and an okay to go ahead and crash it.  Whoever wrote it 
   must not understand exponential mathematics.)

Fortunately, the scheduler discriminates strongly enough against
players who use huge amounts of CPU time that, as long as you have
enough swap space to hold the initial round of tasks (probably 60,000
can get spawned before the scheduler's discrimination kicks in enough
to give relief), you have plenty of time to deal with it.
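
(For a sense of scale: a bomb that merely doubles itself each time it
runs passes that 60,000-task mark in about sixteen generations, since
2^16 = 65,536; one that forks five copies per run gets there in seven,
since 5^7 = 78,125.)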

Advice to wizards combating forkbombs:

(1) Make yourself two or three other wizard characters.  Your main
wizard will be subject to the scheduler's bias as well.

(2) Make sure you have the LambdaMOO @killquiet version of @kill -- by
not printing out the tasks as they are killed, you can kill many
hundreds more tasks with each @kill command.  (A sketch of what the
quiet variant boils down to follows this list.)

(3) Be aware of the bug in the scheduler that gives you relief from
its bias if you disconnect and reconnect.
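
Roughly, the quiet variant boils down to the following -- a sketch
only, run as a wizard, assuming the usual queued_tasks() entry format
in which the fifth element of each entry is the task's programmer, and
with #1234 standing in for the culprit:

"Kill every queued task owned by the culprit, without printing each one.";
culprit = #1234;
count = 0;
for t in (queued_tasks())
  if (t[5] == culprit)
    kill_task(t[1]);
    count = count + 1;
  endif
endfor
player:tell("Killed ", tostr(count), " of ", tostr(culprit), "'s tasks.");

Like @kill itself, this still has to survive one call to
queued_tasks(); all it saves is the per-task printing.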

So, you use your three wizard connections to figure out which
character has the bazillion forks.  If a call to queued_tasks()
doesn't run out of seconds, you're in good shape.  If it does, set
$server_options.foreground_seconds to 100.  (Remember to change it
back when you're done :-)  Then you can use @forked, which will
presumably have the culprit's forks pretty early in the list, as
they're the most lagged.  Now @killquiet all Culprit.  Thousands of
tasks will die.  You may have to use more than one @killq.  Don't let
the @kill task suspend; once it does, it's hosed and requires a
different wizard (or a fresh disconnected/reconnected wizard) to try
again.  Of course, with your other wizard connections you are
@deprogrammering and @rmverbing the culprit.
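
In LambdaCore-style eval syntax, raising foreground_seconds and then
putting it back looks something like the following.  (This assumes the
property already exists on your $server_options; on servers that cache
the options you may also need to call load_server_options() for the
change to take effect.)

;;$server_options.foreground_seconds = 100; "more seconds per foreground task while you clean up";

;;$server_options.foreground_seconds = 5; "when you're done -- 5 is the usual default";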

Before I figured out the disconnect/reconnect scheduler bug (btw, it's
the reconnect that does it, not the disconnect, I've experimented), I
used to coordinate efforts with lots of wizards.  "Page ho_yan please
type ..." "page heathcliff Please type..." plus of course giving all
my other characters wizbits.  If one of your wizard connections gets
lagged, don't wait for it; use a different one.  I've been lagged for
up to two hours after executing a 10-second queued_tasks() call that
returned 100,000 tasks.  Remember, even though the culprit is subject
to the same scheduler discrimination that you are, he still does get
to run sometimes, and spawn 5000 more tasks each try.  So you are in a
hurry to get it fixed.

If your moo version is so old you don't have
$server_options.foreground_seconds as an option, the following code
will assist.

"For each non-wizard programmer, report how many queued tasks they own.";
"Each fork runs with that player's permissions, so queued_tasks() returns";
"only that player's tasks; see below for how to read the output.";
for x in (players())
  if (x.programmer && !x.wizard)
    fork (0)
      set_task_perms(x);
      player:tell("about to do ", x, " ", x.name);
      player:tell(length(queued_tasks()), " for ", x.name, " ", x);
    endfork
    suspend(0);
  endif
endfor
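
(Easiest is to put this in a throwaway wizard-owned verb; it should
also work joined onto one line after the ;; eval prefix.)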

This tells you the number of queued tasks for every programmer on your
moo.  (Natch, it's spammy.)  The player whose "about to do" line is
followed by an out-of-seconds traceback instead of a count is your
culprit.  Of course, if you're having to do this, you can't use @kill
either, since it iterates over queued_tasks(), and the whole point is
that you couldn't call that without running out of seconds.  That
means you have to crawl through the culprit's @audit hoping to find
the verb in question, then disable it in some way that causes each
existing fork to hit an error and therefore exit.  I have in the past
had to resort to putting an error inside a $string_utils function in
order to stop a forkbomb.  (I can't remember the circumstances.)
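
If you do end up breaking a utility verb out from under the bomb, one
way to do it is sketched below.  It assumes the bomb calls that
utility every time it wakes up; the verb name "some_verb" and the
culprit's object number #1234 are placeholders, and raise() needs a
1.8.x server (on older servers, any statement that causes a run-time
error for the culprit will do).

;;set_verb_code($string_utils, "some_verb", {"caller_perms() == #1234 && raise(E_PERM);", @verb_code($string_utils, "some_verb")});

Each already-queued fork then dies with an uncaught E_PERM the next
time it calls the utility, while everyone else's code is unaffected.
Put the original code back once the queue is clear.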

Obviously you deprogrammer, newt, and maybe redlist while you're doing
all this investigatory work.  LambdaMOO has some facility to restrict
the MOO to only wizard logins; I don't remember the precise details,
it's in $login somewhere.  You can look at the latest core.  As soon
as the forks are all gone, @shutdown to get a checkpoint -- you've
chewed up so much VM that you aren't going to be able to checkpoint
normally.

      Judy Anderson yclept yduJ          'yduJ' rhymes with 'fudge'
 yduJ@cs.stanford.edu (personal mail)   yduJ@harlequin.com (work-related)
	Join the League for Programming Freedom, lpf@uunet.uu.net


