[ale] compiled system calls versus shell scripts

Jeff Hubbs hbbs at comcast.net
Thu Oct 23 12:59:47 EDT 2003


Geoffrey -

I did look over the IBM paper when the /. article came out, and I think
the point of the exercise was to use make's -j facility to kick off
actions in parallel that would otherwise run serially.  That doesn't
mean you can just hand it -j with no argument (the "run amok" switch);
you have to tease out the dependencies and set things up so that nothing
breaks, e.g., so that sshd isn't started before eth0 is up.
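
Roughly, I picture the Makefile-ized init looking something like the
sketch below (the target names and init-script paths are just made up
for illustration, and the recipe lines would have to be tab-indented):

  # Hypothetical init Makefile fragment -- names are illustrative only.
  .PHONY: all eth0 sshd samba nfs

  all: sshd samba nfs

  # The interface has to come up before any network daemon.
  eth0:
          /etc/init.d/network start

  # These three each wait on eth0 but not on one another, so under -j
  # they can start concurrently once the interface is up.
  sshd: eth0
          /etc/init.d/sshd start

  samba: eth0
          /etc/init.d/samba start

  nfs: eth0
          /etc/init.d/nfs start

Kicked off with something like "make -j4 all", sshd, samba, and nfs all
launch as soon as eth0 finishes, but never before it.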

It stands to reason that one way to go about this would be to list out
all the things that neither create nor depend on any dependencies, shoot
them ALL off in parallel, let those chips fall where they may, and let
the normal mechanism handle everything else.  Or, wait until eth0 is up
and then shoot off sundry network services that aren't otherwise
interdependent.
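
In Makefile terms that could be a couple of phony grouping targets
(again, the job names here are invented for the sake of the example):

  # Jobs with no prerequisites at all -- under -j these all fire at once.
  .PHONY: independent net-services
  independent: hwclock keymaps random-seed

  # Network daemons that each wait on eth0 but not on one another.
  net-services: sshd samba nfs ntpd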

I think the speedup comes from the fact that when everything is done
serially, you can't sneak in any computing amongst disk I/O waits, for
instance.  The objective seems to be to let the kernel and its memory
manager make the best of the barrage.

- Jeff

On Thu, 2003-10-23 at 12:36, Geoffrey wrote:
> Bjorn Dittmer-Roche wrote:
> > On Thu, 23 Oct 2003, Geoffrey wrote:
> 
> >>For example, you know networking needs to be up before Samba is started.
> >>  But, startup of samba and nfs could be parallel processes.
> > 
> > 
> > Make is a little smarter than that.  E.g., if you have a complex web of
> > dependencies, rather than just a few things that can happen in parallel,
> > make might do better.
> 
> How so?  You still have to create the makefile with the specified
> dependencies.  How does that differ from building a script with the
> dependency checking built in as well?  What is it that make will do
> differently?
-- 
Jeff Hubbs <hbbs at comcast.net>


