[ale] mosix clusters?

James P. Kinney III jkinney at localnetsolutions.com
Sun Jun 23 22:16:59 EDT 2002


Jeff, you hit the nail on the head regarding uses for a Mosix cluster.
Network latency does impact performance, but it gives you a place to
drop off any box too slow for a workstation and add its compute cycles
to small-run, many-operation serial problems.
A Mosix system would make a good dynamic web server.

When I set up the one I ran at Emory, the nodes only used /tmp. The
master node had the real drive space. The program pieces got farmed out
to the nodes as needed. The /tmp space was only used to store state
information on each node, which allowed the system to be broken apart
for other uses and restarted later. I was gearing up for solid-state
simulations, so each node would handle a small section of 3D space and
pass off the boundary conditions to the adjacent nodes.

On Sun, 2002-06-23 at 19:02, Jeff Hubbs wrote:
> On Sun, 2002-06-23 at 17:51, Christopher Fowler wrote:
> > I would have the master use disks and the slaves share via NFS.  Besides,
> > doesn't Mosix require the use of an API to be taken advantage of?
> > 
> > Chris
> > 
> > On Sun, 2002-06-23 at 17:39, Stephen Turner wrote:
> > > do Mosix cluster nodes share hdd space? As one hd, or do they mirror it
> > > so it's backed up?
> > > 
> 
> I can actually take these questions, kinda... :-)
> 
> Mosix sharing HDD space: Mosix has what it calls the Mosix File System
> (MFS), which makes it possible for every node to see every other node's
> disk space.  I haven't fired this up yet, so I'm not entirely conversant
> with its operation...YET (I am working on a Gentoo/OpenMosix "Node 0"
> right now).
> 
> Having the nodes use NFS is doable, but MFS makes it unnecessary, or so
> I'm starting to understand.  Exception: you want your entire Mosix
> cluster to see filespace elsewhere, like on a file server, if for no
> other reason than so you can get away with smaller hdds on your nodes.
> 
> Need for API:  Chris, you're thinking of Beowulf-class clustering. 
> Mosix' big draw is that it requires little attention in the way of how
> you code or what you run, meaning that you can leverage Mosix for COTS
> software or stuff you write yourself with little or no special coding
> considerations.
> 
> What I would like to be able to use Mosix for someday is a business
> computing situation where the computing operations are generally
> batch-like, i.e., multiple independent jobs that would ordinarily
> execute serially on a single machine.  All such jobs might operate upon
> the same data files or RDBMS.  Under VMS' batch mechanism, you'd
> establish one or more queues and configure them to hold X jobs and
> execute Y at a time; your choice of Y tended to determine how hard you
> ran your machine, which you of course would be balancing off against
> completion time.  If your batch jobs build up faster than the machine
> can finish them, you're not in very good shape.  
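That hold-X/run-Y queue maps onto a plain bounded worker pool. A minimal
sketch in Python (the job script name is hypothetical); since each job is
an ordinary process, on a Mosix cluster the kernel can migrate them to
whichever node has spare cycles, with no special API in the jobs
themselves:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    Y = 4  # jobs allowed to execute at once
    jobs = [["./nightly_report.sh", str(i)] for i in range(20)]  # X held jobs

    # Submit everything; the pool holds the rest until a slot frees up.
    with ThreadPoolExecutor(max_workers=Y) as pool:
        codes = list(pool.map(lambda cmd: subprocess.run(cmd).returncode, jobs))
    print(codes)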
> 
> To me, the great thing about trying to handle a computing problem like
> this with Mosix is that you can implement your batch mechanism (or
> equiv.) on Node 0 and you can add or take away more horsepower as you
> need it or don't need it.  Wait, it gets better.  You can take a pile of
> old computers that some outfit lays out by their dumpster and throw them
> onto your cluster like gerbils on a wheel.  And before you go, hey, what
> good are ten free Pentium 90s when I can get a 900MHz Duron cheap,
> consider that the ten free Pentium 90s have a combined 640-bit-wide
> memory pipe (a 64-bit memory bus apiece).
> Now, if you've got money to spend, then, yes, you can buy some number of
> really fast and compact machines, but you also can amass a lot of power
> just with other people's junk! 
> 
> Of course, this could start to fall apart if your application is so
> network-traffic-heavy that the cluster doesn't scale well (ISA-bus
> Gigabit Ethernet cards are hard to find :-)) .
> 
> - Jeff
-- 
James P. Kinney III   \Changing the mobile computing world/
President and CEO      \          one Linux user         /
Local Net Solutions,LLC \           at a time.          /
770-493-8244             \.___________________________./

GPG ID: 829C6CA7 James P. Kinney III (M.S. Physics)
<jkinney at localnetsolutions.com>
Fingerprint = 3C9E 6366 54FC A3FE BA4D 0659 6190 ADC3 829C 6CA7 




---
This message has been sent through the ALE general discussion list.
See http://www.ale.org/mailing-lists.shtml for more info. Problems should be 
sent to listmaster at ale dot org.





