[ale] Two offices, one data pool

Michael B. Trausch mike at trausch.us
Thu Feb 17 11:53:21 EST 2011


On Thu, 2011-02-17 at 11:11 -0500, Ron Frazier wrote:
> This may be a stupid question, but, if you establish a VPN, couldn't
> the remote office directly access the same database as the home base,
> with all the normal locking mechanisms, etc.

That would be a half-way decent thing to do, with one exception: it's
very bandwidth intensive, and the available upstream bandwidth in the
local office is slightly less than 384 Kbps---in other words, slightly
less than 48 KB/sec.

The local office runs services (including Internet mail and XMPP) that
make it difficult to share the bandwidth for this sort of thing.  Let's
say that someone wants to open a spreadsheet that is 1 MB in the remote
office, and it's stored in the local office (note that I'm using "local"
to refer to Atlanta, and "remote" to refer to the office that is 500
miles away).  It would take them 21 seconds (best-case scenario) to open
the file.  Every time they updated the file, that would trigger
round-trips between the remote and the local offices, as well,
significantly slowing things down.
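The arithmetic above checks out in a couple of lines (the 48 KB/sec
figure is just 384 Kbps divided by 8, ignoring protocol overhead, which
only makes things worse):

```python
# Best-case transfer time over the office uplink.
# Assumption: ~48 KB/s usable upstream (384 Kbps / 8 bits per byte),
# ignoring SMB/TCP overhead.

def transfer_seconds(size_kb, rate_kb_per_s=48.0):
    """Seconds to move size_kb kilobytes at rate_kb_per_s."""
    return size_kb / rate_kb_per_s

# A 1 MB (1024 KB) spreadsheet:
print(round(transfer_seconds(1024)))  # ~21 seconds, best case
```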

> I dealt with a situation like that once while working with Delta Air
> Lines.  We had a remote office with a (very slow) leased line and
> network bridges (more accurately gateways) at each end of the
> connection.  The remote site connected directly to a Clipper database
> just as though they were sitting at headquarters.  All the locking
> stuff worked fine.

The big difference being that instead of a database, we're talking about
remote file access.  Database commands tend to be relatively small, and
they can (usually, with something that is well-designed) provide answers
that are relatively small, too.  In this situation, however, we're
talking about "stupid" (that is, neither "intelligent" nor efficient)
client software.  It assumes that the filesystem is local, and that all
access to the files it is using is inexpensive.

Oh, would that one could have an office suite that worked with files in
tiny itty-bitty chunks, only needing to access the part of the file that
is being displayed and/or modified, and sending changes back and forth
over the wire using some nifty, efficient application-layer protocol.
Would that it were the case...

> They could also access shared word processing documents, etc. just
> like they were at headquarters.  I had to spend a whole day once
> tweaking the gateway not to forward superfluous traffic to the remote
> site because performance was abysmal.

The current infrastructure uses SMB/CIFS in an NT4-style domain, because
Samba 4 had not yet implemented enough functionality to do an Active
Directory-style domain, though that would have been much preferred.  NT4
domains have two hard requirements: IPv4 (for the ability to do
broadcasts when using NetBIOS over IP) and the ability for hosts to be
really flipping chatty with each other.  It's possible to have NT4-style
domains span multiple subnets, but then you have to have WINS servers
that can handle name resolution, and you still have all the other
problems that come with it, including periods of 45 minutes to an hour
where the browse lists are completely unstable after almost any change
on the network.  Very much displeasing.
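For reference, the cross-subnet arrangement comes down to something like
this in smb.conf (the workgroup name and WINS server address here are
hypothetical; Samba 3.x-era syntax):

```ini
# On the machine acting as the WINS server:
[global]
    workgroup = EXAMPLE          ; made-up domain name
    wins support = yes           ; this box answers WINS queries

# On member machines in the other subnet:
[global]
    wins server = 192.0.2.1      ; hypothetical address of the WINS box
    name resolve order = wins lmhosts hosts bcast
```

WINS gets you name resolution across the router, but it does nothing for
the browse-list churn described above.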

I'm evaluating Samba 4 again, and it's looking a _lot_ better than it
did previously.  I still haven't figured out how it handles roaming
profiles or home drives/directories, but from what I've been reading I'm
sure it supports both.
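For comparison, in a Samba 3 NT4-style domain those two features are
configured roughly like this in smb.conf (paths are made up; whether
Samba 4 uses the same directives or stores this in the directory is
exactly what I still have to verify):

```ini
[global]
    logon path = \\%L\profiles\%U   ; roaming profile location
    logon drive = H:                ; drive letter for the home directory
    logon home = \\%L\%U

[profiles]
    path = /srv/samba/profiles      ; hypothetical path
    read only = no

[homes]                             ; auto-maps each user's Unix home
    browseable = no
    read only = no
```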

> Also, I had to store a local copy of my Clipper database app and load
> it from the local hard drive at the remote site rather than retrieving
> it over the leased line when someone started it.  I did similar things
> with the executables for common office applications.  So, the remote
> site started up executables locally, but accessed data files from the
> file share at headquarters.  It never did work great, but it was
> acceptable.  With a VPN, I was thinking you could do something
> similar, as the VPN would act like a bridge.  That would eliminate
> your concurrency problems.  Maybe something like Hamachi might work.
> Just a thought. 

A routed VPN would be a good idea, and in fact I am going to set one up.
But the ultimate goal is to reduce the latency in the common scenario
where people in both offices work with the same files at around the same
time.  There are 200 GB of data spread over approximately 150,000 files;
probably about 40,000 of those files are regularly used.
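To put that figure in perspective, assuming the same ~48 KB/sec uplink
from earlier, a naive full copy of the data set is a non-starter:

```python
# Rough scale check: time to push 200 GB over a ~48 KB/s uplink.
# Assumes the 48 KB/s figure from earlier and ignores all overhead.

SIZE_GB = 200
RATE_KB_PER_S = 48.0

total_kb = SIZE_GB * 1024 * 1024          # GB -> KB
seconds = total_kb / RATE_KB_PER_S
print(round(seconds / 86400, 1))          # days; on the order of 50
```

Hence the emphasis on cutting per-operation traffic rather than shipping
whole files around.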

	--- Mike