[ale] Object Model on Linux

Douglas Todd dwt at atlanta.com
Mon Dec 23 13:08:01 EST 1996


Just my two cents on this subject, for what it's worth. :-)

It's just about impossible to define a standard that everyone
would use to share information between applications. Each application
has its own features and methods for manipulating the data that
it manages. This is the core reason
that applications are written in the first place -- to build a
'better' mousetrap. Better can mean easier to use, fewer resources,
more features, etc., and these are often conflicting goals. XEmacs
is probably the most extensible text editor in the world, but how many
people can really use it to its potential? My guess is that more
people use the simpler vi.

There are quite a few methodologies available, depending on
the problem that users are trying to solve. For example, Microsoft
seems to be concentrating on document processing with MS Office.
That set of problems spans quite a different solution space from,
say, building network routers or data collectors.

In order to build such inter-operability, we should define the problem
that we are trying to solve. If we're looking at network solutions, the
document-processing paradigm is probably not what we want to deal with.
There are differences between sharing information over a 'thin' pipe
and a 'thick' pipe, mainly that processing should be far more
distributed on the 'thin' pipe.

If we are to build sets of applications that can be 'cut and pasted'
together with a visual interface, then each application should
inherit some form of interprocess communication. This interface would
have basic methods that identify the other methods, and the calling
conventions within the application used to manipulate the data model.
It would be nice to find some way that the lowest common denominator
is not always used; for example, a database server may accept binary
streams as well as SQL calls to update its data. A 'smart' client
application can then use the binary calls (far more efficient) to
manipulate the data model. This is particularly important when
networking over a 'thin' pipe.
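
To make the idea concrete, here is a rough sketch in Python (the class,
the method names, and the binary/SQL split are my own illustration, not
an existing interface): an applet advertises its methods and protocols,
and a client negotiates the richest protocol both sides support, falling
back to plain SQL text otherwise.

    # Hypothetical sketch: an applet advertises its methods and protocols,
    # and a client negotiates the most efficient protocol both sides understand.

    class DatabaseApplet:
        """Toy stand-in for a database server applet."""

        def protocols(self):
            # Ordered from most to least efficient.
            return ["binary", "sql-text"]

        def methods(self):
            # Basic methods that identify the other methods and their
            # calling conventions.
            return {"update": {"args": ["table", "rows"],
                               "protocols": self.protocols()}}

        def update(self, table, rows, protocol):
            if protocol == "binary":
                return encode_binary(table, rows)    # compact, efficient
            return encode_sql_text(table, rows)      # lowest common denominator

    def negotiate(client_protocols, server):
        """Pick the first protocol the client prefers that the server offers."""
        offered = server.protocols()
        for proto in client_protocols:
            if proto in offered:
                return proto
        raise RuntimeError("no common protocol")

    def encode_binary(table, rows):
        # Placeholder for a compact binary encoding.
        return b"BIN:" + repr((table, rows)).encode()

    def encode_sql_text(table, rows):
        return "; ".join(f"INSERT INTO {table} VALUES {tuple(r)}" for r in rows)

    if __name__ == "__main__":
        server = DatabaseApplet()
        proto = negotiate(["binary", "sql-text"], server)  # 'smart' client prefers binary
        print(proto, server.update("parts", [(1, "widget")], proto))

A dumb client would pass only ["sql-text"] and still work; the point is
that the negotiation happens once, up front, instead of hard-wiring the
lowest common denominator.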

Users are interested in function, not programs, which is the original
idea behind the Unix utilities: small programs that do one thing well
and can be made to inter-operate. In the Unix world this is done
through shell scripts, using the standard utility suite. There are two
problems with this method: the user needs to understand each of the
various utilities, and the user needs to know how to direct the output
of one utility to the input of another with the scripting language
itself.
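
For concreteness, this is the kind of hand-built plumbing described
above, sketched with Python's subprocess module instead of a shell
script; the particular utilities are just an example.

    # Hand-wired plumbing: pipe the output of one utility into the input
    # of another, the same job a shell pipeline like `ps ax | grep init` does.
    import subprocess

    ps = subprocess.Popen(["ps", "ax"], stdout=subprocess.PIPE)
    grep = subprocess.Popen(["grep", "init"], stdin=ps.stdout,
                            stdout=subprocess.PIPE)
    ps.stdout.close()                      # let ps get SIGPIPE if grep exits first
    print(grep.communicate()[0].decode())  # the user has to know both tools and the glue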

If applications can expose their function, usage, and data model
(input/output), it may be possible to automate the plumbing between the
applets. Rather than the 'file manager' concept, it would be possible
to have a 'communication-manager' or 'process-flow' concept.
Scripting could then be done using icons and visual tools, without
regard to the implementation issues of the various applets. This model
is 'function-centric' rather than 'document-centric'. I/O between
applets is negotiated when the applets (servers) are instantiated, and
is transparent to the user.
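
A minimal sketch of that 'communication-manager' idea, with made-up
applet descriptions (none of this corresponds to an existing toolkit):
each applet declares what it consumes and produces, and the manager
wires compatible applets together automatically.

    # Hypothetical 'communication-manager': applets declare the data types
    # they consume and produce, and the manager builds the plumbing between them.

    class Applet:
        def __init__(self, name, consumes, produces, run):
            self.name = name
            self.consumes = consumes   # data type accepted as input (or None)
            self.produces = produces   # data type emitted as output (or None)
            self.run = run             # callable implementing the applet's function

    def wire(applets):
        """Connect each applet's output to any applet that consumes that type."""
        flow = []
        for src in applets:
            for dst in applets:
                if src.produces is not None and src.produces == dst.consumes:
                    flow.append((src, dst))
        return flow

    def execute(flow, seed):
        data = seed
        for src, dst in flow:
            data = dst.run(src.run(data))
        return data

    if __name__ == "__main__":
        reader = Applet("reader", consumes=None, produces="text",
                        run=lambda _: "raw log lines")
        counter = Applet("counter", consumes="text", produces="number",
                         run=lambda text: len(text.split()))
        print(execute(wire([reader, counter]), None))   # prints 3

The user would see icons for 'reader' and 'counter' and draw a line
between them; the type matching and data transport would be handled by
the manager.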

Applets may want to export code as well. If a database server is
connected to an application over a thin network (e.g., PPP), then it
may want to export code to the applet that is inserting data over the
network. This code would be used by the 'insert' applet to format and
compress the data before it is transported over the network.
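
A rough illustration of that 'exported code' idea (the names are
invented, and a real system would also have to worry about trusting and
versioning the exported code): the server hands the insert applet a
small codec to apply before putting rows on the wire.

    # Sketch: the server 'exports' a codec that the insert applet applies
    # before sending rows over a thin (e.g., PPP) link.
    import json, zlib

    def server_exported_codec(rows):
        """Routine supplied by the server: format the rows, then compress them."""
        formatted = json.dumps(rows).encode("utf-8")   # server-chosen wire format
        return zlib.compress(formatted, level=9)       # squeeze hard for the thin pipe

    def insert_applet(rows, codec, send):
        """The client-side insert applet just applies whatever codec it was given."""
        send(codec(rows))

    if __name__ == "__main__":
        rows = [(1, "widget"), (2, "gadget")]
        insert_applet(rows, server_exported_codec,
                      send=lambda payload: print(f"{len(payload)} bytes on the wire"))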


-- 
dwt at atlanta.com
doug.todd at sciatl.com





