
Re: Multilines necessary?



> On Tue, 21 Oct 1997, Dave Kormann wrote:
> > multilines are a requirement (i think they pretty much HAVE to be)
> 
> Why is that?  Jupiter got along just fine without any multiline messages 
> at all.

I don't think it's necessary to implement multilines if you don't
use any multiline messages.  No endpoint should be sending you a
multiline message if you haven't signalled that you have a package
that understands it.  Maybe we should (ta da) say something about
this in the spec.

Of course, if you want people to be able to plug in code for packages
that MAY include multiline messages, then your MCP implementation
should implement them.
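For concreteness, here's a rough sketch (in Python) of what that framing
might look like under the MCP 2.1 draft syntax: simple values go inline on
the message line, and only values with embedded newlines use the multiline
form with a data tag.  The message name, argument names, and tag value
below are invented for illustration.

```python
def format_message(name, args, data_tag="24692"):
    """Render an MCP message as a list of wire lines, using multiline
    framing only for values that actually contain newlines."""
    parts = []
    multi = {}
    for key, val in args.items():
        if "\n" in val:
            parts.append(key + '*: ""')        # flag the key as multiline
            multi[key] = val.split("\n")
        else:
            parts.append('%s: "%s"' % (key, val))
    if multi:
        parts.append("_data-tag: " + data_tag)
    lines = ["#$#%s %s" % (name, " ".join(parts))]
    for key, vals in multi.items():
        for v in vals:
            lines.append("#$#* %s %s: %s" % (data_tag, key, v))
    if multi:
        lines.append("#$#: " + data_tag)       # terminates the data tag
    return lines
```

A package that never negotiates multiline support would simply never see
the `key*` / `#$#*` form on the wire.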

> I think the problem here is that MCP is confusing two separate issues: 
> multiplexing and type-encoding.  Multiplexing is not necessary to encode 
> newline escape sequences, and in fact is not always the most efficient 
> way of doing so.

Fair enough.  I think the original motivation for multilines was to let you
ship a large chunk of data without consing it all up into a string or
requiring it to be atomic in the byte stream.  You're probably right that
it should be possible to send large data without implying newlines, and
encoding newlines should be done independently of that mechanism.  It
just seemed to work out nicely.

Note that all current implementations that I'm aware of _do_ require
all the data to be available at the time that the message is begun.  One
facet of the design is that it allows implementations to be written that
never build up the whole data structure, but rather stream data through
as it becomes available.  That would be nice for some applications.
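A streaming implementation along those lines might look like the sketch
below: a generator that hands each continuation line to the application as
it arrives, rather than building the whole value first.  The data tag and
key names are invented; only the `#$#*` / `#$#:` framing follows the draft
syntax.

```python
def stream_multiline(lines, data_tag):
    """Yield (key, text) for each continuation line of `data_tag` as it
    arrives, never accumulating the whole value in memory."""
    prefix = "#$#* %s " % data_tag
    end = "#$#: %s" % data_tag
    for line in lines:
        if line.rstrip("\r\n") == end:
            return                              # message complete
        if line.startswith(prefix):
            key, _, text = line[len(prefix):].partition(": ")
            yield key, text.rstrip("\r\n")
```

Because it's a generator, the caller can process each line (say, append it
to a file) the moment it comes off the socket.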

> The assumption seems to be that multiplexing is the only 
> (or best) way to encode lists of strings.

I don't think we assumed that.  After all, you're free to encode anything
you want in an MCP value (single- or multiline); we've used s-expressions,
for example, to send lists of numbers (1 2 3).  The big mistake, perhaps,
is in the other direction:  Assuming that it's right to divide up large
data into meaningful segments.
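For instance, a receiver could unpack an s-expression value like the one
above with something as simple as this (a toy parser, flat integer lists
only; real s-expression data would need more care):

```python
def parse_sexp_list(value):
    """Parse a flat s-expression of integers, e.g. "(1 2 3)", carried
    inside a single MCP value."""
    body = value.strip()
    if not (body.startswith("(") and body.endswith(")")):
        raise ValueError("not a list: %r" % value)
    return [int(tok) for tok in body[1:-1].split()]
```

The point being that the encoding of structure inside a value is entirely
up to the package, independent of the multiline mechanism.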

Hm.