
Re: Greetings and comments on MCP 2.1

Erik Ostrom:
> > #$#mcp version=2.1 client=Supernova client-version=1.6beta2
> >
> > In particular, we anticipate many different client types (with differing 
> > feature support) which might all support MCP 2.1; e.g.
> I'm pretty strongly opposed to this, if I understand it correctly.  Our design 
> allows for negotiation strictly on a feature-by-feature basis; I don't think 
> that the server has any need to know what the name of the client I'm using is, 
> or that it should be making decisions based on that information alone.  The 
> MCP implementations I've been involved with have among their goals:
>   * Make it easy to add functionality to the client/server, or remove it.
>   * Interoperate with other clients/servers that support some of the same
>     functionality.
> Requiring the server to know what a Supernova is or what features its version 
> 1.6beta 2 supports seems like a very bad idea.

The experience from HTTP content negotiation is that using the
user-agent information (the name of the client, and perhaps its
version) is a terrible way to discern capabilities.  The only reason
UA is used for capability checks is that HTTP browser vendors
refused to fix their browsers early in the protocol's life to permit
meaningful content negotiation.  Content negotiation was always an
afterthought to the core HTTP protocol, and this allowed the UA
information to acquire greater value to developers.

UA is frequently used to detect broken implementations of otherwise
supposedly compliant behaviour.  Witness the infamous Mozilla/2.0 hack
in Apache, which turns off HTTP keep-alive behaviour for that broken
range of clients.
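
A minimal sketch of why that style of hack decays badly.  The names here
(KEEPALIVE_BLACKLIST, the feature-set argument) are illustrative only, not
taken from Apache or from any MCP draft; the point is the contrast between
pattern-matching on a UA string and checking a negotiated feature set.

```python
import re

# UA-based approach: a growing blacklist of known-broken clients.
KEEPALIVE_BLACKLIST = [re.compile(r"^Mozilla/2")]

def keepalive_ok_by_ua(user_agent):
    # Every newly discovered broken client means another pattern here,
    # and lookalike UA strings get misclassified forever after.
    return not any(p.match(user_agent) for p in KEEPALIVE_BLACKLIST)

# Feature-based approach: the peer states what it actually supports.
def keepalive_ok_by_feature(negotiated_features):
    return "keep-alive" in negotiated_features

print(keepalive_ok_by_ua("Mozilla/2.0 (X11)"))          # False: blacklisted
print(keepalive_ok_by_ua("Mozilla/20.0"))               # False: innocent lookalike caught too
print(keepalive_ok_by_feature({"keep-alive", "gzip"}))  # True: stated directly
```

The second call shows the failure mode: a hypothetical future client whose
name merely resembles the broken one inherits the workaround.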

In all likelihood a proliferation of allegedly compliant browser/server
implementations will be closely followed by a rash of hacks and
kludges, probably keyed off the user-agent value, to counteract broken
client/server behaviour.

In addition, the dynamic of emerging technologies is essentially a
marketing and statistics-gathering exercise once the initial product is
released.  In this case the elements of a standard protocol which
support record keeping (user agents, client software version numbers)
will receive much attention, further blurring their relevance to the job
of understanding client capabilities.

I think it's important to be aware of these aspects of a protocol
before the implementation becomes cast in stone.

> > (5) Client/server protocol negotiation.  In your proposal this is a 
> > server-driven process (that is, the server kicks it off; in fact, it 
> > sends the first OOB message).  I definitely don't like this; the reasons 
> > are spelled out in our proposal, but we strongly feel that the 
> > negotiation process must be client-driven.  The client should send the 
> > first OOB message, and the client should be the one who makes the final 
> > decisions about what protocol is used.
> I don't remember why we did this; in any case, I don't feel strongly about it, 
> so if no one else does, we might as well switch it.

I think a client-initiated negotiation sequence is best.  The question
boils down to

	"how does this client know that this server is able to understand
	MCP messages before the client has sent the first message?"

HTTP clients solve this via the 'http://' scheme at the start of a URL.
tkMOO-light can start the auth-key generation sequence upon connection
because the user can set a 'use xmcp' flag in the preferences editor
(light assumes that not all sites it visits know or care about xmcp).

I detest the '#$#mcp foo: bar' message that shows up on connection to
some muds, though this might also be moderated by user account options
set in the server.  In MOO terms a user may, for example, connect using
plain telnet, set the '@client-o +mcp' option, then reconnect using a
smart client.

Client-initiated negotiation at least permits the user to connect
'first-time' to a site and immediately start speaking MCP.  But I'm
splitting hairs now.

Someone has to go first; I think it just makes more sense if it's the
active partner in the connection, the entity that initiated the
connection, e.g. the user's client.
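
To make the "client goes first" shape concrete, here's an illustrative
sketch.  The greeting format follows the draft line quoted earlier in this
thread; the function names and the exact reply format are assumptions of
mine, not anything from a spec.  A server that doesn't speak MCP simply
sees one odd line of text and ignores it.

```python
def client_greeting(version="2.1"):
    # The client announces support immediately on connect, so it can
    # start speaking MCP to a site it has never visited before.
    return f"#$#mcp version={version}"

def server_reply(greeting, server_version="2.1"):
    # A non-MCP server never emits this; an MCP server answers in kind.
    if greeting.startswith("#$#mcp"):
        return f"#$#mcp version={server_version}"
    return None

print(server_reply(client_greeting()))  # -> #$#mcp version=2.1
print(server_reply("look"))             # -> None (ordinary mud input)
```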

> > When the mcp negotiation process is finished, the client and server enter 
> > feature negotiation, and this process is server driven; after all, if 
> > the server does not contain VRML, then it doesn't matter whether the 
> > client supports it or not.  However, if the server does do VRML, then 
> > it needs to determine whether the client supports this or not before 
> > using it.
> This is true, but only because VRML is (in your model) inherently 
> server-driven in practice as well as negotiation.  On the other hand, imagine 
> a file-upload protocol.  I could say "If the client does not upload files, 
> then it doesn't matter whether the server supports it or not.  However, if the 
> client does do file upload, then it needs to determine whether the server 
> supports this or not before using it."
> I think both of these statements are the wrong way to look at it.  In our 
> spec, feature negotiation is neither server-driven nor client-driven; both 
> parties simply send their capabilities.  The goal was to have the negotiation 
> over with as quickly as possible.  If both sides send feature sets as soon as 
> they establish MCP contact, then each side knows that a supported feature is 
> present as soon as it receives one negotiate message from the other side.  In 
> a server-driven process, the server must wait for a full round trip before 
> acting.

A server-driven approach (or a client-driven approach, for that matter)
has the benefit of reducing the amount of traffic exchanged before the
session can begin passing normal messages.  Suppose the server has 10
features and the client has 10 also, and that the two have only 4
features in common.  A server-driven approach:

	S	->	C		10 features
	S	<-	C		4 overlapping features
					14 chunks of data

A non-server-driven approach:

	S	->	C		10 features
	S	<-	C		10 features
					20 chunks of data

The server-driven approach costs less bandwidth, but we're not talking
about gigabytes here, so what does it matter if a more costly solution
simplifies the implementation by allowing client/server vendors to use
exactly the same code to generate and understand negotiation sequences?
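
The message counts above can be checked with a quick worked example, under
the same assumptions: 10 server features, 10 client features, 4 in common
(the feature names here are made up for illustration).

```python
# Four shared features plus six unique to each side.
server_features = {f"srv{i}" for i in range(6)} | {"a", "b", "c", "d"}
client_features = {f"cli{i}" for i in range(6)} | {"a", "b", "c", "d"}

# Server-driven: server sends its full list, client replies with the overlap.
server_driven = len(server_features) + len(server_features & client_features)

# Symmetric: both sides send their full lists regardless.
symmetric = len(server_features) + len(client_features)

print(server_driven)  # 10 + 4  = 14 chunks of data
print(symmetric)      # 10 + 10 = 20 chunks of data
```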

When (if) clients begin supporting peer-to-peer MCP, bypassing the
central server, the case for carrying two different capability-negotiation
mechanisms in the client source becomes even less obvious.  Clients end up
heavier (by a few bytes, I guess) and there's twice as much codebase to go
wrong in implementations.

But if I had to choose, it'd be for the sake of aesthetics.  I prefer the
non-server-driven approach.
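
The appeal of the symmetric scheme is that both peers can run literally the
same negotiation code.  A hypothetical sketch (the feature names are
invented for illustration):

```python
def negotiate(my_features, their_announcement):
    # Each side announces everything it supports, then keeps the overlap.
    # Identical code serves as both the client and the server half.
    return my_features & their_announcement

client = {"vrml", "file-upload", "userlist"}
server = {"vrml", "userlist", "local-edit"}

# Both sides call the same function with the roles swapped and agree.
print(negotiate(client, server))  # the shared features
assert negotiate(client, server) == negotiate(server, client)
```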

> > Anyway, I guess that's more than enough for now.  I hope that we can all 
> > agree on a single, uniform base from which to build our individual client 
> > protocols, and I very much welcome further discussion on all these ideas.
> Sounds good.  It'll be interesting to see where the other participants in the 
> discussion so far disagree with my responses.
> --Erik

You know, I hadn't the faintest idea that Supernova was on the way.
Where have I been all year!?


Andrew.Wilson@cm.cs.ac.uk          http://www.cs.cf.ac.uk/User/Andrew.Wilson/