Re: Greetings and comments on MCP 2.1
Anyone else feeling swamped by the deluge of MCP discussion? At least
I'm only 9 days behind... :-)
On Fri, 8 Aug 1997, Erik Ostrom wrote:
> I've got a related problem for you. Suppose I'm writing my MCP-enabled
> client. Suppose, further, that there are some servers that don't want you to
> put up a conventional "MUD" window at all--they want to start with a dialog
> box, or have a special protocol for creating chat windows, or they're just
> tic-tac-toe servers so why would you have a chat window?
Jupiter handled this in the following way: widgets (GUI components) had
various properties (height, width, font, etc.). Some properties applied
only to particular widgets; two that applied only to text-edits (cf.
Java's TextArea) were MainInput and MainOutput. A text-edit designated
MainInput would direct any text entered into it to the server (as
in-band data); similarly, a MainOutput text-edit would display any
in-band text received from the server. (Jupiter was primarily a UNIX
program, and stdout also served both these functions, so it was possible
to run Jupiter through a script or a UNIX pipe.)
So that is one possible solution.
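That routing idea can be sketched roughly as follows; the property names
MainInput/MainOutput come from the Jupiter description above, but the
classes and methods here are invented for illustration, not Jupiter's
actual API:

```python
# Sketch of Jupiter-style I/O routing: widget properties decide which
# text-edit carries in-band traffic to and from the server.
# All classes and methods here are illustrative, not Jupiter's real API.

class TextEdit:
    def __init__(self, name, main_input=False, main_output=False):
        self.name = name
        self.main_input = main_input    # text typed here goes to the server
        self.main_output = main_output  # in-band server text is shown here
        self.lines = []

    def display(self, text):
        self.lines.append(text)

class Client:
    def __init__(self, widgets, send_to_server):
        self.widgets = widgets
        self.send_to_server = send_to_server

    def on_user_typed(self, widget, text):
        # Only a MainInput text-edit forwards typed text as in-band data.
        if widget.main_input:
            self.send_to_server(text)

    def on_server_text(self, text):
        # In-band text from the server lands in the MainOutput text-edit.
        for w in self.widgets:
            if w.main_output:
                w.display(text)

sent = []
chat = TextEdit("chat", main_input=True, main_output=True)
log = TextEdit("log")
client = Client([chat, log], sent.append)
client.on_user_typed(chat, "say hello")
client.on_server_text('You say, "hello"')
print(sent)        # ['say hello']
print(chat.lines)  # ['You say, "hello"']
```

The point is that nothing forces a conventional MUD window: a
tic-tac-toe server could simply never designate a MainOutput widget.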
> Suppose, finally,
> that I also want to be able to connect to servers that mix MCP messages (such
> as #$#edit) with ordinary MUD interaction, as well as plain old dumb-as-dirt
> TinyMUD servers.
[server options vary per listener]
> It's embarrassing that I don't know the answer to this, but: Did Pavel fix
Yes. the_listener.server_options is the object queried for things like
the welcome message, connection/disconnect messages, etc., for
connections through the_listener.
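A toy model of that per-listener lookup (this is not MOO code;
everything except the name server_options is invented for illustration):

```python
# Toy model of per-listener server options: each listening point carries
# its own options object, consulted for welcome/disconnect text and the
# like. Only the name "server_options" comes from the message above.

DEFAULTS = {"welcome": "Welcome.", "disconnect": "Goodbye."}

class Listener:
    def __init__(self, port, server_options=None):
        self.port = port
        self.server_options = server_options or {}

    def option(self, name):
        # Fall back to server-wide defaults when a listener doesn't override.
        return self.server_options.get(name, DEFAULTS[name])

plain = Listener(7777)
fancy = Listener(8888, {"welcome": "** MCP-aware clients welcome here **"})
print(plain.option("welcome"))  # Welcome.
print(fancy.option("welcome"))  # ** MCP-aware clients welcome here **
```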
> [MCP] doesn't speak to the question "what would you PREFER to do?" Again,
> this is something that we've had no opportunity to get experience
> with; the question "whose audio protocol would you like to speak"
> hasn't come up in the absence of even a single audio protocol to
> decide between.
> I guess my suspicion is that, in most situations, this question gets answered
> implicitly. For example, having established that both client and server are
> capable of receiving/sending both RealAudio and StreamWorks audio, the
> question is resolved by the fact that the server sends RealAudio messages and
> not StreamWorks messages. For any interaction that is initiated by one side
> or the other, the initiator gets to decide what protocol is used. And, again
> for the most part, I think as long as both parties speak both protocols, this
> is fine.
It's more than just a matter of file formats; it can also mean feature
sets more generally. After all, one may have:

                  / generic set of \
                  \ audio features /
                   /             \
       / Somecool \               / An environmental \
       \ audioFX  /               \ sound synth      /

Both may work as audio components, supporting some basic set of audio
features (play this sound, record that sound). But Somecool audioFX
might have various distortion capabilities, while An environmental sound
synth might be able to synthesize environmental effects (doors closing,
and so on).
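That split between a shared base feature set and component-specific
extras can be sketched like this (all names, including the two
components above, are purely illustrative):

```python
# Sketch: two components share a generic audio interface but expose
# different extra feature sets; a peer can query which extras exist.
# All class and feature names are illustrative.

class AudioComponent:
    FEATURES = {"play", "record"}      # the generic audio feature set

    def features(self):
        return self.FEATURES

class SomecoolAudioFX(AudioComponent):
    FEATURES = AudioComponent.FEATURES | {"distortion"}

class EnvironmentalSynth(AudioComponent):
    FEATURES = AudioComponent.FEATURES | {"synth-door-close", "synth-wind"}

fx, env = SomecoolAudioFX(), EnvironmentalSynth()
# Both satisfy the generic contract...
print(fx.features() >= {"play", "record"})   # True
print(env.features() >= {"play", "record"})  # True
# ...but only one offers distortion.
print("distortion" in fx.features())         # True
print("distortion" in env.features())        # False
```

So "which protocol do you prefer" is really "which feature set do you
prefer," and the answer can differ per interaction.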
When our client doesn't have some capability that the server wants to
use, it will query the server as to where it can find that ability,
download it, and install it (as allowed by the user and other
constraints). So for us, the ability to dynamically add/remove features,
and to indicate which features are "preferred" over others, is essential
to the protocol. Of course, I don't think this needs to be in MCP at
all, but it does form part of the feature- and protocol-negotiation
process. That is, if the client contacts the server and the server
responds, "Yeah, I can speak MCP 2.1, but you know the current version
is actually 3.0, why don't you upgrade?", then the ability to carry on
that kind of interaction is part of our negotiation process.
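One way to sketch that negotiation (to be clear, this is not the MCP 2.1
handshake itself; the function and tuple format are assumptions for
illustration): each side advertises a supported version range, and the
parties settle on the highest version in the overlap.

```python
# Sketch of version-range negotiation: each side advertises (min, max),
# and the parties settle on the highest mutually supported version.
# The (min, max) tuple format is an assumption, not MCP 2.1 syntax.

def negotiate(client_range, server_range):
    """Return the highest version both ranges support, or None."""
    low = max(client_range[0], server_range[0])
    high = min(client_range[1], server_range[1])
    return high if low <= high else None

# Client speaks only 2.1; server speaks 2.1 through 3.0: they meet at
# 2.1, and the server may still suggest upgrading to 3.0 out of band.
print(negotiate((2.1, 2.1), (2.1, 3.0)))  # 2.1
print(negotiate((1.0, 1.0), (2.1, 3.0)))  # None -- no common version
```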