- From: Len Bullard <firstname.lastname@example.org>
- To: "W. Eliot Kimber" <email@example.com>
- Date: Tue, 08 Feb 2000 21:36:18 -0600
This will get tedious. I apologize, but I think we
have to tear this down to atoms to get back to the
original queries about the applicability of HyTime and Groves,
and before that, why the W3C specs don't seem to cohere.
> Len: It has an abstract model: roughly, the InfoSet.
> Eliot: Yes, it has an abstract model, but what is the abstract model that
> underlies the XML abstract model? Within the infoset (or the SGML
> property set), "element" is a specialization of "node". It is "node"
> that is the base underlying abstract data model from which the
> specialized types "element", "attribute", "data character", etc. are
> derived.
(ELEMENT | ATTRIBUTE | DATA CHARACTER) IS_A NODE
OK. You have three names, and you have named them.
So, is the claim then that the XML <!ELEMENT> IS_A InfoSet Element
is not definitionally complete? Is this yourNames vs. theirNames,
or is there a deeper issue here?
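The IS_A claim above can be sketched in a few lines of code. This is
only an illustration of the specialization relationship, using my own
class names, not the normative names from the SGML property set or the
InfoSet:

```python
class Node:
    """The generic base: a node is a bag of named properties."""
    def __init__(self, **properties):
        self.properties = dict(properties)

class Element(Node):
    def __init__(self, gi, children=()):
        super().__init__(gi=gi, children=list(children))

class Attribute(Node):
    def __init__(self, name, value):
        super().__init__(name=name, value=value)

class DataCharacter(Node):
    def __init__(self, char):
        super().__init__(char=char)

# (ELEMENT | ATTRIBUTE | DATA CHARACTER) IS_A NODE
assert all(issubclass(t, Node)
           for t in (Element, Attribute, DataCharacter))
```

The question stands regardless of the sketch: is naming the three
specializations enough, or does the base type itself need a complete
definition first?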
> Without this completely generic, universal, base, there is no
> way to meaningfully compare different data models to define, for
> example, how to map from one to other, because they are not defined in
> terms of a common definitional framework.
My problem here is that we seem to be in an MMTT trap. That is,
I can point to at least four other languages that claim the
*name* "node". The trick is to prove that what each calls a node
is the same.
As you say, "not defined in terms of a common definitional framework."
What common definitions? Are these common definitions or
> > If you had replied, "groves could be used to make that abstraction
> > more useful" I would have thought groves more interesting for the
> > problem: a meta contract for multiple encodings, perhaps more
> > descriptive, perhaps as an adjunct to the abstract IDL.
> I thought that's what I said. Let me say it explicitly: groves could be
> used to make that abstraction more useful.
Thank you. How? By providing "a common definitional framework"?
We must establish the requirements for the "common definitions".
OTW, we risk the descent into MMTTHell (reaching for heaven,
we open the gates of perdition) or we specify a non-closing task.
Referring back to an earlier email in this multiThread, projects need
a definition of "done". Posit: if we can show that three existing
metalanguages can be rigorously and completely specified with
groves, we are done. (Want to add to that, anyone? Requirements
> > > It's too bad that we didn't appreciate the existence
> > > or applicability of EXPRESS at the time, because if we had we very well
> > > might have used it.
> > That strikes me as odd. We certainly did know about it.
> *I* didn't know about it. James didn't know about it (or if he did,
> didn't mention it).
Then do your homework next time and tell James to do his. OTW, how
can we ask the W3C to stop reinventing the wheel when one can show
by precedent that we do the same? :-) All of the CALS community and
most of the serious SGML community did know of it, because they
competed for the same funding when the PDES community proposed a
generic document model. Two
representatives of the SGML community went to the first meeting to
propose that SGML be used instead. The PDES working group was quite far
along at that juncture. Fortunately, they were not too enamored of
what they had and other competitors (notably, compound document
architectures) were considered more credible, so they were listening.
> The original "STEP and SGML" work was about storing
> SGML strings as values of EXPRESS entity attributes, not about using
> EXPRESS to model SGML.
That was the first step taken at the meeting. Harmonization of the
models wasn't possible. There was no "common definitional framework"
and the jockeying in the room for "who owns the parse" was fairly
serious. Storing the SGML string as an entity attribute was a
compromise. Most attendees couldn't go further than that. We
spent most of that meeting making drawings of our respective
declarative techniques and trying to understand each other.
> With Yuri Rubinsky's untimely death, the original
> driving force behind the effort died.
Actually, the funding went away. Of the two members, Yuri
had some control over his. As the other SGML member there,
I had to accept the customer's decision (USAMICOM) that PDES
was of no immediate interest with regard to documentation,
because in the near term no viable technologies were
emerging, and local systems such as IADS had proven that
markup using a combination of fixed tag sets and stylesheets
was adequate to produce IETMs.
> It wasn't until 1998 that Daniel
> Rivers-Moore resurrected the effort and convinced me to participate.
Good. It is worthy to do, IMO, and always was.
> > Is a grove a means to standardize? Is it better and why? For what?
> > In 50 words or less.
> Groves, by providing a generic, basic, universal abstract data model
> provide a formal basis for defining data models for specific data types,
> e.g., XML, VRML, relational tables, etc. This provides a basis for
> standardizing abstract data models and enables the application of
> generic processing to data in a provable, testable, way.
Good opening definition of some requirements for what Groves
must be proven to provide. Keep it where we can revisit it.
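One way to make that definition concrete: once two different data
models are expressed over the same generic node base, a single generic
traversal serves both. This is a minimal sketch under my own assumed
representation (a node as a kind, a property dict, and children), not
the grove architecture itself:

```python
def node(kind, props=None, children=()):
    """A generic node: every specialized type reduces to this shape."""
    return {"kind": kind, "props": props or {}, "children": list(children)}

def walk(n):
    """Generic processing: visit every node, whatever the source model."""
    yield n
    for c in n["children"]:
        yield from walk(c)

# Two different "data models" expressed over the same base:
xml_grove = node("element", {"gi": "doc"},
                 [node("element", {"gi": "p"},
                       [node("data", {"char": "x"})])])
table_grove = node("table", {"name": "parts"},
                   [node("row", {}, [node("cell", {"value": 42})])])

# The same generic count works on both encodings.
print(sum(1 for _ in walk(xml_grove)))    # 3
print(sum(1 for _ in walk(table_grove)))  # 3
```

The point is the claim in the quoted definition: with a common base,
comparing or mapping between models becomes a testable exercise rather
than an argument over names.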
We have to show the usefulness, the code-worthiness, of defining
such standards. That is the goal. Otherwise, precisely as
Steven asserts, we base our technology and create our information
on the whims of consortia and powerful companies. This is not
to assert conspiracy. Where two meet to agree, they conspire.
However, where they express that agreement in terms that all
understand, they create standards. HyTime is not understood.
Therefore, it is largely unused.
That can be changed. Next, the VRML model for comparison
to answer Didier.