Subject: Re: Topic Maps on SQL
From: "W. E. Perry" <wperry@fiduciary.com>
To: xml-dev@ic.ac.uk
Date: Wed, 25 Nov 1998 02:23:46 -0500

"W. Eliot Kimber" wrote:

> At 03:00 PM 11/24/98 -0800, Tim Bray wrote:
> >At 10:40 PM 11/21/98 -0600, len bullard wrote:
> >>Steve brings up the point that I do wish would be looked at
> >>seriously by other language communities:  the potential of
> >>using property set/grove concepts to create information
> >>standards that are independent of lexical/syntax representation
> >>and implementation.
> >
> >You know, this goes straight to the core of a deep issue.  Where
> >I have often felt out of sync with the grove/property-set evangelists
> >is that I perceive syntax as fundamental and any particular
> >data model as ephemeral.  I guess I must admit that I don't believe in
> >"information standards that are independent of lexical/syntax
> >representation".

[WEK's finely presented argument snipped, up to the following conclusion:]

> While I have no great faith in standardized APIs, I do have faith in
> standardized data models.  But given a standardized data model, it's much
> more likely that a standardized, or at least conventionalized, API for that
> data model will appear. And even if it doesn't, it's much easier to adapt
> code to different API views of the same data model than it is to different
> data models. Thus, even though, for example, SP, GroveMinder, and the DOM
> all provide different APIs to access XML groves, it's easy for me to map
> all of them into PHyLIS' grove API because the underlying data model is the
> same.  My life would be even easier if they all used the same API, but not
> *that much* easier, because the cost of managing the API mappings relative
> to the total cost of the system is small.
>
> Thus, I think that there is lots of value in standardized data models, even in
> the absence of standardized APIs.  I think the DOM is useful and good, but
> it's not sufficient because it represents a particular set of optimization
> and API design choices. By definition, it can never be complete or
> completely satisfactory for all tasks for which we might need an API to XML
> documents.  So we should *never* expect to have 100% standardization of
> APIs even when we do have standard data models.
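
[To make the mapping concrete: below is a rough Python sketch of one
grove-style node interface with adapters over two different API flavours.
The GroveNode interface and the adapter names are invented for illustration;
they are not the actual SP, GroveMinder, DOM, or PHyLIS interfaces.]

    from abc import ABC, abstractmethod

    class GroveNode(ABC):
        """The one data model every adapter maps into."""
        @abstractmethod
        def name(self) -> str: ...
        @abstractmethod
        def children(self) -> list["GroveNode"]: ...

    class DomStyleAdapter(GroveNode):
        """Wraps a DOM-flavoured node exposing nodeName and childNodes."""
        def __init__(self, node):
            self._node = node
        def name(self) -> str:
            return self._node.nodeName
        def children(self) -> list["GroveNode"]:
            return [DomStyleAdapter(c) for c in self._node.childNodes]

    class TupleStyleAdapter(GroveNode):
        """Wraps a hypothetical API that hands back (tag, children) tuples."""
        def __init__(self, tree):
            self._tag, self._kids = tree
        def name(self) -> str:
            return self._tag
        def children(self) -> list["GroveNode"]:
            return [TupleStyleAdapter(k) for k in self._kids]

    def walk(node: GroveNode, depth: int = 0) -> None:
        """Client code sees only the common model, never the vendor API."""
        print("  " * depth + node.name())
        for child in node.children():
            walk(child, depth + 1)

    walk(TupleStyleAdapter(("doc", [("title", []), ("para", [])])))

The point is exactly WEK's: the adapter mappings are cheap because the model
beneath them does not move.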

I think that WEK stops just a hair's breadth short of the necessary conclusion:
each of us, as a user and a comprehender of data, has constructed a uniquely
personal data model and might benefit from accurately expressing that model in
software. At each different computer we use, our personal aggregate data model
constrains the data domain of the system itself:  the data available for
intelligent manipulation are the intersection of data mechanically available to
the system and data structures--culminating in an ontology of
information--understandable to the user. Some significant portion of that data
will depend on distant sources and may usefully be modeled through standard API
mappings. Within a user's local community--corporation, university, political
party, whatever--some portion of that incoming data will be overlaid or
restructured by house standards, which may also usefully be expressed in standard
APIs.

Separate from this more or less hierarchical redefinition, individual users
also understand data in the light of philosophies, theories, or business
practices to which they adhere but the colleague beside them may not. These
theories too might usefully be modeled in standard APIs--even sold as such by the
gurus who formulate them. Likewise the capability of a particular computer to
handle a particular class of data might be expressed through a similar API.
Limitations of the operating system or, conversely, its particular
capabilities--even the configuration of features of the OS on a given
machine--could benefit (for example, in avoiding common configuration errors) by
being expressed through more or less standardized data models. The abilities of
specialized hardware to handle specialized data types, and the fit between
application software and the data for which it is intended are also clear
candidates for comparable data models. [The MS registry fed by streams of
comprehensible XML and parsed as openly accessible groves!]
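
[The layering itself is easy to picture as composition: each level is simply
a function from the model it receives to the model it hands on. A toy sketch
in Python; the layer names, the fields, and the securities example are all
invented for illustration:]

    def house_standards(record: dict) -> dict:
        """Enterprise overlay: rename incoming fields to the house vocabulary."""
        return {"security_id": record["isin"], "px": record["price"]}

    def desk_theory(record: dict) -> dict:
        """Departmental overlay: derive what this desk's theory cares about."""
        return {**record, "view": "cheap" if record["px"] < 100 else "rich"}

    def compose(*layers):
        """Chain the layers; each one remodels what the previous one produced."""
        def run(record):
            for layer in layers:
                record = layer(record)
            return record
        return run

    remodel = compose(house_standards, desk_theory)
    print(remodel({"isin": "XX0000000001", "price": 98.5}))
    # -> {'security_id': 'XX0000000001', 'px': 98.5, 'view': 'cheap'}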

The point is that, however necessary it is to produce either standard tagsets or
data models commonly accepted across industries or spheres of interest, the final
indispensable piece is client-side (relative to the incoming data) remodeling of
the standard APIs to the local case. In fact, there will need to be an
overlapping series of these, at enterprise, departmental, user and machine
levels, and also for the interaction of application software with particular data
domains. Personally, I think that this is, par excellence, the role of XML
processors. I also think that building this software obviates the need to wait
for browser support of XML. Instead of waiting to see XML (see ontology?!), we can
be putting data supplied by a growing and worldwide community of publishers to
work in our particular individual contexts, at tasks the suppliers of the data
have no inkling of, and for which they certainly did not mark it up, nor design a
DTD.
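
[None of this requires exotic machinery. A sketch in Python of such
client-side remodeling; the publisher's markup and my local model are both
invented for the example:]

    import xml.etree.ElementTree as ET
    from dataclasses import dataclass

    FEED = """
    <catalog>
      <item sku="a1"><title>Widget</title><price>9.95</price></item>
      <item sku="b2"><title>Gadget</title><price>24.00</price></item>
    </catalog>
    """

    @dataclass
    class LocalOffer:            # my model, not the publisher's
        sku: str
        label: str
        cents: int               # I keep money in integer cents

    def remodel(xml_text: str) -> list[LocalOffer]:
        """Parse the publisher's markup, emit my local structure."""
        root = ET.fromstring(xml_text)
        return [
            LocalOffer(
                sku=item.get("sku"),
                label=item.findtext("title"),
                cents=round(float(item.findtext("price")) * 100),
            )
            for item in root.iter("item")
        ]

    for offer in remodel(FEED):
        print(offer)

The publisher marked up a catalog; what comes out is my structure, in my
units, for my purposes--none of which the publisher anticipated.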

Walter Perry



xml-dev: A list for W3C XML Developers. To post, mailto:xml-dev@ic.ac.uk
Archived as: http://www.lists.ic.ac.uk/hypermail/xml-dev/
To (un)subscribe, send the following message to majordomo@ic.ac.uk:
(un)subscribe xml-dev
To subscribe to the digests, send the following message to majordomo@ic.ac.uk:
subscribe xml-dev-digest
List coordinator, Henry Rzepa (mailto:rzepa@ic.ac.uk)





 
