Re: [xml-dev] Does the XML syntax have an underlying data model?

Yes, Rick, I agree with what you said below.

In my mind, you identified precisely the reason why a well-considered, well-defined data model is important: a data model should fully and accurately describe the data used by the applications. If it does not (or there is no data model at all), then it is much harder to be sure that everybody who needs to know the data actually does -- and that they all "know" it in the same way.

That's useful for small application systems, but it is vital for large, enterprise-class ones.

In the case you cited, it seems that the applications needed a bit of information (whether an attribute value was defaulted or not) that the data model that was selected or designed did not capture. Obviously, the people who designed the Infoset and the PSVI did not need that information for their purposes, so those models don't contain it; the same is true of the XQuery Data Model. But we could have included that information. And, given a strong enough case, it's not too late.

Jim

On 4/16/2016 11:01 PM, Rick Jelliffe wrote:

To throw in a practical problem: imagine a global company not unlike where I work. One persistent problem we -- oops, they -- have with their XML systems, but not with their SGML systems, is that some of their legacy DTDs ported from SGML specify large numbers of default attributes (sometimes 20 defaulted attributes per element), and their documents, which can be large anyway, can multiply in size with useless attributes.
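
To make that concrete, the sort of declaration involved looks like this (element and attribute names invented for illustration):

    <!ATTLIST chapter
        security  (public | internal)  "public"
        revision  CDATA                "1.0"
        language  CDATA                "en"
        audience  CDATA                "all"
    >

A validating parse fills all of these in on every <chapter> element whether or not the author typed them, so a straightforward re-serialization writes every one of them out explicitly.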

When using OmniMark, we have no problem, because the infoset it operates on includes information on whether a value was defaulted or not, so doing null transformations on documents is possible. With, e.g., XSLT, there is no way to know, so you have to fake it by hand-coding the ATTLIST default rules into the XSLT to strip out the values. Fragile and bothersome.
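
A minimal sketch of what that workaround looks like in XSLT 1.0 (names invented, and the default values duplicated by hand from the DTD, which is exactly the fragility I mean):

    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

      <!-- identity transform: copy everything through unchanged -->
      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>

      <!-- one template per defaulted attribute: drop it when it still
           carries the ATTLIST default value -->
      <xsl:template match="chapter/@security[. = 'public']"/>
      <xsl:template match="chapter/@language[. = 'en']"/>

    </xsl:stylesheet>

And note that this throws the attribute away even when an author typed the default value explicitly; with no defaulted-or-not flag in the data model XSLT operates on, the stylesheet simply cannot tell the difference.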

Now, would having a data model have fixed this? Probably not: the Infoset decided that defaulting was not interesting information, and any data model built on that would make the same decision. Transformation tools have been built on a data model that is one step removed from the actual XML, and that has meant that one major optimization does not flow through the pipeline.

I don't see that a data model would help move the technology in a direction that would be more optimized for large numbers of default attributes. If you make things, let them solve real problems.

Cheers
Rick

--
========================================================================
Jim Melton --- Editor of ISO/IEC 9075-* (SQL)     Phone: +1.801.942.0144
  Chair, ISO/IEC JTC1/SC32 and W3C XML Query WG    Fax : +1.801.942.3345
Oracle Corporation        Oracle Email: jim dot melton at oracle dot com
1930 Viscounti Drive      Alternate email: jim dot melton at acm dot org
Sandy, UT 84093-1063 USA  Personal email: SheltieJim at xmission dot com
========================================================================
=  Facts are facts.   But any opinions expressed are the opinions      =
=  only of myself and may or may not reflect the opinions of anybody   =
=  else with whom I may or may not have discussed the issues at hand.  =
========================================================================


