Re: [xml-dev] Does the XML syntax have an underlying data model?

To throw in a practical problem: imagine a global company not unlike where I work. One persistent problem we (oops, they) have with their XML systems, but not with their SGML systems, is that some of their legacy DTDs ported from SGML specify large numbers of default attributes (sometimes 20 default attributes per element), and their documents, which can be large anyway, can multiply in size with useless attributes.
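For a concrete (and entirely hypothetical) illustration of the shape of the problem, a legacy ATTLIST like this defaults four attributes onto every paragraph:

  <!ELEMENT para (#PCDATA)>
  <!ATTLIST para
      security  (public|internal|secret)  "public"
      revision  CDATA                     "1.0"
      language  CDATA                     "en"
      rotate    (0|90|180|270)            "0">

The author leaves <para> bare, but any tool that cannot tell a defaulted value from a specified one will write out security="public" revision="1.0" language="en" rotate="0" on every single paragraph.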

When using OmniMark, we have no problem, because the infoset it operates on includes information on whether a value was defaulted or not, so doing null transformations on documents is possible. With XSLT, for example, there is no way to know, so you have to fake it by coding the ATTLIST default rules into the XSLT to strip out the values. Fragile and bothersome.
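To make the fakery concrete, here is a minimal sketch against the hypothetical para ATTLIST above; the defaults have to be hand-copied out of the DTD into the stylesheet, and it will also strip an attribute the author happened to specify with the default value:

  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

    <!-- identity transform: copy everything through untouched -->
    <xsl:template match="@*|node()">
      <xsl:copy>
        <xsl:apply-templates select="@*|node()"/>
      </xsl:copy>
    </xsl:template>

    <!-- drop attributes whose value equals the DTD default -->
    <xsl:template match="para/@security[. = 'public']"/>
    <xsl:template match="para/@revision[. = '1.0']"/>
    <xsl:template match="para/@language[. = 'en']"/>
    <xsl:template match="para/@rotate[. = '0']"/>

  </xsl:stylesheet>

Every new default attribute anywhere in the DTD means another template here, which is exactly the fragility being complained about.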

Now, would having a data model have fixed this? Probably not: the Infoset decided that defaulting was not interesting information, and any data model built on that would make the same decision. Transformation tools have been built on a data model that is one step removed from the actual XML, and the result is that one major optimization does not flow through the pipeline.

I don't see that a data model would help move the technology in a direction that would be more optimized for large numbers of default attributes. If you make things, let them solve real problems.

Cheers
Rick


