   Re: [xml-dev] Best Practice - beyond schema


>>2. Given transforms, how much post-parse information really 
>>  needs a more powerful schema formalism?  If so, when? 
>>
>>I'm sorry, I don't really understand how transforms affect the PSVI 
>>requirement - can you elaborate?
>
>If a PSVI, or as Rick said better, a Typed Infoset, has an XML 
>form, isn't it possible to convert to that form by transformation? 
>If not, no.
>
Maybe I'm mistaken, but isn't this exactly what Francis' TypeTagger [1] 
tool does? It will parse a W3C XML Schema document and annotate the 
elements in the instance document with the type information from the 
schema. True, it won't annotate attributes with their types, but it's 
certainly a step in the direction of an XML form for a Typed Infoset.
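
Just to make the idea concrete, here's a minimal sketch in Python of that 
kind of annotation pass - not Francis' actual code; it only handles global 
element declarations, and the file names and the "taggedType" attribute 
are made up for illustration:

import xml.etree.ElementTree as ET

XSD_NS = "{http://www.w3.org/2001/XMLSchema}"

def tag_types(schema_file, instance_file):
    # Map the schema's global element declarations to their declared types.
    schema_root = ET.parse(schema_file).getroot()
    declared = {el.get("name"): el.get("type")
                for el in schema_root.findall(XSD_NS + "element")
                if el.get("name") and el.get("type")}

    # Annotate matching elements in the instance with the declared type name.
    tree = ET.parse(instance_file)
    for el in tree.iter():
        local_name = el.tag.split("}")[-1]   # drop any namespace prefix
        if local_name in declared:
            el.set("taggedType", declared[local_name])  # illustrative attribute
    return tree

tag_types("order.xsd", "order.xml").write("order-typed.xml")

A real tool presumably walks the schema's content model rather than just 
the global declarations, but the principle is the same: schema in, 
type-annotated instance out, done as an ordinary transformation over the 
parsed documents.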

Cheers,
/Eddie

[1] http://www.schemavalid.com/utils/typeTagger.zip

>>3.  What requirements for a schema language should be posed for 
>>   any project and what should be explicit to a project?  It 
>>   seems to me that we don't have many criteria for evaluating 
>>   these proposed mods and features for schema languages.
>>
>>Some of these requirements may emerge during projects; I agree it would 
>>be nice to list them for future use and better understanding.
>
>The problem then is that we are seeing schema languages being 
>proposed for which we have no use cases that result in 
>requirements that can be evaluated.  How then does a developer 
>or organization choose?  How can one take their project requirements 
>and use them to compare and contrast with the capabilities of a 
>schema language?  As in the trite but true scenario I outlined 
>the other day, Step 3 (the recommendation from the technical 
>team) vanishes into Step 4 (manager picks based on recommendations 
>from inner peers).  There is a sort of cluelessness there that 
>means dumb luck rules the project thereafter.   To ensure dumb 
>luck has the best luck, the schema language ends up needing 
>to be feature-rich and non-layered to cover as many dumb cases 
>as possible.
>
>>>I don't find it compelling to have a means to validate or 
>>>augment that can't be inspected and proven by readily 
>>>available means if it has to be cited normatively.  That's 
>>>the transparency requirement.  But it also may just be my 
>>>SGML Ludditism showing, because I accept a lot of behind-the-scenes 
>>>manipulation from my relational toolsets.
>
>>I think the effect of this utility is obvious enough to count as 
>>totally transparent.  On the other hand, more powerful tools might use 
>>the same underlying technology and be equally open source, but eventually 
>>the complexity of effect will take one to the other side of the 
>>indeterminate "transparency v. opacity" boundary.
>
>What is unrecognizable is unknown and it varies by context.  The 
>context has to be the proof of effect where the proof is 
>transparent.  In other words, it can be black box but the 
>proof must be clear as to why it proves the case.  But my 
>point is that I am being pristine about this technology 
>because markup itself, and XML specifically, goes out of its 
>way to get rid of layers that make it impossible to see into 
>the box.  If we put those back, we have to confess that 
>things like Typed Infoset, PSVI, etc., are the way back 
>into the machine, the realm of XML Systems, not XML. 
>This isn't all bad but it is a step past XML 1.0 to be sure.
>
>>>>Flatten out one piece of the complexity carpet and darned 
>>>>if another part doesn't ripple.  Try to rehost an MS Access 
>>>>app into Visual FoxPro and watch code disappear into 
>>>>the more powerful but explicitly relational language 
>>>>features of Fox.  What goes on beneath the rug?  We aren't 
>>>>supposed to care if the results are provably the same.
>
>>>Shouldn't that be the case for the schema language processors?
>
>>Yes, let's hope layering leads to a more complementary outcome.
>
>I hope it leads to a more maintainable system where features 
>can always be stripped away or added as needed.  On the other 
>hand, there are very good reasons for tools like Visual FoxPro 
>that have powerful and specific features for a specific application, 
>relational database system building, vs., say, Visual Basic, which 
>has features for connecting to a relational database.  Not 
>surprisingly, the relational aspects are easier in Fox, but the 
>GUI aspects are harder.  Exactly the reverse goes on in Visual Basic. 
>Jumping between the two can be exasperating.  I speculate 
>that Typed Infosets, which if pluggable become Application Typed 
>Infosets, will offer the same dilemma: clarity in the specific 
>application, obscurity with regard to general tasks.
>
>len