Re: [xml-dev] Lessons learned from the XML experiment

On Fri, Nov 15, 2013 at 10:56 AM, Pete Cordell <petexmldev@codalogic.com> wrote:
----- Original Message ----- From: "Uche Ogbuji"

On Fri, Nov 15, 2013 at 4:35 AM, Hans-Juergen Rennau <hrennau@yahoo.de> wrote:

Michael Kay wrote:
"and they [namespaces] add myriad opportunities for doing things wrong."

You forgot an item in your list: they offer a unique possibility to get
things right when things get really complicated.

For example when you do what I presently do: integrate schema information
from 283 different schemas. Fortunately they use namespaces - 283 target
namespaces - which enables me to keep everything clean without relying on
document URIs or any other conventions, and without adding anything to
existing structures (like marker attributes). I admit that I miss a lot of
the fun I might have if there were no such thing as a target namespace
(the shadow of namespaces).
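
As a rough illustration of why that works (the namespace URIs and element
names below are invented for the example): two schemas can declare the same
local name without any clash, because every component is keyed by its
target namespace.

<!-- orders.xsd (hypothetical) -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.org/ns/orders"
           elementFormDefault="qualified">
  <xs:element name="item" type="xs:string"/>
</xs:schema>

<!-- invoices.xsd (hypothetical) -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.org/ns/invoices"
           elementFormDefault="qualified">
  <!-- same local name 'item', different expanded name: no collision on merge -->
  <xs:element name="item" type="xs:decimal"/>
</xs:schema>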


I would start much, much further back and commiserate with you about having
to integrate info from 283 different schemas. If I were dealing with an
environment that broken, I would start by fixing it in my local context.
Pipelines are much more effective for that than namespaces.


Why is this architecture necessarily broken?

It's not intrinsically broken. It's broken more in the sense that it's not reasonable to expect an architect or developer to maintain it in an elegant and durable way.

 
Loose architectural systems, where clearly identified external dependencies are injected, resolved, and dispatched via external configuration, are likely to be much more flexible than a system that has to be crafted with intimate knowledge of a pipeline of how the data should be processed.

This might be a matter of how we read what Hans-Juergen wrote, and I admit I might have misread it. When he said "integrate schema information from 283 different schemas," that, to me, is not a loose architectural system. If he had said "integrate data from 283 different schemas," then I'd agree.

In other words, for me, the right, loosely-coupled approach would not be to make a super-schema, with all its requisite entanglements, from the 283 initial sources, and then use that to direct all the processing. Rather, I would use pipelines, with some sort of demultiplexer component at the input to identify sub-patterns (maybe even at a finer level of granularity than the 283 full schemata) and forward each to a specialized pipeline branch. It wouldn't magically become an easy task, but I believe it would be more manageable.

And I don't think this is pie-in-the-sky either. I'd think NVDL and/or XProc could be used to do this.
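
Just to sketch the kind of thing I have in mind with NVDL - purely
illustrative, with made-up namespace URIs and schema file names - the
demultiplexing step could be little more than a rule set that routes each
namespace to its own schema and downstream branch:

<?xml version="1.0"?>
<rules xmlns="http://purl.oclc.org/dsdl/nvdl/ns/structure/1.0">
  <!-- each recognized namespace is validated against its own schema
       and can be handed to a specialized pipeline branch -->
  <namespace ns="http://example.org/ns/orders">
    <validate schema="orders.xsd"/>
  </namespace>
  <namespace ns="http://example.org/ns/invoices">
    <validate schema="invoices.rng"/>
  </namespace>
  <!-- anything unrecognized is flagged rather than silently passed through -->
  <anyNamespace>
    <reject/>
  </anyNamespace>
</rules>

An XProc pipeline could then pick up the per-namespace branches for the
actual processing.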


As others have said, we're not all solving the same problem.

This might be the one point on which we definitely all agree.


--
Uche Ogbuji                                       http://uche.ogbuji.net
Founding Partner, Zepheira                  http://zepheira.com
Author, Ndewo, Colorado                     http://uche.ogbuji.net/ndewo/
Founding editor, Kin Poetry Journal      http://wearekin.org
Editor & Contributor, TNB     http://www.thenervousbreakdown.com/author/uogbuji/
http://copia.ogbuji.net    http://www.linkedin.com/in/ucheogbuji    http://twitter.com/uogbuji



