Peter,
Peter Hunsberger wrote:
>Burak Emir <burak.emir@epfl.ch> asks:
>
>
>>Chizzolini Stefano wrote:
>>
>>
>>
>>>I think there are some valid reasons for writing schemas in XML:
>>>seamlessness, elegance and power. Adopting a "self-describing" language
>>>syntax saves users from learning a new one and allows leveraging many
>>>existing applications derived from the original spec (in this case, the
>>>XML spec); I mean, for example, the chance to dynamically generate
>>>brand-new schemas through XSL transformations.
>>>
>>>
>>>
>>>
>>One can of course endlessly discuss about syntax, but I have never
>>understood the obsessiveness of marking up descriptions of XML data in XML.
>>
>>Who needs to dynamically generate schemas?
>>
>>
>
>Umm, we do.
>
>
>
Are you sure? :-)
>>The whole point of schemas is
>>to be a widespread, well understood description of instances.
>>
>>
>
>In our case we have a lot of metadata described in a relational
>database. There are customizations of that metadata that select
>specific pieces based on the authorizations of the user and the usage
>context of the metadata. The only time we need a schema is for the
>description of a piece of instance data that is travelling beyond the
>boundaries of the system, so we generate the schema as we need it.
>
>This may sound like a problem of not having a powerful enough schema
>language and in a way it is. However, my general philosophy is that I
>will generate no schema before it's time...
>
>
>
OK, so you use schemas to describe the format of outgoing data,
generated from descriptions in a relational database.
To put this a bit more concretely: I have a bug-tracking system, and a
bug report has a field "product" which is an enumeration.
Now, when there is a new product, the enumeration changes (somebody
updates the database), and one generates a new schema.
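Concretely, the regenerated piece could be a simple type with one
enumeration facet per product; a sketch in XML Schema, with all type and
value names invented for illustration:

```xml
<!-- Hypothetical generated fragment: the enumeration of products.
     Adding a product to the database means regenerating this list. -->
<xs:simpleType name="productType">
  <xs:restriction base="xs:string">
    <xs:enumeration value="FooServer"/>
    <xs:enumeration value="BarClient"/>
  </xs:restriction>
</xs:simpleType>
```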
But this is a bit one-way: the one who generates the schema changes his
data at will (maybe the product field even disappears)?
Where does that leave the receiver of your data? Two options:
1) Either he cannot rely on any schema, because it may be subject to
complete change.
2) Or the schema changes are actually very restricted to a few
backwards-compatible details.
Assuming the latter, I am starting to see things more clearly, namely
that if you add a new complex type by derivation, you are effectively
building a new schema; hence there is indeed a need to build new schemas
if it is possible to "continuously specialize".
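To illustrate what I mean by specialization through derivation, here is
a sketch in XML Schema syntax (the type and element names are made up):

```xml
<!-- Hypothetical base type from an earlier schema version. -->
<xs:complexType name="bugReport">
  <xs:sequence>
    <xs:element name="product" type="xs:string"/>
  </xs:sequence>
</xs:complexType>

<!-- A later specialization derives by extension; formally this is
     a new type, i.e. effectively a new schema. -->
<xs:complexType name="extendedBugReport">
  <xs:complexContent>
    <xs:extension base="bugReport">
      <xs:sequence>
        <xs:element name="severity" type="xs:string"/>
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
```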
Does this cover your requirement? If no, can you give a concrete example
like the one above?
I am aware of XML Schema pitfalls that prevent typed programming
languages (e.g. XSLT, XQuery) from using the specialized data, yet it's
hard to really grasp the need for "continuous
specialization/extension/adaptation".
...
<snip/>
>>Now one can dwell in discussion of hypothetical families of schemas, but
>>for all my experience tells me about modelling, if you manage to
>>understand what the common things are that make a bunch of schemas a
>>family, then you can anticipate the extensibility you need, which
>>removes completely the need for dynamic generation.
>>
>>
>
>Yes and no. We have a meta-schema. It's so abstract and so
>generalized that it's difficult to use for specific instance data.
>The problem is, understanding of the schema is often local to the
>schema writer. Not everyone "gets" 5th normal form, and 5th normal
>form doesn't work when the data hits the data warehouse.
>
>
>
Does it happen that you need to change that one as well?
Or is it a "parameterized" schema (like Java generics)?
>>What is a use case for dynamically generated schemas?
>>
>>
>
>For one, you need different schema for different stages in the life of
>the data. I know of no technology that lets you adequately describe
>all possible transformations of the schema over time from within the
>schema itself. As a specific example (discussed previously on the
>list), you need a way to match versions of the schema to work flow.
>
>
>
In my understanding of the problem, this drifts away from "dynamic
generation". Schema evolution (or just backwards-incompatible change)
makes configuration management, versioning, and many other things
necessary. But having a meta-schema and generating schemas is of no use
for the problem at hand, because the receiver of your data cannot write
software that deals with the meta-schema, and hence with all versions of
the schema.
>>Why does one need to use XSL for it ?
>>
>>
>
>You don't, but in our case, we've got about 8 different pieces of
>source metadata that have to be combined and transformed in order to
>derive a specific schema. XSL is the best match to the problem I know
>of.
>
>
>
Unless I have misunderstood, I think your problem is rather different,
because you could also get away with not generating any schema at all,
if it can change in unanticipated ways. Your problem and its solution
(which may be elegant) do not take receivers into account - they may
have to hand-patch their code to deal with the new data.
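As an aside, a minimal sketch of what such an XSL-driven generation
could look like - producing the enumeration from a metadata document of
the (invented) form <products><product name="..."/></products>; this is
an illustration, not a guess at your actual stylesheets:

```xml
<!-- Hypothetical XSLT 1.0 stylesheet: turns product metadata into
     an xs:simpleType with one enumeration facet per product. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xsl:template match="/products">
    <xs:simpleType name="productType">
      <xs:restriction base="xs:string">
        <xsl:for-each select="product">
          <xs:enumeration value="{@name}"/>
        </xsl:for-each>
      </xs:restriction>
    </xs:simpleType>
  </xsl:template>
</xsl:stylesheet>
```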
cheers,
Burak
http://lamp.epfl.ch/~buraq