On Wed, 16 Mar 2005 08:13:36 +1100, Rick Marshall <rjm@zenucom.com> wrote:
> i must be missing something here. every day i do battle with
> translations from one vocabulary to another. flat files to csv to edi to
> xml to printer codes to postscript etc. actually i'm a bit over it all
> at the moment.
>
> to do what len has suggested you need a dictionary - (not a data
> dictionary, but a dictionary) that says an attribute, element, whatever
> in one vocabulary is <something /> in another. possibly rdf is a good
> way to express this. then you need a translator that can read an output
> schema (and produce valid output). then it needs a schema to describe
> the input stream.
>
I don't think you're missing anything. I suspect we're basically
saying the same thing in different ways.
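To make the dictionary idea concrete, here's a toy sketch in Python
(the vocabularies and the mapping are invented, and it punts entirely
on validating against an output schema):

import xml.etree.ElementTree as ET

# Hypothetical dictionary: element names in vocabulary A -> vocabulary B.
# RDF would be one way to express this; a dict is the degenerate case.
VOCAB_MAP = {
    "cust": "Customer",
    "addr": "Address",
    "tel": "PhoneNumber",
}

def translate(elem):
    # Rename the element if the dictionary knows it, keep it otherwise.
    # (Ignores mixed content and attribute renaming; it's only a sketch.)
    new = ET.Element(VOCAB_MAP.get(elem.tag, elem.tag), elem.attrib)
    new.text = elem.text
    for child in elem:
        new.append(translate(child))
    return new

src = ET.fromstring("<cust><addr>10 Main St</addr><tel>555-1234</tel></cust>")
print(ET.tostring(translate(src)))
# b'<Customer><Address>10 Main St</Address><PhoneNumber>555-1234</PhoneNumber></Customer>'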
In our existing case I have a good set of metadata, and I can build
rules around it that tell me how to map from data to document. I don't
have an explicit mapping document; our mapping is distributed all over
the place (actually it's concentrated in about 10 tables, but I
suspect that's more than is really needed).
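For what it's worth, here's roughly what it looks like if you collapse
the whole thing into one hypothetical metadata table (all column and
element names invented for the example):

# One row of metadata per field: where the value lives in the raw data,
# what element it becomes in the document, and whether it's required.
METADATA = [
    # (source column, target element, required?)
    ("cust_name", "Name",  True),
    ("cust_tel",  "Phone", False),
]

def to_document(row):
    # Apply the metadata rules to one record of raw data.
    parts = []
    for column, element, required in METADATA:
        value = row.get(column)
        if value is None:
            if required:
                raise ValueError("missing required field: " + column)
            continue
        parts.append("<%s>%s</%s>" % (element, value, element))
    return "<Record>" + "".join(parts) + "</Record>"

print(to_document({"cust_name": "Rick", "cust_tel": "555-1234"}))
# <Record><Name>Rick</Name><Phone>555-1234</Phone></Record>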
I think what may have been confusing was that I also (sort of)
outlined a way to proceed when you don't have good mappings or
metadata or an input Schema. You could sum it up as key discovery used
to infer relationships, which then serve as the input Schema in your
scenario. It's certainly not as robust as a good Schema and a good
mapping, but for simple stuff it will work. It might also work for
some complex cases if you have good control over the data.
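By key discovery I mean nothing cleverer than heuristics like these
(toy data, and the obvious brute-force approach):

def candidate_keys(rows):
    # A column is a candidate key if no value repeats across rows.
    columns = rows[0].keys()
    return [c for c in columns
            if len(set(r[c] for r in rows)) == len(rows)]

def foreign_keys(child_rows, parent_rows, parent_keys):
    # A child column whose values all fall inside a parent key column
    # probably references it.
    links = []
    for c in child_rows[0].keys():
        child_vals = set(r[c] for r in child_rows)
        for pk in parent_keys:
            if child_vals <= set(r[pk] for r in parent_rows):
                links.append((c, pk))
    return links

customers = [{"id": 1, "name": "Rick"}, {"id": 2, "name": "Len"}]
orders    = [{"order_no": 10, "cust": 1}, {"order_no": 11, "cust": 1}]

print(foreign_keys(orders, customers, candidate_keys(customers)))
# [('cust', 'id')]

The ('cust', 'id') link it finds is exactly the nesting an input
Schema would have handed you: orders group under customers.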
At this point, lest I give the impression I'm in favour of using
Schema to do any of this, I think I should repeat my personal bias: if
the task is to turn raw data into semi-structured documents in a
generalized fashion, I'd rather work with a good metadata store than
any Schema... <perma-thread-option>One can also discuss exactly what
that might mean.</perma-thread-option>
<snip/>
--
Peter Hunsberger