The last thing I remember reading about this sort of processing annotated the
validation automata to derive the minimal models
(http://citeseer.nj.nec.com/464854.html). That gets you the exact models
which appear in the test documents, but I don't think it addressed the issue of
how to allow for systematic, predictable variations without enumerating all instances.
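
For contrast, the "exact models" half is easy to sketch. The Python below is my
own illustration, not the paper's algorithm: it just records, per element type,
the child-element sequences observed in a set of test documents, which is
precisely the reduction that enumerates instances rather than generalizing over
them. The locationReference result in the closing comment is hypothetical.

import xml.etree.ElementTree as ET
from collections import defaultdict

def observed_models(paths):
    # element name -> set of child-name sequences seen in the documents
    models = defaultdict(set)
    for path in paths:
        for elem in ET.parse(path).iter():
            models[elem.tag].add(tuple(child.tag for child in elem))
    return models

# Hypothetical result: observed_models(["msg1.xml", "msg2.xml"]) might map
# "locationReference" to {("latitude", "longitude")} - the one content
# model that actually occurs - with no way to express allowed variations.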
"Thomas B. Passin" wrote:
>
> [james anderson wrote:]
>
> > That exercise made the tricky part of the question apparent: what kinds of
> > transformations are intended for the content models? A reduction to only that
> > which appears in the document, or replacement of unreferenced types with ANY?
> >
>
> I am just guessing here, but I would guess that the subset schema would
> include only the message types - the schema defines the structure of a
> number of messages - that the agency in question would be using. But many if
> not all of the messages use the same imported pieces, such as
> locationReference for geographic locations, and they import a lot of
> enumerated types as well. Anyway, it is "just" a matter of going through
> and marking things, then extracting the pieces and fitting them together,
> but the individual asking about it wanted some automated help with
> these mechanics.
>
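
To make the "marking things, then extracting the pieces" mechanics concrete,
here is a rough Python sketch. Everything in it is an assumption on my part:
it handles only top-level named definitions, resolves references naively by
stripping namespace prefixes, and ignores imports; the file and message names
in the usage comment are made up.

import xml.etree.ElementTree as ET

def subset_schema(xsd_path, seed_names, out_path):
    # Transitively mark every named definition reachable from the seed
    # message elements via type=/ref=/base= references, then drop the rest.
    tree = ET.parse(xsd_path)
    root = tree.getroot()
    defs = {d.get("name"): d for d in root if d.get("name")}
    marked, todo = set(), list(seed_names)
    while todo:
        name = todo.pop()
        if name in marked or name not in defs:
            continue
        marked.add(name)
        for node in defs[name].iter():
            for attr in ("type", "ref", "base"):
                if node.get(attr):
                    todo.append(node.get(attr).split(":")[-1])  # drop prefix
    for d in list(root):
        if d.get("name") and d.get("name") not in marked:
            root.remove(d)
    tree.write(out_path)

# Hypothetical usage: keep two message types and whatever they pull in.
# subset_schema("messages.xsd", ["TrafficMessage", "IncidentMessage"],
#               "subset.xsd")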
> I would guess that cutting down on the lengthy lists of enumerated types
> would be part of what is wanted here, as well.
>
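
The enumeration side could be handled the same way. A speculative sketch,
assuming the set of values in actual use has already been collected from
sample documents; the codes in the usage comment are invented:

import xml.etree.ElementTree as ET

XSD = "{http://www.w3.org/2001/XMLSchema}"

def prune_enumerations(xsd_path, used_values, out_path):
    # Remove xs:enumeration facets whose values never occur in the corpus.
    tree = ET.parse(xsd_path)
    for restriction in tree.iter(XSD + "restriction"):
        for facet in list(restriction):
            if facet.tag == XSD + "enumeration" and \
               facet.get("value") not in used_values:
                restriction.remove(facet)
    tree.write(out_path)

# Hypothetical usage, with made-up codes:
# prune_enumerations("messages.xsd", {"ROAD-CLOSED", "LANE-BLOCKED"},
#                    "pruned.xsd")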