   Re: [xml-dev] RE: Poll (was: Seeking advice on handling large industry-s


> When you say data binding, are you using generated code or handwritten?

Hmmm... tough to say. As a little bit of history, the industry we are in
was not well standardized... so everyone's format has the same info but in
wildly different ways. Our format was very much object oriented: the
request contains an applicant, which can contain multiple address objects,
etc. The (most) commonly used format now (XML) is similar but not really
to my taste. So instead of rewriting our whole back end, which already had
a lot of logic, we used XML--

What we had in objects:

-Request
  - Applicant
    - Address
    - Address
  - Data
  - etc

became, in XML:

  <Request>
    <Applicant>
      <Address ... />
      <Address ... />
    </Applicant>
    <Data ... />
  </Request>

More or less... everywhere there was an object name, we created an
element. (This is done in Borland's Delphi, using SAXSerializer in the SAX
for Pascal project.) We used the run-time type information, so there is no
real "binding" code-- there is just SaveToXML(AnObject) and
LoadFromXML(AnObject).

So the code to manipulate the data is all native and handwritten-- but it
was written a long time ago and fundamentally hasn't changed much. The XML
serialization is auto-generated with that one command. If the object
structure changes, the output changes (and is versioned, and a new XML
Schema can be generated).
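
To give a rough idea, a generated schema for the Request structure above
would look something like this (just a sketch-- the real attributes and
types are left out, and the element names simply follow the object names):

  <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="Request">
      <xs:complexType>
        <xs:sequence>
          <xs:element name="Applicant">
            <xs:complexType>
              <xs:sequence>
                <!-- address attributes omitted in this sketch -->
                <xs:element name="Address" maxOccurs="unbounded">
                  <xs:complexType/>
                </xs:element>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <!-- Data attributes omitted as well -->
          <xs:element name="Data">
            <xs:complexType/>
          </xs:element>
        </xs:sequence>
      </xs:complexType>
    </xs:element>
  </xs:schema>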

> Are the structures:
> a) very different?

If you mean the structures we integrate with, then: physically, yes.
Semantically, not really. We also have several profiles that minimize what
is needed.

If you mean whether our own structures are internally very different,
then: yes and no... they all have some sort of Group, properties and
possible subgroups... so, in shorthand, it is object oriented-- the
physical structure is very similar but the information varies.

> I certainly don't want to touch native object structures. Those are well
> and good from the application's standpoint.

I touch on that above...

> Yep, I can dig it.  When do XSLT solutions become too complex, though?
> It's basically a maintainability question.  And I don't know an
> alternative, since any object representation of the interchange format
> will likely be just as complex and hard to maintain (as will the
> cross-exchange code between the two object models [native and
> interchange]).  I wonder if XQuery offers something here?  I guess it's
> time to start digging into that Draft.

We have thought about XQuery... but again... the gung-ho factor isn't
fully there and we have an "it ain't broke" thing going on lately... : ) I
will toss out that XSLT is complex in that I am the only one who is
proficient enough to hand-code it. What I really need is a WYSIWYG
XSLT/HTML editor. I have started one myself out of sheer frustration with
what I have tried... but there is a reason it's not out there yet. It's
hard. But if there were a way to bootstrap the XSLT onto an HTML editor,
it would allow the web designers to do it all (more or less) instead of me
taking what they create and inserting the XSLT by hand.

> The main advantage I see to XSLT is it avoids API-ness for what is
> basically simple data manipulation. But that's an old record in this
> jukejoint.

True... oldie but a goodie.

> The XML files we generate right now are small (25k), but will surely
> become very large as more information is incorporated. Especially true
> as new domains come onboard. I'm trying to worry ahead of time (which of
> course never works, but I can't help myself).
>
> What we could do is profile the larger spec into sub-specs, but as work
> flows down the pipe you really can't subdivide the document model any
> longer.  And the way the model is designed, it's pretty scattershot
> where the data goes in the structures.
>
> The model is somewhat "baggy", also, in that there's quite a bit of type
> information carried in attributes and many co-occurrence constraints in
> the data.  Could use a bit of Schematron in the mix, although there's
> already some appinfo annotations to handle what XML Schema can't.

Hmmm... for us I have followed a "procedural model for output,
template-driven model for input" rule. So when we receive data we handle
it using lots of templates... but when we output data we use one main
template with for-each statements, because our own data is far more
predictable. The point being, perhaps the baggy nature can be addressed
more easily in a non-procedural context. Assuming you did split the spec
(or profile it...), you could create a master XSLT and sub XSLT templates
using <xsl:include> statements. This would be nice for updating later, as
different problems in the larger spec would be localized to small
templates. I don't know how people generally feel about this approach,
but again, it has worked great for me.
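
Roughly something like this (just a sketch-- the file names are made up,
and I am borrowing the Request/Applicant/Data names from my own example
above):

  <!-- master.xsl: one entry point, with the real work pushed into the
       included files so changes stay localized -->
  <xsl:stylesheet version="1.0"
                  xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

    <!-- each included stylesheet holds the templates for one part of
         the spec (or one profile) -->
    <xsl:include href="applicant.xsl"/>
    <xsl:include href="data.xsl"/>

    <!-- the main template just dispatches -->
    <xsl:template match="/Request">
      <xsl:apply-templates select="Applicant"/>
      <xsl:apply-templates select="Data"/>
    </xsl:template>

  </xsl:stylesheet>

For output, where our own data is predictable, the main template does more
of the work itself with for-each; for input we lean on the included
templates.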

> It's helpful. I've got you pigeonholed as a (3).  Gotta do best fit
> approximation, ya know.

Yeah... 3 sounds right... I feel a little 1 and 4 also... but 3 is closest.

Cheers,

Jeff Rafter
Defined Systems
http://www.defined.net
XML Development and Developer Web Hosting





 
