
RE: Picking the Tools -- Marrying processing models to data models



You'll forgive this being a comparatively naive viewpoint, but I'm going to
ask anyway, as I've been wondering about this for some time now and I
need to unload the festering heap that passes for my thoughts:

When writing an object-oriented application, I define both data members and
methods for each class. Class instances contain local atomic data values,
plus complex data in the form of references to other class instances. This
sounds suspiciously similar to a user's data model (sounds an awful lot like
XML Schema, actually). There is no 'schema' for this data model, however, at
least in the declarative sense of the word: all constraints (except type)
are bound up in the methods that manipulate this data model, including
default, fixed, and enumerated values.
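
To make that concrete, here's a minimal sketch (the class and field names
are mine, purely for illustration) of what I mean by the constraints being
bound up in the methods:

    // Hypothetical example: the only record of the constraints on this
    // data lives inside the methods that manipulate it.
    public class PurchaseOrder {
        private String status = "pending";  // default value, visible only here
        private int quantity;

        public void setStatus(String status) {
            // an "enumeration" enforced imperatively, invisible to any schema
            if (!status.equals("pending") && !status.equals("shipped")
                    && !status.equals("cancelled")) {
                throw new IllegalArgumentException("bad status: " + status);
            }
            this.status = status;
        }

        public void setQuantity(int quantity) {
            // a range constraint, again buried in procedural code
            if (quantity < 1) {
                throw new IllegalArgumentException("quantity must be positive");
            }
            this.quantity = quantity;
        }
    }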

With the data model's constraints embedded in the 'processing model', as I'll
call it (feel free to correct my terminology here), it's hard to see the
trees for the forest. You can't, by examination of a schema, determine what
the internal data structure is that is serviced by the class hierarchy. Yet
we're commonly faced with impedance mismatch issues, often seen when trying
to store class data in, say, a relational model, where there is a schema and
the transform of one model to the other must be valid against said schema.

If we could, as a practice, declare a data model as a separate entity, along
with constraints that can easily be expressed declaratively, along with a
type system that supports inheritance, along with default and constant
values... **then** hang methods off of those types, along with other
processing methods not related to direct data model manipulation, wouldn't
we make our lives a whole lot simpler? I mean, creating a transform from one
declared schema to another would certainly be simpler, wouldn't it?
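
For contrast, here's a rough sketch of the same constraints from my example
above expressed in XML Schema (element names again mine, for illustration
only), where the default, the fixed value, and the enumeration are visible
without reading a line of procedural code:

    <!-- the same constraints, declared once, outside any method -->
    <xs:element name="status" default="pending">
      <xs:simpleType>
        <xs:restriction base="xs:string">
          <xs:enumeration value="pending"/>
          <xs:enumeration value="shipped"/>
          <xs:enumeration value="cancelled"/>
        </xs:restriction>
      </xs:simpleType>
    </xs:element>

    <xs:element name="quantity" type="xs:positiveInteger"/>
    <xs:element name="currency" type="xs:string" fixed="USD"/>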

Being able to extract and transport data from a set of classes would be
simpler, too. Whether you're poking the right parameter values into a set
method would be documented in the schema, not in some comment text that
you don't have access to. Your program editor could do type validation as
you write code to call methods, rather than later when you try to link
(granted, some already have this capability). Your coworkers could see where
to go to get the data they want, without having to trace through your
inscrutable logic.

Why the heck not? Why don't we pull the @#$*@ data model out from the rest
of the processing stuff? Yes, I know that there are initiatives to generate
classes from schemas and vice versa, but why don't we have programming
languages that make the distinction between data model and processing model
native and absolute? It would seem to make the software development process
easier by resorting to the time-tested method of splitting a complex problem
into smaller, more manageable problems. I can scope out the user's data
domain first, then delve into the ways of processing it. This is the way
it's been done with database application development for decades.

Wouldn't that make it easier to start *modeling* code? UML to object-oriented
class transformations, with XML Schema-validated data on the side, anyone? I
mean done simply, not the convoluted mess we have now. Heck, you might even be
able to reverse engineer a real-life commercial application and get a UML
model and set of data constraints from it, something I've never seen done
once by any CASE tool **successfully**. (Oh, I'm sorry, we don't call it
CASE anymore, do we? Ooooops.)

Okay, so I'm dreaming. Somebody wake me up.

P.S.
Thanks for putting up with the tirade. I'm all better now.