Rick Jelliffe wrote:
[[
From: "Jonathan Borden" <jborden@attbi.com>
> I am all for pluggable type libraries. Aside from that, the fact that XML
> is verbose is not something that we've been too concerned with. (we've all
> heard this before).
And, as a result, it is very hard to make tools for XML.
Take that <position> example. A way to declare the rules for going from a
lexical space to a value space (e.g. using grammars, pictures or whatever)
would allow a nice user interface where the user types an idiomatic entry
and the GUI converts it to markup. Just as much, it would allow someone with
a simple text editor to enter the values and validate. (In some cases, the
rules might be reversible and help rendering too.) No losers, as far as I
can see.
]]
As I've said, this is a great example of where something like Simon's
regular fragmentations provides a nice solution.
The point is that XML++ is better able to deal with XML than with text
patterns, surprise surprise. Luckily we have regular expressions and
software that knows how to deal with them ... and we also have software that
knows how to take a piece of text _and_ a regular expression and emit a
piece of XML. This seems like a good clean solution.
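To sketch what that looks like (element names hypothetical, and using the
regular-expression support being drafted for XSLT 2.0):

  <xsl:template match="position">
    <!-- Sketch only: lift an idiomatic "x,y" entry into explicit markup. -->
    <xsl:analyze-string select="." regex="(\d+),(\d+)">
      <xsl:matching-substring>
        <x><xsl:value-of select="regex-group(1)"/></x>
        <y><xsl:value-of select="regex-group(2)"/></y>
      </xsl:matching-substring>
    </xsl:analyze-string>
  </xsl:template>

so <position>300,200</position> comes out as <x>300</x><y>200</y>, and the
same regex could drive validation for someone typing values in a plain text
editor.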
[[
That XML removed SGML's facilities for parsing text and implying tags
does not mean that it is not appropriate or practical functionality for
some other layer to have some system of declarations. (As XML Schemas
found out with their derivation by lists and unions.)
]]
Agreed. See above.
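(A minimal sketch of what that mechanism buys you, with made-up type names:
a list or union type declares micro-structure inside a single text node.)

  <xs:simpleType name="coordinateList">
    <!-- "300 200 150 150" parses as a whitespace-separated list of decimals -->
    <xs:list itemType="xs:decimal"/>
  </xs:simpleType>
  <xs:simpleType name="lengthOrAuto">
    <!-- a union admits either a number or a token such as "auto" -->
    <xs:union memberTypes="xs:decimal xs:token"/>
  </xs:simpleType>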
[[
"Let's get on to business rules" is irrelevent to people in publishing
who do not have business rules in their data in that way.
Otherwise, we will be left with complex tools but no data, because the
stage of getting from what people want to read and write and the
particular types that committees have decided on has a standards
gap. In the data capture world, GOMS and low-training are still king.
Complex data values should not be sniffed at.
]]
Hardly. All I am saying is that XML specifications and much XML software
(e.g. XSLT) are naturally able to deal with XML structures/patterns rather
than text structures/patterns -- witness RELAX NG, XSLT, etc. Is this
really a surprise?
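By way of a sketch (element names hypothetical again), a RELAX NG pattern
matches the marked-up form of such a value structurally rather than
lexically:

  <!-- Sketch: the marked-up position is matched as structure, not text. -->
  <element name="position" xmlns="http://relaxng.org/ns/structure/1.0"
           datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
    <element name="x"><data type="decimal"/></element>
    <element name="y"><data type="decimal"/></element>
  </element>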
[[
<path d="M300,200 h-150 a150,150 0 1,0 150,-150 z"
fill="red" stroke="blue" stroke-width="5" />
<path d="M275,175 v-150 a150,150 0 0,0 -150,150 z"
fill="yellow" stroke="blue" stroke-width="5" />
<path d="M600,350 l 50,-25
a25,25 -30 0,1 50,-25 l 50,-25
a25,50 -30 0,1 50,-25 l 50,-25
a25,75 -30 0,1 50,-25 l 50,-25
a25,100 -30 0,1 50,-25 l 50,-25"
fill="none" stroke="red" stroke-width="5" />
</svg>
Oops, did someone forget to tell them that terseness is of minimal
importance? Or is it that there is a sweet spot at which
point specialist and idiomatic notations (paths, URLs, dates, positions,
XPaths, styles, measurements, etc.) are appropriate?
Indeed, is it positively bad for readability (and therefore
maintainability, comprehensibility, cheap-tool-ability) to have no
embedded notations for complex values?
]]
This example proves exactly what?
a) that use of lists of numbers and tokens in attributes is a _good thing_ ?
b) that humans should magically understand this?
c) that sometimes we need to make compromises?
Compromises are fine, but realize that you do give something up, for
example the ability to easily define subsets of such patterns, or to
restrict specific ranges inside such patterns, etc. Now markup is not the
only way to define structure; EBNF is often an acceptable option -- and we
expect that SVG will come with specific tools, and largely not be processed
with entirely generic XML tools. That tells me something about when such
compromises are indicated.
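To make the trade-off concrete (a sketch, with hypothetical names): once a
coordinate is marked up, a schema can restrict its range with an ordinary
facet; packed into a path string, it can only be constrained as a whole:

  <xs:simpleType name="xInViewport">
    <!-- trivial on a marked-up value: constrain the coordinate's range -->
    <xs:restriction base="xs:decimal">
      <xs:minInclusive value="0"/>
      <xs:maxInclusive value="800"/>
    </xs:restriction>
  </xs:simpleType>
  <!-- against d="M300,200 h-150 ..." the best a schema can do is an
       xs:pattern facet over the entire string, which cannot express a
       numeric bound on any one coordinate. -->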
Jonathan