Re: [xml-dev] Usability testing of XML vocabularies?
- From: Rick Jelliffe <rjelliffe@allette.com.au>
- To: "Costello, Roger L." <costello@mitre.org>
- Date: Mon, 07 Jul 2008 18:04:31 +1000
Costello, Roger L. wrote:
>
> How do the standards bodies - W3C, OASIS, ISO - conduct usability
> testing on the XML vocabularies they produce?
>
I am not aware of formal usability testing ever having been done on XML
vocabularies.
Plausible theories might include:
1) One chunk of the population is devoted to voodoo or faddish theories
about markup (e.g. attributes = bad)
2) Another chunk thinks that users never see the XML and so it doesn't
matter
3) Another chunk is constrained to use schema languages that only allow
some kinds of constraint, so they run up against brick walls whenever
they try to get too idiomatic (in which case they just ignore schemas:
e.g. old RDF, XSLT, SVG; a sketch of one such constraint follows this list)
4) They are modeling pre-existing data objects: e.g. a schema or data
structure exists, and it might be too messy to fit neatly into an
optimal model (Microsoft's Jean Paoli told me this was the difference
between Office 2003's XML and Office 2007's XML: they first tried to get
nice XML and found it was just too difficult to shoehorn all the
idiosyncratic stuff in.)
5) The data or the report already has a clear tree structure, so the
vocabulary design seems obvious. (See Michael Stonebraker's VLDB '07
paper, "The End of an Architectural Era (It's Time for a Complete
Rewrite)", which questions the relational model by pointing out how
often, in fact, schemas are stars, snowflakes, trees, streams, etc.)
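To make theory (3) concrete, here is the kind of idiomatic co-occurrence
constraint that DTDs and XSD 1.0 grammars cannot express, sketched as an
ISO Schematron rule (the vocabulary is invented for illustration):

  <schema xmlns="http://purl.oclc.org/dsdl/schematron">
    <pattern>
      <rule context="measurement[@unit = 'percent']">
        <!-- the value range depends on an attribute value: a
             co-occurrence constraint a grammar cannot state -->
        <assert test="number(.) &gt;= 0 and number(.) &lt;= 100">
          A percent measurement must lie between 0 and 100.
        </assert>
      </rule>
    </pattern>
  </schema>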
But the *main* reason it is difficult, I think, is that usability
testing implies a scenario and task. You don't just sit people from the
street down and say "do you understand this document?" (Actually, that
was one of the problems with OOXML: so many people wanted that to be the
criterion.) One of the value propositions of XML is retargeting in the
future to unknown tasks.
So, in that case, rather than user testing now, practitioners tend to
regard future-proofing as a better option: look at how schemas tend to
change as they evolve, and start off down that road now. For example,
schemas tend to get looser over time, they tend to get more genericized
and consistent, and they tend to be bundled into versions or dialects,
which at worst require different schemas and tools. The logical response
is to start off with a generic and open/extensible schema, and to
express the version constraints as another layer (you guessed
it... Schematron).
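As an illustrative sketch only (the vocabulary and version tokens are
invented), that layering might look like one Schematron phase per
version over an otherwise loose grammar:

  <schema xmlns="http://purl.oclc.org/dsdl/schematron">
    <phase id="v1"><active pattern="v1-constraints"/></phase>
    <phase id="v2"><active pattern="v2-constraints"/></phase>

    <pattern id="v1-constraints">
      <rule context="invoice">
        <!-- discount is (hypothetically) a version 2 addition -->
        <report test="discount">discount is not allowed in
          version 1 documents.</report>
      </rule>
    </pattern>

    <pattern id="v2-constraints">
      <rule context="invoice">
        <assert test="@currency">Version 2 invoices must declare
          a currency.</assert>
      </rule>
    </pattern>
  </schema>

The validator is run with the phase that matches the document's declared
version, while the base grammar stays generic and extensible.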
In SGML days, there was in fact some user modeling, even using Fitts'
law. This was because the prime cost then was regarded as keystrokes by
typists, so the different forms of minimization could be tested to get
the most efficient data entry. This was sometimes measured in
"keystrokes per tag" (i.e. effective tag) and it was not uncommon to
have less than one keystroke per tag. But this was because a clear
bottleneck or critical-path scenario had been identified. The XML
constituency, by contrast, has no awareness of data entry efficiency as
a scenario: I don't recall it ever having been mentioned, except in
passing to comment that XML prizes readability over efficiency: in
effect, XML embodies the proposition that the critical path is developer
ease-of-comprehension, not data entry.
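For a concrete picture of that SGML-era arithmetic, here is a minimal
sketch of end-tag minimization (the element names are invented; real
DTDs were of course larger):

  <!DOCTYPE list [
    <!ELEMENT list - - (item+)>
    <!ELEMENT item - O (#PCDATA)> <!-- "O": end-tag omissible -->
  ]>
  <list>
  <item>first entry
  <item>second entry
  </list>

Each omitted </item> saves seven keystrokes, and features such as short
reference maps (SHORTREF), which can map ordinary characters like tabs
to tags, could push the effective keystroke cost per tag below one.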
Cheers
Rick Jelliffe