I see an opportunity for an interesting "social aspects of technology" research project: assess attributes for a large range of standards (size, completeness, precision, domain covered, etc.), assess the adoption success of each standard (number of implementations, ease of interoperability, perception of success or failure by the community, etc.), and see how the two correlate. Should at least be worth a PhD ....
On 04-Nov-16 9:36 AM, Rick Jelliffe wrote:
There are two quite different issues here: what is the best size, layering, etc. for a standard technology, and what is the best for the standard document? (People conflate the technology with the document all the time, IMHO.)
(But you would expect the size of the standard document to reflect, to some extent, the size of the technology.)
But apart from that, everything will have its own rules: looking for general principles is all well and good, as long as the abstraction doesn't then become unprovable dogma that prevents advance.
Actually, I think XML/XPath might be quite rare in having the standards document lead the technology, rather than being QA on a technology pushed or instigated by vendors and dictators-for-life.
I don't think it is necessarily a bad thing if a standard has known gaps, clear limitations, or TBDs. Should ODF have been held up until it had a spreadsheet formula language? Of course not.
But if a formal standards process is above all QA on the documentation for a technology, then what it would bring to JSON is not necessarily fixes for the edge-case problems, or the addition of comments (even though committees love tinkering). Rather, we might expect a clearer list of those edge cases and shortcomings, and standard ways to ameliorate them.
Regards
Rick
On 4 Nov 2016 11:41 pm, "Costello, Roger L." <costello@mitre.org> wrote:
Hi Folks,
> while JSON’s “simplicity is a virtue” approach led to widespread adoption,
> under-specification has led to a proliferation of interoperability problems
> and ambiguities. From a strictly software engineering perspective these
> ambiguities can lead to annoying bugs and reliability problems, but in a
> security context such as JOSE they can be fodder for attackers to exploit.
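One such ambiguity is easy to demonstrate. RFC 8259 says only that object member names "SHOULD be unique" and that behavior on duplicates is unpredictable, so conformant parsers can legitimately disagree about the same document. A small sketch using Python's standard-library json module (the `reject_duplicates` hook is just an illustrative name, not a standard API):

```python
import json

# A document RFC 8259 permits but leaves semantically ambiguous:
doc = '{"amount": 1, "amount": 2}'

# Python's json module silently keeps the LAST value for a duplicate key...
print(json.loads(doc))  # → {'amount': 2}

# ...but another conformant parser is free to keep the first value, keep
# both, or reject the document outright. Two systems exchanging this
# payload can therefore disagree about what it says. The
# object_pairs_hook parameter at least lets a receiver detect the case:
def reject_duplicates(pairs):
    keys = [k for k, _ in pairs]
    if len(keys) != len(set(keys)):
        raise ValueError("duplicate keys: ambiguous document")
    return dict(pairs)

try:
    json.loads(doc, object_pairs_hook=reject_duplicates)
except ValueError as e:
    print(e)  # duplicate keys: ambiguous document
```

In a security context like JOSE, a validator that sees the first value while the consumer acts on the second is exactly the "fodder for attackers" the quoted passage describes.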
So it would seem that a standard that is completely specified is a good thing.
But completely specifying anything is complex. And lengthy.
Who has time to read (and understand) lengthy, complex specifications?
A solution: narrow the focus/scope of the standard. That reins in its length and complexity. But then multiple standards are needed to accomplish anything, and perhaps those standards are contradictory and/or overlapping in places. If there were one large specification, we could ensure that inconsistencies and duplications are eliminated.
Sigh … many small standards (simple individually, but collectively complex) versus one large, complex standard.
I don’t see a winning strategy. Do you?
/Roger
-- Ian Graham // <http://www.iangraham.org>