1) Can processes be reliable given noisy data?
2) Where one can establish a similarity metric, is that
good enough, as Bosworth claims it is for human processes?
Bosworth is playing fast and loose with the noise problems.
Applications vary wildly in their tolerance for noise; humans
tolerate it best. As for publish-and-subscribe notification
systems, those aren't news here or elsewhere, so I am unsure
exactly what progress Bosworth is anticipating. I have a
queasy feeling that, like so many of the showman engineers'
future visions, this is another round of the same old wine
in a *branded new bottle* (not a typo).
As a follow-on, in the domain of the not-new but possibly
worthy of further study, or just to make your brain hurt:
I suggest a review of the work of Salton et al on
the vector space model, and the newer refinements of
Dominik Kuropka et al on topic-based vector space
models. Consider these in terms of the namespaces
provided by XML, and the implications for the vector
model itself of aggregate documents with multiple
namespaces folded into the same document. Given
the cheap/free real-time 3D rendering (not Avalon,
but I won't get into why here), vector space models
can be mapped to real-time 3D to improve the
interface as well as to enable slicing and dicing
of the returns. It MAY be the case that quantum
logic can be applied to improve the problems of
ambiguity, but the jury is still out on that one.
The question one might ask is whether quantum-logic
approaches require quantum computers. My math isn't
good enough to determine whether the term-expansion
overhead really does kill any performance gains
made by the algorithms.
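For reference, the classic Salton-style vector space model is small enough to sketch. A toy Python version, using raw term frequencies and a naive whitespace tokenizer (real systems add tf-idf weighting, stemming, and stop lists):

```python
import math
from collections import Counter

def tf_vector(text):
    """Document as a sparse term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine of the angle between two sparse term vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

d1 = tf_vector("the cat sat on the mat")
d2 = tf_vector("the cat lay on the rug")
d3 = tf_vector("quantum logic and term expansion")

print(cosine(d1, d2))  # high: shared vocabulary
print(cosine(d1, d3))  # 0.0: no terms in common
```

Ranking a query against a collection is then just cosine() of the query vector against each document vector, sorted descending.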
The simpler vector space models do not have that
problem. They work. The question is whether they
work better given markup. Remember, a schema IS
a document itself.
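One way to make the markup question concrete: qualify every term with the namespace of the element it came from, so the same word under two vocabularies becomes two distinct axes in the space. The namespace prefixes and toy documents below are invented for illustration; this is a sketch of the idea, not anyone's published model:

```python
import math
from collections import Counter

def ns_vector(fields):
    """fields: iterable of (namespace, text) pairs; terms become 'ns:token'."""
    vec = Counter()
    for ns, text in fields:
        for tok in text.lower().split():
            vec[f"{ns}:{tok}"] += 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# "title" under two different vocabularies is no longer the same term
doc_a = ns_vector([("dc", "title of the work"), ("html", "heading text")])
doc_b = ns_vector([("html", "title of the page")])
print(cosine(doc_a, doc_b))  # 0.0: words overlap, (namespace, token) pairs do not
```

Folding multiple namespaces into one aggregate document thus changes which documents count as similar, which is exactly the implication worth studying.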
From: Jonathan Robie [mailto:email@example.com]
Here's another "something altogether different":