juanrgonzaleza@canonicalscience.com said:
>
>> peter murray-rust wrote:
>>
>
>> A key
>> approach is that data and text are mixed ("datument") so that we can
>> transmit data in primary publications. Machines can now start to
>> understand scientific publications.
>
> I would say "to analize".
LOL, so would I! :-)
>> This is sufficiently broad that it is impossible to create a
>> traditional XSD schema which allows for all uses.
>
> As I see it, the problems are complexity and flexibility. I think
> that the whole XML approach was not really designed for dealing with
> complex applications in a flexible way.
Yes, XML was not designed with particular applications in mind. In fact,
it was designed bottom-up.
> If code reuse is one of the priorities, and extensibility, power for
> manipulating symbolic structures, and modularization are also, why not
> use a specialized symbolic language such as Lisp or Scheme?
Because XML parsers are built into every language, while Lisp and
Scheme parsers are not ubiquitous, have weaker internationalization,
encourage rather than discourage the addition of processing, and lack
validation languages.
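To illustrate the ubiquity point: most language standard libraries ship an XML parser with no extra installation. The sketch below, in Python, parses a hypothetical "datument"-style fragment (the element and attribute names are invented for illustration, not taken from any actual schema in this thread) and pulls the data out of the prose:

```python
# XML parsing with Python's built-in standard library -- no third-party
# install needed, which is Rick's point about ubiquity.
import xml.etree.ElementTree as ET

# A tiny, hypothetical datument-style fragment: prose with embedded data.
doc = """<para>The melting point is
  <measurement quantity="meltingPoint" units="K">373.15</measurement>.
</para>"""

root = ET.fromstring(doc)
m = root.find("measurement")
# A machine can extract the typed data without "understanding" the prose.
print(m.get("units"), float(m.text))  # -> K 373.15
```

A Lisp reader would parse an s-expression equivalent just as easily, but only where a Lisp runtime is already present; the XML version runs unmodified anywhere Python (or Java, C#, Perl, etc., each with bundled XML support) is installed.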
> Yeah, XSD was mainly designed with business applications in mind.
> Precisely the main strength of Lisp-like approaches has been their
> unusual ease of adapting to evolution. The main reason Lisp is so
> popular in academic circles and AI research is that the code evolves
> along with the discipline. At least that is my opinion.
>
> I find it extremely difficult to see how an XML environment (as it is
> being designed today) could offer the kind of facilities needed in
> science. XML comes from the SGML world of
> design-once-for-a-fixed-big-business.
This is high-order analizing.
Cheers
Rick Jelliffe