RDF_PI



I'm thinking about a little system, with at least the following resources
defined in an RDF doc (a rough sketch follows the list):

data
inference engine
algorithm
processor
formatter for result
label for resulting resource
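
To make that concrete, here's a rough sketch (Python, using rdflib) of what
such a description might look like. The pi: namespace, the URIs and the
property names are all made up for illustration - the point is just one
resource tying the six pieces together.

from rdflib import Graph, Namespace, URIRef, Literal

# hypothetical namespace standing in for whatever schema this ends up using
PI = Namespace("http://example.org/rdfpi#")

g = Graph()
g.bind("pi", PI)

task = URIRef("http://example.org/tasks/csv-analysis")
g.add((task, PI.data,      URIRef("file:///experiments/run42/results.csv")))
g.add((task, PI.engine,    URIRef("http://example.org/engines/meta")))
g.add((task, PI.algorithm, URIRef("http://example.org/algorithms/summary-stats")))
g.add((task, PI.processor, URIRef("http://example.org/processors/stats-1.0")))
g.add((task, PI.formatter, URIRef("http://example.org/formatters/html-table")))
g.add((task, PI.label,     Literal("Summary statistics for run 42")))

print(g.serialize(format="turtle"))   # dump as Turtle just to eyeball it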

Use case:
---
I have a set of CSV data files, the results of an experiment collected by a
legacy tool. I want a nice on-screen table showing some statistical analysis
of this data.

First of all I want to build the RDF containing the instructions - a tool
would be nice, but for now a text editor will do. The resources involved
would be:

location of the raw data
description of the required conversion
-> source code (algorithm) for data conversion CSV -> XDF
-> compiler for source code
-> instructions for compilation and running

(rdf + data for intermediate results)

description of the required conversion
-> source code (algorithm) for statistical analysis XDF -> XDF
-> compiler for source code
-> instructions for compilation and running

(rdf + data for intermediate results)

description of the required view
-> XSL for view
-> instructions for launching viewer

The inference engine in the list above is the meta-engine for running
through the whole procedure - a rough sketch of how it might walk the chain
follows below.
---
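
Roughly, the meta-engine just walks the chain of steps, resolving a
processor for each one and carrying the intermediate (rdf + data) results
forward. A sketch, again in Python, with every name invented for the
purpose:

from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    description: str    # e.g. "CSV -> XDF", "stats XDF -> XDF", "XDF -> view"
    source: str         # where the algorithm's source code lives
    compiler: str       # compiler suitable for $PLATFORM
    instructions: str   # how to compile and run the processor

def run_chain(raw_data: str,
              steps: list[Step],
              resolve: Callable[[Step], Callable[[str], str]]) -> str:
    # resolve() turns a step into a runnable processor (see the next sketch);
    # each processor consumes the previous result and yields the next
    # (rdf + data) intermediate, ending with the formatted view
    current = raw_data
    for step in steps:
        processor = resolve(step)
        current = processor(current)
    return current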

I imagine that, by using a named algorithm, the (specified) inference engine
could first check whether a pre-built processor is available for $PLATFORM;
otherwise it would pick up the algorithm as source code suitable for
$PLATFORM and build the processor. The resulting processor could be cached
and a reference to it recorded in the system. The processor wouldn't
necessarily have to be on the local system (a URL would do). The compiler
would be specified in the same way, though obviously having a pre-built
compiler for at least one language would be a good start - if necessary the
system could bootstrap from there.
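
In code terms the lookup-or-build step might look something like the sketch
below; the cache location, the fetch and the compiler invocation are all
placeholders rather than a real API.

import platform
import subprocess
import urllib.request
from pathlib import Path

# hypothetical cache of processors already built for this platform
CACHE = Path.home() / ".rdfpi" / "processors"

def resolve_processor(name: str, source_url: str, compiler: str) -> Path:
    target = CACHE / platform.system() / name
    if target.exists():
        return target                      # pre-built processor, reuse it
    target.parent.mkdir(parents=True, exist_ok=True)
    source, _ = urllib.request.urlretrieve(source_url)   # fetch algorithm source
    subprocess.run([compiler, source, "-o", str(target)], check=True)
    # at this point a reference to the newly built processor would be
    # written back into the system's RDF so later runs can find it
    return target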

Note that the whole CSV -> analysis-view procedure could itself be retained
as yet another algorithm, specified by the same RDF but without the binding
to the source data.

There's been some discussion recently about how to deal with the results of
processing; I'm not sure whether there was any resolution. What I'm
suggesting above is that the (rdf + data for intermediate results) will
contain the result of running the algorithm and a reference to the tools
used - this will effectively be the chain of trust.
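
As a sketch of what that intermediate might carry (the pi: terms are again
invented, not an existing schema):

from rdflib import Graph, Namespace, URIRef, Literal

PI = Namespace("http://example.org/rdfpi#")
g = Graph()

result = URIRef("http://example.org/results/run42-stats.xdf")
# the result points back at its input and at every tool used to produce it,
# which is what lets a consumer follow the chain of trust
g.add((result, PI.producedFrom, URIRef("file:///experiments/run42/results.csv")))
g.add((result, PI.algorithm,    URIRef("http://example.org/algorithms/summary-stats")))
g.add((result, PI.processor,    URIRef("http://example.org/processors/stats-1.0")))
g.add((result, PI.compiler,     URIRef("http://example.org/compilers/cc")))
g.add((result, PI.label,        Literal("Summary statistics, run 42")))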

What I'd like to ask the illustrious correspondents of this list is: what
schemas already exist that could cover some or even all of the above? I'd
also welcome comments on the viability and likely problems of such a
system.


---
Danny Ayers
http://www.isacat.net