Joshua Allen wrote:
> Did "memex" have the concept of universal identifiers?
If, by "memex", you mean the Memex prototype that I mentioned
working on back in the 80's and installing at ITT, CERN and inside
Digital, then the answer is: "Yes."
Links in my memex system had very much the form of what you know
as URLs or URIs today. I can't remember the exact syntax (it's been
almost 15 years...) but it was basically:
node::"task=application":<ObjId><local_target>
"node" was the name of the node on DECnet (Easynet) network
"task=application" is like the "protocol" in HTTP URLs. i.e.
"http:" tell you what protocol to run on the remote node.
"ObjId" was some opaque string which was an "address" that was
meaningful to the application that you were calling to. And, "local
target" was some address, record selector, byte-offset, etc. that was
meaningful to that application. For something like VaxNotes, the
ObjId+LocalTarget would be "NotesFile + NoteNumber" while for
something like the TPU Text Editor, it could be "FileName +
LineNumber".
This was, of course, the natural way to do things on VMS since
our file system was distributed over DECnet[1]. On VMS, there really
was no distinction made between a local file and one that was remote.
If you wanted to access "foobar.txt" in the "mumble" directory on node
"ATFAB," you simply wrote: "ATFAB::[mumble]foobar.txt". Also, on VMS,
the way you created a network session with another machine was by
simply opening up a "file." That's what the "task=application"
business is all about in the above example. This tells the operating
system to open up the program called "application" and give it a file
handle to the network session. Communication between the client and
server was then done by reading from and writing to the "file" that
they shared. You could also write 'node::"task=0":foo.dcl' and DECnet
would execute a DCL command file (like a shell script) that had the
name "foo.dcl".
So, DECnet basically provided us the equivalent of "URL's" as
a standard function of the operating system. This meant that most of
our work on "Memex Prototype 1" was focused on thing like the
conventions that were needed to identify objects and local addresses
within them. Also, we put a great deal of effort into user interface
issues (i.e. integrating standard functions for creating links,
following them, etc.) and we did a good bit of work on storing
hyperinformation networks external to the files they referred to,
since in many cases one would want to link to an object in a program
that we couldn't yet modify (a sketch of the idea follows below). But
all the network stuff and the "universal identifiers" were basically
"free." Given our system, it didn't matter if you were
linking to files in your building or in Japan -- as long as you had a
network connection (and very few people had one.)
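As a sketch of that "external storage" idea (my own
reconstruction, not the actual Memex design): keep the links in their
own store, keyed by the object and target they start from, so links
can be attached to objects owned by applications you cannot modify:

    # Hypothetical external link store: links live outside the
    # documents they annotate, keyed by (object, local target) of
    # the link's source end.
    link_store: dict[tuple[str, str], str] = {}

    def add_link(src_obj: str, src_target: str, dest: str) -> None:
        link_store[(src_obj, src_target)] = dest

    def link_at(src_obj: str, src_target: str) -> str | None:
        return link_store.get((src_obj, src_target))

    # Link note 42 in a (hypothetical) notes file to line 120 of a
    # text file, without touching either application's own data.
    add_link("PROJECT.NOTE", "42", 'ATFAB::"task=TPU":<FOO.TXT><120>')
    print(link_at("PROJECT.NOTE", "42"))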
> I think that the URI, not the hyperlink, is
> the fundamental innovation of the WWW
I really question this, if only because other systems had already
implemented the equivalent of URLs long before the WWW stuff happened.
VMS had DECnet and RMS in its first version as far back as 1978... and
I don't believe that VMS was the only or first system to have such a
file system. Didn't Apollo's machines have a distributed file system?
My first impression of TB-L's URLs was that they were simply
a way to bring the equivalent of VMS file names to UNIX. This is
something that Unix always needed -- too bad it still isn't integrated
into the file system even after all these years...
Personally, I think that the WWW was all about timing, not
innovation. My projects and those of others had much the same
capability as the early web browsers; however, what we didn't have was
anywhere to use these tools outside our own internal network (which
was the largest in the world at the time). The big problem I had in
trying to bring my "Memex" system to market was that there was *no*
market to sell into. Virtually no customers had more than two or three
machines networked together, and even though people had hundreds of
PCs or Macs, they typically used them as little more than VT100
emulators -- not network machines. (There is a long story, to be told
another day, of how HyperCard originated as a reaction to people's
disgust at using the Lisa as a VT100 emulator to access Apple's
internal ALL-IN-1 network...) Those customers who did have network
connections were typically paying big-bucks for tiny amounts of
bandwidth and thus carefully rationed access to their networks.
What happened in the late 80's and early 90's was that
bandwidth became cheaper, modems became faster, folk like Peter Tattam
brought practical TCP/IP to PCs (Trumpet Winsock), email and
office automation finally convinced people to link their machines into
networks, etc. Once these trends began, it became possible for a
wide-area network hypertext system to be deployed. It simply wasn't
possible much earlier. What we needed to make Hypertext real wasn't
innovation. The problem was well understood long before the 90's
began. (Heck, I had personally been building hypertext things of
various types ever since 1974 and I knew many others who were doing
the same.) What we needed was a network that could support Hypertext
and a community of users who were comfortable enough with computers to
accept it.
The lack of a network was, I think, largely responsible for
the tremendous amount of attention that was paid during the 80's to
building large numbers of closed hypertext systems, i.e. systems that
relied on everything being in a single database or on a single
machine. (In many cases, these things weren't much more than glorified
"help" systems. But, at least you could sell or deploy a help system.
There was a market for that.) Personally, I believe that the Memex
system we built at Digital was the first of the "wide-area network" or
distributed hypertext/hyperinformation systems. (Even though Ted
Nelson had been *talking* about it for years...) However, the fact
that we did it first wasn't because we were terribly brilliant.
Rather, it was because in Digital we had a massive, fast, worldwide
network to play with. Also, the culture of the company was such that
we valued the network and the sharing of information on that network
very, very highly. There were very few people outside Digital who
were in such an environment back then. For instance, even though IBM
also had a very large network, they carefully controlled access to it
and discouraged its use in many ways.
Anyway, the answer to your question is "Yes": we did have "a
string identifier that is going to give the exact same result no
matter whether called by a user on a workstation in Singapore or a
user at CERN." Sorry for rambling. This stuff was all a very long time
ago and there probably aren't many folk who care any more...
bob wyman
[1] Thanks to Leo Laverdure who implemented RMS in VMS V1.0.
Note: Working with me at Digital in Valbonne on the Memex Prototype
were Marios Cleovoulou and Per Hamnqvist. Later, Leo Laverdure, Ward
Clark, Pascalle Dardaillier, Jim Cowan, Doug Rayner, Tim Scanlan, Mike
Horner and others helped as well...