NTFS also had (still has?) the concept of multiple forks per file, and took the concept much further than Apple's two fixed forks per file: in NTFS it was possible to create an arbitrary number of forks per file, so that applications could layer
whatever metadata made sense for each file's particular purpose.
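For the curious: these forks survive today as NTFS "alternate data streams", reachable through the ordinary file API by appending a colon and a stream name. A minimal sketch in Python - Windows/NTFS only, and the stream name "review-notes" is just an illustration:

    # Windows/NTFS only: "name:stream" opens an alternate data stream
    # through the ordinary file API - no special calls needed.
    with open("report.txt", "w") as f:
        f.write("the ordinary data stream")

    with open("report.txt:review-notes", "w") as f:  # a named fork
        f.write("per-application metadata, layered alongside the data")

    with open("report.txt:review-notes") as f:       # read it back
        print(f.read())

    # Caveat: copying to a non-NTFS file system or into most archive
    # formats silently drops the extra streams.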
These forks added considerable complexity and - likely as a result - were almost never used by applications, which continued to treat files as simple streams of bytes. It very likely also mattered that applications could not assume their files would
always live on an NTFS file system, which meant they had to work correctly both with and without the forks.
Thus the benefits came at the cost of considerable engineering and operational complexity.
So this concept has been tried at scale, in the mainstream (the primary file system of the dominant operating system worldwide), and application vendors - the market - decided that the benefits were not worth the additional
complexity.
Cheers,
Damian
I'm glad someone mentioned the old days of Mac OS, when files had a "resource fork" and a "data fork", the former containing anything you might want to know about the bits in the latter. In practice, most people
said it was painful and awkward, but I never went to the mat with it myself so can't relay the details.
Having said that, all the data you get over HTTP comes with a Content-Type: a reasonably well-structured but very brief assertion of how the byte payload is meant to be interpreted. This, in practice, seems
to work pretty well. Suppose that back in the day, the resource fork had just held the equivalent of a Content-Type?
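For comparison, here is the metadata HTTP actually delivers - a minimal sketch using only Python's standard library (the URL is a placeholder):

    # Every HTTP response carries a Content-Type header saying how
    # the byte payload should be interpreted.
    from urllib.request import urlopen

    with urlopen("https://example.com/") as resp:
        print(resp.headers.get("Content-Type"))  # e.g. "text/html; charset=UTF-8"
        payload = resp.read()                    # the bytes themselves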
It is a shame Apple's Bento file format never took off. It combined the resource/data fork idea with an appendable archive.
I don't see any new contender: ZIP is still ascendant, the Open Packaging Conventions make even the simplest case too complex, and the package-manager world is utterly siloed per programming language.
So until then, pragmatics have to rule over elegant Separation of Concerns.
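In the meantime the pragmatic pattern, as used by formats like OOXML and ODF, is a ZIP with the payload and its metadata as sibling entries. A sketch - the entry names here are my own invention:

    import json
    import zipfile

    # Pack the data and its metadata side by side in one archive.
    with zipfile.ZipFile("bundle.zip", "w") as z:
        z.writestr("content.bin", b"the actual payload bytes")
        z.writestr("metadata.json",
                   json.dumps({"content-type": "application/octet-stream"}))

    # Any unzip tool can read both, and nothing is lost when the
    # bundle crosses file systems - unlike forks/streams.
    with zipfile.ZipFile("bundle.zip") as z:
        meta = json.loads(z.read("metadata.json"))
        print(meta["content-type"])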
... On the other hand, where some existing, successful technology clearly violates theoretical elegance and SOC, perhaps the theory is wrong-headed or incomplete: it is a truth universally acknowledged that all systems that
do work should not work. So perhaps the idea that the best place for a file's metadata is IN the file (e.g. as a magic number, or an XML header), not WITH the file, is not hacky but the most unified and best approach? (Just as MIME
headers travel with the data, not as a separate stream.)
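A magic-number check is the simplest instance of metadata living IN the file - a sketch, using the well-known PNG, ZIP, and PDF signatures:

    # The first bytes of the file identify its format, so this
    # "metadata" survives any copy, transport, or file system.
    SIGNATURES = {
        b"\x89PNG\r\n\x1a\n": "image/png",
        b"PK\x03\x04": "application/zip",
        b"%PDF": "application/pdf",
    }

    def sniff(path):
        with open(path, "rb") as f:
            head = f.read(8)
        for sig, mime in SIGNATURES.items():
            if head.startswith(sig):
                return mime
        return "application/octet-stream"  # unknown: treat as raw bytes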
Regards
Rick