At 03:33 PM 6/11/2002 -0400, John Cowan wrote:
>I don't think "well-defined lexical space" can be taken to be synonymous
>with "common lexical representation", though I don't quite understand
>what Simon means by "common". A legitimate, though perhaps not very
>useful, type would be "footype": {"foo", 32, 64, 79.9, "hike!"}, with
>the obvious lexical representation {"foo", "32", "64", "79.9", "hike!"}.
>But I don't know if Simon thinks this list of lexical representations
>is "common".
I don't believe I'd find the parts of footype to have any commonality,
much less "common"-ness. I guess an enumeration, or a regex with lots of
ORs in it, could define some commonality for the content, but I prefer
commonality to be a pattern that emerges from the information.
More generally, I suspect that general processing of types on a lexical
foundation has to rely on some kind of commonality among the lexical
content. Regular expressions are pretty good at describing a wide
variety of lexical commonalities. If you're willing to spend more time
on regular expressions, you can of course extend that understanding of
commonality, especially if you're willing to apply multiple successive
regexes rather than insisting on a single match or split. Some kind of
formal description helps as well.
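Here's a rough Python sketch of what I mean by successive regexes: a
first pass finds the broad lexical pattern, later passes refine the
pieces it produced. The sample value and patterns are illustrations
only, not anything out of RegFrag:

import re

# Pass 1: separate a date part from a time part.
value = "2002-06-11T15:33:00"
pass1 = re.match(r"^(\d{4}-\d{2}-\d{2})T(\d{2}:\d{2}:\d{2})$", value)
if pass1:
    date_part, time_part = pass1.groups()

    # Pass 2: refine each piece with its own, simpler regex.
    year, month, day = re.match(
        r"^(\d{4})-(\d{2})-(\d{2})$", date_part).groups()
    hour, minute, second = re.match(
        r"^(\d{2}):(\d{2}):(\d{2})$", time_part).groups()

    print(int(year), int(month), int(day),
          int(hour), int(minute), int(second))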
I'll be trying to explain this in more detail as I move deeper into
Regular Fragmentations. RegFrag is pretty much an exercise in explicit
casting through processing, which is my preferred approach.
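To give a feel for what I mean by explicit casting through processing,
here's a minimal Python sketch (not RegFrag itself, whose details I'll
save for later; the "price" pattern, field names, and casts are made up
purely for illustration): a regex fragments the lexical form, and each
fragment is then cast to a typed value by an explicit rule.

import re

# Illustrative only: fragment a lexical "price" and cast each piece.
PRICE = re.compile(r"^(?P<currency>[A-Z]{3}) (?P<units>\d+)\.(?P<cents>\d{2})$")

def cast_price(lexical):
    match = PRICE.match(lexical)
    if match is None:
        raise ValueError("not in the lexical space of price: %r" % lexical)
    # Each fragment is cast explicitly; nothing is inferred implicitly.
    return {
        "currency": match.group("currency"),
        "cents": int(match.group("units")) * 100 + int(match.group("cents")),
    }

print(cast_price("USD 19.95"))   # {'currency': 'USD', 'cents': 1995}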
Simon St.Laurent
"Every day in every way I'm getting better and better." - Emile Coue