Michael Champion wrote:
> Why "give things of interest (the equivalent of your objects in your
> domain model or your table rows in your physical data model) visible
> identity?" That seems to violate the principle of information
> hiding that has been around since before OO. It seems to be simply a
> Bad Idea to expose internal details in a world where slimeballs have
> proliferated who would love to subvert your website for fun and/or
> profit. Why not hide them behind a "controller URI" that accepts
> requests and gives them a going over with the polygraph and
> protocoscope, then routes them to wherever the system thinks they
> should be routed at this moment?
Oh gosh, I wasn't talking about actually mapping table rows or objects.
But many systems have a domain model. I see no reason not to have
multiple URLs to name the things of interest to the domain. I think
you're arguing for an obscurity by design that isn't always valuable.
In implementation terms, by all means drive everything through a single
ASP or Servlet - for a lot of web frameworks out there, you don't have
a choice. That does not mean you have to have a single exposed URL to
the world. Who's really exposing implementation details in this case?
If I turned your argument around and said there should be one
self-joining uber-table (property values) or one uber-object (HashMap?)
in a system, it would surely be something to question. What's special
about mapping a domain onto URL space that we have to have one uber-URL?
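To make the contrast concrete, here's a minimal sketch of the two URL styles being argued about (all names here are hypothetical, not anyone's actual site): one URL per thing of interest, versus a single exposed controller URL that routes on parameters.

```python
# Hedged sketch of the two styles under discussion; names are invented.

def resource_url(book_id: str) -> str:
    # One URL per thing of interest: the book is directly addressable.
    return f"/books/{book_id}"

def controller_url(action: str, book_id: str) -> str:
    # One exposed "uber-URL": the thing of interest hides behind parameters.
    return f"/controller?action={action}&entity=book&id={book_id}"
```

Note that both styles can be served by the same servlet internally; the argument is only about what URL space gets exposed to the world.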
For example, every book on Amazon (I think) has a URL (if not a few).
Tell me whether you prefer that approach over the RTE site's handling of
TV channels, which is a classic controller design. I think you'll find
Amazon has the edge in design terms (yes, you can get at the RTE domain
model for TV channels, but the website is not designed that way, perhaps
because somebody thought it better to keep the user on the same URL...). There
are other systems where the same URL points you to different *entities*
depending on whatever the server session state happens to be (JIRA comes
to mind today). It means I can't bookmark things easily - and not being
able to bookmark is, to my mind, a leading indicator of two things:
being difficult to innovate with and difficult to manage.
If the purpose of the system is to front a messaging endpoint or
something, then sure, designing in an artificial serialization might make
sense; that would be akin to decorating a hashmap behind a queue. There
is a genuine (I think) impedance mismatch between message queues and URL
space, which I've waffled about before.
I think one reason RSS (and search) is huge is that it gives you a cheap
way to create iterators over a space (the Web) that has random access but
no natural iterators. Having bridged a few queuing systems with the Web,
I think this area is ripe for exploration; it's definitely not a done deal.
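The "RSS as a cheap iterator" point can be illustrated with a short sketch (the feed content is invented for the example): the feed imposes an ordered, walkable sequence on resources that the Web itself only offers via random access by URL.

```python
# Sketch: an RSS feed turns a set of randomly-addressable URLs into an
# iterator. The feed XML below is invented for illustration only.
import xml.etree.ElementTree as ET

FEED = """<rss version="2.0"><channel>
  <title>Example feed</title>
  <item><title>First</title><link>http://example.org/a</link></item>
  <item><title>Second</title><link>http://example.org/b</link></item>
</channel></rss>"""

def iter_links(feed_xml: str):
    # Yield item links in document order - the "iterator" over URL space.
    root = ET.fromstring(feed_xml)
    for item in root.iter("item"):
        yield item.findtext("link")
```

A consumer just walks the sequence in order, without needing to know any of the item URLs in advance.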
I do take Rick's point about the NAT analogy (NAT of course has pros and
cons), and I'm not arguing from an absolute position. I just find
controller designs need to be questioned for suitability. Controller
implementations are another thing.