"The problem he found with centralizing processing--with stored
procedures and triggers and so forth--is that it doesn't scale."
What is that supposed to mean? Big databases are kept in server
farms with some kind of load balancing between them. You have a trigger
attached to a table. Each time you perform an operation the trigger will
fire. If the trigger is processor-intensive you will need additional
processors to handle the load or additional servers to share the load.
Either way, there are ways to scale such configurations.
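To make the cost concrete, here is a minimal sketch of a trigger firing on every write, using Python's built-in sqlite3 module. The table and trigger names (`orders`, `audit_log`, `log_order`) are made up for illustration; the point is only that the trigger's work is added to the cost of every INSERT, which is the scaling concern at issue.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE audit_log (order_id INTEGER, logged_at TEXT);

    -- Fires once per inserted row; whatever work it does is paid
    -- on every INSERT into orders.
    CREATE TRIGGER log_order AFTER INSERT ON orders
    BEGIN
        INSERT INTO audit_log VALUES (NEW.id, datetime('now'));
    END;
""")

conn.execute("INSERT INTO orders (amount) VALUES (9.99)")
count = conn.execute("SELECT COUNT(*) FROM audit_log").fetchone()[0]
print(count)  # 1 -- the trigger fired once for the one insert
```

If the trigger body were processor-intensive, that per-row cost is exactly what you would spread across more processors or more servers.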
"His talk also implied that it restricts users from making innovative
connections."
?!!?!?! How is that? Nobody forces you to use triggers, stored
procedures, or foreign keys. You use them if they are appropriate for
your application. What is an "innovative connection" anyway?
"XQuery and Web Services were too big and came too late, however. Nobody
actually wants to use them, even if they know how."
It seems to me that "web services" is becoming a cool buzzword in
the IT industry. Maybe I am wrong?
"It would be a very simple and database-independent protocol that would
make all data in the world open."
In what way open? Perhaps someone should explain in what sense
today's data is closed.
"The entire relational approach, from the canon of Third Normal Form
(three is a holy number) to the enormously complex collection of
analytic functions, subqueries, and other ways to impose structure in
SQL, is an attempt to be as precise as possible about the data chosen..."
I am not a DB expert, but in my understanding a good DB design is
made in order to avoid duplicate information, increase performance,
maintain integrity, and so on. I have never heard of somebody designing
a DB for "precision".
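A small sketch of why avoiding duplicate information matters, again with sqlite3; the `customers`/`orders` schema is invented for the example. In a denormalized design the customer's city would be repeated on every order row, so an update could leave the copies inconsistent; normalized, the city lives in exactly one place.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, city TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
""")
conn.execute("INSERT INTO customers VALUES (1, 'Oslo')")
for _ in range(3):
    conn.execute("INSERT INTO orders (customer_id) VALUES (1)")

# One UPDATE fixes the city seen by all three orders at once,
# because the fact is stored only once.
conn.execute("UPDATE customers SET city = 'Bergen' WHERE id = 1")
cities = conn.execute("""
    SELECT DISTINCT c.city
    FROM orders o JOIN customers c ON c.id = o.customer_id
""").fetchall()
print(cities)  # [('Bergen',)]
```

That is design for consistency and maintainability, not for some abstract "precision".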
"Bosworth isn't interested in that. If the user gets a few hundred
results and has to scroll through them a little bit, that's fine. We
don't need no stinkin' metadata or knowledge management."
Most applications cannot tolerate such behavior, and I am not
sure that users would accept it in the first place.
Which is more expensive: processor time or user time?
Anyway, this discussion is purely theoretical, because the
article gives no indication of how this would be possible.
"But his brief critique of the trend toward putting more and more
features into the database engine--a critique that he whisked through on
the way to grander visions--left open a question about the basic
philosophy of SQL."
This is not a problem of SQL or foreign keys or whatever. This is
the problem of the database vendor trying to sell the next DB version.
"This centralized control is a relic of the 1970s, when corporate staff
would sit at command-line processors and type in SQL to do what they..."
This is complete bullshit. Current applications make heavy use of
foreign keys. They are a very efficient way to ensure that the
information in the database stays correct regardless of what is
happening in the application manipulating the database (and everybody
knows that a lot of things can happen in the application manipulating
the database - including *bad* programming; not all software houses
have programmers to match those who wrote Postgres or Oracle).
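A minimal sketch of this point with sqlite3 (the `customers`/`orders` schema is hypothetical): the foreign key constraint catches a buggy insert at the database, without any help from the application code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id)
    );
""")
conn.execute("INSERT INTO customers (id) VALUES (1)")
conn.execute("INSERT INTO orders (customer_id) VALUES (1)")  # valid

rejected = False
try:
    # A buggy application inserting an order for a customer that
    # does not exist -- the database itself refuses it.
    conn.execute("INSERT INTO orders (customer_id) VALUES (42)")
except sqlite3.IntegrityError:
    rejected = True
print("rejected by the database:", rejected)  # True
```

The application never had to check anything; the constraint did.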
"Nowadays, when an application and even a Web interface stand between
the user and the database engine, the never-trust-the-user philosophy is..."
Who can blindly trust the data that they receive from a web
interface? This is a sure recipe for disaster.
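A classic illustration of why, using sqlite3; the table and the hostile input string are invented for the example. Interpolating web input straight into SQL lets it rewrite the query, while a bound parameter is always treated as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# Hypothetical value as it might arrive from a web form:
user_input = "alice' OR '1'='1"

# Unsafe: string interpolation lets the input alter the query,
# so the WHERE clause matches every row.
unsafe = conn.execute(
    "SELECT COUNT(*) FROM users WHERE name = '%s'" % user_input
).fetchone()[0]

# Safe: the bound parameter is compared as a literal string,
# which matches nothing.
safe = conn.execute(
    "SELECT COUNT(*) FROM users WHERE name = ?", (user_input,)
).fetchone()[0]

print(unsafe, safe)  # 1 0
```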
To be honest, I do not understand what this article is proposing.
What is the revolutionary thing? If somebody has time, please drop me
a line or two about this.