- To: "XML Dev" <xml-dev@lists.xml.org>
- Subject: overheads of pipelines
- From: "bryan rasmussen" <rasmussen.bryan@gmail.com>
- Date: Mon, 8 May 2006 07:45:26 +0200
The NVDL discussion has got me thinking about the overheads of pipeline usage.
Let us say I am building an application and I want it to be easily
extensible in the future, so I want to use a pipeline, but currently
the application only has two steps. I'm wondering whether a pipeline
in that context wouldn't be resource intensive for no good reason.
But then, neglecting to build my application as a pipeline in some
way smacks of premature optimization, when in reality using a
pipeline where it is unnecessary would actually be premature
extensibility.
So, questions:
1. What is the initial resource usage of a generic pipeline, and how
do pipelines generally scale? Is there a theory, an algorithm, etc.,
or is it all implementation specific? (See the sketch after this
list.)
2. Should one always build the most extensible application even if
the current needs do not require extensibility? My personal opinion
is yeah.
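On question 1, one way to get a number for a given implementation is
a micro-benchmark of the dispatch overhead alone, using no-op steps
(again Python and purely illustrative; real figures will depend
entirely on the pipeline framework and on what flows between steps):

import timeit

def step(doc):
    return doc  # no-op step: isolates the cost of dispatch itself

def direct(doc):
    # the two steps hard-wired as nested calls
    return step(step(doc))

def pipelined(doc):
    # the same two steps dispatched through a generic loop
    for s in (step, step):
        doc = s(doc)
    return doc

print("direct:   ", timeit.timeit(lambda: direct("x"), number=100000))
print("pipelined:", timeit.timeit(lambda: pipelined("x"), number=100000))

The gap between the two numbers is the per-step constant; everything
beyond that scales with the size of whatever intermediate data the
steps exchange, which is the implementation-specific part.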
Cheers,
Bryan Rasmussen