Thanks for the vote of confidence, Joe ;->
Jason, you are really asking what the optimum way is to store some data for
a particular task. I don't know what needs to be optimized until I know
more about what you are doing with this data. Here are some questions that
might be helpful for a start:
- What kind of data is this? Are there logical connections that would be
lost if the data were divided up into smaller "chunks"?
- Are you creating many different kinds of XML documents from the same
data, or do you retrieve it in the same form it was stored?
- How many users are anticipated? What kind of turnaround time is needed?
For what kinds of requests?
- How is the data updated?
Jonathan
At 08:53 AM 9/11/2003, Chiusano Joseph wrote:
>Jason,
>
>There are some good solutions for this...someone will be along in the
>near future to talk about SQL/XML and XQuery.
>
>Over to Jonathan...
>
>Kind Regards,
>Joe Chiusano
>Booz | Allen | Hamilton
>
>Jason Kohls wrote:
> >
> > Greetings,
> >
> > We're looking at a content management system that stores all of the
> content/metadata in a single, 1 MB XML file on disk or as separate
> records (for each parent element) in a two-field table in SQL,
> out-of-the-box. Based on our rough content estimates, however, we can see
> this file growing to over 100 MB easily. The CMS provider says that
> anything over 30 MB should use the SQL backend.
> >
> > The one thing that we do not like is the schema/data model (or lack thereof)
> for the SQL storage option. Coming from the relational camp, we find this
> odd, and even on disk, hierarchically, it seems to make more sense
> to break up this single XML file into smaller files (per parent element)
> in a directory structure with an index.
> >
> > ...But then again, you guys are the experts :)
> >
> > Can anyone see any problems with this storage architecture from a
> performance/stability/scalability standpoint?
> >
> > Thanks in advance!
> >
> > Jason Kohls
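The "smaller files per parent element" layout Jason floats could be sketched roughly as below. This is a minimal illustration, not the CMS's actual storage code; the `id` attribute, the file naming, and the flat-text index format are all assumptions made for the example:

```python
# Sketch: stream a large XML file and write each direct child of the root
# element to its own file, recording a flat index alongside them.
# The id attribute, file naming, and index format are illustrative assumptions.
import os
import xml.etree.ElementTree as ET

def split_per_parent(src_path, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    index = []
    # iterparse streams the document, so a 100 MB file never has to be
    # held in memory as one complete tree
    context = ET.iterparse(src_path, events=("start", "end"))
    _, root = next(context)   # first event is the root element's "start"
    depth = 1                 # the root is now open
    for event, elem in context:
        if event == "start":
            depth += 1
        else:
            depth -= 1
            if depth == 1:    # a direct child of the root just closed
                name = elem.get("id") or "record-%06d" % len(index)
                ET.ElementTree(elem).write(
                    os.path.join(out_dir, name + ".xml"), encoding="utf-8")
                index.append((elem.tag, name + ".xml"))
                root.remove(elem)  # discard the subtree to bound memory use
    with open(os.path.join(out_dir, "index.txt"), "w") as f:
        for tag, fname in index:
            f.write("%s\t%s\n" % (tag, fname))
    return index
```

Whether a layout like this actually beats the two-field SQL backend comes back to Jonathan's questions above: how the data is updated, how many users hit it, and what turnaround time the requests need.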