- From: Imran Rashid <imranr@wolfram.com>
- To: xml-dev@lists.xml.org
- Date: Tue, 19 Dec 2000 14:58:27 -0600
I'm a complete newbie with Schemas, so this might be a dumb question.
I have a datatype that can be a decimal of arbitrary precision, or it can be
+/- infinity.
Since it's arbitrary precision, my first instinct was to go with the
decimal datatype. However, it seems to me that INF is only defined for
double and float, not decimal.
I know I could define my own datatype that is either a decimal or the
string "INF". However, I was thinking that with the float and double types,
presumably another application would see "INF" and interpret it as infinity,
whereas if "INF" is only matched as a string, the application would simply see
the literal string "INF".
If arbitrary precision is not extraordinarily important, am I better
off just using the double type?
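That is, just declare the element with the built-in type, e.g. (the element
name here is hypothetical):

    <xs:element name="amount" type="xs:double"/>

so that an instance could legally contain either <amount>3.14159</amount>
or <amount>INF</amount>.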
thanks,
Imran Rashid