----- Original Message -----
From: "Bullard, Claude L (Len)" <len.bullard@intergraph.com>
To: "'Didier PH Martin'" <martind@netfolder.com>; "'XML Developers List'"
<xml-dev@lists.xml.org>
Sent: Wednesday, June 08, 2005 12:35 AM
Subject: RE: [xml-dev] Why XML for Messaging?
> We do. See X3D (flux or xj3d). A 2D component
> can also fit inside a 3D layout and/or be a layer
> in the 3D objects.
>
> Pete sent the XMSF link at the Moves Institute.
> This and XSBC are companions to X3D.
>
> Navigating 3D with a joystick is quite easy.
> Even then, you overrate the difficulty. I
> watch kids do this with their keyboards all
> the time without much effort. There are still
> areas to be worked out better in selecting, say, areas
> of objects, but it is a common practice issue.
> As to the concepts of gestures, this is a very
> fertile field. Note that just as with searching
> and selecting in other domains, human gestures are
> subject to ambiguity and that is one of the areas
> that efforts such as HumanML ventured into but
> didn't get much traction. Distractions.... but
> the concepts are all there and the technology to
> implement them given XML is cheap and abundant.
>
just jumping in here:
yes, 3d on pc's is not that difficult, but typically one does not
conveniently get full 6DOF controls. most 3d games make use of
4DOF controls (the mouse controlling view angles and the keyboard
controlling movement).
in many games, and in much of my 3d stuff, I have 5DOF controls (2 mouse, 3
keyboard). full 6DOF can also present an interface problem mathematically,
eg, many games and other things often represent orientation via Euler
angles, which poses the problem that, eg, roll is not represented uniformly
in angle space (things like gimbal lock can be a problem depending on
orientation).
actually, many of my projects do use Euler angles for the camera, and in a
way that is already partly gimbal locked, with the result that roll and yaw
are equivalent.
a fix here is, obviously enough, using either matrices or quaternions as the
basis for orientation (though I personally prefer quaternions). amusingly
enough, quaternions have taken over most representations of orientation
within my projects, except the camera (which is typically where I gather
many end up using them first).
using a representation like quaternions, full 6DOF controls could be pulled
off, though imo the result would still be less pleasant to use than it could be.
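eg, something along these lines (a rough C sketch, untested, all names made
up) for composing small per-frame rotations onto a quaternion camera, which
is what sidesteps the gimbal lock problem:

#include <math.h>

typedef struct { float w, x, y, z; } quat;

/* quaternion for a rotation of 'angle' radians about a unit axis */
static quat quat_axis_angle(float ax, float ay, float az, float angle)
{
    float s = sinf(angle * 0.5f);
    quat q = { cosf(angle * 0.5f), ax * s, ay * s, az * s };
    return q;
}

/* hamilton product: the rotation b followed by the rotation a */
static quat quat_mul(quat a, quat b)
{
    quat r;
    r.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
    r.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
    r.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
    r.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
    return r;
}

/* rotate the camera in its own local frame; because each small increment is
   composed onto the current orientation, roll stays independent of yaw no
   matter where the camera points (periodic renormalization omitted) */
void camera_rotate(quat *cam, float dpitch, float dyaw, float droll)
{
    *cam = quat_mul(*cam, quat_axis_angle(1, 0, 0, dpitch));
    *cam = quat_mul(*cam, quat_axis_angle(0, 1, 0, dyaw));
    *cam = quat_mul(*cam, quat_axis_angle(0, 0, 1, droll));
}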
note: personally, given my handedness, I tend to prefer the arrow keys,
cursor-control keys, and numeric keypad for controls. the arrow and
cursor-control keys are used primarily for movement:
arrows: forwards/backwards, left/right movement;
del/end: when possible, up/down axis controls, otherwise, crouch and jump;
page up/down: auxiliary up/down controls, sometimes controlling roll,
sometimes as a control for speed;
insert/home: next choice for roll.
numeric keypad: typically more misc controls, along with the right side of
the keyboard (enter, right control, shift, ...).
others often prefer variations of the WASD or ESDF scheme, but for me this
makes little sense. luckily, most games are configurable (I get kind of
annoyed when faced with a fixed scheme, especially when it is WASD).
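by configurable I mean the usual sort of binding table, roughly like this
(a C sketch, untested; the key and command names are made up to roughly
match my layout above):

typedef struct { const char *key; const char *command; } keybind;

static keybind binds[] = {
    { "UPARROW",    "+forward"   },
    { "DOWNARROW",  "+back"      },
    { "LEFTARROW",  "+moveleft"  },
    { "RIGHTARROW", "+moveright" },
    { "DEL",        "+movedown"  },  /* or crouch where there is no up/down axis */
    { "END",        "+moveup"    },  /* or jump */
    { "PGUP",       "+rollleft"  },  /* or an auxiliary up/down or speed control */
    { "PGDN",       "+rollright" },
};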
as for other things, I am not familiar with XMSF. the site seemed more
like an organizational thing than any kind of spec.
[note: the rest is my personal opinion, and I may well be wrong on much of
this, having not invested that much time into looking into X3D]
I have looked at X3D before, but was not impressed: it seems to have too
much stuff embedded in itself, and tries to be too many different pieces at
once (rather than a number of different formats and files, each representing
a different piece).
maybe I "just don't get it", but to me X3D seems like a rather backwards
approach to the whole thing: too complicated for a single format, and
covering too broad a domain.
the whole "component" system does not make much sense to me either, much of
it seems like stuff that would normally be left to the scripts, rather than
part of the model format.
(I am having trouble finding any real definition of a physics engine in
relation to X3D...).
ok, it can be noted that my experience is primarily with
first-person-shooters, more specifically those in the quake family (eg:
quake 1 through doom 3, half-life 2, and similar engines). I have also
messed around, to a much lesser extent, with games such as serious sam and
unreal tournament.
however, I have not seen that much to suggest that the fundamental concepts
don't map more generally to other game types, and possibly to other non-game
uses.
ok, my preference would be more for a game-like design, eg, the model format
just represents models. control logic or similar should be nowhere to be
found here (though, now that I think of it, attaching logic directly to a
model may have other uses, such as dynamic movement and constraint handling;
in the general case it is likely to be a problem).
note: many games decompose it further, eg, the model itself is composed of
multiple files: mesh+skeleton, individual animations, control
information, ...
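roughly like this (a C sketch, untested; all the type and file names are
invented):

typedef struct mesh     mesh_t;      /* "foo.mesh": vertices, triangles, skin weights */
typedef struct skeleton skeleton_t;  /* "foo.skel": bone hierarchy, bind pose */
typedef struct anim     anim_t;      /* "run.anim", "idle.anim", ...: one file per animation */

typedef struct {
    mesh_t     *mesh;
    skeleton_t *skeleton;
    anim_t    **anims;       /* loaded separately, indexed by name elsewhere */
    int         num_anims;
    /* control information: frame events, attachment points, default skins, ... */
} model_t;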
one then likely needs an "entity" system (I forget the details, but afaik
X3D takes a different approach, namely embedding logic in the models and
having a bunch of different component types, which imo seems like a bad
approach to things).
usually the entity system is separate from the geometry: it says what things
are where, gives default properties for each entity, and provides an
interface to the scripts. this (well, along with world geometry) is
typically the domain of "maps", and these tend to have their own file
formats.
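eg, the per-entity part is usually little more than a bag of key/value pairs
(a C sketch, untested; the names are made up, but the shape is close to what
quake-family maps store per entity):

typedef struct { char key[64]; char value[256]; } epair_t;

typedef struct entity entity_t;
struct entity {
    epair_t   pairs[32];      /* eg "classname" "light", "origin" "128 64 96", ... */
    int       num_pairs;
    void     *script_state;   /* filled in later, once a script claims the entity */
    entity_t *next;
};

/* both the engine and the scripts read default properties through something
   like this, rather than digging around in the geometry */
const char *entity_value(const entity_t *e, const char *key);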
often, the glue making everything move is the scripts. the scripts typically
have minimal interaction with the models (apart from controlling
animations); mostly, the scripts work by interacting with the entities.
other occurrences, such as the animation changing or the physics causing
something to happen, come about as a result of alterations to entity state.
as an example, the engine may define a number of methods to allow the
scripts to interact with other subsystems, and other subsystems may call
methods on the entity which may cause behaviors.
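eg, the interface runs in both directions, something like this (a rough C
sketch, untested; all the names are invented):

typedef struct entity entity_t;   /* the key/value record sketched earlier */

/* engine services a script can call into */
typedef struct {
    void      (*play_sound)(entity_t *e, const char *sample);
    void      (*set_animation)(entity_t *e, const char *anim);
    entity_t *(*find_in_radius)(float x, float y, float z, float radius);
} engine_api_t;

/* per-entity hooks that the engine and other subsystems call back into;
   physics calling 'touch', for example, is how "the physics causing
   something to happen" shows up as a change to entity state */
typedef struct {
    void (*think)(entity_t *self, float dt);           /* periodic update */
    void (*touch)(entity_t *self, entity_t *other);    /* physics contact */
    void (*use)(entity_t *self, entity_t *activator);  /* triggered/activated */
} entity_hooks_t;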
now, how I would do it using "web technologies" would likely be different.
namely, I would define either a new model format, or at least significantly
strip down X3D, basically to the level of representing structure and
geometry only.
I would create a new format for representing entities. quite contrary to
most games, it may make sense to have entities refer directly to their
associated scripts. another (more generic) possibility is loading a core
script, which defines init functions for a lot of basic entity types.
this entity system would likely take up/absorb most of the functionality
associated with the "components" as well; many of the components would
simply become entity types. a simple example here would be lights, which are
often represented as entities in games (though often special, eg, in that
the map build tools may use them when building lightmaps).
many types may be special in that they are handled by the engine rather
than by supplied scripts, but this may be outside the scope of the entity
system.
the init functions for each entity type are called, each likely being passed
an object representing the entity, which would contain any fields set in the
definition. the init functions then go about setting up any basic properties
and methods.
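something like this (again a rough, untested C sketch, with made-up names):

typedef struct entity entity_t;    /* key/value record as sketched earlier */

typedef void (*entity_init_fn)(entity_t *self);
typedef struct { const char *classname; entity_init_fn init; } entity_class_t;

static void light_init(entity_t *self)
{
    /* read brightness/color out of the definition (falling back to
       defaults), install a 'use' hook so it can be toggled, register
       with the renderer or the lightmap builder */
    (void)self;
}

static void door_init(entity_t *self)
{
    /* set up open/closed positions, speed, a 'touch' hook, etc. */
    (void)self;
}

/* the table the loader walks after the entities are parsed */
static entity_class_t entity_classes[] = {
    { "light",     light_init },
    { "func_door", door_init  },
};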
ok, the resultant entity is then handled as needed by all the other
subsystems (physics, rendering, animation sequencing, ...).
or whatever...